Meta releases an artificial intelligence model that can identify objects within images
(Reuters) – Facebook owner Meta on Wednesday released an artificial intelligence model capable of picking out individual objects from within an image, along with a dataset of image annotations that it said was the largest ever of its kind.
The company’s research division said in a blog post that its Segment Anything Model, or SAM, can identify objects in images and videos even when it has not encountered those objects in its training.
With SAM, objects can be selected by clicking on them or typing text prompts. In one demonstration, typing the word “cat” caused the tool to draw boxes around each cat in the photo.
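For developers, that click-to-select interaction corresponds to a point prompt in Meta's released segment-anything Python package (the public code release supports point and box prompts; text prompting was shown as a demonstration). The snippet below is a minimal sketch, assuming the package is installed and a SAM checkpoint has been downloaded; the checkpoint path, image file, and click coordinates are illustrative.

```python
# Minimal sketch of point-prompted segmentation with the segment-anything package.
# Assumes: `pip install segment-anything`, a downloaded SAM checkpoint, and a local image;
# the checkpoint path, image path, and click coordinates below are placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H SAM model from a local checkpoint file (illustrative path).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Read an image and hand it to the predictor (it expects an RGB array).
image = cv2.cvtColor(cv2.imread("cats.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground "click" at pixel (500, 375); label 1 marks a positive point.
point_coords = np.array([[500, 375]])
point_labels = np.array([1])

# Request several candidate masks and keep the highest-scoring one.
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean mask the size of the image
```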
Big tech companies have been heralding AI breakthroughs since Microsoft-backed OpenAI’s ChatGPT chatbot became a sensation in the fall, triggering a wave of investment and competition to dominate the space.
Meta has teased several features that use the type of generative AI popularized by ChatGPT, which creates new content rather than simply identifying or categorizing existing data, though it has yet to release a product. Examples include a tool that generates surreal videos from text prompts and another that creates children’s book illustrations from prose.
CEO Mark Zuckerberg has said that incorporating such generative AI “creative aids” into Meta’s apps is a priority this year.
Meta already uses SAM-like technology internally for tasks such as tagging photos, moderating banned content, and determining which posts to recommend to Facebook and Instagram users.
The company said the launch of SAM would expand access to this type of technology.
The SAM model and dataset are available for download under a non-commercial license. Users who upload their own images to the accompanying prototype must also agree to use it for research purposes only.