For Jae Lee, a data scientist by training, it never made sense that video, which has become a huge part of our lives with the rise of platforms like TikTok, Vimeo and YouTube, was so difficult to search, owing to the technical barriers of understanding its context. Searching video titles, descriptions and tags was always straightforward, requiring no more than a basic algorithm. But searching within videos for specific moments and scenes was far beyond the capabilities of the technology, especially if those moments and scenes weren't obviously labeled.
To solve this problem, Lee, alongside friends from the tech industry, built a cloud service for searching and understanding video. It became Twelve Labs, which has since raised $17 million in venture capital, $12 million of which came from a seed extension round that closed today. Radical Ventures led the extension with participation from Index Ventures, WndrCo, Spring Ventures, Weights & Biases CEO Lukas Biewald and others, Lee told TechCrunch in an email.
“Twelve Labs’ vision is to help developers create programs that can see, hear and understand the world like we do by giving them the most powerful video understanding infrastructure,” Lee said.

A demo of the Twelve Labs platform's capabilities. Image Credits: Twelve Labs
Twelve Labs, which is currently in closed beta, uses AI to extract "rich information" from videos, such as motion and actions, objects and people, sound, on-screen text and speech, and to identify the relationships between these elements. The platform converts them into mathematical representations called "vectors" and forms "temporal connections" between frames, enabling applications such as video scene search.
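To make the idea concrete, here is a minimal sketch of vector-based scene search, assuming each scene has already been mapped into the same embedding space as a text query. The index layout and vectors here are invented for illustration and are not Twelve Labs' actual implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_scenes(query_vec, scene_index, top_k=3):
    """Rank scenes by how close their embeddings sit to the query embedding."""
    scored = [(scene_id, cosine_similarity(query_vec, vec))
              for scene_id, vec in scene_index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical index: scene IDs (video + timestamp) mapped to vectors that a
# multimodal model would produce from frames, audio and on-screen text.
rng = np.random.default_rng(0)
scene_index = {
    "vid42@00:01:10": rng.random(512),
    "vid42@00:03:55": rng.random(512),
    "vid87@00:00:20": rng.random(512),
}
query_vec = rng.random(512)  # stand-in for an embedded text query
print(search_scenes(query_vec, scene_index))
```

At production scale, an index like this would hold millions of vectors and sit behind an approximate nearest-neighbor store rather than the linear scan shown here.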
“As part of realizing the company’s vision to help developers build intelligent video applications, the Twelve Labs team is building ‘foundation models’ for multimodal video understanding,” Lee said. “Developers will be able to access these models through a suite of APIs, performing not only semantic search, but also other tasks such as long-form video ‘chapterization,’ summary generation and video Q&A.”
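A developer-facing suite along those lines might look something like the following; the base URL, endpoints and request fields here are hypothetical placeholders for illustration, not Twelve Labs' documented API:

```python
import requests

BASE = "https://api.example-video-platform.com/v1"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credentials

# Semantic search: find moments matching a natural-language query.
hits = requests.post(f"{BASE}/search", headers=HEADERS,
                     json={"index_id": "my-index",
                           "query": "goalkeeper saves a penalty"}).json()

# Chapterization: split a long-form video into titled chapters.
chapters = requests.post(f"{BASE}/chapterize", headers=HEADERS,
                         json={"video_id": "vid42"}).json()

# Summary generation and Q&A against the same video.
summary = requests.post(f"{BASE}/summarize", headers=HEADERS,
                        json={"video_id": "vid42"}).json()
answer = requests.post(f"{BASE}/qa", headers=HEADERS,
                       json={"video_id": "vid42",
                             "question": "Who scores first?"}).json()
```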
Google is taking a similar approach to video understanding with its MUM AI system, which the company uses to power video recommendations on Google Search and YouTube by picking out subjects in videos (e.g., "acrylic paint materials") based on the audio, text and visual content. But while the technology may be comparable, Twelve Labs is one of the first vendors to bring it to market; Google has chosen to keep MUM in-house, declining to make it available through a public API.
That said, Google, along with Microsoft and Amazon, offers services (e.g., Google Cloud Video AI, Azure Video Indexer and AWS Rekognition) that recognize objects, places and actions in videos, extracting rich metadata at the frame level. There's also Reminiz, a French computer vision startup that claims to be able to index any type of video and add tags to both recorded and live-streamed content. But Lee says Twelve Labs is sufficiently differentiated, in part because its platform allows customers to fine-tune the AI to specific categories of video content.
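For a sense of what the incumbents' frame-level metadata extraction looks like in practice, here is a short sketch using Google's Cloud Video Intelligence Python client (the Cloud Storage path is a placeholder):

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Request label detection on a video stored in Cloud Storage (placeholder path).
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://your-bucket/your-video.mp4",
    }
)
result = operation.result(timeout=300)

# Each label names an object, place or action, with the segments where it appears.
for label in result.annotation_results[0].segment_label_annotations:
    times = [(s.segment.start_time_offset, s.segment.end_time_offset)
             for s in label.segments]
    print(label.entity.description, times)
```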

Mockup of an API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs
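In the spirit of that mockup, a fine-tuning request against such a platform might look like this; as above, the endpoint and fields are invented for illustration:

```python
import requests

BASE = "https://api.example-video-platform.com/v1"  # same hypothetical base URL as above
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Hypothetical fine-tune job: adapt the general video model to a narrow
# domain (here, salad/cooking content) using a customer's labeled clips.
job = requests.post(
    f"{BASE}/fine-tune",
    headers=HEADERS,
    json={
        "base_model": "video-understanding-base",
        "training_index": "salad-clips",  # index of customer-provided videos
        "task": "semantic-search",
    },
).json()
print(job.get("job_id"))
```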
“What we have found is that narrow AI products designed to detect specific problems exhibit high accuracy in their ideal scenarios in a controlled setting, but do not adapt as well to messy real-world data,” said Lee. “They act more like a rule-based system, and therefore lack the ability to generalize when deviations occur. We also see this as a limitation rooted in a lack of contextual understanding. Understanding context is what gives humans the unique ability to make generalizations across seemingly different real-world situations, and it’s where Twelve Labs stands alone.”
Beyond search, Lee says Twelve Labs’ technology can drive things like ad insertion and content moderation, intelligently determining, for example, which videos featuring knives are violent versus instructional. It can also be used for media analytics and real-time commentary, he says, and to automatically generate highlight reels from videos.
Just over a year after its founding in March 2021, Twelve Labs has paying customers (Lee wouldn’t reveal exactly how many) and a multiyear contract with Oracle to train AI models using Oracle’s cloud infrastructure. Looking ahead, the startup plans to invest in developing its technology and expanding its team. (Lee declined to reveal the current size of Twelve Labs’ workforce, but LinkedIn data suggests it’s about 18 people.)
“For most companies, despite the enormous value that can be gained from large models, it really doesn’t make sense for them to train, operate and maintain these models themselves. By leveraging the Twelve Labs platform, any organization can harness powerful video understanding capabilities with just a few intuitive API calls,” Lee said. “The future direction of AI innovation is headed squarely toward multimodal video understanding, and Twelve Labs is well positioned to push the boundaries even further in 2023.”