Of course, the score of the football game is important. But sporting events can also inspire cultural moments that slip under the radar, such as Travis Kelce making a heart sign to Taylor Swift in the stands. While such moments can be social media gold, they are easily missed by traditional content tagging systems. This is where Twelve Labs comes in.
“Every sports team or sports league has decades of footage — of the games, of the field, of the players,” said Soyoung Lee, co-founder and head of go-to-market at Twelve Labs. Yet these archives are often hard to use because of inconsistent and outdated content management. “Until now, most content tagging processes have been done manually.”
Twelve Labs, a San Francisco-based startup working on video-language AI, wants to unlock the value of video content by applying models that can search large archives, generate text summaries, and create short-form clips from long-form footage. Its work extends beyond sports, touching industries from entertainment and advertising to security.
“Large language models can read and write well,” Lee said. “But we want to go further and create a world where AI can also see.”
Is Twelve Labs related to ElevenLabs?
Founded in 2021, Twelve Labs is often confused with ElevenLabs, an AI startup specializing in audio. “We started a year earlier,” Lee joked, adding that Twelve Labs, which named itself after the initial size of its founding team, has partnered with ElevenLabs on hackathons, including one called “23Labs.”
The startup’s ambitious vision has drawn interest from deep-pocketed backers. It has raised more than $100 million from investors including Nvidia, Intel, and Firstman Studio, the studio of Squid Game creator Hwang Dong-hyuk. Its advisory bench is studded with stars, including Fei-Fei Li, Jeffrey Katzenberg, and Alexandr Wang.
Twelve Labs counts thousands of developers and hundreds of enterprise customers. Demand is especially high in entertainment and media, where Hollywood studios, sports teams, and social media platforms rely on Twelve Labs’ tools, which also help place advertisements within content.
Government agencies are also using the startup’s technology for video search and event retrieval. In addition to its work with the US and other national governments, Lee said Twelve Labs has been deployed in a South Korean city to help CCTV operators monitor thousands of camera feeds and find specific events. To reduce privacy risks, the company has ruled out facial recognition and other biometric capabilities, she added.
Will video-language AI replace human jobs?
Much of what Twelve Labs’ AI does was once done by people, raising the concern that it will replace human jobs, a worry Lee only partially disputes. “I don’t know if jobs will be lost, per se, but jobs will have to change,” she said, comparing the shift to how tools like Photoshop reinvented creative roles.
If anything, Lee believes systems like Twelve Labs’ will democratize filmmaking, which has long been concentrated in companies with large budgets. “Now you can do things with less, which means you have more stories that can be created by independent creators who don’t have that same capital,” she said. “It allows for the democratization of content creation and distribution.”
Twelve Labs isn’t the only AI video player, but the company emphasizes a different need than most competitors. “We are happy that video is now starting to get more attention,” Lee said, noting that much of that attention has gone to large language models and to video generation models like Sora, which turn text into video. Twelve Labs focuses on the reverse: understanding video and turning it into text.
Currently, Twelve Labs offers video search, video analysis, and video-to-text capabilities. The company plans to expand into agentic platforms that can not only understand video but also extract narratives from it. Such models can be useful beyond creative fields, says Lee, pointing to examples such as retailers identifying peak foot-traffic hours or security clients reconstructing the sequence of events surrounding an accident.
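To give a rough sense of how semantic video search like this works under the hood, here is a minimal, purely illustrative Python sketch. It assumes a video-language model has already embedded both video segments and a text query into a shared vector space; the segment names and hand-picked vectors below are invented for the example and are not Twelve Labs’ actual API or data.

```python
import math

# Toy stand-ins for embeddings a video-language model would produce.
# In a real system, a model maps each video segment and the text query
# into the same vector space; these 3-d vectors are hypothetical.
SEGMENTS = {
    "kickoff":        [0.9, 0.1, 0.0],
    "crowd_reaction": [0.1, 0.9, 0.2],
    "halftime_show":  [0.0, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, segments, top_k=1):
    """Return the names of the top_k segments most similar to the query."""
    ranked = sorted(
        segments.items(),
        key=lambda item: cosine(query_vec, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]

# A query like "fans celebrating in the stands" would embed close to
# the crowd-reaction segment in this toy space.
print(search([0.2, 0.8, 0.1], SEGMENTS))  # ['crowd_reaction']
```

The point of the sketch is only the retrieval pattern: once video is represented in the same embedding space as text, finding a moment like a heart sign in the stands becomes a nearest-neighbor lookup rather than a manual tagging exercise.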