Advanced AI Video Capabilities
Video is commonly regarded as one of the world’s largest unsearchable data sources, yet TwelveLabs’ cutting-edge technology turns it into a trove of accessible information. Whether it’s giving a sports network the ability to instantly pull every instance of a specific play style or commentator reaction, or helping a broadcaster identify recurring themes across large volumes of footage, TwelveLabs helps teams turn their video archives into usable, indexable assets, unlocking both operational efficiency and new revenue opportunities.
TwelveLabs overcomes the inherent complexities of video understanding, allowing customers to search video across all modalities. Specifically, TwelveLabs delivers the following (a brief code sketch follows the list):
- Natural language video search that pinpoints precise content moments
- Deep video understanding without requiring pre-defined labels
- Multimodal AI processing visual, audio, and text simultaneously
- Temporal intelligence connecting related events across time
- Enterprise solutions scaling extensive video libraries into accessible knowledge
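As a minimal sketch of the natural-language search described above, a query through the TwelveLabs Python SDK might look like the following; the API key, index ID, and query are placeholders, and exact method, option, and field names may vary by SDK version:

```python
from twelvelabs import TwelveLabs

# Placeholder API key and index ID -- replace with your own.
client = TwelveLabs(api_key="tlk_...")

# Search an existing video index with a plain-English query across
# visual and audio modalities (option names may differ by API version).
result = client.search.query(
    index_id="<your-index-id>",
    query_text="crowd celebrating a last-second goal",
    options=["visual", "audio"],
)

# Each hit is a precise clip: the source video, start/end offsets in
# seconds, and a relevance score.
for clip in result.data:
    print(clip.video_id, clip.start, clip.end, clip.score)
```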
“At MLSE, we are defining the future of the sports and entertainment business. Innovation is in our DNA, and we’re leading the charge in shaping what comes next. With powerful tools like Amazon Bedrock and TwelveLabs’ AI models supporting our vision, we’re accelerating our ability to create smarter, more immersive experiences for fans,” said Humza Teherany, Chief Strategy and Innovation Officer at Maple Leaf Sports & Entertainment.
Unlocking the Power of Video Understanding for AWS Customers
With Marengo and Pegasus available in Amazon Bedrock, AWS customers can use TwelveLabs’ models to build and scale generative AI applications without managing underlying infrastructure. Through Amazon Bedrock, customers gain access to a broad set of capabilities while maintaining complete control over their data, with enterprise-grade security and cost controls, all essential for deploying AI responsibly at scale.
TwelveLabs’ fully managed, serverless models in Amazon Bedrock allow developers to do the following (a code sketch follows the list):
- Create applications that search through videos, classify scenes, summarize content, and extract insights using natural language
- Build sophisticated video understanding features without specialized AI expertise
- Scale video processing from small collections to massive libraries with consistent performance
- Deploy solutions with enterprise-grade security and governance controls
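As a hedged illustration, invoking one of these models from Python with boto3 might look like the sketch below. The model ID, request schema, and S3 URI are assumptions layered on the standard Bedrock Runtime invoke_model pattern, so verify them against the Bedrock model catalog before use:

```python
import json
import boto3

# Bedrock Runtime client; use a region where the TwelveLabs models are offered.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed request schema: a natural-language prompt plus a pointer to
# video stored in S3. Field names and the URI are illustrative placeholders.
request = {
    "inputPrompt": "Summarize the key plays in this game footage.",
    "mediaSource": {"s3Location": {"uri": "s3://my-bucket/game-footage.mp4"}},
}

response = bedrock.invoke_model(
    modelId="twelvelabs.pegasus-1-2-v1:0",  # assumed ID; confirm in the Bedrock console
    body=json.dumps(request),
)

# The response body is a JSON stream; print the full payload rather than
# assuming specific output keys.
print(json.loads(response["body"].read()))
```

Because the models are fully managed, the same call scales from a handful of clips to a large library without provisioning or tuning inference infrastructure.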
“Video understanding is revolutionizing how industries like media & entertainment, sports, automotive, and education work with and discover content,” said Samira Panah Bakhtiar, General Manager of Media & Entertainment, Games, and Sports at AWS. “Over the last year, I have consistently said that natural language semantic search is a ‘strategic unlock’ for our entertainment customers, as they reexamine their existing intellectual property and breathe new life into it. By bringing TwelveLabs’ advanced models to Amazon Bedrock, we’re helping our customers make sense of any video moment, unlocking the full value of their treasured video assets. Businesses will now be able to easily search, categorize, and extract insights from their vast video libraries, enabling new use cases and better user experiences that were previously impossible without significant technical expertise.”
The integration will benefit multiple industries, across media, entertainment, advertising, and beyond. For example:
- Film and TV Studios can rapidly manage video workflows, from dailies and content repackaging to archive management
- Sports Leagues and Teams can efficiently create match highlights and customized, fan-focused content at scale
- News Agencies and Broadcasters can quickly search large libraries to find the moments that matter
- Streaming Services can better package and distribute content across platforms and more effectively insert relevant video ads
AWS and TwelveLabs’ integration partner Monks expressed their excitement: “We’ve been putting AI to work across the entire video value chain for IP holders, broadcasters and brands. TwelveLabs in Amazon Bedrock makes it easier to realize opportunities for monetization in broadcast news, entertainment and sports by making it simpler and more secure to build and scale applications with powerful video understanding,” said Lewis Smithingham, EVP Strategic Industries at Monks.
Expanding Collaboration Between AWS and TwelveLabs
This announcement builds on a strong existing relationship between AWS and TwelveLabs and continues the momentum of their Strategic Collaboration Agreement (SCA). TwelveLabs is working with AWS to accelerate the development of its foundation models, deploy its advanced video understanding foundation models across new industries, and enhance its model training capabilities using Amazon SageMaker HyperPod. With the reliable and scalable infrastructure offered by SageMaker HyperPod, TwelveLabs has accelerated model training while reducing training costs.
“This integration with Amazon Bedrock represents the next phase in our collaboration with AWS, making our video understanding AI more accessible to enterprises worldwide,” said Jae Lee, co-founder and CEO of TwelveLabs.
To learn more about TwelveLabs’ industry-leading models, Marengo 2.7 and Pegasus 1.2, explore twelvelabs.io. Find out more about TwelveLabs models in Amazon Bedrock here.
About TwelveLabs
TwelveLabs uses multimodal foundation models to bring human-like understanding to video data. The company’s foundation models map natural language to what’s happening inside a video, including actions, objects, and background sounds, allowing developers to create applications that can search through videos, classify scenes, summarize, and extract insights with unprecedented accuracy. Headquartered in the US, TwelveLabs serves customers across media, entertainment, sports, advertising, and government. For more information, visit www.twelvelabs.io.
Media Contact
Amber Moore, Moore Communications, +1 503-943-9381, [email protected]
SOURCE TwelveLabs