Weeks after releasing its Photon text-to-image model, Luma AI shows no signs of slowing down. The creator of the popular generative AI platform Dream Machine announced the launch of its Ray 2 model at AWS' re:Invent conference, boasting that it can produce videos from text and image prompts in under 10 seconds.
Disclosure: I attended Amazon's 2024 re:Invent as a guest, with a portion of my travel expenses covered by the company. However, Amazon had no influence over the content of this post—these thoughts are entirely my own.
High-Quality Video From All Sources
Designed for consumers, prosumers, and professionals—basically, anyone creative—Ray 2 can be accessed through Luma AI's Dream Machine platform. The company says the model can generate five- to ten-second video clips featuring "advanced cinematography, smooth motion, and eye-catching drama." In addition, Ray 2 can model interactions between people, animals, and objects, meaning users can create accurate characters through natural-language instruction understanding and reasoning.
Ray 2 appears to be a natural progression for the company, which started with a text-to-video generation model before moving into text-to-image. Now it's blending both, so developers can use either kind of reference to produce what appear to be highly realistic videos. Luma AI hopes this appeals to corporations looking to produce commercial-grade videos without the usual expense, while ensuring the output adheres to their brand strategy.
“In an increasingly complex world, video has become an essential form of expression and a channel for learning, and we set out to offer a service that would help everyone–from creatives to professionals–become fluent in this new visual AI medium,” says Amit Jain, Luma AI’s Chief Executive and co-founder.
Although unveiled at re:Invent, Luma’s Ray 2 model won’t be immediately available.
The timing of Ray 2 comes as the video generation space continues to heat up. OpenAI’s Sora model could soon launch, Google DeepMind’s Veo model is now available within Vertex AI, and Amazon is working on its own offering, called Nova Reel. The list of competitors keeps growing, with Runway, Tencent, and others in the mix.
Reaping the AWS Relationship Benefits
It’s not by chance that Jain’s startup is showing off Ray 2 at re:Invent. Luma AI is part of the latest cohort of startups in AWS’ generative AI accelerator, and it is reaping the benefits: it has formed a strategic partnership with the cloud computing giant to bring Luma AI’s premium visual AI models to Amazon Bedrock.
“In partnering with AWS and offering our models in Amazon Bedrock, we can bring these powerful capabilities into the hands of even more people so that they can fuel their curiosity and achieve more extraordinary things with greater creativity and understanding,” Jain states.
Through Amazon Bedrock, developers can access all of Luma’s visual models using a single API. In addition, the startup is working to fine-tune its models on Amazon SageMaker HyperPod, infrastructure designed for distributed training at scale. It will also start using Amazon’s Trainium and Inferentia chips to power its models.
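To give a sense of what "a single API" means in practice, here is a minimal sketch of how a developer might submit a video-generation job through Bedrock's runtime API using boto3. Because video generation is long-running, Bedrock exposes an asynchronous invocation call that writes results to S3 rather than returning the video inline. Note the assumptions: the model ID, the payload field names, and the S3 bucket below are illustrative placeholders, not Luma's documented schema.

```python
def build_video_request(prompt, duration_s=5, resolution="720p"):
    """Build a request payload for a text-to-video invocation.

    The field names (prompt/duration/resolution) are illustrative
    assumptions, not Luma's published model-input schema.
    """
    return {
        "prompt": prompt,
        "duration": duration_s,
        "resolution": resolution,
    }


def submit_video_job(client, payload,
                     model_id="luma.ray-v2:0",          # hypothetical model ID
                     s3_uri="s3://my-output-bucket/videos/"):
    """Kick off an async Bedrock invocation.

    start_async_invoke returns immediately with an invocation ARN the
    caller can poll; the finished video lands in the given S3 location.
    `client` is expected to be boto3.client("bedrock-runtime").
    """
    return client.start_async_invoke(
        modelId=model_id,
        modelInput=payload,
        outputDataConfig={"s3OutputDataConfig": {"s3Uri": s3_uri}},
    )
```

In use, a developer would create the client with `boto3.client("bedrock-runtime")`, call `submit_video_job(client, build_video_request("a lighthouse at dawn"))`, and then poll the returned invocation ARN until the job completes, the same pattern Bedrock uses for other long-running models such as Amazon's Nova Reel.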
Featured Image: An AI-generated image of a video reel going into a monitor. Image credit: Luma Dream Machine