Luma Taps Adobe’s Firefly as Exclusive Third-Party Launch Partner for Ray3 Video Model

Credit: Luma AI

If you’re looking to try out Luma’s new Ray3 video model, there are only two places to do it, at least for the next couple of weeks: either on the Dream Machine platform or through the Adobe Firefly app. The generative AI startup has granted Adobe exclusive access to the model ahead of its broader public release.

Luma describes Ray3 as a reasoning video model that excels at delivering cinematic-quality footage. With a new multimodal reasoning system, the model is better equipped to comprehend the creator's intent, plan coherent scenes, maintain character consistency, and produce more natural-flowing motion. Luma also says it's the first video model of its kind to support High Dynamic Range (HDR).

Like its predecessor, Ray3 produces clips up to 10 seconds long, but with fewer hallucinations. Luma attributes this to improved reasoning: the model can understand intent, think using generated visuals and concepts, and evaluate its own outputs to deliver better results. Ray3 also ships with a set of creator tools, including a Draft Mode for faster video iterations, an annotation feature that lets you draw on images to achieve the right design and motion without further prompting, and native 1080p generation.


Ray3’s introduction comes just over nine months after Luma announced Ray2. At that time, the company boasted that Ray2 could generate “realistic videos with natural and coherent motion, unlocking new freedoms of creative expression and visual storytelling.” It even asserted that the model was designed for anyone creative, whether they’re consumers, prosumers, or professionals.

Here are several examples of Ray3 in action:

[Embedded videos: Ray3 sample generations]

“By combining Adobe’s creative app ecosystem with the intelligence of Ray3, we are giving creators the ability to move from idea to cinematic video in seconds with professional-grade quality and control,” Karan Ganesan, Luma’s founding engineer, said in a statement. “This partnership isn’t just about faster workflows; it is about unlocking entirely new ways to imagine and tell stories.”

This isn't the first time Luma's models have been available in Adobe Firefly: Ray2 was added to the image- and video-generating tool in June. Nor is Ray3 the only such model on the app—Adobe also offers image and video models from OpenAI, Google, Black Forest Labs, Pika, Ideogram, Topaz Labs, Moonvalley, and Runway.

Creators can access Ray3 through the Firefly app or in Firefly Boards. In the app, they can use text-to-video to generate b-roll or background footage that complements their existing content. Alternatively, they can use Ray3 in Firefly Boards to explore different visual directions—generating environments, shot compositions, and camera perspectives—all before shouting "action!"

“Adobe is building the creative AI ecosystem of the future with Adobe Firefly—our all-in-one destination to access the industry’s top creative AI models, including our commercially safe Firefly models and choice of models from trusted partners like Luma AI—all integrated into the tools creators know and use every day,” Adobe’s Vice President of New GenAI Business Ventures, Hannah Elsakr, stated. “With Ray3 now available in the Firefly app, Adobe customers are among the first to gain access to a powerful new video model that amplifies imagination and transforms workflows.”

To mark the occasion, Adobe is providing all paid Firefly and Creative Cloud Pro plan subscribers with unlimited free Ray3 generations now through October 1.


Subscribe to “The AI Economy”

Exploring AI’s impact on business, work, society, and technology.
