Adobe is introducing a new image model to its Firefly family of image-generation AI models. Firefly 5 is designed to help creators become more effective storytellers: it natively supports resolutions up to four megapixels and offers enhanced human rendering, improved lighting, and better editing capabilities. Adobe reiterates that all generations are commercially safe, high-quality, and suitable for production use.
This new model is one of many major announcements coming out of Adobe Max this week. In addition to Firefly 5, Adobe is integrating new third-party LLMs into Firefly, rolling out Firefly Custom Models for all creators, introducing new AI tools for soundtrack and speech generation, launching an Adobe Premiere app designed specifically for YouTube Shorts, embedding AI assistants into Express and Photoshop, and teasing a new initiative dubbed Project Moonlight.
“Adobe Firefly is the all-in-one destination to enable the ideation, creation, production, and publishing of content for creators and creative professionals, helping them across all the modalities they need to create in, from images, video, audio, [and] design. We want to offer the best tools and the best models to enable this creativity in one place at one price,” Alexandru Costin, Adobe’s vice president of generative AI and Sensei, remarked in a press briefing last week.
First unveiled in March 2023, Adobe Firefly was a response to the growing number of image-generation models entering the marketplace. It competes against the likes of OpenAI’s DALL-E, Google’s Imagen, Stability AI’s Stable Diffusion, and Black Forest Labs’ Flux. Extending into generative tech is a logical move for a firm that specializes in creative tools, but rather than go head-to-head with other model makers, Adobe opted to differentiate itself by touting its commercially safe policy.
What’s Included in Firefly 5
Adobe’s fifth-generation image model comes just six months after Firefly 4 was released to the public. Costin told TechCrunch at the time that the fourth-generation model was trained with a “higher order of compute magnitude to enable them to generate more detailed images.” In addition, Firefly 4 provided better text-to-image generation and let users supply their own images as references to generate new images in that style.

Firefly 5 is marketed as supporting native 4MP resolution, though this applies only to the text-to-image model. The prompt-to-edit model, accessible via Generative Fill and Edit Text in Image, supports resolutions up to two megapixels. By comparison, Firefly 4 produces images at 2K resolution, meaning Firefly 5’s 4MP output has a higher pixel count and captures finer image detail.
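For a rough sense of what the jump from 2K to 4MP means in practice, here’s a minimal sketch comparing pixel counts. The specific frame dimensions are illustrative assumptions (16:9 frames chosen for the comparison), not Adobe’s published output sizes:

```python
# Rough pixel-count comparison between a 2K-class frame and a 4MP-class frame.
# The dimensions below are illustrative assumptions, not Adobe's published
# output sizes for Firefly 4 or Firefly 5.

def megapixels(width: int, height: int) -> float:
    """Return the pixel count of a width x height image, in megapixels."""
    return width * height / 1_000_000

firefly4_2k = megapixels(2048, 1152)    # assumed 16:9 2K frame  -> ~2.36 MP
firefly5_4mp = megapixels(2688, 1536)   # assumed 16:9 4MP frame -> ~4.13 MP

print(f"2K frame:  {firefly4_2k:.2f} MP")
print(f"4MP frame: {firefly5_4mp:.2f} MP")
print(f"Pixel-count increase: {firefly5_4mp / firefly4_2k:.2f}x")  # 1.75x
```

Under these assumed dimensions, the 4MP frame carries roughly 1.75 times as many pixels as the 2K frame, which is where the finer detail would come from.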
The company touts that it has also invested in human rendering, which hopefully means generated characters will look like actual people rather than something out of a Picasso painting, no matter which style or theme you choose in Firefly. Granted, because Adobe trains only on data it has permission to use, its ability to produce images of humans has historically left users wanting. Hopefully, this new model produces better images and videos.
Perhaps the signature features of Adobe’s new model are its prompt-based and layered image editing capabilities. “We know that creative professionals don’t only want to generate new content, but more importantly, they want to edit content they’ve shot themselves or created themselves or generative content. So we’re adding capabilities for prompt-based editing, where you can have your own asset and edit this asset with a simple prompt,” Costin explained. Now, creators will have more granular control over AI-generated images, sparing them the hassle of rewriting entire prompts and regenerating everything.
“Game changing,” Brooke Hopper, senior principal designer at Adobe, interjected. “It’s bringing…all of the generative capabilities, along with the power of Photoshop, which is, honestly, just a dream. It’s very exciting.”
She demonstrated by showing a photo of her dog, Sadie, behind a metal fence. Using a simple text prompt, Firefly can remove the fence without otherwise altering Sadie. Or imagine you’ve generated an image of a bowl of ramen with a pair of chopsticks resting on it. If you want to swap the chopsticks for a different set, or for another utensil entirely, you can select that element and modify it with a prompt rather than regenerating the entire image.
Layered image editing is available in preview mode.
Adding New Third-Party Models to Firefly

Firefly 5 isn’t the only AI model Adobe is highlighting at Adobe Max. It’s also expanding its lineup of third-party models, giving creators more options for how their work is generated. Previously, Firefly supported various image and video generation models from OpenAI, Google, Ideogram, Black Forest Labs, Runway, Luma, and Moonvalley. Now, the platform also includes two image models from Topaz (Bloom and Gigapixel) and an audio model from ElevenLabs (Multilingual v2).
“We want to be the place where our customers find all the models they need in their creative process. They’re all in the same place,” Costin said, before reiterating Adobe’s AI policy: partners must maintain prompt and asset privacy and apply Content Credentials.
Firefly Custom Models for All
One way Adobe has given its customers greater control over their AI experience is through custom models. Introduced in March 2024, Firefly Custom Models are, as the name implies, customized models built on Adobe’s Firefly and fine-tuned on a small set of brand assets. Companies use these custom models to generate brand images and videos tailored to their needs.

Since launch, anyone who wanted to build a custom model has needed an enterprise license and Adobe Storage for business. Today, Adobe is changing that, making Firefly Custom Models available to all creators. The catch: it’s launching as a private beta with a waitlist, and it will be accessible through both the Firefly app and Boards.
“The goal is to enable on-style generation for the creative professionals so they can train a variant of an Adobe Firefly model in their custom styles, and then be able to increase their productivity and at the same time, stay commercially safe,” Costin reasoned.
Firefly Custom Models are just one way Adobe is helping brands and creators build their own AI models. Another is the company’s AI Foundry, which it announced last week. That program operates much like a chip foundry such as Intel’s, providing companies with a framework for designing and training models on their own data. Beyond getting models trained on Firefly, organizations also get support from Adobe’s team of experts. Comparing the two offerings: Custom Models is the “do-it-yourself” approach, while AI Foundry has Adobe doing the heavy lifting to create customized models for a brand.
Featured image credit: Adobe