
At Tuesday’s Adobe Max 2025 event, Adobe unveiled a range of new artificial intelligence (AI) tools and features. The most noteworthy of these is the addition of AI assistants to the Photoshop, Express, and Firefly platforms, which can help users create and edit images. The company is also adding new AI-powered audio and video generation tools to Firefly. To give customers more options in their everyday workflows, the San Jose, California-based software giant is additionally introducing a new Firefly model and incorporating new partner models.
Most companies prefer to place AI assistants in a sidebar of their products so they can pick up on-screen context. Adobe, however, has built a new mode for Express that lets users generate new graphics and designs using text prompts. In the current version of Express, users can switch out of the assistant mode to use the regular editing tools and controls, then switch back to the AI prompts.
Adobe has invested heavily in adding new AI capabilities to its platform over the past few years. The company’s Firefly platform offers a variety of AI capabilities drawn from both its own and third-party models. At Adobe Max 2025, Adobe unveiled a number of new features across both older and newer modalities to expand its platforms further.
Adobe first unveiled the new Firefly Image Model 5, focusing on the underlying technology. According to the company, it combines prompt-based editing, photorealistic quality, and native 4-megapixel resolution in a single model. Multi-layered editing is the primary focus of this generation of Firefly Image.
Once an image has been generated, users will be able to either type a prompt and let the AI do the work, or tap and tweak individual elements for more precise control. The model treats each image’s assets as distinct layers, which allows them to be edited independently.
Alongside its first-party models, Adobe is adding more third-party AI models to its existing collection. The biggest additions are Topaz’s image generation models and ElevenLabs’ audio generation models.

Custom Firefly models are another new focus area for Adobe. Previously available only to its commercial clientele, they will now be accessible to individual users. This essentially lets users bring their own images and ensures visual consistency by training Firefly models on a user’s style. The feature is currently in beta, and enrolling on the waitlist will grant users early access.
Adobe said it has also concentrated on audio to bring users new features in the new tools. Now available are a new Firefly video editor that can create, arrange, cut, and sequence clips; a new Generate Speech tool for voice-over creation; and a new Generate Soundtrack tool that can produce studio-calibre, fully licensed music tracks. Generate Speech and Generate Soundtrack are in beta, while interested users can sign up for a waitlist for the video editor.
The prompt-to-edit capability mentioned earlier works with Black Forest Labs’ Flux.1 Kontext model, Google’s Nano Banana, and Firefly Image Model 5.
Adobe also revealed a new set of AI tools for its Creative Cloud apps. The Generative Fill feature in Photoshop can now use third-party models, such as Black Forest Labs’ FLUX.1 Kontext and Google’s Gemini 2.5 Flash, to extend images or remove objects. Premiere Pro, the company’s video editing program, will soon gain an AI-powered object mask that makes it simple for users to identify and select objects or people in order to apply effects or change colours.
The way Generate Soundtrack operates is interesting. To create a soundtrack for a video, users first upload the clip. The underlying AI model processes the video and then suggests an appropriate tempo, atmosphere, and musical genre. All of these appear in a text box, where users can manually alter any of the elements. Once the user is happy with the selection, the tool generates the music and synchronises it with the video.
Agentic AI assistants will also be available across Adobe applications, including the Photoshop, Express, and Firefly platforms. Users can ask the chatbot to make specific modifications to a creative asset, and the assistant will handle them directly. According to the company, users will always retain full control over the AI’s behaviour and can revert changes at any time. The Express AI assistant is available in beta, while there is currently a waitlist for the Photoshop assistant.
Lastly, Adobe previewed Project Moonlight, a new kind of assistant currently in testing that lets the AI assistants across Adobe apps work together seamlessly, eliminating the need for users to manually move between platforms. A private beta of the capability was demonstrated at the event.
It can connect to a creator’s social media accounts to gain a deeper understanding of their style, and it works in tandem with the various assistants in other Adobe tools. Adobe said the product is still in the early phases of development.
Adobe also said it is exploring a way to integrate Adobe Express with ChatGPT using OpenAI’s app integrations API, so that users can generate designs directly within ChatGPT.
Discover more from TechBooky