After joking about “AI” being a drinking game trigger at MAX, Adobe’s chief product officer Scott Belsky said the company is moving away from the “prompt era” of the tech — which “cheapened and undermined the craft of creative professionals” by generating anything from text descriptions.
Instead, the new “control era” aims to improve creative workflows with AI in more specific ways within Creative Cloud apps.
Adobe has added a bunch of new AI “quick actions” that automatically apply effects for retouching backgrounds, teeth, eyes, skin, and more.
Lightroom’s mobile apps also now have the “Generative Remove” feature that was introduced to the desktop editor in May — making it easier to delete annoying objects from your images on the go.
Teased during the Project Concept demo, V4 of Adobe's Firefly Image Model will let users highlight specific areas of a generated image to adjust without regenerating it from scratch, such as adding a guitar to a particular surface.
V3 has only just rolled out to Creative Cloud apps, but Adobe says this latest update will be available soon.
Dubbed “Project Concept,” this in-development planning app allows multiple creatives to hash out ideas in real time by mind-mapping inspirational images — just like Figma’s mood board tools.
Project Concept also includes a built-in generative AI “remix” feature that blends together aspects from multiple reference images. It’s not available yet, but Adobe says we’ll know more “in the near future.”
Some audience pictures snapped by Adobe principal director Terry White at today's MAX event started appearing in Frame.io in real time as he took them, without the camera needing to be connected to a computer.
And because his account was synced with Lightroom, they appeared there too — meaning there’s basically no delay for photographers to get their snaps ready for editing.
Adobe design evangelist Michael Fugoso was so excited to demo Project Neo — an Illustrator-like app for 3D design that was teased last year — that it felt like Bill and Ted had taken to the stage.
Project Neo is available as a free beta right now but we’ll hear more about general availability in the coming months.
Adobe’s AI video model is here, and it’s already inside Premiere Pro
New beta tools allow users to generate videos from images and prompts and extend existing clips in Premiere Pro.
As we started testing Windows 11 on Arm with new Copilot Plus PCs, we noticed performance issues with Adobe Premiere Pro. Adobe blocked the x86 software from Snapdragon X Elite laptops before their public launch, but Windows Central now says it's available under emulation and is "good enough for a basic video project," while a planned Arm-native version remains in development.
That’s less than the $20 billion Adobe offered for the design platform company nearly two years ago, but investors buying in the secondary share sale included Coatue Management, General Catalyst Partners, Andreessen Horowitz, and Eddy Cue.
Figma left the Adobe deal with a $1 billion breakup fee, which, along with a big redesign, is part of why CEO Dylan Field remains optimistic.
Canva CEO Melanie Perkins thinks the design world needs more alternatives to Adobe
To her, AI is just an extension of what Canva has always done: make accessible design tools that cost less than Adobe’s.
That’s according to internal company messages obtained by Business Insider, regarding frustration among staffers over how Adobe handled the controversy surrounding a recent Terms of Service update.
Adobe has since released a blog to address concerns about AI training and content ownership, but its employees reportedly think greater transparency is needed.
“If our goal is truly to prioritize our users’ best interests (which, to be honest, I sometimes question), it’s astonishing how poor our communication can be.”
Native Arm64 versions of Photoshop, Lightroom, Firefly, and Express are available starting today, Adobe announced at the Surface event happening in Redmond right now. Illustrator and Premiere Pro won't be far behind, with arrivals due in June.
New Copilot Plus laptops and tablets with the architecture will be able to run the apps as soon as they arrive.
Why Adobe CEO Shantanu Narayen is confident we’ll all adapt to AI
The tech and the consumers both might not be quite ready yet, but he’s betting big on an AI future.
The third generation of Firefly generative AI — which Adobe claims can provide more accurate and photorealistic results than its predecessor — can be accessed via the Firefly web app.
This includes the Structure Reference and Style Reference tools in the Text to Image module, and a new Generative Expand feature for increasing the aspect ratio of images in the Generative Fill module.
As we saw in the beta release, this new version of Adobe Express packs the same creative, editing, and generative AI features that desktop users have into an iOS and Android app.
It's free to use, but to access Firefly and the full suite of editing tools, you'll need a $10-per-month Premium membership.
Adobe calls its Firefly model “commercially safe” because it’s trained on Adobe’s stock library. However, Bloomberg reports that around 5 percent of the images in its training database are actually generated by other AI models.