Adobe's Photoshop upgrade reshapes images
One of the most magical tricks that generative artificial intelligence technology has brought us is the ability to quickly create impressive images from a simple line of text. Type a few words describing what you want to see into DALL-E, Midjourney, or any of the many AI-powered image creation websites, and up pops a surprisingly high-quality rendering of your request.
But as intriguing as these individual sites may be for exploration and experimentation, when you need an image for a document, project, or social media post, you’ll typically want to create it in a software application built for imaging. Adobe has made these kinds of applications for decades, and its Photoshop has long been considered the king of photo and image editing tools. In fact, “photoshopping” an image is a term used almost as widely as “googling” an internet search request.
Last year, Adobe brought generative AI (GenAI) capabilities into Photoshop via its Firefly image-focused foundation model – the type of large AI model that underpins all GenAI applications. The initial implementation let you fill in or extend an existing image with new material that matched the original through features called Generative Fill and Generative Expand. Though not as flashy as text-based image generation, the features proved extremely popular with, and useful for, Photoshop users, according to Adobe.
This year, leveraging version 3 of the Firefly model, Adobe is greatly expanding the range and quality of those capabilities within Photoshop. First is Generate Image, which, as its name suggests, brings the full potential of text-based image creation into Photoshop. What makes it even more interesting, however, is the addition of something called Reference Image.
Building on the new Reference Style and Reference Structure capabilities of Firefly Image 3, you can have Photoshop create your image in a way that adopts the style and/or overall structure of an image you supply. For example, if you find an image of a person posed a certain way under a particular type of lighting, the latest version of Photoshop can generate a new image that looks and feels like that reference image.
In addition, new capabilities such as Generate Similar and Generate Background make it easy to customize a given image – either one created by Photoshop or a photo of your own. The net result is a significantly faster and much easier way to produce the kind of image, illustration, or other graphical element you’re after. Plus, the combination of a new Enhance Detail feature and generally improved image generation quality can lead to almost scarily impressive results with little effort.
Admittedly, some photo professionals have been concerned for years about the level of image manipulation Photoshop enables, and these new capabilities take that to a whole other level. For most people, however, the new features should turn Photoshop from a powerful but challenging tool into something almost anybody can use to create amazing images. And there’s little doubt that even professional photographers will embrace some of the advanced editing features this newest version of Photoshop makes possible.
Despite these benefits, even non-photographers have worried about the potential misuse of image-creation software for nefarious purposes, particularly in the era of “deep fakes.” To address these and other critical issues, Adobe has taken a strong stand on ethics and fair use, and it has approached the creation of its AI models in an unusually open way.
First, the Firefly model was trained on images from the company’s own Adobe Stock (its stock image and video service). This means the raw data initially fed into Firefly so it could “learn” the characteristics and patterns of images as they relate to text – the basic essence of how all generative AI models work – comes from Adobe’s own content.
The company carefully controls and monitors what images go into Adobe Stock and has strict rules about what can and can’t be included. Specifically, it filters out inappropriate material as well as copyrighted images, trademarks, photos of people without model release forms, and the like. Because of this approach, Adobe guarantees that images created by its tools can be used freely without copyright concerns. In addition, for businesses, Adobe provides indemnification, meaning it will assume legal responsibility if a company is sued over an image its software generates – a huge issue that many organizations have worried about with GenAI applications in general.
In addition, as a founding member of the Content Authenticity Initiative, Adobe tags every image it generates with Content Credentials, signifying that it was created with a GenAI tool. While not visible in the image itself, this metadata lets anyone opening or inspecting the file see how it was generated.
If you’re curious to try out the new Photoshop beta, Adobe is offering a free seven-day trial; monthly subscriptions start at $19.99. You can also try Firefly Image 3 if you sign up for an Adobe account. Adobe recently announced that it will bring GenAI capabilities to its Premiere Pro video editing software later this year as well.
The world of GenAI-powered applications and services has already opened up some exciting and, frankly, pretty amazing new capabilities. These new image creation/editing tools from Adobe add to that arsenal and offer an intriguing and appealing view of where the technology is headed.
USA TODAY columnist Bob O'Donnell is the president and chief analyst of TECHnalysis Research, a market research and consulting firm. You can follow him on Twitter @bobodtech.