If there is one form of tech that has progressed at a pace never previously seen, it is Artificial Intelligence (AI). Quite simply, AI has infiltrated virtually every aspect of our lives, even if we don’t realise it.
From the apps on our phones to the pop-up adverts we receive, machine learning is embedded into nearly every piece of technology we use. And as technology progresses, these effects will only become more pronounced.
Photography, for example, is a medium that has benefitted from a significant surge in AI-led technology. Many smartphone cameras, such as those on the Google Pixel and Apple's recent iPhone 17 Pro, apply a range of AI post-processing techniques to enhance low-light photography, while their zoom lenses use AI upscaling to make photographs look as sharp as possible.
Many modern cameras, such as the Sony Alpha 6700, also benefit from advanced auto-framing and subject-tracking capabilities, thanks to deep-learning algorithms that automatically detect movement and points of interest in the frame, keeping subjects framed and in focus.
Editing suites like Photoshop also use AI-powered tools, such as Adobe Sensei, which can be used to select and mask objects, remove artifacts, and enhance image quality. Of course, while AI has given us all of these wonderful benefits, it has also become incredibly controversial thanks to its generative-image capabilities, which remain problematic for many reasons.
The Problem of AI-Generated Images in Photography Today
While AI image generation is very much a developing technology, it has reached a level of sophistication where it can be hard to tell a real image from one made by a machine. As you would expect, this leaves the technology prone to misuse and deception.
It is here that most of the ethical dilemmas stemming from AI emerge. Excessive manipulation of photographs can be used in nefarious and often political ways, which is something many of us will have seen in recent times.
Artificial photos and videos of well-known celebrities and politicians in unlikely situations have all cropped up on social media at some point over the last couple of years. While these may be used for entertainment purposes, there are genuine risks posed by AI video generation, the main being ‘deepfakes’.
These take the form of videos and still images in which a person’s face, body, or voice has been digitally and convincingly altered so that they appear to be in an entirely different place or situation, their likeness stolen and removed from its original context.
The risks here have already been demonstrated in high-profile cases, such as that of Martin Lewis, whose likeness was deceptively used in a series of deepfake videos last year. Additionally, AI lacks the human side of creativity that photographers bring: it might produce convincing images, but it cannot capture the emotional authenticity that only a human photographer can.
Lastly, there is the risk of the photographer overly relying on the technology. AI is clearly smart enough to take control away from the photographer by automating elements like autofocus and subject tracking. While useful for the most part, in the long term this could lead to the decline of a photographer’s personal style and technical understanding.
Metadata, File Integrity & Authenticity Services
As AI becomes more advanced, editing software has made sophisticated image and video manipulation accessible to anyone. The result is a higher volume of AI-generated content, making it harder to differentiate between what is real and what is fake. Fortunately, a range of photo-authenticity verification tools is now available.
One example of this is Content Credentials, which allows creators to disclose whether they used generative AI in the process of making their image. It’s essentially a form of verifiable metadata that helps photographers generate, read, and display information about who produced a piece of content, when they produced it, and which tools and editing processes were used in the process.
This information can be used by photographers to help build trust amongst their followers, by disclosing when something was artificially constructed. Furthermore, Adobe has integrated Content Credentials across many of its apps: Adobe Photoshop, Lightroom, and Adobe Camera Raw can all read and embed this metadata, and any image generated by Adobe Firefly carries Content Credentials disclosing its use.
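Content Credentials is built on the open C2PA standard, but the core idea of tamper-evident provenance metadata can be sketched in a few lines of Python. The manifest fields and HMAC signing below are simplified assumptions for illustration, not the real C2PA format:

```python
import hashlib
import hmac
import json

def make_manifest(image_bytes: bytes, creator: str, tools: list, key: bytes) -> dict:
    """Build a simplified provenance manifest: who made the content,
    which tools were used, and a hash binding it to the exact image bytes."""
    manifest = {
        "creator": creator,
        "tools": tools,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    # Sign the manifest so any later edit to it (or the image) is detectable.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """Check both the signature and that the image itself is unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

key = b"photographer-secret-key"
image = b"...raw image bytes..."
m = make_manifest(image, "Jane Doe", ["Adobe Firefly (generative AI)"], key)
print(verify_manifest(image, m, key))          # True: image and manifest intact
print(verify_manifest(image + b"x", m, key))   # False: any change breaks the hash
```

The real standard uses public-key signatures from trusted certificates rather than a shared secret, but the principle is the same: editing either the image or its disclosed history invalidates the credential.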
Watermarking vs Visual Authenticity Badging
If a photographer uses AI, or is worried about their work being used to train AI algorithms, there are ways to protect themselves. One example is watermarking, which usually takes the form of a logo or the photographer's name placed across the image. This offers some protection against editing and alteration, though not complete protection.
Watermarks can also take the form of invisible markers embedded directly into the image at creation, which help protect it from further tampering. Alternatively, visual authenticity badging can be used: a visible label added after the image is made, indicating to viewers that it was AI-generated.
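Invisible watermarking schemes vary widely, but the classic least-significant-bit (LSB) approach, which hides a short message in the lowest bit of each pixel value, gives a feel for how they work. This is a toy sketch over a flat list of 8-bit pixel values, not a production-grade or tamper-resistant scheme:

```python
def embed_watermark(pixels: list, message: str) -> list:
    """Hide message bits in the least-significant bit of each pixel value.
    Flipping the lowest bit changes brightness by at most 1/255 - imperceptible."""
    bits = []
    for byte in message.encode():
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    if len(bits) > len(pixels):
        raise ValueError("image too small for this message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the message bit
    return out

def extract_watermark(pixels: list, length: int) -> str:
    """Read back `length` bytes of hidden message from the pixels' LSBs."""
    chars = []
    for c in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[c * 8 + i] & 1)
        chars.append(byte)
    return bytes(chars).decode()

pixels = list(range(200, 255)) * 2            # stand-in for real pixel data
marked = embed_watermark(pixels, "JD-2024")
print(extract_watermark(marked, 7))           # prints "JD-2024"
```

Real invisible watermarks (such as those used by commercial services) are far more robust, spreading the signal across frequency components so it survives cropping, resizing, and recompression, which a simple LSB mark does not.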
Platforms like Instagram use this feature so that users aren't fooled into thinking an image is real. There are also apps, such as PhotoVerify, that examine images and use detection software to judge whether they are credible or artificial. Even then, there is still room for error, as AI imagery is becoming harder to separate from reality, even for machines.
Case Studies: Trusted Sources vs Fake Visuals
There have been recent cases of reliable and well-known photographers engaging in AI practices. For example, Michael Christopher Brown, a photographer well respected in his industry, caused quite a stir in 2023 when he created a series of machine-generated photographs in his documentary shoot titled 90 Miles.
While he did disclose that the images were fake, it still caused controversy due to the fact that a professional photographer was using AI to generate fake stills. The photos ignited a debate about ethics and truth in photojournalism, particularly because it is a subgenre that has historically been known for its truthful approach.
In conclusion, AI will likely continue to polarise opinion among photographers and industry professionals, as it is playing a much bigger role now, particularly for anyone working in creative mediums. AI-powered algorithms are speeding up editing times, aiding workflows and processing photographs at a higher level of clarity than ever before. As the technology grows, it remains to be seen what new photographic opportunities, or dilemmas, it may bring.