Adobe is allegedly selling artificially generated images of the Israel-Palestine war through its stock image service, according to a report by Interesting Engineering.
The alleged sale of these reportedly photorealistic images is drawing criticism, as it could contribute to the spread of false information about the ongoing and delicate humanitarian crisis in Gaza.
A Crikey report states that these AI-generated images are being used across the internet without any indication that the realistic-looking photographs are fake. The criticism follows a recent change by Adobe's image subscription service, Adobe Stock, allowing users to submit and sell AI-generated photos.
Users can sell AI images provided the photographs are disclosed as "generated with AI." Beyond this requirement, submission standards apply to every picture, and anything unlawful is banned from the service.
The report indicates that a photorealistic image titled "Conflict between Israel and Palestine generative AI" appears when one searches for Palestine. It depicts a missile strike on a cityscape. Other pictures show fictitious protests, ground fighting, and even children fleeing bomb explosions, all of which are fake.
Amid the misinformation about the Israel-Hamas conflict flooding social media, which has previously raised concerns from the EU, these photographs are also being used without any indication of their veracity.
Several blogs, newsletters, and online news sources have published "Conflict between Israel and Palestine generative AI" without disclosing it as a generative AI creation. It is unclear whether these outlets are aware that the image is artificially generated.
Generative AI Causing Misinformation
Concerns are also being raised by an expert Crikey interviewed. Dr. T.J. Thomson, a researcher studying the use of AI-generated pictures and a senior lecturer at RMIT, expressed concerns over the transparency of AI image use and whether audiences are literate enough to recognize when it is being employed.
The researcher stated that such pictures can mislead people, distort reality, and interfere with one's sense of accuracy and truth.
Interesting Engineering reports a prime example of how AI images can fuel misinformation: Prime Minister Benjamin Netanyahu's office and Israel's official account tweeted a picture of a burnt newborn. Despite being flagged by an AI detector tool as AI-generated, the photo is reportedly still claimed to be authentic by some experts.
This gave rise to several allegations asserting that Hamas had beheaded over 40 infants. In addition, US President Joe Biden falsely claimed that he had seen images of terrorists decapitating infants; the White House later issued a clarification of his statement.
Adobe Welcoming AI
Adobe has recently embraced AI across its services, unveiling a line of generative AI models named Firefly in March. These models are comparable to DALL-E and Midjourney, in that users can provide a prompt and receive a picture in response.
Just two months later, Adobe added comparable generative AI features to one of its most popular products, Adobe Photoshop, enabling users to alter images with content generated from a text prompt.