AI-Generated Images and Misinformation
Jazzmyne Haines

It can be deceptively easy to take an AI-generated post as something real. Despite trying to train my eyes to detect them at a glance, I've fallen for a few of them.

One recent example I can think of is when I was scrolling through Tumblr a couple weeks before Thanksgiving, looking for ideas to make a snack board. Many of the images I came across looked legit, but when I double-checked one of them, it seemed a little off.

This picture was certainly pretty, with a lovely spread of fall fruits and rustic bread and some glasses of wine on a wood table on an outdoor patio. Or indoors, just with the windows open? The space was a little vague, but the image still looked like it could have been real. The pomegranates and grapes looked realistic, as did the leafy decor and the natural wood boards that made up the table. They all caught the light in ways that made sense and had naturalistic textures and shapes. For all I knew, it could have just been a case of good timing with the natural light and smart stage-setting.

On closer inspection, though, I noticed that the book the unseen subject was reading while they ate had its pages fused together like a plastic model. There was also a clump of nondescript objects and textures in the bottom-right corner, completely unintelligible text on the wine bottle's label, slices on the rightmost plate that couldn't decide whether they wanted to be cheese or crackers, and an odd black artifact on one of the glasses. All of these tipped me off that the photo wasn't the product of a human with a camera -- it was dreamt up by an AI program.

AI photo of bread and fruit on a table (tumblr.com)

In this particular case, the account that posted the image explained in an ask post that they share AI photos they find online. The blog is entirely recreational, displaying interesting images just for fun.

While my case of finding AI among a bunch of real photos was relatively benign, it clued me in to how convincing AI content can be, even when you think you can catch it with a quick gloss-over. As with any convincing means of fabrication, this cutting-edge technology can be exploited for less-than-noble purposes.

In our vast, ever-expanding digital information space, AI images aren't always just for showing off the tech's capabilities. They can be used to spread false information across the internet, make outlandish hoaxes seem more plausible, and produce propaganda that sways people's political stances without them realizing what's happening.

What are some cases of AI being used with ill intent?
  • AI-made pictures have been used to spread convincing rumors, such as one involving a supposed explosion just outside the Pentagon in May 2023. The image, which shows billowing smoke rising alarmingly close to the building, caused enough concern to affect the stock market temporarily, but was eventually proven to have been fabricated by an anonymous individual.
  • Some scammers create fake dating app profiles with an AI-made portrait as the profile photo, lulling the person on the other side of the screen into a false sense of security before extracting large sums of money. In one case in Hong Kong in October 2024, a group of 27 people used photo and video deepfakes to lure victims into trying out a fake cryptocurrency trading platform, netting them 46 million dollars' worth of virtual tokens.
  • AI images have been used to create viral posts that skew people's political views, even at the expense of innocent people. For example, people took advantage of the Hurricane Helene disaster, generating images of children stuck in the wreckage or looking fearful in a boat on a flooded road to garner sympathy and sway public opinion against the administration conducting relief efforts. Beyond stirring the pot politically and chipping away at people's faith in one another, these fake images risk making people lose their compassion and hesitate to aid in relief efforts, which leads to less volunteering and donation and thus less help for people actually affected by disastrous storms.
  • AI could also be used for political memes and other viral images that artificially push people's political opinions in a certain direction. Real candidates can take advantage of this, because a meme's spread does not depend on how true it is, and it can help them gain people's favor without being held accountable for their real actions.
  • AI has been used to fabricate celebrity endorsements for products sold online, such as one case in which a Taylor Swift deepfake was used to advertise a Le Creuset cookware giveaway.
Fake image of Pentagon explosion (piktochart.com)

Despite the fast advancements in image-generating technology, it's still possible to pick out a robot among humans. Close scrutiny of an image doesn't just help you determine whether a photo was taken by a genuine photographer - it can stop the spread of potentially harmful misinformation at the source.

What can give an AI image away?
  • AI pictures may have light sources and shadows that don't align with each other as they would in real life. Keep an eye out for shadows and highlights in strange areas on or around an object in the image.
  • AI also occasionally has trouble with human anatomy, adding extra fingers to hands or airbrushing skin until it looks unrealistically smooth and somewhat blurry. Eyelids and ears may also contain unusual artifacts.
  • Public spaces may be unusually empty, or an area may be filled with fantastical-looking objects.
  • Image generators have a tough time rendering text, so signs, logos, and labels will often come out as unreadable, bit-crushed chicken-scratch.
  • Be wary of photos, especially portraits of people, that look a little too good to be true. Some AI-generated images have a tendency to look like an amplified or idyllic version of the real world, adding oddly fantastical objects, lighting or backgrounds to a photo that's meant to come off as realistic.
  • If it's a video or a GIF instead, watch the subject as they move. Their hair may sway unnaturally, or their eyes might display strange blinking patterns or brief changes in shadows or skin tone. Software can also be used to watch for subtle rendering errors on a moving deepfake model.
  • A quick reverse-image search can reveal whether a picture is AI-generated or not by leading you to its original source or other websites where it exists.
  • Comparing two similar pictures can reveal them as fake, possibly generated with the same prompt with slight tweaks. For example, with the Hurricane Helene fakes, you can tell two of them were generated artificially because while they both show a similar girl and background scene, the color of the boat and the pet dog's snout differ between the two pictures.
a. Green boat, brown muzzle.
b. Gray boat, yellow muzzle. People who were following the girl's boat in the previous photo also seem to be missing from the background.
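This kind of near-duplicate comparison can even be automated with perceptual hashing: shrink each image to a small grayscale grid, mark each pixel as above or below the grid's average brightness, and count how many of those bits differ between two images. The sketch below is a minimal, illustrative version in pure Python -- the tiny hand-written pixel grids stand in for downscaled photos, and real tools (such as the imagehash library) operate on actual image files.

```python
# Sketch of average-hash ("aHash") comparison for spotting near-duplicate
# images, e.g. two renders of the same AI prompt with slight tweaks.
# Toy 4x4 grayscale grids stand in for downscaled photos.

def average_hash(pixels):
    """Hash a grid of grayscale values: 1 if a pixel >= the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicates."""
    return sum(a != b for a, b in zip(h1, h2))

# Two "photos" that differ only slightly (think: a recolored boat)...
photo_a = [[10, 10, 200, 200]] * 4
photo_b = [[10, 12, 198, 201]] * 4   # tiny pixel-level tweaks
# ...and one genuinely different scene.
photo_c = [[200, 10, 10, 10]] * 4

ha, hb, hc = map(average_hash, (photo_a, photo_b, photo_c))
print(hamming_distance(ha, hb))  # 0  -- likely the same underlying image
print(hamming_distance(ha, hc))  # 12 -- clearly different content
```

A distance of zero (or near zero) flags the pair as variants of one source, which is exactly the tell described above for the two Hurricane Helene boat images.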
Also look out for signs of a scam that may accompany a message from someone using AI pictures or deepfakes.
  • Online scam messages are known to frequently have a written tone that sounds urgent or pushy, pressuring the recipient to act right now.
  • A scammer may insist that the conversation be kept between you and them, so they can avoid getting caught.
  • Scammers often promise handsome rewards -- appealing to the point of being unrealistic -- in exchange for transferred cash, cryptocurrency, bank account info, or your personal information.

Sources Cited

  • BBC. (2024, May 9). How to spot AI generated images on social media. BBC Bitesize. https://www.bbc.co.uk/bitesize/articles/z6s4239
  • CNN Business. When seeing is no longer believing: Inside the Pentagon's race against deepfake videos. https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/
  • Daniel, L. (2024, October 5). How Hurricane Helene deepfakes flooding social media hurt real people. https://www.forbes.com/sites/larsdaniel/2024/10/04/hurricane-helena-deepfakes-flooding-social-media-hurt-real-people/
  • Edwards, B. (2024, October 16). Deepfake lovers swindle victims out of $46M in Hong Kong AI scam. https://arstechnica.com/ai/2024/10/deepfake-lovers-swindle-victims-out-of-46m-in-hong-kong-ai-scam/
  • Foley, J. (2024, October 24). 25 of the best deepfake examples that terrified and amused the internet. https://www.creativebloq.com/features/deepfake-examples
  • ginger-by-the-sea. (2024, Sept 17). https://www.tumblr.com/ginger-by-the-sea/761926901659992064?source=share
  • Jingnan, H. (2024, October 18). AI-generated images have become a new form of propaganda this election season. https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda
  • Orebaugh, A. (2024, April 9). Hello? How scammers use AI to impersonate people and steal your money. https://engagement.virginia.edu/learn/thoughts-from-the-lawn/20240409-Orebaugh
  • Service. (2023, November 29). How to recognize AI-generated pictures, videos, and audio. OSINT Blog by Social Links. https://blog.sociallinks.io/how-to-recognize-ai-generated-pictures-videos-and-audio/
  • The New York Times. (2017, October 31). The dark art of political memes | Internetting with Amanda Hess [Video]. https://www.youtube.com/watch?v=-bgQmesnte8
  • Wong, V. (2024, May 3). 8 viral images created by AI. https://piktochart.com/blog/viral-ai-images/