After Hurricane Milton struck Florida’s west coast, fake AI-generated images circulated on social media platforms such as TikTok and Instagram. One video, which garnered 1.5 million views on X, claimed to show Hurricane Milton but was found to be AI-generated, according to CBS.
This was not an isolated incident; several AI-generated images also spread during Hurricane Helene, which made landfall Sept. 26. Images of young girls crying in lifeboats while clutching soaked puppies drew genuine empathy from viewers until they were proven fake.
“As more AI models are created and more data is gathered, the scope of the capabilities of AI increases,” AI Club president and junior Rohan Khanna said. “In the next few years, it is highly likely that it will be virtually impossible to differentiate between AI-generated content and human-made content. With this, the spread of misinformation will greatly increase. Ultimately, it is up to the individual to actively distinguish fake from real.”
The virality of such media shows how difficult it is becoming to distinguish real content from AI-generated look-alikes. These images can also create false narratives that erode public trust in media organizations, according to Forbes. For example, AI images have been used to push partisan narratives, particularly ones criticizing the Biden Administration’s response to the disaster, according to NPR. Although separating fact from fiction may seem difficult, students should strive to recognize the differences between fake images and real content through media literacy in order to prevent misinformation.
“The Hurricane Helene photos demonstrate the current challenges with disinformation and social media,” Virginia Tech professor of public relations Cayce Meyers said in an interview with Newswise. “AI technology is providing greater ability to create realistic images that are deceptive. The hurricane images have certainly had an impact on the public, and their spread and believability demonstrate how we now live in a new technological and communication reality in the age of artificial intelligence.”
Keeping up with authentic images and information is vital to combating misinformation; the erosion of trust in societal institutions can damage people’s ability to empathize with difficult situations and drive them away from the news, according to Forbes. AI images often contain strange lighting; hyper-real, overly smooth surfaces; and odd inconsistencies in hands and feet, according to Newswise.
Additionally, suspicious images can be fact-checked against other news sources and government websites to confirm their authenticity, according to CBS. The North Carolina Department of Public Safety has already begun debunking rumors stemming from AI images on its website. Consulting such credible sources helps students avoid spreading misinformation.
“AI has the potential to be a political weapon, with its versatility to create believable propaganda,” Khanna said. “Because there is little regulation in the field of AI currently, the threat of AI misinformation remains large. Fortunately, states like California have already taken action to begin to regulate AI, which will hopefully begin to prevent the rampant spread of misinformation.”
However, AI-generated content can also be used beneficially, serving as creative inspiration or as a mock-up for artistic projects, according to Conroy Creative Counsel. This positive potential is hampered only by widespread misuse. The problem is not the AI images themselves, but the lack of media literacy surrounding them.
To keep themselves and others from being misinformed, students should strive to become more media literate and guard against the misuse of AI-generated content.