How to Navigate AI-Superpowered Disinformation

By Tom Cassara ’23


If it feels like the 2024 election represents the third “tipping point” in eight years, after the tumultuous 2016 and 2020 contests, you’re not alone. This year, however, is different: it’s global. With more than 2 billion potential voters, the pressure is on as thousands of candidates seek positions of power around the world, including in Mexico, India, the EU, and the US.

As this unprecedented wave of voters prepares to cast ballots this year, they must trudge through an information landscape that feels less and less inviting, thanks to the scourge of disinformation. Though deception and propaganda plagued the past two elections, 2024 may bring the most significant reckoning with the truth yet.

The issue may feel Eurocentric or even wholly American, but disinformation is a worldwide problem. As AI accelerates, voters need to understand the tools that bad actors, such as rival nations and cybercriminals, use to distort perception.

The AI-Powered Disinformation Toolkit

The four major tools bad actors employ are large language models, deepfakes, text-to-video models, and voice-cloning tools. Though many people in popular culture use these terms interchangeably, this article settles on specific definitions of each to prevent confusion.

So, what are these systems capable of? How do they impact the spread of misinformation/disinformation? And how do we, as voters, avoid falling victim to their influence? The answer, unfortunately, is not clear-cut. 

Large Language Models

The most widely known AI systems, large language models, take in a prompt and spit out a response. Programmers can even teach these systems to replicate specific writing styles. The most famous examples are OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot.

Researchers have already been sounding the alarm about the dangers of these technologies in the wrong hands. According to these experts, tools like ChatGPT accelerate the spread of disinformation online by making the fabrication of false information cheaper and more accessible.

Identifying misinformation created by large language models will be challenging, but there are skills and tools that can help. A low-tech example is lateral reading: tracking and verifying the credibility of a source while you read, so you can quickly spot when you are being deceived. Though this option requires more labor on your part, it is arguably more reliable than the next one.

Another way of identifying AI-powered disinformation is to use an AI detection tool. These systems purport to detect the presence of AI in a writing sample, with varying levels of accuracy. Though this takes the labor off the reader’s shoulders, the effectiveness of these tools is up for debate. Even OpenAI’s own classifier, built to flag text generated by its software, correctly identified AI-written content only about 26% of the time.
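For readers comfortable with a few lines of code, here is a minimal sketch of what running a passage through one of these detectors can look like. It assumes Python with the Hugging Face transformers library installed, and the model name is only an example of a publicly shared detector, not a recommendation; its verdict should be treated as one weak signal among many.

from transformers import pipeline

# Load an off-the-shelf AI-text detector (example model; swap in any detector you trust).
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "Paste the passage you want to check here."
result = detector(sample)[0]

# The classifier returns a label with a confidence score.
# Treat this as a hint, not proof; detectors of this kind misfire often.
print(f"{result['label']} ({result['score']:.0%} confidence)")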

Deepfakes 

Deepfakes are synthetic media created using AI, often by combining or superimposing existing images and videos onto other content, or by generating images directly from text prompts, to produce realistic photos. These can sometimes be used for comedic purposes: you may remember photos of Pope Francis in a puffer coat. However, they can also be used maliciously, like the fabricated images of former President Donald J. Trump being arrested by the NYPD. Easily accessible tools like Midjourney produce these images. Deepfakes will most likely only grow more convincing, and they can cause severe damage. Imagine deepfakes of politicians leading major protests or sparking a diplomatic incident.

Navigating deepfakes can feel daunting in an era so attached to the adage “seeing is believing,” but there are ways to avoid being fooled. If you doubt the authenticity of a photo, analyze how the subjects appear. Does someone have an extra finger? Do faces appear blurred or contorted? Can you reverse image search the photo? Most importantly, can you find the photo credits? Consider these questions when weighing a dubious image. Also look for watermarks; though they appear infrequently, they offer one more useful check.
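If you want to go one step beyond eyeballing a picture, a small script can surface whatever metadata a downloaded image still carries, such as camera, software, or copyright tags. The sketch below assumes Python with the Pillow library, and the file name is hypothetical; keep in mind that missing metadata is only a hint, since social platforms routinely strip it from genuine photos too.

from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> None:
    # Print whatever EXIF metadata (camera, software, copyright) the image still carries.
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found; common for AI images, screenshots, and re-uploads.")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

summarize_exif("dubious_photo.jpg")  # hypothetical file name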

Text-to-Video Models

OpenAI recently showcased its new product, Sora, a text-to-video model that, while not the first of its kind, is certainly more impressive than anything else on the market. The technology can create lifelike videos in response to a text prompt, all in a short time, allowing users to make videos of whatever their heart desires.

This technology is not available to the public yet, but it is projected to be released later this year. The ability to create realistic photos is one thing; being able to fabricate entire clips could upset the information landscape in ways we are ill-prepared for. For example, with access to both Sora and TikTok, a single person could create ten-second clips depicting nearly anything they want. The technology is new and has limitations, but one cannot ignore its capacity for harm.

You’ll want to look for the same identifiers as with deepfakes. Focus on details that AI finds difficult to replicate, or, again, look for watermarks. Hopefully, the fact that these fabrications are in motion will make your job easier, but the technology is too new to know for sure.

Voice-Cloning Tools

OpenAI has also unveiled its Voice Engine, a tool capable of mimicking a person’s voice and speech patterns. The technology is convincing enough to fool close family members of the person being mimicked. Its danger is clear enough that even its creator is withholding it from the public until the company is confident it is safe.

The applications of this technology are dangerous. Recently, a similar tool was used to disrupt the New Hampshire Democratic primary by generating a fake audio clip of President Joe Biden urging voters not to turn out for him. With this power, fake interviews, private recordings, and other audio can be fabricated entirely. Conversely, the very existence of this technology will likely erode public trust in verified recordings that are authentic but cast a favored candidate in a negative light.

Voice-cloning technology is potentially the hardest tool to combat. For fake text, you can look for common linguistic failures and tells. For fake images, you can look for objects or body parts that are difficult to construct. Voice clones, however, require you to use your ears, and your ears alone, to determine whether you’re being deceived.

The best advice is to hold off on believing a voice recording until you can verify its authenticity. Check online: are credible sources reporting the recording’s existence? Have fact-checkers jumped in to explain its origin?

Doing Your Part

No one is going to navigate this new information landscape for you. Tech companies, malicious internet users, and government actors all have a hand in the proliferation of the problem, but so might you. It is entirely possible that you have already helped spread false information online, knowingly or not, and that you will again.

As citizens of this modern world, it is our responsibility to remain vigilant online. We must think critically when taking in new information and verify its authenticity. We must maintain a healthy skepticism, balancing cynicism and naivete. And we must stand firm against the temptation to use this new technology to fabricate lies about our political opponents or to invent good deeds and strengths for our allies.
