Battling Propaganda and Scams: The Importance of Skepticism in the Age of AI-Generated Content

Propagandists are using AI, too

In a recent report, OpenAI highlighted the dangers of adversarial content and behaviors in the field of artificial intelligence. While independent researchers have been compiling databases of misuse, such as the AI Incident Database and the Political Deepfakes Incident Database, detecting misuse from an external standpoint can be challenging. This is especially true as AI tools become more advanced and widespread.

Online users play a crucial role in combating influence operations and AI misuse. Such content has an impact only if individuals view it, believe it, and propagate it further. In instances highlighted by OpenAI, online users identified fake accounts that relied on AI-generated text. Our own research has found Facebook communities actively identifying AI-generated images posted by spammers and scammers, helping those less familiar with the technology avoid deception. A healthy dose of skepticism is increasingly valuable: taking a moment to verify the authenticity of content and accounts, and educating friends and family about the prevalence of generated content, can help social media users resist manipulation by propagandists and scammers.

As OpenAI’s blog post noted, “Threat actors work across the internet.” It is imperative that we do the same. As we enter a new era of AI-driven influence operations, we must address common challenges through transparency, data sharing, and collective vigilance to build a more resilient digital ecosystem.

Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), focusing on the CyberAI Project. Renée DiResta is the research manager at the Stanford Internet Observatory and the author of “Invisible Rulers: The People Who Turn Lies into Reality.”