Pillar C: Cybersecurity of AI Systems

AI-Enabled Disinformation

Bot networks, AI-generated propaganda, influence operations, detection methods.

Part of Pillar C: Cybersecurity of AI Systems, which groups the disciplines that share methods, tools, and threat models with AI-Enabled Disinformation.

What is AI-Enabled Disinformation?

AI-enabled disinformation represents the weaponization of artificial intelligence to create, amplify, and target false or misleading narratives at scale. While disinformation is not new, AI dramatically lowers the cost and increases the effectiveness of influence operations — enabling automated content generation, sophisticated bot networks, hyper-personalized targeting, and real-time narrative adaptation that were previously impossible.

Large language models can generate persuasive, contextually appropriate propaganda in any language, creating unique content that evades duplicate-detection filters. AI-powered bot networks can simulate authentic social media engagement patterns, building credible personas over months before activating for influence campaigns. Micro-targeting algorithms can identify and exploit psychological vulnerabilities in specific audience segments, delivering tailored disinformation through the channels and formats most likely to be believed.
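The duplicate-detection filters mentioned above typically compare content fingerprints rather than exact strings. A minimal sketch (illustrative only, not any platform's actual pipeline) using word-shingle Jaccard similarity shows why unique, paraphrased model output slips through while verbatim reposts are caught:

```python
def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping k-word shingles, a common fingerprint unit."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity of two texts' shingle sets: 1.0 identical, 0.0 disjoint."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical example texts.
original = "the election was stolen and the votes were never counted properly"
copy_paste = "the election was stolen and the votes were never counted properly"
llm_paraphrase = "officials never properly tallied the ballots in a stolen election"

# A verbatim repost is trivially flagged...
print(jaccard(original, copy_paste))      # 1.0
# ...but a paraphrase pushing the same narrative shares almost no shingles.
print(jaccard(original, llm_paraphrase))  # near 0.0
```

Real systems use more robust fingerprints (MinHash, SimHash, embeddings), but the weakness is the same in kind: similarity filters tuned for copy-paste amplification degrade against content that is semantically repetitive yet lexically unique.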

State-sponsored influence operations from Russia (Internet Research Agency), China (Spamouflage), and Iran have demonstrated sophisticated AI-assisted tactics. Defensive approaches include AI-powered detection of coordinated inauthentic behavior, content provenance verification, platform integrity measures, and cross-platform information sharing. The field requires collaboration between AI security researchers, platform trust and safety teams, intelligence analysts, and policymakers to address threats that span technical and geopolitical domains.
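Detection of coordinated inauthentic behavior often starts from a simple temporal signal: many distinct accounts posting identical content within a narrow time window. A minimal sketch of that clustering idea (hypothetical account names, timestamps, and thresholds):

```python
from collections import defaultdict

# Hypothetical feed of (account, unix_timestamp, content_hash) tuples.
posts = [
    ("acct_a", 1000, "h1"), ("acct_b", 1004, "h1"), ("acct_c", 1007, "h1"),
    ("acct_d", 5000, "h2"),
    ("acct_a", 9000, "h3"), ("acct_b", 9002, "h3"), ("acct_c", 9005, "h3"),
]

def coordinated_clusters(posts, window: int = 10, min_accounts: int = 3):
    """Group posts by identical content, then flag groups where at least
    `min_accounts` distinct accounts posted within `window` seconds of the first."""
    by_content = defaultdict(list)
    for account, ts, content in posts:
        by_content[content].append((ts, account))
    flagged = []
    for content, items in by_content.items():
        items.sort()
        accounts = {acct for ts, acct in items if ts - items[0][0] <= window}
        if len(accounts) >= min_accounts:
            flagged.append((content, sorted(accounts)))
    return flagged

print(coordinated_clusters(posts))
# [('h1', ['acct_a', 'acct_b', 'acct_c']), ('h3', ['acct_a', 'acct_b', 'acct_c'])]
```

Note that the same three accounts fire together twice: production systems weight such repeated co-occurrence across windows heavily, since one synchronized burst can be coincidence but recurring synchronization rarely is.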

Why it matters

AI-powered disinformation threatens democratic institutions, market stability, and public health. The ability to generate and distribute convincing false narratives at scale is one of the most consequential near-term risks from generative AI.

AI-enabled disinformation connects AI security (how generative models can be misused) with information operations and national security. It demonstrates that AI threats extend far beyond technical systems into societal-scale harms.

Standards and frameworks

Curated resources

Authoritative sources we ground AI-Enabled Disinformation questions in: frameworks, research, guides, and tools.

DISARM Foundation · framework

DISARM Framework

Framework for analyzing and countering disinformation, modeled on MITRE ATT&CK. Provides a structured taxonomy of information-manipulation tactics and techniques.

ENISA · research

ENISA Threat Landscape Report

EU-focused annual threat assessment. Covers ransomware, supply chain, disinformation, state-sponsored threats. Useful counterpoint to US-centric sources.

MIT · research

MIT Media Lab — "The Spread of True and False News Online" (Vosoughi et al., Science, 2018)

Landmark study: false news spreads farther, faster, deeper, and more broadly than true news on social media. Not AI-specific but foundational for understanding why AI-generated disinformation is dangerous.

RAND Corporation · research

RAND — "The Firehose of Falsehood" and Information Warfare research

Research on propaganda techniques, cognitive security, and information warfare. The "firehose of falsehood" model explains high-volume, multi-channel disinformation. Good for strategic questions.

C2PA (Adobe, Microsoft, BBC, others) · framework

C2PA (Coalition for Content Provenance and Authenticity)

Technical standard for content provenance. Cryptographic binding of creation metadata to content. The leading technical approach to synthetic media authentication. Questions on architecture, limitations, and adoption challenges.
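The core idea of "cryptographic binding of creation metadata to content" can be sketched in a few lines. This is not the actual C2PA format (which embeds COSE-signed manifests with X.509 certificates in the asset itself); it is an illustrative hash-and-sign toy using an HMAC as a stand-in signature, with made-up key and metadata fields:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a creator's private key / certificate

def bind_manifest(content: bytes, metadata: dict) -> dict:
    """Bind metadata to content: the manifest records the content hash, and a
    signature covers the whole manifest, so neither can change unnoticed."""
    manifest = dict(metadata, content_sha256=hashlib.sha256(content).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature over the manifest and the content hash."""
    rest = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(rest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    ok_sig = hmac.compare_digest(manifest.get("signature", ""), expected)
    ok_hash = rest.get("content_sha256") == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

img = b"...image bytes..."
m = bind_manifest(img, {"creator": "ExampleCam", "created": "2024-01-01"})
print(verify(img, m))         # True: content and metadata intact
print(verify(img + b"x", m))  # False: content was altered after signing
```

The sketch also illustrates C2PA's key limitation: verification proves the content matches what the signer attested, not that the attestation is true, so trust ultimately rests on the signer's credentials, and stripped or absent manifests prove nothing.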

Georgetown / OpenAI / Stanford · research

Goldstein et al. — "Generative Language Models and Automated Influence Operations" (2023)

Analysis of how LLMs can amplify influence operations: cost reduction, scalability, personalization, multilingual content. Framework for assessing disinformation risk from generative AI.

OpenAI · research

OpenAI — "Influence Operations Reports" (Threat Intelligence)

Reports on state-affiliated actors using AI for influence operations. Documents actual observed misuse, not theoretical risks. Key for questions about real-world AI-enabled disinformation.

CISA · guide

CISA Deepfake Detection Resources

CISA guidance on understanding, detecting, and defending against deepfake threats in organizational contexts.

Stanford · guide

Stanford Internet Observatory

Research group studying abuse in information technologies, including AI-enabled disinformation, platform manipulation, and election interference.
