Where every claim in SecProve comes from.
A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.
Conference presentations covering novel attack techniques and defensive research. Essential for cutting-edge offensive/defensive questions. AI Village talks particularly relevant for Pillars B and C.
Comprehensive taxonomy of adversarial ML attacks and mitigations. Covers evasion, poisoning, extraction, and inference attacks with standardized terminology.
C1 · Adversarial Threat Landscape for AI Systems. ATT&CK-style knowledge base of adversarial ML techniques, tactics, and real-world case studies.
Introduced PGD-based adversarial training, currently the most reliable defense against adversarial examples. Established the robustness-accuracy tradeoff.
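The PGD attack at the heart of that training scheme is, roughly, iterated gradient-sign steps with a projection back into an epsilon-ball around the clean input. A minimal numpy sketch on a toy logistic-regression model; the weights, input, step size, and budget below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.03, steps=10):
    """Iterated gradient-sign ascent on the loss, projected into the L-inf eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)                # model confidence for class 1
        grad = (p - y) * w                        # analytic d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)     # step in the loss-increasing direction
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection: stay within eps of x
    return x_adv

# Toy model and input (assumed values, for illustration only).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, 0.1, -0.3])
y = 1.0

x_adv = pgd_attack(x, y, w, b)
```

Adversarial training then minimizes the loss on `x_adv` rather than on `x`, which is what makes it a min-max procedure.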
C1 · Seminal backdoor attack paper. Demonstrated trojaned models in transfer learning scenarios. Foundational for AI supply chain security questions.
C3 · Demonstrated that adversarial examples transfer between models, enabling black-box attacks via surrogate models. Key work on transferability.
C1 · Introduced the C&W attack, demonstrating that defensive distillation and other defenses could be reliably bypassed. Changed how robustness is evaluated.
C1 · The seminal paper introducing FGSM (Fast Gradient Sign Method). Established that adversarial examples are a fundamental property of neural networks, not a bug.
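The FGSM rule itself fits in one line: perturb the input by epsilon times the sign of the loss gradient. A self-contained numpy sketch on a toy logistic-regression model; the weights, input, and epsilon are assumed for illustration and do not come from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss)."""
    p = sigmoid(w @ x + b)          # model confidence for class 1
    grad_x = (p - y) * w            # analytic d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy model and input (assumed values, for illustration only).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, 0.1, -0.3])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.1)
```

Each coordinate moves by exactly epsilon, yet the model's confidence in the true label drops, which is the "fast" in Fast Gradient Sign Method.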
C1 · Standardized benchmark for evaluating adversarial robustness of ML models. Maintains a leaderboard of the most robust models.
C1 · Comprehensive library for adversarial ML. Supports attacks, defenses, and robustness evaluation across multiple ML frameworks.
C1 · Microsoft's tool for assessing the security of ML models. Supports evasion, extraction, and inversion attacks.
C1 · Top 10 security risks specific to machine learning systems, including supply chain attacks, data poisoning, and model theft.
C1 · Historical survey tracing adversarial ML from 2004 spam filters through deep learning. Essential for questions on the evolution and taxonomy of adversarial attacks (evasion, poisoning, model extraction).
Ready to test what you've learned?
Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.