Source library · 320 curated entries

Where every claim in SecProve comes from.

A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.

320 SOURCES · 143 ORGS · 50 DOMAINS · 320 ADDED IN 90 DAYS
13 sources in this view · sorted by citation density
A · Cybersecurity · 1 source
01
Black Hat / DEF CON Archives · Black Hat / DEF CON

Conference presentations covering novel attack techniques and defensive research. Essential for cutting-edge offensive/defensive questions. AI Village talks particularly relevant for Pillars B and C.

Test your knowledge · A4
C · Cybersecurity of AI Systems · 12 sources
01

Comprehensive taxonomy of adversarial ML attacks and mitigations. Covers evasion, poisoning, extraction, and inference attacks with standardized terminology.

Framework · Intermediate · C1 · Adversarial Machine Learning · C5 · AI Red Teaming · NEW · 1mo ago
Test your knowledge · C1
02

MITRE ATLAS · MITRE

Adversarial Threat Landscape for AI Systems. ATT&CK-style knowledge base of adversarial ML techniques, tactics, and real-world case studies.

Test your knowledge · C1
03

Introduced PGD-based adversarial training, currently the most reliable empirical defense against adversarial examples, and established the robustness-accuracy tradeoff. A minimal sketch follows this entry.

Research · Advanced · C1 · Adversarial Machine Learning · NEW · 1mo ago
Test your knowledge · C1
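
A minimal PyTorch sketch of the PGD loop described above, plus the adversarial-training step that wraps it. Hyperparameters (eps, alpha, steps) are illustrative, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD: random start, repeated signed-gradient ascent, projection."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                              # valid pixel range
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    """One adversarial-training step: fit on worst-case examples for this batch."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```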
04

Seminal backdoor attack paper. Demonstrated trojaned models in transfer-learning scenarios. Foundational for AI supply chain security questions; a toy poisoning sketch follows this entry.

Test your knowledge · C3
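
A toy illustration of the BadNets recipe, not the paper's code: stamp a fixed trigger on a small fraction of training images and relabel them, so training binds the trigger to the attacker's target class. Assumes NumPy arrays of shape (N, H, W) with pixel values in [0, 1].

```python
import numpy as np

def badnets_poison(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Stamp a 3x3 white trigger in the bottom-right corner of a random
    subset of images and relabel that subset to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # the backdoor trigger
    labels[idx] = target_class    # mislabeled so the model learns trigger -> target
    return images, labels
```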
05

Demonstrated that adversarial examples transfer between models, enabling black-box attacks via surrogate models. Key work on transferability; a measurement sketch follows this entry.

Research · Advanced · C1 · Adversarial Machine Learning · NEW · 1mo ago
Test your knowledge · C1
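
A hypothetical helper for quantifying the transferability claim: craft examples on a white-box surrogate with any attack (e.g. FGSM, sketched under entry 07), then measure how often they also fool a target model the attacker never accessed.

```python
import torch

@torch.no_grad()
def transfer_rate(target_model, x_adv, y):
    """Fraction of surrogate-crafted adversarial examples that also
    fool the black-box target model."""
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()
```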
06

Introduced the C&W attack, demonstrating that defensive distillation and other defenses could be reliably bypassed. Changed how robustness is evaluated; a simplified sketch of the objective follows this entry.

Research · Advanced · C1 · Adversarial Machine Learning · NEW · 1mo ago
Test your knowledge · C1
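
A simplified sketch of the C&W L2 objective: minimize the perturbation norm plus c times a logit-margin term that goes nonpositive once the target class wins. The paper's tanh change of variables and binary search over c are omitted; hyperparameters are illustrative.

```python
import torch

def cw_l2(model, x, y_target, c=1.0, kappa=0.0, lr=0.01, steps=100):
    """Simplified targeted C&W L2 attack (clamping in place of the
    paper's tanh reparameterization)."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        logits = model(x_adv)
        target_logit = logits.gather(1, y_target[:, None]).squeeze(1)
        best_other = logits.scatter(1, y_target[:, None], float("-inf")).max(1).values
        # f(x') = max(max_{i != t} Z_i - Z_t, -kappa)
        f = torch.clamp(best_other - target_logit, min=-kappa)
        loss = (delta.flatten(1).norm(dim=1) ** 2 + c * f).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0, 1).detach()
```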
07

The seminal paper introducing FGSM (the Fast Gradient Sign Method). Established that adversarial examples are a fundamental property of neural networks, not a bug; a one-step sketch follows this entry.

Research · Advanced · C1 · Adversarial Machine Learning · NEW · 1mo ago
Test your knowledge · C1
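
The FGSM update itself is one signed-gradient step; a minimal PyTorch sketch, with eps illustrative and inputs assumed in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """One step of size eps along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```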
08

Standardized benchmark for evaluating the adversarial robustness of ML models. Maintains a leaderboard of the most robust models; a loading sketch follows this entry.

Tool · Advanced · C1 · Adversarial Machine Learning · NEW · 1mo ago
Test your knowledge · C1
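
This entry's title was lost in extraction; assuming it refers to RobustBench, which matches the description, loading a leaderboard model looks roughly like the sketch below. Verify names against the current robustbench release.

```python
# Hedged sketch of the RobustBench model zoo entry point; "Standard" is
# the non-robust baseline identifier on the CIFAR-10 L-inf leaderboard.
from robustbench.utils import load_model

model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf")
model.eval()  # ready for a standardized robustness evaluation
```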
09

Comprehensive library for adversarial ML. Supports attacks, defenses, and robustness evaluation across multiple ML frameworks; a usage sketch follows this entry.

Test your knowledge · C1
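
This entry's title was also lost; assuming it refers to IBM's Adversarial Robustness Toolbox (ART), which fits the description, a minimal evasion sketch against a scikit-learn model might look like this. API names follow ART's documented interface; treat the whole block as an approximation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy data and model; ART wraps the fitted estimator so that
# framework-agnostic attacks can query gradients and predictions.
X = np.random.rand(100, 20).astype(np.float32)
y = np.random.randint(0, 2, 100)
clf = LogisticRegression().fit(X, y)

classifier = SklearnClassifier(model=clf)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
X_adv = attack.generate(x=X)
print("accuracy on adversarial inputs:", clf.score(X_adv, y))
```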
10
Counterfit · Microsoft

Microsoft's tool for assessing the security of ML models. Supports evasion, extraction, and inversion attacks.

Tool · Intermediate · C1 · Adversarial Machine Learning · NEW · 1mo ago
Test your knowledge · C1
11

Top 10 security risks specific to machine learning systems, including supply chain attacks, data poisoning, and model theft.

Test your knowledge · C1
12

Historical survey tracing adversarial ML from 2004 spam filters through deep learning. Essential for questions on the evolution and taxonomy of adversarial attacks (evasion, poisoning, model extraction).

Research · Intermediate · C1 · Adversarial Machine Learning · NEW · 22d ago
Test your knowledge · C1

Ready to test what you've learned?

Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.