Source library · 320 curated entries

Where every claim in SecProve comes from.

A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.

320SOURCES
143ORGS
50DOMAINS
320ADDED · 90 DAYS
Pillar · multi-selectall 4 selected
Domainsselect pillar(s) above
Browsing the full corpus. Pick pillars above to narrow to specific domains.
17 sources · matching filters · sorted by citation density
Sort
ACybersecurity1 source
01
Black Hat / DEF CON ArchivesBlack Hat / DEF CON

Conference presentations covering novel attack techniques and defensive research. Essential for cutting-edge offensive/defensive questions. AI Village talks particularly relevant for Pillars B and C.

Test your knowledge · A4
BApplied AI in Security3 sources
01

Evaluates model capabilities for autonomous cyber operations at each AI Safety Level (ASL). Defines thresholds where AI capability in offensive security requires additional safeguards. Key reference for responsible AI in offensive security.

Test your knowledge · B4
02

Research on using AI for penetration testing automation: reconnaissance, vulnerability discovery, exploit generation. Practitioner perspective on what's practical vs. theoretical.

GuideIntermediateB4 · AI in Offensive SecurityC5 · AI Red TeamingNEW · 22d ago
Test your knowledge · B4
03

Analysis of how LLMs can be used for offensive security tasks and the implications for defensive guardrails. Covers the dual-use nature of security LLMs.

Test your knowledge · B4
CCybersecurity of AI Systems13 sources
01

The definitive security risk list for LLM-powered applications. Covers prompt injection, insecure output handling, training data poisoning, and more.

FrameworkC2 · LLM-Specific AttacksC5 · AI Red Teaming★ STARTERNEW · 1mo ago
Test your knowledge · C2
02

Comprehensive taxonomy of adversarial ML attacks and mitigations. Covers evasion, poisoning, extraction, and inference attacks with standardized terminology.

FrameworkIntermediateC1 · Adversarial Machine LearningC5 · AI Red TeamingNEW · 1mo ago
Test your knowledge · C1
03

Adversarial Threat Landscape for AI Systems. ATT&CK-style knowledge base of adversarial ML techniques, tactics, and real-world case studies.

Test your knowledge · C1
04

Comprehensive guide to AI red teaming from Microsoft's dedicated AI security team. Covers methodology, tools, and findings.

GuideIntermediateC5 · AI Red TeamingNEW · 1mo ago
Test your knowledge · C5
05

Python Risk Identification Toolkit for generative AI. Automated red teaming framework for testing LLM applications.

ToolIntermediateC5 · AI Red TeamingC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C5
06

NVIDIA's open-source LLM vulnerability scanner. Tests for prompt injection, jailbreaking, data leakage, and more.

ToolIntermediateC5 · AI Red TeamingC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C5
07

Largest public AI red teaming event. 2,200+ participants testing multiple foundation models. Established community norms for responsible AI red teaming. Good for questions on practical red team methodology.

GuideIntermediateC5 · AI Red TeamingNEW · 22d ago
Test your knowledge · C5
08

Crowdsourced red teaming methodology with 38,961 attacks across multiple models. Taxonomy of harmful outputs and effectiveness of different red teaming strategies. Key reference for structured AI red teaming.

ResearchIntermediateC5 · AI Red TeamingNEW · 22d ago
Test your knowledge · C5
09

Framework for evaluating dangerous capabilities: persuasion, deception, cyber operations, self-replication. Defines evaluation methodology for frontier model safety. Questions on what to test and how to interpret results.

ResearchIntermediateC5 · AI Red TeamingC8 · AI Safety & AlignmentNEW · 22d ago
Test your knowledge · C5
10

Comprehensive library for adversarial ML. Supports attacks, defenses, and robustness evaluation across multiple ML frameworks.

Test your knowledge · C1
11

Practical lessons from large-scale LLM red teaming across real products. Covers failure modes, testing methodologies, and organizational patterns. Rare insight into enterprise-scale AI security.

GuideIntermediateC2 · LLM-Specific AttacksC5 · AI Red TeamingNEW · 22d ago
Test your knowledge · C2
12

Companion to AI RMF 1.0 specifically for generative AI. Maps 12 GenAI risks to RMF actions. Covers CBRN, CSAM, confabulation, data privacy, environmental, human-AI interaction, information integrity, IP, obscenity, toxicity, value chain.

Test your knowledge · C5
13

Description of external red teaming program and findings from GPT-4 pre-deployment testing. The system card details risk categories, testing methodology, and residual risks.

ResearchIntermediateC5 · AI Red TeamingC8 · AI Safety & AlignmentNEW · 22d ago
Test your knowledge · C5

Ready to test what you've learned?

Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.