Source library · 320 curated entries

Where every claim in SecProve comes from.

A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.

320SOURCES
143ORGS
50DOMAINS
320ADDED · 90 DAYS
Pillar · multi-selectall 4 selected
Domainsselect pillar(s) above
Browsing the full corpus. Pick pillars above to narrow to specific domains.
18 sources · matching filters · sorted by citation density
Sort
ACybersecurity1 source
01
Black Hat / DEF CON ArchivesBlack Hat / DEF CON

Conference presentations covering novel attack techniques and defensive research. Essential for cutting-edge offensive/defensive questions. AI Village talks particularly relevant for Pillars B and C.

Test your knowledge · A4
BApplied AI in Security2 sources
01

Demonstrated GPT-4 exploiting real-world web vulnerabilities autonomously. 73% success rate on day-one CVEs. Key reference for questions about AI-augmented offensive capabilities and the asymmetry debate.

Test your knowledge · B4
02

Analysis of how LLMs can be used for offensive security tasks and the implications for defensive guardrails. Covers the dual-use nature of security LLMs.

Test your knowledge · B4
CCybersecurity of AI Systems15 sources
01

The definitive security risk list for LLM-powered applications. Covers prompt injection, insecure output handling, training data poisoning, and more.

FrameworkC2 · LLM-Specific AttacksC5 · AI Red Teaming★ STARTERNEW · 1mo ago
Test your knowledge · C2
02

Adversarial Threat Landscape for AI Systems. ATT&CK-style knowledge base of adversarial ML techniques, tactics, and real-world case studies.

Test your knowledge · C1
03

Collection of Anthropic's published research on AI safety, alignment, interpretability, and security.

Test your knowledge · C8
04

Python Risk Identification Toolkit for generative AI. Automated red teaming framework for testing LLM applications.

ToolIntermediateC5 · AI Red TeamingC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C5
05

Demonstrated that LLMs memorize and can be prompted to regurgitate training data verbatim, including PII. Foundational work on LLM privacy risks.

ResearchAdvancedC2 · LLM-Specific AttacksC4 · AI Data SecurityNEW · 1mo ago
Test your knowledge · C2
06

Showed that gradually escalating benign conversations can bypass safety filters over multiple turns. Defeats per-message safety checks.

ResearchAdvancedC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C2
07

Demonstrated indirect prompt injection attacks through RAG documents, emails, and web content. Essential reading for RAG security.

ResearchIntermediateC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C2
08

The GCG attack paper. Showed that adversarial suffixes can bypass safety alignment in LLMs, transferring across models.

ResearchAdvancedC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C2
09

NVIDIA's open-source LLM vulnerability scanner. Tests for prompt injection, jailbreaking, data leakage, and more.

ToolIntermediateC5 · AI Red TeamingC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C5
10

Demonstrated that long-context LLMs can be jailbroken by providing many examples of the desired behavior. Scales with context window size.

ResearchIntermediateC2 · LLM-Specific AttacksNEW · 1mo ago
Test your knowledge · C2
11

Practical lessons from large-scale LLM red teaming across real products. Covers failure modes, testing methodologies, and organizational patterns. Rare insight into enterprise-scale AI security.

GuideIntermediateC2 · LLM-Specific AttacksC5 · AI Red TeamingNEW · 22d ago
Test your knowledge · C2
12

Companion to AI RMF 1.0 specifically for generative AI. Maps 12 GenAI risks to RMF actions. Covers CBRN, CSAM, confabulation, data privacy, environmental, human-AI interaction, information integrity, IP, obscenity, toxicity, value chain.

Test your knowledge · C5
13

Extension of the LLM Top 10 specifically for agentic patterns. Covers excessive agency, insecure plugin/tool design, and multi-agent trust boundaries.

Test your knowledge · C11
14

Largest prompt injection competition dataset. Taxonomy of prompt injection techniques: context ignoring, fake completion, payload splitting, obfuscation. Empirical data on attack success rates across models.

ResearchIntermediateC2 · LLM-Specific AttacksNEW · 22d ago
Test your knowledge · C2
15

Systematic analysis of jailbreak techniques: competing objectives and mismatched generalization. Framework for understanding why safety training is inherently incomplete. Essential for nuanced jailbreak questions.

ResearchIntermediateC2 · LLM-Specific AttacksNEW · 22d ago
Test your knowledge · C2

Ready to test what you've learned?

Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.