Source library · 320 curated entries

Where every claim in SecProve comes from.

A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.

320 sources
143 orgs
50 domains
320 added · 90 days
9 sources · sorted by citation density
C · Cybersecurity of AI Systems · 9 sources
01

Introduced DP-SGD for training neural networks with formal differential privacy guarantees. Foundation for private ML.

Research · Advanced · C4 · AI Data Security · NEW · 1mo ago
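The core mechanism is compact enough to sketch: clip each example's gradient to a fixed L2 norm, sum the clipped gradients, and add Gaussian noise calibrated to that norm before averaging. A minimal illustration, with function names and default values that are ours rather than the paper's:

```python
import math
import random

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    sum the clipped gradients, add Gaussian noise scaled to the clip
    norm, average, and take a gradient-descent step."""
    n = len(per_example_grads)
    summed = [0.0] * len(params)
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip to clip_norm
        for i, gi in enumerate(g):
            summed[i] += gi * scale
    sigma = noise_multiplier * clip_norm              # noise scales with clip
    noisy_avg = [(s + random.gauss(0.0, sigma)) / n for s in summed]
    return [p - lr * g for p, g in zip(params, noisy_avg)]
```

Clipping bounds any single example's influence on the update; the noise then masks what remains, which is what yields the formal guarantee.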
02

First practical membership inference attack against ML models. Showed that ML APIs leak information about their training data.

Research · Advanced · C4 · AI Data Security · NEW · 1mo ago
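The paper's attack trains shadow models, but the underlying intuition shows up in the simpler confidence-threshold variant: overfit models are noticeably more confident on their training points than on unseen data. A toy illustration of that variant, not the paper's method:

```python
def toy_confidence(train_set, x):
    """Stand-in for querying an ML API: an overfit model reports far
    higher confidence on memorized training points than elsewhere."""
    return 0.99 if x in train_set else 0.6

def infer_membership(train_set, candidates, threshold=0.9):
    """Predict 'member' for any candidate record on which the model's
    confidence exceeds the threshold."""
    return {x: toy_confidence(train_set, x) >= threshold
            for x in candidates}
```

With a real target model, the attacker calibrates the threshold using shadow models trained on data drawn from the same distribution; the toy version hard-codes the gap to keep the example self-contained.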
03

Voluntary framework for improving privacy through enterprise risk management. Complements the Cybersecurity Framework.

Framework · Intermediate · C4 · AI Data Security · NEW · 1mo ago
04

Demonstrated that LLMs memorize and can be prompted to regurgitate training data verbatim, including PII. Foundational work on LLM privacy risks.

Research · Advanced · C2 · LLM-Specific Attacks · C4 · AI Data Security · NEW · 1mo ago
05

Introduced SISA training for efficient machine unlearning — enabling models to "forget" specific training data without full retraining.

Research · Advanced · C4 · AI Data Security · NEW · 1mo ago
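The scheme is easy to sketch: partition the data into shards, train one constituent model per shard, and on a deletion request retrain only the shard that held the point. A toy version (the full paper also slices within shards and aggregates predictions across constituents):

```python
class SISAEnsemble:
    """Toy SISA: one model per shard; unlearning a point retrains
    only the shard that contained it, never the whole ensemble."""

    def __init__(self, data, n_shards, train_fn):
        self.train_fn = train_fn
        self.shards = [[] for _ in range(n_shards)]
        for x in data:                          # deterministic sharding
            self.shards[hash(x) % n_shards].append(x)
        self.models = [train_fn(s) for s in self.shards]

    def unlearn(self, x):
        i = hash(x) % len(self.shards)
        self.shards[i].remove(x)
        self.models[i] = self.train_fn(self.shards[i])  # one retrain
        return i                                # index of retrained shard
```

With `train_fn` as a real training routine, honoring a deletion costs one shard's training time rather than the full dataset's, which is the efficiency argument the paper makes.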
06

Open-source DP libraries and practical guides. Bridges theory to implementation. Good for questions on real-world DP deployment challenges and privacy budget management.

Research · Intermediate · C4 · AI Data Security · NEW · 22d ago
07

Extracted training data from ChatGPT (a production model) using a divergence attack, showing that alignment does not prevent memorization. Useful for questions on the gap between safety fine-tuning and data protection.

Research · Intermediate · C4 · AI Data Security · NEW · 22d ago
08

The theoretical foundation for differential privacy. Essential for questions on privacy-preserving ML training (DP-SGD) and the epsilon-delta framework.

Research · Intermediate · C4 · AI Data Security · NEW · 22d ago
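The simplest construction from that theory is the Laplace mechanism: adding Laplace noise with scale sensitivity/ε to a query's true answer satisfies pure (ε, 0)-differential privacy. A minimal sketch, sampling the Laplace distribution by inverse transform:

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Release true_answer plus Laplace(sensitivity / epsilon) noise,
    satisfying (epsilon, 0)-differential privacy for a query with the
    given L1 sensitivity."""
    b = sensitivity / epsilon
    u = random.random() - 0.5                 # uniform on (-0.5, 0.5)
    # Inverse-transform sample from Laplace(0, b)
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise
```

Smaller ε means more noise and stronger privacy; in the pure-DP case, repeated releases compose additively, which is where the privacy-budget accounting that DP-SGD relies on comes from.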
09

Extended training data extraction to image models. Showed Stable Diffusion memorizes and regurgitates training images. Important for multimodal AI data security questions.

Research · Intermediate · C4 · AI Data Security · NEW · 22d ago

Ready to test what you've learned?

Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.