Where every claim in SecProve comes from.
A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.
Abadi et al., "Deep Learning with Differential Privacy" (2016). Introduced DP-SGD for training neural networks with formal differential privacy guarantees. Foundation for private ML.
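The core of DP-SGD is compact enough to sketch. A minimal aggregation step in numpy, assuming per-example gradients are already computed (function and parameter names are ours, not the paper's):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm <= clip_norm, sum, add Gaussian noise scaled to the clip
    norm, and average over the batch."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

# usage: rng = np.random.default_rng(0)
# update = dp_sgd_step(rng.normal(size=(32, 10)), clip_norm=1.0,
#                      noise_multiplier=1.1, rng=rng)
```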
Shokri et al., "Membership Inference Attacks Against Machine Learning Models" (2017). First practical membership inference attack against ML models. Showed that ML APIs leak information about their training data.
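The paper's full attack trains shadow models and an attack classifier; the sketch below is the simpler confidence-thresholding baseline, which captures the underlying leak (names are illustrative):

```python
import numpy as np

def membership_guess(top_confidences, threshold=0.9):
    """Flag inputs whose top-class confidence exceeds a threshold as
    likely training members: overfit models are systematically more
    confident on data they trained on."""
    return np.asarray(top_confidences) > threshold

# usage: membership_guess([0.99, 0.61, 0.95]) -> array([ True, False,  True])
```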
NIST, "Privacy Framework" (2020). Voluntary framework for improving privacy through enterprise risk management. Complements the Cybersecurity Framework.
Carlini et al., "Extracting Training Data from Large Language Models" (2021). Demonstrated that LLMs memorize and can be prompted to regurgitate training data verbatim, including PII. Foundational work on LLM privacy risks.
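A minimal way to measure this kind of leakage, assuming access to a generation callable and a set of strings known to be in the training set (`generate` is a hypothetical stand-in, not an API from the paper):

```python
def verbatim_extraction_rate(generate, training_texts, prefix_len=50, match_len=50):
    """Prompt the model with a prefix of each known training string and
    count how often it reproduces the true continuation verbatim."""
    hits = 0
    for text in training_texts:
        prefix, suffix = text[:prefix_len], text[prefix_len:]
        completion = generate(prefix, max_chars=len(suffix))
        if suffix and completion.startswith(suffix[:match_len]):
            hits += 1
    return hits / max(len(training_texts), 1)
```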
Bourtoule et al., "Machine Unlearning" (2021). Introduced SISA training for efficient machine unlearning, enabling models to "forget" specific training data without full retraining.
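A sketch of the shard-and-aggregate idea, assuming scikit-learn-style estimators; the paper's slicing-within-shards refinement is omitted and all names are illustrative:

```python
import numpy as np

class SISAEnsemble:
    """Sharded training a la SISA: one model per disjoint shard,
    majority-vote aggregation. Unlearning a point retrains only the
    shard that held it, not the whole ensemble."""

    def __init__(self, make_model, n_shards, seed=0):
        self.make_model, self.n_shards = make_model, n_shards
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.X, self.y = X, y
        self.assignment = self.rng.integers(0, self.n_shards, size=len(X))
        self.models = [self._train_shard(s) for s in range(self.n_shards)]
        return self

    def _train_shard(self, s):
        idx = np.where(self.assignment == s)[0]
        return self.make_model().fit(self.X[idx], self.y[idx])

    def unlearn(self, i):
        s = self.assignment[i]
        self.assignment[i] = -1                 # exclude the point
        self.models[s] = self._train_shard(s)   # retrain one shard only

    def predict(self, X):
        # assumes non-negative integer class labels
        votes = np.stack([m.predict(X) for m in self.models]).astype(int)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Because each model only ever saw its own shard, deleting a point invalidates one shard's model rather than the whole ensemble; that is where the retraining savings come from.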
Open-source DP libraries and practical guides. Bridges theory to implementation. Good for questions on real-world DP deployment challenges and privacy budget management.
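Budget management reduces to bookkeeping over composed releases. A minimal ledger under basic sequential composition, where the epsilons of successive releases simply add (production accountants such as Renyi DP are tighter; names here are ours):

```python
import numpy as np

class PrivacyBudget:
    """Cumulative-epsilon ledger under basic sequential composition."""

    def __init__(self, total_epsilon):
        self.total, self.spent = total_epsilon, 0.0

    def charge(self, epsilon):
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon

def noisy_count(n, eps, budget, rng=None):
    """Laplace-mechanism count release (sensitivity 1), charged to the budget."""
    budget.charge(eps)
    rng = np.random.default_rng() if rng is None else rng
    return n + rng.laplace(0.0, 1.0 / eps)

# usage: b = PrivacyBudget(1.0); noisy_count(42, 0.1, b)
# ten such queries exhaust b; the eleventh raises.
```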
Nasr et al., "Scalable Extraction of Training Data from (Production) Language Models" (2023). Extracted training data from ChatGPT (a production model) using a divergence attack. Showed alignment doesn't prevent memorization. Good for questions on the gap between safety fine-tuning and data protection.
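The attack is simple to sketch: ask the model to repeat one word forever and inspect what it emits once it stops complying (`generate` is a hypothetical stand-in for a chat-completion call):

```python
def divergence_probe(generate, word="poem", max_chars=4000):
    """Ask the model to repeat one word forever; return whatever it
    emits after it diverges from the repetition, which was shown to
    sometimes contain verbatim memorized training text."""
    output = generate(f'Repeat the word "{word}" forever.', max_chars=max_chars)
    tokens = output.split()
    for i, tok in enumerate(tokens):
        if tok.strip('".,!').lower() != word:
            return " ".join(tokens[i:])   # candidate memorized content
    return ""                             # model never diverged
```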
Dwork & Roth, "The Algorithmic Foundations of Differential Privacy" (2014). The theoretical foundation for differential privacy. Essential for questions on privacy-preserving ML training (DP-SGD) and the epsilon-delta framework.
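For reference, the guarantee itself: a randomized mechanism M is (ε, δ)-differentially private if, for every pair of datasets D and D′ differing in a single record and every set S of outputs,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```

Smaller ε means the two output distributions are closer, so any single record influences the result less; δ allows a small probability of exceeding that bound.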
Carlini et al., "Extracting Training Data from Diffusion Models" (2023). Extended training data extraction to image models. Showed Stable Diffusion memorizes and regurgitates training images. Important for multimodal AI data security questions.
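The extraction test can be sketched in a few lines, assuming a text-to-image sampler and a candidate training image. The paper matches generations in a calibrated embedding space; plain pixel MSE stands in here for self-containment (`sample_image` is hypothetical):

```python
import numpy as np

def regurgitation_hits(sample_image, train_image, caption,
                       n_samples=100, mse_threshold=0.01):
    """Sample many generations for a caption seen in training and count
    near-duplicates of the training image."""
    ref = train_image.astype(float) / 255.0
    hits = 0
    for _ in range(n_samples):
        img = sample_image(caption).astype(float) / 255.0
        if np.mean((img - ref) ** 2) < mse_threshold:
            hits += 1
    return hits
```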
Ready to test what you've learned?
Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.