Where every claim in SecProve comes from.
A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.
Open-source testing framework and toolkit for AI governance. Helps organizations validate AI systems against governance principles.
Open-source platform for managing the end-to-end ML lifecycle. Covers experiment tracking, model registry, and deployment.
Evaluation criteria for AI/ML platforms, including security features. Good for questions about what to look for when evaluating AI security tooling.
Market categorization of AI security tools: model monitoring, adversarial robustness, privacy, compliance. Useful for understanding the vendor landscape without favoring specific vendors.
LLM-powered security assistant. Technical docs cover prompt engineering for security, incident summarization, and KQL generation. Useful for questions about practical LLM integration in the SOC, not product features.
NVIDIA's open-source toolkit for adding programmable guardrails to LLM applications. Supports input/output validation and topic control.
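The input/output validation idea behind guardrail toolkits can be sketched in a few lines: check the user message before it reaches the model, and check the model's reply before it reaches the user. This is a minimal, generic illustration of the pattern; the function names and blocklist here are hypothetical and do not reflect the toolkit's actual API.

```python
# Hypothetical sketch of the programmable-guardrails pattern:
# an input rail runs before the LLM call, an output rail after.

BLOCKED_TOPICS = ("build a bomb", "credit card dump")  # illustrative blocklist

def input_rail(user_message: str) -> bool:
    """Reject prompts that touch blocked topics (topic control)."""
    lowered = user_message.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def output_rail(model_reply: str) -> str:
    """Redact replies that appear to contain sensitive material."""
    if "BEGIN PRIVATE KEY" in model_reply:
        return "[redacted: sensitive material]"
    return model_reply

def guarded_generate(user_message: str, llm) -> str:
    """Wrap any LLM callable with input and output rails."""
    if not input_rail(user_message):
        return "Sorry, I can't help with that topic."
    return output_rail(llm(user_message))

# Usage with a stand-in model callable:
reply = guarded_generate("Summarize this alert.", lambda m: "Summary: benign scan.")
```

Real toolkits express these rails declaratively in configuration rather than inline Python, but the control flow is the same: every message passes through the rails on the way in and out.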
Comprehensive guide covering AI security threats, privacy risks, and practical controls for AI-powered applications.
Bug bounty platform focused on AI/ML vulnerabilities. Real-world vulnerability data on ML frameworks and models. Good for grounding tool security questions in actual discovered vulnerabilities.
Platform for ML experiment tracking, model versioning, and collaborative model development with security considerations.
Ready to test what you've learned?
Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.