Where every claim in SecProve comes from.
A dense reading catalog. Every claim is footnoted. Sort by source, filter by pillar, type, or recency. Built for analysts who want to see what we are standing on.
Security documentation for LangChain agent framework — sandboxing, tool permissions, prompt injection defenses, and deployment hardening.
Analysis of risks specific to AI agents: tool use, chain-of-thought exploitation, multi-step task failures, and delegation risks. Key for understanding why agents create new attack surfaces beyond single-turn interactions.
Anthropic's open protocol for connecting AI models to external tools and data sources. Critical reading for agentic AI security.
Annual trends report. AI trust, risk, and security management (AI TRiSM) has been featured prominently. Good for strategic-level questions about where the industry is heading.
Framework for agentic AI governance: scope control, human oversight, auditability, containment. Defines key properties agents should have and failure modes to prevent.
Extension of the OWASP LLM Top 10 specifically for agentic patterns. Covers excessive agency, insecure plugin/tool design, and multi-agent trust boundaries.
OWASP guidance on securing agentic AI systems — tool use, delegation chains, memory poisoning, and multi-agent architectures.
Survey of tool-using, retrieval-augmented, and reasoning language models. The architectural foundation for understanding agent capabilities and their security implications.
ToolEmu framework for evaluating agent risks in sandboxed environments. Defines 36 risk categories covering tool-use failures. Practical methodology for agent security testing questions.
Ready to test what you've learned?
Our questions are built directly from these resources. Take a quiz and see how your knowledge stacks up.