OSAI
OffSec AI Security Practitioner
Offensive AI security — adversarial ML, LLM attacks, agent abuse.
› Quality score
Four-axis SecProve rubric, each 0–10. SecProve editorial assessment — each axis carries a written justification so you can push back on any single call without dismissing the whole score.
› Built for these roles
› Exam format
Hands-on lab exam against a simulated AI/ML deployment — adversarial perturbations, prompt-injection chains, agent abuse. Online proctored. New credential as of 2025.
Retake voucher $249 separately. No wait period beyond exam scheduling availability.
› Recertification
90 OffSec CE credits over the three-year cycle (avg 30/yr). No annual maintenance fee.
› NICE Framework work roles
The NIST NICE work-role IDs this cert maps to. NICCS lookup.
› Core domains covered
The 4 domains this cert is centrally about. Passing the exam demonstrates working knowledge of each.
Evasion attacks, poisoning attacks, model extraction, membership inference, model inversion, gradient-based attacks.
Prompt injection (direct & indirect), jailbreaking, prompt leaking, training data extraction, hallucination exploitation, agent manipulation.
AI system threat modeling, red teaming methodology for LLMs (OWASP Top 10 for LLMs), automated red teaming tools, evaluation frameworks.
Agent architectures & threat surface, tool/action security, delegation & permission escalation, memory & context poisoning, multi-agent system security.
› Also touched
Present in the blueprint but not the primary focus — you’ll be introduced but shouldn’t expect depth.
› Prerequisites
Strong offensive security background (OSCP-level). Python fluency required; ML familiarity strongly recommended.
- Adversarial ML attack taxonomy
- LLM prompt injection and jailbreak techniques
- Agent abuse and tool-use exploitation
› Progression
requiredrecommendedWhere this cert fits in the typical learning path. Required edges are vendor-gated; recommended edges reflect de facto industry progression.
No vendor-gated prereqs.
No certs require this one.
No follow-on certs reference this one yet.
› Study materials
Curated starting points. Not exhaustive — vet each against your learning style and the current exam version.
› Version & lifecycle
Newest OffSec credential focused on AI/ML attack tradecraft.
› Salary signal
AI red team / offensive AI engineer, US, 4-6 years. New role category.
Robert Half Salary Guide extrapolation · 2024 · US base only · p25–p75 range
› How it compares
OffSec OSAI is hands-on AI attack tradecraft; GASAE is automation-engineering focused.
↔ Compare side-by-side› Careers that commonly pursue this cert
Secure AI/ML systems from adversarial attacks, data poisoning, and model compromise. The fastest-growing specialization in cybersecurity.
Secures the platform that trains, stores, and serves ML models — multi-tenant GPU isolation, pipeline integrity, feature-store hygiene, secrets management in ML workflows.
See this cert’s domains highlighted on the interactive map, or compare it against the rest of the catalog.