50 domains·5 layers·interactive map

The Cyber Domain Map

Cybersecurity is often presented as disconnected specialties: IAM, cloud, SOC, GRC, vulnerability management, resilience. But security failures rarely stay inside one box. Identity issues become cloud incidents. Asset gaps undermine vulnerability management. Weak architecture makes monitoring noisy. This map shows how the major domains of cybersecurity connect, depend on one another, and break down in practice.

It is a practitioner-built view of cybersecurity, AI security, and quantum-era cryptography as a connected system. Explore, filter, and replay real incidents to see how domains actually interact — and how weaknesses cascade across environments.

The SecProve Cyber Systems Model (CSM) builds on foundations like the NIST Cybersecurity Framework and the Cyber Defense Matrix, but reorganizes them around how modern defenders think and operate. It reflects a reality where identity drives access, data drives risk, and resilience determines mission success.

§ 01 · interactive map

How 50 domains fit together.

Hover any domain to see what it depends on and what it influences. Filter by pillar or career path. Play a scenario to watch an incident ripple across the map. Click any domain for the full brief.

Compact view — pinch to zoom for detail, or rotate your phone for the full-size map.
pillar
career
certification
incident replay
L01
Govern
Set direction, own risk, shape policy, govern AI/quantum programs, work with people and narrative.
L02
Control
Decide who or what can do what, enforce it cryptographically, constrain AI behaviour.
L03
Build & Run
Build and run the systems — apps, cloud, data, networks, OT, AI infra, supply chain, quantum engineering.
L04
Detect & Respond
Watch, hunt, attack ethically, analyse, and respond — classical and AI.
L05
Futures
The emerging stack reshaping cybersecurity from both directions — AI toolkit, AI attack surface, and the quantum transition.
focus prerequisite constrains enables operationally coupledhover any domain to reveal its connections
take it with you

Downloads & embed

The Cyber Systems Model is published under CC BY 4.0. Use the poster in class, cite the CSV / JSON in a paper, or drop the embed into a blog post — attribution back to secprove.com/domains is the only ask.

Embed on your blog or wiki
§ 02 · deep dive

Every domain, in depth.

Each domain has a brief covering what it is, why it matters, where teams go wrong, and what it connects to. Use the map above to navigate; the cards below are the reference.

LAYER 1

Govern & Direct

Set direction, own risk, shape policy, govern AI/quantum programs, work with people and narrative.

Cybersecurity decisions ultimately answer business questions: what are we protecting, what risk are we willing to carry, and who owns the answer when something fails. The govern layer is where those questions get answered — it is the layer that is blamed loudest when an incident exposes a gap nobody owned.

A1

Governance, Risk & Compliance

PA

Risk frameworks (NIST RMF, ISO 31000, FAIR), policy development, audit, regulatory compliance, third-party risk.

Why it matters: GRC sets the bar that every other domain has to clear. When a control gap matters to regulators or a board, this is the layer that owns the answer.

Common mistake: Treating GRC as paperwork. The frameworks (NIST CSF, ISO 27001, FAIR) only earn their keep when they shape investment decisions, not when they're filed for the audit.

A18

Security Leadership

PA

Cyber risk quantification, board communication, security program development, budget & ROI.

Why it matters: Security leadership is the discipline of translating threats into board-level decisions and budget into measurable risk reduction. Most programs that fail, fail here.

Common mistake: Reporting activity (alerts handled, patches applied) instead of risk (what changed, what didn't).

A20

Security Awareness & Human Factors

PA

Phishing simulation, security culture measurement, behavioral psychology, insider threat programs, social engineering defense training.

Why it matters: People are the largest attack surface and the strongest sensor. The 2023 MGM and Caesars vishing incidents both turned on a single help-desk decision.

Common mistake: Annual click-through training that ages out within months. Quarterly with topical refreshers actually changes behavior.

A22

Information Operations & Cognitive Security

PA

Influence operations, cognitive warfare, counter-intelligence, OSINT/SOCMINT, PSYOP/MISO, foreign malign influence, hack-and-leak operations, narrative warfare, DISARM framework.

Why it matters: Information operations are now part of the cyber threat surface — narrative attacks influence insider behavior, partner trust, and incident escalation.

C7

AI Governance & Risk

PC

EU AI Act compliance, NIST AI RMF, AI risk assessment, model cards, algorithmic auditing, AI incident response.

Why it matters: EU AI Act compliance, NIST AI RMF, model cards, algorithmic auditing. AI governance is moving from voluntary frameworks to enforced regulation faster than most legal teams expected.

D4

Quantum-Safe Compliance

PD

NSA CNSA 2.0, NIST FIPS 203/204/205, OMB M-23-02, ETSI QSC, quantum readiness.

Why it matters: NSA CNSA 2.0, OMB M-23-02, ETSI QSC. The regulatory push is setting timelines that most organizations are not yet planning against.

LAYER 2

Control Access & Trust

Decide who or what can do what, enforce it cryptographically, constrain AI behaviour.

The traditional network perimeter is gone. What's left is a trust fabric — identity, policy, cryptography, and the decisions that keep AI systems inside acceptable behaviour. Get the trust fabric wrong and every other layer inherits the weakness.

A6

Identity & Access Management

PA

AuthN/AuthZ, SSO, MFA, PAM, RBAC/ABAC, identity governance, FIDO2/passkeys, plus non-human identity: service accounts, workload identity, agent / plugin identities.

Why it matters: Identity is the most-used attack vector in modern incidents. Compromise here turns into compromise everywhere because every layer above trusts what IAM asserts.

Common mistake: Strong joiner processes, weak mover/leaver. Stale entitlements accumulate quietly and surface as overprivilege during incidents.

A3

Zero Trust Architecture

PA

Zero trust principles, micro-segmentation, NIST SP 800-207, ZTNA, continuous verification, BeyondCorp.

Why it matters: Zero Trust is the architectural answer to a perimeter that no longer exists. NIST SP 800-207 codified the model; CISA's Maturity Model gives you a phased adoption path.

Common mistake: Buying a product labeled 'Zero Trust' instead of executing the architectural shift. ZTA is a posture, not a SKU.

A15

Cryptography

PA

Symmetric/asymmetric, PKI, TLS/SSL, hashing, post-quantum cryptography, key management.

Why it matters: Cryptography underpins every secure communication, transaction, and identity assertion. The algorithms in production today are about to change — see the quantum layer.

Common mistake: Crypto choices baked into application code without an inventory or upgrade path. Crypto-agility starts with knowing what you have.

C8

AI Safety & Alignment

PC

Guardrails, content filtering bypass, model monitoring, drift detection, output control.

Why it matters: Guardrails, output monitoring, drift detection. The discipline of keeping increasingly capable systems doing what they're supposed to do — and catching when they don't.

C11

Agentic AI Security

PC

Agent architectures & threat surface, tool/action security, delegation & permission escalation, memory & context poisoning, multi-agent system security.

Why it matters: Agentic AI systems take actions in the world — tool use, delegation chains, persistent memory. The attack surface is permission escalation, memory poisoning, and inter-agent trust, and most organizations haven't threat-modeled any of it.

LAYER 3

Build, Connect & Operate

Build and run the systems — apps, cloud, data, networks, OT, AI infra, supply chain, quantum engineering.

This is the engineering layer — the applications, cloud footprint, data stores, networks, OT, mobile estate, supply chain, AI infrastructure, and the quantum-safe engineering that will keep all of it running into the next decade. Most modern compromises land here.

A2

Network Security

PA

Firewalls, IDS/IPS, network segmentation, DNS security, SD-WAN, VPN, traffic analysis, wireless security.

Why it matters: The network is still where lateral movement happens. Segmentation, traffic inspection, and DNS hygiene remain load-bearing controls even in cloud-first organizations.

Common mistake: Flat internal networks behind a hardened perimeter. The first compromised endpoint then has line-of-sight to everything.

A4

Application Security

PA

OWASP Top 10, secure SDLC, SAST/DAST/IAST, API security, code review, DevSecOps.

Why it matters: Applications are where data lives and business logic runs. The OWASP Top 10 hasn't moved much in years because the same classes of bugs keep landing in new code.

Common mistake: Treating AppSec as a gate at the end of the SDLC. By then, the cheap fixes are gone.

A5

Cloud Security

PA

AWS/Azure/GCP security controls, IAM policies, CSPM, container security, shared responsibility model.

Why it matters: Cloud is now the default deployment target. Misconfigurations — not novel exploits — remain the leading cause of cloud breaches because the shared-responsibility line is easy to misread.

Common mistake: Defaulting to network thinking in cloud. Most compromises start with identity and policy, not packets.

A12

Data Security, Privacy & Protection

PA

Data classification, encryption-at-rest/in-transit, DLP, tokenization, privacy-by-design, plus the regulatory stack (GDPR, CCPA, HIPAA) that sets the bar.

Why it matters: Privacy and data protection determine the fines, lawsuits, and customer trust outcomes when something goes wrong. GDPR, CCPA, and HIPAA aren't going to relax.

Common mistake: Classifying data once and never revisiting it as the product changes.

A13

Supply Chain Security

PA

SBOM, vendor risk assessment, software supply chain attacks, dependency management.

Why it matters: SolarWinds, Log4Shell, and the MOVEit chain proved that attackers target the software and vendors you trust. Most of your CVE volume lives in transitive dependencies you didn't write.

Common mistake: Maintaining an SBOM nobody uses for prioritization. Without reachability and asset context, an SBOM is just an inventory.

A14

OT/ICS Security

PA

SCADA, PLC security, Purdue model, ICS-specific threats, IT/OT convergence, IEC 62443.

Why it matters: OT/ICS security protects the systems that move physical things — power, water, manufacturing. A compromise here can cause physical damage and endanger lives, which changes the calculus of every defense decision.

Common mistake: Importing IT security playbooks into OT without accounting for safety, availability, and protocol fragility.

A16

Mobile & IoT Security

PA

MDM, mobile app vulnerabilities, IoT protocols, firmware analysis, embedded systems security.

Why it matters: Mobile and IoT devices are the largest unmanaged attack surface in most organizations. Firmware that rarely updates, protocols not built for hostile networks.

Common mistake: Treating IoT as a procurement problem instead of a network and lifecycle problem.

A17

Cyber-Electronic Warfare

PA

Converged cyber and EW, spectrum security, GPS/GNSS spoofing, RF attacks, EMP hardening.

Why it matters: Cyber-electronic warfare is where digital attacks meet RF, GPS spoofing, and spectrum operations. Increasingly relevant for critical infrastructure and any organization with field operations.

C3

AI Supply Chain Security

PC

Model provenance, dataset poisoning, Hugging Face risks, ML library vulnerabilities, trojanized models.

Why it matters: Models you didn't train and datasets you didn't curate are now in your supply chain. Backdoored models on Hugging Face, poisoned datasets, vulnerable ML libraries — same supply-chain problem, new substrate.

C4

AI Data Security

PC

Training data poisoning, PII leakage from models, differential privacy, federated learning security.

Why it matters: Models memorize. Training data extraction is now a real attack. Differential privacy and federated learning are the durable answers, not after-the-fact filtering.

C6

AI Infrastructure Security

PC

GPU cluster security, ML pipeline security, model serving endpoints, secrets management in ML.

Why it matters: The platforms that train and serve AI have unique security needs — multi-tenant GPU isolation, pipeline integrity, secrets in ML workflows. Traditional cloud security misses most of this.

D5

Quantum Networking & Communication

PD

Quantum Key Distribution, QKD limitations, QRNG, deployed quantum networks.

Why it matters: Quantum Key Distribution and QRNG offer physics-based security guarantees, with practical limitations on cost, distance, and integration. Useful for specific use cases, not a universal answer.

D6

Quantum Security Engineering

PD

Quantum computer security, side-channels, quantum ML security, quantum-safe architecture.

Why it matters: Quantum security engineering is the operational work — inventorying cryptographic dependencies, designing crypto-agile architectures, and executing migration without breaking production.

A25

Security Architecture & Engineering

PA

Reference architectures, control frameworks (NIST SP 800-53, CIS Controls), secure-by-design patterns, threat modeling, trust-boundary design, technology standards.

Why it matters: Architecture is where identity, crypto, network, cloud, and data primitives get composed into a defensible whole. CISSP, CCSP, zero-trust literature, and cloud reference architectures all treat architecture as its own specialty — not an implicit sub-skill of AppSec or Cloud.

Common mistake: Mistaking a vendor stack for an architecture. Tools implement patterns; architecture decides which patterns to implement.

LAYER 4

Detect, Test & Respond

Watch, hunt, attack ethically, analyse, and respond — classical and AI.

Prevention is incomplete; this is the assume-failure layer. Here SOC analysts work, threat intel meets operational reality, detection engineers craft rules, pentesters and red teams probe, malware analysts reverse engines of compromise, deception traps trip, and incident responders contain what's already inside.

A7

Incident Response & Forensics

PA

IR playbooks, memory/disk/network forensics, chain of custody, malware analysis.

Why it matters: When prevention fails, response speed and discipline determine impact. The quality of your runbooks shows up in your dwell time and your post-incident report.

Common mistake: Tabletops that everyone passes. If nobody's surprised, the scenario was too easy.

A8

Threat Intelligence

PA

CTI lifecycle, MITRE ATT&CK, IOCs/TTPs, threat modeling (STRIDE, PASTA), STIX/TAXII.

Why it matters: Threat intelligence transforms raw observations about adversaries into decisions: what to detect, what to patch first, who to brief. Without it, every other detection is generic.

Common mistake: Subscribing to feeds you don't operationalize. Intel is only useful when it changes a control or a priority.

A9

Penetration Testing & Red Teaming

PA

Methodology (OSSTMM, PTES), web/network/mobile pentesting, social engineering, purple teaming.

Why it matters: Pen testing and red teaming are the most honest assessment of whether controls actually work under pressure. Everything else is theory until someone tries to break it.

Common mistake: Scoping engagements to confirm what the security team already believes, instead of probing the assumptions nobody wants tested.

A10

Security Operations

PA

SOC operations, SIEM tuning, SOAR playbooks, alert triage, log analysis, runbook development.

Why it matters: The 24/7 nerve center of cyber defense. SOC throughput sets the ceiling on how fast the rest of the program can react to anything that gets past prevention.

Common mistake: Optimizing for alert volume instead of investigation quality. A SOC drowning in low-signal alerts is the same as no SOC.

A11

Detection Engineering & Threat Hunting

PA

SIGMA/YARA/Suricata rule writing, hypothesis-driven hunting, log deep-dives, detection gap analysis.

Why it matters: Detections are the upstream input to every downstream response. Better detections produce better automation; bad detections produce automated noise.

Common mistake: Buying detection content from vendors without testing it against your environment's actual telemetry.

A19

Cyber Deception & Active Defense

PA

Honeypots, honeytokens, canary tokens, deception platforms, moving target defense, MITRE D3FEND, adversary engagement.

Why it matters: Cyber deception flips the asymmetry. Any interaction with a canary token or honeypot is a guaranteed alert with high fidelity — exactly what's missing from most SOC queues.

Common mistake: Deploying canaries without a clear ownership model for what happens when they trip.

A21

Malware Analysis & Reverse Engineering

PA

Static/dynamic analysis, sandbox analysis, assembly/disassembly, packer analysis, YARA rules, malware family classification.

Why it matters: Malware analysis turns adversary capabilities into IOCs, detection content, and attribution. A specialized skill that powers threat intel and detection engineering.

C5

AI Red Teaming

PC

AI system threat modeling, red teaming methodology for LLMs (OWASP Top 10 for LLMs), automated red teaming tools, evaluation frameworks.

Why it matters: Red teaming AI systems is how you find failure modes before launch. The OWASP Top 10 for LLMs gives you a starting taxonomy; structured evaluation gives you defensible coverage.

A23

Recovery, Resilience & Cyber Recovery

PA

Backup integrity, immutable snapshots, cyber-recovery vaults, restore orchestration, BCM/DR, tabletop exercises, ransom-scenario restoration drills.

Why it matters: CSF 2.0 keeps Recover as its own function because restoring compromised environments without re-inheriting the compromise is a distinct design problem. Immutable backups, integrity verification, and rehearsed cutovers are the operational core of ransomware resilience.

Common mistake: Backups that pass restore tests but have never been exercised against a compromised source. 'Our backups work' and 'we can recover from ransomware' are different claims.

A24

Exposure Management & Attack Surface

PA

External attack-surface management (EASM), cyber asset attack-surface management (CAASM), continuous threat exposure management (CTEM), attack-path analysis, validation, and remediation orchestration.

Why it matters: Exposure management has separated from classic vulnerability scanning as practitioners realized that asset visibility, attack-path context, and validation matter more than CVE counts. CAASM/EASM/CTEM are now distinct practice areas with their own tooling and workflows.

Common mistake: Counting findings instead of counting reachable-and-exploitable findings on critical assets. The metric that matters is what an attacker could actually do next.

LAYER 5

AI & Quantum Futures

The emerging stack reshaping cybersecurity from both directions — AI toolkit, AI attack surface, and the quantum transition.

AI and quantum aren't replacing the rest of the stack — they're reshaping it. This layer covers the AI-augmented defender toolkit (Pillar B), new AI-target attack surfaces (Pillar C), and the quantum-crypto transition (Pillar D). Filter by pillar to see each dimension in isolation.

B1

AI-Powered Threat Detection

PB

ML-based anomaly detection, UEBA, network traffic analysis, deep learning for malware.

Why it matters: ML-based detection finds threats that signature-based tools miss — anomalies in massive data volumes, novel malware variants, behavioral patterns at scale.

Common mistake: Deploying ML detection without a feedback loop. Models drift; without analyst-correction signals flowing back, accuracy degrades quietly.

B2

AI-Driven Security Automation

PB

SOAR + AI, automated triage, AI copilots for analysts, automated incident response.

Why it matters: AI-driven automation is how SOCs match the tempo of AI-augmented adversaries. Trust calibration and autonomy-tier policy decide whether the automation helps or just hides noise.

Common mistake: Letting the platform auto-block customer-facing infrastructure during low-confidence alerts. Match autonomy to business reversibility, not just technical confidence.

B3

AI for Vulnerability Management

PB

AI-assisted code review, predictive vulnerability prioritization (EPSS), automated patch assessment.

Why it matters: AI changed both ends of vulnerability management — discovery (Mythos, Codex Security) and prioritization (EPSS, KEV, reachability). The window between 'vulnerability exists' and 'being exploited' is now hours, not weeks.

Common mistake: Patching by CVE count. The metric that matters is exploitable-and-reachable-on-a-critical-asset.

B4

AI in Offensive Security

PB

AI-assisted pentesting, automated recon, AI-generated phishing/social engineering, deepfake attacks.

Why it matters: Defenders can't model threats they don't believe exist. MGM, Arup, Retool, and the Mythos restricted release are all calibration data for the cost curve of AI-augmented attacks.

Common mistake: Assuming the lure will look obviously fake. Modern AI-generated phishing reads like competent corporate writing.

B5

AI for Threat Intelligence

PB

NLP for threat reports, automated IOC extraction, AI-generated threat briefs, predictive modeling.

Why it matters: Threat intel volume far exceeds human processing capacity. NLP, knowledge graphs, and LLM summarization turn the flood into actionable briefs without scaling the team linearly.

B6

AI for GRC & Compliance

PB

AI-assisted audit, automated policy mapping, AI-driven risk scoring, compliance monitoring.

Why it matters: Continuous compliance is finally tractable when an LLM can map controls, summarize evidence, and flag drift. GRC stops being a quarterly photo and starts being a live signal.

B7

AI Security Tool Landscape

PB

AI-powered security tools — evaluation criteria, integration patterns, and comparative analysis.

Why it matters: The AI-security tool market is growing faster than buyers can evaluate it. This domain is the literacy layer — knowing what's real capability vs. AI-washing.

B8

Prompt Engineering for Security

PB

Using LLMs for log analysis, writing detection rules with AI assistance, AI-assisted OSINT, prompt design for security workflows.

Why it matters: Prompt engineering is now a working skill for analysts. Quality of LLM-driven log analysis, detection drafting, and OSINT depends almost entirely on the prompt and the data layer below it.

C1

Adversarial Machine Learning

PC

Evasion attacks, poisoning attacks, model extraction, membership inference, model inversion, gradient-based attacks.

Why it matters: Adversarial ML is the foundation discipline for understanding how attackers fool, steal, or corrupt the models you ship. Evasion, poisoning, extraction, inversion — distinct failure modes, each with its own defense.

C2

LLM-Specific Attacks

PC

Prompt injection (direct & indirect), jailbreaking, prompt leaking, training data extraction, hallucination exploitation, agent manipulation.

Why it matters: Prompt injection, jailbreaking, training-data extraction — LLMs introduced an entire class of vulnerabilities that traditional AppSec wasn't designed to handle.

Common mistake: Trusting the system prompt to enforce policy. Indirect prompt injection through any untrusted text is the bypass.

C9

Deepfakes & Synthetic Media

PC

Deepfake detection, synthetic voice/video attacks, identity verification bypass, C2PA standards.

Why it matters: Deepfake fraud is now a reproducible attack pattern with a track record (Arup, Retool). Provenance standards like C2PA help where present; process changes carry the rest.

C10

AI-Enabled Disinformation

PC

Bot networks, AI-generated propaganda, influence operations, detection methods.

Why it matters: AI-generated propaganda at scale is a cybersecurity problem when narratives drive insider behavior, partner trust, or board response.

D1

Quantum Computing Fundamentals

PD

Qubits, superposition, entanglement, Shor's algorithm, Grover's algorithm, cryptographic impact.

Why it matters: Understanding qubits, superposition, and Shor's algorithm well enough to brief leadership and defend the migration timeline against 'quantum is decades away' pushback.

D2

Post-Quantum Cryptography

PD

NIST PQC standards (ML-KEM, ML-DSA, SLH-DSA), crypto agility, PQC migration planning.

Why it matters: NIST has standardized post-quantum algorithms (ML-KEM, ML-DSA, SLH-DSA — FIPS 203/204/205). Migration is the practical work: inventory, prioritization, and crypto-agility.

D3

Quantum Threats to Existing Systems

PD

Harvest Now Decrypt Later, PKI impact, protocol vulnerabilities, critical infrastructure risk.

Why it matters: Harvest-now-decrypt-later means adversaries are collecting encrypted data today to decrypt once quantum matures. Long-lived secrets are vulnerable now, not 'when quantum arrives.'

§ 03 · hands-on

Where practitioners actually practice.

Certs prove you can pass exams; labs prove you can do the work. These are the platforms that show up on every "how do I actually learn this" Reddit thread — with the specific tracks, paths, and challenges worth your time. Tracks are tagged to the domains they exercise.

TryHackMefreemium
TryHackMeFoundationsBlue teamRed teamWeb / AppSec

Free tier + $14/mo VIP for most real content

Structured, guided rooms for people new to security — the most-recommended starting platform.

Gentle learning curve with hand-holdy walkthroughs; best for people who've never opened a terminal.

Hack The Boxfreemium
Hack The BoxRed teamWeb / AppSecFoundations

Free community boxes; $14-20/mo for VIP / Academy modules

Closer-to-real offensive work than THM; the ramp from CTF to actual pentest engagements.

CPTS / CBBH paths are cheaper OSCP-adjacent preparation.

PortSwiggerWeb / AppSec

Best-in-class, fully free web-vuln training from the makers of Burp.

Lab-by-lab coverage of every OWASP-class bug; the default answer to 'how do I learn AppSec.'

Tracks worth doing
TCM SecurityRed teamWeb / AppSecCloud

$30/mo all-access or pay-per-course

Practical offensive courses — PEH, OSINT, mobile, AD, external — priced for self-learners.

Feeds the PNPT / PJMR / PSOC certs; respected by hiring managers who've seen OSCP burnout.

CommunityFoundationsRed teamCrypto

Progressively harder puzzles — Linux, networking, crypto, web — run in public servers.

Zero setup, infinite replay value; Bandit is the classic first-Linux-challenge.

Rhino Security LabsCloudRed team

Free tool, but spins up real AWS (cost-per-scenario on your account)

Deploys deliberately-vulnerable AWS scenarios into your own account.

Hands-on IAM privilege-escalation paths — read AWS audit logs afterward to study detection.

Tracks worth doing

How Cybersecurity Domains Connect in Practice

The seams between layers are where the most consequential failures happen. These are the patterns that show up across post-mortems, regardless of industry or vendor stack.

A phishing attack is not just an email problem

A successful phish crosses Govern (the user's training and the help-desk policy that accepted the verification), Control (the credential reset that followed), Detect & Respond (the SOC alert that did or didn't catch the lateral movement), and the AI toolkit (the auto-triage that summarized the incident). Treating it as one control failure misses the system.

Cloud misconfiguration is almost always an identity & governance problem

The exposed S3 bucket isn't a cloud bug — it's a role design that went unreviewed, an ownership question nobody answered, and a policy nobody enforced. Cloud Security amplifies whatever's true in Govern and Control above it.

Vulnerability management without asset visibility is incomplete

EPSS, KEV, and reachability all assume you know what you have. Stale or fragmented inventories degrade prioritization quietly — and AI-driven scanners (Mythos, Codex Security) make that gap more dangerous, not less, because they generate validated findings on assets you didn't know existed.

Detection without response playbooks creates alert theater

Monitoring has limited value if teams can't triage quickly, assign ownership, and act decisively. SOC automation amplifies whatever's underneath — bad detections plus aggressive automation produce faster, more confident wrongness.

Securing the LLM you ship is not the same problem as using LLMs to defend yourself

AI in Security (Pillar B) is about how defenders use AI; Securing AI (Pillar C) is about defending AI systems. They're easy to conflate because the words overlap, but the disciplines, tools, and threat models are different. Treating them as one is the most common AI-security mistake leadership makes today.

A 2025 patch cycle isn't built for a 2026 attacker

Mythos-class autonomous discovery and AI-augmented n-day exploitation have compressed the window between vulnerability disclosure and active exploitation. The defensive answer isn't faster manual patching — it's reachability-aware prioritization, validated AI patches in PRs, and explicit acceptance of risk on the long tail.

Where Cyber Programs Commonly Break Down

Most security programs that fail don't fail because the right tool wasn't available. They fail because of a few patterns that recur across organizations of every size.

Strong tools, weak architecture

Investment lands on point products instead of how they fit together. The same alert fires in three places, the same data lives in five, and nobody can answer 'who owns this control' without a meeting.

Good visibility, poor ownership

The SIEM sees everything; nobody is accountable for acting on it. The classic tell is a quarterly metrics review where every red square has a different reason it didn't get addressed.

Good policies, weak enforcement

The org can describe what should happen, but enforcement is inconsistent across business units, cloud accounts, and operational teams. Audit-shaped maturity, exploit-shaped risk.

Alerting without prioritization

Detection content fires at the same severity regardless of what asset it touches or what the threat intel says. SOC drowns; analysts develop selective deafness; real incidents land in the noise.

AI deployed without trust calibration

An LLM verdict on every alert sounds modern and produces faster wrongness. Without an autonomy-tier policy and a working analyst feedback loop, AI in the SOC degrades trust instead of building it.

Crypto-agility deferred until it's too late

Crypto choices baked into application code without an inventory or upgrade path. When NIST PQC migration becomes urgent — and it will — the organizations that didn't start now will discover that 'replace RSA' is a multi-year program.

Security teams organized in silos

AppSec, cloud, IAM, SOC, and GRC each operate in their own narrative. The most consequential threats live at the seams between them — and the seams are where nobody is staffed.

Securing-AI work assigned to the AI-using team

The team building LLM features rarely has the threat modeling or red-teaming background to defend the systems they're shipping. Pillar C work needs its own ownership, not a side-of-desk for the ML team.

Missing recovery ownership

Backups exist; restore runbooks exist; but nobody owns the end-to-end 'compromised environment → clean restore → cutover' path as a rehearsed capability. The first time it's exercised for real, archival crypto / infra changes / legacy keys turn a tabletop into a week-long firefight.

Who This Map Is For

The map is useful in different ways depending on where you sit. Pick the role closest to yours.

Security leaders aligning teams and investments

Use this map to identify where your program is well-staffed, where it's silo'd, and where the most consequential gaps live at the seams between layers — especially across the Pillar B / Pillar C boundary inside the Futures layer.

Engineers trying to understand adjacent functions

If you work in one layer (AppSec, cloud, IAM, detection), this map shows you what depends on you and what you depend on. Useful when an incident or design review pulls you out of your usual scope.

Practitioners moving into architecture or strategy

The transition from operator to architect requires seeing cybersecurity as a connected system rather than a stack of certifications. Use this map to build the mental model, then drill into the domain pages.

Learners who want a system-level view

Cybersecurity certifications are organized for exam logistics. This map is organized for how the work actually fits together — a better starting point if you want to understand the field, not just credential-shop it.

Make cybersecurity legible.

SecProve helps make cybersecurity legible — not by reducing it to buzzwords, but by showing how the work actually fits together. Each domain on this map links to a deeper page with practitioner-grade content.