The cybersecurity skills gap isn’t getting better. Industry reports put the global shortfall at several million open roles, and the practitioners we do have are routinely under-trained for the work actually in front of them. “More training” is the obvious response — but if you look closely at what training typically means for a working security professional, the design itself is the problem.
The shape of what practitioners actually need is shifting fast, too. AI assistants are steadily absorbing the tactical layer of security work — generating detections, translating queries, proposing remediations, writing the commands. Remembering a specific flag, payload, or syntax matters less every quarter. What matters more is reading a situation correctly, spotting when a confident-sounding model is wrong, and making judgment calls under ambiguity. The skill ceiling is moving up the stack — toward concepts, context, and decision-making — and most training hasn’t moved with it.
Most cybersecurity learning still falls into one of two modes. Neither is good at building durable expertise, and neither builds the conceptual fluency this shift demands. Plot the common platforms on a simple 2×2 and the shape of the problem shows up immediately:
[Figure: Cybersecurity learning mapped on a 2×2 of frequency and engagement, showing where common platforms sit.]
Mode 1: passive consumption
Courses. Conference talks. Newsletters. Blog posts. This is where most of us spend our learning budget because it’s cheap and it scales. The trouble is that reading about an attack technique and being able to recognize, reason through, or defend against one are very different things.
This isn’t a hot take — it’s settled learning science. In a comprehensive review of ten common study techniques, Dunlosky and colleagues rated rereading, highlighting, and summarization as “low utility” — the modalities students rely on most.[1] Practice testing and spaced retrieval, by contrast, earned the top “high utility” rating. Karpicke and Blunt showed in Science that retrieval practice produced substantially better long-term learning than even elaborative concept mapping.[2]
Put bluntly: if your training plan is a stack of courses and a feed reader, you’re investing in the modalities research has consistently shown produce the weakest retention.
Mode 2: intensive labs
Labs and CTFs are the other half of the training economy, and they’re the opposite extreme. A good lab environment is invaluable — there is no substitute for the muscle memory of actually exploiting a misconfiguration, triaging an alert, or hardening a pipeline. But the economics and the cognitive demand make labs a poor daily habit:
- A meaningful session is one to four hours.
- Subscription stacks run $30–$100+ per month, often per vendor.
- Each lab is narrow. You’d need dozens to touch the breadth of a modern security program.
- They demand bursty, high-intensity recall — great for depth, poor for building coverage across the forty-plus sub-disciplines a generalist needs to keep warm.
So working practitioners end up cramming labs in the weeks before a cert, on weekends, or during conference sprints. Between those bursts, the knowledge quietly decays. The spacing effect — one of the most replicated findings in cognitive psychology — says distributed practice dramatically outperforms massed practice for long-term retention.[3] Bursty labs are, by design, massed practice.
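To make the distinction concrete, here is a minimal sketch of the distributed-practice alternative: a spaced-repetition scheduler that stretches the gap after each successful recall and snaps it back after a miss. The grading scale, ease factor, and interval rules are illustrative assumptions loosely in the spirit of SM-2-style scheduling, not any particular product's algorithm.

```python
from datetime import date, timedelta

# Illustrative sketch of distributed practice: a simplified SM-2-style
# scheduler. The ease factor and interval rules are assumptions, not a
# specific product's algorithm.

def next_review(interval_days: float, ease: float, grade: int) -> tuple[float, float]:
    """Return (new_interval_days, new_ease) after a recall attempt.

    grade: 0-5 self-assessed recall quality (>= 3 counts as a success).
    """
    if grade < 3:
        # Failed recall: restart the spacing ladder and make the item "harder".
        return 1.0, max(1.3, ease - 0.2)
    # Successful recall: stretch the gap before the next attempt.
    new_ease = max(1.3, ease + 0.1 - (5 - grade) * 0.08)
    new_interval = 1.0 if interval_days < 1 else interval_days * new_ease
    return new_interval, new_ease

# A card recalled successfully on spaced days drifts out toward weeks,
# while a missed card snaps back to tomorrow.
interval, ease = 0.0, 2.5
for grade in [4, 5, 3, 5]:
    interval, ease = next_review(interval, ease, grade)
    due = date.today() + timedelta(days=round(interval))
    print(f"next review in {interval:.0f} day(s), due {due}")
```

Run over a handful of successful recalls, the interval drifts from a day out to a few weeks; that spread of small, repeated touches is exactly the distribution that a burst of lab weekends never produces.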
The AI wrinkle makes this worse, not better
Some argue AI assistants remove the need for any of this. You don’t need to remember every nmap flag or Suricata rule syntax — the tool will surface it. That’s correct, as far as it goes. The lower-level recall that certifications and flashcards have historically optimized for is less load-bearing now.
But that shifts the bar, it doesn’t remove it. Copilots don’t make judgment calls. They don’t tell you which of the seventeen plausible suggestions is appropriate for your threat model, your compliance regime, or your architecture. A practitioner who can’t distinguish a plausible-looking answer from a correct one is more dangerous in an AI-augmented workflow, not less. Research on the hypercorrection effect — the finding that high-confidence errors get corrected faster than low-confidence ones[4] — is a useful lens here. Most AI-assisted mistakes are high-confidence mistakes. You can’t catch them without a mental model the tool isn’t carrying for you.
So the skill that matters is moving up the stack: conceptual fluency, pattern recognition, and the ability to rapidly frame what you’re looking at. That is exactly the kind of knowledge retrieval practice is best at building — and exactly the kind passive reading and bursty labs are worst at sustaining.
The missing middle
What the field is missing is the thing every other competitive domain already has: a lightweight daily practice loop that keeps a broad skill surface warm.
Chess players have tactics trainers. Musicians have scales and sight-reading. Doctors have board-review question banks. Software engineers have LeetCode. Each is a low-friction, high-frequency retrieval habit that sits alongside the real work. Nobody claims a chess player becomes a grandmaster by only doing puzzles — but nobody claims they get there without them either.
Cybersecurity, oddly, has never had this layer. We have a deep-end (labs, CTFs, on-the-job reps) and a shallow-end (courses, newsletters). The middle — five to fifteen minutes of deliberate retrieval per day, across a wide skill surface, with rating and explanatory feedback — basically doesn’t exist as a normalized habit.
A systematic review of cybersecurity training by Caulfield and colleagues reached a related conclusion: interactive, scenario-based assessment consistently outperforms passive methods, and the field’s certification-heavy approach fails to track ongoing competence.[5] One-time exams verify you knew something on a Tuesday in March. They say nothing about whether you still know it, or whether you’ve kept pace with a threat landscape that rewrites itself every six months.
What this points to
The practitioners worth betting on five years from now aren’t the ones spending the most hours in courses or the most weekends in labs. They’re the ones who’ve made daily retrieval a habit: small, consistent, broad, with honest feedback. They use AI assistants to move fast — and they’ve built the conceptual scaffolding that lets them notice when the assistant is wrong.
The field doesn’t need more content. It needs better training design. Practice testing, spaced repetition, explanatory feedback, and skill-based rating aren’t novel ideas — they’re decades-old findings from mainstream education research that cybersecurity training has mostly ignored. Importing them isn’t a product pitch. It’s overdue.
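As a rough illustration of what "skill-based rating" can mean here, the sketch below applies an Elo-style update in which each practice question carries a difficulty rating and each practitioner a skill rating. The 400-point scale and K-factor are assumptions borrowed from chess Elo, and the logistic expected-score curve has the same shape as a one-parameter IRT item characteristic curve.

```python
# Illustrative Elo-style skill rating for practice questions. The
# 400-point scale and K-factor are assumptions borrowed from chess Elo;
# the expected-score curve has the same logistic shape as a 1PL IRT model.

def expected_score(skill: float, difficulty: float) -> float:
    """Estimated probability the practitioner answers the item correctly."""
    return 1.0 / (1.0 + 10 ** ((difficulty - skill) / 400.0))

def update(skill: float, difficulty: float, correct: bool, k: float = 24.0) -> tuple[float, float]:
    """Nudge the practitioner's rating and the item's difficulty toward the outcome."""
    p = expected_score(skill, difficulty)
    delta = k * ((1.0 if correct else 0.0) - p)
    return skill + delta, difficulty - delta

skill, item = 1500.0, 1650.0                            # practitioner vs. a hard question
print(f"expected: {expected_score(skill, item):.2f}")   # ~0.30
skill, item = update(skill, item, correct=True)
print(f"after a correct answer: skill={skill:.0f}, item difficulty={item:.0f}")
```

The appeal of a design like this is that item difficulties calibrate themselves from answer data, so a short daily drill can keep serving questions near the edge of what a practitioner can reliably answer instead of re-testing what they already know.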
For a fuller treatment of the learning-science literature behind this argument — including references on feedback, Elo-style skill rating, and Item Response Theory — see the research notes.
References
- Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J., & Willingham, D.T. (2013). Improving Students’ Learning With Effective Learning Techniques. Psychological Science in the Public Interest, 14(1), 4–58.
- Karpicke, J.D., & Blunt, J.R. (2011). Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping. Science, 331(6018), 772–775.
- Cepeda, N.J., Pashler, H., Vul, E., Wixted, J.T., & Rohrer, D. (2006). Distributed Practice in Verbal Recall Tasks: A Review and Quantitative Synthesis. Psychological Bulletin, 132(3), 354–380.
- Butterfield, B., & Metcalfe, J. (2001). Errors Committed with High Confidence Are Hypercorrected. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(6), 1491–1494.
- Caulfield, T., et al. (2023). A Systematic Review of Cybersecurity Training and Education. ACM Computing Surveys.