Every cybersecurity textbook teaches the Cyber Kill Chain as a seven-stage linear sequence: reconnaissance, weaponization, delivery, exploitation, installation, command and control, actions on objectives. Every certification exam tests it that way. Most of us were graded on memorizing the seven stages, in order, the first time we encountered the framework.

That isn’t how practitioners use it.

In a real incident response, nobody steps through the chain in order. The chain isn’t a checklist for the attacker, and it isn’t a triage workflow for the defender. It’s a narrative scaffold — a way to explain what happened to someone who wasn’t in the room, and a way to find the gaps in your detection and response coverage. The textbook framing teaches the wrong skill, and the mismatch shows up the first time a junior analyst tries to map a live intrusion to seven clean stages and finds that most of them don’t fit.

The chain is a story, not a sequence. This article is about what that means and what to do with it.

What the textbooks teach

The Cyber Kill Chain was published in 2011 by Eric Hutchins, Michael Cloppert, and Rohan Amin at Lockheed Martin, in a paper titled Intelligence-Driven Computer Network Defense Informed by Analysis of Adversarial Campaigns and Intrusion Kill Chains.[1] The framework was conceived against a specific class of threat: nation-state APT campaigns hitting defense industrial-base targets, where the attacker did reconnaissance, weaponized custom malware, delivered it via spear-phishing, exploited a software vulnerability, installed persistence, established command and control, and pursued objectives over a sustained dwell time. For that kind of intrusion in 2011, the seven stages were a real description of the work.

More than a decade later, the chain is on every cert blueprint in the industry. CEH tests it. Security+ tests it. CISSP references it. SANS courses build modules around it. The pedagogy is consistent: memorize the seven stages, learn that “you can break the chain at any stage,” and treat each stage as a discrete defensive opportunity to disrupt the attacker’s plan.

The teaching is internally consistent. It’s also a poor description of how anyone uses the framework once they leave the classroom.

What practitioners actually use it for

Walk into a SOC, an IR retainer, or a CISO’s post-incident debrief and the kill chain shows up — but not as a triage rubric. It shows up as a structuring device for three specific problems.

1. Story scaffolding for executive briefings. When a breach happens, someone has to explain it to a CEO, a board, a regulator, or a press team. Those audiences cannot follow a packet capture or a raw alert timeline. They can follow a beat structure: “they got in here, they did this, they pivoted to that, they took this.” The kill chain provides that beat structure in a way the audience has, however dimly, heard of before. It is the same reason heist movies have a planning act, an entry act, and a getaway act — not because heists actually unfold that cleanly, but because the audience needs scaffolding to track what happened.

2. Detection coverage gap analysis. Detection engineers don’t use the chain to triage alerts. They use it backwards: at the program level, plot every active detection rule against the stage it’s designed to catch, and the stages with no rules light up. That’s a coverage map. It tells the team whether their detection program is tilted toward the back of the chain (post-compromise activity, where most SIEM rules cluster) or the front (recon, weaponization — usually thinly covered). The chain isn’t the map. The chain is the legend that makes the map readable.

3. Post-mortem narrative arc. After an incident, the team has to write the timeline. The kill chain organizes the timeline into stages so the post-mortem reader can find the answer to the question that actually matters: what was the earliest stage at which we could have caught this, and why didn’t we? That answer drives the remediation backlog — sensor gap, tuning gap, process gap, all of which look different. Without the chain, the post-mortem becomes a chronological recital that’s hard to learn from.

Notice the common thread: in all three uses, the chain is a tool for the defender’s communication and analysis, not a model of the attacker’s plan. That distinction is everything.
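The coverage-map idea in use 2 is simple enough to sketch. Here is a minimal Python version, assuming a hypothetical rule inventory in which each active detection rule is tagged with the stage it targets (the rule names and tags are invented for illustration, not taken from any real SIEM):

```python
from collections import Counter

KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objectives",
]

# Hypothetical rule inventory: each active detection rule tagged with
# the kill-chain stage it is designed to catch.
rules = [
    {"name": "beacon-jitter-anomaly",   "stage": "command-and-control"},
    {"name": "lsass-credential-dump",   "stage": "actions-on-objectives"},
    {"name": "mass-file-rename",        "stage": "actions-on-objectives"},
    {"name": "new-service-persistence", "stage": "installation"},
    {"name": "macro-spawns-shell",      "stage": "exploitation"},
]

def coverage_map(rules):
    """Count rules per stage; stages with zero rules are the gaps."""
    counts = Counter(r["stage"] for r in rules)
    return {stage: counts.get(stage, 0) for stage in KILL_CHAIN}

cov = coverage_map(rules)
gaps = [stage for stage, n in cov.items() if n == 0]

for stage in KILL_CHAIN:
    print(f"{stage:24s} {cov[stage]} rule(s)")
print("uncovered stages:", gaps)
```

Run against a real inventory, the output usually confirms the tilt the article describes: rules cluster at the back of the chain, and the front-of-chain stages come back empty.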

Where the linear teaching breaks

The textbook framing assumes the attacker walks the chain left-to-right, completing each stage before starting the next. Real intrusions don’t.

Real intrusions don’t walk the chain

Four breaches mapped against Lockheed Martin’s seven stages (Recon · Weap · Deliv · Exploit · Install · C2 · Actions):

  • MGM Resorts (Scattered Spider), 2023
    LinkedIn-driven help-desk vishing → MFA reset → BlackCat / ESXi encryption.
  • Snowflake customer breaches (UNC5537), 2024
    Credential-marketplace shopping → log in with valid creds → bulk data exfil via Snowflake’s own APIs.
  • MOVEit Transfer (Cl0p), 2023
    Custom SQLi exploit chain ran in parallel against hundreds of customer instances.
  • SolarWinds SUNBURST, 2020
    Build-pipeline trojan → 18k orgs receive signed backdoor → multi-month dwell → second-stage tools loop the chain again from inside.
The chain teaches a tidy left-to-right pipeline. Modern intrusions skip stages outright (identity-first attacks have no weaponization), collapse two into one (logging in with valid creds is delivery and exploitation at the same time), or loop — SolarWinds ran the chain a second time from inside, weeks after the first pass.

Identity-first attacks collapse the middle of the chain. When the initial access vector is a valid credential bought on a credential marketplace or extracted from an infostealer log, there is no weaponization, no exploit to develop, and often no installation. The attacker logs in. That single step is delivery, exploitation, and access all at once. Snowflake-customer breaches in 2024 hit this pattern cleanly — UNC5537 didn’t exploit Snowflake; they used valid customer credentials and Snowflake’s own API to exfiltrate data at scale.[2] A junior analyst trying to assign each step to a kill-chain stage spends thirty minutes on a problem that doesn’t have a clean answer.

Cloud-native intrusions don’t need C2. When the attacker’s objectives are accomplished by calling cloud APIs the victim has already authorized — pulling secrets from Secrets Manager, listing S3 buckets, modifying IAM policies — there is no command-and-control channel in the traditional sense. Traffic flows over the same TLS to the same hyperscaler the victim uses every minute of every day. The Lockheed paper’s C2 stage was modeled on beacons phoning home through firewalls. Cloud adversaries phone home to the victim’s own control plane.

Living-off-the-land has no weaponization. When the attacker uses PowerShell, WMI, AnyDesk, or signed Microsoft binaries to accomplish their objectives, there is no malware to develop and no payload to detonate. The stage that textbook detection advice leans on for signatures and payload analysis — weaponization — is simply empty.

RaaS and supply-chain campaigns run the chain in parallel, not in series. MOVEit was a single Cl0p exploit chain executed against hundreds of victim instances simultaneously. SolarWinds was a single weaponization that delivered to eighteen thousand organizations and then ran the chain a second time from inside each compromised network as second-stage tools (TEARDROP, RAINDROP, Cobalt Strike) selected high-value targets and looped through recon, lateral movement, and actions on objectives.[3] One chain on the wall hides four chains in the execution.
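One way to make the skip-and-collapse point concrete is to represent an intrusion as the sequence of stages actually observed and diff it against the textbook pipeline. A minimal Python sketch, using the identity-first pattern described above (the event labels are illustrative, not drawn from any incident report):

```python
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objectives",
]

# Observed timeline for an identity-first intrusion like the Snowflake
# customer breaches: buy creds, log in, exfiltrate via native APIs.
# One event can map to several stages at once (a collapsed stage).
observed = [
    ("credential-marketplace purchase", {"reconnaissance"}),
    ("login with valid credentials",    {"delivery", "exploitation"}),
    ("bulk exfil via native APIs",      {"actions-on-objectives"}),
]

touched = set().union(*(stages for _, stages in observed))
skipped = [s for s in KILL_CHAIN if s not in touched]
collapsed = [(event, sorted(stages))
             for event, stages in observed if len(stages) > 1]

print("skipped:", skipped)      # stages with no corresponding event
print("collapsed:", collapsed)  # one event covering multiple stages
```

Three of seven stages never happen, and one event spans two more. A stage-by-stage triage rubric has nothing to say about a timeline shaped like this.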

The mental shift

Once you treat the chain as a defender’s narrative tool rather than the attacker’s plan, two things change.

First, you stop trying to force-map every alert into a stage during triage. Triage runs on the alert’s own context: identity, behavior, asset criticality, recent baseline. The kill chain comes in after the dust settles, when the team is reconstructing what happened for the post-mortem and the executive deck.

Second, you pair the chain with the frameworks that handle the jobs the chain isn’t built for. MITRE ATT&CK is the granular detection vocabulary — tactics and techniques mapped to specific adversary behavior, the working language of detection engineering and threat hunting.[4] The Diamond Model is the attribution lens — adversary, capability, infrastructure, victim — for the threat-intel team building campaign trackers across multiple intrusions.[5] Each framework has a different job. Practitioners who try to make the kill chain do all three jobs end up frustrated, and practitioners who treat ATT&CK as a replacement miss the narrative scaffold the chain is uniquely good at.
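For readers who want the pairing spelled out, here is one rough correspondence between the chain’s seven stages and ATT&CK’s Enterprise tactics, sketched as a Python mapping. The tactic names and TA-numbers are real; the mapping itself is an illustrative judgment call, not an official crosswalk:

```python
# A rough, admittedly lossy correspondence between the kill chain's
# seven coarse stages and MITRE ATT&CK Enterprise tactics. Tactic IDs
# are real; which tactic lands under which stage is a judgment call.
CHAIN_TO_ATTACK = {
    "reconnaissance":        ["TA0043 Reconnaissance"],
    "weaponization":         ["TA0042 Resource Development"],
    "delivery":              ["TA0001 Initial Access"],
    "exploitation":          ["TA0002 Execution"],
    "installation":          ["TA0003 Persistence"],
    "command-and-control":   ["TA0011 Command and Control"],
    "actions-on-objectives": ["TA0009 Collection", "TA0010 Exfiltration",
                              "TA0040 Impact"],
}

# ATT&CK tactics with no clean kill-chain home: the granularity the
# chain cannot express is exactly why the two frameworks are paired.
unmapped_tactics = [
    "TA0004 Privilege Escalation", "TA0005 Defense Evasion",
    "TA0006 Credential Access", "TA0007 Discovery",
    "TA0008 Lateral Movement",
]
```

The five leftover tactics are the point: privilege escalation, defense evasion, credential access, discovery, and lateral movement have no obvious kill-chain stage, which is exactly the granularity the chain trades away for narrative simplicity.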

What this means for hiring and education

Most cybersecurity interviews still test the chain by asking candidates to list the seven stages. That tests memorization, not judgment. A better question: walk me through how you’d explain a recent breach to a board, using the kill chain as scaffolding — and tell me which stages don’t fit cleanly and why. The first half tests how the framework is actually applied on the job. The second half tests whether the candidate has used the framework on something real.

For educators and curriculum designers, the framing is straightforward: teach the chain’s history, its three practitioner uses, and the stages that modern intrusions skip or collapse. Pair it explicitly with ATT&CK and the Diamond Model from day one, with a clear statement of which job each tool earns its keep on. The fifteen minutes you save not making students memorize stage definitions buys hours of learning that survives contact with reality.

The takeaway

The Cyber Kill Chain is a useful framework. It is not a literal description of how attackers work, and it never was. Treating it as a narrative scaffold — for executive briefings, coverage gap analysis, and post-mortem timelines — is the use that has stuck because it’s the use the framework is actually good at.

The next time someone hands a junior analyst the seven stages and asks them to apply the chain to a live alert, do them a favor and tell them the truth about what the framework is for. They’ll save themselves the thirty minutes of confusion the rest of us already spent.


References & further reading

  1. Hutchins, E. M., Cloppert, M. J., & Amin, R. M. (2011). Intelligence-Driven Computer Network Defense Informed by Analysis of Adversarial Campaigns and Intrusion Kill Chains. Lockheed Martin Corporation. The original paper. Worth reading once just to see how specific the threat model was.
  2. Mandiant / Google Cloud (2024). UNC5537 Targets Snowflake Customer Instances for Data Theft and Extortion. Documents the credential-based access pattern across roughly 165 Snowflake customer environments.
  3. CISA (2021). Advanced Persistent Threat Compromise of Government Agencies, Critical Infrastructure, and Private Sector Organizations (Alert AA20-352A). SUNBURST plus the second-stage tooling. The chain ran twice: once at delivery scale, once per high-value selected target.
  4. MITRE ATT&CK. The detection-engineering vocabulary. Granular adversary tactics and techniques with mappings to data sources, mitigations, and observed actor groups.
  5. Caltagirone, S., Pendergast, A., & Betz, C. (2013). The Diamond Model of Intrusion Analysis. Adversary, capability, infrastructure, victim — the four vertices threat-intel teams use to track campaigns across many intrusions.