There’s a phrase I’ve heard for fifteen years from IT security teams stepping into industrial environments for the first time: OT security is just IT security with steel-toed boots. It’s wrong. The boots are a hint that the consequences are different. The discipline is different too — and conflating the two is the most reliable way to get an IT-trained engineer hurt or to get a plant operator to stop returning their calls.
OT — operational technology — is the industrial control layer: SCADA, PLCs, RTUs, HMIs, safety instrumented systems, the sensor and actuator networks that run power grids, water treatment, oil refineries, manufacturing lines, transportation, and the rest of the physical-world infrastructure modern society quietly depends on. ICS — industrial control systems — is roughly synonymous; the OT/ICS naming is more about which standards body or vendor you grew up with than about a real distinction.
Walk into one of these environments thinking it’s an IT environment with louder fans and you will mis-prioritize, mis-patch, mis-design, and possibly cause an outage. Here’s the practitioner version of why.
The priorities are inverted
Information security teaches the CIA triad: confidentiality, integrity, availability. In an enterprise IT context, that ordering is mostly correct — the worst outcomes are usually data exfiltration, ransomware-encrypted file servers, and credential compromise. Confidentiality leads.
OT inverts the order. The operational truth is closer to safety, then availability, then integrity, and only then confidentiality. Some practitioners write this as SAIC. Whatever the acronym, the point is the same: a leaked plant SOP costs the company embarrassment; a compromised PLC can rupture a pipe, vent a steam header, send a refinery into runaway reaction, contaminate drinking water, or trip a power grid.
That priority shift propagates into every architectural decision. You don’t patch on a schedule that suits the vulnerability team; you patch when the plant is in a planned outage, which might be every two years if you’re lucky. You don’t deploy an agent that could crash the controller; you deploy passive monitoring that taps a SPAN port and never sends a packet onto the OT segment. You don’t enforce a TLS-everywhere posture on Modbus traffic; Modbus doesn’t do TLS, and the operator console expects to be able to read raw frames during fault diagnosis.
The device lifecycle is wrong by an order of magnitude
IT environments cycle hardware every 3–5 years. Software patches monthly. Operating systems get major upgrades every 2–3 years.
OT environments don’t. The controller you’re securing was specified in 2003, ordered in 2005, commissioned in 2007, and is still running in 2026 because there is a maintenance contract with the vendor and a credible engineering estimate that swapping it out would cost the plant six weeks of downtime. Some controllers in U.S. nuclear and water systems still run firmware whose only practical patch cadence is “the next major outage we already have on the calendar eighteen months from now.”
This isn’t laziness. It’s an economic and physical reality. The cost of a one-day plant shutdown for a refinery is measured in millions of dollars; the cost of a one-day shutdown for a hospital’s OT chiller plant is measured in patient evacuations. Patching becomes a planned-maintenance event, not a Tuesday afternoon. The defender’s job is to compensate — with segmentation, monitoring, and physical controls — for the fact that the underlying controllers can’t be brought to current vulnerability state on any IT cadence.
The protocols don’t authenticate
Modbus has been the lingua franca of industrial control since 1979. It has no authentication. None. You send a function code 6 (write single register) frame to a Modbus device on a segment you can reach, and the device writes the register. DNP3, PROFINET, EtherNet/IP, and most of the rest of the industrial protocol family share the same original sin: they were designed for closed, trusted serial buses where authentication was assumed to be physical.
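To make the “no authentication” point concrete, here is a minimal Python sketch that builds a complete, valid Modbus/TCP write-single-register request from scratch. The register address and value are arbitrary illustration; the point is what the frame does not contain.

```python
import struct

def modbus_write_single_register(unit_id: int, register: int, value: int) -> bytes:
    """Build a Modbus/TCP 'write single register' (function code 6) request.

    The MBAP header carries a transaction id, a protocol id (always 0),
    a length field, and a unit id -- and nothing else. There is no
    credential, signature, or session token anywhere in the frame.
    """
    transaction_id = 1
    protocol_id = 0
    pdu = struct.pack(">BHH", 6, register, value)  # function code 6 + address + value
    mbap = struct.pack(">HHHB", transaction_id, protocol_id, len(pdu) + 1, unit_id)
    return mbap + pdu

# Twelve bytes total. Any host that can route a packet to the device's
# port 502 can issue this write; the device has no way to refuse it.
frame = modbus_write_single_register(unit_id=1, register=100, value=9999)
print(frame.hex())
```

This is why OT defense leans so heavily on network reachability: if the frame arrives, it is obeyed, so the control has to be that the frame never arrives.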
ISA/IEC 62443 is the standards-body answer — a comprehensive framework for industrial cybersecurity covering zones, conduits, security levels, and component requirements.[1] OPC UA, the modern industrial protocol, supports authentication and TLS natively. But the bolt-on retrofits to legacy protocols (Secure Modbus, DNP3 Secure Authentication) have partial deployment, and most operating plants run on the original unauthenticated dialects because that’s what the equipment speaks. Defenders compensate at the network layer: deep enforcement of zone-and-conduit segmentation, unidirectional gateways (data diodes) at the IT/OT boundary, and protocol-aware passive monitoring (Dragos, Claroty, Nozomi) that knows what a malicious Modbus frame looks like.
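What protocol-aware passive monitoring does can be sketched in a few lines. This is an illustrative toy, not any vendor’s detection logic: it reads frames captured off a SPAN port, pulls out the Modbus function code, and flags write requests from hosts that have no business writing. The IP addresses and allowlist are hypothetical.

```python
# Write-class Modbus function codes: write coil(s), write register(s).
WRITE_CODES = {5, 6, 15, 16}

def flag_unauthorized_writes(frames, allowed_writers):
    """Return alerts for Modbus/TCP write requests from unexpected hosts.

    `frames` is an iterable of (src_ip, payload_bytes) pairs, where the
    payload is a raw Modbus/TCP frame. Purely passive: the function only
    reads captured traffic and never sends a packet onto the OT segment.
    """
    alerts = []
    for src_ip, payload in frames:
        if len(payload) < 8:
            continue  # too short to carry a function code
        function_code = payload[7]  # first PDU byte after the 7-byte MBAP header
        if function_code in WRITE_CODES and src_ip not in allowed_writers:
            alerts.append((src_ip, function_code))
    return alerts

captured = [
    ("10.0.20.5", bytes.fromhex("000100000006010600640b54")),  # engineering workstation
    ("10.0.99.9", bytes.fromhex("000100000006010600640b54")),  # unknown host, same write
]
print(flag_unauthorized_writes(captured, allowed_writers={"10.0.20.5"}))
```

Real products do far more (state tracking, baselining, process-variable context), but the shape is the same: because the protocol cannot authenticate the writer, the monitor has to know who is allowed to write.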
The threat model is different
IT security defends against ransomware crews, credential thieves, BEC operators, and a long tail of opportunists. OT defends against a smaller, more specialized adversary: nation-state actors with kinetic objectives, plus the occasional opportunistic ransomware spillover. The Stuxnet–Industroyer–TRITON lineage maps a deliberate APT progression toward operational disruption of physical processes.
The incidents that defined the field
A short, opinionated list. Notice what the targets are: not data, not credentials. Pipes, breakers, centrifuges, dosing pumps.
- Stuxnet (2010). Target: Iranian uranium enrichment centrifuges at Natanz. The first publicly documented ICS attack with kinetic effect. Manipulated Siemens S7-300 PLC logic to physically damage centrifuges while showing operators normal readings on the HMI.
- Industroyer / CrashOverride (2016). Target: the Ukrainian power grid in Kyiv. The second Ukrainian grid attack. Used native ICS protocols (IEC 60870-5-101/104, IEC 61850) to send legitimate-looking switching commands directly to substation equipment. Roughly one hour of outage.
- TRITON / TRISIS (2017). Target: the safety system of a Saudi petrochemical plant. The first malware to target a Safety Instrumented System (SIS), a Schneider Triconex controller. Designed to disable the safety interlock that would have shut the plant down on an unsafe condition. The plant tripped before the payload completed.
- Oldsmar water treatment (2021). Target: a city water utility in Florida. Remote access via TeamViewer to the HMI; the attacker briefly raised the sodium hydroxide setpoint roughly 100×. An operator caught it visually within minutes. Famous for the embarrassing entry vector (shared password, exposed remote access tool).
- Colonial Pipeline (2021). Target: a U.S. East Coast fuel pipeline, via its billing systems. Ransomware (DarkSide) on the IT side. The company shut down OT operations as a precaution because IT/OT segmentation wasn’t trusted. The fuel-supply panic that followed showed how thin the IT/OT abstraction is in practice.
- Volt Typhoon, China-attributed (2023–ongoing). Target: U.S. critical infrastructure (water, energy, communications). A pre-positioning campaign: adversaries are inside OT-adjacent networks but haven’t acted. CISA’s read is preparation for kinetic conflict, not espionage. Living-off-the-land tradecraft: legitimate RMM tools, legitimate credentials, no malware to sandbox.
Two of those incidents (Stuxnet, Industroyer) were almost certainly nation-state operations.[2] TRITON was attributed to a Russian government research institute by the U.S. Treasury in 2020.[3] Volt Typhoon is China-attributed and tracked by CISA as preparation for potential conflict, not espionage.[4] The threat actors who care about OT environments are mostly nation-state. Their tools are purpose-built. Their objectives are physical. The IT-side concept of “just an annoying ransomware crew” doesn’t map — the adversary is fundamentally different in capability and intent.
The Purdue model
OT architecture has a model that IT doesn’t: the Purdue Enterprise Reference Architecture (PERA), the basis of ISA-95. It defines six levels (0 through 5) that organize an industrial environment from the physical process upward through the corporate IT network. Level 0 is the physical sensors and actuators. Level 1 is basic control (PLCs). Level 2 is supervisory control (HMIs, SCADA). Level 3 is manufacturing operations (historians, MES). Levels 4–5 are corporate IT. The interface between Level 3 and Level 4 — the IT/OT boundary, often called the industrial DMZ — is the architectural crown jewel of OT defense.
IT/OT convergence is the marketing term for what’s really happening: business intelligence, ML-driven predictive maintenance, cloud analytics, and remote access push connectivity downward through the levels. Each connection is a defender’s question: does this data flow need to be bidirectional, or can a unidirectional gateway suffice? Can the IT-side asset reach anything in Level 2 or below, or is it strictly Level 3 and above? Can the remote-access tool be allowlisted on a specific jumphost, or is it sitting on the same flat network as the PLCs?
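The zone-and-conduit questions above reduce to a deny-by-default policy check, which can be sketched in a few lines of Python. The level pairs and labels here are illustrative, not a recommended rule set; a real policy lives in firewalls and gateway configuration, not application code.

```python
# Hypothetical sketch of a Purdue-style conduit allowlist.
# Levels follow the Purdue model: 0 = physical process, 5 = corporate IT.
ALLOWED_CONDUITS = {
    (4, 3): "historian replication through the industrial DMZ",  # IT reaches Level 3 only
    (3, 2): "SCADA supervisory traffic",
    (2, 1): "HMI to PLC commands",
}

def conduit_allowed(src_level: int, dst_level: int) -> bool:
    """A flow is permitted only if an explicit conduit exists for it.

    Anything not listed -- especially corporate IT (Levels 4-5) reaching
    Level 2 or below -- is denied by default.
    """
    return (src_level, dst_level) in ALLOWED_CONDUITS

print(conduit_allowed(4, 3))  # IT to the historian tier: allowed
print(conduit_allowed(4, 1))  # corporate host straight to a PLC: denied
```

The useful property is the default: every convergence project has to argue a new (source, destination) pair into the table, which is exactly the conversation the defender wants to be having.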
The Purdue model is the architectural language for these conversations. An IT-trained practitioner can’t skip learning it any more than a frontend developer can skip learning the box model.
What IT-trained practitioners get wrong moving in
Three patterns recur, in roughly this order:
1. Pushing IT-cadence patching onto OT systems. The IT engineer joins, sees a fleet of unpatched controllers, files a critical-severity ticket demanding everything be brought current. The plant manager explains, again, that the patch window is November 2027. The trust collapses on the first conversation. The right move is to compensate with detection and segmentation until the next planned outage, not to relitigate the patch cadence.
2. Actively scanning OT networks. Traditional vulnerability scanners send probe traffic that can crash legacy controllers. The first time an IT team runs Nessus across Level 2 and the plant trips, the OT side stops listening for good. Use passive analysis (Dragos, Claroty, Nozomi, or open-source tooling like Zeek with industrial protocol parsers) on a SPAN port instead.
3. Treating safety as an availability metric. A Safety Instrumented System (SIS) isn’t a backup; it’s the layer that prevents the plant from killing people when the primary control system fails. TRITON targeted exactly this layer because the attacker understood what an SIS does. IT-trained practitioners often categorize an SIS alongside redundant controllers, and that’s the wrong frame. SIS deserves its own threat model and its own monitoring, separate from the rest of OT.
What it means to do OT security well
Three signals separate practitioners who actually defend OT from practitioners who imported their IT playbook:
- They can read a P&ID (piping and instrumentation diagram) well enough to know which controllers are on the safety-critical loop and which ones are economic-loss-only.
- They’ve spent enough time with operators to know that the term “outage” means something different in OT than in IT — and they’ve adjusted their incident response playbook accordingly.
- They can articulate why ISA/IEC 62443 zones-and-conduits is more relevant than the corporate ISO 27001 control set for the OT estate, even if compliance reporting requires both.
The takeaway
OT security is its own discipline. The boots are the smallest of the differences. The priority order is inverted, the device lifecycle is an order of magnitude longer, the protocols don’t authenticate, the threat actors are nation-states with kinetic objectives, and the architectural language is the Purdue model rather than the corporate network diagram.
IT-trained practitioners can absolutely do OT security — many of the best OT defenders today started in IT — but the move requires a humility step that an IT background sometimes makes harder: recognize that the discipline you’re entering has its own standards body, its own decades-deep literature, and its own small pool of practitioners who’ve been compensating for the constraints of physical infrastructure for longer than most IT engineers have been in the field. Read NIST SP 800-82 first.[5] Read the IEC 62443 series. Walk a plant. Then start making recommendations.
References & further reading
- ISA/IEC 62443 Series of Standards. Link. The international standard for industrial automation and control systems security. Zones-and-conduits, security levels, component requirements.
- Langner, R. (2013). To Kill a Centrifuge: A Technical Analysis of What Stuxnet’s Creators Tried to Achieve. Link. The definitive technical analysis. The Symantec W32.Stuxnet Dossier (Falliere, Murchu, Chien, 2011) is the other essential primary source.
- U.S. Department of the Treasury (2020). Treasury Sanctions Russian Government Research Institution Connected to the Triton Malware. Link. Public attribution of TRITON to Russia’s TsNIIKhM.
- CISA (2024). PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure (AA24-038A). Link. The Volt Typhoon advisory. CISA’s framing: pre-positioning for kinetic conflict.
- NIST (2023). SP 800-82 Rev. 3: Guide to Operational Technology (OT) Security. Link. The closest thing the field has to a single canonical reference. Worth reading once cover-to-cover.