From Mainframe Time-Sharing to Hyperscale
The idea behind cloud computing is older than the internet. In the 1960s, John McCarthy proposed that computation might one day be organised as a public utility — like electricity. You wouldn’t own a power plant; you’d pay for kilowatts.
That vision took four decades to materialise. The journey has five distinct phases, each one breaking assumptions that seemed permanent at the time.
Phase 1 — Virtualisation (1999–2006)
The cloud didn’t begin with AWS. It began when VMware’s x86 hypervisor commercialised a decades-old IBM idea: run multiple isolated operating systems on one physical machine.
For the first time, a company could buy one server and sell capacity to ten customers simultaneously. The economics of shared infrastructure were suddenly viable.
Key moment: Salesforce (1999) launched “software as a service” — a CRM delivered entirely over a browser. No installation. No local server. It was radical, and the enterprise hated it for years before adopting it completely.
Phase 2 — Infrastructure as a Service (2006–2012)
AWS launched EC2 in 2006. The proposition was simple: rent a virtual server by the hour.
This destroyed the economics of on-premises infrastructure for most workloads. Before EC2, a startup had to buy servers, rack them in a data centre, wait 6–8 weeks for provisioning, and pay for capacity whether they used it or not. After EC2, a developer could provision 100 servers in three minutes and pay only for what they used.
Before:
Capital expenditure → Fixed capacity → Underutilised 70% of the time
After:
Operational expenditure → Elastic capacity → Pay per second
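The shift can be made concrete with a toy cost model. All the figures below are illustrative assumptions, not real vendor pricing:

```python
# Toy cost model contrasting fixed on-premises capacity with elastic
# pay-per-use pricing. All figures are illustrative assumptions,
# not real vendor prices.

def on_prem_cost(peak_servers: int, cost_per_server: float) -> float:
    """Capex: you buy for peak demand and pay whether or not it is used."""
    return peak_servers * cost_per_server

def cloud_cost(avg_servers: float, hourly_rate: float, hours: int) -> float:
    """Opex: you pay only for the capacity actually consumed."""
    return avg_servers * hourly_rate * hours

# Peak demand of 100 servers, but average utilisation of only 30%.
peak, avg_util = 100, 0.30
capex = on_prem_cost(peak, cost_per_server=10_000)
opex = cloud_cost(peak * avg_util, hourly_rate=0.10, hours=3 * 8760)

print(f"on-prem (3y): ${capex:,.0f}")  # pays for 100 servers around the clock
print(f"cloud (3y):   ${opex:,.0f}")   # pays for ~30 servers' worth of hours
```

With a workload idle 70% of the time, the elastic model is an order of magnitude cheaper — which is why the capex-to-opex shift, not the technology itself, drove adoption.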
Security implication: Shared physical infrastructure introduced new threat classes. Side-channel attacks such as Spectre and Meltdown (disclosed 2018) demonstrated that speculative execution could leak data across isolation boundaries — including between virtual machines on the same CPU — via cache timing. Cloud providers now offer dedicated hosts for workloads with strict isolation requirements.
Phase 3 — Platform and Software as a Service (2010–2016)
IaaS was just virtualised hardware. The real transformation came when cloud providers abstracted the OS layer entirely.
Platform as a Service (PaaS): Heroku, Azure App Service, Google App Engine. You deploy code. The platform handles OS patching, scaling, load balancing, and runtime management.
Software as a Service (SaaS): Microsoft 365, Google Workspace, Slack, GitHub. You consume applications through a browser. Zero infrastructure management.
The shared responsibility model crystallised in this era:
| Layer | Cloud Provider | Customer |
|---|---|---|
| Physical | ✓ | |
| Hypervisor | ✓ | |
| OS (IaaS) | | ✓ |
| OS (PaaS) | ✓ | |
| Runtime | ✓ (PaaS) | ✓ (IaaS) |
| Data | | ✓ |
| Identity | | ✓ |
| Application Config | | ✓ |
This model is still widely misunderstood. Organisations regularly misconfigure data access controls because they believe “the cloud provider handles security.”
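To make the customer side of that split concrete, here is a minimal sketch of detecting the classic data-plane misconfiguration: an S3-style bucket policy that grants read access to the world. The evaluation logic is deliberately simplified (real IAM evaluation also considers conditions, ACLs, and account-level public-access blocks), and the policy document is hypothetical:

```python
# Minimal check for a common data-plane misconfiguration: a bucket
# policy whose Principal is "*" (everyone). Simplified evaluation
# logic for illustration only; real IAM evaluation also considers
# conditions, ACLs, and account-level public-access blocks.

def allows_public_read(policy: dict) -> bool:
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Principal") in ("*", {"AWS": "*"})
                and any(a in ("s3:GetObject", "s3:*", "*") for a in actions)):
            return True
    return False

# Hypothetical policy document, world-readable by mistake.
public_policy = {
    "Statement": [{"Effect": "Allow", "Principal": "*",
                   "Action": "s3:GetObject",
                   "Resource": "arn:aws:s3:::example-bucket/*"}]
}
print(allows_public_read(public_policy))  # True — this data is the CUSTOMER's problem
```

The provider keeps the storage service running; whether that statement says `"Principal": "*"` is entirely the customer's responsibility.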
Phase 4 — Multi-Cloud and Hybrid (2016–2022)
As cloud adoption matured, a new problem emerged: vendor lock-in.
Organisations that built entirely on AWS discovered that migrating workloads was enormously expensive. Proprietary services (DynamoDB, Lambda, SQS) had no direct equivalent elsewhere. Cloud providers had designed their ecosystems to maximise switching costs.
The response was multi-cloud architecture — distributing workloads across AWS, Azure, and GCP simultaneously. This introduced new complexity:
- Identity federation across multiple clouds
- Consistent network security policy enforcement
- Unified logging and SIEM ingestion
- Cross-cloud cost visibility
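The unified-logging item is usually solved with a thin normalisation layer that maps each provider's event format onto one schema before SIEM ingestion. A minimal sketch — the input shapes loosely mimic AWS CloudTrail and Azure Activity Log records, and the output schema is an illustrative assumption, not an official standard:

```python
# Normalise heterogeneous cloud audit events into one flat schema
# before SIEM ingestion. Input field names loosely mimic (simplified)
# AWS CloudTrail and Azure Activity Log records; the output schema is
# an illustrative assumption, not a real standard.

def normalise(event: dict, cloud: str) -> dict:
    if cloud == "aws":            # CloudTrail-style record
        return {"cloud": "aws",
                "actor": event["userIdentity"]["arn"],
                "action": event["eventName"],
                "time": event["eventTime"]}
    if cloud == "azure":          # Activity Log-style record
        return {"cloud": "azure",
                "actor": event["caller"],
                "action": event["operationName"],
                "time": event["eventTimestamp"]}
    raise ValueError(f"unknown cloud: {cloud}")

aws_evt = {"userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
           "eventName": "DeleteBucket",
           "eventTime": "2024-05-01T12:00:00Z"}
print(normalise(aws_evt, "aws")["action"])  # DeleteBucket
```

Once every event carries the same `actor`/`action`/`time` fields, cross-cloud detection rules can be written once instead of per provider.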
Hybrid cloud integrated on-premises data centres with public cloud using services like Azure Arc, AWS Outposts, and Google Anthos. This was driven heavily by regulatory requirements: financial institutions and healthcare providers couldn’t move certain data to public cloud due to GDPR, DORA, and sector-specific regulations.
Phase 5 — Edge, OT Integration, and Sovereign Cloud (2022–Present)
This is where the evolution intersects directly with OT security — my primary domain.
Edge Computing
Cloud computing centralised everything. Edge computing reverses that, pushing compute back toward the data source. A factory floor generates terabytes of sensor data per day. Sending all of it to a cloud region 1,000 km away for processing introduces unacceptable latency for real-time control decisions.
Industrial edge computing puts compute nodes at the plant level:
- Azure IoT Edge runs containerised workloads on a gateway at the factory
- Data is pre-processed and filtered locally; only aggregated telemetry is sent to cloud
- Offline resilience: the plant continues operating if WAN connectivity is lost
This model directly addresses the OT/IT convergence challenge — ICS environments increasingly need cloud analytics capabilities without compromising the air-gap properties that protect safety systems.
[PLC / Sensor Layer]
↓ OPC-UA / Modbus
[Industrial Edge Gateway] ← Azure IoT Edge containers
↓ MQTT / AMQP
[Cloud (Azure IoT Hub)]
↓
[Analytics / Digital Twin / SIEM]
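The "pre-process locally, send only aggregates" step in that pipeline can be sketched as follows. This is a pure-Python stand-in for the logic an edge module might run; the window size, field names, and telemetry shape are assumptions, not an Azure IoT Edge API:

```python
# Edge-side pre-processing: buffer raw sensor readings at the gateway
# and emit only an aggregate summary upstream. Pure-Python stand-in
# for the logic an edge module might run; window size and field names
# are illustrative assumptions.

from statistics import mean
from typing import Optional

class EdgeAggregator:
    def __init__(self, window: int = 60):
        self.window = window           # raw readings per aggregate
        self.buffer: list = []         # held locally, not sent upstream

    def ingest(self, reading: float) -> Optional[dict]:
        """Buffer a raw reading; return one aggregate per full window."""
        self.buffer.append(reading)
        if len(self.buffer) < self.window:
            return None                # nothing leaves the plant yet
        summary = {"count": len(self.buffer),
                   "mean": mean(self.buffer),
                   "max": max(self.buffer)}
        self.buffer.clear()
        return summary                 # only this goes to the cloud

agg = EdgeAggregator(window=3)
out = None
for temp in (20.0, 21.0, 25.0):
    out = agg.ingest(temp)
print(out)  # {'count': 3, 'mean': 22.0, 'max': 25.0}
```

Three raw readings stay at the gateway; one small summary crosses the WAN — the same pattern scales to the terabytes-per-day case, and the local buffer is what gives the plant its offline resilience.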
Sovereign Cloud
Europe’s NIS2 Directive and DORA regulation, combined with US ITAR/EAR requirements, have created demand for cloud infrastructure that operates within defined jurisdictional boundaries with domestic key management.
Azure Sovereign Regions (e.g., Azure Government, Azure Germany) operate under separate operational personnel with citizenship requirements, isolated management planes, and customer-controlled encryption keys that never leave the sovereign boundary.
For critical infrastructure operators — power grids, water treatment, telecommunications — this is not optional. It’s regulatory.
Where the Trajectory Points
The next phase is already visible in early deployments:
1. Confidential Computing: Hardware-enforced trusted execution environments (Intel SGX and TDX, AMD SEV-SNP) that keep data encrypted even while it is being processed. The cloud provider cannot read your workloads in memory. Critical for multi-party analytics where each party’s data must remain private.
2. AI-native Infrastructure: Cloud providers are rebuilding their data planes around AI workloads. Custom silicon (TPUs, AWS Trainium, Azure Maia) is replacing general-purpose GPUs for inference at scale. The infrastructure implications for security are significant — new attack surfaces in model serving, inference endpoints, and training pipelines.
3. OT-Native Cloud Services: Azure Defender for IoT, AWS IoT SiteWise, and Siemens MindSphere are evidence that cloud providers are building native integrations for industrial protocols (OPC-UA, MQTT, Modbus). The convergence of OT and cloud is accelerating.
The Security Implication Thread
Every phase of cloud evolution introduced new attack surface that took the industry years to understand and defend:
| Phase | New Surface | Key Threats |
|---|---|---|
| IaaS | Shared hypervisor | Side-channel attacks, VM escape |
| PaaS | Managed runtime | Supply chain (dependency confusion), SSRF |
| SaaS | Identity plane | OAuth token theft, AiTM phishing |
| Multi-cloud | Federated identity | Cross-cloud privilege escalation |
| Edge/OT | OT protocol exposure | ICS protocol attacks, firmware compromise |
The pattern is consistent: security lags adoption by 3–5 years. The organisations that close that gap — by hiring engineers who understand both the infrastructure and the adversary — are the ones that avoid the breach.