Nov 5
The Evolution of Networking: From Hubs to AI-Ready Fabrics in 30 Years
Posted by Craig Grant

In three decades we went from shared 10 Mb hubs and beige pizza-box routers to leaf-spine fabrics pushing 400 GbE (and beyond), software-defined overlays, and AI-optimized data centers with liquid cooling and smart NICs. The big shifts: switches replaced hubs, virtualization and cloud changed traffic patterns, leaf-spine killed big L2 domains, EVPN-VXLAN became the standard overlay, copper gave way to fiber/DAC in the DC, and automation + telemetry replaced box-by-box CLI. The next leg: 800 G, co-packaged optics, liquid cooling, pervasive zero trust, and NetDevOps-by-default.

Contents:

  1. The Late 90s: From Hubs to Switched Ethernet
  2. 2005–2015: 10 GbE, Virtualization, and the Modern Campus
  3. 2015–2020: Leaf-Spine, 25/100 GbE, and SDN Becomes Boring (in a good way)
  4. 2020–2025: AI Workloads, 200/400 GbE, EVPN Everywhere
  5. Cabling & Optics: What Actually Works in 2025
  6. Data Center Enhancements That Actually Moved the Needle
  7. Practical Migration Patterns (What We Learned the Hard Way)
  8. What’s Next: 800 G, Co-Packaged Optics, Liquid Cooling, and Zero-Trust by Default

 

1) The Late 90s: From Hubs to Switched Ethernet

The world then:

  • 10BASE-T → 100BASE-TX. Shared hubs gave way to switches with per-port collision domains.

  • Early VLANs & Spanning Tree. We carved up big flat networks, then watched STP block half the links to avoid loops.

  • Gigabit appears. Late-90s 1000BASE-X (fiber) and then 1000BASE-T (copper) arrived, but the campus stayed largely 100 Mb for a while.

 

Cabling reality:

  • Cat5 → Cat5e in offices; OM1/OM2 multimode in risers; SC/ST connectors everywhere.

  • Backbones started adopting fiber as server NICs finally grew teeth.

 

2) 2005–2015: 10 GbE, Virtualization, and the Modern Campus

Why it mattered:

  • 10 GbE moved from core to distribution to server access, especially with virtualization.

  • PoE/PoE+. Phones and APs drove 802.3af/at—the campus edge became a power source.

  • Wi-Fi took off. 802.11a/b/g → n made wireless a primary access method.

  • WAN modernized. Frame Relay/TDM faded; MPLS and Ethernet handoffs became table stakes.

  • Storage converged. iSCSI grew up; FCoE had its moment; FC kept advancing in parallel.

 

Architectures:

  • The classic three-tier (access/distribution/core) campus held on, but data centers began moving to top-of-rack (ToR) with 10 GbE and end-of-row aggregation.

 

Cabling & optics:

  • Cat6/Cat6a for 10GBASE-T (heat and distance mattered).
  • OM3/OM4 for 10G SR links; LC replaced SC in new builds; MPO/MTP crept in for parallel optics.

 

3) 2015–2020: Leaf-Spine, 25/100 GbE, and SDN Becomes Boring (in a Good Way)

Why this was a turning point:

  • 25/50/100 GbE standardized, killing the 40 G detour for many.

  • Leaf-Spine (Clos) became the default data center fabric—predictable east-west latency and scale.

  • EVPN-VXLAN (and vendor equivalents) turned overlays mainstream. We finally escaped giant L2/STP domains.

  • Whitebox NOS & APIs normalized automation. Ansible/Terraform pipelines started replacing change windows full of copy-paste CLI.

  • Microsegmentation & NFV. Security moved closer to the workload; firewalls and service chains went virtual.


Cabling & optics:

  • QSFP28 for 100 G, SFP28 for 25 G, DACs in-rack, AOCs for short runs, and OS2 single-mode for distance.

  • MPO trunks became standard for high-count fiber.

 

4) 2020–2025: AI Workloads, 200/400 GbE, EVPN Everywhere

The new normal:

  • 200/400 GbE is production-grade; 800 G is entering early deployments.

  • AI/ML clusters change traffic profiles: bandwidth-hungry east-west traffic, elephant flows, and ultra-low jitter requirements.

  • RoCEv2/RDMA, ECN/PFC tuning, and lossless (or near-lossless) designs matter again.

  • SmartNICs/DPUs offload encryption, vSwitching, and storage.

  • NVMe-oF pushes storage onto the same Ethernet fabric, so buffer, ECN, and PFC tuning matters more than ever.

  • Wi-Fi 6/6E/7 raises edge density and brings more deterministic QoS for collaboration + IoT.

  • SASE/Zero-Trust. Identity-driven access from WAN to campus to app mesh.

 

Ops mindset:

  • Streaming telemetry (gNMI, model-driven). From SNMP polls to real-time signals.

  • Intent-based guardrails. Validate network state continuously; drift becomes observable.

  • CI/CD for networks. Change reviews, pre-checks, rollbacks—like app teams, finally.

 

5) Cabling & Optics: What Actually Works in 2025

Quick, pragmatic guidance you can drop into design docs; a small media-selection sketch follows the lists below.

Campus / Office

  • Copper access: Cat6 for 1G, Cat6a if you plan 2.5/5 GbE (multi-gig) or PoE++ power budgets.

  • Wi-Fi 6/7 APs: Budget for PoE++ (802.3bt) and multi-gig (2.5/5 GbE) uplinks; a quick budget sketch follows this list.

  • Fiber backbone: OS2 single-mode for new risers; it outlives standards cycles.

 

Data Center

  • In-rack: Passive DAC for 10/25/50/100 G (check length limits, typically ≤3 m passive; active DAC extends that).

  • Row-to-row (short): AOC when you need flexibility without full SMF costs.

  • Fabric spines & long runs: OS2 SMF with QSFP-DD/OSFP optics at 100/200/400 G.

  • Parallel optics: Plan MPO-12/16 trunks with proper polarity and clean-fiber discipline.

  • OM4/OM5 MMF still fine intra-row, but single-mode wins longevity battles.


Connector alphabet soup (plain English):

  • SFP/SFP+ (1/10 G), SFP28 (25 G); QSFP+/QSFP28 (40/100 G); QSFP-DD/OSFP for 200/400/800 G.

  • LC for duplex fiber, MPO/MTP for parallel lanes.

 

6) Data Center Enhancements That Actually Moved the Needle

  • Leaf-Spine + EVPN-VXLAN. Scales horizontally, deterministic latency, simple ECMP.

  • Server-edge consolidation. Fewer NICs doing more with 25/100 G, SR-IOV, and DPUs for offload.

  • Storage modernization. NVMe-oF over Ethernet; keep an eye on buffer/ECN/PFC configs.

  • Power & cooling. From raised floors + CRAHs to hot/cold aisle containment, rear-door heat exchangers, and direct-to-chip liquid cooling for dense GPU racks.

  • Electrical distribution. Busways, higher-amp PDUs, lithium-ion UPS, and better PUE monitoring (even if you don’t publish it).

  • Observability. Line-rate sFlow/IPFIX, streaming telemetry, and digital twins for pre-change validation.

  • Security posture. Microsegmentation (at the overlay or host), mTLS east-west, ZTNA at the edge, and strong 802.1X on campus.

 

7) Practical Migration Patterns (Field-tested)

  1. Pre-stage your gateways (stop renumbering VLANs).
    Introduce HSRP/VRRP with the existing gateway IP as the virtual address. Make the legacy device active, bring up the new pair, then swing priority to migrate traffic without touching hosts (see the config sketch after this list).

  2. Shrink the blast radius.
    Kill sprawling L2 domains. Use EVPN-VXLAN for mobility/segmentation and anycast gateways at every leaf. STP becomes a corner case, not a design principle.

  3. Keep copper where it shines; don’t force it in the DC.
    Access ports for users/IoT? Copper. Anything east-west in the DC? Prefer DAC/AOC/SMF.

  4. Automate the boring stuff first.
    Backups, linting, golden templates, interface descriptions, QoS/policers, routing adjacencies. Then move to full CI/CD.

  5. Telemetry before troubleshooting.
    Turn on streaming telemetry and flow export before the go-live. You can’t analyze packets you didn’t capture.

  6. PoE budgets are real.
    Wi-Fi 6/7 APs, cameras, and door controllers will burn through under-sized PoE quickly. Model per-switch draw and diversity.

  7. AI racks are different.
    Expect 30–60 kW per rack, sometimes much higher. Plan for liquid cooling, denser power whips or busways, and very short-latency, wide pipes (400 G today, 800 G tomorrow).

 

8) What’s Next: 800 G, Co-Packaged Optics, Liquid Cooling, and Zero-Trust by Default

  • 800 G and 1.6 T: Shipping in leading-edge fabrics; watch co-packaged optics for power savings and signal integrity at scale.

  • CXL & memory fabrics: Early days, but disaggregated memory will change east-west profiles again.

  • Liquid everywhere: Cold plates, immersion, rear-door heat exchangers—AI/ML densities make this mainstream.

  • Secure-by-design networking: Per-app identity, continuous verification, and policy as code.

  • NetDevOps normality: Versioned configs, pre-deployment tests, per-PR lab sims, and safe rollbacks as a habit, not a hero move.

 

Speed & Media Cheat Sheet (Typical, Not Exhaustive)

| Link Type | Common Media | Notes You’ll Actually Use |
| --- | --- | --- |
| 1 GbE access | Cat5e/Cat6 copper | Still fine for office endpoints and light PoE loads. |
| 2.5/5 GbE access | Cat6/Cat6a copper | For Wi-Fi 6/7 AP uplinks and high-draw PoE++ devices. |
| 10 GbE server edge | SFP+ DAC/AOC, OM4 SR, Cat6a (10GBASE-T) | DAC in-rack, SR for short fiber, copper if you must. |
| 25 GbE server edge | SFP28 DAC/AOC, OM4 SR, OS2 LR | The sweet spot for modern hosts. |
| 40 GbE aggregation | QSFP+ SR4 (MMF), LR4 (SMF) | Largely skipped in greenfield now in favor of 100 G. |
| 100 GbE leaf-spine | QSFP28 DAC/AOC, DR/FR/LR (SMF) | The “new normal” for spines; easy breakout to 4×25 G. |
| 200/400 GbE spine/AI | QSFP-DD/OSFP (SMF), AOC in-row | Early mainstream; watch optics power and cooling. |
| 800 GbE at scale | OSFP/QSFP-DD on SMF | Emerging deployments; plan for trays, power, and optics availability. |

Tip: Choose optics families that break out cleanly (e.g., 100 G → 4×25 G, 400 G → 4×100 G). It preserves flexibility and simplifies sparing.

 

Campus Modernization in One Afternoon (Well… Almost)

  1. Swap old distribution-layer gear for multi-gig PoE++ access switches and a simple L3 core.

  2. Run OS2 single-mode in risers; keep copper at the edge.

  3. Standardize on 802.1X + dynamic VLAN/SGT (or overlay-based tags) for access control; the mapping idea is sketched after this list.

  4. Move backbone routing to OSPF/IS-IS + BGP where appropriate; avoid STP dependencies.

  5. Adopt SASE for off-prem users and branch simplicity; keep local breakout for SaaS.

 

A Note on Cost, Power, and Sustainability

  • Right-size speeds: 25 G access + 100 G aggregation remains a cost/benefit sweet spot for many DCs.

  • Model power early: High-density optics and PoE++ can tip PDUs over the edge fast; a rough rack model follows this list.

  • Track PUE and inlet temps: You don’t need to publish them—but you do need to see them.

 

Glossary (Two-Line Versions)

  • EVPN-VXLAN: The de facto L2/L3 overlay for DC fabrics. Scales segmentation without giant L2 domains.

  • Leaf-Spine: A predictable, non-blocking fabric using equal-cost paths.

  • DPU/SmartNIC: Offloads networking/storage/security from server CPUs.

  • RoCEv2/RDMA: Low-latency, loss-sensitive transport for AI/ML and storage.

  • SASE/ZTNA: Cloud-delivered security with identity at the center.

  • DAC/AOC: Short-run cables with the transceiver ends built in (twinax copper for DAC, fiber with integrated optics for AOC); ideal for in-rack and row-scale runs.

 

Final Thoughts

Thirty years transformed networking from “keep the link up” to “ship features safely at scale.” The gear is faster, the optics are smarter, and the operating model is finally catching up to software engineering. If you’re planning the next refresh, don’t start with speeds and feeds—start with traffic patterns, failure domains, automation, and security posture. The optics and cables will follow.

 

LookingPoint offers a wide range of IT services. Want more information? Give us a call, or reach out to us at sales@lookingpoint.com and we’ll be happy to help!


Written By:

Craig Grant, Senior Consulting Engineer
