Modern DataCenter Topology

Modern datacenter network designs run virtual overlay networks on top of layer 3 underlay networks. This design has multiple layers, and maintaining segmentation between those layers is important. Compartmentalizing each aspect of the design lets the engineer stage the build: the topology, underlay, and overlay components are each designed within their own domain. This blog will be for my notes on the topology design.

Before all the automation, overlays, and magical clouds can be built, the network design must scale to meet demand. How do you achieve scalability? Scale out.

Scaling out is the concept of adding capacity to meet demand. In a spine-and-leaf topology this means adding devices at the “spine” layer for more east/west bandwidth, or devices at the rack layer for more access capacity, while keeping features to a minimum and adding as much bandwidth and redundancy as needed.
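To make the scale-out idea concrete, here is a minimal sketch (my own illustrative numbers, not tied to any specific platform) of how adding spines grows the east/west bandwidth available to every rack at once:

```python
# Back-of-the-napkin scale-out math: each leaf gets one uplink to every
# spine, so adding a spine adds uplink bandwidth to every rack at once.
# The port speed below is an illustrative assumption, not a vendor spec.

UPLINK_GBPS = 100  # leaf-to-spine link speed (assumed)

def east_west_capacity_per_leaf(num_spines: int, uplink_gbps: int = UPLINK_GBPS) -> int:
    """Uplink capacity available to a single leaf, in Gbps."""
    return num_spines * uplink_gbps

for spines in (2, 4, 6, 8):
    print(f"{spines} spines -> {east_west_capacity_per_leaf(spines)}G per leaf")
```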

These topologies are deployed in PODs. A POD is a layer 3 micro-cluster used to deploy services. The access or top of rack (leaf) switches connect to the fabric layer (spine) switches with 40/100G uplinks to each, and provide 10/25/40G access to the servers. The goal is a “Fat Tree” where the uplinks from the ToR (Top of Rack) to the Fabric (Spine) layer keep oversubscription low enough to allow for a non-contending network.
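As a rough sketch of what one such POD looks like on the wire, the toy model below enumerates the leaf-to-spine links and totals the uplink bandwidth per rack (the device counts and link speeds are assumptions for illustration only):

```python
# A toy model of one POD's wiring: every RSW (leaf) connects to every FSW
# (spine). The device counts and speeds are assumptions for illustration.

from itertools import product

FSWS = [f"fsw{i}" for i in range(1, 3)]   # 2 fabric/spine switches (assumed)
RSWS = [f"rsw{i}" for i in range(1, 5)]   # 4 top-of-rack/leaf switches (assumed)
LINKS_PER_PAIR = 3                        # 3x 100G uplinks per RSW-FSW pair (assumed)
LINK_GBPS = 100

links = [
    {"a": rsw, "b": fsw, "count": LINKS_PER_PAIR, "speed_gbps": LINK_GBPS}
    for rsw, fsw in product(RSWS, FSWS)
]

for link in links:
    print(f'{link["a"]} <-> {link["b"]}: {link["count"]} x {link["speed_gbps"]}G')

uplink_per_rsw = len(FSWS) * LINKS_PER_PAIR * LINK_GBPS
print(f"Total uplink per RSW: {uplink_per_rsw}G")
```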

A Fat Tree design sizes the uplinks from the ToR to the fabric layer switches to provide non-contended throughput. The image below is a sample POD design with FSW (Fabric or Spine) switches and RSW (Top of Rack or Leaf) switches. Each RSW has 3x 100G uplinks to each FSW, and the servers have 2x 10G uplinks to the RSW. If the RSW has 48x 10G server ports and 6x 100G uplinks, that is 480G of server-facing capacity against 600G of uplink capacity, a 1.25:1 uplink-to-downlink ratio, so the fabric stays non-contended at full rate.
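A quick sanity check of that arithmetic, using the port counts from the example above (the ratio convention of downlink capacity over uplink capacity is the common one, not anything specific to this design):

```python
# Checking the oversubscription math for the sample POD above.

server_capacity = 48 * 10   # 48x 10G server-facing ports -> 480G
uplink_capacity = 6 * 100   # 6x 100G uplinks to the FSWs -> 600G

oversub = server_capacity / uplink_capacity
print(f"Downlink {server_capacity}G vs uplink {uplink_capacity}G")
print(f"Oversubscription {oversub:.2f}:1 "
      f"(uplink-to-downlink = {uplink_capacity / server_capacity:.2f}:1)")
# -> 0.80:1 oversubscription, i.e. 1.25:1 uplink-to-downlink: non-contended at full rate.
```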

The goal of the design is agility. Building out these PODs with fixed-port devices supports predictable bandwidth and latency and avoids the backplane bottlenecks that elephant flows can cause in chassis boxes. This design approach minimizes the features needed, maximizes the physical resources of each switch, and avoids the software and other complications that come with chassis boxes. Chassis boxes are physically less flexible and encourage more dependency on “God” boxes.

What you lose compared to chassis boxes is power and cabling efficiency per single rack unit, but you still gain agility. Single rack unit switch topologies can be deployed for specific use cases, reused across generations, and, with proper underlay/overlay design, avoid vendor lock-in. The key advantage of this design is fault tolerance: with fewer “critical” devices, switches are quicker to replace or redeploy. In my next blog, I will discuss the underlay routing design of these networks.

References:

Russ White – Effective Data Center Design Techniques
