1. Boost Throughput with Spine-and-Leaf Network Architecture
LAN performance isn’t normally a huge concern for IT managers. Server-to-server communication is usually plenty fast for typical application programming interface calls, database operations and intertier traffic. But as NGWC drives up the number of networked systems integrated into a single application, traditional core-aggregation-access data center architectures can hit a hard wall as oversubscription and high latency degrade application performance.
Spine-and-leaf network architectures represent a rethinking of data center networks, driven by almost universal adoption of virtualization and greater east/west data flows. Cramming hundreds of virtual servers into a small space requires enormous network bandwidth brought directly to the rack, which means that 10-gigabit-per-second and 100Gbps links between the core and the rack are running at much higher levels of utilization.
Spine-and-leaf topologies are designed to accommodate higher bandwidth and lower latency requirements. Leaf switches (typically top-of-rack switches) serving stacks of hypervisors are connected to multiple spine switches, which replace the function of the core switch. At a minimum, a data center would have two spine switches for redundancy. But in larger deployments, there could be many spine switches. No matter how many there are, however, every leaf connects to every spine, such that every leaf is exactly one hop away from every other leaf, cutting network latency.
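The full-mesh property described above can be sketched as a small model. This is an illustrative sketch only (the switch names and counts are hypothetical, not from any vendor design); it shows why every leaf pair in a spine-and-leaf fabric has as many equal-length paths as there are spines.

```python
# Minimal sketch of a spine-and-leaf fabric as a full bipartite mesh:
# every leaf switch uplinks to every spine switch.
from itertools import product

def build_fabric(num_spines, num_leaves):
    """Return the set of (leaf, spine) uplinks in a full mesh."""
    return {(f"leaf{l}", f"spine{s}")
            for l, s in product(range(num_leaves), range(num_spines))}

def leaf_to_leaf_paths(links, src, dst):
    """All two-link paths src -> spine -> dst (one spine hop between leaves)."""
    spines_from_src = {s for (l, s) in links if l == src}
    spines_to_dst = {s for (l, s) in links if l == dst}
    return [(src, spine, dst) for spine in spines_from_src & spines_to_dst]

links = build_fabric(num_spines=4, num_leaves=8)
paths = leaf_to_leaf_paths(links, "leaf0", "leaf5")

print(len(links))  # 32 uplinks: 8 leaves x 4 spines
print(len(paths))  # 4 equal-cost paths, each crossing exactly one spine
```

Because every path between two leaves crosses exactly one spine, latency is uniform regardless of which racks the communicating servers sit in.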
In addition to better fault tolerance, spine-and-leaf topologies also deliver more bandwidth and better scalability. By jettisoning the old loop-free Spanning Tree Protocol in favor of approaches such as equal-cost multi-path routing, Shortest Path Bridging and Transparent Interconnection of Lots of Links, spine-and-leaf networks can deliver much higher levels of performance by using all of the interswitch link bandwidth available.
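The idea behind equal-cost multi-path routing can be sketched in a few lines. This is a hedged illustration, not any switch vendor's actual hashing algorithm: a flow's 5-tuple is hashed to pick one of several equal-cost spine paths, so packets within one flow stay in order while different flows spread across all available links.

```python
# Illustrative ECMP path selection: hash a flow's 5-tuple to choose
# one of N equal-cost paths deterministically.
import hashlib

def ecmp_pick(flow_5tuple, paths):
    """Map a flow to one of the equal-cost paths; same flow, same path."""
    digest = hashlib.sha256(repr(flow_5tuple).encode()).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

spines = ["spine1", "spine2", "spine3", "spine4"]
flow = ("10.0.1.5", 49152, "10.0.2.9", 443, "tcp")  # hypothetical flow

# The same flow always hashes to the same spine, preserving packet order.
print(ecmp_pick(flow, spines))
```

Spanning Tree would have blocked all but one of those four uplinks; hashing flows across all of them is how the fabric uses its full interswitch bandwidth.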
2. Improve Your Security Through Microsegmentation
Traditional data center architecture security is network driven: Pack servers onto subnets, and then possibly isolate the subnets using firewalls or access control lists. The end result is “chronological security” — application isolation is determined by when the server was installed, not what function it is serving. Microsegmentation turns this around by focusing on the applications and data flows between them. In the extreme, microsegmentation isolates every server and controls the traffic between isolated segments.
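A minimal sketch of that default-deny flow model, with hypothetical segment names and ports: each permitted application flow is listed explicitly, and anything not on the list is dropped.

```python
# Microsegmentation as a default-deny allowlist of application flows.
# Segment names and ports below are illustrative assumptions.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),  # web servers call the app tier
    ("app-tier", "db-tier", 5432),   # app tier queries the database
}

def permit(src_segment, dst_segment, dst_port):
    """Default deny: a flow passes only if it is explicitly listed."""
    return (src_segment, dst_segment, dst_port) in ALLOWED_FLOWS

print(permit("web-tier", "app-tier", 8443))  # True: documented flow
print(permit("web-tier", "db-tier", 5432))   # False: no lateral shortcut
```

The second check is the point: a compromised web server cannot reach the database directly, because that flow was never documented and permitted.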
Microsegmentation does present challenges, especially in large data centers, if the network infrastructure doesn’t have a flexible way to route and patch VLANs and subnets throughout the fabric. And, of course, microsegmentation does no good without an isolation component — either ultrafast firewalls or access lists built into the network infrastructure.
When properly implemented, microsegmentation delivers many security advantages. By controlling flow into each segment, this approach prevents attackers from moving laterally within the network and shrinks the attack surface. Microsegmentation reduces the urgency of patching and updates, helping to balance the cost of system downtime and update testing against the risk of unpatched vulnerabilities. And microsegmentation requires actually documenting flows between servers, which speeds troubleshooting and helps with compliance auditing.