Oct 01 2020

3 Steps to Take When Deploying Next-Generation Work Centers

A responsive ecosystem that integrates government data sources into a single, cohesive picture calls for a fresh look at the data center.

State and local governments have a treasure trove of data: everything from historical transaction logs and geospatial information to video surveillance feeds and connected sensors. The idea behind next-generation work center (NGWC) technology is to build an intelligent, responsive ecosystem that integrates all of these data sources into a single, cohesive picture, allowing users to assess a situation in real time and make decisions quickly.

While application teams are off building these amazing new decision support systems, IT managers must assess the readiness of their data center networks to handle what’s barreling down the road. NGWC is not just another type of application, because the core idea revolves around integrating, combining and analyzing massive piles of data to deliver precise information to the decision-maker. In data center terms, that means orders of magnitude more network traffic from servers to storage and databases, as well as much higher levels of server-to-server data flow.

Supporting this shift in traffic patterns is easier when IT administrators deploy innovative technologies to manage and control data center networks. Here are some new (and not-so-new) ideas for IT managers who need to support NGWC.

1. Boost Throughput with Spine-and-Leaf Network Architecture

LAN performance isn’t normally a huge concern for IT managers. Server-to-server communication is usually plenty fast for typical application programming interface calls, database operations and inter-tier communications. But as NGWC drives up the number of networked systems integrated into a single application, traditional core-aggregation-access data center architectures can hit a hard wall when oversubscription and high latency affect app performance.
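To see why oversubscription bites, consider a rough back-of-the-envelope sketch; the port counts and speeds below are illustrative assumptions, not measurements from any particular deployment:

```python
# Illustrative oversubscription math for one access switch in a traditional
# core-aggregation-access design. Port counts and speeds are assumptions.

def oversubscription_ratio(server_ports, server_gbps, uplinks, uplink_gbps):
    """Downstream server bandwidth divided by upstream uplink bandwidth."""
    downstream = server_ports * server_gbps   # worst-case traffic from servers
    upstream = uplinks * uplink_gbps          # capacity toward the aggregation layer
    return downstream / upstream

# 48 servers at 10Gbps sharing 4 x 40Gbps uplinks works out to 3:1,
# workable for modest east/west traffic, painful for NGWC-scale data flows.
print(f"{oversubscription_ratio(48, 10, 4, 40):.0f}:1")
```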

Spine-and-leaf network architectures represent a rethinking of data center networks, driven by almost universal adoption of virtualization and greater east/west data flows. Cramming hundreds of virtual servers into a small space requires enormous network bandwidth brought directly to the rack, which means that 10-gigabit-per-second and 100Gbps links between the core and the rack are running at much higher levels of utilization.

Spine-and-leaf topologies are designed to accommodate higher bandwidth and lower latency requirements. Leaf switches (typically top-of-rack switches) serving stacks of hypervisors are connected to multiple spine switches, which take over the function of the core switch. At a minimum, a data center would have two spine switches for redundancy, but larger deployments may have many. No matter how many there are, however, every leaf connects to every spine, so traffic between any two leaves crosses exactly one spine, keeping path lengths uniform and network latency low.
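Here is a minimal sketch of that wiring rule, using arbitrary switch counts, which confirms that any two leaves are connected through every spine by equal-length paths:

```python
from itertools import combinations

# Minimal spine-and-leaf model; four spines and eight leaves are arbitrary.
spines = [f"spine{i}" for i in range(1, 5)]
leaves = [f"leaf{i}" for i in range(1, 9)]

# The defining wiring rule: every leaf connects to every spine.
links = {(leaf, spine) for leaf in leaves for spine in spines}

def spine_paths(leaf_a, leaf_b):
    """Spines that provide a leaf -> spine -> leaf path between two leaves."""
    return [s for s in spines if (leaf_a, s) in links and (leaf_b, s) in links]

# Every pair of leaves gets one equal-length path per spine, so latency is
# predictable and bandwidth scales by adding spines.
for a, b in combinations(leaves, 2):
    assert len(spine_paths(a, b)) == len(spines)
print(f"every leaf pair has {len(spines)} equal-cost spine paths")
```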

In addition to better fault tolerance, spine-and-leaf topologies deliver more bandwidth and better scalability. By jettisoning the old loop-free Spanning Tree Protocol in favor of approaches such as equal-cost multi-path routing, Shortest Path Bridging and Transparent Interconnection of Lots of Links, spine-and-leaf networks achieve much higher levels of performance because they use all of the available interswitch link bandwidth instead of blocking redundant links.
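A simplified sketch of the idea behind equal-cost multi-path forwarding follows; the hashing and uplink names are conceptual stand-ins for what switch hardware actually does:

```python
import hashlib

# Conceptual equal-cost multi-path (ECMP) selection: hash a flow's 5-tuple
# onto one of the equal-cost uplinks so every interswitch link carries
# traffic. Real switches do this in hardware; uplink names are made up.
UPLINKS = ["to-spine1", "to-spine2", "to-spine3", "to-spine4"]

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port, uplinks=UPLINKS):
    flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(flow).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]  # a given flow always takes the same link

print(pick_uplink("10.1.1.10", "10.2.2.20", "tcp", 49152, 5432))
```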

2. Improve Your Security Through Microsegmentation

Traditional data center architecture security is network driven: Pack servers onto subnets, and then possibly isolate the subnets using firewalls or access control lists. The end result is “chronological security” — application isolation is determined by when the server was installed, not what function it is serving. Microsegmentation turns this around by focusing on the applications and data flows between them. In the extreme, microsegmentation isolates every server and controls the traffic between isolated segments.
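A minimal sketch of that model, with hypothetical segment names and a default-deny allow list (real deployments enforce the policy in firewalls or the network fabric, not application code):

```python
# Default-deny microsegmentation policy: only explicitly documented flows
# between segments are permitted. Segment names and ports are hypothetical.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
    ("monitoring", "app-tier", 9100),
}

def is_allowed(src_segment, dst_segment, dst_port):
    """Permit a flow only if it appears in the documented allow list."""
    return (src_segment, dst_segment, dst_port) in ALLOWED_FLOWS

print(is_allowed("web-tier", "app-tier", 8443))  # True: documented flow
print(is_allowed("web-tier", "db-tier", 5432))   # False: lateral movement blocked
```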

Microsegmentation does present challenges, especially in large data centers, if the network infrastructure doesn’t have a flexible way to route and patch VLANs and subnets throughout the fabric. And, of course, microsegmentation does no good without an isolation component — either ultrafast firewalls or access lists built into the network infrastructure.

When properly implemented, microsegmentation delivers many security advantages. By controlling flow into each segment, this approach prevents attackers from moving laterally within the network and shrinks the attack surface. Microsegmentation reduces the urgency of patching and updates, helping to balance the cost of system downtime and update testing against the risk of unpatched vulnerabilities. And microsegmentation requires actually documenting flows between servers, which speeds troubleshooting and helps with compliance auditing.

3. Stay on Top of Network Performance with Monitoring

Performance monitoring is not really a new concept or technology, but it is an area with a slew of new products and approaches. Rethinking the data center’s network and security to better support NGWC applications offers a rare opportunity to reconsider how the network is monitored. While basic reachability checking hasn’t changed much in 20 years, network performance monitoring has shifted significantly. Previously, IT managers drilled down to network ports and interswitch links; now the focus is moving from the port to the path: the combination of network elements that connects servers to each other.
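One way to picture the port-to-path shift, using invented hop names and latency figures, is to treat a path as an ordered list of network elements and derive end-to-end numbers from per-hop measurements:

```python
# Path-centric monitoring: a path is an ordered list of network elements,
# and end-to-end latency is the sum of per-hop latencies. Hop names and
# microsecond figures are invented for illustration.
path = ["leaf3", "spine2", "leaf7"]
hop_latency_us = {"leaf3": 4.0, "spine2": 6.5, "leaf7": 4.2}

path_latency = sum(hop_latency_us[hop] for hop in path)
print(f"path {' -> '.join(path)}: {path_latency:.1f} microseconds end to end")
```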

The limitations of tools such as NetFlow and sFlow matter even more in high-performance NGWC environments. These tools have been valuable for WAN troubleshooting and capacity planning, but in the data center, with 10Gbps and 100Gbps links in place, more precise and complete information is needed. Switch manufacturers are building hardware-assisted performance monitoring into their platforms, and IT managers can take advantage of the more detailed end-to-end statistics these tools deliver.

IT managers should look for monitoring tools that let them define service-level agreements for performance and then quickly see why the SLA isn’t being met. Correlating logs from multiple network elements into a consistent and clear picture takes time and effort, but this investment pays off. With a system that delivers a unified view of both performance and security, an IT manager will be in a better position to work with NGWC application teams to help isolate and resolve performance issues.
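A hedged sketch of that workflow, with invented thresholds and telemetry, shows the basic logic: define the SLA as thresholds, compare them against path measurements, and point at the element most responsible for a miss:

```python
# Compare measured path metrics against an SLA and name the worst hop.
# Thresholds and measurements are illustrative, not real telemetry.
SLA = {"latency_us": 20.0, "loss_pct": 0.1}

per_hop = {
    "latency_us": {"leaf3": 4.0, "spine2": 18.5, "leaf7": 4.2},
    "loss_pct":   {"leaf3": 0.0, "spine2": 0.05, "leaf7": 0.0},  # loss is roughly additive when small
}

for metric, limit in SLA.items():
    total = sum(per_hop[metric].values())
    if total > limit:
        worst = max(per_hop[metric], key=per_hop[metric].get)
        print(f"SLA miss on {metric}: {total:.2f} > {limit} (worst hop: {worst})")
    else:
        print(f"{metric} within SLA: {total:.2f} <= {limit}")
```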
