Data Center Stack

Enterprise data centers house critical computing resources in a secure, environmentally controlled environment. Data centers originally housed rows and rows of mainframe computers and related equipment dedicated to discrete functions for internal users. The client-server movement of the 1980s and 1990s brought computing power out of the data center and onto the desktop and workgroup, leaving data centers to house servers that provided back-end services to “fat-client” computers running the business logic. In the late 1990s and the first decade of the 21st century, the client-server model standardized on “thin-client” browsers, pushing much of the application processing back to the server side, onto computers in the data center. In this “n-tier” model, server-side processing has matured and specialized, allowing increased scalability and manageability. Furthermore, the standardization of web and browser technology created a business-to-consumer (B2C) marketplace. The data center is once again the center of computing power, and the current trend is toward fewer, bigger data centers. In many cases this means co-locating multiple resource owners, customers, and companies in either a tenant model or a cloud model.

The data center is where many of the architecture principles are borne out in physical representations. There seems to be a large knowledge gap between IT professionals who have grown up in a large, structured data center environment and those who – while equally talented – have not. It is therefore important for IT managers to understand the basics of data center design; not so much from the facilities perspective as from that of the n-tier data center “stack” and its physical topology. (Note that it is also critical to adapt to the multiple and inconsistent uses and meanings of “layers”, “zones”, and “tiers”.) Among the better published resources are the Cisco Press tome Data Center Fundamentals by Mauricio Arregoces and Maurizio Portolani (the first chapter alone presents an excellent overview) and the follow-up Cisco Data Center Infrastructure 2.5 Design Guide. There are also a few white papers: two published by the Burton Group (now part of Gartner), “Perimeters and Zones” (2006) and “Network Perimeters” (2008); Gartner’s “Securing the Network Perimeter Is More Important Than Ever” (2005); and Cisco’s “Deploying Firewalls Throughout Your Organization” (2006), all of which focus on the security aspects.

In the n-tier model, the client browser renders the data it pulls from the web architecture, but the server side of the architecture can consist of multiple, separate component servers. The application could consist of a browser and a web server; a browser, web server, and application server; or a browser, web server, application server, and database server. While arguably more complex, this modularity is easier to scale, manage, and secure. The grouping is typically accomplished via network security tiers, with a one-to-one mapping of server-side application components to a network zone that supports that tier’s function. Note that these tiers may live at different layers of the OSI model, so the figure to the right reflects some abstraction.
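To make the one-to-one mapping concrete, here is a minimal Python sketch of application components mapped to network security zones, with traffic allowed only between adjacent tiers. The zone names and rules are illustrative assumptions, not a standard.

```python
# Illustrative sketch: mapping n-tier application components to
# network security zones. Zone names are hypothetical examples.

TIER_TO_ZONE = {
    "browser":    "internet",         # client side, outside the perimeter
    "web_server": "web_tier_zone",    # presentation tier
    "app_server": "app_tier_zone",    # application/business-logic tier
    "db_server":  "data_tier_zone",   # data tier, most locked down
}

# Adjacent-tier-only policy: each zone may initiate connections
# only to the next tier down the stack.
ALLOWED_FLOWS = [
    ("internet",      "web_tier_zone"),
    ("web_tier_zone", "app_tier_zone"),
    ("app_tier_zone", "data_tier_zone"),
]

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Return True if traffic from src_zone to dst_zone is allowed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# Example: browsers may reach the web tier, but never the database directly.
assert flow_permitted("internet", "web_tier_zone")
assert not flow_permitted("internet", "data_tier_zone")
```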

Factor into the n-tier model the physical topology paradigm of a three-layer, hub-and-spoke data center LAN with access, aggregation, and core switches. The main impetus behind the three-layer data center LAN architecture was the low port density of previous generations of data-center LAN switches; it was common for first-generation switches to have only 16 or 32 ports. Even a medium-sized data center therefore required many access switches to connect its servers, and because traffic almost always had to travel between servers attached to different access switches, those switches had to be interconnected. The obvious way to interconnect them was with another set of switches, referred to as aggregation or distribution switches. Larger, higher-end data centers required so many aggregation switches that yet a third set of switches, known as core switches, was needed to interconnect the aggregation layer.
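The port-density arithmetic is easy to work through. The back-of-the-envelope sketch below, using assumed and purely illustrative numbers for server count, port density, and uplinks, shows how quickly low-density switches multiply into a three-layer hierarchy.

```python
# Back-of-the-envelope sketch of why low port density forced a
# three-layer hierarchy. All numbers are illustrative assumptions.
import math

SERVERS        = 1000   # servers in a medium-sized data center
ACCESS_PORTS   = 32     # ports per first-generation access switch
UPLINKS_PER_SW = 2      # redundant uplink ports each switch consumes
AGG_PORTS      = 32     # ports per aggregation/distribution switch

# Each access switch reserves uplink ports, so fewer ports serve hosts.
hosts_per_access = ACCESS_PORTS - UPLINKS_PER_SW
access_switches  = math.ceil(SERVERS / hosts_per_access)

# Every access switch needs UPLINKS_PER_SW ports on the aggregation layer.
agg_ports_needed = access_switches * UPLINKS_PER_SW
agg_switches     = math.ceil(agg_ports_needed / (AGG_PORTS - UPLINKS_PER_SW))

print(f"access switches:      {access_switches}")   # ~34
print(f"aggregation switches: {agg_switches}")      # ~3
# Once the aggregation switch count itself grows large, a third
# (core) layer is needed to interconnect the aggregation switches.
```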

These various groupings are as follows.

Perimeter Edge: The perimeter edge network borders the Internet and the core network and provides connectivity into the various security and redundancy components, including redundant connections to ISPs, routing via iBGP and eBGP, and security controls governing access to the Internet from the enterprise and vice versa.

Campus Core: Switches in this tier connect the perimeter edge, the aggregation switches for the server farms, the campus network, and any private circuits. The core is more a topology component than a logical security tier.

Aggregation Layer: This physical topology layer, also called the distribution layer, contains multilayer switches that perform an aggregation function, connecting server farms that span multiple access switches. The original justification for this layer was the low port density of first-generation switches, which had as few as 16 or 32 ports; this produced a large number of access switches that then needed to be interconnected.

The aggregation layer also contains devices that provide services to all the server farms, such as firewalls, load balancers, SSL termination, and IDS/IPS; many of these are located in the “Monitoring, Auditing, and Controlling Zone”.

Some engineers have lately recommended reducing latency by removing the aggregation layer and connecting access switches, via 10G modules, directly to the core switches. This may become more common as 10G prices drop.

Server Farm and Access Layer: This layer contains the access switches and the servers, which are divided into logical security tiers: the Web or Presentation Tier, the Application Tier, and the Data Tier. Each tier is increasingly locked down in terms of access; a sketch of the resulting inter-zone policy follows this list.

Monitoring, Auditing, and Controlling Zone: This zone contains tools such as IDS/IPS applications, configuration managers, centralized logging, and security event managers.

Storage Tier (not shown in diagram): The storage tier is typically a SAN connected to database servers via Fibre Channel over fiber cabling. This model of the storage tier is under pressure from the competing technologies FCoE (Fibre Channel over Ethernet) and iSCSI (block storage accessed directly over TCP/IP).
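To make the zone model above concrete, here is a minimal Python sketch of the kind of inter-zone firewall policy these groupings imply: each tier may initiate traffic only to the next tier inward, while the monitoring zone reaches every tier for management. Zone names, ports, and rules are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of an inter-zone policy implied by the groupings above.
# Zone names, ports, and rules are illustrative assumptions.

# (source zone, destination zone, destination port) tuples that are
# allowed; everything not listed is implicitly denied.
POLICY = {
    ("internet",  "web_tier",  443),   # B2C traffic through the perimeter edge
    ("web_tier",  "app_tier",  8080),  # presentation calls business logic
    ("app_tier",  "data_tier", 1433),  # application queries the database
    # The monitoring/auditing/controlling zone manages every tier.
    ("mgmt_zone", "web_tier",  22),
    ("mgmt_zone", "app_tier",  22),
    ("mgmt_zone", "data_tier", 22),
}

def permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny check against the explicit allow list."""
    return (src, dst, port) in POLICY

# Each tier is increasingly locked down: the Internet can reach the web
# tier, but any attempt to jump straight to the data tier is denied.
assert permitted("internet", "web_tier", 443)
assert not permitted("internet", "data_tier", 1433)
assert not permitted("web_tier", "data_tier", 1433)
```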


This traditional model of data center switching is being reconsidered by some engineers and vendors, an effort sometimes referred to as “flattening the data center”. The hub-and-spoke topology of the traditional three-layer data center LAN was appropriate for client-to-server communication (referred to as “north-south” traffic), but it becomes suboptimal under high volumes of server-to-server communication (referred to as “east-west” traffic). With higher port densities and higher-speed interfaces, the aggregation and core layers can be combined.
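A quick way to see why east-west traffic suffers in the three-layer design is to count switch hops. The sketch below, based on an assumed topology rather than any measurement, compares the worst-case path between two servers in a three-layer tree against a flattened design where aggregation and core are combined.

```python
# Illustrative sketch (assumed topology, not a measurement): worst-case
# switch-hop counts for east-west traffic in a three-layer tree versus
# a flattened design with a combined aggregation/core layer.

def east_west_hops(shared_access: bool, shared_aggregation: bool,
                   flattened: bool) -> int:
    """Switch hops between two servers, counting each switch traversed."""
    if shared_access:
        return 1      # same access switch
    if flattened:
        return 3      # access -> combined agg/core -> access
    if shared_aggregation:
        return 3      # access -> aggregation -> access
    return 5          # access -> agg -> core -> agg -> access

# Worst case in the traditional tree: five switches in the path.
assert east_west_hops(False, False, flattened=False) == 5
# Flattening caps the worst case at three.
assert east_west_hops(False, False, flattened=True) == 3
```

North-south traffic descends the tree once, so the extra layer costs it little; east-west traffic pays the full up-and-down path, which is why flattening appeals as server-to-server volumes grow.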