Virtualisation set to transform data centre networking
Friday, November 12, 2010 at 8:56AM
Roy Rubenstein in Bridge Port Extension, Brocade virtual cluster switching (VCS), Converged Network Adaptor, Data Center Bridging, Edge Virtual Bridging, Feature, Fibre Channel over Ethernet, Juniper Stratus, TRILL, Virtual Ethernet Port Aggregation

Briefing:  Data centre switching

Part 3: Networking developments

The adoption of virtualisation techniques is causing an upheaval in the data centre. Virtualisation is being used to boost server performance, but its introduction is changing how switching equipment is networked.

“This is the most critical period of data centre transformation seen in decades,” says Raju Rajan, global system networking evangelist at IBM.

 

“We are on a long hard path – it is going to be a really challenging transition”

Stephen Garrison, Force10 Networks

Data centre managers want to accommodate variable workloads, and that requires the moving of virtualised workloads between servers and even between data centres. That is leading to new protocol developments and network consolidation, all the while making IT management more demanding, and hence requiring greater automation.

“We used to share the mainframe, now we share a pool of resources,” says Andy Ingram, vice president of product marketing and business development at the fabric and switching technologies business group at Juniper Networks. “What we are trying to get to is better utilisation of resources for the purpose of driving applications - the data centre is about applications.”

 

Networking challenges

New standards to meet the networking challenges created by virtualisation are close to completion and are already appearing in equipment. In turn, equipment makers are developing switch architectures that will scale to support tens of thousands of 10-Gigabit-per-second (Gbps) Ethernet ports. But industry experts expect these developments will take up to a decade to become commonplace due to the significant hurdles to be overcome.

“We are all marketing to a very distant future where most of the users are still trying to get their arms around eight virtual machines on a server,” says Stephen Garrison, vice president marketing at Force10 Networks. “We are on a long hard path – it is going to be a really challenging transition.”

IBM points out that its customers are used to working in IT silos, selecting subsystems independently. New work practices across divisions will be needed if the networking challenges are to be addressed. “For the first time, you cannot make a networking choice without understanding the server, virtualisation, storage, security and the operations support strategies,” says Rajan.

A lot of the future value of these various developments will be based on enabling IT automation. “That is a big hurdle for IT to get over: allowing systems to manage themselves,” says Zeus Kerravala, senior vice president, global enterprise and consumer research at market research firm, Yankee Group. “Do I think this vision will happen? Sure I do, but it will take a lot longer than people think.” Yankee expects it will be closer to ten years before these developments become commonplace.

Networking provides the foundation for servers and storage and, ultimately, the data centre’s applications. “Fifty percent of the data centre spend is servers, 35% is storage and 15% networking,” says Ingram. “The key resources I want to be efficient are the servers and the storage; what interconnects them is the network.”

Traditionally, applications have resided on dedicated servers, but server usage has been low, typically around 10%. Given the huge numbers of servers deployed in data centres, this is no longer acceptable.

 

“From an IT perspective, the cost of computing should fall quite dramatically; if it doesn’t fall by half we will have failed”

Zeus Kerravala, Yankee Group

Virtualisation splits a server’s processing into time-slots to support 10, 100 and even 1,000 virtual machines, each with its own application. The server usage that results ranges from 20% to as high as 70%.

That could result in significant efficiencies when you consider the growth of server virtualisation: in 2010, deployed virtual machines will outnumber physical servers for the first time, claims market research firm IDC. And Gartner has said that half of all workloads will be virtualised by 2012, notes Kash Shaikh, Cisco’s group manager, data center product marketing.
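
Those figures suggest how far workloads can be consolidated. The short calculation below is a sketch of the arithmetic, using only the usage figures quoted in this article rather than data from any particular operator:

```python
# Sketch: consolidation arithmetic implied by the figures above. If a
# dedicated server runs at roughly 10% usage, driving a virtualised host
# towards a 20-70% target lets it absorb several of those workloads.

dedicated_usage_pct = 10

for target_pct in (20, 40, 70):
    workloads = target_pct // dedicated_usage_pct
    print(f"target {target_pct}%: about {workloads} former dedicated servers per host")
```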

In enterprise and hosting data centres, servers are typically connected using three tiers of switching. The servers are linked to access switches which in turn connect to aggregation switches whose role is to funnel traffic to the large, core switches.

An access switch typically sits on top of the server rack, which is why it is also known as a top-of-rack switch. Servers are now moving from 1Gbps to 10Gbps interfaces, with a top-of-rack switch typically connecting up to 40 servers.

Broadcom’s latest BCM56845 switch chip has 64x10Gbps ports. The BCM56845 can link 40 10Gbps-based servers to an aggregation switch via four high-capacity 40Gbps links. Each aggregation switch will likely have six to 12 40Gbps ports per line card and between eight and 16 cards per chassis. In turn, the aggregation switches connect to the core switches. The result is a three-tier architecture that can link thousands, even tens of thousands, of servers in the data centre.
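
The scale and oversubscription such a design implies can be worked out from the figures above. The calculation below is purely illustrative, using the quoted port counts rather than any vendor’s actual configuration:

```python
# Sketch: scale and oversubscription of the three-tier design described
# above (40 servers per rack at 10Gbps, four 40Gbps uplinks per top-of-rack
# switch, up to 12 40Gbps ports per aggregation line card and 16 cards per
# chassis). Illustrative only, not a vendor design.

servers_per_rack = 40
server_port_gbps = 10
uplinks_per_tor = 4
uplink_gbps = 40

tor_downlink_gbps = servers_per_rack * server_port_gbps   # 400 Gbps towards servers
tor_uplink_gbps = uplinks_per_tor * uplink_gbps            # 160 Gbps towards aggregation
oversubscription = tor_downlink_gbps / tor_uplink_gbps     # 2.5:1

agg_ports = 12 * 16                                        # 40Gbps ports per chassis
racks_per_agg = agg_ports // uplinks_per_tor               # racks one chassis can serve

print(f"Top-of-rack oversubscription: {oversubscription}:1")
print(f"Racks per aggregation chassis: {racks_per_agg}, "
      f"servers: {racks_per_agg * servers_per_rack}")
```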

The rise of virtualisation impacts data centre networking profoundly. Applications are no longer confined to single machines but are shared across multiple servers for scaling. Nor is the predominant traffic ‘north-south’, up and down this three-tier switch hierarchy. Instead, virtualisation promotes greater ‘east-west’ traffic between servers, across the same tiered equipment.

“The network has to support these changes and it can’t be the bottleneck,” says Cindy Borovick, vice president, enterprise communications infrastructure and data centre networks, at IDC. The result is networking change on several fronts.

“The [IT] resource needs to scale to one large pool and it needs to be dynamic, to allow workloads to be moved around,” says Ingram. But that is the challenge: “The inherent complexity of the network prevents it from scaling and prevents it from being dynamic.”

 

Data Center Bridging

Currently, IT staff manage several separate networks: Ethernet for the LAN, Fibre Channel for storage and Infiniband for high-performance computing. To migrate the traffic types onto a common network, the IEEE is developing the Data Center Bridging (DCB) Ethernet standard. A separate Fibre Channel over Ethernet (FCoE) standard, developed by the International Committee for Information Technology Standards (INCITS), enables Fibre Channel to be encapsulated and carried over DCB.

 

“No-one is coming to the market with 10-Gigabit [Ethernet ports] without DCB bundled in”

Raju Rajan, IBM

DCB is designed to enable the consolidation of many networks to just one within the data centre. A single server typically has multiple networks connected to it, including Fibre Channel and several separate 1Gbps Ethernet networks.

The DCB standard has several components: Priority Flow Control, which provides eight classes of traffic; Enhanced Transmission Selection, which manages the bandwidth allocated to different flows; and Congestion Notification, which, if a port begins to fill up, notifies the upstream hops back to the source to back off from sending traffic.
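
How the pieces fit together can be pictured with a simple sketch. The bandwidth shares and traffic names below are illustrative assumptions, not the IEEE wire formats or any vendor’s configuration syntax:

```python
# Sketch: carving up a 10Gbps DCB port. Priority Flow Control (PFC)
# distinguishes eight priorities and can make chosen ones lossless;
# Enhanced Transmission Selection (ETS) assigns each a bandwidth share.
# Values and names are illustrative only.

PORT_GBPS = 10

traffic_classes = {
    # priority: (name, ETS share %, lossless under PFC?)
    3: ("FCoE storage", 40, True),     # Fibre Channel cannot tolerate loss
    4: ("iSCSI / NAS", 20, True),
    0: ("best-effort LAN", 30, False),
    5: ("VM live migration", 10, False),
}

assert sum(share for _, share, _ in traffic_classes.values()) == 100

for priority, (name, share, lossless) in sorted(traffic_classes.items()):
    behaviour = "lossless (paused, not dropped)" if lossless else "lossy (may drop)"
    print(f"priority {priority}: {name:18s} {share * PORT_GBPS / 100:4.1f} Gbps  {behaviour}")
```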

“These standards – Priority Flow Control, Congestion Notification and Enhanced Transmission Selection – are some 98% complete, waiting for procedural things,” says Nick Ilyadis, chief technical officer for Broadcom’s infrastructure networking group. DCB is sufficiently stable to be implemented in silicon and is being offered on a growing number of platforms.

With these components, DCB can transport Fibre Channel losslessly. Fibre Channel is intolerant of loss and can take minutes to recover from a lost packet. With DCB, critical storage traffic such as FCoE, iSCSI and network-attached storage is now supported over Ethernet.

Network convergence may be the primary driver for DCB, but its adoption also benefits virtualisation. Since higher server usage results in extra port traffic, virtualisation promotes the transition from 1-Gigabit to 10-Gigabit Ethernet ports. “No-one is coming to the market with 10-Gigabit [Ethernet ports] without DCB bundled in,” says IBM’s Rajan.

The uptake is also being helped by the significant reduction in the cost of 10-Gigabit ports with DCB. “This year we will see 10-Gigabit DCB at about $350 per port, down from over $800 last year,” says Rajan. The upgrade is attractive when the alternative is using several 1-Gigabit Ethernet ports for server virtualisation, each port costing $50-$75.
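
The per-port economics are straightforward to work through; the short calculation below simply restates the prices quoted above:

```python
# Sketch: per-port economics using the prices quoted above - $350 for a
# 10GbE DCB port versus $50-$75 for a 1GbE port.

ten_gig_port_cost = 350
print(f"10GbE DCB: ${ten_gig_port_cost / 10:.0f} per Gbps")

for one_gig_port_cost in (50, 75):
    break_even_ports = ten_gig_port_cost / one_gig_port_cost
    print(f"1GbE at ${one_gig_port_cost}: ${one_gig_port_cost} per Gbps, "
          f"break-even at {break_even_ports:.1f} ports")
```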

Yet while DCB may be starting to be deployed, networking convergence remains in its infancy.

“FCoE seems to be lagging behind general industry expectations,” says Rajan. “For many of our data centre owners, virtualisation is the overriding concern.” Network convergence may be a welcome cost-reducing step, but it introduces risk. “So the net gain [of convergence] is not very clear, yet, to our end customers,” says Rajan. “But the net gain of virtualisation and cloud is absolutely clear to everybody.”

Global Crossing has some 20 data centres globally, including 14 in Latin America. These serve government and enterprise customers with storage, connectivity and firewall managed services.

“We are not using lossless Ethernet,” says Mike Benjamin, vice president at Global Crossing. “The big push for us to move to that [DCB] would be doing storage as a standard across the Ethernet LAN. We today maintain a separate Fibre Channel fabric for storage. We are not prepared to make the leap to iSCSI or FCoE just yet.”

 

TRILL

Another networking protocol under development is the Internet Engineering Task Force’s (IETF) Transparent Interconnection of Lots of Links (TRILL), which promotes large-scale Ethernet networks. TRILL’s primary role is to replace the spanning tree protocol, which was never designed to address the latest data centre requirements.

Spanning tree disables links in a layer-two network to avoid loops, ensuring traffic has only one way to reach a port. But disabling links can remove up to half the available network bandwidth. TRILL enables large layer-two networks linking many switches that avoid loops without turning off precious bandwidth.
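
The effect of those blocked links can be seen in a toy example. The topology below is hypothetical and neither protocol is actually modelled; it simply illustrates why keeping every uplink forwarding matters:

```python
# Sketch: stranded bandwidth in a toy network where each top-of-rack switch
# has two 40Gbps uplinks. Spanning tree blocks one uplink per switch to
# break the loop; TRILL-style multipathing keeps both forwarding.
# Hypothetical figures; neither protocol is modelled here.

tor_switches = 48
uplinks_per_tor = 2
uplink_gbps = 40

installed = tor_switches * uplinks_per_tor * uplink_gbps

stp_usable = tor_switches * 1 * uplink_gbps        # one active uplink per switch
trill_usable = installed                           # all uplinks load-balanced

print(f"Installed uplink capacity: {installed} Gbps")
print(f"Usable with spanning tree: {stp_usable} Gbps ({stp_usable / installed:.0%})")
print(f"Usable with TRILL multipathing: {trill_usable} Gbps (100%)")
```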

“TRILL treats the Ethernet network as the complex network it really is,” says Benjamin. “If you think of the complexity and topologies of IP networks today, TRILL will have similar abilities in terms of truly understanding a topology to forward across, and permit us to use load balancing which is a huge step forward.”

 

"Data centre operators are cognizant of the fact that they are sitting in the middle of the battle of [IT vendor] giants and they want to make the right decisions”

Cindy Borovick, IDC

Collapsing tiers

Switch vendors are also developing flatter switch architectures to reduce the switching tiers from three to two to ultimately one large, logical switch. This promises to reduce the overall number of platforms and their associated management as well as switch latency.

Global Crossing’s default data centre design is a two-tier switch architecture. “Unless that top tier starts to hit scaling problems, at which time we move into a three-tier,” says Benjamin. “A two-tier switch architecture really does have benefits in terms of cost and low-latency switching.”

Juniper Networks is developing a single-layer logical switch architecture. Dubbed Stratus, the architecture will support tens of thousands of 10Gbps ports and span the data centre. While Stratus has still to be detailed, Juniper has said the design will be based on a 64x10-Gbps building block chip. Stratus will be in customer trials by early 2011. “We have some customers that have some very difficult networking challenges that are signed up to be our early field trials,” says Ingram.
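
Juniper has yet to detail Stratus, but the scale achievable from a 64-port building block can be sketched with a generic two-stage leaf-and-spine calculation. This illustrates the general approach only, not Juniper’s design:

```python
# Sketch: how many non-blocking 10Gbps ports a generic two-stage
# leaf/spine fabric offers when built from 64x10Gbps switch chips.
# Not a description of Juniper's Stratus architecture.

chip_ports = 64

leaf_server_ports = chip_ports // 2     # half of each leaf chip faces servers
leaf_uplinks = chip_ports // 2          # half faces the spine

max_spine_chips = leaf_uplinks          # one uplink to each spine chip
max_leaf_chips = chip_ports             # each spine chip reaches 64 leaves

total_server_ports = max_leaf_chips * leaf_server_ports
print(f"Leaves: {max_leaf_chips}, spines: {max_spine_chips}, "
      f"non-blocking 10Gbps ports: {total_server_ports}")

# Reaching tens of thousands of ports needs a further stage or some
# degree of oversubscription at the leaf.
```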

Brocade is about to launch its virtual cluster switching (VCS) architecture. “There will be 10 switches within a cluster and they will be managed as if it is one chassis,” says Simon Pamplin, systems engineering pre-sales manager for Brocade UK and Ireland. VCS supports TRILL and DCB.

“We have the ability to make much larger flat layer two networks which ease management and the mobility of [servers’] virtual machines, whereas previously you were restricted to the size of the spanning tree layer-two domain you were happy to manage, which typically wasn’t that big,” says Pamplin.

Cisco’s Shaikh argues multi-tiered switching is still needed, for system scaling and separation of workloads: “Sometimes [switch] tiers are used for logical separation, to separate enterprise departments and their applications.” However, Cisco itself is moving to fewer tiers with the introduction of its FabricPath technology within its Nexus switches that support TRILL.

“There are reasons why you want a multi-tier,” agrees Force10 Networks’ Garrison. “You may want a core and a top-of-rack switch that denotes the server type, or there are some [enterprises] that just like a top-of-rack as you never really touch the core [switches]; with a single-tier you are always touching the core.”

Garrison argues that a flat network should not be equated with single tier: “What flat means is: Can I create a manageable domain that still looks like it is layer two to the packet even if it is a multi-tier?”

Global Crossing has been briefed by vendors such as Juniper and Brocade on their planned logical switch architectures and the operator sees much merit in these developments. But its main concern is what happens once such an architecture is deployed.

“Two years down the road, not only are we forced back to the same vendor no matter what other technology advancements another vendor has made, we also risk that they have phased out that generation of switch we installed,” says Benjamin. If the vendor does not remain backwards compatible, the risk is a complete replacement of the switches may be necessary.

Benjamin points out that while it is the proprietary implementations that enable the single virtual architectures, the switches also support the networking standards. Accordingly, a data centre operator can always switch off the proprietary elements that enable the single virtual layer and revert to a traditional switched architecture.

 

Edge Virtual Bridging and Bridge Port Extension

A networking challenge caused by virtualisation is switching traffic between virtual machines and moving them between servers. A server’s software-based hypervisor, which oversees the virtual machines, comes with a virtual switch. But the industry consensus is that hardware, rather than software running on the server, is best for switching.

There are two standards under development to handle virtualisation requirements: the IEEE 802.1Qbg Edge Virtual Bridging (EVB) and the IEEE 802.1Qbh Bridge Port Extension.  The 802.1Qbg camp is backed by many of the leading switch and network interface card vendors, while 802.1Qbh is based on Cisco Systems’ VN-Tag technology.

Virtual Ethernet Port Aggregation (VEPA), originally a proprietary scheme, is the transport mechanism used as part of 802.1Qbg. In terms of networking, VEPA allows traffic to exit and re-enter the same physical server port, enabling switching between virtual ports. EVB’s role is to provide the required virtual machine configuration and management.

“The network has to recognise the virtual machine appearing on the virtual interfaces and provision the network accordingly,” says Broadcom’s Ilyadis. “That is where EVB comes in, to recognise the virtual machine and use its credentials for the configuration.”

Brocade supports EVB and VEPA as part of its converged network adaptor (CNA) card, which also supports DCB and FCoE. “You have software switches within the hypervisor, you have some capabilities in the CNA and some in the edge switch,” says Pamplin. “We don’t see the soft-switch as too beneficial, as some CPU cycles are stolen to support it.”

Instead Brocade does the switching within the CNA.  When a virtual machine within a server needs to talk to another virtual machine, the switching takes place at the adaptor. “We vastly reduce what needs to go out on the core network,” says Pamplin.  “If we do have traffic that we need to monitor and put more security around, we can take that traffic out through the adaptor to the switch and switch it back – ‘hairpin’ it - into the server.”
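
The hairpin idea can be pictured with a small sketch. The function and names below are hypothetical, intended only to illustrate the forwarding decision rather than Brocade’s implementation or the 802.1Qbg specification:

```python
# Sketch: the 'hairpin' idea behind VEPA-style edge switching. Frames
# between VMs on the same server are handled in hardware - in the adaptor,
# or sent out to the edge switch and reflected back in - rather than by
# the hypervisor's soft switch. Hypothetical names; not a real implementation.

from dataclasses import dataclass

@dataclass
class Frame:
    src_vm: str
    dst_vm: str
    needs_inspection: bool = False    # e.g. traffic subject to security policy

def forward(frame: Frame, local_vms: set) -> str:
    """Decide where this sketch switches the frame."""
    if frame.dst_vm not in local_vms:
        return "sent out of the physical port to the network"
    if frame.needs_inspection:
        # Hairpin: out to the edge switch, policy applied, reflected back in.
        return "hairpinned via the edge switch back into the server"
    return "switched locally in the converged network adaptor"

vms_on_server = {"vm-a", "vm-b", "vm-c"}
print(forward(Frame("vm-a", "vm-b"), vms_on_server))
print(forward(Frame("vm-a", "vm-b", needs_inspection=True), vms_on_server))
print(forward(Frame("vm-a", "vm-z"), vms_on_server))
```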

The 802.1Qbh Bridge Port Extension uses a tag that is added to an Ethernet frame. The tag is used to control and identify the virtual machine traffic, and to enable port extension. According to Cisco, port extension allows the aggregation of a large number of ports through hierarchical switches. “This provides a way of doing a large fan-out while maintaining smaller management tiering,” says Prashant Gandhi, technical leader in the internet business systems unit at Cisco.

For example, top-of-rack switches could act as port extenders managed by the next-tier switch. “This would significantly simplify provisioning and management of a large number of physical and virtual Ethernet ports,” says Gandhi.
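
The idea can be sketched as follows; the tag fields and class names are invented for illustration and are not the 802.1Qbh (VN-Tag) frame format:

```python
# Sketch: the port-extension idea. A port extender (e.g. a top-of-rack box)
# tags each upstream frame with a virtual-interface identifier so that the
# controlling bridge one tier up can distinguish a large number of physical
# and virtual ports and apply policy centrally. Field and class names are
# invented; this is not the 802.1Qbh (VN-Tag) wire format.

from dataclasses import dataclass, field

@dataclass
class TaggedFrame:
    virtual_port_id: int     # identifies the downstream (virtual) port
    payload: bytes

@dataclass
class ControllingBridge:
    """Single management point for all ports behind its port extenders."""
    policy: dict = field(default_factory=dict)   # virtual_port_id -> settings

    def attach(self, virtual_port_id: int, vlan: int) -> None:
        self.policy[virtual_port_id] = {"vlan": vlan}

    def receive(self, frame: TaggedFrame) -> str:
        settings = self.policy.get(frame.virtual_port_id, {})
        return f"port {frame.virtual_port_id} handled with policy {settings}"

bridge = ControllingBridge()
bridge.attach(virtual_port_id=42, vlan=100)
print(bridge.receive(TaggedFrame(virtual_port_id=42, payload=b"...")))
```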

 “The common goal of both [802.1Qbg and 802.1Qbh] standards is to help us with configuration management, to allow virtual machines to move with their entire configuration and not require us to apply and keep that configuration in sync across every single switch,” says Global Crossing’s Benjamin. “That is a huge step for us as an operator.”

“Our view is that VEPA will be needed,” says Gary Lee, director of product marketing at Fulcrum Microsystems, which has just announced its first Alta family switch chip that supports 72x10-Gigabit ports and can process over one billion packets per second.

Benjamin hopes both standards will be adopted by the industry: “I don’t think it’s a bad thing if they both evolve and you get the option to do the switching in software as well as in hardware based on the application or the technology that a certain data centre provider requires.” Broadcom and Fulcrum are supporting both standards to ensure their silicon will work in both environments.

“This [the Edge Virtual Bridging and Bridge Port Extension standards’ work] is still in flux,” says Ilyadis. “At the moment there are a lot of proprietary implementations but it is coming together and will be ready next year.”

 

The big picture

For Global Crossing, it will be economics or reliability considerations that determine which technologies are introduced first. DCB may come first, once its economics and reliability for storage cannot be ignored, says Benjamin. Or it will be the networking redundancy and reliability offered by the likes of TRILL that will be needed as the operator makes greater use of virtualisation.

And whether it is five or ten years, what will be the benefit of all these new protocols and switch architectures?

Malcolm Mason, EMEA hosting product manager at Global Crossing, says there will be less equipment doing more, which will save power and require less cabling. The new technologies will also enable more stringent service level agreements to be met.

“The end-user won’t notice a lot of difference, but what they should notice is more consistent application performance,” says Yankee’s Kerravala. “From an IT perspective, the cost of computing should fall quite dramatically; if it doesn’t fall by half we will have failed.”

Meanwhile data centre operators are working to understand these new technologies. “I get a lot of questions about end-to-end architectures,” says Borovick. “They are cognizant of the fact that they are sitting in the middle of the battle of [IT vendor] giants and they want to make the right decisions.”

 

Click here for Part 1: Single-layer switch architectures

Click here for Part 2: Ethernet switch chips

 
