Virtualisation set to transform data centre networking

Briefing:  Data centre switching

Part 3: Networking developments

The adoption of virtualisation techniques is causing an upheaval in the data centre. Virtualisation is being used to boost server performance, but its introduction is changing how switching equipment is networked.

“This is the most critical period of data centre transformation seen in decades,” says Raju Rajan, global system networking evangelist at IBM.

 

“We are on a long hard path – it is going to be a really challenging transition”

Stephen Garrison, Force10 Networks

 

 

 

Data centre managers want to accommodate variable workloads, and that requires the moving of virtualised workloads between servers and even between data centres. That is leading to new protocol developments and network consolidation, all the while making IT management more demanding, and hence requiring greater automation.

“We used to share the mainframe, now we share a pool of resources,” says Andy Ingram, vice president of product marketing and business development at the fabric and switching technologies business group at Juniper Networks. “What we are trying to get to is better utilisation of resources for the purpose of driving applications - the data centre is about applications.”

 

Networking challenges

New standards to meet the networking challenges created by virtualisation are close to completion and are already appearing in equipment. In turn, equipment makers are developing switch architectures that will scale to support tens of thousands of 10-Gigabit-per-second (Gbps) Ethernet ports. But industry experts expect these developments will take up to a decade to become commonplace due to the significant hurdles to be overcome.

“We are all marketing to a very distant future where most of the users are still trying to get their arms around eight virtual machines on a server,” says Stephen Garrison, vice president marketing at Force10 Networks. “We are on a long hard path – it is going to be a really challenging transition.”

IBM points out that its customers are used to working in IT silos, selecting subsystems independently. New work practices across divisions will be needed if the networking challenges are to be addressed. “For the first time, you cannot make a networking choice without understanding the server, virtualisation, storage, security and the operations support strategies,” says Rajan.

A lot of the future value of these various developments will be based on enabling IT automation. “That is a big hurdle for IT to get over: allowing systems to manage themselves,” says Zeus Kerravala, senior vice president, global enterprise and consumer research at market research firm, Yankee Group. “Do I think this vision will happen? Sure I do, but it will take a lot longer than people think.” Yankee expects it will be closer to ten years before these developments become commonplace.

Networking provides the foundation for servers and storage and, ultimately, the data centre’s applications. “Fifty percent of the data centre spend is servers, 35% is storage and 15% networking,” says Ingram. “The key resources I want to be efficient are the servers and the storage; what interconnects them is the network.”

Traditionally, applications have resided on dedicated servers, but equipment usage has been low, commonly around 10%. Given the huge numbers of servers deployed in data centres, this is no longer acceptable.

 

“From an IT perspective, the cost of computing should fall quite dramatically; if it doesn’t fall by half we will have failed”

Zeus Kerravala, Yankee Group

 

 

 

Virtualisation splits a server’s processing into time-slots to support 10, 100 and even 1,000 virtual machines, each with its own application. The resulting improvement in server usage ranges from 20% to as high as 70%.
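A rough back-of-the-envelope calculation, using the utilisation figures quoted in this article and assuming each consolidated workload keeps the load it had on its own box, shows why those numbers attract data centre managers:

```python
# Rough consolidation arithmetic using the utilisation figures quoted above.
# Assumption: each workload keeps the ~10% average load it had on a dedicated server.
standalone_utilisation = 0.10   # typical dedicated-server usage
target_utilisation = 0.70       # upper end quoted for virtualised servers

vms_per_server = round(target_utilisation / standalone_utilisation)  # ~7 workloads
servers_before = 1000
servers_after = -(-servers_before // vms_per_server)                  # ceiling division

print(f"{vms_per_server} VMs per host -> {servers_before} servers shrink to ~{servers_after}")
```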

That could result in significant efficiencies when you consider the growth of server virtualisation: in 2010, deployed virtual machines will outnumber physical servers for the first time, according to market research firm IDC. And Gartner has said that half of all workloads will be virtualised by 2012, notes Kash Shaikh, Cisco’s group manager of data centre product marketing.

In enterprise and hosting data centres, servers are typically connected using three tiers of switching. The servers are linked to access switches which in turn connect to aggregation switches whose role is to funnel traffic to the large, core switches.

An access switch typically sits on top of the server rack, which is why it is also known as a top-of-rack switch. Servers are now moving from 1Gbps to 10Gbps interfaces, with a top-of-rack switch typically connecting up to 40 servers.

Broadcom’s latest BCM56845 switch chip has 64x10Gbps ports. The BCM56845 can link 40 10Gbps servers to an aggregation switch via four high-capacity 40Gbps uplinks. Each aggregation switch will likely have six to 12 40Gbps ports per line card and between eight and 16 cards per chassis. In turn, the aggregation switches connect to the core switches. The result is a three-tier architecture that can link thousands, even tens of thousands, of servers in the data centre.
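A quick sketch of that arithmetic, taking the figures above and assuming a mid-range aggregation switch of nine 40Gbps ports per card and 12 cards, illustrates the scale and the oversubscription involved:

```python
# Back-of-the-envelope sizing for the three-tier design described above.
servers_per_tor = 40            # 10Gbps servers per top-of-rack switch
server_bw_gbps = 10
uplinks_per_tor = 4             # 40Gbps uplinks from each top-of-rack switch
uplink_bw_gbps = 40

oversubscription = (servers_per_tor * server_bw_gbps) / (uplinks_per_tor * uplink_bw_gbps)
print(f"Top-of-rack oversubscription: {oversubscription:.1f}:1")   # 2.5:1

# Assumed mid-range aggregation switch: 9 x 40Gbps ports per card, 12 cards per chassis.
ports_per_agg = 9 * 12
tors_per_agg = ports_per_agg // uplinks_per_tor     # each ToR consumes 4 uplinks
servers_per_agg = tors_per_agg * servers_per_tor
print(f"One aggregation switch can feed ~{servers_per_agg} servers")
```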

The rise of virtualisation profoundly affects data centre networking. Applications are no longer confined to single machines but are shared across multiple servers for scaling. Nor is the predominant traffic ‘north-south’, up and down this three-tier switch hierarchy; instead, virtualisation promotes greater ‘east-west’ traffic between servers across the same tiered equipment.

“The network has to support these changes and it can’t be the bottleneck,” says Cindy Borovick, vice president, enterprise communications infrastructure and data centre networks, at IDC. The result is networking change on several fronts.

“The [IT] resource needs to scale to one large pool and it needs to be dynamic, to allow workloads to be moved around,” says Ingram. But that is the challenge: “The inherent complexity of the network prevents it from scaling and prevents it from being dynamic.”

 

Data Center Bridging

Currently, IT staff manage several separate networks: Ethernet for the LAN, Fibre Channel for storage and Infiniband for high-performance computing. To migrate the traffic types onto a common network, the IEEE is developing the Data Center Bridging (DCB) Ethernet standard. A separate Fibre Channel over Ethernet (FCoE) standard, developed by the International Committee for Information Technology Standards, enables Fibre Channel to be encapsulated onto DCB.

 

“No-one is coming to the market with 10-Gigabit [Ethernet ports] without DCB bundled in”

Raju Rajan, IBM

 

DCB is designed to enable the consolidation of many networks to just one within the data centre. A single server typically has multiple networks connected to it, including Fibre Channel and several separate 1Gbps Ethernet networks.

The DCB standard has several components: Priority Flow Control, which provides eight classes of traffic; Enhanced Transmission Selection, which manages the bandwidth allocated to different flows; and Congestion Notification, which, if a port begins to fill up, can notify switches upstream along all the hops back to the source so they back off from sending traffic.
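As a simple illustration of how Enhanced Transmission Selection might divide a converged link between Priority Flow Control’s traffic classes, here is a sketch with invented class names and weights (assumptions for the example, not values from the standard):

```python
# Illustrative ETS-style bandwidth split across PFC traffic classes.
# The class names and weights are assumptions for the example only.
LINK_GBPS = 10
ets_weights = {
    "fcoe_storage": 40,   # lossless storage class
    "vm_migration": 20,
    "lan_data":     30,
    "management":   10,
}  # the remaining traffic classes are left idle in this sketch

total = sum(ets_weights.values())
for traffic_class, weight in ets_weights.items():
    share = LINK_GBPS * weight / total
    print(f"{traffic_class:14s} guaranteed ~{share:.1f} Gbps when the link is congested")
```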

“These standards - Priority Flow Control, Congestion Notification and Enhanced Transmission Selection – are some 98% complete, waiting for procedural things,” says Nick Ilyadis, chief technical officer for Broadcom’s infrastructure networking group. DCB is sufficiently stable to be encapsulated in silicon and is being offered on increasing numbers of platforms.

With these components DCB can transport Fibre Channel losslessly. Fibre Channel is intolerant of loss and can take minutes to recover from a lost packet. With DCB, critical storage traffic such as FCoE, iSCSI and network-attached storage is now supported over Ethernet.

Network convergence may be the primary driver for DCB, but its adoption also benefits virtualisation. Since higher server usage results in extra port traffic, virtualisation promotes the transition from 1-Gigabit to 10-Gigabit Ethernet ports. “No-one is coming to the market with 10-Gigabit [Ethernet ports] without DCB bundled in,” says IBM’s Rajan.

The uptake is also being helped by the significant reduction in the cost of 10-Gigabit ports with DCB. “This year we will see 10-Gigabit DCB at about $350 per port, down from over $800 last year,” says Rajan. The upgrade is attractive when the alternative is using several 1-Gigabit Ethernet ports for server virtualisation, each port costing $50-$75.
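The arithmetic behind that attraction is straightforward; the sketch below uses the quoted port prices and assumes a virtualised server that would otherwise need six 1-Gigabit ports:

```python
# Cost-per-gigabit comparison using the port prices quoted above.
cost_10g_dcb = 350            # dollars per 10-Gigabit DCB port (2010 figure quoted)
cost_1g = 60                  # dollars per 1-Gigabit port (mid-point of $50-$75)
ports_1g_needed = 6           # assumed number of 1G ports a virtualised server might use

print(f"10G DCB: ${cost_10g_dcb / 10:.0f} per Gbps")   # $35 per Gbps
print(f"1G     : ${cost_1g:.0f} per Gbps")             # ~$60 per Gbps
print(f"Six 1G ports cost ${ports_1g_needed * cost_1g} for 6Gbps "
      f"versus ${cost_10g_dcb} for a single 10Gbps DCB port")
```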

Yet while DCB may be starting to be deployed, networking convergence remains in its infancy.

“FCoE seems to be lagging behind general industry expectations,” says Rajan. “For many of our data centre owners, virtualisation is the overriding concern.” Network convergence may be a welcome cost-reducing step, but it introduces risk. “So the net gain [of convergence] is not very clear, yet, to our end customers,” says Rajan. “But the net gain of virtualisation and cloud is absolutely clear to everybody.”

Global Crossing has some 20 data centres globally, including 14 in Latin America. These serve government and enterprise customers with storage, connectivity and firewall managed services.

“We are not using lossless Ethernet,” says Mike Benjamin, vice president at Global Crossing. “The big push for us to move to that [DCB] would be doing storage as a standard across the Ethernet LAN. We today maintain a separate Fibre Channel fabric for storage. We are not prepared to make the leap to iSCSI or FCoE just yet.”

 

TRILL

Another networking protocol under development is the Internet Engineering Task Force’s (IETF) Transparent Interconnection of Lots of Links (TRILL), which enables large-scale Ethernet networks. TRILL’s primary role is to replace the spanning tree protocol, which was never designed for the latest data centre requirements.

Spanning tree disables links in a layer-two network to avoid loops, ensuring traffic has only one way to reach a port. But disabling links can remove up to half the available network bandwidth. TRILL enables large layer-two networks of linked switches that avoid loops without turning off precious bandwidth.
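A toy example makes the point: for any connected layer-two topology, spanning tree keeps only N-1 links active regardless of how many are installed. The four-switch full mesh below is an assumption chosen purely for illustration:

```python
# Toy illustration: links a spanning tree keeps active versus links installed.
# Assumed topology: four switches in a full mesh.
switches = 4
links_installed = switches * (switches - 1) // 2     # full mesh: 6 links
links_in_spanning_tree = switches - 1                # any spanning tree: 3 links

idle = links_installed - links_in_spanning_tree
print(f"{idle} of {links_installed} links ({idle / links_installed:.0%}) "
      "carry no traffic under spanning tree")
# TRILL can forward over all six links and load-balance flows
# across the equal-cost paths instead.
```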

“TRILL treats the Ethernet network as the complex network it really is,” says Benjamin. “If you think of the complexity and topologies of IP networks today, TRILL will have similar abilities in terms of truly understanding a topology to forward across, and permit us to use load balancing which is a huge step forward.”

 

"Data centre operators are cognizant of the fact that they are sitting in the middle of the battle of [IT vendor] giants and they want to make the right decisions”

Cindy Borovick, IDC

 

 

 

Collapsing tiers

Switch vendors are also developing flatter switch architectures to reduce the switching tiers from three to two to ultimately one large, logical switch. This promises to reduce the overall number of platforms and their associated management as well as switch latency.

Global Crossing’s default data centre design uses two tiers of switches. “Unless that top tier starts to hit scaling problems, at which time we move into a three-tier,” says Benjamin. “A two-tier switch architecture really does have benefits in terms of cost and low-latency switching.”

Juniper Networks is developing a single-layer logical switch architecture. Dubbed Stratus, the architecture will support tens of thousands of 10Gbps ports and span the data centre. While Stratus has yet to be detailed, Juniper has said the design will be based on a 64x10Gbps building-block chip. Stratus will be in customer trials by early 2011. “We have some customers that have some very difficult networking challenges that are signed up to be our early field trials,” says Ingram.

Brocade is about to launch its virtual cluster switching (VCS) architecture. “There will be 10 switches within a cluster and they will be managed as if it is one chassis,” says Simon Pamplin, systems engineering pre-sales manager for Brocade UK and Ireland. VCS supports TRILL and DCB.

“We have the ability to make much larger flat layer two networks which ease management and the mobility of [servers’] virtual machines, whereas previously you were restricted to the size of the spanning tree layer-two domain you were happy to manage, which typically wasn’t that big,” says Pamplin.

Cisco’s Shaikh argues multi-tiered switching is still needed, for system scaling and separation of workloads: “Sometimes [switch] tiers are used for logical separation, to separate enterprise departments and their applications.” However, Cisco itself is moving to fewer tiers with the introduction of its FabricPath technology within its Nexus switches that support TRILL.

“There are reasons why you want a multi-tier,” agrees Force10 Networks’ Garrison. “You may want a core and a top-of-rack switch that denotes the server type, or there are some [enterprises] that just like a top-of-rack as you never really touch the core [switches]; with a single-tier you are always touching the core.”

Garrison argues that a flat network should not be equated with single tier: “What flat means is: Can I create a manageable domain that still looks like it is layer two to the packet even if it is a multi-tier?”

Global Crossing has been briefed by vendors such as Juniper and Brocade on their planned logical switch architectures and the operator sees much merit in these developments. But its main concern is what happens once such an architecture is deployed.

“Two years down the road, not only are we forced back to the same vendor no matter what other technology advancements another vendor has made, we also risk that they have phased out that generation of switch we installed,” says Benjamin. If the vendor does not remain backwards compatible, the risk is that a complete replacement of the switches may be necessary.

Benjamin points out that while it is the proprietary implementations that enable the single virtual architectures, the switches also support the network standards. A data centre operator can therefore always switch off the proprietary elements that enable the single virtual layer and revert to a traditional switched architecture.

 

Edge Virtual Bridging and Bridge Port Extension

A networking challenge caused by virtualisation is switching traffic between virtual machines and moving the machines between servers. A server’s software-based hypervisor, which oversees the virtual machines, comes with a virtual switch. But the industry consensus is that hardware, rather than software running on a server, is best for switching.

There are two standards under development to handle virtualisation requirements: the IEEE 802.1Qbg Edge Virtual Bridging (EVB) and the IEEE 802.1Qbh Bridge Port Extension.  The 802.1Qbg camp is backed by many of the leading switch and network interface card vendors, while 802.1Qbh is based on Cisco Systems’ VN-Tag technology.

Virtual Ethernet Port Aggregation (VEPA), a proprietary element adopted as part of 802.1Qbg, is the transport mechanism. In networking terms, VEPA allows traffic to exit and re-enter the same physical server port to enable switching between virtual ports. EVB’s role is to provide the required virtual machine configuration and management.

“The network has to recognise the virtual machine appearing on the virtual interfaces and provision the network accordingly,” says Broadcom’s Ilyadis. “That is where EVB comes in, to recognise the virtual machine and use its credentials for the configuration.”

Brocade supports EVB and VEPA as part of its converged network adaptor (CNA) card, which also supports DCB and FCoE. “You have software switches within the hypervisor, you have some capabilities in the CNA and some in the edge switch,” says Pamplin. “We don’t see the soft-switch as too beneficial, as some CPU cycles are stolen to support it.”

Instead Brocade does the switching within the CNA.  When a virtual machine within a server needs to talk to another virtual machine, the switching takes place at the adaptor. “We vastly reduce what needs to go out on the core network,” says Pamplin.  “If we do have traffic that we need to monitor and put more security around, we can take that traffic out through the adaptor to the switch and switch it back – ‘hairpin’ it - into the server.”
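A minimal sketch of that forwarding choice, with invented names (Flow, needs_inspection, forward) rather than any real Brocade API, might look like this:

```python
# Hypothetical sketch of the adaptor-level forwarding choice described above.
# Names are illustrative only, not a real API.
from dataclasses import dataclass

@dataclass
class Flow:
    src_vm: str
    dst_vm: str
    same_host: bool
    needs_inspection: bool   # e.g. a security or monitoring policy applies

def forward(flow: Flow) -> str:
    if flow.same_host and not flow.needs_inspection:
        return "switched locally in the CNA"            # never leaves the server
    if flow.same_host and flow.needs_inspection:
        return "hairpinned via the edge switch (VEPA)"  # out and back on one port
    return "sent to the external network as normal"

print(forward(Flow("vm1", "vm2", same_host=True, needs_inspection=True)))
```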

The 802.1Qbh Bridge Port Extension uses a tag added to an Ethernet frame. The tag is used to control and identify virtual machine traffic, and to enable port extension. According to Cisco, port extension allows a large number of ports to be aggregated through hierarchical switches. “This provides a way of doing a large fan-out while maintaining smaller management tiering,” says Prashant Gandhi, technical leader, internet business systems unit, at Cisco.

For example, top-of-rack switches could be port extenders managed by the next-tier switch. “This would significantly simplify provisioning and management of a large number of physical and virtual Ethernet ports,” says Gandhi.
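The fan-out gain is easy to quantify; the port counts in this sketch are assumptions for illustration, not figures from the standard:

```python
# Illustrative 802.1Qbh-style fan-out: port extenders multiply the ports
# that one controlling switch manages. The counts below are assumptions.
controlling_switch_ports = 32        # downlinks on the managing switch
extender_downlinks = 48              # server-facing ports per port extender

logical_ports = controlling_switch_ports * extender_downlinks
print(f"One point of management for {logical_ports} server-facing ports")  # 1536
```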

 “The common goal of both [802.1Qbg and 802.1Qbh] standards is to help us with configuration management, to allow virtual machines to move with their entire configuration and not require us to apply and keep that configuration in sync across every single switch,” says Global Crossing’s Benjamin. “That is a huge step for us as an operator.”

“Our view is that VEPA will be needed,” says Gary Lee, director of product marketing at Fulcrum Microsystems, which has just announced its first Alta family switch chip that supports 72x10-Gigabit ports and can process over one billion packets per second.

Benjamin hopes both standards will be adopted by the industry: “I don’t think it’s a bad thing if they both evolve and you get the option to do the switching in software as well as in hardware based on the application or the technology that a certain data centre provider requires.” Broadcom and Fulcrum are supporting both standards to ensure their silicon will work in both environments.

“This [the Edge Virtual Bridging and Bridge Port Extension standards’ work] is still in flux,” says Ilyadis. “At the moment there are a lot of proprietary implementations but it is coming together and will be ready next year.”

 

The big picture

For Global Crossing, economics or reliability considerations will determine which technologies are introduced first. DCB may come first, once its economics and reliability for storage cannot be ignored, says Benjamin. Or it may be the networking redundancy and reliability offered by the likes of TRILL that is needed first, as the operator uses virtualisation more.

And whether it is five or ten years, what will be the benefit of all these new protocols and switch architectures?

Malcolm Mason, EMEA hosting product manager at Global Crossing, says there will be less equipment doing more, which will save power and require less cabling. The new technologies will also enable more stringent service level agreements to be met.

“The end-user won’t notice a lot of difference, but what they should notice is more consistent application performance,” says Yankee’s Kerravala. “From an IT perspective, the cost of computing should fall quite dramatically; if it doesn’t fall by half we will have failed.”

Meanwhile data centre operators are working to understand these new technologies. “I get a lot of questions about end-to-end architectures,” says Borovick. “They are cognizant of the fact that they are sitting in the middle of the battle of [IT vendor] giants and they want to make the right decisions.”

 

Click here for Part 1: Single-layer switch architectures

Click here for Part 2: Ethernet switch chips

 


Fulcrum's Alta switch chips add programmable pipeline to keep pace with standards

Briefing:  Data centre switching 

Part 2: Ethernet switch chips

Fulcrum Microsystems has announced its latest FocalPoint chip family of Ethernet switches. The Alta FM6000 series family supports up to 72 10-Gigabit ports and can process over one billion packets a second.

 

“Instead of every top-of-rack switch having a CPU subsystem, you could put all the horsepower into a set of server blades”

Gary Lee, Fulcrum Microsystems

 

The company’s Alta FM6000 series is its third generation of FocalPoint Ethernet switches. Based on a 65nm CMOS process, the Alta switch architecture includes a programmable packet-processing pipeline that can support emerging standards for data centre networking. These include Data Center Bridging (DCB), Transparent Interconnection of Lots of Links (TRILL), and two server virtualisation protocols: the IEEE 802.1Qbg Edge Virtual Bridging and the IEEE 802.1Qbh Bridge Port Extension. 

 

Why is this important?

Data centre networking is undergoing a period of upheaval due to server virtualisation. Data centre operators must cope with the changing nature of traffic flows, as the predominant traffic becomes server-to-server (east-west traffic) rather than between the servers and end users (north-south).

In turn, IT staff want to consolidate the multiple networks they must manage - for LAN, storage and high-performance computing - onto a single network based on DCB. 

They also want to reduce the number of switch platforms they must manage. This is leading switch vendors to develop larger, flatter architectures; instead of the traditional three tiers of switches, vendors are developing sleeker two-tier and even a single-layer, logical switch architecture that spans the data centre. 

“There are people out there that have enterprise gear where their data centre connection has to go through the access, aggregation and core [switches],” says Gary Lee, director of product marketing at Fulcrum Microsystems. “They may not want to swap out that gear so they are going to continue to have three tiers even if it is not that efficient.”

But other customers such as large cloud computing players do not require such a switch hierarchy and its associated software complexity, says Lee: “They are the ones that are driving to a ‘lean core’, made up of top-of-rack and the end-of-row switch that acts as a core switch.”

Switch vendor Voltaire, a customer of Fulcrum’s ICs, uses such an arrangement to create 288 10-Gigabit Ethernet ports based on two tiers of 24-port switch chips. With the latest 72-port FM6000 series, a two-tier architecture with over 2,500 10-Gigabit ports becomes possible. “The software can treat the entire structure of chips as a single large virtual switch,” says Lee.
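The scaling follows from simple folded-Clos arithmetic: in a non-blocking two-tier arrangement built from k-port elements, half of each leaf’s ports face servers and half face the spine, giving k²/2 usable ports. The sketch below reproduces both figures quoted above:

```python
# Two-tier (folded Clos / leaf-spine) port count from a single chip size.
def two_tier_ports(chip_ports: int) -> int:
    leaves = chip_ports                   # a spine chip has one link per leaf
    downlinks_per_leaf = chip_ports // 2  # half of each leaf's ports face servers
    return leaves * downlinks_per_leaf    # = chip_ports**2 // 2

print(two_tier_ports(24))   # 288  -> matches the Voltaire arrangement
print(two_tier_ports(72))   # 2592 -> "over 2,500" 10-Gigabit ports
```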

 

Alta architecture

[Figure: Fulcrum’s Alta FM6000 series architecture. Source: Fulcrum Microsystems]

Fulcrum’s FocalPoint 6000 series comprises nine devices with capacities from 160 to 720 Gigabits per second (Gbps). The Alta chip architecture has three main components:

  • input-output ports
  • RapidArray shared memory
  • the FlexPipe array pipeline.

Like Fulcrum’s second-generation Bali architecture, the 6000 series has 96 serialiser/deserialiser (serdes) ports, but these have been upgraded from 3.125Gbps to 10Gbps. “We have very flexible port logic,” says Lee. “We can group four serdes to create a XAUI [10 Gigabit Ethernet] port or create an IEEE 40 Gigabit Ethernet port.”

RapidArray is a single shared memory that can be written to and read from at full line rate from all ports simultaneously, says Fulcrum. Each memory output port has a set of eight class-of-service queues, while the shared memory can be partitioned to separate storage traffic from data traffic.

“The shared memory design is where we get the low latency, and good multicast performance which people in the broadband access market like for video distribution,” says Lee.

The architecture’s third main functional block is the FlexPipe array pipeline. The pipeline, new to Alta, is what enables up to a billion 64-byte packets to be processed each second. The packet-processing pipeline combines look-up tables and microcode-programmable functional blocks that process a packet’s fields. Being able to program the array pipeline means the device can accommodate changes to standards as they evolve, as well as switch vendors’ proprietary protocols.
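The billion-packet figure is consistent with running the chip’s 720Gbps at line rate with minimum-size frames, once the 20 bytes of preamble and inter-frame gap per frame are counted (a standard Ethernet assumption, not a Fulcrum figure):

```python
# Packet rate needed to run 720Gbps at line rate with minimum-size frames.
frame_bytes = 64
overhead_bytes = 20            # 8-byte preamble + 12-byte inter-frame gap
capacity_gbps = 720

wire_bits_per_frame = (frame_bytes + overhead_bytes) * 8
packets_per_second = capacity_gbps * 1e9 / wire_bits_per_frame
print(f"{packets_per_second / 1e9:.2f} billion packets per second")   # ~1.07
```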

 

OpenFlow

The FocalPoint software development kit that comes with the chips supports OpenFlow. OpenFlow is an academic initiative that allows networking protocols to be explored using existing hardware but it is of growing interest to data centre operators.

“It creates an industry-standard application programming interface (API) to the switches,” explains Lee. It would allow the likes of Google or Yahoo! to swap one vendor’s switch platform for another’s, as long as both vendors supported OpenFlow.

OpenFlow also establishes the idea of a central controller running on a server to configure the network. “Instead of every top-of-rack switch having a CPU subsystem, you could put all the horsepower into a set of server blades,” says Lee. This promises to lower the cost of switches and, more importantly, to allow operators to unshackle themselves from switch vendors’ software.
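A minimal sketch of the idea, assuming nothing about Fulcrum’s own software development kit (the class and field names below are invented for illustration): the controller holds match/action rules centrally and pushes them to the switches, which then forward purely in hardware.

```python
# Illustrative OpenFlow-style flow entry and controller; names are invented,
# not a real SDK or the OpenFlow wire protocol itself.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict                 # e.g. {"dst_mac": "aa:bb:cc:dd:ee:ff", "vlan": 10}
    actions: list               # e.g. ["output:port7"]
    priority: int = 100

class CentralController:
    """Runs on a server blade; the switches need little control-plane CPU."""
    def __init__(self):
        self.switch_tables: dict[str, list[FlowEntry]] = {}

    def push(self, switch_id: str, entry: FlowEntry) -> None:
        # In a real deployment this would be sent over the OpenFlow channel.
        self.switch_tables.setdefault(switch_id, []).append(entry)

ctrl = CentralController()
ctrl.push("tor-1", FlowEntry({"dst_mac": "aa:bb:cc:dd:ee:ff"}, ["output:port7"]))
```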

Lee points out that OpenFlow is still in its infancy. But Fulcrum has added an ‘OpenFlow vSwitch stub’ to its software that translates between OpenFlow APIs and FocalPoint APIs.

 

What next?

Fulcrum says it continues to monitor the various evolving standards such as DCB, TRILL and the virtualisation work. Fulcrum is also getting requests to support latency measurement on its chips using techniques such as synchronous Ethernet to ensure service level agreements are met.

As for future FocalPoint designs, these will have greater throughput, with larger table sizes, packet buffers and higher-speed 100 Gigabit Ethernet interfaces.

Meanwhile, all nine chips in the FM6000 series will be available from the second quarter of 2011.

 

 

Click here for Part 1: Single-layer switch architectures

Click here for Part 3: Networking developments

 


Is Broadcom’s chip powering Juniper’s Stratus?

Briefing:  Data centre switching

Part 1: Single-layer switch architectures

Juniper Networks’ Stratus switch architecture, designed for next-generation data centres, is several months away from trials. First detailed in 2009, Stratus is being engineered as a single-layer switch with an architecture that will scale to support tens of thousands of 10 Gigabit-per-second (Gbps) ports.

 

Stratus will be in customer trials in early 2011.

Andy Ingram, Juniper Networks

 

 

 

 

 

Data centres use a switch hierarchy, commonly made up of three layers. Multiple servers are connected to access switches, such as top-of-rack designs, which are connected to aggregation switches whose role is to funnel traffic to large, core data centre switches.

Moving to a single-layer design promises several advantages. Not only does a single-layer architecture reduce the overall number of managed platforms, bringing capital and operational expense savings, it also reduces switch latency.

 

Broadcom’s IC for Stratus?

The Stratus architecture has yet to be detailed by Juniper. But the company has said that the design will be based on a 64x10Gbps ASIC building block dubbed a path-forwarding engine (PFE).

“The building block – the PFE – that can have that kind of density (64x10Gbps) gives us the ability to build out the network fabric in a very economical way,” says Andy Ingram, vice president of product marketing and business development of the fabric and switching technologies business group at Juniper Networks.

Stratus is being designed to provide any-to-any connectivity and operate at wire speed. “You have a very dense, very high-cross-sectional bandwidth fabric,” says Ingram. “The only way to make it economical is to use dense ASICs.”

Broadcom’s latest StrataXGS Ethernet switch family - the BCM56840 series - comprises three devices to date, the largest of which - the BCM56845 – also has 64x10Gbps ports.

Juniper will not disclose whether it is using its own ASIC or a third-party device for Stratus.

Broadcom, however, has said that its BCM56840 series is being used by vendors developing flat, single-layer switch architectures. “Anyone using merchant Ethernet switching silicon to build a single-stage environment is probably using our technology,” says Nick Ilyadis, chief technical officer for Broadcom’s infrastructure networking group.

Stratus will be in customer trials in early 2011. “In a lot less than 6 months”, says Ingram. “We have some customers that have some very difficult networking challenges that are signed up to be our early field trials and we will work with them extensively.”

The timeline aligns with Broadcom’s claim that samples of the BCM56840 ICs have been available for months and will be in production by year-end.

According to Broadcom, only a handful of switch vendors have the resources to design such a complex switch ASIC and also expect to recoup their investment. Moreover, a switch vendor using Broadcom's IC has plenty of scope to differentiate its design using software, and even FPGA hardware if needed. It is software that brings out the many features of the BCM56845, says Broadcom.

 

The BCM56845

Broadcom’s BCM56840 ICs share a common feature set but differ in their switching capacity. The largest, the BCM56845, has a switching capacity of 640Gbps. The device’s 64x10 Gigabit Ethernet (GbE) ports can also be configured as 16x40 GbE ports. 

The BCM56845 supports Data Center Bridging (DCB), the Ethernet protocol enhancement that enables lossless transmission of storage and high-performance computing traffic. It also supports the Fibre Channel over Ethernet (FCoE) protocol, which frames Fibre Channel storage traffic over DCB-enhanced Ethernet.

Besides DCB Ethernet, the series includes layer-3 packet processing and routing. There is also a multi-stage content-aware engine that allows higher-layer, more complex packet inspection (layers 4 to 7 of the Open Systems Interconnection model) and policy management.

The content-aware functional block can also be used for packet cut-through, a technique that reduces switch latency by inspecting header information and beginning to forward while the packet’s payload is still arriving. Broadcom says the switch’s latency is less than one microsecond.
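The saving is easy to estimate: a store-and-forward switch must receive the whole frame before forwarding, while cut-through needs only the header. The frame size and header depth below are assumptions for illustration:

```python
# Store-and-forward versus cut-through serialisation delay at 10Gbps.
LINE_RATE_BPS = 10e9
frame_bytes = 1500      # assumed full-size Ethernet frame
header_bytes = 64       # assumed bytes inspected before a cut-through decision

store_and_forward_us = frame_bytes * 8 / LINE_RATE_BPS * 1e6   # ~1.2 us per hop
cut_through_us = header_bytes * 8 / LINE_RATE_BPS * 1e6        # ~0.05 us per hop
print(f"store-and-forward: {store_and_forward_us:.2f} us, "
      f"cut-through: {cut_through_us:.2f} us")
```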

Most importantly, the BCM56845 addresses the move to a flatter switching architecture in the data centre.

It supports the Transparent Interconnection of Lots of Links (TRILL) standard ratified by the Internet Engineering Task Force (IETF) in July. Ethernet uses a spanning tree technique to avoid the creation of loops within a network. However, the spanning tree becomes unwieldy as the Ethernet network grows, and works only at the expense of halving the available networking bandwidth. TRILL is designed to allow much larger Ethernet networks while using all available bandwidth.

Broadcom has its own protocol known as HiGig, which adds tags to packets. Using HiGig, a very large logical switch, made up of multiple interconnected switches, can be created and managed. Any port of the IC can be configured as a HiGig port.

So has Broadcom’s BCM56845 been chosen by Juniper Networks for Stratus?  “I really can’t comment on which customers are using this,” says Ilyadis.

 

 

Click here for Part 2: Ethernet switch chips

Click here for Part 3: Networking developments

 

