Meeting the many needs of data centre interconnect

High capacity. Density. Power efficiency. Client-side optical interface choices. Coherent transmission. Direct detection. Open line systems. These are just some of the requirements vendors must meet to compete in the data centre interconnect market.

“A key lesson learned from all our interactions over the years is that there is no one-size-fits-all solution,” says Jörg-Peter Elbers, senior vice president of advanced technology, standards and IPR at ADVA Optical Networking. “What is important is that you have a portfolio to give customers what they need.”

 Jörg-Peter Elbers

Teraflex

ADVA Optical Networking detailed its Teraflex, the latest addition to its CloudConnect family of data centre interconnect products, at the OFC show held in Los Angeles in March.

The platform is designed to meet the demanding needs of the large-scale data centre operators that want high-capacity, compact platforms that are also power efficient. 

 

A key lesson learned from all our interactions over the years is that there is no one-size-fits-all solution

 

Teraflex is a one-rack-unit (1RU) stackable chassis that supports three hot-pluggable 1.2-terabit modules or ‘sleds’. A sled supports two line-side wavelengths, each capable of coherent transmission at up to 600 gigabits-per-second (Gbps). Each sled’s front panel supports various client-side interface module options: 12 x 100-gigabit QSFPs, 3 x 400-gigabit QSFP-DDs and lower speed 10-gigabit and 40-gigabit modules using ADVA Optical Networking’s MicroMux technology.

“Building a product optimised only for 400-gigabit would not hit the market with the right feature set,” says Elbers. “We need to give customers the possibility to address all the different scenarios in one competitive platform.”   

The Teraflex achieves 600Gbps wavelengths using a 64-gigabaud symbol rate and 64-ary quadrature-amplitude modulation (64-QAM). ADVA Optical Networking is using Acacia Communications’ latest Pico dual-core coherent digital signal processor (DSP) to implement the 600-gigabit wavelengths. ADVA Optical Networking would not confirm Acacia as its supplier, but Acacia chose to detail the Pico DSP at OFC because it wanted to end speculation as to the source of the coherent DSP for the Teraflex. That said, ADVA Optical Networking points out that Teraflex’s modular nature means coherent DSPs from various suppliers can be used.
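As a rough sanity check of those figures, the arithmetic works out as sketched below; the net rate is as quoted, while the exact FEC and framing overhead is not disclosed and is simply inferred here:

    # Back-of-the-envelope check of a 600Gbps wavelength (overhead share is inferred, not quoted)
    symbol_rate_gbaud = 64        # 64 gigabaud
    bits_per_symbol = 6           # 64-QAM carries log2(64) = 6 bits per symbol
    polarisations = 2             # dual-polarisation transmission
    raw_rate_gbps = symbol_rate_gbaud * bits_per_symbol * polarisations   # 768 Gbps raw
    net_rate_gbps = 600           # quoted net rate
    overhead_pct = 100 * (raw_rate_gbps - net_rate_gbps) / raw_rate_gbps
    print(raw_rate_gbps, round(overhead_pct, 1))   # 768 Gbps raw, ~21.9% left for FEC and framing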

 

The 1 rack unit Teraflex

The line-side optics supports a variety of line speeds, from 600Gbps down to 100Gbps; the lower the speed, the longer the reach.

The three-sled, 1RU Teraflex platform thus supports up to 3.6 terabits-per-second (Tbps) of duplex communications. This compares with a maximum of 800Gbps per rack unit using the current densest CloudConnect card, the 0.5RU QuadFlex.
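The density comparison is straightforward arithmetic, sketched below using only the figures quoted above:

    # Teraflex line-side density versus the 0.5RU QuadFlex card (figures as quoted)
    sleds_per_ru = 3
    wavelengths_per_sled = 2
    gbps_per_wavelength = 600
    teraflex_per_ru = sleds_per_ru * wavelengths_per_sled * gbps_per_wavelength   # 3,600 Gbps = 3.6 Tbps
    quadflex_per_ru = 400 * 2     # 400Gbps line rate per 0.5RU card, two cards per rack unit
    print(teraflex_per_ru / quadflex_per_ru)   # 4.5x the line-side capacity per rack unit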

Markets

The data centre interconnect market is commonly split into metro and long haul.

The metro data centre interconnect market requires high-capacity, short-haul, point-to-point links of up to 80km. Large-scale data centre operators may have several sites spread across a city, since they must pick locations wherever they can find them. Sites are typically no further than 80km apart to ensure a low enough latency that, collectively, they appear as one large logical data centre.

“You are extending the fabric inside the data centre across the data-centre boundary, which means the whole bandwidth you have on the fabric needs to be fed across the fibre link,” says Elbers. “If not, then there are bottlenecks and you are restricted in the flexibility you have.”  

Large enterprises also use metro data centre interconnect. Their businesses involve processing customer data - airline bookings, for example - and they cannot afford disruption. As a result, they may use twin data centres to ensure business continuity.

Here, too, latency is an issue, especially if synchronous mirroring of data using Fibre Channel takes place between sites. The storage protocol requires acknowledgements between the end points, such that the round-trip time over the fibre is critical. “The average distance of these connections is 40km, and no one wants to go beyond 80 or 100km,” says Elbers, who stresses that this is not an application for Teraflex, which is aimed at massive Ethernet transport. Customers using Fibre Channel typically need lower capacities and use solutions tailored to the application.
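For context, a rough round-trip-time estimate for these distances, using the common rule of thumb of roughly 5 microseconds of propagation delay per kilometre of fibre (an approximation, not a figure from ADVA):

    # Rough fibre round-trip-time estimate (5 us/km is an approximation for standard fibre)
    US_PER_KM = 5.0
    for km in (40, 80, 100):
        rtt_ms = 2 * km * US_PER_KM / 1000
        print(km, "km ->", rtt_ms, "ms round trip")   # 40km: 0.4ms, 80km: 0.8ms, 100km: 1.0ms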

The second data centre interconnect market - long haul - has different requirements. The links span long distances and the data sent between sites is limited to what is needed. Data centres are distributed to ensure continuous business operation and to improve quality of experience by delivering services closer to customers.

Hundreds of gigabits and even terabits are sent over the long-distance links between data centre sites, but this is commonly about a tenth of the data sent for metro data centre interconnect, says Elbers.

 

Direct Detection

Given the variety of customer requirements, ADVA Optical Networking is pursuing direct-detection line-side interfaces as well as coherent-based transmission.

At OFC, the system vendor detailed work with two proponents of line-side direct-detection technology - Inphi and Ranovus - alongside its coherent-based Teraflex announcement.

Working with Microsoft, Arista and Inphi, ADVA detailed a metro data centre interconnect demonstration that involved sending 4Tbps of data over an 80km link. The link comprised 40 Inphi ColorZ QSFP modules. A ColorZ module uses two wavelengths, each carrying 56Gbps using PAM-4 signalling. This is where having an open line system is important.
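The headline 4Tbps figure follows directly from the module count; a minimal sketch using only the quoted figures:

    # Aggregate capacity of the Microsoft/Arista/Inphi ColorZ demonstration (figures as quoted)
    modules = 40
    gbps_per_module = 100            # each ColorZ QSFP carries 100Gbps of client traffic
    wavelengths_per_module = 2       # two PAM-4 wavelengths per module
    total_tbps = modules * gbps_per_module / 1000      # 4.0 Tbps over the 80km link
    dwdm_channels = modules * wavelengths_per_module   # 80 wavelengths on the open line system
    print(total_tbps, dwdm_channels)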

Microsoft wanted to use QSFPs directly in its switches rather than deploy additional transponders, says Elbers. But this still requires line amplification, while the data centre operators want the same straightforward provisioning they expect with coherent technology. To this end, ADVA demonstrated its SmartAmp technology, which not only sets the power levels of the wavelengths and provides optical amplification but also automatically measures and compensates for the chromatic dispersion experienced over a link.
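To illustrate why automatic dispersion compensation matters at these distances, here is a rough estimate; the dispersion coefficient assumes standard single-mode fibre in the C-band, which the article does not specify:

    # Rough accumulated chromatic dispersion over an 80km metro link
    # (17 ps/nm/km assumes standard single-mode fibre; an assumption, not a stated figure)
    dispersion_ps_nm_km = 17
    link_km = 80
    accumulated = dispersion_ps_nm_km * link_km
    print(accumulated, "ps/nm")   # ~1,360 ps/nm accumulated, which the SmartAmp must measure and offset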

ADVA also detailed a 400Gbps metro transponder card based on PAM-4 implemented using two 200Gbps transmitter optical subassemblies (TOSAs) and two 200Gbps receiver optical subassemblies (ROSAs) from Ranovus.      

 

Clearly there is also space for a direct-detection solution but that space will narrow down over time

 

Choices

The decision to use coherent or direct-detection line-side optics boils down to a link’s requirements and the price an end user is willing to pay, says Elbers.

As coherent-based optics has matured, it has migrated from long haul to metro and now to data centre interconnect. One way to further reduce the cost of coherent is to cram more bits into each transmission. “Teraflex is adding chunks of 1.2Tbps per sled, which is great for people with very high capacities,” says Elbers, but small enterprises, for example, may only need a 100-gigabit link.

“For scenarios where you don’t need to have the highest spectral efficiency and the highest fibre capacity, you can get more cost-effective solutions,” says Elbers, explaining the system vendor’s interest in direct detection.

“We are seeing coherent penetrating more and more markets but still cost and power consumption are issues,” says Elbers. “Clearly there is also space for a direct-detection solution but that space will narrow down over time.”

Developments in silicon photonics that promise to reduce the cost of optics through greater integration and the adoption of packaging techniques from the CMOS industry will all help. “We are not there yet; this will require a couple of technology iterations,” says Elbers.

Until then, ADVA’s goal is for direct detection to cost half that of coherent.

“We want to have two technologies for the different areas; there needs to be a business justification [for using direct detection],” he says. “Having differentiated pricing between the two - coherent and direct detection - is clearly one element here.”   


ADVA's 100 Terabit data centre interconnect platform

  • The FSP 3000 CloudConnect comes in several configurations
  • The data centre interconnect platform scales to 100 terabits of throughput
  • The chassis use a thin 0.5 RU QuadFlex card with up to 400 Gig transport capacity
  • The optical line system has been designed to be open and programmable

ADVA Optical Networking has unveiled its FSP 3000 CloudConnect, a data centre interconnect product designed to cater for the needs of different data centre players. The company has developed several differently sized platforms to address the workloads and bandwidth needs of data centre operators such as Internet content providers, communications service providers, enterprises, cloud and colocation players.

Certain Internet content providers want to scale the performance of their computing clusters across their data centres. A cluster is a distributed-computing grouping comprising a defined number of virtual machines and processor cores (see Clusters, pods and recipes explained, bottom). Yet there are also data centre operators that only need to share limited data between their sites.

ADVA Optical Networking highlights two internet content providers - Google and Microsoft with its Azure cloud computing and services platform - that want their distributed clusters to act as one giant global cluster.

“The performance of the combined clusters is proportional to the bandwidth of the interconnect,” says Jim Theodoras, senior director, technical marketing at ADVA Optical Networking. “No matter how many CPU cores or servers, you are now limited by the interconnect bandwidth.”

ADVA Optical Networking cites a Google study that involved running an application on different cluster configurations: first a single cluster, then two side-by-side, then two clusters in separate buildings, through to clusters spread across continents. Google claimed the distributed clusters performed at only 20 percent of capacity due to the limited interconnect bandwidth. “The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster,” says Theodoras.

Yet other internet content providers have far more modest interconnect demands. ADVA cites one, as large as the two global-cluster players, that wants only 1.2 terabits-per-second (Tbps) between its sites. “It is normal duplication/replication between sites,” says Theodoras. “They want each campus to run as a cluster but they don’t want their networks to behave as a global cluster.”

 

FSP 3000 CloudConnect

The FSP 3000 CloudConnect has several configurations. The company stresses that it designed CloudConnect as a high-density, self-contained platform that is power-efficient and that comes with advanced data security features. 

All the CloudConnect configurations use the QuadFlex card, which has an 800 Gigabit throughput: up to 400 Gigabit of client-side interfaces and a 400 Gigabit line rate.

Jim Theodoras

The QuadFlex card is thin, measuring only half a rack unit (RU). Up to seven can be fitted in ADVA’s four-rack-unit (4 RU) platform, dubbed the SH4R, for a line-side transport capacity of 2.8 Tbps. The SH4R’s remaining, eighth slot hosts either one or two management controllers.

The QuadFlex line-side interface supports various rates and reaches, from 100 Gigabit ultra long haul to 400 Gigabit metro/regional, in increments of 100 Gigabit. Two carriers, each using polarisation-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM), are used to achieve the 400 Gbps line rate, whereas for 300 Gbps, 8-QAM is used on each of the two carriers.
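The two line rates scale with the bits carried per modulation symbol; a minimal sketch:

    # Why two PM-8QAM carriers give 300Gbps when two PM-16QAM carriers give 400Gbps:
    # at a fixed symbol rate, the line rate scales with bits per symbol.
    import math
    bits_16qam = int(math.log2(16))   # 4 bits per symbol
    bits_8qam = int(math.log2(8))     # 3 bits per symbol
    rate_16qam_gbps = 400             # two-carrier PM-16QAM rate, as quoted
    rate_8qam_gbps = rate_16qam_gbps * bits_8qam / bits_16qam
    print(rate_8qam_gbps)             # 300 Gbps, matching the quoted figure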

 

“The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster” 

 

The advantage of 8-QAM, says Theodoras, is that it offers 'almost 400 Gigabit of capacity' yet it can span continents. ADVA is sourcing the line-side optics but uses its own code for the coherent DSP-ASIC and module firmware. The company has not confirmed the supplier, but the design matches Acacia's 400 Gigabit coherent module announced at OFC 2015.

ADVA says the CloudConnect 4 RU chassis is designed for customers that want a terabit-capacity box. To achieve a terabit link, three QuadFlex cards and an Erbium-doped fibre amplifier (EDFA) can be used. The EDFA is a bidirectional amplifier design that includes an integrated communications channel and enables the 4 RU platform to achieve ultra long-haul reaches. “There is no need to fit into a [separate] big chassis with optical line equipment,” says Theodoras. Equally, data centre operators don’t want to be bothered with mid-stage amplifier sites.         

Some data centre operators have already installed 40 dense WDM channels at 100GHz spacing across the C-band, which they want to keep. ADVA Optical Networking offers a 14 RU configuration that uses three SH4R units, an EDFA and a DWDM multiplexer to enable a capacity upgrade. The three SH4R units house a total of 20 QuadFlex cards that fit 200 Gigabit into each of the 40 channels for an overall transport capacity of 8 terabits.
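The 8-terabit figure can be cross-checked two ways, since each card's 400 Gigabit line rate is spread over two optical carriers; a sketch using only the quoted figures:

    # Cross-checking the 8-terabit upgrade configuration (figures as quoted)
    cards = 20
    line_gbps_per_card = 400
    carriers_per_card = 2                      # each 400Gbps line rate uses two carriers
    channels = cards * carriers_per_card       # 40 DWDM channels at 100GHz spacing
    gbps_per_channel = line_gbps_per_card / carriers_per_card   # 200 Gbps per channel
    print(cards * line_gbps_per_card / 1000)   # 8.0 Tbps from the card count
    print(channels * gbps_per_channel / 1000)  # 8.0 Tbps from the channel plan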

ADVA CloudConnect configuration supporting 25.6 Tbps line side capacity. Source: ADVA Optical Networking

The last CloudConnect configuration is for customers designing a global cluster. Here, 10 SH4R units house 64 QuadFlex cards to achieve a total transport capacity of 25.6 Tbps and a throughput of 51.2 Tbps.

Also included are two EDFAs and a 128-channel multiplexer. Two EDFAs are needed because of the optical loss associated with the high channel count, such that each EDFA handles 64 of the channels. “For the [14 RU] 40 channels [configuration], you need only one EDFA,” says Theodoras.

The vendor has also produced a similar-sized configuration for the L-band. Combining the two 40 RU configurations delivers 51.2 Tbps of transport and 102.4 Tbps of throughput. “This configuration was built specifically for a customer that needed that kind of throughput,” says Theodoras.
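The 100-terabit headline figure follows from combining the C- and L-band configurations; a sketch using only the quoted figures:

    # Capacity of the global-cluster configurations (figures as quoted)
    quadflex_cards = 64
    line_gbps_per_card = 400
    c_band_transport_tbps = quadflex_cards * line_gbps_per_card / 1000   # 25.6 Tbps line side
    c_band_throughput_tbps = 2 * c_band_transport_tbps                   # 51.2 Tbps (client plus line)
    combined_transport_tbps = 2 * c_band_transport_tbps                  # 51.2 Tbps with C- and L-band
    combined_throughput_tbps = 2 * c_band_throughput_tbps                # 102.4 Tbps total throughput
    print(c_band_transport_tbps, c_band_throughput_tbps, combined_transport_tbps, combined_throughput_tbps)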

Other platform features include bulk encryption. ADVA says the encryption does not impact the overall data throughput while adding only a very slight latency hit. “We encrypt the entire payload; just a few framing bytes are hidden in the existing overhead,” says Theodoras.   

The security management is separate from the network management. “The security guys have complete control of the security of the data being managed; only they can encrypt and decrypt content,” says Theodoras.

CloudConnect consumes only 0.5W per Gigabit. The platform does not use electrical multiplexing of data streams over the backplane. The issue with such a switched backplane is that power is consumed independent of traffic, an approach the CloudConnect designers have avoided. “The reason we save power is that we don’t have all that switching going on over the backplane.” Instead, all the connectivity comes from the front panels of the cards.

The downside of this approach is that the platform does not support any-port to any-port connectivity. “But for this customer set, it turns out that they don’t need or care about that.”     

 

Open hardware and software  

ADVA Optical Networking claims its 4 RU basic unit addresses a sweet spot in the marketplace. The CloudConnect also presents fewer inventory items for data centre operators to manage compared with competing designs based on 1 RU or 2 RU pizza boxes, it says.

Theodoras also highlights the system’s open hardware and software design.

“We will let anybody’s hardware or software control our network,” says Theodoras. “You don’t have to talk to our software-defined networking (SDN) controller to control our network.” ADVA was part of a demonstration last year whereby an NEC and a Fujitsu controller oversaw ADVA’s networking elements.

 

Every vendor is always under pressure to have the best thing because you are only designed in for 18 months 

 

By open hardware, the company means that programmers can control the optical line system used to interconnect the data centres. “We have found a way of simplifying it so it can be programmed,” says Theodoras. “We have made it more digital so that they don’t have to do dispersion maps, polarisation mode dispersion maps or worry about [optical] link budgets.” The result is that data centre operators can now access all the line elements.

“At OFC 2015, Microsoft publicly said they will only buy an open optical line system,” says Theodoras. Meanwhile, Google is writing a specification for open optical line systems dubbed OpenConfig. “We will be compliant with Microsoft and Google in making every node completely open.”

General availability of the CloudConnect platforms is expected at the year-end. “The data centre interconnect platforms are now with key partners, companies that we have designed this with,” says Theodoras. 

 

Clusters, pods and recipes explained

A cluster is made up of a number of virtual machines and CPU cores and is defined in software. A cluster is a virtual entity, says Theodoras, unrelated to the way data centre managers define their hardware architectures. 

“Clusters vary a lot [between players],” says Theodoras. “That is why we have had to make scalability such a big part of CloudConnect.” 

The hardware definition is known as a pod or recipe. “How these guys build the network is that they create recipes,” says Theodoras. “A pod with this number of servers, this number of top-of-rack switches, this amount of end-of-row router-switches and this transport node; that will be one recipe.”    

Data centre players update their recipes every 18 months. “Every vendor is always under pressure to have the best thing because you are only designed in for 18 months,” says Theodoras.   

Vendors are told well in advance what the next hardware requirements will be, and by when they will be needed, to meet the new recipe.

In summary, pods and recipes refer to how the data centre architecture is built, whereas a cluster is defined at a higher, more abstract layer.   

