Cisco Systems' intelligent light

Network optimisation continues to exercise operators and content service providers as their requirements evolve with the growth of services such as cloud computing. Cisco Systems' newly announced elastic core architecture aims to improve networking efficiency and address particular service provider requirements.

“The core [network] needs to be more robust, agile and programmable”

Sultan Dawood, Cisco

“The core [network] needs to be more robust, agile and programmable – especially with the advent of the cloud,” says Sultan Dawood, senior manager, service provider marketing at Cisco. “As service providers look at next-generation infrastructure, convergence of IP and optical is going to have a big play.”

Cisco's elastic core architecture combines several developments. One is the integration of Cisco's 100 Gigabit-per-second (Gbps) dense wavelength division multiplexing (DWDM) coherent transponder, first introduced on its ROADM platform, onto its router to enable IP-over-DWDM. 

This is part of what Cisco calls nLight (intelligent light), which itself has three components: 100Gbps coherent ASIC hardware, the nLight control plane, and nLight colourless and contentionless ROADMs. “As packet and optical networks converge, intelligence between the layers is needed,” says Dawood. “Today, how the ROADM and the router communicate is limited.”

GMPLS [Generalized Multi-Protocol Label Switching] operates at the IP layer, and WSON [Wavelength Switched Optical Network] at the optical layer, each protocol performing control plane functions at its respective layer. “What nLight is doing is communicating between these two layers [using existing parameters] and providing the interaction,” says Dawood.

Ron Kline, principal analyst for network infrastructure at Ovum, describes nLight more generally as Cisco’s strategy for software-defined networking:  "Interworking control planes to share info across platforms and add the dynamic capabilities."

The second component of Cisco's announcement is an upgrade of its carrier-grade services engine (CGSE), from 20Gbps to 80Gbps, which fits within Cisco's CRS-3 core router and will be available from May 2013. The services engine enables such services as IPv6 and 'cloud routing' - network positioning which determines the most suitable resource for a customer’s request based on the content’s location and the data centre's loading.

Cisco has also added anti-distributed denial of service (anti-DDoS) software to counter cyber threats. “We have licensed software that we have put into our CRS-3 so that with our VPN services we can provide threat mitigation and scrub any traffic liable to hurt our customers,” says Dawood.

nLight

According to Cisco, several issues need to be addressed between the IP and optical layers. For example, the router and the optical infrastructure must exchange data such as circuit IDs, path identifiers and real-time status in order to avoid the manual intervention required today.

“With this intelligent data that is extracted due to these layers communicating, I can now make better, faster decisions that result in rapid service provisioning and service delivery,” says Dawood.

Cisco cites as an example a financial customer requesting a low-latency path. In this case, the optical network, via the nLight extraction process, highlights the most appropriate path, which has a circuit ID that is assigned to the customer. If the customer later requests a second identical circuit, the network can use the existing intelligence to deliver a similar-specification circuit.
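
The workflow Cisco describes can be illustrated with a toy sketch. The data structures, circuit IDs and latency figures below are hypothetical, invented purely to show the idea of the optical layer reporting candidate paths and the IP layer picking and recording the best one; this is not Cisco's API.

```python
# Hypothetical sketch: the optical layer reports candidate paths with
# real-time latency figures; the IP layer picks the best and records
# the circuit ID so an identical later request can reuse the knowledge.

candidate_paths = [
    {"circuit_id": "ckt-101", "route": "A-B-D", "latency_ms": 9.8},
    {"circuit_id": "ckt-102", "route": "A-C-D", "latency_ms": 7.2},
    {"circuit_id": "ckt-103", "route": "A-E-D", "latency_ms": 12.5},
]

def lowest_latency(paths):
    """Return the candidate path with the smallest reported latency."""
    return min(paths, key=lambda p: p["latency_ms"])

best = lowest_latency(candidate_paths)
print(best["circuit_id"])  # ckt-102

# A second, identical request can reuse the stored specification
# instead of triggering another manual IP/transport interaction.
provisioned = {best["circuit_id"]: best}
```

The point of the sketch is only that, once the layers share this data, the selection and the repeat provisioning become automatic rather than manual.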

Such a framework avoids the lengthy manual interactions between an operator's IP and transport departments that are required when setting up an IP VPN, for example. By exchanging data between the layers, service providers can understand and improve their network topology in real time, and be more dynamic in how they shift resources and plan capacity in their network.

Service providers can also improve their protection and restoration schemes, as well as how they configure and provision services. Such capabilities will enable operators to be more efficient in the introduction and delivery of cloud and mobile services.

Total cost of ownership

Market research firm ACG Research has done a total cost of ownership (TCO) analysis of Cisco's elastic core architecture. It claims nLight can halve the TCO of the optical and packet core networks in designs using protected wavelengths, while avoiding a 10% overestimation of required capacity.

Meanwhile, ACG claims an 18-month payback and a 156% return on investment from a CRS CGSE service module running the anti-DDoS service, and 24% TCO savings from demand engineering with the improved placement of routes and cloud service workloads.

Cisco says the framework architecture is being promoted in the Internet Engineering Task Force (IETF). The company is also liaising with the International Telecommunication Union (ITU) and the Optical Internetworking Forum (OIF) where relevant.


Opnext's multiplexer IC plays its part in 100Gbps trial

AT&T’s 100 Gigabit-per-second (Gbps) coherent trial between Louisiana and Florida detailed earlier this week was notable for several reasons. It included a mix of 10, 40 and 100Gbps wavelengths, Cisco Systems' newest IP core router, the CRS-3, and a 100Gbps line-side design from Opnext.

According to Andrew Schmitt, directing analyst of optical at Infonetics Research, what is significant about the 100Gbps AT&T trial is the real-time transmission; unlike previous 100Gbps trials, no received data was block-captured and decoded offline.

Such real-time transmission required the use of Opnext’s 100Gbps coherent design comprising its silicon germanium (SiGe) multiplexer chip, announced in January, and an FPGA mock-up of the receiver circuitry.

"Several industry observers claim coherent detection is the most significant development since the advent of dense wavelength division multiplexing"

The multiplexer IC implements polarisation-multiplexing quadrature phase-shift keying (PM-QPSK) modulation (also known as dual-polarisation QPSK or DP-QPSK) at a line rate of up to 128Gbit/s, to accommodate advanced forward error correction (FEC) needed for 100Gbps transmission.
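
The 128Gbit/s line rate follows from the modulation format itself: QPSK carries two bits per symbol, and polarisation multiplexing doubles that to four bits per symbol overall. A quick back-of-the-envelope check (the 32Gbaud symbol rate is the per-channel figure given later in the article):

```python
# Back-of-the-envelope check of the PM-QPSK line rate.
symbol_rate_gbaud = 32       # per-channel rate quoted later in the article
bits_per_symbol = 2          # QPSK: 2 bits per symbol per polarisation
polarisations = 2            # polarisation multiplexing doubles capacity

line_rate_gbps = symbol_rate_gbaud * bits_per_symbol * polarisations
print(line_rate_gbps)        # 128

# The gap between the 100Gbps payload and the 128Gbit/s line rate is
# the headroom available for FEC and framing overhead.
payload_gbps = 100
overhead_pct = 100 * (line_rate_gbps - payload_gbps) / payload_gbps
print(round(overhead_pct))   # 28
```

The arithmetic shows why 128Gbit/s, rather than a bare 100Gbit/s, is needed once advanced FEC is factored in.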

Yet despite the high-speed electronics, the IC can be surface-mounted, simplifying packaging and assembly while reducing the cost of the 100Gbps transponder.

Why is the multiplexer IC important?

To enable the transition to 100Gbps optical transmission, its economics need to improve. 100Gbps line-side MSA modules are needed to complement emerging IEEE 100 Gigabit Ethernet optical transceivers.

The Optical Internetworking Forum (OIF), backed by industry players, has alighted on PM-QPSK as the chosen modulation approach for 100Gbps line-side interfaces. Operators such as AT&T and Verizon also back the technology for 100Gbps deployments.

Such industry recognition of coherent detection using PM-QPSK is based on the technological benefits already demonstrated at 40Gbps by Nortel. Indeed, several industry observers claim coherent detection is the most significant development since the advent of dense wavelength division multiplexing (DWDM). Verizon, meanwhile, has stated that its next-generation links will be optimised for 100Gbps coherent transmission.

But developing 100Gbps technology is costly, which is why the OIF and operators are keen to focus the industry’s R&D dollars on a single technological approach, avoiding a repeat of 40Gbps transmission, where four modulation schemes were developed and are still being deployed.

Opnext is the first company to detail a 100Gbps multiplexer chip. By operating at 128Gbit/s, the device supports the OIF’s 100Gbps ultra-long-haul DWDM Framework document, yet the chip is packaged within a ball grid array to enable surface-mount manufacturing on the printed circuit board. This avoids the expense and design complications associated with using radio frequency connectors.

The IC could also be used for 40Gbps PM-QPSK transponders. “We might have chosen CMOS [for a 40Gbps design] but there is no reason not to run it at a lower speed,” says Matt Traverso, senior manager, technical marketing at Opnext.

Method used

The multiplexer IC is manufactured using a 0.13 micron SiGe process. The in-house design has been developed by the engineering team Opnext acquired with the purchase of StrataLight.

Design work began a year ago. The resulting chip takes 10 channels, each at up to 11.3Gbit/s, and converts the data into four 32Gbps channels that are then phase encoded. The multiplexer chip outputs two polarisations, each comprising two 32Gbps I and Q data streams (see diagram). A complete 100Gbps line card comprises the multiplexer IC and the demultiplexer/receiver ASIC on the line side, plus the client-side module.

The input channel rate of 11.3Gbps is to support the Optical Transport Network (OTN) ODU-4 format while the 32Gbps per channel ensures that there is sufficient bit headroom for powerful forward error correction. It is the need to support 32Gbps data rates that required Opnext to use SiGe technology. “CMOS is good for 25 to 28Gbps rates; beyond that for good optical transport you need silicon germanium,” says Traverso.
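
These figures can be sanity-checked with simple arithmetic; the sketch below just reproduces the article's numbers to show where the FEC headroom comes from:

```python
# Sanity check of the multiplexer's input and output rates.
input_channels = 10
input_rate_gbps = 11.3                 # OTN-rate client channels
aggregate_in = input_channels * input_rate_gbps    # ~113 Gbit/s into the chip

line_channels = 4
line_rate_per_channel_gbps = 32
aggregate_out = line_channels * line_rate_per_channel_gbps  # 128 Gbit/s line rate

# The difference is the bit headroom available for forward error correction.
headroom_pct = 100 * (aggregate_out - aggregate_in) / aggregate_in
print(round(headroom_pct, 1))          # 13.3
```

Roughly 13% spare capacity on the line side is what allows a powerful FEC code to be carried alongside the client payload.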

The consensus, however, is that the industry will consolidate on CMOS for the multiplexer and demultiplexer/receiver ICs. It could be that when Opnext defined its multiplexer design goals and timeline, CMOS was not an option.

How was the use of surface-mount technology (SMT) made possible? “The physical interface of the IC was designed based upon SMT packaging models to allow for sufficient margin in the jitter budget to achieve good transmission performance,” says Traverso.  “The goal is to match the impedance over frequency from the chip contact through the packaging to the printed circuit board.”

Opnext has not said which foundry it is using to make the chip. Hitachi and IBM are obvious candidates but given Opnext’s history, Hitachi is most likely.

What next?

For 100Gbps line side transmission both multiplexing and demultiplexing circuitry are required. Opnext has detailed the multiplexing circuitry only.

At 100Gbps, the receiver requires the inverse demultiplexing circuitry, decoding the PM-QPSK signal and recovering the original 100Gbps (10x10Gbps) data. Also required are very high-speed analogue-to-digital converters (ADCs) and a computationally powerful digital signal processor (DSP).

The ADC and DSP are used to recover the signal, compensating for chromatic and polarisation mode dispersions experienced during transmission. Given the channel data rate is 32Gbps, it implies that the ADCs are operating at 64 Gsample/s. 
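
The 64 Gsample/s figure follows if one assumes the conventional two samples per symbol used by coherent receiver DSPs; the article does not state the oversampling ratio, so that factor is our assumption:

```python
# Deriving the implied ADC sampling rate for the coherent receiver.
symbol_rate_gbaud = 32      # per-channel rate from the article
samples_per_symbol = 2      # assumed: typical oversampling for coherent DSP

adc_rate_gsamples = symbol_rate_gbaud * samples_per_symbol
print(adc_rate_gsamples)    # 64
```

Four such ADCs are needed, one per 32Gbps I or Q stream across the two polarisations, which is a large part of why the receiver ASIC is so challenging.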

This is why developing such a chip is expensive and so technically challenging. “It requires finances, technical talent, significant optics expertise, integrated circuit knowledge, DSP design and ADC expertise,” says Traverso.

The reputed fee for developing such an ASIC is US $20m. With at least four system vendors, Opnext, and two transponder/chip players believed to be developing such an ASIC, this represents a huge collective investment. But then the ASIC is where system vendors and transponder makers can differentiate their coherent-based products.

The ASIC also highlights the marked difference between Gigabit Ethernet (GbE) and line-side interfaces.

For 40 and 100GbE transceivers, interoperability between vendors’ transceivers is key. Long-haul connections, in contrast, tend to be proprietary.  The industry may have alighted on a common modulation approach but paramount is optical performance. The ASIC, and the DSP and FEC algorithms it executes, is how vendor differentiation is achieved.

Working 100Gbps PM-QPSK modules are not expected at OFC/NFOEC 2010 later this month, but it is likely that Opnext and others will detail their 100Gbps demultiplexing/receiver ASICs. Coherent modules at 40Gbps, meanwhile, are expected.
