Adding an extra dimension to ROADM designs

U.K. start-up ROADMap Systems, a developer of wavelength-selective switch technology, has completed a second round of funding. The amount is undisclosed but the start-up is believed to have raised several million dollars to date.

Karl Heeks

The company will use the funding to develop a prototype of its two-dimensional (2D) optical beam-steering technique that integrates 24 wavelength-selective switches (WSSes) within a single platform.

The WSS is a key building block used within reconfigurable optical add-drop multiplexers (ROADMs).

The company’s WSS technology uses liquid crystal on silicon (LCOS) technology, the basis of existing WSS designs from the likes of Finisar and Lumentum. However, the start-up has developed a way to steer beams in 2D whereas current WSSes operate in a single dimension only.

The Cambridge-based company’s pre-production prototype will integrate 24 1x12 WSSes within a single package. The platform promises service providers ROADM designs that deliver space, power-consumption and operational-cost savings as well as system advantages.

 

Wavelength-selective switch

A WSS takes wavelength-division multiplexed (WDM) channels from an input fibre and distributes them as required across an array of output fibres. Typical WSS configurations include a 1x9 - one input fibre port and nine output ports - and a 1x20.

Current WSS designs comprise a diffraction grating, a cylindrical lens and an LCOS panel that is used to deflect the light channels to the required output fibres.

The diffraction grating separates the WDM channels while the cylindrical lens produces an elongated projection of the channels onto the LCOS device. The panel’s liquid crystals are oriented in such a way as to direct the projected light channels to the appropriate output fibres. The patterns of liquid-crystal orientations that perform the various steerings are, in effect, holograms.

Commercial WSSes use the LCOS panel to steer in one dimension only: left or right. This means the output fibres are arranged in an array and the number of fibres is limited by the total deflection the LCOS can achieve. ROADMap Systems has developed a technique that produces holograms on the LCOS panel that steer light in two dimensions: left and right, up and down and diagonally.

Moreover, the holograms are confined to a small area of the panel, far fewer pixels than the elongated beams of a 1D WSS. Such confinement allows multiple light beams to be steered to the output fibre bundles.  
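As a rough illustration of the principle, the steering hologram can be thought of as a blazed (sawtooth) phase grating; making the phase ramp a function of both axes steers in 2D. A minimal numpy sketch, with entirely illustrative panel and ramp parameters rather than anything from ROADMap’s design:

```python
import numpy as np

def blazed_hologram(nx, ny, kx, ky):
    """Phase pattern (radians) for a 2D blazed grating on an LCOS patch.

    kx and ky set the number of phase-ramp cycles across the patch in x
    and y; together they determine the 2D deflection of the first
    diffraction order (left/right, up/down or diagonal).
    """
    x = np.arange(nx) / nx
    y = np.arange(ny) / ny
    xx, yy = np.meshgrid(x, y)
    # A linear phase ramp wrapped to [0, 2*pi) - the 'hologram'.
    return np.mod(2 * np.pi * (kx * xx + ky * yy), 2 * np.pi)

# One confined patch of the panel steers one beam right and up; other
# patches, each with their own (kx, ky), steer other beams independently.
patch = blazed_hologram(128, 128, kx=10, ky=4)
```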

“You use a much smaller area of the LCOS to bend things in 2D,” says Karl Heeks, CEO at ROADMap Systems.

 

Platform demonstrator

ROADMap Systems’ key intellectual property is its know-how in creating the steering pattern - the hologram - programmed onto the LCOS panel.

The 2D WSS system requires calibration to create the precision holograms. The calibration data is generated during the device’s manufacture and forms the input to an algorithm that creates the holograms needed for the LCOS to steer the traffic accurately to the output fibres.

 

You use a much smaller area of the LCOS to bend things in 2D

 

ROADMap Systems has demonstrated its 2D steering technology to service providers, system vendors and optical subsystem players.

Now, the company is working to build the 24 1x12 WSSes on an optical bench, which it expects to complete by the year-end. The start-up is also creating the calibration software used for 2D beam steering as well as a user interface to allow networking staff to set up their required connections.

The first pre-production packaged systems - each one comprising a 4K LCOS panel and 312 fibres - are expected to be delivered for trialling in 2019. The start-up is reluctant to give a firm date as it is still exploring design options. For example, ROADMap Systems has an improved lower-loss, more compact fibre-coupling design but has yet to decide whether to use it or its existing design in the platform.

“We are not intending the prototype to go into a system within the network,” says Heeks. “It is more a vehicle to illustrate its capabilities.”  

 

System benefits

The main benefit of ROADMap Systems’ 2D beam-steering WSS architecture is not so much its optical performance; the start-up expects its design to match the optical performance of existing 1D WSSes. Rather, there are architectural benefits besides the obvious integration and cost benefits of putting 24 WSSes in one platform.

The first system advantage is the ability to use the many WSSes to implement ROADMs of several degrees including the ROADM’s add-drop architecture.  A two-degree ROADM handles east and west fibre pairs while a three-degree ROADM adds north-facing traffic as well.

 

A ROADM architecture using 1xN splitters as part of the multicast switch. Source: ROADMap Systems.

To add and drop light-paths, a multicast switch is used (shown in green in the diagram above). The multicast switch can be implemented using optical splitters; however, due to their loss, optical amplifiers are needed to boost the signals, adding to the overall cost and system complexity.

WSSes can be used instead of the splitters as part of the multicast switch architecture such that optical amplification is not needed; the optical loss the WSS stage adds is much lower than that of the splitters. Removing optical amplification significantly reduces the overall ROADM cost (see diagram below).
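The loss penalty of splitters is simple arithmetic: an ideal 1xN splitter divides the optical power N ways, costing 10·log10(N) decibels before any excess loss. A quick sketch (the WSS figure quoted in the comment is a typical industry assumption, not a ROADMap number):

```python
import math

def ideal_splitter_loss_db(n):
    # An ideal 1xN splitter divides power N ways: 10*log10(N) dB.
    return 10 * math.log10(n)

for n in (2, 4, 8, 16):
    print(f"1x{n} splitter: {ideal_splitter_loss_db(n):4.1f} dB")
# 1x2: 3.0 dB, 1x4: 6.0 dB, 1x8: 9.0 dB, 1x16: 12.0 dB - whereas a
# typical 1xN WSS inserts roughly 5-7 dB regardless of port count,
# which is why the WSS-based multicast stage can skip the amplifiers.
```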

 

A ROADM architecture using 1xN WSSes as part of the multicast switch. Source: ROADMap Systems.

The integrated platform’s large number of WSSes will ease the implementation of the latest generation of ROADMs that are colourless, directionless and contentionless.

A colourless ROADM removes wavelength dependency such that any wavelength can be used on any of the network interface ports. Directionless refers to having full flexibility in routeing a light-path to any of the ROADM’s degrees. Lastly, contentionless means non-blocking: the same wavelength channel can be accommodated across all the degrees of the ROADM without contention.

And being LCOS-based, ROADMap’s WSSes also support a flexible grid enabling the ROADM to support channels such as coherent transmissions above 200 gigabit-per-second that do not conform to the rigid 50GHz-wide ITU grid spacings.

The second system advantage of the platform is that with its many WSSes, it can route and add-drop wavelengths across both the C- and L-bands. However, the company is not planning to implement this feature in its pre-production prototype.

 

Next steps

ROADMap Systems says it is focussed on producing and testing its pre-production prototype. A further round of investment will be needed to turn the design into a commercial product.

“We believe that such a highly-integrated architecture will offer immediate performance and economic benefits to many telecom applications,” says Heeks. “It is also well positioned for datacentre - DCI - applications where data needs to be routed between distributed datacentres linked by parallel fibres.”


Intel targets 5G fronthaul with a 100G CWDM4 module

  • Intel announced at ECOC that it is sampling a 10km extended temperature range 100-gigabit CWDM4 optical module for 5G fronthaul. 
  • Intel also announced a pluggable module supporting the 400 Gigabit Ethernet (GbE) parallel-fibre DR4 standard.
  • Intel, a backer of the CWDM8 MSA, says the 8-wavelength 400-gigabit module will not be in production before 2020.

Intel has expanded its portfolio of silicon photonics-based optical modules to address 5G mobile fronthaul and 400GbE.

Robert Blum

At the European Conference on Optical Communication (ECOC) being held in Rome this week, Intel announced it is sampling a 100-gigabit CWDM4 module in a QSFP form factor for wireless fronthaul applications.

The CWDM4 module has an extended temperature range, -20°C to +85°C, and a 10km reach.

“The final samples are available now and [the product] will go into production in the first quarter of 2019,” says Robert Blum, director of strategic marketing and business development at Intel’s silicon photonics product division.

Intel also announced it will support the 400GBASE-DR4, the IEEE’s 400GbE standard that uses four parallel fibres for the transmit path and four for the receive path, each carrying a 100-gigabit 4-level pulse amplitude modulation (PAM-4) signal.

 

5G wireless

5G wireless will be used for a variety of applications. Already this year the first 5G fixed and mobile wireless services are expected to be launched. 5G will also support massive Internet of Things (IoT) deployments as well as ultra-low latency applications. 

The next-generation wireless standard uses new spectrum that includes millimetre wave spectrum in the 24GHz to 40GHz region. Such higher frequency bands will drive small-cell deployments. 

5G’s use of new spectrum, small cells and advanced air interface techniques such as multiple input, multiple output (MIMO) antenna technology is what will enable its greater data speeds and vastly expanded capacity compared to the current LTE cellular standard. 

Source: Intel.

The 5G wireless standard will also drive greater fibre deployment at the network edge. And it is here that mobile fronthaul plays a role, linking the remote radio heads at the antennas with the centralised baseband controllers at the central office (see diagram). Such fronthaul will use 25-gigabit and 100-gigabit links. “We have multiple customers that are excited about the 100-gigabit CWDM4 for these applications,” says Blum.

Intel expects demand for 25-gigabit and 100-gigabit transceivers for mobile fronthaul to begin in 2019. 

 

Intel is now producing over one million PSM4 and CWDM4 modules a year

 

Client-side modules 

Intel entered the optical module market with its silicon photonics technology in 2016 with a 100-gigabit PSM4 module, quickly followed by a 100-gigabit CWDM4 module. Intel is now producing over one million PSM4 and CWDM4 modules a year. 

Intel will provide customers with 400-gigabit DR4 samples in the final quarter of 2018 with production starting in the second half of 2019. This is when Intel says large-scale data centre operators will require 400 gigabits.

“The initial demand in hyperscale data centres for 400 gigabits will not be for duplex [fibre] but parallel fibre,” says Blum. “So we expect the DR4 to go to volume first and that is why we are announcing the product at ECOC.”       

Intel says the advantages of its silicon photonics approach have already been demonstrated with its 100-gigabit PSM4 module. One is the optical performance resulting from the company’s heterogeneous integration technique, which combines indium-phosphide lasers with silicon photonics modulators on one chip. Another is manufacturing scale, using Intel’s 300mm wafers.

Intel expects demand for the 500m-reach DR4 module to go hand-in-hand with that for the 100-gigabit single-wavelength DR1, given that the DR4 will also be used in breakout mode to interface with four DR1 modules.

“We don’t see the DR1 standard competing or replacing 100-gigabit CWDM4,” says Blum. “The 100-gigabit CWDM4 is now mature and at a very attractive price point.”

Intel is a leading proponent of the CWDM8 MSA, an optical module design based on eight wavelengths, each carrying a 50 gigabit-per-second (Gbps) non-return-to-zero (NRZ) signal. The CWDM8 MSA was created to fast-track 400-gigabit interfaces by avoiding the wait for 100-gigabit PAM-4 silicon.

When the CWDM8 MSA was launched in 2017, the initial schedule was to deploy the module by the end of this year. Intel also demonstrated the module working at the OFC show held in March. 

Now, Intel expects production of the CWDM8 in 2020 and, by then, other four-wavelength solutions using 100-gigabit PAM-4 silicon such as the 400G-FR4 MSA will be available. 

“We just have to see what the use case will be and what the timing will be for the CWDM8’s deployment,” says Blum. 


NeoPhotonics ups the baud rate for line and client optics

  • NeoPhotonics’ 64 gigabaud optical components are now being designed into optical transmission systems. The components enable up to 600 gigabits per wavelength and 1.2 terabits using a dual-wavelength transponder.
  • The company’s high-end transponder that uses Ciena’s WaveLogic Ai coherent digital signal processor (DSP) is now shipping.  
  • NeoPhotonics is also showcasing its 53 gigabaud components for client-side pluggable optics capable of 100-gigabit wavelengths at the European Conference on Optical Communication (ECOC) show being held in Rome this week.

NeoPhotonics says its family of 64 gigabaud (Gbaud) optical components are being incorporated within next-generation optical transmission platforms. 

Ferris Lipscomb

The 64Gbaud components include a micro intradyne coherent receiver (micro-ICR), a micro integrable tunable laser assembly (micro-ITLA) and a coherent driver modulator (CDM).

The micro-ICR and micro-ITLA conform to Optical Internetworking Forum (OIF) specifications, while the CDM is currently being specified.

“Three major customers have selected to use all three [64Gbaud components] and several others are using a subset of those,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.

NeoPhotonics also unveiled and demonstrated two smaller 64Gbaud component designs at the OFC show held in March. The devices - a coherent optical sub-assembly (COSA) and a nano-ITLA - are aimed at 400-gigabit coherent pluggable modules as well as compact line-card designs.

“These [two compact components] continue to be developed as well,” says Lipscomb.

 

Baud rate and modulation  

Current 100-gigabit coherent transmission uses polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) modulation operating at 32 gigabaud. The 100 gigabit-per-second (Gbps) data rate results from sending four bits per symbol at a symbol rate of 32Gbaud.

Optical designers use two approaches to increase a wavelength’s data rate beyond 100Gbps. One is to move to a higher-order modulation scheme than QPSK, such as 16-ary quadrature amplitude modulation (16-QAM) or 64-QAM; the other is to increase the baud rate.

“The baud rate is the on-off rate as opposed to the bit rate. That is because you are packing more bits in there than the on-off supports,” says Lipscomb. “But if you double the on-off rate, you double the number of bits.” 

Doubling the baud rate from 32Gbaud to 64Gbaud doubles the data rate, while using 64-QAM trebles the data sent per symbol compared with 100-gigabit PM-QPSK. Combining the two - 64Gbaud and 64-QAM - delivers 600 gigabits per wavelength.
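The arithmetic works as follows; a sketch in which the 25 percent allowance for forward error correction and framing overhead is an illustrative assumption:

```python
def net_rate_gbps(baud, bits_per_symbol, polarisations=2, overhead=0.25):
    """Net coherent data rate: symbols/s x bits/symbol x polarisations,
    less the (assumed) FEC and framing overhead."""
    return baud * bits_per_symbol * polarisations / (1 + overhead)

# PM-QPSK at 32Gbaud: 2 bits/symbol per polarisation -> ~100G net.
print(net_rate_gbps(32, 2))  # 102.4
# PM-64QAM at 64Gbaud: 6 bits/symbol per polarisation -> ~600G net.
print(net_rate_gbps(64, 6))  # 614.4
```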

A higher baud rate also has a reach advantage, says Lipscomb, due to its lower noise. “For longer distances, increasing the baud rate is better.”

But doubling the baud rate requires more capable DSPs that can process the signal at twice the rate. “And such DSPs now exist, operating at 64Gbaud and 64-QAM,” he says.

 

Three major customers have selected to use all three [64Gbaud components] and several others are using a subset of those

 

Coherent components

NeoPhotonics’ 64Gbaud optical components are suitable for line cards, fixed-packaged transponders, 1-rack-unit modular platforms used for data centre interconnect and the CFP2 pluggable form factor. 

For data centre interconnect using 600-gigabits-per-wavelength transmissions, the distance achieved is up to 100km. For longer distances, the 64Gbaud components achieve metro-regional reaches at 400Gbps, and 2,000km for long-haul at 200Gbps.

But to fit within the most demanding pluggable form factors such as the OSFP and QSFP-DD, smaller componentry is required. This is what the coherent optical sub-assembly (COSA) and nano-ITLA are designed to address. The COSA combines the coherent driver modulator and the ICR in a single gold-box package that is no larger than the individual 64Gbaud micro-ICR and CDM packages.

 

Source: Gazettabyte

“There is a lot of interest in 400-gigabit applications for a CFP2, and in that form factor you can use the separate components,” says Lipscomb. “But for data centre interconnect, you want to increase the density as much as possible so going to the smaller OSFP or QSFP-DD requires another generation of [component] shrinking.”

NeoPhotonics says there are two main approaches. One - what NeoPhotonics has done with the nano-ITLA and COSA - is to separate the laser from the remaining circuitry such that two components are needed overall. A further benefit of a separate laser is lower noise. “But the ultimate approach would be to put all three in one gold box,” says Lipscomb.

 

For data centre interconnect, you want to increase the density as much as possible so going to the smaller OSFP or QSFP-DD requires another generation of [component] shrinking

 

Both approaches are accommodated as part of the OIF’s Integrated Coherent Transmitter-Receiver Optical Sub-Assembly (IC-TROSA) project.      

Another challenge in achieving coherent designs such as the emerging 400ZR standard in the OSFP or QSFP-DD is accommodating the DSP alongside the optics while meeting the modules’ demanding power constraints. This requires a 7nm CMOS DSP; first samples are expected by year-end, with limited production towards the end of 2019. Volume production of coherent OSFP and QSFP-DD modules is expected in 2020 or even 2021, says Lipscomb.

 

100G client-side wavelengths 

NeoPhotonics also used the OFC show last March to detail its 53Gbaud components for client-side pluggables: 100-gigabit single-wavelength and four-wavelength 400-gigabit designs. Samples have now been delivered to customers and are part of demonstrations at ECOC this week.

The components include an electro-absorption modulated laser (EML) and driver for the transmitter, and photodetectors and trans-impedance amplifiers for the receiver path. The 53Gbaud EML can operate uncooled, is non-hermetic and is aimed for use with OSFP and QSFP-DD modules.

To achieve a 100-gigabit wavelength, 4-level pulse-amplitude modulation (PAM-4) is used and that requires an advanced DSP. Such PAM-4 DSPs will only be available early next year, says NeoPhotonics. 

The first 400-gigabit modules using 100-gigabit wavelengths will gain momentum by the end of 2019 with volume production in 2020, says Lipscomb.

The various 8-wavelength implementations such as the IEEE-defined 2km 400GBASE-FR8 and 10km 400GBASE-LR8 are used when data centre operators must have 400-gigabit client interfaces. 

100-gigabit single-wavelength implementations of 400 gigabits, in contrast, will be adopted when they become cheaper on a cost-per-bit basis, says Lipscomb: “It [100-gigabit single-wavelength-based modules] will be a general replacement rather than a breaking of bottlenecks.”

NeoPhotonics is also making available its DFB laser technology for silicon-photonics-based modules such as the 2km 400G-FR4, as well as the 100-gigabit single-wavelength DR1 and the parallel-fibre 400-gigabit DR4 standards.   

 

WaveLogic Ai transponder

NeoPhotonics has revealed it is shipping its first module using Ciena’s WaveLogic Ai coherent DSP. “We are shipping in modest volumes right now,” says Lipscomb. 

The company is one of three module makers, the others being Lumentum and Oclaro, that signed an agreement with Ciena to use its flagship WaveLogic Ai DSP for their coherent module designs.

Lipscomb describes the market for the module as a niche given its high-end optical performance, what he describes as a fully capable, multi-haul transponder. “It has lots of features and a lot of expense too,” he says. “It is applied to specific cases where long distance is needed; it can go 12,000km if you need it to.”

The agreement with Ciena also includes the option to use future Ciena DSPs. “Nothing is announced yet and so we will have to see how that all plays out.” 


OPNFV's releases reflect the evolving needs of the telcos

The Open Platform for NFV (OPNFV) is increasingly focused on supporting cloud-native technologies and the network edge.

Heather Kirksey

The open source group, part of the Linux Foundation, specialises in the system integration of network functions virtualisation (NFV) technology.

The OPNFV issued Fraser, its latest platform release, earlier this year while its next release, Gambia, is expected soon.  

Moreover, the telcos’ continual need for new features and capabilities means the OPNFV’s work is not slowing down.

“I don’t see us entering maintenance-mode anytime soon,” says Heather Kirksey, vice president, community and ecosystem development, The Linux Foundation and executive director, OPNFV. 

 

Meeting a need

The OPNFV was established in 2014 to address an industry shortfall.  

“When we started, there was a premise that there were a lot of pieces for NFV but getting them to work together was incredibly difficult,” says Kirksey.

Open-source initiatives such as OpenStack, used to control computing, storage, and networking resources in the data centre, and the OpenDaylight software-defined networking (SDN) controller, lacked elements needed for NFV. “No one was integrating and doing automated testing for NFV use cases,” says Kirksey.

 

I don’t see us entering maintenance-mode anytime soon 

 

OPNFV set itself the task of identifying what was missing from such open-source projects to aid their deployment. This involved working with the open-source communities to add NFV features, testing software stacks, and feeding the results back to the groups.  

The nature of the OPNFV’s work explains why it is different from other, single-task, open-source initiatives that develop an SDN controller or NFV management and orchestration, for example. “The code that the OPNFV generates tends to be for tools and installation - glue code,” says Kirksey.

OPNFV has gained considerable expertise in NFV since its founding. It uses advanced software practices and has hardware spread across several labs. “We have a large diversity of hardware we can deploy to,” says Kirksey.

One of the OPNFV’s advanced software practices is continuous integration/continuous delivery (CI/CD). Continuous integration refers to adding code to a software build while it is still being developed, unlike the traditional approach of waiting for a complete software release before starting the integration and testing work. For this to be effective, however, automated code testing is required.

Continuous delivery, meanwhile, builds on continuous integration by automating a release’s update and even its deployment. 

“Using our CI/CD system, we will build various scenarios on a daily, two-daily or weekly basis and write a series of tests against them,” says Kirksey, adding that the OPNFV has a large pool of automated tests, and works with code bases from various open-source projects.
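A toy sketch of the pattern Kirksey describes - scenarios rebuilt on a schedule and gated by automated tests. The scenario and suite names are illustrative only; OPNFV’s real system is built on Jenkins pipelines and test frameworks rather than anything like this:

```python
import datetime

SCENARIOS = ["os-nosdn-nofeature", "os-odl-nofeature"]  # illustrative names
TEST_SUITES = ["healthcheck", "smoke"]                  # illustrative suites

def deploy(scenario):
    """Stand-in for an installer run; returns True on a clean deployment."""
    print(f"{datetime.date.today()} deploying {scenario}")
    return True

def run_suite(scenario, suite):
    """Stand-in for an automated test suite run against the deployment."""
    print(f"  {suite} vs {scenario}: pass")
    return True

def nightly():
    for scenario in SCENARIOS:
        if not deploy(scenario):
            continue  # a failed build is reported back, never delivered
        if all(run_suite(scenario, s) for s in TEST_SUITES):
            print(f"  {scenario} is green - a candidate for delivery")

nightly()
```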

Kirksey cites two examples to illustrate how the OPNFV works with the open-source projects.

When OPNFV first worked with OpenStack, the open-source cloud platform took far too long - about 10 seconds - to detect a faulty virtual machine used to implement a network function running on a server. “We had a team within OPNFV, led by NEC and NTT Docomo, to analyse what it would take to be able to detect faults much more quickly,” says Kirksey. 

The result required changes to 11 different open-source projects, while the OPNFV created test software to validate that the resulting telecom-grade fault-detection worked. 

Another example cited by Kirksey was to enable IPv6 support that required changes to OpenStack, OpenDaylight and FD.io, the fast data plane open source initiative.   

 

The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines

 

OPNFV Fraser 

In May, the OPNFV issued its sixth platform release dubbed Fraser that progresses its technology on several fronts.

Fraser offers enhanced support for cloud-native technologies that use microservices and containers, an alternative to virtual-machine-based network functions.

The OPNFV is working with the Cloud Native Computing Foundation (CNCF), another open-source organisation overseen by the Linux Foundation. 

CNCF is undertaking several projects addressing the building blocks needed for cloud-native applications. The best known is Kubernetes, used to automate the deployment, scaling and management of containerised applications.

“The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines,” says Kirksey. “It means more density of what you can put on your [server] box and that means capex benefits.” 

Meanwhile, for applications such as edge computing, where smaller devices will be deployed at the network edge, lightweight containers and Kubernetes are attractive, says Kirksey.

Another benefit of containers is faster communications. “Because you don’t have to go between virtual machines, communications between containers is faster,” she says. “If you are talking about network functions, things like throughput starts to become important.”

The OPNFV is working with cloud-native technology in the same way it started working with OpenStack. It is incorporating the technology within its frameworks and undertaking proof-of-concept work for the CNCF, identifying shortfalls and developing test software. 

OPNFV has incorporated Kubernetes in all its installers and is adopting other CNCF work such as the Prometheus project used for monitoring. 

“There is a lot of networking work happening in CNCF right now,” says Kirksey. “There are even a couple of projects on how to optimise cloud-native for NFV that we are also involved in.”

OPNFV’s Fraser also enhances carrier-grade features. Infrastructure maintenance work can now be performed without interrupting virtual network functions. 

Also expanded are the metrics that can be extracted from the underlying hardware, while the OPNFV’s Calipso project has added modules for service assurance as well as support for Kubernetes.  

Fraser has also improved the support for testing and can allocate hardware dynamically across its various labs. “Basically we are doing more testing across different hardware and have got that automated as well,” says Kirksey. 

 

Linux Foundation Networking Fund

In January, the Linux Foundation combined the OPNFV with five other open-source telecom projects it is overseeing to create the Linux Foundation Networking Fund (LNF). 

The other five LNF projects are the Open Network Automation Platform (ONAP), OpenDaylight, FD.io, the PNDA big data analytics project, and the SNAS streaming network analytics system.

 

Edge is becoming a bigger and more important use-case for a lot of the operators

 

“We wanted to break down the silos across the different projects,” says Kirksey. There was also overlap with members sitting on several projects’ boards. “Some of the folks were spending all their time in board meetings,” says Kirksey. 

Service provider Orange is using the OPNFV Fraser functional testing framework as it adopts ONAP. Orange used the functional testing to create its first test container for ONAP in one day. Orange also achieved a tenfold reduction in memory demands, going from a 1-gigabyte test virtual machine to a 100-megabyte container. And the operator has used the OPNFV’s CI/CD toolchain for the ONAP work.

By integrating the CI/CD toolchain across projects, OPNFV says it is much easier to incorporate new code on a regular basis and provide valuable feedback to the open source projects.

The next code release, Gambia, could be issued as early as November.

Gambia will offer more support for cloud-native technologies. There is also a need for more work around Layer 2 and Layer 3 networking as well as edge computing work involving OpenStack and Kubernetes. 

“Edge is becoming a bigger and more important use-case for a lot of the operators,” says Kirksey.

OPNFV is also continuing to enhance its test suites for the various projects. “We want to ensure we can support the service providers’ real-world deployment needs,” concludes Kirksey.


Switch chips not optics set the pace in the data centre

Broadcom is doubling the capacity of its switch silicon every 18-24 months, a considerable achievement given that Moore’s law has slowed down. 

Last December, Broadcom announced it was sampling its Tomahawk 3 - the industry’s first 12.8-terabit switch chip - just 14 months after it announced its 6.4-terabit Tomahawk 2.

Rochan Sankar

Such product cycle times are proving beyond the optical module makers; if producing next-generation switch silicon takes up to two years, optics takes three, says Broadcom.

“Right now, the problem with optics is that they are the laggards,” says Rochan Sankar, senior director of product marketing at switch IC maker, Broadcom. “The switching side is waiting for the optics to be deployable.”

The consequence, says Broadcom, is that in the three years spanning a particular optical module generation, customers have deployed two generations of switches. For example, the 3.2-terabit Tomahawk-based switches and the higher-capacity Tomahawk 2 ones both use QSFP28 and SFP28 modules.

In future, a closer alignment in the development cycles of the chip and the optics will be required, argues Broadcom.

 

Switch chips

Broadcom has three switch chip families, each addressing a particular market. As well as the Tomahawk, Broadcom has the Trident and Jericho families (see table). 

 

All three chips are implemented using a 16nm CMOS process. Source: Broadcom/ Gazettabyte.

“You have enough variance in the requirements such that one architecture spanning them all is non-ideal,” says Sankar. 

The Tomahawk is a streamlined architecture for use in large-scale data centres. The device is designed to maximise the switching capacity both in terms of bandwidth-per-dollar and bandwidth-per-Watt. 

“The hyperscalers are looking for a minimalist feature set,” says Sankar. They consider the switching network as an underlay, a Layer 3 IP fabric, and they want the functionality required for a highly reliable interconnect for the compute and storage, and nothing more, he says. 

 

Right now, the problem with optics is that they are the laggards

 

Production of the Tomahawk 3 integrated circuit (IC) is ramping and the device has already been delivered to several webscale players and switch makers, says Broadcom.

The second family, Trident, addresses enterprises and data centres. The chip includes features deliberately stripped from the Tomahawk 3, such as support for Layer 2 tunnelling and advanced policy to enforce enterprise network security. The Trident also has a programmable packet-processing pipeline deemed unnecessary in large-scale data centres.

But such features are at the expense of switching capacity. “The Trident tends to be one generation behind the Tomahawk in terms of capacity,” says Sankar. The latest Trident 3 is a 3.2-terabit device. 

The third family, Jericho, is for the carrier market. The chip includes a packet processor and traffic manager and comes with an accompanying switch-fabric IC dubbed Ramon. The two devices can be scaled to create huge-capacity IP router systems exceeding 200 terabits of capacity. “The chipset is used in many different parts of the service provider’s backbone and access networks,” says Sankar. The Jericho 2, announced earlier this year, has 10 terabits of capacity.

 

Trends 

Broadcom highlights several trends driving the growing networking needs within the data centre.

One is how microprocessors used within servers continue to incorporate more CPU cores while flash storage is becoming disaggregated. “Now the storage is sitting some distance from the compute resource that needs very low access times,” says Sankar.

The growing popularity of public cloud is also forcing data centre operators to seek greater server utilisation to ‘pack more tenants per rack’.

There are also applications such as deep learning that use other computing ICs such as graphics processor units (GPUs) and FPGAs. “These push very high bandwidths through the network and the application creates topologies where any element can talk to any element,” says Sankar. This requires a ‘flat’ networking architecture that uses the fewest networking hops to connect the communicating nodes. 

Such developments are reflected in the growth in server links to the first level or top-of-rack (TOR) switches, links that have gone from 10 to 25 to 50 and 100 gigabits. “Now you have the first 200-gigabit network interface cards coming out this year,” says Sankar.   

 

Broadcom has been able to deliver 12.8 terabits-per-second in 16nm, whereas some competitors are waiting for 7nm

 

Broadcom says the TOR switch is not the part of the data centre network experiencing greatest growth. Rather, it is the layers above - the leaf-and-spine switching layers - where bandwidth requirements are accelerating the most. This is because the radix - the switch’s inputs and outputs - is increasing with the use of equal-cost multi-path (ECMP) routing. ECMP is a forwarding technique to distribute the traffic over multiple paths of equal cost to a destination port. “The width of the ECMP can be 4-way, 8-way and 16-way,” says Sankar. “That determines the connectivity to the next layer up.”         
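In essence, ECMP hashes each packet’s flow identifiers to pick one of the equal-cost next hops, so a given flow stays on a single path and is not reordered. A minimal sketch (real switches compute the hash in hardware, not like this):

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, paths):
    """Choose one equal-cost path by hashing the flow's 5-tuple.

    All packets of a flow hash identically, so each flow sticks to one
    path while the population of flows spreads across all paths.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(paths)
    return paths[index]

spines = [f"spine{i}" for i in range(8)]  # an 8-way ECMP group
print(ecmp_next_hop("10.0.0.1", "10.0.1.9", 49152, 443, "tcp", spines))
```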

It is such multi-layered leaf-spine architectures that the Tomahawk 3 switch silicon addresses. 

 

Tomahawk 3

The Tomahawk 3 is implemented using a 16nm CMOS process and features 256 50-gigabit PAM-4 serialiser-deserialiser (serdes) interfaces to enable the 12.8-terabit throughput. 

“Broadcom has been able to deliver 12.8 terabits-per-second in 16nm, whereas some competitors are waiting for 7nm,” says Bob Wheeler, vice president and principal analyst for networking at the Linley Group. 

Sankar says Broadcom undertook significant engineering work to move from the 16nm Tomahawk 2’s 25-gigabit non-return-to-zero serdes to a 16nm-based 50-gigabit PAM-4 design. The resulting faster serdes requires only marginally more die area while cutting the power consumed per gigabit by 40 percent.
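The serdes arithmetic: an NRZ lane carries one bit per symbol while PAM-4 carries two, so the same symbol rate yields twice the bit rate. A sketch of the standard Gray-coded two-bits-to-four-levels mapping:

```python
# Gray-coded PAM-4: two bits per symbol mapped onto four amplitude levels.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode an even-length bit sequence as PAM-4 amplitude levels."""
    return [GRAY_PAM4[pair] for pair in zip(bits[0::2], bits[1::2])]

print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
# 256 lanes x 25Gbaud x 2 bits/symbol = 256 x 50 Gbps = 12.8 Tbps of I/O.
```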

The Tomahawk 3 also features a streamlined packet-processing pipeline and improved shared buffering. In the past, a switch chip could implement one packet-processing pipeline, says Wheeler. But at 12.8 terabit-per-second (Tbps), the aggregate packet rate exceeds the capacity of a single pipeline. “Broadcom implements multiple ingress and egress pipelines, each connected with multiple port blocks,” says Wheeler. The port blocks include MACs and serdes. “The hard part is connecting the pipelines to a shared buffer, and Broadcom doesn’t disclose details here.”

 

Source: Broadcom.

The chip also has telemetry support that exposes packet information to allow the data centre operators to see how their networks are performing. 

Adopting a new generation of switch silicon also has system benefits. 

One is reducing the number of hops between endpoints to achieve a lower latency. Broadcom cites how a 128x100 Gigabit Ethernet (GbE) platform based on a single Tomahawk 3 can replace six 64x100GbE switches in a two-tier arrangement. This reduces latency by 60 percent, from 1 microsecond to 400 nanoseconds.
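The six-to-one replacement follows from simple fabric arithmetic, sketched below for a plain two-tier folded Clos of 64-port chips (the exact arrangement is an assumption; Broadcom does not detail it):

```python
def two_tier_user_ports(radix, leaves, spines):
    """User-facing ports of a non-blocking two-tier fabric of fixed-radix chips.

    Each leaf splits its radix evenly between downlinks (user ports)
    and uplinks spread across the spines.
    """
    down_per_leaf = radix // 2
    uplinks = leaves * (radix - down_per_leaf)
    assert uplinks == spines * radix  # every uplink lands on a spine port
    return leaves * down_per_leaf

# Four 64x100GbE leaves plus two 64x100GbE spines: six chips for 128
# user ports, crossed in three chip hops (leaf-spine-leaf). A single
# 128x100GbE Tomahawk 3 provides the same ports in one hop.
print(two_tier_user_ports(64, leaves=4, spines=2))  # 128
```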

There are also system cost and power consumption benefits. Broadcom uses the example of Facebook’s Backpack modular switch platform. The 8 rack unit (RU) chassis uses two tiers of switches - 12 Tomahawk chips in total. Using the Tomahawk 3, the chassis can be replaced with a 1RU platform, reducing the power consumption by 75 percent and system cost by 85 percent.   

 

Many in the industry have discussed the possibility of using the next 25.6-terabit generation of switch chip in early trials of in-package optics

 

Aligning timelines  

Both the switch-chip vendors and the optical module players are challenged to keep up with the growing networking capacity demands of the data centre. The fact that next-generation optics takes about a year longer than the silicon is not new. It happened with the transition from 40-gigabit QSFP+ to 100-gigabit QSFP28 optical modules, and now from the 100-gigabit QSFP28 to 200-gigabit QSFP56 and 400-gigabit QSFP-DD production.

“400-gigabit optical products are currently sampling in the industry in both OSFP and QSFP-DD form factors, but neither has achieved volume production,” says Sankar. 

Broadcom is using 400-gigabit modules with its Tomahawk 3 in the lab, and customers are doing the same. However, the hyperscalers are not yet deploying Tomahawk 3-based data centre network designs using 400-gigabit optics. Rather, the switches are using existing QSFP28 interfaces or, in some cases, 200-gigabit optics. But 400-gigabit optics will follow.

The consequence of the disparity in the silicon and optics development cycles is that while the data centre players want to exploit the full capacity of the switch once it becomes available, they can’t. This means the data centre upgrades conducted - what Sankar calls ‘mid-life kickers’ - are costlier to implement. In addition, given that most cloud data centres are fibre-constrained, doubling the number of fibres to accommodate the silicon upgrade is physically prohibitive, says Broadcom. 

“The operator can't upgrade the network any faster than the optics cadence, leading to a much higher overall total cost of ownership,” says Sankar. They must scale out to compensate for the inability to scale up the optics and the silicon simultaneously.

 

Optical I/O

Scaling the switch chip’s input-output (I/O) presents its own system challenges. “The switch-port density is becoming limited by the physical fanout a single chip can support,” says Sankar. “You can't keep doubling pins.”

It will be increasingly challenging to increase the I/O to 512 or 1024 serdes in future switch chips while satisfying the system link budget, and to achieve both in a power-efficient manner. This is another reason why aligning the scaling of the optics and the serdes speeds with the switching element is desirable, says Broadcom.

Broadcom says electrical interfaces will certainly scale for its next-generation 25.6-terabit switch chip. 

Linley Group’s Wheeler expects the 25.6-terabit switch will be achieved using 256 100-gigabit PAM-4 serdes. “That serdes rate will enable 800 Gigabit Ethernet optical modules,” he says. “The OIF is standardising serdes via CEI-112G while the IEEE 802.3 has the 100/200/400G Electrical Interfaces Task Force running in parallel.”

But system designers already acknowledge that new ways to combine the switch silicon and optics are needed.

“One level of optimisation is the serdes interconnect between the switch chip and the optical module itself,” says Sankar, referring to bringing optics on-board to shorten the electrical paths the serdes must drive. The Consortium for On-Board Optics (COBO) has specified just such an interoperable on-board optics solution.

“The stage after that is to integrate the optics with the IC in a single package,” says Sankar.

Broadcom is not saying which generation of switch chip capacity will require in-package optics. But given the IC roadmap of doubling switch capacity at least every two years, there is an urgency here, says Sankar. 

The fact that there are few signs of in-package developments should not be mistaken for inactivity, he says: “People are being very quiet about it.”     

Brad Booth, chair of COBO and principal network architect for Microsoft’s Azure Infrastructure, says COBO does not have a view as to when in-package optics will be needed.

Discussions are underway within the IEEE, OIF and COBO on what might be needed for in-package optics and when, says Booth: “One thing that many people do agree upon is that COBO is solving some of the technical problems that will benefit in-package optics such as optical connectivity inside the box.”

The move to in-package optics represents a considerable challenge for the industry. 

“The transition and movement to in-package optics will require the industry to answer a lot of new questions that faceplate pluggable just doesn’t handle,” says Booth. “COBO will answer some of these, but in-package optics is not just a technical challenge, it will challenge the business-operating model.”

Booth says demonstrations of in-package optics can already be done with existing technologies. And given the rapid timelines of switch chip development, many in the industry have discussed the possibility of using the next 25.6-terabit generation of switch chip in early trials of in-package optics, he says.

 

There continues to be strong interest in white-box systems and strong signalling to the market to build white-box platforms

 

White boxes

While the dominant market for the Tomahawk family is the data centre, a recent development has been the use of the 3.2-terabit Tomahawk chip within open-source platforms such as the Telecom Infra Project’s (TIP) Voyager and Cassini packet optical platforms.

Ciena has also announced its own 8180 platform that supports 6.4 terabits of switching capacity, yet Ciena says the 8180 uses a Tomahawk 3, implying the platform will scale to 12.8Tbps.

Niall Robinson, vice president, global business development at ADVA, a member of TIP and the Voyager initiative, makes the point that since the bulk of the traffic remains within the data centre, the packet-optical switch capacity and the switch silicon it uses need not be the latest-generation IC.

“Eventually, the packet-optical boxes will migrate to these larger switching chips but with some considerable time lag compared to their introduction inside the data centre,” says Robinson.

The advent of 400-gigabit client-port optics will drive the move to higher-capacity platforms such as the Voyager because it is these larger chips that can support 400-gigabit ports. “Perhaps a Jericho 2 at 9.6-terabit is sufficient compared to a Tomahawk 3 at 12.8-terabit,” says Robinson.

Edgecore Networks, the originator of the Cassini platform, says it too is interested in the Tomahawk 3 for its Cassini platform. 

“We have a Tomahawk 3 platform that is sampling now,” says Bill Burger, vice president, business development and marketing, North America at Edgecore Networks, referring to a 12.8Tbps open networking switch supporting 32 400-gigabit QSFP-DD modules that has been contributed to the Open Compute Project (OCP).

Broadcom’s Sankar highlights the work of the OCP and TIP in promoting disaggregated hardware and software. The initiatives have created a forum for open specifications, increased the number of hardware players and therefore competition while reducing platform-development timescales.  

“There continues to be strong interest in white-box systems and strong signalling to the market to build white-box platforms,” says Sankar.  

The issue, however, is the lack of volume deployments to justify the investment made in disaggregated designs. 

“The places in the industry where white boxes have taken off continue to be the hyperscalers, and a handful of hyperscalers at that,” says Sankar. “The industry has yet to take up disaggregated networking hardware at the rate at which at least the appearance of demand is spreading.”

Sankar is looking for the industry to narrow the choice of white-box solutions available and for the emergence of a consumption model for white boxes beyond just several hyperscalers. 


Is ADVA Optical Networking looking to buy ECI Telecom?

Is ADVA Optical Networking preparing a bid for private company ECI Telecom? The latest consolidation rumour involving the two mid-tier metro players comes after Infinera’s announcement that it is acquiring Coriant, a deal that is expected to close this quarter. 

According to a source in the financial sector, ADVA wanted to acquire Coriant but failed to raise the required funds. Infinera’s successful bid for Coriant has led ADVA to consider alternatives as it looks to secure its future in a consolidating marketplace, with ECI Telecom being viewed as an attractive target. 

ECI Telecom is reportedly considering an initial public offering (IPO) on the London Stock Exchange to raise $170 million. A source close to ADVA confirmed that ‘ECI is looking for a home’ but declined to comment on whether ADVA is involved. Another source close to ADVA suggested that there may be some truth in such a bid.

ADVA declined to comment. 

An ECI spokesperson said the company has issued no statement regarding an IPO and expressed surprise when asked if ECI was looking to merge. The spokesperson declined to comment when asked about ADVA acquiring ECI. 

 

I wouldn't doubt that there are talks going on, I just don’t know how far they are. And, of course, things can always fall through.

 

If ADVA and ECI are in discussions, they are doing a good job keeping it quiet. This contrasts with Coriant where rumours started to circulate before the deal was announced.     

Mike Genovese, managing director and senior equity research analyst at MKM Partners, who broke the news that Infinera was acquiring Coriant, has no knowledge of any ADVA deal. But he says such a deal fits the industry trend of vendors looking for scale and combining to focus their R&D resources on coherent optics. 

Another financial analyst, George Notter, managing director, equity research, telecom and networking equipment analyst at Jefferies, is also unaware of any deal. 

“It is a plausible concept,” says Sterling Perrin, principal analyst, optical networking and transport at Heavy Reading. He can see why ADVA is looking and why ECI might be a good fit. “I wouldn't doubt that there are talks going on, I just don’t know how far they are,” says Perrin. “And, of course, things can always fall through.”

 

Acquisition benefits 

Perrin points to ADVA’s Euro 111 million ($131 million) revenues in 3Q 2017, a drop from its Euro 144 million ($165 million) revenues reported in the previous quarter. 

ADVA attributed the drop in revenues to two major customers, one an internet content provider (ICP) and the other a large US carrier that was going through a merger. Amazon was the ICP, with ADVA losing some business to Ciena, says Heavy Reading. ADVA’s quarterly revenues have still not returned to their former levels. 

“It made ADVA think of how they are going to replace that [business] going forward,” says Perrin. “The webscale business that they bet so heavily on is very competitive, and as they learned with Amazon, the customers are not very loyal.”

By acquiring ECI, ADVA would gain a packet-optical transport platform, a product it lacks, as well as a presence in new markets. ECI has benefitted in recent years from the growing telecom market in India. “Half of ECI’s revenues are coming from Asia, most of that being India,” says Perrin. In contrast, ADVA’s Asian business accounts for over 10 percent of its revenues.

The two firms overlap in wavelength-division multiplexing equipment but not in the data centre interconnect market.

“ADVA might be looking for a land grab and to essentially double down in traditional telecom to make up for losses on the webscale side,” says Perrin. 

 

ADVA’s optical revenues in 2017 were $370 million while Heavy Reading estimates ECI’s optical revenues were $350 million last year

 

Mature market 

Optical transport equipment has become a mature market with fewer than a dozen players remaining. Outside of Asia, the main players are Ciena, Nokia, Cisco, Infinera-Coriant, ADVA and ECI Telecom.  

ADVA reported revenues of Euro 514 million in 2017 ($617 million). Heavy Reading says the two companies’ optical revenues are comparable: ADVA’s optical revenues in 2017 were $370 million while Heavy Reading estimates ECI’s optical revenues were $350 million last year. To put that in perspective, market leader Huawei’s optical revenues were $4 billion in 2017.

Both Coriant and ECI are privately held but Perrin says the fortunes of the two firms are very different.

Coriant was a company in decline, which explains why its owner, Oaktree Capital Management, was keen to sell. “ECI is doing really well right now,” says Perrin. ECI's revenues grew over 15 percent in 2017 compared with 2016 and the growth has continued this year. “Which is why you are hearing rumours of them floating publicly.”

ECI is thus in a strong position in any potential negotiations.


T-API taps into the transport layer

The Optical Internetworking Forum (OIF), in collaboration with the Open Networking Foundation (ONF) and the Metro Ethernet Forum (MEF), has tested the second-generation transport application programming interface (T-API 2.0).

SK Telecom's Park Jin-hyo

T-API 2.0 is a standardised interface, released in late 2017 by the ONF, that enables the dynamic allocation of transport resources using software-defined networking (SDN) technology.

The interface has been created so that when a service provider, or one of its customers, requests a service, the required resources including the underlying transport are configured promptly.       

The OIF-led interoperability demonstration tested T-API 2.0 in dynamic use cases involving equipment from several systems vendors. Four service providers - CenturyLink, Telefonica, China Telecom and SK Telecom - provided their networking labs, located in three continents, for the testing.

 

Packets and transport 

SDN technology is generally associated with the packet layer but there is also a need to control the transport layers, from fibre and wavelength-division multiplexing technology at Layer 0 through to Layer 2 Ethernet.

Transport SDN differs from packet-based SDN in several ways. Transport SDN sets up dedicated pipes whereas a path is only established when packets flow for packet SDN. “When you order a 100-gigabit connection in the transport network, you get 100 gigabits,” says Jonathan Sadler, the OIF’s vice president and Networking Interoperability Working Group chair. “You are not sharing it with anyone else.” 

Another difference is that the packet layer, with its manipulation of packet headers, is a digital domain whereas the photonic layer is analogue. “A lot of the details of how a signal interacts with a fibre, with the wavelength-selective switches, and with the different componentry that is used at Layer 0, are important in order to characterise whether the signal makes it through the network,” says Sadler.

 

T-API 1.0 is a configure and step-away deployment, T-API 2.0 is where the dynamic reactions to things happening in the network become possible   

 

Prior to SDN, control functions resided on a platform as part of a network’s distributed control plane. Each vendor had their own interface between the control and the optical domain embedded within their platforms. T-API has been created to expose and standardise that interface such that applications can request transport resources independent of the underlying vendor equipment.  

NBI refers to a northbound interface while SBI stands for a southbound interface. Source: OIF.

To fulfil a connection across an operator’s network involves a hierarchy of SDN controllers. An application’s request is first handled by a multi-domain SDN controller that decomposes the request for the various domain controllers associated with the vendor-specific platforms. T-API 2.0’s role is to link the multi-domain controller to the application layer’s orchestrator and also connect the individual domain controllers to the multi-domain SDN controller (see diagram above). T-API is an example of a northbound interface. 

The same T-API 2.0 interface is used at both SDN controller levels; what differs is the information each handles. Sadler compares the upper T-API 2.0 interface to a high-level map, whereas the individual T-API 2.0 domain interfaces can be seen as maps with detailed ‘local’ data. “Both [interfaces] work on topology information and both direct the setting-up of connections,” says Sadler. “But the way they are doing it is with different abstractions of the information.”
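T-API is published as a YANG model and typically carried over RESTCONF. As a hedged sketch of what an orchestrator’s connectivity-service request towards the multi-domain controller might look like - the controller address, UUIDs and exact resource path here are invented for illustration:

```python
import json
import urllib.request

CONTROLLER = "https://mdsc.example.net"  # hypothetical multi-domain controller
PATH = "/restconf/data/tapi-common:context/tapi-connectivity:connectivity-context"

# A T-API-style connectivity service between two service interface points.
service = {
    "tapi-connectivity:connectivity-service": [{
        "uuid": "00000000-0000-0000-0000-000000000001",  # invented
        "end-point": [
            {"service-interface-point": {"service-interface-point-uuid": "sip-a"}},
            {"service-interface-point": {"service-interface-point-uuid": "sip-z"}},
        ],
        "requested-capacity": {"total-size": {"value": 100, "unit": "GBPS"}},
    }]
}

request = urllib.request.Request(
    CONTROLLER + PATH,
    data=json.dumps(service).encode(),
    headers={"Content-Type": "application/yang-data+json"},
    method="POST",
)
# urllib.request.urlopen(request)  # would submit the request to the controller
```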

 

T-API 2.0

The ONF developed the first T-API interface as part of its Common Information Model (CIM) work. The interface was tested in 2016 as part of a previous interoperability demonstration involving the OIF and the ONF.  

One important shortfall revealed during the 2016 demonstrations, and one which has slowed deployment, is that the T-API 1.0 interface didn't fully define how to notify an upper controller of events in the lower domains. For example, if a link became congested or, worse, was lost, it couldn’t inform the upper controller to re-route traffic. This has been put right with T-API 2.0.

“T-API 1.0 is a configure and step-away deployment, T-API 2.0 is where the dynamic reactions to things happening in the network become possible,” says Sadler.    

 

When it comes to the orchestrator tying into the transport network, we do believe T-API will be one of the main approaches for these APIs

 

Interoperability demonstration

In addition to the four service providers, six systems vendors took part in the recent interoperability demonstration: ADVA Optical Networking, Coriant, Infinera, NEC/ Netcracker, Nokia and SM Optics.

The recent tests focussed on the performance of the T-API 2.0 interface under dynamic network conditions. Another change since the 2016 tests was the involvement of the MEF. The MEF has adopted and extended T-API as part of its Network Resource Modeling (NRM) and Network Resource Provisioning (NRP) projects, elements of the MEF’s Lifecycle Service Orchestration (LSO) architecture. The LSO allows for service provisioning using T-API extensions that support the MEF’s Carrier Ethernet services.

Three aspects of the T-API 2.0 interface were tested as part of the use cases: connectivity, topology and notification. 

Setting up a service requires both connectivity and topology. Topology refers to how a service is represented in terms of the node edge points and the links. Notification refers to the northbound aspect of the interface, pushing information upwards to the orchestrator at the application layer. This allows the orchestrator in a multi-domain network to re-route connectivity services across domains.

The four use cases tested included multi-layer network connections whereby topology information is retrieved from a multi-domain network with services provisioned across domains. 

T-API 2.0 was also used to show the successful re-routing of traffic when network situations change such as a fault, congestion, or to accommodate maintenance work. Re-routing can be performed across the same layer such as the IP, Ethernet or optical layer, or, more optimally, across two or more layers. Such a capability promises operators the ability to automate re-routing using SDN technology.     

The two other use cases tested during the recent demonstration were the orchestrator performing network restoration across two or more domains, and the linking of data centres’ network functions virtualisation infrastructure (NFVI). Such NFVI interconnect is a complex use case involving SDN controllers using T-API to create a set of wide area networks connecting the NFV sites. The use-case setup is shown in the diagram below.

 Source: OIF

SK Telecom, one of the operators that participated in the interoperability demonstration, welcomes the advent of T-API 2.0 and says such APIs will allow operators to enable services more promptly.

“It has been difficult to provide services such as bandwidth-on-demand and networking services for enterprise customers enabled using a portal,” says Park Jin-hyo, executive vice president of the ICT R&D Centre at SK Telecom. “These services will be provided within minutes, according to the needs, using the graphical user interface of SK Telecom’s network-as-service platform.”

SK Telecom stresses the importance of open APIs in general as part of its network transformation plans. As well as implementing a 5G Standalone (SA) Core, SK Telecom aims to provide NFV and SDN-based services across its network infrastructure including optical transport, IP, data centres, wired access as well as networks for enterprise customers.

“Our final goal is to open the network itself to enterprise customers via an open API,” says Park. “Our mission is to create 5G-enabled network-slicing-based business models and services for vertical markets.”

 

Takeaways

The OIF says the use cases have shown that T-API 2.0 enables real-time orchestration and that the main shortcomings identified with the first T-API interface have been addressed with T-API 2.0.

The OIF recognises that while T-API may not be the sole approach available for the industry - the IETF has a separate activity - the successful tests and the broad involvement of organisations such as the ONF and MEF make a strong case for T-API 2.0 as the approach for operators as they seek to automate their networks.  

“When it comes to the orchestrator tying into the transport network, we do believe T-API will be one of the main approaches for these APIs,” says Sadler.

SK Telecom said participating in the interop demonstrations enabled it to test and verify, at a global level, APIs that the operators and equipment manufacturers have been working on. And from a business perspective, the demonstration work confirmed to SK Telecom the potential of the ‘global network-as-a-service’ concept.

 

Editor’s note: Added input from SK Telecom on September 1st.


ADVA adds quantum-resistant security to its optical systems

ADVA has demonstrated two encryption techniques for optical data transmission to counter the threat posed by quantum computing.  

“Quantum computers are very powerful tools to solve specific classes of mathematical problems,” says Jörg-Peter Elbers, senior vice president, advanced technology at ADVA. “One of these classes of problems is solving equations behind certain cryptographic schemes.”  

 

The use of three key exchange schemes over one infrastructure: classical public-key exchange using the Diffie-Hellman scheme, the quantum-resistant Niederreiter algorithm, and a quantum-key distribution (QKD) scheme. Source: ADVA

Public-key encryption makes use of one-way functions such as the discrete logarithm: mathematical operations that are easy for a conventional computer to calculate in one direction but too challenging to invert. Inverting such functions, however, is exactly what quantum computers are expected to excel at.
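The discrete-logarithm trapdoor is easiest to see in the Diffie-Hellman exchange itself. The toy numbers below are chosen only so the arithmetic can be followed by hand; real deployments use primes of 2,048 bits or more.

```python
# Toy Diffie-Hellman key exchange with a tiny prime, for illustration only.
p, g = 23, 5          # public modulus and generator (toy values)

a = 6                 # Alice's private exponent
b = 15                # Bob's private exponent

A = pow(g, a, p)      # Alice sends g^a mod p = 8
B = pow(g, b, p)      # Bob sends   g^b mod p = 19

# Each side raises the other's public value to its own private exponent.
shared_alice = pow(B, a, p)   # 19^6  mod 23 = 2
shared_bob   = pow(A, b, p)   # 8^15  mod 23 = 2
assert shared_alice == shared_bob
```

Recovering the private exponent a from the public values (g, p, g^a mod p) is the discrete-logarithm problem: infeasible classically at real key sizes, but efficiently solvable by Shor’s algorithm on a sufficiently large quantum computer.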

A fully-fledged quantum computer does not yet exist, but the rapid progress being made in the underlying technologies suggests it is only a matter of time. Once such computers exist, public-key-based security will be undermined.

The looming advent of quantum computers already threatens data that must remain secure for years to come: traffic tapped today can be stored and decrypted once a quantum computer becomes available. There are agencies that specialise in tapping fibre, says Elbers, while the cost of storage has fallen to the point that warehousing huge amounts of data traffic in a data centre is affordable. “The threat scenario is certainly a real one,” says Elbers.

 

Demonstrations

ADVA has demonstrated two techniques, one using quantum-key distribution (QKD) and the other a quantum-resistant algorithm.  

For quantum-key distribution, ADVA’s FSP 3000 platform is being used as part of the UK’s first quantum communication network, which comprises a metro network in Cambridge linked to BT Labs in Ipswich, 120km away.

ADVA’s platform enables sites to exchange the keys used for encoding the data traffic. In the Cambridge metro, a quantum system from Toshiba is used to encode the keys, while between Cambridge and BT Labs the equipment used is from ID Quantique.

 

The threat scenario is certainly a real one

 

For ADVA’s second demonstration, a quantum-resistant encryption algorithm - one designed to withstand quantum computing attacks - is incorporated into its FSP 3000 platform to encrypt 100 gigabit-per-second traffic flows over long-haul distances. ADVA has shown secure transmissions over 2,800km, spanning three European national research and educational networks.

“There is never 100 percent security in one system but you can increase security using multiple independent systems,” says Elbers. “You can use your classical encryption methods in use today and add quantum-key distribution or a quantum-resistant algorithm or use all three over one infrastructure.”  (See diagram, top.) 

 

Quantum key distribution 

Public key cryptography, comprising a public and a private key pair, is an example of an asymmetric key scheme. The public key, as implied by the name, is published with a recipient’s name. Any party wanting to send data securely to the user employs the published public key to scramble the data. Only the recipient, with the associated private key, can decode the sent data. The Diffie-Hellman algorithm is a widely used public-key scheme for exchanging keys.

With a symmetric scheme, the same key is used at both ends to lock and unlock the data. A well-known symmetric key algorithm is the Advanced Encryption Standard. AES-256, for example, uses a 256-bit key.

Symmetric algorithms are much more efficient than asymmetric ones, but the issue with a symmetric scheme is getting the secret key to the recipient without it being compromised. The key can be sent manually, even with armed guards. A more practical approach is to send the key over a secure link using public key cryptography; the asymmetric key exchange protects the transmission of the symmetric key used for the subsequent encryption of the payload.
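The hybrid pattern is simple to sketch. The fragment below assumes the third-party Python cryptography package and hard-codes the shared secret for brevity; it illustrates the general pattern of deriving a symmetric key from an already-exchanged secret, not ADVA’s implementation.

```python
# Minimal sketch of hybrid encryption: an asymmetric exchange protects the
# symmetric key, which then encrypts the payload. Illustrative only.
import base64, hashlib
from cryptography.fernet import Fernet

# Assume both ends already derived the same shared secret, e.g. via the
# Diffie-Hellman exchange sketched earlier. Hard-coded here for brevity.
shared_secret = (2).to_bytes(32, "big")

# Derive a symmetric key from the shared secret (Fernet wants 32 bytes,
# urlsafe-base64 encoded).
sym_key = base64.urlsafe_b64encode(hashlib.sha256(shared_secret).digest())

cipher = Fernet(sym_key)                     # AES-128-CBC + HMAC under the hood
token = cipher.encrypt(b"payload traffic")   # sender side
assert cipher.decrypt(token) == b"payload traffic"   # receiver side
```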

Quantum computing is a potent threat because it undermines all asymmetric encryption schemes in widespread use today. 

Quantum key distribution, which uses particles of light or photons, is a proposed way to secure the symmetric key’s transfer. Here, single photons are used to transmit a binary signal that is then used to generate the same secret key at both ends. Should an adversary eavesdrop with a photo-detector and steal a photon, that photon will not arrive at the other end. Should the hacker be more sophisticated and try to measure the photon before sending it on, they are stymied by the laws of physics since measuring a photon changes its quantum state.

Given these physical properties of photons, the sender and receiver can jointly detect a potential eavesdropper. If the number of missing or altered photons is too high, the assumption is the link is compromised.
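The detection logic can be illustrated with a toy intercept-and-resend simulation in the style of the BB84 protocol. Everything below models the statistics only; it is not any vendor’s implementation.

```python
# Toy BB84-style simulation: an eavesdropper who measures photons in a
# randomly chosen basis corrupts roughly 25% of the sifted key bits.
import random

N = 10_000
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]
EVE_PRESENT = True

bob_results = []
for bit, basis in zip(alice_bits, alice_bases):
    if EVE_PRESENT:
        eve_basis = random.choice("+x")
        if eve_basis != basis:        # wrong basis: result is random...
            bit = random.randint(0, 1)
        basis = eve_basis             # ...and the photon is re-sent in Eve's basis
    bob_basis = random.choice("+x")
    if bob_basis != basis:            # Bob, too, gets a random result
        bit = random.randint(0, 1)
    bob_results.append((bob_basis, bit))

# Sifting: keep only positions where Alice's and Bob's bases agree.
sifted = [(a, rb) for a, ab, (bb, rb) in
          zip(alice_bits, alice_bases, bob_results) if ab == bb]
errors = sum(1 for a, b in sifted if a != b)
print(f"sifted-key error rate: {errors / len(sifted):.1%}")
# ~25% with EVE_PRESENT, ~0% without.
```

With the eavesdropper present, roughly a quarter of the sifted bits disagree; without one, the error rate is essentially zero, so comparing a random sample of bits reveals whether the link is compromised.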

But with quantum key distribution, the distance a photon can travel is only a few tens of kilometres: a single photon is inherently low-intensity light and cannot be amplified without destroying the quantum information it carries. For longer transmission distances, intermediate trusted sites are required to regenerate the key exchange along the way. BT uses two such trusted sites on the link between Cambridge and BT Labs.
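Fibre attenuation shows why. Assuming a typical loss of 0.2dB per kilometre (an illustrative textbook figure), the fraction of photons surviving a 100km span is

\[
10^{-\left(0.2\ \text{dB/km}\,\times\,100\ \text{km}\right)/10} = 10^{-2},
\]

that is, 99 in every 100 photons are lost, and the survival rate falls exponentially with further distance.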

ADVA, along with Toshiba, has been working on an open interface that allows secure quantum key distribution over a dense wavelength division multiplexing (DWDM) link, independent of the systems used. An open interface also means operators using different quantum key distribution systems can interoperate, says Elbers.

 

The US National Institute of Standards and Technology (NIST) is assessing candidate quantum-resistant algorithms with the goal of standardising a suite of protocols by 2024

 

One way to enable single-photon streams is to use a dedicated fibre. But to avoid the expense of a separate fibre, ADVA sends the photons over a dedicated channel alongside the data transmission channels, which carry much higher-intensity light.

“Ideally you want a single quantum but, in practice, you might work with a highly attenuated laser source that emits less than a single quantum on average,” says Elbers. “Everything you have on your co-propagating channels can impact the performance.” ADVA uses optical filtering to ensure the data channels don’t spill over and adversely affect the key’s transfer. 
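Elbers’s phrase “less than a single quantum on average” can be made concrete with Poisson statistics. Assuming, for illustration, a mean photon number of μ = 0.1 per pulse (a textbook value, not ADVA’s figure), the probability that a pulse carries two or more photons - the case an eavesdropper could siphon off unnoticed - is

\[
P(n \geq 2) = 1 - e^{-\mu}(1+\mu) \approx 4.7\times 10^{-3},
\]

while roughly 90 percent of pulses are empty, one reason QKD key rates are modest.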

 

Quantum-resistant algorithms 

The second approach uses maths rather than fundamental physics to make data encryption resistant to quantum computing attacks. The result is what are referred to as quantum-resistant techniques.

The US National Institute of Standards and Technology (NIST) is assessing candidate quantum-resistant algorithms with the goal of standardising a suite of protocols by 2024.

The maths behind these schemes is complicated but what unifies them is that none are based on the mathematical problems susceptible to known quantum computing attacks.

ADVA uses the Niederreiter key exchange algorithm, one of NIST’s candidate schemes, for its system. To ensure the highest level of security for high-speed optical transmission, a new symmetric key is sent frequently. The Niederreiter algorithm uses comparatively long keys but Elbers points out that with a 100-gigabit payload, the overhead of long keys is minimal. Moreover, ADVA communicates key exchange information in the overhead field of the Optical Transport Network’s (OTN) OTU-4 frame.
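Some rough arithmetic supports the point. As an illustration only - the key size is an assumption, not ADVA’s figure - code-based schemes such as Niederreiter can have public keys in the region of a megabyte, and sending one over a 100 gigabit-per-second channel takes

\[
t = \frac{8\times 10^{6}\ \text{bits}}{10^{11}\ \text{bit/s}} = 80\ \mu\text{s},
\]

so even frequent key refreshes consume a negligible fraction of the link’s capacity.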

Customers are already showing interest in quantum security, says Elbers, and this is one of the reasons why ADVA is active in the UK’s Quantum Communications Hub initiative. “We are showing people that the technology is here, ready for deployment and can be integrated with existing systems,” says Elbers.

Organisations keen to ensure the long-term secrecy of their data need to consider now what they should be doing to address the threat, he adds.


Infinera buying Coriant will bring welcome consolidation

Infinera is to purchase privately-held Coriant for $430 million. The deal will effectively double Infinera’s revenues, add 100 new customers and expand the systems vendor’s product portfolio.

But industry analysts, while welcoming the consolidation among optical systems suppliers, highlight the challenges Infinera faces in making the Coriant acquisition a success.

“The low price reflects that this isn't the best asset on the market,” says Sterling Perrin, principal analyst, optical networking and transport at Heavy Reading. “They are buying $1 of revenue for 50 cents; the price reflects the challenges.”   

 

Benefits 

According to Perrin, there are still too many vendors facing "brutal price pressures" despite the optical industry being mature. Removing one vendor that has been cutting prices to win business is good news for the rest. 

For Infinera, the acquisition of Coriant promises three main benefits, as outlined by its CEO, Tom Fallon, during a briefing addressing the acquisition. 

The first is expanding its vertically-integrated business model across a wider portfolio of products. Infinera develops its own optical technology: its indium-phosphide photonic integrated circuits (PICs) and accompanying coherent DSPs that power its platforms. Having its own technology differentiates the optical performance of its platforms and helps it achieve leading gross margins of over 40 percent, said Fallon.

Exploiting the vertical integration model will be a central part of the Coriant acquisition. Indeed, the company mentioned vertical integration 21 times in as many minutes during its briefing outlining the deal. Infinera expects to deliver industry-leading growth and operating margins once it exploits the benefits of vertical integration across an expanded portfolio of platforms, said Fallon.

 

Having a seat at the table with the largest global service providers to strategise about where their business is going will be invaluable

 

Buying Coriant also gives Infinera much-needed scale. Not only will Infinera double its revenues - Coriant’s revenues were about $750 million in 2017 while Infinera’s were $741 million for the same period - but it will expand its customer base to include key tier-one service providers and webscale players. According to Fallon, the combined company will count nine of the top 10 global tier-one service providers and the six leading global internet content providers among its customers.

Infinera admits it has struggled to break into the tier-one operators and points out that trying to enter is an expensive and time-consuming process, estimated at between $10 million and $20 million each time. “[Now, with Coriant,] having a seat at the table with the largest global service providers to strategise about where their business is going will be invaluable,” said Fallon.

 

The third benefit Infinera gains is an expanded product portfolio. Coriant has expertise in layer 3 networking, in the metro core with its mTera universal transport platform, as well as in SDN orchestration and white-box technologies. Heavy Reading’s Perrin says Coriant has started development of a layer-3 white-box router for edge applications.

Combining the two companies also results in a leading player in data centre interconnect.

“Coriant expands our portfolio, particularly in packet and automation where significant network investment is expected over the next decade,” said Fallon. The deal is happening at the right time, he said, as operators ramp spending as they undertake network transformation. 

Infinera will pay $230 million in cash - $150 million up front and the rest in increments - and a further $200 million in shares for Coriant. The company expects to achieve cost savings of $250 million between 2019 and 2021 by combining the two firms, $100 million in 2019 alone. The deal is expected to close in the third quarter of 2018. 

 

If a company is going to put that integrated product into their network, it’s a full-blown RFP process which Infinera may or may not win

 

Challenges 

Industry analysts, while seeing positives for Infinera, have concerns regarding the deal.  

A much-needed consolidation of weaker vendors is how George Notter, an analyst at the investment bank Jefferies, describes the deal. For Infinera, however, continuing as before was not an option. Heavy Reading’s Perrin agrees: “Infinera has been under a lot of pressure; their core business of long-haul has slowed.”

The deal brings benefits to Infinera: scale, complementary product sets, and the promise of being able to invest more in R&D to benefit its PIC technology, says Notter in a research note.

Gaining customers is also a key positive. “Infinera is really excited about getting the new set of customers and that is what they are paying for,” says Vladimir Kozlov, CEO of LightCounting Market Research. “However, these customers were gained by pricing products at steep discounts.” 

What is vital for Infinera is that it delivers its upcoming 2.4-terabit Infinite Capacity Engine 5 (ICE5) optical engine on time. The ICE5 is expected to ship in early 2019. In parallel, Infinera is developing its ICE6 due two years later. Infinera is developing two generations of ICE designs in parallel after being late to market with its current 1.2-terabit optical engine. 

 

Infinera is really excited about getting the new set of customers and that is what they are paying for

 

But even if the ICE5 is delivered on time, upgrading Coriant's platforms will be a major undertaking. “It sounds like they are going to fit their optical engines in all of Coriant’s gear; I don’t see how that is going to happen anytime quickly,” says Perrin.

Customers bought Coriant's equipment for a reason. Once upgraded with Infinera’s PICs, these will be new products that have to undergo extensive lab testing and full evaluations.  

Perrin questions how moving customers off legacy platforms to the new will not result in the service providers triggering a new request-for-proposal (RFP). “If a company is going to put that integrated product into their network, it’s a full-blown RFP process which Infinera may or may not win,” says Perrin. “Infinera talked a lot about the benefits of vertical integration but they didn’t really address the challenges and the specific steps they would take to make that work.”

LightCounting’s Kozlov also questions how this will work.

“The story about vertical integration and scaling up PIC production is compelling, but how will they support Coriant products with the PIC?” he says. “Will they start making pluggable modules internally? Will Coriant’s customers be willing to move away from the pluggables and get locked into Infinera’s PICs? Do they know something that we don’t?”

While Infinera is a top five optical platform supplier globally it hasn’t dominated the market with its PIC technologies, says Perrin. “Even if they technically pull off the vertical integration with the Coriant products, how much is that going to win business for them?” he says. “It is one architecture in a mix that has largely gone to pluggables.”

 

Transmode 

Infinera already has experience of acquiring a systems vendor: in 2015 it bought the metro-access player Transmode. Strategically, this was a very solid acquisition, says Perrin, but the jury is still out as to its success.

“The integration, making it work, how Transmode has performed within Infinera hasn’t gone as well as they wanted,” says Perrin. “That said, there are some good opportunities going forward for the Transmode group.” 

Infinera had also planned to integrate its PIC technology within Transmode’s products but it didn’t make economic sense for the metro market. There may also have been pushback from customers that liked the Transmode products, says Perrin: “With Coriant it looks like they really are going to force the vertical integration.”

Infinera acknowledges the challenges ahead and the importance of overcoming them if it is to secure its future. 

“Given the comparable sizes of each company’s revenues and workforce, we recognise that integration will be challenging and is vital for our ultimate success,” said Fallon.  


Imec eyes silicon photonics to solve chip I/O bottleneck

In this second and final article, the issue of adding optical input-output (I/O) to ICs is discussed, with a focus on the work of the Imec nanoelectronics R&D centre, which is using silicon photonics for optical I/O.

Part 2: Optical I/O

Imec has demonstrated a compact low-power silicon-photonics transceiver operating at 40 gigabits per second (Gbps). The silicon photonics transceiver design also uses 14nm FinFET CMOS technology to implement the accompanying driver and receiver electronics. 

“We wanted to develop an optical I/O technology that can interface to advanced CMOS technology,” says Joris Van Campenhout, director of the optical I/O R&D programme at Imec. “We want to directly stick our photonics device to that mainstream CMOS technology being used for advanced computing applications.”

Traditionally, the Belgian nanoelectronics R&D centre has focussed on scaling logic and memory, but in 2010 it started an optical I/O research programme. “It was driven by the fact that we saw that electrical I/O doesn’t scale that well,” says Van Campenhout. Electrical interfaces have power, space and reach issues that worsen with each hike in transmission speed.

Imec is working with partner companies to research optical I/O. The players are not named but include semiconductor foundries, tool vendors, fabless chip companies and electronic design automation firms. The aim is to use optical I/O to increase link capacity, bandwidth density - a measure of the link capacity that can be crammed into a given space - and reach. The research targets a 10x to 100x improvement in these metrics.

The number of silicon photonics optical I/O circuits manufactured each year remains small, says Imec: several thousand to ten thousand semiconductor wafers at most. But Imec expects volumes to grow dramatically over the next five years as optical interconnects are used for ever-shorter reaches, a few metres and eventually below one metre.

“That is why we are participating in this research, to put together building blocks to help in the technology pathfinding,” says Van Campenhout. 

 

We wanted to develop an optical I/O technology that can interface to advanced CMOS technology

 

Silicon photonics transceiver 

Imec has demonstrated a 1330nm optical transceiver operating at 40Gbps using non-return-to-zero signalling. The design uses hybrid integration to combine silicon photonics with 14nm FinFET CMOS electronics. The resulting transceiver occupies 0.025mm2 - the area across the combined silicon photonics and CMOS stack for a single transceiver channel - which equates to a bandwidth density of 1.6 terabits-per-second/mm2.
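The bandwidth density figure follows directly from the two quoted numbers:

\[
\frac{40\ \text{Gbit/s}}{0.025\ \text{mm}^{2}} = 1{,}600\ \text{Gbit/s/mm}^{2} = 1.6\ \text{Tbit/s/mm}^{2}.
\]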

The silicon photonics and FinFET test chips each contain circuitry for eight transmit and eight receive channels. Combined, the transmitter path comprises a silicon photonics ring modulator and a FinFET differential driver while the receiver uses a germanium-based photo-detector and a first-stage FinFET trans-impedance amplifier (TIA).

The transceiver has an on-chip power consumption of 230 femtojoules-per-bit, although Van Campenhout stresses that this is a subset of the functionality needed for the complete link. “This number doesn’t include the off-chip laser power,” he says. “We still need to couple 13dBm - 20mW - of optical power into the silicon photonics chip to close the link budget.” Given the laser has an efficiency of 10 to 20 percent, that means another 100mW to 200mW of power.
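Unpacking the quoted figures, with the per-bit conversion added here for context:

\[
13\ \text{dBm} = 10^{13/10}\ \text{mW} \approx 20\ \text{mW}, \qquad
P_{\text{laser}} = \frac{20\ \text{mW}}{0.10\ \text{to}\ 0.20} = 100\ \text{to}\ 200\ \text{mW},
\]

which at 40Gbps corresponds to an additional 2.5 to 5 picojoules-per-bit of off-chip laser power.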

That said, an equivalent-speed electrical interface has an on-chip power consumption of some 2 picojoules-per-bit, so the optical interface has margin to better the power efficiency of the equivalent electrical I/O. In turn, the optical I/O’s reach using single-mode fibre is several hundred metres, far greater than any electrical interface.

Imec is confident it can increase the optical interface’s speed to 56Gbps. The layout of the CMOS circuits can be improved to reduce internal parasitic capacitances while Imec has already improved the ring modulator design compared to the one used for the demonstrator. 

“We believe that with a few design tweaks we can get to 56Gbps comfortably,” says Van Campenhout. “After that, to go faster will require new technology like PAM-4 rather than non-return-to-zero signalling.”

Imec has also tested four transmit channels using cascaded ring modulators on a common waveguide as part of work to add a wavelength-division multiplexing capability.

 

Transceiver packaging

The two devices - the silicon photonics die and the associated electronics - are combined using chip-stacking technology. 

Both devices use micro-bumps with a 50-micron pitch, with the FinFET die flip-chipped onto the silicon photonics die. The combined CMOS and silicon photonics assembly is glued onto a test board and wire-bonded, while the v-groove fibre arrays are attached using active alignment. The fibre-to-chip coupling loss, at 4.5dB in the demonstration, remains high but the researchers say this can be reduced, having achieved 2dB coupling losses on separate test chips.

 

Source: Imec.

Imec is also investigating using through-silicon vias (TSV) technology and a silicon photonics interposer in order to replace the wire-bonding. TSVs deliver better power and ground signals to the two dies and enable high-speed electrical I/O between the transceiver and the ASIC such as a switch chip. The optics and ASIC could be co-packaged or the transceiver used in an on-board optics design next to the chip. 

“We have already shown the co-integration of TSVs with our own silicon photonics platform but we are not yet showing the integration with the CMOS die,” says Van Campenhout. “Something we are working on.”  

 

Co-packaging the optics with silicon will come at a premium cost

 

Applications

The first ICs to adopt optical I/O will be used in the data centre and for high-performance computing. The latest data centre switch ICs, with a capacity of 12.8 terabits, are implemented using 16nm CMOS. Moving to a 7nm CMOS process node will enable capacities of 51.2 terabits. “These are the systems where the bandwidth density challenge is the largest,” says Van Campenhout.

But significant challenges must be overcome before this happens, he says: “I think we all agree that bringing optics deeply integrated into such a product is not a trivial thing.” 

Co-packaging the optics with silicon will come at a premium cost. There are also reliability issues to be resolved and greater standardisation across the industry will be needed as to how the packaging should be done. 

Van Campenhout expects this will only happen in the next four to five years, once the traffic-handling capacity of switch chips doubles and doubles again.  

Imec has seen growing industry interest in optical I/O in the last two years. “We have a lot of active interactions so interest is accelerating now,” says Van Campenhout.    

