Finisar demonstrates its first silicon photonics transceiver
- Finisar unveiled its first silicon photonics-based product, a 400-gigabit QSFP-DD DR4 module, at the recent ECOC event.
- The company also showed transceiver technology that simplifies the setting up of dense wavelength-division multiplexing (DWDM) links.
- Finisar also demonstrated two 200-gigabit QSFP56 client-side modules and an extended-reach 30km 400-gigabit eLR8.
- A 64-gigabaud integrated tunable transmitter and receiver assembly (ITTRA) was used to send a 400-gigabit coherent wavelength.
Finisar is bringing to market its first silicon photonics-based optical module.
The 400GBASE-DR4 is an IEEE 500m-reach 400-gigabit parallel fibre standard based on four fibres, each carrying a 100-gigabit 4-level pulse amplitude modulation (PAM-4) signal. Finisar’s DR4 is integrated into a QSFP-DD module.
“The DR4 is the 400-gigabit interface that most of the hyperscale cloud players are interested in first,” says Christian Urricariet, senior director of global marketing at Finisar.
The company demonstrated the module at the recent European Conference on Optical Communication (ECOC), held in Rome.
Silicon photonics-based DR4
The DR4 is an integrated design, says Finisar, comprising modulators and photo-detectors as well as modulator drivers and the trans-impedance amplifiers (TIAs).
Finisar chose silicon photonics for the DR4 after undertaking an extensive technology study. Silicon photonics emerged as ‘a clear winner’ in terms of cost and performance for photonic designs made up of similar functions in parallel, such as the four-channel DR4. Silicon photonics manufacturing is also scalable, making it ideal for high-volume designs.
The DR4 can also be used in a breakout mode to interface to four 100GBASE-DR modules. Also referred to as the DR1, the 100GBASE-DR fits within an SFP-DD or a QSFP28 module.
The DR4-DR1 combination can link four servers, each using a 100-gigabit link, to a 400-gigabit port on a top-of-rack or mid-row switch. The top-of-rack 400-gigabit DR4 can also connect to a leaf switch with multiple 100-gigabit ports. “The DR4 can be used ‘top-of-rack down’ [to servers] or ‘top-of-rack up’ [to leaf switches],” says Urricariet. “This is similar to what people are doing with the [100-gigabit parallel fibre] PSM4.”
400-gigabit eLR8
Finisar also showcased an extended reach version of the IEEE 400GBASE-LR8 standard.
Dubbed the eLR8, the QSFP-DD module extends the reach of the LR8 from 10km to 30km. It is a technology demonstrator rather than a product.
Finisar already has an LR8 product in a CFP8 pluggable module and is moving the design to the smaller QSFP-DD. The LR8 is an eight-wavelength duplex interface where each wavelength carries a 50-gigabit PAM-4 signal.
“The 400GBASE-LR8 is a low-risk approach to achieving a 400-gigabit duplex single-mode link in the short term,” says Urricariet. “You don’t have to wait for 100-gigabit PAM-4 [ICs] to be manufactured in high volume.”
Urricariet says the IEEE is considering developing an extended LR8 standard with a 40km reach but such distances could also be addressed using inexpensive coherent technology.
Finisar’s design achieves the extended range using the same components as its LR8 module - directly modulated DFB lasers and PIN photodetectors. “There is plenty of margin with that [LR8 design],” says Urricariet. This suggests Finisar picked the best performing DFBs and PINs for the eLR8 design.
The QSFP-DD 10km LR8 design is sampling now, with general availability from the first half of 2019.
Flextune
Configuring DWDM links can be likened to two groups of people separated in a wood at night. Each individual has a flashlight and is tasked with finding a counterpart from the second group, a process repeated until everyone is paired.
Setting up DWDM links is comparable to telling each individual the exact path to take to find their counterpart. The Flextune technology that Finisar has developed can be viewed as giving each individual the confidence to stride out - sweeping their flashlights as they go - till they find a counterpart.
Currently, setting up a DWDM link requires coordination between a field engineer and network operations staff. Each tunable transceiver that is plugged into a port is told which wavelength to tune to. The system itself may tell the transceiver the wavelength to use or a field engineer programs each transceiver before it is plugged into the platform.
Equally, the transceiver output fibre must be connected to the right optical multiplexer and demultiplexer (mux-demux) port, as must the transceivers at the link’s other end.
The result is a time-consuming process that is prone to human error.
With Flextune, the tunable transceivers are plugged into the equipment’s ports and connected to the mux-demux’s ports. “It does not matter which port,” says Urricariet. “The transceivers search for each other and self-configure to the right wavelength.”
Each Flextune-enabled transceiver operates independently of the transceiver at the other end; there is no master-slave arrangement, says Urricariet, although a master-slave arrangement can be used if requested.
The mux-demux must also have a blocking architecture for Flextune to work. “If the mux-demux does not block the other wavelengths on each port, then you have a mess,” says Urricariet. With such a mux-demux, the scanned channels are blocked until the transceiver’s output reaches the right channel. Once the link is established, the two transceivers lock permanently to that wavelength.
“It [the process] happens at both ends simultaneously and on all the ports,” says Urricariet. “The basic technique can self-tune up to 96 [DWDM] channels in around five minutes.”
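The search process can be pictured as a simple channel sweep at each end of the link. The sketch below is an illustrative model of the idea, not Finisar's actual algorithm; the function names and grid values are invented for the example:

```python
# Illustrative model of a Flextune-style self-tuning sweep.
# Names and grid values are hypothetical, not Finisar's implementation.

def self_tune(port_passband, channels):
    """Sweep the tunable laser across the DWDM grid until the blocking
    mux-demux port passes our light, then lock to that wavelength."""
    for ch in channels:
        # A blocking mux-demux only passes the channel that matches the
        # port's passband; every other wavelength is rejected.
        if ch == port_passband:
            return ch  # link detected: stay on this wavelength
    return None  # no channel matched: mis-cabled or faulty port

# A 96-channel grid, e.g. 50GHz spacing from 191.35THz (illustrative)
grid = [191.35 + 0.05 * i for i in range(96)]
locked = self_tune(port_passband=grid[42], channels=grid)
```

In practice both transceivers run the sweep independently and in parallel across all ports, which is why the whole mux-demux can self-configure in minutes.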
Being able to tune independently of the host equipment means that the Flextune-enabled transceivers can also be sold directly to operators and plugged into any of their equipment.
Urricariet says Flextune promises welcome operational savings given DWDM’s increasing adoption in the access network with developments such as 5G fronthaul.
Flextune will also be used for metro and data centre interconnect applications, as well as connecting Remote PHY nodes being deployed in cable networks. “The Remote PHY is also a big focus for this type of feature,” says Urricariet.
Finisar demonstrated Flextune with its 10-gigabit tunable SFP+ modules that are now sampling. Flextune will also be adopted for its 25-gigabit tunable SFP28 that will sample ‘very soon’, followed by coherent modules.
“We do have a CFP2-ACO module in production and other coherent products on our roadmap,” says Urricariet. “We will be looking to implement Flextune technology in these products as well.”
200 Gigabit Ethernet: a growing interim solution
Finisar also demonstrated two 200-gigabit modules. The QSFP56 implements the 2km FR4 specification. The 200-gigabit FR4 uses four coarse WDM (CWDM) wavelengths, each carrying a 50-gigabit PAM-4 signal.
Finisar has previously said it will develop 200-gigabit modules for the large-scale data centres interested in the technology as an interim solution before 400-gigabit modules ramp. Such an intermediate market for “one hyperscaler and maybe two” is sufficient to justify making 200-gigabit modules, says Urricariet.
Market research firm LightCounting has increased its forecast for 200 Gigabit Ethernet (GbE) modules due to interest from Facebook.
A presentation by Facebook at ECOC suggested that 400 GbE is far from being ready, says Vladimir Kozlov, CEO of LightCounting. “It looks like 200GbE is being considered now, but Facebook may change its mind again,” says Kozlov. “In the meantime, Google has started deployments of 2x200GbE [in an OSFP module] as planned.”
As with the 400-gigabit eLR8, Finisar also demonstrated an extended reach version of the 200-gigabit FR4 to achieve a 10km reach. “This is not to be confused with the 10km 200-gigabit LR4 that is a LAN-WDM grid based design,” says Urricariet. “The extended FR4 uses a CWDM grid.”
ITTRA
At OFC 2018 in March, Finisar unveiled its 32-gigabaud (Gbaud) integrated tunable transmitter and receiver assembly (ITTRA) that combines the optics and electronics required for an analogue coherent optics interface.
The ITTRA comprises a tunable laser, an optical amplifier, modulators, modulator drivers, coherent mixers, a photo-detector array and the accompanying TIAs. All the components of the 32Gbaud ITTRA are integrated within a gold box that is 70 percent smaller than the size of a CFP2 module. The integrated assembly also has a power consumption below 7.5W.
At ECOC, the company demonstrated its second ITTRA design operating at 64Gbaud to transmit a 400-gigabit wavelength using 16-ary quadrature amplitude modulation (16-QAM). Finisar would not detail the power consumption of the 64Gbaud ITTRA.
“The doubling of the speed to 64Gbaud will enable 400-gigabit DCO modules as well as 400ZR,” says Urricariet. Digital coherent optics (DCO) refers to coherent modules that integrate the optics and the coherent digital signal processor (DSP).
Samples and production of the 64Gbaud ITTRA are due in 2019.
Intel targets 5G fronthaul with a 100G CWDM4 module
- Intel announced at ECOC that it is sampling a 10km extended temperature range 100-gigabit CWDM4 optical module for 5G fronthaul.
- Intel also announced a pluggable module implementing the 400 Gigabit Ethernet (GbE) parallel-fibre DR4 standard.
- Intel, a backer of the CWDM8 MSA, says the 8-wavelength 400-gigabit module will not be in production before 2020.
Intel has expanded its portfolio of silicon photonics-based optical modules to address 5G mobile fronthaul and 400GbE.
At the European Conference on Optical Communication (ECOC) being held in Rome this week, Intel announced it is sampling a 100-gigabit CWDM4 module in a QSFP form factor for wireless fronthaul applications.
The CWDM4 module has an extended temperature range, -20°C to +85°C, and a 10km reach.
“The final samples are available now and [the product] will go into production in the first quarter of 2019,” says Robert Blum, director of strategic marketing and business development at Intel’s silicon photonics product division.
Intel also announced it will support the 400GBASE-DR4, the IEEE’s 400 GbE standard that uses four parallel fibres for transmit and four for the receive path, each carrying a 100-gigabit 4-level pulse amplitude modulation (PAM-4) signal.
5G wireless
5G wireless will be used for a variety of applications. Already this year the first 5G fixed and mobile wireless services are expected to be launched. 5G will also support massive Internet of Things (IoT) deployments as well as ultra-low latency applications.
The next-generation wireless standard uses new spectrum that includes millimetre wave spectrum in the 24GHz to 40GHz region. Such higher frequency bands will drive small-cell deployments.
5G’s use of new spectrum, small cells and advanced air interface techniques such as multiple input, multiple output (MIMO) antenna technology is what will enable its greater data speeds and vastly expanded capacity compared to the current LTE cellular standard.
Source: Intel.
The 5G wireless standard will also drive greater fibre deployment at the network edge. And it is here where mobile fronthaul plays a role, linking the remote radio heads at the antennas with the centralised baseband controllers at the central office (see diagram). Such fronthaul connections will use 25-gigabit and 100-gigabit links. “We have multiple customers that are excited about the 100-gigabit CWDM4 for these applications,” says Blum.
Intel expects demand for 25-gigabit and 100-gigabit transceivers for mobile fronthaul to begin in 2019.
Client-side modules
Intel entered the optical module market with its silicon photonics technology in 2016 with a 100-gigabit PSM4 module, quickly followed by a 100-gigabit CWDM4 module. Intel is now producing over one million PSM4 and CWDM4 modules a year.
Intel will provide customers with 400-gigabit DR4 samples in the final quarter of 2018 with production starting in the second half of 2019. This is when Intel says large-scale data centre operators will require 400 gigabits.
“The initial demand in hyperscale data centres for 400 gigabits will not be for duplex [fibre] but parallel fibre,” says Blum. “So we expect the DR4 to go to volume first and that is why we are announcing the product at ECOC.”
Intel says the advantages of its silicon photonics approach have already been demonstrated with its 100-gigabit PSM4 module. One is the optical performance resulting from the company’s heterogeneous integration technique combining indium-phosphide lasers with silicon photonics modulators on the one chip. Another advantage is scale using Intel’s 300mm wafer-scale manufacturing.
Intel expects demand for the 500m-reach DR4 module to go hand-in-hand with that for the 100-gigabit single-wavelength DR1, given that the DR4 will also be used in breakout mode to interface with four DR1 modules.
“We don’t see the DR1 standard competing or replacing 100-gigabit CWDM4,” says Blum. “The 100-gigabit CWDM4 is now mature and at a very attractive price point.”
Intel is a leading proponent of the CWDM8 MSA, an optical module design based on eight wavelengths, each a 50 gigabit-per-second (Gbps) non-return-to-zero (NRZ) signal. The CWDM8 MSA was created to fast-track 400 gigabit interfaces by avoiding the wait for 100-gigabit PAM-4 silicon.
When the CWDM8 MSA was launched in 2017, the initial schedule was to deploy the module by the end of this year. Intel also demonstrated the module working at the OFC show held in March.
Now, Intel expects production of the CWDM8 in 2020 and, by then, other four-wavelength solutions using 100-gigabit PAM-4 silicon such as the 400G-FR4 MSA will be available.
“We just have to see what the use case will be and what the timing will be for the CWDM8’s deployment,” says Blum.
NeoPhotonics ups the baud rate for line and client optics
- NeoPhotonics’ 64 gigabaud optical components are now being designed into optical transmission systems. The components enable up to 600 gigabits per wavelength and 1.2 terabits using a dual-wavelength transponder.
- The company’s high-end transponder that uses Ciena’s WaveLogic Ai coherent digital signal processor (DSP) is now shipping.
- NeoPhotonics is also showcasing its 53 gigabaud components for client-side pluggable optics capable of 100-gigabit wavelengths at the current European Conference on Optical Communication (ECOC) show being held in Rome.
NeoPhotonics says its family of 64 gigabaud (Gbaud) optical components are being incorporated within next-generation optical transmission platforms.
The 64Gbaud components include a micro intradyne coherent receiver (micro-ICR), a micro integrable tunable laser assembly (micro-ITLA) and a coherent driver modulator (CDM).
The micro-ICR and micro-ITLA follow Optical Internetworking Forum (OIF) specifications, while the CDM is currently being specified.
“Three major customers have selected to use all three [64Gbaud components] and several others are using a subset of those,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.
NeoPhotonics also unveiled and demonstrated two smaller 64Gbaud component designs at the OFC show held in March. The devices - a coherent optical sub-assembly (COSA) and a nano-ITLA - are aimed at 400-gigabit coherent pluggable modules as well as compact line-card designs.
“These [two compact components] continue to be developed as well,” says Lipscomb.
Baud rate and modulation
The current 100-gigabit coherent transmission uses polarisation-multiplexing, quadrature phase-shift keying (PM-QPSK) modulation operating at 32 gigabaud. The 100 gigabits-per-second (Gbps) data rate is achieved using four bits per symbol and a symbol rate of 32Gbaud.
Optical designers use two approaches to increase a wavelength’s data rate beyond 100Gbps. One is to move beyond QPSK to a higher-order modulation scheme such as 16-ary quadrature amplitude modulation (16-QAM) or 64-QAM; the other is to increase the baud rate.
“The baud rate is the on-off rate as opposed to the bit rate. That is because you are packing more bits in there than the on-off supports,” says Lipscomb. “But if you double the on-off rate, you double the number of bits.”
Doubling the baud rate from 32Gbaud to 64Gbaud doubles the data rate, while using 64-QAM trebles the data sent per symbol compared to 100-gigabit PM-QPSK. Combining the two - 64Gbaud and 64-QAM - yields 600 gigabits per wavelength.
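The arithmetic behind these rates can be checked directly: the raw line rate is the symbol rate times the bits per symbol times the two polarisations, and the nominal payload rates (100G, 600G) sit below the raw rate once forward-error-correction and framing overhead are subtracted. A quick sketch:

```python
import math

def raw_rate_gbps(baud_g, qam_order, polarisations=2):
    """Raw coherent line rate: symbol rate x bits per symbol x
    polarisations. Net payload (100G, 600G...) is lower once FEC and
    framing overhead are subtracted; exact overheads vary by design."""
    bits_per_symbol = math.log2(qam_order)
    return baud_g * bits_per_symbol * polarisations

# 32Gbaud PM-QPSK (QPSK = 4-QAM): 32 x 2 x 2 = 128 Gbps raw -> ~100G net
assert raw_rate_gbps(32, 4) == 128
# 64Gbaud PM-64QAM: 64 x 6 x 2 = 768 Gbps raw -> ~600G net
assert raw_rate_gbps(64, 64) == 768
```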
A higher baud rate also has a reach advantage, says Lipscomb, with its lower noise. “For longer distances, increasing the baud rate is better.”
But doubling the baud rate requires more capable DSPs that can process the signal at twice the rate. “And such DSPs now exist, operating at 64Gbaud and 64-QAM,” he says.
Coherent components
NeoPhotonics’ 64Gbaud optical components are suitable for line cards, fixed-packaged transponders, 1-rack-unit modular platforms used for data centre interconnect and the CFP2 pluggable form factor.
For data centre interconnect using 600-gigabits-per-wavelength transmissions, the distance achieved is up to 100km. For longer distances, the 64Gbaud components achieve metro-regional reaches at 400Gbps, and 2,000km for long-haul at 200Gbps.
But to fit within the most demanding pluggable form factors such as the OSFP and QSFP-DD, smaller componentry is required. This is what the coherent optical sub-assembly (COSA) and nano-ITLA are designed to address. The COSA combines the coherent driver modulator and the ICR in a single gold-box package that is no larger than the individual 64Gbaud micro-ICR and CDM packages.
Source: Gazettabyte
“There is a lot of interest in 400-gigabit applications for a CFP2, and in that form factor you can use the separate components,” says Lipscomb. “But for data centre interconnect, you want to increase the density as much as possible so going to the smaller OSFP or QSFP-DD requires another generation of [component] shrinking.”
There are two main approaches, says NeoPhotonics. The first, which it has taken with the nano-ITLA and COSA, separates the laser from the remaining circuitry so that two components are needed overall. A separate laser also has the benefit of lower noise. “But the ultimate approach would be to put all three in one gold box,” says Lipscomb.
Both approaches are accommodated as part of the OIF’s Integrated Coherent Transmitter-Receiver Optical Sub-Assembly (IC-TROSA) project.
Another challenge to achieving coherent designs such as the emerging 400ZR standard using the OSFP or QSFP-DD is accommodating the DSP with the optics while meeting the modules’ demanding power constraints. This requires a 7nm CMOS DSP and first samples are expected by year-end with limited production occurring towards the end of 2019. Volume production of coherent OSFP and QSFP-DD modules are expected in 2020 or even 2021, says Lipscomb.
100G client-side wavelengths
NeoPhotonics also used the OFC show last March to detail its 53Gbaud components for 100-gigabit single-wavelength and four-wavelength 400-gigabit client-side pluggable designs. Samples have now been delivered to customers and are part of demonstrations at ECOC this week.
The components include an electro-absorption modulated laser (EML) and driver for the transmitter, and photodetectors and trans-impedance amplifiers for the receiver path. The 53Gbaud EML can operate uncooled, is non-hermetic and is aimed for use with OSFP and QSFP-DD modules.
To achieve a 100-gigabit wavelength, 4-level pulse-amplitude modulation (PAM-4) is used and that requires an advanced DSP. Such PAM-4 DSPs will only be available early next year, says NeoPhotonics.
The first 400-gigabit modules using 100-gigabit wavelengths will gain momentum by the end of 2019 with volume production in 2020, says Lipscomb.
The various 8-wavelength implementations such as the IEEE-defined 2km 400GBASE-FR8 and 10km 400GBASE-LR8 are used when data centre operators must have 400-gigabit client interfaces.
100-gigabit single-wavelength implementations of 400 gigabits, in contrast, will be adopted when they become cheaper on a cost-per-bit basis, says Lipscomb: “It [100-gigabit single-wavelength-based modules] will be a general replacement rather than a breaking of bottlenecks.”
NeoPhotonics is also making available its DFB laser technology for silicon-photonics-based modules such as the 2km 400G-FR4, as well as the 100-gigabit single-wavelength DR1 and the parallel-fibre 400-gigabit DR4 standards.
WaveLogic AI transponder
NeoPhotonics has revealed it is shipping its first module using Ciena’s WaveLogic Ai coherent DSP. “We are shipping in modest volumes right now,” says Lipscomb.
The company is one of three module makers - the others being Lumentum and Oclaro - that signed an agreement with Ciena to use its flagship WaveLogic Ai DSP for their coherent module designs.
Lipscomb describes the market for the module as a niche given its high-end optical performance, what he describes as a fully capable, multi-haul transponder. “It has lots of features and a lot of expense too,” he says. “It is applied to specific cases where long distance is needed; it can go 12,000km if you need it to.”
The agreement with Ciena also includes the option to use future Ciena DSPs. “Nothing is announced yet and so we will have to see how that all plays out.”
Xilinx delivers 58G serdes and showcases a 112G test chip
In the first of two articles, electrical input-output developments are discussed, focussing on Xilinx’s serialiser-deserialiser (serdes) work for its programmable logic chips. In Part 2, the Imec nanoelectronics R&D centre’s latest silicon photonics work to enable optical I/O for chips is detailed.
Part 1: Electrical I/O
Processor and memory chips continue to scale exponentially. The electrical input-output (I/O) used to move data on and off such chips scales less well. Electrical interfaces are now transitioning from 28 gigabit-per-second (Gbps) to 56Gbps and work is already advanced to double the rate again to 112Gbps. But the question as to when electrical interfaces will reach their practical limit continues to be debated.
“Some two years ago, talking to the serdes community, they were seeing 100 gigabits as the first potential wall,” says Gilles Garcia, communications business lead at Xilinx. “In two years, a lot of work has happened and we can now demonstrate 112 gigabits [electrical interfaces].”
The challenge of moving to higher-speed serdes is that the reach shortens with each doubling of speed. The need to move greater amounts of data on- and off-chip also has power-consumption implications, especially with the extra circuitry needed when moving from non-return-to-zero signalling to the more complex 4-level pulse-amplitude modulation (PAM-4) signalling scheme.
PAM-4 is already used for 56-gigabit electrical I/O for such applications as 400 Gigabit Ethernet optical modules and leading edge 12.8-terabit switch chips. Having 112-gigabit serdes at least ensures one further generation of switch chips and optical modules but what comes after that is still to be determined. Even if more can be squeezed out of copper, the trace lengths will shorten and optics will continue to get closer to the chip.
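The gain from PAM-4 can be illustrated with its bit-to-level mapping: four amplitude levels carry two bits per symbol, so the same symbol rate carries twice the data of NRZ. The mapping below is an illustrative Gray-coded example with normalised levels, not any standard's exact values:

```python
# Illustrative PAM-4 mapping: two bits per symbol across four amplitude
# levels, Gray-coded so adjacent levels differ by a single bit. The
# level values are normalised for the example, not a real standard's.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Pack a bit sequence two-at-a-time into PAM-4 amplitude levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return [GRAY_PAM4[p] for p in pairs]

# Eight bits become four symbols: at the same symbol rate as NRZ,
# PAM-4 carries twice the data.
levels = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])
# levels == [-3, -1, 1, 3]
```

The cost, as the paragraph above notes, is extra circuitry: four levels leave less noise margin between decision thresholds than NRZ's two, which is part of why reach shortens and power rises at each speed doubling.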
58-gigabit serdes
Xilinx announced in March its first two Virtex Ultrascale+ FPGAs that will feature 58Gbps serdes. The company also demonstrated the technology at the OFC show. “No one else on the show floor had the same [58G serdes] capabilities in terms of bit error rate, noise floor, the demonstration across backplane technology, and transmitting and receiving data simultaneously,” says Garcia.
The first, the VU27P, features 32 58Gbps serdes as well as 32 33Gbps serdes, while the second, the VU29P, has 48 58Gbps serdes and 32 33Gbps ones. Both FPGAs will ship by the year-end, says Xilinx. Moreover, customers have already used Xilinx’s 58Gbps test chip to validate its operation over their systems’ backplanes in preparation for the arrival of the FPGAs.
The Ultrascale+ FPGAs are constructed using several dice attached to a single silicon interposer to form a 2.5D chip design, what Xilinx calls its stacked silicon interconnect technology. The 58Gbps serdes are integrated into each FPGA slice. “Consider each slice as a monolithic implementation,” says Garcia.
Source: Xilinx.
The two FPGAs with 58Gbps serdes are suited for such telecom applications as next-generation router and packet optical line cards that will use 200-gigabit and 400-gigabit client-side optical modules. The VU29P with its 48 58Gbps serdes will be able to support line cards with up to six QSFP-DD or OSFP 400 Gigabit Ethernet modules (see the diagram of an example line card).
112-gigabit test chip
Xilinx also showcased its 112Gbps serdes test chip at the OFC show in March. “What we showed was it operating in full duplex mode - transmitting and receiving - running on the same board as the 58-gigabit serdes,” says Garcia. “The point being the 112-gigabit demo worked on a printed circuit board not designed for a 112-gigabit serdes.”
Xilinx stresses that the 112-gigabit serdes will appear on its next generation of FPGA devices implemented using a 7nm CMOS process. “It [the FPGA portfolio] will coincide with when the market needs 112 gigabits,” he says.
One obvious market indicator will be the emergence of optical modules that use electrical lanes operating at 112 gigabits. “The holy grail of optical modules is to use four [electrical] lanes for 400 gigabits,” says Garcia. The IEEE is working on such a specification and the work is expected to be completed at the end of 2019. Optical module vendors will likely have first samples in 2020. Then there is the separate timeline associated with next-generation 25.6-terabit switch chips.
“You need to have the full ecosystem before customers really implement 112Gbps serdes,” says Garcia.
Optical module trends: A conversation with Finisar
Finisar demonstrated recently a raft of new products that address emerging optical module developments. These include:
- A compact coherent integrated tunable transmitter and receiver assembly
- 400GBASE-FR8 and -LR8 QSFP-DD pluggable modules and a QSFP-DD active optical cable
- A QSFP28 100-gigabit serial FR interface
- 50-gigabit SFP56 SR and LR modules
Rafik Ward, Finisar’s general manager of optical interconnects, explains the technologies and their uses.
Compact coherent
Finisar is sampling a compact integrated assembly that supports 100-gigabit and 200-gigabit coherent transmission.
The integrated tunable transmitter and receiver assembly (ITTRA), to give it its full title, includes the optics and electronics needed for an analogue coherent optics interface.
The 32-gigabaud ITTRA includes a tunable laser, optical amplifier, modulators, modulator drivers, coherent mixers, a photo-detector array and the accompanying trans-impedance amplifiers, all within a gold box. “An entire analogue coherent module in a footprint that is 70 percent smaller than the size of a CFP2 module,” says Ward. The ITTRA's power consumption is below 7.5W.
Finisar says the ITTRA is smaller than the equivalent integrated coherent transmitter-receiver optical sub-assembly (IC-TROSA) design being developed by the Optical Internetworking Forum (OIF).
“We potentially could take this device and enable it to work in that [IC-TROSA] footprint,” says Ward.
Using the ITTRA enables higher-density coherent line cards and frees up space within an optical module for the coherent digital signal processor (DSP) for a CFP2 Digital Coherent Optics (CFP2-DCO) design.
Ward says the CFP2 is a candidate for a 400-gigabit coherent pluggable module along with the QSFP-DD and OSFP form factors. “All have their pros and cons based on such fundamental things as the size of the form factor and power dissipation,” says Ward.
But given that the 7nm CMOS coherent DSPs required for 400 gigabits are not yet available, the 100 and 200-gigabit CFP2 remains the module of choice for coherent pluggable interfaces.
The demonstration of the ITTRA implementing a 200-gigabit link using 16-QAM at OFC 2018. Source: Finisar
400 gigabits
Finisar also demonstrated its first 400-gigabit QSFP-DD pluggable module products based on the IEEE standards: the 2km 400GBASE-FR8 and the 10km 400GBASE-LR8. The company also unveiled a QSFP-DD active optical cable to link equipment up to 70m apart.
The two QSFP-DD pluggable modules use eight 50-gigabit PAM-4 electrical signal inputs that are modulated onto eight lasers whose outputs are multiplexed and sent over a single fibre. Finisar chose to implement the IEEE standards as its first QSFP-DD products as they are low-power and lower risk 400-gigabit solutions.
The alternative 2km 400-gigabit design, developed by the 100 Lambda MSA, is the 400G-FR4 that uses four 100-gigabit optical lanes. “This has some risk elements to it such as the [PAM-4] DSP and making 100-gigabit serial lambdas work,” says Ward. “We think the -LR8 and -FR8 are complementary and could enable a fast time-to-market for people looking at these kinds of interfaces.”
The QSFP-DD active optical cable may have a reach of 70m but typical connections are 20m. Finisar uses its VCSEL technology to implement the 400-gigabit interface. At the OFC show in March, Finisar demonstrated the cable working with a Cisco high-density port count 1 rack-unit switch.
QSFP28 FR
Finisar also showed its 2km QSFP28 optical module with a single-wavelength 100-gigabit PAM-4 output. The QSFP28 FR takes four 25 gigabit-per-second electrical interfaces and passes them through a gearbox chip to form a 50-gigabaud PAM-4 signal that is used to modulate the laser.
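The gearbox arithmetic works out simply: four 25Gbps NRZ electrical lanes aggregate to 100Gbps, which one PAM-4 optical lane carries at two bits per symbol, hence roughly 50 gigabaud (real modules run nearer 53Gbaud once FEC overhead is included). A minimal check, with an invented function name:

```python
def pam4_baud_for(lanes, lane_gbps, bits_per_symbol=2):
    """Symbol rate needed for one PAM-4 optical lane to carry the
    aggregate of several NRZ electrical lanes. Overhead is ignored:
    real 100G-per-lambda modules run nearer 53Gbaud with FEC."""
    return lanes * lane_gbps / bits_per_symbol

# Four 25Gbps electrical lanes -> one ~50Gbaud PAM-4 optical lane
assert pam4_baud_for(4, 25) == 50.0
```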
The QSFP28 FR is expected to eventually replace the CWDM4 that uses four 25-gigabit wavelengths multiplexed onto a single fibre. “The end-game is to get a 100-gigabit serial module,” says Ward. “This module represents the first generation of that.”
Finisar is also planning a 500m QSFP28 DR. The QSFP28 DR and FR will work with the 500m IEEE 400GBASE-DR4 that has four outputs, each a fibre carrying a 100-gigabit PAM-4 signal, with the -DR4 outputs interfacing with up to four FR or DR modules.
“I sometimes get asked by customers what is the best way to get to higher-density 100 gigabit,” says Ward. “I point to the 400-gigabit DR4. Even though we call it a 400-gigabit part, it is also a 4x100-gigabit DR solution.”
Ward says that the 500m reach of the DR is sufficient for the vast majority of links in the data centre.
SFP56 SR and LR
Finisar has also demonstrated two SFP56 modules: a short reach (SR) version that has a reach of 100m over OM4 multi-mode fibre and the 10km LR single-mode interface. The SR is VCSEL-based while the LR uses a directly-modulated distributed feedback laser.
The SFP is deployed widely at speeds up to and including 10 gigabits while the 25-gigabit SFP shipments are starting to ramp. The SFP56 is the next-generation SFP module with a 50-gigabit electrical input and a 50-gigabit PAM-4 optical output.
The SFP56 will be used for several applications, says Finisar. These include linking servers to switches, connecting switches in enterprise applications, and 5G wireless applications.
Finisar says its 50 and 100 gigabit-per-lane products will likely be released throughout 2019, in line with the industry. “The 8-channel devices will likely come out at least a few quarters before the 4-channel devices,” says Ward.
400ZR will signal coherent’s entry into the datacom world
- 400ZR will have a reach of 80km and a target power consumption of 15W
- The coherent interface will be available as a pluggable module that will link data centre switches across sites
- Huawei expects first modules to be available in the first half of 2020
- At OFC, Huawei announced its own 250km 400-gigabit single-wavelength coherent solution that is already being shipped to customers
Coherent optics will finally cross over into datacom with the advent of the 400ZR interface. So claims Maxim Kuschnerov, senior R&D manager at Huawei.
Maxim Kuschnerov
400ZR is an interoperable 400-gigabit single-wavelength coherent interface being developed by the Optical Internetworking Forum (OIF).
The 400ZR will be available as a pluggable module and as on-board optics using the COBO specification. The IEEE is also considering a proposal to adopt the 400ZR specification, initially for the data-centre interconnect market. “Once coherent moves from the OIF to the IEEE, its impact in the marketplace will be multiplied,” says Kuschnerov.
But developing a 400ZR pluggable represents a significant challenge for the industry. “Such interoperable coherent 16-QAM modules won’t happen easily,” says Kuschnerov. “Just look at the efforts of the industry to have PAM-4 interoperability, it is a tremendous step up from on-off keying.”
Despite the challenges, 400ZR products are expected by the first half of 2020.
400ZR use cases
The web-scale players want to use the 400ZR coherent interface to link multiple smaller buildings, up to 80km apart, across a metropolitan area to create one large virtual data centre. This is a more practical solution than trying to find a large enough location that is affordable and can be fed sufficient power.
Once coherent moves from the OIF to the IEEE, its impact in the marketplace will be multiplied
Given how servers, switches and pluggables in the data centre are interoperable, the attraction of the 400ZR is obvious, says Kuschnerov: “It would be a major bottleneck if you didn't have [coherent interface] interoperability at this scale.”
Moreover, the advent of the 400ZR interface will signal the start of coherent in datacom. Higher-capacity interfaces are doubling every two years or so due to the webscale players, says Kuschnerov, and with the advent of 800-gigabit and 1.6-terabit interfaces, coherent will be used for ever-shorter distances, from 80km to 40km and even 10km.
At 10km, volumes will be an order of magnitude greater than similar-reach dense wavelength-division multiplexing (DWDM) interfaces for telecom. “Datacom is a totally different experience, and it won’t work if you don’t have a stable supply base,” he says. “We see the ZR as the first step combining coherent technology and the datacom mindset.”
Data centre players will plug 400ZR modules into their switch-router platforms, avoiding the need to interface the switch-router to a modular, scalable DWDM platform used to link data centres.
The 400ZR will also find use in telecom. One use case is backhauling residential traffic over a cable operator’s single spans that tend to be lossy. Here, ZR can be used at 200 gigabits - using 64 gigabaud signalling and QPSK modulation - to extend the reach over the high-loss spans. Similarly, the 400ZR can also be used for 5G mobile backhaul, aggregating multiple 25-gigabit streams.
Another application is for enterprise connectivity over distances greater than 10km. Here, the 400ZR will compete with direct-detect 40km ER4 interfaces.
Having several use cases, not just data-centre interconnect, is vital for the success of the 400ZR. “Extending ZR to access and metro-regional provides the required diversity needed to have more confidence in the business case,” says Kuschnerov.
The 400ZR will support 400 gigabits over a single wavelength with a reach of 80km, while the target power consumption is 15W.
The industry is still undecided as to which pluggable form factor to use for 400ZR. The two candidates are the QSFP-DD and the OSFP. The QSFP-DD provides backward compatibility with the QSFP+ and QSFP28, while the OSFP is a fresh, larger design. The larger size simplifies power management at the expense of module density: 32 OSFPs fit on a 1-rack-unit faceplate compared with 36 QSFP-DD modules.
The choice of form factor reflects a broader industry debate concerning 400-gigabit interfaces. But 400ZR is a more challenging design than 400-gigabit client-side interfaces: the optics and the coherent DSP must be crammed within either module while meeting its power envelope.
The OSFP is specified to support 15W while simulation results published at OFC 2018 suggest that the QSFP-DD will meet the 15W target. Meanwhile, the 15W power consumption will not be an issue for COBO on-board optics, given that the module sits on the line card and differs from pluggables in not being confined within a cage.
Kuschnerov says that even if the OSFP proves to be the only one of the two pluggables to support 400ZR, the interface will still be a success, given that a pluggable module will exist that delivers the required faceplate density.
400G coherent
Huawei announced at OFC 2018 its own single-wavelength 400-gigabit coherent technology for use with its OptiX OSN 9800 optical and packet OTN platform, and it is already being supplied to customers.
The 400-gigabit design supports a variety of baud rates and modulation schemes. For a fixed-grid network, 34 gigabaud signalling enables 100 gigabits using QPSK, and 200 gigabits using 16-QAM, while at 45 gigabaud 200 gigabits using 8-QAM is possible. For flexible-grid networks, 64 gigabaud is used for 200-gigabit transmission using QPSK and 400 gigabits using 16-QAM.
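As a rough sanity check on these combinations, the raw carrier capacity is the symbol rate times bits per symbol times two polarisations (dual-polarisation transmission is assumed here, as is standard for coherent links); the quoted 100/200/400-gigabit figures are net rates, with the difference absorbed by FEC and framing overhead. A sketch:

```python
# Raw capacity of a dual-polarisation coherent carrier:
#   bit rate = symbol rate x bits/symbol x 2 polarisations
# (illustrative sketch; net payload rates are lower because FEC
#  and framing overhead must be subtracted)

BITS_PER_SYMBOL = {"QPSK": 2, "8-QAM": 3, "16-QAM": 4}

def raw_rate_gbps(baud_gbd: float, modulation: str, polarisations: int = 2) -> float:
    """Raw (pre-FEC) bit rate of a coherent carrier in Gb/s."""
    return baud_gbd * BITS_PER_SYMBOL[modulation] * polarisations

# The baud/modulation options in the text, with their net payload rates
for baud, mod, net_gbps in [(34, "QPSK", 100), (34, "16-QAM", 200),
                            (45, "8-QAM", 200), (64, "QPSK", 200),
                            (64, "16-QAM", 400)]:
    raw = raw_rate_gbps(baud, mod)
    assert raw > net_gbps  # headroom for FEC and framing overhead
    print(f"{baud} Gbaud {mod}: {raw:.0f} Gb/s raw -> {net_gbps} Gb/s net")
```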
Huawei uses an algorithm called channel-matched shaping to improve optical performance in terms of data transmission and reach. This algorithm includes such techniques as pre-emphasis, faster-than-Nyquist, and Nyquist shaping. According to Kuschnerov, the goal is to squeeze as much capacity out of a network’s physical channel so that advanced coding techniques such as probabilistic constellation shaping can be used to the full. For Huawei’s first 400-gigabit wavelength solution, constellation shaping is not used but this will be added in its upcoming coherent designs.
Huawei has already demonstrated the transmission of 400 gigabits over 250km of fibre. “Current-generation 400G-per-lambda does not enable long-haul or regional transmission, so the focus is on shorter-reach metro or data-centre-interconnect environments,” says Kuschnerov.
When longer reaches are needed, Huawei can offer two line cards, each supporting 200 gigabits, or a single line card hosting two 200-gigabit modules. The 200-gigabits-per-wavelength is achieved using 64 gigabaud and QPSK modulation, resulting in a 2,500km reach.
Until now, such long-haul distances have been served using 100-gigabit wavelengths. Now, says Kuschnerov, 200 gigabits at 64 gigabaud is becoming the norm in many newly built networks, while 34-gigabaud 200-gigabit transmission is favoured in existing networks based on a 50GHz grid.
Ciena goes stackable with 8180 'white box' and 6500 RLS
Ciena has unveiled two products - the 8180 coherent networking platform and the 6500 reconfigurable line system - that target cable and cellular operators that are deploying fibre deep in their networks, closer to subscribers.
The 6500 line system is also aimed at the data centre interconnect market given how the webscale players are experiencing a near-doubling of traffic each year.
Source: Ciena
The cable industry is moving to a distributed access architecture (DAA) that brings fibre closer to the network’s edge and splits part of the functionality of the cable modem termination system (CMTS) - the remote PHY - closer to end users. The cable operators are deploying fibre to boost the data rates they can offer homes and businesses.
Both Ciena’s 8180 modular switch and the 6500 reconfigurable line system are suited to the cable network. The 8180 is used to link the master headend with primary and secondary hub sites where aggregated traffic is collected from the digital nodes (see network diagram). The 8180 platforms will use the modular 6500 line system to carry the dense wavelength-division multiplexed (DWDM) traffic.
“The [cable] folks that are modernising the access network are not used to managing optical networking,” says Helen Xenos, senior director, portfolio marketing at Ciena (pictured). “They are looking for simple platforms, aggregating all the connections that are coming in from the access.”
The 8180 can play a similar role for wireless operators, using DWDM to carry aggregated traffic for 4G and 5G networks.
Ciena says the 6500 optical line system will also serve the data centre interconnect market, complementing the WaveServer Ai, Ciena’s second-generation 1RU modular platform that has 2.4 terabits of client-side interfaces and 2.4 terabits of coherent capacity.
With the 8180, you are only using the capacity on the fibre that you have traffic for
“They [the webscale players] are looking for as many efficiencies as they can get from the platforms they deploy,” says Xenos. “The 6500 reconfigurable line system gives them the flexibility they need - a colourless, directionless, contentionless [reconfigurable optical add-drop multiplexer] and a flexible grid that extends to the L-band.”
A research note from analyst house, Jefferies, published after the recent OFC show where Ciena announced the platforms, noted that in many cable networks, 6-strand fibre is used: two fibre pairs allocated for business services and one for residential. Adding the L-band to the existing C-band effectively doubles the capacity of each fibre pair, it noted.
The 8180
Ciena’s 8180 is a modular packet switch that includes coherent optics. The 8180 is similar in concept to the Voyager and Cassini white boxes developed by the Telecom Infra Project. However, the 8180 is a two-rack-unit (2RU) 6.4-terabit switch compared to the 1RU, 2-terabit Voyager and the 1.5RU 3.2-terabit Cassini. The 8180 also uses Ciena’s own 400-gigabit coherent DSP, the WaveLogic Ai, rather than merchant coherent DSP chips.
The platform comprises 32 QSFP+/QSFP28 client-side ports, a 6.4-terabit switch chip and four replaceable modules or ‘sleds’, each capable of accommodating 800 gigabits of capacity. The options include an initial 400-gigabit line-side coherent interface (a sled with two coherent WaveLogic Ai DSPs will follow), an 8x100-gigabit QSFP28 sled, a 2x400-gigabit sled, and the option of an 800-gigabit module once such modules become available.
Source: Ciena
Using all four sleds as client-side options, the 8180 becomes a 6.4-terabit Ethernet switch. Using only coherent sleds instead, the packet-optical platform has a 1.6-terabit line-side capacity. And because a powerful switch chip is integrated, the input ports can be over-subscribed. “With the 8180, you are only using the capacity on the fibre that you have traffic for,” says Xenos.
6500 line system
The 6500 reconfigurable line system is also a modular design. Aimed at the cable, wireless, and data centre interconnect markets, it uses only a subset of the features of Ciena’s existing optical line systems.
“The 6500 software has a lot of capabilities that the content providers are not using,” says Xenos. “They just want to use it as a photonic layer.”
There are three 6500 reconfigurable line system platform sizes: 1RU, 2RU and 4RU. The chassis can be stacked and managed as one unit. Card options that fit within the chassis include amplifiers and reconfigurable optical add-drop multiplexers (ROADMs).
The amplifier options include a dual-line erbium-doped fibre amplifier card with an integrated bi-directional optical time-domain reflectometer (OTDR) used to characterise the fibre. There is also a half-width Raman amplifier card. The line system will support the C and L bands, as mentioned.
The reconfigurable line system also has ROADM cards: a 1x12 wavelength-selective switch (WSS) with integrated amplifier, a colourless 16-channel add-drop card that supports channels of any size (flexible grid), and a full-width 1x32 WSS card. “The 1x32 would be used for colourless, directionless and contentionless [ROADM] configurations,” says Xenos.
The 6500 reconfigurable line system also supports open application programming interfaces (APIs) for telemetry, with a user able to program the platform to define the data streamed. “The platform can also be provisioned via REST APIs; something a content provider will do,” she says.
Ciena is a member of the OpenROADM multi-source agreement and was involved in last year’s AT&T OpenROADM trial with its 6500 Converged Packet Optical Transport (POTS) platform.
Will the 6500 reconfigurable line system be OpenROADM-compliant?
“This card [and chassis form factor] could be used for OpenROADM if AT&T preferred this platform to the other [6500 Converged POTS] one,” says Xenos. “You also have to design the hardware to meet the specifications for OpenROADM.”
Ciena expects both platforms to be available by year-end. The 6500 reconfigurable line system will be in customer trials at the end of this quarter while the 8180 will be trialled by the end of the third quarter.
COBO issues industry’s first on-board optics specification
- COBO modules support 400-gigabit and 800-gigabit data rates
- Two electrical interfaces have been specified: 8 and 16 lanes of 50-gigabit PAM-4 signals.
- There are three module classes to support designs ranging from client-side multi-mode to line-side coherent optics.
- COBO on-board optics will be able to support 800 gigabits and 1.6 terabits once 100-gigabit PAM-4 electrical signals are specified.
Source: COBO
Interoperable on-board optics has moved a step closer with the publication of the industry’s first specification by the Consortium for On-Board Optics (COBO).
COBO has specified modules capable of 400-gigabit and 800-gigabit rates. The designs will also support 800-gigabit and 1.6-terabit rates with the advent of 100-gigabit single-lane electrical signals.
“Four hundred gigabits can be solved using pluggable optics,” says Brad Booth, chair of COBO and principal network architect for Microsoft’s Azure Infrastructure. “But if I have to solve 1.6 terabits in a module, there is nothing out there but COBO, and we are ready.”
Origins
COBO was established three years ago to create a common specification for optics that reside on the motherboard. On-board optics is not a new technology but until now designs have been proprietary.
I have to solve 1.6 terabits in a module, there is nothing out there but COBO, and we are ready
Brad Booth
Such optics are needed to help address platform design challenges caused by continual traffic growth.
Getting data on and off switch chips that are doubling in capacity every two to three years is one such challenge. The input-output (I/O) circuitry of such chips consumes significant power and takes up valuable chip area.
There are also systems challenges such as routing the high-speed signals from the chip to the pluggable optics on the platform’s faceplate. The pluggable modules also occupy much of the faceplate area and that impedes the air flow needed to cool the platform.
Using optics on the motherboard next to the chip instead of pluggables reduces the power consumed by shortening the electrical traces linking the two. Fibre rather than electrical signals then carries the data to the faceplate, benefiting signal integrity and freeing faceplate area for the cooling.
Specification 1.0
COBO has specified two high-speed electrical interfaces. One is 8-lanes wide, each lane being a 50-gigabit 4-level pulse-amplitude modulation (PAM-4) signal. The interface is based on the IEEE’s 400GAUI-8, the eight-lane electrical specification developed for 400 Gigabit Ethernet.
The second electrical interface is a 16-lane version for an 800-gigabit module. Using a 16-lane design reduces packaging costs by creating one 800-gigabit module instead of two separate 400-gigabit ones. Heat management is also simpler with one module.
There are also systems benefits to using an 800-gigabit module. “As we go to higher and higher switch silicon bandwidths, I don’t have to populate as many modules on the motherboard,” says Booth.
The latest switch chips announced by several companies have 12.8 terabits of capacity, which will require 32 400-gigabit on-board modules but only 16 800-gigabit ones. Fewer modules simplify the board’s wiring and the fibre cabling to the faceplate.
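The module counts follow directly from the 50-gigabit lane rate of COBO's electrical interface; a minimal sketch of the arithmetic (illustrative only, ignoring any FEC or encoding overhead):

```python
# Module counts for a 12.8 Tb/s switch chip, derived from COBO's
# 50-gigabit PAM-4 electrical lanes (illustrative sketch only).

LANE_RATE_GBPS = 50  # one 50-gigabit PAM-4 electrical lane

def module_rate_gbps(lanes: int) -> int:
    """Aggregate rate of an on-board module with the given lane count."""
    return lanes * LANE_RATE_GBPS

def modules_needed(switch_capacity_gbps: int, lanes: int) -> int:
    """On-board modules required to serve a switch chip's full capacity."""
    return switch_capacity_gbps // module_rate_gbps(lanes)

SWITCH_CAPACITY_GBPS = 12_800  # 12.8 terabits

assert module_rate_gbps(8) == 400 and module_rate_gbps(16) == 800
assert modules_needed(SWITCH_CAPACITY_GBPS, 8) == 32   # 400G modules
assert modules_needed(SWITCH_CAPACITY_GBPS, 16) == 16  # 800G modules
```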
Designers have a choice of optical formats using the wider-lane module, such as 8x100 gigabits, 2x400 gigabits, and even 800 gigabits.
COBO has tested its design and shown it can support a 100-gigabit electrical interface. The design uses the same connector as the OSFP pluggable module.
“In essence, with an 8-lane width, we could support an 800-gigabit module if that is what the IEEE decides to do next,” says Booth. “We could also support 1.6 terabits if that is the next speed hop.”
It is very hard to move people from their standard operating model to something else until there is an extreme pain point
Form factor and module classes
The approach chosen by COBO differs from proprietary on-board optics designs in that the optics is not mounted directly onto the board. Instead, the COBO module resembles a pluggable in that once placed onto the board, it slides horizontally to connect to the electrical interface (see diagram, top).
A second connector in the middle of the COBO module houses the power, ground and control signals. Separating these signals from the high-speed interface reduces the noise on the data signals. Together, the two connectors act as pillars supporting the module.
The robust design allows the modules to be mounted at the factory such that the platform is ready for operation once delivered at a site, says Booth.
COBO has defined three module classes that differ in length. The shortest Class A modules are used for 400-gigabit multi-mode interfaces while Class B suits higher-power IEEE interfaces such as 400GBASE-DR4 and the 100G Lambda MSA’s 400G-FR4.
The largest Class C module is for the most demanding and power-hungry designs such as the coherent 400ZR standard. “Class C will be able to handle all the necessary components - the optics and the DSP - associated with that [coherent design],” says Booth.
The advantage of the on-board optics is that it is not confined to a cage as pluggables are. “With an on-board optical module, you can control the heat dissipation by the height of the heat sink,” says Booth. “The modules sit flatter to the board and we can put larger heat sinks onto these devices.”
We realised we needed something as a stepping stone [between pluggables and co-packaged optics] and that is where COBO sits
Next steps
COBO will develop compliance-testing boards so that companies developing COBO modules can verify their designs. Booth hopes that by the ECOC 2018 show to be held in September, companies will be able to demonstrate COBO-based switches and even modules.
COBO will also embrace 100-gigabit electrical work being undertaken by the OIF and the IEEE to determine what needs to be done to support 8-lane and 16-lane designs. For example, whether the forward-error correction needs to be modified or whether existing codes are sufficient.
Booth admits that the industry remains rooted to using pluggables, while the move to co-packaged optics - where the optics and the chip are combined in the same module - remains a significant hurdle, both in terms of packaging technology and the need for vendors to change their business models to build such designs.
“It is very hard to move people from their standard operating model to something else until there is an extreme pain point,” says Booth.
Setting up COBO followed the realisation that a point would be reached when faceplate pluggables would no longer meet demands while co-packaged technology would not be ready.
“We realised we needed something as a stepping stone and that is where COBO sits,” says Booth.
Oclaro makes available its EMLs and backs 400G-FR4
Lumentum’s plan to acquire Oclaro for $1.8 billion may have dominated the news at last month’s OFC show held in San Diego, but it was business as usual for Oclaro with its product and strategy announcements.
Adam Carter, chief commercial officer (pictured), positions Oclaro’s announcements in terms of general industry trends.
“On the line side, everywhere there are 100-gigabit and 200-gigabit wavelengths, you will see that transition to 400 gigabit and 600 gigabit,” he says. “And on the client side, you have 100 gigabit going to 400 gigabit.”
400G-FR4
Oclaro announced it will offer a QSFP-DD module implementing the 400G-FR4, the four-wavelength 400-gigabit 2km client-side interface. The 400G-FR4 is a design developed by the 100G Lambda MSA.
“This [QSFP-DD FR4] will enable our customers, particularly network equipment manufacturers, to drive 400 gigabit up to 36 ports in a one-rack-unit [platform],” says Carter.
Oclaro has had the required optical components - its 53-gigabaud lasers and high-end photo-detectors - for a while. What Oclaro has lacked is the accompanying 4-level pulse amplitude modulation (PAM-4) gearbox chip to take the 8x50 gigabits-per-second electrical signals and encode them into four 50-gigabaud ones.
The chips have now arrived for testing and if the silicon meets the specs, Oclaro will deliver the first modules to customers later this year.
Oclaro chose the QSFP-DD first as it expects the form factor to sell in higher volumes but it will offer the 400G-FR4 in the OSFP module.
Certain customers prefer the OSFP, in part because of its greater power-handling capabilities. “Some people believe that the OSFP’s power envelope gives you a little bit more freedom,” he says. “There is still a debate in the industry whether the QSFP-DD will be able to do long-reach [80km data centre interconnect] types of products.”
Oclaro says its transmit and receive optical sub-assemblies (TOSAs and ROSAs) are designed to fit within the more demanding QSFP-DD such that they will also suit the OSFP.
If people want to buy the [EML] chips and do next-generation designs, they can come to Oclaro
EMLs for sale
Oclaro has decided to sell its electro-absorption modulated lasers (EMLs), capable of 25, 50 and 100-gigabit speeds.
“If people want to buy the chips and do next-generation designs, they can come to Oclaro for some top-end single-mode chipsets that we have developed for our own use,” says Carter.
Oclaro’s EMLs are used for client-side interfaces based on both the coarse wavelength-division multiplexing (CWDM) grid and the tighter LAN-WDM wavelength grid, and are available in uncooled and cooled packages.
Until now the company only sold its 25-gigabit directly modulated lasers (DMLs). “We have been selling [EMLs] strategically to one very large customer who consigns them to a manufacturer,” says Carter.
The EMLs are being made generally available due to demand. “There are not many manufacturers of this chip in the world,” says Carter, adding that the decision also reflects an evolving climate for business models.
5G and cable
Oclaro claims it is selling the industry’s first 10-gigabit tunable SFP+ operating over the industrial temperature (I-temp) range of -40 to 85°C. There are two tunable variants, with 40km and 80km reaches, both supporting up to 96 dense WDM (DWDM) channels on a fibre. The module was first announced at OFC 2017.
Oclaro says cable networks and 5G wireless will require the I-temp tunable SFP+.
The cable industry’s adoption of a distributed access architecture (DAA) brings fibre closer to the network’s edge and splits part of the functionality of the cable modem termination system (CMTS) - the remote PHY - closer to the residential units. This helps cable operators cope with continual traffic growth and their facilities becoming increasingly congested with equipment. Comcast, for example, says it is seeing an annual growth in downstream traffic (to the home) of 40-50 percent.
The use of tunable SFP+ modules boosts the capacity that can be sent over a fibre, says Carter. But the tunable SFP+ modules are now located at the remote PHY, an uncontrolled temperature environment.
For 5G, the 10-gigabit tunable modules will carry antenna traffic to centralised base stations. Carter points out that the 40km and 80km reaches of the tunable SFP+ will not be needed in all geographies, but in China, for example, the goal is to limit the number of central offices such that the distances are greater.
Oclaro also offers an I-temp fixed-wavelength 25-gigabit SFP28 LR module. “It is lower cost than the tunable SFP+ so if you need 10km [for mobile fronthaul], you would tend to go for this transceiver,” says Carter.
Also unveiled is an optical chip combining a 1310nm distributed feedback (DFB) laser and a Mach-Zehnder modulator. “The 1310nm device will be used in certain applications inside the data centre,” says Carter. “There are customers that are looking at using PAM-4 interfaces for short-reach connections between leaf and spine switches.” The device will support 50-gigabit and 100-gigabit PAM-4 wavelengths.
Line-side optics
Oclaro announced it is extending its integrated coherent transmitter and integrated coherent receiver to operate in the L-band. The coherent optical devices support a symbol rate of up to 64 gigabaud to enable 400-gigabit and 600-gigabit wavelengths.
Telcos want to use the L-band alongside the C-band to effectively double the capacity of a fibre.
Also announced by Oclaro at OFC was a high-bandwidth indium phosphide-based Mach-Zehnder modulator with a co-packaged modulator driver.
Oclaro was part of the main news story at last year’s OFC when Ciena announced it would share its 400-gigabit WaveLogic Ai coherent digital signal processor (DSP) with three module makers: Oclaro, Lumentum and NeoPhotonics. Yet there was no Oclaro announcement at this year’s OFC regarding the transponder.
Carter says the WaveLogic Ai transponder is sampling and that it has been demonstrated to customers and used in several field trials: “It is still early right now with regard to volume deployments so there is nothing to announce yet.”
Will white boxes predominate in telecom networks?
Will future operator networks be built using software, servers and white boxes or will traditional systems vendors with years of network integration and differentiation expertise continue to be needed?
AT&T’s announcement that it will deploy 60,000 white boxes as part of its rollout of 5G in the U.S. is a clear move to break away from the operator pack.
The service provider has long championed network transformation, moving from proprietary hardware and software to a software-controlled network based on virtual network functions running on servers and software-defined networking (SDN) for the control switches and routers.
Glenn Wellbrock
Now, AT&T is going a stage further by embracing open hardware platforms - white boxes - to replace traditional telecom hardware used for data-path tasks that are beyond the capabilities of software on servers.
For the 5G deployment, AT&T will, over several years, replace traditional routers at cell and tower sites with white boxes, built using open standards and merchant silicon.
“White box represents a radical realignment of the traditional service provider model,” says Andre Fuetsch, chief technology officer and president, AT&T Labs. “We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors.”
But other operators have reservations about white boxes. “We are all for open source and open [platforms],” says Glenn Wellbrock, director, optical transport network - architecture, design and planning at Verizon. “But it can’t just be open, it has to be open and standardised.”
Wellbrock also highlights the challenge of managing networks built using white boxes from multiple vendors. Who will be responsible for their integration or if a fault occurs? These are concerns SK Telecom has expressed regarding the virtualisation of the radio access network (RAN), as reported by Light Reading.
“These are the things we need to resolve in order to make this valuable to the industry,” says Wellbrock. “And if we don’t, why are we spending so much time and effort on this?”
Gilles Garcia, communications business lead director at programmable device company, Xilinx, says the systems vendors and operators he talks to still seek functionalities that today’s white boxes cannot deliver. “That’s because there are no off-the-shelf chips doing it all,” says Garcia.
We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors
White boxes
AT&T defines a white box as an open hardware platform that is not made by an original equipment manufacturer (OEM).
A white box is a sparse design, built using commercial off-the-shelf hardware and merchant silicon, typically a fast router or switch chip, on which runs an operating system. The platform usually takes the form of a pizza box which can be stacked for scaling, while application programming interfaces (APIs) are used for software to control and manage the platform.
As AT&T’s Fuetsch explains, white boxes deliver several advantages. By using open hardware specifications for white boxes, they can be made by a wider community of manufacturers, shortening hardware design cycles. And using open-source software to run on such platforms ensures rapid software upgrades.
Disaggregation can also be part of an open hardware design. Here, different elements are combined to build the system. The elements may come from a single vendor such that the platform allows the operator to mix and match the functions needed. But the full potential of disaggregation comes from an open system that can be built using elements from different vendors. This promises cost reductions but requires integration, and operators do not want the responsibility and cost of both integrating the elements to build an open system and integrating the many systems from various vendors.
Meanwhile, in AT&T’s case, it plans to orchestrate its white boxes using the Open Networking Automation Platform (ONAP) - the ‘operating system’ for its entire network made up of millions of lines of code.
ONAP is an open software initiative, managed by The Linux Foundation, that was created by merging a large portion of AT&T’s original ECOMP software developed to power its software-defined network and the OPEN-Orchestrator (OPEN-O) project, set up by several companies including China Mobile and China Telecom.
AT&T has also launched several initiatives to spur white-box adoption. One is an open operating system for white boxes, known as the disaggregated network operating system (dNOS). This too will be passed to The Linux Foundation.
The operator is also a key driver of the open reconfigurable optical add/drop multiplexer multi-source agreement, the OpenROADM MSA. Recently, the operator announced it will roll out OpenROADM hardware across its network. AT&T has also unveiled the Akraino open source project, again under the auspices of The Linux Foundation, to develop edge computing-based infrastructure.
At the recent OFC show, AT&T said it would limit its white box deployments in 2018 as issues are still to be resolved but that come 2019, white boxes will form its main platform deployments.
Xilinx highlights how certain data-intensive tasks - in-line security performed on a per-flow basis, routing exceptions, telemetry data and deep packet inspection - are beyond the capabilities of white boxes. “White boxes will have their place in the network but there will be a requirement, somewhere else in the network, for something else to do what the white boxes are missing,” says Garcia.
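What makes such tasks data-intensive is that they are stateful: the device must keep and update a record for every active flow, not just forward packets. A minimal sketch in Python of a per-flow state table of the kind that in-line telemetry implies - the flow key, counters and method names here are hypothetical illustrations, not any vendor's design:

```python
from collections import defaultdict

# Hypothetical flow key: (src_ip, dst_ip, src_port, dst_port, protocol).
FlowKey = tuple

class FlowTable:
    """Tracks per-flow packet and byte counters - state that a simple
    stateless forwarding pipeline does not keep."""
    def __init__(self):
        self.flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def observe(self, key: FlowKey, length: int) -> None:
        # Called once per packet; every packet touches flow state.
        entry = self.flows[key]
        entry["packets"] += 1
        entry["bytes"] += length

    def telemetry(self) -> dict:
        # Export a snapshot, e.g. for a collector polling each interval.
        return {k: dict(v) for k, v in self.flows.items()}

table = FlowTable()
key = ("10.0.0.1", "10.0.0.2", 4321, 443, "tcp")
table.observe(key, 1500)
table.observe(key, 600)
snapshot = table.telemetry()
```

At line rate this bookkeeping must happen for every packet on every flow, which is why it is offloaded to devices such as FPGAs rather than run on a lean white-box switch.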
Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction
AT&T also said at OFC that it expects considerable capital expenditure savings - as much as a halving - using white boxes, and talked about adopting quarterly reverse auctions in future to buy its equipment.
Niall Robinson, vice president, global business development at ADVA Optical Networking, questions where such cost savings will come from: “Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction.” He also says that there are markets that already use reverse auctioning, but typically it is for items such as components. “For a carrier the size of AT&T to be talking about that, that is a big shift,” says Robinson.
Layer optimisation
Verizon’s Wellbrock first aired reservations about open hardware at Lightwave’s Open Optical Conference last November.
In his talk, Wellbrock detailed the complexity of Verizon’s wide area network (WAN) that encompasses several network layers. At layer-0 are the optical line systems - terminal and transmission equipment - onto which the various layers are added: layer-1 Optical Transport Network (OTN), layer-2 Ethernet and layer-2.5 Multiprotocol Label Switching (MPLS). According to Verizon, the WAN takes years to design and a decade to fully exploit the fibre.
“You get a significant saving - total cost of ownership - from combining the layers,” says Wellbrock. “By collapsing those functions into one platform, there is a very real saving.” But there is a tradeoff: encapsulating the various layers’ functions into one box makes it more complex.
“The way to get round that complexity is going to a Cisco, a Ciena, or a Fujitsu and saying: ‘Please help us with this problem’,” says Wellbrock. “We will buy all these individual piece-parts from you but you have got to help us build this very complex, dynamic network and make it work for a decade.”
Next-generation metro
Verizon has over 4,000 nodes in its network, each one deploying at least one ROADM - a Coriant 7100 packet optical transport system or a Fujitsu Flashwave 9500. Certain nodes employ more than one ROADM; once one is filled, a second is added.
“Verizon was the first to take advantage of ROADMs and we have grown that network to a very large scale,” says Wellbrock.
The operator is now upgrading the nodes using more sophisticated ROADMs as part of its next-generation metro. Each node will now need only one ROADM that can be scaled. In 2017, Verizon started the ramp, upgrading several hundred ROADM nodes; this year it says it will hit its stride before completing the upgrades in 2019.
“We need a lot of automation and software control to hide the complexity of what we have built,” says Wellbrock. This is part of Verizon’s own network transformation project. Instead of engineers and operational groups in charge of particular network layers and overseeing pockets of the network - each pocket being a ‘domain’ - Verizon is moving to a system where all the network layers, including ROADMs, are managed and orchestrated using a single system.
The resulting software-defined network comprises a ‘domain controller’ that handles the lower layers within a domain and an automation system that co-ordinates between domains.
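The hierarchy described above - per-domain controllers handling the lower layers, with an automation layer coordinating across domains - can be sketched in a few lines. All class and method names here are hypothetical illustrations of the control model, not Verizon's actual software:

```python
class DomainController:
    """Manages the lower layers (e.g. ROADMs, OTN) within one domain."""
    def __init__(self, domain: str, nodes: list):
        self.domain = domain
        self.nodes = nodes

    def provision(self, src: str, dst: str) -> bool:
        # A real controller would compute a path and configure hardware;
        # here we only check that both endpoints sit in this domain.
        return src in self.nodes and dst in self.nodes

class Orchestrator:
    """Coordinates across domains, hiding per-domain complexity from
    the layer above."""
    def __init__(self, controllers: list):
        self.controllers = controllers

    def provision(self, src: str, dst: str) -> bool:
        # Try each domain in turn; a real system would also stitch
        # multi-domain paths together at gateway nodes.
        return any(c.provision(src, dst) for c in self.controllers)

east = DomainController("east", ["n1", "n2"])
west = DomainController("west", ["n3", "n4"])
orchestrator = Orchestrator([east, west])
ok = orchestrator.provision("n1", "n2")
```

The point of the split is that the orchestrator never touches devices directly; each domain controller keeps its own layer-specific detail private, which is what hides the complexity Wellbrock describes.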
“Going forward, all of the network will be dynamic and in order to take advantage of that, we have to have analytics and automation,” says Wellbrock.
In this new world, there are lots of right answers and you have to figure what the best one is
Open design is an important element here, he says, but the bigger return comes from analytics and automation of the layers and from the equipment.
This is why Wellbrock questions what white boxes will bring: “What are we getting that is brand new? What are we doing that we can’t do today?”
He points out that the building blocks for ROADMs - the wavelength-selective switches and multicast switches - originate from the same sub-system vendors, such that the cost points are the same whether a white box or a system vendor’s platform is used. And using white boxes does nothing to make the growing network complexity go away, he says.
“Mixing your suppliers may avoid vendor lock-in,” says Wellbrock. “But what we are saying is vendor lock-in is not as serious as managing the complexity of these intelligent networks.”
Wellbrock admits that network transformation with its use of analytics and orchestration poses new challenges. “I loved the old world - it was physics and therefore there was a wrong and a right answer; hardware, physics and fibre and you can work towards the right answer,” he says. “In this new world, there are lots of right answers and you have to figure what the best one is.”
Evolution
If white boxes can’t perform all the data-intensive tasks, then they will have to be performed elsewhere. This could take the form of accelerator cards for servers using devices such as Xilinx’s FPGAs.
Adding such functionality to the white box, however, is not straightforward. “This is the dichotomy the white box designers are struggling to address,” says Garcia. A white box is light and simple, so adding extra functionality requires customising its operating system to run these applications. And this runs counter to the white-box concept, he says.
We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have
Yet this is just what he is seeing from traditional systems vendors, which are developing designs that bring differentiation to their platforms to counter the white-box trend.
One recent example that fits this description is Ciena’s two-rack-unit 8180 coherent network platform. The 8180 has a 6.4-terabit packet fabric, supports 100-gigabit and 400-gigabit client-side interfaces and can be used solely as a switch or, more typically, as a transport platform with client-side and coherent line-side interfaces.
The 8180 is not a white box but has a suite of open APIs and has a higher specification than the Voyager and Cassini white-box platforms developed by the Telecom Infra Project.
“We are going through a set of white-box evolutions,” says Garcia. “We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have.”
Whether FPGAs will find their way into white boxes, Garcia will not say. What he will say is that Xilinx is engaged with some of these players to have a good view as to what is required and by when.
It appears inevitable that white boxes will become more capable, handling more and more of the data-plane tasks, in response to competition from traditional system vendors and their more sophisticated designs.
AT&T’s white-box vision is clear. What is less certain is whether the rest of the operator pack will move to close the gap.
