The many paths to 400 gigabits
The race is on to deliver 400-gigabit optical interfaces in time for the next generation of data centre switches expected in late 2018.
The industry largely agrees that a four-wavelength 400-gigabit optical interface is the most desirable, yet alternative designs are also being developed.
Optical module makers must consider such factors as technical risk, time-to-market and cost when choosing which design to back.
Rafik Ward, Finisar
Until now, the industry has sought a consensus on interfaces, making use of such standards bodies as the IEEE to serve the telecom operators.
Now, the volumes of modules used by the internet giants are such that they dictate their own solutions. And the business case for module makers is sufficiently attractive that they are willing to comply.
Another challenge at 400 gigabits is that there is no consensus regarding what pluggable form factor to use.
“There is probably more technical risk in 400 gigabits than any of the historical data-rate jumps we have seen,” says Rafik Ward, vice president of marketing at Finisar.
Shrinking timeframes
One-hundred-gigabit interfaces are now firmly established in the marketplace. It took several generations to achieve the desired module design. First, the CFP module was used, followed by the CFP2. The industry then faced a choice between the CFP4 and the QSFP28 form factors. The QSFP28 ended up winning because the 100-gigabit module met the price, density and performance expectations of the big users - the large-scale data centre players, says Paul Brooks, director of strategy for lab and production at Viavi Solutions.
“The QSFP28 is driving huge volumes, orders of magnitude more than we see with the other form factors,” he says.
There is probably more technical risk in 400 gigabits than any of the historical data-rate jumps we have seen
It was the telcos that initially drove 100-gigabit interfaces, as with all the previous interface speeds. Telcos have rigorous optical and physical media device requirements such that the first 100-gigabit design was the 10km 100GBASE-LR4 interface, used to connect IP routers and dense wavelength-division multiplexing (DWDM) equipment.
Paul Brooks, Viavi Solutions
But 100 gigabits is also the first main interface speed influenced by the internet giants. “One-hundred-gigabit volumes didn’t take that inflection point until we saw the PSM4 and CWDM4 [transceiver designs],” says Brooks. The PSM4 and CWDM4 are not IEEE specifications but multi-source agreements (MSAs) driven by the industry.
The large-scale data centre players are now at the forefront driving 400 gigabits. They don’t want to wait for three generations of modules before they get their hands on an optimised design. They want the end design from the start.
“There was a lot of value in having iterations at 100 gigabits before we got to the high-volume form factor,” says Ward. “It will be more challenging with the compressed timeframe for 400 gigabits.”
Datacom traffic is driven by machine-to-machine communication whereas telecom is driven by consumer demand. Machine-to-machine has twice the growth rate.
Data centre needs
Brandon Collings, CTO of Lumentum, explains that the urgency of the large-scale data centre players for 400 gigabits is due to their more pressing capacity requirements compared to the telcos.
Brandon Collings, Lumentum
Datacom traffic is driven by machine-to-machine communication whereas telecom is driven by consumer demand. “Machine-to-machine has twice the growth rate,” says Collings. “The expectation in the market - and everything in the market aligns with this - is that the datacom guys will be adopting in volume much sooner than the telecom guys.”
The data centre players require 400-gigabit interfaces for the next-generation 6.4- and 12.8-terabit top-of-rack switches in the data centre.
“The reason why the top-of-rack switch is going to need 400-gigabit uplinks is because server speeds are going to go from 25 gigabits to 50 gigabits,” says Adam Carter, chief commercial officer of Oclaro.
A top-of-rack switch’s downlinks connect to the servers while the uplinks interface to larger ‘spine’ switches. For a 36-port switch, if four to six ports are reserved for uplinks and the remaining ports are at 50 gigabits-per-second (Gbps), 100-gigabit uplinks cannot accommodate all the traffic.
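A quick back-of-the-envelope check shows why. The split of downlinks to uplinks below is illustrative (one of the configurations the article describes), not a vendor figure:

```python
# Illustrative 36-port top-of-rack switch: 32 downlinks to servers, 4 uplinks.
downlinks, downlink_gbps = 32, 50
uplinks = 4

server_traffic = downlinks * downlink_gbps                # 1,600 Gbps from the servers
print(f"{server_traffic} Gbps of downstream capacity")
print(f"{uplinks * 100} Gbps with 100-gigabit uplinks")   # 400 Gbps: heavily oversubscribed
print(f"{uplinks * 400} Gbps with 400-gigabit uplinks")   # 1,600 Gbps: matches the downstream load
```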
The 6.4-terabit and 12.8-terabit switches are expected towards the end of next year. These switches will be based on silicon such as Broadcom’s Tomahawk-III, start-up Innovium’s Teralynx and Mellanox’s Spectrum-2. All three silicon design examples use 50-gigabit electrical signalling implemented using 4-level pulse-amplitude modulation (PAM-4).
PAM-4, a higher-order modulation scheme used for the electrical and optical client interfaces, is another challenge at 400 gigabits. The use of PAM-4 requires a slight increase in bandwidth, says Brooks, and introduces a penalty that must be compensated for using forward error correction (FEC). “Four hundred gigabits is the first Ethernet technology where you always have FEC on,” he says.
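To illustrate why FEC becomes mandatory, the sketch below gives the classic back-of-the-envelope eye-opening penalty of PAM-4 versus NRZ at the same transmit swing; it is a rule-of-thumb calculation, not an IEEE figure:

```python
import math

def pam_eye_penalty_db(levels: int) -> float:
    """Each of the (levels - 1) eyes of M-level PAM gets 1/(levels - 1) of the
    full signal swing, so the vertical eye opening shrinks accordingly versus NRZ."""
    return 20 * math.log10(levels - 1)

print(f"PAM-4 vs NRZ eye penalty: {pam_eye_penalty_db(4):.1f} dB")  # ~9.5 dB
# Halving the symbol rate for the same bit rate recovers some of this, but the
# residual penalty is why 400 Gigabit Ethernet links always run with FEC enabled.
```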
CFP8
The modules being proposed for 400-gigabit interfaces include the CFP8, the Octal Small Form Factor Pluggable (OSFP) and the double-density QSFP (QSFP-DD) pluggable modules. COBO, the Consortium for On-Board Optics’ interoperable on-board optics standard, will also support 400-gigabit interfaces.
The QSFP-DD is designed to be backward compatible with the QSFP and QSFP28 pluggables while the OSFP is a new form factor.
At OFC earlier this year, several companies showcased 400-gigabit CFP8-based designs.
NeoPhotonics detailed a CFP8 implementing 400GBASE-LR8, the IEEE 802.3bs Task Force’s 10km specification that uses eight wavelengths, each at 50-gigabit PAM-4. Finisar announced two CFP8 transceivers: the 2km 400GBASE-FR8 and the 10km 400GBASE-LR8. Oclaro also announced two CFP8 designs: the 10km 400GBASE-LR8 and an even longer reach 40km version.
The 400-gigabit CFP8 is aimed at traditional telecom applications such as linking routers and transport equipment.
NeoPhotonics’ CFP8 is not yet in production and the company says it is not seeing a present need. “There is probably a short window before it gets replaced by the QSFP-DD or, on the telecom side, the OSFP,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.
Finisar expects its 400-gigabit CFP8 products by the year-end, while Oclaro is sampling its 10km 400-gigabit CFP8.
But the large-scale data centre players are not interested in the CFP8 which they see as too bulky for the data centre. Instead, Amazon, Facebook, and equipment vendor Cisco Systems are backing the higher-density QSFP-DD, while Google and Arista Networks are proponents of the OSFP.
“The data centre players don’t need IEEE standardisation, they need the lowest cost and the most compact form factor,” says Lumentum’s Collings.
QSFP-DD and OSFP
To achieve 400 gigabits, the QSFP-DD has twice the number of electrical lanes of the QSFP, going from four to eight, while each lane’s speed is doubled to 56Gbps using PAM-4.
“Time and time again we have heard with the QSFP-DD that plugging in legacy modules is a key benefit of that technology,” says Scott Sommers, group product manager at Molex and a co-chair of the QSFP-DD MSA. The power envelope of the QSFP-DD is some 12W.
Yasunori Nagakubo, Fujitsu Optical Components
Yasunori Nagakubo, director of marketing at Fujitsu Optical Components, also highlights the high-density merits of the QSFP-DD. Up to 36 ports can fit on the front panel of a one-rack-unit (1RU) box, enabling a throughput of 14.4 terabits.
In contrast, the OSFP has been designed from a clean sheet of paper. The form factor has a larger volume and surface area than the QSFP-DD and, accordingly, has a power envelope of some 16W. Up to 32 OSFP ports can fit on a 1RU front panel.
“The QSFP-DD is a natural evolution of the QSFP and is used for switch-to-switch interconnect inside the data centre,” says Robert Blum, director of strategic marketing and business development at Intel’s silicon photonics product division. He views the OSFP as being a more ambitious design. “Obviously, you have a lot of overlap in terms of applications,” says Blum. “But the OSFP is trying to address a wider segment such as coherent and also be future proofed for 800 gigabits.”
“A lot of people are trying to make everything fit inside a QSFP-DD but, after all, the OSFP is still a bigger form factor which is easier for different components to fit in,” says Winston Way, CTO, systems at NeoPhotonics. Should a 400-gigabit design meet the more constrained volume and power requirements of the QSFP-DD, the design will also work in an OSFP.
The consensus among the module makers is that neither the QSFP-DD nor the OSFP can be ignored and they plan to back both.
This [400 gigabits] may be the last hurrah for face-plate pluggables
“We have been in this discussion with both camps for quite some time and are supporting both,” says Collings. What will determine their relative success are time-to-market issues, which switch vendors produce switches with which form factor, and how those switches sell. “Presumably, switches are bought on other things than which pluggable they elected to use,” says Collings.
Is having two form factors an issue for Microsoft?
“Yes and no,” says Brad Booth, principal network architect for Microsoft’s Azure Infrastructure and chair of the COBO initiative. “I understand why the QSFP-DD exists and why the OSFP exists, and both are the same reason why we started COBO.”
COBO will support 400-gigabit interfaces and also 800 gigabits by combining two modules side-by-side.
Booth believes that 400-gigabit pluggable module designs face significant power consumption challenges: “I’ve been privy to data that says this is not as easy as many people believe.”
Brad Booth, Microsoft
If it were only about 400-gigabit speeds, it would be a question of choosing one of the two pluggable modules and running with it, he says. But for future Ethernet speeds, whether 800 gigabits or 1.6 terabits, the design must be able to meet the thermal environment and electrical requirements.
“I do not get that feeling when I look at anything that is a face-plate pluggable,” says Booth. “This [400 gigabits] may be the last hurrah for face-plate pluggables.”
Formats
There are several 400-gigabit interface specifications at different stages of development.
The IEEE’s 802.3bs 400 Gigabit Ethernet Task Force has defined four 400 Gigabit specifications: a multi-mode fibre design and three single-mode interfaces.
The 100m 400GBASE-SR16 uses 16 multi-mode fibres, each at 25Gbps. The -SR16 has a high fibre count but future 400-gigabit multi-mode designs are likely to be optimised. One approach is an eight-fibre design, each at 50Gbps. And a four-fibre design could be developed with each fibre using coarse wavelength-division multiplexing (CWDM) carrying four 25-gigabit wavelengths.
The expectation is that at OFC 2018 next March, many companies will be demonstrating their 400-gigabit module designs including four-wavelength ones
The three single-mode IEEE specifications are the 500m 400GBASE-DR4 which uses four single-mode fibres, each conveying a 100-gigabit wavelength, and the 2km 400GBASE-FR8 and 10km 400GBASE-LR8 that multiplex eight wavelengths onto a single-mode fibre, each wavelength carrying a 50-gigabit PAM-4 signal.
The 2km and 10km IEEE specifications use a LAN-WDM spacing scheme and that requires tight wavelength control and hence laser cooling. The standards also use the IEEE CDAUI-8 electrical interface that supports eight 50-gigabit PAM-4 signals. The -FR8 and -LR8 standards are the first 400-gigabit specifications being implemented using the CFP8 module.
A new initiative, the CWDM8 MSA, has been announced to implement an alternative eight-wavelength design based on CWDM such that laser cooling is not required. And while CWDM8 will also use the CDAUI-8 electrical interface, the signals sent across the fibre are 50-gigabit non-return-to-zero (NRZ). A retimer chip is required to convert the input 50-gigabit PAM-4 electrical signals into 50-gigabit NRZ before being sent optically.
Robert Blum, Intel
Proponents of the CWDM8 MSA see it as a pragmatic solution that offers a low-risk, timely way to deliver 400-gigabit interfaces.
“When we looked at what is available and how to do an optical interface, there was no good solution that would allow us to meet those timelines, fit the power budget of the QSFP-DD and be at the cost points required for data centre deployment,” says Intel’s Blum. Intel is one of 11 founding companies backing the new MSA.
A disadvantage of the MSA is that it requires eight lasers instead of four, adding to the module’s overall cost.
“Making lasers at eight different wavelengths is not a trivial thing,” says Vivek Rajgarhia, senior vice president and general manager, lightwave at Macom.
This is what the 100G Lambda MSA aims to address with its four 100-gigabit wavelength design over duplex fibre. This can be seen as a four-wavelength CWDM complement to the IEEE’s 400GBASE-DR4 500m specification.
Vivek Rajgarhia, Macom
The first 400-gigabit standard the MSA is developing is the 400G-FR4, a 2km link that uses a CDAUI-8 interface and an internal PAM-4 chip to create the 100-gigabit PAM-4 signals that are optically multiplexed onto a fibre.
The large-scale data centre players are the main drivers of four-wavelength 400-gigabit designs. Indeed, two large-scale data centre operators, Microsoft and Alibaba, have joined the 100G Lambda MSA.
“People think that because I work at Microsoft, I don’t talk to people at Google and Facebook,” says Booth. “We may not agree but we do talk.
“My point to them was that we need a CWDM4 version of 400 gigabits; the LAN-WDM eight-wavelength is a non-starter for all of us,” says Booth. “If you talk to any of the big end users, they will tell you it is a non-starter. They are waiting for the FR4.”
“Everyone wants 400 gigabit - 4x100-gigabit, that is what they are looking for,” says Rajgarhia.
If companies adopt other solutions it is purely a time-to-market consideration. “If they are going for intermediate solutions, as soon as there is 400 gigabits based on 100-gigabit serial, there is no need for them, whether it is 200-gigabit or 8x50-gigabit modules,” says Rajgarhia.
At the recent ECOC 2017 show, Macom demonstrated a 100-gigabit single-wavelength solution based on its silicon photonics optics and its 100-gigabit PAM-4 DSP chip. MultiPhy also announced a 100-gigabit PAM-4 chip at the show and companies are already testing its silicon.
The expectation is that at OFC 2018 next March, many companies will be demonstrating their 400-gigabit module designs including four-wavelength ones.
Fujitsu Optical Components says it will have a working four-wavelength 400-gigabit module demonstration at the show. “Fujitsu Optical Components favours a 4x100-gigabit solution for 400 gigabits instead of the alternative eight-wavelength solutions,” says Nagakubo. “We believe that eight-wavelength solutions will be short lived until the 4x100-gigabit design becomes available.”
The roadmap is slipping and slipping because the QSFP-DD is hard, very hard
Challenges and risk
“Everyone understands that, ultimately, the end game is the QSFP-DD but how do we get there?” says Viavi’s Brooks.
He describes as significant the challenges involved in developing a four-wavelength 400-gigabit design. These include signal integrity issues, the optics for 100-gigabit single wavelengths, the PAM-4 DSP, the connectors and the ‘insanely hot and hard’ thermal issues.
“All these problems need to be solved before you can get the QSFP-DD to a wider market,” says Brooks. “The roadmap is slipping and slipping because the QSFP-DD is hard, very hard.”
Lumentum’s Collings says quite a bit of investment has been made to reduce the cost of existing 100-gigabit CWDM4 designs and this investment will continue. “That same technology is basically all you need for 400 gigabits if you can increase the bandwidth to get 50 gigabaud and you are using a technology that is fairly linear so you can switch from NRZ to PAM-4 modulation.”
In other words, extending to a 400-gigabit four-wavelength design becomes an engineering matter if the technology platform that is used can scale.
Microsoft’s Booth is also optimistic. He does not see any challenges that suggest that the industry will fail to deliver the 400-gigabit modules that the large-scale data centre players require: “I feel very confident that the ecosystem will be built out for what we need.”
Module companies backing the most technically-challenging four-wavelength designs face the largest risk, yet also the greatest reward if they deliver by the end of 2018 and into 2019. Any slippage and the players backing alternative designs will benefit.
How the 400-gigabit market transpires will be ‘very interesting’, says Finisar’s Ward: “It will be clear who executes and who does not.”
Has coherent optical transmission run its course?
Feature: Coherent's future
Three optical systems vendors share their thoughts about coherent technology and the scope for further improvement as they look two generations ahead to symbol rates approaching 100 gigabaud
Optical transmission using coherent detection has made huge strides in the last decade. The latest coherent technology with transmitter-based digital signal processing delivers 25x the capacity-reach of 10-gigabit wavelengths using direct-detection, according to Infinera.
Since early 2016, the optical systems vendors Infinera, Ciena and Nokia have all announced new coherent digital signal processor (DSP) designs. Each new generation of coherent DSP improves the capacity that can be transmitted over an optical link. But given the effectiveness of the latest coherent systems, have most of the benefits already been achieved?
Source: Infinera
“It is getting harder and harder,” admits Kim Roberts, vice president, WaveLogic science at Ciena. “Unlike 10 years ago, there are no factors of 10 available for improvement.”
Non-linear Shannon limit
It is the non-linear Shannon limit that defines how much information can be sent across a fibre, a function of the optical signal-to-noise ratio.
Kim Roberts of Ciena
The limit is based on the work of the famed mathematician and information theorist, Claude Shannon. Shannon's work was based on a linear communication channel with added Gaussian noise. Optical transport over a fibre is a more complex channel but the same Shannon bound applies, although assumptions about the non-linearities in the fibre must be made.
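For reference, Shannon’s capacity for the linear additive-Gaussian-noise channel, the baseline the fibre limit builds on, is the textbook formula below (not one quoted in the article):

```latex
C = B \log_2\left(1 + \mathrm{SNR}\right) \quad \text{bits per second per polarisation}
```

where B is the channel bandwidth. Over fibre, the effective SNR itself falls as launch power rises because of the Kerr non-linearity, which is what turns this into the ‘non-linear’ Shannon limit.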
Roberts stresses that despite much work, the industry still hasn't figured out just what the upper limit is over a fibre for a given optical signal-to-noise ratio.
It is getting harder and harder. Unlike 10 years ago, there are no factors of 10 available for improvement.
"There are papers that show that with this method and this method, you can do this much," says Roberts. "And there are other papers that show that as the power goes up, there is no theoretical limit until you melt the fibre."
These are theoretical things, he says, but the key is that the headroom available remains unknown. What is known is that the theoretical limit remains well ahead of practical systems. Accordingly, systems performance can be improved using a combination of techniques and protocols coupled with advances in electro-optics.
Design goals
A key goal when designing a new optical transmission system is to increase the data sent for a given cost i.e. decrease the cost-per-bit. This is an ongoing requirement as the service providers contend with ever growing network traffic.
Another challenge facing engineers is meeting the demanding power, density and thermal constraints of their next-generation optical transport system designs.
One way to reduce the cost-per-bit is to up the symbol rate to increase the data sent over a wavelength. Traditional 100-gigabit and 200-gigabit dense wavelength-division multiplexing (DWDM) systems use 32-35 gigabaud (GBaud). The latest coherent DSPs already support more than one baud rate: Nokia’s PSE-2s coherent DSP supports 33Gbaud or 45Gbaud while Ciena’s WaveLogic Ai chipset supports 35Gbaud or 56Gbaud.
Having a choice of baud rates coupled with the various modulation scheme options means the same number of bits can be sent over a range of optical reaches. The more complex the modulation scheme, the closer the points are in a constellation and the harder it is to correctly detect the data at the receiver in the presence of noise. Accordingly, using the combination of a simpler modulation scheme and a higher baud rate allows the same data to be sent further.
Capacity-reach is what matters: how much capacity you can extract for a given reach
Nokia's 1.4-billion-transistor PSE-2s supports two 200-gigabit-per-second (Gbps) formats: polarisation-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM) at 33Gbaud, or PM-8QAM at 45Gbaud. The 200-gigabit wavelength has an optical reach of some 800km using PM-16QAM at 33Gbaud, but this rises to 1,600km when PM-8QAM at 45Gbaud is used. Alternatively, using 45Gbaud and PM-16QAM, more data can be sent: 250 gigabits-per-wavelength over 800km.
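The raw line rates behind these figures follow directly from the symbol rate and the modulation format. The sketch below is a back-of-the-envelope calculation; the gap to the quoted net rates is consumed by FEC and framing overhead, and no specific overhead figure is assumed:

```python
def raw_rate_gbps(baud: float, bits_per_symbol: int, polarisations: int = 2) -> float:
    """Pre-FEC bit rate of a dual-polarisation coherent carrier."""
    return baud * bits_per_symbol * polarisations

for label, baud, bits in [
    ("PM-16QAM at 33Gbaud (sold as 200G)", 33, 4),
    ("PM-8QAM  at 45Gbaud (sold as 200G)", 45, 3),
    ("PM-16QAM at 45Gbaud (sold as 250G)", 45, 4),
]:
    print(f"{label}: {raw_rate_gbps(baud, bits):.0f} Gbps raw")
```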
Nokia's Randy Eisenach
Coherent systems designers are not stopping there. “The next higher baud rate the industry is targeting is 61-68Gbaud,” says Randy Eisenach, senior product marketing manager, optical networks at Nokia.
Operating at the higher gigabaud range - Infinera talks of 65-70Gbaud - a single transmitter-receiver pair sends twice the amount of data of traditional 32-35Gbaud systems using the same modulation format. But the higher baud rates require the electro-optics to operate twice as fast. The analogue-to-digital and digital-to-analogue converters of the coherent DSP must sample at twice the baud rate - at least 130 billion samples-per-second. A 65-70Gbaud rate also requires silicon implemented using a more advanced and expensive CMOS process node - 16nm instead of 28nm. In turn, the optical modulator and drivers need to work well at these higher rates.
“The optical networking industry is well on its way to solving these engineering and component issues in the next year or so,” says Eisenach.
The capacity-per-wavelength also goes up with baud rate. For shorter reach links, 400-600 gigabits-per-wavelength are possible at 65-70Gbaud and, according to Pravin Mahajan, Infinera’s director of product and corporate marketing, power consumption in terms of watts-per-gigabit will improve by some 2.5x.
Pravin Mahajan of Infinera
And the system vendors are not stopping there: the next baud rate hike after 65-70Gbaud will be in the region of 80-100Gbaud. The coherent DSPs that will support such baud rates will need to be implemented using a 7nm CMOS process (see table).
“Capacity-reach is what matters: how much capacity you can extract for a given reach,” says Mahajan. “These successive generations [of faster baud rates] all keep moving that curve upwards.”
DSP features
In addition to the particular baud rates chosen by the vendors for their DSP designs, each includes unique features.
Instead of modulating the data onto a single carrier, Infinera’s FlexCoherent DSP uses multiple Nyquist sub-carriers spread across a channel. The number of sub-carriers varies depending on the link. The benefit of the approach, says Infinera, is that it allows a lowering of the baud rate used, which increases the tolerance to non-linear channel impairments experienced during optical transmission.
The FlexCoherent DSP also supports enhanced soft-decision forward-error correction (SD-FEC) including the processing of two channels that need not be contiguous. This is possible as the FlexCoherent DSP is dual-channel which particularly benefits long-haul and subsea applications, claims Infinera. By pairing two channels, the FEC codes can be shared. Pairing a strong channel with a weak one and sharing the codes allows some of the strength of the strong signal to be used to bolster the weaker one, extending its reach or even allowing a more advanced modulation scheme to be used.
Infinera has just announced that by using Nyquist sub-carriers and the FEC gain-sharing technologies, its customer, Seaborn Networks, is able to deliver 11.8 terabits of capacity over a 10,600km submarine link.
Nokia’s PSE-2s DSP has sufficient processing performance to support two coherent channels. Each channel can implement a different modulation format if desired, or the two can be tightly coupled to form a super-channel. Using 45Gbaud and PM-16QAM, two 250-gigabit channels can be implemented to enable a 500-gigabit muxponder card. The PSE-2s can also implement a 400-gigabit wavelength, but that is the only format for which the chip supports a single channel.
Ciena’s WaveLogic Ai, meanwhile, uses advanced coding schemes such that it no longer mentions particular modulation schemes but rather a range of line rates in 50-gigabit increments.
Coding schemes with names such as set-partition QPSK, matrix-enhanced PM-BPSK, and 8D-2QAM, have already started to appear in the vendors’ coherent DSPs.
“Vendors use a lot of different terms essentially for the same thing: applying some type of coding to symbols to improve performance,” says Eisenach.
There are two main coding approaches: constellation shaping, also known as probabilistic shaping, and multi-dimensional coding. Combining the two - probabilistic shaping and multi-dimensional coding - promises enhanced performance in the presence of linear and non-linear transmission impairments. These are now detailed.
Probabilistic shaping
The four constellation points of QPSK modulation are equidistant from the origin. With more advanced modulation schemes such as 16-QAM, the constellation points differ in their distance from the origin and hence have different energies. Points in the corners of the constellation, furthest from the origin, have the most energy since a point’s power is the square of the distance from the origin.
Here the origin is at the centre of the square 64-QAM constellation. With probabilistic shaping, more of the points closer to the origin are chosen with the resulting data rate going down. Source: Nokia
Probabilistic shaping uses the inner constellation points more than the outer points, thereby reducing the overall average energy, and this improves the signal-to-noise ratio. To understand why, Ciena points out that the symbol error rate at the receiver is dominated by the distance between neighbouring points of the constellation. Reducing the average energy keeps the distance between the points the same, but when gain is applied to restore the signal’s power levels, the effect is to increase the distance between points. “It means we have better separation between the points, we’ve expanded everything,” says Roberts.
Using probabilistic shaping delivers a maximum 1.53dB of improvement in a linear transmission channel. “That is the theoretical limit,” says Roberts. “In a non-linear world, we get a greater benefit from shaping beyond just shaping the noise.”
Probabilistic shaping also has another benefit: it allows the number of bits sent per symbol to be defined.
Using standard modulation schemes such as 64-QAM with no constellation shaping, 6 bits-per-symbol are sent. Using shaping and being selective in what points are used, fewer bits are sent and they don’t need to be integer values. “I can send 5.7, 5.6, 5.3, even 5.14 bits-per symbol,” says Roberts. “Until I get to 5 bits, and then I have a choice: do I use more shaping or do I start with 32-QAM, which is 5 bits-per-symbol.”
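A minimal sketch of the idea, assuming Maxwell-Boltzmann shaping over a square 64-QAM constellation (the shaping parameter values are arbitrary, and a real transmitter uses a distribution matcher rather than this direct calculation):

```python
import itertools, math

levels = [-7, -5, -3, -1, 1, 3, 5, 7]                       # per-axis 64-QAM amplitudes
points = [complex(i, q) for i, q in itertools.product(levels, levels)]

def shaped_stats(lam):
    """Maxwell-Boltzmann shaping: p(x) proportional to exp(-lam * |x|^2)."""
    weights = [math.exp(-lam * abs(p) ** 2) for p in points]
    total = sum(weights)
    probs = [w / total for w in weights]
    avg_energy = sum(p * abs(x) ** 2 for p, x in zip(probs, points))
    bits = -sum(p * math.log2(p) for p in probs)             # entropy = bits carried per symbol
    return avg_energy, bits

e0, b0 = shaped_stats(0.0)                                   # lam = 0 is plain, uniform 64-QAM
print(f"uniform 64-QAM: average energy {e0:.0f}, {b0:.2f} bits/symbol")
for lam in (0.01, 0.03, 0.06):
    e, b = shaped_stats(lam)
    print(f"lam={lam}: energy {e:.1f} ({10 * math.log10(e0 / e):.2f} dB lower), {b:.2f} bits/symbol")
# More shaping lowers the average power and the bits per symbol together. The
# ultimate shaping gain on a linear channel is 10*log10(pi*e/6), about 1.53 dB,
# in line with the figure Roberts quotes.
```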
Technology A shows today's coherent DSPs: operating at 30-35Gbaud and delivering 100, 150 and 200Gbps capacities per wavelength. Technology B is Ciena's WaveLogic Ai. Operating at 56Gbaud, it delivers up to 400Gbps per wavelength in 50Gbps increments. Technology C will continue this trend. Operating around 70Gbaud, up to 600Gbps per wavelength will be possible in even finer speed increments of 25Gbps. Is this Ciena's next WaveLogic? Source: Ciena
This is very useful as it allows fine control of the data sent such that operators can squeeze just enough data to suit the margins available on a particular fibre link. “You don't have to choose between 100-gigabit and 200-gigabit wavelengths,” says Roberts. "You can use smaller jumps and that sometimes means sending more capacity.”
Three things are needed to fine-tune a link in this way. One is a coherent DSP that can deliver such variable increments on a wavelength using probabilistic shaping. Also needed is a flexible client signalling scheme such as the OIF’s Flexible Ethernet (FlexE) protocol, a protocol mechanism to vary the Ethernet payload for transmission. Lastly, intelligent networking software is required to determine what is happening in the network and the margins available to assess how much data can be squeezed down a link.
Ciena says it has not implemented probabilistic shaping in its latest WaveLogic Ai coherent DSP. But given the Ai will be a family of devices, the technique will feature in upcoming coherent DSPs.
Nokia published a paper at the OFC event held earlier this year showing the use of probabilistic shaping over a transatlantic link. Using probabilistic-shaped 64-QAM (PS-64QAM), a spectral efficiency of 7.46b/s/Hz was achieved over the 5,523km link. This equates to 32 terabits of capacity over the fibre, more than 2.5x the 12 terabits of the existing DWDM system that uses 100Gbps PM-QPSK.
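Dividing the two figures gives the amount of fibre spectrum those numbers imply; this is an inference from the article's own figures, not a value from the paper:

```latex
\frac{32\ \text{Tb/s}}{7.46\ \text{b/s/Hz}} \approx 4.3\ \text{THz}
```

which is roughly the width of a C-band line system.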
Advanced coding
Multi-dimensional coding is another technique used to improve optical transmission. A 16-QAM constellation is a two-dimensional (2D) representation in one polarisation, says Roberts. But if both polarisations of light are considered as one signal then it becomes a 4D, 256-point (16x16) symbol. This can be further extended by including the symbols in adjacent time slots. This forms an 8D representation.
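Counting the points makes the dimensionality concrete; the 8D figure is an inference from the construction Roberts describes, not a number he quotes:

```latex
\underbrace{16}_{\text{2D, one polarisation}} \;\rightarrow\; \underbrace{16 \times 16 = 256}_{\text{4D, both polarisations}} \;\rightarrow\; \underbrace{256 \times 256 = 65{,}536}_{\text{8D, two time slots}}
```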
Non-linear compensation has been an interesting research topic. Nokia continues to investigate the topic and implementation methods but the benefits appear small for most real-world applications
The main two benefits of multi-dimensional coding are better noise performance and significantly better performance in the presence of non-linear impairments.
Nokia’s PSE-2s uses coding for its set-partition QPSK (SP-QPSK). Standard PM-QPSK uses phase modulation, resulting in a 4-point constellation. With SP-QPSK, only three of the four constellation points are used for each symbol. Fewer constellation points means less data is transported, but the benefit of SP-QPSK is extended reach due to the greater Euclidean distance between the symbol points, created by carefully mapping the sequence of symbols. This results in 2.5dB of extra gain compared to PM-QPSK, for a reach beyond 5,000km.
Using the PSE-2s' 45Gbaud symbol rate, the fewer constellation points of SP-QPSK can be compensated for to achieve the same overall 100Gbps capacity as PM-QPSK at 33Gbaud.
Infinera’s FlexCoherent uses what it calls matrix-enhanced PM-BPSK, a form of averaging that adds 1dB of gain. “Any innovation that adds gain to a link, the margin that you give to operators, is always welcome,” says Mahajan.
Ciena’s WaveLogic 3 Extreme coherent DSP supports the multi-dimensional coding scheme 8D-2QAM to improve the reach or capacity of long-reach spans.
Such techniques mean vendors have a wealth of choices available. It is also why Ciena has stopped referring to modulation schemes and instead talks about its WaveLogic Ai at 35Gbaud supporting 100-250Gbps data rates in 50-gigabit increments while, at 56Gbaud, the WaveLogic Ai delivers 100-400Gbps optical channels in 50-gigabit steps.
Probabilistic shaping and multi-dimensional coding are distinct techniques but combining the two means the shaping can be done across dimensions.
Design engineers thus have various techniques to keep improving performance and there are other directions too.
Forward-error correction is about 2dB from the theoretical limit and with improved design Ciena’s Roberts expects 1dB can be reclaimed.
In turn, signal processing techniques could be applied at the transmitter to compensate for expected non-linear effects. “Non-linear compensation has been an interesting research topic,” says Eisenach. “Nokia continues to investigate the topic and implementation methods but the benefits appear small for most real-world applications.”
So is there much scope for further overall improvement?
“There is still a lot more juice left," says Mahajan.
“It [coherent transmission improvement] is getting harder and harder,” adds Roberts. “It is taking more mathematics and more and more CMOS gates, but Moore’s law is providing lots of CMOS gates.”
This is an updated and extended version of an article that first appeared in Optical Connections magazine earlier this year.
Inphi unveils a second 400G PAM-4 IC family
Inphi has announced the Vega family of 4-level, pulse-amplitude modulation (PAM-4) chips for 400-gigabit interfaces.
The 16nm CMOS Vega IC family is designed for enterprise line cards and is Inphi’s second family of 400-gigabit chips that support eight lanes of 50-gigabit PAM-4.
Its first 8x50-gigabit family, dubbed Polaris, is used within 400-gigabit optical modules and was announced at the OFC show held in Los Angeles in March.
“Polaris is a stripped-down low-power DSP targeted at optical module applications,” says Siddharth Sheth, senior vice president, networking interconnect at Inphi. “Vega, also eight by 50-gigabits, is aimed at enterprise OEMs for their line-card retimer and gearbox applications.”
A third Inphi 400-gigabit chip family, supporting four channels of 100-gigabit PAM-4 within optical modules, will be announced later this year or early next year.
400G PAM-4 drivers
Inphi’s PAM-4 chips have been developed in anticipation of the emergence of next-generation 6.4-terabit and 12.8-terabit switch silicon and accompanying 400-gigabit optical modules such as the OSFP and QSFP-DD form factors.
Sheth highlights Broadcom’s Tomahawk-III, start-up Innovium’s Teralynx and Mellanox’s Spectrum-2 switch silicon. All have 50-gigabit PAM-4 interfaces implemented using 25-gigabaud signalling and PAM-4 modulation.
“What is required is that such switch silicon is available and mature in order for us to deploy our PAM-4 products,” says Sheth. “Everything we are seeing suggests that the switch silicon will be available by the end of this year and will probably go into production by the end of next year,” says Sheth.
Several optical module makers are starting to build 8x50-gigabit OSFP and QSFP-DD products
The other key product that needs to be available is the 400-gigabit optical modules. The industry is pursuing two main form factors: the OSFP and the QSFP-DD. Google and switch maker Arista Networks are proponents of the OSFP form factor while the likes of Amazon, Facebook and Cisco back the QSFP-DD. Google has said that it will initially use an 8x50-gigabit module implementation for 400 gigabit. Such a solution uses existing, mature 25-gigabit optics and will be available sooner than the more demanding 4x100-gigabit design that Amazon, Facebook and Cisco are waiting for. The 4x100 gigabit design requires 50Gbaud optics and a 50Gbaud PAM-4 chip.
Inphi says several optical module makers are starting to build 8x50-gigabit OSFP and QSFP-DD products and that its Polaris and Vega family of chips anticipate such deployments.
“We expect 100-gigabit optics to be available sometime around mid-2018 and our next-generation 100-gigabit PAM-4 will be available in the early part of next year,” says Sheth.
Accordingly, the combination of the switch silicon and optics means that the complete ecosystem will already exist next year, he says.
Vega
The Polaris chip, used within an optical module, equalises the optical non-linearities of the incoming 50-gigabit PAM-4 signals. The optical signal is created using 25-gigabit lasers that are modulated using a PAM-4 signal that encodes two bits per symbol. “When you run PAM-4 over fibre - whether multi-mode or single mode - the signal undergoes a lot of distortion,” says Sheth. “You need the DSP to clean up that distortion.”
The Vega chip, in contrast, sits on enterprise line cards and adds digital functionality that is not supported by the switch silicon. Most enterprise boxes support legacy data rates such as 10 gigabit and 1 gigabit. The Vega chip supports such legacy rates as well as 25, 50, 100, 200 and 400 gigabit, says Sheth.
The Vega chip can add forward-error correction to a data stream and decode it. As well as FEC, the chip also has physical coding sublayer (PCS) functionality. “Every time you need to encode a signal with FEC or decode it, you need to unravel the Ethernet data stream and then reassemble it,” says Sheth.
Also on-chip is a crossbar that can switch any lane to any other lane before feeding the data to the switch silicon.
Sheth stresses that not all switch chip applications need the Vega. For large-scale data centre applications that use stripped-down systems, the optical module would feed the PAM-4 signal directly into the switch silicon, requiring the use of the Polaris chip only.
A second role for Vega is driving PAM-4 signals across a system. “If you want to drive 50-gigabit PAM-4 signals electrically across a system line card and noisy backplane then you need a chip like Vega,” says Sheth.
A further application for the Vega chip is as a ‘gearbox’, converting between 50-gigabit and 25-gigabit line rates. Once high-capacity switch silicon with 50-gigabit PAM-4 interfaces is deployed, the Vega chip will enable the conversion between 50-gigabit PAM-4 and 25-gigabit non-return-to-zero (NRZ) signals. System vendors will then be able to interface 100-gigabit (4x25-gigabit) QSFP28 modules with these new switch chips.
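Conceptually, the 2:1 gearbox function boils down to splitting the two bits carried by each PAM-4 symbol across two NRZ lanes. The sketch below illustrates only that bookkeeping; the Gray mapping is an assumption, and a real gearbox PHY also handles FEC translation, alignment and retiming:

```python
GRAY_MAP = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # PAM-4 level -> (MSB, LSB), assumed mapping

def pam4_to_two_nrz(symbols):
    """Split one 50-gigabit PAM-4 lane into two 25-gigabit NRZ bit streams."""
    msb_lane, lsb_lane = [], []
    for level in symbols:
        msb, lsb = GRAY_MAP[level]
        msb_lane.append(msb)
        lsb_lane.append(lsb)
    return msb_lane, lsb_lane

print(pam4_to_two_nrz([0, 3, 1, 2, 2, 0]))
# Each PAM-4 symbol carries two bits, so one 50-gigabit PAM-4 lane carries the
# payload of two 25-gigabit NRZ lanes; the reverse path recombines them.
```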
One-hundred-gigabit modules will be deployed for at least another three to four years, while the price of such modules has come down significantly. “For a lot of the cloud players it comes down to cost: are 128 ports at 100 gigabits cheaper than 32 400-gigabit modules?” says Sheth. The company says it is seeing a lot of interest in this application.
We expect 100-gigabit optics to be available sometime around mid-2018 and our next-generation 100-gigabit PAM-4 will be available in the early part of next year
Availability
Inphi has announced two Vega chips: a 400-gigabit gearbox and a 400-gigabit retimer and gearbox IC. “We are sampling,” says Sheth. “We have got customers running traffic on their line cards.” General availability is expected in the first quarter of 2018.
As for the 4x100-gigabit PAM-4 chips, Sheth expects solutions to appear in the first half of next year: “We have to see how mature the optics are at that point and whether something can go into production in 2018.”
Inphi maintains that the 8x50-gigabit optical module solutions will go to market first and that the 4x100-gigabit variants will appear a year later. “If you look at our schedules, Polaris and the 4x100-gigabit PAM-4 chip are one year apart,” he says.
SFP-DD: Turning the SFP into a 100-gigabit module
An industry initiative has started to quadruple the data rate of the SFP, the smallest of the pluggable optical modules. The Small Form Factor Pluggable – Double Density (SFP-DD) is being designed to support 100 gigabits by doubling the SFP’s electrical lanes from one to two and doubling their speed.
Scott Sommers
The new multi-source agreement (MSA), to be completed during 2018, will be rated at 3.5W, the same power envelope as the current 100-gigabit QSFP module, even though the SFP-DD is expected to be 2.5x smaller in size.
The front panel of a 1-rack-unit box will be able to support up to 96 SFP-DD modules, a total capacity of 9.6 terabits.
The SFP-DD is adopting a similar philosophy to that being used for the 400-gigabit QSFP-DD MSA: an SFP-DD port will support legacy SFP modules - the 25-gigabit SFP28 and 10-gigabit SFP - just as the QSFP-DD will be backward compatible with existing QSFP modules.
“Time and time again we have heard with the QSFP-DD that plugging in legacy modules is a key benefit of that technology,” says Scott Sommers, group product manager at Molex and the chair of the new SFP-DD MSA. Sommers is also a co-chair of the QSFP-DD MSA.
Interest in the SFP-DD started among several like-minded companies at the OFC show held in March. Companies such as Alibaba, Molex, Hewlett Packard Enterprise and Huawei agreed on the need to extend the speed and density of the SFP similar to how the QSFP-DD is extending the QSFP.
The main interest in the SFP-DD is for server to top-of-rack switch connections. The SFP-DD will support one or two lanes of 28 gigabit-per-second (Gbps) or of 56Gbps using 4-level pulse-amplitude modulation (PAM-4).
“We tried to find server companies and companies that could help with the mechanical form factor like connector companies, transceiver companies and systems companies,” says Sommers. Fourteen promoter companies supported the MSA at its launch in July.
Specification work
The SFP-DD MSA is developing a preliminary hardware release that will be published in the coming months. This will include the single-port surface mount connector, the cage surrounding it and the module’s dimensions.
The goal is that the module will be able to support 3.5W. “Once we pin down the form factor, we will be able to have a better idea whether 3.5W is achievable,” says Sommers. “But we are very confident with the goal.”
The publication of the mechanical hardware specification will lead to other companies - contributors - responding with their comments and suggestions. “This will make the specification better but it does slow down things,” says Sommers.
The MSA’s attention will turn to the module’s software management specification once the hardware release is published. The software must understand what type of SFP module is plugged into the SFP-DD port, for example.
Supporting two 56Gbps lanes using PAM-4 means that up to four SFP-DD modules can be interfaced to a 400-gigabit QSFP-DD. But the QSFP-DD is not the only 400-gigabit module the SFP-DD could be used with in such a ‘breakout’ mode. “I don’t want to discount the OSFP [MSA],” says Sommers. “That is a similar type of technology to the QSFP-DD where it is an 8-channel-enabling form factor.”
The SFP-DD could eventually support a 200-gigabit capacity. “It is no secret that this industry is looking to double speeds every few years,” says Sommers. He stresses this isn't the goal at present but it is there: “This MSA, for now, is really focussed on 25-gigabit non-return-to-zero or 50-gigabit PAM-4.”
Challenges
One challenge Sommers highlights for the SFP-DD is achieving a mechanically robust design while meeting the 3.5W power envelope and maintaining signal integrity given the two lanes of 56Gbps.
The signal integrity advances achieved with the QSFP-DD work will be adopted for the SFP-DD. “That is why we don’t think it is going to take as long as the QSFP-DD,” he says.
The electro-optic components need to be squeezed into a smaller space and with the SFP-DD’s two lanes, there is a doubling of the copper lines going into the same opening. “This is not insurmountable but it is challenging,” says Sommers.
Further reading
Mellanox blog on the SFP-DD
Heavy Reading’s take on optical module trends
The industry knows what the next-generation 400-gigabit client-side interfaces will look like but uncertainty remains regarding what form factors to use. So says Simon Stanley who has just authored a report entitled: From 25/100G to 400/600G: A Competitive analysis of Optical Modules and Components.
Implementing the desired 400-gigabit module designs is also technically challenging, presenting 200-gigabit modules with a market opportunity should any slip occur at 400 gigabits.
Simon Stanley
Stanley, analyst-at-large at Heavy Reading and principal consultant at Earlswood Marketing, points to several notable developments that have taken place in the last year. For 400 gigabits, the first CFP8 modules are now available. There are also numerous suppliers of 100-gigabit QSFP28 modules for the CWDM4 and PSM4 multi-source agreements (MSAs). He also highlights the latest 100-gigabit SFP-DD MSA, and how coherent technology for line-side transmission continues to mature.
Routes to 400 gigabit
The first 400-gigabit modules using the CFP8 form factor support the 2km-reach 400GBASE-FR8 and the 10km 400GBASE-LR8, standards defined by the IEEE 802.3bs 400 Gigabit Ethernet Task Force. The 400-gigabit FR8 and LR8 employ eight 50Gbps wavelengths (in each direction) over a single-mode fibre.
There is significant investment going into the QSFP-DD and OSFP modules
But while the CFP8 is the first main form factor to deliver 400-gigabit interfaces, it is not the form factor of choice for the data centre operators. Rather, interest is centred on two emerging modules: the QSFP-DD that supports double the electrical signal lanes and double the signal rates of the QSFP28, and the octal small form factor pluggable (OSFP) MSA.
“There is significant investment going into the QSFP-DD and OSFP modules,” says Stanley. The OSFP is a fresh design, has a larger power envelope - of the order of 15W compared to the 12W of the QSFP-DD - and has a roadmap that supports 800-gigabit data rates. In contrast, the QSFP-DD is backwards compatible with the QSFP and that has significant advantages.
“Developers of semiconductors and modules are hedging their bets which means they have got to develop for the QSFP-DD, so that is where the bulk of the development work is going,” says Stanley. “But you can put the same electronics and optics in an OSFP.”
Given there is no clear winner, both will likely be deployed for a while. “Will QSFP-DD win out in terms of high volumes?” says Stanley. “Historically, that is what is going to happen.”
The technical challenges facing component and module makers are achieving 100 gigabits per wavelength for 400 gigabits and fitting the components in a power- and volume-constrained optical module.
The IEEE 400 Gigabit Ethernet Task Force has also defined the 400GBase-DR4 which has an optical interface comprising four single-mode fibres, each carrying 100 gigabits, with a reach up to 500m.
“The big jump for 100 gigabits was getting 25-gigabit components cost-effectively,” says Stanley. “The big challenge for 400 gigabits is getting 100-gigabit-per-wavelength components cost effectively.” This requires optical components that will work at 50 gigabaud coupled with 4-level pulse-amplitude modulation (PAM-4) that encodes two bits per symbol.
That is what gives 200-gigabit modules an opportunity. Instead of 4x50 gigabaud and PAM-4 for 400 gigabits, a 200-gigabit module can use existing 25-gigabit optics and PAM-4. “You get the benefit of 25-gigabit components and a bit of a cost overhead for PAM-4,” says Stanley. “How big that opportunity is depends on how quickly people execute on 400-gigabit modules.”
The first 200-gigabit modules using the QSFP56 form factor are starting to sample now, he says.
100-Gigabit
A key industry challenge at 100 gigabits is meeting demand, and this is likely to tax the module suppliers for the rest of this year and next. Manufacturing volumes are increasing, in part because the optical module leaders are installing more capacity and because of the entrance of many smaller vendors into the marketplace.
Traditionally, end users buying a switch populate only some of the ports due to the up-front costs. More modules are then added as traffic grows. Now, internet content providers turn on entire data centres filled with equipment that is fully populated with modules. “The hyper-scale guys have completely changed the model,” says Stanley.
The 100-gigabit module market has been coming for several years and has finally reached relatively high volumes. Stanley attributes this not just to the volumes needed by the large-scale data centre operators but also the fact that 100-gigabit modules have reached the right price point. Another indicator of the competitive price of 100-gigabit is the speed at which 40-gigabit technology is starting to be phased out.
Developments such as silicon photonics and smart assembly techniques are helping to reduce the cost of 100-gigabit modules, says Stanley, and this will be helped further with the advent of the new SFP-DD MSA.
SFP-DD
The double-density SFP (SFP-DD) MSA was announced in July. It is the next step after the SFP28, similar to the QSFP-DD being an advance on the QSFP28. And just as the 100-gigabit QSFP28 can be used in breakout mode to interface to four 25-gigabit SFP28s, the 400-gigabit QSFP-DD promises to perform a similar breakout role interfacing to SFP-DD modules.
Stanley sees the SFP-DD as a significant development. “Another way to reduce cost apart from silicon photonics and smart assembly is to cut down the number of lasers,” he says. The number of lasers used for 100 gigabits can be halved from four to two by using 28-gigabaud signalling and PAM-4. Existing examples of two-wavelength, PAM-4-style 100-gigabit designs are Inphi’s ColorZ module and Luxtera’s CWDM2.
The industry’s embrace of PAM-4 is another notable development of the last year. The debate about the merits of using a 56-gigabaud symbol rate with non-return-to-zero signalling versus PAM-4, with its need for forward error correction and extra latency, has largely disappeared, he says.
The first 400-gigabit QSFP-DD and OSFP client-side modules are expected in a year’s time with volumes starting at the end of 2018 and into 2019
Coming of age
Stanley describes the coherent technology used for line-side transmissions as coming of age. Systems vendors have put much store in owning the technology to enable differentiation but that is now changing. To the well-known merchant coherent digital signal processing (DSP) players, NTT Electronics (NEL) and Inphi, can now be added Ciena, which has made its WaveLogic Ai coherent DSP available to three optical module partners: Lumentum, NeoPhotonics and Oclaro.
CFP2-DCO module designs, where the DSP is integrated within the CFP2 module, are starting to appear. These support 100-gigabit and 200-gigabit line rates for metro and data centre interconnect applications. Meanwhile, the DSP suppliers are working on coherent chips supporting 400 gigabits.
Stanley says the CFP8 and OSFP modules are the candidates for future pluggable coherent module designs.
Meanwhile, the first 400-gigabit QSFP-DD and OSFP client-side modules are expected in a year’s time with volumes starting at the end of 2018 and into 2019.
As for 800-gigabit modules, that is unlikely before 2022.
“At OFC in March, a big data centre player said it wanted 800 Gigabit Ethernet modules by 2020, but it is always a question of when you want it and when you are going to get it,” says Stanley.
Elenion's coherent and fibre-to-the-server plans
- Elenion’s coherent chip - an integrated modulator-receiver assembly - is now generally available.
- The company has a silicon photonics design library that includes over 1,000 elements.
- Elenion is also developing an optical engine for client-side interfaces.
Elenion Technologies has given an update on its activities and strategy after announcing itself eight months ago. The silicon photonics-based specialist is backed by private equity firm, Marlin Equity Partners, which also owns systems vendor, Coriant. Elenion had already been active for two and a half years and shipping product when it emerged from its state of secrecy last December.
Larry Schwerin
Elenion has since announced it is selling its telecom product, a coherent transceiver PIC, to Coriant and now other companies.
It has also progressed its optical engine design for the data centre that will soon be a product. Elenion has been working with Ethernet switch chip maker, Cavium, and data centre player, Microsoft, as part of its datacom work.
“We have moved forward,” says Larry Schwerin, the CEO of Elenion.
Coherent PIC
Elenion’s integrated modulator-receiver assembly is being used by Coriant for two CFP2 Analogue Coherent Optics (CFP2-ACO) modules as part of its Groove G30 platform.
The first is a short-reach CFP2-ACO for point-to-point 200-gigabit links that has a reach of at least 80km. The second is a high-performance CFP2-ACO that has a reach of up to 4,000km at 100 gigabits and 650km at 200 gigabits.
Schwerin says the company is now selling the coherent PIC to “a lot of people”. In addition to the CFP2-ACO, there is the Digital Coherent Optics (DCO) pluggable market where the PIC and the coherent digital signal processor (DSP) are integrated within the module. Examples include the CFP-DCO and the smaller CFP2-DCO which is now being designed into new systems. ADVA Optical Networking is using the CFP2-DCO for its Teraflex, as is its acquisition target MRV with its 200-gigabit coherent muxponder. Infinera’s latest XTM II platforms also use the CFP2-DCO.
We have got a library that has well over 1,000 elements
Using silicon photonics benefits the cost and performance of the coherent design, says Schwerin. The cost benefit is a result of optical integration. “You can look at it as a highly simplified supply chain,” says Schwerin. Coupling the electronics close to the optics also optimises overall performance.
Elenion is also targeting the line-card market for its coherent PIC. “This is one of the reasons why I wanted to stay out of the pluggable business,” says Schwerin. “There are a lot more customers out there if you stay out of pluggables because now you are selling an [optical] engine.”
The company is also developing a coherent PIC design that will support higher data rates such as 400- and 600-gigabit per lambda. “Without being too specific because we do remain stealthy, we have plans to support these applications,” says Schwerin.
Schwerin stresses that the real strength of the company is its design library used to develop its silicon photonics circuits. Elenion emerged out of a silicon photonics design-for-service company. “We have got a library that has well over 1,000 elements,” he says. Elenion says it can address custom design requests of companies using its design library.
Datacom
Elenion announced at the OFC show held in Los Angeles in March that it is working with Jabil AOC Technologies, a subsidiary of the manufacturing firm, Jabil Circuits. Elenion chose the contract manufacturer due to its ability to address both line-card and pluggable designs, the markets for its optical engines.
The two firms have also been working at the chip level on such issues as fibre attach, coupling the laser and adding the associated electronics. “We are trying to make the interface as elegant and streamlined as possible,” says Schwerin. “We have got initiatives underway so that you don't need these complex arrangements.”
Schwerin highlights the disparity between the unit volumes needed for the telecom and datacom markets. According to forecasts from market research firms, the overall coherent market is expected to grow to between 800,000 and 1 million units a year by 2020. In contrast, the number of interfaces used inside one large-scale data centre can be up to 2 million. “To achieve rapid manufacturing and yield, you have got to simplify the process,” he says.
This is what Elenion is tackling. If 1,000 die can be made on a single silicon wafer, and knowing the interface volumes required and the yields, the total number of wafer runs can be determined. And it is the overall time taken from starting a wafer to the finished transceiver PIC output that Elenion is looking to shorten, says the CEO.
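A rough version of that sizing exercise, using the die-per-wafer and data-centre volume figures from the article and a purely hypothetical yield:

```python
die_per_wafer = 1_000            # figure quoted in the article
yield_fraction = 0.8             # hypothetical yield, for illustration only
transceivers_needed = 2_000_000  # order of one large data centre build-out (article's figure)

wafer_starts = transceivers_needed / (die_per_wafer * yield_fraction)
print(f"{wafer_starts:,.0f} wafer starts")   # 2,500 wafers
```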
We ran that demo from 7 AM to 2 AM every day of the show
At OFC, Elenion hired a hotel suite near the convention centre to demonstrate its technologies to interested companies. One demonstration used its 25Gbps optical engine directly mounted on a Cavium QLogic network interface card (NIC) connecting a server to a high-capacity Cavium Xpliant Ethernet switch chip. The demo showed how 16 NICs could be connected to the switch chip for a total capacity of 400 gigabits. “No more direct-attached cables or active optical cables, literally fibre-to-the-server,” says Schwerin. “We ran that demo from 7 AM to 2 AM every day of the show.”
Elenion’s on-board optics design was based on the emerging Consortium for On-Board Optics (COBO) standard. “The Microsoft folks, we work with them closely, so obviously what we are doing follows their intent,” says Schwerin.
The optical engine will also support 56Gbps links when used with four-level pulse-amplitude modulation (PAM-4) and the company is even eyeing 100Gbps interfaces. For now, Elenion’s datacom optical engine remains a technical platform but a product will soon follow.
The company’s datacom work is also benefiting its telecom designs. “The platform technology that we use for datacom has now found its way into the coherent programme, especially around the packaging,” says Schwerin.
* The article was changed on July 25th to mention that Elenion's PIC is being used in two Coriant CFP-ACOs.
Meeting the many needs of data centre interconnect
High capacity. Density. Power efficiency. Client-side optical interface choices. Coherent transmission. Direct detection. An open line system. These are just some of the requirements vendors must meet to compete in the data centre interconnect market.
“A key lesson learned from all our interactions over the years is that there is no one-size-fits-all solution,” says Jörg-Peter Elbers, senior vice president of advanced technology, standards and IPR at ADVA Optical Networking. “What is important is that you have a portfolio to give customers what they need.”
Jörg-Peter Elbers
Teraflex
ADVA Optical Networking detailed its Teraflex, the latest addition to its CloudConnect family of data centre interconnect products, at the OFC show held in Los Angeles in March (see video).
The platform is designed to meet the demanding needs of the large-scale data centre operators that want high-capacity, compact platforms that are also power efficient.
A key lesson learned from all our interactions over the years is that there is no one-size-fits-all solution
Teraflex is a one-rack-unit (1RU) stackable chassis that supports three hot-pluggable 1.2-terabit modules or ‘sleds’. A sled supports two line-side wavelengths, each capable of coherent transmission at up to 600 gigabits-per-second (Gbps). Each sled’s front panel supports various client-side interface module options: 12 x 100-gigabit QSFPs, 3 x 400-gigabit QSFP-DDs and lower speed 10-gigabit and 40-gigabit modules using ADVA Optical Networking’s MicroMux technology.
“Building a product optimised only for 400-gigabit would not hit the market with the right feature set,” says Elbers. “We need to give customers the possibility to address all the different scenarios in one competitive platform.”
The Teraflex achieves 600Gbps wavelengths using a 64-gigabaud symbol rate and 64-ary quadrature-amplitude modulation (64-QAM). ADVA Optical Networking is using Acacia Communications’ latest Pico dual-core coherent digital signal processor (DSP) to implement the 600-gigabit wavelengths. ADVA Optical Networking would not confirm that Acacia is its supplier, but Acacia chose to detail the Pico DSP at OFC because it wanted to end speculation about the source of the Teraflex’s coherent DSP. That said, ADVA Optical Networking points out that the Teraflex’s modular design means coherent DSPs from other suppliers can also be used.
The 1 rack unit Teraflex
The line-side optics supports a range of line speeds, from 600Gbps down to 100Gbps; the lower the speed, the longer the reach.
The resulting 3-sled 1RU Teraflex platform thus supports up to 3.6 terabits-per-second (Tbps) of duplex communications. This compares to a maximum 800Gbps per rack unit using the current densest CloudConnect 0.5RU Quadflex card.
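A rough cross-check of those figures is sketched below; the dual-polarisation assumption and the split between the raw rate and the quoted 600Gbps net rate (taken here to be FEC and framing overhead) are assumptions, not figures published by ADVA.

```python
# Back-of-the-envelope Teraflex capacity check (assumptions noted in comments).
baud_gbaud = 64            # symbol rate per wavelength
bits_per_symbol = 6        # 64-QAM carries log2(64) = 6 bits per symbol
polarisations = 2          # assumed dual-polarisation coherent transmission

raw_gbps = baud_gbaud * bits_per_symbol * polarisations
print(raw_gbps)            # 768 Gbps raw per wavelength; ~600 Gbps net is quoted

net_gbps = 600
wavelengths_per_sled = 2
sleds = 3
print(net_gbps * wavelengths_per_sled)          # 1,200 Gbps per sled, matching
                                                # 12 x 100G client-side QSFPs
print(net_gbps * wavelengths_per_sled * sleds)  # 3,600 Gbps = 3.6 Tbps per 1RU
```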
Markets
The data centre interconnect market is commonly split into metro and long haul.
The metro data centre interconnect market requires high-capacity, short-haul, point-to-point links of up to 80km. Large-scale data centre operators may have several sites spread across a city, since they must build wherever suitable locations can be found. Sites are typically no more than 80km apart to keep latency low enough that, collectively, they appear as one large logical data centre.
“You are extending the fabric inside the data centre across the data-centre boundary, which means the whole bandwidth you have on the fabric needs to be fed across the fibre link,” says Elbers. “If not, then there are bottlenecks and you are restricted in the flexibility you have.”
Large enterprises also use metro data centre interconnect. The enterprises’ businesses involve processing customer data - airline bookings, for example - and they cannot afford disruption. As a result, they may use twin data centres to ensure business continuity.
Here, too, latency is an issue especially if synchronous mirroring of data using Fibre Channel takes place between sites. The storage protocol requires acknowledgement between the end points such that the round-trip time over the fibre is critical. “The average distance of these connections is 40km, and no one wants to go beyond 80 or 100km,” says Elbers, who stresses that this is not an application for Teraflex given it is aimed at massive Ethernet transport. Customers using Fibre Channel typically need lower capacities and use more tailored solutions for the application.
The second data centre interconnect market - long haul - has different requirements. The links span long distances and the data sent between sites is limited to what is needed. Data centres are distributed both to ensure continuity of business operations and to improve quality of experience by delivering services closer to customers.
Hundreds of gigabits and even terabits are sent over the long-distance links between data centre sites, but commonly it is about a tenth of the data sent for metro data centre interconnect, says Elbers.
Direct Detection
Given the variety of customer requirements, ADVA Optical Networking is pursuing direct-detection line-side interfaces as well as coherent-based transmission.
At OFC, the system vendor detailed work with two proponents of line-side direct-detection technology - Inphi and Ranovus - as well as its coherent-based Teraflex announcement.
Working with Microsoft, Arista and Inphi, ADVA detailed a metro data centre interconnect demonstration that sent 4Tbps of data over an 80km link. The link used 40 Inphi ColorZ QSFP modules; each module uses two wavelengths, each carrying 56Gbps using PAM-4 signalling. This is where having an open line system becomes important.
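The headline figure follows from the module count. In the sketch below, the ~50Gbps net rate per wavelength after FEC overhead is an assumption; only the 56Gbps line rate is quoted above.

```python
# ColorZ demo arithmetic; the net rate per wavelength is an assumption.
modules = 40
wavelengths_per_module = 2
raw_gbps_per_wavelength = 56   # PAM-4 line rate quoted above
net_gbps_per_wavelength = 50   # assumed net rate after FEC overhead

print(modules * wavelengths_per_module * raw_gbps_per_wavelength)  # 4,480 Gbps raw
print(modules * wavelengths_per_module * net_gbps_per_wavelength)  # 4,000 Gbps - the 4Tbps demonstrated
```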
Microsoft wanted to use QSFPs directly in its switches rather than deploy additional transponders, says Elbers. But this still requires line amplification, while the data centre operators want the same straightforward provisioning they expect with coherent technology. To this end, ADVA demonstrated its SmartAmp technology, which not only sets the power levels of the wavelengths and provides optical amplification but also automatically measures and compensates for the chromatic dispersion experienced over a link.
ADVA also detailed a 400Gbps metro transponder card based on PAM-4 implemented using two 200Gbps transmitter optical subassemblies (TOSAs) and two 200Gbps receiver optical subassemblies (ROSAs) from Ranovus.
Clearly there is also space for a direct-detection solution but that space will narrow down over time
Choices
The decision to use coherent or direct detection line-side optics boils down to a link’s requirements and the cost an end user is willing to pay, says Elbers.
As coherent-based optics has matured, it has migrated from long-haul to metro and now data centre interconnect. One way to cost-reduce coherent further is to cram more bits per transmission. “Teraflex is adding chunks of 1.2Tbps per sled which is great for people with very high capacities,” says Elbers, but small enterprises, for example, may only need a 100-gigabit link.
“For scenarios where you don’t need to have the highest spectral efficiency and the highest fibre capacity, you can get more cost-effective solutions,” says Elbers, explaining the system vendor’s interest in direct detection.
“We are seeing coherent penetrating more and more markets but still cost and power consumption are issues,” says Elbers. “Clearly there is also space for a direct-detection solution but that space will narrow down over time.”
Developments in silicon photonics that promise to reduce the cost of optics through greater integration and the adoption of packaging techniques from the CMOS industry will all help. “We are not there yet; this will require a couple of technology iterations,” says Elbers.
Until then, ADVA’s goal is for direct detection to cost half that of coherent.
“We want to have two technologies for the different areas; there needs to be a business justification [for using direct detection],” he says. “Having differentiated pricing between the two - coherent and direct detection - is clearly one element here.”
Coherent optics players target the network edge for growth
Part 1: Coherent developments
The market for optical links for reaches between 10km and 120km is emerging as a fierce battleground between proponents of coherent and direct-detection technologies.
Interest in higher data rates such as 400 gigabits is pushing coherent-based optical transmission from its traditional long-distance berth to shorter-reach applications. “That tends to be where the growth for coherent has come from as it has migrated from long-haul to metro,” says Tom Williams, senior director of marketing at Acacia Communications, a coherent technology supplier.
Source: Acacia Communications, Gazettabyte
Williams points to the Optical Internetworking Forum’s (OIF) ongoing work to develop a 400-gigabit link for data centre interconnect. Dubbed 400ZR, the project is specifying an interoperable coherent interface that will support dense wavelength-division multiplexing (DWDM) links for distances of at least 80km.
Meanwhile, the IEEE standards group defining 400 Gigabit Ethernet has issued a Call-For-Interest to determine whether to form a Study Group to look at 400-Gigabit applications beyond the currently defined 10km 400GBASE-LR8 interface.
“Coherent moving to higher-volume, shorter-reach solutions shows it is not just a Cadillac product,” says Williams. Higher-volume markets will also be needed to fund coherent chip designs using advanced CMOS process nodes. “Seven nanometer [CMOS] becomes a very expensive prospect,” says Williams. “The traditional business case is not going to be there without finding higher volumes.”
Coherent moving to higher-volume, shorter-reach solutions shows it is not just a Cadillac product
Pico DSP
Acacia detailed its next-generation high-end coherent digital signal processor (DSP) at the OFC show held in Los Angeles in March.
Tom Williams
Dubbed Pico, the DSP will support transmission speeds of up to 1.2 terabits-per-second using two carriers, each carrying 600 gigabits of data implemented using 64-ary quadrature amplitude modulation (64QAM) and a 64 gigabaud symbol rate. The 16nm CMOS dual-core DSP also features an internal crossbar switch to support a range of 100-gigabit and 400-gigabit client interfaces.
ADVA Optical Networking is using the Pico for its Teraflex data centre interconnect product. The Teraflex design supports 3.6 terabits of line-side capacity in a single rack unit (1RU). Each 1RU houses three “sleds”, each supporting two wavelengths operating at up to 600 gigabits-per-second (Gbps).
But ADVA Optical Networking also detailed at OFC its work with leading direct-detection technology proponents, Inphi and Ranovus. For the data centre interconnect market, there is interest in coherent and direct-detection technologies, says ADVA.
Detailing the Pico coherent DSP before it is launched as a product is a new development for Acacia. “We knew there would be speculation about ADVA’s Teraflex technology and we preferred to be up front about it,” says Williams.
The 16nm Pico chip was also linked to an Acacia post-deadline paper at OFC detailing the company’s progress in packaging its silicon photonics chips using ball grid array (BGA) technology. Williams stresses that process issues remain to be resolved before its photonic integrated circuit (PIC) products adopt BGA packaging, an approach that will simplify manufacturing and reduce costs.
“You are no longer running the board with all the electronics through a surface mount line and then have technicians manually solder on the optics,” says Williams. Moreover, BGA packaging will lead to greater signal integrity, an important consideration as the data rates between the coherent DSP and the PIC increase.
It is an endorsement of our model but I do not think it is the same as ours. You still have to have someone providing the DSP and someone else doing the optics
Coherent competition
Ciena's recent announcement that it is sharing its WaveLogic Ai coherent DSP technology with optical module vendors Lumentum, Oclaro and NeoPhotonics is seen as a response to Acacia’s success as a merchant supplier of coherent modules and coherent DSP technologies.
Williams says Acacia’s strategy remains the same when asked about the impact of the partnership between Ciena and the optical module makers: to continue being first to market with differentiated products.
One factor that has helped Acacia compete with merchant suppliers of coherent DSPs - NEL and ClariPhy, now acquired by Inphi - is that it also designs the silicon photonics-based optics used in its modules. This allows a trade-off between the DSP and the optics to benefit the overall system design.
A challenge facing the three optical module makers working with Ciena is that each one will have to go off and optimise their design, says Williams. “It is an endorsement of our model but I do not think it is the same as ours,” he says. “You still have to have someone providing the DSP and someone else doing the optics.”
Coherent roadmap
Acacia has managed to launch a new coherent DSP product every year since 2011 (see diagram, above). In 2015 it launched its Denali DSP, the first to operate at line rates greater than 100Gbps.
Last year it announced the Meru, a low-power DSP for its CFP2-DCO module. The CFP2-DCO operates at 100Gbps using polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) and offers two 200Gbps modes: one using 16-ary quadrature amplitude modulation (PM-16QAM) and a longer-reach variant implemented using a higher baud rate and 8-ary quadrature amplitude modulation (PM-8QAM). The CFP2-DCO is already being designed into platforms.
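How the modes trade modulation order against baud rate can be sketched as follows. The baud rates are illustrative assumptions, since Acacia does not quote them here, and the gap between raw and net rates is taken to be FEC and framing overhead.

```python
import math

def raw_rate_gbps(gbaud, qam_order, polarisations=2):
    """Raw line rate before FEC/framing overhead, assuming dual polarisation."""
    return gbaud * math.log2(qam_order) * polarisations

# (mode, assumed symbol rate in gigabaud, QAM order, quoted net rate in Gbps)
modes = [
    ("PM-QPSK 100G",  32, 4,  100),
    ("PM-16QAM 200G", 32, 16, 200),
    ("PM-8QAM 200G",  43, 8,  200),  # the higher baud rate offsets the lower-order modulation
]
for name, gbaud, qam, net in modes:
    print(f"{name}: ~{raw_rate_gbps(gbaud, qam):.0f} Gbps raw for {net} Gbps net")
```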
Since 2014, Acacia has launched a low-power DSP design every even year and a high-end DSP every odd year, with the Pico being the latest example.
Acacia has not said when the Pico coherent DSP will be generally available but ADVA Optical Networking has said it expects to launch the Teraflex in early 2018.
Real-time visibility makes optical networking smarter
Systems vendors are making optical networks smarter. Their latest equipment, combining intelligent silicon and software, can measure the status of the network and enable dynamic network management.
Ciena recently announced its Liquid Spectrum networking product while Infinera has launched its Instant Network. Both vendors exploit the capabilities of their latest generation coherent DSPs to allow greater network automation and efficiency. The vendors even talk about their products being an important step towards autonomous or cognitive networks.
"Operators need to do things more efficiently," says Helen Xenos, director, portfolio solutions marketing at Ciena. "There is a lot of unpredictability in how traffic needs to be connected over the network." Moreover, demands on the network are set to increase with 5G and the billions of devices to be connected with the advent of Internet of Things.
Existing optical networks are designed to meet worst-case conditions. Margins are built into links based on the fibre used, and assumptions are made about the equipment's end-of-life performance and the traffic to be carried. Now, with Ciena's latest WaveLogic Ai coherent DSP-ASIC, not only is the performance of the network measured, but the coherent DSP can exploit the network's actual state rather than assume worst-case end-of-life conditions. "With Liquid Spectrum, you now don't need to operate the network in a static mode," says Xenos.
We are at the beginning of this new world of operating networks
Software applications
Ciena has announced the first four software applications as part of Liquid Spectrum. The first, Performance Meter, uses signal-to-noise ratio data measured by the coherent DSP-ASICs to gauge the network's state and determine how efficiently it is operating.
Bandwidth Optimiser acts on the network planner's request for bandwidth. The app recommends the optimum capacity that can be run on the link, based on the trade-off between baud rate and reach, and also where to place the wavelengths within the C-band spectrum. Moreover, if service demands change, the network engineer can decide to reduce the built-in margins. "I may decide I don't need to reserve a 3dB margin right now and drop it down to 1dB," says Xenos. Bandwidth Optimiser can then be rerun to see how the new service demand can be met.
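The idea can be illustrated with a simplified sketch. This is not Ciena's Bandwidth Optimiser algorithm, and the SNR thresholds and capacities below are invented for illustration: the usable signal-to-noise ratio, after holding back the operator's chosen margin, selects the highest capacity the link can carry.

```python
# Simplified margin-to-capacity sketch; thresholds and capacities are
# illustrative assumptions, not Ciena's Bandwidth Optimiser algorithm.

# Approximate required SNR (dB) for each per-wavelength capacity (illustrative).
format_requirements = [
    (400, 22.0),
    (300, 18.5),
    (200, 15.0),
    (100, 11.0),
]

def recommend_capacity(measured_snr_db, reserved_margin_db):
    """Return the highest capacity whose SNR requirement fits under the usable SNR."""
    usable_snr = measured_snr_db - reserved_margin_db
    for capacity_gbps, required_snr in format_requirements:
        if usable_snr >= required_snr:
            return capacity_gbps
    return 0

print(recommend_capacity(20.0, 3.0))  # 200 Gbps with a 3dB margin held back
print(recommend_capacity(20.0, 1.0))  # 300 Gbps once the margin is cut to 1dB
```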
This approach contrasts with the existing way end points are connected, where all the wavelengths run at the same capacity, the user decides the wavelengths up front, and no changes are made once they are deployed. "It is much simpler, it [the app] takes away complexity from the user," says Xenos.
The Liquid Restoration app ensuring alternative capacity in response to the loss of a 300-gigabit route due to a fault. Source: Ciena
The two remaining apps launched are Liquid Restoration and Wave-Line Synchroniser. Liquid Restoration looks at all the available options if a particular path fails. "It will borrow against margin to get as much capacity as possible," says Xenos. Wave-Line Synchroniser is a tool that helps with settings so that Ciena's optics can work with another vendor's line system, or another vendor's optics can work with Ciena's line system.
Liquid Spectrum will be offered as a bundle as part of Ciena's latest BluePlanet Manage, Control and Plan tool that combines service and network management, resource control and planning.
Xenos says Liquid Spectrum represents the last significant piece needed to meet the industry's goal of an agile optical infrastructure. Sophisticated reconfigurable optical add-drop multiplexers (ROADMs) and flexible coherent DSPs have existed for a while, but how such flexible technology could be employed has been limited by the lack of knowledge of the real-time state of the network. Moreover, with these latest Liquid Spectrum software tools, much of the manual link engineering, and the complexity of working out what capacity can be supported and where in the spectrum it should be placed, is removed, says Xenos.
"We are at the beginning of this new world of operating networks," says Xenos. "Going forward, there will be an increasingly level of sophistication that will be built into the software."
Ciena demonstrated Liquid Spectrum at the OFC show held in Los Angeles last month.
Part 2: Infinera's Instant Network.
An insider's view on the merits of optical integration

Valery Tolstikhin is president and CEO of Intengent, the Ottawa-based consultancy and custom design service provider, and an industry veteran of photonic integration. In 2005 he founded OneChip Photonics, a fabless maker of indium phosphide photonic integrated circuits for optical access.
One important lesson he learned at OneChip was how the cost benefit of a photonic integrated circuit (PIC) can be eroded by a cheap optical sub-assembly made from discrete off-the-shelf components. When OneChip started, the selling price for GPON optics was around $100 a unit but this quickly came down to $6. "We needed sales in volumes and they never came close to meeting $6," says Tolstikhin.
OneChip changed strategy, spotting early the emerging opportunity for 100-gigabit optics for the data centre. But despite being among the first to demonstrate fully integrated 100-gigabit transmitter and receiver chips - at OFC 2013 - the company eventually folded.
When OneChip started, the selling price for GPON optics was around $100 a unit but this quickly came down to $6
Intengent can be seen as the photonic equivalent of the electronic ASIC design houses once common in the chip industry, acting as the intermediary between an equipment vendor commissioning a chip design and the foundry making the chip.
Intengent creates designs for system integrators, which it then takes to a commercial foundry for manufacturing. The company makes stand-alone devices, device arrays and multi-function PICs. Intengent uses the regrowth-free taper-assisted vertical integration (TAVI) indium phosphide process of the California-based foundry Global Communication Semiconductors (GCS). "We have also partnered with a prominent PIC design house, VLC Photonics, for PIC layout and verification testing,” says Tolstikhin. Together, Intengent, VLC and GCS offer a one-stop shop for the development and production of PICs.
III-V and silicon photonics
Tolstikhin is a big fan of indium phosphide and related III-V semiconductor materials, pointing out that they can implement all the optical functions required for telecom and datacom applications. He is a firm believer that III-V will continue to be the material system of choice for various applications and argues that silicon photonics is not so much a competitor to III-V but a complement.
"Silicon photonics needs indium-phosphide-based sources but also benefits from III-V modulators and detectors, which have better performance than their silicon photonics counterparts," he says.
He admits that indium phosphide photonics cannot compete with the PIC scalability that silicon photonics offers. But indium phosphide stands to benefit as silicon photonics matures, given silicon photonics' continued need for III-V devices. Intengent already benefits from this co-existence, offering specialised indium phosphide photonic chip development for silicon photonics as well.
"Silicon photonics cannot compete with indium phosphide photonics in relatively simple yet highest volume optical components for telecom and datacom transceivers," says Tolstikhin. Partly this is due to silicon photonics' performance inferiority but mainly for economical reasons.
Silicon photonics will have its chance, but only where it beats competing technologies on fundamentals, not just cost
There are also few applications that need monolithic photonic integration. Tolstikhin highlights coherent optics as one example, but that is a market with limited volumes. Meanwhile, the most promising emerging market - transceivers for the data centre, whether today's 100-gigabit (4x25G NRZ) PSM4 and CWDM4 designs or future 400-gigabit (4x100G PAM4) transceivers - will likely be implemented using optical sub-assembly and hybrid integration technologies.
Tolstikhin may be a proponent of indium phosphide but he does not dismiss silicon photonics' prospects: "It will have its chance, but only where it beats competing technologies on fundamentals, not just cost."
One such area is large-scale optoelectronic systems, such as data processors or switch fabrics for large-scale data centres. These are designs that cannot be assembled using discretes and go beyond the scalability of indium phosphide PICs. "This is not silicon photonics-based optical components instead of indium phosphide ones but a totally different system and possibly network solutions," he says. This is also where co-integration of CMOS electronics with silicon photonics makes a difference and can be justified economically.
He highlights Rockley Photonics and Ayar Labs as start-ups doing just this: using silicon photonics for large-scale electro-photonic integration targeting system and network applications. "There may also be more such companies in the making," says Tolstikhin. "And should they succeed, the entire setup of optics for the data centre and the role of silicon photonics could change quite dramatically."
