Heavy Reading’s take on optical module trends
The industry knows what the next-generation 400-gigabit client-side interfaces will look like, but uncertainty remains over which form factors will be used. So says Simon Stanley, who has just authored a report entitled From 25/100G to 400/600G: A Competitive Analysis of Optical Modules and Components.
Implementing the desired 400-gigabit module designs is also technically challenging, creating a market opportunity for 200-gigabit modules should 400-gigabit schedules slip.
Stanley, analyst-at-large at Heavy Reading and principal consultant at Earlswood Marketing, points to several notable developments that have taken place in the last year. For 400 gigabits, the first CFP8 modules are now available. There are also numerous suppliers of 100-gigabit QSFP28 modules for the CWDM4 and PSM4 multi-source agreements (MSAs). He also highlights the latest 100-gigabit SFP-DD MSA, and how coherent technology for line-side transmission continues to mature.
Routes to 400 gigabit
The first 400-gigabit modules using the CFP8 form factor support the 2km-reach 400GBASE-FR8 and the 10km 400GBASE-LR8, standards defined by the IEEE 802.3bs 400 Gigabit Ethernet Task Force. The FR8 and LR8 interfaces employ eight 50Gbps wavelengths (in each direction) over a single-mode fibre.
There is significant investment going into the QSFP-DD and OSFP modules
But while the CFP8 is the first mainstream form factor to deliver 400-gigabit interfaces, it is not the form factor of choice for the data centre operators. Rather, interest is centred on two emerging modules: the QSFP-DD, which doubles both the electrical lane count and the per-lane signalling rate of the QSFP28, and the octal small form-factor pluggable (OSFP) MSA.
“There is significant investment going into the QSFP-DD and OSFP modules,” says Stanley. The OSFP is a fresh design, has a larger power envelope - of the order of 15W compared to the 12W of the QSFP-DD - and has a roadmap that supports 800-gigabit data rates. In contrast, the QSFP-DD is backwards compatible with the QSFP, which is a significant advantage.
“Developers of semiconductors and modules are hedging their bets which means they have got to develop for the QSFP-DD, so that is where the bulk of the development work is going,” says Stanley. “But you can put the same electronics and optics in an OSFP.”
Given there is no clear winner, both will likely be deployed for a while. “Will QSFP-DD win out in terms of high-volumes?” says Stanley. “Historically, that says that is what is going to happen.”
The technical challenges facing component and module makers are achieving 100 gigabits per wavelength for 400-gigabit interfaces and fitting the resulting components into a power- and volume-constrained optical module.
The IEEE 400 Gigabit Ethernet Task Force has also defined the 400GBASE-DR4, which has an optical interface comprising four single-mode fibres, each carrying 100 gigabits, with a reach of up to 500m.
“The big jump for 100 gigabits was getting 25-gigabit components cost-effectively,” says Stanley. “The big challenge for 400 gigabits is getting 100-gigabit-per-wavelength components cost effectively.” This requires optical components that will work at 50 gigabaud coupled with 4-level pulse-amplitude modulation (PAM-4) that encodes two bits per symbol.
That is what gives 200-gigabit modules an opportunity. Instead of 4x50 gigabaud and PAM-4 for 400 gigabits, a 200-gigabit module can use existing 25-gigabit optics and PAM-4. “You get the benefit of 25-gigabit components and a bit of a cost overhead for PAM-4,” says Stanley. “How big that opportunity is depends on how quickly people execute on 400-gigabit modules.”
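To make the arithmetic concrete, the short Python sketch below computes aggregate module rates from lane count, symbol rate and modulation. The configurations listed are illustrative assumptions based on the rates discussed above, not any vendor's specifications, and raw rates are shown before FEC and protocol overhead.

```python
# Minimal sketch: aggregate module rate = lanes x symbol rate x bits/symbol.
# Figures are illustrative, based on the rates discussed in the article.

BITS_PER_SYMBOL = {"NRZ": 1, "PAM-4": 2}  # PAM-4 encodes two bits per symbol

def module_rate_gbps(lanes: int, gigabaud: float, modulation: str) -> float:
    """Raw aggregate rate in Gbps (before FEC and protocol overhead)."""
    return lanes * gigabaud * BITS_PER_SYMBOL[modulation]

# Assumed configurations matching the article's arithmetic:
configs = {
    "400G (4 x 50Gbaud PAM-4, e.g. DR4)": (4, 50, "PAM-4"),
    "400G (8 x 25Gbaud PAM-4, e.g. FR8/LR8)": (8, 25, "PAM-4"),
    "200G (4 x 25Gbaud PAM-4, QSFP56)": (4, 25, "PAM-4"),
    "100G (4 x 25Gbaud NRZ, QSFP28)": (4, 25, "NRZ"),
}

for name, (lanes, baud, mod) in configs.items():
    print(f"{name}: {module_rate_gbps(lanes, baud, mod):.0f} Gbps")
```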
The first 200-gigabit modules using the QSFP56 form factor are starting to sample now, he says.
100-Gigabit
A key industry challenge at 100 gigabits is meeting demand, and this is likely to tax module suppliers for the rest of this year and next. Manufacturing volumes are increasing, in part because the optical module leaders are installing more capacity and because many smaller vendors are entering the marketplace.
Traditionally, end users buying a switch populate only some of its ports because of the up-front cost, adding more modules as traffic grows. Now, internet content providers turn on entire data centres filled with equipment that is fully populated with modules. “The hyper-scale guys have completely changed the model,” says Stanley.
The 100-gigabit module market has been coming for several years and has finally reached relatively high volumes. Stanley attributes this not just to the volumes needed by the large-scale data centre operators but also the fact that 100-gigabit modules have reached the right price point. Another indicator of the competitive price of 100-gigabit is the speed at which 40-gigabit technology is starting to be phased out.
Developments such as silicon photonics and smart assembly techniques are helping to reduce the cost of 100-gigabit modules, says Stanley, and this will be helped further with the advent of the new SFP-DD MSA.
SFP-DD
The double-density SFP (SFP-DD) MSA was announced in July. It is the next step after the SFP28, similar to the QSFP-DD being an advance on the QSFP28. And just as the 100-gigabit QSFP28 can be used in breakout mode to interface to four 25-gigabit SFP28s, the 400-gigabit QSFP-DD promises to perform a similar breakout role interfacing to SFP-DD modules.
Stanley sees the SFP-DD as a significant development. “Another way to reduce cost apart from silicon photonics and smart assembly is to cut down the number of lasers,” he says. The number of lasers used for 100 gigabits can be halved from four to two using 28-gigabaud signalling and PAM-4. Existing examples of two-wavelength PAM-4 100-gigabit designs are Inphi’s ColorZ module and Luxtera’s CWDM2.
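A back-of-envelope comparison, sketched below under the same illustrative assumptions, shows why halving the laser count appeals: two lasers at 28 gigabaud with PAM-4 carry slightly more raw capacity than four at roughly 25 gigabaud with NRZ, leaving room for FEC overhead.

```python
# Sketch: halving the laser count for 100 Gig using PAM-4.
# Raw rates shown; real designs carry FEC and coding overhead
# (e.g. 2 x 28 Gbaud PAM-4 gives 112 Gbps raw for ~100G of payload).

def lambda_rate_gbps(gigabaud: float, bits_per_symbol: int) -> float:
    return gigabaud * bits_per_symbol

four_lasers_nrz = 4 * lambda_rate_gbps(25.78, 1)  # classic 4 x 25G NRZ design
two_lasers_pam4 = 2 * lambda_rate_gbps(28, 2)     # SFP-DD-style two-laser design

print(f"4 lasers, 25G NRZ:   {four_lasers_nrz:.0f} Gbps raw")
print(f"2 lasers, 28G PAM-4: {two_lasers_pam4:.0f} Gbps raw")
```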
The industry’s embrace of PAM-4 is another notable development of the last year. The debate about the merits of using a 56-gigabaud symbol rate with non-return-to-zero signalling versus PAM-4, with its need for forward-error correction and the extra latency that implies, has largely disappeared, he says.
The first 400-gigabit QSFP-DD and OSFP client-side modules are expected in a year’s time with volumes starting at the end of 2018 and into 2019
Coming of age
Stanley describes the coherent technology used for line-side transmissions as coming of age. Systems vendors have set much store by owning the technology to enable differentiation, but that is now changing. To the well-known merchant coherent digital signal processing (DSP) players, NTT Electronics (NEL) and Inphi, can now be added Ciena, which has made its WaveLogic Ai coherent DSP available to three optical module partners: Lumentum, NeoPhotonics and Oclaro.
CFP2-DCO module designs, where the DSP is integrated within the CFP2 module, are starting to appear. These support 100-gigabit and 200-gigabit line rates for metro and data centre interconnect applications. Meanwhile, the DSP suppliers are working on coherent chips supporting 400 gigabits.
Stanley says the CFP8 and OSFP modules are the candidates for future pluggable coherent module designs.
Meanwhile, the first 400-gigabit QSFP-DD and OSFP client-side modules are expected in a year’s time with volumes starting at the end of 2018 and into 2019.
As for 800-gigabit modules, that is unlikely before 2022.
“At OFC in March, a big data centre player said it wanted 800 Gigabit Ethernet modules by 2020, but it is always a question of when you want it and when you are going to get it,” says Stanley.
Infinera unveils its next-gen packet-optical platforms
Source: Infinera
Infinera has unveiled its latest metro products that support up to 200-gigabit wavelengths using CFP2-DCO pluggable modules.
The XTM II platform family is designed to support growing metro traffic, low-latency services and the trend to move sophisticated equipment towards the network edge. Placing computing, storage and even switching near the network edge contrasts with the classical approach of backhauling traffic, sometimes deep within the network.
“If you backhaul everything, you really do not know if it belongs in that part of the network,” says Geoff Bennett, director, solutions and technology at Infinera. Backhauling inherently magnifies traffic whereas operators want greater efficiencies in dealing with bandwidth growth, he says: “This is where the more cloud-like architectures towards the network edge come in.”
But locating equipment at the network edge means it must fit within existing premises or installed prefabricated huts, where space and power are constrained.
“If you are asking service providers to put more complex equipment there, then you need low power utilisation,” says Bennett. “This has been a key piece of feedback from customers we have been asking as to how they want our existing products to evolve in the metro-access.”
Having a distributed switch fabric is a long-term advantage for Infinera
Infinera says its latest XTM II products are eight times denser in terms of transmission capacity while setting a new power-consumption low of 20W-27W per 100 gigabits, depending on the operating temperature (25°C to 55°C). Infinera claims its nearest metro equipment competitor achieves 47W per 100 gigabits.
Sterling Perrin, principal analyst, optical networking and transport at Heavy Reading, says Infinera has achieved the power-efficient design by using a distributed switch architecture rather than a central switch fabric, and by adopting the CFP2-DCO pluggable module with its low-power coherent DSP.
“If you have a centralised fabric and you put it into an edge application then for some cases it will be a perfect fit but for many applications, it will be overkill in terms of capacity and hence power,” says Perrin. “Infinera is able to do it in a modular fashion in terms of just how much capacity and power is put in an application.”
Having a distributed switch fabric is a long-term advantage for Infinera for these applications, says Perrin, whereas competitor vendors will also benefit from the CFP2-DCO for their next designs.
And even if a competitor uses a distributed design, they will not leapfrog Infinera, says Perrin, although he expects competitors’ designs to come down considerably in power with the adoption of the CFP2-DCO.
Infinera has chosen not to use its photonic integrated circuit (PIC) technology for its latest metro platform given the large installed base of XTM chassis that already use pluggable modules. “It would make sense that customers would give feedback that they want a product that has industry-leading performance but which is also backwards compatible,” says Bennett.
Infinera has said it will evaluate whether its PIC technology will be applied to each new generation of the product line. “So when you get to the XTM III they will have another round looking at it,” says Perrin. “If I were placing bets on the XTM III, I would say they are going to continue down this route [of using pluggables].”
Perrin expects line-side pluggable technology to continue to progress with companies such as Acacia Communications and the collaboration between Ciena with its WaveLogic DSP technology and several optical module makers.
“At what point is the PIC going to be better than what is available with the pluggables?” says Perrin. “For this application, I don’t see it.”
XTM II family
Infinera has already been shipping upgraded XTM chassis for the last 18 months in advance of the launch of its latest metro cards. The upgraded chassis - the one rack unit (1RU) TM-102/II, the 3RU TM-301/II and the 11RU TM-3000/II - all feature enhanced power management and cooling.
Infinera is now unveiling three cards that enhance the capacity and features of the upgraded chassis. The new cards also work with the older-generation XTM chassis (without the ‘II’ suffix) as long as a vacant card slot is available and the chassis’ total power supply is not exceeded. This is important given that over 30,000 XTM chassis have been deployed.
The Infinera cards announced are a 400-gigabit flexponder, a 200-gigabit muxponder, and the EMXP440 packet-optical transport switch. The distributed switch architecture is implemented using the EMXP440 card.
Operators will also be offered Infinera’s Instant Bandwidth feature as part of the XTM II, whereby they pay only for the line-side capacity they use: either 100-gigabit or 200-gigabit wavelengths using the CFP2-DCO. The Instant Bandwidth offered is not the superchannel format available for Infinera’s other platforms that use its PIC, but it does give operators the option of deploying a higher-speed wavelength when needed and paying later.
400G flexponder
The flexponder can operate as a transponder or as a muxponder. In transponder mode, the client signal and the line side run at the same data rate. In contrast, a muxponder aggregates lower data-rate client signals for transport on a single wavelength.
Infinera’s 400-gigabit flexponder card uses four 100 Gigabit Ethernet QSFP28 client interfaces and two 200-gigabit CFP2-DCO pluggable line-side modules. Each CFP2-DCO can transport data at 100 gigabits using polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) modulation or at 200 gigabits using polarisation-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM).
The 400-gigabit card can thus operate as a transponder when the CFP2-DCO transports at 100 gigabits and as a muxponder when it carries two 100-gigabit signals over a 200-gigabit lambda. Given the card has two CFP2 line-side modules, it can even operate as a transponder and muxponder simultaneously.
The flexponder card also supports OTN block encryption using the AES-256 symmetric key protocol.
The flexponder is an upgrade of Infinera’s existing 100-gigabit muxponder card. The eightfold increase in density comes from quadrupling the card’s capacity - two 200-gigabit ports instead of a single 100-gigabit module - while halving the width of the line card.
Using the flexponder card, the TM-102/II chassis has a transport capacity of 400 gigabits, up to 1.6 terabits with the TM-301/II and a total of 4 terabits using the TM-3000/II platform.
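The capacity figures follow from simple per-card arithmetic, sketched below in Python. The line rates per modulation come from the article; the chassis slot counts are inferred from the quoted totals and should be treated as assumptions.

```python
# Sketch of the flexponder's capacity arithmetic. Line rates per the
# article: 100G with PM-QPSK, 200G with PM-16QAM, two CFP2-DCOs per card.

LINE_RATE_GBPS = {"PM-QPSK": 100, "PM-16QAM": 200}
CFP2_DCO_PORTS_PER_CARD = 2

def card_capacity_gbps(modulations):
    """Total line-side capacity for one card, one modulation per CFP2-DCO."""
    assert len(modulations) == CFP2_DCO_PORTS_PER_CARD
    return sum(LINE_RATE_GBPS[m] for m in modulations)

print(card_capacity_gbps(["PM-16QAM", "PM-16QAM"]))  # 400G: both ports muxponding
print(card_capacity_gbps(["PM-QPSK", "PM-16QAM"]))   # 300G: mixed transponder/muxponder

# The chassis totals quoted in the article imply these card counts (an
# inference from the 400G-per-card figure, not Infinera's stated layout):
for chassis, cards in {"TM-102/II": 1, "TM-301/II": 4, "TM-3000/II": 10}.items():
    print(f"{chassis}: {cards * 400} Gbps")
```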
We can dial back the FEC if you need low latency and don't need the reach
200G muxponder
The double-width 200G card includes all the electronics needed for multi-service multiplexing. The line-side optics is a single CFP2-DCO module whereas the client side can accommodate two QSFP28s and 12 SFP+ 10-gigabit modules. The card can multiplex a mix of services including 10GbE, 40GbE, and 100GbE; 8-, 16- and 32-gigabit Fibre Channel; OTN and legacy SONET/SDH traffic.
Other features include support for OTN block encryption using the AES-256 symmetric key protocol.
The card’s forward error correction performance can also be traded to reduce the traffic latency. “We can dial back the FEC if you need low latency and don't need the reach,” says Bennett.
OTN add-drop multiplexing can also be implemented by pairing two of the multiplexer cards.
EMXP440 switch and flexible open line system
The EMXP440 packet-optical transport switch card supports layer-two functionality such as Carrier Ethernet 2.0 and MPLS-TP. “Mobile backhaul and residential broadband, these are the cards the operators tend to use,” says Bennett.
The two-slot EMXP440 card has two CFP2-DCOs and 12 SFP+ client-side interfaces. The reason why the line side and client side interface capacity differ (400 gigabits versus 120 gigabits) is that the card can be used to build simple packet rings (see diagram, top).
The line-side interfaces can be used for ‘East’ and ‘West’ traffic while the SFP+ modules are used to add and drop signals. The EMXP440 card also has an MPO port such that up to 12 further SFP+ ports can be added using Infinera’s PTIO-10G card, part of its PT Fabric products.
A flexible-grid open line system is also available for the XTM II. The XTM II’s 100-gigabit and 200-gigabit wavelengths fit within a 50GHz-wide fixed-grid channel, but Infinera is already anticipating future higher baud rates that will require channels wider than 50GHz. A flexible grid also improves the use of the fibre’s overall capacity. In turn, Raman amplification will be needed to extend the reach when using future higher-order modulation schemes such as 32- and 64-QAM.
Infinera says the 400-gigabit flexponder card will be available in the next quarter while the 200-gigabit muxponder and the EMXP440 cards will ship in the final quarter of 2017.
Talking markets: Oclaro on 100 gigabits and beyond
Oclaro’s chief commercial officer, Adam Carter, discusses the 100-gigabit market, optical module trends, silicon photonics, and why this is a good time to be an optical component maker.
Oclaro has started its 2017 fiscal year as it ended fiscal year 2016: with another record quarter. The company reported revenues of $136 million for the quarter ending in September, representing 8 percent sequential growth and its fifth consecutive quarter of 7 percent or greater revenue growth.
A large part of Oclaro’s growth was due to strong demand for 100 gigabits across the company’s optical module and component portfolio.
The company has been supplying 100-gigabit client-side optics using the CFP, CFP2 and CFP4 pluggable form factors for a while. “What we saw in June was the first real production ramp of our CFP2-ACO [coherent] module,” says Adam Carter, chief commercial officer at Oclaro. “We have transferred all that manufacturing over to Asia now.”
The CFP2-ACO is being used predominantly for data centre interconnect applications. But Oclaro has also seen first orders from system vendors that are supplying US communications service provider Verizon for its metro buildout.
The company is also seeing strong demand for components from China. “The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward,” says Carter. LightCounting Market Research, in its latest optical market forecast report, highlights the importance of China’s 100-gigabit market. China’s massive deployments of FTTx and wireless fronthaul optics fuelled growth from 2011 to 2015, says LightCounting, but this year it is demand for 100-gigabit dense wavelength-division multiplexing and 100 Gigabit Ethernet optics that is increasing China’s share of the global market.
The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward
QSFP28 modules
Oclaro is also providing 100-gigabit QSFP28 pluggables for the data centre, in particular, the 100-gigabit PSM4 parallel single-mode module and the 100-gigabit CWDM4 based on wavelength-division multiplexing technology.
2016 was expected to be the year these 100-gigabit optical modules for the data centre would take off. “It has not contributed a huge amount to date but it will start kicking in now,” says Carter. “We always signalled that it would pick up around June.”
One reason why it has taken time for the market for the 100-gigabit QSFP28 modules to take off is the investment needed to ramp manufacturing capacity to meet the demand. “The sheer volume of these modules that will be needed for one of these new big data centres is vast,” says Carter. “Everyone uses similar [manufacturing] equipment and goes to the same suppliers, so bringing in extra capacity has long lead times as well.”
Once a large-scale data centre is fully equipped and powered, it generates instant profit for an Internet content provider. “This is very rapid adoption; the instant monetisation of capital expenditure,” says Carter. “This is a very different scenario from where we were five to ten years ago with the telecom service providers."
Data centre servers and their increasing interface speed to leaf switches are what will drive module rates beyond 100 gigabits, says Carter. Ten Gigabit Ethernet links will be followed by 25 and 50 Gigabit Ethernet. “The lifecycle you have seen at the lower speeds [1 Gigabit and 10 Gigabit] is definitely being shrunk,” says Carter.
Such new speeds will spur 400-gigabit links between the data centre's leaf and spine switches, and between the spine switches. “Two hundred Gigabit Ethernet may be an intermediate step but I’m not sure if that is going to be a big volume or a niche for first movers,” says Carter.
400 gigabit CFP8
Oclaro showed a prototype 400-gigabit CFP8 module at the recent ECOC show in September. The demonstrator is an 8-by-50-gigabit design using 25-gigabaud optics and PAM-4 modulation. The module implements the 10km 400GBASE-LR8 standard using eight 1310nm distributed feedback lasers, each with an integrated electro-absorption modulator. The design also uses two 4-wide photo-detector arrays.
“We are using the four lasers we use for the CWDM4 100-gigabit design and we can show we have the other four [wavelength] lasers as well,” says Carter.
Carter says IP core routers will be the main application for the 400GBASE-LR8 module. The company is not yet saying when the 400-gigabit CFP8 module will be generally available.
We can definitely see the CFP2-ACO could support 400 gigabits and above
Coherent
Oclaro is already working with equipment customers to increase the line-side interface density on the front panel of their equipment.
The Optical Internetworking Forum (OIF) has already started work on the CFP8-ACO that will be able to support up to four wavelengths, each supporting up to 400 gigabits. But Carter says Oclaro is working with customers to see how the line-side capacity of the CFP2-ACO can be advanced. “We can definitely see the CFP2-ACO could support 400 gigabits and above,” says Carter. “We are working with customers as to what that looks like and what the schedule will be.”
And there are two other pluggable form factors smaller than the CFP2: the CFP4 and the QSFP28. “Will you get 400 gigabits in a QSFP28? Time will tell, although there is still more work to be done around the technology building blocks,” says Carter.
Vendors are seeking the highest aggregate front-panel density, he says: “The higher aggregate bandwidth we are hearing about is 2 terabits but there is a need to potentially go to 3.2 and 4.8 terabits.”
Silicon photonics
Oclaro says it continues to watch silicon photonics closely and to question whether it is a technology that should be brought in-house. But issues remain. “This industry has always used different technologies and everything still needs light to work, which means the basic III-V [compound semiconductor] lasers,” says Carter.
“Producing silicon photonics chips versus producing packaged products that meet various industry standards and specifications are still pretty challenging to do in high volume,” says Carter. And integration can be done using either silicon photonics or indium phosphide. “My feeling is that the technologies will co-exist,” says Carter.
QSFP28 MicroMux expands 10 & 40 Gig faceplate capacity
- ADVA Optical Networking's MicroMux aggregates lower rate 10 and 40 gigabit client signals in a pluggable QSFP28 module
- ADVA is also claiming an industry first in implementing the Open Optical Line System concept that is backed by Microsoft
The need for terabits of capacity to link Internet content providers’ mega-scale data centres has given rise to a new class of optical transport platform, known as data centre interconnect.
Source: ADVA Optical Networking
Such platforms are designed to be power efficient, compact and support a variety of client-side signal rates spanning 10, 40 and 100 gigabit. But this poses a challenge for design engineers as the front panel of such platforms can only fit so many lower-rate client-side signals. This can lead to the aggregate data fed to the platform falling short of its full line-side transport capability.
ADVA Optical Networking has tackled the problem by developing the MicroMux, a multiplexer housed within a QSFP28 module. The MicroMux plugs into the front panel of the CloudConnect, ADVA’s data centre interconnect platform, and funnels either ten 10-gigabit ports or two 40-gigabit ports into one of the front panel’s 100-gigabit ports.
"The MicroMux allows you to support legacy client rates without impacting the panel density of the product," says Jim Theodoras, vice president of global business development at ADVA Optical Networking.
Using the MicroMux, lower-speed client interfaces can be added to a higher-speed product without stranding line-side bandwidth. An alternative approach to avoid wasting capacity is to install a lower-speed platform, says Theodoras, but then you can't scale.
ADVA Optical Networking offers four MicroMux pluggables for its CloudConnect data centre interconnect platform: short-reach and long-reach 10-by-10 gigabit QSFP28s, and short-reach and intermediate-reach 2-by-40 gigabit QSFP28 modules.
The MicroMux features an MPO connector. For the 10-gigabit products, the MPO connector supports 20 fibres, while for the 40-gigabit products, it supports four fibres. At the other end of the QSFP28, which plugs into the platform, sits a CAUI-4 4x25-gigabit electrical interface (see diagram above).
“The key thing is the CAUI-4 interface; this is what makes it all work," says Theodoras.
Inside the MicroMux, signals are converted between the optical and electrical domains while a gearbox IC translates between 10- or 40-gigabit signals and the CAUI-4 format.
Theodoras stresses that the 10-gigabit inputs are not the old 100 Gigabit Ethernet 10x10 MSA but independent 10 Gigabit Ethernet streams. "They can come from different routers, different ports and different timing domains," he says. "It is no different than if you had 10, 10 Gigabit Ethernet ports on the front face plate."
Using the pluggables, a 5-terabit CloudConnect configuration can support up to 520 10 Gigabit Ethernet ports, according to ADVA Optical Networking.
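As a rough sketch of the rate matching involved, the snippet below shows how each MicroMux variant fills the 100-gigabit CAUI-4 electrical interface, and how the 520-port figure can be reproduced; the 52-port breakdown is an inference from the quoted numbers, not ADVA's stated configuration.

```python
# Sketch: how MicroMux client configurations fill a 100G CAUI-4 port.
# CAUI-4 is 4 electrical lanes x 25 Gbps = 100 Gbps.

CAUI4_GBPS = 4 * 25

micromux_variants = {
    "10 x 10GbE": 10 * 10,
    "2 x 40GbE": 2 * 40,   # 80G of client traffic into a 100G port
}

for variant, client_gbps in micromux_variants.items():
    fill = client_gbps / CAUI4_GBPS
    print(f"{variant}: {client_gbps} Gbps into CAUI-4 ({fill:.0%} filled)")

# The 520-port figure implies 52 faceplate QSFP28 ports, each broken out
# into ten 10GbE streams (an inference from the quoted numbers):
print(52 * 10)  # 520
```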
The first products will be shipped in the third quarter to preferred customers that are helping in their development, with general availability at the year-end.
ADVA Optical Networking unveiled the MicroMux at OFC 2016, held in Anaheim, California in March. ADVA also used the show to detail its Open Optical Line System demonstration with switch vendor, Arista Networks.
Two years after Microsoft first talked about the [Open Optical Line System] concept at OFC, here we are today fully supporting it
Open Optical Line System
The Open Optical Line System is a concept being promoted by the Internet content providers to afford them greater control of their optical networking requirements.
Data centre players typically update their servers and top-of-rack switches every three years yet the optical transport functions such as the amplifiers, multiplexers and ROADMs have an upgrade cycle closer to 15 years.
“When the transponding function is stuck in with something that is replaced every 15 years and they want to replace it every three years, there is a mismatch,” says Theodoras.
Data centre interconnect line cards can be replaced more frequently with newer cards while retaining the chassis. And the CloudConnect product is also designed such that its optical line shelf can take external wavelengths from other products by supporting the Open Optical Line System. This adds flexibility and is done in a way that matches the work practices of the data centre players.
“The key part of the Open Optical Line System is the software,” says Theodoras. “The software lets that optical line shelf be its own separate node; an individual network element.”
The data centre operator can then manage the standalone CloudConnect Open Optical Line System product. Such a product can take coloured wavelength inputs and even provide feedback to the source platform so that each wavelength is tuned to the correct channel. “It’s an orchestration and a management level thing,” says Theodoras.
Arista recently added a coherent line card to its 7500 spine switch family.
The card supports six CFP2-ACOs that have a reach of up to 2,000km, sufficient for most data centre interconnect applications, says Theodoras. The 7500 also supports the layer-two MACsec security protocol. However, it does not support flexible modulation formats. The CloudConnect does, supporting 100-, 150- and 200-gigabit formats. CloudConnect also has a 3,000km reach.
Source: ADVA Optical Networking
In the Open Optical Line System demonstration, ADVA Optical Networking squeezed the Arista 100-gigabit wavelength into a narrower 37.5GHz channel, sandwiched between two 100 gigabit wavelengths from legacy equipment and two 200 gigabit (PM-16QAM) wavelengths from the CloudConnect Quadplex card. All five wavelengths were sent over a 2,000km link.
Implementing the Open Optical Line System expands a data centre manager’s options. A coherent card can be added to the Arista 7500 and wavelengths sent directly using the CFP2-ACOs, or wavelengths can be sent over more demanding links, or ones that require greater spectral efficiency, by using the CloudConnect. The 7500 chassis could also be used solely for switching, with its traffic routed to the CloudConnect platform for off-site transmission.
Spectral efficiency is important for the large-scale data centre players. “The data centre interconnect guys are fibre-poor; they typically only have a single fibre pair going around the country and that is their network,” says Theodoras.
The joint demo shows that the Open Optical Line System concept works, he says: “Two years after Microsoft first talked about the concept at OFC, here we are today fully supporting it.”
MultiPhy raises $17M to develop 100G serial interfaces
MultiPhy is developing chips to support serial 100-gigabit-per-second transmission using 25-gigabit optical components. The design will enable short reach links within the data centre and up to 80km point-to-point links for data centre interconnect.
Source: MultiPhy
“It is not the same chip [for the two applications] but the same technology core,” says Avi Shabtai, the CEO of MultiPhy. The funding will be used to bring products to market as well as expand the company’s marketing arm.
There is a huge benefit in moving to a single-wavelength technology; you throw out pretty much three-quarters of the optics
100 gigabit serial
The IEEE has specified 100-gigabit lanes as part of its ongoing 400 Gigabit Ethernet standardisation work. “It is the first time the IEEE has accepted 100 gigabit on a single wavelength as a baseline for a standard,” says Shabtai.
The IEEE work has defined 4-by-100 gigabits with a reach of 500 meters using four-level pulse-amplitude modulation (PAM-4), which encodes 2 bits per symbol. This means that optics and electronics operating at 50 gigabaud can be used. However, MultiPhy has developed digital signal processing technology that allows the optics to be overdriven, such that 25-gigabit optics can deliver the 50 gigabaud required.
“There is a huge benefit in moving to a single-wavelength technology,” says Shabtai. ”You throw out pretty much three-quarters of the optics.”
The chip MultiPhy is developing, dubbed FlexPhy, supports the CAUI-4 (4-by-28 gigabit) interface, a 4:1 multiplexer and 1:4 demultiplexer, PAM-4 operating at 56 gigabaud and the digital signal processing.
The optics - a single transmitter optical sub-assembly (TOSA) and a single receiver optical sub-assembly (ROSA) - and the FlexPhy chip will fit within a QSFP28 module. “Taking into account that you have one chip, one laser and one photo-diode, these are pretty much the components you already have in an SFP module,” says Shabtai. “Moving from a QSFP form factor to an SFP is not that far.”
MultiPhy says new-generation switches will support 128 SFP28 ports, each at 100 gigabit, equating to 12.8 terabits of switching capacity.
Using digital signal processing also benefits silicon photonics. “Integration is much denser using CMOS devices with silicon photonics,” says Shabtai. DSP also improves the performance of silicon photonics-based designs such as the issues of linearity and sensitivity. “A lot of these things can be solved using signal processing,” he says.
FlexPhy will be available for customers this year but MultiPhy would not say whether it already has working samples.
MultiPhy raised $7.2 million venture capital funding in 2010.
ECOC 2015 Review - Final Part
Part 2 - Client-side component and module developments
- The first SWDM Alliance module shown
- More companies detail CWDM4, CLR4 and PSM4 mid-reach modules
- 400 Gig datacom technologies showcased
- The CFP8 MSA for 400 Gigabit Ethernet unveiled
- Lumentum and Kaiam use silicon photonics for mid-reach modules
- Finisar demonstrates a 10 km 25 Gig SFP28, and low-latency 25 Gig and 100 Gig SR4 interfaces

The CFP MSA modules, including the newest CFP8. Source: Finisar
Shortwave wavelength-division multiplexing
Finisar demonstrated the first 100 gigabit shortwave wavelength-division multiplexing (SWDM) module at ECOC. Dubbed the SWDM4, the 100 gigabit interface supports WDM over multi-mode fibre. Finisar showed a 40 gigabit version at OFC earlier this year. “This product [the SWDM4] provides the next step in that upgrade path,” says Rafik Ward, vice president of marketing at Finisar.
The SWDM Alliance was formed in September to exploit the large amount of multi-mode fibre used by enterprises. The goal of the SWDM Alliance is to extend the use of multi-mode fibre by enabling link speeds beyond 10 gigabit.
“We believe if you can do something with multi-mode fibre, you can achieve cost points that are not achievable with single-mode fibre,” says Ward. “SWDM4 allows us to have not only low-cost optics on either end, but allows customers to reuse their installed fibre.”
The SWDM4 interface uses four 25 gigabit VCSELs operating at wavelengths sufficiently apart that cooling is not required. “By having this [wavelength] gap, you can keep to relatively low-cost components like for multiplexing and de-multiplexing,” says Ward.
The 100 Gig SWDM4 achieves 70 meters over OM3 fibre and 100 meters over OM4 fibre. SWDM can scale beyond 100 gigabit, says Ward, but the challenge with multi-mode fibre remains the tradeoff between speed and distance.
Finisar is already shipping SWDM4 alpha samples to customers.
The SWDM Alliance founding members include CommScope, Corning, Dell, Finisar, H3C, Huawei, Juniper Networks, Lumentum, and OFS.
CWDM4, CLR4 and PSM4
Oclaro detailed a 100 gigabit mid-reach QSFP28 module that supports both the CWDM4 multi-source agreement (MSA) and the CLR4 MSA. “We can support either depending on whether, on the host card, there is forward-error correction or not,” says Robert Blum, director of strategic marketing at Oclaro.
Both MSAs have a 2 km reach and use four 25 gigabit channels. However, the CWDM4 uses a more relaxed optical specification as its overall performance is complemented with forward-error correction (FEC) on the host card. The CLR4, in contrast, does not use FEC and therefore requires a more demanding optical specification.
“The requirements are significantly harder to meet for the CLR4 specification,” says Blum. By avoiding FEC, the CLR4 module benefits low-latency applications such as financial trading.
Oclaro showed its dual-MSA module achieving a 10 km reach at ECOC even though the two specifications call for 2 km only. “We have very large margins for the module compared to the specification,” says Blum, adding that customers now need to only qualify one module to meet their CWDM4 or CLR4 line card needs.
Other optical module vendors that announced support for CWDM4 in a QSFP28 module include Source Photonics, whose module is also CLR4-compliant. Kaiam is making CWDM4 and CLR4 modules using silicon photonics as part of its designs.
Lumentum also detailed its CWDM4 and the PSM4, a QSFP28 that uses a single-mode ribbon cable to deliver 100 Gig over 500 meters. Lumentum says its CWDM4 and PSM4 QSFP28 products will be available this quarter. “These 100 gigabit modules are what the hyper-scale data centre operators are clamouring for,” says Brandon Collings, CTO of Lumentum.
The question is who can ramp and support the 100 Gig deployments that are going to happen next year
Lumentum says it is using silicon photonics technology for one of its designs but has not said which. “We have both technologies [indium phosphide and silicon photonics], we use both technologies, and silicon photonics is involved with one of these [modules],” says Collings.
There is demand for both the PSM4 and CWDM4, says Lumentum. Which type a particular data centre operator chooses depends on such factors as what fibre they have or plan to deploy, whether they favour single-mode fibre pairs or ribbon cable, and if their reach requirements are beyond 500 meters.
Quite a few module companies have already sampled [100 Gig] products, says Oclaro’s Blum: “The question is who can ramp and support the 100 Gig deployments that are going to happen next year.”
Technologies for 400 gigabit
Several companies demonstrated technologies that will be needed for 400 gigabit client-side interfaces.
NeoPhotonics and chip company Inphi partnered to demonstrate the use of PAM-4 modulation to achieve 100 gigabits. “To do PAM-4, you need not only the optics but a special PAM-4 DSP,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.
The 400 Gigabit Ethernet standard under development by the IEEE 802.3bs task force supports several configurations using PAM-4, including a four-channel parallel single-mode fibre configuration, each channel at 100 gigabits, with a 500m reach, and two 8-by-50 gigabit configurations for 2 km and 10 km links.
The company showcased its 4x28 Gig transmitter optical sub-assembly (TOSA) that uses a photonic integrated circuit comprising electro-absorption modulated lasers (EMLs). Paired with Inphi’s PAM-4 chip, two channels were combined to achieve 100 gigabits. NeoPhotonics says its EMLs are also capable of supporting 56 gigabaud rates which, coupled with PAM-4, would achieve 100 gigabits on a single channel.
Lipscomb points out that not only are there several interfaces under development but also various optical form factors. “For 100 Gig and 400 Gig client-side data centre links, there are several competing MSA groups,” says Lipscomb. “The final winning approach has not yet emerged and NeoPhotonics wants its solution to be generic enough so that it supports this winning approach once it emerges.”
Meanwhile, Teraxion announced its silicon photonics-based modulator technology for 100 gigabit (4 x 25 Gig) and 400 gigabit datacom interfaces. “People we talk to are interested in WDM applications for short-reach links,” says Martin Guy, Teraxion’s CTO and strategic marketing lead.
Teraxion says a challenge using silicon photonics for WDM is supporting a broad band of wavelengths. “People use surface gratings to couple light into the silicon photonics,” says Guy. “But surface gratings have a strong wavelength-dependency over the C-band.”
Teraxion has developed an edge coupler instead, which sits on the same plane as the propagating light. This compares with a surface grating, where light is coupled vertically to the plane.
You hear a lot about the cost of silicon photonics but one of the key advantages is the density you can achieve on the chip itself. Having many modulators in a very small footprint has value for the platform; you can make smaller and smaller transceivers.
“We can couple light efficiently with large-tolerance alignment and our approach can be used for WDM applications,” says Guy. Teraxion’s modulator array can be used for CWDM4 and CLR4 MSAs as well as optical engines for future 400 gigabit datacom systems.
“You hear a lot about the cost of silicon photonics but one of the key advantages is the density you can achieve on the chip itself,” says Guy. “Having many modulators in a very small footprint has value for the platform; you can make smaller and smaller transceivers.”
CFP8 MSA
Finisar demonstrated a 400 gigabit link that included a mock-up of the CFP8 form factor, the latest CFP MSA member being developed to support emerging standards such as 400 Gigabit Ethernet.
The 400 gigabit demonstration implemented the 400GE-SR16 multi-mode standard. A Xilinx FPGA was used to implement an Ethernet MAC and generate sixteen 25 Gig channels that were fed to four CFP4 modules, each implementing a 100GBASE-SR4 interface but collectively acting as the equivalent of the 400GE-SR16. The 16 fibre outputs were then fed to the CFP8 prototype, which performed an optical loop-back function, sending the signals back to the CFP4s and the FPGA.

The CFP8 will be able to support 6.4 terabits of switching on a 1U card when used in a two-row-by-eight-module configuration. The CFP8 has a similar size and power-consumption profile to the CFP2. “There is still a lot of work putting an MSA together for 400 gigabit,” says Ward, adding that there is still no timeframe as to when the CFP8 MSA will be completed.
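The faceplate arithmetic behind the 6.4-terabit claim is simple enough to restate; the snippet below uses only figures from the article.

```python
# Sketch: CFP8 faceplate density on a 1U card (figures from the article).
rows, modules_per_row, gbps_per_module = 2, 8, 400
total_tbps = rows * modules_per_row * gbps_per_module / 1000
print(total_tbps)  # 6.4 Tbps of switching capacity
```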
25 Gig SFP28
Finisar also announced at ECOC a 1310nm SFP28 supporting 25 gigabit Ethernet over 10 km, complementing the 850nm SFP28 short reach module it announced at OFC 2015.
Ethernet vendors are designing their next-generation switches to use the SFP28, says Finisar, while the IEEE is completing the standardisation of 25 Gigabit Ethernet over copper and multi-mode fibre.
“There hasn’t yet been a motion to standardise a long-wave interface,” says Ward. “With the demo at ECOC, we have come out with a 25 Gig long-wave interface in advance of a standard.”
Ward points out that several years ago the large-scale data centres' only higher-speed option beyond 10 gigabit was 40 gigabit. Now enterprises will also have a 25 gigabit option.
He adds that 25 gigabit delivers attractive cost-performance compared with 40 Gig. Forty-gigabit short-reach and long-reach interfaces are based on four channels at 10 gigabits, whereas 25 gigabit uses a single laser and photo-detector that fit in an SFP28, compared to a QSFP for 40 Gig.
“25 Gigabit Ethernet is a very interesting interface for the next set of customers after the Web 2.0 players that are looking to migrate beyond 10 gigabit,” said Ward.
Low-latency 25 Gig SR and 100 Gig Ethernet SR4 modules
Also announced by Finisar are 25 Gigabit Ethernet SFP28 SR and 100GE QSFP28 SR4 transceivers that can operate without accompanying FEC on the host board. The transceivers achieve a 30 meter reach on OM3 fibre and 40 meters using OM4 fibre.
“Using FEC simplifies the optical link,” says Ward. “It can take the cost out of the optics by having FEC which gives you additional gain.” But some customers have requested the parts for use without FEC to reduce link latency, similar to those that choose the CLR4 MSA for mid-reach 100 Gig.
Finisar has not redesigned its modules but is offering versions that use its higher-performing VCSELs and photo-detectors. “Think of it as a simple screen,” says Ward.
ECOC '15 Reflections: Part 2
Martin Zirngibl, head of network enabling components and technologies at Bell Labs.
Silicon photonics seems to be gaining traction, but traditional component suppliers are still betting on indium phosphide.
There are many new start-ups in silicon photonics, most seem to be going after the 100 gigabit QSFP28 market. However, silicon photonics still needs a ubiquitous high-volume application for the foundry model to be sustainable.
There is a battle between 4x25 Gig CWDM and 100 Gig PAM-4 56 gigabaud, with most people believing that 400 Gig PAM-4 or discrete multi-tone with 100 Gig per lambda will win.
Will coherent make it into black and white applications - up to 80 km - or is there a role for a low-cost wavelength-division multiplexing (WDM) system with direct detection?
One highlight at ECOC was the 3D integrated 100 Gig silicon photonics by Kaiam.
In coherent, the analogue coherent optics (ACO) model seems to be winning over the digital coherent one, and people are now talking about 400 Gig single carrier for metro and data centre interconnect applications.
As for what I’ll track in the coming year: will coherent make it into black and white applications - up to 80 km - or is there a role for a low-cost wavelength-division multiplexing (WDM) system with direct detection?
Yukiharu Fuse, director, marketing department at Fujitsu Optical Components
There were no real surprises as such at ECOC this year. The products and demonstrations on show were within expectations but perhaps were more realistic than last year’s show.
Most of the optical component suppliers demonstrated products to meet data centres’ increasing demand for optical interfaces.
The CFP2 Analogue Coherent Optics (CFP2-ACO) form factor’s ability to support multiple modulation formats configurable by the user makes it a popular choice for data centre interconnect applications. In particular, by supporting 16-QAM, the CFP2-ACO can double the link capacity using the same optics.
Lithium niobate and indium-phosphide modulators will continue to be needed for coherent optical transmission for years to come
Recent developments in indium phosphide designs have helped realise the compact packaging needed to fit within the CFP2 form factor.
I saw the level of integration and the optical engine configurations within the CFP2-ACO differ from vendor to vendor. I’m interested to see which approach ends up being the most economical once volume production starts.
Oclaro introduced a high-bandwidth lithium niobate modulator for single wavelength 400 gigabit optical transmission. Lithium niobate continues to play an important role in enabling future higher baud rate applications with its excellent bandwidth performance. My belief is that both lithium niobate and indium-phosphide modulators will continue to be needed for coherent optical transmission for years to come.
Chris Cole, senior director, transceiver engineering at Finisar
The ECOC technical sessions and exhibition used to be dominated by telecom and long-haul transport technology. There has been a shift, with a much greater percentage now focused on datacom and data centre technology.
What I learned at the show is that cost pressures are increasing
There were no major surprises at the show. It was interesting to see about half of the exhibition floor occupied by Chinese optics suppliers funded by Chinese government entities, such as municipalities jump-starting industrial development.
What I learned at the show is that cost pressures are increasing.
New datacom optics technologies including optical packaging, thermal management, indium phosphide and silicon integration are all on the agenda to track in the coming year.
Ciena's stackable platform for data centre interconnect
Ciena is the latest system vendor to unveil its optical transport platform for the burgeoning data centre interconnect market. Data centre operators require scalable platforms that can carry significant amounts of traffic to link sites over metro and long-haul distances, and are power efficient.
The Waveserver stackable interconnect system delivers 800 Gig of traffic throughput in a one-rack-unit (1RU) form factor. The throughput comprises 400 gigabits of client-side interfaces and 400 gigabits of coherent dense WDM transport.
For the Waveserver’s client-side interfaces, a mix of 10, 40 and 100 Gigabit interfaces can be used, with the platform supporting the latest 100 Gig QSFP28 optical module form factor. One prominent theme at the recent OFC 2015 show was the number of interface types now supported in a QSFP28.
On the line side, Ciena uses two of its latest WaveLogic 3 Extreme coherent DSP-ASICs. Each DSP-ASIC supports polarisation-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM), equating to 200 gigabits of transmission capacity.
The Extreme was chosen rather than Ciena’s more power-efficient WaveLogic 3 Nano DSP-ASIC to maximise capacity over a fibre. “The amount of fibre the internet content providers have tends to be limited so getting high capacity is key,” says Michael Adams, vice president of product and technical marketing at Ciena. The Nano DSP-ASIC does not support 16-QAM.
A rack can accommodate up to 44 Waveserver stackable units to deliver 88 wavelengths, each 50GHz wide, or 17.6 terabits-per-second (Tbps) of capacity. And up to 96 wavelengths, or 19.2Tbps, are supported on a fibre pair.
"We are going down the path of opening the platform to automation"
“We could add flexible grid and probably get closer to 24 or 25 Tbps,” says Adams. Flexible grid refers to moving off the C-band's set ITU grid by using digital signal processing at the transmitter. By shaping the signal before it is sent, each carrier can be squeezed from a 50GHz channel into a 37.5GHz wide one, boosting overall capacity carried over the fibre.
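Adams's estimate can be approximated with simple channel arithmetic, as sketched below. The ~4.8THz C-band width is an assumption chosen to be consistent with the 96-channel, 50GHz figure quoted above; real flexible-grid engineering would land somewhat lower, as Adams's 24-25Tbps estimate suggests.

```python
# Sketch: fibre capacity as a function of channel spacing.
# C-band width assumed ~4.8 THz (consistent with 96 x 50GHz channels).

C_BAND_GHZ = 4800
WAVELENGTH_GBPS = 200  # PM-16QAM per WaveLogic 3 Extreme carrier

for spacing_ghz in (50, 37.5):
    channels = int(C_BAND_GHZ // spacing_ghz)
    print(f"{spacing_ghz}GHz grid: {channels} channels, "
          f"{channels * WAVELENGTH_GBPS / 1000:.1f} Tbps")
# 50GHz   -> 96 channels, 19.2 Tbps (matches the figure above)
# 37.5GHz -> 128 channels, 25.6 Tbps (upper bound on the ~24-25 Tbps estimate)
```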
Adams says that it is not straightforward to compare the power consumption of different vendors’ data centre interconnect platforms but Ciena believes its platform is competitive. He estimates that the Waveserver consumes between 1W and 1.5W per Gigabit line side.
Ciena has stated that between five and 10 percent of its revenues come from web-scale customers, which account for a third of its total 100 Gig line-side port shipments.
Web-scale companies include Internet content providers, providers of data centre co-location and interconnect, and enterprises. Web-scale companies also drive the traditional telecom optical networking market as they also use large amounts of the telcos' network capacity to link their sites.
The global data centre interconnect market grew 16 percent in 2014 to reach US$2.5 billion, according to market research firm Ovum. Almost half of the spending was by communications service providers, while the Internet content providers’ spending grew 64 percent last year.
Open software
Ciena also announced an open application development environment, dubbed emulation cloud, that allows applications to be developed without needing Waveserver hardware.
One obvious application is moving server virtual machines between data centres. But more novel applications can be developed by the data centre operators and third-party developers. Ciena cites what it calls an augmented reality application that allows a mobile phone to be pointed at a Waveserver to inform the user of the machine’s status: which ports are active and what type of bandwidth each port is consuming. “It can also show power and specific optical parameters of each line port,” says Adams. “Right there, you have all the data you need to know.”
The Waveserver platform also comes with software that allows data centre managers to engineer, plan, provision and operate links via a browser. More sophisticated users can benefit from Ciena’s OPn architecture and a set of open application programming interfaces (APIs).
“We are going down the path of opening the platform to automation,” says Adams. “We can foresee for the most sophisticated users, plugging into APIs and going to some very specific optical parameters and playing with them.”
Waveserver Status
Ciena is demonstrating its Waveserver platform to over 100 customers, as part of an annual event at the company’s Ottawa site.
“We are well engaged with a variety of Internet content providers,” says Adams. “We will be in trials with many of those folks this summer.” General availability is expected at the end of the third quarter.
In May, Ciena announced it had entered a definitive agreement to acquire Cyan. Cyan announced its own N-Series data centre interconnect platform earlier this year. Ciena says it is premature to comment on the future of the N-Series platform.
OFC 2015 digest: Part 2

- CFP4- and QSFP28-based 100GBASE-LR4 announced
- First mid-reach optics in the QSFP28
- SFP extended to 28 Gigabit
- 400 Gig precursors using DMT and PAM-4 modulations
- VCSEL roadmap promises higher speeds and greater reach
MultiPhy readies 100 Gigabit serial direct-detection chip
MultiPhy is developing a chip that will support serial 100 Gigabit-per-second (Gbps) transmission using 25 Gig optical components. The device will enable short reach links within the data centre and up to 80km point-to-point links for data centre interconnect. The fabless chip company expects to have first samples of the chip, dubbed FlexPhy, by year-end.
Figure 1: A block diagram of the 100 Gig serial FlexPhy. The transmitter output is an electrical signal that is fed to the optics. Equally, the input to the receive path is an electrical signal generated by the receiver optics. Source: Gazettabyte
The FlexPhy IC comprises multiplexing and demultiplexing functions as well as a receiver digital signal processor (DSP). The IC's transmitter path has a CAUI-4 (4x28 Gig) interface, a 4:1 multiplexer and four-level pulse amplitude modulation (PAM-4) that encodes two bits per symbol. The resulting chip output is a 50 Gbaud signal used to drive a laser to produce the 100 Gbps output stream.
"The input/output doesn't toggle at 100 Gig, it toggles at 50 Gig," says Neal Neslusan, vice president of sales and marketing at MultiPhy. "But 50 Gig PAM-4 is actually 100 Gigabit-per-second."
The IC's receiver portion will use digital signal processing to recover and decode the PAM-4 signals, and demultiplex the data into four 28 Gbps electrical streams. The FlexPhy IC will fit within a QSFP28 pluggable module.
As with MultiPhy's first-generation chipset, the optics are overdriven. With the MP1101Q 4x28 Gig multiplexer and MP1100Q four-channel receiver, 10 Gig optics are used to achieve four 28 Gig lanes, while with the FlexPhy, a 25 Gig laser is used. "Using a 25 GigaHertz laser and double-driving it to 50 GigaHertz induces some noise but the receiver DSP cleans it up," says Neslusan.
The use of PAM-4 incurs an optical signal-to-noise ratio (OSNR) penalty compared to non-return-to-zero (NRZ) signalling used for MultiPhy's first-generation direct-detection chipset. But PAM-4 has a greater spectral density; the 100 Gbps signal fits within a 50 GHz channel, resulting in 80 wavelengths in the C-band. This equates to 8 terabits of capacity to connect data centres up to 80 km apart.
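The same symbol-rate arithmetic underpins the capacity claim; the sketch below restates it, with the 4THz of usable C-band an assumption chosen to be consistent with the 80-channel figure.

```python
# Sketch: serial 100G PAM-4 and the 80-channel DCI capacity figure.

GBAUD = 50                  # 25G optics overdriven to 50 Gbaud
BITS_PER_SYMBOL = 2         # PAM-4 encodes two bits per symbol
serial_rate_gbps = GBAUD * BITS_PER_SYMBOL
print(serial_rate_gbps)     # 100 Gbps on a single wavelength

C_BAND_GHZ = 4000           # usable C-band assumed (80 x 50GHz channels)
channels = C_BAND_GHZ // 50
print(channels, channels * serial_rate_gbps / 1000)  # 80 channels, 8.0 Tbps
```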
Within the data centre, MultiPhy’s physical layer IC will enable 100 Gbps serial interfaces. The design could also enable 400 Gig links over distances of 500 m, 2 km and 10 km, by using four FlexPhys, four transmitter optical sub-assemblies (TOSAs) and four receiver optical sub-assemblies (ROSAs).
Meanwhile, MultiPhy's existing direct-detection chipset has been adopted by multiple customers. These include two optical module makers – Oplink and a Chinese vendor – and a major Chinese telecom system vendor that is using the chipset for a product coming to market now.
