IOWN’s all-photonic network vision

What is the best way to send large amounts of data between locations? It’s a question made all the more relevant with the advent of AI, and one that has been preoccupying the Innovative Optical and Wireless Network (IOWN) Global Forum, which now has over 160 member companies and organisations.
Optical networking has long established itself as the high-speed communications technology of choice for linking data centres or large enterprises’ sites.
The IOWN Global Forum aims to take optical networking a step further by enabling an all-optical network, to reduce the energy consumption and latency of communication links. Latency refers to the time it takes transmitted data to start arriving at the receiver site.
“The IOWN all-photonic network is the infrastructure for future enterprise networking,” says Masahisa Kawashima, IOWN technology director, IOWN development office, NTT Technology working group chair, IOWN Global Forum.

“The main significance of IOWN is setting a roadmap,” says Jimmy Yu, vice president and optical transport senior analyst at Dell’Oro. “It helps component and systems companies understand what technology and architectures that companies, such as NTT, are interested in for a next-generation optical and wireless network. It also fosters industry collaboration.”
IOWN architecture
The IOWN Global Forum’s all-photonic network (APN) aims to enable optical connectivity from edge devices to data centres at speeds exceeding 100 gigabits-per-second (Gbps).
The Forum envisions energy and latency performance improvements by driving optics to the endpoints. Linking endpoints will require a staged adoption of photonic technology as it continues to mature.
Professor Ioannis Tomkos, a member of the Optical Communications Systems & Networks (OCSN) Research Lab/Group at the Electrical and Computer Engineering Department at the University of Patras, says the aim of the IOWN Global Forum is to gradually replace electronics-based transmission, switching, and even signal processing functions with photonics. The OCSN Group recently joined the IOWN Global Forum.
The Forum has defined a disaggregated design for the all-photonic network. Subsequent stages will include using optics to replace copper interconnects within platforms, interfacing photonics to chips, and, ultimately, photonic communications within a chip.
“If information-carrying light signals can remain in the optical domain and avoid opto-electronic and electro-optical conversions, that will ensure enhanced bandwidth and much reduced power consumption per bit,” says Tomkos.
The IOWN Global Forum was created in 2019 by the Japanese service provider NTT, together with Sony and Intel. Since then, it has grown to over 160 members, including the cloud players Google, Microsoft, and Oracle; the telecom service providers British Telecom, Orange, KDDI, and Telefónica; and companies such as Nvidia.
The Forum has developed an IOWN framework that includes the all-photonic network, digital twin computing (DTC), and a ‘cognitive foundation’ (CF). Digital twin computing enables the creation of virtual representations of physical systems, while the cognitive foundation is the architecture’s brain, allocating networking and computing resources as required.
“We expect future societies will be more data-driven and there will be many applications that collect huge real-time sensor data and analyse them,” says Kawashima. “The IOWN all-photonic network and disaggregated computing platforms will enable us to deploy digital twin application systems in an energy-efficient way.”
Optical infrastructure
The IOWN Global Forum’s all-photonic network uses open standards, such as the OpenROADM (Open Reconfigurable Optical Add-Drop Multiplexing) Multi-Source Agreement (MSA), the OIF’s and the OpenZR+ MSA’s pluggable coherent optics specifications, and the OpenXR Optics Forum standards. The IOWN Global Forum also adheres to the ‘white box’ platform designs defined by the Telecom Infra Project (TIP).
“There is a lot of similarity with the approach and objectives of TIP,” says an unnamed industry veteran who has observed the IOWN Global Forum’s organisation since its start but whose current employer is not a member. “Although the scope is not the same, I cannot help but wonder why we don’t combine the two as an industry.”
Kawashima says that optical hardware, such as ROADMs, pluggable optics, and transponder boards, has traditionally been located at one site and operated by one organisation. Now, the Forum has disaggregated the design so that the ROADM and transponders can be in different locations: the transponder can be deployed at a customer’s premises, remote from the ROADM’s location.
“We allow the operator of the switch node to be different from the operator of the aggregator node, and we allow the operator of the transponder node to be different from the operator of the ROADM nodes,” says Kawashima.
The disaggregation goal is to encourage the growth of a multi-operator ecosystem, unlike how optical transport is currently implemented. It is also the first stage in making the infrastructure nodes all-optical. Separating the transponder and the ROADMs promises to reduce capital expenditure, as the transponder nodes can be upgraded separately from the ROADMs, which can be left unchanged for longer.
Kawashima says that reducing infrastructure capital expenditure promises reduced connectivity prices: “Bandwidth costs will be cheaper.”
Service providers can manage the remote transponders at the customers’ sites, creating a new business model for them.

Early use cases
IOWN has identified several use cases as it develops the technology.
One is a data centre interconnect for financial service institutions that conduct high-frequency trading across geographically dispersed sites.
Another is remote video production for the broadcast industry. Here, the broadcast industry would use an all-photonic network to connect the site where the video feed originates to the cloud, where the video production work is undertaken.
A third use case is for AI infrastructure. An enterprise would use the all-photonic network to link its AI product development engineers to GPU resources hosted in the cloud.
If the network is fast enough and has sufficiently low latency, the GPUs can source data from the site, store it in their memory, process it, and return the answer. The aim is for enterprises not to need to upload and store their data in the cloud. “So that customers do not have to be worried about data leakages,” says Kawashima.
The Forum also publishes proof-of-concept documents. “Once the proof-of-concept is completed, that means that our solution is technically proven and is ready for delivery,” says Kawashima.
Goals
At OFC 2025, held earlier this year, NTT, NTTCom, Orange, and Telefónica showcased a one terabit-per-second optical wavelength circuit using the IOWN all-photonic network.

The demonstration featured a digital twin of the optical network, enabling automated configuration of high-speed optical wavelength circuits. The trial showcased the remote control of data centre communication devices using an optical supervisory channel.
The Forum wants to prove the technical feasibility of the infrastructure architecture by year-end. It also looks to approve the remote-GPU and financial-services use cases.
“What we are trying to achieve this year is that the all-photonic network is commercially operable, as are several use cases in the enterprise networking domain,” says Kawashima.
IOWN’s ultimate success will hinge on the all-photonic network’s adoption and economic viability. For Kawashima, the key is that the system architecture delivers significant optical performance advantages.
Tomkos cautions that this transformation will not happen overnight, nor without the support of the global industry and academic community. But the promise is growth in the global network’s throughput and reduced latency, achieved in a cost- and power-efficient way.
OIF adds a short-reach design to its 1600ZR/ZR+ portfolio

The OIF (Optical Internetworking Forum) has broadened its 1600-gigabit coherent optics specification work to include a third project, complementing the 1600ZR and 1600ZR+ initiatives.
The latest project will add a short-reach ‘coherent-lite’ digital design to deliver a reach of 2km to 20km, and possibly 40km, with a latency below 300ns.
The low latency will suit workloads and computing resources distributed across data centres.
“The coherent-lite is more than just the LR (long reach) work that we have done [at 400 gigabits and 800 gigabits],” says Karl Gass, optical vice chair of the OIF’s physical link layer (PLL) working group, adding that the 1600-gigabit coherent-lite will be a distinct digital design.
Doubling the data rate from 800 gigabits to 1600 gigabits is the latest battle line between direct-detect and coherent pluggable optics for reaches of 2km to 40km.
At 800 gigabits, the OIF members debated whether the same coherent digital signal processor should implement both 800ZR and 800-gigabit LR. Certain OIF members argued that unless a distinct coherent DSP is developed, a coherent optics design will never be able to compete with direct-detect LR optics.
“We have that same acknowledgement that unless it’s a specific design for [1600 gigabit] coherent-lite, then it’s not going to compete with the direct detect,” says Gass.
OIF’s 1600-gigabit specification work
The OIF’s 1600-gigabit roadmap has evolved rapidly in the last year.
In September 2023, the OIF announced the 1600ZR project to develop 1.6-terabit coherent optics with a reach of 80km to 120km. In January 2024, the OIF announced it would undertake a 1600ZR+ specification, an enhanced version of 1600ZR with a reach of 1,000km.
The OIF taking the lead in ZR+ specification work is a significant shift for the industry, promising industry-wide interoperability, in contrast to the previous 400ZR+ and 800ZR+ developments.
Now, the OIF has started a third 1600-gigabit coherent-lite design.
1600ZR development status
Work remains to complete the 1600ZR Implementation Agreement, the OIF’s specification document. However, member companies have agreed upon the main elements, such as the framing schemes for the client side, the digital signal processing, and the use of oFEC as the forward error correction scheme.
oFEC is a robust forward error correction scheme but adds to the link’s latency. The OIF members therefore want the ‘coherent-lite’ version to use a less powerful forward error correction scheme to achieve lower latency.
The 1600ZR symbol rate chosen is around 235 gigabaud (GBd), while the modulation scheme is 16-ary quadrature amplitude modulation (16-QAM). The specified reach will be 80km to 120km. (See table below.)
The members will likely agree on the digital issues this quarter before starting the optical specification work. Before completing the Implementation Agreement, members must also spell out interoperability testing.
1600ZR+ development status
The 1600ZR+ work still has some open questions.
One is whether members choose a single carrier, two sub-carriers, or four sub-carriers to achieve the 1,000km reach. The issue is equalisation-enhanced phase noise (EEPN), which imposes tighter constraints on the receiver’s laser. Using sub-carriers, the laser constraints can be relaxed, enabling more suppliers. The single-carrier camp argues that sub-carriers complicate the design of the coherent digital signal processor (DSP).
The workgroup members must also choose the probabilistic constellation shaping scheme to use. Probabilistic constellation shaping gain can extend the reach, but shaping also raises the symbol rate and, hence, the bandwidth specification of the coherent modem’s components.
The symbol rate of the 1600ZR+ is targeted in the range of 247GBd to 263GBd.

Power consumption
The 1600ZR design’s power consumption was hoped to be 26W, but it is now expected to be 30W or more. The 1600ZR+ is expected to be even higher.
The coherent pluggable’s power consumption will depend on the CMOS process that the coherent DSP developers choose for their 1600ZR and 1600ZR+ ASIC designs. Will they choose the state-of-the-art 3nm CMOS process or wait for 2nm or even 1.8nm to become available to gain a design advantage?
Timescales
The target remains to complete the 1600ZR Implementation Agreement document quickly. Gass says the 1600ZR and 1600ZR+ Implementation Agreements could be completed this year, paving the way for the first 1600ZR/ZR+ products in 2026.
“We are being pushed by customers, which isn’t a bad thing,” says Gass.
The coherent-lite design will be completed later given that it has only just started. At present, the OIF will specify the digital design and not the associated optics, but this may change, says Gass.
The OIF's coherent optics work gets a ZR+ rating
The OIF has started work on a 1600ZR+ standard to enable the sending of 1.6 terabits of data across hundreds of kilometres of optical fibre. The initiative follows the OIF’s announcement last September that it had kicked off 1600ZR; ZR refers to an extended-reach standard, sending 1.6 terabits over an 80-120km point-to-point link.
1600ZR follows the OIF’s previous work standardising the 400-gigabit 400ZR and the 800-gigabit 800ZR coherent pluggable optics.
The decision to address a ‘ZR+’ standard is a first for the OIF. Until now, only the OpenZR+ Multi-Source Agreement (MSA) and the OpenROADM MSA developed interoperable ZR+ optics.
The OIF’s members’ decision to back the 1600ZR+ coherent modem work was straightforward, says Karl Gass, optical vice chair of the OIF’s physical link layer (PLL) working group. Several companies wanted it, and there was sufficient backing. “One hyperscaler in particular said: ‘We really need that solution’,” says Gass.

OIF, OpenZR+, and OpenROADM
Developing a 1600ZR+ standard will interest telecom operators who, like with 400ZR and the advent of 800ZR, can take advantage of large volumes of coherent pluggables driven by hyperscaler demand. However, Gass says no telecom operator is participating in the OIF 1600ZR+ work.
“It appears that they are happy with whatever the result [of the ZR+ work] will be,” says Gass. Telecom operators are active in the OpenROADM MSA.
Now that the OIF has joined OpenZR+ and the OpenROADM MSA in developing ZR+ designs, opinions differ on whether the industry needs all three.
“There is significant overlap between the membership of the OpenZR+ MSA and the OIF, and the two groups have always maintained positive collaboration,” says Tom Williams, director of technical marketing at Acacia, a leading member of the OpenZR+. “We view the adoption of 1600ZR+ in the OIF as a reinforcement of the value that the OpenZR+ has brought to the market.”
Robert Maher, Infinera’s CTO, believes the industry does not need three standards. However, having three organisations does provide different perspectives and considerations.
Meanwhile, Maxim Kuschnerov, director of R&D at Huawei, says the OIF’s decision to tackle ZR+ changes things. “OpenZR+ kickstarted the additional use cases in the industry, and OpenROADM took it away, but going forward, it doesn’t seem that we need additional MSAs if the OIF is covering ZR+ for Ethernet clients in ROADM networks,” says Kuschnerov. “Only the OTN [framing] modes need to be covered, and the ITU-T can do that.”
Kuschnerov also would like more end-user involvement in the OIF group. “It would help shape the evolving use cases and not be guided by a single cloud operator,” he says.
ZR history
The OIF is a 25-year-old industry organisation with over 150 members, including hyperscalers, telecom operators, systems and test equipment vendors, and component companies.
In October 2016, the OIF started the 400ZR project, the first pluggable 400-gigabit Ethernet coherent optics specification. The principal backers of the 400ZR work were Google and Microsoft. The standard was designed to link equipment in data centres up to 120km apart.
The OIF 400ZR specification also included an un-amplified version with a reach of several tens of kilometres. The first 400ZR specification document, which the OIF calls an Implementation Agreement, was completed in March 2020 (see chart above).
The OIF started the follow-on 800ZR specification work in November 2020, a development promoted by Google. Gass says the OIF is nearing completion of the 800ZR Implementation Agreement document, expected in the second half of 2024.
If the 1600ZR and ZR+ coherent work projects take a similar duration, the first 1600ZR and 1600ZR+ products will appear in 2027.
Symbol rate and other challenges
Moving to a 1.6-terabit coherent pluggable module using the same modulation scheme – 16-ary quadrature amplitude modulation or 16-QAM – used for 400ZR and 800ZR suggests a symbol rate of 240 gigabaud (GBd).
“That is the maths, but there might be concerns with technical feasibility,” says Gass. “That’s not to say it won’t come together.”
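The maths Gass refers to can be sketched with a back-of-the-envelope calculation. The roughly 20 per cent framing-and-FEC overhead used below is an illustrative assumption, not an OIF-specified figure:

```python
# Rough symbol-rate estimate for a dual-polarisation coherent signal.
# Assumption (illustrative, not from the OIF spec): ~20% framing/FEC overhead.

def required_baud(net_bit_rate_gbps, bits_per_symbol, overhead=0.20):
    """Symbol rate (GBd) needed to carry a given net bit rate."""
    line_rate = net_bit_rate_gbps * (1 + overhead)  # add framing/FEC overhead
    return line_rate / (2 * bits_per_symbol)        # two polarisations

# 16-QAM carries 4 bits per symbol.
print(round(required_baud(400, 4)))   # 400ZR-class: ~60 GBd
print(round(required_baud(800, 4)))   # 800ZR-class: ~120 GBd
print(round(required_baud(1600, 4)))  # 1.6 terabit: ~240 GBd
```

The same formula reproduces the roughly 60GBd of 400ZR and 120GBd of 800ZR, which is why 16-QAM at 1.6 terabits points to around 240GBd.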
The highest symbol rate coherent modem to date is Ciena’s WaveLogic 6e, which was announced a year ago. The design uses a 3nm CMOS coherent digital signal processor (DSP) and a 200GBd symbol rate. It is also an embedded coherent design, not one required to fit inside a pluggable optical module with a constrained power consumption.

Kuschnerov points out that the baud rates of ZR and ZR+ have differed, and this will likely continue. 800ZR, using Ethernet with no probabilistic constellation shaping, has a baud rate of 118.2GBd, while 800ZR+, which uses OTN and probabilistic constellation shaping, has a baud rate of up to 131.35GBd. With probabilistic constellation shaping, the constellation’s symbols are sent with unequal probabilities. “This decreases the information per symbol, and thus, the baud rate must be increased,” says Kuschnerov.
Doubling up for 1600ZR/ZR+, those numbers become around 236GBd and 262GBd, subject to future standardisation. “So, saying that 1600ZR is likely to be at 240GBd is correct, but one cannot state the same for a potential 1600ZR+,” says Kuschnerov.
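Kuschnerov's point can be illustrated numerically: since the bit rate is fixed, the symbol rate scales inversely with the information carried per symbol. The 3.6 bits-per-symbol shaped entropy below is an illustrative assumption chosen to reproduce the quoted figures:

```python
# Probabilistic constellation shaping (PCS) lowers the average information
# per symbol, so the symbol rate must rise to hold the bit rate constant.
# The shaped entropy of 3.6 bits/symbol is an illustrative assumption.

def scaled_baud(baud_unshaped, bits_unshaped, bits_shaped):
    """Symbol rate needed when per-symbol information drops under shaping."""
    return baud_unshaped * bits_unshaped / bits_shaped

# 800ZR runs unshaped 16-QAM (4 bits/symbol) at 118.2 GBd; shaping to
# ~3.6 bits/symbol lands close to 800ZR+'s 131.35 GBd.
print(round(scaled_baud(118.2, 4.0, 3.6), 1))  # ~131.3 GBd
```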
Nokia’s view is that for 1600ZR, the industry will look at operating modes that include 16QAM at 240 GBd. Other explored options include 64-QAM with probabilistic constellation shaping at 200GBd and even dual optical carrier solutions with each carrier operating at approximately 130GBd. “However, this last option may be challenging from a power envelope perspective,” says Szilárd Zsigmond, head of Nokia’s optical subsystems group.
In turn, if 1600ZR+ reaches 1,000km distances, the emphasis will be on higher baud rate options than those used for 1600ZR. “This will be needed to enable longer reaches, which will also put pressure on managing power dissipation,” says Zsigmond.
The coherent DSP must also have digital-to-analogue converters (DACs) and analogue-to-digital converters (ADCs) that sample at at least 240 giga-samples per second. Indeed, the consensus among the players is that achieving the required electronics and optics will be challenging.
“All component bandwidths have to double and that is a significant challenge,” says Josef Berger, associate vice president, cloud optics marketing at Marvell.
The coherent optics – the modulators and receivers – must extend their analogue bandwidth to 120GHz. Infinera is one company confident this will be achieved. “Infinera, with our highly integrated Indium Phosphide-based photonic integrated circuits, will be producing a TROSA [transmitter-receiver optical sub-assembly] capable of supporting 1.6-terabit transmission that will fit in a pluggable form factor,” says Maher.
The coherent DSP and optics must also meet the pluggable modules’ power and heat limits. “That is an extra challenge here: the development needs to maintain focus on cost and power simultaneously to bring the value network operators need,” says Williams. “Scaling baud rate by itself doesn’t solve the challenge. We need to do this in a cost and power-efficient way.”
Current 800ZR modules consume 30W or more, and since the aim of ZR modules is to be used within Ethernet switches and routers, this is challenging. In comparison, 400ZR modules now consume 20W or less.
“For 800ZR and 800ZR+, the target is to be within the 28W range, and this target is not changing for 1600ZR and 1600ZR+,” says Zsigmond. Coherent design engineers are being asked to double the bit rate yet keep the power envelope constant.
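Holding the power envelope while doubling the bit rate amounts to halving the energy spent per bit, as a quick calculation shows (the 28W figure is the target Zsigmond quotes):

```python
# Energy per bit = module power / bit rate.
# Conveniently, watts divided by terabits-per-second equals picojoules per bit.

def energy_per_bit_pj(power_w, bit_rate_tbps):
    return power_w / bit_rate_tbps

print(energy_per_bit_pj(28, 0.8))  # 800 gigabits at 28W: 35.0 pJ/bit
print(energy_per_bit_pj(28, 1.6))  # 1.6 terabits at 28W: 17.5 pJ/bit
```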
Certain OIF members are also interested in backward compatibility with 800ZR or 400ZR. “That also might affect the design,” says Gass.
Given the rising cost to tape out a coherent DSP using the 3nm and even 2nm CMOS process nodes required to reduce power per bit, most companies designing ASICs will look to develop one design for the 1600ZR and ZR+ applications to maximise their return on investment, says Zsigmond. He notes that the risk was lower for the first generations of ZR and ZR+: most companies had already developed components for long-haul applications that could be optimised for ZR and ZR+ use.
For 400ZR, which used a symbol rate of 60 GBd, 60-70 GBd optics already existed. For 800 gigabit transmissions, high-performance embedded coherent optics and pluggable, low-power ZR/ZR+ modules have been developed in parallel. “For 1600ZR/ZR+, it appears that the pluggable modules will be developed first,” says Zsigmond. “There will be more technology challenges to address than previous ZR/ZR+ projects.”
The pace of innovation is faster than traditional coherent transmission systems and will continue to reduce cost and power per bit, notes Marvell’s Berger: “This innovation creates technologies that will migrate into traditional coherent applications as well.”
Gass is optimistic despite the challenges ahead: “You’ve got smart people in the room, and they want this to happen.”
OIF's OFC 2024 demo
The OIF has yet to finalise what it will show for the upcoming coherent pluggable module interoperable event at OFC to be held in San Diego in March. But there will likely be 400ZR and 800ZR demonstrations operating over 75km-plus spans and 400-gigabit OpenZR+ optics operating over greater distance spans.
Optical transmission: sending more data over a greater reach
Keysight Technologies' chart plots the record-setting optical transmission systems of recent years.

The chart, compiled by Dr Fabio Pittalá, product planner, broadband and photonic center of excellence at Keysight, is an update of one previously published by Gazettabyte.
The latest chart adds data from last year’s OFC 2023 and ECOC 2023 conferences. And new optical transmission achievements can be expected at the upcoming OFC 2024 show, to be held in San Diego, CA in March.
Marvell kickstarts the 800G coherent pluggable era

Marvell has become the first company to provide an 800-gigabit coherent digital signal processor (DSP) for use in pluggable optical modules.
The 5nm CMOS Orion chip supports a symbol rate of over 130 gigabaud (GBd), more than double that of the coherent DSPs for the OIF’s 400ZR standard and 400ZR+.
Meanwhile, a CFP2-DCO pluggable module using the Orion can transmit a 400-gigabit data payload over 2,000km using the quadrature phase-shift keying (QPSK) modulation scheme.
The Orion DSP announcement is timely, given that this year will be the first in which coherent pluggable port shipments exceed those of embedded coherent modules.
“We strongly believe that pluggable coherent modules will cover most network use cases, including carrier and cloud data centre interconnect,” says Samuel Liu, senior director of coherent DSP marketing at Marvell.
Marvell also announced its third-generation ColorZ pluggable module for hyperscalers to link equipment between data centres. The Orion-based ColorZ 800-gigabit module supports the OIF’s 800ZR standard and 800ZR+.
Fifth-generation DSP
The Orion chip is a fifth-generation design yet Marvell’s first. First ClariPhy and then Inphi developed the previous four generations.

Inphi bought ClariPhy for $275 million in 2016, gaining the first two generations of devices: the 40nm CMOS 40-gigabit LightSpeed chip and the 28nm CMOS 100- and 200-gigabit LightSpeed-II coherent DSPs. The 28nm CMOS DSP is now coming to the end of its life, says Liu.
Inphi added two more coherent DSPs before Marvell bought the company in 2021 for $10 billion. Inphi’s first DSP was the 16nm CMOS M200. Until then, Acacia (now Cisco-owned) had been the sole merchant company supplying coherent DSPs for CFP2-DCOs pluggable modules.
Inphi then delivered the 7nm 400-gigabit Canopus for the 400ZR market, followed a year later by the Deneb DSP that supports several 400-gigabit standards. These include 400ZR, 400ZR+, and standards such as OpenZR+, which also has 100-, 200-, and 300-gigabit line rates and supports the OpenROADM MSA specifications. “The cash cow [for Marvell] is [the] 7nm [DSPs],” says Liu.
The Inphi team’s first task after the acquisition was to convince Marvell’s CEO and chief financial officer to make the company’s most significant investment yet in a coherent DSP. Developing Orion cost between $100 million and $300 million.
“We have been quiet for the last two years, not making any coherent DSP announcements,” says Liu. “This [the Orion] is the one.”
Marvell views being first to market with a 130GBd-plus generation coherent DSP as critical given how pluggables, including the QSFP-DD and the OSFP form factors, account for over half of all coherent ports shipped.
“It is very significant to be first to market with an 800ZR plug and DSP,” says Jimmy Yu, vice president at market research firm Dell’Oro Group. “I expect Cisco/Acacia to have one available in 2024. So, for now, Marvell is the only supplier of this product.”
Yu notes that vendors such as Ciena and Infinera have had 800 Gigabit-per-second (Gbps) coherent available for some time, but they are for metro and long-haul networks and use embedded line cards.
Use cases
The Orion DSP addresses hyperscalers’ and telecom operators’ coherent needs. The DSP also implements various coherent standards to ensure that the vendors’ pluggable modules work with each other.
Liu says a DSP’s highest speed is what always gets the focus, but the Orion also supports lower line rates such as 600, 400 and 200Gbps for longer spans.
The baud rate, modulation scheme, and the probabilistic constellation shaping (PCS) technique are control levers that can be varied depending on the application. For example, 800ZR uses a symbol rate of only 118GBd and the 16-QAM modulation scheme to achieve the 120km specification while minimising power consumption. When performance is essential, such as sending 400Gbps over 2,000km, the highest baud rate of 130GBd is used along with QPSK modulation.
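The trade-off Liu describes, with baud rate and modulation acting as levers, can be sketched numerically. The figures are those quoted above; the raw-rate formula assumes dual-polarisation transmission, with the margin over the net payload absorbed by FEC and framing overhead:

```python
# Raw line capacity of a dual-polarisation coherent signal:
# 2 polarisations x symbol rate x bits per symbol.

def raw_capacity_gbps(baud_gbd, bits_per_symbol):
    return 2 * baud_gbd * bits_per_symbol

# 800ZR mode: 118 GBd with 16-QAM (4 bits/symbol) - shorter reach, more bits.
print(raw_capacity_gbps(118, 4))  # 944 Gbps raw, carrying an 800G payload
# Long-reach mode: 130 GBd with QPSK (2 bits/symbol) - 400G over 2,000km.
print(raw_capacity_gbps(130, 2))  # 520 Gbps raw, carrying a 400G payload
```

The same raw-versus-net margin is what the FEC consumes, which is why the long-reach QPSK mode trades capacity for robustness.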

China is one market where Marvell’s current 7nm CFP2-DCOs are used to transport wavelengths at 100Gbps and 200Gbps.
Using the Orion for 200-gigabit wavelengths delivers an extra 1dB (decibel) of optical signal-to-noise ratio performance. The additional 1dB benefits the end user, says Liu: they can increase the engineering margin or extend the transmission distance. Meanwhile, probabilistic constellation shaping is used when spectral efficiency is essential, such as fitting a transmission within a 100GHz-width channel.
Liu notes that the leading Chinese telecom operators are open to using coherent pluggables to help reduce costs. In contrast, large telcos in North America and Europe use pluggables for their regional networks. Still, they prefer embedded coherent modems from leading systems vendors for long-haul distances greater than 1,000km.
Marvell believes the optical performance enabled by its 130GBd-plus 800-gigabit pluggable module will change this. However, the leading system vendors have all announced their latest-generation embedded coherent modems with baud rates of 130GBd to 150GBd, while Ciena’s 200GBd 1.6-terabit WaveLogic 6 coherent modem will be available next year.
The advent of 800-gigabit coherent will also promote IP over DWDM. 400ZR+ is already enabling the addition of coherent modules directly to IP routers for metro and metro regional applications. 800ZR and 800ZR+ pluggable modules will continue this trend from 400 gigabits to 800 gigabits.
The advent of an 800-gigabit pluggable also benefits the hyperscalers as they upgrade their data centre switches from 12.8 terabits to 25.6 and 51.2 terabits. The hyperscalers already use 400ZR and ZR+ modules, and 800-gigabit modules are the next obvious step. Liu says this will serve the market for the next four years.
Fujitsu Optical Components, InnoLight, and Lumentum are three module makers that all endorsed the Orion DSP announcement.
ColorZ 800 module
In addition to selling its coherent DSPs to pluggable module and equipment makers, Marvell will sell its latest ColorZ module for data centre interconnect directly to the hyperscalers.
Marvell’s first-generation product was the 100-gigabit coherent ColorZ, launched in 2016, and in 2021 it produced its 400ZR ColorZ. Now, it is offering an 800-gigabit version – the ColorZ 800 – to address 800ZR and 800ZR+, which include OpenZR+ and support for lower speeds that extend the reach to metro regional and beyond.

“We are first to market on this module, and it is now sampling,” says Josef Berger, associate vice president of marketing optics at Marvell.
Marvell addressing its module to the hyperscaler market rather than telecoms makes sense, says Yu, as it represents the biggest opportunity.
“Most communications service providers’ interest is in having optical plugs with longer reach performance,” says Dell’Oro’s Yu. “So, they are more interested in ZR+ optical variants with high launch power of 0dBm or greater.”
Marvell notes a 30 per cent cost and power consumption reduction for each generation of ColorZ pluggable coherent module.
Liu concludes by saying that designing the Orion DSP was challenging. It is a highly complicated chip comprising over a billion logic gates. An early test chip of the Orion was used as part of a Lumentum demonstration at the OFC show in March.
The ColorZ 800 module will start being sampled this quarter.
What follows the Orion will likely be a 1.6-terabit DSP operating at 240GBd. The OIF has already begun defining the next 1.6T ZR standard.
Ciena advances coherent technology on multiple fronts

- Ciena has unveiled the industry’s first coherent digital signal processor (DSP) to support 1.6-terabit wavelengths
- Ciena announced two WaveLogic 6 coherent DSPs: Extreme and Nano
- WaveLogic 6 Extreme operates at a symbol rate of up to 200 gigabaud (GBd), while the Nano, aimed at coherent pluggables, has a baud rate of 118GBd to 140GBd
Part 1: WaveLogic 6 coherent DSPs
Ciena has leapfrogged the competition by announcing the industry’s first coherent DSP operating at up to 200GBd.
The WaveLogic 6 chips are the first announced coherent DSPs implemented using a 3nm CMOS process.
Ciena’s competitors are – or will soon be – shipping 5nm CMOS coherent DSPs. In contrast, Ciena has chosen to skip 5nm and will ship WaveLogic 6 Extreme coherent modems in the first half of 2024.
Using a leading CMOS process enables the cramming of more digital logic and features into silicon. The DSP also gains a faster analogue front-end, i.e. the analogue-to-digital (ADC) and digital-to-analogue (DAC) converters.
The WaveLogic 6 matches Ciena’s existing WaveLogic 5 family in having two DSPs: Extreme, for the most demanding optical transmission applications, and Nano for pluggable modules.
WaveLogic 6 Extreme is the first announced DSP that supports a 1.6-terabit wavelength; Acacia’s (Cisco) coherent DSP supports 1.2-terabit wavelengths and other 1.2-terabit wavelength DSPs are emerging.
WaveLogic 6 Nano addresses metro-regional networks and data centre interconnect (up to 120km). Here, cost, size, and power consumption are critical. Ciena will offer the WaveLogic 6 in QSFP-DD and OSFP pluggable form factors.
Class 3.5
Network traffic continues to grow exponentially. Ciena notes that the total capacity of its systems shipped between 2010 and 2021 has grown 150x, measured in petabits per second.
Increasing the symbol rate is the coherent engineers’ preferred approach to reduce the cost per bit of optical transport.
Doubling the baud rate doubles the data sent using the same modulation scheme. Alternatively, the data payload can be sent over longer spans.
However, upping the symbol rates increases the optical wavelength’s channel width. Advanced signal processing is needed to achieve further spectral efficiency gains.
One classification scheme of coherent modem symbol rate defines first-generation coherent systems operating at 30-34GBd as Class 1. Class 2 modems double the rate to 60-68GBd. The OIF’s 400ZR standard operating at 64GBd is a Class 2 coherent modem.
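The arithmetic behind these classes can be sketched in a few lines (a simplification that assumes dual-polarisation transmission and ignores forward-error correction and framing overhead, so the figures are gross rather than net rates):

```python
# Gross coherent line rate: symbol rate x bits-per-symbol x polarisations.
# Simplified model: FEC and framing overhead are ignored.

def line_rate_gbps(baud_gbd, bits_per_symbol, polarisations=2):
    """Gross line rate in Gbps for a dual-polarisation coherent signal."""
    return baud_gbd * bits_per_symbol * polarisations

# Class 2 (e.g. 400ZR): 64GBd with 16-QAM (4 bits per symbol).
print(line_rate_gbps(64, 4))   # 512 gross; nets ~400 Gbps after overhead

# 200GBd with 16-QAM: 1.6-terabit wavelength territory.
print(line_rate_gbps(200, 4))  # 1600
```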
Currently-deployed optical transport systems operating at 90-107GBd reside between Class 2 and Class 3 (120-136GBd). Ciena’s WaveLogic 5 Extreme is one example, with its symbol rate ranging from 95-107GBd. Ciena has shipped over 60,000 WaveLogic 5 Extreme DSPs to over 200 customers.
Acacia’s latest CIM-8 coherent modem, now shipping, operates at 140GBd, making it a Class 3 design. Infinera, NEL, and Nokia announced their Class 3 devices before the OFC 2023 conference and exhibition.
Now Ciena, with its 200GBd WaveLogic 6 Extreme, sits alone between Class 3 and Class 4 (240-272GBd).
WaveLogic 6 Extreme
Ciena has extended the performance of all the components of the Extreme-based coherent modem to work at 200GBd.
These components include the DSP’s analogue front-end: the ADCs and DACs, the coherent optics and the modulator drivers and TIAs. All must operate with a 100GHz bandwidth.
To operate at 200GBd, the ADCs and DACs must sample at over 200 giga-samples per second. This pushes ADC and DAC design to the limit.
The coherent modem’s optics and associated electronics must also have a 100GHz operating bandwidth. Ciena developed the optics in-house and is also working with partners to bring the coherent optics to market with a 100GHz bandwidth.
Ciena uses silicon photonics for the Extreme’s integrated coherent receiver (ICR) optics. For the coherent driver modulator (CDM) transmitter, Ciena is using indium phosphide and is also evaluating other technology such as thin-film lithium niobate.

“There are multiple options that are available and being looked at,” says Helen Xenos, senior director of portfolio marketing at Ciena.
Much innovation has been required to achieve the fidelity with 100GHz electro-optics and get the signalling right between the transmitter-receiver and the ASIC, says Xenos.

Ciena introduced frequency division multiplexing (FDM) sub-carriers with the WaveLogic 5 Extreme, a technique to help tackle dispersion. With the introduction of edgeless clock recovery, Ciena has created a near-ideal rectangular spectrum with sharp edges.
“First, inside this signal, there are FDM sub-carriers, but you don’t see them because they are right next to each other,” says Xenos. “Getting rid of this dead space between carriers enables more throughput.”
Making the signal’s edges sharper means that wavelengths can be packed more tightly, making better use of precious fibre spectrum. Edgeless clock recovery alone improves spectral efficiency by 10-13 per cent, says Xenos.
Moving to 3nm allows additional signal processing. As an example, Ciena’s WaveLogic 6 Extreme DSP can select between 1, 2, 4 and 8 sub-carriers based on the dispersion on the link. WaveLogic 5 Extreme supports 4 sub-carrier FDM only.
The baud rate is also adjustable from 67-200GBd, while for the line rate, the WaveLogic 6 supports 200-gigabit to 1.6-terabit wavelengths using probabilistic constellation shaping (PCS).
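Probabilistic constellation shaping trades the probabilities of constellation points against reach: favouring low-energy points lowers the average transmit power at the cost of carrying fewer information bits per symbol. A minimal illustration, using hypothetical probabilities rather than WaveLogic 6's actual shaping:

```python
import math

# Probabilistic constellation shaping illustrated on a 4-level amplitude
# alphabet. The weights are hypothetical, not WaveLogic 6's actual shaping.

def entropy_bits(probs):
    """Shannon entropy: information carried per symbol, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4                   # unshaped: every point equally likely
shaped = [0.40, 0.30, 0.20, 0.10]      # low-energy points favoured

print(entropy_bits(uniform))           # 2.0 bits/symbol
print(round(entropy_bits(shaped), 2))  # 1.85 bits/symbol, at lower mean power
```

Sweeping the distribution between these extremes is what lets a single DSP offer finely-graded line rates between 200 gigabits and 1.6 terabits.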
Another signal processing technique used is multi-dimensional constellation shaping. These are specific modulations that are added to support legacy submarine links.
“For compensated submarine cables that have specific characteristics, they need a specialised type of design also in the DSP,” says Xenos.
Ciena also uses nonlinear compensation techniques to squeeze further performance and allow higher power signals, improving overall link performance.
Ciena can address terrestrial and new and legacy submarine links with the WaveLogic 6 Extreme running these techniques.
Xenos cites performance examples using the enhanced DSP performance of the WaveLogic 6 Extreme.
Using WaveLogic 5, an 800-gigabit wavelength can be sent at 95GBd using a 112.5GHz-wide channel. The 800-gigabit signal can cross several reconfigurable optical add-drop multiplexer (ROADM) hops.
Sending a 1.6-terabit wavelength at 185GBd over a similar link, the signal occupies a 200GHz channel. “And you get better performance because of the extra DSP enhancements,” says Xenos.
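The spectral-efficiency gain in this worked example can be checked with simple division (bits per second per hertz of occupied spectrum):

```python
# Spectral efficiency = wavelength rate / occupied channel width.

def spectral_eff(rate_gbps, channel_ghz):
    return rate_gbps / channel_ghz

wl5 = spectral_eff(800, 112.5)   # WaveLogic 5: 800 gigabits in 112.5GHz
wl6 = spectral_eff(1600, 200)    # WaveLogic 6: 1.6 terabits in 200GHz

print(f"{wl5:.2f} vs {wl6:.2f} b/s/Hz")  # 7.11 vs 8.00 b/s/Hz
```

The resulting 12.5 per cent gain sits within the 10-13 per cent range Xenos quotes for edgeless clock recovery.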
The operator Southern Cross has simulated using the WaveLogic 6 Extreme on its network and says the DSP will be able to send one terabit of data over 12,000km.
Optical transport systems benefits
Systems benefits of the Extreme DSP include doubling capacity, transmitting a 1.6-terabit wavelength, and halving the power consumed per bit.
The WaveLogic 6 Extreme will fit within existing Ciena optical transport kit.
Xenos said the design goal is to get to the next level of cost and power reduction and maximise the network coverage for 800-gigabit wavelengths. This is why Ciena chose to jump to 3nm CMOS for the WaveLogic 6 Extreme, skipping 5nm CMOS.
WaveLogic 6 Nano
The 3nm CMOS WaveLogic 6 Nano addresses pluggable applications for metro and data centre interconnect.
“The opportunity is still largely in front of us [for coherent pluggables],” says Xenos.
The current WaveLogic 5 Nano operating between 31.5-70GBd addresses 100-gigabit to 400-gigabit coherent pluggable applications. These include fixed grid networks using 50GHz channels and interoperable modes such as OpenROADM, 400ZR and 400ZR+. Also supported is the 200-gigabit CableLabs specification.
The WaveLogic 5 Nano is also used in the QSFP-DD module with embedded amplification for high-performance applications.
There is also a new generation of specifications being worked on by standards bodies on client side and line side 800-gigabit and 1.6-terabit interfaces.
Developments mentioned by Xenos include an interoperable probabilistic constellation shaping proposal to be implemented using coherent pluggables.
The advent of 12.8-terabit and 25.6-terabit Ethernet switches gave rise to 400ZR. Now with the start of 51.2-terabit and soon 102.4-terabit switches, the OIF’s 800ZR standard will be needed.

There is also a ‘Beyond 400 Gig’ ITU-T and OpenROADM initiative to combine the interoperable OpenZR+ and the 400-gigabit coherent work of the OpenROADM MSA for a packet-optimised 800-gigabit specification for metro applications.
Another mode is designed to support not just Ethernet but OTN clients.
Lastly, there will also be long-distance modes needed at 400, 600, and 800-gigabit rates.
“With WaveLogic 6 Nano, the intent is to double the capacity within the same footprint,” says Xenos.
In addition to these initiatives, the WaveLogic 6 Nano will address a new application class for much shorter spans – 10km and 20km – at the network edge. The aim is to connect equipment across buildings in a data centre campus, for example.
Some customers want a single-channel design and straightforward forward-error correction. Other customers with limited fibre capacity will want a wavelength division multiplexed (WDM) solution.
The Nano’s processing and associated optics will be tuned to each application class. “The engineering is done so that we only use the performance and power required for a specific application,” says Xenos.
A Nano-based coherent pluggable connecting campus buildings will differ significantly from a pluggable sending 800 gigabits over 1,000km or across a metro network with multiple ROADM stages, she says.
The WaveLogic 6 Nano will be used with silicon photonics-based coherent optics, but other materials for the coherent driver modulator transmitter may be used.
Availability
Ciena taped out the first 3nm CMOS Extreme and Nano ICs last year.
The WaveLogic 6 Extreme-based coherent modem will be available for trials later this year. Product shipments and network deployments will begin in the first half of 2024.
Meanwhile, shipments of WaveLogic 6 Nano will follow in the second half of 2024.
The various paths to co-packaged optics

Near package optics has emerged as companies have encountered the complexities of co-packaged optics. It should not be viewed as an alternative to co-packaged optics but rather a pragmatic approach for its implementation.
Co-packaged optics will be one of several hot topics at the upcoming OFC show in March.
Placing optics next to silicon is seen as the only way to meet the future input-output (I/O) requirements of ICs such as Ethernet switches and high-end processors.
For now, pluggable optics do the job of routing traffic between Ethernet switch chips in the data centre. The pluggable modules sit on the switch platform’s front panel at the edge of the printed circuit board (PCB) hosting the switch chip.
But with switch silicon capacity doubling every two years, engineers are being challenged to get data into and out of the chip while ensuring power consumption does not rise.
One way to boost I/O and reduce power is to use on-board optics, bringing the optics onto the PCB nearer the switch chip to shorten the electrical traces linking the two.
The Consortium of On-Board Optics (COBO), set up in 2015, has developed specifications to ensure interoperability between on-board optics products from different vendors.
However, the industry has favoured a still shorter link distance, coupling the optics and ASIC in one package. Such co-packaging is tricky, which explains why yet another approach has emerged: near package optics.
I/O bottleneck
“Everyone is looking for tighter and tighter integration between a switch ASIC, or ‘XPU’ chip, and the optics,” says Brad Booth, president at COBO and principal engineer, Azure hardware architecture at Microsoft. XPU is the generic term for an IC such as a CPU, a graphics processing unit (GPU) or even a data processing unit (DPU).
What kick-started interest in co-packaged optics was the desire to reduce power consumption and cost, says Booth. These remain important considerations but the biggest concern is getting sufficient bandwidth on and off these chips.
“The volume of high-speed signalling is constrained by the beachfront available to us,” he says.
Booth cites the example of a 16-lane PCI Express bus that requires 64 electrical traces for data alone, not including the power and ground signalling. “I can do that with two fibres,” says Booth.
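The beachfront arithmetic behind Booth's example can be made explicit (assuming each PCI Express lane needs one differential pair per direction, i.e. four traces per lane):

```python
# Electrical beachfront: a 16-lane PCIe bus, counting data traces only.
LANES = 16
TRACES_PER_LANE = 4                 # TX pair + RX pair per lane
print(LANES * TRACES_PER_LANE)      # 64 traces, before power and ground

# The same payload carried optically: one fibre per direction.
print(2)                            # 2 fibres
```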

Near package optics
With co-packaged optics, the switch chip is typically surrounded by 16 optical modules, all placed on an organic substrate (see diagram below).
“Another name for it is a multi-chip module,” says Nhat Nguyen, senior director, solutions architecture at optical I/O specialist, Ayar Labs.
A 25.6-terabit Ethernet switch chip requires 16 optical modules of 1.6 terabits-per-second (1.6Tbps) each, while upcoming 51.2-terabit switch chips will use 3.2Tbps modules.
“The issue is that the multi-chip module can only be so large,” says Nguyen. “It is challenging with today’s technology to surround the 51.2-terabit ASIC with 16 optical modules.”

Near package optics tackles this by using a high-performance PCB substrate – an interposer – that sits on the host board, in contrast to co-packaged optics where the modules surround the chip on a multi-chip module substrate.
The near package optics’ interposer is more spacious, making the signal routing between the chip and optical modules easier while still meeting signal integrity requirements. Using the interposer means the whole PCB doesn’t need upgrading which would be extremely costly.
Some co-packaged optics design will use components from multiple suppliers. One concern is how to service a failed optical engine when testing the design before deployment. “That is one reason why a connector-based solution is being proposed,” says Booth. “And that also impacts the size of the substrate.”
A larger substrate is also needed to support both electrical and optical interfaces from the switch chip.
Platforms will not become all-optical immediately and direct-attached copper cabling will continue to be used in the data centre. However, the issue with electrical signalling, as mentioned, is it needs more space than fibre.
“We are in a transitional phase: we are not 100 per cent optics, we are not 100 per cent electrical anymore,” says Booth. “How do you make that transition and still build these systems?”
Perspectives
Ayar Labs views near package optics as akin to COBO. “It’s an attempt to bring COBO much closer to the ASIC,” says Hugo Saleh, senior vice president of commercial operations and managing director of Ayar Labs U.K.
However, COBO’s president, Booth, stresses that near package optics is different from COBO’s on-board optics work.
“The big difference is that COBO uses a PCB motherboard to do the connection whereas near package optics uses a substrate,” he says. “It is closer than where COBO can go.”
It means that with near package optics, there is no high-speed data bandwidth going through the PCB.
Booth says near package optics came about once it became obvious that the latest 51.2-terabit designs – the silicon, optics and the interfaces between them – cannot fit on even the largest organic substrates.
“It was beyond the current manufacturing capabilities,” says Booth. “That was the feedback that came back to Microsoft and Facebook (Meta) as part of our Joint Development Foundation.”
Near package optics is thus a pragmatic solution to an engineering challenge, says Booth. The larger substrate remains a form of co-packaging but it has been given a distinct name to highlight that it is different to the early-version approach.
Nathan Tracy, TE Connectivity and the OIF’s vice president of marketing, admits he is frustrated that the industry is using two terms since co-packaged optics and near package optics achieve the same thing. “It’s just a slight difference in implementation,” says Tracy.
The OIF is an industry forum studying the applications and technology issues of co-packaging and this month published its framework Implementation Agreement (IA) document.
COBO is another organisation working on specifications for co-packaged optics, focussing on connectivity issues.

Technical differences
Ayar Labs highlights the power penalty using near package optics due to its use of longer channel lengths.
For near package optics, lengths between the ASIC and optics can be up to 150mm with the channel loss constrained to 13dB. This is why the OIF is developing the XSR+ electrical interface, to expand the XSR’s reach for near package optics.
In contrast, co-packaged optics confines the modules and host ASIC to 50mm of each other. “The channel loss here is limited to 10dB,” says Nguyen. Co-packaged optics has a lower power consumption because of the shorter spans and 3dB saving.
Ayar Labs highlights its optical engine technology, the TeraPHY chiplet that combines silicon photonics and electronics in one die. The optical module surrounding the ASIC in a co-packaged design typically comprises three chips: the DSP, electrical interface and photonics.
“We can place the chiplet very close to the ASIC,” says Nguyen. The distance between the ASIC and the chiplet can be as close as 3-5mm. When the chiplet sits alongside the ASIC in this way, Ayar Labs refers to such a design using a third term: in-package optics.
Ayar Labs says its chiplet can also be used for optical modules as part of a co-packaged design.
The very short distances using the chiplet result in a power efficiency of 5pJ/bit whereas that of an optical module is 15pJ/bit. Using TeraPHY for an optical module co-packaged design, the power efficiency is some 7.5pJ/bit, half that of a 3-chip module.
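Energy-per-bit figures convert directly into I/O power: 1pJ/bit at 1Tbps is 1W. Applying the cited efficiencies to a hypothetical 51.2-terabit switch I/O load:

```python
# I/O power in watts = (energy per bit in pJ/bit) x (throughput in Tbps),
# since 1 pJ/bit x 1e12 bit/s = 1 W.

def io_power_w(pj_per_bit, throughput_tbps):
    return pj_per_bit * throughput_tbps

THROUGHPUT = 51.2                    # Tbps: a hypothetical switch I/O load
print(io_power_w(15.0, THROUGHPUT))  # 3-chip optical module: ~768 W
print(io_power_w(7.5, THROUGHPUT))   # TeraPHY-based module:  ~384 W
print(io_power_w(5.0, THROUGHPUT))   # in-package chiplet:    ~256 W
```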
A 3-5mm distance also reduces latency, while the chiplet’s bandwidth density, measured in gigabits-per-second per millimetre, is higher than an optical module’s.
Co-existence
Booth refers to near package optics as ‘CPO Gen-1’, the first generation of co-packaged optics.
“In essence, you have got to use technologies you have in hand to be able to build something,” says Booth. “Especially in the timeline that we want to demonstrate the technology.”
Is Microsoft backing near package optics?

“We are definitely saying yes if this is what it takes to get the first level of specifications developed,” says Booth.
But that does not mean the first products will be exclusively near package optics.
“Both will be available and around the same time,” says Booth. “There will be near package optics solutions that will be multi-vendor and there will be more vertically-integrated designs; like Broadcom, Intel and others can do.”
From an end-user perspective, a multi-vendor capability is desirable, says Booth.
Ayar Labs’ Saleh sees two developing paths.
The first is optical I/O to connect chips in a mesh or as part of memory semantic designs used for high-performance computing and machine learning. Here, the highest bandwidth and lowest power are key design goals.
Ayar Labs has just announced a strategic partnership with high performance computing leader, HPE, to design future silicon photonics solutions for HPE’s Slingshot interconnect that is used for upcoming Exascale supercomputers and also in the data centre.
The second path concerns Ethernet switch chips and here Saleh expects both solutions to co-exist: near package optics will be an interim solution with co-packaged optics dominating longer term. “This will move more slowly as there needs to be interoperability and a wide set of suppliers,” says Saleh.
Booth expects continual design improvements to co-packaged optics. Further out, 2.5D and 3D chip packaging techniques, where silicon is stacked vertically, will be used as part of co-packaged optics designs, he says.
Preparing for a post-pluggable optical module world

Part 1: OIF: ELSFP, XSR+, and CEI-112G-Linear
The OIF is working on several electrical and optical specifications as the industry looks beyond pluggable optical transceivers.
One initiative is to specify the external laser source used for co-packaged optics, dubbed the External Laser Small Form Factor Pluggable (ELSFP) project.
Industry interest in co-packaged optics, combining an ASIC and optical chiplets in one package, is growing as it becomes increasingly challenging and costly to route high-speed electrical signals between a high-capacity Ethernet switch chip and the pluggable optics on the platform’s faceplate.
The OIF is also developing 112-gigabit electrical interfaces to address not just co-packaged optics but also near package optics and the interface needs of servers and graphics processor units (GPUs).
Near package optics also surrounds the ASIC with optical chiplets. But unlike co-packaged optics, the ASIC and chiplets are placed on a high-performance substrate located on the host board.
ELSFP
Data centre operators have vast experience using pluggables and controlling their operating environment so that they don’t overheat. The thermal management of optics co-packaged with an ASIC that can dissipate hundreds of watts is far trickier.
“Of all the components, the one that hates heat the most is the laser,” says Nathan Tracy, TE Connectivity and the OIF’s vice president of marketing.
Players such as Intel and Juniper have integrated laser technology, allowing them to place the full transceiver on a chip. However, the industry trend is to use an external light source so that the laser is decoupled from the remaining optical transceiver circuitry.
“We bring fibre into and out of the co-packaged optical transceiver so why not add a couple more fibres and bring the laser source into the transceiver as well?” says Tracy.
Two approaches are possible. One is to box the lasers and place them within the platform in a thermally-controlled environment. Alternatively, the lasers can be boxed and placed on the equipment’s faceplate, as pluggable optics are today.
“We know how to do that,” says Tracy. “But it is not a transceiver, it is a module full of lasers.”
Such a pluggable laser approach also addresses a concern of the data centre operators: how to service the optics of a co-packaged design.
The OIF’s ELSFP project is working to specify such a laser pluggable module: its mechanical form factor, electrical interface, how light will exit the module, and its thermal management.
The goal is to develop a laser pluggable that powers up when inserted and has a blind-mate optical interface, ensuring light reaches the co-packaged optics transceivers on the host board with minimal optical loss.
“Optical interfaces are fussy things,” says Tracy. Such interfaces must be well-aligned, clean, and hold tight tolerances, says Tracy: “That is all captured under the term blind-mate.”
Optical fibre will deliver light from the laser module to the co-packaged optics but multi-core fibre may be considered in future.
One issue the OIF is discussing is the acceptable laser output power. The higher the output power, the more the source can be split to feed more co-packaged optics transceivers. But higher-power lasers have eye-safety issues.
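The power-versus-splitting trade-off can be sketched with illustrative numbers (each 1:2 splitter stage costs at least 3dB plus some excess loss; the figures below are hypothetical, not from the ELSFP draft):

```python
# How many co-packaged transceivers can one laser feed?
# Each 1:2 split halves the power (3dB) plus some excess loss.

def max_split_ways(laser_dbm, min_dbm_per_port, excess_db=0.5):
    ways, power = 1, laser_dbm
    while power - (3.0 + excess_db) >= min_dbm_per_port:
        power -= 3.0 + excess_db
        ways *= 2
    return ways

print(max_split_ways(17.0, 3.0))   # 16-way split from a 17dBm source
print(max_split_ways(10.0, 3.0))   # only a 4-way split from a 10dBm source
```

Raising the laser output buys more split ways, which is exactly where the eye-safety ceiling bites.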
Another topic being addressed is the fibre density the form factor should enable. The OIF wants a roadmap to ensure that future co-packaged optics’ needs are also met.
“The industry can then take that specification and go compete in the market, adding their differentiation on top of the standardisation,” says Tracy.
The OIF’s ELSFP members have submitted technical contributions and a draft specification exists. “Now we are in the iterative process with members building on that draft,” says Tracy.
Co-packaged optics and near package optics
As the capacity of switch chips continues to double, more interfaces are needed to get data in and out, and it is becoming harder to route the channels between the chip and the optical modules.
The chip package size is also increasing with the growing aggregate bandwidth and channels, says Tracy. These channels come out via the package’s solder balls that connect to the host board.
“You don’t want to make that ASIC package any bigger than it needs to be; packages have bad parasitics,” says Tracy.
For a fully co-packaged design, a switch ASIC is surrounded by 16 optical engines. For next-generation 51.2-terabit switch ASICs, 3.2 terabits-per-second (Tbps) optical engines will be required. Add the optical engines and the switch package becomes even bigger.
“You are starting to get to the point where you are making the package bigger in ways that are challenging the industry,” says Tracy.
Near package optics offers an alternative approach to avoid cramming the optics with the ASIC. Here, the ASIC and the chiplets are mounted on a high-performance substrate that sits on the host card.
“Now the optical engines are a little bit further away from the switching silicon than in the co-packaged optics’ case,” says Tracy.
CEI-112G-Extra Short Reach Plus (XSR+) electrical interface
According to optical I/O specialist, Ayar Labs, near package optics and co-packaged optics have similar optical performance given the optical engines are the same. Where they differ is the electrical interface requirements.
With co-packaged optics, the channel length between the ASIC and the optical engine is up to 50mm and the channel loss is 10dB. With near package optics, the channel length is up to 150mm and the channel loss is 13dB.
The OIF’s 112Gbps XSR+ electrical interface is to meet the longer reach needs of near package optics.
“It enables a little bit more margin or electrical channel reach while being focused on power reduction,” says Tracy. “Co-packaged optics is all about power reduction; that is its value-add.”
CEI-112G-Linear
A third ongoing OIF project – the CEI-112G-Linear project – also concerns a 112Gbps chip-to-optical engine interface.
The project’s goal is to specify a linear channel so that the chip’s electrical transmitter (serdes) can send data over the link – made up of an optical transmitter and an optical receiver as well as the electrical receiver at the far end – yet requires equalisation for the transmitter and end receiver only.
“A linear link means we understand the transition of the signal from electrical to optical to electrical,” says Tracy. “If we are operating over a linear range then equalisation is straightforward.” That means simpler processing for the signal’s recovery and an overall lower power consumption.
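The idea can be shown with a toy model: if the electrical-optical-electrical path behaves as a linear filter, the receiver recovers the data by inverting that filter, with no nonlinear processing needed. A minimal sketch using a two-tap channel, far simpler than a real serdes link:

```python
# A linear link modelled as a 2-tap filter, and its exact inverse.

def channel(symbols, a=0.3):
    """Linear link with inter-symbol interference: y[n] = x[n] + a*x[n-1]."""
    return [s + (a * symbols[n - 1] if n else 0.0)
            for n, s in enumerate(symbols)]

def equalise(received, a=0.3):
    """Invert the 2-tap channel: x[n] = y[n] - a*x[n-1]."""
    out = []
    for n, y in enumerate(received):
        out.append(y - (a * out[n - 1] if n else 0.0))
    return out

tx = [1.0, -1.0, 1.0, 1.0, -1.0]
rx = equalise(channel(tx))
print([round(s, 6) for s in rx])   # recovers the transmitted symbols
```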
By standardising such a linear interface, multiple chip vendors will be able to drive the optics of multiple I/O chiplet companies.
“Everything is about power savings, and the way to get there is by optimising the link,” says Tracy.
224-gigabit electrical interfaces
The OIF’s next-generation 224Gbps electrical interface work continues to progress. Member input to date has tackled the challenges, opportunities and the technologies needed to double electrical interface speeds.
“We are surveying the playing field to understand where the really hard parts are,” says Tracy.
A White Paper is expected in the coming year that will capture how the industry views the issues and the possible solutions.
“If you have industry consensus then it is easier to start a project addressing the specific implementation to meet the problem,” says Tracy.
100-gigabaud optics usher in the era of terabit transmissions
Telecom operators are in a continual battle to improve the economics of their optical transport networks to keep pace with the relentless growth of IP traffic.
One approach is to increase the symbol rate used for optical transmission. By operating at a higher baud rate, more data can be carried on an optical wavelength.
Alternatively, a higher baud rate allows a simpler modulation scheme to be used, sending the same amount of data over greater distances. That is because the fewer constellation points of the simpler modulation scheme help data recovery at the receiver.
NeoPhotonics has detailed two optical components - a coherent driver-modulator and an intradyne coherent receiver (micro-ICR) - that operate at over 100 gigabaud (GBd). The symbol rate suits 800-gigabit systems and can enable one-terabit transmissions.
NeoPhotonics’ coherent devices were announced to coincide with the ECOC 2020 show.
Class 60 components
The OIF has a classification scheme for coherent optical components based on their analogue bandwidth performance.
A Class 20 receiver, for example, has a 3-decibel (dB) bandwidth of 20GHz. At the OFC 2019 show, NeoPhotonics announced Class 50 devices with a 50GHz 3dB bandwidth. The Class 50 modulator and receiver devices are now deployed in 800-gigabit coherent systems.
NeoPhotonics stresses the classes are not the only possible operating points. “It is possible to use baud rates in between these standard numbers,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics. “These classes are shorthand for a range of possible baud rates.”
“To get to 96 gigabaud, you have to be a little bit above 50GHz, typically a 55GHz 3dB bandwidth,” says Lipscomb. “With Class 60, you can go to 100 gigabaud and approach a terabit.”
It is unclear whether one-terabit coherent transponders will be widely used. Instead, Class 60 devices will likely be the mainstay for transmissions up to 800 gigabits, he says.

Source: NeoPhotonics, Gazettabyte
Design improvements
Several aspects of the components are enhanced to achieve Class-60 performance.
At the receiver, the photodetector’s bandwidth needs to be enhanced, as does that of the trans-impedance amplifier (TIA) used to boost the received signals before digitisation. In turn, the modulator driver must also be able to operate at a higher symbol rate.
“This is mainly analogue circuit design,” says Lipscomb. “You have to have a detector that will respond at those speeds so that means it can’t be a very big area; you can’t have much capacitance in the device.”
Similarly, the silicon germanium drivers and TIAs, to work at those speeds, must also keep the capacitance down given that the 3dB bandwidth is inversely proportional to the capacitance.
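The scaling Lipscomb describes follows from the first-order RC model of the front-end, where the 3dB bandwidth is 1/(2πRC). The values below are illustrative, not NeoPhotonics' actual device parameters:

```python
import math

# First-order RC model of a detector/TIA front-end: f_3dB = 1 / (2*pi*R*C).

def f3db_ghz(r_ohms, c_femtofarads):
    c = c_femtofarads * 1e-15                 # convert fF to farads
    return 1.0 / (2 * math.pi * r_ohms * c) / 1e9

print(round(f3db_ghz(50, 50), 1))   # 50 ohms, 50fF: 63.7 GHz
print(round(f3db_ghz(50, 100), 1))  # doubling C halves the bandwidth: 31.8 GHz
```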
Systems vendors Ciena, Infinera, and Huawei all have platforms supporting 800-gigabit wavelengths while Nokia‘s latest PSE-Vs coherent digital signal processor (DSP) supports up to 600 gigabit-per-wavelength.
Next-generation symbol rate
The next jump in symbol rate will be in the 120+ gigabaud range, enabling 1.2-terabit transmissions.
“As you push the baud rate higher, you have to increase the channel spacing,” says Lipscomb. “Channels can’t be arbitrary if you want to have any backward compatibility.”
A 50GHz channel is used for 100- and 200-gigabit transmissions at 32GBd. Doubling the symbol rate to 64GBd requires a 75GHz channel while a 100GBd Class 60 design occupies a 100GHz channel. For 128GBd, a 150GHz channel will be needed. “For 1.2 terabit, this spacing matches well with 75GHz channels,” says Lipscomb.
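Lipscomb's pairings of symbol rate and channel width can be tabulated, the backward-compatibility point being whether a wider channel is a whole multiple of an established grid width (a simplified view of flexible-grid planning):

```python
# Symbol rate (GBd) -> occupied channel width (GHz), per the examples above.
CHANNEL_GHZ = {32: 50, 64: 75, 100: 100, 128: 150}

for baud, width in sorted(CHANNEL_GHZ.items()):
    note = "(= 2 x 75GHz slots)" if width == 2 * 75 else ""
    print(f"{baud}GBd -> {width}GHz {note}")
```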
It remains unclear when 128GBd systems will be trialled but Lipscomb expects it will be 2022, with deployments in 2023.
Upping the baud rate enhances the reach and reduces channel count but it does not improve spectral efficiency. “You don’t start getting more data down a fibre,” says Lipscomb.
To boost transport capacity, a fibre’s C-band can be extended to span 6THz, dubbed the C++ band, adding up to 50 per cent more capacity. The L-band can also be used and that too can be extended. But two sets of optics and optical amplification are required when the C and L bands are used.
400ZR and OpenZR+
Lipscomb says the first 400ZR coherent pluggable deployments that link data centres up to 120km apart will start next year. The OIF 400ZR coherent standard is implemented using QSFP-DD or OSFP client-side pluggable modules.
“There is also an effort to standardise around OpenZR+ that has a little bit more robust definition and that may be 2022 before it is deployed,” says Lipscomb.
NeoPhotonics is a contributor member to the OpenZR+ industry initiative that extends optical performance beyond 400ZR’s 120km.
800-gigabit coherent pluggable
The OIF has just announced it is developing the next-generation of ZR optics, an 800-gigabit coherent line interface supporting links up to 120km. The 800-gigabit specification will also support unamplified fixed-wavelength links 2-10km apart.
“This [800ZR standard] will use between Class 50 and Class 60 optics and a 5nm CMOS digital signal processor,” says Lipscomb.
NeoPhotonics’ Class 60 coherent modulator and receiver components are indium phosphide-based. For the future 800-gigabit coherent pluggable, a silicon photonics coherent optical subassembly (COSA) integrating the modulator with the receiver is required.
NeoPhotonics has published work showing its silicon photonics operating at around 90GBd required for 800-gigabit coherent pluggables.
“This is a couple of years out, requiring another generation of DSP and another generation of optics,” says Lipscomb.
800G MSA defines PSM8 while eyeing 400G’s progress

A key current issue regarding data centres is forecasting the uptake of 400-gigabit optics.
If a rapid uptake of 400-gigabit optics occurs, it will also benefit the transition to 800-gigabit modules. But if the uptake of 400-gigabit optics is slower, some hyperscalers could defer and wait for 800-gigabit pluggables instead.
So says Maxim Kuschnerov, a spokesperson for the 800G Pluggable MSA (multi-source agreement).
The 800G MSA has issued its first 800-gigabit pluggable specification.
Dubbed the PSM8, the design uses the same components as 400-gigabit optics, doubling capacity in the same QSFP-DD pluggable form factor.
“Four-hundred-gigabit modules hitting volume is crucially important because the 800-gigabit specification leverages 400-gigabit components,” says Kuschnerov. “The more 400-gigabit is delayed, it impacts everything that comes after.”
PSM8
The PSM8 is an eight-channel parallel single-mode (PSM) fibre design, each fibre carrying 100 gigabits of data.
The 100m-reach PSM8 version 1.0 specification was published in August, less than a year after the 800G MSA was announced.
The 800G Pluggable MSA is developing two other 800-gigabit specifications based on 200-gigabit electrical and optical lanes.
One is a 500m four-fibre 800-gigabit implementation, each fibre a 200-gigabit channel. This is an 800-gigabit equivalent of the existing 400-gigabit IEEE DR4 standard.
The second design uses a single fibre carrying four coarse wavelength-division multiplexed (CWDM) channels, with a 2km reach, effectively an 800-gigabit CWDM4.
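The three MSA variants all reach the same 800-gigabit aggregate through different lane counts and rates. A quick sketch of the arithmetic (figures are from the article; the dictionary labels are just shorthand):

```python
# Lane arithmetic for the three 800G Pluggable MSA variants.
VARIANTS = {
    "PSM8 (100m)": (8, 100),        # eight fibres, 100G each
    "800G PSM4 (500m)": (4, 200),   # four fibres, 200G each
    "800G CWDM4 (2km)": (4, 200),   # four wavelengths on one fibre, 200G each
}

for name, (lanes, gbps_per_lane) in VARIANTS.items():
    total = lanes * gbps_per_lane
    print(f"{name}: {lanes} x {gbps_per_lane}G = {total} Gbps")
    assert total == 800
```

The PSM8 trades fibre count for component maturity: eight 100-gigabit lanes reuse today’s 400-gigabit optics, while the four-lane designs wait on 200-gigabit-per-lane technology.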
Specifications
The 800G MSA chose to tackle a parallel single-mode fibre design first because the components needed already exist. A competing initiative, the IEEE’s 100-gigabit-per-lane multi-mode fibre approach, will have a shorter reach.
“The IEEE has an activity for 100-gigabit per lane for multi-mode but the reach is 50m,” says Kuschnerov. “How much market will you get with a limited-reach objective?”
In contrast, the 100m reach of the PSM8 better serves applications in the data centre and offers a path for single-mode fibre which, long-term, will provide general data centre connectivity, argues Kuschnerov, whether parallel fibre or a CWDM approach.
Investment will also be needed to advance multi-mode optics to 100 gigabits per lane, whereas the PSM8 will use the 50-gigabaud optics already used by 400-gigabit modules.
Kuschnerov stresses that the PSM8 is not simply a repackaging of two IEEE 400-gigabit DR4 designs. The PSM8 uses more relaxed specifications to reduce cost, a possibility given the PSM8’s 100m reach compared to the DR4’s 500m.
“We have relaxed various specifications to enable more choice,” says Kuschnerov. For example, externally modulated lasers (EMLs), directly modulated lasers (DMLs) and silicon photonics-based designs can all be used.
The transmitter power has also been reduced by 2.5dB compared to the DR4, while the extinction ratio of the modulator is 1.5dB less.
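In linear terms, those decibel relaxations are significant. A minimal sketch of the standard dB-to-ratio conversion, applied to the 2.5dB transmitter-power relaxation quoted above (the conversion formula is standard; the interpretation as a launch-power ratio relative to the DR4 is this sketch’s reading of the article):

```python
def db_to_ratio(db: float) -> float:
    """Convert a decibel power difference to a linear power ratio."""
    return 10 ** (db / 10)

# A PSM8 transmitter may launch 2.5dB less power than a DR4 transmitter.
tx_ratio = db_to_ratio(-2.5)
print(f"-2.5dB => {tx_ratio:.2f}x the DR4 transmit power")  # ~0.56x
```

Roughly a 44 per cent reduction in required launch power, which is what opens the door to lower-power laser and modulator options.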
The need for 800-gigabit modules in the QSFP-DD800 form factor is to serve emerging 25.6-terabit Ethernet switches. Using 400-gigabit optics, a 2-rack-unit-high (2RU) switch is needed, whereas a 1RU switch platform is possible using 800-gigabit pluggables.
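The 1RU-versus-2RU claim is faceplate arithmetic. A sketch, assuming a typical figure of 32 QSFP-DD-style cages per 1RU faceplate (a common density, not a number quoted in the article):

```python
SWITCH_CAPACITY_GBPS = 25_600   # 25.6-terabit Ethernet switch ASIC
CAGES_PER_RU = 32               # assumed typical 1RU faceplate density

for module_gbps in (400, 800):
    modules = SWITCH_CAPACITY_GBPS // module_gbps
    rack_units = -(-modules // CAGES_PER_RU)  # ceiling division
    print(f"{module_gbps}G modules: {modules} ports -> {rack_units}RU")
# 400G modules: 64 ports -> 2RU
# 800G modules: 32 ports -> 1RU
```

Halving the module count is what lets the full switch capacity fit on a single rack unit’s faceplate.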
“The big data centre players all have different plans and their own roadmaps,” says Kuschnerov. “From our observation of the industry, the upgrading speed for 400 gigabit and 800 gigabit is slower than what was expected a year ago.”
First samples of the PSM8 module are expected in the second half of 2021 with volume production in 2023.
800-gigabit PSM4 and CWDM4
The members of the MSA have already undertaken pre-development work on the two other specifications that use 200-gigabit-per-lane optics: the 800-gigabit PSM4 and the CWDM4.
“It was a lot of work discussing the feasibility of 200-gigabit-per-lane,” says Kuschnerov. There is much experimental work to be done regarding the choice of modulation format and forward error correction (FEC) scheme which will need to be incorporated in future 4-level pulse-amplitude modulation (PAM-4) digital signal processors.
“We are progressing; the key is low power, and low latency is crucial here,” says Kuschnerov. A tradeoff will be needed in the chosen FEC scheme, ensuring sufficient coding gain while minimising its contribution to the overall latency.
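To see why the FEC choice feeds directly into latency, consider the codeword serialisation time alone. A back-of-envelope sketch using RS(544,514) “KP4”, the FEC of today’s 400GbE PAM-4 interfaces, as a baseline only (the MSA has not chosen a scheme):

```python
# RS(544,514) codeword: 544 ten-bit symbols.
BLOCK_BITS = 544 * 10
LANE_RATE_GBPS = 200   # proposed 200-gigabit-per-lane rate

# Time to clock one codeword onto the wire; decoder processing adds
# further delay on top, and stronger (higher-gain) codes add more still.
serialisation_ns = BLOCK_BITS / LANE_RATE_GBPS  # bits / (Gb/s) = ns
print(f"{serialisation_ns:.1f} ns per codeword")  # 27.2 ns
```

A stronger code with longer blocks or iterative decoding would buy coding gain at the cost of multiplying such delays, which is the tradeoff Kuschnerov describes.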
As for the modulation scheme, while different PAM schemes are possible, PAM-4 already looks like the front runner, says Kuschnerov.

The 800G Pluggable MSA is at the proof-of-concept stage, with a demonstration of working 200-gigabit-per-lane optics at the recent CIOE show held in Shenzhen, China. “Some of the components used are not just prototypes but are designed for this use case, although we are not there yet with an end-to-end product,” says Kuschnerov.
The designs will require 200-gigabit electrical and optical lanes. The OIF has only just started work on 200-gigabit electrical interfaces, which will likely not be completed until 2025. Achieving the required power consumption will also be a challenge.
Catalyst
Since the 800G Pluggable MSA embraced 200-gigabit-per-lane technology just over a year ago, other initiatives have adopted the rate.
The IEEE has started its ‘Beyond 400G’ initiative to define the next Ethernet specification, with both 800-gigabit and 1.6-terabit optics under consideration, as has the OIF with its next-generation 224-gigabit electrical interface.
“These activities will enable a 200-gigabit ecosystem,” says Kuschnerov. “Our focus is on 800-gigabit, but it is having a much wider impact beyond 4×200-gigabit; it is impacting 1.6 terabits and impacting serdes (serialisers/deserialisers).”
The 800G Pluggable MSA is doing its small part but what is needed is the development of an end-to-end 200-gigabit ecosystem, he says: “This is a challenging undertaking.”
The 800G Pluggable MSA now has 40 members including hyperscalers, switch makers, systems vendors, and component and module makers.