II-VI expands its 400G and 800G transceiver portfolio

II-VI has showcased its latest high-speed optics. The need for such client-side modules is being driven by the emergence of next-generation Ethernet switches in the data centre.
The demonstrations, part of the OFC virtual conference and exhibition held last month, featured two 800-gigabit and two 400-gigabit optical transceivers.
“We have seen the mushrooming of a lot of datacom transceiver companies, primarily from China, and some have grown pretty big,” says Sanjai Parthasarathi, chief marketing officer at II-VI.
But a key enabler for next-generation modules is the laser. “Very few companies have these leading laser platforms – whether indium phosphide or gallium arsenide, we have all of that,” says Parthasarathi.
During OFC, II-VI also announced the sampling of a 100-gigabit directly modulated laser (DML) and detailed an optical channel monitoring platform.
“We have combined the optical channel monitoring – the channel presence monitoring, the channel performance monitoring – and the OTDR into a single integrated subsystem, essentially a disaggregated monitoring system,” says Parthasarathi.
An optical time-domain reflectometer (OTDR) is used to characterise fibre.
High-speed client-side transceivers
II-VI demonstrated two 800-gigabit datacom products.
One is an OSFP form factor implementing 800-gigabit DR8 (800G-DR8) and the other is a QSFP-DD800 module with dual 400-gigabit FR4s (2x400G-FR4). The DR8 uses eight fibres in each direction, each carrying a 100-gigabit signal. The QSFP-DD800 supports two FR4s, each carrying four 100-gigabit wavelengths over single-mode fibre.

“These are standard IEEE-compliant reaches: 500m for the DR8 and 2km for the dual FR4 talking to individual FR4s,” says Vipul Bhatt, senior strategic marketing director, datacom at II-VI.
The 800G-DR8 module can be used as an 800-gigabit link or, when broken out, as two 400-gigabit DR4s or eight individual 100-gigabit DR optics.
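For readers tallying the lane arithmetic, a minimal sketch of how these aggregates and breakouts decompose (the figures follow the descriptions above; the code is illustrative only, not vendor software):
```python
# Illustrative only: how the 800-gigabit aggregates decompose into
# 100-gigabit lanes, and why the DR8 breakouts work.

MODULES = {
    # name: (lanes, gigabits per lane, description)
    "800G-DR8":   (8, 100, "eight parallel single-mode fibres per direction"),
    "2x400G-FR4": (8, 100, "two FR4s, each four 100-gigabit wavelengths"),
    "400G-DR4":   (4, 100, "breakout target: half of a DR8"),
    "100G-DR":    (1, 100, "breakout target: a single DR8 lane"),
}

for name, (lanes, rate, desc) in MODULES.items():
    print(f"{name}: {lanes} x {rate}G = {lanes * rate}G ({desc})")

# The common 100-gigabit lane rate is what makes the DR8 splittable
# into 2 x 400G-DR4 or 8 x 100G-DR without any rate conversion.
```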
II-VI chose to implement these two 800-gigabit interfaces based on the large-scale data centre players’ requirements. The latest switches use 25.6-terabit Ethernet chips that have 100-gigabit electrical interfaces while next-generation 51.2-terabit ICs are not far off. “Our optics is just keeping in phase with that rollout,” says Bhatt.
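A rough illustration of that pacing (the arithmetic here is ours, not II-VI’s): a 25.6-terabit chip with 100-gigabit electrical lanes exposes 256 lanes, which group naturally into 32 eight-lane, 800-gigabit ports.
```python
# Back-of-the-envelope arithmetic, not vendor data: mapping a switch
# chip's 100-gigabit electrical lanes onto 800-gigabit optical modules.

switch_gbps = 25_600                 # 25.6-terabit Ethernet switch chip
lane_gbps = 100                      # 100-gigabit PAM-4 electrical lane

lanes = switch_gbps // lane_gbps     # 256 electrical lanes
ports = lanes // 8                   # eight lanes per 800-gigabit module
print(f"{lanes} lanes -> {ports} x 800G ports")

# A 51.2-terabit chip at the same lane rate doubles this to 512 lanes,
# keeping 800-gigabit (and faster) optics on the same trajectory.
```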
During OFC, II-VI also showcased two 400-gigabit QSFP112 modules: a 400-gigabit FR4 (400G-FR4) and a multi-mode 400-gigabit SR4 (400G-SR4).
The SR4 consumes less power and is more cost-effective but has a shorter reach. “Not all large volume deployments of data centres are necessarily in huge campuses,” says Bhatt.
II-VI demonstrated its 800-gigabit dual FR4 module talking to two of its QSFP112 400-gigabit FR4s.
Bhatt says the IEEE 802.3db standard has two 400G-SR4 variants, one with a 50m reach and the second, a 100m reach. “We chose to demonstrate 100m because it is inclusive of the 50m capability,” says Bhatt.

II-VI stresses its breadth in supporting multi-mode, short-reach single-mode and medium-reach single-mode technologies.
The company says the electrical interface, rather than the optics, was the bigger challenge in developing its latest 400- and 800-gigabit modules.
The company has 100-gigabit multi-mode VCSELs, single-mode lasers, and optical assembly and packaging. “It was the maturity of the electrical interface [that was the challenge], for which we depend on other sources,” says Bhatt.
100-gigabit PAM-4 DML
II-VI revealed it is sampling a 100-gigabit PAM-4 directly modulated laser (DML).
Traditionally, client-side modules for the data centre come to market using a higher performance indium phosphide externally-modulated laser (EML). The EML may even undergo a design iteration before a same-speed indium phosphide DML emerges. The DML has simpler drive and control circuitry, is cheaper and has a lower power consumption.
“But as we go to higher speeds, I suspect we are going to see both [laser types] coexist, depending on the customer’s choice of worst-case dispersion and power tolerance,” says Bhatt. It is too early to say how the DML will fare against the various worst-case test specifications.
Parthasarathi adds that II-VI is developing 100-gigabit and 200-gigabit-per-lane laser designs. Indeed, the company had an OFC post-deadline paper detailing work on a 200-gigabit PAM-4 DML.
Optical monitoring system
Optical channel monitoring is commonly embedded in systems while coherent transceivers also provide performance metrics on the status of the optical network. So why has II-VI developed a standalone optical monitoring platform?
What optical channel monitors and coherent modules don’t reveal is when a connector is going bad or fibre is getting bent, says Parthasarathi: “The health and the integrity of the fibre plant, there are so many things that affect a transmission.”
Operators may have monitoring infrastructure in place but not necessarily monitoring of signal integrity or of the physical infrastructure. “If you have an existing network, this is a very easy way to add a monitoring capability,” says Parthasarathi.

“As we can control all the parts – the optical channel monitoring and the OTDR – we can configure it [the platform] to meet the application,” adds Sara Gabba, manager, analysis, intelligence & strategic marcom at II-VI. “Coherent indeed provides a lot of information, but this kind of unit is also suitable for access network applications.”
The optical monitoring system features an optical switch so it can cycle and monitor up to 48 ports.
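How a switched monitor might cycle through its ports can be sketched in a few lines; note that every name below (select_port, read_channels, run_otdr) is hypothetical, invented for illustration rather than taken from II-VI’s product:
```python
import time

# Hypothetical sketch of a switched optical monitor cycling 48 ports.
# The switch, OCM and OTDR objects and their methods are invented
# names for illustration, not a real II-VI API.

NUM_PORTS = 48
DWELL_SECONDS = 1.0   # assumed per-port dwell time

def monitor_cycle(switch, ocm, otdr):
    """One round-robin pass over every monitored fibre."""
    for port in range(NUM_PORTS):
        switch.select_port(port)        # route this fibre to the monitor
        channels = ocm.read_channels()  # channel presence and performance
        trace = otdr.run_otdr()         # locate bends and bad connectors
        yield port, channels, trace
        time.sleep(DWELL_SECONDS)
```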
With operators adopting disaggregated designs, each element in the optical network is required to have more intelligence and more autonomy.
“If you can provide this kind of intelligent monitoring and provide information about a specific link, you create the possibility to be more flexible,” says Gabba.
The monitoring platform allows intelligence to be distributed more widely in the optical network, complementing systems operators may already have deployed, she adds.
Inphi unveils first 800-gigabit PAM-4 signal processing chip

Inphi has detailed what it claims is the industry’s first digital signal processor (DSP) chip family for 800-gigabit client-side pluggable modules.
Dubbed Spica, the 4-level pulse-amplitude modulation (PAM-4) DSP family is sampling and is in the hands of customers.
The physical-layer company has also announced its third-generation Porrima family of PAM-4 DSPs for 400-gigabit pluggables.
The Porrima DSP with integrated laser driver is made using a 7nm CMOS process; until now, 16nm CMOS has been used. Fabricating the chip using the more advanced process will reduce the power consumption of 400-gigabit module designs.
Applications
Eight-hundred-gigabit multi-source agreements (MSAs) will enable a new generation of high-speed optical transceivers to come to market.
The 800G Pluggable MSA, which is developing optical specifications for 800-gigabit pluggable modules, is one that Inphi is promoting, while the QSFP-DD800 MSA is extending the double-density form factor to 800 gigabits.

The two main markets driving a need for 800-gigabit modules are artificial intelligence (AI) and data centre switching, says Eric Hayes, senior vice president, networking interconnect at Inphi.
“AI, while still in its infancy, has all these applications and workloads that it can drive,” he says. “But one thing they have in common when we look at the data centres building large AI clusters is that they have very large data sets and lots of data flow.”
The speed of the input-output (I/O) of the AI processors used in the clusters is rising to cope with the data flows.
The second application that requires 800-gigabit modules is the advent of 25.6-terabit Ethernet switches used to network equipment within the data centre.
Inphi says there are two types of 25.6-terabit switch chips emerging: one uses 50-gigabit PAM-4 while the second uses 100-gigabit PAM-4 electrical interfaces.
“The 25.6-terabit switch with 100-gigabit I/O is wanted for one-rack-unit (1RU) platforms,” says Hayes. “To do that, you need an 800-gigabit module.” Such switches have yet to reach the marketplace.
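The split between the two chip types comes down to SerDes lane counts and faceplate space; a quick sketch of the arithmetic (the conclusions mirror Hayes’ point, the numbers are simple division):
```python
# Illustrative comparison of the two 25.6-terabit switch-chip types;
# the figures follow from the lane rates alone.

capacity_gbps = 25_600
for lane_gbps in (50, 100):           # 50G versus 100G PAM-4 lanes
    lanes = capacity_gbps // lane_gbps
    modules = lanes // 8              # eight lanes per module
    print(f"{lane_gbps}G lanes: {lanes} SerDes, "
          f"{modules} x {8 * lane_gbps}G modules")

# 50G lanes  -> 512 SerDes, 64 x 400G modules: too many ports for 1RU
# 100G lanes -> 256 SerDes, 32 x 800G modules: fits a 1RU faceplate
```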
The first-generation AI processors used 25-gigabit non-return-to-zero (NRZ) signalling for the I/O while many of the devices shipping today use 50-gigabit PAM-4. “The latest designs that are coming to market have 100-gigabit I/O and we have the first DSP offering 100-gigabit on the host side,” says Hayes.
Spica and Porrima ICs
The Spica DSP takes 100-gigabit PAM-4 electrical signals from the host and performs retiming and pre-emphasis to generate the 100-gigabit PAM-4 signals used to modulate the optics before transmission. The laser driver is integrated on-chip.
The transmit path is a simpler design than the Porrima’s in that the signalling rate is the same at the input and the output. Accordingly, no gearbox circuitry is needed.
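Pre-emphasis here is typically a short feed-forward equaliser applied to the outgoing symbols; the sketch below shows the general idea with made-up three-tap weights (real coefficients are adapted per channel and are not Inphi’s):
```python
import numpy as np

# Sketch of transmit pre-emphasis: a short FIR applied to PAM-4 symbols
# to pre-compensate channel loss. The three tap weights are made up for
# illustration; they are not Inphi's coefficients.

PAM4_LEVELS = np.array([-3.0, -1.0, 1.0, 3.0])

def pre_emphasize(symbols, taps=(-0.1, 1.0, -0.2)):
    """Convolve the symbol stream with the FIR taps (main tap = 1.0)."""
    return np.convolve(symbols, taps, mode="same")

rng = np.random.default_rng(0)
tx = rng.choice(PAM4_LEVELS, size=16)  # random PAM-4 symbol stream
print(pre_emphasize(tx))               # emphasised transitions, same symbol rate
```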
The main signal processing is performed at the receiver to recover the transmitted PAM-4 signals. A hybrid design combining analogue and digital signal processing is used, similar to that of the Porrima.
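At its core, recovering PAM-4 means slicing each equalised sample against three decision thresholds; a minimal sketch, assuming the nominal -3/-1/+1/+3 levels (a real DSP adapts its equaliser and thresholds continuously):
```python
import numpy as np

# Minimal PAM-4 decision slicer: three thresholds map each equalised
# sample back to one of four nominal levels (two bits per symbol).

THRESHOLDS = np.array([-2.0, 0.0, 2.0])
LEVELS = np.array([-3, -1, 1, 3])

def slice_pam4(samples):
    """Return the nearest PAM-4 level for each received sample."""
    idx = np.searchsorted(THRESHOLDS, samples)  # index 0..3 per sample
    return LEVELS[idx]

rx = np.array([-2.8, -0.9, 1.2, 3.3, 0.1])   # noisy equalised samples
print(slice_pam4(rx))                        # -> [-3 -1  1  3  1]
```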
The Spica device supports 2×400-gigabit or 8×100-gigabit module designs and enables 800-gigabit or 8×100-gigabit optical interconnects. The 800-gigabit form factors used are the QSFP-DD800 and the OSFP. Inphi says both designs consume under 14W.
“The first module being built [using the Spica] is the OSFP because the end-user is demanding that, but we also have customers building QSFP-DDs,” says Hayes.
Meanwhile, Inphi’s Porrima family of devices is targeted at the 400G DR4 and 400G FR4 specifications as well as 100-gigabit module designs that use 100-gigabit PAM-4.
The two module types can even be combined when a 400-gigabit pluggable such as a QSFP-DD or an OSFP is used in breakout mode to feed four 100-gigabit modules implemented in form factors such as the QSFP, uQSFP or SFP-DD.
Transitioning the Porrima to a 7nm process saves 1.5W of power, says Hayes, resulting in an 8W 400-gigabit module. The latest Porrima is sampling and is with customers.
Roadmap
Inphi says optical modules using the Spica DSP will be deployed in volume from the second half of 2021.
Before then, the DSP will be tested as part of customers’ module designs and then integrated with the software before the complete 800-gigabit module is tested.
“There will then be interoperability testing between the modules once they become available and then small pilot networks using 800-gigabit modules will be built and tested before the go-ahead to mass deployment,” says Hayes.
All these stages will require at least a year’s work.

