Enabling 800-gigabit optics with physical layer ICs
Broadcom recently announced a family of 800-gigabit physical layer (PHY) chips, the company's first 800-gigabit ICs to use 100-gigabit input-output (I/O) interfaces.

Source: Broadcom
Moving from 50-gigabit to 100-gigabit-based I/O enables a new generation of 800-gigabit modules aligned with the latest switch chips.
“With the switch chip having 100-gigabit I/Os, PHYs are needed with the same interfaces,” says Machhi Khushrow, senior director of marketing, physical layer products division at Broadcom.
Broadcom’s latest 25.6-terabit-per-second (Tbps) Tomahawk 4 switch chip, which uses 100-gigabit I/O, was revealed at the same time.
800-gigabit PHY devices
The portfolio comprises three 800-gigabit PHY ICs. All operate at a symbol rate of 53 gigabaud, use 4-level pulse amplitude modulation (PAM-4) and are implemented in a 7nm CMOS process.
Two devices are optical PHYs: the BCM87800 and the BCM87802. These ICs are used within 800-gigabit optical modules such as the QSFP-DD800 and the OSFP form factors. The difference between the two chips is that the BCM87802 includes an integrated driver.
The third PHY - the BCM87360 - is a retimer IC used on line cards. Whether the chip is needed depends on the line card design and signal-integrity requirements; for example, whether the line card is used within a pizza box or part of a chassis-based platform.

Source: Broadcom
“If it is a higher-density card that is relatively small, it may only need 15 per cent of the ports with retimers,” says Khushrow. “If the line card is larger, where things fan out to longer traces, retimers may be needed for all the ports.”
All three 800-gigabit PHYs have eight 100-gigabit transmit and eight receive channels (8:8, as shown in the top diagram).
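The lane arithmetic can be sketched as follows. A PAM-4 symbol carries two bits, so a lane at roughly 53 gigabaud yields just over 100Gbps raw, and eight such lanes give the 800-gigabit total. (The exact 53.125GBd figure and the FEC-overhead note are assumptions for illustration; the article only says "53 gigabaud".)

```python
# Rough lane arithmetic for a 100-gigabit-per-lane PAM-4 interface.
BITS_PER_PAM4_SYMBOL = 2        # PAM-4 encodes 2 bits per symbol
symbol_rate_gbd = 53.125        # assumed exact rate; the article rounds to 53

raw_lane_gbps = symbol_rate_gbd * BITS_PER_PAM4_SYMBOL   # raw bits per lane
lanes = 8                                                # 8 transmit (and 8 receive) channels

raw_total_gbps = raw_lane_gbps * lanes
print(raw_lane_gbps)    # 106.25 Gbps raw per lane
print(raw_total_gbps)   # 850.0 Gbps raw; ~800 Gbps of payload after coding/FEC overhead
```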
Applications
The optical devices support several 800-gigabit module designs that use either silicon photonics, directly modulated lasers (DMLs) or externally-modulated lasers (EMLs).
The 800-gigabit PHYs support the DR8 module (eight single-mode fibres, 500m reach), two 400-gigabit DR4s in a module (each using four single-mode fibres, 500m), two FR4s in a module (each using four wavelengths over a single-mode fibre, 2km), as well as the SR8, a parallel VCSEL-based design with a 100m reach over parallel multi-mode fibre.
Timescales
Given the availability of these PHYs and that 800-gigabit modules will soon appear, will the development diminish the 400-gigabit market opportunity?
“This is independent of 400-gigabit [module] deployments,” says Khushrow.
The hyperscalers are deploying different architectures: some are only now transitioning to 200-gigabit modules while others are moving to 400-gigabit. They will all transition to 800 gigabit, he says: “How and when they transition are all at different points.”
Some of the hyperscalers deploying 400-gigabit modules are looking at 800 gigabit, and their deployment plans are maybe two to three years out. “We don’t expect 800 gigabit to cannibalise 400 gigabit, at least not in the near term,” he says.
Broadcom says 800-gigabit modules are expected to ship in the second half of this year. “It all depends on how the switch infrastructure, line cards and optics become available,” says Khushrow.
Next developments
The landscape for high-speed networking in the data centre is changing and optics is moving closer to the switch chip, whether it is on-board optics or co-packaged optics.
“People are looking at both options,” says Khushrow. “It depends on the architecture of the data centre whether they use on-board optics or co-packaged optics.”
Meanwhile, the OIF is working on a 200-gigabit electrical interface standard.
Co-packaged optics is challenging and the technology has its own issues whereas optical transceivers are easier to use and deploy, says Khushrow.
Current industry thinking is that some form of co-packaged optics will be used with the advent of next-generation 51.2-terabit switch chips. But even with such capacity switches, pluggables will continue to be used, he says.
There will still be a need for PHYs, whether for pluggables, co-packaged designs or on the line card.
“We will continue to provide those on our roadmap,” says Khushrow. “It is just a matter of what the form factor will be, whether it will be a packaged part or a die part.”
100-gigabaud optics usher in the era of terabit transmissions
Telecom operators are in a continual battle to improve the economics of their optical transport networks to keep pace with the relentless growth of IP traffic.
One approach is to increase the symbol rate used for optical transmission. By operating at a higher baud rate, more data can be carried on an optical wavelength.
Ferris Lipscomb
Alternatively, a higher baud rate allows a simpler modulation scheme to be used, sending the same amount of data over greater distances. That is because the fewer constellation points of the simpler modulation scheme help data recovery at the receiver.
NeoPhotonics has detailed two optical components - a coherent driver-modulator and an intradyne coherent receiver (micro-ICR) - that operate at over 100 gigabaud (GBd). The symbol rate suits 800-gigabit systems and can enable one-terabit transmissions.
NeoPhotonics’ coherent devices were announced to coincide with the ECOC 2020 show.
Class 60 components
The OIF has a classification scheme for coherent optical components based on their analogue bandwidth performance.
A Class 20 receiver, for example, has a 3-decibel (dB) bandwidth of 20GHz. At the OFC 2019 show, NeoPhotonics announced Class 50 devices with a 50GHz 3dB bandwidth; these Class 50 modulator and receiver devices are now deployed in 800-gigabit coherent systems.
NeoPhotonics stresses the classes are not the only possible operating points. “It is possible to use baud rates in between these standard numbers,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics. “These classes are shorthand for a range of possible baud rates.”
“To get to 96 gigabaud, you have to be a little bit above 50GHz, typically a 55GHz 3dB bandwidth,” says Lipscomb. “With Class 60, you can go to 100 gigabaud and approach a terabit.”
It is unclear whether one-terabit coherent transponders will be widely used. Instead, Class 60 devices will likely be the mainstay for transmissions up to 800 gigabits, he says.
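As a rule of thumb (my illustration, not NeoPhotonics' figures), a dual-polarisation coherent link carries baud rate times bits-per-symbol times two polarisations before FEC overhead. That is why a Class 60 part at around 100 gigabaud reaches 800 gigabits with 16QAM and approaches a terabit with a denser constellation:

```python
def raw_coherent_rate_gbps(baud_gbd, bits_per_symbol, polarisations=2):
    """Raw line rate before FEC overhead: symbols/s x bits/symbol x polarisations."""
    return baud_gbd * bits_per_symbol * polarisations

# ~100 GBd with dual-polarisation 16QAM (4 bits/symbol): 800 Gbps raw
print(raw_coherent_rate_gbps(100, 4))   # 800

# The same baud rate with 32QAM (5 bits/symbol) approaches a terabit
print(raw_coherent_rate_gbps(100, 5))   # 1000
```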

Source: NeoPhotonics, Gazettabyte
Design improvements
Several aspects of the components are enhanced to achieve Class 60 performance.
At the receiver, the photodetector’s bandwidth needs to be enhanced, as does that of the trans-impedance amplifier (TIA) used to boost the received signals before digitisation. In turn, the modulator driver must also be able to operate at a higher symbol rate.
“This is mainly analogue circuit design,” says Lipscomb. “You have to have a detector that will respond at those speeds so that means it can’t be a very big area; you can’t have much capacitance in the device.”
Similarly, the silicon germanium drivers and TIAs, to work at those speeds, must also keep the capacitance down given that the 3dB bandwidth is inversely proportional to the capacitance.
Systems vendors Ciena, Infinera and Huawei all have platforms supporting 800-gigabit wavelengths, while Nokia's latest PSE-Vs coherent digital signal processor (DSP) supports up to 600 gigabits per wavelength.
Next-generation symbol rate
The next jump in symbol rate will be in the 120+ gigabaud range, enabling 1.2-terabit transmissions.
“As you push the baud rate higher, you have to increase the channel spacing,” says Lipscomb. “Channels can’t be arbitrary if you want to have any backward compatibility.”
A 50GHz channel is used for 100- and 200-gigabit transmissions at 32GBd. Doubling the symbol rate to 64GBd requires a 75GHz channel, while a 100GBd Class 60 design occupies a 100GHz channel. For 128GBd, a 150GHz channel will be needed. “For 1.2 terabit, this spacing matches well with 75GHz channels,” says Lipscomb.
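The spacing figures quoted above follow a consistent pattern. The small check below (spacing values from the article) shows each channel is between 1.0 and 1.6 times the symbol rate, and that the 150GHz channel needed for 128GBd lands on exactly two slots of a 75GHz grid, which is the backward-compatibility point Lipscomb makes:

```python
# (symbol rate in GBd, channel spacing in GHz) pairs quoted in the article
pairs = [(32, 50), (64, 75), (100, 100), (128, 150)]

ratios = [round(spacing / baud, 2) for baud, spacing in pairs]
print(ratios)   # [1.56, 1.17, 1.0, 1.17] -- spacing-to-baud ratios

# 150 GHz is exactly two 75 GHz grid slots, preserving grid compatibility
print(150 // 75)   # 2
```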
It remains unclear when 128GBd systems will be trialled but Lipscomb expects it will be 2022, with deployments in 2023.
Upping the baud rate enhances the reach and reduces channel count but it does not improve spectral efficiency. “You don’t start getting more data down a fibre,” says Lipscomb.
To boost transport capacity, a fibre’s C-band can be extended to span 6THz, dubbed the C++ band, adding up to 50 per cent more capacity. The L-band can also be used and that too can be extended. But two sets of optics and optical amplification are required when the C and L bands are used.
400ZR and OpenZR+
Lipscomb says the first 400ZR coherent pluggable deployments that link data centres up to 120km apart will start next year. The OIF 400ZR coherent standard is implemented using QSFP-DD or OSFP client-side pluggable modules.
“There is also an effort to standardise around OpenZR+ that has a little bit more robust definition and that may be 2022 before it is deployed,” says Lipscomb.
NeoPhotonics is a contributor member to the OpenZR+ industry initiative that extends optical performance beyond 400ZR’s 120km.
800-gigabit coherent pluggable
The OIF has just announced it is developing the next generation of ZR optics, an 800-gigabit coherent line interface supporting links of up to 120km. The 800-gigabit specification will also support unamplified fixed-wavelength links 2-10km apart.
“This [800ZR standard] will use between Class 50 and Class 60 optics and a 5nm CMOS digital signal processor,” says Lipscomb.
NeoPhotonics’ Class 60 coherent modulator and receiver components are indium phosphide-based. For the future 800-gigabit coherent pluggable, a silicon photonics coherent optical subassembly (COSA) integrating the modulator with the receiver is required.
NeoPhotonics has published work showing its silicon photonics operating at the roughly 90GBd required for 800-gigabit coherent pluggables.
“This is a couple of years out, requiring another generation of DSP and another generation of optics,” says Lipscomb.
Ayar Labs’ TeraPhy chiplet nears volume production
Moving data between processing nodes - whether servers in a data centre or specialised computing nodes used for supercomputing and artificial intelligence (AI) - is becoming a performance bottleneck.
Workloads continue to grow yet networking isn’t keeping pace with processing hardware, resulting in the inefficient use of costly hardware.
Networking also accounts for an increasing proportion of the overall power consumed by such computing systems.
These trends explain the increasing interest in placing optics alongside chips and co-packaging the two to boost input-output (I/O) capacity and reach.
At the ECOC 2020 exhibition and conference held virtually, start-up Ayar Labs showcased its first working TeraPHY, an optical I/O chiplet, manufactured using GlobalFoundries’ 45nm silicon-photonics process.
GlobalFoundries is a strategic investor in Ayar Labs and has been supplying Ayar Labs with TeraPHY chips made using its existing 45nm silicon-on-insulator process for radio frequency (RF) designs.
The foundry’s new 300mm wafer 45nm silicon-photonics process follows joint work with Ayar Labs, including the development of the process design kit (PDK) and standard cells.
“This is a process that mixes optics and electronics,” says Hugo Saleh, vice president of marketing and business development at Ayar Labs (pictured). “We build a monolithic die that has all the logic to control the optics, as well as the optics,” he says.
The latest TeraPHY design is an important milestone for Ayar Labs as it looks to become a volume supplier. “None of the semiconductor manufacturers would consider integrating a solution into their package if it wasn’t produced on a qualified high-volume manufacturing process,” says Saleh.
Applications
The TeraPHY chiplet can be co-packaged with such devices as Ethernet switch chips, general-purpose processors (CPUs), graphics processing units (GPUs), AI processors, and field-programmable gate arrays (FPGAs).
Ayar Labs says it is engaged in several efforts to add optics to Ethernet switch chips, the application most associated with co-packaged optics, but its focus is AI, high-performance computing and aerospace applications.
Last year, Intel and Ayar Labs detailed a Stratix 10 FPGA co-packaged with two TeraPHYs for a phased-array radar design, part of DARPA’s PIPES programme and the US government-backed Electronics Resurgence Initiative.
Adding optical I/O chiplets to FPGAs suits several aerospace applications including avionics, satellite and electronic warfare.
TeraPHY chiplet
The ECOC-showcased TeraPHY uses eight transmitter-receiver pairs, each pair supporting eight channels operating at either 16, 25 or 32 gigabit-per-second (Gbps), to achieve an optical I/O of up to 2.048 terabits.
The chiplet can use either a serial electrical interface or Intel’s Advanced Interface Bus (AIB), a wide-bus design that uses slower 2Gbps channels. The latest TeraPHY uses a 32Gbps non-return-to-zero (NRZ) serial interface and Saleh says the company is working on a 56Gbps version.
The company has also demonstrated 4-level pulse-amplitude modulation (PAM-4) technology, but many of its target applications require the lowest-latency links possible.
“PAM-4 gives you a higher data rate but it comes with the tax of forward-error correction,” says Saleh. With PAM-4 and forward-error correction, the latency is hundreds of nanoseconds (ns), whereas an NRZ link’s latency is 5ns.
Ayar Labs’ next parallel I/O, AIB-based TeraPHY design will use Intel’s AIB 1.0 specification, with 16 cells, each having 80 2Gbps channels, to achieve a 2.5Tbps electrical interface.
In contrast, the TeraPHY used with the Stratix 10 FPGA has 24 AIB cells, each having 20 2Gbps channels, for an overall electrical bandwidth of 960 gigabits, while its optical I/O is 2.56Tbps since 10 transmit-receive pairs are used.
The optical bandwidth is deliberately higher than the electrical bandwidth. First, not all the transmit-receive macros on the die need to be used. Second, the chiplet has a crossbar switch that allows one-to-many connections such that an electrical channel can be sent out on more than one optical interface and vice versa.
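The bandwidth figures above check out arithmetically; this sketch simply multiplies the lane counts and per-lane rates given in the article:

```python
def aggregate_gbps(groups, lanes_per_group, gbps_per_lane):
    """Aggregate bandwidth: number of cells/pairs x channels per cell x rate per channel."""
    return groups * lanes_per_group * gbps_per_lane

# ECOC-showcased TeraPHY: 8 TX/RX pairs x 8 channels x 32 Gbps
print(aggregate_gbps(8, 8, 32))     # 2048 -> the 2.048 Tbps optical I/O

# Next AIB 1.0 design: 16 cells x 80 channels x 2 Gbps
print(aggregate_gbps(16, 80, 2))    # 2560 -> the ~2.5 Tbps electrical interface

# Stratix 10 version: 24 AIB cells x 20 channels x 2 Gbps electrical...
print(aggregate_gbps(24, 20, 2))    # 960 Gbps electrical

# ...against 10 optical transmit-receive pairs x 8 channels x 32 Gbps
print(aggregate_gbps(10, 8, 32))    # 2560 -> 2.56 Tbps optical
```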
Architectures
Saleh points to several recent announcements that highlight the changes taking place in the industry that are driving new architectural developments.
He cites AMD’s acquisition of programmable logic player Xilinx; Apple instances now being hosted in Amazon Web Services’ (AWS) cloud so developers can build for Apple’s processors; and AWS and Microsoft developing their own processors.
“Processors can now be built by companies using TSMC’s leading process technology using the ARM and RISC-V processor ecosystems,” he says. “AWS and Microsoft can target their codebase to whatever processor they want, including one developed by themselves.”
Saleh notes that Ethernet remains a key networking technology in the data centre and will continue to evolve but certain developments do need something else.
Applications such as AI and high-performance computing would benefit from a disaggregated design whereby CPUs, GPUs, AI devices and memory are separated and pooled. An application can then select the hardware it needs from the relevant pools to create the exact architecture it requires.
“Some of these new applications and processors that are popping up, there is a lot of benefit in a one-to-one and one-to-many connections,” he says. “The Achilles heel has always been how you disaggregate the memory because of latency and power concerns. Co-packaged optics with the host ASIC is the only way to do that.”
It will also be the only way such disaggregated designs will work given that far greater connectivity - estimated to be up to 100x that of existing systems - will be needed.
Expansion
Ayar Labs announced in November that it had raised $35 million in the second round of funding which, it says, was oversubscribed. This adds to its previous funding of $25 million.
The latest round includes four new investors and will help the start-up expand and address new markets.
One investor is a UK firm, Downing, that will connect Ayar Labs to European R&D and product opportunities. Saleh mentions the European Processor Initiative (EPI) that is designing a family of low-power European processors for extreme-scale computing. “Working with Downing, we are getting introduced into some of these initiatives including EPI and having conversations with the principals,” he says.
In turn, SGInnovate, a venture capital firm funded by the Singapore government, will help expand Ayar Labs’ activities in Asia. The two other investors are Castor Ventures and Applied Ventures, the investment arm of Applied Materials, the supplier of chip fabrication plant equipment.
“Applied Materials want to partner with us to develop the methodologies and tools to bring the technology to market,” says Saleh.
Meanwhile, Ayar Labs continues to grow, with a staff count approaching 100.