Lumentum's optical circuit switch for AI data centres

Part 3: Data Centre Switching
The resurgence of optical circuit switches for use in data centres is gaining momentum, driven by artificial intelligence (AI) workloads that require scalable connectivity.
Lumentum is one of several companies that showcased an optical circuit switch at the OFC event in San Francisco in March. Lumentum’s R300 switch optically connects any of its 300 input ports to any of its 300 output ports. The switch uses micro-electro-mechanical systems (MEMS), tiny mirrors that tilt electrostatically, to steer light from an input port to the chosen output port.
The R300 addresses the network needs of AI data centres, helping link large numbers of AI accelerator chips such as graphics processing units (GPUs).
“We’ve been talking to all the hyperscalers in North America and China,” says Peter Roorda, general manager of the switching business unit at Lumentum. “The interest is pretty broad for the applications of interconnecting GPUs and AI clusters; that’s the exciting one.”
Optical circuit switches
In a large-scale data centre, two or three tiers of electrical switch platforms link the many servers’ processors. The number of tiers needed depends on the overall processor count. The same applies to the back-end network used for AI workloads. These tiers of electrical switches are arranged in what is referred to as a Clos or “Fat Tree” architecture.

Google presented a paper in 2022 revealing that it had been using an internally developed MEMS-based optical circuit switch for several years. Google used its optical circuit switches to replace all the top-tier ‘spine’ layer electrical switches across its data centres, resulting in significant cost and power savings.
Google subsequently revealed a second use for its switches: directly connecting racks of its tensor processing unit (TPU) accelerator chips. Google can move workloads across thousands of TPUs in a cluster, making efficient use of its hardware and bypassing a rack when a fault arises.
Google’s revelation rejuvenated interest in optical switch technology, and at OFC, Lumentum showed its first R300 optical switch product in operation.
Unlike packet switches, which use silicon to process data at the packet level, an optical circuit switch sets up a fixed, point-to-point optical connection, akin to a telephone switchboard, for the duration of a session.
The optical switch is ideal for scenarios where large, sustained data flows are required, such as in AI training clusters.
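One way to picture how such a switch is driven is as a connection map from input ports to output ports. The sketch below is a hypothetical illustration of that control model, not Lumentum’s actual software interface.

```python
# Toy model of an optical circuit switch control plane (hypothetical, not Lumentum's API).
# A circuit is a fixed one-to-one input-to-output mapping; no packets are inspected.

class OpticalCircuitSwitch:
    def __init__(self, ports: int = 300):
        self.ports = ports
        self.in_to_out = {}   # input port -> output port
        self.out_to_in = {}   # output port -> input port

    def connect(self, in_port: int, out_port: int) -> None:
        """Point the MEMS mirror for in_port at out_port (both ports must be free)."""
        if not (0 <= in_port < self.ports and 0 <= out_port < self.ports):
            raise ValueError("port out of range")
        if in_port in self.in_to_out or out_port in self.out_to_in:
            raise ValueError("port already in use; disconnect it first")
        self.in_to_out[in_port] = out_port
        self.out_to_in[out_port] = in_port

    def disconnect(self, in_port: int) -> None:
        """Tear down the circuit starting at in_port."""
        out_port = self.in_to_out.pop(in_port)
        del self.out_to_in[out_port]


ocs = OpticalCircuitSwitch()
ocs.connect(0, 17)    # light entering port 0 now exits port 17, at any data rate
ocs.connect(1, 299)
ocs.disconnect(0)     # MEMS reconfiguration typically takes milliseconds, not nanoseconds
```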

Merits
The optical circuit switch’s benefits include cost and power savings and improved latency. Optical-based switch ports are data-rate independent. They can support 400 gigabit, 800 gigabit, and soon 1.6-terabit links without requiring an upgrade.
“Now, it’s not apples to apples; the optical circuit switch is not a packet switch,” says Roorda. “It’s just a dumb circuit switch, so there must be control plane software to manage it.” However, the cost, power, space savings, and port transparency incentives suffice for the hyperscalers to invest in the technology.
The MEMS-based R300
Lumentum has a 20-year history using MEMS. It first used the technology in the wavelength-selective switches it supplies for telecom networks, before adopting liquid crystal on silicon (LCOS) technology for those products.
“We have 150,000 MEMS-based wavelength selective switches in the field,” says Roorda. “This gives us a lot of confidence about their reliability.”
MEMS-based switches are renowned for their manufacturing complexity, an area where Lumentum’s experience counts.
“This is a key claim as users are worried about the mechanical aspect of MEMS’ reliability,” says Michael Frankel, an analyst at LightCounting Market Research, which published an April report covering Ethernet, Infiniband and optical switches in cloud data centres. “Having a reliable volume manufacturer is critical.”
In its system implementation, Google revealed that it uses bi-directional transceivers in conjunction with its optical circuit switches.
“Using bi-directional ports is clever because you get to double the ports out of your optical circuit switch for the same money,” says Mike DeMerchant, Lumentum’s senior director of product line management, optical circuit switch. “But then you need customised, non-standard transceivers.”
A bi-directional design complicates the control plane management software because bi-directional transceivers effectively create two sets of connections. “The two sets of transceivers can only talk in a limited fashion between each other, so you have to manage that additional control plane complexity,” says DeMerchant.
Lumentum enters the market with a 300×300 radix switch. Some customers have asked about a 1,000×1,000 port switch. From a connectivity perspective, bigger is better, says Roorda. “But bigger is also harder; if there is a problem with that switch, the consequences of a failure—the blast radius—are larger too,” he says.

Lumentum says there are requests for smaller optical circuit switches and expects to offer a portfolio of different-sized products in the next two years.
The R300 switch is cited as having a 3dB insertion loss, but Roorda says the typical performance is close to 1.5dB at the start of life. “And 3dB is good enough for using a standard off-the-shelf -FR4 or a -DR4 or -DR8 optical module [with the switch],” says Roorda.
A 400G QSFP-DD FR4 module uses four wavelengths on a single-mode fibre and has a reach of 2km, whereas a DR4 or DR8 uses a single wavelength on each fibre and has 4 or 8 single-mode fibre outputs, respectively, with a reach of 500m.
An FR4 interface is ideal with an optical circuit switch since its multiple wavelengths travel on a single fibre and can be routed through one port. However, many operators use DR4 and DR8 interfaces and are exploring how such transceivers could work with the switch.
“More ports would be consumed, diluting the cost-benefit, but the power savings would still be significant,” says Roorda. Additionally, in some applications, individually routing and recombining the separate ‘rails’ of a DR4 or DR8 offers greater networking granularity. Here, the optical circuit switch still provides value, he says.
One issue with an optical circuit switch compared to an electrical one is that the optical signal passes through both switch ports before reaching the destination transceiver, adding an extra 3dB of loss. By contrast, with an electrical switch, the signal is regenerated optically by the pluggable transceiver at the output port.
LightCounting’s Frankel also highlights the switch’s loss numbers. “Lumentum’s claim of a low loss – under 2dB – and a low back reflection (some 60dB) are potential differentiators,” he says. “It is also a broadband design – capable of operating across the O-, C- and L-bands: O-band for data centre and C+L for telecom.”
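Because the switch’s loss comes straight out of the transceiver’s optical budget, a quick back-of-envelope check shows why a 3dB worst-case loss still leaves a standard -FR4 style link workable. The budget and loss figures below are illustrative assumptions, not vendor specifications.

```python
# Illustrative link-budget check (assumed figures, not vendor specifications).
def remaining_margin_db(module_budget_db, fibre_km, fibre_loss_db_per_km,
                        connector_loss_db, ocs_loss_db):
    """Optical budget left after fibre, connectors and the circuit switch."""
    return module_budget_db - (fibre_km * fibre_loss_db_per_km
                               + connector_loss_db + ocs_loss_db)

# Assume roughly 4 dB of host-to-host budget for a 2 km-class -FR4 link,
# a 500 m fibre run at 0.4 dB/km, and 0.5 dB of connector loss.
print(remaining_margin_db(4.0, 0.5, 0.4, 0.5, 3.0))   # ~0.3 dB left at the 3 dB worst-case switch loss
print(remaining_margin_db(4.0, 0.5, 0.4, 0.5, 1.5))   # ~1.8 dB left at the typical 1.5 dB loss
```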
Software and Hyperscaler Control
Lumentum controls the switch using the open-source, Linux-based SONiC (Software for Open Networking in the Cloud) network operating system (NOS). The hyperscalers will add their own proprietary, higher-level control plane management software.
“It’s the basic control features for the optics, so we’re not looking to get into the higher control plane,” says DeMerchant.
Challenges and Scalability
Designing a 300×300 optical circuit switch is complicated. “It’s a lot of mirrors,” says Roorda. “You’ve got to align them, so it is a complicated, free-space, optical design.”
Reliability and scalable manufacturing are hurdles. “The ability to build these things at scale is the big challenge,” says Roorda. Lumentum argues that its stable MEMS design results in a reliable, simpler, and less costly switch. Lumentum envisions data centres evolving to use a hybrid switching architecture, blending optical circuit switches with Ethernet switches.
Roorda compares it to how telecom networks transitioned to using reconfigurable optical add-drop multiplexers (ROADMs). “It’ll be hybridised with packet switches because you need to sort the packets sometimes,” says Roorda.
Future developments may include multi-wavelength switching and telecom applications for optical circuit switches. “For sure, it is something that people are talking about,” he adds.
Lumentum says its R300 will be generally available in the second half of this year.
Tomahawk 6: The industry’s first 100-terabit switch chip

Part 2: Data Centre Switching
Peter Del Vecchio, product manager for the Tomahawk switch family at Broadcom, outlines the role of the company’s latest Tomahawk 6 Ethernet switch chip in AI data centres.
Broadcom is now shipping samples of its Tomahawk 6, the industry’s first 102.4-terabit-per-second (Tbps) Ethernet switch chip. The chip highlights AI’s impact on Ethernet switch chip design since Broadcom launched its current leading device, the 51.2-terabit Tomahawk 5. The Tomahawk 6 is evolutionary rather than a complete change, notes Del Vecchio. The design doubles bandwidth and adds enhanced networking features to support AI scale-up and scale-out networks.
“Nvidia is the only other company that has announced a 102.4-terabit switch, and it’s scheduled for production in 2026,” says Bob Wheeler, analyst at large at market research firm LightCounting, adding that Nvidia sells switches, not chips.

Multi-die architecture
The Tomahawk 6 marks a shift from the monolithic chip design of the Tomahawk 5 to a multi-die architecture.
The 102.4-terabit Tomahawk 6 comes in two versions. One has 512 input-output lanes, serialisers/deserialisers (serdes), each operating at 200 gigabits using 4-level pulse amplitude modulation (PAM-4) signalling. The other Tomahawk 6 version has 1,024 serdes, each using 100-gigabit PAM-4.
“The core die is identical between the two, the only difference are the chiplets that are either for 100 gig or 200 gig PAM-4,” says Del Vecchio. The core die hosts the packet processing and traffic management logic.
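The headline capacity follows directly from the lane count and the lane rate; the short sketch below simply checks the arithmetic for the two Tomahawk 6 variants and Del Vecchio’s power comment.

```python
# Aggregate switch bandwidth = number of serdes lanes x rate per lane.
def switch_capacity_tbps(lanes: int, gbps_per_lane: int) -> float:
    return lanes * gbps_per_lane / 1000

print(switch_capacity_tbps(512, 200))    # 102.4 Tbps with 200G PAM-4 serdes
print(switch_capacity_tbps(1024, 100))   # 102.4 Tbps with 100G PAM-4 serdes

# At under 1 W per 100 Gbps, 102.4 Tbps implies well under 1,024 W for the whole chip,
# consistent with "well below 1,000 watts".
```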
The chip uses a 3nm CMOS process node, which improves power efficiency compared to the 5nm CMOS Tomahawk 5.
Broadcom does not quote exact power figures for the chip. “The Tomahawk 6 is significantly less than one watt per 100 gigabits-per-second, well below 1,000 watts,” says Del Vecchio. In contrast, the Tomahawk 5 consumes less than 512 watts.
AI networking: Endpoint-scheduled fabrics
The Tomahawk 6 chip is designed for AI clusters requiring near-100 per cent network utilisation.
“With previous data centre networks, it was unusual that the networks would be loaded to more than 60 to 70 per cent utilisation,” says Del Vecchio. “For AI, that’s unacceptable.”
The chip supports endpoint-scheduled fabrics, where traffic scheduling and load balancing occur at the endpoints to ensure the traffic is efficiently distributed across the network. An endpoint could be a network interface card (NIC) or an AI accelerator interface.
This contrasts with Broadcom’s other switch chip family, the Jericho 3-AI and the Ramon, which are designed for switch-scheduled fabrics. Here, the switch chips handle the networking and packet spraying, working alongside simpler endpoint hardware.
The type of switch chip used, endpoint-scheduled or switch-scheduled, depends on the preferences of service providers and hyperscalers. Broadcom says there is demand for both networking approaches.
The Tomahawk 6 uses Broadcom’s latest cognitive routing suite and enhanced telemetry to address the evolving AI traffic patterns.
The market shifted dramatically in 2022, says Del Vecchio, with demand moving from general data centre networking to one focused on AI’s needs. The trigger was the generative AI surge caused by the emergence of ChatGPT in November 2022, after the Tomahawk 5 was already shipping.
“There was some thought of AI training and for inference [with the Tomahawk 5], but the primary use case at that point was thought to be general data centre networks,” says Del Vecchio.
Wide and flat topologies
Tomahawk 6 supports two-tier networks connecting up to 128,000 AI accelerator chips, such as graphics processing units (GPUs). This assumes 200 gigabits per endpoint, which may be insufficient for the I/O requirements of the latest AI accelerator chips.
To achieve higher bandwidth per end-point – 800 gigabit or 1.6 terabit – multiple network planes are used in parallel, each adding 200 gigabits. This way, Broadcom’s design avoids adding an extra third tier of network switching.

“Rather than having three tiers, you have multiple networking planes, say, eight of those in parallel,” says Del Vecchio. Such a wide-and-flat topology minimises latency and simplifies congestion control, which is critical for AI workloads. “Having a two-tier network versus a three-tier network makes congestion control much easier,” he says.
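A rough way to see where the 128,000 figure comes from is the standard two-tier (leaf-spine) sizing rule: with radix-R switches and no oversubscription, half of each leaf’s ports face endpoints and half face spines, giving R²/2 endpoints. The sketch below applies that rule; it is a simplification that ignores oversubscription and real deployment constraints.

```python
# Two-tier (leaf-spine) sizing with radix-R switches and no oversubscription.
def max_endpoints_two_tier(radix: int) -> int:
    leaves = radix                 # each of the radix/2 spines connects once to every leaf
    endpoints_per_leaf = radix // 2
    return leaves * endpoints_per_leaf

radix = 512                        # Tomahawk 6 configured as 512 x 200G ports
print(max_endpoints_two_tier(radix))       # 131,072 endpoints at 200 Gbps each, roughly the 128K quoted

# For 1.6 Tbps per accelerator, run eight such planes in parallel instead of adding a third tier.
planes = 8
print(planes * 200, "Gbps per accelerator across", planes, "planes")   # 1600 Gbps
```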
Tomahawk 6’s enhanced adaptive routing and load balancing features cater to AI’s high-utilisation demands. The aim is to keep the port speed low to maximise the radix, says Del Vecchio, contrasting AI networks with general data centres, where higher 800-gigabit port speeds are typical.
Scale-Up Ethernet
The above discussion refers to the scale-out networking approach. For scale-up networking, the first hop between the AI accelerator chips, the devices are densely interconnected using multiple lanes — four or eight 200-gigabit lanes — to achieve higher bandwidth within a rack.
Broadcom has taken a different approach to scale-up networking than other companies. It has chosen Ethernet rather than developing a proprietary interface like Nvidia’s NVLink or the industry-backed UALink.
Broadcom has released its Scale-Up Ethernet (SUE) framework, which positions Ethernet as a unified solution for scale-up networks and which it has contributed to the Open Compute Project (OCP).

SUE supports large-scale GPU clusters. “You can do 512 XPUs in a scale-up cluster, connected in a single hop,” says Del Vecchio. SUE’s features include link-level retry, credit-based flow control, and optimised headers for low-latency, reliable transport.
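Credit-based flow control, one of the SUE features listed above, means a sender transmits only when the receiver has advertised buffer credits, so nothing is dropped inside the scale-up fabric. The toy below illustrates the general mechanism; it is not Broadcom’s SUE implementation.

```python
# Generic illustration of credit-based flow control (not Broadcom's SUE code).
class CreditedLink:
    def __init__(self, receiver_buffers: int):
        self.credits = receiver_buffers   # credits advertised by the receiver

    def send(self, packets: int) -> int:
        """Send at most as many packets as there are credits; return how many went out."""
        sent = min(packets, self.credits)
        self.credits -= sent              # each packet in flight consumes one credit
        return sent

    def return_credits(self, freed_buffers: int) -> None:
        """The receiver frees buffers and hands the credits back to the sender."""
        self.credits += freed_buffers


link = CreditedLink(receiver_buffers=4)
print(link.send(6))       # 4: the remaining two packets wait at the sender, nothing is dropped
link.return_credits(2)    # the receiver drains two buffers
print(link.send(2))       # 2: transmission resumes as credits are returned
```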
“There is no one-size-fits-all for scale-up,” says Wheeler. “For example, Google’s ICI [inter chip interconnect] is a remote direct memory access (RDMA) based interconnect, more like Ethernet than UALink or NVLink,” says Wheeler. “There will likely be multiple camps.”
Broadcom chose Ethernet for several reasons. “One is you can leverage the whole Ethernet ecosystem,” says Del Vecchio, who stresses it results in a unified toolset for front-end, back-end, and scale-up networks.
SUE also aligns with hyperscaler preferences for interchangeable interfaces. “They’d like to have one unified technology for all that,” says Del Vecchio.
Del Vecchio is also an Ultra Ethernet Consortium (UEC) steering committee member. The UEC focuses on scale-out for its 1.0 specification, which is set for public release soon.
Link-level retry (LLR) and credit-based flow control (CBFC) are already being standardised within the UEC, says Del Vecchio, and he suggests there will also be scale-up extensions that will benefit Broadcom’s SUE approach.
Interconnects
Tomahawk 6 supports diverse physical interconnects, including 100-gigabit and 200-gigabit PAM-4 serdes and passive copper links of up to two metres, enabling custom GPU cluster designs.

“There’s a lot of focus on these custom GPU racks,” says Del Vecchio, highlighting the shift from generic pizza-box switches to highly engineered topologies.
The goal is to increase the power fed to each rack to cram in more AI accelerator chips, thereby increasing the degree of scale-up achievable using copper interconnect. Copper links could also be used to connect two racks to further double the scale-up capacity.
Co-packaged optics: Enhancing reliability?
Co-packaged optics (CPO) has also become a design feature of switch chips. The Tomahawk 6 will be Broadcom’s third-generation switch chip that will also be offered with co-packaged optics.
“People are seeing how much power is going into the optics for these GPU racks,” says Del Vecchio. Co-packaged optics eliminates retimers and DSPs, reducing latency and burst errors.
Broadcom and the hyperscalers are currently investigating another key potential benefit of co-packaged optics. “There are indications that you wind up with significantly fewer link flaps,” he says. A link flap refers to a link repeatedly dropping and re-establishing itself.
Unlike pluggable optics, whose DSPs introduce burst errors, co-packaged optics produces errors dominated by random Gaussian noise, which forward error correction schemes handle better. “If you have an end-to-end CPO link, you have much more random errors,” he explains.
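The intuition is that a Reed-Solomon code corrects only a fixed number of symbol errors per codeword, so the same average error rate is far more damaging when errors arrive in bursts that pile into a single codeword. The toy Monte-Carlo below uses RS(544,514)-style parameters (15 correctable symbols per 544-symbol codeword) purely to illustrate the effect; it is not a model of any particular link.

```python
# Toy Monte-Carlo: the same error budget is far more harmful when errors arrive in bursts,
# because an RS-style FEC corrects at most T symbol errors per codeword.
import random

N_CW = 10_000          # codewords in the simulated stream
CW_LEN = 544           # symbols per codeword (RS(544,514)-like)
T = 15                 # correctable symbol errors per codeword
TOTAL_ERRORS = 50_000  # same total symbol-error budget in both scenarios (mean of 5 per codeword)

def failed_codewords(burst_len: int) -> int:
    """Place TOTAL_ERRORS symbol errors in clumps of burst_len and count uncorrectable codewords."""
    errors_per_cw = [0] * N_CW
    placed = 0
    while placed < TOTAL_ERRORS:
        pos = random.randrange(N_CW * CW_LEN)            # start of the clump in the stream
        for offset in range(min(burst_len, TOTAL_ERRORS - placed)):
            errors_per_cw[((pos + offset) // CW_LEN) % N_CW] += 1
            placed += 1
    return sum(1 for errors in errors_per_cw if errors > T)

print(failed_codewords(burst_len=1))    # isolated random errors: usually zero or a handful of failures
print(failed_codewords(burst_len=40))   # 40-symbol bursts: on the order of a thousand failures
```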
This suggests that using co-packaged optics could benefit the overall runtime of massive AI clusters, a notable development that, if proven, will favour the technology’s use. “We expect the Tomahawk 6 Davisson co-packaged optics version to follow Tomahawk 6 production closely,” says LightCounting’s Wheeler.
Design challenges
Tomahawk 6’s development required overcoming significant hurdles.
Packaging over 1,000 serdes was one. “There were no packages on the market anywhere near that size,” says Del Vecchio, emphasising innovations in controlling warpage, insertion loss, and signal integrity. Del Vecchio also highlights the complexity of fanning out 1,000 lanes. The multi-die design required low-latency, low-power chip-to-chip interfaces, with Broadcom using its experience developing custom ASICs.
Traffic management structures, like the Memory Management Unit (MMU), have also seen exponential complexity increases. “Some structures are 4x the complexity,” says Del Vecchio.
“We spent a lot of time on the packaging technology,” he adds.
Meanwhile, architectural optimisations, such as automatic clock gating and efficient serdes design, improved power efficiency. What about the delay in announcing the latest Tomahawk switch chip compared to the clock-like two-year launch gaps between previous Tomahawk chips?
Del Vecchio says the delay wasn’t due to a technical issue or getting access to a 3nm CMOS process. Instead, choosing the right market timing drove the release schedule.
Broadcom believes it has a six-month to one-year lead on competing switch chip makers.
Production and market timing
Tomahawk 6 samples are now shipping to hyperscalers and original equipment manufacturers (OEMs). Production is expected within seven months, matching the timeline achieved with the Tomahawk 5. “We feel confident there is no issue with physical IP,” says Del Vecchio, based on the work done with Broadcom’s test chips and verification suites.
The simultaneous availability of 100-gigabit and 200-gigabit serdes versions of the latest switch chip reflects AI’s bandwidth demands.
“There is such a huge insatiable demand for bandwidth, we could not afford the time delay between the 100-gig and 200-gig versions,” says Del Vecchio.
OFC 2025: industry reflections

Gazettabyte is asking industry figures for their thoughts after attending the recent 50th-anniversary OFC show in San Francisco. Here are the first contributions from Huawei’s Maxim Kuschnerov, NLM Photonics’ Brad Booth, LightCounting’s Vladimir Kozlov, and Jürgen Hatheier, Chief Technology Officer, International, at Ciena.
Maxim Kuschnerov, Director of R&D, Huawei
The excitement of Nvidia’s Blackwell graphics processing unit (GPU) announcement last year has worn off, and there was a slight hangover at OFC from the market frenzy back then.
The 224 gigabit-per-second (Gbps) opto-electronic signalling is reaching the mainstream in the data centre. The last remaining question is how far VCSELs will go—30m or perhaps even further. The clear focus of classical Ethernet data centre optics for scale-out architectures is on the step to 448Gbps-per-lane signalling, and it was great to see many feasibility demonstrations showing that PAM-4 and PAM-6 optical modulation schemes will be doable.
The show demonstrations either relied on thin-film lithium niobate (TFLN) or the more compact indium-phosphide-based electro-absorption modulated lasers (EMLs), with thin-film lithium niobate having the higher overall optical bandwidth.
Higher bandwidth pure silicon Mach-Zehnder modulators have also been shown to work at a 160-175 gigabaud symbol rate, sufficient to enable PAM-6 but not high enough for PAM-4 modulation, which the industry prefers for the optical domain.
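The symbol-rate arithmetic behind that observation is simple: the required baud rate is the lane rate divided by the bits carried per symbol. The sketch below ignores FEC and coding overhead, which raise the real figures somewhat.

```python
import math

# Required symbol rate for a 448 Gbps lane, ignoring FEC and coding overhead.
def required_gbaud(bit_rate_gbps: float, pam_levels: int) -> float:
    bits_per_symbol = math.log2(pam_levels)
    return bit_rate_gbps / bits_per_symbol

print(required_gbaud(448, 4))   # PAM-4: 224 GBd, beyond today's ~160-175 GBd silicon modulators
print(required_gbaud(448, 6))   # PAM-6: ~173 GBd, within that range
# Practical PAM-6 coding carries slightly fewer than log2(6) bits per symbol,
# nudging the required baud rate a little higher.
```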
Since silicon photonics has been the workhorse at 224 gigabits per lane for parallel single-mode transceivers, a move away to thin-film lithium niobate would affect the density of the optics and make co-packaged optics more challenging.
With PAM-6 being the preferred modulation option in the electrical channel for 448 gigabit, it begs the question of whether the industry should spend more effort on enabling PAM-6 optical to kill two birds with one stone: enabling native signalling in the optical and electrical domains would open the door to all linear drive architectures, and keep the compact pure-silicon platform in the technology mix for optical modulators. Just as people like to say, “Never bet against copper,” I’ll add, “Silicon photonics isn’t done until Chris Doerr says so.”
If there was one topic hotter than the classical Ethernet evolution, it was the scale-up domain for AI compute architectures. The industry has gone from scale-up in a server to a rack-level scale-up based on a copper backplane. But future growth will eventually require optics.
While the big data centre operators have yet to reach a conclusion about the specifications of density, reach, or power, it is clear that such optics must be disruptive to challenge the classical Ethernet layer, especially in terms of cost.
Silicon photonics appears to be the preferred platform for a potential scale-up, but some vendors are also considering VCSEL arrays. The challenge of merging optics onto the silicon interposer alongside the xPU is a disadvantage for VCSELs, given their limited tolerance of high-temperature environments.
Reliability is always discussed when discussing integrated optics, and several studies were presented showing that optical chips hardly ever fail. After years of discussing how unreliable lasers seem, it’s time to shift the blame to electronics.
But before the market can reasonably attack optical input-output for scale-up, it has to be seen what the adoption speed of co-packaged optics will be. Until then, linear pluggable optics (LPO) or linear retimed optics (LRO) pluggables will be fair game in scaling up AI ‘pods’ stuffed with GPUs.
Brad Booth, CEO of NLM Photonics
At OFC, the current excitement in the photonics industry was evident due to the growth in AI and quantum technologies. Many of the industry’s companies were represented at the trade show, and attendance was excellent.
Nvidia’s jump on the co-packaged optics bandwagon has tipped the scales in favour of the industry rethinking networking and optics.
What surprised me at OFC was the hype around thin-film lithium niobate. I’m always concerned when I don’t understand why the hype is so large, yet I have yet to see the material being adopted in the datacom industry.
Vladimir Kozlov, CEO of LightCounting
This year’s OFC was a turning point for the industry, a mix of excitement and concern for the future. The timing of the tariffs announced during the show made the event even more memorable.
This period might prove to be a peak of the economic boom enabled by several decades of globalisation. It may also be the peak in the power of global companies like Google and Meta and their impact on our industry.
More turbulence should be expected, but new technologies will find their way to the market.
Progress is like a flood. It flows around and over barriers, no matter what they are. The last 25 years of our industry is a great case study.
We are now off for another wild ride, but I look forward to OFC 2050.
Jürgen Hatheier, Chief Technology Officer, International, at Ciena
This was my first trip to OFC, and I was blown away. What an incredible showcase of the industry’s most innovative technology.
One takeaway is how AI is creating a transformative effect on our industry, much like the cloud did 10 years ago and smartphones did 20 years ago.
This is an unsurprising observation. However, many outside our industry do not realise the critical importance of optical technology and its role in the underlying communication network. While most of the buzz has been on new AI data centre builds and services, the underlying network has, until recently, been something of an afterthought.
All the advanced demonstrations and technical discussions at OFC emphasise that AI would not be possible without high-performance network infrastructure.
There is a massive opportunity for the optical industry, with innovation accelerating and networking capacity scaling up far beyond the confines of the data centre.
The nature of AI — its need for intensive training, real-time inferencing at the edge, and the constant movement of data across vast distances between data centres — means that networks are evolving at pace. We’re seeing a significant architectural shift toward more agile, scalable, and intelligent infrastructure with networks that can adapt dynamically to AI’s distributed, data-hungry nature.
The diversity of optical innovation presented at the conference ranged from futuristic quantum technologies to technology on the cusp of mainstream adoption, such as 448-gigabit electrical lanes.
The increased activity and development around high-speed pluggables also show how critical coherent optics has become for the world’s most prominent computing players.
OFC 2025: reflecting on the busiest optics show in years
Adtran’s Gareth Spence interviews Omdia’s Daryl Inniss and the editor of Gazettabyte, live from the conference hall at OFC 2025.
The discussion covers the hot topics of the show and where the industry is headed next.
Marvell kickstarts the 800G coherent pluggable era

Marvell has become the first company to provide an 800-gigabit coherent digital signal processor (DSP) for use in pluggable optical modules.
The 5nm CMOS Orion chip supports a symbol rate of over 130 gigabaud (GBd), more than double that of the coherent DSPs for the OIF’s 400ZR standard and 400ZR+.
Meanwhile, a CFP2-DCO pluggable module using the Orion can transmit a 400-gigabit data payload over 2,000km using the quadrature phase-shift keying (QPSK) modulation scheme.
The Orion DSP announcement is timely, given how this year will be the first when coherent pluggables exceed embedded coherent module port shipments.
“We strongly believe that pluggable coherent modules will cover most network use cases, including carrier and cloud data centre interconnect,” says Samuel Liu, senior director of coherent DSP marketing at Marvell.
Marvell also announced its third-generation ColorZ pluggable module for hyperscalers to link equipment between data centres. The Orion-based ColorZ 800-gigabit module supports the OIF’s 800ZR standard and 800ZR+.
Fifth-generation DSP
The Orion chip is a fifth-generation design yet Marvell’s first. First ClariPhy and then Inphi developed the previous four generations.

Inphi bought ClariPhy for $275 million in 2016, gaining the first two generations of devices: the 40nm CMOS 40-gigabit LightSpeed chip and the 28nm CMOS 100- and 200-gigabit LightSpeed-II coherent DSPs. The 28nm CMOS DSP is now coming to the end of its life, says Liu.
Inphi added two more coherent DSPs before Marvell bought the company in 2021 for $10 billion. Inphi’s first DSP was the 16nm CMOS M200. Until then, Acacia (now Cisco-owned) had been the sole merchant company supplying coherent DSPs for CFP2-DCO pluggable modules.
Inphi then delivered the 7nm 400-gigabit Canopus for the 400ZR market, followed a year later by the Deneb DSP that supports several 400-gigabit standards. These include 400ZR, 400ZR+, and standards such as OpenZR+, which also has 100-, 200-, and 300-gigabit line rates and supports the OpenROADM MSA specifications. “The cash cow [for Marvell] is [the] 7nm [DSPs],” says Liu.
The Inphi team’s first task after the acquisition was to convince Marvell’s CEO and chief financial officer to make the company’s most significant investment yet in a coherent DSP. Developing the Orion cost between $100 million and $300 million.
“We have been quiet for the last two years, not making any coherent DSP announcements,” says Liu. “This [the Orion] is the one.”
Marvell views being first to market with a 130GBd-plus generation coherent DSP as critical given how pluggables, including the QSFP-DD and the OSFP form factors, account for over half of all coherent ports shipped.
“It is very significant to be first to market with an 800ZR plug and DSP,” says Jimmy Yu, vice president at market research firm Dell’Oro Group. “I expect Cisco/Acacia to have one available in 2024. So, for now, Marvell is the only supplier of this product.”
Yu notes that vendors such as Ciena and Infinera have had 800 Gigabit-per-second (Gbps) coherent available for some time, but they are for metro and long-haul networks and use embedded line cards.
Use cases
The Orion DSP addresses hyperscalers’ and telecom operators’ coherent needs. The DSP also implements various coherent standards to ensure that the vendors’ pluggable modules work with each other.
Liu says a DSP’s highest speed is what always gets the focus, but the Orion also supports lower line rates such as 600, 400 and 200Gbps for longer spans.
The baud rate, modulation scheme, and the probabilistic constellation shaping (PCS) technique are control levers that can be varied depending on the application. For example, 800ZR uses a symbol rate of only 118GBd and the 16-QAM modulation scheme to achieve the 120km specification while minimising power consumption. When performance is essential, such as sending 400Gbps over 2,000km, the highest baud rate of 130GBd is used along with QPSK modulation.
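The trade-off between symbol rate and modulation follows from the dual-polarisation line-rate arithmetic: raw rate equals symbol rate times bits per symbol times two polarisations, with the usable payload lower once FEC and framing overhead are removed. The sketch below illustrates the relationship; overhead is not modelled and the exact 800ZR framing differs.

```python
# Dual-polarisation coherent line rate: symbol rate x bits per symbol x 2 polarisations.
# (FEC and framing overhead are ignored here; the exact 800ZR figures differ.)
def raw_line_rate_gbps(gbaud: float, bits_per_symbol: int) -> float:
    return gbaud * bits_per_symbol * 2

print(raw_line_rate_gbps(118, 4))   # 16-QAM at 118 GBd -> 944 Gbps raw, room for an 800G payload
print(raw_line_rate_gbps(130, 2))   # QPSK at 130 GBd  -> 520 Gbps raw, room for a 400G payload
# QPSK tolerates a much lower optical signal-to-noise ratio than 16-QAM,
# which is what buys the 2,000 km reach at 400 Gbps.
```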

China is one market where Marvell’s current 7nm CFP2-DCOs are used to transport wavelengths at 100Gbps and 200Gbps.
Using the Orion for 200-gigabit wavelengths delivers an extra 1dB (decibel) of optical signal-to-noise ratio performance. The additional 1dB benefits the end user, says Liu: they can increase the engineering margin or extend the transmission distance. Meanwhile, probabilistic constellation shaping is used when spectral efficiency is essential, such as fitting a transmission within a 100GHz-width channel.
Liu notes that the leading Chinese telecom operators are open to using coherent pluggables to help reduce costs. In contrast, large telcos in North America and Europe use pluggables for their regional networks. Still, they prefer embedded coherent modems from leading systems vendors for long-haul distances greater than 1,000km.
Marvell believes the optical performance enabled by its 130GBd-plus 800-gigabit pluggable module will change this. However, the leading system vendors have all announced their latest-generation embedded coherent modems with baud rates of 130GBd to 150GBd, while Ciena’s 200GBd 1.6-terabit WaveLogic 6 coherent modem will be available next year.
The advent of 800-gigabit coherent will also promote IP over DWDM. 400ZR+ is already enabling the addition of coherent modules directly to IP routers for metro and metro regional applications. An 800ZR and 800ZR+ in a pluggable module will continue this trend beyond 400 gigabit to 800 gigabits.
The advent of an 800-gigabit pluggable also benefits the hyperscalers as they upgrade their data centre switches from 12.8 terabits to 25.6 and 51.2 terabits. The hyperscalers already use 400ZR and ZR+ modules, and 800-gigabit modules are the next obvious step. Liu says this will serve the market for the next four years.
Fujitsu Optical Components, InnoLight, and Lumentum are three module makers that all endorsed the Orion DSP announcement.
ColorZ 800 module
In addition to selling its coherent DSPs to pluggable module and equipment makers, Marvell will sell its latest ColorZ module for data centre interconnect directly to the hyperscalers.
Marvell’s first-generation product was the 100-gigabit coherent ColorZ in 2016, and in 2021 it produced its 400ZR ColorZ. Now, it is offering an 800-gigabit version – the ColorZ 800 – to address 800ZR and 800ZR+, which include OpenZR+ and support for lower speeds that extend the reach to metro regional and beyond.

“We are first to market on this module, and it is now sampling,” says Josef Berger, associate vice president of marketing optics at Marvell.
Marvell aiming its module at the hyperscaler market rather than telecoms makes sense, says Yu, as it is the most significant opportunity.
“Most communications service providers’ interest is in having optical plugs with longer reach performance,” says Dell’Oro’s Yu. “So, they are more interested in ZR+ optical variants with high launch power of 0dBm or greater.”
Marvell notes a 30 per cent cost and power consumption reduction for each generation of ColorZ pluggable coherent module.
Liu concludes by saying that designing the Orion DSP was challenging. It is a highly complicated chip comprising over a billion logic gates. An early test chip of the Orion was used as part of a Lumentum demonstration at the OFC show in March.
The ColorZ 800 module is sampling this quarter.
What follows the Orion will likely be a 1.6-terabit DSP operating at 240GBd. The OIF has already begun defining the next 1.6T ZR standard.
Taking a unique angle to platform design

- A novel design based on a vertical line card shortens the trace length between an ASIC and pluggable modules.
- Reducing the trace length improves signal integrity while maintaining the merits of using pluggables.
- The vertical line card design will extend the use of pluggables with Ethernet switches for at least two more generations.
The travelling salesperson problem involves working out the shortest route on a round-trip to multiple cities. It’s a well-known complex optimisation problem.
Systems engineers face their own complex optimisation problem just sending an electrical signal between two points, connecting an Ethernet switch chip to a pluggable optical module, for example.
Sending the high-speed signal over the link with sufficient fidelity for its recovery requires considerable electronic engineering design skills. And with each generation of electrical signalling, link distances are getting shorter.
In a paper presented at the recent ECOC show, held in Basel, consultant Chris Cole, working with Yamaichi Electronics, outlined a novel design that shortens the distance between an Ethernet switch chip and the front-panel optics.
The solution promises headroom for two more generations of high-speed pluggables. “It extends the pluggable paradigm very comfortably through the decade,” says Cole.
Since ECOC, there are plans to standardise the vertical line card technology in one or more multi-source agreements (MSAs), with multiple suppliers participating.
“This will include OSFP pluggable modules as well as QSFP and QSFP-DD modules,” says Cole.
Shortening links
Rather than the platform using stacked horizontal line cards as is common today, Cole and Yamaichi Electronics propose changing the cards’ orientation to the vertical plane.
Vertical line cards also enable the front-panel optical modules to be stacked on top of each other rather than side-by-side. As a result, the pluggables are closer to the switch ASIC; the furthest the high-speed electrical signalling must travel is three inches (7.6cm). The most distant span between the chip and a pluggable in current designs is typically nine inches (22.9cm).
“The reason nine inches is significant is that the loss is high as we reach 200 gigabits-per-second-per-lane and higher,” says Cole.
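A simple way to see the benefit is to compare trace losses: PCB loss grows roughly linearly with length at a given frequency, so cutting the worst-case reach from nine inches to three removes about two-thirds of the trace loss. The per-inch figure below is an assumed stand-in for illustration, not a measured value for any particular laminate.

```python
# Illustrative trace-loss comparison (assumed dB/inch figure, not a measured laminate value).
def trace_loss_db(length_inch: float, loss_db_per_inch: float) -> float:
    return length_inch * loss_db_per_inch

LOSS_PER_INCH = 2.0   # assumed loss per inch near the Nyquist frequency of a 200G PAM-4 lane

print(trace_loss_db(9, LOSS_PER_INCH))   # ~18 dB over the worst-case horizontal-card reach
print(trace_loss_db(3, LOSS_PER_INCH))   # ~6 dB with the vertical-card layout

# The difference is budget the host-to-module channel would otherwise have to recover
# through exotic laminates, retimers, or cabled (Twinax) links.
```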

Current input-output proposals
The industry is pursuing several approaches to tackle the issues associated with high-speed electrical signalling, as well as input-output (I/O) bandwidth density.
One is to use twinaxial cabling instead of electrical traces on a printed circuit board (PCB). Such ‘Twinax’ cable has a lower loss, and its use avoids developing costly advanced-material PCBs.
Other approaches involve bringing the optics closer to the Ethernet switch chip, whether as near-packaged optics or by co-packaging the optics and the chip. These approaches also promise higher bandwidth densities.
Cole’s talk focussed on a solution that continues using pluggable modules. Pluggable modules are a low-cost, mature technology that is easy to use and change.
However, besides the radio frequency (RF) challenges that arise from long electrical traces, the I/O density of pluggables is limited due to the size of the connector, while placing up to 36 pluggables on the 1 rack unit-high (1RU) front panel obstructs the airflow used for cooling.
Platform design
Ethernet switch chips double their capacity every two years. Their power consumption is also rising; Broadcom’s latest Tomahawk 5 consumes 500W.
The power supply a data centre can feed to each platform has an upper limit. It means fewer cards can be added to a platform if the power consumed per card continues to grow.
The average power dissipation per rack is 16kW, and the limit is around 32kW, says Cole. This refers to when air cooling is used, not liquid cooling.
He cites some examples.
A rack of Broadcom’s 12.8-terabit Tomahawk 3 switch chips – either 32 1RU cards, or 16 2RU cards with two chips per card – and the associated pluggable optics consumes over 30kW.
A 25.6-terabit Tomahawk 4-based chassis supports 16 line cards and consumes 28kW. However, with the recently announced Tomahawk 5, only eight cards can be supported, consuming 27kW.
“The takeaway is that rack densities are limited by power dissipation rather than the line card’s rack unit [measure],” says Cole.
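The arithmetic behind those rack figures is simple to sketch: rack power is roughly the switch-silicon power plus the pluggable-optics power, multiplied by the number of cards. The per-chip and per-module wattages below are assumed round numbers for illustration, not vendor figures.

```python
# Rough rack power estimate: (ASIC power + optics power) per card x cards per rack.
# Per-chip and per-module wattages are assumed round numbers, not vendor specifications.
def rack_power_kw(cards: int, asics_per_card: int, asic_w: float,
                  modules_per_card: int, module_w: float) -> float:
    per_card_w = asics_per_card * asic_w + modules_per_card * module_w
    return cards * per_card_w / 1000

# 16 x 2RU Tomahawk 3 cards, each with two 12.8T chips and 64 x 400G modules,
# assuming ~450 W per chip and ~12 W per pluggable module.
print(rack_power_kw(cards=16, asics_per_card=2, asic_w=450,
                    modules_per_card=64, module_w=12))
# ~26.7 kW before fans, host boards and power conversion, broadly in line with the 30kW-plus cited.
```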

Vertical line card
The vertical line card design is 4RU high. Each card supports two ASICs on one side and 64 cages for the OSFP modules on the other.
A 32RU chassis can thus support eight vertical cards or 16 ASICs, equivalent to the chassis with 16 horizontal 2RU line cards.
The airflow for the ASICs is improved, enabling more moderate air fans to be used compared to 1RU or 2RU horizontal card chassis designs. There is also airflow across the modules.
“The key change in the architecture is the change from a horizontal card to a vertical card while maintaining the pluggable orientation,” says Cole.
As stated, the maximum distance between an ASIC and the pluggables is reduced to three inches, but Cole says the modules can be arranged around the ASIC to minimise the length to 2.5 inches.
Alternatively, if the height of the vertical card is an issue, a 3RU card can be used instead, which results in a maximum trace length of 3.5 inches. “[In this case], we don’t have dedicated air intakes for the CPU,” notes Cole.
Cole also mentioned the option of a 3RU vertical card that houses one ASIC and 64 OSFP modules. This would be suitable for the Tomahawk 5. However, here the maximum trace length is five inches.
Vertical connectors
Yamaichi Electronics has developed the vertical connectors needed to enable the design.
Cole points out that, unlike a horizontal connector, a vertical one uses equal-length contacts. In a horizontal connector, one set of contacts has to turn and is therefore longer, degrading performance.
Cole showed the simulated performance of an OSFP vertical connector, with insertion loss plotted out to 70GHz.
“The loss up to 70GHz demonstrates the vertical connector advantage because it is low and flat for all the leads,” says Cole. “So this [design] is 200-gigabit ready.”
He also showed a vertical connector for the OSFP-XD with a similar insertion loss performance.
Also shown was a comparison with results published for Twinax cables. Cole says this indicates that the loss of a three-inch PCB trace is less than the loss of the cable.
“We’ve dramatically reduced the RF maximum length, so we had solved the RF roadblock problem, and we maintain the cost-benefit of horizontal line cards,” says Cole.
The I/O densities may be unchanged, but the design preserves the mature technology’s benefits. “And then we get a dramatic improvement in cooling because there are no obstructions to airflow,” says Cole.
Vladimir Kozlov, CEO of the market research firm, LightCounting, wondered in a research note whether the vertical design is a distraction for the industry gearing up for co-packaged optics.
“Possibly, but all approaches for reducing power consumption on next-generation switches deserve to be tested now,” said Kozlov, adding that adopting co-packaged optics for Ethernet switches will take the rest of the decade.
“There is still time to look at the problem from all angles, literally,” said Kozlov.
ECOC '22 Reflections - Part 2

Gazettabyte is asking industry and academic figures for their thoughts after attending ECOC 2022, held in Basel, Switzerland. In particular, what developments and trends they noted, what they learned, and what, if anything, surprised them.
In Part 2, Broadcom’s Rajiv Pancholy, optical communications advisor Chris Cole, LightCounting’s Vladimir Kozlov, Ciena’s Helen Xenos, and Synopsys’ Twan Korthorst share their thoughts.
Rajiv Pancholy, Director of Hyperscale Strategy and Products, Optical Systems Division, Broadcom
The buzz at the show reminded me of 2017 when we were in Gothenburg pre-pandemic, and that felt nice.
Back then, COBO (Consortium for On-Board Optics) was in full swing, the CWDM8 multi-source agreement (MSA) was just announced, and 400-gigabit optical module developments were the priority.
This year, I was pleased to see the show focused on lower power and see co-packaged optics filter into all things ECOC.
Broadcom has been working on integrating a trans-impedance amplifier (TIA) into our CMOS digital signal processor (DSP), and the 400-gigabit module demonstration on the show floor confirmed the power savings integration can offer.
Integration impacts power and cost but it does not stop there. It’s also about what comes after 2nm [CMOS], what happens when you run out of beach-front area, and what happens when the maximum power in your rack is not enough to get all of its bandwidth out.
It is the idea of fewer things and more efficient things that draws everyone to co-packaged optics.
The OIF booth showcased some of the excitement behind this technology that is no longer a proof-of-concept.
Moving away from networking and quoting some of the ideas presented this year at the AI Hardware Summit by Alexis Bjorlin, our industry needs to understand how we will use AI, how we will develop AI, and how we will enable AI.
These were in the deeper levels of discussions at ECOC, where we as an industry need to continue to innovate, disagree, and collaborate.
Chris Cole, Optical Communications Advisor
I don’t have many substantive comments because my ECOC was filled with presentations and meetings, and I missed most of the technical talks and market focus presentations.
It was great to see a full ECOC conference. This is a good sign for OFC.
Here is an observation of what I didn’t see. There were no great new silicon photonics products, despite continued talk about how great it is and the many impressive research and development results.
Silicon photonics remains a technology of the future. Meanwhile, other material systems continue to dominate in their use in products.
Vladimir Kozlov, CEO of LightCounting
I am surprised by the progress made by thin-film lithium niobate technology. There are five suppliers of these devices now: AFR, Fujitsu, Hyperlight, Liobate, and Ori-chip.
Many vendors also showed transceivers with thin-film lithium niobate modulators inside.
Helen Xenos, senior director of portfolio marketing at Ciena
One key area to watch right now is what technology will win for the next Ethernet rates inside the data centre: intensity-modulation direct detection (IMDD) or coherent.
There is a lot of debate and discussion happening, and several sessions were devoted to this topic during the ECOC Market Focus.
Twan Korthorst, Group Director Photonic Solutions at Synopsys.
My main observations are from the exhibition floor; I didn’t attend the technical conference.
ECOC was well attended, better than previous shows in Dublin and Valencia and, of course, much better than Bordeaux (the first in-person ECOC in the Covid era).
I spent three days talking with partners, customers and potential customers, and I am pleased about that.
I didn’t see the same vibe around co-packaged optics as at OFC; not a lot of new things there.
There is a feeling of what will happen with the semiconductor/ datacom industry. Will we get a downturn? How will it look? In other words, I noticed some concerns.
On the other hand, foundries are excited about the prospects for photonic ICs and continue to invest and set ambitious goals.
A career in technology market analysis

John Lively reflects on a 30-year career.
It was a typical workday in 1989, sitting through a meeting announcing the restructuring of Corning’s planar coupler business.
The speaker’s final words were, “Lively, you’ll be doing forecasting.” It changed my life and set my career path for the next 30-plus years.
No one grows up with a desire to be a market analyst. Indeed, I didn’t ask for the job. What made it possible was an IBM PC and LOTUS 1-2-3 in my marine biology lab in the early 1980s (a story for another time).
After a stop at MIT for an MBA, this led to a job in Corning’s fledgling PC support team in 1985. Then it was Corning’s optical fibre business cost-modelling fibre-to-the-home networks on a PC, working with Bellcore and General Instrument engineers. From there, it was to forecast market demand for planar couplers in the FTTH market.
In the following decade, I had various market forecasting roles within Corning’s optical fibre and photonics businesses.
Each time I tried to put forecasting behind me by taking a marketing or product management job, management said they needed me to return to forecasting due to some crisis or another (thank you, Bernie Ebbers).
In 1999, I had an epiphany. If Corning thinks I’m better at forecasting than anything else, perhaps I should become a professional forecaster in a company whose product is forecasts.
Just then, through fate or coincidence, I received a call from fellow MIT alum Dana Cooperson who said her firm, RHK, was desperate for people and did I know anyone who might be interested?
For the uninitiated, that’s code for ‘would you be interested in joining us?’.
I joined just in time to enjoy the remaining months of the boom, followed by a bust in 2001. But all the while learning to be a market analyst in a new context. While at Corning, I had been both a producer and procurer of market research. At RHK, I was strictly a producer.
More importantly, there was a direct link between my words and spreadsheets and money coming in. It was exhilarating.
Working remotely
Thanks to the newly deployed cable modem/ HFC technology that I had been cost-modelling a decade earlier, I was working from home.
I have worked from home ever since, and I can say that remote working does work well for some people and jobs.
Some lessons I’ve learned include:
- Working from home works best if the entire firm, not just a few people, are doing it.
- Home working doesn’t mean you can’t travel, pandemics notwithstanding.
- Home workers need to have clear deliverables that they can be judged against. Give them responsibility for something tangible, with an unambiguous deadline.
- Requiring time-tracking sheets or online monitoring of home workers is insulting and demotivating.
- Companies must support home workers by investing in quality internet services and conferencing software and equipment on both sides of the link.
Required skills
By joining RHK, I had moved from a Fortune 500 company to one of 100 employees. Over the next two decades, I would move between large and small companies. I prefer small companies because it’s clear who contributes to their success and who doesn’t. Poor performers have nowhere to hide in a company of six people.
After more than 30 years in the market research arena, I have views on the role of a market analyst and the talents necessary to be a good one.
The goal of market analysis is to find information, analyse it, draw conclusions, then package and communicate it.
Doing market research is like assembling a jigsaw puzzle, from which several pieces are missing. Or, like a chef who must create a healthy, enjoyable meal from an assortment of good and bad raw ingredients.
A technology market analyst should be intellectually curious, have a solid background in sciences and technology, and have broad industry knowledge, i.e., understand the jargon, the tech, and the companies.
The analyst also needs to write concisely and quickly, be fluent in Excel, PowerPoint, and Word, be a great communicator, and be approachable, likeable, and outgoing.
Of course, finding all the requisite skills in one person is rare, and larger companies commonly divide duties into specialities like data collection, analysis, and communication.
In small companies, this may not be overt but happens to a degree just the same.
Most importantly, a market analyst must be comfortable with uncertainty.
One never has all the pieces, and you must be OK filling in missing data points via extrapolation, intuition, historical parallels, or other means. And be comfortable admitting your mistakes and adjusting your findings when new data surfaces.
I believe this is why those with a scientific background are better suited to market research than engineers. Scientists are taught scepticism and revision as a way of life, while engineers seek the certainty of the ‘right’ answer.
Periods of note
Throughout my career, I’ve lived through interesting times.
Starting in 1985, it was the introduction of the first PCs into Corning and establishing their first email system, electronic newsletter, word processing, and expert-learning systems.
Then, in the mid-1990s, came the early days of amplified DWDM systems, when the EDFA business doubled its output yearly.
Then came the Internet bubble and optical industry boom/bust of 1999-2001, when dozens of companies were founded by a couple of PhDs with a PowerPoint presentation. At one point in 2000, my optical components practice at RHK had over 100 subscribing companies.
It was weird living through an episode that we knew would someday be written about, like the Dutch tulip mania of the 1630s.
More recently, and I believe with a more positive outcome, it has been fascinating to watch companies like Alphabet, Amazon, and Meta utilise a globally connected internet to become the first truly global communications, media, and retail companies.
Moreover, these companies transcend national, cultural, and language boundaries, connecting a billion or more users. And in the process, inventing hyperscale data centres, which in turn allow hundreds and thousands of other companies to ‘cloudify’ as well, extending their global reach.
Of all the innovations and changes taking place today, this is one I will continue to follow with wonder and amazement.
The promise of these companies is so great that I’m hopeful they will become beacons of positive change around the world in the 21st century.
Innovation has been breathtaking in optics. For example, coherent transport, the far-out science stuff of technical talks at my first OFC in 1988, is in commercial use.
We blithely speak of optical transceivers capable of Terabit-per-second speeds without stopping to think how amazing it is that anything, anywhere, could be made to turn off and on again, one TRILLION times a second!
It simply defies human understanding, and yet we make it easy.
A view of now
Today, it’s easy to be convinced that things are falling apart, between Russia’s war against Ukraine, COVID, economic turmoil, screwed-up supply chains, and populist politicians.
But I take solace that I’ve seen things like this before and lived through them. As a child, scenes of the Vietnam war were on the news every evening. But finally, there was peace in Vietnam.
In the 1970s, we had an oil embargo and sky-high gas prices. It also ended.
In the 1980s, inflation ran hot, pushing my student loan interest rate to 13%. But I paid it off, and rates came down.
AIDS struck fear and stoked prejudice for years, claiming my aunt and uncle before scientists uncovered its secrets and developed effective treatments.
So it will be with COVID. History shows that humans tire of strife and disease and will work to conquer our worst problems eventually.
Surprises
A few things come to mind regarding industry surprises over the last 30 years.
One is that optical technology keeps advancing. Despite how challenging each new generation seems, bit by bit and idea by idea, the industry collectively comes up with a solution, and the subsequent speed hike is commercialised.
Another is how, no matter how much bandwidth is created, people find ways to use it. RHK founder John Ryan was fond of telling us, “Bandwidth is like cupboard space; it’s never left empty for long.”
Another surprising thing is how long the interpersonal bonds formed at RHK have lasted.
Though it was just a flash in time, many of those who were there in 2000 remain connected as friends and colleagues more than 20 years later.
Several such alumni work at LightCounting now.
Climate Change
While doing all this, looking backwards and reflecting on change, I couldn’t help dwelling on another major problem we face today: climate change.
Forestalling climate change is the one area where I believe humans are failing. But unfortunately, the causes are so rooted in our global socio-economic systems that citizens and governments are not capable of inflicting the necessary sacrifices on themselves.
I fear the worst-case scenarios are coming soon, with shifting temperature zones and rising seas. In response, many people, plants, and animals will migrate, following favourable conditions north or south or inland as the case may be, significantly increasing competition for resources of all kinds.
I also fear authoritarian governments may prove more effective at providing protection for some, and avoiding utter chaos, than our precious but fragile democracies.
A role for tech giants
I think the internet and companies with global reach can play a role in combatting the worst impacts of climate change.
Some of the hyperscalers, telecom operators, and equipment companies have been leaders in reducing carbon emissions.
I hope the interconnectedness and massive computing power of companies like Meta and Alphabet can be used to solve these large-scale problems.
My last thought is the realisation that when I eventually ease into retirement and cut back on travel, I may never get a chance to personally thank all the friends and colleagues I have made along the way.
People who have assisted my career, believed in me, educated me, and made me think differently, smile, and laugh.
So, just in case, I’ll say it here – thank you one and all – you made a difference to me.
It’s also been fun.
Changing the radio access network for good

The industry initiative to open up the radio access network, known as open RAN, is changing how the mobile network is architected and is proving its detractors wrong.
So says a recent open RAN study by market research company, LightCounting.
“The virtual RAN and open RAN sceptics are wrong,” says Stéphane Téral, chief analyst at LightCounting.
Japan’s mobile operators, Rakuten Mobile and NTT Docomo, lead the world with large-scale open RAN deployments. Meanwhile, many leading communications service providers (CSPs) continue to trial the technology with substantial deployments planned around 2024-25.
Japan’s fourth and newest mobile network operator, Rakuten Mobile, deployed 40,000 open RAN sites with 200,000 radio units by the start of 2022.
Meanwhile, NTT Docomo, Japan’s largest mobile operator, deployed 10,000 sites in 2021 and will deploy another 10,000 this year.
NTT Docomo has shown that open RAN also benefits incumbent operators, not just new mobile entrants like Rakuten Mobile and Dish Networks in the US that can embrace the latest technologies as they roll out their networks.
Virtual RAN and open virtual RAN
Traditional RANs use a radio unit and a baseband unit from the same equipment supplier. Such RAN systems use proprietary interfaces between the units, with the vendor also providing a custom software stack, including management software.
The vendor may also offer a virtualised system that implements some or all of the baseband unit’s functions as software running on server CPUs.
A further step is disaggregating the baseband unit’s functions into a distributed unit (DU) and a centralised unit (CU). Placing the two units at different locations is then possible.
A disaggregated design may also come from a single vendor, but the goal of open RAN is to enable CSPs to mix and match RAN components from different suppliers. Accordingly, a virtual RAN using open interfaces, as specified by the O-RAN Alliance, is an open virtual RAN system.

The diagram shows the different architectures leading to the disaggregated, virtualised RAN (vRAN) architecture.
Open virtual RAN comprises radio units, the DU and CU functions that can be implemented in the cloud, and the RAN Intelligent Controller (RIC), the brain of the RAN, which runs applications.
Several radio units may be connected to a virtual DU. The radio unit and virtual DU may be co-located or separate, linked using front-haul technology. Equally, a CU can serve several virtual DUs, connected with a mid-haul link, depending on the networking requirements.
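The relationships just described – several radio units per virtual DU over front-haul, several virtual DUs per CU over mid-haul, with the RIC sitting above – can be pictured as a simple hierarchy. The Python sketch below is purely illustrative: the class names, attributes and example topology are assumptions made for this article, not any vendor's software or an O-RAN Alliance-defined API.

```python
# Illustrative sketch only: it models the component hierarchy described above
# (RU -> virtual DU over front-haul, virtual DU -> CU over mid-haul, RIC on top).
# All names are hypothetical; real deployments use O-RAN Alliance-specified
# interfaces, not these data structures.
from dataclasses import dataclass, field


@dataclass
class RadioUnit:
    name: str


@dataclass
class DistributedUnit:  # virtual DU: may be co-located with its RUs or remote
    name: str
    radio_units: list[RadioUnit] = field(default_factory=list)  # front-haul links


@dataclass
class CentralisedUnit:  # CU: serves several virtual DUs over mid-haul
    name: str
    distributed_units: list[DistributedUnit] = field(default_factory=list)


@dataclass
class RANIntelligentController:  # RIC: the 'brain' of the RAN, running applications
    name: str
    centralised_units: list[CentralisedUnit] = field(default_factory=list)


# A tiny example topology: two radio units per virtual DU, two virtual DUs per CU.
ric = RANIntelligentController(
    "ric-1",
    centralised_units=[
        CentralisedUnit(
            "cu-1",
            distributed_units=[
                DistributedUnit("vdu-1", [RadioUnit("ru-1"), RadioUnit("ru-2")]),
                DistributedUnit("vdu-2", [RadioUnit("ru-3"), RadioUnit("ru-4")]),
            ],
        )
    ],
)

# Walk the hierarchy to show the mid-haul and front-haul relationships.
for cu in ric.centralised_units:
    for du in cu.distributed_units:
        rus = ", ".join(ru.name for ru in du.radio_units)
        print(f"{cu.name} --mid-haul--> {du.name} --front-haul--> [{rus}]")
```

Running the sketch simply prints each CU-to-virtual-DU mid-haul link and the radio units hanging off each virtual DU, mirroring the diagram's disaggregated structure.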
Rakuten Mobile has deployed the world’s largest open virtual RAN architecture, while NTT Docomo has the world’s largest brownfield open RAN deployment.
NTT Docomo’s deployment is not virtualised and is not running RAN functions in software.
“NTT Docomo’s baseband unit is not disaggregated,” says Téral. “It’s a traditional RAN with a front-haul using the O-RAN Alliance specification for 5G.”
NTT Docomo is working to virtualise the baseband units, and the work is likely to be completed in 2023.
Opening the RAN
NTT Docomo and the NGMN Alliance were working on interoperability between RAN vendors 15 years ago, says Téral. The Japanese mobile operator wanted to avoid vendor lock-in and increase its options.
“NTT Docomo was the only one doing it and, as such, could not enjoy economies of scale because there was no global implementation,” says Téral.
Wider industry backing arrived in 2016 with the formation of the Telecom Infra Project (TIP) backed by Meta (Facebook) and several CSPs to design network architectures that promoted interoperability using open equipment.
The O-RAN Alliance, formed in 2018, was another critical development. With over 300 members, the O-RAN Alliance has ten working groups addressing topics such as defining the interfaces between RAN functions to standardise the open RAN architecture.
The O-RAN Alliance realised it needed to create more flexibility to enable boxes to be interchanged, says Téral, and started in the RAN to allow any radio unit to work with any virtual DU.
Geopolitics is the third element that kickstarted open RAN. Removing the Chinese equipment vendors Huawei and ZTE from key markets brought open RAN to the forefront as a way to expand the supplier base.
Indeed, Rakuten Mobile was about to select Huawei for its network, but it decided in 2018 to adopt open RAN instead because of geopolitics.
“Geopolitics added a new layer and, to some extent, accelerated the development of open RAN,” he says. “But it does not mean it has accelerated market uptake.”
That’s because the first wave of 5G deployments by the early adopter CSPs seeking a first-mover advantage is ending. Indeed, the uptake in 5G’s first three years has eclipsed the equivalent rollout of 4G, says LightCounting.
To date, over 200 of 800 mobile operators worldwide have deployed 5G.
Early 5G adopters have gone with traditional RAN suppliers like Ericsson, Nokia, Samsung, NEC and Fujitsu. And with open RAN only now hitting its stride, it has largely missed the initial 5G wave.
Open RAN’s next wave
For the next two years, then, the dominant open RAN deployments will continue to be those of Rakuten Mobile and NTT Docomo, to which can be added the network launches from Dish Networks in the US, and 1&1 Drillisch of Germany, which is outsourcing its buildout to Rakuten Symphony.
Rakuten Mobile’s vendor offshoot, Rakuten Symphony, set up to commercialise Rakuten’s open RAN experiences, is also working with Dish Networks on its deployment.
Rakuten Mobile hosts its own 5G network, including open RAN, in its data centres. Dish is working with cloud player Amazon Web Services to host its 5G network. Dish's network is still in its early stages, but the mobile operator can host its network in Amazon's cloud because it uses a cloud-native implementation that includes open RAN.
The next market wave for open RAN will start in 2024-25 when the leading CSPs begin to turn off their 3G networks and start deploying open RAN for 5G.
It will also be helped by the second wave of 5G rollouts by those 600 operators with LTE networks. However, this second 5G cycle may not be as large as the first, says Téral, and there will be a lag between the two cycles that will not be helped if an economic recession arrives.
Certain leading CSPs that were early cheerleaders for open RAN have since dampened their deployment plans, says Téral. For example, Telefónica and Vodafone first spoke in 2019 of deploying thousands of sites but have since scaled back their plans.
The leading CSPs attribute their reluctance to deploy open RAN to its many challenges. One is interoperability: despite the development of open interfaces, getting the different vendors' components to work together remains difficult.
Another issue is integration. Disaggregating the various RAN components means someone must stitch them together. Certain CSPs do this themselves, but others need system integrators, and finding them is a challenge.
Téral believes that while these are valid concerns, Rakuten and NTT Docomo have already overcome such complications; open RAN is now deployed at scale.
These CSPs are also reluctant to end their relationships with established suppliers.
“The service providers’ teams have built relationships and are used to dealing with the same vendors for so long,” says Téral. “It’s very complicated for them to build new relationships with somebody else.”
More RAN player entrants
Rakuten Symphony has assembled a team with tremendous open RAN experience. AT&T is one prominent CSP that has selected Rakuten Symphony to help it with network planning and speed up deployments.
NTT Docomo, working with four vendors, has got their radio units and baseband units to work with each other. NTT Docomo is also promoting its platform, dubbed OREC (5G Open RAN Ecosystem), to other interested parties.
NEC and Fujitsu, selected by NTT Docomo, have also gained valuable open RAN experience. Fujitsu is a system integrator with Dish, while NEC is involved in many open RAN networks in Europe, starting with Telefónica.
There is also a commercial advantage for these systems vendors, since Rakuten Mobile and NTT Docomo, along with Dish and 1&1, are the leading operators deploying open RAN for the next two years.
That said, the radio unit business continues to look up. “There is no cycle [with radio units]; you still have to add radio units at some point in particular parts of the network,” says Téral.
But for open RAN, those vendors not used by NTT Docomo and Rakuten Mobile must wait for the next deployment wave. Vendor consolidation is thus inevitable; Parallel Wireless is the first shoe to drop, with its recently announced wide-scale layoffs.
So while open RAN has expanded the number of vendor suppliers, further acquisitions should be expected, as should the folding of companies that cannot survive until the next deployment wave, says Téral.
And soon at the chip level too
There is also a supply issue with open RAN silicon.
With its CPUs and FlexRAN software, Intel dominates the open RAN market. However, the CSPs acknowledge there is no point in expanding RAN suppliers if there is a vendor lock-in at the chip level, one layer below.
Téral says several chip makers are working with system vendors to enter the market with alternative solutions. These include ARM-based architectures, AMD-Xilinx, Qualcomm, Marvell’s Octeon family and Nvidia’s BlueField-3 data processing unit.
The CSPs are also getting involved in promoting more chip choices. For example, Vodafone has set up a 50-strong research team at its new R&D centre in Malaga, Spain, to work with chip and software companies on developing the architecture of choice for open RAN and expanding the chip options.
Outlook
LightCounting forecasts that the open vRAN market will account for 13 per cent of the total global RAN sales in 2027, up from 4 per cent in 2022.
A key growth driver will be the global switch to open virtual RAN in 2024-25, driven by the large Tier 1 CSPs worldwide.
“Between 2025 and 2030, you will see a mix of open RAN, and where it makes sense in parts of the network, traditional RAN deployments too,” says Téral.
Books read in 2021: Final Part

In this final part of favoured reads during 2021, the contributors are Daryl Inniss of OFS, Vladimir Kozlov of LightCounting Market Research, and Gazettabyte's editor.
Daryl Inniss, Director, Business Development at OFS
Four thousand weeks is the average human lifetime.
Oliver Burkeman's book, Four Thousand Weeks: Time Management for Mortals, is a guide to using the finite duration of our lives.
Burkeman argues that by ignoring the reality of our limited lifetime, we fill our lives with busyness and distractions and fail to achieve the very fullness that we seek.
While sobering, the book presents thought-provoking and amusing examples and stories that Burkeman turns into positive action.
An example is his argument that our lives are insignificant and that, regardless of our accomplishments, the universe continues unperturbed. Setting unrealistic goals is one consequence of our attempt to achieve greatness.
On the other hand, recognising our inability to transform the world should give us enormous freedom to focus on the things we can accomplish.
We can jettison that meaningless job, be fearless in the face of pandemics given that they come and go throughout history, and worry less about financial concerns given they are transitory. What is then left is the freedom to spend time on things that do matter to us.
Defining what’s important is an individual thing. It need not be curing cancer or solving world peace – two of my favourites. It can be something as simple as making a most delicious cookie that your kids enjoy.
It is up to each of us to find those items that make us feel good and make a difference. Burkeman guides us to pursue a level of discomfort as we seek these goals.
I found this book profound and valuable as I enter the final stage of my life.
I continue to search for ways to fulfil my life. This book helps me to reflect and consider how to use my finite time.
Vladimir Kozlov, CEO and Founder of LightCounting Market Research
Intelligence is a fascinating topic. The artificial kind is making all the headlines but alien minds created by nature have yet to be explored.
One of the most bizarre among these is the distributed mind of the octopus. Other Minds: The Octopus, the Sea and the Deep Origins of Consciousness, by Peter Godfrey-Smith, is a perfect introduction to the subject.
The Overstory: A Novel, by Richard Powers, takes the concept of alien minds to a new, more emotional level. It is a heavy read. The number of characters rivals that of War and Peace, while the density matches the style of Dostoevsky. Yet it is impossible not to finish the book, even if it takes several months.
It concerns the conflict of “alien minds”. The majority of the aliens are humans, cast from the distant fringes of our world. The trees emerge as a unifying force that keeps the book and the planet together. It is an unforgettable drama.
I have not cut a live tree since reading the book. I cannot stop thinking about just how shallow our understanding of the world is.
The intelligence created by nature is more puzzling than dark matter yet it is shuffled into the ‘Does-not-matter’ drawer of our alien minds.
Roy Rubenstein, Gazettabyte’s editor
Ten per cent of my contacts changed jobs in 2021, according to LinkedIn.
Of these, how many quit their careers after 32 years at one firm? And deliberately downgraded their salaries?
That is what Lucy Kellaway did. The celebrated Financial Times journalist quit her job to become a school teacher.
Kellaway is also a co-founder of Now Teach, a non-profit organisation that helps turn experienced workers in such professions as banking and the law into teachers.
In her book, Re-educated: How I Changed My Job, My Home and My Hair, Kellaway reflects on her career as a journalist and on her life. She notes how privileged she has been in the support she received, which helped her correct her mistakes and fulfil her career; something that isn't available to many of her students.
She also highlights the many challenges of teaching. In one chapter, she describes a class and the exchanges with her students that capture this magnificently.

A book I reread after many years was Arthur Miller’s autobiography, Timebends: A Life.
In the mid-1980s on a trip to the UK to promote his book, Miller visited the Royal Exchange Theatre in Manchester. There, I got a signed copy of his book which I prize.
The book starts with his early years in New York, surrounded by eccentric Jewish relatives.
Miller also discusses the political atmosphere during the 1950s, resulting in his being summoned before the House Un-American Activities Committee. The first time I read this, that turbulent period seemed very much a part of history. This time, the reading felt less alien.
Miller is fascinating when explaining the origins of his plays. He also had an acute understanding of human nature, as you would expect of a playwright.
The book I most enjoyed in 2021 is The Power of Strangers: The Benefits of Connecting in a Suspicious World, by Joe Keohane.
The book explores talking to strangers and highlights a variety of people going about it in original ways.
Keohane describes his many interactions, which include an immersive three-day course on how to talk to strangers, held in London, and a train journey between Chicago and Los Angeles; the thinking being that, during a 42-hour trip, what else would you do but interact with strangers?
Keohane learns that, as he improves, there is something infectious about the skill: people start to strike up conversations with him.
The book conveys how interacting with strangers can be life-enriching and can dismantle deep-seated fears and preconceptions.
He describes an organisation that gets Republican and Democrat supporters to talk. At the end of one event, an attendee says: “We’re all relieved that we can actually talk to each other. And we can actually convince the other side to look at something a different way on some subjects.”
If reading novels can be viewed as broadening one’s experiences through the stories of others, then talking to strangers is the non-fiction equivalent.
I loved the book.




