Ciena becomes a computer weaver

- Ciena is to buy optical interconnect start-up Nubis Communications for $270 million.
- The deal covers optical and copper interconnect technology for data centres.

Ciena has announced its intention to buy optical engine specialist Nubis Communications for $270 million. If the network is the computer, Nubis’ optical engine and copper integrated circuit (IC) expertise will help Ciena better stitch together AI’s massive compute fabric.
Ciena signalled its intention to target the data centre earlier this year at the OFC show when it showcased its high-speed 448-gigabit serialiser-deserialiser IC technology and coherent lite modem. Now, Ciena has made a move for start-up Nubis, which plays at the core of AI data centres.
“Ciena’s expertise in high-speed components is relevant to 400G per lane Ethernet transceivers, but they never sold any products to this market,” says Vladimir Kozlov, CEO of LightCounting. “Nubis offers them an entry point with several designs and customer engagements.”
With the deal, Ciena is extending its traditional markets of wide area networks (WAN), metro, and short-reach dense wavelength division multiplexing (DWDM) to include AI networking opportunities. These range from scale-across networks, where AI workloads are shared across multiple data centres (something Ciena can already address), to scale-out and scale-up networks for AI clusters inside the data centre.
Puma optical engine
Nubis has developed two generations of compact optical engines for near-package optics (NPO) and co-package optics (CPO) applications. Its first-generation engine operates at 100 gigabits per second (Gbps), while its second, dubbed Puma, operates at 200 Gbps.
Nubis’s optical engine philosophy is based on routing the optical channels out of the surface of the optical engine rather than its edge. The start-up also matches the number of optical channels to the electrical ones. The optical engine can be viewed as a sieve: data from the input channels flows through the chip and emerges in the same number of channels at the output. The engine acts as a two-way gateway, with one side handling electrical signals and the other, optical ones.
The Puma optical engine uses 16 channels in each direction: 16 lanes of 200Gbps electrical signals, for a total of 3.2 terabits per second (Tbps), and 16 fibres, each carrying 200Gbps of data in the form of a wavelength. Puma’s total capacity, counting both directions, is thus 6.4Tbps. The engine also needs four external lasers to drive the optics, each laser feeding four channels or four fibres. The total fibre bundle of the device consists of 36 fibres: 32 for data (16 for receive and 16 for transmit) and four for the laser light sources.
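As a sanity check, Puma’s headline figures follow directly from the channel counts. A minimal sketch (the variable names are ours, purely illustrative):

```python
# Back-of-the-envelope check of the Puma engine's headline figures.
LANE_RATE_GBPS = 200       # per electrical lane and per fibre wavelength
LANES_PER_DIRECTION = 16   # electrical and optical channel counts match
EXTERNAL_LASERS = 4        # each laser feeds four fibres

per_direction_tbps = LANE_RATE_GBPS * LANES_PER_DIRECTION / 1000
total_tbps = 2 * per_direction_tbps            # transmit plus receive
data_fibres = 2 * LANES_PER_DIRECTION          # 16 transmit, 16 receive
bundle_fibres = data_fibres + EXTERNAL_LASERS  # plus the laser-feed fibres

print(per_direction_tbps)  # 3.2 (Tbps each way)
print(total_tbps)          # 6.4 (Tbps total capacity)
print(bundle_fibres)       # 36 (fibres in the bundle)
```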
Nubis is also a proponent of linear drive technology. Here, the advanced serdes on the adjacent semiconductor chip drives the optical engine directly, avoiding the need for a power-hungry on-engine digital signal processor (DSP). The start-up has also developed a system-level simulator software tool that it uses to model the channel from the transmitter to the receiver. The tool models not only the electrical and optical components within the channel but also the endpoints, such as the serdes.
Nitro
Nubis has an analogue IC team that designs the trans-impedance amplifiers (TIAs) and drivers used in its optical engine. The hardware compensates for channel impairments with low noise, high linearity, and at high speed. It is this channel simulator tool that Nubis used to optimise its optical engine and to develop its second key technology, which Nubis calls Nitro: a chip that extends the reach of copper cabling.
“We use our linear optics learning and apply it to copper straight out of the gate,” said Peter Winzer, founder and CTO of Nubis, earlier this year. By using its end-to-end simulator tool, Nubis developed the Nitro IC, which extends the 1m reach of direct-attached copper to 4m using an active copper cable design. “We don’t optimise the driver chip, we optimise the end-to-end system,” says Winzer.
Nubis was also part of a novel design based on a vertical line card to shorten the trace length between an ASIC and pluggable modules.
Ciena’s gain
The acquisition of Nubis places Ciena at the heart of the electrical-optical transition inside the data centre, covering both interconnect options: copper and optical. Ciena gains direct-drive technology expertise for electrical and optical interfaces, enabling scale-up, as well as optical engine technology for scale-out, adding to its coherent technology expertise.

Ciena’s technologies will span coherent ultra-long-haul links all the way to AI accelerators, the heart of AI clusters. By combining Ciena’s 448-gigabit serdes with Nubis’s optical engine expertise, Ciena has a roadmap to develop 12.8Tbps and faster optical engines.
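One plausible reading of that 12.8Tbps figure keeps Puma’s counting (16 channels per direction) and doubles the lane rate to 400 gigabits; this is our inference, not a Ciena-published design:

```python
# Inferred lane maths for a 12.8 Tbps engine; not a Ciena-published design.
lanes_per_direction = 16  # unchanged from Puma
lane_rate_gbps = 400      # assumed payload of the 448G serdes generation
total_tbps = 2 * lanes_per_direction * lane_rate_gbps / 1000
print(total_tbps)  # 12.8
```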
The acquisition pits Ciena against new competitors, such as Broadcom and Marvell, that combine chip and optical expertise and deliver co-packaged optics solutions alongside complex ICs.
The deal also differentiates Ciena from its traditional system vendor competitors, such as Cisco/Acacia and Nokia, while Huawei is active in long-haul optical and makes AI clusters. Ciena will also compete with existing high-speed optical players, including co-packaged optics specialists Ayar Labs and Ranovus, microLED player Avicena, and optical/IC fabric companies such as Lightmatter and Celestial AI.
“Ciena will be a unique supplier in the co-packaged optics/near-packaged optics/active copper cabling data centre interconnect market,” says Daryl Inniss, Omdia’s thought lead of optical components and advanced fibre. “The other suppliers either have multiple products in the intra data centre market, like Broadcom and Nvidia, or they are interconnect-focused start-ups. These suppliers should all wonder what Ciena will do next inside the data centre.”
Ciena will enhance its overall expertise in chips, optics, and signal processing with the Nubis acquisition. It will also put Ciena in front of key processor players and different hyperscaler engineering teams, which drive next-generation AI systems.
Ciena will also have all the necessary parts, whatever the evolving timescales of the copper-to-optical transition within AI systems. It adds direct-detect technology and copper interconnect, while on the optical side its coherent expertise is now coupled with near-package and co-packaged optics.
Nubis’ gain
Nubis’ 50-plus staff get a successful exit. Founded in 2020, the start-up will become a subsidiary of Ciena.
Nubis will be joining a much bigger corporate entity with deep expertise and pockets. Ciena has a good track record with its acquisitions: think Nortel at the system level and Blue Planet, a software deal. Now the Nubis purchase will take Ciena firmly inside the data centre.
“This is a great deal for Nubis,” says Kozlov. “Congratulations to their team.”
What next?
The deal is expected to close in the fourth quarter of this year. Ciena expects the deal to start adding to its revenues from 2028, requiring Ciena and Nubis to develop products and deliver design wins in the data centre.
“Given the breadth of Ciena’s capabilities, its deep pockets, and products like its data centre out-of-band (DCOM) measurement product, router, and coherent transceivers, one can imagine that Ciena would offer more than co-packaged optics/near-packaged optics/active copper cabling inside the data centre,” says Inniss.
Lumentum's optical circuit switch for AI data centres

Part 3: Data Centre Switching
The resurgence of optical circuit switches for use in data centres is gaining momentum, driven by artificial intelligence (AI) workloads that require scalable connectivity.
Lumentum is one of several companies that showcased an optical circuit switch (OCS) at the OFC event in San Francisco in March. Lumentum’s R300 switch optically connects any of its 300 input ports to any of its 300 output ports. It uses micro-electro-mechanical systems (MEMS), tiny mirrors that move electrostatically, to direct light from an input port to the chosen output port.
The R300 addresses the network needs of AI data centres, helping link large numbers of AI accelerator chips such as graphics processing units (GPUs).
“We’ve been talking to all the hyperscalers in North America and China,” says Peter Roorda, general manager of the switching business unit at Lumentum. “The interest is pretty broad for the applications of interconnecting GPUs and AI clusters; that’s the exciting one.”
Optical circuit switches
In a large-scale data centre, two or three tiers of electrical switch platforms link the many servers’ processors. The number of tiers needed depends on the overall processor count. The same applies to the back-end network used for AI workloads. These tiers of electrical switches are arranged in what is referred to as a Clos or “Fat Tree” architecture.

Google presented a paper in 2022 revealing that it had been using an internally developed MEMS-based optical circuit switch for several years. Google used its optical circuit switches to replace all the top-tier ‘spine’ layer electrical switches across its data centres, resulting in significant cost and power savings.
Google subsequently revealed a second use for its switches: directly connecting racks of its tensor processing unit (TPU) accelerator chips. Google can move workloads across thousands of TPUs in a cluster, using its hardware efficiently and bypassing a rack when a fault arises.
Google’s revelation rejuvenated interest in optical switch technology, and at OFC, Lumentum showed its first R300 optical switch product in operation.
Unlike packet switches, which use silicon to process data at the packet level, an optical circuit switch sets up a fixed, point-to-point optical connection, akin to a telephone switchboard, for the duration of a session.
The optical switch is ideal for scenarios where large, sustained data flows are required, such as in AI training clusters.

Merits
The optical circuit switch’s benefits include cost and power savings and improved latency. Optical-based switch ports are data-rate independent. They can support 400 gigabit, 800 gigabit, and soon 1.6-terabit links without requiring an upgrade.
“Now, it’s not apples to apples; the optical circuit switch is not a packet switch,” says Roorda. “It’s just a dumb circuit switch, so there must be control plane software to manage it.” However, the cost, power, and space savings, plus port transparency, are incentive enough for the hyperscalers to invest in the technology.
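To make that division of labour concrete, the sketch below models the switch as nothing more than a table of input-to-output cross-connects, with admission checks standing in for the operator’s control plane. The class and method names are hypothetical, not Lumentum’s API:

```python
class OpticalCircuitSwitch:
    """Toy model of an NxN optical circuit switch's connection state.

    The switch only holds point-to-point light paths; scheduling,
    topology decisions, and failure handling belong to the operator's
    higher-level control plane.
    """

    def __init__(self, ports: int = 300):
        self.ports = ports
        self.cross_connects: dict[int, int] = {}  # input port -> output port

    def connect(self, in_port: int, out_port: int) -> None:
        if not (0 <= in_port < self.ports and 0 <= out_port < self.ports):
            raise ValueError("port out of range")
        if in_port in self.cross_connects:
            raise RuntimeError(f"input {in_port} is already connected")
        if out_port in self.cross_connects.values():
            raise RuntimeError(f"output {out_port} is already in use")
        # In hardware this step steers a MEMS mirror; here it is a dict entry.
        self.cross_connects[in_port] = out_port

    def disconnect(self, in_port: int) -> None:
        self.cross_connects.pop(in_port, None)


# Example: patch the uplink of rack 7 through to the downlink of rack 42.
ocs = OpticalCircuitSwitch(ports=300)
ocs.connect(7, 42)
```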
The MEMS-based R300
Lumentum has a 20-year history with MEMS. It first used the technology in the wavelength-selective switches deployed in telecom networks, before adopting liquid crystal on silicon (LCOS) technology.
“We have 150,000 MEMS-based wavelength selective switches in the field,” says Roorda. “This gives us a lot of confidence about their reliability.”
MEMS-based switches are renowned for their manufacturing complexity, and here Lumentum’s experience counts.
“This is a key claim as users are worried about the mechanical aspect of MEMS’ reliability,” says Michael Frankel, an analyst at LightCounting Market Research, which published an April report covering Ethernet, Infiniband and optical switches in cloud data centres. “Having a reliable volume manufacturer is critical.”
In its system implementation, Google revealed that it uses bi-directional transceivers in conjunction with the OCS.
“Using bi-directional ports is clever because you get to double the ports out of your optical circuit switch for the same money,” says Mike DeMerchant, Lumentum’s senior director of product line management, optical circuit switch. “But then you need customised, non-standard transceivers.”
A bi-directional design complicates the control plane management software because bi-directional transceivers effectively create two sets of connections. “The two sets of transceivers can only talk in a limited fashion between each other, so you have to manage that additional control plane complexity,” says DeMerchant.
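The arithmetic behind DeMerchant’s point: a conventional duplex link needs two fibres, and so two switch ports, while a bidirectional transceiver sends and receives on one fibre (illustrative numbers):

```python
# Duplex links supported by a 300-port optical circuit switch.
ports = 300
print(ports // 2)  # 150 links with conventional two-fibre transceivers
print(ports)       # 300 links with single-fibre bidirectional transceivers
```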
Lumentum enters the market with a 300×300 radix switch. Some customers have asked about a 1,000×1,000 port switch. From a connectivity perspective, bigger is better, says Roorda. “But bigger is also harder; if there is a problem with that switch, the consequences of a failure—the blast radius—are larger too,” he says.

Lumentum says there are requests for smaller optical circuit switches and expects to offer a portfolio of different-sized products in the next two years.
The R300 switch is specified with a 3dB insertion loss, but Roorda says typical performance is closer to 1.5dB at the start of life. “And 3dB is good enough for using a standard off-the-shelf -FR4 or a -DR4 or -DR8 optical module [with the switch],” says Roorda.
A 400G QSFP-DD FR4 module uses four wavelengths on a single-mode fibre and has a reach of 2km, whereas a DR4 or DR8 uses a single wavelength on each fibre and has 4 or 8 single-mode fibre outputs, respectively, with a reach of 500m.
An FR4 interface is ideal with an optical circuit switch since its multiple wavelengths travel on a single fibre and can be routed through one port. However, many operators use DR4 and DR8 interfaces and are exploring using such transceivers with the switch.
“More ports would be consumed, diluting the cost-benefit, but the power savings would still be significant,” says Roorda. Additionally, in some applications, individually routing and recombining the separate ‘rails’ of a DR4 or DR8 offers greater networking granularity. Here, the optical circuit switch still provides value, he says.
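The port arithmetic behind the diluted cost-benefit is straightforward, counting one switch port per transmit fibre for simplicity (an illustrative sketch, not a product specification):

```python
# Switch ports consumed per transceiver, counting transmit fibres only.
fabric_ports = 300
for name, fibres in (("FR4", 1), ("DR4", 4), ("DR8", 8)):
    print(f"{name}: {fibres} port(s) per module, "
          f"{fabric_ports // fibres} modules per 300-port switch")
# FR4: 300 modules; DR4: 75; DR8: 37, hence the diluted cost-benefit.
```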
One issue with an optical circuit switch compared to an electrical one is that the signal passes through the switch’s input and output ports unregenerated before reaching the destination transceiver, adding an extra 3dB of loss to the link budget. By contrast, with an electrical switch, the signal is regenerated by the pluggable transceiver at the output port.
LightCounting’s Frankel also highlights the switch’s loss numbers. “Lumentum’s claim of a low loss – under 2dB – and a low back reflection (some 60dB) are potential differentiators,” he says. “It is also a broadband design – capable of operating across the O-, C- and L-bands: O-band for data centre and C+L for telecom.”
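A rough link-budget view shows why these loss figures matter. The module margin below is an assumption for illustration, loosely anchored to Roorda’s “3dB is good enough” remark, not a published specification:

```python
# Illustrative budget check for routing a pluggable link through the OCS.
unallocated_margin_db = 3.0   # assumed loss a standard module can absorb
switch_loss_db = {"typical (start of life)": 1.5, "worst case": 3.0}

for case, loss in switch_loss_db.items():
    remaining = unallocated_margin_db - loss
    print(f"{case}: {loss}dB switch loss, {remaining:.1f}dB margin left")
# With an electrical switch the signal is regenerated at each hop, so
# no such optical margin is consumed.
```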
Software and Hyperscaler Control
Lumentum controls the switch using the open-source SONiC [Software for Open Networking in the Cloud] network operating system (NOS), based on Linux. The hyperscalers will add their own higher-level control plane management software.
“It’s the basic control features for the optics, so we’re not looking to get into the higher control plane,” says DeMerchant.
Challenges and Scalability
Designing a 300×300 optical circuit switch is complicated. “It’s a lot of mirrors,” says Roorda. “You’ve got to align them, so it is a complicated, free-space, optical design.”
Reliability and scalable manufacturing are hurdles. “The ability to build these things at scale is the big challenge,” says Roorda. Lumentum argues that its stable MEMS design results in a reliable, simpler, and less costly switch.

Lumentum envisions data centres evolving to use a hybrid switching architecture, blending optical circuit switches with Ethernet switches.
Roorda compares it to how telecom networks transitioned to using reconfigurable optical add-drop multiplexers (ROADMs). “It’ll be hybridised with packet switches because you need to sort the packets sometimes,” says Roorda.
Future developments may include multi-wavelength switching and telecom applications for optical circuit switches. “For sure, it is something that people are talking about,” he adds.
Lumentum says its R300 will be generally available in the second half of this year.
OFC 2025 industry reflections - Final Part

Gazettabyte has been asking industry figures for their thoughts after attending the OFC conference held in San Francisco.
In the final part, Arista’s Vijay Vusirikala and Andy Bechtolsheim, Chris Doerr of Aloe Semiconductor, Adtran’s Jörg-Peter Elbers, and Omdia’s Daryl Inniss share their learnings. Vusirikala, Doerr, and Elbers all participated in OFC’s excellent Rump Session.
Vijay Vusirikala, Distinguished Lead, AI Systems and Networks, and Andy Bechtolsheim, Chief Architect, at Arista Networks.
OFC 2025 wasn’t just another conference. The event felt like a significant inflexion point, buzzing with an energy reminiscent of the dot-com era optical boom.
This palpable excitement, reflected in record attendance and exhibitor numbers, was accentuated for the broader community by the context set at Nvidia’s GTC event held two weeks before OFC, highlighting the critical role optical technologies play in enabling next-generation AI infrastructure.
This year’s OFC moved beyond incremental updates, showcasing a convergence of foundational technologies and establishing optics not just as a supporting player but as a core driver in the AI era. The scale of innovation directed towards AI-centric solutions – tackling power consumption, bandwidth density, and latency – was striking.
Key trends that stood out were as follows:
Lower Power Interconnect technologies
The overarching topic was the need for more power-efficient optics for high-bandwidth AI fabrics. Legacy data centre optics are impacting the number of GPUs that fit into a given data centre’s power envelope.
Three main strategies were presented to address the power issue.
First, whenever possible, use copper cables, which are far more reliable and cost less than optics. The limitation, of course, is copper’s reach, which at 200 gigabit-per-lane is about 1-2m for passive copper cables and 3-4m for active redriven copper cables.
Second, eliminate the traditional digital signal processor (DSP) and use linear interface optics, including Linear Pluggable Optics (LPO), Near Package Optics (NPO), and Co-Packaged Optics (CPO), all of which offer substantial (around 60 per cent) power savings, lower latency, and higher reliability compared to traditional DSP-based solutions.
The biggest difference between linear pluggable optics and co-packaged optics is that linear pluggable optics retains the well-known operational advantages of pluggable modules: configurability, multi-vendor support, and easy field serviceability (hot-swapping at the module level). Fixed optics like co-packaged optics, by contrast, require chassis-level RMAs (return materials authorisations). It remains to be seen whether co-packaged optics can overcome the serviceability issue.
Third, developments in a host of new technologies – advances in copper interconnects, microLED-based interconnects, and THz-RF-over-waveguides – promise even lower power consumption than current silicon photonics-based interconnect technologies.
We look forward to hearing more about these new technologies next year.
Transition from 200 gigabit-per-lambda to 400 gigabit-per-lambda
With the 200 gigabit-per-lambda optical generation just moving into volume production in 2025-26, attention has already turned to the advancement and challenges of 400 gigabit-per-lambda optical technologies for future high-speed data transmission, aiming towards 3,200 gigabit (8×400 gigabit) modules.
Several technical approaches for achieving 400 gigabit-per-lambda were discussed, including PAM-4 intensity modulation direct detection (IMDD), PAM-4 dual-polarisation, and optical time division multiplexing (OTDM). The technology choices here include indium phosphide, thin-film lithium niobate (TFLN), and silicon photonics, which are compared based on RF (radio frequency) loss, integration, cost, and high-volume readiness.
For 400 gigabit lambda optics, indium phosphide and thin-film lithium niobate are strong candidates, as silicon photonics will struggle with the high bandwidth.
At this point, it is impossible to predict which platform will emerge as the high-volume winner. Delivering power- and cost-effective 400 gigabit-per-lambda optics will require a concerted industry effort from optical component suppliers, connector suppliers, and test and measurement vendors.
Multi-core fibre
A new pain point in large AI data centres is the sheer number of fibre cables and their associated volume and weight. One way to solve this problem is to combine multiple fibre cores in a single fibre, starting with four cores, which would offer a 4:1 reduction in fibre count, bulk, and weight.
Hollow-core fibre
Innovation continues in the foundational fibre itself. Hollow-core fibre generated significant buzz, with its potential for lower latency and wider bandwidth attracting intense interest.
The maturing hollow-core fibre ecosystem, including cabling and interconnection progress, suggests deployments beyond niche applications like high-frequency trading may be approaching, reaching areas like distributed AI processing.
AI-driven network evolution
AI isn’t just driving network demand; it is increasingly becoming a network management tool.
Numerous demonstrations showcased AI/machine learning applications for network automation, traffic prediction, anomaly detection, predictive maintenance (for example, analysing optical time-domain reflectometer (OTDR) traces), configuration management, and resource optimisation. This represents a fundamental shift towards more efficient, reliable, self-configuring, self-healing, and self-optimising networks.
Along with the many technical talks and tutorials, show floor demos, and customer and supplier meetings, OFC attendees also had a chance to combine technology with some light-hearted fun at the rump session.
This year’s topic was rebuilding global communication infrastructure after an alien invasion, and three teams came up with thought-provoking ideas for this theme.
Chris Doerr, CEO of Aloe Semiconductor
The best way to describe OFC 2025 is as a giant Mars dust storm that raged for days. The swirling sand made it difficult to see anything clearly, and the sound was so loud you couldn’t think.
Acronyms ending in “O” were hitting you from all sides: LPO, LRO, NPO, CPO, OIO. The wind blew away sand that had buried old technologies, such as lithium niobate, electro-optic polymer, and indium-phosphide modulators, and they joined the fray.
Only now that the storm has somewhat subsided can we start piecing together what the future holds.
The main driver of the storm was, of course, artificial intelligence (AI) systems. AI requires a vast number of communication interconnects. Most interconnects, at least within a rack, are still copper. While copper keeps making incredible strides in density and reach, a fibre-optic interconnect takeover seems more and more inevitable.
The Nvidia announcements of co-packaged optics (CPO), which go beyond co-packaged optics and deserve a new name, such as optical input-output (OIO) or system-on-chip (SOC), created a great deal of excitement and rethinking. Nvidia employs a silicon interposer that significantly increases the electrical escape density and shortens the electrical links. This is important for the long-term evolution of AI computing.
The CPO/OIO/SOC doesn’t mean the end of pluggables. Pluggables still bring tremendous value with attributes such as time-to-market, ecosystem, replaceability, etc.
Most pluggables will still be fully retimed, but 100 gigabit-per-lane seems comfortable with linear pluggable optics (LPO), and 200 gigabit-per-lane is starting to accept linear receive optics (LRO).
For 200 gigabit per lane, electro-absorption modulated lasers (EMLs) and silicon photonics have comfortably taken the lead. However, for 400 gigabit per lane, which had two main demos on the show floor by Ciena and Marvell, many technologies are jockeying for position, mostly EMLs, thin-film lithium niobate (TFLN), indium phosphide, and silicon photonics.
Many companies are abandoning silicon photonics, but this author feels this is premature. There were demos at OFC of silicon photonics attaining near 400 gigabit-per-lane, and the technology is capable of it.
An important thing to remember is that the new OIO/SOC technology is silicon photonics and comes from a CMOS foundry. Putting non-CMOS materials such as thin-film lithium niobate or indium phosphide in such a platform could take years of expensive development and is thus unlikely.
In summary, OFC 2025 was very active and exciting. Significant technology improvements and innovations are needed.
The dust is far from settled, so we must continue wading into the storm and trying to understand it all.
Jörg-Peter Elbers, Senior Vice President, Advanced Technology, Standards and IPR, Adtran
OFC 2025 marked its largest attendance since 2003 with nearly 17,000 visitors, as it celebrated its 50th anniversary.
Discussions in 1975 centred around advances in fibre technology for telecommunications. This year’s hottest topic was undoubtedly optical interconnects for large-scale AI clusters.
Following an insightful plenary talk by Pradeep Sindhu from Microsoft on AI data centre architecture, the sessions discussing co-packaged optics (CPO) and associated technologies were packed. The excitement had been fuelled by Nvidia’s earlier announcement that it will use co-packaged optics in its next generation of Ethernet and Infiniband switches.
The show floor featured 800 gigabit-per-second (Gbps), 1.6 terabit-per-second (Tbps), and the first 3.2Tbps interconnect demonstrations using different interface standards and technologies.
For access, 50G-PON was showcased in triple-PON coexistence mode, while next-generation very high-speed PON drew keen interest in the technical sessions.
Other standout topics included numerous papers on fibre sensing, stimulating discussions on optical satellite communications, and a post-deadline paper demonstrating unrepeatered hollow-core fibre transmission over more than 200km.
OFC 2025 was fun too. When else do you get to reimagine communications after an alien attack, as in this year’s rump session?
No visit to San Francisco is complete without trying one of Waymo’s self-driving taxis. Having heard Dmitri Dolgov, Waymo’s CEO, give a plenary talk at OFC 2019, it was thrilling to see autonomous driving in action. I love technology!
Daryl Inniss, Omdia Consultant, Optical Components and Fibre Technologies
I worked on commercialising fibre technology for emerging applications at OFS – now Lightera – from 2016 to 2023. I spent the prior 15 years researching and analysing the optical components market.
Today, I see a market on the cusp of a transition due to the unabated bandwidth demand and the rise of computing architectures to support high-performance computing (HPC) and artificial intelligence (AI).
Even optical fibre, the fundamental optical communications building block, is under intense scrutiny to deliver performance suitable for the next 30 to 50 years. Options include hollow-core and multi-core fibre, two of the three fibre technologies that caught my attention at OFC 2025.
The third, polarisation-maintaining fibre arrays for co-package optics, is one part of the conference’s biggest story. OFC 2025 provided a status update on these technologies.
Hollow-core fibre
OFC’s first-day hollow-core fibre workshop (S2A) illustrated the technology’s niche status and its potential to remain so until the fibre is standardised. The industry ecosystem was well represented at the session.
Not surprisingly, challenges highlighted and summarised by Russ Ellis, Microsoft’s Principal Cloud Network Engineer, included manufacturing, field installation, field splicing, cabling, and termination inside the data centre. These are all expected topics and well understood.
I was surprised to hear Microsoft report that the lack of an established ecosystem was a challenge, and I’ll explain why below.
Coming into OFC, the biggest market question was fibre manufacturing scalability, as most reported draws are 5km or less. Supplier YOFC put this concern to rest by showcasing a 20km-plus spool from a single fibre draw on the show floor. And Yingying Wang, CEO of Linfiber, reported that 50 to 100km preforms will be available this year.
In short, suppliers can scale hollow-core fibre production.
From a field performance perspective, Microsoft highlighted numerous deployments illustrating successful fibre manufacturing, cabling, splicing, termination, installation, and testing. The company also reported no field failures or outages for cables installed over five years ago.
However, to my knowledge, the hollow-core fibre ecosystem challenge is a consequence of a lack of standardisation and discussion about standardisation.
Each fibre vendor has a different fibre design and a different glass outer diameter. Microsoft’s lack-of-an-ecosystem comment is therefore unsurprising. Only when the fibre is standardised can an ecosystem emerge, generating volumes and reducing costs.
Today, only vertically integrated players benefit from hollow-core fibre. Until the fibre is standardised, technology development and adoption will be stunted.
Multi-core fibre
I was pleasantly surprised to find multiple transceiver vendors showcasing modules with integrated fan-in/fan-out (FIFO). This is a good idea as it supports multi-core fibre adoption.
One vendor is targeting FR4 (TeraHop for 2x400G), while another is targeting DR8 (Hyper Photonix for 8x100G). There is a need to increase core density, and it is good to see these developments.
However, we are still very far from multi-core fibre commercialisation, as numerous operational factors are impacted. The good news is that multi-core fibre standardisation is progressing.
Polarisation-maintaining fibre
According to Nick Psaila, Intel’s principal engineer and technology development manager, polarisation-maintaining fibre arrays remain expensive.
The comment was made at Optica’s February online Industry Meeting and verified in my follow-up conversation with Psaila.
Using an external laser source is the leading approach to deliver light for co-packaged optics, highlighting an opportunity for high-volume, low-cost polarisation-maintaining fibre arrays.
Co-packaged optics were undoubtedly the most significant topic of the show.
Coherent showcased a 3Tbps concept product using VCSELs for co-packaged optics. Given that multimode fibre is used in the shortest optical connections in data centres and that VCSELs have very low power consumption, I’m surprised I’ve not heard more about their use for this application.
Testing of emerging photonic solutions for HPC and AI devices has been identified as a bottleneck. Quantifi Photonics has taken on this challenge. The company introduced an oscilloscope that delivers results comparable to industry-leading instruments and is designed for parallel measurements. It targets photonic devices being designed for co-packaged optics applications.
Multiple channels, each with wavelength-division multiplexing lasers, must be tested, in addition to all the electrical channels. This is a time-consuming, expensive process, particularly using existing equipment.
Polymer modulators continue to be interesting because they have high bandwidth and the potential to be inexpensive. However, reliability is their challenge. One such vendor, NLM Photonics, is targeting this application.
The number of vendors offering optical circuit switches was a surprising development. I wonder if the opportunity is large enough to justify so many vendors. I’m told that numerous internet content providers are interested in the technology. Moreover, these switches may be adopted in telecom networks. This is a topic that needs continued attention, specifically regarding the requirements of each application.
Lastly, Lightmatter provided a clear description of its technology. An important factor is the optical interposer that removes input-output connectivity from the chip’s edge, thereby overcoming bandwidth limitations.
I was surprised to learn that the laser is the company’s design, although Lightmatter has yet to reveal more.
OFC 2025 industry reflections - Part 2

Gazettabyte is asking industry figures for their thoughts after attending the 50th-anniversary OFC show in San Francisco. In Part 2, the contributions are from BT’s Professor Andrew Lord, Chris Cole, Coherent’s Vipul Bhatt, and Juniper Networks’ Dirk van den Borne.
Professor Andrew Lord, Head of Optical Network Research at BT Group
OFC was a highly successful and lively show this year, reflecting a sense of optimism in the optical comms industry. The conference was dominated by the need for optics in data centres to handle the large AI-driven demands. And it was exciting to see the conference at an all-time attendance peak.
From a carrier perspective, I continued to appreciate the maturing of 800-gigabit plugs for core networks and 100GZR plugs (including bidirectional operation for single-fibre working) for the metro-access side.
Hollow-core fibre continues to progress with multiple companies developing products, and evidence for longer lengths of fibre in manufacturing. Though dominated by data centres and low-latency applications such as financial trading, use cases are expected to spread into diverse areas such as subsea cables and 6G xHaul.
There was also a much-increased interest in fibre sensing as an additional revenue generator for telecom operators, although compelling use cases will require more cost-effective technology.
Lastly, there has been another significant increase in quantum technology at OFC. There was an ever-increasing number of Quantum Key Distribution (QKD) protocols on display, but with a current focus on Continuous-Variable QKD (CV-QKD), which might be more readily manufacturable and easier to integrate.
Chris Cole, Optical Communications Advisor
For the premier optics conference, the amount of time and floor space devoted to electrical interfaces was astounding.
Even more amazing is that while copper’s death at the merciless hands of optics continues to be reported, the percentage of time devoted to electrical work at OFC is going up. Multiple speakers commented on this throughout the week.
One reason is that as rates increase, the electrical links connecting optical links to ASICs are becoming disproportionately challenging. The traditional Ethernet model, where the electrical link adds only a small penalty to the overall link budget, is becoming less valid.
Another reason is the introduction of power-saving interfaces, such as linear and half-retimed, which tightly couple the optical and electrical budgets.
Optics engineers now have to worry about S-parameters and cross-talk of electrical connectors, vias, package balls, copper traces and others.
The biggest buzz in datacom was around co-packaged optics, helped by Nvidia’s switch announcements at GTC in March.
Established companies and start-ups were outbidding each other with claims of the highest bandwidth in the smallest space; typically the more eye-popping the claims, the less actual hard engineering data to back them up. This is for a market that is still approximately zero and faces its toughest hurdles of yield and manufacturability ahead.
To their credit, some companies are playing the long game and doing the slow, hard work to advance the field. For example, I continue to cite Broadcom for publishing extensive characterisation of their co-packaged optics and establishing the bar for what is minimally acceptable for others if they want to claim to be real.
The irony is that, in the meantime, pluggable modules are booming, and it was exciting to see so many suppliers thriving in this space, as demonstrated by the products and traffic in their booths.
The best news for pluggable module suppliers is that if co-packaged optics takes off, it will create more bandwidth demand in the data centre, driving up the need for pluggables.
I may have missed it, but no coherent ZR or other long-range co-packaged optics were announced.
A continued amazement at each OFC is the undying interest and effort in various incarnations of general optical computing.
Despite having no merit as easily shown on first principles, the number of companies and researchers in the field is growing. This is also despite the market holding steady at zero.
The superficiality of the field is best illustrated by a slogan gaining popularity and heard at OFC: computing at the speed of light. This is despite the speed of propagation being similar in copper and optical waveguides. The reported optical computing devices are hundreds of thousands or millions of times larger than equivalent CMOS circuits, resulting in the distance, not the speed, determining the compute time.
Practical optical computing precision is limited to about four bits, unverified claims of higher precision notwithstanding, making it useless in data centre applications.
Vipul Bhatt, Vice President, Corporate Strategic Marketing at Coherent.
Three things stood out at OFC:
- The emergence of transceivers based on 200-gigabit VCSELs
- A rising entrepreneurial interest in optical circuit switching
- And an accelerated momentum towards 1.6-terabit (8×200-gigabit transceivers) alongside the push for 400-gigabit lanes due to AI-driven bandwidth expansion.
The conversations about co-packaged optics showed increasing maturity, shifting from ‘pluggable versus co-packaged optics’ to their co-existence. The consensus is now more nuanced: co-packaged optics may find its place, especially if it is socketed, while front-panel pluggables will continue to thrive.
Strikingly, talk of optical interconnects beyond 400-gigabit lanes was almost nonexistent. Even as we develop 400 gigabit-per-lane products, we should be planning the next step: either another speed leap (this industry has never disappointed) or, more likely, a shift to ‘fast-and-wide’, blurring the boundary between scale-out and scale-up by using a high radix.
Considering the fast cadence of bandwidth upgrades, the absence of such a pivotal discussion was unexpected.
Dirk van den Borne, Director of System Engineering at Juniper Networks
The technology singularity is defined as the merger of man and machine. However, after a week at OFC, I will venture a different definition where we call the “AI singularity” the point when we only talk about AI every waking hour and nothing else. The industry seemed close to this point at OFC 2025.
My primary interest at the show was the industry’s progress around 1.6-terabit optics, from scale-up inside the rack to data centre interconnect and long-haul using ZR/ZR+ optics. The industry here is changing and innovating at an incredible pace, driven by the vast opportunity that AI unlocks for companies across the optics ecosystem.
A highlight was the first demos of 1.6-terabit optics using a 3nm CMOS process DSP; these have started to tape out, bringing power consumption down from a scary 30W to a high but workable 25W. Beyond the power saving alone, this difference matters a lot in the design of high-density switches and routers.
It’s equally encouraging to see the first module demos with 200 gigabit-per-lane VCSELs and half-retimed linear receive optics (LRO) pluggables. Both approaches can potentially reduce the optics’ power consumption to 20W and below.
The 1.6-terabit ecosystem is rapidly taking shape and will be ready for prime time once 1.6-terabit switch ASICs arrive in the marketplace. There’s still a lot of buzz around linear pluggable optics (LPO) and co-packaged optics, but both don’t seem ready yet. LPO mostly appears to be a case of too little, too late. It wasn’t mature enough to be useful at 800 gigabits, and the technology will be highly challenging for 1.6 terabits.
The dream of co-packaged optics will likely have to wait for two more years, though it does seem inevitable. But with 1.6 terabit pluggable optics maturing quickly, I don’t see it having much impact in this generation.
The ZR/ZR+ coherent optics are also progressing rapidly. Here, 800-gigabit is ready, with proven interoperability between modules and DSPs using the OpenROADM probabilistic constellation shaping standard, a critical piece for more demanding applications.
The road to 1600ZR coherent optics for data centre interconnect (DCI) is now better understood, and power consumption projections seem reasonable for 2nm DSP designs.
Unfortunately, the 1600ZR+ is more of a question mark to me, as ongoing standardisation is taking this in a different direction and, hence, a different DSP design from 1600ZR. The most exciting discussions are around “scale-up” and how optics can replace copper for intra-rack connectivity.
This is an area of great debate and speculation, with wildly differing technologies being proposed. However, the goal of around 10 petabit-per-second (Pbps) in cross-sectional bandwidth in a single rack is a terrific industry challenge, one that can spur the development of technologies that might open up new markets for optics well beyond the initial AI cluster application.
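For scale, a quick illustration of what 10 Pbps of rack cross-sectional bandwidth implies in today’s module currencies (our arithmetic, purely illustrative):

```python
# Optical link counts implied by a 10 Pbps rack cross-section.
target_gbps = 10 * 1_000_000  # 10 Pbps expressed in Gbps

for module_gbps in (800, 1600, 3200):
    links = target_gbps // module_gbps
    print(f"{module_gbps}G modules: {links:,} links")
# 800G: 12,500; 1.6T: 6,250; 3.2T: 3,125, which is why disruptive,
# denser optics are being sought for scale-up.
```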
OFC 2025: industry reflections

Gazettabyte is asking industry figures for their thoughts after attending the recent 50th-anniversary OFC show in San Francisco. Here are the first contributions from Huawei’s Maxim Kuschnerov, NLM Photonics’ Brad Booth, LightCounting’s Vladimir Kozlov, and Jürgen Hatheier, Chief Technology Officer, International, at Ciena.
Maxim Kuschnerov, Director of R&D, Huawei
The excitement of last year’s Nvidia Blackwell graphics processing unit (GPU) announcement has worn off, and there was a slight hangover at OFC from the market frenzy of the time.
224 gigabit-per-second (Gbps) opto-electronic signalling is reaching the mainstream in the data centre. The last remaining question is how far VCSELs will go: 30m or perhaps even further. The clear focus of classical Ethernet data centre optics for scale-out architectures is on the step to 448Gbps-per-lane signalling, and it was great to see many feasibility demonstrations of optical signalling showing that PAM-4 and PAM-6 modulation schemes will be doable.
The show demonstrations either relied on thin-film lithium niobate (TFLN) or the more compact indium-phosphide-based electro-absorption modulated lasers (EMLs), with thin-film lithium niobate having the higher overall optical bandwidth.
Higher bandwidth pure silicon Mach-Zehnder modulators have also been shown to work at a 160-175 gigabaud symbol rate, sufficient to enable PAM-6 but not high enough for PAM-4 modulation, which the industry prefers for the optical domain.
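The symbol-rate arithmetic behind that statement, assuming a 448Gbps gross lane rate and idealised bits per symbol (real PAM-6 coding carries slightly less, so the true figure sits a little higher):

```python
import math

# Symbol rate required for a 448 Gbps lane under each modulation format.
LANE_RATE_GBPS = 448
for name, levels in (("PAM-4", 4), ("PAM-6", 6)):
    gbaud = LANE_RATE_GBPS / math.log2(levels)
    print(f"{name}: {gbaud:.0f} Gbaud")
# PAM-4: 224 Gbaud; PAM-6: ~173 Gbaud, within reach of a 160-175
# gigabaud silicon Mach-Zehnder modulator, hence PAM-6 keeps the
# pure-silicon platform in play.
```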
Since silicon photonics has been the workhorse at 224 gigabits per lane for parallel single-mode transceivers, a move away to thin-film lithium niobate would affect the density of the optics and make co-packaged optics more challenging.
With PAM-6 the preferred modulation option in the electrical channel for 448 gigabit, it raises the question of whether the industry should spend more effort on enabling PAM-6 optics and kill two birds with one stone: native signalling in the optical and electrical domains would open the door to all linear drive architectures and keep the compact pure-silicon platform in the technology mix for optical modulators. Just as people like to say, “Never bet against copper,” I’ll add, “Silicon photonics isn’t done until Chris Doerr says so.”
If there was one topic hotter than the classical Ethernet evolution, it was the scale-up domain for AI compute architectures. The industry has gone from scale-up in a server to a rack-level scale-up based on a copper backplane. But future growth will eventually require optics.
While the big data centre operators have yet to reach a conclusion about the specifications of density, reach, or power, it is clear that such optics must be disruptive to challenge the classical Ethernet layer, especially in terms of cost.
Silicon photonics appears to be the preferred platform for a potential scale-up role, but some vendors are also considering VCSEL arrays. Merging optics onto the silicon interposer alongside the xPU is a disadvantage for VCSELs, given their limited tolerance of high-temperature environments.
Reliability always comes up when discussing integrated optics, and several studies were presented showing that optical chips hardly ever fail. After years of discussing how unreliable lasers seem, it’s time to shift the blame to electronics.
But before the market can reasonably attack optical input-output for scale-up, it remains to be seen how quickly co-packaged optics will be adopted. Until then, linear pluggable optics (LPO) and linear retimed optics (LRO) pluggables will be fair game for scaling up AI ‘pods’ stuffed with GPUs.
Brad Booth, CEO of NLM Photonics
At OFC, the current excitement in the photonics industry was evident due to the growth in AI and quantum technologies. Many of the industry’s companies were represented at the trade show, and attendance was excellent.
Nvidia’s jump on the co-packaged optics bandwagon has tipped the scales in favour of the industry rethinking networking and optics.
What surprised me at OFC was the hype around thin-film lithium niobate. I’m always concerned when I don’t understand why the hype is so large, yet I have still to see the material being adopted in the datacom industry.
Vladimir Kozlov, CEO of LightCounting
This year’s OFC was a turning point for the industry, a mix of excitement and concern for the future. The timing of the tariffs announced during the show made the event even more memorable.
This period might prove to be a peak of the economic boom enabled by several decades of globalisation. It may also be the peak in the power of global companies like Google and Meta and their impact on our industry.
More turbulence should be expected, but new technologies will find their way to the market.
Progress is like a flood. It flows around and over barriers, no matter what they are. The last 25 years of our industry is a great case study.
We are now off for another wild ride, but I look forward to OFC 2050.
Jürgen Hatheier, Chief Technology Officer, International, at Ciena
This was my first trip to OFC, and I was blown away. What an incredible showcase of the industry’s most innovative technology.
One takeaway is how AI is creating a transformative effect on our industry, much like the cloud did 10 years ago and smartphones did 20 years ago.
This is an unsurprising observation. However, many outside our industry do not realise the critical importance of optical technology and its role in the underlying communication network. While most of the buzz has been on new AI data centre builds and services, the underlying network has, until recently, been something of an afterthought.
All the advanced demonstrations and technical discussions at OFC emphasise that AI would not be possible without high-performance network infrastructure.
There is a massive opportunity for the optical industry, with innovation accelerating and networking capacity scaling up far beyond the confines of the data centre.
The nature of AI — its need for intensive training, real-time inferencing at the edge, and the constant movement of data across vast distances between data centres — means that networks are evolving at pace. We’re seeing a significant architectural shift toward more agile, scalable, and intelligent infrastructure with networks that can adapt dynamically to AI’s distributed, data-hungry nature.
The diversity of optical innovation presented at the conference ranged from futuristic quantum technologies to technology on the cusp of mainstream adoption, such as 448-gigabit electrical lanes.
The increased activity and development around high-speed pluggables also show how critical coherent optics has become for the world’s most prominent computing players.
OFC 2025: reflecting on the busiest optics show in years
Adtran’s Gareth Spence interviews Omdia’s Daryl Inniss (left) and the editor of Gazettabyte, live from the conference hall at OFC 2025.
The discussion covers the hot topics of the show and where the industry is headed next. Click here.
OFC: After the aliens, a decade to rewire the Earth

At the OFC 2025 Rump Session, held in San Francisco, three teams were set a weighty challenge. If a catastrophic event, a visit by aliens, caused the destruction of the global telecommunications network, how would each team’s ‘superheroes’ go about designing the replacement network? What technologies would they use? And what issues must be considered?
The Rump Session thus tackled a provocative thought experiment: if the Earth’s entire communication infrastructure vanished overnight, how would the teams go about rebuilding it?
Twelve experts – eleven from industry and one academic – were split into three teams.
The teams were given ten years to build their vision network. A decade was chosen as a pragmatic timescale that would also allow the teams to consider emerging technologies.
The Rump Session had four rounds, with greater detail added at each stage.
The first round outlined the teams’ high-level visions, followed by a round on architectures. Then came a segment detailing technology, the round where the differences between the teams’ proposals were most evident. In the final round, each team gave a closing statement before the audience chose the winning proposal.
The Rump Session mixed deep thinking with levity and was enjoyed by the participants and audience alike.
Round 1: Network vision
The session began with each team highlighting their network replacement vision.
Team A’s Rebecca Schaevitz opened by looking across a hundred-year window. Fifty years ago, in 1975, networking and computing were all electrical, she said: telephone lines, mainframe computing, radio, and satellite.
Schaevitz said that by 2075, fifty years hence, connectivity will be the foundation of civilisation. The key difference between the networks a century apart is the marked transition from electrons to photons.
In the future vision, everything will be connected—clothes, homes, roads, even human brains—using sensors and added intelligence. As for work, offices will be replaced with real-time interactive holograms (suggesting humanity will still be working in 2075).
Schaevitz then outlined what must be done in the coming decade to enable Team A’s Network 2075 vision.
The network’s backbone must be optical, supporting multiple wavelengths and quantum communications. Team A will complement the fixed infrastructure with terabit-speed wireless and satellite mega-constellations. And AI will enable the network to be self-healing and adaptive, ensuring no downtime.
Vijay Vusirikala outlined Team B’s network assumptions. Any new network will need to support the explosive growth in computing and communications while being energy constrained. “We must reinvent communications from the ground up for maximum energy savings,” said Vusirikala.
But scarcity, in this case energy, spurs creativity. The goal is to achieve 1000x more capacity for the same energy demand.
The network will have distributed computing based on mega data centres and edge client computing. Massive bandwidth will be made available to link humans and to link machines. Lastly, just enough standardisation will be used for streamlined networking.
Team C’s Katharine Schmidtke closed the network vision round. The goal is universal and cheap communications, with lots of fibre deployed to achieve this.
The emphasis will be on creating a unified fixed-mobile network to aid quick deployment and a unified fibre-radio spectrum for ample connectivity.
Team C stressed the importance of getting the network up and running by using a modular network node. It also argued for micro data centres to deliver computing close to end users.
Global funding will be needed for the infrastructure rebuild, and unlimited rights of way will be a must. Unconstrained equipment and labour will be used at all layers of the network.
Team C will also define the communication network using one infrastructure standard for interoperability. One audience member questioned the wisdom of a tiny committee alone specifying such a grand global project.
The network will also be sustainable by recycling the heat from data centres for crop production and supporting local communities.
Round 2: Architectures
Team A’s Tad Hofmeister opened Round 2 by saying what must change: the era of copper will end – no copper landlines will be installed. The network will also only use packet switching, no more circuit switch technology. And IPv4 will be retired (to great cheering from the audience).
Team A also proposed a staged deployment. First, a network of airborne balloons will communicate with smartphones and laptops, with the balloons connected to the ground using free-space optical links.

Stage 2 will add base stations complemented with satellite communications. Fibre will be deployed on a massive scale along roads, railways, and public infrastructure.
Hofmeister stressed the idea of the network being open and disaggregated with resiliency and security integral to the design.
There will be no single mega-telecom or hyperscaler; instead, multiple networks and providers will be encouraged. To ensure interoperability, the standards will be universal.
Security will be based on a user’s DNA key. What about twins? asked an audience member. Hofmeister had that covered: time-of-birth data will be included.
Professor Polina Bayvel detailed Team B’s architectural design. Here, a mix of packet and circuit switching is proposed to minimise energy per bit per km. It will be a network with super-high bandwidths, including spokes of capacity extending from massive data centres to connect population centres.

Bayvel argued the case for underwater data centres: 15 per cent of the population live near the coast, she said, and an upside would be that people could work from the beach.
Team C’s Glenn Wellbrock proposed unleashing as much bandwidth as possible by freeing up the radio spectrum and laying hollow-core fibre to offer as much capacity as possible.

Wellbrock views hollow-core fibre as a key optical communications technology that promises years of development, just like first erbium-doped fibre amplifiers (EDFAs) and then coherent optics technology have done.
Team C showed a hierarchical networking diagram mapped onto the geography of the US – similar to today’s network – with 10s of nodes for the wide area network, 100s of metropolitan networks, and 10,000s of access nodes.
Wellbrock proposed self-contained edge nodes based on standardised hardware to deliver high-speed wireless (using the freed-up radio spectrum) and fibre access. There would also be shared communal hardware, though service providers could add their own infrastructure. Differentiation would be based on services.
AI would provide the brains for network operations, with expert staff providing the initial training.
Round 3: Technologies
Round 3, the enabling technologies for the new network, revealed the teams’ deeper thinking.
Team A’s Chris Doerr advocated streamlining and pragmatism to ensure rapid deployment. Silicon photonics will make a quick, massive-scale, and economic deployment of optics possible. Doerr also favours massive parallelism based on 200 gigabaud on-off keying (not PAM-4 signalling). With co-packaged optics added to chips, such parallel optical input-output and symbol rate will save significant power.
Standards for all aspects of networking will be designed first. Direct detection will be used inside the data centre; coherent digital signal processing will be used everywhere else. More radically, in the first five years, all generated intellectual property regarding serdes, converters, modems, and switch silicon will be made available to all competitors. Chips will be assembled using chiplets.
For line systems, the C-band only at first, followed by the deployment of Vibranium-doped optical amplifiers (Grok 3 gives a convincing list of the hypothetical benefits of VDFAs). Parallelism will also play a role here, with spatial division multiplexing preferred to combining a fibre’s O, S, C, and L bands.
Like Team C, Doerr also wants vast amounts of hollow-core fibre. It may cost more, but the benefits will be long-term, he said.
Peter Winzer (Team B) also argued for parallelism and a rethink in optics: the best ‘optical’ network may not be ‘optical’ given that photons get more expensive the higher the carrier frequency. So, inside the data centre, using the terahertz band and guided-wave wire promises 100x energy per bit benefits compared to using O-band or C-band optics.
Winzer also argues for 1000x more energy-efficient backbone connectivity by moving to 10-micron wavelengths and ultra-wideband operation to compensate for the 10x spectral efficiency loss that results. But for this to work, lots of fibre will be needed. Here, hollow-core fibre is a possible option.
Chris Cole brought the round to a close with radical ways to get the networking deployed. He mentioned Meta’s Bombyx, an installation machine that spins compact fibre cables along power lines.
Underground cabling would use nuclear fibre boring (Cole even cited a patent number), a process generating so much heat that it bores a tunnel while lining the walls with the molten material left behind. An egg-shaped portable nuclear reactor to power data centre containers was also proposed.
Cole defined a 'universal' transceiver using quadrature phase-shift keying (QPSK) modulation with no digital signal processing. "Spectral efficiency is not important as fibre will be plentiful," says Cole.

Completing arguments
After each team had spent a total of some 14 minutes outlining their networks, they were given one more round for final statements.
Maxim Kuschnerov expanded on his team's first-round slide, which outlined the ingredients needed to enable its Network 2075 vision. He also argued that every network element and connected device should be part of a global AI network. And AI will help co-design the new access network.
The new network will enable a massive wave of intelligent devices. Data will be kept at the edge, and the network will enable low-latency communications and inferencing at the edge.
Team B's Dave Welch outlined some key statements: fusion energy will power the data centres, with 80 per cent of the energy recycled from the heat. Transistors will pass the 10THz barrier, there will be 1000x scaling for the same energy, and an era of attojoules/bit will begin. "And human-to-human interactions will still make the world go round," says Welch.
Team C’s Jörg-Peter Elbers ended the evening presentations by outlining schemes to enable the new network: high-altitude platforms in a mega constellation (20km up) trailing fibre to the ground.
Such fibres and free-space links would also act as a sensing early-warning system in case the aliens returned.

Lastly, Elbers suggested we all get a towel (an important multi-purpose tool, as outlined in Douglas Adams' The Hitchhiker's Guide to the Galaxy). A towel can be used for hand-to-hand combat (when wet), to ward off noxious fumes, and to avoid the gaze of the Ravenous Bugblatter Beast of Traal. And, in the spirit of the evening, if all else fails, a towel can be used for sending line-of-sight, low-bandwidth smoke signals.
Team C ended the presentations by throwing towels into the audience, like tennis stars after a match.

Common threads
All the teams agreed that fibre was necessary for the network backbone, with hollow-core fibre widely touted.
Two of the teams emphasised a staged rollout and all outlined ways to avoid the ills of existing legacy networks.
Differences included using satellites rather than fibre-fed high-altitude balloons, which are quicker and cheaper to deploy, and the idea of container edges rather than a more centralised service edge. All the teams were creative with their technological approaches.
What wasn’t discussed – it wasn’t in the remit – was the impact of a global disconnect on the world’s population. We would suddenly become broadband have-nots for several years, disconnected from smartphones and hours-per-day screen time.
The teams’ logical assumption was to get the network up and running with even greater bandwidth in the future. But would gaining online access after years offline change our habits? Would we be much more precious in using our upload and download bits? And what impact would a global comms disconnect have on society? Would we become more sociable? Would letter-writing become popular again? And would local communities be strengthened?
Maxim Kuschnerov came closest to this when, in his summary talk, he spoke about how the next iteration of networks and communications should be designed to be a force for good for humanity and for its economic prospects.
Team winners
The audience chose Team B’s network proposal. However, the choice was controversial.
An online voting scheme, which would have allowed attendees to vote and change their vote as the session progressed, worked perfectly beforehand but keeled over on the night.

The organisers’ fallback plan, measuring the decibel level of the audience’s cheers for each team, ended in controversy.
First, not all the session attendees were present at the end. Second, a couple of the participants were seen self-cheering into a microphone. Evidence, if needed, of how seriously the 'superheroes' embraced architecting a new global network.
“It has been an evening of pure creative chaos: the more time I spend reflecting on the generated ideas, the more their value increases to me,” says Antonio Tartaglia of Ericsson, one of the organisers. “The voting chaos has been an act of God, because all three teams deserved to win.”
Tartaglia came up with this year’s theme for the Rump Session.
"Rump sessions are all about creative debate, and this year's event took that to its full potential," says Dirk van den Borne of Juniper Networks, another of the organisers. "Micro data centres, fibre-tethered balloons, terahertz waveguides, and communication by pigeon; the sheer breadth of ideas shows what an exciting and inventive industry we're working in."
The evening ended with a tribute to Team C’s Glenn Wellbrock. BT’s Professor Andrew Lord acknowledged Wellbrock’s career and contribution to optical communications.
Wellbrock officially retired days before the Rump Session.
Silicon photonics grapples with 3D packaging demands

Hesham Taha, CEO of start-up Teramount, is more upbeat about silicon photonics than ever. But, as he outlines, challenges remain.
Hesham Taha is putting in the miles. The CEO of Teramount has been travelling to the East and West to meet with companies.
Teramount is working closely with customers and partners adopting its technology that adds fibre to silicon photonics chips.
“We’re shipping units to customers and partners, and we need to be close to them as they integrate our components and address the challenges of integration,” says Taha.
Teramount
For Taha, connecting fibre to a silicon photonics chip inside a pluggable optical transceiver is relatively straightforward.
Far more challenging is when the optical engine and chip are packaged together, known as co-packaged optics. Such a tight coupling raises reliability challenges.
The optical engine is close to the large, heat-generating chip, introducing manufacturing and reliability issues. Also, the fibre-connect to the optical engine inside the package must be scalable. Scaling is key because with each chip generation – whether an AI accelerator or a switch chip – the input-output (I/O) requirements grow.
Teramount’s technology couples the fibre to the silicon photonics chip using a photonic plug and photonic bump.
The photonic plug holds the fibres and couples them to the silicon photonics chip via a photonic bump, a component made during the silicon photonics wafer’s processing.
The photonic bump has two elements: a wideband deflector and a lens mirror for beam expansion. Expanding the light beam enables larger assembly tolerances.
The resulting wafer-level manufacturing may be more complicated, says Taha, but the benefits include relaxed tolerances in the assembly, wideband surface coupling, and the testing of the wafer and the die.
The photonic bump-and-plug combination also enables detachable optics for co-packaged optics designs, benefitting their manufacturing.
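To see why beam expansion buys tolerance, consider the textbook overlap of two identical Gaussian beams: coupling efficiency falls as exp(-(d/w)^2) for a lateral offset d and mode-field radius w, so widening the beam proportionally relaxes the offset allowed for a given loss. The Python sketch below is a generic illustration with assumed mode sizes, not Teramount's actual figures.

```python
# Generic illustration with assumed mode sizes (not Teramount's figures):
# coupling between two identical Gaussian beams with lateral offset d and
# mode-field radius w has efficiency eta = exp(-(d/w)^2), so an expanded
# beam tolerates proportionally more misalignment for the same loss.
import math

def coupling_loss_db(offset_um: float, mode_radius_um: float) -> float:
    """Coupling loss (dB) for a lateral offset between identical Gaussian beams."""
    eta = math.exp(-(offset_um / mode_radius_um) ** 2)
    return -10 * math.log10(eta)

for w in (5.0, 25.0):  # e.g. a standard fibre mode vs a 5x expanded beam
    for d in (1.0, 2.0):
        print(f"mode radius {w:4.1f} um, offset {d:3.1f} um -> "
              f"{coupling_loss_db(d, w):5.3f} dB loss")
```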

Silicon photonics meets semiconductor thinking
Taha sees silicon photonics as a platform that enables the integration of optical functions at the wafer scale using standard semiconductor manufacturing techniques.
“It [silicon photonics design] has the same way of thinking as semiconductor people: chip designers, foundries, and packaging people,” says Taha. The result is that silicon photonics is bringing optics to chip design.
The growing maturity of the technology, and the emergence of foundries such as TSMC, GlobalFoundries, Tower Semiconductor, Intel, and ST Microelectronics offering silicon photonics, means that companies with photonic designs can be fabless, a model that has benefitted the chip industry.
Photonic chip designers can use a large foundry’s process design kit (PDK) and send off their silicon photonics designs to be manufactured in volume.
As for co-packaged optics, Taha sees it as key to the evolution of silicon photonics. Co-packaged optics is the realisation of silicon photonics, whereby optics is integrated next to advanced semiconductor chips.
Industry developments
The growing interest in silicon photonics and co-packaged optics, driven by AI compute scaling needs, is also evident in recent announcements.
ST Microelectronics unveiled its 300mm-wafer PIC100 silicon photonics technology. The company was active in silicon photonics 10 years ago and has returned due to the AI opportunity.
Marvell announced a co-packaged optics solution for accelerator chips to address scale-up networking for AI architectures.
Leading foundry TSMC has outlined its silicon photonics roadmap, which includes co-packaged optics.
And at the GTC 2025 AI developers conference, Nvidia announced it is adding co-packaged optics to its switch platforms for scale-out networking.
“All this is not a surprise because this is where we expected silicon photonics to be one day when we founded the company 10 years ago,” says Taha. “It is just that this day is happening now.”
Teramount sees its fibre attach product as a part of an emerging ecosystem to enable standard semiconductor high-volume manufacturing.
This ecosystem comprises chip designers, foundries, OSATs [outsourced semiconductor assembly and test service providers], and system integrators.
But there are challenges. There may be wafer-scale manufacturing, but issues remain on the integration-packaging side.
“This is why we need to travel, to align with the different partners,” says Taha.
Challenges
Teramount is seeing challenges first-hand given its fibre-attach vantage point. Taha also highlights commercial issues still to be resolved.
The technical challenges revolve around integrating the silicon photonics die with the accompanying lasers and fibres in an advanced package.
Unlike with a traditional pluggable optical transceiver, the silicon photonics chip is inserted in a hot environment and must meet operational temperatures of 85°C and even 105°C.
Multiple reflow soldering steps during manufacturing expose the packaging to even higher temperatures (270°C) and thermal stresses.
"These are new technical challenges: silicon photonics chip integration must meet 3D packaging requirements," says Taha.
3D packaging has requirements that affect the fibre attach. For example, when through-silicon via (TSV) technology is used, the silicon photonics chip is thinner than the die used in a pluggable module.
A TSV is a vertical electrical connection that passes through the die. Any mechanical stress or warpage impacts the resulting optical performance of the die.
“Co-package optics is creating new challenges when connecting fibre to such thinner chips,” says Taha.
There are also issues with testing a design. “There are still no mature solutions for parallel optical and electrical testing,” says Taha.
The commercial issue he highlights centres around design ownership. With a pluggable module, all the components are delivered in one assembled device with one owner – the module maker.
With co-packaged optics, there are several stages of assembly, with components coming from multiple sources. “Who owns it?” says Taha.
Box system integrators making servers, switches, and the like don't work with fibre. In contrast, co-packaged optics requires connecting and managing hundreds of fibres that fit in a tight space. Good optical coupling and performance are a must to ensure the overall design works.
“So this flow [for co-package optics] has yet to be set,” says Taha.
He says all the players still need to be aligned: chip designers, foundries, packaging vendors (OSATs), and system integrators. That said, the industry, particularly the large silicon chip vendors, is working to make it happen, says Taha.
This implies that companies such as Nvidia, Intel, AMD, Broadcom, and Marvell are playing a key role here.
Given how the emerging AI opportunity is driving their chip businesses, they have every reason to make this work.
Adtran broadens its OLS and access module offerings
Adtran has unveiled two products ahead of the OFC show in San Francisco, which takes place at the end of the month.
One is a 50 gigabit-per-second (Gbps) SFP56 optical transceiver that uses 4-level pulse-amplitude modulation (PAM-4) for 5G front-haul and enhanced broadband applications.
The second product is the FSP 3000 IP OLS, a compact open line system (OLS) designed for point-to-point links between sites 120km apart.
The OLS has been developed to simplify the setting up of dense wavelength division multiplexing (DWDM) optical links.
Enhancing broadband
Adtran has been developing a range of transceiver products to address specific requirements in the access-aggregation marketplace.
These include the MicroMux Edge Bidi, a QSFP+ pluggable module that supports 4×10 gigabit signals over 40km for mobile backhaul and enterprise wireless applications.
Adtran also offers the AccessWave25, a 25-gigabit tunable wavelength transceiver in an SFP28 form factor with a reach of 40km.
The pluggable module is used to link remote physical layer devices (RPDs) in cable operators’ networks. Cable operators are upgrading their infrastructure from 10 gigabits to 25 gigabits to support DOCSIS 4.0.
“You can argue if DOCSIS 4.0 is here or coming at the year-end,” says Saeid Aramideh, vice president of business development, optical engines business unit, at Adtran. “But there is no argument about the need for 25-gigabit uplinks for the cable MSO market.”
Now Adtran is announcing the AccessWave50, a 50-gigabit SFP56 optical module for front-haul, part of the radio access network (RAN), and for other traffic-driving developments such as smart homes, the Internet of Things, and smart cities.
Aramideh refers to these applications as driving ‘enhanced’ broadband networks, requiring the upgrading of 25 gigabit links to 50- and even 100-gigabit ones.
Front-haul networks
For mobile, telco operators and RAN equipment makers are working with optical component makers to drive innovation in pluggables for emerging architectures such as enhanced 5G and 6G, says Aramideh.
In mobile networks, the front-haul network carries radio signals using the CPRI (common public radio interface) or enhanced CPRI protocols between the remote radio heads and the baseband units.
For 5G front-haul, the modules used are mainly 10Gbps, with some 25-gigabit modules deployed. Adtran's AccessWave50 addresses the next speed hike.

Adtran has designed the AccessWave50 using proprietary signal-shaping and distance-optimisation techniques along with PAM-4 to achieve the 50Gbps line rate.
"PAM-4 is proving itself to be a cost-performance-optimised technology and gives you spectral efficiency as you go to higher data rates," says Aramideh. "Of course, it's not coherent optics, but you don't need coherent for all applications."
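The arithmetic behind the line rate is straightforward: each PAM-4 symbol carries log2(4) = 2 bits, so 50Gbps needs only half the symbol rate of simple two-level (NRZ) signalling. A minimal sketch, ignoring FEC overhead:

```python
# Minimal arithmetic sketch (illustrative, not Adtran's design detail):
# a PAM-N symbol carries log2(N) bits, so PAM-4 halves the symbol rate
# needed for a given line rate versus two-level NRZ (FEC overhead ignored).
import math

def symbol_rate_gbd(line_rate_gbps: float, levels: int) -> float:
    """Symbol rate (GBd) required for a given line rate and PAM level count."""
    return line_rate_gbps / math.log2(levels)

print(f"50Gbps with NRZ  : {symbol_rate_gbd(50, 2):.0f} GBd")
print(f"50Gbps with PAM-4: {symbol_rate_gbd(50, 4):.0f} GBd")
```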
The AccessWave50 uses a tunable laser and has a 15km rather than 40km reach, but that is sufficient, says Aramideh, since front-haul networks are latency-constrained. The SFP56 module consumes only 2.5W.
Compact networking
Adtran has also unveiled its latest open line system (OLS) for C-band coherent transceivers.
The company has been providing bespoke OLS systems for hyperscalers. ADVA, the company Adtran acquired in 2022, provided Microsoft with the OLS that, working with the original ColorZ modules from Marvell, enabled 100 gigabit PAM-4 transmissions over 80km links.

Adtran also provides an OLS for data centre interconnects using 400ZR coherent modules for reaches of 120km.
The latest FSP 3000 IP OLS platform is a compact, one-rack-unit (1RU) box that supports eight wavelengths over 120km.
The platform also includes an OTDR (optical time domain reflectometer) for fibre diagnostics.
The OLS can be used with 400-gigabit, 800-gigabit, and ultimately 1.6 terabit coherent pluggable modules once available.
The OLS is also designed for telecom metro interconnect networks. “Telcos, in response to AI, are also looking for OLS technology tailored to coherent transceivers,” says Stephan Rettenberger, senior vice president of marketing and corporate communications at Adtran.
A chief design challenge has been fitting the OLS into the 1RU form factor, requiring integration and packaging work. The OLS has also been designed for straightforward set-up and operation.
The platform is scalable: stacking a second unit doubles the wavelength count to 16.
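As a rough guide, per-fibre-pair capacity scales as follows, an illustrative calculation using the figures quoted above (eight wavelengths per unit and the per-wavelength rates of the coherent pluggables named), not Adtran's published numbers:

```python
# Illustrative capacity arithmetic using figures quoted in the article
# (eight wavelengths per 1RU unit; 400G/800G/1.6T coherent pluggables).
for units in (1, 2):                 # a single 1RU box, or two stacked
    wavelengths = 8 * units
    for rate_gbps in (400, 800, 1600):
        total_tbps = wavelengths * rate_gbps / 1000
        print(f"{units} x 1RU, {wavelengths:2d} wavelengths @ {rate_gbps}G "
              f"-> {total_tbps:4.1f} Tbps per fibre pair")
```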
The FSP 3000 IP OLS product is already in the hands of one telco customer, says Rettenberger.




