Ciena becomes a computer weaver

- Ciena is to buy optical interconnect start-up Nubis Communications for $270 million.
- The deal covers optical and copper interconnect technology for data centres.
Ciena has announced its intention to buy optical engine specialist Nubis Communications for $270 million. If the network is the computer, Nubis’ optical engine and copper integrated circuit (IC) expertise will help Ciena better stitch together AI’s massive compute fabric.
Ciena signalled its intention to target the data centre earlier this year at the OFC show when it showcased its high-speed 448-gigabit serialiser-deserialiser IC technology and coherent lite modem. Now, Ciena has made a move for start-up Nubis, which plays at the core of AI data centres.
“Ciena’s expertise in high-speed components is relevant to 400G per lane Ethernet transceivers, but they never sold any products to this market,” says Vladimir Kozlov, CEO of LightCounting. “Nubis offers them an entry point with several designs and customer engagements.”
With the deal, Ciena is extending its traditional markets of wide area networks (WAN), metro, and short-reach dense wavelength division multiplexing (DWDM) to include AI networking opportunities. These range from scale-across networks, where AI workloads are shared across multiple data centres (something Ciena can already address), to scale-out and scale-up networks for AI clusters inside the data centre.
Puma optical engine
Nubis has developed two generations of compact optical engines for near-package optics (NPO) and co-package optics (CPO) applications. Its first-generation engine operates at 100 gigabits per second (Gbps), while its second, dubbed Puma, operates at 200 Gbps.
Nubis’s optical engine philosophy is based on routing the optical channels out of the surface of the optical engine, not its edge. The start-up also matches the number of optical channels to the electrical ones. The optical engine can be viewed as a sieve: data from the input channels flows through the chip and emerges in the same number of channels at the output. The engine acts as a two-way gateway, with one side handling electrical signals and the other, optical ones.
The Puma optical engine uses 16 channels in each direction: 16 × 200Gbps electrical signals for a total of 3.2 terabits per second (Tbps), and 16 fibres, each carrying 200Gbps of data in the form of a wavelength. Puma’s total capacity is thus 6.4Tbps. The engine also needs four external lasers to drive the optics, each laser feeding four channels or four fibres. The total fibre bundle of the device consists of 36 fibres: 32 for data (16 for receive and 16 for transmit), and four for the laser light sources.
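To make the fibre and bandwidth budget explicit, here is a minimal sketch (not Nubis code) that reproduces the Puma figures quoted above:

```python
# Puma budget check using only the figures quoted in the article.
CHANNELS_PER_DIRECTION = 16   # 16 electrical/optical channels each way
GBPS_PER_CHANNEL = 200        # 200Gbps per channel, one wavelength per fibre
LASERS = 4                    # external lasers, each feeding four fibres

tx_capacity = CHANNELS_PER_DIRECTION * GBPS_PER_CHANNEL  # 3,200 Gbps = 3.2 Tbps
total_capacity = 2 * tx_capacity                         # 6.4 Tbps, both directions

data_fibres = 2 * CHANNELS_PER_DIRECTION                 # 16 transmit + 16 receive
bundle = data_fibres + LASERS                            # 36 fibres in total

print(f"Transmit capacity: {tx_capacity / 1000} Tbps")
print(f"Total capacity:    {total_capacity / 1000} Tbps")
print(f"Fibre bundle:      {bundle} fibres")
```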
Nubis is also a proponent of linear drive technology. Here, the advanced serdes on the adjacent semiconductor chip drives the optical engine directly, avoiding the need for a power-hungry on-engine digital signal processor (DSP). The start-up has also developed a system-based simulator software tool that it uses to model the channel from the transmitter to the receiver. The tool models not only the electrical and optical components within the channel but also the endpoints, such as the serdes.
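The sketch below is a toy illustration of the idea behind such end-to-end modelling, not Nubis’s simulator: cascade a frequency response for each element from serdes to receiver and read off the combined loss at the Nyquist frequency. The element bandwidths are invented for illustration.

```python
# Toy end-to-end linear channel model: cascaded first-order responses.
import numpy as np

def lowpass(f, f3db):
    """First-order low-pass magnitude response."""
    return 1.0 / np.sqrt(1.0 + (f / f3db) ** 2)

f = np.linspace(1e9, 60e9, 600)   # 1-60 GHz frequency sweep
elements = {                      # assumed (illustrative) 3dB bandwidths
    "serdes tx": 55e9,
    "package trace": 45e9,
    "modulator + driver": 50e9,
    "photodiode + TIA": 48e9,
}

h_total = np.ones_like(f)
for f3db in elements.values():
    h_total *= lowpass(f, f3db)   # linear elements multiply in frequency

nyquist = 53.125e9                # ~106.25GBd PAM-4 for a 200G lane
loss_db = -20 * np.log10(np.interp(nyquist, f, h_total))
print(f"End-to-end loss at Nyquist: {loss_db:.1f} dB")
```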
Nitro
Nubis has an analogue IC team that designs the trans-impedance amplifiers (TIAs) and drivers used for the optical engine. The hardware compensates for channel impairments with low noise and high linearity at high speed. It is this channel simulator tool that Nubis used to optimise its optical engine and to develop its second key technology, which Nubis calls Nitro, a chip that extends the reach of copper cabling.
“We use our linear optics learning and apply it to copper straight out of the gate,” said Peter Winzer, founder and CTO at Nubis, earlier this year. By using its end-to-end simulator tool, Nubis developed the Nitro IC, which extends the 1m reach of direct-attached copper to 4m using an active copper cable design. “We don’t optimise the driver chip, we optimise the end-to-end system,” says Winzer.
Nubis was also part of a novel design based on a vertical line card to shorten the trace length between an ASIC and pluggable modules.
Ciena’s gain
The acquisition of Nubis places Ciena at the heart of the electrical-optical transition inside the data centre, covering both copper and optical interconnect. Ciena gains direct-drive technology expertise for electrical and optical interfaces, enabling scale-up, as well as optical engine technology for scale-out, adding to its coherent technology expertise.

Ciena’s technologies will span coherent ultra-long-haul links all the way to AI accelerators, the heart of AI clusters. By combining Ciena’s 448-gigabit serdes with Nubis’s optical engine expertise, Ciena has a roadmap to develop 12.8Tbps and faster optical engines.
The acquisition places Ciena among new competitors, such as Broadcom and Marvell, that have chip and optical expertise and deliver co-packaged optics solutions alongside complex ICs.
The deal adds differentiation from Ciena’s traditional system vendor competitors, such as Cisco/Acacia and Nokia, while Huawei is active in long-haul optics and makes AI clusters. Ciena will also compete with existing high-speed optical players, including co-packaged optics specialists Ayar Labs and Ranovus, microLED player Avicena, and optical/IC fabric companies such as LightMatter and Celestial AI.
“Ciena will be a unique supplier in the co-packaged optics/near-packaged optics/active copper cabling data centre interconnect market,” says Daryl Inniss, Omdia’s thought lead of optical components and advanced fibre. “The other suppliers either have multiple products in the intra data centre market, like Broadcom and Nvidia, or they are interconnect-focused start-ups. These suppliers should all wonder what Ciena will do next inside the data centre.”
Ciena will enhance its overall expertise in chips, optics, and signal processing with the Nubis acquisition. It will also put Ciena in front of key processor players and different hyperscaler engineering teams, which drive next-generation AI systems.
Ciena will also have all the necessary parts for the various technologies, regardless of the evolving timescales associated with the copper-to-optical transition within AI systems. Ciena will add direct-detect technology and copper interconnect. On the optical side, it has coherent optical expertise, now coupled with near-package optics and co-packaged optics.
Nubis’ gain
Nubis’ 50-plus staff get a successful exit. The start-up was founded in 2020. Nubis will become a subsidiary of Ciena.
Nubis will be joining a much bigger corporate entity with deep expertise and pockets. Ciena has a good track record with its mergers. Think Nortel at the system level and Blue Planet, a software acquisition. Now the Nubis deal will bring Ciena firmly inside the data centre.
“This is a great deal for Nubis,” says Kozlov. “Congratulations to their team.”
What next?
The deal is expected to close in the fourth quarter of this year. Ciena expects the deal to start adding to its revenues from 2028, requiring Ciena and Nubis to develop products and deliver design wins in the data centre.
“Given the breadth of Ciena’s capabilities, its deep pockets, and products like its data centre out-of-band (DCOM) measurement product, router, and coherent transceivers, one can imagine that Ciena would offer more than co-packaged optics/ near-packaged optics/ active copper cabling inside the data centre,” says Inniss.
Avicena partners with TSMC to make its microLED links

TSMC, the leading semiconductor foundry, will make the photo-detectors used for Avicena Tech’s microLED optical interconnect technology.
Avicena is developing an optical interface that uses hundreds of parallel fibre links – each link comprising a tiny LED transmitter and a silicon photo-detector receiver – to deliver terabit-per-second (Tbps) data transfers.
Avicena is targeting its microLED-based interconnect, dubbed LightBundle, at artificial intelligence (AI) and high-performance computing (HPC) applications.
The deal is a notable step for Avicena, aligning its technology with TSMC’s CMOS manufacturing prowess. The partnership will enable Avicena to transition its technology from in-house prototyping to high-volume production.
Visible-light technology
Avicena’s interconnects operate in the visible-light spectrum at 425-430nm (blue light), differing from the near-infrared wavelengths used by silicon photonics. The shorter wavelength enables simpler photo-detector designs because silicon efficiently absorbs blue light.
“Silicon is a very good detector material because the absorption length at that kind of wavelength is less than a micron,” says Christoph Pfistner, vice president of sales and marketing at Avicena. “You don’t need any complicated doping with germanium or other materials required for infrared detectors.”
Visible-light detectors can therefore be made using CMOS processes. For advanced CMOS nodes, however, such as those used to make AI chips, hybrid bonding with a separate photo-detector wafer is required.
TSMC is adapting its CMOS Image Sensor (CIS) process, used for digital cameras that operate in the megahertz range, to support Avicena’s photo-detectors, which must work in the gigahertz range.
For the transmitter, Avicena uses gallium nitride-based microLEDs developed for the micro-display industry, paired with CMOS driver chips. Osram is Avicena’s volume LED supplier.
Osram has adapted its LED technology for high-speed communications and TSMC is now doing the same for the photo-detectors, enabling Avicena to mass produce its technology.
Specifications
The LED transmits non-return-to-zero (NRZ) signalling at 3.5 to 4 gigabits per second (Gbps). Some 300 lanes are used to send the 800-gigabit data payload, clock, and associated overhead bits.
For the transmitter, a CMOS driver modulates the microLED while the receiver comprises a photo-detector, a trans-impedance amplifier (TIA) and a limiting amplifier.
By operating in this ‘slow and wide’ manner, a power consumption of less than 1 picojoule per bit (pJ/b) is achievable across 10m of the multi-mode fibre bundle. This compares to 3-5pJ/b using silicon photonics and up to 20pJ/b for optical pluggable transceivers, though the latter support longer reaches.
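A back-of-envelope sketch (mine, using only the figures quoted above) shows how the lane count and the ‘slow and wide’ power argument fit together:

```python
# Lane-count and energy-per-bit arithmetic from the quoted figures.
payload_gbps = 800
lane_rate_gbps = 3.5                       # low end of the quoted 3.5-4Gbps range

min_lanes = payload_gbps / lane_rate_gbps  # ~229 lanes for the payload alone
print(f"Minimum lanes for payload: {min_lanes:.0f}")
print("Quoted lane count: 300 (payload plus clock and overhead bits)")

# Link power at 800Gbps aggregate, for the quoted energy-per-bit figures:
for tech, pj_per_bit in [("microLED LightBundle", 1.0),
                         ("silicon photonics", 4.0),      # mid of 3-5pJ/b
                         ("pluggable transceiver", 20.0)]:
    watts = payload_gbps * 1e9 * pj_per_bit * 1e-12
    print(f"{tech:22s}: ~{watts:.1f} W at 800Gbps")
```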
The microLED links achieve a bandwidth density of over 1 terabit/mm, and Avicena says this can be improved. Since the design is a 2D array, the link density can be extended in area rather than being confined to the ‘beachfront’ stretch, though within certain limits, qualifies Pfistner.
Applications
A key theme at the recent OFC 2025 show was optical interconnect options to linearly scale AI processing performance by adding more accelerator chips, referred to as the scale-up architecture.
At present, copper links are used to scale up accelerators, but the consensus is that, at some point, optics will be needed once the speed-distance performance of copper is exceeded. Nvidia’s roadmap suggests that copper can still support larger scale-up architectures for at least a couple of graphics processing unit (GPU) generations yet.
Avicena is first targeting its microLED technology in the form of an optical engine to address 1.6Tbps on-board optics modules. The same optical engine can also be used in active optical cables.
The company also plans to use its optical engine for co-packaged optics, and for in-package interconnect applications using a die-to-die (D2D) electrical interface such as the Universal Chiplet Interconnect Express (UCIe) or the OCP’s Bunch of Wires (BOW) interface. Co-packaged optics refers to optics on a separate substrate close to the host ASIC, with the two packaged together.
One such application for in-package optics is memory disaggregation involving high-bandwidth memory (HBM). “There’s definitely more and more interest in what some people refer to as optical HBM,” says Pfistner. He expects initial deployment of optical HBM in the 2029-2030 timeframe.
TSMC is also active in silicon photonics, developing the technology as part of its advanced system-in-package roadmap. While it is early days, Avicena’s microLED LightBundle technology could become part of TSMC’s optical offerings for applications such as die-to-die, xPU-to-memory, and in-package optics.
How CPO enables disaggregated computing

A significant shift in cloud computing architecture is emerging as start-up Drut Technologies introduces its scalable computing platform. The platform is attracting attention from major banks, telecom providers, and hyperscalers.
At the heart of this innovation is a disaggregated computing system that can scale to 16,384 accelerator chips, enabled by pioneering use of co-packaged optics (CPO) technology.
“We have all the design work done on the product, and we are taking orders,” says Bill Koss, CEO of Drut.
System architecture
The start-up’s latest building block in its disaggregated computing portfolio is the Photonic Resource Unit 2500 (PRU 2500), a chassis that hosts up to eight double-width accelerator chips. The chassis also features Drut’s interface cards, which use co-packaged optics to link servers to the chassis, and to link chassis to each other either directly or, for larger systems, through optical or electrical switches.
The PRU 2500 chassis supports various vendors’ accelerator chips: graphics processing units (GPUs), chips that combine general processing (CPU) and machine learning engines, and field programmable gate arrays (FPGAs).
Drut has been using third-party designs for its first-generation disaggregated server products. More recently the start-up decided to develop its own PRU 2500 chassis as it wanted to have greater design flexibility and be able to support planned enhancements.
Koss says Drut designed its disaggregated computing architecture to be flexible. By adding photonic switching, the topologies linking the chassis, and the accelerator chips they hold, can be combined dynamically to accommodate changing computing workloads.
Up to 64 racks – each rack hosting eight PRU 2500 chassis, or 64 accelerator chips – can be configured as a 4,096-accelerator disaggregated compute cluster. Four such clusters can be networked together to achieve the full 16,384-chip cluster. Drut refers to its compute cluster concept as the DynamicXcelerator virtual POD architecture.
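The scaling arithmetic, written out as a minimal sketch using only the figures quoted above:

```python
# Drut cluster scaling, as quoted in the article.
ACCELERATORS_PER_CHASSIS = 8
CHASSIS_PER_RACK = 8
RACKS_PER_CLUSTER = 64
CLUSTERS = 4

per_rack = ACCELERATORS_PER_CHASSIS * CHASSIS_PER_RACK   # 64 accelerators per rack
per_cluster = per_rack * RACKS_PER_CLUSTER               # 4,096 per cluster
full_system = per_cluster * CLUSTERS                     # 16,384 in total

print(per_rack, per_cluster, full_system)                # 64 4096 16384
```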

The architecture can also be interfaced to an enterprise’s existing IT resources such as Infiniband or Ethernet switches. “This set-up has scaling limitations; it has certain performance characteristics that are different, but we can integrate existing networks to some degree into our infrastructure,” says Koss.
PRU-2500
The PRU 2500 chassis is designed to support the PCI Express 5.0 protocol. The chassis supports up to 12 PCIe 5.0 slots, including eight double-width slots to host PCIe 5.0-based accelerators. The chassis comes with two or four tFIC 2500 interface cards, discussed in the next section.
The remaining four of the 12 PCIe slots can be used for single-width PCIe 5.0 cards or Drut’s rFIC-2500 remote direct memory access (RDMA) network cards for optical-based accelerator-to-accelerator data transfers.
Also included in the PRU 2500 chassis are two large Broadcom PEX89144 PCIe 5.0 switch chips. Each PEX chip can switch 144 PCIe 5.0 lanes for a total bandwidth of 9.2 terabits-per-second (Tbps).
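A quick sanity check of the quoted PEX89144 figure, assuming PCIe 5.0’s 32Gbps-per-lane raw rate and counting both directions (the exact accounting is my assumption):

```python
# Per-chip switching bandwidth of the PEX89144, from the quoted lane count.
lanes = 144
gbps_per_lane = 32                     # PCIe 5.0, NRZ signalling
per_direction = lanes * gbps_per_lane  # 4,608 Gbps
bidirectional = 2 * per_direction      # 9,216 Gbps, ~9.2 Tbps as quoted
print(f"{bidirectional / 1000:.1f} Tbps")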

Co-packaged optics and photonic switching
The start-up is a trailblazer in adopting co-packaged optics. Drut chose co-packaged optics because traditional pluggable modules are too bulky and cannot meet the bandwidth density that its interface cards’ input-output requires.
There are two types of interface cards. The iFIC 2500 is added to the host while the tFIC 2500 is part of the PRU 2500 chassis, as mentioned. Both are half-length PCIe 5.0 cards, and each has two variants: one with two 800-gigabit optical engines to support 1.6Tbps of I/O, and one with four engines for 3.2Tbps of I/O. These cards carry PCIe 5.0 lanes, each lane operating at 32 gigabits per second (Gbps) using non-return-to-zero (NRZ) signalling.
The cards interface to the host server and connect to their counterparts in other PRU 2500 chassis. This way, the server can access accelerator resources across multiple PRU 2500s.
Drut uses co-packaged optics engines due to their compact size and superior bandwidth density compared to traditional pluggable optical modules. “Co-package optics give us a high amount of density endpoints in a tiny physical form factor,” says Koss.
The co-packaged optics engines include integrated lasers rather than using external laser sources. Drut has already sourced the engines from one supplier and is waiting on two further sources.
“The engines are straight pipes – 800 gigabits to 800 gigabits,” says Koss. “We can drop eight lasers anywhere, like endpoints on different resource modules.”

Drut also uses a third-party’s single-mode-fibre photonic switch. The switch can be configured from 32×32 up to 384×384 ports. Drut will talk more about the photonic switching aspect of its design later this year.
The final component that makes the whole system work is Drut’s management software, which oversees the system’s traffic requirements and the photonic switching. The complete system architecture is shown below.

More development
Koss says being an early adopter of co-packaged optics has proven to be a challenge.
The vendors are still ramping up volume manufacturing and resolving quality and yield issues. “It’s hard, right?” he says.
Koss says WDM-based co-packaged optics are 18 to 24 months away. Further out, he still foresees photonic switching of individual wavelengths: “Ultimately, we will want to turn those into WDM links with lots of wavelengths and a massive increase in bandwidth in the fibre plant.”
Meanwhile, Drut is already looking at its next PRU chassis design to support the PCIe 6.0 standard, and that will also include custom features driven by customer needs.
The chassis could also feature heat extraction technologies such as water cooling or immersion cooling, says Koss. Drut could also offer a PRU filled with CPUs or a PRU stuffed with memory to offer a disaggregated memory pool.
“A huge design philosophy for us is the idea that you should be able to have pools of GPUs, pools of CPUs, and pools of other things such as memory,” says Koss. “Then you compose a node, selecting from the best hardware resources for you.”
This is still some way off, says Koss, but not too far out: “Give us a couple of years, and we’ll be there.”
Has the era of co-packaged optics finally arrived?
Ayar Labs’ CEO, Mark Wade
Mark Wade, the recently appointed CEO of Ayar Labs, says his new role feels strangely familiar. Wade finds himself revisiting tasks he performed in the early days of the start-up that he helped co-found.
“In the first two years, I would do external-facing stuff during the day and then start working on our chips from 5 PM to midnight,” says Wade, who until last year was the company’s chief technology officer (CTO).
More practically, says Wade, he has spent much of the first months since becoming CEO living out of a suitcase and meeting with customers, investors, and shareholders.
History
Ayar Labs is bringing its technology to market to add high-bandwidth optical input-output (I/O) to large ASICs.
The technology was first revealed in a 2015 paper published in the journal Nature. In it, the optical circuitry needed for the interfaces was implemented using a standard CMOS process.
Vladimir Stojanovic, then an associate professor of electrical engineering and computer science at the University of California, Berkeley, described how, for the first time, a microprocessor could communicate with the external world using something other than electronics.
Stojanovic has left his role as a professor at the University of California, Berkeley, to become Ayar Labs’ CTO, following Wade’s appointment as CEO.

Focus
“A few years ago, we made this pitch that machine-learning clusters would be the biggest opportunity in the data centre,” says Wade. “And for efficient clusters, you need optical I/O.” Now, connectivity in artificial intelligence (AI) systems is a vast and growing problem. “The need is there, and our product is timed well,” says Wade.
Ayar Labs has spent the last year focused on manufacturing and has established low-volume production lines. The company manufactured approximately 10,000 optical chiplets in 2023 and expects similar volumes this year. It also offers SuperNova, an external laser source product that provides the light needed for its optical chiplet.
Ayar Labs’ optical input-output (I/O) roadmap showing the change in electrical I/O interface evolving from Intel’s AIB to the UCIe standard, the move to faster data rates and, on the optical side, more wavelengths and the growing total I/O, per chiplet and packaged system. Source: Ayar Labs.
The products are being delivered to early adopter customers while Ayar Labs establishes the supply chain, product qualification, and packaging needed for volume manufacturing.
Wade says that some of its optical chiplets are being used in non-AI segments. Ayar Labs has demonstrated its optical I/O being used with FPGAs in electronics systems for military applications. But the primary demand is for AI systems connectivity, whether compute to compute, compute to memory, compute to storage, or compute to a memory-semantic switch.
“A memory-semantic switch allows the scaling of a compute fabric whereby a bunch of devices need to talk to each other’s memory,” says Wade.
Wade cites Nvidia’s NVSwitch as one example: the first layer switch chip at the rack level that supports many GPUs in a non-blocking compute fabric. Another example of a memory-semantic switch is the open standard Compute Express Link (CXL).
The need for co-packaged optics
At the Optica Executive Forum event held alongside the recent OFC show, several speakers questioned the need for I/O based on optical chiplets, also called co-packaged optics.
Google’s Hong Liu, a Distinguished Engineer at Google Technical Infrastructure, described co-packaged optics as an ‘N+2 years’ technology, perpetually two years away (N being the current year).
Ashkan Seyedi of Nvidia stressed that copper continues to be the dominant interconnect for AI because it beats optics on such metrics as bandwidth density, power, and cost. Existing data centre optical networking technology cannot simply be repackaged as optical compute I/O, as it does not beat copper. Seyedi also shared a table showing how much more expensive optics is in terms of dollars per gigabit per second ($/Gbps).
Wade addresses these points by noting that nobody is making money at the application layer of AI. Partly, this is because the underlying hardware infrastructure for AI is so costly.
“It [the infrastructure] doesn’t have the [networking] throughput or power efficiency to create the headroom for an application to be profitable,” says Wade.
The accelerator chips from the likes of Nvidia and Google are highly efficient in executing the mathematics needed for AI. But it is still early days when it comes to the architectures of AI systems, and more efficient hardware architectures will inevitably follow.
AI workloads also continue to grow at a remarkable rate. They are already so large that they must be spread across systems using ever more accelerator chips. With the parallel processing used to execute the workloads, data has to be shared periodically between all the accelerators using an ‘all-to-all’ operation.
“With large models, machines are 50 per cent efficient, and they can get down to 30 per cent or even 20 per cent,” says Wade. This means expensive hardware is idle for more than half the time, an issue that will only worsen with growing model size. According to Wade, optical I/O promises the required bandwidth density (more terabits per second per millimetre), power efficiency, and latency.
“These products need to get proven and qualified for volume productions,” he adds. “They are not going to get into massive scale systems until they are qualified for huge scale production.”
Wade describes what is happening now as a land grab. Demand for AI accelerators is outstripping supply, and how to improve the economics of the systems is still being worked out.
“It is not about making the hardware cheaper, just how to ensure the system is more efficiently utilised,” says Wade. “This is a big capital asset; the aim is to have enough AI workload throughput so end-applications have a viable cost.”
This will be the focus as the market hits its stride in the coming two to three years. “It is unacceptable that a $100 million system is spending up to 80 per cent of its time doing nothing,” says Wade.
Wade also addresses the comments made that day at the Optica Executive Forum. “The place where [architectural] decisions are getting discussed and made are with the system-on-chip architects,” he says. “It’s they that decide, not [those at] a fibre-optics conference.”
He also questions the assumption that Google and Nvidia will shun using co-packaged optics.
Market opportunity
Wade does a simple back-of-an-envelope calculation to size the likely overall market opportunity by the early 2030s for co-packaged optics.
In the coming years, there will be 1,000 optical chiplets per server, 1,000 servers per data centre, while 1,000 new data centres using AI clusters will be built. That’s a billion devices in total. Even if the total addressable opportunity is several hundred million optical chiplets, that is still a massive opportunity by 2032, he says.
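Wade’s back-of-envelope sizing, written out (the figures are as quoted above):

```python
# Market sizing: chiplets per server x servers per data centre x new AI data centres.
chiplets_per_server = 1_000
servers_per_dc = 1_000
new_ai_dcs = 1_000

total = chiplets_per_server * servers_per_dc * new_ai_dcs
print(f"{total:,} optical chiplets")   # 1,000,000,000 - a billion devices
```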
Wade expects Ayar Labs to ship 100,000-plus chiplets in the 2025-26 timeframe, with volumes ramping to the millions in the two years after that.
“That is the ramp we are aiming for,” he says. “Using optical I/O to build a balanced composable system architecture.” If co-packaged optics does emerge in such volumes, it will disrupt the optical component business and the mainstream technologies used today.
“Let me finish with this,” says Wade. “If we are still having this conversation in two years’ time, then we have failed.”
Teramount’s scalable fibre-attach for co-packaged optics
Part 2: Co-packaged optics: fibre-attach
Hesham Taha recently returned from a trip to the US to meet with leading vendors and players serving the silicon photonics industry.
“It is important to continue probing the industry,” says Taha, the CEO of start-up Teramount.
Teramount specialises in fibre assembly technology: coupling fibre to silicon photonics chips.
Taha is now back in the US, this time to unveil Teramount’s latest product at this week’s OFC show being held in San Diego. The company is detailing a new version of its fibre assembly technology, dubbed Teraverse-XD, that doubles the density of fibres connected to a silicon photonics chip.
Teramount is also announcing it is working with GlobalFoundries, a leading silicon-photonics foundry.
Connecting fibre to a silicon photonics device for a pluggable optical module is straightforward. However, attaching fibre to an optical engine for co-packaged optics is challenging. The coupling must be compact and scale to enable even denser connections in future. This is especially true with the co-packaging of future 100-terabit and 200-terabit Ethernet switch chips.
“If I were to describe the last year, it would be aligning our [Teramount] activities to the industry’s evolving needs,” says Taha. “A key part of those needs is being driven by optical activities for AI applications.”
Edge versus surface coupling
Companies are pursuing two main approaches to connecting fibre to a silicon photonics device: surface and edge (side) coupling.
Surface coupling – or, to use its academic term, off-plane coupling – deflects light vertically, away from the chip’s surface. In contrast, edge (in-plane) or side coupling sends the optical waveguide’s light straight through to the fibre at the chip’s edge.
A silicon-photonics grating coupler is used for surface coupling, deflecting the light away from the chip’s plane. However, the grating coupler is wavelength-dependent, such that the angle of deflection varies with the wavelength.
In contrast, side coupling is wideband. “You can carry multiple wavelengths on each channel,” says Taha. However, side coupling has limited interfacing space, referred to as ‘shoreline density’.
Side coupling is also more complicated to manufacture in volume. Directly bonding the fibre to the chip involves adhesive, and the fibres get in the way of reflow soldering. “It [side coupling] is doable for transceivers, but to make co-packaged optics, side coupling becomes complicated,” says Taha.
Teramount’s approach
Teramount’s approach couples the fibre to the silicon photonics chip using two components: a photonic plug and a photonic bump.
The photonic plug holds the fibres and couples them to the silicon photonics chip via the photonic bump, a component made during the silicon photonics wafer processing. The photonic bump consists of two elements: a wideband deflector and a lens mirror for beam expansion. Expanding the light beam enables much larger assembly tolerances, of +/-30 microns, and across this 60-micron window only half a decibel is lost to misalignment.
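One way to see why beam expansion relaxes tolerances is the textbook Gaussian-mode overlap model. The sketch below is my illustration, not Teramount’s design model: two identical Gaussian modes of field radius w offset laterally by d couple with efficiency exp(-(d/w)²), so back-solving for the quoted 0.5dB at 30 microns suggests the expanded beam’s scale.

```python
# Gaussian-mode lateral misalignment model (illustrative, not Teramount's).
import math

def offset_loss_db(d_um, w_um):
    """Coupling loss when two identical Gaussian modes are offset by d."""
    eta = math.exp(-(d_um / w_um) ** 2)
    return -10 * math.log10(eta)

# Back-solve the mode radius w that gives 0.5dB loss at a 30-micron offset:
target_db, offset = 0.5, 30.0
w = offset / math.sqrt(target_db * math.log(10) / 10)
print(f"Implied expanded-beam radius: ~{w:.0f} microns")          # ~88 microns

# For contrast, an unexpanded single-mode fibre mode (w ~ 5.2 microns):
print(f"Unexpanded SMF loss at 30um: {offset_loss_db(30, 5.2):.0f} dB")
```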
The resulting wafer-level manufacturing may be more complicated, says Taha, but the benefits are relaxed assembly tolerances, wideband surface coupling, and testability at both the wafer and die level.
The photonic bump-and-plug combination also enables detachable optics for co-packaged optics designs, which benefits manufacturing and is a feature wanted for co-packaged optics.
Teraverse and Teraverse-XD
There is a clear demarcation between the optics and the switch chip when using pluggables in the data centre. In contrast, co-packaged optics is a system with the optics embedded alongside the chip. A vendor may work with multiple companies to make co-packaged optics, but one product results, with the chip and optical engines co-packaged.
Teramount’s Teraverse solution, using the plug-and-bump combination, brings pluggability to co-packaged optics. The fibres can be attached and detached from the optical engines. “It’s very important to keep that level of pluggability for co-packaged optics,” says Taha.
The approach also benefits manufacturing yield and testing. Separating the fibres from the package protects the fibres during reflow soldering. “Ideally, you want the fibre connected at the last stage and still maintain high level of testability during the packaging process,” says Taha.
Detachable fibre also brings serviceability to co-packaged optics, benefiting data centre operators.
Teraverse, Teramount’s detachable fibre-to-chip interface, supports single-mode fibre with a 125-micron diameter at a 127-micron pitch.

Teraverse-XD, announced for OFC, is a follow-on that doubles the fibre density to achieve a near 64-micron pitch. Here, fibres are placed on top of each other, scaling in the Z-dimension. The approach resembles how rods or pipes are stacked, with the second row of fibres staggered, sitting in the valleys between adjacent fibres in the lower row.
Two rows of photonic bumps are used to couple the light to each row of fibres (see image above). “It’s very important to keep the same real-estate but to have twice the number of fibres,” says Taha.
Future scaling is possible by adding more rows of fibres or by adopting fibres with a smaller pitch.
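The stacking geometry can be made concrete with a short sketch (my back-of-envelope, assuming the upper row rests touching the fibres below): shifting the second row by half a pitch halves the effective shoreline pitch, giving the near 64-micron figure quoted.

```python
# Staggered two-row fibre stack geometry (illustrative sketch).
import math

fibre_od = 125.0    # microns, standard single-mode fibre cladding diameter
row_pitch = 127.0   # microns, pitch within one row (as in Teraverse)

# Upper row sits in the valleys, shifted by half a pitch horizontally:
effective_pitch = row_pitch / 2                 # 63.5um, 'near 64-micron pitch'
# Row-to-row centre offset if the upper fibres touch the lower pair:
vertical_offset = math.sqrt(fibre_od**2 - (row_pitch / 2) ** 2)   # ~108um

print(f"Effective shoreline pitch: {effective_pitch} um")
print(f"Row-to-row centre offset:  {vertical_offset:.0f} um")
```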
Teramount’s technology supports both edge coupling and surface coupling. “We are agnostic,” says Taha. If a co-packaged optics or optical engine vendor wants to use side coupling, it can use the bump-and-plug combination. The bump deflects the beam upwards to the plug packaging, which takes the fibres and sends them out horizontally. “We are converting edge coupling to wideband surface coupling,” says Taha. “You don’t need to sacrifice bandwidth to do surface coupling.”
If the vendor wishes to use a grating coupler, Teramount’s bump-and-plug supports that too, enabling detachable fibering. But here, only the bump’s expanding mirror is used. “For the wideband surface coupling case, the bump uses two components: the deflector and the expanding mirror,” says Taha.
Both cases are supported by what Teramount refers to as its Universal Photonic Coupler.

Market expectations
Co-packaged optics has been discussed for over a decade, yet Taha is not surprised that data centre operators have yet to adopt it.
He points out that hyperscalers will only use co-packaged optics for Ethernet switches once the technology is more mature. Meanwhile, they can keep using a proven alternative: pluggable modules, which continue to advance.
“Hyperscalers are not against the technology, but it is not mature enough,” says Taha. Hyperscalers and systems vendors also want an established supply chain and not proprietary solutions.
To date, Broadcom’s first co-packaged optics switch solution, at 25.6 terabits, has been adopted by Tencent. Broadcom has announced for OFC that it is now delivering its latest 51.2-terabit Bailly co-packaged optics design, backed by ByteDance.
“AI is a different story,” says Taha. “This is the tipping point for a leading vendor to start taking seriously co-packaged optics.”
The advantage of co-packaged optics here is that it accommodates the required reach and radix, as well as delivering power savings and improved latency.
Taha expects initial volumes of co-packaged optics sales in 2026.
A coherent roadmap for co-packaged optics
Is coherent optics how co-packaged optics will continue to scale? Pilot Photonics certainly thinks so.

Part 1: Co-packaged optics
Frank Smyth, CTO and founder of Pilot Photonics, believes the firm is at an important inflection point.
Known for its comb laser technology, Pilot Photonics has just been awarded a €2.5 million European Innovation Council grant to develop its light-source technology for co-packaged optics.
The Irish start-up is also moving to much larger premises and is on a recruitment drive. “Many of our projects and technologies are maturing,” says Smyth.
Company
Founded in 2011, the start-up spent its early years coupled to Dublin City University. It raised its first notable investment in 2017.
The company began by making lab instrumentation based on its optical comb laser technology, which emits multiple sources of light that are frequency- and phase-locked. But a limited market caused the company to pivot, adding photonic integration to its laser know-how.
Now, the start-up has a fast-switching, narrow-linewidth tunable laser, early samples of which are being evaluated by several “tier-one” companies.
Pilot Photonics also has a narrowband indium-phosphide comb laser for optical transport applications. This will be the next product it samples.
More recently, the start-up has been developing a silicon nitride-based comb laser for a European Space Agency project. “The silicon nitride micro-resonator in the comb is a non-linear element that enables a very broad comb for highly parallel communication systems and for scientific applications,” says Smyth. It is this laser type that is earmarked for the data centre and for co-packaged optics applications.
Smyth stresses that while the company is still small, its staff has broad expertise. “We cover the full stack,” he says.
Skills span epitaxial wafer design, photonic integrated circuits (PICs) and lasers, radio frequency (RF) and thermal engineering, and digital electronics and control design.
“We learned early on that it’s all well making a PIC, but if no one can interface to it, you are wasting your time,” says Smyth.
Co-packaged optics
Co-packaged optics refers to adding optics next to an ASIC that has significant input-output (I/O) data requirements. Examples of applications for co-packaged optics include high-capacity Ethernet switch chips and artificial intelligence (AI) accelerators. The goal is to give the chip optical rather than electrical interfaces, providing system-scaling benefits; as electrical signals get faster, their reach shrinks.
The industry has been discussing co-packaged optics for over a decade. Switch-chip players and systems vendors have shown prototype designs and even products. And more than half a dozen companies are developing the optical engines that surround, and are packaged with, the chip.
However, the solutions remain proprietary, and while the OIF is working to standardise co-packaged optics, end users have yet to embrace the technology. In part, this is because pluggable optical modules continue to advance in data speeds and power consumption, with developments like linear-drive optics.
The ecosystem supporting co-packaged optics is also developing. Hyperscalers will only deploy co-packaged optics in volume when reliability and a broad manufacturing base are proven.
Yet industry consensus remains that optical I/O is a critical technology and that deployments will ramp up in the next two years. Ethernet switch capacity doubles every two years while AI accelerator chips are progressing rapidly. Moreover, the number of accelerator chips used in AI supercomputers is growing fast, from thousands to tens of thousands.
Pilot Photonics believes its multi-wavelength laser technology, coupled with the intellectual property it is developing, will enable co-packaged optics based on coherent optics to address such scaling issues.
Implementations
Co-packaged optics uses optical chiplets or ‘engines’ that surround the ASIC on a shared substrate. The optical engines typically use an external laser source, although certain co-packaged optics solutions, such as those from Intel and Ranovus, integrate the laser as part of the silicon photonics-based optical engine.
Designers can scale the optical engine’s I/O capacity in several ways. They can increase the number of fibres connected to the optical engine, send more wavelengths down each fibre, and increase the wavelength’s data rate measured in gigabits per second (Gbps).
In co-packaged optics designs, 16 engines typically surround the chip. For a 25.6-terabit Ethernet chip, 16 × 1.6-terabit engines are used, each engine sending a 100Gbps DR1 signal per fibre. The total number of fibres per engine is 32: 16 for transmit and 16 for receive (see table).
| Switch capacity (Tbps) | Optical engine (Tbps) | Optical engines | Data rate per fibre | Fibres per engine* |
| --- | --- | --- | --- | --- |
| 25.6 | 1.6 | 16 | 100G DR, 500m | 32 |
| 25.6 | 3.2 | 8 | 100G DR, 500m | 64 |
| 51.2 | 6.4 | 8 | 400G FR4, 2km | 32 |
| 102.4 (speculative) | 6.4 | 16 | 400G FR4, 2km | 16 |
| 102.4 (speculative) | 12.8 | 8 | 400G FR4, 2km | 32 |

*Not counting the external laser source fibre.
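The table’s published generations follow directly from the scaling relations in the text: engines = switch capacity ÷ engine capacity, and data fibres per engine = 2 × engine capacity ÷ per-fibre rate (transmit plus receive). A minimal sketch reproducing those rows (the speculative 102.4-terabit rows rest on further assumptions, so they are not checked here):

```python
# Reproducing the co-packaged optics table rows from the scaling relations.
def cpo_layout(switch_tbps, engine_tbps, fibre_gbps):
    engines = switch_tbps / engine_tbps
    fibres_per_engine = 2 * (engine_tbps * 1000) / fibre_gbps   # Tx + Rx
    return int(engines), int(fibres_per_engine)

print(cpo_layout(25.6, 1.6, 100))   # (16, 32) - matches the table
print(cpo_layout(25.6, 3.2, 100))   # (8, 64)
print(cpo_layout(51.2, 6.4, 400))   # (8, 32)
```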
Broadcom’s co-packaged optics approach uses eight optical engines around its 25.6-terabit Tomahawk 4 switch chip, each with 3.2Tbps capacity. For the Tomahawk 5-based 51.2-terabit Bailly co-packaged optics design, Broadcom uses eight 6.4Tbps optical engines, sending 400-gigabit FR4 signals (four coarse WDM wavelengths) across each fibre. Using FR4 instead of DR1 halves the number of optical engines while doubling overall capacity.
The co-packaging solutions used in the next-generation 102.4-terabit switch chip are still to be determined. Capacity could be doubled using twice as many fibres, or by using 200-gigabit optical wavelengths based on 112G PAM-4 electrical inputs, twice the speed currently used.
But for the generation after that – 204.8-terabit switch chips and beyond – the scaling routes and the co-packaged optics design become unclear due to dispersion and power constraints, says Smyth.
Scaling challenges
Assuming eight engines were used alongside a 204.8-terabit ASIC, each would need a capacity of 25.6Tbps. The fibre count per engine could be doubled again, or more wavelengths per fibre would be needed. One player, Nubis Communications, scales its engines and fibres in a 2D array over the top of the package, an approach suited to fibre-count growth.
Doubling the wavelength count is another option but adopting an 8-wavelength CWDM design with 20nm spacing means the wavelengths would cover 160nm of spectrum. Over a 2km reach, this is challenging due to problems with dispersion. Narrower channel spacings such as those used in the CW-WDM MSA (multi-source agreement) require temperature control to ensure the wavelengths stay put.
Keeping the symbol rate fixed but doubling the data rate is another option. But adopting the more complex PAM-8 modulation brings its own link challenges.
Another key issue is power. Current 51.2-terabit switches require 400mW of laser launch power (4 × 100mW lasers) per fibre, and there are 128 transmit fibres per switch.
“Assuming a wall plug efficiency of 20 per cent, that is around 250W of power dissipation just for the lasers,” says Smyth. “Getting to 4Tbps per fibre appears possible using 16 wavelengths, but the total fiber launch power is 10 times higher, requiring 2.5kW of electrical power per switch just for the lasers.”
In contrast, single-polarisation coherent detection of 16-QAM signals through a typical path loss of 24dB could match that 4Tbps capacity with the original 250W of laser electrical power, he says.
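Smyth’s power arithmetic, written out as a sketch using the quoted numbers:

```python
# Laser power budget: direct detection today versus the 4Tbps-per-fibre case.
fibres = 128                 # transmit fibres in a 51.2T FR4-based switch
launch_mw_per_fibre = 400    # 4 lasers x 100mW per fibre
wallplug_eff = 0.20          # quoted 20 per cent wall-plug efficiency

launch_w = fibres * launch_mw_per_fibre / 1000   # 51.2W of optical launch power
electrical_w = launch_w / wallplug_eff           # ~256W, the quoted 'around 250W'
print(f"Direct detect, 400G/fibre: ~{electrical_w:.0f} W electrical")

# Scaling direct detection to 4Tbps per fibre (16 wavelengths) is quoted as
# needing 10x the launch power, i.e. the quoted ~2.5kW per switch:
print(f"Direct detect, 4T/fibre:   ~{electrical_w * 10 / 1000:.1f} kW")

# Coherent detection at 4Tbps/fibre is claimed to match that capacity with
# the original ~250W of laser electrical power.
```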
The optimised total laser power improvement for coherent detection versus direct detection as a function of the additional losses in the signal path (the losses not also experienced by the local oscillator). Source: Pilot Photonics
Coherent detection is associated with a high-power digital signal processor (DSP). Are such chips feasible for such a power-sensitive application as co-packaged optics?
Coherent detection adds some DSP complexity, says Smyth, but it has been shown that for pluggable-based intra data centre links using 5nm CMOS silicon, 400-gigabit coherent and direct-detection are comparable in terms of ASIC power but coherent requires less laser power.
“Over time, a similar battle will play out for co-packaged optics. Laser power will become a bigger issue than DSP power,” he says.
The additional signal margin could be used for 10km links with tens of terabits per fibre, and even 80km links at per-fibre rates similar to current direct detection.
“We believe coherent detection in the data centre is inevitable,” says Smyth. “It’s just a question of when.”
Comb-based coherent co-packaged optics
Coherent co-packaged optics brings its own challenges. Coherent detection requires alignment between the signal wavelength and the local oscillator laser in the receiver. Manufacturing tolerances and the effects of ageing in simple laser arrays make this challenging to achieve.
“The wavelengths of a comb laser are precisely spaced, which greatly simplifies the problem,” says Smyth. “And combs bring other benefits related to carrier recovery and lack of inter-channel interference too.”
Pilot Photonics’ comb laser delivers 16 or 32 wavelengths per fibre, up to eight times more than existing solutions. Smyth says the company intends to fit its comb laser inside the OIF’s standardised External Laser Source pluggable form factor.
The start-up is also developing a coherent ring resonator modulator for its design. The ring modulator is tiny compared with Mach-Zehnder interferometer modulators used for coherent optics.
Pilot Photonics is also developing IP for coherent signal processing. Because its comb laser locks the frequency and phase of the wavelengths generated, the overall control and signal processing can be simplified.
While it will offer the comb laser, the start-up does not intend to develop the DSP IC or make optical engines itself.
“A strategic partnership with a company with its own manufacturing facilities would be the most effective way of getting this technology to market,” says Smyth.
DustPhotonics raises funding for 800G and 1.6T modules
- DustPhotonics has raised $24 million in funding.
- The start-up has taped out its 200 gigabit-per-lane optical chip.
- DustPhotonics expects the 1.6-terabit module market to ramp, starting year-end.

DustPhotonics, which develops chips for transmit optical sub-assemblies (TOSAs) for 400 and 800-gigabit pluggable optical modules, has raised $24 million. The funding extends its Series B funding round.
“When you start ramping up products, you have to iron out the creases around supply chain, production, and everything else,” says Ronnen Lovinger, CEO of DustPhotonics.
DustPhotonics has several customers and a backlog of orders for its 400 and 800-gigabit photonic integrated circuits (PICs). The company has also taped out its 200 gigabit-per-lane chip and will have products later this year.
800-gigabit PICs
DustPhotonics’ products include the Carmel-4-DR4, a 400-gigabit DR4 PIC, and several variants of its 800-gigabit Carmel-8.
“Most of our customers and engagements are interested in the 800-gigabit applications,” says Lovinger.
DustPhotonics has developed a way of attaching a laser source to its silicon photonics chip with sub-micron accuracy. The company uses standard off-the-shelf continuous-wave lasers operating at 1310nm.
The efficiency of the laser-attach scheme means one laser can power four channels, or two lasers can be used for a DR8 design, reducing cost and power consumption.
At the ECOC show last October, DustPhotonics unveiled three 800-gigabit Carmel-8 products. The products include a DR8 with a reach of 500m, a 2km DR8+, and an 800-gigabit ‘lite’ version that competes with 100-gigabit VCSEL designs and only uses one laser. Several customers are considering the Carmel-8-Lite for Ethernet and PCI Express applications.

Manufacturing
DustPhotonics is working with the foundry Tower Semiconductor as it moves to production.
“Having a strong fab partner is very important for silicon photonics,” says Lovinger, who views Tower as a leading silicon photonics foundry. “We have been working with Tower for five years, and they have been a strong partner.” DustPhotonics is using several partners for device assembly.
DustPhotonics is headquartered in Israel and has 50 staff, 37 of whom are in R&D. Investors in the latest funding round include Sienna Venture Capital, Greenfield Partners, Atreides Management, and Exor Ventures.
Lovinger will attend the OFC show later this month for meetings with customers and prospects. “It is always good to see so many customers under the same roof,” he says.
200-gigabit optical
DustPhotonics has a highly stable silicon-photonics modulator that does not need to be temperature-controlled and operates at 200 gigabits per lane.
Developing a 200 gigabit-per-lane transmit chip means that DustPhotonics can address 4-lane 800-gigabit DR4 and 8-lane 1.6-terabit DR8 modules.
Lovinger says that many driver and digital signal processing chip companies already offer 800-gigabit and 1.6-terabit chips. Thus, he sees the advent of 1.6-terabit modules as straightforward once its TOSA design is ready.
“Once we have 200 gigabits-per-lane, it takes us to 1.6 terabits and, in some configurations, 3.2 terabits,” says Lovinger. “We see the 1.6-terabit market starting at the end of this year and ramping in 2025.”
Lovinger says the progress of pluggable modules is postponing the need for co-packaged optics. That said, the company says it has the technologies needed to address co-packaged optics when the market finally needs it.
Drut tackles disaggregation at a data centre scale
- Drut’s DynamicXcelerator supports up to 4,096 accelerators using optical switching and co-packaged optics. Four such clusters enable the scaling to reach 16,384 accelerators.
- The system costs less to buy and run, has lower latency, and makes better use of processors and memory.
- The system is an open design supporting CPUs and GPUs from different vendors.
- DynamicXcelerator will ship in the second half of 2024.

Drut Technologies has detailed a system that links up to 4,096 accelerator chips. And further scaling, to 16,384 GPUs, is possible by combining four such systems in ‘availability zones’.
The US start-up previously detailed how its design can disaggregate servers, matching the processors, accelerators, and memory to the computing task at hand. Unveiled last year, the product comprises management software, an optical switch, and an interface card that implements the PCI Express (PCIe) protocol over optics.
The product disaggregates the servers but leaves intact the tiered Ethernet switches used for networking servers across a data centre.
Now the system start-up is expanding its portfolio with a product that replaces the Ethernet switches with optical ones. “You can compose [compute] nodes and drive them using our software,” says Bill Koss, CEO of Drut.
Only Google has demonstrated the know-how to make such a large-scale flexible computing architecture using optical switching.
Company background
Drut was founded in 2018 and has raised several funding rounds since 2021.
Jitender Miglani, founder and president of Drut, previously worked at MEMS-based optical switch maker, Calient Technologies.
Drut’s goal was to build on its optical switching expertise and add the components needed to make a flexible, disaggregated computing architecture. “The aim was building the ecosystem around optical switches,” says Miglani.
The company spent its first two years porting the PCIe protocol onto an FPGA for a prototype interface card. Drut showcased its prototype product alongside a third-party optical switch as part of a SuperMicro server rack at the Supercomputing show in late 2022.
Drut spent 2023 developing its next-generation architecture to support clusters of up to 4,096 endpoints. These can be accelerators like graphics processing units (GPUs), FPGAs, data processing units (DPUs), or storage using NVM Express (non-volatile memory express).
The architecture, dubbed DynamicXcelerator, supports PCIe over optics to link processors (CPUs and GPUs) and RDMA (Remote Direct Memory Access) over optics for data communications between the GPUs and between the CPUs.
The result is the DynamicXcelerator system, a large-scale reconfigurable computing platform for intensive AI model training and high-performance computing workloads.
DynamicXcelerator

The core of the DynamicXcelerator architecture is a photonic fabric based on optical switches. This explains why Drut uses PCIe and RDMA protocols over optics.
Optical switches bring size and flexibility benefits and, because they relay optical signals, their ports are data-rate independent.
Another benefit of optical switching is power savings. Drut says an optical switch consumes 150W whereas an equivalent-sized packet switch consumes 1,700W. On average, an Infiniband or Ethernet packet switch draws 750W when used with passive cables. Using active cables, the switch’s maximum power rises to 1,700W. “[In contrast], a 32-64-128-144 port all-optical switch draws 65-150W,” says Koss.
Drut also uses two hardware platforms. One is the PCIe Resource Unit, dubbed the PRU-2000, which hosts eight accelerator chips such as GPUs. Unlike Nvidia’s DGX platform, which uses Nvidia GPUs such as the Hopper, or Google, which uses its TPU5 tensor processing unit (TPU), Drut’s PRU-2000 is an open architecture and can use GPUs from Nvidia, AMD, Intel, and others. The second class of platform is the compute node or server, which hosts the CPUs.
DynamicXcelerator’s third principal component is the family of FIC 2500 interface cards.
The iFIC 2500 card is similar to Drut’s current product’s iFIC 1000, which features an FPGA and four QSFP28s. However, the iFIC 2500 supports the PCIe 5.0 generation bus and the Compute Express Link (CXL) protocols. The two other FIC cards are the tFIC 2500 and rFIC 2500.
“The iFIC and tFIC are the same card, but different software images,” says Koss. “The iFIC fits into a compute node or server while the tFIC fits into our Photonic Resource Unit (PRU) unit, which holds GPUs, FPGAs, DPUs, NVMe, and the like.”
The rFIC provides RDMA over photonics for GPU-to-GPU memory sharing. The rFIC card for CPU-to-CPU memory transfers is due later in 2024.
Miglani explains that PCIe is used to connect the GPUs and CPUs, but for GPU-to-GPU communication, RDMA is used since even PCIe over photonics has limitations.
Certain applications will use hundreds or even thousands of accelerators, so PCIe lane count is one limitation and distance is another: a 5ns delay is added for each metre of fibre. “There is a window where the PCIe specification starts to fall off,” says Miglani.
The final component is DynamicXcelerator’s software. There are two software systems: the Drut fabric manager (DFM), which controls the system’s hardware configuration and traffic flows, and the Drut software platform (DSP) that interfaces applications onto the architecture.
Co-packaged optics
Drut knew it would need to upgrade the iFIC 1000 card. DynamicXcelerator uses PCIe 5.0, each lane running at 32 gigabits per second (Gbps). Since 16 lanes are used, that equates to 512 gigabits of bandwidth.
“That’s a lot of bandwidth, way more than you can crank out with four 100-gigabit pluggables,” says Koss, who revealed that co-packaged optics will replace pluggable modules on the iFIC 2500 and tFIC 2500 cards.
The iFIC and tFIC cards will each use two co-packaged optical engines, each providing 8×100 gigabits. The total bandwidth of 1.6 terabits (16×100-gigabit channels) is a fourfold increase over the iFIC 1000.
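The card-level arithmetic, as a minimal sketch of the figures quoted above:

```python
# Interface-card bandwidth: PCIe host side versus the co-packaged optical side.
pcie5_lane_gbps = 32
lanes = 16
host_bw = pcie5_lane_gbps * lanes        # 512 Gbps into the card

engines = 2
engine_gbps = 8 * 100                    # 8x100-gigabit per optical engine
optical_bw = engines * engine_gbps       # 1,600 Gbps = 1.6 Tbps

ific1000_bw = 4 * 100                    # four QSFP28 pluggables on the iFIC 1000
print(host_bw, optical_bw, optical_bw / ific1000_bw)   # 512 1600 4.0
```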
System workings
The system’s networking can be viewed as a combination of circuit switching and packet switching.
The photonic fabric, implemented as a 3D torus (see diagram), supports circuit switching. Using a 3D torus, three hops at most are needed to link any two of the system’s endpoints.

One characteristic of machine learning training, such as large language models, is that traffic patterns are predictable. This suits an architecture that can set the resources and the connectivity for a task’s duration.
Packet switching is not performed using Infiniband, nor is a traditional spine-leaf Ethernet switch architecture used. The DynamicXcelerator does use Ethernet, but in the form of a small, distributed switching layer supported in each interface card’s FPGA.

The smallest-sized DynamicXcelerator would use two racks of stacked PRU-2000s (see diagram). Further racks would be added to expand the system.
“The idea is that you can take a very large construct of things and create virtual PODs,” says Koss. “All of a sudden, you have flexible and fluid resources.”
Koss says a system can scale to 16,384 units by combining four clusters, each of 4,096 accelerators. “Each one can be designated as an ‘availability zone’, with users able to call resources in the different zones,” he says.
Customers might use such a configuration to segment users, run different AI models, or for security reasons. “It [a 16,384 unit system] would be huge and most likely something that only a service provider would do or maybe a government agency,” says Koss.
Capital and operation savings
Drut claims the architecture costs 30 per cent less than conventional systems, while operational cost-savings are 40 per cent.
The numbers need explaining, says Koss, given the many factors and choices possible.
The bill of materials of a 16, 32, 64 or 128-GPU design has a 10-30 per cent saving solely from the interconnect.
“The bigger the fabric, the better we scale in price as solutions using tiered leaf-spine-core packet switches involving Ethernet-Infiniband-PCIe are all built around the serdes of the switch chip in the box,” says Koss. “We have a direct-connect fabric with a very high radix, which allows us to build the fabric without stacked tiers like legacy point-to-point networks.”
There are also the power savings, as mentioned. Less power means less heat and hence less cooling.
“We can also change the physical wires in the network,” says Koss, something that can’t be done with leaf-spine-core networks, unless data centre staff change the cabling.
“By grouping resources around a workload, utilisation and performance are much better,” says Koss. “Apps run faster, infrastructure is grouped around workloads, giving users the power to do more with less.”
The system’s evolution is another consideration. A user can upgrade resources because of server disaggregation and the ability to add and remove resources from active machines.
“Imagine that you bought the DynamicXcelerator in 2024. Maybe it was a small, four-to-six-rack system of GPUs, NVMe, etc.,” says Koss. If, in mid-2026, Nvidia releases a new GPU, the user can take several PRU-2000s offline and replace the existing GPUs with the new ones.
“Also, if you are an Nvidia shop but want to use the new MI300 from AMD, no problem,” says Koss. “You can mix GPU vendors with the DynamicXcelerator.” This is different from today’s experience, where what is built is wasteful, expensive, complex, and certainly not climate-conscious, says Koss.
Plans for 2024
Drut has 31 employees, 27 of whom are engineers. “We are going on a hiring binge and likely will at least double the company in 2024,” says Koss. “We are hiring in engineering, sales, marketing, and operations.”
Proof-of-concept DynamicXcelerator hardware will be available in the first half of 2024, with general availability then following.
The APC’s blueprint for silicon photonics

The Advanced Photonics Coalition (APC) wants to smooth the path for silicon photonics to become a high-volume manufacturing technology.
The organisation is talking to companies to identify issues whose solutions will benefit silicon photonics broadly.
The Advanced Photonics Coalition wants to act as an industry catalyst to prove technologies and reduce the risk associated with their development, says Jeffery Maki, Distinguished Engineer at Juniper Networks and a member of the Advanced Photonics Coalition’s board.
Origins
The Advanced Photonics Coalition was unveiled at the Photonic-Enabled Cloud Computing (PECC) Industry Summit jointly held with Optica last October.
The Coalition was formerly known as the Coalition for On-Board Optics (COBO), an industry initiative led by Microsoft.
Microsoft wanted a standard for on-board optics, which until then had been a proprietary technology. At the time, on-board optics was seen as an important stepping stone between pluggable optical modules and their ultimate successor, co-packaged optics.
After years of work developing specifications and products, Microsoft chose not to adopt on-board optics in its data centres. Although COBO added other work activities, such as co-packaged optics, the organisation lost momentum and members.
Maki stresses that COBO always intended to tackle other work besides its on-board optics starting point.
This is now the Advanced Photonics Coalition’s goal: a broad remit, creating working groups to address a range of issues.
Tackling technologies
Many standards organisations publish specifications but leave the implementation technologies to their member companies. In contrast, the Advanced Photonics Coalition is taking a technology focus. It wants to remove hurdles associated with silicon photonics to ease its adoption.
“Today, we see the artificial intelligence and machine learning opportunities growing, both in software and hardware,” says Maki. “We see a need in the coming years for more hardware and innovative solutions, especially in power, latency, and interconnects.”
Working groups
In the past, systems vendors like Cisco or Juniper drove industry initiatives, and other companies fell in line. More recently, it was the hyperscalers that took on the role.
There is less of that now, says Maki: “We have a lot of companies with technologies and good ideas, but there is not a strong leadership.”
The Advanced Photonics Coalition wants to fill that void and address companies’ common concerns in critical areas. “Key customers will then see the value of, and be able to access, that standard or technology that’s then fostered,” says Maki.
The Advanced Photonics Coalition has yet to announce new working groups but it expects to do so in 2024.
One area of interest is silicon photonics foundries and their process design kits (PDKs). Each foundry has a PDK, made up of tools, models, and documentation, to help engineers with the design and manufacture of photonic integrated devices.
“A starting point might be support for more than one foundry in a multi-foundry PDK,” says Maki. “Perhaps a menu item to select the desired foundry where more than one foundry has been verified to support.”
Silicon photonics has long been promoted as a high-volume manufacturing technology for the optical industry. “But it is not if it has been siloed into separate efforts such that there is not that common volume,” says Maki.
Such a PDK effort would identify gaps that each foundry would need to fill. “The point is to provide for more than one foundry to be able to produce the item,” he says.
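What such a foundry “menu item” might look like remains an open question. The sketch below is purely hypothetical, with placeholder foundry names, and does not describe any real PDK’s API.

```python
# Purely hypothetical illustration of a multi-foundry PDK "menu item". No real
# PDK exposes this API, and the foundry names are placeholders.

VERIFIED_FOUNDRIES = {"foundry_a", "foundry_b"}  # hypothetical verified targets

def select_foundry(design: dict, foundry: str) -> dict:
    """Bind a photonic design to one foundry's verified models and rules."""
    if foundry not in VERIFIED_FOUNDRIES:
        raise ValueError(f"{foundry} has not been verified for this PDK")
    design["target"] = foundry  # same design, different foundry back end
    return design

chip = select_foundry({"name": "wdm_mux"}, "foundry_a")
print(chip)
```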
A company is also talking to the Advanced Photonics Coalition about co-packaged optics. The company has developed an advanced co-packaged optics solution, but it is proprietary.
“Even with a proprietary offering, one can make changes to improve market acceptance,” says Maki. The aim is to identify the areas of greatest contention and remedy them first, for example, the external laser source. “Opening that up to other suppliers through standards adoption, existing or new, is one possibility,” he says.
The Advanced Photonics Coalition is also exploring optical interconnecting definitions with companies. “How we do fibre-attached to silicon photonics, there’s a desire that there is standardisation to open up the market more,” says Maki. “That’s more surgical but still valuable.”
And there are discussions about a working group to address co-packaged optics for the radio access network (RAN). Ericsson is one company interested in co-packaged optics for the RAN. Another working group being discussed could tackle optical backplanes.
Maki says there are opportunities here to benefit the industry.
“Companies should understand that nothing is slowing them down or blocking them from doing something other than their ingenuity or their own time,” he says.
Status
COBO had 50 members earlier in 2023. Now, the membership listed on the website has dropped to 39, and the number could dip further; companies that joined for COBO may still decide to leave.
At the time of writing, a new, as-yet-unannounced member has joined the Advanced Photonics Coalition, taking the membership to 40.
“Some of those companies that left, we think they will return once we get the working groups formed,” says Maki, who remains confident that the organisation will play an important industry role.
“Every time I have a conversation with a company about the status of the market and the needs that they see for the coming years, there’s good alignment amongst multiple companies,” he says.
There is an opportunity for an organisation to focus on the implementation aspects and the various technology platforms and bring more harmony to them, something other standards organisations don’t do, says Maki.
The market opportunity for linear drive optics

A key theme at OFC earlier this year that surprised many was linear drive optics. The attention it received at the optical communications and networking event was intriguing because linear drive – based on using remote silicon to drive the photonics – is not new.
“I spoke to one company that had a [linear drive] demo on the show floor,” says Scott Wilkinson, lead analyst for networking components at Cignal AI. “They had been working on the technology for four years and were taken aback; they weren’t expecting people to come by and ask about it.”
The cause of the buzz? Andy Bechtolsheim, famed investor, co-founder and chief development officer of network switching firm Arista Networks and, before that, a co-founder of Sun Microsystems.
“Andy came out and said this is a big deal, and that got many people talking about it,” says Wilkinson, author of a recent linear drive market research report.
Linear drive
A data centre’s switch chip links to the platform’s pluggable optics via an electrical link. The switch chip’s serialiser-deserialiser (serdes) circuitry drives the signal across the printed circuit board to the pluggable optical module. A digital signal processor (DSP) chip inside the pluggable module cleans and regenerates the received signal before sending it on optically.

With linear drive optics, the switch ASIC’s serdes directly drives the module optics, removing the need for the module’s DSP chip. This cuts the module’s power consumption by half.
The diagram above contrasts linear drive optics with traditional pluggables and with the emerging technology of co-packaged optics, where the optics sit adjacent to the switch chip and are packaged together. Linear drive optics can be viewed as a long-distance variant of co-packaged optics that continues to advance pluggable modules.
Proponents of linear drive claim that the power savings are a huge deal. “There will probably also be some cost savings, but it is not entirely clear how big they will be,” says Wilkinson. “But the only thing people want to discuss is the power savings.”
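To put that power claim in context, here is a back-of-the-envelope sketch; only the “half” figure comes from the article, while the 16W baseline and the module count are assumptions for illustration.

```python
# Back-of-the-envelope saving from dropping the module DSP. The 'half' figure
# comes from the article; the 16 W baseline and module count are assumptions.

dsp_module_watts = 16.0                        # assumed 800G DSP-based pluggable
linear_module_watts = dsp_module_watts / 2     # linear drive halves module power
modules = 10_000                               # hypothetical deployment

saved_kw = modules * (dsp_module_watts - linear_module_watts) / 1000
print(f"{saved_kw:.0f} kW saved across {modules} modules")  # 80 kW
```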
Misgivings
If linear drive’s main benefit is reducing power consumption, the technology’s sceptics counter with several technical and business issues.
One shortfall is that a module’s electrical and optical lanes must match in number and hence data rate. If there is a mismatch, the signal speeds must be translated between the electrical and optical lane rates, known as gearboxing. This task requires a DSP. Linear drive optics is thus confined to 800-gigabit optical modules: 800GBASE-DR8 and 800-gigabit 2xFR4. “There are people who think that at least 800 Gig – eight lanes in and eight lanes out – will continue to exist for a long time,” says Wilkinson.
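The lane-matching constraint can be expressed as a simple check; the helper below is illustrative, with the module examples drawn from the article.

```python
# Illustrative check of the lane-matching rule: if electrical and optical lane
# counts or per-lane rates differ, a gearbox-performing DSP is needed.

def needs_gearbox(elec_lanes, elec_gbps, opt_lanes, opt_gbps):
    """True when the electrical and optical sides differ in lane count or rate."""
    return (elec_lanes, elec_gbps) != (opt_lanes, opt_gbps)

# 800GBASE-DR8: eight 100G electrical lanes, eight 100G optical lanes.
print(needs_gearbox(8, 100, 8, 100))  # False, so linear drive is possible
# 200G-per-lane optics fed from 100G electrical lanes: rates differ.
print(needs_gearbox(8, 100, 4, 200))  # True, a DSP gearbox is required
```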
Another question mark concerns the use of optics for artificial intelligence workloads. Adopters of AI will be early users of 200-gigabit-per-lane optics which, when paired with today’s 100-gigabit electrical lanes, require a gearbox-performing DSP.
Moreover, the advent of 200-gigabit electrical lanes will challenge serdes developers and, hence, linear drive designs. “It will be a technical challenge, the distances will be shorter, and some think it may never work,” says Wilkinson. “No matter how good the serdes is, it will not be easy.”
Co-packaged optics will also hit its stride once 200-gigabit serdes-based switch chips become available.
Another argument is that there are many ways to save power in the data centre; if linear drive introduces complications, why make it a priority?
Linear drive optics requires the switch chip vendors to develop high-quality serdes. Wilkinson says the leading switch vendors remain agnostic to linear drive, which is not a ringing endorsement. And while hyperscalers are investing time and resources into linear-drive technology, none has committed to it, meaning they can withdraw at any stage without penalty.
“There is one story for linear drive and many stories against it,” admits Wilkinson. “When you compile them, it’s a pretty big story.”
Market opportunity
Cignal AI believes linear-drive optics will prove a niche market, with 800-gigabit linear-drive modules capturing 10 per cent of overall 800-gigabit pluggable shipments in 2027.
Wilkinson says the most promising application of the technology is active optical cables, where the modules and cables form a closed design. And while many companies are invested in the technology and it will enjoy some success, the opportunity will not be as significant as its proponents hope.