Mixx Technologies’ vision for inference at scale


Mixx’s management team discusses its vision of reshaping AI infrastructure through silicon-integrated optical interconnects.
- Mixx, still in stealth mode, is developing an interconnect optimised for AI inferencing (tokens-per-second, latency, and power).
- The start-up has developed a 25.6 terabit-per-second (Tbps) optical engine (chiplet) to enable large AI clusters.
- Mixx raised $33 million in Series A funding in late 2025.
“When you’re stressed about closing your funding, you get horizontal lines,” Vivek Raghuraman, CEO and co-founder of Mixx Technologies, notes wryly, gesturing to his forehead. “Then, the lines go vertical because now we are focused on execution.”
It is a sentiment that will resonate with many start-up CEOs.
Founded in 2023, the San Jose-based company has set out to tackle the escalating data-movement bottleneck in AI clusters.
“At every layer of the AI, we are integrating photonics,” says Raghuraman. “The fundamental thesis for Mixx is mixing optics and electronics in a way that brings efficiency.”
Interconnect challenge
AI training and inference workloads are increasingly limited not by compute but by data movement. Power-hungry electrical links, retimers, and multi-hop switch fabrics have latency and energy overheads, constraining the cluster’s size.
“The fundamental thesis for Mixx is mixing optics and electronics in a way that brings efficiency”
Mixx has developed its ‘HBxIO’ optical engine, a high-density input/output (I/O) chiplet that can be co-packaged with GPUs, custom AI accelerators and other chips such as network interface card controllers and switch chips.

The first-generation HBxIO delivers 25.6Tbps of bidirectional bandwidth – 12.8Tbps in each direction – achieved using 200 gigabit-per-second (Gbps) optical lanes.
Bandwidth density is a key metric in advanced packaging, where a die’s perimeter edges – its ‘beachfront’ – used for I/O are highly valued.
Traditional electrical serialiser/deserialiser (serdes) interfaces occupy the north/south edges of an ASIC, often the shorter edges (some 27mm). In contrast, the east/west beachfronts of an ASIC such as a GPU are used to interface to high-bandwidth memory (HBM).
Mixx’s focus is to achieve high densities by fitting more than 300 fibres across the 27mm die edge. Such density enables a high radix to connect many endpoints without intermediate switches.
“We can bring in more than 300 fibres, connecting at least 128 GPUs,” says Raghuraman, thereby reducing the number of hops between GPUs. “Today, the name of the game is increasing GPUs in a cluster in a way that all can operate as one large processing unit.”
Nvidia’s NVL72 rack, for example, uses 18 switches to connect 72 GPUs in a scale-up architecture. Mixx’s approach enables a switch-less or a minimal-switch cluster for latency-sensitive workloads. Indeed, using both chip edges, 256 GPUs can be connected.
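As a sanity check, the headline numbers quoted above multiply out as follows. This is a back-of-envelope sketch using only figures from the article; the variable names are ours:

```python
# Figures quoted in the article; the arithmetic is illustrative only.
total_bw_tbps = 25.6                  # HBxIO bidirectional bandwidth
per_dir_tbps = total_bw_tbps / 2      # 12.8 Tbps in each direction
lane_rate_gbps = 200                  # per optical lane
lanes_per_direction = per_dir_tbps * 1000 / lane_rate_gbps

edge_mm = 27                          # the shorter ASIC 'beachfront'
fibres_per_edge = 300                 # "more than 300 fibres"
fibre_density_per_mm = fibres_per_edge / edge_mm

print(f"{lanes_per_direction:.0f} optical lanes per direction")    # 64
print(f"~{fibre_density_per_mm:.0f} fibres per mm of beachfront")  # ~11
```

At that density, using both 27mm edges doubles the endpoint count, consistent with the jump from 128 to 256 GPUs quoted above.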
Beyond that, system architects have flexibility: they can add optical or electrical switches for dynamic workloads, or extend to multi-cluster configurations spanning 500 metres.
“We’re providing the ability to create a switch-less cluster if latency is the primary goal of the workload,” says Rebecca Schaevitz, co-founder and chief product officer at Mixx. “But if you want dynamic workloads, and can accommodate the extra latency of a switch, you can connect a cluster of more than 4,000 GPUs using one layer of switches.”

The 500-metre limit stems from pragmatic engineering. “The reach can be provided by high-power lasers, while still being within the spectrum of what is available at the right cost and the right reliability,” says Raghuraman.
Pushing the reach further narrows the choice of viable lasers—a non-starter for hyperscalers prioritising supply-chain robustness.
Modulators and manufacturability
Mixx uses Mach-Zehnder interferometer modulators for its optical engine, rather than the compact ring resonators favoured by firms such as Ayar Labs and Nvidia.
Mach-Zehnder interferometers are bulkier, but Mixx says its design is optimised for high-density silicon photonics. “The Mach-Zehnders fit within that beachfront density,” says Raghuraman. “Bulkiness is an artefact of how Mach-Zehnders have been used in optical transceivers in the past; we can leverage CMOS-scale technologies to bring effective performance.”
The Mach-Zehnder interferometer modulator also suits thermally sensitive environments such as next to GPUs while meeting interoperability and standards requirements.
The start-up also claims a 72 per cent reduction in a typical AI cluster’s power consumption. This stems from multiple factors. By enabling higher compute usage – reducing GPU idle time through faster data delivery – overall system power consumption drops. Eliminating retimers and signal-conditioning circuits also helps, reducing link energy to 5pJ/bit. “We are making it interoperable to standards,” he says.
Link-budget savings of nearly 4dB are achieved through optimised fibre connectivity—a core innovation. “Every dB saves power, whether on the laser or recovery circuits,” says Raghuraman.
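Both claims are straightforward to sanity-check, since decibels are logarithmic and energy per bit converts to power when multiplied by the bit rate. A sketch using the quoted figures; the per-direction wattage is our own arithmetic, not a number Mixx quotes:

```python
# A 4 dB link-budget saving in linear terms: ~2.5x less optical power needed.
db_saving = 4.0
linear_factor = 10 ** (db_saving / 10)

# Energy per bit times bit rate gives link power: P = (J/bit) * (bit/s).
energy_j_per_bit = 5e-12              # quoted 5 pJ/bit
bit_rate_bps = 12.8e12                # one direction of the 25.6 Tbps engine
link_power_w = energy_j_per_bit * bit_rate_bps

print(f"{linear_factor:.2f}x optical power saving from 4 dB")  # 2.51x
print(f"{link_power_w:.0f} W per direction at 5 pJ/bit")       # 64 W
```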
This connectivity approach tackles manufacturability at semiconductor volumes. Mixx has developed a bottom-up solution to be compatible with semiconductor flows. The start-up highlights how it has eliminated optical epoxies and UV curing to survive 300°C+ assembly temperatures, while also maintaining optical alignment.
“We had to come up with a technology that works with the true semiconductor manufacturing ecosystem,” says Raghuraman. “We focused on what it takes to develop a solution that is truly manufacturable at scale.”
System-level thinking
Beyond hardware, Mixx has developed software, called GuardBand, that handles orchestration between GPUs. “How we enable this orchestration between the GPUs is all done by GuardBand, software that integrates with the network operating system to give the controllability, observability, and all the telemetry needed to manage the data movement within the cluster.”
The platform will support standard protocols at the fibre level – Ethernet, Ultra Ethernet, InfiniBand – while die-to-die interfaces can use UCIe or custom protocols. This ensures backwards compatibility and broad integration, whether for scale-up (massive single clusters), scale-out (distributed), scale-in (disaggregated memory/compute), or scale-across (long-reach).
Market dynamics
Mixx is using the funding round to accelerate engineering-sample delivery—primarily of the HBxIO chiplet—for customer integration into custom packages.
Recruitment is also under way, covering photonics, electronics, advanced packaging, and opto-mechanics. Mixx has over 50 staff in the US, Taiwan, and India.
Asked about the recent wave of optical interconnect acquisitions – Ciena acquiring Nubis, and Marvell’s just-completed acquisition of Celestial AI – Mixx’s view is that AI interconnect is nascent. “There is no one technology that is going to be the solution,” says Raghuraman.
Tackling the issue from a system perspective rather than at the device level is what differentiates Mixx, says Raghuraman. “We thought about the system level, and then we are targeting building products that satisfy that system.”
AI inference demands underscore the start-up’s focus: low time-to-first-token and inter-token latency. “These two are the fundamentals that are going to be extremely critical for inference,” he says.
Schaevitz also notes that AI model developments, such as the adoption of mixture-of-experts architectures, mirror the human brain in having localised processing yet, like the brain, still needing a central database.
“That’s where connectivity is still going to grow,” says Schaevitz. “The more the ‘experts’ are scattered around, the more that this sloshing of data is important, because otherwise you’re waiting.”
Mixx will give a progress update at the upcoming OFC show in March. The company is working with select partners to build tier-one ecosystems that lower barriers to hyperscaler adoption.
“Most important for hyperscalers is the barrier to entry,” says Raghuraman. “We chose to solve one fundamental problem and leverage qualified technologies, where volume data is available.”
Mixx’s system, manufacturability, and efficiency-driven strategy aims to unlock the next phase of AI intelligence: large clusters with reduced power and latency that keep pace with evolving models.
Teramount races to scale fibre-to-chip coupling

The start-up raised $50 million in 2025, funding that will move the company from pilot shipments to high-volume manufacturing of its optical input-output (I/O) interface technology.
Teramount is advancing its fibre-to-chip coupling technology toward volume manufacturing. The start-up is targeting the emerging co-packaged optics (CPO) market, where photonics is integrated directly with a chip to improve speed and energy efficiency.
The company has developed a way to attach fibres to a chip suited to semiconductor manufacturing.
The start-up’s focus is to match its interface to the process steps across the supply chain—at photonic chip foundries and outsourced semiconductor assembly and test (OSAT) houses.
The $50 million funding advances Teramount’s plans, says Hesham Taha, CEO of Teramount, to enable optical connectivity for use in advanced AI computing systems.
Wideband surface coupling
Teramount’s core intellectual property connects fibre to a chip’s surface without the bandwidth penalties associated with conventional surface coupling.
In silicon photonics, “surface coupling” is often shorthand for a grating coupler. Teramount’s approach uses what it calls a photonic bump — a coupling and packaging construct integrated into a semiconductor-style process flow — that delivers a wide optical bandwidth with the advantages of surface-mount packaging.

Surface coupling is attractive to photonic chip designers because it eases some of the geometric and packaging constraints of attaching fibres to a chip’s edge. This is becoming more important as systems push toward higher aggregate fibre counts, denser packaging, and tighter link budgets.

Teramount argues that its bump-based approach can reduce the bandwidth/efficiency trade-offs that start to dominate when designers scale fibre counts and power budgets.
“Our philosophy has been that we needed to bring optics to speak the same language as semiconductor manufacturing,” says Taha. “This is the purpose of our photonic bump.”
Product and positioning
Companies are evaluating Teramount’s TeraVERSE surface-mount fibre-attach product, which is now sampling.
Teramount is pursuing more than one on-ramp into silicon photonics packaging and says its approach supports co-packaged optics for scale-out and for scale-up architectures once the industry is ready to replace copper interconnects with optics for short reaches (1-3m).

The start-up has a roadmap that enables fibre stacking to increase attach density – an important lever in co-packaged optics, where fibre routing becomes a constraint.
The company says its units are being integrated into silicon photonics wafers at multiple foundries, assembled at major OSATs, and delivered through partner and customer flows aimed at co-packaged optics programmes.
Such integration is challenging. Once optics moves inside advanced packages, alignment tolerances, reflow compatibility, thermal and mechanical stress, and test strategy all must be addressed in a production setting. Teramount argues the ecosystem is learning these lessons in real time, with system requirements evolving rapidly.
“Customers are asking to increase the number of fibres, increase the optical power, and meet harsh environmental conditions,” says Taha. “Every vendor needs to accommodate these changes on the go.”
Taha also highlights other challenges: fibre management inside the system, accommodating single-mode and polarisation-maintaining fibre where required, and the practical question of “who owns what” when optics meets the chip industry’s established division of responsibilities.
New customers used to volumes
For Teramount, the most consequential change is the nature of the customer. When leading semiconductor companies—AI accelerator and switching players—come to a foundry with a co-packaged optics requirement, it forces engagement at a different level than photonics start-ups can drive.
“The main change is the ownership of the co-packaged optics,” says Taha. “It’s a chip company that is now coming to a foundry for co-packaged optics—and the volumes are going to be high.”
By volume, the expectation is millions of units per year. The company says it is targeting that capability for the middle of 2027.
This shift matters because foundry process tweaks, OSAT line development, supply-chain readiness, and reliability qualification move faster when a high-volume chip programme is the driver.
Manufacturing ramp
Teramount’s Series A funding is aimed at accelerating production plans, expanding suppliers and foundry engagements, and building internal capability to manage a volume transition.
Today, the company operates a pilot-line approach that uses distributed equipment across local and global suppliers, but Teramount is moving toward a consolidated and scalable flow. “We are gathering equipment to have the line in one place,” says Taha.
Teramount says it has grown its staff by half in 2025 to 60, with hiring focussed on operations and manufacturing expertise aligned with high-volume semiconductor making.
There is no one standard for co-packaged optics but eventually the market will settle on a de facto approach, says Taha.
Teramount is effectively betting that manufacturing readiness is a differentiator – and that wideband surface coupling, if it can be packaged and produced like microelectronics, can be part of the co-packaged optics wave that will ship at scale.
Meanwhile, Teramount’s $50 million round reflects a key industry transition: funding aimed squarely at the volume-manufacturing bottleneck.
Backers of Teramount’s Series A funding round include Koch Disruptive Technologies (KDT), AMD Ventures, Hitachi Ventures, Samsung Catalyst Fund, Wistron, and Grove Ventures.
Ayar Labs prepares to fulfil its optical input-output (I/O) vision

Ayar Labs progresses towards volume manufacturing of its TeraPHY optical input-output (I/O) chiplet.
It is a decade since Vladimir Stojanovic co-authored a paper in the science journal Nature outlining the first microprocessor to send and receive data optically.
“For the first time, a system – a microprocessor – has been able to communicate with the external world using something other than electronics,” said Stojanovic, then an associate professor of electrical engineering and computer science at the University of California, Berkeley.
Ten years on, silicon photonics and optics packaged alongside silicon have come a long way.
Broadcom has added its third-generation co-packaged optics design to its 102.4 Terabit-per-second (Tbps) Tomahawk 6 switch chip. And Nvidia has announced two families of switches – its first – that use the optical technology.
Co-packaged optics has long been promoted as lowering power consumption and aiding processing scalability. But in the last year it has proven to be far more reliable than traditional pluggable optics.
Ayar Labs, the start-up Stojanovic co-founded and where, since 2024, he has been chief technology officer (CTO), has also come a long way. In 2025, Ayar Labs detailed its third-generation TeraPHY optical I/O chiplet, first in a post-deadline paper at OFC 2025 and then at the Hot Chips 2025 event this summer.
The start-up has also announced partnerships with the Taiwanese ASIC design companies Alchip Technologies and Global Unichip Corp (GUC), both with strong links to leading foundry TSMC.

Third-generation TeraPHY optical I/O chiplets
The latest TeraPHY optical I/O chiplet has a bidirectional bandwidth of 8Tbps, or 4Tbps in each direction (see diagram above). It is also the first chiplet design to carry Universal Chiplet Interconnect Express (UCIe) traffic optically. UCIe is a standard die-to-die protocol and Ayar Labs has extended its reach using light. UCIe can carry various protocols and Stojanovic describes the latest device as a ‘universal I/O chiplet’.
The chiplet uses eight 1Tbps optical ports, each supporting 512Gbps per direction. Each wavelength carries a 32Gbps signal and, using 16 silicon photonics micro-ring resonators, there are 16 wavelengths per fibre.
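These per-wavelength figures multiply out to the headline numbers, with the 4Tbps per-direction figure rounding down from 4,096Gbps. A quick check, with variable names of our own choosing:

```python
# Per-fibre and per-chiplet arithmetic from the quoted TeraPHY figures.
wavelengths_per_fibre = 16
gbps_per_wavelength = 32
per_fibre_gbps = wavelengths_per_fibre * gbps_per_wavelength   # 512 Gbps per direction

ports = 8
per_direction_gbps = ports * per_fibre_gbps                    # 4,096 Gbps, ~4 Tbps
bidirectional_gbps = 2 * per_direction_gbps                    # ~8 Tbps headline

print(per_fibre_gbps, per_direction_gbps, bidirectional_gbps)  # 512 4096 8192
```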
Ayar Labs also makes a custom-designed laser module – the external light source – that powers the TeraPHY optical I/O chiplets. Dubbed the SuperNova light source, the module uses an array of distributed feedback (DFB) lasers provided by Sivers Photonics.
Ayar Labs uses Sivers’ DFB cell and has adapted it to create a laser array packaged in a module, with the lasers multiplexed and split into wavelengths.
A SuperNova module can have 8 or 16 ports, each with 16 wavelengths, for a total of 128 or 256 wavelengths.
From monolithic optics to modular chiplets
Ayar Labs is a fabless company, meaning it can choose a fab for its design to deliver the best performance and cost. “And in an appropriate ecosystem where our customers want to build their solutions,” adds Stojanovic.
“This [TeraPHY optical I/O chiplet] architecture lets us move seamlessly through different foundry processes,” says Stojanovic. “We can adopt the best CMOS node for logic while keeping the photonic building blocks stable.”
The 8Tbps TeraPHY device is built using GlobalFoundries’ 45SPCLO 45nm silicon-photonics process — a platform that Ayar Labs helped shape. But the design can also be migrated to TSMC’s more advanced CMOS nodes for the electrical IC while benefiting from TSMC’s silicon photonics and packaging flows.
Universal I/O chiplet
Each generation of Ayar Labs’ optical I/O engine follows the same architecture: a modular optical chip with a die-to-die interface, logic in between, and an optical serialiser/deserialiser (serdes) core.
The optical serdes carry the UCIe protocol. What that does, Stojanovic explains, is eliminate the electrical serdes from any connection between two chips — say, GPU-to-GPU or a GPU to a switch. “Each side runs a low-power UCIe interface that connects a few millimetres to our chiplet, and from there it can go anywhere in the system,” says Stojanovic.
The two GPU endpoints operate as if within one package – the definition of a scale-up architecture – creating what he calls the illusion of a single, massive GPU. This makes UCIe a fabric not just for multi-die packages but for multi-module systems, without changing how the GPUs or accelerators see each other.
On the optical side, the 16 wavelengths are spaced 200GHz apart, providing terabit aggregate bandwidth per port, with each port multiplexing and demultiplexing these wavelengths.
Ayar Labs has shown that the high wavelength count works over standard single-mode fibre at tens to hundreds of metres. “You can now run 30, 50 or even 100 metres without polarisation-maintaining fibre,” he says. “That’s essential if you want to scale clusters economically.”
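The 200GHz grid translates into wavelength terms via the standard conversion Δλ = λ²·Δf/c. The article does not state the operating band, so both 1310nm (O-band) and 1550nm (C-band) centres are shown here as assumptions:

```python
C_M_PER_S = 299_792_458.0  # speed of light

def grid_spacing_nm(centre_nm: float, spacing_ghz: float) -> float:
    """Wavelength spacing of a frequency grid: d_lambda = lambda^2 * df / c."""
    lam_m = centre_nm * 1e-9
    return lam_m * lam_m * (spacing_ghz * 1e9) / C_M_PER_S * 1e9  # metres -> nm

for centre in (1310.0, 1550.0):
    d_lam = grid_spacing_nm(centre, 200.0)
    span = 15 * d_lam  # 16 wavelengths have 15 gaps between them
    print(f"{centre:.0f} nm band: {d_lam:.2f} nm spacing, ~{span:.0f} nm total span")
```

At 1550nm, for instance, 200GHz works out to roughly 1.6nm between channels, so all 16 wavelengths fit within about 24nm of spectrum.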
The chip is protocol-agnostic. It can carry CXL, NVLink, UALink, Ethernet or other traffic, encapsulated in the UCIe streaming raw mode.
“Our chiplet never looks at what’s inside,” Stojanovic says. “It just gives the illusion that you’re talking over a wire to the chip next to you.”
This makes the device a universal building block for GPUs, switches, or memory controllers. With per-chiplet bandwidths now reaching multiple terabits per second, Ayar positions its design as a logical successor to high-speed electrical I/O.
“The UCIe scaling roadmap is faster than high-bandwidth memory (HBM),” he notes, “so we can reach or exceed HBM-class bandwidth per chiplet.”

Scale-up first, extended memory next
The first commercial use of the technology will be for GPU scale-up architectures that connect accelerators within and across racks. “That’s the natural order of things,” says Stojanovic. “Optics is clearly becoming valuable for scale-up and multi-rack domains.”
The next step will be to link GPUs to extended memory (see diagram above). Using the same universal I/O chiplet, designers can partition bandwidth between inter-GPU communication and memory traffic depending on workload. “That lets you tailor performance efficiency — teraflops per watt — as well as interactivity [for inference],” he says.
The common element across both applications is flexibility: one optical die serving multiple system roles.
From racks to ‘islands’
Stojanovic expects that optical I/O will help expand the number of AI accelerators in a scale-up domain before scale-out becomes necessary.
“A single switch can’t have a radix much higher than about 512,” he explains. “With multi-die GPU packages, you can reach about 1,000 GPU dies per domain today, and in the next few years, we’ll see clusters of 1,000 to 10,000 GPU dies acting as one.”
He describes these as high-speed optical islands — units that operate as a single accelerator within a data-centre-scale cluster. “If you have 1,000,000 GPUs in a data centre, it’s a hundred islands of 10,000,” he says.
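The arithmetic behind these figures is simple; the two-dies-per-package assumption below is ours, one plausible reading of how a ~512-radix switch reaches “about 1,000 GPU dies”:

```python
# Cluster arithmetic implied by the quoted figures (illustrative only).
switch_radix = 512            # practical upper bound quoted for a single switch
dies_per_package = 2          # assumption: a two-die GPU package
dies_per_domain = switch_radix * dies_per_package   # ~1,000 dies per domain

gpus_in_data_centre = 1_000_000
island_size = 10_000          # GPU dies acting as one accelerator
islands = gpus_in_data_centre // island_size

print(dies_per_domain, islands)  # 1024 100
```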
Optical I/O helps solve the key limitation of electrical networks: switch congestion. “In two-stage Clos networks, congestion is the problem. If you have plenty of bandwidth, you can enable path diversity — multiple switch planes — which dramatically reduces latency. An uncongested switch traversal is a few hundred nanoseconds.”
In effect, the bandwidth abundance that optical I/O delivers becomes a new lever for scaling compute clusters without compromising efficiency or latency.
Manufacturing and partnership
To bring the chip into high-volume manufacturing, Ayar has partnered with Alchip Technologies, a leading ASIC design house closely tied to TSMC. Alchip designs advanced ASICs and packages for hyperscalers and will integrate Ayar’s optical engines directly into compute or switch packages.
“When you make an optical engine, you need to put it in an advanced package — an xPU or switch package,” Stojanovic explains. “Alchip has the experience and market access in hyperscale. Together, we can provide a packaged ASIC decorated with optical engines that’s ready to connect at cluster scale.”
This arrangement, he adds, “helps hyperscalers de-risk deployment. The product is manufactured in a high-volume flow certified by TSMC, tested and qualified in that ecosystem.”
Ayar Labs is also partnering with a second leading Taiwanese ASIC player, Global Unichip Corp (GUC), to integrate its TeraPHY optical engine into GUC’s advanced ASIC design services. GUC is an ASIC processing and packaging company, with TSMC as its largest shareholder.
Competing to win
With multiple companies now targeting optical I/O, Stojanovic identifies three factors that will differentiate between the solutions.
“First is being in the right ecosystem — access to the best foundry and packaging partners,” he says. “Second is the form factor. It has to be manufacturable at scale; that’s why we chose chiplets.”
The third is maturity: proven reliability, validated system behaviour, and a roadmap that spans generations. “We’ve qualified our lasers, pushed our previous chips through reliability studies, and done thorough system-level validation. That’s what makes this technology ready for high-volume manufacturing.”
Will AI’s demand for GPUs and interconnect eventually slow? Stojanovic doubts it. “When people talk about slowing down, what that really means is slowing down for the end user,” he says. “Inside, you’re actually speeding up.”
Models may stabilise in size, but inference now chains multiple computations per query. “That means your one computation has to be ten times faster. Interactivity still matters,” he says.
Large clusters will therefore remain essential. “In the next two to five years, you’ll see between one and ten thousand GPU dies working as one in a cluster,” he predicts. “That’s the architecture optical I/O makes possible.”
Interview: Richard Soref, the “founding father” of silicon photonics

Professor Richard Soref, pictured, shares his thoughts on promising areas in photonics.
Richard Soref has been thinking about light in silicon for longer than most photonic engineers have been working in the field.
His wide-ranging interests extend beyond photonics: two decades ago, he published a poetry book titled “Your Fate”.
Over the course of his career, silicon photonics has moved from a speculative research topic to a foundational technology of modern data centres. Today, however, Soref operates far from the commercial momentum he helped set in motion.
Now approaching his 90th year, Soref no longer manages a research group, nor is his work funded; until recently, he had access to grants. Yet his engagement with photonics, optical computing, and emerging computing paradigms remains active and wide-ranging.
“I don’t rank opportunities,” Soref says. “If something looks interesting, I explore it.”
An eclectic view of photonics
Soref’s broad research interests make for a long list: mid-infrared sensing, optical and optoelectronic computing, artificial intelligence (AI) acceleration, and terahertz systems.
That breadth reflects a career spent moving between materials, wavelength regimes, and applications rather than following a single roadmap.
This is how he came to see silicon as a promising material for light.
In 1985, the only photonic chip that could interface to fibre was the III-V semiconductor chip. Soref wondered if a silicon chip could be used, and whether it might even do a better job.
He had read in a textbook that silicon is relatively transparent at the 1.30- and 1.55-micron wavelengths used for telecom, which inspired him to look at silicon as a material for optical waveguides.
Silicon promised the potential of using the chip industry’s advanced manufacturing infrastructure for electro-optical integration, and aligned with Soref’s interest in materials.
“I’m a science guy, and I have curiosity and fascination with what the world of materials offers,” he told Gazettabyte a decade ago. “If I have an avenue like that, I like to explore where the physics takes us.”
Silicon photonics and AI data centres
One current interest is the intersection of silicon photonics and AI infrastructure. The rapid scaling of AI workloads has placed unprecedented strain on data centres in terms of power consumption and data movement.
For Soref, it is not whether photonics will matter, but how it will be incorporated.
“Photonic analogue neural computing should be deployed alongside electronic GPUs,” he says. “They should share the computing tasks.”
In a recent multi-author paper, Soref and colleagues propose large-scale opto-electronic neurons capable of implementing transformer-based large language models. The approach is not to replace GPUs. Instead, photonic systems would handle specific workloads, offering advantages in processing speed and energy efficiency.
“This combined approach would improve overall processing performance and power efficiency,” says Soref.
The implications are substantial. The paper predicts that such architectures could reduce data-centre power consumption by an order of magnitude. Crucially, the proposed systems rely on optoelectronics manufactured through hybrid bonding to 12-inch (300mm) silicon wafers, aligning with existing semiconductor manufacturing infrastructure rather than requiring exotic processes.
Beyond today’s AI models
Soref is also looking beyond current generative AI systems. He points to emerging ideas such as spatial AI and world models, where machines integrate multiple sensor inputs and interact with physical environments.
“The inputs are not just digital data scraped from the internet,” he says. “They come from sensors—vision and other modalities.”
Such systems, he argues, could place even greater demands on computing efficiency and data handling, making optoelectronic approaches increasingly relevant.
Optical and quantum computing
Quantum computing inevitably enters the discussion. Soref approaches the topic with caution. He does not question its importance, rather he is wary of assuming that quantum systems will dominate future computing.
His exploration is for alternative approaches to quantum, including semiconductor-based single-photon detectors aimed at room-temperature quantum operation.
In a paper published in APL Quantum, Soref and collaborators proposed that by gating computation on nanosecond timescales, the impact of detector dark counts could be mitigated. Dark counts refer to false signals that occur even when no photons are present, a significant source of noise in photonic quantum systems. By using nanosecond timescales, the idea is to “listen” to the photon detectors during tiny windows only when quantum information is expected to arrive.
“It’s a different way of thinking about the problem,” says Soref.
The work has not gained widespread traction, something Soref attributes partly to inertia and to the heavy investment already committed to superconducting approaches. By contrast, he sees optical and optoelectronic computing as closer to engineering reality, even if still technically challenging.
Soref’s work philosophy
A striking aspect of Soref’s work is how he does it. His long history of funding from the U.S. Air Force Office of Scientific Research ended in mid-2025. When the grant expired, he chose not to apply for another. “That was a turning point,” he says.
Today, he works without sponsorship and without institutional affiliation.
He spends his time reading academic papers, scanning arXiv, and following developments across multiple subfields. When something catches his interest, he reaches out directly to researchers.
Some collaborations flourish. Others never begin.
“It’s getting harder,” he admits. “Everyone has their own obligations. I’m not their primary focus.”
Nevertheless, several long-term collaborations continue. In Europe, he has worked for more than a decade with researchers exploring novel photonic structures and materials.
One collaboration with Italian academics focuses on terahertz photonics. Working with colleagues in Italy, Soref has explored topological photonic crystals in silicon designed to guide terahertz waves with low loss and sharp bends.
By integrating phase-change materials and graphene micro-heaters, these structures could act as electro-optical switches at terahertz frequencies. The work is early-stage and largely theoretical, but it exemplifies Soref’s willingness to engage with problems outside mainstream commercial priorities.
Choosing freedom over pressure
Given his long-standing experience, Soref could take on formal advisory roles, industry consulting positions, or editorial leadership posts.
He was recently asked to serve as editor-in-chief of a new journal, but he declined due to the time and responsibilities involved. What he values is flexibility: the freedom to think, explore, and collaborate without managerial or institutional constraints.
But Soref admits he is conflicted. “Part of me wants to keep going with innovation. Part of me wonders whether I should phase out.”
Soref’s other pursuits also take up his time: he likes to travel and is an avid photographer with his work shared online. And it was at the Bread Loaf Writers’ Conference in Ripton, Vermont, where he refined and then published his book of poems.
But for now, he continues—reading, thinking, collaborating, and occasionally publishing—driven not by funding cycles or commercial pressure, but by intellectual curiosity.
“I do this for the intellectual reward,” says Soref.
448G: doing what has been done before may no longer be enough


-
-
-
-
-
PAM-4 may not carry 448G electrical signalling all the way
-
The OIF is leaning toward a reach-dependent roadmap, not one universal solution
-
Optimisation may have to broaden from the SerDes to the rack.
The industry keeps progressing in its ability to push more bits down a wire. It has always been challenging to double the transmitted bit rate, but there was headroom to speed up the serialiser/deserialiser (SerDes) circuitry every few years.
But now the OIF, the industry organisation tasked with doubling copper electrical interface speeds to 448 gigabit-per-second (Gbps), must consider more complex techniques and do its work more quickly due to AI’s scaling needs.
Around a decade ago, the OIF moved from simple non-return-to-zero (NRZ) signalling to 4-level pulse-amplitude-modulation (PAM-4) to double speeds to 56Gbps, known as CEI-56G (Common Electrical I/O). PAM-4 enabled a doubling of the bit rate while using existing 26-28 gigabaud (GBd) components.
PAM-4 was also used for the next two OIF CEI standards, at 112Gbps and 224Gbps. But for the latest 448Gbps standard under development, it is already evident that PAM-4 may not be enough.
When the CEI-448G work began in August 2024, the OIF involved other industry-standard bodies for the first time.
“We collectively understood that 448 gigabit is going to be more challenging for various reasons, including the laws of physics,” says Nathan Tracy, OIF president and technologist, system architecture team at TE Connectivity.
The OIF started pulling together a CEI-448G framework document and circulated it to gain input from various sources. The aim was to galvanise the industry and create a unified position before publishing the document for wider circulation in November.
Delivering reach
The new standard must not only double the data rate from 224Gbps to 448Gbps but also try to retain the reach of previous CEI standards. “That is an exceptional challenge,” says Cathy Liu, the OIF vice president, distinguished engineer and director at Broadcom.
The OIF CEI work involves developing specifications, published as OIF Implementation Agreement documents, for different reaches. The shortest reaches are between dies, between a die and an optical engine within a package, and between separate dies near to each other. These applications fall under CEI-XSR and CEI-XSR+ (extremely short reach interfaces).
Connecting the host chip to a pluggable module requires the OIF’s very short-reach (CEI-VSR) interface. The next reach range is medium reach (MR), typically to connect chips on a printed circuit board.
The most challenging is the Long Reach (LR) spec, which can pass through a backplane to connect chassis in a rack or between adjacent racks. The challenge the OIF faces is that doubling the symbol rate while maintaining the same reach collides head-on with channel bandwidth limits and impairments.

Using PAM-4 at 448Gbps corresponds to a symbol rate of 224GBd, which in Nyquist terms implies a channel bandwidth of 112GHz. The OIF’s working group members have yet to achieve such a bandwidth, but progress is being made.
“Starting in October 2024, we were looking at channels that were rolling off at 75-80GHz; now we’re seeing channels approaching 100GHz,” says Tracy. Still short of 112GHz, but notable progress nonetheless. It also helps clarify the equalisation schemes that will be needed.
One solution is to adopt higher PAM schemes such as PAM-6 or PAM-8 to relax the analogue bandwidth. PAM-6 reduces the baud rate to around 174GBd and the analogue bandwidth from 112GHz to 90GHz, while PAM-8 reduces the baud rate further to 150GBd and the bandwidth to 75GHz.
The higher-order PAMs may relax the analogue bandwidth targets but place greater demand on the receiver’s digital signal processor (DSP) tasked with recovering the greater number of bits per symbol.
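The trade-off can be sketched with back-of-the-envelope arithmetic. The snippet below computes idealised figures assuming log2(levels) bits per symbol and a Nyquist-limited channel, with no FEC or line-coding overhead; the article’s quoted PAM-6 numbers differ slightly because practical coded PAM-6 carries a little less than log2(6) bits per symbol:

```python
from math import log2

def pam_requirements(bit_rate_gbps: float, levels: int) -> tuple[float, float]:
    """Return (symbol rate in GBd, ideal Nyquist bandwidth in GHz) for a
    given PAM order, ignoring FEC and line-coding overhead."""
    bits_per_symbol = log2(levels)      # PAM-4 -> 2, PAM-6 -> ~2.58, PAM-8 -> 3
    baud = bit_rate_gbps / bits_per_symbol
    return baud, baud / 2               # Nyquist: minimum bandwidth is half the baud rate

for levels in (4, 6, 8):
    baud, bw = pam_requirements(448, levels)
    print(f"PAM-{levels}: {baud:.0f}GBd, needs ~{bw:.0f}GHz of channel bandwidth")
```

For PAM-4 this yields 224GBd and 112GHz, matching the figures above; the higher orders shave tens of gigahertz off the analogue requirement at the cost of more bits to recover per symbol.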

OIF’s reasoning
The likelihood is that the work will start with the most challenging LR specification. “If we can do the long reach, then we can also support the chip-to-module, the VSR application and XSR, which is more targeted at the co-packaged optics application,” says Liu.
For LR, the aim is to span backplanes and chassis interconnect using copper cables. There will also be work on linear-drive and half-retimed designs, as these promise lower power consumption.
The OIF is cautiously optimistic about reach. “We’ve looked at channels at 400 gigabit that suggest that at least a meter of reach may be possible,” says Tracy. “It’s not a slam dunk; everything that we get will be hard fought for.”
Liu says that reach and low power are almost orthogonal requirements; the OIF’s challenge, she adds, is to come up with solutions that satisfy both.
LR, in particular, requires advances in connectors, equalisation schemes, and forward error correction (FEC) to counter channel impairments.
“Even with the improvement we have seen, there’s still a challenge existing in the channel loss and the crosstalk reflections, which is limited to a bandwidth of 90GHz,” says Liu.
That makes using 448G PAM-4 extremely challenging.
For the host chip-to-pluggable-module link, the OIF is optimistic that 120GHz will be possible over time, but the issue is that solutions are needed sooner. “It’s a very dynamic world we are in,” says Tracy.
Liu outlines the OIF’s thinking: a staged, reach-dependent strategy that allows the industry to ship something soon to meet AI’s demanding networking needs even as channel limitations persist.
She describes a world where different reaches adopt different solutions: PAM-4 may be possible for shorter reaches, while higher modulation schemes – PAM-6 or PAM-8 – may be required for longer reaches, given the 90GHz channel reality.
The OIF stresses this is not yet policy. “We maybe diverge based on the different application spaces, the debating is still happening,” says Liu. But hard calls are approaching.
Crucially, Liu points out that the issue of backward compatibility will change if the industry adopts higher-order PAM schemes soon. Indeed, she argues it is an advantage if the OIF starts with higher-order PAM now because if channels improve later, the transition to future speeds (CEI-896G?) will be smoother.
“Don’t assume the ‘lowest PAM wins’ permanently—sometimes the industry keeps a higher-order scheme because it becomes a stepping-stone to the next throughput jump,” says Liu.
If a higher-order PAM is adopted, the greater DSP workload will be eased by continual advances in CMOS process nodes.
“When you’re using the higher modulation schemes, you can reduce the baud rate, so the analogue is easier, but then more complicated detection is needed,” says Liu. Yet more gates are a byproduct of more advanced CMOS nodes. So, putting the burden on the DSP rather than the analogue bandwidth may be the better approach.
As for FEC schemes, Liu acknowledges the classic trade-off: higher modulation schemes require more advanced FEC, but the key is not to overdo it, since the price is greater power consumption.
The rack as a unit of optimisation
Tracy notes that the decisions regarding 448G can’t be evaluated just from a SerDes perspective since the optimisation target is shifting upwards.
“We need to think about the rack as the smallest building block that matters instead of thinking about host silicon,” says Tracy.
At this higher system scale, trade-offs may invert: “If we increase power dissipation a little bit at the silicon level, but at the rack level, it gives us a lower aggregate power, then that’s an important consideration,” he says.
That’s the subtext behind the entire 448G framework: the industry is choosing not just an electrical interface but an evolution path for in-rack architecture under three key constraints, says Tracy. The first is physical: bandwidth, reach, and noise. The second is economic: cost and power consumed. The third is operational: deploying not just at scale but at hyperscaler scale.
The pressure to move fast is already here: “Truly, the time is upon us,” says Tracy.
Books of 2025: Part 3

Gazettabyte is asking industry figures to pick their reads of 2025. In Part 3, Professor Martijn Heck, Brad Booth, Matthew Crowley, and Neil McRae share their choices.
Professor Martijn Heck, Photonic Integration, Eindhoven University of Technology
Why do I read? When I was a kid, I loved history and read a lot of history books and biographies. Napoleon and William of Orange were my favourites. At the age of 11, I fell in love with fantasy after my father introduced me to The Lord of the Rings. Later, my interest expanded to literary historical mysteries, such as the works of Monaldi & Sorti, Charles Palliser, and Matthew Pearl. But at the same time, one of my greatest joys of returning to Eindhoven after working 11 years abroad was that I could visit Eppo Strips again, our excellent local comic book store, to enjoy the art of a comic book.
So, what did I read last year that I would recommend? Elon Musk mentioned that Foundation by Isaac Asimov was one of his primary inspirations. It’s a classic that I never read, but my fascination (not adoration) with Musk pulled the trigger. It’s about the decline of the Galactic Empire and how an outcast group preserves knowledge and technology, using them to build a power base. There are quite a lot of parallels with the current state of our world, and the fact that this is one of Musk’s favourites should, maybe, ring alarm bells.
This year, Jan Terlouw died, one of the Netherlands’ most erudite and well-loved former politicians. He also wrote children’s books.
Well, young adult fiction, as we call it nowadays. After reading too many literary, complex character-development novels, I started missing a simple element: a story. How to Become a King (Koning van Katoren in Dutch) offers precisely that. A boy needs to solve seven impossible tasks to become king. It sounds like a Grimm fairy tale, but it has a contemporary message, covering issues from religious conflicts to environmental pollution. I think the world needs to read more children’s books to make big problems unequivocally clear and solve them.
Talking of religions, Small Gods: A Discworld Novel by Terry Pratchett should also be mentioned. It is a satire on religion and philosophy, which offers a much-needed lightness in a world where people sometimes take themselves far too seriously. Like in academia. And, of course, the author who imagined a disc-shaped world, carried by four elephants who stand on a giant turtle swimming through space, is a welcome source of inspiration for creativity, which is also needed in my profession. I have yet to see octarine on my optical spectrum analyser, though.
Lastly, back to comic books, let me throw you an extra free recommendation: check out Suske en Wiske (I refuse to translate this, for nostalgia reasons), the Blue Series. Great for you and your children.
Brad Booth, CEO of NLM Photonics
It can be challenging when you’re in a start-up to find time to read a book, but I have been working to allocate more time to reading. Previously, many of my reads were in the science fiction genre, such as the Dune series, for entertainment. Lately, though, I’ve been reading a broader range of books to gain different insights and perspectives.
One is Jonathan Haidt’s The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. As someone who grew up before the Internet and smartphones, and then worked in the industry during their development, it was an enlightening read. As a father of two boys who fit within the profile of Haidt’s analysis, it was interesting to see how some of my decisions about access to electronics impacted their lives. My wife worked in the gaming community, where gaming and social media companies discussed player retention and engagement. While that may seem harmless, the impact on the younger generation is obvious today.
As a father and a CEO, I can see how media, social media, the internet, electronic devices, etc., have had a profound impact on our society and, more importantly, on our youth.
As we head into the AI age, I highly recommend this book to any parent or technologist seeking insight.
Another book that I’ve almost finished reading is Elmer Kelton’s “The Time It Never Rained.” It is a fictional account based on a long Texas drought that occurred in the 1950s. The story focuses on a rancher in West Texas and covers topics such as government subsidies, government intervention, prejudice, illegal immigration, and weather impacts on farmers and ranchers. While the story is set in the 1950s, it is interesting how much of it remains relevant today. It also provides excellent insight into the challenges that face independent farmers and ranchers, sort of like being in a start-up and living from one round of financing to another.
Matt Crowley, CEO of Scintil Photonics
One author I read was Peter Zeihan, a geopolitical strategist who has written a series of books, starting with The Accidental Superpower: Ten Years On, that focuses on how geopolitics has evolved in the post-Cold War era and where it is likely to go next.
His analysis, which focuses on demographic, geographic, and historical trends, is worth reading for executives in the semiconductor and AI industries with globe-spanning supply chains and markets.
The modern era requires executives, investors and analysts to consider geopolitical impacts on their business much more than in the past.
The second book I enjoyed was Blindsight by Peter Watts, a philosophical sci-fi space adventure that explores consciousness and intelligence through the lens of a group of humans with augmented intelligence encountering a truly alien mind. Its questions about intelligence, cognition and consciousness are highly relevant to today’s debates about LLMs, AGI and how LLM-augmented human cognition will evolve in the coming years.
Neil McRae, Chief Network Strategist at Juniper Networks.
There’s Got to Be a Better Way: How to Deliver Results and Get Rid of the Stuff That Gets in the Way of Real Work, by Donald C. Kieffer & Nelson P. Repenning is one of those rare leadership books that made me think, “Of course!”
It’s practical and insightful. We’ve all been in organisations that work harder but not smarter, usually because we’re busy treating the side effects rather than addressing the underlying problems. We also tend to assume work is static, when in reality it’s massively dynamic. And who hasn’t sat through the meeting where all the metrics are green, yet the project is months behind?
The authors introduce Dynamic Work Design, anchored in five principles: Solve the right problem – Structure for discovery – Connect the human chain – Regulate for flow – Visualise the work.
They expand on this through ideas such as using effective problem-solving tools (like System Dynamics), ensuring day-to-day work reveals where the next issue will come from, connecting the people who are best positioned to solve those problems, avoiding system overload, and making sure work is visible as it moves. It sounds simple, but it’s remarkable how few organisations operate this way. Too many try to push more through the system than it can handle, and far too many rely on metrics that hide opportunities for improvement rather than reveal them.
Crucially, the authors argue—and I fully agree—that it starts with leaders getting out of their offices. Leaders need to get onto the shop floor, understand the real work, strip away the noise, and get close to the actual metrics. By rolling up their sleeves, they build trust, create alignment, and foster a culture where people talk openly and in real time about problems and how to solve them. When progress is made visible, people can feel that things are getting better.
I can’t help but feel the telco industry, in particular, could benefit enormously from this kind of approach.
Overall, *There’s Got to Be a Better Way* is an outstanding read for anyone trying to build healthier, more effective organisations. It’s thoughtful, grounded, and refreshingly honest. I’d recommend it to leaders, managers, and anyone who’s ever wondered why good intentions so often collide with organisational reality, and how to close the gap between leaders, employees, and the work itself.
Marvell bets big on optical I/O with $3.25B Celestial AI deal

Acquisition crowns a breakthrough year for optical interconnects as AI scale-up pushes copper to its limits.
Marvell Technology announced it will buy optical input/output specialist Celestial AI for $3.25 billion. The deal’s value could rise to $5.5 billion if specific sales targets are met.
“We are playing offence in this company,” said Matt Murphy, Chairman and CEO of Marvell, on a bullish earnings call that opened with the acquisition announcement, adding: “Our future is very bright.”
The announcement marks the end of a notable year for optical interconnects and co-packaged optics, driven by the need to keep scaling AI clusters.
In March, Nvidia unveiled its first co-packaged optics-based InfiniBand and Ethernet switch platforms. Broadcom then detailed the TH6-Davisson, its third-generation co-packaged optics design that adds optical input/output (I/O) to its 102.4-terabit Tomahawk 6 Ethernet switch chip. And in October, Ciena acquired the co-packaged optics start-up Nubis Communications for $270 million.
Founded in 2020, Celestial AI has always targeted its Photonic Fabric technology at eight key players: four major hyperscalers and four chip players undertaking xPU development. Now, semiconductor firm Marvell has assessed the optical I/O marketplace and chosen Celestial AI. And it is willing to pay billions for the start-up.
LightCounting Market Research points out in a research note on the Celestial AI deal that Marvell’s lead customer is Amazon. Marvell separately revealed a related Amazon stock warrant. Amazon has used such warrants with other vendors, such as Astera Labs and Credo Semiconductor, to discount the products it purchases.
By adding Celestial AI, Marvell strengthens its data centre and AI strategy through the integration of optical interconnect with its existing data centre chip portfolio. The acquisition will also reassure Amazon and other hyperscalers that may be working with Celestial AI – a start-up, albeit a well-funded one that has raised close to $600 million – that the company will now have the backing of a key semiconductor player.

AI scale-up
Celestial AI has been developing its Photonic Fabric for AI scale-up networks, where multiple xPUs and memory are connected to enable linear scaling.
Current scale-up networks comprise 72 xPUs in a rack, but the number will keep growing to 144, 512, and 1,024 xPUs. Scale-up networks will also expand beyond a single rack. Connecting racks will require optical interconnect as copper will not be able to cope with the distance – tens of meters – and traffic: tens of terabits coming out of an xPU package.
At the core of Celestial AI’s technology is its Photonic Fabric chiplet, designed to link xPUs, xPUs to memory, and xPUs to a scale-up switch (see diagram below).
Celestial AI’s first-generation Photonic Fabric chiplet supports 16 terabits per second (Tbps), while the second-generation design increases this to 64 Tbps, a factor of four improvement using the same number of optical channels.
The second-generation design will be available sometime next year. The chiplet is added to implement a photonic fabric link on a multi-die xPU package.
The photonic fabric link uses an electrical IC implemented in 5nm CMOS and a separate photonic integrated circuit (PIC) that uses electro-absorption modulators, which Celestial AI claims are thermally stable and compact. By placing the electrical IC above the modulator, the driver-to-modulator path is short, reducing capacitance and improving signal integrity. No digital signal processor is needed at the receiver, reducing latency, and the link consumes just a few picojoules per bit.
The protocol Celestial AI uses for the optical link is flit-based. Flits are short, fixed-size packets that improve traffic latency and enable efficient forward error correction. The flit concept was introduced with the PCI Express 6.0 bus. Celestial AI says the latency for GPU-to-GPU communications is 128ns, and argues that flits are an elegant way of managing latency.
The start-up also provides complete link management and a protocol-adaptive layer that maps protocols to the flits, such as AXI (Advanced EXtensible Interface), HBM/DDR, UALink (Ultra Accelerator Link), CXL (Compute Express Link) and ESUN (Ethernet for Scale-Up Networking).
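The core flit idea – fixed-size framing with a per-flit check, so serialisation latency is deterministic and error correction operates on a known block size – can be sketched in a few lines. The flit and payload sizes below, and the CRC standing in for FEC, are illustrative assumptions, not Celestial AI’s actual format:

```python
import zlib

FLIT_PAYLOAD = 60   # hypothetical payload bytes per flit
FLIT_SIZE = 64      # payload plus a 4-byte CRC standing in for FEC/CRC fields

def to_flits(data: bytes) -> list[bytes]:
    """Split a message into fixed-size flits, zero-padding the last one."""
    flits = []
    for i in range(0, len(data), FLIT_PAYLOAD):
        chunk = data[i:i + FLIT_PAYLOAD].ljust(FLIT_PAYLOAD, b"\x00")
        check = zlib.crc32(chunk).to_bytes(4, "big")  # stand-in for real FEC
        flits.append(chunk + check)
    return flits

# A 200-byte message becomes four equal 64-byte flits.
flits = to_flits(b"\xaa" * 200)
print(len(flits), {len(f) for f in flits})
```

Because every flit is the same size, the receiver always knows where the next check field falls, which is what makes the framing friendly to low-latency forward error correction.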

OMIB, memory modules, and the photonic fabric appliance
Celestial AI has also detailed its Optical Multi-Die Interconnect Bridge (OMIB), an optical equivalent of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB) for electrical 2.5D interconnects.
By using OMIB, high-speed interfaces can be moved to the optical bridge, freeing up key space around the module’s edge. So, for a multi-xPU module where all the xPUs need to be connected, Celestial uses a separate plane for photonics (OMIB) to handle the I/O. The optical I/O can support UCIe, Max PHY, or a proprietary die-to-die interface. By freeing up the node’s periphery, Celestial AI can use the ‘beachfront’ solely for high-bandwidth stacked memory (HBM) and DDR memory.
Celestial’s business model has been to design Photonic Fabric chiplets customised to meet a chip player’s requirements, followed by unit sales. For OMIB, a hybrid model is possible: selling IP if the chip player wants to integrate the design in its xPU, but also selling a product in the form of a PIC.
Earlier this year, the start-up detailed a memory module that it claimed was the first system-on-chip with optical I/O at its centre, showing how the modulator can sit beneath the electrical driver at the die’s centre.
“Nobody’s ever built anything like this,” said Preet Virk, Co-Founder and COO at Celestial AI, earlier this year. “Because optics doesn’t like being in the middle of the die, it likes being at the side.”
Celestial AI is using the OMIB to free up the beachfront to interface with HBM and DDR memory. The high-performance memory module combines 48-72GB of HBM3e and 2TB of DDR5 memory with 7.2Tbps of bandwidth in each direction to the module. The HBM acts as a write-through cache for the slower DDR5 memory. From the GPU’s perspective, it has terabytes of high-speed memory, even though only a tiny fraction of that is HBM.
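A write-through cache is simple to sketch: every write updates both tiers, so the backing store never goes stale, while reads are served from the fast tier when possible. The class below is a generic illustration of the technique, not Celestial AI’s implementation; the tier names and the naive eviction policy are assumptions:

```python
class WriteThroughCache:
    """Minimal sketch of a fast tier ("HBM") acting as a write-through
    cache in front of a larger, slower tier ("DDR5")."""
    def __init__(self, capacity: int):
        self.capacity = capacity   # max entries held in the fast tier
        self.cache = {}            # fast tier
        self.backing = {}          # slow tier, always complete and consistent

    def _evict_if_full(self):
        if len(self.cache) > self.capacity:       # naive FIFO-ish eviction
            self.cache.pop(next(iter(self.cache)))

    def write(self, addr, value):
        # Write-through: the write lands in both tiers at once.
        self.backing[addr] = value
        self.cache[addr] = value
        self._evict_if_full()

    def read(self, addr):
        if addr in self.cache:     # fast path: cache hit
            return self.cache[addr]
        value = self.backing[addr] # slow path: fetch from the backing tier
        self.cache[addr] = value   # fill the cache on a miss
        self._evict_if_full()
        return value
```

The appeal of write-through in this setting is that the DDR5 tier is always authoritative, so no dirty data needs flushing on eviction; the cost is that every write pays the slow-tier latency.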
Connecting 16 of these modules in a 2-rack-unit (2RU) chassis results in a photonic fabric appliance with 33TB of unified memory coupled with a 115Tbps Flit-based electrical network switch. In the example cited, 16 xPUs connect to the ports such that each xPU, and each port, can access the entire 33TB of memory. Penguin Computing is building the 2RU appliance, and Celestial AI will provide the devices.
The start-up is also offering a module for a network interface card for server-based applications that are not AI but do need a large, unified memory.

Performance benefits
Celestial AI claims that, for deep learning recommendation models, a model typically needs to be spread across 56 GPUs, not because of compute but because of the memory capacity required. Using its photonic fabric memory, the entire model can be loaded into the fabric and accessed by just 16 GPUs, outperforming the 56-GPU configuration. The start-up cites a 12.5x performance improvement while cutting the GPU count – and therefore capex and power – by nearly 70 per cent.
For large language models, the picture is different. A GPT-4-class system with 16 GPUs might have around 4TB of total memory, of which almost 1.8TB is used by the model weights. Having 33TB of memory fabric, the same 16 GPUs have an order of magnitude more memory. Celestial isn’t claiming fewer GPUs here; instead, it can offer longer context windows, higher batch sizes, and higher revenue per deployed GPU.
For both cases, the bottleneck shifts away from memory capacity and communication overhead back to compute, where xPU vendors are more comfortable competing.
Marvell’s gain
By buying Celestial AI, Marvell gains an optical interconnect technology for next-generation AI scale-up architectures. The technology complements Marvell’s existing family of data centre chips and strengthens the hand of its custom ASIC unit, which develops core designs such as xPUs for hyperscalers.
Marvell gained silicon photonics expertise through its acquisition of Inphi. Celestial AI will now add silicon photonics know-how at the chip and die-to-die levels.
The acquisition can also be viewed more broadly. While much is happening at the xPU level and between xPUs and memory, the system rack is becoming the new ‘compute’ unit, with scale-up architectures linking multiple such ‘nodes’.
This is what buying Celestial AI gives Marvell: it already provides many of the technologies inside the rack, and with the Photonic Fabric technology it can start addressing system-level issues and play a fundamental role at this higher level of system integration and co-design optimisation.
What next
Celestial AI expects to deliver its first product for scale-up connectivity to a hyperscaler customer in the first half of next year, with volumes scheduled for early 2027. In particular, its Photonic Fabric memory modules and Photonic Fabric network interface cards will sample in the first half of 2026, while the Photonic Fabric chiplet integrated with xPUs and switches is targeted for the second half of 2026. The Penguin memory chassis is also expected in 2026.
Celestial AI started generating revenues earlier this year by undertaking three chiplet designs.
Meanwhile, Marvell believes the acquisition will close in the first quarter of 2026. It expects the acquisition to deliver meaningful revenues from 2028.
LightCounting expects scale-up Ethernet and NVLink switches with co-packaged optics in 2026, but it does not expect them to be deployed until 2028. “More advanced co-packaged optics for scale-up switches is behind in maturity, but Marvell does have a window if it can execute,” says LightCounting. Marvell is unlikely to see a return on investment until 2030, but only $1 billion of the deal’s value is in cash, adds LightCounting.
Books of 2025: Part 2

Gazettabyte is asking industry figures to pick their reads of 2025. In Part 2, Julie Eng, Helen Xenos, Hojjat Salemi, and Stephen Hardy share their choices.
Julie Eng, CTO of Coherent
One memorable book I read this year was The Thinking Machine: Jensen Huang, Nvidia, and the World’s Most Coveted Microchip, by Stephen Witt. This book is worth reading, as it covers the formation and evolution of Nvidia, a company that is obviously very influential in photonics for optical networking in AI data centres.
I also read The Lost and the Found: A True Story of Homelessness, Found Family, and Second Chances, by Kevin Fagan, a San Francisco Chronicle reporter who wrote about homelessness in San Francisco. I also enjoyed The First Ladies by Marie Benedict and Victoria Christopher Murray, which was a fascinating portrait of Eleanor Roosevelt working with Mary McLeod Bethune for civil rights in the US.
There is also The Unforgiving Minute: A Soldier’s Education by Craig Mullaney. Craig is my colleague, and this book summarises his path through West Point to active duty.
I really appreciated this book, both because I learned more about my colleague’s life and experiences, and because my brother was in the military during wartime, which helped me gain better insight into his experiences.
I’m reading There’s Got to Be a Better Way: How to Deliver Results and Get Rid of the Stuff That Gets in the Way of Real Work by Nelson Repenning and Donald Kieffer. It’s about dynamic work design and how to lead a modern company to better results. I’ve started this one, so maybe I can report on it next year.
On re-reading this list, I’ve read mostly non-fiction or historical fiction this year. Maybe I’ll read more fiction in 2026.
Helen Xenos, Senior Director, Portfolio Marketing, Ciena
I chose a career in engineering some 30 years ago because I wanted to understand how things work. (Unsurprisingly, my favourite books are mystery books.)
More recently, my curiosity has expanded from how things work to how people work.
I have become fascinated by the science of human behaviour and what drives positive, lasting change.
This interest has blossomed with the wealth of podcasts now exploring these topics, which have introduced me to insightful books in this field.
My favourite book of 2025 is Changeable: How Collaborative Problem Solving Changes Lives at Home, at School, and at Work, by J. Stuart Ablon, PhD. It offers a mindset shift and presents an alternative to traditional reward-and-punishment approaches for addressing challenging behaviour in children—and adults. One of its interesting ideas is the notion that people who struggle with problem behaviour “lack skill, not will”, and “if they could do better, they would”. Ablon outlines how Collaborative Problem Solving can not only reduce conflicts but also strengthen the underlying executive function skills needed for long-term success, positively transforming interactions at home, in schools, and in the workplace.
At a time when developmental and behavioural challenges are rising dramatically, I found this book’s insights relevant and profoundly hopeful.
Hojjat Salemi, Chief Business Development Officer, Ranovus
There is much noise these days about US–China relations and who will dominate the world with AI. We get bombarded with headlines, hot takes, and opinions. This year, two books stood out to me. One is Breakneck: China’s Quest to Engineer the Future, by Dan Wang, and the other is Reshuffle: Who wins when AI restacks the knowledge economy, from one of my favourite thinkers, Sangeet Paul Choudary.
In Breakneck, Dan Wang explains China’s rise simply and powerfully. He describes China as an ‘engineering state’ – a place where the focus is on building, executing, and iterating quickly (i.e., engineers are in charge). That mindset has enabled China to roll out high-speed rail, factories, and infrastructure at a pace the rest of the world watches with envy. He contrasts this with the US, which he calls a ‘lawyerly state’, where even a single energy or rail project can get stuck for years in environmental assessments, zoning debates, and legal challenges. These systems exist for a reason, but they slow the country’s ability to lay the foundations for the next era of technology and industry. This was clear to me when I landed at St. Louis Airport to attend the recent Supercomputing 25 (SC25) conference and exhibition. The comparison gives a clean lens for understanding why some nations move quickly while others struggle to keep up. The book also highlights the downsides of China’s approach.
In Reshuffle, Sangeet Choudary examines a different transformation — the restructuring of global value chains through platforms, data, and ecosystems. He makes a strong case that the centre of gravity is shifting from companies that “own everything” to companies that orchestrate networks and interactions. Scale now comes from ecosystems, not vertical integration.
When you read both Breakneck and Reshuffle, you see two parallel forces shaping the world: China racing ahead in physical and industrial build-out, and global companies reshaping competition through digital platforms and ecosystem advantage. One book explains the speed of building; the other explains the logic of reorganising. Together, they offer a framework for understanding where technology, strategy, and global competition are heading.
Stephen Hardy, former editorial director of Lightwave
While it would be an exaggeration to state that half of the optical communications community plays the guitar, it would be difficult to swing a patch chord of decent length within a session at OFC and not strike someone who does. As I would be one such person were I still going to OFC, it is perhaps no surprise that I eagerly read I Want My MTV: The Uncensored Story of the Music Video Revolution, an oral history compiled by Rob Tannenbaum and Craig Marks that recounts the first dozen years of that onetime music video channel.
Scores of executives, recording artists, video directors, former Monkees, producers, etc., offer their often-conflicting viewpoints on MTV’s creation and evolution. Of course, there’s plenty of sex, drugs, rock and roll, rap (eventually), inside stories, and big hair. Also discussed are a video so bad it destroyed a successful rock star’s career, why MTV stopped playing videos, and the phenomenon that was Tawny Kitaen. The book is laugh-out-loud funny and entertaining.
Those of a more serious bent will want to investigate David Grann’s latest, The Wager. The subtitle describes the book as “A tale of Shipwreck, Mutiny, and Murder”; it is a tale of Pestilence, Cannibalism, Superstition, Dumb Luck, and Grub Street. The Wager was a man-of-war that foundered attempting to round Cape Horn in 1740. The aforementioned shipwreck, mutiny, and murder ensue as part of a harrowing tale of how a handful of crew members and officers survived such catastrophes and made it back to England – just not at the same time nor with the same account of what happened.
Grann offers insight into the everyday challenges of seafaring in the 1740s. It wasn’t for the faint of heart.
Lastly, as a big fan of the Expanse book and TV series, I was excited to discover that the two gentlemen who created that world under the combined nom de plume of James S.A. Corey have launched another sci-fi book series. The Captive’s War series begins with The Mercy of Gods, in which an elite team of Earth scientists can forestall the destruction of the planet by their new alien overlords if they can demonstrate their “usefulness” as humanity’s representatives. The team soon begins to wonder whether the definition of “usefulness” extends beyond completing the task they’ve been assigned. As with the Expanse, the novel contains interesting, flawed characters and ambitious world-building. That world includes far-fetched elements (including sentient parasites that are infecting – and killing – members of the scientific team), but somehow it all holds together.
I can’t wait for the next instalment.
Books of 2025: Part 1
Gazettabyte is asking industry figures to pick their reads and listens of 2025. In Part 1, Neil McRae, Rebecca K. Schaevitz, Chris Cole, and Scott Wilkinson share their choices.
Neil McRae, Chief Network Strategist at Juniper Networks.
The Last Man on the Moon: Astronaut Eugene Cernan and America’s Race in Space tells the extraordinary story of Gene Cernan—Commander of Apollo XVII and the last human to walk on the Moon.
I’ve read this book more times than I can remember, and each time it delivers such a surge of inspirational energy that you could charge your iPhone with it.
Having been fortunate enough to meet Gene many times, I can say without hesitation that he was one of the most charismatic, brilliant, and genuinely inspiring people I’ve ever known. He was a role model in every regard—not only as a leader, but as someone who constantly pushed boundaries and challenged assumptions. His guiding philosophy echoes throughout the book: “How do you know how good you are unless you try?”
The memoir traces Cernan’s journey from his early life to his unexpected place among the Gemini and Apollo astronauts, joining the program surprisingly late yet quickly becoming indispensable. What stands out is his unwavering commitment to experimentation, to stretching human potential, and to paying attention to the details that matter—not just as an astronaut, but as a husband, father, leader, and, importantly, follower. (Not nearly enough is written about followership, in my view.)
Though the book isn’t meant to be a technical manual of the space program, it manages to weave in just the right amount of engineering, mission training, and operational insight to satisfy both casual readers and space enthusiasts.
Reader beware: this book may make you cry with joy. That’s why I keep returning to it. Ultimately, The Last Man on the Moon is far more than an astronaut’s memoir—it’s a testament to human ambition, teamwork, and the extraordinary risks behind the Apollo program. I’ve read many biographies and accounts of the era, but Cernan’s stands among the most compelling personal narratives of the space age.
Whether you’re fascinated by space exploration or drawn to stories of perseverance, adventure, and history in motion, this memoir offers a powerful and unforgettable journey—and maybe even a new set of coordinates if we ever build that time machine.
Rebecca K. Schaevitz, Co-Founder & Chief Product Officer, Mixx Technologies
I read quite a bit and could share a stack of books. But instead, I want to highlight something different: the podcast Acquired, hosted by Ben Gilbert and David Rosenthal. They dive deep into the stories behind the world’s most influential companies—Nvidia, TSMC, Trader Joe’s, Costco, the Indian Premier League, Nintendo, and their latest, a three-part saga on Google.
What makes the show special is the way Ben and David bring these stories to life. They make you fall even deeper in love with companies you already admire (looking at you, Trader Joe’s and Costco) while revealing the unexpected decisions and creative pivots that shaped their success.
Even though the episodes run long (4+ hours!), I listen in 20-minute segments on my commute—tiny windows of inspiration between all the roles I juggle as a co-founder and parent.
Reflecting on how others built enduring businesses is meaningful as we grow Mixx from the ground up. And who knows—maybe one day we’ll get the Acquired treatment ourselves. A co-founder can dream.
Chris Cole, Optical Communications Advisor
Earlier this year, I read Supreme Commander: The War Years of Dwight D. Eisenhower by Stephen E. Ambrose, who wrote the definitive biography of Eisenhower (Soldier and President), which I had read previously.
Commanding the Allied forces was the ultimate management challenge of all time in terms of scope, difficulty, and unpredictability. Eisenhower had to manage disparate teams across large geographic areas, with multiple bosses and subordinates holding a full spectrum of views, in the face of a formidable opponent.
We tend to see how history happened as inevitable, but individuals alter the course in dramatic ways.
A lesser commander would have prolonged the conflict for many more years. War brings good and bad management into stark relief because the consequences are so severe. The stakes we deal with are much less; however, the lessons of leadership are universal.
Scott Wilkinson, Lead Analyst, Networking Components, CignalAI
By the end of 2025, my reading has been consumed by Walter Isaacson’s masterful – and very long – biography of Leonardo da Vinci. But since I haven’t finished that one yet, it’s not a valid choice. Maybe I’ll have it completed in time for the 2026 list.
Other books that were interesting enough to talk about this year included Killers of the Flower Moon by David Grann, which is required reading for anyone who saw the movie; Wasteland: The Secret World of Waste and the Urgent Search for a Cleaner Future by Oliver Franklin-Wallis, which will make you think more carefully about everything you throw away; and Seveneves by Neal Stephenson, which Andrew Schmitt convinced me to read and which I thoroughly enjoyed.
But the one book that I bothered people with the most at parties was Then Everything Changed: Stunning Alternate Histories of American Politics: JFK, RFK, Carter, Ford, Reagan by Jeff Greenfield.
Greenfield’s fascinating book offers three alternative US histories based on events starting in the 1960s, and it differs from other, lesser alternative histories in the expertise of its author. Jeff Greenfield has been a political reporter and author for ages and knows the personalities and temperaments of all of the affected parties. What results are detailed, well-considered, and very thorough alternatives.
The scenarios covered include what if John F. Kennedy had been assassinated between his electoral win and his inauguration – something that came very close to happening. If his wife hadn’t come to the door to wish him goodbye on that morning in December 1960, Lyndon B. Johnson would have been the president during the early days of the civil rights movement and, critically, the Cuban Missile Crisis. JFK and Johnson were very different men, with distinct personalities and backgrounds. Sometimes history chooses wisely, and sometimes not so much.
Other scenarios include what if Robert F Kennedy had turned in a different direction and avoided his assassin in 1968, and what if Gerald Ford hadn’t flubbed his debate appearance against Jimmy Carter in 1976. Each is investigated in historical narrative form to demonstrate how what we assume was inevitable in our history is often just the luck of the draw.
Every page of the book offers historical insights into names that most Americans know, in ways they may never have considered.
Appearances from John McCain, Gary Hart, and others make the stories seem very real. And the threads the author follows from event to event are logical, with some going quite well and others not at all. History is a series of small events with enormous consequences.
Apologies to those whom I bothered with alternative history stories this year. I promise that, for the next few months, I will limit myself to telling interesting stories about Leonardo da Vinci.
Scintil’s ‘laser focus’ on lasers for AI data centres

Part 2: Start-up funding
Scintil Photonics is betting that to keep scaling AI compute systems, integrated laser arrays will be needed alongside AI accelerator chips.
Scintil Photonics, a spin-off from French research lab CEA-Leti, has developed a heterogeneous integration photonics platform that combines indium phosphide lasers with silicon photonics.
The Grenoble-based start-up’s focus is to deliver light sources to feed co-packaged optics (CPO) in data centres. But its ambitions go beyond that.
“I’m convinced that we have absolutely the best heterogeneous integration technology platform in the world,” says Matt Crowley, who joined Scintil as CEO a year ago. “It was developed for a long time, it’s very difficult for others to replicate, it’s been scaled at [foundry] Tower Semiconductor, so it’s proven its manufacturability.”
Scintil’s task is to replace the piece-part manufacturing of numerous discrete optical components with a monolithically integrated design. “At Scintil, we want to take that to the next level by taking silicon photonics and bringing III-V and other more exotic materials into that integration flow,” he says.
Crowley’s background is in MEMS and semiconductors. He founded Vesper Technologies, a company specialising in MEMS microphones and accelerometers, which Qualcomm later acquired. Previously, he helped scale the start-up Sand 9, which was acquired by Analog Devices. That experience of turning custom wafer processes into high-volume production maps directly onto Scintil’s next challenge. “At my last company, we scaled to 60 million units with design wins at Samsung and Amazon,” says Crowley.
For most deep-tech start-ups, particularly wafer-based ones, transitioning from a few hundred prototypes to a manufacturing process that can produce millions of units is challenging.
“You have to convince customers you’ll be a reliable supplier, even assuming your specifications are better,” he says.
Structure and scale
Scintil recently raised €50 million in its Series B funding round. The backers include Yotta Capital Partners and NGP Capital, with participation from Nvidia and earlier investors.
“It was great to get that validation,” says Crowley. “Now we have to figure out how to ramp the product to production.” The funding will help the company expand its 50-strong team and its sites globally.
The company’s headquarters and core engineering are in Grenoble, France, complemented by designers in Toronto and the UK. A California office is planned as a customer-support lab, while Crowley is based in Boston.
“The primary location will be Grenoble, where engineering and operations sit,” says Crowley. California will likely be the second-largest office, where Scintil will work with customers to get systems up and running.
First bet
When Crowley joined Scintil a year ago, the start-up had two product directions. One was a generic photonic integrated circuit (PIC) platform, and the other was the external light source. The first significant decision he took was to focus solely on laser sources, where the company has seen strong customer interest. Crowley refers to this as being ‘laser-focussed on lasers’.
“In my experience, one of a start-up’s greatest advantages is focus,” he says. “A small group with high talent and great teamwork can out-execute larger groups.”
The goal is to develop Scintil’s LEAF Light product, a dense wavelength division multiplexing (DWDM) external laser source designed for next-generation co-packaged optics.
Launching that single product is where the start-up’s effort is now concentrated. “To prove our platform and to prove the value of our IP, we have to launch a single product,” says Crowley.
The application requires laser designs with high power, high reliability, low cost, and small size. “There are two specs that are really important: wall-plug efficiency, but even more critically, channel spacing and consistency of manufacturing,” says Crowley.
Scintil believes that traditional distributed feedback (DFB) laser manufacturing won’t scale to the tens of millions of dense WDM array chips that will be needed starting in 2028.
LEAF Light: precision and power
Scintil’s external light source is a monolithically integrated array of indium phosphide distributed-feedback (DFB) lasers, in configurations of 8 or 16 wavelengths, on a silicon photonics chip.
Each light source chip also features integrated waveguides and on-chip multiplexers, which combine the wavelengths of multiple lasers onto a single fibre. The design also integrates photodetectors and thermal-tuning elements to stabilise wavelength drift. “We take detectors, waveguides, all from the silicon-photonics toolkit at Tower Semiconductor, and put them on one chip with our DFB array,” says Crowley.
Scintil can support the CW-WDM multi-source agreement frequency grid where customers require it. “We are looking at what the customer wants,” says Yannick Paillard, Scintil’s chief commercial officer. “If they want the CW-WDM frequency grid, we can deliver that.”
Scintil can deliver 8-wavelength implementations at 200GHz spacings or 16-wavelength implementations at 100GHz spacings. And the company’s product will support more than one fibre output—eight wavelengths times eight fibres, for example.
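The arithmetic behind these configurations can be sketched quickly. The following is an illustrative back-of-the-envelope calculation (my own, not Scintil code): it computes the optical frequency span each uniform WDM grid occupies and the total carrier count of the eight-wavelengths-by-eight-fibres example.

```python
# Back-of-the-envelope sketch of Scintil's stated WDM configurations.
# Illustrative only; function names are my own, not a Scintil API.

def grid_span_ghz(num_wavelengths: int, spacing_ghz: float) -> float:
    """Frequency span from the first to the last carrier on a uniform grid."""
    return (num_wavelengths - 1) * spacing_ghz

def total_carriers(wavelengths_per_fibre: int, fibres: int) -> int:
    """Total carriers delivered when each output fibre carries a full grid."""
    return wavelengths_per_fibre * fibres

print(grid_span_ghz(8, 200))   # 8 wavelengths at 200 GHz spacing -> 1400 GHz
print(grid_span_ghz(16, 100))  # 16 wavelengths at 100 GHz spacing -> 1500 GHz
print(total_carriers(8, 8))    # 8 wavelengths x 8 fibres -> 64 carriers
```

Note that the two grids occupy a similar optical span (1.4–1.5 THz), which is one reason the same laser platform can serve both configurations.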
“Because Scintil uses advanced semiconductor lithography, our lasers have better than ±10GHz precision,” Crowley says. “Competition struggles to get better than ±50GHz. That’s architecturally important because if your channels are too close, they start to interfere with each other downstream.”
Power output and efficiency are also on the roadmap. “We’ve achieved up to 20 milliwatts per carrier,” he says. “Market demand for higher power is increasing as customers want to split signals and generate more carriers.”
As for energy efficiency, Scintil cites Nvidia’s published results. “They’ve shown that the dense WDM co-packaged optics approach can get to sub-4 picojoules per bit today, with a path below one picojoule per bit,” says Crowley. “At that point, optics become more power-efficient than copper.”
Framed against copper, the objective is to achieve comparable power per bit, and ultimately better, at relevant distances.
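Energy-per-bit figures convert directly to link power: a picojoule per bit multiplied by a terabit per second yields exactly one watt, since 10⁻¹² J/bit × 10¹² bit/s = 1 W. A minimal sketch of this conversion follows; the 1.6 Tbps link rate is an assumed illustrative figure, not one from the article.

```python
# Hedged conversion from energy-per-bit to link power (my illustration).
# pJ/bit x Tbit/s = 1e-12 J/bit x 1e12 bit/s = 1 W, so units cancel neatly.

def link_power_watts(pj_per_bit: float, tbps: float) -> float:
    """Power dissipated by a link at the given energy efficiency and data rate."""
    return pj_per_bit * tbps

# Assumed 1.6 Tbps link (illustrative, not a figure from the article):
print(link_power_watts(4.0, 1.6))  # at 4 pJ/bit today -> 6.4 W
print(link_power_watts(1.0, 1.6))  # on the path to 1 pJ/bit -> 1.6 W
```

At cluster scale, the gap between those two figures, multiplied across thousands of links, is what makes the sub-picojoule roadmap significant.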
Manufacturing partnerships
Production leverages Tower Semiconductor’s PH18 silicon-photonics platform, with Scintil performing post-processing to bond the III-V material to the back side of the silicon photonics wafer to form the lasers.
“Tower manufactures the silicon-photonics wafer [up to the III-V processing],” Crowley explains. “Then it goes to Scintil, where we have a wafer probe station with a custom probe head and optical measurement capability that we developed. We can do optical measurement of every die on a wafer.” The goal is to transfer that flow to high-volume assembly partners, or OSATs, as volumes increase.
“We’ll take our custom probe head and install it at an OSAT,” he says. “That’s how we scale. I can collect statistical data, feed it back to the foundry and design teams, and get into a continuous-improvement cycle.”
This known-good-die approach also offers flexibility. “Large customers may want to do their own assembly or co-package with other chips,” Crowley adds. “We’re open to selling them known-good dies or full modules.”
Scintil has already sampled its product to companies and expects to make several thousand chips in 2026.
Speaking about the challenge of a start-up getting into the biggest accounts, Crowley says it is key to make life as easy as possible for partners.
“Do as much work for them as you can — build the full module, qualify it, give them the reliability data, the audit reports. That’s how you get designed in,” says Crowley.
Reliability
Coming from the MEMS world, Crowley brings a distinct perspective on reliability targets. “My last company had a failure rate of 0.2 parts per million,” he recalls. “In this industry, when someone says 0.7 per cent failure rate, there’s incredible room for improvement.”
He calls reliability “the hidden spec” in photonics. “We treat it as another design parameter,” he says. “If a metal trace is too thin or a layer isn’t laminating correctly, we expect designers and process engineers to fix it. Once wafer-level technology is working, you get a virtuous cycle: costs go down, performance and reliability go up.”
Scintil’s push to industrialise heterogeneous integration is one of many elements that will determine how the optics industry keeps pace with AI’s compute appetite.
Click here for Part 1: Start-up funding