OFC 2025 industry reflections - Part 2

Exhibition floor. Source: OFC

Gazettabyte is asking industry figures for their thoughts after attending the 50th-anniversary OFC show in San Francisco. In Part 2, the contributions are from BT’s Professor Andrew Lord, Chris Cole, Coherent’s Vipul Bhatt, and Juniper Networks’ Dirk van den Borne.

Professor Andrew Lord, Head of Optical Network Research at BT Group

OFC was a highly successful and lively show this year, reflecting a sense of optimism in the optical comms industry. The conference was dominated by the need for optics in data centres to handle the large AI-driven demands. And it was exciting to see the conference at an all-time attendance peak.

From a carrier perspective, I continued to appreciate the maturing of 800-gigabit plugs for core networks and 100G ZR plugs (including bidirectional operation for single-fibre working) for the metro-access side.

Hollow-core fibre continues to progress, with multiple companies developing products and evidence of longer fibre lengths being manufactured. Though use cases are currently dominated by data centres and low-latency applications such as financial trading, they are expected to spread into diverse areas such as subsea cables and 6G xHaul.

There was also a much-increased interest in fibre sensing as an additional revenue generator for telecom operators, although compelling use cases will require more cost-effective technology.

Lastly, quantum technology again had a significantly larger presence at OFC. An ever-increasing number of Quantum Key Distribution (QKD) protocols were on display, with the current focus on Continuous-Variable QKD (CV-QKD), which might be more readily manufacturable and easier to integrate.

Chris Cole, Optical Communications Advisor

For the premier optics conference, the amount of time and floor space devoted to electrical interfaces was astounding.

Even more amazing is that while copper’s death at the merciless hands of optics continues to be reported, the percentage of time devoted to electrical work at OFC is going up. Multiple speakers commented on this throughout the week.

One reason is that as rates increase, the electrical links connecting optical links to ASICs are becoming disproportionately challenging. The traditional Ethernet model, in which the electrical link adds only a small penalty to the overall link budget, is becoming less valid.

Another reason is the introduction of power-saving interfaces, such as linear and half-retimed, which tightly couple the optical and electrical budgets.

Optics engineers now have to worry about the S-parameters and crosstalk of electrical connectors, vias, package balls, copper traces and other elements.

The biggest buzz in datacom was around co-packaged optics, helped by Nvidia’s switch announcements at GTC in March.

Established companies and start-ups were outbidding each other with claims of the highest bandwidth in the smallest space; typically the more eye-popping the claims, the less actual hard engineering data to back them up. This is for a market that is still approximately zero and faces its toughest hurdles of yield and manufacturability ahead.

To their credit, some companies are playing the long game and doing the slow, hard work to advance the field. For example, I continue to cite Broadcom for publishing extensive characterisation of their co-packaged optics and establishing the bar for what is minimally acceptable for others if they want to claim to be real.

The irony is that, in the meantime, pluggable modules are booming, and it was exciting to see so many suppliers thriving in this space, as demonstrated by the products and traffic in their booths.

The best news for pluggable module suppliers is that if co-packaged optics takes off, it will create more bandwidth demand in the data centre, driving up the need for pluggables.

I may have missed it, but no coherent ZR or other long-range co-packaged optics were announced.

A continued amazement at each OFC is the undying interest and effort in various incarnations of general optical computing.

Despite having no merit as easily shown on first principles, the number of companies and researchers in the field is growing. This is also despite the market holding steady at zero.

The superficiality of the field is best illustrated by a slogan gaining popularity and heard at OFC: computing at the speed of light. This is despite the speed of propagation being similar in copper and optical waveguides. The reported optical computing devices are hundreds of thousands or millions of times larger than equivalent CMOS circuits, resulting in the distance, not the speed, determining the compute time.
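
A back-of-envelope sketch of this argument, using illustrative (assumed) device sizes rather than measured ones:

```python
# Back-of-envelope comparison of propagation delay across an optical
# computing element versus an equivalent CMOS structure.
# Device sizes are illustrative assumptions, not measured figures.

C = 3.0e8                   # speed of light in vacuum, m/s
v_signal = C / 2.0          # propagation speed, similar in copper traces and waveguides

optical_device_len = 1e-3   # assume a ~1 mm photonic element (e.g. a modulator/mesh cell)
cmos_device_len    = 1e-6   # assume a ~1 um CMOS logic cell

t_optical = optical_device_len / v_signal
t_cmos    = cmos_device_len / v_signal

print(f"optical element traversal: {t_optical * 1e12:.1f} ps")
print(f"CMOS cell traversal:       {t_cmos * 1e15:.1f} fs")
print(f"ratio (set by distance):   {t_optical / t_cmos:,.0f}x")
```

Even though both signals travel at roughly half the speed of light, the far larger optical structure takes about a thousand times longer to traverse in this sketch; scale the assumed sizes up or down and the conclusion that distance, not speed, dominates still holds.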

Practical optical computing precision is limited to about four bits, unverified claims of higher precision notwithstanding, making it useless in data centre applications.

Vipul Bhatt, Vice President, Corporate Strategic Marketing at Coherent.

Three things stood out at OFC:

  • The emergence of transceivers based on 200-gigabit VCSELs
  • A rising entrepreneurial interest in optical circuit switching
  • And accelerating momentum towards 1.6-terabit (8×200-gigabit) transceivers, alongside the push for 400-gigabit lanes, due to AI-driven bandwidth expansion.

The conversations about co-packaged optics showed increasing maturity, shifting from ‘pluggable versus co-packaged optics’ to their co-existence. The consensus is now more nuanced: co-packaged optics may find its place, especially if it is socketed, while front-panel pluggables will continue to thrive.

Strikingly, talk of optical interconnects beyond 400-gigabit lanes was almost nonexistent. Even as we develop 400 gigabit-per-lane products, we should be planning the next step: either another speed leap (this industry has never disappointed) or, more likely, a shift to ‘fast-and-wide’, blurring the boundary between scale-out and scale-up by using a high radix.

Considering the fast cadence of bandwidth upgrades, the absence of such a pivotal discussion was unexpected.

Dirk van den Borne, director of system engineering at Juniper Networks

The technology singularity is defined as the merger of man and machine. However, after a week at OFC, I will venture a different definition: the “AI singularity” is the point when we talk about AI every waking hour and about nothing else. The industry seemed close to this point at OFC 2025.

My primary interest at the show was the industry’s progress around 1.6-terabit optics, from scale-up inside the rack to data centre interconnect and long-haul using ZR/ZR+ optics. The industry here is changing and innovating at an incredible pace, driven by the vast opportunity that AI unlocks for companies across the optics ecosystem.

A highlight was the first demos of 1.6-terabit optics using 3nm CMOS DSPs, which have started to tape out and bring the power consumption down from a scary 30W to a high but workable 25W. Well beyond the power saving alone, this difference matters a lot in the design of high-density switches and routers.

It’s equally encouraging to see the first module demos with 200 gigabit-per-lane VCSELs and with half-retimed, linear receive optics (LRO) pluggables. Both approaches can potentially reduce the optics power consumption to 20W and below.

The 1.6-terabit ecosystem is rapidly taking shape and will be ready for prime time once 1.6-terabit switch ASICs arrive in the marketplace. There’s still a lot of buzz around linear pluggable optics (LPO) and co-packaged optics, but neither seems ready yet. LPO mostly appears to be a case of too little, too late. It wasn’t mature enough to be useful at 800 gigabits, and the technology will be highly challenging at 1.6 terabits.

The dream of co-packaged optics will likely have to wait for two more years, though it does seem inevitable. But with 1.6 terabit pluggable optics maturing quickly, I don’t see it having much impact in this generation.

The ZR/ZR+ coherent optics are also progressing rapidly. Here, 800-gigabit is ready, with proven interoperability between modules and DSPs using the OpenROADM probabilistic constellation shaping standard, a critical piece for interoperability in more demanding applications.

The road to 1600ZR coherent optics for data centre interconnect (DCI) is now better understood, and power consumption projections seem reasonable for 2nm DSP designs.

Unfortunately, 1600ZR+ is more of a question mark to me, as ongoing standardisation is taking it in a different direction and, hence, towards a different DSP design from 1600ZR.

The most exciting discussions are around “scale-up” and how optics can replace copper for intra-rack connectivity.

This is an area of great debate and speculation, with wildly differing technologies being proposed. However, the goal of around 10 petabits per second (Pbps) of cross-sectional bandwidth in a single rack is a terrific industry challenge, one that can spur the development of technologies that might open up new markets for optics well beyond the initial AI cluster application.


II-VI’s VCSEL approach for co-packaged optics

Vipul Bhatt

Co-packaged optics was a central theme at this year’s OFC show, held in San Diego. But the solutions detailed were primarily using single-mode lasers and fibre.

The firm II-VI is beating a co-packaged optics path using vertical-cavity surface-emitting lasers (VCSELs) and multi-mode fibre while also pursuing single-mode, silicon photonics-based co-packaged optics.

For multi-mode, VCSEL-based co-packaging, II-VI is working with IBM, a collaboration that started as part of a U.S. Advanced Research Projects Agency-Energy (ARPA-E) project to promote energy-saving technologies.

II-VI claims there are significant system benefits using VCSEL-based co-packaged optics. The benefits include lower power, cost and latency when compared with pluggable optics.

The two key design decisions that achieved power savings are the elimination of the retimer chip – also known as a direct-drive or linear interface – and the use of VCSELs.

The approach – what II-VI calls shortwave co-packaged optics – integrates the VCSELs, chip and optics in the same package.

The design is being promoted as first augmenting pluggables and then, as co-packaged optics become established, becoming the predominant solution for system interconnect.

II-VI and IBM estimate that for every 10,000 QSFP-DD pluggable optical modules in a supercomputer that are replaced with VCSEL-based co-packaged optics, the yearly electricity bill is reduced by up to half a million dollars.

VCSEL technology

VCSELs are used for active optical cables and short-reach pluggables for up to 70m or 100m links.

VCSEL-based modules consume fewer watts and are cheaper than single-mode pluggables.

Several factors account for the lower cost, says Vipul Bhatt, vice president of marketing, datacom vertical at II-VI.

The VCSEL emits light vertically from its surface, simplifying laser-fibre alignment, and multi-mode fibre has a larger core than single-mode fibre.

“Having that perpendicular emission from the laser chip makes manufacturing easier,” says Bhatt. “And the device’s small size allows you to get many more per wafer than you can with edge-emitter lasers, benefitting cost.”

The smaller VCSEL also requires less current to operate: the threshold current of a distributed feedback (DFB) laser used with single-mode fibre is 25-30mA, whereas it is 5-6mA for a VCSEL. “That saves power,” says Bhatt.

Fibre plant

Hyperscalers such as Google favour single-mode fibre for their data centres. Single-mode fibre supports longer-reach transmission, and Google sees its use as future-proofing its data centres for higher-speed transmission.

Chinese firms Alibaba and Tencent use multi-mode fibre but also view single-mode fibre as desirable longer term.

Bhatt says he has been hearing arguments favouring single-mode fibre for years, yet VCSELs continue to advance in speed, from 25 to 50 to 100 gigabits per lane.

“VCSELs continue to lead in cost and power,” says Bhatt. “And the 100-gigabit-per-lane optical link has a long life ahead of it, not just for networking but machine learning and high-performance computing.”

II-VI says single-mode fibre and silicon photonics modules are suited for the historical IEEE and ITU markets of enterprise and transport where customers have longer-reach applications.

VCSELs are best suited for shorter reaches such as replacing copper interconnects in the data centre.

Copper interconnect reaches are shrinking as interface speeds increase, and a cost-effective optical solution is needed to support short and intermediate spans of up to 70 metres.

“As we look to displace copper, we’re looking at 20 meters, 10 meters, or potentially down to three-meter links using active optical cables instead of copper,” says Bhatt. “This is where the power consumption and cost of VCSELs can be an acceptable premium to copper interconnects today, whereas a jump to silicon photonics may be cost-prohibitive.”

Silicon photonics-based optical modules have higher internal optical losses but they deliver reaches of 2km and 10km.

“If all you’re doing is less than 100 meters, think of the incredible efficiency with which these few milliamps of current pumped into a VCSEL and the resulting light launched directly and efficiently into the fibre,” says Bhatt. “That’s an impressive cost and power saving.”

Applications

The bulk of VCSEL sales for the data centre are active optical cables and short-reach optical transceivers.

“Remember, not every data centre is a hyperscale data centre,” says Bhatt. “So it isn’t true that multi-mode is only for the server to top-of-rack switch links. Hyperscale data centres also have small clusters for artificial intelligence and machine learning.”

The 100m reach of VCSEL-based optics means it can span all three switching tiers in many data centres.

The currently envisioned 400-gigabit VCSEL modules are 400GBASE-SR8 and the 8-by-50Gbps 400G-SR4.2. Both use 50-gigabit VCSELs: 25 gigabaud devices with 4-level pulse amplitude modulation (PAM-4).

The 400GBASE-SR8 module requires 16 fibres, while the 400G-SR4.2, with its two-wavelength bidirectional design, has eight fibres.

The advent of 100-gigabit VCSELs (50 gigabaud with PAM-4) enables 800G-SR8, 400G-SR4 and 100G-SR1 interfaces. II-VI first demonstrated a 100-gigabit VCSEL at ECOC 2019, while 100-gigabit VCSEL-based modules are becoming commercially available this year.
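
As a rough sketch of how these interface names map to lane rates, lane counts and fibre counts (the values follow the figures quoted above; the fibre counts for the 100-gigabit-per-lane variants are assumptions based on the same pattern):

```python
# Illustrative mapping of the VCSEL-based interfaces mentioned above to
# lane rate, lane count and fibre count. Figures follow the article where
# stated; the 100G-per-lane fibre counts are assumed to follow the same pattern.

interfaces = {
    # name:              (Gb/s per lane, lanes, fibres)
    "400GBASE-SR8":      (50, 8, 16),   # one fibre per lane per direction
    "400G-SR4.2 (BiDi)": (50, 8, 8),    # two wavelengths per fibre, both directions
    "800G-SR8":          (100, 8, 16),
    "400G-SR4":          (100, 4, 8),
    "100G-SR1":          (100, 1, 2),
}

for name, (lane_rate, lanes, fibres) in interfaces.items():
    total = lane_rate * lanes
    print(f"{name:<19} {total:>4} Gb/s over {fibres:>2} fibres "
          f"({lanes} x {lane_rate} Gb/s lanes)")
```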

Terabit VCSEL MSA

The Terabit Bidirectional (BiDi) Multi-Source Agreement (MSA) created earlier this year is tasked with developing optical interfaces using 100-gigabit VCSELs.

The industry consortium will define an 800-gigabit interface over parallel multi-mode fibre, using the same four pairs of multi-mode fibre that support the 400-gigabit 400G-BD4.2 interface. It will also define a 1.6-terabit optical interface.

The MSA work will extend the parallel fibre infrastructure from legacy 40 gigabits to 1.6 terabits as data centres embrace 25.6-terabit and soon 51.2-terabit switches.

Founding Terabit BiDi MSA members include II-VI, Alibaba, Arista Networks, Broadcom, Cisco, CommScope, Dell Technologies, HGGenuine, Lumentum, MACOM and Marvell Technology.

200-gigabit lasers and parallelism

The first 200-gigabit electro-absorption modulated lasers (EMLs) were demonstrated at OFC ’22, while the next-generation 200-gigabit directly modulated lasers (DMLs) are still in the lab.

When will 200-gigabit VCSELs arrive?

Bhatt says that while 200-gigabit VCSELs were considered to be research-stage products, recent interest in the industry has spurred the VCSEL makers to accelerate the development timeline.

Bhatt repeats that VCSELs are best suited for optimised short-reach links.

“You have the luxury of making tradeoffs that longer-reach designs don’t have,” he says. “For example, you can go parallel: instead of N-by-200-gig lanes, it may be possible to use twice as many 100-gig lanes.”

VCSEL parallelism for short-reach interconnects is just what II-VI and IBM are doing with shortwave co-packaged optics.

Shortwave co-packaged optics

Computer architectures are undergoing significant change with the emergence of accelerator ICs for CPU offloading.

II-VI cites such developments as Nvidia’s BlueField data processing units (DPUs) and the OpenCAPI Consortium, which is developing interface technology so that any microprocessor can talk to accelerator and I/O devices.

“We’re looking at how to provide a high-speed, low-latency fabric between compute resources for a cohesive fabric,” says Bhatt. The computational resources include processors and accelerators such as graphics processing units (GPUs) and field-programmable gate arrays (FPGAs).

II-VI claims that by using multi-mode optics, one can produce the lowest power consumption optical link feasible, tailored for very-short electrical link budgets.

The issue with pluggable modules is connecting them to the chip’s high-speed signals across the host printed circuit board (PCB).

“We’re paying a premium to have that electrical signal reach through,” says Bhatt. “And where most of the power consumption and cost are is those expensive chips that compensate these high-speed signals over those trace lengths on the PCB.”

Using shortwave co-packaged optics, the ASIC can be surrounded by VCSEL-based interfaces, reducing the electrical link budget from some 30cm for pluggables to links only 2-3cm long.

“We can eliminate those very expensive 5nm or 7nm ICs, saving money and power,” says Bhatt.

The advantages of shortwave co-packaged optics are better performance (a lower error rate) and lower latency (between 70ns and 100ns), which is significant when connecting to pools of accelerators or memory.

“We can reduce the power from 15W for a QSFP-DD module down to 5W for a link of twice the capacity,” says Bhatt. “We are talking an 80 per cent reduction in power dissipation. Another important point is that when power capacity is finite, every watt saved in interconnects is a watt available to add more servers. And servers bring revenue.”

This is the basis of the estimate that replacing 10,000 pluggable optical interfaces saves $0.4-0.5 million in yearly electricity costs.
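
As a rough sanity check of that estimate, the arithmetic below uses an assumed electricity price and facility overhead (PUE); neither figure comes from II-VI or IBM:

```python
# Rough sanity check of the quoted electricity saving. The electricity price
# and PUE (cooling/power-delivery overhead) are assumptions, not vendor figures.

modules          = 10_000
pluggable_power  = 15.0           # W per QSFP-DD module (figure quoted above)
cpo_power        = 5.0            # W per co-packaged link of twice the capacity
cpo_equivalent   = cpo_power / 2  # W per pluggable-equivalent of bandwidth

saving_per_link_w = pluggable_power - cpo_equivalent   # 12.5 W
hours_per_year    = 24 * 365
price_per_kwh     = 0.20          # assumed $/kWh
pue               = 1.8           # assumed facility overhead factor

energy_saved_kwh = modules * saving_per_link_w * hours_per_year * pue / 1000
print(f"Energy saved: {energy_saved_kwh / 1e6:.2f} GWh per year")
print(f"Cost saved:   ${energy_saved_kwh * price_per_kwh / 1e6:.2f}M per year")
```

With these assumptions the saving lands around $0.4M a year; a higher tariff or overhead pushes it towards the half-million-dollar figure quoted.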

The power savings arise from the VCSEL’s low drive current, the use of the OIF’s ultra short-reach (USR) electrical interface and the IBM processor driving the VCSEL directly, what is called a linear analogue electrical interface.

In the first co-packaged optics implementation, IBM and II-VI use non-return-to-zero (NRZ) signalling.

The shortwave co-packaged optics has a reach of 20m, which enables the potential elimination of top-of-rack switches, further saving costs. (See diagram.)

Source: II-VI

II-VI sees co-packaged optics as initially augmenting pluggables. With next-generation architectures using 1.6-terabit OSFP-XD pluggables, 20 to 40 per cent of those ports are for sub-20m links.

“We could have 20 to 40 per cent of the switch box populated with shortwave co-packaged optics to provide those links,” says Bhatt.

The remaining ports could be direct-attached copper, longer-reach silicon-photonics modules, or VCSEL modules, providing the flexibility associated with pluggables.

“We think shortwave co-packaged optics augments pluggables by helping to reduce power and cost of next-generation architectures.”

This is the secret sauce of every hyperscaler. They don’t talk about what they’re doing regarding machine learning and their high-performance systems, but that’s where they strive to differentiate their architectures, he says.

Status

Work has now started on a second-generation shortwave design that will use PAM-4 signalling. “That is targeted as a proof-of-concept in the 2024 timeframe,” says Bhatt.

The second generation will enable a direct comparison in terms of power, speed and bandwidth with single-mode co-packaged optics designs.

Meanwhile, II-VI is marketing its first-phase NRZ-based design.

“Since it is an analogue front end, it’s truly rate agnostic,” says Bhatt. “So we’re pitching it as a low-latency, low-power bandwidth density solution for traditional 100-gigabit Ethernet.”

The design can also be used for next-generation PCI Express and CXL disaggregated designs.

II-VI says there is potential to recycle hyperscaler data centre equipment by adding state-of-the-art network fabric to enable pools of legacy processors. “This technology delivers that,” says Bhatt.

But II-VI says the main focus is accelerator fabrics: proprietary interfaces like NVLink, Fujitsu’s Tofu interconnect or HPE Cray’s Slingshot.

“At some point, memory pools or storage pools will also work their way into the hyperscalers’ data centres,” says Bhatt.


II-VI expands its 400G and 800G transceiver portfolio

Sanjai Parthasarathi, chief marketing officer at II-VI

II-VI has showcased its latest high-speed optics. The need for such client-side modules is being driven by the emergence of next-generation Ethernet switches in the data centre.

The demonstrations, part of the OFC virtual conference and exhibition held last month, featured two 800-gigabit and two 400-gigabit optical transceivers.

“We have seen the mushrooming of a lot of datacom transceiver companies, primarily from China, and some have grown pretty big,” says Sanjai Parthasarathi, chief marketing officer at II-VI.

But a key enabler for next-generation modules is the laser. “Very few companies have these leading laser platforms – whether indium phosphide or gallium arsenide, we have all of that,” says Parthasarathi.

During OFC, II-VI also announced the sampling of a 100-gigabit directly modulated laser (DML) and detailed an optical channel monitoring platform.

“We have combined the optical channel monitoring – the channel presence monitoring, the channel performance monitoring – and the OTDR into a single integrated subsystem, essentially a disaggregated monitoring system,” says Parthasarathi.

An optical time-domain reflectometer (OTDR) is used to characterise fibre.

High-speed client-side transceivers

II-VI demonstrated two 800-gigabit datacom products.

One is an OSFP form factor implementing 800-gigabit DR8 (800G-DR8) and the other is a QSFP-DD800 module with dual 400-gigabit FR4s (2x400G-FR4). The DR8 uses eight fibres in each direction, each carrying a 100-gigabit signal. The QSFP-DD800 supports two FR4s, each carrying four 100-gigabit wavelengths over single-mode fibre.

Vipul Bhatt

“These are standard IEEE-compliant reaches: 500m for the DR8 and 2km for the dual FR4 talking to individual FR4s,” says Vipul Bhatt, senior strategic marketing director, datacom at II-VI.

The 800G-DR8 module can be used as an 800-gigabit link or, when broken out, as two 400-gigabit DR4s or eight individual 100-gigabit DR optics.
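
A minimal sketch of that breakout arithmetic (the groupings follow the description above; the naming is illustrative):

```python
# Breakout options for an 800G-DR8 module as described above: the eight
# 100-gigabit lanes can be grouped into different logical links.

lanes = 8
lane_rate_gbps = 100

breakouts = {
    "1 x 800G-DR8": 1,   # single 800-gigabit link
    "2 x 400G-DR4": 2,   # two 400-gigabit links
    "8 x 100G-DR":  8,   # eight individual 100-gigabit links
}

for name, links in breakouts.items():
    lanes_per_link = lanes // links
    print(f"{name}: {links} link(s) of {lanes_per_link * lane_rate_gbps} Gb/s "
          f"({lanes_per_link} lane(s) each)")
```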

II-VI chose to implement these two 800-gigabit interfaces based on the large-scale data centre players’ requirements. The latest switches use 25.6-terabit Ethernet chips that have 100-gigabit electrical interfaces while next-generation 51.2-terabit ICs are not far off. “Our optics is just keeping in phase with that rollout,” says Bhatt.

During OFC, II-VI also showcased two 400-gigabit QSFP112 modules: a 400-gigabit FR4 (400G-FR4) and a multi-mode 400-gigabit SR4 (400G-SR4).

The SR4 consumes less power and is more cost-effective but has a shorter reach. “Not all large volume deployments of data centres are necessarily in huge campuses,” says Bhatt.

II-VI demonstrated its 800-gigabit dual FR4 module talking to two of its QSFP112 400-gigabit FR4s.

Bhatt says the IEEE 802.3db standard has two 400G-SR4 variants, one with a 50m reach and the second, a 100m reach. “We chose to demonstrate 100m because it is inclusive of the 50m capability,” says Bhatt.

Source: II-VI

II-VI stresses its breadth in supporting multi-mode, short-reach single-mode and medium-reach single-mode technologies.

The company says it was the electrical interface rather than the optics that was more challenging in developing its latest 400- and 800-gigabit modules.

The company has 100-gigabit multi-mode VCSELs, single-mode lasers, and optical assembly and packaging. “It was the maturity of the electrical interface [that was the challenge], for which we depend on other sources,” says Bhatt.

 

100-gigabit PAM-4 DML

II-VI revealed it is sampling a 100-gigabit PAM-4 directly modulated laser (DML).

Traditionally, client-side modules for the data centre come to market using a higher-performance indium phosphide electro-absorption modulated laser (EML). The EML may even undergo a design iteration before a same-speed indium phosphide DML emerges. The DML has simpler drive and control circuitry, is cheaper and has a lower power consumption.

“But as we go to higher speeds, I suspect we are going to see both [laser types] coexist, depending on the customer’s choice of worst-case dispersion and power tolerance,” says Bhatt. It is too early to say how the DML will rank with the various worst-case test specifications.

Parthasarathi adds that II-VI is developing 100-gigabit and 200-gigabit-per-lane laser designs. Indeed, the company had an OFC post-deadline paper detailing work on a 200-gigabit PAM-4 DML.

Optical monitoring system

Optical channel monitoring is commonly embedded in systems while coherent transceivers also provide performance metrics on the status of the optical network. So why has II-VI developed a standalone optical monitoring platform?

What optical channel monitors and coherent modules don’t reveal is when a connector is going bad or fibre is getting bent, says Parthasarathi: “The health and the integrity of the fibre plant, there are so many things that affect a transmission.”

Operators may have monitoring infrastructure in place but not necessarily the monitoring of the signal integrity or the physical infrastructure. “If you have an existing network, this is a very easy way to add a monitoring capability,” says Parthasarathi.

Sara Gabba, manager, analysis, intelligence & strategic marcom at II-VI.

“As we can control all the parts – the optical channel monitoring and the OTDR – we can configure it [the platform] to meet the application,” adds Sara Gabba, manager, analysis, intelligence & strategic marcom at II-VI. “Coherent indeed provides a lot of information, but this kind of unit is also suitable for access network applications.”

The optical monitoring system features an optical switch so it can cycle and monitor up to 48 ports.

With operators adopting disaggregated designs, each element in the optical network is required to have more intelligence and more autonomy.

“If you can provide this kind of intelligent monitoring and provide information about a specific link, you create the possibility to be more flexible,” says Gabba.

Using the monitoring platform, intelligence can be more widely distributed in the optical network, complementing systems operators may have already deployed, she adds.

