Crossing oceans: Loi Nguyen's engineering odyssey
Loi Nguyen arrived in the US with nothing but determination and went on to co-found Inphi, a semiconductor company acquired by Marvell for $10 billion. Now, the renowned high-speed semiconductor entrepreneur is ready for his next chapter.
“What is the timeline?”
It’s a question the CEO of Marvell, Matt Murphy, would pose to Loi Nguyen each year during their one-on-one meetings. “I’ve always thought of myself as a young guy; retirement seemed far away,” says Nguyen. “Then, in October, it seemed like the time is now.”
Nguyen will not, however, disappear. He will work on specific projects and take part in events, but this will no longer be a full-time role.
Early life and journey to the US
One of nine children, Nguyen grew up in Ho Chi Minh City, Vietnam. Mathematically inclined from an early age, he faced limited options when considering higher education.
“In the 1970s, you could only apply to one university, and you either passed or failed,” he says. “That decided your career.”
Study choices were also limited to either engineering or physics. Nguyen chose physics, believing entrance would be easier.
After just one year at university, he joined the thousands of ‘boat people’ who left Vietnam by sea following the end of the Vietnam War in 1975.
But that one year at university was pivotal. “It proved I could get into a very tough competitive environment,” he says. “I could compete with the best.”
Nguyen arrived in the US with limited English and no money. He found work in his first year before signing up at a community college. Here, he excelled and graduated with first-class honours.
Finding a mentor & purpose
Nguyen’s next achievement was to gain a full scholarship to study at Cornell University. At Cornell, Nguyen planned to earn his degree, find a job, and support his family in Vietnam. Then a Cornell academic changed everything.
The late Professor Lester Eastman was a pioneer researcher in high-speed semiconductor devices and circuits using materials such as gallium arsenide and indium phosphide. “Field-effect transistors (FETs), bipolar – any kind of high-speed devices,” says Nguyen. “I was just so inspired by how he talked about his research.”
In his senior year, Nguyen talked to his classmates about their plans. Most students sought industry jobs, but the best students were advancing to graduate school.
“What is graduate school?” Nguyen asked and was told about gaining a doctorate. “How one does that?” he asked and was told about the US Graduate Record Examination (GRE). “I hadn’t a clue,” he says.
The deadline to apply to top US universities, which meant sitting the GRE, was only a week away. Nguyen passed. He could now pursue a doctorate at a leading US university, but he chose to stay at Cornell under Professor Eastman: “I wanted to do high-speed semiconductors.”
His PhD addressed gallium arsenide FETs, which became the basis for today’s satellite communications.
Early career breakthroughs
After graduating, he worked for a satellite company focussing on low-noise amplifiers. NASA used some of the work for a remote sensing satellite to study cosmic microwave background radiation. “We were making what was considered the most sensitive low-noise receivers ever,” says Nguyen.
However, the work concluded in the early 1990s, a period of defence and research budget cuts. “I got bored and wondered what to do next,” he says.
Nguyen’s expertise was in specialised compound semiconductor devices, whereas CMOS was the dominant process technology for chip designs. He decided to undertake an MBA, which led to his co-founding the high-speed communications chip company Inphi.
While studying for his MBA, he met Tim Semones, another Inphi co-founder. The third co-founder was Gopal Raghavan whom Nguyen describes as a classic genius: “The guy could do anything.”
Building Inphi: innovation through persistence
The late 1990s internet boom created the perfect environment for a semiconductor start-up. Nguyen, Semones, and Raghavan raised $12 million to found Inphi, shorthand for indium phosphide.
The company’s first decade was focused on analogue and mixed-signal design. The market used 10-gigabit optics, so Inphi focused on 40 gigabits. But then the whole optical market collapsed, and the company had to repurpose.
Inphi went from designing indium phosphide chips at 40 gigabits-per-second (Gbps) to CMOS process circuits for memory working at 400 megabits-per-second (Mbps).
In 2007, AT&T started to deploy 40Gbps, indicating that the optical market was returning. Nguyen asked the chairman for a small team, which subsequently developed components such as trans-impedance amplifiers and drivers. Inphi was too late for 40Gbps, so it focussed on chips for 100Gbps coherent optics.
Inphi also identified the emerging cloud data centre opportunity for optics. Initially, Nguyen considered whether 100Gbps coherent optics could be adopted within the data centre. However, coherent was too fast and costly compared to traditional non-return-to-zero (NRZ) signalling-based optics.
It led to Inphi developing a 4-level pulse-amplitude modulation (PAM4) chip. Nguyen says that, at the time, he didn’t know of PAM4 but understood that Inphi needed to develop technology that supported higher-order modulation schemes.
“We had no customer, so we had to spend our own money to develop the first PAM4 chip,” says Nguyen.
Nguyen also led another Inphi group in developing an in-house silicon photonics design capability.
These two core technologies – silicon photonics and PAM4 – would prove pivotal to Inphi’s fortunes, gaining the company a key design win with hyperscaler Microsoft for the COLORZ optical module.
Microsoft met Inphi staff at a show and described wanting a 100Gbps optical module that could operate over 80km to link data centre sites yet would consume under 3.5W. No design had done that before.
Inphi had PAM4 and silicon photonics by then and worked with Microsoft for a year to make it happen. “That’s how innovation happens; give engineers a good problem, and they figure out how to solve it,” says Nguyen.

Marvell transformation
The COVID-19 pandemic created unlikely opportunities. Marvell’s CEO, Matt Murphy, and then-Inphi CEO, Ford Tamer, served on the Semiconductor Industry Association (SIA) board together. It led to them discussing a potential acquisition during hikes in the summer of 2020 when offices were closed. By 2021, Marvell acquired Inphi for $10 billion.
“Matt asked me to stay on to help with the transition,” says Nguyen. “I knew that for the transition to be successful, I could play a key role as an Inphi co-founder.”
Nguyen was promoted to manage most of the Inphi optical portfolio and Marvell’s copper physical layer portfolio.
“Matt runs a much bigger company, and he has very well thought-out measurement processes that he runs throughout the year,” he says. “It is one of those things that I needed to learn: how to do things differently.”
The change as part of Marvell was welcome. “It invigorated me and asked me to take stock of who I am and what skills I bring to the table,” says Nguyen.
AI and connectivity
After helping ensure a successful merger integration, Nguyen returned to his engineering roots, focusing on optical connectivity for AI. By studying how companies like Nvidia, Google, and Amazon architect their networks, he gained insights into future infrastructure needs.
“You can figure out roughly how many layers of switching they will need for this and the ratio between optical interconnect and the GPU, TPU or xPU,” he says. “Those are things that are super useful.”
Nguyen says there are two “buckets” to consider: scale-up and scale-out networks. Scale-out is needed when connecting tens of thousands, hundreds of thousands and, in the future, a million xPUs via network interface cards. Scale-out networks use protocols such as InfiniBand or Ethernet that minimise and handle packet loss.
Scale-up refers to the interconnect between xPUs in a very high bandwidth, low latency network. This more local network allows the xPUs to share each other’s memory. Here, copper is used: it is cheap and reliable. “Everyone loves copper,” says Nguyen. But copper’s limitation is reach, which keeps shrinking as signalling speeds increase.
“At 200 gigabits, if you go outside the rack, optics is needed,” he says. “So next-gen scale-up represents a massive opportunity for optics.”
Nguyen notes that scale-up and scale-out grow in tandem. Scale-up domains once comprised eight xPUs for scale-out clusters of up to 25,000 xPUs; now it is 72 xPUs of scale-up for a 100,000 xPU cluster. This trend will continue.
Beyond technology
Nguyen’s passion for wildlife photography is due to his wife. Some 30 years ago, he and his wife supported the reintroduction of wolves to Yellowstone National Park in the US.
After Inphi’s initial public offering (IPO) in 2010, Nguyen could donate money to defend wildlife, and he and his wife were invited to a VIP retreat there.
“I just fell in love with the place and started taking up photography,” he says. Though initially frustrated by elusive wolves, his characteristic determination took over. “The thing about me is that if I’m into something, I want to be the best at it. I don’t dabble in things,” he says, laughing. “I’m very obsessive about what I want to spend my time on.”
He has travelled widely to pursue his passion, taking what have proved to be award-winning photos.
Full circle: becoming a role model
Perhaps most meaningful in Nguyen’s next chapter is his commitment to Vietnam, where he’s been embraced as a high-tech role model and a national hero.
He plans to encourage young people to pursue engineering careers and develop Vietnam’s high-speed semiconductor industry, completing a circle that began with his departure decades ago.
He also wants to spend time with his wife and family, including going on an African safari.
He won’t miss back-to-back Zoom calls and evenings away from home. In the last two years, he estimates that he has been away from home between 60 and 70 per cent of the time.
It seems retirement isn’t an ending but a new beginning.
The OIF's coherent optics work gets a ZR+ rating
The OIF has started work on a 1600ZR+ standard to enable the sending of 1.6 terabits of data across hundreds of kilometres of optical fibre. The initiative follows the OIF’s announcement last September that it had kicked off 1600ZR, an extended-reach standard for sending 1.6 terabits over an 80-120km point-to-point link.
1600ZR follows the OIF’s previous work standardising the 400-gigabit 400ZR and the 800-gigabit 800ZR coherent pluggable optics.
The decision to address a ‘ZR+’ standard is a first for the OIF. Until now, only the OpenZR+ Multi-Source Agreement (MSA) and the OpenROADM MSA have developed interoperable ZR+ optics.
The OIF’s members’ decision to back the 1600ZR+ coherent modem work was straightforward, says Karl Gass, optical vice chair of the OIF’s physical link layer (PLL) working group. Several companies wanted it, and there was sufficient backing. “One hyperscaler in particular said: ‘We really need that solution’,” says Gass.

OIF, OpenZR+, and OpenROADM
Developing a 1600ZR+ standard will interest telecom operators who, like with 400ZR and the advent of 800ZR, can take advantage of large volumes of coherent pluggables driven by hyperscaler demand. However, Gass says no telecom operator is participating in the OIF 1600ZR+ work.
“It appears that they are happy with whatever the result [of the ZR+ work] will be,” says Gass. Telecom operators are active in the OpenROADM MSA.
Now that the OIF has joined OpenZR+ and the OpenROADM MSA in developing ZR+ designs, opinions differ on whether the industry needs all three.
“There is significant overlap between the membership of the OpenZR+ MSA and the OIF, and the two groups have always maintained positive collaboration,” says Tom Williams, director of technical marketing at Acacia, a leading member of the OpenZR+. “We view the adoption of 1600ZR+ in the OIF as a reinforcement of the value that the OpenZR+ has brought to the market.”
Robert Maher, Infinera’s CTO, believes the industry does not need three standards. However, having three organisations does provide different perspectives and considerations.
Meanwhile, Maxim Kuschnerov, director of R&D at Huawei, says the OIF’s decision to tackle ZR+ changes things. “OpenZR+ kickstarted the additional use cases in the industry, and OpenROADM took it away, but going forward, it doesn’t seem that we need additional MSAs if the OIF is covering ZR+ for Ethernet clients in ROADM networks,” says Kuschnerov. “Only the OTN [framing] modes need to be covered, and the ITU-T can do that.”
Kuschnerov also would like more end-user involvement in the OIF group. “It would help shape the evolving use cases and not be guided by a single cloud operator,” he says.
ZR history
The OIF is a 25-year-old industry organisation with over 150 members, including hyperscalers, telecom operators, systems and test equipment vendors, and component companies.
In October 2016, the OIF started the 400ZR project, the first pluggable 400-gigabit Ethernet coherent optics specification. The principal backers of the 400ZR work were Google and Microsoft. The standard was designed to link equipment in data centres up to 120km apart.
The OIF 400ZR specification also included an un-amplified version with a reach of several tens of kilometres. The first 400ZR specification document, which the OIF calls an Implementation Agreement, was completed in March 2020 (see chart above).
The OIF started the follow-up on the 800ZR specification in November 2020, a development promoted by Google. Gass says the OIF is nearing completion of the 800ZR Implementation Agreement document, expected in the second half of 2024.
If the 1600ZR and ZR+ coherent work projects take a similar duration, the first 1600ZR and 1600ZR+ products will appear in 2027.
Symbol rate and other challenges
Moving to a 1.6-terabit coherent pluggable module using the same modulation scheme – 16-ary quadrature amplitude modulation or 16-QAM – used for 400ZR and 800ZR suggests a symbol rate of 240 gigabaud (GBd).
“That is the maths, but there might be concerns with technical feasibility,” says Gass. “That’s not to say it won’t come together.”
The highest symbol rate coherent modem to date is Ciena’s WaveLogic 6e, which was announced a year ago. The design uses a 3nm CMOS coherent digital signal processor (DSP) and a 200GBd symbol rate. It is also an embedded coherent design, not one required to fit inside a pluggable optical module with a constrained power consumption.

Kuschnerov points out that the baud rates of ZR and ZR+ have differed, and this will likely continue. 800ZR, using Ethernet with no probabilistic constellation shaping, has a baud rate of 118.2GBd, while 800ZR+, which uses OTN and probabilistic constellation shaping, has a baud rate of up to 131.35GBd. Every symbol has a varying probability when probabilistic constellation shaping is used. “This decreases the information per symbol, and thus, the baud rate must be increased,” says Kuschnerov.
Doubling up for 1600ZR/ZR+, those numbers become around 236GBd and 262GBd, subject to future standardisation. “So, saying that 1600ZR is likely to be at 240GBd is correct, but one cannot state the same for a potential 1600ZR+,” says Kuschnerov.
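As a back-of-the-envelope check of those figures (an illustrative sketch using the numbers quoted above, not a standards calculation): a dual-polarisation 16-QAM symbol carries 8 bits, so the required symbol rate is the payload, scaled by the FEC and framing overhead, divided by 8:
# Back-of-the-envelope symbol-rate scaling for 1600ZR/ZR+ (illustrative only).
# The 800ZR and 800ZR+ figures are those quoted above; doubling assumes the
# same dual-polarisation 16-QAM modulation and the same relative overhead.
BITS_PER_SYMBOL = 8  # 4 bits per symbol x 2 polarisations for 16-QAM

def implied_overhead(payload_gbps, baud_gbd):
    """Fraction of the line rate taken by FEC, framing and (for ZR+) shaping."""
    return baud_gbd * BITS_PER_SYMBOL / payload_gbps - 1.0

def required_baud(payload_gbps, overhead):
    """Symbol rate needed to carry the payload at the given overhead."""
    return payload_gbps * (1.0 + overhead) / BITS_PER_SYMBOL

zr_overhead = implied_overhead(800, 118.2)       # ~18% for 800ZR
zrplus_overhead = implied_overhead(800, 131.35)  # ~31% for 800ZR+ (PCS plus OTN)

print(f"1600ZR  : ~{required_baud(1600, zr_overhead):.1f} GBd")      # ~236.4 GBd
print(f"1600ZR+ : ~{required_baud(1600, zrplus_overhead):.1f} GBd")  # ~262.7 GBd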
Nokia’s view is that for 1600ZR, the industry will look at operating modes that include 16QAM at 240 GBd. Other explored options include 64-QAM with probabilistic constellation shaping at 200GBd and even dual optical carrier solutions with each carrier operating at approximately 130GBd. “However, this last option may be challenging from a power envelope perspective,” says Szilárd Zsigmond, head of Nokia’s optical subsystems group.
In turn, if 1600ZR+ reaches 1,000km distances, the emphasis will be on higher baud rate options than those used for 1600ZR. “This will be needed to enable longer reaches, which will also put pressure on managing power dissipation,” says Zsigmond.
The coherent DSP must also include digital-to-analogue converters (DACs) and analogue-to-digital converters (ADCs) sampling at 240 gigasamples per second or more. Indeed, the consensus among the players is that achieving the required electronics and optics will be challenging.
“All component bandwidths have to double and that is a significant challenge,” says Josef Berger, associate vice president, cloud optics marketing at Marvell.
The coherent optics – the modulators and receivers – must extend their analogue bandwidth to 120GHz. Infinera is one company that is confident this will be achieved. “Infinera, with our highly integrated Indium Phosphide-based photonic integrated circuits, will be producing a TROSA [transmitter-receiver optical sub-assembly] capable of supporting 1.6-terabit transmission that will fit in a pluggable form factor,” says Maher.
The coherent DSP and optics must also operate within the pluggable modules’ power and heat limits. “That is an extra challenge here: the development needs to maintain focus on cost and power simultaneously to bring the value network operators need,” says Williams. “Scaling baud rate by itself doesn’t solve the challenge. We need to do this in a cost and power-efficient way.”
Current 800ZR modules consume 30W or more, and since the aim of ZR modules is to be used within Ethernet switches and routers, this is challenging. In comparison, 400ZR modules now consume 20W or less.
“For 800ZR and 800ZR+, the target is to be within the 28W range, and this target is not changing for 1600ZR and 1600ZR+,” says Zsigmond. Coherent design engineers are being asked to double the bit rate yet keep the power envelope constant.
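Holding the power envelope while doubling the bit rate amounts to halving the power per bit each generation. A quick, illustrative calculation using the module figures cited in this article makes the point:
# Power per bit implied by the module figures cited above (illustrative only).
modules = {
    "400ZR (today)":       (400, 20),   # payload in Gbps, power in watts
    "800ZR/ZR+ (target)":  (800, 28),
    "1600ZR/ZR+ (target)": (1600, 28),  # same ~28W envelope, twice the bits
}

for name, (gbps, watts) in modules.items():
    pj_per_bit = watts / gbps * 1000  # W per Gbps is equivalent to pJ per bit
    print(f"{name:21} {pj_per_bit:5.1f} pJ/bit")  # 50.0, 35.0, 17.5 pJ/bit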
Certain OIF members are also interested in backward compatibility with 800ZR or 400ZR. “That also might affect the design,” says Gass.
Given the rising cost of taping out a coherent DSP using the 3nm and even 2nm CMOS process nodes required to reduce power per bit, most companies designing ASICs will look to develop one design for both the 1600ZR and ZR+ applications to maximise their return on investment, says Zsigmond. He notes that the risk was lower for the first generations of ZR and ZR+ because most companies had already developed components for long-haul applications that could be optimised for them.
For 400ZR, which used a symbol rate of 60 GBd, 60-70 GBd optics already existed. For 800 gigabit transmissions, high-performance embedded coherent optics and pluggable, low-power ZR/ZR+ modules have been developed in parallel. “For 1600ZR/ZR+, it appears that the pluggable modules will be developed first,” says Zsigmond. “There will be more technology challenges to address than previous ZR/ZR+ projects.”
The pace of innovation is faster than traditional coherent transmission systems and will continue to reduce cost and power per bit, notes Marvell’s Berger: “This innovation creates technologies that will migrate into traditional coherent applications as well.”
Gass is optimistic despite the challenges ahead: “You’ve got smart people in the room, and they want this to happen.”
OIF's OFC 2024 demo
The OIF has yet to finalise what it will show for the upcoming coherent pluggable module interoperable event at OFC to be held in San Diego in March. But there will likely be 400ZR and 800ZR demonstrations operating over 75km-plus spans and 400-gigabit OpenZR+ optics operating over greater distance spans.
ECOC 2023 industry reflections - Part 3

Gazettabyte is asking industry figures for their thoughts after attending the recent ECOC show in Glasgow. In particular, what developments and trends they noted, what they learned and what, if anything, surprised them. Here are responses from Coherent, Ciena, Marvell, Pilot Photonics, and Broadcom.
Julie Eng, CTO of Coherent
It had been several years since I’d been to ECOC. Because of my background in the industry, with the majority of my career in data communications, I was pleasantly surprised to see that ECOC had transitioned from primarily telecommunications, and largely academic, into more industry participation, a much bigger exhibition, and a focus on datacom and telecom. There were many exciting talks and demos, but I don’t think there were too many surprises.
In datacom, the focus, not surprisingly, was on architectures and implementations to support artificial intelligence (AI). The dramatic growth of AI, the massive computing time, and the network interconnect required to train models are driving innovation in fibre optic transceivers and components.
There was significant discussion about using Ethernet for AI compared to protocols such as InfiniBand and NVLink. For us as a transceiver vendor, the distinction doesn’t have a significant impact as there is little, if any, difference in the transceivers we make for Ethernet compared to the transceivers we make for InfiniBand/NVLink. However, the impact on the switch chip market and the broader industry is significant, and it will be interesting to see how this evolves.
Linear pluggable optics (LPO) was a hot topic, as it was at OFC 2023, and multiple companies, including Coherent, demonstrated 100 gigabit-per-lane LPO. The implementation has pros and cons, and we may find ourselves in a split ecosystem, with some customers preferring LPO and others preferring traditional pluggable optics with DSP inside the module. The discussion is now moving to the feasibility of 200 gigabit-per-lane LPO.
Discussion and demonstrations of co-packaged optics also continued, with switch vendors starting to show Ethernet switches with co-packaged optics. Interestingly, the success of LPO may push out the implementation of co-packaged optics, as LPO realizes some of the advantages of co-packaged optics with a much less dramatic architectural change.
One telecom trend was the transition to 800-gigabit digital coherent optical modules, as customers and suppliers plan for and demonstrate the capability to make this next step. There was also significant interest in and discussion about 100G ZR. We demonstrated a new version with 0dBm high optical output power at ECOC 2023 while other companies showed components to support it. This is interesting for cable providers and potentially for data centre interconnect and mobile fronthaul and backhaul.
I was very proud that our 200 gigabit-per-lane InP-based DFB-MZ laser won the 2023 ECOC Exhibition Industry Award for Most Innovative Product in the category of Innovative Photonics Component.
ECOC was a vibrant conference and exhibition, and I was pleased to attend and participate again.
Loudon Blair, senior director, corporate strategy, Ciena
ECOC 2023 in Glasgow gave me an excellent perspective on the future of optical technology. In the exhibition, integrated photonic solutions, high-speed coherent pluggable optical modules, and an array of testing and interoperability solutions were on display.
I was especially impressed by how high-bandwidth optics is being considered beyond traditional networking. Evolving use cases include optical cabling, the radio access network (RAN), broadband access, data centre fabrics, and quantum solutions. The role of optical connectivity is expanding.
In the conference, questions and conversations revolved around how we solve challenges created by the expanding use cases. How do we accommodate continued exponential traffic growth on our fibre infrastructure? Coherent optics supports 1.6Tbps today. How many more generations of coherent can we build before we move on to a different paradigm? How do we maximize density and continue to minimize cost and power? How do we solve the power consumption problem? How do we address the evolving needs of data centre fabrics in support of AI and machine learning? What is the role of optical switching in future architectures? How can we enhance the optical layer to secure our information traversing the network?
As I revisited my home city and stood on the banks of the river Clyde – at a location once the shipbuilding centre of the world – I remembered visiting my grandfather’s workshop where he built ships’ compasses and clocks out of brass.
It struck me how much the area had changed from my childhood and how modern satellite communications had disrupted the nautical instrumentation industry. In the same place where my grandfather serviced ships’ compasses, the optical industry leaders were now gathering to discuss how advances in optical technology will transform how we communicate.
It is a good time to be in the optical business, and based on the pace of progress witnessed at ECOC, I look forward to visiting San Diego next March for OFC 2024.
Dr Loi Nguyen, executive vice president and general manager of the cloud optics business group, Marvell
What was the biggest story at ECOC? That the story never changes! After 40 years, we’re still collectively trying to meet the insatiable demand for bandwidth while minimizing power, space, heat, and cost. The difference is that the stakes get higher each year.
The public debut of 800G ZR/ZR+ pluggable optics and a merchant coherent DSP marked a key milestone at ECOC 2023. For the first time, small-form-factor coherent optics delivers performance at a fraction of the cost, power, and space compared to traditional transponders. Now, cloud and service providers can deploy a single coherent optics in their metro, regional, and backbone networks without needing a separate transport box. 800 ZR/ZR+ can save billions of dollars for large-scale deployment over the programme’s life.
Another big topic at the show was 800G linear drive pluggable optics (LPO). The multi-vendor live demo at the OIF booth highlighted some of the progress being made. Many hurdles, however, remain. Open standards still need to be developed, which may prove difficult due to the challenges of standardizing analogue interfaces among multiple vendors. Many questions remain about whether LPO can be scaled beyond limited vendor selection and bookend use cases.
Frank Smyth, CTO and founder of Pilot Photonics
ECOC 2023’s location in Glasgow brought me back to the place of my first photonics conference, LEOS 2002, which I attended as a postgrad from Dublin City University. It was great to have the show close to home again, and the proximity to Dublin allowed us to bring most of the Pilot team.
Two things caught my eye. One was 100G ZR. We noted several companies working on their 100G ZR implementations beyond Coherent and Adtran (formerly Adva), which announced the product as a joint development over a year ago.
100G ZR has attracted much interest for scaling and aggregation in the edge network. Its 5W power dissipation is disruptive, and we believe it could find use in other network segments, potentially driving significant volume. Our interest in 100G ZR is in supplying the light source, and we had a working demo of our low linewidth tunable laser and mechanical samples of our nano-iTLA at the booth.
Another topic was carrier and spatial division multiplexing. Brian Smith from Lumentum gave a Market Focus talk on carrier and spatial division multiplexing (CSDM), which Lumentum believes will define the sixth generation of optical networking.
Highlighting the approaching technological limitation on baud rate scaling, the ‘carrier’ part of CSDM refers to interfaces built from multiple closely-spaced wavelengths. We know that several system vendors have products with interfaces based on two wavelengths, but it was interesting to see this from a component/module vendor.
We argue that comb lasers come into their own when you go beyond two to four or eight wavelengths and offer significant benefits over independent lasers. So CSDM aligns well with Pilot’s vision and roadmap, and our integrated comb laser assembly (iCLA) will add value to this sixth-generation optical networking.
Speaking of comb lasers, I attended an enjoyable workshop on comb lasers on the Sunday before the meetings got too hectic. The title was ‘Frequency Combs for Optical Communications – Hype or Hope’. It was a lively session featuring a technology push team and a market pull team presenting views from academia and industry.
Eric Bernier offered an important observation from HiSilicon. He pointed to a technology gap between what the market needs and what most comb lasers provide regarding power per wavelength, number of wavelengths, and data rate per lane. Pilot Photonics agrees and spotted the same gap several years ago. Our iCLA bridges it, providing a straightforward upgrade path to scaling to multi-wavelength transceivers but with the added benefits that comb lasers bring over independent lasers.
The workshop closed with an audience participation survey in which attendees were asked: Will frequency combs play a major role in short-reach communications? And will they play a major role in long-reach communications?
Unsurprisingly, given an audience interested in comb lasers, the majority’s response to both questions was yes. However, what surprised me was that the short-reach application had a much larger majority on the yes side: 78% to 22%. For long-reach applications the majority was slim: 54% to 46%.
Having looked at this problem for many years, I believe the technology gap mentioned is easier to bridge and delivers greater benefits for long-reach applications than for short-reach, at least in the near term.
Natarajan Ramachandran, director of product marketing, physical layer products division, Broadcom
Retimed pluggables have repeatedly shown resiliency due to their standards-based approach, offering reliable solutions, manufacturing scale, and balancing metrics around latency, cost and power.
At ECOC this year, multiple module vendors demonstrated 800G DR4 and 1.6T DR8 solutions with 200 gigabit-per-lane optics. As the IEEE works towards ratifying the specs around 200 gigabit per lane, one thing was clear at ECOC: the ecosystem – comprising DSP vendors, driver and transimpedance amplifier (TIA) vendors, and VCSEL/EML/silicon photonics vendors – is ready and can deliver.
Several vendors had module demonstrations using 200 gigabit-per-lane DSPs. What also was apparent at ECOC was that the application space and use cases, be it within traditional data centre networks, AI and machine learning clusters, or telecom, continue to grow. Multiple technologies will find the space to co-exist.
Marvell kickstarts the 800G coherent pluggable era

Marvell has become the first company to provide an 800-gigabit coherent digital signal processor (DSP) for use in pluggable optical modules.
The 5nm CMOS Orion chip supports a symbol rate of over 130 gigabaud (GBd), more than double that of the coherent DSPs for the OIF’s 400ZR standard and 400ZR+.
Meanwhile, a CFP2-DCO pluggable module using the Orion can transmit a 400-gigabit data payload over 2,000km using the quadrature phase-shift keying (QPSK) modulation scheme.
The Orion DSP announcement is timely, given that this year will be the first in which coherent pluggable port shipments exceed those of embedded coherent modules.
“We strongly believe that pluggable coherent modules will cover most network use cases, including carrier and cloud data centre interconnect,” says Samuel Liu, senior director of coherent DSP marketing at Marvell.
Marvell also announced its third-generation ColorZ pluggable module for hyperscalers to link equipment between data centres. The Orion-based ColorZ 800-gigabit module supports the OIF’s 800ZR standard and 800ZR+.
Fifth-generation DSP
The Orion chip is a fifth-generation design yet Marvell’s first. First ClariPhy and then Inphi developed the previous four generations.

Inphi bought ClariPhy for $275 million in 2016, gaining the first two generations of devices: the 40nm CMOS 40-gigabit LightSpeed chip and the 28nm CMOS 100- and 200-gigabit LightSpeed-II coherent DSP. The 28nm CMOS DSP is now coming to the end of its life, says Liu.
Inphi added two more coherent DSPs before Marvell bought the company in 2021 for $10 billion. Inphi’s first DSP was the 16nm CMOS M200. Until then, Acacia (now Cisco-owned) had been the sole merchant company supplying coherent DSPs for CFP2-DCO pluggable modules.
Inphi then delivered the 7nm 400-gigabit Canopus for the 400ZR market, followed a year later by the Deneb DSP that supports several 400-gigabit standards. These include 400ZR, 400ZR+, and standards such as OpenZR+, which also has 100-, 200-, and 300-gigabit line rates and supports the OpenROADM MSA specifications. “The cash cow [for Marvell] is [the] 7nm [DSPs],” says Liu.
The Inphi team’s first task after the acquisition was to convince Marvell’s CEO and its chief financial officer to make the company’s most significant investment yet in a coherent DSP. Developing the Orion cost between $100 million and $300 million.
“We have been quiet for the last two years, not making any coherent DSP announcements,” says Liu. “This [the Orion] is the one.”
Marvell views being first to market with a 130GBd-plus generation coherent DSP as critical given how pluggables, including the QSFP-DD and the OSFP form factors, account for over half of all coherent ports shipped.
“It is very significant to be first to market with an 800ZR plug and DSP,” says Jimmy Yu, vice president at market research firm Dell’Oro Group. “I expect Cisco/Acacia to have one available in 2024. So, for now, Marvell is the only supplier of this product.”
Yu notes that vendors such as Ciena and Infinera have had 800 Gigabit-per-second (Gbps) coherent available for some time, but they are for metro and long-haul networks and use embedded line cards.
Use cases
The Orion DSP addresses hyperscalers’ and telecom operators’ coherent needs. The DSP also implements various coherent standards to ensure that the vendors’ pluggable modules work with each other.
Liu says a DSP’s highest speed is what always gets the focus, but the Orion also supports lower line rates such as 600, 400 and 200Gbps for longer spans.
The baud rate, modulation scheme, and the probabilistic constellation shaping (PCS) technique are control levers that can be varied depending on the application. For example, 800ZR uses a symbol rate of only 118GBd and the 16-QAM modulation scheme to achieve the 120km specification while minimising power consumption. When performance is essential, such as sending 400Gbps over 2,000km, the highest baud rate of 130GBd is used along with QPSK modulation.
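A minimal sketch of those levers, using only the two operating points cited above (the derived line rates and overheads are implied by the stated figures, not taken from the specifications):
# Raw line rate = symbol rate x bits per dual-polarisation symbol (illustrative).
BITS_PER_SYMBOL = {"QPSK": 4, "16-QAM": 8}

modes = [
    # (payload in Gbps, modulation, symbol rate in GBd, reach as stated above)
    (800, "16-QAM", 118, "120km (800ZR)"),
    (400, "QPSK",   130, "2,000km"),
]

for payload, mod, baud, reach in modes:
    line_rate = baud * BITS_PER_SYMBOL[mod]
    overhead = line_rate / payload - 1
    print(f"{payload}G over {reach}: {mod} at {baud}GBd -> "
          f"{line_rate}Gbps line rate (~{overhead:.0%} FEC/framing overhead)")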

China is one market where Marvell’s current 7nm CFP2-DCOs are used to transport wavelengths at 100Gbps and 200Gbps.
Using the Orion for 200-gigabit wavelengths delivers an extra 1dB (decibel) of optical signal-to-noise ratio performance. The additional 1dB benefits the end user, says Liu: they can increase the engineering margin or extend the transmission distance. Meanwhile, probabilistic constellation shaping is used when spectral efficiency is essential, such as fitting a transmission within a 100GHz-width channel.
Liu notes that the leading Chinese telecom operators are open to using coherent pluggables to help reduce costs. In contrast, large telcos in North America and Europe use pluggables for their regional networks. Still, they prefer embedded coherent modems from leading systems vendors for long-haul distances greater than 1,000km.
Marvell believes the optical performance enabled by its 130GBd-plus 800-gigabit pluggable module will change this. However, the leading system vendors have all announced their latest-generation embedded coherent modems with baud rates of 130GBd to 150GBd, while Ciena’s 200GBd 1.6-terabit WaveLogic 6 coherent modem will be available next year.
The advent of 800-gigabit coherent will also promote IP over DWDM. 400ZR+ is already enabling the addition of coherent modules directly to IP routers for metro and metro-regional applications. An 800ZR or 800ZR+ pluggable module will continue this trend beyond 400 gigabits to 800 gigabits.
The advent of an 800-gigabit pluggable also benefits the hyperscalers as they upgrade their data centre switches from 12.8 terabits to 25.6 and 51.2 terabits. The hyperscalers already use 400ZR and ZR+ modules, and 800-gigabit modules are the next obvious step. Liu says this will serve the market for the next four years.
Fujitsu Optical Components, InnoLight, and Lumentum are three module makers that all endorsed the Orion DSP announcement.
ColorZ 800 module
In addition to selling its coherent DSPs to pluggable module and equipment makers, Marvell will sell to the hyperscalers its latest ColorZ module for data centre interconnect.
Marvell’s first-generation product was the 100-gigabit coherent ColorZ in 2016 and in 2021 it produced its 400ZR ColorZ. Now, it is offering an 800-gigabit version – ColorZ 800 – to address 800ZR and 800ZR+, which include OpenZR+ and support for lower speeds that extend the reach to metro regional and beyond.

“We are first to market on this module, and it is now sampling,” says Josef Berger, associate vice president of marketing optics at Marvell.
Marvell aiming its module at the hyperscaler market rather than telecoms makes sense, says Yu, as it is the most significant opportunity.
“Most communications service providers’ interest is in having optical plugs with longer reach performance,” says Dell’Oro’s Yu. “So, they are more interested in ZR+ optical variants with high launch power of 0dBm or greater.”
Marvell notes a 30 per cent cost and power consumption reduction for each generation of ColorZ pluggable coherent module.
Liu concludes by saying that designing the Orion DSP was challenging. It is a highly complicated chip comprising over a billion logic gates. An early test chip of the Orion was used as part of a Lumentum demonstration at the OFC show in March.
The ColorZ 800 module will start being sampled this quarter.
What follows the Orion will likely be a 1.6-terabit DSP operating at 240GBd. The OIF has already begun defining the next 1.6T ZR standard.
Marvell’s CTO: peering into the future is getting harder

CTO interviews part 4: Noam Mizrahi
In a wide-ranging interview, Noam Mizrahi (pictured), executive vice president and corporate chief technology officer (CTO) at Marvell, discusses the many technologies needed to succeed in the data centre. He also discusses a CTO’s role and the importance of his focussed thinking ritual.
Noam Mizrahi has found his calling.
“I’m inspired by technology,” he says. “Every time I see an elegant technical solution – and it can be very simple – it makes me smile.”
Marvell hosts an innovation contest, and at one event, Mizrahi mentioned this to participants. “So they issued stickers saying, ‘I made Noam smile’,” he says.
Marvell’s broad portfolio of products spans high-end processors, automotive Ethernet, storage, and optical modules.
“This technology richness means that every day I come to work, I feel I learn something new,” he says.
Chip design
The interview with Mizrahi took place before the death, on March 24th, of Intel co-founder Gordon Moore, aged 94.
In his article published in Electronics in 1965, Moore observed how chip transistor count doubled roughly every year, an observation that became known as Moore’s law.
The law has driven the semiconductor industry for decades and, like all exponential trends, is reaching its limit.
Since Marvell’s business is infrastructure ICs, it is experiencing the law’s demise first hand.
While the core definition of Moore’s law is ending, technology and process advancement are still enabling the cramming of more transistors on a die, says Mizrahi. However, greater processing performance and lower power consumption are occurring at a different pace and cost structure.
It is now very costly to make chips using the latest 5nm and 3nm CMOS process nodes.
The cost is not just the chip mask (reticle) but also such aspects as intellectual property (IP), architecture, design verification, electronics design automation (EDA) tools, and design validation.
Getting to the first product using 5nm CMOS can cost as much as $450 million, while for 3nm, the estimate is $600 million.
Also, development flow takes longer due to the complexity involved and will cause a redefinition of what is meant by a ‘current generation’ of a chip, says Mizrahi.
Design reuse is also increasingly required – not just of IP but of the validation process – to speed up a chip’s introduction.
In turn, designers must be innovative since processing performance and lower power consumption are harder to achieve.
Areas include package design optimisation, chip input-output (I/O), and the software to claw back processing performance that previously came from using the latest CMOS process.
IC designers will also be forced to choose which chips to make using the latest CMOS process node.
Overall, fewer chip companies will be able to afford chips made in leading CMOS processes, and fewer companies will buy such ICs, says Mizrahi.
Rise of chiplets
Chiplets will also play a role in a post-Moore’s law world.
“Chiplets are currently a very hot topic,” says Mizrahi.

A chiplet is a die implementing a functional block. The chiplet is added alongside a central die for a system-on-chip (SoC) design. Using chiplets, designs can exceed the theoretical limit of the mask size used to make a chip.
Marvell has long been a chiplet pioneer, says Mizrahi. “Today, it all seems reasonable, but when we did all that, it was not so obvious.” Marvell makes one chip that has 17 dies in a package.
Chiplets are particularly suited for artificial intelligence (AI) ASICs, what Mizrahi describes as ‘monsters of chips’.
Chiplets enable designers to control yield, which is essential when each 3nm CMOS chip lost to a defect is so costly.
Using chiplets, a design can be made using a mix of CMOS process nodes, saving power and speeding up a chip’s release.
Mizrahi applauds the work of the Universal Chiplet Interconnect Express (UCIe) organisation, creating chiplet standards.
But the chiplets’ first use will be as internally-designed dies for a company’s product, he says. Chip designers buying best-in-class chiplets from third parties remains some way off.
A CTO’s role
Mizrahi’s role is to peer into the future to identify the direction technologies will take and their impact on Marvell’s markets and customers.
He says a company-level longer-term technological strategy that combines the strengths of Marvell’s product lines is needed to secure the company’s technical lead.
“That is my job, and I love it,” he says.
It’s also challenging; predicting the future is hard, especially when the marketplace is dynamic and constantly changing. Technology is also very costly and time-consuming to develop.
“So, making the right decision as to what technology we need to invest in for the future, that is tough,” says Mizrahi.
Rapidly changing market dynamics are also challenging Marvell’s customers, who don’t always know what they need to do.
“Creating this clarity with them is challenging but also a great opportunity if done correctly,” says Mizrahi. “That is what keeps me motivated.”
Job impact
How does Mizrahi, Marvell’s CTO since 2020, assess his impact?
The question stems from a comment by Coherent’s Dr Julie Eng that assessing a CTO’s impact is more complicated than, say, a product line manager’s. On becoming CTO, Eng discussed with Coherent’s CEO how best to use her time to benefit the company. She also called other CTOs about the role and what works for them.
“I would say that my goals are tangible and clear, but the environment and the topics that I deal with are far less tangible and clear,” says Mizrahi.
He is required to identify technology trends and determine which ones need to be ‘intercepted’. “What do we need to do to get there and ensure that we have the right technologies in place?” he says.
But how technologies play out is hard to determine and becoming harder given the longer development cycles.
“It’s critical to identify these technologies and their impact ahead of time to give yourself enough time to prepare for what must be done, so you can start the development in time for when the wave hits.”
Marvell’s strategy
Marvell’s company focus is infrastructure ICs.
“We deal with the network, connectivity, storage, security, all the infrastructure around the processor,” says Mizrahi.
Marvell has been acquiring companies to bolster its technology portfolio and system expertise. The acquisitions include Cavium, Inphi, and Innovium. Last year, Marvell also bought CXL specialist Tanzanite Silicon Solutions.
“It’s going to be very important that you possess all the components in the infrastructure because, otherwise, it is tough to design a solution that brings value,” says Mizrahi.
Being able to combine all the pieces helps differentiate a company.
“I’m not sure there are many other companies that possess all the components needed to make effective infrastructure,” he says.
Disaggregation
Mizrahi gave a talk at Marvell’s Industry Analyst Day last December entitled Disaggregation using Optics.
During the talk, he described how data centres have been flexible enough to absorb new use cases and applications in the past, but now this is changing.
“AI training clusters are going to require a different type of data centre,” says Mizrahi. “It is more like a supercomputer, not the same traditional server architecture we see today.”
His analyst day talk also highlighted the need to disaggregate systems to meet the pace of scaling required and remove dependencies between components so they can be disaggregated and scaled independently.
Compute Express Link (CXL) and memory is one such component disaggregation example.
The CXL protocol optimises several memory parameters in computing systems, namely latency, bandwidth, and memory semantics. Memory semantics is about overseeing correct access by several devices using a shared memory.
CXL enables the disaggregation of memory currently bound to a host processor, thereby not only optimising the performance metrics but reducing overall cost.
Mizrahi cites the issue of poor memory usage in data centres. Microsoft Azure published research showing that half of its virtual machines never touch half of their memory.
“This means that memory is stranded when virtual machines are rented and are unavailable to other users,” says Mizrahi. “And memory is one of the largest spends in data centres.”
CXL enables memory pooling. From this pool, memory is assigned to an application in real time and released when workload execution is completed.
Pooled memory promises to save hyperscalers hundreds of millions of dollars.
“Of course, it’s not easy to do, and it will take time, but that’s just one motivation for doing things [using CXL].”
His analyst talk also stated that optics is the one medium that addresses all the disaggregation issues: bandwidth, power, density, and the need for larger clusters.
“We’re going to see an all-optical type of connectivity if you look far enough into the future,” he says. “Of course, not today and not tomorrow.”
Mizrahi’s talk also suggested that AI will need even larger scale computing than supercomputers.
He cites Tesla’s supercomputer used to train its autonomous vehicle neural network.
“If you look at what it is composed of, it is a supercomputer,” says Mizrahi. “Some say it’s one of the top five or top 10 supercomputers, and its only purpose is to train autonomous vehicle neural networks.”
Last year, Meta also announced a supercomputer for training purposes.
Such AI training systems are the tip of the iceberg, he says.
“Ask yourself, what is a unit for a training cluster?” says Mizrahi. “Is it eight GPUs (graphics processing units), 256 GPUs, 4k TPUs (tensor processing units), or maybe it is an entire data centre in one cluster?”
That is where it is all going, he says.
Pluggable modules and co-packaged optics
Co-packaged optics continues to evolve, but so do standard pluggable modules.
There is a good reason why pluggable optics remain in favour, and that will continue, says Mizrahi. But at some point, designers won’t have a choice, and co-packaged optics will be needed. That, however, is some way off.
In time, both these technologies will be used in the data centre.
Co-packaged optics is focussed on high-capacity networking switches. “And we are right in the middle of this and developing into it,” says Mizrahi.
Another place where co-packaged optics will be used, potentially even sooner, is for AI clusters.
Such co-packaged optics will connect switches to compose AI clusters, and, longer term, the GPUs will use optical I/O as their primary interface.
Such optical I/O helps meet bandwidth, power reduction, and power density requirements.
“Let’s say you want to build a cluster of GPUs, the larger the cluster, the better, but these are so power-hungry. If you do it with electrical connectivity, you must maintain proximity to achieve high speeds,” says Mizrahi. “But that, of course, limits your ability to put more GPUs into a cluster because of power density limitations.”
Using optical I/O, however, somewhat eases the density requirement, enabling more GPUs in a cluster.
But there are issues. What happens if something fails?
Today, with pluggables, one link is affected, but with co-packaged optics, it is less simple. “Also how do you scale production of these things to the scale of a data centre?” says Mizrahi.
These questions will ensure the coexistence of these different solutions, he says.
But AI is driving the need for the newer technology. Mizrahi cites how, in data centres, high-end switches have a capacity of 25.6 terabits while servers use a 50-gigabit interface. “That means, if for simplicity we ignore topologies and redundancies, you can connect 500 servers to that switch,” he says.
GPUs today have a 3.6 terabit-per-second full duplex I/O connectivity to talk to their peer GPUs.
“It only takes seven GPUs to saturate that very same [25.6-terabit capacity] switch,” he says. “The bandwidth requirement, it just explodes, and it’s going to be very hard to keep doing that electrically.”
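The arithmetic behind those figures, using the capacities quoted above and ignoring topology and redundancy as Mizrahi does, is straightforward:
# How many devices saturate a 25.6-terabit switch? (Topology and redundancy
# ignored, as in Mizrahi's simplification above.)
switch_capacity_gbps = 25_600
server_nic_gbps = 50
gpu_io_gbps = 3_600  # full-duplex GPU-to-GPU connectivity

print(switch_capacity_gbps // server_nic_gbps, "servers per switch")  # 512, roughly 500
print(switch_capacity_gbps // gpu_io_gbps, "GPUs per switch")         # 7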
This is why co-packaged optics will be needed.
Typical workday
Mizrahi is based in Israel, whereas Marvell’s headquarters is in Santa Clara, California.
“It [Israel] is the centre of my life and where my family is,” says Mizrahi. “I travel a lot, to the point where I think my biological clock is somewhere over the ocean.”
His day spreads across many time zones. Early morning calls are to the Far East before he turns to local issues. Then, his afternoon coincides with morning US Eastern time, while his evening aligns with morning US Western time.
That said, Marvell’s CEO repeatedly emphasises his desire for all employees to balance work and family.
“He encourages and insists to see that happen, which helps me keep a balance,” says Mizrahi.
Prime focus time
Mizrahi loves sports and is a keen runner.
He ensures he does not miss his seven or eight-mile daily run, even on days when he has a long flight.
“Every morning, it is my alone time,” he says. “It’s when I let my brain work, and it is my prime focus time.”
He is also a family man and has three children. He is keen to spend as much time as possible with his wife and kids.
“It’s not going to be long before they [the children] start their journey away from home, so I try to cherish every minute I have with them,” he says.
He reads a lot, including technical material. “I told you, I’m inspired by technology.”

He cites two recently read books.
One, in Hebrew, is called Red Skies by Daniel Shinar.
“It talks about a friendship between two young guys from two sides of the fence,” he says. A friendship that proves impossible due to the reality of the situation.
The second book, one he found fascinating and meaningful, was part of a training course given at Marvell, called The Leadership Challenge by James Kouzes and Barry Posner.
“It gives you practices that the authors see as key for exemplary leadership, and it gave me so many things to think about,” he says. “To recognise things in my behaviour or in other people I view as leaders.”
Marvell plans for CXL's introduction in the data centre

The open interconnect Compute Express Link (CXL) standard promises to change how data centre computing is architected.
CXL enables the rearrangement of processors (CPUs), accelerator chips, and memory within computer servers to boost efficiency.
“CXL is such an important technology that is in high focus today by all the major cloud hyperscalers and system OEMs,” says Thad Omura, vice president of flash marketing at Marvell.
Semiconductor firm Marvell has strengthened its CXL expertise by acquiring Tanzanite Silicon Solutions.
Tanzanite was the first company to show two CPUs sharing common memory using a CXL 2.0 controller implemented using a field-programmable gate array (FPGA).
Marvell intends to use CXL across its portfolio of products.
Terms of the deal for the 40-staff Tanzanite acquisition have not been disclosed.
Data centre challenges
Memory chips are the biggest single item of spend in a data centre. Each server CPU has its own DRAM, the fast volatile memory overseen by a DRAM controller. When a CPU uses only part of the memory, the rest is inactive since other server processors can’t access it.
“That’s been a big issue in the industry; memory has consistently been tied to some sort of processor,” says Omura.
Maximising processing performance is another issue. Memory input-output (I/O) performance is not increasing as fast as processing performance. Memory bandwidth available to a core has thus diminished as core count per CPU has increased. “These more powerful CPU cores are being starved of memory bandwidth,” says Omura.
CXL tackles both issues: it enables memory to be pooled, improving overall usage, while opening new memory data paths to feed the cores.
CXL also enables heterogeneous compute elements to share memory: for example, accelerator ICs such as graphics processing units (GPUs) working alongside the CPU on a workload.
CXL technology
CXL is an industry-standard protocol that uses the PCI Express (PCIe) bus as the physical layer. PCI Express is used widely in the data centre; PCIe 5.0 is coming to market, and the PCIe 6.0 standard, the first to use 4-level pulse-amplitude modulation (PAM-4), was completed earlier this year.
In contrast, other industry interface protocols such as OpenCAPI (open coherent accelerator processor interface) and CCIX (cache coherent interconnect for accelerators) use custom physical layers.
“The [PCIe] interface speeds are now fast enough to handle memory bandwidth and throughput, another reason why CXL makes sense today,” says Omura.
CXL supports low-latency memory transactions in the tens of nanoseconds. In comparison, non-volatile memory express (NVMe) storage, which uses a protocol stack run on the CPU, has transaction times of tens of microseconds.
“The CXL protocol stack is designed to be lightweight,” says Omura. “It doesn’t need to go through the whole operating system stack to get a transaction out.”
CXL enables cache coherency, which is crucial since it ensures that the accelerator and the CPU see the same data in a multi-processing system.
Memory expansion
The first use of CXL will be to simplify the adding of memory.
A server must be opened when adding extra DRAM using a DIMM (dual in-line memory module). And there are only so many DIMM slots in a server.
The DIMM also has no mechanism to pass telemetry data such as its service and bit-error history. Cloud data centre operators use such data to oversee their infrastructure.
Using CXL, a memory expander module can be plugged into the front of the server via PCIe, avoiding having to open the server. System cooling is also more straightforward since the memory is far from the CPU. The memory expander’s CXL controller can also send telemetry data.
CXL also boosts memory bandwidth. When adding a DIMM to a CPU, the original and added DIMM share the same channel; capacity is doubled but not the interface bandwidth. Using CXL, however, opens an additional channel, as the added memory uses the PCIe bus.
“If you’re using the by-16 ports on a PCIe generation five, it [the interface] exceeds the [DRAM] controller bandwidth,” says Omura.
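As a rough comparison behind that point (illustrative figures; the exact numbers depend on encoding overheads and the DRAM speed grade), a PCIe 5.0 x16 port offers around 63GB/s per direction against roughly 38GB/s for a single DDR5-4800 channel:
# Rough bandwidth comparison (illustrative; assumes DDR5-4800 and ignores
# protocol overheads beyond PCIe's 128b/130b line coding).
pcie5_gtps_per_lane = 32           # PCIe 5.0 runs at 32 GT/s per lane
lanes = 16
encoding = 128 / 130               # 128b/130b line-coding efficiency
pcie5_x16_gbps = pcie5_gtps_per_lane * lanes * encoding
print(f"PCIe 5.0 x16: ~{pcie5_x16_gbps / 8:.0f} GB/s per direction")  # ~63 GB/s

ddr5_channel_gbps = 4800 * 64 / 1000  # 64-bit channel at 4800 MT/s, in Gbit/s
print(f"DDR5-4800 channel: ~{ddr5_channel_gbps / 8:.1f} GB/s")        # ~38.4 GB/s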

Pooled memory
CXL also enables memory pooling. A CPU can take memory from the pool for a task, and when completed, it releases the memory so that another CPU can use it. Future memory upgrades are then added to the pool, not an individual CPU. “That allows you to scale memory independently of the processors,” says Omura.
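A toy sketch of the pooling idea follows. It is purely illustrative: in practice, CXL memory pooling is handled by the fabric manager and hardware, not application code.
# Toy illustration of memory pooling: capacity is held centrally, leased to a
# host for the duration of a task, then returned for others to use.
class MemoryPool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.leases = {}  # host name -> GB currently leased

    def allocate(self, host, size_gb):
        if size_gb > self.free_gb:
            return False  # pool exhausted; the host must wait or use local DRAM
        self.free_gb -= size_gb
        self.leases[host] = self.leases.get(host, 0) + size_gb
        return True

    def release(self, host):
        self.free_gb += self.leases.pop(host, 0)  # freed capacity is reusable

pool = MemoryPool(capacity_gb=1024)
pool.allocate("cpu-0", 256)  # cpu-0 borrows 256GB for a workload
pool.release("cpu-0")        # when finished, the capacity returns to the pool
pool.allocate("cpu-1", 512)  # and can be claimed by another CPU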
The likely next development is for all the CPUs to access memory via a CXL switch. Each CPU will no longer need a local DRAM controller; instead, it can access a memory expander or the memory pool using the CXL fabric (see diagram above).
Going through a CXL switch adds latency to the memory accesses. Marvell says that the round trip time for a CPU to access its local memory is about 100ns, while going through the CXL switch to pooled memory is projected to take 140-160ns.
The switch can also connect a CXL accelerator. Here, an accelerator IC is added to memory which can be shared in a cache coherent manner with the CPU through the switch fabric (see diagram above).
I/O acceleration hardware can also be added using the CXL switch. Such hardware includes Ethernet, data processing unit (DPU) smart network interface controllers (smartNICs), and solid-state drive (SSD) controllers.
“Here, you are focused on accelerating protocol-level processing between the network device or between the CPU and storage,” says Omura. These I/O devices become composable using the CXL fabric.
More CXL, less Ethernet
Server boxes in the data centre are stacked in racks. Each server comprises CPUs, memory, accelerators, network devices and storage. The servers talk to each other via Ethernet and to other server racks using a top-of-rack switch.
But the server architecture will change as CXL takes hold in the data centre.

“As we add CXL into the infrastructure, for the first time, you’re going to start to see disaggregate memory,” says Omura. “You will be able to dynamically assign memory resources between servers.”
For some time yet, servers will have dedicated memory. Eventually, however, the architecture will become disaggregated with separate compute, memory and I/O racks. Moreover, the interconnect between the boxes will be through CXL. “Some of the same technology that has been used to transmit high-speed Ethernet will also be used for CXL,” says Omura.
Omura says deployment of the partially-disaggregated rack will start in 2024-25, while complete disaggregation will likely appear around the decade-end.
Co-packaged optics and CXL
Marvell says co-packaging optics will fit well with CXL.

“As you disaggregate memory from the CPU, there is a need to have electro-optics drive distance and bandwidth requirements going forward,” says Nigel Alvares, vice president of solutions marketing at Marvell.
However, CXL must be justified from a cost and latency standpoint, limiting its equipment-connecting span.
“The distance in which you can transmit data over optics versus latency and cost is all being worked out right now,” says Omura. The distance is determined by the transit time of light in fibre and the forward error correction scheme used.
But CXL needs to remain very low latency because memory transactions are being done over it, says Omura: “We’re no longer fighting over just microseconds or milliseconds of networking, now we’re fighting over nanoseconds.”
Marvell can address such needs with its acquisition of Inphi and its PAM-4 and optical expertise, the adoption of PAM-4 encoding for PCIe 6.0, and now the addition of CXL technology.
Vodafone's effort to get silicon for telco

This is an exciting time for semiconductors, says Santiago Tenorio, which is why his company, Vodafone, wants to exploit this period to benefit the radio access network (RAN), the most costly part of the wireless network for telecom operators.
The telecom operators want greater choice when buying RAN equipment.
As Tenorio, a Vodafone Fellow (the company’s first) and its network architecture director, notes, there were more than ten wireless RAN equipment vendors 15 years ago. Now, in some parts of the world, the choice is down to two.
“We were looking for more choice and that is how [the] Open RAN [initiative] started,” says Tenorio. “We are making a lot of progress on that and creating new options.”
But having more equipment suppliers is not enough: the choice of silicon inside the equipment is also limited.
“You may have Fujitsu radios or NEC radios, Samsung radios, Mavenir software, whatever; in the end, it’s all down to a couple of big silicon players, which also supply the incumbents,” he says. “So we thought that if Open RAN is to go all the way, we need to create optionality there too to avoid vendor lock-in.”
Vodafone has set up a 50-strong research team at its new R&D centre in Malaga, Spain, that is working with chip and software companies to develop the architecture of choice for Open RAN to expand the chip options.
Open RAN R&D
The R&D centre’s 50 staff are organised into several streams, but their main goal is to address critical questions regarding the Open RAN silicon architecture.
“Things like whether the acceleration is in-line or look-aside, which is a current controversy in the industry,” says Tenorio. “These are the people who are going to answer that question.”
With Open RAN, the virtualised Distributed Unit (DU) runs on a server. This contrasts with specialised hardware used in traditional baseband units.
Open RAN processes layer 1 data in one of two ways: look-aside or in-line. With look-aside, the server’s CPU performs certain layer 1 tasks, aided by accelerator hardware for functions such as forward error correction. This requires frequent communication between the two, which limits processing efficiency.
In-line solves this by performing all the layer 1 processing using a single chip. Dell, for example, has an Open RAN accelerator card that performs in-line processing using Marvell’s silicon.
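The difference between the two models can be pictured as a count of CPU-to-accelerator hand-offs per processing slot. The task names and the per-hand-off cost in the sketch below are simplified assumptions for illustration; they are not an actual Open RAN software interface.

```python
# Illustrative comparison of look-aside and in-line layer 1 acceleration.
# Each CPU <-> accelerator hand-off carries an assumed fixed overhead.

HANDOFF_COST_US = 5   # assumed cost of one CPU <-> accelerator hand-off, in microseconds

def lookaside_overhead(tasks):
    """CPU runs most of layer 1, offloading selected tasks (e.g. FEC) to the accelerator."""
    handoffs = sum(1 for t in tasks if t == "fec")   # each offloaded task is a round trip
    return handoffs * HANDOFF_COST_US

def inline_overhead(tasks):
    """The accelerator runs the whole layer 1 pipeline: one hand-off in, results back out."""
    return 1 * HANDOFF_COST_US

slot = ["channel_estimation", "equalisation", "fec", "fec", "demapping"]
print("look-aside overhead:", lookaside_overhead(slot), "us per slot")
print("in-line overhead:   ", inline_overhead(slot), "us per slot")
```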
When Vodafone announced its Open RAN silicon initiative in January, it was working with 20 chip and software companies. More companies have since joined.
“You have software players like middleware suppliers, also clever software plug-ins that optimise the silicon itself,” says Tenorio. “It’s not only silicon makers attracted by this initiative.”
Vodafone has no preconceived ideas as to the ideal solution. “All we want is the best technical solution in terms of performance and cost,” he says.
By performance, Vodafone means power consumption and processing. “With a more efficient solution, you need less [processing] cores,” says Tenorio.
Vodafone is talking to the different players to understand their architectures and points of view and is doing its own research that may include simulations.
Tenorio does not expect Vodafone to manufacture silicon: “I mean, that’s not necessarily on the cards.” But Vodafone must understand what is possible and will conduct lab testing and benchmark measurements.
“We will do some head-to-head measurements that, to be fair, no one I know does,” he says. Vodafone will then publish its position, create a specification and drive vendors to comply with it.
“We’ve done that in the past,” says Tenorio. “We have been specifying radios for the last 20 years, and we never had to manufacture one; we just needed to understand how they’re done to take the good from the bad and then put everybody on the art of the possible.”
Industry interest
The companies joining Vodafone’s Open RAN chip venture are motivated for different reasons.
Some have joined to ensure that they have a voice and influence Vodafone’s views. “Which is super,” says Tenorio.
Others are there because they are challengers to the current ecosystem. “They want to get the specs ahead of anybody to have a better chance of succeeding if they listen to our advice, which is also super,” says Tenorio.
Meanwhile, software companies have joined to see whether they can improve hardware performance.
“That is the beauty of having the whole ecosystem,” he says.

Work scale
The work is starting at layer 1, covering not just the RAN’s distributed unit (DU) but also the radio unit (RU), given that the power amplifier is the biggest offender in terms of power consumption.
Layers 2 and 3 will also be tackled. “We’re currently running that on Intel, and we’re finding that there is a lot of room for improvement, which is normal,” says Tenorio. “It’s true that running the three layers on general-purpose hardware has room for improvement.”
That room for improvement is almost equivalent to one full generation of silicon, he says.
Vodafone says that it also can’t be the case that Intel is the only provider of silicon for Open RAN.
The operator expects new hardware variants based on ARM, perhaps AMD, and maybe the RISC-V architecture at some point.
“We will be there to make it happen,” says Tenorio.
Other chip accelerators
Do graphics processing units (GPUs), data processing units (DPUs) and programmable logic also have roles?
“I think there’s room for that, particularly at the point that we are in,” says Tenorio. “The future is not decided yet.”
The key is to avoid vendor lock-in for layer 1 acceleration, he says.
He highlights the work of companies such as Marvell and Qualcomm to accelerate layer 1 tasks, but fears this will drive software suppliers to take sides with one of these accelerators. “This is not what we want,” he says.
What is required is to standardise the interfaces to abstract the accelerator from the software, or steer away from custom hardware and explore the possibilities of general-purpose but specialised processing units.
“I think the future is still open,” says Tenorio. “Right now, I think people tend to go proprietary at layer 1, but we need another plan.”
“As for FPGAs, that is what we’re trying to run away from,” says Tenorio. “If you are an Open RAN vendor and can’t afford to build your ASIC because you don’t have the volume, then, okay, that’s a problem we were trying to solve.”
Improving general-purpose processing avoids having to go to FPGAs, which are bulky, power-hungry and expensive, says Tenorio, but he also notes that FPGAs are evolving.
“I don’t think we should have religious views about it,” he says. “There are semi-programmable arrays that are starting to look better and better, and there are different architectures.”
This is why he describes the chip industry as ‘boiling’: “This is the best moment for us to take a view because it’s also true that, to my knowledge, there is no other kind of player in the industry that will offer you a neutral, unbiased view as to what is best for the industry.”
Without that, the fear is that, through acquisition and competition, the chip players will reduce IC choices to a minimum.
“You will end up with two to three incumbent architectures, and you run a risk of those being suboptimal, and of not having enough competition,” says Tenorio.
Vodafone’s initiative is open to other companies to participate, including its telco competitors.
“There are times when it is faster, and you make a bigger impact if you start things on your own, leading the way,” he says.
Vodafone has done this before: In 2014, it started working with Intel on Open RAN.
“We made some progress, we had some field trials, and in 2017, we approached TIP (the Telecom Infra Project), and we offered to contribute our progress for TIP to continue in a project group,” says Tenorio. “At that point, we felt that we would make more progress with others than going alone.”
Vodafone is already deploying Open RAN in the UK and has said that by 2030, 30 per cent of its deployments in Europe will be Open RAN.
“We’ve started deploying open RAN and it works, the performance is on par with the incumbent architecture, and the cost is also on par,” says Tenorio. “So we are creating that optionality without paying any price in terms of performance, or a huge premium cost, regardless of what is inside the boxes.”
Timeline
Vodafone is already looking at in-line versus look-aside.
“We are closing into in-line benefits for the architecture. There is a continuous flow of positions or deliverables to the companies around us,” says Tenorio. “We have tens of meetings per week with interested companies who want to know and contribute to this, and we are exchanging our views in real-time.”
There will also be a white paper published, but for now, there is no deadline.
There is an urgency to the work given that Vodafone is already deploying Open RAN, though this research is for the next generation of Open RAN. “We are deploying the previous generation,” he says.
Vodafone is also talking, for example, to the ONF open-source organisation, which announced an interest in defining interfaces to exploit acceleration hardware.
“I think the good thing is that the industry is getting it, and we [Vodafone] are just one factor,” says Tenorio. “But you start these conversations, and you see how they’re going places. So people are listening.”
The industry agrees that layer 1 interfacing needs to be standardised or abstracted to avoid companies ending in particular supplier camps.
“I think there’ll be a debate whether that needs to happen in the ORAN Alliance or somewhere else,” says Tenorio. “I don’t have strong views. The industry will decide.”
Other developments
The Malaga R&D site will not just focus on Open RAN but other parts of the network, such as transport.
Transport still makes use of proprietary silicon but there is also more vendor competition.
“The dollars spent by operators in that area is smaller,” says Tenorio. “That’s why it is not making the headlines these days, but that doesn’t mean there is no action.”
Two transport areas where work has already started are the disaggregated backbone router and the disaggregated cell site gateway, both sensible starting points.
“Disaggregating a full MPLS carrier-grade router is a different thing, but its time will come,” says Tenorio, adding that the centre in Malaga is not just for Open RAN, but silicon for telcos.
Marvell's 50G PAM-4 DSP for 5G optical fronthaul

- Marvell has announced the first 50-gigabit 4-level pulse-amplitude modulation (PAM-4) physical layer (PHY) for 5G fronthaul.
- The chip completes Marvell’s comprehensive portfolio for 5G radio access network (RAN) and x-haul (fronthaul, midhaul and backhaul).
Marvell has announced what it claims is an industry-first: a 50-gigabit PHY for the 5G fronthaul market.
Dubbed the AtlasOne, the PAM-4 PHY chip also integrates the laser driver. Marvell claims this is another first: implementing the directly modulated laser (DML) driver in CMOS.
“The common thinking in the industry has been that you couldn’t do a DML driver in CMOS due to the current requirements,” says Matt Bolig, director, product marketing, optical connectivity at Marvell. “What we have shown is that we can build that into CMOS.”
Marvell, through its Inphi acquisition, says it has shipped over 100 million ICs for the radio access network (RAN) and estimates that its silicon is in networks supporting 2 billion cellular users.
“We have been in this business for 15 years,” says Peter Carson, senior director, solutions marketing at Marvell. “We consider ourselves the number one merchant RAN silicon provider.”
Inphi started shipping its Polaris PHY for 5G midhaul and backhaul markets in 2019. “We have over a million ships into 5G,” says Bolig. Now Marvell is adding its AtlasOne PHY for 5G fronthaul.
Mobile traffic
Marvell says wireless data has been growing at a compound annual growth rate (CAGR) of over 60 per cent (2015-2021). Such relentless growth is forcing operators to upgrade their radio units and networks.
Stéphane Téral, chief analyst at market research firm LightCounting, says in the firm’s latest research note on Marvell’s RAN and x-haul silicon strategy that, while 5G rollouts are “going gangbusters” around the world, they are traditional RAN implementations.
By that Téral means 5G radio units linked to a baseband unit that hosts both the distributed unit (DU) and centralised unit (CU).
But as 5G RAN architectures evolve, the baseband unit is being disaggregated, separating the DU and the CU. This is happening because the RAN is such an integral and costly part of the network and operators want to move away from vendor lock-in and expand their marketplace options.
For RAN, this means splitting the baseband functions and standardising interfaces that previously were hidden within custom equipment. Splitting the baseband unit also allows the functionality to be virtualised and be located separately, leading to the various x-haul options.
Approaches to disaggregating the RAN include virtualised RAN and Open RAN. Marvell says Open RAN is still in its infancy but is a key part of the operators’ desire to virtualise and disaggregate their networks.
“Every Open RAN operator that is doing trials or early-stage deployments is also virtualising and disaggregating,” says Carson.
RAN disaggregation is also occurring in the vertical domain: the baseband functions and how they interface to the higher layers of the network. Such vertical disaggregation is being undertaken by the likes of the ONF and the Open RAN Alliance.
The disaggregated RAN – a mixture of the radio, DU and CU units – can still be located at a common site but more likely will be spread across locations.
Fronthaul is used to link the radio unit and DU when they are at separate locations. In turn, the DU and CU may also be at separate locations with the CU implemented in software running on servers deep within the network. Separating the DU and the CU is leading to the emergence of a new link: midhaul, says Téral.
Fronthaul speeds
Marvell says that the first 5G radio deployments use 8 transmitter/ 8 receiver (8T/8R) multiple-input multiple-output (MIMO) systems.
MIMO is a signal processing technique for beamforming, allowing operators to localise the capacity offered to users. An operator may use tens of megahertz of radio spectrum in such a configuration, with the result that the radio unit traffic requires a 10Gbps fronthaul link to the DU.
Leading operators are now deploying 100MHz of radio spectrum and massive MIMO – up to 32T/32R. Such a deployment requires 25Gbps fronthaul links.
“What we are seeing now is those leading operators, starting in the Asia Pacific, while the US operators have spectrum footprints at 3GHz and soon 5-6GHz, using 200MHz instantaneous bandwidth on the radio unit,” says Carson.
Here, an even higher-order 64T/64R massive MIMO will be used, driving the need for 50Gbps fronthaul links. Samsung has demonstrated the use of 64T/64R MIMO, enabling up to 16 spatial layers and boosting capacity by 7x.
“Not only do you have wider bandwidth, but you also have this capacity boost from spatial layering which carriers need in the ‘hot zones’ of their networks,” says Carson. “This is driving the need for 50-gigabit fronthaul.”
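A rough scaling sketch shows why 200MHz radio units push fronthaul to 50 gigabits. It assumes the fronthaul rate grows roughly in proportion to the carrier bandwidth for a comparable number of spatial streams, taking the 100MHz/25Gbps configuration above as the baseline; real rates depend on the functional split, I/Q compression and protocol overheads, so this is an approximation rather than Marvell’s sizing.

```python
# Rough fronthaul scaling: rate assumed proportional to carrier bandwidth for a
# comparable number of spatial streams. Baseline taken from the article:
# a 100MHz massive-MIMO radio unit needing ~25Gbps fronthaul.

BASE_BANDWIDTH_MHZ = 100
BASE_RATE_GBPS = 25.0

def fronthaul_rate_gbps(bandwidth_mhz):
    return BASE_RATE_GBPS * (bandwidth_mhz / BASE_BANDWIDTH_MHZ)

# Doubling the instantaneous bandwidth to 200MHz roughly doubles the requirement:
print(f"200MHz radio unit: ~{fronthaul_rate_gbps(200):.0f} Gbps fronthaul")   # ~50 Gbps
```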
AtlasOne PHY
Marvell says its AtlasOne PAM-4 PHY chip for fronthaul supports an industrial temperature range and reduces power consumption by a quarter compared to its older PHYs. The power-saving is achieved by optimising the PHY’s digital signal processor and by integrating the DML driver.
Earlier this year Marvell announced its 50G PAM-4 Atlas quad-PHY design for the data centre. The AtlasOne uses the same architecture but is packaged for telecom and integrates the DML driver but not the trans-impedance amplifier (TIA).
“In a data centre module, you typically have the TIA and the photo-detector close to the PHY chip; in telecom, the photo-detector has to go into a ROSA (receiver optical sub-assembly),” says Bolig. “And since the photo-detector is in the ROSA, the TIA ends up having to be in the ROSA as well.”
The AtlasOne also supports 10-gigabit and 25-gigabit modes. Not all lines will need 50 gigabits but deploying the PHY future-proofs the link.
The device will start going into modules in early 2022 followed by field trials starting in the summer. Marvell expects the 50G fronthaul market to start in 2023.
RAN and x-haul IC portfolio
One of the challenges of virtualising the RAN is the layer 1 processing, which requires more computation than can be handled in software running on a general-purpose processor.
Marvell supplies two chips for this purpose: the Octeon Fusion and the Octeon 10 data processing unit (DPU), which provide programmability as well as the specialised hardware accelerator blocks needed for 4G and 5G. “You just can’t deploy 4G or 5G on a software-only architecture,” says Carson.
As well as these two ICs and its PHY families for the various x-haul links, Marvell also has a coherent DSP family for backhaul (see diagram). Indeed, LightCounting’s Téral notes how Marvell has all the key components for an all-RAN 5G architecture.
Marvell also offers a 5G virtual RAN (VRAN) DU card that uses the Octeon Fusion IC and says it already has five design wins with major cloud and OEM customers.
Marvell’s latest acquisition: switch-chip firm Innovium

- Innovium will be Marvell’s fifth acquisition in four years
Marvell is buying switch-chip maker, Innovium, for $1.1 billion to bolster its revenues from the lucrative data centre market.
The combination of Innovium with Inphi, Marvell’s most recent $10 billion acquisition, will enable the company to co-package optics alongside the high-bandwidth, low-latency switch chips.
“Inphi has quite a bit of experience shipping silicon photonics with the ColorZ and ColorZ II [modules],” says Nariman Yousefi, executive vice president, automotive, coherent DSP and switch group at Marvell. “And we have programmes inside the company to do co-packaged optics as well.”
Innovium
Innovium’s Teralynx family addresses the needs of large-scale data centres and will complement Marvell’s Prestera switch-chip portfolio that addresses enterprise and carrier applications.
Formed in 2014, Innovium is a private company with a staff of 230, 185 of whom are engaged in R&D. The company has also raised a total of $400 million in funding.
Innovium is already shipping its 12.8-terabit Teralynx 7 to a leading cloud provider and expects revenues of $150 million in 2022. And earlier this year, it announced it shipped over 1 million 400-gigabit switch-silicon ports in 2020.
“The top cloud players are the ones that drive most of the revenues,” says Yousefi. “But there is a long list of customers that are engaged with Innovium at different capacities and there are a bunch of Tier-2s [data centre operators].”
Marvell gained the Xpliant programmable switch-chip architecture for the data centre when it acquired Cavium Networks in 2018, says Devan Adams, principal analyst at LightCounting.
But soon after the acquisition, the Xpliant switch chip line was discontinued as Marvell decided to concentrate on expanding its Prestera chip family.
Now Marvell has returned to the market to gain a scalable, low-latency architecture that addresses the needs of the mega data centre players.
“When you think of the overall data centre market and how it is booming, Innovium makes Marvell’s solutions more attractive to the key cloud customers by helping them expand their switch-chip offerings,” says Adams.

Marvell says it was impressed with the Innovium design team and with the Teralynx architecture when assessing the company as a potential buy. “We also liked the fact that customers have validated the architecture and that it is shipping and in live networks,” says Yousefi.
Broadcom dominates the switch-chip market. According to the market research company, 650 Group, Broadcom had 72 per cent of the 50-gigabit serialiser-deserialiser (serdes) cloud-based switch market in the first quarter, 2021, while Innovium had 27 per cent.
The cloud players want a choice of suppliers, not just for procurement reasons but also to ensure there are suppliers strong enough to address their needs.
This latest acquisition, expected to close before the year-end, will be Marvell’s fifth acquisition in four years.
Marvell acquired Inphi earlier this year and two custom ASIC companies in 2019: Avera Semiconductor, originally the ASIC group of IBM Microelectronics, and Aquantia, which has multi-gigabit PHY expertise. A year before that, Marvell acquired Cavium, as mentioned.
Marvell will use its sales force to promote Innovium’s products to a larger customer base including customers using its Prestera switch chips.
Adams also notes that Marvell has a broad supply chain and a strategic relationship with leading foundry TSMC that will benefit Innovium in the making of its chips, especially when semiconductors are currently in short supply.
Switch chip styles
There are two types of Ethernet switch chips. For the mega data centres, what matters is capacity and the chip’s throughput per watt (gigabits per second per watt). Cloud players need to move traffic efficiently within the data centre and with low latency. Such chips have a streamlined packet-processing capability. Examples include Broadcom’s Tomahawk and Innovium’s Teralynx lines.
In contrast, enterprises need to support various networking protocols and that requires a broad feature set and packet-processing capability. Marvell’s Prestera and Broadcom’s Trident portfolios fall into this category.
“It is hard to design one device that addresses both,” says Yousefi. “That is why there are two different architectures, design teams, databases and chips.”
Marvell highlights Innovium’s Teralynx portfolio’s low power and low latency. “Even though the application for these devices is supposed to be streamlined, Innovium has managed to put in programmability features that makes the architecture more flexible,” says Yousefi. “These are important differentiators.”
Innovium’s Teralynx 8 family includes a 7nm CMOS, 25.6-terabit chip with 112 gigabit-per-second (Gbps) serialisers-deserialisers (serdes). “The Teralynx 8 switch chip is in the bring-up phase with customers; it is not shipping in volume yet,” says Yousefi.
A future Teralynx 9 has also been mentioned.
Yousefi confirms there will be a next-generation 51.2-terabit switch chip and devices beyond that; what the next device will be called is to be determined.
The Marvell acquisition will also combine the serdes expertise of Inphi and Innovium. “We are going to help, but right now we can’t really do much as two separate companies,” says Yousefi.
Integration
Yousefi is also definitive about Marvell’s co-packaged optics plans but points out that the adoption of the technology will take time for the whole industry.
The integration of the Innovium team within Marvell will be fine-tuned once the two companies formally merge. At a high-level, the Innovium team will continue to focus on what it does best: the high-capacity product line, says Yousefi.
“The real opportunity is how do you leverage the collective teams’ knowledge and efficiencies, share the best practices, help each other out during peak resource crunches, and release products more efficiently,” he says.
More acquisitions
The Innovium deal follows the likes of Intel buying Barefoot Networks and Nvidia buying networking specialist Mellanox which designs its own switch chips.
For Adams, it was those deals that suggested it was only a question of time before someone bid for Innovium.
Adams admits he has no insight into Marvell’s acquisition plans, but he points to how Marvell had its own server CPU chip, the ThunderX3 chip based on ARM cores, which was cancelled last year. Could Marvell decide to re-enter the market via the acquisition route?
Another potential technology Marvell could acquire is programmable logic. FPGAs are used in the data centre as accelerators. Adams also points out that certain switch vendors have added FPGAs to their platforms for niche applications such as high-frequency trading.
As for artificial intelligence (AI) hardware, Marvell has its own IP and has added hardware blocks for AI as part of its Octeon 10 design. So perhaps the buying of an AI chip start-up is less likely for now.
Yousefi does not rule out more Marvell acquisitions. “The industry is all about growth and how you can position yourself to do many things,” he says.
But he stresses it will take Marvell time to absorb the latest acquisitions of Inphi and Innovium: “That is just as important as acquiring the right assets.”
Marvell's first Inphi chips following its acquisition

Marvell unveiled three new devices at the recent OFC virtual conference and show.
One chip is its latest coherent digital signal processor (DSP), dubbed Deneb. The other two chips, for use within the data centre, are a PAM-4 (4-level pulse-amplitude modulation) DSP, and a 1.6-terabit Ethernet physical layer device (PHY).
The chips are Marvell’s first announced Inphi products since it acquired the company in April. Inphi’s acquisition adds $0.7 billion to Marvell’s $3 billion annual revenues, while its more than 1,000 staff bring Marvell’s total headcount to 6,000.
Marvell spends 30 per cent of its revenues on R&D.
Acquisitions
Inphi is the latest of a series of Marvell acquisitions as it focusses on data infrastructure.
Marvell acquired two custom ASIC companies in 2019: Avera Semiconductor, originally the ASIC group of IBM Microelectronics, and Aquantia, which has multi-gigabit PHY expertise.
A year earlier Marvell acquired processing and security chip player, Cavium Networks. Cavium had acquired storage specialist, QLogic, in 2017.
These acquisitions have more than doubled Marvell’s staff. Inphi brings electro-optics expertise for the data centre and optical transport and helps Marvell address the cloud and on-premises data centre markets as well as the 5G carrier market.
Marvell is also targeting the enterprise/ campus market and what it highlights as a growth area, automotive. Nigel Alvares, vice president of solutions at Marvell, notes the growing importance of in-vehicle networking, what he calls a ‘data-centre-on-wheels’.
“Inphi’s technology could also help us in automotive as optical technologies are used for self-driving initiatives in future,” says Alvares.
Inphi also brings integration, co-packaging and multi-chip module expertise.

Merchant chip and custom ASIC offerings
Cloud operators and 5G equipment vendors are increasingly developing custom chip designs. Marvell says it is combining its portfolio with their intellectual property (IP) to develop and build custom ICs.
Accordingly, in addition to its merchant chips such as the three OFC-announced devices, Marvell partners with cloud players or 5G vendors, providing them with key IP blocks that are integrated with their custom IP. Marvell can also build the ASICs.
Another chip-design business model Marvell offers is the integration of different hardware in a multi-chip package. The components include a custom ASIC, merchant silicon, high-speed memory and third-party chiplets.
“We co-package and deliver it to a cloud hyperscaler or a 5G technical company,” says Alvares.
Marvell says this chip strategy serves two market sectors: the cloud hyperscalers and the telcos.
Cloud players are developing custom solutions as they become more vertically integrated. They also have deep pockets. But they can’t do everything because they are not chip experts so they partner with companies like Marvell.
“The five to 10 hyperscalers in the world, they are doing so much creative stuff to optimise applications that they need custom silicon,” says Alvares.
The telcos, in contrast, are struggling to grow their revenues and favour merchant ICs, given they no longer have the R&D budgets of the past. It is this split in the marketplace that Marvell’s various chip services are targeting.
OFC announcements
At OFC, Marvell announced the Deneb coherent DSP, used for optical transport including the linking of equipment between data centres.
The Deneb DSP is designed with open standards in mind and complements the 400-gigabit CMOS Canopus DSP announced by Inphi in 2019.
Deneb adds the oFEC forward error correction scheme to support open standards such as OpenZR+, 100-gigabit ZR, the 400-gigabit OpenROADM MSA and CableLabs’ 100-gigabit standard.
The 100-gigabit ZR is targeted at the 5G access market and mobile backhaul. Like the OIF 400G ZR, it supports reaches of 80-120km but uses quadrature phase-shift keying (QPSK) modulation.
“Not only do we support 100 gigabit [coherent] but we also have added the full industrial temperature range, from -40°C to 85°C,” says Michael Furlong, associate vice president, product marketing at Marvell.
The Deneb DSP is sampling now. Both the Deneb and Canopus DSPs will have a role in the marketplace, says Furlong.

Atlas PAM-4 DSP and the 1.6-terabit PHY
Marvell also announced at OFC the Atlas PAM-4 DSP and a dual 800-gigabit PHY device, both used within the data centre.
Atlas advances Marvell’s existing family of Polaris PAM-4 DSPs in that it integrates physical media devices. “We are integrating [in CMOS] the trans-impedance amplifier (TIA) and laser drivers,” says Alvares.
Using the 200-gigabit Atlas reduces an optical module design from three chips to two; the Atlas comprises a transmit chip and a receiver chip (see diagram below). Using the Atlas chips reduces the module’s bill of materials, while power consumption is reduced by a quarter.

The Atlas chips, now sampling, are not packaged but offered as bare die and will be used for 200-gigabit SR4 and FR4 modules. Meanwhile, Marvell’s 1.6-terabit PHY, the 88X93160, is a dual 800-gigabit copper DSP that performs retimer and gearbox functions.
“We view this as the key data centre building block for the next decade,” says Alvares. “The world is just starting to design 100-gigabit serial for their infrastructure.”
The device, supporting sixteen 100-gigabit lanes, is the industry’s first 100-gigabit serial retimer, says Marvell. The device drives copper cable and backplanes and is being adopted for links between the server and the top-of-rack switch or to connect switches in the data centre.
The device is suitable for next-generation 400-gigabit and 800-gigabit Ethernet links that use 100-gigabit electrical serialisers-deserialisers (serdes).
The 5nm CMOS device supports a link budget of over 38dB (decibels) and reduces I/O power by 40 per cent compared to a 50-gigabit PAM-4-based PHY.
The 100-gigabit serdes design will also be used with Marvell’s Prestera switch portfolio.







