Is 6G’s fate to repeat the failings of 5G wireless?

Will the telecom industry embark on another costly wireless upgrade? Telecom consultant and author William Webb thinks so and warns that it risks repeating what happened with 5G.

William Webb

William Webb published the book The 5G Myth in 2016. In it, he warned that the then-emerging 5G standard would prove costly and fail to deliver on the bold promises made for the technology.

Webb sees history repeating itself with 6G, the next wireless standard generation. In his latest book, The 6G Manifesto, he reflects on the emerging standard and outlines what the industry and its most significant stakeholder – the telecom operators – could do instead.

Developing a new-generation wireless standard every decade has proved beneficial, says Webb. However, the underlying benefits of each generation have diminished to the degree that, with 5G, it is questionable whether the latest generation was needed.

Wireless generations

There was no first-generation (1G) cellular standard. Instead, there was a mishmash of analogue cellular standards that were regional and manufacturer-specific.

The second-generation (2G) wireless standard brought technological alignment and, with it, economies of scale. Then, the 3G and the 4G standards advanced the wireless radio’s air interface. 5G was the first time the air interface didn’t change; an essential ingredient of generational change was no longer needed.

The issue now is that the wireless industry is used to new generations. But if the benefits being delivered are diminishing, who exactly is this huge undertaking serving? asks Webb. With 5G, certainly not the operators. The operators have invested heavily in rolling out 5G and may eventually see a return but that is far from their experience to date.

The wireless industry is unique in adopting a generation approach. There are no such generations for cars, aeroplanes, computers, and the internet, says Webb: “Ok, the internet went from IPv4 to IPv6, but that was an update, as and when needed.” With 5G, there was no apparent need for a new generation, he says. It wasn’t as if networks were failing, or there was a fundamental security issue, or that 4G suffered from a lack of bandwidth.

Instead, 5G was driven by academics and equipment vendors. “They postulated what some of the new applications might be,” says Webb. “Some of them were crazy guesses, the most obvious being remote surgery.” That implied a need for faster wireless links and more bandwidth. Extra bandwidth meant higher and wider radio frequency bands, which came at a cost for the operators: spectrum above 3GHz suffers greater signal attenuation, requiring smaller radio cells and greater network investment.

The industry has been working on 6G for several years. Yet, it is still early to discuss the likely outcome. Webb outlines three possible approaches for 6G: HetNets, 5G-on-steroids, and 6G in the form of software updates only.

HetNets

Webb is a proponent of operators collaborating on heterogeneous networks (HetNets).

He says the idea originated with 3G but has never been adopted. The concept requires service providers to collaborate to combine disparate networks — cellular, WiFi, and satellite — to improve connectivity and coverage and ultimately improve the end-user experience.

“Perhaps this is the time to do it,” says Webb, even if he is not optimistic: operators have never backed the idea because they favour their own networks.

In the book The 6G Manifesto, Webb explores the HetNets concept, how it could be implemented and the approach’s benefits. The implementation could also be done primarily in software, which the operators favour for 6G (see below).

“They would need to remove a few things like authentication and native provisioning of voice from their networks,” says Webb. There would also need to be some form of coordinator, essentially a database-switch that could run in the cloud.

5G on steroids

The approach adopted for 5G was application-driven: academics and equipment vendors identified applications and their requirements and developed the necessary technologies. Such an approach for 6G, says Webb, would amount to 5G on steroids. 6G will be faster than 5G, require higher-frequency spectrum and be designed to address more sectors, each with its own requirements.

“The operators understand their economics, of course, and are closer to their customers,” says Webb. It is the operators, not the manufacturers, that should be driving 6G.

6G as software

The third approach is for 6G to be the first cellular generation that involves software only to avoid substantial and costly hardware upgrades.

Webb says the operators have not specified what exactly these software upgrades would do; rather, after their costly 5G network upgrades, they want to avoid another cycle of expensive network investment.

Backing a software approach allows operators to avoid being seen as dragging their feet. Rather, they could point to the existing standards body, the 3GPP, and its releases, which occur roughly every 18 months, enhance the current generation and are largely software-based. This could become the new model in future.

5G could have been avoided and simply been an upgrade to 4.5G, says Webb. With periodic releases and software updates, 6G could be avoided too.

But the operators need to be more vocal, and there is no consensus among them globally. China will deploy 6G, whatever its form. But, warns Webb, if the operators don’t step up, 6G will be forced on them. “Hence my call to arms in the book, which says to the operators: ‘If you want an outcome that is different to 5G, you need to step up’.”

A manifesto

Webb argues that the pressure and expectations surrounding 6G are so great that the likely outcome is a repeat of what happened with 5G.

The logic that 6G is not needed, and that its goals could be met with software upgrades, will not be enough to halt the momentum driving it. 6G will thus not help the operators reverse their fortunes and generate new growth. This is not good news given that service providers already operate in a utility market while facing fierce competition.

“If you look at most utilities – gas, electricity, water – you end up with a monopoly network supplier and then perhaps some competition around the edges,” says Webb. “Telecoms is now a utility in that each mobile operator is delivering something that to every consumer looks indistinguishable.”

Nor is it good news for the equipment vendors. Vendors may drive 6G and get one more generation of equipment sales, but that is just delaying the inevitable.

Webb believes the telcos’ revenues will remain the same, resulting in somewhat profitable businesses: “They’re making more profit than utilities but less than technology companies.”

Webb’s book ends with a manifesto for 6G.

Mobile technology underpins modern life, and always-present connectivity is increasingly important yet must also be affordable to all. He calls for operators to drive the 6G standard and for governments to regulate in a way that benefits citizens’ connectivity services.

Users have not benefitted from 5G. If that is to change with 6G, there needs to be a clear voice arguing for a wireless world that is better for everyone.

 



Is network traffic growth dwindling to a trickle?

“Network capacities are sufficient, and with data usage expected to plateau in the coming years, further capacity expansion is not needed. We have reached the end of history for communications.”

– William Webb, The End of Telecoms History


William Webb has pedigree when it comes to foreseeing telecoms trends.

Webb wrote The 5G Myth in 2016, warning that 5G would be a flop.

In the book, he argued that the wireless standard’s features would create limited interest and fail to grow revenues for mobile operators.

The next seven years saw the telcos promoting 5G and its capabilities. Now, they admit their considerable investments in 5G have delivered underwhelming returns.

His latest book, The End of Telecoms History, argues that telecoms has reached a maturity that satisfies the link speeds needed and that traffic growth is slowing.

“There will be no end of new applications,” says Webb. “But they won’t result in material growth in data requirements or in data speeds.”

What then remains for the telcos is filling in the gaps to provide connectivity everywhere.

Traffic growth slowdown

Earlier this year, AT&T’s CEO, John Stankey, mentioned that its traffic had grown 30 per cent year over year, the third consecutive year of such growth for the telco. The 30 per cent annual figure is the typical traffic growth rate that has been reported for years.

“My take is that we are at about 20 per cent a year annual growth rate worldwide, and it’s falling consistently by about 5 per cent a year,” says Webb.

In 2022, yearly traffic growth was 30 per cent; last year, it was 25 per cent. These are the average growth rates, notes Webb, and there are enormous differences worldwide.

“I was just looking at some data and Greece grew 45 per cent whereas Bahrain declined 10 per cent,” says Webb. “Clearly, there will be big differences between operators.”

He also cites mobile data growth numbers from systems vendor Ericsson. In North America, the growth between 2022 and 2024 was 24 per cent, 17 per cent, and 26 per cent.

“So it is fluctuating around the 20 per cent mark,” says Webb.
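To see what Webb's trajectory implies, here is a toy projection in Python. It is illustrative only and not Webb's own model: the 25 per cent starting rate and the five-percentage-point annual decline are taken from his comments above, and the sketch simply assumes the decline continues until growth reaches zero.

```python
# Toy projection of mobile traffic under a linearly decaying growth rate.
# Starting rate (25%) and annual decline (5 points) come from Webb's figures;
# everything else is an illustrative assumption.

def project_traffic(start_traffic=1.0, start_growth=0.25, decay=0.05, years=6):
    """Yield (year, traffic multiple, growth applied that year)."""
    traffic, growth = start_traffic, start_growth
    for year in range(1, years + 1):
        traffic *= 1 + growth
        yield year, round(traffic, 2), round(growth * 100)
        growth = max(0.0, growth - decay)  # growth never goes negative in this sketch

for year, multiple, pct in project_traffic():
    print(f"Year {year}: traffic x{multiple} (grew {pct}% that year)")
```

Under these assumptions, traffic roughly doubles and then flattens within five or six years, consistent with Webb's point that the end of material growth is in sight.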

Other developments 

What about trends like the ever-greater use of digital technologies experienced by many industries, including telecoms? Or the advent of artificial intelligence (AI), which is leading to significant data centre builds, and how AI is expected to change traffic?

“If you look at all non-personal data use, such as the Internet of Things and so on, traffic levels are tiny,” says Webb. There are exceptions, such as security cameras generating video streams. “I don’t see that trend materially changing overall data rates,” says Webb.

He also doesn’t see AI meaningfully growing overall traffic. AI is useful for improving the running of networks but not changing the amount of wireless traffic. “If anything, it might reduce it because you can be more intelligent about what you need to send,” he says.

While Webb admits that AI data centre builds will require extra fixed networking capacity, as will sharing workloads over distributed data centres in a metropolitan area, he argues that this represents a tiny part of the overall network.

He does not see any new devices emerging that will replace the smartphone, dramatically changing how we consume and interact with data.

5G and 6G

Webb also has doubts about the emerging 6G wireless standard. The academic community is busy developing new capabilities for the next wireless standard. “The problem with that is that academics are generally not grounded in the reality of what will make money in the future,” says Webb. Instead, developers should challenge academics to develop the technologies needed for their applications to succeed.

Webb sees two 6G camps emerging. The first camp wants 6G to address all the shortfalls of 5G using terahertz frequencies and delivering speeds of hundreds of gigabits.

“Let’s max out on everything, and then surely, something wonderful must happen,” says Webb. “This strikes me as not learning the lessons of 5G.”

The second camp, including several telcos, does not want to spend any money on 6G but instead wants the technology, in the form of software updates, to address high operational costs and the difficulties in running different network types.

“In this case, 6G improves the operator’s economics rather than improve the end-user offering, which I think makes sense,” says Webb.

“We may end up in a situation where 6G has all this wondrous stuff, and the operators turn around and say they are not interested,” says Webb. “I see a significant risk for 6G, that it just isn’t ever really deployed anywhere.”

Webb’s career in telecoms spans 35 years. His PhD addressed modulation schemes for radio communications. He spent seven years at the UK regulator Ofcom addressing radio spectrum strategy, and he has also been President of the IET, the UK’s equivalent of the IEEE. Webb also co-founded an IoT startup that Huawei bought. For the last 15 years, he has been a consultant covering telecom strategy and technology.

Outlook

The dwindling growth in traffic will impact the telecom industry.

Webb believes the telcos’ revenues will remain the same, resulting in somewhat profitable businesses. “They’re making more profit than utilities but less than technology companies,” says Webb.

He also expects there will be more mergers, an obvious reaction to a market flattening out. The aim is to improve profitability.

Given his regulatory background, is that likely? Regulators shun consolidation as they want to keep competition high. He expects it to happen indirectly, with telcos increasingly sharing networks. Each market will offer consumers three or four brands but fewer networks; operators merging in all but name.

Will there even be a need for telecom consultants?  “I have to say, as I’ve made these predictions, I’ve been thinking what am I needed for now?” says Webb, laughing.

If he is right, the industry will be going through a period of change.

But if the focus becomes extending connectivity everywhere, there is work to be done in understanding and addressing the regulatory considerations, and also how best to transition the industry.

“I do suspect that just as the rest of the industry is effectively more a utility, it will need fewer and fewer consultants,” he says.


Europe's first glimpse of a live US baseball game

The Radôme protecting the vast horn antenna

It is rare to visit a museum dedicated to telecoms, never mind one set in beautiful grounds. Nor does it often happen that the visit coincides with an important anniversary for the site.

La Cité des Télécoms, a museum set in 11 hectares of land in Pleumeur-Bodou, Brittany, France, is where the first live TV feed sent by satellite from the US to Europe was received.

The Telstar 1 communications satellite was launched 60 years ago, on July 10, 1962. The first transmission, which included part of a live baseball game from Chicago, followed almost immediately.

By then, a vast horn radio antenna had been constructed and was awaiting the satellite’s first signals. The antenna is housed in the Radôme, an inflated dome-shaped skin that protects it from the weather. The antenna is built using 276 tonnes of steel and sits on 4,000 m³ of concrete. The bolts holding the structure together alone weigh 10 tonnes. The Radôme is also the largest unsupported inflated dome in the world.

The antenna continued to receive satellite transmissions until 1985, when the location was classed as a site of national historical importance. The huge horn antenna is unique since its twin in the US has been dismantled.

The Cité des Télécoms museum was opened in 1991 and the site is a corporate foundation supported by Orange.

History of telecoms

A visitor to the museum is guided through a history of telecoms.

The tour begins with key figures of telecoms such as Samuel Morse, Guglielmo Marconi, Lee de Forest and Thomas Edison. Lesser-known inventors are also included, like Claude Chappe, who developed a semaphore system that eventually covered all of France.

The tour moves on to the advent of long-distance transmission of messages using telegraphy. Here, a variety of exquisitely polished wooden telegraphy systems are exhibited. Also included are rooms that explain the development of undersea cables and the advent of optical fibre.

Source: Cité des Télécoms

In the optical section, an exhibit allows a visitor to point a laser at different angles to show how total internal reflection within an optical fibre always guides the incident light to the receiver.

Four video displays expertly explain to the general public single-mode fibre, optical amplification, wavelength-division multiplexing, forward error correction, and digital signal processing.

The digital age

Radio waves and mobile communications follow before the digital world is introduced, starting with George Boole and an interactive display covering Boolean algebra. Other luminaries introduced include Norbert Wiener and Claude Shannon.

There is also an impressive collection of iconic computing and communications devices, including an IBM PC, the Apple II, an early MacBook, generations of mobile phones, and France’s effort to computerise the country, the Minitel system, which was launched in 1982 and only closed down in 2012.

The tour ends with interactive exhibits and displays covering the Web, Bitcoin and 5G.

The Radôme

The visit’s highlight is the Radôme.

On entering, you arrive in a recreated office containing 1960s engineering paraphernalia – a technical drawing board, slide rules, fountain pens, and handwritten documents. A man (in a video) looks up and explains what is happening in the lead-up to the first transmission.

The horn antenna used to receive the first satellite TV broadcasts from the US.

You then enter the antenna control centre and feel the tension and uncertainty as to whether the antenna will successfully receive the Telstar transmission. From there, you enter the vast dome housing the antenna.

TV displays take you through the countdown to the first successful transmission. Then a video display projected onto the vast ceiling gives a whistle-stop tour of the progress made since 1962: images sent from the moon landing in 1969, live World Cup football matches in 1970 through to telecom developments of the 1980s, 1990s, and 2000s.

The video ends with a glimpse of how telecoms may look in future.

Future of telecoms

The Radôme video is the closest the Cité des Télécoms museum comes to predicting the future and more would have been welcome.

But perhaps this is wise since, when you exit the Radôme, a display bordering a circular lawn shows each key year’s telecom highlight from 1987 to 2012.

In 1987, the first optical cable linked Corsica to mainland Europe. The following year the first transatlantic optical cable (TAT-8) was deployed, while Bell Labs demonstrated ADSL in 1989.

The circular lawn display continues. In 1992, SMS was first sent, followed by the GSM standard in 1993. France Telecom’s national network became digital in 1995. And so it goes, from the iPhone in 2007 to the launch of 4G in Marseille in 2012.

There the display stops. There is no mention of Google, data centres, AI and machine learning, network functions virtualization, open RAN or 6G.

The Radôme

A day out in Brittany

The Radôme and the colossal antenna are a must-see, while the museum does an excellent job of demystifying telecoms. The museum is located in the Pink Granite Coast, a prime tourist attraction in Brittany.

Perhaps the museum’s key takeaway is how quickly digitisation and the technologies it has spawned have changed our world.

What lies ahead is anyone’s guess.


Nvidia's plans for the data processor unit

BlueField-3 die. Source: Nvidia

When Nvidia’s CEO, Jensen Huang, discussed its latest 400-gigabit BlueField-3 data processing unit (DPU) at the company’s 2021 GTC event, he also detailed its successor.

Companies rarely discuss chip specifications two generations ahead; the BlueField-3 only begins sampling next quarter.

The BlueField-4 will advance Nvidia’s DPU family.

It will again double traffic throughput, to 800 gigabits per second (Gbps), and almost quadruple the BlueField-3’s integer processing performance.

But one metric cited stood out. The BlueField-4 will increase the number of tera operations per second (TOPS) performed nearly 700-fold: 1,000 TOPS compared to the BlueField-3’s 1.5 TOPS.

Huang said artificial intelligence (AI) technologies will be added to the BlueField-4, implying that the massively parallel hardware used for Nvidia’s graphics processing units (GPUs) is to be grafted onto its next-but-one DPU.

Why add AI acceleration? And will it change the DPU, a relatively new processor class?

Data processor units

Nvidia defines the DPU as a programmable device for networking.

The chip combines general-purpose processing – multiple RISC cores used for control-plane tasks and programmed in a high-level language – with accelerator units tailored for packet-processing data-plane tasks.

“The accelerators perform functions for software-defined networking, software-defined storage and software-defined security,” says Kevin Deierling, senior vice president of networking at Nvidia.

The DPU can be added to a Smart Network Interface Card (SmartNIC) that complements the server’s CPU, taking over the data-intensive tasks that would otherwise burden the server’s most valuable resource.

Other customers use the DPU as a standalone device. “There is no CPU in their systems,” says Deierling.

Storage platforms are one such example, what Deierling describes as a narrowly-defined workload. “They don’t need a CPU and all its cores; what they need is the acceleration capabilities built into the DPU, and a relatively small amount of compute to perform the control-path operations,” says Deierling.

Since the DPU is the server’s networking gateway, it supports PCI Express (PCIe). The PCIe bus interfaces to the host CPU, to accelerators such as GPUs, and supports NVMe storage. NVMe is a non-volatile memory host controller interface specification.

BlueField 3

When announced in 2021, the 22-billion transistor BlueField-3 chip was scheduled to sample this quarter. “We need to get the silicon back and do some testing and validation before we are sampling,” says Deierling.

The device is a scaled-up version of the BlueField-2: it doubles the throughput to 400Gbps and includes more CPU cores: 16 Cortex-A78 64-bit ARM cores.

Nvidia deliberately chose not to use more powerful ARM cores. “The ARM is important, there is no doubt about it, and there are newer classes of ARM,” says Deierling. “We looked at the power and the performance benefits you’d get by moving to one of the newer classes and it doesn’t buy us what we need.”

The BlueField-3 has the equivalent processing performance of 300 X86 CPU cores, says Nvidia, but this is due mainly to the accelerator units, not the ARM cores.

The BlueField-3 input-output [I/O] includes Nvidia’s ConnectX-7 networking unit that supports 400 Gigabit Ethernet (GbE) which can be split over 1, 2 or 4 ports. The DPU also doubles the InfiniBand interface compared to the BlueField-2, either a single 400Gbps (NDR) port or two 200Gbps (HDR) ports. There are also 32 lanes of PCI Express 5.0, each lane supporting 32 giga-transfers-per-second (GT/s) in each direction.
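For context, a back-of-envelope calculation (assuming PCIe 5.0's 128b/130b line coding and ignoring protocol overheads) shows the 32-lane PCIe interface comfortably exceeds the 400Gbps network rate:

```latex
32 \ \text{lanes} \times 32 \ \tfrac{\text{GT}}{\text{s}} \times \tfrac{128}{130}
\approx 1{,}008 \ \text{Gbps} \approx 126 \ \tfrac{\text{GB}}{\text{s}} \ \text{per direction}
```

leaving headroom to feed the host CPU, GPUs and NVMe storage at the full network line rate.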

The memory interface is two DDR5 channels, doubling both the memory performance and the channel count of the BlueField-2.

The data path accelerator (DPA) of the BlueField-3 comprises 16 cores, each supporting 16 instruction threads. Typically, when a packet arrives, it is decrypted and its headers are inspected, after which the accelerators do the work. The threads are used when the specific function needed is not accelerated: the packet is then assigned to a thread and processed in software.

“The DPA is a specialised part of our acceleration core that is highly programmable,” says Deierling.
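A minimal sketch of that dispatch logic, in Python, might look as follows. It is purely illustrative and not Nvidia's firmware: the 16 × 16 = 256 thread count comes from the description above, while the set of accelerated function names and the round-robin assignment are assumptions.

```python
# Illustrative model of DPA-style dispatch: packets whose required function
# has a hardware accelerator are offloaded; anything else lands on one of the
# 16 cores x 16 threads = 256 DPA instruction threads (round-robin assumed).
from itertools import cycle

ACCELERATED = {"ipsec_decrypt", "checksum", "rss", "overlay_decap"}  # assumed names
dpa_threads = cycle(range(16 * 16))

def dispatch(packet: dict) -> str:
    if packet["function"] in ACCELERATED:
        return f"accelerator:{packet['function']}"   # hardware path
    return f"dpa-thread:{next(dpa_threads)}"         # software path on a DPA thread

for pkt in ({"function": "ipsec_decrypt"}, {"function": "custom_telemetry"}):
    print(pkt["function"], "->", dispatch(pkt))
```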

Other programmable logic blocks include the accelerated switching and packet processing (ASAP2) engine that parses packets. It inspects packet fields looking for a match that tells it what to do, such as dropping the packet or rewriting its header.

In-line acceleration

The BlueField-3 implements the important task of security.

A packet can have many fields and encapsulations. For example, the fields can include a TCP header, quality of service, a destination IP and an IP header. These can be encapsulated into an overlay such as VXLAN and further encapsulated into a UDP packet before being wrapped in an outer IP datagram that is encrypted and sent over the network. Then, only the IPSec header is exposed; the remaining fields are encrypted.
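To make the layering concrete, here is a small sketch using the Scapy library: an inner TCP/IP flow is wrapped in a VXLAN overlay, a UDP header and an outer IP datagram. The addresses and VNI are made up, and the final IPsec encryption step, which would leave only the IPsec header visible on the wire, is noted in a comment rather than performed.

```python
# Sketch of the encapsulation described above, built with Scapy.
from scapy.all import Ether, IP, UDP, TCP
from scapy.layers.vxlan import VXLAN

inner = Ether() / IP(src="10.0.0.1", dst="10.0.0.2") / TCP(dport=443)    # tenant flow
overlay = VXLAN(vni=42) / inner                                           # VXLAN overlay
outer = IP(src="192.0.2.1", dst="192.0.2.2") / UDP(dport=4789) / overlay  # outer datagram

print(outer.summary())
# In practice this outer datagram would now be wrapped in IPsec ESP and
# encrypted, so only the IPsec header is exposed to the network.
```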

Deierling says the BlueField-3 does the packet encryption and decryption in-line.

For example, the DPU uses the in-line IPsec decode to expose the headers of the various virtual network interfaces – the overlays – of a received packet. Picking the required overlay, the packet is sent through a set of service-function chains that use all the available accelerators, such as tackling distributed denial-of-service attacks, implementing a firewall, and load balancing.

“You can do storage, you can do an overlay, receive-side scaling [RSS], checksums,” says Deierling. “All the accelerations built into the DPU become available.”

Without in-line processing, the received packet goes through a NIC and into the memory of the host CPU. There, it is encrypted and hence opaque; the packet’s fields can’t benefit from the various acceleration techniques. “It is already in memory when it is decrypted,” says Deierling.

The DPU and its functional units are shown within the dotted line; the host processor here is an x86 CPU. Source: Nvidia

Often, with the DPU, the received packet is decrypted and passed to the host CPU where the full packet is visible. Then, once the host application has processed the data, the data and packet may be encrypted again before being sent on.

“In a ‘zero-trust’ environment, there may be a requirement to re-encrypt the data before sending it onto the next hop,” says Deierling. “In this case, we just reverse the pipeline.”

An example is confidential healthcare information where data needs to be encrypted before being sent and stored.

DPU evolution

There are many applications set to benefit from DPU hardware. These cover the many segments Nvidia is addressing, including AI, virtual worlds, robotics, self-driving cars, 5G and healthcare.

All need networking, storage and security. “Those are the three things we do but it is software-defined and hardware-accelerated,” says Deierling.

Nvidia has an ambitious target of launching a new DPU every 18 months. That suggests the BlueField-4 could sample as early as the end of 2023.

The 800-gigabit BlueField-4 will have 64 billion transistors and nearly quadruple the integer processing performance of the BlueField-3: from 42 to 160 SPECint.

Nvidia says its DPUs, including the BlueField-4, are evolutionary in how they scale the ARM cores, accelerators and throughput. However, the AI acceleration hardware added to the BlueField-4 will change the nature of the DPU.

“What is truly salient is that [1,000] TOPS number,” says Deierling. “And that is an AI acceleration; that is leveraging capabilities Nvidia has on the GPU side.”

Self-driving cars, 5G and robotics

An AI-assisted DPU will support such tasks as video analytics, 5G and robotics.

For self-driving cars, the DPU will reside in the data centre, not in the car. But that too will change. “Frankly, the car is becoming a data centre,” notes Deierling.

Deep learning currently takes place in the data centre but as the automotive industry adopts Ethernet, a car’s sensors – lidar, radar and cameras – will send massive amounts of data which an IC must comprehend.

This is relevant not just for automotive but all applications where data from multiple sensors needs to be understood.

Deierling describes Nvidia as an AI-on-5G company.

“We have a ton of different things that we are doing and for that, you need a ton of parallel-processing capabilities,” he says. This is why the BlueField-4 is massively expanding its TOPS rating.

He describes how a robot on an automated factory floor will eventually understand its human colleagues.

“It is going to recognize you as a human being,” says Deierling. “You are going to tell it: ‘Hey, stand back, I’m coming in to look at this thing’, and the robot will need to respond in real-time.”

Video analytics, voice processing, and natural language processing are all needed while the device will also be running a 5G interface. Here, the DPU will reside in a small mobile box: the robot.

“Our view of 5G is thus more comprehensive than just a fast pipe that you can use with a virtual RAN [radio access network] and Open RAN,” says Deierling. “We are looking at integrating this [BlueField-4] into higher-level platforms.”


Marvell's 50G PAM-4 DSP for 5G optical fronthaul

Marvell's wireless portfolio of ICs. Source: Marvell.

  • Marvell has announced the first 50-gigabit 4-level pulse-amplitude modulation (PAM-4) physical layer (PHY) for 5G fronthaul.
  • The chip completes Marvell’s comprehensive portfolio for 5G radio access network (RAN) and x-haul (fronthaul, midhaul and backhaul).

Marvell has announced what it claims is an industry-first: a 50-gigabit PHY for the 5G fronthaul market.

Dubbed the AtlasOne, the PAM-4 PHY chip also integrates the laser driver. Marvell claims this is another first: implementing the directly modulated laser (DML) driver in CMOS.

“The common thinking in the industry has been that you couldn’t do a DML driver in CMOS due to the current requirements,” says Matt Bolig, director, product marketing, optical connectivity at Marvell. “What we have shown is that we can build that into CMOS.”

Marvell, through its Inphi acquisition, says it has shipped over 100 million ICs for the radio access network (RAN) and estimates that its silicon is in networks supporting 2 billion cellular users.

“We have been in this business for 15 years,” says Peter Carson, senior director, solutions marketing at Marvell. “We consider ourselves the number one merchant RAN silicon provider.”

Inphi started shipping its Polaris PHY for 5G midhaul and backhaul markets in 2019. “We have over a million ships into 5G,” says Bolig. Now Marvell is adding its AtlasOne PHY for 5G fronthaul.

Mobile traffic

Marvell says wireless data has been growing at a compound annual growth rate (CAGR) of over 60 per cent (2015-2021). Such relentless growth is forcing operators to upgrade their radio units and networks.

Stéphane Téral, chief analyst at market research firm, LightCounting, in its latest research note on Marvell’s RAN and x-haul silicon strategy, says that while 5G rollouts are “going gangbusters” around the world, they are traditional RAN implementations.

By that Téral means 5G radio units linked to a baseband unit that hosts both the distributed unit (DU) and centralised unit (CU).

But as 5G RAN architectures evolve, the baseband unit is being disaggregated, separating the distributed unit (DU) and centralised unit (CU). This is happening because the RAN is such an integral and costly part of the network and operators want to move away from vendor lock-in and expand their marketplace options.

For RAN, this means splitting the baseband functions and standardising interfaces that previously were hidden within custom equipment. Splitting the baseband unit also allows the functionality to be virtualised and be located separately, leading to the various x-haul options.

The RAN is being disaggregated in ways that include virtualised RAN and Open RAN. Marvell says Open RAN is still in its infancy but is a key part of the operators’ desire to virtualise and disaggregate their networks.

“Every Open RAN operator that is doing trials or early-stage deployments is also virtualising and disaggregating,” says Carson.

RAN disaggregation is also occurring in the vertical domain: the baseband functions and how they interface to the higher layers of the network. Such vertical disaggregation is being undertaken by the likes of the ONF and the O-RAN Alliance.

The disaggregated RAN – a mixture of the radio, DU and CU units – can still be located at a common site but more likely will be spread across locations.

Fronthaul is used to link the radio unit and DU when they are at separate locations. In turn, the DU and CU may also be at separate locations with the CU implemented in software running on servers deep within the network. Separating the DU and the CU is leading to the emergence of a new link: midhaul, says Téral.

Fronthaul speeds

Marvell says that the first 5G radio deployments use 8 transmitter/ 8 receiver (8T/8R) multiple-input multiple-output (MIMO) systems.

MIMO is a signal processing technique for beamforming, allowing operators to localise the capacity offered to users. An operator may use tens of megahertz of radio spectrum in such a configuration, with the result that the radio unit’s traffic requires a 10Gbps fronthaul link to the DU.

Leading operators are now deploying 100MHz of radio spectrum and massive MIMO – up to 32T/32R. Such a deployment requires 25Gbps fronthaul links.

“What we are seeing now is those leading operators, starting in the Asia Pacific, while the US operators have spectrum footprints at 3GHz and soon 5-6GHz, using 200MHz instantaneous bandwidth on the radio unit,” says Carson.

Here, an even higher-order 64T/64R massive MIMO will be used, driving the need for 50Gbps fronthaul links. Samsung has demonstrated the use of 64T/64R MIMO, enabling up to 16 spatial layers and boosting capacity by 7x.

“Not only do you have wider bandwidth, but you also have this capacity boost from spatial layering which carriers need in the ‘hot zones’ of their networks,” says Carson. “This is driving the need for 50-gigabit fronthaul.”

AtlasOne PHY

Marvell says its AtlasOne PAM-4 PHY chip for fronthaul supports an industrial temperature range and reduces power consumption by a quarter compared to its older PHYs. The power-saving is achieved by optimising the PHY’s digital signal processor and by integrating the DML driver.

Earlier this year Marvell announced its 50G PAM-4 Atlas quad-PHY design for the data centre. The AtlasOne uses the same architecture but differs in that it is integrated into a package for telecom and integrates the DML driver but not the trans-impedance amplifier (TIA).

“In a data centre module, you typically have the TIA and the photo-detector close to the PHY chip; in telecom, the photo-detector has to go into a ROSA (receiver optical sub-assembly),” says Bolig. “And since the photo-detector is in the ROSA, the TIA ends up having to be in the ROSA as well.”

The AtlasOne also supports 10-gigabit and 25-gigabit modes. Not all lines will need 50 gigabits but deploying the PHY future-proofs the link.

The device will start going into modules in early 2022 followed by field trials starting in the summer. Marvell expects the 50G fronthaul market to start in 2023.

RAN and x-haul IC portfolio

One of the challenges of virtualising the RAN is the layer-one processing, which requires more computation than can be handled in software running on a general-purpose processor.

Marvell supplies two chips for this purpose: the Octeon Fusion and the Octeon 10 data processing unit (DPU), which provide programmability as well as the specialised hardware accelerator blocks needed for 4G and 5G. “You just can’t deploy 4G or 5G on a software-only architecture,” says Carson.

As well as these two ICs and its PHY families for the various x-haul links, Marvell also has a coherent DSP family for backhaul (see diagram). Indeed, LightCounting’s Téral notes how Marvell has all the key components for an all-RAN 5G architecture.

Marvell also offers a 5G virtual RAN (vRAN) DU card that uses the Octeon Fusion IC and says it already has five design wins with major cloud and OEM customers.


Marvell exploits 5nm CMOS to add Octeon 10 DPU smarts

Jeffrey Ho

The Octeon family has come a long way since the networking infrastructure chip was introduced by Cavium Networks in 2005.

Used for data centre switches and routers, the original chip family featured 1 to 16 64-bit MIPS cores and hardware acceleration units for packet processing and encryption. The devices were implemented using foundry TSMC’s 130nm CMOS process.

Marvell, which acquired Cavium in 2018, has taped out the first two devices of its latest, seventh-generation Octeon 10 family.

The devices, coined data processing units (DPU), will feature up to 36 state-of-the-art ARM cores, support a 400-gigabit line rate, 1 terabit of switching capacity, and dedicated hardware for machine-learning and vector packet processing (VPP).

Marvell is using TSMC’s latest 5nm CMOS process to cram all these functions on the DPU system-on-chip.

The 5nm-implemented Octeon 10 coupled with the latest ARM cores and improved interconnect fabric will triple data processing performance while halving power consumption compared to the existing Octeon TX2 DPU.

DPUs join CPUs and GPUs

The DPU is not a new class of device but the term has become commonplace for a processor adept at computing and moving and processing packets.

Indeed, the DPU is being promoted as a core device in the data centre alongside central processing units (CPUs) and graphic processing units (GPUs).

As Marvell explains, a general-purpose CPU can perform any processing task but it doesn’t have the computational resources to meet all requirements. For certain computationally-intensive tasks like graphics and artificial intelligence, for example, the GPU is far more efficient.

The same applies to packet processing. The CPU can perform data-plane processing tasks but it is inefficient when it comes to intensive packet processing, giving rise to the DPU.

“The CPU is just not effective from a total cost of ownership, power and performance point of view,” says Nigel Alvares, vice president of solutions marketing at Marvell.

Data-centric tasks

The DPU is used for smart network interface controller (SmartNIC) cards found in computer servers. The DPU is also suited for standalone tasks at the network edge and for 5G.

Marvell says the Octeon DPU can be used for data centres, 5G wireless transport, SD-WAN, and fanless boxes for the network edge.

Data centre computation is moving from application-centric to more data-centric tasks, says Marvell. Server applications used to host all the data they needed when executing algorithms. Now applications gather data from various compute clusters and locations.

“The application doesn’t have all the data but there is a lot of data that needs to be pumped into the application from many points,” says Jeffrey Ho, senior product manager at Marvell. “So a lot of network overlay, a lot of East-West traffic.”

This explains the data-centric nature of tasks or, as Ho describes it, the data centre appearing as a mesh of networks: “It’s a core network, it is a router network, it is an enterprise network – all in one block.”

Octeon 10 architecture

The Octeon 10 family uses ARM’s latest core architecture – the Neoverse N2, Arm’s first Armv9 infrastructure CPU – for general-purpose computational tasks. Each ARM core has access to hierarchical cache memory and external DDR5 SDRAM memory.

The initial Octeon 10 family starts with the CN103XX, which has up to eight ARM N2 cores, each with Level 1 and private Level 2 caches, alongside shared Level 2 and Level 3 caches (8MB and 16MB, respectively).

The most powerful DPU of the Octeon 10 family is the DPU400 which will have up to 36 ARM cores and 36MB level 2 and 72MB level 3 caches.

“Then you have the acceleration hardware that is very friendly to this generic compute,” says Ho.

Source: Marvell

One custom IP block is for vector packet processing (VPP). VPP has become popular since becoming available as open-source software that batch-processes packets with similar attributes. Marvell says that until now, hardware has processed packets one at a time, such that the potential of VPP has not been fully realised.

The Octeon 10 is the first device family to feature hardware for VPP acceleration. Accordingly, only one look-up table operation and one logical decision may be required before header manipulation is performed for each of the grouped packets. The specialised hardware accelerates VPP by between 3x and 5x.
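A toy illustration of the batching idea – not Marvell's implementation – groups packets by a header signature so that the forwarding lookup and decision happen once per group rather than once per packet. The table entries and field names are assumptions.

```python
# Toy vector-packet-processing sketch: batch packets with the same header
# signature, do one lookup and one decision per batch, apply it to all.
from collections import defaultdict

FORWARDING_TABLE = {                       # assumed entries for illustration
    ("10.1.0.0/16", 443): "rewrite-and-forward",
    ("10.2.0.0/16", 80): "drop",
}

def process(packets):
    batches = defaultdict(list)
    for pkt in packets:
        batches[(pkt["dst_prefix"], pkt["dport"])].append(pkt)   # group by signature
    for signature, group in batches.items():
        action = FORWARDING_TABLE.get(signature, "send-to-cpu")  # one lookup per batch
        for pkt in group:
            pkt["action"] = action                               # applied to the whole batch
    return packets

pkts = [{"dst_prefix": "10.1.0.0/16", "dport": 443} for _ in range(4)]
print(process(pkts)[0]["action"], "applied to", len(pkts), "packets with one lookup")
```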

The DPU also integrates on-chip hardware for machine-learning inferencing tasks.

Machine learning can be applied to traffic patterns on a compute cluster such that every day or so, newly learned models can be downloaded onto the DPU for smart malware detection. This is learnt behaviour; no rules need be written in code, says Marvell. Machine learning can determine if a packet is malware to an accuracy of 80 per cent, says Ho.

The hardware can even identify suspect packets by learning application types even when the packets themselves are encrypted using the IPSec protocol.

The DPU’s machine learning inference hardware can also be used for other tasks such as beamforming optimisation in cellular networks.

As for the 400-gigabit rating of the Octeon DPU, this is the maximum input and output the DPU’s CPU cores can cope with if every packet needs processing. And when each packet passes through the IPsec encryption engines, the maximum line rate is also 400 gigabits.

In turn, if a packet need not pass through the CPU or no cryptography is required, one terabit of Layer 2/ Layer 3 switching can be done on-chip.

“All these are separate accelerator capabilities of the platform,” says Ho. “The data path bandwidth of the DPU is 400Gbps+, IPSec throughput is 400Gbps+, and the switch capability is 1 terabit.”

Using software, the DPU accelerators are configured according to the data processing needs which may use all or some of the on-chip accelerators, he says.

Nigel Alvares

4G and 5G cellular systems

For radio access networks, the radio head units talk to a distributed unit (DU) and a centralised unit (CU). (See diagram.)

Source: Marvell

The DU chassis typically houses six or eight line cards. The DU has a main controller that connects all the signals and backhauls them to the CU. Until now, this has required a two-chip solution, with a switch chip next to each Octeon.

Using a 5nm process, the switch-integrated Octeon DPU reduces the DU’s overall power consumption and bill of materials. This Octeon DPU can be used for the DU, the fronthaul gateway and even the CU.

The DU chassis example also exploits the DPU’s 1-terabit switching capacity. Here, six Octeon Fusion-O chips, which perform Layer 1 processing, are connected to six radio units. Each of the six Fusion-O chips connects to the DPU via a 50-gigabit serialiser/deserialiser (serdes).

Typical DU designs may use two Octeon DPUs, the second being a standby host DPU. This accounts for six line cards and two Octeon DPUs per DU chassis.

Source: Marvell

Market status

Marvell says that for 100-gigabit-throughput DPUs, up to 60 per cent of volumes shipped are used in the cloud and 40 per cent at the network edge.

Since throughput rates in the cloud are growing faster than at the network edge, as reflected in the advent of 200- and 400-gigabit SmartNIC cards, the overall proportion of devices used for the cloud will rise.

The first two Octeon 10 devices taped out two months ago were the CN106XX and CN106XXS. These devices will sample in the second half of 2021.

The two will be followed by the CN103XX which is expected around spring 2022 and following that will be the DPU400.


Nokia shares its vision for cost-reduced coherent optics

Nokia explains why coherent optics will be key for high-speed short-reach links and shares some of its R&D activities. The latest in a series of articles addressing what next for coherent.

Part 3: Reducing cost, size and power

Coherent optics will play a key role in the network evolution of the telecom and webscale players.

The modules will be used for ever-shorter links to enable future cloud services delivered over 5G and fixed-access networks.

The first uses will be to link data centres and support traffic growth at the network edge.

This will be followed by coherent optics being used within the data centre, once traffic growth requires solutions that 4-level pulse-amplitude modulation (PAM4) direct-detect optics can no longer address.

“If you look at PAM4 up to 100 gigabit for long reach and extended reach optics – distances below 80km – it does not scale to higher data rates,” says Marc Bohn, part of product management for Nokia’s optical subsystem group. “It only scales if you use 100-gigabit in parallel.”

However, to enable short-reach coherent optics, its cost, size and power consumption will need to be reduced significantly. Semiconductor packaging techniques will need to be embraced as will a new generation of coherent digital signal processors (DSPs).

Capacity growth

The adoption of network-edge and on-premises cloud technologies is fueling capacity growth, says Tod Sizer, smart optical fabric & devices research lab leader at Nokia Bell Labs.

Nokia says capacity is growing at 50 per cent per annum and even faster within the data centre: for every gigabyte entering a data centre, ten gigabytes are transported within it.

“All of this is driving huge amounts of growth in optical capacity at shorter distances,” says Sizer. “To meet that [demand], we need to have coherent solutions to take over where PAM-4 stops.”

Sizer oversees 130 engineers whose research interests include silicon photonics, coherent components and coherent algorithms.

Applications

As well as data centre interconnect, coherent optics will be used for 5G, access and cable networks; markets also highlighted by Infinera and Acacia Communications.

Marc Bohn

Nokia says the first driver is data centre interconnect.

The large-scale data centre operators triggered the market for 80-120km coherent pluggables with the 400ZR specification for data centre interconnect.

“Right now, with the different architectures in data centres, these guys are saying 80-120km may be an overshoot, maybe we need something for shorter distances to be more efficient,” says Bohn. “Certainly, coherent can tackle that and that is what we are preparing for because there is no alternative, only coherent can cover that space.”

5G is also driving the need for greater bandwidth.

“Traditionally a whole load of processing has been done at the remote radio head but increasingly, for cost and performance reasons, people are looking at pulling the processing back into the data centre,” says Sizer.

Another traffic driver is how each cellular antenna has three sectors and can use multiple frequency bands.

“Some research we are looking at requires 400 gigabits and above,” says Sizer. “If you want to do a full [mobile] front haul for a massive MIMO (multiple input, multiple output) array, for example.”

Challenges

Several challenges need to be overcome before coherent modules are used widely for shorter-reach links.

To reduce coherent module cost, the optics and DSP need to be co-packaged, borrowing techniques developed by the chip industry.

“Optical and electrical should be brought close together,” says Bohn. “[They should be] co-designed and co-packaged, and the ideal candidate for that is to combine silicon photonics and the DSP.”

The aim is to turn complex designs into a system-on-chip. “Both [the DSP and silicon photonics] are CMOS and you can apply 2D and 3D [die] stacking multi-chip module techniques,” says Bohn, who contrasts it with the custom and manual manufacturing techniques used today.

The coherent DSP also needs to be much simpler than the high-end DSPs used for long-distance optical transport.

For example, the dispersion compensation, which accounts for a significant portion of the chip’s circuitry, is less demanding for shorter links. The forward-error-correction scheme used can also be relaxed as can the bit precision of the analogue-to-digital and digital-to-analogue converters.
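A rough worked example shows why the dispersion-compensation block shrinks so much. The channel memory the equaliser must span grows linearly with link length; using the common approximation below with assumed typical values (dispersion D ≈ 17 ps/nm/km, wavelength λ ≈ 1550 nm, symbol rate R_s ≈ 60 Gbaud):

```latex
N_{\text{taps}} \gtrsim \frac{|D| \, L \, \lambda^{2} R_{s}^{2}}{c}
\quad\Rightarrow\quad
N_{80\,\text{km}} \approx 40 \ \text{symbols}, \qquad
N_{1200\,\text{km}} \approx 590 \ \text{symbols}
```

so an 80km link needs roughly one-fifteenth of the equaliser length of a 1,200km link, with a corresponding saving in chip area and power.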

Nokia can co-design the silicon photonics and the DSP following its acquisition of Elenion. Nokia is also exploiting Elenion’s packaging know-how and the partnerships it has developed.

Inside the data centre

Nokia highlights two reasons why coherent will eventually be used within the data centre.

The first is the growth in capacity needed inside the data centre. “For the same reason we believe coherent will be the right solution for data centre interconnect and access, the same argument can be made within the data centre,” says Sizer.

A campus data centre is distributed across several buildings and linking them is driving a need for 400-gigabit lanes or more.

This requires a ZR-like solution but for 2km or so rather than 80km.

“It is one of the solutions certainly but that will be driven an awful lot by whether we can make cost-effective solutions to meet the cost targets of the data centre,” says Sizer. That said, there are other ways this can be addressed such as adding fibre.

“Having parallel systems is another area of ongoing research,” says Sizer. “We may need to have unique solutions if traffic grows faster inside the data centre than outside such as spatial-division multiplexing as well as coherent.”

The use of coherent interfaces for networking inside the data centre will take longer.

Bohn points out that 51.2-terabit and 102.4-terabit switches will continue to be served using direct-detect optics but after that, it is unclear because direct-detect optics tops out at 100-gigabits or perhaps 200-gigabits per lane.

“With coherent, it is much easier to get to higher data rates especially over shorter distances,” says Bohn.

Another development benefitting the use of coherent is the next Ethernet standard after 400 Gigabit Ethernet (GbE).

“My research team is looking at that and, in particular, 1.6 Terabit Ethernet (TbE) which is fairly out in the future,” says Sizer. “It will demand a coherent solution, as I expect 800GbE will as well.”

Work to define the next Ethernet standard is starting now and will only be completed in 2025 at the earliest.


Acacia targets access networks with coherent QSFP-DD

Tom Williams

  • Acacia Communications has announced a 100-gigabit coherent QSFP-DD pluggable module.
  • The module is the first of several for aggregation in the access network.

The second article addressing what next for coherent

Part 2: 100-gigabit coherent QSFP-DD

Acacia Communications has revisited 100-gigabit coherent but this time for access rather than metro networks.

Acacia’s metro 100-gigabit coherent pluggable product, a CFP, was launched in 2014. The pluggable has a reach from 80km to 1,200km and consumes 24-26W.

The latest coherent module is the first QSFP-DD to support a speed lower than the 400-gigabit 400ZR and ZR+ applications that have spurred the coherent pluggable market.

The launch of a 100-gigabit coherent QSFP-DD reflects a growing need to aggregate 10 Gigabit Ethernet (GbE) links at the network edge as 5G and fibre are deployed.

“The 10GbE links in all the different types of access networks highlight a need for a cost-effective way to do this aggregation,” says Tom Williams, vice president of marketing at Acacia.

Why coherent?

The deployment of 5G, business services, 10-gigabit passive optical networking (PON) and distributed access architecture (DAA) are driving greater traffic at the network edge.

Direct-detection optics is the main approach used for aggregation but Acacia argues coherent is now a contender.

Until now, Acacia has only been able to offer coherent metro products for access. The company believes a 100-gigabit coherent module is timely given the network edge traffic growth coupled with the QSFP-DD form factor being suited for the latest aggregation and switch platforms. Such platforms are not the high-capacity switches used in data centres yet port density still matters.

“We think we can trigger a tipping point and drive coherent adoption for these applications,” says Williams.

Using coherent brings robustness long associated with optical transport networks. “You just plug both ends in and it works,” he says.

In access, the quality of fibre in the network varies. With coherent, there is no need for an engineer to do detailed characterisations of the link thereby benefiting operational costs.

Adopting coherent technology for access also provides a way to scale. “You may only need 100 gigabits today but there is a clear path to 200 and 400 gigabit and the use of DWDM [dense wavelength-division multiplexing],” says Williams.

100-gigabit QSFP-DD

Acacia’s 100-gigabit QSFP-DD uses a temperature-controlled fixed laser and has a reach of 120km. The 120km span may rarely be needed in practice – up to 80km will meet most applications – but the extra margin will accommodate any vagaries in links.

The module uses Acacia’s 7nm CMOS low-power Greylock coherent digital signal processor (DSP). The Greylock is Acacia’s third-generation low power DSP chip that is used for its 400ZR and ZR+ modules.

The 100-gigabit QSFP-DD shares the same packaging as the 400ZR and ZR+ modules. The DSP, silicon-photonics photonic integrated circuit (PIC), modulator driver and trans-impedance amplifier (TIA) are all assembled into one package using chip-stacking techniques, what Acacia calls an opto-electronic multi-chip module (OEMCM).

“Everything other than the laser is in a single package,” says Williams. “The more we make optics look like electronics and the fewer interconnect points we have, the higher the reliability will be.”

The packaging approach brings size and optical performance benefits. The optics and DSP must be tightly coupled to ensure signal integrity as symbol rates rise for 400-gigabit and soon 800-gigabit data rates. But this is less of an issue at 100 gigabits, given the symbol rate is only 32 gigabaud.
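A quick calculation shows where the 32 gigabaud figure comes from, assuming the dual-polarisation QPSK format typically used for 100-gigabit coherent (four bits per symbol) and roughly 25-30 per cent overhead for forward error correction and framing:

```latex
\frac{100 \ \text{Gbps}}{2 \ \text{polarisations} \times 2 \ \tfrac{\text{bits}}{\text{symbol}}}
= 25 \ \text{Gbaud}
\;\;\xrightarrow{\ +\,\text{FEC and framing overhead}\ }\;\;
\approx 32 \ \text{Gbaud}
```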

Opportunities

The 100-gigabit QSFP-DD is now sampling and undergoing qualification. Acacia has yet to announce its general availability.

The company is planning other coherent modules for access including a tunable laser-based QSFP-DD as well as designs that meet various environmental requirements.

“We view coherent as moving into the access market and that will require solutions that address the entire market,” says Williams. That said, Acacia admits uncertainty remains as to how widely coherent will be adopted.

“The market has to play out and there are other competitive solutions,” says Williams. “We believe coherent will be the right solution but how that plays out near- to mid-term is uncertain.”


Is traffic aggregation the next role for coherent?

Ciena and Infinera have each demonstrated the transmission of 800-gigabit wavelengths over near-1,000km distances, continuing coherent's marked progress. But what next for coherent now that high-end optical transmission is approaching the theoretical limit? Can coherent compete over shorter spans and will it find new uses?

Part 1: XR Optics

“I’m going to be a bit of a historian here,” says Dave Welch, when asked about the future of coherent.

Interest in coherent started with the idea of using electronics rather than optics to tackle dispersion in fibre. Using electronics for dispersion compensation made optical link engineering simpler.

 

Dave Welch

Coherent then evolved as a way to improve spectral efficiency and reduce the cost of sending traffic, measured in gigabit-per-dollar.

“By moving up the QAM (quadrature amplitude modulation) scale, you got both these benefits,” says Welch, the chief innovation officer at Infinera.

Improving the economics of traffic transmission still drives coherent. Coherent transmission offers predictable performance over a range of distances while non-coherent optics links have limited spans.

But coherent comes at a cost. The receiver needs a local oscillator - a laser source - and a coherent digital signal processor (DSP).

Infinera believes coherent is now entering a phase that will add value to networking. “This is less about coherent and more about the processor that sits within that DSP,” says Welch.

Aggregation

Infinera is developing technology - dubbed XR Optics - that uses coherent for traffic aggregation in the optical domain.

The 'XR’ label is a play on 400ZR, the 400-gigabit pluggable optics coherent standard. XR will enable point-to-point spans like ZR optics but also point-to-multipoint links.

Infinera, working with network operators, has been assessing XR optics’ role in the network. The studies highlight how traffic aggregation dictates networking costs.

“If you aggregate traffic in the optical realm and avoid going through a digital conversion to aggregate information, your network costs plummet,” says Welch.

Are there network developments that are ripe for such optical aggregation?

“The expansion of bandwidth demand at the network edge,” says Rob Shore, Infinera’s senior vice president of marketing. “It is growing, and it is growing unpredictably.”

XR Optics

XR optics uses coherent technology and Nyquist sub-carriers. Instead of a laser generating a single carrier, pulse-shaping at the optical transmitter is used to create multiple carriers, dubbed Nyquist sub-carriers.

The sub-carriers carry the same information as a single carrier but each one has a lower symbol rate. The lower symbol rate improves tolerance to non-linear fibre effects and enables the use of lower-speed electronics. This benefits long-distance transmissions.

But sub-carriers also enable traffic aggregation. Traffic is fanned out over the Nyquist sub-carriers. This enables modules with different capacities to communicate, using the sub-carrier as the basic unit of data rate. For example, a 25-gigabit XR module with a single sub-carrier and a 100-gigabit XR module with four sub-carriers can both talk to a 400-gigabit module that supports 16.
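
A minimal sketch of this capacity arithmetic follows, assuming the 25-gigabit sub-carrier as the basic unit; the bookkeeping below is illustrative only and is not Infinera's implementation.

# Illustrative sub-carrier bookkeeping for a point-to-multipoint hub (assumes 25 Gbps per Nyquist sub-carrier)
SUBCARRIER_GBPS = 25

def subcarriers_needed(module_gbps):
    # Number of sub-carriers a module of a given capacity occupies
    return module_gbps // SUBCARRIER_GBPS

hub_pool = subcarriers_needed(400)            # a 400-gigabit hub module exposes 16 sub-carriers
edge_modules = [100, 25, 25, 25, 25]          # hypothetical edge sites: one 100G and four 25G modules
assigned = sum(subcarriers_needed(m) for m in edge_modules)   # 4 + 1 + 1 + 1 + 1 = 8
spare = hub_pool - assigned                   # 8 sub-carriers left to reassign as edge demand grows
print(hub_pool, assigned, spare)              # 16 8 8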

It means that optics is no longer limited to a fixed point-to-point link but can support point-to-multipoint links where capacities can be changed adaptively.

“You are not using coherent to improve performance but to increase flexibility and allow dynamic reconfigurability,” says Shore.

Rob Shore

XR optics makes an intermediate-stage aggregation switch redundant since the higher-capacity XR coherent module aggregates the traffic from the lower-capacity edge modules.

The result, according to Infinera's studies, is a 70 per cent reduction in networking costs: the transceiver count is halved and platforms can be removed from the network.
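
The halving of the transceiver count follows from simple hub-and-spoke arithmetic: point-to-point links need a partner transceiver at the aggregation site for every edge module, whereas a single point-to-multipoint hub module serves them all. A rough count, assuming a hypothetical 16 edge sites, is sketched below.

# Rough transceiver count: point-to-point versus point-to-multipoint aggregation (assumes 16 edge sites)
edge_sites = 16

p2p = edge_sites + edge_sites   # one transceiver per edge site plus a matching one at the hub = 32
xr  = edge_sites + 1            # one per edge site plus a single higher-capacity hub module = 17

print(p2p, xr)                  # 32 versus 17 - roughly half, before counting the aggregation switch removed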

XR Optics starts to make economic sense at 10-gigabit data rates, says Shore. “It depends on the rest of the architecture and how much of it you can drive out,” he says. “For 25-gigabit data rates, it becomes a virtual no-brainer.”

There may be a coherent ‘tax’ associated with XR Optics, but it removes so much networking cost that it proves itself much earlier than a 400ZR module, says Shore.

Applications

First uses of XR Optics will include 5G and distributed access architecture (DAA), whereby cable operators bring fibre closer to the network edge.

XR Optics will likely be adopted in two phases. The first is traditional point-to-point links, just as with 400ZR pluggables.

“For mobile backhaul, what is fascinating is that XR Optics dramatically reduces the expense of your router upgrade cost,” says Welch. “With the ZR model I have to upgrade every router on that ring; in XR I only have to upgrade the routers needing more bandwidth.”

Phase two will be for point-to-multipoint aggregation networks: 5G, followed by cable operators as they expand their fibre footprint.

Aggregation also takes place in the data centre; does coherent have a role there?

“The intra-data centre application [of XR Optics] is intriguing in how much you can change in that environment but it is far from proven,” says Welch.

Coherent point-to-point links will not be used inside the data centre as they don't add value, but configurable point-to-multipoint links do have merit.

“It is less about coherent and more about the management of how content is sent to various locations in a point-to-multiple or multipoint-to-multipoint way,” says Welch. “That is where the game can be had.”

Uptake

Infinera is working with leading mobile operators on using XR Optics for optical aggregation. At this stage, it is talking to their network architects and technologists, says Shore.

Given how bandwidth at the network edge is set to expand, operators are keen to explore approaches that promise cost savings. “The people that build mobile networks or cable have told us they need help,” says Shore.

Infinera is developing the coherent DSPs for XR Optics and has teamed with optical module makers Lumentum and II-VI. Other unnamed partners have also joined Infinera to bring the technology to market.

The company will detail its pluggable module strategy including XR Optics and ZR+ later this year.


Nokia buys Elenion for its expertise and partnerships

Kyle Hollasch, director of optical networking product marketing, Nokia.

Nokia will become the latest systems vendor to bolster its silicon photonics expertise with the acquisition of Elenion Technologies.

The deal for Elenion, a privately-held company, is expected to be completed this quarter, subject to regulatory approval. No fee has been disclosed.

“If you look at the vertically-integrated [systems] vendors, they captured the lion’s share of the optical coherent marketplace,” says Kyle Hollasch, director of optical networking product marketing at Nokia. “But the coherent marketplace is shifting to pluggables and it is shifting to more integration; we can’t afford to be left behind.”

Elenion Technologies  

Elenion started in mid-2014 with a focus on using silicon as a platform for photonics. “We consider ourselves more of a semiconductor company than an optics company,” says Larry Schwerin, CEO of Elenion.

Elenion makes photonic engines and chipsets and is not an optical module company. “We then use the embedded ecosystem to offer up solutions,” says Schwerin. “That is how we approach the marketplace.”

The company has developed a process design kit (PDK) for photonics and has built a library of circuits that it uses for its designs and custom solutions for customers.

A PDK is a semiconductor industry concept that allows circuit designers to develop complex integrated circuits without worrying about the underlying transistor physics. Adhering to the PDK ensures the circuit design is manufacturable at a chip fabrication plant (fab).
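
To make the analogy concrete, the sketch below shows the style of design reuse a PDK enables: circuits are composed from pre-characterised library cells rather than from device-level physics. Every class, cell name and figure here is hypothetical, invented purely for illustration; it does not describe Elenion's actual PDK or tools.

# Hypothetical photonic PDK usage - library cells stand in for fab-validated building blocks.
# None of these names or numbers correspond to a real PDK; they only illustrate the design-reuse idea.
from dataclasses import dataclass

@dataclass
class LibraryCell:
    name: str
    insertion_loss_db: float   # characterised by the foundry, not re-derived by the circuit designer

# A designer picks validated cells from the library...
splitter   = LibraryCell("mmi_1x2", 0.3)
modulator  = LibraryCell("mzm_cband", 4.5)
photodiode = LibraryCell("ge_pd", 0.5)

# ...and works at the circuit level, for example summing a link's loss budget,
# trusting the PDK to guarantee each cell is manufacturable at the fab.
link_loss_db = sum(cell.insertion_loss_db for cell in (splitter, modulator, photodiode))
print(f"Estimated link insertion loss: {link_loss_db:.1f} dB")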

But developing a PDK for optics is tricky. How the PDK is designed and developed must be carefully thought through, as must the manufacturing process, says Elenion.

Larry Schwerin, CEO of Elenion.

“We got started on a process and developed a library,” says Schwerin. “And we modelled ourselves on the hyperscale innovation cycle, priding ourselves that we could get down to less than three years for new products to come out.”

The “embedded ecosystem” Elenion refers to involves close relationships with companies such as Jabil to benefit from semiconductor assembly, test and packaging techniques. Other partnerships include Molex and the webscale player, Alibaba.

Elenion initially focussed on coherent optics, providing Jabil with its CSTAR coherent device, which supports 100- and 200-gigabit transmission, for a CFP2-DCO pluggable module. Other customers also use the design, mostly for CFP2-DCO modules.

The company has now developed a third-generation coherent design, dubbed CSTAR ZR, for 400ZR optics. The optical engine can operate up to 600 gigabits-per-second (Gbps), says Elenion.

Elenion’s work with the cloud arm of Alibaba covers 400-gigabit DR4 client-side optics as well as an 800-gigabit design.

Alibaba Cloud has said the joint technology development with Elenion and Hisense Broadband covers all the production stages: the design, packaging and testing of the silicon photonics chip followed by the design, packaging, assembly and testing of the resulting optical module. 

Bringing optics in-house 

With the acquisition of Elenion, Nokia becomes the latest systems vendor to buy a silicon photonics specialist.

Cisco Systems acquired Lightwire in 2012, which enabled it to launch the CPAK, a 100-gigabit optical module, a year ahead of its rivals. More recently, Cisco returned to silicon photonics shopping with the acquisition of Luxtera in 2019, and it is in the process of acquiring the leading merchant coherent player, Acacia Communications.

In 2013, Huawei bought the Belgian silicon photonics start-up, Caliopa, while Mellanox Technologies acquired the silicon photonics firm, Kotura, although it subsequently disbanded its silicon photonics arm.

Ciena bought the silicon-photonics arm of Teraxion in 2016 and, in the same year, Juniper bought silicon photonics start-up, Aurrion Technologies.

Markets 

Nokia highlights several markets – 5G, cloud and data centres – where optics is undergoing rapid change and where the system vendor’s designs will benefit from Elenion’s expertise.

“5G is a pretty obvious one; a significant portion of our optical business over the last two years has been mobile front-haul,” says Nokia’s Hollasch. “And that is only going to become more significant with 5G.”

Front-haul is optics-dependent and requires new pluggable form factors supporting lower data rates such as 25Gbps and 100Gbps. “This is the new frontier for coherent,” says Hollasch.

Nokia is not looking to be an optical module provider, at least for now. “That one we are treading cautiously,” says Hollasch. “We, ourselves, are quite a massive customer [of optics] which gives us some built-in scale straight away but our go-to-market [strategy] is still to be determined.”

Not being a module provider, adds Schwerin, means that Nokia doesn’t have to come out with modules to capitalise on what Elenion has been doing.

Nokia says both silicon photonics and indium phosphide will play a role for its coherent optical designs. Nokia also has its own coherent digital signal processors (DSPs).

“There is an increasingly widening application space for silicon photonics,” says Hollasch. “Initially, silicon photonics was looked at for the data centre and then strictly for metro [networks]; I don’t think that is the case anymore.”

Why sell?

Schwerin says the company was pragmatic when it came to being sold. Elenion wasn’t looking to be acquired and the idea of a deal came from Nokia. But once the dialogue started, the deal took shape. 

“The industry is in a tumultuous state and from a standpoint of scenario planning, there are multiple dynamics afoot,” says Schwerin.

As the company has grown and started working with larger players, including webscale companies, their requirements have become more demanding.

“As you get more into bigs, they require big,” says Schwerin. “They want supply assurance, and network indemnification clauses come into play.” The need to innovate is also constant and that means continual investment.

“When you weigh it all up, this deal makes sense,” he says.

Schwerin laughs when asked what he plans to do next: “I know what my wife wants me to do.”

“I will be going with this organisation for a short while at least,” he says. “You have to make sure things go well in the absorption process involving big companies and little companies.”

