Changing the radio access network for good

The industry initiative to open up the radio access network, known as open RAN, is changing how the mobile network is architected and is proving its detractors wrong.
So says a recent open RAN study by market research company, LightCounting.
“The virtual RAN and open RAN sceptics are wrong,” says Stéphane Téral, chief analyst at LightCounting.
Japan’s mobile operators, Rakuten Mobile and NTT Docomo, lead the world with large-scale open RAN deployments. Meanwhile, many leading communications service providers (CSPs) continue to trial the technology with substantial deployments planned around 2024-25.
Japan’s fourth and newest mobile network operator, Rakuten Mobile, deployed 40,000 open RAN sites with 200,000 radio units by the start of 2022.
Meanwhile, NTT Docomo, Japan’s largest mobile operator, deployed 10,000 sites in 2021 and will deploy another 10,000 this year.
NTT Docomo has shown that open RAN also benefits incumbent operators, not just new mobile entrants like Rakuten Mobile and Dish Networks in the US that can embrace the latest technologies as they roll out their networks.
Virtual RAN and open virtual RAN
Traditional RANs use a radio unit and a baseband unit from the same equipment supplier. Such RAN systems use proprietary interfaces between the units, with the vendor also providing a custom software stack, including management software.
The vendor may also offer a virtualised system that implements some or all of the baseband unit’s functions as software running on server CPUs.
A further step is disaggregating the baseband unit’s functions into a distributed unit (DU) and a centralised unit (CU). Placing the two units at different locations is then possible.
A disaggregated design may also be from a single vendor but the goal of open RAN is to enable CSPs to mix and match RAN components from different suppliers. Accordingly, the virtual RAN using open interfaces, as specified by the O-RAN Alliance, is an open virtual RAN system.

The diagram shows the different architectures leading to the disaggregated, virtualised RAN (vRAN) architecture.
Open virtual RAN comprises radio units, the DU and CU functions that can be implemented in the cloud, and the RAN Intelligent Controller (RIC), the brain of the RAN, which runs applications.
Several radio units may be connected to a virtual DU. The radio unit and virtual DU may be co-located or separate, linked using front-haul technology. Equally, the CU can host several virtual DUs depending on the networking requirements, connected with a mid-haul link.
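The topology described above can be sketched in a few lines of code. This is purely an illustrative model of the RU/DU/CU relationships, with invented class and attribute names; it is not an O-RAN Alliance data model:

```python
# Illustrative sketch of the disaggregated open RAN topology: several
# radio units attach to a virtual DU over front-haul, and several
# virtual DUs attach to a CU over mid-haul. Names are invented.
from dataclasses import dataclass, field


@dataclass
class RadioUnit:
    site: str


@dataclass
class DistributedUnit:
    name: str
    radio_units: list = field(default_factory=list)  # front-haul links

    def attach(self, ru: RadioUnit):
        self.radio_units.append(ru)


@dataclass
class CentralisedUnit:
    name: str
    distributed_units: list = field(default_factory=list)  # mid-haul links

    def attach(self, du: DistributedUnit):
        self.distributed_units.append(du)


# Several RUs per virtual DU, several virtual DUs per CU.
cu = CentralisedUnit("cu-edge-1")
du = DistributedUnit("vdu-1")
for site in ("site-a", "site-b", "site-c"):
    du.attach(RadioUnit(site))
cu.attach(du)
```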
Rakuten Mobile has deployed the world’s largest open virtual RAN architecture, while NTT Docomo has the world’s largest brownfield open RAN deployment.
NTT Docomo’s deployment is not virtualised and is not running RAN functions in software.
“NTT Docomo’s baseband unit is not disaggregated,” says Téral. “It’s a traditional RAN with a front-haul using the O-RAN Alliance specification for 5G.”
NTT Docomo is working to virtualise the baseband units and the work is likely to be completed in 2023.
Opening the RAN
NTT Docomo and the NGMN Alliance were working on interoperability between RAN vendors 15 years ago, says Téral. The Japanese mobile operator wanted to avoid vendor lock-in and increase its options.
“NTT Docomo was the only one doing it and, as such, could not enjoy economies of scale because there was no global implementation,” says Téral.
Wider industry backing arrived in 2016 with the formation of the Telecom Infra Project (TIP) backed by Meta (Facebook) and several CSPs to design network architectures that promoted interoperability using open equipment.
The O-RAN Alliance formed in 2018 was another critical development. With over 300 members, the O-RAN Alliance has ten working groups addressing such topics as defining the interfaces between RAN functions to standardise the open RAN architecture.
The O-RAN Alliance realised it needed to create more flexibility to enable boxes to be interchanged, says Téral, and started in the RAN to allow any radio unit to work with any virtual DU.
Geopolitics is the third element to kickstart open RAN. Removing Chinese equipment vendors Huawei and ZTE from key markets brought Open RAN to the forefront as a way to expand suppliers.
Indeed, Rakuten Mobile was about to select Huawei for its network, but it decided in 2018 to adopt open RAN instead because of geopolitics.
“Geopolitics added a new layer and, to some extent, accelerated the development of open RAN,” he says. “But it does not mean it has accelerated market uptake.”
That’s because the first wave of 5G deployments by the early adopter CSPs seeking a first-mover advantage is ending. Indeed, the uptake in 5G’s first three years has eclipsed the equivalent rollout of 4G, says LightCounting.
To date, over 200 of 800 mobile operators worldwide have deployed 5G.
Early 5G adopters have gone with traditional RAN suppliers like Ericsson, Nokia, Samsung, NEC and Fujitsu. And with open RAN only now hitting its stride, it has largely missed the initial 5G wave.
Open RAN’s next wave
For the next two years, then, the dominant open RAN deployments will continue to be those of Rakuten Mobile and NTT Docomo, to which can be added the network launches from Dish Networks in the US, and 1&1 Drillisch of Germany, which is outsourcing its buildout to Rakuten Symphony.
Rakuten Mobile’s vendor offshoot, Rakuten Symphony, set up to commercialise Rakuten’s open RAN experiences, is also working with Dish Networks on its deployment.
Rakuten Mobile hosts its own 5G network, including open RAN, in its data centres. Dish is working with cloud player Amazon Web Services to host its 5G network. Dish’s network is still in its early stages, but the mobile operator can host its network in Amazon’s cloud because it uses a cloud-native implementation that includes open RAN.
The next market wave for Open RAN will start in 2024-25, when the leading CSPs begin to turn off their 3G networks and start deploying Open RAN for 5G.
It will also be helped by the second wave of 5G rollouts by those 600 operators with LTE networks. However, this second 5G cycle may not be as large as the first, says Téral, and there will be a lag between the two cycles that will not be helped if an economic recession arrives.
Certain leading CSPs that were early cheerleaders for open RAN have since dampened their deployment plans, says Téral. For example, Telefónica and Vodafone spoke in 2019 of deploying thousands of sites but have scaled back their plans.
The leading CSPs attribute their reluctance to deploy open RAN to its many challenges. One is interoperability: despite the development of open interfaces, getting the different vendors’ components to work together is still a challenge.
Another issue is integration. Disaggregating the various RAN components means someone must stitch them together. Some CSPs do this themselves, but there is a wider need for system integrators, and this remains a challenge.
Téral believes that while these are valid concerns, Rakuten and NTT Docomo have already overcome such complications; open RAN is now deployed at scale.
These CSPs are also reluctant to end their relationships with established suppliers.
“The service providers’ teams have built relationships and are used to dealing with the same vendors for so long,” says Téral. “It’s very complicated for them to build new relationships with somebody else.”
More RAN player entrants
Rakuten Symphony has assembled a team with tremendous open RAN experience. AT&T is one prominent CSP that has selected Rakuten Symphony to help it with network planning and speed up deployments.
NTT Docomo, working with four vendors, has got their radio units and baseband units to work with each other. In addition, NTT Docomo is promoting its platform, dubbed OREC (5G Open RAN Ecosystem), to other interested parties.
NEC and Fujitsu, selected by NTT Docomo, have also gained valuable open RAN experience. Fujitsu is a system integrator with Dish while NEC is involved in many open RAN networks in Europe, starting with Telefónica.
There is also a commercial advantage for these systems vendors, since Rakuten Mobile and NTT Docomo, along with Dish and 1&1, will be the leading operators deploying open RAN for the next two years.
That said, the radio unit business continues to look up. “There is no cycle [with radio units]; you still have to add radio units at some point in particular parts of the network,” says Téral.
But for open RAN, those vendors not used by NTT Docomo and Rakuten Mobile must wait for the next deployment wave. Vendor consolidation is thus inevitable, with Parallel Wireless the first shoe to drop following its recently announced wide-scale layoffs.
So while open RAN has expanded the number of suppliers, further acquisitions should be expected, as well as the folding of companies that cannot survive until the next deployment wave, says Téral.
And soon at the chip level too
There is also a supply issue with open RAN silicon.
With its CPUs and FlexRAN software, Intel dominates the open RAN market. However, the CSPs acknowledge there is no point in expanding RAN suppliers if there is a vendor lock-in at the chip level, one layer below.
Téral says several chip makers are working with system vendors to enter the market with alternative solutions. These include ARM-based architectures, AMD-Xilinx, Qualcomm, Marvell’s Octeon family and Nvidia’s BlueField-3 data processing unit.
The CSPs are also getting involved in promoting more chip choices. For example, Vodafone has set up a 50-strong research team at its new R&D centre in Malaga, Spain, to work with chip and software companies to develop the architecture of choice for Open RAN to expand the chip options.
Outlook
LightCounting forecasts that the open vRAN market will account for 13 per cent of the total global RAN sales in 2027, up from 4 per cent in 2022.
A key growth driver will be the global switch to open virtual RAN in 2024-25, driven by the large Tier 1 CSPs worldwide.
“Between 2025 and 2030, you will see a mix of open RAN, and where it makes sense in parts of the network, traditional RAN deployments too,” says Téral.
The quiet progress of Network Functions Virtualisation

Network Functions Virtualisation (NFV) is a term less often heard these days.
Yet the technology framework that kickstarted a decade of network transformation by the telecom operators continues to progress.
The working body specifying NFV, the European Telecommunications Standards Institute’s (ETSI) Industry Specification Group (ISG) Network Functions Virtualisation (NFV), is working on the latest releases of the architecture.
The releases add AI and machine learning, intent-based management, power savings, and virtual radio access network (VRAN) support.
ETSI is also shortening the time between NFV releases.
“NFV is quite a simple concept but turning the concept into reality in service providers’ networks is challenging,” says Bruno Chatras, ETSI’s ISG NFV Chairman and senior standardisation manager at Orange Innovation. “There are many hidden issues, and the more you deploy NFV solutions, the more issues you find that need to be addressed via standardisation.”
NFV’s goal
A decade ago, thirteen leading telecom operators published the ETSI NFV White Paper.
The operators were frustrated. They saw how the IT industry and hyperscalers innovated using software running on servers while they had cumbersome networks that couldn’t take advantage of new opportunities.
Each network service introduced by an operator required specialised kit that had to be housed, powered, and maintained by skilled staff that were increasingly hard to find. And any service upgrade required the equipment vendor to write a new release, a time-consuming, costly process.
The telcos viewed NFV as a way of turning network functions into software. Such network functions – constituents of services – could then be combined and deployed.
“We believe Network Functions Virtualisation is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures,” claimed the authors in the seminal NFV White Paper.
A decade on
Virtual network functions (VNFs) now run across the network, and the transformation buzz has moved from NFV to such topics as 5G, Open RAN, automation and cloud-native.
Yet NFV changed the operators’ practices by introducing virtualisation, disaggregation, and open software practices.
A decade of network transformation has given rise to new challenges while technologies such as 5G and Open RAN have emerged.
Meanwhile, the hyperscalers and cloud have advanced significantly in the last decade.
“When we coined the term NFV in the summer of 2012, we never expected the cloud technologies we wanted to leverage to stand still,” says Don Clarke, one of the BT authors of the original White Paper. “Indeed, that was the point.”
NFV releases
The ISG NFV’s work began with a study to confirm NFV’s feasibility, and the definition of the NFV architecture and terminology.
Release 2 tackled interoperability. The working group specified application programming interfaces (APIs) between the NFV management and orchestration (MANO) functions using REST interfaces, and also added ‘descriptors’.
A VNF descriptor is a file that contains the information needed by the VNFM, an NFV-MANO functional block, to deploy and manage a VNF.
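As a sketch, the kind of information a VNF descriptor carries, and how the VNFM might use it, can be modelled as follows. The field names are simplified for illustration and do not follow the actual ETSI NFV-SOL descriptor schema:

```python
# Simplified, illustrative model of a VNF descriptor (VNFD).
# Field names are invented; the real ETSI schema differs.
vnfd = {
    "vnfd_id": "example-firewall-vnf",
    "provider": "ExampleVendor",
    "version": "1.0",
    # Deployment flavours: alternative sizings the VNFM can instantiate.
    "deployment_flavours": [
        {"id": "small", "vcpus": 2, "memory_gb": 4, "instances": 1},
        {"id": "large", "vcpus": 8, "memory_gb": 16, "instances": 3},
    ],
    # Connection points the orchestrator wires into virtual links.
    "connection_points": ["mgmt", "data_in", "data_out"],
}


def resources_for(vnfd, flavour_id):
    """Total vCPUs the VNFM must reserve for a given flavour."""
    flavour = next(f for f in vnfd["deployment_flavours"] if f["id"] == flavour_id)
    return flavour["vcpus"] * flavour["instances"]
```

The point of the descriptor is that the VNFM never needs vendor-specific knowledge: everything required to size, deploy and wire the VNF is declared in the file.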
Release 3, whose technical content is now complete, added a policy framework. Policy rules given to the NFV orchestrator determine where best to place the VNFs in a distributed infrastructure.
Other features include VNF snapshotting for troubleshooting and MANO functions to manage the VNFs and network services.
Release 3 also addressed multi-site deployment. “If you have two VNFs, one is in a data centre in Paris, and another in a data centre in southern France, interconnected via a wide area network, how does that work?” says Chatras.
Implementing a VNF using containers was always part of the NFV vision, says ETSI.
Initial specifications concentrated on virtual machines, but Release 3 marked NFV’s first container VNF work.

Release 4 and 5
The ISG NFV working group is now working on Releases 4 and 5 in parallel.
Each new release item starts with a study phase and, based on the result, is turned into specifications.
The study phase is now limited to six months to speed up the NFV releases. The project is earmarked for a later release if the work takes longer than expected.
Two additions in NFV Release 4 are support for container management frameworks such as Kubernetes, a well-advanced project, and network automation, which adds AI, machine learning and intent-management techniques.
Network automation
NFV MANO functions already provide automation using policy rules.
“Here, we are going a step further; we are adding a management data analytics function to help the NFV orchestrator make decisions,” says Chatras. “Similarly, we are adding an intent management function above the NFV orchestrator to simplify interfacing to the operations support systems (OSS).”
Intent management is an essential element of the operators’ goal of end-to-end network automation.
Without intent management, if the OSS wants to deploy a network service, it sends a detailed request using a REST API to the NFV orchestrator on how to proceed. For example, the OSS details the VNFs needed for the network service, their interconnections, the bandwidth required, and whether IPv4 or IPv6 is used.
“With an intent-based approach, that request sent to the intent management function will be simpler,” says Chatras. “It will just set out the network service the operator wants, and the intent management function will derive the technical details.”
The intent management function, in effect, knows what is technically possible and what VNFs are available to do the work.
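The difference between the two styles of request can be sketched as follows. The service name, fields and template are all invented for illustration and do not correspond to a specific ETSI API:

```python
# Illustrative contrast between a detailed (imperative) request to the
# NFV orchestrator and a high-level intent. All names are invented.
DETAILED_TEMPLATES = {
    # What the intent management function "knows": for each service,
    # the VNFs, interconnections and defaults needed to realise it.
    "enterprise-vpn": {
        "vnfs": ["vrouter", "firewall"],
        "links": [("vrouter", "firewall")],
        "bandwidth_mbps": 500,
        "ip_version": 6,
    },
}


def resolve_intent(intent):
    """Derive the detailed orchestrator request from a high-level intent."""
    template = dict(DETAILED_TEMPLATES[intent["service"]])
    # The intent may override selected knobs; the rest are derived.
    template.update({k: v for k, v in intent.items() if k != "service"})
    return template


# The OSS states only what it wants, not how to build it.
request = resolve_intent({"service": "enterprise-vpn", "bandwidth_mbps": 200})
```

The OSS-facing request shrinks to two fields; the intent management function fills in the VNF list, interconnections and IP version itself.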
The work on intent management and management data analytics has just started.
“We have spent quite a lot of time on a study phase to identify what is feasible,” says Chatras.
Release 5 work started a year ago with the ETSI group asking its members what was needed.
The aim is to consolidate and close functional gaps identified by the industry. But two features are being added: Green NFV and support for VRAN.
Green NFV and VRAN
Energy efficiency was one of the expected benefits listed in the original White Paper.
ETSI has a Technical Committee for Environmental Engineering (TC EE) with a working group to reduce energy consumption in telecommunications.
The energy-saving work of Release 5 is solely for NFV, one small factor in the overall picture, says Chatras.
Just using the orchestration capabilities of NFV can reduce energy costs.
“You can consolidate workloads on fewer servers during off-peak hours,” says Chatras. “You can also optimise the location of the VNF where the cost of energy happens to be lower at that time.”
Release 5 goes deeper by controlling the energy consumption of a VNF dynamically using the power management features of servers.
The server can change the CPU’s clock frequency. Release 5 will address whether the VNF management or orchestration does this. There is also a tradeoff between lowering the clock speed and maintaining acceptable performance.
“So, many things to study,” says Chatras.
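The clock-speed/performance trade-off Chatras describes can be sketched as a simple selection rule: run at the lowest frequency that still meets the VNF’s latency budget. The frequencies, figures and scaling model below are invented for illustration:

```python
# Illustrative sketch of the DVFS trade-off: pick the lowest CPU
# frequency whose estimated latency stays within the VNF's budget.
# All figures are invented.
FREQS_GHZ = [1.2, 1.8, 2.4, 3.0]  # frequencies the server exposes
BASE_LATENCY_MS = 2.4             # measured latency at 1.2 GHz


def latency_at(freq_ghz):
    # Simplistic model: processing time scales inversely with frequency.
    return BASE_LATENCY_MS * 1.2 / freq_ghz


def pick_frequency(budget_ms):
    """Lowest frequency meeting the latency budget (saves the most power)."""
    for f in FREQS_GHZ:
        if latency_at(f) <= budget_ms:
            return f
    return FREQS_GHZ[-1]  # budget unattainable: run flat out
```

The open question Release 5 must settle is which NFV-MANO function applies such a rule, the VNF manager or the orchestrator.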
The Green NFV study will provide guidelines for designing an energy-efficient VNF: reducing execution time and memory consumption, and deciding whether to use hardware accelerators, depending on the processor available.
“We are collecting use cases of what operators would like to do, and we hope that we can complete that by mid-2022,” says Chatras.
The VRAN work involves checking the work done in the O-RAN Alliance to verify whether the NFV framework supports all the requirements. If not, the group will evaluate proposed solutions before changing specifications.
“We are doing that because we heard from various people that things are missing in the ETSI ISG NFV specifications to support VRAN properly,” says Chatras.
Is the bulk of the NFV work already done? Chatras thinks before answering: “It is hard to say.”
The overall ecosystem is evolving, and NFV must remain aligned, he says, and this creates work.
The group will complete the study phases of Green NFV and NFV for VRAN this summer before starting the specification work.
NFV deployments
ETSI ISG NFV has a group known as the Network Operator Council, comprising operators only.
The group creates occasional surveys to assess where NFV technology is being used.
“What we see is confidential, but roughly there are NFV deployments across nearly all network segments: mobile core, fixed networks, RAN and enterprise customer premise equipment,” says Chatras.
VNFs to CNFs
There is now broad industry interest in cloud-native network functions. But the ISG NFV group believes there is a general misconception regarding NFV.
“In ETSI, we do not consider that cloud-native network functions are something different from VNFs,” says Chatras. “For us, a cloud-native function is a VNF with a particular software design, which happens to be cloud-native, and which in most cases is hosted in containers.”
The NFV framework’s goal, says ETSI, is to deliver a generic solution to manage network functions regardless of the technology used to deploy them.
Chatras is not surprised that NFV is mentioned less often: NFV is 10 years old, and the same happens to many technologies as they mature. From a technical standpoint, however, the specifications being developed by ETSI NFV comply with the cloud model.
Most operators will admit that NFV has proved very complex to deploy.
Running VNFs on third-party infrastructure is complicated, says Chatras. That will not change whether an NFV specification is used or something else based on Kubernetes.
Chatras is also candid about the overall progress of network transformation. “Is it all happening sufficiently rapidly? Of course, the answer is no,” he says.
Network transformation has many elements, not just standards. The standardisation work is doing its part; whenever an issue arises, it is tackled.
“The hallmark of good standardisation is that it evolves to accommodate unexpected twists and turns of technology evolution,” agrees Clarke. “We have seen the growth of open source and so-called cloud-native technologies; ETSI NFV has adapted accordingly and figured out new and exciting possibilities.”
Many issues remain for the operators: skills transformation, organisational change, and each determining what it means to become a ‘digital’ service provider.
In other words, the difficulties of network transformation will not magically disappear, however elegantly the network is architected as it transitions increasingly to software and cloud.
Vodafone's effort to get silicon for telcos

This is an exciting time for semiconductors, says Santiago Tenorio, which is why his company, Vodafone, wants to exploit this period to benefit the radio access network (RAN), the most costly part of the wireless network for telecom operators.
The telecom operators want greater choice when buying RAN equipment.
As Tenorio, a Vodafone Fellow (the company’s first) and its network architecture director, notes, there were more than ten wireless RAN equipment vendors 15 years ago. Now, in some parts of the world, the choice is down to two.
“We were looking for more choice and that is how [the] Open RAN [initiative] started,” says Tenorio. “We are making a lot of progress on that and creating new options.”
But having more equipment suppliers is not all: the choice of silicon inside the equipment is also limited.
“You may have Fujitsu radios or NEC radios, Samsung radios, Mavenir software, whatever; in the end, it’s all down to a couple of big silicon players, which also supply the incumbents,” he says. “So we thought that if Open RAN is to go all the way, we need to create optionality there too to avoid vendor lock-in.”
Vodafone has set up a 50-strong research team at its new R&D centre in Malaga, Spain, that is working with chip and software companies to develop the architecture of choice for Open RAN to expand the chip options.
Open RAN R&D
The 50 staff at Vodafone’s R&D centre are organised into several streams, but their main goal is to answer critical questions about the Open RAN silicon architecture.
“Things like whether the acceleration is in-line or look-aside, which is a current controversy in the industry,” says Tenorio. “These are the people who are going to answer that question.”
With Open RAN, the virtualised Distributed Unit (DU) runs on a server. This contrasts with specialised hardware used in traditional baseband units.
Open RAN processes layer 1 data in one of two ways: look-aside or in-line. With look-aside, the server’s CPU performs certain layer 1 tasks, aided by accelerator hardware for tasks such as forward error correction. This requires frequent communication between the two, which limits processing efficiency.
In-line solves this by performing all the layer 1 processing using a single chip. Dell, for example, has an Open RAN accelerator card that performs in-line processing using Marvell’s silicon.
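The architectural difference can be sketched by counting CPU-accelerator transfers per processed block. The task names and counts are illustrative simplifications, not a model of any vendor’s silicon:

```python
# Illustrative sketch of look-aside versus in-line layer 1 processing.
# Task names are simplifications; figures are not vendor-specific.
L1_TASKS = ["channel_estimation", "equalisation", "demodulation", "fec_decode"]

# In look-aside, only some tasks (here, FEC decode) are offloaded, so
# each offloaded task costs a round trip between CPU and accelerator.
LOOK_ASIDE_OFFLOADED = {"fec_decode"}


def handoffs(mode):
    """Count CPU<->accelerator transfers per processed block."""
    if mode == "in-line":
        return 1  # one pass through a single layer-1 chip
    return 2 * sum(1 for t in L1_TASKS if t in LOOK_ASIDE_OFFLOADED)
```

The more tasks a look-aside design offloads, the more round trips it pays for; in-line sidesteps this by keeping the whole layer 1 pipeline on one chip.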
When Vodafone announced its Open RAN silicon initiative in January, it was working with 20 chip and software companies. More companies have since joined.
“You have software players like middleware suppliers, also clever software plug-ins that optimise the silicon itself,” says Tenorio. “It’s not only silicon makers attracted by this initiative.”
Vodafone has no preconceived ideas as to the ideal solution. “All we want is the best technical solution in terms of performance and cost,” he says.
By performance, Vodafone means power consumption and processing. “With a more efficient solution, you need less [processing] cores,” says Tenorio.
Vodafone is talking to the different players to understand their architectures and points of view and is doing its own research that may include simulations.
Tenorio does not expect Vodafone to manufacture silicon: “I mean, that’s not necessarily on the cards.” But Vodafone must understand what is possible and will conduct lab testing and benchmark measurements.
“We will do some head-to-head measurements that, to be fair, no one I know does,” he says. Vodafone will then publish its position, create a specification and drive vendors to comply with it.
“We’ve done that in the past,” says Tenorio. “We have been specifying radios for the last 20 years, and we never had to manufacture one; we just needed to understand how they’re done to take the good from the bad and then put everybody on the art of the possible.”
Industry interest
The companies joining Vodafone’s Open RAN chip venture are motivated for different reasons.
Some have joined to ensure that they have a voice and influence Vodafone’s views. “Which is super,” says Tenorio.
Others are there because they are challengers to the current ecosystem. “They want to get the specs ahead of anybody to have a better chance of succeeding if they listen to our advice, which is also super,” says Tenorio.
Meanwhile, software companies have joined to see whether they can improve hardware performance.
“That is the beauty of having the whole ecosystem,” he says.

Work scale
The work is starting at layer 1, and not just in the RAN’s distributed unit (DU) but also the radio unit (RU), given that the power amplifier is the biggest offender in terms of power consumption.
Layers 2 and 3 will also be tackled. “We’re currently running that on Intel, and we’re finding that there is a lot of room for improvement, which is normal,” says Tenorio. “It’s true that running the three layers on general-purpose hardware has room for improvement.”
That room for improvement is almost equivalent to one full generation of silicon, he says.
Vodafone says that it also can’t be the case that Intel is the only provider of silicon for Open RAN.
The operator expects new hardware variants based on ARM, perhaps AMD, and maybe the RISC-V architecture at some point.
“We will be there to make it happen,” says Tenorio.
Other chip accelerators
Does such hardware as Graphics Processing Units (GPUs), Data Processing Units (DPUs) and also programmable logic have roles?
“I think there’s room for that, particularly at the point that we are in,” says Tenorio. “The future is not decided yet.”
The key is to avoid vendor lock-in for layer 1 acceleration, he says.
He highlights the work of companies such as Marvell and Qualcomm to accelerate layer 1 tasks, but he fears this will drive the software suppliers to take sides with one of these accelerators. “This is not what we want,” he says.
What is required is to standardise the interfaces to abstract the accelerator from the software, or steer away from custom hardware and explore the possibilities of general-purpose but specialised processing units.
“I think the future is still open,” says Tenorio. “Right now, I think people tend to go proprietary at layer 1, but we need another plan.”
“As for FPGAs, that is what we’re trying to run away from,” says Tenorio. “If you are an Open RAN vendor and can’t afford to build your ASIC because you don’t have the volume, then, okay, that’s a problem we were trying to solve.”
Improving general-purpose processing avoids having to go to FPGAs, which are bulky, power-hungry and expensive, says Tenorio, though he also notes how FPGAs are evolving.
“I don’t think we should have religious views about it,” he says. “There are semi-programmable arrays that are starting to look better and better, and there are different architectures.”
This is why he describes the chip industry as ‘boiling’: “This is the best moment for us to take a view because it’s also true that, to my knowledge, there is no other kind of player in the industry that will offer you a neutral, unbiased view as to what is best for the industry.”
Without that, the fear is that by acquisition and competition, the chip players will reduce the IC choices to a minimum.
“You will end up with two to three incumbent architectures, and you run a risk of those being suboptimal, and of not having enough competition,” says Tenorio.
Vodafone’s initiative is open to companies to participate including its telco competitors.
“There are times when it is faster, and you make a bigger impact if you start things on your own, leading the way,” he says.
Vodafone has done this before: In 2014, it started working with Intel on Open RAN.
“We made some progress, we had some field trials, and in 2017, we approached TIP (the Telecom Infra Project), and we offered to contribute our progress for TIP to continue in a project group,” says Tenorio. “At that point, we felt that we would make more progress with others than going alone.”
Vodafone is already deploying Open RAN in the UK and has said that by 2030, 30 per cent of its deployments in Europe will be Open RAN.
“We’ve started deploying open RAN and it works, the performance is on par with the incumbent architecture, and the cost is also on par,” says Tenorio. “So we are creating that optionality without paying any price in terms of performance, or a huge premium cost, regardless of what is inside the boxes.”
Timeline
Vodafone is already looking at in-line versus look-aside.
“We are closing into in-line benefits for the architecture. There is a continuous flow of positions or deliverables to the companies around us,” says Tenorio. “We have tens of meetings per week with interested companies who want to know and contribute to this, and we are exchanging our views in real-time.”
There will also be a white paper published, but for now, there is no deadline.
There is an urgency to the work given that Vodafone is already deploying Open RAN, but this research is for the next generation of Open RAN. “We are deploying the previous generation,” he says.
Vodafone is also talking, for example, to the ONF open-source organisation, which announced an interest in defining interfaces to exploit acceleration hardware.
“I think the good thing is that the industry is getting it, and we [Vodafone] are just one factor,” says Tenorio. “But you start these conversations, and you see how they’re going places. So people are listening.”
The industry agrees that layer 1 interfacing needs to be standardised or abstracted to avoid companies ending in particular supplier camps.
“I think there’ll be a debate whether that needs to happen in the ORAN Alliance or somewhere else,” says Tenorio. “I don’t have strong views. The industry will decide.”
Other developments
The Malaga R&D site will not just focus on Open RAN but other parts of the network, such as transport.
Transport still makes use of proprietary silicon but there is also more vendor competition.
“The dollars spent by operators in that area is smaller,” says Tenorio. “That’s why it is not making the headlines these days, but that doesn’t mean there is no action.”
Two transport areas where disaggregated designs have started are the disaggregated backbone router, and the disaggregated cell site gateway, both being sensible places to start.
“Disaggregating a full MPLS carrier-grade router is a different thing, but its time will come,” says Tenorio, adding that the centre in Malaga is not just for Open RAN, but silicon for telcos.
BT’s Open RAN trial: A mix of excitement and pragmatism

“We in telecoms, we don’t do complexity very well.” So says Neil McRae, BT’s managing director and chief architect.
He was talking about the trend of making network architectures open and in particular the Open Radio Access Network (Open RAN), an approach that BT is trialling.
“In networking, we are naturally sceptical because these networks are very important and every day become more important,” says McRae.
Whether it is Open RAN or any other technology, it is key for BT to understand its aims and how it helps. “And most importantly, what it means for customers,” says McRae. “I would argue we don’t do enough of that in our industry.”
Open RAN
Open RAN has become a key focus in the development of 5G. Backed by leading operators, it promises greater vendor choice and helps counter the dependency on the handful of key RAN vendors such as Nokia and Ericsson. There are also geopolitical considerations given that Huawei is no longer a network supplier in certain countries.
“Huawei and China, once they were the flavour of the month and now they no longer are,” says McRae. “That has driven a lot of concern – there are only Nokia and Ericsson as scaled players – and I think that is a thing we need to worry about.”
McRae points out that Open RAN is an interface standard rather than a technology.
“Those creating Open RAN solutions, the only part that is open is that interface side,” he says. “If you think of Nokia, Ericsson, Mavenir, Rakuten and Altiostar – any of the guys building this technology – none of their technology is specifically open but you can talk to it via this open interface.”

Opportunity
McRae is upbeat about Open RAN but says much work is needed to realise its potential.
“Open RAN, and I would probably say the same about NFV (network functions virtualisation), has got a lot of momentum and a lot of hype well before I think it deserves it,” he says.
BT favours open architectures and interoperability. “Why wouldn’t we want to be part of that, part of Open RAN?” says McRae. “But what we are seeing here is people excited about the potential – we are hugely excited about the potential – but are we there yet? Absolutely not.”
BT views Open RAN as a way to support the small-cell neutral-host model, whereby a company offers operators coverage – one way Open RAN can augment macro-cell coverage.
Open RAN can also be used to provide indoor coverage such as in stadiums and shopping centres. McRae says Open RAN could also be used for transportation but there are still some challenges there.
“We see Open RAN and the Open RAN interface specifications as a great way for building innovation into the radio network,” he says. “If there is one part that we are hugely excited about, it is that.”
BT’s Open RAN trial
BT is conducting an Open RAN trial with Nokia in Hull, UK.
“We haven’t just been working with Nokia on this piece of work, other similar experiments are going on with others,” says McRae.
McRae equates Open RAN with software-defined networking (SDN). SDN uses several devices that are largely unintelligent while a central controller – ’the big brain’ – controls the devices and in the process makes them more valuable.
“SDN has this notion of a controller and devices and the Open RAN solution is no different: it uses a different interface but it is largely the same concept,” says McRae.
This central controller in Open RAN is the RAN Intelligent Controller (RIC) and it is this component that is at the core of the Nokia trial.
“That controller allows us to deploy solutions and applications into the network in a really simple and manageable way,” says McRae.
The RIC architecture comprises a near-real-time RIC, sitting very close to the radio, that makes almost instantaneous changes based on current conditions, and a non-real-time RIC used for tasks such as setting policy, the network’s overall run cycle, configuration and troubleshooting.
“You kind of create and deploy it, adjust it or add or remove things, not in real-time,” says McRae. “It is like with a train track, you change the signalling from red to green long before the train arrives.”
BT views the non-real-time aspect of the RIC as a new way for telcos to automate and optimise the core aspects of radio networking.
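McRae’s train-signal analogy can be sketched as a two-speed control loop: a non-real-time layer sets policy well ahead of time, which a near-real-time layer then applies against live radio conditions. The class names, methods and the load-threshold policy below are purely illustrative, not part of the O-RAN interface specifications:

```python
class NearRealTimeRIC:
    """Reacts almost instantaneously to live radio measurements."""
    def __init__(self):
        self.policy = {}

    def on_measurement(self, cell_id: str, load: float) -> str:
        # React within milliseconds, but only using policy set in advance.
        threshold = self.policy.get("offload_threshold", 0.9)
        return "offload" if load > threshold else "hold"


class NonRealTimeRIC:
    """Handles policy, configuration and troubleshooting ahead of time."""
    def __init__(self, near_rt: NearRealTimeRIC):
        self.near_rt = near_rt

    def set_policy(self, **kwargs):
        # Like changing the signal long before the train arrives: the
        # decision is made minutes or hours ahead, not per measurement.
        self.near_rt.policy.update(kwargs)


near_rt = NearRealTimeRIC()
non_rt = NonRealTimeRIC(near_rt)
non_rt.set_policy(offload_threshold=0.75)
decision = near_rt.on_measurement("cell-42", load=0.8)  # "offload", since 0.8 > 0.75
```

The point of the split is that the slow loop never sits in the fast path: the near-real-time layer only ever consults state that was staged for it earlier.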
McRae says London’s South Bank is one of the busiest parts of BT’s network and the operator has had to keep adding spectrum to the macrocells there.
“It is getting to the point where the macro isn’t going to be precise enough to continue to build a great experience in a location like that,” he says.
One solution is to add small cells, and BT has looked at that, but it concluded that making macro and small cells work well together is not straightforward. This is where the RIC can optimise the macro and small cells to improve the customer experience, even when the macro equipment is from one vendor and the small cells from another.
“The RIC allows us to build solutions that take the demand and the requirements of the network a huge step forward,” he says. “The RIC makes a massive step – one of the biggest steps in the last decade, probably since massive MIMO – in ensuring we can get the most out of our radio network.”
BT is focussed on the non-real-time RIC for the initial use cases it is trialling. It is using Nokia’s equipment for the Hull trial.
BT is also testing applications such as load optimisation between different layers of the network and between neighbouring sites. Where there is a network failure, it is using xApps to reroute traffic or re-optimise the network.
Nokia also has AI and machine learning software which BT is trialling. BT sees AI and machine learning-based solutions as a must since human operators are ultimately too slow.
Trial goals
BT wants to understand how Open RAN works in deployment. For example, how to manage a cell that is part of a RIC cluster.
In a national network, there will likely be multiple RICs used.
“We expect that this will be a distributed architecture,” says McRae. “How do you control that? Well, that’s where the non-real-time RIC has a job to do, effectively to configure the near-real-time RIC, or RICs as we understand more about how many of them we need.”
Another aspect of the trial is to see if, by using Open RAN, the network performance KPIs can be improved. These include time on 4G and time on 5G, the number of handovers, and dropped calls.
“Our hope and we expect that all of these get better; the early signs in our labs are that they should all get better, the network should perform more effectively,” he says.
BT will also do coverage testing which, with some of the newer radios it is deploying, it expects to improve.
“We’ve done a lot of learning in the lab,” says McRae. “Our experience suggests that translating that into operational knowledge is not perfect. So we’re doing this to learn more about how this will work and how it will benefit customers at the end of the day.”
Openness and diversity
Given that Open RAN aims to open vendor choice, some have questioned whether BT’s trial with Nokia is in the spirit of the initiative.
“We are using the Open RAN architecture and the Open RAN interface specs,” says McRae. “Now, for a lot of people, Open RAN means you have got to have 12 vendors in the network. Let me tell you, good luck to everyone that tries that.”
BT says a set of flavours of Open RAN is appearing. One is Rakuten’s Symphony, another is Mavenir. These are end-to-end solutions that can be offered to operators.
“Service providers are terrible at integrating things; it is not our core competency,” says McRae. “We have got better over the years but we want to buy a solution that is tested, that has a set of KPIs around how it operates, that has all the security features we need.”
This is key for a platform that in BT’s case serves 30 million users. As McRae puts it, if Open RAN becomes too complicated, it is not going to get off the ground: “So we welcome partnerships, or ecosystems that are forming because we think that is going to make Open RAN more accessible.”
McRae says some of the reaction to its working with Nokia is about driving vendor diversity.
BT wants diverse vendors that can provide it with greater choice and the benefits of competition. But McRae points out that much of the vendors’ equipment uses the same key components from a handful of chip companies, and those chips are made in two key locations.
“What we want to see is those underlying components, we want to see dozens of companies building them all over the world,” he says. “They are so crucial to everything we do in life today, not just in the network, but in your car, smartphone, TV and the microwave.”
And while more of the network is being implemented in software – BT’s 5G core is all software – hardware is still key where there are packet or traffic flows.
“The challenge in some of these components, particularly in the radio ecosystem, is you need strong parallel processing,” says McRae. “In software that is really difficult to do.”
“Intel, AMD, Broadcom and Qualcomm are all great partners,“ says McRae. “But if any one of these guys, for some reason, doesn’t move the innovation curve in the way we need it to move, then we run into real problems of how to grow and develop the network.”
What BT wants is as much IC choice as possible, though McRae is less certain how that will be achieved. But operators rightly have to be concerned about it, he says.
Nvidia's plans for the data processing unit

When Nvidia’s CEO, Jensen Huang, discussed its latest 400-gigabit BlueField-3 data processing unit (DPU) at the company’s 2021 GTC event, he also detailed its successor.
Companies rarely discuss chip specifications two generations ahead; the BlueField-3 only begins sampling next quarter.
The BlueField-4 will advance Nvidia’s DPU family.
It will double again the traffic throughput to 800 gigabits-per-second (Gbps) and almost quadruple the BlueField-3’s integer processing performance.
But one metric cited stood out. The BlueField-4 will increase the number of tera-operations-per-second (TOPS) performed nearly 700-fold: 1,000 TOPS compared to the BlueField-3’s 1.5 TOPS.
Huang said artificial intelligence (AI) technologies will be added to the BlueField-4, implying that the massively parallel hardware used for Nvidia’s graphics processor units (GPUs) is to be grafted onto its next-but-one DPU.
Why add AI acceleration? And will it change the DPU, a relatively new processor class?
Data processing units
Nvidia defines the DPU as a programmable device for networking.
The chip combines general-purpose processing – multiple RISC cores used for control-plane tasks and programmed in a high-level language – with accelerator units tailored for packet-processing data-plane tasks.
“The accelerators perform functions for software-defined networking, software-defined storage and software-defined security,” says Kevin Deierling, senior vice president of networking at Nvidia.
The DPU can be added to a Smart Network Interface Card (SmartNIC) that complements the server’s CPU, taking over the data-intensive tasks that would otherwise burden the server’s most valuable resource.
Other customers use the DPU as a standalone device. “There is no CPU in their systems,” says Deierling.
Storage platforms are one such example – what Deierling describes as a narrowly-defined workload. “They don’t need a CPU and all its cores, what they need is the acceleration capabilities built into the DPU, and a relatively small amount of compute to perform the control-path operations,” says Deierling.
Since the DPU is the server’s networking gateway, it supports PCI Express (PCIe). The PCIe bus interfaces to the host CPU, to accelerators such as GPUs, and supports NVMe storage. NVMe is a non-volatile memory host controller interface specification.
BlueField 3
When announced in 2021, the 22-billion transistor BlueField-3 chip was scheduled to sample this quarter. “We need to get the silicon back and do some testing and validation before we are sampling,” says Deierling.
The device is a scaled-up version of the BlueField-2: it doubles the throughput to 400Gbps and includes more CPU cores: 16 Cortex-A78 64-bit ARM cores.
Nvidia deliberately chose not to use more powerful ARM cores. “The ARM is important, there is no doubt about it, and there are newer classes of ARM,” says Deierling. “We looked at the power and the performance benefits you’d get by moving to one of the newer classes and it doesn’t buy us what we need.”
The BlueField-3 has the equivalent processing performance of 300 X86 CPU cores, says Nvidia, but this is due mainly to the accelerator units, not the ARM cores.
The BlueField-3’s input-output (I/O) includes Nvidia’s ConnectX-7 networking unit that supports 400 Gigabit Ethernet (GbE), which can be split over 1, 2 or 4 ports. The DPU also doubles the InfiniBand interface compared to the BlueField-2: either a single 400Gbps (NDR) port or two 200Gbps (HDR) ports. There are also 32 lanes of PCI Express 5.0, each lane supporting 32 giga-transfers-per-second (GT/s) in each direction.
The memory interface is two DDR5 channels, doubling both the memory performance and the channel count of the BlueField-2.
The data path accelerator (DPA) of the BlueField-3 comprises 16 cores, each supporting 16 instruction threads. Typically, when a packet arrives, it is decrypted and its headers are inspected, after which the accelerators are used. If the specific function needed is not accelerated, the packet is assigned to a thread and processed in software.
“The DPA is a specialised part of our acceleration core that is highly programmable,” says Deierling.
Other programmable logic blocks include the accelerated switching and packet processing (ASAP2) engine that parses packets. It inspects packet fields looking for a match that tells it what to do, such as dropping the packet or rewriting its header.
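Match-then-act is the same pattern used in programmable switch pipelines generally. A minimal sketch of the behaviour described above – rule names, field names and actions are hypothetical illustrations, not the ASAP2 programming model:

```python
# Each rule pairs a set of field values to match against with an action:
# drop the packet, rewrite part of its header, or (default) forward it.
RULES = [
    ({"proto": "udp", "dport": 4789}, ("rewrite", {"vlan": 100})),
    ({"src_ip": "10.0.0.13"},         ("drop", None)),
]

def process(packet: dict):
    """Return the (possibly rewritten) packet, or None if dropped."""
    for match, (action, arg) in RULES:
        if all(packet.get(k) == v for k, v in match.items()):
            if action == "drop":
                return None
            if action == "rewrite":
                return {**packet, **arg}   # rewrite header fields
    return packet  # no rule matched: forward unchanged
```

In hardware the match is done by dedicated parsing logic at line rate rather than a Python loop, but the first-matching-rule semantics are the same.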
In-line acceleration
The BlueField-3 implements the important task of security.
A packet can have many fields and encapsulations. For example, the fields can include a TCP header, quality of service, a destination IP and an IP header. These can be encapsulated into an overlay such as VXLAN and further encapsulated into a UDP packet before being wrapped in an outer IP datagram that is encrypted and sent over the network. Then, only the IPSec header is exposed; the remaining fields are encrypted.
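The layering can be made concrete in a few lines of code. The sketch below builds only the VXLAN and UDP stages using their published header formats (VXLAN’s 8-byte header from RFC 7348, the IANA-assigned UDP port 4789); the outer IP datagram and IPsec encryption are omitted:

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    # VXLAN header (RFC 7348): a flags byte with the I-bit set, three
    # reserved bytes, then the 24-bit VNI in the top of the final word.
    return struct.pack("!B3xI", 0x08, vni << 8) + inner_frame

def udp_encap(payload: bytes, sport: int, dport: int = 4789) -> bytes:
    # 8-byte UDP header; 4789 is the VXLAN destination port.
    # Checksum left at zero, which IPv4 permits.
    return struct.pack("!HHHH", sport, dport, 8 + len(payload), 0) + payload

frame = b"inner-ethernet-frame"          # the overlay tenant's traffic
packet = udp_encap(vxlan_encap(frame, vni=42), sport=49152)
```

Once this packet is further wrapped in an encrypted IPsec datagram, only the outermost header remains readable – which is why decrypting in-line, before the packet leaves the DPU, keeps these inner fields available to the accelerators.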
Deierling says the BlueField-3 does the packet encryption and decryption in-line.
For example, the DPU uses the in-line IPsec decode to expose the headers of the various virtual network interfaces – the overlays – of a received packet. Picking the required overlay, the packet is sent to a set of service-function chainings that use all the accelerators available such as tackling distributed denial-of-service and implementing a firewall and load balancing.
“You can do storage, you can do an overlay, receive-side scaling [RSS], checksums,” says Deierling. “All the accelerations built into the DPU become available.”
Without in-line processing, the received packet goes through a NIC and into the memory of the host CPU. There, it is encrypted and hence opaque; the packet’s fields can’t benefit from the various acceleration techniques. “It is already in memory when it is decrypted,” says Deierling.

Often, with the DPU, the received packet is decrypted and passed to the host CPU where the full packet is visible. Then, once the host application has processed the data, the data and packet may be encrypted again before being sent on.
“In a ‘zero-trust’ environment, there may be a requirement to re-encrypt the data before sending it onto the next hop,” says Deierling. “In this case, we just reverse the pipeline.”
An example is confidential healthcare information where data needs to be encrypted before being sent and stored.
DPU evolution
There are many applications set to benefit from DPU hardware. These cover the many segments Nvidia is addressing, including AI, virtual worlds, robotics, self-driving cars, 5G and healthcare.
All need networking, storage and security. “Those are the three things we do but it is software-defined and hardware-accelerated,” says Deierling.
Nvidia has an ambitious target of launching a new DPU every 18 months. That suggests the BlueField-4 could sample as early as the end of 2023.
The 800-gigabit BlueField-4 will have 64-billion transistors and nearly quadruple the integer processing performance of the BlueField-3: from 42 to 160 SPECint.
Nvidia says its DPUs, including the BlueField-4, are evolutionary in how they scale the ARM cores, accelerators and throughput. However, the AI acceleration hardware added to the BlueField-4 will change the nature of the DPU.
“What is truly salient is that [1,000] TOPS number,” says Deierling. “And that is an AI acceleration; that is leveraging capabilities Nvidia has on the GPU side.”
Self-driving cars, 5G and robotics
An AI-assisted DPU will support such tasks as video analytics, 5G and robotics.
For self-driving cars, the DPU will reside in the data centre, not in the car. But that too will change. “Frankly, the car is becoming a data centre,” notes Deierling.
Deep learning currently takes place in the data centre but as the automotive industry adopts Ethernet, a car’s sensors – lidar, radar and cameras – will send massive amounts of data which an IC must comprehend.
This is relevant not just for automotive but all applications where data from multiple sensors needs to be understood.
Deierling describes Nvidia as an AI-on-5G company.
“We have a ton of different things that we are doing and for that, you need a ton of parallel-processing capabilities,” he says. This is why the BlueField-4 is massively expanding its TOPS rating.
He describes how a robot on an automated factory floor will eventually understand its human colleagues.
“It is going to recognize you as a human being,“ says Deierling. “You are going to tell it: ‘Hey, stand back, I’m coming in to look at this thing’, and the robot will need to respond in real-time.”
Video analytics, voice processing, and natural language processing are all needed while the device will also be running a 5G interface. Here, the DPU will reside in a small mobile box: the robot.
“Our view of 5G is thus more comprehensive than just a fast pipe that you can use with a virtual RAN [radio access network] and Open RAN,” says Deierling. “We are looking at integrating this [BlueField-4] into higher-level platforms.”
Marvell's 50G PAM-4 DSP for 5G optical fronthaul

- Marvell has announced the first 50-gigabit 4-level pulse-amplitude modulation (PAM-4) physical layer (PHY) for 5G fronthaul.
- The chip completes Marvell’s comprehensive portfolio for 5G radio access network (RAN) and x-haul (fronthaul, midhaul and backhaul).
Marvell has announced what it claims is an industry-first: a 50-gigabit PHY for the 5G fronthaul market.
Dubbed the AtlasOne, the PAM-4 PHY chip also integrates the laser driver. Marvell claims this is another first: implementing the directly modulated laser (DML) driver in CMOS.
“The common thinking in the industry has been that you couldn’t do a DML driver in CMOS due to the current requirements,” says Matt Bolig, director, product marketing, optical connectivity at Marvell. “What we have shown is that we can build that into CMOS.”
Marvell, through its Inphi acquisition, says it has shipped over 100 million ICs for the radio access network (RAN) and estimates that its silicon is in networks supporting 2 billion cellular users.
“We have been in this business for 15 years,” says Peter Carson, senior director, solutions marketing at Marvell. “We consider ourselves the number one merchant RAN silicon provider.”
Inphi started shipping its Polaris PHY for 5G midhaul and backhaul markets in 2019. “We have over a million ships into 5G,” says Bolig. Now Marvell is adding its AtlasOne PHY for 5G fronthaul.
Mobile traffic
Marvell says wireless data has been growing at a compound annual growth rate (CAGR) of over 60 per cent (2015-2021). Such relentless growth is forcing operators to upgrade their radio units and networks.
Stéphane Téral, chief analyst at market research firm, LightCounting, in its latest research note on Marvell’s RAN and x-haul silicon strategy, says that while 5G rollouts are “going gangbusters” around the world, they are traditional RAN implementations.
By that Téral means 5G radio units linked to a baseband unit that hosts both the distributed unit (DU) and centralised unit (CU).
But as 5G RAN architectures evolve, the baseband unit is being disaggregated, separating the DU and the CU. This is happening because the RAN is such an integral and costly part of the network and operators want to move away from vendor lock-in and expand their marketplace options.
For RAN, this means splitting the baseband functions and standardising interfaces that previously were hidden within custom equipment. Splitting the baseband unit also allows the functionality to be virtualised and be located separately, leading to the various x-haul options.
The RAN is being disaggregated in several ways, including virtualised RAN and Open RAN. Marvell says Open RAN is still in its infancy but is a key part of the operators’ desire to virtualise and disaggregate their networks.
“Every Open RAN operator that is doing trials or early-stage deployments is also virtualising and disaggregating,” says Carson.
RAN disaggregation is also occurring in the vertical domain: the baseband functions and how they interface to the higher layers of the network. Such vertical disaggregation is being undertaken by the likes of the ONF and the O-RAN Alliance.
The disaggregated RAN – a mixture of the radio, DU and CU units – can still be located at a common site but more likely will be spread across locations.
Fronthaul is used to link the radio unit and DU when they are at separate locations. In turn, the DU and CU may also be at separate locations with the CU implemented in software running on servers deep within the network. Separating the DU and the CU is leading to the emergence of a new link: midhaul, says Téral.
Fronthaul speeds
Marvell says that the first 5G radio deployments use 8 transmitter/ 8 receiver (8T/8R) multiple-input multiple-output (MIMO) systems.
MIMO is a signal-processing technique for beamforming, allowing operators to localise the capacity offered to users. An operator may use tens of megahertz of radio spectrum in such a configuration, with the result that the radio unit’s traffic requires a 10Gbps fronthaul link to the DU.
Leading operators are now deploying 100MHz of radio spectrum and massive MIMO – up to 32T/32R. Such a deployment requires 25Gbps fronthaul links.
“What we are seeing now is those leading operators, starting in the Asia Pacific, while the US operators have spectrum footprints at 3GHz and soon 5-6GHz, using 200MHz instantaneous bandwidth on the radio unit,” says Carson.
Here, an even higher-order 64T/64R massive MIMO will be used, driving the need for 50Gbps fronthaul links. Samsung has demonstrated the use of 64T/64R MIMO, enabling up to 16 spatial layers and boosting capacity by 7x.
“Not only do you have wider bandwidth, but you also have this capacity boost from spatial layering which carriers need in the ‘hot zones’ of their networks,” says Carson. “This is driving the need for 50-gigabit fronthaul.”
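The arithmetic behind those link speeds can be approximated. The sketch below assumes an O-RAN split-7.2x fronthaul carrying compressed frequency-domain IQ samples per spatial layer; the 8-bit IQ width, 98 per cent subcarrier occupancy and 30kHz subcarrier spacing are illustrative assumptions, not Marvell’s figures:

```python
def fronthaul_rate_gbps(carrier_mhz: float, layers: int,
                        iq_bits: int = 8, scs_khz: float = 30.0) -> float:
    """Rough user-plane rate for a split-7.2x fronthaul link."""
    # Usable subcarriers: roughly 98% of the carrier in 5G NR.
    subcarriers = carrier_mhz * 1e3 / scs_khz * 0.98
    # 14 OFDM symbols per 0.5 ms slot at 30 kHz subcarrier spacing.
    symbols_per_s = 14 * 2000 * (scs_khz / 30.0)
    # An I and a Q sample per subcarrier, per symbol, per spatial layer.
    return subcarriers * symbols_per_s * 2 * iq_bits * layers / 1e9

# One 100MHz carrier with 16 spatial layers comes to roughly 23 Gbps,
# fitting a 25G link; two such carriers (200MHz) need roughly 47 Gbps,
# hence 50-gigabit fronthaul.
```

Control-plane overhead and compression schemes shift these numbers in practice, but the scaling with carrier bandwidth and spatial layers is what drives the 10G to 25G to 50G progression the article describes.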
AtlasOne PHY
Marvell says its AtlasOne PAM-4 PHY chip for fronthaul supports an industrial temperature range and reduces power consumption by a quarter compared to its older PHYs. The power-saving is achieved by optimising the PHY’s digital signal processor and by integrating the DML driver.
Earlier this year Marvell announced its 50G PAM-4 Atlas quad-PHY design for the data centre. The AtlasOne uses the same architecture but differs in that it is integrated into a package for telecom and integrates the DML driver but not the trans-impedance amplifier (TIA).
“In a data centre module, you typically have the TIA and the photo-detector close to the PHY chip; in telecom, the photo-detector has to go into a ROSA (receiver optical sub-assembly),” says Bolig. “And since the photo-detector is in the ROSA, the TIA ends up having to be in the ROSA as well.”
The AtlasOne also supports 10-gigabit and 25-gigabit modes. Not all lines will need 50 gigabits but deploying the PHY future-proofs the link.
The device will start going into modules in early 2022 followed by field trials starting in the summer. Marvell expects the 50G fronthaul market to start in 2023.
RAN and x-haul IC portfolio
One of the challenges of virtualising the RAN is the layer-1 processing, which requires significant computation – more than can be handled in software running on a general-purpose processor.
Marvell supplies two chips for this purpose: the Octeon Fusion and the Octeon 10 data processing unit (DPU), which provide programmability as well as the specialised hardware accelerator blocks needed for 4G and 5G. “You just can’t deploy 4G or 5G on a software-only architecture,” says Carson.
As well as these two ICs and its PHY families for the various x-haul links, Marvell also has a coherent DSP family for backhaul (see diagram). Indeed, LightCounting’s Téral notes how Marvell has all the key components for an all-RAN 5G architecture.
Marvell also offers a 5G virtual RAN (VRAN) DU card that uses the Octeon Fusion IC and says it already has five design wins with major cloud and OEM customers.




