The quiet progress of Network Functions Virtualisation

Bruno Chatras

Network Functions Virtualisation (NFV) is a term less often heard these days.

Yet the technology framework that kickstarted a decade of network transformation by the telecom operators continues to progress.

The body specifying NFV, the European Telecommunications Standards Institute's (ETSI) Industry Specification Group (ISG) NFV, is developing the latest releases of the architecture.

The releases add AI and machine learning, intent-based management, power savings, and virtual radio access network (VRAN) support.

ETSI is also shortening the time between NFV releases.

“NFV is quite a simple concept but turning the concept into reality in service providers’ networks is challenging,” says Bruno Chatras, ETSI’s ISG NFV Chairman and senior standardisation manager at Orange Innovation. “There are many hidden issues, and the more you deploy NFV solutions, the more issues you find that need to be addressed via standardisation.”

NFV’s goal

A decade ago, thirteen leading telecom operators published the ETSI NFV White Paper.

The operators were frustrated. They saw how the IT industry and hyperscalers innovated using software running on servers while they had cumbersome networks that couldn’t take advantage of new opportunities.

Each network service introduced by an operator required specialised kit that had to be housed, powered, and maintained by skilled staff who were increasingly hard to find. And any service upgrade required the equipment vendor to write a new release, a time-consuming, costly process.

The telcos viewed NFV as a way of turning network functions into software. Such network functions – constituents of services – could then be combined and deployed.

“We believe Network Functions Virtualisation is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures,” claimed the authors in the seminal NFV White Paper.

A decade on

Virtual network functions (VNFs) now run across the network, and the transformation buzz has moved from NFV to such topics as 5G, Open RAN, automation and cloud-native.

Yet NFV changed the operators’ practices by introducing virtualisation, disaggregation, and open software practices.

A decade of network transformation has given rise to new challenges while technologies such as 5G and Open RAN have emerged.

Meanwhile, the hyperscalers and cloud have advanced significantly in the last decade.

“When we coined the term NFV in the summer of 2012, we never expected the cloud technologies we wanted to leverage to stand still,” says Don Clarke, one of the BT authors of the original White Paper. “Indeed, that was the point.”

NFV releases

The ISG NFV’s work began with a study to confirm NFV’s feasibility, and the definition of the NFV architecture and terminology.

Release 2 tackled interoperability. The working group specified application programming interfaces (APIs) between the NFV management and orchestration (MANO) functions using REST interfaces, and also added ‘descriptors’.

A VNF descriptor is a file that contains the information needed by the VNFM, an NFV-MANO functional block, to deploy and manage a VNF.
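The idea of a descriptor can be illustrated with a toy sketch. The field names below are simplified stand-ins for ETSI VNFD concepts (real descriptors are TOSCA or YANG documents with many more attributes), and all values are hypothetical:

```python
# Illustrative sketch only: a minimal VNF descriptor modelled as a Python
# dict. Field names loosely mirror VNFD concepts such as virtualisation
# deployment units (VDUs); they are not the normative ETSI schema.
vnfd = {
    "vnfd_id": "example-firewall-vnf",   # hypothetical identifier
    "provider": "ExampleVendor",
    "version": "1.0",
    "vdus": [                            # VMs/containers making up the VNF
        {"name": "fw-main", "vcpus": 4, "memory_gb": 8, "storage_gb": 20},
        {"name": "fw-log",  "vcpus": 1, "memory_gb": 2, "storage_gb": 50},
    ],
    "deployment_flavours": ["small", "large"],
}

def total_compute(descriptor):
    """Sum the vCPUs and memory a VNFM would need to reserve for all VDUs."""
    vcpus = sum(vdu["vcpus"] for vdu in descriptor["vdus"])
    memory = sum(vdu["memory_gb"] for vdu in descriptor["vdus"])
    return vcpus, memory

print(total_compute(vnfd))  # (5, 10)
```

The point of the descriptor is exactly this kind of machine-readable inventory: the VNFM reads it rather than being hand-configured per VNF.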

Release 3, whose technical content is now complete, added a policy framework. Policy rules given to the NFV orchestrator determine where best to place the VNFs in a distributed infrastructure.

Other features include VNF snapshotting for troubleshooting and MANO functions to manage the VNFs and network services.

Release 3 also addressed multi-site deployment. “If you have two VNFs, one is in a data centre in Paris, and another in a data centre in southern France, interconnected via a wide area network, how does that work?” says Chatras.
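The policy-driven, multi-site placement described above can be sketched as a toy selection routine. The site attributes, rule structure, and names are illustrative inventions, not drawn from the NFV specifications:

```python
# Toy sketch of policy-driven VNF placement across distributed sites.
# Sites, attributes, and the rule form are hypothetical illustrations.
sites = [
    {"name": "paris-dc",     "free_vcpus": 16, "latency_ms": 2},
    {"name": "marseille-dc", "free_vcpus": 64, "latency_ms": 9},
]

def place(vnf, sites):
    """Return the lowest-latency site satisfying the VNF's policy constraints."""
    for site in sorted(sites, key=lambda s: s["latency_ms"]):
        if (site["free_vcpus"] >= vnf["vcpus"]
                and site["latency_ms"] <= vnf["max_latency_ms"]):
            return site["name"]
    return None  # no site satisfies the policy

print(place({"vcpus": 8,  "max_latency_ms": 5},  sites))  # paris-dc
print(place({"vcpus": 32, "max_latency_ms": 20}, sites))  # marseille-dc
```

A real orchestrator evaluates far richer rules (affinity, regulation, cost), but the shape is the same: policy constraints filter candidate sites, then a preference order picks one.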

Implementing VNFs using containers was always part of the NFV vision, says ETSI.

Initial specifications concentrated on virtual machines, but Release 3 marked NFV’s first container VNF work.

The NFV architecture. The coloured boxes are additions to the original architecture to support cloud-native functions. Source: ETSI

Releases 4 and 5

The ISG NFV is now developing Releases 4 and 5 in parallel.

Each new release item starts with a study phase and, based on the result, is turned into specifications.

The study phase is now limited to six months to speed up the NFV releases. The project is earmarked for a later release if the work takes longer than expected.

Two additions in NFV Release 4 are support for container management frameworks such as Kubernetes, a well-advanced project, and network automation, which brings in AI, machine learning, and intent management techniques.

Network automation

NFV MANO functions already provide automation using policy rules.

“Here, we are going a step further; we are adding a management data analytics function to help the NFV orchestrator make decisions,” says Chatras. “Similarly, we are adding an intent management function above the NFV orchestrator to simplify interfacing to the operations support systems (OSS).”

Intent management is an essential element of the operators’ goal of end-to-end network automation.

Without intent management, if the OSS wants to deploy a network service, it sends a detailed request using a REST API to the NFV orchestrator on how to proceed. For example, the OSS details the VNFs needed for the network service, their interconnections, the bandwidth required, and whether IPv4 or IPv6 is used.

“With an intent-based approach, that request sent to the intent management function will be simpler,” says Chatras. “It will just set out the network service the operator wants, and the intent management function will derive the technical details.”

The intent management function, in effect, knows what is technically possible and what VNFs are available to do the work.
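The contrast Chatras describes can be sketched with hypothetical payloads. The service name, catalogue, and field names below are invented for illustration; real requests use the ETSI SOL REST APIs:

```python
# Hypothetical contrast: what the OSS sends today (detailed) versus what
# it would send to an intent management function (simple), which then
# derives the technical details from a service catalogue it maintains.
detailed_request = {                      # today's fully-specified request
    "vnfs": ["vEPC", "vFirewall"],
    "links": [{"from": "vEPC", "to": "vFirewall", "bandwidth_mbps": 500}],
    "ip_version": 6,
}

intent = {"service": "secure-mobile-core", "users": 10000}  # intent-based

def derive_details(intent):
    """Stand-in for the intent management function: expand a known intent
    into the detailed request the NFV orchestrator needs."""
    catalogue = {  # assumed service catalogue, illustrative only
        "secure-mobile-core": {
            "vnfs": ["vEPC", "vFirewall"],
            "links": [{"from": "vEPC", "to": "vFirewall",
                       "bandwidth_mbps": intent["users"] // 20}],
            "ip_version": 6,
        }
    }
    return catalogue[intent["service"]]

print(derive_details(intent) == detailed_request)  # True
```

The OSS's job shrinks to stating *what* it wants; the *how* moves into the intent management function.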

The work on intent management and management data analytics has just started.

“We have spent quite a lot of time on a study phase to identify what is feasible,” says Chatras.

Release 5 work started a year ago with the ETSI group asking its members what was needed.

The aim is to consolidate and close functional gaps identified by the industry. But two features are being added: Green NFV and support for VRAN.

Green NFV and VRAN

Energy efficiency was one of the expected benefits listed in the original White Paper.

ETSI has a Technical Committee for Environmental Engineering (TC EE) with a working group to reduce energy consumption in telecommunications.

The energy-saving work of Release 5 is solely for NFV, one small factor in the overall picture, says Chatras.

Just using the orchestration capabilities of NFV can reduce energy costs.

“You can consolidate workloads on fewer servers during off-peak hours,” says Chatras. “You can also optimise the location of the VNF where the cost of energy happens to be lower at that time.”
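The off-peak consolidation Chatras mentions is, at heart, a bin-packing problem. A minimal first-fit-decreasing sketch, with hypothetical per-VNF CPU loads:

```python
# Toy first-fit-decreasing consolidation: pack VNF CPU loads onto as few
# servers as possible so the idle servers can be powered down off-peak.
def consolidate(loads, server_capacity):
    """Return lists of loads per server after first-fit-decreasing packing."""
    servers = []
    for load in sorted(loads, reverse=True):
        for s in servers:
            if sum(s) + load <= server_capacity:
                s.append(load)     # fits on an already-used server
                break
        else:
            servers.append([load])  # open a new server
    return servers

off_peak = [40, 35, 30, 25, 20, 10]    # % CPU per VNF, hypothetical figures
bins = consolidate(off_peak, 100)
print(len(bins))  # 2 servers suffice instead of 6
```

Production orchestrators add constraints (anti-affinity, migration cost, headroom), but the energy case rests on this basic packing step.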

Release 5 goes deeper by controlling the energy consumption of a VNF dynamically using the power management features of servers.

The server can change the CPU’s clock frequency. Release 5 will address whether the VNF management or orchestration does this. There is also a tradeoff between lowering the clock speed and maintaining acceptable performance.
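The frequency/performance trade-off can be sketched as a toy selection routine. The assumption that packet throughput scales linearly with clock frequency is a simplification for illustration, as are all the figures:

```python
# Toy DVFS sketch: pick the lowest CPU frequency that still meets a VNF's
# packet-rate target. Linear throughput scaling with frequency is an
# assumed simplification, not a claim from the Release 5 study.
def pick_frequency(freqs_ghz, base_freq, base_mpps, target_mpps):
    """Return the lowest frequency whose scaled throughput meets the target."""
    for f in sorted(freqs_ghz):
        if base_mpps * f / base_freq >= target_mpps:
            return f
    return max(freqs_ghz)  # cannot meet target: fall back to full speed

freqs = [1.2, 1.8, 2.4, 3.0]  # available P-states, hypothetical
f = pick_frequency(freqs, base_freq=3.0, base_mpps=12.0, target_mpps=7.0)
print(f)  # 1.8 GHz is enough for 7 Mpps at these assumed ratings
```

The open question the release must settle is who runs this loop, the VNF manager or the orchestrator, and how the acceptable-performance threshold is expressed.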

“So, many things to study,” says Chatras.

The Green NFV study will provide guidelines for designing energy-efficient VNFs: reducing execution time and memory consumption, and deciding whether to use hardware accelerators, depending on the processors available.

“We are collecting use cases of what operators would like to do, and we hope that we can complete that by mid-2022,” says Chatras.

The VRAN work involves checking the work done in the O-RAN Alliance to verify whether the NFV framework supports all the requirements. If not, the group will evaluate proposed solutions before changing specifications.

“We are doing that because we heard from various people that things are missing in the ETSI ISG NFV specifications to support VRAN properly,” says Chatras.

Is the bulk of the NFV work already done? Chatras thinks before answering: “It is hard to say.”

The overall ecosystem is evolving, and NFV must remain aligned, he says, and this creates work.

The group will complete the study phases of Green NFV and NFV for VRAN this summer before starting the specification work.

NFV deployments

ETSI ISG NFV has a group known as the Network Operator Council, comprising operators only.

The group creates occasional surveys to assess where NFV technology is being used.

“What we see is confidential, but roughly there are NFV deployments across nearly all network segments: mobile core, fixed networks, RAN and enterprise customer premise equipment,” says Chatras.

VNFs to CNFs

Now there is a broad industry interest in cloud-native network functions. But the ISG NFV group believes that there is a general misconception regarding NFV.

“In ETSI, we do not consider that cloud-native network functions are something different from VNFs,” says Chatras. “For us, a cloud-native function is a VNF with a particular software design, which happens to be cloud-native, and which in most cases is hosted in containers.”

The NFV framework’s goal, says ETSI, is to deliver a generic solution to manage network functions regardless of the technology used to deploy them.

Chatras is not surprised that NFV is mentioned less: NFV is 10 years old, and the same happens to many technologies as they mature. From a technical standpoint, however, the specifications being developed by ETSI NFV comply with the cloud model.

Most operators will admit that NFV has proved very complex to deploy.

Running VNFs on third-party infrastructure is complicated, says Chatras. That will not change whether an NFV specification is used or something else based on Kubernetes.

Chatras is also candid about the overall progress of network transformation. “Is it all happening sufficiently rapidly? Of course, the answer is no,” he says.

Network transformation has many elements, not just standards. The standardisation work is doing its part; whenever an issue arises, it is tackled.

“The hallmark of good standardisation is that it evolves to accommodate unexpected twists and turns of technology evolution,” agrees Clarke. “We have seen the growth of open source and so-called cloud-native technologies; ETSI NFV has adapted accordingly and figured out new and exciting possibilities.”

Many issues remain for the operators: skills transformation, organisational change, and each determining what it means to become a ‘digital’ service provider.

In other words, the difficulties of network transformation will not magically disappear, however elegantly the network is architected as it transitions increasingly to software and cloud.


Marvell's 50G PAM-4 DSP for 5G optical fronthaul

Marvell's wireless portfolio of ICs. Source: Marvell.

  • Marvell has announced the first 50-gigabit 4-level pulse-amplitude modulation (PAM-4) physical layer (PHY) for 5G fronthaul.
  • The chip completes Marvell’s comprehensive portfolio for 5G radio access network (RAN) and x-haul (fronthaul, midhaul and backhaul).
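PAM-4's appeal at these rates follows from simple modulation arithmetic: with four amplitude levels it carries two bits per symbol, halving the symbol rate needed versus two-level NRZ signalling:

```python
import math

# A 50Gbps lane at PAM-4 (4 levels, log2(4) = 2 bits/symbol) needs only a
# 25GBaud symbol rate; NRZ (2 levels) would need the full 50GBaud.
def symbol_rate_gbaud(bit_rate_gbps, levels):
    """Symbol rate required for a given bit rate and modulation order."""
    bits_per_symbol = math.log2(levels)
    return bit_rate_gbps / bits_per_symbol

print(symbol_rate_gbaud(50, 4))  # 25.0 GBaud for PAM-4
print(symbol_rate_gbaud(50, 2))  # 50.0 GBaud for NRZ
```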

Marvell has announced what it claims is an industry-first: a 50-gigabit PHY for the 5G fronthaul market.

Dubbed the AtlasOne, the PAM-4 PHY chip also integrates the laser driver. Marvell claims this is another first: implementing the directly modulated laser (DML) driver in CMOS.

“The common thinking in the industry has been that you couldn’t do a DML driver in CMOS due to the current requirements,” says Matt Bolig, director, product marketing, optical connectivity at Marvell. “What we have shown is that we can build that into CMOS.”

Marvell, through its Inphi acquisition, says it has shipped over 100 million ICs for the radio access network (RAN) and estimates that its silicon is in networks supporting 2 billion cellular users.

“We have been in this business for 15 years,” says Peter Carson, senior director, solutions marketing at Marvell. “We consider ourselves the number one merchant RAN silicon provider.”

Inphi started shipping its Polaris PHY for 5G midhaul and backhaul markets in 2019. “We have over a million ships into 5G,” says Bolig. Now Marvell is adding its AtlasOne PHY for 5G fronthaul.

Mobile traffic

Marvell says wireless data has been growing at a compound annual growth rate (CAGR) of over 60 per cent (2015-2021). Such relentless growth is forcing operators to upgrade their radio units and networks.

Stéphane Téral, chief analyst at market research firm, LightCounting, in its latest research note on Marvell’s RAN and x-haul silicon strategy, says that while 5G rollouts are “going gangbusters” around the world, they are traditional RAN implementations.

By that Téral means 5G radio units linked to a baseband unit that hosts both the distributed unit (DU) and centralised unit (CU).

But as 5G RAN architectures evolve, the baseband unit is being disaggregated, separating the DU and the CU. This is happening because the RAN is such an integral and costly part of the network, and operators want to move away from vendor lock-in and expand their marketplace options.

For RAN, this means splitting the baseband functions and standardising interfaces that previously were hidden within custom equipment. Splitting the baseband unit also allows the functionality to be virtualised and be located separately, leading to the various x-haul options.

How the RAN is being disaggregated includes virtualised RAN and Open RAN. Marvell says Open RAN is still in its infancy but is a key part of the operators’ desire to virtualise and disaggregate their networks.

“Every Open RAN operator that is doing trials or early-stage deployments is also virtualising and disaggregating,” says Carson.

RAN disaggregation is also occurring in the vertical domain: the baseband functions and how they interface to the higher layers of the network. Such vertical disaggregation is being undertaken by the likes of the ONF and the O-RAN Alliance.

The disaggregated RAN – a mixture of the radio, DU and CU units – can still be located at a common site but more likely will be spread across locations.

Fronthaul is used to link the radio unit and DU when they are at separate locations. In turn, the DU and CU may also be at separate locations with the CU implemented in software running on servers deep within the network. Separating the DU and the CU is leading to the emergence of a new link: midhaul, says Téral.

Fronthaul speeds

Marvell says that the first 5G radio deployments use 8 transmitter/ 8 receiver (8T/8R) multiple-input multiple-output (MIMO) systems.

MIMO is a signal processing technique for beamforming, allowing operators to localise the capacity offered to users. An operator may use tens of megahertz of radio spectrum in such a configuration, with the result that the radio unit traffic requires a 10Gbps fronthaul link to the DU.

Leading operators are now deploying 100MHz of radio spectrum and massive MIMO – up to 32T/32R. Such a deployment requires 25Gbps fronthaul links.

“What we are seeing now is those leading operators, starting in the Asia Pacific, while the US operators have spectrum footprints at 3GHz and soon 5-6GHz, using 200MHz instantaneous bandwidth on the radio unit,” says Carson.

Here, an even higher-order 64T/64R massive MIMO will be used, driving the need for 50Gbps fronthaul links. Samsung has demonstrated the use of 64T/64R MIMO, enabling up to 16 spatial layers and boosting capacity by 7x.

“Not only do you have wider bandwidth, but you also have this capacity boost from spatial layering which carriers need in the ‘hot zones’ of their networks,” says Carson. “This is driving the need for 50-gigabit fronthaul.”

AtlasOne PHY

Marvell says its AtlasOne PAM-4 PHY chip for fronthaul supports an industrial temperature range and reduces power consumption by a quarter compared to its older PHYs. The power-saving is achieved by optimising the PHY’s digital signal processor and by integrating the DML driver.

Earlier this year Marvell announced its 50G PAM-4 Atlas quad-PHY design for the data centre. The AtlasOne uses the same architecture but differs in that it is integrated into a package for telecom and integrates the DML driver but not the trans-impedance amplifier (TIA).

“In a data centre module, you typically have the TIA and the photo-detector close to the PHY chip; in telecom, the photo-detector has to go into a ROSA (receiver optical sub-assembly),” says Bolig. “And since the photo-detector is in the ROSA, the TIA ends up having to be in the ROSA as well.”

The AtlasOne also supports 10-gigabit and 25-gigabit modes. Not all lines will need 50 gigabits but deploying the PHY future-proofs the link.

The device will start going into modules in early 2022 followed by field trials starting in the summer. Marvell expects the 50G fronthaul market to start in 2023.

RAN and x-haul IC portfolio

One of the challenges of virtualising the RAN is the layer-one processing, which requires more computation than software running on a general-purpose processor can handle.

Marvell supplies two chips for this purpose: the Octeon Fusion and the Octeon 10 data processing unit (DPU), which provide programmability as well as the specialised hardware accelerator blocks needed for 4G and 5G. “You just can’t deploy 4G or 5G on a software-only architecture,” says Carson.

As well as these two ICs and its PHY families for the various x-haul links, Marvell also has a coherent DSP family for backhaul (see diagram). Indeed, LightCounting’s Téral notes how Marvell has all the key components for an all-RAN 5G architecture.

Marvell also offers a 5G virtual RAN (VRAN) DU card that uses the Octeon Fusion IC and says it already has five design wins with major cloud and OEM customers.
