The ONF adapts after sale of spin-off Ananki to Intel

Intel’s acquisition of Ananki, a private 5G networking company set up within the ONF last year, has meant the open-model organisation has lost the bulk of its engineering staff.
The ONF, a decade-old non-profit consortium led by telecom operators, has developed notable networking projects over the years such as OpenFlow, one of the first software-defined networking (SDN) standards; CORD; and Aether, the 5G edge platform.
Its joint work with the operators has led to virtualised and SDN building blocks that, when combined, can address comprehensive networking tasks such as 5G, wireline broadband and private wireless networks.
The ONF’s approach has differed from other open-source organisations. Its members pay for an in-house engineering team to co-develop networking blocks based on disaggregation, SDN and cloud.
The ONF and its members have built a comprehensive portfolio of networking functions which last year led to the organisation spinning out a start-up, Ananki, to commercialise a complete private end-to-end wireless network.
Now Intel has acquired Ananki, taking with it 44 of the ONF’s 55 staff.
“Intel acquired Ananki, Intel did not acquire the ONF,” says Timon Sloane, the ONF’s newly appointed general manager. “The ONF is still whole.”
The ONF will now continue with a model akin to other open-source organisations.
ONF’s evolution
The ONF began by tackling the emerging interest in SDN and disaggregation.
“After that phase, considered Phase One, we broke the network into pieces and it became obvious that it was complicated to then build solutions; you have these pieces that had to be reassembled,” says Sloane.
The ONF used its partner funding to set up a joint development team to craft solutions that were used to seed the industry.
The ONF pursued this approach for over six years but increasingly felt the model had run its course.
“We were kind of an insular walled garden, with us and a small number of operators working on things,” says Sloane. “We needed to flip the model inside out and go broad.”
This led to the spin-out of Ananki, a separate for-profit entity that would bring in funding yet would also be an important contributor to open source. And as it grew, the thinking was that it would subsume some of the ONF’s engineering team.
“We thought for the next phase that a more typical open-source model was needed,” says Sloane. “Something like Google with Kubernetes, where one company builds something, puts it in open source and feeds it, even for a couple of years, until it grows, and the community grows around it.”
But during the process of funding Ananki, several companies expressed an interest in acquiring the start-up. The ONF will not name the other interested parties but hints they included telecom operators and hyperscalers.
The merit of Intel, says Sloane, is that it is a chipmaker with a strong commitment to open source.
Deutsche Telekom’s ongoing Open RAN trial in Berlin uses key components from the ONF, including SD-Fabric, the 5G and 4G core functions, and the uONOS near-real-time RAN Intelligent Controller (RIC). Source: ONF, DT.
Post-Ananki
“Those same individuals who were wearing an ONF hat, are swapping it for an Intel hat, but are still on the leadership of the project,” says Sloane. “We view this as an accelerant for the project contributions because Intel has pretty deep resources and those individuals will be backed by others.”
The ONF acknowledges that its fixed broadband passive optical networking (PON) work is not part of Ananki’s interest. Intel understands that there are operators reliant on that project and will continue to help during a transition period. Those vendors and operators directly involved will also continue to contribute.
“If you look at every other project that we’re doing – mobile core, mobile RAN, all the P4 work, programmable networks – Intel has been very active,” says Sloane.
Meanwhile, the ONF is releasing its entire portfolio to the open-source community.
“We’ve moved out of the walled-garden phase into a more open phase, focused on the consumption and adoption [of the designs],” says Sloane. The projects will remain under the auspices of the ONF to get the platforms adopted within networks.
The ONF will use its remaining engineers to offer its solutions using a Continuous Integration/Continuous Delivery (CI/CD) software pipeline.
“We will continue to have a smaller engineering team focused on Continuous Integration so that we’ll be able to deliver daily builds, hourly builds, and continuous regression testing – all that coming out of ONF and the ONF community,” says Sloane. “Others can use their CD pipelines to deploy and we are delivering exemplar CD pipelines if you want to deploy bare metal or in a cloud-based model.”
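As a rough illustration of that split between the ONF’s CI and adopters’ CD pipelines, the minimal Python sketch below mimics the flow; the stage names and artifact naming are hypothetical, not the ONF’s actual pipeline.

```python
# Conceptual sketch of the CI/CD split described above; the stage names
# and artifact naming are placeholders, not the ONF's actual pipeline.

CI_STAGES = ("build", "unit tests", "regression tests")     # run by the ONF's CI
CD_STAGES = ("package", "deploy to bare metal or a cloud")  # run by adopters' CD

def ci_pipeline(commit: str) -> str:
    """Continuous integration: rebuild and regression-test every change,
    producing an artifact (e.g. an hourly or nightly build)."""
    for stage in CI_STAGES:
        print(f"CI [{commit}]: {stage}")
    return f"artifact-{commit}"

def cd_pipeline(artifact: str) -> None:
    """Continuous delivery: take a CI artifact and deploy it using whatever
    exemplar or in-house CD pipeline the adopter prefers."""
    for stage in CD_STAGES:
        print(f"CD [{artifact}]: {stage}")

if __name__ == "__main__":
    artifact = ci_pipeline("abc123")  # triggered per commit or on a schedule
    cd_pipeline(artifact)             # run by whoever deploys the platform
```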
The ONF is also looking at creating a platform that enables the programmability of a host using silicon such as a data processing unit (DPU) as part of larger solutions.
“It’s a very exciting space,” says Sloane. “You just saw the Pensando acquisition; I think that others are recognising this is a pretty attractive space.” AMD recently announced it is acquiring Pensando to add a DPU architecture to its chip portfolio.
The ONF’s goal is to create a common platform that can be used for cloud and telecom networking and infrastructure for applications such as 5G and edge.
“And then there is of course the whole edge space, which is quite fascinating; a lot is going on there as well,” says Sloane. “So I don’t think we’re done by any means.”
BT’s Open RAN trial: A mix of excitement and pragmatism

“We in telecoms, we don’t do complexity very well.” So says Neil McRae, BT’s managing director and chief architect.
He was talking about the trend of making network architectures open and in particular the Open Radio Access Network (Open RAN), an approach that BT is trialling.
“In networking, we are naturally sceptical because these networks are very important and every day become more important,” says McRae.
Whether it is Open RAN or any other technology, it is key for BT to understand its aims and how it helps. “And most importantly, what it means for customers,” says McRae. “I would argue we don’t do enough of that in our industry.”
Open RAN
Open RAN has become a key focus in the development of 5G. Backed by leading operators, it promises greater vendor choice and helps counter the dependency on the handful of key RAN vendors such as Nokia and Ericsson. There are also geopolitical considerations given that Huawei is no longer a network supplier in certain countries.
“Huawei and China, once they were the flavour of the month and now they no longer are,” says McRae. “That has driven a lot of concern – there are only Nokia and Ericsson as scaled players – and I think that is a thing we need to worry about.”
McRae points out that Open RAN is an interface standard rather than a technology.
“Those creating Open RAN solutions, the only part that is open is that interface side,” he says. “If you think of Nokia, Ericsson, Mavenir, Rakuten and Altiostar – any of the guys building this technology – none of their technology is specifically open but you can talk to it via this open interface.”

Opportunity
McRae is upbeat about Open RAN but cautions that much work is needed to realise its potential.
“Open RAN, and I would probably say the same about NFV (network functions virtualisation), has got a lot of momentum and a lot of hype well before I think it deserves it,” he says.
BT favours open architectures and interoperability. “Why wouldn’t we want to be part of that, part of Open RAN?” says McRae. “But what we are seeing here is people excited about the potential – we are hugely excited about the potential – but are we there yet? Absolutely not.”
BT views Open RAN as a way to support the small-cell neutral-host model, whereby a company offers operators coverage – one way Open RAN can augment macro-cell coverage.
Open RAN can also be used to provide indoor coverage such as in stadiums and shopping centres. McRae says Open RAN could also be used for transportation but there are still some challenges there.
“We see Open RAN and the Open RAN interface specifications as a great way for building innovation into the radio network,” he says. “If there is one part that we are hugely excited about, it is that.”
BT’s Open RAN trial
BT is conducting an Open RAN trial with Nokia in Hull, UK.
“We haven’t just been working with Nokia on this piece of work, other similar experiments are going on with others,” says McRae.
McRae likens Open RAN to software-defined networking (SDN). SDN uses many largely unintelligent devices while a central controller – ‘the big brain’ – controls the devices and in the process makes them more valuable.
“SDN has this notion of a controller and devices and the Open RAN solution is no different: it uses a different interface but it is largely the same concept,” says McRae.
This central controller in Open RAN is the RAN Intelligent Controller (RIC) and it is this component that is at the core of the Nokia trial.
“That controller allows us to deploy solutions and applications into the network in a really simple and manageable way,” says McRae.
The RIC architecture comprises a near-real-time RIC that sits very close to the radio and makes almost instantaneous changes based on current conditions.
There is also a non-real-time RIC, used for such tasks as setting policy, the overall run cycle of the network, configuration and troubleshooting.
“You kind of create and deploy it, adjust it or add or remove things, not in real-time,” says McRae. “It is like with a train track, you change the signalling from red to green long before the train arrives.”
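As a rough illustration of the two timescales, here is a minimal Python sketch; the class names, threshold and 10-millisecond cycle are hypothetical, not the O-RAN interfaces or the Nokia software used in the trial.

```python
import time

# Conceptual sketch of the two RIC control loops.
# Names and thresholds are illustrative only, not O-RAN APIs.

class NonRealTimeRIC:
    """Sets policy on a slow timescale (seconds to minutes or longer)."""
    def make_policy(self):
        # e.g. prefer steering traffic to small cells once a macro cell's
        # load passes a threshold
        return {"offload_threshold": 0.8}

class NearRealTimeRIC:
    """Applies the current policy to live cells within tens of milliseconds."""
    def __init__(self, policy):
        self.policy = policy

    def control_step(self, cell):
        if cell["load"] > self.policy["offload_threshold"]:
            print(f"{cell['id']}: steering users towards small cells")
        else:
            print(f"{cell['id']}: no action needed")

non_rt = NonRealTimeRIC()
near_rt = NearRealTimeRIC(non_rt.make_policy())

# The near-real-time loop reacts to current radio conditions; the policy
# itself is only refreshed occasionally by the non-real-time RIC.
for load in (0.5, 0.9):
    near_rt.control_step({"id": "macro-1", "load": load})
    time.sleep(0.01)  # ~10 ms control cycle in this sketch
```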
BT views the non-real-time aspect of the RIC as a new way for telcos to automate and optimise the core aspects of radio networking.
McRae says the South Bank in London is one of the busiest parts of BT’s network and the operator has had to keep adding spectrum to the macrocells there.
“It is getting to the point where the macro isn’t going to be precise enough to continue to build a great experience in a location like that,” he says.
One solution is to add small cells, and BT has looked at that, but it has concluded that making macros and small cells work well together is not straightforward. This is where the RIC can optimise the macro and small cells in a way that improves the experience for customers, even when the macro equipment is from one vendor and the small cells are from another.
“The RIC allows us to build solutions that take the demand and the requirements of the network a huge step forward,” he says. “The RIC makes a massive step – one of the biggest steps in the last decade, probably since massive MIMO – in ensuring we can get the most out of our radio network.”
BT is focused on the non-real-time RIC for the initial use cases it is trialling and is using Nokia’s equipment for the Hull trial.
BT is also testing applications such as load optimisation between different layers of the network and between neighbouring sites. Where there is a failure in the network, it is using xApps to reroute traffic or re-optimise the network.
Nokia also has AI and machine-learning software which BT is trialling. BT sees AI and machine-learning-based solutions as a must, as ultimately human operators are too slow.
Trial goals
BT wants to understand how Open RAN works in deployment. For example, how to manage a cell that is part of a RIC cluster.
In a national network, there will likely be multiple RICs used.
“We expect that this will be a distributed architecture,” says McRae. “How do you control that? Well, that’s where the non-real-time RIC has a job to do, effectively to configure the near-real-time RIC, or RICs as we understand more about how many of them we need.”
Another aspect of the trial is to see if, by using Open RAN, the network performance KPIs can be improved. These include time on 4G/time on 5G, and the number of handovers and dropped calls.
“Our hope and we expect that all of these get better; the early signs in our labs are that they should all get better, the network should perform more effectively,” he says.
BT will also do coverage testing which, with some of the newer radios it is deploying, it expects to improve.
“We’ve done a lot of learning in the lab,” says McRae. “Our experience suggests that translating that into operational knowledge is not perfect. So we’re doing this to learn more about how this will work and how it will benefit customers at the end of the day.”
Openness and diversity
Given that Open RAN aims to open vendor choice, some have questioned whether BT’s trial with Nokia is in the spirit of the initiative.
“We are using the Open RAN architecture and the Open RAN interface specs,” says McRae. “Now, for a lot of people, Open RAN means you have got to have 12 vendors in the network. Let me tell you, good luck to everyone that tries that.”
BT says a set of flavours of Open RAN is appearing. One is Rakuten’s Symphony, another is Mavenir. These are end-to-end offerings that can be delivered to operators as a complete solution.
“Service providers are terrible at integrating things; it is not our core competency,” says McRae. “We have got better over the years but we want to buy a solution that is tested, that has a set of KPIs around how it operates, that has all the security features we need.”
This is key for a platform that in BT’s case serves 30 million users. As McRae puts it, if Open RAN becomes too complicated, it is not going to get off the ground: “So we welcome partnerships, or ecosystems that are forming because we think that is going to make Open RAN more accessible.”
McRae says some of the reaction to BT’s working with Nokia is about driving vendor diversity.
BT wants diverse vendors that can provide it with greater choice and the benefits of competition. But McRae points out that much of the vendors’ equipment uses the same key components from a handful of chip companies, and those chips are made in two key locations.
“What we want to see is those underlying components, we want to see dozens of companies building them all over the world,” he says. “They are so crucial to everything we do in life today, not just in the network, but in your car, smartphone, TV and the microwave.”
And while more of the network is being implemented in software – BT’s 5G core is all software – hardware is still key where there are packet or traffic flows.
“The challenge in some of these components, particularly in the radio ecosystem, is you need strong parallel processing,” says McRae. “In software that is really difficult to do.”
“Intel, AMD, Broadcom and Qualcomm are all great partners,” says McRae. “But if any one of these guys, for some reason, doesn’t move the innovation curve in the way we need it to move, then we run into real problems of how to grow and develop the network.”
What BT wants is as much IC choice as possible, though McRae is less certain how that will be achieved. But operators are right to be concerned about it, he says.
Can a think tank tackle telecoms innovation deficit?

The Telecom Ecosystem Group (TEG) will shortly publish its final paper, concluding two years of industry discussion on ways to spur innovation in telecommunications.
The paper, entitled Addressing the Telecom Innovation Deficit, says telcos have lost much of their influence in shaping the technologies on which they depend.
“They have become ageing monocultures; disruptive innovators have left the industry and innovation is outsourced,” says the report.
Over the two years, the TEG has held three colloquia and numerous discussion groups, soliciting views from experienced individuals across the industry.
The latest paper names eight authors but many more contributed to the document and its recommendations.
Network transformation
Don Clarke, formerly of BT and CableLabs, is one of the authors of the latest paper. He also co-authored ETSI’s Network Functions Virtualisation (NFV) paper that kickstarted the telcos’ network transformation strategies of the last decade.
Many of the changes sought in the original NFV paper have come to pass.
Networking functions now run as software and no longer require custom platforms. Along the way, operators have embraced open interfaces that allow disaggregated designs to tackle vendor lock-in. The telcos have also adopted open-source software practices and spurred the development of white boxes to expand equipment choice.
Yet the TEG paper laments the industry’s continued reliance on large vendors while smaller telecom vendors – seen as vital to generate much-needed competition and innovation – struggle to get a look-in.
The telecom ecosystem
The TEG segments the telecommunications ecosystem into three domains (see diagram).
The large-scale data centre players are the digital services providers (top layer). In this domain, innovation and competition are greatest.
The digital network provider domain (middle layer) is served by a variety of players, notably the cloud providers, while it is the telcos that dominate the physical infrastructure provider domain.
At this bottom layer, competition is low and overall investment in infrastructure is inadequate. A third of the world’s population still has no access to the internet, notes the report.
The telcos should also be exploiting the synergies between the domains, says the TEG, yet they struggle to do so. More than that, the telcos can be a barrier.
Clarke cites the emerging metaverse that will support immersive virtual worlds as an example.
Metaverse
The “Metaverse” is a concept being promoted by the likes of Meta and Microsoft and has been picked up by the telcos, as evident at this week’s MWC Barcelona 22 show.
Meta’s Mark Zuckerberg recently encouraged his staff to focus on long-term thinking as the company transitions to become a metaverse player. “We should take on the challenges that will be the most impactful, even if the full results won’t be seen for years,” he said.
Telcos should be thinking about how to create a network that enables the metaverse, given the data for rendering metaverse environments will come through the telecom network, says Clarke.

“The real innovation will come when you try and understand the needs of the metaverse in terms of networking, and then you get into the telco game,” he says.
Any concentration of metaverse users will generate a data demand likely to exhaust the network capacity available.
“Telcos will say, ‘We aren’t upgrading capacity because we are not getting a return,’ and then metaverse innovation will be slowed down,” says Clarke.
He says much of the innovation needed for the metaverse will be in the network and telcos need to understand the opportunities for them. “The key is what role will the telcos have, not in dollars but network capability, then you start to see where the innovation needs to be done.”
The challenge is that the telcos can’t see beyond their immediate operational challenges, says Clarke: “Anything new creates more operational challenges and therefore needs to be rejected because they don’t have the resources to do anything meaningful.”
He stresses he is full of admiration for telcos’ operations staff: “They know their game.” But in an environment geared to avoiding operational challenges, innovation takes a back seat.
TEG’s action plan
TEG’s report lists direct actions telcos can take regarding innovation. These cover funding, innovation processes, procurement and increasing competition.
Many of the proposals are designed to help smaller vendors overcome the challenges they face in telecoms. TEG views small vendors and start-ups as vital for the industry to increase competition and innovation.
Under the funding category, TEG wants telcos to allocate at least 5 per cent of procurement to start-ups and small vendors. The group also calls for investment funds to be set up that back infrastructure and middleware vendors, not just over-the-top start-ups.
For innovation, it wants greater disaggregation so as to steer away from monolithic solutions. The group also wants commitments to fast lab-to-field trials (a year) and shorter deployment cycles (two years maximum) of new technologies.
Competition will require a rethink regarding small vendors; at present, all the advantages lie with the large vendors. The report lists six measures for how telcos can help small vendors win business, one being to stop forcing them to partner with large vendors. The TEG also wants telcos to commit enough personnel so that small vendors get all the “airtime” they need.
Lastly, concerning procurement, telcos can do much more.
One suggestion is to stop sending small vendors large, complex requests for proposals (RFPs) that must be answered in short timescales; small vendors cannot compete with the large RFP teams available to the big vendors.
Also, telcos should stop harsh negotiating terms such as demanding an additional 30 per cent discount. Such demands can hobble a small vendor.
Innovation
“Innovation comes from left field and if you try to direct it with a telco mindset, you miss it,” says Clarke. “Telcos think they know what ‘good’ looks like when it comes to innovation, but they don’t because they come at it from a monoculture mindset.”
He says that in the TEG discussions, the idea of incubators for start-ups was raised. “We have all done incubators,” he says. But success has been limited for the reasons cited above.
He also laments the lack of visionaries in the telecom industry.
A monoculture organisation rejects such individuals. “Telcos don’t like visionaries because culturally they are annoying and they make their life harder,” he says. “Disruptors have left the industry.”
Prospects
The authors are realistic.
Even if their report is taken seriously, they note any change will take time. They also do not expect the industry to be able to effect change without help. The TEG wants government and regulator involvement if the long-term prospects of a crucial industry are to be ensured.
The key is to create an environment that nurtures innovation and here telcos could work collectively to make that happen.
“No telco has it all, but individual ones have strengths,” says Clarke. “If you could somehow combine the strengths of the particular telcos and create such an environment, things will emerge.”
The trick is diversity – get people from different domains together to make judgements as to what promising innovation looks like.
“Bring together the best people and marvelous things happen when you give them a few beers and tell them to solve a problem impacting all of them,” says Clarke. “How can we make that happen?”
Making optical networking feel like cycling downhill

BT’s chief architect, Neil McRae, is a fervent believer in the internet, a technology built on the continual progress of optical networking. He discussed both topics during his invited talk at the recent OFC 2021 virtual conference and exhibition.
Neil McRae’s advocacy of the internet as an educational tool for individuals from disadvantaged backgrounds stems from his childhood experiences.
“When I was a kid, I lived in a deprived area and the only thing that I could do was go to the library,” says McRae, chief architect and managing director for architecture and technology strategy at BT.
His first thought on discovering the internet was just how much there was to read.
“If I’m honest, everything I’ve learnt in technology has been pretty much self-taught,” says McRae.
This is why he so values the internet. It has given him a career where he has travelled widely and worked with talented and creative people.
“Anyone who is out there in the world can do the same thing,” he says. “I strongly believe that the internet brings opportunities to people who are willing to spend the time to learn.”
Optical networking
McRae surveyed the last 20 years of optical networking in his OFC talk. He chose the period since it was only at the end of the last century that the internet started to have a global impact.
“The investment in networking [during this period] has been orders of magnitude bigger than prior years,” says McRae. “There has also been a lot of deregulation across the world, more telecoms companies, more vendors and ultimately more people getting connected.”
In 2000, networks used the SONET/SDH protocol and fixed wavelengths. “We have brought in many new technologies – coherent, coloured optics, programmable lasers and silicon photonics – and they have been responsible for pretty significant changes.”
McRae likens optical networking to the gears on a bike. “It powers the rest of what we do in the network and without those advances, we wouldn’t be the digitally connected society we are today,” says McRae. “If I think about the pandemic of the last year, can you imagine what the pandemic would have been like if it had happened in the year 2000?”
McRae says he spends a fifth of his time on optical networking. This is more than previously due to the relentless growth in network bandwidth.
“Ultimately, if you get optical wrong, it feels like you are in the wrong gear cycling uphill,” says McRae. “If you get it right, you are in the right gear, you are going as fast as you can go and it feels like a downhill ride.”
And it is not just about bandwidth but also about cost, capability and customer experience. “We recognise the value that it brings to all the other layers right up to the application,” he says.
Research
BT Labs has an optical networking programme that is run by Professor Andrew Lord. The programme’s remit is to help BT address existing and future issues.
“There is a longer-term research aspect to what Andrew and his team do, but there are some here-and-now issues that they support me on like the hollow-core fibre work and some of the 400-gigabit [coherent] platforms we have been reviewing recently,” he says.
He cites as an example the programme’s work on BT’s next-generation optical platform, which was designed for growth and has indeed grown massively in the last decade. “We have launched optical services as a product because of the platform,” says McRae.
The programme has also helped Openreach, BT Group’s copper and fibre plant subsidiary, with its fibre-to-the-premises (FTTP) deployments that use such technologies as GPON and XGS-PON.
Reliable, dynamic, secure networks
McRae admits he is always nervous about predicting the future. But he is confident 400 gigabits will be a significant optical development over the next decade.
This includes inside the data centre, driven by servers, and across the network, including long haul.
“The challenge will be around getting the volume and interoperability as quickly as we possibly can,” says McRae.
The other big opportunity is the increased integration of IP and optical using a control plane aligned to both.
“The biggest networking technology out there is IP,” says McRae. “And that will not change in the coming decade.”
Layer 3 is good at working around issues but bad at managing bandwidth. Optical is the opposite: great at managing bandwidth but less dynamic at working around problems. Merging the two promises significant benefits.
This idea, advocated as IP-over-DWDM, has long been spoken of but has not been deployed widely. The advent of 400-gigabit coherent implemented using client-side modules means that the line-side interface density can equal that of the host. And other developments such as software-defined networking and artificial intelligence also help.
Software-defined networking will make a big difference because it will enable the move to automation, and that in turn will allow new technologies such as artificial intelligence (AI) and machine learning to be introduced.
McRae talks of a control plane capable of deciding which interface to send packets down and also of determining what paths to create across the optical infrastructure.
“We have seen some of that but we have not seen enough,” says McRae. AI and machine-learning technologies will give networks almost autonomous control over which paths to use and provision for the various traffic types they carry.
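As a rough illustration of the kind of multi-layer decision such a control plane might make, here is a minimal Python sketch; the link names, capacities and threshold logic are hypothetical, not BT’s or any vendor’s controller.

```python
# Conceptual sketch of a multi-layer (IP plus optical) control decision.
# The topology model, names and numbers are illustrative only.

def place_demand(demand_gbps, ip_link):
    """Decide whether an IP reroute is enough or a new optical path is needed."""
    headroom = ip_link["capacity_gbps"] - ip_link["used_gbps"]
    if demand_gbps <= headroom:
        # Layer 3 can absorb the demand: steer packets over the existing link
        ip_link["used_gbps"] += demand_gbps
        return f"route {demand_gbps}G over existing IP link {ip_link['name']}"
    # Otherwise ask the optical layer for a new wavelength between the routers
    return (f"request a new optical path for {demand_gbps}G "
            f"between {ip_link['a_end']} and {ip_link['z_end']}")

link = {"name": "LDN-MAN-1", "a_end": "London", "z_end": "Manchester",
        "capacity_gbps": 400, "used_gbps": 320}

print(place_demand(50, link))   # fits within the headroom: IP carries it
print(place_demand(200, link))  # exceeds the headroom: create an optical path
```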
McRae stresses that it is getting harder to get the maximum out of the network: “If we maintain human intervention, the network will never see its full potential because of complexity, demands and scale.”
He predicts that once the human component is taken out of the network, some of the silos between the different layers will be removed. Indeed, he believes networks built by AI and aided by automation will look very different to today’s networks.
Another technology McRae highlights is hollow-core fibre which BT Labs has been researching.
“Increasingly, we are starting to reach some limits although many folks have said that before, but hollow-core fibre gives us some interesting and exciting opportunities around latency and the total use of a fibre,” says McRae.
There are still challenges to be overcome, such as manufacturing the fibre at scale, but he sees many parts of the network where hollow-core fibre could be valuable to BT.
Quantum key distribution (QKD), and network security more broadly, is another area starting to gain momentum.
“We have gone from a world where people were scared to send an email rather than a fax to one where the network is controlling mission-critical use cases,” says McRae. “The more secure and reliable we make those networks, the more it will help us in our everyday lives.”
McRae believes this is the decade where the underlying optical network capability coupled with QKD security will take effect.
Making a difference
McRae has run several events involving children with autism, although these have been paused during the pandemic. He uses gaming as a way to demonstrate how electronics works – switching things on and off – and then he introduces the concept of computer programming.
“I find that kids with autism get it really quickly,” he says. BT runs such events two or three times a year.
McRae also works with children who are learning to program but find it difficult. “Again, it is something self-taught for me,” he says, although he quips that the challenge is that he teaches them bad programming habits.
“I’m keen to find the next generation of fantastic engineers; covid has shown us that we need them more than ever,” he says.


