OPNFV's releases reflect the evolving needs of the telcos
Heather Kirksey
The open source group, part of the Linux Foundation, specialises in the system integration of network functions virtualisation (NFV) technology.
The OPNFV issued Fraser, its latest platform release, earlier this year while its next release, Gambia, is expected soon.
Moreover, the telcos' continual need for new features and capabilities means the OPNFV’s work is not slowing down.
“I don’t see us entering maintenance-mode anytime soon,” says Heather Kirksey, vice president, community and ecosystem development, The Linux Foundation and executive director, OPNFV.
Meeting a need
The OPNFV was established in 2014 to address an industry shortfall.
“When we started, there was a premise that there were a lot of pieces for NFV but getting them to work together was incredibly difficult,” says Kirksey.
Open-source initiatives such as OpenStack, used to control computing, storage, and networking resources in the data centre, and the OpenDaylight software-defined networking (SDN) controller, lacked elements needed for NFV. “No one was integrating and doing automated testing for NFV use cases,” says Kirksey.
I don’t see us entering maintenance-mode anytime soon
OPNFV set itself the task of identifying what was missing from such open-source projects to aid their deployment. This involved working with the open-source communities to add NFV features, testing software stacks, and feeding the results back to the groups.
The nature of the OPNFV’s work explains why it is different from other, single-task, open-source initiatives that develop an SDN controller or NFV management and orchestration, for example. “The code that the OPNFV generates tends to be for tools and installation - glue code,” says Kirksey.
OPNFV has gained considerable expertise in NFV since its founding. It uses advanced software practices and has hardware spread across several labs. “We have a large diversity of hardware we can deploy to,” says Kirksey.
One of the OPNFV’s advanced software practices is continuous integration/continuous delivery (CI/CD). Continuous integration refers to adding code to a software build while it is still being developed, unlike the traditional approach of waiting for a complete software release before starting the integration and testing work. For this to be effective, however, automated code testing is required.
Continuous delivery, meanwhile, builds on continuous integration by automating a release’s update and even its deployment.
“Using our CI/CD system, we will build various scenarios on a daily, two-daily or weekly basis and write a series of tests against them,” says Kirksey, adding that the OPNFV has a large pool of automated tests, and works with code bases from various open-source projects.
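As a rough illustration of what such a nightly CI job involves, the Python sketch below deploys each scenario and runs its automated test suite, failing the build if anything breaks. The scenario names, deploy script and test runner are illustrative placeholders, not OPNFV's actual tooling.

```python
# Hypothetical sketch of a nightly CI job: deploy each scenario, then run its
# test suite. The commands and scenario names are illustrative placeholders,
# not OPNFV's actual installers or test projects.
import subprocess
import sys

SCENARIOS = [
    "os-nosdn-nofeature",   # plain OpenStack, no SDN controller
    "os-odl-nofeature",     # OpenStack with an OpenDaylight controller
    "k8-nosdn-nofeature",   # Kubernetes-based scenario
]

def deploy(scenario: str) -> bool:
    """Deploy a scenario onto lab hardware (placeholder command)."""
    result = subprocess.run(["./deploy.sh", "--scenario", scenario])
    return result.returncode == 0

def run_tests(scenario: str) -> bool:
    """Run the automated test suite against the deployed stack."""
    result = subprocess.run(["./run_tests.sh", "--scenario", scenario])
    return result.returncode == 0

if __name__ == "__main__":
    failures = []
    for scenario in SCENARIOS:
        if not (deploy(scenario) and run_tests(scenario)):
            failures.append(scenario)
    # A non-zero exit flags the nightly build as broken so the result can be
    # fed back to the upstream open-source projects.
    sys.exit(1 if failures else 0)
```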
Kirksey cites two examples to illustrate how the OPNFV works with the open-source projects.
When OPNFV first worked with OpenStack, the open-source cloud platform took far too long - about 10 seconds - to detect a faulty virtual machine used to implement a network function running on a server. “We had a team within OPNFV, led by NEC and NTT Docomo, to analyse what it would take to be able to detect faults much more quickly,” says Kirksey.
The result required changes to 11 different open-source projects, while the OPNFV created test software to validate that the resulting telecom-grade fault-detection worked.
Another example cited by Kirksey is the enabling of IPv6 support, which required changes to OpenStack, OpenDaylight and FD.io, the fast data plane open source initiative.
The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines
OPNFV Fraser
In May, the OPNFV issued its sixth platform release, dubbed Fraser, which advances its technology on several fronts.
Fraser offers enhanced support for cloud-native technologies that use microservices and containers, an alternative to virtual machine-based network functions.
The OPNFV is working with the Cloud Native Computing Foundation (CNCF), another open-source organisation overseen by the Linux Foundation.
CNCF is undertaking several projects addressing the building blocks needed for cloud-native applications. The best known is Kubernetes, used to automate the deployment, scaling and management of containerised applications.
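To give a flavour of what that automation looks like in practice, here is a minimal sketch using the official Kubernetes Python client: it declares a deployment of three replicas of a containerised function and leaves Kubernetes to schedule, restart and scale them. The image name, namespace and function are invented for illustration.

```python
# Minimal sketch using the Kubernetes Python client: declare a Deployment of
# three replicas of a containerised function and let Kubernetes keep that
# many copies running. The image name and function are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

container = client.V1Container(
    name="packet-inspector",                         # hypothetical function
    image="registry.example.com/packet-inspector:1.0",
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="packet-inspector"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes restarts or reschedules pods to hold this count
        selector=client.V1LabelSelector(match_labels={"app": "packet-inspector"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "packet-inspector"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```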
“The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines,” says Kirksey. “It means more density of what you can put on your [server] box and that means capex benefits.”
Meanwhile, for applications such as edge computing, where smaller devices will be deployed at the network edge, lightweight containers and Kubernetes are attractive, says Kirksey.
Another benefit of containers is faster communications. “Because you don’t have to go between virtual machines, communications between containers is faster,” she says. “If you are talking about network functions, things like throughput starts to become important.”
The OPNFV is working with cloud-native technology in the same way it started working with OpenStack. It is incorporating the technology within its frameworks and undertaking proof-of-concept work for the CNCF, identifying shortfalls and developing test software.
OPNFV has incorporated Kubernetes in all its installers and is adopting other CNCF work such as the Prometheus project used for monitoring.
“There is a lot of networking work happening in CNCF right now,” says Kirksey. “There are even a couple of projects on how to optimise cloud-native for NFV that we are also involved in.”
OPNFV’s Fraser also enhances carrier-grade features. Infrastructure maintenance work can now be performed without interrupting virtual network functions.
Also expanded are the metrics that can be extracted from the underlying hardware, while the OPNFV’s Calipso project has added modules for service assurance as well as support for Kubernetes.
Fraser has also improved the support for testing and can allocate hardware dynamically across its various labs. “Basically we are doing more testing across different hardware and have got that automated as well,” says Kirksey.
Linux Foundation Networking Fund
In January, the Linux Foundation combined the OPNFV with five other open-source telecom projects it is overseeing to create the LF Networking Fund (LFN).
The other five LFN projects are the Open Network Automation Platform (ONAP), OpenDaylight, FD.io, the PNDA big data analytics project, and the SNAS streaming network analytics system.
Edge is becoming a bigger and more important use-case for a lot of the operators
“We wanted to break down the silos across the different projects,” says Kirksey. There was also overlap with members sitting on several projects’ boards. “Some of the folks were spending all their time in board meetings,” says Kirksey.
Service provider Orange is using the OPNFV Fraser functional testing framework as it adopts ONAP. Orange used the functional testing to create its first test container for ONAP in one day. Orange also achieved a tenfold reduction in memory demands, going from a 1-gigabyte test virtual machine to a 100-megabyte container. And the operator has used the OPNFV’s CI/CD toolchain for the ONAP work.
OPNFV says that by integrating the CI/CD toolchain across projects, it is much easier to incorporate new code on a regular basis and provide valuable feedback to the open source projects.
The next code release, Gambia, could be issued as early as November.
Gambia will offer more support for cloud-native technologies. There is also a need for more work around Layer 2 and Layer 3 networking as well as edge computing work involving OpenStack and Kubernetes.
“Edge is becoming a bigger and more important use-case for a lot of the operators,” says Kirksey.
OPNFV is also continuing to enhance its test suites for the various projects. “We want to ensure we can support the service providers’ real-world deployment needs,” concludes Kirksey.
ONF advances its vision for the network edge
The Open Networking Foundation’s (ONF) goal to create software-driven architectures for the network edge has advanced with the announcement of its first reference designs.
In March, eight leading service providers within the ONF - AT&T, Comcast, China Unicom, Deutsche Telekom, Google, NTT Group, Telefonica and Turk Telekom - published their strategic plan whereby they would take a hands-on approach to the design of their networks after becoming frustrated with what they perceived as foot-dragging by the systems vendors.
Timon Sloane
Three months on, the service providers have initial drafts of the first four reference designs: a broadband access architecture, a spine-leaf switching fabric for network functions virtualisation (NFV), a more general networking fabric that uses the P4 packet forwarding programming language, and the open disaggregated transport network (ODTN).
The ONF also announced that four system vendors - Adtran, Dell EMC, Edgecore Networks, and Juniper Networks - have joined to work with the operators on the reference design programmes.
“We are disaggregating the supply chain as well as disaggregating the technology,” says Timon Sloane, the ONF’s vice president of marketing and ecosystem. “It used to be that you’d buy a complete solution from one vendor. Now operators want to buy individual pieces and put them together, or pay somebody to do it for them.”
We are disaggregating the supply chain as well as disaggregating the technology
CORD and Exemplars
The ONF is known for various open-source initiatives such as its ONOS software-defined networking (SDN) controller and CORD. CORD is the ONF’s cloud optimised remote data centre work, also known as the central office re-architected as a data centre. That said, the ONF points out that CORD can be used in places other than the central office.
“CORD is a hardware architecture but it is really about software,” says Sloane. “It is a landscape of all our different software projects.”
However, the ONF received feedback last year that service providers were putting the CORD elements together slightly differently. “Vendors were using that as an excuse to say that CORD was too complicated and that there was no critical mass: ‘We don’t know how every operator is going to do this and so we are not going to do anything’,” says Sloane.
It led to the ONF’s service providers agreeing to define the assemblies of common components for various network platforms so that vendors would know what the operators want and intend to deploy. The result is the reference designs.
The reference designs offer operators some flexibility in terms of the components they can use. The components may be from the ONF but need not be; they can also be open-source or a vendor’s own solution.
Source: ONF
The ONF has also announced the exemplar platforms aligned with the reference designs (see diagram). An exemplar platform is an assembly of open-source components that builds an example platform based on a reference design. “The exemplar platforms are the open source projects that pull all the pieces together,” says Sloane. “They are easy to download, trial and deploy.”
The ONF admits that it is much more experienced with open source projects and exemplar platforms than it is with reference designs. The operators are adopting an iterative process involving all three - open source components, exemplar platforms and reference designs - before settling on the solutions that will lead to deployments.
Two of the ONF exemplar platforms announced are new: the SDN-enabled broadband access (SEBA) and the universal programmable automated network (UPAN).
Reference designs
The SEBA reference design is a broadband variant of the ONF’s CORD work and addresses residential and backhauling applications. The design uses Kubernetes, the cloud-native orchestration system that automates the deployment, scaling and management of container-based applications, while the use of the OpenStack platform is optional. “OpenStack is only used if you want to support a virtual machine-based virtual network function,” says Sloane.
Source: ONF
SEBA uses VOLTHA, the open-source virtual passive optical networking (PON) optical line terminal (OLT) developed by AT&T and contributed to the ONF, and provides interfaces to both legacy operational support systems (OSS) and the Linux Foundation’s Open Networking Automation Platform (ONAP).
SEBA also features FCAPS and mediation. FCAPS - fault, configuration, accounting, performance and security management - is an established telecom framework for network management that can identify faults, while the mediation presents information from FCAPS in a way the OSS understands.
“In its slimmest implementation, SEBA doesn’t need CORD switches, just a pair of aggregation switches,” says Sloane. The architecture can place sophisticated forwarding rules onto the optical line terminal and the aggregation switches such that servers and OpenStack are not required. “That has tremendous performance and scale implications,” says Sloane. “No other NFV architecture does this kind of thing.”
The second reference design - the NFV Fabric - ties together two ONF projects - Trellis and ONOS - to create a spine-leaf data centre fabric for edge services and applications.
The two remaining reference designs are UPAN and ODTN.
UPAN can be viewed as an extension of the NFV fabric that adds the P4 data plane programming language. P4 brings programmability to the data plane while the SDN controller enables developers to specify particular forwarding behaviour. “The controller can pull in P4 programs and do intelligent things with them,” says Sloane. “This is a new world where you can write custom apps that will push intelligence into the switch.”
Meanwhile, the ODTN reference design is used to add optical capabilities including reconfigurable optical add-drop multiplexers (ROADMs) and wide-area-network support.
There are also what the ONF calls two trailblazer projects - Mobile CORD (M-CORD) and CORD - that are not ready to become reference designs as they depend on 5G developments that are still taking place.
CORD represents the ONF’s unifying project that brings all the various elements together to address multi-access edge cloud. Also included as part of CORD is an edge cloud services platform. “This is the ultimate vision: what is the app store for edge applications?” says Sloane. “If you write a latency-sensitive application for eyeglasses, for example, how does that get deployed across multiple operators and multiple geographies?”
The ONF says it has already achieved a ‘critical mass’ of vendors to work on the development of the reference designs three months after announcing its strategic plan. The supply chain for each of the reference designs is shown in the table.
Source: ONF
“We boldly stated that we were going to reconstitute the supply chain as part of this work and bring in partners more aligned to embrace enthusiastically open source and help this ecosystem form and thrive,” says Sloane. “It is a whole new approach and to be able to rally the ecosystem in a short timeframe is notable.”
Our expectation is that at least two of these reference designs will go through this transition this year. This is not a multi-year process.
Next steps
It is the partner operators that are involved in the development of the reference designs. For example, the partners working on ODTN are China Unicom, Comcast and NTT. Once the reference designs are ready, they will be released to ONF members and then publicly.
However, the ONF has yet to give timescales as to when that will happen. “Our expectation is that at least two of these reference designs will go through this transition this year,” says Sloane. “This is not a multi-year process.”
How ONAP is blurring network boundaries
Telecom operators will soon be able to expand their networks by running virtualised network functions in the public cloud. This follows work by Amdocs to port the open-source Open Network Automation Platform (ONAP) onto Microsoft’s Azure cloud service.
Source: Amdocs, Linux Foundation
According to Craig Sinasac, network product business unit manager at Amdocs, several telecom operators are planning to run telecom applications on the Azure platform, and the software and services company is already working with one service provider to prepare the first trial of the technology.
Deploying ONAP in the public cloud blurs the normal understanding of what comprises an operator’s network. The development also offers the prospect of web-scale players delivering telecom services using ONAP.
ONAP
ONAP is an open-source network management and orchestration platform, overseen by the Linux Foundation. It was formed in 2017 with the merger of two open-source orchestration and management platforms: AT&T’s ECOMP platform, and Open-Orchestrator (Open-O), a network functions virtualisation platform initiative backed by companies such as China Mobile, China Telecom, Ericsson, Huawei and ZTE.
The ONAP framework’s aim is to become the telecom industry’s de-facto orchestration and management platform.
Craig Sinasac
Amdocs originally worked with AT&T to develop ECOMP as part of the operator’s Domain 2.0 initiative.
“Amdocs has hundreds of people working on ONAP and is the leading vendor in terms of added lines of code to the open-source project,” says Sinasac.
Amdocs has made several changes to the ONAP code to port it onto the Azure platform.
The company is using Kubernetes, the open-source orchestration system used to deploy, scale and manage container-based applications. Containers, used with microservices, offer several advantages compared with running network functions on virtual machines.
Amdocs is also changing ONAP components to make use of generic TOSCA cloud descriptor files that are employed with the virtual network functions. The descriptor files are an important element in enabling virtual network functions from different vendors to work on ONAP, simplifying the integration effort required of operators.
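To picture the role such descriptors play, the snippet below builds the kind of vendor-neutral metadata a TOSCA-style VNF descriptor carries and emits it as YAML. It is a simplified, non-conformant sketch; the field values are invented and this is not the ONAP or ETSI schema.

```python
# Simplified, non-conformant sketch of the kind of metadata a TOSCA-style VNF
# descriptor carries; the values are illustrative, not an actual ONAP template.
import yaml  # PyYAML

vnf_descriptor = {
    "tosca_definitions_version": "tosca_simple_yaml_1_2",
    "node_templates": {
        "example_vnf": {
            "type": "tosca.nodes.nfv.VNF",   # generic VNF node type
            "properties": {
                "provider": "ExampleVendor",
                "product_name": "example-firewall",
                "software_version": "1.0",
                "flavour_id": "default",
            },
        }
    },
}

# Emit the descriptor in the YAML form an orchestrator would ingest.
print(yaml.safe_dump(vnf_descriptor, sort_keys=False))
```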
“There are also changes to the multiVIM component of ONAP, to enable Azure cloud control,” says Sinasac. MultiVIM is designed to decouple ONAP from the underlying cloud infrastructure.
Further work is needed so that ONAP can manage a multi-cloud environment. One task is to enable closed-loop control by completing work already underway to run the ONAP Data Collection, Analytics, and Events (DCAE) component in containers. DCAE is of particular interest to several operators that have recently joined ONAP.
Amdocs is making its changes available as open-source code.
Business opportunities
For Microsoft, porting ONAP onto Azure promises new operator customers. Microsoft is also keen for vendors like Amdocs to use Azure for their own development work.
Telecom operators could use the Azure platform in several ways. An operator running ONAP on its own cloud-based network could use the platform to spin up additional network functions on the Azure platform. This could be to expand network capacity, restore the network in case of a fault, or to host location-sensitive network functions where the operator has no presence.
A telco could also use Azure’s data centres to expand into regions where it has no presence.
Amdocs says cloud players could offer telecom and over-the-top services using ONAP. “As long as they have connectivity to their customers,” says Sinasac.
What the cable operators are planning for NFV and SDN
Cable operators are working on adding wireless to their fixed access networks using NFV and SDN technologies.
Don Clarke
“Cable operators are now every bit as informed about NFV and SDN as the telcos are, but they are not out there talking too much about it,” says Don Clarke, principal architect for network technologies at CableLabs, the R&D organisation serving the cable operators.
Clarke is well placed to comment. While at BT, he initiated the industry collaboration on NFV and edited the original white paper which introduced the NFV concept and outlined the operators’ vision for NFV.
NFV plans
The cable operators are planning developments by exploiting the Central Office Re-architected as a Datacenter (CORD) initiative being pursued by the wider telecom community. Comcast is one cable operator that has already joined the Open Networking Lab’s (ON.Lab) CORD initiative. The aim is to add a data centre capability to the cable operators’ access network onto which wireless will be added.
CableLabs is investigating adding high-bandwidth wireless to the cable network using small cells, and the role 5G will play. The cable operators use DOCSIS as their broadband access network technology and it is ideally suited for small cells once these become mainstream, says Clarke: “How you overlay wireless on top of that network is probably where there is going to be some significant opportunities in the next few years.”
One project CableLabs is working on is helping cable operators provision services more efficiently. At present, operators deliver services over several networks: DOCSIS, EPON and in some cases, wireless. CableLabs has been working for a couple of years on simplifying the provisioning process so that the system is agnostic to the underlying networks. “The easiest way to do that is to abstract and virtualize the lower-level functionality; we call that virtual provisioning,” says Clarke.
CableLabs recently published its Virtual Provisioning Interfaces Technical Report on this topic and is developing data models and information models for the various access technologies so that they can be abstracted. The result will be more efficient provisioning of services irrespective of the underlying access technology, says Clarke.
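The abstraction Clarke describes can be pictured with a small sketch: a provisioning front end talks to one common interface, and per-technology adapters translate the request into DOCSIS or EPON specifics. The class and method names are hypothetical and are not taken from CableLabs' data models.

```python
# Hypothetical sketch of technology-agnostic provisioning: one abstract
# interface, with adapters per access technology. Names are illustrative
# and are not drawn from CableLabs' data models.
from abc import ABC, abstractmethod

class AccessAdapter(ABC):
    """Common provisioning interface, independent of the access network."""
    @abstractmethod
    def provision_service(self, subscriber_id: str, downstream_mbps: int) -> None:
        ...

class DocsisAdapter(AccessAdapter):
    def provision_service(self, subscriber_id: str, downstream_mbps: int) -> None:
        print(f"DOCSIS: push config file with {downstream_mbps} Mbit/s for {subscriber_id}")

class EponAdapter(AccessAdapter):
    def provision_service(self, subscriber_id: str, downstream_mbps: int) -> None:
        print(f"EPON: set ONU bandwidth profile to {downstream_mbps} Mbit/s for {subscriber_id}")

def provision(adapter: AccessAdapter, subscriber_id: str, downstream_mbps: int) -> None:
    # The caller never needs to know which access technology serves the subscriber.
    adapter.provision_service(subscriber_id, downstream_mbps)

provision(DocsisAdapter(), "subscriber-42", 300)
provision(EponAdapter(), "subscriber-43", 1000)
```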
How you overlay wireless on top of that network is probably where there is going to be some significant opportunities in the next few years
SNAPS
CableLabs is also looking at how to virtualise functionality cable operators may deploy near the edge of their networks.
“As the cable network evolves to do different things and adds more capabilities, CableLabs is looking at the technology platform that would do that,” says Clarke.
To this end, CableLabs has created the SDN-NFV Application development Platform and Stack - SNAPS - which it has contributed to the Open Platform for NFV (OPNFV) group, overseen by the open-source organisation, The Linux Foundation.
SNAPS is a reference platform to be located near the network edge, and possibly at the cable head-end where cable operators deliver video over their networks. The reference platform makes use of the cloud-based operating system, OpenStack, and other open source components such as OpenDaylight, and is being used to instantiate virtual network functions (VNFs) in a real-time dynamic way. “The classic NFV vision,” says Clarke.
CableLabs' Randy Levensalor says one challenge facing cable operators is that, like telcos, they have separate cloud infrastructures for their services and that impacts their bottom line.
Cable operators are now every bit as informed about NFV and SDN as the telcos are, but they are not out there talking too much about it
“You have one [cloud infrastructure] for business services, one for video delivery and one for IT, and you are operationally less efficient when you have those different stacks,” says Levensalor, lead software architect at CableLabs. “With SNAPS, you bring together all the capabilities that are needed in a reference configuration that can be replicated.”
This platform can support local compute with low latency. "We are not able to say much but there is a longer-term vision for that capability that we’ll develop new applications around,” says Clarke.
Challenges and opportunities
The challenges facing cable operators concerning NFV and SDN are the same as those facing the telcos, such as how to orchestrate and manage virtual networks and do it in a way that avoids vendor lock-in.
“The whole industry wants an open ecosystem where we can buy virtual network functions from one vendor and connect them to virtual network functions and other components from different vendors to create an end-to-end platform with the best capabilities at any given time,” says Clarke.
He also believes that cable operators can move more quickly than telcos because of how they collaborate via CableLabs, their research hub. However, the cable operators' progress is inevitably linked to that of the telcos given they want to use the same SDN and NFV technologies to achieve economies of scale. “So we can’t diverge in the areas that need to be common, but we can move more quickly in areas where the cable network has an inherent advantage, for example in the access network,” says Clarke.
ETSI embraces AI to address rising network complexity
The growing complexity of networks is forcing telecom operators and systems vendors to turn to machine intelligence for help. It has led the European Telecommunications Standards Institute, ETSI, to set up an industry specification group to define how artificial intelligence (AI) can be applied to networking.
“With the advent of network functions virtualisation and software-defined networking, we can see the eventuality that network management is going to get very much more complicated,” says Ray Forbes, convenor of the ETSI Industry Specification Group, Experimental Network Intelligence (ISG-ENI).
Source: ETSI
The AI will not just help with network management, he says, but also with the introduction of services and the more efficient use of network resources.
Visibility of events at many locations in the network will be needed with the deployment of network functions virtualisation (NFV), says Forbes. In current networks, a large switch may serve hundreds of thousands of users but with NFV, virtual network functions will be at many locations. The ETSI group will look at how AI can be used to manage and control this distributed deployment of virtual network functions, says Forbes.
The group has started its work by inviting interested parties to bring and discuss use cases, from which a set of requirements will be generated. In parallel, the group is looking at AI techniques.
The aim is to use computing to derive data from across the network. The data will be analysed, and by having 'context awareness', the machine intelligence will compute various scenarios before presenting the most promising ones for consideration by the network management team. “The process is collecting data, analysing it, testing out various scenarios and then advising people on what would happen in the better scenarios,” says Forbes.
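A toy version of that loop might look like the sketch below: collect data, score a handful of candidate scenarios against it, and advise the operator on the most promising ones. The metrics, scenarios and scoring rule are all invented for illustration and are not drawn from the ISG-ENI work.

```python
# Toy closed-loop sketch of the process described above: collect data, evaluate
# candidate scenarios against it, and advise the operator on the best ones.
# Metrics, scenarios and the scoring rule are all invented for illustration.
from dataclasses import dataclass

@dataclass
class NetworkState:
    link_utilisation: float   # 0.0 - 1.0
    fault_rate: float         # faults per hour

def collect_state() -> NetworkState:
    # In a real system this would come from telemetry across the network.
    return NetworkState(link_utilisation=0.85, fault_rate=0.4)

def score(scenario: str, state: NetworkState) -> float:
    # Crude 'what-if' evaluation of each candidate action.
    effects = {
        "reroute_traffic": 1.0 - state.link_utilisation,  # helps when links are busy
        "scale_out_vnf": 0.8 - state.link_utilisation,
        "do_nothing": 0.3 - state.fault_rate,             # fine only if the network is healthy
    }
    return effects[scenario]

state = collect_state()
ranked = sorted(["reroute_traffic", "scale_out_vnf", "do_nothing"],
                key=lambda s: score(s, state), reverse=True)
# Present the most promising scenarios to the network management team.
print("Recommended actions, best first:", ranked)
```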
With the advent of NFV and SDN, we can see the eventuality that network management is going to get very much more complicated
ETSI's goal is to make it easier for operators to deploy services quickly, reroute around networking faults, and make better use of networking resources. “In very large cities like Shanghai and Tokyo, where there are populations of 25 million, there is a need for this,” says Forbes. “In London, with about 12 million people, there is still a need but not quite so quickly.”
Operators and system vendors have some understanding of AI but there is a learning curve in bringing more and more AI experts on board, says Forbes: "Hence, we are trying to involve various universities in the research project."
Project schedule
The ISG-ENI's initial document work will be followed by defining the architecture and specifying the parameters needed to measure the network and the 'intelligence' of the scenarios.
“ETSI has a two-year project with the possibility of an extension,” says Forbes, with AI deployed in networks as early as 2019.
Forbes says open-source software to add AI to networks could be available as soon as 2018. Such open-source software will be developed by operators and systems vendors rather than ETSI.
Verizon's move to become a digital service provider
Working with Dell, Big Switch Networks and Red Hat, the US telco announced in April it had already brought online five data centres. Since then it has deployed more sites but is not saying how many.
Source: Verizon
“We are laying the foundation of the programmable infrastructure that will allow us to do all the automation, virtualisation and the software-defining we want to do on top of that,” says Chris Emmons, director, network infrastructure planning at Verizon.
“This is the largest OpenStack NFV deployment in the marketplace,” says Darrell Jordan-Smith, vice president, worldwide service provider sales at Red Hat. “The largest in terms of the number of [server] nodes that it is capable of supporting and the fact that it is widely distributed across Verizon’s sites.”
OpenStack is an open source set of software tools that enable the management of networking, storage and compute services in the cloud. “There are some basic levels of orchestration while, in parallel, there is a whole virtualised managed environment,” says Jordan-Smith.
This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach
“Verizon is joining some of the other largest communication service providers in deploying a platform onto which they will add applications over time,” says Dana Cooperson, a research director at Analysys Mason.
Most telcos start with a service-led approach so they can get some direct experience with the technology and one or more quick wins, in the form of revenue in a new service arena, while containing the risk of something going wrong, explains Cooperson. As they progress, they can still lead with specific services while deploying their platforms, and they can decide over time what to put on the platform as custom equipment reaches its end-of-life.
A second approach - a platform strategy - is a more sophisticated, longer-term one, but telcos need experience before they take that plunge.
“This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach,” says Cooperson.
Applications
The Verizon data centres are located in core locations of its network. “We are focussing more on core applications but some of the tools we use to run the network - backend systems - are also targeted,” says Emmons.
The infrastructure is designed to support all of Verizon’s business units. For example, Verizon is working with its enterprise unit to see how it can use the technology to deliver virtual managed services to enterprise customers.
“Wherever we have a need to virtualise something - the Evolved Packet Core, IMS [IP Multimedia Subsystem] core, VoLTE [Voice over LTE] or our wireline side, our VoIP [Voice over IP] infrastructure - all these things are targeted to go on the platform,” says Emmons. Verizon plans to pool all these functions and network elements onto the platform over the next two years.
Red Hat’s Jordan-Smith talks about a two-stage process: virtualising functions and then making them stateless so that applications can run on servers independent of the location of the server and the data centre.
“Virtualising applications and running them on virtual machines gives some economies of scale from a cost perspective and a density perspective,” says Jordan-Smith. But there is a further cost benefit, as well as gains in performance and resiliency, once such applications can run across data centres.
And by having a software-based layer, Verizon will be able to add devices and create associated applications and services. “With the Internet of Things, Verizon is looking at connecting many, many devices and adding scale to these types of environments,” says Jordan-Smith.
Architecture
Verizon is deploying a ‘pod and core architecture’ in its data centres. A pod contains racks of servers, top-of-rack or leaf switches, and higher-capacity spine switches and storage, while the core network is used to enable communications between pods in the same data centre and across sites (see diagram, top).
Dell is providing Verizon with servers, storage platforms and white-box leaf and spine switches. Big Switch Networks is providing software that runs on the Dell switches and servers, while the OpenStack platform and Ceph storage software are provided by Red Hat.
Each Dell rack houses 22 servers - each with 24 cores supporting 48 hyper-threads - and all 22 servers connect to the leaf switch. Each rack is teamed with a sister rack and the two are connected to two leaf switches, providing switch-level redundancy.
“Each of the leaf switches is connected to however many spine switches are needed at that location and that gives connectivity to the outside world,” says Emmons. For the five data centres, a total of 8 pods have been deployed amounting to 1,000 servers and this has not changed since April.
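Taking the figures quoted above at face value, a quick back-of-the-envelope calculation gives the aggregate compute of the initial deployment. The rack count per pod is not disclosed, so it is deliberately left out.

```python
# Back-of-the-envelope figures derived only from the numbers quoted above;
# the number of racks per pod is not disclosed, so it is not computed here.
SERVERS_PER_RACK = 22
CORES_PER_SERVER = 24
THREADS_PER_SERVER = 48          # 24 cores with two hyper-threads each
TOTAL_SERVERS = 1_000            # across the 8 pods in the five data centres

threads_per_rack_pair = 2 * SERVERS_PER_RACK * THREADS_PER_SERVER
total_cores = TOTAL_SERVERS * CORES_PER_SERVER
total_threads = TOTAL_SERVERS * THREADS_PER_SERVER

print(f"Hyper-threads per redundant rack pair: {threads_per_rack_pair:,}")  # 2,112
print(f"Total cores in the deployment:         {total_cores:,}")            # 24,000
print(f"Total hyper-threads:                   {total_threads:,}")          # 48,000
```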
This is the largest OpenStack NFV deployment in the marketplace
Verizon has deliberately chosen to separate the pods from the core network so it can innovate at the pod level independently of the data centre’s network.
“We wanted the ability to innovate at the pod level and not be tied into any technology roadmap at the data centre level,” says Emmons, who points out that there are several ways to evolve the data centre network. For example, in some cases, an SDN controller is used to control the whole data centre network.
“We don't want our pods - at least initially - to participate in that larger data centre SDN controller because we were concerned about the pace of innovation and things like that,” says Emmons. “We want the pod to be self-contained and we want the ability to innovate and iterate in those pods.”
Its first-generation pods contain equipment and software from Dell, Big Switch and Red Hat but Verizon may decide to swap out some or all of the vendors for its next-generation pod. “So we could have two generations of pod that could talk to each other through the core network,” says Emmons. “Or they could talk to things that aren't in other pods - other physical network functions that have not yet been virtualised.”
Verizon’s core networks are its existing networks in the data centres. “We didn't require any uplift and migration of the data centre networks,” says Emmons. However, Verizon has a project investigating data-centre interconnect platforms for core networking.
What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to
Benefits
Verizon expects capital expenditure and operational expense benefits from its programmable network but says it is too early to quantify them. What excites the operator more is the ability to get services up and running quickly, and to adapt and scale the network according to demand.
“You can reconfigure and reallocate your network once it is all software-defined,” says Emmons. There is still much work to be done, from the network core to the edge. “These are the first steps to that programmable infrastructure that we want to get to,” says Emmons.
Capital expenditure savings result from adopting standard hardware. “The more uniform you can keep the commodity hardware underneath, the better your volume purchase agreements are,” says Emmons. Operational savings also result from using standardised hardware. “Spares becomes easier, troubleshooting becomes easier as does the lifecycle management of the hardware,” he says.
Challenges
“We are tip-of-the-spear here,” admits Emmons. “What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to.”
Red Hat’s Jordan-Smith also points to the accelerated development environment that comes with the software-enabled network. The OpenStack platform undergoes a new revision every six months.
“There are new services that are going to be enabled through major revisions in the not too distant future - the next 6 to 12 months,” says Jordan-Smith. “That is one of the key challenges operators like Verizon have when they are moving at what is now a very fast pace.”
Verizon continues to deploy infrastructure across its network. The operator has completed most of the troubleshooting and performance testing at the cloud level and, in parallel, is working on the applications in several of its labs. “Now it is time to put it all together,” says Emmons.
One critical aspect of the move to become a digital service provider will be the operators' ability to offer new services more quickly - what people call service agility, says Cooperson. Only by changing their operations and their networks can operators create and, if needed, retire services quickly and easily.
"It will be evident that they are truly doing something new when they can launch services in weeks instead of months or years, and make changes to service parameters upon demand from a customer, as initiated by the customer," says Cooperson. “Another sign will be when we start seeing a whole new variety of services and where we see communications service providers building those businesses so that they are becoming a more significant part of their revenue streams."
She cites as examples cloud-based services and more machine-to-machine and Internet of Things-based services.
WDM and 100G: A Q&A with Infonetics' Andrew Schmitt
The WDM optical networking market grew 8 percent year-on-year, with spending on 100 Gigabit now accounting for a fifth of the WDM market. So claims the first quarter 2014 optical networking report from market research firm, Infonetics Research. Overall, the optical networking market was down 2 percent, due to the continuing decline of legacy SONET/SDH.
In a Q&A with Gazettabyte, Andrew Schmitt, principal analyst for optical at Infonetics Research, talks about the report's findings.
Q: Overall WDM optical spending was up 8% year-on-year: Is that in line with expectations?
Andrew Schmitt: It is roughly in line with the figures I use for trend growth but what is surprising is how there is no longer a fourth quarter capital expenditure flush in North America followed by a down year in the first quarter. This still happens in EMEA but spending in North America, particularly by the Tier-1 operators, is now less tied to calendar spending and more towards specific project timelines.
This has always been the case at the more competitive carriers. A good example of this was the big order Infinera got in Q1, 2014.
You refer to the growth in 100G in 2013 as breathtaking. Is this growth not to be expected as a new market hits its stride? Or does the growth signify something else?
I got a lot of pushback for aggressive 100G forecasts in 2010 and 2011 when everyone was talking about, and investing in, 40G. You can read a White Paper I wrote in early 2011 which turned out to be pretty accurate.
My call was based on the fact that, fundamentally, coherent 100G shouldn’t cost more than 40G, and that service providers would move rapidly to 100G. This is exactly what has happened, outside AT&T, NTT and China which did go big with 40G. But even my aggressive 100G forecasts in 2012 and 2013 were too conservative.
I have just raised my 2014 100G forecast after meeting with Chinese carriers and understanding their plans. 100G will essentially take over almost all of the new installations in the core by 2016, worldwide, and that is when metro 100G will start. There is too much hype on metro 100G right now given the cost, but within two years the price will be right for volume deployment by service providers.
There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not
You say the lion's share of 100G revenue is going to five companies: Alcatel-Lucent, Ciena, Cisco, Huawei, and Infinera. Most of the companies are North American. Is the growth mainly due to the US market (besides Huawei, of course)? And if so, is it due to Verizon, AT&T and Sprint preparing for growing LTE traffic? Or is the picture more complex, with cable operators, internet exchanges and large data centre players also a significant part of the 100G story, as Infinera claims?
It’s a lot more complex than the typical smartphone plus video-bandwidth-tsunami narrative. Many people like to attach the wireless metaphor to any possible trend because it is the only area perceived as having revenue and profitability growth, and it has a really high growth rate. But something big growing at 35 percent adds more in a year than something small growing at 70 percent.
The reality is that wireless bandwidth, as a percentage of all traffic, is still small. 100G is being used for the long lines of the network today as a more efficient replacement for 10G and while good quantitative measures don’t exist, my gut tells me it is inter-data-centre traffic and consumer/ business to data centre traffic driving most of the network growth today.
I use cloud storage for my files. I’m a die-hard Quicken user with 15 years of data in my file. Every time I save that file, it is uploaded to the cloud – 100MB each time. The cloud provider probably shifts that around afterwards too. Apply this to a single enterprise user - think about how much data that is for just one person. There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not.
Each morning a hardware specialist must wake up and prove to the world that they still need to exist
Cisco is in this list yet does not seek much media attention about its 100G. Why is it doing well in the growing 100G market?
Cisco has a slice of customers that are fibre-poor who are always seeking more spectral efficiency. I also believe Cisco won a contract with Amazon in Q4, 2013, but hey, it’s not Google or Facebook so it doesn’t get the big press. But no one will dispute Amazon is the real king of public cloud computing right now.
You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider
In the data centre world, there is a sense that the value of specialist hardware is diminishing as commodity platforms - servers and switches - take hold. The same trend is starting in telecoms with the advent of Network Functions Virtualisation (NFV) and software-defined networking (SDN). WDM is specialist hardware and will remain so. Can WDM vendors therefore expect healthy annual growth rates to continue for the rest of the decade?
I am not sure I agree.
There is no reason transport systems couldn’t be white-boxed just like other parts of the network. There is an over-reaction to the impact SDN will have on hardware but there have always been constant threats to the specialist.
Each morning a hardware specialist must wake up and prove to the world that they still need to exist. This is why you see continued hardware vertical integration by some optical companies; good examples are what Ciena has done with partners on intelligent Raman amplification or what Infinera has done building a tightly integrated offering around photonic-integrated circuits for cheap regeneration. Or Transmode which takes a hacker’s approach to optics to offer customers better solutions for specific category-killer applications like mobile backhaul. Or you swing to the other side of the barbell, and focus on software, which appears to be Cyan’s strategy.
You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider. This is why Cisco and Intel are investing in silicon photonics – they can use this as an edge against commodity white-box assemblers and bare-metal suppliers.
NFV moves from the lab to the network
Dor Skuler
In October 2012, several of the world's leading telecom operators published a document to spur industry action. Entitled Network Functions Virtualisation - Introductory White Paper, the document stressed the many benefits such a telecom transformation would bring: reduced equipment costs, power consumption savings, portable applications, and service launches that are nimble rather than an ordeal.
Eighteen months on, much progress has been made. Operators and vendors have been identifying the networking functions to virtualise on servers, and the impact Network Functions Virtualisation (NFV) will have on the network.
A group within ETSI, the standards body behind NFV, is fleshing out the architectural layers of NFV: the virtual network functions layer, and the management and orchestration layer that oversees the servers distributed in data centres across the network.
In the lab, network functions have been put on servers and then onto servers in the cloud. "Now we are at the start of the execution phase: leaving the lab and moving into first deployments in the network," says Dor Skuler, vice president and general manager of CloudBand, the NFV spin-in of Alcatel-Lucent. Skuler views 2014 as the year of experimentation for NFV. By 2015, there will be pockets of deployments but none at scale; that will start in 2016.
SDN is a simple way for virtual network functions to get what they need from the network through different commands
Deploying NFV in the network and at scale will require software-defined networking (SDN). That is because network functions place unique demands on the network, says Skuler. Because the network functions are distributed, each application must make connections to the different sites on demand. "SDN is a simple way for virtual network functions to get what they need from the network through different commands," he says.
CloudBand's customers include Deutsche Telekom, Telefonica and NTT. Overall, the company says it is involved in 14 customer projects.
CloudBand 2.0
CloudBand has developed a management and orchestration platform, and launched an 'ecosystem' that includes 25 companies. Companies such as Radware and Metaswitch Networks are developing virtual network functions that use the CloudBand platform.
More recently, CloudBand has upgraded its platform, what it calls CloudBand 2.0, and has launched its own virtualised network functions (VNFs) for the Long Term Evolution (LTE) cellular standard. In particular, VNFs for the Evolved Packet Core (EPC), IP Multimedia Subsystem (IMS) and the radio access network (RAN). "These are now virtualised and running in the cloud," says Skuler.
SDN technology from Nuage Networks, another Alcatel-Lucent spin-in, has been integrated into the CloudBand node that is set up in a data centre. The platform also has enhanced management systems. "How to manage the many nodes into a single logical cloud, with a lot of tools that help applications," says Skuler. CloudBand 2.0 has also added support for OpenStack alongside its existing support for CloudStack. OpenStack and CloudStack are open-source platforms supporting cloud.
For the EPC, the functions virtualised are on the network side of the basestation: the Mobility Management Entity (MME), the Serving Gateway and Packet Data Network Gateway (S- and P-Gateways) and the Policy and Charging Rules Function (PCRF).
IMS is used for Voice over LTE (VoLTE). "Operators are looking for more efficient ways of delivering VoLTE," says Skuler. This includes reducing deployment times and improving scalability, growing the service as more users sign up.
The high-frequency parts of the radio access network, typically located in a remote radio head (RRH), cannot be virtualised. What can be virtualised is the baseband processing unit (BBU). The BBUs run on off-the-shelf servers in pools up to 40km away from the radio heads. "This allows more flexible capacity allocation to different radio heads and easier scaling and upgrading," says Skuler.
Skuler points out that virtualising a function is not simply a case of putting a piece of code on a server running a platform such as CloudBand. "The VNF itself needs to go through a lot of change; a big monolithic application needs to be broken up into small components," he says.
"The VNF needs to use the development tools we offer in CloudBand so it can give rules so it can run in the cloud." The VNF also needs to know what key performance indicators to look at, and be able to request scaling, and inform the system when it is unhealthy and how to remedy the situation.
These LTE VNFs are designed to run on CloudBand and on other vendors' platforms. "CloudBand won't be run everywhere which is why we use open standards," says Skuler.
Pros and cons
The benefits of adopting NFV include prompt service deployment. "Today it can take 9-18 months for an operator to scale [a service]," says Skuler. The services, effectively software on servers, can scale more easily, whereas today operators typically have to overprovision to ensure extra capacity is in place.
Operators also need to keep less equipment for maintenance. "A typical North American mobile operator may have 450,000 spare parts," says Skuler; items such as line cards and power supplies. With automation and the use of dedicated servers, the number of spare parts held is typically reduced by a factor of ten.
Services can be scaled and healed, while functionality can be upgraded using software alone. "If I have a new version of IMS, I can test it in parallel and then migrate users; all behind my desk at the push of a button," says Skuler.
The NFV infrastructure - comprising compute, storage, and networking resources - resides at multiple locations: the operator's points-of-presence. These resources are designed to be shared by applications - VNFs - and it is this sharing of a common pool of resources that is one of the biggest advantages of NFV, says Skuler.
But there are challenges.
"Operating [existing] systems has been relatively simple; if there is a faulty line card, you simply replace it," says Skuler. "Now you have all these virtual functions sitting on virtual machines across data centres and that creates complexities."
An application needs to be aware of this and provide the required rules to the management and orchestration system such as CloudBand. Such systems need to provide the necessary operational tools to operators to enable automated upgrades and automated scaling as well as pinpoint causes of failures.
For example, an IMS core might have 12 tiers. In cloud-speak, a tier is one of a set of virtual machines making up a virtual network function. Examples of a tier include a load balancer, an application or a database server. Each tier consists of one or more virtual machines. Scaling of capacity is enabled by adding or removing virtual machines from a tier.
In a cloud deployment, these linkages between tiers must be understood by the system to allow scaling. Two tiers may be placed in the same data centre to ensure low latency, but an extra copy of the tier pair may be placed at a separate site in case one pair goes down. SDN is used to connect the different sites, says Skuler: "All this needs to be explained simply to the system so that it understands and can execute it".
That, he says, is what CloudBand does.
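For a flavour of the kind of scaling rule a VNF has to hand to its management and orchestration system, here is a deliberately simplified sketch. The tier names, thresholds and placement hint are invented; this is not the CloudBand API.

```python
# Deliberately simplified sketch of a tier-based scaling rule of the kind a VNF
# must describe to its management and orchestration system. Tier names,
# thresholds and the placement hint are invented; this is not the CloudBand API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Tier:
    name: str
    vms: List[str] = field(default_factory=list)
    max_cpu_before_scale_out: float = 0.75
    min_cpu_before_scale_in: float = 0.25
    colocate_with: Optional[str] = None   # e.g. keep next to another tier for low latency

def scaling_decision(tier: Tier, avg_cpu: float) -> str:
    """Return the action the orchestrator should take for this tier."""
    if avg_cpu > tier.max_cpu_before_scale_out:
        return f"scale out {tier.name}: add one VM (currently {len(tier.vms)})"
    if avg_cpu < tier.min_cpu_before_scale_in and len(tier.vms) > 1:
        return f"scale in {tier.name}: remove one VM"
    return f"{tier.name}: no change"

load_balancer = Tier("load-balancer", vms=["vm-1", "vm-2"])
app_server = Tier("sip-app-server", vms=["vm-3", "vm-4", "vm-5"],
                  colocate_with="database")   # same data centre as its database tier

print(scaling_decision(load_balancer, avg_cpu=0.82))
print(scaling_decision(app_server, avg_cpu=0.18))
```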
See also:
Telcos eye servers and software to meet networking needs
OFC 2014 industry reflections - Part 1
T.J. Xia, distinguished member of technical staff at Verizon
The CFP2 form factor pluggable - analogue coherent optics (CFP2-ACO) at 100 and 200 Gig will become the main choice for metro core networks in the near future.

I learnt that the discrete multitone (DMT) modulation format seems the right choice for a low-cost, single-wavelength direct-detection 100 Gigabit Ethernet (GbE) interface for data ports, and a 4xDMT for 400GbE ports.
As for developments to watch, photonic switches will play a much more important role for intra-data centre connections. As the port capacity of top-of-rack switches gets larger, photonic switches have more cost advantages over middle stage electrical switches.
Don McCullough, Ericsson's director of strategic communications at group function technology
The biggest trend in networking right now is software-defined networking (SDN) and Network Function Virtualisation (NFV), and both were on display at OFC. We see that the combination of SDN and NFV in the control and software domains will directly impact optical networks. The Ericsson-Ciena partnership embodies this trend with its agreement to develop joint transport solutions for IP-optical convergence and service provider SDN.
We learnt that network transformation, both at the control layer (SDN and NFV) and at the data plane layer, including optical, is happening at the network operators. Related to that, we also saw interest at OFC in the announcement AT&T made at Mobile World Congress about its User-Defined Network Cloud and Domain 2.0 strategy, under which AT&T has selected Ericsson for integration and transformation services.
We will continue to watch the ongoing deployment of SDN and NFV to control wide area networks, including optical. We expect more joint development agreements to connect SDN and NFV with optical networking, like the Ericsson-Ciena one.
One new thing for 2014 is that we expect to see open source projects like OpenStack and OpenDaylight play increasingly important roles in the transformation of networks.
Brandon Collings, JDSU's CTO for communications and commercial optical products
The announcements of integrated photonics for coherent CFP2s were an important development in the 100 Gig progression. While JDSU did not make an announcement at OFC, we are similarly engaged with our customers on pluggable approaches for coherent 100 Gig.
I would like to see convergence around 400 Gig client interface standards
There is a lack of appreciation of the data centre operators who aren’t big household names. While the mega data centre operators have significant influence and visibility, the needs of the numerous, smaller-sized operators are largely under-represented.
I would like to see convergence around 400 Gig client interface standards. Lots of complex technology here, challenges to solve and options to do so. But ambiguity in these areas is typically detrimental to the overall industry.
Mike Freiberger, principal member of technical staff, Verizon
The emergence of 100 Gig for metro, access, and data centre reach optics generated a lot of contentious debate. Maybe the best way forward as an industry isn’t really solidified just yet.
What did I learn? Verizon is a leader in wireless backhaul and is growing its options at a rate faster than the industry.
The two developments that caught my attention are 100 Gig short-reach and above-100-Gig research. 100 Gig short-reach because this will set the trigger point for the timing of 100 Gig interfaces really starting to sell in volume. Research on data rates faster than 100 Gig because price-per-bit always has to come downward.
Ericsson and Ciena collaborate on IP-over-WDM and SDN
Jan Häglund
Ericsson and Ciena have signed a global strategic agreement that provides Ericsson with Ciena's optical networking technology, while Ciena benefits from Ericsson's broader service provider relationships.
In particular, Ciena's WaveLogic coherent optical processor will be integrated into a module and added to Ericsson's Smart Service IP routers, while Ericsson will resell Ciena's 6500 Packet-Optical Platform and 5400 Reconfigurable Switching Systems.
Both companies will also collaborate in developing SDN in the WAN, also known as service provider SDN or Transport SDN.
IP-over-WDM will grow rapidly, accounting for over 30 percent of the total market by 2020.
Ericsson says the IP market will reach US $15 billion and optical networking $10 billion in 2014. Jan Häglund, vice president, head of IP and broadband at Ericsson, says the two markets are not independent and that IP-over-WDM will grow rapidly, accounting for over 30 percent of the total market by 2020.
Ciena's motivation for the deal is somewhat different.
"We are focussed on packet optical convergence - Layer 2 down to Layer 0 - creating a scalable, cost effective WAN infrastructure for service providers," said James Frodsham, Ciena’s senior vice president and chief strategy officer. "We have been looking around our core value proposition, we have been looking to expand our distribution into geographies and customers where we lack presense." The deal with Ericsson clearly addresses that, he says.
There is now more to think about. It is a very interesting time.
James Frodsham, Ciena
The company also has a different view regarding IP-over-WDM. IP routers are a vital part of the network but for cost reasons they are better used in centralised locations, interconnected using packet optical networking, said Tom Mock, senior vice president, corporate communications at Ciena.
Working with Ericsson widens the network applications Ciena can address. "But our view of the prevalence of IP-over-WDM hasn't really changed," said Mock.
Tom Mock
Ericsson and Ciena both highlight the changes taking place in the network, namely Network Functions Virtualisation (NFV) and SDN, as another reason for the tie-up.
NFV is turning telecom functions that previously required dedicated platforms into software that is virtualised and executed on servers. NFV promises to bring to telecom the benefits of IT and cloud computing, enabling operators to introduce services more quickly and scale them according to demand.
SDN, meanwhile, not only oversees such virtualised services, but also the network layers over which they run. This is where IP-over-WDM plays a role and why the two companies are working to develop Transport SDN.
It also gives us exposure to the Evolved Packet Core that is going into new wireless installations
Ciena's optical infrastructure and Ericsson's service-provider SDN and IP portfolio will result in a competitive solution, said Ericsson. "Combining the two network layers, and jointly making sure that the control protocol optimises the traffic network, will lead to CapEx and OpEx savings," said Ericsson's Häglund, in a company webcast announcing the deal.
Other benefits of the agreement include growing Ciena's relationships with services providers, especially in wireless. "It also gives us exposure to the Evolved Packet Core that is going into new wireless installations," said Mock.
Ciena also highlights Ericsson's strengths in operations and business support systems (OSS/BSS). Ciena says the transition to SDN will be gradual. "That evolution is going to have to take into account OSS/BSS technologies and having a partner that is strong in that area will help us both," said Mock.
Ciena believes more such industry collaboration should be expected. "We see that with programs like AT&T's Domain 2.0 Program, such thinking is also happening in the marketplace," said Mock. For the Supplier Domain 2.0 Program, AT&T is selecting vendors to provide a modern, cloud-based architecture that includes NFV and SDN technologies.
The collaboration between Ciena and Ericsson should boost their position as possible Domain 2.0 suppliers. "Both of us are suppliers under AT&T's current domain program, and as with any relationship, incumbency has advantages," said Mock. "The fact that we are beginning to collaborate on SDN-oriented applications ought to help."
Industry collaboration between telecom vendors and IT equipment providers will also likely increase.
"The data centre is a very important piece of real-estate in the future infrastructure," said Frodsham. The data centre hosts the storage and servers that manage the bulk of applications that pass across the network. Greater collaboration will be needed between telco and IT vendors to optimise how the data centre interacts with the WAN.
"There is now more to think about," said Frodsham. "It is a very interesting time."
