OPNFV's releases reflect the evolving needs of the telcos
The open source group, part of the Linux Foundation, specialises in the system integration of network functions virtualisation (NFV) technology.
The OPNFV issued Fraser, its latest platform release, earlier this year, while its next release, Gambia, is expected soon.
Moreover, the telcos’ continual need for new features and capabilities means the OPNFV’s work is not slowing down.
“I don’t see us entering maintenance-mode anytime soon,” says Heather Kirksey, vice president, community and ecosystem development, The Linux Foundation and executive director, OPNFV.
Meeting a need
The OPNFV was established in 2014 to address an industry shortfall.
“When we started, there was a premise that there were a lot of pieces for NFV but getting them to work together was incredibly difficult,” says Kirksey.
Open-source initiatives such as OpenStack, used to control computing, storage, and networking resources in the data centre, and the OpenDaylight software-defined networking (SDN) controller, lacked elements needed for NFV. “No one was integrating and doing automated testing for NFV use cases,” says Kirksey.
I don’t see us entering maintenance-mode anytime soon
OPNFV set itself the task of identifying what was missing from such open-source projects to aid their deployment. This involved working with the open-source communities to add NFV features, testing software stacks, and feeding the results back to the groups.
The nature of the OPNFV’s work explains why it is different from other, single-task, open-source initiatives that develop an SDN controller or NFV management and orchestration, for example. “The code that the OPNFV generates tends to be for tools and installation - glue code,” says Kirksey.
OPNFV has gained considerable expertise in NFV since its founding. It uses advanced software practices and has hardware spread across several labs. “We have a large diversity of hardware we can deploy to,” says Kirksey.
One of the OPNFV’s advanced software practices is continuous integration/continuous delivery (CI/CD). Continuous integration refers to adding code to a software build while it is still being developed, unlike the traditional approach of waiting for a complete software release before starting the integration and testing work. To be effective, however, this requires automated code testing.
Continuous delivery, meanwhile, builds on continuous integration by automating a release’s update and even its deployment.
“Using our CI/CD system, we will build various scenarios on a daily, two-daily or weekly basis and write a series of tests against them,” says Kirksey, adding that the OPNFV has a large pool of automated tests, and works with code bases from various open-source projects.
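As a rough illustration of what one of those nightly runs amounts to - deploy a scenario, probe it, record pass or fail - the sketch below uses only hypothetical scenario names and health-check endpoints; it is not OPNFV code:

```python
"""A minimal sketch of a nightly scenario-testing loop, in the spirit
of OPNFV's CI/CD system. Scenario names and URLs are placeholders."""
import datetime
import urllib.request

# Each hypothetical scenario pairs an installer/feature combination
# with the health endpoint of its freshly deployed stack.
SCENARIOS = {
    "os-nosdn-nofeature": "http://deploy-a.lab.example/health",
    "os-odl-ovs": "http://deploy-b.lab.example/health",
}

def check_scenario(health_url: str) -> bool:
    """Deployment verification reduced to a single reachability probe."""
    try:
        with urllib.request.urlopen(health_url, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

def nightly_run() -> None:
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    for name, url in SCENARIOS.items():
        result = "PASS" if check_scenario(url) else "FAIL"
        print(f"[{stamp}] scenario={name} result={result}")

if __name__ == "__main__":
    nightly_run()
```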
Kirksey cites two examples to illustrate how the OPNFV works with the open-source projects.
When OPNFV first worked with OpenStack, the open-source cloud platform took far too long - about 10 seconds - to detect a faulty virtual machine used to implement a network function running on a server. “We had a team within OPNFV, led by NEC and NTT Docomo, to analyse what it would take to be able to detect faults much more quickly,” says Kirksey.
The result required changes to 11 different open-source projects, while the OPNFV created test software to validate that the resulting telecom-grade fault-detection worked.
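The shape of such a fault-detection check can be sketched with the openstacksdk Python client: time how long the platform takes to report a compute service as down after a fault is injected. The cloud name is a placeholder, and this is a simplification, not the team's actual code:

```python
"""A simplified sketch of measuring fault-detection latency on
OpenStack, assuming a clouds.yaml entry named 'mylab' (placeholder)."""
import time
import openstack

conn = openstack.connect(cloud="mylab")

def seconds_until_service_down(poll_interval: float = 0.5) -> float:
    """Poll compute services; return elapsed time once one reports 'down'."""
    start = time.monotonic()
    while True:
        for svc in conn.compute.services():
            if svc.state == "down":
                return time.monotonic() - start
        time.sleep(poll_interval)

# After injecting a fault on a lab host:
# print(f"detection latency: {seconds_until_service_down():.2f}s")
```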
Another example cited by Kirksey was enabling IPv6 support, which required changes to OpenStack, OpenDaylight and FD.io, the fast data plane open source initiative.
The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines
OPNFV Fraser
In May, the OPNFV issued its sixth platform release, dubbed Fraser, which progresses its technology on several fronts.
Fraser offers enhanced support for cloud-native technologies that use microservices and containers, an alternative to virtual machine-based network functions.
The OPNFV is working with the Cloud Native Computing Foundation (CNCF), another open-source organisation overseen by the Linux Foundation.
CNCF is undertaking several projects addressing the building blocks needed for cloud-native applications, the best known being Kubernetes, used to automate the deployment, scaling and management of containerised applications.
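As a flavour of what Kubernetes automates, the minimal sketch below (the workload name and container image are placeholders) declares a three-replica containerised workload using the official Python client; Kubernetes then keeps three copies running, rescheduling pods when one dies:

```python
"""A minimal Kubernetes deployment via the official Python client.
The workload name and image are illustrative placeholders."""
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

labels = {"app": "demo-vnf"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-vnf"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes restores this count if a pod fails
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="vnf", image="example/vnf:1.0"),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment)
```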
“The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines,” says Kirksey. “It means more density of what you can put on your [server] box and that means capex benefits.”
Meanwhile, for applications such as edge computing, where smaller devices will be deployed at the network edge, lightweight containers and Kubernetes are attractive, says Kirksey.
Another benefit of containers is faster communications. “Because you don’t have to go between virtual machines, communications between containers is faster,” she says. “If you are talking about network functions, things like throughput starts to become important.”
The OPNFV is working with cloud-native technology in the same way it started working with OpenStack. It is incorporating the technology within its frameworks and undertaking proof-of-concept work for the CNCF, identifying shortfalls and developing test software.
OPNFV has incorporated Kubernetes in all its installers and is adopting other CNCF work such as the Prometheus project used for monitoring.
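Prometheus-style monitoring comes down to exposing numeric metrics over HTTP for the Prometheus server to scrape. A tiny illustrative sketch with the prometheus_client library - the metric and its values are invented:

```python
"""Expose a sample infrastructure metric in Prometheus format.
The metric name and its random values are purely illustrative."""
import random
import time
from prometheus_client import Gauge, start_http_server

nic_throughput = Gauge("host_nic_throughput_mbps",
                       "Hypothetical per-host NIC throughput sample")

start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
while True:
    nic_throughput.set(random.uniform(0, 10_000))  # stand-in for a real reading
    time.sleep(15)
```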
“There is a lot of networking work happening in CNCF right now,” says Kirksey. “There are even a couple of projects on how to optimise cloud-native for NFV that we are also involved in.”
OPNFV’s Fraser also enhances carrier-grade features. Infrastructure maintenance work can now be performed without interrupting virtual network functions.
Also expanded are the metrics that can be extracted from the underlying hardware, while the OPNFV’s Calipso project has added modules for service assurance as well as support for Kubernetes.
Fraser has also improved the support for testing and can allocate hardware dynamically across its various labs. “Basically we are doing more testing across different hardware and have got that automated as well,” says Kirksey.
Linux Foundation Networking Fund
In January, the Linux Foundation combined the OPNFV with five other open-source telecom projects it is overseeing to create the LF Networking Fund (LFN).
The other five LFN projects are the Open Network Automation Platform (ONAP), OpenDaylight, FD.io, the PNDA big data analytics project, and the SNAS streaming network analytics system.
Edge is becoming a bigger and more important use-case for a lot of the operators
“We wanted to break down the silos across the different projects,” says Kirksey. There was also overlap with members sitting on several projects’ boards. “Some of the folks were spending all their time in board meetings,” says Kirksey.
Service provider Orange is using the OPNFV Fraser functional testing framework as it adopts ONAP. Orange used the functional testing to create its first test container for ONAP in one day. Orange also achieved a tenfold reduction in memory demands, going from a 1-gigabyte test virtual machine to a 100-megabyte container. And the operator has used the OPNFV’s CI/CD toolchain for the ONAP work.
By integrating the CI/CD toolchain across projects, OPNFV says it is much easier to incorporate new code on a regular basis and provide valuable feedback to the open source projects.
The next code release, Gambia, could be issued as early as November.
Gambia will offer more support for cloud-native technologies. There is also a need for more work around Layer 2 and Layer 3 networking as well as edge computing work involving OpenStack and Kubernetes.
“Edge is becoming a bigger and more important use-case for a lot of the operators,” says Kirksey.
OPNFV is also continuing to enhance its test suites for the various projects. “We want to ensure we can support the service providers’ real-world deployment needs,” concludes Kirksey.
ONF advances its vision for the network edge
The Open Networking Foundation’s (ONF) goal to create software-driven architectures for the network edge has advanced with the announcement of its first reference designs.
In March, eight leading service providers within the ONF - AT&T, Comcast, China Unicom, Deutsche Telekom, Google, NTT Group, Telefonica and Turk Telekom - published their strategic plan whereby they would take a hands-on approach to the design of their networks after becoming frustrated with what they perceived as foot-dragging by the systems vendors.
Three months on, the service providers have initial drafts of the first four reference designs: a broadband access architecture, a spine-leaf switching fabric for network functions virtualisation (NFV), a more general networking fabric that uses the P4 packet-forwarding programming language, and the open disaggregated transport network (ODTN).
The ONF also announced four system vendors - Adtran, Dell EMC, Edgecore Networks, and Juniper Networks - have joined to work with the operators on the reference design programmes.
“We are disaggregating the supply chain as well as disaggregating the technology,” says Timon Sloane, the ONF’s vice president of marketing and ecosystem. “It used to be that you’d buy a complete solution from one vendor. Now operators want to buy individual pieces and put them together, or pay somebody to do it for them.”
We are disaggregating the supply chain as well as disaggregating the technology
CORD and Exemplars
The ONF is known for various open-source initiatives such as its ONOS software-defined networking (SDN) controller and CORD. CORD is the ONF’s cloud optimised remote data centre work, also known as the central office re-architected as a data centre. That said, the ONF points out that CORD can be used in places other than the central office.
“CORD is a hardware architecture but it is really about software,” says Sloane. “It is a landscape of all our different software projects.”
However, the ONF received feedback last year that service providers were putting the CORD elements together slightly differently. “Vendors were using that as an excuse to say that CORD was too complicated and that there was no critical mass: ‘We don’t know how every operator is going to do this and so we are not going to do anything’,” says Sloane.
It led to the ONF’s service providers agreeing to define the assemblies of common components for various network platforms so that vendors would know what the operators want and intend to deploy. The result is the reference designs.
The reference designs offer operators some flexibility in terms of the components they can use. The components may be from the ONF but need not be; they can also be open-source or a vendor’s own solution.
Source: ONF
The ONF has also announced the exemplar platforms aligned with the reference designs (see diagram). An exemplar platform is an assembly of open-source components that builds an example platform based on a reference design. “The exemplar platforms are the open source projects that pull all the pieces together,” says Sloane. “They are easy to download, trial and deploy.”
The ONF admits that it is much more experienced with open source projects and exemplar platforms than it is with reference designs. The operators are adopting an iterative process involving all three - open source components, exemplar platforms and reference designs - before settling on the solutions that will lead to deployments.
Two of the ONF exemplar platforms announced are new: the SDN-enabled broadband access (SEBA) and the universal programmable automated network (UPAN).
Reference designs
The SEBA reference design is a broadband variant of the ONF’s CORD work and addresses residential and backhauling applications. The design uses Kubernetes, the cloud-native orchestration system that automates the deployment, scaling and management of container-based applications, while the use of the OpenStack platform is optional. “OpenStack is only used if you want to support a virtual machine-based virtual network function,” says Sloane.
Source: ONF
SEBA uses VOLTHA, the open-source virtual passive optical networking (PON) optical line terminal (OLT) developed by AT&T and contributed to the ONF, and provides interfaces to both legacy operational support systems (OSS) and the Linux Foundation’s Open Networking Automation Platform (ONAP).
SEBA also features FCAPS and mediation. FCAPS (fault, configuration, accounting, performance and security management) is an established telecom approach to network management that can identify faults, while mediation presents the FCAPS information in a way the OSS understands.
“In its slimmest implementation, SEBA doesn’t need CORD switches, just a pair of aggregation switches,” says Sloane. The architecture can place sophisticated forwarding rules onto the optical line terminal and the aggregation switches such that servers and OpenStack are not required. “That has tremendous performance and scale implications,” says Sloane. “No other NFV architecture does this kind of thing.”
The second reference design - the NFV Fabric - ties together two ONF projects - Trellis and ONOS - to create a spine-leaf data centre fabric for edge services and applications.
The two remaining reference designs are UPAN and ODTN.
UPAN can be viewed as an extension of the NFV fabric that adds the P4 data plane programming language. P4 brings programmability to the data plane while the SDN controller enables developers to specify particular forwarding behaviour. “The controller can pull in P4 programs and do intelligent things with them,” says Sloane. “This is a new world where you can write custom apps that will push intelligence into the switch.”
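P4 is a domain-specific language, but the match-action idea behind it can be sketched in a few lines of Python: the controller installs table entries, and the switch applies the first matching entry's action to each packet. This is a toy model, not P4 or ONOS code:

```python
"""A toy model of a match-action table of the kind P4 describes."""
from dataclasses import dataclass, field

@dataclass
class TableEntry:
    match_dst: str   # destination address, exact-match for simplicity
    action: str      # e.g. 'forward'
    port: int = 0

@dataclass
class MatchActionTable:
    entries: list = field(default_factory=list)

    def insert(self, entry: TableEntry) -> None:
        """The controller pushes forwarding behaviour into the switch."""
        self.entries.append(entry)

    def apply(self, dst: str) -> str:
        for e in self.entries:
            if e.match_dst == dst:
                return f"{e.action}(port={e.port})"
        return "drop()"  # table-miss default action

table = MatchActionTable()
table.insert(TableEntry("10.0.0.1", "forward", port=7))
print(table.apply("10.0.0.1"))  # forward(port=7)
print(table.apply("10.0.0.9"))  # drop()
```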
Meanwhile, the ODTN reference design is used to add optical capabilities including reconfigurable optical add-drop multiplexers (ROADMs) and wide-area-network support.
There are also what the ONF calls two trailblazer projects - Mobile CORD (M-CORD) and CORD - that are not ready to become reference designs as they depend on 5G developments that are still taking place.
CORD represents the ONF’s unifying project that brings all the various elements together to address multi-access edge cloud. Also included as part of CORD is an edge cloud services platform. “This is the ultimate vision: what is the app store for edge applications?” says Sloane. “If you write a latency-sensitive application for eyeglasses, for example, how does that get deployed across multiple operators and multiple geographies?”
The ONF says it has already achieved a ‘critical mass’ of vendors to work on the development of the reference designs three months after announcing its strategic plan. The supply chain for each of the reference designs is shown in the table.
Source: ONF
“We boldly stated that we were going to reconstitute the supply chain as part of this work and bring in partners more aligned to embrace enthusiastically open source and help this ecosystem form and thrive,” says Sloane. “It is a whole new approach and to be able to rally the ecosystem in a short timeframe is notable.”
Our expectation is that at least two of these reference designs will go through this transition this year. This is not a multi-year process.
Next steps
It is the partner operators that are involved in the development of the reference designs. For example, the partners working on ODTN are China Unicom, Comcast and NTT. Once the reference designs are ready, they will be released to ONF members and then publicly.
However, the ONF has yet to give timescales as to when that will happen. “Our expectation is that at least two of these reference designs will go through this transition this year,” says Sloane. “This is not a multi-year process.”
What the cable operators are planning for NFV and SDN
Cable operators are working on adding wireless to their fixed access networks using NFV and SDN technologies.
“Cable operators are now every bit as informed about NFV and SDN as the telcos are, but they are not out there talking too much about it,” says Don Clarke, principal architect for network technologies at CableLabs, the R&D organisation serving the cable operators.
Clarke is well placed to comment. While at BT, he initiated the industry collaboration on NFV and edited the original white paper which introduced the NFV concept and outlined the operators’ vision for NFV.
NFV plans
The cable operators are planning developments by exploiting the Central Office Re-architected as a Datacenter (CORD) initiative being pursued by the wider telecom community. Comcast is one cable operator that has already joined the Open Networking Lab’s (ON.Lab) CORD initiative. The aim is to add a data centre capability to the cable operators’ access network onto which wireless will be added.
CableLabs is investigating adding high-bandwidth wireless to the cable network using small cells, and the role 5G will play. The cable operators use DOCSIS as their broadband access network technology and it is ideally suited for small cells once these become mainstream, says Clarke: “How you overlay wireless on top of that network is probably where there is going to be some significant opportunities in the next few years.”
One project CableLabs is working on is helping cable operators provision services more efficiently. At present, operators deliver services over several networks: DOCSIS, EPON and in some cases, wireless. CableLabs has been working for a couple of years on simplifying the provisioning process so that the system is agnostic to the underlying networks. “The easiest way to do that is to abstract and virtualize the lower-level functionality; we call that virtual provisioning,” says Clarke.
CableLabs recently published its Virtual Provisioning Interfaces Technical Report on this topic and is developing data models and information models for the various access technologies so that they can be abstracted. The result will be more efficient provisioning of services irrespective of the underlying access technology, says Clarke.
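The essence of virtual provisioning is one service-facing call, with the per-technology detail hidden behind a common abstraction. A minimal sketch, with entirely hypothetical class and method names:

```python
"""A sketch of access-agnostic provisioning; names are hypothetical."""
from abc import ABC, abstractmethod

class AccessNetwork(ABC):
    @abstractmethod
    def provision(self, subscriber_id: str, downstream_mbps: int) -> str: ...

class DocsisNetwork(AccessNetwork):
    def provision(self, subscriber_id: str, downstream_mbps: int) -> str:
        return f"DOCSIS: pushed {downstream_mbps} Mbps config for {subscriber_id}"

class EponNetwork(AccessNetwork):
    def provision(self, subscriber_id: str, downstream_mbps: int) -> str:
        return f"EPON: set {downstream_mbps} Mbps ONU profile for {subscriber_id}"

def provision_service(net: AccessNetwork, subscriber_id: str, mbps: int) -> str:
    # The caller never sees which access technology is underneath.
    return net.provision(subscriber_id, mbps)

print(provision_service(DocsisNetwork(), "sub-42", 100))
print(provision_service(EponNetwork(), "sub-43", 100))
```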
How you overlay wireless on top of that network is probably where there is going to be some significant opportunities in the next few years
SNAPS
CableLabs is also looking at how to virtualise functionality cable operators may deploy near the edge of their networks.
“As the cable network evolves to do different things and adds more capabilities, CableLabs is looking at the technology platform that would do that,” says Clarke.
To this end, CableLabs has created the SDN-NFV Application development Platform and Stack - SNAPS - which it has contributed to the Open Platform for NFV (OPNFV) group, part of the open-source organisation The Linux Foundation.
SNAPS is a reference platform to be located near the network edge, and possibly at the cable head-end where cable operators deliver video over their networks. The reference platform makes use of the cloud-based operating system, OpenStack, and other open source components such as OpenDaylight, and is being used to instantiate virtual network functions (VNFs) in a real-time dynamic way. “The classic NFV vision,” says Clarke.
CableLabs' Randy Levensalor says one challenge facing cable operators is that, like telcos, they have separate cloud infrastructures for their services and that impacts their bottom line.
Cable operators are now every bit as informed about NFV and SDN as the telcos are, but they are not out there talking too much about it
“You have one [cloud infrastructure] for business services, one for video delivery and one for IT, and you are operationally less efficient when you have those different stacks,” says Levensalor, lead software architect at CableLabs. “With SNAPS, you bring together all the capabilities that are needed in a reference configuration that can be replicated.”
This platform can support local compute with low latency. "We are not able to say much but there is a longer-term vision for that capability that we’ll develop new applications around,” says Clarke.
Challenges and opportunities
The challenges facing cable operators concerning NFV and SDN are the same as those facing the telcos, such as how to orchestrate and manage virtual networks and do it in a way that avoids vendor lock-in.
“The whole industry wants an open ecosystem where we can buy virtual network functions from one vendor and connect them to virtual network functions and other components from different vendors to create an end-to-end platform with the best capabilities at any given time,” says Clarke.
He also believes that cable operators can move more quickly than telcos because of how they collaborate via CableLabs, their research hub. However, the cable operators' progress is inevitably linked to that of the telcos given they want to use the same SDN and NFV technologies to achieve economies of scale. “So we can’t diverge in the areas that need to be common, but we can move more quickly in areas where the cable network has an inherent advantage, for example in the access network,” says Clarke.
Verizon's move to become a digital service provider
Working with Dell, Big Switch Networks and Red Hat, the US telco announced in April it had already brought online five data centres. Since then it has deployed more sites but is not saying how many.
Source: Verizon
“We are laying the foundation of the programmable infrastructure that will allow us to do all the automation, virtualisation and the software-defining we want to do on top of that,” says Chris Emmons, director, network infrastructure planning at Verizon.
“This is the largest OpenStack NFV deployment in the marketplace,” says Darrell Jordan-Smith, vice president, worldwide service provider sales at Red Hat. “The largest in terms of the number of [server] nodes that it is capable of supporting and the fact that it is widely distributed across Verizon’s sites.”
OpenStack is an open source set of software tools that enables the management of networking, storage and compute services in the cloud. “There are some basic levels of orchestration while, in parallel, there is a whole virtualised managed environment,” says Jordan-Smith.
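That management surface is exposed through APIs; for instance, the openstacksdk Python client can enumerate compute and network resources in a few lines (the cloud name below is a placeholder):

```python
"""List servers and networks via openstacksdk; 'mycloud' must match
an entry in the local clouds.yaml (placeholder name)."""
import openstack

conn = openstack.connect(cloud="mycloud")

for server in conn.compute.servers():     # compute inventory
    print(server.name, server.status)

for network in conn.network.networks():   # network inventory
    print(network.name)
```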
This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach
“Verizon is joining some of the other largest communication service providers in deploying a platform onto which they will add applications over time,” says Dana Cooperson, a research director at Analysys Mason.
Most telcos start with a service-led approach so they can get some direct experience with the technology and one or more quick wins in the form of revenue in a new service arena, while containing the risk of something going wrong, explains Cooperson. As they progress, they can still lead with specific services while deploying their platforms, and they can decide over time what to put on the platform as custom equipment reaches its end-of-life.
A second approach - a platform strategy - is a more sophisticated, longer-term one, but telcos need experience before they take that plunge.
“This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach,” says Cooperson.
Applications
The Verizon data centres are located in core locations of its network. “We are focussing more on core applications but some of the tools we use to run the network - backend systems - are also targeted,” says Emmons.
The infrastructure is designed to support all of Verizon’s business units. For example, Verizon is working with its enterprise unit to see how it can use the technology to deliver virtual managed services to enterprise customers.
“Wherever we have a need to virtualise something - the Evolved Packet Core, IMS [IP Multimedia Subsystem] core, VoLTE [Voice over LTE] or our wireline side, our VoIP [Voice over IP] infrastructure - all these things are targeted to go on the platform,” says Emmons. Verizon plans to pool all these functions and network elements onto the platform over the next two years.
Red Hat’s Jordan-Smith talks about a two-stage process: virtualising functions and then making them stateless so that applications can run on servers independent of the location of the server and the data centre.
“Virtualising applications and running on virtual machines gives some economies of scale from a cost perspective and density perspective,” says Jordan-Smith. But there is a further cost benefit, as well as a level of performance and resiliency, once such applications can run across data centres.
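A toy example shows why statelessness matters: if session state lives in an external store - a plain dict stands in for a replicated database here - any instance, in any data centre, can serve the next request:

```python
"""Illustration of externalised state; all names are invented."""

SHARED_STATE = {}  # stand-in for an external, replicated state store

class StatelessSessionHandler:
    """Keeps nothing locally, so it can run on any server anywhere."""

    def handle(self, session_id: str, event: str) -> str:
        history = SHARED_STATE.setdefault(session_id, [])
        history.append(event)
        return f"session {session_id}: {len(history)} events so far"

# Two 'instances' in different data centres share the same view.
east, west = StatelessSessionHandler(), StatelessSessionHandler()
print(east.handle("call-1", "setup"))
print(west.handle("call-1", "media"))  # picks up where 'east' left off
```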
And by having a software-based layer, Verizon will be able to add devices and create associated applications and services. “With the Internet of Things, Verizon is looking at connecting many, many devices and add scale to these types of environments,” says Jordan-Smith.
Architecture
Verizon is deploying a ‘pod and core architecture’ in its data centres. A pod contains racks of servers, top-of-rack or leaf switches, and higher-capacity spine switches and storage, while the core network is used to enable communications between pods in the same data centre and across sites (see diagram, top).
Dell is providing Verizon with servers, storage platforms and white-box leaf and spine switches. Big Switch Networks is providing software that runs on the Dell switches and servers, while the OpenStack platform and Ceph storage software are provided by Red Hat.
Each Dell rack houses 22 servers - each with 24 cores supporting 48 hyper-threads - and all 22 servers connect to the leaf switch. Each rack is teamed with a sister rack and the two are connected to two leaf switches, providing switch-level redundancy.
“Each of the leaf switches is connected to however many spine switches are needed at that location and that gives connectivity to the outside world,” says Emmons. For the five data centres, a total of eight pods have been deployed, amounting to 1,000 servers, and this has not changed since April.
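Some back-of-envelope arithmetic from those figures, purely illustrative:

```python
# Derived from the figures quoted above.
SERVERS_PER_RACK = 22
CORES_PER_SERVER = 24
THREADS_PER_SERVER = 48
TOTAL_SERVERS = 1000

print(SERVERS_PER_RACK * CORES_PER_SERVER)   # 528 cores per rack
print(TOTAL_SERVERS * THREADS_PER_SERVER)    # 48,000 hyper-threads in total
```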
This is the largest OpenStack NFV deployment in the marketplace
Verizon has deliberately chosen to separate the pods from the core network so it can innovate at the pod level independently of the data centre’s network.
“We wanted the ability to innovate at the pod level and not be tied into any technology roadmap at the data centre level,” says Emmons, who points out that there are several ways to evolve the data centre network. For example, in some cases, an SDN controller is used to control the whole data centre network.
“We don't want our pods - at least initially - to participate in that larger data centre SDN controller because we were concerned about the pace of innovation and things like that,” says Emmons. “We want the pod to be self-contained and we want the ability to innovate and iterate in those pods.”
Its first-generation pods contain equipment and software from Dell, Big Switch and Red Hat but Verizon may decide to swap out some or all of the vendors for its next-generation pod. “So we could have two generations of pod that could talk to each other through the core network,” says Emmons. “Or they could talk to things that aren't in other pods - other physical network functions that have not yet been virtualised.”
Verizon’s core networks are its existing networks in the data centres. “We didn't require any uplift and migration of the data centre networks,” says Emmons. However, Verizon has a project investigating data-centre interconnect platforms for core networking.
What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to
Benefits
Verizon expects capital expenditure and operational expense benefits from its programmable network but says it is too early to quantify them. What excites the operator more is the ability to get services up and running quickly, and to adapt and scale the network according to demand.
“You can reconfigure and reallocate your network once it is all software-defined,” says Emmons. There is still much work to be done, from the network core to the edge. “These are the first steps to that programmable infrastructure that we want to get to,” says Emmons.
Capital expenditure savings result from adopting standard hardware. “The more uniform you can keep the commodity hardware underneath, the better your volume purchase agreements are,” says Emmons. Operational savings also result from using standardised hardware. “Spares becomes easier, troubleshooting becomes easier as does the lifecycle management of the hardware,” he says.
Challenges
“We are tip-of-the-spear here,” admits Emmons. “What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to.”
Red Hat’s Jordan-Smith also talks about the accelerated development environment enabled by the software-enabled network. The OpenStack platform undergoes a new revision every six months.
“There are new services that are going to be enabled through major revisions in the not too distant future - the next 6 to 12 months,” says Jordan-Smith. “That is one of the key challenges operators like Verizon have when they are moving at what is now a very fast pace.”
Verizon continues to deploy infrastructure across its network. The operator has completed most of the troubleshooting and performance testing at the cloud level and, in parallel, is working on the applications in several of its labs. “Now it is time to put it all together,” says Emmons.
One critical aspect of the move to become a digital service provider will be the operators' ability to offer new services more quickly - what people call service agility, says Cooperson. Only by changing their operations and their networks can operators create and, if needed, retire services quickly and easily.
"It will be evident that they are truly doing something new when they can launch services in weeks instead of months or years, and make changes to service parameters upon demand from a customer, as initiated by the customer," says Cooperson. “Another sign will be when we start seeing a whole new variety of services and where we see communications service providers building those businesses so that they are becoming a more significant part of their revenue streams."
She cites as examples cloud-based services and more machine-to-machine and Internet of Things-based services.
Ciena offers enterprises vNF pick-and-choose
Ciena, working with partners, has developed a platform for service providers to offer enterprises network functions they can select and configure with the click of a button.
Dubbed Agility Matrix, the product enables enterprises to choose their IT and connectivity services using software running on servers. It also promises to benefit service providers' revenues, enabling more adventurous service offerings thanks to the flexibility and new business models that virtual network functions (vNFs) bring. Currently, managed services require specialist equipment and on-site engineering visits for their set-up and management, while the contracts tend to be lengthy and inflexible.
"It offers an ecosystem of vNF vendors with a licensing structure that can give operators flexibility and vendors a revenue stream," says Eric Hanselman, chief analyst at 451 Research. "There are others who have addressed the different pieces of the puzzle, but Ciena has wrapped the products with the business tools to make it attractive to all of the players involved."
Ciena has created an internal division, dubbed Ciena Agility, to promote the venture. The unit has 100 staff, while its technology, Agility Matrix, is being trialled by service providers, although Ciena has declined to say how many.
"Why a separate devision? To move fast in a market that is moving rapidly," says Kevin Sheehan, vice president and general manager of Ciena Agility.
The unit inherits Agility products previously announced by Ciena. These include the multi-layer WAN controller that Ciena is co-developing with Ericsson, and certain applications that run on the software-defined networking (SDN) controller.
"The unique aspect of Ciena’s offering is the comprehensive approach to virtualised functions, says Hanselman. "It tackles everything from service orchestration out to monetisation."
Source: Ciena
What has been done
Agility Matrix comprises three elements: the vNF Market, the Director and the host. The vNF Market is cloud-based and enables a service provider to offer a library of vNFs from which its enterprise customers can choose. An enterprise IT manager selects the vNFs required using a secure portal.
The Director, the second element, does the rest. Built using OpenStack software, the Director delivers the vNFs to the host, an x86-based server located at the enterprise's premises or in the service provider's central office or data centre.
The Director generates a software licence and, once the enterprise customer confirms the vNFs are working, produces post-payment charging data records. The vNF Market then invoices the service provider and pays the vNF vendors selected.
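The flow just described - select from the market, deliver via the Director, charge after confirmation - can be sketched schematically. This models the article's description only; it is not Ciena code, and all names are invented:

```python
"""Schematic model of the vNF Market -> Director -> charging flow."""
from dataclasses import dataclass

@dataclass
class ChargingRecord:
    vnf: str
    vendor: str
    customer: str

def order_vnfs(customer: str, catalogue: dict, chosen: list) -> list:
    records = []
    for vnf in chosen:
        print(f"Director: licence issued, {vnf} deployed to {customer}'s host")
        # A charging record is produced only once the customer confirms
        # the vNF works, supporting a pay-as-you-earn model.
        records.append(ChargingRecord(vnf, catalogue[vnf], customer))
    return records

catalogue = {"firewall": "VendorA", "wan-opt": "VendorB"}  # hypothetical
for rec in order_vnfs("acme-corp", catalogue, ["firewall", "wan-opt"]):
    print(f"vNF Market: invoice service provider; pay {rec.vendor} for {rec.vnf}")
```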
"Agility Matrix enables a pay-as-you-earn model for the service provider, much different from today's managed services providers' experiences," says Sheenan, who points out that a service provider currently buys custom hardware in bulk based on their enterprise-demand forecast, shipping products one by one. Now, with Agility Matrix, the service provider pays for a licence only after its enterprise customer has purchased one.
Ciena has launched Agility Matrix with five vNF partners. The partners and their vNF products are shown in the table.
Source: Gazettabyte
AT&T Domain 2.0 programme
Ciena is one of the vendors selected by AT&T for its Supplier Domain 2.0 programme. Does AT&T's programme influence this development?
“We are always working with our customers on addressing their current and future problems," says Sheehan. "When we bring something like Agility Matrix to the market, it is created by working with our partners and customers to develop a solution that is designed to meet everyone’s needs."
"Ciena has application programming interfaces that can support integration at several levels, but it is not clear that Agility is part of the deployment within Domain 2.0," says Hanselman. "The interesting things in Domain 2.0 are the automation and virtualisation pieces; Ciena can handle the automation part with its existing products."
Meanwhile, AT&T has announced its 'Network on Demand' that enables businesses to add and change network services in 'near real-time' using a self-service online portal.
Software-defined networking: A network game-changer?
OFC/NFOEC reflections: Part 1

"We [operators] need to move faster"
Andrew Lord, BT
Q: What was your impression of the show?
A: Nothing out of the ordinary. I haven't come away clutching a whole bunch of results that I'm determined to go and check out, which I do sometimes.
I'm quite impressed by how the main equipment vendors have moved on to look seriously at post-100 Gigabit transmission. In fact we have some [equipment] in the labs [at BT]. That is moving on pretty quickly. I don't know if there is a need for it just yet but they are certainly getting out there, not with live chips but making serious noises on 400 Gig and beyond.
There was a talk on the CFP [module] and whether we are going to be moving to a coherent CFP at 100 Gig. So what is going to happen to those prices? Is there really going to be a role for non-coherent 100 Gig? That is still a question in my mind.
"Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?"
I was quite keen on that but I'm wondering if there is going to be a limited opportunity for the non-coherent 100 Gig variants. The coherent prices will drop and my feeling from this OFC is they are going to drop pretty quickly when people start putting these things [100 Gig coherent] in; we are putting them in. So I don't know quite what the scope is for people that are trying to push that [100 Gigabit direct detection].
What was noteworthy at the show?
There is much talk about software-defined networking (SDN), so much talk that a lot of people in my position have been describing it as hype. There is a robust debate internally [within BT] on the merits of SDN which is essentially a data centre activity. In a live network, can we make use of it? There is some skepticism.
I'm still fairly optimistic about SDN and the role it might have and the [OFC/NFOEC] conference helped that.
I'm expecting next year to be the SDN conference and I'd be surprised if SDN doesn't have a much greater impact then [OFC/NFOEC 2014] with more people demoing SDN use cases.
Why is there so much excitement about SDN?
Why now when it could have happened years ago? We could have all had GMPLS (Generalised Multi-Protocol Label Switching) control planes. We haven't got them. Control plane research has been around for a long time; we don't use it: we could but we don't. We are still sitting with heavy OpEx-centric networks, especially optical.
"The 'something different' this conference was spatial-division multiplexing"
So why are we getting excited? Getting the cost out of the operational side - the software-development side, and the ability to buy from whomever we want to.
For example, if we want to buy a new network, we put out a tender and have some 10 responses. It is hard to adjudicate them all equally when, with some of them, we'd have to start from scratch with software development, whereas with others we have a head start as our own management interface has already been developed. That shouldn't and doesn't need to be the case.
Opening the equipment's north-bound interface into our own OSS (operations support systems) means, in theory - and this is probably naive - that any specific OSS we develop ought to work.
Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?
We want to as it means we can leverage competition but also we can get new network concepts and builds in quicker without having to suffer 18 months of writing new code to manage the thing. We used to do that but it is no longer acceptable. It is too expensive and time consuming; we need to move faster.
It [the interest in SDN] is just competition hotting up and costs getting harder to manage. This is an area that is now the focus and SDN possibly provides a way through that.
Another issue is the ability to quickly put new applications and services onto our networks. For example, a bank wants to do data backup but doesn't want to spend a year and resources developing something that it uses only occasionally. Is there a bandwidth-on-demand application we can put onto our basic network infrastructure? Why not?
SDN gives us a chance to do something like that, we could roll it out quickly for specific customers.
Anything else at OFC/NFOEC that struck you as noteworthy?
The core networks aspect of OFC is really my main interest.
You are taking the components, a big part of OFC, and then the transmission experiments and all the great results that they get - multiple Terabits and new modulation formats - and then in networks you are saying: What can I build?
The network has always been the poor relation. It has not had the great exposure or the same excitement. Well, now, the network is becoming centre stage.
As you see components and transmission mature - and it is maturing as the capacity we are seeing on a fibre is almost hitting the natural limit - so the spectral efficiency, the amount of bits you can squeeze into a single Hertz, is hitting the limit of 3, 4, 5, 6 [bit/s/Hz]. You can't get much more than that if you want to go a reasonable distance.
So the big buzzword - 70 to 80 percent of the OFC papers we reviewed - was flex-grid: turning the optical spectrum in fibre into a much more flexible commodity where you can have whatever spectrum you want between nodes, dynamically. Very, very interesting; loads of papers on that. How do you manage that? What benefits does it give?
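(As an aside, to make the flex-grid idea concrete: a toy first-fit allocator in Python using the ITU-T 12.5 GHz slice granularity. Purely illustrative; not from the interview.)

```python
"""Toy flex-grid spectrum allocation: a demand takes however many
contiguous 12.5 GHz slices it needs. Sizes are illustrative."""
import math

SLICE_GHZ = 12.5                  # ITU-T flex-grid slot granularity
TOTAL_SLICES = 384                # roughly the C-band
spectrum = [None] * TOTAL_SLICES  # None = free slice

def allocate(name: str, bandwidth_ghz: float) -> int:
    """First-fit: return the start index of the block, or -1 if blocked."""
    need = math.ceil(bandwidth_ghz / SLICE_GHZ)
    run = 0
    for i, owner in enumerate(spectrum):
        run = run + 1 if owner is None else 0
        if run == need:
            start = i - need + 1
            spectrum[start:i + 1] = [name] * need
            return start
    return -1  # spectrum full or too fragmented

print(allocate("400G-superchannel", 75.0))  # 6 slices starting at index 0
print(allocate("100G-channel", 37.5))       # 3 slices starting at index 6
```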
What did you learn from the show?
One area I don't get yet is spatial-division multiplexing. Fibre is filling up so where do we go? Well, we need to go somewhere because we are predicting our networks continuing to grow at 35 to 40 percent.
Now we are hitting a new era. Putting fibre in doesn't really solve the problem in terms of cost, energy and space. You are just layering solutions on top of each other and you don't get any more revenue from it. We are stuffed unless we do something different.
The 'something different' this conference was spatial-division multiplexing. You still have a single fibre but you put in multiple cores and that is the next way of increasing capacity. There is an awful lot of work being done in this area.
I gave a paper [pointing out the challenges]. I couldn't see how you would build the splicing equipment, or how you would get this fibre qualified. Given the 30-40 years of expertise of companies like Corning making single-mode fibre, are we really going to go through all that again for this new fibre? How long is that going to take? How do you align these things?
"SDN for many people is data centres and I think we [operators] mean something a bit different."
I just presented the basic pitfalls from an operator's perspective of using this stuff. That is my skeptic side. But I could be proved wrong, it has happened before!
Anything you learned that got you excited?
One thing I saw is optics pushing out.
In the past we saw 100 Megabit and one Gigabit Ethernet (GbE) being king of a certain part of the network. People were talking about that becoming optics.
We are starting to see optics entering a new phase. Ten Gigabit Ethernet is a wavelength, a colour on a fibre. If the cost of those very simple 10GbE transceivers continues to drop, we will start to see optics enter a new phase where we could be seeing it all over the place: you have a GigE port, well, have a wavelength.
[When that happens] optics comes centre stage and then you have to address optical questions. This is exciting and Ericsson was talking a bit about that.
What will you be monitoring between now and the next OFC?
We are accelerating our SDN work. We see that as being game-changing in terms of networks. I've seen enough open standards emerging, enough will around the industry with the people I've spoken to, some of the vendors that want to do some work with us, that it is exciting. Things like 4k and 8k (ultra high definition) TV, providing the bandwidth to make this thing sensible.
"I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job."
Think of a health application where you have a 4k or 8k TV camera giving an ultra-high-res picture of a scan, piping that around the network at many, many Gigabits. These types of applications are exciting and that is where we are going to be putting a bit more effort. Rather than just the traditional thinking about transmission, we are moving on to some solid networking; that is how we are migrating it in the group.
When you say open standards [for SDN], OpenFlow comes to mind.
OpenFlow is a lovely academic thing. It allows you to open a box for a university to try their own algorithms. But it doesn't really help us because we don't want to get down to that level.
I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job.
What we need is the next level up: taking entire network functions and having them presented in an open way.
For example, something like OpenStack [the open source cloud computing software] that allows you to start to bring networking, and compute and memory resources in data centres together.
You can start to say: I have a data centre here, another here and some networking in between, how can I orchestrate all of that? I need to provide some backup or some protection, what gets all those diverse elements, in very different parts of the industry, what is it that will orchestrate that automatically?
That is the kind of open theme that operators are interested in.
That sounds different to what is being developed for SDN in the data centre. Are there two areas here: one networking and one the data centre?
You are quite right. SDN for many people is data centres and I think we mean something a bit different. We are trying to have multi-vendor leverage and as I've said, look at the software issues.
We also need to be a bit clearer as to what we mean by it [SDN].
Andrew Lord has been appointed technical chair at OFC/NFOEC
Further reading
Part 2: OFC/NFOEC 2013 industry reflections, click here
Part 3: OFC/NFOEC 2013 industry reflections, click here
Part 4: OFC/NFOEC industry reflections, click here
Part 5: OFC/NFOEC 2013 industry reflections, click here
