The art of virtualised network function placement

Telefónica, along with vendors Cyan and Red Hat, is working on a network functions virtualisation (NFV) project. The project involves the careful placement of virtualised network functions in the network to improve their performance.

Cyan is delivering its Blue Planet NFV orchestrator software, which will make use of enhancements to OpenStack being developed by Red Hat in close collaboration with Telefónica.

Gazettabyte asked Nirav Modi, director of software innovations at Cyan, about the work.
 

Q: Why is the concept of deterministic placement of virtualised network functions important? 

NM: We are attempting to solve a fundamental technology challenge required to make NFV successful for carriers. Telefónica has been doing a lot of internal trials and its own NFV R&D and has found that the placement of virtualised network functions greatly affects their performance. 

We are working with Telefónica and Red Hat to solve this problem and to ensure that virtualised network functions perform consistently and at their peak from one instance to another. This is particularly important for composite or clustered virtualised network function architectures, where the placement of various components can affect performance and availability. 

Also, virtualised network functions need to be placed where the most suitable compute, storage and networking resources reside. Other important performance metrics that need to be considered include latency and bandwidth availability into the cloud environment.
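
To make the placement and availability point concrete, here is a minimal, purely illustrative sketch in Python (not Cyan's or Red Hat's code) of an anti-affinity rule for a two-component clustered virtualised network function: the components must never share a compute host, and each chosen host must have enough spare capacity. All host names, component names and capacity figures are hypothetical.

```python
# Illustrative sketch only: a toy anti-affinity placement for a clustered VNF.
# Host names, component names and vCPU figures are hypothetical.
from itertools import product

def violates_anti_affinity(placement):
    """True if two components of the same clustered VNF share a host."""
    hosts_used = list(placement.values())
    return len(hosts_used) != len(set(hosts_used))

def place_cluster(components, hosts, free_vcpus, demand_vcpus):
    """Brute-force search: assign one host per component so that the
    anti-affinity rule holds and each host has enough free vCPUs."""
    for candidate in product(hosts, repeat=len(components)):
        placement = dict(zip(components, candidate))
        if violates_anti_affinity(placement):
            continue
        if all(free_vcpus[h] >= demand_vcpus[c] for c, h in placement.items()):
            return placement
    return None  # no feasible anti-affine placement at this PoP

components = ["vFW-active", "vFW-standby"]           # hypothetical clustered VNF
hosts = ["compute-01", "compute-02", "compute-03"]   # hypothetical hosts at one PoP
free_vcpus = {"compute-01": 8, "compute-02": 2, "compute-03": 16}
demand_vcpus = {"vFW-active": 4, "vFW-standby": 4}
print(place_cluster(components, hosts, free_vcpus, demand_vcpus))
# -> {'vFW-active': 'compute-01', 'vFW-standby': 'compute-03'}
```

In a real deployment the same intent would typically be expressed as a scheduler policy rather than hand-written code, but the sketch shows why placement, and not just raw capacity, determines the availability of a clustered function.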

 

What is involved in enabling the deterministic placement of virtualised network functions, and what is the impact on Cyan's orchestration platform?

Deterministic placement requires an orchestration platform, such as Cyan’s Blue Planet, to be aware of the resources available at various NFV points of presence (PoPs) and to map operator-provided placement policies, such as performance and high-availability requirements, into placement decisions. In other words, which servers should be used and which PoP should host the virtualised network function. 

For an operator interested in application service-level agreements (SLAs) and performance, Cyan’s orchestration platform provides the intelligence to translate those policies into a placement architecture.
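
As a rough illustration of what mapping operator policies into placement decisions can look like, the following sketch (again Python, and again not Blue Planet's actual policy engine) filters candidate NFV PoPs against hard constraints such as capacity, latency and bandwidth, then scores the survivors. The PoP inventory, policy fields and tie-break rule are invented for the example.

```python
# Illustrative sketch only: filter-then-score selection of an NFV PoP.
# The inventory and policy fields below are hypothetical.

def choose_pop(pops, policy):
    """Return the name of the best PoP, or None if no PoP satisfies the policy."""
    feasible = [
        p for p in pops
        if p["free_vcpus"] >= policy["vcpus"]
        and p["latency_ms"] <= policy["max_latency_ms"]
        and p["bandwidth_gbps"] >= policy["min_bandwidth_gbps"]
    ]
    if not feasible:
        return None
    # Prefer the lowest latency, then the most spare capacity for growth.
    best = min(feasible, key=lambda p: (p["latency_ms"], -p["free_vcpus"]))
    return best["name"]

pops = [
    {"name": "pop-madrid", "free_vcpus": 64, "latency_ms": 4, "bandwidth_gbps": 40},
    {"name": "pop-seville", "free_vcpus": 16, "latency_ms": 11, "bandwidth_gbps": 10},
]
policy = {"vcpus": 8, "max_latency_ms": 10, "min_bandwidth_gbps": 10}
print(choose_pop(pops, policy))  # -> pop-madrid
```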

 

Does this Telefónica work also require Cyan's Z-Series packet-optical transport system (P-OTS)? 

NFV is all about taking network functions currently deployed on purpose-built, vertically-integrated hardware platforms, and deploying them on industry-standard commercial off-the-shelf (COTS) servers, possibly in a virtualised environment running OpenStack, for example. 

In such a set-up, Cyan’s Blue Planet orchestration platform is responsible for the deployment of the virtualised network functions into the NFV infrastructure or telco cloud. Cyan’s orchestration software is always deployed on COTS servers. There is no dependency on using Cyan’s P-OTS to use the Blue Planet software-defined networking (SDN) and NFV software.

The Z-Series platform can be used in the metro and wide area network to enable a scalable and programmable network. This can supplement the virtualised network functions deployed in the cloud to replace existing hardware-based solutions, but the Z-Series is not involved in this joint effort with Telefónica and Red Hat.


P-OTS 2.0: 60s interview with Heavy Reading's Sterling Perrin

Heavy Reading has surveyed over 100 operators worldwide about their packet optical transport plans. Sterling Perrin, senior analyst at Heavy Reading, talks about the findings.


Q: Heavy Reading claims the metro packet optical transport system (P-OTS) market is entering a new phase. What are the characteristics of P-OTS 2.0 and what first-generation platform shortcomings does it address?

A: I would say four things characterise P-OTS 2.0 and separate 2.0 from the 1.0 implementations:

  • The focus of packet-optical shifts from time-division multiplexing (TDM) functions to packet functions.
  • Pure-packet implementations of P-OTS begin to ramp and, ultimately, dominate.
  • Switched OTN (Optical Transport Network) enters the metro, removing the need for SONET/SDH fabrics in new elements.
  • 100 Gigabit takes hold in the metro.

The last two points are new functions while the first two address shortcomings of the previous generation. P-OTS 1.0 suffered because its packet side was seen as sub-par relative to Ethernet "pure plays" and also because packet technology in general still had to mature and develop - such as standardising MPLS-TP (Multiprotocol Label Switching - Transport Profile).

 

Your survey's key findings: What struck Heavy Reading as noteworthy?

The biggest technology surprise was the tremendous interest in adding IP/MPLS functions to transport. There was a lot of debate about this 10 years ago. Then the industry settled on a de facto standard that transport includes layers 0-2 but no higher. Now, it appears that the transport definition must broaden to include up to layer 3.

A second key finding is how quickly SONET/SDH has gone out of favour. Going forward, it is all about packet innovation. We saw this shift in equipment revenues in 2012 as SONET/SDH spend globally dropped more than 20 percent. That is not a one-time hit - it's the new trend for SONET/SDH.

 

Heavy Reading argues that transport has broadened in terms of the networking embraced - from layers 0 (WDM) and 1 (SONET/SDH and OTN) to now include IP/MPLS. Is the industry converging on one approach for multi-layer transport optimisation? For example, IP over dense WDM? Or OTN, Carrier Ethernet 2.0 and MPLS-TP? Or something else?

We did not uncover a single winning architecture and it's most likely that operators will do different things. Some operators will like OTN and put it everywhere. Others will have nothing to do with OTN. Some will integrate optics on routers to save transponder capital expenditure, while others will keep the hardware separate but tightly link the IP and optical layers via the control plane. I think it will be very mixed.

You talk about a spike in 100 Gigabit metro starting in 2014. What is the cause? And is it all coherent or is a healthy share going to 100 Gigabit direct detection?

Interest in 100 Gigabit in the metro exceeds interest in OTN in the metro - which is different from the core, where those two technologies are more tightly linked.

Cloud and data centre interconnect are the biggest drivers for interest in metro 100 Gig but there are other uses as well. We did not ask about coherent versus direct in this survey, but based on general industry discussions, I'd say the momentum is clearly around coherent at this stage - even in the metro. It does not seem that direct detect 100 Gig has a strong enough cost proposition to justify a world with two very different flavours of 100 Gig.

 

What surprised you from the survey's findings?

It was really the level of interest in IP functionality on transport systems that was the most surprising finding.

It opens up the packet-optical transport market to new players that are strongest on IP and also poses a threat to suppliers that were good at lower layers but have no IP expertise - they'll have to do something about that.

Heavy Reading surveyed 114 operators globally. All those surveyed were operators; no system vendors were included. The regional split was North America - 22 percent, Europe - 33 percent, Asia Pacific - 25 percent, and the rest of the world - Latin America mainly - 20 percent.

