TIP tackles the growing complexity of open design
Axel Clauberg, the TIP chairman and vice president, technology innovation at Deutsche Telekom, described how the relentless growth of IP traffic is causing production costs to rise while the average revenue per subscriber for bundled communications services is flat or dipping. “Not a good situation to be in,” he said. The industry is also investing in new technologies, including the rollout of 5G.
The industry needs a radically different approach if it is to achieve capital efficiency, said Clauberg, and that requires talent to drive innovation. Attracting such talent needs an industry-wide effort, and this is the motivation for TIP.
TIP
Established in 2016, TIP brings together internet giants Facebook and Microsoft with leading telecom operators, systems vendors, component players and others to co-develop open-source designs for telecoms. In the last year, TIP has added 200 companies, taking its membership to over 500.
TIP used its second summit, held in Santa Clara, California, to unveil several new project groups. These include End-to-End Network Slicing, Edge Computing, and Artificial Intelligence and Applied Machine Learning.
There are three main project categories within TIP: access, backhaul, and core and management. Access now has six project groups, including the new Edge Computing group; backhaul has two; and core and management has three, including the new network slicing and artificial intelligence initiatives. TIP has also established what it calls ecosystem acceleration centres and community labs.
“TIP is definitely bigger and, I think, better,” says Niall Robinson, vice president, global business development at ADVA Optical Networking. “As with any organisation there are always initial growing pains and TIP has gone through those.”
Open Optical Packet Transport
ADVA Optical Networking is a member of one of TIP’s more established projects, the Open Optical Packet Transport (OOPT) group, which announced the 1-rack-unit Voyager packet transport and routing box last year.
OOPT itself comprises four work groups: Optical Line System, Disaggregated Transponders and Chips, Physical Simulation Environment and the Common API. A fifth group is being considered to tackle routing and software-defined interconnection.
Robinson highlights two activities of the OOPT’s subgroups to illustrate the scope and progress of TIP.
The Common API group in which Robinson is involved aims to bring commonality to the various open source groups’ application programming interfaces (APIs).
Open is great but there are so many initiatives out there that it is really not helping the market
The Open Networking Foundation alone has several initiatives: the Central Office Re-architected as a Datacenter (CORD), the Open Networking Operating System (ONOS) SDN controller, the Open Core Model, and the Transport API. Other open initiatives developing APIs include OpenConfig, set up by operators, the Open API initiative, and OpenROADM.
“Open is great but there are so many initiatives out there that it is really not helping the market,” says Robinson. An operator may favour a particular system vendor’s equipment that does not support a given API. Either the operator or the vendor must then develop something, a situation that, for an operator, can repeat itself many times. The goal of the Common API group’s work is to develop a mapping function between the software-defined networking (SDN) controller and the equipment so that any SDN controller can use these industry-initiative APIs.
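To make the idea concrete, here is a minimal sketch in Python of the kind of mapping layer the Common API group describes: the controller speaks one neutral model, and per-initiative adapters translate it to each API’s shape. All class, field and leaf names here are illustrative assumptions, not the group’s actual schemas.

```python
# Hypothetical sketch: an SDN controller issues one neutral request and
# per-API adapters handle the translation, so neither operator nor vendor
# needs bespoke per-pairing development. Names are illustrative only.

from dataclasses import dataclass


@dataclass
class PortConfig:
    """Neutral representation used by the SDN controller."""
    port_id: str
    admin_up: bool
    speed_gbps: int


class OpenConfigStyleAdapter:
    """Maps the neutral model to an OpenConfig-style JSON document."""
    def to_device(self, cfg: PortConfig) -> dict:
        return {
            "interfaces": {
                "interface": [{
                    "name": cfg.port_id,
                    "config": {
                        "enabled": cfg.admin_up,
                        # Simplified, OpenConfig-flavoured leaf name
                        "port-speed": f"SPEED_{cfg.speed_gbps}GB",
                    },
                }]
            }
        }


class VendorXmlAdapter:
    """Maps the same neutral model to a vendor-proprietary XML payload."""
    def to_device(self, cfg: PortConfig) -> str:
        state = "up" if cfg.admin_up else "down"
        return (f"<port name='{cfg.port_id}' admin='{state}' "
                f"speed='{cfg.speed_gbps}G'/>")


# One controller request, two very different device-facing encodings.
cfg = PortConfig(port_id="et-0/0/1", admin_up=True, speed_gbps=100)
print(OpenConfigStyleAdapter().to_device(cfg))
print(VendorXmlAdapter().to_device(cfg))
```

The design point is that the translation logic is written once per API initiative rather than once per operator-vendor pairing.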
Robinson’s second example is the work of the OOPT’s Disaggregated Transponders and Chips group that is developing a transponder abstraction interface. The goal is to make it easier for vendors to benefit from the functionality of a transponder’s coherent DSP independent of the particular chip used.
“For ADVA, when we build our own gear we pick a DSP and we have to get our firmware to work with it,” says Robinson. “We can’t change that DSP easily; it’s a custom interface.”
The goal of the work is a transponder abstraction interface that sits between the higher-level functionality software and the coherent DSP. The transponder vendor interfaces its particular DSP to the abstraction interface, which then allows a network element’s software to configure settings and retrieve optical monitoring data.
“It doesn’t care or even know what DSP is used, all it is talking to is this common transponder abstraction interface,” says Robinson.
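A minimal Python sketch of the abstraction pattern follows. The class and method names are assumptions made for illustration, not the OOPT group’s actual interface definition; the point is that the network element’s software is written once against the abstract interface.

```python
# Hypothetical sketch of a transponder abstraction interface: higher-level
# software talks only to TransponderInterface, and each DSP vendor supplies
# the concrete mapping to its own registers or SDK. Names are illustrative.

from abc import ABC, abstractmethod


class TransponderInterface(ABC):
    """What the network element's software sees, independent of the DSP."""

    @abstractmethod
    def set_modulation(self, fmt: str) -> None: ...

    @abstractmethod
    def get_pre_fec_ber(self) -> float: ...


class VendorADsp(TransponderInterface):
    # In a real system these methods would call vendor A's firmware API.
    def set_modulation(self, fmt: str) -> None:
        print(f"[vendor A] programming line interface for {fmt}")

    def get_pre_fec_ber(self) -> float:
        return 1.2e-4  # placeholder telemetry value


class VendorBDsp(TransponderInterface):
    # Vendor B exposes a different custom interface underneath.
    def set_modulation(self, fmt: str) -> None:
        print(f"[vendor B] writing modulation register: {fmt}")

    def get_pre_fec_ber(self) -> float:
        return 9.7e-5  # placeholder telemetry value


def bring_up(transponder: TransponderInterface) -> None:
    """Network-element software: identical regardless of the DSP behind it."""
    transponder.set_modulation("16QAM")
    print("pre-FEC BER:", transponder.get_pre_fec_ber())


bring_up(VendorADsp())
bring_up(VendorBDsp())
```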
Cassini and Voyager platforms
Edgecore Networks has contributed its Cassini packet transponder white-box platform to the TIP OOPT group. Like Voyager, the platform uses Broadcom’s StrataXGS Tomahawk 3.2-terabit switch chip. But instead of Voyager’s built-in coherent interfaces based on Acacia’s AC-400 module, Cassini offers eight card slots. Each slot can accommodate one of three module options: a coherent CFP2-ACO, a coherent CFP2-DCO, or two QSFP28 pluggables. The Cassini platform also has 16 fixed QSFP28 ports.
Accordingly, the 1.5-rack-unit box can be configured as a 3.2 terabit switch using QSFP28 modules only or as a transport box with up to 1.6 terabits of client-side interfaces and 1.6 terabits of line-side coherent interfaces. This contrasts with the Voyager that uses up to 2 terabits of the switch capacity with its dozen 100-gigabit client-side interfaces and 800 gigabits of coherent line-side capacity.
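The arithmetic behind these configurations can be checked in a few lines. The 200-gigabit rate per coherent CFP2 slot is an assumption, inferred from the 1.6-terabit line-side figure quoted above rather than stated directly.

```python
# Back-of-envelope check of the port configurations described above,
# using a QSFP28 at 100G and an assumed 200G per coherent CFP2 slot.

QSFP28_GBPS = 100
CFP2_LINE_GBPS = 200  # assumption consistent with the 1.6-terabit figure

# Cassini as a pure switch: 16 fixed QSFP28 plus 8 slots of 2 QSFP28 each
cassini_switch = 16 * QSFP28_GBPS + 8 * 2 * QSFP28_GBPS
print(cassini_switch)  # 3200 Gbit/s, i.e. the full 3.2-terabit switch

# Cassini as transport: 16 fixed QSFP28 client-side, 8 coherent CFP2 line-side
print(16 * QSFP28_GBPS, 8 * CFP2_LINE_GBPS)  # 1600 and 1600 Gbit/s

# Voyager: 12 x 100G client plus 800G coherent line = 2T of switch capacity
print(12 * 100 + 800)  # 2000 Gbit/s
```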
There have also been developments with TIP’s Voyager box. Cumulus Networks has replaced SnapRoute as the provider of the platform’s Linux network operating system. ADVA Optical Networking, a seller of the Voyager, says the box will likely be generally available in the first quarter of 2018.
Robinson says TIP will ultimately be judged based on what it ends up delivering. “Eighteen months is not enough time for the influence of something like this to be felt,” he says.
Giving telecom networks a computing edge
A subtler approach is taking hold as networks evolve, whereby what a user does will change depending on their location. And what will enable this is edge computing.
Edge computing
“This is an entirely new concept,” says Monica Paolini, president and founder at Senza Fili Consulting. “It is a way to think about service which is going to have a profound impact.”
Edge computing has emerged as a consequence of operators virtualising their networks. Virtualisation of network functions hosted in the cloud has promoted a trend of moving telecom functionality to the network core. Functionality does not need to be centralised but that has been the initial trend, says Paolini, especially given how virtualisation promotes the idea that network location no longer matters.
“That is a good story, it delivers a lot of cost savings,” says Paolini, who recently published a report on edge computing. *
But a realisation has emerged across the industry that location does matter; centralisation may save the operator some costs but it can impact performance. Depending on the application, it makes sense to move servers and storage closer to the network edge.
The result has been several industry initiatives. One is Mobile Edge Computing (MEC), being developed by the European Telecommunications Standards Institute (ETSI). In March, ETSI renamed the Industry Specification Group undertaking the work to Multi-access Edge Computing, to reflect operators’ requirements beyond just cellular.
“What Multi-access Edge Computing does is move some of the core functionality from a central location to the edge,” says Paolini.
Another initiative is M-CORD, the mobile component of the Central Office Re-architected as a Datacenter (CORD) initiative, overseen by the Open Networking Lab non-profit organisation. Other initiatives Paolini highlights include the Open Compute Project, Open Edge Computing and the Telecom Infra Project.
This is an entirely new concept. It is a way to think about service which is going to have a profound impact.
Location
The exact location of the ‘edge’ where the servers and storage reside is not straightforward.
In general, edge computing is located somewhere between the radio access network (RAN) and the network core. Putting everything at the RAN is one extreme but that would lead to huge duplication of hardware and exceed what RAN locations can support. Equally, edge computing has arisen in response to the limitations of putting too much functionality in the core.
The matter of location is blurred further when one considers that the RAN itself can be moved towards the core using the Cloud RAN architecture.
Paolini cites another reason why the location of edge computing is not well defined: the industry does not yet know, and it will only find out in the next year or two as operators start trialling the technology. “There is going to be some trial and error by the operators,” she says.
Use cases
An enterprise spread across a campus is one example use of edge computing, given how much of the content generated stays on-campus. If the bulk of voice calls and data stays local, sending traffic to the core and back makes little sense. There are also security benefits to keeping data local. An enterprise may also use edge computing to run services locally and share them across networks, for example using cellular or Wi-Fi for calls.
Another example is to install edge computing at a sports stadium, not only to store video of the game’s play locally - again avoiding going to the core and back with content - but also to cache video from games taking place elsewhere for viewing by attending fans.
Virtual reality and augmented reality are other applications that require low latency, another performance benefit of having computation nearby.
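A rough illustration of why proximity matters: light in fibre travels at roughly 200,000 km/s, so propagation alone costs about 5 microseconds per kilometre each way. The distances below are assumptions for illustration, not operator data.

```python
# Rough latency arithmetic for edge versus core placement. Propagation in
# fibre is ~5 us/km one way; real round trips add queuing and processing.

FIBRE_US_PER_KM = 5  # approx. one-way propagation delay in fibre

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time in milliseconds."""
    return 2 * distance_km * FIBRE_US_PER_KM / 1000

print(round_trip_ms(1000))  # ~10 ms to a distant core data centre and back
print(round_trip_ms(20))    # ~0.2 ms to a nearby metro edge site and back
```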
Paolini expects the uptake of edge computing to be gradual. She also points to its challenging business case; or rather, the way operators typically assess a business case may not tell the full story.
Operators view investing in edge computing as an extra cost but Paolini argues that operators need to look carefully at the financial benefits. Edge computing delivers better utilisation of the network and lower latency. “The initial cost for multi-access edge computing is compensated for by the improved utilisation of the existing network,” she says.
When Paolini started the report, it was to research low latency and the issues of distributed network design, reliability and redundancy. But she soon realised that multi-access edge computing was something broader, and that edge computing extends beyond what ETSI is doing.
This is not like an operator rolling out LTE and reporting to shareholders how much of the population now has coverage. “It is a very different business to learn how to use networks better,” says Paolini.
* The report: Power at the edge. MEC, edge computing, and the prominence of location.
