Deutsche Telekom explains its IP-over-DWDM thinking
Telecom operators are always seeking better ways to run their networks. In particular, operators regularly scrutinise how best to couple the IP layer with their optical networking infrastructure.
The advent of 400-gigabit coherent modules that plug directly into an IP router is one development that has caught their eye.
Placing dense wavelength division multiplexing (DWDM) interfaces directly onto an IP router removes the need for a separate transponder box and the interfaces linking it to the router.
IP-over-DWDM is not a new concept. However, until now, operators have had to add a coherent line card, taking up valuable router chassis space.

Now, with the advent of compact 400-gigabit coherent pluggables developed for the hyperscalers to link their data centres, telecom operators have realised that such pluggables also serve their needs.
BT will start rolling out IP-over-DWDM in its network this year, while Deutsche Telekom has analysed the merits of IP-over-DWDM.
“The adoption of IP-over-DWDM is the subject of our techno-economical studies,” says Werner Weiershausen, senior architect for the transport network at Deutsche Telekom.
Network architecture
Deutsche Telekom’s domestic network architecture comprises 12 large nodes where IP and OTN backbones align with the underlying optical networking infrastructure. These large nodes – points of presence – can be over 1,000km apart.
Like many operators, Deutsche Telekom has experienced annual IP traffic growth of 35 per cent. The need to carry more traffic without increasing costs has led the operator to adopt coherent technology, with the symbol rate rising with each new generation of optical transport technology.
A higher channel bit rate sends more data over an optical wavelength. The challenge, says Weiershausen, is maintaining the long-distance reaches with each channel rate hike.
Deutsche Telekom’s in-house team forecasts that IP traffic growth will slow to a 20 per cent annual rate, and perhaps even 16 per cent, in future.
Weiershausen says this is still to be proven but that if annual traffic growth does slow down to 16-20 per cent, bandwidth growth issues will remain; it is just that they can be addressed over a longer timeframe.
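A back-of-the-envelope calculation illustrates why slower growth buys time. The sketch below is illustrative only and uses no Deutsche Telekom figures; it simply shows how long a tenfold capacity headroom lasts at different compound annual growth rates.

```python
# Illustrative only: how long a tenfold capacity headroom lasts at different
# compound annual traffic growth rates (not Deutsche Telekom's figures).
import math

def years_to_multiply(factor: float, annual_growth: float) -> float:
    """Years needed for traffic to grow by `factor` at a compound annual rate."""
    return math.log(factor) / math.log(1.0 + annual_growth)

for rate in (0.35, 0.20, 0.16):
    print(f"{rate:.0%} annual growth: tenfold traffic in {years_to_multiply(10, rate):.1f} years")
```

At 35 per cent annual growth, traffic grows tenfold in under eight years; at 16 per cent, the same growth takes roughly fifteen years.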
Bandwidth and reach are long-haul networking issues. Deutsche Telekom’s metro networks, which are horseshoe-shaped, have limited spans overall.
“For metro, our main concern is to have the lowest cost-per-bit because we are fibre- and spectrum-rich, and even a single DWDM fibre pair per metro horseshoe ring offers enough bandwidth headroom,” says Weiershausen. “So it’s easy; we have no capacity problem like the backbone. Also there, we are fibre-rich but can avoid the costly activation of multiple parallel fibre trunks.”
IP-over-DWDM
IP-over-DWDM is increasingly associated with adding pluggable optics onto an IP core router.
“This is what people call IP-over-DWDM, or what Cisco calls the hop-by-hop approach,” says Dr Sascha Vorbeck, head of strategy and architecture, IP-core and transport networks, at Deutsche Telekom.

Cisco’s routed optical networking – its term for the hop-by-hop approach – uses the optical layer for point-to-point connections between IP routers. As a result, traffic switching and routing occur at the IP layer rather than the optical layer, where optical traffic bypass is performed using reconfigurable optical add/drop multiplexers (ROADMs).
Routed optical networking also addresses the challenge of the rising symbol rate of coherent technology, which must maintain the longest reaches when passing through multiple ROADM stages.
Deutsche Telekom says it will not change its 12-node backbone network to accommodate additional routing stages.
“We will not change our infrastructure fundamentally because this is costly,” says Weiershausen. “We try to address this bandwidth growth with technology and not with the infrastructure change.”
Deutsche Telekom’s total cost-of-ownership analysis highlights that optical bypass remains attractive compared to a hop-by-hop approach for specific routes.
However, the operator has concluded that the best approach is a combination: hop-by-hop where distances suit its network, and optical bypass for longer links using either ROADM or static bypass technology.
“A mixture is the optimum from our total cost of ownership calculation,” says Weiershausen. “There was no clear winner.”
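The trade-off behind that conclusion can be pictured with a toy model. The per-element prices below are hypothetical placeholders, not Deutsche Telekom’s cost data; the point is only that hop-by-hop costs scale with the number of intermediate nodes while optical bypass largely does not.

```python
# A toy total-cost-of-ownership comparison, not Deutsche Telekom's model.
# All prices are hypothetical placeholders, chosen only to show the trade-off shape.
ROUTER_PORT = 10.0      # cost of an IP router port (hypothetical units)
ZR_PLUGGABLE = 4.0      # 400ZR/ZR+ coherent pluggable (hypothetical)
LONG_HAUL_TXP = 20.0    # high-performance coherent transponder (hypothetical)
ROADM_PASSTHRU = 1.5    # optical bypass at an intermediate node (hypothetical)

def hop_by_hop_cost(intermediate_nodes: int) -> float:
    # Traffic is terminated and re-routed at every intermediate node:
    # two router ports and two pluggables per hop.
    hops = intermediate_nodes + 1
    return hops * 2 * (ROUTER_PORT + ZR_PLUGGABLE)

def bypass_cost(intermediate_nodes: int) -> float:
    # Traffic stays in the optical layer: one transponder pair end-to-end
    # plus a ROADM pass-through at each intermediate node.
    return 2 * (ROUTER_PORT + LONG_HAUL_TXP) + intermediate_nodes * ROADM_PASSTHRU

for n in range(5):
    print(f"{n} intermediate nodes: hop-by-hop {hop_by_hop_cost(n):.1f}, bypass {bypass_cost(n):.1f}")
```

With these placeholder numbers, hop-by-hop is cheaper on routes with few or no intermediate nodes, while bypass wins once traffic would otherwise be terminated at several intermediate routers – consistent with the mixed approach described above.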
Strategy
Deutsche Telekom favours coherent interfaces on its routers for its network backbone because it wants to simplify its network. In addition, the operator wants to rid its network of existing DWDM transponders and their short-reach – ‘grey’ – interfaces linking the IP router to the DWDM transponder box.
“They use extra power and are an extra capex [capital expenditure] cost,” says Weiershausen. “They are also an additional source of failures when you have several network elements in-line. That said, heat dissipation of long-reach coherent optical DWDM interfaces limited the available IP router interfaces that could have been activated in the past.”
For example, a decade ago, Deutsche Telekom tried to use IP-over-DWDM in its backbone network but had to revert to an external DWDM transponder box because of heat dissipation problems.
The situation may have changed with modern router and optical interface generations, but this is still being studied by Deutsche Telekom and is an essential prerequisite for its evolution roadmap.
Deutsche Telekom still interconnects its IP routers using grey interfaces and traditional DWDM equipment. In 2020, the operator compared the cost of a traditional DWDM network with that of a hop-by-hop approach; at the time, the hop-by-hop method was 20 per cent more expensive. Deutsche Telekom plans to redo the calculations to see if anything has changed.
The operator has yet to decide whether to adopt ZR+ coherent pluggable optics and a hop-by-hop approach or use more advanced larger coherent modules in its routers. “This is not decided yet and depends on pricing evolution,” says Weiershausen.
With the volumes expected for pluggable coherent optics, the expectation is they will have a notable price advantage compared to traditional high-performance coherent interfaces.
But Deutsche Telekom remains undecided, believing that conventional coherent interfaces may also come down markedly in price.
SDN controller
Another issue for consideration with IP-over-DWDM is the software-defined networking (SDN) controller.
IP router vendors offer their own SDN controllers, but there is also a need to work with third-party SDN controllers.
For example, Deutsche Telekom is a member of the OpenROADM multi-source agreement and has pushed for IP-over-DWDM to be a significant application of the MSA.
But there are disaggregation issues regarding how a router’s coherent optical interfaces are controlled. For example, are the optical interfaces overseen and orchestrated by the OpenROADM SDN controller and its application programming interface (API) or is the SDN controller of each IP router vendor responsible for steering the interfaces?
Deutsche Telekom says that a compromise has been reached for the OpenROADM MSA whereby the IP router vendors’ SDN controllers oversee the optics but that for the solution to work, information is exchanged with the OpenROADM’s SDN controller.
“That way, the path computation engine (PCE) of the optical network layer, including the ROADMs, can calculate the right path for the traffic. Without information from the IP router, it would be blind; it would not work,” says Weiershausen.
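A minimal sketch of that information hand-off is shown below. The field names and the export function are hypothetical, purely to illustrate the kind of data a router vendor’s controller might expose to the optical-layer PCE; this is not the OpenROADM API.

```python
# Hypothetical sketch of the information exchange described above;
# the field names and interfaces are illustrative, not the OpenROADM API.
from dataclasses import dataclass

@dataclass
class CoherentInterfaceState:
    router_id: str
    port: str
    frequency_thz: float     # tuned carrier frequency of the pluggable
    modulation: str          # e.g. "16QAM"
    target_far_end: str      # router the wavelength should terminate on

def export_to_optical_pce(states: list[CoherentInterfaceState]) -> list[dict]:
    """What a router-vendor controller might share so the optical-layer PCE
    is not 'blind' when computing paths through the ROADMs."""
    return [
        {
            "endpoint": f"{s.router_id}:{s.port}",
            "frequency_thz": s.frequency_thz,
            "modulation": s.modulation,
            "far_end": s.target_far_end,
        }
        for s in states
    ]
```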
Automation
Weiershausen says it is not straightforward to say which approach – IP-over-DWDM or a boundary between the IP and optical layers – is easier to automate.
“Principally, it is the same in terms of the information model; it is just that there are different connectivity and other functionalities [with the two approaches],” says Weiershausen.
But one advantage of a clear demarcation between the layers is the decoupling of the lifecycles of the different equipment.
Fibre has the longest lifecycle, followed by the optical line system, with IP routers having the shortest of the three, with new generation equipment launched every few years.
Decoupling and demarcation are therefore a good strategy here, notes Weiershausen.
The ONF adapts after sale of spin-off Ananki to Intel

Intel’s acquisition of Ananki, a private 5G networking company set up within the ONF last year, has meant the open-model organisation has lost the bulk of its engineering staff.
The ONF, a decade-old non-profit consortium led by the telecom operators, has developed some notable networking projects over the years such as CORD, OpenFlow, one of the first software-defined networking (SDN) standards, and Aether, the 5G edge platform.
Its joint work with the operators has led to virtualised and SDN building blocks that, when combined, can address comprehensive networking tasks such as 5G, wireline broadband and private wireless networks.
The ONF’s approach has differed from other open-source organisations. Its members pay for an in-house engineering team to co-develop networking blocks based on disaggregation, SDN and cloud.
The ONF and its members have built a comprehensive portfolio of networking functions which last year led to the organisation spinning out a start-up, Ananki, to commercialise a complete private end-to-end wireless network.
Now Intel has acquired Ananki, taking with it 44 of the ONF’s 55 staff.
“Intel acquired Ananki, Intel did not acquire the ONF,” says Timon Sloane, the ONF’s newly appointed general manager. “The ONF is still whole.”
The ONF will now continue with a model akin to other open-source organisations.
ONF’s evolution
The ONF began by tackling the emerging interest in SDN and disaggregation.
“After that phase, considered Phase One, we broke the network into pieces and it became obvious that it was complicated to then build solutions; you have these pieces that had to be reassembled,” says Sloane.
The ONF used its partner funding to set up a joint development team to craft solutions that were used to seed the industry.
The ONF pursued this approach for over six years but Sloane says the organisation increasingly felt the model had run its course. “We were kind of an insular walled garden, with us and a small number of operators working on things,” says Sloane. “We needed to flip the model inside out and go broad.”
This led to the spin-out of Ananki, a separate for-profit entity that would bring in funding yet would also be an important contributor to open source. And as it grew, the thinking was that it would subsume some of the ONF’s engineering team.
“We thought for the next phase that a more typical open-source model was needed,” says Sloane. “Something like Google with Kubernetes, where one company builds something, puts it in open source and feeds it, even for a couple of years, until it grows, and the community grows around it.”
But during the process of funding Ananki, several companies expressed an interest in acquiring the start-up. The ONF will not name the other interested parties but hints that they included telecom operators and hyperscalers.
The merit of Intel, says Sloane, is that it is a chipmaker with a strong commitment to open source.
Deutsche Telekom’s ongoing ORAN trial in Berlin uses key components from the ONF including SD-Fabric, 5G and 4G core functions, and the µONOS near-real-time RAN Intelligent Controller (RIC). Source: ONF, DT.
Post-Ananki
“Those same individuals who were wearing an ONF hat, are swapping it for an Intel hat, but are still on the leadership of the project,” says Sloane. “We view this as an accelerant for the project contributions because Intel has pretty deep resources and those individuals will be backed by others.”
The ONF acknowledges that its fixed broadband passive optical networking (PON) work is not part of Ananki’s interest. Intel understands that there are operators reliant on that project and will continue to help during a transition period. Those vendors and operators directly involved will also continue to contribute.
“If you look at every other project that we’re doing: mobile core, mobile RAN, all the P4 work, programmable networks, Intel has been very active,” says Sloane.
Meanwhile, the ONF is releasing its entire portfolio to the open-source community.
“We’ve moved out of the walled-garden phase into a more open phase, focused on the consumption and adoption [of the designs],” says Sloane. The projects will remain under the auspices of the ONF to get the platforms adopted within networks.
The ONF will use its remaining engineers to offer its solutions using a Continuous Integration/ Continuous Delivery (CI/CD) software pipeline.
“We will continue to have a smaller engineering team focused on Continuous Integration so that we’ll be able to deliver daily builds, hourly builds, and continuous regression testing – all that coming out of ONF and the ONF community,” says Sloane. “Others can use their CD pipelines to deploy and we are delivering exemplar CD pipelines if you want to deploy bare metal or in a cloud-based model.”
The ONF is also looking at creating a platform that enables the programmability of a host using silicon such as a data processing unit (DPU) as part of larger solutions.
“It’s a very exciting space,” says Sloane. “You just saw the Pensando acquisition; I think that others are recognising this is a pretty attractive space.” AMD recently announced it is acquiring Pensando, to add a DPU architecture to AMD’s chip portfolio.
The ONF’s goal is to create a common platform that can be used for cloud and telecom networking and infrastructure for applications such as 5G and edge.
“And then there is of course the whole edge space, which is quite fascinating; a lot is going on there as well,” says Sloane. “So I don’t think we’re done by any means.”
Telecoms' innovation problem and its wider cost

Imagine how useful 3D video calls would have been this last year.
The technologies needed – a light field display and digital compression techniques to send the vast data generated across a network – do exist but practical holographic systems for communication remain years off.
But this is just the sort of application that telcos should be pursuing to benefit their businesses.
A call for innovation
“Innovation in our industry has always been problematic,” says Don Clarke, formerly of BT and CableLabs and co-author of a recent position paper outlining why telecoms needs to be more innovative.
Entitled Accelerating Innovation in the Telecommunications Arena, the paper’s co-authors include representatives from communications service providers (CSPs) Telefonica and Deutsche Telekom.
In an era of accelerating and disruptive change, CSPs are proving to be an impediment, argues the paper.
The CSPs’ networking infrastructure has its own inertia; the networks are complex, vast in scale and costly. The operators also require a solid business case before undertaking expensive network upgrades.
Such inertia is costly, not only for the CSPs but for the many industries that depend on connectivity.
But if the telecom operators are to boost innovation, practices must change. This is what the position paper looks to tackle.
NFV White Paper
Clarke was one of the authors of the original Network Functions Virtualisation (NFV) White Paper, published by ETSI in 2012.
The paper set out a blueprint as to how the telecom industry could adopt IT practices and move away from specialist telecom platforms running custom software. Such proprietary platforms made the CSPs beholden to systems vendors when it came to service upgrades.

The NFV paper also highlighted a need to attract new innovative players to telecoms.
“I see that paper as a catalyst,” says Clarke. “The ripple effect it has had has been enormous; everywhere you look, you see its influence.”
Clarke cites how the Linux Foundation has re-engineered its open-source activities around networking while Amazon Web Services now offers a cloud-native 5G core. Certain application programming interfaces (APIs) cited by Amazon as part of its 5G core originated in the NFV paper, says Clarke.
Software-based networking would have happened without the ETSI NFV white paper, stresses Clarke, but its backing by leading CSPs spurred the industry.
However, building a software-based network is hard, as the subsequent experiences of the CSPs have shown.
“You need to be a master of cloud technology, and telcos are not,” says Clarke. “But guess what? Riding to the rescue are the cloud operators; they are going to do what the telcos set out to do.”
For example, as well as hosting a 5G core, AWS is active at the network edge, with offerings including its Internet of Things (IoT) Greengrass service. Microsoft, having acquired telecom vendors Metaswitch and Affirmed Networks, has launched ‘Azure for Operators’ to offer 5G, cloud and edge services. Meanwhile, Google has signed agreements with several leading CSPs to advance 5G mobile edge computing services.
“They [the hyperscalers] are creating the infrastructure within a cloud environment that will be carrier-grade and cloud-native, and they are competitive,” says Clarke.
The new ecosystem
The position paper describes the telecommunications ecosystem in three layers (see diagram).
The CSPs are examples of the physical infrastructure providers (bottom layer) that have fixed and wireless infrastructure providing connectivity. The physical infrastructure layer is where the telcos have their value – their ‘centre of gravity’ – and this won’t change, says Clarke.
The infrastructure layer also includes the access network – the CSPs’ crown jewels.
“The telcos will always defend and upgrade that asset,” says Clarke, adding that the CSPs have never cut access R&D budgets. Access is the part of the network that accounts for the bulk of their spending. “Innovation in access is happening all the time but it is never fast enough.”
The middle, digital network layer is where the nodes responsible for switching and routing reside, as do the NFV and software-defined networking (SDN) functions. It is here where innovation is needed most.
Clarke points out that the middle and upper layers are blurring; they are shown separately in the diagram for historical reasons since the CSPs own the big switching centres and the fibre that connects them.
But the hyperscalers – with their data centres, fibre backbones, and NFV and SDN expertise – play in the middle layer too even if they are predominantly known as digital service providers, the uppermost layer.
The position paper’s goal is to address how CSPs can better address the upper two network layers while also attracting smaller players and start-ups to fuel innovation across all three.
Paper proposal
The paper identifies several key issues that curtail innovation in telecoms.
One is the difficulty for start-ups and small companies to play a role in telecoms and build a business.
Just how difficult it can be is highlighted by the closure of SDN-controller specialist, Lumina Networks, which was already engaged with two leading CSPs.
In a Telecom TV panel discussion about innovation in telecoms that accompanied the paper’s publication, Andrew Coward, the then CEO of Lumina Networks, pointed out that start-ups require not just financial backing but assistance from the CSPs, given their limited resources compared to the established systems vendors.
It is hard for a start-up to respond to an operator’s request-for-proposals that can run to thousands of pages. And when they do, will the CSPs’ procurement departments even consider them, given their size?
Coward argues that a portion of the CSPs’ capital expenditure should be committed to start-ups. That, in turn, would instil greater venture capital confidence in telecoms.
The CSPs also have ‘organisational inertia’ in contrast to the hyperscalers, says Clarke.
“Big companies tend towards monocultures and that works very well if you are not doing anything from one year to the next,” he says.
The hyperscalers’ edge is their intellectual capital and they work continually to produce new capabilities. “They consume innovative brains far faster and with more reward than telcos do, and have the inverse mindset of the telcos,” says Clarke.
The goals of the innovation initiative are to get CSPs and the hyperscalers – the key digital service providers – to work more closely.
“The digital service providers need to articulate the importance of telecoms to their future business model instead of working around it,” says Clarke.
Clarke hopes the digital service providers will step up and help the telecom industry be more dynamic given the future of their businesses depend on the infrastructure improving.
In turn, the CSPs need to stand up and articulate their value. This will attract investors and encourage start-ups to become engaged. It will also force the telcos to be more innovative and overcome some of the procurement barriers, he says.
Ultimately, new types of collaboration need to emerge that will address the issue of innovation.
Next steps
Work has advanced since the paper was published in June and additional players have joined the initiative, to be detailed soon.
“This is the beginning of what we hope will be a much more interesting dialogue, because of the diversity of players we have in the room,” says Clarke. “It is time to wake up, not only because of the need for innovation in our industry but because we are an innovation retardant everywhere else.”
Further information:
Telecom TV’s panel discussion: Part 2, click here
Tom Nolle’s response to the Accelerating Innovation in the Telecommunications Arena paper, click here
Deutsche Telekom's Access 4.0 transforms the network edge

Deutsche Telekom has a working software platform for its Access 4.0 architecture that will start delivering passive optical network (PON) services to German customers later this year. The architecture will also serve as a blueprint for future edge services.
Access 4.0 is a disaggregated design comprising open-source software and platforms that use merchant chips – ‘white-boxes’ – to deliver fibre-to-the-home (FTTH) and fibre-to-the-building (FTTB) services.
“One year ago we had it all as prototypes plugged together to see if it works,” says Hans-Jörg Kolbe, chief engineer and head of SuperSquad Access 4.0. “Since the end of 2019, our target software platform – a first end-to-end system – is up and running.”
Deutsche Telekom has about 1,000 central office sites in Germany, several of which will be upgraded this year to the Access 4.0 architecture.
“Once you have a handful of sites up and running and you have proven the principle, building another 995 is rather easy,” says Robert Soukup, senior program manager at Deutsche Telekom, and another of the co-founders of the Access 4.0 programme.
Origins
The Access 4.0 programme emerged with the confluence of two developments: a detailed internal study of the costs involved in building networks and the advent of the Central Office Re-architected as a Datacentre (CORD) industry initiative.
Deutsche Telekom was scrutinising the costs involved in building its networks. “Not like removing screws here and there but looking at the end-to-end costs,” says Kolbe.
Separately, the operator took an interest in CORD that was, at the time, being overseen by ON.Labs.
At first, Kolbe thought CORD was an academic exercise but, on closer examination, he and his colleague, Thomas Haag, the chief architect and the final co-founder of Access 4.0, decided the activity needed to be investigated internally – in particular, to assess the feasibility of CORD, understand how bringing together cloud technologies with access hardware would work, and quantify the cost benefits.
“The first goal was to drive down cost in our future network,” says Kolbe. “And that was proven in the first month by a decent cost model. Then, building a prototype and looking into it, we found more [cost savings].”
Given the cost focus, the operator hadn’t considered the far-reaching changes involved with adopting white boxes and the disaggregation of software and hardware, nor the consequences of moving to a mainly software-based architecture in terms of how it could shorten the introduction of new services.
“I knew both these arguments were used when people started to build up Network Functions Virtualisation (NFV) but we didn’t have this in mind; it was a plain cost calculation,” says Kolbe. “Once we started doing it, however, we found both these things.”
Cost engineering
Deutsche Telekom says it has learnt a lot from the German automotive industry when it comes to cost engineering. In some companies, cost is part of the engineering process; in others, it is part of procurement.

“The issue is not talking to a vendor and asking for a five percent discount on what we want it to deliver,” says Soukup, adding that what the operator seeks is fair prices for everybody.
“Everyone needs to make a margin to stay in business but the margin needs to be fair,” says Soukup. “If we make with our customers a margin of ’X’, it is totally out of the blue that our vendors get a margin of ‘10X’.”
The operator’s goal with Access 4.0 has been to determine how best to deploy broadband internet access on a large scale and with carrier-grade quality. Access is an application suited to cost reduction since “the closer you come to the customer, the more capex [capital expenditure] you have to spend,” says Soukup, adding that since capex is always less than what you’d like, creativity is required.
“When you eat soup, you always grasp a spoon,” says Soukup. “But we asked ourselves: ‘Is a spoon the right thing to use?’”
Software and White Boxes
Access 4.0 uses two components from the Open Networking Foundation (ONF): Voltha and the Software Defined Networking (SDN) Enabled Broadband Access (SEBA) reference design.
Voltha provides a common control and management system for PON white boxes while making the PON network appear to the SDN controller above it as a programmable switch. “It abstracts away the [PON] optical line terminal (OLT) so we can treat it as a switch,” says Soukup.
SEBA supports a range of fixed broadband technologies that include GPON and XGS-PON. “SEBA 2.0 is a design we are using and are compliant,” says Soukup.
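The abstraction Soukup describes can be pictured with a small sketch. The classes and methods below are hypothetical and are not Voltha’s actual interfaces; they only show the idea of an OLT being presented to the SDN controller as an ordinary programmable switch.

```python
# Illustrative only: the abstraction idea behind Voltha, not its actual API.
# The class and method names here are hypothetical.
class PonOlt:
    """A PON optical line terminal with subscriber trees on its PON ports."""
    def __init__(self, olt_id: str, pon_ports: int):
        self.olt_id = olt_id
        self.pon_ports = pon_ports

class AbstractSwitch:
    """What the SDN controller sees: the OLT presented as a programmable
    switch, with PON-specific details hidden behind ordinary ports."""
    def __init__(self, olt: PonOlt):
        self.datapath_id = olt.olt_id
        # one logical switch port per PON port, plus an uplink
        self.ports = [f"pon-{i}" for i in range(olt.pon_ports)] + ["uplink-0"]

    def install_flow(self, in_port: str, vlan: int, out_port: str) -> dict:
        # The controller programs subscriber flows as if this were a switch.
        return {"in_port": in_port, "vlan": vlan, "out_port": out_port}
```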
“We are bringing our technology to geographically-distributed locations – central offices – very close to the customer,” says Kolbe. Some aspects are common with the cloud technology used in large data centres but there are also differences.
For example, virtualisation technologies such as Kubernetes are shared, whereas OpenStack, used in large data centres, is not needed for Access 4.0. In turn, a leaf-spine switching architecture is common, as is the use of SDN technology.
“One thing we have learned is that you can’t just take the big data centre technology and put it in distributed locations and try to run heavy-throughput access networks on them,” says Kolbe. “This is not going to work and it led us to the white box approach.”
The issue is that certain workloads cannot be tackled efficiently using x86-based server processors. An example is the Broadband Network Gateway (BNG). “You need to do significant enhancements to either run on the x86 or you offload it to a different type of hardware,” says Kolbe.
Deutsche Telekom started by running a commercial vendor’s BNG on servers. “In parallel, we did the cost calculation and it was horrible because of the throughput-per-Euro and the power-per-Euro,” says Kolbe. And this is where cost engineering comes in: looking at the system, the biggest cost driver was the servers.
“We looked at the design and in the data path there are three programmable ASICs,” says Kolbe. “And this is what we did; it is not a product yet but it is working in our lab and we have done trials.” The result is that the operator has created an opportunity for a white-box design.
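The kind of cost-engineering comparison Kolbe describes can be sketched as follows. All figures are hypothetical placeholders rather than Deutsche Telekom’s numbers; the point is the throughput-per-euro and watts-per-gigabit metrics that drove the white-box decision.

```python
# A back-of-the-envelope comparison of the kind described above; the figures
# are hypothetical placeholders, not Deutsche Telekom's numbers.
def cost_per_gbps(throughput_gbps: float, capex_eur: float) -> float:
    return capex_eur / throughput_gbps

def watts_per_gbps(throughput_gbps: float, power_w: float) -> float:
    return power_w / throughput_gbps

# x86 servers running a software BNG (hypothetical figures)
x86 = {"gbps": 100, "eur": 20_000, "watts": 900}
# white box built around a programmable switching ASIC (hypothetical figures)
asic = {"gbps": 3_200, "eur": 30_000, "watts": 600}

for name, box in (("x86 BNG", x86), ("ASIC white box", asic)):
    print(name,
          f"{cost_per_gbps(box['gbps'], box['eur']):.1f} EUR/Gbps,",
          f"{watts_per_gbps(box['gbps'], box['watts']):.2f} W/Gbps")
```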
There are also differences in the use of switching between large data centres and access. In large data centres, the switching supports the huge east-west traffic flows while in carrier networks, especially close to the edge, this is not required.

Instead, for Access 4.0, traffic from PON trees arrives at the OLT where it is aggregated by a chipset before being passed on to a top-of-rack switch where aggregation and packet processing occur.
The leaf-and-spine architecture can also be used to provide a ‘breakout’ to support edge-cloud services such as gaming and local services. “There is a traffic capability there but we currently don’t use it,” says Kolbe. “But we are thinking that in the future we will.”
Deutsche Telekom has been public about working with such companies as Reply, RtBrick and Broadcom. Reply is a key partner while RtBrick contributes a major element of the speciality domain BNG software.
Kolbe points out that there is no standard for using network processor chips: “They are all specific which is why we need a strong partnership with Broadcom and others and build a common abstraction layer.”
Deutsche Telekom also works closely with Intel, incumbent network vendors such as ADTRAN and original design manufacturers (ODMs) including EdgeCore Networks.
Challenges
About 80 percent of the design effort for Access 4.0 is software and this has been a major undertaking for Deutsche Telekom.
“The challenge is to get up to speed with software; that is not a thing that you just do,” says Kolbe. “We can’t just pretend we are all software engineers.”
Deutsche Telekom also says the new players it works with – the software specialists – also have to better understand telecom. “We need to meet in the middle,” says Kolbe.
Soukup adds that mastering software takes time – years rather than weeks or months – and this is only to be expected given the network transformation operators are undertaking.
But once achieved, operators can expect all the benefits of software – the ability to work in an agile manner, continuous integration/continuous delivery (CI/CD), and the more rapid introduction of services and ideas.
“This is what we have discovered besides cost-savings: becoming more agile and transforming an organisation which can have an idea and realise it in days or weeks,” says Soukup. The means are there, he says: “We have just copied them from the large-scale web-service providers.”
Status
The first Access 4.0 services will be FTTH delivered from a handful of central offices in Germany later this year. FTTB services will then follow in early 2021.
“Once we are out there and we have proven that it works and it is carrier-grade, then I think we are very fast in onboarding other things,” says Soukup. “But they are [for now] not part of our case.”
Deutsche Telekom’s edge for cloud gaming

Deutsche Telekom believes its network gives it an edge in the emerging game-streaming market.
The operator is trialling a cloud-based service similar to those being developed by Google and Microsoft.
The operator already offers IP TV and music as part of its entertainment offerings and will decide whether gaming becomes the third component. It plans to launch its MagentaGaming cloud-based service in 2020.
“Since 2017, the biggest market in entertainment is gaming,” says Dominik Lauf, project lead, MagentaGaming at Deutsche Telekom.
Market research firms vary in their estimates, but the global video gaming market was of the order of $138 billion in 2018, while the theatrical and home entertainment market totalled just under $100 billion for the same period.
Cloud Gaming
In Germany, half the population play video games with half of those being young adults. The gaming market represents a valuable opportunity to ‘renew the brand’ with a younger audience.
Until now, a user’s gaming experience has been determined by the video-processing capabilities of their gaming console or PC graphics card.
The advent of cloud-based gaming changes all that. Not only can a user access the latest game titles via the cloud, they no longer need to own state-of-the-art equipment for the ultimate gaming experience. Instead, the video processing is performed in the cloud. All the user needs is a display. Any display: a smartphone, tablet, PC or TV.
Lauf says hardcore gamers typically spend over €1,000 each year on equipment, while some 45 per cent of all gamers can’t play the latest games at the highest display quality because their hardware is not up to the task. “[With cloud gaming,] the entry barrier of hardware no longer exists for customers,” says Lauf.
However, for game-streaming to work, the onus is on the service provider to deploy hardware – dedicated servers hosting high-end graphics processing units (GPUs) – and ensure that the game-streaming traffic is delivered efficiently over the network.
Deutsche Telekom points out that while buffering is used for video or music streaming services, this isn’t an option with gaming given its real-time nature.
“Latency and bandwidth play a pivotal role within gaming,” says Lauf. “Connectivity counts here.”
Networking demands
Deutsche Telekom’s game-streaming service requires a 50 megabit-per-second (Mbps) broadband connection.
Gaming traffic requires between 30-40Mbps of capacity to ensure full graphics quality. This is over four times the bandwidth required for a video stream. “We can lower the bandwidth required [for gaming] but you will notice it when using a bigger screen,” says Lauf.
The operator is testing the bandwidth requirements its mobile network must deliver to ensure the required gaming quality.
“With 5G, the bandwidth is more or less there, but bandwidth is not the only point, maybe the more important topic is latency,” says Lauf. The operator has recently launched 5G in five cities in Germany.
An end-to-end latency of 50-80ms ensures a smooth gaming experience. A latency of 100ms decreases an individual’s game-play while a latency of 120ms noticeably impacts responsiveness.
Deutsche Telekom’s fixed network delivers a sub-50ms latency. However, the home environment must also be factored in: the home’s wireless network and signal coverage, as well as other electronic devices in the home, all can influence gaming performance.
And it is not just latency that counts but jitter: the volatility of the latency. “The average may be below 50ms but if there are peaks at 100ms, it will impact your gameplay,” says Lauf.
Moreover, the latency and jitter performance should ideally be consistent across the network; otherwise, it can give an unfair advantage to select users in multi-player games.
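The latency figures quoted above can be turned into a simple check. The thresholds come from the article; the sampling logic and the example trace are illustrative only.

```python
# A simple check against the latency thresholds quoted above; the sample
# trace is illustrative only.
from statistics import mean

def assess_gaming_link(latency_samples_ms: list[float]) -> str:
    avg = mean(latency_samples_ms)
    peak = max(latency_samples_ms)
    jitter = peak - min(latency_samples_ms)
    if peak >= 120:
        return f"avg {avg:.0f} ms, peaks {peak:.0f} ms: responsiveness noticeably impacted"
    if peak >= 100:
        return f"avg {avg:.0f} ms, peaks {peak:.0f} ms: gameplay degraded by jitter ({jitter:.0f} ms)"
    if avg <= 80:
        return f"avg {avg:.0f} ms, peaks {peak:.0f} ms: smooth experience"
    return f"avg {avg:.0f} ms: borderline"

# An average below 50 ms is not enough if spikes reach 100 ms:
print(assess_gaming_link([35, 40, 45, 38, 100, 36]))
```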
5G and edge computing
The MagentaGaming trial is also being used to test how 5G and edge computing – where the servers and GPUs are hosted at the network edge – can deliver a sufficiently low jitter.
5G will provide more bandwidth than the operator’s existing LTE mobile network. This will benefit not only individual players but also the scale of group gaming. At present, hundreds can play each other in a game but this number will grow, says Lauf.
5G will also enable new features, such as network slicing, that will help keep jitter low, says Lauf.
“‘Edge’ is a fuzzy term,” says Lauf. “But we will build our servers in a decentralised way to ensure latency does not affect gamers.”
MobiledgeX, a Deutsche Telekom spin-out that focuses on cloud infrastructure, operates four data centres in Germany and is also testing GPUs. However, for the test phase of MagentaGaming, Deutsche Telekom is deploying its own servers and GPUs at the network edge.
Lauf says the complete architecture must be designed with latency in mind: “There are a lot of components that can increase latency.” Not only the network but the GPU run times and the storage run times.
Deploying servers and GPUs at the network edge requires investment. And given that cloud gaming is still being trialled, it is too early to assess gaming’s business success.
So how does Deutsche Telekom justify investing in edge infrastructure and will the edge be used for other tasks as well as gaming?
“This is also a focus of our trial, to see when are the server peak times in terms of usage,” says Lauf. “There are capabilities for other use cases on the same GPUs.”
The operator is considering using the GPUs for artificial intelligence tasks.
Cloud-gaming competition
Microsoft and Google are also pursuing game-streaming services.
Microsoft is about to launch a preview of xCloud – its Xbox cloud-based service – and has been accepting registrations in certain countries.
Microsoft, too, recognises the importance of network latency and is working with operators such as SK Telecom in South Korea and Vodafone UK. It has also signed an agreement with T-Mobile, the US operator arm of Deutsche Telekom.
Meanwhile, Google is preparing its Stadia service which will launch next month.
Lauf believes Deutsche Telekom has an edge despite such hyperscaler competition.
“We are sure that with our high-quality network – our edge and 5G latency capabilities, and our last mile to our customer – we have an advantage compared to the hyperscalers given how latency and bandwidth count,” he says.
Gaming content also matters and the operator says it is in discussions with gaming developers that welcome the fact that there are alternatives to the hyperscalers’ platforms.
“We are quite sure we can play a role,” concludes Lauf. “Even if we are not on the same global level of a Google, we will have a right to play in this business.”
Game on!
Books in 2017: Part 2
Dave Welch, founder and chief strategy and technology officer at Infinera
One favourite book I read this year was Alexander Hamilton by Ron Chernow. Great history about the makings of the US government and financial systems as well as a great biography. Another is The Gene: An Intimate History by Siddhartha Mukherjee, a wonderful discussion about the science and history of genetics.
Yuriy Babenko, senior expert NGN, Deutsche Telekom
As part of my reading in 2017 I selected two technical books, one general life-philosophy title and one strategy book.
Today’s internet infrastructure design is hardly possible without what we refer to as the cloud. Cloud is a very general term but I really like the definition of NIST: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Cloud Native Infrastructure: Patterns for scalable infrastructure and applications in a dynamic environment by Kris Nova and Justin Garrison helps you understand the necessary characteristics of such cloud infrastructure and defines the capabilities of the service architecture that fits this model. Cloud-native architecture is not just about ‘lift and shift’ into the cloud; it is the redesign of your services focusing on cloud elasticity, scalability and security, as well as operational models including, but not limited to, infrastructure as code. If you have already heard about Kubernetes, Terraform and the Cloud Native Computing Foundation but want to understand how the various technologies and frameworks fit together, this is a great and easy read.
High Performance Browser Networking: What Every Web Developer Should Know About Networking and Web Performance by Ilya Grigorik provides a thorough look into the peculiarities of modern browser networking protocols, their foundations, and the methods and tools that help to optimise and increase the performance of internet sites.
Every serious business today has a web presence. Many services and processes are consumed through the browser, so a look behind the curtains of this infrastructure is informative and useful.
Probably one of the more interesting conclusions is that it is not always bandwidth that determines a site’s successful operation but rather the end-to-end latency. The book discusses HTTP, HTTP/2 and SPDY and will be of great interest to anyone who wants to refresh their knowledge of the history of the internet as well as to understand the peculiarities of performance optimisation of (big) internet sites.
Principles: Life and Work by Ray Dalio is probably one of the most discussed books of 2017. Mr Dalio is one of the most successful hedge-fund investors of our generation. In this book, he decided to share the main life and business principles that have guided his decisions during the course of his life. The main message which Dalio shares is not to copy or use his particular principles, although you are likely to adopt several of them, but to have your own.
One of Dalio’s key ideas is that everything works as a machine, so if you define the general rules of how the machine (i.e. life in general) works, it will be significantly easier to follow the ups and downs and apply clear thinking when faced with difficulties and challenges. He sums it up in an easy-to-comprehend approach: you try things out, log the problems you face along the way, reflect on what went well or badly, and formulate and write down the principles. In due course, you will end up with your own version of Principles. It sounds easy, but doing it is the key.
Edge Strategy: A New Mindset for Profitable Growth by Dan McKone and Alan Lewis is about the edges of a business: opportunities sitting comfortably in front of you and your business which can be ‘easily’ tackled and addressed.
Why would you go for a crazy new and risky business idea when there is a bunch of market opportunities just outside of the main door of your core business?
This sounds like “focus and expand” to me and makes a lot of sense. The authors identify three main “edges” which a business can address: product edge, journey edge and enterprise edge.
The book goes into detail about how product edge can be expanded (remember your shiny new iPhone leather case?); a firm can focus more on the complete customer journey (What are the jobs to be done? What problem is the customer really trying to solve? Airbnb service can be a great example); and finally leveraging the enterprise edge (like Amazon renting and selling unused server capacity via its AWS services).
Edge strategies are not new per se, but this book helps to formulate and structure the discussion in an understandable and comprehensive framework.
Books in 2015 - Part 2
Yuriy Babenko, senior network architect, Deutsche Telekom
The books I particularly enjoyed in 2015 dealt with creativity, strategy, and social and organisational development.
People working in IT are often left-brained; we try to make our decisions rationally, verifying hypotheses and building scenarios and strategies. An alternative that challenges this status quo and looks at issues from a different perspective is Thinkertoys by Michael Michalko.
Thinkertoys develops creativity using helpful tools and techniques that show problems in a different light, helping a person stumble unexpectedly on a better solution.
Some of the methods are well known, such as mind-mapping and "what if" techniques, but there are also intriguing new approaches. One of my favourites this year, dubbed Clever Trevor, is that specialisation limits our options, whereas many breakthrough ideas come from non-experts in a particular field. It is thus essential to talk to people outside your field and bounce ideas off them. It leads to the surprising realisation that many problems are common across fields.
The book offers a range of practical exercises, so grab them and apply them.
I found From Third World to First: The Singapore Story - 1965-2000 by Lee Kuan Yew, the founder of modern Singapore, inspiring.
Over 700 pages, Mr. Lee describes the country’s journey to ‘create a First World oasis in a Third World region’. He never tired of learning, benchmarking and optimising. The book offers perspectives on how to stay confident no matter what happens and how to focus on and execute the set strategy; the importance of reputation and established ties; and fact-based reasoning and argumentation.
Lessons can be drawn here for either organisational development or business development in general. You need to know your strengths, focus on them, not rush and become world class in them. To me, there is a direct link to a resource-based approach, or strategic capability analysis here.
The massive Strategy: A History by Lawrence Freedman promises to be the reference book on strategy, strategic history and strategic thinking.
Starting with the origins of strategy including sources such as The Bible, the Greeks and Sun Tzu, the author covers systematically, and with a distinct English touch, the development of strategic thinking. There are no mathematics or decision matrices here, but one is offered comprehensive coverage of relevant authors, thinkers and methods in a historical context.
Thus, for instance, Chapter 30 (yes, there are a lot of chapters) offers an account of the main thinkers of strategic management of the 20th century including Peter Drucker, Kenneth Andrews, Igor Ansoff and Henry Mintzberg.
The book offers a reference for any strategy-related question, in both personal and business life, with at least 100 pages of annotated, detailed footnotes. I will keep this book alive on my table for months to come.
The last book to highlight is Continuous Delivery by Jez Humble and David Farley.
The book is a complete resource for software delivery in a continuous fashion. Describing the whole lifecycle from initial development, prototyping, testing and finally releasing and operations, the book is a helpful reference in understanding how companies as diverse as Facebook, Google, Netflix, Tesla or Etsy develop and deliver software.
With roots in the Toyota Production System, continuous delivery emphasises the empowerment of small teams, the creation of feedback processes, continuous practice, and the highest levels of automation and repeatability.
Perhaps the most important recommendation is that for a product to be successful, ‘the team succeeds or fails’. Given the levels of ever-rising complexity and specialisation, the recommendation should be taken seriously.
Roy Rubenstein, Gazettabyte
I asked an academic friend to suggest a textbook that he recommends to his students on a subject of interest. Students don’t really read textbooks anymore, he said, they get most of their information from the Internet.
How can this be? Textbooks are the go-to resource to uncover a new topic. But then I was at university before the age of the Internet. His comment also made me wonder if I could do better finding information online.
Two textbooks I got in 2015 concerned silicon photonics. The first, entitled Handbook of Silicon Photonics, provides a comprehensive survey of the subject from noted academics involved in this emerging technology. At 800-plus pages, the volume packs in a huge amount of detail. My one complaint with such compilation books is that they tend to promote the work and viewpoints of the contributors. That said, the editors Laurent Vivien and Lorenzo Pavesi have done a good job and, while the chapters are specialist, effort is made to retain the reader.
The second silicon photonics book I’d recommend, especially for someone interested in circuit design, is Silicon Photonics Design: From Devices to Systems by Lukas Chrostowski and Michael Hochberg. The book looks at the design and modelling of the key silicon photonics building blocks and assumes the reader is familiar with Matlab and EDA tools. More emphasis is given to the building blocks than to systems, but the book is important for two reasons: it is neither a textbook nor a compendium of the latest research, and it is written to get engineers designing. [1]
I also got round to reading a reflective essay by Robert W. Lucky included in a special 100th anniversary edition of the Proceedings of the IEEE magazine, published in 2012. Lucky started his career as an electrical engineer at Bell Labs in 1961. In his piece he talks about the idea of exponential progress and cites Moore’s law. “When I look back on my frame of reference in 1962, I realise that I had no concept of the inevitability of constant change,” he says.
1962 was fertile with potential. Can we say the same about technology today? Lucky doesn’t think so but accepts that maybe such fertility is only evident in retrospect: “We took the low-hanging fruit. I have no idea what is growing further up the tree.”
A common theme of some of the books I read in the last year is storytelling.
I read journalist Barry Newman’s book News to Me: Finding and Writing Colorful Feature Stories that gives advice on writing. Newman has been writing colour pieces for the Wall Street Journal for over four decades: “I’m a machine operator. I bang keys to make words.”
I also recommend Storytelling with Data: A Data Visualization Guide for Business Professionals by Cole Nussbaumer Knaflic about how best to present one’s data.
I discovered Abigail Thomas’s memoirs A Three Dog Life: A Memoir and What Comes Next and How to Like It. She writes beautifully and a chapter of hers may only be a paragraph. Storytelling need not be long.
Three other books I hugely enjoyed were Atul Gawande's Being Mortal: Medicine and What Matters in the End, Roger Cohen’s The Girl from Human Street: A Jewish Family Odyssey and the late Oliver Sacks’ autobiography On the Move: A Life. Sacks was a compulsive writer and made sure he was never far away from a notebook and pen, even when going swimming. A great habit to embrace.
Lastly, if I had to choose one book - a profound work and a book of our age - it is One of Us: The Story of Anders Breivik and the Massacre in Norway by Asne Seierstad.
For Books in 2015 - Part 1, click here
Further Information
[1] There is an online course that includes silicon photonics design, fabrication and data analysis and which uses the book. For details, click here
Operators want to cut power by a fifth by 2020
Part 2: Operators’ power efficiency strategies
Service providers have set themselves ambitious targets to reduce their energy consumption by a fifth by 2020. The power reduction will coincide with an expected thirty-fold increase in traffic in that period. Given the cost of electricity and operators’ requirements, such targets are not surprising: KPN, with its 12,000 sites in The Netherlands, consumes 1% of the country’s electricity.

“We also have to invest in capital expenditure for a big swap of equipment – in mobile and DSLAMs"
Philippe Tuzzolino, France Telecom-Orange
Operators stress that power consumption concerns are not new, but Marga Blom, manager of the energy management group at KPN, highlights that the issue has become pressing due to steep rises in electricity prices. “It is becoming a significant part of our operational expense,” she says.
"We are getting dedicated and allocated funds specifically for energy efficiency,” adds John Schinter, AT&T’s director of energy. “In the past, energy didn’t play anywhere near the role it does today.”
Power reduction strategies
Service providers are adopting several approaches to reduce their power requirements.
Upgrading their equipment is one. Newer platforms are denser with higher-speed interfaces while also supporting existing technologies more efficiently. Verizon, for example, has deployed 100 Gigabit-per-second (Gbps) interfaces for optical transport and for its IT systems in Europe. The 100Gbps systems are no larger than existing 10Gbps and 40Gbps platforms and while the higher-speed interfaces consume more power, overall power-per-bit is reduced.
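A simple illustration of the power-per-bit argument, using hypothetical per-port wattages rather than vendor figures:

```python
# Illustrative only: why a faster interface can cut power-per-bit even though
# it draws more power in absolute terms. Wattages are hypothetical.
interfaces = {"10G": 50.0, "40G": 120.0, "100G": 200.0}  # watts per port (hypothetical)

for speed, watts in interfaces.items():
    gbps = float(speed.rstrip("G"))
    print(f"{speed}: {watts:.0f} W total, {watts / gbps:.1f} W per Gbps")
```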
“There is a business case based on total cost of ownership for migrating to newer platforms.”
Marga Blom, KPN
Reducing the number of facilities is another approach. BT and Deutsche Telekom are reducing significantly the number of local exchanges they operate. France Telecom is consolidating a dozen data centres in France and Poland to two, filling both with new, more energy-efficient equipment. Such an initiative improves the power usage effectiveness (PUE), an important data centre efficiency measure, halving the energy consumption associated with France Telecom’s data centres’ cooling systems.
“PUE started with data centres but it is relevant in the future central office world,” says Brian Trosper, vice president of global network facilities/ data centers at Verizon. “As you look at the evolution of cloud-based services and virtualisation of applications, you are going to see a blurring of data centres and central offices as they interoperate to provide the service.”
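PUE is the ratio of total facility energy to the energy consumed by the IT equipment itself. A worked example follows, with hypothetical loads chosen only to show the effect of halving the cooling energy, as in the consolidation described above.

```python
# A worked example of the power usage effectiveness (PUE) metric:
# PUE = total facility energy / IT equipment energy.
# The kW figures are hypothetical, chosen to show the effect of halving cooling.
def pue(it_load_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    total = it_load_kw + cooling_kw + other_overhead_kw
    return total / it_load_kw

before = pue(it_load_kw=1000, cooling_kw=800, other_overhead_kw=100)
after = pue(it_load_kw=1000, cooling_kw=400, other_overhead_kw=100)  # cooling halved
print(f"PUE before: {before:.2f}, after: {after:.2f}")
```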
Belgacom plans to upgrade its mobile infrastructure with 20% more energy-efficient equipment over the next two years as it seeks a 25% network energy efficiency improvement by 2020. France Telecom is committed to a 15% reduction in its global energy consumption by 2020 compared to the level in 2006. Meanwhile KPN has almost halted growth in its energy demands with network upgrades despite strong growth in traffic, and by 2012 it expects to start reducing demand. KPN’s target by 2020 is to reduce energy consumption by 20 percent compared to its network demands of 2005.
Fewer buildings, better cooling
Philippe Tuzzolino, environment director for France Telecom-Orange, says energy consumption is rising in its core network and data centres due to ever-increasing traffic and data usage, but that power is being reduced at sites using such techniques as server virtualisation, free-air cooling, and increasing the operating temperature of equipment. “We employ natural ventilation to reduce the energy costs of cooling,” says Tuzzolino.
“Everything we do is going to be energy efficient.”
Brian Trosper, Verizon
Verizon uses techniques such as alternating ‘hot’ and ‘cold’ aisles of equipment and real-time smart-building sensing to tackle cooling. “The building senses the environment, where cooling is needed and where it is not, ensuring that the cooling systems are running as efficiently as possible,” says Trosper.
Verizon also points to vendor improvements in back-up power supply equipment such as DC power rectifiers and uninterruptible power supplies. Such equipment, which is always on, has traditionally been 50% efficient. “If they are losing 50% power before they feed an IP router that is clearly very inefficient,” says Chris Kimm, Verizon's vice president, network field operations, EMEA and Asia-Pacific. Now manufacturers have raised the efficiencies of such power equipment to 90-95%.
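The arithmetic behind those efficiency figures is straightforward; the load below is a hypothetical example.

```python
# Simple arithmetic behind the efficiency figures quoted above.
def input_power_needed(load_kw: float, efficiency: float) -> float:
    """Facility power drawn to deliver `load_kw` to the equipment."""
    return load_kw / efficiency

load = 100.0  # kW delivered to IP routers (hypothetical load)
for eff in (0.50, 0.90, 0.95):
    drawn = input_power_needed(load, eff)
    print(f"{eff:.0%} efficient: draws {drawn:.0f} kW, wastes {drawn - load:.0f} kW")
```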
France Telecom forecasts that its data centre and site energy saving measures will only work till 2013 with power consumption then rising again. “We also have to invest in capital expenditure for a big swap of equipment – in mobile and DSLAMs [access equipment],” says Tuzzolino.
Newer platforms support advanced networking technologies and more traffic while supporting existing technologies more efficiently. This allows operators to move their customers onto the newer platforms and decommission the older power-hungry kit.
“Technology is changing so rapidly that there is always a balance between installing new, more energy efficient equipment and the effort to reduce the huge energy footprint of existing operations”
John Schinter, AT&T
Operators also use networking strategies to achieve efficiencies. Verizon is deploying a mix of equipment in its global private IP network used by enterprise customers. It is deploying optical platforms in new markets to connect to local Ethernet service providers. “We ride their Ethernet clouds to our customers in one market, whereas layer 3 IP routing may be used in an adjacent, next most-upstream major market,” says Kimm. The benefit of the mixed approach is greater efficiencies, he says: “Fewer devices to deploy, less complicated deployments, less capital and ultimately less power to run them.”
Verizon is also reducing the real estate it uses as it retires older equipment. “One trend we are seeing is more, relatively empty-looking facilities,” says Kimm. It is no longer facilities crammed with equipment that is the problem, he says; rather, what constrains sites are their power and cooling capacity requirements.
“You have to look at the full picture end-to-end,” says Trosper. “Everything we do is going to be energy efficient.” That includes the system vendors and the energy-saving targets Verizon demands of them, how it designs its network, the facilities where the equipment resides and how they are operated and maintained, he says.
Meanwhile, France Telecom says it is working with 19 operators – including Vodafone, Telefonica, BT, DT, China Telecom and Verizon – as well as organisations such as the ITU and ETSI to define standards for DSLAMs and base stations to help operators meet their energy targets.
Tuzzolino stresses that France Telecom’s capital expenditure will depend on how energy costs evolve. Energy prices will dictate when France Telecom will need to invest in equipment, and the degree, to deliver the required return on investment.
The operator has defined capital expenditure spending scenarios - from a partial to a complete equipment swap from 2015 - depending on future energy costs. New services will clearly dictate operators’ equipment deployment plans but energy costs will influence the pace.
“If they [DC power rectifiers and UPSs] are losing 50% power before they feed an IP router that is clearly very inefficient”
Chris Kimm, Verizon.
Justifying capital expenditure based on energy and hence operational expense savings is now ‘part of the discussion’, says KPN’s Blom: “There is a business case based on total cost of ownership for migrating to newer platforms.”
Challenges
But if operators are generally pleased with the progress they are making, challenges remain.
“Technology is changing so rapidly that there is always a balance between installing new, more energy efficient equipment and the effort to reduce the huge energy footprint of existing operations,” says AT&T’s Schinter.
“The big challenge for us is to plan the capital expenditure effort such that we achieve the return-on-investment based on anticipated energy costs,” says Tuzzolino.
Another aspect is regulation, says Tuzzolino. The EC is considering how ICT can contribute to reducing the energy demands of other industries, he says. “We have to plan to reduce energy consumption because ICT will increasingly be used in [other sectors like] transport and smart grids.”
Verizon highlights the challenge of successfully managing large-scale equipment substitution and other changes that bring benefits while serving existing customers. “You have to keep your focus in the right place,” says Kimm.
Part 1: Standards and best practices





