Is network traffic growth dwindling to a trickle?

“Network capacities are sufficient, and with data usage expected to plateau in the coming years, further capacity expansion is not needed. We have reached the end of history for communications.”

– William Webb, The End of Telecoms History


William Webb has pedigree when it comes to foreseeing telecoms trends.

Webb wrote The 5G Myth in 2016, warning that 5G would be a flop.

In the book, he argued that the wireless standard’s features would create limited interest and fail to grow revenues for mobile operators.

The next seven years saw the telcos promoting 5G and its capabilities. Now, they admit their considerable investments in 5G have delivered underwhelming returns.

His latest book, The End of Telecoms History, argues that telecoms has matured to the point where link speeds already satisfy users’ needs, and that traffic growth is slowing.

“There will be no end of new applications,” says Webb. “But they won’t result in material growth in data requirements or in data speeds.”

What then remains for the telcos is filling in the gaps to provide connectivity everywhere.

Traffic growth slowdown

Earlier this year, AT&T’s CEO, John Stankey, mentioned that its traffic had grown 30 per cent year over year, the third consecutive year of such growth for the telco. The 30 per cent annual figure is the typical traffic growth rate that has been reported for years.

“My take is that we are at about 20 per cent a year annual growth rate worldwide, and it’s falling consistently by about 5 per cent a year,” says Webb.

In 2022, yearly traffic growth was 30 per cent; last year, it was 25 per cent. These are the average growth rates, notes Webb, and there are enormous differences worldwide.

“I was just looking at some data and Greece grew 45 per cent whereas Bahrain declined 10 per cent,” says Webb. “Clearly, there will be big differences between operators.”

He also cites mobile data growth numbers from systems vendor Ericsson. In North America, the growth between 2022 and 2024 was 24 per cent, 17 per cent, and 26 per cent.

“So it is fluctuating around the 20 per cent mark,” says Webb.
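Webb’s figures imply a simple compound projection: growth starting at around 20 per cent a year, with the rate itself falling by roughly five percentage points annually, means traffic keeps rising but the curve flattens within a few years. A minimal sketch of that arithmetic (the starting rate and annual decline are Webb’s figures; flooring growth at zero is an assumption):

```python
def project_traffic(base=100.0, growth=0.20, decline=0.05, years=6):
    """Project an indexed traffic level when the annual growth rate
    itself falls by a fixed number of percentage points each year."""
    levels = [base]
    for _ in range(years):
        base *= 1 + max(growth, 0.0)  # assume growth bottoms out at zero
        levels.append(round(base, 1))
        growth -= decline             # rate falls ~5 points per year
    return levels

# Index starts at 100; growth of 20%, 15%, 10%, 5%, then flat
print(project_traffic())
# -> [100.0, 120.0, 138.0, 151.8, 159.4, 159.4, 159.4]
```

On these numbers, traffic rises by only about 60 per cent in total before plateauing, a far cry from the doubling every two to three years that 30 per cent annual growth implies.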

Other developments 

What about trends like the ever-greater use of digital technologies experienced by many industries, including telecoms? Or the advent of artificial intelligence (AI), which is leading to significant data centre builds, and how AI is expected to change traffic?

“If you look at all non-personal data use, such as the Internet of Things and so on, traffic levels are tiny,” says Webb. There are exceptions, such as security cameras generating video streams. “I don’t see that trend materially changing overall data rates,” says Webb.

He also doesn’t see AI meaningfully growing overall traffic. AI is useful for improving the running of networks but not changing the amount of wireless traffic. “If anything, it might reduce it because you can be more intelligent about what you need to send,” he says.

While Webb admits that AI data centre builds will require extra fixed networking capacity, as will sharing workloads over distributed data centres in a metropolitan area, he argues that this represents a tiny part of the overall network.

He does not see any new devices emerging that will replace the smartphone, dramatically changing how we consume and interact with data.

5G and 6G

Webb also has doubts about the emerging 6G wireless standard. The academic community is busy developing new capabilities for the next wireless standard. “The problem with that is that academics are generally not grounded in the reality of what will make money in the future,” says Webb. Instead, developers should challenge academics to develop the technologies needed for their applications to succeed.

Webb sees two 6G camps emerging. The first camp wants 6G to address all the shortfalls of 5G using terahertz frequencies and delivering hundreds of gigabits speeds.

“Let’s max out on everything, and then surely, something wonderful must happen,” says Webb. “This strikes me as not learning the lessons of 5G.”

The second camp, including several telcos, does not want to spend any money on 6G but instead wants the technology, in the form of software updates, to address high operational costs and the difficulties in running different network types.

“In this case, 6G improves the operator’s economics rather than improving the end-user offering, which I think makes sense,” says Webb.

“We may end up in a situation where 6G has all this wondrous stuff, and the operators turn around and say they are not interested,” says Webb. “I see a significant risk for 6G, that it just isn’t ever really deployed anywhere.”

Webb’s career in telecoms spans 35 years. His PhD addressed modulation schemes for radio communications. He spent seven years at the UK regulator Ofcom addressing radio spectrum strategy, and he has also been President of the IET, the UK’s equivalent of the IEEE. Webb also co-founded an IoT startup that Huawei bought. For the last 15 years, he has been a consultant covering telecom strategy and technology.

Outlook

The dwindling growth in traffic will impact the telecom industry.

Webb believes the telcos’ revenues will remain flat, resulting in somewhat profitable businesses. “They’re making more profit than utilities but less than technology companies,” says Webb.

He also expects there will be more mergers, an obvious reaction to a market flattening out. The aim is to improve profitability.

Given his regulatory background, does he think that likely? Regulators shun consolidation, wanting to keep competition high. Webb expects consolidation to happen indirectly, with telcos increasingly sharing networks: each market will offer consumers three or four brands but fewer networks; operators merging in all but name.

Will there even be a need for telecom consultants? “I have to say, as I’ve made these predictions, I’ve been thinking: what am I needed for now?” says Webb, laughing.

If he is right, the industry will be going through a period of change.

But if the focus becomes extending connectivity everywhere, there is work to be done in understanding and addressing the regulatory considerations, and also how best to transition the industry.

“I do suspect that just as the rest of the industry is effectively more a utility, it will need fewer and fewer consultants,” he says.


Open ROADM gets deployed as work starts on Release 6.0

Source: Open ROADM MSA

AT&T has deployed Open ROADM technology in its network and says all future reconfigurable optical add-drop multiplexer (ROADM) deployments will be based on the standard.

“At this point, it is in a single metro and we are working on a second large metro area,” says John Paggi, assistant vice president member of technical staff, network infrastructure and services at AT&T.

 

Open ROADM listed as a requirement in RFPs (Request For Proposals) from many other service providers

The diagram shows the various elements of the disaggregated Open ROADM MSA, together with the hierarchical SDN controller architecture: federated controllers oversee the optical layer, while the multi-layer controller oversees path creation across the layers, from IP to optical. Source: Open ROADM MSA

Meanwhile, the Open ROADM multi-source agreement (MSA) continues to progress, with members working on Release 6.0 of the standard.

Motivation 

AT&T is a founding member of the Open ROADM MSA along with system vendors Ciena, Fujitsu and Nokia. The organisation has since grown to 23 members, 13 of which operate networks. Besides AT&T, the communications service providers include Deutsche Telekom, Orange, KDDI, SK Telecom and Telecom Italia.

The initiative was created to promote a disaggregated ROADM standard that enables interoperability between vendors’ ROADMs.

The specification work includes the development of open interfaces to control the ROADMs using software-defined networking (SDN) technology. The scope of the disaggregated design has also been expanded beyond ROADMs to include optical transceivers, OTN switching to handle sub-wavelength traffic, and optical amplifiers.

AT&T viewed the MSA as a way to change the traditional model of assigning two ROADM system vendors for each of its metro regions.

“We had two suppliers to keep each other honest,” says Paggi. “But once we had committed a region to a supplier, we were more or less beholden to that supplier for additional ROADM and transponder purchases.”

AT&T wanted true ‘hyper-competition’ among ROADM and transponder suppliers, and the Open ROADM MSA was the result.

The operator saw the MSA as a way to reduce costs and speed up innovation by using an open networking model. Opening up and standardising the design would also allow innovative start-up vendors to participate. With the traditional supply model, an operator would favour larger firms knowing it would be dependent on the suppliers for 5-10 years.

“Because you can mix and match different suppliers, Open ROADM allows us to introduce disrupters to our environment,” says Paggi.

Evolution

The first Open ROADM revision used 100-gigabit wavelengths and a 50GHz fixed grid. A flexible grid and in-line amplification that extended the reach of 100-gigabit wavelengths to 1,000km were then added with Revision 2.

“In Revision 3 we made Open ROADM applicable to more use cases,” says Martin Birk, director member of technical staff, network infrastructure and services, AT&T. “We started introducing things like OTUCn and FlexO in preparation for 400 gigabits.” The ‘OTN Beyond 100 gigabit’ OTUCn format comprises ‘n’ multiples of 100-gigabit OTUC frames, while FlexO refers to the Flexible OTN format.

Adopting OTN technologies is part of enabling Open ROADM to support 200-, 300- and 400-gigabit wavelengths.
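The relationship is simple arithmetic: OTUCn bonds ‘n’ 100-gigabit OTUC slices, so a 400-gigabit wavelength carries an OTUC4 signal. A hedged sketch (slice rate rounded to a nominal 100 Gbit/s for illustration; the exact OTUC frame rate is slightly higher):

```python
import math

def otucn_n(line_rate_gbps):
    """Return 'n' in OTUCn: the number of nominal 100G OTUC slices
    needed to carry a given wavelength rate."""
    return math.ceil(line_rate_gbps / 100)

for rate in (200, 300, 400):
    print(f"{rate}G wavelength -> OTUC{otucn_n(rate)}")
```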

Revision 4 then added ODUFlex, 400-gigabit clients, and support for low-noise amplifiers to further extend reach, while the latest fifth revision adds streaming telemetry for network monitoring using work from the OpenConfig industry group.

“A lot of features that widen considerably the application of Open ROADM,” says Birk.

Revision 6.0

Open ROADM releases initially came once a year, but the scope of each revision has since been curtailed to enable two releases a year. Members are polled on what new features are required at the start of each standardisation process.

Now, the MSA members are working on Revision 6.0, which covers ‘all directions’ of the standard.

“We are improving the control plane interoperability with more features,” says Birk. “Right now you have a single network view; in future, you could have an idealised network plan and a network view with actual failures, and you could provision services across these network views.”

And with the advent of 600-gigabit, 800-gigabit and even 1.2-terabit coherent wavelengths, Open ROADM members may add support for speeds faster than 400 gigabits.

“Just as our suppliers continue to evolve their roadmaps, so does the Open ROADM MSA to stay relevant,” says Birk.

AT&T’s Open ROADM deployments support 100-gigabit wavelengths while the 400-gigabit technology is still in development.

“The ROADMs will not change; the only thing that will change is the software,” says Birk. “And in a disaggregated design, you can leave the ROADMs on version 2.0 and upgrade the transponders to 400 gigabits and version 5.0.”

This, says Birk, is why it is much easier to introduce new technology with an open design compared to monolithic platforms where an upgrade involves all the element management systems, ROADMs and transponders.

Status

The Open ROADM MSA says it is up to individual network operator members to declare the status of their Open ROADM network deployments. Accordingly, the status of overall Open ROADM deployments is unclear.

What AT&T will say is that it is being approached by vendors that want to demonstrate their Open ROADM technology to the operator.

“When we ask them why they have done this without any agreement that AT&T would purchase their solutions, they respond that they are seeing Open ROADM listed as a requirement in RFPs (Request For Proposals) from many other service providers,” says Paggi. “They have taken it upon themselves to develop Open ROADM-compliant products.”

At the OFC show earlier this year, an Open ROADM MSA demonstration showcased an SDN controller turning up a wavelength to send virtual machines between two data centres. The SDN controller then terminated the optical connection on completion of the transfer.

Operators AT&T and Orange were part of the demonstration, as was the University of Texas at Dallas. “They [the University of Texas] are a supercomputing centre and they can create some nice applications on top of Open ROADM,” says Birk.

The system vendors involved in the OFC demonstration included Ciena, Fujitsu, ECI Telecom, Infinera and Juniper Networks.

 


Will white boxes predominate in telecom networks?

Will future operator networks be built using software, servers and white boxes or will traditional systems vendors with years of network integration and differentiation expertise continue to be needed? 

 

AT&T’s announcement that it will deploy 60,000 white boxes as part of its rollout of 5G in the U.S. is a clear move to break away from the operator pack.

The service provider has long championed network transformation, moving from proprietary hardware and software to a software-controlled network based on virtual network functions running on servers and software-defined networking (SDN) for the control switches and routers.

Now, AT&T is going a stage further by embracing open hardware platforms - white boxes - to replace traditional telecom hardware used for data-path tasks that are beyond the capabilities of software on servers.

For the 5G deployment, AT&T will, over several years, replace traditional routers at cell and tower sites with white boxes, built using open standards and merchant silicon.   

“White box represents a radical realignment of the traditional service provider model,” says Andre Fuetsch, chief technology officer and president, AT&T Labs. “We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors.”

But other operators have reservations about white boxes. “We are all for open source and open [platforms],” says Glenn Wellbrock, director, optical transport network - architecture, design and planning at Verizon. “But it can’t just be open, it has to be open and standardised.”

Wellbrock also highlights the challenge of managing networks built using white boxes from multiple vendors. Who will be responsible for their integration or if a fault occurs? These are concerns SK Telecom has expressed regarding the virtualisation of the radio access network (RAN), as reported by Light Reading.

“These are the things we need to resolve in order to make this valuable to the industry,” says Wellbrock. “And if we don’t, why are we spending so much time and effort on this?”

Gilles Garcia, communications business lead director at programmable device company, Xilinx, says the systems vendors and operators he talks to still seek functionalities that today’s white boxes cannot deliver. “That’s because there are no off-the-shelf chips doing it all,” says Garcia. 

 

We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors

 

White boxes

AT&T defines a white box as an open hardware platform that is not made by an original equipment manufacturer (OEM).

A white box is a sparse design, built using commercial off-the-shelf hardware and merchant silicon, typically a fast router or switch chip, on which runs an operating system. The platform usually takes the form of a pizza box which can be stacked for scaling, while application programming interfaces (APIs) are used for software to control and manage the platform.

As AT&T’s Fuetsch explains, white boxes deliver several advantages. By using open hardware specifications for white boxes, they can be made by a wider community of manufacturers, shortening hardware design cycles. And using open-source software to run on such platforms ensures rapid software upgrades.

Disaggregation can also be part of an open hardware design. Here, different elements are combined to build the system. The elements may come from a single vendor such that the platform allows the operator to mix and match the functions needed. But the full potential of disaggregation comes from an open system that can be built using elements from different vendors. This promises cost reductions but requires integration, and operators do not want the responsibility and cost of both integrating the elements to build an open system and integrating the many systems from various vendors.   

Meanwhile, in AT&T’s case, it plans to orchestrate its white boxes using the Open Network Automation Platform (ONAP) - the ‘operating system’ for its entire network, made up of millions of lines of code.

ONAP is an open software initiative, managed by The Linux Foundation, that was created by merging a large portion of AT&T’s original ECOMP software developed to power its software-defined network and the OPEN-Orchestrator (OPEN-O) project, set up by several companies including China Mobile and China Telecom.   

AT&T has also launched several initiatives to spur white-box adoption. One is an open operating system for white boxes, known as the disaggregated network operating system (dNOS). This too will be passed to The Linux Foundation.

The operator is also a key driver of the open reconfigurable optical add/drop multiplexer multi-source agreement, the Open ROADM MSA. Recently, the operator announced it will roll out Open ROADM hardware across its network. AT&T has also unveiled the Akraino open source project, again under the auspices of The Linux Foundation, to develop edge computing-based infrastructure.

At the recent OFC show, AT&T said it would limit its white box deployments in 2018 as issues are still to be resolved but that come 2019, white boxes will form its main platform deployments.

Xilinx highlights how certain data-intensive tasks - in-line security performed on a per-flow basis, routing exceptions, telemetry data, and deep packet inspection - are beyond the capabilities of white boxes. “White boxes will have their place in the network but there will be a requirement, somewhere else in the network, for something else to do what the white boxes are missing,” says Garcia.

 

Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction

 

AT&T also said at OFC that it expects considerable capital expenditure savings - as much as a halving - using white boxes, and talked about adopting quarterly reverse auctions in future to buy its equipment.

Niall Robinson, vice president, global business development at ADVA Optical Networking, questions where such cost savings will come from: “Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction.” He also says that there are markets that already use reverse auctioning, but typically it is for items such as components. “For a carrier the size of AT&T to be talking about that, that is a big shift,” says Robinson.

 

Layer optimisation

Verizon’s Wellbrock first aired reservations about open hardware at Lightwave’s Open Optical Conference last November.

In his talk, Wellbrock detailed the complexity of Verizon’s wide area network (WAN) that encompasses several network layers. At layer-0 are the optical line systems - terminal and transmission equipment - onto which the various layers are added: layer-1 Optical Transport Network (OTN), layer-2 Ethernet and layer-2.5 Multiprotocol Label Switching (MPLS). According to Verizon, the WAN takes years to design and a decade to fully exploit the fibre.

“You get a significant saving - total cost of ownership - from combining the layers,” says Wellbrock. “By collapsing those functions into one platform, there is a very real saving.” But there is a tradeoff: encapsulating the various layers’ functions into one box makes it more complex.

“The way to get round that complexity is going to a Cisco, a Ciena, or a Fujitsu and saying: ‘Please help us with this problem’,” says Wellbrock. “We will buy all these individual piece-parts from you but you have got to help us build this very complex, dynamic network and make it work for a decade.”

 

Next-generation metro

Verizon has over 4,000 nodes in its network, each one deploying at least one ROADM - a Coriant 7100 packet optical transport system or a Fujitsu Flashwave 9500. Certain nodes employ more than one ROADM; once one is filled, a second is added.

“Verizon was the first to take advantage of ROADMs and we have grown that network to a very large scale,” says Wellbrock.

The operator is now upgrading the nodes using more sophisticated ROADMs as part of its next-generation metro. Each node will now need only one ROADM that can be scaled. In 2017, Verizon started the ramp, upgrading several hundred ROADM nodes; this year it says it will hit its stride before completing the upgrades in 2019.

“We need a lot of automation and software control to hide the complexity of what we have built,” says Wellbrock. This is part of Verizon’s own network transformation project. Instead of engineers and operational groups being in charge of particular network layers and overseeing pockets of the network - each pocket being a ‘domain’ - Verizon is moving to a system where all the network layers, including ROADMs, are managed and orchestrated using a single system.

The resulting software-defined network comprises a ‘domain controller’ that handles the lower layers within a domain and an automation system that co-ordinates between domains.

“Going forward, all of the network will be dynamic and in order to take advantage of that, we have to have analytics and automation,” says Wellbrock.

 

In this new world, there are lots of right answers and you have to figure out what the best one is

 

Open design is an important element here, he says, but the bigger return comes from analytics and automation of the layers and from the equipment.

This is why Wellbrock questions what white boxes will bring: “What are we getting that is brand new? What are we doing that we can’t do today?”

He points out that the building blocks for ROADMs - the wavelength-selective switches and multicast switches - originate from the same sub-system vendors, such that the cost points are the same whether a white box or a system vendor’s platform is used. And using white boxes does nothing to make the growing network complexity go away, he says.

“Mixing your suppliers may avoid vendor lock-in,” says Wellbrock. “But what we are saying is vendor lock-in is not as serious as managing the complexity of these intelligent networks.”

Wellbrock admits that network transformation, with its use of analytics and orchestration, poses new challenges. “I loved the old world - it was physics and therefore there was a wrong and a right answer; hardware, physics and fibre, and you can work towards the right answer,” he says. “In this new world, there are lots of right answers and you have to figure out what the best one is.”

 

Evolution

If white boxes can’t perform all the data-intensive tasks, then they will have to be performed elsewhere. This could take the form of accelerator cards for servers using devices such as Xilinx’s FPGAs.

Adding such functionality to the white box, however, is not straightforward. “This is the dichotomy the white box designers are struggling to address,” says Garcia. A white box is light and simple, so adding extra functionality requires customisation of its operating system to run these applications. And this runs counter to the white-box concept, he says.

 

We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have

 

Yet this is just what he is seeing from traditional systems vendors, which are developing designs that bring differentiation to their platforms to counter the white-box trend.

One recent example that fits this description is Ciena’s two-rack-unit 8180 coherent network platform. The 8180 has a 6.4-terabit packet fabric, supports 100-gigabit and 400-gigabit client-side interfaces and can be used solely as a switch or, more typically, as a transport platform with client-side and coherent line-side interfaces.

The 8180 is not a white box but has a suite of open APIs and has a higher specification than the Voyager and Cassini white-box platforms developed by the Telecom Infra Project.  

“We are going through a set of white-box evolutions,” says Garcia. “We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have.”

Whether FPGAs will find their way into white boxes, Garcia will not say. What he will say is that Xilinx is engaged with some of these players to have a good view as to what is required and by when.

It appears inevitable that white boxes will become more capable, to handle more and more of the data-plane tasks, and as a response to the competition from traditional system vendors with their more sophisticated designs.

AT&T’s white-box vision is clear. What is less certain is whether the rest of the operator pack will move to close the gap.


Juniper Networks opens up the optical line system

Juniper Networks has responded to the demands of the large-scale data centre players with an open optical line system architecture.

The system vendor has created software external to its switch, IP router and optical transport platforms that centrally controls the optical layer.

Juniper has also announced a reconfigurable optical add-drop multiplexer (ROADM) - the TCX1000 - that is Lumentum’s own white box ROADM design. Juniper will offer the Lumentum white box as its own, part of its optical product portfolio.

The open line system architecture, including the TCX1000, is also being pitched to communications service providers that want an optical line system and prefer to deal with a single vendor.

“Juniper plans to address the optical layer with a combination of software and open hardware in the common optical layer,” says Andrew Schmitt, founder and lead analyst at Cignal AI. “This is the solution it will bring to customers rather than partnering with an optical vendor, which Juniper has tried several times without great success.”

 

Open line systems

An optical line system comprises terminal and transmission equipment and network management software. The terminal equipment refers to coherent optics hosted on platforms, while line elements such as filters, optical amplifiers and ROADMs make up the transmission equipment. Traditionally, a single vendor has provided all these elements with the network management software embedded within the vendor’s platforms.

An open optical line system refers to line equipment and the network management system from a vendor such as Nokia, Infinera or Ciena that allows the attachment of independent terminal equipment. An example would be the Telecom Infra Project’s Voyager box linked to a Nokia line system, says Schmitt.

The open line system can also be implemented as a disaggregated design. Here, says Schmitt, the control software would be acquired from a vendor such as Juniper, Fujitsu, or Ciena with the customer buying open ROADMs, amplifiers and filters from various vendors before connecting them. Open software interfaces are used to communicate with these components. And true to an open line system, any terminal equipment can be connected.

The advantage of an open disaggregated optical line system is that elements can be bought from various sources to avoid vendor lock-in. It also allows the best components to be acquired and upgraded as needed.

Meanwhile, disaggregating the management and control software from the optical line system and equipment appeals to the way the internet content providers architect and manage their large-scale data centres. This is what Juniper’s proNX Optical Director platform enables, the second part of its open line system announcement. 

Juniper believes its design is an industry first in how it separates the control plane from the optical hardware.

“We have taken the concept of disaggregation and software-defined networking to separate the control plane out of the hardware,” says Donyel Jones-Williams, director of product marketing management at Juniper Networks. “Our control plane is no longer tied to physical hardware.”

 

Having an open line system supplied by one vendor gets you 90% of the way there

 

Disaggregated control benefits the optimisation of the open line system, and enables flexible updates without disrupting the service.

Cignal AI’s Schmitt says that the cloud and co-location players are already using open line systems, just not disaggregated ones.

“Having an open line system supplied by one vendor gets you 90% of the way there,” says Schmitt. For him, a key question is what problem is being solved by taking this one step further and disaggregating the hardware.

Schmitt’s view is that an operator introduces a lot of complexity into the network for the marginal benefit of picking hardware suppliers independently. “And realistically they are still single-sourcing the software from a vendor like Juniper or Ciena,” says Schmitt.

Juniper now can offer an open line system, and if a customer wants a disaggregated one, it can build it. “I don’t think users will choose to do that,” says Schmitt. “But Juniper is in a great position to sell the right open line system technology to its customer base and this announcement is interesting and important because Juniper is clearly stating this is the path it plans to take.”

 

TCX1000 and proNX 

Juniper’s open optical line system announcement is the latest development in its optical strategy since it acquired optical transport firm, BTI Systems, in 2016.

BTI’s acquisition provided Juniper with a line system for 100-gigabit transport. “The filters and ROADMs didn’t allow the system to scale to 200-gigabit and 400-gigabit line rates and to support super-channels and flexgrid,” says Jones-Williams.

With the TCX1000, Juniper now has a one-rack-unit 20-degree ROADM that is colourless and directionless, and which supports flexgrid to enable 400-gigabit, 600-gigabit and even higher-capacity optical channels in future. The TCX1000 supports up to 25.6 terabits-per-second per line.
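Flexgrid is what makes those wider channels possible: under the ITU-T flexible grid, a channel occupies a whole number of 12.5GHz slots rather than a fixed 50GHz slice. A small sketch of the slot arithmetic (the per-channel spectral widths below are illustrative assumptions, not TCX1000 figures):

```python
import math

SLOT_GHZ = 12.5  # ITU-T flexible-grid slot-width granularity

def slots_needed(channel_width_ghz):
    """Number of 12.5GHz flexgrid slots a channel of the given
    spectral width occupies, rounded up to whole slots."""
    return math.ceil(channel_width_ghz / SLOT_GHZ)

# Assumed example widths: a 100G channel in 37.5GHz, a 400G channel in 75GHz
for label, width_ghz in [("100G", 37.5), ("400G", 75.0)]:
    print(f"{label} channel at {width_ghz}GHz -> {slots_needed(width_ghz)} slots")
```

A fixed 50GHz grid, by contrast, would either waste spectrum on narrow channels or be unable to fit the wider ones at all.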

A customer can also buy the white box ROADM from Lumentum directly, says Juniper. “It gives our customers freedom as to how they want to source their product,” says Jones-Williams.

 

Competition between vendors is now in the software domain. We no longer believe that there is differentiation in the optical line system hardware


Juniper’s management and control software, the proNX Optical Director, has been architected using microservices. Microservices are a way to architect applications using virtualisation technology: each service runs in isolation, and can run and scale independently, while application programming interfaces (APIs) enable communication with the other services.

Container technology is used to implement microservices. Containers use fewer hardware resources than virtual machines, an alternative approach to server virtualisation.

 

Source: Juniper Networks.

“It is built for data centre operators,” says Don Frey, principal analyst, routers and transport at the market research firm, Ovum. “Microservices makes the product more modular.”

Juniper believes the competition between vendors is now in the software domain. “We no longer believe that there is differentiation in the optical line system hardware,” says Jones-Williams.

 

Data centre operators are not concerned about line system interoperability, they are just trying to remove the blade lock-in so they can get the latest technology.

 

Market demands

Most links between data centres are point-to-point, yet the internet content providers are still interested in ROADMs, says Juniper. What they want is to simplify network design using the ROADM’s colourless and flexible-grid attributes. A directionless ROADM is only needed for complex hub sites that require flexibility in moving wavelengths through a mesh network.

The strategy of the large-scale data centre operators is to split the optical system between an open line system and purpose-built blades. The split allows them to upgrade to the best blades or pluggable optics while leaving the core untouched. “The concept is similar to the open submarine cables as the speed of innovation in core systems is not the same as the line optics,” says Frey. “Data centre operators are not concerned about line system interoperability, they are just trying to remove the blade lock-in so they can get the latest technology.”

Juniper says there is also interest from communications service providers in the ROADM as part of their embrace of open initiatives such as the Open ROADM MSA. Frey says AT&T will make its first deployment of the Open ROADM before the year-end or in early 2018.  

“There are a lot of synergies in terms of what we have announced and things like Open ROADM,” says Jones-Williams. “But we know that there are customers out there that just want a line system and they do not care if it is open or not.”  

Juniper is already working with customers on its open line system as part of the development of its ProNX Optical Director software.

The branded ROADM and the ProNX Optical Director will be generally available in early 2018.


Sckipio’s G.fast silicon to enable gigabit services

Sckipio’s newest G.fast broadband chipset family delivers 1.2 gigabits of aggregate bandwidth over 100m of telephone wire.

The start-up’s SCK-23000 chipset family implements the ITU’s G.fast Amendment 3 212a profile. The profile doubles the spectrum used by G.fast, from 106MHz to 212MHz, boosting broadband rates. In contrast, VDSL2 digital subscriber line technology uses only 17MHz of spectrum.
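To put the spectrum figures in proportion, a quick sketch (treating achievable rate as roughly proportional to spectrum is a deliberate simplification for illustration):

```python
# Spectrum used by each technology, in MHz, from the figures above.
VDSL2_MHZ = 17
GFAST_106A_MHZ = 106
GFAST_212A_MHZ = 212

def spectrum_ratio(a_mhz: int, b_mhz: int) -> float:
    return a_mhz / b_mhz

# The 212a profile doubles G.fast's spectrum...
print(spectrum_ratio(GFAST_212A_MHZ, GFAST_106A_MHZ))  # 2.0
# ...and uses roughly 12.5x the spectrum of VDSL2.
print(round(spectrum_ratio(GFAST_212A_MHZ, VDSL2_MHZ), 1))  # 12.5
```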

“What the telcos want is gigabit services,” says Michael Weissman, vice president of marketing at Sckipio. “This second-generation [chipset family] allows that.”

 

G.fast market

AT&T announced in August that it is rolling out G.fast technology in 22 metro regions in the US. The operator already offers G.fast to multi-dwelling units in eight of these metro regions. The rollout adds to the broadband services AT&T offers in 21 states.

AT&T’s purchase of DirecTV in 2015 has given the operator some 20 million coax lines, says Weissman. AT&T can now deliver broadband services to apartments that have the DirecTV satellite service by bringing a connection to the building’s roof. AT&T will deliver such connections using its own fibre or by partnering with an incumbent operator. Once connected, high-speed internet using G.fast can then be delivered over the coax cable, a superior medium compared to telephony wiring.

“This is fundamentally going to change the game,” says Weissman. “AT&T can now compete with cable companies and incumbent operators in markets it couldn’t address before.”

Sckipio has secured four of the top five US telcos that have chosen to deploy G.fast: AT&T, CenturyLink, Windstream and Frontier. “The two largest - AT&T and CenturyLink - are exclusively ours,” says Weissman.

In markets such as China, the focus is on fibre. The three largest Chinese operators had deployed some 260 million fibre-to-the-home (FTTH) lines by the end of July.  

Overall, Sckipio is involved in some 100 G.fast pilots worldwide. The start-up is also the sole supplier of G.fast silicon to broadband vendor Calix and one of two suppliers to Adtran.

“Right now there are only two real deployments that are publicly announced - and I mean deployment volumes - AT&T and BT,” says Weissman. “The point is G.fast is real.”

Telcos have several requirements when it comes to G.fast deployment. One is that the technology delivers competitive broadband rates and that means gigabit services. Another is coverage: the ability to serve as high a percentage of customers as possible in a given region.

 

What the telcos want is gigabit services. This second-generation [chipset family] allows that.

 

Because G.fast works across the broader spectrum - 212MHz - advanced signal processing techniques are required to make the technology work. Known as vectoring, the signal processing technique rejects crosstalk - leaking signals - between the telephone wires at the distribution point. A further operator need is ‘vectoring density’, the ability to vector as many lines as possible. 

It is these and other requirements that Sckipio has set out to address with its SCK-23000 chipset family.    

 

SCK-23000 chipset

The SCK-23000 comprises two chipsets. One is the 8-port DP23000 chipset used at the distribution point unit (DPU) while the second chipset is the CP23000, used for customer premises equipment.

Sckipio is not saying what CMOS process is used to implement the chipsets. Nor will it say how many chips make up each of the chipsets.

As for performance, the chipsets enable an aggregate line-rate performance (downstream and upstream) ranging from 1.7 gigabits-per-second (Gbps) over 50m to 0.4Gbps over 300m. The DP23000 chipset also supports two bonded telephone lines, effectively doubling the line rate. In markets such as the US and Taiwan, a second wire pair to a home is common.
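The quoted endpoints and the effect of bonding can be sketched as follows (only the 50m and 300m figures come from Sckipio; rates at intermediate distances are not published here):

```python
# Aggregate (upstream + downstream) line rate in Gbps at the two quoted
# loop lengths, and the doubling effect of bonding two wire pairs.
RATE_GBPS_BY_DISTANCE_M = {50: 1.7, 300: 0.4}

def aggregate_rate(distance_m: int, bonded: bool = False) -> float:
    rate = RATE_GBPS_BY_DISTANCE_M[distance_m]
    return rate * 2 if bonded else rate  # bonding doubles the line rate

print(aggregate_rate(50))               # 1.7 Gbps over a single pair
print(aggregate_rate(50, bonded=True))  # 3.4 Gbps with two bonded pairs
```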

 

Vectoring density   

Vectoring density dictates how many G.fast ports can be deployed at a distribution point. And the computationally-intensive task is even more demanding with the adoption of the 212a profile. “The larger the vector group, the more each subscriber’s line must know what every other subscriber’s signal is to manage the crosstalk - and you are doing it at twice the bandwidth,” says Weissman.

Sckipio says the SCK-23000 supports up to 96 ports (or 48 bonded ports) at the 212a profile. The design uses distributed parallel processing that spreads the vectoring computation among the DP23000 8-port devices used. “We are not specifying data paths between the chips but you are talking about gigabytes of traffic flowing in all directions, all of the time,” says Weissman.
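The scaling problem Weissman describes can be made concrete: cancelling crosstalk needs a coupling estimate for every ordered pair of lines, so the work grows roughly with the square of the vector-group size. A sketch (the pairwise-coefficient count is standard vectoring arithmetic; Sckipio does not disclose its actual computation):

```python
# Crosstalk cancellation needs a coefficient for each ordered pair of
# lines, so a vector group of N ports implies N*(N-1) couplings.
PORTS_PER_CHIP = 8  # one DP23000 device handles 8 ports

def crosstalk_coefficients(ports: int) -> int:
    return ports * (ports - 1)

def chips_needed(ports: int) -> int:
    return ports // PORTS_PER_CHIP

print(crosstalk_coefficients(8))   # 56 couplings within a single chip
print(crosstalk_coefficients(96))  # 9120 couplings for a 96-port group
print(chips_needed(96))            # 12 devices share the vectoring work
```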

The computation can not only be spread across the devices in a single distribution point box but across devices in different boxes. Operators can thus use a pay-as-you-grow model, adding a new box as required. “A 96-port design could be two 48-port boxes, or an 8-port box could [be combined to] become a 16- or 24-port design if you have a smaller multi-dwelling unit environment,” says Weissman.

Sckipio’s design also features a reverse power feed: power is fed to the distribution point to avoid having to install a costly power supply. Since the power must come from a subscriber, the box’s power demand must not be excessive. A 16-port box is a good compromise in that it is not too large and as subscriber-count grows, each new 16-port unit added can be powered by another consumer.

“You can only do that if you can do cross distribution-point-unit vectoring,” says Weissman. “It allows the telcos to do a reverse power feed at the densities they require.” 

 

Dynamic bandwidth allocation

The chipsets also support co-ordinated dynamic bandwidth allocation, what Sckipio refers to as co-ordinated dynamic time assignment.

Unlike DSL, where the spectrum is split between upstream and downstream traffic, G.fast partitions the two streams in time: at any moment, the CPE chipset is either uploading or downloading traffic, never both.

Until now, an operator would preset a fixed upload-download ratio at installation. Now, with the latest silicon, dynamic bandwidth allocation can take place. The system assesses subscribers’ changing usage and adjusts the upload-download ratio accordingly. However, this must be co-ordinated across all users such that they all send and all receive data simultaneously.

“You can’t, under any circumstances, have lines uploading and downloading at the same time,” says Weissman. “All the systems that are vectored must be communicating in the same direction at the same time.” If they are not co-ordinated, crosstalk occurs. This is another crosstalk, in addition to the crosstalk caused by the adjacency of the telephone wires that is compensated for using vectoring.

“If you don’t co-ordinate across all the pairs, you create a different type of crosstalk which you can’t mitigate,” says Weissman. “This will kill the system.”      
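The co-ordination rule can be illustrated with a toy policy: compute one downstream share for the whole vector group (here, a simple average of per-line demand, purely an assumed policy) and apply it to every line so that all transmit in the same direction at once:

```python
# Toy co-ordinated dynamic time assignment: every line in the vector
# group must share the same upload/download split; mixed directions
# would create crosstalk that vectoring cannot cancel. The averaging
# policy below is illustrative only - Sckipio's algorithm is not public.
def coordinated_downstream_share(per_line_demand):
    """per_line_demand: each line's desired downstream fraction (0..1)."""
    share = sum(per_line_demand) / len(per_line_demand)
    return [share] * len(per_line_demand)  # identical split for all lines

shares = coordinated_downstream_share([0.9, 0.7, 0.8])
print(shares)  # every line gets the same downstream fraction
```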

Sckipio says the SCK-23000 chipsets are already with customers and that the devices are generally available.


The Open ROADM MSA adds new capabilities in Release 2.0

The Multi-Source Agreement (MSA) for open reconfigurable add-drop multiplexers (ROADM) group expects to publish its second release in the coming months. The latest MSA specifications extend optical reach by including line amplification and add support for flexible grid and lower-speed tributaries with OTN switching.

The Open ROADM MSA, set up by AT&T, Ciena, Fujitsu and Nokia, is promoting interoperability between vendors’ ROADMs by specifying open interfaces for their control using software-defined networking (SDN) technology. Now, one year on, the MSA has 10 members, equally split between operators and systems vendors.

Orange joined the Open ROADM MSA last July and says it shares AT&T’s view that optical networks lack openness given the proprietary features of the vendors’ systems.

“As service providers, we suffer from lock-in where our networks are composed of equipment from a single vendor,” says Xavier Pougnard, R&D manager for transport networks at Orange Labs. “When we want to introduce another vendor for innovation or economic reasons, it is nearly impossible.”

This is what the MSA group wants to tackle with its open specifications for the data and management planes. The goal is to enable an operator to swap equipment without having to change their control by using a common, open management interface. “Right now, for every new provider, we need IT development for the management of the [network] node,” says Pougnard.

 

As service providers, we suffer from lock-in where our networks are composed of equipment from a single vendor. When we want to introduce another vendor for innovation or economic reasons, it is nearly impossible.

 

MSA status

The Open ROADM MSA has published two data sets as part of its Release 1.2. One set tackles 100-gigabit data plane interoperability by defining what is needed for two line-side transponders to talk to each other. The second set of specifications uses the YANG modelling language to allow the management of the transponders and ROADMs.

The group is now working on Release 2.0 that will enable longer reaches and exploit OTN switching. The specifications will also support flexgrid whereas Release 1.2 specifies 50GHz fixed channels only. Release 2.0 is expected to be completed in the second quarter of 2017. “Service providers would like it as soon as possible,” says Pougnard.

Pougnard highlights the speed of development of an open MSA model, with new releases issued every few months, far quicker than traditional standardisation bodies. It was this frustration with the slow pace of the standards bodies that led Orange to join the Open ROADM MSA.

Orange stresses that Open ROADM equipment will not be used for all dense wavelength-division multiplexing cases. There will be applications requiring extended performance where a specific vendor's equipment will be used. “We do specify the use of an FEC [forward error correction] in the specification but there are more powerful FECs that extend the reach for 100-gigabit interfaces,” says Pougnard. For most applications, though, the flexibility offered by the MSA trumps performance.

 

Trials

AT&T detailed in December a network demonstration of the Open ROADM technology. The operator used a 100-gigabit optical wavelength in its Dallas area network to connect two IP-MPLS routers using transponders and ROADMs from Ciena and Fujitsu.

Orange is targeting its own lab trials in the first half of this year using a simplified OpenDaylight SDN controller working with ROADMs from three systems vendors. “We want to showcase the technology and prove the added value of an open ROADM,” says Pougnard. 

Orange is also a member of the Telecom Infra Project (TIP), a venture that includes Facebook and 10 operators, which tackles telecom networks from the access to the core. The two groups have discussed areas of possible collaboration, but whereas the Open ROADM MSA wants to promote a single YANG model that includes the line system’s amplifiers, TIP expects there to be more than one model. The two organisations also differ in philosophy: the Open ROADM MSA concerns itself with the interfaces to the platforms, whereas TIP also tackles the internal design of platforms.

Coriant, which is a member of TIP and the Open ROADM MSA, is keen for alignment. "As an industry we should try to make sure that certain elements such as open API definitions are aligned between TIP and the Open ROADM MSA," says Uwe Fischer, CTO of Coriant.  

Meanwhile, the Open ROADM MSA will announce another vendor member soon and says additional operators are watching the MSA’s progress with interest.

Pougnard stresses how open developments such as the ROADM MSA require WDM engineers to tackle new things. “We have a tremendous shift in skills,” he says. “Now they need to work on the automation capability, on YANG modelling and Netconf.”  Netconf - the IETF’s network configuration protocol - uses YANG models to enable the management of network devices such as ROADMs.    


60-second interview with Infonetics' Andrew Schmitt

Market research firm Infonetics Research, now part of IHS Inc., has issued its 2014 summary of the global wavelength-division multiplexing (WDM) equipment market. Andrew Schmitt, research director for carrier transport networking, discusses the findings in a Q&A with Gazettabyte.

 


Q: Infonetics claims the global WDM market grew 6% in 2014, to total US $10 billion. What accounted for such impressive growth in 2014?

AS: Primarily North American strength from data centre-related spending and growth in China.

 

Q: In North America, the optical vendors' fortunes were mixed: ADVA Optical Networking, Infinera and Ciena had strong results, balanced by major weakness at Alcatel-Lucent, Fujitsu and Coriant. You say those companies whose fortunes are tied to traditional carriers under-performed. What are the other markets that caused those vendors' strong results?

These three vendors are leading the charge into the data centre market. ADVA had flat revenue, North America saved their bacon in 2014. Ciena is also there because they are the ones who have suffered the least with the ongoing changes at AT&T and Verizon. And Infinera has just been killing it as they haven’t been exposed to legacy tier-1 spending and, despite the naysayers, has the platform the new customers want.

 

"People don’t take big risks and do interesting things to attack flat or contracting markets"

 

Q: Is this mainly a North American phenomenon, because many of the leading internet content providers are US firms?

Yes, but spending from Baidu, Alibaba, and Tencent in China is starting to scale. They are running the same playbook as the western data centre guys, with some interesting twists.

 

Q. You say the press and investors are unduly fascinated with AT&T's and Verizon's spending. Yet they are the two largest US operators, their combined capex was $39 billion in 2014, and their revenues grew. Are these other markets becoming so significant that this focus is misplaced?

Growth is what matters.

People don’t take big risks and do interesting things to attack flat or contracting markets. Sure, it is a lot of spend, but the decisions are made and that data is seen - incorporated into people’s thought-process and market opinion. What matters is what changes. And all signs are that these incumbents are trying to become more like the data centre folks.

 

Q. What will be the most significant optical networking trend in 2015?

Cheaper 100 gigabit, which lights up the metro 100 gigabit market for real in 2016.


North American operators in an optical spending rethink

Optical transport spending by the North American operators dropped 13 percent year-on-year in the third quarter of 2014, according to market research firm Dell'Oro Group.

Operators are rethinking the optical vendors they buy equipment from as they consider their future networks. "Software-defined networking (SDN) and Network Functions Virtualisation (NFV) - all the futuristic next network developments, operators are considering what that entails," says Jimmy Yu, vice president of optical transport research at Dell’Oro. "Those decisions have pushed out spending."

NFV will not impact optical transport directly, says Yu, and could even benefit it with the greater signalling to central locations that it will generate. But software-defined networks will require Transport SDN. "You [as an operator] have to decide which vendors are going to commit to it [Transport SDN]," says Yu.

 

SDN and NFV - all the futuristic next network developments, operators are considering what that entails. Those decisions have pushed out spending

 

The result is that the North American tier-one operators reduced their spending in the third quarter of 2014. Yu highlights AT&T, which spent robustly from 2013 through to mid-2014. "What we saw growing [in that period] was WDM metro equipment, and it is that spending that has dropped off in the third quarter," says Yu. For equipment vendors Ciena and Fujitsu, part of AT&T's Domain 2.0 supplier programme, the reduced Q3 spending is unwelcome news. But Yu expects North American optical transport spending in 2015 to exceed 2014's. This, despite AT&T announcing that its capital expenditure in 2015 will dip to US $18 billion from $21 billion in 2014 now that its Project VIP network investment has peaked.

But Yu says AT&T has other developments that will require spending. "Even though AT&T may reduce spending on Project VIP, it is purchasing DirecTV and the Mexican mobile carrier, Iusacell," he says. "That type of stuff needs network integration." AT&T has also committed to passing two million homes with fibre once it acquires DirecTV.

Verizon is another potential reason for 2015 optical transport growth in North America. It has a request-for-proposal for metro DWDM equipment and the only issue is when the operator will start awarding contracts. Meanwhile, each year the large internet content providers grow their optical transport spending.

 

Dell'Oro expects 2014 global optical transport spending to be flat, with 2015 forecast to experience three percent growth

 

Asia Pacific remains one of the brighter regions for optical transport in 2014. "Partly this is because China is buying a lot of DWDM long-haul equipment, with China Mobile being one of the biggest buyers of 100 Gig," says Yu. EMEA continues to under-perform and Yu expects optical transport spending to decline in 2014. "But there seems to be a lot of activity and it's just a question of when that activity turns into revenue," he says.

Dell'Oro expects 2014 global optical transport spending to be flat compared to 2013, with 2015 forecast to experience three percent growth. "That growth is dependent on Europe starting to improve," says Yu.

One area driving optical transport growth that Yu highlights is interconnected data centres. "Whether enterprises or large companies interconnecting their data centres, internet content providers distributing their networks as they add more data centres, or telecom operators wanting to jump on the bandwagon and build their own data centres to offer services; that is one of the more interesting developments," he says.

 


Operators want to cut power by a fifth by 2020

Briefing: Green ICT

Part 2: Operators’ power efficiency strategies

Service providers have set themselves ambitious targets to reduce their energy consumption by a fifth by 2020. The power reduction will coincide with an expected thirty-fold increase in traffic in that period. Given the cost of electricity and operators’ requirements, such targets are not surprising: KPN, with its 12,000 sites in The Netherlands, consumes 1% of the country’s electricity.
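The implied efficiency gain is worth spelling out. Using the article's figures (a thirty-fold traffic increase against 20 per cent less energy), energy consumed per bit must fall by a factor of nearly 40:

```python
# If traffic grows 30x while total energy falls to 80% of today's,
# energy consumed per bit must improve by 30 / 0.8 = 37.5x.
traffic_growth = 30   # expected traffic multiple by 2020
energy_budget = 0.8   # 20% less energy than today

required_per_bit_gain = traffic_growth / energy_budget
print(required_per_bit_gain)  # 37.5
```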

 

“We also have to invest in capital expenditure for a big swap of equipment – in mobile and DSLAMs"

Philippe Tuzzolino, France Telecom-Orange

Operators stress that power consumption concerns are not new but Marga Blom, manager, energy management group at KPN, highlights that the issue had become pressing due to steep rises in electricity prices. “It is becoming a significant part of our operational expense,” she says.

"We are getting dedicated and allocated funds specifically for energy efficiency,” adds John Schinter, AT&T’s director of energy. “In the past, energy didn’t play anywhere near the role it does today.”

 

Power reduction strategies

Service providers are adopting several approaches to reduce their power requirements.

Upgrading their equipment is one. Newer platforms are denser with higher-speed interfaces while also supporting existing technologies more efficiently. Verizon, for example, has deployed 100 Gigabit-per-second (Gbps) interfaces for optical transport and for its IT systems in Europe. The 100Gbps systems are no larger than existing 10Gbps and 40Gbps platforms and while the higher-speed interfaces consume more power, overall power-per-bit is reduced.
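The power-per-bit argument can be illustrated with hypothetical wattages (the article gives no figures; the numbers below are assumptions chosen only to show the shape of the trade-off):

```python
# A faster interface can draw more total power yet cost far fewer
# watts per gigabit. Wattages are hypothetical, for illustration only.
def watts_per_gbps(power_w: float, rate_gbps: float) -> float:
    return power_w / rate_gbps

print(watts_per_gbps(50, 10))    # 5.0 W/Gbps for an assumed 50W 10G port
print(watts_per_gbps(200, 100))  # 2.0 W/Gbps for an assumed 200W 100G port
```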

 

 “There is a business case based on total cost of ownership for migrating to newer platforms.”

Marga Blom, KPN

 

 

 

 

Reducing the number of facilities is another approach. BT and Deutsche Telekom are reducing significantly the number of local exchanges they operate. France Telecom is consolidating a dozen data centres in France and Poland to two, filling both with new, more energy-efficient equipment. Such an initiative improves the power usage effectiveness (PUE), an important data centre efficiency measure, halving the energy consumption associated with France Telecom’s data centres’ cooling systems.
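PUE is the ratio of total facility energy to the energy reaching the IT equipment, with 1.0 the ideal. A sketch with illustrative numbers (not France Telecom's) shows how halving cooling energy moves the metric:

```python
# PUE = total facility power / IT equipment power; lower is better.
# The kW figures below are illustrative assumptions.
def pue(it_kw: float, cooling_kw: float, other_kw: float = 0) -> float:
    return (it_kw + cooling_kw + other_kw) / it_kw

print(pue(1000, 800))  # 1.8 before
print(pue(1000, 400))  # 1.4 after halving the cooling energy
```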

“PUE started with data centres but it is relevant in the future central office world,” says Brian Trosper, vice president of global network facilities/ data centers at Verizon. “As you look at the evolution of cloud-based services and virtualisation of applications, you are going to see a blurring of data centres and central offices as they interoperate to provide the service.”

Belgacom plans to upgrade its mobile infrastructure with 20% more energy-efficient equipment over the next two years as it seeks a 25% network energy efficiency improvement by 2020. France Telecom is committed to a 15% reduction in its global energy consumption by 2020 compared to the level in 2006. Meanwhile KPN has almost halted growth in its energy demands with network upgrades despite strong growth in traffic, and by 2012 it expects to start reducing demand.  KPN’s target by 2020 is to reduce energy consumption by 20 percent compared to its network demands of 2005.

 

Fewer buildings, better cooling

Philippe Tuzzolino, environment director for France Telecom-Orange, says energy consumption is rising in its core network and data centres due to the ever increasing traffic and data usage but that power is being reduced at sites using such techniques as virtualisation of servers, free-air cooling, and increasing the operating temperature of equipment. “We employ natural ventilation to reduce the energy costs of cooling,” says Tuzzolino.  

“Everything we do is going to be energy efficient.”

Brian Trosper, Verizon

 

 

 

 

 

Verizon uses techniques such as alternating ‘hot’ and ‘cold’ aisles of equipment and real-time smart-building sensing to tackle cooling. “The building senses the environment, where cooling is needed and where it is not, ensuring that the cooling systems are running as efficiently as possible,” says Trosper.

Verizon also points to vendor improvements in back-up power supply equipment such as DC power rectifiers and uninterruptable power supplies. Such equipment, which is always on, has traditionally been only 50% efficient. “If they are losing 50% power before they feed an IP router that is clearly very inefficient,” says Chris Kimm, Verizon's vice president, network field operations, EMEA and Asia-Pacific. Now manufacturers have raised the efficiency of such power equipment to 90-95%.
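The efficiency figures translate directly into input power. A rough sketch using an assumed 1kW router load:

```python
# Input power needed to feed a given load through power equipment of a
# given efficiency - the gap between 50% and 95% is the waste Kimm cites.
def input_power_kw(load_kw: float, efficiency: float) -> float:
    return load_kw / efficiency

print(input_power_kw(1.0, 0.50))  # 2.0 kW drawn to deliver 1 kW at 50%
print(input_power_kw(1.0, 0.95))  # ~1.05 kW at 95% efficiency
```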

France Telecom forecasts that its data centre and site energy saving measures will only work till 2013 with power consumption then rising again. “We also have to invest in capital expenditure for a big swap of equipment – in mobile and DSLAMs [access equipment],” says Tuzzolino. 

Newer platforms support advanced networking technologies and more traffic while supporting existing technologies more efficiently. This allows operators to move their customers onto the newer platforms and decommission the older power-hungry kit.  

 

“Technology is changing so rapidly that there is always a balance between installing new, more energy efficient equipment and the effort to reduce the huge energy footprint of existing operations”

John Schinter, AT&T

 

Operators also use networking strategies to achieve efficiencies. Verizon is deploying a mix of equipment in its global private IP network used by enterprise customers. It is deploying optical platforms in new markets to connect to local Ethernet service providers. “We ride their Ethernet clouds to our customers in one market, whereas layer 3 IP routing may be used in an adjacent, next most-upstream major market,” says Kimm. The benefit of the mixed approach is greater efficiencies, he says: “Fewer devices to deploy, less complicated deployments, less capital and ultimately less power to run them.”

Verizon is also reducing the real estate it uses as it retires older equipment. “One trend we are seeing is more, relatively empty-looking facilities,” says Kimm. It is no longer facilities crammed with equipment that is the problem, he says; rather, sites are now bound by their power and cooling capacity requirements.

“You have to look at the full picture end-to-end,” says Trosper. “Everything we do is going to be energy efficient.” That includes the system vendors and the energy-saving targets Verizon demands of them, how it designs its network, the facilities where the equipment resides and how they are operated and maintained, he says.

Meanwhile, France Telecom says it is working with 19 operators, including Vodafone, Telefonica, BT, Deutsche Telekom, China Telecom and Verizon, as well as organisations such as the ITU and ETSI, to define standards for DSLAMs and base stations to aid the operators in meeting their energy targets.

Tuzzolino stresses that France Telecom’s capital expenditure will depend on how energy costs evolve. Energy prices will dictate when France Telecom will need to invest in equipment, and the degree, to deliver the required return on investment.

The operator has defined capital expenditure spending scenarios - from a partial to a complete equipment swap from 2015 - depending on future energy costs. New services will clearly dictate operators’ equipment deployment plans but energy costs will influence the pace.  

 

“If they [DC power rectifiers and UPSs] are losing 50% power before they feed an IP router that is clearly very inefficient”

Chris Kimm, Verizon. 

 

 

 

Justifying capital expenditure based on energy and hence operational expense savings is now ‘part of the discussion’, says KPN’s Blom: “There is a business case based on total cost of ownership for migrating to newer platforms.”

 

Challenges

But if operators are generally pleased with the progress they are making, challenges remain.

“Technology is changing so rapidly that there is always a balance between installing new, more energy efficient equipment and the effort to reduce the huge energy footprint of existing operations,” says AT&T’s Schinter.

“The big challenge for us is to plan the capital expenditure effort such that we achieve the return-on-investment based on anticipated energy costs,” says Tuzzolino.

Another aspect is regulation, says Tuzzolino. The EC is considering how ICT can contribute to reducing the energy demands of other industries, he says. “We have to plan to reduce energy consumption because ICT will increasingly be used in [other sectors like] transport and smart grids.”

Verizon highlights the challenge of successfully managing large-scale equipment substitution and other changes that bring benefits while serving existing customers. “You have to keep your focus in the right place,” says Kimm. 

 

Part 1: Standards and best practices 


AT&T domain suppliers

Date

Domain

Partners

Sept 2009

Wireline Access 

Ericsson

Feb 2010

Radio Access Network

Alcatel-Lucent, Ericsson

April 2010

Optical and transport equipment 

Ciena

July 2010

IP/MPLS/Ethernet/Evolved Packet Core

Alcatel-Lucent, Juniper, Cisco

 

The table shows the selected players in AT&T's domain supplier programme announced to date.

AT&T has stated that there will likely be eight domain supplier categories overall so four more have still to be detailed.

Looking at the list, several thoughts arise:

  • AT&T has already announced wireless and wireline infrastructure providers whose equipment spans the access network all the way to ultra long-haul. The networking technologies also address the photonic layer to IP or layer 3.
  • Alcatel-Lucent and Ericsson already play in two domains, while no Asian vendor has yet been selected.
  • One or two more players may be added to the wireline access and optical and transport infrastructure domains but this part of the network is pretty much done.

So what domains are left? Peter Jarich, service director at market research firm Current Analysis, suggests the following:

  • Datacentre
  • OSS/BSS
  • IP Service Layer (IP Multimedia Subsystem, subscriber data management, service delivery platform)
  • Voice Core (circuit, softswitch)
  • Content Delivery (IP TV, etc.)

AT&T was asked to comment but the operator said that it has not detailed any domains beyond those that have been announced.


