Verizon's move to become a digital service provider
Working with Dell, Big Switch Networks and Red Hat, the US telco announced in April it had already brought online five data centres. Since then it has deployed more sites but is not saying how many.
“We are laying the foundation of the programmable infrastructure that will allow us to do all the automation, virtualisation and the software-defining we want to do on top of that,” says Chris Emmons, director, network infrastructure planning at Verizon.
“This is the largest OpenStack NFV deployment in the marketplace,” says Darrell Jordan-Smith, vice president, worldwide service provider sales at Red Hat. “The largest in terms of the number of [server] nodes that it is capable of supporting and the fact that it is widely distributed across Verizon’s sites.”
OpenStack is an open-source set of software tools for managing networking, storage and compute services in the cloud. “There are some basic levels of orchestration while, in parallel, there is a whole virtualised managed environment,” says Jordan-Smith.
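As a rough illustration - not Verizon's actual configuration - of what managing compute and networking from a single OpenStack platform looks like in practice, the sketch below uses the openstacksdk Python library to list resources and boot a virtual machine. The cloud, image, flavour and network names are hypothetical placeholders.

```python
import openstack

# Connect to an OpenStack cloud defined in clouds.yaml
# ("nfv-cloud" is a hypothetical entry, not a Verizon name).
conn = openstack.connect(cloud="nfv-cloud")

# Compute and networking are driven through the same connection.
for server in conn.compute.servers():
    print(server.name, server.status)
for network in conn.network.networks():
    print(network.name)

# Boot a virtual machine that could host a virtualised network function.
# The image, flavour and network names below are placeholders.
image = conn.compute.find_image("rhel-guest-image")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("tenant-net")

server = conn.compute.create_server(
    name="vnf-demo-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
conn.compute.wait_for_server(server)  # block until the VM is ACTIVE
print(server.name, "is up")
```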
“Verizon is joining some of the other largest communication service providers in deploying a platform onto which they will add applications over time,” says Dana Cooperson, a research director at Analysys Mason.
Most telcos start with a service-led approach so they can gain direct experience with the technology and secure one or more quick wins - revenue from a new service arena - while containing the risk of something going wrong, explains Cooperson. As they progress, they can still lead with specific services while deploying their platforms, deciding over time what to put on the platform as custom equipment reaches its end of life.
A second approach - a platform strategy - is a more sophisticated, longer-term one, but telcos need experience before they take that plunge.
“This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach,” says Cooperson.
Applications
The Verizon data centres sit at core locations in its network. “We are focussing more on core applications but some of the tools we use to run the network - backend systems - are also targeted,” says Emmons.
The infrastructure is designed to support all of Verizon’s business units. For example, Verizon is working with its enterprise unit to see how it can use the technology to deliver virtual managed services to enterprise customers.
“Wherever we have a need to virtualise something - the Evolved Packet Core, IMS [IP Multimedia Subsystem] core, VoLTE [Voice over LTE] or our wireline side, our VoIP [Voice over IP] infrastructure - all these things are targeted to go on the platform,” says Emmons. Verizon plans to pool all these functions and network elements onto the platform over the next two years.
Red Hat’s Jordan-Smith talks about a two-stage process: virtualising functions and then making them stateless, so that applications can run independently of which server - and which data centre - hosts them.
“Virtualising applications and running on virtual machines gives some economies of scale from a cost perspective and density perspective,” says Jordan-Smith. But there are further cost, performance and resiliency benefits once such applications can run across data centres.
And by having a software-based layer, Verizon will be able to add devices and create associated applications and services. “With the Internet of Things, Verizon is looking at connecting many, many devices and adding scale to these types of environments,” says Jordan-Smith.
Architecture
Verizon is deploying a ‘pod and core architecture’ in its data centres. A pod contains racks of servers, top-of-rack or leaf switches, and higher-capacity spine switches and storage, while the core network is used to enable communications between pods in the same data centre and across sites (see diagram, top).
Dell is providing Verizon with servers, storage platforms and white-box leaf and spine switches. Big Switch Networks is providing software that runs on the Dell switches and servers, while Red Hat provides the OpenStack platform and Ceph storage software.
Each Dell rack houses 22 servers - each server having 24 cores and supporting 48 hyperthreads - and all 22 servers connect to the leaf switch. Each rack is teamed with a sister rack and the two are connected to two leaf switches, providing switch-level redundancy.
“Each of the leaf switches is connected to however many spine switches are needed at that location and that gives connectivity to the outside world,” says Emmons. Across the five data centres, a total of eight pods have been deployed, amounting to some 1,000 servers, and this has not changed since April.
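A back-of-envelope tally, using only the figures quoted above, gives a sense of scale; the rack count is derived here, not something Verizon has disclosed.

```python
# Figures quoted by Emmons; the rack count is derived, not official.
SERVERS_PER_RACK = 22
CORES_PER_SERVER = 24
THREADS_PER_SERVER = 48   # with hyperthreading
TOTAL_SERVERS = 1_000     # approximate, across 8 pods in 5 data centres

racks = TOTAL_SERVERS / SERVERS_PER_RACK        # roughly 45 racks
cores = TOTAL_SERVERS * CORES_PER_SERVER        # 24,000 physical cores
threads = TOTAL_SERVERS * THREADS_PER_SERVER    # 48,000 hardware threads

print(f"~{racks:.0f} racks, {cores:,} cores, {threads:,} threads")
```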
Verizon has deliberately chosen to separate the pods from the core network so it can innovate at the pod level independently of the data centre’s network.
“We wanted the ability to innovate at the pod level and not be tied into any technology roadmap at the data centre level,” says Emmons, who points out that there are several ways to evolve the data centre network. For example, in some cases, an SDN controller is used to control the whole data centre network.
“We don't want our pods - at least initially - to participate in that larger data centre SDN controller because we were concerned about the pace of innovation and things like that,” says Emmons. “We want the pod to be self-contained and we want the ability to innovate and iterate in those pods.”
Its first-generation pods contain equipment and software from Dell, Big Switch and Red Hat but Verizon may decide to swap out some or all of the vendors for its next-generation pod. “So we could have two generations of pod that could talk to each other through the core network,” says Emmons. “Or they could talk to things that aren't in other pods - other physical network functions that have not yet been virtualised.”
Verizon’s core networks are its existing data-centre networks. “We didn't require any uplift and migration of the data centre networks,” says Emmons. However, Verizon has a project investigating data-centre interconnect platforms for core networking.
Benefits
Verizon expects capital expenditure and operational expense benefits from its programmable network but says it is too early to quantify them. What excites the operator more is the ability to get services up and running quickly, and to adapt and scale the network according to demand.
“You can reconfigure and reallocate your network once it is all software-defined,” says Emmons. There is still much work to be done, from the network core to the edge. “These are the first steps to that programmable infrastructure that we want to get to,” says Emmons.
Capital expenditure savings result from adopting standard hardware. “The more uniform you can keep the commodity hardware underneath, the better your volume purchase agreements are,” says Emmons. Operational savings also result from using standardised hardware. “Spares becomes easier, troubleshooting becomes easier as does the lifecycle management of the hardware,” he says.
Challenges
“We are tip-of-the-spear here,” admits Emmons. “What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to.”
Red Hat’s Jordan-Smith also talks about the accelerated development environment that a software-based network enables. The OpenStack platform undergoes a new revision every six months.
“There are new services that are going to be enabled through major revisions in the not-too-distant future - the next 6 to 12 months,” says Jordan-Smith. “That is one of the key challenges operators like Verizon have when they are moving at what is now a very fast pace.”
Verizon continues to deploy infrastructure across its network. The operator has completed most of the troubleshooting and performance testing at the cloud level and, in parallel, is working on the applications in several of its labs. “Now it is time to put it all together,” says Emmons.
One critical aspect of the move to become a digital service provider will be the operators' ability to offer new services more quickly - what people call service agility, says Cooperson. Only by changing their operations and their networks can operators create and, if needed, retire services quickly and easily.
"It will be evident that they are truly doing something new when they can launch services in weeks instead of months or years, and make changes to service parameters upon demand from a customer, as initiated by the customer," says Cooperson. “Another sign will be when we start seeing a whole new variety of services and where we see communications service providers building those businesses so that they are becoming a more significant part of their revenue streams."
She cites as examples cloud-based services and more machine-to-machine and Internet of Things-based services.
Next-gen 100 Gigabit optics
Briefing: 100 Gigabit
Part 2: Interview
Gazettabyte spoke to John D'Ambrosia about 100 Gigabit technology
John D'Ambrosia, chair of the IEEE 100 Gig backplane and copper cabling task force
John D'Ambrosia laughs when he says he is the 'father of 100 Gig'.
He spent five years as chair of the IEEE 802.3ba group that created the 40 and 100 Gigabit Ethernet (GbE) standards. Now he is the chair of the IEEE task force looking at 100 Gig backplane and copper cabling. D'Ambrosia is also chair of the Ethernet Alliance and chief Ethernet evangelist in the CTO office of Dell's Force10 Networks.
"Part of the reason why 100 Gig backplane technology is important is that I don't know anybody that wants a single 100 Gig port off whatever their card is," says D'Ambrosia. "Whether it is a router, line card, whatever you want to call it, they want multiple 100 Gig [interfaces]: 2, 4, 8 - as many as they can."
Earlier this year, there was a call for interest for next-generation 100 Gig optical interfaces, with the goal of reducing the cost and power consumption of 100 Gig interfaces while increasing their port density. "This [next-generation 100 Gig optical interfaces] is going to become very interesting in relation to what is going on in the industry,” he says.
Next-gen 100 Gig
The 10x10 MSA is an industry initiative offering an alternative 100 Gig interface to the IEEE 100 Gigabit Ethernet standards. Members of the 10x10 MSA include Google, Brocade, JDSU, NeoPhotonics (Santur), Enablence, CyOptics, AFOP, MRV, Oplink and Hitachi Cable America.
"Unfortunately, that [10x10 MSA] looks like it could cause potential interop issues,” says D'Ambrosia. That is because the 10x10 MSA has a 10-channel 10 Gigabit-per-second (Gbps) optical interface while the IEEE 100GbE use a 4x25Gbps optical interface.
The 10x10 interface has a 2km reach and the MSA has since added a 10km variant as well as 4x10x10Gbps and 8x10x10Gbps versions over 40km.
The advent of the 10x10 MSA has led to an industry discussion about shorter-reach IEEE interfaces. "Do we need something below 10km?” says D’Ambrosia.
Reach is always a contentious issue, he says. When the IEEE 802.3ba was choosing the 10km 100GBASE-LR4, there was much debate as to whether it should be 3 or 4km. "I won’t be surprised if you have people looking to see what they can do with the current 100GBASE-LR4 spec: There are things you can do to reduce the power and the cost," he says.
One obvious development to reduce size, cost and power is to remove the gearbox chip. The gearbox IC translates between the 10x10Gbps electrical and 4x25Gbps optical channels, and consumes several watts in each direction (transmit and receive). By adopting a 4x25Gbps electrical input interface, the gearbox chip is no longer needed: the electrical and optical channels are then matched in speed and channel count. The result is that 100GbE designs can be put into the upcoming, smaller CFP2 and even smaller CFP4 form factors.
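A small sketch of the lane arithmetic shows why matching the electrical interface to the optics removes the gearbox; the figures are simply the 10x10Gbps and 4x25Gbps splits discussed above.

```python
# Both sides of the module carry the same 100 Gbps payload;
# what differs is the number of lanes it is split across.

def lane_count(total_gbps: int, lane_gbps: int) -> int:
    return total_gbps // lane_gbps

electrical_lanes = lane_count(100, 10)   # 10 x 10Gbps electrical input
optical_lanes = lane_count(100, 25)      # 4 x 25Gbps optical output

# A gearbox is needed whenever the two lane counts differ.
print("gearbox needed:", electrical_lanes != optical_lanes)    # True (10 vs 4)

# Move the electrical interface to 4 x 25Gbps and the counts match,
# so the gearbox - and its several watts per direction - can be dropped.
print("gearbox needed:", lane_count(100, 25) != optical_lanes)  # False (4 vs 4)
```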
As for other next-gen 100Gbps developments, these will likely include a 4x25Gbps multi-mode fibre specification and a 100 Gig, 2km serial interface, similar to the 40GBASE-FR.
The industry focus, he says, is on reducing the cost, power and size of 100Gbps interfaces rather than developing multiple 100 Gig link interfaces or extending the reach beyond 40km. "We are going to see new systems introduced over the next few years not based on 10 Gig but designed for 25 Gig,” says D’Ambrosia. The ASIC and chip designers are also keen to adopt 25Gbps signalling because they need to increase input-output (I/O) yet have only so many pins on a chip, he says.
D’Ambrosia is also part of an Ethernet bandwidth assessment ad-hoc committee that is part of the IEEE 802.3 work. The group is working with the industry to quantify bandwidth demand. “What you see is a lot of end users talking about needing terabit and a lot of suppliers talking about 400 Gig,” he says. Ultimately, what will determine the next step is what technologies are going to be available and at what cost.
Backplane I/O and switching
Many of the systems D'Ambrosia is seeing use a single 100Gbps port per card. "A single port is a cool thing but is not that useful,” he says. “Frankly, four ports is where things start to become interesting.”
This is where 25Gbps electrical interfaces come into play. "It is not just 25 Gig for chip-to-chip, it is 25 Gig chip-to-module and 25 Gig to the backplane."
Moreover, modules, backplane speeds and switching capacity are all interrelated when designing systems. When designing a 10 Terabit switch, for example, the goal is to reduce the number of traces on a board that go through the backplane to the switch fabric and other line cards.
Using 10Gbps electrical signals, between 1,200 and 2,000 signals are needed depending on the architecture, says D'Ambrosia. With 25Gbps, the count falls to 500-750. “The electrical signal has an impact on the switch capacity,” he says.
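The sketch below reconstructs the minimum-lane arithmetic behind those figures; the quoted ranges sit above the minima because of architecture-dependent factors (fabric overspeed, redundancy, coding overhead) that are assumed here rather than given in the interview.

```python
# Minimum lanes needed to carry 10 Tbit/s at a given per-lane rate.
SWITCH_CAPACITY_GBPS = 10_000   # a 10 Terabit switch

for lane_gbps in (10, 25):
    minimum_lanes = SWITCH_CAPACITY_GBPS // lane_gbps
    print(f"{lane_gbps} Gbps signalling: at least {minimum_lanes:,} lanes")

# 10 Gbps signalling: at least 1,000 lanes -> 1,200-2,000 traces in practice
# 25 Gbps signalling: at least   400 lanes ->   500-750 traces in practice
```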
100 Gig in the data centre
D’Ambrosia stresses that care is needed when discussing data centres, as the internet data centres (IDCs) of a Google or a Facebook differ greatly from those of enterprises. “In the case of IDCs, those people were saying they needed 100 Gig back in 2006,” he says.
Such mega data centres use tens of thousands of servers connected across a flat switching architecture, unlike traditional data centres that use three layers of aggregated switching. According to D'Ambrosia, such flat architectures can justify 100Gbps interfaces even when each server has only a 1 Gigabit Ethernet interface. And servers are now transitioning to 10GbE interfaces.
“You are going to have to worry about the architecture, you are going to have to worry about the style of data centre and also what the server applications are,” says D'Ambrosia. “People are also starting to talk about moving data operations around the network based on where electricity is cheapest.” Such an approach will require a truly wide, flat architecture, he says.
D'Ambrosia cites the Amsterdam Internet Exchange, which announced in May its first customer using a 100 Gig service. "We are starting to see this happen,” he says.
One lesson D'Ambrosia has learnt is that there is no clear relationship between what comes in and out of the cloud and what happens within the cloud. Data centres themselves are one such example.
100 Gig direct detection
In recent months, ADVA Optical Networking and MultiPhy have announced lower-power, 100Gbps direct-detection interfaces, with reaches of 200km to 800km, that are cheaper than coherent transmission. Such interfaces have a role in the network and are of varying interest to telco operators. But these are vendor-specific solutions.
D’Ambrosia stresses the importance of standards such as the IEEE’s, and of the work of the Optical Internetworking Forum (OIF), which has adopted coherent transmission. “I still see customers that want a standards-based solution,” says D'Ambrosia, who adds that while the OIF work is not a standard, it is an interoperability agreement. “It allows everyone to develop the same thing," he says.
There are also other considerations for 100 Gig direct detection besides cost, power and a pluggable form factor. Vendors and operators want to know from how many suppliers they will be able to source it, he says.
D'Ambrosia says that new systems being developed now will likely be deployed in 2013. Vendors must weigh the attractiveness of any alternative technology against where industry-backed technologies, such as coherent and the IEEE standards, will be by then.
The industry will adopt a variety of 100Gbps solutions, he says, with particular decisions based on a customer’s cost model, its long-term strategy and its network.
For Part 1 - 100 Gig: An operator view click here
