OpenFlow extends its control to the optical layer

OpenFlow may be causing an industry stir as system vendors such as ADVA Optical Networking extend the protocol's reach to the optical layer, but analysts warn that it will take years before the technology benefits operators' revenues.

The largest data centre players have a single-mindedness when it comes to service delivery. Players such as Google, Facebook and Amazon do not think twice about embracing and even spurring hardware and software developments if they will help them better meet their service requirements.

Such developments are also having a wider impact, attracting the interest of traditional telecom operators that face their own service challenges.

The latest development causing waves is the OpenFlow protocol. An open standard, OpenFlow is being developed by the Open Networking Foundation, an industry body that includes Google, Facebook and Microsoft, telecom operators Verizon, NTT and Deutsche Telekom, and various equipment makers.

OpenFlow is already being used by Google, and falls under the more general topic of software-defined networking (SDN). A key principle underpinning SDN is the separation of the data and control planes to enable more centralised and simplified management of the network.

OpenFlow is being used in the management of packet switches for cloud services. "The promise of software-defined networking and OpenFlow is to give [data centre operators] a virtualised network infrastructure," says Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking.

The growing interest in OpenFlow is reflected in the activities of the telecom system vendors that have extended the protocol to embrace the optical layer. But whereas the content service provider giants need only worry about tailoring their networks to optimise their particular services, telecom operators must consider legacy equipment and issues of interoperability.

OFELIA

ADVA Optical Networking has started the ball rolling by running an experiment to show OpenFlow controlling both the optical and packet layers of the network. Until now the protocol, which provides a software-programmable interface, has been used to manage packet switches; adding optical-layer control is an industry first, the company claims.

The OpenFlow demonstration is part of the European “OpenFlow in Europe, Linking Infrastructure and Applications” (OFELIA) research project involving ADVA Optical Networking and the University of Essex. A test bed has been set up that uses the ADVA FSP 3000 to implement a colourless and directionless ROADM-based optical network.

"We have put a network together such that people can run the optical layer through an OpenFlow interface, as they do the packet switching layer, under one uniform control umbrella," says Elbers. "The purpose of this project is to set up an experimental facility to give researchers access to, and have them play with, the capabilities of an OpenFlow-enabled network."  

"The fact that Google is doing it [SDN] is not a strong indication that service providers are going to do it tomorrow"

Mark Lutkowitz, Telecom Pragmatics

Remote researchers can access the test bed via GÉANT, a high-bandwidth pan-European backbone connecting national research and education networks.

ADVA Optical Networking hopes the project will act as a catalyst to gain useful feedback and ideas from the users, leading to further developments to meet emerging requirements. 

OpenFlow and GMPLS

A key principle of SDN, as mentioned, is the separation of the data plane from the control plane. "The aim is to have a more unified control of what your network is doing rather than running a distributed specialised protocol in the switches," says Elbers.

That is not that much different from Generalized Multi-Protocol Label Switching (GMPLS), he says: "With GMPLS in an optical network you effectively have a data plane - a wavelength switched data plane - and then you have a unified control plane implementation running on top, decoupled from the data plane."

But clearly there are differences. OpenFlow is being used by data centre operators to control their packet switches and generate packet flows. The goal is for their networks to gain flexibility and agility: "A virtualised network that can be run as you, the user, want it," says Elbers.

But the protocol only gives a user the capability to manage the forwarding behaviour of a switch: an incoming packet's header is inspected, and the user can program the forwarding table to determine how the packet stream is treated and which port it goes out on.
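
As a rough sketch of that match-and-forward model (purely illustrative Python, not any real controller's or switch's API; every name in it is hypothetical):

```python
# Illustrative sketch of OpenFlow-style match/action forwarding.
# The controller programs the table; the switch only matches headers
# and forwards. All names here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Match:
    in_port: Optional[int] = None   # None acts as a wildcard
    eth_dst: Optional[str] = None
    ip_dst: Optional[str] = None

@dataclass
class FlowEntry:
    match: Match
    out_port: int
    priority: int = 0

class FlowTable:
    def __init__(self):
        self.entries = []

    def program(self, entry: FlowEntry) -> None:
        """Control plane: write an entry, highest priority first."""
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def forward(self, in_port: int, eth_dst: str, ip_dst: str) -> Optional[int]:
        """Data plane: inspect the header, return the output port (None = drop)."""
        for e in self.entries:
            m = e.match
            if ((m.in_port is None or m.in_port == in_port) and
                    (m.eth_dst is None or m.eth_dst == eth_dst) and
                    (m.ip_dst is None or m.ip_dst == ip_dst)):
                return e.out_port
        return None

table = FlowTable()
table.program(FlowEntry(Match(ip_dst="10.0.0.2"), out_port=3, priority=10))
print(table.forward(in_port=1, eth_dst="aa:bb:cc:dd:ee:ff", ip_dst="10.0.0.2"))  # -> 3
```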

And while OpenFlow has since been extended to cater for circuit switches as well as wavelength circuits, there are aspects of the optical layer that OpenFlow is not designed to address - issues that GMPLS does.

To run end-to-end, the control plane needs to be aware of the blocking constraints of an optical switch and, when provisioning, of such aspects as optical power levels and optical performance constraints. "The management of optical is different from managing a packet switch or a TDM [circuit switched] platform," says Elbers. “We need to deal with transmission impairments and constraints that simply do not exist inside a packet switch.”
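
A toy illustration of the kind of extra checks an optical control plane must perform before provisioning a lightpath - checks that have no counterpart in a packet switch. All figures, thresholds and field names below are invented for the example:

```python
# Illustrative only: wavelength continuity (blocking) and a crude
# optical performance budget must both be satisfied before a
# lightpath can be provisioned. All numbers are invented.

LINKS = {
    # (node_a, node_b): free wavelengths (nm) and an OSNR penalty (dB)
    ("A", "B"): {"free_wavelengths": {1550.12, 1550.92}, "osnr_penalty_db": 2.0},
    ("B", "C"): {"free_wavelengths": {1550.12},          "osnr_penalty_db": 3.5},
}

def can_provision(path, launch_osnr_db=30.0, min_osnr_db=22.0):
    """Check wavelength continuity and a simple OSNR budget along a path."""
    hops = list(zip(path, path[1:]))
    # Wavelength continuity: one wavelength must be free end to end,
    # i.e. no blocking at any intermediate node.
    common = set.intersection(*(LINKS[h]["free_wavelengths"] for h in hops))
    if not common:
        return None  # blocked: no continuous wavelength available
    # Optical performance: accumulated impairments must leave enough margin.
    osnr = launch_osnr_db - sum(LINKS[h]["osnr_penalty_db"] for h in hops)
    if osnr < min_osnr_db:
        return None  # signal would be unusable at the receiver
    return min(common), osnr

print(can_provision(["A", "B", "C"]))  # -> (1550.12, 24.5)
```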

That said, for a vendor with GMPLS expertise it is relatively simple to provide an OpenFlow interface to a controlled optical network, he says: "We see OpenFlow as an additional solution to tackle the problem of network control."

Operators want mature, proven and interoperable standards for network control that incorporate all the different network layers - and for that they use GMPLS.

"We are seeing that in the data centre space, the players think that they may not have to have that level of complexity in their protocols and can run something lower level and streamlined for their applications," says Elbers.

While operators see the benefit of OpenFlow for their own data centres and managed service offerings, they are also eyeing other applications, such as access and aggregation networks to allow faster service mobility, and content management, says Elbers.

ADVA Optical Networking sees adding the optical layer to OpenFlow as a complementary approach: integrating optical networking into an existing framework so that it can be run in a more dynamic fashion, an approach that benefits both the data centre operators and the telcos.

"If you have one common framework, when you give server and compute jobs then you know what kind of connectivity and latency needs to go with this and request these resources and reconfigure the network accordingly," says Elbers.

But longer term, the impact of OpenFlow and SDN will likely be more far-reaching: applications themselves could program the network, or the technology could be used to enable dial-up bandwidth services in a more dynamic fashion. "By providing software programmability into a network, you can develop your own networking applications on top of this - what we see as the heart of the SDN concept," says Elbers. “The long term vision is that the network will also become a virtualised resource, driven by applications that require certain types of connectivity.”

Providing the interface is the first step; the value-add will come from what players do with the added network flexibility, whether that is vendors working with operators, the operators' customers, or third-party developers.

"This is a pretty significant development that addresses the software side of things," says Elbers, adding that software is becoming increasingly important, with OpenFlow being an interesting step in that direction.

100 Gig: Is market expectation in need of a reality check?

Recent market research suggests that the 100 Gigabit-per-second (Gbps) era is fast-approaching and that 100Gbps promises to leave the 40Gbps market opportunity in its wake.

Infonetics Research, in a published white paper, says that 100Gbps technology will be adopted at a faster rate than 40Gbps was in its first years, and that the 100Gbps market will begin in earnest from 2013. Indeed, it could begin even sooner if China, which accounts for half of all 40Gbps ports shipped, moves to 100Gbps faster than expected.

LightCounting, in its research, describes 100 Gbps optical transmission as a transformational networking technology for carriers, and forecasts that sales of 100Gbps dense wavelength division multiplexing (DWDM) line cards will grow to US $2.3 billion by 2015.

But one research firm, Telecom Pragmatics, is sounding a note of caution. It reports that the 40Gbps market is growing nicely and believes that it could be at least a decade before there is a substantial 100Gbps market.

“100G is not going to kill 40G and, if anything, we are bullish about 40G,” says Mark Lutkowitz, principal at Telecom Pragmatics. “I’m not talking about large volume ramp-up of 40G but there is arguably a ramp-up.”

100G Paradox

One reason, not often mentioned, why 40Gbps is being adopted is that it does not require as many networking changes as when 100Gbps technology is deployed. “There is additional compensation [needed] and it is not clear that all the fibres will work with 100G,” says Lutkowitz.

There is also what he calls the ‘100G Paradox’.

The 100Gbps technology will most likely be considered at pinch-points in the operators’ networks. Yet these are the same network pinch-points that were first upgraded to 10Gbps. As a result they are likely to have legacy DWDM systems such that upgrading to 100Gbps is a considerable undertaking. “It is questionable whether these systems can even work with 100G,” he says.

“When you look at service providers, they are willing to put up with a whole amount of pain before they buy something, and they will certainly not forklift electronics or fibre - they will only do that as a last resort,” says Lutkowitz. Another attraction of 40Gbps for operators is its growing maturity: it is a technology that has been available for several years.

Costs

Telecom Pragmatics also dismisses the argument, made by component vendors, that the market will move quickly to 100Gbps, especially if the cost-per-bit of 100G technology declines faster than expected.

“The first cost [point] is ten times 10G and really you need to get to something like six or seven times [the cost of] 10G before you consider 100G,” says Lutkowitz. But that is not the sole cost: network protection is needed, which means a second system, and there are additional networking and operational costs associated with 100Gbps.
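
The arithmetic behind that threshold is straightforward: a 100Gbps card carries ten times the capacity of a 10Gbps card, so its price multiple over 10Gbps directly sets its relative cost-per-bit, as this quick check shows:

```python
# Worked check of the cost-per-bit argument: a 100G card carries 10x
# the capacity of a 10G card, so its price multiple over 10G sets its
# cost-per-bit relative to 10G.

cost_10g = 1.0  # normalised price of a 10G line card
for multiple in (10, 7, 6):
    cost_per_bit_ratio = (multiple * cost_10g) / 10  # vs. 10G's cost-per-bit
    print(f"100G at {multiple}x the 10G price -> "
          f"{cost_per_bit_ratio:.0%} of 10G's cost per bit")
# 10x -> 100% (mere parity); 7x -> 70%; 6x -> 60%
```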

Moreover, to whatever extent 40G is deployed, it will put further pressure on 100Gbps as 40Gbps prices decline. “In the 10G market, prices continued to decline and that precluded 40G, now you have 40G - to whatever extent there is deployment - precluding 100G,” says Lutkowitz.

“It could easily be 10 to 15 years before we see 100G in a big way on the public network side,” says Lutkowitz. But he stresses that demand for 100Gbps technology in the data centre and for the enterprise will be a different story.

Meanwhile Telecom Pragmatics expects further operator trials at 100Gbps as well as new system announcements from vendors. “But we really think that 40G should be getting a lot more respect than it is getting,” says Lutkowitz. 


Cisco Systems' coherent power move

Cisco Systems’ acquisition of CoreOptics means the company has largely cornered the coherent market, says Telecom Pragmatics. 

Cisco Systems announced its intent to acquire the optical transmission specialist CoreOptics back in May. CoreOptics has digital signal processing expertise used to enhance high-speed long-haul dense wavelength division multiplexing (DWDM) optical transmission. Cisco’s acquisition values the German company at US $99m.

“It has become clear that Cisco, with a few exceptions, has cornered the coherent market for 40 Gig and 100 Gig,” says Mark Lutkowitz, principal at market research firm, Telecom Pragmatics, which has published a report on Cisco's move.

Prior to Cisco’s move, several system vendors were working with CoreOptics for coherent transmission technology at 40 and 100 Gigabit-per-second (Gbps). Nokia Siemens Networks (NSN) was one, and had invested in the company; another was Fujitsu Network Communications. Telecom Pragmatics believes other firms were also working with CoreOptics, including Xtera and Ericsson (CoreOptics had worked with Marconi before Marconi was acquired by Ericsson).

ACG Research, in its May report Cisco/CoreOptics Acquisition: What Does It Mean for the Packet Optical Transport Space?, also claimed that the acquisition would set back NSN and Ericsson, and listed other system vendors, such as ADVA Optical Networking and Transmode, that may have been considering using CoreOptics’ 100Gbps multi-source agreement (MSA) design.

“The mere fact that you have all these companies working with CoreOptics - and we don’t know all of them – says it all,” says Lutkowitz. “This was the company they were initially going to be depending on and Cisco made a power move that was brilliant.” 

With Cisco bringing CoreOptics in-house, these system vendors will need to find a new coherent technology partner. “The next chance would be with a company like Opnext coming out with a sub-system,” says Lutkowitz. “There is no doubt about it – this was a major coup for Cisco.”

For Cisco, the deal is important for its router business more than its optical transmission business. “In terms of transceivers that go into routers and switches it was absolutely essential that Cisco comes up with coherent technology,” says Lutkowitz. Cisco views transport as a low-margin business unlike IP core routers. “This [acquisition] is about protecting Cisco’s bread and butter – the router business,” he says.

The acquisition also has consequences among the router vendors. Alcatel-Lucent has its own 100Gbps coherent technology which it could add to its router platforms. In contrast, the other main router player, Juniper Networks, must develop the technology internally or partner. Telecom Pragmatics claims Juniper has an internal coherent technology development programme.

40 and 100 Gig markets

Cisco kick-started the 40Gbps market when it added the high-speed interface to its IP core router, and Lutkowitz expects Cisco to do the same at 100Gbps. “But let me be clear, we don’t believe 100Gbps serial will dominate the market for a long time, or 40Gbps for that matter.”

In Telecom Pragmatics’ view, multiple channels of 10Gbps will be the predominant approach. First, 10Gbps DWDM systems are widely deployed and their cost continues to come down. And while Alcatel-Lucent and Ciena already have 100Gbps systems, they remain expensive given the infancy of the technology.  

But with large US operator business to be won, system vendors must have a 100Gbps optical transport offering. Verizon has an ultra-long-haul request for proposal (RFP) out, and AT&T has named Ciena as the first domain supplier for its optical and transport equipment, with a second supplier still to be announced. And according to ACG Research, Google also has DWDM business to award.

What next?

Besides Alcatel-Lucent, Ciena, Infinera, Huawei, and now Cisco developing coherent technology, several optical module players are also developing 100Gbps line-side optics, including Opnext, Oclaro and JDS Uniphase. There are also players, such as Finisar, that have yet to detail their plans. Lutkowitz believes that if Finisar is holding off developing 100Gbps coherent modules, it may prove a wise move given the continuing strength of the 10Gbps DWDM market.

Opnext acquired subsystem vendor StrataLight Communications in January 2009 and one benefit was gaining StrataLight’s systems expertise and its direct access to operators. Oclaro made its own subsystem move in July, acquiring Mintera. Oclaro has also partnered with Clariphy, which is developing coherent receiver ASICs.

But Telecom Pragmatics questions the long-term prospects of high-end line-side module/subsystem vendors. “This [technology] is the guts of systems and where the money is made,” says Lutkowitz. “Ultimately all the system vendors will look to develop their own subsystems.”

Lutkowitz highlights other challenges facing module firms. Since they are foremost optical component makers, it is challenging for them to make significant investments in subsystems. He also questions when the 100Gbps market will take off. “Some of our [market research] competitors talk about 2014 but they don’t know,” says Lutkowitz.

But isn’t the trend that, over time, 40Gbps and 100Gbps modules will gain an increasing share of line-side systems optics, as has happened at 10Gbps?

That is certainly LightCounting’s view: it sees Cisco’s move as good news for component and transceiver vendors developing 40 and 100Gbps products. LightCounting argues that with Cisco’s commitment to the technology, other system vendors will have to follow suit, boosting demand for these higher-margin products.

“There will be all types of module vendors but it is possible that going higher in the food chain will not work out,” says Lutkowitz. “There will be more module and component vendors than we have now but all I question is: where are the examples of companies that have gone into subsystems that have done relatively well?”

Opnext is likely to be the next vendor with 100Gbps product, says Lutkowitz, and Oclaro could easily come out with its own offering. “All I’m saying is that there is a possibility that, in the final analysis, systems vendors take the technology and do it themselves.”

