OFC/NFOEC 2013 industry reflections - Part 3

Joe Berthold, vice president of network architecture, Ciena
The two topics that received the most attention, judging from session attendance and discussion in the hallways, were silicon photonics and software-defined networking (SDN). I predict that next year those who wish to capitalise on this popularity wave will be submitting papers on SDN-enabled silicon photonics.
More seriously, though, there remains vigorous debate about the relative importance of III-V integrated optics and silicon photonics, and I look forward to seeing how this evolves in the marketplace.
"Some of the SDN-related talks from the global research and education community were very good. They have been pioneers in making high capacity optical networks dynamic, and we have much to learn from them as they have several years' experience building and operating SDNs, even before the term existed."
With respect to SDN and service providers, it is going to be several years before we see a true SDN-enabled network, as there are many other issues that need to be addressed.
This is one of the reasons Ciena is taking a lead role in the Open Networking Foundation's investigation of applying OpenFlow or the like at the optical layers. I thought some of the SDN-related talks from the global research and education community were very good. They have been pioneers in making high capacity optical networks dynamic, and we have much to learn from them as they have several years' experience building and operating SDNs, even before the term existed.
"One of the most interesting commercial developments to watch in the coming years related to 100 Gig is the work that has begun on pluggable coherent analogue optical modules"
There was also quite a bit of buzz about 100 Gig deployments. It was nice to hear one of the industry analysts refer to 2013 as the year of 100 Gig as this is an area where Ciena has been quite successful.
I did not see or hear of any dramatic advances reported at the conference. What I did see, in talks and on the show floor, was a broad base of technology development that will lead to increased system density and lower cost and power.
On the client side, many companies showed 100 Gig CFP2 modules, and there was quite a bit of talk and demonstrations of technology building blocks that will lead to even smaller size.
Another optical networking topic, one that means many different things to different people, was flexible grids and flexible transmission formats. From speaking with a number of network operators, it seems there is an appreciation of the future-proofing benefit of flexible-grid ROADMs, but also a recognition that the spectral efficiency gains to be had are quite limited, especially in a ROADM mesh network. So they are emerging as a nice-to-have feature but not a must-have-at-any-price feature.
Another 'flex' concept is flex-transceivers. The flavour of flex-transceiver that most of the people I spoke with consider practical is one that maintains a fixed baud rate but varies the modulation format, say from BPSK to QPSK, 8PSK, 16QAM and perhaps beyond, to fit different distance applications.
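The fixed-baud, variable-format idea reduces to simple arithmetic: line rate = baud rate x bits per symbol x polarisations. A minimal sketch, assuming an illustrative 32 Gbaud dual-polarisation transceiver and ignoring FEC overhead (none of these figures come from any specific product):

```python
# Sketch: line rate of a flex-transceiver that keeps a fixed baud rate
# but switches modulation format. The 32 Gbaud symbol rate and
# dual-polarisation assumption are illustrative; FEC overhead ignored.

BAUD_RATE_GBAUD = 32   # symbols per second, in Gbaud (illustrative)
POLARISATIONS = 2      # dual-polarisation doubles the bit rate

# bits carried per symbol, per polarisation, for each format
BITS_PER_SYMBOL = {"BPSK": 1, "QPSK": 2, "8PSK": 3, "16QAM": 4}

def line_rate_gbps(fmt: str) -> int:
    """Raw line rate in Gb/s for the given modulation format."""
    return BAUD_RATE_GBAUD * BITS_PER_SYMBOL[fmt] * POLARISATIONS

for fmt in BITS_PER_SYMBOL:
    print(f"{fmt:>6}: {line_rate_gbps(fmt)} Gb/s")
```

Denser formats carry more bits per symbol but need a higher signal-to-noise ratio, hence the shorter reach; that is the distance trade-off described above.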
I think one of the most interesting commercial developments to watch in the coming years related to 100 Gig is the work that has begun on pluggable coherent analogue optical modules, likely to emerge in a CFP2 form factor. I view this as a major next step the industry will take to reduce the cost and increase the density of coherent interfaces on switches and transmission systems.
The OIF did the industry a great service in pulling together a set of interoperable building blocks that form the photonic foundation of 100 Gig solutions today. The next step is to integrate these pieces and place them in a pluggable module. There is as yet no formal project with this goal, but discussions are underway.
Watch this space...

Karen Liu, principal analyst, components, Ovum
There was a real sense of openness to new directions even as a lot of short-term activity continues to focus on getting 100 Gig to full maturity. Instead of pitching their favourite directions, some people actually solicited more ideas.
"One trend to watch is the battle between VCSELs and silicon photonics"
Directions that seemed promising but unformed last year got a bit firmed up. Connections are being made from the application down to the device technology. What had been wacky ideas previously are being taken seriously:
- Optical circuit switching looks like it will have a place in conjunction with Ethernet switching.
- Spatial division multiplexing is the hot research topic. I like the work that Bell Labs is doing, particularly where the add/drop increment ties together multiple cores of the same wavelength so compensation algorithms can take advantage of similar environmental history. This is moving past the physics, to thinking about network architecture.
- Monolithic integration of electronics with photonics. This is still at an early stage, and primarily around the drivers. But as it is motivated by power consumption, it seems like a solid direction that will have legs.
One trend to watch is the battle between VCSELs (vertical-cavity surface-emitting lasers) and silicon photonics. Conventional wisdom was that VCSELs were for multi-mode and silicon photonics for single-mode, but both have crossed over into the other's space.
Martin Guy, vice-president of product management and technology, Teraxion
There were several noteworthy developments. In particular, silicon photonics has started to show its promise as new products are introduced:
- Cisco announced its 100 Gig CPAK transceiver following the Lightwire acquisition
- Kotura showed its 100 Gig WDM QSFP package with only 3.5 W of power consumption.
- Luxtera demonstrated a 100 Gig QSFP package using four fibre pairs, each [fibre] carrying 25Gbps.
- Teraxion introduced its small form factor coherent receiver based on silicon photonics
Silicon photonics was also widely discussed at the technical conference, and very impressive results were demonstrated. Most notably, Cisco and Alcatel-Lucent presented results on silicon photonic modulators for metro and long-haul coherent systems with performance comparable to lithium niobate.
Tunable laser technologies on silicon photonics were also presented by companies such as Skorpios and Aurrion during the post-deadline sessions.
"Cisco and Alcatel-Lucent presented results on silicon photonic modulators for metro and long-haul coherent systems with performance comparable to lithium niobate."
All those new silicon photonics technologies could eventually become key building blocks of future highly-integrated transceivers.
Pluggable coherent modules will be a big market opportunity and it is all about density and low power consumption.
At the show, Oclaro demonstrated key milestones towards bringing a CFP2 coherent module to market by mid-2014; the product is also on the roadmap of all the other major transceiver vendors.
From Teraxion’s perspective, our recent acquisition of Cogo Optronics Canada for high-speed modulators is directly in line with this market trend at the module level, where performance, size and low power consumption are key requirements.
Paul Brooks, product line manager for high-speed test solutions, JDSU
The growing confidence in second-generation 100 Gig CFP2s was evident at the show. Many companies, including JDSU, demonstrated robust second-generation 100 Gig modules which will drive confidence across the whole 100 Gig ecosystem to allow cost efficient 100 Gig clients. Our ONT CFP2 test solution was well received and we spent a lot of time demonstrating the features that will enable successful CFP2 deployment.
"Many companies are openly discussing 400 Gig and beyond; the bandwidth demand is there, but considerable technology challenges need to be addressed"
One thing reinforced at the show is the continued importance of innovation in the test and measurement solutions required by our customers as we move to 100 Gig+ systems.
Many companies are openly discussing 400 Gig and beyond; the bandwidth demand is there, but considerable technology challenges need to be addressed. The intellectual horsepower present at the show allowed fruitful and engaging discussions on key topics.
See also:
Part 1: Software-defined networking: A network game-changer?
Part 2: OFC/NFOEC 2013 industry reflections
Part 4: OFC/NFOEC 2013 industry reflections
Part 5: OFC/NFOEC 2013 industry reflections
OFC/NFOEC 2013 industry reflections - Part 2
Bill Gartner, vice president and general manager of the high-end routing and optical business unit at Cisco Systems
There were several key themes during this year’s OFC conference, but what I found most compelling were the disruptive trends and technologies that stand to significantly impact the optical communications market in the coming years.

"SDN could be the single biggest disruptor in the transport industry and has the potential to transform network programmability and orchestration"
One of the hottest themes at this year’s OFC conference is the role of silicon photonics and the benefits it presents to service providers and carriers. Silicon photonics is truly one of the most interesting advancements taking place in the industry, as it has the potential to drastically increase density while lowering the power and overall cost of optical interfaces.
Several carriers at the show, including CenturyLink and AT&T, presented their view that optics is becoming a larger portion of their spend and now exceeds the cost of packet switching technologies.
A second key trend coming out of the show is software-defined networking (SDN) and its impact on networking. There is tremendous industry interest around this topic and it extended to the Anaheim Convention Center.
With SDN, our customers can increase flexibility in terms of selecting the features and protocols that make sense for their network application – whether it is a data centre application, a service provider application or a large-scale enterprise application.
The last theme that resonated during OFC was around the convergence of packet and optical solutions. As service providers look for ways to decrease both CapEx and OpEx related to the network, incremental technology improvements will decrease costs. However, for many customers, their network capacity is growing far faster than their revenues, so incremental improvements will not yield required reductions.
"As an industry we have to evolve organisationally and technically. Those who fail to recognise that face extinction."
This shows us that we need to explore more fundamental shifts in architectures that have the potential to yield significant savings in OpEx and CapEx. Enter the convergence of IP and optical - this may take the form of converged platforms, but will also involve multi-layer control planes that allow the exchange of information between the packet and optical layers. This convergence helps answer questions like: How well is the network utilised? Can it be optimised? Are there multi-layer protection/restoration schemes that make better use of the available resources?
During the conference, I had the opportunity to present at the OSA Executive Forum, which brought together more than 150 senior-level executives to discuss key themes, opportunities and challenges facing the next generation in optical communications.
What struck me is that this industry is constantly evolving, which presents challenges and opportunities. We are looking at an industry that is highly fragmented at the moment and requires further streamlining.
You have new players at every level of the value chain that bring exciting, unique perspectives and advanced technologies that increase efficiency and decrease costs. But none of this innovation comes without change; as an industry we have to evolve organisationally and technically. Those who fail to recognise that face extinction.
"This is like solving a simultaneous equation where the variables are power, cost and density – you need to solve for all three"
The key themes discussed at OFC are an indication of what is to come in optical transport and mirror our top priorities at Cisco.
In the coming year, we expect to see CMOS photonics technology enable lower power pluggables. This is the case with CPAK, but more broadly, we will see this technology find its way into low cost board-to-board interconnect and chassis-to-chassis interconnect.
As an industry, we have made great progress in reducing the cost of transmitting bits over a long distance but much more remains to be done. As bit rates increase to beyond 100 Gigabit, we must look for ways to drive this cost down faster, while decreasing both power and size. This is like solving a simultaneous equation where the variables are power, cost and density – you need to solve for all three.
During the next five years, I think that SDN could be the single biggest disruptor in the transport industry and has the potential to transform network programmability and orchestration.
We will see an entire software industry emerge around SDN, but it is important to note that this is really all about multilayer control – Layer 0 to Layer 3. SDN is not simply an optical transport problem to be solved. The advantage will go to those who are looking at this holistically.
Brandon Collings, CTO of the communications and commercial optical products group at JDSU
I found it interesting that the major network equipment manufacturers had a significantly increased presence on the exhibition floor.

"This year’s focus and buzz was all on silicon photonics with researchers leveraging it against nearly every function in telecom and datacom"
I learned a lot about SDN at levels above the photonic network. This is a very complex topic likely to take some time to fully mature within telecom networks; however, the potential values appear compelling.
This year’s focus and buzz was all on silicon photonics with researchers leveraging it against nearly every function in telecom and datacom. I expect it will be interesting for industry watchers how this promising technology evolves within the industry, where it achieves its promise and where it runs into practical roadblocks.
Vladimir Kozlov, CEO of LightCounting
This was the best OFC since 2000. The optical community is once again energised. Some attribute the improved mood to the high-value acquisitions of Lightwire and Nicira made last year, but this is just part of the story.
Yes, the potential of silicon photonics and software-defined networking (which Lightwire and Nicira were focussed on, respectively) does broaden the horizon for optical technologies in communication networks and data centres. But the excitement is not limited to just these two ideas. All the new - and old or forgotten - ideas, technologies and products once again have a shot at making a difference. Demand for optics is strong and the customers are hungry for innovation.
"Demand for optics is strong and the customers are hungry for innovation"
In contrast to 2000, few people are getting carried away with the excitement. The mood is much more constructive this time and it makes me hope that most of this new energy will not be wasted.
I would not single out a specific technology or application to watch out for in the next few years. All of them have opportunities and challenges ahead. We will keep track of as many developments as we can and make sure that hype does not lead the industry off the tracks this time.
Effie Favreau, marketing, Sumitomo Electric
One hundred Gigabit technology is here. Last year there was a lot of hype about 100 Gigabit and now it is reality; vendors have products that are shipping.
Sumitomo and ClariPhy partnered on pluggable coherent modules. Together, we hosted an impressive demonstration with all the components to make pluggable coherent modules available next year.
"For the enterprise/data centre, vendors requiring low cost, high density equipment really need the CFP4"
One thing I learned from the show is that vendors need to re-purpose their existing equipment. There was much discussion regarding software-enabled applications and passives to enhance the performance of networks and make them more intelligent.
There was the introduction of the CFP2 from several vendors, as well as Cisco's CPAK. For the enterprise/data centre, vendors requiring low cost, high density equipment really need the CFP4. At Sumitomo, we are concentrating our R&D efforts on the CFP4.
See also:
Part 1: Software-defined networking: A network game-changer?
Part 3: OFC/NFOEC 2013 industry reflections
Part 4: OFC/NFOEC 2013 industry reflections
Part 5: OFC/NFOEC 2013 industry reflections
Software-defined networking: A network game-changer?
OFC/NFOEC reflections: Part 1

"We [operators] need to move faster"
Andrew Lord, BT
Q: What was your impression of the show?
A: Nothing out of the ordinary. I haven't come away clutching a whole bunch of results that I'm determined to go and check out, which I do sometimes.
I'm quite impressed by how the main equipment vendors have moved on to look seriously at post-100 Gigabit transmission. In fact we have some [equipment] in the labs [at BT]. That is moving on pretty quickly. I don't know if there is a need for it just yet but they are certainly getting out there, not with live chips but making serious noises on 400 Gig and beyond.
There was a talk on the CFP [module] and whether we are going to be moving to a coherent CFP at 100 Gig. So what is going to happen to those prices? Is there really going to be a role for non-coherent 100 Gig? That is still a question in my mind.
"Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?"
I was quite keen on that but I'm wondering if there is going to be a limited opportunity for the non-coherent 100 Gig variants. The coherent prices will drop and my feeling from this OFC is they are going to drop pretty quickly when people start putting these things [100 Gig coherent] in; we are putting them in. So I don't know quite what the scope is for people that are trying to push that [100 Gigabit direct detection].
What was noteworthy at the show?
There is much talk about software-defined networking (SDN), so much talk that a lot of people in my position have been describing it as hype. There is a robust debate internally [within BT] on the merits of SDN which is essentially a data centre activity. In a live network, can we make use of it? There is some skepticism.
I'm still fairly optimistic about SDN and the role it might have and the [OFC/NFOEC] conference helped that.
I'm expecting next year to be the SDN conference and I'd be surprised if SDN doesn't have a much greater impact then [OFC/NFOEC 2014] with more people demoing SDN use cases.
Why is there so much excitement about SDN?
Why now when it could have happened years ago? We could have all had GMPLS (Generalised Multi-Protocol Label Switching) control planes. We haven't got them. Control plane research has been around for a long time; we don't use it: we could but we don't. We are still sitting with heavy OpEx-centric networks, especially optical.
"The 'something different' this conference was spatial-division multiplexing"
So why are we getting excited? Getting the cost out of the operational side - the software-development side, and the ability to buy from whomever we want to.
For example, if we want to buy a new network, we put out a tender and have some 10 responses. It is hard to adjudicate them all equally when, with some of them, we'd have to start from scratch with software development, whereas with others we have a head start as our own management interface has already been developed. That shouldn't and doesn't need to be the case.
Open the equipment's north-bound interface into our own OSS (operations support systems) and, in theory (and this is probably naive), any specific OSS we develop ought to work.
Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?
We want to as it means we can leverage competition but also we can get new network concepts and builds in quicker without having to suffer 18 months of writing new code to manage the thing. We used to do that but it is no longer acceptable. It is too expensive and time consuming; we need to move faster.
It [the interest in SDN] is just competition hotting up and costs getting harder to manage. This is an area that is now the focus and SDN possibly provides a way through that.
Another issue is the ability to quickly put new applications and services onto our networks. For example, a bank wants to do data backup but doesn't want to spend a year and resources developing something that it uses only occasionally. Is there a bandwidth-on-demand application we can put onto our basic network infrastructure? Why not?
SDN gives us a chance to do something like that, we could roll it out quickly for specific customers.
Anything else at OFC/NFOEC that struck you as noteworthy?
The core networks aspect of OFC is really my main interest.
You are taking the components, a big part of OFC, and then the transmission experiments and all the great results that they get - multiple Terabits and new modulation formats - and then in networks you are saying: What can I build?
The network has always been the poor relation. It has not had the great exposure or the same excitement. Well, now, the network is becoming centre stage.
As you see components and transmission mature - and it is maturing, as the capacity we are seeing on a fibre is almost hitting the natural limit - so the spectral efficiency, the number of bits you can squeeze into a single Hertz, is hitting the limit of 3, 4, 5, 6 [bit/s/Hz]. You can't get much more than that if you want to go a reasonable distance.
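The ceiling Lord describes follows from Shannon's capacity formula, C/B = log2(1 + SNR). A quick sketch of the spectral-efficiency limit (the SNR values below are illustrative, not measurements from any link):

```python
# Sketch: Shannon spectral-efficiency limit, C/B = log2(1 + SNR),
# in bit/s/Hz. The SNR figures are illustrative of what long-distance
# links might deliver, not measured values.
import math

def spectral_efficiency(snr_db: float) -> float:
    """Shannon limit in bit/s/Hz for a given SNR in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return math.log2(1 + snr_linear)

for snr_db in (10, 13, 16):
    print(f"SNR {snr_db} dB -> {spectral_efficiency(snr_db):.1f} bit/s/Hz")
```

With link SNRs in the 10-16 dB range, the limit works out to roughly 3.5 to 5.4 bit/s/Hz, which is why figures of 3 to 6 bit/s/Hz mark the practical ceiling at reasonable distances.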
So the big buzz word - 70 to 80 percent of the OFC papers we reviewed - was flex-grid, turning the optical spectrum in fibre into a much more flexible commodity where you can have whatever spectrum you want between nodes dynamically. Very, very interesting; loads of papers on that. How do you manage that? What benefits does it give?
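Flex-grid, as standardised in ITU-T G.694.1, lets a channel occupy a slot whose width is any multiple of 12.5 GHz, rather than a fixed 50 GHz channel. A minimal sketch of the bookkeeping (the channel mix and signal bandwidths are invented for illustration):

```python
# Sketch: flexible-grid slot sizing per ITU-T G.694.1, where a channel
# occupies n x 12.5 GHz of spectrum instead of a fixed 50 GHz slot.
# The channel mix below is invented for illustration.
import math

SLICE_GHZ = 12.5  # flex-grid slot-width granularity

def slot_width_ghz(signal_bandwidth_ghz: float) -> float:
    """Smallest multiple of 12.5 GHz that accommodates the signal."""
    return math.ceil(signal_bandwidth_ghz / SLICE_GHZ) * SLICE_GHZ

# Invented signal bandwidths in GHz, purely for illustration
demands = {"100G": 33.0, "200G": 45.0, "400G": 70.0}
for name, bw in demands.items():
    print(f"{name}: signal {bw} GHz -> slot {slot_width_ghz(bw)} GHz")
```

The management question Lord raises is exactly this bookkeeping at network scale: which slices are free on which links, and how to avoid fragmenting the spectrum as variable-width channels come and go.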
What did you learn from the show?
One area I don't get yet is spatial-division multiplexing. Fibre is filling up so where do we go? Well, we need to go somewhere because we are predicting our networks continuing to grow at 35 to 40 percent.
Now we are hitting a new era. Putting fibre in doesn't really solve the problem in terms of cost, energy and space. You are just layering solutions on top of each other and you don't get any more revenue from it. We are stuffed unless we do something different.
The 'something different' this conference was spatial-division multiplexing. You still have a single fibre but you put in multiple cores and that is the next way of increasing capacity. There is an awful lot of work being done in this area.
I gave a paper [pointing out the challenges]. I couldn't see how you would build the splicing equipment, or how you would get this fibre qualified. Given the 30-40 years of expertise of companies like Corning in making single-mode fibre, are we really going to go through all that again for this new fibre? How long is that going to take? How do you align these things?
"SDN for many people is data centres and I think we [operators] mean something a bit different."
I just presented the basic pitfalls from an operator's perspective of using this stuff. That is my skeptic side. But I could be proved wrong, it has happened before!
Anything you learned that got you excited?
One thing I saw is optics pushing out.
In the past we saw 100 Megabit and one Gigabit Ethernet (GbE) being king of a certain part of the network. People were talking about that becoming optics.
We are starting to see optics entering a new phase. Ten Gigabit Ethernet is a wavelength, a colour on a fibre. If the cost of those very simple 10GbE transceivers continues to drop, we will start to see optics enter a new phase where we could be seeing it all over the place: you have a GigE port, well, have a wavelength.
[When that happens] optics comes centre stage and then you have to address optical questions. This is exciting and Ericsson was talking a bit about that.
What will you be monitoring between now and the next OFC?
We are accelerating our SDN work. We see that as being game-changing in terms of networks. I've seen enough open standards emerging, and enough will around the industry among the people I've spoken to - some of the vendors want to do work with us - that it is exciting. Think of things like 4k and 8k (ultra-high-definition) TV, and providing the bandwidth to make them sensible.
"I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job."
Think of a health application where you have a 4 or 8k TV camera giving an ultra high-res picture of a scan, piping that around the network at many many Gigabits. These type of applications are exciting and that is where we are going to be putting a bit more effort. Rather than the traditional just thinking about transmission, we are moving on to some solid networking; that is how we are migrating it in the group.
When you say open standards [for SDN], OpenFlow comes to mind.
OpenFlow is a lovely academic thing. It allows you to open a box for a university to try their own algorithms. But it doesn't really help us because we don't want to get down to that level.
I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job.
What we need is the next level up: taking entire network functions and having them presented in an open way.
For example, something like OpenStack [the open source cloud computing software] that allows you to start to bring networking, and compute and memory resources in data centres together.
You can start to say: I have a data centre here, another here and some networking in between, how can I orchestrate all of that? I need to provide some backup or some protection, what gets all those diverse elements, in very different parts of the industry, what is it that will orchestrate that automatically?
That is the kind of open theme that operators are interested in.
That sounds different to what is being developed for SDN in the data centre. Are there two areas here: one networking and one the data centre?
You are quite right. SDN for many people is data centres and I think we mean something a bit different. We are trying to have multi-vendor leverage and as I've said, look at the software issues.
We also need to be a bit clearer as to what we mean by it [SDN].
Andrew Lord has been appointed technical chair at OFC/NFOEC
Further reading
Part 2: OFC/NFOEC 2013 industry reflections
Part 3: OFC/NFOEC 2013 industry reflections
Part 4: OFC/NFOEC 2013 industry reflections
Part 5: OFC/NFOEC 2013 industry reflections
Telcos eye servers & software to meet networking needs
- The Network Functions Virtualisation (NFV) initiative aims to use common servers for networking functions
- The initiative promises to be industry disruptive
"The sheer massive [server] volumes is generating an innovation dynamic that is far beyond what we would expect to see in networking"
Don Clarke, NFV
Telcos want to embrace the rapid developments in IT to benefit their networks and operations.
The Network Functions Virtualisation (NFV) initiative, set up by the European Telecommunications Standards Institute (ETSI), has started work to use servers and virtualisation technology to replace the many specialist hardware boxes in their networks. Such boxes can be expensive to maintain, consume valuable floor space and power, and add to the operators' already complex operations support systems (OSS).
"Data centre technology has evolved to the point where the raw throughput of the compute resources is sufficient to do things in networking that previously could only be done with bespoke hardware and software," says Don Clarke, technical manager of the NFV industry specification group, and who is BT's head of network evolution innovation. "The data centre is commoditising server hardware, and enormous amounts of software innovation - in applications and operations - is being applied.”
"Everything above Layer 2 is in the compute domain and can be put on industry-standard servers"
The operators have been exploring independently how IT technology can be applied to networking. Now they have joined forces via the NFV initiative.
"The most exciting thing about the technology is piggybacking on the innovation that is going on in the data centre," says Clarke. "The sheer massive volumes is generating an innovation dynamic that is far beyond what we would expect to see in networking."
Another key advantage is that, once networks become software-based, enormous flexibility results when creating new services and bringing them to market quickly, while also reducing costs.
NFV and SDN
The NFV initiative is being promoted as a complement to software-defined networking (SDN).
The complementary relationship between NFV and SDN. Source: NFV.
SDN focusses on control mechanisms that reconfigure networks by separating the control plane from the data plane. The transport network can be seen as dumb pipes, with the control mechanisms adding the intelligence.
“There are other areas of the network where there is intrinsic complexity of processing rather than raw throughput,” says Clarke.
These include firewalls, session border controllers, deep packet inspection boxes and gateways - all functions that can be ported onto servers. Indeed, once running as software on servers such networking functions can be virtualised.
"Everything above Layer 2 is in the compute domain and can be put on industry-standard servers,” says Clarke. This could even include core IP routers but clearly that is not the best use of general-purpose computing, and the initial focus will be equipment at the edge of the network.
Clarke describes how operators will virtualise network elements and interface them to their existing OSS. “We see SDN as a longer journey for us,” he says. “In the meantime we want to get the benefits of network virtualisation alongside existing networks and reusing our OSS where we can.”
NFV will first be applied to appliances that lend themselves to virtualisation and where the impact on the OSS will be minimal. Here the appliance will be loaded as software on a common server instead of current bespoke systems situated at the network's end points. “You [as an operator] can start to draw a list of target things as to what will be of most interest,” says Clarke.
Virtualised network appliances are not a new concept and examples are already available on the market. Vanu's software-based radio access network technology is one such example. “What has changed is the compute resources available in servers is now sufficient, and the volume of servers [made] is so massive compared to five years ago,” says Clarke.
The NFV forum aims to create an industry-wide understanding as to what the challenges are while ensuring that there are common tools for operators that will also increase the total available market.
Clarke stresses that the physical shape of operators' networks - such as local exchange numbers - will not change greatly with the uptake of NFV. “But the kind of equipment in those locations will change, and that equipment will be server-based," says Clarke.
"One of the things the software world has shown us is that if you sit on your hands, a player comes out of nowhere and takes your business"
One issue for operators is their telecom-specific requirements. Equipment is typically hardened and has strict reliability requirements. Moreover, operators' central offices are not as well air-conditioned as data centres. This may require innovation around reliability and resilience in software, such that should a server fail, the system adapts and the server workload is moved elsewhere. The faulty server can then be replaced by an engineer on a scheduled service visit rather than an emergency one.
"Once you get into the software world, all kinds of interesting things that enhance resilience and reliability become possible," says Clarke.
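Clarke's point about software resilience can be sketched as a simple placement routine. This is a toy illustration only - the server and workload names are invented and nothing here reflects a real NFV implementation:

```python
# Toy sketch of the failover behaviour described above: when a server fails
# its health check, its workloads are reassigned to the surviving servers,
# least-loaded first, and the failed unit can wait for a scheduled - rather
# than emergency - replacement visit. All names are illustrative.

def rebalance(servers, workloads):
    """Place workloads on healthy servers only, least-loaded first."""
    healthy = [s["name"] for s in servers if s["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy servers left")
    load = {name: 0 for name in healthy}
    placement = {}
    for wl in workloads:
        target = min(load, key=load.get)  # least-loaded healthy server
        placement[wl] = target
        load[target] += 1
    return placement

servers = [
    {"name": "srv-a", "healthy": True},
    {"name": "srv-b", "healthy": False},  # failed: replace on next site visit
    {"name": "srv-c", "healthy": True},
]
placement = rebalance(servers, ["firewall", "dpi", "gateway"])
print(placement)  # no workload lands on the failed srv-b
```

The point of the sketch is the operational shift: recovery becomes a software decision, not a truck roll.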
Industry disruption
The NFV initiative could prove disruptive for many telecom vendors.
"This is potentially massively disruptive," says Clarke. "But what is so positive about this is that it is new." Moreover, this is a development that operators are flagging to vendors as something that they want.
Clarke admits that many vendors have existing product lines that they will want to protect. But these vendors have unique telecom networking expertise which no IT start-up entering the field can match.
"It is all about timing," says Clarke. "When they [telecom vendors] decisively move their product portfolios to software versions is an internal battle that is happening right now. Yes, it is disruptive, but only if they sit on their hands and do nothing and their competitors move first."
Clarke is optimistic about the vendors' response to the initiative. "One of the things the software world has shown us is that if you sit on your hands, a player comes out of nowhere and takes your business," he says.
Once operators deploy software-based network elements, they will be able to do new things with regard to services. "Different kinds of service profiles, different kinds of capabilities and different billing arrangements become possible because it is software- not hardware-based."
Work status
The NFV initiative was unveiled late last year with the first meeting being held in January. The initiative includes operators such as AT&T, BT, Deutsche Telekom, Orange, Telecom Italia, Telefonica and Verizon as well as telecoms equipment vendors, IT vendors and technology providers.
One of the meeting's first tasks was to identify the issues to be addressed to enable the use of servers for telecom functions. Around 60 companies attended the meeting - including 20-odd operators - to create the organisational structure to address these issues.
Two expert groups - on security, and on performance and portability - were set up. “We see these issues as key for the four working groups,” says Clarke. These four working groups cover software architecture, infrastructure, reliability and resilience, and orchestration and management.
Work has started on the requirement specifications, with calls between the members taking place each day, says Clarke. The NFV work is expected to be completed by the end of 2014.
Further information:
White Paper: Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action
OFC/NFOEC 2013 to highlight a period of change
Next week's OFC/NFOEC conference and exhibition, to be held in Anaheim, California, provides an opportunity to assess developments in the network and the data centre and get an update on emerging, potentially disruptive technologies.
Source: Gazettabyte
Several networking developments suggest a period of change and opportunity for the industry. Yet the impact on optical component players will be subtle, with players being spared the full effects of any disruption. Meanwhile, industry players must contend with the ongoing challenges of fierce competition and price erosion while also funding much needed innovation.
The last year has seen the rise of software-defined networking (SDN), the operator-backed Network Functions Virtualization (NFV) initiative and growing interest in silicon photonics.
SDN is already being deployed in the data centre. Large data centre adopters are using an open standard implementation of SDN, OpenFlow, to control and tackle changing traffic flow requirements and workloads.
Telcos are also interested in SDN. They view the emerging technology as providing a more fundamental way to optimise their all-IP networks in terms of processing, storage and transport.
Carrier requirements are broader than those of data centre operators; unsurprising given their more complex networks. It is also unclear how open and interoperable SDN will be, given that established vendors are less keen to enable their switches and IP routers to be externally controlled. But the consensus is that the telcos and large content service providers backing SDN are too important to ignore. If traditional switch and router vendors hamper the initiative with proprietary add-ons, newer players will willingly fulfill the requirements.
Optical component players must assess how SDN will impact the optical layer and perhaps even components, a topic the OIF is already investigating, while keeping an eye on whether SDN causes market share shifts among switch and router vendors.
The ETSI Network Functions Virtualization (NFV) group is an operator-backed initiative that has received far less media attention than SDN. With NFV, telcos want to embrace IT server technology to replace the many specialist hardware boxes that take up valuable space, consume power, add to their already complex operations support systems (OSS) and require specialist staff. By moving functions such as firewalls, gateways and deep packet inspection onto cheap servers scaled using Ethernet switches, operators want lower-cost systems running virtualised implementations of these functions.
The two-year NFV initiative could prove disruptive for many specialist vendors, albeit ones whose equipment operates at the higher layers of the network, removed from the optical layer. But the takeaway for optical component players is how pervasive virtualisation technology is becoming and the continuing rise of the data centre.
Silicon photonics is one technology set to impact the data centre. The technology is already being used in active optical cables and optical engines to connect data centre equipment, and soon will appear in optical transceivers such as Cisco Systems' own 100Gbps CPAK module.
Silicon photonics promises to enable designs that disrupt existing equipment. Start-up Compass-EOS has announced a compact IP core router that is already running live operator traffic. The router makes use of a scalable chip coupled to huge-bandwidth optical interfaces based on 168 vertical-cavity surface-emitting lasers (VCSELs) and photodetectors, each running at 8 Gigabits-per-second (Gbps). The Terabit-plus bandwidth enables all the router chips to be connected in a mesh, doing away with the need for the router's midplane and switching fabric.
The integrated silicon-optics design is not strictly silicon photonics - silicon used as a medium for light - but it shows how optics is starting to be used for short distance links to enable disruptive system designs.
Some financial analysts are beating the drum of silicon photonics. But integrated designs using VCSELs, traditional photonic integration and silicon photonics will all co-exist for years to come. And while silicon photonics is expected to make its biggest impact in the data centre, the Compass-EOS router highlights how disruptive designs can occur in telecoms too.
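As a sanity check on the Compass-EOS figures quoted above, the optical interface bandwidth works out as follows:

```python
# Checking the 'Terabit-plus' figure: 168 VCSEL channels at 8Gbps each
# give roughly 1.34 Terabits of optical bandwidth per router chip.
channels = 168
gbps_per_channel = 8
total_tbps = channels * gbps_per_channel / 1000
print(total_tbps)  # 1.344
```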
Market status
The optical component industry continues to contend with more immediate challenges after experiencing sharp price declines in 2012.
The good news is that market research companies do not expect a repeat of the harsh price declines anytime soon. They also forecast better market prospects: The Dell'Oro Group expects optical transport to grow through 2017 at a compound annual growth rate (CAGR) of 10 percent, while LightCounting expects the optical transceiver market to grow 50 percent, to US $5.1bn in 2017. Meanwhile Ovum estimates the optical component market will grow by a mid-single-digit percent in 2013 after a contraction in 2012.
In the last year it has become clear how high-speed optical transport will evolve. The equipment makers' latest generation coherent ASICs use advanced modulation techniques, add flexibility by trading transport speed with reach, and use super-channels to support 400 Gigabit and 1 Terabit transmissions. Vendors are also looking longer term to techniques such as spatial-division multiplexing as fibre spectrum usage starts to approach the theoretical limit.
Yet the emphasis on 400 Gigabit and even 1 Terabit is somewhat surprising given how 100 Gigabit deployment is still in its infancy. And if the high-speed optical transmission roadmap is now clear, issues remain.
OFC/NFOEC 2013 will highlight the progress in 100 Gigabit transponder form factors that follow the 5x7-inch MSA, 100 Gigabit pluggable coherent modules, and the uptake of 100 Gigabit direct-detection modules for shorter reach links - tens or hundreds of kilometers - to connect data centres, for example.
There is also an industry consensus regarding wavelength-selective switches (WSSes) - the key building block of ROADMs - with the industry choosing a route-and-select architecture, although that was already the case a year ago.
There will also be announcements at OFC/NFOEC regarding client-side 40 and 100 Gigabit Ethernet developments based on the CFP2 and CFP4 that promise denser interfaces and Terabit capacity blades. Oclaro has already detailed its 100GBASE-LR4 10km CFP2 while Avago Technologies has announced its 100GBASE-SR10 parallel fibre CFP2 with a reach of 150m over OM4 fibre.
The CFP2 and QSFP+ make use of integrated photonic designs. Progress in optical integration, as always, is one topic to watch for at the show.
PON and WDM-PON remain areas of interest: not so much developments in state-of-the-art transceivers, such as those for 10 Gigabit EPON and XG-PON1, though these are clearly of interest, but rather enhancements of existing technologies that improve the economics of deployment.
The article is based on a news analysis published by the organisers before this year's OFC/NFOEC event.
Netronome uses its network flow processor for OpenFlow
Part 2: Hardware for SDN
Netronome has demonstrated its flow processor chip implementing the OpenFlow protocol, an open standard implementation of software-defined networking (SDN).

"What OpenFlow does is let you control the hardware that is handling the traffic in the network. The value to the end customer is what they can do with that"
David Wells, Netronome
The reference design demonstration, which took place at an Open Networking User Group meeting, used the fabless semiconductor player's NFP-3240 network flow processor. The NFP-3240 was running the latest 1.3.0 version of the OpenFlow protocol.
Last year Netronome announced its next-generation flow processor family, the NFP-6xxx. The OpenFlow demonstration hints at what the newest flow processor will enable once first samples become available at the year end.
Netronome believes its flow processor architecture is well placed to tackle emerging intelligent networking applications such as SDN due to its emphasis on packet flows.
“In security, mobile and other spaces, increasingly there needs to be equipment in the network that is looking at content of packets and states of a flow - where you are looking at content across multiple packets - to figure out what is going on,” says David Wells, co-founder of Netronome and vice president of technology. “That is what we term flow processing."
This requires equipment able to process all the traffic on network links at 10 and 40 Gigabit-per-second (Gbps), and with next-generation equipment at 100Gbps. "This is where you do more than look at the packet header and make a switching decision," says Wells.
Software-defined networking
Operators and content service providers are interested in SDN due to its promise to deliver greater efficiencies and control in how they use their switches and routers in the data centre and network. With SDN, operators can add their own intelligence to tailor how traffic is routed in their networks.

In the data centre, a provider may be managing a huge number of servers running virtualised applications. "The management of the servers and applications is clever enough to optimise where it moves virtual machines and where it puts particular applications," says Wells. "You want to be able to optimise how the traffic flows through the network to get to those servers in the same way you are optimising the rest of the infrastructure."
Without OpenFlow, operators depend on routing protocols that come with existing switches and routers. "It works but it won't necessarily take the most efficient route through the network," says Wells.
OpenFlow lets operators orchestrate from the highest level of the infrastructure where applications reside, map the flows that go to them, determine their encapsulation and the capacity they have. "The service can be put in a tunnel, for example, and have resource allocated to it so that you know it is not going to be contended with," says Wells, guaranteeing services to customers.
"What OpenFlow does is let you control the hardware that is handling the traffic in the network," says Wells. "The value to the end customer is what they can do with that, in conjunction with other things they are doing."
Operators are also interested in using OpenFlow in the wide area network. "The attraction of OpenFlow is in the core and the edge [of the network] but it is the edge that is the starting point," says Wells.
OpenFlow demonstration
Netronome's OpenFlow demonstration used an NFP-3240 on a PCI Express (PCIe) card to run OpenFlow while other Netronome software runs on the host server in which the card resides.
The NFP-3240 classifies the traffic and implements the actions to be taken on the flows. The software on the host exposes the OpenFlow application programming interface (API) enabling the OpenFlow controller, the equipment that oversees how traffic is handled, to address the NFP device and influence how flows are processed.
Early OpenFlow implementations are based on Ethernet switch chips that interface to a CPU that provides the OpenFlow API. However, the Ethernet chips support the OpenFlow 1.1.0 specification and have limited-size look-up tables with 98, 64k or 100k entries, says Wells.
The OpenFlow controller can write to the table and dictate how traffic is handled, but its size is limited. "That is a starting point and is useful," says Wells. "But to really do SDN, you need hardware platforms that can handle many more flows than these switches."
This is where the NFP processor is being targeted: it is programmable with capabilities driven by software rather than the hardware architecture, says Wells.
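The match/action model described above can be sketched as follows. This is a toy illustration with simplified field names and an assumed table limit - a real OpenFlow 1.3 pipeline matches on many more header fields across multiple tables:

```python
# Illustrative sketch of an OpenFlow-style flow table: the controller
# installs entries in a finite table; the data plane matches packets
# against them in priority order and punts table misses to the controller.

TABLE_LIMIT = 64 * 1024  # e.g. one of the limited table sizes mentioned

flow_table = []  # entries written by the OpenFlow controller

def install_flow(match, action, priority=0):
    if len(flow_table) >= TABLE_LIMIT:
        raise RuntimeError("flow table full")
    flow_table.append({"match": match, "action": action, "priority": priority})
    flow_table.sort(key=lambda e: -e["priority"])  # highest priority first

def lookup(packet):
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "send-to-controller"  # table miss

install_flow({"dst_ip": "10.0.0.5", "tcp_port": 80}, "output:3", priority=10)
print(lookup({"dst_ip": "10.0.0.5", "tcp_port": 80}))  # output:3
print(lookup({"dst_ip": "10.0.0.9"}))                  # send-to-controller
```

The table limit is the constraint Wells points to: hardware that can hold and process many more flows is what a programmable flow processor offers over a fixed-function switch chip.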
NFP-6xxx architecture
The NFP-6xxx is Netronome's latest network flow processor (NFP) family, rated at 40 to 200Gbps. No particular devices have yet been detailed but the highest-end NFP-6xxx device will comprise 216 processors: 120 flow processors and, new to the NFP family, 96 packet processors (see chart - Netronome's sixth-generation device).
The architecture is made up of 'islands', units that comprise a dozen flow processors. Netronome will combine different numbers of islands to create the various NFP-6xxx devices.
The input-output bandwidth of the device is 800Gbps while the on-chip memory totals 30 Megabytes. The device also interfaces directly to QSFP, SFP+ and CFP optical transceivers.
The 120 flow processors tackle the more complex, higher-layer tasks. Netronome has added packet processors to the NFP-6xxx to free the flow processors from tasks such as taking packets from the input stream and passing them on to where they are processed. The packet processors are programmable and perform tasks such as header classification before packets are handed on to the flow processors.
The NFP-6xxx devices will include some 100 hardware accelerator engines for tasks such as traffic management, encryption and deep packet inspection.
The device will be implemented using Intel's latest 22nm 3D Tri-Gate CMOS process and is designed to work with high-end general purpose CPUs such as Intel's x86 devices, Broadcom's XLP and Freescale's PowerPC.
Markets
The data centre, where SDN is already being used, is one promising market for the device as customers look to enhance their existing capabilities.
There are requirements for intelligent gateways now but this is a market that is a year or two out, says Wells. Use of OpenFlow to control large IP core routers or core optical switches is a longer term application. "Those areas will come but it will be further out," says Wells.
For other markets such as security, there is a need for knowledge about the state of flows. This is more sophisticated treatment of packets than simply looking up the required action based on a packet's header. Netronome believes that OpenFlow will develop to not only forward or terminate traffic at a certain destination but also send traffic to a service before it is returned.
"You could insert a service in an OpenFlow environment and what it would do is guide packets to that service and return them, but inside that service you may do something that is stateful," says Wells. This is just the sort of task security equipment performs on flows - an intrusion prevention system offered as a service, for example, or a firewall function. The function could be run on a dedicated platform or as a virtual application running on Netronome's flow processor.
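The kind of stateful service insertion Wells describes might look like this in miniature. The per-flow packet budget is a toy rule, purely illustrative - real firewall policy is far richer:

```python
# Sketch of a stateful inserted service: flows are steered through a
# firewall-style function that keeps state across the packets of each
# flow before traffic is returned to its normal path.

flow_state = {}  # per-flow packet counters, keyed by (src, dst)

def firewall_service(packet, max_packets=3):
    """Forward a flow's packets only up to a per-flow budget (toy rule)."""
    key = (packet["src"], packet["dst"])
    flow_state[key] = flow_state.get(key, 0) + 1
    return "forward" if flow_state[key] <= max_packets else "drop"

verdicts = [firewall_service({"src": "a", "dst": "b"}) for _ in range(4)]
print(verdicts)  # ['forward', 'forward', 'forward', 'drop']
```

The decision on packet four depends on state built up across packets one to three - exactly the across-packet awareness that distinguishes flow processing from stateless header lookups.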
Further reading:
Part 1: The role of software defined networking for telcos
EZchip expands the role of the network processor
The role of software-defined networking for telcos
The OIF's Carrier Working Group is assessing how software-defined networking (SDN) will impact transport. Hans-Martin Foisel, chair of the OIF working group, explains SDN's importance for operators.
Briefing: Software-defined networking
Part 1: Operator interest in SDN
"Using SDN use cases, we are trying to derive whether the transport network is ready or if there is some missing functionality"
Hans-Martin Foisel, OIF
Hans-Martin Foisel, of Deutsche Telekom and chair of the OIF Carrier Working Group, says SDN is of great interest to operators that view the emerging technology as a way of optimising all-IP networks that increasingly make use of data centres.
"Software-defined networking is an approach for optimising the network in a much larger sense than in the past," says Foisel whose OIF working group is tasked with determining how SDN's requirements will impact the transport network.
Network optimisation remains an ongoing process for operators. Work continues to improve the interworking between the network's layers to gain efficiencies and reduce operating costs (see Cisco Systems' intelligent light).
With SDN, the scope is far broader. "It [SDN] is optimising the network in terms of processing, storage and transport," says Foisel. SDN takes the data centre environment and includes it as part of the overall optimisation. For example, content allocation becomes a new parameter for network optimisation.
Other reasons for operator interest in SDN, says Foisel, include optimising operation support systems (OSS) software, and the characteristic most commonly associated with SDN, making more efficient use of the network's switches and routers.
"A lot of carriers are struggling with their OSSes - these are quite complex beasts," he says. "With data centres involved, you now have a chance to simplify your IT as all carriers are struggling with their IT."
The Network Functions Virtualisation (NFV) industry specification group is a carrier-led initiative set up in January by the European Telecommunications Standards Institute (ETSI). The group is tasked with optimising the software components - the OSSes - involved in processing, storage and transport.
The initiative aims to make use of standard servers, storage and Ethernet switches to reduce the varied equipment making up current carrier networks to improve service innovation and reduce the operators' capital and operational expenditure.
NFV and SDN are separate developments that will benefit each other. The ETSI group will develop requirements and architecture specifications for the hardware and software infrastructure needed for the virtualised functions, as well as guidelines for developing network functions.
The third reason for operator interest in SDN - separating the management, control and data planes - promises greater efficiencies, enabling network segmentation irrespective of the switch and router deployments. This allows flexible use of the network, with resources shifted based on particular user requirements.
"Optimising the network as a whole - including the data centre services and applications - is a concept, a big architecture," says Foisel. "OpenFlow and the separation of data, management and control planes are tools to achieve it."
OpenFlow is an open standard implementation of the SDN concept. The OpenFlow protocol is being developed by the Open Networking Foundation, an industry body that includes Google, Facebook and Microsoft, telecom operators Verizon, NTT, Deutsche Telekom, and various equipment makers.
Transport SDN
The OIF Working Group will identify how SDN impacts the transport network including layers one and two, networking platforms and even components. By undertaking this work, the operators' goal is to make SDN 'carrier-grade'.
Foisel admits that the working group does not yet know whether the transport layer will be impacted by SDN. To answer the question, SDN applications will be used to identify required transport SDN functionalities. Once identified, a set of requirements will be drafted.
"Using SDN use cases, we are trying to derive whether the transport network is ready or if there is some missing functionality," says Foisel.
The work will also highlight any areas that require standardisation, for the OIF and for other standards bodies, to ensure future SDN interworking between vendors' solutions. The OIF expects to have a first draft of the requirements by July 2013.
"In the transport network we are pushed by the mobile operators but also by the over-the-top applications to be faster and be more application-aware," says Foisel. "With SDN we have a chance to do so."
Part 2: Hardware for SDN
Cisco Systems' intelligent light
Network optimisation continues to exercise operators and content service providers as their requirements evolve with the growth of services such as cloud computing. Cisco Systems' announced elastic core architecture aims to tackle networking efficiency and address particular service provider requirements.

“The core [network] needs to be more robust, agile and programmable”
Sultan Dawood, Cisco
“The core [network] needs to be more robust, agile and programmable – especially with the advent of cloud,” says Sultan Dawood, senior manager, service provider marketing at Cisco. “As service providers look at next-generation infrastructure, convergence of IP and optical is going to have a big play.”
Cisco's elastic core architecture combines several developments. One is the integration of Cisco's 100 Gigabit-per-second (Gbps) dense wavelength division multiplexing (DWDM) coherent transponder, first introduced on its ROADM platform, onto its router to enable IP-over-DWDM.
This is part of what Cisco calls nLight – intelligent light - which itself has three components: its 100Gbps coherent ASIC hardware, the nLight control plane and nLight colourless and contentionless ROADMs. “As packet and optical networks converge, intelligence between the layers is needed,” says Dawood. “Today how the ROADM and the router communicate is limited."
There is GMPLS [Generalized Multi-Protocol Label Switching] working at the IP layer, and WSON [Wavelength Switched Optical Network] working at the optical layer. These two protocols perform control plane functions at their respective layers. "What nLight is doing is communicating between these two layers [using existing parameters] and providing the interaction," says Dawood.
Ron Kline, principal analyst for network infrastructure at Ovum, describes nLight more generally as Cisco’s strategy for software-defined networking: "Interworking control planes to share info across platforms and add the dynamic capabilities."
The second component of Cisco's announcement is an upgrade of its carrier-grade services engine, from 20Gbps to 80Gbps, that fits within Cisco's CRS-3 core router and will be available from May 2013. The services engine enables such services as IPv6 and 'cloud routing' - network positioning which determines the most suitable resource for a customer’s request based on the content’s location and the data centre's loading.
Cisco has also added anti distributed denial of service (anti-DDoS) software to counter cyber threats. “We have licensed software that we have put into our CRS-3 so that with our VPN services we can provide threat mitigation and scrub any traffic liable to hurt our customers,” says Dawood.
nLight
According to Cisco, several issues need to be addressed between the IP and optical layers. For example, how the router and the optical infrastructure exchange information like circuit ID, path identifiers and real-time information in order to avoid the manual intervention used currently.
“With this intelligent data that is extracted due to these layers communicating, I can now make better, faster decisions that result in rapid service provisioning and service delivery,” says Dawood.
Cisco cites as an example a financial customer requesting a low-latency path. In this case, the optical network comes back through this nLight extraction process and highlights the most appropriate path. That path has a circuit ID that is assigned to the customer. If the customer then comes back to request a second identical circuit, the network can make use of the existing intelligence to deliver a similar-specification circuit.
Such a framework avoids lengthy, manual interactions between the IP and transport departments of an operator required when setting up an IP VPN, for example. By exchanging data between layers, service providers can understand and improve their network topology in real-time, and be more dynamic in how they shift resources and do capacity planning in their network.
Service providers can also improve their protection and restoration schemes and also how they configure and provision services. Such capabilities will enable operators to be more efficient in the introduction and delivery of cloud and mobile services.
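The provisioning shortcut in Cisco's low-latency example can be sketched as a cache of known paths. The names and structure here are illustrative assumptions, not Cisco's actual interfaces:

```python
# Sketch of the circuit-reuse idea: once the optical layer has returned a
# path and circuit ID for a given set of constraints, a repeat request with
# the same constraints can reuse that knowledge rather than triggering a
# fresh manual search between the IP and transport departments.

known_paths = {}  # constraints -> circuit IDs already provisioned
_counter = [100]

def provision(constraints):
    """Return (circuit_id, reused), where reused flags a known path."""
    key = tuple(sorted(constraints.items()))
    reused = key in known_paths
    circuit_id = "ckt-{}".format(_counter[0])
    _counter[0] += 1
    known_paths.setdefault(key, []).append(circuit_id)
    return circuit_id, reused

first, hit1 = provision({"latency": "low"})
second, hit2 = provision({"latency": "low"})
print(first, hit1, second, hit2)  # ckt-100 False ckt-101 True
```

The second request gets a new circuit ID but is recognised as equivalent to the first, which is the "existing intelligence" the article describes the network reusing.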
Total cost of ownership
Market research firm ACG Research has done a total cost of ownership (TCO) analysis of Cisco's elastic core architecture. It claims using nLight achieves up to a halving of the TCO of the optical and packet core networks in designs using protected wavelengths. It also avoids a 10% overestimation of required capacity.
Meanwhile, ACG claims an 18-month payback and 156% return on investment from a CRS CGSE service module with its anti‐DDoS service, and a 24% TCO savings from demand engineering with the improved placement of routes and cloud service workload location.
Cisco says its framework architecture is being promoted in the Internet Engineering Task Force (IETF). The company is also liaising with the International Telecommunication Union (ITU) and the Optical Internetworking Forum (OIF) where relevant.
EZchip expands the role of the network processor
- EZchip's NPS-400 will be a 200Gbps duplex chip capable of layer 2 to layer 7 network processing
- The device is being aimed at edge routers and the data centre
- First samples by year end
EZchip Semiconductor has announced a class of network processor capable of performing traditional data plane processing as well as higher layer networking tasks.
EZchip's announced NPS will extend the role of the network processor to encompass layer two to layer seven of the network. Source: EZchip
"It [the device family] is designed to provide processing for all the networking layers, from layer two all the way to layer seven," says Amir Eyal, EZchip’s vice president of business development. Network processors typically offer layer-two and layer-three processing only.
The device family, called the network processor for smart networks (NPS), is being aimed at Carrier Ethernet edge router platforms, the traditional telecom application for network processors.
But the NPS opens up new opportunities for EZchip in the data centre, such as security, load balancing and software-defined networking (SDN). Indeed EZchip says the NPS market will double the total addressable market to US$2.4bn by 2016.
"SDN is supposedly a big deal in the data centre," says Eyal. Because SDN separates the control plane from the data plane, it implies that the data plane becomes relatively simple. In practice the opposite is true: the data processing becomes more complex requiring the recognition and handling of packets having different encapsulation schemes, says Eyal.
The NPS borrows architectural elements of EZchip's existing high-end NPUs but the company has added an ARC 32-bit reduced instruction set computer (RISC) processor which it has redesigned to create the basic packet-processing computing node: the CTOP (C-programmable task-optimised processor).
EZchip has announced two NPS devices: the NPS-200 and the more processing-capable NPS-400. The NPS-400 is a 200 Gigabit-per-second (Gbps) duplex chip with 256 CTOPs, giving it twice the packet-processing performance of EZchip's latest NP-5 NPU. The NPS-400 will also have 800 Gigabits of input/output. The NPS-200 design will have 128 CTOPs.
As a result of adding the ARC, the NPS family will be C-programmable whereas NPUs are programmed using assembly language or micro-code. The CTOP will also be able to process 16 instruction threads whereas the standard ARC is single-threaded.
The NPS also features an on-chip traffic manager which controls the scheduling of traffic after it has been processed and classified.
The power consumption of the NPS has yet to be detailed but Eyal says it will be of the order of the NP-5 which is 60W.
EZchip says up to eight NPS chips could be put on a line card, to achieve a 1.6Tbps packet throughput, power-consumption permitting.
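The line-card arithmetic, with the power caveat made explicit - the 60W figure is the NP-5's, which Eyal says the NPS will approximate:

```python
# Eight 200Gbps duplex NPS-400 devices give 1.6Tbps of packet throughput
# per line card, but at roughly the NP-5's 60W apiece that is of the
# order of 480W per card - hence "power-consumption permitting".
chips = 8
gbps_per_chip = 200
watts_per_chip = 60  # "of the order of the NP-5", per the article
print(chips * gbps_per_chip / 1000)  # 1.6 (Tbps)
print(chips * watts_per_chip)        # 480 (W)
```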
Adopting the NPS processor will eliminate the need to add service line cards based on general-purpose processors to platforms. More NPS-based cards can then be used in the vacated line-card slots to boost the platform's overall packet-processing performance.
The company started the NPS design two years ago and expects first samples at the end of 2013. NPS-based products are expected to be deployed in 2015.
Meanwhile, EZchip says it is sampling its NP-5 NPU this quarter. The NPS will overlap with the NP-5 and be available before the NP-6, the next NPU on EZchip's roadmap.
Will the NPS-400, with double the throughput, not deter sales of the NP-5, even if the design is used solely for traditional NPU layer-two and layer-three tasks?
EZchip says new customers will likely adopt the NPS especially given its support for high-level programming. But existing customers using the NP-4 will prefer to stay with the NPU family due to the investment already made in software.
Further reading:
Microprocessor Report: EZchip breaks the NPU mold
A Terabit network processor by 2015?
Infinera adds software to its PIC for instant bandwidth
Infinera has enabled its DTN-X platform to deliver 100 Gigabit services rapidly. The ability to fulfill capacity demand quickly is seen as a competitive advantage by operators. Gazettabyte spoke with Infinera and TeliaSonera International Carrier, a DTN-X customer, about the merits of its 'instant bandwidth' and asked several industry analysts for their views.
Infinera has added a WDM line card hosting its 500 Gigabit super-channel photonic integrated circuit to its DTN-X platform
Pravin Mahajan, Infinera.
Infinera is claiming an industry first with the software-enablement of 100 Gigabit capacity increments. The company's DTN-X platform's 'instant bandwidth' feature shortens the time to add new capacity in the network, from weeks as is common today to less than a day.
The ability to add bandwidth as required is increasingly valued by operators. TeliaSonera International Carrier points out that its traffic demands are increasingly variable, making capacity requirements harder to forecast and manage.
"It [the DTN-X's instant bandwidth] enables us to activate 100 Gig services between network spans to manage our own IP traffic which is growing rapidly," says Ivo Pascucci, head of sales, Americas at TeliaSonera International Carrier. "We will also be able to sell in the market 100 Gig services and activate the capacity much more rapidly."
What has been done
Infinera has added three elements that enable its DTN-X platform to deliver 100 Gigabit services on demand.
One is a new wavelength division multiplexing (WDM) line card that features its 500 Gigabit-per-second (Gbps) super-channel photonic integrated circuit (PIC). Infinera says the line card has 500Gbps of capacity enabled, of which only 100Gbps is activated. "The remaining 400Gbps is latent, waiting to be activated," says Pravin Mahajan, director of corporate marketing and messaging at Infinera.
Infinera uses the DTN-X's Optical Transport Network (OTN) switch fabric to pack the client-side signals onto any of the 100Gbps channels activated on the line side. This capacity pool of up to 500Gbps, says Infinera, results in better usage of backbone capacity compared to traditional optical networking equipment based on individual 100Gbps 'siloed' channels.
A software application has also been added to Infinera's network management system, the digital network administrator (DNA), to activate the 100Gbps capacity increments.
Lastly, Infinera has in place a just-in-time system that enables client-side 10 Gigabit Ethernet optical transceivers to be delivered to customers within 10 days, if they are out of stock. Infinera says it is achieving a six-day delivery time in 95% of cases.
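The capacity model described above — 500Gbps physically installed per line card, 100Gbps activated at shipment, and the remaining 400Gbps latent until a software command turns it on — can be sketched as a toy in Python. All names and structure here are illustrative assumptions, not Infinera's actual software or API:

```python
class SuperChannelLineCard:
    """Toy model of a WDM line card whose capacity is activated in
    software-licensed increments rather than by installing hardware.
    (Illustrative only; not Infinera's implementation.)"""

    TOTAL_GBPS = 500       # capacity physically present on the PIC
    INCREMENT_GBPS = 100   # activation granularity

    def __init__(self):
        # The card ships with one 100Gbps increment already active.
        self.activated_gbps = self.INCREMENT_GBPS

    @property
    def latent_gbps(self):
        # Installed but not yet paid-for/activated capacity.
        return self.TOTAL_GBPS - self.activated_gbps

    def activate_increment(self):
        """Turn on another 100Gbps channel via software — no site visit,
        no new line card — provided latent capacity remains."""
        if self.latent_gbps < self.INCREMENT_GBPS:
            raise RuntimeError("no latent capacity left on this line card")
        self.activated_gbps += self.INCREMENT_GBPS
        return self.activated_gbps


card = SuperChannelLineCard()
print(card.activated_gbps, card.latent_gbps)  # 100 400
card.activate_increment()
print(card.activated_gbps, card.latent_gbps)  # 200 300
```

The point of the sketch is the asymmetry it captures: the expensive step (deploying the 500Gbps PIC) happens once, while subsequent capacity turn-up is a software state change, which is why activation can drop from weeks to under a day.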

Advantages
TeliaSonera International Carrier confirms the advantages to having 100 Gigabit capacities pre-provisioned and ready for use.
"Having the ability to turn up large bandwidth is critical to our business, especially as the [traffic] numbers continue to grow"
Ivo Pascucci, TeliaSonera International Carrier
"If it is individual line cards across the network when you have as many PoPs as we do, it does get tricky," says Pascucci. "If we have 500 Gig channels pre-provisioned with the ability to activate 100 Gig segments as needed, that gives us an advantage versus having to figure out how many line cards to have deployed in which nodes, and forecasting which nodes should have the line cards in the first place."
The operator is already seeing demand for 100 Gigabit services, from the carrier market and large content providers. It already provides 10x10Gbps and 20x10Gbps services to customers. "With that there are all the challenges of provisioning ten or 20 10 Gig circuits and 10 or 20 cross-connects for each site," says Pascucci. The operator also manages one to two Terabits of network capacity for certain customers.
"Having the ability to turn up large bandwidth is critical to our business, especially as the [traffic] numbers continue to grow," says Pascucci.
Analysts' comments
Gazettabyte asked several industry analysts about the significance of Infinera's announcement: in particular, the uniqueness of the offering, the claim to cut bandwidth-enablement times dramatically, and its importance for operators.
Infonetics Research
Andrew Schmitt, directing analyst for optical
Schmitt believes Infinera's announcement is significant as it is the first announced North American win. It also shows the company has a solution for carriers that want to roll out only a single 100Gbps channel but don't want to buy 500Gbps up front.

More importantly, it should allow some carriers to deploy extra capacity for future use at no cost to them and that opens up interesting possibilities for automatically switched optical network (ASON) management or even software-defined networking (SDN).
"As to the claim that it reduces capacity enablement from weeks to potentially minutes, to some degree, yes," says Schmitt.
Certainly Ciena, Alcatel-Lucent or Cisco could ship extra line cards to customers and not charge them until they are used, which would effectively achieve the same result. "But if the PIC truly has better economics than the discrete solutions from these vendors then Infinera can ship hardware up front and then recognise the profits on the back end," he says.
"You simply can't predict where the best places to put bandwidth will be"
In turn, if customers get free inventory management out of the deal and Infinera equipment can support that arrangement more economically, that is a significant advantage for Infinera.
"This instant bandwidth is unique to Infinera. As I said, anyone could do this deal. But you need a hardware cost structure that can support it or it gets expensive quickly," says Schmitt. "Everyone is working on super-channels but it is clear from the legacy of the way the 10 Gig DTN hardware and software worked that Infinera gets it."
Schmitt believes the term super-channel is abused. He prefers the term virtualised bandwidth - optical capacity that can be allocated the same way server or storage resources are assigned through virtualization.
"The SDN hype is hitting strong in this business but Infinera is really one of the only companies that have a history of a hardware and software architecture that lends itself well to this concept," he says. This is validated by its customer list, which is heavily loaded with service providers that are not just talking about SDN but actively doing something, he says.
"It [turning capacity up quickly] is important for SDN as well as more advanced protection arrangements. You simply can't predict where the best places to put bandwidth will be," says Schmitt. "If you can have spare capacity in the network that is lit on demand but not paid for if you don't need it, it is the cheapest approach for avoiding overbuilding a network for corner-case requirements.
"I think the accounting for this product will be interesting, it is likely that we will know in a year how successful this concept was just by a careful examination of the company's financials," he concludes.
ACG Research
Eve Griliches, vice president of optical networking
Infinera delivered the DTN-X, with 500 Gig super-channels based on PIC technology, earlier this year. Now a new 500 Gig line card has been added that starts operating at 100 Gig, with the remaining 400 Gig lit in 100 Gig increments using software. This allows customers to purchase 100 Gig at a time and turn up subsequent bandwidth via software when they require it.

“No other vendor has a software-based solution, and no one else is delivering 500 Gig yet either,” says Griliches.
With this solution, ACG Research says in its research note, operators can start to develop a flexible infrastructure where bandwidth can grow and move around the network instantly. This is useful to address varying demands in bandwidth, triggered by incidents such as natural disasters or sporting events.
Rapid bandwidth enablement has always been important and takes way too long, so this development is key, says Griliches: “Also, it enables Infinera to enter markets which only need one 100 Gig wavelength for now, which they could not do before.”
“No other vendor has a software-based solution, and no one else is delivering 500 Gig yet either”
Looking forward, ACG Research expects this software and hardware-based instant bandwidth utility model will enable Infinera to widen its potential market base and increase its global market share in 2013 and 2014.
Ovum
Ron Kline, principal analyst, and Dana Cooperson, vice president, of the network infrastructure practice
Ovum also thinks Infinera's announcement is significant. It brings essentially the same value proposition Infinera had with 10 Gigabit to the 100 Gigabit market: low operational expenditure (opex) and quick time-to-market. "Remember 10 Gig in 10 days?" says Kline.

It also fixes an issue for customers: with 10x10Gbps they had to pay for the full 100Gbps up front before they could be efficient with turn-up and opex. In effect, customers traded more up-front capital expenditure (capex) for opex efficiency. "With instant bandwidth, they don't have to make the upfront capex-versus-opex tradeoff; they can be most efficient with both," says Cooperson.
"Any vendor can shorten capacity enablement times if they can convince the operator to pre-position bandwidth in the network that is ready to be turned on at a moment's notice"
Ron Kline
Kline says operators have different processes for turning up services, and in many cases it is these processes, not the equipment directly, that cause the additional provisioning time. "For example the operator may not use the DNA system or may have a very complex OSS/BSS used in the process," says Kline.
Nevertheless, the capability for very short provisioning times is there, if an operator wants to take advantage of it. In the TeliaSonera case, Infinera is managing the network, so the quick time-to-market will be there, says Kline.
Cooperson adds that there can be many factors that impede the capacity enablement process, based on Ovum's own research. “But it is clear from talking to Infinera's customers that its system design and approach is a big benefit to those carriers, often the competitive carriers, in competing in the market,” she says. “Multiple carriers told us that with the Infinera system, they were able to win business from competitors.”

Any vendor can shorten capacity enablement times if they can convince the operator to pre-position bandwidth in the network that is ready to be turned on at a moment's notice. However, what is unique to Infinera is that its system is deployed 500Gbps at a time and all the switching is done electrically by the OTN switch at each node. Others are working on super-channels but none are close to deploying, says Ovum.
“Multiple carriers told us that with the Infinera system, they were able to win business from competitors.”
Dana Cooperson
The ability to turn on bandwidth rapidly is becoming increasingly important. From a wholesale operator perspective it is very important and a key differentiator.
"It's particularly relevant to wholesale applications where large bandwidth chunks are required and the customer is another carrier," says Cooperson. "Whether you view a Google or a Facebook as a carrier or a very large enterprise, it would apply to them as well as a more traditional carrier."
