OFC 2014 industry reflections - Part 2

Ciena and Ovum comment on the noteworthy developments and trends following the recent OFC 2014 exhibition and conference. 


The high cost of 100 Gigabit Ethernet client modules has been a major disappointment to me as it has slowed adoption

Joe Berthold, Ciena

 

Joe Berthold, vice president of network architecture at Ciena.

OFC 2014 was another great event, with interesting programmes, demonstrations and papers presented. A few topics that really grabbed my interest were discussions around silicon photonics, software-defined networking (SDN) and 400 Gigabit Ethernet (GbE).

The intense interest we saw at last year’s OFC around silicon photonics grew this year, with lots of good papers and standing-room-only sessions. I look forward to future product announcements that deliver on the potential of this technology to significantly reduce the cost of interconnecting systems over modest distances. The high cost of 100GbE client modules has been a major disappointment to me as it has slowed adoption.

Another area of interest at this year’s show was the great deal of experimental work around SDN, some of it more practical than the rest.

I particularly liked the reviews of the latest work under the DARPA-sponsored CORONET programme, whose Phase 3 focused on SDN control of multi-layer, multi-vendor, multi-data centre cloud networking across wide area networks.

In particular, there were talks from three companies I noted: Anne Von Lehman of Applied Communication Sciences, the prime contractor, provided a good programme overview; Bob Doverspike of AT&T described a very extensive testbed using equipment of the type currently deployed in AT&T’s network, as well as two different processing and storage virtualisation platforms; and Doug Freimuth of IBM described its contributions to CORONET, including an OpenStack virtualisation environment, as well as other IBM distributed cloud networking research.

 

All the action on rates above 100 Gig lies with the selection of client signals. 400 Gig seems to have the major mindshare but there are still calls for flexible rate clients and Terabit clients.

One thing I enjoyed about these talks was that they described an approach to SDN for distributed data centre networking that is pragmatic and could be realised soon.

I also really liked a workshop held on the Sunday on the question of whether SDN will kill GMPLS. While there was broad consensus that GMPLS has failed to deliver on its original turn-of-the-century vision of IP router control of multi-layer, multi-domain networks, most speakers recognised the value distributed control planes have in simplifying and speeding the control of single-layer, single-domain networks.

What I took away was that single-layer distributed control planes are here to stay as important network control functions, but will now work under the direction of an SDN network controller.

As we all know, 400 Gigabit dense wavelength division multiplexing (DWDM) is here from the technology perspective, but awaiting standardisation of the 400 Gig Ethernet signal from the IEEE, and follow-on work by the ITU-T on signal mapping to OTN. In fact, from the perspective of DWDM transmission systems, 1 Terabit-per-second systems can be had for the asking.

All the action on rates above 100 Gig lies with the selection of client signals. 400 Gig seems to have the major mindshare but there are still calls for flexible rate clients and Terabit clients.

One area that received a lot of attention, with many differing points of view, was the question of the 400GbE client. As the 400GbE project begins soon in the IEEE, it is time to take a lesson from the history of the 100 Gig client modules and do better.

 

Let us all agree that we don’t need 400 Gig clients until they can do better in cost, faceplate density, and power dissipation than the best 100 Gig modules that will exist then.

 

The first 100 Gig DWDM transceivers were introduced in 2009. It is now 2014 and 100 Gig is the transmission rate of choice for virtually all high capacity DWDM network applications, with a strong economic value proposition versus 10 Gig. Yet the industry has not yet managed to achieve cost/bit parity between 100 Gig and 10 Gig clients - far from it!

At last year's OFC we saw many show-floor demonstrations of CFP2 modules. They promise lower costs, but evidence of their presence in shipping products is still lacking. At this year's exhibit we saw 100 Gig QSFP28 modules. While progress is slow, the cost of the 100 Gig client module continues to result in many operators favouring 10 Gig handoffs to their 100 Gig optical networking systems.

Let us all agree that we don’t need 400 Gig clients until they can do better in cost, faceplate density, and power dissipation than the best 100 Gig modules that will exist then. At this juncture, the 100 Gig benchmark we should be comparing 400 Gig to is the QSFP28 package.

Lastly, a year ago we heard about the launch of an OIF project to create a pluggable analogue coherent optical module. Several talks referenced this project and discussed its implications for shrinking size and supporting higher transceiver card density.

Broad adoption of this component will help drive down the cost of coherent transceivers, so I look forward to hearing about its progress at OFC 2015.

 

 

Daryl Inniss, vice president and practice leader, Ovum.

There was no shortage of client-side announcements at OFC and I’ve spent time since the conference trying to organise them and understand what it all means.

I’m tempted to say that the market is once again developing too many options and not quickly agreeing on a common solution. But I’m reminded that this market works collaboratively, and the client-side uncertainty we’re seeing today is a reflection of a lack of market clarity.

Let me describe three forces affecting suppliers:

The IEEE 100GBASE-xxx standards represent the best collective information that suppliers have. Not surprisingly, most vendors brought solutions to OFC supporting these standards. Vendors sharpened their products and focused on delivering solutions with smaller form factors and lower power consumption. Advances in optical components (lasers, TOSAs and ROSAs), integrated circuits (CDRs, TIAs, drivers), transceivers, active optical cables, and optical engines were all presented. A promising and robust supply base is emerging that should serve the market well.

A second driver is that hyperscale service providers want a cost-effective solution today that supports 500m to 2km. This is non-standard and suppliers have not agreed on the best approach. This is where the market becomes fragmented. The same vendors supporting the IEEE standard are also pushing non-standard solutions. There are at least four different approaches to support the hyperscale request:

 

  • Parallel single mode (PSM4) where an MSA was established in January 2014
  • Coarse wavelength division multiplexing—using uncooled directly modulated lasers and single mode fibre
  • Dense wavelength division multiplexing—this one just emerged on the scene at OFC with Ranovus and Mellanox introducing the OpenOptics MSA
  • Complex modulation—PAM-8, for example, and carrier multi-tone.

 

Admittedly, the presence of this demand disrupts the traditional process. But I believe the suppliers’ behaviour reflects their unhappiness with the standardisation solution.

The good news is these approaches are using established form factors like the QSFP. And silicon photonic products are starting to emerge. Suppliers will continue to innovate.

 

Ambiguity will persist but we believe that clarity will ultimately prevail.


The third issue lurking in the background is the knowledge that 400 Gig and one Terabit will soon be needed. The best-case scenario is to use 100 Gig as a platform to support the next generation. Some argue for complex modulation, since it reduces the number of optical components and thereby lowers cost. That’s good, but part of the price is higher power consumption, an issue still to be resolved.

Part of today’s uncertainty is whether the standard solution is suitable to support the market to the next generation. Sixteen channels at 25 Gig is doable but feels more like a stopgap measure than a long-term solution.

These forces leave suppliers innovating in search of the best path forward. The approaches and solutions differ for each vendor. Timing is an issue too, with hyperscale looking for solutions today while the mass market may be years away.

We believe that servers with 25 Gig and/ or 40 Gig ports will be one of the catalysts to drive the mass market and this will not start until about 2016. Meanwhile, each vendor and the market will battle for the apparent best solution to meet the varying demands. Ambiguity will persist but we believe that clarity will ultimately prevail.


OFC/NFOEC 2013 industry reflections - Part 4

Gazettabyte asked industry figures for their views after attending the recent OFC/NFOEC show. 

 

"Spatial domain multiplexing has been a hot topic in R&D labs. However, at this year's OFC we found that incumbent and emerging carriers do not have a near-term need for this technology. Those working on spatial domain multiplexing development should adjust their efforts to align with end-users' needs"

T.J. Xia, Verizon

 

T.J. Xia, distinguished member of technical staff, Verizon

Software-defined networking (SDN) is an important topic. Looking forward, I expect SDN will involve the transport network so that all layers in the network are controlled by a unified controller to enhance network efficiency and enable application-driven networking.

Spatial domain multiplexing has been a hot topic in R&D labs. However, at this year's OFC we found that incumbent and emerging carriers do not have a near-term need for this technology. Those working on spatial domain multiplexing development should adjust their efforts to align with end-users' needs.

Several things are worth watching. Silicon photonics has the potential to drop the cost of optical interfaces dramatically. Low-cost pluggables such as the CFP2, CFP4 and QSFP28 will change the cost model of client connections. Also, I expect adaptive, DSP-enabled transmission to enable high spectral efficiencies for all link conditions.

 

Andrew Schmitt, principal analyst, optical at Infonetics Research

The Cisco CPAK announcement was noteworthy because the amount of attention it generated was wildly out of proportion to the product presented. Cisco essentially built a CFP2 with slightly better specs.

 

"It was very disappointing to see how breathless people were about this [CPAK] announcement. When I asked another analyst on a panel if he thought Cisco could out-innovate the entire component industry he said yes, which I think is just ridiculous."

 

Cisco has successfully exploited the slave labour and capital of the module vendors for over a decade and I don't see why they would suddenly want to be in that business.

The LightWire technology is much better used in other applications than modules, and ultimately the CPAK is most meaningful as a production proof-of-concept. I explored this issue in depth in a research note for clients.

It was very disappointing to see how breathless people were about this announcement. When I asked another analyst on a panel if he thought Cisco could out-innovate the entire component industry he said yes, which I think is just ridiculous.

There were also some indications surrounding CFP2 customers that cast doubt on the near-term adoption of the technology, with suppliers such as Sumitomo Electric deciding to forgo development entirely in favour of CFP4 and/ or QSFP.

I think CFP2 ultimately will be successful outside of enterprise and data centre applications but there is not a near-term catalyst for adoption of this format, particularly now that Cisco has bowed out, at least for now.

SDN is a really big deal for data centres and enterprise networking but its applications in most carrier networks will be constrained to only a few areas relative to multi-layer management.

Within carrier networks, I think SDN is ultimately a catalyst for optical vendors to potentially add value to their systems, and a threat to router vendors as it makes bypass architectures easier to implement.

 

"Pluggable coherent is going to be just huge at OFC/NFOEC 2014"

 

Optical companies like ADVA Optical Networking, Ciena and Infinera are pushing the envelope here and the degree to which optical equipment companies are successful is dependent on who their customers are and how hungry these customers are for solutions.

Meanwhile, pluggable coherent is going to be just huge at OFC/NFOEC 2014, followed by QSFP/ CFP4 prototyping and more important production planning and reliability. Everyone is going to use different technologies to get there and it will be interesting to see what works best.

I also think the second half of 2013 will see an increase in deployment of common equipment such as amplifiers and ROADMs.

 

Magnus Olson, director hardware engineering, Transmode

Two clear trends from the conference, affecting quite different layers of the optical networks, are silicon photonics and SDN.

 

"If you happen to have an indium phosphide fab, the need for silicon photonics is probably not that urgent. If you don't, now seems very worthwhile to look into silicon photonics"

 

Silicon photonics, deep down in the physical layer, is now emerging rapidly from basic research to first product realisation. Whereas some module and component companies have barely taken the step from lithium niobate modulators to indium phosphide, others already have advanced indium phosphide photonic integrated circuits (PICs) in place.

If you happen to have an indium phosphide fab, the need for silicon photonics is probably not that urgent. If you don't, now seems very worthwhile to look into silicon photonics.

Silicon photonics is a technology that should help take out the cost of optics for 100 Gigabit and beyond, primarily for short distance, data centre applications.

SDN, on the other hand, continues to mature. There is considerable momentum and lively discussion in the research community as well as within the standardisation bodies that could perhaps help SDN to succeed where Generalized Multi-Protocol Label Switching (GMPLS) failed.

Ongoing industry consolidation has reduced the number of companies to meet and discuss issues with to a reasonable number. The larger optical module vendors all have full portfolios, so consolidation will likely slow for a while. The spirit at the show was quite optimistic, in a very positive, sustainable way.

As for emerging developments, the migration of form factors for 100 Gigabit, from CFP via CFP2 to CFP4 and beyond, is important to monitor and influence from a wavelength-division multiplexing (WDM) vendor point of view.

We should learn from the evolution of the SFP+, originally invented for purely grey data centre applications. Once the form factor was well established and mature, coloured versions started to appear.

If such variants are not properly taken into account from the start of the multi-source agreement (MSA) work, with respect to power classes for example, it is not easy to accommodate tunable dense WDM versions in these form factors. Pluggable optics are crucial for cost as well as flexibility, on both the client side and the line side.

 

Shai Rephaeli, vice president of interconnect products, Mellanox

At OFC, many companies demonstrated 25 Gigabit-per-second (Gbps) prototypes and solutions, both multi mode and single mode.

Thus, a healthy ecosystem for 100 Gigabit Ethernet (GbE) and EDR (Enhanced Data Rate) InfiniBand looks to be well aligned with our introduction of new NIC (network interface controller)/ HCA (InfiniBand host channel adapter) and switch systems.

However, power consumption is significantly higher than with current 10Gbps and 14Gbps products. This requires the industry to focus heavily on power optimisation and thermal solutions.

 

"One development to watch is 1310nm and 1550nm VCSELs"

 

Standardisation for 25Gbps single mode fibre solutions is a big challenge. All the industry leaders have products at some level of development, but each company is driving its own technology. There may be a real interoperability barrier, considering the different technologies: WDM/ 1310nm, parallel and pulse-amplitude modulation (PAM) which, itself, may have several flavours: 4-level, 8-level and 16-level.

One development to watch is 1310nm and 1550nm VCSELs, which can bring the data centre/ multi-mode fibre volume and prices into the mid-reach market. This technology can be important for the new large-scale data centres, requiring connections significantly longer than 100m.

 

Part 1: Software-defined networking: A network game-changer, click here

Part 2: OFC/NFOEC 2013 industry reflections, click here

Part 3: OFC/NFOEC 2013 industry reflections, click here

Part 5: OFC/NFOEC 2013 industry reflections, click here

 


OpenFlow extends its control to the optical layer

OpenFlow may be causing an industry stir as system vendors such as ADVA Optical Networking extend the protocol's reach to the optical layer, but analysts warn that it will take years before the technology benefits operators' revenues.

 

"We see OpenFlow as an additional solution to tackle the problem of network control"

Jörg-Peter Elbers, ADVA Optical Networking

 

 


 

The largest data centre players have a single-mindedness when it comes to service delivery. Players such as Google, Facebook and Amazon do not think twice about embracing and even spurring hardware and software developments if they will help them better meet their service requirements.

Such developments are also having a wider impact, interesting traditional telecom operators that have their own service challenges.

The latest development causing waves is the OpenFlow protocol. An open standard, OpenFlow is being developed by the Open Networking Foundation, an industry body that includes Google, Facebook and Microsoft, telecom operators Verizon, NTT and Deutsche Telekom, and various equipment makers.

OpenFlow is already being used by Google, and falls under the more general topic of software-defined networking (SDN). A key principle underpinning SDN is the separation of the data and control planes to enable more centralised and simplified management of the network.

OpenFlow is being used in the management of packet switches for cloud services. "The promise of software-defined networking and OpenFlow is to give [data centre operators] a virtualised network infrastructure," says Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking.

The growing interest in OpenFlow is reflected in the activities of the telecom system vendors that have extended the protocol to embrace the optical layer. But whereas the content service provider giants need only worry about tailoring their networks to optimise their particular services, telecom operators must consider legacy equipment and issues of interoperability.

 

 

OFELIA

ADVA Optical Networking has started the ball rolling by running an experiment showing OpenFlow controlling both the optical and packet layers of the network. Until now the protocol, which provides a software-programmable interface, has been used to manage packet switches; adding optical layer control is an industry first, the company claims.

The OpenFlow demonstration is part of the European “OpenFlow in Europe, Linking Infrastructure and Applications” (OFELIA) research project involving ADVA Optical Networking and the University of Essex.  A test bed has been set up that uses the ADVA FSP 3000 to implement a colourless and directionless ROADM-based optical network. 

"We have put a network together such that people can run the optical layer through an OpenFlow interface, as they do the packet switching layer, under one uniform control umbrella," says Elbers. "The purpose of this project is to set up an experimental facility to give researchers access to, and have them play with, the capabilities of an OpenFlow-enabled network."  

 

"The fact that Google is doing it [SDN] is not a strong indication that service providers are going to do it tomorrow"

Mark Lutkowitz, Telecom Pragmatics

 

Remote researchers can access the test bed via GÉANT, a high-bandwidth pan-European backbone connecting national research and education networks.

ADVA Optical Networking hopes the project will act as a catalyst to gain useful feedback and ideas from the users, leading to further developments to meet emerging requirements. 

 

OpenFlow and GMPLS

A key principle of SDN, as mentioned, is the separation of the data plane from the control plane. "The aim is to have a more unified control of what your network is doing rather than running a distributed specialised protocol in the switches," says Elbers.

That is not that much different from Generalized Multi-Protocol Label Switching (GMPLS), he says: "With GMPLS in an optical network you effectively have a data plane - a wavelength switched data plane - and then you have a unified control plane implementation running on top, decoupled from the data plane."

But clearly there are differences. OpenFlow is being used by data centre operators to control their packet switches and generate packet flows. The goal is for their networks to gain flexibility and agility: "A virtualised network that can be run as you, the user, want it," said Elbers.

But the protocol only gives a user the capability to manage the forwarding behaviour of a switch: an incoming packet's header is inspected, and the user can program the forwarding table to determine how the packet stream is treated and the port it goes out on.
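This match-and-forward model can be sketched in a few lines. The following is a toy illustration only, not the OpenFlow wire protocol: the field names, rules and ports are hypothetical, chosen just to show a controller installing flow entries and a switch matching packet headers against them.

```python
# Toy sketch of OpenFlow's match-action model: a controller programs
# flow entries, and the switch matches each packet's header fields
# against them to pick an output port. All names are illustrative.

flow_table = []  # list of (match_fields, out_port), in priority order

def program_flow(match_fields, out_port):
    """Controller side: install a forwarding rule into the switch."""
    flow_table.append((match_fields, out_port))

def forward(packet_header):
    """Switch side: return the output port of the first matching rule."""
    for match_fields, out_port in flow_table:
        if all(packet_header.get(k) == v for k, v in match_fields.items()):
            return out_port
    return None  # no match: a real switch would drop or ask the controller

program_flow({"dst_ip": "10.0.0.2"}, out_port=3)
program_flow({"vlan": 100}, out_port=7)

print(forward({"dst_ip": "10.0.0.2", "vlan": 200}))  # 3
print(forward({"dst_ip": "10.0.0.9", "vlan": 100}))  # 7
```

The point Elbers makes follows from the sketch: the abstraction is purely about header matching and port selection, and says nothing about analogue concerns such as optical power levels or wavelength blocking.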

And while OpenFlow has since been extended to cater for circuit switches as well as wavelength circuits, there are aspects at the optical layer which OpenFlow is not designed to address - issues that GMPLS does.

To run end-to-end, the control plane needs to be aware of the blocking constraints of an optical switch, while when provisioning it must also be aware of such aspects as the optical power levels and optical performance constraints.  "The management of optical is different from managing a packet switch or a TDM [circuit switched] platform," says Elbers. “We need to deal with transmission impairments and constraints that simply do not exist inside a packet switch.”

That said, having GMPLS expertise, it is relatively simple for a vendor to provide an OpenFlow interface to an optical controlled network, he says: "We see OpenFlow as an additional solution to tackle the problem of network control."

Operators want mature and proven interoperable standards for network control that incorporate all the different network layers and that use GMPLS.

"We are seeing that in the data centre space, the players think that they may not have to have that level of complexity in their protocols and can run something lower level and streamlined for their applications," says Elbers.

While operators see the benefit of OpenFlow for their own data centres and managed service offerings, they also are eyeing other applications such as for access and aggregation to allow faster service mobility and for content management, says Elbers.

ADVA Optical Networking sees the adding of optical to OpenFlow as a complementary approach: the integration of optical networking into an existing framework to run it in a more dynamic fashion, an approach that benefits the data centre operators and the telcos.

"If you have one common framework, when you give out server and compute jobs you know what kind of connectivity and latency needs to go with them, and can request these resources and reconfigure the network accordingly," says Elbers.

But longer term the impact of OpenFlow and SDN will likely be more far-reaching: applications themselves could program the network, or it could be used to enable dial-up bandwidth services in a more dynamic fashion. "By providing software programmability into a network, you can develop your own networking applications on top of this - what we see as the heart of the SDN concept," says Elbers. “The long term vision is that the network will also become a virtualised resource, driven by applications that require certain types of connectivity.”

Providing the interface is the first step; the value-add will be the things that players do with the added network flexibility, whether the vendors working with operators, the operators' customers, or third-party developers.

"This is a pretty significant development that addresses the software side of things," says Elbers, adding that software is becoming increasingly important, with OpenFlow being an interesting step in that direction.

 


High fives: 5 Terabit OTN switching and 500 Gig super-channels.

Infinera has announced a core network platform that combines Optical Transport Network (OTN) switching with dense wavelength division multiplexing (DWDM) transport. "We are looking at a system that integrates two layers of the network," says Mike Capuano, vice president of corporate marketing at Infinera. 

 

"This is 100Tbps of non-blocking switching, all functioning as one system. You just can't do that with merchant silicon."

Mike Capuano, Infinera 

 

 

 

The DTN-X platform is based on Infinera's third-generation photonic integrated circuit (PIC) that supports five, 100Gbps coherent channels. 

Each DTN-X platform can deliver 5 Terabits-per-second (Tbps) of non-blocking OTN switching using an Infinera-designed ASIC. Ten DTN-X platforms can be combined to scale the OTN switching and transport capacity to 50Tbps currently.

Infinera also plans to add Multiprotocol Label Switching (MPLS) to turn the DTN-X into a hybrid OTN/ MPLS switch. With the next upgrades to the PIC and the switching, the ten DTN-X platforms will scale to 100Tbps optical transport and 100Tbps OTN and MPLS switching capacity.

The platform is being promoted by Infinera as a way for operators to tackle network traffic growth and support developments such as cloud computing where applications and content increasingly reside in the network. "What that means [for cloud-based services to work] is a network with huge capacity and very low latency," says Capuano.

 

Platform details

The 5x100Gbps PIC supports what Infinera calls a 500Gbps 'super-channel'. Each super-channel is a multi-carrier implementation comprising five, 100Gbps wavelengths. Combined with OTN, the 500Gbps super-channel can be filled with 1, 10, 40 and 100 Gigabit streams (SONET/SDH, Ethernet, video etc). Moreover, there is no spectral efficiency penalty: the super-channel uses 250GHz of fibre spectrum, provisioning five 50GHz-wide, 100Gbps wavelengths at a time.
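The claim of no spectral-efficiency penalty follows directly from the figures quoted above, as a quick back-of-the-envelope check shows:

```python
# Spectral efficiency of the 500Gbps super-channel versus a single
# 100Gbps carrier, using the figures quoted in the article.

super_channel_gbps = 5 * 100   # five 100Gbps coherent carriers
super_channel_ghz = 5 * 50     # 250GHz of fibre spectrum in total

single_carrier_gbps = 100
single_carrier_ghz = 50        # one standard 50GHz DWDM grid slot

se_super = super_channel_gbps / super_channel_ghz      # bit/s per Hz
se_single = single_carrier_gbps / single_carrier_ghz   # bit/s per Hz

print(se_super, se_single)  # 2.0 2.0 -> identical spectral efficiency
```

Both work out to 2 bit/s per Hz, so filling the fibre with super-channels consumes no more spectrum than the same capacity carried as individual 100Gbps wavelengths.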

"We have seen 40 and 100Gbps come on the market and they are definitely helping with fibre capacity issues," says Capuano. "But they are more expensive from a cost-per-bit perspective than 10Gbps." By introducing the 500Gbps PIC, Infinera says it is improving the cost-per-bit of high-speed optical transport.

DTN-X: shown are 5 line and tributary cards top and bottom with switching cards in the centre of the chassis. Source: Infinera

Integrating OTN switching within the platform results in the lowest-cost solution and is more efficient than manually configured multiplexing transponders (muxponders), or an external OTN switch, which must be optically connected to the transport platform.

The DTN-X also employs Generalised MPLS (GMPLS) software. "GMPLS makes it easy to deploy networks and services with point-and-click provisioning," says Capuano.

Each DTN-X line card supports a 500Gbps PIC, but the chassis backplane is specified at 1Tbps, ready for Infinera's next-generation 10x100Gbps PIC that will upgrade the DTN-X to a 10Tbps system. "We have already presented our test results for our 1Tbps PIC back in March," says Capuano. The fourth-generation PIC, estimated around 2014 (based on a company slide, although Infinera has made no public comment), will support a 1Tbps super-channel.

Adding MPLS will add the transport capability of the protocol to the DTN-X. "You will have MPLS transport, OTN switching and DWDM all in one platform," says Capuano.

OTN switching is the priority for tier-one operators, to carry and process their SONET/SDH traffic; adding MPLS will bring extra traffic-processing capabilities to the system, he says.

Infinera says that by eventually integrating MPLS switching into the optical transport network, operators will be able to bypass expensive router ports and simplify their network operation. 

 

Performance

Infinera says that the DTN-X's 5Tbps performance does not dip however the system is configured: whether solely as a switch (all line card slots filled with tributary modules), mixed DWDM/ switching (half DWDM/ half tributaries, for example) or solely as a DWDM platform. Depending on the cards in the DTN-X platform, the transport/ switching configuration can be varied but the 5Tbps I/O capacity is retained. Infinera says other switches on the market lose I/O capacity as the interface mix is varied.

Overall, Infinera claims the platform requires half the power of competing solutions and takes up a third less space.

The DTN-X will be available in the first half of 2012.

 

Analysis

Gazettabyte asked several market research firms about the significance of the DTN-X announcement and the importance of combining OTN, DWDM and soon MPLS within one platform.

 

Ovum 

Ron Kline, principal analyst, and Dana Cooperson, vice president, of the network infrastructure practice


"MPLS switching is setting up a very interesting competitive dynamic among vendors"

Dana Cooperson, Ovum

 

The DTN-X is a platform for the largest service providers and their largest sites, says Ovum. 

It sees the DTN-X in the same light as other integrated OTN/ WDM platforms such as Huawei's OSN 8800, Nokia Siemens Networks' hiT 7100, Alcatel-Lucent's 1830 PSS and Tellabs' 7100 OTS. 

"It fits the mold for Verizon's long-haul optical transport platform (LH OTP), especially once MPLS is added," says Kline. "NSN is also claiming it will add MPLS to the 7100. Once MPLS is added, then you have the big packet optical transport box that Verizon wants."

The DTN-X platform will boost the business case for 100 Gig in a similar way to how Infinera's current PIC has done at 10 Gig. "The others will be forced to lower price," says Kline.

Having GMPLS is important, especially if there is a need to do dynamic bandwidth allocation, however it is customer-dependent. "When you start digging, it's hard to find large-scale implementations of GMPLS," says Kline.

The Ovum analysts stress that the need for OTN in the core depends on the customer. Content service providers like Google couldn't care less about OTN. "It's really an issue for multi-service providers like BT and AT&T," says Cooperson.

There is a consensus about the need for MPLS in the core. "Different service providers are likely to take different approaches — some might prefer an integrated box and others might not, it depends on their business," she says. "I think MPLS switching is setting up a very interesting competitive dynamic among vendors that focus on IP/MPLS, those that focus on optical, and those that are trying to do both [optical and IP/MPLS]."

Ovum highlights several aspects regarding the DTN-X's claimed performance.

"Assuming it performs as advertised, this should finally give Infinera what it needs to be of real interest to the tier-ones," says Cooperson. "The message of scalability, simplicity, efficiency, and profitability is just what service providers want to hear." 

Cooperson also highlights Infinera's approach to optical-electrical-optical conversion and the benefit this could deliver at line speeds greater than 100Gbps. 

At present, ROADMs are being upgraded to support flexible-spectrum channel configurations, also known as gridless operation. This is to enable future line speeds that will use more spectrum than today's 50GHz DWDM channels. Operators want ROADMs that support flexible spectrum, but how to manage a network of variable-width channels is still to be resolved.
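As a rough illustration of the grid arithmetic behind flexible spectrum, the ITU-T G.694.1 flexible grid builds channels from 12.5GHz frequency slots rather than fixed 50GHz channels. The sketch below shows how wider signals simply claim more slots; the spectral widths used are illustrative assumptions, not vendor specifications:

```python
import math

SLOT_GHZ = 12.5  # ITU-T G.694.1 flexible-grid slot width granularity

def slots_needed(signal_width_ghz: float) -> int:
    """Smallest number of 12.5 GHz slots that covers the signal's spectrum."""
    return math.ceil(signal_width_ghz / SLOT_GHZ)

# Illustrative spectral widths (assumed figures):
# a 100G carrier fits today's fixed 50 GHz channel, while a higher-rate
# super-channel needs a wider, variable-width channel.
for label, width_ghz in [("100G carrier", 50.0),
                         ("wider super-channel", 75.0)]:
    n = slots_needed(width_ghz)
    print(f"{label}: {width_ghz} GHz -> {n} slots ({n * SLOT_GHZ} GHz)")
```

The management problem the operators describe follows directly: once channels occupy variable numbers of slots, the control plane must track spectrum occupancy per slot rather than per fixed channel position.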

 

"It fits the mold for Verizon's long-haul optical transport platform (LH OTP), especially once MPLS is added"

Ron Kline, Ovum

 

 

 

Infinera's approach relies on conversion to the electrical domain when dropping and regenerating wavelengths, so that the issue of flexible channels does not arise, or is at least forestalled. This, says Cooperson, could be Infinera's biggest point of differentiation.

"What impresses me is the 500Gbps super-channel using five 100Gbps carriers, and the size of the switch fabric," adds Kline. The 5Tbps switching capacity also exceeds everyone else's: "Alcatel-Lucent is closest with 4Tbps, but most range from 1Tbps to 3Tbps."

Ease of use is also a big deal. Infinera marketed rapid turn-up very well with its "10 Gig in 10 days" message, says Kline: "It looks like they will be able to do the same here with 100 Gig."

 

Infonetics Research

 Andrew Schmitt, directing analyst, optical


"GMPLS isn't that important, yet."

 

 

 

 

 

The DTN-X is a WDM platform which optionally includes a switch fabric for carriers that want it integrated with the transport equipment, says Schmitt. Once MPLS is added, it has the potential to be a full-blown packet-optical system.

"[The announcement is] pretty significant though not unexpected," says Schmitt. "I think the key question is what it costs, and whether the 500G PIC translates into compelling savings."

Having MPLS support is important for some carriers such as XO Communications and Google but not for others. 

Schmitt also says GMPLS isn't that important, yet. "Infinera's implementation of regen-rich networks should make their GMPLS implementation workable," he says. "It has been building networks like that for a while."

OTN in the core is still an open debate, but any carrier that doesn't have the luxury of a homogeneous data network needs it, he says.

Schmitt has yet to speak with carriers who have used the DTN-X: "I can't comment on claimed performance but like I said, cost is important."

 

ACG Research 

Eve Griliches, managing partner 


"Infinera has already introduced the 500G PIC, but the OTN is significant in that it can be used as a standalone OTN switch, and it has the largest capacity out there today"

 

 

The DTN-X is an OTN/WDM platform awaiting label switch router (LSR) functionality, says Griliches: "With the LSR functionality it will be able to do statistical multiplexing for direct router connections."

Infinera has already introduced the 500 Gig PIC, but the OTN switching is significant in that the platform can be used as a standalone OTN switch, and it has the largest capacity out there today. An OTN survey conducted last year by ACG Research found that the switch-capacity sweet spot is between 4Tbps and 8Tbps.

Griliches says that LSR-based products are taking time to incorporate WDM and OTN technologies, while it is unclear when the DTN-X will gain MPLS support to add LSR capabilities. The race is on as to who can integrate everything first, but delivering DWDM and OTN before MPLS is the right direction for most tier-one operators, she says.

Infinera has over eight thousand of its existing DTNs deployed at 85 customers in 50 countries. The scale of the DTN-X will likely broaden Infinera's customer base to include tier-one operators, says Griliches.

ACG Research has heard positive feedback from operators it has spoken to. One stressed that the decreased port count due to the larger OTN cross-connect significantly improves efficiencies. Another operator said it would pick Infinera and said the beta version of the 500Gbps PIC is "working beautifully". 

