OFC/NFOEC 2013 industry reflections - Part 3

Joe Berthold, vice president of network architecture, Ciena

The two topics that received the most attention, judging from session attendance and discussion in the hallways, were silicon photonics and software-defined networking (SDN). I predict that next year those who wish to capitalise on this popularity wave will be submitting papers on SDN-enabled silicon photonics.

More seriously, though, there remains vigorous debate about the relative importance of III-V integrated optics and silicon photonics, and I look forward to seeing how this evolves in the marketplace.

 

"Some of the SDN-related talks from the global research and education community were very good. They have been pioneers in making high-capacity optical networks dynamic, and we have much to learn from them as they have several years' experience building and operating SDNs, even before the term existed."

 

With respect to SDN and service providers, it is going to be several years before we see a true SDN-enabled network as there are many other issues that need to be addressed.

This is one of the reasons Ciena is taking a lead role in the Open Networking Foundation's investigation of applying OpenFlow or the like at the optical layers. I thought some of the SDN-related talks from the global research and education community were very good. They have been pioneers in making high-capacity optical networks dynamic, and we have much to learn from them as they have several years' experience building and operating SDNs, even before the term existed.


"One of the most interesting commercial developments to watch in the coming years related to 100 Gig is the work that has begun on pluggable coherent analogue optical modules"

 

There was also quite a bit of buzz about 100 Gig deployments. It was nice to hear one of the industry analysts refer to 2013 as the year of 100 Gig as this is an area where Ciena has been quite successful.

I did not see or hear of any dramatic advances reported at the conference. What I did see, in talks and on the show floor, was a broad base of technology development that will lead to increased system density and lower cost and power.

On the client side, many companies showed 100 Gig CFP2 modules, and there was quite a bit of talk and demonstrations of technology building blocks that will lead to even smaller size.

Another optical networking topic that means many different things to different people was flexible grids and flexible transmission formats. From speaking with a number of network operators, it seems there is an appreciation for the future-proofing benefit of flexible grid ROADMs, but a recognition that the spectral efficiency gains to be had are quite limited, especially in a ROADM mesh network. So they are emerging as a nice-to-have feature but not a must-have-at-any-price feature.

Another 'flex' concept is flex-transceivers. The flavour of flex-transceiver that most people I spoke with considered practical is one that maintains a fixed baud rate but varies the modulation format, say from BPSK to QPSK, 8PSK, 16QAM and perhaps beyond, to fit different distance applications.
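
The trade-off behind a fixed-baud flex-transceiver is simple arithmetic: the raw line rate scales with the bits carried per symbol, while denser formats sacrifice reach. The sketch below illustrates this in Python; the 32 Gbaud dual-polarisation figures are assumptions for illustration, not numbers from the article.

```python
# Illustrative only: line rate at a fixed symbol rate as the modulation
# format varies. Dual polarisation and 32 Gbaud are assumed values.
BITS_PER_SYMBOL = {"BPSK": 1, "QPSK": 2, "8PSK": 3, "16QAM": 4}

def line_rate_gbps(baud_g: float, fmt: str, polarisations: int = 2) -> float:
    """Raw line rate in Gbps: symbol rate x bits/symbol x polarisations."""
    return baud_g * BITS_PER_SYMBOL[fmt] * polarisations

for fmt in BITS_PER_SYMBOL:
    print(f"{fmt}: {line_rate_gbps(32, fmt):.0f} Gbps")
```

At a fixed 32 Gbaud, moving from BPSK to 16QAM quadruples the line rate, which is why a single transceiver design can serve several distance applications.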

I think one of the most interesting commercial developments to watch in the coming years related to 100 Gig is the work that has begun on pluggable coherent analogue optical modules, likely to emerge in a CFP2 form factor. I view this as a major next step the industry will take to reduce the cost and increase the density of coherent interfaces on switches and transmission systems.

The OIF did the industry a great service in pulling together a set of interoperable building blocks that form the photonic foundation of 100 Gig solutions today. The next step is to integrate these pieces and place them in a pluggable module. There is yet no formal project with this goal, but discussions are underway.

Watch this space...

 

 

Karen Liu, principal analyst, components, Ovum

There was a real sense of openness to new directions even as a lot of short-term activity continues to focus on getting 100 Gig to full maturity. Instead of pitching their favourite directions, some people actually solicited more ideas.

 

 "One trend to watch is the battle between VCSELs and silicon photonics"

 

Directions that seemed promising but unformed last year have firmed up a bit. Connections are being made from the application down to the device technology. Ideas that previously seemed wacky are being taken seriously:

  • Optical circuit switching looks like it will have a place in conjunction with Ethernet switching.
  • Spatial division multiplexing is the hot research topic. I like the work that Bell Labs is doing, particularly where the add/drop increment ties together multiple cores of the same wavelength so compensation algorithms can take advantage of similar environmental history.  This is moving past the physics, to thinking about network architecture.
  • Monolithic integration of electronics with photonics. Early stages still and primarily around the drivers. But as this is motivated by power consumption, it seems like a solid direction that will have legs. 

One trend to watch is the battle between VCSELs (vertical-cavity surface-emitting lasers) and silicon photonics. Conventional wisdom was that VCSELs were for multi-mode and silicon photonics for single-mode, but both have crossed over into the other's space.

 

 

Martin Guy, vice-president of product management and technology, Teraxion

There were several noteworthy developments. In particular, silicon photonics has started to show its promise as new products are introduced:

  • Cisco announced its 100 Gig CPAK transceiver following the Lightwire acquisition
  • Kotura showed its 100 Gig WDM QSFP package with only 3.5W of power consumption
  • Luxtera demonstrated a 100 Gig QSFP package using four fibre pairs, each fibre carrying 25Gbps
  • Teraxion introduced its small form factor coherent receiver based on silicon photonics

Silicon photonics was also widely discussed at the technical conference and very impressive results were demonstrated. Most notably, Cisco and Alcatel-Lucent presented results on silicon photonic modulators for metro and long-haul coherent systems with performance comparable to lithium niobate.

Tunable laser technologies on silicon photonics were also presented by companies such as Skorpios and Aurrion during the post-deadline sessions.

 

"Cisco and Alcatel-Lucent presented results on silicon photonic modulators for metro and long-haul coherent systems with performance comparable to lithium niobate."

 

All those new silicon photonics technologies could eventually become key building blocks of future highly-integrated transceivers.

Pluggable coherent modules will be a big market opportunity and it is all about density and low power consumption.

At the show, Oclaro demonstrated key milestones towards bringing a CFP2 coherent module to market by mid-2014; the product is also on the roadmap of all the other major transceiver vendors.

From Teraxion’s perspective, our recent acquisition of Cogo Optronics Canada for high-speed modulators is directly in line with this market trend at the module level, where performance, size and low power consumption are key requirements.

 

Paul Brooks, product line manager for high-speed test solutions, JDSU

The growing confidence in second-generation 100 Gig CFP2s was evident at the show. Many companies, including JDSU, demonstrated robust second-generation 100 Gig modules which will drive confidence across the whole 100 Gig ecosystem to allow cost efficient 100 Gig clients. Our ONT CFP2 test solution was well received and we spent a lot of time demonstrating the features that will enable successful CFP2 deployment.

 

"Many companies are openly discussing 400 Gig and beyond; the bandwidth demand is there but considerable technology challenges need to be addressed"

 

One thing reinforced at the show is the continued importance of innovation in the test and measurement solutions required by our customers as we move to 100 Gig+ systems.

Many companies are openly discussing 400 Gig and beyond; the bandwidth demand is there but considerable technology challenges need to be addressed. The intellectual horsepower present at the show allowed fruitful and engaging discussions on key topics.

 

See also:

Part 1: Software-defined networking: A network game-changer, click here

Part 2: OFC/NFOEC industry reflections, click here

Part 4: OFC/NFOEC industry reflections, click here

Part 5: OFC/NFOEC 2013 industry reflections, click here

 


OFC/NFOEC 2013 product round-up - Part 1

Part 1: Client-side transceivers 

  • First CFP2 single-mode and multi-mode transceiver announcements
  • Cisco Systems unveils its CPAK module
  • 100 Gigabit QSFPs from Kotura and Luxtera
  • CFP2 and 40km CFP 10x10 MSA modules
  • Infiniband FDR and 'LR4 superset' QSFPs

 

The recent OFC/NFOEC exhibition and conference held in Anaheim, California, saw a slew of optical transceiver announcements. The first CFP2 client-side products for single-mode and multi-mode fibre were unveiled by several companies, as was Cisco Systems' in-house CPAK transceiver.

The CFP2 is the pluggable form factor that follows the first generation CFP. The CFP MSA announced the completion of the CFP2 specification at the show, while several vendors including Avago Technologies, Finisar, Fujitsu Optical Components, NeoPhotonics, Oclaro and Oplink Communications detailed their first CFP2 products.

The 40 and 100 Gigabit CFP2 is half the size of the CFP, enabling at least double the number of transceivers on a faceplate compared to four CFPs (see table below). The CFP2 is also future-proofed to support 200 and 400Gbps (see first comment at bottom of CFP2 story).

Another difference between the CFP and the CFP2 is that the CFP2 uses a 4x25Gbps electrical interface. Accordingly, the CFP2 does not need the 'gearbox' IC that translates ten 10 Gigabit-per-second (Gbps) electrical lanes to the four 25Gbps lanes that interface to the 4x25/28Gbps optics. Removing the gearbox IC saves space and reduces the power consumption by several watts.
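
The lane arithmetic the gearbox bridges can be sketched in a few lines of Python (illustrative only; both interfaces carry the same 100Gbps aggregate, just split differently):

```python
# Sketch of the lane arithmetic: the CFP's host interface is ten 10Gbps
# electrical lanes, while the CFP2's host interface is four 25Gbps lanes
# that match the 4x25/28Gbps optics directly, so no gearbox is needed.
def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Total throughput of an electrical interface in Gbps."""
    return lanes * gbps_per_lane

cfp_host = aggregate_gbps(10, 10)   # CFP: gearbox translates to 4x25
cfp2_host = aggregate_gbps(4, 25)   # CFP2: already matches the optics
print(cfp_host, cfp2_host)          # same 100Gbps aggregate either way
```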

The industry has long settled on the SFP+ at 10Gbps while the QSFP has become the 40Gbps form factor of choice. With 100Gbps still in its infancy, transceiver vendors are pursuing several client-side interfaces.  Much work will be needed to reduce the size, power consumption and cost of 100Gbps interfaces before the industry settles on a single pluggable form factor for the single-mode and multi-mode standards.  

 

CFP2 announcements

Finisar demonstrated two CFP2 modules, one implementing the IEEE 100GBASE-LR4 10km standard and the other, the IEEE 100GBASE-SR10 100m multi-mode standard. The company is using directly-modulated, distributed feedback (DFB) lasers for its CFP2 LR4. In contrast, the CFP module uses more expensive, electro-absorption modulator lasers (EMLs). Finisar demonstrated interoperability between the two LR4 modules, an EML-based CFP and a DFB-based CFP2, at the show.

 

* An ER4 CFP2 is under development
** Oclaro disclosed indium phosphide components for a future CFP2 line side pluggable

 

Using directly modulated lasers also reduces the power consumption, says Finisar. Overall, the CFP2 LR4 consumes 7W compared to a 24W first-generation CFP-based LR4.

"We can migrate these [directly modulated laser] designs to a single quad 28 Gig photonic integrated circuit TOSA," says Rafik Ward, Finisar's vice president of marketing. "Likewise on the receive [path], there will be a quad 28 Gig ROSA." The TOSA refers to a transmitter optical sub-assembly while the ROSA is the receiver equivalent.  Ward says the CFP2s will be in production this year.   

Several module and chip makers took part in the Optical Internetworking Forum's (OIF) multi-vendor demonstration of its 4x25 Gigabit chip-to-module electrical interface, the CEI-28G-VSR. The demonstration included CFP2 LR4s from Finisar and from Oclaro as well as Luxtera's 100Gbps shorter reach module in a QSFP28. Oclaro's CFP2 is expected to be in production in the third quarter of 2013.

Another standard implemented in the CFP2 is the 100GBASE-SR10 multi-mode standard. Avago Technologies and Finisar both detailed CFP2 SR10 modules. The SR10 uses 10 VCSELs, each operating at 10Gbps. The SR10 can be used as a 100Gbps interface or as 10 independent 10Gbps channels.

The CFP2 SR10 can be interfaced to 10 Gigabit Ethernet (GbE) SFP+ modules or combinations of 10GbE SFP+ and 40GbE QSFPs. "What people are looking for using the CFP2 multi-mode module is not only for the 100 Gig Ethernet application but interoperability with 40 Gig Ethernet as well as 10 Gig Ethernet modules," says I Hsing Tan, Ethernet segment marketing manager in the fibre optics product division at Avago.

The SR10 electrical interface specification supports retiming and non-retiming options. The Avago CFP2 module includes clock data recovery ICs that can be used for retiming if needed or bypassed. The result is that Avago's CFP2 SR10 consumes 4-6W, depending on whether the clock data recovery chips are bypassed or used.

Meanwhile, NeoPhotonics became the first company to announce the 10x10 MSA in a CFP2.

NeoPhotonics has not detailed the power consumption but says the 10x10Gbps CFP2's is lower than the CFP's, since all of the chips - photonic and electrical - are a newer generation and much work has gone into reducing the power consumption.

"Demand is quite strong for the 10x10 solution," says Ferris Lipscomb, vice president of marketing at NeoPhotonics. "The CFP2 version is being developed, and we expect strong demand there as well."

The key advantage of the 10x10-based solution over a 4x25Gbps design is cost, according to NeoPhotonics. "10x10 enjoys the volume and maturity of 10 Gig, and thus the cost advantage," says Lipscomb. "We believe the 10x10 CFP2 will follow the trend of the 10x10 MSA CFP and will offer a significant cost advantage over CFP2 LR4-based solutions."

 

Cisco's CPAK

Cisco finally showed its in-house silicon photonics-based CPAK transceiver at OFC/NFOEC. The CPAK is the first product to be announced following Cisco's acquisition of silicon photonics player, Lightwire.

 

Cisco says the CPAK is more compact than the CFP2 transceiver with the company claiming that 12 or more transceivers will fit on a faceplate. "While the industry is leapfrogging the CFP with the CFP2, our CPAK leapfrogs the CFP2 because it is much more efficient from a size and power consumption perspective," says Sultan Dawood, a marketing manager at Cisco.

Vendors backing the CFP2 stress that the CPAK is only slightly smaller than the MSA module. "The CFP2 and the CPAK are both interim form factors pending when the CFP4 becomes available," says Avago's Tan. "Any product [like the CFP2] governed by an MSA is going to see strong market adoption."

 

Cisco's CPAK transceiver. Source: Cisco

The CFP4 specification is still being worked on, but 16 CFP4s will fit on a faceplate and the transceiver is scheduled for the second half of 2014.

At OFC, Cisco demonstrated the CPAK implementing the 100GBASE-LR4 and -SR10 standards. The CPAK transceiver will be generally available in the summer of 2013, says Cisco.

 

CFP

Oplink Communications and hybrid integration specialist, Kaiam, showed a 100Gbps 10x10 MSA CFP implementing a 40km extended reach.

The 10x10 40km CFP is for connecting data centres and for broadband backhaul applications. The CFP's electro-absorption modulator lasers coupled to a wavelength multiplexer make up the TOSA, while the ROSA comprises avalanche photodiode receivers and a demultiplexer. Samples will be available in the second quarter of 2013, with production starting in the third quarter.

Source Photonics announced a second-generation 100GBASE-LR4 CFP with a power consumption of 12-14W.

Meanwhile, Effdon Networks detailed its first 100Gbps product, a CFP with a reach of 80km. Until now, 100Gbps CFPs have been limited largely to the 10km LR4, with the first 100Gbps CFPs reaching 80km or more being 4x25Gbps direct-detection designs that can include specialist ICs.

 

100 Gig QSFP

Luxtera and Kotura both detailed 100 Gigabit QSFPs that use their respective silicon photonics technology. The Kotura design uses two chips, has a reach of 2km and is a four-channel wavelength-division multiplexing (WDM) design, while the Luxtera design is a four-channel integrated transceiver that uses a single laser and is tailored for 500m, although Luxtera says it can achieve a 2km reach.

 

40 Gigabit Ethernet and Infiniband FDR

Avago Technologies announced that its eSR4 40 Gigabit Ethernet (GbE) QSFP+ has a reach of up to 550m, beyond the reach specified by the IEEE 40GBASE-SR4 standard. The eSR4 supports 40GbE or four independent 10GbE channels. When used as a multi-channel 10GbE interface, the QSFP+ interfaces to various 10GbE form factors such as the X2, XFP and SFP+. It can also interface to a 100GbE CFP2, as mentioned.

Avago first announced the eSR4 QSFP+ with a reach of 300m over OM3 multi-mode fibre and 400m over OM4 fibre. The eSR4 now extends the reach to a guaranteed 550m when used with specific OM4 fibre from fibre makers Corning, Commscope and Panduit.

The extended reach is needed to address the larger data centres now being built, as well as to support flatter switch architectures that use two rather than three tiers of switches and that have greater traffic flowing between switches on the same tier.

Avago says data centre managers are moving to deploy OM4 fibre. "The end user is going to move from OM3 to OM4 fibre for future-proofing purposes," says Tan. "The next-generation 32 Gig Fibre Channel and 100 Gigabit Ethernet are focussing on OM4 fibre."  

Meanwhile, ColorChip showed its 56Gbps QSFP+ implementing the FDR (Fourteen Data Rate) 4x Infiniband standard as part of a Mellanox MetroX long-haul system demonstration at the show.

Finisar also demonstrated a 40Gbps QSFP using four 1310nm VCSELs. The result is a QSFP with a 10km reach that supports a 40Gbps link or four 10Gbps links when used in a 'breakout' mode. The existing 40GBASE-LR4 standard supports a 40Gbps link only. Finisar's non-standard implementation adds a point-to-multipoint configuration.

"A single form factor port can be used not only for 40 Gig but also can enable higher density 10 Gig applications than what you can do with SFP+," says Ward.

Kaiam detailed a 40Gbps QSFP+ ER4 transceiver having a 40km reach. The QSFP+ transceiver has the equivalent functionality of four DML-based SFP+s fixed on a coarse WDM grid, and includes a wavelength multiplexer and de-multiplexer. 

 

For OFC/NFOEC 2013 - Part 2, click here

 

Further reading

LightCounting: OFC/NFOEC review: news from the show floor, click here

Ovum: Cisco hits both show hot buttons with silicon photonics for 100G, click here


OFC/NFOEC 2013 industry reflections - Part 2

Gazettabyte asked industry figures for their views after attending the recent OFC/NFOEC show held in Anaheim. Part 2

 

Bill Gartner, vice president and general manager of the high-end routing and optical business unit at Cisco Systems

There were several key themes during this year’s OFC conference, but what I found most compelling were the disruptive trends and technologies that stand to significantly impact the optical communications market in the coming years.

 

"SDN could be the single biggest disruptor in the transport industry and has the potential to transform network programmability and orchestration"

 

One of the hottest themes at this year’s OFC conference was the role of silicon photonics and the benefits it presents to service providers and carriers. Silicon photonics is truly one of the most interesting advancements taking place in the industry as it has the potential to drastically lower the power and overall cost of ASICs while increasing density.

Several carriers at the show, including CenturyLink and AT&T, presented their view that optics is becoming a larger portion of their spend and now exceeds the cost of packet switching technologies.

A second key trend coming out of the show is software-defined networking (SDN) and its impact on networking. There is tremendous industry interest around this topic and it extended to the Anaheim Convention Center.

With SDN, our customers can increase flexibility in terms of selecting the features and protocols that make sense for their network application whether it is a data centre application, a service provider application or a large-scale enterprise application.

The last theme that resonated during OFC was around the convergence of packet and optical solutions. As service providers look for ways to decrease both CapEx and OpEx related to the network, incremental technology improvements will decrease costs. However, for many customers, their network capacity is growing far faster than their revenues, so incremental improvements will not yield required reductions. 

 

"As an industry we have to evolve organisationally and technically. Those who fail to recognise that face extinction."

 

This shows us that we need to explore more fundamental shifts in architectures that have the potential to yield significant savings in OpEx and CapEx. Enter the convergence of IP and optical. This may take the form of converged platforms, but will also involve multi-layer control planes that allow the exchange of information between the packet and optical layers. This convergence helps answer questions like: How well is the network utilised? Can it be optimised? Are there multi-layer protection/restoration schemes that make better use of the available resources?

During the conference, I had the opportunity to present at the OSA Executive Forum, which brought together more than 150 senior-level executives to discuss key themes, opportunities and challenges facing the next generation in optical communications.

What struck me is that this industry is constantly evolving, which presents challenges and opportunities. We are looking at an industry that is highly fragmented at the moment and requires further streamlining.

You have new players at every level of the value chain that bring exciting, unique perspectives and advanced technologies that increase efficiency and decrease costs. But none of this innovation comes without change; as an industry we have to evolve organisationally and technically. Those who fail to recognise that face extinction.

 

"This is like solving a simultaneous equation where the variables are power, cost and density – you need to solve for all three"

 

The key themes discussed at OFC are an indication of what is to come in optical transport and mirror our top priorities at Cisco.

In the coming year, we expect to see CMOS photonics technology enable lower power pluggables. This is the case with CPAK, but more broadly, we will see this technology find its way into low cost board-to-board interconnect and chassis-to-chassis interconnect. 

As an industry, we have made great progress in reducing the cost of transmitting bits over a long distance but much more remains to be done. As bit rates increase beyond 100 Gigabit, we must look for ways to drive this cost down faster, while decreasing both power and size. This is like solving a simultaneous equation where the variables are power, cost and density – you need to solve for all three.

During the next five years, I think that SDN could be the single biggest disruptor in the transport industry and has the potential to transform network programmability and orchestration.

We will see an entire software industry emerge around SDN, but it is important to note that this is really all about multilayer control – Layer 0 to Layer 3. SDN is not simply an optical transport problem to be solved. The advantage will go to those who are looking at this holistically.

 

Brandon Collings, CTO of the communications and commercial optical products group at JDSU

I found it interesting that the major network equipment manufacturers had a significantly increased presence on the exhibition floor.

 

"This year’s focus and buzz was all on silicon photonics with researchers leveraging it against nearly every function in telecom and datacom"

 

I learned a lot about SDN at levels above the photonic network. This is a very complex topic likely to take some time to fully mature within telecom networks; however, the potential values appear compelling.

This year’s focus and buzz was all on silicon photonics with researchers leveraging it against nearly every function in telecom and datacom. It will be interesting to watch how this promising technology evolves within the industry, where it achieves its promise and where it runs into practical roadblocks.

 

Vladimir Kozlov, CEO of LightCounting

This was the best OFC since 2000. The optical community is once again energised. Some attribute the improved mood to the high-value acquisitions of Lightwire and Nicira made last year, but this is just part of the story.

Yes, the potential of silicon photonics and software-defined networking (which Lightwire and Nicira were focussed on, respectively) does broaden the horizon for optical technologies in communication networks and data centres. But the excitement is not limited to just these two ideas. All the new - and old or forgotten - ideas, technologies and products once again have a shot at making a difference. Demand for optics is strong and the customers are hungry for innovation.

 

"Demand for optics is strong and the customers are hungry for innovation" 

 

In contrast to 2000, few people are getting carried away with the excitement. The mood is much more constructive this time and it makes me hope that most of this new energy will not be wasted.

I would not single out a specific technology or application to watch out for in the next few years. All of them have opportunities and challenges ahead. We will keep track of as many developments as we can and make sure that hype does not lead the industry off the tracks this time.

 

Effie Favreau, marketing, Sumitomo Electric

One hundred Gigabit technology is here. Last year there was a lot of hype about 100 Gigabit and now it is reality; vendors have products that are shipping. 

Sumitomo and ClariPhy partnered on pluggable coherent modules. Together, we hosted an impressive demonstration with all the components to make pluggable coherent modules available next year.

 

"For the enterprise/data centre, vendors requiring low-cost, high-density equipment really need the CFP4"

 

One thing I learned from the show is that vendors need to re-purpose their existing equipment. There was much discussion regarding software-enabled applications and passives to enhance the performance of networks and make them more intelligent.

There was the introduction of the CFP2 from several vendors as well as Cisco's CPAK. For the enterprise/data centre, vendors requiring low-cost, high-density equipment really need the CFP4. At Sumitomo, we are concentrating our R&D efforts on the CFP4.

 

See also:

Part 1: Software-defined networking: A network game-changer? click here 

Part 3: OFC/NFOEC 2013 industry reflections, click here

Part 4: OFC/NFOEC industry reflections, click here

Part 5: OFC/NFOEC 2013 industry reflections, click here


Software-defined networking: A network game-changer?

Q&A with Andrew Lord, head of optical research at BT, about his impressions following the recent OFC/NFOEC show.

OFC/NFOEC reflections: Part 1


"We [operators] need to move faster"

Andrew Lord, BT

 

Q: What was your impression of the show?

A: Nothing out of the ordinary. I haven't come away clutching a whole bunch of results that I'm determined to go and check out, which I do sometimes.

I'm quite impressed by how the main equipment vendors have moved on to look seriously at post-100 Gigabit transmission. In fact we have some [equipment] in the labs [at BT]. That is moving on pretty quickly. I don't know if there is a need for it just yet but they are certainly getting out there, not with live chips but making serious noises on 400 Gig and beyond.

There was a talk on the CFP [module] and whether we are going to be moving to a coherent CFP at 100 Gig. So what is going to happen to those prices? Is there really going to be a role for non-coherent 100 Gig? That is still a question in my mind.


"Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?"

 

I was quite keen on that but I'm wondering if there is going to be a limited opportunity for the non-coherent 100 Gig variants. The coherent prices will drop and my feeling from this OFC is they are going to drop pretty quickly when people start putting these things [100 Gig coherent] in; we are putting them in. So I don't know quite what the scope is for people that are trying to push that [100 Gigabit direct detection].

 

What was noteworthy at the show?

There is much talk about software-defined networking (SDN), so much talk that a lot of people in my position have been describing it as hype. There is a robust debate internally [within BT] on the merits of SDN which is essentially a data centre activity. In a live network, can we make use of it? There is some skepticism.

I'm still fairly optimistic about SDN and the role it might have and the [OFC/NFOEC] conference helped that.

I'm expecting next year to be the SDN conference and I'd be surprised if SDN doesn't have a much greater impact then [OFC/NFOEC 2014] with more people demoing SDN use cases.

 

Why is there so much excitement about SDN?

Why now when it could have happened years ago? We could have all had GMPLS (Generalised Multi-Protocol Label Switching) control planes. We haven't got them. Control plane research has been around for a long time; we don't use it: we could but we don't. We are still sitting with heavy OpEx-centric networks, especially optical.


"The 'something different' this conference was spatial-division multiplexing"


So why are we getting excited? It is about getting the cost out of the operational side - the software-development side - and the ability to buy from whomever we want to.

For example, if we want to buy a new network, we put out a tender and have some 10 responses. It is hard to adjudicate them all equally when, with some of them, we'd have to start from scratch with software development, whereas with others we have a head start as our own management interface has already been developed. That shouldn't and doesn't need to be the case.

Open up the equipment's north-bound interface into our own OSS (operations support systems) and, in theory - and this is probably naive - any OSS we develop ought to work.

Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?

We want to as it means we can leverage competition but also we can get new network concepts and builds in quicker without having to suffer 18 months of writing new code to manage the thing. We used to do that but it is no longer acceptable. It is too expensive and time consuming; we need to move faster.

It [the interest in SDN] is just competition hotting up and costs getting harder to manage. This is an area that is now the focus and SDN possibly provides a way through that.

Another issue is the ability to quickly put new applications and services onto our networks. For example, a bank wants to do data backup but doesn't want to spend a year and resources developing something that it uses only occasionally. Is there a bandwidth-on-demand application we can put onto our basic network infrastructure? Why not?

SDN gives us a chance to do something like that; we could roll it out quickly for specific customers.

 

Anything else at OFC/NFOEC that struck you as noteworthy?  

The core networks aspect of OFC is really my main interest.

You are taking the components, a big part of OFC, and then the transmission experiments and all the great results that they get - multiple Terabits and new modulation formats - and then in networks you are saying: What can I build?

The network has always been the poor relation. It has not had the same exposure or the same excitement. Well, now, the network is taking centre stage.

As you see components and transmission mature - and it is maturing, as the capacity we are seeing on a fibre is almost hitting the natural limit - so the spectral efficiency, the number of bits you can squeeze into a single Hertz, is hitting the limit of 3, 4, 5, 6 [bit/s/Hz]. You can't get much more than that if you want to go a reasonable distance.
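The ceiling being described follows from the Shannon bound. A minimal sketch (Python, with illustrative SNR figures and a hypothetical function name) shows why a few bit/s/Hz is where practical long-haul systems top out:

```python
import math

def shannon_se(snr_db, polarisations=2):
    """Shannon spectral-efficiency bound in bit/s/Hz for a linear channel,
    over the given number of polarisations."""
    snr = 10 ** (snr_db / 10)
    return polarisations * math.log2(1 + snr)

# Even a generous ~12 dB delivered SNR caps spectral efficiency near
# 8 bit/s/Hz over two polarisations; with implementation penalties and
# nonlinear fibre effects, real systems land in the 3-6 bit/s/Hz region
# the interview mentions.
for snr_db in (6, 9, 12):
    print(f"{snr_db} dB -> {shannon_se(snr_db):.1f} bit/s/Hz")
```

The SNR values here are assumptions for illustration; the point is the logarithmic scaling, which makes each extra bit/s/Hz roughly twice as expensive in signal power.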

So the big buzzword - 70 to 80 percent of the OFC papers we reviewed - was flex-grid: turning the optical spectrum in fibre into a much more flexible commodity where you can have whatever spectrum you want between nodes, dynamically. Very, very interesting; loads of papers on that. How do you manage that? What benefits does it give?

 

What did you learn from the show?

One area I don't get yet is spatial-division multiplexing. Fibre is filling up so where do we go? Well, we need to go somewhere because we are predicting our networks continuing to grow at 35 to 40 percent.

Now we are hitting a new era. Putting fibre in doesn't really solve the problem in terms of cost, energy and space. You are just layering solutions on top of each other and you don't get any more revenue from it. We are stuffed unless we do something different.

The 'something different' this conference was spatial-division multiplexing. You still have a single fibre but you put in multiple cores and that is the next way of increasing capacity. There is an awful lot of work being done in this area.

I gave a paper [pointing out the challenges]. I couldn't see how you would build the splicing equipment, or how you would get this fibre qualified given the 30-40 years of expertise of companies like Corning making single-mode fibre. Are we really going to go through all that again for this new fibre? How long is that going to take? How do you align these things?

 

"SDN for many people is data centres and I think we [operators] mean something a bit different." 

 

I just presented the basic pitfalls from an operator's perspective of using this stuff. That is my sceptical side. But I could be proved wrong; it has happened before!

 

Anything you learned that got you excited?

One thing I saw is optics pushing out.

In the past we saw 100 Megabit and one Gigabit Ethernet (GbE) each being king of a certain part of the network. People were talking about those links becoming optical.

We are starting to see optics enter a new phase. Ten Gigabit Ethernet is a wavelength, a colour on a fibre. If the cost of those very simple 10GbE transceivers continues to drop, we could be seeing optics all over the place: you have a GigE port, well, have a wavelength.

[When that happens] optics comes centre stage and then you have to address optical questions. This is exciting and Ericsson was talking a bit about that.

 

What will you be monitoring between now and the next OFC?

We are accelerating our SDN work. We see that as being game-changing in terms of networks. I've seen enough open standards emerging, and enough will around the industry - among the people I've spoken to and some of the vendors that want to do work with us - that it is exciting. So are things like 4K and 8K (ultra-high-definition) TV, and providing the bandwidth to make them sensible.

 

"I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job."

 

Think of a health application where you have a 4K or 8K TV camera giving an ultra-high-resolution picture of a scan, piping that around the network at many, many Gigabits. These types of applications are exciting and that is where we are going to be putting a bit more effort. Rather than just the traditional thinking about transmission, we are moving on to some solid networking; that is how we are migrating it in the group.

 

When you say open standards [for SDN], OpenFlow comes to mind.

OpenFlow is a lovely academic thing. It allows a university to open up a box and try its own algorithms. But it doesn't really help us because we don't want to get down to that level.

I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job.

What we need is the next level up: taking entire network functions and having them presented in an open way.

For example, something like OpenStack [the open source cloud computing software] that allows you to start to bring networking, and compute and memory resources in data centres together.

You can start to say: I have a data centre here, another here and some networking in between - how can I orchestrate all of that? I need to provide some backup or some protection. What is it that will orchestrate all those diverse elements, in very different parts of the industry, automatically?

That is the kind of open theme that operators are interested in.

 

That sounds different to what is being developed for SDN in the data centre. Are there two areas here: one networking and one the data centre?

You are quite right. SDN for many people is data centres and I think we mean something a bit different. We are trying to have multi-vendor leverage and as I've said, look at the software issues.

We also need to be a bit clearer as to what we mean by it [SDN].

 

Andrew Lord has been appointed technical chair at OFC/NFOEC

 

Further reading

Part 2: OFC/NFOEC 2013 industry reflections, click here

Part 3: OFC/NFOEC 2013 industry reflections, click here

Part 4: OFC/NFOEC industry reflections, click here

Part 5: OFC/NFOEC 2013 industry reflections, click here


Telcos eye servers & software to meet networking needs

  • The Network Functions Virtualisation (NFV) initiative aims to use common servers for networking functions
  • The initiative promises to be industry disruptive

 

"The sheer massive [server] volumes is generating an innovation dynamic that is far beyond what we would expect to see in networking"

Don Clarke, NFV

 

 

Telcos want to embrace the rapid developments in IT to benefit their networks and operations.

The Network Functions Virtualisation (NFV) initiative, set up by the European Telecommunications Standards Institute (ETSI), has started work to use servers and virtualisation technology to replace the many specialist hardware boxes in their networks. Such boxes can be expensive to maintain, consume valuable floor space and power, and add to the operators' already complex operations support systems (OSS).

"Data centre technology has evolved to the point where the raw throughput of the compute resources is sufficient to do things in networking that previously could only be done with bespoke hardware and software," says Don Clarke, technical manager of the NFV industry specification group, and who is BT's head of network evolution innovation. "The data centre is commoditising server hardware, and enormous amounts of software innovation - in applications and operations - is being applied.” 

 

"Everything above Layer 2 is in the compute domain and can be put on industry-standard servers"

The operators have been exploring independently how IT technology can be applied to networking. Now they have joined forces via the NFV initiative.

"The most exciting thing about the technology is piggybacking on the innovation that is going on in the data centre," says Clarke. "The sheer massive volumes is generating an innovation dynamic that is far beyond what we would expect to see in networking."

Another key advantage is that once networks become software-based, enormous flexibility results: new services can be created and brought to market quickly, while costs are also reduced.

NFV and SDN

The NFV initiative is being promoted as a complement to software-defined networking (SDN).

 

The complementary relationship between NFV and SDN. Source: NFV.
SDN is focussed on control mechanisms that separate the control plane from the data plane in order to reconfigure networks. The transport network can be seen as dumb pipes, with the control mechanisms adding the intelligence.

“There are other areas of the network where there is intrinsic complexity of processing rather than raw throughput,” says Clarke.

These include firewalls, session border controllers, deep packet inspection boxes and gateways - all functions that can be ported onto servers. Indeed, once running as software on servers such networking functions can be virtualised.

"Everything above Layer 2 is in the compute domain and can be put on industry-standard servers,” says Clarke. This could even include core IP routers but clearly that is not the best use of general-purpose computing, and the initial focus will be equipment at the edge of the network.

Clarke describes how operators will virtualise network elements and interface them to their existing OSS systems. “We see SDN as a longer journey for us,” he says. “In the meantime we want to get the benefits of network virtualisation alongside existing networks and reusing our OSS where we can.”

NFV will first be applied to appliances that lend themselves to virtualisation and where the impact on the OSS will be minimal. Here the appliance will be loaded as software on a common server instead of current bespoke systems situated at the network's end points. “You [as an operator] can start to draw a list of target things as to what will be of most interest,” says Clarke.

Virtualised network appliances are not a new concept and examples are already available on the market. Vanu's software-based radio access network technology is one such example. “What has changed is the compute resources available in servers is now sufficient, and the volume of servers [made] is so massive compared to five years ago,” says Clarke.

The NFV forum aims to create an industry-wide understanding as to what the challenges are while ensuring that there are common tools for operators that will also increase the total available market.

Clarke stresses that the physical shape of operators' networks - such as the number of local exchanges - will not change greatly with the uptake of NFV. “But the kind of equipment in those locations will change, and that equipment will be server-based," says Clarke.

 

"One of the things the software world has shown us is that if you sit on your hands, a player comes out of nowhere and takes your business"

 

One issue for operators is their telecom-specific requirements. Equipment is typically hardened and has strict reliability requirements, yet operators' central offices are not as well air-conditioned as data centres. This may require innovation around reliability and resilience in software such that, should a server fail, the system adapts and the server workload is moved elsewhere. The faulty server can then be replaced by an engineer on a scheduled service visit rather than an emergency one.

"Once you get into the software world, all kinds of interesting things that enhance resilience and reliability become possible," says Clarke.


Industry disruption

The NFV initiative could prove disruptive for many telecom vendors.

"This is potentially massively disruptive," says Clarke. "But what is so positive about this is that it is new." Moreover, this is a development that operators are flagging to vendors as something that they want.

Clarke admits that many vendors have existing product lines that they will want to protect. But these vendors have unique telecom networking expertise which no IT start-up entering the field can match.

"It is all about timing," says Clarke. "When do they [telecom vendors] decisively move their product portfolio to a software version is an internal battle that is happening right now. Yes, it is disruptive, but only if they sit on their hands and do nothing and their competitors move first."

Clarke is optimistic about the vendors' response to the initiative. "One of the things the software world has shown us is that if you sit on your hands, a player comes out of nowhere and takes your business," he says.

Once operators deploy software-based network elements, they will be able to do new things with regard to services. "Different kinds of service profiles, different kinds of capabilities and different billing arrangements become possible because it is software- not hardware-based."

Work status

The NFV initiative was unveiled late last year with the first meeting being held in January. The initiative includes operators such as AT&T, BT, Deutsche Telekom, Orange, Telecom Italia, Telefonica and Verizon as well as telecoms equipment vendors, IT vendors and technology providers.

One of the meeting's first tasks was to identify the issues to be addressed to enable the use of servers for telecom functions. Around 60 companies attended the meeting - including 20-odd operators - to create the organisational structure to address these issues.

Two expert groups - on security, and on performance and portability - were set up. “We see these issues as key for the four working groups,” says Clarke. These four working groups cover software architecture, infrastructure, reliability and resilience, and orchestration and management.

Work has started on the requirement specifications, with calls between the members taking place each day, says Clarke. The NFV work is expected to be completed by the end of 2014.

 

Further information:

White Paper: Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action, click here


Luxtera's interconnect strategy

Briefing: Silicon photonics

Part 1: Optical interconnect

 

Luxtera demonstrated a 100 Gigabit QSFP optical module at the OFC/NFOEC 2013 exhibition.

 

"We're in discussions with a lot of memory vendors, switch vendors and different ASIC providers"

Chris Bergey, Luxtera

 

 

 

 

The silicon photonics-based QSFP pluggable transceiver was part of the Optical Internetworking Forum's (OIF) multi-vendor demonstration of the 4x25 Gigabit chip-to-module interface, defined by the CEI-28G-VSR Implementation Agreement.

The OIF demonstration involved several optical module and chip companies and included CFP2 modules running the 100GBASE-LR4 10km standard alongside Luxtera's 4x28 Gigabit-per-second (Gbps) silicon photonics-based QSFP28.

Kotura also previewed a 100Gbps QSFP at OFC/NFOEC but its silicon photonics design uses two chips and wavelength-division multiplexing (WDM).

The Luxtera QSFP28 is being aimed at data centre applications and has a 500m reach although Luxtera says up to 2km is possible. The QSFP28 is sampling to initial customers and will be in production next year.

100 Gigabit modules

Current 100GBASE-LR4 client-side interfaces are available in the CFP form factor. OFC/NFOEC 2013 saw the announcement of two smaller pluggable form factors at 100Gbps: the CFP2, the next pluggable on the CFP MSA roadmap, and Cisco Systems' in-house CPAK.

Now silicon photonics player Luxtera is coming to market with a QSFP-based 100 Gigabit interface, more compact than the CFP2 and CPAK.

The QSFP is already available as a 40Gbps interface, and the 40Gbps QSFP also supports four independent 10Gbps interfaces. The QSFP form factor, along with the SFP+, is widely used on the front panels of data centre switches.

"The QSFP is an inside-the-data-centre connector while the CFP/CFP2 is an edge of the data centre, and for telecom, an edge router connector," says Chris Bergey, vice president of marketing at Luxtera. "These are different markets in terms of their power consumption and cost."

Bergey says the big 'Web 2.0' data centre operators like the reach and density offered by the 100Gbps QSFP as their data centres are physically large and use flatter, less tiered switch architectures.


"If you are a big systems company and you are betting on your flagship chip, you better have multiple sources" 

 

The content service providers also buy transceivers in large volumes and like that the Luxtera QSFP works over single-mode fibre which is cheaper than multi-mode fibre. "All these factors lead to where we think silicon photonics plays in a big way," says Bergey.

The 100Gbps QSFP must deliver a lower cost-per-bit compared to the 40Gbps QSFP if it is to be adopted widely. Luxtera estimates that the QSFP28 will cost less than US $1,000 and could be as low as $250.

Optical interconnect

Luxtera says its focus is on low-cost, high-density interconnect rather than optical transceivers. "We want to be a chip company," says Bergey.

The company defines optical interconnect as covering active optical cable and transceivers, optical engines used as board-mounted optics placed next to chips, and ASICs with optical SerDes (serialiser/ deserialisers) rather than copper ones.

Optical interconnect, it argues, will have a three-stage evolution: starting with face-plate transceivers, moving to mid-board optics and ending with ASICs with optical interfaces. Such optical interconnect developments promise lower-cost high-speed designs and new ways to architect systems.

Currently optics are largely confined to transceivers on a system's front panel. The exceptions are high-end supercomputer systems and emerging novel designs such as Compass-EOS's IP core router.

"The problem with the front panel is the density you can achieve is somewhat limited," says Bergey. Leading switch IC suppliers using a 40nm CMOS process are capable of a Terabit of switching. "That matches really well if you put a ton of QSFPs on the front panel," says Bergey.

But once switch IC vendors use the next CMOS process node, the switching capacity will rise to several Terabits. This becomes far more challenging to meet using front panel optics and will be more costly compared to putting board-mounted optics alongside the chip.

"When we build [silicon photonics] chips, we can package them in QSFPs for the front panel, or we can package them for mid-board optics," says Bergey.

 

"If it [silicon photonics] is viewed as exotic, it is never going to hit the volumes we aspire to."


The use of mid-board optics by system vendors is the second stage in the evolution of optical interconnect. "It [mid-board optics] is an intermediate step between how you move from copper I/O [input/output] to optical I/O," says Bergey.

The use of mid-board optics requires less power, especially when using 25Gbps signals, says Bergey: “You don't need as many [signal] retimers.” It also cuts the power consumed by each SerDes from 2W to 1W, since the mid-board optics are closer and signals need not be driven all the way to the front panel. "You are saving 2W per 100 Gig and if you are doing several Terabits, that adds up," says Bergey.
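As a rough sanity check on those figures - assuming, as the quote suggests, that the 2W-per-100-Gig saving comes from two SerDes per 100 Gig each dropping from 2W to 1W - the saving scales linearly with switch capacity (function name and defaults are illustrative):

```python
def serdes_saving_w(capacity_gbps, serdes_per_100g=2,
                    front_panel_w=2.0, mid_board_w=1.0):
    """Power saved by driving short mid-board traces instead of the full
    run to the front panel, using the quoted per-SerDes figures (2W -> 1W)."""
    links_100g = capacity_gbps / 100
    return links_100g * serdes_per_100g * (front_panel_w - mid_board_w)

# 2W saved per 100 Gig means a multi-Terabit switch saves tens of watts
for capacity in (100, 1600, 3200):
    print(f"{capacity} Gbps: {serdes_saving_w(capacity):.0f} W saved")
```

At 3.2 Tbps the saving works out to 64W per switch, which is why Bergey says "that adds up" at several Terabits.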

The end game is optical I/O. This will be required wherever there are dense I/O requirements and where a lot of traffic is aggregated.

Luxtera, as a silicon photonics player, is pursuing an approach to integrate optics with VLSI devices. "We're in discussions with a lot of memory vendors, switch vendors and different ASIC providers," says Bergey.

 

Silicon photonics fab

Last year STMicroelectronics (ST) and Luxtera announced they would create a 300mm wafer silicon photonics process at ST's facility in Crolles, France.

Luxtera expects that line to be qualified, ramped and in production in 2014. Before then, devices need to be built, qualified and tested for their reliability.

"If you are a big systems company and you are betting on your flagship chip, you better have multiple sources," says Bergey. "That is what we are doing with ST: it drastically expands the total available market of silicon photonics and it is something that ST and Luxtera can benefit from.”

Having multiple sources is important, says Bergey: "If it [silicon photonics] is viewed as exotic, it is never going to hit the volumes we aspire to."

 

Part 2: Bell Labs on silicon photonics click here

Part 3: Is silicon photonics an industry game-changer? click here


Kotura demonstrates a 100 Gigabit QSFP

Kotura has announced a 100 Gigabit QSFP with a reach of 2km.  

 

“QSFP will be the long-term winner at 100 Gig; the same way QSFP has been a high volume winner at 40 Gig”

Arlon Martin, Kotura

 

 

The device is aimed at plugging the gap between vertical-cavity surface-emitting laser (VCSEL)-based 100GBASE-SR10 designs that have a 100m span, and the CFP-based 100GBASE-LR4 that has a 10km reach.

“It is aimed at the intermediate space, which the IEEE is looking at a new standard for," says Arlon Martin, vice president of marketing at Kotura.

The device is similar to Luxtera's 100 Gigabit-per-second (Gbps) QSFP, also detailed at the OFC/NFOEC 2013 exhibition, and is targeting the same switch applications in the data centre. “Where we differ is our ability to do wavelength-division multiplexing (WDM) on a chip,” says Martin. Kotura also uses third-party electronics such as laser drivers and transimpedance amplifiers (TIA) whereas Luxtera develops and integrates its own.

The Kotura QSFP uses four wavelengths, each at 25Gbps, that operate around 1550nm. “We picked 1550nm because that is where a lot of the WDM applications are," says Martin. “There are also some customers that want more than four channels.” The company says it is also doing development work at 1310nm.

Although Kotura's implementation doesn't adhere to an IEEE standard - the standard is still a work in progress - Martin points out that the 10x10 MSA is also not an IEEE standard, yet is probably the best-selling client-side 100Gbps interface.

Optical component and module vendors including Avago Technologies, Finisar, Oclaro, Oplink, Fujitsu Optical Components and NeoPhotonics all announced CFP2 module products at OFC/NFOEC 2013. The CFP2 is the next pluggable form factor on the CFP MSA roadmap and is approximately half the size of the CFP.

The advent of the CFP2 enables eight 100Gbps pluggable modules on a system's front panel compared to four CFPs. But with the QSFP, up to 24 modules can be fitted, while 48 are possible when mounted double-sided - ’belly-to-belly’ - across the panel. “QSFP will be the long-term winner at 100 Gig; the same way QSFP has been a high volume winner at 40 Gig,” says Martin.
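Those module counts translate directly into per-panel bandwidth; a quick tally using the figures quoted above:

```python
# Modules per front panel as quoted in the article; each carries 100 Gbps
MODULES_PER_PANEL = {
    "CFP": 4,
    "CFP2": 8,
    "QSFP": 24,
    "QSFP belly-to-belly": 48,
}

for form_factor, count in MODULES_PER_PANEL.items():
    tbps = count * 100 / 1000
    print(f"{form_factor}: {count} x 100 Gbps = {tbps:.1f} Tbps per panel")
```

The QSFP thus offers 2.4 to 4.8 Tbps of front-panel capacity against the CFP2's 0.8 Tbps, which is the density argument behind Martin's "long-term winner" claim.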

The QSFP uses 28Gbps pins - this variant is also called the QSFP28 - but Kotura refers to its 100Gbps product as a QSFP. The design consumes 3.5W and uses two silicon photonics chips. Kotura says 80 percent of the total power consumption is due to the electronics.

One of the two chips is the silicon transmitter which houses the platform for the four lasers (gain chips) combined as a four-channel array. Each is an external cavity laser where part of the cavity is within the indium phosphide device and the rest in the silicon photonics waveguide. The gain chips are flip-chipped onto the silicon. The transmitter also includes a grating that sets each laser's wavelength, four modulators, and a WDM multiplexer to combine the four wavelengths before transmission on the fibre.

 

 Kotura's 4x25 Gig transmitter and receiver chips. Source: Kotura
The receiver chip uses a four-channel demultiplexer with each channel fed to a germanium photo-detector. Two chips are used as it is easier to package each as a transmitter optical sub-assembly (TOSA) or receiver optical sub-assembly (ROSA), says Martin.  The 100Gbps QSFP will be generally available in 2014. 

Disruptive system design

The recent Compass-EOS IP router announcement is a welcome development, says Kotura, as it brings the optics inside the system - an example of mid-board optics - as opposed to the front panel. Compass-EOS refers to its novel icPhotonics chip, which combines a router chip and optics, as silicon photonics, but in practice it is an integrated optics design. The 168 VCSELs and 168 photodetectors per chip make for massively parallel interconnect, says Martin.

“The advantage, from our point of view of silicon photonics, is to do WDM on the same fibre in order to reduce the amount of cabling and interconnect needed,” he says. At 100 Gigabit this reduces the cabling by a factor of four, and as more 25Gbps wavelength channels are used the factor will grow to 10x or eventually even 40x.

“What we want to do is transition from the electronics to the optical domain as close to those large switching chips as possible,” says Martin. “Pioneers [like Compass-EOS] demonstrating that style of architecture are to be welcomed."

Kotura says that every company that is building large switching and routing ASICs is looking at various interface options. "We have talked to quite a few of them,” says Martin.

One solution suited to silicon photonics is to place the lasers on the front panel while putting the modulation, detection and WDM devices - packaged using silicon photonics - right next to the ASICs. This way the laser works at the cooler room temperature while the rest of the circuitry can be at the temperature of the chip, says Martin.


P-OTS 2.0: 60s interview with Heavy Reading's Sterling Perrin

Heavy Reading has surveyed over 100 operators worldwide about their packet optical transport plans. Sterling Perrin, senior analyst at Heavy Reading, talks about the findings.


Q: Heavy Reading claims the metro packet optical transport system (P-OTS) market is entering a new phase. What are the characteristics of P-OTS 2.0 and what first-generation platform shortcomings does it address?

A: I would say four things characterise P-OTS 2.0 and separate 2.0 from the 1.0 implementations:

  • The focus of packet-optical shifts from time-division multiplexing (TDM) functions to packet functions.
  • Pure-packet implementations of P-OTS begin to ramp and, ultimately, dominate.
  • Switched OTN (Optical Transport Network) enters the metro, removing the need for SONET/SDH fabrics in new elements.
  • 100 Gigabit takes hold in the metro.

The last two points are new functions while the first two address shortcomings of the previous generation. P-OTS 1.0 suffered because its packet side was seen as sub-par relative to Ethernet "pure plays" and also because packet technology in general still had to mature and develop - such as standardising MPLS-TP (Multiprotocol Label Switching - Transport Profile).

 

Your survey's key findings: What struck Heavy Reading as noteworthy?

The biggest technology surprise was the tremendous interest in adding IP/MPLS functions to transport. There was a lot of debate about this 10 years ago. Then the industry settled on a de facto standard that transport includes layers 0-2 but no higher. Now, it appears that the transport definition must broaden to include up to layer 3.

A second key finding is how quickly SONET/SDH has gone out of favour. Going forward, it is all about packet innovation. We saw this shift in equipment revenues in 2012 as SONET/SDH spend globally dropped more than 20 percent. That is not a one-time hit - it's the new trend for SONET/SDH.

 

Heavy Reading argues that transport has broadened in terms of the networking embraced - from layers 0 (WDM) and 1 (SONET/SDH and OTN) to now include IP/MPLS. Is the industry converging on one approach for multi-layer transport optimisation? For example, IP over dense WDM? Or OTN, Carrier Ethernet 2.0 and MPLS-TP? Or something else?

We did not uncover a single winning architecture and it's most likely that operators will do different things. Some operators will like OTN and put it everywhere. Others will have nothing to do with OTN. Some will integrate optics on routers to save transponder capital expenditure, but others will keep hardware separate but tightly link IP and optical layers via the control plane. I think it will be very mixed.

You talk about a spike in 100 Gigabit metro starting in 2014. What is the cause? And is it all coherent or is a healthy share going to 100 Gigabit direct detection?

Interest in 100 Gigabit in the metro exceeds interest in OTN in the metro - which is different from the core, where those two technologies are more tightly linked.

Cloud and data centre interconnect are the biggest drivers for interest in metro 100 Gig but there are other uses as well. We did not ask about coherent versus direct in this survey, but based on general industry discussions, I'd say the momentum is clearly around coherent at this stage - even in the metro. It does not seem that direct detect 100 Gig has a strong enough cost proposition to justify a world with two very different flavours of 100 Gig.

 

What surprised you from the survey's findings?

It was really the interest-level in IP functionality on transport systems that was the most surprising find.

It opens up the packet-optical transport market to new players that are strongest on IP and also poses a threat to suppliers that were good at lower layers but have no IP expertise - they'll have to do something about that.

Heavy Reading surveyed 114 operators globally. All those surveyed were operators; no system vendors were included. The regional split was North America - 22 percent, Europe - 33 percent, Asia Pacific - 25 percent, and the rest of the world - Latin America mainly - 20 percent.


Infinera speeds up network restoration

  • Claimed to be the only hardware implementation of the Shared Mesh Protection protocol
  • Provides network-wide protection against multiple network failures
  • The chip is already within the DTN-X system; protocol will be activated this year

 

Pravin Mahajan, Infinera

Infinera has developed a chip to speed up network restoration following faults.

The chip implements the Shared Mesh Protection (SMP) protocol being developed by the International Telecommunication Union (ITU) and the Internet Engineering Task Force (IETF) and Infinera believes it is the only vendor with hardware acceleration of the protocol.

The SMP standard is still being worked on and will be completed this year. Infinera demonstrated its hardware SMP implementation at OFC/NFOEC 2013 and will activate the scheme in operators' networks using a platform software upgrade this year.

The chip, dubbed Fast Shared Mesh Protection (FastSMP), is sprinkled across cards within Infinera's DTN-X platform and will be linked to other FastSMP ICs across the network. The FastSMP chips exchange signalling information and use internal look-up tables with pre-calculated routing data to determine the required protection action when one or more network failures occur.

 

Network faults

The causes of network faults range from fibre cuts due to construction work to natural disasters such as Hurricane Sandy and the Asia Pacific tsunami. Level 3 Communications reported in 2011 that squirrels were the second most common cause of fibre cuts after construction work; the squirrels, chewing through fibre, accounted for 17 percent of all cuts. Meanwhile, one Indian service provider says it experiences 100 fibre cuts nationwide each day, according to Infinera.

Operators are also having to share their network maps with enterprises that want to assess the risk based on geography before choosing a service provider. "End customers no longer necessarily trust the service level agreements they have with operators," says Pravin Mahajan, director, corporate marketing and messaging at Infinera.  In riskier regions, for example those prone to earthquakes, enterprises may choose two operators. "A form of 1+1 protection,” says Mahajan.

Operators want resilient networks that adapt to faults quickly, ideally within 50ms, without adding extra cost.

Traditional resiliency schemes include SONET/SDH’s 1+1 protection. This meets the sub-50ms requirement but addresses single faults only and requires dedicated back-up for each circuit. At the IP/MPLS (Internet Protocol/ Multiprotocol Label Switching) layer, the MPLS Fast Re-Route scheme caters for multiple failures and is sub-50ms. But it only addresses local faults, not the full network. And being packet-based - at a higher layer of the network - the scheme is costlier to implement.

 

"End customers no longer necessarily trust the service level agreements they have with operators"

Infinera's protection scheme uses its digital optical networking approach, based on its photonic integrated circuits (PICs) coupled with the Optical Transport Network (OTN). OTN resides between the packet and optical layers and, using a mesh network topology, can handle multiple failures. By sharing back-up bandwidth at the transport layer, the approach is cheaper than protection at the packet layer. But being software-based, restoration takes seconds.

Infinera has accelerated the scheme by implementing SMP in its chip so that it meets the 50ms goal.

FastSMP chip

Infinera plans for multiple failures using the Generalized Multiprotocol Label Switching (GMPLS) control plane. “The same intelligence is now implemented in hardware [using the FastSMP processor],” says Mahajan.

The chip resides on each 500 Gigabit-per-second (Gbps) line card, within the platform's OTN switch fabric, on the client side, and in the controller. The FastSMP, described as a co-processor to the CPU, hosts look-up tables with rules specifying what should happen for each failure. The chips, located in the platform and across the network, then switch to the back-up plan for each failed service.

Infinera says the protection is at the service level, not the link level. "It does this at ODU [OTN's optical data unit] granularity," says Mahajan; each circuit can hold different-sized services, 2.5Gbps or 10Gbps for example, all carried within a 100Gbps light path. "By defining failure scenarios on a per-service basis, you now need to put all these entries in hardware," says Mahajan.
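The idea of a pre-computed, per-service look-up table can be sketched as follows. This is a hypothetical illustration of the concept, not Infinera's implementation; the link names, service identifiers, and `protection_table` structure are invented for the example.

```python
# Illustrative sketch of a pre-computed protection look-up table.
# Entries are per service (ODU level), not per link: two services on
# the same failed link can take different back-up paths.
protection_table = {
    # (failed_link, service_id): pre-calculated backup path
    ("A-B", "odu2_video"): ["A", "C", "B"],
    ("A-B", "odu1_voice"): ["A", "D", "B"],  # same failure, different action
    ("C-B", "odu2_video"): ["A", "D", "B"],
}

def on_failure(failed_link, active_services):
    """Return the pre-computed protection action for each affected service.

    Because every entry is computed in advance, reacting to a failure is a
    simple table look-up rather than a path computation.
    """
    actions = {}
    for service in active_services:
        backup = protection_table.get((failed_link, service))
        if backup is not None:
            actions[service] = backup
    return actions

print(on_failure("A-B", ["odu2_video", "odu1_voice"]))
```

The point of the hardware approach is that the reaction is reduced to this kind of constant-time look-up, which is what makes the sub-50ms target achievable.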

To program the chip, network failures are simulated using Infinera's network planning tool to determine the required back-up schemes. These can be chosen based on shortest path or lowest latency, for example.

The GMPLS control plane protocol determines the rules as to how the network should be adapted and these are written on-chip. When a failure occurs, the chip detects the failure and performs the required actions.

The FastSMP chip is already on all the DTN-X line cards Infinera has shipped and will be enabled using a software upgrade.

The GMPLS control plane recomputes back-up paths after a failure has occurred. Typically no action is required, but if several failures occur, the new GMPLS back-up paths are distributed to update the FastSMPs' tables. "Only on the third or fourth failure typically will a new backup plan be needed," says Mahajan.

In effect, the more meshed the network topology, the greater the number of failures that can be tolerated. "When you have three or four failures, you need to have new computation at the GMPLS control plane and then it can repopulate the backups for failures 3, 4, and 5," he says.
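The recomputation step described above can be sketched in miniature: once failures exceed the pre-planned set, paths are recalculated over the surviving topology and the tables repopulated. This is a simplified hop-count illustration (real GMPLS path computation weighs cost, latency, and shared-risk groups); the function names and data layout are assumptions for the example.

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over the surviving links (hop-count metric)."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable on the surviving topology

def recompute_backups(all_links, failed_links, demands):
    """Repopulate the backup table once failures exceed the pre-planned set."""
    surviving = [l for l in all_links if l not in failed_links]
    return {(src, dst): shortest_path(surviving, src, dst)
            for src, dst in demands}
```

In a densely meshed topology the loop over demands keeps finding alternatives, which is the sense in which "the more meshed the network, the greater the number of failures that can be tolerated".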

Instant bandwidth and FastSMP

Infinera can turn up bandwidth in real time using its 500Gbps super-channel PIC. "We slice up the 500 Gig capacity available per line card into 100 Gig chunks," says Mahajan.

This feature, combined with FastSMP, helps operators deal with failures once traffic is rerouted. If the next back-up route is close to its full capacity, an extra 100Gbps of capacity can be added in case the link is called into use.
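The top-up logic amounts to a simple capacity check. A minimal sketch, assuming a utilisation threshold of 80 percent (the threshold, function name, and parameters are illustrative, not Infinera's):

```python
LINE_CARD_CAPACITY = 500  # Gbps per 500Gbps super-channel PIC
CHUNK = 100               # Gbps activation granularity

def top_up_backup(route_load_gbps, activated_gbps, threshold=0.8):
    """If the backup route nears its activated capacity, switch on
    another 100G chunk, up to the line card's 500G ceiling."""
    if (activated_gbps < LINE_CARD_CAPACITY
            and route_load_gbps >= threshold * activated_gbps):
        return activated_gbps + CHUNK
    return activated_gbps
```

For example, a backup route carrying 85Gbps on a single activated 100Gbps chunk would be topped up to 200Gbps of activated capacity, while a lightly loaded route is left alone.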

A study by ACG Research, based on an example 80-node network, estimates that the Shared Mesh Protection scheme uses 30 percent fewer line-side ports than an equivalent network implementing the 1+1 protection scheme.
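The source of the saving is easy to see with back-of-envelope numbers (these figures are illustrative, not the ACG Research model): 1+1 dedicates a protection port to every working port, whereas shared mesh protection lets several working paths share the same protection capacity.

```python
# Illustrative port count: 1+1 vs shared mesh protection.
working_ports = 100

# 1+1: one dedicated back-up port per working port.
one_plus_one_ports = working_ports * 2

# Shared mesh: assume (for illustration) 2.5 working paths share each
# protection path on average, a figure chosen to reproduce the quoted
# 30 percent saving.
sharing_factor = 2.5
shared_mesh_ports = working_ports + round(working_ports / sharing_factor)

print(one_plus_one_ports, shared_mesh_ports)  # 200 vs 140, i.e. 30% fewer
```

The achievable sharing factor depends on the topology and the failure scenarios planned for, which is why the study is tied to a specific 80-node example network.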


Aurrion mixes datacom and telecom lasers on a wafer

Silicon photonics player Aurrion has detailed the making of multiple laser designs for datacom and telecom on a single wafer. Fabricating several designs on one wafer improves the economics of telecom lasers, since they are manufactured alongside higher-volume datacom sources.

 

"There is an inevitability of the co-mingling of electronics and optics and we are just at the beginning"

Eric Hall, Aurrion
Aurrion's long-term vision for its heterogeneous integration approach to silicon photonics is to tackle all stages of a communication link: the high-bandwidth transmitter, switch and receiver. Heterogeneous integration refers to the introduction of III-V material - used for lasers, modulators and receivers - onto the silicon wafer where it is processed alongside the silicon using masks and lithography.  

In a post-deadline paper given at OFC/NFOEC 2013, the fabless start-up detailed the making of various transmitters on a silicon wafer. These include tunable lasers for telecom that cover the C- and L-bands, and uncooled laser arrays for datacom.

The telecom lasers are narrow-linewidth tunable devices for long-haul coherent applications. According to Aurrion, achieving a narrow linewidth typically requires an external cavity whose size makes it difficult to produce a compact design when integrated with the modulator.

Having a tunable laser integrated with the modulator on the same silicon photonics platform will enable compact 100 Gigabit coherent pluggable modules. "The 100 Gig equivalent of the tunable XFP or SFP+," says Eric Hall, vice president of business development at Aurrion.

Hall admits that traditional indium-phosphide laser manufacturers will likely integrate tunable lasers with the modulator to produce compact narrow-linewidth designs. "There will be other approaches but it is exciting that we can now make this laser and modulator on this platform," says Hall. "And it becomes very exciting when you make these on the same wafer as high-volume datacom components." 

 

Aurrion's vision of a coherent transmitter and a 16-laser array made on the same wafer. Source: Aurrion

 

The wafer's datacom devices include a 4-channel laser array for 100GBASE-LR4 10km reach applications and a 400 Gigabit transmitter design comprising 2x8 wavelength division multiplexing (WDM) arrays for a 16x25Gbps design, each laser spaced 200GHz apart. These could be for 10km or 40km applications depending on the modulator used. "These arrays are for uncooled applications," says Hall. "The idea is these don't have to be coarse WDM but tighter-spaced WDM that hold their wavelength across 20-80°C."

Coarse WDM-based laser arrays do not require a thermo-electric cooler (TEC) but the larger spacing of the wavelengths makes it harder to design beyond 100 Gigabit, says Hall: "Being able to pack in a bunch of wavelengths yet not need a TEC opens up a lot of applications."

Such lasers coupled with different modulators could also benefit 100 Gigabit shorter-reach interfaces currently being discussed in the IEEE, including the possibility of multi-level modulation schemes, says the company.

Aurrion says it is seeing the trend of photonics moving closer to the electronics due to emerging applications.

"Electronics never really noticed photonics because it was so far away and suddenly photonics has encroached into its personal space," says Hall. "There is an inevitability of the co-mingling of electronics and optics and we are just at the beginning."

