SDM and MIMO: An interview with Bell Labs

Bell Labs is claiming an industry first in demonstrating the recovery in real time of multiple signals sent over spatial-division multiplexed fibre. Gazettabyte spoke to two members of the research team to understand more.

 

Part 2: The capacity crunch and the role of SDM

The argument for spatial-division multiplexing (SDM) - the sending of optical signals down parallel fibre paths, whether multiple modes, cores or fibres - is the coming ‘capacity crunch’. The information-carrying capacity of fibre, for so long described as limitless, is being approached due to continued high yearly growth in IP traffic. But if there is a looming capacity crunch, why are we not hearing about it from the world’s leading telcos?

“It depends on who you talk to,” says Peter Winzer, head of the optical transmission systems and networks research department at Bell Labs. The incumbent telcos have relatively low traffic growth - 20 to 30 percent annually. “I believe fully that it is not a problem for them - they have plenty of fibre and very low growth rates,” he says. 

Twenty to 30 percent growth rates can only be described as ‘very low’ when you consider that cable operators are experiencing 60 percent year-on-year traffic growth while it is 80 to 100 percent for the web-scale players. “The whole industry is going through a tremendous shift right now,” says Winzer.  

In a recent paper, Winzer and colleague Roland Ryf extrapolate wavelength-division multiplexing (WDM) trends, starting with 100-gigabit interfaces that were adopted in 2010. Assuming an annual traffic growth rate of 40 to 60 percent, 400-gigabit interfaces become required in 2013 to 2014, and the authors point out that 400-gigabit transponder deployments started in 2013. Terabit transponders are forecast in 2016 to 2017 while 10 terabit commercial interfaces are expected from 2020 to 2024. 

In turn, while WDM system capacities have scaled a hundredfold since the late 1990s, this will not continue: systems are approaching the nonlinear Shannon limit, which puts the upper capacity of a single fibre at roughly 75 terabits-per-second.

Starting with 10-terabit-capacity systems in 2010 and a 30 to 40 percent core network traffic annual growth rate, the authors forecast that 40 terabit systems will be required shortly. By 2021, 200 terabit systems will be needed - already exceeding one fibre’s capacity - while petabit-capacity systems will be required by 2028.
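
The forecast above is simple compound growth and can be checked in a few lines of Python. A sketch only; the 2010 starting point and growth rates are the figures quoted from the paper's extrapolation:

```python
# Back-of-envelope check of the forecast above: start from 10 Tb/s
# systems in 2010 and compound core-network traffic growth annually.
def required_capacity(start_tbps, year, growth, start_year=2010):
    """System capacity (Tb/s) needed in `year` at a fixed annual growth rate."""
    return start_tbps * (1 + growth) ** (year - start_year)

for year in (2021, 2028):
    lo = required_capacity(10, year, 0.30)
    hi = required_capacity(10, year, 0.40)
    print(f"{year}: {lo:,.0f} to {hi:,.0f} Tb/s")
```

At 30 percent growth this reproduces the figures quoted: roughly 180 Tb/s by 2021, already beyond a single fibre's ~75 Tb/s limit, and over a petabit (1,000 Tb/s) by 2028.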

 

Even if I’m off by an order of magnitude, and it is 1000, 100-gigabit lines leaving the data centre; there is no way you can do that with a single WDM system

 

Parallel spatial paths are the only physical multiplexing dimension remaining to expand capacity, argue the authors, explaining Bell Labs’ interest in spatial-division multiplexing for optical networks.

If the telcos do not require SDM-based systems anytime soon, that is not the case for the web-scale data centre operators. They could deploy SDM as soon as 2018 to 2020, says Winzer.

The web-scale players are talking about 400,000-server data centres in the coming three to five years. “Each server will have a 25-gigabit network interface card and if you assume 10 percent of the traffic leaves the data centre, that is 10,000, 100-gigabit lines,” says Winzer. “Even if I’m off by an order of magnitude, and it is 1000, 100-gigabit lines leaving the data centre; there is no way you can do that with a single WDM system.”
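
Winzer's sizing argument is straightforward arithmetic; a minimal sketch using only the figures he quotes:

```python
# Data-centre egress arithmetic using the figures quoted by Winzer.
servers = 400_000
nic_gbps = 25                  # per-server network interface card rate
egress_fraction = 0.10         # share of traffic that leaves the data centre

egress_gbps = servers * nic_gbps * egress_fraction
lines_100g = egress_gbps / 100
print(f"{lines_100g:,.0f} hundred-gigabit lines")
```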

 

SDM and MIMO

SDM can be implemented in several ways. The simplest way to create parallel transmission paths is to bundle several single-mode fibres in a cable. But speciality fibre can also be used, either multi-core or multi-mode.

For the demo, Bell Labs used such a fibre, a coupled 3-core one, but Sebastian Randel, a member of technical staff, says its SDM receiver could also be used with a fibre supporting a few spatial modes. Slightly increasing the core diameter of a single-mode fibre means it supports not only the fundamental mode but also two second-order modes. “Our signal processing would cope with that fibre as well,” says Winzer.

The signal processing referred to, which restores the multiple transmissions at the receiver, implements multiple-input, multiple-output (MIMO) techniques. MIMO is a well-known signal processing technique used in wireless and digital subscriber line (DSL) systems.

 

They are garbled up, that is what the rotation is; undoing the rotation is called MIMO

 

Multi-mode fibre can support as many as 100 spatial modes. “But then you have a really big challenge to excite all 100 spatial modes individually and detect them individually,” says Randel. In turn, the digital signal processing computation required for the 100 modes is tremendous. “We can’t imagine we can get there anytime soon,” says Randel.

Instead, Bell Labs used 60 km of the 3-core coupled fibre for its real-time SDM demo. The transmission distance could have been much longer; 60 km was simply the length of the fibre sample available. Bell Labs chose the coupled-core fibre for the real-time MIMO demonstration as it is the most demanding case, says Winzer.

The demonstration can be viewed as an extension of coherent detection used for long-distance 100 gigabit optical transmission. In a polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK) system, coupling occurs between the two light polarisations. This is a 2x2 MIMO system, says Winzer, comprising two inputs and two outputs. 

For PM-QPSK, one signal is sent on the x-polarisation and the other on the y-polarisation. The signals travel at different speeds while hugely coupling along the fibre, says Winzer: “The coherent receiver with the 2x2 MIMO processing is able to undo that coupling and undo the different speeds because you selectively excite them with unique signals.” This allows both polarisations to be recovered. 

With the 3-core coupled fibre, strong coupling arises between the three signals and their individual two polarisations, resulting in a 6x6 MIMO system (six inputs and six outputs). The transmission rotates the six signals arbitrarily while the receiver, using 6x6 MIMO, rotates them back. “They are garbled up, that is what the rotation is; undoing the rotation is called MIMO.”
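
The rotate-and-undo idea can be illustrated with a toy numerical sketch. My simplification: the 6x6 channel is a known random unitary matrix inverted directly, whereas a real receiver estimates the channel adaptively and must also equalise differential delay:

```python
import numpy as np

rng = np.random.default_rng(42)

# A random 6x6 unitary "channel": strong coupling between the
# three cores x two polarisations garbles the six launched signals.
H, _ = np.linalg.qr(rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6)))

# Six QPSK symbol streams, one per core/polarisation combination.
tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(6, 100))

rx = H @ tx                  # fibre mixes ("rotates") the signals together
W = np.linalg.inv(H)         # 6x6 MIMO equaliser: undo the rotation
recovered = W @ rx

print(np.allclose(recovered, tx))   # True: all six streams recovered
```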

 

Demo details

For the demo, Bell Labs generated 12, 2.5-gigabit signals. These signals are modulated onto an optical carrier at 1550nm using three nested lithium niobate modulators. A ‘photonic lantern’ - an SDM multiplexer - couples the three signals orthogonally into the fibre’s three cores. 

The photonic lantern comprises three single-mode fibre inputs fed by the three single-mode PM-QPSK transmitters while its output places the fibres closer and closer until the signals overlap. “The lantern combines the fibres to create three tiny spots that couple into a single fibre, either single mode or multi-mode,” says Winzer.  

At the receiver, another photonic lantern demultiplexes the three signals which are detected using three integrated coherent receivers. 

 

Don’t do MIMO for MIMO’s sake, do MIMO when it helps to bring the overall integrated system cost down

 

To implement the MIMO, Bell Labs built a 28-layer printed circuit board which connects the three integrated coherent receiver outputs to 12, 5-gigabit-per-second 10-bit analogue-to-digital converters. The result is a 600 gigabit-per-second aggregate digital data stream. This huge data stream is fed to a Xilinx Virtex-7 XC7V2000T FPGA over 480 parallel lanes, each at 1.25 gigabit-per-second. It is the FPGA that implements the 6x6 MIMO algorithm in real time.

“Computational complexity is certainly one big limitation and that is why we have chosen a relatively low symbol rate - 2.5 Gbaud, ten times less than commercial systems,” says Randel. “But this helps us fit the [MIMO] equaliser into a single FPGA.”  
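
The front-end rates quoted above tally; a quick sanity check using only the numbers from the article:

```python
# ADC aggregate: 12 converters x 5 GS/s x 10 bits per sample.
adcs, sample_rate_gsps, bits_per_sample = 12, 5, 10
aggregate_gbps = adcs * sample_rate_gsps * bits_per_sample
print(aggregate_gbps)           # 600 Gb/s into the FPGA

# The same 600 Gb/s spread across the FPGA's parallel input lanes.
lanes, lane_rate_gbps = 480, 1.25
print(lanes * lane_rate_gbps)   # 600.0 Gb/s
```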

 

Future work

With the growth in IP traffic, optical engineers are going to have to use space and wavelengths. “But how are you going to slice the pie?” says Winzer. 

With the example of 10,000, 100-gigabit wavelengths, will 100 WDM channels be sent over 100 spatial paths or 10 WDM channels over 1,000 spatial paths? “That is a techno-economic design optimisation,” says Winzer. “In those systems, to get the cost-per-bit down, you need integration.”

That is what the Bell Labs engineers are working on: optical integration to reduce the overall spatial-division multiplexing system cost. “Integration will happen first across the transponders and amplifiers; fibre will come last,” says Winzer.

Winzer stresses that MIMO-SDM is not primarily about fibre, a point frequently misunderstood. The point is to enable systems with crosstalk, he says. 

“So if some modulator manufacturer can build arrays with crosstalk and sell the modulator at half the price they were able to before, then we have done our job,” says Winzer. “Don’t do MIMO for MIMO’s sake, do MIMO when it helps to bring the overall integrated system cost down.”  

 

Further Information:

Space-division Multiplexing: The Future of Fibre-Optics Communications, click here

For Part 1, click here


Ovum Q&A: Infinera as an end-to-end systems vendor

Infinera hosted an Insight analyst day on October 6th to highlight its plans now that it has acquired metro equipment player, Transmode. Gazettabyte interviewed Ron Kline, principal analyst, intelligent networks at market research firm, Ovum, who attended the event.    

 

Q. Infinera’s CEO Tom Fallon referred to this period as a once-in-a-decade transition as metro moves from 10 Gig to 100 Gig. The growth is attributed mainly to the uptake of cloud services and he expects this transition to last for a while. Is this Ovum’s take?  

Ron Kline, Ovum: It is a transition but it is more about coherent technology than 10 Gig to 100 Gig. Coherent enables that higher-speed change, which is required because of the level of bandwidth in the metro.

We are going to see metro change from 10 Gig to 100 Gig, much like we saw it change from 2.5 Gig to 10 Gig. Economically, it is going to be more feasible for operators to deploy 100 Gig and get more bang for their buck.

Ten years is always a good number for any transition. If you look at SONET/SDH, it began in the early 1990s and by 2000 was mainstream.

If you look at transitions, you had a ten-year time lag to get from 2.5 Gig to 10 Gig and you had another ten years for the development of 40 Gig, although that was impacted by the optical bubble and the [2008] financial crisis. But when coherent came around, you had a three-year cycle for 100 gigabit. Now you are in the same three-year cycle for 200 and 400 gigabit.

Is 100 Gig the unit of currency? I think all logic tells us it is. But I’m not sure that ends up being the story here.   

 

If you get line systems that are truly open then optical networking becomes commodity-based transponders - the white box phenomenon - then where is the differentiation? It moves into the software realm and that becomes a much more important differentiator.    

 

Infinera’s CEO asserted that technology differentiation has never been more important in this industry. Is this true or only for certain platforms such as for optical networking and core routers?   

If you look at Infinera, you would say their chief differentiator is the PIC (photonic integrated circuit) as it has enabled them to do very well. But other players really have not tried it. Huawei does a little but only in the metro and access.

It is true that you need differentiation, particularly for something as specialised as optical networking. The edge has always gone to the company that can innovate quickest. That is how Nortel did it; they were first with 10 gigabit for long haul and dominated the market.

When you look at coherent, the edge has gone to the quickest: Ciena, Alcatel-Lucent, Huawei and to a certain extent Infinera. Then you throw in the PIC and that gives Infinera an edge.

But then, on the flip side, there is this notion of disaggregation. Nobody likes to say it but it is the commoditisation of the technology; that is certainly the way the content providers are going.

If you get line systems that are truly open then optical networking becomes commodity-based transponders - the white box phenomenon - then where is the differentiation? It moves into the software realm and that becomes a much more important differentiator.    

I do think differentiation is important; it always is. But I’m not sure how long your advantage is these days.

 

Infinera argues that the acquisition of Transmode will triple the total available market it can address.  

Infinera definitely increases its total available market. They only had an addressable market related to long haul and submarine line terminating equipment. Now this [acquisition of Transmode] really opens the door. They can do metro, access, mobile backhaul; they can do a lot of different things.

We don’t necessarily agree with the numbers, though; it is more a doubling of the addressable market.

The rolling annual global long-haul backbone and submarine line terminating equipment market (3Q 2014 to 2Q 2015) where they played [pre-Transmode] was $5.2 billion. If you assume the total market of $14.2 billion is addressable then yes, it is nearly a tripling, but that includes the legacy SONET/SDH and bandwidth management segments, which are rapidly declining. Nevertheless, Tom’s point is well taken: adding a further $5.8 billion for the metro and access WDM markets to their total addressable market is significant.

 

Tom Fallon also said vendor consolidation will continue, and companies will need to have scale because of the very large amounts of R&D needed to drive differentiation. Is scale needed for a greater R&D spend to stay ahead of the competition?

When you respond to an operator’s request-for-proposal, that is where having end-to-end scale helps Infinera; being able to be a one-stop shop for the metro and long haul.

If I’m an operator, I don’t have to get products from several vendors and be the systems integrator.  

 

Infinera announced a new platform for long haul, the XT-500, which is described as a telecom version of its data centre interconnect Cloud Xpress platform. Why do service providers want such a platform, and how does it differ from Cloud Xpress?

Infinera’s DTN-X long-haul platform is very high capacity and there are applications where you don’t need such a large platform. That is one application.

The other is where you lease space [to house your equipment]. If I am going to lease space, if I have a box that is 2 RU (rack unit) high and can do 500 gigabit point-to-point and I don’t need any cross-connect, then this smaller shelf size makes a lot of sense. I’m just transporting bandwidth.

Cloud Xpress is a scaled-down product for the metro. The XT-500 is carrier-class, e.g. NEBS [Network Equipment-Building System] compliant and can span long-haul distances.  

 

Infinera has also announced the XTC-2. What is the main purpose of this platform?

The platform is a smaller DTN-X variant to serve smaller regions. For example, you can take a 500 gigabit PIC super-channel and slice it up. That enables you to do a hub-and-spoke virtual ring and drop 100 Gig wavelengths at appropriate places. The system uses the new metro PICs introduced in March. At the hub location you use an ePIC that slices up the 500G into individually routable 100G channels, and at the spoke locations, where the XTC-2 is, you use an oPIC-100.

 

Does the oPIC-100 offer any advantage compared to existing 100 Gig optics?

I don’t think it has a huge edge other than the differentiation you get from a PIC. In fact it might be a deterrent: you have to buy it from Infinera. It is also anti-trend, where the trend is pluggables. 

But the hub and spoke architecture is innovative and it will be interesting to see what they do with the integration of PIC technology in Transmode’s gear.

  

Acquiring Transmode provides Infinera with an end-to-end networking portfolio. Does it still lack important elements? For example, Ciena acquired Cyan and gained its Blue Planet SDN software.

Transmode has a lot of the different technologies required in the metro: mobile backhaul, synchronisation - they are also working on mobile fronthaul - and their hardware is low power.

Transmode has pretty much everything you need in these smaller platforms. But it is the software piece that they don’t have. Infinera has a strategy that says: we are not going to do this; we are going to be open and others can come in through an interface essentially and run our equipment.

That will certainly work.

But if you take a long view that says that in future technology will be commoditised, then you are in a bad spot because all the value moves to the software and you, as a company, are not investing and driving that software. So, this could be a huge problem going forward.

 

What are the main challenges Infinera faces?

One challenge, as mentioned, is hardware commoditisation and the issue of software.

Hardware commoditisation can play in Infinera’s favour. Infinera should have the lowest-cost solution given its integrated approach, so large hardware volumes are good for them. But if pluggable optics is a requirement, then they could be in trouble with this strategy.

The other is keeping up with the Joneses.

I think the 500 Gig in 100 Gig channels is now not that exciting. The 500 Gig PIC is not creating as much advantage as it did before. Where is the 1.2 terabit PIC? Where is the next version that drives Infinera forward?

And is it still going to be 100 Gig? They are leading me to believe it won’t just be. Are they going to have a PIC with 12 channels whose modulation formats are tunable, to go from 100 to 200 to 400 Gig?

They need to if they want to stay competitive with everyone else because the market is moving to 200 Gig and 400 Gig. Our figures show that over 2,000 multi-rate (QPSK and 16-QAM) ports have been shipped in the last year (3Q 2014 to 2Q 2015). And now you have 8-QAM coming. Infinera’s PIC is going to have to support this.

Infinera’s edge is the PIC but if you don’t keep progressing the PIC, it is no longer an edge.

These are the challenges facing Infinera and it is not that easy to do these things. 


Cisco Systems' intelligent light

Network optimisation continues to exercise operators and content service providers as their requirements evolve with the growth of services such as cloud computing. Cisco Systems' newly announced elastic core architecture aims to tackle networking efficiency and address particular service provider requirements.

 

“The core [network] needs to be more robust, agile and programmable”

Sultan Dawood, Cisco

 

“The core [network] needs to be more robust, agile and programmable – especially with the advent of cloud,” says Sultan Dawood, senior manager, service provider marketing at Cisco. “As service providers look at next-generation infrastructure, convergence of IP and optical is going to have a big play.”

Cisco's elastic core architecture combines several developments. One is the integration of Cisco's 100 Gigabit-per-second (Gbps) dense wavelength division multiplexing (DWDM) coherent transponder, first introduced on its ROADM platform, onto its router to enable IP-over-DWDM. 

This is part of what Cisco calls nLight – intelligent light - which itself has three components: its 100Gbps coherent ASIC hardware, the nLight control plane and nLight colourless and contentionless ROADMs. “As packet and optical networks converge, intelligence between the layers is needed,” says Dawood. “Today how the ROADM and the router communicate is limited."

There is the GMPLS [Generalized Multi-Protocol Label Switching] control plane working at the IP layer, and WSON [Wavelength Switched Optical Network] working at the optical layer. These two protocols perform control plane functions at their respective layers. "What nLight is doing is communicating between these two layers [using existing parameters] and providing the interaction," says Dawood.

Ron Kline, principal analyst for network infrastructure at Ovum, describes nLight more generally as Cisco’s strategy for software-defined networking:  "Interworking control planes to share info across platforms and add the dynamic capabilities."

The second component of Cisco's announcement is an upgrade of its carrier-grade services engine, from 20Gbps to 80Gbps, which fits within Cisco's CRS-3 core router and will be available from May 2013. The services engine enables services such as IPv6 and 'cloud routing' - network positioning which determines the most suitable resource for a customer’s request based on the content’s location and the data centre's loading.

Cisco has also added anti-distributed denial of service (anti-DDoS) software to counter cyber threats. “We have licensed software that we have put into our CRS-3 so that with our VPN services we can provide threat mitigation and scrub any traffic liable to hurt our customers,” says Dawood.

 

nLight

According to Cisco, several issues need to be addressed between the IP and optical layers. For example, the router and the optical infrastructure need to exchange information such as circuit IDs, path identifiers and real-time status in order to avoid the manual intervention used currently.

“With this intelligent data that is extracted due to these layers communicating, I can now make better, faster decisions that result in rapid service provisioning and service delivery,” says Dawood.

Cisco cites as an example a financial customer requesting a low-latency path.  In this case, the optical network comes back through this nLight extraction process and highlights the most appropriate path. That path has a circuit ID that is assigned to the customer. If the customer then comes back to request a second identical circuit, the network can make use of the existing intelligence to deliver a similar-specification circuit.

Such a framework avoids lengthy, manual interactions between the IP and transport departments of an operator required when setting up an IP VPN, for example. By exchanging data between layers, service providers can understand and improve their network topology in real-time, and be more dynamic in how they shift resources and do capacity planning in their network.

Service providers can also improve their protection and restoration schemes, and how they configure and provision services. Such capabilities will enable operators to be more efficient in the introduction and delivery of cloud and mobile services.

 

Total cost of ownership

Market research firm ACG Research has done a total cost of ownership (TCO) analysis of Cisco's elastic core architecture. It claims using nLight achieves up to a halving of the TCO of the optical and packet core networks in designs using protected wavelengths. It also avoids a 10% overestimation of required capacity.

Meanwhile, ACG claims an 18-month payback and 156% return on investment from a CRS CGSE service module with its anti‐DDoS service, and a 24% TCO savings from demand engineering with the improved placement of routes and cloud service workload location.

Cisco says its framework architecture is being promoted in the Internet Engineering Task Force (IETF). The company is also liaising with the International Telecommunication Union (ITU) and the Optical Internetworking Forum (OIF) where relevant.


The evolution of optical networking

An upcoming issue of the Proceedings of the IEEE will be dedicated solely to the topic of optical networking. This, says the lead editor, Professor Ioannis Tomkos at the Athens Information Technology Center, is a first in the journal's 100-year history. The issue, entitled The Evolution of Optical Networking, will be published in either April or May and will have a dozen invited papers.

 

One topic that will change the way we think about optical networks is flexible or elastic optical networks.

Professor Ioannis Tomkos

 

"If I have to pick one topic that will change the way we think about optical networks, it is flexible or elastic optical networks, and the associated technologies," says Tomkos.

A conventional dense wavelength division multiplexing (DWDM) network has fixed wavelengths. For long-haul optical transmission each wavelength has a fixed bit rate - 10, 40 or 100 Gigabit-per-second (Gbps), a fixed modulation format, and typically occupies a 50GHz channel.  "Such a network is very rigid," says Tomkos. "It cannot respond easily to changes in the network's traffic patterns." 

This arrangement has come about, says Tomkos, because the assumption has always been that fibre bandwidth is abundant. "But at the moment we are only a factor of two away from reaching the Shannon limit [in terms of spectral efficiency, bits/s/Hz] so we are going to hit the fibre capacity wall by 2018-2020," he warns.

The maximum theoretically predicted spectral efficiency for an optical communication system based on standard single-mode fibres is about 9bits/s/Hz per polarisation for typical long-haul system reaches of 500km without regeneration, says Tomkos. "At the moment the most advanced hero experiments demonstrated in labs have achieved a spectral efficiency of about 4-6bits/s/Hz," he says. This equates to a total transmission capacity close to 100 Terabits-per-second (Tbps).  After that, deploying more fibre will be the only way to further scale networks.
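
The capacity figures follow from: capacity ≈ spectral efficiency x two polarisations x usable amplifier bandwidth. A sketch; the ~10 THz of combined amplified bandwidth is my illustrative assumption, not a figure from the article:

```python
def capacity_tbps(se_bits_per_hz_per_pol, bandwidth_thz, polarisations=2):
    """Total capacity from per-polarisation spectral efficiency and bandwidth."""
    return se_bits_per_hz_per_pol * polarisations * bandwidth_thz

# A mid-range hero-experiment spectral efficiency of ~5 bits/s/Hz over an
# assumed ~10 THz of amplified bandwidth (assumption, see lead-in):
print(capacity_tbps(5, 10))   # 100 Tb/s, close to the figure quoted
```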

Accordingly, new thinking is required.

Two approaches are being proposed. One is to treat the optical network in the same way as the air interface in cellular networks: spectrum is scarce and must be used effectively.

"We are running close to fundamental limits, that's why the optical spectrum of available deployed standard single mode fibers should be utilized more efficiently from now on as is the case with wireless spectrum," says Tomkos.

 

How optical communication is following in the footsteps of wireless.

The second technique - spatial multiplexing - looks to extend fibre capacity well beyond what can be achieved using the first approach alone.  Such an option would need to deploy new fibre types that support multiple cores or multi-mode transmission.

 

Flexible spectrum 

"We have to start thinking about techniques used in wireless networks to be adopted in optical networks," says Tomkos (see text box). With a flexible network, the thinking is to move from the 50GHz fixed grid down to 12.5GHz, then 6.25GHz or 1.5GHz, or even eliminate the ITU grid entirely, he says. Such an approach is dubbed flexible spectrum or a gridless network.

With such an approach, the optical transponders can tune the bit rate and the modulation format according to the reach and capacity requirements. The ROADMs or, more aptly, the wavelength-selective switches (WSSes) on which they are based, also have to support such gridless operation. 
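
A toy slot count shows what the finer grids buy. The ~4 THz C-band figure is my round number for illustration, not from the article:

```python
C_BAND_GHZ = 4000   # assumed usable C-band spectrum, ~4 THz (illustrative)

def slot_count(grid_ghz):
    """Number of channel slots a given grid granularity yields."""
    return int(C_BAND_GHZ // grid_ghz)

for grid in (50, 12.5, 6.25):
    print(f"{grid} GHz grid -> {slot_count(grid)} slots")
```

The finer the grid, the more precisely a transponder can take just the spectrum its bit rate and modulation format need.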

WSS vendors Finisar and Nistica already support such a flexible spectrum approach, while JDS Uniphase has just announced it is readying its first products. Meanwhile US operator Verizon is cheerleading the industry to support gridless. "I'm sure Verizon is going to make this happen, as it did at 100 Gigabit," says Tomkos.

 

Spatial multiplexing

The simplest way to implement spatial multiplexing is to use several fibres in parallel. However, this is not cost-effective. Instead, what is being proposed is to create multi-core fibres - fibres with more than one core, whether seven, 19 or more cores in a hexagonal arrangement - down which light can be transmitted. "That will increase the fibre's capacity by a factor of ten or 20," says Tomkos.

Another consideration is to move from single-mode to multi-mode fibre that will support the transmission of multiple modes, as many as several hundred. 

The issue with multi-mode fibre is its very high modal dispersion, which limits its bandwidth-distance product. "Now with improved techniques from signal processing like MIMO [multiple-input, multiple-output] processing, OFDM [orthogonal frequency division multiplexing] to more advanced optical technologies, you can think that all these multiple modes in the fibre can be used potentially as independent channels," says Tomkos. "Therefore you can potentially multiply your fibre capacity by 100x or 200x."

The Proceedings of the IEEE issue will have a paper on flexible networking by NEC Labs, USA, and a second, on the ultimate capacity limits in optical communications, authored by Bell Labs.

 

Further reading:

MODE-GAP EU Seventh Framework project, click here

