How scaling optical networks is soon to change

Carrier division multiplexing and spatial division multiplexing (CSDM) are both needed, argues Lumentum’s Brian Smith.

The era of coherent-based optical transmission as implemented today is coming to an end, argues Lumentum in a white paper.

Brian Smith


The author of the paper, Brian Smith, product and technology strategy, CTO Office at Lumentum, says two factors account for the looming change.

One is Shannon’s limit that defines how much information can be sent across a communications channel, in this case an optical fibre.

The second, less discussed regarding coherent-based optical transport, is how Moore’s law is slowing down.

“Both are happening coincidentally,” says Smith. “We believe what that means is that we, as an industry, are going to have to change how we scale capacity.”

 

Accommodating traffic growth

A common view in telecoms, based on years of reporting, is that internet traffic is growing 30 per cent annually. The CEO of AT&T, for example, said during the company’s final quarterly report of 2023 that traffic in its network had grown by over 30 per cent annually for the last three years.

Smith says that data on the rate of traffic growth is limited. He points to a 2023 study by market research firm TeleGeography that shows traffic growth is dependent on region, ranging from 25 to 45 per cent CAGR.

Since the deployment of the first optical networking systems using coherent transmission in 2010, almost all networking capacity growth has been achieved in the C-band of a fibre, which comprises approximately 5 terahertz (THz) of spectrum.

Cramming more data into the C-band has come about by increasing the symbol rate used to transmit data and the modulation scheme used by the coherent transceivers, says Smith.

The current coherent era – labelled the 5th on the chart – is coming to an end. Source: Lumentum.

Pushing up baud rate

With the Shannon limit being approached, only marginal gains remain from squeezing more data into the C-band, which means more spectrum is required. In turn, the channel bandwidth occupied by an optical wavelength now grows with baud rate, such that while each wavelength carries more data, the capacity limit within the C-band has largely been reached.

Current systems use a symbol rate of 130-150 gigabaud (GBd). Later this year Ciena will introduce its 200GBd WaveLogic 6e coherent modem, while the industry has started work on developing the next generation 240-280GBd systems.

Reconfigurable optical add-drop multiplexers (ROADMs) have had to become ‘flexible’ in the last decade to accommodate changing channel widths. For example, a 400-gigabit wavelength fits in a 75GHz channel while an 800-gigabit wavelength fits within a 150GHz channel.
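The channel widths quoted follow from the flexible grid’s slot granularity. A small sketch, assuming the standard ITU-T G.694.1 flex-grid slot width of 12.5GHz (an assumption not stated in the article), shows how a signal’s occupied bandwidth rounds up to the channel sizes mentioned:

```python
# Illustrative sketch (not from the article): round a signal's occupied
# bandwidth up to whole flexible-grid slots. SLOT_GHZ assumes the
# ITU-T G.694.1 flex-grid slot-width granularity of 12.5 GHz.
import math

SLOT_GHZ = 12.5

def channel_width(signal_bw_ghz: float) -> float:
    """Smallest whole number of flex-grid slots covering the signal."""
    return math.ceil(signal_bw_ghz / SLOT_GHZ) * SLOT_GHZ

# A 400-gigabit wavelength occupying ~70 GHz needs a 75 GHz channel;
# an 800-gigabit wavelength occupying ~140 GHz needs a 150 GHz channel.
print(channel_width(70), channel_width(140))  # 75.0 150.0
```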

Another consequence of Shannon’s limit is that the transmission distance limit for a certain modulation scheme has been reached. Using 16-ary quadrature amplitude modulation (16-QAM), the distance ranges from 800-1200km. Doubling the baud rate doubles the data rate per wavelength but the link span remains fixed.
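The reach penalty of higher-order modulation can be illustrated with the idealised linear Shannon relation, ignoring nonlinear effects (a simplification, not the paper’s analysis): the minimum SNR supporting a spectral efficiency of SE bit/s/Hz per polarisation is 2^SE − 1. A sketch under that assumption:

```python
# Idealised linear-Shannon sketch (nonlinear effects ignored): the
# smallest SNR that can support SE bit/s/Hz per polarisation is
# 2**SE - 1. Stepping from QPSK (2 bit/s/Hz/pol) to 16-QAM
# (4 bit/s/Hz/pol) raises the SNR floor by about 7 dB, which is why
# reach falls with modulation order, while doubling the baud rate
# (more bandwidth at the same SNR) leaves reach unchanged.
import math

def min_snr_db(se_per_pol: float) -> float:
    """Shannon floor in dB for a given spectral efficiency per polarisation."""
    return 10 * math.log10(2 ** se_per_pol - 1)

extra = min_snr_db(4) - min_snr_db(2)  # 16-QAM versus QPSK
print(round(extra, 1))  # ~7.0 dB more SNR required
```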

“There is a fundamental limit to the maximum reach that you can achieve with that modulation scheme because of the Shannon limit,” says Smith.

At the recent OFC show held in March in San Diego, a workshop discussed whether a capacity crunch was looming.

The session’s consensus was that, despite the challenges associated with the latest OIF 1600ZR and ZR+ standards, which promise to send 1.6 terabits of data on a single wavelength, the industry is confident that it will meet the OIF’s 240-280+ GBd symbol rates.

However, in the discussion about the next generation of baud rate—400-500GBd—the view is that while such rates look feasible, it is unclear how they will be achieved. The aim is always to double baud rate because the increase must be meaningful.

“If the industry can continue to push the baud rate, and get the cost-per-bit, power-per-bit, and performance required, that would be ideal,” says Smith.

But this is where the challenge of Moore’s law slowing down comes in. Achieving 240GBd and more will require a coherent digital signal processor (DSP) made using at least a 3nm CMOS process. Beyond this, transistors start to approach the atomic scale and their performance becomes less deterministic. Moreover, the development costs of advanced CMOS processes – 3nm, 2nm and beyond – are growing exponentially.

Beyond 240GBd, it’s also going to become more challenging to achieve the higher analogue bandwidths for the electronics and optics components needed in a coherent modem, says Smith. How the components will be packaged is key. There is no point in optimising the analogue bandwidth of each component only for the modem performance to degrade due to packaging. “These are massive challenges,” says Smith.

This explains why the industry is starting to think about alternatives to increasing baud rate, such as moving to parallel carriers. Here a coherent modem would achieve a higher data rate by implementing multiple wavelengths per channel.

Lumentum refers to this approach as carrier division multiplexing.

 

Capacity scaling

The coherent modem, while key to optical transport systems, is only part of the scaling capacity story.

Prior to coherent optics, capacity growth was achieved by adding more and more wavelengths in the C-band. But with the advent of coherent DSPs compensating for chromatic and polarisation mode dispersion, suddenly baud rate could be increased.

“We’re starting to see the need, again, for growing spectrum,” says Smith. “But now, we’re growing spectrum outside the C-band.”

First signs of this are how optical transport systems are adding the L-band alongside the C-band, doubling a fibre’s spectrum from five to 10THz.

“The question we ask ourselves is: what happens once the C and L bands are exhausted?” says Smith.

Lumentum’s belief is that spatial division multiplexing will be needed to scale capacity further, starting with multiple fibre pairs. The challenge will be how to build systems so that costs don’t scale linearly with each added fibre pair.

There are already twin wavelength selective switches used in ROADMs for the C- and L-bands. Lumentum is taking a first step in functional integration by combining the C- and L-bands in a single wavelength selective switch module, says Smith. “And we need to keep doing functional integration when we move to this new generation where spatial division multiplexing is going to be the approach.”

Another consideration is that, with higher baud-rate wavelengths, there will be far fewer channels per fibre. And with growing numbers of fibre pairs per route, that suggests a future need for fibre-switched networking, not just the wavelength-switched networking used today.

“Looking into the future, you may find that your individual routeable capacity is closer to a full C-band,” says Smith.

Will carrier division multiplexing happen before spatial division multiplexing?

Smith says that spatial division multiplexing will likely be first because Shannon’s limit is fundamental, and the industry is motivated to keep pushing Moore’s law and baud rate.

“With Shannon’s limit and with the expansion from C-band to C+L Band, if you’re growing at that nominal 30 per cent a year, a single fibre’s capacity will exhaust in two to three years’ time,” says Smith. “This is likely the first exhaust point.”
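Smith’s two-to-three-year figure can be sanity-checked with simple compound growth: lighting the L-band doubles a fibre’s capacity, and that doubling of headroom is consumed in log(2)/log(1.3) years at the nominal 30 per cent annual growth.

```python
# Back-of-envelope check of the two-to-three-year exhaust estimate:
# expanding from C-band to C+L doubles a fibre's capacity, and at
# ~30 per cent annual traffic growth that doubling is consumed in
# log(2)/log(1.3) years.
import math

growth = 1.30  # the nominal 30 per cent a year quoted in the article
years_to_fill_doubling = math.log(2) / math.log(growth)
print(round(years_to_fill_doubling, 1))  # ~2.6 years
```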

Meanwhile, even with carrier division multiplexing and the first parallel coherent modems after 240GBd, advancing the baud rate will not stop. The jumps may diminish from the doublings the industry is used to, and that trend will continue for several years yet, but they will still be worth having.


ECOC 2019 industry reflections

Gazettabyte is asking industry figures for their thoughts after attending the recent ECOC show, held in Dublin. In particular, what developments and trends they noted, what they learned and what, if anything, surprised them. Here are the first responses from Huawei, OFS Fitel and ADVA.  

James Wangyin, senior product expert, access and transmission product line at Huawei  

At ECOC, one technology that is becoming a hot topic is machine learning. There is much work going on to model devices and perform optimisation at the system level.

And while there was much discussion about 400-gigabit and 800-gigabit coherent optical transmissions, 200-gigabit will continue to be the mainstream speed for the coming three-to-five years.

That is because, despite the high-speed ports, most networks are not being run at the highest speed. More time is also needed for 400-gigabit interfaces to mature before massive deployment starts.

BT and China Telecom both showed excellent results running 200-gigabit transmissions in their networks for distances over 1,000km.

We are seeing this with our shipments; we are experiencing a threefold year-on-year growth in 200-gigabit ports.

Another topic confirmed at ECOC is that fibre is a must for 5G. People previously expressed concern that 5G would shrink the investment of fibre but many carriers and vendors now agree that 5G will boost the need for fibre networks.

As for surprises at the show, the main discussion seems to have shifted from high-speed optics to system-level or device-level optimisation using machine learning.

Many people are also exploring new applications based on the fibre network.

For example, at a workshop to discuss new applications beyond 5G, a speaker from Orange talked about extending fibre connections to each room, and even to desktops and other devices. Other operators and systems vendors expressed similar ideas.

Verizon discussed, in another market focus talk, its monitoring of traffic and the speed of cars using fibre deployed alongside roads. This is quite impressive.

We are also seeing the trend of using fibre and 5G to create a fully-connected world.

Such applications will likely bring new opportunities to the optical industry.

Two other items to note.

The Next Generation Optical Transport Network Forum (NGOF) presented updates on optical technologies in China. Such technologies include next-generation OTN standardisation, the transition to 200 gigabits, mobile transport and the deployment of ROADMs. The NGOF also seeks more interaction with the global community.

The 800G Pluggable MSA was also present at ECOC. The MSA is also keen for more companies to join.

Daryl Inniss, director, new business development at OFS Fitel

There were many discussions about co-packaged optics, regarding the growth trends in computing and the technology’s use in the communications market.

This is a story about high-bandwidth interfaces and not just about linking equipment but also the technology’s use for on-board optical interconnects and chip-to-chip communications such as linking graphics processing units (GPUs).

I learned that HPE has developed a memory-centric computing system that significantly improves processing speed and workload capacity. This may not be news but it was new to me. Moreover, HPE is using silicon photonics in its system, including a quantum dot comb laser, a technology that will become available to others.

As for surprises, there was notable growing interest in spatial-division multiplexing (SDM). The timescale may be long term but the conversations and debate were lively. Two areas to watch are proprietary applications such as very short interconnects in a supercomputer, and undersea networks, where the hyperscalers quickly consume the capacity on any newly commissioned link.

Lastly, another topic of note was the use of spectrum outside the C-band and extending the C-band itself to increase the data-carrying capacity of the fibre.

Jörg-Peter Elbers, senior vice president, advanced technology, ADVA

Co-packaging optics with electronics is gaining momentum as the industry moves to higher and higher silicon throughput. The advent of 51.2 terabit-per-second (Tbps) top-of-rack switches looks like a good interception point. Microsoft and Facebook also have a co-packaged optics collaboration initiative.

As for coherent, quo vadis? Well, one direction is higher speeds and feeds. What will the next symbol rate be for coherent after 60-70 gigabaud (GBd)? A half-step or a full-step; incremental or leap-frogging? The growing consensus is a full-step: 120-140 GBd.

Another direction for coherent is new applications such as access/aggregation networks. Yet cost, power and footprint challenges will have to be solved.

Advanced optical packaging, an example being the OIF IC-TROSA project, as well as compact silicon photonics and next-gen coherent DSPs are all critical elements here.

A further issue arising from ECOC is whether optical networks need to deliver more than just bandwidth.

Latency is becoming increasingly important to address time-sensitive applications as well as for advanced radio technologies such as 5G and beyond.

Additional applications are the delivery of precise timing information (frequency, time of day, phase synchronisation) where the existing fibre infrastructure can be used to deliver additional services.

An interesting new field is the use of the communication infrastructure for sensing, with Glenn Wellbrock giving a presentation on Verizon’s work at the Market Focus.

Other topics of note include innovation in fibres and optics for 5G.

With spatial-division multiplexing, interest in multi-core and multi-mode fibre applications has weakened. Instead, more parallel fibres operating in the linear regime appear to be an energy-efficient space-division multiplexing alternative.

Hollow-core fibres are also making progress, offering not only lower latencies but lower nonlinearity compared to standard fibres.

As for optics for 5G, what is clear is that 5G requires more bandwidth and more intelligence at the edge. How network solutions will look will depend on fibre availability and the associated cost.

With eCPRI, Ethernet is becoming the convergence protocol for 5G transport, while grey and WDM (G.metro) optics, as well as next-generation PON, are all being discussed as optical underlay options. Grey and WDM optics offer an unbundling at the fibre/virtual-fibre level whereas (TDM-)PON requires bitstream access.

Another observation is that radio “x-haul” [‘x’ being front, mid or back] will continue to play an important role for locations where fibre is nonexistent or uneconomical.


SDM and MIMO: An interview with Bell Labs

Bell Labs is claiming an industry first in demonstrating the recovery in real time of multiple signals sent over spatial-division multiplexed fibre. Gazettabyte spoke to two members of the research team to understand more.

 

Part 2: The capacity crunch and the role of SDM

The argument for spatial-division multiplexing (SDM) - the sending of optical signals down parallel fibre paths, whether multiple modes, cores or fibres - is the coming ‘capacity crunch’. The information-carrying capacity limit of fibre, for so long described as limitless, is being approached due to the continual yearly high growth in IP traffic. But if there is a looming capacity crunch, why are we not hearing about it from the world’s leading telcos? 

“It depends on who you talk to,” says Peter Winzer, head of the optical transmission systems and networks research department at Bell Labs. The incumbent telcos have relatively low traffic growth - 20 to 30 percent annually. “I believe fully that it is not a problem for them - they have plenty of fibre and very low growth rates,” he says. 

Twenty to 30 percent growth rates can only be described as ‘very low’ when you consider that cable operators are experiencing 60 percent year-on-year traffic growth while it is 80 to 100 percent for the web-scale players. “The whole industry is going through a tremendous shift right now,” says Winzer.  

In a recent paper, Winzer and colleague Roland Ryf extrapolate wavelength-division multiplexing (WDM) trends, starting with 100-gigabit interfaces that were adopted in 2010. Assuming an annual traffic growth rate of 40 to 60 percent, 400-gigabit interfaces become required in 2013 to 2014, and the authors point out that 400-gigabit transponder deployments started in 2013. Terabit transponders are forecast in 2016 to 2017 while 10 terabit commercial interfaces are expected from 2020 to 2024. 

In turn, while WDM system capacities have scaled a hundredfold since the late 1990s, this will not continue. That is because systems are approaching the non-linear Shannon limit, which puts the upper capacity of a fibre at around 75 terabits per second.

Starting with 10-terabit-capacity systems in 2010 and a 30 to 40 percent annual growth rate in core network traffic, the authors forecast that 40-terabit systems will be required shortly. By 2021, 200-terabit systems will be needed - already exceeding one fibre’s capacity - while petabit-capacity systems will be required by 2028.
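The forecast can be reproduced with a plain compound-growth model (an assumption here; the paper’s exact method may differ): start from 10-terabit systems in 2010, grow 30 per cent a year, and read the result against the roughly 75 Tbps single-fibre estimate.

```python
# Reproducing the forecast with a simple compound-growth model (an
# assumption; the authors' exact method may differ): 10-terabit
# systems in 2010 growing 30 per cent a year, read against the
# ~75 Tbps non-linear Shannon estimate for one fibre.
capacity_tb = {year: 10 * 1.30 ** (year - 2010) for year in range(2010, 2029)}

crossed = next(y for y, c in capacity_tb.items() if c > 75)
print(crossed)                    # 2018: one fibre's capacity exceeded
print(round(capacity_tb[2021]))   # ~179 Tb, of the order of the 200 Tb cited
print(round(capacity_tb[2028]))   # ~1125 Tb: petabit class by 2028
```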

 

Even if I’m off by an order of magnitude, and it is 1,000, 100-gigabit lines leaving the data centre; there is no way you can do that with a single WDM system

 

Parallel spatial paths are the only physical multiplexing dimension remaining to expand capacity, argue the authors, explaining Bell Labs’ interest in spatial-division multiplexing for optical networks.

If the telcos do not require SDM-based systems anytime soon, that is not the case for the web-scale data centre operators. They could deploy SDM as soon as 2018 to 2020, says Winzer.

The web-scale players are talking about 400,000-server data centres in the coming three to five years. “Each server will have a 25-gigabit network interface card and if you assume 10 percent of the traffic leaves the data centre, that is 10,000, 100-gigabit lines,” says Winzer. “Even if I’m off by an order of magnitude, and it is 1,000, 100-gigabit lines leaving the data centre; there is no way you can do that with a single WDM system.”
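Winzer’s egress arithmetic, tallied from the figures quoted:

```python
# Winzer's data-centre egress arithmetic, using the figures quoted.
servers = 400_000
nic_gbps = 25        # 25-gigabit network interface card per server
egress_percent = 10  # share of traffic leaving the data centre

egress_gbps = servers * nic_gbps * egress_percent // 100
lines_100g = egress_gbps // 100
print(egress_gbps, lines_100g)  # 1000000 Gb/s -> 10000 hundred-gig lines
```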

 

SDM and MIMO

SDM can be implemented in several ways. The simplest way to create parallel transmission paths is to bundle several single-mode fibres in a cable. But speciality fibre can also be used, either multi-core or multi-mode.

For the demo, Bell Labs used such a fibre, a coupled 3-core one, but Sebastian Randel, a member of technical staff, said its SDM receiver could also be used with a fibre supporting a few spatial modes. By slightly increasing the diameter of a single-mode fibre, it supports not only the fundamental mode but also two second-order modes. “Our signal processing would cope with that fibre as well,” says Winzer.

The signal processing referred to, which restores the multiple transmissions at the receiver, implements multiple input, multiple output, or MIMO. MIMO is a well-known signal processing technique used for wireless and digital subscriber line (DSL) systems.

 

They are garbled up, that is what the rotation is; undoing the rotation is called MIMO

 

Multi-mode fibre can support as many as 100 spatial modes. “But then you have a really big challenge to excite all 100 spatial modes individually and detect them individually,” says Randel. In turn, the digital signal processing computation required for the 100 modes is tremendous. “We can’t imagine we can get there anytime soon,” says Randel.
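Randel’s computational objection can be made concrete with a rough scaling rule (an assumed rule, not from the interview): take the MIMO equaliser’s cost as growing with the number of entries in its NxN channel matrix, where N is the spatial mode count times two polarisations.

```python
# Rough scaling sketch (an assumed rule, not from the interview): take
# the MIMO equaliser's cost as proportional to the entries in its NxN
# channel matrix, where N = spatial modes x 2 polarisations.
def matrix_entries(spatial_modes: int) -> int:
    n = spatial_modes * 2  # two polarisations per spatial mode
    return n * n

print(matrix_entries(3))    # 36: the 6x6 three-core demo
print(matrix_entries(100))  # 40000: a 100-mode fibre, over 1,000x larger
```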

Instead, Bell Labs used 60 km of the 3-core coupled fibre for its real-time SDM demo. The transmission distance could have been much longer but the fibre sample was only 60 km long. Bell Labs chose the coupled-core fibre for the real-time MIMO demonstration as it is the most demanding case, says Winzer.

The demonstration can be viewed as an extension of coherent detection used for long-distance 100 gigabit optical transmission. In a polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK) system, coupling occurs between the two light polarisations. This is a 2x2 MIMO system, says Winzer, comprising two inputs and two outputs. 

For PM-QPSK, one signal is sent on the x-polarisation and the other on the y-polarisation. The signals travel at different speeds while hugely coupling along the fibre, says Winzer: “The coherent receiver with the 2x2 MIMO processing is able to undo that coupling and undo the different speeds because you selectively excite them with unique signals.” This allows both polarisations to be recovered. 

With the 3-core coupled fibre, strong coupling arises between the three signals and their individual two polarisations, resulting in a 6x6 MIMO system (six inputs and six outputs). The transmission rotates the six signals arbitrarily while the receiver, using 6x6 MIMO, rotates them back. “They are garbled up, that is what the rotation is; undoing the rotation is called MIMO.”
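The “garbled up and rotated back” picture can be sketched in miniature. The toy below, written for illustration rather than taken from Bell Labs’ implementation, uses a 2x2 real rotation (the two-polarisation case); the coupled three-core fibre is the same algebra in six dimensions.

```python
# Toy illustration: the channel applies an unknown rotation to the
# transmitted symbols; the receiver applies the inverse rotation (the
# transpose, for a pure rotation) to recover them. Shown in 2 dimensions
# (the two-polarisation case); the coupled 3-core fibre is the same
# algebra with a 6x6 matrix.
import math

theta = 0.7          # unknown coupling accumulated along the fibre
tx = [1.0, -1.0]     # one symbol per polarisation
c, s = math.cos(theta), math.sin(theta)

rx = [c * tx[0] - s * tx[1],      # channel: rotate by theta
      s * tx[0] + c * tx[1]]

est = [c * rx[0] + s * rx[1],     # receiver "MIMO": rotate back
       -s * rx[0] + c * rx[1]]

print([round(x, 6) for x in est])  # [1.0, -1.0]: both signals recovered
```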

 

Demo details

For the demo, Bell Labs generated 12, 2.5-gigabit signals. These signals are modulated onto an optical carrier at 1550nm using three nested lithium niobate modulators. A ‘photonic lantern’ - an SDM multiplexer - couples the three signals orthogonally into the fibre’s three cores. 

The photonic lantern comprises three single-mode fibre inputs fed by the three single-mode PM-QPSK transmitters while its output places the fibres closer and closer until the signals overlap. “The lantern combines the fibres to create three tiny spots that couple into a single fibre, either single mode or multi-mode,” says Winzer.  

At the receiver, another photonic lantern demultiplexes the three signals which are detected using three integrated coherent receivers. 

 

Don’t do MIMO for MIMO’s sake, do MIMO when it helps to bring the overall integrated system cost down

 

To implement the MIMO, Bell Labs built a 28-layer printed circuit board which connects the three integrated coherent receiver outputs to 12, 5-gigabit-per-second 10-bit analogue-to-digital converters. The result is a 600 gigabit-per-second aggregate digital data stream. This huge data stream is fed to a Xilinx Virtex-7 XC7V2000T FPGA using 480 parallel lanes, each at 1.25 gigabit-per-second. It is the FPGA that implements the 6x6 MIMO algorithm in real time.
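The quoted throughput figures are self-consistent, as a quick check shows:

```python
# A quick consistency check of the demo's quoted throughput figures.
adc_count, samples_per_s, bits_per_sample = 12, 5e9, 10
aggregate_bps = adc_count * samples_per_s * bits_per_sample

lanes, lane_bps = 480, 1.25e9
print(aggregate_bps / 1e9, lanes * lane_bps / 1e9)  # 600.0 600.0 -> they match
```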

“Computational complexity is certainly one big limitation and that is why we have chosen a relatively low symbol rate - 2.5 Gbaud, ten times less than commercial systems,” says Randel. “But this helps us fit the [MIMO] equaliser into a single FPGA.”  

 

Future work

With the growth in IP traffic, optical engineers are going to have to use space and wavelengths. “But how are you going to slice the pie?” says Winzer. 

With the example of 10,000, 100-gigabit wavelengths, will 100 WDM channels be sent over 100 spatial paths or 10 WDM channels over 1,000 spatial paths? “That is a techno-economic design optimisation,” says Winzer. “In those systems, to get the cost-per-bit down, you need integration.”

That is what Bell Labs’ engineers are working on: optical integration to reduce the overall spatial-division multiplexing system cost. “Integration will happen first across the transponders and amplifiers; fibre will come last,” says Winzer. 

Winzer stresses that MIMO-SDM is not primarily about fibre, a point frequently misunderstood. The point is to enable systems with crosstalk, he says. 

“So if some modulator manufacturer can build arrays with crosstalk and sell the modulator at half the price they were able to before, then we have done our job,” says Winzer. “Don’t do MIMO for MIMO’s sake, do MIMO when it helps to bring the overall integrated system cost down.”  

 

Further Information:

Space-division Multiplexing: The Future of Fibre-Optics Communications, click here

For Part 1, click here


Bell Labs demos real-time MIMO over multicore fibre

Part 1 of 2

Bell Labs, the research arm of Alcatel-Lucent, has used a real-time receiver to recover a dozen 2.5-gigabit signals sent over a coupled three-core fibre. Until now, the signal processing for such spatial-division multiplexed transmissions has been done offline due to the computational complexity involved.

 

“The era of real-time experiments in spatial-division multiplexing is starting and this is the very first example” - Peter Winzer

“The era of real-time experiments in spatial-division multiplexing is starting and this is the very first example,” said Peter Winzer, head of the Optical Transmission Systems and Networks Research Department at Bell Labs. “Such real-time experiments are the next stepping stone towards a true product implementation.”    

Spatial-division multiplexing promises to increase the capacity of optical fibre by a factor of between ten and one hundred. Multiple input, multiple output [MIMO], a signal processing technique employed for wireless and for DSL broadband access, is used to recover the signals at the receiver. 

MIMO also promises optical designers a way to tackle crosstalk between components, enabling cheaper integrated optics to be used at the expense of more complex digital signal processing, said Winzer.    

For the demo, Bell-Labs used MIMO to recover twelve 2.5-gigabit transmitted signals down a three-core fibre, in effect three polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK) signals. The result is a 6x6 MIMO system [six inputs, six outputs] due to the coupling between the three signals, each with two polarisations. The signal couplings cause an arbitrary rotation in a 6-dimensional space, says Winzer: “They are garbled up, that is what the rotation is. Undoing the rotation is called MIMO.”

The signals were transmitted at 1,550nm over a 60 km spool of coupled-core fibre. The three 10 gigabit PM-QPSK signals are a tenth the speed of commercial systems but this was necessary for an FPGA to execute MIMO in real time.  

According to Bell Labs, the coupled-core fibre was chosen for the real-time receiver demonstration as it is the most taxing example. The Bell Labs team is now working on optical integration to reduce the overall spatial-division multiplexing system’s cost-per-bit. “Making those transponders cheaper, we are trying to figure out what are the right knobs to turn,” said Winzer.

Bell Labs does not expect telcos to require spatial-division systems soon. But traffic requirements of the web-scale data centre operators could lead to select deployments in three to five years, said Winzer.   

 

For Part 2, a more detailed discussion with Bell Labs about spatial-division multiplexing and the 60km 6x6 MIMO demonstration, click here


Optical networking: The next 10 years

Feature - Part 2: Optical networking R&D

Predicting the future is a foolhardy endeavour; at best, one can make educated guesses.

Ioannis Tomkos is better placed than most to comment on the future course of optical networking. Tomkos, a Fellow of the OSA and the IET at the Athens Information Technology Centre (AIT), is involved in several European research projects that are tackling head-on the challenges set to keep optical engineers busy for the next decade.

“We are reaching the total capacity limit of deployed single-mode, single-core fibre,” says Tomkos. “We can’t just scale capacity because there are limits now to the capacity of point-to-point connections.”

 

Source: Infinera 

The industry consensus is to develop flexible optical networking techniques that make best use of the existing deployed fibre. These techniques include using spectral super-channels, moving to a flexible grid, and introducing ‘sliceable’ transponders whose total capacity can be split and sent to different locations based on the traffic requirements.

Once these flexible networking techniques have exhausted the last Hertz of a fibre’s C-band, additional spectral bands of the fibre will likely be exploited such as the L-band and S-band.

After that, spatial-division multiplexing (SDM) will be used, first over parallel runs of already-deployed single-mode fibre and then with new types of optical transmission systems that use SDM within the same optical fibre. For this, operators will need to put novel fibres in the ground that have multiple modes and multiple cores.

SDM systems will bring about change not only with the fibre and terminal end points, but also the amplification and optical switching along the transmission path. SDM optical switching will be more complex but it also promises huge capacities and overall dollar-per-bit cost savings.     

Tomkos is heading three European research projects - FOX-C, ASTRON & INSPACE.

FOX-C involves all-optically adding and dropping sub-channels from different types of spectral super-channels. ASTRON is undertaking the development of a one-terabit transceiver photonic integrated circuit (PIC). The third, INSPACE, will develop new optical switch architectures for SDM-based networks.  

Tomkos’s research group is also a partner in three other EU projects. One of them - dubbed ACINO - involves a consortium developing a software-defined networking (SDN) controller that oversees sliceable transponders.
These projects are now detailed.

 

FOX-C 

Spectral super-channels are used to create high bit-rate signals - 400 Gigabit and greater - by combining a number of sub-channels. Combining sub-channels is necessary since existing electronics can’t create such high bit rates using a single carrier.

Infinera points out that a 1.2 Terabit-per-second (Tbps) signal implemented using a single carrier would require 462.5 GHz of spectrum while the accompanying electronics to achieve the 384 Gigabaud (Gbaud) symbol rate would require a sub-10nm CMOS process, a technology at least five years away.  

In contrast, implementing the 1.2 Tbps signal using 12 sub-channels, each at 100 Gigabit-per-second (Gbps), occupies the same 462.5 GHz of spectrum but could be done with existing 32 Gbaud electronics. However, instead of one laser and four modulators for the single-carrier case, 12 lasers and 48 modulators are needed for the 1.2 Tbps super-channel (see diagram, top).   
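The component arithmetic can be tallied directly from the figures Infinera quotes:

```python
# Tallying the Infinera super-channel comparison from the quoted figures.
subchannels = 12
per_sub_gbps = 1200 / subchannels        # 100 Gb/s per sub-channel
lasers = subchannels                     # one laser per sub-channel
modulators = subchannels * 4             # four modulators per sub-channel
single_carrier_gbaud = subchannels * 32  # one carrier needs 12x the symbol rate

print(per_sub_gbps, lasers, modulators, single_carrier_gbaud)  # 100.0 12 48 384
```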
 
Operators are already deploying super-channels on existing networking routes. For example, certain 400 Gbps links use two sub-channels, each a single carrier modulated using polarisation-multiplexed, 16 quadrature amplitude modulation (PM-16-QAM).   
 
Meanwhile, CenturyLink was the first operator, in the second quarter of 2012, to deploy a 500 Gbps super-channel using Infinera’s PIC. Infinera’s 500 Gigabit uses 10 sub-channels, each carrying a 50 Gbps signal modulated using polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK).  
 
There are two types of super-channels, says Tomkos:
  • Those that use non-overlapping sub-channels implemented using what is called Nyquist multiplexing. 
  • And those with overlapping sub-channels using orthogonal frequency division multiplexing (OFDM). 
Existing transport systems from the optical vendors use non-overlapping super-channels and Optical Transport Networking (OTN) at the electrical layer for processing, switching and grooming of the signals, says Tomkos: “With FOX-C, we are developing techniques to add/drop sub-channels out of the super-channel without going into the electronic domain.”   
 
Accordingly, the FOX-C project is developing transceivers that implement both types of super-channel, using non-overlapping and overlapping sub-channels, to explore their merits. The project is also developing techniques to enable all-optical adding and dropping of sub-channels from these super-channel types.  
 
With Nyquist-WDM super-channels, the sub-channels are adjacent to each other but are non-overlapping such that dropping or adding a sub-channel is straightforward. Today’s 25 GHz wide filters can separate a sub-channel and insert another in the empty slot.
The FOX-C project will use much finer filtering: 12.5GHz, 6.25GHz, 3.125GHz and even finer resolutions, where there is no fixed grid to adhere to. “We are developing ultra-high resolution filtering technology to do this all-optical add/drop for Nyquist multiplexed sub-channels without any performance degradation,” says Tomkos. The FOX-C filter can achieve a record resolution of 0.8GHz. 
 
OFDM is more complicated since each sub-channel interacts with its neighbours. “If you take out one, you disturb the neighbouring ones, and you introduce severe performance degradation,” says Tomkos. To tackle this, the FOX-C project is using an all-optical interferometer.
“Using the all-optical interferometer introduces constructive and destructive interference among the OFDM sub-channels and the sub-channel or channels we want to drop and add,” says Tomkos. “By properly controlling the interferometer, we are able to perform add/drop functions without performance degradation.”
 
 
ASTRON 

The second project, ASTRON, is developing a one terabit super-channel PIC. The hybrid integration platform uses planar lightwave circuit (PLC) technology based on a glass substrate to which are added the actives: modulator arrays and the photo-detectors in indium phosphide. “We have kept the lasers outside the PIC mostly due to budgetary constraints, but there is no problem to include them also in the PIC,” says Tomkos. The one terabit super-channel will use eight sub-channels, occupying a total spectrum of 200 GHz.  
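The quoted figures imply a net spectral efficiency of 5 bit/s/Hz. A quick back-of-envelope check, assuming the terabit is split evenly across the eight sub-channels (an assumption for this sketch; the split is not stated here):

```python
# Back-of-envelope check on the super-channel figures quoted above,
# assuming an even split across sub-channels.
total_rate_gbps = 1_000     # one terabit
total_spectrum_ghz = 200
sub_channels = 8

spectral_efficiency = total_rate_gbps / total_spectrum_ghz  # bit/s/Hz
per_sub_rate_gbps = total_rate_gbps / sub_channels
per_sub_spectrum_ghz = total_spectrum_ghz / sub_channels

print(spectral_efficiency)                      # 5.0 bit/s/Hz net
print(per_sub_rate_gbps, per_sub_spectrum_ghz)  # 125.0 Gbps in 25.0 GHz
```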
 
The PLC acts as the integration platform onto which the actives are placed. “We use 3D waveguide inscription inside the glass using high-power lasers and flip-chip bonding to couple the actives to the passives inside the PIC,” says Tomkos.  
 
The modulation arrays and the passives have already been made, and the project members have mastered how to create 3D waveguides in the glass to enable the active-passive alignment.
“We are in the process of finalising the technique for doing the hybrid integration and putting everything together,” says Tomkos.  
 
The physical layer PIC is complemented by developments in advanced software-defined digital signal processing (DSP) and forward error correction (FEC) modules implemented on FPGAs to enhance the transmission performance of the transceiver. The working one terabit PIC, expected from October, will then be used for experimentation in transmission testbeds.      
 
 
INSPACE
 
Spatial-division multiplexing promises new efficiencies: instead of individual transponders and amplifiers per fibre, arrays of transponders and amplifiers can be shared across all the spatial super-channels. The approach promises not only far higher overall capacities but also lower cost.
 
The introduction of bundled single-mode fibres, as well as new fibres that transmit over several modes and cores, complicates the optical switching within such SDM systems. Given the huge capacities involved, these channels will rarely be used for simple point-to-point transmission; there will be a need to process and switch spatial sub-channels out of the spatial super-channels. “We are developing a wavelength-selective switch that also operates at the spatial dimension,” says Tomkos.
 
Already it is clear there will be two main SDM switching types. 
 
The first, simpler case involves spatial sub-channels that do not overlap with each other, so that individual sub-channels can be dropped and added. This is the case with fibre that has only a few cores, sufficiently spaced apart that they are effectively isolated from each other. Existing cable, where a bundle of single-mode, single-core fibres is used for SDM, also fits this category. The switching for these fibre configurations is dubbed independent switching.
 
The second SDM switch type, known as joint switching, applies to fibre with multiple, closely spaced cores and to few-mode fibre. In these cases, individual sub-channels cannot be added, dropped or processed independently, as their overlap causes crosstalk. “Here you switch the entire spatially-multiplexed super-channel as a whole, and to do so you can use a single wavelength-selective switch, making the overall network more cost effective,” says Tomkos.
 
Only after dropping the entire super-channel can techniques such as multiple-input/multiple-output (MIMO) processing, already used in cellular networks, be applied in the electronic domain to access individual sub-channels.
 
The goal of the INSPACE project is to develop a new generation of wavelength-selective switches (WSSes) that operate at the spatial dimension.  
 
“The true value of SDM is in its capability to reduce the cost of transport through spatial integration of network elements: fibres, amplifiers, transceivers and nodes,” says Tomkos. If independent switching of several SDM signals is performed using several switches, no cost-per-bit savings result. But by jointly switching all the SDM signals with one switch, the hope is for significant cost reductions, he says.
 
The team has already implemented the first SDM switches one year into the project.  
 

ACINO


The ACINO project is headed by the Italian Centre of Research and Telecommunication Experimentations for Networked communities (Create-net), and also involves Telefonica I+D, ADVA Optical Networking and Tomkos’s group.
 
The project, which began in February, is developing an SDN controller and will use sliceable transponders to deliver different types of application flows over the optical network.
 
To explain the sliceable transponder concept, Tomkos uses the example of a future 10 terabit transponder implemented using 20 or 40 sub-channels. All these sub-channels can be combined to deliver the total 10 Tbps capacity between two points, but in a flexible network, the likelihood is that flows will be variable. If, for example, demand changes such that only one terabit is needed between the two points, suddenly 90 percent of the overall capacity is wasted. Using a sliceable transponder, the sub-channels can be reconfigured dynamically to form different capacity containers, depending on traffic demand. Using the transponder in combination with WSSes, the different sub-channel groupings can be sent to different end points, as required.
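The capacity-container idea can be sketched as a toy allocator; the 10 Tbps and 40-sub-channel figures are the article's example, while the greedy allocation logic is purely illustrative:

```python
import math

def slice_transponder(total_gbps, sub_channels, demands_gbps):
    """Greedily assign whole sub-channels of a sliceable transponder to a
    list of demands; returns {demand index: sub-channel count}, or None
    if the pool is exhausted. Purely illustrative logic."""
    per_sub = total_gbps / sub_channels      # capacity of one sub-channel
    allocation, free = {}, sub_channels
    for i, demand in enumerate(demands_gbps):
        need = math.ceil(demand / per_sub)   # whole sub-channels only
        if need > free:
            return None
        allocation[i] = need
        free -= need
    return allocation

# The article's example: a 10 Tbps transponder of 40 sub-channels (250 Gbps
# each); if demand falls to 1 Tbps, only 4 sub-channels are needed and the
# other 36 can be regrouped and sent to different end points via WSSes.
print(slice_transponder(10_000, 40, [1_000]))  # {0: 4}
```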
 
Combining such transponders with the SDN controller, ACINO will enable high-capacity links to be set up and dismantled on demand and according to the different application requirements. One application flow example is large data storage back-ups scheduled at certain times between an enterprise’s sites, another is backhauling wireless traffic from 5G networks.  
 
Tomkos stresses that the key development of ACINO is not sliceable transponders but the SDN controller and the application awareness that the overall solution will offer. 
 
 
The roadmap  

So how does Tomkos expect optical networking to evolve over the next 10-plus years?  
 
The next five years will see further development of flexible optical networking that makes best use of the existing infrastructure using spectral super-channels, a flexible grid and sliceable software-defined flexible transponders. 
  
From 2020-2025, more of the fibre’s spectral bands will be used, coupled with first use of SDM. SDM could start even sooner by using existing single-core, single-mode fibres and combining them to create an SDM fibre bundle.  
 
But for the other versions of SDM, new fibre must be deployed in the network and that is something that operators will find difficult to accept. This may be possible for certain greenfield deployments or for data centre interconnects, he says.  
 
Only after 2025 does Tomkos expect next-generation SDM systems using higher capacity fibre with a high core and mode count, or even hybrid systems that use both low and high core-count fibre with advanced MIMO processing, to become more widely deployed in backbone networks. 
 

For Part 1, click here

Heading off the capacity crunch

Feature - Part 1: Capacity limits and remedies

Improving optical transmission capacity to keep pace with the growth in IP traffic is getting trickier. 

Engineers are being taxed in the design decisions they must make to support a growing list of speeds and data modulation schemes. There is also a fissure emerging in the equipment and components needed to address the diverging needs of long-haul and metro networks. As a result, far greater flexibility is needed, with designers looking to elastic or flexible optical networking where data rates and reach can be adapted as required.

Figure 1: The green line is the non-linear Shannon limit, above which transmission is not possible. The chart shows how more bits can be sent in a 50 GHz channel as the optical signal to noise ratio (OSNR) is increased. The blue dots closest to the green line represent the performance of the WaveLogic 3, Ciena's latest DSP-ASIC family. Source: Ciena.

But perhaps the biggest challenge is only just looming. Because optical networking engineers have been so successful in squeezing information down a fibre, their scope to send additional data in future is diminishing. Simply put, it is becoming harder to put more information on the fibre as the Shannon limit, as defined by information theory, is approached.

"Our [lab] experiments are within a factor of two of the non-linear Shannon limit, while our products are within a factor of three to six of the Shannon limit," says Peter Winzer, head of the optical transmission systems and networks research department at Bell Laboratories, Alcatel-Lucent. The non-linear Shannon limit dictates how much information can be sent across a wavelength-division multiplexing (WDM) channel as a function of the optical signal-to-noise ratio.

A factor of two may sound a lot, says Winzer, but it is not. "To exhaust that last factor of two, a lot of imperfections need to be compensated and the ASIC needs to become a lot more complex," he says. The ASIC is the digital signal processor (DSP), used for pulse shaping at the transmitter and coherent detection at the receiver.     
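For a feel for the numbers, the linear (noise-limited) Shannon capacity of a dual-polarisation 50 GHz channel can be computed directly. This sketch assumes the usual 12.5 GHz (0.1 nm) OSNR reference bandwidth; the non-linear limit Winzer refers to lies below this bound:

```python
import math

def shannon_capacity_gbps(osnr_db, signal_bw_ghz=50.0, ref_bw_ghz=12.5):
    """Linear Shannon capacity (Gbps) of a dual-polarisation WDM channel.

    OSNR is conventionally quoted in a 12.5 GHz (0.1 nm) reference
    bandwidth, so convert it to in-band SNR first, then apply
    C = 2 * B * log2(1 + SNR), the factor of two covering the two
    polarisations. The non-linear Shannon limit lies below this bound.
    """
    snr = 10 ** (osnr_db / 10) * (ref_bw_ghz / signal_bw_ghz)
    return 2 * signal_bw_ghz * math.log2(1 + snr)

for osnr_db in (15, 20, 25):
    print(osnr_db, "dB OSNR ->", round(shannon_capacity_gbps(osnr_db), 1), "Gbps")
# 20 dB OSNR in a 50 GHz channel gives roughly 470 Gbps as an upper bound
```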

 

Our [lab] experiments are within a factor of two of the non-linear Shannon limit, while our products are within a factor of three to six of the Shannon limit - Peter Winzer 

 

At the recent OFC 2015 conference and exhibition, there were plenty of announcements pointing to industry progress. Several companies announced 100 Gigabit coherent optics in the pluggable, compact CFP2 form factor, while Acacia detailed a flexible-rate 5x7-inch MSA capable of 200, 300 and 400 Gigabit rates. And research results were reported on the topics of elastic optical networking and spatial division multiplexing, work designed to ensure that networking capacity continues to scale.

 

Trade-offs

There are several performance issues that engineers must consider when designing optical transmission systems. Clearly, for submarine systems, maximising reach and the traffic carried by a fibre are key. For metro, more data can be carried on a single carrier to improve overall capacity, but at the expense of reach.

Such varied requirements are met using several design levers:  

  •  Baud or symbol rate 
  •  The modulation scheme which determines the number of bits carried by each symbol 
  •  Multiple carriers, if needed, to carry the overall service as a super-channel

The baud rate used is dictated by the performance limits of the electronics. Today that is 32 Gbaud: 25 Gbaud for the data payload and up to 7 Gbaud for forward error correction and other overhead bits. 
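The arithmetic behind these figures is straightforward: net rate is the payload symbol rate times bits per symbol times the two polarisations. A minimal sketch, ignoring framing details:

```python
def payload_rate_gbps(payload_gbaud, bits_per_symbol, polarisations=2):
    """Net data rate: payload symbol rate x bits per symbol x polarisations.
    Only the payload share of the symbol rate counts; the remainder
    (here 7 of 32 Gbaud) carries FEC and other overhead bits."""
    return payload_gbaud * bits_per_symbol * polarisations

# 100G coherent: 25 Gbaud payload of a 32 Gbaud signal, PM-QPSK (2 bits/symbol)
print(payload_rate_gbps(25, 2))   # 100 Gbps
# The same baud with 16-QAM (4 bits/symbol) doubles the rate
print(payload_rate_gbps(25, 4))   # 200 Gbps
```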

Doubling the symbol rate from 32 Gbaud used for 100 Gigabit coherent to 64 Gbaud is a significant challenge for the component makers. The speed hike requires a performance overhaul of the electronics and the optics: the analogue-to-digital and digital-to-analogue converters and the drivers through to the modulators and photo-detectors. 

"Increasing the baud rate gives more interface speed for the transponder," says Winzer. But the overall fibre capacity stays the same, as the signal spectrum doubles with a doubling in symbol rate.

However, increasing the symbol rate brings cost and size benefits. "You get more bits through, and so you are sharing the cost of the electronics across more bits," says Kim Roberts, senior manager, optical signal processing at Ciena. It also implies a denser platform by doubling the speed per line card slot.  

 

As you try to encode more bits in a constellation, so your noise tolerance goes down - Kim Roberts   

 

Modulation schemes 

The modulation used determines the number of bits encoded on each symbol. Optical networking equipment already uses binary phase-shift keying (BPSK, or 2-quadrature amplitude modulation, 2-QAM) for the most demanding, longest-reach submarine spans; the workhorse quadrature phase-shift keying (QPSK, or 4-QAM) for 100 Gigabit-per-second (Gbps) transmission; and 16-QAM for 200 Gbps at distances up to 1,000 km.

Moving to a higher QAM scheme increases WDM capacity but at the expense of reach. That is because as more bits are encoded on a symbol, the separation between them is smaller. "As you try to encode more bits in a constellation, so your noise tolerance goes down," says Roberts.   
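Roberts' point can be quantified with the textbook minimum-distance formula for square M-QAM at fixed average symbol energy; this is a general result, not anything specific to Ciena's DSP:

```python
import math

def qam_dmin(m):
    """Minimum distance between points of a square M-QAM constellation
    (M = 4, 16, 64, ...), normalised to unit average symbol energy:
    d_min = sqrt(6 / (M - 1))."""
    return math.sqrt(6 / (m - 1))

def snr_penalty_db(m_from, m_to):
    """Extra SNR (dB) needed to keep the same minimum distance when
    moving from M_from-QAM to M_to-QAM at fixed average power."""
    return 20 * math.log10(qam_dmin(m_from) / qam_dmin(m_to))

# Each two-bit step up in constellation size costs roughly 6-7 dB:
print(round(snr_penalty_db(4, 16), 1))   # QPSK -> 16-QAM: ~7.0 dB
print(round(snr_penalty_db(16, 64), 1))  # 16-QAM -> 64-QAM: ~6.2 dB
```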

One recent development among system vendors has been to add more modulation schemes to enrich the transmission options available. 

 

From QPSK to 16-QAM, you get a factor of two increase in capacity but your reach decreases of the order of 80 percent - Steve Grubb

 

Besides BPSK, QPSK and 16-QAM, vendors are adding 8-QAM, an intermediate scheme between QPSK and 16-QAM. These include Acacia with its AC-400 MSA, Coriant, and Infinera. Infinera has tested 8-QAM as well as 3-QAM, a scheme between BPSK and QPSK, as part of submarine trials with Telstra. 

"From QPSK to 16-QAM, you get a factor of two increase in capacity but your reach decreases of the order of 80 percent," says Steve Grubb, an Infinera Fellow. Using 8-QAM boosts capacity by half compared to QPSK, while delivering more signal margin than 16-QAM. Having the option to use the intermediate formats of 3-QAM and 8-QAM enriches the capacity tradeoff options available between two fixed end-points, says Grubb.    

Ciena has added two chips to its WaveLogic 3 DSP-ASIC family of devices: the WaveLogic 3 Extreme and the WaveLogic 3 Nano for metro. 

The WaveLogic 3 Extreme uses a proprietary modulation format that Ciena calls 8D-2QAM, a tweak on BPSK with longer-duration signalling that extends span distances by up to 20 percent. The 8D-2QAM is aimed at legacy dispersion-compensated fibre carrying 10 Gbps wavelengths and offers up to 40 percent additional upgrade capacity compared with BPSK.

Ciena has also added 4-amplitude-shift-keying (4-ASK) modulation alongside QPSK to its WaveLogic 3 Nano chip. The 4-ASK scheme is also designed for use alongside 10 Gbps wavelengths that introduce phase noise, to which 4-ASK has greater tolerance than QPSK. Ciena's 4-ASK design also generates less heat and is less costly than BPSK.

According to Roberts, a designer’s goal is to use the fastest symbol rate possible, and then add the richest constellation possible "to carry as many bits as you can, given the noise and distance you can go".

After that, the remaining issue is whether a carrier’s service can be fitted on one carrier or whether several carriers are needed, forming a super-channel. Packing a super-channel's carriers tightly benefits overall fibre spectrum usage and reduces the spectrum wasted for guard bands needed when a signal is optically switched.  
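The guard-band saving can be put into rough numbers; the carrier width and guard-band values below are illustrative assumptions, not figures from the article:

```python
def spectrum_ghz(carriers, carrier_bw_ghz, guard_ghz, as_superchannel):
    """Total spectrum for N carriers. Switched individually, every carrier
    needs guard bands on both sides; packed into one super-channel, the
    carriers sit tightly together and guard bands appear only at the edges."""
    if as_superchannel:
        return carriers * carrier_bw_ghz + 2 * guard_ghz
    return carriers * (carrier_bw_ghz + 2 * guard_ghz)

# Illustrative: four 37.5 GHz carriers with 10 GHz guard bands
print(spectrum_ghz(4, 37.5, 10, as_superchannel=False))  # 230.0 GHz
print(spectrum_ghz(4, 37.5, 10, as_superchannel=True))   # 170.0 GHz
```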

Can the symbol rate be doubled to 64 Gbaud? "It looks impossibly hard but people are going to solve that," says Roberts. It is also possible to use a hybrid approach where both the symbol rate and the modulation scheme are increased. The table shows how different baud rate/modulation scheme combinations can be used to achieve a 400 Gigabit single-carrier signal.

 

Note how using polarisation for coherent transmission doubles the overall data rate. Source: Gazettabyte

 

But industry views differ as to how much scope there is to improve overall capacity of a fibre and the optical performance.

Roberts stresses that his job is to develop commercial systems rather than conduct lab 'hero' experiments. Such systems need to work in networks for 15 years and must be cost competitive. "It is not over yet," says Roberts.

He says we are still some way off from when all that remains are minor design tweaks only. "I don't have fun changing the colour of the paint or reducing the cost of the washers by 10 cents,” he says. “And I am having a lot of fun with the next-generation design [being developed by Ciena].”  

"We are nearing the point of diminishing returns in terms of spectrum efficiency, and the same is true with DSP-ASIC development," says Winzer. Work will continue to develop higher speeds per wavelength, to increase capacity per fibre, and to achieve higher densities and lower costs. In parallel, work continues on software and networking architectures: for example, flexible multi-rate transponders for elastic optical networking, and software-defined networking able to adapt the optical layer.

After that, designers are looking at using more amplification bands, such as the L-band and S-band alongside the current C-band to increase fibre capacity. But it will be a challenge to match the optical performance of the C-band across all bands used. 

"I would believe in a doubling or maybe a tripling of bandwidth but absolutely not more than that," says Winzer. "This is a stop-gap solution that allows me to get to the next level without running into desperation." 

The designers' 'next level' is spatial division multiplexing. Here, signals are launched down multiple channels, such as multiple fibres, multi-mode fibre and multi-core fibre. "That is what people will have to do on a five-year to 10-year horizon," concludes Winzer. 

 

For Part 2, click here

 

See also:

  • High Capacity Transport - 100G and Beyond, Journal of Lightwave Technology, Vol 33, No. 3, February 2015.

 

A version of this article first appeared in an OFC 2015 show preview


Software-defined networking: A network game-changer?

Q&A with Andrew Lord, head of optical research at BT, about his impressions following the recent OFC/NFOEC show.

OFC/NFOEC reflections: Part 1


"We [operators] need to move faster"

Andrew Lord, BT

 

 

 

 

 

Q: What was your impression of the show?

A: Nothing out of the ordinary. I haven't come away clutching a whole bunch of results that I'm determined to go and check out, which I do sometimes.

I'm quite impressed by how the main equipment vendors have moved on to look seriously at post-100 Gigabit transmission. In fact we have some [equipment] in the labs [at BT]. That is moving on pretty quickly. I don't know if there is a need for it just yet but they are certainly getting out there, not with live chips but making serious noises on 400 Gig and beyond.

There was a talk on the CFP [module] and whether we are going to be moving to a coherent CFP at 100 Gig. So what is going to happen to those prices? Is there really going to be a role for non-coherent 100 Gig? That is still a question in my mind.


"Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?"

 

I was quite keen on that but I'm wondering if there is going to be a limited opportunity for the non-coherent 100 Gig variants. The coherent prices will drop and my feeling from this OFC is they are going to drop pretty quickly when people start putting these things [100 Gig coherent] in; we are putting them in. So I don't know quite what the scope is for people that are trying to push that [100 Gigabit direct detection].

 

What was noteworthy at the show?

There is much talk about software-defined networking (SDN), so much talk that a lot of people in my position have been describing it as hype. There is a robust debate internally [within BT] on the merits of SDN which is essentially a data centre activity. In a live network, can we make use of it? There is some skepticism.

I'm still fairly optimistic about SDN and the role it might have and the [OFC/NFOEC] conference helped that.

I'm expecting next year to be the SDN conference and I'd be surprised if SDN doesn't have a much greater impact then [OFC/NFOEC 2014] with more people demoing SDN use cases.

 

Why is there so much excitement about SDN?

Why now when it could have happened years ago? We could have all had GMPLS (Generalised Multi-Protocol Label Switching) control planes. We haven't got them. Control plane research has been around for a long time; we don't use it: we could but we don't. We are still sitting with heavy OpEx-centric networks, especially optical.


"The 'something different' this conference was spatial-division multiplexing"


So why are we getting excited? Getting the cost out of the operational side - the software-development side, and the ability to buy from whomever we want to.

For example, if we want to buy a new network, we put out a tender and have some 10 responses. It is hard to adjudicate them all equally when, with some of them, we'd have to start from scratch with software development, whereas with others we have a head start as our own management interface has already been developed. That shouldn't and doesn't need to be the case.

In theory, and this is probably naive, opening the equipment's north-bound interface into our own OSS (operations support system) means any OSS we develop ought to work.

Our dream future is that we would buy equipment from whomever we want and it works. Why can't we do that for the network?

We want to as it means we can leverage competition but also we can get new network concepts and builds in quicker without having to suffer 18 months of writing new code to manage the thing. We used to do that but it is no longer acceptable. It is too expensive and time consuming; we need to move faster.

It [the interest in SDN] is just competition hotting up and costs getting harder to manage. This is an area that is now the focus and SDN possibly provides a way through that.

Another issue is the ability to put quickly new applications and services onto our networks. For example, a bank wants to do data backup but doesn't want to spend a year and resources developing something that it uses only occasionally. Is there a bandwidth-on-demand application we can put onto our basic network infrastructure? Why not?

SDN gives us a chance to do something like that, we could roll it out quickly for specific customers.

 

Anything else at OFC/NFOEC that struck you as noteworthy?  

The core networks aspect of OFC is really my main interest.

You are taking the components, a big part of OFC, and then the transmission experiments and all the great results that they get - multiple Terabits and new modulation formats - and then in networks you are saying: What can I build?

The networks have always been the poor relation. It has not had the great exposure or the same excitement. Well, now, the network is becoming centre stage.

As you see components and transmission mature - and it is maturing as the capacity we are seeing on a fibre is almost hitting the natural limit - so the spectral efficiency, the amount of bits you can squeeze into a single Hertz, is hitting the limit of 3, 4, 5, 6 [bit/s/Hz]. You can't get much more than that if you want to go a reasonable distance.

So the big buzz word - 70 to 80 percent of the OFC papers we reviewed - was flex-grid, turning the optical spectrum in fibre into a much more flexible commodity where you can have whatever spectrum you want between nodes dynamically. Very, very interesting; loads of papers on that. How do you manage that? What benefits does it give?

 

What did you learn from the show?

One area I don't get yet is spatial-division multiplexing. Fibre is filling up so where do we go? Well, we need to go somewhere because we are predicting our networks continuing to grow at 35 to 40 percent.

Now we are hitting a new era. Putting fibre in doesn't really solve the problem in terms of cost, energy and space. You are just layering solutions on top of each other and you don't get any more revenue from it. We are stuffed unless we do something different.

The 'something different' this conference was spatial-division multiplexing. You still have a single fibre but you put in multiple cores and that is the next way of increasing capacity. There is an awful lot of work being done in this area.

I gave a paper [pointing out the challenges]. I couldn't see how you would build the splicing equipment, how you would get this fibre qualified given the 30-40 years of expertise of companies like Corning making single mode fibre, are we really going to go through all that again for this new fibre? How long is that going to take? How do you align these things?

 

"SDN for many people is data centres and I think we [operators] mean something a bit different." 

 

I just presented the basic pitfalls from an operator's perspective of using this stuff. That is my skeptic side. But I could be proved wrong, it has happened before!

 

Anything you learned that got you excited?

One thing I saw is optics pushing out.

In the past we saw 100 Megabit and one Gigabit Ethernet (GbE) being king of a certain part of the network. People were talking about that becoming optics.

We are starting to see optics entering a new phase. Ten Gigabit Ethernet is a wavelength, a colour on a fibre. If the cost of those very simple 10GbE transceivers continues to drop, we will start to see optics enter a new phase where we could be seeing it all over the place: you have a GigE port, well, have a wavelength.

[When that happens] optics comes centre stage and then you have to address optical questions. This is exciting and Ericsson was talking a bit about that.

 

What will you be monitoring between now and the next OFC?

We are accelerating our SDN work. We see that as being game-changing in terms of networks. I've seen enough open standards emerging, enough will around the industry with the people I've spoken to, some of the vendors that want to do some work with us, that it is exciting. Things like 4k and 8k (ultra high definition) TV, providing the bandwidth to make this thing sensible.

 

"I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job."

 

Think of a health application where you have a 4 or 8k TV camera giving an ultra high-res picture of a scan, piping that around the network at many many Gigabits. These type of applications are exciting and that is where we are going to be putting a bit more effort. Rather than the traditional just thinking about transmission, we are moving on to some solid networking; that is how we are migrating it in the group.

 

When you say open standards [for SDN], OpenFlow comes to mind.

OpenFlow is a lovely academic thing. It allows you to open a box for a university to try their own algorithms. But it doesn't really help us because we don't want to get down to that level.

I don't think BT needs to be delving into the insides of an IP router trying to improve how it moves packets. That is not our job.

What we need is the next level up: taking entire network functions and having them presented in an open way.

For example, something like OpenStack [the open source cloud computing software] that allows you to start to bring networking, and compute and memory resources in data centres together.

You can start to say: I have a data centre here, another here and some networking in between, how can I orchestrate all of that? I need to provide some backup or some protection, what gets all those diverse elements, in very different parts of the industry, what is it that will orchestrate that automatically?

That is the kind of open theme that operators are interested in.

 

That sounds different to what is being developed for SDN in the data centre. Are there two areas here: one networking and one the data centre?

You are quite right. SDN for many people is data centres and I think we mean something a bit different. We are trying to have multi-vendor leverage and as I've said, look at the software issues.

We also need to be a bit clearer as to what we mean by it [SDN].

 

Andrew Lord has been appointed technical chair at OFC/NFOEC

 

Further reading

Part 2: OFC/NFOEC 2013 industry reflections, click here

Part 3: OFC/NFOEC 2013 industry reflections, click here

Part 4: OFC/NFOEC industry reflections, click here

Part 5: OFC/NFOEC 2013 industry reflections, click here


OFC/NFOEC 2013: Technical paper highlights

Source: The Optical Society

Network evolution strategies, state-of-the-art optical deployments, next-generation PON and data centre interconnect are just some of the technical paper highlights of the upcoming OFC/NFOEC conference and exhibition, to be held in Anaheim, California from March 17-21, 2013. Here is a selection of the papers.

Optical network applications and services

Fujitsu and AT&T Labs-Research (Paper Number: 1551236) present simulation results of shared mesh restoration in a backbone network. The simulation uses up to 27 percent fewer regenerators than dedicated protection while increasing capacity by some 40 percent.

KDDI R&D Laboratories and the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain (Paper Number: 1553225) show results of an OpenFlow/stateless PCE integrated control plane that uses protocol extensions to enable end-to-end path provisioning and lightpath restoration in a transparent wavelength switched optical network (WSON).

In invited papers, Juniper highlights the benefits of multi-layer packet-optical transport, IBM discusses future high-performance computers and optical networking, while Verizon addresses multi-tenant data centre and cloud networking evolution.


Network technologies and applications

A paper by NEC (Paper Number: 1551818) highlights 400 Gigabit transmission using four parallel 100 Gigabit subcarriers over 3,600 km. Using optical Nyquist shaping, each carrier occupies 37.5 GHz for a total bandwidth of 150 GHz.

In an invited paper, Andrea Bianco of the Politecnico di Torino, Italy, details energy awareness in the design of optical core networks, while Verizon's Roman Egorov discusses next-generation ROADM architecture and design.


FTTx technologies, deployment and applications

In invited papers, operators share their analysis and experiences regarding optical access. Ralf Hülsermann of Deutsche Telekom evaluates the cost and performance of WDM-based access networks, while France Telecom's Philippe Chanclou shares the lessons learnt regarding its PON deployments and details its next steps.


Optical devices for switching, filtering and interconnects

In invited papers, MIT's Vladimir Stojanovic discusses chip and board scale integrated photonic networks for next-generation computers. Alcatel-Lucent's Bell Labs' Nicholas Fontaine gives an update on devices and components for space-division multiplexing in few-mode fibres, while Acacia's Long Chen discusses silicon photonic integrated circuits for WDM and optical switches.

Optoelectronic devices

Teraxion and McGill University (Paper Number: 1549579) detail a compact (6 mm x 8 mm) silicon photonics-based coherent receiver. Using PM-QPSK modulation at 28 Gbaud, transmission over up to 4,800 km is achieved.

Meanwhile, Intel and UC Santa Barbara (Paper Number: 1552462) discuss a hybrid silicon DFB laser array emitting over 200 nm, integrated with EAMs (3 dB bandwidth > 30 GHz). Four bandgaps spread over more than 100 nm are realised using quantum well intermixing.


Transmission subsystems and network elements

In invited Papers, David Plant of McGill University compares OFDM and Nyquist WDM, while AT&T's Sheryl Woodward addresses ROADM options in optical networks and whether to use a flexible grid or not.

Core networks

Orange Labs' Jean-Luc Auge asks whether flexible transponders can be used to reduce margins. In other invited papers, Rudiger Kunze of Deutsche Telekom details the operator's standardisation activities to achieve 100 Gig interoperability for metro applications, while Jeffrey He of Huawei discusses the impact of cloud, data centres and IT on transport networks.

Access networks

Roberto Gaudino of the Politecnico di Torino discusses the advantages of coherent detection in reflective PONs. In other invited papers, Hiroaki Mukai of Mitsubishi Electric details an energy efficient 10G-EPON system, Ronald Heron of Alcatel-Lucent Canada gives an update on FSAN's NG-PON2 while Norbert Keil of the Fraunhofer Heinrich-Hertz Institute highlights progress in polymer-based components for next-generation PON.

Optical interconnection networks for datacom and computercom

Use of orthogonal multipulse modulation for 64 Gigabit Fibre Channel is detailed by Avago Technologies and the University of Cambridge (Paper Number: 1551341).

IBM T.J. Watson (Paper Number: 1551747) has a paper on a 35Gbps VCSEL-based optical link using 32nm SOI CMOS circuits. IBM is claiming record optical link power efficiencies of 1pJ/b at 25Gb/s and 2.7pJ/b at 35Gbps.
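As a rough sanity check on what those efficiency figures mean in practice, energy-per-bit multiplied by line rate gives the link's power draw. This sketch only applies standard unit arithmetic to the numbers quoted above:

```python
def link_power_mw(pj_per_bit, gbps):
    """Link power in milliwatts, from energy-per-bit (pJ/b) and line rate (Gb/s)."""
    # (1e-12 J/b) * (1e9 b/s) = 1e-3 W, i.e. one milliwatt per (pJ/b * Gb/s)
    return pj_per_bit * gbps

# IBM's quoted figures translate to roughly 25 mW and 95 mW of link power.
print(link_power_mw(1.0, 25))
print(link_power_mw(2.7, 35))
```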

Several companies detail activities for the data centre in the invited papers.

Oracle's Ola Torudbakken has a paper on a 50Tbps optically-cabled Infiniband data centre switch, HP's Mike Schlansker discusses configurable optical interconnects for scalable data centres, Fujitsu's Jun Matsui details a high-bandwidth optical interconnection for a densely integrated server, while Brad Booth of Dell looks at optical interconnects for volume servers.

In other papers, Mike Bennett of Lawrence Berkeley National Lab looks at network energy efficiency issues in the data centre. Lastly, Cisco's Erol Roberts addresses data centre architecture evolution and the role of optical interconnect.


Achieving 56 Gigabit VCSELs

A Q&A with Finisar's Jim Tatum, director of new product development. Tatum talks about the merits of the vertical-cavity surface-emitting laser (VCSEL) and the challenges to get VCSELs to work at 56 Gigabit.

Briefing: VCSELs


VCSELs galore! A wafer of 28 Gig devices. Source: Finisar

Q. What are the merits of VCSELs compared to other laser technologies?

A: VCSELs have been a workhorse for the datacom industry for some 15 years. In that time there have been some 500 million devices deployed for data infrastructure links, with Finisar being a major producer of these VCSELs.

The competition is copper, which means you need to be at a cost that makes such [optical] links attractive. This is where VCSELs have value: they operate at 850nm, which means running on multi-mode fibre.

With VCSELs coupling to multi-mode fibre, [the core diameter] is in the tens of microns, whereas it is one micron for single-mode fibre, and that is where the cost is. Also, with VCSELs and multi-mode fibre we don't need optical isolators, which add significant cost to the assemblies. It is not the cost of the laser die itself; the difference between the link [approaches] is the cost of the optics and getting light in and out of the fibre.

There are also advantages to the VCSEL itself: wafer-level testing that allows rapid testing of the die before you commit to further packaging costs. This becomes more important as the VCSEL speed gets higher.

 

What are the differences with 850nm VCSELs compared to longer wavelength (1300nm and 1550nm) VCSELs?

At 850nm you are growing devices that are all epitaxial - the laser mirrors are grown epitaxially and the quantum wells are grown in one shot. At the other wavelengths, it is much harder.

People have managed it at 1300nm but it is not yet proven to be a reliable material system for getting high-speed operation. When you go to 1550nm, you are doing wafer bonding of the mirrors and active regions or you are doing more complex epitaxial processing.

That is where 850nm VCSELs have a nice advantage in that the whole thing is done in one shot; the epitaxy and the fabrication are relatively simple. You don't have the complex manufacturing of chip parts that you do at 1550nm.

 

What link distances are served by 850nm VCSELs?

The longest standards are for 500m. As we venture to higher speeds - 28 Gigabit-per-second (Gbps) - 100m is more the maximum. And this trend will continue, at 56Gbps I would anticipate less than 50m and maybe 25m.

The good news is that the number of links that become economically viable at those speeds grows exponentially at these shorter distances. Put another way, copper is very challenged at 56Gbps lane rates and we'll see optics and VCSEL technology move inside the chassis for board-to-board and even chip-to-chip interconnects. Such applications will deliver much higher volumes.

 

"Taking that next step - turning the 28Gbps VCSEL into a product - is where all the traps lie"

 

What are the shortest distances?

There are the edge-mounted connections and those are typically 1-5m. There is also a lot of demonstrated work with VCSELs on boards doing chip-to-chip interconnect. That is a big potential market for these devices as well.

The 28Gbps VCSEL has been demonstrated but commercial products are not yet available. Is such a device relatively straightforward to develop, or a challenge?

Achieving a 28Gbps VCSEL is hard. Certainly there have been many companies that have demonstrated a modulation capability at that speed. However, it is one thing to do it one time, another to put a reliable VCSEL product into a transceiver with everything around it.

Taking that next step - turning the 28Gbps VCSEL into a product - is where all the traps lie. That is where the bulk of the work is being done today. Certainly this year there will be 25Gbps/28Gbps products out in customers' hands.

 

"With a VCSEL, you have to fill up a volume of active region with enough carriers to generate photons and you can only put in so many, so fast. The smaller you can make that volume, the faster you can lase."

 

What are the issues that dictate a VCSEL's speed?

When you think about going to the next VCSEL speed, it helps to think about where we came from.

All the devices shipped, from 1 to 10 Gig, had gallium arsenide active regions. It has lots of wonderful attributes but one of its less favourable ones is that it is not the highest speed. Going to 14Gbps and 28Gbps we had to change the active region from gallium arsenide to indium gallium arsenide and that gives us an enhancement of the differential gain, a key parameter for controlling speed. 

What you really want to know when you are dealing with speed is: for every incremental bit of current I give the [VCSEL] device, how much more does that translate into gain, or more photons coming out? If you can make that happen more efficiently, then the edge speed of the device increases. In other words, you don't have to deal with other parasitics - carriers going into non-radiative recombination centres and that sort of thing; everything goes into the production of photons rather than other parasitic processes.

With a VCSEL, you have to fill up a volume of active region with enough carriers to generate photons and you can only put in so many, so fast. The smaller you can make that volume, the faster you can lase. 

Differential gain is a measure of the efficiency in terms of the number of photons generated by a particular carrier. If I can increase that efficiency of making photons, then my transition speed and my edge speed of the laser increases.

The chart shows differential gain on the y-axis and the current density going into the part on the x-axis. The decay tells you that if I'm running really high currents, the differential gain is worse for indium gallium arsenide parts. So you want to operate your device at a carrier density that maximises the differential gain.

Part of that maximisation is using fewer carriers in smaller quantum wells so that it ramps up the curve. You want to operate at a lower current density while also doing a better job of converting each carrier into photons.
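The link between differential gain, active volume and speed that Tatum describes shows up in the textbook small-signal relation for a laser's relaxation oscillation frequency, which caps how fast it can be directly modulated. The sketch below uses that generic relation with order-of-magnitude placeholder values; none of the numbers are Finisar device data:

```python
import math

def relaxation_freq_ghz(diff_gain_cm2, photon_density_cm3, tau_p_s, v_g_cm_s=8.5e9):
    """Textbook relation f_r = (1/2pi) * sqrt(v_g * a * S / tau_p), in GHz.

    a is the differential gain and S the photon density; a smaller active
    volume raises S for a given current, which is why shrinking the volume helps.
    """
    f_r_hz = math.sqrt(v_g_cm_s * diff_gain_cm2 * photon_density_cm3 / tau_p_s) / (2 * math.pi)
    return f_r_hz / 1e9

# Doubling the differential gain (roughly the gallium arsenide to indium
# gallium arsenide move) raises f_r by sqrt(2): speed scales with the
# square root of differential gain, not linearly.
base = relaxation_freq_ghz(5e-16, 1e16, 2e-12)
boosted = relaxation_freq_ghz(1e-15, 1e16, 2e-12)
print(round(boosted / base, 3))
```

With these illustrative values the base device lands in the low tens of GHz, which is the right ballpark for the 14Gbps-to-28Gbps generation under discussion.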

 

What else besides differential gain dictates VCSEL performance?

The speed of the laser increases above threshold as the square root of the current. That gives you a diminishing return on investment in terms of how much current you put into the device.

However, the reliability of the part degrades with the cube of the current you put into it. So you get to a boundary condition where speed varies as the square root of the current while reliability degrades with its cube. The intersection of those two curves is where you are willing to live in terms of reliability.

That is the trade-off we constantly have to deal with when designing lasers for high speed communications.
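Those two scaling rules can be put into a toy model to show how an operating point falls out of them. Every constant below - the D-factor, the threshold, the lifetime constant and the 100 kilo-hour target - is an arbitrary placeholder for illustration, not real VCSEL data:

```python
def speed_ghz(i_ma, i_th_ma=0.5, d_factor=8.0):
    # Speed rises as the square root of current above threshold:
    # diminishing returns for each extra milliamp of drive.
    return d_factor * (i_ma - i_th_ma) ** 0.5

def lifetime_khours(i_ma, k=8000.0):
    # Reliability degrades with the cube of the drive current.
    return k / i_ma ** 3

# Sweep the bias current and keep the fastest operating point that still
# meets a (made-up) 100 kilo-hour lifetime target - the intersection of
# the speed and reliability curves described above.
currents = [0.6 + 0.1 * n for n in range(60)]
viable = [i for i in currents if lifetime_khours(i) >= 100.0]
best = max(viable)
print(round(best, 1), round(speed_ghz(best), 1), round(lifetime_khours(best), 1))
```

Raising the D-factor (better differential gain) lifts the achievable speed without touching the reliability bound, which is why the change of active-region material mattered so much.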

 

Having explained the importance of this region of operation, what changes in terms of the laser when operating at 28Gbps and at 56Gbps?

At 14Gbps and even at 28Gbps the lasers are directly modulated with little analogue trickery. That said, 28Gbps Fibre Channel does allow you to use equalisation at the receiver.

My feeling today is that at 56Gbps, direct modulation of the laser is going to be pretty tricky. At that speed there is going to have to be dispersion compensation or equalisation built into the optical system.

There are a lot of ways to incorporate analogue or even digital methods to reduce the effective bandwidth the device needs to support below 56Gbps. One of these is a little bit of pre-emphasis and equalisation. Another is to use analogue modulation levels. Alternatively, you can borrow a whole lot more from the digital communications world and look at sub-carrier multiplexing or other more advanced modulation schemes. In other words, pull the bandwidth of the laser down instead of doing 1, 0 on-off stuff. At 56 Gig those things are going to be a requirement.

The bottom line is that a 28Gbps VCSEL design may be something pretty similar to a 56 Gig part, with the addition of bandwidth-enhancement techniques.

 

"I can see [VCSEL] modulation rates going to 100Gbps"

 

So has VCSEL technology already reached its peak?

In terms of direct modulation of a VCSEL - pushing current into it and generating photons - 28 Gig is a reasonable place. And 56 Gig or 40Gig VCSELs may happen with some electronic trickery around it.

As for the next step - and even at 56Gbps - there is a fair amount of investigation into alternative modulation techniques for VCSELs.

Instead of modulating the current in the active region, you can do passive modulation of an external absorber inside the epitaxial structure. That starts to look like a modulated laser you would see in the telecom industry, but it is all grown epitaxially. Once you are modulating a passive component, the modulation speed can get significantly higher. I can see modulation rates going to 100Gbps, for example.

 

The VCSEL roadmap isn't running out then, but it is getting more complicated. Will it take longer to achieve each device transition: from 28 to 56Gbps, and from 56Gbps to 112Gbps?

A question that is difficult to answer.

The timeline will probably stretch out every time you try to scale the bandwidth. But maybe not, if you are able to do things like combine other technologies at 56Gbps or do things that are more package related. For example, one way to achieve a 56 Gig link is to multiplex two lasers together on a multi-core fibre. That is a significantly less challenging thing to do, from a technology development point of view, than lasers fundamentally capable of 56Gbps. Is such a solution cost-optimised? Well, it is hard to say at this point, but it may be time-to-market optimised, at least for the first generation.

Multi-core fibre is one way; another is wavelength-division multiplexing. In other words, coarse WDM: making lasers at 850nm, 980nm, 1040nm - a whole bunch of different colours - and multiplexing them.

There is more than one way to achieve a total aggregate throughput.

 

Does all this make your job more interesting, more stressful, or both?

It means I have options in my job which is always a good thing.

