How scaling optical networks is soon to change

Carrier division multiplexing and spatial division multiplexing (CSDM) are both needed, argues Lumentum’s Brian Smith.

The era of coherent-based optical transmission as implemented today is coming to an end, argues Lumentum in a white paper.

Brian Smith

The author of the paper, Brian Smith, product and technology strategy, CTO Office at Lumentum, says two factors account for the looming change.

One is Shannon’s limit that defines how much information can be sent across a communications channel, in this case an optical fibre.
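For reference, the textbook form of the Shannon limit for a channel of bandwidth B and signal-to-noise ratio SNR is:

\[
C = B \log_2(1 + \mathrm{SNR})
\]

Once the achievable SNR over a given fibre span is fixed, carrying more bits ultimately means using more bandwidth.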

The second, less discussed regarding coherent-based optical transport, is how Moore’s law is slowing down.

“Both are happening coincidentally,” says Smith. “We believe what that means is that we, as an industry, are going to have to change how we scale capacity.”

 

Accommodating traffic growth

A common view in telecoms, based on years of reporting, is that internet traffic is growing 30 per cent annually. During AT&T’s last quarterly report of 2023, the company’s CEO cited over 30 per cent annual traffic growth in its network over the previous three years.

Smith says that data on the rate of traffic growth is limited. He points to a 2023 study by market research firm TeleGeography that shows traffic growth is dependent on region, ranging from 25 to 45 per cent CAGR.

Since the deployment of the first optical networking systems using coherent transmission in 2010, almost all networking capacity growth has been achieved in the C-band of a fibre, which comprises approximately 5 terahertz (THz) of spectrum.

Cramming more data into the C-band has been achieved by increasing the symbol rate used to transmit data and the order of the modulation scheme used by the coherent transceivers, says Smith.
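As a rough illustration of how those two knobs combine, the sketch below multiplies symbol rate, bits per symbol and the two polarisations of a coherent carrier; the 15 per cent overhead figure and the example parameters are assumptions for illustration, not any vendor’s specification.

```python
# Back-of-the-envelope: how symbol rate and modulation order set a
# coherent wavelength's data rate. Figures are illustrative only.

def wavelength_rate_gbps(symbol_rate_gbd, bits_per_symbol, overhead=0.15):
    """Approximate net rate of a dual-polarisation coherent carrier.

    Assumes two polarisations and a nominal 15% FEC/framing overhead
    (an assumption for illustration).
    """
    raw = symbol_rate_gbd * bits_per_symbol * 2  # two polarisations
    return raw / (1 + overhead)

# e.g. ~130 GBd with 16-QAM (4 bits/symbol) lands in the 800G class:
print(round(wavelength_rate_gbps(130, 4)))  # ~904 Gb/s net
```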

The current coherent era – labelled the 5th on the chart – is coming to an end. Source: Lumentum.

Pushing up baud rate

Because the Shannon limit is being approached, only marginal gains remain from squeezing more data into the C-band, which means more spectrum is required. In turn, the channel bandwidth occupied by an optical wavelength now grows with the baud rate, so while each wavelength carries more data, the capacity limit within the C-band has largely been reached.

Current systems use a symbol rate of 130-150 gigabaud (GBd). Later this year, Ciena will introduce its 200GBd WaveLogic 6e coherent modem, while the industry has started work on developing next-generation 240-280GBd systems.

Reconfigurable optical add-drop multiplexers (ROADMs) have had to become ‘flexible’ in the last decade to accommodate changing channel widths. For example, a 400-gigabit wavelength fits in a 75GHz channel while an 800-gigabit wavelength fits within a 150GHz channel.
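The arithmetic behind those channel widths shows why: the channel widens in step with the data rate, so the spectral efficiency stays roughly constant while the flexible grid simply allocates more 12.5GHz slots per wavelength. A quick illustration:

```python
# Spectral efficiency of the flexible-grid examples above, counting
# standard 12.5GHz flex-grid slots per channel.
for rate_gbps, width_ghz in [(400, 75), (800, 150)]:
    slots = width_ghz / 12.5
    print(f"{rate_gbps}G in {width_ghz}GHz -> "
          f"{rate_gbps / width_ghz:.2f} b/s/Hz, {slots:.0f} slots")
# Both work out to ~5.33 b/s/Hz; only the channel width changes.
```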

Another consequence of Shannon’s limit is that the transmission distance for a given modulation scheme has reached its ceiling. Using 16-ary quadrature amplitude modulation (16-QAM), the reach ranges from 800km to 1,200km. Doubling the baud rate doubles the data rate per wavelength, but the link span remains fixed.

“There is a fundamental limit to the maximum reach that you can achieve with that modulation scheme because of the Shannon limit,” says Smith.

At the recent OFC show held in March in San Diego, a workshop discussed whether a capacity crunch was looming.

The session’s consensus was that, despite the challenges associated with the latest OIF 1600ZR and ZR+ standards, which promise to send 1.6 terabits of data on a single wavelength, the industry is confident that it will meet the OIF’s 240-280+ GBd symbol rates.

However, in the discussion about the next generation of baud rates, 400-500GBd, the view was that while such rates look feasible, it is unclear how they will be achieved. The aim is always to double the baud rate because the increase must be meaningful.

“If the industry can continue to push the baud rate, and get the cost-per-bit, power-per-bit, and performance required, that would be ideal,” says Smith.

But this is where the challenge of Moore’s law slowing down comes in. Achieving 240GBd and beyond will require a coherent digital signal processor (DSP) made using at least a 3nm CMOS process. Beyond this, transistors start to approach atomic scale and their performance becomes less deterministic. Moreover, the development costs of advanced CMOS processes – 3nm, 2nm and beyond – are growing exponentially.

Beyond 240GBd, it is also going to become more challenging to achieve the higher analogue bandwidths needed for the electronic and optical components in a coherent modem, says Smith. How the components are packaged is key: there is no point in optimising the analogue bandwidth of each component only for the modem’s performance to degrade due to packaging. “These are massive challenges,” says Smith.

This explains why the industry is starting to think about alternatives to increasing baud rate, such as moving to parallel carriers. Here a coherent modem would achieve a higher data rate by implementing multiple wavelengths per channel.

Lumentum refers to this approach as carrier division multiplexing.
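A minimal sketch of the arithmetic, with hypothetical carrier counts and rates, shows the appeal: the modem’s headline rate becomes a multiple of a per-carrier rate the electronics can comfortably support.

```python
# Carrier division multiplexing in one line of arithmetic: the modem's
# headline rate is the sum of its carriers. Figures are hypothetical.
carriers, per_carrier_gbps = 2, 1600   # e.g. two 1.6-terabit-class carriers
print(carriers * per_carrier_gbps, "Gb/s modem")  # 3200 Gb/s
```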

 

Capacity scaling

The coherent modem, while key to optical transport systems, is only part of the scaling capacity story.

Prior to coherent optics, capacity growth was achieved by adding more and more wavelengths in the C-band. But with the advent of coherent DSPs compensating for chromatic and polarisation mode dispersion, suddenly baud rate could be increased.

“We’re starting to see the need, again, for growing spectrum,” says Smith. “But now, we’re growing spectrum outside the C-band.”

The first sign of this is optical transport systems adding the L-band alongside the C-band, doubling a fibre’s usable spectrum from 5THz to 10THz.

“The question we ask ourselves is: what happens once the C and L bands are exhausted?” says Smith.

Lumentum’s belief is that spatial division multiplexing will be needed to scale capacity further, starting with multiple fibre pairs. The challenge will be how to build systems so that costs don’t scale linearly with each added fibre pair.

There are already twin wavelength selective switches used in ROADMs for the C- and L-bands. Lumentum is taking a first step in functional integration by combining the C- and L-bands in a single wavelength selective switch module, says Smith. “And we need to keep doing functional integration when we move to this new generation where spatial division multiplexing is going to be the approach.”

Another consideration is that, with higher baud-rate wavelengths, there will be far fewer channels per fibre. And with growing numbers of fibre pairs per route, that suggests a future need for fibre-switched networking, not just the wavelength-switched networking used today.
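A rough sense of the squeeze, assuming a ~5THz C-band and illustrative channel widths (the 300GHz entry is a hypothetical wider future channel, not a quoted figure):

```python
# Channels per C-band falls as the per-wavelength channel width grows.
c_band_ghz = 5000  # ~5THz C-band
for width_ghz in (75, 150, 300):
    print(f"{width_ghz}GHz channels -> ~{c_band_ghz // width_ghz} per C-band")
# 75GHz -> ~66 channels, 150GHz -> ~33, 300GHz -> ~16
```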

“Looking into the future, you may find that your individual routeable capacity is closer to a full C-band,” says Smith.

Will carrier division multiplexing happen before spatial division multiplexing?

Smith says that spatial division multiplexing will likely be first because Shannon’s limit is fundamental, and the industry is motivated to keep pushing Moore’s law and baud rate.

“With Shannon’s limit and with the expansion from C-band to C+L Band, if you’re growing at that nominal 30 per cent a year, a single fibre’s capacity will exhaust in two to three years’ time,” says Smith. “This is likely the first exhaust point.”
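Smith’s two-to-three-year figure follows from simple compounding: doubling the spectrum buys roughly log(2)/log(1.3) years at 30 per cent annual growth.

```python
import math

# Years for traffic growing ~30% a year to fill a doubled (C -> C+L) spectrum.
growth = 0.30
print(f"{math.log(2) / math.log(1 + growth):.1f} years")  # ~2.6 years
```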

Meanwhile, even with carrier division multiplexing and the first parallel coherent modems after 240GBd, advances in baud rate will not stop. The jumps may be smaller than the doublings the industry is used to, and they will continue for several years yet. But they will still be worth having.


Xtera demonstrates 40 Terabit using Raman amplification

Feature: 100 Gig and Beyond. Part 2

  • Xtera's Raman amplification boosts capacity and reach
  • 40 Terabit optical transmission over 1,500km in Verizon trial
  • 64 Terabit over 1,500km in 2015 using a Raman module operating over 100nm of spectrum  

 

Herve Fevrier

Optical transport equipment makers continue to research techniques to increase the data carried long distances over a fibre without sacrificing reach. The techniques include signal processing of the transmit signal to cram data-carrying channels closer together in the C-band, advanced soft-decision forward error correction (SD-FEC), and receiver signal processing to counter transmission impairments.
 

System vendor Xtera is using all these techniques as part of its Nu-Wave Optima system but also uses Raman amplification to extend capacity and reach.

Raman amplification can more than treble the available fibre spectrum used for transmission while doubling the reach, claims Xtera. In a trial with US operator Verizon, Xtera demonstrated its Nu-Wave Optima platform using Raman amplification to send 15 Terabit over 4,500km, and 40 Terabit over 1,500km.
 

"We offer capacity and reach using a technology - Raman amplification - that we have been pioneering and working on for 15 years," says Herve Fevrier, executive vice president and chief strategy officer at Xtera.

 
EDFA and Raman
 
An Erbium-doped fibre amplifier (EDFA), long established as the amplifier of choice for wavelength division multiplexing (WDM) deployments, has superior power efficiency compared to Raman. Power efficiency refers to the optical pump power needed to pump an amplifier to achieve a certain gain, explains Fevrier. One way to improve the Raman's power efficiency is to use a high-efficiency fibre. However, unless an operator is deploying new fibre, the existing less-efficient fibre plant will be used as the Raman gain medium. Yet while the Raman amplifier has a lower power efficiency compared to an EDFA, it does deliver significant benefits at the system level.
 
An EDFA spans the C-band, the 35nm-wide spectrum covering 1530-1565nm. Raman can amplify a much wider 100nm window, covering more than the C-band and L-band (1570-1605nm) combined. This enables the transmission of many more wavelengths across a fibre, delivering the threefold increase in capacity Xtera mentions. And while operators can also deploy EDFAs in the L-band, a different design is needed compared to the C-band EDFA, such that an operator must stock two EDFA types.
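The threefold figure follows directly from the spectrum ratio:

```python
# Raman's ~100nm window versus the ~35nm C-band an EDFA covers.
raman_window_nm, edfa_c_band_nm = 100, 35
print(f"{raman_window_nm / edfa_c_band_nm:.1f}x the spectrum")  # ~2.9x
```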
 
An EDFA performs lumped amplification, restoring the signal at distinct points in the network 80km apart. In contrast, Raman boosts the signal as it travels down the fibre. The smoother amplification of Raman also means the power needed per channel is lower, says Fevrier, increasing the margin before non-linear effects are triggered due to the optical signal's power.
 
The distributed amplification profile of Raman (blue) compared to an EDFA's point amplification. Source: Xtera
 

One way vendors are improving the amplification for 100 Gigabit and greater deployments is to use a hybrid EDFA/Raman design. This benefits the amplifier's power efficiency and the overall transmission reach, but the spectrum width is still dictated by erbium to around 35nm. "And Raman only helps you have spans which are a bit longer," says Fevrier.

"100-Gigabit-plus will need Raman and people are using hybrids," he says. Instead, operators should consider deploying all-Raman for their future high-speed networks. "Go for it in terms of line system and then you will triple capacity." says Fevrier.  
 
 
Verizon trial
 
Xtera conducted several trials with Verizon using the Nu-Wave Optima with the Raman amplification operating over a 61nm window.
 
The first test sent 15 Terabit over 4,500km using 150 channels, each a 100 Gig polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK) signal at 50GHz channel spacing. The second trial demonstrated 400 Gigabit super-channels comprising four 100 Gig PM-QPSK signals spaced 33GHz apart; the resulting 20 Terabit capacity achieved a 3,000km reach. The final trial used a 400 Gig super-carrier based on two 200 Gig polarisation-multiplexed, 16-ary quadrature amplitude modulation (PM-16-QAM) signals. Using 50GHz spacing, a total of 30 Terabit was sent over 2,000km. Moving to a 37.5GHz channel spacing, capacity rose to 40 Terabit and a distance of 1,500km was achieved.
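A quick sanity check of the first trial's figures, using only the channel count and spacing stated above:

```python
# 150 channels of 100 Gig at 50GHz spacing inside the ~61nm Raman window.
channels, per_channel_gbps, spacing_ghz = 150, 100, 50
capacity_tbps = channels * per_channel_gbps / 1000
occupied_thz = channels * spacing_ghz / 1000
print(capacity_tbps, "Tb/s over", occupied_thz, "THz")  # 15.0 Tb/s over 7.5 THz
# ~7.5THz is roughly 60nm around 1550nm, consistent with the 61nm window.
```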
 
"People [at Verizon] were amazed that our system was linear," says Fevrier. "The non-linear penalty is extremely small."
 
Xtera expects that with a full 100nm spectral window, 48 Terabits could be sent over 2,000km and 64 Terabits over 1,500km. "To go to 100nm we need to be cost effective such that the cost of a 100nm system would match what the 62nm one is today," says Fevrier.
 
The vendor's message to operators planning 100-Gig-plus deployments is that spectrum should be a key part of their considerations. "When you think of the investment you are doing, turning up a new system, I think it is really time to think of spectrum."
 

Meanwhile, Xtera is working on programmable cards that will support the various transmission options. Xtera will offer a 100nm amplifier module this year that extends its system capacity to 24 Terabit (240 x 100 Gig channels). Also planned this year is a super-channel PM-QPSK implementation that will extend transmissions to 32 Terabit using the 100nm amplifier module. In 2015 Xtera will offer PM-16-QAM that will deliver the 48 Terabit over 2,000km and the 64 Terabit over 1,500km.

 



The great data rate-reach-capacity tradeoff

Source: Gazettabyte

Optical transmission technology is starting to bump into fundamental limits, resulting in a three-way tradeoff between data rate, reach and channel bandwidth. So says Brandon Collings, JDS Uniphase's CTO for communications and commercial optical products. See the recent Q&A.

This tradeoff will impact the coming transmission speeds of 200 and 400 Gigabit-per-second and 1 Terabit-per-second. For each increase in data rate, either the channel bandwidth must increase or the reach must decrease, or both, says Collings.

Thus a 200Gbps light path can be squeezed into a 50GHz channel in the C-band, but its reach will not match that of 100Gbps over a 50GHz channel (shown on the graph with a hashed line). A wider version of 200Gbps could match the reach of the 100Gbps signal, but that would probably need a 75GHz channel, says Collings.

For 400Gbps, the same situation arises suggesting two possible approaches: 400Gbps fitting in a 75GHz channel but with limited reach (for metro) or a 400Gbps signal placed within a 125GHz channel to match the reach of 100Gbps over a 50GHz channel.


"Continue this argument for 1 Terabit as well," says Collings. Here the industry consensus suggests a 200GHz-wide channel will be needed.

Similarly, within this compromise, other options are available such as 400Gbps over a 50GHz channel. But this would have a very limited reach.
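Tabulating the channel plans Collings describes makes the compromise visible as spectral efficiency: the options that demand more bits per second per hertz are the ones with the shorter reach. The snippet below simply restates the figures quoted above.

```python
# Data rate versus channel width for the options discussed; higher
# b/s/Hz generally means shorter reach.
options = [(100, 50), (200, 50), (200, 75), (400, 50), (400, 75), (400, 125), (1000, 200)]
for rate_gbps, width_ghz in options:
    print(f"{rate_gbps}Gbps in {width_ghz}GHz -> {rate_gbps / width_ghz:.1f} b/s/Hz")
```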

Collings does not dismiss the possibility of a technology development which would break this fundamental compromise, but at present this is the situation.

As a result there will likely be multiple formats hitting the market which align the reach needed with the minimised channel bandwidth, says Collings.

 

