Briefing: Flexible elastic-bandwidth networks

Vendors and service providers are implementing the first examples of flexible, elastic-bandwidth networks. Infinera and Microsoft detailed one such network at the Layer123 Terabit Optical and Data Networking conference held earlier this year.

Optical networking expert Ioannis Tomkos of the Athens Information Technology Center explains what flexible, elastic bandwidth is.

Part 1: Flexible elastic bandwidth


"We cannot design anymore optical networks assuming that the available fibre capacity is abundant" 

Prof. Tomkos

 

 

Several developments are driving the evolution of optical networking. One is the incessant demand for bandwidth to cope with the 30+% annual growth in IP traffic. Another is the changing nature of the traffic due to new services such as video, mobile broadband and cloud computing. 

"The characteristics of traffic are changing: A higher peak-to-average ratio during the day, more symmetric traffic, and the need to support higher quality-of-service traffic than in the past," says Professor Ioannis Tomkos of the Athens Information Technology Center.

 

"The growth of internet traffic will require core network interfaces to migrate from the current 10, 40 and 100Gbps to 1 Terabit by 2018-2020"

 

Operators want a more flexible infrastructure that can adapt to meet these changes, hence their interest in flexible elastic-bandwidth networks. The operators also want to grow bandwidth as required while making best use of the fibre's spectrum. They also require more advanced control plane technology to restore the network elegantly and promptly following a fault, and to simplify the provisioning of bandwidth.  

The growth of internet traffic will require core network interfaces to migrate from the current 10, 40 and 100Gbps to 1 Terabit by 2018-2020, says Tomkos. Such bit-rates must be supported with very high spectral efficiencies, which according to the latest demonstrations are only a factor of two away from the Shannon limit. Simply put, optical fibre is rapidly approaching its maximum capacity.

"We cannot design anymore optical networks assuming that the available fibre capacity is abundant," says Tomkos. "As is the case in wireless networks where the available wireless spectrum/ bandwidth is a scarce resource, the future optical communication systems and networks should become flexible in order to accommodate more efficiently the envisioned shortage of available bandwidth.”

 


Elastic elements

Optical systems providers now realise they can no longer keep increasing a light path's data rate while expecting the signal to still fit in the standard International Telecommunication Union (ITU)-defined 50GHz band.

It may still be possible to fit a 200 Gigabit-per-second (Gbps) light path in a 50GHz channel, but not a 400Gbps or 1 Terabit signal. At 400Gbps, 80GHz is needed, and at 1 Terabit it rises to 170GHz, says Tomkos. This requires networks to move away from the fixed ITU grid to a flexible one, especially if operators want to achieve the highest possible spectral efficiency.

Vendors can increase the data rate of a carrier signal by using more advanced modulation schemes than dual-polarisation, quadrature phase-shift keying (DP-QPSK), the de facto 100Gbps standard. Such schemes add amplitude modulation to reach 16-QAM, 64-QAM and 256-QAM, but the more amplitude levels used, and hence the higher the data rate, the shorter the resulting reach.
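
To make these numbers concrete, here is a minimal back-of-envelope sketch (an illustration, not from the briefing). It assumes dual-polarisation transmission, square QAM constellations and ideal Nyquist shaping, where the minimum optical bandwidth equals the symbol rate; real signals need roll-off and guard bands on top, and Tomkos's 80GHz and 170GHz figures assume particular format choices rather than plain DP-QPSK.

```python
import math

def min_bandwidth_ghz(bit_rate_gbps, constellation_points, polarisations=2):
    """Idealised minimum optical bandwidth for a single-carrier signal.

    bits per symbol = polarisations * log2(constellation points)
    symbol rate (GBaud) = bit rate / bits per symbol
    Nyquist-limit bandwidth (GHz) ~ symbol rate (no roll-off, no guard band).
    """
    bits_per_symbol = polarisations * math.log2(constellation_points)
    return bit_rate_gbps / bits_per_symbol

for rate in (100, 400, 1000):
    bw = min_bandwidth_ghz(rate, 4)  # DP-QPSK: 4 constellation points
    verdict = "fits" if bw <= 50 else "does not fit"
    print(f"{rate}Gbps DP-QPSK needs >= {bw:.0f}GHz: {verdict} in a 50GHz slot")
```

Swapping in 16 constellation points (DP-16QAM) halves the bandwidth, bringing 400Gbps down to roughly 50GHz, which illustrates the trade-off: higher-order formats save spectrum but, as noted above, sacrifice reach.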

Another technique vendors are using to achieve 400Gbps and 1Tbps data rates is to move from a single carrier to multiple carriers, or 'super-channels'. Such an approach boosts the data rate by encoding data on more than one carrier and avoids the loss in reach associated with higher-order QAM. But this comes at a cost: using multiple carriers consumes more of the precious fibre spectrum.

As a result, vendors are looking at schemes to pack the carriers closely together. One is spectral shaping. Tomkos also details the growing interest in such schemes as optical orthogonal frequency division multiplexing (OFDM) and Nyquist WDM. For Nyquist WDM, the subcarriers are spectrally shaped so that each occupies a bandwidth close to or at the Nyquist limit, avoiding inter-symbol interference and crosstalk during transmission.

Both approaches have their pros and cons, says Tomkos, but they promise an optimum spectral efficiency of 2N bits-per-second-per-Hertz (2N bits/s/Hz), where N is the number of bits encoded per symbol, that is, the base-2 logarithm of the number of constellation points.
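
A short sketch of the Nyquist WDM idea (an illustration under stated assumptions, not a system model): with root-raised-cosine shaping, a carrier occupies symbol_rate × (1 + roll-off), so spectral efficiency approaches the 2N bits/s/Hz figure as the roll-off tends to zero.

```python
def spectral_efficiency(bits_per_symbol_per_pol, roll_off, polarisations=2):
    """Bit/s/Hz for a root-raised-cosine shaped carrier.

    Occupied bandwidth = symbol_rate * (1 + roll_off), so efficiency is
    (polarisations * bits per symbol) / (1 + roll_off); roll_off -> 0
    recovers the Nyquist-limit figure of 2N bit/s/Hz per carrier.
    """
    return polarisations * bits_per_symbol_per_pol / (1 + roll_off)

for roll_off in (0.5, 0.2, 0.05, 0.0):
    se = spectral_efficiency(2, roll_off)  # DP-QPSK: 2 bits/symbol/polarisation
    print(f"roll-off {roll_off}: {se:.2f} bit/s/Hz")
```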

The attraction of these techniques - multi-carrier schemes and advanced modulation formats - is the prospect of operators modifying capacity in a flexible and elastic way based on varying traffic demands, while maintaining cost-effective transport.

"With flexible networks, we are not just talking about the introduction of super-channels, and with it the flexible grid," says Tomkos. "We are also talking about the possibility to change either dynamically."

According to Tomkos, vendors such as Infinera with its 5x100Gbps super-channel photonic integrated circuit (PIC) are making an important first step towards flexible, elastic-bandwidth networks. But for true elastic networks, a flexible grid is needed as is the ability to change the number of carriers on-the-fly.

"Once we have those introduced, in order to get to 1 Terabit, then you can think about playing with such parameters as modulation levels and the number of carriers, to make the bandwidth really elastic, according to the connections' requirements," he says.

Meanwhile, there are still technology advances needed before an elastic-bandwidth network is achieved, such as software-defined transponders and a new advanced control plane. 

Tomkos says that operators are now using control plane technology that co-ordinates between layer three and the optical layer to reduce network restoration time from minutes to seconds. Microsoft and Infinera report having gone from tens of minutes down to a few seconds using the more advanced optical infrastructure. "They [Microsoft] are very happy with it," says Tomkos.

But to provision new capacity at the optical layer, operators are talking about requirements in the tens of minutes, a figure they do not expect to change in the coming years. "Cloud services could speed up this timeframe," says Tomkos.

"There is usually a big lag between what operators and vendors do and what academics do," says Tomkos. "But for the topic of flexible, elastic networking, the lag between academics and the vendors has become very small."

 

Further reading:

Optical transmission's era of rapid capacity growth


Optical transmission beyond 100Gbps

Briefing: High-speed optical transmission. 

Part 3: What's next?

Given the 100 Gigabit-per-second (Gbps) optical transmission market is only expected to take off from 2013, addressing what comes next seems premature. Yet operators and system vendors have been discussing just this issue for at least six months.

And while it is far too early to talk of industry consensus, all agree that optical transmission is becoming increasingly complex. As Karen Liu, vice president, components and video technologies at market research firm Ovum, observed at OFC 2010, bandwidth on the fibre is no longer plentiful.

 

“We need to keep a very close eye that we are not creating more problems than we are solving.”

Brandon Collings, JDS Uniphase.

 

As to how best to extend a fibre’s capacity beyond eighty 100Gbps dense wavelength division multiplexing (DWDM) channels spaced 50GHz apart, all options are open.

“What comes after 100Gbps is an extremely complicated question,” says Brandon Collings, CTO of JDS Uniphase’s consumer and commercial optical products division. “It smells like it will entail every aspect of network engineering.”

Ciena believes that if operators are to exploit future high-speed transmission schemes, newly architected links will be needed. The rigid networking constraints imposed on 40 and 100Gbps to operate over existing 10Gbps networks will need to be scrapped.

“It will involve a much broader consideration in the way you build optical systems,” says Joe Berthold, Ciena’s vice president of network architecture. “For the next step it is not possible [to use existing 10Gbps links]; no-one can magically make it happen.”

Lightpaths faster than 100Gbps simply cannot match the performance of current optical systems when passing through multiple reconfigurable optical add/drop multiplexer (ROADM) stages using existing amplifier chains and 50GHz channels.

Increasing traffic capacity thus implies re-architecting DWDM links. “Whatever the solution is it will have to be cheap,” says Berthold. This explains why the Optical Internetworking Forum (OIF) has already started a work group comprising operators and vendors to align objectives for line rates above 100Gbps.

If new links are put in, then changing the amplifier types and even their spacing becomes possible, as does the use of newer fibre. “If you stay with conventional EDFAs and dispersion-managed links, you will not reach ultimate performance,” says Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking.

 

Capacity-boosting techniques

Achieving higher speeds while matching the reach of current links will require a mixture of techniques. Besides redesigning the links, modulation schemes can be extended and new approaches used, such as going ‘gridless’ and exploiting sophisticated forward error-correction (FEC) schemes.

For 100Gbps, polarisation and phase modulation in the form of dual-polarisation, quadrature phase-shift keying (DP-QPSK) is used. By adding amplitude modulation, quadrature amplitude modulation (QAM) schemes can be extended to include 16-QAM, 64-QAM and even 256-QAM.

Alcatel-Lucent is one firm already exploring QAM schemes but describes improving spectral efficiency using such schemes as a law of diminishing returns. For example, 448Gbps based on 64-QAM fits in a bandwidth of 37GHz at a sampling rate of 74 Gsamples/s, but requires high-resolution A/D converters. “This is very, very challenging,” says Sam Bucci, vice president, optical portfolio management at Alcatel-Lucent.
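
Alcatel-Lucent’s figures can be reproduced with simple arithmetic; a sketch assuming dual polarisation and two ADC samples per symbol (typical values, not stated in the article):

```python
import math

bit_rate_gbps = 448
bits_per_symbol = 2 * math.log2(64)        # 2 polarisations x 6 bits (64-QAM) = 12

symbol_rate_gbaud = bit_rate_gbps / bits_per_symbol   # ~37.3 GBaud
nyquist_bw_ghz = symbol_rate_gbaud                    # ~37GHz, the quoted bandwidth
adc_rate_gsps = 2 * symbol_rate_gbaud                 # ~75 GS/s at 2 samples/symbol

print(f"{symbol_rate_gbaud:.1f} GBaud -> ~{nyquist_bw_ghz:.0f}GHz bandwidth, "
      f"~{adc_rate_gsps:.0f} GS/s ADCs")
```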

Infinera is also eyeing QAM to extend the data performance of its 10-channel photonic integrated circuits (PICs). Its roadmap goes from today’s 100Gbps to 4Tbps per PIC.

Infinera has already announced a 10x40Gbps PIC and says it can squeeze 160 such channels into the C-band using 25GHz channel spacing; 160 channels at 25GHz fill the C-band’s roughly 4THz of usable spectrum. To achieve 1 Terabit would require a 10x100Gbps PIC.

How would it get to 2Tbps and 4Tbps? “Using advanced modulation technology; climbing up the QAM ladder,” says Drew Perkins, Infinera’s CTO.

Glenn Wellbrock, director of backbone network design at Verizon Business, says the operator is already very active in exploring rates beyond 100Gbps, as any future rate will have a huge impact on the infrastructure. “No one expects ultra-long-haul at greater than 100Gbps using 16-QAM,” says Wellbrock.

Another modulation approach being considered is orthogonal frequency-division multiplexing (OFDM). “At 100Gbps, OFDM and the single-carrier approach [DP-QPSK] have the same spectral efficiency,” says Jonathan Lacey, CEO of Ofidium. “But with OFDM, it’s easy to take the next step in spectral efficiency – required for higher data rates - and it has higher tolerance to filtering and polarisation-dependent loss.”

One idea under consideration is going “gridless”: eliminating the standard ITU wavelength grid altogether, or using different-sized bands, each made up of multiples of a narrow 25GHz increment. “This is just in the discussion phase so both options are possible,” says Berthold, who estimates that a gridless approach promises up to 30 percent extra bandwidth.

Berthold favours using channel ‘quanta’ rather than adopting a fully flexible band scheme (a 37GHz window followed by a 17GHz window, for example), as the latter approach will likely reduce technology choice and lead to higher costs.
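
A hedged sketch of how channel quanta save spectrum relative to today’s fixed 50GHz grid. The demand mix and the 25GHz quantum below are invented for illustration; only the up-to-30-percent estimate comes from Berthold.

```python
import math

def spectrum_used(demand_ghz, quantum_ghz):
    """Spectrum consumed when a demand is rounded up to whole channel quanta."""
    return math.ceil(demand_ghz / quantum_ghz) * quantum_ghz

# Hypothetical per-signal bandwidth demands on one fibre, in GHz.
demands = [37, 17, 25, 75, 37, 50]

fixed = sum(spectrum_used(d, 50) for d in demands)   # today's 50GHz grid
quanta = sum(spectrum_used(d, 25) for d in demands)  # 25GHz quanta
print(f"50GHz grid: {fixed}GHz, 25GHz quanta: {quanta}GHz, "
      f"saving {100 * (1 - quanta / fixed):.0f}%")
```

The exact saving depends on the traffic mix; finer quanta waste less of each slot but, as Collings warns below, multiply the engineering questions.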

Wellbrock says coarse filtering would be needed using a gridless approach, as capturing the complete C-band would be too noisy. A band 5 or 6 channels wide would be grabbed and the signal of interest recovered by tuning to the desired spectrum using a coherent receiver’s tunable laser, similar to how a radio receiver works.

Wellbrock says considerable technical progress is needed for the scheme to achieve a reach of 1500km or greater.

 

“Whatever the solution is it will have to be cheap”

Joe Berthold, Ciena.

 

JDS Uniphase’s Collings sounds a cautionary note about going gridless. “50GHz is nailed down – the number of questions asked that need to be addressed once you go gridless balloons,” he says. “This is very complex; we need to keep a very close eye that we are not creating more problems than we are solving.”

“Operators such as AT&T and Verizon have invested heavily in 50GHz ROADMs, they are not just going to ditch them,” adds Chris Clarke, vice president strategy and chief engineer at Oclaro. 

More powerful FEC schemes and in particular soft-decision FEC (SD-FEC) will also benefit optical performance for data rates above 100Gbps. SD-FEC delivers up to a 1.3dB coding gain improvement compared to traditional FEC schemes at 100Gbps. 

SD-FEC also paves the way for performing joint iterative FEC decoding and signal equalisation at the coherent receiver, promising further performance improvements, albeit at the expense of a more complex digital signal processor design.
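
As a rough illustration of what the extra coding gain buys (a back-of-envelope sketch assuming a link limited purely by accumulated amplifier noise, so that the tolerable noise, and hence the span count, scales directly with coding gain; nonlinear effects will reduce this in practice):

```python
extra_coding_gain_db = 1.3                        # SD-FEC advantage quoted above
reach_factor = 10 ** (extra_coding_gain_db / 10)  # linear noise-budget scaling
print(f"~{100 * (reach_factor - 1):.0f}% more reach, "
      f"e.g. 1,000km -> ~{1000 * reach_factor:.0f}km under these assumptions")
```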

 

400Gbps or 1 Tbps?

Even the question of what the next data rate after 100Gbps will be – 200Gbps, 400Gbps or even 1 Terabit-per-second – remains unresolved.

Verizon Business will deploy new 100Gbps coherent-optimised routes from 2011 and would like as much clarity as possible so that such routes are future-proofed.  But Collings points out that this is not something that will stop a carrier addressing immediate requirements. “Do they make hard choices that will give something up today?” he says.

At the OFC Executive Forum, Verizon Business expressed a preference for 1Tbps lightpaths. While 400Gbps was a safe bet, going straight to 1Tbps would enable the industry to skip an intermediate stage, i.e. 400Gbps. But Verizon recognises that backing 1Tbps depends on when such technology would be available and at what cost.

According to BT, speeds such as 200Gbps, 400Gbps and even 1Tbps are all being considered. “The 200/400Gbps systems may happen using multiple QAM modulation,” says Russell Davey, core transport Layer 1 design manager at BT. “Some work is already being done at 1Tbps per wavelength although an alternative might be groups or bands of wavelengths carrying a continuous 1Tbps channel, such as ten 100Gbps wavelengths or five 200Gbps wavelengths.”

Davey stresses that the industry shouldn’t assume that bit rates will continue to climb. Multiple wavelengths at lower bitrates or even multiple fibres for short distances will continue to have a role.  “We see it as a mixed economy – the different technologies likely to have a role in different parts of network,” says Davey.

Niall Robinson, vice president of product marketing at Mintera, is confident that 400Gbps will be the chosen rate.

Traditionally, Ethernet has grown in ten-fold steps while SONET/SDH has grown in four-fold increments. However, now that Ethernet is a line-side technology, there is no reason to expect the faster growth rate to continue, he says. “Every five years the line rate has increased four-fold; it has been that way for a long time,” says Robinson. “100Gbps will start in 2012/2013 and 400Gbps in 2017.”

“There is a lot of momentum for 400Gbps but we’ll have a better idea in six months’ time,” says Matt Traverso, senior manager, technical marketing at Opnext. “The IEEE [and its choice for the next Gigabit Ethernet speed after 100GbE] will be the final arbiter.”

 

Software defined optics and cognitive optics

Optical transmission could ultimately borrow two concepts already being embraced by the wireless world: software defined radio (SDR) and cognitive radio.

SDR refers to how a system can be reconfigured in software to implement the most suitable radio protocol. In optical transmission it would mean making the transmitter and receiver software-programmable so that various transmission schemes, data rates and wavelength ranges could be used. “You would set up the optical transmitter and receiver to make best use of the available bandwidth,” says ADVA Optical Networking’s Elbers.

This is an idea also highlighted by Nokia Siemens Networks: trading capacity against reach by modifying the amount of information placed on a carrier.

“For a certain frequency you can put either one bit [of information] or several,” says Oliver Jahreis, head of product line management, DWDM at Nokia Siemens Networks. “If you want more capacity you put more information on a frequency but at a lower signal-to-noise ratio and you can’t go as far.”

Using ‘cognitive optics’, the approach would be chosen by the optical system itself, selecting the best transmission scheme depending on the capacity, distance and performance constraints as well as the other lightpaths on the fibre. “You would get rid of fixed wavelengths and bit rates altogether,” says Elbers.
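
A toy sketch of the choice such a ‘cognitive’ transponder might make. The format table and reach figures below are invented for illustration; only the general pattern, higher-order QAM trading reach for capacity, comes from the article.

```python
# Hypothetical (format, bits per symbol per polarisation, max reach) table.
# Reach figures are illustrative assumptions, not vendor specifications.
FORMATS = [
    ("DP-64QAM", 6, 300),    # densest, shortest reach
    ("DP-16QAM", 4, 800),
    ("DP-QPSK",  2, 2500),   # sparsest, longest reach
]

def pick_format(distance_km, symbol_rate_gbaud=32):
    """Choose the densest format whose reach covers the lightpath."""
    for name, bits, reach_km in FORMATS:
        if distance_km <= reach_km:
            capacity_gbps = 2 * bits * symbol_rate_gbaud  # two polarisations
            return name, capacity_gbps
    raise ValueError("no format reaches this distance without regeneration")

for d in (200, 600, 2000):
    name, capacity = pick_format(d)
    print(f"{d}km lightpath -> {name}, ~{capacity:.0f}Gbps per carrier")
```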

 

Market realities

Ovum’s view is that it remains too early to call the next rate following 100Gbps.

Other analysts agree. “Gridless is interesting stuff but from a commercial standpoint it is not relevant at this time,” says Andrew Schmitt, directing analyst, optical at Infonetics Research.

Given that market research firms look five years ahead and the next speed hike is only expected from 2017, such a stance is understandable.

Optical module makers highlight the huge amount of work still to be done. There is also a concern that the benefit of corralling the industry around coherent DP-QPSK at 100Gbps, which avoided a repeat of the mistakes made at 40Gbps, will be undone at future data rates given the range of options available.

Even if the industry were to align on a common option, developing the technology at the right price point would be highly challenging.

“Many people in the early days of 100Gbps – in 2007 – said: ‘We need 100Gbps now – if I had it I’d buy it’,” says Rafik Ward, vice president of marketing at Finisar. “There should be a lot of pent-up demand [now].” The reason there isn’t, says Ward, is that such end users always miss out the key wording at the end: “If I had it I’d buy it – at the right price.”

 
