Alcatel-Lucent demos dual-carrier Terabit transmission
"Without [photonic] integration you are doubling up your expensive opto-electronic components which doesn't scale"
Peter Winzer, Alcatel-Lucent's Bell Labs
Part 1: Terabit optical transmission
Alcatel-Lucent's research arm, Bell Labs, has used high-speed electronics to achieve one-Terabit long-haul optical transmission using only two carriers.
Several system vendors, including Alcatel-Lucent, have demonstrated one-Terabit transmission, but the company is claiming an industry first in using only two multiplexed carriers. In 2009, Alcatel-Lucent's first Terabit optical transmission used 24 sub-carriers.
"There is a tradeoff between the speed of electronics and the number of optical modulators and detectors you need," says Peter Winzer, director of optical transmission systems and networks research at Bell Labs. "In general it will be much cheaper doing it with fewer carriers at higher electronics speeds than doing it at a lower speed with many more carriers."
What has been done
In the lab-based demonstration, Bell Labs sent five 1 Terabit-per-second (Tbps) signals over an equivalent distance of 3,200km. Each signal uses dual-polarisation 16-QAM (quadrature amplitude modulation) across its two carriers to achieve a raw rate of 1.28Tbps. Each carrier thus holds 640Gbps: some 500Gbps of data, with the rest forward error correction (FEC) bits.
In current 100Gbps systems, dual-polarisation, quadrature phase-shift keying (DP-QPSK) modulation is used. Going from QPSK to 16-QAM doubles the bit rate. Bell Labs has also increased the symbol rate from some 30Gbaud to 80Gbaud using state-of-the-art high-speed electronics developed at Alcatel Thales III-V Lab.
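The per-carrier rate follows directly from these figures; a minimal sanity check of the arithmetic in Python (numbers taken from the article):

```python
import math

# Per-carrier bit rate for dual-polarisation 16-QAM at 80 Gbaud.
symbol_rate_gbaud = 80           # symbol rate per polarisation, in Gbaud
bits_per_symbol = math.log2(16)  # 16-QAM carries 4 bits per symbol
polarisations = 2                # dual polarisation doubles the rate

per_carrier_gbps = symbol_rate_gbaud * bits_per_symbol * polarisations
print(per_carrier_gbps)      # 640.0 Gbps per carrier
print(per_carrier_gbps * 2)  # 1280.0 Gbps raw across the two carriers
```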
"To achieve these rates, you need special high-speed components - multiplexers - and also high-speed multi-level devices," says Winzer. These are indium phosphide components, not CMOS, and hence will not appear in commercial products for several years yet. "These things are realistic [in CMOS], just not for immediate product implementation," says Winzer.
Each carrier occupies 100GHz of channel bandwidth, equating to 200GHz overall, or a spectral efficiency of 5.2b/s/Hz. Current state-of-the-art 100Gbps systems use 50GHz channels, achieving 2b/s/Hz.
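As a quick check of the spectral-efficiency figures, a minimal sketch; note the 5.2b/s/Hz quoted above appears to reflect the demo's net payload accounting rather than the 1.28Tbps raw rate:

```python
# Spectral efficiency is simply line rate divided by occupied bandwidth.
def spectral_efficiency(rate_gbps, bandwidth_ghz):
    """Return spectral efficiency in b/s/Hz."""
    return rate_gbps / bandwidth_ghz

# Today's 100Gbps DP-QPSK in a 50GHz channel, as cited in the article:
print(spectral_efficiency(100, 50))    # 2.0 b/s/Hz

# The two-carrier demo: roughly 1Tbps of net payload in 2 x 100GHz:
print(spectral_efficiency(1000, 200))  # 5.0 b/s/Hz, close to the quoted 5.2
```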
The 3,200km reach using 16-QAM technology is achieved in the lab, using good fibre and without any commercial product margins, says Winzer. Adding commercial product margins would reduce the optical link budget by 2-3dB and hence the overall reach.
Winzer says the one Terabit demonstration uses all the technologies employed in Alcatel-Lucent's photonic service engine (PSE) ASIC although the algorithms and soft-decision FEC used are more advanced, as expected in an R&D trial.
Before such one Terabit systems become commercial, progress in photonic integration will be needed as well as advances in CMOS process technology.
"Progress in photonic integration is needed to get opto-electronic costs down as it [one Terabit] is still going to need two-to-four sub-carriers," he says. A balance between parallelism and speed needs to be struck, and parallelism is best achieved using integration. "Without integration you are doubling up your expensive opto-electronic components which doesn't scale," says Winzer.
Optical transmission beyond 100Gbps
Part 3: What's next?
Given the 100 Gigabit-per-second (Gbps) optical transmission market is only expected to take off from 2013, addressing what comes next seems premature. Yet operators and system vendors have been discussing just this issue for at least six months.
And while it is far too early to talk of industry consensus, all agree that optical transmission is becoming increasingly complex. As Karen Liu, vice president, components and video technologies at market research firm Ovum, observed at OFC 2010, bandwidth on the fibre is no longer plentiful.
“We need to keep a very close eye that we are not creating more problems than we are solving.”
Brandon Collings, JDS Uniphase.
As to how best to extend a fibre’s capacity beyond eighty 100Gbps dense wavelength division multiplexing (DWDM) channels spaced 50GHz apart, all options are open.
“What comes after 100Gbps is an extremely complicated question,” says Brandon Collings, CTO of JDS Uniphase’s consumer and commercial optical products division. “It smells like it will entail every aspect of network engineering.”
Ciena believes that if operators are to exploit future high-speed transmission schemes, newly architected links will be needed. The rigid networking constraint imposed on 40 and 100Gbps, that they operate over existing 10Gbps networks, will need to be scrapped.
“It will involve a much broader consideration in the way you build optical systems,” says Joe Berthold, Ciena’s vice president of network architecture. “For the next step it is not possible [to use existing 10Gbps links]; no-one can magically make it happen.”
Lightpaths faster than 100Gbps simply cannot match the performance of current optical systems when passing through multiple reconfigurable optical add/drop multiplexer (ROADM) stages using existing amplifier chains and 50GHz channels.
Increasing traffic capacity thus implies re-architecting DWDM links. “Whatever the solution is it will have to be cheap,” says Berthold. This explains why the Optical Internetworking Forum (OIF) has already started a work group comprising operators and vendors to align objectives for line rates above 100Gbps.
If new links are put in, then changing the amplifier types and even their spacing becomes possible, as is the use of newer fibre. “If you stay with conventional EDFAs and dispersion managed links, you will not reach ultimate performance,” says Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking.
Capacity-boosting techniques
Achieving higher speeds while matching the reach of current links will require a mixture of techniques. Besides redesigning the links, modulation schemes can be extended and new approaches used, such as going ‘gridless’ and exploiting sophisticated forward error-correction (FEC) schemes.
For 100Gbps, polarisation and phase modulation in the form of dual-polarisation, quadrature phase-shift keying (DP-QPSK) is used. By adding amplitude modulation, quadrature amplitude modulation (QAM) schemes can be extended to include 16-QAM, 64-QAM and even 256-QAM.
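The ‘QAM ladder’ gains two bits per symbol with each four-fold jump in constellation size, even as the required signal-to-noise ratio keeps climbing, which is one reason returns diminish. A quick illustration:

```python
import math

# Bits per symbol for each QAM order: log2(M).
# QPSK is equivalent to 4-QAM.
bits = {m: int(math.log2(m)) for m in (4, 16, 64, 256)}
print(bits)  # {4: 2, 16: 4, 64: 6, 256: 8}
```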
Alcatel-Lucent is one firm already exploring QAM schemes but describes improving spectral efficiency using such schemes as a law of diminishing returns. For example, a 448Gbps signal based on 64-QAM occupies a bandwidth of 37GHz at a sampling rate of 74 Gsamples/s, but requires the use of high-resolution A/D converters. “This is very, very challenging,” says Sam Bucci, vice president, optical portfolio management at Alcatel-Lucent.
Infinera is also eyeing QAM to extend the data performance of its 10-channel photonic integrated circuits (PICs). Its roadmap goes from today’s 100Gbps to 4Tbps per PIC.
Infinera has already announced a 10x40Gbps PIC and says it can squeeze 160 such channels in the C-band using 25GHz channel spacing. To achieve 1 Terabit would require a 10x100Gbps PIC.
How would it get to 2Tbps and 4Tbps? “Using advanced modulation technology; climbing up the QAM ladder,” says Drew Perkins, Infinera’s CTO.
Glenn Wellbrock, director of backbone network design at Verizon Business, says the operator is already very active in exploring rates beyond 100Gbps, as any future rate will have a huge impact on the infrastructure. “No one expects ultra-long-haul at greater than 100Gbps using 16-QAM,” says Wellbrock.
Another modulation approach being considered is orthogonal frequency-division multiplexing (OFDM). “At 100Gbps, OFDM and the single-carrier approach [DP-QPSK] have the same spectral efficiency,” says Jonathan Lacey, CEO of Ofidium. “But with OFDM, it’s easy to take the next step in spectral efficiency – required for higher data rates - and it has higher tolerance to filtering and polarisation-dependent loss.”
One idea under consideration is going “gridless”, eliminating the standard ITU wavelength grid altogether or using different-sized bands, each made up of multiples of a narrow 25GHz increment. “This is just in the discussion phase so both options are possible,” says Berthold, who estimates that a gridless approach promises up to 30 percent extra bandwidth.
Berthold favours using channel ‘quanta’ rather than adopting a fully flexible band scheme - using a 37GHz window followed by a 17GHz window, for example - as the latter approach will likely reduce technology choice and lead to higher costs.
Wellbrock says coarse filtering would be needed using a gridless approach as capturing the complete C-Band would be too noisy. A band 5 or 6 channels wide would be grabbed and the signal of interest recovered by tuning to the desired spectrum using a coherent receiver’s tunable laser, similar to how a radio receiver works.
Wellbrock says considerable technical progress is needed for the scheme to achieve a reach of 1500km or greater.
“Whatever the solution is it will have to be cheap”
Joe Berthold, Ciena.
JDS Uniphase’s Collings sounds a cautionary note about going gridless. “50GHz is nailed down – the number of questions asked that need to be addressed once you go gridless balloons,” he says. “This is very complex; we need to keep a very close eye that we are not creating more problems than we are solving.”
“Operators such as AT&T and Verizon have invested heavily in 50GHz ROADMs; they are not just going to ditch them,” adds Chris Clarke, vice president strategy and chief engineer at Oclaro.
More powerful FEC schemes and in particular soft-decision FEC (SD-FEC) will also benefit optical performance for data rates above 100Gbps. SD-FEC delivers up to a 1.3dB coding gain improvement compared to traditional FEC schemes at 100Gbps.
SD-FEC also paves the way for performing joint iterative FEC decoding and signal equalisation at the coherent receiver, promising further performance improvements, albeit at the expense of a more complex digital signal processor design.
400Gbps or 1 Tbps?
Even the question of what the next data rate after 100Gbps will be – 200Gbps, 400Gbps or even 1 Terabit-per-second – remains unresolved.
Verizon Business will deploy new 100Gbps coherent-optimised routes from 2011 and would like as much clarity as possible so that such routes are future-proofed. But Collings points out that this is not something that will stop a carrier addressing immediate requirements. “Do they make hard choices that will give something up today?” he says.
At the OFC Executive Forum, Verizon Business expressed a preference for 1Tbps lightpaths. While 400Gbps was a safe bet, going straight to 1Tbps would mean skipping the intermediate 400Gbps stage. But Verizon recognises that backing 1Tbps depends on when such technology would be available and at what cost.
According to BT, speeds such as 200Gbps, 400Gbps and even 1Tbps are all being considered. “The 200/400Gbps systems may happen using multiple QAM modulation,” says Russell Davey, core transport Layer 1 design manager at BT. “Some work is already being done at 1Tbps per wavelength, although an alternative might be groups or bands of wavelengths carrying a continuous 1Tbps channel, such as ten 100Gbps wavelengths or five 200Gbps wavelengths.”
Davey stresses that the industry shouldn’t assume that bit rates will continue to climb. Multiple wavelengths at lower bit rates or even multiple fibres for short distances will continue to have a role. “We see it as a mixed economy – the different technologies likely to have a role in different parts of the network,” says Davey.
Niall Robinson, vice president of product marketing at Mintera, is confident that 400Gbps will be the chosen rate.
Traditionally, Ethernet has grown at 10x rates while SONET/SDH has grown in four-fold increments. However, now that Ethernet is a line-side technology, there is no reason to expect the faster growth rate to continue, he says. “Every five years the line rate has increased four-fold; it has been that way for a long time,” says Robinson. “100Gbps will start in 2012/2013 and 400Gbps in 2017.”
“There is a lot of momentum for 400Gbps but we’ll have a better idea in six months’ time,” says Matt Traverso, senior manager, technical marketing at Opnext. “The IEEE [and its choice for the next Gigabit Ethernet speed after 100GbE] will be the final arbiter.”
Software defined optics and cognitive optics
Optical transmission could ultimately borrow two concepts already being embraced by the wireless world: software defined radio (SDR) and cognitive radio.
SDR refers to how a system can be reconfigured in software to implement the most suitable radio protocol. In optical it would mean making the transmitter and receiver software-programmable so that various transmission schemes, data rates and wavelength ranges could be used. “You would set up the optical transmitter and receiver to make best use of the available bandwidth,” says ADVA Optical Networking’s Elbers.
This is an idea also highlighted by Nokia Siemens Networks: trading capacity against reach by modifying the amount of information placed on a carrier.
“For a certain frequency you can put either one bit [of information] or several,” says Oliver Jahreis, head of product line management, DWDM at Nokia Siemens Networks. “If you want more capacity you put more information on a frequency but at a lower signal-to-noise ratio and you can’t go as far.”
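The capacity-versus-reach trade-off Jahreis describes is essentially the Shannon limit: bits per Hz grow only logarithmically with signal-to-noise ratio, and SNR falls with distance. A rough sketch, using illustrative SNR values not taken from the article:

```python
import math

# Shannon capacity per unit bandwidth: log2(1 + SNR).
# Packing more bits onto a carrier demands a higher SNR,
# and SNR degrades with distance, so capacity trades against reach.
def bits_per_hz(snr_db):
    """Spectral efficiency limit (b/s/Hz) at a given SNR in dB."""
    return math.log2(1 + 10 ** (snr_db / 10))

for snr_db in (5, 10, 15, 20):
    print(snr_db, round(bits_per_hz(snr_db), 2))
# 5 dB -> 2.06, 10 dB -> 3.46, 15 dB -> 5.03, 20 dB -> 6.66
```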
Using ‘cognitive optics’, the optical system itself would choose the best transmission scheme depending on capacity, distance and performance constraints, as well as the other lightpaths on the fibre. “You would get rid of fixed wavelengths and bit rates altogether,” says Elbers.
Market realities
Ovum’s view is it remains too early to call the next rate following 100Gbps.
Other analysts agree. “Gridless is interesting stuff but from a commercial standpoint it is not relevant at this time,” says Andrew Schmitt, directing analyst, optical at Infonetics Research.
Given that market research firms look five years ahead and the next speed hike is only expected from 2017, such a stance is understandable.
Optical module makers highlight the huge amount of work still to be done. There is also a concern that the benefits of corralling the industry around coherent DP-QPSK at 100Gbps, done to avoid the mistakes made at 40Gbps, will be undone at any future data rate given the number of options available.
Even if the industry were to align on a common option, developing the technology at the right price point will be highly challenging.
“Many people in the early days of 100Gbps – in 2007 – said: ‘We need 100Gbps now – if I had it I’d buy it’,” says Rafik Ward, vice president of marketing at Finisar. “There should be a lot of pent-up demand [now].” The reason why there isn’t is that such end users always miss out the key wording at the end, says Ward: “If I had it I’d buy it - at the right price.”
