Optical transmission's era of rapid capacity growth
Kim Roberts, senior director of coherent systems at Ciena, moves from theory to practice with a discussion of practical optical transmission systems supporting 100 Gigabit-per-second (Gbps) and, in future, 400 Gigabit and 1 Terabit line rates. This discussion is based on a talk Roberts gave at Layer123's Terabit Optical and Data Networking conference, held recently in Cannes.
Part 2: Commercial systems
The industry is experiencing a period of rapid growth in optical transmission capacity. The years 1995 to 2006 were marked by a gradual increase in system capacity with the move to 10 Gigabit-per-second (Gbps) wavelengths. But the pace picked up with the advent of first 40Gbps direct detection and then coherent transmission, as shown by the red curve in the chart.
Source: Ciena
The chart's left y-axis shows bits per second per Hertz (bits/s/Hz). The y-axis on the right is an alternative representation of capacity expressed in Terabits in the C-band. "The C-band remains, on most types of fibre, the lowest cost and the most efficient," says Roberts.
The notable increase started with 40Gbps in a 50GHz ITU channel - 46Gbps to accommodate forward error correction (FEC) - and then, in 2009, 100Gbps (112Gbps) in the same width channel. In Ciena's (Nortel's) case, 100Gbps transmission was achieved using two carriers, each carrying 56Gbps, in one 50GHz channel.
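The overhead implied by these line rates is simple arithmetic: the line rate is the payload rate multiplied by one plus the combined FEC and framing overhead. A minimal sketch using the figures quoted above (the split between FEC and framing overhead is not given in the article):

```python
# Overhead implied by the line rates quoted above:
# line rate = payload rate x (1 + FEC and framing overhead)
rates = {
    "40G":  (40e9, 46e9),    # 40Gbps payload carried at 46Gbps
    "100G": (100e9, 112e9),  # 100Gbps payload carried at 112Gbps
}
for name, (payload, line) in rates.items():
    overhead = line / payload - 1
    print(f"{name}: {overhead:.0%} overhead")  # 15% and 12% respectively
```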
"It is going to get hard to achieve spectral efficiencies much beyond 5bits/s/Hz. Getting hard means it is going to take the industry longer"
The chart's blue labels represent future optical transmission implementations. The 224Gbps in a 50GHz channel (200Gbps of data) is achieved using a more advanced modulation scheme: instead of dual-polarisation quadrature phase-shift keying (DP-QPSK) coherent transmission, DP-16QAM will be used, modulating both phase and amplitude.
At 448Gbps, two carriers will be used, each carrying 224Gbps DP-16-QAM in a 50GHz band. "Two carriers, two polarisations on each, and 16-QAM on each," says Roberts.
As explained in Part 1, two carriers are needed because squeezing 400Gbps into the 50GHz channel will have unacceptable transmission performance. But instead of using two 50GHz channels - one for each carrier - 80GHz of spectrum will be needed overall. That is because the latest DSP-ASICs, in this case Ciena's WaveLogic 3 chipset, use waveform shaping, packing the carriers closer and making better use of the spectrum available. For the scheme to be practical, however, the optical network will also require flexible-spectrum ROADMs.
One Terabit transmission extends the concept by using five carriers, each carrying 200Gbps. This requires an overall spectrum of 160-170GHz. "The measurement in the lab that I have shown requires 200GHz using WaveLogic 3 technology," says Roberts, who stresses that these are lab measurements and not a product.
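The spectral saving from packing carriers into a super-channel follows from the figures quoted: five 200Gbps carriers in 160-170GHz rather than one 50GHz ITU channel per carrier. A quick illustration:

```python
# Spectrum used by a five-carrier, 1 Terabit super-channel versus
# five conventional 50GHz ITU channels (figures from the article).
carriers = 5
conventional_ghz = carriers * 50   # one 50GHz channel per carrier
superchannel_ghz = 170             # upper end of the 160-170GHz estimate
saving = 1 - superchannel_ghz / conventional_ghz
print(f"{saving:.0%} less spectrum than five separate channels")  # 32%
```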
Slowing down
Roberts expects progress in line rate and overall transmission capacity to slow once 400Gbps transmission is achieved, as indicated by the shallower gradient of the chart's curve in future years.
"It is going to get hard to achieve spectral efficiencies much beyond 5bits/s/Hz," says Roberts. "Getting hard means it is going to take the industry longer." The curve is an indication of what is likely to happen, says Roberts: "We are reaching closer and closer to the Shannon bound, so it gets hard."
Roberts says that lab "hero" experiments can go far beyond 5 or 6 bits/s/Hz but that what the chart is showing are system product trends: "Commercial products that can handle commercial amounts of noise, commercial margins and FEC; all the things that make it a useful product."
Reach
What the chart does not show is how transmission reach changes with the modulation scheme used. To that end, Roberts refers to the chart discussed in Part 1.
Source: Ciena
The 100Gbps blue dot is the WaveLogic 3 performance achieved with the same optical signal-to-noise ratio (OSNR) as used at 10Gbps.
"If you apply the same technology, the same FEC at 16-QAM at the same symbol rate, you get 200Gbps or twice the throughput," says Roberts. "But as you can see on the curve, you get a 4.6dB penalty [at 200Gbps] inherent in the modulation."
What this means is that the reach of an optical transport system is no longer 3,000km but rather a regional reach of 500-700km, says Roberts.
Part 1: The capacity limits facing optical networking
Part 3: 2020 vision
The capacity limits facing optical networking
Ever wondered just how close systems vendors are to the limits of fibre capacity in optical networks? Kim Roberts, senior director of coherent systems at Ciena, adds some mathematical rigour with his explanation of Shannon's bound, from a workshop he gave at the recent Layer123's Terabit Optical and Data Networking conference held in Cannes.
Part 1: Shannon's bound
Source: Ciena
One positive message from Kim Roberts is that optical networking engineers are doing very well at squeezing information down a fibre. But a consequence of their success is that the scope for sending yet more information is diminishing.
"The key message is we are reaching that boundary," says Roberts. "We are not going to have factors of 10 improvement in spectral efficiency."
Shannon's bound
The boundary in question - the green line in the chart above - is based on the work of famed mathematician and information theorist, Claude Shannon. The chart shows how the amount of information that can be sent across a fibre is ultimately dictated by the optical signal-to-noise ratio (OSNR).
To understand the chart, the axes need to be explained. The y-axis represents the Gigabits-per-second (Gbps) of information to be communicated error free in a 50GHz ITU-defined channel. The second, right-hand y-axis is an alternative representation, based on spectral efficiency: how many bits/s are transmitted, error free, per Hertz of optical spectrum. For example, 100Gbps fitted within a 50GHz channel (see the 100Gbps black dot) has a spectral efficiency of 2bits/s/Hz.
The horizontal axis is the OSNR, measured as the total power in the signal divided by the noise in a tenth of a nanometre of spectrum.
The curve, in green, shows where communication is possible and where it is not, based on Shannon's bound. "Shannon described how, for a given bandwidth - 50GHz in this example - and the amount of noise present, specifically the signal-to-noise ratio, there is a limit to the amount of information that can be communicated error free."
Roberts points out that Shannon's work was based on a linear communication channel with added Gaussian noise. Fibre is a more complex channel but the same Shannon bound applies, although some assumptions must be made. "There are certain assumptions for the non-linearities in the fibre," says Roberts. "If you make reasonable assumptions, you can draw this [Shannon] bound which shows where it is possible - and where it is not - to operate."
The dots on the chart represent the different generations of Ciena's optical transmission systems based on its WaveLogic coherent ASIC technology. The 10Gbps black dot is the performance of Ciena's first-generation WaveLogic silicon. The black dots at 40Gbps and 100Gbps represent the performance achieved using Ciena's WaveLogic 2 40 and 100Gbps ASICs, shipping since 2009.
The two blue dots - at 100Gbps and 200Gbps - represent the performance achieved using Ciena's latest WaveLogic 3 silicon shipping this year. The 100Gbps is achieved using dual-polarisation, quadrature phase-shift keying (DP-QPSK) and the 200Gbps using DP-16QAM (quadrature amplitude modulation). The 200Gbps data after forward error correction in a 50GHz channel achieves 4bits/s per Hertz of spectrum.
The 100Gbps WaveLogic 3 (blue dot) delivers improved performance compared to the 100Gbps WaveLogic 2 (black dot) silicon by shifting the performance to the left, closer to the bound.
"Moving to the left means tolerating more noise, which can be translated to longer reach or higher-noise bands or more tolerance for imperfections in the network." Just how this improved performance - in terms of gained decibels (dBs) - is used depends on whether the network deployment is a long-haul or metro one, says Roberts.
What next?
Moving to faster data rates - vertically on the graph - raises its own issues. A Terabit - 1,000Gbit/s - in a 50GHz channel requires an OSNR in excess of 35dB. "That is not something that can be achieved in the network," says Roberts. "For a robust network you want to tolerate 20dB, or at least be left of 25dB." As a result, a practicable 1Tbps signal is not going to fit in a 50GHz channel.
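The 35dB figure can be reproduced from the Shannon bound under the assumptions Roberts describes: a linear channel with added Gaussian noise, dual polarisation, and OSNR referenced to 0.1nm of spectrum (about 12.5GHz at 1550nm). The sketch below is an illustration of that arithmetic, not Ciena's actual model:

```python
import math

def required_osnr_db(rate_bps, channel_hz=50e9, ref_hz=12.5e9, polarisations=2):
    """OSNR (dB, 0.1nm noise reference) needed to reach `rate_bps` at the
    Shannon bound for a linear channel with additive Gaussian noise."""
    se_per_pol = rate_bps / (polarisations * channel_hz)  # bits/s/Hz per polarisation
    snr = 2 ** se_per_pol - 1                             # invert C = B.log2(1 + SNR)
    osnr = snr * channel_hz / ref_hz                      # rescale noise to 0.1nm reference
    return 10 * math.log10(osnr)

print(f"1Tbps in 50GHz needs {required_osnr_db(1e12):.1f}dB OSNR")  # ~36dB
```

This reproduces the "in excess of 35dB" figure for 1Tbps, well beyond the 20-25dB Roberts says a robust network can tolerate.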
The chart does imply that 400Gbps might be practicable in a 50GHz channel but as Roberts points out, while it might be theoretically possible, the closer you get to the theoretical limit, the harder it is to achieve.
"To increase capacity we need to find ways of reducing the noise on the line to move more to the right [on the chart]," says Roberts. "We [optical networking engineers] also need to push the data points to the left and vertically, but we are not going to push beyond the green."
Further Reading:
Capacity Trends and Limits of Optical Communication Networks, Proceedings of the IEEE, May 2012.
Part 2: Optical transmission's era of rapid capacity growth
Part 3: 2020 vision
FPGA transceiver speed hikes bring optics to the fore

Despite rapid increases in the transceiver speeds of field-programmable gate arrays (FPGA), the transition to optical has begun.
FPGA vendors Xilinx and Altera have increased their on-chip transceiver speeds fourfold since 2005, from 6.5Gbps to 28Gbps. But signal integrity issues and the rapid decline in reach associated with higher speeds mean optics is becoming a relevant option.
Altera has unveiled a prototype with two 12x10Gbps optical engines but has yet to reveal its product plans. Xilinx believes that FPGA optical interfaces are still several years off with requirements being met with electrical interfaces for now.
The CFP4 optical module to enable Terabit blades
Source: Gazettabyte, Xilinx
The CFP2 is about half the size of the CFP while the CFP4 is half the size of the CFP2. The CFP4 is slightly wider and longer than the QSFP.
The two CFP modules will use a 4x25Gbps electrical interface, doing away with the need for a 10x10Gbps to 4x25Gbps gearbox IC used for current CFP 100GBASE-LR4 and -ER4 interfaces. The CFP2 and CFP4 are also defined for 40 Gigabit Ethernet use.
The CFP's maximum power rating is 32W, the CFP2's 12W and the CFP4's 5W. But vendors that put eight CFP2s or 16 CFP4s on a blade still want to meet the 60W total power budget.
Getting close: Four CFP modules deliver slightly less bandwidth than 48 SFP+ modules: 4x100Gbps versus 480Gbps. The four also consume more power - 60W versus 48W. Moving to the CFP2 module will double the blade's bandwidth without consuming more power, while the CFP4 will do the same again: a blade with 16 CFP4 modules promises 1.6Tbps while requiring 60W. Source: Xilinx
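The blade comparison reduces to module count times per-module bandwidth and power. The per-module wattages below are derived from the blade totals quoted (60W across four CFPs, and the 60W target spread across eight CFP2s or 16 CFP4s), so they are working assumptions rather than datasheet values:

```python
# (modules per blade, Gbps per module, watts per module - assumed from blade totals)
blades = {
    "SFP+": (48, 10, 1.0),    # 48 x 10Gbps at ~1W each
    "CFP":  (4, 100, 15.0),   # 60W blade / 4 modules
    "CFP2": (8, 100, 7.5),    # 60W target / 8 modules
    "CFP4": (16, 100, 3.75),  # 60W target / 16 modules
}
for name, (count, gbps, watts) in blades.items():
    print(f"{name}: {count * gbps / 1000:.2f}Tbps at {count * watts:.0f}W")
```

Each CFP generation doubles blade bandwidth within the same 60W envelope, ending at the 1.6Tbps CFP4 blade described above.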
The first CFP2 modules are expected this year - there could be vendor announcements as early as the upcoming OFC/NFOEC 2012 show to be held in LA in the first week of March. The first CFP4 products are expected in 2013.
Further reading
The CFP MSA presentation: CFP MSA 100G roadmap and applications
The great data rate-reach-capacity tradeoff
Source: Gazettabyte
Optical transmission technology is starting to bump into fundamental limits, resulting in a three-way tradeoff between data rate, reach and channel bandwidth. So says Brandon Collings, JDS Uniphase's CTO for communications and commercial optical products. See the recent Q&A.
This tradeoff will impact the coming transmission speeds of 200, 400 Gigabit-per-second and 1 Terabit-per-second. For each increased data rate, either the channel bandwidth must increase or the reach must decrease or both, says Collings.
Thus a 200Gbps light path can be squeezed into a 50GHz channel in the C-band but its reach will not match that of 100Gbps over a 50GHz channel (shown on the graph with a hashed line). A wider version of 200Gbps could match the reach of the 100Gbps, but that would probably need a 75GHz channel, says Collings.
For 400Gbps, the same situation arises suggesting two possible approaches: 400Gbps fitting in a 75GHz channel but with limited reach (for metro) or a 400Gbps signal placed within a 125GHz channel to match the reach of 100Gbps over a 50GHz channel.
Optical transmission technology is starting to bump into fundamental limits resulting in a three-way tradeoff between data rate, reach and channel bandwidth.
"Continue this argument for 1 Terabit as well," says Collings. Here the industry consensus suggests a 200GHz-wide channel will be needed.
Similarly, within this compromise, other options are available such as 400Gbps over a 50GHz channel. But this would have a very limited reach.
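The tradeoff Collings describes can be made concrete by computing the spectral efficiency of each option; the rate and channel-width pairings are taken from the article, and reach falls as the bits/s/Hz figure rises:

```python
# Spectral efficiency of each option Collings describes (bits/s/Hz).
# Higher efficiency in the same fibre means shorter reach.
options = [
    ("100G in 50GHz",  100, 50),   # the reach baseline
    ("200G in 50GHz",  200, 50),   # shorter reach than the 100G baseline
    ("200G in 75GHz",  200, 75),   # matches the 100G reach
    ("400G in 75GHz",  400, 75),   # limited, metro-class reach
    ("400G in 125GHz", 400, 125),  # matches the 100G reach
    ("1T in 200GHz",  1000, 200),  # the industry-consensus channel width
]
for name, gbps, ghz in options:
    print(f"{name}: {gbps / ghz:.1f} bits/s/Hz")
```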
Collings does not dismiss the possibility of a technology development which would break this fundamental compromise, but at present this is the situation.
As a result there will likely be multiple formats hitting the market which align the reach needed with the minimised channel bandwidth, says Collings.
Plotting transceiver shipments versus traffic growth

Summing transceiver shipments in the core of the network and plotting the data against traffic growth provides useful insights into the state of the network.
"We use transceiver shipment data [from vendors] to calculate how fast the network is growing and compare it to the traffic growth," says Vladimir Kozlov, CEO of market research firm, LightCounting.
What it reveals is that in 2005-06 there was a significant discrepancy between traffic growth and installed capacity: there was 35-40% traffic growth while investment in dense wavelength division multiplex (DWDM) only grew 20-25%. This gap began to shrink in 2007-08.
LightCounting stresses that network investment must keep pace with traffic growth. "It is not going to be a one-to-one correlation as network efficiency improves over time," says Kozlov. But the gap in the past was too large and probably had to do with unused network capacity.
"As long as the network expansion is to continue just to keep up with traffic, we are looking at sustainable growth," says Kozlov.
Good long-term news for the optical component and module makers.
Optical components: The six billion dollar industry
The service provider industry, including wireless and wireline players, is up 6% year-on-year (2Q10 to 1Q11) to reach US $1.82 trillion, according to Ovum. The equipment market, mainly telecom vendors but also the likes of Brocade, has also shown strong growth - up 15% - to reach revenues of over $41.4 billion. But the most striking growth has occurred in the optical components market, up 28%, to achieve revenues of over $6 billion, says the market research firm.
Source: Ovum
“This is the first time optical components has exceeded six billion since 2001,” says Daryl Inniss, practice leader, Ovum Components. Moreover, the optical component industry growth has continued over six consecutive quarters with the growth being more than 25% for the past four quarters. “None of the other [two] segments have performed in this way,” says Inniss.
Ovum cites three factors accounting for the growth. Fibre-to-the-x (FTTx) is experiencing strong growth, while datacom players have brought new revenues into the market since the start of 2010. “The [optical] component recovery was led by datacom,” says Inniss. “We speculate that some of that money came from the Googles, Facebooks and Yahoos!.” A third factor accounting for growth has been optical equipment vendors ordering more long lead-time items than needed – such as ROADMs – to secure supply.
Source: Ovum
The second chart above shows the different market segments normalised since the start of 1999. Shown are the capex spending for optical networking, optical networking equipment revenues, optical components and FTTx equipment spending.
Optical networking spending is some 3.5x that of the components. FTTx equipment revenues are lower than the optical component industry’s and are therefore multiplied by 2.25, while capex is 9.2x that of optical equipment. The peak in 2001 is the optical component revenues during the optical boom.
Several points can be drawn from the normalised chart:
- The strong recent growth in FTTx is the result of the booming Chinese market.
- From 2003 to 2008, the overall market showed steady growth, as illustrated by the best-fit line.
- From 2003 to 2008, capex and optical networking revenues were in line, while two thirds of the optical component revenues were due to this telecom spending.
- From 2010 onwards, components deviated from these two other segments due to the datacom spending from new players and the strong growth in FTTx.
- When the market crashed in early 2009, optical components, networking and capex all fell. FTTx recovered after only one quarter and was followed by optical components. Optical networking and capex, meanwhile, have still not fully recovered when compared with the underlying growth line.
