BT makes plans for continued traffic growth in its core
Part 1
Kevin Smith: “A lot of the work we are doing with the trials have demonstrated we can scale our networks gracefully rather than there being a brick wall of a problem.”
BT is confident that its core network will accommodate the expected IP traffic growth for the next decade. Traffic in BT’s core is growing at between 35 and 40 percent annually, compared to the global average growth rate of 20 to 30 percent. BT attributes the higher growth to the rollout of fibre-based broadband across the UK.
The telco is deploying 100-gigabit wavelengths in high-traffic areas of its network. “These are key sites where we're running out of wavelengths such that we need to implement higher-speed ones,” says Kevin Smith, research leader for BT’s transport networks. The operator is now trialling 200-gigabit wavelengths using polarisation multiplexing, 16-quadrature amplitude modulation (PM-16QAM).
Adopting higher-order modulation increases capacity and spectral efficiency, but at the expense of system performance, and the loss can be significant.
Systems vendors use polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK) for 100-gigabit wavelengths. Moving to PM-16QAM doubles the bits on the wavelength but the received data has less tolerance to noise. The result is a 6-decibel loss compared to PM-QPSK, such that the transmission distance drops to a quarter. If PM-QPSK spans a 4,000km link, using PM-16QAM the reach on the same link is only 1,000km.
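The arithmetic behind that reach penalty can be sketched as a rough, first-order check. The assumption that reach scales inversely with the required linear signal-to-noise ratio is illustrative, not a figure from BT:

```python
def db_to_linear(db):
    """Convert a decibel value to a linear power ratio."""
    return 10 ** (db / 10)

# PM-16QAM needs roughly 6dB more signal-to-noise ratio than PM-QPSK
# at the same symbol rate, so to first order reach drops by that factor.
penalty_db = 6
reach_factor = 1 / db_to_linear(penalty_db)  # roughly 0.25

qpsk_reach_km = 4000
print(f"PM-16QAM reach: ~{qpsk_reach_km * reach_factor:.0f} km")
```

A 6-decibel penalty is a factor of roughly four in linear terms, which matches the quoted drop from a 4,000km to a 1,000km reach.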
The transmitted capacity can also be increased by using pulse-shaping at the transmitter to cram a wavelength into a narrower channel. BT’s existing optical network uses fixed 50GHz-wide channels. But in a recent network trial with Huawei, a 3 terabit super-channel was transmitted over a 360km link using a flexible grid.
The super-channel comprised 15 channels, each carrying 200 gigabit using PM-16QAM. Using the flexible grid, each carrier occupied a 33.5GHz channel, increasing fibre capacity by a factor of 1.5 compared to a 50GHz fixed-grid. “For 16-QAM, it [33.5GHz] is pretty close to the limit,” says Smith.
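The trial's headline numbers can be verified with simple arithmetic, using only the figures quoted above:

```python
# 15 carriers, each carrying 200 gigabit using PM-16QAM.
carriers = 15
rate_per_carrier_gbps = 200
superchannel_tbps = carriers * rate_per_carrier_gbps / 1000
print(superchannel_tbps)  # 3.0 terabit

# Squeezing each carrier into 33.5GHz instead of a 50GHz fixed-grid slot:
fixed_grid_ghz = 50
flex_grid_ghz = 33.5
capacity_gain = fixed_grid_ghz / flex_grid_ghz
print(round(capacity_gain, 2))  # ~1.49x fibre capacity
```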
Increasing the baud rate is the most structurally-efficient way to accommodate the high speed
Another way to boost the carrier’s data as well as reduce system cost is to up the signalling rate. Current optical transport systems use a 30Gbaud symbol rate. Here, two carriers each using PM-16QAM are needed to deliver 400 gigabit. Doubling the symbol rate to 60Gbaud enables a single 400 gigabit wavelength. Doubling the baud rate also halves a platform’s transponder count, reducing the overall cost-per-bit, and increases platform density.
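A rough sketch of the symbol-rate arithmetic, assuming four bits per 16-QAM symbol on each of two polarisations; the gap between the raw rate and the 200 gigabit net rate is assumed FEC and framing overhead, not a figure from BT:

```python
def raw_rate_gbps(baud_g, bits_per_symbol, polarisations=2):
    """Raw line rate in Gbit/s, before FEC and framing overhead."""
    return baud_g * bits_per_symbol * polarisations

# PM-16QAM: log2(16) = 4 bits per symbol, on two polarisations.
# At 30Gbaud that is 240 Gbit/s raw, carrying ~200 Gbit/s net.
print(raw_rate_gbps(30, 4))

# Two 30Gbaud carriers versus one 60Gbaud carrier for a 400G service:
print(raw_rate_gbps(30, 4) * 2, raw_rate_gbps(60, 4))  # same raw capacity
```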
“Increasing the baud rate is the most structurally-efficient way to accommodate the high speed,” says Smith. Going to 16-QAM increases the data that is carried but at the expense of reach. By increasing the baud rate, reach can be extended while keeping the modulation order lower, he says.
BT says it is seeing the first signs of such ‘flexrate’ transponders, which can adapt their modulation format and baud rate. “This is a very interesting area we can mine,” says Smith. The fundamental driver is reducing cost, but also giving BT more flexibility in its network, he says.
Traffic growth
Coping with traffic growth is a constant challenge, says BT.
“I’m not worried about a capacity crunch,” says Smith. “A lot of the work we are doing with the trials have demonstrated we can scale our networks gracefully rather than there being a brick wall of a problem.”
The operator is confident that 25 to 30 terabit of traffic can be squeezed into the C-band using flexgrid and narrower channel spacing. Beyond that, BT says broadening the spectral window using additional bands such as the L-band could boost a fibre’s capacity to 100 terabit. Vendors are already looking at extending the spectral window, says BT.
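As an illustrative check, the spectral efficiency implied by squeezing 25 terabit into the C-band can be estimated. The C-band wavelength range used here is a standard figure, not one supplied by BT:

```python
C = 3e8  # speed of light, m/s

def band_width_thz(lambda_lo_nm, lambda_hi_nm):
    """Optical bandwidth of a wavelength window, in THz."""
    lo, hi = lambda_lo_nm * 1e-9, lambda_hi_nm * 1e-9
    return (C / lo - C / hi) / 1e12

# The C-band spans roughly 1530 to 1565 nm.
c_band_thz = band_width_thz(1530, 1565)
print(f"C-band: ~{c_band_thz:.1f} THz")
print(f"Implied efficiency for 25 Tbit/s: ~{25 / c_band_thz:.1f} bit/s/Hz")
```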
Sliceable transponders
BT is also part of longer-term research exploring an extension to the ‘flexrate' transponder, dubbed the sliceable bit rate variable transponder (S-BVT).
“It is very much early days but the idea is to put multiple modulators on the same big super transponder so that it can kick out super-channels that can be provisioned on demand,” says Andrew Lord, head of optical research at BT.
The large multi-terabit super-channel would be sent out and sliced further down the network by flexible grid wavelength-selective switches such that parts of the super-channel would end up at different destinations. “You don’t need all that capacity to go to one other node but you might need it to go to multiple nodes,” says Lord.
Such a sliceable transponder promises several benefits. One is an ability to keep repartitioning the multi-terabit slice based on demand. “It is a good thing if we see that kind of dynamics happening, but not fast dynamics,” says Lord. The repartitioning would more likely be occasional, adding extra capacity between nodes based on demand. Accordingly, the sliced multi-terabit super-channel would end up at fewer destinations over time.
The sliceable transponder concept also promises cost reduction through greater component integration.
BT stresses this is still early research but such a transponder could end up in the network in five years’ time.
Space-division multiplexing
Another research area that promises to increase significantly the overall capacity of a fibre is space-division multiplexing (SDM).
SDM promises to boost the capacity by a factor of between 10 and 100 through the adoption of parallel transmission paths. The simplest way to create such parallel paths is to bundle several standard single-mode fibres in a cable. But speciality fibre could also be used, either multi-core or multi-mode.
BT says it is not researching spatial multiplexing.
“I’m very much more interested in how we use the fibre we have already got,” says Lord. The priority is pushing channels together as close as possible and getting the 25 terabit figure higher, as well as exploring the L-band. “That is a much more practical way to go forward,” says Lord.
However, BT welcomes the research into SDM. “What it [SDM] is pushing into the industry is a knowledge about how to do integration and the expertise that comes out of that is still really valid,” says Lord. “As it is, I don’t see how it fits.”
Data centres to give silicon photonics its chance
The scale of modern data centres and the volumes of transceivers they will use are going to have a significant impact on the optical industry. So claims Facebook, the social networking company.
Katharine Schmidtke
Facebook has been vocal in outlining the optical requirements it needs for its large data centres.
The company will use duplex single-mode fibre and has chosen the 2 km mid-reach 100 gigabit CWDM4 interface to connect its equipment.
But the company remains open regarding the photonics used inside transceivers. “Facebook is agnostic to technology,” says Katharine Schmidtke, strategic sourcing manager, optical technology at Facebook. “There are multiple technologies that meet our requirements.”
That said, Facebook says silicon photonics has characteristics that are appealing.
Silicon photonics can produce integrated designs, with all the required functions placed in one or two chips. Such designs will also be needed in volume, given that a large data centre uses hundreds of thousands of optical transceivers, and that requires a high-yielding process. This is a manufacturing model the chip industry excels at, and one that silicon photonics, which uses a CMOS-compatible process, can exploit.
When you bring up a data centre, you don’t just deploy, you deploy a data centre
New business model
What data centres bring to optics is scale. Optical transceiver volumes used by data centres are growing, and growing fast, and will account for half the industry’s demand for Ethernet transceivers by 2020, according to LightCounting Market Research.
Transceivers must be designed with high-volume, low-cost manufacturing in mind from the start. This is different to what the market has done traditionally. “With the telecom industry, you step into volume in more manageable, digestible chunks,” says Schmidtke. “When you bring up a data centre, you don’t just deploy, you deploy a data centre.”
Silicon photonics has already proven it can achieve the required optical performance, says Facebook. What remains open is whether the technology can meet the manufacturing demands of the data centre. What helps its cause is that the data centre provides the volumes needed to achieve such manufacturing maturity.
Schmidtke is upbeat about silicon photonics’ prospects.
“Why silicon photonics is attractive is integration; you are reducing the number of components and the bill of materials significantly, and that reduces cost,” she says. “Then there is all the alignment and assembly cost reductions; that is what makes this technology appealing.”
Her expectation is that the industry will demonstrate the required level of manufacturing maturity in the coming year. Then the role silicon photonics will play for this market will become clearer.
“Within a year it will be very obvious,” she says.
Verizon tips silicon photonics as a key systems enabler
Part 3: An operator view
Glenn Wellbrock is upbeat about silicon photonics’ prospects. Challenges remain, he says, but the industry is making progress. “Fundamentally, we believe silicon photonics is a real enabler,” he says. “It is the only way to get to the densities that we want.”
Glenn Wellbrock
Wellbrock adds that indium phosphide-based photonic integrated circuits (PICs) can also achieve such densities.
But there are many potential silicon photonics suppliers because of its relatively low barrier to entry, unlike indium phosphide. "To date, Infinera has been the only real [indium phosphide] PIC company and they build only for their own platform,” says Wellbrock.
That an operator must delve into emerging photonics technologies may at first glance seem surprising. But Verizon needs to understand the issues and performance of such technologies. “If we understand what the component-level capabilities are, we can help drive that with requirements,” says Wellbrock. “We also have a better appreciation for what the system guys can and cannot do.”
Verizon can’t be an expert in the subject, he says, but it can certainly be involved. “To the point where we understand the timelines, the cost points, the value-add and the risk factors,” he says. “There are risk factors that we also want to understand, independent of what the system suppliers might tell us.”
The cost saving is real, but it is also the space savings and power saving that are just as important
All the silicon photonics players must add a laser in one form or another to the silicon substrate since silicon itself cannot lase, but pretty much all the other optical functions can be done in silicon, says Wellbrock: “The cost saving is real, but it is also the space savings and power saving that are just as important.”
The big achievement of silicon photonics, which Wellbrock describes as a breakthrough, is getting rid of the gold boxes around the discrete optical components. “How do I get to the point where I don’t have fibre connecting all these discrete components, where the traces are built into the silicon, the modulator is built in, even the detector is built right in.” The resulting design is then easier to package. “Eventually I get to the point where the packaging is glass over the top of that.”
So what has silicon photonics demonstrated that gives Verizon confidence about its prospects?
Wellbrock points to several achievements, the first being Infinera’s PICs. Yes, he says, Infinera’s designs are indium phosphide-based and not silicon photonics, but the company makes really dense, low-power and highly reliable components.
He also cites Cisco’s silicon photonics-based CPAK 100 Gig optical modules, and Acacia, which is applying silicon photonics and its in-house DSP-ASICs to get a lower power consumption than other, high-end line-side transmitters.
Verizon believes the technology will also be used in CFP4 and QSFP28 optical modules, and at the next level of integration that avoids pluggable modules on the equipment's faceplate altogether.
But challenges remain. Scale is one issue that concerns Verizon. What makes silicon chips cheap is the fact that they are made in high volumes. “It [silicon photonics] couldn’t survive on just the 100 gigabit modules that the telecom world are buying,” says Wellbrock.
If these issues are not resolved, then indium phosphide continues to win for a long time because that is where the volumes are today
When Verizon asks the silicon photonics players about how such scale will be achieved, the response it gets is data centre interconnect. “Inside the data centre, the optics is growing so rapidly," says Wellbrock. "We can leverage that in telecom."
The other issue is device packaging, both for silicon photonics and for indium phosphide. It is fine to make a silicon photonics die cheaply, but unless the packaging costs can also be reduced, the overall cost saving is lost. “How to make it reliable and mainstream so that everyone is using the same packaging to get cost down,” says Wellbrock.
All these issues - volumes, packaging, increasing the number of applications a single part can be applied to - need to be resolved, and almost simultaneously. Otherwise, the technology will not realise its full potential and the start-ups will dwindle before the problems are fixed.
“If these issues are not resolved, then indium phosphide continues to win for a long time because that is where the volumes are today,” he says.
Verizon, however, is optimistic. “We are making enough progress here to where it should all pan out,” says Wellbrock.
The quiet period of silicon photonics
Michael Hochberg discusses his book on silicon photonics and the status of the technology. Hochberg is director of R&D at Coriant's Advanced Technology Group. Previously he was an Associate Professor at the University of Delaware and at the National University of Singapore. He was also a director at the Optoelectronic Systems Integration in Silicon (OpSIS) foundry, and a co-founder of silicon photonics start-up, Luxtera.
Part 2: An R&D perspective
If you are going to write a book on silicon photonics, you might as well make it different. That is the goal of Michael Hochberg and co-author Lukas Chrostowski, who have published a book on the topic.
Michael Hochberg
Hochberg says there is no shortage of excellent theoretical textbooks and titles that survey the latest silicon photonics research. Instead, the authors set themselves the goal of creating a design manual to help spur a new generation of designers.
The book aims to provide designers with all the necessary tools and know-how to develop silicon photonics circuits without needing to be specialists in optics.
“One of the limiting factors in terms of the growth and success of the field is how quickly can we breed up more and more designers,” says Hochberg.
The book - Silicon Photonics Design: From Devices to Systems - starts by exploring the main silicon photonics building blocks, from optical waveguides and grating couplers to modulators, photo-detectors and lasers. The book then addresses putting the parts together, with chapters on tools, fabrication, testing and packaging before finishing with system design examples.
The numerical tools used in the book are mostly based on the finite-difference time-domain method, which the authors describe as the typical workhorse in silicon photonics design. Hochberg admits that the systems software tools, in contrast, are less mature: “It is a moving target that will change year to year.”
Myths
Hochberg is also a co-author of a Nature Photonics paper, published in 2012, that debunks some of the myths regarding silicon photonics. “We wrote the myths paper after seeing an upswing in the ratio of hype-to-results going on,” says Hochberg.
He says part of the problem was that people were claiming silicon photonics would solve problems it was plainly unsuited to address, such as integrating photonics with cutting-edge sub-micron electronics at the 16nm and 28nm nodes. “That is not a practical solution for any near-term problem,” says Hochberg.
More recent events, such as Intel’s announcement in February that it is delaying the commercial introduction of its silicon photonics products, highlight how bringing the technology to market is a significant engineering challenge. We are now in a quiet period for silicon photonics, he says: companies are getting into serious product mode, where they stop publishing and start focussing on building a product.
Moreover, these products - what he refers to as second-generation silicon photonics designs - are increasingly sophisticated with more functions or channels placed on the chip. “It is the standard story of almost any technology in silicon,” he says. “Silicon wins when you can do more stuff on a single chip.”
Silicon photonics and III-V
Hochberg stresses that while it is an understandable desire, it is very hard to compare the performance of silicon photonics as a whole with that of traditional optical components using III-V compounds. The issue is that silicon photonics comprises many different platforms where designers have made trade-offs. The same applies to III-V compounds, where there are hundreds of processes aimed at thousands of different products. “It is very hard to compare them in a generic way,” he says.
“The great advantage silicon photonics gives you is access to first-rate fabrication infrastructure,” says Hochberg. Silicon photonics offers 8- and 12-inch wafers, high volume foundries, high process control, the ability to ramp to high volumes and achieve high yields of complex-structure designs with hundreds, even thousands of components on-chip.
In contrast, III-V materials such as indium phosphide and gallium arsenide offer higher mobilities - electrons and holes move faster - and, unlike silicon, can straightforwardly emit light.
“The downside is that III-V foundries use technology processes that silicon stopped using 20 to 30 years ago,” says Hochberg. Wafers that are 2-, 3- or 4-inch in diameter, lithography that is ten times coarser than is used for silicon, process controls that are less advanced, and less automation.
If you are going to design a complex chip with lots of different components that require a predictable relationship with each other, this is where silicon tends to beat III-Vs, he says.
But the claim of large silicon wafers and huge volumes is what silicon photonics proponents have been promoting for years, and which has fed some of the false expectation associated with the emerging technology, says one industry analyst.
Hochberg counters by highlighting two trends that play in silicon photonics’ favour.
One is the well-known one of optics slowly replacing copper. This has been going on for 40 to 50 years, he says, in long haul, then in metro and now linking equipment in the data centre. “This will continue for shorter and shorter distances and then, at some point, stop,” he says. That said, Hochberg stresses that there are other applications for silicon photonics besides data communications.
“Just because you run out of opportunities at shorter and shorter reach at some point in the distant future, doesn't mean that the field collapses,” he says. “There's a lot of other cool stuff being done in silicon photonics these days with serious commercial potential.” Example applications include medical and remote sensing.
Once you can do something in silicon and do it adequately well, it tends to displace everything else from the majority of the market
The second trend he highlights is that silicon ends up dominating fields, not necessarily because it is the best choice in terms of performance but because it ends up being so cheap in scale. “Once you can do something in silicon and do it adequately well, it tends to displace everything else from the majority of the market.”
There are up-front costs of getting silicon photonics into a CMOS fab so companies have to be judicious in choosing the applications they tackle. “But once the infrastructure gets going to make a new application, the speed with which the industry can scale is just mind-blowing,” he said.
At Coriant, Hochberg leads a team that is doing advanced R&D. “We are doing advanced research with the goal to develop new technology that may eventually make its way into product.”
Does that include silicon photonics? “There is certainly an interest in silicon photonics; it is one of the things we are exploring,” says Hochberg.
Further reading:
Book: Michael Hochberg and Lukas Chrostowski, Silicon Photonics Design: From Devices to Systems, Cambridge University Press, 2015
Paper: Myths and rumours of silicon photonics, Nature Photonics, Vol 6, April 2012.
Silicon photonics: "The excitement has gone"
The opinion of industry analysts regarding silicon photonics is mixed at best. More silicon photonics products are shipping but challenges remain.
Part 1: An analyst perspective
"The excitement has gone,” says Vladimir Kozlov, CEO of LightCounting Market Research. “Now it is the long hard work to deliver products.”
Dale Murray, LightCounting
However, he is less concerned about recent setbacks and slippages for companies such as Intel that are developing silicon photonics products. This is to be expected, he says, as happens with all emerging technologies.
Mark Lutkowitz, principal at consultancy fibeReality, is more circumspect. “As a general rule, the more that reality sets in, the less impressive silicon photonics gets to be,” he says. “The physics is just hard; light is not naturally inclined to work on the silicon the way electronics does.”
LightCounting, which tracks optical components and modules, says silicon photonics products are now shipping in volume. The market research firm cites Cisco’s CPAK transceivers, and 40 gigabit PSM4 modules shipping in excess of 100,000 units, as examples. Six companies now offer 40 gigabit PSM4 products, with Luxtera, a silicon photonics player, having a healthy lead over the other five.
Indium phosphide and other technologies will not step back and give silicon photonics a free ride
LightCounting also cites Acacia with its silicon photonics-based low-power 100 and 400 gigabit coherent modules. “At OFC, Acacia made a fairly compelling case, but how much of its modules’ optical performance is down to silicon photonics and how much is down to its advanced coherent DSP chip is unclear,” says Dale Murray, principal analyst at LightCounting. Silicon photonics has not shown itself to be the overwhelming solution for metro/ regional and long-haul networks to date but that could change, he says.
Another trend LightCounting notes is how PAM-4 modulation is becoming adopted within standards. PAM-4 modulates two bits of data per symbol and has been adopted for the emerging 400 Gigabit Ethernet standard. Silicon photonics modulators work really well with PAM-4 and getting it into standards benefits the technology, says LightCounting. “All standards were developed around indium phosphide and gallium arsenide technologies until now,” says Kozlov.
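How PAM-4 packs two bits into each symbol can be illustrated with a minimal sketch; the Gray-coded level map below is one common choice, shown purely as an example:

```python
# PAM-4 maps each pair of bits to one of four amplitude levels,
# doubling the bit rate at a given symbol rate versus two-level NRZ.
# Gray coding means adjacent levels differ by a single bit.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_encode(bits):
    """Encode an even-length bit sequence into PAM-4 symbol levels."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
```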
You would be hard pressed to find a lot of OEMs or systems integrators that talk about silicon photonics and what impact it is going to have
Silicon photonics has been tainted by the amount of hype it has received in recent years, says Murray, especially the claim that optical products made in a CMOS fabrication plant will be significantly cheaper than traditional III-V-based optical components.
First, Murray highlights that no CMOS production line can make photonic devices without adaptation. “And how many wafer starts are there for the whole industry? How much does a [CMOS] wafer cost?” he says.
“You would be hard pressed to find a lot of OEMs or systems integrators that talk about silicon photonics and what impact it is going to have,” says Lutkowitz. “To me, that has always said everything.”
Mark Lutkowitz, fibeReality
LightCounting highlights heterogeneous integration as one promising avenue for silicon photonics. Heterogeneous integration involves bonding III-V and silicon wafers before processing the two.
This hybrid approach uses the III-V materials for the active components while benefitting from silicon’s larger (300 mm) wafer sizes and advanced manufacturing techniques.
Such an approach avoids the need to attach and align an external discrete laser. “If that can be integrated into a WDM design, then you have got the potential to realise the dream of silicon photonics,” says Murray. “But it’s not quite there yet.”
This poses a real challenge for silicon photonics: it will only achieve low cost if there are sufficient volumes, but without such volumes it will not achieve a cost differential
Murray says over 30 vendors now make modules at 40 gigabit and above: “There are numerous module types and more are being added all the time.” Silicon photonics, in turn, splits that product pie further still. This poses a real challenge for silicon photonics: it will only achieve low cost if there are sufficient volumes, but without such volumes it will not achieve a cost differential.
“Indium phosphide and other technologies will not step back and give silicon photonics a free ride, and are going to fight it,” says Kozlov. Nor is it just VCSELs that are made in high volumes.
LightCounting expects over 100 million indium phosphide transceivers to ship this year. Many of these transceivers use distributed feedback (DFB) lasers and many are at 10 gigabit and are inexpensive, says Kozlov.
For FTTx and GPON, bi-directional optical subassemblies (BOSAs) now cost $9, he says: “How much lower cost can you get?”
Optical networking: The next 10 years
Predicting the future is a foolhardy endeavour; at best one can make educated guesses.
Ioannis Tomkos is better placed than most to comment on the future course of optical networking. Tomkos, a Fellow of the OSA and the IET at the Athens Information Technology Centre (AIT), is involved in several European research projects that are tackling head-on the challenges set to keep optical engineers busy for the next decade.
“We are reaching the total capacity limit of deployed single-mode, single-core fibre,” says Tomkos. “We can’t just scale capacity because there are limits now to the capacity of point-to-point connections.”
Source: Infinera
The industry consensus is to develop flexible optical networking techniques that make best use of the existing deployed fibre. These techniques include using spectral super-channels, moving to a flexible grid, and introducing ‘sliceable’ transponders whose total capacity can be split and sent to different locations based on the traffic requirements.
Once these flexible networking techniques have exhausted the last Hertz of a fibre’s C-band, additional spectral bands of the fibre will likely be exploited such as the L-band and S-band.
After that, spatial-division multiplexing (SDM) of transmission systems will be used, first using already-deployed single-mode fibre and then new types of optical transmission systems that apply SDM within the same optical fibre. For this, operators will need to put novel fibre in the ground that has multiple modes and multiple cores.
SDM systems will bring about change not only with the fibre and terminal end points, but also the amplification and optical switching along the transmission path. SDM optical switching will be more complex but it also promises huge capacities and overall dollar-per-bit cost savings.
Tomkos is heading three European research projects - FOX-C, ASTRON and INSPACE.
FOX-C involves all-optically adding and dropping sub-channels from different types of spectral super-channels. ASTRON is developing a one terabit transceiver photonic integrated circuit (PIC). The third, INSPACE, is developing new optical switch architectures for SDM-based networks.
FOX-C
Spectral super-channels are used to create high bit-rate signals - 400 Gigabit and greater - by combining a number of sub-channels. Combining sub-channels is necessary since existing electronics can’t create such high bit rates using a single carrier.
Infinera points out that a 1.2 Terabit-per-second (Tbps) signal implemented using a single carrier would require 462.5 GHz of spectrum while the accompanying electronics to achieve the 384 Gigabaud (Gbaud) symbol rate would require a sub-10nm CMOS process, a technology at least five years away.
Super-channels fall into two categories:
- Those that use non-overlapping sub-channels, implemented using what is called Nyquist multiplexing.
- And those with overlapping sub-channels, using orthogonal frequency division multiplexing (OFDM).
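Infinera's single-carrier figures above can be sanity-checked with back-of-envelope arithmetic, assuming PM-QPSK (four bits per symbol across two polarisations); the implied overhead is an inference, not a figure from Infinera:

```python
# At 384 Gbaud, PM-QPSK carries 4 raw bits per symbol.
baud_g = 384
raw_gbps = baud_g * 4        # 1536 Gbit/s raw
net_gbps = 1200              # the quoted 1.2 Tbit/s payload
overhead = raw_gbps / net_gbps - 1
print(f"Implied FEC/framing overhead: {overhead:.0%}")  # 28%

# The 462.5 GHz of spectrum relative to the symbol rate implies
# roughly 20 percent excess bandwidth (pulse-shaping roll-off).
spectrum_ghz = 462.5
print(f"Excess bandwidth factor: {spectrum_ghz / baud_g:.2f}")
```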

Heading off the capacity crunch
Improving optical transmission capacity to keep pace with the growth in IP traffic is getting trickier.
Engineers are being taxed in the design decisions they must make to support a growing list of speeds and data modulation schemes. There is also a fissure emerging in the equipment and components needed to address the diverging needs of long-haul and metro networks. As a result, far greater flexibility is needed, with designers looking to elastic or flexible optical networking where data rates and reach can be adapted as required.
Figure 1: The green line is the non-linear Shannon limit, above which transmission is not possible. The chart shows how more bits can be sent in a 50 GHz channel as the optical signal to noise ratio (OSNR) is increased. The blue dots closest to the green line represent the performance of the WaveLogic 3, Ciena's latest DSP-ASIC family. Source: Ciena.
But perhaps the biggest challenge is only just looming. Because optical networking engineers have been so successful in squeezing information down a fibre, their scope to send additional data in future is diminishing. Simply put, it is becoming harder to put more information on the fibre as the Shannon limit, as defined by information theory, is approached.
"Our [lab] experiments are within a factor of two of the non-linear Shannon limit, while our products are within a factor of three to six of the Shannon limit," says Peter Winzer, head of the optical transmission systems and networks research department at Bell Laboratories, Alcatel-Lucent. The non-linear Shannon limit dictates how much information can be sent across a wavelength-division multiplexing (WDM) channel as a function of the optical signal-to-noise ratio.
A factor of two may sound a lot, says Winzer, but it is not. "To exhaust that last factor of two, a lot of imperfections need to be compensated and the ASIC needs to become a lot more complex," he says. The ASIC is the digital signal processor (DSP), used for pulse shaping at the transmitter and coherent detection at the receiver.
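For context, the linear Shannon limit that the non-linear limit of Figure 1 sits below can be computed directly; the 20dB signal-to-noise ratio used here is purely illustrative:

```python
import math

def shannon_capacity_gbps(bandwidth_ghz, snr_db, polarisations=2):
    """Linear Shannon capacity of a WDM channel in Gbit/s.
    The non-linear limit for real fibre sits below this figure."""
    snr = 10 ** (snr_db / 10)
    return polarisations * bandwidth_ghz * math.log2(1 + snr)

# e.g. a 50 GHz dual-polarisation channel at 20 dB SNR:
print(f"~{shannon_capacity_gbps(50, 20):.0f} Gbit/s")
```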
Our [lab] experiments are within a factor of two of the non-linear Shannon limit, while our products are within a factor of three to six of the Shannon limit - Peter Winzer
At the recent OFC 2015 conference and exhibition, there were plenty of announcements pointing to industry progress. Several companies announced 100 Gigabit coherent optics in the pluggable, compact CFP2 form factor, while Acacia detailed a flexible-rate 5x7 inch MSA capable of 200, 300 and 400 Gigabit rates. And research results were reported on the topics of elastic optical networking and spatial division multiplexing, work designed to ensure that networking capacity continues to scale.
Trade-offs
There are several performance issues that engineers must consider when designing optical transmission systems. Clearly, for submarine systems, maximising reach and the traffic carried by a fibre are key. For metro, more data can be carried on a single carrier to improve overall capacity, but at the expense of reach.
Such varied requirements are met using several design levers:
- Baud or symbol rate
- The modulation scheme which determines the number of bits carried by each symbol
- Multiple carriers, if needed, to carry the overall service as a super-channel
The baud rate used is dictated by the performance limits of the electronics. Today that is 32 Gbaud: 25 Gbaud for the data payload and up to 7 Gbaud for forward error correction and other overhead bits.
Doubling the symbol rate from 32 Gbaud used for 100 Gigabit coherent to 64 Gbaud is a significant challenge for the component makers. The speed hike requires a performance overhaul of the electronics and the optics: the analogue-to-digital and digital-to-analogue converters and the drivers through to the modulators and photo-detectors.
"Increasing the baud rate gives more interface speed for the transponder," says Winzer. But the overall fibre capacity stays the same, as the signal spectrum doubles with a doubling in symbol rate.
However, increasing the symbol rate brings cost and size benefits. "You get more bits through, and so you are sharing the cost of the electronics across more bits," says Kim Roberts, senior manager, optical signal processing at Ciena. It also implies a denser platform by doubling the speed per line card slot.
As you try to encode more bits in a constellation, so your noise tolerance goes down - Kim Roberts
Modulation schemes
The modulation used determines the number of bits encoded on each symbol. Optical networking equipment already uses binary phase-shift keying (BPSK, or 2-quadrature amplitude modulation, 2-QAM) for the most demanding, longest-reach submarine spans; the workhorse quadrature phase-shift keying (QPSK, or 4-QAM) for 100 Gigabit-per-second (Gbps) transmission; and 16-QAM for 200 Gbps over distances up to 1,000km.
Moving to a higher QAM scheme increases WDM capacity but at the expense of reach. That is because as more bits are encoded on a symbol, the separation between them is smaller. "As you try to encode more bits in a constellation, so your noise tolerance goes down," says Roberts.
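Roberts's point can be put in numbers. For a square M-QAM constellation normalised to unit average symbol energy, the minimum distance between constellation points satisfies d_min^2 = 6/(M-1). A short sketch of the resulting SNR penalty when moving from QPSK to 16-QAM (our illustration, not Ciena's figures):

```python
import math

def dmin_sq(m, es=1.0):
    # Minimum squared Euclidean distance between points of a square
    # m-QAM constellation with average symbol energy es
    return 6.0 * es / (m - 1)

# SNR penalty of 16-QAM relative to QPSK (4-QAM) at the same symbol
# rate: about 7 dB, i.e. roughly a factor-of-five hit - in line with
# the 80 percent reach reduction quoted for the QPSK-to-16-QAM move
penalty_db = 10 * math.log10(dmin_sq(4) / dmin_sq(16))
print(round(penalty_db, 1))  # 7.0
```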
One recent development among system vendors has been to add more modulation schemes to enrich the transmission options available.
From QPSK to 16-QAM, you get a factor of two increase in capacity but your reach decreases of the order of 80 percent - Steve Grubb
Besides BPSK, QPSK and 16-QAM, vendors are adding 8-QAM, an intermediate scheme between QPSK and 16-QAM. These include Acacia with its AC-400 MSA, Coriant, and Infinera. Infinera has tested 8-QAM as well as 3-QAM, a scheme between BPSK and QPSK, as part of submarine trials with Telstra.
"From QPSK to 16-QAM, you get a factor of two increase in capacity but your reach decreases of the order of 80 percent," says Steve Grubb, an Infinera Fellow. Using 8-QAM boosts capacity by half compared to QPSK, while delivering more signal margin than 16-QAM. Having the option to use the intermediate formats of 3-QAM and 8-QAM enriches the capacity tradeoff options available between two fixed end-points, says Grubb.
Ciena has added two chips to its WaveLogic 3 DSP-ASIC family of devices: the WaveLogic 3 Extreme and the WaveLogic 3 Nano for metro.
The WaveLogic 3 Extreme uses a proprietary modulation format that Ciena calls 8D-2QAM, a tweak on BPSK that uses longer-duration signalling to extend span distances by up to 20 percent. The 8D-2QAM scheme is aimed at legacy dispersion-compensated fibre carrying 10 Gbps wavelengths, and offers up to 40 percent additional upgrade capacity compared to BPSK.
Ciena has also added 4-amplitude-shift-keying (4-ASK) modulation alongside QPSK to its WaveLogic3 Nano chip. The 4-ASK scheme is also designed for use alongside 10 Gbps wavelengths that introduce phase noise, to which 4-ASK has greater tolerance than QPSK. Ciena's 4-ASK design also generates less heat and is less costly than BPSK.
According to Roberts, a designer's goal is to use the fastest symbol rate possible, and then add the richest constellation possible "to carry as many bits as you can, given the noise and distance you can go".
After that, the remaining issue is whether the service can be fitted onto a single optical carrier or whether several carriers are needed, forming a super-channel. Packing a super-channel's carriers tightly benefits overall fibre spectrum usage and reduces the spectrum wasted on the guard bands needed when a signal is optically switched.
Can the symbol rate be doubled to 64 Gbaud? "It looks impossibly hard but people are going to solve that," says Roberts. It is also possible to use a hybrid approach in which both the symbol rate and the modulation order are increased. The table shows how different combinations of baud rate and modulation scheme can be used to achieve a 400 Gigabit single-carrier signal.

Note how using polarisation for coherent transmission doubles the overall data rate. Source: Gazettabyte
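The arithmetic behind the table is straightforward: the raw line rate is the symbol rate times the bits per symbol times two polarisations, before FEC and framing overhead are subtracted. A sketch with illustrative combinations (our assumptions, not the table's exact entries):

```python
def raw_rate_gbps(baud, bits_per_symbol, polarisations=2):
    # Raw line rate in Gb/s before FEC and framing overhead
    return baud * bits_per_symbol * polarisations

# Two hypothetical routes to a 400 Gigabit single carrier, each
# leaving headroom above 400G net for the overhead bits:
print(raw_rate_gbps(64, 4))  # 512: 64 Gbaud PM-16QAM
print(raw_rate_gbps(43, 6))  # 516: 43 Gbaud PM-64QAM
```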
But industry views differ as to how much scope there is to improve overall capacity of a fibre and the optical performance.
Roberts stresses that his job is to develop commercial systems rather than conduct lab 'hero' experiments. Such systems need to work in networks for 15 years and must be cost competitive. "It is not over yet," says Roberts.
He says we are still some way off from the point where all that remains are minor design tweaks. "I don't have fun changing the colour of the paint or reducing the cost of the washers by 10 cents," he says. "And I am having a lot of fun with the next-generation design [being developed by Ciena]."
"We are nearing the point of diminishing returns in terms of spectrum efficiency, and the same is true with DSP-ASIC development," says Winzer. Work will continue to develop higher speeds per wavelength, to increase capacity per fibre, and to achieve higher densities and lower costs. In parallel, work continues on software and networking architectures: for example, flexible multi-rate transponders for elastic optical networking, and software-defined networking that will be able to adapt the optical layer.
After that, designers are looking at using more amplification bands, such as the L-band and S-band alongside the current C-band to increase fibre capacity. But it will be a challenge to match the optical performance of the C-band across all bands used.
"I would believe in a doubling or maybe a tripling of bandwidth but absolutely not more than that," says Winzer. "This is a stop-gap solution that allows me to get to the next level without running into desperation."
The designers' 'next level' is spatial division multiplexing. Here, signals are launched down multiple channels, such as multiple fibres, multi-mode fibre and multi-core fibre. "That is what people will have to do on a five-year to 10-year horizon," concludes Winzer.
For Part 2, click here
See also:
- Scaling Optical Fiber Networks: Challenges and Solutions by Peter Winzer
- High Capacity Transport - 100G and Beyond, Journal of Lightwave Technology, Vol 33, No. 3, February 2015.
A version of this article first appeared in an OFC 2015 show preview
NeoPhotonics to expand its tunable laser portfolio
Part 1: Tunable lasers
NeoPhotonics will become the industry's main supplier of narrow line-width tunable lasers for high-speed coherent systems once its US $17.5 million acquisition of Emcore's tunable laser business is completed. Gazettabyte spoke with Ferris Lipscomb of NeoPhotonics about Emcore's external cavity laser and the laser performance attributes needed for metro and long haul.
Key specifications and attributes of Emcore's external cavity laser and NeoPhotonics's DFB laser array. Source: NeoPhotonics.
Emcore and NeoPhotonics are leading suppliers of tunable lasers for the 100 Gigabit coherent market, according to market research firm Ovum. NeoPhotonics will gain Emcore's external cavity laser (ECL) on the completion of the deal, expected in January. The company will also gain Emcore's integrable tunable laser assembly (ITLA), micro ITLA, tunable XFP transceiver, tunable optical sub-assemblies, and 10, 40, 100 and 400 Gig integrated coherent transmitter products.
Emcore's ECL has a long history. Emcore acquired the laser when it bought Intel's optical platform division for $85 million in 2007, while Intel acquired the laser from New Focus in 2002 in a $50 million deal. Meanwhile, NeoPhotonics bought Santur's distributed feedback (DFB) tunable laser array in 2011 in a $39 million deal.
The two lasers satisfy different needs: Emcore's is suited for high-speed long distance transmission while NeoPhotonics's benefits metro and intermediate distances.
The Emcore laser uses mirrors and optics external to the gain medium to create the laser's relatively long cavity. This aids high-performance coherent systems as it results in a laser with a narrow line-width. Coherent detection uses a mixing technique borrowed from radio, where an incoming signal is recovered by comparing it with a local oscillator or tone. "The narrower the line-width, the more pure that tone is that you are comparing it to," says Lipscomb.
Source: NeoPhotonics
A narrower line-width also means less digital signal processing (DSP) is needed to resolve the ambiguity that results from that line-width, says Lipscomb: "And the more DSP power can be spent on either compensating fibre impairments or going further [distances], or compensating the higher-order modulation schemes which require more DSP power to disentangle."
The ECL has a narrow line-width, specified at under 100kHz. "It is probably closer to 20kHz," says Lipscomb. One of the laser's drawbacks is that it uses a mechanical tuning mechanism that is relatively slow. It also has a lower output power of 16dBm, compared to up to 18dBm for NeoPhotonics's DFB laser array.
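Line-width matters because the laser phase drifts between symbols with variance 2*pi*linewidth*T, where T is the symbol period and the line-width is the combined transmitter and local-oscillator figure. A quick sketch (the figures are our assumptions, not NeoPhotonics specifications):

```python
import math

def phase_noise_var(linewidth_hz, baud):
    # Variance (rad^2) of laser phase drift over one symbol period:
    # sigma^2 = 2 * pi * linewidth / symbol_rate
    return 2 * math.pi * linewidth_hz / baud

# A 300 kHz line-width at 32 Gbaud gives the DSP's carrier-phase
# tracking three times the phase noise of a 100 kHz laser
narrow = phase_noise_var(100e3, 32e9)
wide = phase_noise_var(300e3, 32e9)
print(round(wide / narrow, 1))  # 3.0
```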
The metro market for 100 Gig coherent will emerge in volume towards the end of 2015 or early 2016
In contrast, NeoPhotonics' DFB laser array, suited to metro and intermediate reach applications, has a wider line-width specified at 300kHz, although 200kHz is typical. The DFB design comprises multiple lasers integrated compactly. The laser design also uses a MEMS that results in efficient coupling and the higher - 18dBm - output power. "Using the MEMS structure, you can integrate the laser with other indium phosphide or silicon photonics devices," says Lipscomb. "That is a little bit harder to do with the Emcore device."
Source: NeoPhotonics
It is the compactness and higher power of the DFB laser array that makes it suited to metro networks. The higher output power means that one laser can be used for both transmission and the local oscillator used to recover the received coherent signal. "More power can be good if you can live with the broader line-width," says Lipscomb. "It reduces overall system cost and can support higher-order modulation schemes over shorter distances."
Market opportunities
NeoPhotonics' focus is on narrow line-width lasers for coherent systems operating at 100 Gigabit and greater speeds. Lipscomb says the metro market for 100 Gig coherent will emerge in volume towards the end of 2015 or early 2016. "The distance here is less and therefore less compensation is needed and a little bit more line-width is tolerable," he says. "Also cost is an issue and a more integrated product can have potentially a lower cost."
For long haul, and especially at transmission rates of 200 and 400 Gig, the demands placed on the DSP are considerable. This is where Emcore's laser, with its narrow line-width, is most suited.
System vendors are already investigating 400 Gig and above transmission speeds. "For the high-end, line-width is going to be a critical factor," says Lipscomb. "Whatever modulation schemes there are to do the higher speeds, they are going to be the most demanding of laser performance."
For Part 2: Is the tunable laser market set for an upturn? click here
G.fast adds to the broadband options of the service providers
Feature: G.fast
Source: Alcatel-Lucent
Competition is commonly what motivates service providers to upgrade their access networks. And operators are being given every incentive to respond. Cable operators are offering faster broadband rates and then there are initiatives such as Google Fiber.
Internet giant Google is planning 1 Gigabit fibre rollouts in up to 34 US cities across 9 metro areas. The initiative prompted AT&T to issue its own list of 21 cities where it is considering offering a 1 Gigabit fibre-to-the-home (FTTH) service.
But delivering fibre all the way to the home is costly, and then there is the engineering time required to connect the home gateway to the network. Hence the operator interest in the emerging G.fast standard, the latest digital subscriber line (DSL) development that promises Gigabit rates using the telephone wire.
"G.fast eliminates the need to run fibre for the last 250 meters [to the home]," says Dudi Baum, CEO of Sckipio, an Israeli start-up developing G.fast chipsets. "Providing 1 Gigabit over a copper pair is cheaper and faster to deploy, compared to running fibre all the way."
For G.fast, you need the fibre closer to your house to get the Gigabit and that is not available today with most carriers
Until recently, operators faced a choice of whether to deploy FTTH or use fibre-to-the-node (FTTN) and VDSL to boost broadband rates. Now, such boundaries are disappearing, says Stefaan Vanhastel, marketing director for fixed networks, Alcatel-Lucent. Operators are more pragmatic in their deployments and are choosing the most suitable technology for a given deployment based on what is most cost effective and fastest to deploy.
"It is very much no longer black and white," agrees Julie Kunstler, principal analyst, components at market research firm, Ovum. "The same service providers will be supporting multiple access networks."
The advent of G.fast will enhance the operators' choice, boosting data rates while using existing copper to bridge the gap between the fibre and the home. But the technology is still some way off, and views differ as to whether deployments will begin in 2015 or 2016.
"For G.fast, you need the fibre closer to your house to get the Gigabit and that is not available today with most carriers," says Arun Hiremath, director, marketing at DSL chip company, Ikanos Communications. It will likely start with some small scale deployments, he says, "but the carriers will wait a little more for things to mature".
G.fast
G.fast enables Gigabit rates over telephone wire by expanding the usable spectrum to 106MHz. This compares with the 17MHz of spectrum used by VDSL2, the most advanced DSL standard currently deployed. But adopting the wider spectrum exacerbates two local-loop characteristics that dictate DSL performance: signal attenuation and crosstalk.
Operating at higher frequencies increases signal attenuation, shortening the copper reach over which data can be sent. VDSL2 is typically deployed over links of up to 1,500m; G.fast distances will more likely be 200m or less.
Dudi Baum
Crosstalk refers to signal leakage between copper pairs in a cable bundle. A cable can be made up of tens or hundreds of twisted copper pairs. The leakage causes each twisted pair to carry not only the signal sent but also noise: the sum of the leakage components from neighbouring DSL pairs.
Crosstalk becomes more prominent the higher the frequency. "One reason why no one has developed G.fast technology until now is the challenge of handling crosstalk at the much higher frequencies," says Baum. Indeed, from G.fast field trials, observed crosstalk is so severe that from certain frequencies upwards, the interference is as strong as the received signal, says Paul Spruyt, DSL strategist for fixed networks at Alcatel-Lucent.
Vectoring
Vectoring is a technique used to tackle crosstalk and restore a line's data capacity. It uses digital signal processing to implement noise cancellation and is already used for VDSL2. "Vectoring is considered a key aspect of G.fast, even more than for VDSL2," says Spruyt.
G.fast can be seen as a logical evolution of VDSL2, but there are also differences. Besides the wider 106MHz spectrum, G.fast has a different duplexing scheme. VDSL2 uses frequency-division duplexing (FDD), where data transmission is continuous in both directions - upstream (from the home) and downstream - but on different frequency bands or tones. In contrast, G.fast uses time-division duplexing (TDD), where the whole spectrum is used at any one moment either for upstream or for downstream data.
If a cable carries both G.fast and VDSL2 services to homes and businesses, G.fast is run in the 17-106MHz band to avoid overlapping with VDSL2, since crosstalk between the two cannot be cancelled because of their differing duplexing schemes.
Paul Spruyt
Both DSL schemes use discrete multi-tone, where each tone carries data bits. But G.fast uses half the number of tones - 2,048 - with each tone 12 times the bandwidth of the tones used for VDSL2.
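Those figures tie together: VDSL2's standardised tone spacing is 4.3125kHz, so a tone 12 times wider is 51.75kHz, and 2,048 such tones span almost exactly the 106MHz G.fast band. A quick check:

```python
vdsl2_spacing_khz = 4.3125                  # standardised VDSL2 tone spacing
gfast_spacing_khz = 12 * vdsl2_spacing_khz  # 51.75 kHz per G.fast tone
gfast_tones = 2048

spectrum_mhz = gfast_tones * gfast_spacing_khz / 1000
print(spectrum_mhz)  # 105.984 - the ~106 MHz G.fast band
```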
Operators can also configure the downstream-to-upstream ratio more easily using TDD. An 80 percent downstream/20 percent upstream split is common for homes, whereas businesses have symmetric data flows.
Only transmitting or only receiving at any one time also simplifies the G.fast analogue front-end circuitry, since it is less susceptible to signal echo; such echo is an issue for VDSL2 due to its simultaneous sending and receiving of data.
Operators want G.fast to deliver 150 Megabit-per-second (Mbps) aggregate data rates over 250m, 200Mbps over 200m, 500Mbps over 100m and up to 1 Gigabit-per-second over shorter spans. This compares to VDSL2's 70Mbps (50Mbps downstream, 20Mbps upstream) over 400m. With vectoring, VDSL2 performance is doubled: 100Mbps downstream and 40Mbps for the same span.
Vectoring works by measuring the crosstalk coupling on each line before the DSLAM - the platform at the cabinet, or the fibre distribution point unit for G.fast - generates anti-noise to null each line's crosstalk.
The crosstalk coupling between the pairs is estimated using special modulated ‘sync’ symbols that are sent between data transmissions. A user's DSL modem expects to see the modulated sync symbol, but in reality receives the symbol distorted with crosstalk from modulated sync symbols transmitted on the neighbouring lines.
The modem measures the error – the crosstalk – and sends it to the DSLAM. The DSLAM correlates the received error values on the ‘victim’ line with the pilot sequences transmitted on all the other ‘disturber’ lines. This way, the DSLAM measures the crosstalk coupling for every disturber–victim pair. Anti-noise is then generated using a vectoring chip in the DSLAM, and injected into the victim line on top of the transmitted signal to cancel the crosstalk picked up, a process repeated for each line.
Such an approach is known as pre-coding: in the downstream direction anti-noise signals are generated and injected in the DSLAM before the signal is transmitted on the line. For the upstream, post-coding is used: the DSLAM generates and adds the anti-noise after reception of the signal distorted with crosstalk. In this case, the DSL modem sends modulated sync symbols and the DSLAM measures the error signal and performs the correlations and anti-noise calculations.
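Pre-coding can be sketched as a small matrix operation per tone. In this toy model (illustrative coupling values, not a measured cable), the channel has direct gains on its diagonal and crosstalk couplings off it; inverting it - zero-forcing - means each modem receives only its own symbol:

```python
# Toy zero-forcing precoder for one tone and two copper pairs.
# Direct gains h11, h22; crosstalk couplings h12, h21 (assumed values)
h11, h12 = 1.00, 0.10
h21, h22 = 0.08, 1.00

def precode(s1, s2):
    # Invert the 2x2 channel so each modem sees only its own symbol
    det = h11 * h22 - h12 * h21
    x1 = (h22 * s1 - h12 * s2) / det
    x2 = (h11 * s2 - h21 * s1) / det
    return x1, x2

def channel(x1, x2):
    # What the two modems actually receive, crosstalk included
    return h11 * x1 + h12 * x2, h21 * x1 + h22 * x2

y1, y2 = channel(*precode(1.0, -1.0))
print(round(y1, 6), round(y2, 6))  # 1.0 -1.0
```

A production vectoring chip does this per tone, for every line pair, using the crosstalk couplings estimated from the sync-symbol errors described above.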
G.fast vectoring is more complex than vectoring for VDSL2.
Besides the strength of the crosstalk at higher frequencies, G.fast uses a power-saving mode that deactivates the line when no data is being sent. The vectoring algorithm must stop generating anti-noise each time a line is deactivated, and quickly resume when transmission restarts. A VDSL2 modem line can also be deactivated, but this is much less commonplace.
"The number of computations you need to do is proportional to the square of the number of lines," says Spruyt. For G.fast, the number of lines is far lower - 4 to 24, and even 48 in certain cases - because the G.fast mini-DSLAM is much closer to the home. For VDSL2, the number of lines can be 200 or 400.
However, G.fast's symbol rate is tied to its tone spacing and hence is 12 times faster than VDSL2's. That requires faster calculation, but since G.fast has half the number of tones and crosstalk cancellation is performed for each tone, the overall processing per line pair for G.fast is six times greater.
G.fast vectoring may thus be more complex but the overall computation - and power consumption - of the vectoring processor is lower than VDSL2 due to the fewer DSL lines.
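The trade-off can be totted up with a back-of-envelope model (ours, not a vendor's): vectoring workload scales with the square of the line count, times the number of tones, times the symbol rate.

```python
def vectoring_workload(lines, tones, symbol_rate):
    # Relative compute: lines^2 pair couplings, per tone, per symbol
    return lines ** 2 * tones * symbol_rate

vdsl2 = vectoring_workload(lines=200, tones=4096, symbol_rate=1)
gfast = vectoring_workload(lines=24, tones=2048, symbol_rate=12)

# Per line pair G.fast needs 6x the processing (12x the symbol rate,
# half the tones), but the far smaller line count wins overall
print(round(gfast / vdsl2, 4))  # 0.0864
```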
We should expect the first generation of G.fast to consume more power than VDSL2 silicon
Chip developments
The G.fast analogue silicon requires much faster analogue-to-digital and digital-to-analogue converters due to the broader spectrum used, while the G.fast line drivers use a lower transmit power due to the shorter reach requirements. "We should expect the first generation of G.fast to consume more power than VDSL2 silicon," says Spruyt.
Stefaan Vanhastel
The main functional blocks for G.fast and VDSL2 include the baseband digital signal processor, vectoring, the analogue front end, and the line driver. The degree to which they are integrated in silicon - whether one chip or four if the home-gateway functions are included - depends on where they are used.
"The chipsets will be designed differently for the different segments where they are used," says Hiremath. For example, the G.fast modem could be implemented as a single chip that includes the baseband, home gateway, and even the line driver due to the short lengths involved, he says.
Moreover, while the G.fast standard does not require backward compatibility with VDSL2, there is nothing stopping chipmakers from supporting both. The same was true with VDSL2 yet the resulting chipsets also supported ADSL2.
Ikanos has yet to unveil its G.fast silicon but it has announced its Neos development platform for customers to test and trial the technology. Hiremath says its G.fast design is based on the Neos architecture and that it expects first samples later this year.
Start-up Sckipio has also yet to detail its G.fast silicon design but says it will provide more information in the coming months. G.fast has system requirements that are difficult to meet, says Baum: "The challenge is not to show the technology working but to meet the standard's boundary requirements with a small, efficient design that provides 1 Gigabit." By boundary requirements Baum means the performance targets the modem needs to achieve - certain speeds and distances with a given packet loss, for example.
Sckipio already has first samples of its silicon. The company ported the RTL design of its chip onto a Cadence Palladium system - a box with hundreds of FPGAs that allows the complete hardware design to be emulated. The company also has cable models - bundles of twisted copper pairs characterised at greater than 200MHz - to test the design's performance. "We use those models to see the expected performance running our protocol over those wires," says Baum.
Alcatel-Lucent has developed its own vectoring know-how for VDSL2 and has now added G.fast. "Having our own vectoring technology means that we have our own vectoring processing," says Alcatel-Lucent's Vanhastel.
Alcatel-Lucent has conducted G.fast trials with A1 Telekom Austria. "The good news is that we have been able to show that with vectoring, you can get really close to single-user capacity; the same capacity you have if there is only a single user active on the line," says Vanhastel. In the trial, over 100m of cable, G.fast achieved just 60Mbps due to crosstalk. "Activating G.fast vectoring, it rose to 500Mbps - almost a factor of 10," he says.
Much work remains before G.fast is deployed in the network, says Alcatel-Lucent. The International Telecommunication Union's G.9701 G.fast physical layer document is 300 pages long and while consent has been achieved, approving the standard is expected to take the rest of the year. Interoperability, test, functionality and performance specifications are still to be written by the Broadband Forum and then there are regulatory issues to be overcome: G.fast's 106MHz spectrum overlaps with FM radio, for example.
Sckipio is more upbeat about timescales, believing operators will start deployments in 2015 due to competition including the cable operators. The start-up says it has multiple field trials of its G.fast silicon this year.
Meanwhile, extending the spectrum to 212MHz is the next logical step in the development of G.fast. "Bonding is another concept that could be applied," says Spruyt.
There is life in the plain old telephone service yet.
This is an extended version of an article that first appeared in New Electronics, click here.
Amplifiers come to the fore to tackle agile network challenges
The growing sophistication of high-speed optical transmission based on 100 Gigabit-plus lightpaths and advanced ROADMs is rekindling interest in amplifier design.

Raman is a signature of the spread of 100 Gig but also the desire of being upgradable to higher bit rates
Per Hansen, II-VI
For the last decade, amplifier designers have been tasked with reducing the cost of Erbium-doped fibre amplifiers (EDFAs). "Now there is a need for new solutions that are more expensive," says Daryl Inniss, vice president and practice leader, components at market research firm, Ovum. "It is no longer just cost-cutting."
Higher output power amplifiers are needed to boost 100 Gig-plus signals that have less energy. Such amplifiers must also counter greater losses incurred by sophisticated colourless, directionless and contentionless (CDC) ROADM nodes. System vendors also require more power-efficient and compact amplifiers to maximise the chassis slots available for revenue-generating 100 Gig transponders.
Such requirements have created interest in all amplifier types, not just EDFAs but hybrid EDFA-Raman and Raman amplifiers.
"Improving the optical signal-to-noise ratio (OSNR) is of paramount consideration to enable higher capacity and reach for 100 Gig-plus lambdas," says Madhu Krishnaswamy, director, product line management at JDSU. "Raman amplification is becoming increasingly critical to delivering this OSNR improvement, largely in long haul."
Other developments include micro-amplifiers that boost single channels, and arrayed amplifiers used with ROADM nodes. These developments are also driving optical components: power-efficient, integrated pump lasers are needed for such higher-power amplifiers.
Operators' requirements span all three amplifier classes: EDFA, hybrid EDFA-Raman and all-Raman, says Anuj Malik, manager, solutions marketing at Infinera: "Some networks require a high OSNR and use hybrid amplifiers but some networks are prone to fibre cuts and hence avoid hybrid as fibre splices can cause more problems with Raman."
Raman differs from an EDFA in several ways. Raman has a lower power efficiency: more optical pump power is needed to achieve a given gain and output power. This means higher power must be launched into a Raman amplifier, raising safety issues for staff and equipment. The high launch power requires a sound connection between the Raman pump source and the fibre to avoid equipment being damaged, hence Infinera's reference to fibre splices.
Yet if Raman has a lower power efficiency, it has notable benefits when compared to an EDFA.
An EDFA performs lumped amplification, boosting the signal at distinct points in the network, every 80km commonly. Raman amplifies the signal as it travels down the fibre.
"With Raman amplification the gain is out in the fibre span, and Raman delivers a lower equivalent noise figure - a big advantage," says Per Hansen, head of product marketing, amplifier business unit at II-VI. The company acquired Oclaro's amplifier business in November 2013.
An amplifier's noise figure is a measure of performance in the network. All amplifiers introduce noise so that the input signal-to-noise ratio divided by the output signal-to-noise ratio is always greater than one. "Raman gives you a significantly better noise figure, an improvement in the range of 3 to 5dB," says Hansen.
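Why those decibels matter can be seen from a common engineering rule of thumb: for a chain of N identical amplified spans, the received OSNR in a 0.1nm reference bandwidth is roughly 58 + Pch - Lspan - NF - 10log10(N) dB. A sketch with assumed span figures (not from the article):

```python
import math

def osnr_db(p_ch_dbm, span_loss_db, nf_db, n_spans):
    # Rule-of-thumb OSNR (0.1 nm bandwidth) after n identical spans,
    # each followed by one amplifier of noise figure nf_db
    return 58 + p_ch_dbm - span_loss_db - nf_db - 10 * math.log10(n_spans)

# A 4 dB lower effective noise figure from Raman buys 4 dB of OSNR,
# which can instead be traded for roughly 2.5x as many spans
edfa_only = osnr_db(0, 22, 5.5, 20)   # assumed EDFA noise figure
with_raman = osnr_db(0, 22, 1.5, 20)  # assumed hybrid effective NF
print(round(with_raman - edfa_only, 1))  # 4.0
```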
EDFA designs continue to progress alongside the growing interest in hybrid and all-Raman. JDSU says that higher output power EDFAs, greater than 24dBm, are increasingly relevant for 96-plus channel systems that support super-channels and flexible grid ROADMs in the metro and long haul.
"Switchable-gain EDFAs to optimise the noise figure over a wider dynamic range of operation is another element enhancing overall system OSNR," says Krishnaswamy. "This is also common for metro and long haul."
Hybrid amplification combines the best characteristics of EDFA and Raman. In a hybrid, Raman is the first amplification stage where noise figure performance is most important, while the EDFA, with its power efficiency, is used as the second stage, boosting the signal to a higher level.
According to Finisar, 100 Gig uses the same receiver OSNR as 10 Gig transmissions. However, the transmission power per channel at 100 Gig is reduced, from 0 to 1dBm at 10 Gig to -2 to -3dBm at 100 Gig, due to non-linearity transmission issues. "Immediately you lose a few dBs in the OSNR," says Uri Ghera, CTO of the optical amplifier products at Finisar.
An overwhelming portion of WANs worldwide have adopted hybrid EDFA-Raman and this trend is expected to continue for the foreseeable future.
For 400 Gigabit transmission, the weaker signal sent requires the OSNR at the receiver to be 4-10dB higher, says Ghera: "This is why you need hybrid Raman-EDFA."
Moving to a narrower channel spacing using a flexible grid also places greater demands on amplifiers. "Because of super-channels, if before we were talking about 100 channels [in the C-band], for a channel spacing of 37.5GHz it is more like 130 channels," says Ghera. "If you want the same power per channel, it means higher-output amplifiers."
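Ghera's channel counts follow from the C-band's roughly 4.8THz of usable spectrum:

```python
c_band_ghz = 4800                # approximate usable C-band spectrum
fixed_grid = c_band_ghz / 50     # channels on the 50 GHz fixed grid
flex_grid = c_band_ghz / 37.5    # channels at 37.5 GHz spacing
print(int(fixed_grid), int(flex_grid))  # 96 128
```

The 128 figure is close to Ghera's "more like 130 channels", and each extra channel at the same per-channel power demands proportionally more total amplifier output.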
The spectrum amplified by an EDFA is determined by the fibre. EDFAs amplify the 35nm-wide C-band spanning 1530 to 1565nm, and also the separate L-band at 1570 to 1605nm, if that is used. In contrast, the spectrum amplified by Raman is determined by the pump laser's wavelength. This leads to another benefit of all-Raman: far broader spectrum amplification, 100nm and wider.
Xtera is a proponent of all-Raman amplification. The system vendor has demonstrated 60nm- and even 100nm-wide spectrum amplification, broader than the C and L bands combined.
Xtera conducted trials with Verizon in 2013 using its Nu-Wave Optima platform and Raman operating over a 61nm window. The trials are detailed in Table 1.
Between 15 and 40 Terabits were sent over 4,500km and 1,500km, respectively, using several modulation schemes and super-channel arrangements. In comparison, state-of-the-art 100 Gig-plus systems achieve 16 Terabit typically across the C-band, and are being extended to 20-24 Terabit using closer-spaced channels. Using 16-QAM modulation, the reach achieved is 600km and more.
Table 1: Xtera's Verizon trial results using a 61nm spectrum and all-Raman amplification.
JDSU says hybrid amplification remains the most cost-competitive way to deliver the required OSNR and system capacity, while all-Raman can potentially increase system capacity.
Overall, it is network capacity and reach requirements that drive amplifier choice, says Krishnaswamy: "An overwhelming portion of WANs worldwide have adopted hybrid EDFA-Raman and this trend is expected to continue for the foreseeable future."
Meanwhile, the single-channel micro-amp sits alongside, or is integrated within, the transmitter. Operators want a transponder that meets various requirements for their reconfigurable networks. "If you look into the numbers, you want to boost the signal early on before it is attenuated," says II-VI's Hansen. "That gives you the best OSNR performance."
"This [single-channel amp] is a type that was rare in old systems," adds Finisar's Ghera. "It is also a market that is growing the fastest for us."
The micro-amp needs to be compact and low power, being alongside the power-hungry 100 Gig coherent transmitter. This is driving uncooled pump laser development and system integration.
Similar design goals apply to arrayed amplifiers that counter losses in ROADM add/ drop cards. "If you have some of the features of colourless, directionless and contentionless, you incur bigger losses in the node but you can make it up with other amps, one of these being arrayed amps," says Hansen.
Arrayed designs can have eight or more amps to support multiple-degree nodes so that achieving a power-efficient, compact design is paramount. Hence II-VI's development of an uncooled dual-chip pump laser integrated in a package. "Having four packages to pump eight amps in a small space that do not require cooling is a huge advantage," says Hansen.
The amplifier design challenges are set to continue.
One, highlighted by Infinera, is expanding amplification to the L-band to double overall capacity. JDSU highlights second-order and third-order Raman designs that use a more complex pump laser arrangement to improve system OSNR. Lowering the noise figure of EDFAs will be another continuing design goal, says JDSU.
II-VI expects further challenges in miniaturising single-channel and arrayed amplifier designs. Finisar also cites the need for more compact designs, citing putting an EDFA in an XFP package as an example.
Another challenge is producing high-power Raman amplifiers that can bridge extremely long spans of 300 to 400km. Such an amplifier must be able to read lots of physical parameters associated with the span and set the line accordingly, says Ghera.
II-VI's Hansen says the adoption of Raman and arrayed amplifiers is a good indicator of the wider deployment of next-generation network architectures. "Raman is a signature of the spread of 100 Gig but also the desire of being upgradable to higher bit rates," he says.
The article first appeared as an OFC 2014 show preview piece
