Merits and challenges of optical transmission at 64 Gbaud
u2t Photonics recently announced a balanced detector that supports 64Gbaud, promising coherent transmission systems with double the data rate. But even if the remaining components - a modulator and a DSP-ASIC capable of operating at 64Gbaud - were available, would such an approach make sense?
Gazettabyte asked system vendors Transmode and Ciena for their views.
Transmode:
Transmode points out that 100 Gigabit dual-polarisation, quadrature phase-shift keying (DP-QPSK) using coherent detection has several attractive characteristics as a modulation format.
It can be used in the same grid as 10 Gigabit-per-second (Gbps) and 40Gbps signals in the C-band. It also has a similar reach to 10Gbps, achieving a comparable optical signal-to-noise ratio (OSNR). Moreover, it has superior tolerance to chromatic dispersion and polarisation mode dispersion (PMD), enabling easier network design, especially with meshed networking.
The IEEE has started work standardising the follow-on speed of 400 Gigabit. "This is a reasonable step since it will be possible to design optical transmission systems at 400 Gig with reasonable performance and cost," says Ulf Persson, director of network architecture in Transmode's CTO office.
Moving to 100Gbps was a large technology jump that involved advanced technologies such as high-speed analogue-to-digital (A/D) converters and advanced digital signal processing, says Transmode. But it kept the complexity within the optical transceivers which could be used with current optical networks. It also enabled new network designs due to the advanced chromatic dispersion and PMD compensations made possible by the coherent technology and the DSP-ASIC.
For 400Gbps, the transition will be simpler. "Going from 100 Gig to 400 Gig will re-use a lot of the technologies used for 100 Gig coherent," says Magnus Olson, director of hardware engineering.
So even if there will be some challenges with higher-speed components, the main challenge will move from the optical transceivers to the network, he says. That is because whatever modulation format is selected for 400Gbps, it will not be possible to fit that signal into current networks while keeping both the current channel plan and the reach.
"From an industry point of view, a metro-centric cost reduction of 100Gbps coherent is currently more important than increasing the bit rate to 400Gbps"
"If you choose a 400 Gigabit single carrier modulation format that fits into a 50 Gig channel spacing, the optical performance will be rather poor, resulting in shorter transmission distances," says Persson. Choosing a modulation format that has a reasonable optical performance will require a wider passband. Inevitably there will be a tradeoff between these two parameters, he says.
This will likely lead to different modulation formats being used at 400 Gig, depending on the network application targeted. Several modulation formats are being investigated, says Transmode, but the two most discussed are:
- 4x100Gbps super-channels modulated with DP-QPSK. This is the same as today's modulation format with the same optical performance as 100Gbps, and requires a channel width of 150GHz.

- 2x200Gbps super-channels, modulated with DP-16-QAM. This will have a passband of about 75GHz. It is also possible to put each of the two channels in separate 50GHz-spaced channels and use existing networks. The effective bandwidth will then be 100GHz for a 400Gbps signal. However, the OSNR performance of this format is about 5-6dB worse than the 100Gbps super-channels. That equates to about a quarter of the reach at 100Gbps.

As a result, 100Gbps super-channels are more suited to long-distance systems while 200Gbps super-channels are applicable to metro/regional systems.
Since 200Gbps super-channels can use standard 50GHz spacing, they can be used in existing metro networks carrying a mix of traffic including 10Gbps and 40Gbps light paths.
"Both 400 Gig alternatives mentioned have a baud rate of about 32 Gig and therefore do not require a 64 Gbaud photo detector," says Olson. "If you want to go to a single channel 400G with 16-QAM or 32-QAM modulation, you will get 64Gbaud or 51Gbaud rate and then you will need the 64 Gig detector."
The single-channel formats, however, have worse OSNR performance than 200Gbps super-channels - about 10-12dB worse than 100Gbps, says Transmode - and a similar spectral efficiency to 200Gbps super-channels. "So they are not the most likely candidates for 400 Gig," says Olson. "It is therefore unclear to us if this detector will have a use in 400 Gigabit transmission in the near future."
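For the arithmetic behind these symbol rates, here is a minimal sketch. It assumes dual polarisation and a combined FEC and framing overhead of about 28 percent - an assumption chosen because it reproduces the 32, 64 and 51Gbaud figures quoted above, not a figure from Transmode.

```python
def symbol_rate_gbaud(payload_gbps, bits_per_symbol, polarisations=2, overhead=0.28):
    """Symbol rate of a single optical carrier.

    payload_gbps    -- client data rate carried by the carrier
    bits_per_symbol -- log2 of the constellation size (QPSK=2, 16-QAM=4, 32-QAM=5)
    overhead        -- assumed combined FEC/framing overhead (hypothetical 28%)
    """
    line_rate_gbps = payload_gbps * (1 + overhead)
    return line_rate_gbps / (bits_per_symbol * polarisations)

# 100Gbps DP-QPSK: the roughly 32Gbaud used in today's coherent systems
print(round(symbol_rate_gbaud(100, 2), 1))  # 32.0
# Single-carrier 400Gbps DP-16-QAM: this is what needs the 64Gbaud detector
print(round(symbol_rate_gbaud(400, 4), 1))  # 64.0
# Single-carrier 400Gbps DP-32-QAM
print(round(symbol_rate_gbaud(400, 5), 1))  # 51.2
```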
Transmode points out that the state-of-the-art bit rate has traditionally been limited by the available optics. This has kept the baud rate low by using higher order modulation formats that support more bits per symbol to enable existing, affordable technology to be used.
"But the price you have to pay, as you can not fool physics, is shorter reach due to the OSNR penalty," says Persson.
Now the challenges associated with the DSP-ASIC development will be as important as the optics in further boosting capacity.
The bundling of optical carriers into super-channels is an approach that scales well beyond what can be accomplished with improved optics. "Again, we have to pay the price, in this case eating greater portions of the spectrum," says Persson.
Improving the bandwidth of the balanced detector to the extent done by u2t is a very impressive achievement. But it will not make it into new products alone; modulators and a faster DSP-ASIC will also be required.
"From an industry point of view, a metro-centric cost reduction of 100Gbps coherent is currently more important than increasing the bit rate to 400Gbps," says Olson. "When 100 Gig coherent costs less than 10x10 Gig, both in dollars and watts, the technology will have matured to again allow for scaling the bit rate, using technology that suits the application best."
Ciena:
How the optical performance changes going from 32Gbaud to 64Gbaud depends largely on how well the DSP-ASIC can mitigate the dispersion penalties, which worsen with speed as the symbol duration shortens.
BPSK goes twice as far as QPSK which goes about 4.5 times as far as 16-QAM
"I would also expect a higher sensitivity would be needed also, so another fundamental impact," says Joe Berthold, vice president of network architecture at Ciena. "We have quite a bit or margin with the WaveLogic 3 [DSP-ASIC] for many popular network link distances, so it may not be a big deal."
With a good implementation of a coherent transmission system, the reach is primarily a function of the modulation format. BPSK goes twice as far as QPSK, which goes about 4.5 times as far as 16-QAM, says Berthold.
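A rough sketch of what those ratios mean in practice, anchored to a hypothetical 3,000km DP-QPSK baseline (a figure Kim Roberts uses later in this series). The dB equivalents assume reach scales inversely with the linear OSNR penalty - a simplification, not a Ciena figure.

```python
import math

# Berthold's ratios: BPSK goes 2x as far as QPSK; QPSK ~4.5x as far as 16-QAM.
REACH_VS_QPSK = {"DP-BPSK": 2.0, "DP-QPSK": 1.0, "DP-16-QAM": 1 / 4.5}

QPSK_BASELINE_KM = 3000  # hypothetical long-haul DP-QPSK reach

for fmt, ratio in REACH_VS_QPSK.items():
    # OSNR penalty relative to QPSK, assuming reach ~ 1/penalty (linear)
    penalty_db = 10 * math.log10(1 / ratio)
    print(f"{fmt}: ~{QPSK_BASELINE_KM * ratio:.0f}km ({penalty_db:+.1f}dB vs QPSK)")
```

The resulting ~670km for DP-16-QAM sits within the 500-700km regional reach Roberts quotes later in this series.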
"On fibres without enough dispersion, a higher baud rate will go 25 percent further than the same modulation format at half of that baud rate, due to the nonlinear propagation effects," says Berthold. This is the opposite of what occurred at 10 Gigabit incoherent. On fibres with plenty of local dispersion, the difference becomes marginal, approximately 0.05 dB, according to Ciena.
Regarding how spectral efficiency changes with symbol rate, doubling the baud rate doubles the spectral occupancy, says Berthold, so the benefit of upping the baud rate is that fewer components are needed for a super-channel.
"Of course if the cost of the higher speed components are higher this benefit could be eroded," he says. "So the 200 Gbps signal using DP-QPSK at 64 Gbaud would nominally require 75GHz of spectrum given spectral shaping that we have available in WaveLogic 3, but only require one laser."
Does Ciena have a view as to when 64Gbaud systems will be deployed in the network?
Berthold says this is hard to answer. "It depends on expectations that all elements of the signal path, from modulators and detectors to A/D converters, to DSP circuitry, all work at twice the speed, and you get this speedup for free, or almost."
The question has two parts, he says: When could it be done? And when will it provide a significant cost advantage? "As CMOS geometries narrow, components get faster, but mask sets get much more expensive," says Berthold.
Briefing: Flexible elastic-bandwidth networks
Vendors and service providers are implementing the first examples of flexible, elastic-bandwidth networks. Infinera and Microsoft detailed one such network at the Layer123 Terabit Optical and Data Networking conference held earlier this year.
Optical networking expert Ioannis Tomkos of the Athens Information Technology Center explains what flexible, elastic bandwidth is.
Part 1: Flexible elastic bandwidth

"We cannot design anymore optical networks assuming that the available fibre capacity is abundant"
Prof. Tomkos
Several developments are driving the evolution of optical networking. One is the incessant demand for bandwidth to cope with the 30+% annual growth in IP traffic. Another is the changing nature of the traffic due to new services such as video, mobile broadband and cloud computing.
"The characteristics of traffic are changing: A higher peak-to-average ratio during the day, more symmetric traffic, and the need to support higher quality-of-service traffic than in the past," says Professor Ioannis Tomkos of the Athens Information Technology Center.
"The growth of internet traffic will require core network interfaces to migrate from the current 10, 40 and 100Gbps to 1 Terabit by 2018-2020"
Operators want a more flexible infrastructure that can adapt to meet these changes, hence their interest in flexible elastic-bandwidth networks. The operators also want to grow bandwidth as required while making best use of the fibre's spectrum. They also require more advanced control plane technology to restore the network elegantly and promptly following a fault, and to simplify the provisioning of bandwidth.
The growth of internet traffic will require core network interfaces to migrate from the current 10, 40 and 100Gbps to 1 Terabit by 2018-2020, says Tomkos. Such bit rates must be supported with very high spectral efficiencies, which, according to the latest demonstrations, are only a factor of two away from the Shannon limit. Simply put, optical fibre is rapidly approaching its maximum capacity.
"We cannot design anymore optical networks assuming that the available fibre capacity is abundant," says Tomkos. "As is the case in wireless networks where the available wireless spectrum/ bandwidth is a scarce resource, the future optical communication systems and networks should become flexible in order to accommodate more efficiently the envisioned shortage of available bandwidth.”
The attraction of multi-carrier schemes and advanced modulation formats is the prospect of operators modifying capacity in a flexible and elastic way based on varying traffic demands, while maintaining cost-effective transport.
Elastic elements
Optical systems providers now realise they can no longer keep increasing a light path's data rate while expecting the signal to still fit in the standard International Telecommunication Union (ITU)-defined 50GHz band.
It may still be possible to fit a 200 Gigabit-per-second (Gbps) light path in a 50GHz channel but not a 400Gbps or 1 Terabit signal. At 400Gbps, 80GHz is needed and at 1 Terabit it rises to 170GHz, says Tomkos. This requires networks to move away from the standard ITU grid to a flexible-based one, especially if operators want to achieve the highest possible spectral efficiency.
Vendors can increase the data rate of a carrier signal by using more advanced modulation schemes than dual polarisation, quadrature phase-shift keying (DP-QPSK), the de facto 100Gbps standard. Such schemes include quadrature amplitude modulation - 16-QAM, 64-QAM and 256-QAM - but the more amplitude levels used, and hence the higher the data rate, the shorter the resulting reach.
Another technique vendors are using to achieve 400Gbps and 1Tbps data rates is to move from a single carrier to multiple carriers or 'super-channels'. Such an approach boosts the data rate by encoding data on more than one carrier and avoids the loss in reach associated with higher-order QAM. But this comes at a cost: using multiple carriers consumes more of the precious spectrum.
As a result, vendors are looking at schemes to pack the carriers closely together. One is spectral shaping. Tomkos also details the growing interest in schemes such as optical orthogonal frequency division multiplexing (OFDM) and Nyquist WDM. For Nyquist WDM, the subcarriers are spectrally shaped so that they occupy a bandwidth close or equal to the Nyquist limit to avoid inter-symbol interference and crosstalk during transmission.
Both approaches have their pros and cons, says Tomkos, but they promise an optimum spectral efficiency of 2N bits-per-second-per-Hertz (2N bits/s/Hz), where N is the number of bits per symbol - the base-2 logarithm of the number of constellation points.
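As a worked illustration of that limit, the sketch below computes the ideal dual-polarisation spectral efficiency and the minimum spectrum a given line rate would occupy at the Nyquist limit. It deliberately ignores FEC overhead and guard bands, which is why real figures, such as the 80GHz Tomkos quotes for 400Gbps, differ from these ideals.

```python
import math

def nyquist_se(constellation_points, polarisations=2):
    """Ideal spectral efficiency in bits/s/Hz at the Nyquist limit:
    2N for dual polarisation, where N = log2(constellation points)."""
    return polarisations * math.log2(constellation_points)

def min_bandwidth_ghz(line_rate_gbps, constellation_points):
    """Minimum occupied spectrum at the Nyquist limit, ignoring FEC
    overhead and guard bands (assumptions real systems violate)."""
    return line_rate_gbps / nyquist_se(constellation_points)

print(nyquist_se(4))                # DP-QPSK: 4.0 bits/s/Hz
print(min_bandwidth_ghz(400, 4))    # 400Gbps DP-QPSK: 100.0 GHz
print(min_bandwidth_ghz(400, 16))   # 400Gbps DP-16-QAM: 50.0 GHz
print(min_bandwidth_ghz(1000, 16))  # 1Tbps DP-16-QAM: 125.0 GHz
```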
The attraction of these techniques - multi-carrier schemes and advanced modulation formats - is the prospect of operators modifying capacity in a flexible and elastic way based on varying traffic demands, while maintaining cost-effective transport.
"With flexible networks, we are not just talking about the introduction of super-channels, and with it the flexible grid," says Tomkos. "We are also talking about the possibility to change either dynamically."
According to Tomkos, vendors such as Infinera with its 5x100Gbps super-channel photonic integrated circuit (PIC) are making an important first step towards flexible, elastic-bandwidth networks. But for true elastic networks, a flexible grid is needed as is the ability to change the number of carriers on-the-fly.
"Once we have those introduced, in order to get to 1 Terabit, then you can think about playing with such parameters as modulation levels and the number of carriers, to make the bandwidth really elastic, according to the connections' requirements," he says.
Meanwhile, there are still technology advances needed before an elastic-bandwidth network is achieved, such as software-defined transponders and a new advanced control plane.
Tomkos says that operators are now using control plane technology that co-ordinates between layer three and the optical layer to reduce network restoration time from minutes to seconds. Microsoft and Infinera report that they have gone from tens of minutes down to a few seconds using the more advanced optical infrastructure. "They [Microsoft] are very happy with it," says Tomkos.
But to provision new capacity at the optical layer, operators are talking about requirements in the tens of minutes; something they do not expect will change in the coming years. "Cloud services could speed up this timeframe," says Tomkos.
"There is usually a big lag between what operators and vendors do and what academics do," says Tomkos. "But for the topic of flexible, elastic networking, the lag between academics and the vendors has become very small."
Further reading:
Optical transmission's era of rapid capacity growth
Kim Roberts, senior director of coherent systems at Ciena, moves from theory to practice with a discussion of practical optical transmission systems supporting 100Gbps and, in future, 400 Gigabit and 1 Terabit line rates. This discussion is based on a talk Roberts gave at Layer123's Terabit Optical and Data Networking conference held in Cannes recently.
Part 2: Commercial systems
The industry is experiencing a period of rapid growth in optical transmission capacity. The years 1995 to 2006 were marked by a gradual increase in system capacity with the move to 10 Gigabit-per-second (Gbps) wavelengths. But the pace picked up with the advent of first 40Gbps direct detection and then coherent transmission, as shown by the red curve in the chart.
Source: Ciena
The chart's left y-axis shows bits-per-second-per-Hertz (bits/s/Hz). The y-axis on the right is an alternative representation of capacity, expressed in Terabits in the C-band. "The C-band remains, on most types of fibre, the lowest cost and the most efficient," says Roberts.
The notable increase started with 40Gbps in a 50GHz ITU channel - 46Gbps to accommodate forward error correction (FEC) - and then, in 2009, 100Gbps (112Gbps) in the same width channel. In Ciena's (Nortel's) case, 100Gbps transmission was achieved using two carriers, each carrying 56Gbps, in one 50GHz channel.
"It is going to get hard to achieve spectral efficiencies much beyond 5bits/s/Hz. Getting hard means it is going to take the industry longer"
The chart's blue labels represent future optical transmission implementations. The 224Gbps in a 50GHz channel (200Gbps of data) is achieved using more advanced modulation. Instead of dual polarisation, quadrature phase-shift keying (DP-QPSK) coherent transmission, DP-16-QAM will be used, based on phase and amplitude modulation.
At 448Gbps, two carriers will be used, each carrying 224Gbps DP-16-QAM in a 50GHz band. "Two carriers, two polarisations on each, and 16-QAM on each," says Roberts.
As explained in Part 1, two carriers are needed because squeezing 400Gbps into the 50GHz channel will have unacceptable transmission performance. But instead of using two 50GHz channels - one for each carrier - 80GHz of spectrum will be needed overall. That is because the latest DSP-ASICs, in this case Ciena's WaveLogic 3 chipset, use waveform shaping, packing the carriers closer and making better use of the spectrum available. For the scheme to be practical, however, the optical network will also require flexible-spectrum ROADMs.
One Terabit transmission extends the concept by using five carriers, each carrying 200Gbps. This requires an overall spectrum of 160-170GHz. "The measurement in the lab that I have shown requires 200GHz using WaveLogic 3 technology," says Roberts, who stresses that these are lab measurements and not a product.
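A sketch of the per-carrier arithmetic behind these figures. The 40GHz effective carrier spacing is inferred from the quoted totals (80GHz for two carriers, 200GHz in the lab for five); it is not a number Ciena states.

```python
# Per-carrier arithmetic behind the super-channel figures (a sketch; the
# 40GHz effective carrier spacing is inferred from the totals above).
bits_per_symbol = 4       # 16-QAM
polarisations = 2
carrier_gbps = 224        # per-carrier line rate, FEC included

print(carrier_gbps / (bits_per_symbol * polarisations))  # 28.0 Gbaud per carrier

carrier_spacing_ghz = 40  # assumed, after WaveLogic 3-style waveform shaping
for carriers in (2, 5):
    print(f"{carriers * carrier_gbps}Gbps in {carriers * carrier_spacing_ghz}GHz")
# 448Gbps in 80GHz; 1120Gbps in 200GHz (the lab figure Roberts quotes)
```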
Slowing down
Roberts expects progress in line rate and overall transmission capacity to slow once 400Gbps transmission is achieved, as indicated by the curve's shallower gradient in future years.
"It is going to get hard to achieve spectral efficiencies much beyond 5bits/s/Hz" says Roberts. "Getting hard means it is going to take the industry longer." The curve is an indication of what is likely to happen, says Roberts: "We are reaching closer and closer to the Shannon bound, so it gets hard."
Roberts says that lab "hero" experiments can go far beyond 5 or 6 bits/s/Hz but that what the chart is showing are system product trends: "Commercial products that can handle commercial amounts of noise, commercial margins and FEC; all the things that make it a useful product."
Reach
What the chart does not show is how transmission reach changes with the modulation scheme used. To this end, Roberts refers to the chart discussed in Part 1.
Source: Ciena
The 100Gbps blue dot is the WaveLogic 3 performance achieved with the same optical signal-to-noise ratio (OSNR) as used at 10Gbps.
"If you apply the same technology, the same FEC at 16-QAM at the same symbol rate, you get 200Gbps or twice the throughput," says Roberts. "But as you can see on the curve, you get a 4.6dB penalty [at 200Gbps] inherent in the modulation."
What this means is that the reach of an optical transport system is no longer 3,000km but rather the 500-700km of regional systems, says Roberts.
Part 1: The capacity limits facing optical networking
Part 3: 2020 vision
Q&A with JDSU's CTO
In Part 1 of a Q&A with Gazettabyte, Brandon Collings, JDS Uniphase's CTO for communications and commercial optical products, reflects on the key optical networking developments of the coming decade, how the role of optical component vendors is changing and next-generation ROADMs.

"For transmission components, photonic integration is the name of the game. If you are not doing it, you are not going to be a player"
Brandon Collings (left), JDSU
Q: What are the key optical networking trends of the next decade?
A: The two key pieces of technology at the photonic layer in the last decade were ROADMs [reconfigurable optical add-drop multiplexers] and the relentless reduction in size, cost and power of 10 Gigabit transponders.
If you look at the next decade, I see the same trends occupying us.
We are seeing a whole other generation of reconfigurable networks - this whole colourless, directionless, flexible spectrum - all this stuff is coming and it is requiring a complete overhaul of the transport network. We have to support Raman [amplifiers] and we need to support more flexible [optical] channel monitors to deal with flexible spectrum.
We have to overhaul every aspect of the transport system: the components, design, capability, usability and the management. It may take a good eight years for the dust to settle on how that all plays out.
The other piece is transmission size, cost and power.
Right now a 40 Gig or a 100 Gig transponder is large, power-hungry and extremely expensive. Ironically they don't look too different to a 10 Gig transponder in 1998 and you see where that has gone.
You have seen our recent announcement [a tunable laser in an SFP+ optical pluggable module]; that whole thing is now tunable, the size of your pinkie and costs a fraction of what it did in 1998.
I expect that same sort of progression to play out for 100 Gig, and we'll start to get into 400 Gig and some flexible devices in between 100 and 400 Gig.
The name of the game is going to be getting size, cost and power down to ensure density keeps going up and the cost-per-bit keeps going down; all that is enabled by the photonic devices themselves.
Is that what will occupy JDSU for the next decade?
This is what will occupy us at the component level. As you go up one level - and this will impact us more indirectly than it will our customers - we are seeing this ramp of capacity, driven by the likes of video, where the willingness to pay-per-bit is dropping through the floor but the cost to deliver that bit is dropping a lot less.
Operators are caught in the middle and they are after efficiency and cost advantages when operating their networks. We are seeing a re-evaluation of the age-old principles in how networks are operated: How they do protection, how they offer redundancy and how they do aggregation.
People are saying: 'Well, the optical layer is actually fairly cheap compared to layers two and three. Let's see if we can't ask more of the somewhat cheaper network and maybe pull some of the complexity and requirements out of the upper layers and make that simpler, to end up with an overall cheaper and easier network to operate.'
That is putting a lot of feature requirements on us at the hardware level to build optical networks that are more capable and do more, as well as on our customers that must make that network easier to operate.
That is a challenge that will be a very interesting area of differentiation. There are so many knobs to turn as you try to build a better delivery system optimised over packets, OTN [Optical Transport Network] and photonics.
Are you noting changes among system vendors to become more vertically integrated?
I've heard whisperings of vendors wanting to figure out how they could be more vertically integrated. That's because: 'Well hey, that could make our products cheaper and we could differentiate'. But I think the reality is moving in the opposite direction.
To build differentiated, compelling products, you have to have expertise, capability and technology control almost all the way down to the materials level. Take for example the tunable XFP; that whole thing is enabled by complete technology ownership of an indium-phosphide fab and all the manufacturing that goes around it. That is a herculean effort.
It is tough to say they [system vendors] want to be vertically integrated because to do so effectively you need just a gigantic organisation.
JDSU is vertically integrated. We have an awful lot of technology and we have got a very large manufacturing infrastructure expertise and know-how. We can produce competitive products because for this particular application we use a PLC [planar lightwave circuit], and for that one, gallium arsenide. We can do this because we diversify all this infrastructure, operation and company size across a wide customer base.
Increasingly this also extends into adjacent markets like solar, gesture recognition and optical interconnects. These adjacent spaces would not be something that a system vendor would probably want to get into.
The bottom line is that it [the trend] is actually going in the opposite direction because the level, size and scope of the vertical integration would need to be very large and completely non-trivial if system vendors want to be differentiating and compelling. And the business case would not work very well because it would only be for their product line.

"No one says exactly what they will pay for next-gen ROADMs but all can articulate why they want it and what it will do in general terms"
Is this system vendor trend changing the role of optical component players?
At our level of the business, we and our competitors are looking to be more vertically integrated: from semiconductors all the way to line cards.
We've proven with things like our Super Transport Blade that the more you have control over, the more knobs you can turn to create new things when merging multiple functions.
Instead of selling a lot of small black boxes and having the OEMs splice them together, we can integrate those functions and make a more compact and cost-effective solution. But you have to start with the ability to make all those blocks yourself.
Whether it is a line card, a tunable XFP or a 100 Gig module, the more you own and control, the more you can integrate and the more effective your solution will be. This is playing out at the components level because you create more compelling solutions the more functional integration you accomplish.
How do you avoid competing with your customers? If system vendors are just putting cards together, what are they doing? Also, how do you help each vendor differentiate?
It is very true. There are several system vendors that don't build their line cards anymore. They have chosen to do so because they realise that from a design and manufacturing perspective, they don't add much value or even subtract value because we can do more functional integration and they may not be experts in wavelength-selective switch (WSS) construction and various other things.
A fair number of them basically acknowledge that giving these blades to the people who can do them is a better solution for them.
How they differentiate can go two ways.
First, they don't just say: 'Build me a ROADM card.' We work very closely together; these are custom-designed cards for each vendor. They specify what the blade will do and they participate intimately in its design. They make their own choices and put in their own secret sauce.
That means we have very strong partnerships with these operations, almost to the extent that we are part of their development organisations.
Collectively, the things above the photonic layer are probably more important than the photonic layer itself. Usability, multiplexing, aggregation, security - all the things that go into the higher levels of a network are where system vendors are differentiating.
They can still differentiate at the photonic layer by building strong partnerships with technology engines like JDSU and it allows them to focus more resources at the upper levels where they can differentiate their complete network offering.
"The new generation of reconfigurable networks are not able to reuse anything that is being built today"
What is happening with regard to photonic integration?
For transmission components, photonic integration is the name of the game. If you are not doing it, you are not going to be a player.
If you look at JDSU's tunable [laser] XFP, that is 100% photonic integration. Yes, we build an ASIC to control the device but that is just about getting a little bit of extra volume and a little bit more power. The whole thing is about monolithic integration of a tunable laser, the modulator and some power control elements. And that is just 10 Gig.
If you look at 40 Gig, today's modulators are already putting in heavy integration and it is just the first round. These dual-polarisation QPSK modulators, they integrate multiple modulators - one for each polarisation as well as all the polarisation combining functionality, all into one device using waveguide-based integration. Today that is in lithium niobate, which is not a small technology.
100 Gig looks similar, it is just a little bit faster and when you go to 400 Gig, you go multi-carrier which means you make multiple copies of this same device.
So getting these things down in size, cost and power means photonic integration. And just the way 10 Gig migrated from lithium niobate down to monolithic indium phosphide, the same path is going to be followed for 40, 100 and 400 Gig.
It may be more complicated than 10 Gig but we are more advanced with our technology.
Operators are asking for advanced ROADM capabilities, and system vendors are willing to provide such features but only once operators will pay for them. Meanwhile, optical component vendors must do significant ROADM development work without knowing when they will see a return. How does JDSU manage this situation and is there a way of working smart here?
I don't think there is a terrifically clever way to look at this other than to say that we speak very carefully and closely with our customers.
These next-generation ROADM discussions have been going on for three or four years now. We also meet operators globally and ask them very similar questions about when, how and to what extent their interest in these various features [colourless, directionless, contentionless, gridless (flexible spectrum)] lies.
We are a ROADM leader; this is a ROADM question so we'd be making critical decisions if we decided not to invest in this area. We have decided this is going to happen and we have invested very heavily in this space.
It is true; there is not a market there right now.
With anything that is new, if you want to be a market leader you can't enter a market that exists, otherwise you'll be late. So through those discussions with our customers and the trust we have with them, and understanding where their customers and their problems lie, we are confident in that investment.
If you look back at the initial round of ROADMs, the chitchat was the same. When WSSs and ROADMs first came out, the reaction was: 'Wow, these things are really expensive; why would I want this compared to a set of fixed filters which back then cost $100 a pop?'
The commentary on cost was all in that flavour but once they became available and the costs were known, the operators started adopting them because the operators could figure out how they could benefit from the flexibility. Today ROADMs are just about in every network in the world.
We expect the same track to follow. No one is going to say: 'Yes, I’m going to pay twice for this new functionality' because they are being cagey of course.
We are still in the development phase. We are starting to get to the end of that, so the costs and real capabilities - all enabled by the devices we are developing - are becoming clear enough that our customers can now go to their customers and say: 'Here's what it is, here's what it does and here's what it costs.'
Operators will require time to get comfortable with that and figure out how that will work in their respective networks.
We have seen consistent interest in these next-generation ROADM features. No one says exactly what they will pay for it but all can articulate why they want it and what it will do in general terms.
You say you are starting to get to the end of the development phase of these next-generation ROADMs. What challenges remain?
The new generation of reconfigurable networks is not able to reuse anything that is being built today, whether it is from JDSU or Finisar, whether it is MEMS or LCOS (liquid crystal on silicon).
All the devices that are on the shelf today simply are not adequate or you end up with extremely expensive solutions.
This requires a completely new generation of products in the WSS and the multiplexing/demultiplexing space. The devices under development will perform the functions done by AWGs or, today, by a 1x9 WSS, but they look completely different.
They are still WSSs but they use different technologies so without saying exactly what they are and what they do, it is basically a whole new platform of devices.
Can you say when we will know what these look like?
I think the general architecture is fairly well known.
The exact details of the devices and components are still not publicly being talked about but it is the general combination of high-port-count WSSs that support flexible spectrum, fast switching and low loss, and are being used in a route-and-select approach rather than a broadcast-and-select one. That is the node building block.
Then there are the multicast switches being built, and fibre amplifier arrays - what comprises the colourless, directionless and contentionless multiplexing and demultiplexing.
That is the general architecture - it seems that that is what everyone is settling on. The devices to support that are what the industry is working on.
For Part II of the Q&A with Brandon Collings, click here
u2t Photonics: Adapting to a changing marketplace
u2t Photonics' Jens Fiedler, vice president sales and marketing (left), and CEO Andreas Umbach.
u2t Photonics has begun sampling its second-generation coherent receiver module. The dual-polarisation, quadrature phase-shift keying (DP-QPSK) coherent transmission receiver adds polarisation diversity to the company’s first-generation design – an indium-phosphide 90° hybrid design that includes balanced photo-detectors – all within an integrated module. u2t Photonics has developed two such coherent receiver designs, to address the 40 Gigabit-per-second (Gbps) and the 100Gbps markets, adding to the company’s first-generation design now available in small volumes.
“We can explore [next-gen coherent] solutions before we need them and we learn a lot from these partnerships” Andreas Umbach
The latest coherent receiver design represents what CEO Andreas Umbach believes u2t Photonics does best: using its radio frequency (RF) and optical component expertise to design high-speed integrated optical receiver modules.
Differentiation at 100 Gigabit
u2t Photonics is a leading component supplier for the 40 Gigabit market with its photo-detectors and more recently differential phase-shift keying (DPSK) integrated receiver designs that combine a delay-line interferometer with a balanced receiver. Such receivers are used for optical transponder and line card designs. Now, with its latest integrated coherent receiver, the German company aims to exploit the emerging 100 Gigabit market.
“The 40 Gig market will be strong for a while yet, but 100 Gig is coming and will start to squeeze 40 Gig,” says Umbach. “Right now we do not see 100 Gig cannibalising 40 Gig,” says Jens Fiedler, vice president of sales and marketing at u2t Photonics. “For DP-QPSK, 100 Gig might cannibalise 40 Gig since the technology for 40 Gig does not offer a big cost benefit for the customer.”
The emergence of 100 Gigabit optical links, with their more advanced modulation, has changed component requirements. Whereas the DPSK modulation scheme for 40Gbps requires photo-detectors with bandwidths that match the data rate, 100Gbps coherent requires photo-detectors with bandwidths of only 28GHz.
“In principle, not having the requirement of a very high-speed photo-detector makes it [100Gbps] a little bit easier, yet having 40 Gig serial is not unique anymore; there are differences in performance but it is not a limiting factor,” says Umbach.
Instead what matters for optical component players is to understand the 100Gbps functional requirements and deliver a design that meets them as early as possible, says Umbach. The challenges after that are scaling volume production and driving down cost. “We are the first company with a second generation design, offering the highest integration based on the [100G OIF] standard,” says Fiedler. “No doubt our competition is tough, but so far we are doing pretty well.”
u2t Photonics dismisses the view that the advent of high-speed CMOS ASICs that execute digital signal processing algorithms at the DP-QPSK receiver is eroding the need for the company’s expertise by enabling less specialist optics to be used. The optical specification requirements the company faces are challenging enough because customers still want to get the best performance from the links, it says.
"u2t Photonics has grown its revenue tenfold in the last five years" Jens Fiedler
“The challenge for us now is not just a photo-detector with a higher bandwidth but a coherent receiver that can detect the polarisation, phase and amplitude of the optical signal,” says Umbach. “That puts a much higher challenge on the components – not just high speed and efficiency but linearity in all these parameters.”
EC Galactico and Mirthe projects
To keep on top of next-generation coherent optical transmission schemes, u2t Photonics is a member of two European Commission (EC) Framework 7 projects dubbed Galactico (Blending diverse photonics and electronics on silicon for integrated and fully functional coherent Tb Ethernet) and Mirthe (Monolithic InP-based dual polarization QPSK integrated receiver and transmitter for coherent 100-400Gb Ethernet).
The Galactico project, which includes Nokia Siemens Networks, will develop photonic integrated circuits (PICs) that will implement a 100Gbps DP-QPSK coherent transmitter and receiver, a 600Gbps dense wavelength division multiplexing (DWDM) DP-QPSK coherent transmitter and receiver, and a 280Gbps DP-128 quadrature amplitude modulation (QAM) transmitter that will deliver 10bits/s/Hz spectral efficiency. The second project, Mirthe, is tasked with developing multi-level coding schemes using QPSK and QAM.
“Both projects address next generation [high-speed optical transmission] and the next level of integration of coherent receivers and transmitters for complex coherent systems,” says Umbach.
u2t is working with the projects’ partners in defining the devices needed for next-generation networks. In particular it is helping define the specifications needed for the ICs to drive such optical devices as well as what the product should look like to aid integration and packaging. “We have partners in the Mirthe project such as the Heinrich Hertz Institute and [Alcatel Thales] III V Lab that are focusing on chip design according to our requirements and matching our packaging development efforts,” says Umbach. The Galactico project is similar but here u2t Photonics is also contributing its integrated modulator technology for more complex transmission formats compared to the current DP-QPSK.
The company says its involvement in these projects is less to do with the research funding made available. Rather, it is the chance to work with partners on the R&D side. “We can explore solutions before we need them and we learn a lot from these partnerships,” says Umbach.
Changing markets
The emergence of three or four dominant module makers as the optical market matures presents new challenges for u2t Photonics. These emerging leaders are increasingly vertically integrated, using their own in-house components within their modules.
“Vertical integration is something we have to face and are fully aware of,” says Fiedler. To remain a valid component supplier, what matters is delivering component performance, volume capability and cost that meet customer targets and challenge their own developments. “That is what we need to do – at least be the second source for these vertically integrated companies,” says Fiedler.
Umbach points out that many of the system vendors are developing their own 100Gbps systems on line cards, and this represents another market opportunity for u2t Photonics, independent of the module makers.
“u2t might not offer the best pricing and might have issues - technical challenges common when you have early, leading-edge components,” says Fiedler. “But finally u2t is chosen as the supplier. They know we deliver the products.” To prove his point, Fiedler claims u2t Photonics has grown its revenue tenfold in the last five years.

Europe’s optical vendors
The last few years have seen significant consolidation among European optical component firms. Whereas Europe has system vendors that include Alcatel-Lucent, Nokia Siemens Networks, Ericsson, ADVA Optical Networking and Transmode Systems, the number of component vendors has continued to shrink. Bookham became a US company before merging with Avanex to become Oclaro, MergeOptics folded and its assets were acquired by FCI, while CoreOptics was acquired by Cisco Systems in May 2010.
Oclaro may be a US-registered company but its main operations are in China and Europe, points out Umbach. And many companies’ operations in the US and elsewhere have large headcounts in the Far East, such that they could be viewed as more Asian companies, he adds.
“I believe there is a lot of systems and components expertise in Europe - in Italy, the UK and Germany,” says Umbach. “Maybe they are just teams out of global players, like the CoreOptics team which will stay in Nuremberg although it is now a US company.” u2t Photonics itself has opened a unit in the UK. “We don’t feel too lonely,” he adds. “There is a lot of know-how we can look at here in these areas, not only other companies but academia in all the photonics fields.”
In turn, the market is a global one, says the firm, with the Chinese market particularly important given its large 40Gbps DPSK deployments and the importance of Huawei as a leading system vendor. Fiedler says the Chinese market is moving rapidly and has the potential to be a huge market for high-speed optical transmission. Yet despite emerging Chinese optical component players, the likes of Huawei are no different to other system vendors in terms of the criteria used when choosing optical components: performance capabilities and cost.
u2t Photonics remains open to all developments. “We have the opportunity to grow and expand our own business, and face the challenge with our bigger competitors,” says Umbach. “And if there is a reasonable path into consolidation, there is nothing that keeps us from going this way.”
Further information:
A presentation on Galactico, click here
A presentation on Mirthe, click here
Q&A with Rafik Ward - Part 1
"This is probably the strongest growth we have seen since the last bubble of 1999-2000." Rafik Ward, Finisar
Q: How would you summarise the current state of the industry?
A: It’s a pretty fun time to be in the optical component business, and it’s some time since we last said that.
We are at an interesting inflexion point. In the past few years there has been a lot of emphasis on the migration from 1 to 2.5 Gig to 10 Gig. The [pluggable module] form factors for these speeds have been known, and involved executing on SFP, SFP+ and XFPs.
But in the last year there has been a significant breakthrough; now a lot of the discussion with customers is around 40 and 100 Gig: around form factors like QSFP and CFP - new form factors we haven’t discussed before - around new ways to handle data traffic at these data rates, and new schemes like coherent modulation.
It’s a very exciting time. Every new jump is challenging but this jump is particularly challenging in terms of what it takes to develop some of these modules.
From a business perspective, certainly at Finisar, this is probably the strongest growth we have seen since the last bubble of 1999-2000. It’s not equal to what it was then and I don’t think any of us believes it will be. But certainly the last five quarters has been the strongest growth we’ve seen in a decade.
What is this growth due to?
There are several factors.
There was a significant reduction in spending at the end of 2008 and part of 2009 where end users did not keep up with their networking demands. Due to the global financial crisis, they [service providers] significantly cut capex so some catch-up has been occurring. Keep in mind that during the global financial crisis, based on every metric we’ve seen, the rate of bandwidth growth has been unfazed.
From a Finisar perspective, we are well positioned in several markets. The WSS [wavelength-selective switch] ROADM market has been growing at a steady clip while other markets are growing quite significantly – at 10 Gig, 40 Gig and even now 100 Gig. The last point is that, based on all the metrics we’ve seen, we are picking up market share.
Your job title is very clear but can you explain what you do?
I love my job because no two days are the same. I come in and have certain things I expect to happen and get done, yet it rarely turns out how I envisaged it.
There are really three elements to my job. Product management is the significant majority of where I focus my efforts. It’s a broad role – we are very focussed on the products and on the core business to win market share. There is a pretty heavy execution focus in product management but there is also a strategic element as well.
The second element of my job is what we call strategic marketing. We spend time understanding new, potential markets where we as Finisar can use our core competencies, and a lot of the things we’ve built, to go after. This is not in line with existing markets but adjacent ones: Are there opportunities for optical transceivers in things like military and consumer applications?
One of the things I’m convinced of is that, as the price of optical components continues to come down, new markets will emerge. Some of those markets we may not even know today, and that is what we are finding. That’s a pretty interesting part of my job but candidly I spend quite a bit less time on it [strategic marketing] than product management.
The third area is corporate communications: talking to media and analysts, press releases, the website and blog, and trade shows.
"40Gbps DPSK and DQPSK compete with each other, while for 40 Gig coherent its biggest competitor isn’t DPSK and DQPSK but 100 Gig."
Some questions on markets and technology developments.
Is it becoming clearer how the various 40Gbps line side optics – DPSK, DQPSK and coherent – are going to play out?
The situation is becoming clearer but that doesn’t mean it is easier to explain.
The market is composed of customers and end users that will use all of the above modulation formats. When we talk to customers, every one has picked one, two or sometimes all three modulation formats. It is very hard to point to any trend in terms of picks; it is more on a case-by-case basis. Customers are, like us at the component level, very passionate about the modulation format that they have chosen and will have a variety of very good reasons why a particular modulation format makes sense.
Unlike certain markets where you see a level of convergence, I don’t think that there will be true convergence at 40 Gbps. Coherent – DP-QPSK - is a very powerful technology but the biggest challenge 40 Gig has with DP-QPSK is that you have the same modulation format at 100 Gig.
The more I look at the market, the more I see that 40Gbps DPSK and DQPSK compete with each other, while 40 Gig coherent’s biggest competitor isn’t DPSK or DQPSK but 100 Gig.
Finisar has been quiet about its 100 Gig line side plans, what is its position?
We view these markets - 40 and 100 Gig line side – as potentially very large markets at the optical component level. Despite the fact that some customers are doing vertically integrated solutions, we still see these markets as large ones. It would be foolish for us not to look at these markets very carefully. That is probably all I would say on the topic right now.
"Photonic integration is important and it becomes even more important as data rates increase."
Finisar has come out with an ‘optical engine’, a [240Gbps] parallel optics product. Why now?
This is a very exciting part of our business. We’ve been looking for some time at the future challenges we expect to see in networking equipment. If you look at fibre optics today, they are used on the front panel of equipment. Typically it is pluggable optics, sometimes it is fixed, but the intent is that the optics is the interface that brings data into and out of a chassis.
People have been using parallel optics within chassis – for backplane and other applications – but it has been niche. The reason it’s niche is that the need hasn’t been compelling for intra-chassis applications. We believe that need will change in the next decade. Parallel optics intra-chassis will be needed just to be able to drive the amount of bandwidth required from one printed circuit board to another or even from one chip to another.
The applications driving this right now are the very largest supercomputers and the very largest core routers. So it is a market focussed on the extreme high-end but what is the extreme high-end today will be mainstream a few years from now. You will see these things in mainstream servers, routers and switches etc.
Photonic integration – what’s happening here?
Photonic integration is something that the industry has been working on for several years in different forms; it continues to chug on in the background but that is not to understate its importance.
For vendors like Finisar, photonic integration is important and it becomes even more important as data rates increase. What we are seeing is that a lot of emerging standards are based around multiple lasers within a module. Examples are the 40GBASE-LR4 and the 100GBASE-LR4 (10km reach) standards, where you need four lasers and four photo-detectors and the corresponding mux-demux optics to make that work.
The higher the number of lasers required inside a given module, and the more complexity you see, the more room you have to cost-reduce with photonic integration.
Optical transmission beyond 100Gbps
Part 3: What's next?
Given the 100 Gigabit-per-second (Gbps) optical transmission market is only expected to take off from 2013, addressing what comes next seems premature. Yet operators and system vendors have been discussing just this issue for at least six months.
And while it is far too early to talk of industry consensus, all agree that optical transmission is becoming increasingly complex. As Karen Liu, vice president, components and video technologies at market research firm Ovum, observed at OFC 2010, bandwidth on the fibre is no longer plentiful.
“We need to keep a very close eye that we are not creating more problems than we are solving.”
Brandon Collings, JDS Uniphase.
As to how best to extend a fibre’s capacity beyond eighty 100Gbps dense wavelength division multiplexing (DWDM) channels spaced 50GHz apart, all options are open.
“What comes after 100Gbps is an extremely complicated question,” says Brandon Collings, CTO of JDS Uniphase’s consumer and commercial optical products division. “It smells like it will entail every aspect of network engineering.”
Ciena believes that if operators are to exploit future high-speed transmission schemes, newly architected links will be needed. The rigid networking constraints imposed on 40 and 100Gbps so they can operate over existing 10Gbps networks will need to be scrapped.
“It will involve a much broader consideration in the way you build optical systems,” says Joe Berthold, Ciena’s vice president of network architecture. “For the next step it is not possible [to use existing 10Gbps links]; no-one can magically make it happen.”
Lightpaths faster than 100Gbps simply cannot match the performance of current optical systems when passing through multiple reconfigurable optical add/drop multiplexer (ROADM) stages using existing amplifier chains and 50GHz channels.
Increasing traffic capacity thus implies re-architecting DWDM links. “Whatever the solution is it will have to be cheap,” says Berthold. This explains why the Optical Internetworking Forum (OIF) has already started a work group comprising operators and vendors to align objectives for line rates above 100Gbps.
If new links are put in, then changing the amplifier types and even their spacing becomes possible, as does the use of newer fibre. “If you stay with conventional EDFAs and dispersion-managed links, you will not reach ultimate performance,” says Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking.
Capacity-boosting techniques
Achieving higher speeds while matching the reach of current links will require a mixture of techniques. Besides redesigning the links, modulation schemes can be extended and new approaches used, such as going ‘gridless’ and exploiting sophisticated forward error-correction (FEC) schemes.
For 100Gbps, polarisation and phase modulation in the form of dual polarization, quadrature phase-shift keying (DP-QPSK) is used. By adding amplitude modulation, quadrature amplitude modulation (QAM) schemes can be extended to include 16-QAM, 64-QAM and even 256 QAM.
Alcatel-Lucent is one firm already exploring QAM schemes but describes improving spectral efficiency using such schemes as a law of diminishing returns. For example, 448Gbps based on 64-QAM fits in a bandwidth of 37GHz with a sampling rate of 74 Gsamples/s, but requires high-resolution A/D converters. “This is very, very challenging,” says Sam Bucci, vice president, optical portfolio management at Alcatel-Lucent.
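Those figures follow directly from the symbol rate, as the sketch below shows. The Nyquist-limit bandwidth and the two-samples-per-symbol ADC rate are assumptions about how the numbers were derived, not details Alcatel-Lucent gave.

```python
line_rate_gbps = 448
bits_per_symbol = 6   # 64-QAM
polarisations = 2

symbol_rate = line_rate_gbps / (bits_per_symbol * polarisations)
print(f"symbol rate: {symbol_rate:.1f} Gbaud")    # 37.3
print(f"bandwidth:   ~{symbol_rate:.0f}GHz at the Nyquist limit")
print(f"ADC rate:    {2 * symbol_rate:.1f} Gsamples/s at 2 samples/symbol")  # 74.7
```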
Infinera is also eyeing QAM to extend the data performance of its 10-channel photonic integrated circuits (PICs). Its roadmap goes from today’s 100Gbps to 4Tbps per PIC.
Infinera has already announced a 10x40Gbps PIC and says it can squeeze 160 such channels in the C-band using 25GHz channel spacing. To achieve 1 Terabit would require a 10x100Gbps PIC.
How would it get to 2Tbps and 4Tbps? “Using advanced modulation technology; climbing up the QAM ladder,” says Drew Perkins, Infinera’s CTO.
Glenn Wellbrock, director of backbone network design at Verizon Business, says the operator is already very active in exploring rates beyond 100Gbps, as any future rate will have a huge impact on the infrastructure. “No one expects ultra-long-haul at greater than 100Gbps using 16-QAM,” says Wellbrock.
Another modulation approach being considered is orthogonal frequency-division multiplexing (OFDM). “At 100Gbps, OFDM and the single-carrier approach [DP-QPSK] have the same spectral efficiency,” says Jonathan Lacey, CEO of Ofidium. “But with OFDM, it’s easy to take the next step in spectral efficiency – required for higher data rates - and it has higher tolerance to filtering and polarisation-dependent loss.”
One idea under consideration is going “gridless”, eliminating the standard ITU wavelength grid altogether or using different sized bands, each made up of increments of narrow 25GHz ones. “This is just in the discussion phase so both options are possible,” says Berthold, who estimates that a gridless approach promises up to 30 percent extra bandwidth.
Berthold favours using channel ‘quanta’ rather than adopting a fully flexible band scheme - using a 37GHz window followed by a 17GHz window, for example - as the latter approach will likely reduce technology choice and lead to higher costs.
Wellbrock says coarse filtering would be needed using a gridless approach as capturing the complete C-Band would be too noisy. A band 5 or 6 channels wide would be grabbed and the signal of interest recovered by tuning to the desired spectrum using a coherent receiver’s tunable laser, similar to how a radio receiver works.
Wellbrock says considerable technical progress is needed for the scheme to achieve a reach of 1500km or greater.
“Whatever the solution is it will have to be cheap”
Joe Berthold, Ciena.
JDS Uniphase’s Collings sounds a cautionary note about going gridless. “50GHz is nailed down – the number of questions that need to be addressed once you go gridless balloons,” he says. “This is very complex; we need to keep a very close eye that we are not creating more problems than we are solving.”
“Operators such as AT&T and Verizon have invested heavily in 50GHz ROADMs, they are not just going to ditch them,” adds Chris Clarke, vice president strategy and chief engineer at Oclaro.
More powerful FEC schemes, and in particular soft-decision FEC (SD-FEC), will also benefit optical performance at data rates above 100Gbps. SD-FEC delivers up to a 1.3dB coding gain improvement over traditional FEC schemes at 100Gbps.
SD-FEC also paves the way for performing joint iterative FEC decoding and signal equalisation at the coherent receiver, promising further performance improvements, albeit at the expense of a more complex digital signal processor design.
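A back-of-envelope way to read that 1.3dB: converting the coding gain to a linear factor shows how much extra noise the link can tolerate. The link to span count is an assumption that holds only in a purely ASE-noise-limited system, not a claim from the article.

```python
coding_gain_db = 1.3  # SD-FEC improvement over traditional FEC (from the article)
noise_factor = 10 ** (coding_gain_db / 10)
print(f"{noise_factor:.2f}x more tolerable noise")  # ~1.35x

# Assumption: in a purely ASE-noise-limited link, accumulated noise grows
# linearly with the number of amplifier spans, so this very roughly buys
# ~35% more spans before regeneration is needed.
```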
400Gbps or 1 Tbps?
Even the question of what the next data rate after 100Gbps will be – 200Gbps, 400Gbps or even 1 Terabit-per-second – remains unresolved.
Verizon Business will deploy new 100Gbps coherent-optimised routes from 2011 and would like as much clarity as possible so that such routes are future-proofed. But Collings points out that this is not something that will stop a carrier addressing immediate requirements. “Do they make hard choices that will give something up today?” he says.
At the OFC Executive Forum, Verizon Business expressed a preference for 1Tbps lightpaths. While 400Gbps was a safe bet, going to 1Tbps would enable skipping an intermediate stage, i.e. 400Gbps. But Verizon recognises that backing 1Tbps depends on when such technology would be available and at what cost.
According to BT, speeds such as 200Gbps, 400Gbps and even 1Tbps are all being considered. “The 200/400Gbps systems may happen using multiple QAM modulation,” says Russell Davey, core transport Layer 1 design manager at BT. “Some work is already being done at 1Tbps per wavelength although an alternative might be groups or bands of wavelengths carrying a continuous 1Tbps channel, such as ten 100Gbps wavelengths or five 200Gbps wavelengths.”
Davey stresses that the industry shouldn’t assume that bit rates will continue to climb. Multiple wavelengths at lower bitrates or even multiple fibres for short distances will continue to have a role. “We see it as a mixed economy – the different technologies likely to have a role in different parts of network,” says Davey.
Niall Robinson, vice president of product marketing at Mintera, is confident that 400Gbps will be the chosen rate.
Traditionally Ethernet has grown in 10x steps while SONET/SDH has grown in four-fold increments. However, now that Ethernet is a line-side technology, there is no reason to expect the faster growth rate to continue, he says. “Every five years the line rate has increased four-fold; it has been that way for a long time,” says Robinson. “100Gbps will start in 2012/2013 and 400Gbps in 2017.”
“There is a lot of momentum for 400Gbps but we’ll have a better idea in six months’ time,” says Matt Traverso, senior manager, technical marketing at Opnext. “The IEEE [and its choice for the next Gigabit Ethernet speed after 100GbE] will be the final arbiter.”
Software defined optics and cognitive optics
Optical transmission could ultimately borrow two concepts already being embraced by the wireless world: software defined radio (SDR) and cognitive radio.
SDR refers to how a system can be reconfigured in software to implement the most suitable radio protocol. In optical it would mean making the transmitter and receiver software-programmable so that various transmission schemes, data rates and wavelength ranges could be used. “You would set up the optical transmitter and receiver to make best use of the available bandwidth,” says ADVA Optical Networking’s Elbers.
This is an idea also highlighted by Nokia Siemens Networks, trading capacity with reach based on modifying the amount of information placed on a carrier.
“For a certain frequency you can put either one bit [of information] or several,” says Oliver Jahreis, head of product line management, DWDM at Nokia Siemens Networks. “If you want more capacity you put more information on a frequency but at a lower signal-to-noise ratio and you can’t go as far.”
Using ‘cognitive optics’, the approach would be chosen by the optical system itself, selecting the best transmission scheme depending on capacity, distance and performance constraints as well as the other lightpaths on the fibre. “You would get rid of fixed wavelengths and bit rates altogether,” says Elbers.
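A minimal sketch of what such a ‘cognitive’ decision might look like. The format table below uses made-up, illustrative capacity and reach figures, not vendor data:

```python
# Hypothetical modulation format table: capacity per carrier and unregenerated reach.
FORMATS = {
    "DP-QPSK":  {"gbps": 100, "reach_km": 2500},
    "DP-16QAM": {"gbps": 200, "reach_km": 600},
    "DP-64QAM": {"gbps": 300, "reach_km": 150},
}

def pick_format(path_km: int):
    """Choose the highest-capacity format whose reach covers the path."""
    viable = {name: f for name, f in FORMATS.items() if f["reach_km"] >= path_km}
    # If nothing reaches, the link needs regeneration or a more robust format.
    return max(viable, key=lambda n: viable[n]["gbps"]) if viable else None

print(pick_format(2000))  # DP-QPSK: only QPSK reaches this far
print(pick_format(400))   # DP-16QAM: a shorter path allows more bits per symbol
```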
Market realities
Ovum’s view is that it remains too early to call the next rate following 100Gbps.
Other analysts agree. “Gridless is interesting stuff but from a commercial standpoint it is not relevant at this time,” says Andrew Schmitt, directing analyst, optical at Infonetics Research.
Given that market research firms look five years ahead and the next speed hike is only expected from 2017, such a stance is understandable.
Optical module makers highlight the huge amount of work still to be done. There is also a concern that the benefits of corralling the industry around coherent DP-QPSK at 100Gbps – done to avoid the mistakes made at 40Gbps – will be undone at any future data rate, given the range of options available.
Even if the industry were to align on a common option, developing the technology at the right price point will be highly challenging.
“Many people in the early days of 100Gbps – in 2007 – said: ‘We need 100Gbps now – if I had it I’d buy it’,” says Rafik Ward, vice president of marketing at Finisar. “There should be a lot of pent up demand [now].” The reason why there isn’t is that such end users always miss out key wording at the end, says Ward: “If I had it I’d buy it - at the right price.”
40 and 100Gbps: Growth assured yet uncertainty remains
Part 2: 40 and 100Gbps optical transmission
The market for 40 and 100 Gigabit-per-second optical transmission is set to grow over the next five years at a rate unmatched by any other optical networking segment. Such growth may excite the industry but vendors have tough decisions to make as to how best to pursue the opportunity.
Market research firm Ovum forecasts that the wide area network (WAN) dense wavelength division multiplexing (DWDM) market for 40 and 100 Gigabit-per-second (Gbps) linecards will have a 79% compound annual growth rate (CAGR) till 2014.
In turn, 40 and 100Gbps transponder volumes will grow even faster, at 100% CAGR till 2015, while revenues from 40 and 100Gbps transponder sales will have a 65% CAGR during the same period.
Yet with such rude growth comes uncertainty.

“We upgraded to 40Gbps because we believe – we are certain, in fact – that across the router and backbone it [40Gbps technology] is cheaper.”
Jim King, AT&T Labs
System, transponder and component vendors all have to decide which next-generation modulation schemes to pursue at 40Gbps to complement the now established differential phase-shift keying (DPSK). There are also questions regarding the cost of the different modulation options, while vendors must assess what impact 100Gbps will have on the 40Gbps market and when the 100Gbps market will take off.
“What is clear to us is how muddled the picture is,” says Matt Traverso, senior manager, technical marketing at Opnext.
Economics
Despite two weak quarters in the second half of 2009, the 40Gbps market continues to grow.
One explanation for the slowdown was that AT&T, a dominant deployer of 40Gbps, had completed the upgrade of its IP backbone network.
Andreas Umbach, CEO of u2t Photonics, argues that the slowdown is part of an annual cycle that the company also experienced in 2008: strong 40Gbps sales in the first half followed by a weaker second half. “In the first quarter of 2010 it seems to be repeating with the market heating up,” says Umbach.
This is also the view of Simon Warren, Oclaro’s director of product line management, transmission. “We are seeing US metro demand coming,” he says. “And it is very similar with European long-haul.”
BT, still to deploy 40Gbps, sees the economics of higher-speed transmission shifting in the operator’s favour. “The 40Gbps wavelengths on WDM transmission systems have just started to cost in for us and we are likely to start using it in the near future,” says Russell Davey, core transport Layer 1 design manager at BT.
What dictates an operator upgrade from 10Gbps to 40Gbps, and now also to 100Gbps, is economics.
The transition from 2.5Gbps to 10Gbps lightpaths that began in 1999 occurred when 10Gbps approached 2.5x the cost of 2.5Gbps. This rule-of-thumb has always been assumed to apply to 40Gbps, yet thousands of wavelengths have been deployed while 40Gbps remains at more than 4x the cost of 10Gbps. Now the latest rule-of-thumb for 100Gbps is that operators will make the transition once 100Gbps reaches 2x the cost of 40Gbps, i.e. gaining 25% extra bandwidth for free.
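The arithmetic behind these rules of thumb is simple cost-per-bit, as the sketch below shows (the prices are normalised placeholders, not market figures):

```python
def cost_per_bit(gbps: float, price: float) -> float:
    return price / gbps

# The 100Gbps rule of thumb: transition once 100Gbps hits 2x the 40Gbps price.
price_40g = 1.0                        # normalised 40Gbps transponder price
price_100g = 2.0 * price_40g           # the claimed crossover point
ratio = cost_per_bit(40, price_40g) / cost_per_bit(100, price_100g)
print(ratio)                           # 1.25: 25% more bandwidth per dollar, "for free"
```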
The economics is further complicated by the continuing price decline of 10Gbps. “Our biggest competitor is 10Gbps,” says Niall Robinson, vice president of product marketing at 40Gbps module maker Mintera.
“The traditional multiplier of 2.5x for the transition to 10Gbps is completely irrelevant for the 10 to 40 Gigabit and 10 to 100 Gigabit transitions,” says Andrew Schmitt, directing analyst of optical at Infonetics Research. “The transition point is at a higher level; even higher than cost-per-bit parity.”
So far two classes of 40Gbps adopters have emerged: AT&T, China Telecom and cable operator Comcast, which have made, or plan, significant network upgrades to 40Gbps; and those such as Verizon Business and Qwest that have used 40Gbps more strategically on selective routes. For Schmitt there is no difference between the two: “These are economic decisions.”
AT&T is in no doubt about the cost benefits of moving to higher speed transmission. “We upgraded to 40Gbps because we believe – we are certain, in fact – that across the router and backbone it [40Gbps technology] is cheaper,” says Jim King, executive director of new technology product development and engineering, AT&T Labs.
King stresses that 40Gbps is cheaper than 10Gbps in terms of capital expenditure and operational expense. IP efficiencies result and there are fewer, larger pipes to manage, whereas at lower rates “multiple WDM in parallel” are required, he says.
“We see 100Gbps wavelengths on transmission systems available within a year or so, but we think the cost may be prohibitive for a while yet, especially given we are seeing large reductions in 10Gbps,” says Davey. BT is designing the line-side of new WDM systems to be compatible with 40Gbps – and later 100Gbps - even though it will not always use the faster line-cards immediately.
Even when an operator has ample fibre, the case for adopting 40Gbps on existing routes is compelling. That’s because lighting up new fibre is “enormously costly”, says Joe Berthold, Ciena’s vice president of network architecture. By adding 40Gbps to existing 10Gbps lightpaths at 50GHz channel spacing, capacity on an existing link is boosted and the cost of lighting up a separate fibre is forestalled.
According to Berthold, lighting a new fibre costs about the same as 80 dense DWDM channels at 10Gbps. “The fibre may be free but there is the cost of the amplifiers and all the WDM terminals,” he says. “If you have filled up a line and have plenty of fibre, the 81st channel costs you as much as 80 channels.”
The same consideration applies to metropolitan (metro) networks when a fibre carrying 40 10Gbps channels is close to being filled. “The 41st channel also means six ROADMs (reconfigurable optical add/drop multiplexers) and amps which are not cheap compared to [40Gbps] transceivers,” says Berthold.
Alcatel-Lucent segments 40Gbps transmission into two categories: multiplexing of lower-speed signals into a higher-speed 40Gbps line-side trunk link – ‘muxing to trunk’ – and native 40Gbps transmission where the client-side signal is at 40Gbps.
“The economics of the two are somewhat different,” says Sam Bucci, vice president, optical portfolio management at Alcatel-Lucent. The economics favour moving to higher capacity trunks. That said, Alcatel-Lucent is seeing native 40Gbps interfaces coming down in price and believes 100GbE interfaces will be ‘quite economical’ compared to 10x10Gbps in the next two years.
Further evidence regarding the relative expense of router interfaces is given by Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking, who cites that currently only 20% of 40Gbps interfaces go into routers while the remaining 80% go into muxponders.
Modulation Technologies
While economics dictate when the transition to the next-generation transmission speed occurs, what is complicating matters is the wide choice of modulation schemes. Four modulation technologies are now being used at 40Gbps with operators having the additional option of going to 100Gbps.
The 40Gbps market has already experienced one false start, back in 2002/03. The market kicked off in 2005 – at least, that is when the first 40Gbps core router interfaces from Cisco Systems and Juniper Networks were launched.
"There is an inability for guys like us to do what we do best: take an existing interface and shedding cost by driving volumes and driving the economics.”
Rafik Ward, Finisar
Four 40Gbps modulation schemes are now shipping: optical duobinary, DPSK, differential quadrature phase-shift keying (DQPSK) and polarisation multiplexing quadrature phase-shift keying (PM-QPSK). PM-QPSK is also referred to as dual-polarisation QPSK or DP-QPSK.
“40Gbps is actually a real mess,” says Rafik Ward, vice president of marketing at Finisar.
The lack of standardisation can be viewed as a positive in that it promotes system vendor differentiation but with so many modulation formats available the lack of consensus has resulted in market confusion, says Ward: “There is an inability for guys like us to do what we do best: take an existing interface and shedding cost by driving volumes and driving the economics.”
DPSK is the dominant modulation scheme deployed on line cards and as transponders. DPSK uses relatively simple transmitter and receiver circuitry although the electronics must operate at 40Gbps. DPSK also has to be modified to cope with tighter 50GHz channel spacing.
“DPSK’s advantage is relatively simple,” says Loi Nguyen, founder, vice president of networking, communications, and multi-markets at Inphi. “For 1200km it works fine, the drawback is it requires good fibre.”
The DQPSK and DP-QPSK modulation formats being pursued at 40Gbps offer greater transmission performance but are less mature.
DQPSK has a greater tolerance to polarisation mode dispersion (PMD) and is more resilient when passing through cascaded 50GHz channels compared with DPSK. However, DQPSK uses more complex transmitter and receiver circuitry, though it operates at half the symbol rate – 20Gbaud – which simplifies the electronics.
DP-QPSK is even more complex than DQPSK, requiring twice as much optical circuitry due to the use of polarisation multiplexing. But this halves the symbol rate again, to 10Gbaud, easing the design constraints on the optics. However, DP-QPSK also requires a complex application-specific integrated circuit (ASIC) to recover signals in the presence of such fibre-induced impairments as chromatic dispersion and PMD.
The ASIC comprises high-speed analogue-to-digital converters (ADCs) that sample the real and imaginary components output by the DP-QPSK optical receiver circuitry, and a digital signal processor (DSP) which performs the algorithms to recover the original transmitted bit stream in the presence of dispersion.
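One of those DSP steps is chromatic dispersion compensation, commonly done as a frequency-domain all-pass filter. The sketch below is a simplification assuming a single complex baseband stream per polarisation; sign conventions vary with the FFT and propagation definitions used:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def cd_compensate(samples, fs, wavelength_m, disp_ps_nm_km, length_km):
    """Undo fibre chromatic dispersion on complex baseband ADC samples.

    disp_ps_nm_km: fibre dispersion parameter in ps/(nm.km), e.g. ~17 for
    standard single-mode fibre at 1550nm.
    """
    D = disp_ps_nm_km * 1e-6          # convert to SI units, s/m^2
    L = length_km * 1e3
    f = np.fft.fftfreq(len(samples), d=1.0 / fs)
    # All-pass filter unwinding the quadratic spectral phase added by the fibre.
    H = np.exp(1j * np.pi * wavelength_m**2 * D * L * f**2 / C)
    return np.fft.ifft(np.fft.fft(samples) * H)
```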
The chip is costly to develop – up to US $20 million – but its use reduces line costs by reducing the number of optical amplifiers needed and removing in-line PMD and chromatic dispersion compensators.
“You can build more modular amplifiers and really optimise performance/cost,” says Bucci. Such benefits only apply when a new optimised route is deployed, not when 40Gbps lightpaths are added to existing fibre carrying 10Gbps lightpaths.
Eliminating dispersion compensation fibre in the network using coherent detection brings another important advantage, says Oliver Jahreis, head of product line management, DWDM at Nokia Siemens Networks. “It reduces [network] latency by 10 to 20 percent,” he says. “This can make a huge difference for financial transactions and for the stock exchange.”
Because of the more complex phase modulation used, 40Gbps DQPSK and DP-QPSK lightpaths when lit alongside 10Gbps suffer from crosstalk interference. “DQPSK is more susceptible to crosstalk but coherent detection is even worse,” says Chris Clarke, vice president strategy and chief engineer at Oclaro.
Wavelength management - using a guard-band channel or two between the 10Gbps and 40Gbps lightpaths – solves the problem. Alcatel-Lucent also claims it has developed a coherent implementation that works alongside existing 10Gbps and 40Gbps DPSK signals without requiring such wavelength management.
100Gbps consensus
Because of the variety of modulation schemes at 40Gbps, the industry has sought to achieve a consensus at 100Gbps, resulting in coherent becoming the de facto standard.
Early-adopter operators of 40Gbps technology such as AT&T and Verizon Business have been particularly vocal in getting the industry to back DP-QPSK for 100Gbps. The Optical Internetworking Forum (OIF) industry body has also done much work to provide guidelines for the industry as part of its 100Gbps Framework Document.
Yet despite the industry consensus, DP-QPSK will not be the sole modulation scheme targeted at 100Gbps.
ADVA Optical Networking is pursuing 100Gbps technology for the metro and enterprise using a proprietary modulation scheme. “If you look at 100Gbps, we believe there is room for different solutions,” says Elbers.
For metro and enterprise systems, the need is for more compact, lower-power, cheaper solutions. At ECOC 2008 the company published a paper on a scheme that combined DPSK with amplitude-shift keying.
“If you look at 100Gbps, we believe there is room for different solutions.”
Jörg-Peter Elbers, ADVA Optical Networking
“Coherent DP-QPSK offers the highest performance but it is not required for certain situations as it brings power and cost burdens,” says Elbers. The company plans to release a dedicated product for the metro and enterprise markets and Elbers says the price point will be very close to 10x10Gbps.
Another approach is that of Australian start-up Ofidium. It is using a multi-carrier modulation scheme based on orthogonal frequency-division multiplexing. Ofidium claims that while OFDM is an alternative modulation scheme to DP-QPSK, it uses the same optical building blocks as recommended by the OIF.
Decisions, decisions
Simply looking at the decisions of a small sample of operators highlights the complex considerations involved when deciding a high-speed optical transmission strategy.
Cost is clearly key but is complicated by the various 40Gbps schemes being at different stages of maturity. 40Gbps DPSK is deployed in volume and is now being joined by DQPSK. Coherent technology was, until recently, provided solely by Nortel, now owned by Ciena. However, Nokia Siemens Networks working with CoreOptics, and Fujitsu have recently announced 40Gbps coherent offerings upping the competition.
Ciena also has a first-generation 100Gbps technology and will soon be joined by system vendors either developing their own 100Gbps interfaces or planning to offer 100Gbps once DP-QPSK transponders become available in 2011.
The particular performance requirements also influence the operators’ choices.
Verizon Business has limited its deployment of DPSK due to the modulation scheme’s limited tolerance to PMD. “It is quite low, in the 2 to 4 picosecond range,” says Glenn Wellbrock, director of backbone network design at Verizon Business. “We have avoided deploying DPSK even if we have measured the [fibre] route [for PMD].”
Because PMD can worsen over time, even if a route is measured and is within the PMD tolerance there is no guarantee the performance will last. Verizon will deploy DQPSK this year on certain routes due to its higher 8ps tolerance to PMD.
China Telecom is a key proponent of DQPSK for its network rollout of 40Gbps. “It has doubled demand for its 40Gbps build-out and the whole industry is scrambling to keep up,” says Oclaro’s Clarke.
AT&T has deployed DPSK to upgrade its network backbone and will continue as it upgrades its metro and regional networks. “Our stuff [DPSK transponders] is going into [these networks],” says Mintera’s Robinson. But AT&T will use other technologies too.
In general, modulation formats are a vendor decision, “something internal to the box”, says King. What is important is their characteristics and how the physics and economics match AT&T’s networks. “As coherent becomes available at 40Gbps, we will be able to offer it where the fibre characteristics require it,” says King.
“AT&T is really hot on DP-QPSK,” says Ron Kline, principal analyst of network infrastructure at Ovum. “They have a whole lot of fibre - stuff before 1998 - that is only good for 2.5Gbps and maybe 10Gbps. They have to be able to use it as it is hard to replace.”
BT points out how having DP-QPSK as the de facto standard for 100Gbps will help make it cost-effective compared to 10Gbps and will also benefit 40Gbps coherent designs. “This offers high performance 40Gbps which will probably work over all of our network,” says Davey.
But this raises another issue regarding coherent: it offers superior performance over long distances yet not all networks need such performance. “For the UK it may be that we simply don’t have sufficient long distance links [to merit DP-QPSK] and so we may as well stick with non-coherent,” says Davey. “In the end pricing and optical reach will determine what is used and where.”
One class of network where reach is supremely important is submarine.
For submarine transmission, reaches between 5,000 and 7,000km can be required and as such 10Gbps links dominate. “In the last six months even if most RFQs (Request for Quotation from operators) are about 10Gbps, all are asking about the possibility of 40Gbps,” says Jose Chesnoy, technical director, submarine network activity at Alcatel-Lucent.
Until now there has also been no capacity improvement from submarine systems adopting 40Gbps: 10Gbps lightpaths use 25GHz-spaced channels while 40Gbps uses 100GHz. “Now with technology giving 40Gbps performance at 50GHz, fibre capacity is doubled,” says Chesnoy.
To meet trans-ocean distances for 40Gbps submarine, Alcatel-Lucent is backing coherent technology, as it is for terrestrial networks. “Our technology direction is definitely coherent, at 40 and 100Gbps,” says Bucci.
Ciena, with its acquisition of Nortel’s Metro Ethernet Networks division, now offers 40 and 100Gbps coherent technology.
“It’s like asking what the horsepower per cylinder is rather than the horsepower of the engine.”
Drew Perkins, Infinera
ADVA Optical Networking, unlike Ciena and Alcatel-Lucent, is not developing 40Gbps technology in-house. “When looking at second generation 40Gbps, DQPSK and DP-QPSK are both viable options from a performance point of view,” says Elbers.
What will determine which technology ADVA Optical Networking adopts, he points out, is cost. DQPSK has a higher nonlinear tolerance and can offer lower cost than DP-QPSK, but there are additional costs beyond the transponder, he says: an optical pre-amplifier and an optical tunable dispersion compensator are needed per wavelength.
DP-QPSK, for Elbers, eliminates the need for any optical dispersion compensation and complements 100Gbps DP-QPSK, but is currently a premium technology. “If 40Gbps DP-QPSK is close to the cost of 4x10Gbps tunable XFP [transceivers], it will definitely be used,” he says. “We are not seeing that yet.”
Infinera, with its photonic integrated circuit (PIC) technology, questions the whole premise of debating 40Gbps and 100Gbps technologies. Infinera believes what ultimately matters is how much capacity can be transmitted over a fibre.
“Most people want pure capacity,” says Drew Perkins, Infinera’s CTO, who highlights the limitations of the industry’s focus on line speed rather than overall capacity using the analogy of buying a car. “It’s like asking what the horsepower per cylinder is rather than the horsepower of the engine,” he says.
Infinera offers a 10x10Gbps PIC though it has still not launched its 10x40Gbps DP-DQPSK PIC. “The components have been delivered to the [Infinera] systems group,” says Perkins. The former CEO of Infinera, Jagdeep Singh, has said that while the company is not first to market with 40Gbps it intends to lead the market with the most economical offering.
Moreover, Infinera is planning to develop its own coherent based PIC. “The coherent approach - DP-QPSK ‘Version 1.0’ with a DSP - is very powerful with its high capacity and long reach but it has a significant power density cost,” says Perkins. “We envisage the day when there will be a 10-channel PIC with a 100Gbps coherent-type technology in 50GHz spectrum at very low power.” Such PIC technology would deliver 8 Terabits over a fibre.
Further evidence of the importance of 100Gbps is given by Verizon Business which has announced that it will deploy 100Gbps coherent-optimized fibre links starting next year that will do away with dispersion compensation fibre. AT&T’s King says it will also deploy coherent-optimised links.
Not surprisingly, views also differ among module makers regarding the best 40Gbps modulation schemes to pursue.
“We had a very good look at DQPSK,” says Mintera’s Robinson. “What’s best to invest? The price comparison [DQPSK versus coherent] is very similar yet DP-QPSK is vastly superior [in performance]. Put in a module it will kill off DQPSK.”
Finisar has yet to detail its plans but Ward says that the view inside the company is that the lowest cost interface is offered by DPSK while DP-QPSK delivers high performance. “DQPSK is in this challenging position, it can’t meet the cost point of DPSK nor the performance of DP-QPSK,” he says.
Opnext begs to differ.
The firm offers the full spectrum of 40Gbps modulation schemes - optical duobinary, DPSK and DQPSK. “The next phase we are focussed on is 100Gbps coherent,” says Traverso. “We are not as convinced that 40Gbps is a sweet spot.”
In contrast, Opnext does believe DQPSK will be popular, although Traverso highlights that it depends on the particular markets being addressed, with DQPSK being particularly suited to regional networks. “One huge advantage of DQPSK is thermal – the coherent IC burns a lot of power.”
Oclaro is also backing DQPSK as the format for metro and regional networks, where fibre is typically older and the number of ROADM stages a signal encounters is higher.
Challenges
The maturity of the high-speed transmission supply chain is one challenge facing the industry.
“Many of the critical components are not mature,” says Finisar’s Ward. “There are a lot of small companies - almost start-ups - that are pioneers and are doing amazing things but they are not mature companies.”
JDS Uniphase believes that with the expected growth for 40Gbps and 100Gbps there is an opportunity for the larger optical vendors to play a role. “The economic and technical challenges are still a challenge,” says Tom Fawcett, JDS Uniphase’s director of product line management.
Driving down cost at 40Gbps remains a continuing challenge, agrees Nguyen: “Cost is still an important factor; operators really want lower cost”. To address this the industry is moving along the normal technology evolution path, he says, reducing costs, making designs more compact and enabling the use of techniques such as surface-mount technology.
Mintera has developed a smaller 300-pin MSA DPSK transponder that enables two 40Gbps interfaces on one card: the line-side and client-side interfaces. Shown on the right is a traditional 5"x7" 300-pin MSA.
JDS Uniphase’s strategy is to bring the benefits of vertical integration to 40 and 100Gbps; using its own internal components such as its integrated tunable laser assembly, lithium niobate modulator, and know-how to produce an integrated optical receiver to reduce costs and overall power consumption.
Vertical integration is also Oclaro’s strategy with its 40Gbps DQPSK transponder that uses its own tunable laser and integrated indium-phosphide-based transmitter and receiver circuitry.
“[Greater] vertical integration will make our lives more difficult,” says u2t’s Umbach. “But any module maker that has in-house components will only use them if they have the right optical performance.”
Jens Fiedler, vice president sales and marketing at u2t Photonics, stresses that while DQPSK and DP-QPSK may reduce the speed of the photodetectors and hence appear to simplify design requirements, producing integrated balanced receivers is far from trivial. And by supplying multiple customers such as non-vertically integrated module makers and system vendors, merchant firms also have a volume manufacturing advantage.
Opnext has already gone down the vertically integrated path with its portfolio of 40Gbps offerings and is now developing an ASIC for use in its 100Gbps transponders.
Estimates suggest that between eight and ten companies or partnerships are developing their own coherent ASIC. That equates to a total industry spend of some $160 million, leading some to question whether the industry as a whole is being shrewd with its money. “Is that wise use of people’s money?” says Oclaro’s Clarke. “People have got to partner.”
The ASICs are also currently a bottleneck. “For 100Gbps the ASIC is holding everything up,” says Jimmy Yu, a director at the Dell'Oro Group.
According to Stefan Rochus, vice president of marketing and business development at CyOptics, another supply challenge is the optical transmitter circuitry at 100Gbps, while for 40Gbps DP-QPSK the main current supplier is Oclaro.
“The [40Gbps] receiver side is well covered,” says Rochus.
CyOptics itself is developing an integrated 40Gbps DPSK balanced receiver that includes a delay-line interferometer and a balanced receiver. The firm is also developing a 40 and a 100G PM-QPSK receiver, compliant with the OIF Framework Document. This is also a planar lightwave circuit-based design, but what differs between the 40 and 100Gbps designs is the photodetectors – 10 and 28GHz respectively – and the trans-impedance amplifiers (TIAs).
NeoPhotonics is another optical component company that has announced such integrated DP-QPSK receivers.
And u²t Photonics recently announced a 40G DQPSK dual balanced receiver that it claims reduces board space by 70%, and it has also announced with Picometrix a 100Gbps coherent receiver multi-source agreement.
40 and 100Gbps: next market steps
Verizon Business in late 2009 became the first operator to deploy a 100Gbps route linking Frankfurt and Paris. And the expectation is that only a few more 100Gbps lightpaths will be deployed this year.
The next significant development is the ratification of the 40 and 100 Gigabit Ethernet standards that will happen this year. The advent of such interfaces will spur 40Gbps and 100Gbps line-side deployments. After that, 100Gbps transponders are expected in mid-2011.
Such transponders will have a two-fold effect: they will enable more system vendors to come to market and reduce the cost of 100Gbps line-side interfaces.
However, industry analysts expect 100Gbps volumes to ramp only from 2013 onwards.
Dell'Oro’s Yu expects the 40Gbps market to grow strongly while 100Gbps technology matures. At 40Gbps he expects DPSK to continue to ship. DP-QPSK will be used for long-haul links – greater than 1200km – while DQPSK will find use in the metro. “There is room for all three modulations,” says Yu.
40/100G market | Compound annual growth rate (CAGR)
Line card volumes | 79% till 2014
Transponder volumes | 100% till 2015
Transponder revenues | 65% till 2015
Source: Ovum
Ovum and Infonetics have different views regarding the 40Gbps market.
“Coherent is the story; the opportunity for DQPSK being limited,” says Ovum’s Kline. Infonetics’ Schmitt disagrees: “If you were to look back in 2015 over the last five years, the bulk of the deployments [at 40Gbps] will be DQPSK.”
Schmitt does agree that 2013 will be a big year for 100Gbps: “100Gbps will ramp faster than 40Gbps but it will not kill it.”
Schmitt foresees operators bundling 10Gbps wavelengths into both 40Gbps and 100Gbps lightpaths (and 10Gbps and 40Gbps lightpaths into 100Gbps ones) using Optical Transport Networking (OTN) encapsulation technology.
Given the timescales, vendors still to make their 40Gbps modulation bets run the risk of being late to market. They are also guaranteed a steep learning curve. Yet those that have made their decisions at 40Gbps will likely remain uncomfortable for a while yet until they can better judge the wisdom of their choices.
Verizon plans coherent-optimised routes

"Next-gen lines will be coherent only"
Glenn Wellbrock, Verizon Business
Muxponders at 40Gbps
Given the expense of OC-768 very short reach transponders, Verizon is a keen proponent of 4x10Gbps muxponders. Instead of using the OC-768 client-side interface, Verizon uses 4x10Gbps pluggables which are multiplexed into the 40Gbps line-side interface. The muxponder approach is even more attractive when compared with 40Gbps IP core router interfaces, which are considerably more expensive than 4x10Gbps pluggables.
DQPSK will be deployed this year
Verizon has been selective in its use of differential phase-shift keying (DPSK) based 40Gbps transmission within its network. It must measure the polarisation mode dispersion (PMD) on a proposed 40Gbps route, and PMD's variable nature means that impairment issues can arise over time. For this reason Verizon favours differential quadrature phase-shift keying (DQPSK) modulation.
According to Wellbrock, DPSK has a typical PMD tolerance of 4 ps while DQPSK is closer to 8 ps. In contrast, 10Gbps DWDM systems have around 12 ps. “That [8 ps of DQPSK] is the right ballpark figure,” he says, pointing out that measuring a route's PMD must still be done.
Verizon is testing the technology in its labs and Wellbrock says Verizon will deploy 40Gbps DQPSK technology this year.
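The qualification logic Wellbrock describes boils down to comparing a route's measured PMD, plus some safety margin for its drift over time, against each format's tolerance. A sketch using the tolerances quoted above; the 1.5x margin is an invented illustration, not Verizon's rule:

```python
# Mean PMD tolerances in picoseconds, as quoted in the article.
PMD_TOLERANCE_PS = {"10Gbps": 12.0, "40Gbps DPSK": 4.0, "40Gbps DQPSK": 8.0}

def qualified_formats(route_pmd_ps: float, margin: float = 1.5):
    """Formats whose tolerance still covers the route after a drift margin."""
    return [fmt for fmt, tol in PMD_TOLERANCE_PS.items()
            if route_pmd_ps * margin <= tol]

print(qualified_formats(3.0))  # DPSK fails (4.5ps > 4ps); 10Gbps and DQPSK pass
```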
Cost of 100Gbps
Verizon Business has already deployed Nortel’s 100Gbps dual-polarisation quadrature phase-shift keying (DP-QPSK) coherent system in Europe, connecting Frankfurt and Paris. However, given 100Gbps is at the very early stages of development it will take time to meet the goal of costing 2x 40Gbps.
That said, Verizon expects at least one other system vendor to have a 100Gbps system available for deployment this year. And around mid-2011, at least three 300-pin module makers will likely have products. It will be the advent of 100Gbps modules and the additional 100Gbps systems they will enable that will reduce the price of 100Gbps. This has already happened with 40Gbps line-side transponders; with 100Gbps the advent of 300-pin MSAs will happen far more quickly, says Wellbrock.
Next-gen routes coherent only
When Verizon starts deploying its next-generation fibre routes they will be optimised for 100Gbps coherent systems. This means that no dispersion compensation fibre will be used on the links, with the 100Gbps receiver’s electronics performing the dispersion compensation instead.
The routes will accommodate 40Gbps transmission but only if the systems use coherent detection. Moreover, much care will be needed in how these links are architected since they will need to comply with future higher-speed optical transmission schemes.
Verizon expects to start such routes in 2011 and “certainly” in 2012.
Optical transceivers: Pouring a quart into a pint pot
Optical equipment and transceiver makers have much in common. Both must contend with the challenge of yearly network traffic growth and both are addressing the issue similarly: using faster interfaces, reducing power consumption and making designs more compact and flexible.
Yet if equipment makers and transceiver vendors share common technical goals, the market challenges they face differ. For optical transceiver vendors, the challenges are particularly complex.
LightCounting's global optical transceiver sales forecast. In 2009 the market was $2.10bn and is forecast to rise to $3.42bn in 2013
Transceiver vendors have little scope for product differentiation. That’s because the interfaces are based on standard form factors defined using multi-source agreements (MSAs).
System vendors may welcome MSAs since they increase their choice of suppliers, but for transceiver vendors they mean fierce competition, even for new opportunities such as 40 and 100 Gigabit Ethernet (GbE) and 40 and 100 Gigabit-per-second (Gbps) long-haul transmission.
Transceiver vendors must also contend with 40Gbps overlapping with the emerging 100Gbps market. Vendors must choose which interface options to back with their hard-earned cash.
Some industry observers even question the 40 and 100Gbps market opportunities given the continual cost reduction and simplicity of 10Gbps transceivers. One is Vladimir Kozlov, CEO of optical transceiver market research firm, LightCounting.
“The argument heard is that 40Gbps will take over the world in two or three years’ time,” says Kozlov. Yet he has been hearing the same claim for over a decade: “Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”
In Kozlov’s view, while 40Gbps and 100Gbps are being adopted in the network, the vast majority of networks will not see such rates. Instead traffic growth will be met with additional 10Gbps wavelengths and where necessary more fibre.
“Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”
Vladimir Kozlov, LightCounting.
And despite the activity surrounding new pluggable transceivers such as the 40 and 100Gbps CFP MSA and long-haul modulation schemes, his view is that “99% of the market is about simplicity and low cost”.
Juniper Networks, in contrast, has no doubt 100Gbps interfaces will be needed.
First demand for 100Gbps will be to simplify data centre connections and link the network backbone. “Link aggregating 10Gbps channels involves multiple fibres and connections,” says Luc Ceuppens, senior director of marketing, high-end systems business unit at Juniper. “Having a single 100 Gigabit interface simplifies network topology and connections.”
Longer term, 100Gbps will be driven when the basic currency of streams exceeds 10Gbps. “You won’t have to parse a greater-than-10 Gig stream over two 10Gbps links,” says Ceuppens.
But faster line rates are only one way equipment vendors are tackling traffic growth and networking costs.
"Forty Gig and eventually 100 Gig are basic needs for data centre connections and backbone networks, but in the metro, higher line rate is not the only way to handle traffic growth cost effectively,” says Mohamad Ferej, vice president of R&D at Transmode. He points to lowering equipment’s cost, power consumption and size as well as enhancing its flexibility.
Compact designs equate to less floor space in the central office, while the energy consumption of platforms is a growing concern. Tackling both reduces operational expenses.
Greater platform flexibility using tunable components and pluggable transceivers also helps reduce costs. Tunable-laser-based transceivers slash the number of spare fixed-wavelength dense wavelength division multiplexing (DWDM) transceivers operators and system vendors must store. Meanwhile, pluggables reduce costs by increasing competition and decoupling optics from the line card.
For higher-speed interfaces, optical transmission cost – the cost-per-bit-per-kilometre – is reduced only if the new interface’s bandwidth grows faster than its cost relative to existing interfaces. The rule-of-thumb is that the transition to a new 4x line rate occurs once it matches 2.5x the existing interface’s cost. This is how 10Gbps superseded 2.5Gbps rates a decade ago.
The reason widespread adoption of 40Gbps has not happened is that 40Gbps has still to meet the crossover threshold. Indeed by 2012, 40Gbps will only be at 4x 10Gbps’ cost, according to market research firm, Ovum.
Thus it is the economics of 40 and 100Gbps as well as power and size that preoccupies module vendors.
Modulation war
“If the 40Gbps module market is at Step 1, 10Gbps is at Step 4,” says ECI Telecom’s Oren Marmur, vice president, optical networking line of business, network solutions division. Ten Gigabit has gone through several transitions: from 300-pin large form factor (LFF) to 300-pin small form factor (SFF) to the smaller fixed-wavelength pluggable XFP and now the tunable XFP. “Forty Gig is where 10 Gig modules were three years ago - each vendor has a different form factor and a different modulation scheme,” says Marmur.
“DPSK dominates 40Gbps module shipments”
Niall Robinson, Mintera
There are four modulation scheme choices for 40Gbps. First deployed was optical duo-binary, followed by two phase-based modulation schemes: differential phase-shift keying (DPSK) and differential quadrature phase-shift keying (DQPSK). The phase modulation schemes offer superior reach and robustness to dispersion but involve more complex and costly designs.
Added to the three is the emerging dual-polarisation, quadrature phase-shift keying (DP-QPSK), already deployed by operators using Nortel’s system and now being developed as a 300-pin LFF transponder by Mintera and JDS Uniphase. Indeed several such designs are expected in 2010.
Mintera has been shipping its 300-pin LFF adaptive DPSK transponder, and claims DPSK dominates 40Gbps module shipments. “DQPSK is being shipped in Japan and there is some interest in China but 90% is DPSK,” says Niall Robinson, vice president of product marketing at Mintera.
Opnext offers four 40Gbps transponder types: duo-binary, DPSK, a continuous mode DPSK variant that adapts to channel conditions based on the reconfigurable optical add/drop multiplexing (ROADM) stages a signal encounters, and a DQPSK design.

"40Gbps coherent channel position must be managed"
Daryl Inniss, Ovum
According to an Ovum study, duo-binary is cheapest followed by DPSK. The question facing transponder vendors is what next? Should they back DQPSK or a 40Gbps coherent DP-QPSK design?
“The problem with DQPSK is that it is more costly, though even coherent is somewhat expensive,” says Daryl Inniss, components practice leader at Ovum. The transponders’ bill of materials is only part of the story; optical performance being the other factor.
DQPSK has excellent performance when encountering dispersion while 40Gbps coherent channel position must be managed when used alongside 10Gbps wavelengths in the fibre. “It is not a big deal but it needs to be managed,” says Inniss. If price declines for the two remain equal, DQPSK will have the larger volumes, he says.
Another consideration is 100Gbps modules. DP-QPSK is the industry-backed modulation scheme for 100Gbps and given the commonality between 40 and 100Gbps coherent designs, the issue is their relative costs.
“The right question people are asking is what are the economics of 40 Gig versus 100 Gig coherent,” says Rafik Ward, Finisar's vice president of marketing. “If you buy 40 Gig and shortly after an economical 100 Gig coherent design appears, will 40 Gig coherent get the required market traction?”
Meanwhile, designers are shrinking existing 40Gbps modules, significantly boosting 40Gbps system capacity.
The 300-pin LFF transponder, at 7x5 inch, requires its own line card. As such, two system line cards are needed for a 40Gbps link: one for the short-reach, client-side interface and one for the line-side transponder.
A handful: a 300-pin large form factor transponder Source: Mintera
Mintera is one vendor developing a smaller 300-pin MSA DPSK transponder that will enable the two 40Gbps interfaces on one card.
“At present there are 16 slots per shelf supporting eight 40Gbps links, and three shelves per bay,” says Robinson. Once vendors design a new line card, system capacity will double, with 16 40Gbps links (640Gbps) per shelf and 1,920Gbps capacity per system. Equipment vendors can also use the smaller pin-for-pin compatible 300-pin MSA on existing cards to reduce costs.
Matt Traverso, senior manager, technical marketing at Opnext, also stresses the importance of more compact transponders: “Right now though it is premature. The issue still is the modulation format war.”
Another factor driving transponder development is the electrical interface used. The 300-pin MSA uses the SFI 5.1 interface based on 16 2.5Gbps channels. “Forty and 100GbE all use 10Gbps interfaces, as do a lot of framer and ASIC vendors,” says Traverso. Since the 300-pin MSA is not compatible, adopting 10Gbps-channel electrical interfaces will likely require a new pluggable MSA for long haul.
CFP MSA for 40 and 100 Gig
One significant MSA development in 2009 was the CFP pluggable transceiver MSA. At ECOC last September, several companies announced first CFP designs implementing 40 and 100GbE standards.
Opnext announced a 100GBASE-LR4 CFP, a 100GbE over 10 km interface made up of four wavelengths each at 25Gbps. Finisar and Sumitomo Electric each announced a 40GBASE-LR4 CFP, a 40GbE over 10km comprising four wavelengths at 10Gbps.
The CFP MSA is smaller than the 300-pin LFF, measuring some 3.4x4.8 inches (86x120mm). It has four power classes: up to 8W, up to 16W, up to 24W and up to 32W. When a CFP is plugged in, it communicates its power class to the host platform.
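A minimal sketch of the host-side check implied by that handshake, using the four class limits above; the function and its names are hypothetical illustrations, not part of the MSA:

```python
# CFP power classes and their upper limits in watts, as listed above.
CFP_POWER_CLASS_W = {1: 8, 2: 16, 3: 24, 4: 32}

def slot_accepts(advertised_class: int, slot_budget_w: float) -> bool:
    """Can this slot power and cool a module of the advertised class?"""
    return CFP_POWER_CLASS_W[advertised_class] <= slot_budget_w

print(slot_accepts(2, 20.0))  # True: a 16W-class module fits a 20W budget
print(slot_accepts(4, 20.0))  # False: a 32W-class module does not
```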
The 100Gbps CFP is designed to link IP routers, or an IP router to a DWDM platform for longer distance transmission.
“There is customer-pull to get the 100 Gig [pluggable] out,” says Traverso, explaining why Opnext chose 100GbE for its first design.
Opnext’s 100GbE pluggable comprises four 25Gbps transmit optical sub-assemblies (TOSAs) and four receive optical sub-assemblies (ROSAs). Also included are an optical multiplexer and demultiplexer to transmit and recover the four narrowly spaced (LAN-WDM) wavelengths. The CFP also contains two integrated circuits (ICs): a gearbox IC translating between the 10Gbps channels and the higher-speed 25Gbps lanes, and the module’s electrical interface IC.
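Functionally, a gearbox is a rate conversion: the same aggregate bitstream redistributed over fewer, faster lanes. The toy sketch below round-robins bits to show the idea; real gearboxes operate on 66-bit blocks with lane alignment markers, which this deliberately ignores:

```python
def gearbox(in_lanes, n_out=4):
    """Re-slice ten slow lanes into four faster lanes (toy model).

    in_lanes: list of equal-length bit sequences (e.g. ten 10Gbps lanes).
    Returns n_out lanes; each output lane carries total_bits/n_out bits,
    i.e. 10x10G in -> 4x25G out.
    """
    # Interleave all input bits into one notional serial stream...
    serial = [bit for column in zip(*in_lanes) for bit in column]
    # ...then deal them out round-robin across the faster output lanes.
    return [serial[i::n_out] for i in range(n_out)]

lanes = [[lane] * 10 for lane in range(10)]  # ten lanes of dummy bits
out = gearbox(lanes)
print(len(out), len(out[0]))                 # 4 lanes x 25 bits each
```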

"The issue still is the modulation format war”
Matt Traverso, Opnext
The CFP transceiver, while relatively large, has space constraints that challenge the routeing of fibres linking the discrete optical components. “This is familiar territory,” says Traverso. “The 10GBASE-LX4 [a four-channel design] in an X2 [pluggable] was a much harder problem.”
“Right now our [100GbE] focus is the 10 km CFP,” says Juniper’s Ceuppens. “There is no interest in parallel multimode [100GBASE-SR10] - service providers will not deploy multi-mode fibre due to the bigger cable and greater weight.”
Finisar’s and Sumitomo Electric’s 40GBASE-LR4 CFP also uses four TOSAs and ROSAs, but since each is 10Gbps no gearbox IC is needed. Moreover, coarse WDM (CWDM)-based wavelength spacing is used, avoiding the need for thermal cooling. The cooling is required at 100Gbps to stop the lasers’ LAN-WDM wavelengths from drifting. Finisar has since detailed a 100GBASE-LR4 CFP.
“For the 40GBASE-LR4 CFP, a discrete design is relatively straightforward,” says Feng Tian, senior manager marketing, device at Sumitomo Electric Device Innovations. Vendors favour discretes to accelerate time-to-market, he says. But with second generation designs, power and cost reduction will be achieved using photonic integration.
Reflex Photonics announced dual 40GBASE-SR4 transceivers within a CFP in October 2009. The SR4 specification uses a 4-channel multimode ribbon cable for short reach links up to 150 m. The short reach CFP designs will be used for connecting routers to DWDM platforms for telecom and to link core switch platforms within the largest data centres. “Where the number of [10Gbps] links becomes unwieldy,” says Robert Coenen, director of product management at Reflex Photonics.
Reflex’s 100GbE design uses a 12x photo-detector array and a 12x VCSEL array. For the 100GbE design, 10 of the 12 channels are used, while for the 2x40GbE, eight (2x4) channels of each array are used. “We didn’t really have to redesign [the 100GbE]; just turn off two lanes and change the fibering,” says Coenen.
Meanwhile switch makers are already highlighting a need for more compact pluggables than the CFP.
“The CFP standard is OK for first generation 100Gbps line cards but denser line cards are going to require a smaller form factor,” says Pravin Mahajan, technology marketer at Cisco Systems.
This is what Cube Optics is addressing by integrating four photo-detectors and a demultiplexer in a sub-assembly using its injection molding technology. Its 4x25Gbps ROSA for 100GbE complements its existing 4x10 CWDM ROSA for 40GbE applications.
“The CFP is a nice starting point but there must be something smaller, such as a QSFP or SFP+,” says Sven Krüger, vice president product management at Cube Optics.
The company has also received funding for the development of complementary 4x25Gbps and 4x10Gbps TOSA functions. “The TOSA is more challenging from an optical alignment point of view; the lasers have a smaller coupling area,” says Francis Nedvidek, Cube Optic’s CEO.
Cube Optics forecasts second generation 40GbE and 100GbE transceiver designs using its integrated optics to ship in volume in 2011.
Could the CFP be used beyond 100GbE for 100Gbps line side and the most challenging coherent design?
“The CFP with its smaller size is a good candidate,” says Sumitomo’s Tian. “But power consumption will be a challenge.” It may require one and maybe two more CMOS process generations beyond the current 65nm to reduce the power consumption sufficiently for the design to meet the CFP’s 32W power limit, he says.
XFP put to new uses
Established pluggables such as the 10Gbps XFP transceiver also continue to evolve.
Transmode is shipping XFP-based tunable lasers with its systems, claiming the tunable XFP brings significant advantages.
In turn, Menara Networks is incorporating system functionality within the XFP normally found only on the line card.
Until now, deploying fixed-wavelength DWDM XFPs meant a system vendor had to keep a sizable inventory for when an operator needed to light new DWDM wavelengths. “With no inventory you have to wait for a firm purchase order from your customer before you know which wavelengths to order from your transceiver vendor, and that means a 12-18 week delivery time,” says Ferej. Now with a tunable XFP, one transceiver meets all the operator’s wavelength planning requirements.
Moreover, the optical performance of the XFP is only marginally less than a tunable 10Gbps 300-pin SFF MSA. “The only advantage of a 300-pin is a 2-3dB better optical signal-to-noise ratio, meaning the signal can pass more optical amplifiers, required for longer reach,” says Ferej.
Using a 300-pin extends the overall reach without a repeater beyond 1,000 km. “But the majority of the metro network business is below 1000 km,” says Ferej.
Does the power and space specifications of an MSA such as the XFP matter for component vendors or do they just accept it?
“It doesn’t matter till it matters,” says Padraig OMathuna, product marketing director at optical device maker, GigOptix. The maximum power rating for an XFP is 3.5W. “If you look inside a tunable XFP, the thermo-electric cooler takes 1.5 to 2W, the laser 0.5W and then there is the TIA,” says OMathuna. “That doesn’t leave a lot of room for our modulator driver.”
Inside JDS Uniphase's tunable XFP
Meanwhile, Menara Networks has implemented the ITU-T’s Optical Transport Network (OTN) in the form of an application specific IC (ASIC) within an XFP.
OTN is used to encapsulate signals for transport while adding optical performance monitoring functions and forward error correction. By including OTN within a pluggable, signal encapsulation, reach and optical signal management can be added to IP routers and carrier Ethernet switch routers.
The approach delivers several advantages, says Siraj ElAhmadi, CEO of Menara Networks.
First, it removes the need for additional 10Gbps transponders to ready the signals from the switch or router for DWDM transport. Second, system vendors can develop a universal linecard without having to support OTN functionality themselves.
The biggest technical challenge for Menara was not developing the OTN ASIC but the accompanying software. “We had the chip one and a half years before we shipped the product because of the software,” says ElAhmadi. “There is no room [within the XFP] for extra memory.”
Menara is supplying its OTN pluggables to a North American cable operator.
ECI Telecom is one vendor using Menara’s pluggable for its carrier Ethernet switch router (CESR) platforms. “For certain applications it saves you having to develop OTN,” says Jimmy Mizrahi, next-generation networking product line manager, network solutions division at ECI Telecom.
Pluggables and optical engines
The CFP is one module that will be used in the data centre, but for high-density applications - linking switches and high-performance computing - more compact designs are needed. These include the QSFP, the CXP and what are being called optical engines.
The CFP form factor for 40 and 100Gbps
The QSFP is already the favoured interface for active optical cables that encapsulate the optics within the cable and which provide an attractive alternative to copper interconnect. QSFP transceivers support quad data rate (QDR) 4xInfiniband as well as extending the reach of 4x10Gbps Ethernet beyond copper’s 7m.
The QSFP is also an option for more compact 40GbE short-reach interfaces. “The [40GBASE-]SR4 is doable today as a QSFP,” says Christian Urricarriet, 40, 100GbE, and parallel product line manager at Finisar. The 40GBASE-LR4 in a QSFP is also possible, as targeted by Cube Optics among others.
Achieving 100GbE within a QSFP is another matter. Adding a 25Gbps-per-channel electrical interface and higher-speed lasers while meeting the QSFP’s power constraints is a considerable challenge. “There may need to be an intermediate form factor that is better defined [for the task],” says Urricarriet.
Meanwhile, the CXP is a front panel interface that promises denser interfaces within the data centre. “CXP is useful for inter-chassis links as it stands today,” says Cisco’s Mahajan.
According to Avago Technologies, Infiniband is the CXP’s first target market while 100GbE using 10 of the 12 channels is clearly an option. But there are technical challenges to be overcome before the CXP connector can be used for 100GbE. “You need to be much more stringent to meet the IEEE optical specification,” says Sami Nassar, director of marketing, fiber optic products division at Avago Technologies.
The CXP is also entering territory until recently the preserve of the SNAP12 parallel optics module. SNAP12 connects the platforms within large IP router configurations, and is used for high-end computing. However, it is not a pluggable and comprises separate 12-channel transmitter and receiver modules. SNAP12 has a 6.25Gbps-per-channel data rate although a 10Gbps-per-channel version has been announced.
“Both [the CXP and SNAP12] have a role,” says Reflex’s Coenen. SNAP12 is on the mother board and because it has a small form factor it can sit close to the ASIC, he says.
Such an approach is now being targeted by firms using optical engines to reduce the cost of parallel interfaces and address emerging high-speed interface requirements on the mother-board, between racks and between systems.
Luxtera’s OptoPHY is one such optical engine. There are two versions: a single channel 10Gbps and a 4x10Gbps product, while a 12-channel version will sample later this year.
The OptoPHY uses the same optical technology as Luxtera’s AOC: a 1490nm distributed feedback (DFB) laser is used for both one and four-channel products, modulated using the company’s silicon photonics technology. The single channel consumes 450mW while the four-channel consumes 800mW, says Marek Tlalka, vice president of marketing at Luxtera, while reach is up to 4km.
Luxtera says the 12-channel version will cost around $120, equating to $1 per Gbps. This, it claims, is several times cheaper than SNAP12.
“The next-generation product will achieve 25Gbps per channel, using the same form factor and the same chip,” says Tlalka. This will allow the optical engine to handle channel speeds used for 100GbE as well as the next Infiniband speed-hike, known as Enhanced Data Rate (EDR).
Avago, a leading supplier of SNAP12, says that the robust interface with its integrated heat sink is still a preferred option for vendors. “For others, with even higher-density concentrations, a next generation packaging type is being used, which we’ve not announced yet,” says Dan Rausch, Avago’s senior technical marketing manager, fiber optic products division.
The advent of 100GbE and even higher rates, and 25Gbps electrical interfaces, will further promote optical engines. “It is hard enough to route 10Gbps around an FR4 printed circuit board,” says Coenen. Four inches are typically the limit, while longer links, up to 10 inches, require such techniques as pre-emphasis, electronic dispersion compensation and retiming.
At 25Gbps distances will become even shorter. “This makes the argument for optical engines even stronger, you will need them near the ASICs to feed data to the front panel,” says Coenen.
Optical transceivers may rightly be in the limelight handling network traffic growth but it is the activities linking platforms, boards and soon on-board devices where optical transceiver vendors, unencumbered by MSAs, have scope for product differentiation.
