Optical transceivers: Pouring a quart into a pint pot
Transceiver feature - 3rd and final part
Optical equipment and transceiver makers have much in common. Both must contend with the challenge of yearly network traffic growth and both are addressing the issue similarly: using faster interfaces, reducing power consumption and making designs more compact and flexible.
Yet if equipment makers and transceiver vendors share common technical goals, the market challenges they face differ. For optical transceiver vendors, the challenges are particularly complex.
Transceiver vendors have little scope for product differentiation. That’s because the interfaces are based on standard form factors defined using multi-source agreements (MSAs).
System vendors may welcome MSAs since they increase their choice of suppliers, but for transceiver vendors they mean fierce competition, even for new opportunities such as 40 and 100 Gigabit Ethernet (GbE) and 40 and 100 Gigabit-per-second (Gbps) long-haul transmission.
Transceiver vendors must also contend with 40Gbps overlapping with the emerging 100Gbps market. Vendors must choose which interface options to back with their hard-earned cash.
Some industry observers even question the 40 and 100Gbps market opportunities given the continual cost reduction and simplicity of 10Gbps transceivers. One is Vladimir Kozlov, CEO of optical transceiver market research firm, LightCounting.
“The argument heard is that 40Gbps will take over the world in two or three years’ time,” says Kozlov. Yet he has been hearing the same claim for over a decade: “Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”
In Kozlov’s view, while 40Gbps and 100Gbps are being adopted in the network, the vast majority of networks will not see such rates. Instead, traffic growth will be met with additional 10Gbps wavelengths and, where necessary, more fibre.
“Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”
Vladimir Kozlov, LightCounting.
And despite the activity surrounding new pluggable transceivers such as the 40 and 100Gbps CFP MSA and long-haul modulation schemes, his view is that “99% of the market is about simplicity and low cost”.
Juniper Networks, in contrast, has no doubt 100Gbps interfaces will be needed.
First demand for 100Gbps will be to simplify data centre connections and link the network backbone. “Link aggregating 10Gbps channels involves multiple fibres and connections,” says Luc Ceuppens, senior director of marketing, high-end systems business unit at Juniper. “Having a single 100 Gigabit interface simplifies network topology and connections.”
Longer term, 100Gbps demand will be driven by streams whose basic currency exceeds 10Gbps. “You won’t have to parse a greater-than-10 Gig stream over two 10Gbps links,” says Ceuppens.
But faster line rates are only one way equipment vendors are tackling traffic growth and networking costs.
"Forty Gig and eventually 100 Gig are basic needs for data centre connections and backbone networks, but in the metro, higher line rate is not the only way to handle traffic growth cost effectively,” says Mohamad Ferej, vice president of R&D at Transmode. He points to lowering equipment’s cost, power consumption and size as well as enhancing its flexibility.
Compact designs equate to less floor space in the central office, while the energy consumption of platforms is a growing concern. Tackling both reduces operational expenses.
Greater platform flexibility using tunable components and pluggable transceivers also helps reduce costs. Tunable-laser-based transceivers slash the number of spare fixed-wavelength dense wavelength division multiplexing (DWDM) transceivers operators and system vendors must store. Meanwhile, pluggables reduce costs by increasing competition and decoupling optics from the line card.
For higher speed interfaces, optical transmission cost - the cost-per-bit-per-kilometre - is reduced only if the new interface’s bandwidth grows faster than its cost relative to existing interfaces. The rule-of-thumb is that the transition to a new 4x line rate occurs once it matches 2.5x the existing interface’s cost. This is how 10Gbps superseded 2.5Gbps rates a decade ago.
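To make the rule-of-thumb concrete, here is the crossover arithmetic as a short sketch. The prices are normalised and hypothetical, used only to show the logic:

```python
# Worked example of the 4x-rate-at-2.5x-cost crossover rule-of-thumb,
# using normalised, hypothetical prices (not actual market figures).

def cost_per_bit(price, rate_gbps):
    """Relative transmission cost per Gbps of capacity."""
    return price / rate_gbps

price_10g = 1.0                          # normalised price of a 10Gbps interface
price_40g_crossover = 2.5 * price_10g    # rule-of-thumb: 2.5x the price for 4x the rate

print(cost_per_bit(price_10g, 10))             # 0.1 per Gbps
print(cost_per_bit(price_40g_crossover, 40))   # 0.0625 per Gbps: 37.5% cheaper per bit

# Ovum's 2012 estimate: 40Gbps at 4x the 10Gbps price offers no per-bit saving.
print(cost_per_bit(4 * price_10g, 40))         # 0.1 per Gbps: no better than 10Gbps
```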
Widespread adoption of 40Gbps has not happened because 40Gbps has yet to meet the crossover threshold. Indeed, by 2012, 40Gbps will only be at 4x 10Gbps’ cost, according to market research firm, Ovum.
Thus it is the economics of 40 and 100Gbps as well as power and size that preoccupies module vendors.
Modulation war
“If the 40Gbps module market is at Step 1, 10Gbps is at Step 4,” says ECI Telecom’s Oren Marmur, vice president, optical networking line of business, network solutions division. Ten Gigabit has gone through several transitions: from the 300-pin large form factor (LFF) to the 300-pin small form factor (SFF), to the smaller fixed-wavelength pluggable XFP and now the tunable XFP. “Forty Gig is where 10 Gig modules were three years ago - each vendor has a different form factor and a different modulation scheme,” says Marmur.
DPSK dominates 40Gbps module shipments
Niall Robinson, Mintera
There are four modulation scheme choices for 40Gbps. First deployed was optical duo-binary, followed by two phase-based modulation schemes: differential phase-shift keying (DPSK) and differential quadrature phase-shift keying (DQPSK). The phase modulation schemes offer superior reach and robustness to dispersion, but their designs are more complex and costly.
Added to these three is the emerging dual-polarisation quadrature phase-shift keying (DP-QPSK), already deployed by operators using Nortel’s system and now being developed as a 300-pin LFF transponder by Mintera and JDS Uniphase. Indeed, several such designs are expected in 2010.
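The appeal of the phase-based schemes follows from simple symbol-rate arithmetic: the more bits carried per symbol and per polarisation, the lower the symbol rate, which eases component bandwidth requirements and improves dispersion tolerance. Ignoring FEC and framing overhead, for a nominal 40Gbps line rate:

R_symbol = R_bit / (b x p)

where b is the number of bits per symbol and p the number of polarisations. Duo-binary and DPSK (b=1, p=1) thus run at 40Gbaud, DQPSK (b=2, p=1) at 20Gbaud, and DP-QPSK (b=2, p=2) at just 10Gbaud - close to the symbol rates that mature 10Gbps-class components are already designed for.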
Mintera has been shipping its 300-pin LFF adaptive DPSK transponder, and claims DPSK dominates 40Gbps module shipments. “DQPSK is being shipped in Japan and there is some interest in China but 90% is DPSK,” says Niall Robinson, vice president of product marketing at Mintera.
Opnext offers four 40Gbps transponder types: duo-binary, DPSK, a continuous-mode DPSK variant that adapts to channel conditions based on the number of reconfigurable optical add/drop multiplexing (ROADM) stages a signal encounters, and a DQPSK design.
"40Gbps coherent channel position must be managed"
Daryl Inniss, Ovum
According to an Ovum study, duo-binary is cheapest followed by DPSK. The question facing transponder vendors is what next? Should they back DQPSK or a 40Gbps coherent DP-QPSK design?
“The problem with DQPSK is that it is more costly, though even coherent is somewhat expensive,” says Daryl Inniss, components practice leader at Ovum. The transponders’ bill of materials is only part of the story; optical performance is the other factor.
DQPSK performs well in the presence of dispersion, whereas a 40Gbps coherent channel’s position must be managed when it is used alongside 10Gbps wavelengths in the fibre. “It is not a big deal but it needs to be managed,” says Inniss. If price declines for the two remain equal, DQPSK will have the larger volumes, he says.
Another consideration is 100Gbps modules. DP-QPSK is the industry-backed modulation scheme for 100Gbps and given the commonality between 40 and 100Gbps coherent designs, the issue is their relative costs.
“The right question people are asking is what are the economics of 40 Gig versus 100 Gig coherent,” says Rafik Ward, Finisar's vice president of marketing. “If you buy 40 Gig and shortly after an economical 100 Gig coherent design appears, will 40 Gig coherent get the required market traction?”
Meanwhile, designers are shrinking existing 40Gbps modules, significantly boosting 40Gbps system capacity.
The 300-pin LFF transponder, at 7x5 inches, requires its own line card. As such, two system line cards are needed for a 40Gbps link: one for the short-reach, client-side interface and one for the line-side transponder.
Mintera is one vendor developing a smaller 300-pin MSA DPSK transponder that will enable the two 40Gbps interfaces on one card.
“At present there are 16 slots per shelf supporting eight 40Gbps links, and three shelves per bay,” says Robinson. Once vendors design a new line card, system capacity will double, with 16 40Gbps links (640Gbps) per shelf and 1,920Gbps per system. Equipment vendors can also use the smaller, pin-for-pin compatible 300-pin MSA on existing cards to reduce costs.
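Robinson’s numbers work out as follows - a quick sketch of the arithmetic, using the three shelves per bay quoted above:

```python
# Shelf and bay capacity before and after the smaller 300-pin MSA,
# using the figures from Robinson's description.
SLOTS_PER_SHELF = 16
SHELVES_PER_BAY = 3
LINK_RATE_GBPS = 40

# Today: a 40Gbps link occupies two slots (client-side card + line-side card).
links_now = SLOTS_PER_SHELF // 2                       # 8 links per shelf
print(links_now * LINK_RATE_GBPS * SHELVES_PER_BAY)    # 960 Gbps per bay

# With both interfaces on one card: one slot per link.
links_new = SLOTS_PER_SHELF                            # 16 links per shelf
print(links_new * LINK_RATE_GBPS)                      # 640 Gbps per shelf
print(links_new * LINK_RATE_GBPS * SHELVES_PER_BAY)    # 1,920 Gbps per bay
```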
Matt Traverso, senior manager, technical marketing at Opnext, also stresses the importance of more compact transponders: “Right now, though, it is premature. The issue still is the modulation format war.”
Another factor driving transponder development is the electrical interface used. The 300-pin MSA uses the SFI-5.1 interface based on 16 2.5Gbps channels. “Forty and 100GbE all use 10Gbps interfaces, as do a lot of framer and ASIC vendors,” says Traverso. Since the 300-pin MSA is not compatible, adopting 10Gbps-channel electrical interfaces will likely require a new pluggable MSA for long haul.
CFP MSA for 40 and 100 Gig
One significant MSA development in 2009 was the CFP pluggable transceiver MSA. At ECOC last September, several companies announced their first CFP designs implementing the 40 and 100GbE standards.
Opnext announced a 100GBASE-LR4 CFP, a 100GbE over 10km interface made up of four wavelengths, each at 25Gbps. Finisar and Sumitomo Electric each announced a 40GBASE-LR4 CFP, a 40GbE over 10km interface comprising four wavelengths, each at 10Gbps.
The CFP MSA is smaller than the 300-pin LFF, measuring some 3.4x4.8 inches (86x120mm). It has four power classes: up to 8W, up to 16W, up to 24W and up to 32W. When a CFP is plugged in, it communicates its power class to the host platform.
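A minimal sketch of how a host might act on the advertised power class is shown below. The class codes and the admission logic here are assumptions for illustration; the actual CFP MSA defines its own management register map, accessed over MDIO:

```python
# Hypothetical illustration of host-side power-class handling for a CFP.
# The real CFP MSA exposes the class via its MDIO management registers;
# the class-to-watts codes below are assumed, not taken from the spec.
POWER_CLASS_LIMIT_W = {1: 8, 2: 16, 3: 24, 4: 32}

def admit_module(power_class: int, slot_budget_w: float) -> bool:
    """Allow the module to power up only if the slot can feed it."""
    limit = POWER_CLASS_LIMIT_W.get(power_class)
    if limit is None:
        raise ValueError(f"unknown CFP power class: {power_class}")
    return limit <= slot_budget_w

print(admit_module(2, slot_budget_w=20.0))  # True: a 16W-class module fits
print(admit_module(4, slot_budget_w=20.0))  # False: a 32W-class module does not
```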
The 100Gbps CFP is designed to link IP routers, or an IP router to a DWDM platform for longer distance transmission.
“There is customer-pull to get the 100 Gig [pluggable] out,” says Traverso, explaining why Opnext chose 100GbE for its first design.
Opnext’s 100GbE pluggable comprises four 25Gbps transmit optical sub-assemblies (TOSAs) and four receive optical sub-assemblies (ROSAs), as well as an optical multiplexer and demultiplexer to transmit and recover the four narrowly spaced (LAN-WDM) wavelengths. The CFP also houses two integrated circuits (ICs): a gearbox IC translating between the 10Gbps channels and the higher speed 25Gbps lanes, and the module’s electrical interface IC.
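The gearbox’s job can be sketched as a serialisation step: ten 10Gbps electrical lanes in, four 25Gbps lanes out. The round-robin below is a naive illustration of the rate maths only; the real IC follows the IEEE 802.3ba multi-lane distribution scheme, not this simplified redistribution:

```python
# Naive illustration of a 10:4 gearbox: 10 lanes x 10Gbps in,
# 4 lanes x 25Gbps out. Real gearboxes implement the IEEE 802.3ba
# multi-lane distribution; this only shows the aggregate-rate maths
# and a simple bit redistribution.
IN_LANES, IN_RATE = 10, 10     # Gbps per electrical lane
OUT_LANES, OUT_RATE = 4, 25    # Gbps per optical lane
assert IN_LANES * IN_RATE == OUT_LANES * OUT_RATE == 100

def gearbox(bits: list) -> list:
    """Round-robin the aggregate bit stream across the output lanes."""
    lanes = [[] for _ in range(OUT_LANES)]
    for i, bit in enumerate(bits):
        lanes[i % OUT_LANES].append(bit)
    return lanes

stream = list(range(8))    # stand-in for the 100Gbps aggregate stream
print(gearbox(stream))     # [[0, 4], [1, 5], [2, 6], [3, 7]]
```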
"The issue still is the modulation format war”
Matt Traverso, Opnext
The CFP transceiver, while relatively large, has space constraints that challenge the routeing of fibres linking the discrete optical components. “This is familiar territory,” says Traverso. “The 10GBASE-LX4 [a four-channel design] in an X2 [pluggable] was a much harder problem.”
“Right now our [100GbE] focus is the 10 km CFP,” says Juniper’s Ceuppens. “There is no interest in parallel multimode [100GBASE-SR10] - service providers will not deploy multi-mode fibre due to the bigger cable and greater weight.”
Finisar’s and Sumitomo Electric’s 40GBASE-LR4 CFPs also use four TOSAs and ROSAs, but since each channel runs at 10Gbps no gearbox IC is needed. Moreover, coarse WDM (CWDM) wavelength spacing is used, avoiding the need for thermal cooling. Cooling is required at 100Gbps to stop the lasers’ LAN-WDM wavelengths from drifting. Finisar has since detailed a 100GBASE-LR4 CFP.
“For the 40GBASE-LR4 CFP, a discrete design is relatively straightforward,” says Feng Tian, senior manager marketing, device at Sumitomo Electric Device Innovations. Vendors favour discretes to accelerate time-to-market, he says. But with second-generation designs, power and cost reductions will be achieved using photonic integration.
Reflex Photonics announced dual 40GBASE-SR4 transceivers within a CFP in October 2009. The SR4 specification uses a 4-channel multimode ribbon cable for short reach links up to 150 m. The short reach CFP designs will be used for connecting routers to DWDM platforms for telecom and to link core switch platforms within the largest data centres. “Where the number of [10Gbps] links becomes unwieldy,” says Robert Coenen, director of product management at Reflex Photonics.
Reflex’s design uses a 12x photo-detector array and a 12x VCSEL array. For 100GbE, 10 of the 12 channels are used, while the 2x40GbE uses eight (2x4) channels of each array (see diagram). “We didn’t really have to redesign [the 100GbE]; just turn off two lanes and change the fibering,” says Coenen.
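Coenen’s lane re-use point can be expressed as a simple allocation over the 12-channel arrays. The sketch below is illustrative only; the diagram shows the actual lane mapping:

```python
# Illustrative lane allocation over a 12-channel VCSEL/photo-detector
# array (actual lane assignments may differ from this sketch).
ARRAY_LANES = 12

def allocate(mode: str) -> dict:
    if mode == "100GbE":      # 10 x 10Gbps lanes active, 2 lanes dark
        return {"active": 10, "dark": 2, "ports": 1}
    if mode == "2x40GbE":     # two 40GBASE-SR4 ports, 4 lanes each
        return {"active": 8, "dark": 4, "ports": 2}
    raise ValueError(f"unknown mode: {mode}")

for mode in ("100GbE", "2x40GbE"):
    a = allocate(mode)
    assert a["active"] + a["dark"] == ARRAY_LANES
    print(mode, a)
```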
Meanwhile switch makers are already highlighting a need for more compact pluggables than the CFP.
“The CFP standard is OK for first generation 100Gbps line cards but denser line cards are going to require a smaller form factor,” says Pravin Mahajan, technology marketer at Cisco Systems.
This is what Cube Optics is addressing by integrating four photo-detectors and a demultiplexer in a sub-assembly using its injection-moulding technology. Its 4x25Gbps ROSA for 100GbE complements its existing 4x10Gbps CWDM ROSA for 40GbE applications.
“The CFP is a nice starting point but there must be something smaller, such as a QSFP or SFP+,” says Sven Krüger, vice president product management at Cube Optics.
The company has also received funding for the development of complementary 4x25Gbps and 4x10Gbps TOSA functions. “The TOSA is more challenging from an optical alignment point of view; the lasers have a smaller coupling area,” says Francis Nedvidek, Cube Optics’ CEO.
Cube Optics forecasts second generation 40GbE and 100GbE transceiver designs using its integrated optics to ship in volume in 2011.
Could the CFP be used beyond 100GbE, for 100Gbps line-side transmission and the most challenging coherent designs?
“The CFP with its smaller size is a good candidate,” says Sumitomo’s Tian. “But power consumption will be a challenge.” Moving one, and maybe two, CMOS process generations beyond the current 65nm may be needed to cut power consumption sufficiently to meet the CFP’s 32W limit, he says.
XFP put to new uses
Established pluggables such as the 10Gbps XFP transceiver also continue to evolve.
Transmode is shipping XFP-based tunable lasers with its systems, claiming the tunable XFP brings significant advantages.
In turn, Menara Networks is incorporating within the XFP system functionality normally found only on the line card.
Until now, deploying fixed-wavelength DWDM XFPs has meant a system vendor had to keep a sizable inventory for when an operator needed to light new DWDM wavelengths. “With no inventory you have to wait for a firm purchase order from your customer before you know which wavelengths to order from your transceiver vendor, and that means a 12 to 18 week delivery time,” says Ferej. Now, with a tunable XFP, one transceiver meets all the operator’s wavelength planning requirements.
Moreover, the optical performance of the XFP is only marginally below that of a tunable 10Gbps 300-pin SFF MSA. “The only advantage of a 300-pin is a 2-3dB better optical signal-to-noise ratio, meaning the signal can pass through more optical amplifiers, as required for longer reach,” says Ferej.
Using a 300-pin extends the overall reach, without a repeater, beyond 1,000km. “But the majority of the metro network business is below 1,000km,” says Ferej.
Do the power and space specifications of an MSA such as the XFP matter to component vendors, or do they just accept them?
“It doesn’t matter till it matters,” says Padraig OMathuna, product marketing director at optical device maker GigOptix. The maximum power rating for an XFP is 3.5W. “If you look inside a tunable XFP, the thermo-electric cooler takes 1.5 to 2W, the laser 0.5W and then there is the TIA [trans-impedance amplifier],” says OMathuna. “That doesn’t leave a lot of room for our modulator driver.”
Meanwhile, Menara Networks has implemented the ITU-T’s Optical Transport Network (OTN) standard as an application-specific IC (ASIC) within an XFP.
OTN is used to encapsulate signals for transport while adding optical performance monitoring functions and forward error correction. By including OTN within a pluggable, signal encapsulation, reach and optical signal management can be added to IP routers and carrier Ethernet switch routers.
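The price of this wrapping is a modest line-rate increase, set by the G.709 frame and FEC arithmetic. Wrapping a 9.95328Gbps STM-64/OC-192 client into an OTU2 frame, for example, gives:

R_OTU2 = (255/237) x 9.95328Gbps ≈ 10.709Gbps

an overhead of roughly 7.6%, accounting for the OTN frame headers and the Reed-Solomon RS(255,239) forward error correction.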
The approach delivers several advantages, says Siraj ElAhmadi, CEO of Menara Networks.
First, it removes the need for additional 10Gbps transponders to ready the signals from the switch or router for DWDM transport. Second, system vendors can develop a universal line card without having to implement OTN functionality themselves.
The biggest technical challenge for Menara was not developing the OTN ASIC but the accompanying software. “We had the chip one and a half years before we shipped the product because of the software,” says ElAhmadi. “There is no room [within the XFP] for extra memory.”
Menara is supplying its OTN pluggables to a North American cable operator.
ECI Telecom is one vendor using Menara’s pluggable for its carrier Ethernet switch router (CESR) platforms. “For certain applications it saves you having to develop OTN,” says Jimmy Mizrahi, next-generation networking product line manager, network solutions division at ECI Telecom.
Pluggables and optical engines
The CFP is one module that will be used in the data centre, but for high-density applications - linking switches and high-performance computing - more compact designs are needed. These include the QSFP, the CXP and what are being called optical engines.
The QSFP is already the favoured interface for active optical cables, which encapsulate the optics within the cable and provide an attractive alternative to copper interconnect. QSFP transceivers support quad data rate (QDR) 4x Infiniband as well as extending the reach of 4x10Gbps Ethernet beyond copper’s 7m.
The QSFP is also an option for more compact 40GbE short-reach interfaces. “The [40GBASE-]SR4 is doable today as a QSFP,” says Christian Urricarriet, product line manager for 40GbE, 100GbE and parallel optics at Finisar. The 40GBASE-LR4 in a QSFP is also possible, as targeted by Cube Optics among others.
Achieving 100GbE within a QSFP is another matter. Adding a 25Gbps-per-channel electrical interface and higher-speed lasers while meeting the QSFP’s power constraints is a considerable challenge. “There may need to be an intermediate form factor that is better defined [for the task],” says Urricarriet.
Meanwhile, the CXP is a front panel interface that promises denser interfaces within the data centre. “CXP is useful for inter-chassis links as it stands today,” says Cisco’s Mahajan.
According to Avago Technologies, Infiniband is the CXP’s first target market, while 100GbE using 10 of the 12 channels is clearly an option. But there are technical challenges to overcome before the CXP connector can be used for 100GbE. “You need to be much more stringent to meet the IEEE optical specification,” says Sami Nassar, director of marketing, fiber optic products division at Avago Technologies.
The CXP is also entering territory until recently the preserve of the SNAP12 parallel optics module. SNAP12 connects the platforms within large IP router configurations, and is used for high-end computing. However, it is not a pluggable and comprises separate 12-channel transmitter and receiver modules. SNAP12 has a 6.25Gbps-per-channel data rate, although a 10Gbps-per-channel version has been announced.
“Both [the CXP and SNAP12] have a role,” says Reflex’s Coenen. SNAP12 sits on the motherboard and, because of its small form factor, can be placed close to the ASIC, he says.
Such an approach is now being targeted by firms using optical engines to reduce the cost of parallel interfaces and address emerging high-speed interface requirements on the motherboard, between racks and between systems.
Luxtera’s OptoPHY is one such optical engine. There are two versions, a single-channel 10Gbps product and a 4x10Gbps product, while a 12-channel version will sample later this year.
The OptoPHY uses the same optical technology as Luxtera’s active optical cables: a 1490nm distributed feedback (DFB) laser is used for both the one and four-channel products, modulated using the company’s silicon photonics technology. The single channel consumes 450mW while the four-channel consumes 800mW, says Marek Tlalka, vice president of marketing at Luxtera, while reach is up to 4km.
Luxtera says the 12-channel version will cost around $120, equating to $1 per Gbps. This, it claims, is several times cheaper than SNAP12.
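Taken at face value, the quoted figures translate into simple per-bit metrics - a back-of-the-envelope check assuming 10Gbps per channel, as the four-channel product implies:

```python
# Back-of-the-envelope per-bit metrics from Luxtera's quoted figures,
# assuming 10Gbps per channel.
four_ch_power_w = 0.8        # quoted: 800mW for the 4x10Gbps OptoPHY
four_ch_rate_gbps = 4 * 10

# Energy per bit: 0.8W / 40Gbps = 20 pJ/bit (x1e3 converts nJ to pJ)
print(four_ch_power_w / four_ch_rate_gbps * 1e3, "pJ/bit")   # 20.0

twelve_ch_price = 120.0      # quoted: ~$120 for the 12-channel version
twelve_ch_rate_gbps = 12 * 10
print(twelve_ch_price / twelve_ch_rate_gbps, "$/Gbps")       # 1.0
```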
“The next-generation product will achieve 25Gbps per channel, using the same form factor and the same chip,” says Tlalka. This will allow the optical engine to handle channel speeds used for 100GbE as well as the next Infiniband speed-hike known as Eight Data Rate (EDR).
Avago, a leading supplier of SNAP12, says that the robust interface with its integrated heat sink is still a preferred option for vendors. “For others, with even higher-density concentrations, a next generation packaging type is being used, which we’ve not announced yet,” says Dan Rausch, Avago’s senior technical marketing manager, fiber optic products division.
The advent of 100GbE and even higher rates, and of 25Gbps electrical interfaces, will further promote optical engines. “It is hard enough to route 10Gbps around an FR4 printed circuit board,” says Coenen. Four inches is typically the limit, while longer links, up to 10 inches, require such techniques as pre-emphasis, electronic dispersion compensation and retiming.
At 25Gbps, distances will become even shorter. “This makes the argument for optical engines even stronger; you will need them near the ASICs to feed data to the front panel,” says Coenen.
Optical transceivers may rightly be in the limelight for handling network traffic growth, but it is in linking platforms, boards and, soon, on-board devices that transceiver vendors, unencumbered by MSAs, have scope for product differentiation.
Reader Comments (1)
Nice article.
I think I agree with Vladimir's assessment that 10G is still the place to be for volume optical shipments.
I think many of the system manufacturers are in a strange position. They clearly need to show some kind of 40G and 100G capability moving forward, especially for their largest customers. Most of them, however, don't have (or have lost) the ability to do a vertically integrated development in this space.
Simultaneously the majority of the optical module suppliers cannot justify, or can only barely justify, the development of such a costly and low volume product offering. Our industry, on the whole, is in a quandary on this issue. Who is going to front the money and time to make 100G happen in a widespread sense?
More importantly, if it becomes widespread (at least on the supply side) but not widely adopted (perhaps due to cost reasons), then what happens to the many players that have come out with module solutions? They will all share a very small pie, which is not a recipe for success.
I wonder if 100G is still at the stage where economically it only makes sense to be done in startups? I can't fathom how a Product Manager at a mainline optical module vendor can justify a 100G module development (especially one aimed at the LH) given the cost of the development, the time to revenue, and the projected inflection point of 10G to 100G widespread adoption (and that doesn't even take into account 40G), unless of course the business is pre-wired and booked by a majority of the marketplace.