40 Gigabit Ethernet QSFPs boost port density and reach

Avago Technologies and Reflex Photonics have announced extended reach 40 Gigabit Ethernet (GbE) QSFP+ transceivers. As data centres grow in size, there is a need to link equipment over distances greater than the 100m defined by the IEEE 40 Gigabit Ethernet standard.

 

"For the larger data centres being built today, reach is becoming more and more important"

I Hsing Tan, Avago 

 

 

Avago’s eSR4 QSFP+ transceiver extends the reach of 40GbE over multimode fibre beyond the IEEE 40GBASE-SR4 specification, to 300m over OM3 and 400m over OM4 multimode fibre.

Reflex Photonics’ 40GbE QSFP also achieves 300m over OM3 fibre. While the company has not tested the transceiver over OM4 fibre, it uses the same optics as its CFP, which achieves 450m over OM4.

“This [QSFP] is aimed at large data centres operated by the likes of a Google or a Facebook,” says Robert Coenen, vice president, sales and marketing at Reflex Photonics. Such data centres can have link requirements of 1000m. “The more reach you can give over multimode fibre, the more money they [data centre operators] can save.”

The eSR4, like Avago's already announced iSR4 (interoperable SR4) 40GbE QSFP+ transceiver, supports either 40GbE or four independent 10GbE channels. When used as a multichannel 10GbE interface, the QSFP+ can interface to various 10GbE form factors such as X2, XFP and SFP+, says Avago. 

The iSR4 also boosts equipment faceplate capacity: up to 44 40GbE QSFP+ ports can replace 48 10GbE SFP+ ports, nearly quadrupling a card's bandwidth. Avago says that one equipment vendor has already announced a card with 36 QSFP+ ports. The iSR4 QSFP+ also reduces power consumption per gigabit to 37.5mW/Gbps, compared with 100mW/Gbps for the SFP+. The eSR4 consumes around 50mW/Gbps, half that of the SFP+.
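The faceplate and power figures can be checked with a quick back-of-envelope sketch. The module counts per card come from the 48-port SFP+ and 44-port QSFP+ example above; this is illustrative arithmetic, not vendor data:

```python
# Faceplate bandwidth and optics power per card, using the figures
# quoted in the article (ports per card, Gbps per port, mW per Gbps).
modules = {
    "SFP+":       (48, 10, 100.0),
    "iSR4 QSFP+": (44, 40, 37.5),
    "eSR4 QSFP+": (44, 40, 50.0),
}

for name, (ports, gbps, mw_per_gbps) in modules.items():
    faceplate = ports * gbps                    # total card bandwidth, Gbps
    power_w = faceplate * mw_per_gbps / 1000.0  # total optics power, W
    print(f"{name}: {faceplate} Gbps per card, {power_w:.0f} W")
```

The SFP+ card works out at 480Gbps for 48W of optics, while the iSR4 card delivers 1,760Gbps for 66W.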

However, the iSR4 only matches the reach of the IEEE 40GBASE-SR4 40GbE standard: 100m over OM3 and 150m over OM4 fibre. "This [reduced reach at 40GbE] creates an issue for data centre operations," says I Hsing Tan, Ethernet segment marketing manager in the fiber optics product division at Avago. "They require additional investment to redo all the wiring in current 10GbE infrastructure to support a shorter reach." 

With the extended-reach 40GbE QSFPs, the reach associated with 10GbE interfaces on OM3 and OM4 multimode fibre is restored.

The iSR4 module is available now, says Avago, while the eSR4 will be available from mid-2012. Reflex’s Coenen says it will have samples of its 40GbE QSFP, which likewise supports both 40GbE and 4x10GbE operation, by May 2012.

 

What has been done

For Avago's iSR4 QSFP+ to operate as four 10GbE channels, it has to comply with the 10GBASE-SR optical standard: 10GBASE-SR specifies a maximum receive power of -1dBm, whereas 40GBASE-SR4 allows a transmitter output of up to 2.4dBm. The transmitter power of the iSR4 has therefore been reduced. "We force the output of the transmitter down to -1dBm," says Tan.
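In linear terms, the gap between the two limits is easy to see. A short sketch of the standard dBm-to-milliwatt conversion, applied to the power limits quoted above:

```python
def dbm_to_mw(dbm):
    """Convert optical power from dBm to milliwatts (P_mW = 10^(dBm/10))."""
    return 10 ** (dbm / 10)

# 40GBASE-SR4 allows up to +2.4dBm launch power, but a 10GBASE-SR
# receiver is only specified up to -1dBm, so the iSR4 caps its
# transmitter at -1dBm (limits as quoted in the article).
print(f"+2.4 dBm = {dbm_to_mw(2.4):.2f} mW")   # ~1.74 mW
print(f"-1.0 dBm = {dbm_to_mw(-1.0):.2f} mW")  # ~0.79 mW
```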

To achieve the greater reach, the eSR4 uses a VCSEL design with a tighter spectral width. Other parameters include the optical modulation amplitude and the wavelength, which together determine the dispersion penalty over the fibre. “Once you control the spectral width, you can design the other two to meet the specs,” says Tan.

The Avago 40GbE QSFP+ modules use an integrated 4-channel VCSEL array and a 4-channel photo-detector array.

 

Significance

The 40GbE short reach interfaces play an important role in the data centre. As servers move from using 1GbE to 10GbE interfaces, the uplink from aggregation 'top-of-rack' switches must also scale from 10GbE to higher speeds of 40GbE or 100GbE.

However, existing 100GbE interfaces use the CFP module, which is relatively large and expensive. And although the 100GbE standard has a clear roadmap leading to the CFP2 and CFP4 modules - half and a quarter of the size of the CFP, respectively - these are not yet available. 

40GbE QSFP+ transceivers do exist and offer the equipment faceplate density improvement vendors want. 

The QSFP+ also benefits existing 10GbE designs by supporting nearly four times the number of 10GbE ports on a card. Thus a new blade with up to 44 40GbE QSFP+ transceivers can interface to up to 176 10GbE transceivers, a near fourfold capacity increase.

According to Avago, between 10% and 20% of interface requirements in the data centre are beyond 150m. Without the advent of extended reach 40GbE modules, data centre operators would need to deploy single mode fibre and a 40GBASE-LR4 module, it says. And while that can be fitted inside a QSFP, its power consumption is up to 3.5W, compared to the 1.5W of the QSFP+ eSR4. "The cost of the LR4 is also increased by at least a factor of three," says Tan.

Avago says that some 95% of all fibre in the data centre is multimode. As for OM3 and OM4 deployments, the split is 80% to 20%, respectively. 


The CFP4 optical module to enable Terabit blades

The next-generation CFP modules - the CFP2 and CFP4 - promise to double and double again the number of 100 Gigabit-per-second (Gbps) optical module interfaces on a blade.

Using the CFP4, up to 16 100Gbps modules will fit on a blade, a total line rate of 1.6 Terabits-per-second (Tbps). With a goal of a 60W total module power budget per blade, that equates to 27Gbps/W. In comparison, the power-efficient SFP+ achieves 10Gbps/W.
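The blade arithmetic works out as follows. A sketch using the module counts and the 60W blade budget quoted in the article, with each module assumed to run at 100Gbps:

```python
# Per-blade capacity and power efficiency for successive CFP generations.
blade_budget_w = 60  # target total module power budget per blade

generations = {
    # name: (modules per blade, Gbps per module)
    "CFP":  (4,  100),
    "CFP2": (8,  100),
    "CFP4": (16, 100),
}

for name, (count, gbps) in generations.items():
    total_gbps = count * gbps
    print(f"{name}: {total_gbps / 1000:.1f} Tbps per blade, "
          f"{total_gbps / blade_budget_w:.0f} Gbps/W at {blade_budget_w} W")
```

The CFP4 row reproduces the article's figure: 1.6Tbps in 60W is roughly 27Gbps/W.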
 

Source: Gazettabyte, Xilinx

The CFP2 is about half the size of the CFP while the CFP4 is half the size of the CFP2. The CFP4 is slightly wider and longer than the QSFP.

The two CFP modules will use a 4x25Gbps electrical interface, doing away with the need for a 10x10Gbps to 4x25Gbps gearbox IC used for current CFP 100GBASE-LR4 and -ER4 interfaces. The CFP2 and CFP4 are also defined for 40 Gigabit Ethernet use.

The CFP's maximum power rating is 32W, the CFP2's 12W and the CFP4's 5W. But vendors that put eight CFP2s or 16 CFP4s on a blade still want to meet the 60W total power budget.

 

Getting close: Four CFP modules deliver slightly less bandwidth than 48 SFP+ modules - 4x100Gbps (400Gbps) versus 480Gbps - while consuming more power: 60W versus 48W. Moving to the CFP2 module will double the blade's bandwidth without consuming more power, and the CFP4 will do the same again: a blade with 16 CFP4 modules promises 1.6Tbps while requiring 60W. Source: Xilinx

The first CFP2 modules are expected this year - there could be vendor announcements as early as the OFC/NFOEC 2012 show, to be held in Los Angeles in the first week of March. The first CFP4 products are expected in 2013.

 

Further reading

The CFP MSA presentation: CFP MSA 100G roadmap and applications

 


Reflections 2011, Predictions 2012 - Part 2

Gazettabyte asked industry analysts, CEOs, executives and commentators to reflect on the last year and comment on developments they most anticipate for 2012. Here are the views of Verizon's Glenn Wellbrock, Professor Rod Tucker, Ciena's Joe Berthold, Opnext's Jon Anderson, NeoPhotonics' Tim Jenks and Vladimir Kozlov of LightCounting.

 

Glenn Wellbrock, Verizon's director of optical transport network architecture & design

The most significant accomplishment from an optical transport perspective for me was the introduction of 100 Gigabit into Verizon's domestic - US - network. 


"The key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit"

 

That accomplishment has paved the way for us to hit the ground running in 2012 with a very aggressive 100 Gigabit deployment plan. I also believe this accomplishment gives others the confidence to start taking advantage of this leading-edge technology. 

With coherent receiver technology and the associated high-speed electronics lowering the propagation latency by up to 15%, we see a much cleaner line system design that eliminates external dispersion compensation fibre while bringing down the cost, space and power per bit. 

The value of the whole industry moving in this direction means higher volumes and, therefore, lower costs.  This new infrastructure will allow operators to get ahead of customer demand, thus improving delivery intervals and introducing new, higher bandwidth services to those large key customers that require it.  

In my opinion, the key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit and provides the framework to support colourless, directionless and contentionless optical nodes.

Today, field technicians must plug a new transmitter/receiver into the appropriate direction and filter port at both circuit ends. With this new technology, operations personnel can simply plug the new card into the next available port and it can then be provisioned, tested and even moved to a new colour or direction remotely without any on-site personnel involvement - even when there are multiple copies of the same colour on the same add/drop structure coming from different fibres.

This new nodal architecture takes advantage of the inherent channel selection capability of the coherent receiver to eliminate fixed filters and opens up the door for a truly reconfigurable optical add/drop multiplexer (ROADM) - creating new flexibility that can be used for optical restoration, network defragmentation, operational simplicity, and more. 

 

Rod Tucker, Director of the Institute for a Broadband Enabled Society (IBES), Director of the Centre for Energy-Efficient Telecommunications (CEET), and professor of electrical and electronic engineering at the University of Melbourne.

Australia's National Broadband Network (NBN) hit the ground running in 2011.

The project is still many years from completion, but in 2011 the roll-out of fibre-to-the-premises infrastructure began in earnest. This is a very noteworthy project - a wholesale broadband access network delivering advanced broadband services to the entire population of the country, including fibre to 93% of all premises and a mixture of fixed wireless and satellite to the remainder. At an estimated cost of around AUS$36 billion, the price tag is not small.

 

"The environment created by [Australia's] National Broadband Network  will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery"  

 

But the wholesale-only model maximises opportunities for competition at the service provider level, and reduces wasteful duplication of infrastructure in the last mile. A remarkable aspect of the NBN project is that a deal has been struck between the incumbent telco, Telstra, and the government-owned company building the NBN.  

Under this deal, Telstra will shut down its Hybrid-Fibre-Coax (HFC) network and decommission its legacy copper access network.  Australia will become a truly fibre-connected country, with a future-proof broadband infrastructure.

My thoughts for 2012 also relate to Australia's National Broadband Network.  The environment created by the NBN will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery.  

I anticipate that in 2012 and beyond, new services providers and aggregators in areas such as health care, education, entertainment and energy will emerge.  

I am very excited about the opportunities.

 

Joe Berthold, vice president of network architecture at Ciena

One of the most memorable developments from a network architecture point of view was the clear emergence of the category of packet-optical switching products to serve as the transport layer of backbone IP networks.

For years two competing points of view have been put forth. First, in the 'IP-over-glass' position, long-haul optics is incorporated into core routers. This has never taken off, with some disappointing attempts in the early days of 40 Gigabit. The second approach involves a separate, very much simpler, packet optical transport platform being introduced to interconnect core routers. The packet transport could be based on Ethernet protocols, MPLS, MPLS-TE or MPLS-TP.

 

"It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development"

 

What is quite significant in this development is that traditional router vendors seem to be going in this direction too, with the vision of a much simpler packet switching platform to keep cost, space and power under control. 

This is a clear response to the overwhelming need we see in the market, representing a separation of packet switching into two layers: one with global routing capability at strategic locations in the network, and the other with flexible transport functionality for network traffic engineering.

In 2012 it will be fascinating to see how the struggle for protocol dominance plays out within the data centre. 

While the IETF has many competing proposals, worked on in multiple groups, the IEEE is in final ballot now for Shortest Path Bridging (IEEE 802.1aq). 

Shortest Path Bridging has broad applicability in networks, but we might see it first emerge as a solution within the data centre. 

The other contender within the data centre is OpenFlow, which has developed quite a momentum too. 

It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development.

 

Jon Anderson, director of technology programme at Opnext

Our most significant 2011 events were the great earthquake in Japan in March and the floods in Thailand in October. Both caused major disruptions and challenges in optical component supply-chain management and manufacturing.

JDS Uniphase's tunable SFP+ announcement was well ahead of the technology curve.

 

"Our most significant 2011 events were the Japan great earthquake in March and the Thailand floods in October."

 

In 2012 we expect initial production shipments and deployment of 100Gbps PM-QPSK coherent modules, as well as a fast production ramp of 40 Gigabit Ethernet (GbE) QSFP+ modules for data centre applications. 

Another development to watch is the next-generation 100 GbE interconnect technology and standards development for low-cost, high-density modules for data centre applications. 

Lastly, there will be an increased focus on technologies and solutions for 100 Gigabit DWDM in metro and extended reach enterprise applications.

 

Tim Jenks, CEO of NeoPhotonics 

NeoPhotonics made significant progress this year in the development of components and technologies for coherent transmission networks, including receivers, transmitters and advanced approaches to switching.

We continue to see increasing adoption of coherent transmission systems, broad-scale deployment of access networks and a continuing emergence of large scale data centres as a prominent element of the communications network landscape.

 

Vladimir Kozlov, CEO of LightCounting

The industry was strong enough to get over an earthquake, tsunami and flood in 2011. Softer demand for optics in 2011 helped - is still helping - many vendors to ride the disruptions. Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011.  We are looking forward to a smoother ride in 2012, as demand/ supply reach equilibrium and nature cooperates.

 

"Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011"

 

Service provider revenue and capex were up significantly in 2011. Mobile data is driving the growth, but even wireline revenues are improving, and FTTx is probably behind it. This should be a sustainable trend for 2012-2015: even as service providers curb expenses to improve profitability, a larger fraction of capex will be spent on equipment. New technology is critical to stay ahead of the competition.

Data centre optics had another good year with 10GBASE-T falling further behind schedule and with 100 Gigabit generating much action. This will probably get even more interesting in 2012.

Our conservative forecast for active optical cable, criticised by some vendors, was not conservative enough in 2011. It will take a while for this segment to unfold.

 




Optical transceivers: Pouring a quart into a pint pot

Transceiver feature - 3rd and final part

Optical equipment and transceiver makers have much in common.  Both must contend with the challenge of yearly network traffic growth and both are addressing the issue similarly: using faster interfaces, reducing power consumption and making designs more compact and flexible.  

Yet if equipment makers and transceiver vendors share common technical goals, the market challenges they face differ. For optical transceiver vendors, the challenges are particularly complex.

LightCounting's global optical transceiver sales forecast: the market was $2.10bn in 2009 and is forecast to rise to $3.42bn in 2013

Transceiver vendors have little scope for product differentiation. That’s because the interfaces are based on standard form factors defined using multi-source agreements (MSAs).

System vendors may welcome MSAs since they increase their choice of suppliers, but for transceiver vendors they mean fierce competition, even in new opportunities such as 40 and 100 Gigabit Ethernet (GbE) and 40 and 100 Gigabit-per-second (Gbps) long-haul transmission.  

Transceiver vendors must also contend with 40Gbps overlapping with the emerging 100Gbps market. Vendors must choose which interface options to back with their hard-earned cash.  

Some industry observers even question the 40 and 100Gbps market opportunities given the continual cost reduction and simplicity of 10Gbps transceivers.  One is Vladimir Kozlov, CEO of optical transceiver market research firm, LightCounting.

“The argument heard is that 40Gbps will take over the world in two or three years’ time,” says Kozlov. Yet he has been hearing the same claim for over a decade: “Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”

In Kozlov’s view, while 40Gbps and 100Gbps are being adopted in the network, the vast majority of networks will not see such rates. Instead traffic growth will be met with additional 10Gbps wavelengths and where necessary more fibre. 

 

“Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”

Vladimir Kozlov, LightCounting.

 

And despite the activity surrounding new pluggable transceivers such as the 40 and 100Gbps CFP MSA and long-haul modulation schemes, his view is that “99% of the market is about simplicity and low cost”.

Juniper Networks, in contrast, has no doubt 100Gbps interfaces will be needed.

First demand for 100Gbps will be to simplify data centre connections and link the network backbone. “Link aggregating 10Gbps channels involves multiple fibres and connections,” says Luc Ceuppens, senior director of marketing, high-end systems business unit at Juniper. “Having a single 100 Gigabit interface simplifies network topology and connections.”  

Longer term, 100Gbps will be driven when the basic currency of streams exceeds 10Gbps. “You won’t have to parse a greater-than-10 Gig stream over two 10Gbps links,” says Ceuppens.

But faster line rates are only one way equipment vendors are tackling traffic growth and networking costs.

"Forty Gig and eventually 100 Gig are basic needs for data centre connections and backbone networks, but in the metro, higher line rate is not the only way to handle traffic growth cost effectively,” says Mohamad Ferej, vice president of R&D at Transmode.  He points to lowering equipment’s cost, power consumption and size as well as enhancing its flexibility.

Compact designs equate to less floor space in the central office, while the energy consumption of platforms is a growing concern. Tackling both reduces operational expenses. 

Greater platform flexibility using tunable components and pluggable transceivers also helps reduce costs.  Tunable-laser-based transceivers slash the number of spare fixed-wavelength dense wavelength division multiplexing (DWDM) transceivers operators and system vendors must store. Meanwhile, pluggables reduce costs by increasing competition and decoupling optics from the line card.

For higher speed interfaces, optical transmission cost - the cost-per-bit-per-kilometre - is reduced only if the new interface's bandwidth grows faster than its cost relative to existing interfaces. The rule-of-thumb is that the transition to a new 4x line rate occurs once its cost matches 2.5x that of the existing interface. This is how 10Gbps superseded 2.5Gbps rates a decade ago.
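That rule of thumb can be sketched in a few lines. The 2.5x and 4x figures are the ones quoted above; the function name is my own:

```python
# A 4x line rate at 2.5x the price means each bit costs 2.5/4 = 0.625x
# as much as on the incumbent interface - the crossover point.
def cost_per_bit_ratio(price_multiple, speed_multiple=4):
    """Cost-per-bit of the new line rate relative to the old one."""
    return price_multiple / speed_multiple

print(cost_per_bit_ratio(2.5))  # 0.625 -> crossover reached, transition begins
print(cost_per_bit_ratio(4.0))  # 1.0   -> no per-bit saving yet (Ovum's 2012 view of 40Gbps)
```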

The reason widespread adoption of 40Gbps has not happened is that it has yet to meet this crossover threshold. Indeed, by 2012 40Gbps will only be at 4x 10Gbps' cost, according to market research firm Ovum.

Thus it is the economics of 40 and 100Gbps as well as power and size that preoccupies module vendors.

 

Modulation war

“If the 40Gbps module market is at Step 1, 10Gbps is at Step 4,” says ECI Telecom’s Oren Marmur, vice president, optical networking line of business, network solutions division. Ten Gigabit has gone through several transitions: from the 300-pin large form factor (LFF), to the 300-pin small form factor (SFF), to the smaller fixed-wavelength pluggable XFP, and now the tunable XFP. “Forty Gig is where 10 Gig modules were three years ago - each vendor has a different form factor and a different modulation scheme,” says Marmur.

DPSK dominates 40Gbps module shipments

Niall Robinson, Mintera

 

There are four modulation scheme choices for 40Gbps. First deployed was optical duo-binary, followed by two phase-based modulation schemes: differential phase-shift keying (DPSK) and differential quadrature phase-shift keying (DQPSK). The phase modulation schemes offer superior reach and robustness to dispersion, but require more complex and costly designs. 

Added to these three is the emerging dual-polarisation quadrature phase-shift keying (DP-QPSK), already deployed by operators using Nortel’s system and now being developed as a 300-pin LFF transponder by Mintera and JDS Uniphase. Indeed, several such designs are expected in 2010.

Mintera has been shipping its 300-pin LFF adaptive DPSK transponder, and claims DPSK dominates 40Gbps module shipments.  “DQPSK is being shipped in Japan and there is some interest in China but 90% is DPSK,” says Niall Robinson, vice president of product marketing at Mintera.

Opnext offers four 40Gbps transponder types: duo-binary, DPSK, a continuous mode DPSK variant that adapts to channel conditions based on the reconfigurable optical add/drop multiplexing (ROADM) stages a signal encounters, and a DQPSK design.

 

"40Gbps coherent channel position must be managed"

Daryl Inniss, Ovum

 

According to an Ovum study, duo-binary is cheapest followed by DPSK. The question facing transponder vendors is what next? Should they back DQPSK or a 40Gbps coherent DP-QPSK design?

“The problem with DQPSK is that it is more costly, though even coherent is somewhat expensive,” says Daryl Inniss, components practice leader at Ovum. The transponders’ bill of materials is only part of the story; optical performance is the other factor.

DQPSK has excellent performance in the presence of dispersion, whereas a 40Gbps coherent channel's position must be managed when used alongside 10Gbps wavelengths in the same fibre. “It is not a big deal but it needs to be managed,” says Inniss. If the price declines for the two remain equal, DQPSK will have the larger volumes, he says.

Another consideration is 100Gbps modules. DP-QPSK is the industry-backed modulation scheme for 100Gbps and given the commonality between 40 and 100Gbps coherent designs, the issue is their relative costs.

“The right question people are asking is what are the economics of 40 Gig versus 100 Gig coherent,” says Rafik Ward, Finisar's vice president of marketing. “If you buy 40 Gig and shortly after an economical 100 Gig coherent design appears, will 40 Gig coherent get the required market traction?”

Meanwhile, designers are shrinking existing 40Gbps modules, significantly boosting 40Gbps system capacity.

The 300-pin LFF transponder, at 7x5 inches, requires its own line card. As such, two system line cards are needed for a 40Gbps link: one for the short-reach, client-side interface and one for the line-side transponder.

A handful: a 300-pin large form factor transponder. Source: Mintera

Mintera is one vendor developing a smaller 300-pin MSA DPSK transponder that will enable the two 40Gbps interfaces on one card.

“At present there are 16 slots per shelf supporting eight 40Gbps links, and three shelves per bay,” says Robinson. Once vendors design a new line card, system capacity will double: 16 40Gbps links (640Gbps) per shelf and 1,920Gbps per system. Equipment vendors can also use the smaller, pin-for-pin compatible 300-pin MSA on existing cards to reduce costs.
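The shelf arithmetic can be sketched as follows, using Robinson's slot and shelf counts; the two-cards-per-link case reflects today's separate client and line cards:

```python
# Shelf and system capacity before and after the smaller 300-pin MSA
# allows both 40Gbps interfaces on a single line card.
slots_per_shelf = 16
shelves_per_bay = 3
gbps_per_link = 40

for cards_per_link in (2, 1):  # 2 = separate client and line cards today
    links = slots_per_shelf // cards_per_link
    shelf_gbps = links * gbps_per_link
    print(f"{cards_per_link} card(s) per link: {links} links/shelf, "
          f"{shelf_gbps} Gbps/shelf, {shelf_gbps * shelves_per_bay} Gbps/system")
```

Halving the cards per link takes a shelf from eight links (320Gbps) to 16 links (640Gbps), and a three-shelf system to 1,920Gbps, matching the figures above.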

Matt Traverso, senior manager, technical marketing at Opnext, also stresses the importance of more compact transponders: “Right now, though, it is premature. The issue still is the modulation format war.”

Another factor driving transponder development is the electrical interface used. The 300-pin MSA uses the SFI-5.1 interface, based on 16 2.5Gbps channels. “Forty and 100GbE all use 10Gbps interfaces, as do a lot of framer and ASIC vendors,” says Traverso. Since the 300-pin MSA is not compatible, adopting 10Gbps-channel electrical interfaces will likely require a new pluggable MSA for long haul.  

 

CFP MSA for 40 and 100 Gig

One significant MSA development in 2009 was the CFP pluggable transceiver MSA. At ECOC last September, several companies announced first CFP designs implementing 40 and 100GbE standards.

Opnext announced a 100GBASE-LR4 CFP, a 100GbE over 10 km interface made up of four wavelengths each at 25Gbps. Finisar and Sumitomo Electric each announced a 40GBASE-LR4 CFP, a 40GbE over 10km comprising four wavelengths at 10Gbps.

The CFP MSA is smaller than the 300-pin LFF, measuring some 3.4x4.8 inches (86x120mm). It has four power classes: up to 8W, up to 16W, up to 24W and up to 32W. When a CFP is plugged in, it communicates its power class to the host platform.
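As a sketch of how a host might use that information, the mapping from a module's maximum draw to a class can be written as a simple lookup. The 8/16/24/32W thresholds follow the classes listed above; the function itself is hypothetical, not part of the MSA:

```python
def cfp_power_class(max_power_w):
    """Return the lowest CFP power class (1-4) that accommodates a module.

    Thresholds assumed as <=8W, <=16W, <=24W and <=32W.
    """
    for cls, limit in enumerate((8, 16, 24, 32), start=1):
        if max_power_w <= limit:
            return cls
    raise ValueError("exceeds the CFP's 32W maximum")

print(cfp_power_class(5))   # a 5W (CFP4-level) module fits class 1
print(cfp_power_class(32))  # a full-power CFP is class 4
```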

The 100Gbps CFP is designed to link IP routers, or an IP router to a DWDM platform for longer distance transmission.

“There is customer-pull to get the 100 Gig [pluggable] out,” says Traverso, explaining why Opnext chose 100GbE for its first design.

Opnext’s 100GbE pluggable comprises four 25Gbps transmit optical sub-assemblies (TOSAs) and four receive optical sub-assemblies (ROSAs), along with an optical multiplexer and demultiplexer to transmit and recover the four narrowly spaced (LAN-WDM) wavelengths. Also included within the 100GbE CFP are two integrated circuits (ICs): a gearbox IC translating between the 10Gbps channels and the higher speed 25Gbps lanes, and the module’s electrical interface IC.

 

"The issue still is the modulation format war”

Matt Traverso, Opnext

 

The CFP transceiver, while relatively large, has space constraints that challenge the routing of fibres linking the discrete optical components. “This is familiar territory,” says Traverso. “The 10GBASE-LX4 [a four-channel design] in an X2 [pluggable] was a much harder problem.”

“Right now our [100GbE] focus is the 10 km CFP,” says Juniper’s Ceuppens. “There is no interest in parallel multimode [100GBASE-SR10] - service providers will not deploy multi-mode fibre due to the bigger cable and greater weight.”

Finisar’s and Sumitomo Electric’s 40GBASE-LR4 CFPs also use four TOSAs and ROSAs, but since each channel is 10Gbps no gearbox IC is needed. Moreover, coarse WDM (CWDM) wavelength spacing is used, avoiding the need for thermal cooling. Cooling is required at 100Gbps to stop the lasers' LAN-WDM wavelengths from drifting. Finisar has since detailed a 100GBASE-LR4 CFP.

“For the 40GBASE-LR4 CFP, a discrete design is relatively straightforward,” says Feng Tian, senior manager, device marketing at Sumitomo Electric Device Innovations. Vendors favour discretes to accelerate time-to-market, he says. But with second-generation designs, power and cost reductions will be achieved using photonic integration.

Reflex Photonics announced dual 40GBASE-SR4 transceivers within a CFP in October 2009. The SR4 specification uses a 4-channel multimode ribbon cable for short reach links of up to 150m. The short reach CFP designs will be used to connect routers to DWDM platforms in telecom networks and to link core switch platforms within the largest data centres - “where the number of [10Gbps] links becomes unwieldy,” says Robert Coenen, director of product management at Reflex Photonics.

Reflex’s 100GbE design uses a 12x photo-detector array and a 12x VCSEL array. For the 100GbE design, 10 of the 12 channels are used, while for the 2x40GbE, eight (2x4) channels of each array are used (see diagram).  “We didn’t really have to redesign [the 100GbE]; just turn off two lanes and change the fibering,” says Coenen.

Meanwhile switch makers are already highlighting a need for more compact pluggables than the CFP.

“The CFP standard is OK for first generation 100Gbps line cards but denser line cards are going to require a smaller form factor,” says Pravin Mahajan, technology marketer at Cisco Systems.

This is what Cube Optics is addressing by integrating four photo-detectors and a demultiplexer in a sub-assembly using its injection molding technology. Its 4x25Gbps ROSA for 100GbE complements its existing 4x10 CWDM ROSA for 40GbE applications.

“The CFP is a nice starting point but there must be something smaller, such as a QSFP or SFP+,” says Sven Krüger, vice president product management at Cube Optics.

The company has also received funding for the development of complementary 4x25Gbps and 4x10Gbps TOSA functions. “The TOSA is more challenging from an optical alignment point of view; the lasers have a smaller coupling area,” says Francis Nedvidek, Cube Optics’ CEO.

Cube Optics forecasts second generation 40GbE and 100GbE transceiver designs using its integrated optics to ship in volume in 2011.

Could the CFP be used beyond 100GbE for 100Gbps line side and the most challenging coherent design?

“The CFP with its smaller size is a good candidate,” says Sumitomo’s Tian. “But power consumption will be a challenge.” It may require one, and maybe two, more CMOS process generations beyond the current 65nm to reduce the power consumption sufficiently for the design to meet the CFP’s 32W power limit, he says. 

 

XFP put to new uses

Established pluggables such as the 10Gbps XFP transceiver also continue to evolve. 

Transmode is shipping XFP-based tunable lasers with its systems, claiming the tunable XFP brings significant advantages.  

In turn, Menara Networks is incorporating system functionality within the XFP normally found only on the line card.

Until now, deploying fixed-wavelength DWDM XFPs meant a system vendor had to keep a sizable inventory for when an operator needed to light new DWDM wavelengths. “With no inventory you have to wait for a firm purchase order from your customer before you know which wavelengths to order from your transceiver vendor, and that means a 12-18 week delivery time,” says Ferej. Now, with a tunable XFP, one transceiver meets all the operator’s wavelength planning requirements.

Moreover, the optical performance of the XFP is only marginally below that of a tunable 10Gbps 300-pin SFF MSA. “The only advantage of a 300-pin is a 2-3dB better optical signal-to-noise ratio, meaning the signal can pass through more optical amplifiers, as required for longer reach,” says Ferej.

Using a 300-pin extends the overall unrepeatered reach beyond 1,000km. “But the majority of the metro network business is below 1,000km,” says Ferej.
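Ferej's 2-3dB figure translates into reach roughly as follows. The relation used here - that OSNR falls about 3dB each time the count of identical amplified spans doubles - is a textbook approximation, not a claim from the article:

```python
# Each identical amplified span adds the same noise power, so OSNR falls
# ~3dB when the span count doubles; a dB advantage therefore converts to
# a multiplicative factor on the number of spans (and hence reach).
def span_multiple(osnr_advantage_db):
    """Approximate factor by which the span count can grow."""
    return 10 ** (osnr_advantage_db / 10)

print(f"2dB advantage: ~{span_multiple(2):.1f}x the spans")  # ~1.6x
print(f"3dB advantage: ~{span_multiple(3):.1f}x the spans")  # ~2.0x
```

On this rough reckoning, the 300-pin's 2-3dB margin buys perhaps 1.6 to 2 times the amplified reach of the tunable XFP.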

Do the power and space specifications of an MSA such as the XFP matter to component vendors, or do they simply accept them?

“It doesn’t matter till it matters,” says Padraig OMathuna, product marketing director at optical device maker GigOptix. The maximum power rating for an XFP is 3.5W. “If you look inside a tunable XFP, the thermo-electric cooler takes 1.5 to 2W, the laser 0.5W, and then there is the TIA,” says OMathuna. “That doesn’t leave a lot of room for our modulator driver.”
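OMathuna's point can be made concrete with a back-of-envelope budget. A minimal sketch, using only the figures quoted above (the worst-case 2W is assumed for the cooler; the remaining TIA and driver draw is left as the computed headroom rather than guessed):

```python
# Rough tunable-XFP power-budget sketch using the figures quoted above.
XFP_MAX_POWER_W = 3.5  # XFP module power rating cited in the article

budget_w = {
    "thermo-electric cooler": 2.0,  # quoted as 1.5 to 2W; worst case taken
    "tunable laser": 0.5,
}

used_w = sum(budget_w.values())
headroom_w = XFP_MAX_POWER_W - used_w  # left for TIA, modulator driver, control
print(f"Used: {used_w:.1f} W; headroom for TIA, driver and control: {headroom_w:.1f} W")
```

On these numbers barely 1W remains for the receiver, modulator driver and control electronics combined, which is the squeeze OMathuna describes.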

 

Inside JDS Uniphase's tunable XFP

Meanwhile, Menara Networks has implemented the ITU-T’s Optical Transport Network (OTN) in the form of an application-specific IC (ASIC) within an XFP.

OTN is used to encapsulate signals for transport while adding optical performance monitoring functions and forward error correction.  By including OTN within a pluggable, signal encapsulation, reach and optical signal management can be added to IP routers and carrier Ethernet switch routers.

The approach delivers several advantages, says Siraj ElAhmadi, CEO of Menara Networks.

First, it removes the need for additional 10Gbps transponders to ready the signals from the switch or router for DWDM transport. Second, system vendors can develop a universal linecard without having to implement OTN functionality on the card itself.

The biggest technical challenge for Menara was not developing the OTN ASIC but the accompanying software. “We had the chip one and a half years before we shipped the product because of the software,” says ElAhmadi. “There is no room [within the XFP] for extra memory.”

Menara is supplying its OTN pluggables to a North American cable operator.

ECI Telecom is one vendor using Menara’s pluggable for its carrier Ethernet switch router (CESR) platforms. “For certain applications it saves you having to develop OTN,” says Jimmy Mizrahi, next-generation networking product line manager, network solutions division at ECI Telecom.

 

Pluggables and optical engines

The CFP is one module that will be used in the data centre, but for high-density applications - linking switches and high-performance computing - more compact designs are needed. These include the QSFP, the CXP and what are being called optical engines.

The CFP form factor for 40 and 100Gbps

The QSFP is already the favoured interface for active optical cables that encapsulate the optics within the cable and which provide an attractive alternative to copper interconnect.  QSFP transceivers support quad data rate (QDR) 4xInfiniband as well as extending the reach of 4x10Gbps Ethernet beyond copper’s 7m.

The QSFP is also an option for more compact 40GbE short-reach interfaces. “The [40GBASE-]SR4 is doable today as a QSFP,” says Christian Urricarriet, 40GbE, 100GbE and parallel optics product line manager at Finisar. The 40GBASE-LR4 in a QSFP is also possible, as targeted by Cube Optics among others.

Achieving 100GbE within a QSFP is another matter. Adding a 25Gbps-per-channel electrical interface and higher-speed lasers while meeting the QSFP’s power constraints is a considerable challenge.  “There may need to be an intermediate form factor that is better defined [for the task],” says Urricarriet.

Meanwhile, the CXP is a front panel interface that promises denser interfaces within the data centre. “CXP is useful for inter-chassis links as it stands today,” says Cisco’s Mahajan.

According to Avago Technologies, Infiniband is the CXP’s first target market, while 100GbE using 10 of the 12 channels is clearly an option. But there are technical challenges to be overcome before the CXP connector can be used for 100GbE. “You need to be much more stringent to meet the IEEE optical specification,” says Sami Nassar, director of marketing, fiber optic products division at Avago Technologies.

The CXP is also entering territory until recently the preserve of the SNAP12 parallel optics module. SNAP12 connects the platforms within large IP router configurations and is used for high-end computing. However, it is not a pluggable and comprises separate 12-channel transmitter and receiver modules. SNAP12 has a 6.25Gbps-per-channel data rate, although a 10Gbps-per-channel version has been announced.

“Both [the CXP and SNAP12] have a role,” says Reflex’s Coenen. SNAP12 sits on the motherboard and, because of its small form factor, it can be placed close to the ASIC, he says.

Such an approach is now being targeted by firms using optical engines to reduce the cost of parallel interfaces and to address emerging high-speed interface requirements on the motherboard, between racks and between systems.

Luxtera’s OptoPHY is one such optical engine. There are two versions: a single-channel 10Gbps product and a 4x10Gbps product, while a 12-channel version will sample later this year.

The OptoPHY uses the same optical technology as Luxtera’s AOC: a 1490nm distributed feedback (DFB) laser, modulated using the company’s silicon photonics technology, is used for both the one- and four-channel products. The single-channel module consumes 450mW and the four-channel 800mW, says Marek Tlalka, vice president of marketing at Luxtera, while reach is up to 4km.
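The quoted figures imply the four-channel part is markedly more efficient per channel. A quick check, using only the numbers above:

```python
# Per-channel power comparison from the quoted OptoPHY figures.
single_channel_mw = 450   # 1 x 10Gbps module
quad_module_mw = 800      # 4 x 10Gbps module

quad_per_channel_mw = quad_module_mw / 4  # power per 10Gbps channel
saving = 1 - quad_per_channel_mw / single_channel_mw
print(f"Quad module: {quad_per_channel_mw:.0f} mW per 10Gbps channel, "
      f"a {saving:.0%} reduction versus the single-channel part")
```

At 200mW per 10Gbps channel, the four-channel module uses less than half the per-channel power of the standalone part, the kind of amortisation that parallel optics is meant to deliver.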

Luxtera says the 12-channel version will cost around $120, equating to $1 per Gbps. This, it claims, is several times cheaper than SNAP12.
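The $1/Gbps figure follows directly from the module's aggregate bandwidth. A sketch of the arithmetic, using the channel count, per-channel rate and approximate price given in the article:

```python
# Cost-per-bandwidth check for the quoted 12-channel OptoPHY figures.
channels = 12
gbps_per_channel = 10
price_usd = 120  # approximate quoted price

aggregate_gbps = channels * gbps_per_channel  # total module bandwidth
cost_per_gbps = price_usd / aggregate_gbps    # dollars per Gbps
print(f"{aggregate_gbps} Gbps aggregate at ${price_usd} "
      f"-> ${cost_per_gbps:.2f} per Gbps")
```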

“The next-generation product will achieve 25Gbps per channel, using the same form factor and the same chip,” says Tlalka. This will allow the optical engine to handle channel speeds used for 100GbE as well as the next Infiniband speed increase, known as Enhanced Data Rate (EDR).

Avago, a leading supplier of SNAP12, says the robust interface, with its integrated heat sink, is still the preferred option for some vendors. “For others, with even higher-density concentrations, a next-generation packaging type is being used, which we’ve not announced yet,” says Dan Rausch, Avago’s senior technical marketing manager, fiber optic products division.

The advent of 100GbE and even higher rates, along with 25Gbps electrical interfaces, will further promote optical engines. “It is hard enough to route 10Gbps around an FR4 printed circuit board,” says Coenen. Four inches is typically the limit; longer links, up to 10 inches, require such techniques as pre-emphasis, electronic dispersion compensation and retiming.

At 25Gbps, distances will become even shorter. “This makes the argument for optical engines even stronger; you will need them near the ASICs to feed data to the front panel,” says Coenen.

Optical transceivers may rightly be in the limelight for handling network traffic growth, but it is in linking platforms, boards and, soon, on-board devices that optical transceiver vendors, unencumbered by MSAs, have the greatest scope for product differentiation.
