EZchip packs 100 ARM cores into one networking chip

 

The Tile-Mx100. Source: EZchip

  • The industry's first detailed chip featuring 100 64-bit ARM cores
  • The Tile-Mx devices will perform control plane processing and data plane processing
  • The 100-core chip will have 100 Gigabit Ethernet ports and support 200 Gbps duplex traffic

EZchip has detailed the industry's first 100-core processor. Dubbed the Tile-Mx100, the processor will be the most powerful of a family of devices aimed at such applications as software-defined networking (SDN), network function virtualisation (NFV), load-balancing and security. Other uses include video processing and application recognition, to identify applications riding over a carrier's network.

Known for its network processors, EZchip has branched out to also include general-purpose processors following its acquisition of multicore specialist, Tilera. It now competes with such companies as Broadcom, Cavium and Intel.  

 

What's new about the EZchip Tile-Mx100 is that it is the first such processor with 100 cache-coherent programmable CPU cores and it is by far the largest 64-bit ARM processor yet announced

 

EZchip's NPS network processor is a custom IC designed to maximise packet-processing performance. The Tile-Mx also targets networking but using standard ARM cores. Engineers will benefit from open source software, third-party applications and ARM development tools. "We believe the market needs a standard, open architecture," says Amir Eyal, vice president of business development at EZchip.

"A multicore standard processor tailored for networking is nothing new; numerous such processors have been available for years from several vendors," says Tom Halfhill, senior analyst at The Linley Group. "What's new about the EZchip Tile-Mx100 is that it is the first such processor with 100 cache-coherent programmable CPU cores and it is by far the largest 64-bit ARM processor yet announced."

EZchip has detailed three Tile-Mx devices, the most powerful being the Tile-Mx100 that uses 100 64-bit ARM Cortex-A53 cores. The Cortex-A53 is newer and smaller than the Cortex-A57, and has a relatively low power consumption. Handset and tablet designs are also using the ARM Cortex-A53 core. Both the A53 and A57 cores use the ARMv8-A instruction set.

"We have taken the A53 in order to put more cores on the die," says Eyal. "The idea with networking applications is that the more packets you can process in parallel, the better." A chip hosting many, smaller cores helps meet this goal.

 

Tile-Mx architecture

The Tile-Mx100 device will process traffic at rates up to 200 Gigabits-per-second (Gbps), or 200 Gbps duplex. In contrast, EZchip's NPS family of devices has a roadmap with a traffic processing performance of 400 Gbps to 800 Gbps duplex.

The Tile-Mx uses a two-level architecture. The 100 cores are partitioned into 25 processing clusters or tiles, each comprising four ARM cores that share network acceleration hardware and level-2 cache memory. Each tile also features router hardware, part of the chip's interconnect network that handles the tile's input/output (I/O) requirements.

Source: EZchip
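The two-level hierarchy described above reduces to simple arithmetic, sketched here in Python; the 5x5 grid placement is an assumption for illustration, as EZchip has not described the tiles' physical layout.

```python
# Two-level Tile-Mx layout as described: 25 tiles, each with four ARM cores.
CLUSTERS = 25
CORES_PER_CLUSTER = 4
assert CLUSTERS * CORES_PER_CLUSTER == 100  # the Tile-Mx100's core count

# Hypothetical 5x5 grid of tile coordinates for the on-chip mesh.
tiles = [(x, y) for x in range(5) for y in range(5)]
assert len(tiles) == CLUSTERS
```

Clustering four cores per tile means the mesh connects 25 nodes rather than 100, a point the analysts return to later in the article.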

"The key technology for the Tile-Mx architecture is the interconnect that enables 100 CPUs to be connected in a coherent manner," says Jag Bolaria, principal analyst at The Linley Group.

"There are five different networks [part of the mesh] that interconnect the 100 cores in parallel, preventing bottlenecks and contention," says Eyal. The mesh also ensures that each core can talk to the chip's I/O and to the memory. The mesh is a fifth iteration, having been improved with each generation of chip design, says Eyal, and has a total bandwidth of 25 Terabits.

The mesh also implements cache coherency, an important aspect of multi-processor design that ensures the caches remain consistent whenever memory is accessed by any of the cores, without the cores first having to sit idle.

Other chip features include a traffic manager, essentially the one used for EZchip's NPUs, which prioritises traffic, allocates bandwidth and prevents packet loss. There are also hardware units (see MiCA blocks in main chip diagram), developed by Tilera, which do preliminary packet classification before presenting the packets to the cores.  

The chip's I/O includes 1, 10, 25, 40, 50 and 100 Gigabit Ethernet interfaces, the Interlaken interface and PCI Express, used to connect the chip to a host processor such as an Intel x86 microprocessor.

 

The idea with networking applications is that the more packets you can process in parallel, the better

 

EZchip is not detailing the device's interface mix or such metrics as the chip's pin-count and clock speed. However, EZchip says the chip's power consumption will be under 100W.

When a packet is presented to the chip, it is assigned to a core which processes it to completion before sending it typically to the I/O. For the programmer, the 100-core device appears as a single processor; it is the hardware on-chip that handles the details, sending an incoming packet to the next free core.
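The run-to-completion model described above can be sketched in a few lines of Python; the round-robin free-core pool is a simplification of whatever scheduling the on-chip hardware actually performs.

```python
from collections import deque

def dispatch(packets, num_cores=100):
    """Toy model of run-to-completion dispatch: each incoming packet is
    handed to the next free core, which processes it fully before
    rejoining the free pool. Core numbering is illustrative only."""
    free_cores = deque(range(num_cores))
    assignments = []
    for pkt in packets:
        core = free_cores.popleft()      # hardware picks the next free core
        assignments.append((pkt, core))  # the core runs the packet to completion
        free_cores.append(core)          # then returns to the free pool
    return assignments

# 150 packets over 100 cores: cores are reused once all have been visited.
out = dispatch(range(150))
assert out[0] == (0, 0) and out[100] == (100, 0)
```

The key property, as the article notes, is that the programmer sees a single processor; the dispatch bookkeeping stays in hardware.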

EZchip shows examples of possible platforms that could use the Tile-Mx.

One is a 1-rack-unit-high pizza box in the data centre used to deliver virtual network functions. Such an NFV server would benefit from the Tile-Mx's hardware-accelerated table look-ups, packet classification and packet flow management in and out of the device. Another design example is using the device for an intelligent network interface card (NIC) in a standard Intel x86-based server.

The two other Tile-Mx family devices will use 36 and 64 Cortex-A53 cores. First Tile-Mx samples are expected in the second half of 2016.

 

Multicore trends

The Linley Group says that despite the unprecedented 100 ARM cores, EZchip's family of devices faces competition. Moreover, the trend to increase core-count has its limits.

EZchip is already shipping a 72-core processor it acquired from Tilera although the device is not ARM-based. And Cavium's largest processor has 48 cores, says Halfhill. Broadcom's largest processor has only 20 cores, but those CPUs are quad-threaded, so the processor can handle up to 80 packet streams. "Not quite as many as the Tile-Mx100, but it is in the same ballpark," says Halfhill.

"Keep in mind that Tile-Mx100 production is about two years out; a lot can happen in two years," adds Halfhill.

According to Bolaria, multicore designs are good for applications that are highly parallelised such as packet processing and deep packet processing. But NPUs are better if all that is being done is packet processing.

"Many cores is not particularly good for applications that need good single-thread performance," says Bolaria. "This is where [an Intel] Xeon will shine — for applications such as high-performance computing, simulations and algorithms."

Coherent interconnects also limit CPU scaling, says Bolaria. Tile-Mx gets around the interconnect limitation by clustering four ARM cores into a tile, so that effectively only 25 nodes are connected. "With more nodes, it becomes difficult to maintain cache coherency and performance," says Bolaria.

Another limitation is partitioning applications into smaller chunks for execution on 100 cores. Some tasks are serial by nature and cannot benefit from parallel processing. "Amdahl’s law limits performance gains from adding more CPUs," says Bolaria.
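Amdahl's law makes Bolaria's point concrete: even a small serial fraction caps the gain from 100 cores. A minimal Python illustration:

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's law: overall speedup is capped by the portion of the
    workload that cannot be parallelised, however many cores are added."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# A perfectly parallel task scales linearly with the core count.
assert amdahl_speedup(0.0, 100) == 100.0

# But a 5% serial portion limits 100 cores to roughly a 16.8x speedup.
s = amdahl_speedup(0.05, 100)
assert 16.8 < s < 16.9
```

Packet processing suits many-core chips precisely because each packet is largely independent, keeping the serial fraction small.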


Optical transceiver market to grow 50 percent by 2017

  • The optical transceiver market will grow to US $5.1bn in 2017
  • The fierce price declines of 2012 will lessen during the forecast period
  • Stronger traffic growth could have a significant positive effect on transceiver market growth

 

"The price declines in 2012 were brutal but they will not happen again [during the forecast period]"

Vladimir Kozlov, LightCounting


The global optical transceiver market will grow strongly over the next five years to $5.1bn in 2017, from $3.4bn in 2012. So claims market research company, LightCounting, in its latest telecom and datacom forecast.

"That [market value] does not include tunable lasers, wavelength-selective switches, pump lasers and amplifiers which will add some $1bn or $2bn more [in 2017]," says Vladimir Kozlov, CEO of LightCounting.

One key assumption underpinning the forecast is that competitive pressures will ease. "The price declines in 2012 were brutal but they will not happen again [during the forecast period]," says Kozlov.

Optical transceivers

The optical transceiver market saw price declines as high as 30 percent last year. These were not new products ramping in volume where sharp price declines are to be expected, says Kozlov.  Last year also saw fierce competition among the service providers while the steepest price declines were experienced by the telecom equipment makers.

One optical transceiver sector that performed well last year is high-speed optical transceivers and in particular Ethernet.

The 100 Gigabit Ethernet (GbE) market saw revenue growth due to strong demand for the 100GBASE-LR4 10km transceiver even though its unit price declined 30 percent. This is a sector the Chinese optical transceiver players are eyeing as they look to broaden the markets they address.

One unheralded market that did well was 40 Gigabit transceivers for telecoms and the data centre. "This is 40 Gig short reach mostly - up to 100m - but also 10km reach transceivers did well in the data centre," says Kozlov.

LightCounting expects the steady growth of 40GbE to continue; 40GbE transceivers use 10 Gig technology co-packaged into one module, offer improved port density and have a lower power and cost compared to four 10GbE transceivers.

Even the veteran 10GbE market continues to grow. Some 7-8M 10GbE short reach and long reach units were sold in 2012, growing to 10M units this year.

Meanwhile, the 100 Gigabit coherent long-haul transponder market was small in 2012. The optical vendors only started selling in volume last year and most of the system vendors manufacture their own 100 Gigabit-per-second (Gbps) designs using discrete components. "Those companies that sell modulators and receivers for 100 Gig did really well in 2012," says Kozlov.

LightCounting expects the 100Gbps coherent transponder market will grow in 2013 as system vendors embrace more third-party 100 Gig transponders. "We estimate that the optical transceiver vendors captured 10-15 percent of the 40 and 100 Gig market and this will grow to 18-20 percent in 2013," says Kozlov.

Other markets that grew in 2012 include optical access. The fibre-to-the-x (FTTx) market continues to grow in terms of units shipped, with transceivers and bi-directional optical sub-assembly (BOSA) designs sharing the volumes.

LightCounting says that the number of optical network units (ONUs) shipped was more than double the number of FTTx subscribers added in 2012: 35-40M ONU transceivers and BOSAs compared to 15M new subscribers.

The result was a market value of $700M in 2012 compared to $300M in 2009. But because of the excess in shipments compared to new subscribers, Kozlov expects the FTTx market to slow down. "That is probably a sure sign that it is going to grow again," he quips.

 

Market expectations

Kozlov will be watching how the optical interconnect market does this year. The active optical cable market did well in 2012 and this is likely to continue. Kozlov is interested to see if silicon photonics starts to make its mark in the transceiver market, citing as an example Cisco's in-house silicon photonics-based CPAK transceiver. He also expects the 40G and 100Gbps module makers to do well.

LightCounting stresses the wide discrepancy between video traffic growth through 2017 as forecast by Bell Labs and by Cisco Systems. This is important because the optical transceiver forecast model developed by LightCounting is sensitive to traffic growth. LightCounting has averaged the two forecasts but if video traffic grows more quickly, the overall transceiver market will exceed the market research company's 2017 forecast.

Another reason why Kozlov is upbeat about the market's prospects is that while the system vendors suffered the sharpest price declines - up to 35 percent in 2012 - this will not continue.

The sharp falls in equipment prices were due largely to the fierce competition provided by the Chinese giants Huawei and ZTE. But relief is expected with government initiatives in Europe and the United States to limit the influence of Huawei and ZTE, says Kozlov.

The U.S. government has effectively restricted sales of Huawei and ZTE networking equipment to major U.S. carriers due to cyber security concerns, while the European Commission has determined that Huawei and ZTE are both inflicting damage on European equipment vendors by dumping products onto the European market.


Altera optical FPGA in 100 Gigabit Ethernet traffic demo

Altera is demonstrating its optical FPGA at OFC/NFOEC, being held in Los Angeles this week. The FPGA, coupled to parallel optical interfaces, is being used to send and receive 100 Gigabit Ethernet packets of various sizes. 

The technology demonstrator comprises an Altera Stratix IV FPGA with 28, 11.3Gbps electrical transceivers coupled to two Avago Technologies' MicroPod optical modules. 

 

"FPGAs are now being used for full system level solutions"

Kevin Cackovic, Altera

 

 

The MicroPods - a 12x10Gbps transmitter and a 12x10Gbps optical transceiver - are co-packaged with the FPGA. "All the interconnect between the serdes and the optics are on the package, not on the board," says Steve Sharp, marketing program manager, fiber optic products division at Avago.  Such a design benefits signal integrity and power consumption, he says:  "It opens up a different world for FPGA users, and for system integration for optic users."

Both Altera and Avago stress that the optical FPGA has been designed deliberately using proven technologies. "We wanted to focus on demonstrating the integration of the optics, not pushing either of the process technologies to the absolute edge," says Sharp.

The nature of FPGA designs has changed in recent years, says Kevin Cackovic, senior strategic marketing manager of Altera's transmission business unit.  Many designs no longer use FPGAs solely to interface application-specific standard products to ASICs, or as a co-processor.  "FPGAs are now being used for full system level solutions, things like a framer or MAC technology, forward error correction at very high rates, mapper engines, packet processing and traffic management," he says.

Having its FPGAs in such designs has highlighted for Altera current and upcoming system bottlenecks. "This is what is driving our interest in looking at this technology and what is possible integrating the optics into the FPGA," says Cackovic. Applications requiring the higher bandwidth and the greater reach of optical - rack-to-rack rather than chip-to-module - include next-generation video, cloud computing and 3D gaming, he says.

Altera has still to announce its product plans regarding the optical FPGA design. Meanwhile Avago says it is looking at higher-speed versions of MicroPod.

"The request for higher line rates is obviously there," says Sharp. "Whether it goes all the way to 28 [Gigabit] or one of the steps in-between, we are not sure yet."


Next-gen 100 Gigabit short reach optics starts to take shape

The latest options for 100 Gigabit-per-second (Gbps) interfaces are beginning to take shape following a meeting of the IEEE 802.3 Next Generation 100Gb/s Optical Ethernet Study Group in November. 

The interface options being discussed include: 

  • A parallel multi-mode fibre using a VCSEL with a reach of 50m to 70m. An active optical cable version with a 30m reach, limited by the desired cable length rather than the technology, using silicon photonics or a VCSEL has also been proposed.
  • A parallel single-mode fibre using a 1310nm electro-absorption modulated laser (EML) or silicon photonics with a range of 50m to 1000m+.
  • A duplex single-mode fibre, using wavelength division multiplexing (WDM) or pulse-amplitude modulation (PAM), an EML or silicon photonics for a 2km reach.

“I think in the end all will be adopted,” says Marek Tlalka, director of marketing at Luxtera. "Users will be able to choose what is most economical."

Jon Anderson, director of technology programme at Opnext, stresses however that these are proposals.

"No decisions were reached by the Study Group on any of these proposals," he says. “The Study Group is only working towards defining objectives for a next-gen 100 Gigabit Ethernet Optics project.” Agreement on technical solutions is outside the scope of the Study Group.

Anderson says there is a general agreement to define a 4x25Gbps multi-mode fibre optical interface. But the issues of reach and multi-mode fibre type (OM3, OM4) are still being studied.

“The Study Group has not reached any agreement on whether a 100GE short reach single-mode objective should be pursued," says Anderson. “Discussions at this point are on reach, power consumption and relative cost of possible solutions with respect to (the 10km) 100GBASE-LR4."


Optical transceivers: Pouring a quart into a pint pot

Transceiver feature - 3rd and final part

Optical equipment and transceiver makers have much in common.  Both must contend with the challenge of yearly network traffic growth and both are addressing the issue similarly: using faster interfaces, reducing power consumption and making designs more compact and flexible.  

Yet if equipment makers and transceiver vendors share common technical goals, the market challenges they face differ. For optical transceiver vendors, the challenges are particularly complex.

LightCounting's global optical transceiver sales forecast. In 2009 the market was $2.10bn and will rise to $3.42bn in 2013

Transceiver vendors have little scope for product differentiation. That’s because the interfaces are based on standard form factors defined using multi-source agreements (MSAs).

System vendors may welcome MSAs since they increase their choice of suppliers, but for transceiver vendors it means fierce competition, even for new opportunities such as 40 and 100 Gigabit Ethernet (GbE) and 40 and 100 Gigabit-per-second (Gbps) long-haul transmission.

Transceiver vendors must also contend with 40Gbps overlapping with the emerging 100Gbps market. Vendors must choose which interface options to back with their hard-earned cash.  

Some industry observers even question the 40 and 100Gbps market opportunities given the continual cost reduction and simplicity of 10Gbps transceivers.  One is Vladimir Kozlov, CEO of optical transceiver market research firm, LightCounting.

“The argument heard is that 40Gbps will take over the world in two or three years’ time,” says Kozlov. Yet he has been hearing the same claim for over a decade: “Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”

In Kozlov’s view, while 40Gbps and 100Gbps are being adopted in the network, the vast majority of networks will not see such rates. Instead traffic growth will be met with additional 10Gbps wavelengths and where necessary more fibre. 

 

“Look at the relative prices of 40Gbps and 10Gbps a decade ago and look at it now – 10Gbps is miles ahead.”

Vladimir Kozlov, LightCounting.

 

And despite the activity surrounding new pluggable transceivers such as the 40 and 100Gbps CFP MSA and long-haul modulation schemes, his view is that “99% of the market is about simplicity and low cost”.

Juniper Networks, in contrast, has no doubt 100Gbps interfaces will be needed.

First demand for 100Gbps will be to simplify data centre connections and link the network backbone. “Link aggregating 10Gbps channels involves multiple fibres and connections,” says Luc Ceuppens, senior director of marketing, high-end systems business unit at Juniper. “Having a single 100 Gigabit interface simplifies network topology and connections.”  

Longer term, 100Gbps will be driven when the basic currency of streams exceeds 10Gbps. “You won’t have to parse a greater-than-10 Gig stream over two 10Gbps links,” says Ceuppens.

But faster line rates are only one way equipment vendors are tackling traffic growth and networking costs.

"Forty Gig and eventually 100 Gig are basic needs for data centre connections and backbone networks, but in the metro, higher line rate is not the only way to handle traffic growth cost effectively,” says Mohamad Ferej, vice president of R&D at Transmode.  He points to lowering equipment’s cost, power consumption and size as well as enhancing its flexibility.

Compact designs equate to less floor space in the central office, while the energy consumption of platforms is a growing concern. Tackling both reduces operational expenses.

Greater platform flexibility using tunable components and pluggable transceivers also helps reduce costs.  Tunable-laser-based transceivers slash the number of spare fixed-wavelength dense wavelength division multiplexing (DWDM) transceivers operators and system vendors must store. Meanwhile, pluggables reduce costs by increasing competition and decoupling optics from the line card.

For higher speed interfaces, optical transmission cost – the cost-per-bit-per-kilometre – is reduced only if the new interface’s bandwidth grows faster than its cost relative to existing interfaces. The rule-of-thumb is that the transition to a new 4x line rate occurs once it matches 2.5x the existing interface’s cost. This is how 10Gbps superseded 2.5Gbps rates a decade ago.
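The rule-of-thumb reduces to simple cost-per-bit arithmetic, sketched below in Python:

```python
def cost_per_bit_ratio(speedup, price_multiple):
    """Ratio of the new interface's cost-per-bit to the old one's.
    Below 1.0 means the faster interface is the cheaper way to move bits."""
    return price_multiple / speedup

# The crossover rule: a 4x line rate wins once it costs ~2.5x the old rate,
# at which point each bit is 37.5% cheaper to move.
assert cost_per_bit_ratio(4, 2.5) == 0.625

# A 4x rate at 4x the cost - Ovum's 2012 estimate for 40Gbps versus
# 10Gbps - offers no per-bit saving at all.
assert cost_per_bit_ratio(4, 4.0) == 1.0
```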

The reason widespread adoption of 40Gbps has not happened is that 40Gbps has still to meet the crossover threshold.  Indeed by 2012, 40Gbps will only be at 4x 10Gbps’ cost, according to market research firm, Ovum.

Thus it is the economics of 40 and 100Gbps as well as power and size that preoccupies module vendors.

 

Modulation war

“If the 40Gbps module market is at Step 1, 10Gbps is at Step 4,” says ECI Telecom’s Oren Marmur, vice president, optical networking line of business, network solutions division. Ten Gigabit has gone through several transitions; from 300-pin large form factor (LFF) to 300-pin small form factor (SFF) to the smaller fixed-wavelength pluggable XFP and now the tunable XFP. “Forty Gig is where 10 Gig modules were three years ago - each vendor has a different form factor and a different modulation scheme,” says Marmur.

DPSK dominates 40Gbps module shipments

Niall Robinson, Mintera


There are four modulation scheme choices for 40Gbps. First deployed was optical duo-binary, followed by two phase-based modulation schemes: differential phase-shift keying (DPSK) and differential quadrature phase-shift keying (DQPSK). The phase modulation schemes offer superior reach and robustness to dispersion but their designs are more complex and costly.

Added to the three is the emerging dual-polarisation, quadrature phase-shift keying (DP-QPSK), already deployed by operators using Nortel’s system and now being developed as a 300-pin LFF transponder by Mintera and JDS Uniphase.  Indeed several such designs are expected in 2010.

Mintera has been shipping its 300-pin LFF adaptive DPSK transponder, and claims DPSK dominates 40Gbps module shipments.  “DQPSK is being shipped in Japan and there is some interest in China but 90% is DPSK,” says Niall Robinson, vice president of product marketing at Mintera.

Opnext offers four 40Gbps transponder types: duo-binary, DPSK, a continuous mode DPSK variant that adapts to channel conditions based on the reconfigurable optical add/drop multiplexing (ROADM) stages a signal encounters, and a DQPSK design.

 

"40Gbps coherent channel position must be managed"

Daryl Inniss, Ovum

 

According to an Ovum study, duo-binary is cheapest followed by DPSK. The question facing transponder vendors is what next? Should they back DQPSK or a 40Gbps coherent DP-QPSK design?

“The problem with DQPSK is that it is more costly, though even coherent is somewhat expensive,” says Daryl Inniss, practice leader components at Ovum. The transponders’ bill of materials is only part of the story; optical performance is the other factor.

DQPSK has excellent performance when encountering dispersion while 40Gbps coherent channel position must be managed when used alongside 10Gbps wavelengths in the fibre. “It is not a big deal but it needs to be managed,” says Inniss. If price declines for the two remain equal, DQPSK will have the larger volumes, he says.

Another consideration is 100Gbps modules. DP-QPSK is the industry-backed modulation scheme for 100Gbps and given the commonality between 40 and 100Gbps coherent designs, the issue is their relative costs.

“The right question people are asking is what are the economics of 40 Gig versus 100 Gig coherent,” says Rafik Ward, Finisar's vice president of marketing. “If you buy 40 Gig and shortly after an economical 100 Gig coherent design appears, will 40 Gig coherent get the required market traction?”

Meanwhile, designers are shrinking existing 40Gbps modules, significantly boosting 40Gbps system capacity.

The 300-pin LFF transponder, at 7x5 inches, requires its own line card. As such, two system line cards are needed for a 40Gbps link: one for the short-reach, client-side interface and one for the line-side transponder.

A handful: a 300-pin large form factor transponder Source: Mintera

Mintera is one vendor developing a smaller 300-pin MSA DPSK transponder that will enable the two 40Gbps interfaces on one card.

“At present there are 16 slots per shelf supporting eight 40Gbps links, and three shelves per bay,” says Robinson. Once vendors design a new line card, system capacity will double with 16 40Gbps links (640Gbps) per shelf and 1,920Gbps capacity per system. Equipment vendors can also use the smaller pin-for-pin compatible 300-pin MSA on existing cards to reduce costs.
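The capacity arithmetic Robinson describes can be checked in a few lines; the slot and rate figures are those quoted above.

```python
# Halving the transponder footprint doubles the links per shelf: today a
# 40Gbps link occupies two cards (client-side plus line-side transponder),
# so 16 slots support only eight links.
SLOTS_PER_SHELF, SHELVES_PER_BAY, LINK_GBPS = 16, 3, 40

before = (SLOTS_PER_SHELF // 2) * LINK_GBPS   # two slots per link today
after = SLOTS_PER_SHELF * LINK_GBPS           # one slot per link

assert before == 320 and after == 640         # Gbps per shelf
assert after * SHELVES_PER_BAY == 1920        # Gbps per three-shelf system
```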

Matt Traverso, senior manager, technical marketing at Opnext also stresses the importance of more compact transponders: “Right now though it is premature. The issue still is the modulation format war.”

Another factor driving transponder development is the electrical interface used. The 300-pin MSA uses the SFI 5.1 interface based on 16 2.5Gbps channels. “Forty and 100GbE all use 10Gbps interfaces, as do a lot of framer and ASIC vendors,” says Traverso. Since the 300-pin MSA is not compatible, adopting 10Gbps-channel electrical interfaces will likely require a new pluggable MSA for long haul.

 

CFP MSA for 40 and 100 Gig

One significant MSA development in 2009 was the CFP pluggable transceiver MSA. At ECOC last September, several companies announced first CFP designs implementing 40 and 100GbE standards.

Opnext announced a 100GBASE-LR4 CFP, a 100GbE over 10 km interface made up of four wavelengths each at 25Gbps. Finisar and Sumitomo Electric each announced a 40GBASE-LR4 CFP, a 40GbE over 10km comprising four wavelengths at 10Gbps.

The CFP MSA is smaller than the 300-pin LFF, measuring some 3.4x4.8 inches (86x120mm). It has four power settings - up to 8W, up to 16W, below 24W and above 24W (to 32W). When a CFP is plugged in, it communicates to the host platform its power class.
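A small Python sketch of the power-class handshake logic, using the class boundaries as quoted above (the MSA's formal definitions may differ in detail):

```python
def cfp_power_class(watts):
    """Map a CFP module's consumption to one of the four power classes
    described in the text, with boundaries at 8W, 16W, 24W and a 32W cap.
    A plugged-in module reports this class to its host platform."""
    if watts <= 8:
        return 1
    if watts <= 16:
        return 2
    if watts <= 24:
        return 3
    if watts <= 32:
        return 4
    raise ValueError("exceeds the CFP's 32W power limit")

assert cfp_power_class(7.5) == 1
assert cfp_power_class(30) == 4
```

The 32W ceiling is the figure Sumitomo's Tian cites later when weighing the CFP's suitability for coherent line-side designs.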

The 100Gbps CFP is designed to link IP routers, or an IP router to a DWDM platform for longer distance transmission.

“There is customer-pull to get the 100 Gig [pluggable] out,” says Traverso, explaining why Opnext chose 100GbE for its first design.

Opnext’s 100GbE pluggable comprises four 25Gbps transmit optical sub-assemblies (TOSAs) and four receive optical sub-assemblies (ROSAs). Also included are an optical multiplexer and demultiplexer to transmit and recover the four narrowly spaced (LAN-WDM) wavelengths, and two integrated circuits (ICs): a gearbox IC translating between the 10Gbps channels and the higher speed 25Gbps lanes, and the module’s electrical interface IC.
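The gearbox IC's job is simply to match aggregate bandwidth across differing lane counts; the arithmetic, sketched here, follows from the lane rates quoted in the article.

```python
# 100GbE CFP gearbox: the host-side electrical interface runs ten 10Gbps
# lanes, the optical line side four 25Gbps lanes. The gearbox re-times
# bits between the two, so the aggregates must be equal.
host_lanes, host_rate_gbps = 10, 10
line_lanes, line_rate_gbps = 4, 25

assert host_lanes * host_rate_gbps == line_lanes * line_rate_gbps == 100
```

This is also why the 40GBASE-LR4 CFP needs no gearbox: its four optical lanes already run at the 10Gbps electrical rate.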

 

"The issue still is the modulation format war”

Matt Traverso, Opnext

 

The CFP transceiver, while relatively large, has space constraints that challenge the routeing of fibres linking the discrete optical components. “This is familiar territory,” says Traverso. “The 10GBASE-LX4 [a four-channel design] in an X2 [pluggable] was a much harder problem.”

“Right now our [100GbE] focus is the 10 km CFP,” says Juniper’s Ceuppens. “There is no interest in parallel multimode [100GBASE-SR10] - service providers will not deploy multi-mode fibre due to the bigger cable and greater weight.”

Finisar’s and Sumitomo Electric’s 40GBASE-LR4 CFP also uses four TOSAs and ROSAs, but since each is 10Gbps no gearbox IC is needed. Moreover, coarse WDM (CWDM)-based wavelength spacing is used, avoiding the need for the thermal cooling that 100Gbps requires to stop the lasers’ LAN-WDM wavelengths drifting. Finisar has since detailed a 100GBASE-LR4 CFP.

“For the 40GBASE-LR4 CFP, a discrete design is relatively straightforward,” says Feng Tian, senior manager marketing, device at Sumitomo Electric Device Innovations.  Vendors favour discretes to accelerate time-to-market, he says. But with second generation designs, power and cost reduction will be achieved using photonic integration.

Reflex Photonics announced dual 40GBASE-SR4 transceivers within a CFP in October 2009. The SR4 specification uses a 4-channel multimode ribbon cable for short reach links up to 150 m. The short reach CFP designs will be used for connecting routers to DWDM platforms for telecom and to link core switch platforms within the largest data centres. “Where the number of [10Gbps] links becomes unwieldy,” says Robert Coenen, director of product management at Reflex Photonics.

Reflex’s 100GbE design uses a 12x photo-detector array and a 12x VCSEL array. For the 100GbE design, 10 of the 12 channels are used, while for the 2x40GbE, eight (2x4) channels of each array are used (see diagram).  “We didn’t really have to redesign [the 100GbE]; just turn off two lanes and change the fibering,” says Coenen.

Meanwhile switch makers are already highlighting a need for more compact pluggables than the CFP.

“The CFP standard is OK for first generation 100Gbps line cards but denser line cards are going to require a smaller form factor,” says Pravin Mahajan, technology marketer at Cisco Systems.

This is what Cube Optics is addressing by integrating four photo-detectors and a demultiplexer in a sub-assembly using its injection molding technology. Its 4x25Gbps ROSA for 100GbE complements its existing 4x10 CWDM ROSA for 40GbE applications.

“The CFP is a nice starting point but there must be something smaller, such as a QSFP or SFP+,” says Sven Krüger, vice president product management at Cube Optics.

The company has also received funding for the development of complementary 4x25Gbps and 4x10Gbps TOSA functions. “The TOSA is more challenging from an optical alignment point of view; the lasers have a smaller coupling area,” says Francis Nedvidek, Cube Optics’ CEO.

Cube Optics forecasts second generation 40GbE and 100GbE transceiver designs using its integrated optics to ship in volume in 2011.

Could the CFP be used beyond 100GbE for 100Gbps line side and the most challenging coherent design?

“The CFP with its smaller size is a good candidate,” says Sumitomo’s Tian. “But power consumption will be a challenge.” It may require one, and maybe two, more CMOS process generations beyond the current 65nm to reduce the power consumption sufficiently for the design to meet the CFP’s 32W power limit, he says.

 

XFP put to new uses

Established pluggables such as the 10Gbps XFP transceiver also continue to evolve. 

Transmode is shipping XFP-based tunable lasers with its systems, claiming the tunable XFP brings significant advantages.  

In turn, Menara Networks is incorporating system functionality within the XFP normally found only on the line card.

Until now, deploying fixed-wavelength DWDM XFPs meant a system vendor had to keep a sizable inventory for when an operator needed to light new DWDM wavelengths. “With no inventory you have to wait for a firm purchase order from your customer before you know which wavelengths to order from your transceiver vendor, and that means a 12-18 week delivery time,” says Ferej. Now, with a tunable XFP, one transceiver meets all the operator’s wavelength planning requirements.

Moreover, the optical performance of the XFP is only marginally less than that of a tunable 10Gbps 300-pin SFF MSA. “The only advantage of a 300-pin is a 2-3dB better optical signal-to-noise ratio, meaning the signal can pass more optical amplifiers, required for longer reach,” says Ferej.

Using a 300-pin extends the overall reach without a repeater beyond 1,000 km. “But the majority of the metro network business is below 1,000 km,” says Ferej.

Do the power and space specifications of an MSA such as the XFP matter for component vendors, or do they just accept them?

“It doesn’t matter till it matters,” says Padraig OMathuna, product marketing director at optical device maker, GigOptix.  The maximum power rating for an XFP is 3.5W. “If you look inside a tunable XFP, the thermo-electric cooler takes 1.5 to 2W, the laser 0.5W and then there is the TIA,” says OMathuna. “That doesn’t leave a lot of room for our modulator driver.”
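The squeeze OMathuna describes can be laid out as a back-of-envelope power budget. The TIA figure below is an illustrative assumption, since the article does not give one; the rest of the numbers are as quoted.

```python
# Back-of-envelope XFP power budget from the figures quoted above.
# tia_w is an illustrative assumption, not a number from the article.
XFP_MAX_POWER_W = 3.5   # XFP maximum power rating

tec_w = 2.0             # thermo-electric cooler, worst case of the 1.5-2W quoted
laser_w = 0.5           # tunable laser
tia_w = 0.3             # TIA: placeholder assumption

headroom_w = XFP_MAX_POWER_W - (tec_w + laser_w + tia_w)
print(f"Left for the modulator driver and everything else: {headroom_w:.1f}W")
# → Left for the modulator driver and everything else: 0.7W
```

Even with a generous guess for the TIA, well under a watt remains for the modulator driver, control electronics and everything else in the module, which is the point OMathuna is making.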

 

Inside JDS Uniphase's tunable XFP

Meanwhile, Menara Networks has implemented the ITU-T’s Optical Transport Network (OTN) in the form of an application specific IC (ASIC) within an XFP.

OTN is used to encapsulate signals for transport while adding optical performance monitoring functions and forward error correction.  By including OTN within a pluggable, signal encapsulation, reach and optical signal management can be added to IP routers and carrier Ethernet switch routers.

The approach delivers several advantages, says Siraj ElAhmadi, CEO of Menara Networks.

First, it removes the need for additional 10Gbps transponders to ready the signals from the switch or router for DWDM transport. Second, system vendors can develop a universal line card without having to support OTN functionality on the card itself.

The biggest technical challenge for Menara was not developing the OTN ASIC but the accompanying software. “We had the chip one and a half years before we shipped the product because of the software,” says ElAhmadi. “There is no room [within the XFP] for extra memory.”

Menara is supplying its OTN pluggables to a North American cable operator.

ECI Telecom is one vendor using Menara’s pluggable for its carrier Ethernet switch router (CESR) platforms. “For certain applications it saves you having to develop OTN,” says Jimmy Mizrahi, next-generation networking product line manager, network solutions division at ECI Telecom.

 

Pluggables and optical engines

The CFP is one module that will be used in the data centre but for high density applications - linking switches and high-performance computing - more compact designs are needed. These include the QSFP, the CXP and what are being called optical engines.

The CFP form factor for 40 and 100Gbps

The QSFP is already the favoured interface for active optical cables that encapsulate the optics within the cable and which provide an attractive alternative to copper interconnect.  QSFP transceivers support quad data rate (QDR) 4xInfiniband as well as extending the reach of 4x10Gbps Ethernet beyond copper’s 7m.

The QSFP is also an option for more compact 40GbE short-reach interfaces. “The [40GBASE-]SR4 is doable today as a QSFP,” says Christian Urricarriet, product line manager for 40GbE, 100GbE and parallel optics at Finisar. The 40GBASE-LR4 in a QSFP is also possible, as targeted by Cube Optics among others.

Achieving 100GbE within a QSFP is another matter. Adding a 25Gbps-per-channel electrical interface and higher-speed lasers while meeting the QSFP’s power constraints is a considerable challenge.  “There may need to be an intermediate form factor that is better defined [for the task],” says Urricarriet.

Meanwhile, the CXP is a front panel interface that promises denser interfaces within the data centre. “CXP is useful for inter-chassis links as it stands today,” says Cisco’s Mahajan.

According to Avago Technologies, Infiniband is the CXP’s first target market, while 100GbE using 10 of the 12 channels is clearly an option. But there are technical challenges to be overcome before the CXP connector can be used for 100GbE. “You need to be much more stringent to meet the IEEE optical specification,” says Sami Nassar, director of marketing, fiber optic products division at Avago Technologies.

The CXP is also entering territory until recently the preserve of the SNAP12 parallel optics module. SNAP12 connects the platforms within large IP router configurations, and is used for high-end computing. However, it is not a pluggable and comprises separate 12-channel transmitter and receiver modules. SNAP12 has a 6.25Gbps-per-channel data rate, although a 10Gbps-per-channel version has been announced.

“Both [the CXP and SNAP12] have a role,” says Reflex’s Coenen. SNAP12 sits on the motherboard and, thanks to its small form factor, can be placed close to the ASIC, he says.

Such an approach is now being targeted by firms using optical engines to reduce the cost of parallel interfaces and address emerging high-speed interface requirements on the motherboard, between racks and between systems.

Luxtera’s OptoPHY is one such optical engine. There are two versions: a single-channel 10Gbps product and a 4x10Gbps product, while a 12-channel version will sample later this year.

The OptoPHY uses the same optical technology as Luxtera’s AOC: a 1490nm distributed feedback (DFB) laser is used for both the one and four-channel products, modulated using the company’s silicon photonics technology. The single channel consumes 450mW while the four-channel consumes 800mW, says Marek Tlalka, vice president of marketing at Luxtera; reach is up to 4km.

Luxtera says the 12-channel version will cost around $120, equating to $1 per Gbps. This, it claims, is several times cheaper than SNAP12.

“The next-generation product will achieve 25Gbps per channel, using the same form factor and the same chip,” says Tlalka. This will allow the optical engine to handle channel speeds used for 100GbE as well as the next Infiniband speed-hike known as Eight Data Rate (EDR).

Avago, a leading supplier of SNAP12, says that the robust interface with its integrated heat sink is still a preferred option for vendors. “For others, with even higher-density concentrations, a next-generation packaging type is being used, which we’ve not announced yet,” says Dan Rausch, Avago’s senior technical marketing manager, fiber optic products division.

The advent of 100GbE and even higher rates, and of 25Gbps electrical interfaces, will further promote optical engines. “It is hard enough to route 10Gbps around an FR4 printed circuit board,” says Coenen. Four inches is typically the limit; longer links of up to 10 inches require such techniques as pre-emphasis, electronic dispersion compensation and retiming.

At 25Gbps, distances will become even shorter. “This makes the argument for optical engines even stronger; you will need them near the ASICs to feed data to the front panel,” says Coenen.

Optical transceivers may rightly be in the limelight for handling network traffic growth, but it is in the links between platforms, boards and, soon, on-board devices that optical transceiver vendors, unencumbered by MSAs, have scope for product differentiation.
