Optical module trends: A conversation with Finisar
Finisar demonstrated recently a raft of new products that address emerging optical module developments. These include:
- A compact coherent integrated tunable transmitter and receiver assembly
- 400GBASE-FR8 and -LR8 QSFP-DD pluggable modules and a QSFP-DD active optical cable
- A QSFP28 100-gigabit serial FR interface
- 50-gigabit SFP56 SR and LR modules
Rafik Ward, Finisar’s general manager of optical interconnects, explains the technologies and their uses.
Compact coherent
Finisar is sampling a compact integrated assembly that supports 100-gigabit and 200-gigabit coherent transmission.
The integrated tunable transmitter and receiver assembly (ITTRA), to give it its full title, includes the optics and electronics needed for an analogue coherent optics interface.
The 32-gigabaud ITTRA includes a tunable laser, optical amplifier, modulators, modulator drivers, coherent mixers, a photo-detector array and the accompanying trans-impedance amplifiers, all within a gold box. “An entire analogue coherent module in a footprint that is 70 percent smaller than the size of a CFP2 module,” says Ward. The ITTRA's power consumption is below 7.5W.
Finisar says the ITTRA is smaller than the equivalent integrated coherent transmitter-receiver optical sub-assembly (IC-TROSA) design being developed by the Optical Internetworking Forum (OIF).
“We potentially could take this device and enable it to work in that [IC-TROSA] footprint,” says Ward.
Using the ITTRA enables higher-density coherent line cards and frees up space within an optical module for the coherent digital signal processor (DSP) for a CFP2 Digital Coherent Optics (CFP2-DCO) design.
Ward says the CFP2 is a candidate for a 400-gigabit coherent pluggable module along with the QSFP-DD and OSFP form factors. “All have their pros and cons based on such fundamental things as the size of the form factor and power dissipation,” says Ward.
But given that the 7nm CMOS coherent DSPs required for 400 gigabits are not yet available, the 100- and 200-gigabit CFP2 remains the module of choice for coherent pluggable interfaces.
The demonstration of the ITTRA implementing a 200-gigabit link using 16-QAM at OFC 2018. Source: Finisar
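As a rough, illustrative check of how a 32-gigabaud interface supports the 200-gigabit demonstration, the sketch below applies the textbook relationship between symbol rate, bits per symbol and polarisations; real line rates also carry FEC and framing overhead, which is ignored here.

```python
# Illustrative capacity arithmetic for a dual-polarisation coherent interface.
# Textbook relationship only; real line rates add FEC and framing overhead.

def coherent_capacity_gbps(baud_gbd, bits_per_symbol, polarisations=2):
    """Raw capacity = symbol rate x bits per symbol x number of polarisations."""
    return baud_gbd * bits_per_symbol * polarisations

print(coherent_capacity_gbps(32, 2))  # QPSK:   128 Gbps raw, enough for a 100G payload
print(coherent_capacity_gbps(32, 4))  # 16-QAM: 256 Gbps raw, enough for a 200G payload
```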
400 gigabits
Finisar also demonstrated its first 400-gigabit QSFP-DD pluggable module products based on the IEEE standards: the 2km 400GBASE-FR8 and the 10km 400GBASE-LR8. The company also unveiled a QSFP-DD active optical cable to link equipment up to 70m apart.
The two QSFP-DD pluggable modules use eight 50-gigabit PAM-4 electrical signal inputs that are modulated onto eight lasers whose outputs are multiplexed and sent over a single fibre. Finisar chose to implement the IEEE standards as its first QSFP-DD products as they are low-power and lower risk 400-gigabit solutions.
The alternative 2km 400-gigabit design, developed by the 100 Lambda MSA, is the 400G-FR4 that uses four 100-gigabit optical lanes. “This has some risk elements to it such as the [PAM-4] DSP and making 100-gigabit serial lambdas work,” says Ward. “We think the -LR8 and -FR8 are complementary and could enable a fast time-to-market for people looking at these kinds of interfaces.”
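For illustration, here is a minimal sketch of the lane arithmetic behind the two 2km approaches discussed above; the baud rates shown are the nominal pre-FEC figures, so the actual signalling rates are slightly higher.

```python
# Lane arithmetic for the two 400-gigabit, 2km approaches (illustrative only).
# PAM-4 carries two bits per symbol, so the baud rate is half the per-lane bit rate.

def lane_plan(total_gbps, lane_gbps, bits_per_symbol=2):
    lanes = total_gbps // lane_gbps
    baud_gbd = lane_gbps / bits_per_symbol
    return lanes, baud_gbd

print(lane_plan(400, 50))   # 400GBASE-FR8/-LR8: (8, 25.0) - eight lanes at 25 GBd
print(lane_plan(400, 100))  # 400G-FR4:          (4, 50.0) - four lanes at 50 GBd
```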
The QSFP-DD active optical cable may have a reach of 70m but typical connections are 20m. Finisar uses its VCSEL technology to implement the 400-gigabit interface. At the OFC show in March, Finisar demonstrated the cable working with a high-port-density, one-rack-unit Cisco switch.
I sometimes get asked by customers what is the best way to get to higher-density 100 gigabit. I point to the 400-gigabit DR4.
QSFP28 FR
Finisar also showed its 2km QSFP28 optical module with a single-wavelength 100-gigabit PAM-4 output. The QSFP28 FR takes four 25 gigabit-per-second electrical interfaces and passes them through a gearbox chip to form a 50-gigabaud PAM-4 signal that is used to modulate the laser.
The QSFP28 FR is expected to eventually replace the CWDM4 that uses four 25-gigabit wavelengths multiplexed onto a single fibre. “The end-game is to get a 100-gigabit serial module,” says Ward. “This module represents the first generation of that.”
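Below is a toy sketch of the gearbox idea, assuming ideal 25Gbps NRZ tributaries and ignoring FEC, framing and the real gearbox chip's internals: four input lanes are bit-interleaved into one stream, and pairs of bits are mapped onto four amplitude levels, halving the symbol rate.

```python
# Toy illustration of the QSFP28 FR gearbox concept (not the actual chip's logic):
# four 25Gbps NRZ tributaries -> one 100Gbps stream -> 50 GBd PAM-4 symbols.

from itertools import chain

def mux(tributaries):
    """Bit-interleave equal-length tributaries into one serial bit stream."""
    return list(chain.from_iterable(zip(*tributaries)))

def to_pam4(bits):
    """Map consecutive bit pairs onto one of four amplitude levels (0..3)."""
    return [(bits[i] << 1) | bits[i + 1] for i in range(0, len(bits), 2)]

tribs = [[1, 0, 1, 1], [0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]  # 4 x 25G inputs
stream = mux(tribs)                # aggregate 100G bit stream (16 bits here)
symbols = to_pam4(stream)          # 50 GBd PAM-4: two bits per symbol (8 symbols)
print(len(stream), len(symbols))   # 16 8
```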
Finisar is also planning a 500m QSFP28 DR. Both the QSFP28 DR and FR will interoperate with the 500m IEEE 400GBASE-DR4, which has four outputs, each a fibre carrying a 100-gigabit PAM-4 signal; the -DR4 outputs can interface with up to four FR or DR modules.
“I sometimes get asked by customers what is the best way to get to higher-density 100 gigabit,” says Ward. “I point to the 400 gigabit DR4, even though we call it a 400-gigabit part, it is also a 4x100-gigabit DR solution.”
Ward says that the 500m reach of the DR is sufficient for the vast majority of links in the data centre.
SFP56 SR and LR
Finisar has also demonstrated two SFP56 modules: a short-reach (SR) version with a reach of 100m over OM4 multi-mode fibre, and a 10km LR single-mode interface. The SR is VCSEL-based while the LR uses a directly modulated distributed feedback laser.
The SFP is deployed widely at speeds up to and including 10 gigabits while the 25-gigabit SFP shipments are starting to ramp. The SFP56 is the next-generation SFP module with a 50-gigabit electrical input and a 50-gigabit PAM-4 optical output.
The SFP56 will be used for several applications, says Finisar. These include linking servers to switches, connecting switches in enterprise applications, and 5G wireless applications.
Finisar says its 50 and 100 gigabit-per-lane products will likely be released throughout 2019, in line with the industry. “The 8-channel devices will likely come out at least a few quarters before the 4-channel devices,” says Ward.
The era of cloud-scale routeing
Nokia's FP4 p-chip. The multi-chip module shows five packages: the p-chip die surrounded by four memory stacks. Each stack has five memory die. The p-chip and memory stacks are interconnected using an interposer.
- Nokia has unveiled the FP4, a 2.4 terabit-per-second network processor that has 6x the throughput of its existing FP3.
- The FP4 is a four-IC chipset implemented using 16nm CMOS FinFET technology. Two of the four devices in the chipset are multi-chip modules.
- The FP4 uses 56 gigabit-per-second serialiser-deserialiser (serdes) technology from Broadcom, implemented using PAM-4 modulation. It also supports terabit flows.
- Nokia announced IP edge and core router platforms that will use the FP4, the largest configuration being a 0.58 petabit switching capacity router.
Much can happen in an internet minute. In that time, 4.1 million YouTube videos are viewed, compared to 2.8 million views a minute only last year. Meanwhile, new internet uses continue to emerge. Take voice-activated devices, for example. Amazon ships 50 of its Echo devices every minute, almost one a second.
Given all that happens each minute, predicting where the internet will be in a decade’s time is challenging. But that is the task Alcatel-Lucent’s (now Nokia’s) chip designers set themselves in 2011 after the launch of its FP3 network processor chipset that powers its IP-router platforms.
Six years on and its successor - the FP4 - has just been announced. The FP4 is the industry’s first multi-terabit network processor that will be the mainstay of Nokia’s IP router platforms for years to come.
Cloud-scale routing
At the FP4’s launch, Nokia’s CEO, Rajeev Suri, discussed the ‘next chapter’ of the internet that includes smart cities, new higher-definition video formats and the growing number of connected devices.
IP traffic is growing at a compound annual growth rate (CAGR) of 25 percent through to 2022, according to Nokia Bell Labs, while peak data rates are growing at a 39 percent CAGR. Nokia Bell Labs also forecasts that the number of connected devices will grow from 12 billion this year to 100 billion by 2025.
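As a rough illustration of what the connected-devices forecast implies, the snippet below is simple compound-growth arithmetic, not Bell Labs' methodology, and assumes "this year" means 2017.

```python
# Implied compound annual growth rate if connected devices grow from
# 12 billion (assumed 2017) to 100 billion by 2025 -- illustrative only.

start, end, years = 12e9, 100e9, 2025 - 2017
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 30% a year
```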
Basil Alwan, Nokia’s president of IP and optical networks, said the internet has entered the era of cloud-scale routeing. When delivering a cloud service, rarely is the request fulfilled by one data centre. Rather, several data centres are involved in fulfilling the tasks. “One transaction to the cloud is multiplied,” said Alwan.
IP traffic is also becoming more dynamic, while the Internet of Things presents a massive security challenge.
Alwan also mentioned how internet content providers have much greater visibility into their traffic whereas the telcos’ view of what flows in their networks is limited. Hence their interest in analytics to understand and manage their networks better.
These are the trends that influenced the design of the FP4.
We put a big emphasis on making sure we had a high degree of telemetry coming out at the chip level
FP4 goals
Telemetry, the sending of measurement data for monitoring purposes, and network security were two key design goals for the FP4.
“We put a big emphasis on making sure we had a high degree of telemetry coming out at the chip level,” said Steve Vogelsang, CTO for Nokia's IP and optical business.
Tasks include counters, collecting statistics and packet copying. “This is to make sure we have the instrumentation coming off these systems that we can use to drive the [network] analytics platform,” said Vogelsang.
Being able to see the applications flowing in the network benefits security. Distributed Denial-of-Service (DDoS) attacks are handled by diverting traffic to a ‘scrubbing centre’ where sophisticated equipment separates legitimate IP packets from attack traffic that needs scrubbing.
The FP4 supports the deeper inspection of packets. “Once we identify a threat, we can scrub that traffic directly in the network,” said Vogelsang. Nokia claims that the FP4 can deal with over 90 percent of the traffic that would normally go to a scrubbing centre.
Chipset architecture
Nokia’s current FP3 network processor chipset comprises three devices: the p-chip network processor, the q-chip traffic manager and the t-chip fabric interface device.
The p-chip network processor inspects packets and performs table look-ups using fast-access memory to determine where packets should be forwarded. The q-chip is the traffic manager that oversees the packet flows and decides how packets should be dealt with, especially when congestion occurs. The third FP3 chip is the t-chip that interfaces to the router fabric.
The FP4 retains the three chips and adds a fourth: the e-chip - a media access controller (MAC) that parcels data from the router’s client-side pluggable optical modules for the p-chip. However, while the FP4 retains the same nomenclature for the chips as the FP3, the CMOS process, chip architecture and packaging used to implement the FP4 are significantly more advanced.
The FP4 can deal with over 90 percent of the traffic that would normally go to a scrubbing centre
Nokia is not providing much detail regarding the FP4 chipset's architecture, unlike at the launch of the FP3. “We wanted to focus on the re-architecture we have gone through,” said Vogelsang. But looking at the FP3 design, insight can be gained as to how the FP4 has likely changed.
The FP3’s p-chip uses 288 programmable cores. Each programmable core can process two instructions each clock cycle and is clocked at 1GHz.
The 288 cores are arranged as a 32-row-by-9-column array. Each row of cores can be viewed as a packet-processing pipeline. A row pipeline can also be segmented to perform independent tasks. The array’s columns are associated with table look-ups. The resulting FP3 p-chip is a 400-gigabit network processor.
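A back-of-the-envelope sketch of what those FP3 figures imply for the per-packet processing budget, under the illustrative assumptions of minimum-size 64-byte Ethernet frames with 20 bytes of preamble and inter-frame gap, and every core fully utilised:

```python
# Back-of-the-envelope per-packet instruction budget for the FP3 p-chip
# (illustrative assumptions: 64-byte frames, 20-byte overhead, all cores busy).

cores, instructions_per_cycle, clock_hz = 288, 2, 1e9
instructions_per_second = cores * instructions_per_cycle * clock_hz  # 576e9

line_rate_bps = 400e9                               # 400-gigabit network processor
frame_bits = (64 + 20) * 8                          # minimum frame plus preamble and gap
packets_per_second = line_rate_bps / frame_bits     # ~595 million packets/s

print(instructions_per_second / packets_per_second) # ~970 instructions per packet
```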
Vogelsang said there is limited scope to increase the clock speed of the FP4 p-chip beyond 1GHz. Accordingly, the bulk of the FP4’s sixfold throughput improvement is the result of a combination of programmable core enhancements, possibly a larger core array and, most importantly, system improvements. In particular, the memory architecture is now packaged within the p-chip for fast look-ups, while the chipset’s input-output lanes have been boosted from 10 gigabits-per-second (Gbps) to 50Gbps.
Nokia has sought to reuse as much as possible of the existing microcode that programs the cores for the FP4 p-chip, but has added new instructions to take advantage of changes in the pipeline.
Software compatibility already exists at the router operating system level. The same SROS router operating system runs on Nokia’s network processors, on merchant hardware from the likes of Broadcom, and on x86 instruction-set microprocessors in servers using virtualisation technology.
Such compatibility is achieved using a hardware abstraction layer that sits between the operating system and the underlying hardware. “The majority of the software we write has no idea what the underlying hardware is,” said Vogelsang.
Nokia has a small team of software engineers focussed on the FP4’s microcode changes but, due to the hardware abstraction layer, such changes are transparent to the main software developers.
The FP3’s traffic manager, the q-chip, comprises four reduced instruction set computer (RISC) cores clocked at 900MHz. This too has been scaled up for the FP4 but Nokia has not given details.
The t-chip interfaces to the switch fabric that sits on a separate card. In previous generations of router products, a mid-plane was used, said Nokia. This has been scrapped for the new router products being announced. Instead, the switch cards are held horizontally in the chassis and the line cards are vertical. “A bunch of metal guides are used to guide the two cards and they directly connect to each other,” said Vogelsang. “The t-chips are what interface to these connectors inside the system.”
The MAC e-chip interfaces to the line card’s pluggable modules and supports up to a terabit flow. Indeed, the MAC will support integer multiples of 100 Gigabit Ethernet, from 100 gigabits to 1 terabit. Nokia has a pre-standard implementation of FlexMAC that allows it to combine lanes across multiple transceivers into a single interface.
Nokia will have line cards that support 24 or 36 QSFP-DD pluggable modules, with each module able to support 400 Gigabit Ethernet.
The FP4 is also twice as power efficient, delivering 4 gigabits per watt.
We wanted to make sure we used a high-volume chip-packaging technology that was being driven by other industries and we found that in the gaming industry
Design choices
One significant difference between the two network processor generations is the CMOS process used. Nokia skipped the 28nm and 22nm CMOS nodes, going from 40nm CMOS for the FP3 to 16nm FinFET for the FP4. “We looked at that and we did not see all the technologies we would need coming together to get the step-function in performance that we wanted,” said Vogelsang.
Nokia also designed its own memory for the FP4.
“A challenge we face with each generation of network processor is finding memories and memory suppliers that can offer the performance we need,” said Vogelsang. The memory Nokia designed is described as intelligent: instructions can effectively be implemented during memory access and the memory can be allocated to do different types of look-up and buffering, depending on requirements.
Another key area associated with maximising the performance of the memory is the packaging. Nokia has adopted multi-chip module technology for the p-chip and the q-chip.
“We wanted to make sure we used a high-volume chip-packaging technology that was being driven by other industries and we found that in the gaming industry,” said Vogelsang, pointing out that the graphics processing unit (GPU) has similar requirements to those of a network processor. GPUs are highly memory intensive while manipulating bits on a screen is similar to manipulating headers and packets.
The resulting 2.5D packaged p-chip comprises the packet processor die and stacks of memory. Each memory stack comprises 5 memory die. All sit on an interposer substrate - itself a die that is used for dense interconnect of devices. The resulting FP4 p-chip is thus a 22-die multi-chip module.
“Our memory stacks are connected at the die edges and do not use through-silicon vias,” said Vogelsang. “Hence it is technically a 2.5D package [rather than 3D].”
The q-chip is also implemented as a multi-chip module containing RISC processors and buffering memory, whereas the router fabric t-chip and MAC e-chip are single-die ICs.
The FP4’s more advanced CMOS process also enables significantly faster interfaces. The FP4 uses PAM-4 modulation to implement 56Gbps interfaces. “You really need to run those bit rates much much higher to get the traffic into and out of the chip,” said Vogelsang.
Nokia says it is using embedded serialiser-deserialiser interface technology from Broadcom.
Next-gen routers
Nokia has also detailed the IP edge and core routers that will use the FP4 network processor.
The 7750 Service Router (SR-s) edge router family will support up to 144 terabits in a single shelf. The highest-capacity configuration is the 7750 SR-14, a chassis 24 rack units high plus its power supply, which supports a dozen line cards, each delivering 12Tbps when using 100-gigabit modules, or 24x400GbE when using QSFP-DD modules.
Another new platform is the huge 7950 Extensible Routing System (XRS-XC) IP core router, which can be scaled to 576 terabits - over half a petabit - in a six-chassis configuration. Combining the six chassis does not require the use of front-panel client-side interfaces. Instead, dedicated interfaces are used with active optical cables to interlink the chassis.
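A quick, illustrative sanity check of the headline capacity figures, assuming the stated line cards and chassis are the only contributors:

```python
# Illustrative sanity check of the headline switching capacities.

sr14_line_cards, tbps_per_card = 12, 12
print(sr14_line_cards * tbps_per_card)     # 144 Tbps: the 7750 SR-14 single shelf

xrs_chassis_count, xrs_total_tbps = 6, 576
print(xrs_total_tbps / xrs_chassis_count)  # 96 Tbps contributed per XRS-XC chassis
```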
The first router products will be shipped to customers at the year end with general availability expected from the first quarter of 2018.
OFC 2014 product round-up - Final part

The industry is moving at a clip to fill the void in 100 Gig IEEE standards for 100m to 2km links. Until now, the IEEE 10km 100GBASE-LR4 and the 10x10 MSA have been the interfaces used to address such spans.
But responding to data centre operators, optical players are busy developing less costly, mid-reach MSAs, as was evident at the OFC exhibition and conference, held in San Francisco in March.
Meanwhile, existing IEEE 100 Gigabit standards are skipping to the most compact CFP4 and QSFP28 form factors. The -LR4 standard was first announced in a CFP in 2010, and moved to the CFP2, half the size of the CFP, in 2013. Now, several companies have detailed CFP4 -LR4 products, while Source Photonics has gone one better, announcing the standard in a QSFP28.
The CFP4 is half the size of the CFP2, while the QSFP28 is marginally smaller than the CFP4 but has a lower power consumption: 3.5W compared to the CFP4's 6W.
Timeline of some pluggable announcements at recent OFCs. Source: Gazettabyte
The mid-reach landscape
Several interfaces for mid-reach interconnect were detailed at OFC, and since the show two further MSAs have been announced: the CWDM4 and the CLR4 Alliance.
At OFC, the OpenOptics MSA, backed by Mellanox Technologies and Ranovus, was announced. Skorpios Technologies demoed its CLR4 module, an effort that has since become the CLR4 Alliance. And vendors discussed the Parallel Single Mode (PSM4) initiative that was first detailed in January.
Switch vendor Mellanox Technologies and module start-up Ranovus announced the OpenOptics MSA at OFC. The QSFP-based MSA uses a single-mode fibre and WDM transmission around 1550nm to address data centre links up to 2km.
Saeid Aramideh of Ranovus says that the MSA using its laser and silicon photonics technologies will deliver significant cost, power and size advantages. "But the 1550nm WDM connection is open to any technology," says Aramideh, chief marketing and sales officer at Ranovus. "It does not have to be silicon photonics."
The first MSA product, a 100 Gig QSFP28, uses 4x25 Gig channels. "The channel spacing for the MSA is flexible to be 50GHz or more," says Aramideh. The MSA is scalable to 400 Gig and greater rates. The 100 Gig QSFP28 technology is several months away from sampling.
Skorpios Technologies demonstrated its QSFP28-CLR4 transceiver, although the MSA's details have yet to be published. Skorpios is a silicon photonics player and uses heterogeneous integration, where the lasers, modulators, detectors and optical multiplexer and de-multiplexer are monolithically integrated on one chip.
The PSM4 MSA is another initiative designed to tackle the gap between IEEE short and long reach standards. Backed by players such as Avago Technologies, Brocade, JDSU, Luxtera, Oclaro, and Panduit, the 100 Gig standard is defined to operate in the 1295-1325nm spectral window and will have a reach of at least 500m.
ColorChip demonstrated a 100 Gig (4x25 Gig) QSFP28 with a 2km reach at the show. The design uses uncooled directly modulated lasers to achieve its 3.5W power consumption. Since the show, ColorChip has become one of the member companies backing the CLR4 Alliance, and the demonstrated QSFP matches the first details of the new MSA's spec.
100GBASE-LR4 moves to CFP4 and QSFP28
The IEEE 100GBASE-LR4 standard is transitioning to the smallest modules. At OFC, vendors detailed the first CFP4s while Source Photonics announced the -LR4 in a QSFP28.
Source Photonics says its transceiver consumes 3.5W. The QSFP28 form factor achieves up to a fourfold increase in face plate density compared to the CFP2: up to 48 modules compared to a dozen CFP2 modules, says the company, which expects first QSFP28 -LR4 samples in mid-2014.
Meanwhile, Avago Technologies, Finisar, Fujitsu Optical Components and JDSU all detailed their first CFP4 -LR4 modules.
JDSU says that when it developed the optics for its CFP2 -LR4, it was already eyeing the transition to the CFP4 and QSFP28 form factors. To achieve the -LR4 spec in the 6W CFP4, a key focus is the clock and data recovery (CDR), driver and trans-impedance amplifier chips. "A decent amount of the power consumption is wrapped up in the ICs that do the CDR and a variety of the digital functions behind the photonics," says Brandon Collings, JDSU's CTO for communications and commercial optical products. JDSU expects general availability of its CFP4 -LR4 later this year.
Finisar's -LR4 is its second CFP4 product; at ECOC 2013 it showcased a 100m, 100GBASE-SR4 CFP4. Finisar says its -LR4 uses distributed feedback (DFB) lasers and consumes 4.5W, well within the CFP4's 6W power profile. At OFC, the CFP4 was demonstrated working with CFP2 and CFP -LR4 modules. Finisar's CFP4 will sample later this year.
Avago announced availability of its -LR4 transmit optical subassembly (TOSA) and receive optical subassembly (ROSA) products for the CFP4, along with its CFP4 module which it says will be available next year. Fujitsu Optical Components also used OFC to demo its CFP4 -LR4.
40km Extended Reach Lite
Oclaro and Finisar detailed a tweak to the 100 Gig Extended Reach standard: the 40km, 100GBASE-ER4.
The IEEE standard uses a power-hungry semiconductor optical amplifier (SOA) prior to the PIN photodetector to achieve 40km. The module vendors have proposed replacing the SOA and PIN with an avalanche photo diode (APD) and external forward error correction to reduce the power consumption while maintaining the optical link budget. The changed spec is dubbed 100GBASE-ER4 Lite.
"Trying to achieve the power envelopes required for the CFP4 and QSFP28 using SOAs is going to be too hard," says Kevin Granucci, vice president of strategy and marketing at Oclaro.
Oclaro demonstrated an ER4 Lite in a CFP2. The module supports 100 Gigabit Ethernet and the Optical Transport Network (OTN) OTU-4 rates, and consumes less than 9W. "We are using the CFP2 as the first proof-of-concept," says Granucci. "For the 6W CFP4 and the 3.5W QSFP28, we think this is the only solution available."
At OFC Finisar demonstrated the link's feasibility, which it refers to as ER4f, using four 28 Gig lasers and four 28 Gig APDs.
Oclaro says it is seeing customer interest in the ER4 Lite, and points out that there are many 10 Gig 40km links deployed, especially in China. "The ER4 Lite will provide an upgrade path to 100 Gig," says Granucci.
VCSELs: serial 40 Gig and the 400 Gig CDFP
Finisar showcased a VCSEL operating at 40 Gig at OFC. State-of-the-art VCSEL interfaces run up to 28 Gig. Finisar's VCSEL demonstration was to show the commercial viability of higher-speed VCSELs for single channel or parallel-array applications. "We believe that VCSELs have not run out of steam," says Rafik Ward, vice president of marketing at Finisar. The 40 Gig VCSEL demonstration used non-return-to-zero (NRZ) signalling, "no higher-order modulation is being used", says Ward.
IBM T.J. Watson Research Center has published an IEEE paper with Finisar involving a 56Gbps optical link based on an 850nm VCSEL.
Finisar also demonstrated a CDFP-based active optical cable. The CDFP is a 400 Gig MSA that uses 16 x 25 Gig VCSEL channels in each direction. Such an interface will address routing, high-performance computing and proprietary interface requirements, says Finisar. The demonstration showcased the technology; Finisar has yet to announce interface products or reaches.
Short reach 100G and 4x16 Gig Fibre QSFPs
Avago Technologies announced a 100GBASE-SR4 implemented using the QSFP28. Avago's I Hsing Tan, segment marketing manager for Ethernet and storage optical transceivers, says there has been a significant ramp in data centre demand for the 40GBASE-SR4 QSFP+ in the last year. "Moving to the next generation, the data centre operator would like to keep the same [switch] density as the QSFP+, and the QSFP28 MSA offers the same form factor," he says.
The QSFP28 differs from the QSFP+ in that its electrical connector is upgraded to handle 28 Gigabit-per-lane data rates. Avago says the -SR4 module will be generally available next year.
Avago also announced a 4x16 Gigabit Fibre Channel QSFP+ transceiver. The industry is transitioning from 8 to 16 Gig Fibre Channel, says Avago, and this will be followed by 32 Gig serial and 4x32 Gig Fibre Channel modules.
The company has announced a 4x16 Gig QSFP+ to continue the increase in platform channel density while the industry transitions from 16 to 32 Gig Fibre Channel. "This solution is going to provide the switch vendor a 3x increase in density at half the power dissipation per channel for 16 Gig Fibre Channel, before 32 Gig Fibre Channel comes to maturity in three to five years," says Tan.
Avago has just announced that it has shipped over half a million QSFP+ modules.
Optical engines
TE Connectivity announced its 25 Gig-per-channel optical engine technology. The Coolbit optical engine will be included in four TE Connectivity products planned for this year: 100 Gig QSFP28 active optical cables (AOCs), 100 Gig QSFP28 transceivers, 300 Gig mid-board optical modules, and 400 Gig CDFP AOCs.
Meanwhile, Avago's MiniPod and MicroPod optical engine products now have a reach of 550m when coupled with Corning's ClearCurve OM4 fibre.
"This allows customers in the data centre to go a little bit further and not have to go to single-mode fibre," says Sharon Hall, product line manager for embedded optics at Avago.
Avago's latest optical engine targets active optical cables
Avago Technologies has unveiled its first family of active optical cables for use in the data centre and for high performance computing.
The company has developed an optical engine for use in the active optical cables (AOCs). Known as the Atlas 75x, the optical engine reduces the power consumption and cost of the AOC to better compete with direct-attach copper cables.

“Some 99 percent of [active optical cable] applications are 20m or less”
Sharon Hall, Avago
"This is a price-elastic market," says Sharon Hall, product line manager for embedded optics at Avago Technologies. "A 20 percent price premium over a copper solution, then it starts to get interesting."
The AOC family comprises a 10 Gigabit-per-second (Gbps) single-channel SFP+ and two QSFP+ cables - a 4x10Gbps QSFP+ and a QSFP+-to-four-SFP+. The SFP+ AOC is used for 10 Gigabit Ethernet, 8 Gigabit Fibre Channel and Infiniband applications. The QSFP+ is used for 4-channel Infiniband and serial-attached SCSI (SAS) storage, while the QSFP+-to-four-SFP+ is aimed at server applications.
There are also three CXP AOC products: 10-channel and 12-channel cables with each channel at 10Gbps, and a 12-channel cable with each channel at 12.5Gbps. The devices support the 100GBASE-SR10 100 Gigabit Ethernet and 12-channel Infiniband standards.
The 12-channel 12.5Gbps CXP product is used typically for proprietary applications such as chassis-to-chassis links where greater bandwidth is required, says Avago.
The SFP+ and QSFP+ products have a reach of 20m whereas competing AOC products achieve 100m. “Some 99 percent of applications are 20m or less,” says Hall.
The SFP+ and QSFP+ AOC products use the Atlas 75x optical engine. The CXP cable uses Avago’s existing Atlas 77x MicroPod engine and has a reach of 100m.
The Atlas 75x duplex 10Gbps engine reduces the power consumption by adopting a CMOS-based VCSEL driver instead of a silicon germanium one. “With CMOS you do not get the same level of performance as silicon germanium and that impacts the reach,” says Hall. “This is why the MicroPod is more geared for the high-end solutions.”
The result of using the Atlas 75x is an SFP+ AOC with a power consumption of 270mW, compared with the 200mW of a passive direct-attach copper cable. However, the SFP+ AOC has a lower bit error rate (1x10⁻¹⁵ versus 1x10⁻¹²), a reach of up to 20m compared with the copper cable’s 7m, and is only a quarter of the weight.
The SFP+ AOC also has a lower power consumption than an active direct-attach copper cable, which consumes 400-800mW and has a reach of 15m.
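To put the bit error rates quoted above in perspective, here is a short illustrative calculation of the mean time between bit errors on a fully loaded 10Gbps link:

```python
# Mean time between bit errors on a fully loaded 10Gbps link for the two
# quoted error rates -- illustrative arithmetic only.

line_rate_bps = 10e9
for ber in (1e-12, 1e-15):
    seconds_between_errors = 1 / (line_rate_bps * ber)
    print(ber, seconds_between_errors)
# 1e-12 -> one error every 100 seconds; 1e-15 -> one roughly every 28 hours.
```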
Avago says that up to a 30m reach is possible using the Atlas 75x optical engine. Meanwhile, samples of the AOCs are available now.
Reflections 2011, Predictions 2012 - Part 2
Gazettabyte asked industry analysts, CEOs, executives and commentators to reflect on the last year and comment on developments they most anticipate for 2012. Here are the views of Verizon's Glenn Wellbrock, Professor Rod Tucker, Ciena's Joe Berthold, Opnext's Jon Anderson, NeoPhotonics' Tim Jenks and Vladimir Kozlov of LightCounting.
Glenn Wellbrock, Verizon's director of optical transport network architecture & design
The most significant accomplishment from an optical transport perspective for me was the introduction of 100 Gigabit into Verizon's domestic - US - network.

"The key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit"
That accomplishment has paved the way for us to hit the ground running in 2012 with a very aggressive 100 Gigabit deployment plan. I also believe this accomplishment gives others the confidence to start taking advantage of this leading-edge technology.
With coherent receiver technology and the associated high-speed electronics lowering the propagation latency by up to 15%, we see a much cleaner line system design that eliminates external dispersion compensation fibre while bringing down the cost, space and power per bit.
The value of the whole industry moving in this direction means higher volumes and, therefore, lower costs. This new infrastructure will allow operators to get ahead of customer demand, thus improving delivery intervals and introducing new, higher bandwidth services to those large key customers that require it.
In my opinion, the key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit and provides the framework to support colourless, directionless and contentionless optical nodes.
Today, field technicians must plug a new transmitter/ receiver into the appropriate direction and filter port at both circuit ends. With this new technology, operations personnel can simply plug the new card into the next available port and it can then be provisioned, tested and even moved to a new colour or direction remotely without any on-site personnel involvement - even when there are multiple copies of the same colour on the same add/ drop structure coming from different fibres.
This new nodal architecture takes advantage of the inherent channel selection capability of the coherent receiver to eliminate fixed filters and opens up the door for a truly reconfigurable optical add/ drop multiplexer (ROADM) - creating new flexibility that can be used for optical restoration, network defragmentation, operational simplicity, and more.
Rod Tucker, Director of the Institute for a Broadband Enabled Society (IBES), Director of the Centre for Energy-Efficient Telecommunications (CEET), and professor of electrical and electronic engineering at the University of Melbourne.
Australia's National Broadband Network (NBN) hit the ground running in 2011.
The project is still many years from completion, but in 2011 the roll-out of fibre-to-the-premises infrastructure began in earnest. This is a very noteworthy project - a wholesale broadband access network delivering advanced broadband services to the entire population of the country, including fibre to 93% of all premises and a mixture of fixed wireless and satellite to the remainder. At an estimated cost of around AUS$36 billion, the price tag is not small.

"The environment created by [Australia's] National Broadband Network will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery"
But the wholesale-only model maximises opportunities for competition at the service provider level, and reduces wasteful duplication of infrastructure in the last mile. A remarkable aspect of the NBN project is that a deal has been struck between the incumbent telco, Telstra, and the government-owned company that owns the NBN.
Under this deal, Telstra will shut down its Hybrid-Fibre-Coax (HFC) network and decommission its legacy copper access network. Australia will become a truly fibre-connected country, with a future-proof broadband infrastructure.
My thoughts for 2012 also relate to Australia's National Broadband Network. The environment created by the NBN will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery.
I anticipate that in 2012 and beyond, new services providers and aggregators in areas such as health care, education, entertainment and energy will emerge.
I am very excited about the opportunities.
Joe Berthold, vice president of network architecture at Ciena
One of the most memorable developments from a network architecture point of view was the clear emergence of the category of packet-optical switching products to serve as the transport layer of backbone IP networks.
For years two competing points of view have been put forth. First, in the 'IP-over-glass' position, long-haul optics is incorporated into core routers. This has never taken off, with some disappointing attempts in the early days of 40 Gigabit. The second approach involves a separate, very much simpler, packet optical transport platform being introduced to interconnect core routers. The packet transport could be based on Ethernet protocols, MPLS, MPLS-TE or MPLS-TP.

"It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development"
What is quite significant in this development is that traditional router vendors seem to be going in this direction too, with the vision of a much simpler packet-switching platform to keep cost, space and power under control.
This is a clear response to the overwhelming need we see in the market, representing a separation of packet switching into two layers: one with global routing capability at strategic locations in the network, and the other with flexible transport functionality for network traffic engineering.
In 2012 it will be fascinating to see how the struggle for protocol dominance plays out within the data centre.
While the IETF has many competing proposals, worked on in multiple groups, the IEEE is now in final ballot for Shortest Path Bridging (IEEE 802.1aq).
Shortest Path Bridging has broad applicability in networks, but we might see it first emerge as a solution within the data centre.
The other contender within the data centre is OpenFlow, which has developed quite a momentum too.
It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development.
Jon Anderson, director of technology programme at Opnext
Our most significant 2011 events were the Japan great earthquake in March and the Thailand floods in October. Both events caused major disruptions and challenges in optical component supply-chain management and manufacturing.
JDS Uniphase's tunable SFP+ announcement was well ahead of the technology curve.

"Our most significant 2011 events were the Japan great earthquake in March and the Thailand floods in October."
In 2012 we expect initial production shipments and deployment of 100Gbps PM-QPSK/ coherent modules, as well as a fast production ramp of 40 Gigabit Ethernet (GbE) QSFP+ modules for data centre applications.
Another development to watch is the next-generation 100 GbE interconnect technology and standards development for low-cost, high-density modules for data centre applications.
Lastly, there will be an increased focus on technologies and solutions for 100 Gigabit DWDM in metro and extended reach enterprise applications.
Tim Jenks, CEO of NeoPhotonics

NeoPhotonics made significant progress this year in developments of components and technologies for coherent transmission networks, including receivers, transmitters and advanced approaches toward switching.
We continue to see increasing adoption of coherent transmission systems, broad-scale deployment of access networks and a continuing emergence of large scale data centres as a prominent element of the communications network landscape.
Vladimir Kozlov, CEO of LightCounting
The industry was strong enough to get over an earthquake, tsunami and flood in 2011. Softer demand for optics in 2011 helped - is still helping - many vendors to ride the disruptions. Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011. We are looking forward to a smoother ride in 2012, as demand/ supply reach equilibrium and nature cooperates.
"Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011"
Service provider revenue and capex were up significantly in 2011. Mobile data is driving the growth, but even wireline revenues are improving and FTTx is probably behind it. This should be a sustainable trend for 2012-2015: even as service providers curb expenses to improve profitability, a larger fraction of capex will be spent on equipment. New technology is critical to stay ahead of the competition.
Data centre optics had another good year with 10GBASE-T falling further behind schedule and with 100 Gigabit generating much action. This will probably get even more interesting in 2012.
Our conservative forecast for active optical cable, criticised by some vendors, was not conservative enough in 2011. It will take a while for this segment to unfold.
Next-gen 100 Gigabit short reach optics starts to take shape
The latest options for 100 Gigabit-per-second (Gbps) interfaces are beginning to take shape following a meeting of the IEEE 802.3 Next Generation 100Gb/s Optical Ethernet Study Group in November.
The interface options being discussed include:
- A parallel multi-mode fibre using a VCSEL with a reach of 50m to 70m. An active optical cable version with a 30m reach, limited by the desired cable length rather than the technology, using silicon photonics or a VCSEL has also been proposed.
- A parallel single-mode fibre using a 1310nm electro-absorption modulated laser (EML) or silicon photonics with a range of 50m to 1000m+.
- A duplex single-mode fibre, using wavelength division multiplexing (WDM) or pulse-amplitude modulation (PAM), an EML or silicon photonics, for a 2km reach.
“I think in the end all will be adopted,” says Marek Tlalka, director of marketing at Luxtera. "Users will be able to choose what is most economical."
Jon Anderson, director of technology programme at Opnext, stresses however that these are proposals.
"No decisions were reached by the Study Group on any of these proposals," he says. “The Study Group is only working towards defining objectives for a next-gen 100 Gigabit Ethernet Optics project.” Agreement on technical solutions is outside the scope of the Study Group.
Anderson says there is a general agreement to define a 4x25Gbps multi-mode fibre optical interface. But the issues of reach and multi-mode fibre type (OM3, OM4) are still being studied.
“The Study Group has not reached any agreement on whether a 100GE short reach single-mode objective should be pursued," says Anderson. “Discussions at this point are on reach, power consumption and the relative cost of possible solutions with respect to the (10km) 100GBASE-LR4."
Luxtera's 100 Gigabit silicon photonics chip
Luxtera has detailed a 4x28 Gigabit optical transceiver chip. The silicon photonics company is aiming the device at embedded applications such as system backplanes and high-performance computing (HPC). The chip is also being used by Molex for 100 Gigabit active optical cables. Molex bought Luxtera's active optical cable business in January 2011.

“Do I want to invest in a copper backplane for a single generation or do I switch over now to optics and have a future-proof three-generation chassis?”
Marek Tlalka, Luxtera
What has been done
To make the optical transceiver, a distributed-feedback (DFB) laser operating at 1490nm is coupled to the silicon photonics CMOS-based chip. Only one laser is required to serve the four individually modulated 28Gbps transmit channels, giving the chip a 112Gbps maximum data rate. There are also four receive channels, each using a germanium-based photo-detector that is grown on-chip.
The DFB is the same laser that Luxtera uses for its 4x10Gbps and 4x14Gbps designs. What has changed are the Mach-Zehnder waveguide-based modulators, which must now operate at 28Gbps, and the electronic amplifiers at the receivers. “The chip [at 5mmx6mm] is pretty much the same size as our 4x10 and 4x14 Gig designs,” says Marek Tlalka, director of marketing at Luxtera.
Source: Luxtera
Luxtera is announcing the 100 Gigabit chip which it is sampling to customers. Molex, for example, will package the chip and the laser to make its active optical cable products. Luxtera will package the transceiver chip and laser in a housing as an OptoPHY, a packaged product it already provides at lower speeds. The company will sell the 100Gbps OptoPHY for embedded applications such as system backplanes and HPC.
Applications
The 100GbE transceiver chip is targeted at next-generation backplane applications as well as active optical cables. And it is enterprise vendors that make switches, routers and blade servers that are considering adopting optical backplanes for their next-generation platforms, says Luxtera.
According to Tlalka, system vendors are moving their backplanes from 15Gbps to 28Gbps: “It is pretty obvious that building an electrical backplane at this data rate will be extremely challenging.”
When vendors design a new chassis, they want it to support three generations of line cards. Even if a system vendor develops a 28Gbps copper-based backplane, it will need to go optical when the backplane data rate increases to 40-50Gbps in 2-3 years’ time and 100Gbps when that speed transition occurs. “Do I want to invest in a copper backplane for a single generation or do I switch over now to optics and have a future-proof three-generation chassis?” says Tlalka.
Exascale computers - 1,000 times more powerful than existing supercomputers and planned for the second half of the decade - are another application area. Here there is a need for 25-28Gbps links between chips, says Tlalka.
System platforms and HPC are ideal candidates for the packaged transceiver chip but longer term Luxtera is eyeing the move of optics inside chips such as ASICs. Such system-on-chip optical integration could include Ethernet switch ICs (See example switch ICs from Broadcom and Intel (Fulcrum)) and network interface cards. Another example highlighted by Tlalka is CPU-memory interfaces.
However such applications are at least five years away and there are significant hurdles to be overcome. These include resolving the business model of such designs as well as the technical challenges of coupling the ASIC to the optics and the associated mechanical design.
Standards
Luxtera's 100Gbps transceiver chip supports a variety of standards.
Operating at 25Gbps per channel, the chip supports 100GbE and Enhanced Data Rate (EDR) Infiniband. The ability to go to 28Gbps per channel means that the transceiver can also support the OTN (optical transport network) standard as well as proprietary backplane protocols that add overhead to the basic 25Gbps data rate.
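A brief illustrative check of why the OTN rate pushes the per-channel speed from 25Gbps towards 28Gbps, assuming the standard 100 Gigabit Ethernet and OTU4 aggregate line rates spread over four lanes:

```python
# Why OTN transport needs ~28Gbps lanes where 100 Gigabit Ethernet needs ~25Gbps
# (illustrative, using the standard aggregate line rates over four lanes).

ethernet_lane_gbps = 103.125 / 4   # 100GbE with 64b/66b coding: ~25.78 Gbps per lane
otu4_lane_gbps = 111.81 / 4        # OTU4 line rate of ~111.81 Gbps: ~27.95 Gbps per lane
print(ethernet_lane_gbps, otu4_lane_gbps)
```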
In addition the chip supports the OIF's short reach and very short reach interfaces that define the interface between an ASIC and the optical module.
The chip is also suited for some of the IEEE Next Generation 100Gbps Optical Ethernet Study Group standards now in development. These interfaces will cover a reach of 30m to 2km.
400GbE and HDR Infiniband
Luxtera says that it is working on different channel 'flavours' of 100G. It is also following developments such as Infiniband Hexadecimal Data Rate (HDR) and 400GbE.
HDR will use 40Gbps channels while there is still an industry debate as to whether 400GbE will be implemented using ten channels, each at 40Gbps, or as a 16x25Gbps design.
Active optical cable: market drivers

CIR's report: key findings
The global market for active optical cable (AOC) is forecast to grow to US $1.5bn by 2014, with the linking of data centre equipment being the largest single market, valued at $835m. Other markets for the cabling technology include digital signage, PC interconnect and home theatre.
CIR's report, entitled Active Optical Cabling: A Technology Assessment and Market Forecast, notes how AOC emerged with a jolt. Two years on, the technology is now a permanent fixture that will continue to nimbly address applications as they appear. This explains why CIR views AOC as an opportunistic and tactical interconnect technology.

AOC: "Opportunistic and tactical"
Loring Wirbel
What is active optical cable?
An AOC converts an electrical interface to optical for transmission across a cable before being restored to the electrical domain. Optics are embedded as part of the cabling connectors with AOC vendors using proprietary designs. Being self-contained, AOCs have the opportunity to become a retail sale at electronics speciality stores.
A common interface for AOC is the QSFP but there are AOC products that use proprietary interfaces. Indeed the same interface need not be used at each end of the cable. Loring Wirbel, author of the CIR AOC report, mentions a MergeOptics’ design that uses a 12-channel CXP interface at one end and three 4-channel QSFP interfaces at the other. “If it gets traction, everyone will want to do it,” he says.
Origins
AOC products were launched by several vendors in 2007. Start-up Luxtera saw it as an ideal entry market for its silicon photonics technology; Finisar came out with a 10Gbps serial design; while Zarlink identified AOC as a primary market opportunity, says Wirbel.
Application markets
AOC is the latest technology targeting equipment interconnect in the data centre. Typical distances linking equipment range from 10 to 100m; 10m is where 10Gbps copper cabling starts to run out of steam while 100m and above are largely tackled by structured cabling.
“Once you get beyond 100 meters, the only AOC applications I see are outdoor signage and maybe a data centre connecting to satellite operations on a campus,” says Wirbel.
AOC is used to connect servers and storage equipment using either Infiniband or Ethernet. “Keep in mind it is not so much corporate data centres as huge dedicated data centre builds from a Google or a Facebook,” says Wirbel.
AOC’s merits include its extended reach and light weight compared to copper. Servers can require metal plates to support the sheer weight of copper cabling. The technology also competes with optical pluggable transceivers and here the battleground is cost, with active optical cabling including end transceivers and the cable all-in-one.
To date AOC is used for 10Gbps links and for double data rate (DDR) and quad data rate (QDR) Infiniband. But it is the evolution of Infiniband's roadmap - enhanced data rate (EDR, 20Gbps per lane) and hexadecimal data rate (HDR, 40Gbps per lane) - as well as the advent of 100m 40 and 100 Gigabit Ethernet links, with their four- and ten-channel designs, that will drive AOC demand.
The second largest market for AOC, about $450 million by 2014, and one that surprised Wirbel, is the ‘unassuming’ digital signage.
Until now, such signs displaying video have been well served by 1Gbps Ethernet links, but with screens now showing live high-definition feeds and four-way split screens, 10Gbps feeds are becoming the baseline. Moreover, distances of 100m to 1km are common.
PC interconnect is another market where AOC is set to play a role, especially with the inclusion of a high-definition multimedia interface (HDMI) port as standard on each netbook.
“A netbook has no local storage, using the cloud instead,” says Wirbel. Uploading video from a video camera to the server or connecting video streams to a home screen via HDMI will warrant AOC, says Wirbel.
Home theatre is the fourth emerging application for AOC though Wirbel stresses this will remain a niche application.

