ECOC 2015 Review - Final Part

The second and final part of the survey of developments at the ECOC 2015 show held recently in Valencia.  

Part 2 - Client-side component and module developments   

  • The first SWDM Alliance module shown
  • More companies detail CWDM4, CLR4 and PSM4 mid-reach modules
  • 400 Gig datacom technologies showcased
  • The CFP8 MSA for 400 Gigabit Ethernet unveiled

The CFP MSA modules including the newest CFP8. Source: Finisar

  • Lumentum and Kaiam use silicon photonics for mid-reach modules
  • Finisar demonstrates a 10 km 25 Gig SFP28, and low-latency 25 Gig and 100 Gig SR4 interfaces 

 

Shortwave wavelength-division multiplexing

Finisar demonstrated the first 100 gigabit shortwave wavelength-division multiplexing (SWDM) module at ECOC. Dubbed the SWDM4, the 100 gigabit interface supports WDM over multi-mode fibre. Finisar showed a 40 gigabit version at OFC earlier this year. “This product [the SWDM4] provides the next step in that upgrade path,” says Rafik Ward, vice president of marketing at Finisar. 

The SWDM Alliance was formed in September to exploit the large amount of multi-mode fibre used by enterprises. The goal of the SWDM Alliance is to extend the use of multi-mode fibre by enabling link speeds beyond 10 gigabit.

“We believe if you can do something with multi-mode fibre, you can achieve cost points that are not achievable with single-mode fibre,” says Ward. “SWDM4 allows us to have not only low-cost optics on either end, but allows customers to reuse their installed fibre.”

The SWDM4 interface uses four 25 gigabit VCSELs operating at wavelengths sufficiently apart that cooling is not required. “By having this [wavelength] gap, you can keep to relatively low-cost components for multiplexing and de-multiplexing,” says Ward.

The 100 Gig SWDM4 achieves 70 meters over OM3 fibre and 100 meters over OM4 fibre. SWDM can scale beyond 100 gigabit, says Ward, but the challenge with multi-mode fibre remains the tradeoff between speed and distance.

Finisar is already shipping SWDM4 alpha samples to customers.

The SWDM Alliance founding members include CommScope, Corning, Dell, Finisar, H3C, Huawei, Juniper Networks, Lumentum, and OFS.

 

CWDM4, CLR4 and PSM4

Oclaro detailed a 100 gigabit mid-reach QSFP28 module that supports both the CWDM4 multi-source agreement (MSA) and the CLR4 MSA. “We can support either depending on whether, on the host card, there is forward-error correction or not,” says Robert Blum, director of strategic marketing at Oclaro.

Both MSAs have a 2 km reach and use four 25 gigabit channels. However, the CWDM4 uses a more relaxed optical specification as its overall performance is complemented with forward-error correction (FEC) on the host card. The CLR4, in contrast, does not use FEC and therefore requires a more demanding optical specification.

“The requirements are significantly harder to meet for the CLR4 specification,” says Blum. By avoiding FEC, the CLR4 module benefits low-latency applications such as financial trading.
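The tradeoff can be illustrated with a toy forward-error correction scheme. Real 100 Gig links rely on Reed-Solomon codes on the host card, not the simple repetition code sketched below, and the channel model and error rates are illustrative only; but the principle is the same: coding gain mops up bit errors, so FEC-assisted optics can tolerate a worse raw error rate than a CLR4-style uncoded link.

```python
import random

def encode(bits, r=3):
    """Toy FEC: repeat each bit r times (real links use Reed-Solomon, not this)."""
    return [b for bit in bits for b in [bit] * r]

def decode(coded, r=3):
    """Majority vote over each group of r received bits."""
    return [int(sum(coded[i:i + r]) > r // 2) for i in range(0, len(coded), r)]

def noisy(bits, ber, rng):
    """Flip each bit independently with probability ber (crude channel model)."""
    return [b ^ (rng.random() < ber) for b in bits]

rng = random.Random(1)
data = [rng.randint(0, 1) for _ in range(100_000)]

raw = noisy(data, ber=1e-3, rng=rng)                  # uncoded link, as in CLR4
fec = decode(noisy(encode(data), ber=1e-3, rng=rng))  # coded link, same raw channel

raw_errs = sum(a != b for a, b in zip(data, raw))
fec_errs = sum(a != b for a, b in zip(data, fec))
print(raw_errs, fec_errs)  # the coded link leaves far fewer residual errors
```

The price of that gain is the extra latency of encoding and decoding, which is what the no-FEC CLR4 option avoids.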

Oclaro showed its dual-MSA module achieving a 10 km reach at ECOC even though the two specifications call for 2 km only. “We have very large margins for the module compared to the specification,” says Blum, adding that customers now need only qualify one module to meet their CWDM4 or CLR4 line card needs.

Other optical module vendors that announced support for CWDM4 in a QSFP28 module include Source Photonics, whose module is also CLR4-compliant. Kaiam is making CWDM4 and CLR4 modules using silicon photonics as part of its designs.

Lumentum also detailed its CWDM4 and the PSM4, a QSFP28 that uses a single-mode ribbon cable to deliver 100 Gig over 500 meters. Lumentum says its CWDM4 and PSM4 QSFP28 products will be available this quarter. “These 100 gigabit modules are what the hyper-scale data centre operators are clamouring for,” says Brandon Collings, CTO of Lumentum.

 

The question is who can ramp and support the 100 Gig deployments that are going to happen next year

 

Lumentum says it is using silicon photonics technology for one of its designs but has not said which. “We have both technologies [indium phosphide and silicon photonics], we use both technologies, and silicon photonics is involved with one of these [modules],” says Collings.

There is demand for both the PSM4 and CWDM4, says Lumentum. Which type a particular data centre operator chooses depends on such factors as what fibre they have or plan to deploy, whether they favour single-mode fibre pairs or ribbon cable, and if their reach requirements are beyond 500 meters.
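One way to see why the choice is fibre-driven is a back-of-envelope cost model. All the dollar figures below are hypothetical placeholders, not vendor pricing; the point is only that PSM4’s eight-fibre ribbon makes its cabling cost grow faster with distance than CWDM4’s duplex pair, so the cheaper-optics PSM4 tends to win on short links and CWDM4 on long ones.

```python
# Toy link-cost model (all dollar figures are made-up placeholders, not market data).
# PSM4 uses an 8-fibre ribbon, so its cabling cost grows faster with distance;
# CWDM4 pays more for the module (WDM mux/demux) but runs over a duplex pair.
PSM4_MODULE, PSM4_FIBRES = 300.0, 8
CWDM4_MODULE, CWDM4_FIBRES = 450.0, 2
COST_PER_FIBRE_METRE = 0.15  # hypothetical installed cost

def link_cost(module, fibres, metres):
    """Two modules per link plus fibre cost proportional to length and count."""
    return 2 * module + fibres * metres * COST_PER_FIBRE_METRE

for metres in (50, 100, 200, 350, 500):
    psm4 = link_cost(PSM4_MODULE, PSM4_FIBRES, metres)
    cwdm4 = link_cost(CWDM4_MODULE, CWDM4_FIBRES, metres)
    winner = "PSM4" if psm4 < cwdm4 else "CWDM4"
    print(f"{metres:>4} m: PSM4 ${psm4:.0f}  CWDM4 ${cwdm4:.0f}  -> {winner}")
```

With these made-up numbers the crossover sits around 330 m; real deployments weigh installed fibre, conduit space and reach margins as well.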

Quite a few module companies have already sampled [100 Gig] products, says Oclaro’s Blum: “The question is who can ramp and support the 100 Gig deployments that are going to happen next year.”

 

Technologies for 400 gigabit

Several companies demonstrated technologies that will be needed for 400 gigabit client-side interfaces.

NeoPhotonics and chip company Inphi partnered to demonstrate the use of PAM-4 modulation to achieve 100 gigabit. “To do PAM-4, you need not only the optics but a special PAM-4 DSP,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.

The 400 Gigabit Ethernet standard under development by the IEEE 802.3bs supports several configurations using PAM-4, including a four-channel parallel single-mode fibre option with 100 gigabit per channel and a 500 m reach, and two 8 x 50 gigabit options for 2 km and 10 km links.

The company showcased its 4x28 Gig transmitter optical sub-assembly (TOSA) that uses a photonic integrated circuit comprising electro-absorptive modulated lasers (EMLs). Paired with Inphi’s PAM-4 chip, two channels were combined to achieve 100 gigabit. NeoPhotonics says its EMLs are also capable of supporting 56 gigabaud rates which, coupled with PAM-4, would achieve 100 gigabit single channels. 

Lipscomb points out that not only are there several interfaces under development but also various optical form factors. “For 100 Gig and 400 Gig client-side data centre links, there are several competing MSA groups,” says Lipscomb. “The final winning approach has not yet emerged and NeoPhotonics wants its solution to be generic enough so that it supports this winning approach once it emerges.” 

Meanwhile, Teraxion announced its silicon photonics-based modulator technology for 100 gigabit (4 x 25 Gig) and 400 gigabit datacom interfaces. “People we talk to are interested in WDM applications for short-reach links,” says Martin Guy, Teraxion’s CTO and strategic marketing.

Teraxion says a challenge using silicon photonics for WDM is supporting a broad band of wavelengths. “People use surface gratings to couple light into the silicon photonics,” says Guy. “But surface gratings have a strong wavelength-dependency over the C-band.”

Teraxion has instead developed an edge coupler, which sits in the same plane as the propagating light. This compares to a surface grating, where light is coupled vertically to the plane.

 

You hear a lot about the cost of silicon photonics but one of the key advantages is the density you can achieve on the chip itself. Having many modulators in a very small footprint has value for the platform; you can make smaller and smaller transceivers. 

 

“We can couple light efficiently with large-tolerance alignment and our approach can be used for WDM applications,” says Guy. Teraxion’s modulator array can be used for CWDM4 and CLR4 MSAs as well as optical engines for future 400 gigabit datacom systems. 

“You hear a lot about the cost of silicon photonics but one of the key advantages is the density you can achieve on the chip itself,” says Guy. “Having many modulators in a very small footprint has value for the platform; you can make smaller and smaller transceivers.” 

 

CFP8 MSA

Finisar demonstrated a 400 gigabit link that included a mock-up of the CFP8 form factor, the latest CFP MSA member being developed to support emerging standards such as 400 Gigabit Ethernet.

The 400 gigabit demonstration implemented the 400GE-SR16 multi-mode standard. A Xilinx FPGA was used to implement an Ethernet MAC and generate sixteen 25 Gig channels that were fed to four CFP4 modules, each implementing 100GBASE-SR4 but collectively acting as the equivalent of 400GE-SR16. The 16 fibre outputs were then fed to the CFP8 prototype, which performed an optical loop-back function, sending the signals back to the CFP4s and FPGA.

 

The CFP8 will be able to support 6.4 terabit of switching on a 1U card when used in a 2-row by 8-module configuration. The CFP8 has a similar size and power consumption profile to the CFP2. “There is still a lot of work putting an MSA together for 400 gigabit,” says Ward, adding that there is still no timeframe as to when the CFP8 MSA will be completed.

 

25 Gig SFP28

Finisar also announced at ECOC a 1310nm SFP28 supporting 25 gigabit Ethernet over 10 km, complementing the 850nm SFP28 short reach module it announced at OFC 2015.

Ethernet vendors are designing their next-generation switches to use the SFP28, says Finisar, while the IEEE is completing the standardisation of 25 Gigabit Ethernet over copper and multi-mode fibre.

“There hasn’t yet been a motion to standardise a long-wave interface,” says Ward. “With the demo at ECOC, we have come out with a 25 Gig long-wave interface in advance of a standard.”       

Ward points out that several years ago the large-scale data centres only had 40 gigabit as a higher-speed option beyond 10 gigabit. Now enterprises will also have a 25 gigabit option.

Ward adds that 25 gigabit delivers an attractive cost-performance compared to 40 Gig. Forty gigabit short-reach and long-reach interfaces are based on four channels at 10 gigabit, whereas 25 gigabit uses one laser and one photo-detector that fit in an SFP28. This compares to a QSFP for 40 Gig.

“25 Gigabit Ethernet is a very interesting interface for the next set of customers after the Web 2.0 players that are looking to migrate beyond 10 gigabit,” said Ward.     

 

Low-latency 25 Gig SR and 100 Gig Ethernet SR4 modules

Also announced by Finisar are 25 Gigabit Ethernet SFP28 SR and 100GE QSFP28 SR4 transceivers that can operate without accompanying FEC on the host board. The transceivers achieve a 30 meter reach on OM3 fibre and 40 meters using OM4 fibre.

“Using FEC simplifies the optical link,” says Ward. “It can take the cost out of the optics by having FEC which gives you additional gain.”  But some customers have requested the parts for use without FEC to reduce link latency, similar to those that choose the CLR4 MSA for mid-reach 100 Gig.

Finisar has not redesigned its modules but is offering modules that use its higher-performing VCSELs and photo-detectors. “Think of it as a simple screen,” says Ward.

 

Click here for the ECOC 2015 Review - Part 1.  


ECOC '15 Reflections: Part 2

Part 2: More industry executives share the trends and highlights they noted at the recent European Conference on Optical Communication (ECOC) event, held in Valencia. 

 

Martin Zirngibl, head of network enabling components and technologies at Bell Labs. 

Silicon photonics seems to be gaining traction, but traditional component suppliers are still betting on indium phosphide.

There are many new start-ups in silicon photonics, most seem to be going after the 100 gigabit QSFP28 market. However, silicon photonics still needs a ubiquitous high-volume application for the foundry model to be sustainable.

There is a battle between 4x25 Gig CWDM and 100 Gig PAM-4 at 56 gigabaud, with most people believing that, for 400 Gig, PAM-4 or discrete multi-tone at 100 Gig per lambda will win.

 

Will coherent make it into black and white applications - up to 80 km - or is there a role for a low-cost wavelength-division multiplexing (WDM) system with direct detection?

 

One highlight at ECOC was the 3D integrated 100 Gig silicon photonics by Kaiam.

In coherent, the analogue coherent optics (ACO) model seems to be winning over the digital coherent one, and people are now talking about 400 Gig single carrier for metro and data centre interconnect applications.

As for what I’ll track in the coming year: will coherent make it into black and white applications - up to 80 km - or is there a role for a low-cost wavelength-division multiplexing (WDM) system with direct detection?

 

Yukiharu Fuse, director, marketing department at Fujitsu Optical Components

There were no real surprises as such at ECOC this year. The products and demonstrations on show were within expectations but perhaps were more realistic than last year’s show.

Most of the optical component suppliers demonstrated products to meet data centres’ increasing demand for optical interfaces.

The CFP2 Analogue Coherent Optics (CFP2-ACO) form factor’s ability to support multiple modulation formats configurable by the user makes it a popular choice for data centre interconnect applications. In particular, by supporting 16-QAM, the CFP2-ACO can double the link capacity using the same optics.

 

Lithium niobate and indium-phosphide modulators will continue to be needed for coherent optical transmission for years to come

 

Recent developments in indium phosphide designs have helped realise the compact packaging needed to fit within the CFP2 form factor.

I saw the level of integration and optical engine configurations within the CFP2-ACO differ from vendor to vendor. I’m interested to see which approach ends up being the most economical once volume production starts.

Oclaro introduced a high-bandwidth lithium niobate modulator for single wavelength 400 gigabit optical transmission. Lithium niobate continues to play an important role in enabling future higher baud rate applications with its excellent bandwidth performance. My belief is that both lithium niobate and indium-phosphide modulators will continue to be needed for coherent optical transmission for years to come.

 

Chris Cole, senior director, transceiver engineering at Finisar

ECOC technical sessions and exhibition used to be dominated by telecom and long haul transport technology. There is a shift to a much greater percentage focused on datacom and data centre technology.

 

What I learned at the show is that cost pressures are increasing

 

There were no major surprises at the show. It was interesting to see about half of the exhibition floor occupied by Chinese optics suppliers funded by Chinese government entities, such as municipalities looking to jump-start industrial development.

What I learned at the show is that cost pressures are increasing.

New datacom optics technologies including optical packaging, thermal management, indium phosphide and silicon integration are all on the agenda to track in the coming year.

 


Choosing paths to future Gigabit Ethernet speeds

Industry discussions are being planned in the coming months to determine how Ethernet standards can be accelerated to better serve industry needs, including how existing work can be used to speed up the creation of new Ethernet speeds.

 

The y-axis shows the number of lanes while the x-axis is the speed per lane. Each red dot shows the Ethernet rate at which the signalling (optical or electrical) was introduced. One challenge that John D'Ambrosia highlights is handling overlapping speeds. "What do we do about 100 Gig based on 4x25, 2x50 and 1x100 and ensure interoperability, and do that for every multiple where you have a crossover?" Source: Dell

One catalyst for these discussions has been the progress made in the emerging 400 Gigabit Ethernet (GbE) standard which is now at the first specification draft stage.

“If you look at what is happening at 400 Gig, the decisions that were made there do have potential repercussions for new speeds as well as new signalling rates and technologies,” says John D’Ambrosia, chairman of the Ethernet Alliance.

Before the IEEE P802.3bs 400 Gigabit Ethernet Task Force met in July, two electrical signalling schemes had already been chosen for the emerging standard: 16 channels of 25 gigabit non-return-to-zero (NRZ) and eight lanes of 50 gigabit using PAM-4 signalling. 

For the different reaches, three of the four optical interfaces had also been chosen, with the July meeting resolving the fourth - 2 km - interface. The final optical interfaces for the four different reaches are shown in the table.

 

 

The adoption of 50 gigabit electrical and optical interfaces at the July meeting has led some industry players to call for a new 50 gigabit Ethernet family to be created, says D’Ambrosia. 

Certain players favour a 50 GbE standard that also includes a four-lane 200 GbE version, just as 100 GbE uses 4 x 25 Gig channels, while others want 50 GbE to be broader, with one-, two-, four- and eight-lane variants to deliver 50, 100, 200 and 400 GbE rates.

 

If you look at what is happening at 400 Gig, the decisions that were made there do have potential repercussions for new speeds as well as new signalling rates and technologies

 

The 400 GbE standard’s adoption of 100 GbE channels that use PAM-4 signalling has also raised questions as to whether 100 GbE PAM-4 should be added to the existing 100 GbE standard or a new 100 GbE activity be initiated.

“Those decisions have snowballed into a lot of activity and a lot of discussion,” says D’Ambrosia, who is organising an activity to address these issues and to determine where the industry consensus is as to how to proceed. 

“These are all industry debates that are going to happen over the next few months,” he says, with the goal being to better meet industry needs by evolving Ethernet more quickly.

Ethernet continues to change, notes D’Ambrosia. The 40 GbE standard exploited the investment made in 10 gigabit signalling, and the same is happening with 25 gigabit signalling and 100 gigabit. 

 

If you buy into the idea of more lanes based around a single signalling speed, then applying that to the next signalling speed at 100 Gigabit Ethernet, does that mean the next speed will be 800 Gigabit Ethernet? 

 

With 50 Gig electrical signalling now starting as part of the 400 GbE work, some industry voices wonder whether, instead of developing one Ethernet family around a rate, it is not better to develop a family of rates around the signalling speed, such as is being proposed with 50 Gig and the use of 1, 2, 4 and 8 lane configurations.

“If you buy into the idea of more lanes based around a single signalling speed, then applying that to the next signalling speed at 100 Gigabit Ethernet, does that mean the next speed will be 800 Gigabit Ethernet?” says D’Ambrosia.
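The overlaps D’Ambrosia describes are easy to enumerate. The sketch below builds rate families from the one-, two-, four- and eight-lane configurations discussed above and flags every rate reachable by more than one lane combination; the lane speeds listed are those under discussion, with 100 Gig lanes still emerging.

```python
from collections import defaultdict

# Enumerate the Ethernet rates implied by building families around a per-lane
# signalling speed, using the 1/2/4/8-lane configurations discussed above.
lane_speeds = [10, 25, 50, 100]  # Gb/s per lane
lane_counts = [1, 2, 4, 8]

by_rate = defaultdict(list)
for speed in lane_speeds:
    for lanes in lane_counts:
        by_rate[lanes * speed].append(f"{lanes}x{speed}G")

for rate in sorted(by_rate):
    flag = "  <- overlap" if len(by_rate[rate]) > 1 else ""
    print(f"{rate:>4} GbE: {', '.join(by_rate[rate])}{flag}")
```

Running this shows 100 GbE reachable as 4x25, 2x50 or 1x100, exactly the interoperability crossover flagged in the chart caption, and an eight-lane 100 Gig family landing at 800 GbE.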

The 400 GbE Task Force is having its latest meeting this week. A key goal is to get the first draft of the standard - Version 1.0 - approved, “to make sure all the baselines have been interpreted correctly,” says D’Ambrosia. What then follows is filling in the detail, turning the draft into a technically complete document. 

 

Further reading:

LightCounting: 25GbE almost done but more new Ethernet options are coming, click here


Silicon photonics: "The excitement has gone"

The opinion of industry analysts regarding silicon photonics is mixed at best. More silicon photonics products are shipping but challenges remain.

 

Part 1: An analyst perspective

"The excitement has gone,” says Vladimir Kozlov, CEO of LightCounting Market Research. “Now it is the long hard work to deliver products.” 

Dale Murray, LightCounting

However, he is less concerned about recent setbacks and slippages for companies such as Intel that are developing silicon photonics products. This is to be expected, he says, as happens with all emerging technologies.

Mark Lutkowitz, principal at consultancy fibeReality, is more circumspect. “As a general rule, the more that reality sets in, the less impressive silicon photonics gets to be,” he says. “The physics is just hard; light is not naturally inclined to work on the silicon the way electronics does.”

LightCounting, which tracks optical components and modules, says silicon photonics product shipments in volume are happening. The market research firm cites Cisco’s CPAK transceivers, and 40 gigabit PSM4 modules shipping in excess of 100,000 units as examples. Six companies now offer 40 gigabit PSM4 products with Luxtera, a silicon photonics player, having a healthy start on the other five.

 

Indium phosphide and other technologies will not step back and give silicon photonics a free ride

 

LightCounting also cites Acacia with its silicon photonics-based low-power 100 and 400 gigabit coherent modules. “At OFC, Acacia made a fairly compelling case, but how much of its modules’ optical performance is down to silicon photonics and how much is down to its advanced coherent DSP chip is unclear,” says Dale Murray, principal analyst at LightCounting. Silicon photonics has not shown itself to be the overwhelming solution for metro/regional and long-haul networks to date but that could change, he says.

Another trend LightCounting notes is how PAM-4 modulation is becoming adopted within standards. PAM-4 modulates two bits of data per symbol and has been adopted for the emerging 400 Gigabit Ethernet standard. Silicon photonics modulators work really well with PAM-4 and getting it into standards benefits the technology, says LightCounting. “All standards were developed around indium phosphide and gallium arsenide technologies until now,” says Kozlov.
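As a quick illustration of how PAM-4 packs two bits per symbol, the sketch below Gray-maps bit pairs onto four amplitude levels; actual level coding and any precoding are specification details and vary.

```python
# PAM-4 carries two bits per symbol using four amplitude levels, so the
# symbol (baud) rate is half the bit rate. Gray mapping shown here; the
# exact level coding varies by specification.
GRAY_MAP = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def pam4_symbols(bits):
    """Map an even-length bit stream onto PAM-4 levels 0..3."""
    assert len(bits) % 2 == 0
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
print(pam4_symbols(bits))  # [0, 1, 2, 3] -> 8 bits carried in 4 symbols
```

Because each symbol carries two bits, a lane signalling at 28 gigabaud carries 56 Gb/s raw, which is why PAM-4 lets 25 gigabaud-class components reach 50 Gig lane rates.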

 

You would be hard pressed to find a lot of OEMs or systems integrators that talk about silicon photonics and what impact it is going to have 

 

Silicon photonics has been tainted by the amount of hype it has received in recent years, says Murray, especially the claim that optical products made in a CMOS fabrication plant will be significantly cheaper than traditional III-V-based optical components. 

First, Murray highlights that no CMOS production line can make photonic devices without adaptation. “And how many wafer starts are there for the whole industry? How much does a [CMOS] wafer cost?” he says. 

“You would be hard pressed to find a lot of OEMs or systems integrators that talk about silicon photonics and what impact it is going to have,” says Lutkowitz. “To me, that has always said everything.”  

Mark Lutkowitz, fibeReality

LightCounting highlights heterogeneous integration as one promising avenue for silicon photonics. Heterogeneous integration involves bonding III-V and silicon wafers before processing the two.

This hybrid approach uses the III-V materials for the active components while benefitting from silicon’s larger (300 mm) wafer sizes and advanced manufacturing techniques.

Such an approach avoids the need to attach and align an external discrete laser. “If that can be integrated into a WDM design, then you have got the potential to realise the dream of silicon photonics,” says Murray. “But it’s not quite there yet.”

 

This poses a real challenge for silicon photonics: it will only achieve low cost if there are sufficient volumes, but without such volumes it will not achieve a cost differential

 

Murray says over 30 vendors now make modules at 40 gigabit and above: “There are numerous module types and more are being added all the time.” Then there is silicon photonics which has its own product pie split. This poses a real challenge for silicon photonics: it will only achieve low cost if there are sufficient volumes, but without such volumes it will not achieve a cost differential.

“Indium phosphide and other technologies will not step back and give silicon photonics a free ride, and are going to fight it,” says Kozlov. Nor is it just VCSELs that are made in high volumes.

LightCounting expects over 100 million indium phosphide transceivers to ship this year. Many of these transceivers use distributed feedback (DFB) lasers and many are at 10 gigabit and are inexpensive, says Kozlov. 

For FTTx and GPON, bi-directional optical subassemblies (BOSAs) now cost $9, he says: “How much lower cost can you get?”  


IBM demos a 100 Gigabit silicon photonics transceiver

IBM has demonstrated a 100 gigabit transceiver using silicon photonics technology, its most complex design unveiled to date. The 100 gigabit design is not a product but a technology demonstrator, and IBM says it will not offer branded transceivers to the marketplace.

“It is a demonstration vehicle illustrating the complex design capabilities of the technology and the functionality of the optical and electrical components,” says Will Green, manager of IBM’s silicon integrated nano-photonics group. 

Will Green

IBM has been developing silicon photonics technology for over a decade, progressing from building-block optical functions based on silicon to its current monolithic system-on-chip technology that includes design tools, testing and packaging technologies.

Now this technology is nearing commercialisation.   

“We do plan to have the technology available for use within IBM’s systems but also within the larger market; large-volume applications such as the data centre and hyper-scale data centres in particular,” says Green. 

IBM is already working with companies developing their own optical component designs using its technology and design tools. “These are tools that circuit designers are familiar with, such that they do not need to have an in-depth knowledge of photonics in order to build, for example, an optical transceiver,” says Green.  

 

We do plan to have the technology available for use within IBM’s systems but also within the larger market

 

100 gig demonstrator

IBM refers to its silicon photonics technology as CMOS-integrated nano-photonics. CMOS-integrated refers to the technology’s monolithic nature that combines CMOS electronics with photonics on one substrate. Nano-photonics highlights the dimensions of the feature sizes used.   

IBM is rare among the silicon photonics community in combining electronics and photonics on one chip; other players implement photonics and electronics on separate dies before combining the two. What is not included is the laser, which is attached externally using fibre.

The platform supports 25 gigabit speeds as well as wavelength division multiplexing. Originally, IBM started with 90 nm CMOS using bulk silicon before transferring to a silicon-on-insulator (SOI) substrate. An SOI wafer is ideal for creating optical waveguides that confine light using the large refractive index difference between silicon and silicon dioxide. However, to make the electrical devices run at 25 gigabit, the resulting transistor gate length ended up being closer to a 65 nm CMOS process.   
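A rough calculation shows why the silicon-silica index step matters for confinement. The refractive indices below are approximate textbook values near 1310 nm, not figures from IBM.

```python
# Back-of-envelope: why an SOI substrate confines light so tightly.
# Refractive indices are approximate textbook values near 1310 nm.
n_si, n_sio2 = 3.50, 1.45

delta_n = n_si - n_sio2
# Standard relative index contrast parameter for a waveguide:
relative_contrast = (n_si**2 - n_sio2**2) / (2 * n_si**2)

print(f"index step     : {delta_n:.2f}")
print(f"contrast Delta : {relative_contrast:.2f}")
# A conventional silica telecom fibre has a Delta of roughly 0.003; the
# contrast here is two orders of magnitude larger, which is what permits
# sub-micron waveguide cross-sections and tight bends.
```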

 

Source: IBM Corporation.

 

IBM's optical waveguides are sub-micron, having dimensions of a few hundred nanometers. This is the middle ground, says Green, trading off the density of smaller-dimensioned waveguides with larger, micron-plus ones that deliver low propagation loss.   

Also used are sub-wavelength optical 'metamaterial' structures that transition between the refractive index of the fibre and that of the optical waveguide to achieve a good match between the two. “These very tiny sub-wavelength structures are made using lithography near the limits of what is available,” says Green. “We are engineering the optical properties of the waveguide in order to achieve a low insertion loss when bringing the fibre onto the chip.” The single-mode fibre is attached to the chip using passive alignment.

The 100 gigabit transceiver demonstrator uses four 25 gigabit coarse wavelengths around 1310 nm.  The technology is suited to implement the CWDM4 MSA.

 

The whole technology is available to be commercialised by any chip manufacturer

 

“We are working with four wavelengths today but in the same way as telecom uses many wavelengths, we can follow a similar path,” says Green.

The chip design features transmitter electronics - a series of amplifiers that boost the voltage to drive the Mach-Zehnder interferometer modulators - and a multiplexer to combine the four wavelengths onto the fibre, while the receiver circuitry includes a demultiplexer, four photo-detectors, trans-impedance amplifiers and limiting amplifiers, says Green. What is lacking to make the 100 gigabit transceiver functional is a micro-controller, feedback loops to control the temperature of key circuits, and the circuitry to interface to standard electrical input/output.  

Green highlights how the bill of materials of a chip is only a fraction of the total cost since assembly and testing must also be included.  

“We reduce the cost of assembly through automated passive optical alignment and the introduction of custom structures onto the wafer,” he says. “We believe we can make an impact on the cost structure of the optical transceiver and where this technology needs to be to access the data centre.” IBM has also developed a way to test the transceiver chips at the wafer level. 

Green admits that IBM’s CMOS-integrated nano-photonics process will not scale beyond 25 gigabit as the 90-65 nm CMOS is not able to implement faster serial rates. But IBM has already shown an optical implementation of the PAM-4 modulation scheme that doubles a link's rate to 50 gigabit.

Meanwhile, IBM’s process design kit (PDK) is already with customers. A PDK includes documents and data files that describe the fabrication process and enable a user to complete a design. A PDK includes a fab’s process parameters, mask layout instructions, and the library of silicon photonics components; grating couplers, waveguides, modulators and the like [1].  

“They [customers] have used the design kit provided by IBM but have built their own designs,” says Green. “And now they are testing hardware.”

IBM is keen that its silicon photonics technology will be licensed and used by circuit design houses. "Houses that bring their own IP [intellectual property], use the enablement tools and manufacture at a site that is licensing the technology from IBM,” says Green. "The whole technology is available to be commercialised by any chip manufacturer.”

 

Reference

[1] Silicon Photonics Design: From Devices to Systems, Lukas Chrostowski and Michael Hochberg, Cambridge University Press, 2015. Click here


OFC 2015 digest: Part 2

The second part of the survey of developments at the OFC 2015 show held recently in Los Angeles.   
 
Part 2: Client-side component and module developments   
  • CFP4- and QSFP28-based 100GBASE-LR4 announced
  • First mid-reach optics in the QSFP28
  • SFP extended to 28 Gigabit
  • 400 Gig precursors using DMT and PAM-4 modulations 
  • VCSEL roadmap promises higher speeds and greater reach   
First CFP4 100GBASE-LR4s 
 
Several companies including Avago Technologies, JDSU, NeoPhotonics and Oclaro announced the first 100GBASE-LR4 products in the smaller CFP4 optical module form factor. Until now the 100GBASE-LR4 has been available in a CFP2 form factor.  
 
“Going from a CFP2 to a CFP4 results in a little over a 2x increase in density,” says Brandon Collings, CTO for communications and commercial optical products at JDSU. The CFP4 also has a lower maximum power specification of 6W compared to the CFP2’s 12W.  
 
The 100GBASE-LR4 standard spans 10 km over single mode fibre. The -LR4 is used mainly as a telecom interface to connect to WDM or packet-optical transport platforms, even when used in the data centre. Data centre switches already favour the smaller QSFP28 rather than the CFP4.  
 
Other 100 Gigabit standards include the 100GBASE-SR4, with a 100 meter reach over OM3 multi-mode fibre and up to 150 m over OM4 fibre. Avago points out that the -SR4 is typically used between a data centre’s top-of-rack and core switches whereas the -LR4 is used within the core network and for links between buildings. The -LR4 modules can also support Optical Transport Network (OTN).      
 
But in the data centre there is a mid-reach requirement. “People are looking at new standards to accommodate more of the mega data centre distances of 500 m or 2 km,” says Robert Blum, Oclaro’s director of strategic marketing.  These mid-reach standards over single mode fibre include the 500 meter PSM4 and the 2 km CWDM4 and modules supporting these standards are starting to appear. “But today, on single mode, there is basically the -LR4 that gets you to 10 km,” says Blum.  
 
JDSU also views the -LR4 as an interim technology in the data centre that will fade once more optimised PSM4 and CWDM4 optics appear.  
 
 
QSFP28 portfolio grows 
 
The 100GBASE-LR4 was also shown in the smaller QSFP28 form factor, as part of a range of new interface offerings in the form factor.  The QSFP28 offers a near 2x increase in face plate density compared to the CFP4.  
 
JDSU announced three 100 Gigabit QSFP28-based interfaces at OFC - the PSM4 and CWDM4 MSAs and the 100GBASE-LR4 - while Finisar announced QSFP28 versions of the CWDM4, the 100GBASE-LR4 and the 100GBASE-SR4. Meanwhile, Avago has samples of a QSFP28 100GBASE-SR4. JDSU’s QSFP28 -LR4 uses the same optics as its CFP4 -LR4 product.  
 
The PSM4 MSA uses single mode ribbon cable - four fibres in each direction - to deliver the 500 m reach, while the CWDM4 MSA uses a single fibre in each direction to carry the four wavelengths. The -LR4 standard uses tightly spaced wavelengths such that the lasers need to be cooled and temperature-controlled. The CWDM4, in contrast, uses a wider wavelength spacing and can use uncooled lasers, saving on power.   
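
The 100 Gig single-mode options discussed above can be summarised as a small table. The figures below are taken from the article; the table itself is an illustrative sketch, not a normative summary of the MSAs.

```python
# Rough comparison of the 100 Gig single-mode module options in the article.
OPTIONS = {
    "100GBASE-LR4": dict(lanes=4, gbps=25, fibres_per_dir=1, reach_m=10_000, cooled=True),
    "CWDM4":        dict(lanes=4, gbps=25, fibres_per_dir=1, reach_m=2_000,  cooled=False),
    "PSM4":         dict(lanes=4, gbps=25, fibres_per_dir=4, reach_m=500,    cooled=False),
}

def aggregate_gbps(opt):
    """Total line rate: number of lanes times per-lane rate."""
    return opt["lanes"] * opt["gbps"]

for name, opt in OPTIONS.items():
    print(f"{name}: {aggregate_gbps(opt)} Gbps, "
          f"{opt['fibres_per_dir']} fibre(s) each way, {opt['reach_m']} m reach")
```

All three reach the same 100 Gbps aggregate; they differ only in fibre count, reach and whether the lasers must be cooled.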
 
"100 Gig-per-laser, that is very economically advantageous" - Brian Welch, Luxtera

  
Luxtera announced the immediate availability of its PSM4 QSFP28 transceiver while the company is also offering its PSM4 silicon chipset for packaging partners that want to make their own modules or interfaces. Luxtera is a member of the newly formed Consortium for On-Board Optics (COBO).
 
Luxtera’s original active optical cable products were effectively 40 Gigabit PSM4 products although no such MSA was defined. The company’s original design also operated at 1490nm  whereas the PSM4 is at 1310nm.  
 
“The PSM4 is a relatively new type of product, focused on hyper-scale data centres - Microsoft, Amazon, Google and the like - with reaches regularly to 500 m and beyond,” says Brian Welch, director of product marketing at Luxtera. The company’s PSM4 offers an extended reach to 2 km, far beyond the PSM4 MSA’s specification. The company says there is also industry interest for PSM4 links over shorter reaches, up to 30 m. 
 
Luxtera’s PSM4 design uses one laser for all four lanes. “In a 100 Gig part, we get 100 Gig-per-laser,” says Welch. “WDM gets 25 Gig-per-laser, multi-mode gets 25 Gig-per-laser; 100 Gig-per-laser, that is very economically advantageous.”    
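
Welch's "Gig-per-laser" arithmetic can be sketched directly. The laser counts follow from the article (one shared laser in Luxtera's PSM4, one laser per 25 Gbps lane in WDM and multi-mode designs); the function itself is just an illustration.

```python
# Gigabits carried per laser for different 100 Gig architectures.
def gbps_per_laser(total_gbps, lasers):
    return total_gbps / lasers

wdm       = gbps_per_laser(100, 4)  # four wavelengths, four lasers
multimode = gbps_per_laser(100, 4)  # four VCSELs, one per lane
psm4      = gbps_per_laser(100, 1)  # one laser shared across all four lanes

print(wdm, multimode, psm4)  # 25.0 25.0 100.0
```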
 
 
QSFP28 ‘breakout’ mode 
 
Avago, Finisar and Oclaro all demonstrated 100 Gigabit QSFP28 modules in ‘breakout’ mode, whereby the module’s output fibres fan out and interface to separate, lower-speed SFP28 optical modules.  
 
“The SFP+ is the most ubiquitous and standard form factor deployed in the industry,” says Rafik Ward, vice president of marketing at Finisar. “The SFP28 leverages this architecture, bringing it up to 28 Gigabit.”  
 
Applications using the breakout arrangement include the emerging Fibre Channel standards: the QSFP28 can support the 128 Gig Fibre Channel standard, where 32 Gig Fibre Channel traffic is sent to individual transceivers. Avago demonstrated such an arrangement at OFC and said its QSFP28 product will be available before the year end.  
 
Similarly, the QSFP28-to-SFP28 breakout mode will enable the splitting of 100 Gigabit Ethernet (GbE) into IEEE 25 Gigabit Ethernet lanes once the standard is completed. 
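
The breakout arrangement amounts to mapping the four 25 Gbps lanes of one QSFP28 onto four independent SFP28 ports. A minimal sketch (the port names are illustrative, not from any standard):

```python
# QSFP28-to-SFP28 breakout: each of the four lanes becomes its own port.
def breakout(qsfp28_lanes=4, lane_gbps=25):
    """Return one (port, rate_gbps) entry per SFP28 in the fan-out."""
    return [(f"sfp28-{i}", lane_gbps) for i in range(qsfp28_lanes)]

ports = breakout()
print(ports)  # four ports at 25 Gbps each; the 100 Gbps aggregate is unchanged
```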
 
Oclaro showed a 100 Gig QSFP28 using a 4x28G LISEL (lens-integrated surface-emitting DFB laser) array with one channel connected to an SFP28 over a 2 km link. Oclaro inherited the LISEL technology when it merged with Opnext in 2012.  
 
Finisar demonstrated its 100GBASE-SR4 QSFP28 connected to four SFP28s over 100 m of OM4 multi-mode fibre.

Oclaro also showed an SFP28 for long reach that spans 10 km over single mode fibre. In addition to Fibre Channel and Ethernet, Oclaro also highlights wireless fronthaul carrying CPRI traffic, although such data rates are not expected for several years yet. Oclaro’s SFP28 will be in full production in the first quarter of 2016. Oclaro says it will also use the LISEL technology for its PSM4 design.   
 
 
Industry prepares for 400GbE with DMT and PAM-4
  
JDSU demonstrated a 4 x 100 Gig design, described as a precursor for 400 Gigabit technology. The IEEE is still working to define the different versions of the 400 Gigabit Ethernet standard. The JDSU optical hardware design multiplexes four 100 Gig wavelengths onto a fibre.    
 
“There are multiple approaches towards 400 Gig client interfaces being discussed at the IEEE and within the industry,” says JDSU’s Collings. “The modulation formats being evaluated are non-return-to-zero (NRZ), PAM-4 and discrete multi-tone (DMT).”  
 
For the demonstration, JDSU used DMT modulation to encode 100 Gbps on each of the four wavelengths, although Collings stresses that JDSU continues to work on all three formats. In contrast, MultiPhy is using PAM-4 to develop a 100 Gig serial link.
 
At OFC, Avago demonstrated a 25 Gig VCSEL being driven using its PAM-4 chip to achieve a 50 Gig rate. The PAM-4 chip takes two 25 Gbps input streams and encodes each two bits into a symbol that then drives the VCSEL. The demonstration paves the way for emerging standards such as 50 Gigabit Ethernet (GbE) using a 25G VCSEL, and shows how 50 Gigabit lanes could be used to implement 400 GbE using eight lanes instead of 16.  
 
NeoPhotonics demonstrated a 56 Gbps externally modulated laser (EML) along with pHEMT gallium arsenide driver technology, the result of its acquisition of Lapis Semiconductor in 2013.  
 
The main application will be 400 Gigabit Ethernet but there is already industry interest in proprietary solutions, says Nicolas Herriau, director of product engineering at NeoPhotonics. The industry may not have decided whether it will use NRZ or PAM-4 [for 400GbE], “but the goal is to get prepared”, he says. 
 
Herriau points out that the first PAM-4 ICs are not yet optimised to work with lasers. As a result, having a fast, high-quality 56 Gbps laser is an advantage.   
 
Avago has shipped over one million 25 Gig channels in multiple products
 
  
The future of VCSELs   
 
25 Gig VCSELs are an enabling technology for the data centre, says Avago. Operating at 850nm, the VCSELs deliver a 100 m reach over OM3 and 150 m over OM4 multi-mode fibre. Avago announced at OFC that it had shipped over one million VCSELs in the last two years. Before then, only 10 Gig VCSELs were available, used for 40 Gig and 100 Gig short-reach modules.  
 
Avago says that the move to 100 Gig and beyond has triggered an industry debate as to whether single-mode rather than multi-mode fibre is the way forward in data centres. For VCSELs, the open questions are whether the technology can support 25 Gig lanes, whether such VCSELs are cost-effective, and whether they can meet extended link distances beyond 100 m and 150 m.  
 
“Silicon photonics is spoken of as a great technology for the future, for 100 Gig and greater speeds, but this [announcement] is not academic or hype,” says I-Hsing Tan, Avago’s segment marketing manager for Ethernet and storage optical transceivers. “Avago has been using 25 Gig VCSELs for short-reach distance applications and has shipped over one million 25 Gig channels in multiple products.” 
 
The products that account for the more than one million shipments include Ethernet transceivers; single- and four-lane 32 Gigabit Fibre Channel, with each channel operating at 28 Gbps; InfiniBand, where four-channel modules are the most popular; and proprietary optical interfaces with channel counts varying from two to 12 (50 to 250 Gbps).   
 
In other OFC data centre demonstrations, Avago showed an extended short reach interface at 100 Gig - the 100GBASE-eSR4 - with a 300 m span. Because it is a demonstration and not a product, Avago is not detailing how it is extending the reach beyond saying that it is a combination of the laser output power and the receiver design. The extended reach product will be available from 2016.  
 
Avago completed the acquisition of PLX Technologies in the third quarter of 2014 and its PCI Express (PCIe) over optics demonstration is one result. The demonstration is designed to remove the need for a network interface card between an Ethernet switch and a server. “The aim is to absorb the NIC as part of the ASIC design to achieve a cost effective solution,” says Tan. Avago says it is engaged with several data centre operators with this concept.     
 
Avago also demonstrated a 40 Gig bi-directional module, an alternative to the 40GBASE-SR4. The 40G -SR4 uses eight multi-mode fibres, four in each direction, each carrying a 10 Gig signal. “Going to 40 Gig [from 10 Gig] consumes fibre,” says Tan. Accordingly, the 40 Gig bidi design uses WDM to avoid a ribbon fibre: it uses two multi-mode fibres, each carrying two 20 Gig wavelengths travelling in opposite directions. Avago hopes to make this product generally available later this year.   
 
At OFC, Finisar demonstrated designs for 40 Gig and 100 Gig speeds using duplex multi-mode fibre rather than ribbon fibre. The 40 Gig demo achieved 300 m over OM3 fibre while the 100 Gig demo achieved 70 m over OM3 and 100 m over OM4 fibre. Finisar’s designs use four wavelengths for each multi-mode fibre, what it calls shortwave WDM. 
 
Finisar’s VCSEL demonstrations at OFC were to highlight that the technology can continue to play an important role in the data centre. The company cites a study by market research firm Gartner: 94 percent of data centres built in 2014 were smaller than 250,000 square feet, and this percentage is not expected to change through to 2018. A 300 m optical link is sufficient for the longest reaches in data centres of this size. 
 
Finisar is also part of a work initiative to define and standardise new wideband multi-mode fibre that will enable WDM transmission over links even beyond 300 m to address larger data centres. 
 
“There are a lot of legs to VCSEL-based multi-mode technology for several generations into the future,” says Ward. “We will come out with new innovative products capable of links up to 300 m on multi-mode fibre.”

 


COBO acts to bring optics closer to the chip

The formation of the Consortium for On-Board Optics (COBO) highlights how, despite engineers putting high-speed optics into smaller and smaller pluggable modules, further progress in interface compactness is needed.

The goal of COBO, announced at the OFC 2015 show and backed by such companies as Microsoft, Cisco Systems, Finisar and Intel, is to develop a technology roadmap and common specifications for on-board optics to ensure interoperability.

“The Microsoft initiative is looking at the next wave of innovation as it relates to bringing optics closer to the CPU,” says Saeid Aramideh, co-founder and chief marketing and sales officer for start-up Ranovus, one of the founding members of COBO. “There are tremendous benefits for such an architecture in terms of reducing power dissipation and increasing the front panel density.”

On-board optics refers to optical engines or modules placed on the printed circuit board, close to a chip. The technology is not new; Avago Technologies and Finisar have been selling such products for years. But these products are custom and not interoperable.  

Placing the on-board optics nearer the chip - an Ethernet switch, network processor or a microprocessor for example - shortens the length of the board’s copper traces linking the two. The fibre from the on-board optics bridges the remaining distance to the equipment’s face plate connector. Moving the optics onto the board reduces the overall power consumption, especially as 25 Gigabit-per-second electrical lanes start to be used. The fibre connector also uses far less face plate area compared to pluggable modules, whether the CFP2, CFP4, QSFP28 or even an SFP+.  

________________________________________________________________________________
The founding members of the Consortium for On-Board Optics are Arista Networks, Broadcom, Cisco, Coriant, Dell, Finisar, Inphi, Intel, Juniper Networks, Luxtera, Mellanox Technologies, Microsoft, Oclaro, Ranovus, Source Photonics and TE Connectivity.

Given the breadth of companies and the different technologies they prefer, will the COBO initiative choose a specific fibre type and wavelength?

“COBO currently has no plans to specify a single medium or a single wavelength, but rather will reference existing standards,” Brad Booth, Chair for the Consortium for On-Board Optics told Gazettabyte.

“There has not been any discussion on the fibre type - single mode versus multi-mode - yet,” added Aramideh. “This will be one item among many interworking specification items for the consortium to define.”
________________________________________________________________________________

 

“The [COBO] initiative is going to be around defining the electrical interface, the mechanical interface, the power budget, the heat-sinking constraints and the like,” says Aramideh.

To understand why such on-board optics will be needed, Aramideh cites Broadcom’s StrataXGS Tomahawk switch chips used for top-of-rack and aggregation switches. The Tomahawk is Broadcom’s first switch family that uses 25 Gbps serialiser/deserialiser (serdes) circuitry and has an aggregate switch bandwidth of up to 3.2 terabit. And Broadcom is not alone: Cavium, through its XPliant acquisition, has the CNX880xx line of Ethernet switch chips that also uses 25 Gbps lanes and has a switch capacity of up to 3.2 terabit.

“You have 1.6 terabit going to the front panel and 1.6 terabit going to the back panel; that is a lot of traces,” says Aramideh. “If you make this into opex [operation expense], and put the optics close to the switch ASIC, the overall power consumption is reduced and you have connectivity to the front and the back.” 
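
The trace-count problem behind Aramideh's point is simple arithmetic on the figures above: a 3.2 terabit switch ASIC with 25 Gbps serdes needs 128 lanes. The differential-pair framing in the comment is an illustration of typical PCB practice, not a figure from the article.

```python
# Serdes lane count for a 3.2 terabit switch ASIC with 25 Gbps lanes.
switch_tbps = 3.2
serdes_gbps = 25

lanes = round(switch_tbps * 1000 / serdes_gbps)
# Each lane is usually carried as a differential pair of copper traces,
# so the PCB routing burden is at least twice the lane count.
print(lanes, "serdes lanes,", lanes * 2, "traces as differential pairs")
```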

This is the focus of Ranovus, with the OpenOptics MSA initiative. “Scaling into terabit connectivity over short distances and long distances,” he says.

 

OpenOptics MSA

At OFC, the OpenOptics MSA, whose founding members include Ranovus and Mellanox, published its specification for an interoperable 100 Gbps WDM standard with a two kilometer reach. 

The 100 Gigabit standard uses 4x25 Gbps wavelengths but Aramideh says the standard scales to 8, 16 and 32 lanes. In turn, there will also be a 50 Gbps lane version that will provide a total connectivity of 1.6 terabit (32x50 Gbps). 

Ranovus has not detailed what modulation scheme it will use to achieve 50 Gbps lanes, but Aramideh says that PAM-4 is one of the options and an attractive one at that. “There are also a lot of chipsets [supporting PAM-4] becoming available,” he says. 

Ranovus’s first products will be an OpenOptics MSA optical engine and a QSFP28 optical module. “We are not making any product announcements yet but there will be products available this year,” says Aramideh. 

Meanwhile, Ciena has become the sixth member to join the OpenOptics MSA. 


MultiPhy readies 100 Gigabit serial direct-detection chip

MultiPhy is developing a chip that will support serial 100 Gigabit-per-second (Gbps) transmission using 25 Gig optical components. The device will enable short reach links within the data centre and up to 80km point-to-point links for data centre interconnect. The fabless chip company expects to have first samples of the chip, dubbed FlexPhy, by year-end.

Figure 1: A block diagram of the 100 Gig serial FlexPhy. The transmitter output is an electrical signal that is fed to the optics. Equally, the input to the receive path is an electrical signal generated by the receiver optics. Source: Gazettabyte

The FlexPhy IC comprises multiplexing and demultiplexing functions as well as a receiver digital signal processor (DSP). The IC's transmitter path has a CAUI-4 (4x28 Gig) interface, a 4:1 multiplexer and four-level pulse amplitude modulation (PAM-4) that encodes two bits per symbol. The resulting chip output is a 50 Gbaud signal used to drive a laser to produce the 100 Gbps output stream.

"The input/output doesn't toggle at 100 Gig, it toggles at 50 Gig," says Neal Neslusan, vice president of sales and marketing at MultiPhy. "But 50 Gig PAM-4 is actually 100 Gigabit-per-second."

The IC's receiver portion will use digital signal processing to recover and decode the PAM-4 signals, and demultiplex the data into four 28 Gbps electrical streams. The FlexPhy IC will fit within a QSFP28 pluggable module.

As with MultiPhy's first-generation chipset, the optics are overdriven. With the MP1101Q 4x28 Gig multiplexer and MP1100Q four-channel receiver, 10 Gig optics are used to achieve four 28 Gig lanes, while with the FlexPhy, a 25 Gig laser is used. "Using a 25 GigaHertz laser and double-driving it to 50 GigaHertz induces some noise but the receiver DSP cleans it up," says Neslusan.

The use of PAM-4 incurs an optical signal-to-noise ratio (OSNR) penalty compared to non-return-to-zero (NRZ) signalling used for MultiPhy's first-generation direct-detection chipset. But PAM-4 has a greater spectral density; the 100 Gbps signal fits within a 50 GHz channel, resulting in 80 wavelengths in the C-band. This equates to 8 terabits of capacity to connect data centres up to 80 km apart.
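
The capacity arithmetic behind these figures is straightforward: PAM-4 carries two bits per symbol, so a 50 Gbaud signal delivers 100 Gbps within a 50 GHz grid slot, and roughly 80 such slots fit in the C-band. All numbers come from the article.

```python
# MultiPhy's C-band capacity arithmetic for 100 Gbps PAM-4 channels.
BITS_PER_PAM4_SYMBOL = 2
baud_rate = 50e9                # 50 Gbaud symbol rate

channel_gbps = baud_rate * BITS_PER_PAM4_SYMBOL / 1e9   # 100 Gbps per channel
c_band_channels = 80            # 50 GHz grid slots across the C-band
total_tbps = channel_gbps * c_band_channels / 1000      # aggregate capacity

print(f"{channel_gbps:.0f} Gbps/channel x {c_band_channels} = {total_tbps:.0f} Tbps")
```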

Within the data centre, MultiPhy's physical layer IC will enable 100 Gbps serial interfaces. The design could also enable 400 Gig links over distances of 500 m, 2 km and 10 km by using four FlexPhy chips, four transmitter optical sub-assemblies (TOSAs) and four receiver optical sub-assemblies (ROSAs).

Meanwhile, MultiPhy's existing direct-detection chipset has been adopted by multiple customers. These include two optical module makers - Oplink and a Chinese vendor - and a major Chinese telecom system vendor that is using the chipset for a product coming to market now. 


STMicro chooses PSM4 for first silicon photonics product

STMicroelectronics has revealed that its first silicon photonics product will be the 100 Gigabit PSM4. The 500m-reach PSM4 multi-source agreement (MSA) is a single-mode design that uses four parallel fibres in each direction. The chip company expects the PSM4 optical engine to be in production during 2015. 

"We have prototypes and can show them running very well at 40 Gig [4x10 Gig]," says Flavio Benetti, group vice president, digital product group and general manager, networking products division at STMicroelectronics. "But it is expected to be proven at 4x25 Gig in the next few months." STMicroelectronics has just received its latest prototype chip that is now working at 4x25 Gig. 

Benetti is bullish about the technology's prospects: "Silicon photonics is really a great opportunity today in that, once proven, it will be a strong breakthrough in the market." He highlights three benefits silicon photonics delivers:
  • Lowers the manufacturing cost of optical modules  
  • Improves link speeds
  • Reduces power consumption       
Silicon photonics provides an opportunity to optimise the supply chain, he says. The technology simplifies optical module manufacturing by reducing the number of parts needed and the assembly cost. 

Regarding speed, STMicroelectronics cites its work with Finisar demonstrating a 50 Gig link using silicon photonics, detailed at the recent ECOC show. "Photonic processing integrated into a CMOS line allows this intrinsically, while there are several factors that don't allow such an easy implementation of 50 Gig with traditional technologies," he says, citing VCSELs and directly-modulated lasers as examples. 
  
"We think we can bring an advantage in the power consumption as well," says Benetti. ICs are still needed - to drive the optical modulator, for example - but the optical circuitry itself has a very low power consumption.  
   
STMicroelectronics licensed silicon photonics technology from Luxtera in 2012 and has a 300mm (12-inch) wafer production line in Crolles, France. The company will not offer a foundry service to other companies since STMicroelectronics believes its silicon photonics process offers it a competitive advantage.      

The company has also developed its own electronic design automation tool (EDA) for silicon photonics. The EDA tool allows optical circuitry to be simulated alongside the company's high-speed BiCMOS ICs. "What we have developed covers our needs," says Benetti. "But we are working to evolve it to more complex models."
STMicro's in-house silicon photonics EDA. "We will develop the EDA tools to the level needed for the next generation products," says Flavio Benetti.

The company has developed a fibre attachment solution. "The big issue with silicon photonics is not the silicon part; it is getting the light in and out [of the chip]," says Benetti. The in-house technique is being made available to customers. "It is much more convenient for customers to attach the fibre, not us, as it is quite delicate." STMicroelectronics will deliver the optical engine and its customers will finish the design and attach the fibres.

Other techniques to couple light onto the chip are also being explored by the company. "Why should we need the fibre?" he says. "We should find a better way to get the light in and the light out." The goal of the work is to develop techniques that can be implemented on a fabrication plant scale. "The problem is [developing a technique] not to produce 100 parts but one million parts; this is the angle we are taking."

Meanwhile, the company is evaluating high-speed designs for 400 Gigabit Ethernet. "I don't see in the short-to-medium term a solution that is 400 Gig-one fibre," says Benetti. The work involves looking at the trade-offs of the various design approaches such as parallel fibres, WDM and modulation schemes like PAM-4 and PAM-8 (pulse amplitude modulation). 

Performance parameters used to evaluate the various design options include cost and power consumption. But Benetti says more work is needed before STMicroelectronics will choose particular designs to pursue.

 


Silicon photonics: Q&A with Kotura's CTO

A Q&A with Mehdi Asghari, CTO of silicon photonics start-up, Kotura.  In part one, Asghari talks about a recent IEEE conference he co-chaired that included silicon photonics, the next Ethernet standard, and the merits of silicon photonics for system design.

Part 1

 

"Photons and electrons are like cats and dogs. Electrons are dogs: they behave, they stick by you, they are loyal, they do exactly as you tell them, whereas cats are their own animals and they do what they like. And that is what photons are like."

Mehdi Asghari, CTO of Kotura

 

Q: You recently co-chaired the IEEE International Conference on Group IV Photonics that included silicon photonics. What developments and trends would you highlight? 

A: This year I wanted to show that silicon photonics was ready to make a leap from an active area of scientific research to a platform for engineering innovation and product development.

To this end, I needed to show that the ecosystem was ready and present. Therefore, a key objective was to get the industry more involved with the conference. "This has always been a challenge," I was told.

To address this issue I asked my co-chair, MIT's Professor Jurgen Michel, that we appoint joint-session chairs, one from industry and one from academia. We got people we knew from Google, Oracle and Intel as co-chairs, and paired them with prominent academics and asked them to ensure that there were an equal number of industry-invited talks in the schedule. We knew this would be a major attraction to industry attendees. We also got the industry to fund the conference at a level that set an IEEE record.

A key highlight of the show was a boat cruise on San Diego Bay with Dr. Andrew Rickman as speaker, sharing his experiences and thoughts about setting up the first silicon photonics company - Bookham Technology - over 20 years ago.

Among other distinguished industry speakers we had Samsung telling us of the role of silicon photonics in consumer applications, Broadcom on the need for on-chip optical communication, Cisco on the role of silicon photonics in the future of the Internet, and Google on its broadband fibre-to-the-home (FTTh) initiative and what silicon photonics could offer in this area.

Oracle also shared its latest development in silicon photonics and the application of the technology in their systems, while Luxtera discussed the latest developments in its CMOS photonics platform, particularly the 4x25 Gigabit-per-second (Gbps) platform.

We also heard about the latest germanium laser development at MIT and had an invited speaker to talk about what III-V devices could do and to provide a comparison to silicon to make sure we are not blinded by our own rhetoric.

We ended up with a record number of attendees for the conference and, perhaps more importantly, close to half were from industry - also a record. That vindicated my motivation and perspective for the conference: silicon photonics is ready and coming.

 

Was there a trend or presentation at the IEEE event that stood out?

There are two areas creating excitement. One is the germanium laser. This is a topic of significant interest because these devices can operate at very high temperatures and therefore they can be next to the processor or ASIC. This can be a game-changer in how we envisage photonics and electronics being integrated.

We have germanium detectors and at Kotura we are working very hard to get a germanium electro-absorption modulator. We have shown this device can be extremely small and low power. And it can operate at very high speed - we have observed 3dB bandwidths in excess of 70GHz which means you can think of 100 Gigabit direct modulation for a device only 40 microns long and with a capacitance of a few femtofarads. So in terms of RF power, the dissipation of this device is virtually zero.
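
A back-of-envelope check on the "virtually zero" RF power claim: the dynamic power of a small capacitive modulator is bounded by roughly C x Vpp^2 x f. The capacitance, voltage swing and symbol rate below are assumptions for illustration, chosen to match the "few femtofarads" and 100 Gig figures in the answer; they are not Kotura's numbers.

```python
# Order-of-magnitude CV^2f bound on a capacitive modulator's switching power.
def dynamic_power_watts(capacitance_f, v_pp, symbol_rate_hz):
    """Upper-bound estimate: every symbol fully charges/discharges C."""
    return capacitance_f * v_pp**2 * symbol_rate_hz

# Assumed values: 5 fF device, 1 V swing, 100 Gbaud direct modulation.
p = dynamic_power_watts(5e-15, 1.0, 100e9)
print(f"{p * 1e3:.2f} mW")  # ~0.5 mW: negligible next to a driver IC
```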

I would say the MIT group is probably leading the [germanium laser] efforts. They reported on room-temperature, current-driven laser emission which is very exciting. The efficiency of these lasers is still low for commercial applications; they probably have to improve by a factor of 100 or so. But given the progress we've seen in the last two years, if they keep going at that pace we may have viable germanium lasers in a couple of years. Then someone in industry has to take that on and turn it into a product and that is usually the hardest part.

This is exciting because that enables us to forget about off-the-chip lasers and integrate them in the device. We can then give up a whole bunch of problems. For example, the high temperature operation of the III-V devices is a real limit for us. Electronic devices can give off 100W and operate at 120°C, whereas optical devices often have to be stabilised, may go through multiple packaging layers, and the heat dissipation is usually directly related to cost.

If you could end up with a germanium laser that is happy at high temperatures - and we know our detectors and modulators work at high temperatures, and we know we can use electronic packaging to package these devices - then we can put these lasers next to the processor and address the bandwidth limitations that ASICs are facing today.

 

"Wavelength division multiplexing (WDM) is effectively a zero-power gearbox"

 

What was the second area?

The other area that was very interesting is graphene, a new material people are starting to work with and putting on silicon. They [researchers] are showing very low power, very high speed operation. It is still at a research level but that is another area we should watch.

 

The IEEE has started a group looking at the next speed Ethernet standard. No technical specification has been mentioned but it looks that 400 Gigabit Ethernet (GbE) will be the approach. Do you agree and what role can silicon photonics play in making the next speed Ethernet standard possible?

Industry is busy arguing about the different ways of doing 100 and 400GbE, and perhaps forgetting the fact that we have been here before.

The simple fact is that people always go for higher bit rate when it is cost-efficient and power-efficient to do so. After that, wavelengths are used.

Wavelength division multiplexing (WDM) is effectively a zero-power 'gearbox', mixing the signals in the optical domain. You do pay a power penalty for it in the form of photons lost in the multiplexer and demultiplexer. However that is not significant compared to the power consumption of an electronics gearbox chip.

Once we have exploited line rate and wavelength division multiplexing, we come to more complex modulation formats and pay the associated power and complexity penalty. Of course, more channels of fibre can always carry more information bandwidth but that is just a brute force solution that works while density and bandwidth requirements are moderate.

I think the right 100 Gigabit is based on a WDM 4x25 Gig solution. This can then scale to 400 Gigabit by adding more wavelengths, and then to 1.6 Terabit. We have already demonstrated this in a single chip and will demonstrate it later in the form of a 100 Gbps QSFP module.

 

How does the interface scale to 1.6Tbps?

Our devices are capable of running at 40 or 50Gbps, depending on the electronics. The electronics is going to limit the speed of our devices. We can very easily see going from four channels at 25Gbps to 16 channels at 25Gbps to provide a 400 Gigabit solution.

We can also see a way of increasing the line rate to 50Gbps perhaps, either a straightforward NRZ (non-return-to-zero) line rate or some people are talking about multi-level modulation, PAM-4 (pulse amplitude modulation) type of stuff, to get to 50Gbps.

The customers we are talking to about 100Gbps are already talking about 400Gbps. So we can see 16x25Gbps, or 8x50Gbps if that is the right thing to do at the time based on the availability of electronics.

To go to 1.6 Terabit transceivers, we envisage something running at 40Gbps times 40 channels or 50Gbps times 32 channels. We already have done a single receiver chip demonstrator that has 40 channels, each at 40Gbps.
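
The scaling options Asghari walks through reduce to channel-count times line-rate arithmetic; all the figures below come from the interview, and the option labels are illustrative.

```python
# Channel count x per-channel rate for the scaling paths in the interview.
configs = {
    "100G today":    (4, 25),    # 4 x 25 Gbps WDM
    "400G option A": (16, 25),   # more wavelengths at 25 Gbps
    "400G option B": (8, 50),    # fewer channels at a 50 Gbps line rate
    "1.6T option A": (40, 40),   # the 40-channel receiver chip demonstrator
    "1.6T option B": (32, 50),
}

for name, (channels, gbps) in configs.items():
    print(f"{name}: {channels} x {gbps}G = {channels * gbps} Gbps")
```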

These things in silicon are not a big deal. The III-V guys really struggle with yield and cost. But you can envisage scaling to that level of complexity in a silicon platform.

 

Silicon photonics is spoken of not just as an optical platform like traditional optical integration technologies, but also as a design approach, making use of techniques associated with semiconductor design. The implication is that the technology will enable designs and even systems in a way that traditional optics can't. Can you explain how silicon photonics is a design approach and just what the implications are?

I think this is a key promise of silicon photonics, but perhaps one that has been oversold in recent years.

The key here is that given the maturity of the silicon processing capabilities, process simulation tools available and inherent properties of silicon, it is possible to predict the performance of the optical circuits far better in this platform than in any other before it. I think this is true and very valuable, potentially even a game changer.

However, we have to realise that there still remains an inherent difference between electrons and photons and their behavior in such circuits. Photons remain in a quantum world in such circuits, where the wavelength of light is comparable to feature sizes we manufacture. Hence we are dealing with a statistical quantum process whether we like it or not.

In summary, silicon will be a key enabler for on-chip system design, but it is too early for the university courses to stop graduating photonics PhDs!

 

So there is an advantage to silicon photonics, but are you saying it is not as simple as using mature semiconductor design techniques?

Photons and electrons are like cats and dogs. Electrons are dogs: they behave, they stick by you, they are loyal, they do exactly as you tell them, whereas cats are their own animals and they do what they like. And that is what photons are like.

So it is really hard to predict what a photon does. The dimensions that we use for the structures we make are of the size of the wavelength of a photon. And that means it is more of a hit-and-miss process - there is always stray light, the stray light has a habit of interfering and you can always get unpredicted results.

When I interact with my electronic partners I find that they go through 6-9 months of very detailed simulation. They have very complex simulation tools.

When you come to photonics for sure we can borrow some of these simulation tools, we can simulate the process because we are using silicon. However some of the tolerances that we need are beyond what the silicon guys need, and the way the photons behave is very different. So in the end we don't spend 9 months simulating; we spend a month simulating and 3 months running the process and optimising it and re-running it and re-optimising it.

We end up with a reverse situation where the design is only 3 months, and the interaction with the designer and the manufacturing process is a 9-month process. So this is more of an iterative process. It is not as mature and a little bit more statistical. 

 

