ST makes its first PSM4 optical engine deliveries

Flavio Benetti is upbeat about the prospects of silicon photonics. “Silicon photonics as a market is at a turning point this year,” he says.

What gives Benetti confidence is the demand he is seeing for 100-gigabit transceivers in the data centre. “From my visibility today, the tipping point is 2016,” says Benetti, group vice president and general manager, digital and mixed processes ASIC division at STMicroelectronics.

 

Flavio Benetti

Benetti and colleagues at ST have spent the last four years working to bring to market the silicon photonics technology that the chip company licensed from Luxtera.

The company has developed a 300mm-wafer silicon photonics production line at its fabrication plant in Crolles, France, that is now up and running. ST also has its first silicon photonics product - a mid-reach PSM4 100-gigabit optical engine - and has just made its first deliveries.

At the OFC show in March, ST said it had already delivered samples to one unnamed 'customer partner', possibly Luxtera, and Benetti showed a slide of the PSM4 chips as part of a Lumentum transceiver.  

Another ST achievement Benetti highlights is the development of a complete supply chain for the technology. In addition to wafer production, ST has developed electro-optic wafer testing. This allows devices to be probed electrically and optically to select working designs before the wafer is diced. ST has also developed a process to 3D-bond chips.

“We have focussed on building an industrial environment, with a supply chain that can deliver hundreds of thousands and millions of devices,” says Benetti. 

 

PSM4 and CWDM4

ST’s first product, the components for a 4x25 gigabit PSM4 transceiver, is a two-chip design.

One chip is the silicon photonics optical engine, which integrates the PSM4’s four modulators, four detectors and the grating couplers used to interface the chip to the fibres. The second chip, fabricated using ST’s 55nm BiCMOS process, houses the transceiver’s associated electronics, such as the drivers and trans-impedance amplifiers.

The two chips are combined using 3D packaging. “The 3D packaging consists of the two dies, one copper-pillar bonded to the other,” says Benetti. “It is a dramatic simplification of the mounting process of an optical module.” 

The company is also developing a 100-gigabit CWDM4 transceiver which unlike the PSM4 uses four 25-gigabit wavelengths on a single fibre.

The CWDM4 product will be developed in two stages. The first is an interim, hybrid design that uses an external planar lightwave circuit-based multiplexer and demultiplexer; the second is a fully integrated silicon photonics design. The hybrid design is under development and is expected in late 2017, while the integrated silicon photonics design is due in 2018.

With the hybrid design, it is not just a question of adding a mux-demux to the PSM4 design. “The four channels are each carrying a different wavelength so there are some changes that need to be done to the PSM4,” says Benetti, adding that ST is working with partners that will provide the mux-demux and do the integration.   

 

We need to have a 100-gigabit solution in high volume for the market, and the pricing pressure that is coming has convinced us that silicon photonics is the right thing to do

 

Opportunities 

Despite the growing demand for 100-gigabit transceivers that ST is seeing, Benetti stresses that these are not 'mobile-phone wafer volumes'. “We are much more limited in terms of wafers,” he says. Accordingly, there is probably only room for one or two large fabs for silicon photonics globally, in his opinion. 

So why is ST investing in a large production line? For Benetti, this is an obvious development for the company which has been a provider of electrical ICs for the optical module industry for years.

“ST has entered silicon photonics to provide our customers with a roadmap,” says Benetti. “We need to have a 100-gigabit solution in high volume for the market, and the pricing pressure that is coming has convinced us that silicon photonics is the right thing to do.”

It also offers chip players the possibility of increasing their revenues. “The optical engine integrates all the components that were in the old-fashioned modules so we can increase our revenues there,” he says.

ST is tracking developments for 200-gigabit and 400-gigabit links and is assessing whether there is enough of an opportunity to justify pursuing 200-gigabit interconnects.

For now though, it is seeing strong pricing pressure for 100-gigabit links for reaches of several hundred metres. “We do not think we can compete for very short reach distances,” says Benetti. “We will leave that to VCSELs until the technology can no longer follow.” As link speeds increase, the reach of VCSEL links diminishes. “We will see more room for silicon photonics but this is not the case in the short term,” says Benetti.

 

Market promise

People have been waiting for years for silicon photonics to become a reality, says Benetti. “My target is to demonstrate it [silicon photonics] is possible, that we are serious in delivering parts to the market in an industrial way and in volumes that have not been delivered before.”

Convincing the market means demonstrating not just the technological advantages of silicon photonics but also the great simplification it brings to constructing the optical module, along with the ability to deliver devices in volume. “This is the point,” he says.

Benetti’s other role at ST is overseeing advanced networking ASICs. He argues that over the mid- to long-term, there needs to be a convergence between ASIC and optical connectivity.

“Look at a switch board, for example; you have a big ASIC or two in the middle and a bunch of optical modules on the side,” says Benetti. For him, the two technologies - photonics and ICs - are complementary, and the industry’s challenge is to make the two live together in an efficient way.


Verizon's move to become a digital service provider

Verizon’s next-generation network based on network functions virtualisation (NFV) and software-defined networking (SDN) is rapidly taking shape.

Working with Dell, Big Switch Networks and Red Hat, the US telco announced in April it had already brought online five data centres. Since then it has deployed more sites but is not saying how many.

Source: Verizon

“We are laying the foundation of the programmable infrastructure that will allow us to do all the automation, virtualisation and the software-defining we want to do on top of that,” says Chris Emmons, director, network infrastructure planning at Verizon.

“This is the largest OpenStack NFV deployment in the marketplace,” says Darrell Jordan-Smith, vice president, worldwide service provider sales at Red Hat. “The largest in terms of the number of [server] nodes that it is capable of supporting and the fact that it is widely distributed across Verizon’s sites.”

OpenStack is an open source set of software tools that enable the management of networking, storage and compute services in the cloud. “There are some basic levels of orchestration while, in parallel, there is a whole virtualised managed environment,” says Jordan-Smith.
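For readers unfamiliar with what that management layer looks like in practice, below is a minimal sketch of querying an OpenStack cloud programmatically with the openstacksdk Python library. It is illustrative only: ‘mycloud’ is a hypothetical clouds.yaml entry, and nothing here is specific to Verizon’s deployment.

```python
# A minimal sketch of querying an OpenStack cloud with openstacksdk.
# 'mycloud' is a hypothetical entry in the operator's clouds.yaml;
# this is illustrative, not part of Verizon's actual tooling.
import openstack

conn = openstack.connect(cloud="mycloud")

# Enumerate some of the compute and network resources the platform manages.
for server in conn.compute.servers():
    print(server.name, server.status)

for network in conn.network.networks():
    print(network.name)
```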

 

This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach


“Verizon is joining some of the other largest communication service providers in deploying a platform onto which they will add applications over time,” says Dana Cooperson, a research director at Analysys Mason. 

Most telcos start with a service-led approach so they can get direct experience with the technology and one or more quick wins - revenue in a new service arena - while containing the risk of something going wrong, explains Cooperson. As they progress, they can still lead with specific services while deploying their platforms, and they can decide over time what to put on the platform as custom equipment reaches its end-of-life.

A second approach - a platform strategy - is a more sophisticated, longer-term one, but telcos need experience before they take that plunge.

“This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach,” says Cooperson.

 

Applications

The Verizon data centres are located in core locations of its network. “We are focussing more on core applications but some of the tools we use to run the network - backend systems - are also targeted,” says Emmons.

The infrastructure is designed to support all of Verizon’s business units. For example, Verizon is working with its enterprise unit to see how it can use the technology to deliver virtual managed services to enterprise customers.

“Wherever we have a need to virtualise something - the Evolved Packet Core, IMS [IP Multimedia Subsystem] core, VoLTE [Voice over LTE] or our wireline side, our VoIP [Voice over IP] infrastructure - all these things are targeted to go on the platform,” says Emmons. Verizon plans to pool all these functions and network elements onto the platform over the next two years.   

Red Hat’s Jordan-Smith talks about a two-stage process: virtualising functions and then making them stateless, so that applications can run independent of the location of the server and the data centre.

“Virtualising applications and running on virtual machines gives some economies of scale from a cost perspective and density perspective,” says Jordan-Smith. But there is a further cost benefit, as well as a gain in performance and resiliency, once such applications can run across data centres.

And by having a software-based layer, Verizon will be able to add devices and create associated applications and services. “With the Internet of Things, Verizon is looking at connecting many, many devices and adding scale to these types of environments,” says Jordan-Smith.

 

Architecture

Verizon is deploying a ‘pod and core architecture’ in its data centres. A pod contains racks of servers, top-of-rack or leaf switches, and higher-capacity spine switches and storage, while the core network is used to enable communications between pods in the same data centre and across sites (see diagram, top).

Dell is providing Verizon with servers, storage platforms, and white-box leaf and spine switches. Big Switch Networks is providing software that runs on the Dell switches and servers, while the OpenStack platform and Ceph storage software are provided by Red Hat.

Each Dell rack houses 22 servers - each server having 24 cores and supporting 48 hyper-threads - and all 22 servers connect to the leaf switch. Each rack is teamed with a sister rack and the two are connected to two leaf switches, providing switch-level redundancy.

“Each of the leaf switches is connected to however many spine switches are needed at that location and that gives connectivity to the outside world,” says Emmons. For the five data centres, a total of eight pods has been deployed, amounting to 1,000 servers, and this has not changed since April.
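As a back-of-the-envelope check on those figures, the sketch below totals up the deployment Emmons describes. The numbers are from the article; treating every rack and pod as identical is our own simplifying assumption.

```python
# Rough totals for Verizon's initial deployment (article figures;
# uniform racks and pods are our assumption).
servers_per_rack = 22
threads_per_server = 48        # 24 cores, two hyper-threads per core
total_servers = 1_000          # across eight pods in five data centres

racks = total_servers / servers_per_rack        # roughly 45 racks
threads = total_servers * threads_per_server    # 48,000 hardware threads

print(f"~{racks:.0f} racks, {threads:,} hardware threads")
```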

 

This is the largest OpenStack NFV deployment in the marketplace

 

Verizon has deliberately chosen to separate the pods from the core network so it can innovate at the pod level independently of the data centre’s network.

“We wanted the ability to innovate at the pod level and not be tied into any technology roadmap at the data centre level,” says Emmons, who points out that there are several ways to evolve the data centre network. For example, in some cases an SDN controller is used to control the whole data centre network.

“We don't want our pods - at least initially - to participate in that larger data centre SDN controller because we were concerned about the pace of innovation and things like that,” says Emmons. “We want the pod to be self-contained and we want the ability to innovate and iterate in those pods.”

Its first-generation pods contain equipment and software from Dell, Big Switch and Red Hat but Verizon may decide to swap out some or all of the vendors for its next-generation pod. “So we could have two generations of pod that could talk to each other through the core network,” says Emmons. “Or they could talk to things that aren't in other pods - other physical network functions that have not yet been virtualised.”  

Verizon’s core networks are its existing networks in the data centres. “We didn't require any uplift and migration of the data centre networks,” says Emmons. However, Verizon has a project investigating data-centre interconnect platforms for core networking.

 

What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to

 

Benefits

Verizon expects capital expenditure and operational expense benefits from its programmable network but says it is too early to quantify them. What excites the operator more is the ability to get services up and running quickly, and to adapt and scale the network according to demand.

“You can reconfigure and reallocate your network once it is all software-defined,” says Emmons. There is still much work to be done, from the network core to the edge. “These are the first steps to that programmable infrastructure that we want to get to,” says Emmons.

Capital expenditure savings result from adopting standard hardware. “The more uniform you can keep the commodity hardware underneath, the better your volume purchase agreements are,” says Emmons. Operational savings also result from using standardised hardware. “Spares becomes easier, troubleshooting becomes easier as does the lifecycle management of the hardware,” he says. 

 

Challenges

“We are tip-of-the-spear here,” admits Emmons. “What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to.”

Red Hat’s Jordan-Smith also talks about the accelerated development environment enabled by the software-enabled network. The OpenStack platform undergoes a new revision every six months.

“There are new services that are going to be enabled through major revisions in the not-too-distant future - the next 6 to 12 months,” says Jordan-Smith. “That is one of the key challenges operators like Verizon have when they are moving at what is now a very fast pace.”

Verizon continues to deploy infrastructure across its network. The operator has completed most of the troubleshooting and performance testing at the cloud level and, in parallel, is working on the applications in several of its labs. “Now it is time to put it all together,” says Emmons.

One critical aspect of the move to become a digital service provider will be the operators' ability to offer new services more quickly - what people call service agility, says Cooperson. Only by changing their operations and their networks can operators create and, if needed, retire services quickly and easily. 

"It will be evident that they are truly doing something new when they can launch services in weeks instead of months or years, and make changes to service parameters upon demand from a customer, as initiated by the customer," says Cooperson. “Another sign will be when we start seeing a whole new variety of services and where we see communications service providers building those businesses so that they are becoming a more significant part of their revenue streams."

She cites as examples cloud-based services and more machine-to-machine and Internet of Things-based services.


FPGAs with 56-gigabit transceivers set for 2017

Xilinx is expected to ship its first FPGAs featuring 56-gigabit transceivers next year. 

The company demonstrated a 56-gigabit transceiver using 4-level pulse-amplitude modulation (PAM-4) at the recent OFC show. The 56-gigabit transceiver, also referred to as a serialiser-deserialiser (serdes), was shown working successfully over a backplane specified for 25-gigabit signalling only.
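The arithmetic behind that result is worth a line: PAM-4 carries two bits per symbol, so the symbol rate - which sets the analogue bandwidth the channel must support - is half the bit rate. A minimal sketch:

```python
# Why a 56-gigabit PAM-4 serdes can cross a 25G-class backplane:
# PAM-4's four amplitude levels encode two bits per symbol, halving
# the symbol rate relative to the bit rate.
bit_rate_gbps = 56
bits_per_symbol = 2

symbol_rate_gbaud = bit_rate_gbps / bits_per_symbol
print(symbol_rate_gbaud)  # 28 Gbaud, close to 25G-class NRZ signalling
```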

Gilles Garcia

Xilinx's 56-gigabit serdes is implemented using a 16nm CMOS process node but the first FPGAs featuring the design will be made using a 7nm process. Gilles Garcia says the choice of 7nm CMOS is solely a business decision and not a technical one.

“Optical module [makers] will take another year to make something decent using PAM-4,” says Garcia, Xilinx's director of marketing and business development, wired communications. “Our 7nm FPGAs will follow very soon afterwards.”

The company has yet to detail its next-generation FPGA family but says it will include an FPGA capable of supporting 1.6 terabits of Optical Transport Network (OTN) traffic using 56-gigabit serdes only. At first glance that implies at least 28 PAM-4 transceivers on a chip, but OTN is a complex design that is logic-limited rather than I/O-limited, suggesting that the FPGA will feature more than 28 of the 56-gigabit serdes.
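A quick calculation supports that reading, assuming each 56-gigabit serdes carries its full rate as payload - a simplification that ignores FEC and OTN framing overhead:

```python
import math

# Minimum serdes count for 1.6 terabits at 56 Gbit/s per lane,
# ignoring FEC and OTN framing overhead (our simplification).
otn_gbps = 1_600
serdes_gbps = 56

print(math.ceil(otn_gbps / serdes_gbps))  # 29 - already more than 28
```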

 

Applications 

Xilinx’s Virtex UltraScale and its latest UltraScale+ FPGA families feature 16-gigabit and 25-gigabit transceivers. Managing power consumption and maximising reach of the high-speed serdes are key challenges for its design engineers. Xilinx says it has 150 engineers for serdes design.

“Power is always a key challenge because as soon as you talk about 400-gigabit to 1-terabit per line card, you need to be cautious about the power your serdes will use,” says Garcia. He says the serdes need to adapt to the quality of the traces for backplane applications. Customers want serdes that will support 25 gigabit on existing 10-gigabit backplane equipment.

Xilinx describes its Virtex UltraScale as a 400-gigabit capable single-chip system supporting up to 104 serdes: 52 at 16 gigabit and 52 at 25 gigabit. 

The UltraScale+ is rated as a 500-gigabit to 600-gigabit capable system, depending on the application. For example, the FPGA could support three 200-gigabit OTN wavelengths, says Garcia.

Xilinx says the UltraScale+ reduces power consumption by 35% to 50% compared to the same designs implemented on the UltraScale. The Virtex UltraScale+ devices also feature dedicated hardware to implement RS-FEC, freeing up programmable logic for other uses. RS-FEC is used with multi-mode fibre or copper interconnects for error correction, says Xilinx. Six UltraScale+ FPGAs are available, and the VU13P, not yet out, will feature up to 128 serdes, each capable of up to 32 gigabit.
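For context, the raw serdes bandwidth of these parts far exceeds their headline system ratings, since client ports, line ports and protocol overhead all consume lanes - our reading of the figures, not a Xilinx statement:

```python
# Aggregate raw serdes bandwidth, using the lane counts in the article.
virtex_ultrascale_gbps = 52 * 16 + 52 * 25   # 2,132 Gbit/s over 104 serdes
vu13p_gbps = 128 * 32                        # 4,096 Gbit/s on the VU13P

print(virtex_ultrascale_gbps, vu13p_gbps)
```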

 

We don’t need retimers so customers can connect directly to the backplane at 25 gigabit, thereby saving space, power and cost

 

The UltraScale and UltraScale+ FPGAs are being used in several telecom and datacom applications. 

For telecom, 500-gigabit and 1-terabit OTN designs are an important market for the UltraScale FPGAs. Another use for the FPGA serdes is for backplane applications. “We don’t need retimers so customers can connect directly to the backplane at 25 gigabit, thereby saving space, power and cost,” says Garcia. Such backplane uses include OTN platforms and data centre interconnect systems.     

The FPGA family’s 16-gigabit serdes are also being used in 10-gigabit PON and NG-PON2 systems. “When you have an 8-port or 16-port system, you need to have a dense serdes capability to drive the [PON optical line terminal’s] uplink,” says Garcia.   

For data centre applications, the FPGAs are being employed in disaggregated storage systems that involve pooled storage devices. The result is many 16-gigabit and 25-gigabit streams accessing the storage, while the connections to the data centre and its servers use 100-gigabit links. The FPGA serdes are used to translate between the two domains (see diagram).

 

Source: Xilinx

 

For its next-generation 7nm FPGAs with 56-gigabit transceivers, Xilinx is already seeing demand for several applications. 

Data centre uses include server-to-top-of-rack links as the large Internet providers look to move from 25-gigabit to 50- and 100-gigabit links. Another application is connecting the adjacent buildings that make up a mega data centre, which can involve hundreds of 100-gigabit links. A third application is meeting the growing demands of disaggregated storage.

For telecom, the interest is in connecting directly to new optical modules over 50-gigabit lanes, without the need for gearbox ICs.

 

Optical FPGAs 

Altera, now part of Intel, developed an optical FPGA demonstrator that used co-packaged VCSELs for off-chip optical links. Since then, Altera has announced its Stratix 10 FPGAs that include connectivity tiles - transceiver logic co-packaged with, and linked to, the FPGA using interposer technology.

Xilinx says it has studied the issue of optical I/O and that there is no technical reason why it can’t be done. But integrating optics in an FPGA raises business issues, says Garcia: “Who is responsible for the yield? For the support?”

Garcia admits Xilinx could develop its own I/O designs using silicon photonics and then it would be responsible for the logic and the optics. “But this is not where we are seeing the business growing,” he says. 


Enabling coherent optics down to 2km short-reach links

Silicon photonics luminaries series

Interview 5: Chris Doerr

Chris Doerr admits he was a relative latecomer to silicon photonics. But after making his first silicon photonics chip, he was hooked. Nearly a decade later, Doerr is associate vice president of integrated photonics at Acacia Communications. The company uses silicon photonics for its long-distance optical coherent transceivers.

 

Chris Doerr in the lab

Acacia Communications made headlines in May after completing an initial public offering (IPO), raising approximately $105 million for the company. Technology company IPOs have become a rarity and are not always successful. On its first day of trading, Acacia’s shares opened at $29 per share and closed just under $31.

Although investors may not have understood the subtleties of silicon photonics or coherent DSP-ASICs for that matter, they noted that Acacia has been profitable since 2013. But as becomes clear in talking to Doerr, silicon photonics plays an important role in the company’s coherent transceiver design, and its full potential for coherent has still to be realised.

 

Bell Labs

Doerr was at Bell Labs for 17 years before joining Acacia in 2011. He spent the majority of his time at Bell Labs making indium phosphide-based optical devices at first, and later planar lightwave circuits as well. One of his bosses at Bell Labs was Y.K. Chen. Chen had arranged a silicon photonics foundry run and asked Doerr if he wanted to submit a design.

What hooked Doerr was silicon photonics’ high yields. He could assume every device was good, whereas when making complex indium phosphide designs, he would have to test maybe five or six devices before finding a working one. And because the yields were high, he could focus more on the design aspects. “Then you could start to make very complex designs - devices with many elements - with confidence,” he says.

Another benefit was that the performance of the silicon photonic circuit matched closely its simulation results. “Indium phosphide is so complex,” he says. “You have to worry about the composition effects and the etching is not that precise.” With silicon, in contrast, the dimensions and the refractive index are known with precision. “You can simulate and design very precisely, which made it [the whole process] richer,” says Doerr.

 

Silicon photonics is a disruptive technology because of its ability to integrate so many things together and still be high yield and get the raw performance 

 

After that first wafer run, Doerr continued to design both planar lightwave circuits and indium phosphide components at Bell Labs. But soon it was solely silicon photonics ICs.

Doerr views Acacia’s volume production of an integrated coherent transceiver - the transmit and receive optics on one chip, with a performance that matches discrete optical designs - as one of silicon photonics’ most notable achievements to date.

With a discrete component coherent design, you can use the best of each material, he explains, whereas with an integrated design, compromises are inevitable. “You can’t optimise the layer structure; each component has to share the wafer structure,” he says. Yet with silicon photonics, the design space is so powerful and high-yielding, that these compromises are readily overcome.

Doerr also describes a key moment when he realised the potential of silicon photonics for volume manufacturing.

He was reading an academic paper on grating couplers, a structure used to couple fibres to waveguides. “You can only make that in silicon photonics because you need a high vertical [refractive] index contrast,” he says. Technically, a grating coupler can also be made in indium phosphide but the material has to be cut from under the waveguide; this leaves the waveguide suspended in air.

When he first heard of grating couplers he assumed the coupling efficiency would be of the order of a few percent whereas in practice it is closer to 85 percent. “That is when I realised it is a very powerful concept,” he says.

 

Integration is key

Doerr pauses before giving measured answers to questions about silicon photonics, and his enthusiasm for the technology does not blinker him to the challenges it faces. However, his optimism regarding its future is clear.

“Silicon photonics is a disruptive technology because of its ability to integrate so many things together and still be high yield and get the raw performance,” he says. In the industry, silicon photonics has proven itself for such applications as metro telecommunications, but it faces significant competition from established technologies such as indium phosphide. It will require more channels to be integrated for the full potential of silicon photonics as a disruptive technology to emerge, says Doerr.

Silicon photonics also has an advantage over indium phosphide in that it can be integrated with electronic ICs using 2.5D and 3D packaging, saving cost, footprint and power. “If you are in the same material system then such system-in-package is easier,” he says. Also, silicon photonic integrated circuits do not require temperature control, unlike indium phosphide modulators, which saves power.

 

Areas of focus 

One silicon photonics issue is the need for an external laser. For coherent transceivers, it is better to separate the laser from the high-speed optics because the coherent DSP-ASIC and the photonic chips run hot while the laser requires temperature control.

For applications such as very short reach links, silicon photonics needs a laser source, and while there are many options for integrating the laser with the chip, a clear winning approach has yet to emerge. “Until a really low cost solution is found, it precludes silicon from competing with really low-cost solutions like VCSELs for very short reach applications,” he says.

Silicon photonic chip volumes are still many orders of magnitude lower than those of electronic ICs. But Acacia says foundries already have silicon photonics lines running, and as these foundries ramp volumes, costs, production times and node sizes will continually improve.

 

Opportunities   

The adoption of silicon photonics will increase significantly as more and more functions are integrated onto devices. For coherent designs, Doerr can foresee silicon photonics further reducing size, cost and power consumption, making coherent transceivers competitive with other optical transceiver technologies for distances as short as 2km.

“You can use high-order formats such as 256-QAM and achieve very high spectral efficiency,” says Doerr. Using such a modulation scheme would require fewer overall lasers to achieve significant transport capacities, improving the cost-per-bit performance for applications such as data centre interconnect. “Fibre is expensive so the more you can squeeze down a fibre, the better,” he says.
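The arithmetic behind Doerr’s point is straightforward. The sketch below assumes polarisation-multiplexed 256-QAM and an illustrative symbol rate - the interview gives neither - and ignores FEC overhead:

```python
import math

# Raw capacity of a single polarisation-multiplexed 256-QAM carrier.
# The 45-gigabaud symbol rate is illustrative, not from the interview.
bits_per_symbol = 2 * math.log2(256)   # two polarisations x 8 bits = 16
symbol_rate_gbaud = 45

print(bits_per_symbol * symbol_rate_gbaud)  # 720 Gbit/s raw from one laser
```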

Doerr also highlights other opportunities for silicon photonics beyond communications. Medical applications are one such area. He cites a post-deadline paper from Acacia at OFC 2016 on optical coherence tomography, which has similarities with the coherent technology used in telecom.

Longer term, he sees silicon photonics enabling optical input/output (I/O) between chips. As further evolutionary improvements are achieved, he can see lasers external to the chip being used to power such I/O. “That could become very high volume,” he says.

However, he expects 3D stacking of chips to take hold first. “That is the easier way,” he says.


Richard Soref: The new frontiers of silicon photonics

Silicon photonics luminaries series

Interview 4: Professor Richard Soref

John Bowers credits him with ‘kicking off’ silicon photonics some 30 years ago, while Andrew Rickman refers to him as the ‘founding father of silicon photonics’. An interview with Richard Soref.

 

It was fibre-optic communications that started Professor Richard Soref on the path to silicon photonics.

“In 1985, the only photonic chip that could interface to fibre was the III-V semiconductor chip,” says Soref. He wondered if an elemental chip such as silicon could be used, and whether it might even do a better job. He had read in a textbook that silicon is relatively transparent at the 1.30-micron and 1.55-micron wavelengths used for telecom and it inspired him to look at silicon as a material for optical waveguides.

Soref's interest in silicon was a combination of the potential of using the chip industry’s advanced manufacturing infrastructure for electro-optical integration and his own interest in materials. “I’m a science guy and I have curiosity and fascination with what the world of materials offers,” he says. “If I have an avenue like that, I like to explore where the physics takes us.”

In 1985 Soref built and experimented with waveguides based on undoped silicon resting on a doped silicon substrate. It turned out not to be the best choice for a waveguide, and in 1986 Soref proposed using a silicon-on-insulator waveguide instead, which has become the mainstream approach for the silicon photonics industry.

Silicon-on-insulator has a far greater refractive index contrast between the waveguide core and its cladding and is far less lossy. And while Soref didn’t build such structures, “it stimulated others to develop that major, major waveguide, so I’m proud of that”.

The original waveguide idea was not a wasted one, though. Soref and then research assistant, Brian Bennett, used the undoped-on-doped silicon waveguide structure to study and quantify free-carrier electro-modulation effects. These effects underpin the workings of the bulk of current silicon photonic modulators. Soref says their published academic paper has since been cited over 1,800 times.

Soref is approaching his 80th birthday and is a research professor at the University of Massachusetts in Boston. He has spent over 50 years researching photonics, silicon photonics and the broader topic of mid-infrared wavelengths and Group IV photonics, as well as spending five years researching liquid crystals for displays and electro-optical switching. For 27 years he was employed at the Air Force Research Laboratory. He has also worked at the Sperry Research Center and the MIT Lincoln Laboratory.

 

Applications go beyond telecom and optical interconnect, and perhaps the most important application is sensing

 

Group IV photonics

Soref’s research interests are broad, reflecting his fundamental interest in materials science. In more recent years he has focused on Group IV photonics, but not exclusively so.

The term silicon photonics is firmly entrenched in the global community, he says, a phrase that includes on-chip germanium photo-detectors and even, with heterogeneous integration, III-V materials. Group IV photonics is a superset of silicon photonics and includes silicon-germanium-tin (SiGeSn) materials as well as silicon carbide. Such materials will likely be used in the monolithic silicon chip of the future, he says.

He has published papers on alloys such as silicon germanium carbon and silicon germanium tin. “I was estimating what these never-before-seen materials would do: you could create new alloys, and how would those alloys behave?” says Soref.

Silicon germanium tin offers the possibility of a direct bandgap light emitter. “It is a richer material science space, with independent control of the bandgap and the lattice parameter,” says Soref.

Adding tin to the alloy lengthens the wavelength of operation, typically into the 1.5-micron to 5-micron range, covering the near infra-red and part of the mid infra-red region of the spectrum. “Applications go beyond telecom and optical interconnect, and perhaps the most important application is sensing,” says Soref.

The applications in this wavelength range include system-on-a-chip, lab-on-a-chip, sensor-on-a-chip and sensor-fusion-on-a-chip for such applications as chemical, biological, medical and environmental sensing. Such sensor chips could be in your smartphone and play an important role in the emerging Internet of Things (IoT). “Sensing could be a very important economic foundation for Group IV photonics,” says Soref.

And Soref does not stop there. He is writing a paper on Group III nitrides for ultraviolet and visible-light integrated photonics: “I think silicon and Group IV are limited to the near-, mid- and longwave infra-red”.

 

Challenges

Soref points to the work being done in developing commercial high-volume manufacturing: the use of 300mm silicon wafers, developing process libraries and perfecting devices for volume manufacturing. He welcomes AIM Photonics, the US public-private venture investing $610 million in photonics and manufacturing.

But he argues that there should also be an intellectual space for growth, “a wider space which is not so practical but which will become practical”. He cites the emerging areas of sensing and microwave photonics. “That is the frontier,” says Soref. “And the foundry work should not prevent that intellectual exploration.”

An important application area for microwave photonics is wireless, from 5GHz to 90GHz. Soref envisages a photonic integrated circuit (PIC) or an opto-electronic IC (OEIC) - featuring electronics and optics on-chip - that communicates with other entities, often via fibre but also wirelessly.

“That means RF (radio frequency) or microwave, and for microwave that requires a transmitter and receiver on the chip,” says Soref. Such a device would find use in the IoT and future smartphones.

Microwave designs in the past used an assemblage of discrete components to make a system on a board. These new microwave PICs or OEICs could perform many of the classical functions such as spectral analysis, optical control of a phased-array microwave antenna, microwave signal processing, and optical analogue-to-digital conversion (ADC) and optical digital-to-analogue conversion (DAC).

This is analogous to the convergence of computing and photonics, says Soref. In computing, the signal goes from the electrical domain to the optical and back, while for microwave photonics it will be conversions between the microwave and photonic domains on the chip.

There are also quantum-photonic applications: quantum computing, quantum cryptography and quantum metrology, where photonic devices could play a role.

 

Opportunities

Soref foresees three emerging opportunity areas for Group IV photonics in the next decade - sensors, microwave photonics, and the quantum and computing worlds - in addition to the existing markets of telecom and optical interconnect.

Soref is not sure that silicon photonics has yet reached its tipping point. “To make silicon photonics and Group IV photonics ubiquitous and pervasive, it takes a lot of investment and a lot of commercial results,” he says. “We have not yet arrived at that stage of economic foundation.”

 

New optical devices

Soref also highlights how continual advances in CMOS feature size, from 45nm down to 7nm, promise new photonic components that could become commonplace.

Soref cites the example of a silicon-on-insulator nanobeam. The nanobeam is a strip waveguide with air holes, in effect a one-dimensional photonic crystal lattice in a waveguide.

The nanobeam structure is of interest as it performs the same role as the micro-ring resonator, a useful optical building block used in such applications as modulation.

“The photonic crystal structure requires extreme control of dimensions to reduce unwanted scattering, so it needs very fine lithography,” says Soref. People have argued such structures are impractical due to the unrealistic dimensional control needed.

“But foundries have shown you can get a very high-quality photonic crystal in a silicon fab,” he explains. “This foundry advantage would enable new components that might have seemed too difficult or marginal on paper.”

Significant progress in silicon photonics may have been achieved since his first work in 1985 but, as Soref highlights, it is still early days when it comes to assessing the full significance of the technology.


Nokia’s PSE-2s delivers 400 gigabit on a wavelength

Nokia has unveiled what it claims is the first commercially announced coherent transport system to deliver 400 gigabits of data on a single wavelength. Using multiple 400-gigabit wavelengths across the C-band, 35 terabits of data can be transmitted.

Four hundred gigabit transmission over a single carrier is enabled using Nokia’s second-generation programmable Photonic Service Engine coherent processor, the PSE2, part of several upgrades to Nokia's flagship PSS 1830 family of packet-optical transport platforms.

Kyle Hollasch

“One thing that is clear is that performance will have a key role to play in optics for a long time to come, including distance, capacity per fibre, and density,” says Sterling Perrin, senior analyst at Heavy Reading.

This limits the appeal of the so-called “white box” trend for many applications in optics, he says: “We will continue to see proprietary advances that boost performance in specific ways and which gain market traction with operators as a result”.


The 1830 Photonic Service Switch

The 1830 PSS family comprises dense wavelength-division multiplexing (DWDM) platforms and packet-OTN (Optical Transport Network) switches.

The DWDM platform includes line amplifiers, reconfigurable optical add-drop multiplexers (ROADMs), transponder and muxponder cards. The 1830 platforms span the PSS-4, -8, -16 and the largest and original -32, while the 1830 PSS packet-OTN switches include the PSS-36 and the PSS-64 platforms. The switches include their own coherent uplinks but can be linked to the 1830 DWDM platforms for their line amps and ROADMs.   

The 1830 PSS upgrades include a 500-gigabit muxponder card for the DWDM platforms that features the PSE2, new ROADMs and line amplifiers that will support the L-band alongside the C-band to double fibre capacity, and the PSS-24x, which complements the two existing OTN switch platforms.

 

100-gigabit as a service  

In DWDM transmissions, 100-gigabit wavelengths are commonly used to transport multiplexed 10-gigabit signals. Nokia says it is now seeing increasing demand to transport 100-gigabit client signals.

“One hundred gigabit is becoming the new currency,” says Kyle Hollasch, director, optical marketing at Nokia. “No longer is the thinking of 100 gigabit just as a DWDM line rate but 100 gigabit as a service, being handed from a customer for transport over the network.” 

Current PSS 1830 platform line cards support 50-gigabit, 100-gigabit and 200-gigabit coherent transmission using polarisation-multiplexed binary phase-shift keying (PM-BPSK), quadrature phase-shift keying (PM-QPSK) and 16-state quadrature amplitude modulation (PM-16QAM), respectively. Nokia now offers a 500-gigabit muxponder card that aggregates and transports 100-gigabit client signals. The card has been available since the first quarter and several hundred have already been shipped.

“The challenge is not just to crank up capacity but to do so profitably,” says Hollasch. “Keeping the cost-per-bit down, the power consumption down while pushing towards the Shannon limit [of fibre] to carry more capacity.”

 Source: Nokia

Modulation formats

The PSE2 family of coherent processors comprises two designs: the high-end super-coherent PSE-2s and the compact low-power PSE-2c.

Nokia joins the likes of Ciena and Infinera in developing its own coherent ASICs, highlighting how optical transport requirements are best met using custom silicon. Infinera has also announced its latest-generation photonic integrated circuit, which supports up to 2.4 terabits.

The high-end PSE-2s is a significant enhancement of the PSE coherent chipset first announced in 2012. Implemented in 28nm CMOS, the PSE-2s has a power consumption similar to the original PSE’s, yet halves the power consumption per bit given its higher throughput.

The PSE-2s adds four modulation formats to the PSE’s existing three and supports two symbol rates: 32.5 gigabaud and 44.5 gigabaud. The modulation schemes and distances they enable are shown in the chart.

 


The 1.4-billion-transistor PSE-2s has sufficient processing performance to support two coherent channels. Each channel can implement a different modulation format if desired, or the two can be tightly coupled to form a super-channel. The only exception is the 400-gigabit single-wavelength format. Here the PSE-2s supports only one channel, implemented using a 45-gigabaud symbol rate and PM-64QAM. The 400-gigabit wavelength has a relatively short 100-150km reach, but this suits data centre interconnect applications where links are short and maximising capacity is key.

Nokia recently conducted a lab experiment in which it sent 31.2 terabits of data over 90km of standard single-mode fibre using 78 400-gigabit channels spaced 50GHz apart across the C-band. “We were only limited by the available hardware from reaching 35 terabits,” says Hollasch.

Using the 45-gigabaud rate and PM-16QAM enables two 250-gigabit channels. This is how the 500-gigabit muxponder card is achieved. The 250-gigabit wavelength has a reach of 900km, and this can be extended to 1,000km but at 200 gigabit by dropping to the 32-gigabaud symbol rate, as implemented with the current PSE chipset.

Nokia also offers 200 gigabit implemented using 45 gigabaud and 8-QAM. “The extra baud rate gets us [from 150 gigabit] to 200 gigabit; this is very valuable,” says Hollasch. The resulting reach is 2,000km and he expects this format to gain the most market traction.  

The PSE-2s, like the PSE, also implements PM-QPSK and PM-BPSK but with reaches of 3,000-5,000km and 10,000km, respectively.
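The raw line rates behind these formats follow from two polarisations times bits-per-symbol times the symbol rate. The sketch below pairs those raw rates with the net rates quoted above; the implied overhead (FEC plus framing) is our inference, not a Nokia figure:

```python
import math

# Raw (pre-FEC) line rate = 2 polarisations x bits/symbol x symbol rate.
# Net rates are the article's; the overhead split is inferred.
formats = {  # name: (constellation size, symbol rate in Gbaud, net Gbit/s)
    "PM-64QAM": (64, 45, 400),
    "PM-16QAM": (16, 45, 250),
    "PM-8QAM": (8, 45, 200),
    "PM-QPSK": (4, 32.5, 100),
}

for name, (points, gbaud, net) in formats.items():
    raw = 2 * math.log2(points) * gbaud
    print(f"{name}: {raw:.0f} Gbit/s raw for {net} Gbit/s net")

print(78 * 0.4)  # 31.2 Tbit/s: the 78-channel C-band experiment above
```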

The PSE-2s introduces a fourth modulation format dubbed set-partition QPSK (SP-QPSK). 

Standard QPSK uses four phase states, resulting in a 4-point constellation. With SP-QPSK, only three of the possible four constellation points are used for any given symbol. The downside of the approach is that fewer bits are carried per symbol and hence less data is transported, but the loss can be recovered using the higher 45-gigabaud symbol rate.

The benefit of SP-QPSK is its extended reach. “By properly mapping the sequence of symbols in time, you create a greater Euclidean distance between the symbol points,” says Hollasch. “What that gives you is gain.” This 2.5dB of extra gain compared to PM-QPSK equates to a reach beyond 5,000km. “That is the territory where most implementations use BPSK, and it also addresses a lot of sub-sea applications,” says Hollasch. “Using SP-QPSK [at 100 gigabit] also means fewer carriers and hence, it is more spectrally efficient than [50-gigabit] BPSK.”
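The rate recovery can be checked with the same arithmetic, assuming SP-QPSK carries three bits per pair of QPSK symbols (1.5 bits per symbol per polarisation) - a common set-partitioning arrangement, though not a detail the article confirms:

```python
# SP-QPSK's higher symbol rate recovers the capacity lost to set
# partitioning (1.5 bits/symbol assumed, versus QPSK's 2).
pm_qpsk_raw_gbps = 2 * 2.0 * 32.5   # 130 Gbit/s raw at the lower baud rate
sp_qpsk_raw_gbps = 2 * 1.5 * 45.0   # 135 Gbit/s raw at 45 Gbaud

print(pm_qpsk_raw_gbps, sp_qpsk_raw_gbps)  # both support a 100G channel
```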

 

The PSE-2c

The second coherent DSP-ASIC in the new family is the compact PSE-2c, also implemented in 28nm CMOS and designed for smaller, low-power metro platforms and metro-regional reaches.

The PSE-2c supports a 100-gigabit line rate using PM-QPSK and will be used alongside the CFP2-ACO line-side pluggable module. The PSE-2c consumes a third of the power of the current PSE operating at 100 gigabit. 

“We are putting the PSE2 [processors] in multiple form factors and multiple products,” says Hollasch.

The recent Infinera and Nokia announcements highlight the electronic processing versus photonic integration innovation dynamics, says Heavy Reading's Perrin. He notes how innovations in electronics are driving transmission across greater distances and greater capacities per fibre and finding applications in both long haul and metro networks as a result.

“Parallel photonic integration is a density play, but even Infinera’s ICE announcement is a combination of photonic integration and electronic processing advancements,” says Perrin. “In our view, electronic processing has taken a front seat in importance for addressing fibre capacity and transmission distance, which is why the need for parallel photonic integration in transport has not really spread beyond Infinera so far.”

The PSS-24x, showing its 24 400-gigabit line cards and three switch-fabric cards: two active and one for redundancy. Source: Nokia

PSS-24x OTN switch

Nokia has also unveiled its latest 28nm CMOS Transport Switch Engine, a 2.4-terabit non-blocking OTN switch chip that is central to its latest PSS-24x switch platform. Two such chips are used on a fabric card to achieve 4.8 terabits, and three such cards are used in the PSS-24x, two active cards and a third for redundancy. The result is 9.6 terabits of switching capacity instead of the current platforms' 4 terabits, while power consumption is halved.

Nokia says it already has a roadmap to 48 terabits of switching capacity. “The current generation [24x] shipping in just a few months is 400-gigabit per slot,” says Hollasch. The 24 slots that fit within the half chassis result in 9.6 terabits of switching capacity. However, Nokia's platform roadmap will achieve 1 terabit per slot by 2018-19; the backplane is already designed to support such higher speeds, says Hollasch. This would enable 24 terabits of switching capacity per shelf and, with two shelves in a bay, a total switching capacity of 48 terabits.
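The capacity arithmetic, as Hollasch lays it out, is summarised below:

```python
# PSS-24x switching capacity, from the figures in the article.
fabric_card_tbps = 2 * 2.4                 # two 2.4-terabit chips per card
active_fabric_tbps = 2 * fabric_card_tbps  # 9.6 Tbit/s; third card is spare

today_tbps = 24 * 400 / 1_000         # 9.6 Tbit/s per half chassis
shelf_2018_tbps = 24 * 1_000 / 1_000  # 24 Tbit/s at 1 terabit per slot
bay_tbps = 2 * shelf_2018_tbps        # 48 Tbit/s with two shelves per bay

print(active_fabric_tbps, today_tbps, shelf_2018_tbps, bay_tbps)
```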

The transport switch engine chip switches OTN only. It is not designed as a packet and OTN switch. “A cell-based agnostic switching architecture comes with a power and density penalty,” explains Hollasch, adding that customers prefer the lowest possible power consumption and highest possible density.

The result is a centralised OTN switch fabric with line-card packet switching. Nokia will introduce packet switching line cards next year that will support 300 gigabit per card. Two such cards will be ‘pair-able’ to boost capacity to 600 gigabit but Hollasch stresses that the PSS-24x will not switch packets through its central fabric.

 

Doubling capacity with the L-band

By extending the 1830 PSS platform to include the L-band, up to 70 terabits of data can be supported on a fibre, says Hollasch.

Nokia has developed a line card, available around the fourth quarter of this year, that supports both C-band and L-band amplification. The ROADM and 500-gigabit muxponder card for the L-band will be launched in 2017.

Once the amplification is available, operators can start future-proofing their networks. Then, when the L-band ROADMs and muxponder cards become available, operators can pay as they grow, extending wavelengths into the L-band once all 96 channels of the C-band are used, says Hollasch.


Professor Graham Reed: The calm before the storm

Silicon photonics luminaries series

Interview 3: Professor Graham Reed

Despite a half-century track record driving technology, electronics is increasingly calling upon optics for help. “It seems to me that this is a marriage that is really going to define the future,” says Graham Reed, professor of silicon photonics at the University of Southampton’s Optoelectronics Research Centre.

 

The optics alongside the electronics does not have to be silicon photonics, he says, but silicon as a photonics technology is attractive for several reasons. 

“What makes silicon photonics interesting is its promise to enable low-cost manufacturing, an important requirement for emerging consumer applications,” says Reed. And being silicon-based, it is much more compatible with CMOS electronics than other photonics technologies. “It probably means silicon photonics is going to win out,” he says.

 

From Surrey to Southampton

Reed has been active in silicon photonics for over 25 years. As an academic at the University of Surrey, his first Ph.D. student was Andrew Rickman, who went on to found Bookham Technology and is now CEO of Rockley Photonics. 

Rickman undertook the study of basic optical waveguide structures using silicon. “The first data we got, the waveguide losses were very high, 20 to 30dB per centimetre,” says Reed. “Within a year, we got the losses down to below 1dB per centimetre; that makes it viable.”

The research then broadened to include silicon modulators, a research topic Reed continues to this day. 

 

Everything about silicon photonics is about low cost

 

The optical modulator is silicon photonics’ biggest achievement to date, argues Reed. “We were working on modulators in 1991 that worked at 20 megahertz,” he says. “Intel’s Mario Paniccia ribbed me when they got [a modulator] to 1 gigahertz.”

The Surrey group was not focussing on telecom when it started. “I never believed in the early 1990s that these things were going to go as fast as they became,” says Reed. Partly that was because the early work used much larger waveguides and, to increase speed, the dimensions needed to shrink.

In 2012, Reed and a dozen colleagues moved from the University of Surrey to the University of Southampton.  Several factors led to the move. The University of Southampton was interested in the team, given its reputation and the rising importance of silicon photonics, while Reed was keen to make use of the university’s new on-site fabrication plant, which he describes as the best university fab in the UK and probably Europe. 

“We were increasingly frustrated with the fab facilities around the world,” says Reed. The team used multi-project wafers, where companies and institutions have their circuits made on a shared wafer. However, such multi-project wafers have a lower run priority.

“Foundries do a good job but they often take much longer to deliver [the designs] than they aim,” says Reed. Worst case, it can take over three years to receive the chip design back. Given a project cycle typically lasts three years, this is a non-starter, he says: “Having a fab that you have a lot of control over is a big attraction”. 

 

Research focus

Reed’s group is regularly approached by companies from all over the world. But it wasn't always like that. In the 1990s, getting funding to research silicon photonics was a challenge, he says.

The companies now contacting Reed’s group are either in the field and have a difficulty, or they want to enter the marketplace. “They want particular work done or a particular device worked upon,” he says.

Intel is one company that worked with Reed when it started its silicon photonics programme a dozen or so years ago.

Reed’s group’s research covers the development of individual optical components as well as systems. Much of the work is focussed on telecom and datacom, given that is where silicon photonics is most established, but the group is also conducting work using silicon photonics for longer wavelengths - 2 to 18 microns - known as the mid infra-red region. 

Mid infra-red is an emerging field, says Reed: “People have seen the success of existing silicon photonics and are applying it to longer wavelengths.”

Such wavelengths are suited for sensing applications. “A lot of nasties - chemicals you’d want to sense - have characteristic absorption lines in this longer wavelength range,” he says.

Things also become easier at the longer wavelengths because the dimensions of the silicon features are more relaxed. However, additional materials are required that are transparent at these longer wavelengths, and these platforms all need developing.  “Longer wavelengths equate to bigger waveguides; what gets more difficult are the sources and the detectors,” says Reed.

A third research activity his group is tackling is ongoing silicon photonics challenges such as wafer-scale testing, passive alignment, lowering power consumption and thermal stability issues.        

 

Optical device work

Reed cites a low-channel-count multiplexer as an example of his group’s research work on basic optical devices, with the goal of helping commercialise silicon photonics.

“One of the issues in silicon photonics is to make things reliable and high yield,” says Reed. “One way to look at that is you need simplicity.”

The group has developed an angled multi-mode interference (MMI) multiplexer suited for 4 or 8 channel designs.

“It is so simple,” says Reed. The multiplexer is made in a single etch step and is based on large multi-mode waveguides that are more resilient to fabrication errors and layer thickness variations. The design is also more thermally stable than single-mode waveguides.  

Another area is ring resonators - useful devices that can be used for a variety of tasks including modulation but which are sensitive to layer thickness variations as well as thermal stability issues. “If anyone is going to adopt ring resonators they need to find a way to make them athermal,” says Reed.  “And they need a way to tune or trim to operate them to the resonance they need.”

 

Systems work

The group’s systems work addresses some of the same issues as the large systems vendors. However, the group is careful in the topics it chooses given their more modest university resources. “We are looking at more complex modulation systems but probably not for long haul communications,” says Reed.

Another research activity is looking at alternative ways to combine components. Using silicon photonics for integration in the mid infra-red range may give a new lease of life to the lab-on-a-chip concept. “People have talked about it for a long while but it hasn't really happened,” says Reed. “If you can do these things in a reliable and low-cost manner, maybe disposable chips are viable again.”   

 

Silicon photonics challenges

Two current manufacturing challenges Reed highlights are the issues of passive alignment and wafer-scale testing.

Coupling the laser to a fibre or the silicon chip’s waveguide using passive alignment remains an ongoing challenge. “Everything about silicon photonics is about low cost,” says Reed. At present to attach a laser, it is typically turned on and aligned to the chip’s waveguide. This requires manual intervention and is time-consuming.

“The ideal scenario is to put a fibre down and it couples to the waveguide or laser and somehow you have aligned it,” he says. The challenge is the discrepancy in dimensions between the 10-micron fibre core and the waveguide, which is typically between 0.35 and 0.5 microns wide. Work is ongoing to use mode converters or grating couplers such that the resulting optical loss is low enough to make passive alignment viable.

 

All these events are consistent with this field of technology pointing to mass markets 

 

Wafer-scale testing remains another challenge. Grating couplers are one way designs can be tested while still on the silicon wafer. But these typically allow only the whole circuit to be tested - either it works or it doesn’t - not individual components. “If you are going to mimic the successes of electronics, you need to test more comprehensively than that,” says Reed.

His group has developed an erasable grating that can be placed either side of a critical component to test it. These gratings can then be removed from the final circuit by using local laser annealing. 

Reed expects the industry to overcome all these manufacturing challenges: “But it still means somebody has to have the brilliant idea”.

He is also somewhat surprised that there are not more silicon photonics products on the market, especially considering the huge investment in the technology made by some of the larger companies over the last decade.

He describes what is happening now as silicon photonics’ quiet period. Partly it is due to vendors working to commercialise their technologies; partly it is because the systems vendors developing next-generation products are still evaluating the various technologies. “Until somebody jumps and that market takes off - and somebody will jump,” he says. “Then there will be ferocious activity.”

 

Opportunities  

Reed is measured when assessing the future opportunities for the technology.

“It is not something that we strategise about - it is not what we do - but we get insights from time to time because of the people we work with and what they want,” he says. “The crucial thing is what facilitates the mass market because silicon photonics is really trying to bring photonics to the mass market.”

Reed does believe silicon photonics is disruptive: “If you look at the origins of what a disruptive technology is, it is a technology that works in one field but then it performs so well, it crosses the boundary into other areas”.

Silicon photonics was initially regarded as a short-reach technology but once the performance of its modulators improved dramatically, the technology crossed the boundary into long-haul research, he notes. “That is the definition of a disruptive technology,” he says.

He also believes the technology has passed its tipping point. As evidence, he points to the investment made by the large companies and says it is inevitable that they will launch products: “So in that sense, the tipping point has already been and gone”.

In addition, he highlights the American Institute for Manufacturing Integrated Photonics (AIM Photonics) venture, the $610 million publicly and privately funded initiative set up in 2015 to advance silicon photonics-based manufacturing.

“All these events are consistent with this field of technology pointing to mass markets,” says Reed. “If this was going to be indium phosphide that did that, why did not all that activity happen years ago?”


Mario Paniccia: We are just at the beginning

Silicon photonics luminaries series
Interview 2: Mario Paniccia
 
Talking about his time heading Intel’s silicon photonics development programme, Mario Paniccia spotlights a particularly creative period between 2002 and 2008.
 
During that time, his Intel team had six silicon photonics papers published in the journals Nature and Nature Photonics, and held several world records: the fastest modulator (first at 1 gigabit, then 10 gigabit and finally 40 gigabit), the first pulsed and continuous-wave Raman silicon laser, the first hybrid silicon laser, developed with the University of California, Santa Barbara, and the fastest silicon-germanium photo-detector, operating at 40 gigabit.
 
“These [achievements] were all in one place, labs within 100 yards of each other; you had to pinch yourself sometimes,” he says.
 

It got to the stage where Intel’s press relations department would come and ask what the team would be announcing in the coming months. “‘Hey guys,’ I said, ‘it doesn’t work that way’.”

Since leaving Intel last year, Paniccia has been working as a consultant and strategic advisor. He is now exploring opportunities for silicon photonics but in segments other than telecom and datacom.

“I didn't want to go into developing transceivers for other big companies and compete with my team's decade-plus of development; I spent 20 years at Intel,” he says.

 

Decade of development

Intel’s silicon photonics work originated in the testing of its microprocessors using a technique known as laser voltage probing. Infra-red light is applied to the back side of the silicon to make real-time measurements of the chip’s switching transistors.

For Paniccia, it raised a question: if it is possible to read transistor switching using light, can communications between silicon devices also be done optically? And can it be done in the plane of the silicon rather than through its back side?

In early 2000 Intel started working with academic Graham Reed, then at the University of Surrey, whom Paniccia describes as one of the world leaders in silicon photonics devices. “We started with simple waveguides and it just progressed from there,” he says.

The Intel team set the target of developing a silicon modulator working at 1 gigahertz (GHz); at the time, the fastest silicon modulator operated at 10 megahertz. “Sometimes leadership is about pushing things out and putting a stake in the ground,” he says.

It was Intel’s achievement of a working 1GHz silicon modulator that led to the first paper in Nature. And by the time the paper was published, Intel had the modulator working at 2GHz. The work then progressed to developing a 10 gigabit-per-second (Gbps) modulator and then broadened to include developing other silicon photonics building-block devices that would be needed alongside the modulator – the hybrid silicon laser, the photo-detector and other passive devices needed for an integrated transmitter.

 

There is a difference between proving the technology works and making a business out of it

 

Once 10Gbps was achieved, the next milestone was 20Gbps and then 40Gbps. When the building-block devices achieved operation in excess of 40Gbps, Intel’s attention turned to using them in integrated designs, the focus of its work between 2010 and 2012. Intel chose to develop a four-channel 40Gbps (4x10 gigabit) transceiver using four-wavelength coarse WDM, which ended up working at 50Gbps (4x12.5 gigabit), and then, most recently, a 100Gbps transceiver.

He says the same Intel team is no longer talking about 50Gbps or 100Gbps but about how to get multiple terabits coming out of a chip.

 

Status

Paniccia points out that in little more than a decade, the industry has gone from not knowing whether silicon could be used to make basic optical functions such as modulators and photo-detectors, to getting them to work at speeds in excess of 40Gbps. “I’d argue that today the performance is close to what you can get in III-V [compound semiconductors],” he says.

He believes silicon photonics is the technology of the future, it is just a question of when and where it is going to be applied: “There is a difference between proving the technology works and making a business out of it”.

In his mind, these are the challenges facing the industry: proving silicon photonics can be a viable commercial technology and determining the right places to apply it.

For Paniccia, the 100-gigabit market is a key market for silicon photonics. “I do think that 100 gigabit is where the intercept starts, and then silicon photonics becomes more prevalent as you go to 200 gigabit, 400 gigabit and 1 terabit,” he says.

So has silicon photonics achieved its tipping point?

Paniccia defines the tipping point for silicon photonics as when people start believing the technology is viable and are willing to invest. He cites the American Institute for Manufacturing Integrated Photonics (AIM Photonics) venture, the $610 million publicly and privately funded initiative set up in 2015 to advance silicon photonics-based manufacturing. Other examples include the silicon photonics prototyping service coordinated by nano-electronics research institute imec in Belgium, and global chip-maker STMicroelectronics becoming a silicon photonics player having developed a 12-inch wafer manufacturing line.

 

Instead of one autonomous LIDAR system in a car, you could have 20 or 50 or 100 sprinkled throughout your vehicle

 

“All these are places where people not only see silicon photonics as viable but are investing significant funds to commercialise the technology,” says Paniccia. “There are numerous companies now selling commercialised silicon photonics, so I think the tipping point has passed.”

Another indicator that the tipping point has happened, he argues, is that people are not spending their effort and their money solely on developing the technology but are using CMOS processes to develop integrated products.

“Now people can say, I can take this process and build integrated devices,” he says. “And when I put it next to a DSP, or an FPGA, or control electronics or a switching chip, I can do things that you couldn't do next to bulky electronics or bulky photonics.”

It is this combination of silicon photonics with electronics that promises greater computing power and performance, and lower power consumption, he says - a view shared by another silicon photonics luminary, Rockley Photonics CEO, Andrew Rickman.

Moreover, the opportunities for integrated photonics are not confined to telecom and datacom. “Optical testing systems for spectroscopy today are a big table of stuff - lasers, detectors, modulators and filters,” says Paniccia. Now all these functions can be integrated on a chip for such applications as gas sensing, and the integrated photonics device can then be coupled with a wireless chip for Internet of Things applications.

The story is similar with autonomous vehicle systems that use light detection and ranging (LIDAR) technology. “These systems are huge, complicated, have a high power consumption, and have lots of lasers that are spinning around,” he says. “Now you can integrate that on a chip with no moving parts, and instead of one autonomous LIDAR system in a car, you could have 20 or 50 or 100 sprinkled throughout your vehicle.”

 

Disruptive technology

Paniccia is uncomfortable referring to silicon photonics as a disruptive technology. He believes disruption is a term that is used too often.

Silicon photonics is a technology that opens up a lot of new possibilities, he says, as well as a new cost structure and the ability to produce components in large volume. But it doesn’t solve every problem.

The focus of the optical vendors is very much on cost. For markets such as the large-scale data centre, it is all about achieving the required performance at the right cost for the right application. Packaging and testing still account for a significant part of the device's overall cost and that cannot be forgotten, he says.

Paniccia thus expects silicon photonics to co-exist with the established technologies of indium phosphide and VCSELs in the near term.

“It is all about practical decisions based on price, performance and good-enough solutions,” he says, adding that silicon photonics has the opportunity to be the mass market solution and change the way one thinks about where photonics can be applied.

“Remember we are just at the beginning and it will be very exciting to see what the future holds.” 


Optical integration and silicon photonics: A view to 2021

LightCounting Market Research’s recent report on optical integration investigates the global market opportunity for integrated optical components including silicon photonics for the next five years. An interview with LightCounting CEO and report author, Vladimir Kozlov. 

 

LightCounting’s report on photonic integration has several notable findings. The first is that only one in 40 optical components sold in the datacom and telecom markets is an integrated device, yet such components account for a third of total revenues.

Another finding is that silicon photonics will not have a significant market impact in the next five years to 2021, although its size will grow threefold in that time.

By 2021, one in 10 optical components will be integrated, accounting for 40% of the total market, and silicon photonics will have become a $1 billion industry.

 

Integrated optics

“Contrary to the expectation that integration is helping to reduce the cost of components, it is only being used for very high-end products,” says Vladimir Kozlov, CEO of LightCounting. 

He cites the example of the cost-conscious fibre-to-the-home market which, despite boasting 100 million units in 2015 - the highest volumes of any one market - uses discrete parts for its transceivers. “There is very little need for optical integration in this high-volume, low-cost market,” he says.

Where integration is finding success is where it benefits device functionality. “Where it takes the scale of components to the next level, meaning much more sophisticated designs than just co-packaged discrete parts,” says Kozlov. It is because optical integration is applied to high-end, costlier components that revenues are high despite volumes being only 2.4% of the total market.
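A quick calculation using only the two shares quoted above makes the point: if integrated devices are 2.4% of units but a third of revenues, their implied average selling price is roughly 20 times that of discrete parts. The arithmetic, sketched for illustration:

    # Illustrative arithmetic using only the two shares quoted above.
    unit_share = 0.024        # integrated devices' share of units shipped
    revenue_share = 1 / 3     # integrated devices' share of revenues

    # Average-selling-price ratio implied by the two shares.
    asp_ratio = (revenue_share / (1 - revenue_share)) / (unit_share / (1 - unit_share))
    print(f"Implied integrated vs discrete ASP ratio: ~{asp_ratio:.0f}x")  # ~20x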

 

Defining integration

LightCounting is liberal in its definition of an integrated component. An electro-absorption modulated laser (EML), where the laser and modulator are on the same chip, is considered an integrated device. “It was developed 20 years ago but is just reaching prime time now with line rates going to 25 gigabit,” says Kozlov.

A design that integrates multiple laser chips into a transceiver, such as a 4x10-gigabit module, is also considered integrated. “There is some level of integration; it is more sophisticated than four TO-cans,” says Kozlov. “But you could argue it is borderline co-packaging.”

LightCounting forecasts that integrated products will continue to be used for high-end designs in the coming five years. This runs counter to the theory of technological disruption where new technologies are embraced at the low end first before going on to dominate a market.  

“We see it continuing to enter the market for high-end products simply because there is no need for integration for very simple optical parts,” says Kozlov.        

 

Silicon photonics 

LightCounting does not view silicon photonics as a disruptive technology, but Kozlov acknowledges that, while the technology has performance disadvantages compared with traditional technologies such as indium phosphide and gallium arsenide, its optical performance is continually improving. “That may still be consistent with the theory of technological disruption,” he says.

 

There are all these concerns about challenges but silicon photonics does have a chance to be really great


The market is also developing in a way that plays to silicon photonics’ strengths. One such development is the need for higher-speed interfaces, driven by large-scale data centre players such as Microsoft. “Their appetite increases as the industry is making progress,” says Kozlov. “Six months ago they were happy with 100 gigabit, now they are really focused on 400 gigabit.”

Going to 400-gigabit interfaces will need 4-level pulse-amplitude modulation (PAM4) transmitters, providing new ground for competition between indium phosphide, VCSELs and silicon photonics, says Kozlov. Silicon photonics may even have an edge, according to results from Cisco, whose silicon photonics-based modulators were shown to work well with PAM4. This is where silicon photonics could even take a market lead: in 400-gigabit designs that require multiple PAM4 transmitters on a chip, says LightCounting.
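PAM4 doubles a lane’s data rate without raising its symbol rate: each symbol carries two bits across four amplitude levels, so a 25-gigabaud lane delivers 50 gigabits per second, and eight such lanes reach 400 gigabit. A minimal sketch of one common Gray-coded mapping - illustrative only, not Cisco’s implementation:

    # One common Gray-coded mapping of bit pairs to the four PAM4 levels.
    PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

    def encode_pam4(bits):
        """Map a flat bit list (even length) to PAM4 amplitude levels."""
        return [PAM4_LEVELS[pair] for pair in zip(bits[::2], bits[1::2])]

    # Two bits per symbol: eight bits become four symbols.
    print(encode_pam4([0, 0, 1, 0, 1, 1, 0, 1]))  # [-3, 3, 1, -1]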

Another promise silicon photonics could deliver, although yet to be demonstrated, is the combination of optics and electronics in one package. Such next-generation 3D packaging, if successful, could change things more dramatically than LightCounting currently anticipates, says Kozlov.

“This is the interesting thing about technology, you never really know how successful it will be,” says Kozlov. “There are all these concerns about challenges but silicon photonics does have a chance to be really great.”

But while LightCounting is confident the technology will prove successful sooner or later, getting businesses that use the technology to thrive will require overcoming a completely different set of challenges.

“It is a challenging environment,” warns Kozlov. “There is probably more risk on the business side of things now than on the technology side.”  


Tackling system design on a data centre scale

Silicon photonics luminaries series

Interview 1: Andrew Rickman

Silicon photonics has been a recurring theme in the career of Andrew Rickman. First, as a researcher looking at the feasibility of silicon-based optical waveguides, then as founder of Bookham Technology, and after that as a board member of silicon photonics start-up, Kotura.

 

Andrew Rickman

Now CEO of start-up Rockley Photonics, Rickman is using silicon photonics alongside a custom ASIC and software to tackle a core problem in the data centre: how to connect more and more servers in a cost-effective and scalable way.

 

Origins

As a child, Rickman attended the Royal Institution Christmas Lectures given by Eric Laithwaite, a popular scientist who was also a professor of electrical engineering at Imperial College. As an undergraduate at Imperial, Rickman was reacquainted with Professor Laithwaite who kindled his interest in gyroscopes.

“I stumbled across a device called a fibre-optic gyroscope,” says Rickman. “Within that I could see people starting to use lithium niobate photonic circuits.” It was investigating the gyroscope design and how clever it was that made Rickman wonder whether the optical circuits of such a device could be made using silicon rather than exotic materials like lithium niobate.

“That is where the idea triggered, to look at the possibility of being able to make optical circuits in silicon,” he says.

 

If you try and force a photon into a space shorter than its wavelength, it behaves very badly


In the 1980s, few people had thought about silicon in such a context. That may seem strange today, he says, but silicon was not a promising candidate material. “It is not a direct band-gap material - it was not offering up the light source, and it did not have a big electro-optic effect like lithium niobate, which was good for modulators,” he says. “And no one had demonstrated a low-loss single-mode waveguide.”

Rickman worked as a researcher at the University of Surrey’s physics department with such colleagues as Graham Reed to investigate whether the trillions of dollars invested in the manufacturing of silicon could also be used to benefit photonic circuits and in particular whether silicon could be used to make waveguides. “The fundamental thing one needed was a viable waveguide,” he says.

Rickman even wrote a paper with Richard Soref who was collaborating with the University of Surrey at the time. “Everyone would agree that Richard Soref is the founding father of the idea - the proposal of having a useful waveguide in silicon - which is the starting point,” says Rickman. It was the work at the University of Surrey, sponsored by Bookham which Rickman had by then founded, that demonstrated low-loss waveguides in silicon.

 

Fabrication challenges

Rickman argues that not having a background in CMOS processes has been a benefit. “I wasn’t dyed-in-the-wool committed to CMOS-type electronics processing,” he says. “I looked upon silicon technology as a set of machine-shop processes for making things.”

Looking at CMOS processing completely afresh and designing circuits optimised for photonics yielded Bookham a great number of high-performance products, he says. In contrast, the industry’s thrust has been very much a semiconductor CMOS-focused one. “People became interested in photonics because they just naturally thought it was going to be important in silicon, to perpetuate Moore’s law,” says Rickman.

You can use the structures and much of the CMOS processes to make optical waveguides, he says, but the problem is you create small structures - sub-micron - that guide light poorly. “If you try and force a photon into a space shorter than its wavelength, it behaves very badly,” he says. “In microelectronics, an electron has got a wavelength that is one hundred times smaller than the features it is using.”

The results include light being sensitive to interface roughness and to manufacturing tolerances - the width, height and composition of the waveguide. “At least an order of magnitude more difficult to control than the best processes that exist,” says Rickman.
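Rough numbers illustrate Rickman’s point - these are representative values, not figures he quotes:

    # Representative values, not from the interview.
    wavelength_nm = 1550      # telecom wavelength in vacuum
    n_silicon = 3.48          # silicon refractive index near 1550nm
    waveguide_width_nm = 450  # a typical single-mode silicon wire waveguide

    in_material_nm = wavelength_nm / n_silicon
    print(f"Photon wavelength inside silicon: ~{in_material_nm:.0f} nm")
    # ~445nm: the guide is no wider than the wavelength it must confine,
    # so nanometre-scale sidewall roughness and width errors perturb the mode.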

“Our [Rockley’s] waveguides are one thousand times more relaxed to produce than the competitors’ smaller ones,” he says. “From a process point of view, we don’t need the latest CMOS node, we are more a MEMS process.”

 

If you take control of enough of the system problem, and you are not dictated to in terms of what MSA or what standard that component must fit into, and you are not competing in this brutal transceiver market, then that is when you can optimise the utilisation of silicon photonics 

 

Rickman stresses that small waveguides do have merits - they go round tighter bends, and their smaller-dimensioned junctions make for higher-speed components. But using very large features solves the ‘fibre connectivity problem’, and Rockley has come up with its own solutions to achieve higher-speed devices and dense designs.

“Bookham was very strong in passive optics and micro-engineered features,” says Rickman. “We have taken that experience and designed a process that has all the advantages of a smaller process - speed and compactness - as well as all the benefits of a larger technology: the multiplexing and demultiplexing for doing dense WDM, and we can make a chip that already has a connector on it.”

 

Playing to silicon photonics’ strengths

Rickman believes that silicon photonics is a significant technological development: “It is a paradigm shift; it is not a linear improvement”. But what is key is how silicon photonics is applied and the problem it is addressing.

To make an optical component for an interface standard or a transceiver MSA using silicon photonics, or to use it as an add-on to semiconductors - a ‘band-aid’ - to prolong Moore’s law, is to undersell its full potential. Instead, he recommends using silicon photonics as one element - albeit an important one - in an array of technologies to tackle system-scale issues.

“If you take control of enough of the system problem, and you are not dictated to in terms of what MSA or what standard that component must fit into, and you are not competing in this brutal transceiver market, then that is when you can optimise the utilisation of silicon photonics,” says Rickman. “And that is what we are doing.” In other words, taking control of the environment that the silicon sits in.

 

It [silicon photonics] is a paradigm shift; it is not a linear improvement 

 

Rockley’s team has been structured with a view to tackling the system-scale problem of interconnecting servers in the data centre. It comprises computer scientists, CMOS designers - digital and analogue - and silicon photonics experts.

Knowing what can be done with these technologies, and organising them accordingly, allows the problems caused by the ‘exhaustion of Moore’s law’ and the resulting input/output (I/O) issues to be overcome. “Not how you apply one technology to make up for the problems in another technology,” says Rickman.

 

The ending of Moore’s law

Moore’s law continues to deliver a doubling of transistors every two years but the associated scaling benefits, such as the halving of power consumed per transistor, no longer apply. As a result, while Moore’s law continues to grow the gate count that drives greater computation, overall power consumption is no longer constant.
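The break can be sketched with the classical dynamic-power relation P ~ N x C x V^2 x f - a simplified model of my own framing, not Rockley’s analysis. Under ideal Dennard scaling each node shrinks capacitance and voltage together and chip power stays flat; with voltage scaling stalled, doubling the transistor count pushes power up:

    # Simplified model; values are node-to-node ratios, not absolute watts.
    def relative_power(n, c, v, f):
        """Dynamic power scales as N * C * V^2 * f (all relative to last node)."""
        return n * c * v**2 * f

    k = 0.7  # classic ~0.7x linear shrink per process node
    print(relative_power(2.0, k, k, 1/k))   # ideal Dennard: ~0.98, power flat
    print(relative_power(2.0, k, 1.0, 1.0)) # voltage stalled: 1.4x more power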

Rickman also points out that the I/O - the number of connections on and off a chip - is not doubling with transistor count. “I/O may be going from 25 gigabit to 50 gigabit using PAM4 but there are many challenges and the technology has yet to be demonstrated,” he says.

The challenge facing the industry is that increasing the I/O rate inevitably increases power consumption. “As power consumption goes up, it also equates to cost,” says Rickman. That is clearly unwelcome, he says, but it is not the only issue: as power goes up, you cannot fully benefit from the doubling of transistor count, so things cannot be packed more densely.

“You are running into the end of Moore’s law and you don’t get the benefit of reducing space and cost because you’ve got to bolt on all these other things, as it is very difficult to get all these signals off-chip,” he says.

This is where tackling the system as a whole comes in. You can look at microelectronics in isolation and use silicon photonics for chip-to-chip communications across a printed circuit board to reduce the electrical losses through the copper traces. “A good thing to do,” stresses Rickman. Or you can address, as Rockley aims to do, Moore’s law and the I/O limitations within a complete system the size of the data centre, linking hundreds of thousands of computers. “Not the same way you’d solve an individual problem in an individual device,” says Rickman.

 

Rockley Photonics

Rockley Photonics has already demonstrated all the basic elements of its design. “That has gone very well,” says Rickman.

The start-up has stated its switch design uses silicon photonics for optical switching and that the company is developing an accompanying controller ASIC. It has also developed a switching protocol to run on the hardware. Rockley’s silicon photonics design performs multiplexing and demultiplexing, suggesting that dense WDM is being used as well as optical switching.

Rockley is a fabless semiconductor company and will not be building systems. Partly this is because it is addressing the data centre, a market that has evolved differently from telecoms: it has established switch vendors and white-box manufacturers. As such, Rockley will provide its chipset-based reference design, its architecture IP and the software stack for its customers. “Then, working with the customer’s contract manufacturer, we will implement the line cards and the fabric cards in the format that the particular customer wants,” says Rickman.

The resulting system is designed as a drop-in replacement for the switches the large-scale data centre players have already deployed, yet will be cheaper, more compact and consume less power, says Rockley.

“They [the data centre operators] can scale the way they do at the moment, or they can scale with our topology,” says Rickman.

The start-up expects to unveil its technology by the end of the year.

