Ayar Labs advances I/O and pens GlobalFoundries deal

Silicon photonics start-up Ayar Labs has entered into a strategic agreement with semiconductor foundry GlobalFoundries.

Alexandra Wright-Gladstein

Ayar Labs will provide GlobalFoundries with its optical input-output (I/O) technology. In return, the start-up will gain early access to the foundry's 45nm CMOS process being tailored for silicon photonics.

GlobalFoundries has also made an investment of an undisclosed amount in the start-up.

“We gain, first and foremost, a close relationship with GlobalFoundries as we qualify our product for customers,” says Alexandra Wright-Gladstein, co-founder and CEO of Ayar Labs. “That will help us speed up availability of our product and have their weight of support behind us.”

 

Strategy

Ayar Labs is bringing to market technology developed by academics originally at MIT. The research group developed a way to manufacture silicon photonics components using a standard silicon-on-insulator (SOI) CMOS process. The work resulted in a novel dual-core RISC-V microprocessor demonstrator that used optical I/O to send and receive data, published in the journal Nature in December 2015.

Ayar Labs is using its optical I/O technology to address the high-performance computing and data centre markets. The optical I/O reaches up to 2km, spanning chip-to-chip communications through to linking equipment between the buildings of a large data centre.

The start-up will offer a die - a chiplet - that can be integrated within a multi-chip module, as well as a high-capacity 3.2-terabit optical module.

“We are aggregating the capacity of 4, 8 or 16 pluggable transceivers into a single module to share the cost of production at such high data rates,” says Wright-Gladstein. “This makes us competitive [for applications] where a pluggable transceiver is not.” Offering a chiplet and a high-density optical module on a board will bring to the marketplace the benefits companies are looking for if they are to move from copper to optics, she says.
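As a rough illustration of the aggregation arithmetic, the sketch below divides the module capacity by the transceiver counts quoted; the resulting per-transceiver rates are implied figures for illustration, not numbers given by Ayar Labs.

```python
# Illustrative only: how a 3.2-terabit module could aggregate the capacity
# of 4, 8 or 16 pluggable transceivers (per-transceiver rates are implied).
module_capacity_gbps = 3200

for count in (4, 8, 16):
    per_transceiver_gbps = module_capacity_gbps // count
    print(f"{count} transceivers x {per_transceiver_gbps} Gbps = {module_capacity_gbps} Gbps")
```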

Ayar Labs will also license its technology. “Our goal is to create an ecosystem for optical I/O for chips,” says Wright-Gladstein.

 

 

Technology

Ayar Labs has been a customer of GlobalFoundries for several years, using its existing 45nm SOI CMOS process to make devices as part of the foundry’s multi-project wafer service. The start-up will use the same 45nm CMOS process to make its first product. The CEO points out that using an unmodified electronics process introduces tight design constraints; no new materials can be introduced or layer thicknesses modified. 

The start-up will also support GlobalFoundries in the development of its 45nm CMOS process optimised for silicon photonics. “The new process is more geared to traditional applications of optics such as optical transceivers for longer-distance communications,” says Wright-Gladstein.

 

Our goal is to create an ecosystem for optical I/O for chips

 

The intellectual property of Ayar Labs includes a micro-ring resonator optical modulator that is tiny compared with a Mach-Zehnder modulator. An issue with a micro-ring resonator is its sensitivity to temperature and manufacturing variations. Ayar Labs' ability to design the ring resonator using standard CMOS means control circuitry can be added to ensure the modulator's stability.

Ayar Labs has advanced its technology since the publication of the 2015 Nature paper. It has changed the operating wavelength of its optics from 1180nm to the standard 1310nm. It has also increased the speed of optical transmission from 2.5 to 25 gigabits-per-second (Gbps). The start-up expects to be able to extend the data rate to 50Gbps and even 100Gbps using 4-level pulse-amplitude modulation (PAM-4). The company has already demonstrated PAM-4 technology working with its optics. 
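A back-of-envelope sketch of the rate arithmetic: PAM-4 carries two bits per symbol versus one for conventional NRZ signalling, so the line rate is the symbol rate multiplied by the bits per symbol. The baseline modulation and symbol rates below are assumptions for illustration, not details Ayar Labs has specified.

```python
# Line rate = symbol rate x bits per symbol (coding overheads ignored).
def line_rate_gbps(symbol_rate_gbaud, bits_per_symbol):
    return symbol_rate_gbaud * bits_per_symbol

print(line_rate_gbps(25, 1))   # 25 Gbps: the demonstrated rate, assuming NRZ
print(line_rate_gbps(25, 2))   # 50 Gbps: PAM-4 at the same symbol rate
print(line_rate_gbps(50, 2))   # 100 Gbps: PAM-4 at double the symbol rate
```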

The company also has wavelength-division multiplexing technology, using 8 wavelengths on a fibre; the original microprocessor demonstrator used only one wavelength. “We have 8 [micro-resonator] rings that lock on the transmit side and 8 rings that lock on the receive side,” says Wright-Gladstein. The company expects to extend the number of working wavelengths to 16 and even 32.
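A minimal sketch of how the wavelength count scales per-fibre capacity, assuming each wavelength carries the 25Gbps cited above; the aggregate figures are illustrative, not Ayar Labs' stated roadmap.

```python
# Aggregate capacity per fibre = number of wavelengths x per-wavelength rate.
per_wavelength_gbps = 25  # assumed from the rate quoted above
for wavelengths in (8, 16, 32):
    print(f"{wavelengths} wavelengths -> {wavelengths * per_wavelength_gbps} Gbps per fibre")
```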

“We believe this is the process of the future because it can scale,” she says.

 

A factor of 10

Wright-Gladstein says the company's technology delivers a tenfold improvement across several metrics when compared with copper interconnect.

Typically, a 25Gbps electrical interface occupies 1mm² of chip area, whereas Ayar Labs can fit more than 250Gbps - potentially much more - into the same area. The use of WDM technology also means the amount of data crossing the chip's edge is at least 10 times greater.
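A minimal sketch of the bandwidth-density comparison those figures imply, treating 250Gbps as a lower bound for the optical I/O.

```python
# Bandwidth density in Gbps per square millimetre of chip area.
electrical_density = 25 / 1.0    # ~25 Gbps in 1 mm^2
optical_density = 250 / 1.0      # >250 Gbps in the same area (lower bound)
print(f"improvement: at least {optical_density / electrical_density:.0f}x")
```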

 

The energy efficiency for the I/O is also between 5 times and 20 times greater than copper

 

The latency - how long it takes a signal to arrive at the receiver from the transmitter - is also improved tenfold. The fastest electrical interfaces at 56Gbps that use PAM-4 require forward-error correction, which adds 100ns to the latency. Sending light 3m between racks takes 10ns, a tenth of the time. And more wavelengths can be added, rather than resorting to PAM-4, to avoid adversely impacting latency. "That matters for HPC customers," she says.
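A back-of-envelope check of the propagation figure; the fibre refractive index below is an assumption, and in fibre the delay comes out closer to 15ns, still an order of magnitude below the FEC penalty.

```python
# Propagation delay over 3 m versus the ~100 ns added by forward-error correction.
C_VACUUM = 3.0e8          # speed of light in vacuum, m/s
DISTANCE_M = 3.0
FEC_PENALTY_NS = 100.0

for label, refractive_index in (("free space", 1.0), ("optical fibre (n ~ 1.47)", 1.47)):
    delay_ns = DISTANCE_M * refractive_index / C_VACUUM * 1e9
    print(f"{label}: {delay_ns:.0f} ns of flight time vs {FEC_PENALTY_NS:.0f} ns of FEC latency")
```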

The energy efficiency for the I/O is also between 5 times and 20 times greater than copper.

Ayar Labs has also developed an integrated laser module that provides the light sources for its optical I/O. Multiple lasers are integrated on a single die and the module outputs several wavelengths of light on several fibres.

The start-up claims the overall optical I/O design is simplified as there is no attachment of laser dies to the silicon and there are no attached driver chips. The result is a die that is flip-chip attached, allowing the use of standard high-volume CMOS packaging techniques.

First samples are expected sometime this year, with general product availability starting in 2019.

Meanwhile, GlobalFoundries is expected to offer the optical I/O as part of its 45nm silicon photonics process library in 2019.  


Tackling system design on a data centre scale

Silicon photonics luminaries series

Interview 1: Andrew Rickman

Silicon photonics has been a recurring theme in the career of Andrew Rickman. First, as a researcher looking at the feasibility of silicon-based optical waveguides, then as founder of Bookham Technology, and after that as a board member of silicon photonics start-up Kotura.

 

Andrew Rickman

Now, as CEO of start-up Rockley Photonics, Rickman and his company are using silicon photonics alongside a custom ASIC and software to tackle a core problem in the data centre: how to connect more and more servers in a cost-effective and scalable way.

 

Origins

As a child, Rickman attended the Royal Institution Christmas Lectures given by Eric Laithwaite, a popular scientist who was also a professor of electrical engineering at Imperial College. As an undergraduate at Imperial, Rickman was reacquainted with Professor Laithwaite who kindled his interest in gyroscopes.

“I stumbled across a device called a fibre-optic gyroscope,” says Rickman. “Within that I could see people starting to use lithium niobate photonic circuits.” It was investigating the gyroscope design and how clever it was that made Rickman wonder whether the optical circuits of such a device could be made using silicon rather than exotic materials like lithium niobate.

“That is where the idea triggered, to look at the possibility of being able to make optical circuits in silicon,” he says.

 

If you try and force a photon into a space shorter than its wavelength, it behaves very badly


In the 1980s, few people had thought about silicon in such a context. That may seem strange today, he says, but silicon was not a promising candidate material. “It is not a direct band-gap material - it was not offering up the light source, and it did not have a big electro-optic effect like lithium niobate which was good for modulators,” he says. “And no one had demonstrated a low-loss single-mode waveguide.”

Rickman worked as a researcher at the University of Surrey’s physics department with such colleagues as Graham Reed to investigate whether the trillions of dollars invested in the manufacturing of silicon could also be used to benefit photonic circuits and in particular whether silicon could be used to make waveguides. “The fundamental thing one needed was a viable waveguide,” he says.

Rickman even wrote a paper with Richard Soref, who was collaborating with the University of Surrey at the time. "Everyone would agree that Richard Soref is the founding father of the idea - the proposal of having a useful waveguide in silicon - which is the starting point," says Rickman. It was the work at the University of Surrey, sponsored by Bookham, which Rickman had by then founded, that demonstrated low-loss waveguides in silicon.

 

Fabrication challenges

Rickman argues that not having a background in CMOS processes has been a benefit. “I wasn’t dyed-in-the-wool-committed to CMOS-type electronics processing,” he says. “I looked upon silicon technology as a set of machine-shop processes for making things.”

Looking at CMOS processing completely afresh and designing circuits optimised for photonics yielded Bookham a great number of high-performance products, he says. In contrast, the industry’s thrust has been very much a semiconductor CMOS-focused one. “People became interested in photonics because they just naturally thought it was going to be important in silicon, to perpetuate Moore’s law,” says Rickman.

You can use the structures and much of the CMOS processes to make optical waveguides, he says, but the problem is you create small structures - sub-micron - that guide light poorly. "If you try and force a photon into a space shorter than its wavelength, it behaves very badly," he says. "In microelectronics, an electron has got a wavelength that is one hundred times smaller than the features it is using."

The results include light being sensitive to interface roughness and to the manufacturing tolerances - the width, height and composition of the waveguide. "At least an order of magnitude more difficult to control than the best processes that exist," says Rickman.

“Our [Rockley’s] waveguides are one thousand times more relaxed to produce than the competitors’ smaller ones,” he says. “From a process point of view, we don’t need the latest CMOS node, we are more a MEMS process.”

 

If you take control of enough of the system problem, and you are not dictated to in terms of what MSA or what standard that component must fit into, and you are not competing in this brutal transceiver market, then that is when you can optimise the utilisation of silicon photonics 

 

Rickman stresses that small waveguides do have merits - they go round tighter bends, and their smaller-dimensioned junctions make for higher-speed components. But using very large features solves the ‘fibre connectivity problem’, and Rockley has come up with its own solutions to achieve higher-speed devices and dense designs.

“Bookham was very strong in passive optics and micro-engineered features,” says Rickman. “We have taken that experience and designed a process that has all the advantages of a smaller process - speed and compactness - as well as all the benefits of a larger technology: the multiplexing and demultiplexing for doing dense WDM, and we can make a chip that already has a connector on it.”

 

Playing to silicon photonics’ strengths

Rickman believes that silicon photonics is a significant technological development: “It is a paradigm shift; it is not a linear improvement”. But what is key is how silicon photonics is applied and the problem it is addressing.

To make an optical component for an interface standard or a transceiver MSA using silicon photonics, or to use it as an add-on to semiconductors - a 'band-aid' - to prolong Moore's law, is to undersell its full potential. Instead, he recommends using silicon photonics as one element - albeit an important one - in an array of technologies to tackle system-scale issues.

“If you take control of enough of the system problem, and you are not dictated to in terms of what MSA or what standard that component must fit into, and you are not competing in this brutal transceiver market, then that is when you can optimise the utilisation of silicon photonics,” says Rickman. “And that is what we are doing.” In other words, taking control of the environment that the silicon sits in.

 

It [silicon photonics] is a paradigm shift; it is not a linear improvement 

 

Rockley has structured its team with a view to tackling the system-scale problem of interconnecting servers in the data centre. The team comprises computer scientists, CMOS designers - digital and analogue - and silicon photonics experts.

Knowing what can be done with the technologies, and organising them accordingly, allows the problems caused by the 'exhaustion of Moore's law' and the resulting input/output (I/O) issues to be overcome. "Not how you apply one technology to make up for the problems in another technology," says Rickman.

 

The ending of Moore’s law

Moore's law continues to deliver a doubling of transistors every two years, but the associated scaling benefits, such as the halving of power consumed per transistor, no longer apply. As a result, while Moore's law continues to grow the gate count that drives greater computation, overall power consumption no longer stays constant.

Rickman also points out that the I/O - the number of connections on and off a chip - is not doubling with transistor count. "I/O may be going from 25 gigabit to 50 gigabit using PAM-4 but there are many challenges and the technology has yet to be demonstrated," he says.

The challenge facing the industry is that increasing the I/O rate inevitably increases power consumption. "As power consumption goes up, it also equates to cost," says Rickman. That is clearly unwelcome, he says, but it is not the only issue: as power goes up, you cannot fully benefit from the doubling of transistor counts, so devices cannot be packed more densely.

"You are running into the end of Moore's law and you don't get the benefit of reducing space and cost because you've got to bolt on all these other things, as it is very difficult to get all these signals off-chip," he says.

This is where tackling the system as a whole comes in. You can look at microelectronics in isolation and use silicon photonics for chip-to-chip communications across a printed circuit board to reduce the electrical losses through the copper traces. "A good thing to do," stresses Rickman. Or you can address, as Rockley aims to do, Moore's law and the I/O limitations within a complete system the size of a data centre that links hundreds of thousands of computers. "Not the same way you'd solve an individual problem in an individual device," says Rickman.

 

Rockley Photonics

Rockley Photonics has already demonstrated all the basic elements of its design. “That has gone very well,” says Rickman.

The start-up has stated its switch design uses silicon photonics for optical switching and that the company is developing an accompanying controller ASIC. It has also developed a switching protocol to run on the hardware. Rockley’s silicon photonics design performs multiplexing and demultiplexing, suggesting that dense WDM is being used as well as optical switching.

Rockley is a fabless semiconductor company and will not be building systems. Partly, that is because it is addressing the data centre, a market that has evolved differently from telecoms. For the data centre, there are established switch vendors and white-box manufacturers. As such, Rockley will provide its chipset-based reference design, its architecture IP and the software stack for its customers. "Then, working with the customer contract manufacturer, we will implement the line cards and the fabric cards in the format that the particular customer wants," says Rickman.

The resulting system is designed as a drop-in replacement for the switches that large-scale data centre players have already deployed, yet it will be cheaper, more compact and consume less power, says Rockley.

“They [the data centre operators] can scale the way they do at the moment, or they can scale with our topology,” says Rickman.

The start-up expects to finally unveil its technology by the year end.


NextIO simplifies top of rack switching with I/O virtualisation

NextIO has developed virtualised input/output (I/O) equipment that simplifies switch design in the data centre.

 

"Our box takes a single virtual NIC, virtualises that and shares that out with all the servers in a rack"

John Fruehe, NextIO 

 

The platform, known as vNET, replaces both Fibre Channel and Ethernet top-of-rack switches in the data centre and is suited for use with small one rack unit (1RU) servers. The platform uses PCI Express (PCIe) to implement I/O virtualisation.

"Where we tend to have the best success [with vNET] is with companies deploying a lot of racks - such as managed service providers, service providers and cloud providers - or are going through some sort of IT transition," says John Fruehe, vice president of outbound marketing at NextIO.

Three layers of Ethernet switches are typically used in the data centre. The top-of-rack switches aggregate traffic from server racks and link to end-of-row, aggregator switches that in turn interface to core switches. "These [core switches] aggregate all the traffic from all the mid-tier [switches]," says Fruehe. "What we are tackling is the top-of-rack stuff; we are not touching end-of-row or the core."

A similar hierarchical architecture is used for storage: a top-of-rack Fibre Channel switch, end-of-row aggregation and a core that connects to the storage area network. NextIO's vNET platform also replaces the Fibre Channel top-of-rack switch.

"We are replacing those two top-of-rack switches - Fibre Channel and Ethernet - with a single device that aggregates both traffic types," says Fruehe.

vNET is described by Fruehe as an extension of the server I/O. "All of our connections are PCI Express; we have a simple PCI Express card that sits in the server, and a PCI Express cable," he says. "To the server, it [vNET] looks like a PCI Express hub with a bunch of I/O cards attached to it." The server does not discern that the I/O cards are shared across multiple servers or that they reside in an external box.

For IT networking staff, the box appears as a switch providing 10 Gigabit Ethernet (GbE) ports to the end-of-row switches, while for storage personnel, the box provides multiple Fibre Channel connections to the end-of-row storage aggregation switch. "Most importantly, there is no difference to the software," says Fruehe.

 

I/O virtualisation

NextIO's technology pools the I/O bandwidth available and splits it to meet the various interface requirements. A server is assigned I/O resources yet it believes it has the resources all to itself. "Our box directs the I/O the same way a hypervisor directs the CPU and memory inside a server for virtualisation," says Fruehe.  

There are two NextIO boxes available, supporting up to 15 or up to 30 servers. One has 30 links of 10 Gigabits-per-second (Gbps) each, while the other has 15 links of 20Gbps. These are implemented as 30 x4 and 15 x8 PCIe connections, respectively, that connect directly to the servers.
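A quick tally - reading off the figures above rather than quoting a NextIO specification - shows both configurations pool the same aggregate bandwidth to the rack.

```python
# Aggregate server-facing bandwidth of the two vNET configurations.
configs = {
    "30 servers x 10 Gbps (x4 PCIe links)": 30 * 10,
    "15 servers x 20 Gbps (x8 PCIe links)": 15 * 20,
}
for name, total_gbps in configs.items():
    print(f"{name} = {total_gbps} Gbps")
```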

A customer most likely uses two vNET platforms at the top of the rack, the second being used for redundancy. "If a server is connected to two, you have 20 or 40 Gig of aggregate bandwidth," says Fruehe.

NextIO exploits two PCIe standards known as single root I/O virtualisation (SRIOV) and multi-root I/O virtualisation (MRIOV).

SRIOV allows a server to take an I/O connection like a network card, a Fibre Channel card or a drive controller and share it across multiple server virtual machines. MRIOV extends the concept by allowing an I/O controller to be shared by multiple servers. "Think of SRIOV as being the standard inside the box and MRIOV as the standard that allows multiple servers to share the I/O in our vNET box," says Fruehe.
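A toy model of the distinction, purely for illustration: the class, the 10GbE figure and the equal shares below are invented here and do not describe NextIO's implementation or the PCIe specifications themselves.

```python
# SR-IOV: one physical controller is carved into virtual functions for the
# virtual machines on a single server. MR-IOV: the same physical controller,
# sitting in the vNET chassis, is shared by several servers in the rack.
class SharedNic:
    def __init__(self, port_gbps):
        self.port_gbps = port_gbps
        self.virtual_functions = []  # (owner, share in Gbps)

    def carve(self, owner, share_gbps):
        assigned = sum(g for _, g in self.virtual_functions) + share_gbps
        if assigned > self.port_gbps:
            raise ValueError("oversubscribed beyond the physical port rate")
        self.virtual_functions.append((owner, share_gbps))

nic = SharedNic(port_gbps=10)               # a 10GbE card in a shared slot
for server in ("server-1", "server-2", "server-3", "server-4"):
    nic.carve(server, 2.5)                  # MR-IOV-style sharing across servers
print(nic.virtual_functions)
```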

Each server uses only a single PCIe connection to the vNET with the MRIOV's pooling and sharing happening inside the platform.

 

 

The vNET platform showing the PCIe connections to the servers, the 10GbE interfaces to the network and the 8 Gig Fibre Channel connections to the storage area networks (SANs). Source: NextIO

 

Meanwhile, vNET's front panel has eight shared slots. These house Ethernet controllers and/or Fibre Channel controllers, and these are shared across the multiple servers.

In effect, an application running on the server communicates with its operating system to send the traffic over the PCIe bus to the vNET platform, where it is passed to the relevant network interface controller (NIC) or Fibre Channel card.

The NIC encapsulates the data in Ethernet frames before they are sent over the network. The same applies to the host bus adaptor (HBA), which converts the data to be stored to Fibre Channel. "All these things are happening over the PCIe bus natively, and they are handled in different streams," says Fruehe.

In effect, a server takes a single physical NIC and partitions it into multiple virtual NICs for all the virtual machines running on the server. "Our box takes a single virtual NIC, virtualises that and shares that out with all the servers in a rack," says Fruehe. "We are using PCIe as the transport all the way back to the virtual machine and all the way forward to that physical NIC; that is all a PCIe channel."

The result is a high-bandwidth, low-latency link that is also scalable.

NextIO has a software tool that allows bandwidth to be assigned on the fly. "With vNET, you open up a console and grab a resource and drag it over to a server and in 2-3 seconds you've just provisioned more bandwidth for that server without physically touching anything."

The provisioning is between vNET and the servers. In the case of networking traffic, this is in 10GbE chunks. It is the server's own virtualisation tools that do the partitioning between the various virtual machines.

vNET has an additional four slots - for a total of 12 - that can be assigned to individual servers. "If you are taking all the I/O cards out of the server, you can use smaller form-factor servers," says Fruehe. But such 1RU servers may not have room for a specific I/O card. Accordingly, the four slots are available to host cards - such as a solid-state drive flash memory or a graphics processing unit accelerator - that may be needed by individual servers.

 

Operational benefits

There are power and cooling benefits to using the vNET platform: smaller form-factor servers draw less power, while using PCIe results in fewer cables and better air flow.

To understand why fewer cables are needed, consider that a typical server uses a quad 1GbE controller and a dual-ported Fibre Channel controller, resulting in six cables. For a redundant system, a second set of Ethernet and Fibre Channel cards is added, doubling the cables to a dozen. With 30 servers in a rack, the total is 360 cables.

Using NextIO's vNET, in contrast, only two PCIe cables are required per server or 60 cables in total.  
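The cabling arithmetic, worked through as a minimal sketch of the figures above.

```python
# Cables per rack: traditional top-of-rack cabling versus vNET's PCIe cabling.
servers = 30
traditional_per_server = (4 + 2) * 2   # quad 1GbE + dual FC, doubled for redundancy
vnet_per_server = 2                    # two PCIe cables (one per redundant vNET)

print(traditional_per_server * servers)  # 360 cables
print(vnet_per_server * servers)         # 60 cables
```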

On the front panel, there are eight shared slots, and these can all house dual-port 10GbE cards, dual-port 8 Gigabit Fibre Channel cards, or a mix of the two. This gives a total of 160GbE or 128 Gig of Fibre Channel. NextIO plans to upgrade the platforms to 40GbE interfaces for an overall capacity of 640GbE.
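The front-panel capacity figures, recomputed as a sketch with dual-port cards assumed throughout.

```python
# Front-panel capacity with eight shared slots and dual-port cards.
slots, ports_per_card = 8, 2
print(slots * ports_per_card * 10)   # 160 GbE with dual-port 10GbE cards
print(slots * ports_per_card * 8)    # 128 Gig Fibre Channel with dual-port 8G cards
print(slots * ports_per_card * 40)   # 640 GbE after the planned 40GbE upgrade
```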

