Intel on silicon photonics and its role in the data centre
In the next couple of years, you will see a massive adoption of silicon photonics into the data centers and into high-performance computing
Mario Paniccia, Intel
Bringing new technology to market is at least a decade-long undertaking. So says Mario Paniccia, Intel Fellow and general manager of the company's silicon photonics operation. “The first transistor, the first chip; it has been 10 or 15 years from the first idea or research result to a commercial product,” he says. “Silicon photonics is just another example.”
Paniccia should know. He has been at Intel for nearly 20 years and started the company's investigation of silicon photonics. He has overseen each of Intel's silicon photonics building-block developments, from a 1 Gigabit silicon modulator in 2004 through to the high gain-bandwidth avalanche photodetector detailed in 2008.
Now Intel has unveiled its first 100 Gigabit silicon photonics product, used as part of its Rack Scale Architecture (RSA), a disaggregated system design that separates storage, computing and networking. The 100 Gigabit modules are used alongside Terabit-capacity MXC connectors and Corning's ClearCurve multi-mode fibre.
"Silicon photonics is the path to low-cost, high-volume optical connectivity in and around the server platform and in the data centre,” says Paniccia. “We can see it now coming.”
We are operating with a mindset of CMOS compatibility and we are putting our process and our photonics into fabs that also run high volume CMOS manufacturing
A key advantage of silicon photonics is its ability to benefit from high-volume manufacturing developed for the chip industry. But high-volume manufacturing raises its own challenges, such as determining where silicon photonics has value and picking the right applications.
Another merit, which at first does not sound like one, is that silicon photonics is 'good enough'. “But that 'good enough' is getting better, and getting very close to the performance levels of most of the modulation and detection devices people have shown in excess of 40 Gig," says Paniccia.
Such silicon-photonic building blocks can be integrated to deliver aggregate bandwidths of 100 Gig, 400 Gig, even a Terabit-per-second. “As demands increase in the data centre, cloud and high-performance computing, the ability to integrate photonics devices with CPUs or ASICs to deliver solutions at an architecture level, that is the really exciting part," says Paniccia.
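As a back-of-the-envelope illustration, assuming those aggregate rates are built from parallel 25 Gigabit lanes (a simplification; Intel has not detailed its lane structure), the lane counts work out as follows:

    # Illustrative only: lane counts implied by building aggregate
    # bandwidths from parallel 25 Gbps silicon photonics lanes.
    LANE_RATE_GBPS = 25

    for target_gbps in (100, 400, 1000):
        lanes = target_gbps // LANE_RATE_GBPS
        print(f"{target_gbps} Gbps aggregate = {lanes} x {LANE_RATE_GBPS} Gbps lanes")
    # 100 Gbps = 4 lanes, 400 Gbps = 16 lanes, 1000 Gbps (1 Tbps) = 40 lanes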
At the end of the day, it is about building a technology that is cost effective for the application
Manufacturing process
Intel has not said what process it uses for its silicon photonics devices, although it does say it uses more than one. IBM uses 90nm lithography for its silicon photonics designs, while STMicroelectronics has chosen 65nm.
Intel makes its photonics and the associated drive electronics as separate devices for economic reasons. Avoiding a leading-edge manufacturing process for the photonics is cheaper since it sidesteps expensive dies and the associated masks. “At the end of the day, it is about building a technology that is cost effective for the application," says Paniccia.
Intel uses a 22nm CMOS process and is moving to 14nm for its CPUs. For light, the feature sizes needed in silicon are far larger. “However, better lithography gets you better resolution, gets you better sidewall roughness and better accuracy,” says Paniccia. “[A] 90nm [lithography] is plenty for most of the process nodes.”
Intel says it uses more advanced lithography for the early manufacturing steps of its silicon photonics devices, while the 'back-end' processing for its hybrid (silicon/indium phosphide) laser involves broad metal lines and etch steps, for which 130nm lithography is used.
The silicon photonics process is designed to be CMOS compatible so that the photonics can be made alongside Intel's volume chips. “That is critical,” says Paniccia. “We are operating with a mindset of CMOS compatibility and we are putting our process and our photonics into fabs that also run high volume CMOS manufacturing." The goal is to ensure that as production ramps, Intel can move its technology across plants.
The company has no plans to offer silicon photonics manufacturing as a foundry business.
Data centre trends
Intel is focussing its silicon photonics on the data centre. “We announced the RSA, a rack connected with switching, with silicon photonics and the new MXC cable,” says Paniccia. “Bringing optics up and down the racks and across racks, not only are the volumes quite big but the price points are aggressive.”
The company is using multi-mode fibre for its silicon photonics solution despite growing interest in single-mode fibre to meet the longer reach requirements emerging in the data centre.
Intel chose multi-mode because it results in a more economical solution in terms of packaging, assembly and cabling. "If you look at a single-mode fibre solution - coupling the fibre, packaging and assembling - it is very expensive," he says. That is because single-mode fibre requires precise fibre alignment at the module and at the connector: "Even if the photonics were free, packaging, testing and assembly account for 40-60 percent of the cost."
Silicon photonics is inherently single-mode and making it work with multi-mode fibre is a challenge. “At the transmitter side it is somewhat easy, a small hose - the transmitter - going into a big hose, a 50-micron [multi-mode] fibre, so the alignment is easy,” says Paniccia. “At the receiver side, I now have a 50-micron multi-mode fibre and couple it down into a silicon photonic chip; that is the hard part.”
Corning's ClearCurve multi-mode fibre and the MXC connector, working with Intel's 100 Gigabit modules, achieve a 300m reach, while 820m has been demonstrated. “At the end of the day, the customer will decide how do we drive a new architecture into the next-generation of data centre,” says Paniccia.
Optics edge closer
Optics will edge closer to the chip as silicon photonics evolves. As electrical signalling moves from 10 Gigabit to 25 Gigabit, it becomes harder to send signals off-chip. Embedding the optics on the board, as Intel has done with its RSA, means the electrical signal paths are only a couple of inches long. The signals are then carried optically via the MXC connector, which supports up to 64 fibres. "Optical modules are limited in space and power," says Paniccia. "You have got to move to an embedded solution which enables greater faceplate density."
The next development after embedded modules will be to co-package the optics with the ASIC or CPU. "That is the RSA," says Paniccia. "That is the evolution that will have to happen when data rates run from 25 Gig to 32 Gig and 40 Gig line rates."
Moreover, once optics are co-packaged with an ASIC or a CPU, systems will be designed differently and optimised further, says Paniccia. "We have an Intel roadmap that takes it from a core technology for networking all the way to how we attach this stuff to CPUs," he says. "That is the end game."
Intel views silicon photonics not as a link technology but as a connectivity approach for architectures and platforms, one that will allow customers to evolve as their cloud computing and storage requirements grow.
"In the next couple of years, you will see a massive adoption of silicon photonics into the data centers and into high-performance computing, where the cost of I/O [input/output] has been limiting system development and system architecture," says Paniccia.
Terabit interconnect to take hold in the data centre
Intel and Corning have further detailed their 1.6 Terabit interface technology for the data centre.
The collaboration combines Intel's silicon photonics technology, operating at 25 Gigabits-per-second per fibre, with Corning's ClearCurve LX multi-mode fibre and latest MXC connector.
Silicon photonics wafer and the ClearCurve fibres. Source: Intel
The fibre has a 300m reach, triple that of existing multi-mode fibre at such speeds, and is designed for the 1310nm wavelength used by Intel's silicon photonics. Used with the MXC connector, which supports 64 fibres, the overall capacity will be 1.6 Terabits-per-second (Tbps).
"Each channel has a send and a receive fibre which are full duplex," says Victor Krutul, director business development and marketing for silicon photonics at Intel. "You can send 0.8Tbps on one direction and 0.8Tbps in the other direction at the same time."
The link supports connections within a rack and between racks; for example, connecting a data centre's top-of-rack Ethernet switch with an end-of-row one.
James Kisner, an analyst at global investment banking firm Jefferies, views Intel's efforts as providing important validation for the fledgling silicon photonics market.
However, in a research note, he points out that it is unclear whether large data centre equipment buyers will be eager to adopt the multi-mode fibre solution, as it is more expensive than single-mode. Equally, large data centres increasingly have longer span requirements - 500m to 2km - further promoting the long-term use of single-mode fibre.
Rack Scale Architecture
The latest details of the silicon photonics/ClearCurve cabling were given as part of an Intel update on several data centre technologies, including its Atom C2000 processor family for microservers, the FM5224 72-port Ethernet switch chip, and the Rack Scale Architecture (RSA) that uses the new cabling and connector.
Intel is a member of Facebook's Open Compute Project, which promotes a disaggregated system design that separates storage, computing and networking. "When I upgrade the microprocessors on the motherboard, I don't have to throw away the NICs [network interface controllers] and disc drives," says Krutul. The disaggregation can be within a rack or between rows of equipment; Intel's RSA is one example of a disaggregated design.
The chip company discussed an RSA design for Facebook. The rack has three 100Gbps silicon photonics modules per tray. Each module has four transmit and four receive fibres, giving 24 fibres per tray and per cable. “Different versions of RSA will have more or fewer modules depending on requirements," says Krutul. Intel has also demonstrated a 32-fibre MXC prototype connector.
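A quick sketch of those numbers, assuming (as the figures above imply) that each 100Gbps module's four transmit fibres carry 25Gbps apiece:

    # Fibre count and bandwidth for the Facebook RSA tray described above.
    # Assumptions from the article: three 100 Gbps modules per tray,
    # each with four transmit and four receive fibres.
    MODULES_PER_TRAY = 3
    TX_FIBRES, RX_FIBRES = 4, 4

    fibres_per_tray = MODULES_PER_TRAY * (TX_FIBRES + RX_FIBRES)  # 3 x 8 = 24
    tray_bandwidth_gbps = MODULES_PER_TRAY * 100                  # 300 Gbps per tray
    print(f"{fibres_per_tray} fibres per tray; {tray_bandwidth_gbps} Gbps per tray")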
Corning says the ClearCurve fibre delivers several benefits. The fibre has a smaller bend radius - 7.5mm - than standard multi-mode fibre, enabling fibre routing on a line card. The 50-micron multi-mode fibre face is also expanded to 180 microns using a beam-expander lens. The lenses make connector alignment easier and less sensitive to dust. Corning says the MXC connector comprises seven parts, fewer than other optical connectors.
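As a rough illustration of why the expanded beam helps, assuming lateral alignment tolerance scales broadly with beam diameter (a simplification, not a Corning specification):

    # Expanded-beam connector sketch: enlarging the beam at the mating
    # interface makes a given lateral offset a smaller fraction of the
    # beam, relaxing alignment (and dust) sensitivity proportionally.
    CORE_DIAMETER_UM = 50        # multi-mode fibre core face
    EXPANDED_DIAMETER_UM = 180   # after the beam-expander lens

    expansion = EXPANDED_DIAMETER_UM / CORE_DIAMETER_UM
    print(f"Beam expansion: {expansion:.1f}x")   # ~3.6x larger beam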
Fibre and connector standardisation are key to ensuring broad use, says Daryl Inniss, vice president and components practice leader at Ovum.
"Intel is the only 1310nm multimode transmitter and receiver supplier, and expanding this optical link into other applications like enterprise data centres may require a broader supply base," says Inniss in a comment piece. But the fact that Corning is participating in the development signals a big market in the making, he says.
Intel has not said when the silicon photonics transceiver and fibre/connector will be generally available. "We are not discussing schedules or pricing at this time," says Krutul.
Silicon photonics: Intel's first lab venture
The chip company has been developing silicon photonics technology for a decade.
"As our microprocessors get faster, you need bigger and faster pipes in and around the servers," says Krutul. "That is a our whole goal - feeding our microprocessors."
Intel is setting up what it calls 'lab ventures', with silicon photonics chosen to be the first.
"You have a research organisation that does not do productisation, and business units that just do products," says Krutul. "You need something in between so that technology can move from pure research to product; a lab venture is an organisational structure to allow that movement to happen."
The lab ventures will be discussed further in the coming year.
