Broadcom's Thor 2 looks to hammer top spot in AI NICs

Jas Tremblay

The NICs use Broadcom’s Thor 2 chip, which started sampling in 2023 and is now in volume production. Jas Tremblay, vice president and general manager of the data center solutions group at Broadcom, says the Thor 2 is the industry’s first 400 gigabit Ethernet (GbE) NIC device to be implemented in a 5nm CMOS process.

“It [the design] gives customers choices and freedom when they’re building their AI systems such that they can use different NICs with different [Ethernet] switches,” says Tremblay.

NICs for AI

The 400GbE Thor 2 supports 16 lanes of PCI Express 5.0, each lane operating at 32 gigabits per second (Gbps).

The chip also features eight 112-gigabit serialisers/deserialisers (serdes). Eight 112-gigabit serdes are provided even though the chip is a 400GbE device because some customers run the serdes at the lower 56Gbps speed to match their switches’ serdes; eight lanes at 56Gbps still carry the full 400GbE rate.
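
The lane arithmetic can be checked with a short script (an illustrative sketch; the lane counts and rates are those quoted above):

```python
# Sketch of the serdes lane arithmetic: eight lanes at either 112Gbps or
# 56Gbps both exceed 400Gbps, which is why a 400GbE NIC ships with eight
# 112-gigabit serdes that can also run at the lower speed.

LANES = 8

def aggregate_gbps(lane_rate_gbps: int, lanes: int = LANES) -> int:
    """Raw aggregate bandwidth across all serdes lanes."""
    return lane_rate_gbps * lanes

for rate in (112, 56):
    total = aggregate_gbps(rate)
    verdict = "supports" if total >= 400 else "cannot support"
    print(f"{LANES} lanes x {rate}Gbps = {total}Gbps -> {verdict} 400GbE")
```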

Broadcom is bringing to market a variety of NICs using the Thor 2. Tremblay explains that one board is for standard servers while another is designed for an Open Compute Project (OCP) server. In turn, certain customers have custom designs.

Broadcom has also qualified 100 optical and copper-based connectors used with the NIC boards. “People want to use different cables to connect these cards, and we have to qualify them all,” says Tremblay. For the first time, the optical options include linear pluggable optics (LPO).

The requirement for so many connectors reflects several factors: AI’s needs, the use of 100-gigabit serdes, and 400GbE. “What’s happening is that customers are having to optimise the physical cabling to reduce power and thermal cooling requirements,” says Tremblay.

When connecting the Broadcom NIC to a Broadcom switch, a reach of 5m is possible using direct attach copper (DAC) cabling. In contrast, if the Broadcom NIC is connected to another vendor’s switch, the link distance may only be half that.

“In the past, people would say: ‘I’m not going to have different cable lengths for various types of NICs and switch connections’,” says Tremblay. “Now, in the AI world, they have to do that given there’s so much focus on power and cooling.”

How the NIC connects to the accelerator chip (in the diagram, a graphics processing unit (GPU)), and the layers of switches that enable the NIC to talk to other NICs. Source: Broadcom.

NIC categories

Many terms exist to describe NICs. Broadcom, which has been making NICs for over two decades, puts NICs into two categories. One, and Broadcom’s focus, is Ethernet NICs. The NICs use a hardware-accelerated data path and are optimised for networking, connectivity, security, and RoCE.

RoCE refers to RDMA over Converged Ethernet, while RDMA is short for remote direct memory access. RDMA allows one machine to read from or write to another’s memory without involving the remote host’s processor. This frees the processor to concentrate on computation. RoCE uses Ethernet as a low-latency medium for such transfers.
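
As a conceptual illustration only - not a real RDMA or verbs API - the idea can be sketched as follows, with the hypothetical RegisteredMemory and rdma_write standing in for work the NIC performs in hardware:

```python
# Toy model of an RDMA write: the initiator places data directly into a
# registered region of the target's memory. In real RoCE the NIC carries
# out this copy over Ethernet; no code runs on the remote host's CPU.

class RegisteredMemory:
    """Stands in for a pinned, remotely accessible memory region."""
    def __init__(self, size: int):
        self.buf = bytearray(size)

def rdma_write(remote: RegisteredMemory, offset: int, payload: bytes) -> None:
    # Performed by the NIC in hardware; shown here as a plain copy.
    remote.buf[offset:offset + len(payload)] = payload

gpu_node_memory = RegisteredMemory(1024)
rdma_write(gpu_node_memory, 0, b"gradient shard")
print(bytes(gpu_node_memory.buf[:14]))  # b'gradient shard'
```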

The second NIC category refers to a data processing unit (DPU). Here, the chip has CPU cores to execute the offload tasks, implementing functions that would otherwise burden the main processor.

Tremblay says the key features that make an Ethernet NIC ideal for AI include using at least a 25Gbps serdes, RoCE, and advanced traffic congestion control.

Switch scheduling or end-point scheduling

Customers no longer buy components but complete AI compute clusters, says Tremblay. They want the cluster to be an open design so that when choosing the particular system elements, they have confidence it will work.

Broadcom cites two approaches to building AI systems: switch scheduling and end-point scheduling.

Switch scheduling refers to systems where the switch performs the traffic load balancing to ensure that the networking fabric is used to the full. The switch also oversees congestion control.

Hasan Siraj

“The switch does perfect load balancing with every packet spread across all the outbound lines and reassembled at the other end,” says Hasan Siraj, head of software products and ecosystem at Broadcom. Jericho3-AI, which Broadcom announced last year, is an example of a switch scheduler for AI workloads.
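
A minimal sketch - assumed for illustration, not Broadcom’s algorithm - captures the idea: packets of a flow are sprayed round-robin across every outbound link and put back in order at the far end using sequence numbers:

```python
# Per-packet spraying and reassembly, the core of switch scheduling.

from itertools import cycle

def spray(packets, num_links):
    """Assign each (seq, data) packet to an outbound link round-robin."""
    links = [[] for _ in range(num_links)]
    next_link = cycle(range(num_links))
    for pkt in packets:
        links[next(next_link)].append(pkt)
    return links

def reassemble(links):
    """Restore the original order from the per-link arrivals."""
    arrived = [pkt for link in links for pkt in link]
    return [data for _, data in sorted(arrived)]  # sort by sequence number

packets = [(i, f"chunk-{i}") for i in range(8)]
links = spray(packets, num_links=4)   # perfect balance: two packets per link
assert reassemble(links) == [f"chunk-{i}" for i in range(8)]
```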

The second approach – end-point scheduling – is for customers that prefer the NIC to do the scheduling. Leading cloud-computing players have their own congestion control algorithms, typically, and favour such flexibility, says Siraj: “But you still need a high-performance fabric that can assist with the load balancing.”

Here, a cloud player will use its own NIC designs or other non-Broadcom NICs for the congestion control but pair them with a Broadcom switch such as the Tomahawk 5 (see diagram below).

The left diagram shows an end-point scheduler set-up while the right shows an example of a switch scheduler. Source: Broadcom.

Accordingly, the main configuration options are a Broadcom NIC with a non-Broadcom switch, a third-party NIC with the Jericho3-AI, or a full Broadcom NIC-switch solution where the Jericho3-AI does the load balancing and congestion control while the Thor 2-based NIC takes care of RoCE in a power-efficient way.

“Our strategy is to be the most open solution,” says Tremblay. “Everything we are doing is standards-based.”

And that includes the work of the Ultra Ethernet Consortium, which is focussed on transport and congestion control to tailor Ethernet for AI. The consortium is close to issuing the first revisions of its work.

The Ultra Ethernet Consortium aspires to achieve AI cluster sizes of 1 million accelerator chips. Such a huge computing cluster will not fit within one data centre due to size, power, and thermal constraints, says Siraj. Instead, the cluster will be distributed across several data centres tens of kilometres apart. The challenge will be achieving such connectivity while maintaining job completion time and latency.

Thor 3

Meanwhile, Broadcom has started work on an 800-gigabit NIC chip, the Thor 3, and a 1.6-terabit version after that.

The Jericho3-AI switch chip supports up to 32,000 endpoints, each at 800Gbps. Thus, the AI switch chip is ready for the advent of Thor 3-based NIC boards.


COBO issues industry’s first on-board optics specification

  • COBO modules support 400-gigabit and 800-gigabit data rates.
  • Two electrical interfaces have been specified: 8 and 16 lanes of 50-gigabit PAM-4 signals.
  • There are three module classes to support designs ranging from client-side multi-mode to line-side coherent optics.
  • COBO on-board optics will be able to support 800 gigabits and 1.6 terabits once 100-gigabit PAM-4 electrical signals are specified.

Source: COBO

Interoperable on-board optics has moved a step closer with the publication of the industry’s first specification by the Consortium for On-Board Optics (COBO).

COBO has specified modules capable of 400-gigabit and 800-gigabit rates. The designs will also support 800-gigabit and 1.6-terabit rates with the advent of 100-gigabit single-lane electrical signals.

“Four hundred gigabits can be solved using pluggable optics,” says Brad Booth, chair of COBO and principal network architect for Microsoft’s Azure Infrastructure. “But if I have to solve 1.6 terabits in a module, there is nothing out there but COBO, and we are ready.”

 

Origins 

COBO was established three years ago to create a common specification for optics that reside on the motherboard. On-board optics is not a new technology but until now designs have been proprietary.

 

I have to solve 1.6 terabits in a module, there is nothing out there but COBO, and we are ready

 

Brad Booth

Such optics are needed to help address platform design challenges caused by continual traffic growth.

Getting data on and off switch chips that are doubling in capacity every two to three years is one such challenge. The input-output (I/O) circuitry of such chips consumes significant power and takes up valuable chip area.

There are also systems challenges such as routing the high-speed signals from the chip to the pluggable optics on the platform’s faceplate. The pluggable modules also occupy much of the faceplate area and that impedes the air flow needed to cool the platform. 

Using optics on the motherboard next to the chip instead of pluggables reduces power consumption by shortening the electrical traces linking the two. Fibre rather than electrical signals then carries the data to the faceplate, benefiting signal integrity and freeing faceplate area for cooling.

 

Specification 1.0

COBO has specified two high-speed electrical interfaces. One is eight lanes wide, each lane being a 50-gigabit 4-level pulse-amplitude modulation (PAM-4) signal. The interface is based on the IEEE’s 400GAUI-8, the eight-lane electrical specification developed for 400 Gigabit Ethernet.

The second electrical interface is a 16-lane version for an 800-gigabit module. Using a 16-lane design reduces packaging costs by creating one 800-gigabit module instead of two separate 400-gigabit ones. Heat management is also simpler with one module.

There are also systems benefits to using an 800-gigabit module. “As we go to higher and higher switch silicon bandwidths, I don’t have to populate as many modules on the motherboard,” says Booth.

The latest switch chips announced by several companies have 12.8 terabits of capacity, which will require 32 400-gigabit on-board modules but only 16 800-gigabit ones. Fewer modules simplify the board’s wiring and the fibre cabling to the faceplate.
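
The module-count arithmetic is a one-liner (a trivial check of the figures quoted above):

```python
# 12.8 terabits of switch capacity divided by the per-module rate.

def modules_needed(switch_capacity_gbps: int, module_rate_gbps: int) -> int:
    return switch_capacity_gbps // module_rate_gbps

SWITCH_CAPACITY = 12_800  # gigabits

print(modules_needed(SWITCH_CAPACITY, 400))  # 32 400-gigabit modules
print(modules_needed(SWITCH_CAPACITY, 800))  # 16 800-gigabit modules
```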

Designers have a choice of optical formats using the wider-lane module, such as 8x100 gigabits, 2x400 gigabits, and even 800 gigabits.

COBO has tested its design and shown it can support a 100-gigabit electrical interface. The design uses the same connector as the OSFP pluggable module. 

“In essence, with an 8-lane width, we could support an 800-gigabit module if that is what the IEEE decides to do next,” says Booth. “We could also support 1.6 terabits if that is the next speed hop.”  

 

It is very hard to move people from their standard operating model to something else until there is an extreme pain point

 

Form factor and module classes

The approach chosen by COBO differs from proprietary on-board optics designs in that the optics is not mounted directly onto the board. Instead, the COBO module resembles a pluggable in that once placed onto the board, it slides horizontally to connect to the electrical interface (see diagram, top).  

A second connector in the middle of the COBO module houses the power, ground and control signals. Separating these signals from the high-speed interface reduces the noise on the data signals. In turn, the two connectors act as pillars supporting the module. 

The robust design allows the modules to be mounted at the factory such that the platform is ready for operation once delivered at a site, says Booth. 

COBO has defined three module classes that differ in length. The shortest Class A modules are used for 400-gigabit multi-mode interfaces while Class B suits higher-power IEEE interfaces such as 400GBASE-DR4 and the 100G Lambda MSA’s 400G-FR4.

The largest Class C module is for the most demanding and power-hungry designs such as the coherent 400ZR standard. “Class C will be able to handle all the necessary components - the optics and the DSP - associated with that [coherent design],” says Booth. 

The advantage of the on-board optics is that it is not confined to a cage as pluggables are. “With an on-board optical module, you can control the heat dissipation by the height of the heat sink,” says Booth. “The modules sit flatter to the board and we can put larger heat sinks onto these devices.”  

 

We realised we needed something as a stepping stone [between pluggables and co-packaged optics] and that is where COBO sits    

 

Next steps

COBO will develop compliance-testing boards so that companies developing COBO modules can verify their designs. Booth hopes that by the ECOC 2018 show to be held in September, companies will be able to demonstrate COBO-based switches and even modules. 

COBO will also embrace the 100-gigabit electrical work being undertaken by the OIF and the IEEE to determine what needs to be done to support 8-lane and 16-lane designs: for example, whether the forward-error correction needs to be modified or whether existing codes are sufficient.

Booth admits that the industry remains rooted in using pluggables, while the move to co-packaged optics - where the optics and the chip are combined in the same module - remains a significant hurdle, both in terms of packaging technology and the need for vendors to change their business models to build such designs.

“It is very hard to move people from their standard operating model to something else until there is an extreme pain point,” says Booth. 

Setting up COBO followed the realisation that a point would be reached when faceplate pluggables would no longer meet demands while co-packaged technology would not be ready.

“We realised we needed something as a stepping stone and that is where COBO sits,” says Booth.     

 

Further information

For information on the COBO specification, click here


Verizon, Ciena and Juniper trial 400 Gigabit Ethernet

Verizon has sent a 400 Gigabit Ethernet signal over its network, carried using a 400-gigabit optical wavelength.

The trial’s goal was to demonstrate multi-vendor interoperability and in particular the interoperability of standardised 400 Gigabit Ethernet (GbE) client signals.

Glenn Wellbrock

“[400GbE] Interoperability with the client side has been the long pole in the tent - and continues to be,” says Glenn Wellbrock, director, optical transport network - architecture, design and planning at Verizon. “This was trial equipment, not generally-available equipment.”

It is only the emergence of standardised modules - in this case, an IEEE 400GbE client-side interface specification - that allows multi-vendor interoperability, he says. 

By trialing a 400-gigabit lightpath, Verizon also demonstrated the working of a dense wavelength-division multiplexing (DWDM) flexible grid, and a baud rate nearly double the 32-35Gbaud in wide use for 100-gigabit and 200-gigabit wavelengths.

“It shows we can take advantage of the entire system; we don’t have to stick to 50GHz channel spacing anymore,” says Wellbrock.

 

[400GbE] Interoperability with the client side has been the long pole in the tent - and continues to be 

 

Trial set-up

The trial used Juniper Networks’ PTX5000 packet transport router and Ciena’s 6500 packet-optical platform, equipment already deployed in Verizon’s network.

The Verizon demonstration was not testing optical transmission reach. Indeed the equipment was located in two buildings in Richardson, within the Dallas area. Testing the reach of 400-gigabit wavelengths will come in future trials, says Wellbrock. 

The PTX5000 core router has a traffic capacity of up to 24 terabits and supports 10-gigabit, 40-gigabit and 100-gigabit client-side interfaces as well as 100-gigabit coherent interfaces for IP-over-DWDM applications. The PTX5000 uses a mother card on which sits one or more daughter cards hosting the interfaces, what Juniper calls a flexible PIC concentrator (FPC) and physical interface cards (PICs), respectively.  

Juniper created a PIC with a 400GbE CFP8 pluggable module implementing the IEEE’s 10km 400GBASE-LR8 standard.

“For us, it was simply creating a demo 400-gigabit pluggable line card to go into the line card Verizon has already deployed,” says Donyel Jones-Williams, director of product marketing management at Juniper Networks.

Donyel Jones-Williams

The CFP8 400GbE interface connected the router to Ciena’s 6500 packet-optical platform.

Ciena also used demonstration hardware developed for 400-gigabit trials. “We expect to develop other hardware for general deployment,” says Helen Xenos, senior director, portfolio marketing at Ciena. “We are looking at smaller form-factor pluggables to carry 400 Gigabit Ethernet.”

 

400-gigabit deployments and trials

Ciena started shipping its WaveLogic Ai coherent modem that implements 400-gigabit wavelengths in the third quarter of 2017. Since then, the company has announced several 400-gigabit deployments and trials.

Vodafone New Zealand deployed 400 gigabits in its national transport network last September, a world first, claims Ciena. German cable operator, Unitymedia, has also deployed Ciena’s WaveLogic Ai coherent modem to deliver a flexible grid and 400-gigabit wavelengths to support growing content delivered via its data centres. And JISC, which runs the UK’s national research and education network, has deployed the 6500 platform and is using 400-gigabit wavelengths.

Helen Xenos

Last September, AT&T conducted its own 400-gigabit trial with Ciena. With AT&T’s trial, the 400-gigabit signal was generated using a test bed. “An SDN controller was used to provision the circuit and the [400-gigabit] signal traversed an OpenROADM line system,” says Xenos.   

Using the WaveLogic Ai coherent modem and its support for a 56Gbaud rate means that tunable capacity can now be doubled across applications, says Xenos. The wavelength capacity used for long-haul distances can now be 200 gigabits instead of 100 gigabits, while metro-regional networks spanning 1,000km can use 300-gigabit wavelengths. Meanwhile, 400-gigabit lightpaths suit distances of several hundred kilometres.
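
Expressed as a simple lookup, the trade-off Xenos describes looks like this (a sketch based on the figures quoted above, not a Ciena specification; the thresholds are assumptions):

```python
# Capacity per 56Gbaud wavelength versus reach, per the article's figures.

def wavelength_capacity_gbps(distance_km: float) -> int:
    if distance_km <= 500:    # several hundred kilometres
        return 400
    if distance_km <= 1000:   # metro-regional
        return 300
    return 200                # long-haul

for d in (300, 800, 2500):
    print(f"{d}km -> {wavelength_capacity_gbps(d)} gigabits per wavelength")
```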

It is the large data centre operators that are driving the majority of 400-gigabit deployments, says Ciena. The reason the 400-gigabit announcements relate to telecom operators is that the data centre players have not gone public with their deployments, says Xenos.

Juniper Networks’ PTX5000 core router with 400GbE interfaces will primarily be used by the telecom operators. “We are in trials with other providers on 400 gigabits,” says Jones-Williams. “Nothing is public as yet.”   


Reflections on OFC 2017

Mood, technologies, notable announcements - just what are the metrics to judge the OFC 2017 show held in Los Angeles last week?

It was the first show I had attended in several years and the most obvious change was how natural the presence of the internet content providers now is alongside the telecom operators and systems vendors exhibiting at the show. Chip companies, while also present, were fewer than before.

Source: OSA

Another impression was the latest buzz terms: 5G, the Internet of Things (IoT) and virtual reality-augmented reality. Some of these technologies are more concrete than others, but their repeated mention suggests a consensus that the topics are real enough to impact optical components and networking.

 

It could be argued that OFC 2017 was the year when 400 Gigabit Ethernet became a reality 

 

The importance of 5G needs no explanation, while the more diffuse IoT is expected to drive networking with the huge amounts of data it will generate. But what are people seeing in virtual reality-augmented reality that merits its inclusion alongside 5G and IoT?

Another change is the spread of data rates. No longer does one rate - 40 gigabits, say, or 100 gigabits - represent the theme of an OFC. It could be argued that OFC 2017 was the year when 400 Gigabit Ethernet became a reality, but there is now a mix of relevant rates such as 25, 50, 200 and 600 gigabits.

 

Highlights

There were several highlights at the show. One was listening to Jiajin Gao, deputy general manager at China Mobile Technology, open the OIDA Executive Forum event by discussing the changes taking place in the operator's network. Gao started by outlining the history of China Mobile's network before detailing the huge growth in ports at different points in the network over the last two years. He then outlined China Mobile's ambitious rollout of new technologies this year and next.

China's main three operators have 4G and FTTx subscriber numbers that dwarf the rest of the world. Will 2017 eventually be seen as the year when the Chinese operators first became leaders in telecom networking and technologies?

The Executive Forum concluded with an interesting fireside discussion about whether the current optical market growth is sustainable. The consensus among representatives from Huawei, Hisense, Oclaro and Macom was that it is; that the market is more varied and stable this time compared to the boom and bust of 1999-2001. As Macom’s Preetinder Virk put it: "The future has nothing to do with the past". Meanwhile, Huawei’s Jeffrey Gao still expects strong demand in China for 100 gigabits in 2017 even if growth is less strong than in 2016. He also expects the second quarter this year to pick up compared to a relatively weak first quarter.

OFC 2017 also made the news with an announcement that signals industry change: Ciena's decision to share its WaveLogic Ai coherent DSP technology with optical module vendors Lumentum, Oclaro and NeoPhotonics.

The announcement can be viewed several ways. One is that the initiative is a response to the success of Acacia as a supplier of coherent modules and coherent DSP technology. System vendors designed their own coherent DSP-ASICs to differentiate their optical networking gear. This still holds true, but the deal reflects how merchant line-side optics from the likes of Acacia is progressing and squeezing the scope for differentiation.

The deal is also a smart strategic move by Ciena which, through its optical module partners, will address new markets and generate revenues as its partners start to sell modules using the WaveLogic Ai. It also gives Ciena a first-mover advantage. Other systems vendors may now decide to offer their coherent DSPs to the marketplace, but Ciena has partnerships with three leading optical module makers and is working with them on future DSP developments for pluggable modules.

The deal also raises wider questions as to the role of differentiated hardware and whether it is subtly changing in the era of network function virtualisation, or whether it is a reflection of the way companies are now collaborating with each other in open hardware developments like the Telecom Infra Project and the Open ROADM MSA.

Another prominent issue at the show was the debate as to whether there is room for 200 Gigabit Ethernet modules or whether the industry is best served by going straight from 100 to 400 Gigabit Ethernet.

Facebook and Microsoft say they will go straight to 400 gigabits. Cisco agrees, arguing that developing an interim 200 Gigabit Ethernet interface does not justify the investment. In contrast, Finisar argues that 200 Gigabit Ethernet has a compelling cost-per-bit performance and that it will supply customers that want it.  Google supported 200 gigabits at last year’s OFC.   

 

Silicon photonics

Silicon photonics was one topic of interest at the show and in particular how the technology continues to evolve. Based on the evidence at OFC, silicon photonics continues to progress but there were no significant developments since our book (co-written with Daryl Inniss) on silicon photonics was published late last year.

One of the pleasures of OFC is being briefed by key companies in rapid succession. Intel demonstrated at its booth its silicon photonics products including its CWDM4 module, which will be generally available by mid-year. Intel also demonstrated a 10km 4WDM module. The 4WDM MSA, created last year, is developing a 10km reach variant based on the CWDM4, as well as 20km and 40km designs.

Meanwhile, Ranovus announced its 200-gigabit CFP2 module based on its quantum dot laser and silicon photonics ring resonator technologies, with a reach approaching 100km. The 200-gigabit rate is achieved using 28Gbaud optics and PAM-4.

Elenion Technologies made several announcements including the availability of its monolithically integrated coherent modulator receiver after detailing it was already supplying a 200 gigabit CFP2-ACO to Coriant. The company was also demonstrating on-board optics and, working with Cavium, announced a reference architecture to link network interface cards and switching ICs in the data centre. 

I visited Elenion Technologies in a hotel suite adjacent to the conference centre. One of the rooms had enough test equipment and boards to resemble a lab; a lab with a breathtaking view of the hills around Los Angeles. As I arrived, one company was leaving and as I left another well-known company was arriving. Elenion was using the suite to demonstrate its technologies with meetings continuing long after the exhibition hall had closed.

Two other silicon photonics start-ups at the show were Ayar Labs and Rockley Photonics.

Ayar Labs is developing a silicon photonics chip based on a "zero touch" CMOS process that will sit right next to complex ASICs and interface to network interface cards. The first chip will support 3.2 terabits of capacity. The advantage of the CMOS-based silicon photonics design is the ability to operate at high temperatures.

Ayar Labs is using the technology to address the high-bandwidth, low-latency needs of the high-performance computing market, with the company expecting the technology to eventually be adopted in large-scale data centres.

Rockley Photonics shared more details about what it is doing, as well as its business model, but has yet to unveil its first products.

The company has developed silicon photonics technology that will co-package optics alongside ASIC chips. The result will be packaged devices with fibre-based input-output offering terabit data rates.

Rockley also talked about licensing the technology for a range of applications involving complex ICs including coherent designs, not just for switching architectures in the data centre that it has discussed up till now. Rockley says its first product will be sampling in the coming months. 

 

Looking ahead

On the plane back from OFC I was reading The Undoing Project by Michael Lewis about the psychologists Danny Kahneman and Amos Tversky and their insights into human thinking.

The book describes the tendency of people to take observed facts, neglecting the many facts that are missed or could not be seen, and make them fit a confident-sounding story. Or, as the late Amos Tversky put it: "All too often, we find ourselves unable to predict what will happen; yet after the fact, we explain what did happen with a great deal of confidence. This 'ability' to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning." 

So, what to expect at OFC 2018? More of the same and perhaps a bombshell or two. Or to put it another way, greater unpredictability based on the impression at OFC 2017 of an industry experiencing an increasing pace of change. 


Talking markets: Oclaro on 100 gigabits and beyond

Oclaro’s chief commercial officer, Adam Carter, discusses the 100-gigabit market, optical module trends, silicon photonics, and why this is a good time to be an optical component maker.

Oclaro has started fiscal year 2017 as it ended fiscal year 2016: with another record quarter. The company reported revenues of $136 million in the quarter ending in September, 8 percent sequential growth and the company's fifth consecutive quarter of 7 percent or greater revenue growth.

Adam Carter

A large part of Oclaro’s growth was due to strong demand for 100 gigabits across the company’s optical module and component portfolio.

The company has been supplying 100-gigabit client-side optics using the CFP, CFP2 and CFP4 pluggable form factors for a while. “What we saw in June was the first real production ramp of our CFP2-ACO [coherent] module,” says Adam Carter, chief commercial officer at Oclaro. “We have transferred all that manufacturing over to Asia now.”

The CFP2-ACO is being used predominantly for data centre interconnect applications. But Oclaro has also seen first orders from system vendors that are supplying US communications service provider Verizon for its metro buildout.

The company is also seeing strong demand for components from China. “The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward,” says Carter. LightCounting Market Research in its latest optical market forecast report highlights the importance of China’s 100-gigabit market. China’s massive deployments of FTTx and wireless front haul optics fuelled growth in 2011 to 2015, says LightCounting, but this year it is demand for 100-gigabit dense wavelength-division multiplexing and 100 Gigabit Ethernet optics that is increasing China’s share of the global market.

The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward 

QSFP28 modules

Oclaro is also providing 100-gigabit QSFP28 pluggables for the data centre, in particular, the 100-gigabit PSM4 parallel single-mode module and the 100-gigabit CWDM4 based on wavelength-division multiplexing technology.

2016 was expected to be the year these 100-gigabit optical modules for the data centre would take off.  “It has not contributed a huge amount to date but it will start kicking in now,” says Carter. “We always signalled that it would pick up around June.”

One reason why it has taken time for the market for the 100-gigabit QSFP28 modules to take off is the investment needed to ramp manufacturing capacity to meet the demand. “The sheer volume of these modules that will be needed for one of these new big data centres is vast,” says Carter. “Everyone uses similar [manufacturing] equipment and goes to the same suppliers, so bringing in extra capacity has long lead times as well.”

Once a large-scale data centre is fully equipped and powered, it generates instant profit for an Internet content provider. “This is very rapid adoption; the instant monetisation of capital expenditure,” says Carter. “This is a very different scenario from where we were five to ten years ago with the telecom service providers."

Data centre servers and their increasing interface speed to leaf switches are what will drive module rates beyond 100 gigabits, says Carter. Ten Gigabit Ethernet links will be followed by 25 and 50 Gigabit Ethernet. “The lifecycle you have seen at the lower speeds [1 Gigabit and 10 Gigabit] is definitely being shrunk,” says Carter.

Such new speeds will spur 400-gigabit links between the data centre's leaf and spine switches, and between the spine switches. “Two hundred Gigabit Ethernet may be an intermediate step but I’m not sure if that is going to be a big volume or a niche for first movers,” says Carter.

400 gigabit CFP8

Oclaro showed a prototype 400-gigabit CFP8 module at the recent ECOC show in September. The demonstrator is an 8-by-50 gigabit design using 25 gigabaud optics and PAM-4 modulation. The module implements the 400GBASE-LR8 10km standard using eight 1310nm distributed feedback lasers, each with an integrated electro-absorption modulator. The design also uses two 4-wide photo-detector arrays.

“We are using the four lasers we use for the CWDM4 100-gigabit design and we can show we have the other four [wavelength] lasers as well,” says Carter.

Carter says IP core routers will be the main application for the 400GBASE-LR8 module. The company is not yet saying when the 400-gigabit CFP8 module will be generally available.

We can definitely see the CFP2-ACO could support 400 gigabits and above

Coherent

Oclaro is already working with equipment customers to increase the line-side interface density on the front panel of their equipment.

The Optical Internetworking Forum (OIF) has already started work on the CFP8-ACO that will be able to support up to four wavelengths, each supporting up to 400 gigabits. But Carter says Oclaro is working with customers to see how the line-side capacity of the CFP2-ACO can be advanced. “We can definitely see the CFP2-ACO could support 400 gigabits and above,” says Carter. “We are working with customers as to what that looks like and what the schedule will be.”

And there are two other pluggable form factors smaller than the CFP2: the CFP4 and the QSFP28. “Will you get 400 gigabits in a QSFP28? Time will tell, although there is still more work to be done around the technology building blocks,” says Carter.

Vendors are seeking the highest aggregate front-panel density, he says: “The higher aggregate bandwidth we are hearing about is 2 terabits but there is a need to potentially go to 3.2 and 4.8 terabits.”

Silicon photonics

Oclaro says it continues to watch silicon photonics closely and to question whether it is a technology that can be brought in-house. But issues remain. “This industry has always used different technologies and everything still needs light to work, which means the basic III-V [compound semiconductor] lasers,” says Carter.

“Producing silicon photonics chips versus producing packaged products that meet various industry standards and specifications are still pretty challenging to do in high volume,” says Carter.  And integration can be done using either silicon photonics or indium phosphide.  “My feeling is that the technologies will co-exist,” says Carter.


NeoPhotonics showcases a CFP2-ACO roadmap to 400G

NeoPhotonics has begun sampling its CFP2-ACO, a pluggable module for metro and long-haul optical transport. 

The company demonstrated the CFP2-ACO module transmitting at 100 gigabit using polarisation multiplexed, quadrature phase-shift keying (PM-QPSK) modulation at the recent OFC show. The line-side module is capable of transmitting over 1,000km and also supports PM-16QAM that doubles capacity over metro network distances.

 

Ferris Lipscomb

The CFP2-ACO is a Class 3 design: the control electronics for the modulator and laser reside on the board, alongside the coherent DSP-ASIC chip.

At OFC, NeoPhotonics also demonstrated single-wavelength 400-gigabit transmission using more advanced modulation and a higher symbol rate, and a short-reach 100-gigabit link for inside the data centre using 4-level pulse-amplitude modulation (PAM4) signalling. 

 

Roadmap to 400 gigabit 

One benefit of the CFP2-ACO is that the pluggable module can be deployed only when needed. Another is that the optics will work with coherent DSP-ASICs from different systems vendors and merchant chip suppliers.

“After a lot of technology-bragging about the CFP2-ACO, this is the year it is commercial,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.

Also demonstrated were the components needed for a next-generation CFP2-ACO: NeoPhotonics’ narrow line-width tunable laser and its higher-bandwidth integrated coherent receiver. To achieve 400 gigabit, 32-QAM and a 45 gigabaud symbol rate were used. 

Traditional 100-gigabit coherent uses a 32-gigabaud symbol rate. That, combined with QPSK and the two polarisations, results in a total bit rate of 2 polarisations x 2 bits/symbol x 32 gigabaud, or 128 gigabits: a 100-gigabit payload with the rest overhead bits. Using 32-QAM instead of QPSK increases the number of bits encoded per symbol from 2 to 5, while increasing the baud rate from 32 to 45 gigabaud adds a speed-up factor of 1.4. Combining the two, the resulting bit rate is 45 gigabaud x 5 bits/symbol x 2 polarisations, or 450 gigabits overall.
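
A quick script reproduces the arithmetic (an illustrative check using the modulation parameters given above):

```python
# Raw line rate = symbol rate x bits per symbol x two polarisations.
# The excess over the nominal payload is overhead (FEC and framing).

import math

def raw_rate_gbps(baud_g: float, qam_order: int, polarisations: int = 2) -> float:
    bits_per_symbol = math.log2(qam_order)
    return baud_g * bits_per_symbol * polarisations

print(raw_rate_gbps(32, 4))   # QPSK (2 bits/symbol) at 32Gbaud -> 128.0
print(raw_rate_gbps(45, 32))  # 32-QAM (5 bits/symbol) at 45Gbaud -> 450.0
```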

 

After a lot of technology-bragging about the CFP2-ACO, this is the year it is commercial

 

Using 32-QAM curtails the transmission distance to 100km due to the denser constellation, but such distances suit data centre interconnect applications.

“That was the demo [at OFC] but the product is also suitable for metro distances of 500km using 16-QAM and long-haul of 1,000km using 200 gigabit and 8-QAM,” says Lipscomb.

 

PAM4

The PAM4 demonstration highlighted NeoPhotonics’ laser and receiver technology. The company showcased a single-wavelength link running at 112 gigabits-per-second using its 56Gbaud externally modulated laser (EML) with an integrated driver. The PAM4 link can span 2km in a data centre. 

“What is not quite ready for building into modules is the [56Gbaud to 112 gigabit PAM4] DSP, which is expected to be out in the middle to the second half of the year,” says Lipscomb. The company will develop its own PAM4-based optical modules while selling its laser to other module makers.

Lipscomb says four lanes at 56 gigabaud using PAM4 will deliver a cheaper 400-gigabit solution than eight lanes, each at 25 gigabaud.

 

Silicon Photonics

NeoPhotonics revealed that it is supplying new 1310nm and 1550nm distributed feedback (DFB) lasers to optical module players that are using silicon photonics for their 100-gigabit mid-reach transceivers. These include the 500m PSM-4, and the 2km CWDM4 and CLR4.

Lipscomb says the benefits of its lasers for silicon photonics include their relatively high output power - 40 to 60mW - and the fact that the company also makes laser arrays which are useful for certain silicon photonics applications.

“Our main products have been for 100-gigabit modules for the longer reaches of 2km to 10km,” says Lipscomb. “Silicon photonics is usually used for shorter reaches of a few hundred meters, and this new [laser] product is our first one aimed at the short reach data centre market segment.”

The company says it has multiple customer engagements spanning various wavelength plans and approaches for Nx100-gigabit data centre transceiver designs. Mellanox Technologies is one vendor using silicon photonics that NeoPhotonics is supplying.


MultiPhy raises $17M to develop 100G serial interfaces

Start-up MultiPhy has raised U.S. $17 million to develop 100-gigabit single-wavelength technology for the data centre. Semtech has announced it is one of the companies backing the Israeli fabless start-up, the rest coming from venture capitalists and at least one other company.

MultiPhy is developing chips to support serial 100-gigabit-per-second transmission using 25-gigabit optical components. The design will enable short reach links within the data centre and up to 80km point-to-point links for data centre interconnect. 

 

Source: MultiPhy

 

“It is not the same chip [for the two applications] but the same technology core,” says Avi Shabtai, the CEO of MultiPhy. The funding will be used to bring products to market as well as expand the company’s marketing arm.

 

There is a huge benefit in moving to a single-wavelength technology; you throw out pretty much three-quarters of the optics

 

100 gigabit serial

The IEEE has specified 100-gigabit lanes as part of its ongoing 400 Gigabit Ethernet standardisation work. “It is the first time the IEEE has accepted 100 gigabit on a single wavelength as a baseline for a standard,” says Shabtai.  

The IEEE work has defined 4-by-100 gigabit with a reach of 500 meters using four-level pulse-amplitude modulation (PAM-4), which encodes 2 bits per symbol. This means that optics and electronics operating at 50 gigabit can be used. However, MultiPhy has developed digital signal processing technology that allows the optics to be overdriven such that 25-gigabit optics can deliver the 50 gigabaud required.
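
A sketch of the symbol-rate arithmetic (illustrative only, not MultiPhy’s DSP) shows why PAM-4 halves the required baud rate:

```python
# PAM-4 maps each pair of bits to one of four amplitude levels, so a
# 100Gbps stream needs only 50 gigabaud instead of NRZ's 100 gigabaud.

def pam4_encode(bits: str):
    """Map a bit string to PAM-4 levels (0-3), two bits per symbol."""
    assert len(bits) % 2 == 0
    return [int(bits[i:i + 2], 2) for i in range(0, len(bits), 2)]

print(pam4_encode("11011000"))  # [3, 1, 2, 0]: 8 bits in 4 symbols

def symbol_rate_gbaud(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    return bit_rate_gbps / bits_per_symbol

print(symbol_rate_gbaud(100, 2))  # PAM-4: 50.0 gigabaud
print(symbol_rate_gbaud(100, 1))  # NRZ: 100.0 gigabaud
```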

“There is a huge benefit in moving to a single-wavelength technology,” says Shabtai. ”You throw out pretty much three-quarters of the optics.”

The chip MultiPhy is developing, dubbed FlexPhy, supports the CAUI-4 (4-by-28 gigabit) interface, a 4:1 multiplexer and 1:4 demultiplexer, PAM-4 operating at 56 gigabaud and the digital signal processing. 

The optics - a single transmitter optical sub-assembly (TOSA) and a single receiver optical sub-assembly (ROSA) - and the FlexPhy chip will fit within a QSFP28 module. “Taking into account that you have one chip, one laser and one photo-diode, these are pretty much the components you already have in an SFP module,” says Shabtai. “Moving from a QSFP form factor to an SFP is not that far.”

MultiPhy says new-generation switches will support 128 SFP28 ports, each at 100 gigabit, equating to 12.8 terabits of switching capacity.

Using digital signal processing also benefits silicon photonics. “Integration is much denser using CMOS devices with silicon photonics,” says Shabtai. DSP also improves the performance of silicon photonics-based designs, addressing such issues as linearity and sensitivity. “A lot of these things can be solved using signal processing,” he says.

FlexPhy will be available for customers this year but MultiPhy would not say whether it already has working samples.

MultiPhy raised $7.2 million venture capital funding in 2010. 


Choosing paths to future Gigabit Ethernet speeds

Industry discussions are being planned in the coming months to determine how Ethernet standards can be accelerated to better serve industry needs, including how existing work can be used to speed up the creation of new Ethernet speeds.

 

The y-axis shows the number of lanes while the x-axis is the speed per lane. Each red dot shows the Ethernet rate at which the signalling (optical or electrical) was introduced. One challenge that John D'Ambrosia highlights is handling overlapping speeds. "What do we do about 100 Gig based on 4x25, 2x50 and 1x100 and ensure interoperability, and do that for every multiple where you have a crossover?" Source: Dell

One catalyst for these discussions has been the progress made in the emerging 400 Gigabit Ethernet (GbE) standard which is now at the first specification draft stage.

“If you look at what is happening at 400 Gig, the decisions that were made there do have potential repercussions for new speeds as well as new signalling rates and technologies,” says John D’Ambrosia, chairman of the Ethernet Alliance.

Before the IEEE P802.3bs 400 Gigabit Ethernet Task Force met in July, two electrical signalling schemes had already been chosen for the emerging standard: 16 channels of 25 gigabit non-return-to-zero (NRZ) and eight lanes of 50 gigabit using PAM-4 signalling. 

For the different reaches, three of the four optical interfaces had also been chosen, with the July meeting resolving the fourth -  2km - interface.  The final optical interfaces for the four different reaches are shown in the Table.

  • 100m over multi-mode fibre: 400GBASE-SR16
  • 500m over single-mode fibre: 400GBASE-DR4
  • 2km over single-mode fibre: 400GBASE-FR8
  • 10km over single-mode fibre: 400GBASE-LR8

The adoption of 50 gigabit electrical and optical interfaces at the July meeting has led some industry players to call for a new 50 gigabit Ethernet family to be created, says D’Ambrosia. 

Certain players want the 50 GbE standard to include a four-lane 200 GbE version, just as 100 GbE uses 4 x 25 Gig channels, while others want 50 GbE to be broader, with one-, two-, four- and eight-lane variants to deliver 50, 100, 200 and 400 GbE rates.

 

If you look at what is happening at 400 Gig, the decisions that were made there do have potential repercussions for new speeds as well as new signalling rates and technologies

 

The 400 GbE standard’s adoption of 100 GbE channels that use PAM-4 signalling has also raised questions as to whether 100 GbE PAM-4 should be added to the existing 100 GbE standard or a new 100 GbE activity be initiated.

“Those decisions have snowballed into a lot of activity and a lot of discussion,” says D’Ambrosia, who is organising an activity to address these issues and to determine where the industry consensus is as to how to proceed. 

“These are all industry debates that are going to happen over the next few months,” he says, with the goal being to better meet industry needs by evolving Ethernet more quickly.

Ethernet continues to change, notes D’Ambrosia. The 40 GbE standard exploited the investment made in 10 gigabit signalling, and the same is happening with 25 gigabit signalling and 100 gigabit. 

 

If you buy into the idea of more lanes based around a single signalling speed, then applying that to the next signalling speed at 100 Gigabit Ethernet, does that mean the next speed will be 800 Gigabit Ethernet? 

 

With 50 Gig electrical signalling now starting as part of the 400 GbE work, some industry voices wonder whether, instead of developing one Ethernet family around a rate, it is not better to develop a family of rates around the signalling speed, such as is being proposed with 50 Gig and the use of 1, 2, 4 and 8 lane configurations.

“If you buy into the idea of more lanes based around a single signalling speed, then applying that to the next signalling speed at 100 Gigabit Ethernet, does that mean the next speed will be 800 Gigabit Ethernet?” says D’Ambrosia.

The 400 GbE Task Force is having its latest meeting this week. A key goal is to get the first draft of the standard -  Version 1.0 - approved. “To make sure all the baselines have been interpreted correctly,” says D’Ambrosia. What then follows is filling in the detail, turning the draft into a technically-complete document. 

 

Further reading:

LightCounting: 25GbE almost done but more new Ethernet options are coming, click here


OFC 2015 digest: Part 2

The second part of the survey of developments at the OFC 2015 show held recently in Los Angeles.   
 
Part 2: Client-side component and module developments   
  • CFP4- and QSFP28-based 100GBASE-LR4 announced
  • First mid-reach optics in the QSFP28
  • SFP extended to 28 Gigabit
  • 400 Gig precursors using DMT and PAM-4 modulations 
  • VCSEL roadmap promises higher speeds and greater reach   
First CFP4 100GBASE-LR4s 
 
Several companies including Avago Technologies, JDSU, NeoPhotonics and Oclaro announced the first 100GBASE-LR4 products in the smaller CFP4 optical module form factor. Until now the 100GBASE-LR4 has been available in a CFP2 form factor.  
 
“Going from a CFP2 to a CFP4 results in a little over a 2x increase in density,” says Brandon Collings, CTO for communications and commercial optical products at JDSU. The CFP4 also has a lower maximum power specification of 6W compared to the CFP2’s 12W.  
 
The 100GBASE-LR4 standard spans 10 km over single mode fibre. The -LR4 is used mainly as a telecom interface to connect to WDM or packet-optical transport platforms, even when used in the data centre. Data centre switches already favour the smaller QSFP28 rather than the CFP4.  
 
Other 100 Gigabit standards include the 100GBASE-SR4 with a 100-meter reach over OM3 multi-mode fibre, and up to 150m over OM4 fibre. Avago points out that the -SR4 is typically used between a data centre’s top-of-rack and core switches whereas the -LR4 is used within the core network and for links between buildings. The -LR4 modules can also support Optical Transport Network (OTN).
 
But in the data centre there is a mid-reach requirement. “People are looking at new standards to accommodate more of the mega data centre distances of 500 m or 2 km,” says Robert Blum, Oclaro’s director of strategic marketing.  These mid-reach standards over single mode fibre include the 500 meter PSM4 and the 2 km CWDM4 and modules supporting these standards are starting to appear. “But today, on single mode, there is basically the -LR4 that gets you to 10 km,” says Blum.  
 
JDSU also views the -LR4 as an interim technology in the data centre that will fade once more optimised PSM4 and CWDM4 optics appear.  
 
 
QSFP28 portfolio grows 
 
The 100GBASE-LR4 was also shown in the smaller QSFP28, as part of a range of new interface offerings in the form factor. The QSFP28 offers a near 2x increase in faceplate density compared to the CFP4.
 
JDSU announced three 100 Gigabit QSFP28-based interfaces at OFC - the PSM-4 and CWDM4 MSAs and a 100GBASE-LR4, while Finisar announced QSFP28 versions of the CWDM4, the 100GBASE-LR4 and the 100GBASE-SR4. Meanwhile, Avago has samples of a QSFP28 100GBASE-SR4. JDSU’s QSFP28 -LR4 uses the same optics it is using in its CFP4 -LR4 product.  
 
The PSM4 MSA uses a single-mode ribbon cable - four lanes in each direction - to deliver the 500 m reach, while the CWDM4 MSA uses one fibre in each direction to carry the four wavelengths. The -LR4 standard uses tightly spaced wavelengths such that the lasers need to be cooled and temperature-controlled. The CWDM4, in contrast, uses a wider wavelength spacing and can use uncooled lasers, saving on power.
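
The cooled-versus-uncooled distinction comes down to channel spacing, which a short comparison makes concrete (the grid values are the standard LAN-WDM and CWDM ones, included here as an assumption rather than taken from the article):

```python
# 100GBASE-LR4's LAN-WDM channels sit ~4.5nm apart, tight enough to need
# temperature-controlled lasers; CWDM4's 20nm spacing tolerates uncooled ones.

lr4_nm = [1295.56, 1300.05, 1304.58, 1309.14]  # LAN-WDM grid (100GBASE-LR4)
cwdm4_nm = [1271, 1291, 1311, 1331]            # CWDM grid (CWDM4 MSA)

def spacing(grid):
    return [round(b - a, 2) for a, b in zip(grid, grid[1:])]

print(spacing(lr4_nm))    # [4.49, 4.53, 4.56] -> cooled lasers
print(spacing(cwdm4_nm))  # [20, 20, 20] -> uncooled lasers
```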
 
"100 Gig-per-laser, that is very economically advantageous" - Brian Welch, Luxtera

  
Luxtera announced the immediate availability of its PSM4 QSFP28 transceiver while the company is also offering its PSM4 silicon chipset for packaging partners that want to make their own modules or interfaces. Luxtera is a member of the newly formed Consortium for On-Board Optics (COBO).
 
Luxtera’s original active optical cable products were effectively 40 Gigabit PSM4 products although no such MSA was defined. The company’s original design also operated at 1490nm  whereas the PSM4 is at 1310nm.  
 
“The PSM4 is a relatively new type of product, focused on hyper-scale data centres - Microsoft, Amazon, Google and the like - with reaches regularly to 500 m and beyond,” says Brian Welch, director of product marketing at Luxtera. The company’s PSM4 offers an extended reach to 2 km, far beyond the PSM4 MSA’s specification. The company says there is also industry interest for PSM4 links over shorter reaches, up to 30 m. 
 
Luxtera’s PSM4 design uses one laser for all four lanes. “In a 100 Gig part, we get 100 Gig-per-laser,” says Welch. “WDM gets 25 Gig-per-laser, multi-mode gets 25 Gig-per-laser; 100 Gig-per-laser, that is very economically advantageous.”    
 
 
QSFP28 ‘breakout’ mode 
 
Avago, Finisar and Oclaro all demonstrated 100 Gigabit QSFP28 modules in ‘breakout’ mode whereby the module’s output fibres fan out and interface to separate, lower-speed SFP28 optical modules.
 
“The SFP+ is the most ubiquitous and standard form factor deployed in the industry,” says Rafik Ward, vice president of marketing at Finisar. “The SFP28 leverages this architecture, bringing it up to 28 Gigabit.”  
 
Applications using the breakout arrangement include the emerging Fibre Channel standards: the QSFP28 can support the 128 Gig Fibre Channel standard where 32 Gig Fibre Channel traffic is sent to individual transceivers. Avago demonstrated such an arrangement at OFC and said its QSFP28 product will be available before the year end.
 
Similarly, the QSFP28-to-SFP28 breakout mode will enable the splitting of 100 Gigabit Ethernet (GbE) into IEEE 25 Gigabit Ethernet lanes once the standard is completed. 
 
Oclaro showed a 100 Gig QSFP28 using a 4x28G LISEL (lens-integrated surface-emitting DFB laser) array with one channel connected to an SFP28 over a 2 km link. Oclaro inherited the LISEL technology when it merged with Opnext in 2012.  
 
Finisar demonstrated its 100GBASE-SR4 QSFP28 connected to four SFP28s over 100 m of OM4 multimode fibre.

Oclaro also showed an SFP28 for long reach that spans 10 km over single-mode fibre. In addition to Fibre Channel and Ethernet, Oclaro also highlights wireless fronthaul carrying CPRI traffic, although such data rates are not expected for several years yet. Oclaro’s SFP28 will be in full production in the first quarter of 2016. Oclaro says it will also use the LISEL technology for its PSM4 design.
 
 
Industry prepares for 400GbE with DMT and PAM-4
  
JDSU demonstrated a 4 x 100 Gig design, described as a precursor for 400 Gigabit technology. The IEEE is still working to define the different versions of the 400 Gigabit Ethernet standard. The JDSU optical hardware design multiplexes four 100 Gig wavelengths onto a fibre.    
 
“There are multiple approaches towards 400 Gig client interfaces being discussed at the IEEE and within the industry,” says JDSU’s Collings. “The modulation formats being evaluated are non-return-to-zero (NRZ), PAM-4 and discrete multi-tone (DMT).”  
 
For the demonstration, JDSU used DMT modulation to encode 100 Gbps on each of the four wavelengths, although Collings stresses that JDSU continues work on all three formats. In contrast, MultiPhy is using PAM-4 to develop a 100 Gig serial link.
 
At OFC, Avago demonstrated a 25 Gig VCSEL being driven using its PAM-4 chip to achieve a 50 Gig rate. The PAM-4 chip takes two 25 Gbps input streams and encodes each two bits into a symbol that then drives the VCSEL. The demonstration paves the way for emerging standards such as 50 Gigabit Ethernet (GbE) using a 25G VCSEL, and shows how 50 Gigabit lanes could be used to implement 400 GbE using eight lanes instead of 16.  
 
NeoPhotonics demonstrated a 56 Gbps externally modulated laser (EML) along with pHEMT gallium arsenide driver technology, the result of its acquisition of Lapis Semiconductor in 2013.  
 
The main application will be 400 Gigabit Ethernet but there is already industry interest in proprietary solutions, says Nicolas Herriau, director of product engineering at NeoPhotonics. The industry may not have decided whether it will use NRZ or PAM-4 [for 400GbE], “but the goal is to get prepared”, he says. 
 
Herriau points out that the first PAM-4 ICs are not yet optimised to work with lasers. As a result, having a fast, high-quality 56 Gbps laser is an advantage.   
 
Avago has shipped over one million 25 Gig channels in multiple products
 
  
The future of VCSELs   
 
VCSELs at 25 Gig are an enabling technology for the data centre, says Avago. Operating at 850nm, the VCSELs deliver the 100m reach over OM3 and 150m reach over OM4 multi-mode fibre. Avago announced at OFC that it had shipped over one million VCSELs in the last two years. Before then, only 10 Gig VCSELs were available, used for 40 Gig and 100 Gig short-reach modules.
 
Avago says that the move to 100 Gig and beyond has triggered an industry debate as to whether single-mode rather than multi-mode fibre is the way forward in data centres. For VCSELs, the open questions are whether the technology can support 25 Gig lanes, whether such VCSELs are cost-effective, and whether they can meet extended link distances beyond 100 m and 150 m.  
 
“Silicon photonics is spoken of as a great technology for the future, for 100 Gig and greater speeds, but this [announcement] is not academic or hype,” says I-Hsing Tan, Avago’s segment marketing manager for Ethernet and storage optical transceivers. “Avago has been using 25 Gig VCSELs for short-reach distance applications and has shipped over one million 25 Gig channels in multiple products.” 
 
The products that account for the over one million shipments include Ethernet transceivers; single- and 4-lane 32 Gigabit Fibre Channel, each channel operating at 28 Gbps; InfiniBand applications, with four channels being the most popular; and proprietary optical interfaces with channel counts varying from two to 12, or 50 to 250 Gbps.
 
In other OFC data centre demonstrations, Avago showed an extended short reach interface at 100 Gig - the 100GBASE-eSR4 - with a 300 m span. Because it is a demonstration and not a product, Avago is not detailing how it is extending the reach beyond saying that it is a combination of the laser output power and the receiver design. The extended reach product will be available from 2016.  
 
Avago completed the acquisition of PLX Technologies in the third quarter of 2014 and its PCI Express (PCIe) over optics demonstration is one result. The demonstration is designed to remove the need for a network interface card between an Ethernet switch and a server. “The aim is to absorb the NIC as part of the ASIC design to achieve a cost effective solution,” says Tan. Avago says it is engaged with several data centre operators with this concept.     
 
Avago also demonstrated a 40 Gig bi-directional module, an alternative to the 40GBASE-SR4. The 40G -SR4 uses eight multi-mode fibres, four in each direction, each carrying a 10 Gig signal. “Going to 40 Gig [from 10 Gig] consumes fibre,” says Tan. Accordingly, the 40 Gig bidi design uses WDM to avoid using a ribbon fibre. Instead, the bidi uses two multi-mode fibres, each carrying two 20 Gig wavelengths travelling in opposite directions. Avago hopes to make this product generally available later this year.
 
At OFC, Finisar demonstrated designs for 40 Gig and 100 Gig speeds using duplex multi-mode fibre rather than ribbon fibre. The 40 Gig demo achieved 300 m over OM3 fibre while the 100 Gig demo achieved 70 m over OM3 and 100 m over OM4 fibre. Finisar’s designs use four wavelengths for each multi-mode fibre, what it calls shortwave WDM. 
 
Finisar’s VCSEL demonstrations at OFC were to highlight that the technology can continue to play an important role in the data centre. Citing a study by market research firm Gartner, Finisar notes that 94 percent of data centres built in 2014 were smaller than 250,000 square feet, and this percentage is not expected to change through to 2018. A 300-meter optical link is sufficient for the longest reaches in such sized data centres.
 
Finisar is also part of a work initiative to define and standardise new wideband multi-mode fibre that will enable WDM transmission over links even beyond 300 m to address larger data centres. 
 
“There are a lot of legs to VCSEL-based multi-mode technology for several generations into the future,” says Ward. “We will come out with new innovative products capable of links up to 300 m on multi-mode fibre.”

 

For Part 1, click here

STMicro chooses PSM4 for first silicon photonics product

STMicroelectronics has revealed that its first silicon photonics product will be the 100 Gigabit PSM4. The 500m-reach PSM4 multi-source agreement (MSA) is a single-mode design that uses four parallel fibres in each direction. The chip company expects the PSM4 optical engine to be in production during 2015. 

"We have prototypes and can show them running very well at 40 Gig [4x10 Gig]," says Flavio Benetti, group vice president, digital product group and general manager, networking products division at STMicroelectronics. "But it is expected to be proven at 4x25 Gig in the next few months." STMicroelectronics has just received its latest prototype chip that is now working at 4x25 Gig. 

Benetti is bullish about the technology's prospects: "Silicon photonics is really a great opportunity today in that, once proven, it will be a strong breakthrough in the market." He highlights three benefits silicon photonics delivers:
  • Lowers the manufacturing cost of optical modules  
  • Improves link speeds
  • Reduces power consumption       
Silicon photonics provides an opportunity to optimise the supply chain, he says. The technology simplifies optical module manufacturing by reducing the number of parts needed and the assembly cost. 

Regarding speed, STMicroelectronics cites its work with Finisar demonstrating a 50 Gig link using silicon photonics that was detailed at the recent ECOC show. "Photonic processing integrated into a CMOS line allows this intrinsically, while there are several factors that don't allow such an easy implementation of 50 Gig with traditional technologies," he says, citing VCSELs and directly-modulated lasers as examples. 
  
"We think we can bring an advantage in the power consumption as well," says Benetti. There still needs to be ICs such as to drive the optical modulator, for example, but the optical circuitry has a very low power consumption.  
   
STMicroelectronics licensed silicon photonics technology from Luxtera in 2012 and has a 300mm (12-inch) wafer production line in Crolles, France. The company will not offer a foundry service to other companies since STMicroelectronics believes its silicon photonics process offers it a competitive advantage.      

The company has also developed its own electronic design automation (EDA) tool for silicon photonics. The EDA tool allows optical circuitry to be simulated alongside the company's high-speed BiCMOS ICs. "What we have developed covers our needs," says Benetti. "But we are working to evolve it to more complex models."

STMicro's in-house silicon photonics EDA. "We will develop the EDA tools to the level needed for the next generation products," says Flavio Benetti.

The company has developed a fibre attachment solution. "The big issue with silicon photonics is not the silicon part; it is getting the light in and out [of the chip]," says Benetti. The in-house technique is being made available to customers. "It is much more convenient for customers to attach the fibre, not us, as it is quite delicate." STMicroelectronics will deliver the optical engine and its customers will finish the design and attach the fibres.

Other techniques to couple light onto the chip are also being explored by the company. "Why should we need the fibre?" he says. "We should find a better way to get the light in and the light out." The goal of the work is to develop techniques that can be implemented on a fabrication plant scale. "The problem is [developing a technique] not to produce 100 parts but one million parts; this is the angle we are taking."

Meanwhile, the company is evaluating high-speed designs for 400 Gigabit Ethernet. "I don't see in the short-to-medium term a solution that is 400 Gig-one fibre," says Benetti. The work involves looking at the trade-offs of the various design approaches such as parallel fibres, WDM and modulation schemes like PAM-4 and PAM-8 (pulse amplitude modulation). 

Performance parameters used to evaluate the various design options include cost and power consumption. But Benetti says more work is needed before STMicroelectronics will choose particular designs to pursue.

 

