Elenion's coherent and fibre-to-the-server plans

  • Elenion’s coherent chip - an integrated modulator-receiver assembly - is now generally available. 
  • The company has a silicon photonics design library that includes over 1,000 elements. 
  • Elenion is also developing an optical engine for client-side interfaces.

Elenion Technologies has given an update on its activities and strategy, eight months after announcing itself. The silicon photonics specialist is backed by private equity firm, Marlin Equity Partners, which also owns systems vendor, Coriant. Elenion had already been active for two and a half years and shipping product when it emerged from its state of secrecy last December.

Larry Schwerin

Elenion has since announced it is selling its telecom product, a coherent transceiver PIC, to Coriant and now other companies.

It has also progressed its optical engine design for the data centre, which will soon become a product. Elenion has been working with Ethernet switch chip maker, Cavium, and data centre player, Microsoft, as part of its datacom work.

“We have moved forward,” says Larry Schwerin, the CEO of Elenion.

 

Coherent PIC

Elenion’s integrated modulator-receiver assembly is being used by Coriant for two CFP2 Analogue Coherent Optics (CFP2-ACO) modules as part of its Groove G30 platform.

The first is a short-reach CFP2-ACO for point-to-point 200-gigabit links that has a reach of at least 80km. The second is a high-performance CFP2-ACO that has a reach of up to 4,000km at 100 gigabits and 650km at 200 gigabits. 

Schwerin says the company is now selling the coherent PIC to “a lot of people”. In addition to the CFP2-ACO, there is the Digital Coherent Optics (DCO) pluggable market where the PIC and the coherent digital signal processor (DSP) are integrated within the module. Examples include the CFP-DCO and the smaller CFP2-DCO which is now being designed into new systems. ADVA Optical Networking is using the CFP2-DCO for its Teraflex, as is its acquisition target MRV with its 200-gigabit coherent muxponder. Infinera’s latest XTM II platforms also use the CFP2-DCO.

 

We have got a library that has well over 1,000 elements

 

Using silicon photonics benefits the cost and performance of the coherent design, says Schwerin. The cost benefit is a result of optical integration. “You can look at it as a highly simplified supply chain,” says Schwerin. Coupling the electronics close to the optics also optimises overall performance.  

Elenion is also targeting the line-card market for its coherent PIC. “This is one of the reasons why I wanted to stay out of the pluggable business,” says Schwerin. “There are a lot more customers out there if you stay out of pluggables because now you are selling an [optical] engine.”

The company is also developing a coherent PIC design that will support higher data rates such as 400- and 600-gigabit per lambda. “Without being too specific because we do remain stealthy, we have plans to support these applications,” says Schwerin.

Schwerin stresses that the real strength of the company is its design library used to develop its silicon photonics circuits. Elenion emerged out of a silicon photonics design-for-service company. “We have got a library that has well over 1,000 elements,” he says. Elenion says it can address custom design requests of companies using its design library.

 

Datacom

Elenion announced at the OFC show held in Los Angeles in March that it is working with Jabil AOC Technologies, a subsidiary of the manufacturing firm, Jabil Circuits. Elenion chose the contract manufacturer due to its ability to address both line-card and pluggable designs, the markets for its optical engines. 

The two firms have also been working at the chip level on such issues as fibre attach, coupling the laser and adding the associated electronics. “We are trying to make the interface as elegant and streamlined as possible,” says Schwerin. “We have got initiatives underway so that you don't need these complex arrangements.”

Schwerin highlights the disparity between the unit volumes needed for the telecom and datacom markets. According to forecasts from market research firms, the overall coherent market is expected to grow to between 800,000 and 1 million units a year by 2020. In contrast, the number of interfaces used inside one large-scale data centre can be as high as 2 million. “To achieve rapid manufacturing and yield, you have got to simplify the process,” he says.

This is what Elenion is tackling. If 1,000 die can be made on a single silicon wafer, and knowing the interface volumes required and the yields, the total number of wafer runs can be determined. And it is the overall time taken from starting a wafer to the finished transceiver PIC output that Elenion is looking to shorten, says the CEO.
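The wafer-run arithmetic the CEO describes can be sketched in a few lines. Only the 1,000 die-per-wafer figure comes from the article; the yield and volume numbers below are illustrative assumptions.

```python
import math

# Back-of-envelope estimate of wafer runs needed to hit a target volume.
# Only the 1,000 die-per-wafer figure is from the article; the yield and
# volume figures are illustrative assumptions.
def wafer_runs(units_needed, die_per_wafer=1000, yield_rate=0.8):
    """Number of wafer starts needed to produce the required good die."""
    good_die_per_wafer = die_per_wafer * yield_rate
    return math.ceil(units_needed / good_die_per_wafer)

# e.g. 2 million interfaces for one large data centre at an assumed 80% yield:
print(wafer_runs(2_000_000))  # 2500 wafer starts
```

Knowing the wafer-start count, the remaining lever is the cycle time per wafer, which is the quantity Elenion says it is looking to shorten.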

 

We ran that demo from 7 AM to 2 AM every day of the show  

 

At OFC, Elenion hired a hotel suite near the convention centre to demonstrate its technologies to interested companies. One demonstration used its 25Gbps optical engine directly mounted on a Cavium QLogic network interface card (NIC) connecting a server to a high-capacity Cavium Xpliant Ethernet switch chip. The demo showed how 16 NICs could be connected to the switch chip for a total capacity of 400 gigabits. “No more direct-attached cables or active optical cables, literally fibre-to-the-server,” says Schwerin. “We ran that demo from 7 AM to 2 AM every day of the show.”   

Elenion’s on-board optics design was based on the emerging Consortium for On-Board Optics (COBO) standard. “The Microsoft folks, we work with them closely, so obviously what we are doing follows their intent,” says Schwerin.

The optical engine will also support 56Gbps links when used with four-level pulse-amplitude modulation (PAM-4) and the company is even eyeing 100Gbps interfaces. For now, Elenion’s datacom optical engine remains a technical platform but a product will soon follow.

The company’s datacom work is also benefiting its telecom designs. “The platform technology that we use for datacom has now found its way into the coherent programme, especially around the packaging,” says Schwerin. 

 

* The article was changed on July 25th to mention that Elenion's PIC is being used in two Coriant CFP2-ACOs.


COBO: specification work nearing completion

The Consortium for On-Board Optics (COBO) is on target to complete its specifications work by the year end. The work will then enter a final approval stage that will take up to a further three months.

On-board optics, also known as mid-board or embedded optics, have been available for years but vendors have so far had to use custom products. The goal of COBO, first announced in March 2015 and backed by such companies as Microsoft, Cisco Systems, Finisar and Intel, is to develop a technology roadmap and common specifications for on-board optics to ensure interoperability.

Brad Booth (pictured), the chair of COBO and principal architect for Microsoft’s Azure Global Networking Services, says that bringing optics inside systems raises a different set of issues compared to pluggable optical modules used on the front panel of equipment. “If you have a requirement for 32 ports on a faceplate, you know mechanically what you can build,” says Booth.

With on-board optics, the focus is less about size considerations and more about the optical design itself and what is needed to make it work. There is also more scope to future-proof the design, something that cannot be done to the same extent with pluggable optics, says Booth.

COBO is working on a 400-gigabit optical module based on the 8-by-50 gigabit interface. The focus in recent months has been on defining the electrical connector that will be needed. The group has narrowed down the choice of candidates to two and the final selection will be based on the connector's signal integrity performance and manufacturability. Also being addressed is how two such modules could be placed side-by-side to create an 800-gigabit (16-by-50 gigabit) design.

COBO’s 400-gigabit on-board optics will support multi-mode and single-mode fibre variants. “When we do a comparison with what the pluggable people are pushing, there are a lot of pluggables that won’t be able to handle the power envelope,” says Booth.

 

There is no revolutionary change that goes on with technology, it all has to be evolutionary

 

On-board optics differs from a pluggable module in that the optics and electronics are not confined within a mechanical enclosure, so power dissipation is less of a design issue. But supporting different fibre requirements and reaches raises new design issues. For example, a 16-by-50 gigabit design doubles the footprint, and COBO is looking to eliminate the gap between the two halves so that a module that is either 8 or 16 lanes wide can be plugged in.

COBO is also being approached about supporting other requirements such as coherent optics for long-distance transmission. A Coherent Working Group has been formed and will meet for the first time in December in Santa Barbara, California. Using on-board optics for coherent avoids the power constraint issues associated with using a caged pluggable module.

 

On-board optics versus co-packaging

On-board optics is seen as the next step in the evolution of optics as it moves from the faceplate onto the board, closer to the ASIC. There are only so many modules that can fit on a faceplate. The power consumption of a pluggable module also rises as its data rate increases, as does the power associated with driving faster electrical traces across the board.

Using on-board optics shortens the trace lengths by placing the optics closer to the chip. The board input-output capacity that can be supported also increases as it is fibres not pluggable optics that reside on the front panel. Ultimately, however, designers are already exploring the combining of optics and the chip using a system-in-package design, also known as 2.5D or 3D chip packaging.

Booth says discussions have already taken place between COBO members about co-packaged optics. But he does not expect system vendors to jump straight from pluggable optics to co-packaging, skipping the on-board optics stage.

“There is no revolutionary change that goes on with technology, it all has to be evolutionary,” says Booth, who sees on-board optics as the next needed transition after pluggables. “You have to have some pathway to learn and discover, and figure out the pain points,” he says. “We are going to learn a lot when we start the deployment of COBO-based modules.”

Booth also sees on-board optics as the next step in terms of flexibility.

When pluggable modules were first introduced they were promoted as allowing switch vendors to support different fibre and copper interfaces on their platforms. The requirements of the cloud providers have changed that broad thinking, he says: “We don’t need that same level of flexibility but there is still a need for supporting different styles of optical interfaces on a switch.”

 

There are not a lot of other modules that can do 600 gigabits but guess what? COBO can

For example, one data centre operator may favour a parallel fibre solution based on the 100-gigabit PSM4 module while another may want a 100-gigabit wavelength-division multiplexing (WDM) solution and use the CWDM4 module. “This [parallel lane versus WDM] is something embedded optics can cater for,” says Booth.

Moving to a co-packaged design offers no such flexibility. What can a data centre manager do when deciding to change from parallel single-mode optics to wavelength-division multiplexing when the optics is already co-packaged with the chip? “Also how do I deal with an optics failure? Do I have to replace the whole switch silicon?” says Booth. We may be getting to the point where we can embed optics with silicon but what is needed is a lot more work, a lot more consideration and a lot more time, says Booth.

 

Status

COBO members are busy working on the 400-gigabit embedded module, and by extension the 800-gigabit design. There is also ongoing work as to how to support technologies such as the OIF’s FlexEthernet. Coherent designs will soon support rates such as 600-gigabit using a symbol rate of 64 gigabaud and advanced modulation. “There are not a lot of other modules that can do 600 gigabits but guess what? COBO can,” says Booth.

The good thing is that whether it is coherent, Ethernet or other technologies, all the members are sitting in the same room, says Booth: “It doesn’t matter which market gets there first, we are going to have to figure it out.”

 

Story updated on October 27th regarding the connector selection and the Coherent Working Group.


Arista adds coherent CFP2 modules to its 7500 switch

Arista Networks has developed a coherent optical transport line card for its 7500 high-end switch series. The line card hosts six 100-gigabit CFP2-ACO (analogue coherent optics) modules and has a reach of up to 5,000 km.

 

Martin Hull

In the last year, several optical equipment makers have announced ‘stackable’ platforms specifically to link data centres.

Infinera’s Cloud Xpress was the first while Coriant recently detailed its Groove G30 platform. Arista’s announcement offers data centre managers an alternative to such data centre interconnect platforms by adding dense wavelength-division multiplexing (DWDM) optics directly onto its switch. 

For customers investing in an optical solution, they now have an all-in-one alternative to an optical transport chassis or the newer stackable data centre interconnect products, says Martin Hull, senior director of product management at Arista Networks. Insert two such line cards into the 7500 and you have 12 ports of 100 gigabit coherent optics, eliminating the need for a separate optical transport platform, he says.

The larger 11RU 7500 chassis has eight card slots such that the likely maximum number of coherent cards used in one chassis is four or five - 24 or 30 wavelengths - given that 40 or 100 Gigabit Ethernet client-side interfaces are also needed. The 7500 can support up to 96 ports of 100 Gigabit Ethernet (GbE).
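The slot arithmetic above can be checked with a small sketch, using the slot and port counts stated in the article:

```python
# Arista 7500 coherent line-card arithmetic (figures from the article:
# 8 slots, 6 CFP2-ACO ports per coherent card).
SLOTS = 8
COHERENT_PORTS_PER_CARD = 6

def coherent_wavelengths(coherent_cards):
    """Wavelengths delivered by a given number of coherent line cards."""
    assert coherent_cards <= SLOTS, "cannot exceed the chassis slot count"
    return coherent_cards * COHERENT_PORTS_PER_CARD

# Four or five coherent cards leave slots free for client-side interfaces:
print(coherent_wavelengths(4), coherent_wavelengths(5))  # 24 30
```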

Arista says the coherent line card meets a variety of customer needs. Large enterprises such as financial companies may want two to four 100 gigabit wavelengths to connect their sites in a metro region. In contrast, cloud providers require a dozen or more wavelengths. “They talk about terabit bandwidth,” says Hull.

 

With the CFP2-ACO, the DSP is outside the module. That allows us to multi-source the optics

 

As well as the CFP2-ACO modules, the card also features six coherent DSP-ASICs. The DSPs support 100 gigabit dual-polarisation quadrature phase-shift keying (DP-QPSK) modulation but do not support the more advanced quadrature amplitude modulation (QAM) schemes that carry more bits per wavelength. The CFP2-ACO line card has a spectral efficiency that enables up to 96 wavelengths across the fibre's C-band.
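The 96-wavelength figure can be sanity-checked with a quick calculation. Assuming the commonly quoted ~4.8 THz of usable C-band spectrum (an assumption, not a figure from the article), 96 channels implies the standard 50 GHz ITU grid:

```python
# Implied DWDM channel spacing for 96 wavelengths across the C-band.
# The 4.8 THz usable-spectrum figure is a common assumption, not from
# the article.
C_BAND_GHZ = 4800
channels = 96
spacing_ghz = C_BAND_GHZ / channels
print(spacing_ghz)  # 50.0 -> the standard 50 GHz ITU grid
```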

Did Arista consider using CFP coherent optical modules that support 200 gigabit, and even 300 and 400 gigabit line rates using 8- and 16-QAM? “With the CFP2-ACO, the DSP is outside the module,” says Hull. “That allows us to multi-source the optics.”

The line card also includes 256-bit MACsec encryption. “Enterprises and cloud providers would love to encrypt everything - it is a requirement,” says Hull. “The problem is getting hold of 100-gigabit encryptors.” The MACsec silicon encrypts each packet sent, avoiding having to use a separate encryption platform.   

 

CFP4-ACO and COBO

As for denser CFP4-ACO coherent modules, the next development after the CFP2-ACO, Hull says it is still too early, as it is for the 400 gigabit on-board optics being developed by COBO, which is also intended to support coherent. “There is a lot of potential but it is still very early for COBO,” he says.

“Where we are today, we think we are on the cutting edge of what can be delivered on a line card,” says Hull. “Getting everything onto that line card is an engineering achievement.”    

 

Future developments

Arista does not make its own custom ASICs or develop optics for its switch platforms. Instead, the company uses merchant switch silicon from the likes of Broadcom and Intel.  

According to Hull, such merchant silicon continues to improve, adding capabilities to Arista’s top-of-rack ‘leaf’ switches and its more powerful ‘spine’ switches such as the 7500. This allows the company to make denser, higher-performance platforms that also scale when coupled with software and networking protocol developments. 

Arista claims many of the roles performed by traditional routers can now be fulfilled by the 7500 such as peering, the exchange of large routing table information between routers using the Border Gateway Protocol (BGP). “[With the 7500], we can have that peering session; we can exchange a full set of routes with that other device,” says Hull. 

 

"We think we are on the cutting edge of what can be delivered on a line card” 

 

The company uses what it calls selective route download where the long list of routes is filtered such that the switch hardware is only programmed with the routes to be communicated with. Hull cites as an example a content delivery site that sends content to subscribers. The subscribers are typically confined to a known geographic region. “I don’t need to have every single Internet route in my hardware, I just need the routes to reach that state or metro region,” says Hull. 

By having merchant silicon that supports large routing tables coupled with software such as selective route download, customers can use a switch to do the router’s job, he says.     

Arista says that in 2016 and 2017 it will continue to introduce leaf and spine switches that enable data centre customers to further scale their networks. In September Arista launched Broadcom Tomahawk-based switches that enable the transition from 10 gigabit server interfaces to 25 gigabit and the transition from 40 to 100 gigabit uplinks.

Longer term, there will be 50 GbE and iterations of 400 gigabit and one terabit Ethernet, says Hull. And all this relates to the switch silicon. At present 3.2 terabit switch chips are common and already there is a roadmap to 6.4 and even 12.8 terabits by increasing the chip’s pin count and using PAM-4 alongside the 25 gigabit signalling to double input/output again. A 12.8 terabit switch may be a single chip, says Hull, or it could be multiple 3.2 terabit building blocks integrated together.
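The two scaling levers Hull describes - more SerDes lanes (pin count) and PAM-4's two bits per symbol on 25 gigabaud signalling - can be sketched as follows; the 128-lane starting point for a 3.2 terabit chip is an inference from the stated figures, not from the article.

```python
# Switch-chip capacity from lane count, symbol rate and modulation.
# NRZ carries 1 bit per symbol, PAM-4 carries 2. The 128-lane figure
# for a 3.2 Tb/s chip is an inference, not from the article.
def switch_capacity_tbps(lanes, gbaud=25, bits_per_symbol=1):
    return lanes * gbaud * bits_per_symbol / 1000

print(switch_capacity_tbps(128))                      # 3.2 (128 x 25G NRZ)
print(switch_capacity_tbps(128, bits_per_symbol=2))   # 6.4 (PAM-4 doubles it)
print(switch_capacity_tbps(256, bits_per_symbol=2))   # 12.8 (more pins + PAM-4)
```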

“It is not just a case of more ports on a box,” says Hull. “The boxes have to be more capable from a hardware perspective so that the software can harness that.”


COBO looks inside and beyond the data centre

The Consortium for On-Board Optics is working on 400 gigabit optics for the data centre and also for longer-distance links. COBO is a Microsoft-led initiative tasked with standardising a form factor for embedded optics.

Established in March 2015, the consortium already has over 50 members and expects to have early specifications next year and first hardware by late 2017.

 

Brad Booth

Brad Booth, the chair of COBO and principal architect for Microsoft’s Azure Global Networking Services, says Microsoft plans to deploy 100 gigabit in its data centres next year. When the company started looking at 400 gigabit, it became concerned about the size of the proposed pluggable modules and the interface speeds needed between the switch silicon and the pluggable module.

“What jumped out at us is that we might be running into an issue here,” says Booth.

This led Microsoft to create the COBO industry consortium to look at moving optics onto the line card and away from the equipment’s face plate. Such embedded designs are already being used for high-performance computing, says Booth, while data centre switch vendors have done development work using the technology.

On-board optics delivers higher interface densities, and in many cases in the data centre, a pluggable module isn’t required. “We generally know the type of interconnect we are using, it is pretty structured,” says Booth. But the issue with on-board optics is that existing designs are proprietary; no standardised form factor exists.

“It occurred to us that maybe this is the problem that needs to be solved to create better equipment,” says Booth. Can the power consumed between switch silicon and the module be reduced? And can the interface be simplified by eliminating components such as re-timers?

“This is worth doing if you believe that in the long run - not the next five years, but maybe ten years out - optics needs to be really close to the chip, or potentially on-chip,” says Booth.

 

400 gigabit

COBO sees 400 gigabit as a crunch point. For 100 gigabit interconnect, the market is already well served by various standards and multi-source agreements so it makes no sense for COBO to go head-to-head here. But should COBO prove successful at 400 gigabit, Booth envisages the specification also being used for 100, 50, 25 and even 10 gigabit links, as well as future speeds beyond 400 gigabit.  

The consortium is developing standardised footprints for the on-board optics. “If I want to deploy 100 gigabit, that footprint will be common no matter what the reach you are achieving with it,” says Booth. “And if I want a 400 gigabit module, it may be a slightly larger footprint because it has more pins but all the 400 gigabit modules would have a similar footprint.” 

COBO plans to use existing interfaces defined by the industry. “We are also looking at other IEEE standards for optical interfaces and various multi-source agreements as necessary,” says Booth. COBO is also technology agnostic; companies will decide which technologies they use to implement the embedded optics for the different speeds and reaches.

 

“This is worth doing if you believe that in the long run - not the next five years, but maybe ten years out - optics needs to be really close to the chip, or potentially on-chip."

 

Reliability

Another issue the consortium is focusing on is the reliability of on-board optics and whether to use socketed optics or solder the module onto the board. This is an important consideration given that it is the vendor’s responsibility to fix or replace a card should a module fail.

This has led COBO to analyse the causes of module failure. Largely, it is not the optics but the connections that are the cause. It can be poor alignment with the electrical connector or the cleanliness of the optical connection, whether a pigtail or the connectors linking the embedded module to the face plate. “The discussions are getting to the point where the system reliability is at a level that you have with pluggables, if not better,” says Booth.

 

Dropping below $1-per-gigabit

COBO expects the cost of its optical interconnect to go below the $1-per-gigabit industry target. “The group will focus on 400 gigabit with the perception that the module could be four modules on 100 gigabit in the same footprint,” says Booth. Using four 100 gigabit optics in one module saves on packaging and the printed circuit board traces needed.

Booth says that 100 gigabit optics is currently priced between $2 and $3-per-gigabit. “If I integrate that into a 400 gigabit module, the price-per-gig comes down significantly,” says Booth. “All the stuff I had to replicate four times suddenly is integrated into one, cutting costs significantly in a number of areas.” Significantly enough to dip below $1-per-gigabit, he says.
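Booth's cost argument can be put in rough numbers. The $2-$3-per-gigabit figure for 100 gigabit optics is from the article; the integration-savings factor below is purely an illustrative assumption to show how the per-gigabit price falls.

```python
# Rough sketch of the $1-per-gigabit argument. The $2/Gb price for
# 100G optics is from the article; the 50% integration-savings factor
# is an illustrative assumption, not a COBO figure.
def price_per_gig(module_cost_usd, gigabits):
    return module_cost_usd / gigabits

cost_100g = 100 * 2.0                 # $200: a 100G module at $2/Gb
cost_400g = 4 * cost_100g * 0.5       # four optics integrated, shared
                                      # packaging assumed to halve the cost
print(price_per_gig(cost_400g, 400))  # 1.0 -> at the $1/Gb threshold
```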

 

Power consumption and line-side optics

COBO has not specified power targets for the embedded optics in part because it has greater control of the thermal environment compared to a pluggable module where the optics is encased in a cage. “By working in the vertical dimension, we can get creative in how we build the heatsink,” says Booth. “We can use the same footprint no matter whether it is 100 gigabit inside or 100 gigabit outside the data centre, the only difference is I’ve got different thermal classifications, a different way to dissipate that power.”        

The consortium is investigating whether its embedded optics can support 100 gigabit long-haul optics, given that such optics has traditionally been implemented as an embedded design. “Bringing COBO back to that market is extremely powerful because you can better manage the thermal environment,” says Booth. And by moving the power-hungry modules away from the face plate, surface area is freed up that can be used for venting and improving air flow.

“We should be considering everything is possible, although we may not write the specification on Day One,” says Booth. “I’m hoping we may eventually be able to place coherent devices right next to the COBO module or potentially the optics and the coherent device built together.

“If you look at the hyper-scale data centre players, we have guys that focus just as much on inside the data centre as they do on how to connect the data centres in within a metro area, national area and then sub-sea,” says Booth. “That is having an impact because when we start looking at what we want to do with those networks, we want to have some level of control on what we are doing there and on the cost.

“We buy gazillions of optical modules for inside the data centre. Why is it that we have to pay exorbitant prices for the ones that we are not using inside [the data centre]?” he says.

 

“I can’t help paint a more rosier picture because when you have got 1.4 million servers, if I end up with optics down to all of those, that is a lot of interconnect”

 

Market opportunities

Having a common form factor for on-board optics will allow vendors to focus on what they do best: the optics. “We are buying you for the optics, we are not buying you for the footprint you have on the board,” he says. 

Booth is sensitive to the reservations of optical component makers to such internet business-led initiatives. “It is very tough for these guys to extend themselves to do this type of work because they are putting a lot of their own IP on the line,” says Booth. “This is a very competitive space.”

But he stresses it is also fiercely competitive between the large internet businesses building data centres. “Let’s sit down and figure out what does it take to progress this industry. What does it take to make optics go everywhere?”

Booth also stresses the promising market opportunities COBO can serve such as server interconnect.

“When I look at this market, we are talking about doing optics down to our servers,” says Booth. “I can’t help paint a more rosier picture because when you have got 1.4 million servers, if I end up with optics down to all of those, that is a lot of interconnect.”

 


OFC 2015 digest: Part 2

The second part of the survey of developments at the OFC 2015 show held recently in Los Angeles.   
 
Part 2: Client-side component and module developments   
  • CFP4- and QSFP28-based 100GBASE-LR4 announced
  • First mid-reach optics in the QSFP28
  • SFP extended to 28 Gigabit
  • 400 Gig precursors using DMT and PAM-4 modulations 
  • VCSEL roadmap promises higher speeds and greater reach   
First CFP4 100GBASE-LR4s 
 
Several companies including Avago Technologies, JDSU, NeoPhotonics and Oclaro announced the first 100GBASE-LR4 products in the smaller CFP4 optical module form factor. Until now the 100GBASE-LR4 has been available in a CFP2 form factor.  
 
“Going from a CFP2 to a CFP4 results in a little over a 2x increase in density,” says Brandon Collings, CTO for communications and commercial optical products at JDSU. The CFP4 also has a lower maximum power specification of 6W compared to the CFP2’s 12W.  
 
The 100GBASE-LR4 standard spans 10 km over single mode fibre. The -LR4 is used mainly as a telecom interface to connect to WDM or packet-optical transport platforms, even when used in the data centre. Data centre switches already favour the smaller QSFP28 rather than the CFP4.  
 
Other 100 Gigabit standards include the 100GBASE-SR4 with a reach of 100 metres over OM3 multi-mode fibre, and up to 150 m over OM4 fibre. Avago points out that the -SR4 is typically used between a data centre’s top-of-rack and core switches whereas the -LR4 is used within the core network and for links between buildings. The -LR4 modules can also support Optical Transport Network (OTN).
 
But in the data centre there is a mid-reach requirement. “People are looking at new standards to accommodate more of the mega data centre distances of 500 m or 2 km,” says Robert Blum, Oclaro’s director of strategic marketing. These mid-reach standards over single mode fibre include the 500 m PSM4 and the 2 km CWDM4, and modules supporting these standards are starting to appear. “But today, on single mode, there is basically the -LR4 that gets you to 10 km,” says Blum.
 
JDSU also views the -LR4 as an interim technology in the data centre that will fade once more optimised PSM4 and CWDM4 optics appear.  
 
 
QSFP28 portfolio grows 
 
The 100GBASE-LR4 was also shown in the smaller QSFP28 form factor, as part of a range of new interface offerings in the form factor.  The QSFP28 offers a near 2x increase in face plate density compared to the CFP4.  
 
JDSU announced three 100 Gigabit QSFP28-based interfaces at OFC - the PSM4 and CWDM4 MSAs and a 100GBASE-LR4 - while Finisar announced QSFP28 versions of the CWDM4, the 100GBASE-LR4 and the 100GBASE-SR4. Meanwhile, Avago has samples of a QSFP28 100GBASE-SR4. JDSU’s QSFP28 -LR4 uses the same optics it is using in its CFP4 -LR4 product.
 
The PSM4 MSA uses a single mode ribbon cable - four lanes in each direction - to deliver the 500 m reach, while the CWDM4 MSA uses a fibre to carry the four wavelengths in each direction. The -LR4 standard uses tightly spaced wavelengths such that the lasers need to be cooled and temperature controlled.  The CWDM4, in contrast, uses a wider wavelength spacing and can use uncooled lasers, saving on power.   
 
"100 Gig-per-laser, that is very economically advantageous" - Brian Welch, Luxtera

  
Luxtera announced the immediate availability of its PSM4 QSFP28 transceiver while the company is also offering its PSM4 silicon chipset for packaging partners that want to make their own modules or interfaces. Luxtera is a member of the newly formed Consortium for On-Board Optics (COBO).
 
Luxtera’s original active optical cable products were effectively 40 Gigabit PSM4 products although no such MSA was defined. The company’s original design also operated at 1490nm  whereas the PSM4 is at 1310nm.  
 
“The PSM4 is a relatively new type of product, focused on hyper-scale data centres - Microsoft, Amazon, Google and the like - with reaches regularly to 500 m and beyond,” says Brian Welch, director of product marketing at Luxtera. The company’s PSM4 offers an extended reach to 2 km, far beyond the PSM4 MSA’s specification. The company says there is also industry interest for PSM4 links over shorter reaches, up to 30 m. 
 
Luxtera’s PSM4 design uses one laser for all four lanes. “In a 100 Gig part, we get 100 Gig-per-laser,” says Welch. “WDM gets 25 Gig-per-laser, multi-mode gets 25 Gig-per-laser; 100 Gig-per-laser, that is very economically advantageous.”    
 
 
QSFP28 ‘breakout’ mode 
 
Avago, Finisar and Oclaro all demonstrated 100 Gigabit QSFP28 modules in ‘breakout’ mode whereby the module’s output fibres fan out and interface to separate, lower-speed SFP28 optical modules.
 
“The SFP+ is the most ubiquitous and standard form factor deployed in the industry,” says Rafik Ward, vice president of marketing at Finisar. “The SFP28 leverages this architecture, bringing it up to 28 Gigabit.”  
 
Applications using the breakout arrangement include the emerging Fibre Channel standards: the QSFP28 can support the 128 Gig Fibre Channel standard where 32 Gig Fibre Channel traffic is sent to individual transceivers. Avago demonstrated such an arrangement at OFC and said its QSFP28 product will be available before the year end.
 
Similarly, the QSFP28-to-SFP28 breakout mode will enable the splitting of 100 Gigabit Ethernet (GbE) into IEEE 25 Gigabit Ethernet lanes once the standard is completed. 
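The breakout arithmetic for both cases is the same: a QSFP28’s four lanes fan out to four individual SFP28 modules. A minimal sketch (the function name is illustrative):

```python
# Illustrative breakout arithmetic: a QSFP28's four lanes fan out
# to four individual SFP28 modules, each carrying one lane.

QSFP28_LANES = 4

def per_lane_rate(aggregate_gbps, lanes=QSFP28_LANES):
    """Per-lane rate when an aggregate link fans out over the module's lanes."""
    return aggregate_gbps // lanes

print(per_lane_rate(128))  # 128 Gig Fibre Channel -> four 32 Gig FC links
print(per_lane_rate(100))  # 100 GbE -> four 25 GbE lanes
```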
 
Oclaro showed a 100 Gig QSFP28 using a 4x28G LISEL (lens-integrated surface-emitting DFB laser) array with one channel connected to an SFP28 over a 2 km link. Oclaro inherited the LISEL technology when it merged with Opnext in 2012.  
 
Finisar demonstrated its 100GBASE-SR4 QSFP28 connected to four SFP28s over 100 m of OM4 multimode fibre.

Oclaro also showed an SFP28 for long reach that spans 10 km over single-mode fibre. In addition to Fibre Channel and Ethernet, Oclaro highlights wireless fronthaul carrying CPRI traffic, although such data rates are not expected for several years yet. Oclaro’s SFP28 will be in full production in the first quarter of 2016. Oclaro says it will also use the LISEL technology for its PSM4 design.   
 
 
Industry prepares for 400GbE with DMT and PAM-4
  
JDSU demonstrated a 4 x 100 Gig design, described as a precursor for 400 Gigabit technology. The IEEE is still working to define the different versions of the 400 Gigabit Ethernet standard. The JDSU optical hardware design multiplexes four 100 Gig wavelengths onto a fibre.    
 
“There are multiple approaches towards 400 Gig client interfaces being discussed at the IEEE and within the industry,” says JDSU’s Collings. “The modulation formats being evaluated are non-return-to-zero (NRZ), PAM-4 and discrete multi-tone (DMT).”  
 
For the demonstration, JDSU used DMT modulation to encode 100 Gbps on each of the four wavelengths, although Collings stresses that JDSU continues work on all three formats. In contrast, MultiPhy is using PAM-4 to develop a 100 Gig serial link.
 
At OFC, Avago demonstrated a 25 Gig VCSEL being driven by its PAM-4 chip to achieve a 50 Gig rate. The PAM-4 chip takes two 25 Gbps input streams and encodes each pair of bits into a symbol that then drives the VCSEL. The demonstration paves the way for emerging standards such as 50 Gigabit Ethernet (GbE) using a 25G VCSEL, and shows how 50 Gigabit lanes could be used to implement 400 GbE using eight lanes instead of 16.  
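The two-bits-per-symbol idea, and the resulting lane counts for 400 GbE, can be sketched as follows. This is an illustrative sketch only; real PAM-4 chips add details such as gray coding and equalisation that are not shown here:

```python
# Sketch of the PAM-4 principle: two bits map to one of four amplitude
# levels, so a 25 Gbaud symbol stream carries 50 Gbps.

def pam4_encode(msb_bits, lsb_bits):
    """Combine two NRZ bit streams into one stream of 4-level symbols (0-3)."""
    return [2 * m + b for m, b in zip(msb_bits, lsb_bits)]

stream_a = [1, 0, 1, 1]   # one 25 Gbps input stream
stream_b = [0, 1, 1, 0]   # the other 25 Gbps input stream
symbols = pam4_encode(stream_a, stream_b)
print(symbols)            # four symbols now carry eight bits

# Lane counts for 400 GbE implied by the article:
print(400 // 50)          # eight 50 Gig lanes...
print(400 // 25)          # ...versus sixteen 25 Gig lanes
```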
 
NeoPhotonics demonstrated a 56 Gbps externally modulated laser (EML) along with pHEMT gallium arsenide driver technology, the result of its acquisition of Lapis Semiconductor in 2013.  
 
The main application will be 400 Gigabit Ethernet but there is already industry interest in proprietary solutions, says Nicolas Herriau, director of product engineering at NeoPhotonics. The industry may not have decided whether it will use NRZ or PAM-4 [for 400GbE], “but the goal is to get prepared”, he says. 
 
Herriau points out that the first PAM-4 ICs are not yet optimised to work with lasers. As a result, having a fast, high-quality 56 Gbps laser is an advantage.   
 
Avago has shipped over one million 25 Gig channels in multiple products
 
  
The future of VCSELs   
 
VCSELs at 25 Gig are an enabling technology for the data centre, says Avago. Operating at 850nm, the VCSELs deliver a 100 m reach over OM3 and a 150 m reach over OM4 multi-mode fibre. Avago announced at OFC that it had shipped over one million VCSELs in the last two years. Before then, only 10 Gig VCSELs were available, used for 40 Gig and 100 Gig short-reach modules.  
 
Avago says that the move to 100 Gig and beyond has triggered an industry debate as to whether single-mode rather than multi-mode fibre is the way forward in data centres. For VCSELs, the open questions are whether the technology can support 25 Gig lanes, whether such VCSELs are cost-effective, and whether they can meet extended link distances beyond 100 m and 150 m.  
 
“Silicon photonics is spoken of as a great technology for the future, for 100 Gig and greater speeds, but this [announcement] is not academic or hype,” says I-Hsing Tan, Avago’s segment marketing manager for Ethernet and storage optical transceivers. “Avago has been using 25 Gig VCSELs for short-reach distance applications and has shipped over one million 25 Gig channels in multiple products.” 
 
The products that account for the more than one million channels shipped include Ethernet transceivers; single- and four-lane 32 Gigabit Fibre Channel, where each channel operates at 28 Gbps; Infiniband, with four-channel designs the most popular; and proprietary optical interfaces with channel counts varying from two to 12 (50 to 250 Gbps).   
 
In other OFC data centre demonstrations, Avago showed an extended short reach interface at 100 Gig - the 100GBASE-eSR4 - with a 300 m span. Because it is a demonstration and not a product, Avago is not detailing how it is extending the reach beyond saying that it is a combination of the laser output power and the receiver design. The extended reach product will be available from 2016.  
 
Avago completed the acquisition of PLX Technologies in the third quarter of 2014, and its PCI Express (PCIe) over optics demonstration is one result. The demonstration is designed to remove the need for a network interface card between an Ethernet switch and a server. “The aim is to absorb the NIC as part of the ASIC design to achieve a cost effective solution,” says Tan. Avago says it is engaged with several data centre operators on this concept.     
 
Avago also demonstrated a 40 Gig bi-directional module, an alternative to the 40GBASE-SR4. The 40GBASE-SR4 uses eight multi-mode fibres, four in each direction, each carrying a 10 Gig signal. “Going to 40 Gig [from 10 Gig] consumes fibre,” says Tan. Accordingly, the 40 Gig bidi design uses WDM to avoid ribbon fibre: it uses two multi-mode fibres, each carrying two 20 Gig wavelengths travelling in opposite directions. Avago hopes to make this product generally available later this year.   
 
At OFC, Finisar demonstrated designs for 40 Gig and 100 Gig speeds using duplex multi-mode fibre rather than ribbon fibre. The 40 Gig demo achieved 300 m over OM3 fibre while the 100 Gig demo achieved 70 m over OM3 and 100 m over OM4 fibre. Finisar’s designs use four wavelengths for each multi-mode fibre, what it calls shortwave WDM. 
 
Finisar’s VCSEL demonstrations at OFC were to highlight that the technology can continue to play an important role in the data centre. Finisar cites a study by market research firm Gartner: 94 percent of data centres built in 2014 were smaller than 250,000 square feet, and this percentage is not expected to change through to 2018. A 300 m optical link is sufficient for the longest reaches in data centres of this size. 
 
Finisar is also part of an industry initiative to define and standardise new wideband multi-mode fibre that will enable WDM transmission over links beyond 300 m, to address larger data centres. 
 
“There are a lot of legs to VCSEL-based multi-mode technology for several generations into the future,” says Ward. “We will come out with new innovative products capable of links up to 300 m on multi-mode fibre.”

 


COBO acts to bring optics closer to the chip

The formation of the Consortium for On-Board Optics (COBO) highlights how, despite engineers putting high-speed optics into smaller and smaller pluggable modules, further progress in interface compactness is needed.

The goal of COBO, announced at the OFC 2015 show and backed by such companies as Microsoft, Cisco Systems, Finisar and Intel, is to develop a technology roadmap and common specifications for on-board optics to ensure interoperability.

“The Microsoft initiative is looking at the next wave of innovation as it relates to bringing optics closer to the CPU,” says Saeid Aramideh, co-founder and chief marketing and sales officer for start-up Ranovus, one of the founding members of COBO. “There are tremendous benefits for such an architecture in terms of reducing power dissipation and increasing the front panel density.”

On-board optics refers to optical engines or modules placed on the printed circuit board, close to a chip. The technology is not new; Avago Technologies and Finisar have been selling such products for years. But these products are custom and not interoperable.  

Placing the on-board optics nearer the chip - an Ethernet switch, network processor or a microprocessor for example - shortens the length of the board’s copper traces linking the two. The fibre from the on-board optics bridges the remaining distance to the equipment’s face plate connector. Moving the optics onto the board reduces the overall power consumption, especially as 25 Gigabit-per-second electrical lanes start to be used. The fibre connector also uses far less face plate area compared to pluggable modules, whether the CFP2, CFP4, QSFP28 or even an SFP+.  

________________________________________________________________________________
The founding members of the Consortium for On-Board Optics are Arista Networks, Broadcom, Cisco, Coriant, Dell, Finisar, Inphi, Intel, Juniper Networks, Luxtera, Mellanox Technologies, Microsoft, Oclaro, Ranovus, Source Photonics and TE Connectivity.

Given the breadth of companies involved and the different technologies they prefer, will COBO choose a specific fibre type and wavelength?

“COBO currently has no plans to specify a single medium or a single wavelength, but rather will reference existing standards,” Brad Booth, chair of the Consortium for On-Board Optics, told Gazettabyte.

“There has not been any discussion on the fibre type - single mode versus multi-mode - yet,” added Aramideh. “This will be one item among many interworking specification items for the consortium to define.”
________________________________________________________________________________

 

“The [COBO] initiative is going to be around defining the electrical interface, the mechanical interface, the power budget, the heat-sinking constraints and the like,” says Aramideh.

To understand why such on-board optics will be needed, Aramideh cites Broadcom’s StrataXGS Tomahawk switch chips used for top-of-rack and aggregation switches. The Tomahawk is Broadcom’s first switch family that uses 25 Gbps serialiser/deserialiser (serdes) lanes and has an aggregate switch bandwidth of up to 3.2 terabits. And Broadcom is not alone: Cavium, through its XPliant acquisition, has the CNX880xx line of Ethernet switch chips that also uses 25 Gbps lanes and has a switch capacity of up to 3.2 terabits.
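The trace-count problem Aramideh describes follows directly from the figures above, as a simple calculation shows (illustrative only; constant names are not from any vendor datasheet):

```python
# The switch-capacity arithmetic behind the on-board optics argument:
# a 3.2 terabit switch built from 25 Gbps serdes needs many electrical lanes.

SWITCH_CAPACITY_GBPS = 3200   # Tomahawk / CNX880xx aggregate bandwidth
SERDES_GBPS = 25              # per-lane serdes rate

lanes = SWITCH_CAPACITY_GBPS // SERDES_GBPS
print(lanes)          # total serdes lanes to route as board traces
print(lanes // 2)     # half toward the front panel, half toward the back
```

Every one of those lanes is a high-speed copper trace on the board, which is why shortening them by moving the optics next to the ASIC cuts power.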

“You have 1.6 terabit going to the front panel and 1.6 terabit going to the back panel; that is a lot of traces,” says Aramideh. “If you make this into opex [operation expense], and put the optics close to the switch ASIC, the overall power consumption is reduced and you have connectivity to the front and the back.” 

This is the focus of Ranovus, with the OpenOptics MSA initiative. “Scaling into terabit connectivity over short distances and long distances,” he says.

 

OpenOptics MSA

At OFC, the members of the OpenOptics MSA, of which Ranovus and Mellanox are founders, published their WDM specification for an interoperable 100 Gbps WDM standard with a two kilometre reach. 

The 100 Gigabit standard uses 4x25 Gbps wavelengths, but Aramideh says it scales to 8, 16 and 32 lanes. There will also be a 50 Gbps lane version that will provide total connectivity of 1.6 terabits (32x50 Gbps). 
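The lane-scaling roadmap quoted above reduces to simple multiplication (a sketch; the helper function is illustrative):

```python
# Lane-scaling arithmetic for the OpenOptics MSA figures quoted above.

def aggregate_gbps(lanes, gbps_per_lane):
    """Total connectivity for a given lane count and per-lane rate."""
    return lanes * gbps_per_lane

print(aggregate_gbps(4, 25))        # the published spec: 4 x 25 Gbps = 100 Gbps
for lanes in (8, 16, 32):
    print(aggregate_gbps(lanes, 25))  # scaling at 25 Gbps per lane
print(aggregate_gbps(32, 50))       # 50 Gbps lanes: 1.6 terabit
```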

Ranovus has not detailed what modulation scheme it will use to achieve 50 Gbps lanes, but Aramideh says that PAM-4 is one of the options and an attractive one at that. “There are also a lot of chipsets [supporting PAM-4] becoming available,” he says. 

Ranovus’s first products will be an OpenOptics MSA optical engine and a QSFP28 optical module. “We are not making any product announcements yet but there will be products available this year,” says Aramideh. 

Meanwhile, Ciena has become the sixth member to join the OpenOptics MSA. 

