ADVA's 100 Terabit data centre interconnect platform

  • The FSP 3000 CloudConnect comes in several configurations
  • The data centre interconnect platform scales to 100 terabits of throughput
  • The chassis use a thin 0.5 RU QuadFlex card with up to 400 Gig transport capacity
  • The optical line system has been designed to be open and programmable

ADVA Optical Networking has unveiled its FSP 3000 CloudConnect, a data centre interconnect product designed to cater for the needs of the different data centre players. The company has developed platforms in several sizes to address the workloads and bandwidth needs of data centre operators such as Internet content providers, communications service providers, enterprises, and cloud and colocation players.

Certain Internet content providers want to scale the performance of their computing clusters across their data centres. A cluster is a distributed computing grouping comprising a defined number of virtual machines and processor cores (see Clusters, pods and recipes explained, below). Yet there are also data centre operators that only need to share limited data between their sites.

ADVA Optical Networking highlights two internet content providers - Google and Microsoft with its Azure cloud computing and services platform - that want their distributed clusters to act as one giant global cluster.

“The performance of the combined clusters is proportional to the bandwidth of the interconnect,” says Jim Theodoras, senior director, technical marketing at ADVA Optical Networking. “No matter how many CPU cores or servers, you are now limited by the interconnect bandwidth.”

ADVA Optical Networking cites a Google study that involved running an application on different cluster configurations, starting with a single cluster; then two, side-by-side; two clusters in separate buildings through to clusters across continents. Google claimed the distributed clusters only performed at 20 percent capacity due to the limited interconnect bandwidth. “The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster,” says Theodoras.

Yet other internet content providers have far more modest interconnect demands. ADVA cites one, as large as the two global-cluster players, that wants only 1.2 terabits-per-second (Tbps) between its sites. “It is normal duplication/replication between sites,” says Theodoras. “They want each campus to run as a cluster but they don’t want their networks to behave as a global cluster.”

 

FSP 3000 CloudConnect

The FSP 3000 CloudConnect has several configurations. The company stresses that it designed CloudConnect as a high-density, self-contained platform that is power-efficient and that comes with advanced data security features. 

All the CloudConnect configurations use the QuadFlex card, which has an 800 Gigabit throughput: up to 400 Gigabit of client-side interfaces and a 400 Gigabit line rate.

The QuadFlex card is thin, measuring only a half rack unit (RU). Up to seven can be fitted in ADVA’s four rack-unit (4 RU) platform, dubbed the SH4R, for a line-side transport capacity of 2.8 Tbps. The SH4R’s remaining eighth slot hosts either one or two management controllers.

The QuadFlex line-side interface supports various rates and reaches, from 100 Gigabit ultra long-haul to 400 Gigabit metro/regional, in increments of 100 Gigabit. Two carriers, each using polarisation-multiplexed 16-quadrature amplitude modulation (PM-16-QAM), are used to achieve the 400 Gbps line rate; for 300 Gbps, 8-QAM is used on each of the two carriers.
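
As a back-of-the-envelope check of these line rates, the sketch below works through the symbol arithmetic. The 32 Gbaud symbol rate and the payload fraction are our assumptions, based on typical coherent designs of the period, not figures ADVA quotes.

```python
# Illustrative arithmetic for the QuadFlex line rates described above.
# Assumption (not from the article): ~32 Gbaud per carrier, with the
# difference between raw and net rate absorbed by FEC/framing overhead.

BITS_PER_SYMBOL = {"PM-QPSK": 2, "PM-8-QAM": 3, "PM-16-QAM": 4}
POLARISATIONS = 2  # polarisation multiplexing doubles the bits per symbol

def net_rate_gbps(modulation, baud_gbaud=32, payload_fraction=25/32):
    """Net data rate of one carrier, in Gbps (payload fraction covers FEC etc.)."""
    raw = baud_gbaud * BITS_PER_SYMBOL[modulation] * POLARISATIONS
    return raw * payload_fraction

# Two carriers per QuadFlex card:
print(2 * net_rate_gbps("PM-16-QAM"))  # ~400 Gbps line rate
print(2 * net_rate_gbps("PM-8-QAM"))   # ~300 Gbps line rate
```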

 

“The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster” 

 

The advantage of 8-QAM, says Theodoras, is that it is 'almost 400 Gigabit of capacity' yet it can span continents. ADVA is sourcing the line-side optics but uses its own code for the coherent DSP-ASIC and module firmware. The company has not confirmed the supplier but the design matches Acacia's 400 Gigabit coherent module that was announced at OFC 2015.

ADVA says the CloudConnect 4 RU chassis is designed for customers that want a terabit-capacity box. To achieve a terabit link, three QuadFlex cards and an Erbium-doped fibre amplifier (EDFA) can be used. The EDFA is a bidirectional amplifier design that includes an integrated communications channel and enables the 4 RU platform to achieve ultra long-haul reaches. “There is no need to fit into a [separate] big chassis with optical line equipment,” says Theodoras. Equally, data centre operators don’t want to be bothered with mid-stage amplifier sites.         

Some data centre operators have already installed 40 dense WDM channels at 100GHz spacing across the C-band, which they want to keep. ADVA Optical Networking offers a 14 RU configuration that uses three SH4R units, an EDFA and a DWDM multiplexer to enable a capacity upgrade. The three SH4R units house a total of 20 QuadFlex cards that fit 200 Gigabit into each of the 40 channels for an overall transport capacity of 8 terabits.

ADVA CloudConnect configuration supporting 25.6 Tbps line side capacity. Source: ADVA Optical Networking

The last CloudConnect chassis configuration is for customers designing a global cluster. Here, the 40 RU configuration stacks 10 SH4R units housing 64 QuadFlex cards to achieve a total transport capacity of 25.6 Tbps and a throughput of 51.2 Tbps.

Also included are two EDFAs and a 128-channel multiplexer. Two EDFAs are needed because of the optical loss associated with the high number of channels; one EDFA is allocated to each set of 64 channels. “For the [14 RU] 40 channels [configuration], you need only one EDFA,” says Theodoras.
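
The quoted figures hang together; a quick sanity check (all numbers from the article, with throughput read as client plus line traffic, as the article's figures imply):

```python
# Sanity-checking the global-cluster configuration described above.
cards = 64                     # QuadFlex cards across the 10 SH4R units
line_per_card_tbps = 0.4       # 400 Gig line side per card
carriers_per_card = 2          # each card transmits two optical carriers

transport = cards * line_per_card_tbps   # 25.6 Tbps line-side capacity
throughput = 2 * transport               # 51.2 Tbps (client + line)
channels = cards * carriers_per_card     # 128 carriers -> 128-channel mux
edfas = channels // 64                   # one EDFA per 64 channels -> 2

print(transport, throughput, channels, edfas)  # 25.6 51.2 128 2
```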

The vendor has also produced a similar-sized configuration for the L-band. Combining the two 40 RU chassis delivers 51.2 Tbps of transport and 102.4 Tbps of throughput. “This configuration was built specifically for a customer that needed that kind of throughput,” says Theodoras.

Other platform features include bulk encryption. ADVA says the encryption does not impact the overall data throughput and adds only a slight latency penalty. “We encrypt the entire payload; just a few framing bytes are hidden in the existing overhead,” says Theodoras.

The security management is separate from the network management. “The security guys have complete control of the security of the data being managed; only they can encrypt and decrypt content,” says Theodoras.

CloudConnect consumes only 0.5W per Gigabit. The platform does not use electrical multiplexing of data streams over a switched backplane, an approach that consumes power regardless of traffic load. “The reason we save power is that we don’t have all that switching going on over the backplane,” says Theodoras. Instead, all the connectivity comes from the front panel of the cards.

The downside of this approach is that the platform does not support any-port to any-port connectivity. “But for this customer set, it turns out that they don’t need or care about that.”     

 

Open hardware and software  

ADVA Optical Networking claims its 4 RU basic unit addresses a sweet spot in the marketplace. The CloudConnect also has fewer inventory items for data centre operators to manage compared to competing designs based on 1 RU or 2 RU pizza boxes, it says.

Theodoras also highlights the system’s open hardware and software design.

“We will let anybody’s hardware or software control our network,” says Theodoras. “You don’t have to talk to our software-defined networking (SDN) controller to control our network.” ADVA was part of a demonstration last year whereby an NEC and a Fujitsu controller oversaw ADVA’s networking elements.

 

Every vendor is always under pressure to have the best thing because you are only designed in for 18 months 

 

By open hardware, ADVA means that programmers can control the optical line system used to interconnect the data centres. “We have found a way of simplifying it so it can be programmed,” says Theodoras. “We have made it more digital so that they don’t have to do dispersion maps, polarisation mode dispersion maps or worry about [optical] link budgets.” The result is that data centre operators can now access all the line elements.

“At OFC 2015, Microsoft publicly said they will only buy an open optical line system,” says Theodoras. Meanwhile, Google is writing a specification for open optical line systems dubbed OpenConfig. “We will be compliant with Microsoft and Google in making every node completely open.”

General availability of the CloudConnect platforms is expected at the year-end. “The data centre interconnect platforms are now with key partners, companies that we have designed this with,” says Theodoras. 

 

Clusters, pods and recipes explained

A cluster is made up of a number of virtual machines and CPU cores and is defined in software. A cluster is a virtual entity, says Theodoras, unrelated to the way data centre managers define their hardware architectures. 

“Clusters vary a lot [between players],” says Theodoras. “That is why we have had to make scalability such a big part of CloudConnect.” 

The hardware definition is known as a pod or recipe. “How these guys build the network is that they create recipes,” says Theodoras. “A pod with this number of servers, this number of top-of-rack switches, this amount of end-of-row router-switches and this transport node; that will be one recipe.”    

Data centre players update their recipes every 18 months. “Every vendor is always under pressure to have the best thing because you are only designed in for 18 months,” says Theodoras.   

Vendors are informed well in advance what the next hardware requirements will be, and by when they will be needed to meet the new recipe.

In summary, pods and recipes refer to how the data centre architecture is built, whereas a cluster is defined at a higher, more abstract layer.   


OFC 2015 digest: Part 2

The second part of the survey of developments at the OFC 2015 show held recently in Los Angeles.   
 
Part 2: Client-side component and module developments   
  • CFP4- and QSFP28-based 100GBASE-LR4 announced
  • First mid-reach optics in the QSFP28
  • SFP extended to 28 Gigabit
  • 400 Gig precursors using DMT and PAM-4 modulations 
  • VCSEL roadmap promises higher speeds and greater reach   
First CFP4 100GBASE-LR4s 
 
Several companies including Avago Technologies, JDSU, NeoPhotonics and Oclaro announced the first 100GBASE-LR4 products in the smaller CFP4 optical module form factor. Until now the 100GBASE-LR4 has been available in a CFP2 form factor.  
 
“Going from a CFP2 to a CFP4 results in a little over a 2x increase in density,” says Brandon Collings, CTO for communications and commercial optical products at JDSU. The CFP4 also has a lower maximum power specification of 6W compared to the CFP2’s 12W.  
 
The 100GBASE-LR4 standard spans 10 km over single mode fibre. The -LR4 is used mainly as a telecom interface to connect to WDM or packet-optical transport platforms, even when used in the data centre. Data centre switches already favour the smaller QSFP28 rather than the CFP4.  
 
Other 100 Gigabit standards include the 100GBASE-SR4, with a 100 m reach over OM3 multi-mode fibre and up to 150 m over OM4 fibre. Avago points out that the -SR4 is typically used between a data centre’s top-of-rack and core switches whereas the -LR4 is used within the core network and for links between buildings. The -LR4 modules can also support Optical Transport Network (OTN).
 
But in the data centre there is a mid-reach requirement. “People are looking at new standards to accommodate more of the mega data centre distances of 500 m or 2 km,” says Robert Blum, Oclaro’s director of strategic marketing. These mid-reach standards over single mode fibre include the 500 m PSM4 and the 2 km CWDM4, and modules supporting these standards are starting to appear. “But today, on single mode, there is basically the -LR4 that gets you to 10 km,” says Blum.
 
JDSU also views the -LR4 as an interim technology in the data centre that will fade once more optimised PSM4 and CWDM4 optics appear.  
 
 
QSFP28 portfolio grows 
 
The 100GBASE-LR4 was also shown in the smaller QSFP28 form factor, as part of a range of new interface offerings in the form factor.  The QSFP28 offers a near 2x increase in face plate density compared to the CFP4.  
 
JDSU announced three 100 Gigabit QSFP28-based interfaces at OFC - the PSM4 and CWDM4 MSAs and a 100GBASE-LR4 - while Finisar announced QSFP28 versions of the CWDM4, the 100GBASE-LR4 and the 100GBASE-SR4. Meanwhile, Avago has samples of a QSFP28 100GBASE-SR4. JDSU’s QSFP28 -LR4 uses the same optics as its CFP4 -LR4 product.
 
The PSM4 MSA uses single mode ribbon cable - four fibres in each direction, one per lane - to deliver the 500 m reach, while the CWDM4 MSA uses a single fibre in each direction to carry its four wavelengths. The -LR4 standard uses tightly spaced wavelengths such that the lasers need to be cooled and temperature controlled. The CWDM4, in contrast, uses a wider wavelength spacing and can use uncooled lasers, saving on power.
 
"100 Gig-per-laser, that is very economically advantageous" - Brian Welch, Luxtera

  
Luxtera announced the immediate availability of its PSM4 QSFP28 transceiver while the company is also offering its PSM4 silicon chipset for packaging partners that want to make their own modules or interfaces. Luxtera is a member of the newly formed Consortium for On-Board Optics (COBO).
 
Luxtera’s original active optical cable products were effectively 40 Gigabit PSM4 products although no such MSA was defined. The company’s original design also operated at 1490nm  whereas the PSM4 is at 1310nm.  
 
“The PSM4 is a relatively new type of product, focused on hyper-scale data centres - Microsoft, Amazon, Google and the like - with reaches regularly to 500 m and beyond,” says Brian Welch, director of product marketing at Luxtera. The company’s PSM4 offers an extended reach to 2 km, far beyond the PSM4 MSA’s specification. The company says there is also industry interest for PSM4 links over shorter reaches, up to 30 m. 
 
Luxtera’s PSM4 design uses one laser for all four lanes. “In a 100 Gig part, we get 100 Gig-per-laser,” says Welch. “WDM gets 25 Gig-per-laser, multi-mode gets 25 Gig-per-laser; 100 Gig-per-laser, that is very economically advantageous.”    
 
 
QSFP28 ‘breakout’ mode 
 
Avago, Finisar and Oclaro all demonstrated 100 Gigabit QSFP28 modules in ‘breakout’ mode whereby the module’s output fibres fan out and interface to separate, lower-speed SFP28 optical modules.
 
“The SFP+ is the most ubiquitous and standard form factor deployed in the industry,” says Rafik Ward, vice president of marketing at Finisar. “The SFP28 leverages this architecture, bringing it up to 28 Gigabit.”  
 
Applications using the breakout arrangement include the emerging Fibre Channel standards: the QSFP28 can support the 128 Gig Fibre Channel standard, where 32 Gig Fibre Channel traffic is sent to individual transceivers. Avago demonstrated such an arrangement at OFC and said its QSFP28 product will be available before the year end.
 
Similarly, the QSFP28-to-SFP28 breakout mode will enable the splitting of 100 Gigabit Ethernet (GbE) into IEEE 25 Gigabit Ethernet lanes once the standard is completed. 
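
A minimal sketch of the lane arithmetic behind breakout mode (nominal rates; as noted later in the article, 32 Gig Fibre Channel lanes actually run at about 28 Gbps):

```python
# Breakout mode: the QSFP28's four lanes fan out to four SFP28 modules.
QSFP28_LANES = 4

breakouts = {
    "100GbE -> 4 x 25GbE": 25,   # IEEE 25GbE lanes, once standardised
    "128GFC -> 4 x 32GFC": 32,   # nominal Fibre Channel lane rate
}

for name, lane_gbps in breakouts.items():
    print(f"{name}: {QSFP28_LANES} x {lane_gbps} Gbps = "
          f"{QSFP28_LANES * lane_gbps} Gbps total into SFP28s")
```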
 
Oclaro showed a 100 Gig QSFP28 using a 4x28G LISEL (lens-integrated surface-emitting DFB laser) array with one channel connected to an SFP28 over a 2 km link. Oclaro inherited the LISEL technology when it merged with Opnext in 2012.  
 
Finisar demonstrated its 100GBASE-SR4 QSFP28 connected to four SFP28s over 100 m of OM4 multimode fibre.
Oclaro also showed an SFP28 for long reach that spans 10 km over single-mode fibre. In addition to Fibre Channel and Ethernet, Oclaro also highlights wireless fronthaul to carry CPRI traffic, although such data rates are not expected for several years yet. Oclaro’s SFP28 will be in full production in the first quarter of 2016. Oclaro says it will also use the LISEL technology for its PSM4 design.
 
 
Industry prepares for 400GbE with DMT and PAM-4
  
JDSU demonstrated a 4 x 100 Gig design, described as a precursor for 400 Gigabit technology. The IEEE is still working to define the different versions of the 400 Gigabit Ethernet standard. The JDSU optical hardware design multiplexes four 100 Gig wavelengths onto a fibre.    
 
“There are multiple approaches towards 400 Gig client interfaces being discussed at the IEEE and within the industry,” says JDSU’s Collings. “The modulation formats being evaluated are non-return-to-zero (NRZ), PAM-4 and discrete multi-tone (DMT).”  
 
For the demonstration, JDSU used DMT modulation to encode 100 Gbps on each of the four wavelengths, although Collings stresses that JDSU continues work on all three formats. In contrast, MultiPhy is using PAM-4 to develop a 100 Gig serial link.
 
At OFC, Avago demonstrated a 25 Gig VCSEL being driven using its PAM-4 chip to achieve a 50 Gig rate. The PAM-4 chip takes two 25 Gbps input streams and encodes two bits into each symbol that then drives the VCSEL. The demonstration paves the way for emerging standards such as 50 Gigabit Ethernet (GbE) using a 25G VCSEL, and shows how 50 Gigabit lanes could be used to implement 400 GbE using eight lanes instead of 16.
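
A minimal sketch of the encoding idea: each pair of bits selects one of four amplitude levels, so the symbol rate stays at 25 Gbaud while the bit rate doubles to 50 Gbps. The Gray mapping below is illustrative, not Avago's.

```python
# PAM-4: two bits map to one of four amplitude levels, so a 2 x 25 Gbps
# input becomes 25 Gbaud of 4-level symbols carrying 50 Gbps.

GRAY_PAM4 = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}  # bit pair -> level

def pam4_encode(bits):
    """Encode an even-length bit sequence into PAM-4 symbol levels."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([1, 0, 0, 1, 1, 1, 0, 0])
print(symbols)  # 4 symbols for 8 bits: half the symbol rate of NRZ
```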
 
NeoPhotonics demonstrated a 56 Gbps externally modulated laser (EML) along with pHEMT gallium arsenide driver technology, the result of its acquisition of Lapis Semiconductor in 2013.  
 
The main application will be 400 Gigabit Ethernet but there is already industry interest in proprietary solutions, says Nicolas Herriau, director of product engineering at NeoPhotonics. The industry may not have decided whether it will use NRZ or PAM-4 [for 400GbE], “but the goal is to get prepared”, he says. 
 
Herriau points out that the first PAM-4 ICs are not yet optimised to work with lasers. As a result, having a fast, high-quality 56 Gbps laser is an advantage.   
 
Avago has shipped over one million 25 Gig channels in multiple products
 
  
The future of VCSELs   
 
The 25 Gig VCSEL is an enabling technology for the data centre, says Avago. Operating at 850nm, the VCSELs deliver the 100m reach over OM3 and 150m reach over OM4 multi-mode fibre. Avago announced at OFC that it had shipped over one million VCSELs in the last two years. Before then, only 10 Gig VCSELs were available, used for 40 Gig and 100 Gig short-reach modules.
 
Avago says that the move to 100 Gig and beyond has triggered an industry debate as to whether single-mode rather than multi-mode fibre is the way forward in data centres. For VCSELs, the open questions are whether the technology can support 25 Gig lanes, whether such VCSELs are cost-effective, and whether they can meet extended link distances beyond 100 m and 150 m.  
 
“Silicon photonics is spoken of as a great technology for the future, for 100 Gig and greater speeds, but this [announcement] is not academic or hype,” says I-Hsing Tan, Avago’s segment marketing manager for Ethernet and storage optical transceivers. “Avago has been using 25 Gig VCSELs for short-reach distance applications and has shipped over one million 25 Gig channels in multiple products.” 
 
The products that account for the over one million shipments include Ethernet transceivers; single- and four-lane 32 Gigabit Fibre Channel, where each channel operates at 28 Gbps; InfiniBand applications, with four channels being the most popular; and proprietary optical interfaces with channel counts varying from two to 12 (50 to 250 Gbps).
 
In other OFC data centre demonstrations, Avago showed an extended short reach interface at 100 Gig - the 100GBASE-eSR4 - with a 300 m span. Because it is a demonstration and not a product, Avago is not detailing how it is extending the reach beyond saying that it is a combination of the laser output power and the receiver design. The extended reach product will be available from 2016.  
 
Avago completed the acquisition of PLX Technology in the third quarter of 2014 and its PCI Express (PCIe) over optics demonstration is one result. The demonstration is designed to remove the need for a network interface card (NIC) between an Ethernet switch and a server. “The aim is to absorb the NIC as part of the ASIC design to achieve a cost effective solution,” says Tan. Avago says it is engaged with several data centre operators with this concept.
 
Avago also demonstrated a 40 Gig bi-directional module, an alternative to the 40GBASE-SR4. The 40G -SR4 uses eight multi-mode fibres, four in each direction, each carrying a 10 Gig signal. “Going to 40 Gig [from 10 Gig] consumes fibre,” says Tan. Accordingly, the 40 Gig bidi design uses WDM to avoid using a ribbon fibre. Instead, the bidi uses two multi-mode fibres, each carrying two 20 Gig wavelengths travelling in opposite directions. Avago hopes to make this product generally available later this year.
 
At OFC, Finisar demonstrated designs for 40 Gig and 100 Gig speeds using duplex multi-mode fibre rather than ribbon fibre. The 40 Gig demo achieved 300 m over OM3 fibre while the 100 Gig demo achieved 70 m over OM3 and 100 m over OM4 fibre. Finisar’s designs use four wavelengths for each multi-mode fibre, what it calls shortwave WDM. 
 
Finisar’s VCSEL demonstrations at OFC were to highlight that the technology can continue to play an important role in the data centre. Citing a study by the market research firm Gartner, Finisar notes that 94 percent of data centres built in 2014 were smaller than 250,000 square feet, and this percentage is not expected to change through to 2018. A 300 m optical link is sufficient for the longest reaches in such sized data centres.
 
Finisar is also part of a work initiative to define and standardise new wideband multi-mode fibre that will enable WDM transmission over links even beyond 300 m to address larger data centres. 
 
“There are a lot of legs to VCSEL-based multi-mode technology for several generations into the future,” says Ward. “We will come out with new innovative products capable of links up to 300 m on multi-mode fibre.”

 

For Part 1, click here

OFC 2015 digest: Part 1

A survey of some of the key developments at the OFC 2015 show held recently in Los Angeles.  
 
Part 1: Line-side component and module developments 
  • Several vendors announced CFP2 analogue coherent optics   
  • 5x7-inch coherent MSAs: from 40 Gig submarine and ultra-long haul to 400 Gig metro  
  • Dual micro-ITLAs, dual modulators and dual ICRs as vendors prepare for 400 Gig
  • WDM-PON demonstration from ADVA Optical Networking and Oclaro 
  • More compact and modular ROADM building blocks  
  
Coherent optics within a CFP2  
 
Integrating line-side coherent optics into ever smaller pluggable modules promises higher-capacity line cards and transport platforms. Until now, the main pluggable module for coherent optical transmission has been the CFP but at OFC several optical module companies announced coherent optics that fit within the CFP2 module, dubbed CFP2 analogue coherent optics (CFP2-ACO).  
 
Oclaro, Finisar, Fujitsu Optical Components and JDSU all announced CFP2-ACO designs, capable of 100 Gigabit-per-second (Gbps) line rates using polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) and 200 Gbps transmission using polarisation-multiplexed 16-quadrature amplitude modulation (PM-16-QAM).
 
Unlike the CFP, the CFP2-ACO module houses only the photonics for coherent transmission; the accompanying coherent DSP-ASIC resides on the line card. The CFP2’s 12W power budget cannot accommodate the combined power consumption of the optics and current DSP-ASIC designs.
 
With the advent of the CFP2-ACO, five or even six modules can be fitted on a line card. “With five CFP2s, if you do 100 Gigabit, you have a 500 Gigabit line card, but if you can do 200 Gigabit using 16-QAM, you have a one terabit line card,” says Robert Blum, director of strategic marketing at Oclaro. 
Such line cards can be used not just for metro and regional networks but for the emerging data centre interconnect market, says Blum. Using line-side pluggables also allows operators to add capacity as required.  
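
The line-card arithmetic behind Blum's comment, spelled out (figures from the article):

```python
# Five CFP2-ACO modules per line card, at either 100G (QPSK) or 200G (16-QAM).
modules_per_card = 5
for rate_gbps, fmt in ((100, "PM-QPSK"), (200, "PM-16-QAM")):
    capacity_tbps = modules_per_card * rate_gbps / 1000
    print(f"{modules_per_card} x {rate_gbps}G ({fmt}) = {capacity_tbps} Tbps line card")
```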
 
Oclaro says its CFP2-ACO module has been shown to work with seven different DSP-ASICs; five developed by the system vendors and two merchant chips, from ClariPhy and NEL.  
 
Oclaro uses a single high-output power narrow line-width laser for its CFP2-ACO. The bulk of the laser’s light is used for the transmitter path but some of the light is split off and used for the local oscillator in the receive path. This saves the cost of using a separate, second laser but requires that the transmit and receive paths operate on a common wavelength.  
 
In contrast, Finisar uses two lasers for its CFP2-ACO: one for the transmit path and one for the local oscillator source. This allows independent transmit and receive wavelengths, and uses all the laser’s output power for transmission. Rafik Ward, Finisar’s vice president of marketing, says the company has invested significantly to develop its CFP2-ACO, using its own in-house components. Finisar acquired indium phosphide specialist u2t Photonics in 2014 specifically to address the CFP2-ACO design.
 
At OFC, fabless chip maker ClariPhy announced a CFP2-ACO reference design card. The design uses the company’s flagship CL20010 DSP-ASIC with a CFP2 cage into which various vendors’ CFP2-ACO modules can be inserted. The CL20010 DSP supports 100 Gbps and 200 Gbps data rates.  
 
“Every major CFP2 module maker is sampling [a CFP2-ACO],” says Paul Voois, co-founder and chief strategy officer at ClariPhy. Having coherent optics integrated into a CFP2 is a real game-changer, he says. Not only will the CFP2-ACO enable one terabit line cards, but the associated miniaturisation of the optics will lower the cost of coherent transmission.  
 
“The DSP’s cost will decline [with volumes] and so will the optics which account for two thirds of the transponder cost,” says Voois. Having a CFP2-ACO multi-source agreement (MSA) also promotes interoperability, further spurring the CFP2-ACO’s adoption, he says.   
 
NeoPhotonics announced a micro integrated coherent receiver (micro-ICR) for the CFP2-ACO. NeoPhotonics all but confirmed it will also supply a CFP2-ACO module. “That would be a logical assumption given that we have all the pieces,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.  
 
 
5x7-inch MSAs: 40 to 400 Gig  
    
Work continues to advance the line-side reach and line-speed capabilities of the fixed 5x7-inch MSA module. 
 
Acacia Communications announced a 5x7-inch coherent transponder that supports two carriers, each capable of carrying 100, 150 or 200 Gigabit  of data. The Acacia design uses two of the company’s silicon photonics chips, one for each carrier, coupled with Acacia’s DSP-ASIC. 
 
Finisar announced two 5x7-inch MSAs: one capable of 100 Gigabit and 200 Gigabit and one tailored for submarine and ultra long-haul applications using 40 Gig or 50 Gig polarisation-multiplexed binary phase-shift keying (PM-BPSK).
 
Finisar claims it offers the industry’s broadest 200 Gigabit optical module portfolio with its 5x7 inch MSA and its CFP2-ACO. It demonstrated its 5x7-inch MSA also working with its CFP2-ACO at OFC. For the demonstration, Finisar used its CFP2-ACO module plugged into ClariPhy’s reference design.  
 
 
Micro-ITLAs, modulators and micro-ICRs go parallel   
 
Oclaro announced a dual micro-ITLA suited for two-carrier signals for a 400 Gig super-channel, with each carrier using PM-16-QAM.  
 
“People are designing discrete line cards using micro-ITLAs, lithium niobate modulators and coherent receivers for 400 Gig, for example, and they need two lasers, one for each channel,” says Oclaro’s Blum. This is the main application Oclaro is seeing for the design, but another use of the dual micro-ITLA is for networks where the receive wavelength is different to the transmitter one. “For that, you need a local oscillator that you tune independently,” says Blum.  
 

JDSU also showed a dual-carrier coherent lithium niobate modulator capable of 400 Gig for long-reach applications. The company is also sampling a dual 100 Gig micro-ICR for multiple sub-channel applications.

 

Avago announced a micro-ITLA device using its external cavity laser that has a line-width less than 100kHz. The micro-ITLA is suited for 100 Gig PM-QPSK and 200 Gig 16-QAM modulation formats and supports a flex-grid or gridless architecture.


Tunable SFP+

Oclaro announced a second-generation tunable SFP+ that has a power consumption below 1.5W, meeting the SFP+ MSA. The tunable SFP+ also operates over an extended temperature range of up to 85°C, but here the power consumption rises to 1.8W.
 
“We see a lot of applications that need these higher temperatures: racks running hot, WDM-PON and wireless front-hauling,” says Blum. Wireless fronthaul typically uses grey optics to carry the radio-head traffic sent to the wireless baseband unit. But operators are looking to WDM technology as a way to aggregate traffic and this is where the extended temperature tunable SFP+ can play a role, says Blum.         
 
 
WDM-PON demonstration

ADVA Optical Networking and Oclaro demonstrated a WDM-PON prototype at OFC. WDM-PON has been spoken of for over a decade as the ultimate optical access technology, delivering dedicated wavelengths to premises. More recently, WDM-PON has been deployed to deliver business services and is being viewed for mobile backhaul and fronthaul applications.  
 
The ADVA-Oclaro WDM-PON demonstration is a 40-wavelength system using the C- and L-bands. The system’s 10 Gigabit wavelengths are implemented using tunable SFP+ modules at the customer’s site.  
 
The difference between Oclaro’s second-generation tunable SFP+ and the WDM-PON demonstration is that the latter module does not use a wavelength locker. Instead, a centralised wavelength controller is used to monitor all 40 channels and send information back to the customer premises equipment via the L-band if a particular wavelength has drifted and needs adjustment. “We can get away with a very low-cost tunable laser in the customer premises [using this approach],” says Blum.
  
 
ROADM building blocks 
 
JDSU showcased its latest ROADM line cards at OFC. These included its second-generation twin 1x20 wavelength-selective switch (WSS), part of its TrueFlex Super Transport blade, and its TrueFlex Multicast Switch blade that features a twin 4x16 multicast switch and a 4+4 array of amplifiers.  
 
JDSU’s first-generation twin 1x20 WSS required more than two slots in a chassis; two slots for the twin WSS and another for amplification and optical channel monitoring. JDSU can now fit all the functions on one blade with its latest design.  
 
The 4x16 multicast switch supports a four-degree (four directions) ROADM and 16 drop or add ports. The twin multicast switch design is used for multiplexing and demultiplexing of wavelengths. “This size multicast switch needs an amplifier on each of those four ports,” says Brandon Collings, CTO for communications and commercial optical products at JDSU. The 4+4 array of amplifiers is for the multicast switch multiplexing and the demultiplexing, “four amps on the mux side of the multicast switch and four amps for the demux side of the multicast switch”, says Collings. 
 
NeoPhotonics announced a modular 4x16 multicast switch which it claims does not need drop amplifiers.  
 
Being modular, operators can grow their systems based on demand, avoiding up-front costs and having to predict the ultimate size of the ROADM node. For example, by adding multicast switches they can go from 4x16 through 8x16 and 12x16 to a full 16x16 switch configuration. “Carriers do not like to have to plan in advance, and they like to be future-proofed,” says Lipscomb.
 
The NeoPhotonics multicast switch uses planar lightwave circuit (PLC) technology and has a broadcast-and-select architecture. As such, the architecture uses optical splitters which inevitably introduce signal loss. By concentrating on reducing switch loss and by increasing the sensitivity of the integrated coherent receiver, NeoPhotonics claims it can do away with the drop amplifiers for metro networks and even for certain long-haul routes. This can save up to $1,000 a switch, says Lipscomb.
 
NeoPhotonics’ multicast switch has already been designed on a line card and introduced into a customer’s platform. It is now undergoing qualification before being made generally available.   
 
ROADM status 
 
“This type of stuff [advanced WSSes and multicast switches for ROADMs] is what Verizon has been pushing for all these years,” says JDSU’s Collings. “These developments have been completed because operators like Verizon are getting serious.” Earlier this year, Verizon selected Ciena and Cisco Systems as the equipment suppliers for its large metro contract.  
 
Some analysts argue that it is largely Verizon promoting advanced ROADM usage and that the rest of the industry is less keen. Collings points out that JDSU, being a blade supplier and not a system vendor, is one customer layer removed from the operators. But he argues that other operators besides Verizon also want to deploy advanced ROADM technology but that two milestones must be overcome first. 
 
“People are waiting to see the technology mature and Verizon really do it,” he says. “[Their attitude is:] Let Verizon run headlong into that, and let’s see how they fare before we invest.” Collings says that until now, ROADM hardware has not been sufficiently mature: “Even Verizon has had to wait to start deploying this stuff.” 
 
The second milestone is having a control plane to manage the systems’ flexibility and dynamic nature. This is where the system vendors have focused their efforts in the past year, convincing operators that the hardware and the control plane are up and running, he says. 
 
“There is lots of interest [in advanced ROADMs] from a variety of carriers globally,”  says Collings. “But they have been waiting for these two shoes to drop.”

 

For Part 2, click here

COBO acts to bring optics closer to the chip

The formation of the Consortium for On-Board Optics (COBO) highlights how, despite engineers putting high-speed optics into smaller and smaller pluggable modules, further progress in interface compactness is needed.

The goal of COBO, announced at the OFC 2015 show and backed by such companies as Microsoft, Cisco Systems, Finisar and Intel, is to develop a technology roadmap and common specifications for on-board optics to ensure interoperability.

“The Microsoft initiative is looking at the next wave of innovation as it relates to bringing optics closer to the CPU,” says Saeid Aramideh, co-founder and chief marketing and sales officer for start-up Ranovus, one of the founding members of COBO. “There are tremendous benefits for such an architecture in terms of reducing power dissipation and increasing the front panel density.”

On-board optics refers to optical engines or modules placed on the printed circuit board, close to a chip. The technology is not new; Avago Technologies and Finisar have been selling such products for years. But these products are custom and not interoperable.  

Placing the on-board optics nearer the chip - an Ethernet switch, network processor or a microprocessor for example - shortens the length of the board’s copper traces linking the two. The fibre from the on-board optics bridges the remaining distance to the equipment’s face plate connector. Moving the optics onto the board reduces the overall power consumption, especially as 25 Gigabit-per-second electrical lanes start to be used. The fibre connector also uses far less face plate area compared to pluggable modules, whether the CFP2, CFP4, QSFP28 or even an SFP+.  

________________________________________________________________________________
The founding members of the Consortium for On-Board Optics are Arista Networks, Broadcom, Cisco, Coriant, Dell, Finisar, Inphi, Intel, Juniper Networks, Luxtera, Mellanox Technologies, Microsoft, Oclaro, Ranovus, Source Photonics and TE Connectivity.

Given the breadth of companies and the different technologies they prefer, will the COBO initiative choose a specific fibre type and wavelength?

“COBO currently has no plans to specify a single medium or a single wavelength, but rather will reference existing standards,” Brad Booth, chair of the Consortium for On-Board Optics, told Gazettabyte.

“There has not been any discussion on the fibre type - single mode versus multi-mode - yet,” added Aramideh. “This will be one item among many interworking specification items for the consortium to define.”
________________________________________________________________________________

 

“The [COBO] initiative is going to be around defining the electrical interface, the mechanical interface, the power budget, the heat-sinking constraints and the like,” says Aramideh.

To understand why such on-board optics will be needed, Aramideh cites Broadcom’s StrataXGS Tomahawk switch chips used for top-of-rack and aggregation switches. The Tomahawk is Broadcom’s first switch family that uses 25 Gbps serialiser/deserialisers (serdes) and has an aggregate switch bandwidth of up to 3.2 terabits. And Broadcom is not alone. Cavium, through its XPliant acquisition, has the CNX880xx line of Ethernet switch chips that also uses 25 Gbps lanes and has a switch capacity of up to 3.2 terabits.

“You have 1.6 terabit going to the front panel and 1.6 terabit going to the back panel; that is a lot of traces,” says Aramideh. “If you make this into opex [operation expense], and put the optics close to the switch ASIC, the overall power consumption is reduced and you have connectivity to the front and the back.” 
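
A rough sketch of the trace count implied by Aramideh's figures. The assumption that each 25G serdes lane needs one differential pair, i.e. two copper traces, per direction is ours:

```python
# Trace-count arithmetic for a 3.2 terabit switch with 25 Gbps serdes lanes.
switch_gbps = 3200
lane_gbps = 25
lanes = switch_gbps // lane_gbps   # 128 serdes lanes
traces = lanes * 2 * 2             # TX + RX differential pairs per lane
print(lanes, traces)               # 128 lanes, 512 board traces
```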

This is the focus of Ranovus, with the OpenOptics MSA initiative. “Scaling into terabit connectivity over short distances and long distances,” he says.

 

OpenOptics MSA

At OFC, the OpenOptics MSA, of which Ranovus and Mellanox are founders, published its specification for an interoperable 100 Gbps WDM standard that will have a two kilometre reach.

The 100 Gigabit standard uses 4x25 Gbps wavelengths but Aramideh says the standard scales to 8, 16 and 32 lanes. In turn, there will also be a 50 Gbps lane version that will provide a total connectivity of 1.6 terabit (32x50 Gbps). 

Ranovus has not detailed what modulation scheme it will use to achieve 50 Gbps lanes, but Aramideh says that PAM-4 is one of the options and an attractive one at that. “There are also a lot of chipsets [supporting PAM-4] becoming available,” he says. 

Ranovus’s first products will be an OpenOptics MSA optical engine and a QSFP28 optical module. “We are not making any product announcements yet but there will be products available this year,” says Aramideh.

Meanwhile, Ciena has become the sixth member to join the OpenOptics MSA. 


Calient uses its optical switch to boost data centre efficiency

For Calient Technologies, an approach by one of the world’s largest data centre operators changed the company’s direction.

The company had been selling its 320x320 non-blocking optical circuit switch (OCS) for applications such as submarine cable landing sites and for government intelligence. Then, five years ago, a large internet content provider contacted Calient, saying it had figured out exactly where Calient’s OCS could play a role in the data centre.

 

This solution could deliver a significant percentage-utilisation improvement

Daniel Tardent


But before the hyper-scale data centre operator would adopt Calient’s switch, it wanted the platform re-engineered. It viewed Calient’s then-product as power-hungry and had concerns about the switch’s reliability given it had not been deployed in volume. If Calient could make its switch cheaper, more power efficient and prove its reliability, the internet content provider would use it in its data centres. 

Calient undertook the re-engineering challenge. The company did not change the 3D micro electromechanical system (MEMS) chip at the heart of the OCS, but it fundamentally redesigned the optics, electronics and control system surrounding the MEMS. The result is Calient’s S-Series switch family, the first product of which was launched three years ago.

“That switch family represented huge growth for us,” says Daniel Tardent, Calient’s vice president of marketing and product line manager. “We went from making one switch every two weeks to a large number of switches each week, just for this one customer.” 

Calient has remained focussed on the data centre ever since, working to understand the key connectivity issues facing the hyper-scale data centre operators. 

“All these big cloud facilities have very large engineering teams that work on customising their architectures for exactly the applications they are running,” says Tardent. “There isn’t one solution that fits all.” 

There are commonalities among the players in how Calient’s OCS can be deployed but what differs is the dimensioning and the connectivity used by each. 

Greater commonality will be needed by those customers that represent the tier below the largest data centre players, says Tardent: “These don’t have a lot of engineering resource and want a more packaged solution.” 

 

What really interests the big data players is how they can better utilise their compute and storage resources because that is where their major cost is

 

 

LightConnect fabric manager

At the OFC 2015 show held in Los Angeles last month, Calient unveiled its LightConnect fabric manager software. The software, working with Calient’s S-Series switches, is designed to better share the data centre’s computing and storage resources.

The move to improve the utilisation of data centre resources is a new venture for the company. Initially, the company tackled how the OCS could improve data centre networking linking the servers and storage. The company explored using its OCS products to offload large packet flows - dubbed elephant flows - to improve overall efficiency.

Elephant flows are specific packet flows that need to be moved across the data centre. Examples include moving a virtual machine from one server to another, or replicating or relocating storage. Different data centre operators have different definitions as to what constitutes an elephant flow, but one operator defines it as any piece of data greater than 20 GByte, says Calient.

If persistent elephant flows run through the network, they clog up the network buffers, impeding the shorter ‘mice’ flows that are just as important for the efficient working of the data centre. Congested buffers increase latency and adversely affect workloads. “If you are moving a large piece of data across the data centre, you want to move it quickly and efficiently,” says Tardent. 
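
To see why such flows clog buffers, consider how long a single 20 GByte flow occupies a link; a quick illustration (the link rates are ours, chosen for context):

```python
# Time for a 20 GByte elephant flow to cross a link at typical rates.
flow_bits = 20 * 8e9                  # 20 GByte expressed in bits
for link_gbps in (10, 40, 100):
    seconds = flow_bits / (link_gbps * 1e9)
    print(f"{link_gbps} GbE: {seconds:.1f} s of sustained occupancy")
```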

Calient’s OCS, by connecting top-of-rack switches, can be used to offload the elephant flows. In effect, the OCS acts as an optical expressway, bypassing the electrical switch fabric.

Now, with the launch of the LightConnect fabric manager, Calient is tackling a bigger issue: how to benefit the overall economics of the data centre by improving server and storage utilisation. 

“What really interests the big data players is how they can better utilise their compute and storage resources because that is where their major cost is,” says Tardent. 

Today’s data centres run at up to 40 percent server utilisation. Given that the largest data centres can house 100,000 servers, just one percent improvement in usage has a significant impact on overall cost.

Calient claims that a 1.6 percent improvement in server utilisation covers the cost of introducing its OCS into the data centre. With an average utilisation improvement of nine to 14 percent, all the data centre’s networking costs are covered. “The nine to 14 percent is a range that depends on how ‘thin’ or ‘fat’ the network layer is,” says Tardent. “A thin network design has less bandwidth and is less expensive.” Both types exist depending on the particular functions of the data centre, he says.
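
To make the scale argument concrete, a simple reading of the figures above (treating the improvements as percentage points of a 100,000-server facility; the interpretation is ours):

```python
# What a utilisation improvement means in a 100,000-server data centre.
servers = 100_000
for gain, label in ((0.01, "1 percent"),
                    (0.016, "OCS break-even"),
                    (0.09, "networking covered, low end"),
                    (0.14, "networking covered, high end")):
    print(f"{label}: ~{servers * gain:,.0f} servers' worth of extra work")
```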

 

Virtual pods

Data centres are typically organised into pods or clusters. A pod is a collection of servers, storage and networking. A pod varies with data centre operator but an example is 16 rows of 8 server racks plus storage.

Pods are popular among the large data centre players because they enable quick replication, whether bringing resources online or enabling pod maintenance by switching in a replacement pod first.

One issue data centre managers must grapple with is when one pod is heavily loaded while others have free resources. One approach is to move the workload to the other lightly-used pods. This is non-trivial, though; it requires policies, advanced planning and it is not something that can be done in real-time, says Tardent: “And when you move a big workload between pods, you create a series of elephant flows." 

An alternative approach is to move part of the workload to a less-used pod. But this runs the risk of increasing the latency between different parts of the workload. “In a big cloud facility with some big applications, they require very tightly-coupled resources,” he says.

Instead, data centre players favour over-provisioning: deliberately under-utilising their pods to leave headroom for worst-case workload expansion. “You spend a lot of money to over-provision every pod to allow for a theoretical worst-case,” says Tardent.

Calient proposes that its OCS switch fabric be used to effectively move platform resources to pods that are resource-constrained, rather than moving workloads to pods. Hence the term virtual pods or v-pods.

For example, some of the resources in two under-utilised pods can be connected to a third, heavily-loaded pod to create a virtual pod with effectively more rows of racks. “Because you are doing it at the physical layer as opposed to going through a layer-2 or layer-3 network, it truly is within the same physical pod,” says Tardent. “It is as if you have driven a forklift, picked up that row and moved it to the other pod.”

In practice, data centre managers can pull resources from anywhere in the data centre, or they can allocate particular resources permanently to one pod by not going through the OCS optical layer.

The LightConnect fabric manager software can be used as a standalone system to control and monitor the OCS switch fabric. Or the fabric manager software can be integrated within the existing data centre management system using several application programming interfaces.

Calient has not quoted exact utilisation improvement figures that result from using its OCS switches and LightConnect software. 

“We have had acknowledgement that this solution could deliver a significant percentage-utilisation improvement and we will be going into a proof-of-concept deployment with one of the large cloud data centres very soon,” says Tardent. Calient is also in discussion with several other cloud providers.

LightConnect will be a commercially deployable system starting mid-year.


Heading off the capacity crunch

Feature - Part 1: Capacity limits and remedies

Improving optical transmission capacity to keep pace with the growth in IP traffic is getting trickier. 

Engineers are being taxed in the design decisions they must make to support a growing list of speeds and data modulation schemes. There is also a fissure emerging in the equipment and components needed to address the diverging needs of long-haul and metro networks. As a result, far greater flexibility is needed, with designers looking to elastic or flexible optical networking where data rates and reach can be adapted as required.

Figure 1: The green line is the non-linear Shannon limit, above which transmission is not possible. The chart shows how more bits can be sent in a 50 GHz channel as the optical signal to noise ratio (OSNR) is increased. The blue dots closest to the green line represent the performance of the WaveLogic 3, Ciena's latest DSP-ASIC family. Source: Ciena.

But perhaps the biggest challenge is only just looming. Because optical networking engineers have been so successful in squeezing information down a fibre, their scope to send additional data in future is diminishing. Simply put, it is becoming harder to put more information on the fibre as the Shannon limit, as defined by information theory, is approached.

"Our [lab] experiments are within a factor of two of the non-linear Shannon limit, while our products are within a factor of three to six of the Shannon limit," says Peter Winzer, head of the optical transmission systems and networks research department at Bell Laboratories, Alcatel-Lucent. The non-linear Shannon limit dictates how much information can be sent across a wavelength-division multiplexing (WDM) channel as a function of the optical signal-to-noise ratio.

A factor of two may sound a lot, says Winzer, but it is not. "To exhaust that last factor of two, a lot of imperfections need to be compensated and the ASIC needs to become a lot more complex," he says. The ASIC is the digital signal processor (DSP), used for pulse shaping at the transmitter and coherent detection at the receiver.     

 

Our [lab] experiments are within a factor of two of the non-linear Shannon limit, while our products are within a factor of three to six of the Shannon limit - Peter Winzer 

 

At the recent OFC 2015 conference and exhibition, there were plenty of announcements pointing to industry progress. Several companies announced 100 Gigabit coherent optics in the pluggable, compact CFP2 form factor, while Acacia detailed a flexible-rate 5x7-inch MSA capable of 200, 300 and 400 Gigabit rates. And research results were reported on the topics of elastic optical networking and spatial division multiplexing, work designed to ensure that networking capacity continues to scale.

 

Trade-offs

There are several performance issues that engineers must consider when designing optical transmission systems. Clearly, for submarine systems, maximising reach and the traffic carried by a fibre are key. For metro, more data can be carried on a single carrier to improve overall capacity, but at the expense of reach.

Such varied requirements are met using several design levers:  

  •  Baud or symbol rate 
  •  The modulation scheme which determines the number of bits carried by each symbol 
  •  Multiple carriers, if needed, to carry the overall service as a super-channel

The baud rate used is dictated by the performance limits of the electronics. Today that is 32 Gbaud: 25 Gbaud for the data payload and up to 7 Gbaud for forward error correction and other overhead bits. 
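
These numbers are consistent with today's 100 Gigabit coherent interfaces; a quick check (the bits-per-symbol and polarisation factors follow from PM-QPSK, the workhorse format described in the modulation section below):

```python
# 32 Gbaud PM-QPSK: 2 bits/symbol x 2 polarisations, with 7 of the
# 32 Gbaud carrying FEC and other overhead rather than payload.
baud, payload_baud = 32, 25
raw_gbps = baud * 2 * 2                   # 128 Gbps on the wire
net_gbps = raw_gbps * payload_baud / baud
print(raw_gbps, net_gbps)                 # 128 100.0
```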

Doubling the symbol rate from 32 Gbaud used for 100 Gigabit coherent to 64 Gbaud is a significant challenge for the component makers. The speed hike requires a performance overhaul of the electronics and the optics: the analogue-to-digital and digital-to-analogue converters and the drivers through to the modulators and photo-detectors. 

"Increasing the baud rate gives more interface speed for the transponder," says Winzer. But the overall fibre capacity stays the same, as the signal spectrum doubles with a doubling in symbol rate.

However, increasing the symbol rate brings cost and size benefits. "You get more bits through, and so you are sharing the cost of the electronics across more bits," says Kim Roberts, senior manager, optical signal processing at Ciena. It also implies a denser platform by doubling the speed per line card slot.  

 

As you try to encode more bits in a constellation, so your noise tolerance goes down - Kim Roberts   

 

Modulation schemes 

The modulation used determines the number of bits encoded on each symbol. Optical networking equipment already uses binary phase-shift keying (BPSK or 2-quadrature amplitude modulation, 2-QAM) for the most demanding, longest-reach submarine spans; the workhorse quadrature phase-shift keying (QPSK or 4-QAM) for 100 Gigabit-per-second (Gbps) transmission; and the 200 Gbps 16-QAM for distances up to 1,000 km.

Moving to a higher QAM scheme increases WDM capacity but at the expense of reach. That is because as more bits are encoded on a symbol, the separation between them is smaller. "As you try to encode more bits in a constellation, so your noise tolerance goes down," says Roberts.   

One recent development among system vendors has been to add more modulation schemes to enrich the transmission options available. 

 

From QPSK to 16-QAM, you get a factor of two increase in capacity but your reach decreases of the order of 80 percent - Steve Grubb

 

Besides BPSK, QPSK and 16-QAM, vendors are adding 8-QAM, an intermediate scheme between QPSK and 16-QAM. These include Acacia with its AC-400 MSA, Coriant, and Infinera. Infinera has tested 8-QAM as well as 3-QAM, a scheme between BPSK and QPSK, as part of submarine trials with Telstra. 

"From QPSK to 16-QAM, you get a factor of two increase in capacity but your reach decreases of the order of 80 percent," says Steve Grubb, an Infinera Fellow. Using 8-QAM boosts capacity by half compared to QPSK, while delivering more signal margin than 16-QAM. Having the option to use the intermediate formats of 3-QAM and 8-QAM enriches the capacity tradeoff options available between two fixed end-points, says Grubb.    

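The capacity side of these trade-offs follows directly from bits per symbol; a small illustration (the 1.5 bits-per-symbol figure for 3-QAM is our reading of a scheme sitting between BPSK and QPSK, and reach penalties vary by system):

```python
# Relative per-carrier capacity, using QPSK (2 bits/symbol) as the baseline.
bits_per_symbol = {"BPSK": 1, "3-QAM": 1.5, "QPSK": 2, "8-QAM": 3, "16-QAM": 4}
for fmt, bits in bits_per_symbol.items():
    print(f"{fmt}: {bits / 2:.2f}x QPSK capacity")
# 16-QAM doubles capacity but, per Grubb, cuts reach on the order of 80 percent;
# 8-QAM gives a 1.5x boost with more signal margin than 16-QAM.
```
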
Ciena has added two chips to its WaveLogic 3 DSP-ASIC family of devices: the WaveLogic 3 Extreme and the WaveLogic 3 Nano for metro. 

The WaveLogic 3 Extreme uses a proprietary modulation format that Ciena calls 8D-2QAM, a tweak on BPSK that uses longer-duration signalling to enhance span distances by up to 20 percent. The 8D-2QAM is aimed at legacy dispersion-compensated fibre links that carry 10 Gbps wavelengths, and offers up to 40 percent additional upgrade capacity compared to BPSK.

Ciena has also added 4-amplitude-shift-keying (4-ASK) modulation alongside QPSK to its WaveLogic 3 Nano chip. The 4-ASK scheme is also designed for use alongside 10 Gbps wavelengths that introduce phase noise, to which 4-ASK has greater tolerance than QPSK. Ciena's 4-ASK design also generates less heat and is less costly than BPSK.

According to Roberts, a designer’s goal is to use the fastest symbol rate possible, and then add the richest constellation possible "to carry as many bits as you can, given the noise and distance you can go".

After that, the remaining issue is whether an operator’s service can be fitted on one carrier or whether several carriers are needed, forming a super-channel. Packing a super-channel's carriers tightly benefits overall fibre spectrum usage and reduces the spectrum wasted on the guard bands needed when a signal is optically switched.

Can the symbol rate be doubled to 64 Gbaud? "It looks impossibly hard but people are going to solve that," says Roberts. It is also possible to use a hybrid approach that combines a higher symbol rate with a higher-order modulation scheme. The table shows how different baud rate and modulation scheme combinations can be used to achieve a 400 Gigabit single-carrier signal.

 

Note how using polarisation for coherent transmission doubles the overall data rate. Source: Gazettabyte
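
The table itself is an image; the sketch below reproduces the kind of arithmetic it illustrates. The example baud/format pairings are ours, and the raw rates shown exclude the FEC and framing overhead that separates them from the net rates:

```python
# Raw rate = baud x bits-per-symbol x 2 polarisations.
def raw_gbps(baud, bits_per_symbol):
    return baud * bits_per_symbol * 2

print(raw_gbps(32, 4))   # 32 Gbaud PM-16-QAM: 256 Gbps (a 200G net carrier)
print(raw_gbps(64, 2))   # 64 Gbaud PM-QPSK:   256 Gbps (a 200G net carrier)
print(raw_gbps(64, 4))   # 64 Gbaud PM-16-QAM: 512 Gbps (a 400G net carrier)
```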

 

But industry views differ as to how much scope there is to improve overall capacity of a fibre and the optical performance.

Roberts stresses that his job is to develop commercial systems rather than conduct lab 'hero' experiments. Such systems need to work in networks for 15 years and must be cost competitive. "It is not over yet," says Roberts.

He says we are still some way off from when all that remains are minor design tweaks only. "I don't have fun changing the colour of the paint or reducing the cost of the washers by 10 cents,” he says. “And I am having a lot of fun with the next-generation design [being developed by Ciena].”  

"We are nearing the point of diminishing returns in terms of spectrum efficiency, and the same is true with DSP-ASIC development," says Winzer. Work will continue to develop higher speeds per wavelength, to increase capacity per fibre, and to achieve higher densities and lower costs. In parallel, work continues in software and networking architectures. For example, flexible multi-rate transponders used for elastic optical networking, and software-defined networking that will be able to adapt the optical layer.

After that, designers are looking at using more amplification bands, such as the L-band and S-band alongside the current C-band to increase fibre capacity. But it will be a challenge to match the optical performance of the C-band across all bands used. 

"I would believe in a doubling or maybe a tripling of bandwidth but absolutely not more than that," says Winzer. "This is a stop-gap solution that allows me to get to the next level without running into desperation." 

The designers' 'next level' is spatial division multiplexing. Here, signals are launched down multiple channels, such as multiple fibres, multi-mode fibre and multi-core fibre. "That is what people will have to do on a five-year to 10-year horizon," concludes Winzer. 

 

For Part 2, click here

 

See also:

  • High Capacity Transport - 100G and Beyond, Journal of Lightwave Technology, Vol. 33, No. 3, February 2015.

 

A version of this article first appeared in an OFC 2015 show preview.


Acacia unveils 400 Gigabit coherent transceiver

  • The AC-400 5x7 inch MSA transceiver is a dual-carrier design
  • Modulation formats supported include PM-QPSK, PM-8-QAM and PM-16-QAM
  • Acacia’s DSP-ASIC is a 1.3 billion transistor dual-core chip 

Acacia Communications has unveiled the industry's first flexible rate transceiver in a 5x7-inch MSA form factor that is capable of up to 400 Gigabit transmission rates. The company made the announcement at the OFC show held in Los Angeles. 

Dubbed the AC-400, the transceiver supports 200, 300 and 400 Gigabit rates and includes two silicon photonics chips, each implementing single-carrier optical transmission, and a coherent DSP-ASIC. Acacia designs its own silicon photonics and DSP-ASIC ICs.

"The ASIC continues to drive performance while the optics continues to drive cost leadership," says Raj Shanmugaraj, Acacia's president and CEO.

The AC-400 uses several modulation formats that offer various capacity-reach options. The dual-carrier transceiver supports 200 Gig using polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) and 400 Gig using 16-quadrature amplitude modulation (PM-16-QAM). The 16-QAM option is used primarily for data centre interconnect over distances up to a few hundred kilometres, says Benny Mikkelsen, co-founder and CTO of Acacia: "16-QAM provides the lowest cost-per-bit but goes shorter distances than QPSK."  

Acacia has also implemented a third, intermediate format - PM-8-QAM - that improves reach compared to 16-QAM but encodes three bits per symbol (a total of 300 Gig) instead of 16-QAM's four bits (400 Gig). "8-QAM is a great compromise between 16-QAM and QPSK," says Mikkelsen. "It supports regional and even long-haul distances but with 50 percent higher capacity than QPSK." Acacia says one of its customers will use PM-8-QAM for a 10,000 km submarine cable application.
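The AC-400's three rates fall directly out of each format's bits per symbol, since both carriers run at a fixed symbol rate. A minimal sketch of that relationship (taking each carrier's 100 Gig PM-QPSK payload as the baseline, per the rates above):

    # Each carrier nets 100 Gbps with PM-QPSK (2 bits/symbol); richer
    # formats scale the payload by bits per symbol, and there are two carriers.
    FORMATS = {"PM-QPSK": 2, "PM-8-QAM": 3, "PM-16-QAM": 4}
    CARRIERS = 2
    QPSK_NET_GBPS = 100  # net payload per carrier at 2 bits/symbol

    for fmt, bits in FORMATS.items():
        total = QPSK_NET_GBPS * (bits / 2) * CARRIERS
        print(f"{fmt}: {total:.0f} Gbps")   # 200, 300 and 400 Gbps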

 

Source: Gazettabyte 

Other AC-400 transceiver features include OTN framing and forward error correction (FEC). The OTN framing can carry 100 Gigabit Ethernet and OTU4 signals, as well as the newer OTUc1 format that allows client signals to be synchronised so that, for example, a 400 Gigabit flow from a router port can be carried. The FEC options include a 15 percent overhead code for metro applications and a 25 percent overhead code for submarine applications. 
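FEC parity raises the rate sent on the line above the client payload; a quick sketch of the two overhead options applied to a 400 Gig payload (a simple approximation that ignores framing overhead):

    def line_rate_with_fec(payload_gbps, fec_overhead):
        """Rate on the line after adding FEC parity (overhead as a fraction)."""
        return payload_gbps * (1 + fec_overhead)

    print(line_rate_with_fec(400, 0.15))  # metro code: 460 Gbps
    print(line_rate_with_fec(400, 0.25))  # submarine code: 500 Gbps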

The 28 nm CMOS DSP-ASIC features two cores to process the dual-carrier signals. According to Acacia, its customers report that the DSP-ASIC's power consumption is less than half that of its competitors. The ASIC used for Acacia's AC-100 CFP pluggable transceiver, announced a year ago, consumes 12-15W and is the basis of the latest DSP design, suggesting an overall power consumption of 25 to 30-plus Watts. Acacia has not provided power consumption figures, pointing out that since the device implements multiple modes, its power consumption varies.

The AC-400 uses two silicon photonics chips, one for each carrier. The design, Acacia's second generation photonic integrated circuit (PIC), has a reduced insertion loss such that it can now achieve submarine transmission reaches. "Its performance is on a par with lithium niobate [modulators]," says Mikkelsen.

 

It has been surprising to us, and probably even more surprising to our customers, how well silicon photonics is performing

 

The PIC’s basic optical building blocks - the modulators and the photo-detectors - have not been changed from the first-generation design. What has been improved is how light enters and exits the PIC, thereby reducing the coupling loss. The latest PIC has the same pin-out and fits in the same gold box as the first-generation design. "It has been surprising to us, and probably even more surprising to our customers, how well silicon photonics is performing," says Mikkelsen.

Acacia has not tried to integrate the two wavelength circuits on one PIC. "At this point we don't see a lot of cost savings doing that," says Mikkelsen. "Will we do that at some point in future? I don't know." Since there needs to be an ASIC associated with each channel, there is little benefit in having a highly integrated PIC followed by several discrete DSP-ASICs, one per channel. 

The start-up now offers several optical module products. Its original 5x7 inch AC-100 MSA for long-haul applications is used by over 10 customers, while its two 5x7 inch submarine modules, operating at 40 Gig and 100 Gig, are used by two of the largest submarine network operators. Its more recent AC-100 CFP has been adopted by over 15 customers. These include most of the tier 1 carriers, says Acacia, and some content service providers. The AC-100 CFP has also been demonstrated working with Fujitsu Optical Components' CFP that uses NTT Electronics' DSP-ASIC. Acacia expects to ship 15,000 AC-100 coherent CFPs this year.

Each of the company's module products uses a custom DSP-ASIC such that Acacia has designed five coherent modems in as many years. "This is how we believe we out-compete the competition," says Shanmugaraj.  

Meanwhile, Acacia’s coherent AC-400 MSA module is now sampling and will be generally available in the second quarter.


OIF shows 56G electrical interfaces & CFP2-ACO

The Optical Internetworking Forum (OIF) is using the OFC exhibition taking place in Los Angeles this week to showcase the first electrical interfaces running at 56 Gigabit. Coherent optics in a CFP2 pluggable module is also being demonstrated.

 

“The most important thing for everyone is power consumption on the line card”

The OIF - an industry organisation comprising communications service providers, internet content providers, system vendors and component companies - is developing the next common electrical interface (CEI) specifications. The OIF is also continuing to advance fixed and pluggable optical module specifications for coherent transmission including the pluggable CFP2 (CFP2-ACO).

“These are major milestones that the [demonstration] efforts are even taking place,” says Nathan Tracy, a technologist at TE Connectivity and the OIF technical committee chair.

Tracy stresses that the CEI-56G specifications and the CFP2-ACO remain works in progress. “They are not completed documents, and what the demonstrations are not showing are compliance and interoperability,” he says.  

Five CEI-56G specifications are under development, covering applications such as platform backplanes and links between a chip and an optical engine on a line card (see table below). 

Moving from the current 28 Gig electrical interface specifications to 56 Gig promises to double the interface capacity and cut electrical interface widths by half. “If we were going to do 400 Gigabit with 25 Gig channels, we would need 16 channels,” says Tracy. “If we can do 50 Gig, we can get it down to eight channels.”  Such a development will enable chassis to carry more traffic and help address the continual demand for more bandwidth, he says.
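Tracy's lane arithmetic is simple to make concrete (a sketch; 400 Gigabit total carried over 25 Gig versus 50 Gig electrical lanes):

    import math

    def lanes_needed(total_gbps, lane_gbps):
        """Electrical lanes required for a given aggregate interface rate."""
        return math.ceil(total_gbps / lane_gbps)

    print(lanes_needed(400, 25))  # 16 lanes at 25 Gig
    print(lanes_needed(400, 50))  # 8 lanes at 50 Gig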

But doubling the data rate is challenging. “As we double the rate, the electrical loss or attenuation of the signal travelling across a printed circuit board is significantly impacted,” says Tracy. “So now our reaches have to get a lot shorter, or the silicon that sends and receives has to improve to significantly higher levels.”  

 

One of the biggest challenges in system design is thermal management

 

Moreover, chip designers must ensure that the power consumption of their silicon does not rise. “We have to be careful as to what the market will tolerate, as one of the biggest challenges in system design is thermal management,” says Tracy. “We can’t just do what it takes to get to 56 Gigabit.”     

To this end, the OIF is pursuing two parallel tracks: 56 Gigabit non-return-to-zero (NRZ) signalling, and 4-level pulse amplitude modulation (PAM-4), which encodes two bits per symbol so that a 28 Gbaud signalling rate can be used. NRZ at 56 Gig is the simpler signalling scheme but must cope with the higher loss associated with the faster signalling, while PAM-4 avoids that extra loss - its channel is similar to the existing CEI-28G channels used today - but requires a more complex design. 
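The difference between the two tracks boils down to the symbol rate needed on the channel for a given bit rate; a minimal sketch:

    def symbol_rate_gbaud(bit_rate_gbps, bits_per_symbol):
        """Signalling rate the channel must carry for a given bit rate."""
        return bit_rate_gbps / bits_per_symbol

    print(symbol_rate_gbaud(56, 1))  # NRZ: 56 Gbaud, higher channel loss
    print(symbol_rate_gbaud(56, 2))  # PAM-4: 28 Gbaud, similar to CEI-28G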

“Some [of the five CEI-56G specifications] use NRZ, some PAM-4 and some both,” says Tracy. The OIF will not say when it will complete the CEI-56G specifications. However, the projects are making similar progress while the OIF is increasing its interactions with other industry standards groups to shorten the overall timeline.   

 

Source: OIF, Gazettabyte

Two of the CEI-56G specifications cover much shorter distances: the Extra Short Reach (XSR) and Ultra Short Reach (USR). According to the OIF, in the past it was unclear that the industry would benefit from interoperability for such short reaches.

“What is different at 56 Gig is that architectures are fundamentally being changed: higher data rates, industry demand for higher levels of performance, and changing fabrication technologies,” says Tracy. Such fabrication technologies include 3D packaging and multi-chip modules (MCMs) where silicon dies from different chip vendors may be connected within the module. 

The XSR interface is designed to enable higher aggregate bandwidth on a line card, which is becoming limited by the number of pluggable modules that can be fitted on the platform’s face plate. Density can be increased by using mid-board optics (an optical engine) placed close to a chip; fibre from the optical engine is then fed to the front plate, increasing the overall interface capacity.

The USR interface is designed to support stackable ICs and MCMs. 

 

All are coming together in this pre-competitive stage to define the specifications, yet, at the same time, we are all fierce competitors

 

“The most important thing for everyone is power consumption on the line card,” says Tracy. “If you define these very short reach interfaces in such a way that these chips do not need as much power, then we have helped to enable the next generation of line card.” 

The live demonstrations at OFC include a CEI-56G-VSR-NRZ channel, a CEI-56G-VSR-PAM QSFP compliance board, CEI-56G-MR/LR-PAM and CEI-56G-MR/LR-NRZ backplanes, and a CEI-56G-MR-NRZ passive copper cable.

The demonstrations reflect what OIF members are willing to show, as some companies prefer to keep their work private. “All are coming together in this pre-competitive stage to define the specifications, yet, at the same time, we are all fierce competitors,” says Tracy.

 

CFP2-ACO  

Also on display is working CFP2 analogue coherent optics (CFP2-ACO). The significance of coherent optics in a pluggable CFP2 is the promise of higher-density line cards. The CFP is a much bigger module, and at most four can be fitted on a line card; with the smaller, lower-power CFP2, up to eight modules are possible. 

With the CFP2-ACO, the coherent DSP-ASIC is external to the CFP2 module. Much work has been done to ensure that the electrical interface can support the analogue signalling between the CFP2 optics and the on-board DSP-ASIC, says Tracy.

At OFC, several companies have unveiled their CFP2-ACO products, including Finisar, Fujitsu Optical Components, Oclaro and NEC, while ClariPhy has announced a single-board reference design that includes its CL20010 DSP-ASIC and a CFP2-ACO slot.  


MultiPhy readies 100 Gigabit serial direct-detection chip

MultiPhy is developing a chip that will support serial 100 Gigabit-per-second (Gbps) transmission using 25 Gig optical components. The device will enable short reach links within the data centre and up to 80km point-to-point links for data centre interconnect. The fabless chip company expects to have first samples of the chip, dubbed FlexPhy, by year-end.

Figure 1: A block diagram of the 100 Gig serial FlexPhy. The transmitter output is an electrical signal that is fed to the optics. Equally, the input to the receive path is an electrical signal generated by the receiver optics. Source: Gazettabyte

The FlexPhy IC comprises multiplexing and demultiplexing functions as well as a receiver digital signal processor (DSP). The IC's transmitter path has a CAUI-4 (4x28 Gig) interface, a 4:1 multiplexer and four-level pulse amplitude modulation (PAM-4) that encodes two bits per symbol. The resulting chip output is a 50 Gbaud signal used to drive a laser to produce the 100 Gbps output stream.

"The input/output doesn't toggle at 100 Gig, it toggles at 50 Gig," says Neal Neslusan, vice president of sales and marketing at MultiPhy. "But 50 Gig PAM-4 is actually 100 Gigabit-per-second."

The IC's receiver portion will use digital signal processing to recover and decode the PAM-4 signals, and demultiplex the data into four 28 Gbps electrical streams. The FlexPhy IC will fit within a QSFP28 pluggable module.

As with MultiPhy's first-generation chipset, the optics are overdriven. With the MP1101Q 4x28 Gig multiplexer and MP1100Q four-channel receiver, 10 Gig optics are used to achieve four 28 Gig lanes, while with the FlexPhy, a 25 Gig laser is used. "Using a 25 GigaHertz laser and double-driving it to 50 GigaHertz induces some noise but the receiver DSP cleans it up," says Neslusan.

The use of PAM-4 incurs an optical signal-to-noise ratio (OSNR) penalty compared to non-return-to-zero (NRZ) signalling used for MultiPhy's first-generation direct-detection chipset. But PAM-4 has a greater spectral density; the 100 Gbps signal fits within a 50 GHz channel, resulting in 80 wavelengths in the C-band. This equates to 8 terabits of capacity to connect data centres up to 80 km apart.
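The capacity claim is grid arithmetic (assuming roughly 4 THz of usable C-band spectrum, consistent with the 80-channel figure above):

    USABLE_C_BAND_GHZ = 4_000       # assumption: ~4 THz of usable C-band
    CHANNEL_SPACING_GHZ = 50
    RATE_PER_WAVELENGTH_GBPS = 100

    channels = USABLE_C_BAND_GHZ // CHANNEL_SPACING_GHZ
    print(channels)                                     # 80 wavelengths
    print(channels * RATE_PER_WAVELENGTH_GBPS / 1000)   # 8.0 Tbps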

Within the data centre, MultiPhy's physical-layer IC will enable 100 Gbps serial interfaces. The design could also enable 400 Gig links over distances of 500 m, 2 km and 10 km by using four FlexPhy chips, four transmitter optical sub-assemblies (TOSAs) and four receiver optical sub-assemblies (ROSAs).

Meanwhile, MultiPhy's existing direct-detection chipset has been adopted by multiple customers. These include two optical module makers, Oplink and a Chinese vendor, as well as a major Chinese telecom system vendor that is using the chipset for a product coming to market now. 

