Infinera goes multi-terabit with its latest photonic IC

In his new book, The Great Acceleration, Robert Colvile discusses how almost everything we do is speeding up.

In 1845 it took U.S. President James Polk six months to send a message to California. Just 15 years later, Abraham Lincoln's inaugural address could travel the same distance in under eight days, using the Pony Express. But the use of ponies for transcontinental communications was short-lived once the electrical telegraph took hold. [1]

The relentless progress in information transfer, enabled by chip advances and Moore's law, is taken largely for granted. Less noticed is the progress being made in integrated photonic chips, most notably by Infinera.    

In 2000, optical transport systems sent data over long-haul links at 10 gigabits per second (Gbps), with 80 such channels supported in a platform - 0.8 terabits in total. Fifteen years later, Infinera demonstrated its latest-generation photonic integrated circuit (PIC) and FlexCoherent DSP-ASIC, which can transmit data at 600Gbps over 12,000km, or at up to 2.4 terabits per second (Tbps) - three times the total capacity of a state-of-the-art dense wavelength-division multiplexing (DWDM) platform in 2000 - over 1,150km.

 

Infinite Capacity Engine

Infinera dubs its latest optoelectronic subsystem the Infinite Capacity Engine. The subsystem comprises a pair of indium-phosphide PICs - a transmitter and a receiver - and the FlexCoherent DSP-ASIC. Infinera unveiled the performance capabilities the Infinite Capacity Engine enables in January, with its Advanced Coherent Toolkit announcement. Now, to coincide with OFC 2016, Infinera has detailed the underlying chips that enable the toolkit. Product announcements using the new hardware will follow later this year, says Pravin Mahajan, the company's director of product and corporate marketing.

The claimed advantages of the Infinite Capacity Engine include an 82 percent reduction in power consumption compared with a system using discrete optical components and a dozen 100-gigabit coherent DSP-ASICs, and a 53 percent reduction in total cost of ownership compared with competing dense WDM platforms. The FlexCoherent chip also features line-rate data encryption.

"The Infinite Capacity Engine is the industry's first multi-terabit it super-channel, says Mahajan. "It also delivers the industry's first multi-terabit layer one encryption."

 

Multi-terabit PIC 

Infinera's first transmitter and receiver PIC pair, launched in 2005, supported ten 10-gigabit channels and used non-coherent optical transmission.

In 2011, Infinera introduced a 500-gigabit super-channel coherent PIC pair, used in Infinera's DTN-X platforms and also in its Cloud Xpress data centre interconnect platform launched in 2014. The 500-gigabit design comprised ten 50-gigabit channels using polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK) modulation. The accompanying FlexCoherent DSP-ASIC was implemented in a 40nm CMOS process and supports a symbol rate of 16 gigabaud.

The PIC design has since been enhanced to support additional modulation schemes, such as polarisation-multiplexed binary phase-shift keying (PM-BPSK) and polarisation-multiplexed 3-quadrature amplitude modulation (PM-3QAM), that extend the DTN-X's ultra-long-haul performance.

In 2015, Infinera also launched the oPIC-100, a 100-gigabit PIC for metro applications that lets Infinera exploit the concept of sliceable bandwidth by pairing oPIC-100s with a 500-gigabit PIC. Here, the full 500-gigabit super-channel capacity can be pre-deployed even if not all of it is used. Using Infinera's time-based instant bandwidth feature, part of that 500-gigabit capacity can be added between nodes in a few hours in response to a request for greater bandwidth.

Now, with the Infinite Capacity Engine PIC, the number of channels has been expanded to 12, each capable of supporting a range of modulation formats (see table below) and data rates. In fact, Infinera spreads multiple Nyquist sub-carriers across each of the 12 channels. By encoding the data across multiple sub-carriers, each carrier runs at a lower baud rate, increasing the tolerance to non-linear impairments during optical transmission.
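
As a rough illustration of the sub-carrier idea, the Python sketch below divides a channel's symbol rate across a number of Nyquist sub-carriers, using the 33 gigabaud DSP symbol rate detailed below. The sub-carrier counts are assumptions chosen for illustration; Infinera does not disclose the number used.

    # Illustration only: Nyquist sub-carriers divide a channel's symbol rate,
    # so each sub-carrier runs at a lower baud rate. Sub-carrier counts here
    # are assumptions, not Infinera-disclosed figures.

    def subcarrier_baud(channel_gbaud: float, n_subcarriers: int) -> float:
        """Symbol rate at which each Nyquist sub-carrier runs."""
        return channel_gbaud / n_subcarriers

    CHANNEL_GBAUD = 33.0  # per-channel symbol rate of the DSP-ASIC
    for n in (2, 4, 8):
        print(f"{n} sub-carriers: {subcarrier_baud(CHANNEL_GBAUD, n):.1f} Gbaud each")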

Mahajan says the latest PIC has a power consumption similar to that of the current 500-gigabit super-channel PIC, but because the photonic design supports up to 2.4 terabits, the power consumed per gigabit is reduced by 70 percent.

 

FlexCoherent encryption

The latest FlexCoherent DSP-ASIC is Infinera's most complex yet. The 1.6 billion-transistor, 28nm CMOS IC processes two channels and supports a 33 gigabaud symbol rate. As a result, six DSP-ASICs are used with the 12-channel PIC.
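
The headline rates can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a net efficiency after forward error correction and framing of roughly 0.76, and a set of modulation formats based on the article's discussion; both are assumptions rather than Infinera-disclosed parameters.

    # Back-of-the-envelope super-channel capacity. The 0.757 net FEC/framing
    # efficiency and the format list are assumptions, not Infinera figures.

    CHANNELS = 12        # channels on the Infinite Capacity Engine PIC
    BAUD = 33e9          # symbols/s per channel (FlexCoherent DSP-ASIC)
    POLARISATIONS = 2    # polarisation-multiplexed transmission
    EFFICIENCY = 0.757   # assumed net efficiency after FEC and framing

    bits_per_symbol = {"PM-BPSK": 1, "PM-QPSK": 2, "PM-8QAM": 3, "PM-16QAM": 4}

    for fmt, bits in bits_per_symbol.items():
        net = CHANNELS * BAUD * POLARISATIONS * bits * EFFICIENCY
        print(f"{fmt}: ~{net / 1e12:.1f} Tbps per super-channel")

    # PM-BPSK lands near the 600Gbps ultra-long-haul figure quoted above,
    # and PM-16QAM near the 2.4 Tbps maximum.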

It is the DSP-ASIC that enables the various elements of the Advanced Coherent Toolkit, which include improved soft-decision forward error correction. "The net coding gain is 11.9dB, up 0.9dB, which improves the capacity-reach," says Mahajan. Infinera says the ultra-long-haul reach has also been improved, from 9,500km to over 12,000km.

 

[Table of supported modulation formats. Source: Infinera]

The DSP also features layer one encryption implementing the 256-bit Advanced Encryption Standard (AES-256). Infinera says demand for encryption is being led by the internet content providers, but wholesale operators and co-location providers also want to secure transmissions between sites.

Infinera introduced layer two MACsec encryption with its Cloud Xpress platform. MACsec encrypts the Ethernet payload but not the header. With layer one encryption, it is the OTN frames that are encrypted. "When we get down to the OTN level, everything is encrypted," says Mahajan. An operator can choose to encrypt the entire super-channel or to encrypt at the service level, down to the ODU0 (1.244 Gbps) level.
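
As a minimal software illustration of the cipher named above - not Infinera's line-rate hardware implementation, which has its own key management and framing - the sketch below applies AES-256 to a toy payload using Python's cryptography package. GCM mode is an assumption; the article specifies only AES-256.

    # Minimal AES-256 illustration (GCM mode assumed; the article specifies
    # only AES-256). Infinera runs this at line rate in the DSP-ASIC; this is
    # just the cipher applied in software to a stand-in payload.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # 256-bit key, per AES-256
    aesgcm = AESGCM(key)

    payload = b"stand-in for an ODU payload"
    nonce = os.urandom(12)                     # must be unique per encryption
    ciphertext = aesgcm.encrypt(nonce, payload, None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == payload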

 

System benefits

Using the Infinite Capacity Engine, the transmission capacity over a fibre increases from 9.5 terabits to as much as 26.4 terabits.
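
Those per-fibre totals imply the super-channel counts shown in the quick check below; the counts are inferred from the arithmetic rather than stated by Infinera.

    # Super-channel counts implied by the quoted per-fibre totals (inferred,
    # not stated by Infinera).
    old_fibre = 19 * 0.5   # 19 x 500 Gig super-channels = 9.5 Tbps
    new_fibre = 11 * 2.4   # 11 x 2.4 Tbps super-channels = 26.4 Tbps
    print(f"Per-fibre capacity: {old_fibre} Tbps -> {new_fibre} Tbps")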

And with the newest PIC, Infinera can extend the sliceable transponder concept to metro-regional applications. The 2.4 terabits of capacity can be pre-deployed and new capacity turned up between nodes. "You can suddenly turn up 200 gigabit for a month or two - rent it and then return it," says Mahajan. However, to support the full 2.4 terabits of capacity, the PIC at the other end of the link would also need to support 16-QAM.

Infinera says there will be other Infinite Capacity Engine variants. "There will be specific engines for specific markets, and we would choose a subset of the modulations," says Mahajan.

One obvious platform that will benefit from the first Infinite Capacity Engine is the DTN-X. Another that will likely use an ICE variant is Infinera's Cloud Xpress. At present, Infinera integrates its 500-gigabit PIC in a two-rack-unit box for data centre interconnect applications. By using the new PIC and implementing PM-16QAM, the line-side capacity per rack unit of a second-generation Cloud Xpress would rise from 250 gigabits to 1.2 terabits. And with layer one encryption, the MACsec IC may no longer be needed.

Mahajan says the Infinite Capacity Engine has already been tested in the Telstra trial detailed in January. "We have already proven its viability but it is not deployed and carrying live traffic," he says.


Ovum Q&A: Infinera as an end-to-end systems vendor

Infinera hosted an Insight analyst day on October 6th to highlight its plans now that it has acquired metro equipment maker Transmode. Gazettabyte interviewed Ron Kline, principal analyst for intelligent networks at the market research firm Ovum, who attended the event.

 

Q. Infinera’s CEO Tom Fallon referred to this period as a once-in-a-decade transition as metro moves from 10 Gig to 100 Gig. The growth is attributed mainly to the uptake of cloud services and he expects this transition to last for a while. Is this Ovum’s take?  

It is a transition, but it is more about coherent technology than about 10 Gig to 100 Gig. Coherent enables that higher-speed change, which is required because of the level of bandwidth growth in the metro.

We are going to see metro change from 10 Gig to 100 Gig, much like we saw it change from 2.5 Gig to 10 Gig. Economically, it is going to be more feasible for operators to deploy 100 Gig and get more bang for their buck.

Ten years is always a good number for any transition. If you look at SONET/SDH, it began in the early 1990s and by 2000 was mainstream.

If you look at transitions, you had a ten-year time lag to get from 2.5 Gig to 10 Gig and you had another ten years for the development of 40 Gig, although that was impacted by the optical bubble and the [2008] financial crisis. But when coherent came around, you had a three-year cycle for 100 gigabit. Now you are in the same three-year cycle for 200 and 400 gigabit.

Is 100 Gig the unit of currency? I think all logic tells us it is. But I’m not sure that ends up being the story here.   

 


 

Infinera’s CEO asserted that technology differentiation has never been more important in this industry. Is this true across the board, or only for certain platforms, such as optical networking and core routers?

If you look at Infinera, you would say their chief differentiator is the PIC (photonic integrated circuit), which has enabled them to do very well. But other players really have not tried it. Huawei does a little but only in the metro and access.

It is true that you need differentiation, particularly for something as specialised as optical networking. The edge has always gone to the company that can innovate quickest. That is how Nortel did it; they were first with 10 gigabit for long haul and dominated the market.

When you look at coherent, the edge has gone to the quickest: Ciena, Alcatel-Lucent, Huawei and to a certain extent Infinera. Then you throw in the PIC and that gives Infinera an edge.

But then, on the flip side, there is this notion of disaggregation. Nobody likes to say it but it is the commoditisation of the technology; that is certainly the way the content providers are going.

If you get line systems that are truly open, optical networking becomes commodity-based transponders - the white box phenomenon - and then where is the differentiation? It moves into the software realm, and that becomes a much more important differentiator.

I do think differentiation is important; it always is. But I’m not sure how long your advantage lasts these days.

 

Infinera argues that the acquisition of Transmode will triple the total available market it can address.  

Infinera definitely increases its total available market. They only had an addressable market related to long haul and submarine line terminating equipment. Now this [acquisition of Transmode] really opens the door. They can do metro, access, mobile backhaul; they can do a lot of different things.

We don’t necessarily agree with the numbers, though; it is more a doubling of the addressable market.

The rolling annual long-haul backbone global market (3Q 2014 to 2Q 2015) plus the submarine line terminating equipment market where they play [pre-Transmode] was $5.2 billion. If you assume the total market of $14.2 billion is addressable, then yes, it is nearly a tripling, but that includes the legacy SONET/SDH and bandwidth management segments, which are rapidly declining. Nevertheless, Tom’s point is well taken: adding a further $5.8 billion for the metro and access WDM markets to their total addressable market is significant.

 

Tom Fallon also said vendor consolidation will continue, and companies will need to have scale because of the very large amounts of R&D needed to drive differentiation. Is scale needed for a greater R&D spend to stay ahead of the competition?

When you respond to an operator’s request-for-proposal, that is where having end-to-end scale helps Infinera; being able to be a one-stop shop for the metro and long haul.

If I’m an operator, I don’t have to get products from several vendors and be the systems integrator.  

 

Infinera announced a new platform for long haul, the XT-500, which is described as a telecom version of its Cloud Xpress data centre interconnect platform. Why do service providers want such a platform, and how does it differ from Cloud Xpress?

Infinera’s DTN-X long-haul platform is very high capacity, and there are applications where you don’t need such a large platform. That is one application.

The other is where you lease space [to house your equipment]. If I am going to lease space and I have a box that is 2 RU (rack units) high and can do 500 gigabit point-to-point, and I don’t need any cross-connect, then this smaller shelf size makes a lot of sense. I’m just transporting bandwidth.

Cloud Xpress is a scaled-down product for the metro. The XT-500 is carrier-class, e.g. NEBS [Network Equipment-Building System] compliant and can span long-haul distances.  

 

Infinera has also announced the XTC-2. What is the main purpose of this platform?

The platform is a smaller DTN-X variant to serve smaller regions. For example, you can take a 500 gigabit PIC super-channel and slice it up. That enables you to do a hub-and-spoke virtual ring and drop 100 Gig wavelengths at appropriate places. The system uses the new metro PICs introduced in March: at the hub location you use an ePIC, which slices the 500 Gig into individually routable 100 Gig channels, and at the spoke locations, where the XTC-2 sits, you use an oPIC-100.

 

Does the oPIC-100 offer any advantage compared with existing 100 Gig optics?

I don’t think it has a huge edge other than the differentiation you get from a PIC. In fact it might be a deterrent: you have to buy it from Infinera. It is also anti-trend, where the trend is pluggables. 

But the hub and spoke architecture is innovative and it will be interesting to see what they do with the integration of PIC technology in Transmode’s gear.

  

Acquiring Transmode provides Infinera with an end-to-end networking portfolio. Does it still lack important elements? For example, Ciena acquired Cyan and gained its Blue Planet SDN software.

Transmode has a lot of the different technologies required in the metro: mobile backhaul, synchronisation - they are also working on mobile fronthaul - and their hardware is low power.

Transmode has pretty much everything you need in these smaller platforms. But it is the software piece that they don’t have. Infinera has a strategy that says: we are not going to do this; we are going to be open and others can come in through an interface essentially and run our equipment.

That will certainly work.

But if you take a long view that says that in future technology will be commoditised, then you are in a bad spot because all the value moves to the software and you, as a company, are not investing and driving that software. So, this could be a huge problem going forward.

 

What are the main challenges Infinera faces?

One challenge, as mentioned, is hardware commoditisation and the issue of software.

Hardware commoditisation can play in Infinera’s favour. Infinera should have the lowest-cost solution given its integrated approach, so large hardware volumes are good for them. But if pluggable optics become a requirement, then they could be in trouble with this strategy.

The other is keeping up with the Joneses.

I think the 500 Gig in 100 Gig channels is now not that exciting. The 500 Gig PIC is not creating as much advantage as it did before. Where is the 1.2 terabit PIC? Where is the next version that drives Infinera forward?

And is it still going to be 100 Gig? They are leading me to believe it won’t just be. Are they going to have a PIC with 12 channels that are tunable in modulation format to go from 100 to 200 to 400 Gig?

They need to if they want to stay competitive with everyone else because the market is moving to 200 Gig and 400 Gig. Our figures show that over 2,000 multi-rate (QPSK and 16-QAM) ports have been shipped in the last year (3Q 2014 to 2Q 2015). And now you have 8-QAM coming. Infinera’s PIC is going to have to support this.

Infinera’s edge is the PIC but if you don’t keep progressing the PIC, it is no longer an edge.

These are the challenges facing Infinera and it is not that easy to do these things. 


Infinera targets the metro cloud

 

Infinera has styled its latest Cloud Xpress product, used to connect data centres, as a stackable platform, similar to how servers and storage systems are built. The development is another example of how the rise of the data centre is influencing telecoms.

"There is a drive in the industry that is coming from the data centre world that is starting to slam into the telecom world," says Stuart Elby, Infinera's senior vice president of cloud network strategy and technology.

Cloud Xpress is designed to link data centres up to 200km apart, a market Infinera calls the metro cloud. The two-rack-unit (2RU) stackable box features Infinera's 500 gigabit photonic integrated circuit (PIC) for line-side transmission and a total of 500 gigabits of client-side capacity, made up of 10, 40 or 100 Gigabit interfaces. Typically, up to 16 units will be stacked in a rack, providing 8 terabits of transmission capacity over a fibre.

Cloud Xpress has also been designed with the data centre's stringent power and space requirements in mind. The resulting platform has significantly improved power consumption and density metrics compared to traditional metro networking platforms, claims Infinera.

 

Metro split

Elby describes how the metro network is evolving into two distinct markets: metro aggregation and metro cloud. Metro aggregation, as the name implies, combines lower-speed multi-service traffic from consumers' broadband links and from enterprises into a hub, where it is switched onto a network backbone. Metro cloud, in contrast, concerns data centre interconnect: point-to-point links that, for the larger data centres, can total several terabits of capacity.

Cloud Xpress is Infinera's first metro platform that uses its PIC. "We have plans to offer it all the way out to ultra long haul," says Elby. "There are some data centres that need to get tied between continents."

Cloud Xpress is aimed at several classes of customer: internet content providers (or webcos), enterprises, cloud operators and traditional service providers. The primary end users are webcos and enterprises, which is why the platform is designed for rack-and-stack deployment. "These are not networking companies, they are data centre ones; they think of equipment in the context of the data centre," says Elby.

But Infinera expects telcos will also adopt Cloud Xpress. They need to connect their data centres and link data centres to points-of-presence, especially as increasing amounts of traffic from end users now go to the cloud. Equally, a business customer may link to a cloud service provider through a colocation point operated by companies such as Equinix, Rackspace and Verizon Terremark.

"There will be a bleed-over of the use of this product into all these metro segments," says Elby. "But the design point [of Cloud Xpress] was for those that operate data centres more than those that are network providers."

 



The Magnification Effect  

Webcos' services generate significantly more internal traffic than the external event that triggers them - what Elby calls the magnification effect.

Google has shared that a single internet search query travels on average 2,400km before being resolved, while Facebook has revealed that a single http request generates some 930 server-to-server interactions. These servers may be in one data centre or spread across centres.

"It is no longer one byte in, one byte out," says Elby. "The amount of traffic generated inside the network, between data centres, is much greater than the flow of traffic into or out of the data centre." This magnification effect is what is driving the significant bandwidth demand between data centres. "When we talk to the internet content providers, they talk about terabits," says Elby.

Cloud Xpress   

Cloud Xpress is already being evaluated by customers and will be generally available from December.

The stackable platform will have three client-side faceplate options: 10 Gig, 40 Gig and 100 Gig. The 10 Gig SFP+ faceplate is the sweet spot, says Elby; there is also a 40 Gig option, while the 100 Gig one is in development. "In the data centre world, we are hearing that they [webcos] are much more interested in the QSFP28 [optical module]."

Infinera says that the Ethernet client signals connect to a simple mapping-function IC before being placed onto 100 Gig tributaries. Elby says Infinera has minimised the latency through the box, achieving 4.4 microseconds - an important requirement for certain data centre operators.

The 500 Gig PIC supports Infinera's instant bandwidth feature. Here, all of the 500 Gig super-channel capacity is lit, but a user can add 100 Gig increments as required. This avoids having to turn up wavelengths and simplifies adding more capacity when needed.
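
A toy model of how instant bandwidth behaves from an operator's point of view is sketched below; the class and method names are hypothetical, not Infinera's software.

    # Toy model of 'instant bandwidth': the super-channel is fully lit at
    # deployment, and capacity is enabled in 100 Gig increments on demand.
    # All names here are hypothetical.

    class SuperChannel:
        INCREMENT_GBPS = 100

        def __init__(self, deployed_gbps: int = 500):
            self.deployed = deployed_gbps   # capacity physically lit
            self.enabled = 0                # capacity switched into service

        def add_bandwidth(self, gbps: int) -> None:
            if gbps % self.INCREMENT_GBPS:
                raise ValueError("capacity is added in 100 Gig increments")
            if self.enabled + gbps > self.deployed:
                raise ValueError("exceeds the pre-deployed capacity")
            self.enabled += gbps            # no new wavelengths to turn up

    link = SuperChannel()
    link.add_bandwidth(200)
    print(f"{link.enabled} of {link.deployed} Gbps in service")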

The Cloud Xpress rack can accommodate 21 stackable units, but Elby says 16 will typically be used. On the line side, the 500 gigabit super-channels are passively multiplexed onto a fibre to achieve 8 terabits. The platform density of 500 Gig per rack unit (500 Gig client-side and 500 Gig line-side per 2RU box) exceeds that of any competitor's metro platform, says Elby, saving valuable space in the data centre.

The worst-case power consumption is 130W per 100 Gig, an improvement on the power consumption of competitors' platforms. This is despite coherent detection always being used, even for links as short as those between a data centre's buildings. "We have different flavours of the optical engine for different reaches," says Elby. "It [coherent] is just used because it is there."
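
Combining the quoted figures gives a rough rack-level picture; this is a back-of-the-envelope estimate, not an Infinera specification.

    # Rough rack-level arithmetic from the quoted figures: 16 stacked 2RU
    # units, 500 Gbps line-side each, at the worst-case 130W per 100 Gbps.
    units = 16
    gbps_per_unit = 500
    watts_per_100g = 130

    rack_gbps = units * gbps_per_unit              # 8,000 Gbps = 8 Tbps
    rack_watts = rack_gbps / 100 * watts_per_100g  # 10,400W worst case
    print(f"{rack_gbps / 1000:.0f} Tbps per rack, ~{rack_watts / 1000:.1f} kW worst case")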

The reduced power consumption of Cloud Xpress is achieved partly through Infinera's integrated PIC, and partly by scrapping Optical Transport Network (OTN) framing and switching, which are not required. "There are no extra bells and whistles for things that aren't needed for point-to-point applications," says Elby. The stackable nature of the design, adding units as needed, also helps.

The Cloud Xpress rack can be controlled using either Infinera's management system or software-defined networking (SDN) application programming interfaces (APIs). "It supports the sort of interfaces the SDN community wants: Web 2.0 interfaces, not traditional telco ones."
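
Elby does not detail the APIs, but 'Web 2.0 interfaces' usually means RESTful HTTP carrying JSON. The sketch below shows what provisioning through such an interface might look like; the host, endpoint and JSON fields are invented for illustration and are not a documented Cloud Xpress API.

    # Hypothetical REST provisioning call in the 'Web 2.0' style described.
    # The URL, endpoint and fields are invented; they are not a documented
    # Cloud Xpress API.
    import json
    import urllib.request

    body = json.dumps({
        "link": {"a_end": "dc1-cx-03", "z_end": "dc2-cx-01"},
        "bandwidth_gbps": 100,
    }).encode()

    req = urllib.request.Request(
        "https://cloudxpress.example.net/api/v1/links",  # hypothetical host
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req) would submit the request to turn up the link.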

Infinera is also developing a metro aggregation platform that will support multi-service interfaces and aggregate flows to the hub, a market that it expects to ramp from 2016. 

 

