Cloud and AI: Opportunities that must be grabbed
The founder of Cloud Light, Dennis Tong, talks about the company, how its sale to Lumentum came about, and the promise of cloud and AI markets for optics.

For Dennis Tong (pictured), Hong Kong is a unique place that has a perfect blend of the East and West.
Tong, the founder and CEO of optical module specialist Cloud Light, should know. The company is headquartered in Hong Kong and has R&D offices in Hong Kong and Taipei, Taiwan. Cloud Light also has manufacturing sites in Asia: in the Chinese city of Dongguan—two hours by car, north of Hong Kong—and in the Philippines.
Now, Cloud Light is part of Lumentum. The U.S. photonics firm bought the optical module maker for $750 million in November 2023.
Tie-up
Cloud Light is a volume manufacturer of optical modules. The company takes 12-inch silicon photonic wafers, tests the wafers’ dies, and packages them for use in optical modules.
Cloud Light has a long relationship with Lumentum, using the U.S. company’s continuous-wave lasers for its silicon photonic-based optical modules.
Tong says he has been in photonics for 30 years and has good friends at Lumentum. “We had opportunities to chat and exchange views as to where the industry is going, and we shared a common vision,” he says. Eventually, the talk turned to a possible merger and acquisition.
Tong says the decision to sell centred on how best to grow the business. Cloud Light would have continued to do well, he says, but it could grow much faster if he and his 1,600 staff joined Lumentum.
It is also timely. “Opportunities such as cloud and AI, they don’t come along very often,” says Tong.

Wafer-in, Product-out
Cloud Light has developed a manufacturing process dubbed “wafer-in, product-out.”
Turning a photonic integrated circuit (PIC) into a packaged optical module involves many stages and players. The PIC's designers pass the design to a foundry, which fabricates the wafer. The wafer is then shipped to an outsourced semiconductor assembly and test (OSAT) provider that performs the wafer back-end tasks: testing, dicing and polishing. The working PICs, the known good dies, are shipped to a contract manufacturer that builds the pluggable modules.
“You can see that the entire collaboration chain is fragmented,” says Tong. “With our wafer-in, product-out process, we put everything in one group.”
Cloud Light takes the wafer from the foundry and does all the steps resulting in the delivered module.
Tong says the advantages of undertaking the complete process include improved product yield. For example, the company measures coupling loss to the PIC and its optical waveguide loss during testing, and uses that insight to improve yield.
Cloud Light has developed its own equipment to support automation. This know-how means its design staff can work with process and equipment colleagues to tailor the manufacturing process for new product designs. The precise assembly of unique micro-optics is one example.
It is this expertise and capability that particularly interested Lumentum in Cloud Light.
According to Tong, accumulating expertise in the different production areas has taken years: “There is a lot of subtlety to it, and we started to set this up in 2017.”
Hyperscaler business
Cloud Light succeeded early with a hyperscaler, making a 4×10-gigabit multimode VCSEL-based transceiver. But it soon realised market growth was coming from single-mode optical transceivers.
Its decision to pursue its wafer-in, product-out strategy stemmed from a desire to avoid becoming one of many single-mode optical transceiver makers. “We didn’t think we would add any value to the market by just creating a me-too company,” says Tong.
If the company was going to invest in a new platform, it would have to be scalable to support high volumes.
“It was very clear that silicon photonics was the right thing to do,” says Tong. “We were one of the first, if not the first, to launch a 400-gigabit silicon photonics-based transceiver in 2019.”
Cloud Light pitched its in-house scalable manufacturing approach to a hyperscaler that liked its plan, resulting in the company securing the hyperscaler as a customer.
Plans
Since the acquisition’s completion, Lumentum has given Cloud Light broad scope; there is no rush for full-blown integration, says Tong.
“Our mandate is to continue to grow the module business,” he says. “And we are open to using components from Lumentum and other suppliers.”
Lumentum’s components also offer Cloud Light the ability to create new products. “Customers are seeing us as more equipped, which opens up new, interesting opportunities,” says Tong.
Moreover, Cloud Light is not solely making modules for Lumentum. “The reality is that this is a very dynamic market, dominated by a few customers,” says Tong. “We are open to different business models as long as we can add value.”
Opportunities
At the time of the deal, Lumentum said it expected Cloud Light to add upwards of $200 million to its annual revenues. Cloud Light's $200 million in revenues in the previous year came almost entirely from 400-gigabit and higher-speed transceiver sales.
Lumentum also makes coherent optical modems, ROADMs, and 3D sensing for commercial applications. Tong says coherent modules are one obvious opportunity for Cloud Light: "If you look into the future, I think the line between cloud/datacom and telecom will become blurred."
Cloud and AI will drive volumes, and the silicon photonics platform will be applicable for coherent modems as well. “So, a lot of the things that we have developed will also be applicable to coherent modules in the future,” says Tong. “And it is definitely applicable if one day coherent optics makes its way into the data centre.”
Coherent optical modules will keep increasing symbol rates and using more sophisticated coding schemes, but at some point the effective data rate per line will start to plateau. To increase bandwidth beyond that, designs will go parallel by adding more channels. "Adding more fibre or more wavelengths, then it comes back to density, and then it's all about packaging," says Tong.
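As a rough sketch of that scaling argument (the symbol rate, modulation and overhead figures below are illustrative assumptions, not numbers from Tong or Lumentum), a coherent wavelength's net rate is roughly its symbol rate times bits per symbol times two polarisations, less the forward error correction overhead; once that levels off, capacity grows by running more channels in parallel:

```python
# Back-of-the-envelope coherent capacity sketch. Illustrative numbers only,
# not figures from Tong or Lumentum.

def wavelength_rate_gbps(baud_gbd, bits_per_symbol, fec_overhead=0.20):
    """Approximate net data rate of one coherent wavelength in Gbps.

    baud_gbd        : symbol rate in gigabaud
    bits_per_symbol : per polarisation, e.g. 4 for 16-QAM, 6 for 64-QAM
    fec_overhead    : assumed fraction of the raw rate spent on error correction
    """
    raw = baud_gbd * bits_per_symbol * 2            # two polarisations
    return raw * (1 - fec_overhead)

# Pushing symbol rate and modulation raises the rate per wavelength...
per_wavelength = wavelength_rate_gbps(130, 6)       # ~1,248 Gbps
print(per_wavelength)

# ...but once that plateaus, capacity scales by going parallel:
print(per_wavelength * 64)                          # e.g. 64 wavelengths on one fibre
```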
The ability to change its automated assembly for new applications also suggests that Cloud Light’s manufacturing capability could benefit Lumentum’s other product lines, such as ROADMs and even new markets such as optical circuit switches.
Co-packaged optics
Co-packaged optics are seen as one solution for applications where standard pluggable optics are no longer suitable.
Tong says that there are still issues to resolve before co-packaged optics are deployed at scale. One challenge is reliability; hyperscalers will not deploy the technology at scale until it has demonstrated good quality and reliability.
"The emergence of AI and cloud may accelerate that deployment, simply because of the volumes they are using and the density issue," says Tong. Cost and thermal issues are also areas co-packaged optics can address.
Cloud Light is ready for the advent of co-packaged optics. For its 800-gigabit transceiver, it can package a bare-die digital signal processor right next to the silicon photonics optical engine. “It’s not exactly a co-packaged optics product, but it has the same capability,” he says.
Shrinking lifecycles
The lifecycle of optical module products continues to shrink. At 10 gigabits, it was a decade-plus; for 100 gigabits, it was five to six years; at 400 gigabits, it has been more like three or four years. “Now, with AI, it is more like two to three years,” says Tong.
Success is all about time-to-market and time-to-scale.
“You need to be able to ramp up very quickly to the type of volumes and the type of quality that the customer is asking for,” says Tong. “There’s no time for you to get ready; you must be ready.”
Infinera’s ICE flow
Infinera's newest Infinite Capacity Engine 5 (ICE5) doubles capacity to 2.4 terabits. The ICE5, which comprises a coherent DSP and a photonic integrated circuit (PIC), is being demonstrated this week at the OFC show in San Diego.
Infinera has also detailed its ICE6, being developed in tandem with the ICE5. The two designs represent a fork in Infinera’s coherent engine roadmap in terms of the end markets they will address.
The ICE5 is targeted at data centre interconnect and applications where fibre is being added towards the network edge. The next-generation access network of cable operators is one such example. Another is mobile operators deploying fibre in preparation for 5G.
First platforms using the ICE5 will be unveiled later this year and will ship early next year.
Infinera’s ICE6 is set to appear two years after the ICE5. Like the ICE4, Infinera’s current Infinite Capacity Engine, the ICE6 will be used across all of Infinera’s product portfolio.
Meanwhile, the 1.2 terabit ICE4 will now be extended to work in the L-band of optical wavelengths alongside the existing C-band, effectively doubling a fibre’s capacity available for service providers.
Infinera’s decision to develop two generations of coherent designs follows the delay in bringing the ICE4 to market.
“The fundamental truth about the industry today is that coherent algorithms are really hard,” says Geoff Bennett, director, solutions and technology at Infinera.
By designing two generations in parallel, Infinera seeks to speed up the introduction of its coherent engines. "With ICE5 and ICE6, we have learnt our lesson," says Bennett. "We recognise that there is an increased cadence demanded by certain parts of the industry, predominantly the internet content providers."
ICE5
The ICE5 uses a four-wavelength indium-phosphide PIC that, combined with the FlexCoherent DSP, supports a maximum symbol rate of 66Gbaud and modulation schemes up to 64-ary quadrature amplitude modulation (64-QAM).
Infinera says that the FlexCoherent DSP used for ICE5 is a co-development but is not naming its partners.
Using 64-QAM and 66Gbaud enables 600-gigabit wavelengths for a total PIC capacity of 2.4 terabits. Each PIC is also ‘sliceable’, allowing each of the four wavelengths to be sent to a different location.
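The ICE5 arithmetic can be checked on the back of an envelope; the roughly 25 percent FEC and framing overhead assumed below is an illustration, not a figure quoted by Infinera:

```python
# Rough sanity check of the ICE5 numbers. The overhead value is an assumption
# for illustration, not a figure quoted by Infinera.

baud = 66e9           # 66 Gbaud symbol rate
bits_per_symbol = 6   # 64-QAM carries 6 bits per symbol per polarisation
polarisations = 2
overhead = 0.24       # assumed FEC + framing overhead

raw = baud * bits_per_symbol * polarisations   # ~792 Gbps per wavelength
net = raw * (1 - overhead)                     # ~600 Gbps per wavelength
pic_capacity = 4 * net                         # four wavelengths per PIC

print(f"raw per wavelength: {raw/1e9:.0f} Gbps")
print(f"net per wavelength: {net/1e9:.0f} Gbps")
print(f"PIC capacity:       {pic_capacity/1e12:.2f} Tbps")   # ~2.4 Tbps
```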
Infinera is not detailing the ICE5’s rates but says the design will support lower rates, as low as 200 gigabit-per-second (Gbps) or possibly 100Gbps per wavelength.
Bennett highlights 400Gbps as one speed of market interest. Infinera believes its ICE5 design will deliver 400 gigabits over 1,300km. The 600Gbps wavelength implemented using 64-QAM and 66Gbaud will have a relatively short reach of 200-250km.
“A six hundred gigabit wavelength is going to be very short haul but is ideal for data centre interconnect,” says Bennett, who points out that the extended reach of 400-gigabit wavelengths is attractive and will align with the market emergence of 400 Gigabit Ethernet client signals.
Probabilistic shaping squeezes the last bits of capacity-reach out of the spectrum
Hybrid Modulation
The 400-gigabit wavelengths will be implemented using a hybrid modulation scheme. While Infinera is not detailing the particular scheme, Bennett cites several ways hybrid modulation can be implemented.
One hybrid modulation technique is to use a different modulation scheme on each of the two light polarisations as a way of offsetting non-linearities. The two modulation schemes can be repeatedly switched between the two polarisation arms. “It turns out that the non-linear penalty takes time to build up,” says Bennett.
Another approach is using blocks of symbols, varying the modulation used for each block. "The coherent receiver has to know how many symbols you are going to send with 64-QAM and how many with 32-QAM, for example," he says.
A third hybrid modulation approach is to use sub-carriers. In a traditional coherent system, a carrier is the output of the transmit laser. To generate sub-carriers, the coherent DSP’s digital-to-analogue converter (DAC) applies a signal to the modulator which causes the carrier to split into multiple sub-carriers.
To transmit at 32Gbaud, four sub-carriers can be used, each modulated at 8Gbaud, says Bennett. Nyquist shaping is used to pack the sub-carriers to ensure there is no spectral efficiency penalty.
“You now have four parallel streams and you can deal with them independently,” says Bennett, who points out that 8Gbaud turns out to be an optimal rate in terms of minimising non-linearities made up of cross-phase and self-phase modulation components.
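Bennett's example works out as follows; the 16-QAM modulation assumed below is purely illustrative:

```python
# Illustrative arithmetic for Bennett's sub-carrier example: a 32-Gbaud carrier
# split into four Nyquist-shaped sub-carriers of 8 Gbaud each.

carrier_baud = 32                                   # gigabaud, single-carrier case
subcarriers = 4
subcarrier_baud = carrier_baud / subcarriers        # 8 Gbaud per sub-carrier

# With tight Nyquist shaping, spectral width is roughly the symbol rate, so the
# four sub-carriers together occupy about the same spectrum as one carrier.
total_spectrum_ghz = subcarriers * subcarrier_baud  # ~32 GHz either way

bits_per_symbol = 4                                 # 16-QAM assumed for illustration
raw_rate_gbps = carrier_baud * bits_per_symbol * 2  # identical in both cases

print(subcarrier_baud, total_spectrum_ghz, raw_rate_gbps)   # 8.0 32.0 256
```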
Sub-carriers can be described as a hybrid modulation approach in that each sub-carrier can be operated at a different baud rate and use a different modulation scheme. This is how probabilistic constellation shaping - a technique that improves spectral efficiency and which allows the data rate used on a carrier to be fine-tuned - will be used with the ICE6, says Infinera.
For the ICE5, sub-carriers are not included. “For the applications we will be using ICE5 for, the sub-carrier technology is not as important,” says Bennett. “Where it is really important is in areas such as sub-sea.”
Silicon photonics has a lower carrier mobility. It is going to be harder and harder to build such parts of the optics in silicon.
Probabilistic constellation shaping
Infinera is not detailing the longer-term ICE6 beyond highlighting two papers presented at the ECOC show last September: one involved a working 100Gbaud sub-carrier-driven wavelength, the other probabilistic shaping applied to a 1024-QAM signal.
The 100Gbaud rate will enable higher capacity transponders while the use of probabilistic shaping will enable greater spectral efficiency. “Probabilistic shaping squeezes the last bits of capacity-reach out of the spectrum,” says Bennett.
“In ICE6 we will be doing different modulation on each sub-carrier,” says Bennett. “That will be part of probabilistic constellation shaping.” And assuming Infinera adheres to 8Gbaud sub-carriers, 16 will be used for a 100Gbaud symbol rate.
Infinera argues that the interface between the optics and the DSP becomes key at such high baud rates, and that its ability to develop both components gives it a system design advantage.
The company also argues that its use of indium phosphide for its PICs will be a crucial advantage at such high baud rates when compared to silicon photonics-based solutions. “Silicon photonics has a lower carrier mobility,” says Bennett. “It is going to be harder and harder to build such parts of the optics in silicon.”
ICE4 embraces the L-band
Infinera’s 1.2 terabit six-wavelength ICE4 was the first design to use Nyquist sub-carriers and SD-FEC gain sharing, part of what Infinera calls its advanced coherent toolkit.
At OFC, Infinera announced that the ICE4 will add the L-band in addition to the C-band. It also announced that the ICE4 has now been adopted across Infinera’s platform portfolio.
The first platforms to use the ICE4 were the Cloud Xpress 2, the compact modular platform used for data centre interconnect, and the XT-3300, a 1 rack-unit (1RU) modular platform targeted at long-haul applications.
A variant of the platform tailored for submarine applications, the XTS-3300, achieved a submarine reach of 10,500km in a trial last year. The modulation format used was 8-QAM coupled with SD-FEC gain-sharing and Nyquist sub-carriers. The resulting spectral efficiency was 4.5 bits/s/Hz. In comparison, standard 100-gigabit coherent transmission has a spectral efficiency of 2 bits/s/Hz. The total capacity supported in the trial was 18.2 terabits.
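A quick cross-check of the trial figures shows the quoted capacity and spectral efficiency imply roughly a full C-band of occupied spectrum (an inference from the published numbers, not a breakdown provided by Infinera):

```python
# Cross-check of the XTS-3300 trial figures; illustrative arithmetic only.

total_capacity_bps = 18.2e12        # 18.2 terabits over the link
spectral_eff = 4.5                  # bits/s/Hz achieved with 8-QAM and the ACT features

implied_spectrum_hz = total_capacity_bps / spectral_eff
print(f"implied occupied spectrum: {implied_spectrum_hz/1e12:.2f} THz")
# ~4.0 THz, i.e. roughly a full C-band of optical spectrum.

# For comparison, standard 100G coherent at 2 bits/s/Hz over the same spectrum:
print(f"equivalent at 2 bits/s/Hz: {implied_spectrum_hz*2/1e12:.1f} Tbps")   # ~8.1 Tbps
```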
Since then, the ICE4 has been added to the various DTN-X chassis, including the XT-3600 2.4 terabit 4RU platform.
Infinera inches closer to cognitive networking
The second and final part on how optical networking is becoming smarter.
Infinera says it has made it easier for operators to deploy optical links to accommodate traffic growth.
The system vendor says its latest capability, known as Instant Network, also paves the way for autonomous networks that will predict traffic trends and enable capacity as required.
The latest announcement builds on Infinera’s existing Instant Bandwidth feature, introduced in 2012, that uses its photonic integrated circuit (PIC) technology.
Instant Bandwidth exploits the fact that all five 100-gigabit wavelengths of a line card hosting Infinera’s 500-gigabit PIC are lit even though an operator may only need a subset of the 100-gigabit wavelengths. Using Instant Bandwidth, extra capacity can be added to a link - until all five wavelengths are used - in a matter of hours.
The technology itself allows 100-gigabit wavelengths to be activated in minutes, says Geoff Bennett, director, solutions and technology at Infinera (pictured). The several hours reflect the time it takes the operator to raise a purchase order for the new capacity and get it signed off.
Instant Bandwidth has been enhanced since its introduction. Infinera has introduced its latest generation 2.4 terabit PIC which is also sliceable. With a sliceable PIC, individual wavelengths can be sent to different locations using reconfigurable optical add-drop multiplexer (ROADM) technology within the network.
Another feature added is time-based Instant Bandwidth. This allows an operator to add extra capacity without first raising a purchase order. Paying for the extra capacity is dealt with at a later date. This feature has already benefited operators that have experienced a fibre cut and have used Instant Bandwidth to reroute traffic.
Infinera says over 70 of its customers use Instant Bandwidth. These include half of its long-haul customers, its top three submarine network customers and over 60 percent of the data centre interconnect customers that use its Cloud Xpress and XTS products. Some of its data centre interconnect customers request boxes with all the licences already activated, says Bennett.
The internet content providers are banging the drum for cognitive networking
Instant Network
Now, with the Instant Network announcement, Infinera has added a licence pool and moveable licences. The result is that an operator can add capacity in minutes rather than hours by drawing on its pool of prepaid licences.
Equally, if an operator wants to reroute a 100-gigabit or 200-gigabit wavelength to another destination, it can transfer the same licence from the original end-point to the new one.
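A minimal sketch of how a prepaid licence pool with moveable licences might be modelled is shown below; the class and method names are hypothetical illustrations, not Infinera's Instant Network software:

```python
# Hypothetical sketch of a prepaid licence pool with moveable licences.
# Class and method names are illustrative, not Infinera's Instant Network API.

class LicencePool:
    def __init__(self, prepaid_100g):
        self.available = prepaid_100g          # unused prepaid 100G licences
        self.assigned = {}                     # link -> number of 100G licences lit

    def activate(self, link, n=1):
        """Draw n prepaid 100G licences from the pool and light them on a link."""
        if n > self.available:
            raise ValueError("licence pool exhausted - purchase more capacity")
        self.available -= n
        self.assigned[link] = self.assigned.get(link, 0) + n

    def move(self, old_link, new_link, n=1):
        """Transfer licences between end-points, e.g. to reroute a wavelength."""
        if self.assigned.get(old_link, 0) < n:
            raise ValueError("not enough licences on the original link")
        self.assigned[old_link] -= n
        self.assigned[new_link] = self.assigned.get(new_link, 0) + n

pool = LicencePool(prepaid_100g=10)
pool.activate("A-B", 2)        # capacity added in minutes, no new purchase order
pool.move("A-B", "A-C", 1)     # the same licence redeployed to a new destination
print(pool.available, pool.assigned)   # 8 {'A-B': 1, 'A-C': 1}
```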
“They [operators] can activate capacity when the revenue-generating service asks for it,” says Bennett.
Another element of Instant Network still to be introduced is the Automated Capacity Engineering that is part of Infinera’s Xceed software.
“Automated Capacity Engineering will be an application that runs on Xceed,” says Bennett. The application runs on the OpenDaylight open source software-defined networking (SDN) controller and takes advantage of plug-ins that Infinera has added to the Xceed platform, such as multi-layer path computation and traffic monitoring.
Using this feature, the SDN orchestrator can request a 100 Gigabit Ethernet private line, for example. If there is insufficient capacity, the Automated Capacity Engineering app will calculate the most cost-effective path and install the necessary licences at the required locations, says Bennett.
“We think this is leading the way to cognitive networking,” he says. “We have the software foundation and the hardware foundation for this.”
Networks that think
With a cognitive network, data from the network is monitored and fed to a machine learning algorithm to predict when capacity will be exhausted. New capacity can then be added in a timely manner.
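A minimal sketch of the idea, assuming a simple linear trend fit (real deployments would use more sophisticated machine learning, as Bennett notes; the traffic samples below are invented for illustration):

```python
# Minimal sketch of the cognitive-networking idea: extrapolate link utilisation
# and flag when capacity is predicted to run out. Purely illustrative.

def predict_exhaustion(samples, capacity_gbps):
    """Linear fit of (day, traffic_gbps) samples; return days until exhaustion."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None                            # traffic flat or falling
    return (capacity_gbps - intercept) / slope

traffic = [(0, 220), (30, 260), (60, 310), (90, 355)]   # day, Gbps carried
days = predict_exhaustion(traffic, capacity_gbps=500)
print(f"predicted exhaustion around day {days:.0f}")     # time to pre-install licences
```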
Bennett says internet content providers, the likes of Google, Microsoft and Facebook, will all deploy such technology in their networks.
Being consumers of huge amounts of bandwidth, they will be the first adopters. Wholesale operators which also serve the internet content providers will likely follow. Traditional telecom operators with their more limited traffic growth will be the last to adopt such technology.
But cognitive networking is not yet ready. “The machine learning algorithms are still basic,” says Bennett. “But the biggest thing that is missing is the acceptance [of such technology] by network operations staff.”
However, this is not an issue with the internet content providers. “They are banging the drum for cognitive networking,” says Bennett.
Part 1: Ciena's Liquid Spectrum
Infinera goes multi-terabit with its latest photonic IC
In his new book, The Great Acceleration, Robert Colvile discusses how things we do are speeding up.
In 1845 it took U.S. President James Polk six months to send a message to California. Just 15 years later Abraham Lincoln's inaugural address could travel the same distance in under eight days, using the Pony Express. But the use of ponies for transcontinental communications was short-lived once the electrical telegraph took hold. [1]
The relentless progress in information transfer, enabled by chip advances and Moore's law, is taken largely for granted. Less noticed is the progress being made in integrated photonic chips, most notably by Infinera.
In 2000, optical transport sent data over long-haul links at 10 gigabit-per-second (Gbps), with 80 such channels supported in a platform. Fifteen years later, Infinera demonstrated its latest-generation photonic integrated circuit (PIC) and FlexCoherent DSP-ASIC that can transmit data at 600Gbps over 12,000km, and up to 2.4 terabit-per-second (Tbps) - three times the data capacity of a state-of-the-art dense wavelength-division multiplexing (DWDM) platform back in 2000 - over 1,150km.
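Making the comparison explicit (simple arithmetic on the figures quoted above):

```python
# The capacity comparison, made explicit.

platform_2000_gbps = 80 * 10        # 80 channels at 10 Gbps = 800 Gbps per platform
ice_2015_gbps = 2400                # 2.4 Tbps from one Infinite Capacity Engine

print(ice_2015_gbps / platform_2000_gbps)   # 3.0 - three times the 2000-era platform
```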
Infinite Capacity Engine
Infinera dubs its latest optoelectronic subsystem the Infinite Capacity Engine. The subsystem comprises a pair of indium-phosphide PICs - a transmitter and a receiver - and the FlexCoherent DSP-ASIC. The performance capabilities that the Infinite Capacity Engine enables were unveiled by Infinera in January with its Advanced Coherent Toolkit announcement. Now, to coincide with OFC 2016, Infinera has detailed the underlying chips that enable the toolkit. And company product announcements using the new hardware will be made later this year, says Pravin Mahajan, the company's director of product and corporate marketing.
The claimed advantages of the Infinite Capacity Engine include an 82 percent reduction in power consumption compared to a system using discrete optical components and a dozen 100-gigabit coherent DSP-ASICs, and a 53 percent reduction in total cost of ownership compared to competing dense WDM platforms. The FlexCoherent chip also features line-rate data encryption.
"The Infinite Capacity Engine is the industry's first multi-terabit it super-channel, says Mahajan. "It also delivers the industry's first multi-terabit layer one encryption."
Multi-terabit PIC
Infinera's first transmitter and receiver PIC pair, launched in 2005, supported ten 10-gigabit channels and implemented non-coherent optical transmission.
In 2011 Infinera introduced a 500-gigabit super-channel coherent PIC pair used with Infinera's DTN-X platforms and also its Cloud Xpress data centre interconnect platform launched in 2014. The 500-gigabit design implemented ten 50-gigabit channels using polarisation-multiplexed, quadrature phase-shift keying (PM-QPSK) modulation. The accompanying FlexCoherent DSP-ASIC was implemented in a 40nm CMOS process node and supported a symbol rate of 16 gigabaud.
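The 500-gigabit figures can be sanity-checked as follows; the roughly 20 percent FEC and framing overhead assumed below is an illustration, not an Infinera figure:

```python
# Back-of-the-envelope check of the 500-gigabit PIC figures. The overhead
# fraction is an assumption for illustration, not Infinera's figure.

baud = 16               # gigabaud
bits_per_symbol = 2     # QPSK: 2 bits per symbol per polarisation
polarisations = 2
overhead = 0.20

raw_per_channel = baud * bits_per_symbol * polarisations   # 64 Gbps
net_per_channel = raw_per_channel * (1 - overhead)          # ~51 Gbps, i.e. a 50G channel
super_channel = 10 * 50                                      # ten 50-gig channels

print(raw_per_channel, round(net_per_channel), super_channel)   # 64 51 500
```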
The PIC design has since been enhanced to support additional modulation schemes, such as polarisation-multiplexed binary phase-shift keying (PM-BPSK) and 3-quadrature amplitude modulation (PM-3QAM), that extend the DTN-X's ultra long-haul performance.
In 2015 Infinera also launched the oPIC-100, a 100-gigabit PIC for metro applications that enables Infinera to exploit the concept of sliceable bandwidth by pairing oPIC-100s with a 500 gigabit PIC. Here the full 500 gigabit super-channel capacity can be pre-deployed even if not all of the capacity is used. Using Infinera's time-based instant bandwidth feature, part of that 500 gigabit capacity can be added between nodes in a few hours based on a request for greater bandwidth.
Now, with the Infinite Capacity Engine PIC, the effective number of channels has been expanded to 12, each capable of supporting a range of modulation schemes and data rates. In fact, Infinera uses multiple Nyquist sub-carriers spread across each of the 12 channels. By encoding the data across multiple sub-carriers, a lower baud rate can be used, increasing the tolerance to non-linear channel impairments during optical transmission.
Mahajan says the latest PIC has a power consumption similar to its current 500-gigabit super-channel PIC but, because the photonic design supports up to 2.4 terabits, the power consumption per gigabit is reduced by 70 percent.
FlexCoherent encryption
The latest FlexCoherent DSP-ASIC is Infinera's most complex yet. The 1.6 billion transistor 28nm CMOS IC can process two channels, and supports a 33 gigabaud symbol rate. As a result, six DSP-ASICs are used with the 12-channel PIC.
It is the DSP-ASIC that enables the various elements of the advanced coherent toolkit that includes improved soft-decision forward error correction. "The net coding gain is 11.9dB, up 0.9 dB, which improves the capacity-reach," says Mahajan. Infinera says the ultra long-haul performance has also been improved from 9,500km to over 12,000km.
The DSP also features layer one encryption implementing the 256-bit Advanced Encryption Standard (AES-256). Infinera says the request for encryption is being led by the Internet content providers but wholesale operators and co-location providers also want to secure transmissions between sites.
Infinera introduced layer two MACsec encryption with its Cloud Xpress platform. This encrypts the Ethernet payload but not the header. With layer one encryption, it is the OTN frames that are encoded. "When we get down to the OTN level, everything is encrypted," says Mahajan. An operator can choose to encrypt the entire super-channel or encrypt at the service level, down to the ODU0 (1.244 Gbps) level.
System benefits
Using the Infinite Capacity Engine, the transmission capacity over a fibre increases from 9.5 terabit to up to 26.4 terabit.
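Read against the super-channel sizes, the quoted totals imply the following per-fibre counts (an inference from the published figures; Infinera has not given this breakdown):

```python
# Implied super-channel counts per fibre, inferred from the quoted totals.

print(9.5e12 / 500e9)       # ~19 of today's 500-gigabit super-channels
print(26.4e12 / 2.4e12)     # ~11 Infinite Capacity Engine super-channels
```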
And with the newest PIC, Infinera can expand the sliceable transponder concept for metro-regional applications. The 2.4 terabits of capacity can be pre-deployed and new capacity turned up between nodes. "You can suddenly turn up 200 gigabit for a month or two, rent and then return it," says Mahajan. However, to support the full 2.4 terabits of capacity, the PIC at the other end of the link would also need to support 16-QAM.
Infinera does say there will be other Infinite Capacity Engine variants. "There will be specific engines for specific markets, and we would choose a subset of the modulations," says Mahajan.
One obvious platform that will benefit from the first Infinite Capacity Engine is the DTN-X. Another that will likely use an ICE variant is Infinera's Cloud Xpress. At present Infinera integrates its 500-gigabit PIC in a 2 rack-unit box for data centre interconnect applications. By using the new PIC and implementing PM-16QAM, the line-side capacity per rack unit of a second-generation Cloud Xpress would rise from 250 gigabit to 1.2 terabit. And with layer one encryption, the MACsec IC may no longer be needed.
Mahajan says the Infinite Capacity Engine has already been tested in the Telstra trial detailed in January. "We have already proven its viability but it is not deployed and carrying live traffic," he says.
Next-generation coherent adds sub-carriers to capabilities
Part 2: Infinera's coherent toolkit
Infinera has detailed coherent technology enhancements implemented using its latest-generation optical transmission technology. The system vendor is still to launch its newest photonic integrated circuit (PIC) and FlexCoherent DSP-ASIC but has detailed features the CMOS and indium phosphide ICs support.
The techniques highlight the increasing sophistication of coherent technology and an ever tighter coupling between electronics and photonics.
The company has demonstrated the technology, dubbed the Advanced Coherent Toolkit (ACT), on a Telstra 9,000km submarine link spanning the Pacific. In particular, the demonstration used matrix-enhanced polarisation-multiplexed, binary phase-shift keying (PM-BPSK) that enabled the 9,000km span without optical signal regeneration.
Using the ACT is expected to extend the capacity-reach product of links by around 60 percent. Indeed, the latest coherent technology with transmitter-based digital signal processing delivers 25x the capacity-reach of 10-gigabit wavelengths using direct detection, the company says.
Infinera’s latest PIC technology includes polarisation-multiplexed, 8-quadrature amplitude modulation (PM-8QAM) and PM-16QAM schemes. Its current 500-gigabit PIC supports PM-BPSK, PM-3QAM and PM-QPSK. The PIC is expected to support a 1.2-terabit super-channel and using PM-16QAM could deliver 2.4 terabit.
“This [the latest PIC] is beyond 500 gigabit,” confirms Pravin Mahajan, Infinera’s director of product and corporate marketing. “We are talking terabits now.”
Sterling Perrin, senior analyst at Heavy Reading, sees the Infinera announcement as less PIC related and more an indication of the expertise Infinera has been accumulating in areas such as digital signal processing.
Nyquist sub-carriers
Infinera is the first to announce the use of sub-carriers. Instead of modulating the data onto a single carrier, Infinera is using multiple Nyquist sub-carriers spread across a channel.
Using a flexible grid, the sub-carriers span a 37.5GHz-wide channel. In the example Infinera shows, six sub-carriers are used, although the number varies depending on the link. The sub-carriers occupy 35GHz of the band while 2.5GHz is used as a guard band.
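The spectral budget of the six-sub-carrier example works out as follows (simple arithmetic; the per-sub-carrier symbol rate is an approximation that assumes tight Nyquist shaping):

```python
# Spectral budget of the six-sub-carrier example.

channel_ghz = 37.5            # flexible-grid channel width
guard_ghz = 2.5               # guard band
subcarriers = 6

occupied_ghz = channel_ghz - guard_ghz            # 35 GHz carrying the sub-carriers
per_subcarrier_ghz = occupied_ghz / subcarriers   # ~5.8 GHz each

print(occupied_ghz, round(per_subcarrier_ghz, 2))
# With tight Nyquist shaping a sub-carrier's symbol rate is roughly its spectral
# width, so each sub-carrier runs at a far lower baud rate than a single carrier.
```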
“Information you were carrying across one carrier can now be carried over multiple sub-carriers,” says Mahajan. “The benefit is that you can drive this as a lower-baud rate.”
Lowering the baud rate increases the tolerance to non-linear channel impairments experienced during optical transmission. “The electronic compensation is also much less than what you would be doing at a much higher baud rate,” says Abhijit Chitambar, Infinera’s principal product and technology marketing manager.
While the industry is looking to increase overall baud rate to increase capacity carried and reduce cost, the introduction of sub-carriers benefits overall link performance. “You end up with a better Q value,” says Mahajan. The ‘Q’ refers to the Quality Factor, a measure of the transmission’s performance. The Q Factor combines the optical signal-to-noise ratio (OSNR) and the optical bandwidth of the photo-detector, providing a more practical performance measure, says Infinera.
Infinera has not detailed how it implements the sub-carriers. But it appears to be a combination of the transmitter PIC and the digital-to-analogue converter of the coherent DSP-ASIC.
It is not clear what the hardware implications of adopting sub-carriers are and whether the overall DSP processing is reduced, lowering the ASIC’s power consumption. But using sub-carriers promotes parallel processing and that promises chip architectural benefits.
“Without this [sub-carrier] approach you are talking about upping baud rate,” says Mahajan. “We are not going to stop increasing the baud rate, it is more a question of how much you can squeeze with what is available today.”
SD-FEC enhancements
The FlexCoherent DSP also supports enhanced soft-decision forward-error correction (SD-FEC) including the processing of two channels that need not be contiguous.
SD-FEC delivers enhanced performance compared to conventional hard-decision FEC. Hard-decision FEC decides whether a received bit is a 1 or a 0; SD-FEC also uses a confidence measure as to the likelihood of the bit being a 1 or 0. This additional information results in a net coding gain of 2dB compared to hard-decision FEC, benefiting reach and extending the life of submarine links.
By pairing two channels, Infinera shares the FEC codes. By pairing a strong channel with a weak one and sharing the codes, some of the strength of the strong signal can be traded to bolster the weaker one, extending its reach or even allowing for a more advanced modulation scheme to be used.
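One way to picture gain sharing, as a deliberately crude illustration rather than Infinera's actual coding scheme: if one codeword is interleaved across a strong and a weak channel, the decoder effectively sees something close to the averaged channel quality, so the weak channel can ride on the strong one's margin. The threshold and Q values below are invented for illustration:

```python
# Deliberately crude picture of SD-FEC gain sharing between two channels.
# Interleaving one codeword across both means the decoder sees roughly the
# averaged channel quality. Illustration only, not Infinera's actual scheme.

fec_threshold_q_db = 6.0      # assumed Q value needed for error-free decoding

strong_q_db = 8.5             # channel with margin to spare
weak_q_db = 5.0               # channel that would fail on its own

shared_q_db = (strong_q_db + weak_q_db) / 2   # crude average of the pair

for name, q in [("strong alone", strong_q_db),
                ("weak alone", weak_q_db),
                ("pair, codes shared", shared_q_db)]:
    print(f"{name:20s} Q = {q:.2f} dB -> {'OK' if q >= fec_threshold_q_db else 'fails'}")
```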
The SD-FEC can also trade performance with latency. SD-FEC uses as much as a 35 percent overhead and this adds to latency. Trading the two supports those routes where low latency is a priority.
Matrix-enhanced PSK
Infinera has implemented a technique that enhances the performance of PM-BPSK used for the longest transmission distances such as sub-sea links. The matrix-enhancement uses a form of averaging that adds about a decibel of gain. “Any innovation that adds gain to a link, the margin that you give to operators is always welcome,” says Mahajan.
The toolkit also supports the fine-tuning of channel widths. This fine-tuning allows the channel spacing to be tailored for a given link as well as better accommodating the Nyquist sub-carriers.
Product launch
The company has not said when it will launch its terabit PIC and FlexCoherent DSP.
“Infinera is saying it is the first announcing Nyquist sub-carriers, which is true, but they don’t give a roadmap when the product is coming out,” says Heavy Reading’s Perrin. “I suspect that Nokia [Alcatel-Lucent], Ciena and Huawei are all innovating on the same lines.”
There could be a slew of announcements around the time of the OFC show in March, says Perrin: “So Infinera could be first to announce but not necessarily first to market.”
Ovum Q&A: Infinera as an end-to-end systems vendor
Infinera hosted an Insight analyst day on October 6th to highlight its plans now that it has acquired metro equipment player Transmode. Gazettabyte interviewed Ron Kline, principal analyst, intelligent networks at market research firm Ovum, who attended the event.
Q. Infinera’s CEO Tom Fallon referred to this period as a once-in-a-decade transition as metro moves from 10 Gig to 100 Gig. The growth is attributed mainly to the uptake of cloud services and he expects this transition to last for a while. Is this Ovum’s take?
RK: It is a transition but it is more about coherent technology than 10 Gig to 100 Gig. Coherent enables that higher-speed change, which is required because of the amount of bandwidth growth in the metro.
We are going to see metro change from 10 Gig to 100 Gig, much like we saw it change from 2.5 Gig to 10 Gig. Economically, it is going to be more feasible for operators to deploy 100 Gig and get more bang for their buck.
Ten years is always a good number for any transition. If you look at SONET/SDH, it began in the early 1990s and by 2000 was mainstream.
If you look at transitions, you had a ten-year time lag to get from 2.5 Gig to 10 Gig and you had another ten years for the development of 40 Gig, although that was impacted by the optical bubble and the [2008] financial crisis. But when coherent came around, you had a three-year cycle for 100 gigabit. Now you are in the same three-year cycle for 200 and 400 gigabit.
Is 100 Gig the unit of currency? I think all logic tells us it is. But I’m not sure that ends up being the story here.
If you get line systems that are truly open then optical networking becomes commodity-based transponders - the white box phenomenon - then where is the differentiation? It moves into the software realm and that becomes a much more important differentiator.
Infinera’s CEO asserted that technology differentiation has never been more important in this industry. Is this true or only for certain platforms such as for optical networking and core routers?
If you look at Infinera, you would say their chief differentiator is the PIC (photonic integrated circuit) as it has enabled them to do very well. But other players really have not tried it. Huawei does a little but only in the metro and access.
It is true that you need differentiation, particularly for something as specialised as optical networking. The edge has always gone to the company that can innovate quickest. That is how Nortel did it; they were first with 10 gigabit for long haul and dominated the market.
When you look at coherent, the edge has gone to the quickest: Ciena, Alcatel-Lucent, Huawei and to a certain extent Infinera. Then you throw in the PIC and that gives Infinera an edge.
But then, on the flip side, there is this notion of disaggregation. Nobody likes to say it but it is the commoditisation of the technology; that is certainly the way the content providers are going.
If you get line systems that are truly open then optical networking becomes commodity-based transponders - the white box phenomenon - then where is the differentiation? It moves into the software realm and that becomes a much more important differentiator.
I do think differentiation is important; it always is. But I’m not sure how long your advantage is these days.
Infinera argues that the acquisition of Transmode will triple the total available market it can address.
Infinera definitely increases its total available market. They only had an addressable market related to long haul and submarine line terminating equipment. Now this [acquisition of Transmode] really opens the door. They can do metro, access, mobile backhaul; they can do a lot of different things.
We don't necessarily agree with the numbers, though; it is more a doubling of the addressable market.
The rolling annual long-haul backbone global market (3Q 2014 to 2Q 2015) and the submarine line terminating equipment market where they play [pre-Transmode] was $5.2 billion. If you assume the total market of $14.2 billion is addressable then yes it is nearly a tripling but that includes the legacy SONET/SDH and Bandwidth Management segments which are rapidly declining. Nevertheless, Tom’s point is well-taken, adding a further $5.8 billion for the metro and access WDM markets to their total addressable market is significant.
Tom Fallon also said vendor consolidation will continue, and companies will need to have scale because of the very large amounts of R&D needed to drive differentiation. Is scale needed for a greater R&D spend to stay ahead of the competition?
When you respond to an operator’s request-for-proposal, that is where having end-to-end scale helps Infinera; being able to be a one-stop shop for the metro and long haul.
If I’m an operator, I don’t have to get products from several vendors and be the systems integrator.
Infinera announced a new platform for long haul, the XT-500, which is described as a telecom version of its data centre interconnect Cloud Xpress platform. Why do service providers want such a platform, and how does it differ from Cloud Xpress?
Infinera's DTN-X long-haul platform is very high capacity and there are applications where you don't need such a large platform. That is one application.
The other is where you lease space [to house your equipment]. If I am going to lease space, if I have a box that is 2 RU (rack unit) high and can do 500 gigabit point-to-point and I don’t need any cross-connect, then this smaller shelf size makes a lot of sense. I’m just transporting bandwidth.
Cloud Xpress is a scaled-down product for the metro. The XT-500 is carrier-class, e.g. NEBS [Network Equipment-Building System] compliant and can span long-haul distances.
Infinera has also announced the XTC-2. What is the main purpose of this platform?
The platform is a smaller DTN-X variant to serve smaller regions. For example, you can take a 500 gigabit PIC super-channel and slice it up. That enables you to do a hub-and-spoke virtual ring and drop 100 Gig wavelengths at appropriate places. The system uses the new metro PICs introduced in March. At the hub location you use an ePIC that slices up the 500G into individually routable 100G channels, and at the spoke location, where the XTC-2 sits, you use an oPIC-100.
Does the oPIC-100 offer any advantage compared to existing 100 Gig optics?
I don’t think it has a huge edge other than the differentiation you get from a PIC. In fact it might be a deterrent: you have to buy it from Infinera. It is also anti-trend, where the trend is pluggables.
But the hub and spoke architecture is innovative and it will be interesting to see what they do with the integration of PIC technology in Transmode’s gear.
Does acquiring Transmode provide Infinera with an end-to-end networking portfolio, or does it still lack important elements? For example, Ciena acquired Cyan and gained its Blue Planet SDN software.
Transmode has a lot of the different technologies required in the metro: mobile backhaul, synchronisation, they are also working on mobile fronthaul, and their hardware is low power.
Transmode has pretty much everything you need in these smaller platforms. But it is the software piece that they don’t have. Infinera has a strategy that says: we are not going to do this; we are going to be open and others can come in through an interface essentially and run our equipment.
That will certainly work.
But if you take a long view that says that in future technology will be commoditised, then you are in a bad spot because all the value moves to the software and you, as a company, are not investing and driving that software. So, this could be a huge problem going forward.
What are the main challenges Infinera faces?
One challenge, as mentioned, is hardware commoditisation and the issue of software.
Hardware commoditisation can play in Infinera's favour. Infinera should have the lowest-cost solution given its integrated approach, so large hardware volumes are good for them. But if pluggable optics become a requirement, then they could be in trouble with this strategy.
The other is keeping up with the Joneses.
I think the 500 Gig in 100 Gig channels is now not that exciting. The 500 Gig PIC is not creating as much advantage as it did before. Where is the 1.2 terabit PIC? Where is the next version that drives Infinera forward?
And is it still going to be 100 Gig? They are leading me to believe it won't just be. Are they going to have a PIC with 12 channels that are tunable in modulation format to go from 100 to 200 to 400 Gig?
They need to if they want to stay competitive with everyone else because the market is moving to 200 Gig and 400 Gig. Our figures show that over 2,000 multi-rate (QPSK and 16-QAM) ports have been shipped in the last year (3Q 2014 to 2Q 2015). And now you have 8-QAM coming. Infinera’s PIC is going to have to support this.
Infinera’s edge is the PIC but if you don’t keep progressing the PIC, it is no longer an edge.
These are the challenges facing Infinera and it is not that easy to do these things.
Infinera targets the metro cloud

Infinera has styled its latest Cloud Xpress product used to connect data centres as a stackable platform, similar to how servers and storage systems are built. The development is another example of how the rise of the data centre is influencing telecoms.
"There is a drive in the industry that is coming from the data centre world that is starting to slam into the telecom world," says Stuart Elby, Infinera's senior vice president of cloud network strategy and technology.
Cloud Xpress is designed to link data centres up to 200km apart, a market Infinera coins the metro cloud. The two-rack-unit-high (2RU) stackable box features Infinera's 500 Gigabit photonic integrated circuit (PIC) for line side transmission and a total of 500 Gigabit of client side links made up of 10, 40 or 100 Gigabit interfaces. Typically, up to 16 units will be stacked in a rack, providing 8 Terabits of transmission capacity over a fibre.
Cloud Xpress has also been designed with the data centre's stringent power and space requirements in mind. The resulting platform has significantly improved power consumption and density metrics compared to traditional metro networking platforms, claims Infinera.
Metro split
Elby describes how the metro network is evolving into two distinct markets: metro aggregation and metro cloud. Metro aggregation, as the name implies, combines lower speed multi-service traffic from consumers' broadband links and from enterprises into a hub where it is switched onto a network backbone. Metro cloud, in contrast, concerns data centre interconnect: point-to-point links that, for the larger data centres, can total several terabits of capacity.
Cloud Xpress is Infinera's first metro platform that uses its PIC. "We have plans to offer it all the way out to ultra long haul," says Elby. "There are some data centres that need to get tied between continents."
Cloud Xpress is being aimed at several classes of customer: internet content provider companies (or webcos), enterprises, cloud operators and traditional service providers. The primary end users are webcos and enterprises, which is why the platform is designed as a rack-and-stack. "These are not networking companies, they are data centre ones; they think of equipment in the context of the data centre," says Elby.
But Infinera expects telcos will also adopt Cloud Xpress. They need to connect their data centres and link data centres to points-of-presence, especially when increasing amounts of traffic from end users now goes to the cloud. Equally, a business customer may link to a cloud service provider through a colocation point, operated by companies such as Equinix, Rackspace and Verizon Terremark.
"There will be a bleed-over of the use of this product into all these metro segments," says Elby. "But the design point [of Cloud Xpress] was for those that operate data centres more than those that are network providers."
Google has shared that a single internet search query travels on average 2,400km before being resolved, while Facebook has revealed that a single http request generates some 930 server-to-server interactions.
The Magnification Effect
Webcos' services generate significantly more internal traffic than the triggering event, what Elby calls the magnification effect.
Google has shared that a single internet search query travels on average 2,400km before being resolved, while Facebook has revealed that a single http request generates some 930 server-to-server interactions. These servers may be in one data centre or spread across centres.
"It is no longer one byte in, one byte out," says Elby. "The amount of traffic generated inside the network, between data centres, is much greater than the flow of traffic into or out of the data centre." This magnification effect is what is driving the significant bandwidth demand between data centres. "When we talk to the internet content providers, they talk about terabits," says Elby.

Cloud Xpress
Cloud Xpress is already being evaluated by customers and will be generally available from December.
The stackable platform will have three client-side faceplate options: 10 Gig, 40 Gig and 100 Gig. The 10 Gig SFP+ faceplate is the sweet spot, says Elby, and there is also a 40 Gig one, while the 100 Gig is in development. "In the data centre world, we are hearing that they [webcos] are much more interested in the QSFP28 [optical module]."
Infinera says that the Ethernet client signals connect to a simple mapping function IC before being placed onto 100 Gig tributaries. Elby says that Infinera has minimised the latency through the box, to achieve 4.4 microseconds. This is an important requirement for certain data centre operators.
The 500 Gig PIC supports Infinera's 'instant bandwidth' feature. Here, all the 500 Gig super-channel capacity is lit but a user can add 100 Gig increments as required. This avoids having to turn up wavelengths and simplifies adding more capacity when needed.
The Cloud Xpress rack can accommodate 21 stackable units but Elby says 16 will be used typically. On the line side, the 500 Gigabit super-channels are passively multiplexed onto a fibre to achieve 8 Terabits. The platform density of 500 Gig per rack unit (500 Gig client and 500 Gig line side per 2RU box) exceeds that of any competitor's metro platform, says Elby, saving important space in the data centre.
The worst-case power consumption is 130W-per-100 Gig, an improvement on the power consumption of competitors' platforms. This is despite the fact that coherent detection is always used, even for links as short as those between a data centre's buildings. "We have different flavours of the optical engine for different reaches," says Elby. "It [coherent] is just used because it is there."
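Taken together, Elby's figures imply the following rack-level numbers; reading the 130W figure as applying per 100 Gig of line-side capacity is an assumption made here for illustration:

```python
# Rack-level arithmetic using the figures quoted by Elby. Treating the 130W
# figure as per 100G of line-side capacity is an assumption for illustration.

units_per_rack = 16                    # typical; the rack can take up to 21
line_side_per_unit_gbps = 500
power_per_100g_w = 130                 # quoted worst-case figure

rack_capacity_gbps = units_per_rack * line_side_per_unit_gbps        # 8,000 Gbps = 8 Tbps
power_per_unit_w = power_per_100g_w * line_side_per_unit_gbps / 100  # ~650 W per 2RU box
rack_power_w = power_per_unit_w * units_per_rack                     # ~10.4 kW worst case

print(rack_capacity_gbps, power_per_unit_w, rack_power_w)
```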
The reduced power consumption of Cloud Xpress is achieved partly because of Infinera's integrated PIC, and by scrapping Optical Transport Network (OTN) framing and switching which is not required. "There are no extra bells and whistles for things that aren't needed for point-to-point applications," says Elby. The stackable nature of the design, adding units as needed, also helps.
The Cloud Xpress rack can be controlled using either Infinera's management system or software-defined networking (SDN) application programming interfaces (APIs). "It supports the sort of interfaces the SDN community wants: Web 2.0 interfaces, not traditional telco ones."
Infinera is also developing a metro aggregation platform that will support multi-service interfaces and aggregate flows to the hub, a market that it expects to ramp from 2016.
Finisar adds silicon photonics to its technology toolkit
- Finisar revealed its in-house silicon photonics design capability at ECOC
- The company also showed its latest ROADM technologies: a dual wavelength-selective switch and a high-resolution optical channel monitor.
- Also shown was an optical amplifier that spans 400km fibre links

These two complementary technologies [VCSELs and silicon photonics] work well together as we think about the next-generation Ethernet applications.
Rafik Ward
Finisar demonstrated at ECOC its first optical design implemented using silicon photonics. The photonic integrated circuit (PIC) uses a silicon photonics modulator and receiver and was shown operating at 50 Gigabit-per-second.
The light source used with the PIC was a continuous wave distributed feedback (DFB) laser. One Finisar ECOC demonstration showed the eye diagram of the 50 Gig transmitter using non-return-to-zero (NRZ) signalling. Separately, a 40 Gig link using this technology was shown operating error-free over 12km of single mode fibre.
"Finisar, and its fab partner STMicroelectronics, surprised the market with the 50 Gig silicon photonics demonstration,” says Daryl Inniss, practice leader of components at Ovum.
"This, to our knowledge, was the first public demonstration of silicon photonics running at such a high speed," says Rafik Ward, vice president of marketing at Finisar. However, the demonstrations were solely to show the technology's potential. "We are not announcing any new products," he says.
Potential applications for the PIC include the future 50 Gig IEEE Ethernet standard, as well as a possible 40 Gig serial Ethernet standard. "Also next-generation 400 Gig Ethernet and 100 Gig Ethernet using 50 Gig lanes," says Ward. "All these things are being discussed within the IEEE."
Jerry Rawls, co-founder and chairman of Finisar, said in an interview a year ago that the company had not developed any silicon photonics-based products as the technology had not shown any compelling advantage compared to its existing optical technologies.
Now Finisar has decided to reveal its in-house design capability as the technology is at a suitable stage of development to show to the industry. It is also timely, says Ward, given the many topics and applications being discussed in the standards work.
The company sees silicon photonics as part of its technology toolkit available to its engineers as they tackle next-generation module designs.
Finisar unveiled a vertical-cavity surface-emitting laser (VCSEL) operating at 40 Gig at the OFC show held in March. The 40 Gig VCSEL demonstration also used NRZ signalling. IBM has also published a technical paper that used Finisar's VCSEL technology operating at 50 Gbps.
"What we are trying to do is come up with solutions where we can enable a common architecture between the short wave and the long wave optical modules," says Ward. "These two complementary technologies [VCSELs and silicon photonics] work well together as we think about the next-generation Ethernet applications."
Cisco Systems, also a silicon photonics proponent, was quoted in the accompanying Finisar ECOC press release as being 'excited' to see Finisar advancing the development of silicon photonics technology. "Cisco is our biggest customer," says Ward. "We see this as a significant endorsement from a very large user of optical modules." Cisco acquired silicon photonics start-up Lightwire for $271 million in March 2012.
ROADM technologies
Finisar also demonstrated two products for reconfigurable optical add/drop multiplexers (ROADMs): a dual-configuration wavelength-selective switch (WSS) and an optical channel monitor (OCM).
The dual-configuration WSS is suited to route-and-select ROADM architectures.
Two architectures are used for ROADMs: broadcast-and-select and route-and-select. With broadcast-and-select, incoming channels are routed in the various directions using a passive splitter that in effect makes copies of the incoming signal. To route signals in the outgoing direction, a 1xN WSS is used. However, due to the optical losses of the splitters, such an architecture is used for low node-degree applications. For higher-degree nodes, the optical loss becomes a barrier, such that a WSS is also used for the incoming signals, resulting in the route-and-select architecture. A dual-configuration WSS thus benefits a route-and-select ROADM design.
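The loss argument can be made concrete: an ideal 1:N power splitter has an intrinsic split loss of 10 x log10(N) dB before any excess loss, which is why broadcast-and-select becomes impractical at high node degrees (a textbook illustration, not figures from Finisar):

```python
# Intrinsic loss of an ideal 1:N power splitter (excess loss ignored).

import math

for degree in (2, 4, 8, 16):
    split_loss_db = 10 * math.log10(degree)
    print(f"1:{degree:<2d} splitter  ~{split_loss_db:4.1f} dB")

# At degree 8 and above the splitter alone costs roughly 9-12 dB, which is why
# route-and-select replaces the broadcast splitter with a second WSS.
```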
Finisar's WSS module is sufficiently slim that it occupies a single-chassis slot, unlike existing designs that require two. "It enables system designers to free up slots for other applications such as transponder line cards inside their chassis," says Ward.
The dual WSS modules support flexible grid and come in 2x1x20, 2x1x9 and 2x8x12 configurations. "There are some architectures being discussed for add/drop that would utilise the WSS in that [2x8x12] configuration," says Ward.
The ECOC demonstrations included different traffic patterns passing through the WSS, as well as attenuation control and the management of super-channels.
Finisar also showed an accompanying high-resolution OCM that also occupies a single-chassis slot. The OCM can resolve the spectral power of channels as narrow as 6.25GHz. The OCM, a single-channel device, can scan a fibre's C-band in 200ms.
A rule of thumb is that an OCM is used for each WSS. A customer often monitors channels on a single fibre, says Ward, and must pick which fibres to monitor. The OCM is typically connected to each fibre or to an optical switch to scan multiple fibres.
"People are looking to use the spectrum in a fibre in a much more optimised way," says Ward. The advent of flexible grid and super-channels requires a much tighter packing of channels. "So, being able to see and identify all of the key elements of these channels and manage them is going to become more and more difficult," he says, with the issue growing in importance as operators move to line speeds greater than 100 Gig.
Finisar also used the ECOC show to demonstrate repeater-less transmission using an amplifier that can span 400km of fibre. Such an amplifier is used in harsh environments where it is difficult to build amplifier huts. The amplifier can also be used for certain submarine applications known as 'festooning' where the cable follows a coastline and returns to land each time amplification is required. Using such a long-span amplifier reduces the overall hops back to the coast.
Ovum on Infinera's Intelligent Transport Network strategy
Infinera announced that TeliaSonera International Carrier (TSIC) is extending the use of its DTN-X to its European network, having already adopted the platform in the US. Infinera has also outlined the next evolution in its networking strategy, dubbed the Intelligent Transport Network.
Gazettabyte asked Dana Cooperson, vice president and practice leader, and Ron Kline, principal analyst, both in the network infrastructure group at market research firm, Ovum, about the announcement and Infinera's outlined strategy.
What has been announced
TSIC is adding Infinera's DTN-X to boost network capacity in Europe and accommodate its own growing IP traffic. TSIC already has deployed 100 Gig technology in its European network, using a Coriant product. The wholesale operator will sell 100 Gig services, activating capacity using the DTN-X's 'instant bandwidth' feature based on already-lit 100 Gig light paths that make up its 500 Gigabit super-channels.
Meanwhile, Infinera has detailed its Intelligent Transport Network strategy. It extends the company's digital optical network - which performs optical-electrical-optical (OEO) conversion using its 500 Gig photonic integrated circuits (PICs) coupled with OTN (Optical Transport Network) switching - with additional features. These include multi-layer switching - reconfigurable optical add/drop multiplexers (ROADMs) and MPLS (Multi-Protocol Label Switching) - and PICs with terabit capacity.
Q&A with Dana Cooperson and Ron Kline
Q. What is significant about Infinera's Intelligent Transport Network strategy?
Dana C: Infinera is being more public about its longer-term strategy - to 2020 - which includes evolving from its digital optical network messaging to a network that includes multiple layers and types of switching, and more automation. Infinera is not announcing the availability of new functionality now.
Q. Infinera makes much play of its 500 Gig super-channels. More recently it has detailed platform features such as instant bandwidth and Fast Shared Mesh Protection supported in hardware. Are these features giving operators something new, and is Infinera gaining market share as a result?
Dana C: Instant Bandwidth provides a way for Infinera’s operator customers to have their cake and eat it. They can install 500 Gig super-channels ahead of demand, and not pay for each 100 Gig sub-channel until they have a need for that bandwidth. It is a simple process at that point to 'turn on' the next 100 Gig worth of bandwidth within the super-channel.
By installing all five 100 Gig channels at once, the operator can simplify operations - lower opex - and allow quicker time-to-revenue without having to take the capex hit until the bandwidth needs materialise. This is an improvement over the DTN platform, which gave customers the 10x10 Gig architecture to let them pre-position bandwidth before the need for it materialised and save on opex, but at the cost of higher up-front capex than was ideal.
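The pay-as-you-grow argument can be sketched with a toy model: the super-channel hardware is installed once, while per-100 Gig activations are paid for only as demand arrives. All costs in the sketch below are invented placeholders, not Infinera pricing.

```python
# Toy model of 'instant bandwidth': hardware is installed up front, while
# per-sub-channel licences are paid only when capacity is activated.
# All costs are invented placeholders, not Infinera pricing.

class SuperChannel:
    SUB_CHANNELS = 5               # five 100 Gig paths in a 500 Gig super-channel

    def __init__(self, hardware_cost: float, licence_cost: float):
        self.hardware_cost = hardware_cost
        self.licence_cost = licence_cost
        self.active = 0

    def activate_next_100g(self) -> float:
        """Turn on the next 100 Gig sub-channel and return the incremental capex."""
        if self.active >= self.SUB_CHANNELS:
            raise RuntimeError("super-channel fully lit")
        self.active += 1
        return self.licence_cost

sc = SuperChannel(hardware_cost=100.0, licence_cost=20.0)
spend = sc.hardware_cost                 # day-one capex: the line card only
for quarter in range(1, 4):              # light one sub-channel per quarter
    spend += sc.activate_next_100g()
    print(f"after quarter {quarter}: {sc.active * 100} Gig lit, cumulative capex {spend:.0f}")
```

The point of the model is simply that capex tracks activated bandwidth rather than installed bandwidth, which is the benefit Cooperson describes.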
Talking to TSIC confirms that the added flexibility the DTN-X provides has allowed it to win wholesale business from competitors while tying capex more directly to revenue.
Ron K: Although pay-as-you-go capability is available, analysis of 100 Gig shipments to date indicates that most customers are paying for all five up front.
Dana C: I have not directly talked with an Infinera customer that has confirmed the benefit of Fast Shared Mesh Protection, but the feature certainly seems to be of value to customers and prospects. Our research indicates a continued search for better, more efficient mesh protection. Hardware-enabled protection should deliver lower latency (faster restoration).
Ron K: Resiliency and mesh protection are critical requirements if you want to participate in the market. Shared mesh assumes that you have idle protection capacity available in case there is a failure. That is expensive. However, with Infinera’s technology - the PIC and Instant Bandwidth - it is not as difficult.
Restoration is all about speed - how fast can you get the network back up. It is not always milliseconds; sometimes it is half a minute. But during catastrophic failure events such as an earthquake, where a user can lose entire nodes, 30 seconds may not be so bad. Infinera has implemented the switch in hardware, based on a pre-planned map, so it is quicker.
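A minimal sketch of what a pre-planned map buys: restoration becomes a lookup and a switch rather than a path computation performed after the failure. The topology and backup paths below are invented for illustration.

```python
# Minimal sketch of pre-planned mesh restoration: backup paths are computed
# ahead of time, so a failure triggers a table lookup and a switch rather
# than a fresh path computation. Topology and paths are invented.

PRE_PLANNED_BACKUPS = {
    ("A", "D"): ["A", "B", "D"],   # backup for an assumed working path A-C-D
    ("A", "E"): ["A", "B", "E"],   # backup for an assumed working path A-C-E
}

def restore(src: str, dst: str) -> list:
    """On failure of the working path, switch to the pre-planned backup."""
    return PRE_PLANNED_BACKUPS[(src, dst)]

print("restoring A-D via", " -> ".join(restore("A", "D")))
```

Doing the equivalent lookup and switch in hardware, as Kline describes, is what shrinks the restoration time.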
Dana C: As for what impact these capabilities are having on market share, Infinera has climbed to become the No.3 player in 100 Gig DWDM in the three quarters since the DTN-X became available.
They've jumped back up to No.4 globally in backbone WDM/CPO-T (converged packet optical transport) after sinking to sixth when they were losing share for want of a viable 40 Gig solution. They made the right call at that time to focus on 100 Gig systems based on the 500 Gig PIC rather than chase 40 Gig. They are both retaining and expanding business with existing DTN customers - TSIC being one - and picking up new customers.
Ron Kline
Ron K: They are definitely picking up share. However, I'm not sure if they can sustain it. The reason for the share jump is that they are selling 100 Gig five at a time. Remember, most customers elect to pay for all five. That means future sales will lag because customers have pre-positioned the bandwidth.
Looking at the customers is probably a better indicator: Infinera has some 27 customers, maybe 30 by now, which provide a good embedded base. Still, 27 customers is low compared to Ciena, Alcatel-Lucent, Huawei and even Cisco.
Q. When Infinera first announced the DTN-X in 2011, it talked about how it would add MPLS support. Now, in outlining its Intelligent Transport Network strategy, it has yet to announce MPLS support. Do operators not yet need this feature in such platforms and, if not, why?
Dana C: The market is still sorting out exactly what is needed for sub-wavelength switching and where it is needed. Cisco's and Juniper's approaches are very different in the routing world - essentially, a lower-cost MPLS blade for the CRS versus a whole new box in the PTX; there is no right way there.
Within packet-aware optical products, the same is true: What is the right level of integration of OTN versus MPLS? It depends on where you are in the network, what that carrier’s service mix is, and how fast the mix is changing.
Many carriers are still struggling with their rigid organisational structures, and how best to manage products that are optical and packet in equal measure. So I don’t think Infinera is late, they are just reacting to their customers’ priorities and doing other things first.
Ron K: This is the $64,000 question: MPLS versus OTN. I’m not sure how it will eventually play out. I am asking service providers now.
OTN is a carrier protocol developed for carriers by carriers (the replacement for SONET/SDH). They will be the ones to use it because they have multi-service networks and need the transparency OTN provides. Google types and cable operators will not use OTN switching - they will lean towards the label-switched path (LSP) route. Even Tier-1 operators who have both types of networks will most likely maintain separation.
"The trick is to optimise around the requirements that net you the biggest total available market and which maximise your strengths and minimise your weaknesses. You can’t be all things to all carriers."
Q. If Infinera has its digital optical network, why is it now also talking about ROADMs? And does having both benefit operators?
Dana C: Yes, having both benefits operators. From discussions with Infinera's customers, it is true that the digital nodes give them flexibility, but they do introduce added cost. For those nodes where customers have little need to add/drop traffic, a ROADM would be a more cost-efficient option than a node that performs OEO on all the traffic. So, with a ROADM option, customers would have more control over node design.
Q. Infinera talks about its next-generation PICs that will support a terabit and more. After nearly a decade of making PICs, how does Ovum view the significance of the technology?
Dana C: While more vendors are doing photonic integration R&D, and some - Huawei comes to mind - have released PIC-based products, no one has come close to Infinera in what it can do with photonic integration. Having spoken with quite a few of Infinera's customers, I can say they are very happy with the technology, the system, and the support.
Each generation of PIC requires a significant R&D effort, but it does provide differentiation. Infinera has managed to stay focused and implement on time and on spec. I see them as the epitome of a “specialist” vendor. They are of similar size to Coriant and Tellabs, which have seen their fortunes wane, and ADVA Optical Networking. So I would say they are a very good example of what focus and differentiation can do.
Now, is the PIC the only way to approach system architecture? No. As noted before, some Infinera clients have told me that the lack of a ROADM has hurt them in competitive situations, as did the need to pay for all the pre-positioned bandwidth up front (true for the DTN, not the DTN-X).
From my days in product development, I know you have to optimise around a set of requirements, and the trick is to optimise around the requirements that net you the biggest total available market and which maximise your strengths and minimise your weaknesses. You can’t be all things to all carriers.
Q. What is significant about the latest TeliaSonera network win and what does it mean for Coriant?
Dana C: Infinera is announcing an extension of its deployments at TSIC from North America to now include Europe as well. As to what this means for Coriant, TSIC's incumbent supplier in Europe, the answer is not clear cut. The deal gives Infinera an expanded hunting licence and it gives Coriant some cause for worry.
TSIC values both vendors and both will have their place in the European network. TSIC plans to use the vendors in different regions.
I am sure TSIC will try and play each off against the other to get the best price. It is looking for more flexibility and some healthy competition.

