The era of 400G coherent pluggables finally emerges

Pranay Aiya

Part 1: 7nm coherent DSPs, ZR and ZR+

The era of 400-gigabit coherent pluggable modules has moved a step closer with Inphi's announcement of its Canopus coherent digital signal processor (DSP) and its QSFP-DD ColorZ II optical module.

NeoPhotonics has also entered the fray, delivering first samples of its 400-gigabit ClearLight CFP2-DCO module that uses the Canopus DSP.

The ColorZ II and ClearLight modules support the 400ZR OIF standard used to link data centres up to 120km apart. They also support extended modes, known as ZR+, that are not standardised.

ZR+'s modes include 400 gigabits-per-second (Gbps) over distances greater than 400ZR's 120km, and lower data rates over metro-regional and long-haul distances.

The announcements of the Canopus DSP and 400-gigabit pluggable coherent modules highlight the approaches being taken for ZR+. Optical module vendors are aligning around particular merchant DSPs such that interoperability exists but only within each camp.

The first camp involves Inphi and three other module vendors, one being NeoPhotonics. The second camp is based on the OpenZR+ specification that offers interoperability between the DSPs of the merchant players, Acacia Communications and NTT Electronics (NEL). Cisco is in the process of acquiring Acacia.

Market analysts, however, warn that such partial interoperability for ZR+ harms the overall market opportunity for coherent pluggables.

“ZR+ should be interoperable like ZR, and come along with the hard decisions the ZR standard required,” says Andrew Schmitt, founder and directing analyst at research firm Cignal AI.

 

Andrew Schmitt, founder and directing analyst at research firm Cignal AI.

The optical module vendors counter that only with specialist designs – designs that are multi-sourced – can the potential of a coherent DSP be exploited.

Applications 

The advent of 400-gigabit coherent optics within compact client-side form factors is a notable development, says Inphi. “The industry has been waiting for this inflection point of having, for the first time, 400-gigabit coherent pluggables that go on router and switch interfaces,” says Pranay Aiya, vice president of product marketing and applications engineering at Inphi.

“IP over DWDM has never happened; we have all heard about it till the cows come home,” says Aiya.

IP-over-DWDM failed to take off because of the power and space demands of coherent optics, especially when router and switch card slots come at a premium. Using coherent optics on such platforms meant trading off client-side faceplate capacity to fit bulkier coherent optics. This is no longer the case with the advent of QSFP-DD and OSFP coherent modules.

“If you look at the reasons why IP-over-DWDM – coloured optics directly on routers – failed, all of those reasons have changed,” says Schmitt. The industry is moving to open line systems, open network management, and more modular network design.

“All of the traffic is IP, and layer-1 switching and grooming isn’t just unnecessary, it is more expensive than low-feature layer-2 switching,” says Schmitt, adding that operators will use pluggables wherever the lower performance is acceptable. Moreover, this performance gap will narrow with time.

The Canopus DSP also supports ZR+ optical performance and, when used within a CFP2-DCO module with its greater power envelope than OSFP and QSFP-DD, enables metro and long-haul distances, as required by the telecom operators. This is what NeoPhotonics has announced with its ClearLight CFP2-DCO module.

Source: Inphi, Gazettabyte

Canopus

Inphi announced the Canopus DSP last November and revealed a month later that it was sampling its first optical module, the ColorZ II, that uses the Canopus DSP. The ColorZ II is a QSFP-DD pluggable module that supports 400ZR as well as the ZR+ extended modes.

Inphi says that, given the investment required to develop the 7nm CMOS Canopus, it had to address the bulk of the coherent market.

“We were not going after the ultra-long-haul and submarine markets but we wanted pluggables to address 80-90 per cent of the market,” says Aiya.

This meant developing a chip that would support the OIF's 400ZR, 200-gigabit transmission using quadrature phase-shift keying (QPSK) modulation for long haul, and 400 gigabits over 500-600km.

The Canopus DSP also supports probabilistic constellation shaping (PCS), a technology that until now has been confined to the high-end coherent DSPs developed by the leading optical systems vendors.

With probabilistic shaping, not all the constellation points are used. Instead, those with lower energy are favoured; points closer to the origin on a constellation graph. The only time all the constellation points are used is when sending the maximum data rate for a given modulation scheme.

Choosing the inner, lower-energy constellation points more frequently than the outer ones reduces the signal’s average energy. To understand why this helps, note that the symbol error rate at the receiver is dominated by the distance between neighbouring constellation points. Shaping leaves that spacing unchanged while lowering the average energy; but since a constant signal power level is used for DWDM transmission, gain is applied to restore the power, which stretches the constellation and increases the distance between the points. The result is improved optical performance.

Probabilistic shaping also allows an exact number of bits-per-symbol to be sent, even non-integer values.

Vladimir Kozlov

For example, using standard modulation schemes such as 64-QAM with no constellation shaping, 6 bits-per-symbol are sent. Using shaping and being selective as to which constellation points are used, 5.7 bits-per-symbol could be sent, for example. This enables finer control of the sent data, enabling operators to squeeze the maximum data rate to suit the margins on a given fibre link.
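The arithmetic behind such numbers can be sketched with a toy calculation. The snippet below is a simplified illustration, not Inphi's actual algorithm: it applies a Maxwell-Boltzmann distribution to the four amplitude levels of one 64-QAM axis and computes the resulting bits-per-symbol (the entropy of the distribution, plus one uniform sign bit per dimension) and the average symbol energy. The shaping parameter `nu` is an arbitrary illustrative value.

```python
import math

def maxwell_boltzmann_probs(amplitudes, nu):
    """P(a) proportional to exp(-nu * |a|^2): favour low-energy points."""
    weights = [math.exp(-nu * a * a) for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

def entropy_bits(probs):
    """Shannon entropy in bits: the average information per chosen level."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 64-QAM: I and Q each take amplitudes +/-1, +/-3, +/-5, +/-7.
# The sign carries one uniform bit per dimension; shaping acts on the levels.
levels = [1, 3, 5, 7]

# No shaping: 2 bits for the level + 1 sign bit, per dimension = 6 bits/symbol.
uniform = [0.25] * 4
bits_uniform = 2 * (entropy_bits(uniform) + 1)

# Shaped: bias towards the inner (low-energy) levels.
nu = 0.02  # illustrative shaping strength
shaped = maxwell_boltzmann_probs(levels, nu)
bits_shaped = 2 * (entropy_bits(shaped) + 1)   # ~5.8 bits/symbol

# Average symbol energy (I and Q contributions).
energy_uniform = 2 * sum(p * a * a for p, a in zip(uniform, levels))
energy_shaped = 2 * sum(p * a * a for p, a in zip(shaped, levels))
```

Increasing `nu` biases the inner points more strongly, trading bits-per-symbol for a lower average energy; sweeping it gives the fine, non-integer rate control described above.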

“This is the first time a DSP with probabilistic shaping has been available in a size and power that enables pluggables,” says Aiya.

The resulting optical performance using the Canopus is up to 1,500km for 300Gbps signals and up to 2,000km for 200Gbps transmissions (see Table above). As for baud rates, the DSP spans just over 30 to the mid-60s gigabaud.

Inphi also claims a 75 per cent reduction in power consumption of the Canopus compared to 16nm CMOS DSPs found in larger, 4×5-inch modules.

Several factors account for the sharp power reduction: the design of the chip's architecture and physical layout, and the use of 7nm CMOS. The Canopus uses functional blocks that extend the reach, and these can be turned off to reduce power consumption when lower optical performance is acceptable.

The architectural improvements and the physical layout account for half of the overall power savings, says Aiya, with the rest coming from using a 7nm CMOS.

The result is a DSP a third the size of 16nm DSPs. “It [pluggables] requires the DSP to be very small; it’s not a paperweight anymore,” says Aiya.

400ZR and ZR+

The main challenge for the merchant coherent DSP camps is the several, much larger 400ZR eco-systems from Ciena, Cisco and Huawei.

“Each one of these eco-systems will be larger than the total merchant market of 400ZR,” says Vladimir Kozlov, CEO and founder of LightCounting. The system vendors will make sure that their products offer something extra if plugged into their equipment while maintaining interoperability. “This could be some simple AI-like features monitoring the link performance and warning customers of poor operation of devices on the other side of the link if these are made by another supplier,” says Kozlov.

LightCounting says that ZR+ units will be half to a third of the number of 400ZR units shipped. However, each ZR+ module will command a higher selling price.

Regarding the ZR+ camps, one standardisation effort is OpenZR+, which adopts the open forward-error correction (oFEC) scheme of the OpenROADM MSA, supports multiplexing of 100 Gigabit Ethernet (GbE) and 200GbE client signals, and offers different line rates – 100-400Gbps – to achieve greater reaches.

The backers of OpenZR+ include the two merchant DSP vendors, Acacia and NEL, as well as Fujitsu Optical Components, Lumentum, and Juniper Networks.

The second ZR+ camp includes four module-makers that are adopting the Canopus: Inphi, Molex Optoelectronics, NeoPhotonics and an unnamed fourth company. According to Schmitt, the unnamed module maker is II-VI. II-VI declined to comment when asked to confirm.

Schmitt argues that ZR+ should be interoperable, just like 400ZR. “I think NEL, Acacia, and Inphi should have an offsite and figure this out,” he says. “These three companies are in a position to nail down the specs and create a large, disruptive coherent pluggable market.”

Simon Stanley

Simon Stanley, founder and principal consultant at Earlswood Marketing Limited, expects several ZR+ solutions to emerge but that the industry will settle on a common approach. “You will initially see both ZR+ and OpenZR+,” says Stanley. “ZR+ will be specific to each operator but over time I expect OpenZR+ or something similar to become the standard solution.”

But the optical vendors stress the importance of offering differentiated designs to exploit the coherent DSP’s full potential. And maximising a module’s optical performance is something operators want.

“We are all for standards where it makes sense and where customers want it,” says Inphi's Aiya. “But for customers that require the best performance, we are going to offer them an ecosystem around this DSP.”

“It is always a trade-off,” adds Ferris Lipscomb, vice president of marketing at NeoPhotonics. “More specialised designs that aren’t interoperable can squeeze more performance out; interoperable has to be the lowest common denominator.”

Next-generation merchant DSPs

The next stage in coherent merchant DSP development is to use a 5nm CMOS process, says Inphi. Such a state-of-the-art [CMOS] process will be needed to double capacity again while keeping the power consumption constant.

The optical performance of a 5nm coherent DSP in a pluggable will approach that of high-end coherent designs. “It [the optical performance of the two categories] is converging,” says Aiya.

However, demand for such a device supporting 800 gigabits will take time to develop. It has taken several years for demand for 400-gigabit client-side optics to ramp, and there will be a delay before telecom operators need 400-gigabit wavelengths in volume, says Inphi.

LightCounting points out that it will take Inphi and its ecosystem of suppliers at least a year to debug their products and demonstrate interoperability.

“And keep in mind that we are talking about the industry that is changing very slowly,” concludes Kozlov.

Ferris Lipscomb, vice president of marketing at NeoPhotonics

Books in 2019

Gazettabyte asks industry figures each year to cite the memorable books they have read. These include fiction, non-fiction and work-related titles. 

Here are the choices of Cisco’s Bill Gartner, Sylvie Menezo of silicon photonics start-up, Scintil Photonics, and Andrew Schmitt, directing analyst at Cignal AI.  

Bill Gartner, Senior Vice President and General Manager, Cisco Optical Systems and Optics.

At the top of my list is The Gene: An Intimate History, by Siddhartha Mukherjee. Mukherjee does an amazing job of telling the story of the gene, providing historical context dating back to pre-Darwin times through to modern advances in gene therapy. The material is complex but he is great at describing the evolution of thinking about genes and progress in the genome project in layman’s terms.

The book leaves me in awe of how much has been accomplished, especially in the past 20 years, and yet how much more we have to learn about this fascinating topic, how progress in this area might be applied to solve some of medicine’s most challenging problems, and the moral dilemma that we confront as we think about altering nature’s work.

The Billionaire Who Wasn’t: How Chuck Feeney Secretly Made and Gave Away a Fortune by Conor O’Clery is an amazing story of a man who went from rags to riches, built one of the most profitable private businesses in history (Duty-Free Shops), and earned billions. He then gave it all away and did so anonymously. He lived frugally and was adamant that his contributions be kept secret. It is an inspiring story of an American hero who touched the lives of millions who will never know.

Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time by Dava Sobel includes a foreword by Neil Armstrong. I am fascinated by stories that highlight how one individual persists in a vision and has a major impact on the world. In the 18th century, it was common for entire fleets of ships to run aground or get lost as navigation techniques were primitive.

Latitude was relatively straightforward, based on the angle of the sun relative to the horizon (and the date), but determining longitudinal position was often guesswork. After several disasters, including one where over 200 sailors were killed, the British government established a prize for the solution.

This is a fantastic story of a relatively unknown watchmaker who single-handedly solved the problem and then persuaded the sceptics that his chronometer was superior to any available method.

Lastly, I read Franklin and Winston: An Intimate Portrait of an Epic Friendship by Jon Meacham. This is a fantastic story of the intimate and at times stormy relationship between FDR and Winston Churchill. The story, unlike many WWII narratives, is told from the perspective of their interactions. FDR and Churchill were magnificent leaders, each of whom took a principled stand against Nazism and Fascism. It is also frightening to contemplate the course history may have taken had lesser leaders been in place.

Sylvie Menezo, CEO and CTO of Scintil Photonics.

The book I recommend is a novel I read this summer, La Tresse (The Braid) by Laetitia Colombani. It is a tale of three women, each from a different continent and experiencing different living conditions, yet their lives happen to be connected by something at the end of the book. To me, all three are very beautiful and strong women figures, moved by a ‘different something’ deep inside them, and that is what makes them beautiful!

Andrew Schmitt, founder and directing analyst at Cignal AI

It was a good reading year for me. Starting with fiction, my overall pick of the year is the Three-Body Problem series by Cixin Liu, a science fiction story of epic scale that stretches from the Cultural Revolution in China into the distant future.

It was written in Chinese and as a result, the style, prose and cultural perspective are different in a refreshing way. This series is right up there with Dune, Asimov and all the sci-fi greats. It is a must-read if that is your thing.

Martha Wells turned out more short novels to conclude the Murderbot Diaries, a series that I reviewed in 2018. I also read Neal Stephenson’s FALL; or, Dodge in Hell: A Novel this year. He’s maintained a steady production of books but I don’t think his latest books are as good as his archive (Snow Crash, Cryptonomicon, others). FALL was very disappointing, particularly the second half – I don’t recommend it. Read the archive instead.

It was an intense non-fiction year, so I’ll hit the good stuff that I strongly recommend.

I picked up Nobody Wants to Read Your Sh*t: And Other Tough-Love Truths to Make You a Better Writer by Steven Pressfield on a Twitter recommendation and it resonated with me. So much written market research lacks respect and appreciation for the client’s time, and Pressfield shares simple, useful tips to make your reader care about what you are writing. Anyone who writes for others should read this, and it is quick.

This book leads me to one of Pressfield’s big hits, Gates of Fire: An Epic Novel of the Battle of Thermopylae, a narrative history of the Spartans and the battle. As an engineer, I never had the time – and frankly, the interest – to study Ancient Greece. Pressfield vividly brings Sparta and Greece to life and recounts the events leading up to the battle of the famous “300”. A fantastic book.

My son had to read Midnight in Chernobyl: The Untold Story of the World’s Greatest Nuclear Disaster by Adam Higginbotham over the summer for high school.

We read it together; a highly recommended thing to do with your teenagers. Better yet, after the book, we were treated to the excellent “Chernobyl” drama on HBO. If you liked the HBO series, definitely read the book as it tells the story in a comprehensive and detailed way without artistic license. The size, scale, and sacrifices endured by the Soviets to contain the disaster are incredible. The organisational ineptitude before and right after the event is horrifying. The same top-down decision hierarchy that caused the problem was paradoxically the only way to get it cleaned up.

My last recommendation is Shoe Dog: A Memoir – by the Creator of Nike, by Phil Knight. It recounts the genesis of the company as a supplier of track shoes made in Japan following WWII as the country rapidly emerged as an export powerhouse. It is a book about post-war Japan, raw entrepreneurship, and building what at the time was a new sales and marketing model combining athletics and fashion. One of the better business books I’ve read.



ECOC 2019 industry reflections II

Gazettabyte asked industry figures who attended the ECOC show, held in Dublin, for their thoughts. In particular, what developments and trends they noted, what they learned and what, if anything, surprised them. Input from II-VI, Ciena, Fujitsu Optical Components and Acacia Communications. The second and final part.

State of play for 400 Gigabit Ethernet (GbE). Form factors ‘right-sized’ for faceplate densities

Sanjai Parthasarathi, chief marketing officer at II-VI

One new theme at ECOC is the demand for lower-cost 100-gigabit coherent transceivers for deployment in optical access for wireless access and fibre-deep cable TV. Such demand would significantly expand the market.

It was noteworthy at the show how 5G has become a significant factor influencing the wireless access market, with the potential for wide deployment of dense wavelength-division multiplexing (DWDM) technology with wavelength switching and tuning functions, not only in traditional network architectures but interesting new ones too.

This could drive significant demand for low-cost wavelength-selective switch (WSS) modules, tunable transceivers and 100-gigabit coherent transceivers, which is exciting.

As for surprises at the show, ECOC validated the view that developments in digital signal processor (DSP) technology for transceivers have accelerated to the point of having caught up with the state-of-the-art in photolithography, previously the province of DSPs for consumer electronics, high-performance computing and processors.

DSPs for next-generation transceivers are increasingly leveraging 7nm CMOS.

Patricia Bower, senior manager of product marketing at Ciena

A key talking point at ECOC was the state of play for 400 Gigabit Ethernet (GbE). Form factors ‘right-sized’ for faceplate densities – QSFP-DD, for example – and developments in short-range optical signalling supporting 100 gigabit-per-lambda are enablers for this next-generation client rate.

Market projections indicate a faster ramp for 400GbE than 100GbE achieved in previous years, with 400GbE client-side modules shipping in 2020 and broad, market-wide volumes ramping in 2021.

In parallel, 400-gigabit DWDM is projected to grow very strongly. Starting in early 2020, deployments of 800 gigabit-capacity DWDM systems will enable the industry to efficiently transport 400GbE anywhere in the network, including transoceanic propagation.

Following this, 400ZR will enable 400 gigabits-per-second over short point-to-point, single-span data centre interconnect links using coherent technology in the same compact QSFP-DD mechanical forms which will go hand-in-hand with the volume uptake of 400GbE.

Co-packaged optics

Discussions continued around approaches to package optics and electronics in switch-fabric ICs.

The consensus was that the approach will be mainstream in future 51.2 terabits-per-second (Tbps) switch chips, a couple of iterations from where we are today.

I learned more about the progress supporting wafer-scale manufacturability of co-packaged switch cores and optical input/outputs, including on-chip laser integration.

Consideration of the relative trade-offs among power dissipation, cost, thermal management, and reliability compared with off-chip lasers is key. Electrical signalling also remains key in this approach. Even when moving data off a chip package optically, electrical intra-chip signalling to the switching core is still needed for what is effectively a multi-chip module or modular system-on-chip.

Companies with key design skills in electrical and optical components will be best placed to address such designs.

I wasn’t surprised but was pleased to see the progress by the industry on 400ZR demonstrated at the OIF booth. Various companies showed IC-TROSA electro-optic samples, which are a contributing element for a 400ZR solution.

Mechanical mock-ups of the intended module packages (QSFP-DD and OSFP) were also shown as well as a mock-up of a switch-router platform to highlight 400ZR integration.

This level of progress is in line with the expected ramp-up of 400ZR in 2021.

Yukiharu Fuse, chief marketing officer, vice president/general manager, business strategy division, Fujitsu Optical Components Limited

Several items were of interest at ECOC, but two I’d highlight are 400-gigabit coherent pluggable optics and XR Optics.

Vendors demonstrated the progress being made in the development of 400-gigabit coherent pluggable transceivers.

The key to their success is the development of a low-power coherent digital signal processor (DSP) that fits within a QSFP-DD or OSFP module, and this now seems feasible.

With this innovation, data centre operators will be able to install these modules in the slots used for client Ethernet, allowing the operators to support data centre interconnect without the need for transport gear.

The OIF-standardised 400ZR implementation will support linking data centres up to 120km apart using interoperable pluggable modules. The data centre operators also want longer reaches than ZR offers, even if the power consumption of the transceiver inevitably goes up.

To address this, NEL and Acacia together with Lumentum and Fujitsu Optical Components introduced OpenZR+ to support longer distance links for data centre interconnect and other applications.

This will act as a potential de-facto standard with multi-source transceivers to support distances beyond ZR.

Such a development will be a big step for the data center operators, enabling wider coverage without the need for transport equipment.

XR Optics

Infinera introduced at ECOC a new concept of point-to-multipoint communications for access and aggregation networks, dubbed XR Optics. Using Nyquist subcarriers, XR Optics can distribute traffic to up to 16 points according to their bandwidth requirements.

This concept may create a new market for coherent optics that until now has focussed on high-capacity, point-to-point applications.

Infinera introduced at ECOC a technology not a product. It will be interesting to see how the technology evolves into products and the support it gets with the goal of creating a multi-source supply chain.

I’m curious about the concept, though, with the key being how to achieve low-cost coherent optics needed for access and aggregation networks. I will watch this development with interest.

Tom Williams, vice president of marketing, Acacia Communications

We are seeing a trend toward increasing use of silicon photonics in client and transport optics. There are multiple approaches in the industry to address the challenges of power, size and cost, but silicon photonics has become established as an important technology for a variety of applications.

We were also happy to see the positive feedback for the OpenZR+ solution that we, in collaboration with several other companies, defined at the show.

I’ve participated in the 400ZR effort and the CableLabs project to define a coherent interface in access networks, so I was interested to learn more about the Infinera XR optics proposal. I’m still trying to understand the details, but it’s always interesting to see a different approach to solving a technical challenge.

As for unexpected developments at the show, I was surprised how difficult it can be to get a taxi in Dublin when Ariana Grande is in town!


Deutsche Telekom’s edge for cloud gaming

Market research firms vary in their estimates but the global video gaming market was of the order of $138 billion in 2018

Deutsche Telekom believes its network gives it an edge in the emerging game-streaming market.

The operator is trialling a cloud-based service similar to the likes of Google and Microsoft.

The operator already offers IP TV and music as part of its entertainment offerings and will decide if gaming will be the third component. The operator will launch its MagentaGaming cloud-based service in 2020.

“Since 2017, the biggest market in entertainment is gaming,” says Dominik Lauf, project lead, MagentaGaming at Deutsche Telekom.

Market research firms vary in their estimates but the global video gaming market was of the order of $138 billion in 2018, while the theatrical and home entertainment market totalled just under $100 billion for the same period.

Cloud Gaming

In Germany, half the population play video games, with half of those being young adults. The gaming market represents a valuable opportunity to ‘renew the brand’ with a younger audience.

Until now, a user’s gaming experience has been determined by the video-processing capabilities of their gaming console or PC graphics card.

The advent of cloud-based gaming changes all that. Not only can a user access the latest game titles via the cloud, they no longer need to own state-of-the-art equipment for the ultimate gaming experience. Instead, video processing for gaming is performed in the cloud. All the user needs is a display. Any display: a smartphone, tablet, PC or TV.

Lauf says hardcore gamers typically spend over €1,000 each year on equipment, while some 45 per cent of all gamers can’t play the latest games at the highest display quality because their hardware is not up to the task.  “[With cloud gaming,] the entry barrier of hardware no longer exists for customers,” says Lauf.

However, for game-streaming to work, the onus is on the service provider to deploy hardware – dedicated servers hosting high-end graphics processing units (GPUs) – and ensure that the game-streaming traffic is delivered efficiently over the network.

Deutsche Telekom points out that while buffering is used for video or music streaming services, this isn’t an option with gaming given its real-time nature.

“Latency and bandwidth play a pivotal role within gaming,” says Lauf. “Connectivity counts here.”

Networking demands

Deutsche Telekom’s game-streaming service requires a 50 megabit-per-second (Mbps) broadband connection.

Gaming traffic requires between 30 and 40Mbps of capacity to ensure full graphics quality. This is over four times the bandwidth required for a video stream. “We can lower the bandwidth required [for gaming] but you will notice it when using a bigger screen,” says Lauf.

The operator is testing the bandwidth requirements its mobile network must deliver to ensure the required gaming quality.

“With 5G, the bandwidth is more or less there, but bandwidth is not the only point, maybe the more important topic is latency,” says Lauf. The operator has recently launched 5G in five cities in Germany.

An end-to-end latency of 50-80ms ensures a smooth gaming experience. A latency of 100ms decreases an individual’s game-play while a latency of 120ms noticeably impacts responsiveness.

Deutsche Telekom’s fixed network delivers a sub-50ms latency. However, the home environment must also be factored in: the home’s wireless network and signal coverage, as well as other electronic devices in the home, all can influence gaming performance.

And it is not just latency that counts but jitter: the volatility of the latency. “The average may be below 50ms but if there are peaks at 100ms, it will impact your gameplay,” says Lauf.

Moreover, the latency and jitter performance should ideally be consistent across the network; otherwise, it can give an unfair advantage to select users in multi-player games.
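The distinction Lauf draws between average latency and jitter can be sketched in a few lines. The snippet below is a toy illustration, not Deutsche Telekom's actual monitoring logic, and the "smooth" criterion it applies is hypothetical: it flags a link whose mean latency fits the budget but whose peaks would still disturb gameplay.

```python
import statistics

def link_report(latencies_ms, budget_ms=50.0):
    """Summarise latency samples: the mean matters, but so do the peaks."""
    mean = statistics.mean(latencies_ms)
    jitter = statistics.pstdev(latencies_ms)  # volatility of the latency
    peak = max(latencies_ms)
    return {
        "mean_ms": mean,
        "jitter_ms": jitter,
        "peak_ms": peak,
        # Hypothetical acceptance rule: average within budget AND no
        # spikes beyond twice the budget.
        "smooth": mean <= budget_ms and peak <= 2 * budget_ms,
    }

# Two links with the same average latency but very different jitter:
steady = [45, 46, 44, 45, 47, 45]
spiky = [20, 25, 110, 25, 72, 20]
```

Both sample sets average roughly 45ms, comfortably inside a 50ms budget, yet only the steady link passes: the spiky one has the 100ms-plus peaks that, as Lauf notes, impact gameplay even when the average looks fine.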

5G and edge computing

The MagentaGaming trial is also being used to test how 5G and edge computing – where the servers and GPUs are hosted at the network edge – can deliver a sufficiently low jitter.

5G will provide more bandwidth than the operator’s existing LTE mobile network. This will benefit not only individual players but also the size of group games. At present, hundreds can play each other in a game but this number will grow, says Lauf.

5G will also enable new features, such as network slicing, that will benefit low jitter, says Lauf.

“‘Edge’ is a fuzzy term,” says Lauf. “But we will build our servers in a decentralised way to ensure latency does not affect gamers.”

MobiledgeX, a Deutsche Telekom spin-out that focusses on cloud infrastructure, operates four data centres in Germany and is also testing GPUs. However, for the test phase of MagentaGaming, Deutsche Telekom is deploying its servers and GPUs at the network edge.

Lauf says the complete architecture must be designed with latency in mind: “There are a lot of components that can increase latency.” Not only the network contributes, but also the GPU and storage run times.

Deploying servers and GPUs at the network edge requires investment. And given that cloud gaming is still being trialled, it is too early to assess gaming’s business success.

So how does Deutsche Telekom justify investing in edge infrastructure and will the edge be used for other tasks as well as gaming?

“This is also a focus of our trial, to see when are the server peak times in terms of usage,” says Lauf. “There are capabilities for other use cases on the same GPUs.”

The operator is considering using the GPUs for artificial intelligence tasks.

Cloud-gaming competition

Microsoft and Google are also pursuing game-streaming services.

Microsoft is about to launch a preview of xCloud – its Xbox cloud-based service – and has been accepting registrations in certain countries.

Microsoft, too, recognises the importance of network latency and is working with operators such as SK Telecom in South Korea and Vodafone UK. It has also signed an agreement with T-Mobile, the US operator arm of Deutsche Telekom.

Meanwhile, Google is preparing its Stadia service which will launch next month.

Lauf believes Deutsche Telekom has an edge despite such hyperscaler competition.

“We are sure that with our high-quality network – our edge and 5G latency capabilities, and our last mile to our customer – we have an advantage compared to the hyperscalers given how latency and bandwidth count,” he says.

Gaming content also matters and the operator says it is in discussions with gaming developers that welcome the fact that there are alternatives to the hyperscalers’ platforms.

“We are quite sure we can play a role,” concludes Lauf. “Even if we are not on the same global level of a Google, we will have a right to play in this business.”

Game on!


ECOC 2019 industry reflections

Gazettabyte is asking industry figures who attended the recent ECOC show, held in Dublin, for their thoughts. In particular, what developments and trends they noted, what they learned and what, if anything, surprised them. Here are the first responses from Huawei, OFS Fitel and ADVA.

James Wangyin, senior product expert, access and transmission product line at Huawei  

At ECOC, one technology that is becoming a hot topic is machine learning. There is much work going on to model devices and perform optimisation at the system level.

And while there was much discussion about 400-gigabit and 800-gigabit coherent optical transmissions, 200-gigabit will continue to be the mainstream speed for the coming three-to-five years.

That is because, despite the high-speed ports, most networks are not being run at the highest speed. More time is also needed for 400-gigabit interfaces to mature before massive deployment starts.

BT and China Telecom both showed excellent results running 200-gigabit transmissions in their networks for distances over 1,000km.

We are seeing this with our shipments; we are experiencing a threefold year-on-year growth in 200-gigabit ports.

Another topic confirmed at ECOC is that fibre is a must for 5G. People previously expressed concern that 5G would shrink the investment of fibre but many carriers and vendors now agree that 5G will boost the need for fibre networks.

As for surprises at the show, the main discussion seems to have shifted from high-speed optics to system-level or device-level optimisation using machine learning.

Many people are also exploring new applications based on the fibre network.

For example, at a workshop to discuss new applications beyond 5G, a speaker from Orange talked about extending fibre connections to each room, and even to desktops and other devices. Other operators and systems vendors expressed similar ideas.

Verizon discussed, in another market focus talk, its monitoring of traffic and the speed of cars using fibre deployed alongside roads. This is quite impressive.

We are also seeing the trend of using fibre and 5G to create a fully-connected world.

Such applications will likely bring new opportunities to the optical industry.

Two other items to note.

The Next Generation Optical Transport Network Forum (NGOF) presented updates on optical technologies in China. Such technologies include next-generation OTN standardisation, the transition to 200 gigabits, mobile transport and the deployment of ROADMs. The NGOF also seeks more interaction with the global community.

The 800G Pluggable MSA was also present at ECOC. The MSA is keen for more companies to join.

Daryl Inniss, director, new business development at OFS Fitel

There were many discussions about co-packaged optics, regarding the growth trends in computing and the technology’s use in the communications market.

This is a story about high-bandwidth interfaces: not just linking equipment but also the technology’s use for on-board optical interconnects and chip-to-chip communications, such as linking graphics processing units (GPUs).

I learned that HPE has developed a memory-centric computing system that significantly improves processing speed and workload capacity. This may not be news but it was new to me. Moreover, HPE is using silicon photonics in its system, including a quantum dot comb laser, a technology that will come to others in time.

As for surprises, there was notable growing interest in spatial-division multiplexing (SDM). The timescale may be long term but the conversations and debate were lively. Two areas to watch are proprietary applications such as very short interconnects in a supercomputer, and undersea networks where the hyperscalers quickly consume the capacity on any newly commissioned link.

Lastly, another topic of note was the use of spectrum outside the C-band and extending the C-band itself to increase the data-carrying capacity of the fibre.

Jörg-Peter Elbers, senior vice president, advanced technology, ADVA

Co-packaging optics with electronics is gaining momentum as the industry moves to higher and higher silicon throughput. The advent of 51.2 terabit-per-second (Tbps) top-of-rack switches looks like a good interception point. Microsoft and Facebook also have a co-packaged optics collaboration initiative.

As for coherent, quo vadis? Well, one direction is higher speeds and feeds. What will the next symbol rate be for coherent after 60-70 gigabaud (GBd)? A half-step or a full-step; incremental or leap-frogging? The growing consensus is a full-step: 120-140 GBd.

Another direction for coherent is new applications such as access/aggregation networks. Yet cost, power and footprint challenges will have to be solved.

Advanced optical packaging, an example being the OIF IC-TROSA project, as well as compact silicon photonics and next-gen coherent DSPs are all critical elements here.

A further issue arising from ECOC is whether optical networks need to deliver more than just bandwidth.

Latency is becoming increasingly important to address time-sensitive applications as well as for advanced radio technologies such as 5G and beyond.

A further application is the delivery of precise timing information (frequency, time of day, phase synchronisation), with the existing fibre infrastructure used to deliver additional services.

An interesting new field is the use of the communication infrastructure for sensing, with Glenn Wellbrock giving a presentation on Verizon’s work at the Market Focus.

Other topics of note include innovation in fibres and optics for 5G.

Within spatial-division multiplexing, interest in multi-core and multi-mode fibre applications has weakened. Instead, more parallel fibres operating in the linear regime appear to be an energy-efficient space-division multiplexing alternative.

Hollow-core fibres are also making progress, offering not only lower latencies but lower nonlinearity compared to standard fibres.

As for optics for 5G, what is clear is that 5G requires more bandwidth and more intelligence at the edge. How network solutions will look will depend on fibre availability and the associated cost.

With eCPRI, Ethernet is becoming the convergence protocol for 5G transport, while grey and WDM (G.metro) optics, as well as next-generation PON, are being discussed as optical underlay options. Grey and WDM optics offer unbundling at the fibre/virtual-fibre level whereas (TDM-)PON requires bitstream access.

Another observation is that radio “x-haul” [‘x’ being front, mid or back] will continue to play an important role for locations where fibre is nonexistent and uneconomical.


Lumentum on ROADM growth, ZR+, and 800G

Source: Lumentum

CTO interview: Brandon Collings

  • The ROADM market is experiencing a period of sustained growth
  • The Open ROADM MSA continues to advance and expand its scope
  • ZR+ coherent modules will support some interoperability to avoid becoming siloed but optical performance differentiation remains key


Brandon Collings gave a Market Focus talk at the recent ECOC show in Dublin, where he explained why it is a good time to be in the reconfigurable optical add-drop multiplexer (ROADM) business.

“Quantities are growing substantially and it is not one reason but a multitude of reasons,” says Collings. The CTO of Lumentum reckons the growth started some 18-24 months ago.

ROADM markets

Lumentum highlights three factors fuelling the demand for ROADM components.

The first is the emergence of markets such as China and India that previously did not use ROADMs.

“China has pretty universally adopted ROADMs going forward,” says Collings. Previously, Optical Transport Network (OTN) point-to-point links and large OTN switches had been used. But ongoing traffic growth means this solution alone is not sustainable, both in terms of the switch capacity and the number of optical transceivers required.

“The bandwidth needed for these OTN switches is scaling beyond the rational use of optical-electrical-optical (OEO) node configuration,” says Collings. “You need 50 to 300 terabits of OTN [switch capacity] surrounded by the equivalent amount of optical transceivers, and that is not economical.”

The Chinese service providers have adopted a hybrid ROADM and OTN network architecture. The ROADMs perform optical bypass – passing on lightpaths destined for other nodes in the network – to reduce the optical transceivers and OTN switch capacity needed.
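The economics of optical bypass can be sketched with a toy model. The figures and the simplified node model below are this article's illustration, not Lumentum's own accounting:

```python
# Toy model of a transit node: in a pure OEO design every through-wavelength
# is terminated, electrically switched and regenerated; with a ROADM the
# through traffic bypasses the node optically and only local traffic is
# terminated. All figures are illustrative.

def oeo_node(through_tbps, add_drop_tbps):
    """Capacity (Tbps) of OTN switching and of optical transceivers consumed
    when ALL traffic, including through traffic, is converted OEO."""
    switch = through_tbps + add_drop_tbps
    transceivers = 2 * through_tbps + add_drop_tbps  # through traffic in AND back out
    return switch, transceivers

def roadm_node(through_tbps, add_drop_tbps):
    """Same node with optical bypass: only add/drop traffic is terminated."""
    return add_drop_tbps, add_drop_tbps

# A node passing 250 Tbps through, with 50 Tbps added/dropped locally:
print(oeo_node(250, 50))    # (300, 550): the switch 'surrounded' by transceivers
print(roadm_node(250, 50))  # (50, 50): bypass removes most of the burden
```

With these illustrative numbers the OEO node sits at the top of Collings' 50-300 terabit range, while bypass cuts both the switch capacity and the transceiver count by an order of magnitude.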

The network operators in India, in contrast, are using ROADMs to cope with the many fibre cuts they experience. The ROADMs are used to restore the network by rerouting traffic around the faults.

A second market magnifier is how modern ROADM networks use more wavelength-selective switches (WSSes). Both colourless and directionless (CD) ROADMs, and colourless, directionless and contentionless (CDC) ROADMs use more WSSes per node (see diagram above).

Such ROADMs also use more advanced WSS designs. Using an MxN WSS for the multicast switch in a route-and-select CDC ROADM, for example, delivers system benefits especially when adding and dropping wider optical channels that are starting to be used. Collings says Lumentum’s own MxN WSS is now close to volume manufacturing.

The third factor fuelling ROADM growth is the ongoing demand for more capacity. “Every time you fill a fibre, you typically use another degree in your [ROADM] node and light up a second fibre to grow capacity,” says Collings.

Operators with limited fibre are exploiting the fibre’s spectrum by using the C-band and L-band to grow capacity. This, too, requires more WSSes per node.
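The WSS multiplication effect can be shown numerically. The tally below assumes a route-and-select architecture with one WSS pair per degree plus two add/drop WSSes; real node designs vary by vendor, so treat the figures as illustrative only:

```python
def wss_count(degree, bands=1, add_drop_wss=2):
    """Rough WSS tally for a route-and-select ROADM node: one WSS on the
    ingress side and one on the egress side of each degree, plus add/drop
    structures (taken here as two WSSes). Operating a second band (L-band
    alongside C-band) duplicates the whole arrangement."""
    return bands * (2 * degree + add_drop_wss)

print(wss_count(degree=4))           # 4-degree node, C-band only: 10
print(wss_count(degree=8))           # lighting more fibres adds degrees: 18
print(wss_count(degree=4, bands=2))  # adding the L-band doubles it: 20
```

Each growth factor – more degrees, more bands, richer CD/CDC add/drop – multiplies rather than adds, which is why the component demand compounds.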

“All of these growth factors are happening simultaneously,” says Collings.

Open ROADM MSA

Lumentum is also a member of the Open ROADM multi-source agreement (MSA) that has created a disaggregated design to enable interoperability between systems vendors’ ROADMs.

AT&T is deploying Open ROADM systems in its metro networks while the MSA members have begun work on Revision 6.0 of the standard.

“Open ROADM is maturing and increasing its span of interest,” says Collings.

At first glance, Lumentum’s membership is surprising given it supplies ROADM building-blocks to vendors that make the ROADM systems. Moreover, the Open ROADM standard views a ROADM as an enclosed system.

“The Open ROADM has set certain boundaries where it defines interfaces so that vendor A can talk to vendor B,” says Collings. “And it has set that boundary pretty much at the complete ROADM node.”

Yet Lumentum is an MSA member because part of the software involved in controlling the ROADM is within the node. “It is not just a hardware solution, it is hardware and a significant software solution to supply into that,” says Collings.

Pluggable optics is also a part of the Open ROADM MSA, another reason for Lumentum’s interest. “There is a general discussion about potentially making a boundary condition around pluggable optics as well,” he says.

Collings says the MSA continues to build the ecosystem and the management system to help others use Open ROADM, not just AT&T.

400ZR, OpenZR+ and ZR+

As a supplier of coherent optics and line-side modules, Lumentum is interested in the OIF’s 400ZR standard and what is referred to as ZR+.

ZR+ offers an extended set of features and enhanced optical performance. Both 400ZR and ZR+ will be implemented using QSFP-DD and OSFP pluggable modules.

The 400ZR specification has been developed for a specific purpose: to deliver 400 Gigabit Ethernet for distances of at least 80km for data centre interconnect applications. But 400ZR is not suited for more demanding metro mesh and longer-distance metro-regional applications.

This is what ZR+ aims to address. However, ZR+, unlike 400ZR, is not a standard and is a broad term.

At ECOC, Acacia Communications and NTT Electronics detailed interoperability between their coherent DSPs using what they call ‘OpenZR+’. OpenZR+ uses Ethernet traffic like 400ZR but also supports the additional data rates of 100, 200 and 300 Gigabit Ethernet. OpenZR+ also borrows from the OpenROADM specification to enable module interoperability between vendors for data centre interconnect applications with reaches beyond 120km.

But ZR+ encompasses differentiated coherent designs that support 400 gigabits in a compact pluggable but also lower transmission rates that trade capacity for reach.

“So, yes, both classes of ‘ZR+’ are emerging,” says Collings.

OpenZR+ seeks interoperability in compact pluggables, while ZR+ includes proprietary, higher-power, higher-performance modes less focused on interoperability. “That [ZR+] is an area where distance and capacity equal money, in terms of savings and value,” says Collings. “That is going to be an area of differentiation, as it has always been for coherent interfaces.”

Collings favours some standardisation around ZR+, to enable interchangeability among module vendors and avoid the creation of a siloed market.

“But I don’t think we are going to find ZR+ interfaces defined for interoperability because you will find yourself walking back on that differentiation in terms of value that the network operators are looking to extract,” says Collings. “They need every bit of distance they can get.”

Network operators want compact, cost-effective solutions that do ‘even more stuff’ than they are used to. “400ZR checks that box but for bigger, broader networks, operators want the same thing,” says Collings.

There is a continuum of possibilities here, he says: “It is high value from a network operator point of view and it’s a technology challenge for the likes of us and the [DSP] chip vendors.”

800G Pluggable MSA

Lumentum also recently joined the 800G Pluggable MSA that was announced at the CIOE show, held in Shenzhen in September.

“Like any client interface where Lumentum is a supplier of the underlying [laser] chips – whether DMLs, EMLs or VCSELs – we feel it is pretty important for us to be in the definition setting of the interface,” says Collings. “We want the interface to be aligned optimally to what the chip can do.”

Lumentum announced last year that it is exiting the client-side module business and therefore will be less involved in the module aspects of the interface work.

“Having moved out of the [client-side] module business, we’re finding an awful lot of customers interested in engaging with us on the chip level, much more than before,” says Collings.

Further information

For an Optical Connections article about OpenZR+, co-authored by Acacia, NTT Electronics, Lumentum, Juniper Networks and Fujitsu Optical Components, click here

 


Gazettabyte’s 10th anniversary


Gazettabyte’s 10th anniversary passed quietly sometime in August.

The work to create the website started earlier, as did the writing of the first stories to ensure there was content when the site went live in August 2009.

Gazettabyte has since published hundreds of stories and articles covering emerging technologies in the telecom and datacom industries.

The stories highlight the many changes that have taken place over the last decade.

Continual change

Many optical component firms have either folded or have been acquired, including industry-leading firms, in the last decade.

For example, the first Gazettabyte story featured the start-up OneChip Photonics, which made photonic integrated circuits (PICs) for fibre-to-the-x (FTTx). The company had just received $19.5m in funding.

The company’s technology was impressive but the FTTx market experienced ongoing cost reductions, with companies pushing discretes such that the promised benefits of integration didn’t materialise. The start-up, with leading PIC expertise, folded.

There was also an interview with BT about 10G PON in 2009. This highlights another trend in telecoms: technology can take a long time to come to market.

The fastest optical interfaces at the time were 40 and 100 gigabits per second.

Fast-forward ten years and now the talk is of 800-gigabit client-side modules and terabit-plus coherent interfaces.

Acacia, an example of a leading player being acquired, recently announced its second-generation AC1200 coherent module that supports 1.2 terabits in a 150-gigahertz optical channel. Nokia has just given a hint about its next-generation 100-gigabaud coherent solution – the PSE-4? – with a 1.3-terabit single-wavelength trial over 93km. The total capacity transmitted over the fibre using Nokia’s technology was 50.8 terabits.

The last decade has also witnessed the continual rise of the internet giants that deliver double-digit yearly revenue growth. Such hyperscalers have become significant consumers of optics and drivers of technology.

Their rise has also stirred the telecoms industry, with the network operators embarking on a radical re-architecting of how they build and operate their networks.

The network operators have seen how the hyperscalers use software and commercial-off-the-shelf hardware and they too want the benefits of disaggregated designs and open networking.

The rise of China is a further key development of the last decade. China’s unbridled ambition has seen it become a huge driver, manufacturer and consumer of leading telecom and datacom technologies.

Change on this scale is unsettling. But it is also to be welcomed. It shows telecom and datacom as healthy industries despite being mature.

Typically, a mature industry is settled: two or three players dominate a segment, the barrier for entry for start-ups is excessively high, and little changes with time.

No close observer of telecom and datacom would describe them as plodding industries.

Past and present

Over the years, Gazettabyte has conducted several feature series. These include CEO and CTO interviews, and an acknowledgement of the silicon photonics pioneers and luminaries, including Professor Richard Soref, described by another silicon photonics luminary, Andrew Rickman, as the ‘founding father of silicon photonics’.

Gazettabyte also proved a valuable resource during the writing of a book on silicon photonics that was co-authored with OFS Fitel’s Daryl Inniss.

Gazettabyte will mark its 10th anniversary with a series of features and special interviews.

It will revisit the CTO interviews and will focus on some key topics: the network transformation being undertaken by the telcos, co-packaged optics, and certain other key emerging technologies. The first CTO interview to be published is with Lumentum’s Brandon Collings, an ongoing insightful source for Gazettabyte.

This is also an opportunity to acknowledge the sponsors of the site, many of whom have supported Gazettabyte from the start.

Without Gazettabyte’s backers – ADVA, Ciena, Huawei, Infinera, Intel, LightCounting, Lumentum, Nokia, II-VI (Finisar) – the site would not exist.

 


Open ROADM gets deployed as work starts on Release 6.0

Source: Open ROADM MSA

AT&T has deployed Open ROADM technology in its network and says all future reconfigurable optical add-drop multiplexer (ROADM) deployments will be based on the standard.

“At this point, it is in a single metro and we are working on a second large metro area,” says John Paggi, assistant vice president member of technical staff, network infrastructure and services at AT&T.

 


Shown are the various elements included in the disaggregated Open ROADM MSA design, along with the hierarchical SDN controller architecture: federated controllers oversee the optical layer while the multi-layer controller oversees path creation across the layers, from IP to optical. Source: Open ROADM MSA

Meanwhile, the Open ROADM multi-source agreement (MSA) continues to progress, with members working on Release 6.0 of the standard.

Motivation 

AT&T is a founding member of the Open ROADM MSA along with system vendors Ciena, Fujitsu and Nokia. The organisation has since grown to 23 members, 13 of which operate networks. Besides AT&T, the communications service providers include Deutsche Telekom, Orange, KDDI, SK Telecom and Telecom Italia.

The initiative was created to promote a disaggregated ROADM standard that enables interoperability between vendors’ ROADMs.

The specification work includes the development of open interfaces to control the ROADMs using software-defined networking (SDN) technology. The scope of the disaggregated design has also been expanded beyond ROADMs to include optical transceivers, OTN switching to handle sub-wavelength traffic, and optical amplifiers.

AT&T viewed the MSA as a way to change the traditional model of assigning two ROADM system vendors for each of its metro regions.

“We had two suppliers to keep each other honest,” says Paggi. “But once we had committed a region to a supplier, we were more or less beholden to that supplier for additional ROADM and transponder purchases.”

AT&T wanted true ‘hyper-competition’ among ROADM and transponder suppliers and the Open ROADM MSA was the result.

The operator saw the MSA as a way to reduce costs and speed up innovation by using an open networking model. Opening up and standardising the design would also allow innovative start-up vendors to participate. With the traditional supply model, an operator would favour larger firms knowing it would be dependent on the suppliers for 5-10 years.

“Because you can mix and match different suppliers, Open ROADM allows us to introduce disrupters to our environment,” says Paggi.

Evolution

The first Open ROADM revision used 100-gigabit wavelengths and a 50GHz fixed grid. A flexible grid and in-line amplification that extended the reach of 100-gigabit wavelengths to 1,000km were then added with Revision 2.

“In Revision 3 we made Open ROADM applicable to more use cases,” says Martin Birk, director member of technical staff, network infrastructure and services, AT&T. “We started introducing things like OTUCn and FlexO in preparation for 400 gigabits.” The OTN ‘Beyond 100 gigabit’ OTUCn format comprises ‘n’ multiples of 100-gigabit OTUC frames, while FlexO refers to the Flexible OTN format.

Adopting OTN technologies is part of enabling Open ROADM to support 200-, 300- and 400-gigabit wavelengths.

Revision 4 then added ODUFlex, 400-gigabit clients, and support for low-noise amplifiers to further extend reach, while the latest fifth revision adds streaming telemetry for network monitoring using work from the OpenConfig industry group.

“A lot of features that widen considerably the application of Open ROADM,” says Birk.

Revision 6.0

Open ROADM releases initially came once a year but the scope of each revision has now been curtailed to enable two releases a year. Members are polled as to what new features are required at the start of each standardisation process.

Now, the MSA members are working on Revision 6.0 that covers ‘all directions’ of the standard.

“We are improving the control plane interoperability with more features,” says Birk. “Right now you have a single network view; in future, you could have an idealised network plan and a network view with actual failures, and you could provision services across these network views.”

And with the advent of 600-gigabit, 800-gigabit and even 1.2-terabit coherent wavelengths, Open ROADM members may add support for speeds faster than 400 gigabits.

“Just as our suppliers continue to evolve their roadmaps, so does the Open ROADM MSA to stay relevant,” says Birk.

AT&T’s Open ROADM deployments support 100-gigabit wavelengths while the 400-gigabit technology is still in development.

“The ROADMs will not change; the only thing that will change is the software,” says Birk. “And in a disaggregated design, you can leave the ROADMs on version 2.0 and upgrade the transponders to 400 gigabits and version 5.0.”

This, says Birk, is why it is much easier to introduce new technology with an open design compared to monolithic platforms where an upgrade involves all the element management systems, ROADMs and transponders.

Status

The Open ROADM MSA says it is up to individual network operator members to declare the status of their Open ROADM network deployments. Accordingly, the status of overall Open ROADM deployments is unclear.

What AT&T will say is that it is being approached by vendors that want to demonstrate their Open ROADM technology to the operator.

“When we ask them why they have done this without any agreement that AT&T would purchase their solutions, they respond that they are seeing Open ROADM listed as a requirement in RFPs (Request For Proposals) from many other service providers,” says Paggi. “They have taken it upon themselves to develop Open ROADM-compliant products.”

At the OFC show earlier this year, the Open ROADM MSA showcased an SDN controller turning up a wavelength to send virtual machines between two data centres. The SDN controller then terminated the optical connection on completion of the transfer.

Operators AT&T and Orange were part of the demonstration, as was the University of Texas at Dallas. “They [the University of Texas] are a supercomputing centre and they can create some nice applications on top of Open ROADM,” says Birk.

The system vendors involved in the OFC demonstration included Ciena, Fujitsu, ECI Telecom, Infinera and Juniper Networks.

 


Infinera rethinks aggregation with slices of light

Dave Welch, founder and chief innovation officer at Infinera.

An optical architecture for traffic aggregation that promises to deliver networking benefits and cost savings was unveiled by Infinera at this week’s ECOC show, held in Dublin.

Traffic aggregation is used widely in the network for applications such as fixed broadband, cellular networks, fibre-deep cable networks and business services.

Infinera has developed a class of optics, dubbed XR optics, that fits into pluggable modules for traffic aggregation. And while the company is focussing on the network edge for applications such as 5G, the technology could also be used in the data centre. 

Optics is inherently a point-to-point communications technology, says Infinera. Yet optics is applied to traffic aggregation, a point-to-multipoint architecture, and that results in inefficiencies.

“The breakthrough here is that, for the first time in optics’ history, we have been able to make optics work to match the needs of an aggregation network,” says Dave Welch, founder and chief innovation officer at Infinera.


XR Optics

Infinera came up with the ‘XR’ label after borrowing from the naming scheme used for 400ZR, the 400-gigabit pluggable optics coherent standard.

“XR can do point-to-point like ZR optics,” says Welch. “But XR allows you to go beyond, to point-to-multipoint; ‘X’ being an ill-defined variable as to exactly how you want to set up your network.”

XR optics uses coherent technology and Nyquist sub-carriers. Instead of using a laser to generate a single carrier, pulse-shaping is used at the transmitter to generate multiple carriers, referred to as Nyquist sub-carriers.

The sub-carriers convey the same information as a single carrier but by using several sub-carriers, a lower symbol rate can be used for each. The lower symbol rate improves the tolerance to non-linear effects in a fibre and enables the use of lower-speed electronics.
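The arithmetic behind the lower symbol rate can be shown with a short calculation. The 25 percent coding-and-framing overhead below is an assumption for illustration, not a published Infinera figure:

```python
def per_subcarrier_baud(net_gbps, n_subcarriers, bits_per_symbol,
                        pols=2, overhead=0.25):
    """Symbol rate of each Nyquist sub-carrier when a net data rate is split
    across n sub-carriers. Dual polarisation and a 25% FEC/framing overhead
    are assumed for illustration."""
    gross_gbps = net_gbps * (1 + overhead)
    total_gbaud = gross_gbps / (bits_per_symbol * pols)
    return total_gbaud / n_subcarriers

# 400 Gbps net with 16-QAM (4 bits/symbol), dual polarisation:
print(per_subcarrier_baud(400, 1, 4))   # single carrier: 62.5 GBd
print(per_subcarrier_baud(400, 16, 4))  # 16 sub-carriers: ~3.9 GBd each
```

Under these assumptions each of the 16 sub-carriers runs at roughly 3.9 GBd and carries 31.25 Gbps gross, 25 Gbps net, consistent with the 4GHz-wide, 25-gigabit sub-carrier Infinera cites.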

Infinera first detailed Nyquist sub-carriers as part of its advanced coherent toolkit, and implemented the technology with its Infinite Capacity Engine 4 (ICE4) used for optical transport.

The company is bringing to market its second-generation Nyquist sub-carrier design with its ICE6 technology that supports 800-gigabit wavelengths.

Now Infinera is proposing coherent sub-carriers for a new class of problem: traffic aggregation. But XR optics will need backing and be multi-sourced if it is to be adopted widely.

Network operators will also need to be convinced of the technology’s merits. Infinera claims XR optics will halve the pluggable modules needed for aggregation and remove the need for intermediate digital aggregation platforms, reducing networking costs by 70 percent.

Aggregation optics

XR optics will be required at both ends of a link. The modules will need to understand a protocol that tells them the nature of the sub-carriers to use: their baud rate (and resulting spectral width) and modulation scheme.

Infinera cites as an example a 4GHz-wide sub-carrier modulated using 16-ary quadrature amplitude modulation (16-QAM) that can transmit 25 gigabits of data.

A larger capacity XR coherent module will be used at the aggregation hub and will talk directly with XR modules at the network edge, “casting out” its sub-carriers to the various pluggable modules at the network edge.

For example, the module at the hub may be a 400-gigabit QSFP-DD supporting 16, 25-gigabit sub-carriers, or an 800-gigabit QSFP-DD or OSFP module delivering 32 sub-carriers. A mix of lower-speed XR modules will be used at the edge: 100-gigabit QSFP28 XR modules based on four sub-carriers and single sub-carrier 25-gigabit SFP28s.

Source: Infinera

“As soon as you have defined that each one of these transceivers is some multiple of that 25-gigabit sub-carrier, they can all talk to each other,” says Welch.

The hub XR module and network-edge modules are linked using optical splitters such that all the sub-channels sent by the hub XR module are seen by each of the edge modules. The hub in effect broadcasts its sub-carriers to all the edge devices, says Welch.

A coding scheme is used such that each edge module’s coherent receiver can pick off its assigned sub-channel(s). In turn, an edge module will send its data using the same frequencies on a separate fibre.

Basing the communications on multiples of sub-carriers means any XR module can talk to any other, irrespective of their overall speeds.
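That multiple-of-25-gigabit rule makes capacity planning simple bookkeeping. A sketch, using hypothetical site names and a plain dictionary rather than any real Infinera interface:

```python
# A 400G hub module exposes 16 sub-carriers of 25 Gbps each; every edge
# module claims a whole number of them. Site names are hypothetical.
HUB_SUBCARRIERS = 16  # 400G QSFP-DD hub at 25G per sub-carrier

edge_modules = {
    "site-A QSFP28 (100G)": 4,  # four sub-carriers
    "site-B SFP28 (25G)": 1,    # a single sub-carrier
    "site-C QSFP28 (100G)": 4,
}

assigned = sum(edge_modules.values())
assert assigned <= HUB_SUBCARRIERS, "hub oversubscribed"
print(f"{assigned} of {HUB_SUBCARRIERS} sub-carriers assigned; "
      f"{HUB_SUBCARRIERS - assigned} spare for growth or reassignment")
```

Upgrading site-B from 25G to 100G is then just changing its entry from 1 to 4 sub-carriers, which is the no-truck-roll reassignment Welch describes below.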

Sub-carriers can also be reassigned.

“In that fashion, today you are a 25-gigabit client module and tomorrow you are 100-gigabit,” says Welch. Reassigning edge-module capacities will not happen often but when undertaken, no truck roll will be needed.

System benefits

In a conventional aggregation network, the edge transceivers send traffic to an intermediate electrical aggregation switch. The switch’s line-side-facing transceivers then send on the aggregated traffic to the hub.

Using XR optics, the intermediate aggregation switch becomes redundant since the higher-capacity XR coherent module aggregates the traffic from the edge. Removing the switch and its one-to-one edge-facing transceivers accounts for the halving of the overall transceiver count and the overall 70 percent network cost saving (see diagram below).

Source: Infinera

The disadvantage of getting rid of the intermediate aggregation switch is minor in comparison to the plusses, says Infinera.

“In a network where all the traffic is going left to right, there is always an economic gain,” says Welch. And while a layer-2 aggregation switch enables statistical multiplexing to be applied to the traffic, that benefit is insignificant when compared to the cost savings XR optics brings, he says.
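The claimed halving of the transceiver count follows from simple counting. The model below is this article's illustration, not Infinera's published accounting: conventionally each edge site needs a transceiver, the aggregation switch needs a matching edge-facing transceiver per site plus an uplink pair to the hub, whereas with XR optics the edge modules talk straight to one hub module through passive splitters.

```python
def conventional_transceivers(n_sites):
    # edge modules + switch edge-facing modules + uplink pair (switch + hub ends)
    return n_sites + n_sites + 2

def xr_transceivers(n_sites):
    # edge modules + a single higher-capacity hub module; splitters are passive
    return n_sites + 1

n = 16
print(conventional_transceivers(n), xr_transceivers(n))  # 34 vs 17: roughly half
```

The intermediate switch and its transceivers disappear entirely; the remaining cost gap versus the quoted 70 percent would come from removing the switch hardware itself and its operational overhead.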

Challenges

XR transceivers will need to support sub-carriers and coherent signal processing as well as the language that defines the sub-carriers and their assignment codes. Accordingly, module makers will need to make a new class of XR pluggable modules.

“We are working with others,” says Welch. “The object is to bring the technology and a broad-based supply chain to the market.” The fastest way to achieve this, says Welch, is through a series of multi-source agreements (MSAs). Arista Networks and Lumentum were both quoted in Infinera’s XR Optics press release.

Another challenge is that a family of coherent digital signal processors (DSPs) will need to be designed to fit within the power constraints of the various slim client-side pluggable form factors.

Infinera stresses it is unveiling a technological development and not a product announcement. That will come later.

However, Welch says that XR optics will support a reach of hundreds of kilometres and even metro-regional distances of over 1,000km.

“We are comfortable we are working with partners to get this out,” says Welch. “We are comfortable we have some key technologies that will enhance these capabilities as well.”

Other applications

Infinera is focussing its XR optics on applications such as 5G. But it says the technology will benefit many network applications.

“If you look at the architecture in the data centre or look at core networks, they are all aggregation networks of one flavour or another,” says Welch. “Any type of power, cost, and operational savings of this magnitude should be evaluated across the board on all networks.”


Acacia heralds the era of terabit-plus optical channels

Each line represents a data rate. The diagram shows how the baud rate and modulation scheme can be varied, and the resulting impact on channel width, reach and data rate. Source: ADVA.

Acacia Communications has unveiled the AC1200-SC2 that delivers 1.2 terabits over a single optical channel.

The SC2 (single chip, single channel) is an upgrade of Acacia’s high-end AC1200 module. The AC1200 too is a 1.2-terabit module but uses two optical channels, each transmitting a 600-gigabit wavelength. The SC2 sends 1.2 terabits using two sub-carriers that fit within a single 150GHz-wide channel.


“In the SC2, we take care of everything so the user configures a single channel that is easier to manage in their network,” says Tom Williams, vice president of marketing at Acacia.

1.2-terabit channel

Acacia unveiled the AC1200 at the ECOC show in 2017. With its introduction, Acacia gained an advantage over its system-vendor rivals in bringing a 1.2-terabit coherent module to market using 600-gigabit wavelengths. The module supports up to 64-ary quadrature amplitude modulation (64-QAM) and a symbol rate of 69 gigabaud (GBd).

Systems vendors such as Ciena, with its WaveLogic 5, and Infinera, with its Infinite Coherent Engine 6 (ICE6), responded with their next-generation coherent designs that use symbol rates approaching 100GBd and support an 800-gigabit wavelength.

Sell-side research analysts interpreted the coherent developments as Acacia having a window of opportunity to exploit the AC1200 until the systems vendors’ coherent designs come to market in the coming year. The analysts also noted how 400 Gigabit Ethernet client signals better fit in an 800-gigabit wavelength compared to a 600-gigabit wavelength.

Then, in July, Acacia’s status as a merchant coherent technology supplier changed with the announcement that Cisco Systems is to acquire the company for $2.6 billion. Now, Acacia has detailed the SC2 as its acquisition awaits completion.

AC1200-SC2

The SC2 uses the same form factor and electrical connector as the AC1200 module, simplifying the upgrading of system designs using the AC1200. However, the SC2 module uses a single fibre pair for its optical output whereas the AC1200 uses two pairs, one for each channel.

The SC2 module shares the same Pico coherent digital signal processor (DSP) and baud rates as the AC1200. The Pico DSP uses fractional quadrature amplitude modulation (QAM) and an adjustable baud rate.

Fractional QAM allows the tuning of the transmitted data rate by using a mix of adjacent modulation formats. For example, 8-QAM and 16-QAM are alternated, with the percentage of time each is used determining the resulting data rate. In turn, the baud rate can be increased to widen the signal’s spectrum, if the optical channel permits, such that a lower modulation scheme may be used, improving the reach (see diagram above).
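The rate-tuning idea can be illustrated with a simple model. The 50/50 format split and the 69GBd operating point below are assumptions for the example, and the result is a raw rate before FEC and framing overhead:

```python
# Illustrative model of fractional QAM: the DSP time-interleaves two
# adjacent formats, e.g. 8-QAM (3 bits/symbol) and 16-QAM (4 bits/symbol).
# The 50/50 split and 69 GBd figures are assumptions for this sketch.

def avg_bits_per_symbol(frac_hi: float, bits_lo: int = 3, bits_hi: int = 4) -> float:
    """Average bits per symbol per polarisation, weighted by the
    fraction of time the higher-order format is used."""
    return (1 - frac_hi) * bits_lo + frac_hi * bits_hi

def raw_line_rate_gbps(baud_gbd: float, frac_hi: float, n_pol: int = 2) -> float:
    """Dual-polarisation raw line rate, before FEC and framing overhead."""
    return baud_gbd * avg_bits_per_symbol(frac_hi) * n_pol

print(avg_bits_per_symbol(0.5))     # 3.5 bits/symbol
print(raw_line_rate_gbps(69, 0.5))  # 483.0 Gbps raw
```

Sweeping `frac_hi` between 0 and 1 moves the raw rate smoothly between the two formats’ rates, which is the fine-grained tuning fractional QAM provides.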

The AC1200 uses 50GHz- and 75GHz-wide channels while the SC2 uses 50-150GHz channels. For 600-gigabit and 1.2-terabit transmissions, the widest channels are used: 75GHz for the AC1200, and 150GHz for the SC2. “But as you go down in data rate, you can address the transmission in multiple ways,” says Williams. “You can run a higher modulation scheme in a narrow channel or, with a wider channel, run a lower modulation scheme to go further.”
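One way to see why the widest channels are paired with the highest rates is to compare the spectral efficiency implied by the article’s channel plans. The QPSK entry below is a hypothetical long-haul operating point, not a figure Acacia has stated:

```python
# Spectral efficiency implied by the channel plans described above.
# The 400G QPSK entry is a hypothetical long-haul operating point.

def spectral_efficiency(gbps: float, channel_ghz: float) -> float:
    """Net bits per second per hertz of channel width."""
    return gbps / channel_ghz

print(spectral_efficiency(600, 75))    # AC1200, 600G in 75GHz: 8.0 b/s/Hz
print(spectral_efficiency(1200, 150))  # SC2, 1.2T in 150GHz: 8.0 b/s/Hz
print(spectral_efficiency(400, 150))   # hypothetical 400G QPSK: ~2.7 b/s/Hz
```

The first two lines show why the two modules have comparable spectral efficiency at full capacity, while the third shows the efficiency given up to gain reach with a lower-order format in the same wide channel.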

The resulting optical performance means that the SC2 can be used for multiple applications: from short-span data centre interconnect, where the full 1.2-terabit capacity is sent using 64-QAM, to metro-regional and long-haul distances using 800 gigabits and 16-QAM, all the way to ultra-long-haul terrestrial and subsea links using 400 gigabits and quadrature phase-shift keying (QPSK) modulation.

The AC1200 and the SC2 have comparable optical performance in terms of spectral efficiency and reach. This is unsurprising given how both modules use the same Pico DSP, baud rates and modulation schemes.

The AC1200 uses two 75GHz channels, each carrying 600 gigabits, to send 1.2 terabits, while the SC2 uses two sub-carriers in a single 150GHz channel. The SC2 has a slight advantage: no guard band is needed between its two sub-carriers, whereas the AC1200’s two channels normally require one (unless the AC1200 sends them as a two-channel ‘superchannel’, which also avoids the dead zone between the channels).

Acacia is not detailing how it generates the optical sub-carriers besides saying the change stems from the interface between the Pico DSP and its silicon photonics-based photonic integrated circuit (PIC). The company will also not say if the SC2 uses a new PIC design.

Operational benefits

The fact that the SC2 and AC1200 deliver the same reach and capacity may explain why Acacia downplays the argument that the company has again leapfrogged its rivals with the advent of a module that sends 1.2 terabits over a single channel.

Instead, Acacia stresses the system and operational benefits resulting from doubling the data transmitted per channel.

“The SC2 module allows the entire capacity to be managed as a single channel,” says Williams. “The original [AC1200] module is well-suited to brownfield networks operating with 50GHz or 75GHz spacing, while the SC2 offers advantages in greenfield network architectures that can use channel plans up to 150GHz.”

Using a higher-capacity channel requires fewer optical components and reconfigurable optical add/drop multiplexer (ROADM) ports, thereby reducing networking costs, says Williams.

Using 150GHz-wide channels also aligns with an emerging consensus among network operators regarding wavelength roadmaps. “Network operators want to operate on some standardised grid based on regular multiples [50GHz, 75GHz] because it avoids fragmentation,” says Williams.

Availability

Acacia is already providing the SC2 module to certain customers that are undertaking validation testing. The firm is ready to ramp production based on particular customer demand.

Acacia will also be demonstrating its latest module at this week’s ECOC show being held in Dublin.
