Changing the radio access network for good

Stéphane Téral

The industry initiative to open up the radio access network, known as open RAN, is changing how the mobile network is architected and is proving its detractors wrong.

So says a recent open RAN study by market research company, LightCounting.

“The virtual RAN and open RAN sceptics are wrong,” says Stéphane Téral, chief analyst at LightCounting.

Japan’s mobile operators, Rakuten Mobile and NTT Docomo, lead the world with large-scale open RAN deployments. Meanwhile, many leading communications service providers (CSPs) continue to trial the technology with substantial deployments planned around 2024-25.

Japan’s fourth and newest mobile network operator, Rakuten Mobile, deployed 40,000 open RAN sites with 200,000 radio units by the start of 2022.

Meanwhile, NTT Docomo, Japan’s largest mobile operator, deployed 10,000 sites in 2021 and will deploy another 10,000 this year.

NTT Docomo has shown that open RAN also benefits incumbent operators, not just new mobile entrants like Rakuten Mobile and Dish Networks in the US that can embrace the latest technologies as they roll out their networks.

Virtual RAN and open virtual RAN

Traditional RANs use a radio unit and a baseband unit from the same equipment supplier. Such RAN systems use proprietary interfaces between the units, with the vendor also providing a custom software stack, including management software.

The vendor may also offer a virtualised system that implements some or all of the baseband unit’s functions as software running on server CPUs.

A further step is disaggregating the baseband unit’s functions into a distributed unit (DU) and a centralised unit (CU). Placing the two units at different locations is then possible.

A disaggregated design may also come from a single vendor, but the goal of open RAN is to enable CSPs to mix and match RAN components from different suppliers. Accordingly, a virtual RAN using open interfaces, as specified by the O-RAN Alliance, is an open virtual RAN system.

The diagram shows the different architectures leading to the disaggregated, virtualised RAN (vRAN) architecture.

Open virtual RAN comprises radio units, the DU and CU functions that can be implemented in the cloud, and the RAN Intelligent Controller (RIC), the brain of the RAN, which runs applications.

Several radio units may be connected to a virtual DU. The radio unit and virtual DU may be co-located or separate, linked using front-haul technology. Equally, a CU can serve several virtual DUs, depending on the networking requirements, connected with a mid-haul link.
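The topology can be pictured with a minimal data model. The sketch below is purely illustrative: the class and field names are invented for this example and are not O-RAN Alliance definitions.

```python
from dataclasses import dataclass, field

# Illustrative open virtual RAN topology. All names are invented for
# this sketch; they are not taken from any O-RAN specification.

@dataclass
class RadioUnit:
    site: str                                   # the cell site hosting the RU

@dataclass
class VirtualDU:
    location: str                               # co-located with RUs or remote
    radio_units: list[RadioUnit] = field(default_factory=list)  # front-haul

@dataclass
class CentralUnit:
    location: str                               # can be hosted in the cloud
    dus: list[VirtualDU] = field(default_factory=list)          # mid-haul

# Several RUs connect to one virtual DU over front-haul; several virtual
# DUs connect to one CU over mid-haul.
cu = CentralUnit("regional-cloud")
du = VirtualDU("edge-site", [RadioUnit("tower-1"), RadioUnit("tower-2")])
cu.dus.append(du)
```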

Rakuten Mobile has deployed the world’s largest open virtual RAN architecture, while NTT Docomo has the world’s largest brownfield open RAN deployment.

NTT Docomo’s deployment is not virtualised: it does not run RAN functions in software.

“NTT Docomo’s baseband unit is not disaggregated,” says Téral. “It’s a traditional RAN with a front-haul using the O-RAN Alliance specification for 5G.”

NTT Docomo is working to virtualise the baseband units and the work is likely to be completed in 2023.

Opening the RAN

NTT Docomo and the NGMN Alliance were working on interoperability between RAN vendors 15 years ago, says Téral. The Japanese mobile operator wanted to avoid vendor lock-in and increase its options.

“NTT Docomo was the only one doing it and, as such, could not enjoy economies of scale because there was no global implementation,” says Téral.

Wider industry backing arrived in 2016 with the formation of the Telecom Infra Project (TIP), backed by Meta (Facebook) and several CSPs, to design network architectures that promote interoperability using open equipment.

The O-RAN Alliance, formed in 2018, was another critical development. With over 300 members, the alliance has ten working groups addressing such topics as defining the interfaces between RAN functions to standardise the open RAN architecture.

The O-RAN Alliance realised it needed to create more flexibility to enable boxes to be interchanged, says Téral, and started in the RAN to allow any radio unit to work with any virtual DU.

Geopolitics is the third element that kickstarted open RAN. Removing Chinese equipment vendors Huawei and ZTE from key markets brought open RAN to the forefront as a way to expand the supplier base.

Indeed, Rakuten Mobile was about to select Huawei for its network, but it decided in 2018 to adopt open RAN instead because of geopolitics.

“Geopolitics added a new layer and, to some extent, accelerated the development of open RAN,” he says. “But it does not mean it has accelerated market uptake.”

That’s because the first wave of 5G deployments by the early adopter CSPs seeking a first-mover advantage is ending. Indeed, the uptake in 5G’s first three years has eclipsed the equivalent rollout of 4G, says LightCounting.

To date, over 200 of 800 mobile operators worldwide have deployed 5G.

Early 5G adopters have gone with traditional RAN suppliers like Ericsson, Nokia, Samsung, NEC and Fujitsu. And with open RAN only now hitting its stride, it has largely missed the initial 5G wave.

Open RAN’s next wave

For the next two years, then, the dominant open RAN deployments will continue to be those of Rakuten Mobile and NTT Docomo, to which can be added the network launches from Dish Networks in the US, and 1&1 Drillisch of Germany, which is outsourcing its buildout to Rakuten Symphony.

Rakuten Mobile’s vendor offshoot, Rakuten Symphony, set up to commercialise Rakuten’s open RAN experiences, is also working with Dish Networks on its deployment.

Rakuten Mobile hosts its own 5G network, including the open RAN, in its data centres. Dish is working with cloud player Amazon Web Services to host its 5G network. Dish’s network is still in its early stages, but the mobile operator can host its network in Amazon’s cloud because it uses a cloud-native implementation that includes open RAN.

The next market wave for open RAN will start in 2024-25 when the leading CSPs begin to turn off their 3G networks and start deploying open RAN for 5G.

It will also be helped by the second wave of 5G rollouts from the 600 operators with LTE networks. However, this second 5G cycle may not be as large as the first, says Téral, and there will be a lag between the two cycles that will only lengthen if there is an economic recession.

Some leading CSPs that were early cheerleaders for open RAN have since dampened their deployment plans, says Téral. For example, Telefónica and Vodafone first spoke in 2019 of deploying thousands of sites but have since scaled back their plans.

The leading CSPs attribute their reluctance to deploy open RAN to its many challenges. One is interoperability: despite the development of open interfaces, getting the different vendors’ components to work together is still a challenge.

Another issue is integration. Disaggregating the various RAN components means someone must stitch them together. Some CSPs do this themselves, but others need system integrators, and this remains a challenge.

Téral believes that while these are valid concerns, Rakuten and NTT Docomo have already overcome such complications; open RAN is now deployed at scale.

These CSPs are also reluctant to end their relationships with established suppliers.

“The service providers’ teams have built relationships and are used to dealing with the same vendors for so long,” says Téral. “It’s very complicated for them to build new relationships with somebody else.”

More RAN player entrants

Rakuten Symphony has assembled a team with tremendous open RAN experience. AT&T is one prominent CSP that has selected Rakuten Symphony to help it with network planning and speed up deployments.

NTT Docomo, working with four vendors, has got their radio units and baseband units to work with each other. NTT Docomo is also promoting its platform, dubbed OREC (5G Open RAN Ecosystem), to other interested parties.

NEC and Fujitsu, selected by NTT Docomo, have also gained valuable open RAN experience. Fujitsu is a system integrator with Dish while NEC is involved in many open RAN networks in Europe, starting with Telefónica.

There is also a commercial advantage for these systems vendors since Rakuten Mobile and NTT Docomo, along with Dish and 1&1, are the leading operators deploying open RAN for the next two years.

That said, the radio unit business continues to look up. “There is no cycle [with radio units]; you still have to add radio units at some point in particular parts of the network,” says Téral.

But for open RAN, those vendors not used by NTT Docomo and Rakuten Mobile must wait for the next deployment wave. Vendor consolidation is thus inevitable, with Parallel Wireless the first shoe to drop given its recently announced wide-scale layoffs.

So while open RAN has expanded the number of vendor suppliers, further acquisitions should be expected, and companies that cannot survive until the next deployment wave will fold, says Téral.

And soon at the chip level too

There is also a supply issue with open RAN silicon.

With its CPUs and FlexRAN software, Intel dominates the open RAN market. However, the CSPs acknowledge there is no point in expanding the choice of RAN suppliers if there is vendor lock-in at the chip level, one layer below.

Téral says several chip makers are working with system vendors to enter the market with alternative solutions. These include ARM-based architectures, AMD-Xilinx, Qualcomm, Marvell’s Octeon family and Nvidia’s BlueField-3 data processing unit.

The CSPs are also getting involved in promoting more chip choices. For example, Vodafone has set up a 50-strong research team at its new R&D centre in Malaga, Spain, to work with chip and software companies to develop an architecture of choice for open RAN and expand the chip options.

Outlook

LightCounting forecasts that the open vRAN market will account for 13 per cent of total global RAN sales in 2027, up from 4 per cent in 2022.

A key growth driver will be the global switch to open virtual RAN in 2024-25, driven by the large Tier 1 CSPs worldwide.

“Between 2025 and 2030, you will see a mix of open RAN, and where it makes sense in parts of the network, traditional RAN deployments too,” says Téral.


Vodafone's effort to get silicon for telcos

Santiago Tenorio

This is an exciting time for semiconductors, says Santiago Tenorio, which is why his company, Vodafone, wants to exploit this period to benefit the radio access network (RAN), the most costly part of the wireless network for telecom operators.

The telecom operators want greater choice when buying RAN equipment.

As Tenorio, a Vodafone Fellow (the company’s first) and its network architecture director, notes, there were more than ten wireless RAN equipment vendors 15 years ago. Now, in some parts of the world, the choice is down to two.

“We were looking for more choice and that is how [the] Open RAN [initiative] started,” says Tenorio. “We are making a lot of progress on that and creating new options.”

But having more equipment suppliers is not all: the choice of silicon inside the equipment is also limited.

“You may have Fujitsu radios or NEC radios, Samsung radios, Mavenir software, whatever; in the end, it’s all down to a couple of big silicon players, which also supply the incumbents,” he says. “So we thought that if Open RAN is to go all the way, we need to create optionality there too to avoid vendor lock-in.”

Vodafone has set up a 50-strong research team at its new R&D centre in Malaga, Spain, that is working with chip and software companies to develop the architecture of choice for Open RAN to expand the chip options.

Open RAN R&D

The 50 staff at Vodafone’s R&D centre are organised into several streams, but their main goal is to answer critical questions regarding the Open RAN silicon architecture.

“Things like whether the acceleration is in-line or look-aside, which is a current controversy in the industry,” says Tenorio. “These are the people who are going to answer that question.”

With Open RAN, the virtualised Distributed Unit (DU) runs on a server. This contrasts with specialised hardware used in traditional baseband units.

Open RAN processes layer 1 data in one of two ways: look-aside or in-line. With look-aside, the server’s CPU performs certain layer 1 tasks, aided by accelerator hardware for tasks like forward error correction. This requires frequent communication between the two, which limits processing efficiency.

In-line solves this by performing all the layer 1 processing using a single chip. Dell, for example, has an Open RAN accelerator card that performs in-line processing using Marvell’s silicon.
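The data-flow difference between the two approaches can be sketched in Python. This is a toy illustration only: every function below is an invented placeholder, and no real layer 1 processing is performed.

```python
# Toy contrast of look-aside and in-line layer 1 processing. All the
# functions are invented stand-ins; nothing here does real DSP work.

def cpu_demodulate(block):            # layer 1 work running on the server CPU
    return block

def cpu_finish_layer1(block):         # remaining CPU-side layer 1 steps
    return block

def accelerator_fec_decode(block):    # forward error correction offload;
    return block                      # each call is a CPU<->accelerator trip

def accelerator_full_layer1(block):   # one chip runs the entire pipeline
    return block

def look_aside(block):
    """CPU does most layer 1 tasks, handing FEC to an accelerator card."""
    x = cpu_demodulate(block)
    x = accelerator_fec_decode(x)     # the frequent hand-offs limit efficiency
    return cpu_finish_layer1(x)

def in_line(block):
    """All layer 1 processing happens on a single chip: no round-trips."""
    return accelerator_full_layer1(block)
```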

When Vodafone announced its Open RAN silicon initiative in January, it was working with 20 chip and software companies. More companies have since joined.

“You have software players like middleware suppliers, also clever software plug-ins that optimise the silicon itself,” says Tenorio. “It’s not only silicon makers attracted by this initiative.”

Vodafone has no preconceived ideas as to the ideal solution. “All we want is the best technical solution in terms of performance and cost,” he says.

By performance, Vodafone means power consumption and processing. “With a more efficient solution, you need less [processing] cores,” says Tenorio.

Vodafone is talking to the different players to understand their architectures and points of view and is doing its own research that may include simulations.

Tenorio does not expect Vodafone to manufacture silicon: “I mean, that’s not necessarily on the cards.” But Vodafone must understand what is possible and will conduct lab testing and benchmark measurements.

“We will do some head-to-head measurements that, to be fair, no one I know does,” he says. Vodafone will then publish its position, create a specification and drive vendors to comply with it.

“We’ve done that in the past,” says Tenorio. “We have been specifying radios for the last 20 years, and we never had to manufacture one; we just needed to understand how they’re done to take the good from the bad and then put everybody on the art of the possible.”

Industry interest

The companies joining Vodafone’s Open RAN chip venture have different motivations.

Some have joined to ensure that they have a voice and influence Vodafone’s views. “Which is super,” says Tenorio.

Others are there because they are challengers to the current ecosystem. “They want to get the specs ahead of anybody to have a better chance of succeeding if they listen to our advice, which is also super,” says Tenorio.

Meanwhile, software companies have joined to see whether they can improve hardware performance.

“That is the beauty of having the whole ecosystem,” he says.

 

The Open RAN architecture showing (R to L) the Distributed Unit (DU) and the Radio Unit (RU). Source: ONF

Work scale

The work is starting at layer 1, covering not just the RAN’s distributed unit (DU) but also the radio unit (RU), given that the power amplifier is the biggest offender in terms of power consumption.

Layers 2 and 3 will also be tackled. “We’re currently running that on Intel, and we’re finding that there is a lot of room for improvement, which is normal,” says Tenorio. “It’s true that running the three layers on general-purpose hardware has room for improvement.”

That room for improvement is almost equivalent to one full generation of silicon, he says.

Vodafone also says that it can’t be the case that Intel is the only provider of silicon for Open RAN.

The operator expects new hardware variants based on ARM, perhaps AMD, and maybe the RISC-V architecture at some point.

“We will be there to make it happen,” says Tenorio.

Other chip accelerators

Does hardware such as Graphics Processing Units (GPUs), Data Processing Units (DPUs) and programmable logic have a role?

“I think there’s room for that, particularly at the point that we are in,” says Tenorio. “The future is not decided yet.”

The key is to avoid vendor lock-in for layer 1 acceleration, he says.

He highlights the work of companies such as Marvell and Qualcomm to accelerate layer 1 tasks, but he fears this will drive the software suppliers to take sides with one of these accelerators. “This is not what we want,” he says.

What is required is to standardise the interfaces to abstract the accelerator from the software, or to steer away from custom hardware and explore the possibilities of general-purpose but specialised processing units.
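One way to picture the abstraction Tenorio describes is a thin interface that the RAN software codes against, with each accelerator vendor implementing it behind the scenes. The sketch below is hypothetical; no such standard interface exists today, which is precisely the gap he identifies.

```python
from abc import ABC, abstractmethod

# Hypothetical layer 1 acceleration interface. The RAN software depends
# only on the abstract class, so swapping accelerator vendors means
# swapping the implementation underneath, not the software above.

class Layer1Accelerator(ABC):
    @abstractmethod
    def fec_decode(self, codeword: bytes) -> bytes: ...

class VendorA(Layer1Accelerator):
    def fec_decode(self, codeword: bytes) -> bytes:
        return codeword               # placeholder for vendor A's driver call

class VendorB(Layer1Accelerator):
    def fec_decode(self, codeword: bytes) -> bytes:
        return codeword               # placeholder for vendor B's driver call

def process_uplink(accel: Layer1Accelerator, data: bytes) -> bytes:
    # The calling software never names a vendor: no lock-in at this layer.
    return accel.fec_decode(data)
```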

“I think the future is still open,” says Tenorio. “Right now, I think people tend to go proprietary at layer 1, but we need another plan.”

“As for FPGAs, that is what we’re trying to run away from,” says Tenorio. “If you are an Open RAN vendor and can’t afford to build your ASIC because you don’t have the volume, then, okay, that’s a problem we were trying to solve.”

Improving general-purpose processing avoids having to go to FPGAs, which are bulky, power-hungry and expensive, says Tenorio, but he also notes how FPGAs are evolving.

“I don’t think we should have religious views about it,” he says. “There are semi-programmable arrays that are starting to look better and better, and there are different architectures.”

This is why he describes the chip industry as ‘boiling’: “This is the best moment for us to take a view because it’s also true that, to my knowledge, there is no other kind of player in the industry that will offer you a neutral, unbiased view as to what is best for the industry.”

Without that, the fear is that, through acquisition and competition, the chip players will reduce the IC choices to a minimum.

“You will end up with two to three incumbent architectures, and you run a risk of those being suboptimal, and of not having enough competition,” says Tenorio.

Vodafone’s initiative is open to all companies to participate, including its telco competitors.

“There are times when it is faster, and you make a bigger impact if you start things on your own, leading the way,” he says.

Vodafone has done this before: In 2014, it started working with Intel on Open RAN.

“We made some progress, we had some field trials, and in 2017, we approached TIP (the Telecom Infra Project), and we offered to contribute our progress for TIP to continue in a project group,” says Tenorio. “At that point, we felt that we would make more progress with others than going alone.”

Vodafone is already deploying Open RAN in the UK and has said that by 2030, 30 per cent of its deployments in Europe will be Open RAN.

“We’ve started deploying open RAN and it works, the performance is on par with the incumbent architecture, and the cost is also on par,” says Tenorio. “So we are creating that optionality without paying any price in terms of performance, or a huge premium cost, regardless of what is inside the boxes.”

Timeline

Vodafone is already looking at in-line versus look-aside.

“We are closing into in-line benefits for the architecture. There is a continuous flow of positions or deliverables to the companies around us,” says Tenorio. “We have tens of meetings per week with interested companies who want to know and contribute to this, and we are exchanging our views in real-time.”

There will also be a white paper published, but for now, there is no deadline.

There is an urgency to the work given that Vodafone is already deploying Open RAN, although this research is for the next generation of Open RAN. “We are deploying the previous generation,” he says.

Vodafone is also talking, for example, to the ONF open-source organisation, which announced an interest in defining interfaces to exploit acceleration hardware.

“I think the good thing is that the industry is getting it, and we [Vodafone] are just one factor,” says Tenorio. “But you start these conversations, and you see how they’re going places. So people are listening.”

The industry agrees that layer 1 interfacing needs to be standardised or abstracted to avoid companies ending up in particular supplier camps.

“I think there’ll be a debate whether that needs to happen in the O-RAN Alliance or somewhere else,” says Tenorio. “I don’t have strong views. The industry will decide.”

Other developments

The Malaga R&D site will not just focus on Open RAN but other parts of the network, such as transport.

Transport still makes use of proprietary silicon but there is also more vendor competition.

“The dollars spent by operators in that area is smaller,” says Tenorio. “That’s why it is not making the headlines these days, but that doesn’t mean there is no action.”

Two transport areas where disaggregated designs have started are the backbone router and the cell-site gateway, both sensible places to begin.

“Disaggregating a full MPLS carrier-grade router is a different thing, but its time will come,” says Tenorio, adding that the centre in Malaga is not just for Open RAN, but silicon for telcos.


TIP launches a disaggregated cell-site gateway design

Part 1: TIP white-box designs

Four leading telecom operators, members of the Telecom Infra Project (TIP), have developed a disaggregated white-box design for cell sites. The four operators are Orange, Telefónica, TIM Brazil and Vodafone. BT is also believed to be backing the open-design cell-site venture.

 Source: ADVA

The first TIP cell-site gateway product, known as Odyssey-DCSG, is being brought to market by ADVA and Edgecore Networks.

TIP isn’t the only open design framework that is developing cell-site gateways. Edgecore Networks contributed in October a design to the Open Compute Project (OCP) that is based on an AT&T cell-site gateway specification. There are thus two overlapping open networking initiatives developing disaggregated cell-site gateways. 

ADVA and Edgecore will provide the standardised cell-site gateways as operators deploy 5G. The platforms will support either commercial cell-site gateway software or open-source code. 

“We are providing a white box at cell sites to interconnect them back into the network,” says Bill Burger, vice president, business development and marketing, North America at Edgecore Networks. 

“The cell site is a really nice space for a white-box because volumes are high,” says Niall Robinson, vice president, global business development at ADVA. Vodafone alone has stated that it has 300,000 cell-site gateways that will need to be updated for 5G.

 

Odyssey-DCSG

A mobile cell site comprises remote radio units (RRUs) located on cell towers that interface to the mobile baseband unit (BBU). The baseband unit also connects to the disaggregated cell-site gateway with the two platforms communicating using IP-over-Ethernet. “The cell-site gateway is basically an IP box,” says Robinson. 

The Odyssey gateway design is based on a general-purpose Intel microprocessor and a 120-gigabit Broadcom Qumran-UX switch chip.

The white box’s link speeds to the baseband unit range from legacy 10 megabits-per-second (Mbps) to 1 gigabit-per-second (Gbps). The TIP gateway’s uplinks are typically two 25-gigabit SFP28 modules. In contrast, the OCP’s gateway design uses a higher-capacity 300-gigabit Qumran-AX switch chip and has two 100-gigabit QSFP28 uplink interfaces. “There is a difference in capacity [for the two designs] and hence in their cost,” says Robinson.

 

The cell-site gateway is basically an IP box

 

The cell-site gateways can be connected in a ring with the traffic fed to an aggregation unit for transmission within the network.          

Robinson expects other players to join ADVA and Edgecore as project partners to bring the TIP gateway to market. To date, no software partners have been announced. First samples of the platform are expected in the first quarter of 2019 with general availability in the third quarter of 2019.

“Cell-site gateways is one of those markets that will benefit from driving a common design,” says Robinson. The goal is to get away from operators choosing proprietary platforms. “You have one design hitting the market and being chosen by the different end users,” he says. “Volumes go up and costs go down.”

ADVA is also acting as the systems integrator, offering installation, commissioning and monitoring services for the gateway. “People like disaggregation when costs are being added up but end users like things - especially in high volumes - to be reintegrated to make it easy for their operations folk,” says Robinson.

The disaggregated cell-site gateway project is part of TIP’s Open Optical and Packet Transport group, the same group that is developing the Voyager packet-optical white box.    

 

Source: Gazettabyte

 

Voyager

ADVA announced recently that the Voyager platform is now available, two years after being unveiled. 

The 1-rack-unit Voyager platform uses up to 2 terabits of the 3.2-terabit Broadcom Tomahawk switch-chip: a dozen 100-gigabit client-side interfaces and 800 gigabits of coherent line-side capacity.
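The 2-terabit figure follows directly from the port counts:

```latex
\underbrace{12 \times 100\ \text{Gb/s}}_{\text{client side}}
\;+\;
\underbrace{800\ \text{Gb/s}}_{\text{coherent line side}}
\;=\; 2\ \text{Tb/s of the chip's } 3.2\ \text{Tb/s}
```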

Robinson admits that the Voyager platform would have come to market earlier had SnapRoute - providing the platform’s operating system - not withdrawn from the project. Cumulus Networks then joined the project as SnapRoute’s replacement. 

“This shows both sides of the white-box model,” says Robinson: how a collective project design can have a key member drop out, but also the strength of a design community when a replacement can step in.

TIP has yet to announce Voyager customers although the expectation is that this will happen in the next six months.

Robinson identifies two use cases for the platform: regional metro networks of up to 600km and data centre interconnect.

“Voyager has four networking ports allowing an optical network to be built,” says Robinson. “Once you have that in place, it is very easy to set up Layer-2 and Layer-3 services on top.” 

The second use case is data centre interconnect, providing enterprises with Layer-2 trunking connectivity services between sites. “Voyager is not just about getting bits across but about Layer-2 structures,” says Robinson.

The Voyager is not targeted at leading internet content providers that operate large-scale data centres. They will use specific, leading-edge platforms. “The hyperscalers have moved on,” says Robinson. “The Voyager will play in a different market, a smaller-sized data centre interconnect space.”   

 

We will be right at the front and I think we will reap the rewards for jumping in early

 

Early-mover advantage

Robinson contrasts how the Voyager and TIP’s cell-site gateway were developed. Facebook developed and contributed the Voyager design to TIP and only then did members become aware of the design. 

With the cell-site gateway, a preliminary specification was developed with one customer - Vodafone - before it was taken to other operators. These companies, which make up a good portion of the cell-site market, worked on the specification before it was offered to the TIP marketplace for development.

“This is the right model for a next-generation Voyager design,” says Robinson. Moreover, rather than addressing the hyperscalers’ specialised requirements involving the latest coherent chips and optical pluggable modules, the next Voyager design should be more like the cell-site gateway, says Robinson: “A little bit more bread-and-butter: go after the 100-gigabit market and make that more of a commodity.”  

ADVA also believes in a first-mover advantage with open networking designs such as the TIP cell-site gateway. 

“We have been involved for quite some time, as has Edgecore with which we have teamed up,” says Robinson. “We will be right at the front and I think we will reap the rewards for jumping in early.”

 

Part 2: Open networking, click here


Ciena goes stackable with 8180 'white box' and 6500 RLS

Ciena has unveiled two products - the 8180 coherent networking platform and the 6500 reconfigurable line system - that target cable and cellular operators that are deploying fibre deep in their networks, closer to subscribers.

The 6500 line system is also aimed at the data centre interconnect market given how the webscale players are experiencing a near-doubling of traffic each year.

Source: Ciena

The cable industry is moving to a distributed access architecture (DAA) that brings fibre closer to the network’s edge and splits part of the functionality of the cable modem termination system (CMTS) - the remote PHY - closer to end users. The cable operators are deploying fibre to boost the data rates they can offer homes and businesses.

Both Ciena’s 8180 modular switch and the 6500 reconfigurable line system are suited to the cable network. The 8180 is used to link the master headend with primary and secondary hub sites where aggregated traffic is collected from the digital nodes (see network diagram). The 8180 platforms will use the modular 6500 line system to carry the dense wavelength-division multiplexed (DWDM) traffic. 

“The [cable] folks that are modernising the access network are not used to managing optical networking,” says Helen Xenos, senior director, portfolio marketing at Ciena. “They are looking for simple platforms, aggregating all the connections that are coming in from the access.”

The 8180 can play a similar role for wireless operators, using DWDM to carry aggregated traffic for 4G and 5G networks.

Ciena says the 6500 optical line system will also serve the data centre interconnect market, complementing the WaveServer Ai, Ciena’s second-generation 1RU modular platform that has 2.4 terabits of client-side interfaces and 2.4 terabits of coherent capacity.     

 

With the 8180, you are only using the capacity on the fibre that you have traffic for 

 

“They [the webscale players] are looking for as many efficiencies as they can get from the platforms they deploy,” says Xenos. “The 6500 reconfigurable line system gives them the flexibility they need - a colourless, directionless, contentionless [reconfigurable optical add-drop multiplexer] and a flexible grid that extends to the L-band.” 

A research note from analyst house, Jefferies, published after the recent OFC show where Ciena announced the platforms, noted that in many cable networks, 6-strand fibre is used: two fibre pairs allocated for business services and one for residential. Adding the L-band to the existing C-band effectively doubles the capacity of each fibre pair, it noted.

 

The 8180

Ciena’s 8180 is a modular packet switch that includes coherent optics. The 8180 is similar in concept to the Voyager and Cassini white boxes developed by the Telecom Infra Project. However, the 8180 is a two-rack-unit (2RU) 6.4-terabit switch compared to the 1RU, 2-terabit Voyager and the 1.5RU 3.2-terabit Cassini. The 8180 also uses Ciena’s own 400-gigabit coherent DSP, the WaveLogic Ai, rather than merchant coherent DSP chips. 

The platform comprises 32 QSFP+/QSFP28 client-side ports, a 6.4-terabit switch chip and four replaceable modules or ‘sleds’, each capable of accommodating 800 gigabits of capacity. The options include an initial 400-gigabit line-side coherent interface (a sled with two coherent WaveLogic Ai DSPs will follow), an 8x100-gigabit QSFP28 sled, a 2x400-gigabit sled and also the option of an 800-gigabit module once such modules become available.

 

Source: Ciena

Using all four sleds as client-side options, the 8180 becomes a 6.4-terabit Ethernet switch. Using only coherent sleds instead, the packet-optical platform has a 1.6-terabit line-side capacity. And because there is a powerful switch chip integrated, the input ports can be over-subscribed. “With the 8180, you are only using the capacity on the fibre that you have traffic for,” says Xenos.
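The headline capacities follow from the port counts, assuming 100 Gb/s per QSFP28 client port and, on the line side, the initial single-DSP 400-gigabit sleds:

```latex
\text{all-client:}\quad 32 \times 100\ \text{Gb/s} + 4 \times 800\ \text{Gb/s} = 6.4\ \text{Tb/s}
\qquad
\text{all-coherent:}\quad 4 \times 400\ \text{Gb/s} = 1.6\ \text{Tb/s line side}
```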

 

6500 line system 

The 6500 reconfigurable line system is also a modular design. Aimed at the cable, wireless and data centre interconnect markets, it uses only a subset of the features of Ciena’s existing optical line systems.

“The 6500 software has a lot of capabilities that the content providers are not using,” says Xenos. “They just want to use it as a photonic layer.”

There are three 6500 reconfigurable line system platform sizes: 1RU, 2RU and 4RU. The chassis can be stacked and managed as one unit. Card options that fit within the chassis include amplifiers and reconfigurable optical add-drop multiplexers (ROADMs).

The amplifier options are a dual-line erbium-doped fibre amplifier card that includes an integrated bi-directional optical time-domain reflectometer (OTDR) used to characterise the fibre. There is also a half-line-width Raman amplifier card. The line system will support the C and L bands, as mentioned.

The reconfigurable line system also has ROADM cards: a 1x12 wavelength-selective switch (WSS) with an integrated amplifier, a colourless 16-channel add-drop card that supports channels of any size (flexible grid), and a full-width 1x32 WSS card. “The 1x32 would be used for colourless, directionless and contentionless [ROADM] configurations,” says Xenos.

The 6500 reconfigurable line system also supports open application programming interfaces (APIs) for telemetry, with a user able to program the platform to define the data streamed. “The platform can also be provisioned via REST APIs; something a content provider will do,” she says.
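As a flavour of what REST provisioning looks like in practice, the sketch below uses Python's requests library against a made-up endpoint. The host, resource path, payload fields and credentials are all invented for illustration; they are not Ciena's actual API.

```python
import requests

# Hypothetical REST provisioning call. The URL, resource path and JSON
# fields are invented for this sketch and are not Ciena's actual API.
BASE = "https://line-system.example.net/api/v1"

resp = requests.post(
    f"{BASE}/channels",
    json={"centre-frequency-thz": 193.1, "width-ghz": 75},  # flexible grid
    auth=("operator", "secret"),
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # the platform would return the provisioned channel
```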

Ciena is a member of the OpenROADM multi-source agreement and was involved in last year’s AT&T OpenROADM trial with its 6500 Converged Packet Optical Transport (POTS) platform. 

Will the 6500 reconfigurable line system be OpenROADM-compliant? 

“This card [and chassis form factor] could be used for OpenROADM if AT&T preferred this platform to the other [6500 Converged POTS] one,” says Xenos. “You also have to design the hardware to meet the specifications for OpenROADM.”

Ciena expects both platforms to be available by year-end. The 6500 reconfigurable line system will be in customer trials at the end of this quarter while the 8180 will be trialled by the end of the third quarter.


Will white boxes predominate in telecom networks?

Will future operator networks be built using software, servers and white boxes or will traditional systems vendors with years of network integration and differentiation expertise continue to be needed? 

 

AT&T’s announcement that it will deploy 60,000 white boxes as part of its rollout of 5G in the U.S. is a clear move to break away from the operator pack.

The service provider has long championed network transformation, moving from proprietary hardware and software to a software-controlled network based on virtual network functions running on servers and software-defined networking (SDN) for the control switches and routers.

Glenn Wellbrock

Now, AT&T is going a stage further by embracing open hardware platforms - white boxes - to replace traditional telecom hardware used for data-path tasks that are beyond the capabilities of software on servers.

For the 5G deployment, AT&T will, over several years, replace traditional routers at cell and tower sites with white boxes, built using open standards and merchant silicon.   

“White box represents a radical realignment of the traditional service provider model,” says Andre Fuetsch, chief technology officer and president, AT&T Labs. “We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors.”

But other operators have reservations about white boxes. “We are all for open source and open [platforms],” says Glenn Wellbrock, director, optical transport network - architecture, design and planning at Verizon. “But it can’t just be open, it has to be open and standardised.”

Wellbrock also highlights the challenge of managing networks built using white boxes from multiple vendors. Who will be responsible for their integration or if a fault occurs? These are concerns SK Telecom has expressed regarding the virtualisation of the radio access network (RAN), as reported by Light Reading.

“These are the things we need to resolve in order to make this valuable to the industry,” says Wellbrock. “And if we don’t, why are we spending so much time and effort on this?”

Gilles Garcia, communications business lead director at programmable device company, Xilinx, says the systems vendors and operators he talks to still seek functionalities that today’s white boxes cannot deliver. “That’s because there are no off-the-shelf chips doing it all,” says Garcia. 

 

We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors

 

White boxes

AT&T defines a white box as an open hardware platform that is not made by an original equipment manufacturer (OEM).

A white box is a sparse design, built using commercial off-the-shelf hardware and merchant silicon, typically a fast router or switch chip, on which runs an operating system. The platform usually takes the form of a pizza box which can be stacked for scaling, while application programming interfaces (APIs) are used for software to control and manage the platform.

As AT&T’s Fuetsch explains, white boxes deliver several advantages. By using open hardware specifications for white boxes, they can be made by a wider community of manufacturers, shortening hardware design cycles. And using open-source software to run on such platforms ensures rapid software upgrades.

Disaggregation can also be part of an open hardware design. Here, different elements are combined to build the system. The elements may come from a single vendor such that the platform allows the operator to mix and match the functions needed. But the full potential of disaggregation comes from an open system that can be built using elements from different vendors. This promises cost reductions but requires integration, and operators do not want the responsibility and cost of both integrating the elements to build an open system and integrating the many systems from various vendors.   

Meanwhile, in AT&T’s case, it plans to orchestrate its white boxes using the Open Networking Automation Platform (ONAP) - the ‘operating system’ for its entire network made up of millions of lines of code. 

ONAP is an open software initiative, managed by The Linux Foundation, that was created by merging a large portion of AT&T’s original ECOMP software developed to power its software-defined network and the OPEN-Orchestrator (OPEN-O) project, set up by several companies including China Mobile and China Telecom.   

AT&T has also launched several initiatives to spur white-box adoption. One is an open operating system for white boxes, known as the dedicated network operator system (dNOS). This too will be passed to The Linux Foundation.

The operator is also a key driver of the open reconfigurable optical add/drop multiplexer multi-source agreement, the OpenROADM MSA. Recently, the operator announced it will roll out OpenROADM hardware across its network. AT&T has also unveiled the Akraino open-source project, again under the auspices of The Linux Foundation, to develop edge computing-based infrastructure.

At the recent OFC show, AT&T said it would limit its white box deployments in 2018 as issues are still to be resolved but that come 2019, white boxes will form its main platform deployments.

Xilinx highlights how certain data-intensive tasks - in-line security performed on a per-flow basis, routing exceptions, telemetry data, and deep packet inspection - are beyond the capabilities of white boxes. “White boxes will have their place in the network but there will be a requirement, somewhere else in the network for something else, to do what the white boxes are missing,” says Garcia.

 

Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction

 

AT&T also said at OFC that it expects considerable capital expenditure savings - as much as a halving - using white boxes, and talked about adopting, in future, quarterly reverse auctions to buy its equipment.

Niall Robinson, vice president, global business development at ADVA Optical Networking, questions where such cost savings will come from: “Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction.” He also says that there are markets that already use reverse auctioning, but typically it is for items such as components. “For a carrier the size of AT&T to be talking about that, that is a big shift,” says Robinson.

 

Layer optimisation

Verizon’s Wellbrock first aired reservations about open hardware at Lightwave’s Open Optical Conference last November.

In his talk, Wellbrock detailed the complexity of Verizon’s wide area network (WAN) that encompasses several network layers. At layer-0 are the optical line systems - terminal and transmission equipment - onto which the various layers are added: layer-1 Optical Transport Network (OTN), layer-2 Ethernet and layer-2.5 Multiprotocol Label Switching (MPLS). According to Verizon, the WAN takes years to design and a decade to fully exploit the fibre.

“You get a significant saving - total cost of ownership - from combining the layers,” says Wellbrock. “By collapsing those functions into one platform, there is a very real saving.” But there is a tradeoff: encapsulating the various layers’ functions into one box makes it more complex.

“The way to get round that complexity is going to a Cisco, a Ciena, or a Fujitsu and saying: ‘Please help us with this problem’,” says Wellbrock. “We will buy all these individual piece-parts from you but you have got to help us build this very complex, dynamic network and make it work for a decade.”

 

Next-generation metro

Verizon has over 4,000 nodes in its network, each one deploying at least one ROADM - a Coriant 7100 packet optical transport system or a Fujitsu Flashwave 9500. Certain nodes employ more than one ROADM; once one is filled, a second is added.

“Verizon was the first to take advantage of ROADMs and we have grown that network to a very large scale,” says Wellbrock.

The operator is now upgrading the nodes using more sophisticated ROADMs as part of its next-generation metro. Now each node will need only one ROADM that can be scaled. In 2017, Verizon started the ramp, upgrading several hundred ROADM nodes; this year it says it will hit its stride before completing the upgrades in 2019.

“We need a lot of automation and software control to hide the complexity of what we have built,” says Wellbrock. This is part of Verizon’s own network transformation project. Instead of engineers and operational groups being in charge of particular network layers and overseeing pockets of the network - each pocket being a ‘domain’ - Verizon is moving to a system where all the network layers, including ROADMs, are managed and orchestrated using a single system.

The resulting software-defined network comprises a ‘domain controller’ that handles the lower layers within a domain and an automation system that co-ordinates between domains.

“Going forward, all of the network will be dynamic and in order to take advantage of that, we have to have analytics and automation,” says Wellbrock.

 

In this new world, there are lots of right answers and you have to figure what the best one is

 

Open design is an important element here, he says, but the bigger return comes from analytics and automation of the layers and from the equipment.

This is why Wellbrock questions what white boxes will bring: “What are we getting that is brand new? What are we doing that we can’t do today?”

He points out that the building blocks for ROADMs - the wavelength-selective switches and multicast switches - originate from the same sub-system vendors, such that the cost points are the same whether a white box or a system vendor’s platform is used. And using white boxes does nothing to make the growing network complexity go away, he says.

“Mixing your suppliers may avoid vendor lock-in,” says Wellbrock. “But what we are saying is vendor lock-in is not as serious as managing the complexity of these intelligent networks.”

Wellbrock admits that network transformation with its use of analytics and orchestration poses new challenges. “I loved the old world - it was physics and therefore there was a wrong and a right answer; hardware, physics and fibre and you can work towards the right answer,” he says. “In this new world, there are lots of right answers and you have to figure what the best one is.”

 

Evolution

If white boxes can’t perform all the data-intensive tasks, then they will have to be performed elsewhere. This could take the form of accelerator cards for servers using devices such as Xilinx’s FPGAs.

Adding such functionality to the white box, however, is not straightforward. “This is the dichotomy the white box designers are struggling to address,” says Garcia. A white box is light and simple, so adding extra functionality requires customisation of its operating system to run these applications. And this runs counter to the white-box concept, he says.

 

We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have

 

Yet this is just what he is seeing from traditional systems vendors, which are developing designs that bring differentiation to their platforms to counter the white-box trend.

One recent example that fits this description is Ciena’s two-rack-unit 8180 coherent network platform. The 8180 has a 6.4-terabit packet fabric, supports 100-gigabit and 400-gigabit client-side interfaces and can be used solely as a switch or, more typically, as a transport platform with client-side and coherent line-side interfaces.

The 8180 is not a white box but has a suite of open APIs and has a higher specification than the Voyager and Cassini white-box platforms developed by the Telecom Infra Project.  

“We are going through a set of white-box evolutions,” says Garcia. “We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have.”

Whether FPGAs will find their way into white boxes, Garcia will not say. What he will say is that Xilinx is engaged with some of these players to have a good view as to what is required and by when.

It appears inevitable that white boxes will become more capable, to handle more and more of the data-plane tasks, and as a response to the competition from traditional system vendors with their more sophisticated designs.

AT&T’s white-box vision is clear. What is less certain is whether the rest of the operator pack will move to close the gap.


TIP tackles the growing complexity of open design

Axel Clauberg outlined the challenges facing the telecom industry in his opening address at the recent Telecom Infra Project (TIP) summit.

The TIP chairman and vice president, technology innovation at Deutsche Telekom, described how the relentless growth of IP traffic is causing production costs to rise yet the average revenue per subscriber for bundled communications services is flat or dipping. “Not a good situation to be in,” he said. The industry is also investing in new technologies, including the rollout of 5G.

Niall Robinson

The industry needs a radically different approach if it is to achieve capital efficiency, said Clauberg, and that requires talent to drive innovation. Garnering such talent needs an industry-wide effort and this is the motivation for TIP.

 

TIP

Established in 2016, TIP brings together internet giants Facebook and Microsoft with leading telecom operators, systems vendors, components players and others to co-develop open-source designs for telecoms. In the last year, TIP has added 200 companies to total over 500 members. 

TIP used its second summit, held in Santa Clara, California, to unveil several new project groups. These include End-to-End Network Slicing, Edge Computing, and Artificial Intelligence and Applied Machine Learning.

There are three main project categories within TIP: access, backhaul, and core and management. Access now includes six project groups including the new Edge Computing, backhaul has two, while core and management has three including the new network slicing and artificial intelligence initiatives. TIP has also established what it calls ecosystem acceleration centres and community labs.

“TIP is definitely bigger and, I think, better,” says Niall Robinson, vice president, global business development at ADVA Optical Networking. “As with any organisation there is always initial growing pains and TIP has gone through those.”

 

Open Optical Packet Transport

ADVA Optical Networking is a member of one of TIP’s more established projects, the Open Optical Packet Transport (OOPT) group, which announced the 1-rack-unit Voyager packet transport and routing box last year.

OOPT itself comprises four work groups: Optical Line System, Disaggregated Transponders and Chips, Physical Simulation Environment and the Common API. A fifth group is being considered to tackle routing and software-defined interconnection.

Robinson highlights two activities of the OOPT’s subgroups to illustrate the scope and progress of TIP.

The Common API group in which Robinson is involved aims to bring commonality to the various open source groups’ application programming interfaces (APIs).

 

Open is great but there are so many initiatives out there that it is really not helping the market


The Open Networking Foundation alone has several initiatives: the Central Office Re-architected as a Datacenter (CORD), the Open Network Operating System (ONOS) SDN controller, the Open Core Model, and the Transport API. Other open initiatives developing APIs include OpenConfig, set up by operators, the Open API initiative, and OpenROADM.

“Open is great but there are so many initiatives out there that it is really not helping the market,” says Robinson. An operator may favour a particular system vendor’s equipment that does not support a particular API. Either the operator or the vendor must then develop something, a situation that can repeat itself many times for an operator. The goal of the Common API group’s work is to develop a mapping function between the software-defined networking (SDN) controller and the equipment so that any SDN controller can use these industry-initiative APIs.

Robinson’s second example is the work of the OOPT’s Disaggregated Transponders and Chips group that is developing a transponder abstraction interface. The goal is to make it easier for vendors to benefit from the functionality of a transponder’s coherent DSP independent of the particular chip used.

“For ADVA, when we build our own gear we pick a DSP and we have to get our firmware to work with it,” says Robinson. “We can’t change that DSP easily; it’s a custom interface.”

The goal of the work is to develop a transponder abstraction interface that sits between the higher-level functionality software and the coherent DSP. The transponder vendor will interface its particular DSP to the abstraction interface that will then allow a network element’s software to configure settings and get optical monitoring data.

“It doesn’t care or even know what DSP is used, all it is talking to is this common transponder abstraction interface,” says Robinson.
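A minimal sketch of the idea, with names invented for illustration rather than taken from TIP's actual interface definitions: the network element's software codes against the abstraction, and each vendor supplies an adapter wrapping its particular DSP.

```python
from abc import ABC, abstractmethod

# Invented illustration of a transponder abstraction interface: the
# higher-level software sees only the abstract class, while each vendor
# wraps its chosen coherent DSP behind an adapter.

class TransponderAbstraction(ABC):
    @abstractmethod
    def set_modulation(self, fmt: str) -> None: ...
    @abstractmethod
    def optical_monitoring(self) -> dict: ...

class VendorDSPAdapter(TransponderAbstraction):
    """One vendor's adapter, mapping common calls onto its DSP firmware."""
    def set_modulation(self, fmt: str) -> None:
        pass                                   # vendor-specific firmware call
    def optical_monitoring(self) -> dict:
        return {"pre_fec_ber": 1.2e-3, "osnr_db": 18.5}    # example readings

def bring_up(transponder: TransponderAbstraction) -> dict:
    # The software neither cares nor knows which DSP sits underneath.
    transponder.set_modulation("16QAM")
    return transponder.optical_monitoring()
```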

 

Cassini and Voyager platforms

Edgecore Networks has contributed its Cassini packet transponder white-box platform to the TIP OOPT group. Like Voyager, the platform uses the Broadcom StrataXGS Tomahawk 3.2-terabit switch chip. But instead of using built-in coherent interfaces based on Acacia’s AC-400 module, Cassini offers eight card slot options. Each slot can accommodate three module options: a coherent CFP2-ACO, a coherent CFP2-DCO or two QSFP28 pluggables. The Cassini platform also has 16 fixed QSFP28 ports.

Accordingly, the 1.5-rack-unit box can be configured as a 3.2 terabit switch using QSFP28 modules only or as a transport box with up to 1.6 terabits of client-side interfaces and 1.6 terabits of line-side coherent interfaces. This contrasts with the Voyager that uses up to 2 terabits of the switch capacity with its dozen 100-gigabit client-side interfaces and 800 gigabits of coherent line-side capacity.
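Those totals are consistent with 100 Gb/s per QSFP28 port and 200 Gb/s per coherent CFP2 slot (the rates needed for the stated figures to add up):

```latex
\text{all-Ethernet:}\quad (16 + 8 \times 2) \times 100\ \text{Gb/s} = 3.2\ \text{Tb/s}
\qquad
\text{transport:}\quad 16 \times 100\ \text{Gb/s} = 1.6\ \text{Tb/s client},\;\;
8 \times 200\ \text{Gb/s} = 1.6\ \text{Tb/s line}
```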

There have also been developments with TIP’s Voyager box. Cumulus Networks has replaced SnapRoute to provide the platform’s Linux network operating system. ADVA Optical Networking, a seller of the Voyager, says the box will likely be generally available in the first quarter of 2018.

Robinson says TIP will ultimately be judged based on what it ends up delivering. “Eighteen months is not enough time for the influence of something like this to be felt,” he says.

 

TIP Summit 2017 talks, click here


Giving telecom networks a computing edge

Operators have long sought to provide their users with a consistent quality of service. For cellular, it is why ubiquitous coverage is important, for example.

But a subtler approach is taking hold as networks evolve whereby what a user does will change depending on their location. And what will enable this is edge computing.

 Source: Senza Fili Consulting

Edge computing

“This is an entirely new concept,” says Monica Paolini, president and founder at Senza Fili Consulting. “It is a way to think about service which is going to have a profound impact.”

Edge computing has emerged as a consequence of operators virtualising their networks. Virtualisation of network functions hosted in the cloud has promoted a trend to move telecom functionality to the network core. Functionality does not need to be centralised but initially, that has been the trend, says Paolini, especially given how virtualisation promotes the idea that network location no longer matters.

“That is a good story, it delivers a lot of cost savings,” says Paolini, who recently published a report on edge computing. * 

But a realisation has emerged across the industry that location does matter; centralisation may save the operator some costs but it can impact performance. Depending on the application, it makes sense to move servers and storage closer to the network edge.

The result has been several industry initiatives. One is Mobile Edge Computing (MEC), being developed by the European Telecommunications Standards Institute (ETSI). In March, ETSI renamed the Industry Specification Group undertaking the work to Multi-access Edge Computing to reflect the operators’ requirements beyond just cellular.

“What Multi-access Edge Computing does is move some of the core functionality from a central location to the edge,” says Paolini.

Another initiative is M-CORD, the mobile component of the Central Office Re-architected as a Datacenter initiative, overseen by the Open Networking Lab (ON.Lab) non-profit organisation. Other initiatives Paolini highlights include the Open Compute Project, Open Edge Computing and the Telecom Infra Project.

 

This is an entirely new concept. It is a way to think about service which is going to have a profound impact.

 

Location

The exact location of the ‘edge’ where the servers and storage reside is not straightforward.

In general, edge computing is located somewhere between the radio access network (RAN) and the network core. Putting everything at the RAN is one extreme but that would lead to huge duplication of hardware and exceed what RAN locations can support. Equally, edge computing has arisen in response to the limitations of putting too much functionality in the core.   

The matter of location is blurred further when one considers that the RAN itself is movable to the core using the Cloud RAN architecture.

Paolini cites another reason why the location of edge computing is not well defined: the industry does not yet know, and answers will only come in the next year or two when operators start trialling the technology. “There is going to be some trial and error by the operators,” she says.

 

Use cases

An enterprise located across a campus is one example use of edge computing, given how much of the content generated stays on-campus. If the bulk of voice calls and data stays local, sending traffic to the core and back makes little sense. There are also security benefits to keeping data local. An enterprise may also use edge computing to run services locally and share them across networks, for example using cellular or Wi-Fi for calls.

Another example is to install edge computing at a sports stadium, not only to store video of the game’s play locally - again avoiding going to the core and back with content - but also to cache video from games taking place elsewhere for viewing by attending fans.

Virtual reality and augmented reality are other applications that require low latency, another performance benefit of having local computation.

Paolini expects the uptake of edge computing to be gradual. She also points to its challenging business case, or at least how operators typically assess a business case may not tell the full story.

Operators view investing in edge computing as an extra cost but Paolini argues that operators need to look carefully at the financial benefits. Edge computing delivers better utilisation of the network and lower latency. “The initial cost for multi-access edge computing is compensated for by the improved utilisation of the existing network,” she says.

When Paolini started the report, it was to research low latency and the issues of distributed network design, reliability and redundancy. But she soon realised that multi-access edge computing was something broader and that edge computing extends beyond what ETSI is doing.

This is not like an operator rolling out LTE and reporting to shareholders how much of the population now has coverage. “It is a very different business to learn how to use networks better,” says Paolini.  

 

* The report is titled Power at the edge: MEC, edge computing, and the prominence of location.


BT bolsters research in quantum technologies

BT is increasing its investment in quantum technologies. “We have a whole team of people doing quantum and it is growing really fast,” says Andrew Lord, head of optical communications at BT.

The UK incumbent is working with companies such as Huawei, ADVA Optical Networking and ID Quantique on quantum cryptography, used for secure point-to-point communications. And in February, BT joined the Telecom Infra Project (TIP), and will work with Facebook and other TIP members at BT Labs in Adastral Park and at London’s Tech City. Quantum computing is one early project.

Andrew Lord

The topics of quantum computing and data security are linked. The advent of quantum computers promises to break the encryption schemes securing data today, while developments in quantum cryptography, coupled with advances in mathematics, promise new schemes resilient to the quantum computer threat.

 

Securing data transmission

To create a secure link between locations, special digital keys are used to scramble data. Two common data encryption schemes are used, based on symmetric and asymmetric keys. 

A common asymmetric key scheme is public key cryptography, which uses a public and private key pair that are uniquely related. The public key is published along with its user’s name. Any party wanting to send data securely to the user looks up their public key and uses it to scramble the data. Only the user, who holds the associated private key, can unscramble the data. A widely used public-key cryptosystem is the RSA algorithm.
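To make the asymmetry concrete, here is a minimal sketch using the open-source Python cryptography library: an RSA key pair is generated, a short message is scrambled with the public half, and only the private half can recover it. The message and parameters are illustrative, not a production configuration.

```python
# A minimal sketch of public key cryptography using the open-source
# Python 'cryptography' library. Anyone holding the published public
# key can scramble a message; only the private-key holder can recover it.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half is published

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"a short secret message", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"a short secret message"
```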

 

There are algorithms that can be run on quantum computers that can crack RSA. Public key crypto has a big question mark over it in the future and anything using public key crypto now also has a question mark over it.

 

In contrast, symmetric schemes use the same key at both link ends to lock and unlock the data. A well-known symmetric key algorithm is the Advanced Encryption Standard, which uses keys up to 256 bits long (AES-256); the more bits, the more secure the encryption.

The issue with a symmetric key scheme, however, is getting the key to the recipient without it being compromised. One way is to deliver the secret key using a security guard handcuffed to a case. An approach more befitting the digital age is to send the secret key over a secure link, and here public key cryptography can be used: an asymmetric key encrypts the symmetric key for transmission to the destination prior to secure communication.
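That hybrid pattern can be sketched in a few lines, again using the Python cryptography library with placeholder data: a fresh AES-256 key encrypts the bulk traffic, and RSA wraps only that short key for delivery.

```python
# A sketch of the hybrid scheme: AES-256 protects the bulk data and
# RSA wraps only the short symmetric key for delivery. Placeholder
# data throughout; this is illustrative, not a hardened protocol.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The recipient's key pair; in practice the public half is already published.
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. Encrypt the bulk data with a fresh symmetric AES-256 key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"bulk traffic...", None)

# 2. Wrap only the short AES key with the recipient's public key.
wrapped_key = recipient.public_key().encrypt(aes_key, oaep)

# 3. The recipient unwraps the AES key and decrypts the data.
recovered_key = recipient.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"bulk traffic..."
```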

But what worries governments, enterprises and the financial community is the advent of quantum computing and the risk it poses to cracking public key algorithms which are the predominant way data is secured. Quantum computers are not yet available but government agencies and companies such as Intel, Microsoft and Google are investing in their development and are making progress.

Michele Mosca estimates that there is a 50 percent chance that a quantum computer will exist by 2030. Professor Mosca, co-founder of the Institute for Quantum Computing at the University of Waterloo, Canada and of the security firm, evolutionQ, has a background in cyber security and has researched quantum computing for 20 years.

This is a big deal, says BT’s Lord. “There are algorithms that can be run on quantum computers that can crack RSA,” he says. “Public key crypto has a big question mark over it in the future and anything using public key crypto now also has a question mark over it.”

A one-in-two chance by 2030 suggests companies have time to prepare, but that is not the case. Companies need to keep data confidential for a number of years, which means they need to protect it against the threat of quantum computers at least as many years in advance: cyber-criminals could intercept and cache encrypted data today and wait for the advent of quantum computers to crack it.
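The arithmetic behind this ‘intercept now, decrypt later’ risk is simple to sketch. The inputs below are assumptions for illustration, pairing Mosca’s estimate of 2030 with a hypothetical ten-year confidentiality requirement.

```python
# Illustrative 'intercept now, decrypt later' arithmetic. The inputs
# are assumptions, not predictions: Mosca's one-in-two estimate of a
# quantum computer by 2030, and data that must stay secret for 10 years.
quantum_year = 2030
confidentiality_years = 10

# Data encrypted after this year could still be confidential when a
# quantum computer arrives to crack the cached ciphertext.
at_risk_from = quantum_year - confidentiality_years
print(f"Data encrypted from {at_risk_from} onwards is already exposed.")
```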

 

Upping the game

The need to have secure systems in place years in advance of quantum computer systems is leading security experts and researchers to pursue two approaches to data security. One uses maths while the other is based on quantum physics.

Maths promises new algorithms that are not vulnerable to quantum computing, known as post-quantum or quantum-resistant techniques. Several approaches are being researched, including lattice-based, code-based and hash-function-based techniques, but these will take several years to develop. Such algorithms are deemed secure because they are based on sound maths believed resilient to algorithms run on quantum computers. Equally, though, their security remains provisional: techniques to break them have not yet been widely investigated, by researchers and cyber-criminals alike.

The second approach, based on physics, uses quantum mechanics to distribute keys across an optical link, a process that is inherently secure.

“Do you pin your hopes on a physics theory [quantum mechanics] that has been around for 100 years or do you base it on maths?” says BT’s Lord. “Or do you do both?”

 

In the world of the very small, things are linked, even though they are not next to each other

 

Quantum cryptography 

One way to create a secure link is to send the information encoded on photons - particles of light. Here, each photon carries a single bit of the key.

If an adversary steals a photon, it is not received and, equally, they are taking information that is of no use to them, says Lord. A more sophisticated attack is to measure the photon as it passes through, but here the eavesdropper comes up against the quantum mechanical effect whereby measuring a photon changes its state. The transmitter and receiver typically reserve, at random, a small number of the key’s photons to detect a potential eavesdropper. If the values the receiver measures differ from those sent, the discrepancy alerts them that the link has been compromised.
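A toy simulation illustrates why an intercept-and-resend attack is detectable. In a BB84-style scheme, an eavesdropper guessing the measurement basis is wrong half the time, which shows up as roughly a 25 per cent error rate in the sampled bits the two ends compare. This is a simplified model of the principle, not of any deployed system.

```python
# A toy BB84-style simulation, for illustration only. An
# intercept-and-resend eavesdropper guesses the wrong measurement
# basis half the time, raising the error rate on the compared
# ('sifted') bits from ~0% to ~25%.
import random

def measure(bit, prep_basis, meas_basis):
    # Matching bases read the bit faithfully; a mismatched basis
    # yields a random outcome, mimicking quantum projection.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(trials=100_000, eavesdrop=False):
    errors = sifted = 0
    for _ in range(trials):
        sent = random.randint(0, 1)
        a_basis, b_basis = random.choice("+x"), random.choice("+x")
        on_wire, wire_basis = sent, a_basis
        if eavesdrop:
            e_basis = random.choice("+x")
            on_wire = measure(sent, a_basis, e_basis)  # Eve measures...
            wire_basis = e_basis                       # ...and resends in her basis
        received = measure(on_wire, wire_basis, b_basis)
        if a_basis == b_basis:   # keep only the basis-matched 'sifted' bits
            sifted += 1
            errors += received != sent
    return errors / sifted

print(error_rate())                 # ~0.0: clean link
print(error_rate(eavesdrop=True))   # ~0.25: eavesdropper revealed
```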

The issue with such quantum key distribution techniques is that the distance a single photon can be sent is limited to a few tens of kilometres. If longer links are needed, intermediate trusted sites are used to regenerate the key, and these sites must themselves be kept secure.

Entanglement, whereby two photons are created such that they are linked even if they are physically in separate locations, is one way researchers are looking to extend the distance keys can be distributed. With such entangled photons, any change or measurement of one instantly affects the twin photon. “In the world of the very small, things are linked, even though they are not next to each other,” says Lord.

Entanglement could be used by quantum repeaters to extend the distance over which keys can be distributed, not least via satellites, says Lord: “A lot of work is going on how to put quantum key distribution on orbiting satellites using entanglement.”

But quantum key distribution only solves a particular class of problem, such as protecting data sent across links - backing up data between a bank and a data centre, for example. The technique also depends on light and so is not as widely applicable as post-quantum algorithms. "There is a view emerging in the industry that you throw both of these techniques [post-quantum algorithms and quantum key distribution] especially at data streams you want to keep secure," says Lord.

 

Practicalities

BT, working with Toshiba and optical transport equipment maker ADVA Optical Networking, has already demonstrated a quantum-protected link operating at 100 gigabits-per-second.

BT’s Lord says that while quantum cryptography has been a relatively dormant topic for the last decade, this is now changing. “There is lots of investment around the world and in the UK, with millions poured in by the government,” he says. BT is also encouraged that more companies are entering the market, including Huawei.

“What is missing is still a little bit more industrialisation,” says Lord. “Quantum physics is pretty sound but we still need to check that the way this is implemented, there are no ways of breaching it; to be honest we haven't really done that yet.”

BT says it has spent the last few months talking to financial institutions and claims there is much interest, especially with quantum computing getting much closer to commercialisation. “That is going to force people to make some decisions in the coming years,” says Lord. 


The Open ROADM MSA adds new capabilities in Release 2.0

The Open ROADM Multi-Source Agreement (MSA) group, which promotes open reconfigurable add-drop multiplexers (ROADMs), expects to publish its second release in the coming months. The latest MSA specifications extend optical reach by including line amplification, and add support for flexible grid and lower-speed tributaries with OTN switching.

Xavier Pougnard

The Open ROADM MSA, set up by AT&T, Ciena, Fujitsu and Nokia, is promoting interoperability between vendors’ ROADMs by specifying open interfaces for their control using software-defined networking (SDN) technology. Now, one year on, the MSA has 10 members, equally split between operators and systems vendors.

Orange joined the Open ROADM MSA last July and says it shares AT&T’s view that optical networks lack openness given the proprietary features of the vendors’ systems.

“As service providers, we suffer from lock-in where our networks are composed of equipment from a single vendor,” says Xavier Pougnard, R&D manager for transport networks at Orange Labs. “When we want to introduce another vendor for innovation or economic reasons, it is nearly impossible.”

This is what the MSA group wants to tackle with its open specifications for the data and management planes. The goal is to enable an operator to swap equipment without having to change its control software, by using a common, open management interface. “Right now, for every new provider, we need IT development for the management of the [network] node,” says Pougnard.

 

As service providers, we suffer from lock-in where our networks are composed of equipment from a single vendor. When we want to introduce another vendor for innovation or economic reasons, it is nearly impossible.

 

MSA status

The Open ROADM MSA has published two sets of specifications as part of its Release 1.2. One set tackles 100-gigabit data plane interoperability by defining what is needed for two line-side transponders to talk to each other. The second set uses the YANG modelling language to allow the management of the transponders and ROADMs.

The group is now working on Release 2.0 that will enable longer reaches and exploit OTN switching. The specifications will also support flexgrid whereas Release 1.2 specifies 50GHz fixed channels only. Release 2.0 is expected to be completed in the second quarter of 2017. “Service providers would like it as soon as possible,” says Pougnard.

Pougnard highlights the speed of development of an open MSA model, with new releases issued every few months, far quicker than traditional standardisation bodies. It was this frustration with the slow pace of the standards bodies that led Orange to join the Open ROADM MSA.

Orange stresses that Open ROADM will not be used for all dense wavelength-division multiplexing cases; there will be applications requiring extended performance where a specific vendor's equipment will be used. “We do specify the use of an FEC [forward error correction] in the specification but there are more powerful FECs that extend the reach for 100-gigabit interfaces,” says Pougnard. But for many applications, the flexibility offered by the MSA trumps raw performance.

 

Trials

AT&T detailed in December a network demonstration of the Open ROADM technology. The operator used a 100-gigabit optical wavelength in its Dallas area network to connect two IP-MPLS routers using transponders and ROADMs from Ciena and Fujitsu.

Orange is targeting its own lab trials in the first half of this year using a simplified OpenDaylight SDN controller working with ROADMs from three systems vendors. “We want to showcase the technology and prove the added value of an open ROADM,” says Pougnard. 

Orange is also a member of the Telecom Infra Project, a venture that includes Facebook and 10 operators to tackle telecom networks from access to the core. The two groups have had discussions about areas of possible collaboration. But while the Open ROADM MSA wants to promote a single YANG model that includes the amplifiers of the line system, TIP expects there to be more than a single model. The two organisations also differ in their philosophies: the Open ROADM MSA concerns itself with the interfaces to the platforms whereas TIP also tackles the internal design of platforms.

Coriant, which is a member of TIP and the Open ROADM MSA, is keen for alignment. "As an industry we should try to make sure that certain elements such as open API definitions are aligned between TIP and the Open ROADM MSA," says Uwe Fischer, CTO of Coriant.  

Meanwhile, the Open ROADM MSA will announce another vendor member soon and says additional operators are watching the MSA’s progress with interest.

Pougnard stresses how open developments such as the ROADM MSA require WDM engineers to tackle new things. “We have a tremendous shift in skills,” he says. “Now they need to work on the automation capability, on YANG modelling and Netconf.”  Netconf - the IETF’s network configuration protocol - uses YANG models to enable the management of network devices such as ROADMs.    
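As a sketch of what that new skill set looks like in practice, the snippet below uses the open-source Python ncclient library to talk Netconf to a device. The address, credentials and the ‘org-openroadm’ capability filter are placeholders; an Open ROADM-compliant node would advertise the MSA’s YANG models over this interface.

```python
# A minimal sketch of managing a network device over Netconf with the
# open-source Python 'ncclient' library. The device address and
# credentials below are hypothetical placeholders.
from ncclient import manager

with manager.connect(
    host="roadm.example.net",  # hypothetical device address
    port=830,                  # standard Netconf-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,
) as m:
    # List the YANG models the device advertises in its hello message.
    for capability in m.server_capabilities:
        if "org-openroadm" in capability:
            print(capability)

    # Retrieve the running configuration, returned as XML structured
    # according to those YANG models.
    print(m.get_config(source="running").data_xml[:500])
```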


TIP seeks to shake up the telecom marketplace

The telecom industry has long recognised the benefits of the Internet content providers' data-centre work practices. This has led operators to start embracing software-defined networking (SDN) and network functions virtualisation (NFV), whereby telecom functions that previously required custom hardware are executed as software on servers.

 Niall Robinson

Now, ten telcos, along with systems vendors, component makers and other players, have joined Facebook as part of the Telecom Infra Project, or TIP, to bring the benefits of open-source design and white-box platforms to telecoms. TIP has over 300 members and seven ongoing projects across three network segments of focus: access, backhaul, and core and management.

Facebook's involvement in a telecoms project is to benefit its business. The social media giant has 1.79 billion monthly active users and wants to make Internet access more broadly available. Facebook also has demanding networking requirements, both to link its data centres and to support growing video traffic. It also wants better networks for emerging services using technologies such as virtual reality headsets.

 

It is time to disrupt this closed market; it is time to reinvent everything we have today

 

The telecom operators want to collaborate with Facebook having seen how its Open Compute Project has created flexible, scalable equipment for the data centre. The operators also want to shake up the telecom industry. At the inaugural TIP summit held in November, the TIP chairman and CTO of SK Telecom, Alex Jinsung Choi, discussed how the scale and complexity of telecom networks make it hard for innovators and start-ups to enter the market. “It is time to disrupt this closed market; it is time to reinvent everything we have today,” said Choi during his TIP Summit talk.

 

Voyager

TIP unveiled a white-box packet-optical platform dubbed Voyager at the summit. The one-rack-unit (1RU) box is a backhaul project. Voyager was designed by Facebook and the platform’s specification has been made available to TIP.

Voyager is based on another platform Facebook has developed: the Wedge top-of-rack switch for the data centre. Wedge switches are now being made by several contract manufacturers. Each can be customised based on the operating system used and the applications loaded onboard. The goal is to adopt a similar approach with Voyager.

“Eventually, there will be something that is definitely market competitive in terms of hardware cost,” says Niall Robinson, vice president, global business development at ADVA Optical Networking, one of the companies involved in the Voyager initiative. “And you have got an open-source community developing a feature set from a software perspective.”

Other companies backing Voyager include Acacia Communications, Broadcom and Lumentum which are involved in the platform’s hardware design. Snaproute is delivering the software inside the box while first units are being made by the contract manufacturer, Celestica.

ADVA Optical Networking will provide a sales channel for Voyager and is interfacing it to its network management system. The systems vendor will also provide services and software support. Coriant is another systems vendor backing the project; it is providing networking support including routeing and switching as well as dense WDM transmission capabilities.

 

This [initiative] has shown me that the whole supply and design chains for transport can be opened up; I find that fascinating.

 

Robinson describes TIP as one of the most ambitious and creative projects he has been involved in. “It is less around the design of the box," he says. "It is the shaking up of the ecosystem, that is what TIP is about.” 

A 25-year involvement in transport has given Robinson an ingrained view that it is different to other aspects of telecom. For example, a vendor’s transport system must be at each end of the link due to the custom nature of platforms that are designed to squeeze maximum performance over a link. “In some cases, transport is different but what TIP maybe realises is that transport does not always have to be different,” says Robinson. “This [initiative] has shown me that the whole supply and design chains for transport can be opened up; I find that fascinating.”      

 

Specification

At the core of the 1RU Voyager is the Broadcom StrataXGS Tomahawk. The 3.2-terabit switch chip is also the basis of the Wedge top-of-rack switch. The Tomahawk features 128 x 25 gigabit-per-second (Gbps) serdes to enable 32 x 100 gigabit ports, and supports layer-2 switching and layer-3 routeing.

Voyager uses twelve 100 Gigabit Ethernet client-side pluggable interfaces and four 200-gigabit network interfaces based on Acacia’s AC-400 optical module. The AC-400 uses coherent optics and supports polarisation-multiplexed 16-state quadrature amplitude modulation (PM-16QAM). “If it was a pure transport box the input rate would equal the output rate but because it is a packet box, you can take advantage of layer 2 over-subscription,” says Robinson.

At layer-3 the total routeing capacity is 2 terabits, the sum of the client and network interfaces. “At layer-3, the Tomahawk chip does not know what is a client port and what is a networking port; they are just Ethernet ports on that device,” says Robinson.
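Robinson’s numbers are easy to verify with a short worked calculation, using the interface counts given above:

```python
# Worked numbers for Voyager's capacities as described above.
client_gbps = 12 * 100   # twelve 100 Gigabit Ethernet client ports
line_gbps = 4 * 200      # four 200-gigabit AC-400 network interfaces

# Layer 2: client traffic can exceed the line side (over-subscription).
print(client_gbps / line_gbps)     # 1.5x over-subscription

# Layer 3: every port is just an Ethernet port to the Tomahawk, so
# total routeing capacity is the sum of client and network interfaces.
print(client_gbps + line_gbps)     # 2,000 Gbps, i.e. 2 terabits
```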

ADVA Optical Networking chose to back Voyager because it does not have a packet optical platform in its product portfolio. Until now, it has partnered with Juniper Networks and Arista Networks when such functionality has been needed. “We are chasing certain customers that are interested in Voyager,” says Robinson. “We are enabling ourselves to play in the packet optical space with a self-contained box.”  

 

Status and roadmap

The Voyager is currently at beta-prototype status and has already been tested in trials. Equinix has tested the box working with Lumentum’s open line system over 140km of fibre, while operator MTN has also tested Voyager.

The platform is expected to be generally available in March or April 2017, by when ADVA Optical Networking will have completed the integration of Voyager with its network management system.

Robinson says there are two ways Voyager could develop.


One direction is to increase the interface and switching capacities of the 1RU box. Next-generation coherent digital signal processors that support higher baud rates will enable 400Gbps and even 600Gbps wavelengths using PM-64QAM. This could enable the line-side capacity to increase from the current 800Gbps to 2 or 3 terabits. And soon, 400Gbps client-side pluggable modules will become available. Equally, Broadcom is already sampling its next-generation Tomahawk II chip that has 6.4 terabits of switching capacity.

Another direction the platform could evolve is to add a backplane to connect multiple Voyagers. This has already been done with the Wedge '6-pack', which combines six Wedge switch cards. A Voyager 6-pack would result in a packet-optical platform with multiple terabits of switching and routeing capacity.

“This is an industry-driven initiative as opposed to a company-driven one,” says Robinson. “Voyager will go whichever way the industry thinks the lowest cost is.” 

 

Corrected on Dec 22nd. The AC-400 is a 5"x7" module and not as originally stated.

