TIP launches a disaggregated cell-site gateway design

Part 1: TIP white-box designs

Four leading telecom operators, members of the Telecom Infra Project (TIP), have developed a disaggregated white-box design for cell sites. The four operators are Orange, Telefonica, TIM Brazil and Vodafone. BT is also believed to be backing the open-design cell-site venture.

 Source: ADVA

The first TIP cell-site gateway product, known as Odyssey-DCSG, is being brought to market by ADVA and Edgecore Networks.

TIP isn’t the only open design framework developing cell-site gateways. In October, Edgecore Networks contributed a design to the Open Compute Project (OCP) based on an AT&T cell-site gateway specification. There are thus two overlapping open networking initiatives developing disaggregated cell-site gateways.

ADVA and Edgecore will provide the standardised cell-site gateways as operators deploy 5G. The platforms will support either commercial cell-site gateway software or open-source code. 

“We are providing a white box at cell sites to interconnect them back into the network,” says Bill Burger, vice president, business development and marketing, North America at Edgecore Networks. 

“The cell site is a really nice space for a white-box because volumes are high,” says Niall Robinson, vice president, global business development at ADVA. Vodafone alone has stated that it has 300,000 cell-site gateways that will need to be updated for 5G.

 

Odyssey-DCSG

A mobile cell site comprises remote radio units (RRUs) located on cell towers that interface to the mobile baseband unit (BBU). The baseband unit also connects to the disaggregated cell-site gateway with the two platforms communicating using IP-over-Ethernet. “The cell-site gateway is basically an IP box,” says Robinson. 

The Odyssey gateway design is based on a general-purpose Intel microprocessor and a 120-gigabit Broadcom Qumran-UX switch chip.

The white box’s link speeds to the baseband unit range from legacy 10 megabits per second (Mbps) to 1 gigabit per second (Gbps). The TIP gateway’s uplinks are typically two 25-gigabit SFP28 modules. In contrast, the OCP gateway design uses a higher-capacity 300-gigabit Qumran-AX switch chip and has two 100-gigabit QSFP28 uplink interfaces. “There is a difference in capacity [for the two designs] and hence in their cost,” says Robinson.
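As a rough sketch of how the two designs' port budgets compare, the headline numbers can be tallied. The downlink port count used here is an illustrative assumption, not a figure from either specification:

```python
# Toy comparison of the two gateway designs' headline capacities.
# The 24 x 1G downlink count is an illustrative assumption.

def fits_budget(switch_gbps, uplinks, downlinks):
    """Return total port capacity (Gbps) and whether it fits the switch budget."""
    total = sum(uplinks) + sum(downlinks)
    return total, total <= switch_gbps

# TIP Odyssey-DCSG: 120G Qumran-UX, two 25G SFP28 uplinks.
tip_total, tip_ok = fits_budget(120, [25, 25], [1] * 24)

# OCP/AT&T design: 300G Qumran-AX, two 100G QSFP28 uplinks.
ocp_total, ocp_ok = fits_budget(300, [100, 100], [1] * 24)

print(tip_total, tip_ok)   # 74 True
print(ocp_total, ocp_ok)   # 224 True
```

Either switch chip comfortably covers the port mix; the OCP design simply budgets for far fatter uplinks, hence the cost difference Robinson notes.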

 

The cell-site gateway is basically an IP box

 

The cell-site gateways can be connected in a ring with the traffic fed to an aggregation unit for transmission within the network.          

Robinson expects other players to join ADVA and Edgecore as project partners to bring the TIP gateway to market. To date, no software partners have been announced. First samples of the platform are expected in the first quarter of 2019 with general availability in the third quarter of 2019.

“Cell-site gateways is one of those markets that will benefit from driving a common design,” says Robinson. The goal is to get away from operators choosing proprietary platforms. “You have one design hitting the market and being chosen by the different end users,” he says. “Volumes go up and costs go down.”

ADVA is also acting as the systems integrator, offering installation, commissioning and monitoring services for the gateway. “People like disaggregation when costs are being added up but end users like things - especially in high volumes - to be reintegrated to make it easy for their operations folk,” says Robinson.

The disaggregated cell-site gateway project is part of TIP’s Open Optical and Packet Transport group, the same group that is developing the Voyager packet-optical white box.    

 

Source: Gazettabyte

 

Voyager

ADVA announced recently that the Voyager platform is now available, two years after being unveiled. 

The 1-rack-unit Voyager platform uses up to 2 terabits of the 3.2-terabit Broadcom Tomahawk switch chip's capacity: a dozen 100-gigabit client-side interfaces and 800 gigabits of coherent line-side capacity.
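The quoted port mix can be checked against the switch budget with a simple tally of the figures above:

```python
# Tallying the Voyager port capacity quoted above.
client_gbps = 12 * 100        # a dozen 100G client-side interfaces
line_gbps = 800               # coherent line-side capacity
used = client_gbps + line_gbps
tomahawk = 3200               # 3.2T Broadcom Tomahawk switch chip

print(used)             # 2000 Gbps used, i.e. 2 terabits
print(tomahawk - used)  # 1200 Gbps of switch capacity unused
```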

Robinson admits that the Voyager platform would have come to market earlier had SnapRoute - providing the platform’s operating system - not withdrawn from the project. Cumulus Networks then joined the project as SnapRoute’s replacement. 

“This shows both sides of the white-box model,” says Robinson: a collective project design can lose a key member, but the strength of a design community is that a replacement can step in.

TIP has yet to announce Voyager customers although the expectation is that this will happen in the next six months.

Robinson identifies two use cases for the platform: regional metro networks of up to 600km and data centre interconnect.

“Voyager has four networking ports allowing an optical network to be built,” says Robinson. “Once you have that in place, it is very easy to set up Layer-2 and Layer-3 services on top.” 

The second use case is data centre interconnect, providing enterprises with Layer-2 trunking connectivity services between sites. “Voyager is not just about getting bits across but about Layer-2 structures,” says Robinson. 

The Voyager is not targeted at leading internet content providers that operate large-scale data centres. They will use specific, leading-edge platforms. “The hyperscalers have moved on,” says Robinson. “The Voyager will play in a different market, a smaller-sized data centre interconnect space.”   

 

We will be right at the front and I think we will reap the rewards for jumping in early

 

Early-mover advantage

Robinson contrasts how the Voyager and TIP’s cell-site gateway were developed. Facebook developed and contributed the Voyager design to TIP and only then did members become aware of the design. 

With the cell-site gateway, a preliminary specification was developed with one operator - Vodafone - before it was taken to other operators. These companies, which make up a good portion of the cell-site market, worked on the specification before it was offered to the TIP marketplace for development. 

“This is the right model for a next-generation Voyager design,” says Robinson. Moreover, rather than addressing the hyperscalers’ specialised requirements involving the latest coherent chips and optical pluggable modules, the next Voyager design should be more like the cell-site gateway, says Robinson: “A little bit more bread-and-butter: go after the 100-gigabit market and make that more of a commodity.”  

ADVA also believes in a first-mover advantage with open networking designs such as the TIP cell-site gateway. 

“We have been involved for quite some time, as has Edgecore with which we have teamed up,” says Robinson. “We will be right at the front and I think we will reap the rewards for jumping in early.”

 

Part 2: Open networking


ONF’s operators seize control of their networking needs

  • The eight ONF service providers will develop reference designs addressing the network edge.
  • The service providers want to spur the deployment of open-source designs after becoming frustrated with the systems vendors failing to deliver what they need. 
  • The reference designs will be up and running before year-end.
  • New partners have committed to join since the consortium announced its strategic plan.

The service providers leading the Open Networking Foundation (ONF) will publish open designs to address next-generation networking needs.

The ONF service providers - NTT Group, AT&T, Telefonica, Deutsche Telekom, Comcast, China Unicom, Turk Telekom and Google - are taking a hands-on approach to the design of their networks after becoming frustrated with what they perceive as foot-dragging by the systems vendors.

“All eight [operators] have come together to say in unison that they are going to work inside the ONF to craft explicit plans - blueprints - for the industry for how to deploy open-source-based solutions,” says Timon Sloane, vice president of marketing and ecosystem at the ONF. 

The open-source organisation will develop ‘reference designs’ based on open-source components for the network edge. The reference designs will address developments such as 5G and multi-access edge and will be implemented using cloud, white box, network functions virtualisation (NFV) and software-defined networking (SDN) technologies.  

By issuing the designs and committing to deploy them, the operators want to attract select systems vendors that will work with them to fulfil their networking needs.

 

Remit

The ONF is known for such open-source projects as the Central Office Rearchitected as a Datacenter (CORD) and the Open Networking Operating System (ONOS) SDN controller.  

The ONF’s scope has broadened over the years, reflecting the evolving needs of its operator members. The organisation’s remit is to reinvent the network edge. “To apply the best of SDN, NFV and cloud technologies to enable not just raw connectivity but also the delivery of services and applications at the edge,” says Sloane.

The network edge spans from the central office to the cellular tower and includes the emerging edge cloud that extends the ‘edge’ to such developments as the connected car and drones. 

 

The operators have been hopeful the whole vendor community would step up and start building solutions and embracing this approach but it is not happening at the speed operators want, demand and need

 

“The edge cloud is called a lot of different things right now: multi-access edge computing, fog computing, far edge and distributed cloud,” says Sloane. “It hasn’t solidified yet.”  

One ONF open-source project is the Open and Disaggregated Transport Network (ODTN), led by NTT. “ODTN is edge related but not exclusively so,” says Sloane. “It is starting off with a data centre interconnect focus but you should think of it as CORD-to-WAN connectivity.”  

The ONF’s operators spent months formulating the initiative, dubbed the Strategic Plan, after growing frustrated with a supply chain that has failed to deliver the open-source solutions they need. “The operators have been hopeful the whole vendor community would step up and start building solutions and embracing this approach but it is not happening at the speed operators want, demand and need,” says Sloane.

The ONF’s initiative signals to the industry that the operators are shifting their spending to open-source solutions and basing their procurement decisions on the reference designs they produce.

“It is a clear sign to the industry that things are shifting,” says Sloane. “The longer you sit on the sidelines and wait and see what happens, the more likely you are to lose your position in the industry.”

If operators adopt open-source software and use white boxes based on merchant silicon, how will systems vendors produce differentiated solutions?

“All this goes to show why this is disruptive and creating turbulence in the industry,” says Sloane.

Open-source design equates to industry collaboration to develop shared, non-differentiated infrastructure, he says. That means systems vendors can focus their R&D on tackling new issues such as running and automating networks, developing applications, and solving challenges such as next-generation radio access and radio-spectrum management.

“We want people to move with the mark,” says Sloane. “It is not just building a legacy business based on what used to be unique and expecting to build that into the future.” 

 

Reference designs

The operators have identified five reference designs: fixed and mobile broadband, multi-access edge, leaf-and-spine architectures, 5G at the edge, and next-generation SDN. 

The ONF has already done much work in fixed and mobile broadband with its residential and mobile CORD projects. Multi-access edge refers to developing one network to serve all types of customers simultaneously, using cloud techniques to shift networking resources dynamically as needed.

At first glance, it is unclear what the ONF can contribute to leaf-and-spine architectures. But the ONF is developing an SDN-controlled switch fabric that can perform advanced packet processing, not just packet forwarding.

 

The ONF’s initiative signals to the industry that the operators are shifting their spending to open-source solutions and basing their procurement decisions on the reference designs they produce.

 

Sloane says that many virtualised tasks today are run on server blades using processors based on the x86 instruction set. But offloading packet processing tasks to programmable switch chips - referred to as networking fabric - can significantly benefit the price-performance achieved.

“We can leverage [the] P4 [programming language for data forwarding] and start to do things people never envisaged being done in a fabric,” says Sloane, adding that the organisation overseeing P4 is going to merge with the ONF.  
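P4 expresses packet handling as match-action tables compiled onto the switch pipeline. As a rough illustration of the idea only - real P4 is its own language, not Python, and runs at line rate in hardware - a toy match-action lookup might look like:

```python
# Toy model of a P4-style match-action table (illustrative only; a real
# P4 program compiles such tables onto the switch pipeline).

def lookup(table, key, default_action):
    """A table maps a match key to an (action, parameters) pair."""
    return table.get(key, default_action)

# Table: destination prefix -> (action, parameters). Prefixes here are
# hypothetical; a real table would use longest-prefix matching.
ipv4_lpm = {
    "10.0.1.0/24": ("forward", {"port": 1}),
    "10.0.2.0/24": ("forward", {"port": 2}),
}

def process(packet):
    action, params = lookup(ipv4_lpm, packet["dst_prefix"], ("drop", {}))
    if action == "forward":
        return ("sent", params["port"])
    return ("dropped", None)

print(process({"dst_prefix": "10.0.1.0/24"}))     # ('sent', 1)
print(process({"dst_prefix": "192.168.0.0/16"}))  # ('dropped', None)
```

The point of offloading is that these table lookups run in the switch silicon rather than on an x86 server blade, which is where the price-performance gain Sloane describes comes from.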

The 5G reference design is one application where such a switch fabric will play a role. The ONF is working on implementing 5G network core functions and features such as network slicing, using the P4 language to run core tasks on intelligent fabric.  

The ONF has already done work separating the radio access network (RAN) controller from radio frequency equipment and aims to use SDN to control a pool of resources and make intelligent decisions about the placement of subscribers, workloads and how the available radio spectrum can best be used.     

The ONF’s fifth reference design addresses next-generation SDN and will use work that Google has developed and is contributing to the ONF.

The ONF manages the OpenFlow protocol, used to define the separation between the control and data-forwarding planes. But the ONF is the first to admit that OpenFlow overlooked matters such as equipment configuration and other operational issues. 

The ONF is now engaged in a next-generation SDN initiative. “We are taking a step back and looking at the whole problem, to address all the pieces that didn’t get resolved in the past,” says Sloane.

Google has also contributed two interfaces that allow device management and the ONF has started its Stratum project that will develop an open-source solution for white boxes to expose these interfaces. This software residing on the white box has no control intelligence and does not make any packet-forwarding decisions. That will be done by the SDN controller that talks to the white box via these interfaces. Accordingly, the ONF is updating its ONOS controller to use these new interfaces. 

 

Source: ONF

 

From reference designs to deployment 

The ONF has a clear process to transition its reference designs to solutions ready for network deployment.

The reference designs will be produced by the eight operators working with other ONF partners. “The reference design is to help others in the industry to understand where you might choose to swap in another open source piece or put in a commercial piece,” says Sloane. 

This explains how the components are linked to the reference design (see diagram above). The ONF also includes the concept of the exemplar platform, the specific implementation of the reference design. “We have seen that there is tremendous value in having an open platform, something like Residential CORD,” says Sloane. “That really is what the exemplar platform is.”      

The ONF says there will be one exemplar platform for each reference design but operators will be able to pick particular components for their implementations. The exemplar platform will inevitably also need to interface to a network management and orchestration platform such as the Linux Foundation’s Open Network Automation Platform (ONAP) or ETSI’s Open Source MANO (OSM).   

The process of refining the reference design and honing the exemplar platform built using specific components is inevitably iterative but once completed, the operators will have a solution to test, trial and, ultimately, deploy. 

The ONF says that since announcing the strategic plan a month ago, several new partners - as yet unannounced - have committed to join.

“The intention is to have the reference designs up and running before the end of the year,” says Sloane.  


NeoPhotonics samples its first CFP-DCO products

NeoPhotonics has entered the fray as a supplier of long-distance CFP pluggable modules that integrate the coherent DSP-ASIC chip with the optics. 

The company has announced two such CFP Digital Coherent Optics (CFP-DCO) modules: a 100 gigabit-per-second (Gbps) module and a dual-rate 100Gbps and 200Gbps one.

“Our rationale [for entering the CFP-DCO market] is we have all the optical components and the [merchant coherent] DSPs are now becoming available,” says Ferris Lipscomb (pictured), vice president of marketing at NeoPhotonics. “It is possible to make this product without developing your own custom DSP, with all the expense that entails.”

 

-DCO versus -ACO

The pluggable transceiver line-side market is split between Digital Coherent Optics and Analog Coherent Optics (ACO) modules.

Optical module makers are already supplying the more compact CFP2 Analog Coherent Optics (CFP2-ACO) transceivers. The CFP2-ACO integrates the optics only, with the accompanying coherent DSP-ASIC chip residing on the line card. The CFP2-ACO suits system vendors that have their own custom DSP-ASICs and can offer differentiated, higher-transmission performance while choosing the optics in a compact pluggable module from several suppliers.

In contrast, the CFP-DCO suits more standard deployments, and for those end-customers that do not want to be locked into a single vendor and a proprietary DSP. The -DCO is also easier to deploy. In China, currently undertaking large-scale 100-gigabit optical transport deployments, operators want a module that can be deployed in the field by a relatively unskilled technician. Deploying an ACO requires an engineer to perform the calibration due to the analogue interface between the module and the DSP, says NeoPhotonics.

The DCO also suits those systems vendors that do not have their own DSP and do not want to source a merchant coherent DSP and implement the analogue integration on the line card.

 

Our rationale [for entering the CFP-DCO market] is we have all the optical components and the [merchant coherent] DSPs are now becoming available 

 

 

One platform, two products

The two announced ClearLight CFP-DCO products are a 100Gbps module implemented using polarisation-multiplexed quadrature phase-shift keying (PM-QPSK), and a dual-rate module that supports both 100Gbps and 200Gbps using PM-QPSK and polarisation-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM), respectively.
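The two rates follow from the formats' bits per symbol. A back-of-envelope check, taking the roughly 32-gigabaud symbol rate as an assumption (in line with the "standard 32-35 Gbaud" figure mentioned later in the article):

```python
# Back-of-envelope coherent line rates. The 32 Gbaud symbol rate is an
# assumption, not a figure NeoPhotonics has confirmed for these modules.
BAUD_GBAUD = 32      # symbols per second, in gigabaud
POLARISATIONS = 2    # polarisation multiplexing doubles the bit rate
BITS_PER_SYMBOL = {"PM-QPSK": 2, "PM-16QAM": 4}

raw_gbps = {fmt: BAUD_GBAUD * POLARISATIONS * bits
            for fmt, bits in BITS_PER_SYMBOL.items()}

# Raw rates of 128G and 256G leave headroom above the 100G and 200G
# payloads for FEC and framing overhead.
print(raw_gbps)  # {'PM-QPSK': 128, 'PM-16QAM': 256}
```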

The two modules share the same optics and DSP-ASIC. Where they differ is in the software loaded onto the DSP and the host interface used. The lower-speed module has a 4 by 25-gigabit interface whereas the 200-gigabit CFP-DCO uses an 8 by 25-gigabit-wide interface. “The 100-gigabit CFP-DCO plugs into existing client-side slots whereas the 200-gigabit CFPs have to plug into custom designed equipment slots,” says Lipscomb.

The 100-gigabit CFP-DCO has a reach of over 1,000km and a power consumption under 24W. Lipscomb points out that the actual specifications, including power consumption, are negotiated on a customer-by-customer basis. The 200-gigabit CFP-DCO has a reach of 500km.

NeoPhotonics says it is using a latest-generation 16nm CMOS merchant DSP. NTT Electronics (NEL) and Clariphy have both announced 16nm CMOS coherent DSPs.

“We are designing to be able to second-source the DSP,” says Lipscomb. “There are currently only two merchant suppliers but there are others that have developments but are not yet at the point where they would be in the market.”

The CFP-DCO modules also support a flexible grid, fitting carriers within narrower 37.5GHz channels to increase the overall transmission capacity sent across a fibre’s C-band.
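To see where the capacity gain comes from, assume an extended C-band of roughly 4.8THz (an assumption for illustration, not a figure from the article):

```python
# Channel counts on a fixed 50GHz grid versus a 37.5GHz flexible-grid
# spacing. The 4.8THz extended C-band width is an assumption.
C_BAND_GHZ = 4800

channels = {grid: int(C_BAND_GHZ // grid) for grid in (50, 37.5)}
print(channels)  # {50: 96, 37.5: 128} - a third more carriers per fibre
```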

NeoPhotonics’s 100Gbps CFP-DCO is already sampling and is expected to be generally available in mid-2017, while the 200Gbps CFP-DCO is expected to be available one quarter later.

“For 200-gigabit, you need to have customers building slots,” says Lipscomb. “For 100-gigabit, there are lots of slots available that you can plug into; 200-gigabits will take a little bit longer.”

NeoPhotonics’ CFP-DCO delivers the line rate used by the Voyager white box packet-optical switch being developed as part of the Telecom Infra Project backed by Facebook and ten operators including Deutsche Telekom and SK Telecom. But the one-rack-unit Voyager packet-optical platform uses four 5"x7" modules, not pluggable CFP-DCOs, to achieve its total line rate of 800Gbps.

 

Roadmap

NeoPhotonics is developing coherent module designs that will use higher baud rates than the standard 32-35 gigabaud (Gbaud), such as 45Gbaud and 64Gbaud.

The company also plans to develop a CFP2-DCO. Such a module is expected around 2018 once lower-power DSP-ASICs become available that can fit within the 12W power envelope of the CFP2. Such merchant DSP-ASICs will likely be implemented in a more advanced CMOS process such as 12nm or even 7nm.

Acacia Communications is already sampling a CFP2-DCO. Acacia designs its own silicon photonics-based optics and the coherent DSP-ASIC.

NeoPhotonics is also considering future -ACO designs beyond the CFP2, such as the CFP8, the 400-gigabit OSFP form factor and even the CFP4. “We are studying it but we don't know yet which directions things are going to go,” says Lipscomb.

 

Corrected on Dec 22nd. The Voyager box does not use pluggable CFP-DCO modules.


The white box concept gets embraced at the optical layer

Lumentum has unveiled several optical white-box designs. To date the adoption of white boxes - pizza-box sized platforms used in large-scale data centres - has been at the electronic layer, for switching and routing applications.

 

Brandon Collings

White boxes have arisen to satisfy the data centre operators’ need for simple building-block functions, in large numbers, that they can control themselves.  

“They [data centre operators] started using very simple white boxes - rather simple functionality, much simpler than the large router companies were providing - which they controlled themselves using software-defined networking orchestrators,” says Brandon Collings, CTO of Lumentum. 

Such platforms have since evolved to deliver high-performance switching, controlled by third-party SDN orchestrators, and optimised for the simple needs of the data centre, he says. Now this trend is moving to the optical layer where the same flexibility of function is desired. Operators would like to better pair the functionality that they are going to buy with the exact functionality they need for their network, says Collings.

“There is no plan to build networks with different architectures to what is built today,” he says. “It is really about how do we disaggregate conventional platforms to something more flexible to deploy, to control, and which you can integrate with control planes that also manage higher layers of the network, like OTN and the packet layer.” 

 

White box products

Lumentum has a background in integrating optical functions such as reconfigurable optical add/drop multiplexers (ROADMs) and amplifiers onto line cards, known as its TrueFlex products. “That same general element is now the element being demanded by these white box strategies, so we are putting them in pizza boxes,” says Collings. 

At OFC, Lumentum announced several white box designs for linking data centres and for metro applications. Such designs are for large-scale data centre operators that use data centre interconnect platforms. But several such operators also have more complex, metro-like optical networking requirements. Traditional telcos such as AT&T are also interested in pursuing the approach.

The first Lumentum white box products include terminal and line amplifiers, a dense WDM multiplexer/ demultiplexer and a ROADM. These hardware boxes come with open interfaces so that they can be controlled by an SDN orchestrator and are being made available to interested parties. 

OpenFlow, which is used for electrical switches in the data centre, could be used with such optical white boxes. More likely candidates, however, are the RESTCONF and NETCONF protocols. “They are just protocols that are being defined to interface the orchestrator with a collection of white boxes,” says Collings.
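For illustration, the kind of NETCONF get-config RPC an orchestrator would send to such a white box can be sketched as follows (the SSH transport and message framing are omitted, and no real device is involved):

```python
# Building a minimal NETCONF <get-config> RPC with the standard library.
# Illustrative only: a real orchestrator would send this over SSH and
# parse the device's <rpc-reply>.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"  # NETCONF base namespace

rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
source = ET.SubElement(get_config, f"{{{NC}}}source")
ET.SubElement(source, f"{{{NC}}}running")  # query the running datastore

print(ET.tostring(rpc).decode())
```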

Lumentum’s mux-demux is defined as a white box even though it is passive and has no power or monitoring requirements. That is because the mux-demux is a distinct element that is not part of a platform.

AT&T is exploring the concept of a disaggregated ROADM. Collings says a disaggregated ROADM has two defining characteristics. One is that the hardware isn’t required to come with a full network control management system. “You can buy it and operate it without buying that same vendor’s control system,” he says. The second characteristic is that the ROADM is physically disaggregated - it comes in a pizza box rather than a custom, proprietary chassis.  


There remains a large amount of value between putting optical hardware in a pizza box and delivering an operating network

 

Lumentum: a systems vendor? 

The optical layer white box ecosystem continues to develop, says Collings, with many players having different approaches and different levels of ‘aggressiveness’ in pursuing the concept. There is also the issue of the orchestrators and who will provide them. Such a network control system could be written by the hyper-scale data centre operators or be developed by the classical network equipment manufacturers, says Collings.   

Collings says selling pizza boxes does not make Lumentum a systems vendor. “There is a lot of value-add that has to happen between us delivering a piece of hardware with simple open northbound control interfaces and a complete deployed, qualified, engineered system.”

Control software is needed, as is network engineering; value that traditional systems vendors have been adding. “That is not our expertise; we are not trying to step into that space,” says Collings. There remains a large amount of value between putting optical hardware in a pizza box and delivering an operating network, he says. 

This value and how it is going to be provided is also at the core of an ongoing industry debate. “Is it the network provider or the people that are very good at it: the network equipment makers, and how that plays out.”  

Lumentum’s white box ROADM was part of an Open Networking Lab proof-of-concept demonstration at OFC.  


Cyan's stackable optical rack for data centre interconnect

Demand for high-capacity links between data centres is creating a new class of stackable optical platform from equipment makers. Cyan has unveiled the N-Series, what it calls an open hyperscale transport platform. "It is a hardware and software system specifically for data centre interconnect," says Joe Cumello, Cyan's chief marketing officer. Cyan's announcement follows on from Infinera, which detailed its Cloud Xpress platform last year.

 

"The drivers for these [data centre] guys every day of the week is lowest cost-per-gigabit"

Joe Cumello

 
The amount of traffic moved between data centres can be huge. According to ACG Research, certain cloud-based applications shared between data centres can require between 40 and 500 terabits of capacity. This could be to link adjacent data centre buildings so that they appear as one large logical data centre, or to connect data centres across a metro, 20km to 200km apart. For data centres separated by greater distances, traditional long-haul links are typically sufficient.

Cyan says it developed the N-series platform following conversations conducted with internet content providers over the last two years. "We realised that the white box movement would make its way into the data centre interconnect space," says Cumello.

White box servers and white box switches, manufactured by original design manufacturers (ODMs), are already being used in the data centre due to their lower cost. Cyan is using a similar approach for its N-Series, using commercial-off-the-shelf hardware and open software.

"The drivers for these [data centre] guys every day of the week is lowest cost-per-gigabit," says Cumello.

 

N-Series platform

Cyan's N-Series N11 is a 1-rack-unit (1RU) box with a total capacity of 800 gigabits per second (Gbps). The 1RU shelf comprises two units, each using two client-side 100Gbps QSFP28s and a line-side interface that supports 100Gbps coherent transmission using PM-QPSK, or 200Gbps coherent using PM-16QAM. Transmission capacity can be traded against reach: at 100Gbps, optical transmission up to 2,000km is possible, while capacity can be doubled using 200Gbps lightpaths for links up to 600km. Cyan is using Clariphy's CL20010 coherent transceiver/framer chip. Stacking 42 of the 1RU shelves within a chassis results in an overall capacity - client side and line side - of 33.6 terabits.
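The capacity arithmetic above can be checked directly (counting the line side at its 200Gbps maximum, which is what the 800Gbps shelf figure implies):

```python
# Checking the N11 capacity figures quoted above.
units_per_shelf = 2
client_per_unit = 2 * 100    # two 100Gbps QSFP28 client ports
line_per_unit = 200          # one 200Gbps PM-16QAM lightpath

shelf_total = units_per_shelf * (client_per_unit + line_per_unit)
print(shelf_total)           # 800 Gbps per 1RU shelf

rack_total = 42 * shelf_total   # 42 stacked shelves, client plus line
print(rack_total / 1000)        # 33.6 terabits
```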

 

There is a whole ecosystem of companies competing to drive better capacity and scale

 

The N-Series N11 uses a custom line-side design but Cyan says that by adopting commercial-off-the-shelf design, it will benefit from the pluggable line-side optical module roadmap. The roadmap includes 200 Gbps and 400 Gbps coherent MSA modules, pluggable CFP2 and CFP4 analogue coherent optics, and the CFP2 digital coherent optics that also integrates the DSP-ASIC.

"There is a whole ecosystem of companies competing to drive better capacity and scale," says Cumello. "By using commercial-off-the-shelf technology, we are going to get to better scale, better density, better energy efficiency and better capacity."

To support these various options, Cyan has designed the chassis to support 1RU shelves with several front plate options including a single full-width unit, two half-width ones as used for the N11, or four quarter-width units.   

 

Open software

For software, the N-series platform uses a Linux networking operating system. Using Linux enables third-party applications to run on the N-series, and enables IT staff to use open source tools they already know. "The data centre guys use Linux and know how to run servers and switches so we have provided that kind of software through Cyan's Linux," says Cumello. Cyan has also developed its own networking applications for configuration management, protocol handling and statistics management that run on the Linux operating system.

 

The open software architecture of the N-Series. Also shown are the two units that make up a rack. Source: Cyan.

"We have essentially disaggregated the software from the hardware," says Cumello. Should a data centre operator choose a future, cheaper white-box interconnect product, he says, Cyan's applications and Linux networking operating system will still run on that platform.

The N-series will be available for customer trials in the second quarter and will be available commercially from the third quarter of 2015.        


WDM and 100G: A Q&A with Infonetics' Andrew Schmitt

The WDM optical networking market grew 8 percent year-on-year, with spending on 100 Gigabit now accounting for a fifth of the WDM market. So claims the first quarter 2014 optical networking report from market research firm, Infonetics Research. Overall, the optical networking market was down 2 percent, due to the continuing decline of legacy SONET/SDH.

In a Q&A with Gazettabyte, Andrew Schmitt, principal analyst for optical at Infonetics Research, talks about the report's findings.

 

Q: Overall WDM optical spending was up 8% year-on-year: Is that in line with expectations?

Andrew Schmitt: It is roughly in line with the figures I use for trend growth but what is surprising is how there is no longer a fourth quarter capital expenditure flush in North America followed by a down year in the first quarter. This still happens in EMEA but spending in North America, particularly by the Tier-1 operators, is now less tied to calendar spending and more towards specific project timelines.

This has always been the case at the more competitive carriers. A good example of this was the big order Infinera got in Q1, 2014.

 

You refer to the growth in 100G in 2013 as breathtaking. Is this growth not to be expected as a new market hits its stride? Or does the growth signify something else?

I got a lot of pushback for aggressive 100G forecasts in 2010 and 2011 when everyone was talking about, and investing in, 40G. You can read a white paper I wrote in early 2011, which turned out to be pretty accurate.

My call was based on the fact that, fundamentally, coherent 100G shouldn't cost more than 40G, and that service providers would move rapidly to 100G. This is exactly what has happened, outside of AT&T, NTT and China, which did go big with 40G. But even my aggressive 100G forecasts in 2012 and 2013 were too conservative.

I have just raised my 2014 100G forecast after meeting with Chinese carriers and understanding their plans. 100G will take over almost all new installations in the core by 2016, worldwide, and that is when metro 100G will start. There is too much hype around metro 100G right now given the cost, but within two years the price will be right for volume deployment by service providers.

 

There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not

 

You say the lion's share of 100G revenue is going to five companies: Alcatel-Lucent, Ciena, Cisco, Huawei, and Infinera. Most of these companies are North American. Is the growth mainly due to the US market (besides Huawei, of course)? And if so, is it due to Verizon, AT&T and Sprint preparing for growing LTE traffic? Or is the picture more complex, with cable operators, internet exchanges and large data centre players also a significant part of the 100G story, as Infinera claims?

It’s a lot more complex than the typical smartphone plus video-bandwidth-tsunami narrative. Many people like to attach the wireless metaphor to any possible trend because it is the only area perceived as having revenue and profitability growth, and it has a really high growth rate. But something big growing at 35 percent adds more in a year than something small growing at 70 percent.

The reality is that wireless bandwidth, as a percentage of all traffic, is still small. 100G is being used for the long lines of the network today as a more efficient replacement for 10G, and while good quantitative measures don't exist, my gut tells me it is inter-data-centre traffic and consumer/business-to-data-centre traffic driving most of the network growth today.

I use cloud storage for my files. I'm a die-hard Quicken user with 15 years of data in my file. Every time I save that file, it is uploaded to the cloud - 100MB each time. The cloud provider probably shifts that around afterwards too. Apply this to a single enterprise user - think about how much data that is for just one person. There is so much 'blah blah blah' about video, but 90 percent of it is cacheable. Cloud storage is not.
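Schmitt's point can be made concrete with a rough back-of-the-envelope calculation. The figures below beyond the 100MB file size are illustrative assumptions, not from the interview: a handful of saves per working day and one internal replication copy made by the provider after each upload.

```python
# Back-of-the-envelope estimate of cloud-sync upload traffic for one user.
# Only the 100 MB file size comes from the interview; the save frequency,
# working days and provider replication factor are illustrative assumptions.

FILE_SIZE_MB = 100      # size uploaded on every save (from the interview)
SAVES_PER_DAY = 10      # assumed saves per working day
WORKING_DAYS = 250      # assumed working days per year
PROVIDER_COPIES = 1     # assumed internal copies the provider makes afterwards

upload_per_year_gb = FILE_SIZE_MB * SAVES_PER_DAY * WORKING_DAYS / 1000
total_per_year_gb = upload_per_year_gb * (1 + PROVIDER_COPIES)

print(f"Upload per user per year: {upload_per_year_gb:.0f} GB")
print(f"Including provider copy:  {total_per_year_gb:.0f} GB")
```

Even under these modest assumptions, a single user generates hundreds of gigabytes of non-cacheable upstream traffic a year, which is the contrast Schmitt draws with largely cacheable video.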

 

Each morning a hardware specialist must wake up and prove to the world that they still need to exist

 

Cisco is in this list yet does not seek much media attention about its 100G. Why is it doing well in the growing 100G market?

Cisco has a slice of customers that are fibre-poor and are always seeking more spectral efficiency. I also believe Cisco won a contract with Amazon in Q4, 2013, but hey, it's not Google or Facebook so it doesn't get the big press. But no one will dispute that Amazon is the real king of public cloud computing right now.

 

You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider

 

In the data centre world, there is a sense that the value of specialist hardware is diminishing as commodity platforms - servers and switches - take hold. The same trend is starting in telecoms with the advent of Network Functions Virtualisation (NFV) and software-defined networking (SDN). WDM is specialist hardware and will remain so. Can WDM vendors therefore expect healthy annual growth rates to continue for the rest of the decade?   

I am not sure I agree.

There is no reason transport systems couldn’t be white-boxed just like other parts of the network. There is an over-reaction to the impact SDN will have on hardware but there have always been constant threats to the specialist.

Each morning a hardware specialist must wake up and prove to the world that they still need to exist. This is why you see continued hardware vertical integration by some optical companies: good examples are what Ciena has done with partners on intelligent Raman amplification, or what Infinera has done building a tightly integrated offering around photonic integrated circuits for cheap regeneration. Or Transmode, which takes a hacker's approach to optics to offer customers better solutions for specific category-killer applications like mobile backhaul. Or you swing to the other side of the barbell and focus on software, which appears to be Cyan's strategy.

You've got to do hard stuff that others can't easily do or you are just a commodity provider. This is why Cisco and Intel are investing in silicon photonics: they can use it as an edge against commodity white-box assemblers and bare-metal suppliers.

 