The ONF adapts after sale of spin-off Ananki to Intel

Timon Sloane

Intel’s acquisition of Ananki, a private 5G networking company set up within the ONF last year, means the open-networking organisation has lost the bulk of its engineering staff.

The ONF, a decade-old non-profit consortium led by telecom operators, has developed notable networking projects over the years, such as OpenFlow, one of the first software-defined networking (SDN) standards, CORD, and Aether, the 5G edge platform.

 

 

Its joint work with the operators has led to virtualised and SDN building blocks that, when combined, can address comprehensive networking tasks such as 5G, wireline broadband and private wireless networks.

The ONF’s approach has differed from other open-source organisations. Its members pay for an in-house engineering team to co-develop networking blocks based on disaggregation, SDN and cloud.

The ONF and its members have built a comprehensive portfolio of networking functions which last year led to the organisation spinning out a start-up, Ananki, to commercialise a complete private end-to-end wireless network.

Now Intel has acquired Ananki, taking with it 44 of the ONF’s 55 staff.

“Intel acquired Ananki, Intel did not acquire the ONF,” says Timon Sloane, the ONF’s newly appointed general manager. “The ONF is still whole.”

The ONF will now continue with a model akin to other open-source organisations.

ONF’s evolution

The ONF began by tackling the emerging interest in SDN and disaggregation.

“After that phase, considered Phase One, we broke the network into pieces and it became obvious that it was complicated to then build solutions; you have these pieces that had to be reassembled,” says Sloane.

The ONF used its partner funding to set up a joint development team to craft solutions that were used to seed the industry.

The ONF pursued this approach for over six years but increasingly felt the model had run its course. “We were kind of an insular walled garden, with us and a small number of operators working on things,” says Sloane. “We needed to flip the model inside out and go broad.”

This led to the spin-out of Ananki, a separate for-profit entity that would bring in funding yet would also be an important contributor to open source. And as it grew, the thinking was that it would subsume some of the ONF’s engineering team.

“We thought for the next phase that a more typical open-source model was needed,” says Sloane. “Something like Google with Kubernetes, where one company builds something, puts it in open source and feeds it, even for a couple of years, until it grows, and the community grows around it.”

But during the process of funding Ananki, several companies expressed an interest in acquiring the start-up. The ONF will not name the other interested parties but hints that they included telecom operators and hyperscalers.

The merit of Intel, says Sloane, is that it is a chipmaker with a strong commitment to open source.

Deutsche Telekom’s ongoing O-RAN trial in Berlin uses key components from the ONF including SD-Fabric, 5G and 4G core functions, and the µONOS-based near-real-time RAN intelligent controller (RIC). Source: ONF, DT.

Post-Ananki

“Those same individuals who were wearing an ONF hat, are swapping it for an Intel hat, but are still on the leadership of the project,” says Sloane. “We view this as an accelerant for the project contributions because Intel has pretty deep resources and those individuals will be backed by others.”

The ONF acknowledges that its fixed broadband passive optical networking (PON) work is not part of Ananki’s interest. Intel understands that there are operators reliant on that project and will continue to help during a transition period. Those vendors and operators directly involved will also continue to contribute.

“If you look at every other project that we’re doing: mobile core, mobile RAN, all the P4 work, programmable networks, Intel has been very active.”

Meanwhile, the ONF is releasing its entire portfolio to the open-source community.

“We’ve moved out of the walled-garden phase into a more open phase, focused on the consumption and adoption [of the designs],” says Sloane. The projects will remain under the auspices of the ONF to get the platforms adopted within networks.

The ONF will use its remaining engineers to offer its solutions using a Continuous Integration/ Continuous Delivery (CI/CD) software pipeline.

“We will continue to have a smaller engineering team focused on Continuous Integration so that we’ll be able to deliver daily builds, hourly builds, and continuous regression testing – all that coming out of ONF and the ONF community,” says Sloane. “Others can use their CD pipelines to deploy and we are delivering exemplar CD pipelines if you want to deploy bare metal or in a cloud-based model.”
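To give a flavour of what such a pipeline involves, the sketch below shows a minimal nightly build-and-regression driver in Python. The component names, repository URLs and commands are hypothetical placeholders rather than the ONF's actual tooling.

```python
#!/usr/bin/env python3
"""Minimal sketch of a nightly build-and-regression driver.

All project names, URLs and commands are hypothetical placeholders,
not the ONF's actual CI tooling.
"""
import subprocess
import sys

# Hypothetical component repositories that make up one exemplar platform.
COMPONENTS = ["sd-core", "sd-ran", "sd-fabric"]

def run(cmd, cwd=None):
    """Run a shell command and fail the step on any error."""
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)

def build_and_test(component: str) -> None:
    # Fetch the latest commit, build a container image, then run the
    # component's regression suite against it.
    run(["git", "clone", "--depth", "1",
         f"https://example.org/{component}.git", component])
    run(["docker", "build", "-t", f"{component}:nightly", "."], cwd=component)
    run(["pytest", "tests/regression"], cwd=component)

if __name__ == "__main__":
    failures = []
    for comp in COMPONENTS:
        try:
            build_and_test(comp)
        except subprocess.CalledProcessError:
            failures.append(comp)
    # A separate CD pipeline (bare metal or cloud) would pick up only green builds.
    sys.exit(1 if failures else 0)
```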

The ONF is also looking at creating a platform that enables the programmability of a host using silicon such as a data processing unit (DPU) as part of larger solutions.

“It’s a very exciting space,” says Sloane. “You just saw the Pensando acquisition; I think that others are recognising this is a pretty attractive space.” AMD recently announced it is acquiring Pensando, to add a DPU architecture to AMD’s chip portfolio.

The ONF’s goal is to create a common platform that can be used for cloud and telecom networking and infrastructure for applications such as 5G and edge.

“And then there is of course the whole edge space, which is quite fascinating; a lot is going on there as well,” says Sloane. “So I don’t think we’re done by any means.”

 


Telecoms' innovation problem and its wider cost

Source: Accelerating Innovation in the Telecommunications Arena

Imagine how useful 3D video calls would have been this last year.

The technologies needed – a light field display and digital compression techniques to send the vast data generated across a network – do exist but practical holographic systems for communication remain years off.

But this is just the sort of application that telcos should be pursuing to benefit their businesses.

A call for innovation

“Innovation in our industry has always been problematic,” says Don Clarke, formerly of BT and CableLabs and co-author of a recent position paper outlining why telecoms needs to be more innovative.

The paper, entitled Accelerating Innovation in the Telecommunications Arena, counts among its co-authors representatives from the communications service providers (CSPs) Telefonica and Deutsche Telekom.

In an era of accelerating and disruptive change, CSPs are proving to be an impediment, argues the paper.

The CSPs’ networking infrastructure has its own inertia; the networks are complex, vast in scale and costly. The operators also require a solid business case before undertaking expensive network upgrades.

Such inertia is costly, not only for the CSPs but for the many industries that depend on connectivity.

But if the telecom operators are to boost innovation, practices must change. This is what the position paper looks to tackle.

NFV White Paper

Clarke was one of the authors of the original Network Functions Virtualisation (NFV) White Paper, published by ETSI in 2012.

The paper set out a blueprint as to how the telecom industry could adopt IT practices and move away from specialist telecom platforms running custom software. Such proprietary platforms made the CSPs beholden to systems vendors when it came to service upgrades.

Don Clarke

The NFV paper also highlighted a need to attract new innovative players to telecoms.

“I see that paper as a catalyst,” says Clarke. “The ripple effect it has had has been enormous; everywhere you look, you see its influence.”

Clarke cites how the Linux Foundation has re-engineered its open-source activities around networking while Amazon Web Services now offers a cloud-native 5G core. Certain application programming interfaces (APIs) cited by Amazon as part of its 5G core originated in the NFV paper, says Clarke.

Software-based networking would have happened without the ETSI NFV white paper, stresses Clarke, but its backing by leading CSPs spurred the industry.

However, building a software-based network is hard, as the subsequent experiences of the CSPs have shown.

“You need to be a master of cloud technology, and telcos are not,” says Clarke. “But guess what? Riding to the rescue are the cloud operators; they are going to do what the telcos set out to do.”

For example, as well as hosting a 5G core, AWS is active at the network edge with offerings including its Internet of Things (IoT) Greengrass service. Microsoft, having acquired telecom vendors Metaswitch and Affirmed Networks, has launched ‘Azure for Operators’ to offer 5G, cloud and edge services. Meanwhile, Google has signed agreements with several leading CSPs to advance 5G mobile edge computing services.

“They [the hyperscalers] are creating the infrastructure within a cloud environment that will be carrier-grade and cloud-native, and they are competitive,” says Clarke.

The new ecosystem

The position paper describes the telecommunications ecosystem in three layers (see diagram).

The CSPs are examples of the physical infrastructure providers (bottom layer) that have fixed and wireless infrastructure providing connectivity. The physical infrastructure layer is where the telcos have their value – their ‘centre of gravity’ – and this won’t change, says Clarke.

The infrastructure layer also includes the access network, the CSPs’ crown jewels.

“The telcos will always defend and upgrade that asset,” says Clarke, adding that the CSPs have never cut access R&D budgets. Access is the part of the network that accounts for the bulk of their spending. “Innovation in access is happening all the time but it is never fast enough.”

The middle, digital network layer is where the nodes responsible for switching and routing reside, as do the NFV and software-defined networking (SDN) functions. It is here where innovation is needed most.

Clarke points out that the middle and upper layers are blurring; they are shown separately in the diagram for historical reasons since the CSPs own the big switching centres and the fibre that connects them.

But the hyperscalers – with their data centres, fibre backbones, and NFV and SDN expertise – play in the middle layer too even if they are predominantly known as digital service providers, the uppermost layer.

The position paper’s goal is to address how CSPs can better address the upper two network layers while also attracting smaller players and start-ups to fuel innovation across all three.

Paper proposal

The paper identifies several key issues that curtail innovation in telecoms.

One is the difficulty for start-ups and small companies to play a role in telecoms and build a business.

Just how difficult it can be is highlighted by the closure of SDN-controller specialist, Lumina Networks, which was already engaged with two leading CSPs.

In a Telecom TV panel discussion about innovation in telecoms that accompanied the paper’s publication, Andrew Coward, the then CEO of Lumina Networks, pointed out that start-ups require not just financial backing but also assistance from the CSPs, given their limited resources compared with the established systems vendors.

It is hard for a start-up to respond to an operator’s request-for-proposals, which can run to thousands of pages. And even when one does, will the CSPs’ procurement departments consider it, given its size?

Coward argues that a portion of the CSPs’ capital expenditure should be committed to start-ups. That, in turn, would instil greater venture-capital confidence in telecoms.

The CSPs also have ‘organisational inertia’ in contrast to the hyperscalers, says Clarke.

“Big companies tend towards monocultures and that works very well if you are not doing anything from one year to the next,” he says.

The hyperscalers’ edge is their intellectual capital and they work continually to produce new capabilities. “They consume innovative brains far faster and with more reward than telcos do, and have the inverse mindset of the telcos,” says Clarke.

The goals of the innovation initiative are to get CSPs and the hyperscalers – the key digital service providers – to work more closely.

“The digital service providers need to articulate the importance of telecoms to their future business model instead of working around it,” says Clarke.

Clarke hopes the digital service providers will step up and help the telecom industry become more dynamic, given that the future of their businesses depends on the infrastructure improving.

In turn, the CSPs need to stand up and articulate their value. This will attract investors and encourage start-ups to become engaged. It will also force the telcos to be more innovative and overcome some of the procurement barriers, he says.

Ultimately, new types of collaboration need to emerge that will address the issue of innovation.

Next steps

Work has advanced since the paper was published in June and additional players have joined the initiative, to be detailed soon.

“This is the beginning of what we hope will be a much more interesting dialogue, because of the diversity of players we have in the room,” says Clarke. “It is time to wake up, not only because of the need for innovation in our industry but because we are an innovation retardant everywhere else.”

Further information:

Telecom TV’s panel discussion: Part 2, click here

Tom Nolle’s response to the Accelerating Innovation in the Telecommunications Arena paper, click here


Deutsche Telekom's Access 4.0 transforms the network edge

Hans-Jörg Kolbe

Deutsche Telekom has a working software platform for its Access 4.0 architecture that will start delivering passive optical network (PON) services to German customers later this year. The architecture will also serve as a blueprint for future edge services.

Access 4.0 is a disaggregated design comprising open-source software and platforms that use merchant chips – ‘white boxes’ – to deliver fibre-to-the-home (FTTH) and fibre-to-the-building (FTTB) services.

“One year ago we had it all as prototypes plugged together to see if it works,” says Hans-Jörg Kolbe, chief engineer and head of SuperSquad Access 4.0. “Since the end of 2019, our target software platform – a first end-to-end system – is up and running.”

Deutsche Telekom has about 1,000 central office sites in Germany, several of which will be upgraded this year to the Access 4.0 architecture.

“Once you have a handful of sites up and running and you have proven the principle, building another 995 is rather easy,” says Robert Soukup, senior program manager at Deutsche Telekom, and another of the co-founders of the Access 4.0 programme.

Origins

The Access 4.0 programme emerged with the confluence of two developments: a detailed internal study of the costs involved in building networks and the advent of the Central Office Re-architected as a Datacentre (CORD) industry initiative. 

Deutsche Telekom was scrutinising the costs involved in building its networks. “Not like removing screws here and there but looking at the end-to-end costs,” says Kolbe.

Separately, the operator took an interest in CORD, which was, at the time, being overseen by ON.Lab.

At first, Kolbe thought CORD was an academic exercise but, on closer examination, he and his colleague, Thomas Haag, the chief architect and the final co-founder of Access 4.0, decided the activity needed to be investigated internally: in particular, to assess the feasibility of CORD, work out how cloud technologies would be brought together with access hardware, and quantify the cost benefits.

“The first goal was to drive down cost in our future network,” says Kolbe. “And that was proven in the first month by a decent cost model. Then, building a prototype and looking into it, we found more [cost savings].”

Given the cost focus, the operator hadn’t considered the far-reaching changes involved in adopting white boxes and the disaggregation of software and hardware, nor how moving to a mainly software-based architecture could shorten the introduction of new services.

“I knew both these arguments were used when people started to build up Network Functions Virtualisation (NFV) but we didn’t have this in mind; it was a plain cost calculation,” says Kolbe. “Once we started doing it, however, we found both these things.”

Cost engineering

Deutsche Telekom says it has learnt a lot from the German automotive industry when it comes to cost engineering. For some companies, cost is part of the engineering process and in others, it is part of procurement.

Robert Soukup

“The issue is not talking to a vendor and asking for a five percent discount on what we want it to deliver,” says Soukup, adding that what the operator seeks is fair prices for everybody.

“Everyone needs to make a margin to stay in business but the margin needs to be fair,” says Soukup. “If we make with our customers a margin of ‘X’, it is totally out of the blue that our vendors get a margin of 10X.”

The operator’s goal with Access 4.0 has been to determine how best to deploy broadband internet access on a large scale and with carrier-grade quality. Access is an application suited to cost reduction since “the closer you come to the customer, the more capex [capital expenditure] you have to spend,” says Soukup, adding that since capex is always less than what you’d like, creativity is required.

“When you eat soup, you always grasp a spoon,” says Soukup. “But we asked ourselves: Is a spoon the right thing to use?”

Software and White Boxes 

Access 4.0 uses two components from the Open Networking Foundation (ONF): Voltha and the Software Defined Networking (SDN) Enabled Broadband Access (SEBA) reference design.

Voltha provides a common control and management system for PON white boxes while making the PON network appear as a programmable switch to the SDN controller that resides above. “It abstracts away the [PON] optical line terminal (OLT) so we can treat it as a switch,” says Soukup.
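One way to picture that abstraction is sketched below in Python: an OLT with its PON ports is presented to the controller as if it were a plain switch with numbered, programmable ports. The classes and method names are hypothetical illustrations of the idea, not Voltha's actual interfaces.

```python
"""Conceptual sketch of presenting a PON OLT to an SDN controller as a switch.

The classes and method names are hypothetical illustrations of the idea
described in the article, not Voltha's actual APIs.
"""
from dataclasses import dataclass, field

@dataclass
class PonPort:
    onu_id: int          # optical network unit served by this PON port
    subscriber: str      # subscriber attached to the ONU

@dataclass
class OltDevice:
    """The physical optical line terminal, with its vendor-specific details."""
    vendor: str
    pon_ports: list = field(default_factory=list)

class AbstractSwitch:
    """What the SDN controller sees: a switch with numbered, programmable ports."""
    def __init__(self, olt: OltDevice):
        self._olt = olt
        # Map each ONU onto a logical switch port the controller can program.
        self._logical_ports = {i: p for i, p in enumerate(olt.pon_ports, start=1)}

    def ports(self) -> list:
        return list(self._logical_ports)

    def add_flow(self, in_port: int, vlan: int) -> None:
        # In a real system this would be translated into vendor-specific OLT
        # configuration; here we only record the intent.
        sub = self._logical_ports[in_port].subscriber
        print(f"flow: port {in_port} ({sub}) -> tag VLAN {vlan}")

olt = OltDevice("acme", [PonPort(1, "alice"), PonPort(2, "bob")])
switch = AbstractSwitch(olt)
for port in switch.ports():
    switch.add_flow(port, vlan=100 + port)
```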

SEBA supports a range of fixed broadband technologies that include GPON and XGS-PON. “SEBA 2.0 is a design we are using and are compliant [with],” says Soukup.

“We are bringing our technology to geographically-distributed locations – central offices – very close to the customer,” says Kolbe. Some aspects are common with the cloud technology used in large data centres but there are also differences.

For example, cloud-native technologies such as Kubernetes are shared, while OpenStack, used in large data centres, is not needed for Access 4.0. Likewise, a leaf-spine switching architecture is common to both, as is the use of SDN technology.

“One thing we have learned is that you can’t just take the big data centre technology and put it in distributed locations and try to run heavy-throughput access networks on them,” says Kolbe. “This is not going to work and it led us to the white box approach.”

The issue is that certain workloads cannot be tackled efficiently using x86-based server processors. An example is the Broadband Network Gateway (BNG). “You need to do significant enhancements to either run on the x86 or you offload it to a different type of hardware,” says Kolbe.

Deutsche Telekom started by running a commercial vendor’s BNG on servers. “In parallel, we did the cost calculation and it was horrible because of the throughput-per-Euro and the power-per-Euro,” says Kolbe. And this is where cost engineering comes in: looking at the system, the biggest cost driver was the servers.

“We looked at the design and in the data path there are three programmable ASICs,” says Kolbe. “And this is what we did; it is not a product yet but it is working in our lab and we have done trials.” The result is that the operator has created an opportunity for a white-box design.

There are also differences in the use of switching between large data centres and access. In large data centres, the switching supports the huge east-west traffic flows while in carrier networks, especially close to the edge, this is not required.

Source: Deutsche Telekom

Instead, for Access 4.0, traffic from PON trees arrives at the OLT where it is aggregated by a chipset before being passed on to a top-of-rack switch where aggregation and packet processing occur.

The leaf-and-spine architecture can also be used to provide a ‘breakout’ to support edge-cloud services such as gaming and local services. “There is a traffic capability there but we currently don’t use it,” says Kolbe. “But we are thinking that in the future we will.”

Deutsche Telekom has been public about working with such companies as Reply, RtBrick and Broadcom. Reply is a key partner while RtBrick contributes a major element: the specialist-domain BNG software.

Kolbe points out that there is no standard for using network processor chips: “They are all specific, which is why we need a strong partnership with Broadcom and others to build a common abstraction layer.”

Deutsche Telekom also works closely with Intel, incumbent network vendors such as ADTRAN and original design manufacturers (ODMs) including EdgeCore Networks.

Challenges 

About 80 percent of the design effort for Access 4.0 is software and this has been a major undertaking for Deutsche Telekom. 

“The challenge is to get up to speed with software; that is not a thing that you just do,” says Kolbe. “We can’t just pretend we are all software engineers.”

Deutsche Telekom also says the new players it works with – the software specialists – have to better understand telecom. “We need to meet in the middle,” says Kolbe.

Soukup adds that mastering software takes time – years rather than weeks or months – and this is only to be expected given the network transformation operators are undertaking.

But once achieved, operators can expect all the benefits of software – the ability to work in an agile manner, continuous integration/continuous delivery (CI/CD), and the more rapid introduction of services and ideas.

“This is what we have discovered besides cost-savings: becoming more agile and transforming an organisation which can have an idea and realise it in days or weeks,” says Soukup. The means are there, he says: “We have just copied them from the large-scale web-service providers.”

Status

The first Access 4.0 services will be FTTH delivered from a handful of central offices in Germany later this year. FTTB services will then follow in early 2021.

“Once we are out there and we have proven that it works and it is carrier-grade, then I think we are very fast in onboarding other things,” says Soukup. “But they are [for now] not part of our case.”


Infinera buying Coriant will bring welcome consolidation

Infinera is to purchase privately-held Coriant for $430 million. The deal will effectively double Infinera’s revenues, add 100 new customers and expand the systems vendor’s product portfolio.

Infinera's CEO, Tom Fallon

But industry analysts, while welcoming the consolidation among optical systems suppliers, highlight the challenges Infinera faces in making the Coriant acquisition a success.

“The low price reflects that this isn't the best asset on the market,” says Sterling Perrin, principal analyst, optical networking and transport at Heavy Reading. “They are buying $1 of revenue for 50 cents; the price reflects the challenges.”   

 

Benefits 

According to Perrin, there are still too many vendors facing "brutal price pressures" despite the optical industry being mature. Removing one vendor that has been cutting prices to win business is good news for the rest. 

For Infinera, the acquisition of Coriant promises three main benefits, as outlined by its CEO, Tom Fallon, during a briefing addressing the acquisition. 

The first is expanding its vertically-integrated business model across a wider portfolio of products. Infinera develops its own optical technology: its indium-phosphide photonic integrated circuits (PICs) and accompanying coherent DSPs that power its platforms. Having its own technology differentiates the optical performance of its platforms and helps it achieve leading gross margins of over 40 percent, said Fallon.

Exploiting the vertical integration model will be a central part of the Coriant acquisition. Indeed, the company mentioned vertical integration 21 times in as many minutes during its briefing outlining the deal. Infinera expects to deliver industry-leading growth and operating margins once it exploits the benefits of vertical integration across an expanded portfolio of platforms, said Fallon.

 


Buying Coriant also gives Infinera much-needed scale. Not only will Infinera double its revenues - Coriant’s revenues were about $750 million in 2017 while Infinera’s were $741 million for the same period - but it will expand its customer base including key tier-one service providers and webscale players. According to Fallon, the newly combined company will include nine of the top 10 global tier-one service providers and the six leading global internet content providers.

Infinera admits it has struggled to break into the tier-one operators and points out that trying to enter is an expensive and time-consuming process, estimated at between $10 million to $20 million each time. “[Now, with Coriant,] having a seat at the table with the largest global service providers to strategise about where their business is going will be invaluable,” said Fallon. 

 

Sterling Perrin of Heavy Reading

The third benefit Infinera gains is an expanded product portfolio. Coriant has expertise in layer 3 networking, in the metro core with its mTera universal transport platform, as well as SDN orchestration and white box technologies. Heavy Reading’s Perrin says Coriant has started development of a layer-3 router white box for edge applications.

Combining the two companies also results in a leading player in data centre interconnect.

“Coriant expands our portfolio, particularly in packet and automation where significant network investment is expected over the next decade,” said Fallon. The deal is happening at the right time, he said, as operators ramp spending as they undertake network transformation. 

Infinera will pay $230 million in cash - $150 million up front and the rest in increments - and a further $200 million in shares for Coriant. The company expects to achieve cost savings of $250 million between 2019 and 2021 by combining the two firms, $100 million in 2019 alone. The deal is expected to close in the third quarter of 2018. 

 


Challenges 

Industry analysts, while seeing positives for Infinera, have concerns regarding the deal.  

A much-needed consolidation of weaker vendors is how George Notter, an analyst at the investment bank Jefferies, describes the deal. For Infinera, however, continuing as before was not an option. Heavy Reading’s Perrin agrees: “Infinera has been under a lot of pressure; their core business of long-haul has slowed.”

The deal brings benefits to Infinera: scale, complementary product sets, and the promise of being able to invest more in R&D to benefit its PIC technology, says Notter in a research note.

Gaining customers is also a key positive. “Infinera is really excited about getting the new set of customers and that is what they are paying for,” says Vladimir Kozlov, CEO of LightCounting Market Research. “However, these customers were gained by pricing products at steep discounts.” 

What is vital for Infinera is that it delivers its upcoming 2.4-terabit Infinite Capacity Engine 5 (ICE5) optical engine on time. The ICE5 is expected to ship in early 2019. In parallel, Infinera is developing its ICE6 due two years later. Infinera is developing two generations of ICE designs in parallel after being late to market with its current 1.2-terabit optical engine. 

 


But even if the ICE5 is delivered on time, upgrading Coriant's platforms will be a major undertaking. “It sounds like they are going to fit their optical engines in all of Coriant’s gear; I don’t see how that is going to happen anytime quickly,” says Perrin.

Customers bought Coriant's equipment for a reason. Once upgraded with Infinera’s PICs, these will be new products that have to undergo extensive lab testing and full evaluations.  

Perrin questions how moving customers off legacy platforms to the new will not result in the service providers triggering a new request-for-proposal (RFP). “If a company is going to put that integrated product into their network, it’s a full-blown RFP process which Infinera may or may not win,” says Perrin. “Infinera talked a lot about the benefits of vertical integration but they didn’t really address the challenges and the specific steps they would take to make that work.”

LightCounting's Vladimir Kozlov

LightCounting’s Kozlov also questions how this will work.

“The story about vertical integration and scaling up PIC production is compelling, but how will they support Coriant products with the PIC?” he says. “Will they start making pluggable modules internally? Will Coriant’s customers be willing to move away from the pluggables and get locked into Infinera’s PICs? Do they know something that we don’t?”

While Infinera is a top five optical platform supplier globally it hasn’t dominated the market with its PIC technologies, says Perrin. “Even if they technically pull off the vertical integration with the Coriant products, how much is that going to win business for them?” he says. “It is one architecture in a mix that has largely gone to pluggables.”

 

Transmode 

Infinera already has experience of acquiring a systems vendor, having bought the metro-access player Transmode in 2015. Strategically, this was a very solid acquisition, says Perrin, but the jury is still out as to its success.

“The integration, making it work, how Transmode has performed within Infinera hasn’t gone as well as they wanted,” says Perrin. “That said, there are some good opportunities going forward for the Transmode group.” 

Infinera also had planned to integrate its PIC technology within Transmode’s products but it didn't make economic sense for the metro market. There may also have been pushback from customers that liked the Transmode products, says Perrin: “With Coriant it looks like they really are going to force the vertical integration.” 

Infinera acknowledges the challenges ahead and the importance of overcoming them if it is to secure its future. 

“Given the comparable sizes of each company’s revenues and workforce, we recognise that integration will be challenging and is vital for our ultimate success,” said Fallon.  


ONF’s operators seize control of their networking needs

  • The eight ONF service providers will develop reference designs addressing the network edge.
  • The service providers want to spur the deployment of open-source designs after becoming frustrated with the systems vendors failing to deliver what they need. 
  • The reference designs will be up and running before year-end.
  • New partners have committed to join since the consortium announced its strategic plan.

The service providers leading the Open Networking Foundation (ONF) will publish open designs to address next-generation networking needs.

Timon Sloane

The ONF service providers - NTT Group, AT&T, Telefonica, Deutsche Telekom, Comcast, China Unicom, Turk Telekom and Google - are taking a hands-on approach to the design of their networks after becoming frustrated with what they perceive as foot-dragging by the systems vendors.

“All eight [operators] have come together to say in unison that they are going to work inside the ONF to craft explicit plans - blueprints - for the industry for how to deploy open-source-based solutions,” says Timon Sloane, vice president of marketing and ecosystem at the ONF. 

The open-source organisation will develop ‘reference designs’ based on open-source components for the network edge. The reference designs will address developments such as 5G and multi-access edge and will be implemented using cloud, white box, network functions virtualisation (NFV) and software-defined networking (SDN) technologies.  

By issuing the designs and committing to deploy them, the operators want to attract select systems vendors that will work with them to fulfil their networking needs.

 

Remit

The ONF is known for such open-source projects as the Central Office Rearchitected as a Datacenter (CORD) and the Open Networking Operating System (ONOS) SDN controller.  

The ONF’s scope has broadened over the years, reflecting the evolving needs of its operator members. The organisation’s remit is to reinvent the network edge. “To apply the best of SDN, NFV and cloud technologies to enable not just raw connectivity but also the delivery of services and applications at the edge,” says Sloane.

The network edge spans from the central office to the cellular tower and includes the emerging edge cloud that extends the ‘edge’ to such developments as the connected car and drones. 

 


“The edge cloud is called a lot of different things right now: multi-access edge computing, fog computing, far edge and distributed cloud,” says Sloane. “It hasn’t solidified yet.”  

One ONF open-source project is the Open and Disaggregated Transport Network (ODTN), led by NTT. “ODTN is edge related but not exclusively so,” says Sloane. “It is starting off with a data centre interconnect focus but you should think of it as CORD-to-WAN connectivity.”  

The ONF’s operators spent months formulating the initiative, dubbed the Strategic Plan, after growing frustrated with a supply chain that has failed to deliver the open-source solutions they need. “The operators have been hopeful the whole vendor community would step up and start building solutions and embracing this approach but it is not happening at the speed operators want, demand and need,” says Sloane.

The ONF’s initiative signals to the industry that the operators are shifting their spending to open-source solutions and basing their procurement decisions on the reference designs they produce.

“It is a clear sign to the industry that things are shifting,” says Sloane. “The longer you sit on the sidelines and wait and see what happens, the more likely you are to lose your position in the industry.”

If operators adopt open-source software and use white boxes based on merchant silicon, how will systems vendors produce differentiated solutions?

“All this goes to show why this is disruptive and creating turbulence in the industry,” says Sloane.

Open-source design equates to industry collaboration to develop shared, non-differentiated infrastructure, he says. That means system vendors can focus their R&D on tackling new issues such as running and automating networks, developing applications, and solving challenges such as next-generation radio access and radio spectrum management.

“We want people to move with the mark,” says Sloane. “It is not just building a legacy business based on what used to be unique and expecting to build that into the future.” 

 

Reference designs

The operators have identified five reference designs: fixed and mobile broadband, multi-access edge, leaf-and-spine architectures, 5G at the edge, and next-generation SDN. 

The ONF has already done much work in fixed and mobile broadband with its residential and mobile CORD projects. Multi-access edge refers to developing one network to serve all types of customers simultaneously, using cloud techniques to shift networking resources dynamically as needed.

At first glance, it is unclear what the ONF can contribute to leaf-and-spine architectures. But the ONF is developing an SDN-controlled switch fabric that can perform advanced packet processing, not just packet forwarding.

 


Sloane says that many virtualised tasks today are run on server blades using processors based on the x86 instruction set. But offloading packet processing tasks to programmable switch chips - referred to as networking fabric - can significantly benefit the price-performance achieved.

“We can leverage [the] P4 [programming language for data forwarding] and start to do things people never envisaged being done in a fabric,” says Sloane, adding that the organisation overseeing P4 is going to merge with the ONF.  

The 5G reference design is one application where such a switch fabric will play a role. The ONF is working on implementing 5G network core functions and features such as network slicing, using the P4 language to run core tasks on intelligent fabric.  
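To give a flavour of the approach, the sketch below models in Python, rather than P4, how a user-plane forwarding step can be reduced to a match-action table of the kind a programmable fabric executes at line rate. The table layout, field names and actions are illustrative assumptions, not the ONF's actual 5G core pipeline.

```python
"""Conceptual Python model of a match-action table, the building block that
P4-programmable switch fabrics execute in hardware. The field and action
names are illustrative assumptions, not the ONF's actual 5G core pipeline."""

# A table maps a lookup key to an action and its parameters.
# Here: map a subscriber's tunnel identifier to a next hop and a queue,
# the sort of user-plane step the article describes offloading to the fabric.
uplink_table = {
    # tunnel_id : (action, parameters)
    0x1001: ("forward", {"next_hop_port": 3, "queue": "gold"}),
    0x1002: ("forward", {"next_hop_port": 7, "queue": "best_effort"}),
}

def process_packet(pkt: dict) -> dict:
    """Apply the match-action table to one packet (modelled as a dict)."""
    entry = uplink_table.get(pkt["tunnel_id"])
    if entry is None:
        return {**pkt, "action": "drop"}          # table miss: default action
    action, params = entry
    return {**pkt, "action": action, **params}    # table hit: rewrite metadata

print(process_packet({"tunnel_id": 0x1001, "payload": b"..."}))
print(process_packet({"tunnel_id": 0x9999, "payload": b"..."}))
```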

The ONF has already done work separating the radio access network (RAN) controller from radio frequency equipment and aims to use SDN to control a pool of resources and make intelligent decisions about the placement of subscribers, workloads and how the available radio spectrum can best be used.     

The ONF’s fifth reference design addresses next-generation SDN and will use work that Google has developed and is contributing to the ONF.

The ONF manages the OpenFlow protocol, used to define the separation between the control and data forwarding planes. But the ONF is the first to admit that OpenFlow overlooked such matters as equipment configuration and operations.

The ONF is now engaged in a next-generation SDN initiative. “We are taking a step back and looking at the whole problem, to address all the pieces that didn’t get resolved in the past,” says Sloane.

Google has also contributed two interfaces that allow device management and the ONF has started its Stratum project that will develop an open-source solution for white boxes to expose these interfaces. This software residing on the white box has no control intelligence and does not make any packet-forwarding decisions. That will be done by the SDN controller that talks to the white box via these interfaces. Accordingly, the ONF is updating its ONOS controller to use these new interfaces. 

 

Source: ONF

 

From reference designs to deployment 

The ONF has a clear process to transition its reference designs to solutions ready for network deployment.

The reference designs will be produced by the eight operators working with other ONF partners. “The reference design is to help others in the industry to understand where you might choose to swap in another open source piece or put in a commercial piece,” says Sloane. 

The diagram above shows how the components are linked to the reference design. The ONF also includes the concept of the exemplar platform, the specific implementation of the reference design. “We have seen that there is tremendous value in having an open platform, something like Residential CORD,” says Sloane. “That really is what the exemplar platform is.”

The ONF says there will be one exemplar platform for each reference design but operators will be able to pick particular components for their implementations. The exemplar platform will inevitably also need to interface to a network management and orchestration platform such as the Linux Foundation’s Open Network Automation Platform (ONAP) or ETSI’s Open Source MANO (OSM).   

The process of refining the reference design and honing the exemplar platform built using specific components is inevitably iterative but once completed, the operators will have a solution to test, trial and, ultimately, deploy. 

The ONF says that since announcing the strategic plan a month ago, several new partners - as yet unannounced - have committed to join.

“The intention is to have the reference designs up and running before the end of the year,” says Sloane.  


Ciena picks ONAP’s policy code to enhance Blue Planet

Ciena is adding policy software from the Linux Foundation’s open-source Open Network Automation Platform (ONAP) to its Blue Planet network management platform.

Operators want to use automation to help tackle the growing complexity and cost of operating their networks.

Kevin Wade

“Policy plays a key role in this goal by enabling the creation and administration of rules that automatically modify the network’s behaviour,” says Kevin Wade, senior director of solutions, Ciena’s Blue Planet.

Incorporating ONAP code to enhance Blue Planet’s policy engine also advances Ciena’s own vision of the adaptive network.      

 

Automation platforms

ONAP and Ciena’s Blue Planet are examples of network automation platforms. 

ONAP is an open software initiative created by merging a large portion of AT&T’s original Enhanced Control, Orchestration, Management and Policy (ECOMP) software developed to power its own software-defined network and the OPEN-Orchestrator (OPEN-O) project, set up by several companies including China Mobile, China Telecom and Huawei.   

ONAP’s goal is to become the default automation platform for service providers as they move to a software-driven network using such technologies as network functions virtualisation (NFV) and software-defined networking (SDN).

Blue Planet is Ciena’s own open automation platform for SDN and NFV-based networks. The platform can be used to manage Ciena’s own platforms and has open interfaces to manage software-defined networks and third-party equipment.

Ciena gained the Blue Planet platform with the acquisition of Cyan in 2015. Since then Ciena has added two main elements.

One is the Manage, Control and Plan (MCP) component that oversees Ciena's own telecom equipment. Ciena’s Liquid Spectrum, which adds intelligence to its optical layer, is part of MCP.

The second platform component added is analytics software to collect and process telemetry data to detect trends and patterns in the network to enable optimisation.

“We have 20-plus [Blue Planet] customers primarily on the orchestration side,” says Wade. These include Windstream, CenturyLink and Dark Fibre Africa of South Africa. Of these 20 or so customers, one fifth do not use Ciena’s equipment in their networks. One such operator is Orange, another Blue Planet user Ciena has named.

A further five service providers are trialling an upgraded version of MCP, says Wade, while two operators are using Blue Planet’s analytics software.

 


Policy

Ciena has been a member of the ONAP open source initiative for one year. By integrating ONAP’s policy components into Blue Planet, the platform will support more advanced closed-loop network automation use cases, enabling smarter adaptation.

“In a closed-loop automation process, the policy subsystem guides the orchestration or the SDN controller, or both, to take actions,” says Wade. Such actions include scaling capacity, restoring the network following failure, and automatic placement of a virtual network function to meet changing service requirements.
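A minimal sketch of such a closed loop is shown below in Python. The metric names, thresholds and the orchestrator call are hypothetical and intended only to show the shape of the monitor-evaluate-act cycle, not the ONAP or Blue Planet policy engine itself.

```python
"""Minimal sketch of closed-loop automation: telemetry in, policy decision,
action out. Metric names, thresholds and the orchestrator call are
hypothetical; this is not the ONAP or Blue Planet policy engine."""

# A policy is a condition over telemetry plus the action to request.
POLICIES = [
    {"name": "scale-out-vnf",
     "condition": lambda m: m["cpu_load"] > 0.85,
     "action": ("scale", {"vnf": "vBNG", "instances": "+1"})},
    {"name": "restore-path",
     "condition": lambda m: m["link_up"] is False,
     "action": ("reroute", {"service": "business-vpn-42"})},
]

def orchestrator_request(action, params):
    # Placeholder for the call the policy engine would make to the
    # orchestrator or SDN controller.
    print(f"requesting {action} with {params}")

def evaluate(telemetry: dict) -> None:
    """One pass of the control loop: apply every policy to fresh telemetry."""
    for policy in POLICIES:
        if policy["condition"](telemetry):
            orchestrator_request(*policy["action"])

# Example loop iteration with a sample telemetry snapshot.
evaluate({"cpu_load": 0.92, "link_up": True})
```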

In return for using the code, Ciena will contribute bug fixes back to the open source venture and will continue the development of the policy engine.

The enhanced policy subsystem’s functionalities will be incorporated over several Blue Planet releases, with the first release being made available later this year. “Support for the ONAP virtual network function descriptors and packaging specifications are available now,” says Wade. 

 

The adaptive network 

Software control and automation, in which policy plays an important role, is one key component of Ciena's envisaged adaptive network.

A second component is network analytics and intelligence. Here, real-time data collected from the network is fed to intelligent systems to uncover the required network actions.

The final element needed for an adaptive network is a programmable infrastructure. This enables network tuning in response to changing demands.

What operators want, says Wade, is automation, guided by analytics and intent-based policies, that scales, configures and optimises the network based on continual readings that detect changing demands.


China Mobile plots 400 Gigabit trials in 2017

China Mobile is preparing to trial 400-gigabit transmission in the backbone of its optical network in 2017. The planned trials were detailed during a keynote talk given by Jiajin Gao, deputy general manager at China Mobile Technology, at the OIDA Executive Forum, an OSA event hosted at OFC, held in Los Angeles last week.

The world's largest operator will trial two 400-gigabit variants: polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) and polarisation-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM).

The 400-gigabit PM-16QAM will achieve a total transmission capacity of 22 terabits and a reach of 1,500km using ultra-low-loss fibre and Raman amplification, while with Nyquist PM-QPSK, the capacity will be 13.6 terabits with a 2,000km reach. China Mobile started to deploy 100 gigabits in its backbone in 2013. It expects to deploy 400 gigabits in its metro and provincial networks from 2018.

Gao also detailed the growth in the different parts of China Mobile's network. Packet transport networking ports grew by 200,000 in 2016 to 1.2 million. The operator also grew its fixed broadband market share, adding over 20 million GPON subscribers to reach 80 million in 2016 while its optical line terminals (OLTs) grew from 89,000 in 2015 to 113,000 in 2016. Indeed, China Mobile has now overtaken China Unicom as China's second largest fixed broadband provider. Meanwhile, the fibre in its metro networks grew from 1.26 million kilometres in 2015 to 1.41 million in 2016.

The Chinese operator is also planning to adopt a hybrid OTN-reconfigurable optical add-drop multiplexer (OTN-ROADM) architecture which it trialled in the second half of 2016, linking several cities. The operator currently uses electrical cross-connect switches which were first deployed in 2011.

The ROADM is a colourless, directionless and contentionless design that also supports a flexible grid, and the operator is interested in using the hybrid OTN-ROADM in its provincial backbone and metro networks. Using the OTN-ROADM architecture is expected to deliver power savings of between 13% and 50%, says Gao.

XG-PON was also first deployed in 2016. China Mobile says 95% of its GPON optical network units deployed connect single families. The operator detailed an advanced home gateway that it has designed which six vendors are now developing. The home gateway features application programming interfaces to enable applications to be run on the platform.

For the XG-PON OLTs, China Mobile is using four vendors - Fiberhome, Huawei, ZTE and Nokia Shanghai Bell. The OLTs support 8 ports per card with three of the designs using an ASIC and one an FPGA. "Our conclusion is that 10-gigabit PON is mature for commercialisation," says Gao. 

Gao also talked about China Mobile's NovoNet 2020, the vision for its network which was first outlined in a White Paper in 2015. NovoNet will be based on such cloud technologies as software-defined networking (SDN) and network function virtualisation (NFV) and is a hierarchical arrangement of Telecom Integrated Clouds (TICs) that span the core through to the access. He outlined how a data centre for private cloud services will typically have 3,000 servers, while for public cloud 4,000 servers per node will be used.

China Mobile has said the first applications on NovoNet will be for residential services, with LTE, 5G enhanced packet core and multi-access edge computing also added to the TICs.

The operator said that it will trial SDN and NFV in its network this year and also mentioned how it had developed its own main SDN controller that oversees the network.

China Mobile reported 854 million mobile subscribers at the end of February, of which 559 million are LTE users, while its wireline broadband users now exceed 83 million. 


What role FPGA server co-processors for virtual routing?

Part 2:  Accelerating virtual routing functions using FPGAs

IP routing specialists have announced their first virtual edge router products that run on servers. These include Alcatel-Lucent with its Virtualized Service Router and Juniper with its vMX. Gazettabyte asked Alcatel-Lucent's Steve Vogelsang about the impact FPGA accelerator cards could have on IP routing.

 

Steve Vogelsang, IP routing and transport CTO, Alcatel-Lucent

The co-processor cards in servers could become interesting for software-defined networking (SDN) and network function virtualisation (NFV).

The main challenge is that we require that our virtualised network functions (vNFs) and SDN data plane can run on any cloud infrastructure; we can’t assume that any specific accelerator card is installed. That makes it a challenge.

I can imagine, over time, that DPDK, the set of libraries and drivers for packet processing, and other open source libraries will support co-processors, making it easier to exploit by an SDN data plane or vNF.

For now we’re not too worried about pushing the limits of performance because the advantage of NFV is the operational simplicity. However, when we have vNFs running at significant scale, we will likely evaluate co-processor options to improve performance. This is similar to what Microsoft and others are doing with search algorithms and other applications.

Note that there are alternative co-processors that are more focussed on networking acceleration. An example is Netronome, which makes a purpose-built network co-processor for the x86 architecture. I am not sure how it compares to Xilinx for networking functionality, but it may outperform FPGAs and be a better option if networking is the focus.

Some servers are also built to enable workload-specific processing architectures. Some of these are specialised around a single processor architecture while others, such as HP's Moonshot, allow the installation of various processors including FPGAs.

 


I don’t expect FPGA accelerator cards will have much impact on network processors (NPUs). We or any other vendor could build an NPU using a Xilinx or another FPGA. But we get much more performance by building our own NPU because we control how we use the chip area.

When designing an FPGA, Xilinx and other FPGA vendors have to decide how to allocate chip space to I/O, processing cores, programmable logic, memory, and other functional blocks.  The resulting structure can deliver excellent performance for a variety of applications, but we can still deliver considerably more performance by designing our own chips allocating the chip space needed to the required functions.  

I have experience with my previous company which built multiple generations of NPUs using FPGAs, but they could not come close to the capabilities of our FP3 chipset.

 

For Part 1, click here

For Part 3, click here


SDN starts to fulfill its network optimisation promise

Infinera, Brocade and ESnet demonstrate the use of software-defined networking to provision and optimise traffic across several networking layers.

Infinera, Brocade and network operator ESnet are claiming a first in demonstrating software-defined networking (SDN) performing network provisioning and optimisation using platforms from more than one vendor.

Mike Capuano, Infinera

The latest collaboration is one of several involving optical vendors that are working to extend SDN to the WAN. ADVA Optical Networking and IBM are working to use SDN to connect data centres, while Ciena and partners have created a test bed to develop SDN technology for the WAN.

The latest lab-based demonstration uses ESnet's circuit reservation platform that requests network resources via an SDN controller. ESnet, the US Department of Energy's Energy Sciences Network, conducts networking R&D and operates a large 100 Gigabit network linking research centres and universities. The SDN controller, the open source Floodlight Project design, oversees the network comprising Brocade's 100 Gigabit MLXe IP router and Infinera's DTN-X platform.

The goal of provisioning and optimising traffic across the routing, switching and optical layers has been a work in progress for over a decade. System vendors have undertaken initiatives such as External Network-Network Interface (ENNI) and multi-domain GMPLS but with limited success. "They have been talked about, experimented with, but have never really made it out of the labs," says Mike Capuano, vice president of corporate marketing at Infinera. "SDN has the opportunity to solve this problem for real."

 

"In the world of Web 2.0, the general approach is not to sit and wait till standards are done, but to prototype, test, find the gaps, report back, and do it again"

 

"SDN, and technologies like the OpenFlow protocol, allow all of the resources of the entire network to be abstracted to this higher level control," says Daniel Williams, director of product marketing for data center and service provider routing at Brocade.

Daniel Williams, Brocade

Infinera and ESnet demonstrated OpenFlow provisioning of transport resources a year ago. This latest demonstration has OpenFlow provisioning at the packet and optical layers and performing network optimisation. "We have added more carrier-grade capabilities," says Capuano. "Not just provisioning, but now we have topology discovery and network configuration."

“The demonstration is a positive step in the development of SDN because it showcases the multi-layer transport provisioning and management that many operators consider the prime use case for transport SDN,” says Rick Talbot, principal analyst, optical infrastructure at Current Analysis. "The demonstration’s real-time network optimisation is an excellent example of the potential benefits of transport SDN, leveraging SDN to minimise transit traffic carried at the router layer, saving both CapEx and OpEx." 

Using such an SDN setup, service providers can request high-bandwidth links to meet specific networking requirements. "There can be a request from a [software] app: 'I need an 80 Gigabit flow for two days from Switzerland to California with 95ms latency and zero packet loss'," says Capuano. "The fact that the network has the facility to set that service up and deliver on those parameters automatically is a huge saving."

Such a link can be established the same day of the request being made, even within minutes. Traditionally, such requests involving the IP and optical layers - and different organisations within a service provider - can take weeks to fulfill, says Infinera.
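The sketch below illustrates what such a programmatic request might look like, using Python to post a JSON flow specification to a controller's northbound REST interface. The endpoint URL and field names are hypothetical, not the actual API used in the demonstration.

```python
"""Sketch of an application requesting a bandwidth-on-demand service from an
SDN controller's northbound REST interface. The URL and JSON schema are
hypothetical, not the actual ESnet/Infinera/Brocade API."""
import requests

flow_request = {
    "src": "CERN-Geneva",
    "dst": "NERSC-California",
    "bandwidth_gbps": 80,
    "duration_hours": 48,
    "max_latency_ms": 95,
    "packet_loss": 0.0,
}

# Post the request to the controller; it is then the controller's job to
# provision the packet and optical layers to meet these parameters.
resp = requests.post("https://sdn-controller.example.net/api/v1/flows",
                     json=flow_request, timeout=10)
resp.raise_for_status()
print("service id:", resp.json().get("id"))
```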

Current Analysis also highlights another potential benefit of the demonstration: how the control of separate domains - the Infinera wavelength and TDM domain and the Brocade layer 2/3 domain - with a common controller illustrates how SDN can provide end-to-end multi-operator, multi-vendor control of connections.


What next

The Open Networking Foundation (ONF) has an Optical Transport Working Group that is tasked with developing OpenFlow extensions to enable SDN control beyond the packet layer to include optical.

How is the optical layer in the demonstration controlled given the ONF work is unfinished?

"Our solution leverages Web 2.0 protocols like RESTful and JSON integrated into the Open Transport Switch [application] that runs on the DTN-X," says Capuano. "In the world of Web 2.0, the general approach is not to sit and wait till standards are done, but to prototype, test, find the gaps, report back, and do it again."

Further work is needed before the demonstration system is robust enough for commercial deployment.

"This is going to take some time: 2014 is the year of test and trials in the carrier WAN while 2015 is when you will see production deployment," says Capuano. "If service providers are making decision on what platforms they want to deploy, it is important to chose ones that are going to position them well to move to SDN when the time comes."


Fibre-to-the-NPU: optics reshapes the IP core router

Start-up Compass Electro-Optical Systems has announced an IP core router based on a chip with a Terabit-plus optical interface.

 

Asaf Somekh, vice president of marketing, showing Gazettabyte Compass-EOS's novel icPhotonics chip

Having an optical interface linking directly to the chip, which includes a merchant network processor, simplifies the system design and enables router features such as real output queuing. The r10004 IP router is in production and is already deployed in an operator's network.

The company's icPhotonics chip integrates 168 VCSELs, each operating at 8 Gigabit-per-second, and 168 photodetectors for a bandwidth of 1.344 Terabit-per-second (Tbps) in each direction. Eight of the chips are connected in a full mesh, doing away with the need for the switch fabric and midplane used to interconnect a router's cards.
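The headline bandwidth follows directly from the component counts; a quick back-of-the-envelope check (the even split of lanes across peers in the last step is an assumption for illustration, not a Compass-EOS figure):

```python
# Back-of-the-envelope check of the quoted icPhotonics figures.
vcsels_per_chip = 168
lane_rate_gbps = 8

print(vcsels_per_chip * lane_rate_gbps / 1000)  # 1.344 Tbps transmit (receive likewise)

chips = 8
print(chips * (chips - 1) // 2)                 # 28 point-to-point links in a full mesh

# Assumption for illustration only: optical lanes split evenly across the 7 peers.
lanes_per_peer = vcsels_per_chip // (chips - 1)
print(lanes_per_peer * lane_rate_gbps)          # 192 Gbps to each peer, per direction
```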

The resulting architecture saves power, space and cost, says Asaf Somekh, vice president of marketing at Compass-EOS. The start-up estimates that its platform's total cost of ownership over five years is a quarter to a third of existing IP core routers.

The high-bandwidth optical links will also be used to connect multiple platforms, enabling operators to add routing resources as required. Compass-EOS is coming to market with a 6U-high standalone platform but says it will scale up to 21 platforms to appear as one logical router.

The 800Gbps-capacity r10004 comes with 2x100 Gigabit-per-second (Gbps) and 20x10Gbps line card options. The platform has real output queuing, whereby packets from all the input ports are queued, with quality of service applied, ahead of the exit port. The router also supports software-defined networking (SDN), enabling external control of traffic routing.

The company has its own clean room where it makes its optical interface. Compass-EOS has also developed its own ASICs and the router software for the r10004.   

Somekh says developing the optical interface has been challenging, requiring years of development working with the Fraunhofer Institute and Tel-Aviv University. One challenge was developing a glue to fix the VCSELs on top of the silicon.

The start-up has raised US $120M from investors including Cisco Systems, Deutsche Telekom and Comcast, as well as several venture capital firms.

 

icPhotonics technology

Compass-EOS refers to its optical interface IC as silicon photonics but a more accurate description is integrated silicon-optics; silicon itself is not used as a medium for light. Nonetheless, its use of optics embedded with the chip has resulted in a disruptive system design.

The optical interconnect addresses two chip design challenges: signal integrity over long transmission lengths, and chip input/output (I/O).

With high-speed interfaces, achieving signal integrity across a line card and between boards is challenging. Routers use a midplane and switch fabric to connect the router cards within a platform, and parallel optics to connect chassis.

Compass-EOS has taken board-mounted optics one step further by integrating VCSELs and photodetectors onto the packaged chip. This simplifies the platform by connecting cards using a mesh architecture, and allows scaling by linking systems.

The chip window shows the VCSELs and photodetectors. Source: Compass-EOS

The design also addresses chip I/O issues. "The I/O density is about 30x higher than traditional solutions and the gap will grow in future," says Somekh.

Directly attaching the optical interconnect to the CMOS chip overcomes limitations imposed by ball grid array and printed circuit board (PCB) technologies.

Typically, data is routed from the host PCB to an ASIC via a ball grid array matrix which has a ball pitch of 0.8mm. Shrinking this further is non-trivial given PCB signal integrity issues. Moreover, each electrical serdes (serialiser/deserialiser) for data I/O uses at least eight bumps (transmit, receive, signal and ground), occupying a cell of 3.2×1.6 mm. For a 10Gbps device the resulting duplex data density is 2Gbps/mm2, increasing to 5Gbps/mm2 if a 25Gbps device is used, according to Compass-EOS.

The start-up says its optical interconnect achieves a chip I/O density of 61Gbps/mm2. "This will increase to 243Gbps/mm2 once we move to 32Gbps."
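The quoted densities can be reproduced from the cell size given above; a short check using only the numbers in the text (the last two lines relate the optical claims back to the electrical baseline):

```python
# Reproducing the I/O density figures quoted in the text.
cell_area_mm2 = 3.2 * 1.6             # bump cell per electrical serdes: 5.12 mm^2

density_10g = 10 / cell_area_mm2      # ~1.95 Gbps/mm^2 -> the quoted ~2 Gbps/mm^2
density_25g = 25 / cell_area_mm2      # ~4.88 Gbps/mm^2 -> the quoted ~5 Gbps/mm^2

optical_density = 61                  # Gbps/mm^2 claimed for the optical I/O today
print(optical_density / density_10g)  # ~31x, consistent with the "about 30x" claim
print(optical_density * 32 / 8)       # 244 -> close to the 243 Gbps/mm^2 quoted at 32Gbps
```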

The resulting design uses 10 percent of the total CMOS area for I/O. "This is a more efficient chip design," says Somekh. "Most of the silicon is used for logic tasks."

The serdes on chip still need to interface to hundreds of 8Gbps channels. And moving to 32Gbps will present a greater challenge. In comparison, silicon photonics promises to simplify the coupling of optics and electronics.

Another design challenge is that the VCSELs are co-packaged with a large chip consuming 30-50W and generating heat. The design needs to make sure that the operating temperature of the VCSELs is not affected by the heat from the chip.

This is another promised advantage of silicon photonics where the operating temperature of the optics and silicon are matched.       

 

Analysts' perspective

Gazettabyte asked two analysts - IDC's Vernon Turner and ACG Research's Eve Griliches - about the significance of Compass-EOS's announcement. The analysts were also asked for their views on the router's modularity, the total cost of ownership claims, the support for SDN and real output queueing, and whether the platform will gain market share from the IP core router incumbents.

 

IDC

Vernon Turner, senior vice president & general manager, enterprise computing, network, telecom, storage, consumer and infrastructure.

One of the hardest places to innovate in the ICT (information and communications technology) world is at or around the speed of light.  Anytime you can make things run faster, the last hurdle tends to be the speed by which things travel over an optical network.

Therefore, when something changes the form factor of a network router and innovates at the interconnect level, it may be able to disrupt a significant part of the network industry.

 

"Separating the interconnect with the physical building block is huge. It means that you scale the pieces that you need, when and where you want them; this is not just a repackaging announcement"

 

First, building the capacity of a router as needed is great for service providers and large enterprises since you deploy capacity only as you need it. Second, by using a photonic interconnect, the speed and the distance over which two devices can be connected are greatly enhanced, changing the way one builds network infrastructures.

Separating the interconnect from the physical building block is huge. It means that you scale the pieces that you need, when and where you want them; this is not just a repackaging announcement.

Regarding the total-cost-of-ownership claims, if these are valid, they are of a magnitude that fits into a 'disruptive innovation' class, one that will deliver network services to an underserved market and create new network services markets.

SDN is the latest buzzword [regarding the router's support for SDN]. But it is the last part of the virtualised data centre, as the compute and I/O have already been figured out. SDN is not new, but separating the data plane from the control plane means the service provider industry can begin to create network services through virtualisation without impacting network performance, something that already happens with server and storage.

Existing core router vendors use their own ASIC designs as the last-stop differentiation, so doing this [as Compass-EOS has done] on merchant silicon could have wide implications for router commoditisation, or at least accelerate it beyond current trends.

 

ACG Research

Eve Griliches, vice president of optical networking

As to the significance of the announcement, it is not huge in the scheme of things, but it does bring the use of optical components to replace a backplane to market earlier than the timelines previously quoted to ACG Research.

 

"Virtual output queueing is a smart way to do quality of service"


In theory, the router should have a smaller footprint, which results in a better total cost of ownership thanks to the optical modules. The advantage of this optical patch-panel approach is that it allows a much higher bandwidth to cross the backplane, which is now an optical interconnect. That means you don't have to do as much flow control, or drop as many packets, or keep the utilisation of the router so low. You can bring up the utilisation rate from, let's say, 15 percent to maybe 25 percent or higher. All that results, in theory, in a lower total cost of ownership.
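As a rough illustration of the utilisation point (the carried-traffic figure below is hypothetical; the 15 and 25 percent levels are Griliches' examples):

```python
# Illustration only: deployed capacity needed to carry the same traffic
# at different utilisation targets. The traffic volume is a made-up example.
carried_traffic_tbps = 3.0

capacity_at_15pct = carried_traffic_tbps / 0.15   # 20 Tbps deployed
capacity_at_25pct = carried_traffic_tbps / 0.25   # 12 Tbps deployed

print(1 - capacity_at_25pct / capacity_at_15pct)  # 0.4 -> ~40% less capacity to buy
```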

SDN is a bit nebulous. Virtual output queueing is a smart way to do quality of service, but there are key software features to consider: how many BGP (border gateway protocol) peers are supported, multicast capability, and signalling for MPLS (multiprotocol label switching) - do they support RSVP-TE (resource reservation protocol - traffic engineering) or LDP (label distribution protocol)? Or both? Building a real router still takes years of work.

Faster interconnects are the way to go across routing and optical platforms, period. This [Compass-EOS platform] can help. Do I see this optical piece fitting nicely into an already existing router? Yes. I think if that doesn't happen, they will have a bit of an uphill battle nudging the incumbents.

On the other hand, if full router functionality is not needed at some junctures, as we've seen with the LSR (label switch router) technology, then they may have a place in the network. But operators don't like to play around with their routed network too much, so it may be greenfield applications that are mostly available to them [Compass-EOS] initially.

