Infinera's XR optics pluggable plans

Infinera’s coherent pluggables for XR optics will also address the company’s metro needs.
Coherent pluggables now dominate the metro market where embedded designs account for just a fifth of all ports, says Infinera.
“As we grow our metro business, we need our own pluggables if we want to be cost-competitive,” says Robert Shore, senior vice president of marketing at Infinera.
Infinera’s family of pluggables implementing the XR optics concept is dubbed ICE-XR.
XR optics splits a coherent optical signal into Nyquist sub-carriers, each carrying a data payload. Twenty-five gigabits will likely be the sub-carrier capacity chosen.
XR optics can be used for point-to-point links where all the sub-carriers go to the same destination. But the sub-carriers can also be steered to different destinations, similar to how breakout cables are used in the data centre.
With XR optics, a module can talk to several lower-speed ones in a point-to-multipoint arrangement. This enables optical feeds to be summed, ideal for traffic aggregation applications such as access and 5G.
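To make the aggregation arithmetic concrete, here is a minimal sketch (C++) of how a hub module's sub-carriers could be apportioned across lower-speed leaf modules. The 25-gigabit sub-carrier granularity follows the article; the 400-gigabit hub and the leaf demands are illustrative figures only, not Infinera's.

```cpp
// Minimal sketch: apportioning a hub module's Nyquist sub-carriers across
// lower-speed leaf modules, as in an XR-style point-to-multipoint link.
// Figures are illustrative; 25 Gbps per sub-carrier follows the article.
#include <iostream>
#include <vector>

int main() {
    const int subcarrier_gbps = 25;                          // assumed granularity
    const int hub_gbps = 400;                                // e.g. a 400G hub module
    const int hub_subcarriers = hub_gbps / subcarrier_gbps;  // 16

    // Example leaf sites: a hypothetical access/5G aggregation mix.
    std::vector<int> leaf_gbps = {100, 100, 50, 25, 25};

    int assigned = 0, idx = 0;
    for (int gbps : leaf_gbps) {
        int need = gbps / subcarrier_gbps;                   // sub-carriers per leaf
        assigned += need;
        std::cout << "leaf " << idx++ << ": " << need << " sub-carriers ("
                  << gbps << " Gbps)\n";
    }
    std::cout << "hub sub-carriers used: " << assigned << " of "
              << hub_subcarriers << "\n";                    // 12 of 16 here
    return 0;
}
```

Any unassigned sub-carriers remain available as leaf sites grow, which is the traffic-aggregation flexibility the point-to-multipoint arrangement is meant to deliver.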
Open XR Forum
Infinera detailed its ICE-XR pluggables during the OFC virtual conference and exhibition.
The event coincided with the launch of the Open XR Forum, whose members include the network operators Verizon, Lumen Technologies (formerly CenturyLink), Windstream and Liberty Global.
Members of the Open XR Forum span sub-component makers, systems vendors like Infinera, and network operators. The day the Open XR Forum website was launched, Infinera received a dozen enquiries from interested parties.
The Open XR Forum will define standards for XR optics such as how the networks are managed, the form factors used, their speeds and power requirements.
“There are a lot of underlying operational aspects that need to be worked out,” says Shore.
XR optics will use a similar model to ZR+ coherent optics. ZR+ delivers enhanced transmission performance compared to the OIF’s 400ZR coherent standard. “ZR+ is not a standard but rather a set of open specifications that can be used by anyone to create a product, and that is exactly the approach we are taking with XR optics,” says Shore.
Over the last 18 months, Infinera has met with 150 network operators regarding XR optics. “We wanted to validate this is a worthwhile technology and that people wanted it,” says Shore.
There had also been 40 network operator trials of the technology by the end of July. BT has used the technology as part of a metro aggregation trial, while Virgin Media and American Tower have each tested XR optics over PON.
More members have joined the Open XR Forum and will be announced in the autumn.
ICE-XR
ICE-XR’s name combines two concepts.
The first, ICE, refers to the Infinite Capacity Engine, the optics and coherent digital signal processor (DSP) that is the basis for Infinera’s ICE4 and newer ICE6 coherent transmission designs. ICE4 was Infinera’s first product to use Nyquist sub-carriers.
“XR”, meanwhile, borrows from 400ZR. Here, the ‘X’ highlights that XR supports both point-to-point coherent communications, like 400ZR, and point-to-multipoint.
“ICE-XR’s release will be timed in conjunction with the official ratification of the specifications from the Open XR Forum,” says Shore.
Infinera’s ICE-XR portfolio will include 100, 400, and 800-gigabit optical modules.
The 100-gigabit ICE-XR, based on four 25-gigabit sub-carriers, will be offered in QSFP-28, QSFP-DD and CFP2 form factors. The 400-gigabit and 800-gigabit variants, using 16 and 32 sub-carriers respectively, will be available as QSFP-DD and CFP2 modules.
The 100-gigabit and 400-gigabit ICE-XR modules will be released first in 2022.
The 400-gigabit ICE-XR will also double as Infinera’s ZR+ offering when used point-to-point.
Shore says its first ZR+ module will not support the oFEC forward-error correction (FEC) used by the OpenZR+ multi-source agreement (MSA).
“The performance hit you take to ensure multi-vendor interoperability is vastly outweighed by the benefits of the improved [optical] performance [using a proprietary FEC],” says Shore.
Merchant DSP suppliers and the systems vendors with in-house DSP designs all support proprietary FEC schemes that deliver far better performance than oFEC, says Shore.
Infinera is developing a monolithic photonic integrated circuit (PIC) for ICE-XR, manufactured at its indium phosphide facility. “ICE-XR will increase the utilisation of our fabrication centre, especially since pluggables are produced in higher volumes than embedded [coherent designs],” says Shore.
Infinera says more than one coherent DSP will be needed for the ICE-XR product portfolio. The modules used have a range of power profiles. The QSFP-28 module will need to operate within 4-5W, for example, while the QSFP-DD implementing ZR+ will need to be below 20W. Developing one DSP to span such a power range is not possible.
Business model
The Open XR Forum’s specifications will enable vendors to develop their own XR optics implementations.
Infinera will also license aspects of its design including its coherent DSPs. The aim, says Shore, is to develop as broad an ecosystem as possible: “We want to make XR optics an industry movement.”
Shore stresses that ZR+ interoperability is not a must for most applications. Typically, a vendor’s module is used at both ends of a link to benefit from ZR+’s custom features. But interoperability is a must for XR optics given its multi-rate nature: modules of different speeds from different vendors must talk to each other.
“Because you have multi-generational and multi-rate designs, it becomes even more important to support multi-vendor interoperability,” says Shore. “It gives the network operators more choice, freedom and flexibility.”
XR optics for the data centre
Infinera says there are developments to use XR optics within the data centre.
As data rates between equipment rise, direct-detect optics will struggle to cope, says Shore. The hierarchical architectures used in data centres also lend themselves to a hub-and-spoke architecture of XR optics.
“This type of technology could fit very nicely into that environment once the capacity requirements get high enough,” says Shore.
For this to happen, power-efficient coherent designs are required. But first, XR optics will need to become established and demonstrate a compelling advantage in a point-to-multipoint configuration.
XR optics will also need to replace traditional direct-detect pluggables that continue to progress; 800-gigabit designs are appearing and 1.6-terabit designs were discussed at OFC. Co-packaged optics is another competing technology.
“You are not looking at the 2022-23 timeframe, but maybe 2025-26,” says Shore.
Covid-era shows
Infinera postponed the customer meetings that, pre-Covid, would have taken place at OFC until after the show, to avoid clashing with the online sessions. Once the meetings occurred, customers were given a tour of Infinera’s virtual OFC booth.
Infinera’s solutions marketing team also divided the OFC sessions of interest between them. The team then ‘met’ daily to share what they had learned.
“I do think that the world of in-person events has changed forever,” says Shore. Infinera attended 40 events in 2019. “We will probably do fewer than 20 [a year] going forward,” says Shore.
Juniper Networks to acquire Aurrion for $165 million
The announcement of the acquisition was low key: a CTO blog post and a statement that Juniper Networks had entered into an agreement to acquire Aurrion, the fabless silicon photonics start-up. No fee was mentioned.
However, in the company's US Securities and Exchange Commission filing, Juniper values the deal at approximately $165 million. "The Company believes the acquisition will help to fuel its long-term competitive advantage by enabling cost-effective, high-density, high-speed optical networks," it said. The deal is expected to be closed this quarter.
Juniper is not alone in buying into silicon photonics. Ciena acquired TeraXion's high-speed photonics assets, while in recent years Cisco acquired Lightwire, Mellanox bought Kotura and Huawei bought a small Belgian start-up, Caliopa. Meanwhile, other vendors have their own silicon photonics developments: Intel is one, Nokia has Bell Labs, while Coriant has its own silicon photonics R&D.
But the deal is significant for a number of reasons.
First, Aurrion, like Intel, is a proponent of heterogeneous integration, combining indium phosphide and other technologies on a silicon wafer platform through bonding. The approach has still to be proven in commercial volumes but it promises the use of III-V materials on 12-inch silicon wafers manufactured in a chip fabrication plant.
Aurrion has made tunable lasers for telecom that cover both the C- and L-bands, as well as uncooled laser arrays for datacom applications. The start-up has also been developing high-speed transceivers for the data centre.
The company has also been working on the manufacturing aspects of silicon photonics, a considerable undertaking. These include automated wafer-scale testing, connecting fibre to a silicon photonics chip, and packaging.
Juniper is thus getting an advanced silicon photonics technology suited for volume manufacturing that it will use to advance its data centre networking offerings.
Juniper may choose to make its own optical transceivers but, more likely, it will use silicon photonics as part of its switch designs to tackle issues of data centre scaling and the continual challenge of growing power consumption. It could also use the technology for its IP core routers and longer term, to tackle I/O issues alongside custom ASICs.
Systems vendors drive silicon photonics
The Aurrion acquisition also highlights how it is systems vendors that are acquiring silicon photonics start-ups rather than the traditional optical component and module makers.
This is partly a recognition that silicon photonics' main promise is as a systems technology. Acacia, the coherent transmission specialist, has shown how silicon photonics can benefit optical module design, but the technology's longer-term promise lies in systems design rather than optical modules.
A consequence of such acquisitions is that technology being developed by silicon photonics start-ups is being swallowed within systems houses, which have the deep pockets to develop it but will keep it for their own use rather than offer it to the merchant market. For the wider community, silicon photonics technology being developed by the likes of Aurrion is no longer available.
This is what AIM Photonics, the US public-private partnership developing technology for integrated photonics, is looking to address: to advance the manufacturing of silicon photonics and make the resulting technology available to small and medium-sized businesses and entrepreneurial ventures. However, AIM Photonics is only one year into a five-year venture.
Implications
Should major systems vendors owning silicon photonics technology in-house concern the traditional optical component vendors?
Not for now.
Optical transceiver sales continue to grow and the bulk of designs are not integrated. And while silicon photonics is starting to be used for integrated designs, it is competing against the established technologies of indium phosphide and gallium arsenide.
But as photonics moves closer to the silicon and away from a system's faceplate, silicon photonics becomes more strategically important and this is where systems vendors can start developing custom designs.
Must the systems houses own the technology to do that?
Not necessarily, but they will need silicon photonics design expertise, and in the case of Juniper, it can hit the ground running with Aurrion.
Longer term, it will be the much larger chip industry, rather than the optical industry, that drives silicon photonics. There are now chip foundries making silicon photonics ICs, as well as top-ten chip companies such as Intel and STMicroelectronics. But ultimately it will be a very different supply chain that takes shape.
It is early days but Juniper's acquisition is the latest indicator that it is the systems vendors that are moving first at the very beginnings of this new ecosystem.
Mellanox Technologies to acquire EZchip for $811M
Mellanox makes InfiniBand and Ethernet interconnection platforms and products for the data centre while EZchip sells network and multi-core processors that are used in carrier edge routers and enterprise platforms.
EZchip’s customers include Huawei, ZTE, Ericsson, Oracle, Avaya and Cisco Systems.
“Mellanox needs to diversify its business; it is still heavily dependent on the high-performance computing market and InfiniBand,” says Bob Wheeler, principal analyst, networking at market research firm The Linley Group. “EZchip helps move Mellanox into markets and customers that it would not have access to with its existing products.”
Mellanox CEO Eyal Waldman says the company will continue to focus on the data centre and not the WAN, and that it plans to use EZchip’s products to add intelligence to its designs. Mellanox's Ethernet expertise may also find its way into EZchip’s ICs.
But analysts do expect Mellanox to benefit from telecom. “The big change has to do with Network Function Virtualisation (NFV) and the fact that service provider’s data centres are starting to look more and more like cloud data centres,” says Wheeler. “There is an opportunity for Mellanox to start selling to the large carriers and that is a whole new market for the company.”
Acquiring EZchip
Both companies will ensure continuity and use the same product lines to grow into each other’s markets, said Waldman on a conference call to announce the deal: “Later on will come more combined solutions and products.” First product collaborations are expected in 2016 with more integrated products appearing from 2017.
“Mellanox sees a need to add intelligence to its core products and it does not really have the expertise or the intellectual property,” says Wheeler. One future product of interest is the smart or intelligent network interface controller (NIC). “By working together they could produce quite a compelling product,” says Wheeler.
In 2014 EZchip acquired Tilera for $50 million. The value of the deal could have risen to $130 million but was dependent on targets that Tilera did not meet, says Wheeler. Tilera's products include multi-core processors, NICs and white box security appliances. EZchip has also announced the Tile-Mx product family using Tilera’s technology; the most powerful device in the family will feature 100 64-bit ARM cores.
The primary application of Tilera’s products is security: deep-packet inspection and layer 7 processing. Instead of replacing the general-purpose processor in a security appliance, an alternative approach is to use an intelligent NIC card with a Tilera processor connected via the PCI Express bus to an Intel Xeon-based server. “The card can do a lot of the packet processing offloaded from the Xeon,” says Wheeler.
Another area where EZchip’s NPS processor can be used is in more dedicated appliances or in an intelligent top-of-rack switch. The NPS would perform security as well as terminating overlay protocols used for network virtualisation in the data centre. “You can terminate all those [overlay] protocols in a top-of-rack switch and offload that processing from the server,” says Wheeler.
The key benefit of InfiniBand is its very low latency, but the flip side is that the protocol is limited with regard to routing across larger fabrics. Adding intelligence could benefit Mellanox’s core InfiniBand fabric products, notes Wheeler.
EZchip’s founder and CEO Eli Fruchter said he expects the merger to open doors for EZchip among more hyper-scale data centre players: “With the merger we believe we can be a lot more successful in data centres than by continuing by ourselves.”
Mellanox has made several acquisitions in recent years. It acquired data centre switch fabric player Voltaire in 2011, and in 2013 it added silicon photonics start-up Kotura and chip company IPTronics in quick succession. Now with EZchip's acquisition it will add packet processing and multi-core processor IP to its in-house technology portfolio.
The EZchip acquisition is expected to close in the first quarter of 2016.
Further information:
Mellanox’s Waldman: We've discussed merging for years, click here
Rockley demos a silicon photonics switch prototype
Rockley Photonics has made a prototype switch to help grow the number of servers that can be linked in a data centre. The issue with interconnection networks inside a data centre is that they do not scale linearly as more servers are added.
“If you double the number of servers connected in a mega data centre, you don’t just double the complexity of the network, it goes up exponentially,” explains Andrew Rickman, co-founder, chairman and CEO at Rockley Photonics. “That is the problem we are addressing.”
By 2017 and 2018, it will still be possible to build the networks that large-scale data centre operators require, says Rickman, but at an ever-increasing cost and with a growing power consumption. “The basic principles of what they are doing needs to be rethought,” he says.
Network scale
Modern data centre networks must handle significant traffic flow between servers, referred to as east-west traffic. A common switching arrangement in the data centre is the leaf-spine architecture, used to interconnect thousands of servers.
A ‘leaf’ may be a top-of-rack switch that is linked to multiple server chassis on one side and larger-capacity, ‘spine’ switches on the other. The result is a switch network where each leaf is connected to all the spine switches, while each spine switch is linked to all the leaves. In the example shown, four spine switches connect to 32 leaf switches.
A leaf-spine architecture
The leaf and spine switches are built using ASICs, with the largest ICs typically having 32 100-gigabit ports. One switch ASIC may be used in a platform but, as Rickman points out, larger switches may implement multiple stages, such as a three-stage Clos architecture. As a result, traffic between servers on different leaves, travelling up and down the leaf-spine, may pass through five stages or hops, but possibly as many as nine.
It is the switch IC’s capacity and port count that dictates the overall size of the leaf-spine network and therefore the number of servers that can be connected. Rockley’s goal is to develop a bigger switch building block making use of silicon photonics.
“The fundamental thing to address is making bigger switching elements,” says Rickman. “That way you can keep the number of stages in the network the same but still make bigger and bigger networks.” Rockley expects its larger building-block switch will reduce the switch stages needed.
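A back-of-envelope sizing example (C++ sketch; the port counts are illustrative and, unlike the four-spine example above, it assumes no oversubscription) shows why the size of the switch building block matters: in a two-tier leaf-spine with leaf ports split evenly between server downlinks and spine uplinks, the number of server ports grows with the square of the switch radix.

```cpp
// Servers reachable by a non-oversubscribed two-tier leaf-spine built from
// switches of a given radix (port count). Each leaf uses half its ports as
// uplinks, one to every spine; each spine port then hosts one leaf.
#include <iostream>

int server_ports(int radix) {
    int uplinks_per_leaf = radix / 2;               // also the number of spines
    int downlinks_per_leaf = radix - uplinks_per_leaf;
    int max_leaves = radix;                         // limited by the spine radix
    return max_leaves * downlinks_per_leaf;
}

int main() {
    for (int radix : {32, 64, 128}) {               // illustrative switch radices
        std::cout << "radix " << radix << ": "
                  << server_ports(radix) << " server ports\n";
    }
    // radix 32: 512, radix 64: 2048, radix 128: 8192 server ports.
    return 0;
}
```

Doubling the radix of the building block roughly quadruples the number of servers a two-tier network can reach, which is the scaling Rockley is chasing without adding further switching stages.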
The UK start-up is not yet detailing its switch beyond saying it uses optical switching and that the company is developing a photonic integrated circuit (PIC) and a controlling ASIC.
“In the field of silicon photonics, for the same area of silicon, you can produce a larger switch; you have more capacity than you do in electronics,” says Rickman. Moreover, Rockley says that its silicon photonics-based PIC will scale with Moore’s law, with its switch's data capacity approximately doubling every two years. “Previously, the network did not scale with Moore’s law,” says Rickman.
Status
The company has developed a switch prototype that includes ‘silicon photonics elements’ and FPGAs. “Customers can see something is real and that it works,” says Rickman. “We are optimising all the elements of the system before taping out the fully integrated devices.” Rockley expects to have its switch in volume production in 2017.
Last year the company raised its first round of funding and said that it would undergo a further round in 2015. Rockley has not said how much it has raised or the status of the latest round. “We are well-funded and we have a very supportive group of investors,” says Rickman.
Rickman has long been involved in silicon photonics, starting out as a researcher at the University of Surrey developing silicon photonics waveguides in the early 1990s, before founding Bookham Technologies (now Oclaro). He has also been chairman of silicon photonics start-up Kotura that was acquired by Mellanox Technologies in 2013. Rickman co-founded Rockley in 2013.
“What I’ve learned about silicon photonics, and about all those electronics technologies, is how to design stuff from a process point of view to make something highly manufacturable and at the same time having the performance,” says Rickman.
There is no replacement performance in the area of data centre switching, he stresses: “The benefit of our technology is to deliver the performance, not the fact that it is cheap or [offers] average performance.”
For Part 2, Interconnection networks - an introduction, click here
60-second interview with Infonetics' Andrew Schmitt
Q: Infonetics claims the global WDM market grew 6% in 2014, to total US $10 billion. What accounted for such impressive growth in 2014?
AS: Primarily North American strength from data centre-related spending and growth in China.
Q: In North America, the optical vendors' fortunes were mixed: ADVA Optical Networking, Infinera and Ciena had strong results, balanced by major weakness at Alcatel-Lucent, Fujitsu and Coriant. You say those companies whose fortunes are tied to traditional carriers under-performed. What are the other markets that caused those vendors' strong results?
These three vendors are leading the charge into the data centre market. ADVA had flat revenue; North America saved their bacon in 2014. Ciena is also there because they are the ones who have suffered the least with the ongoing changes at AT&T and Verizon. And Infinera has just been killing it, as they haven’t been exposed to legacy tier-1 spending and, despite the naysayers, have the platform the new customers want.
"People don’t take big risks and do interesting things to attack flat or contracting markets"
Q: Is this mainly a North American phenomenon, because many of the leading internet content providers are US firms?
Yes, but spending from Baidu, Alibaba, and Tencent in China is starting to scale. They are running the same playbook as the western data centre guys, with some interesting twists.
Q. You say the press and investors are unduly fascinated with AT&T's and Verizon's spending. Yet they are the two largest US operators, their combined capex was $39 billion in 2014, and their revenues grew. Are these other markets becoming so significant that this focus is misplaced?
Growth is what matters.
People don’t take big risks and do interesting things to attack flat or contracting markets. Sure, it is a lot of spend, but the decisions are made and that data is seen - incorporated into people’s thought-process and market opinion. What matters is what changes. And all signs are that these incumbents are trying to become more like the data centre folks.
Q. What will be the most significant optical networking trend in 2015?
Cheaper 100 gigabit, which lights up the metro 100 gigabit market for real in 2016.
FPGAs embrace data centre co-processing role
The PCIe accelerator card has a power budget of 25W. Hyper data centres can host hundreds of thousands of servers, whereas other industries with more specialist computation requirements use far fewer servers. As such, they can afford a higher power budget per card. Source: Xilinx
Xilinx has developed a software-design environment that simplifies the use of an FPGA as a co-processor alongside the server's x86 instruction set microprocessor.
Dubbed SDAccel, the development environment enables a software engineer to write applications using OpenCL, C or the C++ programming language running on servers in the data centre.
Applications can be developed to run on the server's FPGA-based acceleration card without requiring design input from a hardware designer. Until now, a hardware engineer has been needed to convert the code into the RTL hardware description language that is mapped onto the FPGA's logic gates using synthesis tools.
"[Now with SDAccel] you suffer no degradation in [processing] performance/ Watt compared to hand-crafted RTL on an FPGA," says Giles Peckham, regional americas and EMEA marketing director at Xilinx. "And you move the entire design environment into the software domain; you don't need a hardware designer to create it."
Data centre acceleration
The data centre is the first application targeted for SDAccel along with the accompanying FPGA accelerator cards developed by Xilinx's three hardware partners: Alpha Data, Convey and Pico Computing.
The FPGA cards connect to the server's host processor via the PCI Express (PCIe) interface and are aimed not just at leading internet content providers but also at institutions and industries that have custom computational needs. These include oil and gas, financial services, medical and defence companies.
PCIe cards have a power budget of 25W, says Xilinx. The card's power can be extended by adding power cables but considering that hyper data centres can have hundreds of thousands of servers, every extra Watt consumed comes at a cost.
In contrast, institutions and industries use far fewer servers in their data centres. "They can stomach the higher power consumption, from a cost perspective and in terms of dissipating the heat, up to a point," says Peckham. Their accelerator cards may consume up to 100W. "But both have this limitation because of the power ceiling," he says.
China’s largest search-engine specialist, Baidu, uses neural-network processing to solve problems in speech recognition, image search, and natural language processing, according to The Linley Group senior analyst, Loring Wirbel.
Baidu has developed a 400 Gigaflop software-defined accelerator board that uses a Xilinx Kintex-7 FPGA that plugs into any 1U or 2U high server using PCIe. Baidu says that the FPGA board achieves four times higher performance than graphics processing units (GPUs) and nine times higher performance than CPUs, while consuming between 10-20W.
Microsoft has reported that a production pilot it set up, comprising 1,632 servers using PCIe-based FPGA cards, achieved a doubling of throughput, 29 percent lower latency and a 30 percent cost reduction compared with servers without accelerator cards.
"The FPGA can implement highly parallel applications with the exact hardware required," says Peckham. Since the dynamic power consumed by the FPGA depends on clock frequency and the amount of logic used, the overall power consumption is lower than a CPU or GPU. That is because the FPGA's clock frequency may be 100MHz compared to a CPU's or GPU's 1 GHz, and the FPGA implements algorithms in parallel using hardware tailored to the task.
FPGA processing performance/W for data centre acceleration tasks compared to GPUs and CPUs. Note the FPGA's performance/W advantage increases with the number of software threads. Source: Xilinx
SDAccel
To develop a design environment that a software developer alone can use, Xilinx has had to make SDAccel aware of the FPGA card's hardware, using what is known as a board support package. "There needs to be an understanding of the memory and communications available to the FPGA processor," says Peckham. "The processor then knows all the hardware around it."
Xilinx claims SDAccel is the industry's first architecturally optimising compiler for FPGAs. "It is as good as hand-coding [RTL]," says Peckham. The tool also delivers a CPU/GPU-like design environment. "It is also the first tool that enables designs to have multiple operations at different times on the same FPGA," he says. "You can reconfigure the accelerator card in runtime without powering down the rest of the chip."
SDAccel and the FPGA cards are available, and the tool is with several customers. "We have proven the tool, debugged it, created a GUI as opposed to a command line interface, and have three FPGA boards being sold by our partners," says Peckham. "More partners and more boards will be available in 2015."
Peckham says the simplified design environment appeals to companies not addressing the data centre. "One company in Israel uses a lot of Virtex-6 FPGAs to accelerate functions that start in C code," he says. "They are using FPGAs but the whole design process is drawn-out; they were very happy to learn that [with SDAccel] they don't have to hand-code RTL to program them."
Xilinx is working to extend OpenCL for computing tasks beyond the data centre. "It is still a CPU-PCIe-to-co-processor architecture but for wider applications," says Peckham.
For Part 2, click here
For Part 3, click here
Alcatel-Lucent serves up x86-based IP edge routing
Alcatel-Lucent has re-architected its edge IP router functions - its service router operating system (SR OS) and applications - to run on Intel x86 instruction-set servers.
Shown is the VSR running on one server and distributed across several servers. Source: Alcatel-Lucent.
The company's Virtualized Service Router portfolio aims to reduce the time it takes operators to launch services and is the latest example of the industry trend of moving network functions from specialist equipment onto stackable servers, a development known as network function virtualisation (NFV).
"It is taking IP routing and moving it into the cloud," says Manish Gulyani, vice president product marketing for Alcatel-Lucent's IP routing and transport business.
IP edge routers are located at the edge of the network where services are introduced. By moving IP edge functions and applications onto servers, operators can trial services quickly and in a controlled way. Services can then be scaled according to demand. Operators can also reduce their operating costs by running applications on servers. "They don't have to spare every platform, and they don't need to learn its hardware operational environment," says Gulyani.
Alcatel-Lucent has been offering two IP applications running on servers since mid-year. The first is a route reflector control plane application used to deliver internet services and layer-2/layer-3 virtual private networks (VPNs). Gulyani says the application has already been sold to two customers and over 20 are trialling it. The second application is a routing simulator used by customers for test and development work.
More applications are now being made available for trial: a provider edge function that delivers layer-2 and layer-3 VPNs, and an application assurance application that performs layer-4 to layer-7 deep-packet inspection. "It provides application level reporting and control," says Gulyani. Operators need to understand application signatures to make decisions based on which applications are going through the IP pipe, he says, and, based on a customer's policy, to apply the required treatment to an app.
Additional Virtualized Service Router (VSR) software products planned for 2015 include a broadband network gateway to deliver triple-play residential services, a carrier Wi-Fi solution and an IP security gateway.
Alcatel-Lucent claims a two rack unit high (2RU) server hosting two 10-core Haswell Intel processors achieves 160 Gigabit-per-second (Gbps) full-duplex throughput. The company has worked with Intel to determine how best to use the chipmaker's toolkit to maximise the processing performance on the cores.
"Using 16, 10 Gigabit ports, we can drive the full capacity with a router application," says Gulyani. "But as more and more [router] features are turned on - quality of service and security, for example - the performance goes below 100 Gigabit. We believe the sweet-spot is in the sub-100 Gig range from a single-server perspective."
In comparison, Alcatel-Lucent's own high-end network processor chipset, the FP3, that is used within its router platforms, achieves 400 Gigabit wireline performance even when all the features are turned on.
"With the VSR portfolio and the rest of our hardware platforms, we can offer the right combination to customers to build a performing network with the right economics," says Gulyani.
Alcatel-Lucent's service router portfolio, split into virtual systems and IP platforms. Also shown (in grey) are two platforms that use merchant processors on which the company's SR OS router operating system runs; that is, the company had experience porting its OS onto hardware other than its own FPx devices before it tackled the x86. Source: Alcatel-Lucent.
Gazettabyte asked three market research analysts about the significance of the VSR announcement, the applications being offered, the benefits to operators, and what next for IP.
Glen Hunt, principal analyst, transport & routing infrastructure at Current Analysis
Alcatel-Lucent's full routing functionality available on an x86 platform enables operators to continue with their existing infrastructures - the 7750 SR in Alcatel-Lucent's case - and expand that infrastructure to support additional services. This is on less expensive platforms, which helps support new services that were previously not addressable due to capital expenditure and/or physical constraints.
The edge of the service provider network is where all the services live. By supporting all services in the cloud, operators can retain a seamless operational model, which includes everything they currently run. The applications being discussed here are network-type functions - Evolved Packet Core (EPC), broadband network gateway (BNG), wireless LAN gateways (WLGWs), for example - not the applications found in the application layer. These functions are critical to delivering a service.
Virtualisation expands the operator’s ability to launch capabilities without deploying dedicated routing/device platforms - not in itself a bad thing - with the ability to spin up resources when and where needed. Using servers in a data centre, operators can leverage an on-demand model which can use distributed data centre resources to deliver the capacity and features.
Other vendors have launched, or are about to launch, virtual router functionality, and the top-level stories appear to be quite similar. But Alcatel-Lucent can claim one of the highest capacities per x86 blade, and can scale out to support Nx160Gbps in a seamless fashion, with the ability to scale the control plane so that multiple instances of the Virtualized Service Router (VSR) appear as one large router.
Furthermore, Alcatel-Lucent is shipping its VSR route reflector and the VSR simulator capabilities and is in trials with VSR provider edge and VSR application assurance – noting it has two contracts and 20-plus trials. This shows there is a market interest and possibly pent-up demand for the VSR capabilities.
It will be hard for an x86 platform to achieve the performance levels needed in the IP core to transit high volumes of packet data. Most of the core routers in the market today are pushing 16 Terabit-per-second of throughput across 100 Gigabit Ethernet ports and/ or via direct DWDM interfaces into an optical transport core. This level of capability needs specialised silicon to meet demands.
Performance will remain a key metric moving forward; even though an x86 is less expensive than most dedicated high-performance platforms, it still has a cost basis. The efficiency with which an application uses resources will be important. In the VSR case, the more work a single blade can do, the better. Also important is the ability for multiple applications to work together efficiently, otherwise the cost savings are limited to the reduction in hardware costs. If the management of virtual machines is made more efficient, the result is even greater efficiency in terms of the end-to-end performance of a service which relies on multiple virtualised network functions.
Ultimately, more and more services will move to the cloud, but it will take a long time before everything, if ever, is fully virtualised. Creating a network that can adapt to changing service needs is a lengthy exercise. But the trend is moving rapidly to the cloud, a combination of physical and virtual resources.
Michael Howard, co-founder and principal analyst, Infonetics Research
There is overwhelming evidence from the global surveys we’ve done with operators that they plan to move functions off the physical IP edge routers and use software versions instead.
These routers have two main functions: to handle and deliver services, and to move packets. I’ve been prodding router vendors for the last two years to tell us how they plan to package their routing software for the NFV market. Finally, we hear the beginnings, and we’ll see lots more software routing options.
The routing options can be called software routers or vRouters. The service functions will be virtualised network functions (VNFs) - firewalls, intrusion detection and prevention systems, deep-packet inspection, and caching/content delivery networks - delivered without routing code. It is important for operators to see what routing functions they can buy and run in NFV environments on servers, so they can plan how to architect their new software-defined networking and NFV world.
It is important for router vendors to play in this world and not let newcomers or competitors take the business. Of course, there is a big advantage in buying vRouter software — route reflection, for example — from the same router vendor an operator is already using, since it obviously works with the router code running on the physical routers, and the same software management tools can be used.
Juniper has just made its first announcement. We believe all router vendors are doing the same; we’ve been expecting announcements from all the router vendors, and finally they are beginning.
It will be interesting to see how the routing code is packaged into targeted use cases - we are just seeing the initial use cases now from Juniper and Alcatel-Lucent - like the route reflection control plane function, IP/ MPLS VPNs and others.
Despite the packet-processing performance achieved by Alcatel-Lucent using x86 processors, it should be noted that some functions like the control plane route reflection example only need compute power, not packet processing or packet-moving power.
There already is, and there will always be, a need for high performance for certain places in the network or for serving certain customers. And then there are places and customers where traffic can be handled with less performance.
As for what next for IP, the next 10 to 15 years will be spent moving to SDN- and NFV-architected networks, just as service providers have spent over 10 years moving from time-division multiplexing-based networks to packet-based ones, a transition yet to be finished.
Ray Mota, chief strategist and founder, ACG Research
Carriers have infrastructure that is complex and inflexible, which means they have to be risk-averse. They need to start transitioning their architecture so that they just program the service rather than re-architect the network each time they have a new service. Having edge applications become more nimble and flexible is a start in the right direction. Alcatel-Lucent has decided to create an NFV edge product with a carrier-grade operating system.
It appears, based on what the company has stated, that it achieves faster performance than competitors' announcements.
Alcatel-Lucent is addressing a few areas: this is great for testing and proof of concepts, and an area of the market that doesn't need high capacity for routing, but it also introduces the potential to expand new markets in the webscaler space (that includes the large internet content providers and the leading hosting/ co-location companies).
You will see more and more IP domain products overlap into the IT domain; the organisational and operational aspects are lagging behind the technology, but once service providers figure it out, only then will they have a more agile network.
ECOC 2014: Industry reflections on the show
Gazettabyte asked several attendees at the recent ECOC show, held in Cannes, to comment on key developments and trends they noted, as well as the issues they will track in the coming year.
Daryl Inniss, practice leader, components at market research firm, Ovum
It took a while to unwrap what happened at ECOC 2014. There was no one defining event or moment that was the highlight of the conference.
The location was certainly beautiful and the weather lovely. Yet I felt the participants were engaged with critical technical and business issues, given how competitive the market has become.
Kaiam raising US $35 million, Ranovus raising $24 million, InnoLight Technology raising $38 million and being funded by Google Capital, and JDSU and Emcore each splitting into two companies - all are examples of the shifting industry structure.
On the technology and product development front, advances in 100 Gig metro coherent solutions were reported, although products are coming to market later than first estimated. The client-side 100 Gig is transitioning to CFP2. Datacom participants agree that QSFP28 is the module of choice, but what goes inside will include both parallel single-mode solutions and wavelength-multiplexed ones.
Finisar’s 50 Gig transmission demonstration that used silicon photonics as the material choice surprised the market. Compared to last year, there were few multi-mode announcements. ECOC 2014 had little excitement and no one defining show event but there were many announcements showing the market’s direction.
There is one observation from the show, which while not particularly exciting or sexy, is important, and it seems to have gone unnoticed in my opinion. Source Photonics demonstrated the 100GBASE-LR4, the 10km 100 Gigabit Ethernet standard, in the QSFP28 form factor. This is not new as Source Photonics also demonstrated this module at OFC. What’s interesting is that no one else has duplicated this result.
There will be demand for a denser -LR4 solution that’s backward compatible with the CFP, CFP2, and CFP4 form factors. It is unlikely that the PSM4, CWDM4, or CLR4 will go 10km and they are not optically compatible with the -LR4. The market is on track to use the QSFP28 for all 100 Gig distances so it needs the supporting optics. The Source Photonics demonstration shows a path for 10km. We expect to see other solutions for longer distances over time.
One surprise at the show was Finisar's and STMicroelectronics's demonstration of 50 Gig non-return-to-zero transmission over 2.2km on standard single mode fiber. The transceiver was in the CFP4 form factor and uses heterogeneous silicon technologies inside. The results were presented in a post-deadline paper (PD.2.4). The work is exciting because it demonstrates a directly modulated laser operating above 28 Gig, the current state-of-the-art.
The use of silicon photonics is surprising because Finisar has been forced to defend its legacy technology against the threat of transceivers based on silicon photonics. These results point to one path forward for next-generation 100 Gig and 400 Gig solutions.
In the coming year, I’m looking for the dominant metro 100G solution to emerge. When will the CFP2 analogue coherent optical module become generally available? Multiple suppliers with this module will help unleash the 100 Gig line-side transmission market, drive revenue growth and spur development of the next-generation solution.
Slow product development gives competing approaches like the digital CFP a chance to become the dominant solution. At present, there is one digital CFP vendor with a generally available product, Acacia Communications, with a second, Fujitsu Optical Components, having announced general availability in the first half of 2015.
Neal Neslusan, vice president of sales and marketing at fabless chip company, MultiPhy.
It was impressive to see Oclaro's analogue CFP2 for coherent applications on the show floor, albeit only in loopback mode. Equally impressive was seeing ClariPhy's DSP on the evaluation board behind the CFP2.
I saw a few of the motherboard-based optics solutions at the show. They looked very interesting and in questioning various folks in the business I learned that for certain data centre applications these optics are considered acceptable. Indeed, they represent an ability to extract much higher bandwidth from a given motherboard as compared to edge-of-the-board based optics, but they are not pluggable.
Traditionally, pluggable optics has been the mainstay of the datacom and enterprise segments and these motherboard-based optics have been relegated to supercomputing. This is just another example, in my opinion, of how the data centre market is becoming distinct from the datacom market.
Were there any surprises at the show? I was surprised and alarmed at the cost of the Martini drinks at the hotel across the street from the show, and they weren't even that good!
Regarding developments in the coming year, the 8x50 Gig versus 4x100 Gig fight in the IEEE is clearly a struggle I will follow. I think it will have a great impact on product development in our industry. If 8x50 Gig wins, it may be one of the few times in the history of our industry that a less advanced solution is chosen over a more advanced and future-proofed one.
The physical size of the next-generation Terabit Ethernet switch chips will have a much larger impact on the optics they connect to in the coming years, compared to the past. This work combined with the motherboard-based optics may create a significant change in the solutions brought to bear for high-performance communications.
John Lively, principal analyst at market research firm, LightCounting.
There were several developments that I noted at the show. ECOC helped cement the view that 100 Gig coherent is mainstream for metro networks. Also more and more system vendors are incorporating Raman/ remote optically pumped amplifier (ROPA) into their toolkit. ROPA is a Raman-based amplifier where the pump is located at one end of the link, not in some intermediate node. Another trend evident at ECOC is how the network boundary between terrestrial and submarine is blurring.
As for developments to watch, I intend to follow mobile fronthaul/ backhaul, higher speed transceiver developments, of course, and how the mega-data-centre operators are disrupting networks, equipment, and components.
For the ECOC reflections, final part, click here
OIF defines carrier requirements for SDN
The Optical Internetworking Forum (OIF) has achieved its first milestone in defining the carrier requirements for software-defined networking (SDN).

The OIF's Carrier Working Group has begun the next stage, a framework document, to identify missing functionalities required to fulfill the carriers' SDN requirements. "The framework document should define the gaps we have to bridge with new specifications," says Hans-Martin Foisel of Deutsche Telekom, and chair of the OIF working group.
There are three main reasons why operators are interested in SDN, says Foisel. SDN offers a way for carriers to optimise their networks more comprehensively than before; not just the network but also processing and storage within the data centre.
"IP-based services and networks are making intensive use of applications and functionalities residing in the data centre - they are determining our traffic matrix," says Foisel. The data centre and transport network need to be coordinated and SDN can determine how best to distribute processing, storage and networking functionality, he says.
SDN also promises to simplify operators' operational support systems (OSS) software, and separate the network's management, control and data planes to achieve new efficiencies.
SDN architecture
The OIF's focus is on Transport SDN, involving the management, control and data plane layers of the network. Also included is an orchestration layer that will sit above the data centre and transport network, overseeing the two domains. Applications then reside on top of the orchestration layer, communicating with it and the underlying infrastructure via a programmable interface.
"Aligning the thinking among different people is quite an educational exercise, and we will have to get to a new understanding"
"The orchestration layer will coordinate the data centre and transport network activities and give, northbound, easy access to new applications," says Foisel.
A key SDN concept is programmability and application awareness, he says. The orchestration layer will require specified interfaces to ease the adding of applications independent of whether they impact the data centre, transport network or both.
Foisel says the OIF work has already highlighted the breadth of vision within the industry regarding how SDN should look. "Aligning the thinking among different people is quite an educational exercise, and we will have to get to a new understanding," he says.
Having equipment prototypes is also helping in understanding SDN. "Implementations that show part of this big picture - it is doable, it is working and how it is working - is quite helpful," says Foisel.
The OIF Carrier Working Group is working closely with the Open Networking Foundation's (ONF) Optical Transport Working Group to ensure that the two groups are aligned. The ONF's Optical Transport Group is developing optical extensions to the OpenFlow standard.
Apps over packet-optical: Ciena boosts 6500's packet handling
Ciena has enhanced its packet-optical equipment portfolio by adding packet support to its flagship 6500 platform.
Cards and software from Ciena's established Carrier Ethernet packet platforms have been added to the 6500, a packet-optical platform that features reconfigurable optical add-drop multiplexing (ROADM), WaveLogic3 coherent transponders, Optical Transport Network (OTN) switching and SONET/SDH aggregation. The system vendor has also developed packet aggregation and switch fabric cards for the 6500.
"You can now use the 6500 for 100 percent packet switching, 100 percent OTN switching, or any mix in between," says Michael Adams, vice president of product and technical marketing at Ciena.
The development is part of a general trend to combine optical and packet to create scalable, manageable networks. It also addresses the operators' growing need for programmable networks to deliver cloud-based services and dynamic bandwidth.
Applications
Ciena has a virtual wide-area network (VWAN) control layer that resides above the networking layer; it abstracts the hardware and is the layer through which software applications are executed.
"We have a scheduler 'app' through the control layer VWAN that allows bandwidth to change between sites, for example," says Adams. "Every night I want to do a backup between these times and I want this much bandwidth as I do it."
Another application is machine-to-machine communication that can be used to link data centres. "If you can virtualise within a data centre, why not virtualise across data centres?" says Adams.
As [servers'] virtual machines move between data centres, the performance of the network becomes key. Ciena has an application programming interface (API) that links to the server's hypervisor, allowing machine-to-machine communication to be intercepted so that bandwidth can be made available for the virtual machine traffic. "We are not doing it today but we have the software to link between two data centres," says Adams.
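As a concrete illustration of the scheduler idea, the sketch below models a scheduled bandwidth request between two data centres. It is a hypothetical representation only - the structure and field names are invented for illustration and are not Ciena's VWAN API.

```cpp
// Hypothetical sketch of the kind of request a bandwidth-scheduler 'app'
// might submit through a control layer; names are invented for illustration.
#include <iostream>
#include <string>

struct BandwidthWindow {
    std::string from_site;   // e.g. a data centre identifier
    std::string to_site;
    std::string start_time;  // daily window start, e.g. "01:00"
    std::string end_time;    // daily window end, e.g. "05:00"
    int gbps;                // extra bandwidth requested during the window
};

int main() {
    // Nightly backup between two sites, as in the example Adams describes.
    BandwidthWindow backup{"dc-east", "dc-west", "01:00", "05:00", 40};
    std::cout << "Schedule " << backup.gbps << " Gbps from " << backup.from_site
              << " to " << backup.to_site << " between " << backup.start_time
              << " and " << backup.end_time << " each night\n";
    return 0;
}
```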
6500 enhancements
Until now it has been difficult to combine packet with packet optical, requiring different platforms, each with their own management system, says Adams. "It has been hard to take a base station that needs only packet, put the Carrier Ethernet traffic onto a ring [network] and then onto a 100 Gigabit wavelength," he says. "You either built pure packet or used a form of packet optical but it was hard to mix."
Ciena has added hardware and software to the 6500 from its existing packet platforms. The packet platforms are used to deliver Ethernet services and infrastructure and are a $40 million-a-quarter business for Ciena, with over 300,000 network elements deployed.
The service-aware operating system (SAOS), developed for the Ethernet packet platforms, has also been ported onto the 6500's new packet and fabric cards.
With the 6500 running the same software as its packet platforms, service management across the network becomes simpler. "Now, one system can deploy services, and look at performance visualisation between the layers," says Adams.
Ciena's latest hardware cards include blades with 1 and 10 Gigabit-per-second (Gbps) aggregation that operate independently of the 6500's switch fabric. "You don't touch the fabric, just run [them] over a WDM wavelength," says Adams. The stackable blades support 120Gbps to 300Gbps of packet traffic.
Meanwhile, the 6500 switch fabric cards add 600 Gigabit or 1.2 Terabit of packet switching capacity, which will be increased further in future.
"We have got these blades that can be stacked besides each other for resiliency or scale," says Adams. "And if you want to scale those up, there is a [switch] fabric solution."
Further reading:
100 Gigabit and packet optical loom large in the metro
P-OTS 2.0: 60-second interview with Heavy Reading's Sterling Perrin
