Nuage Networks uses SDN to tackle data centre networking bottlenecks
Three planes of the network that host Nuage's Virtualised Services Platform (VSP). Source: Nuage Networks
Alcatel-Lucent has set up Nuage Networks, a business venture addressing networking bottlenecks within and between data centres.
The internal start-up combines staff with networking and IT skills, including experience of web-scale services. "You can't solve new problems with old thinking," says Houman Modarres, senior director of product marketing at Nuage Networks. Another benefit of the adopted business model is that Nuage draws on Alcatel-Lucent's software intellectual property.
"It [the Nuage platform] is a good approach. It should scale well, integrate with the wide area network (WAN) and provide agility"
Joe Skorupa, Gartner
Network bottlenecks
Networking in the data centre connects computing and storage resources. Servers and storage have already largely adopted virtualisation, such that networking has become the bottleneck. Virtual machines running applications can be brought up on servers within seconds or minutes, yet may wait days before network connectivity is established, says Modarres.
Nuage has developed its Virtualised Services Platform (VSP) software, designed to solve two networking constraints.
"We are making the network instantiation automated and instantaneous rather than slow, cumbersome, complex and manual," says Modarres. "And rather than optimise locally, such as parts of the data centre like zones or clusters, we are making it boundless."
"It [the Nuage platform] is a good approach," says Joe Skorupa, vice president distinguished analyst, data centre convergence, data centre, at Gartner. "It should scale well, integrate with the wide area network (WAN) and provide agility."
Resources to be connected can now reside anywhere: within a data centre, across data centres, and even in the public cloud linked to an enterprise's own private data centre. Moreover, removing restrictions on where the resources are located boosts efficiency.
"Even in cloud data centres, server utilisation is 30 percent or less," says Modarres. "And these guys spend about 60 percent of their capital expenditure on servers."
It is not that the hypervisor, used for server virtualisation, is inefficient, stresses Modarres: "It is just that when the network gets in the way, it is not worthwhile to wait for stuff; you become more wasteful in your placement of workloads as their mobility is limited."

"A lot of money is wasted on servers and networking infrastructure because the network is getting in the way"
Houman Modarres, Nuage Networks
SDN and the Virtualised Services Platform
Nuage's Virtualised Services Platform (VSP) uses software-defined networking (SDN) to optimise network connectivity and instantiation for cloud applications.
The VSP comprises three elements:
- the Virtualised Services Directory,
- the Virtualised Services Controller,
- and the Virtual Routing & Switching module.
The elements each reside at a different network layer, as shown (see chart, top).
The top layer, the cloud services management plane, houses the Virtualised Services Directory (VSD). The VSD is a policy and analytics engine that allows the cloud service provider to partition the network for each customer or group of tenants.
"Each of them get their zones for which they can place their applications and put [rules-based] permissions as to whom can use what, and who can talk to whom," says Modarres. "They do that in user-friendly terms like application containers, domains and zones for the different groups."
Domains and zones are how an IT administrator views the data centre, explains Modarres: "They don't need to worry about VLANs, IP addresses, Quality of Service policies and access control lists; the network maps that through its abstraction." The policies defined and implemented by the VSD are then adopted automatically when new users join.
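How such a policy might be expressed is easiest to see with a sketch. The following is purely illustrative - it is not Nuage's VSD API or schema - but it captures the idea of defining domains, zones and rules-based permissions in IT-friendly terms and leaving the VLAN, IP and ACL mapping to the platform.

```python
# Illustrative sketch only - not Nuage's actual API or data model.
# An administrator defines domains, zones and rules-based permissions;
# the platform maps these to VLANs, IP addresses, QoS policies and ACLs.

tenant_policy = {
    "tenant": "acme-corp",
    "domains": [
        {
            "name": "web-apps",
            "zones": [
                {"name": "front-end", "allow_talk_to": ["back-end"]},
                {"name": "back-end",  "allow_talk_to": ["front-end", "database"]},
                {"name": "database",  "allow_talk_to": ["back-end"]},
            ],
        }
    ],
    # Rules-based permissions: who can use what, who can talk to whom.
    "permissions": {"admins": ["create_zone", "attach_vm"], "developers": ["attach_vm"]},
}

def zones_reachable_from(policy, zone_name):
    """Return the zones a given zone is permitted to talk to."""
    for domain in policy["domains"]:
        for zone in domain["zones"]:
            if zone["name"] == zone_name:
                return zone["allow_talk_to"]
    return []

print(zones_reachable_from(tenant_policy, "back-end"))  # ['front-end', 'database']
```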
The layer below the cloud services management plane is the data centre control plane. This is where the second platform element, the Virtualised Services Controller (VSC), sits. The VSC is the SDN controller: the control element that communicates with the data plane using the OpenFlow open standard.
The third element, the Virtual Routing & Switching module (VRS), sits in the data path and lets virtual machines communicate so that applications can be brought up rapidly. The VRS sits on the hypervisor of each server. When a virtual machine is instantiated, it is detected by the VRS, which polls the SDN controller to see if a policy has already been set up for the tenant and the particular application. If a policy exists, connectivity is immediate. Moreover, this connectivity is not confined to a single data centre zone but spans the whole data centre and even multiple data centres.
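The detect-and-poll workflow can be sketched as follows. The class and method names are hypothetical, not Nuage's code; the point is simply that if a policy already exists for the tenant and application, the newly instantiated virtual machine is connected immediately.

```python
# Minimal sketch of the workflow described above, using invented names.
# A VRS instance on the hypervisor detects a new virtual machine and asks
# the SDN controller (VSC) whether a policy exists for that tenant/app.

class VirtualServicesController:
    """Toy stand-in for the VSC: holds policies keyed by (tenant, app)."""
    def __init__(self):
        self.policies = {("acme-corp", "web-apps"): {"vlan": 100, "acl": "allow-web"}}

    def lookup(self, tenant, app):
        return self.policies.get((tenant, app))


class VirtualRoutingSwitching:
    """Toy stand-in for the VRS sitting on each hypervisor."""
    def __init__(self, controller):
        self.controller = controller

    def on_vm_instantiated(self, tenant, app, vm_id):
        policy = self.controller.lookup(tenant, app)
        if policy:
            # Policy already defined: connectivity is immediate,
            # anywhere in (or across) data centres.
            print(f"{vm_id}: connected with policy {policy}")
        else:
            print(f"{vm_id}: no policy found, awaiting administrator action")


vrs = VirtualRoutingSwitching(VirtualServicesController())
vrs.on_vm_instantiated("acme-corp", "web-apps", "vm-42")
```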
More than one data centre is involved in disaster recovery scenarios, for example. Spanning data centres also boosts overall efficiency, since spare resources in other data centres can be used by applications as needed.
Meanwhile, linking to an enterprise's own data centre is done using a virtual private network (VPN), bridging the private data centre with the public cloud. "We are the first to do this," says Modarres.
The VSP works with whatever server, hypervisor, networking equipment and cloud management platform is used in a data centre. The SDN controller is based on the same operating system used in Alcatel-Lucent's IP routers, which supports a wealth of protocols. Meanwhile, the virtual switch in the VRS integrates with the various hypervisors on the market, ensuring interoperability.
Dimitri Stiliadis, chief architect at Nuage Networks, describes the VSP architecture as a distributed implementation of the functions performed by the company's router products.
The control plane of the router is effectively moved to the SDN controller. The router's 'line cards' become the virtual switches in the hypervisors. "OpenFlow is the protocol that allows our controller to talk to the line cards," says Stiliadis. "While the border gateway protocol (BGP) is the protocol that allows our controller to talk to other controllers in the rest of the network."
Michael Howard, principal analyst, carrier networks at Infonetics Research, says there are several noteworthy aspects to Nuage's product, including the fact that operators participated in the company's launch and that the software is not tied to Alcatel-Lucent's routers but will run over other vendors' equipment.
"It also uses BGP, as other vendors are proposing, to tie together data centres and the carrier WAN," says Howard. "Several big operators say BGP is a good approach to integrate data centres and carrier WANs, including AT&T and Orange."
Nuage says that trials of its VSP began in April. The European and North American trial partners include UK cloud service provider Exponential-e, French telecoms service provider SFR, Canadian telecoms service provider TELUS and US healthcare provider the University of Pittsburgh Medical Center (UPMC). The product will be generally available from mid-2013.
"There are other key use cases targeted for SDN that are not data centre related: content delivery networks, Evolved Packet Core, IP Multimedia Subsystem, service-chaining and cloudbox"
Michael Howard, Infonetics Research
Challenges
The industry analysts highlight that this market is still in its infancy and that challenges remain.
Gartner's Skorupa points out that the data centre orchestration systems still need to be integrated and that there is a need for cheaper, simpler hardware.
"Many vendors have proposed solutions but the market is in its infancy and customer acceptance and adoption is still unknown," says Skorupa.
Infonetics highlights dynamic bandwidth as a key use case for SDN, particularly between data centres.
"There are other key use cases targeted for SDN that are not data centre related: content delivery networks, Evolved Packet Core, IP Multimedia Subsystem, service-chaining and cloudbox," says Howard.
Cloudbox is a concept being developed by operators where an intelligent general purpose box is placed at a customer's location. The box works in conjunction with server-based network functions delivered via the network, although some application software will also run on the box.
Customers will sign up for different service packages drawn from a menu of firewall, intrusion detection system (IDS), parental control, turbo-button bandwidth bursting and so on, says Howard. Each customer's traffic is steered by SDN and uses Network Functions Virtualisation - network functions such as a firewall or IDS that formerly sat in dedicated equipment - such that the services a user subscribes to are 'chained' using SDN software.
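As a rough illustration of the chaining idea - the packages and function names are invented for the example, not Howard's or any operator's actual catalogue - each subscriber maps to an ordered list of virtualised functions through which the SDN steers their traffic:

```python
# Hedged illustration of service chaining: each customer subscribes to a
# package of virtualised network functions, and their traffic is passed
# through that chain in order. Names and packages are illustrative only.

SERVICE_PACKAGES = {
    "basic":   ["firewall"],
    "family":  ["firewall", "parental_control"],
    "premium": ["firewall", "ids", "turbo_bandwidth"],
}

def build_chain(package):
    """Return the ordered list of virtual functions for a subscription."""
    return SERVICE_PACKAGES[package]

def apply_function(function, packet):
    # Placeholder: in a real deployment the SDN controller would install
    # forwarding rules sending the flow to the server hosting this VNF.
    print(f"applying {function}")
    return packet

def steer(packet, chain):
    """Pass a packet through each function in the chain in turn."""
    for function in chain:
        packet = apply_function(function, packet)
    return packet

steer({"src": "customer-1"}, build_chain("premium"))
```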
ECI Telecom demos 100 Gigabit over 4,600km
- 4,600km optical transmission over submarine cable
- The Tera Santa Consortium, chaired by ECI, will show a 400 Gigabit/ 1 Terabit transceiver prototype in the summer
- 100 Gigabit direct-detection module on hold as the company eyes new technology developments
"When we started the project it was not clear whether the market would go for 400 Gig or 1 Terabit. Now it seems that the market will start with 400 Gig."
Jimmy Mizrahi, ECI Telecom
ECI Telecom has transmitted a 100 Gigabit signal over 4,600km without signal regeneration. Using Bezeq International's submarine cable between Israel and Italy, ECI sent the 100 Gigabit-per-second (Gbps) signal alongside live traffic. The Apollo optimised multi-layer transport (OMLT) platform was used, featuring a 5x7-inch MSA 100Gbps coherent module with soft-decision, forward error correction (SD-FEC).
"We set a target for the expected [optical] performance with our [module] partner and it was developed accordingly," says Jimmy Mizrahi, head of the optical networking line of business at ECI Telecom. "The [100Gbps] transceiver has superior performance; we have heard that from operators that have tested the module's capabilities and performance."
One geography that ECI serves is the former Soviet Union, which has networks with long spans and regions of older fibre.
Tera Santa Consortium
ECI used the Bezeq trial to also perform tests as part of the Tera Santa Consortium project involving Israeli optical companies and universities. The project is developing a transponder capable of 400 Gigabit and 1 Terabit rates. The project is funded by seven participating firms and the Israeli Government.
"When we started the project it was not clear whether the market would go for 400 Gig or 1 Terabit,” says Mizrahi. “Now it seems that the market will start with 400 Gig."
The Tera Santa Consortium expects to demonstrate a 1 Terabit prototype in August and is looking to extend the project a further three years.
100 Gigabit direct detection
In 2012 ECI announced it was working with chip company, MultiPhy, to develop a 100 Gigabit direct-detection module. The 100 Gigabit direct detection technology uses 4x28Gbps wavelengths and is a cheaper solution than 100Gbps coherent. The technology is aimed at short reach (up to 80km) links used to connect data centres, for example, and for metro applications.
“We have changed our priorities to speed up the [100Gbps] coherent solution,” says Mizrahi. “It [100Gbps direct detection] is still planned but has a lower priority.”
ECI says it is monitoring alternative technologies coming to market in the next year. “We are taking it slowly because we might jump to new technologies,” says Mizrahi. “The line cards will be ready, the decision will be whether to go for new technologies or for direct detection."
Mizrahi would not list the technologies but hinted they may enable cheaper coherent solutions. Such coherent modules would not need SD-FEC to meet the shorter reach, metro requirements. Such a module could also be pluggable, such as the CFP or even the CFP2, and use indium phosphide-based modulators.
“For certain customers pricing will always be the major issue,” says Mizrahi. “If you have a solution at half the price, they will take it.”
Space-division multiplexing: the final frontier
System vendors continue to trumpet their achievements in long-haul optical transmission speeds and overall data carried over fibre.
Alcatel-Lucent announced earlier this month that France Telecom-Orange is using the industry's first 400 Gigabit link, connecting Paris and Lyon, while Infinera has detailed a trial demonstrating 8 Terabit-per-second (Tbps) of capacity over 1,175km and using 500 Gigabit-per-second (Gbps) super-channels.

"Integration always comes at the cost of crosstalk"
Peter Winzer, Bell Labs
Yet vendors already recognise that capacity in the frequency domain will only scale so far and that other approaches are required. One is space-division multiplexing: using multiple channels separated in space, implemented for example with multi-core fibre where each core supports several modes.
"We want a technology that scales by a factor of 10 to 100," says Peter Winzer, director of optical transmission systems and networks research at Bell Labs. "As an example, a fibre with 10 cores with each core supporting 10 modes, then you have the factor of 100."
Space-division multiplexing
Alcatel-Lucent's research arm, Bell Labs, has demonstrated the transmission of 3.8Tbps using several data channels and an advanced signal processing technique known as multiple-input, multiple-output (MIMO).
In particular, 40 Gigabit quadrature phase-shift keying (QPSK) signals were sent over a six-spatial-mode fibre, using two polarisation modes and eight wavelengths, to achieve 3.8Tbps. The overall transmission uses only 400GHz of spectrum.
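The headline figure follows directly from the channel count; a back-of-the-envelope check using the numbers quoted above:

```latex
\underbrace{6}_{\text{spatial modes}} \times
\underbrace{2}_{\text{polarisations}} \times
\underbrace{8}_{\text{wavelengths}} \times
40\,\text{Gbps} = 3.84\,\text{Tbps} \approx 3.8\,\text{Tbps}
```

Dividing the 3.84Tbps line rate by the 400GHz of spectrum gives a raw spectral efficiency of roughly 9.6b/s/Hz, before any coding overhead is stripped out.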
Alcatel-Lucent stresses that the commercial deployment of space-division multiplexing remains years off. Moreover operators will likely first use already-deployed parallel strands of single-mode fibre, needing the advanced signal processing techniques only later.
"You might say that is trivial [using parallel strands of fibre], but bringing down the cost of that solution is not," says Winzer.
First, cost-effective integrated amplifiers will be needed. "We need to work on a single amplifier that can amplify, say, ten existing strands of single-mode fibre at the cost of two single-mode amplifiers," says Winzer. An integrated transponder will also be needed: one transponder that couples to 10 individual fibres at a much lower cost than 10 individual transponders.
With a super-channel transponder, several wavelengths are used, each with its own laser, modulator and detector. "In a spatial super-channel you have the same thing, but not, say, three different frequencies but three different spatial paths," says Winzer. Here photonic integration is the challenge to achieve a cost-effective transponder.
Once such integrated transponders and amplifiers become available, it will make sense to couple them to multi-core fibre. But operators will only likely start deploying new fibre once they exhaust their parallel strands of single-mode fibre.
Such integrated amplifiers and integrated transponders will present challenges. "The more and more you integrate, the more and more crosstalk you will have," says Winzer. "That is fundamental: integration always comes at the cost of crosstalk."
Winzer says there are several areas where crosstalk may arise. An integrated amplifier serving ten single-mode fibres will share a multi-core erbium-doped fibre instead of ten individual strands. Crosstalk between those closely-spaced cores is likely.
The transponder will be based on a large integrated circuit giving rise to electrical crosstalk. One way to tackle crosstalk is to develop components to a higher specification but that is more costly. Alternatively, signal processing on the received signal can be used to undo the crosstalk. Using electronics to counter crosstalk is attractive especially when it is the optics that dominate the design cost. This is where MIMO signal processing plays a role. "MIMO is the most advanced version of spatial multiplexing," says Winzer.
To address the crosstalk caused by spatial multiplexing in the Bell Labs demo, 12x12 MIMO was used. Bell Labs says that using MIMO does not add significantly to the overall computation. Existing 100 Gigabit coherent ASICs effectively use a 2x2 MIMO scheme, says Winzer: “We are extending the 2x2 MIMO to 2Nx2N MIMO.”
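In general terms - this is the standard MIMO formulation rather than Bell Labs' specific algorithm - the receiver treats the co-propagating signals as a vector and undoes the mixing with an adaptive equaliser:

```latex
\mathbf{y} = \mathbf{H}\,\mathbf{x} + \mathbf{n}, \qquad
\hat{\mathbf{x}} = \mathbf{W}\,\mathbf{y} \approx \mathbf{x}
```

Here x is the vector of 2N transmitted signals (2N = 12 in the demo: six spatial modes times two polarisations), H is the 2Nx2N channel matrix capturing the crosstalk, n is noise, and W is the equaliser the DSP adapts to invert the channel. Today's coherent receivers already do this for N = 1, equalising the two polarisations with a 2x2 scheme.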
Only one portion of the current signal processing chain is impacted, he adds; a portion that consumes 10 percent of the power will need to increase by a certain factor. The resulting design will be more complex and expensive but not dramatically so, he says.
Winzer says such mitigation techniques need to be investigated now since crosstalk in future systems is inevitable. Even if the technology's deployment is at least a decade away, developing techniques to tackle crosstalk now means vendors have a clear path forward.
Parallelism
Winzer points out that optical transmission continues to embrace parallelism. "With super-channels we go parallel with multiple carriers because a single carrier can’t handle the traffic anymore," he says. This is similar to parallelism in microprocessors where multi-core designs are now used due to the diminishing return in continually increasing a single core's clock speed.
For 400Gbps or 1 Terabit over a single-mode fibre, the super-channel approach is the near term evolution.
Over the next decade, the benefit of frequency parallelism will diminish since it will no longer increase spectral efficiency. "Then you need to resort to another physical dimension for parallelism and that would be space," says Winzer.
MIMO will be needed when crosstalk arises, and that will occur with multi-mode fibre.
"For multiple strands of single mode fibre it will depend on how much crosstalk the integrated optical amplifiers and transponders introduce," says Winzer.
Part 1: Terabit optical transmission
Alcatel-Lucent demos dual-carrier Terabit transmission
"Without [photonic] integration you are doubling up your expensive opto-electronic components which doesn't scale"
Peter Winzer, Alcatel-Lucent's Bell Labs
Alcatel-Lucent's research arm, Bell Labs, has used high-speed electronics to enable one Terabit long-haul optical transmission using two carriers only.
Several system vendors, including Alcatel-Lucent, have demonstrated one Terabit transmission, but the company is claiming an industry first in using only two multiplexed carriers. In 2009, Alcatel-Lucent's first Terabit optical transmission used 24 sub-carriers.
"There is a tradeoff between the speed of electronics and the number of optical modulators and detectors you need," says Peter Winzer, director of optical transmission systems and networks research at Bell Labs. "In general it will be much cheaper doing it with fewer carriers at higher electronics speeds than doing it at a lower speed with many more carriers."
What has been done
In the lab-based demonstration, Bell Labs sent five 1 Terabit-per-second (Tbps) signals over an equivalent distance of 3,200km. Each signal uses dual-polarisation 16-QAM (quadrature amplitude modulation) to achieve a 1.28Tbps line rate. Thus each carrier holds 640Gbps: some 500Gbps of data and the rest forward error correction (FEC) bits.
In current 100Gbps systems, dual-polarisation, quadrature phase-shift keying (DP-QPSK) modulation is used. Going from QPSK to 16-QAM doubles the bit rate. Bell Labs has also increased the symbol rate from some 30Gbaud to 80Gbaud using state-of-the-art high-speed electronics developed at Alcatel Thales III-V Lab.
"To achieve these rates, you need special high-speed components - multiplexers - and also high-speed multi-level devices," says Winzer. These are indium phosphide components, not CMOS and hence will not be deployed in commercial products for several years yet. "These things are realistic [in CMOS], just not for immediate product implementation," says Winzer.
Each carrier occupies 100GHz of channel bandwidth, equating to 200GHz per Terabit signal, or a spectral efficiency of 5.2b/s/Hz. Current state-of-the-art 100Gbps systems use 50GHz channels, achieving 2b/s/Hz.
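A quick tally of the quoted numbers shows how the per-carrier and spectral-efficiency figures fit together:

```latex
\underbrace{80\,\text{Gbaud}}_{\text{symbol rate}} \times
\underbrace{4\,\tfrac{\text{bits}}{\text{symbol}}}_{\text{16-QAM}} \times
\underbrace{2}_{\text{polarisations}} = 640\,\text{Gbps per carrier}
```

Two carriers give a 1.28Tbps line rate per signal; stripping the FEC overhead leaves roughly 1.04Tbps of payload, which over 2 x 100GHz is consistent with the quoted 5.2b/s/Hz.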
The 3,200km reach using 16-QAM technology is achieved in the lab, using good fibre and without any commercial product margins, says Winzer. Adding commercial product margins would reduce the optical link budget by 2-3dB and hence the overall reach.
Winzer says the one Terabit demonstration uses all the technologies employed in Alcatel-Lucent's photonic service engine (PSE) ASIC although the algorithms and soft-decision FEC used are more advanced, as expected in an R&D trial.
Before such one Terabit systems become commercial, progress in photonic integration will be needed as well as advances in CMOS process technology.
"Progress in photonic integration is needed to get opto-electronic costs down as it [one Terabit] is still going to need two-to-four sub-carriers," he says. A balance between parallelism and speed needs to be struck, and parallelism is best achieved using integration. "Without integration you are doubling up your expensive opto-electronic components which doesn't scale," says WInzer.
A FOX-C approach to flexible optical switching
Flexible switching of high-capacity traffic carried over 'super-channel' dense wavelength-division multiplexing wavelengths is the goal of a European Commission Seventh Framework Programme (FP7) research project.

The €3.6M FOX-C (Flexible optical cross-connect nodes) project will develop a flexible-spectrum reconfigurable optical add/drop multiplexer (ROADM) for 400Gbps and one Terabit optical transmission. The ROADM will be designed to switch not only super-channels but also their constituent carriers.
Companies involved in the project include the operator France Telecom and the optical component player Finisar. However, no major European system vendor is taking part in FOX-C, although W-Onesys, a small system vendor from Spain, is participating.
“We want to transfer to the optical layer the switching capability”
Erwan Pincemin, FT-Orange
“It is becoming more difficult to increase the spectral efficiency of such networks,” says Erwan Pincemin, senior expert in fibre optic transmission at France Telecom-Orange. “We want to increase the advantages of the network by adding flexibility in the management of the wavelengths in order to adapt the network as services evolve.”
FOX-C will increase the data rate carried by each wavelength to achieve a moderate increase in spectral efficiency. Pincemin says such modulation schemes as orthogonal frequency division multiplexing (OFDM) and Nyquist WDM will be explored. But the main goal is to develop flexible switching based on an energy efficient and cost effective ROADM design.
The ROADM’s filtering will be able to add and drop 10 and 100 Gigabit sub-channels or 400 Gigabit and 1 Terabit super-channels. By using the developed filter to switch optically at speeds as low as 10 Gigabit, the aim is to avoid having to do the switching electrically with its associated cost and power consumption overhead. “We want to transfer to the optical layer the switching capability,” says Pincemin.
While the ROADM design is part of the project’s goals, what is already envisaged is a two-stage pass-through-and-select architecture. The first stage, for coarse switching, will process the super-channels and will be followed by finer filtering to extract (drop) and insert (add) individual lower-rate tributaries.
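A schematic way to think about the two stages is sketched below. The data structures, rates and names are illustrative only and not the FOX-C design: the first stage routes whole super-channels, and only dropped super-channels pass through the finer second-stage filter that extracts individual tributaries.

```python
# Schematic sketch of the two-stage "pass-through-and-select" idea:
# a coarse first stage switches whole super-channels between degrees,
# then a fine second stage filters out individual low-rate tributaries
# to drop locally. All names and values are illustrative only.

super_channels = {
    "SC1": {"rate": "400G", "tributaries": ["trib-10G-a", "trib-100G-b"]},
    "SC2": {"rate": "1T",   "tributaries": ["trib-100G-c", "trib-10G-d"]},
}

def coarse_stage(channels, route_table):
    """Stage 1: route each super-channel to an output degree or to drop."""
    return {name: route_table.get(name, "pass-through") for name in channels}

def fine_stage(channel, drop_list):
    """Stage 2: extract only the requested tributaries from a dropped super-channel."""
    return [t for t in channel["tributaries"] if t in drop_list]

routes = coarse_stage(super_channels, {"SC2": "drop"})
for name, action in routes.items():
    if action == "drop":
        print(name, "->", fine_stage(super_channels[name], ["trib-10G-d"]))
    else:
        print(name, "->", action)
```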
The project started in October 2012 and will span three years. The resulting system testing will take place at France Telecom-Orange's lab in Lannion, France.
Project players
The project’s technical leader is the Athens Institute of Technology (AIT), headed by Prof. Ioannis Tomkos, while the administrative leader is the Greek company Optronics Technologies.
Finisar will provide the two-stage optical switch while France Telecom-Orange will test the resulting ROADM and will build the multi-band OFDM transmitter and receiver to evaluate the design.
Athens Institute of Technology will work with Finisar on the technical aspects, in particular a flexible networking architecture study. The Hebrew University, which has expertise in free-space optical systems, is working with Finisar on the design and build of the ultra-selective adaptive optical filter. The Spanish firm W-Onesys is a system integration specialist and will also work with Finisar to integrate its wavelength-selective switch for the ROADM. Other project players include Aston University, Tyndall National Institute and the Karlsruhe Institute of Technology.
According to Pincemin, the absence of the major equipment players is regrettable, although he points out that they are involved in other EC FP7 projects addressing flexible networking.
He believes that their priorities are elsewhere and that the FOX-C project may be deemed as too forward looking and risky. “They want to have a clear return on investment on their research,” says Pincemin.
PCIe 3.0 and USB 3.0 to link mobile chips
- Both protocols to run on the MIPI Alliance's M-PHY transceiver
- Goal is to exploit existing PCIe and USB driver and application software while benefitting from the low power M-PHY
- OcuLink cable based on PCIe 3.0 promises up to 32 Gigabit
Source: Gazettabyte
At first glance the Peripheral Component Interconnect Express (PCIe) bus has little in common with the Universal Serial Bus (USB) interface.
PCIe is a multi-lane bus standard used to move heavy data payloads over computing backplanes while USB is a consumer electronics interface. Yet certain areas of overlap are appearing.
The PCI Special Interest Group (PCI-SIG) is looking to get PCIe adopted within mobile devices - tablets and smartphones - for chip-to-chip communication, an application already performed by the 480Mbps USB 2.0 High-Speed Inter-Chip (HSIC) standard.
PCI-SIG is also developing the OcuLink external copper cable for storage and consumer applications.
PCIe is a point-to-point link that can also be switched, and is used to connect processors to each other, processors to co-processors, and processors to storage. Now, USB 3.0 and PCIe 3.0 are set to play an embedded role within mobile devices.
Handsets use the Mobile Industry Processor Interface (MIPI) Alliance's interfaces centred on the handset's mobile application processor, with several MIPI point-to-point interfaces defined to link to the handset's baseband processor, display and camera sensor.
Two physical layer (PHY) specifications - the D-PHY and the M-PHY - are used by MIPI. The M-PHY is the faster of the two (up to 5.8Gbit/s compared to the D-PHY's 1Gbit/s). It is the M-PHY transceiver that is used in links between the handset's application processor and the radio, display and camera sensor. And it is the M-PHY that will run the USB 3.0-based SuperSpeed Inter-Chip (SSIC) - the follow-on to HSIC - and PCIe 3.0.
The motivation is to benefit from the huge amount of software drivers and applications developed for USB and PCIe, while taking advantage of the M-PHY's lower power consumption compared with USB's and PCIe's native transceivers.
M-PHY runs at 1.25-1.45Gbps while two faster versions are in development: 2.5-2.9Gbps and up to 5.8Gbps.
A PHY adaptor layer, known as the PIPE 3.0-to-M-PHY bridge, translates the USB protocol onto M-PHY. The same strategy is being pursued by the PCI-SIG using a logical PHY to run PCIe 3.0 on M-PHY.
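The resulting layering can be summarised as follows. This is a simplified sketch based on the description above; the dictionary structure and the grouping of the quoted M-PHY rates are this summary's own, not something stated by MIPI or the PCI-SIG.

```python
# Simplified summary of the layering described above: both USB 3.0 (as
# SSIC) and mobile PCIe 3.0 reuse their existing protocol stacks and are
# adapted onto the MIPI M-PHY transceiver instead of their native PHYs.

mobile_link_stack = {
    "USB 3.0 (SSIC)": {
        "protocol_layer": "USB 3.0 protocol plus existing drivers and applications",
        "adaptation":     "PIPE 3.0-to-M-PHY bridge",
        "physical_layer": "MIPI M-PHY",
    },
    "Mobile PCIe 3.0": {
        "protocol_layer": "PCIe 3.0 protocol plus existing drivers and applications",
        "adaptation":     "logical PHY adapting PCIe 3.0 to the M-PHY",
        "physical_layer": "MIPI M-PHY",
    },
}

# M-PHY per-lane rates quoted in the article (Gbps).
m_phy_rates = ["1.25-1.45", "2.5-2.9 (in development)", "up to 5.8 (in development)"]

for link, layers in mobile_link_stack.items():
    print(link, "->", layers["adaptation"])
```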
The PCI-SIG hopes to have the mobile PCIe 3.0 specification completed in the first quarter of 2013.
Meanwhile, OcuLink will initially be a compact, low-cost copper cable interface supporting one, two and four lanes of PCIe 3.0. Two versions are planned - a passive and an active copper cable - before a fibre-based version is developed.
Uses of OcuLink will include connecting to storage devices and to audiovisual equipment.
A more detailed article on PCIe and USB for mobile will appear in an upcoming issue of New Electronics
Avago's latest optical engine targets active optical cables
Avago Technologies has unveiled its first family of active optical cables for use in the data centre and for high performance computing.
The company has developed an optical engine for use in the active optical cables (AOCs). Known as the Atlas 75x, the optical engine reduces the power consumption and cost of the AOC to better compete with direct-attach copper cables.

“Some 99 percent of [active optical cable] applications are 20m or less”
Sharon Hall, Avago
"This is a price-elastic market," says Sharon Hall, product line manager for embedded optics at Avago Technologies. "A 20 percent price premium over a copper solution, then it starts to get interesting."
The AOC family comprises a 10 Gigabit-per-second (Gbps) single-channel SFP+ and two QSFP+ cables - a 4x10Gbps QSFP+ and a QSFP+-to-four-SFP+. The SFP+ AOC is used for 10 Gigabit Ethernet, 8 Gigabit Fibre Channel and Infiniband applications. The QSFP+ is used for 4-channel Infiniband and serial-attached SCSI (SAS) storage, while the QSFP+-to-four-SFP+ is required for server applications.
There are also three CXP AOC products: 10-channel and 12-channel cables with each channel at 10Gbps, and a 12-channel CXP with each channel at 12.5Gbps. The devices support the 100GBASE-SR10 100 Gigabit Ethernet and 12-channel Infiniband standards.
The 12-channel 12.5Gbps CXP product is used typically for proprietary applications such as chassis-to-chassis links where greater bandwidth is required, says Avago.
The SFP+ and QSFP+ products have a reach of 20m whereas competing AOC products achieve 100m. “Some 99 percent of applications are 20m or less,” says Hall.
The SFP+ and QSFP+ AOC products use the Atlas 75x optical engine. The CXP cable uses Avago’s existing Atlas 77x MicroPod engine and has a reach of 100m.
The Atlas 75x duplex 10Gbps engine reduces the power consumption by adopting a CMOS-based VCSEL driver instead of a silicon germanium one. “With CMOS you do not get the same level of performance as silicon germanium and that impacts the reach,” says Hall. “This is why the MicroPod is more geared for the high-end solutions.”
The result of using the Atlas 75x is an SFP+ AOC with a power consumption of 270mW, compared to 200mW for a passive direct-attach copper cable. However, the SFP+ AOC has a lower bit error rate (1x10^-15 versus 1x10^-12), a reach of up to 20m compared to the copper cable’s 7m, and is only a quarter of the weight.
The SFP+ AOC does have a lower power consumption compared to active direct-attach cable, which consumes 400-800mW and has a reach of 15m.
Avago says that up to a 30m reach is possible using the Atlas 75x optical engine. Meanwhile, samples of the AOCs are available now.
Does Cisco Systems' CPAK module threaten the CFP2?
Cisco Systems has been detailing over recent months its upcoming proprietary optical module dubbed CPAK. The development promises to reduce the market opportunity for the CFP2 multi-source agreement (MSA) and has caused some disquiet in the industry.

Source: Cisco Systems, Gazettabyte, see comments
"The CFP2 has been a bit slow - the MSA has taken longer than people expected - so Cisco announcing CPAK has frightened a few people," says Paul Brooks, director for JDSU's high speed transport test portfolio.
Brooks speculates that the advent of CPAK may even cause some module makers to skip the CFP2 and go straight to the smaller CFP4 given the time lag between the two MSAs is relatively short.
The CPAK module - smaller than the CFP2 MSA, at three quarters its volume - has not been officially released and Cisco will not comment on the design, but in certain company presentations the CPAK is compared with the CFP. The details are shown in the table above, with the CFP2’s details added.
The CPAK is the first example of Cisco's module design capability following its acquisition of silicon photonics player, Lightwire.
The development of the module highlights how the acquisition of core technology can give an equipment maker the ability to develop proprietary interfaces that promise costs savings and differentiation. But it also raises a question mark regarding the CFP2 and the merit of MSAs when a potential leading customer of the CFP2 chooses to use its own design.
"The CFP2 has been a bit slow - the MSA has taken longer than people expected - so Cisco announcing CPAK has frightened a few people"
Paul Brooks, JDSU
Industry analysts do not believe it undermines the CFP2 MSA. “I believe there is business for the CFP2,” says Daryl Inniss, practice leader, Ovum Components. “Cisco is shooting for a solution that has some staying power. The CFP2 is too large and the power consumption too high while the CFP4 is too small and will take too long to get to market; CPAK is a great compromise.”
That said, Inniss, in a recent opinion piece entitled: Optical integration challenges component/OEM ecosystem, writes:
“Cisco’s Lightwire acquisition provides another potential attack on the traditional ecosystem. Lightwire provides unique silicon photonics based technology that can support low power consumption and high-density modules. Cisco may adopt a proprietary transceiver strategy to lower cost, decrease time to market, and build competitive barriers. It need not go through the standards process, which would enable its competitors and provide them with its technology. Cisco only needs to convince its customers that it has a robust supply chain and that it can support its product.”
Vladimir Kozlov, CEO of market research firm, LightCounting, is not surprised by the development. “Cisco could use more proprietary parts and technologies to compete with Huawei over the next decade,” he says. “From a transceiver vendor perspective, custom-made products are often more profitable than standard ones; unless Cisco will make everything in house, which is unlikely, it is not bad news.”
JDSU has just announced that its ONT-100G test set supports the CFP2 and CFP4. The equipment will also support CPAK. "We have designed a range of adaptors that allows us to interface to other optics including one very large equipment vendor's - Cisco's - own CFP2-like form factor," says Brooks.
However, Brooks still expects the industry to align on a small number of MSAs despite the advent of CPAK. "The majority view is that the CFP2 and CFP4 will address most people's needs," says Brooks. "Although there is some debate whether a QSFP2 may be more cost effective than the CFP4." The QSFP2 is the next-generation compact follow-on to the QSFP that supports the 4x25Gbps electrical interface.
JDSU's Brandon Collings on silicon photonics, optical transport & the tunable SFP+
JDSU's CTO for communications and commercial optical products, Brandon Collings, discusses reconfigurable optical add/drop multiplexers (ROADMs), 100 Gigabit, silicon photonics, and the status of JDSU's tunable SFP+.

"We have been continually monitoring to find ways to use the technology [silicon photonics] for telecom but we are not really seeing that happen”
Brandon Collings, JDSU
Brandon Collings highlights two developments that summarise the state of the optical transport industry.
The industry is now aligned on the next-generation ROADM architecture of choice, while experiencing a 'heavy component ramp' in high-speed optical components to meet demand for 100 Gigabit optical transmission.
The industry has converged on the twin wavelength-selective switch (WSS) route-and-select ROADM architecture for optical transport. "This is in large networks and looking forward, even in smaller sized networks," says Collings.
In a route-and-select architecture, a pair of WSSes reside at each degree of the ROADM. The second WSS is used in place of splitters and improves the overall optical performance by better suppressing possible interference paths.
JDSU showcased its TrueFlex portfolio of components and subsystems for next-generation ROADMs at the recent European Conference on Optical Communications (ECOC) show. The company first discussed the TrueFlex products a year ago. "We are now in the final process of completing those developments," says Collings.
Meanwhile, the 100 Gigabit-per-second (Gbps) component market is progressing well, says Collings. The issues that interest him include next-generation designs such as a pluggable 100Gbps transmission form factor.
Direct detection and coherent
JDSU remains uncertain about the market opportunities for 100Gbps direct-detection solutions for point-to-point and metro applications. "That area remains murky," says Collings. "It is clearly an easy way into 100 Gig - you don't have to have a huge ASIC developed - but its long-term prospects are unclear."
The price point of 100Gbps direct-detection, while attractive, is competing against coherent transmission solutions which Collings describes as volatile. "As coherent becomes comparable [in cost], the situation will change for the 4x25 Gig [direct detection] quite quickly," he says. "Coherent seems to be the long-term, robust cost-effective way to go, capturing most of the market."
At present, coherent solutions are aimed at long-haul and require a large, power-consuming ASIC. Equally, the accompanying optical components - the lasers and modulators - are also relatively large. For the coherent metro market, the optics must become cheaper and smaller, as must the coherent ASIC.
"If you are looking to put that [coherent ASIC and optics] into a CFP or CFP2, the problem is based on power; cost is important but power is the black-and-white issue," says Collings. Engineers are investigating what features can be removed from the long-haul solution to achieve the target 15-20W power consumption. "That is pretty challenging from an ASIC perspective and leaves little-to-no headroom in a pluggable," says Collings.
The same applies to the optics. "Is there a lesser set of photonics that can sit on a board that is much lower cost and perhaps has some weaker performance versus today's high-performance long-haul?" says Collings. These are the issues designers are grappling with.
Silicon photonics
Another area in flux is the silicon photonics marketplace. "It is a very fluid and active area," says Collings. "We are not highly active in the area but we are very active with outside organisations to keep track of its progress, its capabilities and its overall evolution in terms of what the technology is capable of."
The silicon photonics industry has shifted towards datacom and interconnect technology in the last year, says Collings. The performance levels silicon photonics achieves are better suited to datacom than telecom's more demanding requirements. "We have been continually monitoring to find ways to use the technology for telecom but we are not really seeing that happen,” says Collings.
Tunable SFP+
JDSU demonstrated its tunable laser in an SFP+ pluggable optical module at the ECOC exhibition.
The company was first to market with the tunable XFP, claiming it secured JDSU an almost two-year lead in the marketplace. "We are aiming to repeat that with the SFP+," says Collings.
The SFP+ doubles a line card's interface density compared to the XFP module, and supports both 10Gbps client-side and wavelength-division multiplexing (WDM) interfaces. "Most of the cards have transitioned from supporting the XFP to the SFP+," says Collings. "This [having a tunable SFP+] completes that portfolio of capability."
JDSU has provided samples of the tunable pluggable to customers. "We are working with a handful of leading customers and they typically have a preference on chirp or no-chirp [lasers], APD [avalanche photo-diode] or no APD, that sort of thing," says Collings.
JDSU has not said when it will start production of the tunable SFP+. "It won't be long," says Collings, who points out that JDSU has been demonstrating the pluggable for over six months.
The company plans a two-stage rollout. JDSU will launch a slightly higher power-dissipating tunable SFP+ "a handful of months" before the standard-compliant device. "The SFP+ standard calls for 1.5W but for some customers that want to hit the market earlier, we can discuss other options," says Collings.
The next high-speed Ethernet standard starts to take shape
Source: Gazettabyte
The IEEE has begun work to develop the next-speed Ethernet standard beyond 100 Gigabit to address significant predicted growth in bandwidth demand.
The standards body has set up the IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group, chaired by John D’Ambrosia, who previously chaired the IEEE P802.3ba 40 and 100 Gigabit Ethernet standards effort, ratified in June 2010. "I guess I’m a glutton for punishment," quips D'Ambrosia.
The Higher Speed Ethernet standard could be completed by early 2017.
The group was set up following an extensive one-year study by the IEEE 802.3 Bandwidth Assessment Ad Hoc group into networking capacity growth trends in various markets. The study looked beyond core networking and data centres - the focus of the 40 and 100 Gigabit Ethernet (GbE) study work - to include high-performance computing, financial markets, Internet exchanges and the scientific community.
One conclusion of the resulting report, the IEEE 802.3 Industry Connections Ethernet Bandwidth Assessment, is that Terabit capacity will likely be required by 2015, growing a further tenfold by 2020.
“By 2015 core networks on average will need ten times the bandwidth of 2010, and one hundred times [the bandwidth] by 2020,” says D’Ambrosia, who is also the chair of the IEEE 802.3 Ethernet Bandwidth Assessment Ad Hoc group, as well as chief Ethernet evangelist, CTO office at Dell. “If you look at Ethernet in 2010, it was at 100 Gigabit, so ten times 100 Gigabit in 2015 is a Terabit and a hundred times 2010 is 10 Terabit by 2020.”

"We have got to the point where the pesky laws of physics are challenging us"
John D'Ambrosia, chair of the IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group
D'Ambrosia stresses that the Ad Hoc group's role is to talk about capacity requirements, not interface speeds. The technical details of any interface implementation will only become clear once the standardisation effort is well under way.
A second Ethernet Bandwidth Assessment study finding is that network aggregation nodes are growing faster, and hence require greater capacity earlier, than the network's end points.
"There is also a growing deviation between the big guys and the rest of the market," says D'Ambrosia. He has heard individuals from the largest internet content providers say they need Terabit connections by 2013, while others claim it will be 2020 before a mass market develops for such an interconnect.
D'Ambrosia says the main findings are not necessarily surprising but there were two 'aha' moments during the study.
One was that the core networking growth rates predicted in 2007 by the 40 and 100 Gig High-speed Study Group are still valid five years on.
The other concerned the New York Stock Exchange that had forecast that it would need to install four 100Gbps links in its data centre yet ended up using 13. "If there is any company that has a lot of money on the line and would have the best chance of nailing down their needs, I would put the New York Stock Exchange up there," says D'Ambrosia. "That tells you something about bandwidth growth and that you can still underestimate what is going to happen."
"The reality is that I can't give you any solutions right now that are attractive to do a Terabit"
What next
The IEEE standardisation work for the next speed Ethernet has not started but the completed Ethernet Bandwidth Assessment study will likely form an important input for the Industry Connections Higher Speed Ethernet Consensus group.
The start of the standardisation work is expected in either March or July 2013 with the Study Group phase then taking a further eight months. This compares to 18 months for the IEEE 40GbE and 100GbE Study Group work (see chart above). The Task Force's work - writing the specification - is then expected to take a further two and a half years, completing the standard in early 2017 if all goes to plan.
Technology options
While stressing that the IEEE is talking about capacities and not yet interface speeds, Terabit capacity could be solved using multiple 400 Gigabit Ethernet interfaces, says D'Ambrosia.
At present there is no 400GbE project underway. However, the industry does believe that 400GbE is "doable" economically and technically. "Much of the supply base, when we are talking about Ethernet, is looking at 400 Gigabit," says D'Ambrosia.
Achieving a 1TbE interface looks much more distant. "People pushing for 1 Terabit tend to be the people looking at it from the bandwidth perspective and then looking at upgrading their networks and making multiple investments," he says. "But the reality is that I can't give you any solutions right now that are attractive to do a Terabit."
All agree that the technical challenges facing the industry to meet growing bandwidth demands are starting to mount. "We have got to the point where the pesky laws of physics are challenging us," says D'Ambrosia.
Further reading:
IEEE 802.3 Industry Connections Higher Speed Ethernet Ad Hoc
