ECI Telecom demos 100 Gigabit over 4,600km

  • 4,600km optical transmission over submarine cable
  • The Tera Santa Consortium, chaired by ECI, will show a 400 Gigabit/1 Terabit transceiver prototype in the summer
  • 100 Gigabit direct-detection module on hold as the company eyes new technology developments

 

"When we started the project it was not clear whether the market would go for 400 Gig or 1 Terabit. Now it seems that the market will start with 400 Gig."

Jimmy Mizrahi, ECI Telecom

ECI Telecom has transmitted a 100 Gigabit signal over 4,600km without signal regeneration. Using Bezeq International's submarine cable between Israel and Italy, ECI sent the 100 Gigabit-per-second (Gbps) signal alongside live traffic. The Apollo optimised multi-layer transport (OMLT) platform was used, featuring a 5x7-inch MSA 100Gbps coherent module with soft-decision forward error correction (SD-FEC).

"We set a target for the expected [optical] performance with our [module] partner and it was developed accordingly," says Jimmy Mizrahi, head of the optical networking line of business at ECI Telecom. "The [100Gbps] transceiver has superior performance; we have heard that from operators that have tested the module's capabilities and performance."

One geography that ECI serves is the former Soviet Union, which has large-span networks and regions of older fibre.


Tera Santa Consortium

ECI used the Bezeq trial to also perform tests as part of the Tera Santa Consortium project, which involves Israeli optical companies and universities. The project is developing a transponder capable of 400 Gigabit and 1 Terabit rates, and is funded by seven participating firms and the Israeli Government.

"When we started the project it was not clear whether the market would go for 400 Gig or 1 Terabit," says Mizrahi. "Now it seems that the market will start with 400 Gig."

The Tera Santa Consortium expects to demonstrate a 1 Terabit prototype in August and is looking to extend the project a further three years.

100 Gigabit direct detection

In 2012 ECI announced it was working with chip company MultiPhy to develop a 100 Gigabit direct-detection module. The direct-detection technology uses 4x28Gbps wavelengths and is a cheaper solution than 100Gbps coherent. It is aimed at short-reach (up to 80km) links used to connect data centres, for example, and at metro applications.
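
The lane arithmetic behind the 4x28Gbps approach can be checked directly. A minimal sketch; the split between payload and overhead is an illustrative assumption (100GbE plus framing/FEC), not a figure from the article:

```python
# Lane arithmetic for the 4x28Gbps direct-detection approach.
# The overhead interpretation is an assumption for illustration.
lanes = 4
lane_rate_gbps = 28
aggregate_gbps = lanes * lane_rate_gbps

payload_gbps = 100                           # the 100 Gigabit client payload
overhead_gbps = aggregate_gbps - payload_gbps
print(aggregate_gbps, overhead_gbps)         # 112 total, ~12 available for framing and FEC
```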

“We have changed our priorities to speed up the [100Gbps] coherent solution,” says Mizrahi. “It [100Gbps direct detection] is still planned but has a lower priority.”

ECI says it is monitoring alternative technologies coming to market in the next year. “We are taking it slowly because we might jump to new technologies,” says Mizrahi. “The line cards will be ready, the decision will be whether to go for new technologies or for direct detection."

Mizrahi would not list the technologies but hinted they may enable cheaper coherent solutions. Such modules would not need SD-FEC to meet the shorter-reach metro requirements, could be pluggable - a CFP or even a CFP2 - and could use indium phosphide-based modulators.

“For certain customers pricing will always be the major issue,” says Mizrahi. “If you have a solution at half the price, they will take it.”


Cisco Systems demonstrates 100 Gigabit technologies

* Cisco adds the CPAK transceiver to its mix of 100 Gigabit coherent and elastic core technologies
* Announces 100 Gigabit transmission over 4,800km

 

"CPAK helps accelerate the feasibility and cost points of deploying 100Gbps"

Stephen Liu, Cisco

Cisco Systems has announced that its 100 Gigabit coherent module has achieved a reach of 4,800km without signal regeneration. The span was achieved in the lab and the system vendor intends to verify it in a customer's network.

The optical transmission system achieved a reach of 3,000km over low-loss fibre when first announced in 2012. The extended reach is not the result of a design upgrade; rather, the 100 Gigabit-per-second (Gbps) module is being used on a link with Raman amplification.

Cisco says it started shipping its 100Gbps coherent module in June 2012. "We have shipped over 2,000 100Gbps coherent dense WDM ports," says Sultan Dawood, marketing manager at Cisco. The 100Gbps ports include line-side 100Gbps interfaces integrated within Cisco's ONS 15454 multi-service transport platform and its CRS core router supporting its IP-over-DWDM elastic core architecture.

Cisco has also coupled the ASR 9922 series router to the ONS 15454. "We are extending what we have done for IP and optical convergence in the core," says Stephen Liu, director of market management at Cisco. "There is now a common solution to the [network] edge."

None of Cisco's customers has yet used 100Gbps over a 3,000km span, never mind 4,800km. But the reach achieved is an indicator of the optical transmission performance. "The [distance] performance is really a proxy for usefulness," says Liu. "If you take that 3,000km over low-loss fibre, what that buys you is essentially a greater degree of tolerance for existing fibre in the ground."

Much industry attention is being given to the next-generation transmission speeds of 400Gbps and one Terabit. These speeds require super-channels - multi-carrier signals that together carry 400Gbps or one Terabit - as well as flexible spectrum to pack the carriers efficiently across the fibre. But Cisco argues that faster transmission is only one of the engineering milestones to be achieved, especially when 100Gbps deployment is still in its infancy.

To benefit 100Gbps deployments, Cisco has officially announced its own CPAK 100Gbps client-side optical transceiver after discussing the technology over the last year. "CPAK helps accelerate the feasibility and cost points of deploying 100Gbps," says Liu.

CPAK

The CPAK is Cisco's first optical transceiver to use silicon photonics technology, following its acquisition of LightWire. It is a compact optical transceiver designed to replace the larger, more power-hungry 100Gbps CFP interfaces.

The CPAK is being launched at the same time as many companies are announcing CFP2 multi-source agreement (MSA) optical transceiver products. Cisco stresses that the CPAK conforms to the IEEE 100GBASE-LR4 and -SR10 100Gbps standards. Indeed at OFC/NFOEC it is demonstrating the CPAK interfacing with a CFP2.

The CPAK will be used across several Cisco platforms but the first implementation is for the ONS 15454.

The CPAK transceiver will be generally available in the summer of 2013.


OFC/NFOEC 2013 to highlight a period of change

Next week's OFC/NFOEC conference and exhibition, to be held in Anaheim, California, provides an opportunity to assess developments in the network and the data centre and get an update on emerging, potentially disruptive technologies.

 

Source: Gazettabyte

Several networking developments suggest a period of change and opportunity for the industry. Yet the impact on optical component players will be subtle, sparing them the full effects of any disruption. Meanwhile, they must contend with the ongoing challenges of fierce competition and price erosion while also funding much-needed innovation.

The last year has seen the rise of software-defined networking (SDN), the operator-backed Network Functions Virtualization (NFV) initiative and growing interest in silicon photonics.

SDN is already being deployed in the data centre. Large data centre adopters are using an open standard implementation of SDN, OpenFlow, to control and tackle changing traffic-flow requirements and workloads.

Telcos are also interested in SDN. They view the emerging technology as providing a more fundamental way to optimise their all-IP networks in terms of processing, storage and transport.

Carrier requirements are broader than those of data centre operators; unsurprising given their more complex networks. It is also unclear how open and interoperable SDN will be, given that established vendors are less keen to enable their switches and IP routers to be externally controlled. But the consensus is that the telcos and large content service providers backing SDN are too important to ignore. If traditional switch and router vendors hamper the initiative with proprietary add-ons, newer players will willingly fulfil the requirements.

Optical component players must assess how SDN will impact the optical layer and perhaps even components, a topic the OIF is already investigating, while keeping an eye on whether SDN causes market share shifts among switch and router vendors.

The ETSI Network Functions Virtualization (NFV) group is an operator-backed initiative that has received far less media attention than SDN. With NFV, telcos want to embrace IT server technology to replace the many specialist hardware boxes that take up valuable space, consume power, complicate their already complex operations support systems (OSS) and require specialist staff. By moving functions such as firewalls, gateways and deep packet inspection onto cheap servers scaled using Ethernet switches, operators want lower-cost systems running virtualised implementations of these functions.

The two-year NFV initiative could prove disruptive for many specialist vendors, albeit ones whose equipment operates at higher layers of the network, removed from the optical layer. But the takeaway for optical component players is how pervasive virtualisation technology is becoming and the continual rise of the data centre.

Silicon photonics is one technology set to impact the data centre. The technology is already being used in active optical cables and optical engines to connect data centre equipment, and soon will appear in optical transceivers such as Cisco Systems' own 100Gbps CPAK module.

Silicon photonics promises to enable designs that disrupt existing equipment. Start-up Compass-EOS has announced a compact IP core router that is already running live operator traffic. The router makes use of a scalable chip coupled to huge-bandwidth optical interfaces based on 168 vertical-cavity surface-emitting lasers (VCSELs), each running at 8 Gigabit-per-second (Gbps), and matching photodetectors. The Terabit-plus bandwidth enables all the router chips to be connected in a mesh, doing away with the need for the router's midplane and switching fabric.

The integrated silicon-optics design is not strictly silicon photonics - silicon used as a medium for light - but it shows how optics is starting to be used for short distance links to enable disruptive system designs. 

Some financial analysts are beating the drum of silicon photonics. But integrated designs using VCSELs, traditional photonic integration and silicon photonics will all co-exist for years to come. And even though silicon photonics is expected to make its biggest impact in the data centre, the Compass-EOS router highlights how disruptive designs can also occur in telecoms.

 

Market status

The optical component industry continues to contend with more immediate challenges after experiencing sharp price declines in 2012.

The good news is that market research companies do not expect a repeat of the harsh price declines anytime soon. They also forecast better market prospects: The Dell'Oro Group expects optical transport to grow through 2017 at a compound annual growth rate (CAGR) of 10 percent, while LightCounting expects the optical transceiver market to grow 50 percent, to US $5.1bn in 2017. Meanwhile Ovum estimates the optical component market will grow by a mid-single-digit percent in 2013 after a contraction in 2012.
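
The LightCounting forecast implies a 2012 base and an annual growth rate that can be back-calculated. A quick sketch; note the 2012 base is inferred from the stated 50 percent growth, not quoted by the analyst firm:

```python
# Implied 2012 optical transceiver market base from LightCounting's
# forecast of 50 percent growth to US $5.1bn in 2017 (base is inferred).
target_2017_bn = 5.1
growth = 0.5
base_2012_bn = target_2017_bn / (1 + growth)
print(round(base_2012_bn, 2))        # ~3.4 (US $bn)

# Equivalent compound annual growth rate over the five years:
years = 5
cagr = (1 + growth) ** (1 / years) - 1
print(round(cagr * 100, 1))          # ~8.4 percent per year
```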

In the last year it has become clear how high-speed optical transport will evolve. The equipment makers' latest generation coherent ASICs use advanced modulation techniques, add flexibility by trading transport speed with reach, and use super-channels to support 400 Gigabit and 1 Terabit transmissions. Vendors are also looking longer term to techniques such as spatial-division multiplexing as fibre spectrum usage starts to approach the theoretical limit.

Yet the emphasis on 400 Gigabit and even 1 Terabit is somewhat surprising given how 100 Gigabit deployment is still in its infancy. And if the high-speed optical transmission roadmap is now clear, issues remain.

OFC/NFOEC 2013 will highlight the progress in 100 Gigabit transponder form factors that follow the 5x7-inch MSA, 100 Gigabit pluggable coherent modules, and the uptake of 100 Gigabit direct-detection modules for shorter reach links - tens or hundreds of kilometers - to connect data centres, for example.

There is also an industry consensus regarding wavelength-selective switches (WSSes) - the key building block of ROADMs - with the industry choosing a route-and-select architecture, although that was already the case a year ago.

There will also be announcements at OFC/NFOEC regarding client-side 40 and 100 Gigabit Ethernet developments based on the CFP2 and CFP4 that promise denser interfaces and Terabit capacity blades. Oclaro has already detailed its 100GBASE-LR4 10km CFP2 while Avago Technologies has announced its 100GBASE-SR10 parallel fibre CFP2 with a reach of 150m over OM4 fibre. 

The CFP2 and QSFP+ make use of integrated photonic designs. Progress in optical integration, as always, is one topic to watch for at the show.

PON and WDM-PON remain areas of interest. Not so much developments in state-of-the-art transceivers such as for 10 Gigabit EPON and XG-PON1, though clearly of interest, but rather enhancements of existing technologies that benefit the economics of deployment. 

The article is based on a news analysis published by the organisers before this year's OFC/NFOEC event.


Fibre-to-the-NPU: optics reshapes the IP core router

Start-up Compass Electro-Optical Systems has announced an IP core router based on a chip with a Terabit-plus optical interface.

 

Asaf Somekh, vice president of marketing, showing Gazettabyte Compass-EOS's novel icPhotonics chip

Having an optical interface linking directly to the chip, which includes a merchant network processor, simplifies the system design and enables router features such as real output queuing. The r10004 IP router is in production and is already deployed in an operator's network.

The company's icPhotonics chip integrates 168 VCSELs, each running at 8 Gigabit-per-second, and 168 photodetectors for a bandwidth of 1.344 Terabit-per-second (Tbps) in each direction. Eight of the chips are connected in a full mesh, doing away with the need for a router's switch fabric and the mid-plane used to interconnect the router cards.
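
The aggregate figures can be verified with a little arithmetic. A sketch based on the numbers quoted above:

```python
# Aggregate bandwidth of the icPhotonics optical interface:
# 168 VCSELs, each running at 8 Gbps, per direction.
lasers = 168
rate_gbps = 8
bandwidth_tbps = lasers * rate_gbps / 1000
print(bandwidth_tbps)                # 1.344 Tbps each direction, as stated

# A full mesh of 8 chips needs n*(n-1)/2 bidirectional links,
# removing the need for a central switch fabric and mid-plane.
chips = 8
mesh_links = chips * (chips - 1) // 2
print(mesh_links)                    # 28 chip-to-chip links
```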

The resulting architecture saves power, space and cost, says Asaf Somekh, vice president of marketing at Compass-EOS. The start-up estimates that its platform's total cost of ownership over five years is a quarter to a third of existing IP core routers.

The high-bandwidth optical links will also be used to connect multiple platforms, enabling operators to add routing resources as required. Compass-EOS is coming to market with a 6U-high standalone platform but says it will scale up to 21 platforms to appear as one logical router.

The 800Gbps-capacity r10004 comes with 2x100 Gigabit-per-second (Gbps) and 20x10Gbps line card options. The platform has real output queuing, where all the input ports' packets are queued with quality of service applied prior to the exit port. The router also supports software-defined networking (SDN), which enables external control of traffic routing.

The company has its own clean room where it makes its optical interface. Compass-EOS has also developed its own ASICs and the router software for the r10004.   

Somekh says developing the optical interface has been challenging, requiring years of development working with the Fraunhofer Institute and Tel-Aviv University. One challenge was developing a glue to fix the VCSELs on top of the silicon.

The start-up has raised US $120M with investors such as Cisco Systems, Deutsche Telekom and Comcast as well as several venture capitalist firms.

 

icPhotonics technology

Compass-EOS refers to its optical interface IC as silicon photonics, but a more accurate description is integrated silicon-optics; silicon itself is not used as a medium for light. Nevertheless, its use of optics embedded on the chip has created a disruptive system design.

The optical interconnect addresses two chip design challenges: signal integrity for long transmission lengths and chip input/output (I/O).

With high-speed interfaces, achieving signal integrity across a high-speed line card and between boards is challenging. Routers use a midplane and switch fabric to connect the router cards within a platform, and parallel optics to connect chassis.

Compass-EOS has taken board-mounted optics one step further, integrating VCSELs and photodetectors into the packaged chip. This simplifies the platform by connecting cards using a mesh architecture, and allows scaling by linking systems.

The chip window shows the VCSELs and photodetectors. Source: Compass-EOS

The design also addresses chip I/O issues. "The I/O density is about 30x higher than traditional solutions and the gap will grow in future," says Somekh.

Directly attaching the optical interconnect to the CMOS chip overcomes limitations imposed by ball grid array and printed circuit board (PCB) technologies.

Typically, data is routed from the host PCB to an ASIC via a ball grid array matrix, which has a ball pitch of 0.8mm. Shrinking this further is non-trivial given PCB signal integrity issues. Moreover, each electrical serdes (serialiser/deserialiser) for data I/O uses at least eight bumps (transmit, receive, signal and ground), occupying a cell of 3.2x1.6mm. For a 10Gbps device the resulting duplex data density is 2Gbps/mm2, increasing to 5Gbps/mm2 if a 25Gbps device is used, according to Compass-EOS.
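
The quoted densities follow directly from the cell geometry. A sketch, assuming one serdes lane fully occupies the 3.2x1.6mm cell described above:

```python
# Duplex I/O density of an electrical serdes escaping through a BGA,
# using the cell geometry quoted by Compass-EOS (eight bumps per lane).
cell_area_mm2 = 3.2 * 1.6            # 3.2mm x 1.6mm cell per serdes

def duplex_density(rate_gbps):
    """Duplex Gbps per mm2 of BGA area for one serdes lane."""
    return rate_gbps / cell_area_mm2

print(round(duplex_density(10)))     # ~2 Gbps/mm2 for a 10G serdes
print(round(duplex_density(25)))     # ~5 Gbps/mm2 for a 25G serdes
# Against the claimed 61 Gbps/mm2 for the optical interconnect,
# that is roughly a 30x gap at 10G, consistent with Somekh's figure.
```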

The start-up says its optical interconnect achieves a chip I/O of 61Gbps/mm2. "This will increase to 243Gbps/mm2 once we move to 32Gbps."

The resulting design uses 10 percent of the total CMOS area for I/O. "This is a more efficient chip design," says Somekh. "Most of the silicon is used for logic tasks."

The serdes on chip still need to interface to hundreds of 8Gbps channels. And moving to 32Gbps will present a greater challenge. In comparison, silicon photonics promises to simplify the coupling of optics and electronics.

Another design challenge is that the VCSELs are co-packaged with a large chip consuming 30-50W and generating heat. The design needs to make sure that the operating temperature of the VCSELs is not affected by the heat from the chip.

This is another promised advantage of silicon photonics where the operating temperature of the optics and silicon are matched.       

 

Analysts' perspective

Gazettabyte asked two analysts - IDC's Vernon Turner and ACG Research's Eve Griliches - about the significance of Compass-EOS's announcement. The analysts were also asked for their views on the router's modularity, the total cost of ownership claims, the support for SDN and real output queueing, and whether the platform will gain market share from the IP core router incumbents.

 

IDC

Vernon Turner, senior vice president & general manager, enterprise computing, network, telecom, storage, consumer and infrastructure.

One of the hardest places to innovate in the ICT (information and communications technology) world is at or around the speed of light. Anytime you can make things run faster, the last hurdle tends to be the speed by which things travel over an optical network.

Therefore, something that changes the form factor of a network router and innovates at the interconnect speed may be able to disrupt a significant part of the network industry.

 

"Separating the interconnect with the physical building block is huge. It means that you scale the pieces that you need, when and where you want them; this is not just a repackaging announcement"

 

Building the capacity of a router as needed is great for service providers and large enterprises since you deploy capacity only as you need it. Second, by using a photonics interconnect, the speed and distance over which two devices can sit is enhanced greatly, changing the way one builds network infrastructures.

Separating the interconnect with the physical building block is huge. It means that you scale the pieces that you need, when and where you want them; this is not just a repackaging announcement.

Regarding the total-cost-of-ownership claims, if these are valid, they are of a magnitude that does fit into a 'disruptive innovation' class where it will deliver network services to an underserved market and create new network services markets.

SDN is the latest buzzword [regarding the router's support for SDN]. But it is the last part of the virtualised data centre as the compute and I/O have already been figured out. SDN is not new, but the need to separate the data plane from the control plane for the service provider industry means that they can begin to create network services through virtualisation without impacting the network performance, something that already happens in server and storage performance.

Existing core router vendors use their own ASIC designs as the last-stop differentiation, so to do this [as Compass-EOS has done] on merchant silicon could have wide implications on router commoditisation, or at least at a faster rate than current trends.

 

ACG Research

Eve Griliches, vice president of optical networking

As to the significance of the announcement, it is not huge in the scheme of things, but it does bring the use of optical components to replace a backplane to market earlier than what has been quoted to ACG Research.

 

"Virtual output queueing is a smart way to do quality of service"


In theory, the router should have a smaller footprint, which results in a better total cost of ownership due to the optical modules. The advantage with this optical patch-panel approach is that it allows a much higher bandwidth to cross the backplane, which is now an optical interconnect. That means you don't have to do as much flow control, or drop as many packets, or keep the utilisation of the router so low. You can bring up the utilisation rate from, let's say, 15 percent to maybe 25 percent or higher. All that results in a lower total cost of ownership, in theory.
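
The utilisation argument can be made concrete: the lower the safe utilisation ceiling, the more capacity an operator must deploy per Gbps actually carried. A sketch using the 15 and 25 percent figures quoted:

```python
# Capacity that must be deployed per Gbps of carried traffic
# at a given safe utilisation ceiling (headroom drives router cost).
def deployed_per_carried(utilisation):
    return 1 / utilisation

print(round(deployed_per_carried(0.15), 1))  # ~6.7 Gbps deployed per Gbps carried
print(round(deployed_per_carried(0.25), 1))  # 4.0 at the higher ceiling
```

Raising utilisation from 15 to 25 percent cuts the over-provisioning factor by roughly 40 percent, which is where the total-cost-of-ownership gain comes from.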

SDN is a bit nebulous. Virtual output queueing is a smart way to do quality of service, but there are key software features to consider: how many BGP (border gateway protocol) peers are supported, multicast capability, as well as signalling for MPLS (multiprotocol label switching) - do they support RSVP-TE (resource reservation protocol - traffic engineering) or LDP (label distribution protocol), or both? Building a real router still takes years of work.

Faster interconnects are the way to go across routing and optical platforms, period. This [Compass-EOS platform] can help. Do I see this optical piece fitting nicely into an already existing router? Yes. I think if that doesn't happen, they will have a bit of an uphill battle nudging the incumbents.

On the other hand, if full router functionality is not needed at some junctures, as we've seen with the LSR (label switch router) technology, then they may have a place in the network. But operators don't like to play around with their routed network too much, so it may be greenfield applications that are mostly available to them [Compass-EOS] initially.


OFC/NFOEC 2013: Technical paper highlights

Source: The Optical Society

Network evolution strategies, state-of-the-art optical deployments, next-generation PON and data centre interconnect are just some of the technical paper highlights of the upcoming OFC/NFOEC conference and exhibition, to be held in Anaheim, California from March 17-21, 2013. Here is a selection of the papers.

Optical network applications and services

Fujitsu and AT&T Labs-Research (Paper Number: 1551236) present simulation results of shared mesh restoration in a backbone network. The simulation uses up to 27 percent fewer regenerators than dedicated protection while increasing capacity by some 40 percent.

KDDI R&D Laboratories and the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain (Paper Number: 1553225) show results of an OpenFlow/stateless PCE integrated control plane that uses protocol extensions to enable end-to-end path provisioning and lightpath restoration in a transparent wavelength switched optical network (WSON).

In invited papers, Juniper highlights the benefits of multi-layer packet-optical transport, IBM discusses future high-performance computers and optical networking, while Verizon addresses multi-tenant data centre and cloud networking evolution.


Network technologies and applications

A paper by NEC (Paper Number: 1551818) highlights 400 Gigabit transmission using four parallel 100 Gigabit subcarriers over 3,600km. Using optical Nyquist shaping, each carrier occupies 37.5GHz for a total bandwidth of 150GHz.
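
The spectral efficiency implied by those figures is easy to check. A sketch using the carrier count and spacing stated:

```python
# Spectral efficiency of the NEC 400G super-channel as described:
# four 100 Gbps subcarriers, each Nyquist-shaped into 37.5 GHz.
subcarriers = 4
rate_gbps = 100
carrier_ghz = 37.5

total_ghz = subcarriers * carrier_ghz            # 150 GHz, as stated
efficiency = subcarriers * rate_gbps / total_ghz
print(total_ghz, round(efficiency, 2))           # 150.0 GHz, ~2.67 bit/s/Hz
```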

In an invited paper, Andrea Bianco of the Politecnico di Torino, Italy, details energy awareness in the design of optical core networks, while Verizon's Roman Egorov discusses next-generation ROADM architecture and design.


FTTx technologies, deployment and applications

In invited papers, operators share their analysis and experiences regarding optical access. Ralf Hülsermann of Deutsche Telekom evaluates the cost and performance of WDM-based access networks, while France Telecom's Philippe Chanclou shares the lessons learnt regarding its PON deployments and details its next steps.


Optical devices for switching, filtering and interconnects

In invited papers, MIT's Vladimir Stojanovic discusses chip and board scale integrated photonic networks for next-generation computers. Alcatel-Lucent's Bell Labs' Nicholas Fontaine gives an update on devices and components for space-division multiplexing in few-mode fibres, while Acacia's Long Chen discusses silicon photonic integrated circuits for WDM and optical switches.

Optoelectronic devices

Teraxion and McGill University (Paper Number: 1549579) detail a compact (6mm x 8mm) silicon photonics-based coherent receiver. Using PM-QPSK modulation at 28 Gbaud, a reach of up to 4,800km is achieved.

Meanwhile, Intel and UC Santa Barbara (Paper Number: 1552462) discuss a hybrid silicon DFB laser array emitting over 200nm, integrated with EAMs (3dB bandwidth > 30GHz). Four bandgaps spread over more than 100nm are realised using quantum well intermixing.


Transmission subsystems and network elements

In invited Papers, David Plant of McGill University compares OFDM and Nyquist WDM, while AT&T's Sheryl Woodward addresses ROADM options in optical networks and whether to use a flexible grid or not.

Core networks

Orange Labs' Jean-Luc Auge asks whether flexible transponders can be used to reduce margins. In other invited papers, Rudiger Kunze of Deutsche Telekom details the operator's standardisation activities to achieve 100 Gig interoperability for metro applications, while Jeffrey He of Huawei discusses the impact of cloud, data centres and IT on transport networks.

Access networks

Roberto Gaudino of the Politecnico di Torino discusses the advantages of coherent detection in reflective PONs. In other invited papers, Hiroaki Mukai of Mitsubishi Electric details an energy efficient 10G-EPON system, Ronald Heron of Alcatel-Lucent Canada gives an update on FSAN's NG-PON2 while Norbert Keil of the Fraunhofer Heinrich-Hertz Institute highlights progress in polymer-based components for next-generation PON.

Optical interconnection networks for datacom and computercom

Use of orthogonal multipulse modulation for 64 Gigabit Fibre Channel is detailed by Avago Technologies and the University of Cambridge (Paper Number: 1551341).

IBM T.J. Watson (Paper Number: 1551747) has a paper on a 35Gbps VCSEL-based optical link using 32nm SOI CMOS circuits. IBM is claiming record optical link power efficiencies of 1pJ/b at 25Gb/s and 2.7pJ/b at 35Gbps.
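
Energy-per-bit figures translate directly into power per link: milliwatts are picojoules-per-bit multiplied by Gigabits-per-second. A sketch using IBM's quoted efficiencies:

```python
# Converting the quoted optical-link efficiencies into power per link:
# power (mW) = energy per bit (pJ/b) x data rate (Gbps).
def link_power_mw(pj_per_bit, rate_gbps):
    return pj_per_bit * rate_gbps

print(round(link_power_mw(1.0, 25), 1))   # 25.0 mW at 25 Gbps
print(round(link_power_mw(2.7, 35), 1))   # 94.5 mW at 35 Gbps
```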

Several companies detail activities for the data centre in the invited papers.

Oracle's Ola Torudbakken has a paper on a 50Tbps optically-cabled Infiniband data centre switch, HP's Mike Schlansker discusses configurable optical interconnects for scalable data centres, Fujitsu's Jun Matsui details a high-bandwidth optical interconnection for a densely integrated server, while Brad Booth of Dell also looks at optical interconnect for volume servers.

In other papers, Mike Bennett of Lawrence Berkeley National Lab looks at network energy efficiency issues in the data centre. Lastly, Cisco's Erol Roberts addresses data centre architecture evolution and the role of optical interconnect.


Netronome uses its network flow processor for OpenFlow

Part 2: Hardware for SDN

Netronome has demonstrated its flow processor chip implementing the OpenFlow protocol, an open standard implementation of software-defined networking (SDN).

 

"What OpenFlow does is let you control the hardware that is handling the traffic in the network. The value to the end customer is what they can do with that"

David Wells, Netronome

 

The reference design demonstration, which took place at an Open Networking User Group meeting, used the fabless semiconductor player's NFP-3240 network flow processor. The NFP-3240 was running the latest 1.3.0 version of the OpenFlow protocol.

Last year Netronome announced its next-generation flow processor family, the NFP-6xxx. The OpenFlow demonstration hints at what the newest flow processor will enable once first samples become available at the year end.  

Netronome believes its flow processor architecture is well placed to tackle emerging intelligent networking applications such as SDN due to its emphasis on packet flows.

“In security, mobile and other spaces, increasingly there needs to be equipment in the network that is looking at content of packets and states of a flow - where you are looking at content across multiple packets - to figure out what is going on,” says David Wells, co-founder of Netronome and vice president of technology. “That is what we term flow processing."

This requires equipment able to process all the traffic on network links at 10 and 40 Gigabit-per-second (Gbps), and with next-generation equipment at 100Gbps. "This is where you do more than look at the packet header and make a switching decision," says Wells. 

 

Software-defined networking

Operators and content service providers are interested in SDN due to its promise to deliver greater efficiencies and control in how they use their switches and routers in the data centre and network. With SDN, operators can add their own intelligence to tailor how traffic is routed in their networks. 

In the data centre, a provider may be managing a huge number of servers running virtualised applications. "The management of the servers and applications is clever enough to optimise where it moves virtual machines and where it puts particular applications," says Wells. "You want to be able to optimise how the traffic flows through the network to get to those servers in the same way you are optimising the rest of the infrastructure."

Without OpenFlow, operators depend on routing protocols that come with existing switches and routers. "It works but it won't necessarily take the most efficient route through the network," says Wells.

OpenFlow lets operators orchestrate from the highest level of the infrastructure where applications reside, map the flows that go to them, determine their encapsulation and the capacity they have. "The service can be put in a tunnel, for example, and have resource allocated to it so that you know it is not going to be contended with," says Wells, guaranteeing services to customers.

"What OpenFlow does is let you control the hardware that is handling the traffic in the network," says Wells. "The value to the end customer is what they can do with that, in conjunction with other things they are doing."  

Operators are also interested in using OpenFlow in the wide area network. "The attraction of OpenFlow is in the core and the edge [of the network] but it is the edge that is the starting point," says Wells.

 

OpenFlow demonstration

Netronome's OpenFlow demonstration used an NFP-3240 on a PCI Express (PCIe) card to run OpenFlow while other Netronome software runs on the host server in which the card resides. 

The NFP-3240 classifies the traffic and implements the actions to be taken on the flows. The software on the host exposes the OpenFlow application programming interface (API) enabling the OpenFlow controller, the equipment that oversees how traffic is handled, to address the NFP device and influence how flows are processed.

Early OpenFlow implementations are based on Ethernet switch chips that interface to a CPU that provides the OpenFlow API. However, the Ethernet chips support the OpenFlow 1.1.0 specification and have limited-sized look-up tables with 98, 64k or 100k entries, says Wells.

The OpenFlow controller can write to the table and dictate how traffic is handled, but its size is limited. "That is a starting point and is useful," says Wells. "But to really do SDN, you need hardware platforms that can handle many more flows than these switches."
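The flow-table model Wells describes can be sketched in a few lines. This is an illustrative toy, not the OpenFlow protocol or Netronome's API: the controller writes match/action entries, the datapath looks each packet up, and the table's bounded size is exactly the limitation discussed above.

```python
# Illustrative sketch of a controller-populated flow table.
# Hypothetical class and method names, not a real OpenFlow library.

class FlowTable:
    def __init__(self, max_entries):
        self.max_entries = max_entries  # switch-ASIC tables are small
        self.entries = {}               # match tuple -> action

    def add_flow(self, match, action):
        """Written by the controller over the OpenFlow protocol."""
        if len(self.entries) >= self.max_entries:
            raise RuntimeError("flow table full")
        self.entries[match] = action

    def handle_packet(self, match):
        """Datapath lookup: known flow -> action, else punt to controller."""
        return self.entries.get(match, "send_to_controller")

table = FlowTable(max_entries=64_000)
table.add_flow(("10.0.0.1", "10.0.0.2", 80), "output:port3")
print(table.handle_packet(("10.0.0.1", "10.0.0.2", 80)))  # output:port3
print(table.handle_packet(("10.0.0.9", "10.0.0.2", 22)))  # send_to_controller
```

The `max_entries` cap is the crux of Wells' point: a switch chip's table fills quickly, whereas a programmable flow processor can hold many more entries.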

This is where the NFP processor is being targeted: it is programmable with capabilities driven by software rather than the hardware architecture, says Wells. 

 

NFP-6xxx architecture

The NFP-6xxx is Netronome's latest network flow processor (NFP) family, rated at 40 to 200Gbps. No particular devices have yet been detailed, but the highest-end NFP-6xxx device will comprise 216 processors: 120 flow processors (see chart - Netronome's sixth generation device) and, new to the NFP family, 96 packet processors.

The architecture is made up of 'islands', units that comprise a dozen flow processors. Netronome will combine different numbers of islands to create the various NFP-6xxx devices.    

The input-output bandwidth of the device is 800Gbps while the on-chip memory totals 30 Megabytes. The device also interfaces directly to QSFP, SFP+ and CFP optical transceivers.

The 120 flow processors tackle the more complex, higher-layer tasks. Netronome has added packet processors to the NFP-6xxx to free the flow processors from tasks such as taking packets from the input stream and passing them on to where they are processed. The packet processors are programmable and perform tasks such as header classification before packets are handed to the flow processors.

The NFP-6xxx devices will include some 100 hardware accelerator engines for tasks such as traffic management, encryption and deep packet inspection. 

The device will be implemented using Intel's latest 22nm 3D Tri-Gate CMOS process and is designed to work with high-end general purpose CPUs such as Intel's x86 devices, Broadcom's XLP and Freescale's PowerPC.

 

Markets

The data centre, where SDN is already being used, is one promising market for the device as customers look to enhance their existing capabilities.

There are requirements for intelligent gateways now but this is a market that is a year or two out, says Wells. Use of OpenFlow to control large IP core routers or core optical switches is a longer term application. "Those areas will come but it will be further out," says Wells.

For other markets such as security, there is a need for knowledge about the state of flows. This is a more sophisticated treatment of packets than simply looking up the action required based on a packet's header. Netronome believes that OpenFlow will develop to not only forward or terminate traffic at a certain destination but also send traffic to a service before it is returned.

"You could insert a service in an OpenFlow environment and what it would do is guide packets to that service and return them, but inside that service you may do something that is stateful," says Wells. This is just the sort of task security performs on flows: an intrusion prevention system offered as a service, for example, or a firewall function. Either could run on a dedicated platform or as a virtual application on Netronome's flow processor.

 

Further reading:

Part 1: The role of software defined networking for telcos

EZchip expands the role of the network processor, click here

MPR: Netronome goes with the flow


Space-division multiplexing: the final frontier

System vendors continue to trumpet their achievements in long-haul optical transmission speeds and overall data carried over fibre. 

Alcatel-Lucent announced earlier this month that France Telecom-Orange is using the industry's first 400 Gigabit link, connecting Paris and Lyon, while Infinera has detailed a trial demonstrating 8 Terabit-per-second (Tbps) of capacity over 1,175km and using 500 Gigabit-per-second (Gbps) super-channels. 

 

"Integration always comes at the cost of crosstalk"

Peter Winzer, Bell Labs

Yet vendors already recognise that capacity in the frequency domain will only scale so far and that other approaches are required. One is space-division multiplexing: using multiple channels separated in space, implemented, for example, with multi-core fibre where each core supports several modes.

"We want a technology that scales by a factor of 10 to 100," says Peter Winzer, director of optical transmission systems and networks research at Bell Labs. "As an example, a fibre with 10 cores with each core supporting 10 modes, then you have the factor of 100."

 

Space-division multiplexing

Alcatel-Lucent's research arm, Bell Labs, has demonstrated the transmission of 3.8Tbps using several data channels and an advanced signal processing technique known as multiple-input, multiple-output (MIMO).

In particular, 40 Gigabit quadrature phase-shift keying (QPSK) signals were sent over a six-spatial-mode fibre, using two polarisation modes and eight wavelengths, to achieve 3.8Tbps. The overall transmission uses only 400GHz of spectrum.
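The headline figure follows directly from the quoted parameters; a quick back-of-envelope check:

```python
# Back-of-envelope check of the Bell Labs figures quoted above.
modes = 6                   # spatial modes of the few-mode fibre
polarisations = 2           # polarisation modes per spatial mode
wavelengths = 8             # WDM channels
rate_per_channel_gbps = 40  # QPSK signal rate per channel

total_gbps = modes * polarisations * wavelengths * rate_per_channel_gbps
print(total_gbps)   # 3840 Gbps, i.e. roughly 3.8 Tbps

# Eight wavelengths within 400GHz implies 50GHz per channel.
print(400 // wavelengths)   # 50
```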

Alcatel-Lucent stresses that the commercial deployment of space-division multiplexing remains years off. Moreover, operators will likely first use already-deployed parallel strands of single-mode fibre, needing the advanced signal processing techniques only later.

"You might say that is trivial [using parallel strands of fibre], but bringing down the cost of that solution is not," says Winzer.

First, cost-effective integrated amplifiers will be needed. "We need to work on a single amplifier that can amplify, say, ten existing strands of single-mode fibre at the cost of two single-mode amplifiers," says Winzer. An integrated transponder will also be needed: one transponder that couples to 10 individual fibres at a much lower cost than 10 individual transponders.

With a super-channel transponder, several wavelengths are used, each with its own laser, modulator and detector. "In a spatial super-channel you have the same thing, but not, say, three different frequencies but three different spatial paths," says Winzer. Here photonic integration is the challenge to achieve a cost-effective transponder.

Once such integrated transponders and amplifiers become available, it will make sense to couple them to multi-core fibre. But operators will only likely start deploying new fibre once they exhaust their parallel strands of single-mode fibre.

Such integrated amplifiers and integrated transponders will present challenges. "The more and more you integrate, the more and more crosstalk you will have," says Winzer. "That is fundamental: integration always comes at the cost of crosstalk."

Winzer says there are several areas where crosstalk may arise. An integrated amplifier serving ten single-mode fibres will share a multi-core erbium-doped fibre instead of ten individual strands. Crosstalk between those closely-spaced cores is likely.

The transponder will be based on a large integrated circuit, giving rise to electrical crosstalk. One way to tackle crosstalk is to develop components to a higher specification, but that is more costly. Alternatively, signal processing on the received signal can be used to undo the crosstalk. Using electronics to counter crosstalk is attractive, especially when it is the optics that dominate the design cost. This is where MIMO signal processing plays a role. "MIMO is the most advanced version of spatial multiplexing," says Winzer.

To address crosstalk caused by spatial multiplexing in the Bell Labs' demo, 12x12 MIMO was used. Bell Labs says that using MIMO does not add significantly to the overall computation. Existing 100 Gigabit coherent ASICs effectively use a 2x2 MIMO scheme, says Winzer: “We are extending the 2x2 MIMO to 2Nx2N MIMO.” 
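The crosstalk compensation Winzer describes can be illustrated conceptually. The sketch below is not Bell Labs' DSP: it simply models N coupled paths mixing at the receiver as r = H·s and recovers the transmitted symbols by inverting the mixing matrix, which is assumed known here; real coherent receivers estimate it adaptively from the signal itself.

```python
# Conceptual sketch of MIMO crosstalk compensation (not Bell Labs' DSP).
import numpy as np

rng = np.random.default_rng(0)
N = 12  # 12x12 MIMO, as in the Bell Labs demonstration

s = rng.choice([-1.0, 1.0], size=N)                # transmitted symbols
H = np.eye(N) + 0.1 * rng.standard_normal((N, N))  # mixing with crosstalk
r = H @ s                                          # received, crosstalk-impaired

s_hat = np.linalg.solve(H, r)  # undo the mixing (zero-forcing style)
print(np.allclose(s_hat, s))   # True
```

The computational point in the article is visible here: the equaliser scales with the size of H, so going from today's 2x2 (two polarisations) to 2Nx2N grows only one stage of the signal processing chain.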

Only one portion of the current signal processing chain is impacted, he adds; a portion that consumes 10 percent of the power will need to increase by a certain factor. The resulting design will be more complex and expensive but not dramatically so, he says.

Winzer says such mitigation techniques need to be investigated now since crosstalk in future systems is inevitable. Even if the technology's deployment is at least a decade away, developing techniques to tackle crosstalk now means vendors have a clear path forward.

 

Parallelism

Winzer points out that optical transmission continues to embrace parallelism. "With super-channels we go parallel with multiple carriers because a single carrier can’t handle the traffic anymore," he says. This is similar to parallelism in microprocessors where multi-core designs are now used due to the diminishing return in continually increasing a single core's clock speed.

For 400Gbps or 1 Terabit over a single-mode fibre, the super-channel approach is the near term evolution.

Over the next decade, the benefit of frequency parallelism will diminish since it will no longer increase spectral efficiency. "Then you need to resort to another physical dimension for parallelism and that would be space," says Winzer.

MIMO will be needed when crosstalk arises and that will occur with multiple mode fibre.

"For multiple strands of single mode fibre it will depend on how much crosstalk the integrated optical amplifiers and transponders introduce," says Winzer.

 

Part 1: Terabit optical transmission


Alcatel-Lucent demos dual-carrier Terabit transmission

"Without [photonic] integration you are doubling up your expensive opto-electronic components which doesn't scale"

Peter Winzer, Alcatel-Lucent's Bell Labs

 

Part 1: Terabit optical transmission

Alcatel-Lucent's research arm, Bell Labs, has used high-speed electronics to enable one Terabit long-haul optical transmission using two carriers only.

Several system vendors, Alcatel-Lucent included, have demonstrated one Terabit transmission, but the company claims an industry first in using only two multiplexed carriers. In 2009, Alcatel-Lucent's first Terabit optical transmission used 24 sub-carriers.

"There is a tradeoff between the speed of electronics and the number of optical modulators and detectors you need," says Peter Winzer, director of optical transmission systems and networks research at Bell Labs. "In general it will be much cheaper doing it with fewer carriers at higher electronics speeds than doing it at a lower speed with many more carriers."

 

What has been done

In the lab-based demonstration, Bell Labs sent five 1 Terabit-per-second (Tbps) signals over an equivalent distance of 3,200km. Each signal uses dual-polarisation 16-QAM (quadrature amplitude modulation) to achieve a raw rate of 1.28Tbps. Thus each carrier holds 640Gbps: some 500Gbps of data, with the rest forward error correction (FEC) bits.

In current 100Gbps systems, dual-polarisation, quadrature phase-shift keying (DP-QPSK) modulation is used. Going from QPSK to 16-QAM doubles the bit rate. Bell Labs has also increased the symbol rate from some 30Gbaud to 80Gbaud using state-of-the-art high-speed electronics developed at Alcatel Thales III-V Lab. 
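The per-carrier rate follows directly from the modulation and symbol-rate figures quoted:

```python
# Checking the per-carrier rate implied by the figures above.
baud = 80e9          # symbol rate after the speed-up to 80 Gbaud
bits_per_symbol = 4  # 16-QAM carries log2(16) = 4 bits per symbol
polarisations = 2    # dual polarisation

per_carrier_bps = baud * bits_per_symbol * polarisations
print(per_carrier_bps / 1e9)   # 640.0 Gbps per carrier

carriers = 2
print(carriers * per_carrier_bps / 1e12)   # 1.28 Tbps raw line rate
```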

"To achieve these rates, you need special high-speed components - multiplexers - and also high-speed multi-level devices," says Winzer. These are indium phosphide components, not CMOS, and hence will not be deployed in commercial products for several years yet. "These things are realistic [in CMOS], just not for immediate product implementation," says Winzer.

Each carrier occupies 100GHz of channel bandwidth equating to 200GHz overall, or a 5.2b/s/Hz spectral efficiency. Current state-of-the-art 100Gbps systems use 50GHz channels, achieving 2b/s/Hz.

The 3,200km reach using 16-QAM technology is achieved in the lab, using good fibre and without any commercial product margins, says Winzer. Adding commercial product margins would reduce the optical link budget by 2-3dB and hence the overall reach.

Winzer says the one Terabit demonstration uses all the technologies employed in Alcatel-Lucent's photonic service engine (PSE) ASIC although the algorithms and soft-decision FEC used are more advanced, as expected in an R&D trial.

Before such one Terabit systems become commercial, progress in photonic integration will be needed as well as advances in CMOS process technology.

"Progress in photonic integration is needed to get opto-electronic costs down as it [one Terabit] is still going to need two-to-four sub-carriers," he says. A balance between parallelism and speed needs to be struck, and parallelism is best achieved using integration. "Without integration you are doubling up your expensive opto-electronic components which doesn't scale," says Winzer.

 

In Part 2: Space-division multiplexing: the final frontier


The role of software-defined networking for telcos

The OIF's Carrier Working Group is assessing how software-defined networking (SDN) will impact transport. Hans-Martin Foisel, chair of the OIF working group, explains SDN's importance for operators.

Briefing: Software-defined networking

Part 1: Operator interest in SDN

 

"Using SDN use cases, we are trying to derive whether the transport network is ready or if there is some missing functionality"

Hans-Martin Foisel, OIF


 

Hans-Martin Foisel, of Deutsche Telekom and chair of the OIF Carrier Working Group, says SDN is of great interest to operators that view the emerging technology as a way of optimising all-IP networks that increasingly make use of data centres.

"Software-defined networking is an approach for optimising the network in a much larger sense than in the past," says Foisel whose OIF working group is tasked with determining how SDN's requirements will impact the transport network.

Network optimisation remains an ongoing process for operators. Work continues to improve the interworking between the network's layers to gain efficiencies and reduce operating costs (see Cisco Systems' intelligent light).

With SDN, the scope is far broader. "It [SDN] is optimising the network in terms of processing, storage and transport," says Foisel. SDN takes the data centre environment and includes it as part of the overall optimisation. For example, content allocation becomes a new parameter for network optimisation.

Other reasons for operator interest in SDN, says Foisel, include optimising operation support systems (OSS) software, and the characteristic most commonly associated with SDN, making more efficient use of the network's switches and routers.

"A lot of carriers are struggling with their OSSes - these are quite complex beasts," he says. "With data centres involved, you now have a chance to simplify your IT as all carriers are struggling with their IT."

The Network Functions Virtualisation (NFV) industry specification group is a carrier-led initiative set up in January by the European Telecommunications Standards Institute (ETSI). The group is tasked with optimising the software components, the OSSes, involved in processing, storage and transport.

The initiative aims to make use of standard servers, storage and Ethernet switches to reduce the varied equipment making up current carrier networks to improve service innovation and reduce the operators' capital and operational expenditure.

NFV and SDN are separate developments that will benefit each other. The ETSI group will develop requirements and architecture specifications for the hardware and software infrastructure needed for the virtualised functions, as well as guidelines for developing network functions.

The third reason for operator interest in SDN - separating the management, control and data planes - promises greater efficiencies, enabling network segmentation irrespective of the switch and router deployments. This allows flexible use of the network, with resources shifted based on particular user requirements.

"Optimising the network as a whole - including the data centre services and applications - is a concept, a big architecture," says Foisel. "OpenFlow and the separation of data, management and control planes are tools to achieve them."

OpenFlow is an open standard implementation of the SDN concept. The OpenFlow protocol is being developed by the Open Networking Foundation, an industry body that includes Google, Facebook and Microsoft, telecom operators Verizon, NTT, Deutsche Telekom, and various equipment makers.

 

Transport SDN 

The OIF Working Group will identify how SDN impacts the transport network, including layers one and two, networking platforms and even components. By undertaking this work, the operators' goal is to make SDN "carrier-grade".

Foisel admits that the working group does not yet know whether the transport layer will be impacted by SDN. To answer the question, SDN applications will be used to identify required transport SDN functionalities. Once identified, a set of requirements will be drafted. 

"Using SDN use cases, we are trying to derive whether the transport network is ready or if there is some missing functionality," says Foisel.

The work will also highlight any areas that require standardisation, for the OIF and for other standards bodies, to ensure future SDN interworking between vendors' solutions. The OIF expects to have a first draft of the requirements by July 2013.

"In the transport network we are pushed by the mobile operators but also by the over-the-top applications to be faster and be more application-aware," says Foisel. "With SDN we have a chance to do so." 

 

Part 2: Hardware for SDN


Cisco Systems' intelligent light

Network optimisation continues to exercise operators and content service providers as their requirements evolve with the growth of services such as cloud computing. Cisco Systems' newly announced elastic core architecture aims to tackle networking efficiency and address particular service provider requirements.

 

“The core [network] needs to be more robust, agile and programmable”

Sultan Dawood, Cisco

“The core [network] needs to be more robust, agile and programmable – especially with the advent of  cloud,” says Sultan Dawood, senior manager, service provider marketing at Cisco. “As service providers look at next-generation infrastructure, convergence of IP and optical is going to have a big play.”

Cisco's elastic core architecture combines several developments. One is the integration of Cisco's 100 Gigabit-per-second (Gbps) dense wavelength division multiplexing (DWDM) coherent transponder, first introduced on its ROADM platform, onto its router to enable IP-over-DWDM. 

This is part of what Cisco calls nLight, or intelligent light, which itself has three components: its 100Gbps coherent ASIC hardware, the nLight control plane and nLight colourless and contentionless ROADMs. "As packet and optical networks converge, intelligence between the layers is needed," says Dawood. "Today how the ROADM and the router communicate is limited."

There is the GMPLS [Generalized Multi-Protocol Label Switching] control plane working at the IP layer, and WSON [Wavelength Switched Optical Network] working at the optical layer. These two protocols perform control plane functions at their respective layers. "What nLight is doing is communicating between these two layers [using existing parameters] and providing the interaction," says Dawood.

Ron Kline, principal analyst for network infrastructure at Ovum, describes nLight more generally as Cisco’s strategy for software-defined networking:  "Interworking control planes to share info across platforms and add the dynamic capabilities."

The second component of Cisco's announcement is an upgrade of its carrier-grade services engine, from 20Gbps to 80Gbps, that fits within Cisco's CRS-3 core router and will be available from May 2013. The services engine enables such services as IPv6 and 'cloud routing' - network positioning that determines the most suitable resource for a customer's request based on the content's location and the data centre's loading.

Cisco has also added anti-distributed denial of service (anti-DDoS) software to counter cyber threats. "We have licensed software that we have put into our CRS-3 so that with our VPN services we can provide threat mitigation and scrub any traffic liable to hurt our customers," says Dawood.

 

nLight

According to Cisco, several issues need to be addressed between the IP and optical layers. For example, the router and the optical infrastructure need to exchange information such as circuit IDs, path identifiers and real-time status in order to avoid the manual intervention used currently.

“With this intelligent data that is extracted due to these layers communicating, I can now make better, faster decisions that result in rapid service provisioning and service delivery,” says Dawood.

Cisco cites as an example a financial customer requesting a low-latency path.  In this case, the optical network comes back through this nLight extraction process and highlights the most appropriate path. That path has a circuit ID that is assigned to the customer. If the customer then comes back to request a second identical circuit, the network can make use of the existing intelligence to deliver a similar-specification circuit.

Such a framework avoids lengthy, manual interactions between the IP and transport departments of an operator required when setting up an IP VPN, for example. By exchanging data between layers, service providers can understand and improve their network topology in real-time, and be more dynamic in how they shift resources and do capacity planning in their network.

Service providers can also improve their protection and restoration schemes and also how they configure and provision services. Such capabilities will enable operators to be more efficient in the introduction and delivery of cloud and mobile services.

 

Total cost of ownership

Market research firm ACG Research has done a total cost of ownership (TCO) analysis of Cisco's elastic core architecture. It claims that using nLight achieves up to a halving of the TCO of the optical and packet core networks in designs using protected wavelengths, while avoiding a 10% overestimation of required capacity.

Meanwhile, ACG claims an 18-month payback and a 156% return on investment from a CRS CGSE service module with its anti-DDoS service, and a 24% TCO saving from demand engineering with the improved placement of routes and cloud service workload location.

Cisco says its designed framework architecture is being promoted in the Internet Engineering Task Force (IETF). The company is also liaising with the International Telecommunication Union (ITU) and the Optical Internetworking Forum (OIF) where relevant. 

