Europe gets its first TWDM-PON field trial

Vodafone is conducting what is claimed to be the first European field trial of a multi-wavelength passive optical networking system using access equipment from Alcatel-Lucent. 

 


The time- and wavelength-division multiplexed passive optical network (TWDM-PON) technology being used is a next-generation access scheme that follows on from 10 gigabit GPON (XG-PON1) and 10 gigabit EPON. 

 

“There appears to be much more 'real' interest in TWDM-PON than in 10G GPON,” says Julie Kunstler, principal analyst, components at Ovum. 

 

The TWDM-PON standard is close to completion in the Full Service Access Network (FSAN) Group and the ITU, and supports up to eight wavelengths, each capable of 10 gigabit symmetrical or 10/2.5 gigabit asymmetrical speeds.

 

“You can start building hardware solutions that are fully [standard] compliant,” says Stefaan Vanhastel, director of fixed access marketing at Alcatel-Lucent. 

 

TWDM-PON’s support for additional functionality such as dynamic wavelength management, whereby subscribers could be moved between wavelengths, is still being standardised.  

 

The combination of time- and wavelength-division multiplexing allows TWDM-PON to support multiple PONs, each sharing its capacity among 16, 32, 64 or even 128 end points depending on the operator's chosen split ratio.
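
As a back-of-envelope illustration of the split-ratio trade-off - the figures below are illustrative, not from the trial - the peak capacity available per end point is simply the wavelength count times the line rate divided by the split:

```python
def per_endpoint_mbps(wavelengths, gbps_per_wavelength, split):
    """Peak downstream share per end point if all subscribers are active
    at once; ignores framing and FEC overhead."""
    return wavelengths * gbps_per_wavelength * 1000 / split

# A four-wavelength TWDM-PON at 10 Gbps per wavelength:
for split in (16, 32, 64, 128):
    print(f"1:{split} split -> {per_endpoint_mbps(4, 10, split):.0f} Mbps per end point")
```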

Alcatel-Lucent first detailed its TWDM-PON technology last year. The system vendor introduced a four-wavelength TWDM-PON based on a four-port line card, each port supporting a 10 gigabit PON. The line card is used with Alcatel-Lucent's 7360 Intelligent Services Access Manager FX platform, and supports fixed and tunable SFP optical modules.

 

"Several vendors also offer the possibility to use fixed-wavelength XG-PON1 or 10G EPON optics," says Vanhastel. "This reduces the initial cost of a TWDM-PON deployment while allowing you to add tunable optics later."

 

Operators can thus start with a 10 gigabit PON using fixed-wavelength optics and move to TWDM-PON and tunable modules as their capacity needs grow. “You won’t have to swap out legacy XG-PON1 hardware two years from now,” says Vanhastel.

 

Alcatel-Lucent has been involved in 16 customer TWDM-PON trials overall, half in Asia Pacific and the rest split between North America and EMEA. Besides Vodafone, Alcatel-Lucent has named two other TWDM-PON triallists: Telefonica and Energia, an energy utility in Japan.


Vanhastel says the company has been surprised that operators are also eyeing the technology for residential access. The high capacity and relative expense of tunable optics made the vendor think that early demand would be for business services and mobile backhaul only. 

 


 

There are several reasons for the operator interest in TWDM-PON, says Vanhastel. One is its ample bandwidth - 40 gigabit symmetrical in a four-wavelength implementation - and that wavelengths can be assigned to different aggregation tasks such as backhaul, business and residential. Operators can also pay for wavelengths as needed. 

 

TWDM-PON also allows wavelengths to be shared between operators as part of wholesale agreements. Operators deploying TWDM-PON can lease a wavelength to each other in their respective regions. 

 

Vodafone, for example, is building its own fibre network but is also expanding its overall fixed broadband coverage by developing wholesale agreements across Europe. Vodafone's European broadband network covers 62 million households: 26 million premises covered with its own network and 36 million through wholesale agreements. 

 

The first operator TWDM-PON pilot deployments will occur in 2016, says Alcatel-Lucent.

 

 

Further reading:

 

White Paper: TWDM PON is on the horizon: facilitating fast FTTx network monetization, click here

 


Heading off the capacity crunch

Feature - Part 1: Capacity limits and remedies

Improving optical transmission capacity to keep pace with the growth in IP traffic is getting trickier. 

Engineers are being taxed in the design decisions they must make to support a growing list of speeds and data modulation schemes. There is also a fissure emerging in the equipment and components needed to address the diverging needs of long-haul and metro networks. As a result, far greater flexibility is needed, with designers looking to elastic or flexible optical networking where data rates and reach can be adapted as required.

Figure 1: The green line is the non-linear Shannon limit, above which transmission is not possible. The chart shows how more bits can be sent in a 50 GHz channel as the optical signal to noise ratio (OSNR) is increased. The blue dots closest to the green line represent the performance of the WaveLogic 3, Ciena's latest DSP-ASIC family. Source: Ciena.

But perhaps the biggest challenge is only just looming. Because optical networking engineers have been so successful in squeezing information down a fibre, their scope to send additional data in future is diminishing. Simply put, it is becoming harder to put more information on the fibre as the Shannon limit, as defined by information theory, is approached.

"Our [lab] experiments are within a factor of two of the non-linear Shannon limit, while our products are within a factor of three to six of the Shannon limit," says Peter Winzer, head of the optical transmission systems and networks research department at Bell Laboratories, Alcatel-Lucent. The non-linear Shannon limit dictates how much information can be sent across a wavelength-division multiplexing (WDM) channel as a function of the optical signal-to-noise ratio.

A factor of two may sound like a lot, says Winzer, but it is not. "To exhaust that last factor of two, a lot of imperfections need to be compensated and the ASIC needs to become a lot more complex," he says. The ASIC is the digital signal processor (DSP), used for pulse shaping at the transmitter and coherent detection at the receiver.
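
For a feel of the numbers, the linear Shannon relation gives the spectral efficiency of a dual-polarisation channel as 2·log2(1 + SNR); the non-linear limit Winzer refers to sits below this, because fibre non-linearities cap the usable signal power. A minimal sketch of the linear bound:

```python
import math

def linear_shannon_se(snr_db, polarisations=2):
    """Linear Shannon spectral efficiency in bit/s/Hz; the non-linear
    fibre limit is lower, so treat this as an upper bound."""
    snr = 10 ** (snr_db / 10.0)
    return polarisations * math.log2(1 + snr)

se = linear_shannon_se(20)                    # ~13.3 bit/s/Hz at 20 dB SNR
print(f"{se:.1f} bit/s/Hz -> up to {se * 50:.0f} Gbit/s in a 50 GHz channel")
```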


At the recent OFC 2015 conference and exhibition, there were plenty of announcements pointing to industry progress. Several companies announced 100 Gigabit coherent optics in the compact CFP2 pluggable form factor, while Acacia detailed a flexible-rate 5x7 inch MSA capable of 200, 300 and 400 Gigabit rates. And research results were reported on elastic optical networking and spatial division multiplexing, work designed to ensure that networking capacity continues to scale.

 

Trade-offs

There are several performance issues that engineers must consider when designing optical transmission systems. Clearly, for submarine systems, maximising reach and the traffic carried by a fibre are key. For metro, more data can be carried on a single carrier to improve overall capacity, but at the expense of reach.

Such varied requirements are met using several design levers:  

  •  Baud or symbol rate 
  •  The modulation scheme which determines the number of bits carried by each symbol 
  •  Multiple carriers, if needed, to carry the overall service as a super-channel

The baud rate used is dictated by the performance limits of the electronics. Today that is 32 Gbaud: 25 Gbaud for the data payload and up to 7 Gbaud for forward error correction and other overhead bits. 

Doubling the symbol rate from 32 Gbaud used for 100 Gigabit coherent to 64 Gbaud is a significant challenge for the component makers. The speed hike requires a performance overhaul of the electronics and the optics: the analogue-to-digital and digital-to-analogue converters and the drivers through to the modulators and photo-detectors. 

"Increasing the baud rate gives more interface speed for the transponder," says Winzer. But the overall fibre capacity stays the same, as the signal spectrum doubles with a doubling in symbol rate.

However, increasing the symbol rate brings cost and size benefits. "You get more bits through, and so you are sharing the cost of the electronics across more bits," says Kim Roberts, senior manager, optical signal processing at Ciena. It also implies a denser platform by doubling the speed per line card slot.  


Modulation schemes 

The modulation used determines the number of bits encoded on each symbol. Optical networking equipment already uses binary phase-shift keying (BPSK, or 2-quadrature amplitude modulation, 2-QAM) for the most demanding, longest-reach submarine spans; the workhorse quadrature phase-shift keying (QPSK, or 4-QAM) for 100 Gigabit-per-second (Gbps) transmission; and 16-QAM for 200 Gbps over distances up to 1,000 km.

Moving to a higher QAM scheme increases WDM capacity but at the expense of reach. That is because as more bits are encoded on a symbol, the separation between constellation points shrinks. "As you try to encode more bits in a constellation, so your noise tolerance goes down," says Roberts.
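
Roberts' point can be made concrete with a small sketch: holding average transmit power fixed, the minimum distance between constellation points - a proxy for noise tolerance - shrinks as the QAM order rises (a textbook calculation, not Ciena's analysis):

```python
import numpy as np

def qam_min_distance(m):
    """Minimum point separation of a unit-average-power square M-QAM
    constellation; a smaller distance means less noise tolerance."""
    k = int(np.sqrt(m))
    levels = np.arange(-(k - 1), k, 2, dtype=float)   # e.g. [-3,-1,1,3] for 16-QAM
    pts = np.array([complex(i, q) for i in levels for q in levels])
    pts /= np.sqrt((np.abs(pts) ** 2).mean())         # normalise average power to 1
    return min(abs(a - b) for n, a in enumerate(pts) for b in pts[n + 1:])

for m in (4, 16, 64):
    print(f"{m}-QAM: {int(np.log2(m))} bits/symbol, d_min = {qam_min_distance(m):.3f}")
```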

One recent development among system vendors has been to add more modulation schemes to enrich the transmission options available. 


Besides BPSK, QPSK and 16-QAM, vendors are adding 8-QAM, an intermediate scheme between QPSK and 16-QAM. These include Acacia with its AC-400 MSA, Coriant, and Infinera. Infinera has tested 8-QAM as well as 3-QAM, a scheme between BPSK and QPSK, as part of submarine trials with Telstra. 

"From QPSK to 16-QAM, you get a factor of two increase in capacity but your reach decreases of the order of 80 percent," says Steve Grubb, an Infinera Fellow. Using 8-QAM boosts capacity by half compared to QPSK, while delivering more signal margin than 16-QAM. Having the option to use the intermediate formats of 3-QAM and 8-QAM enriches the capacity tradeoff options available between two fixed end-points, says Grubb.    

Ciena has added two chips to its WaveLogic 3 DSP-ASIC family of devices: the WaveLogic 3 Extreme and the WaveLogic 3 Nano for metro. 

WaveLogic 3 Extreme uses a proprietary modulation format that Ciena calls 8D-2QAM, a tweak on BPSK that uses longer-duration signalling to enhance span distances by up to 20 percent. The 8D-2QAM scheme is aimed at legacy dispersion-compensated fibre carrying 10 Gbps wavelengths, and offers up to 40 percent additional upgrade capacity compared to BPSK.

Ciena has also added 4-amplitude-shift-keying (4-ASK) modulation alongside QPSK to its WaveLogic3 Nano chip. The 4-ASK scheme is also designed for use alongside 10 Gbps wavelengths that introduce phase noise, to which 4-ASK has greater tolerance than QPSK. Ciena's 4-ASK design also generates less heat and is less costly than BPSK.    

According to Roberts, a designer’s goal is to use the fastest symbol rate possible, and then add the richest constellation possible "to carry as many bits as you can, given the noise and distance you can go".

After that, the remaining issue is whether the service can be fitted on a single carrier or whether several carriers are needed, forming a super-channel. Packing a super-channel's carriers tightly benefits overall fibre spectrum usage and reduces the spectrum wasted on the guard bands needed when a signal is optically switched.

Can the symbol rate be doubled to 64 Gbaud? "It looks impossibly hard but people are going to solve that," says Roberts. It is also possible to use a hybrid approach where both the symbol rate and the modulation scheme are increased. The table shows how different baud-rate and modulation combinations can be used to achieve a 400 Gigabit single-carrier signal.

 

Note how using polarisation for coherent transmission doubles the overall data rate. Source: Gazettabyte
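
The arithmetic behind such tables is straightforward: the gross line rate equals the baud rate times bits per symbol times the two polarisations, with the net rate lower once FEC and framing overhead are stripped out. A sketch with illustrative combinations (not necessarily the table's exact rows):

```python
def gross_rate_gbps(gbaud, bits_per_symbol, polarisations=2):
    """Gross single-carrier line rate in Gbps, before FEC and framing
    overhead are subtracted."""
    return gbaud * bits_per_symbol * polarisations

print(gross_rate_gbps(32, 2))   # 128 -> ~100G net with 32 Gbaud DP-QPSK
print(gross_rate_gbps(32, 4))   # 256 -> ~200G net with 32 Gbaud DP-16QAM
print(gross_rate_gbps(64, 4))   # 512 -> ~400G net with 64 Gbaud DP-16QAM
```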

 

But industry views differ as to how much scope there is to improve the overall capacity of a fibre and the optical performance.

Roberts stresses that his job is to develop commercial systems rather than conduct lab 'hero' experiments. Such systems need to work in networks for 15 years and must be cost-competitive. "It is not over yet," says Roberts.

He says we are still some way off from the point where all that remains is minor design tweaks. "I don't have fun changing the colour of the paint or reducing the cost of the washers by 10 cents,” he says. “And I am having a lot of fun with the next-generation design [being developed by Ciena].”

"We are nearing the point of diminishing returns in terms of spectrum efficiency, and the same is true with DSP-ASIC development," says Winzer. Work will continue to develop higher speeds per wavelength, to increase capacity per fibre, and to achieve higher densities and lower costs. In parallel, work continues in software and networking architectures. For example, flexible multi-rate transponders used for elastic optical networking, and software-defined networking that will be able to adapt the optical layer.

After that, designers are looking at using more amplification bands, such as the L-band and S-band alongside the current C-band to increase fibre capacity. But it will be a challenge to match the optical performance of the C-band across all bands used. 

"I would believe in a doubling or maybe a tripling of bandwidth but absolutely not more than that," says Winzer. "This is a stop-gap solution that allows me to get to the next level without running into desperation." 

The designers' 'next level' is spatial division multiplexing. Here, signals are launched down multiple channels, such as multiple fibres, multi-mode fibre and multi-core fibre. "That is what people will have to do on a five-year to 10-year horizon," concludes Winzer. 

 

For Part 2, click here

 

See also:

  • High Capacity Transport - 100G and Beyond, Journal of Lightwave Technology, Vol 33, No. 3, February 2015.

 

A version of this article first appeared in an OFC 2015 show preview


Alcatel-Lucent serves up x86-based IP edge routing

Alcatel-Lucent has re-architected its edge IP router functions - its service router operating system (SR OS) and applications - to run on Intel x86 instruction-set servers.

Shown is the VSR running on one server and distributed across several servers. Source: Alcatel-Lucent.

The company's Virtualized Service Router portfolio aims to reduce the time it takes operators to launch services and is the latest example of the industry trend of moving network functions from specialist equipment onto stackable servers, a development known as network functions virtualisation (NFV).

"It is taking IP routing and moving it into the cloud," says Manish Gulyani, vice president product marketing for Alcatel-Lucent's IP routing and transport business. 

IP edge routers are located at the edge of the network where services are introduced. By moving IP edge functions and applications onto servers, operators can trial services quickly and in a controlled way. Services can then be scaled according to demand. Operators can also reduce their operating costs by running applications on servers. "They don't have to spare every platform, and they don't need to learn its hardware operational environment," says Gulyani.

Alcatel-Lucent has been offering two IP applications running on servers since mid-year. The first is a route reflector control plane application used to deliver internet services and layer-2/layer-3 virtual private networks (VPNs). Gulyani says the application has already been sold to two customers and over 20 are trialling it. The second application is a routing simulator used by customers for test and development work.

More applications are now being made available for trial: a provider edge function that delivers layer-2 and layer-3 VPNs, and an application assurance application that performs layer-4 to layer-7 deep-packet inspection. "It provides application level reporting and control," says Gulyani. Operators need to understand application signatures to make decisions based on which applications are going through the IP pipe, he says, and based on a customer's policy, the required treatment for an app.

Additional Virtualized Service Router (VSR) software products planned for 2015 include a broadband network gateway to deliver triple-play residential services, a carrier Wi-Fi solution and an IP security gateway.  

Alcatel-Lucent claims a two-rack-unit-high (2RU) server hosting two 10-core Intel Haswell processors achieves 160 Gigabit-per-second (Gbps) full-duplex throughput. The company has worked with Intel to determine how best to use the chipmaker's toolkit to maximise the processing performance of the cores.

"Using 16, 10 Gigabit ports, we can drive the full capacity with a router application," says Gulyani. "But as more and more [router] features are turned on - quality of service and security, for example - the performance goes below 100 Gigabit. We believe the sweet-spot is in the sub-100 Gig range from a single-server perspective."

In comparison, Alcatel-Lucent's own high-end network processor chipset, the FP3, which is used within its router platforms, achieves 400 Gigabit wireline performance even when all the features are turned on.

"With the VSR portfolio and the rest of our hardware platforms, we can offer the right combination to customers to build a performing network with the right economics," says Gulyani.  

 

Alcatel-Lucent's service router portfolio, split into virtual systems and IP platforms. Also shown (in grey) are two platforms that use merchant processors running the company's SR OS router operating system; in other words, the company had experience porting its OS onto hardware other than its own FPx devices before it tackled the x86. Source: Alcatel-Lucent.

 

Gazettabyte asked three market research analysts about the significance of the VSR announcement, the applications being offered, the benefits to operators, and what next for IP.

 

Glen Hunt, principal analyst, transport & routing infrastructure at Current Analysis

Alcatel-Lucent's full routing functionality on an x86 platform enables operators to continue with their existing infrastructures - the 7750 SR in Alcatel-Lucent's case - and expand that infrastructure to support additional services. This is on less expensive platforms, which helps support new services that were previously not addressable due to capital expenditure and/or physical constraints.

The edge of the service provider network is where all the services live. By supporting all services in the cloud, operators can retain a seamless operational model, which includes everything they currently run. The applications being discussed here are network-type functions - Evolved Packet Core (EPC), broadband network gateway (BNG), wireless LAN gateways (WLGWs), for example - not the applications found in the application layer. These functions are critical to delivering a service.

Virtualisation expands the operator's ability to launch capabilities without deploying dedicated routing/device platforms - not in itself a bad thing - with the ability to spin up resources when and where needed. Using servers in a data centre, operators can leverage an on-demand model which can use distributed data-centre resources to deliver the capacity and features.

Other vendors have launched, or are about to launch, virtual router functionality, and the top-level stories appear to be quite similar. But Alcatel-Lucent can claim one of the highest capacities per x86 blade, and can scale out to support Nx160Gbps in a seamless fashion, with the ability to scale the control plane so that multiple instances of the Virtualized Service Router (VSR) appear as one large router.

Furthermore, Alcatel-Lucent is shipping its VSR route reflector and VSR simulator capabilities and is in trials with the VSR provider edge and VSR application assurance – noting it has two contracts and 20-plus trials. This shows there is market interest, and possibly pent-up demand, for the VSR capabilities.

It will be hard for an x86 platform to achieve the performance levels needed in the IP core to transit high volumes of packet data. Most of the core routers in the market today push 16 Terabit-per-second of throughput across 100 Gigabit Ethernet ports and/or via direct DWDM interfaces into an optical transport core. This level of capability needs specialised silicon to meet demands.

Performance will remain a key metric moving forward. Even though an x86 is less expensive than most dedicated high-performance platforms, it still has a cost basis, so the efficiency with which an application uses resources will be important. In the VSR case, the more work a single blade can do, the better. Also important is the ability for multiple applications to work together efficiently, otherwise the cost savings are limited to the reduction in hardware costs. If the management of virtual machines is made more efficient, the result is even greater end-to-end performance for a service which relies on multiple virtualised network functions.

Ultimately, more and more services will move to the cloud, but it will take a long time before everything, if ever, is fully virtualised. Creating a network that can adapt to changing service needs is a lengthy exercise. But the trend is moving rapidly to the cloud, a combination of physical and virtual resources.

 

Michael Howard, co-founder and principal analyst, Infonetics Research

There is overwhelming evidence from the global surveys we’ve done with operators that they plan to move functions off the physical IP edge routers and use software versions instead.

These routers have two main functions: to handle and deliver services, and to move packets. I’ve been prodding router vendors for the last two years to tell us how they plan to package their routing software for the NFV market. Finally, we hear the beginnings, and we’ll see lots more software routing options.

The routing options can be called software routers or vRouters. The services functions will be virtualised network functions (VNFs) - firewalls, intrusion detection and prevention systems, deep-packet inspection, and caching/content delivery networks - delivered without routing code. It is important for operators to see what routing functions they can buy and run in NFV environments on servers, so they can plan how to architect their new software-defined networking and NFV world.

It is important for router vendors to play in this world and not let newcomers or competitors take the business. Of course, there is a big advantage in buying vRouter software - route reflection, for example - from the same router vendor an operator is already using, since it obviously works with the router code running on physical routers, and the same software management tools can be used.

Juniper has just made its first announcement. We believe all router vendors are doing the same; we’ve been expecting announcements from all the router vendors, and finally they are beginning.

It will be interesting to see how the routing code is packaged into targeted use cases - we are just seeing the initial use cases now from Juniper and Alcatel-Lucent - like the route reflection control plane function, IP/MPLS VPNs and others.

Despite the packet-processing performance achieved by Alcatel-Lucent using x86 processors, it should be noted that some functions like the control plane route reflection example only need compute power, not packet processing or packet-moving power.

There already is, and there will always be, a need for high performance for certain places in the network or for serving certain customers. And then there are places and customers where traffic can be handled with less performance.

As for what next for IP, the next 10 to 15 years will be spent moving to SDN- and NFV-architected networks, just as service providers have spent over 10 years moving from time-division multiplexing-based networks to packet-based ones, a transition yet to be finished.

 

Ray Mota, chief strategist and founder, ACG Research

Carriers have infrastructure that is complex and inflexible, which means they have to be risk-averse. They need to start transitioning their architecture so that they just program the service, not re-architect the network each time they have a new service. Having edge applications become more nimble and flexible is a start in the right direction. Alcatel-Lucent has decided to create an NFV edge product with a carrier-grade operating system.

It appears, based on what the company has stated, that it achieves higher performance than its competitors have announced.

Alcatel-Lucent is addressing a few areas: this is great for testing and proofs of concept, and for the part of the market that doesn't need high-capacity routing, but it also introduces the potential to expand into new markets in the webscale space (which includes the large internet content providers and the leading hosting/co-location companies).

You will see more and more IP domain products overlap into the IT domain; the organisational and operational sides are lagging behind the technology, but once service providers figure it out, only then will they have a more agile network.

 


Colt's network transformation

Colt's technology and architecture specialist, Mirko Voltolini, talks to Gazettabyte about how the service provider has transformed its network from one based on custom platforms to an open, modular design.

 

It was obvious to Colt that something had to change. Its network architecture, based on proprietary platforms running custom software, was not sustainable; the highly customised network was cumbersome, resistant to change and expensive to run. The network also required a platform to be replaced - or at least a new platform added alongside an existing one - every five to seven years.


"The cost of this approach is enormous," says Mirko Voltolini, vice president technology and architecture at Colt Technology Services. "Not just in money but the time it takes to roll out a new platform."

Instead, the service provider has sought a modular approach to network design using standardised platforms that are separated from each other. That way, a new platform with a better feature set or improved economics can be slotted in without impacting the other platforms. Colt calls the resulting network a modular multi-service platform (MSP).

The MSP now delivers the majority of Colt's data networking and all-IP services. These include Carrier Ethernet point-to-point, hub-and-spoke and private network services, as well as internet access, IP VPNs and VoIP services.

The vendors chosen for the MSP include Cyan, with its Z-Series packet-optical transport system (P-OTS) and Blue Planet software-defined networking (SDN) platform, and Accedian Networks' customer premises equipment (CPE). Cyan's Z-Series does not support IP, so Colt uses Juniper Networks' and Alcatel-Lucent's IP edge platforms. Colt also has a legacy 20-year-old SDH network; despite the new P-OTS platform, it has decided to leave the SDH network alone, with the modular MSP running alongside it.

Colt chose its vendors based on certain design goals. "The key was openness," says Voltolini. "We didn't want to have a closed system." It was Cyan's management system, the Blue Planet platform, that led Colt to choose Cyan.

Associated with Blue Planet is an ecosystem that allows the management software to control other vendors' platforms. Cyan uses 'element adapters' that mediate between its SDN interface software and the proprietary interfaces of its vendor partners. Cyan says that its Z-Series P-OTS appears to the Blue Planet software as a third-party piece of equipment, in the same way as the other vendors' equipment does - a view confirmed by Colt. "Because of its openness, we have been able to integrate other vendors to use the same management system as if they were Cyan components," says Voltolini.


"Cyan was probably the best option available and we decided to go with it," says Voltolini. The company was looking at what was available two years ago and Voltolini points out that the market has evolved significantly since then. "In the end, if you want to move ahead, you need to make decisions," he says. "We are quite happy with what we have picked and we continue to improve it."

Colt says that as well as SDN, network functions virtualisation (NFV) is also important. "With the same modular platform we have created a virtual component which is a layer-3 CPE," says Voltolini. The company is issuing a request-for-information (RFI) regarding other CPE functions like firewalls, load-balancers and other networking components.

 

Benefits and lessons learned 

Adopting the MSP has speeded up Colt's service delivery. Before the modular network, it would take Colt between 30 and 45 days to fulfil a customer's request for a three-month Ethernet link upgrade from 100 Megabit to 200 Megabit. Now, such a request can be fulfilled in seconds. "We didn't need any more layer-3 CPE and we can upgrade remotely the bandwidth," says Voltolini.

Colt also estimates that it will halve its operational costs once the new network is fully deployed; the network went live in November 2013 but has not yet been deployed in all locations. The operational expense improvement and the greater service flexibility both benefit Colt's bottom line, says Voltolini.

A key lesson learned from the network transformation is the importance of leading staff through change, more than any technological issue. "The technology has been a challenge but in the end, with the suppliers, you can design anything you want if you have the right level of collaboration," says Voltolini. "But when you completely transform the way you deliver services, you are touching everything that is part of the engine of the company."

Colt cites aspects such as engineering solutions, service delivery, service operations, systems and processes, and the sales process. "You need to lead the transition in such a way that everybody is going to follow you," says Voltolini.

Colt encountered obstacles created because of the staff's natural resistance to change. "Certain things took longer," says Voltolini. "We had to overcome obstacles that weren't really obstacles, just people's fear of change."


G.fast adds to the broadband options of the service providers

Feature: G.fast


Competition is commonly what motivates service providers to upgrade their access networks. And operators are being given every incentive to respond. Cable operators are offering faster broadband rates and then there are initiatives such as Google Fiber.

Internet giant Google is planning 1 Gigabit fibre rollouts in up to 34 US cities covering nine metro areas. The initiative prompted AT&T to issue its own list of 21 cities where it is considering offering a 1 Gigabit fibre-to-the-home (FTTH) service.

But delivering fibre all the way to the home is costly, and then there is the engineering time required to connect the home gateway to the network. Hence the operator interest in the emerging G.fast standard, the latest digital subscriber line (DSL) development that promises Gigabit rates using the telephone wire.

"G.fast eliminates the need to run fibre for the last 250 meters [to the home]," says Dudi Baum, CEO of Sckipio, an Israeli start-up developing G.fast chipsets. "Providing 1 Gigabit over a copper pair is cheaper and faster to deploy, compared to running fibre all the way."


Until recently, operators faced a choice of whether to deploy FTTH or use fibre-to-the-node (FTTN) and VDSL to boost broadband rates. Now, such boundaries are disappearing, says Stefaan Vanhastel, marketing director for fixed networks, Alcatel-Lucent. Operators are more pragmatic in their deployments and are choosing the most suitable technology for a given deployment based on what is most cost effective and fastest to deploy.

"It is very much no longer black and white," agrees Julie Kunstler, principal analyst, components at market research firm, Ovum. "The same service providers will be supporting multiple access networks."

The advent of G.fast will enhance operators' choice, boosting data rates while using existing copper to bridge the gap between the fibre and the home. But the technology is still some way off, and views differ as to whether deployments will begin in 2015 or 2016.

"For G.fast, you need the fibre closer to your house to get the Gigabit and that is not available today with most carriers," says Arun Hiremath, director, marketing at DSL chip company, Ikanos Communications. It will likely start with some small scale deployments, he says, "but the carriers will wait a little more for things to mature".

 

G.fast

G.fast enables Gigabit rates over telephone wire by expanding the usable spectrum to 106MHz. This compares with the 17MHz used by VDSL2, the most advanced DSL standard currently deployed. But adopting the wider spectrum exacerbates the two local-loop characteristics that dictate DSL performance: signal attenuation and crosstalk.

Operating at higher frequencies increases signal attenuation, shortening the copper reach over which data can be sent. VDSL2 is typically deployed over links of up to 1,500m; G.fast distances will more likely be 200m or less.

Crosstalk refers to signal leakage between copper pairs in a cable bundle. A cable can be made up of tens or hundreds of twisted copper pairs. The leakage causes each twisted pair to carry not only the signal sent but also noise: the sum of the leakage components from neighbouring DSL pairs.

Crosstalk becomes more prominent the higher the frequency. "One reason why no one has developed G.fast technology until now is the challenge of handling crosstalk at the much higher frequencies," says Baum. Indeed, from G.fast field trials, observed crosstalk is so severe that from certain frequencies upwards, the interference is as strong as the received signal, says Paul Spruyt, DSL strategist for fixed networks at Alcatel-Lucent.

 

Vectoring

Vectoring is a technique used to tackle crosstalk and restore a line's data capacity. It uses digital signal processing to implement noise cancellation, and is already used for VDSL2. "Vectoring is considered a key aspect of G.fast, even more than for VDSL2," says Spruyt.

G.fast can be seen as a logical evolution of VDSL2 but there are also differences. Besides the wider 106MHz spectrum, G.fast has a different duplexing scheme. VDSL2 uses frequency-division duplexing (FDD), where upstream (from the home) and downstream transmission are continuous but on different frequency bands or tones. In contrast, G.fast uses time-division duplexing (TDD), where the full spectrum is used alternately to send and receive data.

If a cable carries both VDSL2 and G.fast services to homes and businesses, G.fast is started from 17MHz, using the 17-106MHz band to avoid overlapping with VDSL2, since crosstalk between the two cannot be cancelled because of their differing duplexing schemes.


Both DSL schemes use discrete multi-tone modulation, where each tone carries data bits. But G.fast uses half the number of tones - 2,048 - with each tone 12 times the bandwidth of a VDSL2 tone.
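
Those figures hang together: G.fast's tone spacing is 51.75kHz against VDSL2's 4.3125kHz (the standard values; the 4,096-tone count below assumes a 17MHz VDSL2 profile). A quick consistency check:

```python
vdsl2_spacing_khz = 4.3125    # standard VDSL2 tone spacing
gfast_spacing_khz = 51.75     # standard G.fast tone spacing

print(gfast_spacing_khz / vdsl2_spacing_khz)    # 12.0 -> 12x wider tones
print(2048 * gfast_spacing_khz / 1e3)           # ~106 MHz of G.fast spectrum
print(4096 * vdsl2_spacing_khz / 1e3)           # ~17.7 MHz for a 17 MHz VDSL2 profile
```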

Operators can also configure the upstream/downstream ratio more easily using TDD. An 80 percent downstream, 20 percent upstream split is common for homes, whereas businesses have symmetric data flows.

Only transmitting or only receiving at any one time also simplifies the G.fast analogue front-end circuitry, since it is less susceptible to signal echo; such echo is an issue with VDSL2 due to its simultaneous sending and receiving of data.

Operators want G.fast to deliver 150 Megabit-per-second (Mbps) aggregate data rates over 250m, 200Mbps over 200m, 500Mbps over 100m and up to 1 Gigabit-per-second over shorter spans. This compares to VDSL2's 70Mbps (50Mbps downstream, 20Mbps upstream) over 400m. With vectoring, VDSL2 performance is doubled: 100Mbps downstream and 40Mbps upstream over the same span.

Vectoring works by measuring the crosstalk coupling on each line before the DSLAM - the platform at the cabinet, or the fibre distribution point unit for G.fast - generates anti-noise to null each line's crosstalk.

The crosstalk coupling between the pairs is estimated using special modulated ‘sync’ symbols that are sent between data transmissions. A user's DSL modem expects to see the modulated sync symbol, but in reality receives the symbol distorted with crosstalk from modulated sync symbols transmitted on the neighbouring lines.

The modem measures the error – the crosstalk – and sends it to the DSLAM. The DSLAM correlates the received error values on the ‘victim’ line with the pilot sequences transmitted on all the other ‘disturber’ lines. This way, the DSLAM measures the crosstalk coupling for every disturber–victim pair. Anti-noise is then generated using a vectoring chip in the DSLAM, and injected into the victim line on top of the transmitted signal to cancel the crosstalk picked up, a process repeated for each line.

Such an approach is known as pre-coding: in the downstream direction anti-noise signals are generated and injected in the DSLAM before the signal is transmitted on the line. For the upstream, post-coding is used: the DSLAM generates and adds the anti-noise after reception of the signal distorted with crosstalk. In this case, the DSL modem sends modulated sync symbols and the DSLAM measures the error signal and performs the correlations and anti-noise calculations.
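
The linear-algebra core of downstream pre-coding can be sketched in a few lines of numpy. Here H is a per-tone crosstalk matrix with invented values, and a zero-forcing pre-coder (a textbook choice, not necessarily what any vendor ships) pre-distorts the transmitted symbols so that the line itself undoes the crosstalk; transmit-power normalisation is ignored:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                        # lines in the vectored group

# Per-tone crosstalk channel: direct paths on the diagonal, weak leakage
# off it (illustrative values, not measured couplings).
H = 0.1 * rng.standard_normal((n, n))
np.fill_diagonal(H, 1.0)

x = rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)   # QPSK symbols

rx_plain = H @ x                             # victims pick up disturber leakage
rx_precoded = H @ (np.linalg.inv(H) @ x)     # zero-forcing pre-coding in the DSLAM

print(np.round(np.abs(rx_plain - x), 3))     # residual crosstalk per line
print(np.round(np.abs(rx_precoded - x), 3))  # ~0: crosstalk cancelled
```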

G.fast vectoring is more complex than vectoring for VDSL2.

Besides the strength of the crosstalk at higher frequencies, G.fast uses a power-saving mode that deactivates the line when no data is being sent. The vectoring algorithm must stop generating anti-noise each time a line is deactivated, and quickly resume when transmission restarts. A VDSL2 line can also be deactivated but this is much less commonplace.

"The number of computations you need to do is proportional to the square of the number of lines," says Spruyt. For G.fast, the lines used are far less - 4 to 24 and even 48 in certain cases -  because the G.fast mini-DSLAM is much closer to the home. For VDSL2, the number of lines can be 200 or 400.

However, the symbol rate of G.fast is tied to the tone spacing and hence is 12 times faster than VDSL2's. That requires faster calculation, but since G.fast has half the number of tones of VDSL2, and crosstalk cancellation is performed per tone, the G.fast processing per line pair works out six times greater.

G.fast vectoring may thus be more complex, but the overall computation - and power consumption - of the vectoring processor is lower than for VDSL2 due to the far fewer lines involved.
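
A back-of-envelope model makes the point, assuming the cancellation work scales with lines squared, times tones, times symbol rate (illustrative line and tone counts drawn from the figures above):

```python
def vectoring_load(lines, tones, relative_symbol_rate):
    """Relative vectoring compute: O(lines^2) per tone, every symbol.
    A back-of-envelope model, not a chip power budget."""
    return lines ** 2 * tones * relative_symbol_rate

vdsl2 = vectoring_load(lines=200, tones=4096, relative_symbol_rate=1)
gfast = vectoring_load(lines=24, tones=2048, relative_symbol_rate=12)
print(f"G.fast / VDSL2 compute ratio: {gfast / vdsl2:.2f}")   # ~0.09
```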


Chip developments

The G.fast analogue silicon requires much faster analogue-to-digital and digital-to-analogue converters due to the broader spectrum used, while the G.fast line drivers use a lower transmit power due to the shorter reach requirements. "We should expect the first generation of G.fast to consume more power than VDSL2 silicon," says Spruyt.


The main functional blocks for G.fast and VDSL2 include the baseband digital signal processor, vectoring, the analogue front end, and the line driver. The degree to which they are integrated in silicon - whether one chip or four if the home-gateway functions are included - depends on where they are used.

"The chipsets will be designed differently for the different segments where they are used," says Hiremath. For example, the G.fast modem could be implemented as a single chip that includes the baseband, home gateway, and even the line driver due to the short lengths involved, he says.

Moreover, while the G.fast standard does not require backward compatibility with VDSL2, there is nothing stopping chipmakers from supporting both. The same was true with VDSL2 yet the resulting chipsets also supported ADSL2.

Ikanos has yet to unveil its G.fast silicon but it has announced its Neos development platform for customers to test and trial the technology. Hiremath says its G.fast design is based on the Neos architecture and that it expects first samples later this year. 

Start-up Sckipio has yet to detail its G.fast silicon design but says it will provide more information in the coming months. G.fast has system requirements that are difficult to meet, says Baum: "The challenge is not to show the technology working but to meet the standard's boundary requirements with a small, efficient design that provides 1 Gigabit." By boundary requirements, Baum means performance targets the modem must achieve, such as certain speeds and distances with a given packet loss.

Sckipio already has first samples of its silicon. The company ported the RTL design of its silicon onto a Cadence Palladium system - a box with hundreds of FPGAs that allows the complete hardware design to be emulated. The company also has DSL models - characterisations of bundles of twisted copper pairs measured at above 200MHz - to test the design's performance. "We use those models to see the expected performance running our protocol over those wires," says Baum.

Alcatel-Lucent has developed its own vectoring know-how for VDSL2 and has now added G.fast. "Having our own vectoring technology means that we have our own vectoring processing," says Alcatel-Lucent's Vanhastel.

Alcatel-Lucent has conducted G.fast trials with A1 Telekom Austria. "The good news is that we have been able to show that with vectoring, you can get really close to single-user capacity; the same capacity you have if there is only a single user active on the line," says Vanhastel. In a trial over 100m of cable, G.fast achieved just 60Mbps due to crosstalk. "Activating G.fast vectoring, it rose to 500Mbps - almost a factor of 10," he says.

Much work remains before G.fast is deployed in the network, says Alcatel-Lucent. The International Telecommunication Union's G.9701 G.fast physical layer document is 300 pages long and while consent has been achieved, approving the standard is expected to take the rest of the year. Interoperability, test, functionality and performance specifications are still to be written by the Broadband Forum and then there are regulatory issues to be overcome: G.fast's 106MHz spectrum overlaps with FM radio, for example.   

Sckipio is more upbeat about timescales, believing operators will start deployments in 2015, driven by competition including from the cable operators. The start-up says it has multiple field trials of its G.fast silicon planned this year.

Meanwhile, extending the spectrum to 212MHz is the next logical step in the development of G.fast. "Bonding is another concept that could be applied," says Spruyt.

There is life in the plain old telephone service yet.

 

This is an extended version of an article that first appeared in New Electronics, click here.

 


Books in 2013 - Part 2

Alcatel-Lucent's President of Bell Labs and CTO, Marcus Weldon, on the history and future of Bell Labs, and titles for Christmas; Steve Alexander, CTO of Ciena, on underdogs, connectedness and deep-sea diving; and Dave Welch, President of Infinera, on how people think, and an extraordinary WWII tale: the second part of Books 2013.

 

Steve Alexander, CTO of Ciena

David and Goliath: Underdogs, Misfits, and the Art of Battling Giants by Malcolm Gladwell

I’ve enjoyed some of Gladwell’s earlier works such as The Tipping Point and Outliers: The Story of Success. You often have to read his material with a bit of a skeptic's eye since he usually deals with people and events that are at least a standard deviation or two away from whatever is usually termed “normal.” In this case he makes the point that overcoming adversity (and it can come in many forms) is helpful in achieving extraordinary results. It also reminded me of the many people who were skeptical about Ciena’s initial prospects back in the middle '90s when we first came to market as a “David” in a land of giant competitors. We clearly managed to prosper and have now outlived some of the giants of the day.

Overconnected: The Promise and Threat of the Internet by William Davidow. 

I downloaded this to my iPad a while back and finally got to read it on a flight back from South America. On my trip what had I been discussing with customers? Improving network connections of course. I enjoyed it quite a bit because I see some of his observations within my own family. The desire to “connect” whenever something happens and the “positive feedback” that can result from an over-rich set of connections can be both quite amusing as well as a little scary! I don’t believe that all of the events that the author attributes to being overconnected are really as cause-and-effect driven as he may portray, but I found the possibilities for fads, market bubbles, and market collapses entertaining. 

For another insight into such extremes see Extraordinary Popular Delusions and the Madness of Crowds by Charles Mackay, first published in the 1840s. We, as a species, have been a bit wacky for a long time.

Shadow Divers: The True Adventure of Two Americans Who Risked Everything to Solve One of the Last Mysteries of World War II  by Robert Kurson. 

Having grown up in the New York/New Jersey area, and having listened to stories from my parents about the fear of sabotage in World War II (Google Operation Pastorius for some background) and from grandparents who had experienced the Black Tom explosion during WW1, this book was a "don't put it down till done" for me. I found it by accident when browsing a used book store. It's available on Kindle and is apparently somewhat controversial because another diver has written a rebuttal to at least some of what was described. It is a great example of what it takes to both dive deep and solve a mystery.

 

David Welch, President, Infinera

Here is my cut. The first three books offer a perspective on how people think and I apply it to business.

My non-work related book is Unbroken: A World War II Story of Survival, Resilience, and Redemption by Laura Hillenbrand.

Unfortunately, I rarely get time to read books, so the picking can be thin at times.

 

Marcus Weldon, President of Bell Labs and CTO, Alcatel-Lucent

I am currently re-reading Jon Gertner's history of Bell Labs, called The Idea Factory: Bell Labs and the Great Age of American Innovation, which should be no surprise as I have just inherited the leadership of this phenomenal place, and much of what he observes is still highly relevant today and will inform the future that I am planning.

I joined Bell Labs in 1995 as a post-doctoral researcher in the famous, Nobel-prize-winning Physics Division (Div111, as it was known) and so experienced much of this first hand. In particular, I recall being surrounded by the most brilliant, opinionated, odd, inspired, collaborative, competitive, driven, relaxed set of people I had ever met. And all with the shared goal of solving the biggest problems in information and telecommunications.

Having recently returned to the 'bosom of Bell', I find that, remarkably, much of that environment and pool of talent still remains. And that is hugely exciting, as it means we still have the raw ingredients for the next great era of Bell Labs. My hope is that 10 years from now Gertner will write a second edition or updated version of the tale that includes the renewed success of Bell Labs, and not just the historical successes.

On the personal front, I am reading whatever my kids ask me to read them. Two of the current favourites are: Turkey Claus, about a turkey trying to avoid becoming the centrepiece of a Christmas feast by adapting and trying various guises, and Pete the Cat Saves Christmas, about an ailing feline Claus who requires average cat Pete to save the big day.

I am not sure there is a big message here, but perhaps it is that 'any one of us can be called to perform great acts, and can achieve them, and that adaptability is key to success'.  And of course, there is some connection in this to the Bell Labs story above, so I will leave it there!

 

Books in 2013: Part 1, click here


Alcatel-Lucent dismisses Nokia rumours as it launches NFV ecosystem

Michel Combes, CEO of Alcatel-Lucent, on a visit to Israel, talks Nokia, The Shift Plan and why service providers are set to regain the initiative.


The CEO of Alcatel-Lucent, Michel Combes, has brushed off rumours of a tie-up with Nokia, after reports surfaced last week that Nokia's board was considering the move as a strategy option.

"You will have to ask Nokia," said Combes. "I'm fully focussed on the Shift Plan, it is the right plan [for the company]; I don't want to be distracted by anything else."

Combes was speaking at the opening of Alcatel-Lucent's cloud R&D centre in Kfar Saba, Israel, where the company's internal start-up CloudBand is developing cloud technology for carriers.

 

Network Functions Virtualisation

CloudBand used the site opening to unveil its CloudBand Ecosystem Program to spur adoption of Network Functions Virtualisation (NFV). NFV is a carrier-led initiative, set up by the European Telecommunications Standards Institute (ETSI), to benefit from the IT model of running applications on virtualised servers.

Carriers want to get away from vendor-specific platforms that are expensive to run and cumbersome to upgrade when new services are needed. Adding a service can take between 18 months and three years, said Dor Skuler, vice president and general manager of the CloudBand business unit. Moreover, such equipment can reside in the network for 15 years. "Most of the [telecom] software is running on CPUs that are 15 years old," said Skuler.

Instead, carriers want vendors to develop software 'network functions' executed on servers. NFV promises a common network infrastructure and reduced costs by exploiting the economies of scale associated with servers. Server volumes dwarf those of dedicated networking equipment, and are regularly upgraded with new CPUs.

Applications running on servers can also be scaled up and down according to demand, using virtualisation and cloud orchestration techniques already present in the data centre. "This is about making the network scalable and automated," said Combes.

Alcatel-Lucent stresses that not all networking functions are suited for virtualisation. Optical transport is one example. Another is routing, which requires dedicated silicon for packet processing and traffic management.  

CloudBand was set up in 2011. The unit is focussed on the orchestration and automation of distributed cloud computing for carriers. "How do you operationalise cloud which may be distributed across 20 to 30 locations?" said Skuler.

CloudBand says it can add a "cloud node" - IT equipment at an operator's site - and have it up and running three hours after power-up. This requires processes that are fully automated, said Skuler. Also used are algorithms developed at Alcatel-Lucent Bell Labs that determine the best location for distributed cloud resources for a given task. The algorithms load-balance the resources based on an application's requirements.

The distributed cloud technology also benefits from software-defined networking (SDN) technology from Alcatel-Lucent's other internal venture, Nuage Networks. Nuage Networks automates and sets up network connections between data centres. "Just as SDN makes use of virtualisation to give applications more memory and CPU resources in the data centre, Nuage does the same for the network," said Skuler.

Open interfaces are needed for NFV to succeed and to avoid proprietary solutions and vendor lock-in. Alcatel-Lucent's NFV solution needs to support third-party applications, while the company's applications will have to run on other vendors' platforms. To this end, CloudBand has set up an NFV ecosystem for service providers, vendors and developers.

"We have opened up CloudBand to anyone in the industry to test network applications on top of the cloud," said Skuler. "We are the first to do that."

So far, 15 companies have signed up to the CloudBand Ecosystem Program including Deutsche Telekom, Telefonica, Intel and HP.

Technologies such as NFV promise operators a way to regain market traction and avoid the commoditisation of transport, said Combes. Operators can manage their networks more efficiently and create new business models. For example, they can sell enterprises network functions such as infrastructure-as-a-service and platform-as-a-service.

Do software functions running on servers not undermine a telecom equipment vendor's primary business? "We are still perceived as a hardware company yet 85 percent of systems is software based," said Combes. Moreover, this is a carrier-driven initiative. "This is where our customers want to go," he said. "You either accept there will be a bit of cannibalisation or run the risk of being cannibalised by IT players or others."

 

The Shift Plan

Combes has been in place as Alcatel-Lucent's CEO for four months. In that time he has launched the Shift Plan that focusses the company's activities in three broad directions: IP infrastructure including routing and transport, cloud, and ultra-broadband access including wireless (LTE) and wireline (FTTx).

Combes says the goal is to regain the competitiveness Alcatel-Lucent has lost in recent years by improving product innovation, quality of execution and the company's cost structure. Combes has also tackled the balance sheet, refinancing company debt over the summer.

The Shift Plan's target is to get the company back on track by 2015: growing, profitable and industry-leading in the three areas of focus, he said.     


Space-division multiplexing: the final frontier

System vendors continue to trumpet their achievements in long-haul optical transmission speeds and overall data carried over fibre. 

Alcatel-Lucent announced earlier this month that France Telecom-Orange is using the industry's first 400 Gigabit link, connecting Paris and Lyon, while Infinera has detailed a trial demonstrating 8 Terabit-per-second (Tbps) of capacity over 1,175km and using 500 Gigabit-per-second (Gbps) super-channels. 


Yet vendors already recognise that capacity in the frequency domain will only scale so far and that other approaches are required. One is space-division multiplexing: using multiple channels separated in space, implemented, for example, with multi-core fibre in which each core supports several modes.

 "We want a technology that scales by a factor of 10 to 100," says Peter Winzer, director of optical transmission systems and networks research at Bell Labs. "As an example, a fibre with 10 cores with each core supporting 10 modes, then you have the factor of 100."

 

Space-division multiplexing

Alcatel-Lucent's research arm, Bell Labs, has demonstrated the transmission of 3.8Tbps using several data channels and an advanced signal processing technique known as multiple-input, multiple-output (MIMO).

In particular, 40 Gigabit quadrature phase-shift keying (QPSK) signals were sent over a six-spatial-mode fibre using two polarisation modes and eight wavelengths to achieve 3.8Tbps. The overall transmission uses only 400GHz of spectrum.
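
The demo's headline figure is simple channel-counting arithmetic (the spectral-efficiency figure is derived here, not quoted by Bell Labs):

```python
# Back-of-envelope check of the Bell Labs demo figures quoted above.
modes, polarisations, wavelengths = 6, 2, 8
gbps_per_channel = 40                 # 40G QPSK on each mode/polarisation/wavelength

total_gbps = modes * polarisations * wavelengths * gbps_per_channel
print(total_gbps / 1000, "Tbps")      # 3.84 -> the reported 3.8 Tbps
print(total_gbps / 400, "bit/s/Hz")   # ~9.6 bit/s/Hz across the 400 GHz used
```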

Alcatel-Lucent stresses that the commercial deployment of space-division multiplexing remains years off. Moreover operators will likely first use already-deployed parallel strands of single-mode fibre, needing the advanced signal processing techniques only later.

"You might say that is trivial [using parallel strands of fibre], but bringing down the cost of that solution is not," says Winzer.

First, cost-effective integrated amplifiers will be needed. "We need to work on a single amplifier that can amplify, say, ten existing strands of single-mode fibre at the cost of two single-mode amplifiers," says Winzer. An integrated transponder will also be needed: one transponder that couples to 10 individual fibres at a much lower cost than 10 individual transponders.

With a super-channel transponder, several wavelengths are used, each with its own laser, modulator and detector. "In a spatial super-channel you have the same thing, but not, say, three different frequencies but three different spatial paths," says Winzer. Here, photonic integration is the challenge in achieving a cost-effective transponder.

Once such integrated transponders and amplifiers become available, it will make sense to couple them to multi-core fibre. But operators will only likely start deploying new fibre once they exhaust their parallel strands of single-mode fibre.

Such integrated amplifiers and integrated transponders will present challenges. "The more and more you integrate, the more and more crosstalk you will have," says Winzer. "That is fundamental: integration always comes at the cost of crosstalk."

Winzer says there are several areas where crosstalk may arise. An integrated amplifier serving ten single-mode fibres will share a multi-core erbium-doped fibre instead of ten individual strands. Crosstalk between those closely-spaced cores is likely.

The transponder will be based on a large integrated circuit, giving rise to electrical crosstalk. One way to tackle crosstalk is to develop components to a higher specification, but that is more costly. Alternatively, signal processing on the received signal can be used to undo the crosstalk. Using electronics to counter crosstalk is attractive, especially when it is the optics that dominate the design cost. This is where MIMO signal processing plays a role. "MIMO is the most advanced version of spatial multiplexing," says Winzer.

To address the crosstalk caused by spatial multiplexing in the Bell Labs demo, 12x12 MIMO was used. Bell Labs says that using MIMO does not add significantly to the overall computation. Existing 100 Gigabit coherent ASICs effectively use a 2x2 MIMO scheme, says Winzer: “We are extending the 2x2 MIMO to 2Nx2N MIMO.”
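
To illustrate the principle, here is a minimal sketch of MIMO-style crosstalk equalisation. It is conceptual only, not Bell Labs' actual signal processing: the crosstalk matrix, its estimation and the solver are all simplifying assumptions.

    import numpy as np

    # Conceptual sketch: N coupled spatial/polarisation channels are
    # modelled as y = H @ x, where H captures the crosstalk. If H can be
    # estimated (in practice from training symbols), inverting it
    # recovers the transmitted symbols.
    N = 12  # 12x12 MIMO, as in the demo
    rng = np.random.default_rng(0)
    x = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)  # QPSK symbols
    H = np.eye(N) + 0.1 * rng.standard_normal((N, N))   # mild crosstalk
    y = H @ x                         # received, crosstalk-mixed signal
    x_hat = np.linalg.solve(H, y)     # zero-forcing equalisation
    print(np.allclose(x_hat, x))      # True: the crosstalk is undone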

Only one portion of the current signal processing chain is affected, he adds: a portion that consumes 10 percent of the power will need to increase by a certain factor. The resulting design will be more complex and expensive, but not dramatically so, he says.

Winzer says such mitigation techniques need to be investigated now since crosstalk in future systems is inevitable. Even if the technology's deployment is at least a decade away, developing techniques to tackle crosstalk now means vendors have a clear path forward.

 

Parallelism

Winzer points out that optical transmission continues to embrace parallelism. "With super-channels we go parallel with multiple carriers because a single carrier can’t handle the traffic anymore," he says. This is similar to parallelism in microprocessors, where multi-core designs are now used owing to the diminishing returns of continually increasing a single core's clock speed.

For 400Gbps or 1 Terabit over a single-mode fibre, the super-channel approach is the near-term evolution.

Over the next decade, the benefit of frequency parallelism will diminish since it will no longer increase spectral efficiency. "Then you need to resort to another physical dimension for parallelism and that would be space," says Winzer.

MIMO will be needed when crosstalk arises, and that will occur with multi-mode fibre.

"For multiple strands of single mode fibre it will depend on how much crosstalk the integrated optical amplifiers and transponders introduce," says Winzer.

 

Part 1: Terabit optical transmission


Alcatel-Lucent demos dual-carrier Terabit transmission

"Without [photonic] integration you are doubling up your expensive opto-electronic components which doesn't scale"

Peter Winzer, Alcatel-Lucent's Bell Labs

 

Part 1: Terabit optical transmission

Alcatel-Lucent's research arm, Bell Labs, has used high-speed electronics to enable one Terabit long-haul optical transmission using two carriers only.

Several system vendors, including Alcatel-Lucent, have demonstrated one Terabit transmission, but the company is claiming an industry first in using only two multiplexed carriers. In 2009, Alcatel-Lucent's first Terabit optical transmission used 24 sub-carriers.

"There is a tradeoff between the speed of electronics and the number of optical modulators and detectors you need," says Peter Winzer, director of optical transmission systems and networks research at Bell Labs. "In general it will be much cheaper doing it with fewer carriers at higher electronics speeds than doing it at a lower speed with many more carriers."

 

What has been done

In the lab-based demonstration, Bell Labs sent five 1 Terabit-per-second (Tbps) signals over an equivalent distance of 3,200km. Each uses dual-polarisation 16-QAM (quadrature amplitude modulation) to achieve a 1.28Tbps signal. Thus each carrier holds 640Gbps: some 500Gbps of data and the rest forward error correction (FEC) bits.

In current 100Gbps systems, dual-polarisation quadrature phase-shift keying (DP-QPSK) modulation is used. Going from QPSK to 16-QAM doubles the bit rate. Bell Labs has also increased the symbol rate from some 30Gbaud to 80Gbaud using state-of-the-art high-speed electronics developed at Alcatel Thales III-V Lab.
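
Those figures multiply out to the quoted rates (a Python sketch using only the numbers reported here):

    # Per-carrier line rate: 80Gbaud x 4 bits per 16-QAM symbol x 2
    # polarisations.
    symbol_rate_gbaud = 80
    bits_per_symbol = 4  # log2(16) for 16-QAM
    polarisations = 2

    per_carrier_gbps = symbol_rate_gbaud * bits_per_symbol * polarisations
    print(per_carrier_gbps)             # 640 Gbps per carrier
    print(2 * per_carrier_gbps / 1000)  # 1.28 Tbps for the two-carrier signal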

"To achieve these rates, you need special high-speed components - multiplexers - and also high-speed multi-level devices," says Winzer.  These are indium phosphide components, not CMOS and hence will not be deployed in commercial products for several years yet. "These things are realistic [in CMOS], just not for immediate product implementation," says Winzer.

Each carrier occupies 100GHz of channel bandwidth, equating to 200GHz overall, or a 5.2b/s/Hz spectral efficiency. Current state-of-the-art 100Gbps systems use 50GHz channels, achieving 2b/s/Hz.

The 3,200km reach using 16-QAM technology is achieved in the lab, using good fibre and without any commercial product margins, says Winzer. Adding commercial product margins would reduce the optical link budget by 2-3dB and hence the overall reach.

Winzer says the one Terabit demonstration uses all the technologies employed in Alcatel-Lucent's photonic service engine (PSE) ASIC although the algorithms and soft-decision FEC used are more advanced, as expected in an R&D trial.

Before such one-Terabit systems become commercial, progress in photonic integration will be needed, as well as advances in CMOS process technology.

"Progress in photonic integration is needed to get opto-electronic costs down as it [one Terabit] is still going to need two-to-four sub-carriers," he says. A balance between parallelism and speed needs to be struck, and parallelism is best achieved using integration. "Without integration you are doubling up your expensive opto-electronic components which doesn't scale," says WInzer.

 

In Part 2: Space-division multiplexing: the final frontier


A Terabit network processor by 2015?

Given that 100 Gigabit merchant silicon network processors will only appear this year, it sounds premature to discuss Terabit devices. But Alcatel-Lucent's latest core router family uses the 400 Gigabit FP3 packet-processing chipset, and one Terabit is the next obvious development.


Source: Gazettabyte

 

Core routers achieved Terabit scale a while back. Alcatel-Lucent's recently announced IP core router family includes the high-end 32 Terabit 7950 XRS-40, expected in the first half of 2013. The platform has 40 slots and will support up to 160 100 Gigabit Ethernet interfaces.

Its FP3 400 Gigabit network processor chipset, announced in 2011, is already used in Alcatel-Lucent's edge routers, but the 7950 is the first router platform family to exploit the chipset's capability fully.

The 7950 family comes with a selection of 10, 40 and 100 Gigabit Ethernet interfaces. Alcatel-Lucent has designed the router hardware such that the card-level control functions are separate from the Ethernet interfaces and the FP3 chipset, which both sit on the line card. The redesign preserves the service provider's investment: carrier modules can be upgraded independently of the media line cards, which represent the bulk of the line-card investment.

The 7950 XRS-20 platform, in trials now, has 20 slots which take the interface modules - dubbed XMAs (media adapters) - that house the various Ethernet interface options and the FP3 chipset. What Alcatel-Lucent calls the card-level control complex is the carrier module (XCM), of which there are up to 10 in the system. The XCM, which includes control processing, interfaces to the router's system fabric and holds up to two XMAs.

There are two XCM types used with the 7950 family router members. The 800 Gigabit-per-second (Gbps) XCM supports a pair of 400Gbps or 200Gbps XMAs, while the 400Gbps XCM supports a single 400Gbps XMA or a pair of 200Gbps XMAs.
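
As an illustration of those pairing rules, a minimal sketch; the helper function is hypothetical, not Alcatel-Lucent tooling:

    # An XCM configuration is valid if it holds at most two XMAs and
    # their combined rate fits within the XCM's capacity.
    def xcm_config_valid(xcm_gbps, xma_rates_gbps):
        return len(xma_rates_gbps) <= 2 and sum(xma_rates_gbps) <= xcm_gbps

    print(xcm_config_valid(800, [400, 400]))  # True: 800G XCM, two 400G XMAs
    print(xcm_config_valid(400, [400]))       # True: 400G XCM, one 400G XMA
    print(xcm_config_valid(400, [400, 400]))  # False: exceeds the XCM capacity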

The slots that host the XCMs can scale to 2 Terabits, suggesting that the platforms are already designed with the next packet processor architecture in mind.

 

FP3 chipset

The FP3 chipset, like the previous generation 100Gbps FP2, comprises three devices: the P-chip network processor, a Q-chip traffic manager and the T-chip that interfaces to the router fabric.

The P-chip inspects packets and performs the lookups that determine where the packets should be forwarded. The P-chip determines a packet's class and the quality of service it requires, and tells the Q-chip traffic manager in which queue the packet is to be placed. The Q-chip handles the packet flows and makes decisions as to how packets should be dealt with, especially when congestion occurs.
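
A conceptual sketch of that division of labour; it is purely illustrative, and the names and queue model are assumptions rather than Alcatel-Lucent's interfaces:

    from collections import deque

    queues = {"voice": deque(), "video": deque(), "best_effort": deque()}

    def p_chip_classify(packet):
        # Lookup stage: determine forwarding and the QoS class, which
        # selects the queue the Q-chip should use.
        return packet.get("qos", "best_effort")

    def q_chip_enqueue(packet, queue_name, max_depth=1000):
        # Traffic management stage: queue the packet, dropping it when
        # the queue is congested.
        q = queues[queue_name]
        if len(q) < max_depth:
            q.append(packet)

    packet = {"dst": "10.0.0.1", "qos": "video"}
    q_chip_enqueue(packet, p_chip_classify(packet))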

The basic metrics of the 100Gbps FP2 P-chip are that it is clocked at 840MHz and has 112 micro-coded programmable cores, arranged as 16 rows by 7 columns. To scale to 400Gbps, the FP3 P-chip is clocked at 1GHz (x1.2) and has 288 cores arranged as a 32x9 matrix (x2.6). The cores in the FP3 have also been re-architected such that two instructions can be executed per clock cycle. However, this achieves a 30-35% performance enhancement rather than 2x, since there are data dependencies and it is not always possible to execute instructions in parallel. Collectively, the FP3 enhancements provide the needed 4x improvement to achieve 400Gbps packet processing performance.
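
Multiplying those reported gains together shows how the roughly 4x step is reached:

    # FP2-to-FP3 scaling, using the figures above.
    clock_gain = 1.0 / 0.84   # 840MHz -> 1GHz, about 1.2x
    core_gain = 288 / 112     # 112 -> 288 cores, about 2.6x
    dual_issue_gain = 1.32    # the quoted 30-35% from dual issue

    print(clock_gain * core_gain * dual_issue_gain)  # ~4x, enough for 400Gbps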

The FP3's traffic manager Q-chip retains the FP2's four RISC cores, but the instruction set has been enhanced and the cores are now clocked at 900MHz.

  

Terabit packet processing

Alcatel-Lucent has kept the same line-card configuration of using two P-chips with each Q-chip. The second P-chip is viewed as an inexpensive way to add spare processing in case operators need to support more complex service mixes in future. However, in the case of FP2-based line cards, the capability of the second P-chip has rarely been used, says Alcatel-Lucent.

Having the second P-chip certainly boosts the overall packet processing on the line card, but at some point Alcatel-Lucent will develop the FP4, and the next obvious speed hike is 1 Terabit.

Moving to a 28nm or an even more advanced CMOS process will enable the clock speed of the P-chip to be increased, but probably not by much. A 1.2GHz clock would still require a further more-than-doubling of the cores, assuming Alcatel-Lucent doesn't also boost processing performance elsewhere to achieve the overall 2.5x speed-up to a 1 Terabit FP4.
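
A quick sketch of that speculative arithmetic, assuming the 1.2GHz clock mentioned above:

    # 400Gbps -> 1Tbps needs a 2.5x speed-up; a 1.2x clock gain leaves
    # the rest to come from more cores (all else being equal).
    target_gain = 1000 / 400
    clock_gain = 1.2 / 1.0
    cores_needed = 288 * target_gain / clock_gain
    print(round(cores_needed))  # ~600 cores: a further more-than-doubling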

However, there are two obvious hurdles to be overcome to achieve a Terabit network processor: electrical interface speeds and memory.

Alcatel-Lucent settled on 10Gbps SERDES to carry traffic between the chips and for the interfaces on the T-chip, believing the technology to be the most viable and sufficiently mature when the design was undertaken. A Terabit FP4 will likely adopt 25Gbps interfaces to provide the required 2.5x I/O boost.

Another, even more significant, challenge is the memory speed improvement needed for the lookup tables and for packet buffering. Alcatel-Lucent worked with the leading memory vendors when designing the FP3 and will do the same for its next-generation design.

Alcatel-Lucent, not surprisingly, will not comment on devices it has yet to announce. But the company did say that none of the identified design challenges for the next chipset are insurmountable.

 

Further reading:

Network processors to support multiple 100 Gigabit flows

A more detailed look at the FP3 in New Electronics

