OFDM promises compact Terabit transceivers

Source: ECI Telecom

 

A one Terabit super-channel, crafted using orthogonal frequency-division multiplexing (OFDM), has been transmitted over a live network in Germany. The OFDM demonstration is the outcome of a three-year project conducted by the Tera Santa Consortium comprising Israeli companies and universities.

Current 100 Gig coherent networks use a single carrier for the optical transmission whereas OFDM imprints the transmitted data across multiple sub-carriers. OFDM is already used as a radio access technology, the Long Term Evolution (LTE) cellular standard being one example.

With OFDM, the sub-carriers are tightly packed with a spacing chosen to minimise the interference at the receiver. OFDM is being researched for optical transmission as it promises robustness to channel impairments as well as implementation benefits, especially as systems move to Terabit speeds. 

"It is clear that the market has voted for single-carrier transmission for 400 Gig," says Shai Stein, chairman of the Tera Santa Consortium and CTO of system vendor, ECI Telecom. "But at higher rates, such as 1 Terabit, the challenge will be to achieve compact, low-power transceivers."

 

The real contribution [of OFDM] is implementation efficiency

Shai Stein

One finding of the project is that the OFDM optical performance matches that of traditional coherent transmission but that the digital signal processing required is halved. "The real contribution [of OFDM] is implementation efficiency," says Stein.

For the trial, the 175GHz-wide 1 Terabit super-channel signal was transmitted through several reconfigurable optical add/drop multiplexer (ROADM) stages. The 175GHz spectrum comprises seven, 25GHz bands. Two OFDM schemes were trialled: 128 sub-carriers and 1024 sub-carriers across each band.

To achieve 1 Terabit, the net data rate per band was 142 Gigabit-per-second (Gbps). Adding the overhead bits for forward error correction and pilot signals, the gross data rate per band is closer to 200Gbps.
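As a rough check of the arithmetic, the quoted figures can be reproduced with a short calculation. This is a sketch only: the combined FEC and pilot overhead of about 40 percent is an assumption chosen to land near the ~200Gbps gross figure quoted above, not a number from the consortium.

```python
# Back-of-the-envelope check of the super-channel arithmetic quoted above.
NUM_BANDS = 7             # 175GHz super-channel = seven 25GHz bands
NET_RATE_PER_BAND = 142   # Gbps, net (payload) rate per band
ASSUMED_OVERHEAD = 0.40   # combined FEC + pilot overhead (assumption)

net_total = NUM_BANDS * NET_RATE_PER_BAND                    # ~994 Gbps, i.e. ~1 Terabit
gross_per_band = NET_RATE_PER_BAND * (1 + ASSUMED_OVERHEAD)  # ~199 Gbps, close to 200

print(f"Net super-channel rate: {net_total} Gbps")
print(f"Gross rate per band:    {gross_per_band:.0f} Gbps")
```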

The 128 or 1024 sub-carriers per band are modulated using either quadrature phase-shift keying (QPSK) or 16-quadrature amplitude modulation (16-QAM). One modulation scheme - QPSK or 16-QAM - was used across a band, although Stein points out that the modulation scheme can be chosen on a sub-carrier by sub-carrier basis, depending on the transmission conditions. 

The trial took place at the Technische Universität Dresden, using the Deutsches Forschungsnetz e.V. X-WiN research network. The signal recovery was achieved offline using MATLAB computational software. "It [the trial] was in real conditions, just the processing was performed offline," says Stein. The MATLAB algorithms will be captured in FPGA silicon and added to the transceiver in the coming months.

Using a purpose-built simulator, the Tera Santa Consortium compared the OFDM results with traditional coherent super-channel transmission. "Both exhibited the same performance," says David Dahan, senior research engineer for optics at ECI Telecom. "You get a 1,000km reach without a problem." And with hybrid EDFA-Raman amplification, 2,000km is possible. The system also demonstrated robustness to chromatic dispersion. Using 1024 sub-carriers, the chromatic dispersion penalty is sufficiently low that no compensation is needed, says ECI.

Stein says the project has been hugely beneficial to the Israeli optical industry: "There have been silicon photonics, transceiver and algorithmic developments, and benefits at the networking level." For ECI, it is important that there is a healthy local optical supply chain. "The giants have that in-house, we do not," says Stein.

One Terabit transmission is expected to be realised in the marketplace within the next two years, and thanks to the project the consortium companies are now well placed to understand its requirements, says Stein.

Set up in 2011, the Tera Santa Consortium includes ECI Telecom, Finisar, MultiPhy, Cello, Civcom, Bezeq International, the Technion - Israel Institute of Technology, Ben-Gurion University, the Hebrew University of Jerusalem, Bar-Ilan University and Tel-Aviv University.


Verizon on 100G+ optical transmission developments

Source: Gazettabyte

Feature: 100 Gig and Beyond. Part 1:

Verizon's Glenn Wellbrock discusses 100 Gig deployments and higher speed optical channel developments for long haul and metro. 

 

The number of 100 Gigabit wavelengths deployed in the network has continued to grow in 2013.

According to Ovum, 100 Gigabit has become the wavelength of choice for large wavelength-division multiplexing (WDM) systems, with spending on 100 Gigabit now exceeding 40 Gigabit spending. LightCounting forecasts that 40,000, 100 Gigabit line cards will be shipped this year, 25,000 in the second half of the year alone. Infonetics Research, meanwhile, points out that while 10 Gigabit will remain the highest-volume speed, the most dramatic growth is at 100 Gigabit. By 2016, the majority of spending in long-haul networks will be on 100 Gigabit, it says.

The market research firms' findings align with Verizon's own experience deploying 100 Gigabit. The US operator said in September that it had added 4,800 miles of 100 Gigabit to its global IP network during the first half of 2013, taking the total to 21,400 miles in the US network and 5,100 miles in Europe. Verizon expects to deploy another 8,700 miles of 100 Gigabit in the US and 1,400 more miles in Europe by year end.

"We expect to hit the targets; we are getting close," says Glenn Wellbrock, director of optical transport network architecture and design at Verizon.

Verizon says several factors are driving the need for greater network capacity, including its FiOS bundled home communication services, Long Term Evolution (LTE) wireless and video traffic. But what triggered Verizon to upgrade its core network to 100 Gig was converging its IP networks and the resulting growth in traffic. "We didn't do a lot of 40 Gig [deployments] in our core MPLS [Multiprotocol Label Switching] network," says Wellbrock.

The cost of 100 Gigabit was another factor: a 100 Gigabit long-haul channel is now cheaper than ten, 10 Gig channels. There are also operational benefits to using 100 Gig, such as having fewer wavelengths to manage. "So it is the lower cost-per-bit plus you get all the advantages of having the higher trunk rates," says Wellbrock.

Verizon expects to continue deploying 100 Gigabit. First, it has a large network and much of the deployment will occur in 2014. "Eventually, we hope to get a bit ahead of the curve and have some [capacity] headroom," says Wellbrock. 

 

We could take advantage of 200 Gig or 400 Gig or 500 Gig today

 

Super-channel trials

Operators, working with optical vendors, are trialling super-channels and advanced modulation schemes such as 16-QAM (quadrature amplitude modulation). Such trials involve links carrying data in multiples of 100 Gig: 200 Gig, 400 Gig, even a Terabit.

Super-channels are already carrying live traffic. Infinera's DTN-X system delivers 500 Gig super-channels using quadrature phase-shift keying (QPSK) modulation. Orange has a 400 Gigabit super-channel link between Lyon and Paris. The 400 Gig super-channel comprises two carriers, each carrying 200 Gig using 16-QAM, implemented using Alcatel-Lucent's 1830 photonic service switch platform and its photonic service engine (PSE) DSP-ASIC.

"We could take advantage of 200 Gig or 400 Gig or 500 Gig today," says Wellbrock. "As soon as it is cost effective, you can use it because you can put multiple 100 Gig channels on there and multiplex them."

The issue with 16-QAM, however, is its limited reach using existing fibre and line systems - 500-700km - compared to QPSK's 2,500+ km before regeneration. "It [16-QAM] will only work in a handful of applications - 25 percent, something of this nature," says Wellbrock. This is good for a New York-to-Boston link, he says, but not New York to Chicago. "From our end it is pretty simple, it is lowest cost," says Wellbrock. "If we can reduce the cost, we will use it [16-QAM]." However, if the reach requirement cannot be met, the operator will not go to the expense of putting in signal regenerators just to use 16-QAM, he says.

Earlier this year Verizon conducted a trial with Ciena using 16-QAM. The goals were to test 16-QAM alongside live traffic and determine whether the same line card would work at 100 Gig using QPSK and 200 Gig using 16-QAM. "The good thing is you can use the same hardware; it is a firmware setting," says Wellbrock.

 

We feel that 2015 is when we can justify a new, greenfield network and that 100 Gig or versions of that - 200 Gig or 400 Gig - will be cheap enough to make sense 

 

100 Gig in the metro

Verizon says there is already sufficient traffic pressure in its metro networks to justify 100 Gig deployments. Some of Verizon's bigger metro locations comprise up to 200 reconfigurable optical add/drop multiplexer (ROADM) nodes. Each node is typically a central office connected to the network via a ROADM, varying from a two-degree to an eight-degree design.

"Not all the 200 nodes would need multiple 100 Gig channels but in the core of the network, there is a significant amount of capacity that needs to be moved around," says Wellbrock. "100 Gig will be used as soon as it is cost-effective." 

Unlike long-haul, 100 Gigabit in the metro remains costlier than ten 10 Gig channels. That said, Verizon has deployed metro 100 Gig when absolutely necessary, for example to link two router locations that require 100 Gig. Here, Verizon is willing to pay extra for such links.

"By 2015 we are really hoping that the [metro] crossover point will be reached, that 100 Gig will be more cost effective in the metro than ten times 10 [Gig]." Verizon will build a new generation of metro networks based on 100 Gig or 200 Gig or 400 Gig using coherent receivers rather than use existing networks based on conventional 10 Gig links to which 100 Gig is added.

"We feel that 2015 is when we can justify a new, greenfield network and that 100 Gig or versions of that - 200 Gig or 400 Gig - will be cheap enough to make sense."   

 

Data Centres

The build-out of data centres is not a significant factor driving 100 Gig demand. The largest content service providers do use tens of 100 Gigabit wavelengths to link their mega data centres but they typically have their own networks that connect relatively few sites.

"If you have lots of data centres, the traffic itself is more distributed, as are the bandwidth requirements," says Wellbrock.

Verizon has over 220 data centres, most being hosting centres. The data demand between many of the sites is relatively small and is served with 10 Gigabit links. "We are seeing the same thing with most of our customers," says Wellbrock.

 

Technologies

System vendors continue to develop cheaper line cards to meet the cost-conscious metro requirements. Module developments include smaller 100 Gig 4x5-inch MSA transponders, 100 Gig CFP modules and component developments for line side interfaces that fit within CFP2 and CFP4 modules.

"They are all good," says Wellbrock when asked which of these 100 Gigabit metro technologies are important for the operator. "We would like to get there as soon as possible." 

The CFP4 may be available by late 2015 but more likely in 2016, and will reduce significantly the cost of 100 Gig. "We are assuming they are going to be there and basing our timelines on that," he says.

Greater line card port density is another benefit once 100 Gig CFP2 and CFP4 line side modules become available. "Lower power and greater density, which is allowing us to get more bandwidth on and off the card," says Wellbrock.

Existing switches and routers are bandwidth-constrained: they have more traffic capability than the faceplate can provide. "The CFPs, the way they are today, you can only get four on a card, and a lot of the cards will support twice that much capacity," says Wellbrock.

With the smaller form factor CFP2 and CFP4, 1.2 and 1.6 Terabit cards will become possible from 2015. Another possible development is a 400 Gigabit CFP, which would achieve a similar overall capacity gain.

 

Coherent, not just greater capacity

Verizon is looking for greater system integration and continues to encourage industry commonality in optical component building blocks to drive down cost and promote scale.

Indeed Verizon believes that industry developments such as MSAs and standards are working well. Wellbrock prefers standardisation to custom designs like 100 Gigabit direct detection modules or company-specific optical module designs. 

Wellbrock stresses the importance of coherent receiver technology not only in enabling higher capacity links but also a dynamic optical layer. The coherent receiver adds value when it comes to colourless, directionless, contentionless (CDC) and flexible grid ROADMs.

"If you are going to have a very cost-effective 100 Gigabit because the ecosystem is working towards similar solutions, then you can say: 'Why don't I add in this agile photonic layer?' and then I can really start to do some next-generation networking things."  This is only possible, says Wellbrock, because of the tunabie filter offered by a coherent receiver, unlike direct detection technology with its fixed-filter design.

"Today, if you want to move from one channel to the next - wavelength 1 to wavelength 2 - you have to physically move the patch cord to another filter," says Wellbrock. "Now, the [coherent] receiver can simply tune the local oscillator to channel 2; the transmitter is full-band tunable, and now the receiver is full-band tunable as well." This tunability can be enabled remotely rather than requiring an on-site engineer. 

Such wavelength agility promises greater network optimisation.

"How do we perhaps change some of our sparing policy? How do we change some of our restoration policies so that we can take advantage of that agile photonics later," says Wellbroack. "That is something that is only becoming available because of the coherent 100 Gigabit receivers."    

 

Part 2, click here


Cisco Systems demonstrates 100 Gigabit technologies

* Cisco adds the CPAK transceiver to its mix of 100 Gigabit coherent and elastic core technologies
* Announces 100 Gigabit transmission over 4,800km

 

"CPAK helps accelerate the feasibility and cost points of deploying 100Gbps"

Stephen Liu, Cisco

Cisco Systems has announced that its 100 Gigabit coherent module has achieved a reach of 4,800km without signal regeneration. The reach was achieved in the lab and the system vendor intends to verify it in a customer's network.

The optical transmission system achieved a reach of 3,000km over low-loss fibre when first announced in 2012. The extended reach is not the result of a design upgrade; rather, the 100 Gigabit-per-second (Gbps) module is being used on a link with Raman amplification.

Cisco says it started shipping its 100Gbps coherent module in June 2012. "We have shipped over 2,000 100Gbps coherent dense WDM ports," says Sultan Dawood, marketing manager at Cisco. The 100Gbps ports include line-side 100Gbps interfaces integrated within Cisco's ONS 15454 multi-service transport platform and its CRS core router supporting its IP-over-DWDM elastic core architecture.

Cisco has also coupled the ASR 9922 series router to the ONS 15454. "We are extending what we have done for IP and optical convergence in the core," says Stephen Liu, director of market management at Cisco. "There is now a common solution to the [network] edge."

None of Cisco's customers has yet used 100Gbps over a 3,000km span, never mind 4,800km. But the reach achieved is an indicator of the optical transmission performance. "The [distance] performance is really a proxy for usefulness," says Liu. "If you take that 3,000km over low-loss fibre, what that buys you is essentially a greater degree of tolerance for existing fibre in the ground."

Much industry attention is being given to the next-generation transmission speeds of 400Gbps and one Terabit. These require support for super-channels - multi-carrier signals that transmit 400Gbps or one Terabit - as well as flexible spectrum to pack the multi-carrier signals efficiently across the fibre. But Cisco argues that faster transmission is only one of the engineering milestones to be achieved, especially when 100Gbps deployment is still in its infancy.

To benefit 100Gbps deployments, Cisco has officially announced its own CPAK 100Gbps client-side optical transceiver after discussing the technology over the last year. "CPAK helps accelerate the feasibility and cost points of deploying 100Gbps," says Liu.

CPAK

The CPAK is Cisco's first optical transceiver to use silicon photonics technology, following its acquisition of Lightwire. The CPAK is a compact optical transceiver designed to replace the larger and more power-hungry 100Gbps CFP interfaces.

The CPAK is being launched at the same time as many companies are announcing CFP2 multi-source agreement (MSA) optical transceiver products. Cisco stresses that the CPAK conforms to the IEEE 100GBASE-LR4 and -SR10 100Gbps standards. Indeed at OFC/NFOEC it is demonstrating the CPAK interfacing with a CFP2.

The CPAK will be used across several Cisco platforms but the first implementation is for the ONS 15454.

The CPAK transceiver will be generally available in the summer of 2013.


The uphill battle to keep pace with bandwidth demand

Relative traffic increase, normalised to 2010. Source: IEEE

Optical component and system vendors will be increasingly challenged to meet the expected growth in bandwidth demand.

According to a recent comprehensive study by the IEEE (The IEEE 802.3 Industry Connections Ethernet Bandwidth Assessment report), bandwidth requirements are set to grow 10x by 2015 compared to demand in 2010, and a further 10x between 2015 and 2020. Meanwhile, the technical challenges are growing for the vendors developing optical transmission equipment and short-reach high-speed optical interfaces. 

Fibre bandwidth is becoming a scarce commodity and various techniques will be required to scale capacity in metro and long-haul networks. The IEEE is not expected to complete the next higher-speed Ethernet standard, the successor to 100 Gigabit Ethernet (GbE), until 2017. For now the IEEE is talking only about capacities, not interface speeds. Yet, at this early stage, 400 Gigabit Ethernet looks the most likely interface.

 

"The various end-user markets need technology that scales with their bandwidth demands and does so economically. The fact that vendors must work harder to keep scaling bandwidth is not what they want to hear"

 

A 400GbE interface will comprise multiple parallel lanes, requiring the use of optical integration. A 400GbE interface may also embrace modulation techniques, further adding to the size, complexity and cost of such an interface. And to achieve a Terabit, three such interfaces will be needed.

All these factors are conspiring against what the various end-user bandwidth sectors require: line-side and client-side interfaces that scale economically with bandwidth demand. Instead, optical components, optical module and systems suppliers will have to invest heavily to develop more complex solutions in the hope of matching the relentless bandwidth demand.

The IEEE 802.3 Bandwidth Assessment Ad Hoc group, which produced the report that highlights the hundredfold growth in bandwidth demand between 2010 and 2020, studied several sectors besides core networking and data centre equipment such as servers. These include Internet exchanges, high-performance computing, cable operators (MSOs) and the scientific community. 

The different growth rates in bandwidth demand it found for the various sectors are shown in the chart above.

 

Optical transport

A key challenge for optical transport is that fibre spectrum is becoming a precious commodity. Scaling capacity will require much more efficient use of spectrum.

To this end, vendors are embracing advanced modulation schemes, signal processing and complex ASIC designs. The use of such technologies also raises new challenges, such as moving away from a rigid spectrum grid, requiring the introduction of flexible-grid switching elements within the network.

And it does not stop there. 

Already considerable development work is underway to use multi-carriers - super-channels - whose carrier count can be adapted on-the-fly depending on demand, and which can be crammed together to save spectrum. This requires advanced waveform shaping based on either coherent orthogonal frequency division multiplexing (OFDM) or Nyquist WDM, adding further complexity to the ASIC design.

At present, a single light path can be increased from 100 Gigabit-per-second (Gbps) to 200Gbps using the 16-QAM modulation scheme. Two such light paths give a 400Gbps data rate. But 400Gbps requires more spectrum than the standard 50GHz band used for 100Gbps transmission. And using 16-QAM reduces the overall optical transmission reach achieved.

The shorter resulting reach using 16-QAM or 64-QAM may be sufficient for metro networks (~1,000km), but achieving long-haul and ultra-long-haul spans will require super-channels based on multiple dual-polarisation, quadrature phase-shift keying (DP-QPSK) modulated carriers, each occupying 50GHz. Building up a 400Gbps or 1 Terabit signal this way uses 4 or 10 such carriers, respectively - a lot of spectrum. Some 8Tbps to 8.8Tbps of long-haul capacity results using this approach.

The main 100Gbps system vendors have demonstrated 400Gbps using 16-QAM and two carriers. This doubles system capacity to 16-17.6Tbps. A further 30% saving in bandwidth using spectral shaping at the transmitter crams the carriers closer together, raising the capacity to some 23Tbps. The eventual adoption of coherent OFDM or Nyquist WDM will further boost overall fibre capacity across the C-band. But the overall tradeoff of capacity versus reach still remains. 
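The capacity progression described in the last two paragraphs can be reproduced with a short calculation. This is an approximation for illustration only, not vendor data: the 80 to 88 channel counts and the 30 percent spectral-shaping saving are taken from the text above, and guard bands are ignored.

```python
# Rough reproduction of the C-band capacity figures quoted above.
channels_low, channels_high = 80, 88   # 50GHz-spaced carriers across the C-band
rate_qpsk, rate_16qam = 100, 200       # Gbps per carrier: DP-QPSK vs 16-QAM

qpsk_tbps = (channels_low * rate_qpsk / 1000, channels_high * rate_qpsk / 1000)
qam_tbps = (channels_low * rate_16qam / 1000, channels_high * rate_16qam / 1000)

# A 30% spectral saving from transmitter-side shaping packs roughly 1/0.7 times
# more carriers into the same band.
shaped_tbps = qam_tbps[0] / 0.7

print(f"DP-QPSK super-channels:        {qpsk_tbps[0]}-{qpsk_tbps[1]} Tbps")
print(f"16-QAM super-channels:         {qam_tbps[0]}-{qam_tbps[1]} Tbps")
print(f"16-QAM plus spectral shaping: ~{shaped_tbps:.0f} Tbps")
```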

Optical transport thus has a set of techniques to improve the amount of traffic it can carry. But it is not at a pace that matches the relentless exponential growth in bandwidth demand.

After spectral shaping, even more complex solutions will be needed. These include extending transmission beyond the C-band, and developing exotic fibres. But these are developments for the next decade or two and will require considerable investment. 

The various end-user markets need technology that scales with their bandwidth demands and does so economically. The fact that vendors must work harder to keep scaling bandwidth is not what they want to hear.

 

"No-one is talking about a potential bandwidth crunch but if it is to be avoided, greater investment in the key technologies will be needed. This will raise its own industry challenges. But nothing like those to be expected if the gap between bandwidth demand and available solutions grows"

 

Higher-speed Ethernet 

The IEEE's Bandwidth Assessment study lays the groundwork for the development of the next higher-speed Ethernet standard.

Since the standards work has not yet started, the IEEE stresses that it is premature to discuss interface speeds. But based on the state of the industry, 400GbE already looks the most likely solution as the next speed hike after 100GbE. If 400GbE is adopted, several approaches could be pursued (a rough lane-count check follows the list):

  • 16 lanes at 25Gbps: 100GbE is moving to a 4x25Gbps electrical interface and 400GbE could exploit such technology for a 16-lane solution, made up of four, 4x25Gbps interfaces. "If I was a betting man, I'd probably put better odds on that [25Gbps lanes] because it is in the realm of what everyone is developing," John D'Ambrosia, chair of the IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group and chair of the IEEE 802.3 Bandwidth Assessment Ad Hoc group, told Gazettabyte. 
  • 10 lanes at 40Gbps: The Optical Internetworking Forum (OIF) has started work on an electrical interface operating between 39 and 56Gbps (Common Electrical Interface - 56G-Close Proximity Reach). This could lead to 40Gbps lanes and a 10x40Gbps implementation for a 400Gbps Ethernet design. 
  • Modulation: For the 100Gbps backplane initiative, the IEEE is working on pulse-amplitude modulation (PAM), says D'Ambrosia. Such modulation could be used for 400GbE. Modulation is also being considered by the IEEE to create a single-lane 100Gbps interface. Such a solution could lead to a 4-lane 400GbE solution. But adopting modulation comes at a cost: more sophisticated electronics, greater size and power consumption. 
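The lane arithmetic behind these options is simple to tally, as the sketch below does for each candidate configuration. It is illustrative only: these are the possibilities discussed above, not a standardised 400GbE electrical interface.

```python
# Lane-count check for the candidate 400GbE configurations listed above.
options = {
    "16 lanes x 25Gbps (four 4x25Gbps interfaces)": 16 * 25,
    "10 lanes x 40Gbps (OIF 39-56Gbps electrical work)": 10 * 40,
    "4 lanes x 100Gbps (PAM-based single-lane 100G)": 4 * 100,
}

for name, rate_gbps in options.items():
    print(f"{name}: {rate_gbps} Gbps")

# Three 400GbE modules would be needed to build a (1.2) Terabit interface.
print("3 x 400GbE =", 3 * 400, "Gbps")
```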

 

As with any emerging standard, first designs will be large, power-hungry and expensive. The industry will have to work hard to produce more integrated 16-lane or 10-lane designs. Size and cost will also be important given that three 400GbE modules will be needed to implement a Terabit interface.

The challenge for component and module vendors is to develop such multi-lane designs yet do so economically. This will require design ingenuity and optical integration expertise.

 

Timescales

Super-channels exist now - Infinera is shipping its 5x100Gbps photonic integrated circuit. Ciena and Alcatel-Lucent are introducing their latest generation DSP-ASICs that promise 400Gbps signals and spectral shaping while other vendors have demonstrated such capabilities in the lab.

The next Ethernet standard is set for completion in 2017. If it is indeed based on a 400GbE Ethernet interface, it will likely use 4x25Gbps components for the first design, benefiting from emerging 100GbE CFP2 and CFP4 modules and their more integrated designs.  But given the standard will only be completed in five years' time, new developments should also be expected.

No-one is talking about a potential bandwidth crunch but if it is to be avoided, greater investment in the key technologies will be needed. This will raise its own industry challenges. But nothing like those to be expected if the gap between bandwidth demand and available solutions grows.


The CFP4 optical module to enable Terabit blades

The next-generation CFP modules - the CFP2 and CFP4 - promise to double and double again the number of 100 Gigabit-per-second (Gbps) optical module interfaces on a blade.

Using the CFP4, up to 16, 100Gbps modules will fit on a blade, a total line rate of 1.6 Terabits-per-second (Tbps). With a goal of a 60W total module power budget per blade, that equates to 27Gbps/W. In comparison, the power-efficient SFP+ achieves 10Gbps/W.
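The power-efficiency comparison works out as follows. A minimal sketch: the 60W per-blade module budget and the 16-module count are from the text, while the roughly 1W per SFP+ module used for the 10Gbps/W reference is an assumption.

```python
# Blade-level power-efficiency arithmetic for the figures quoted above.
modules_per_blade = 16     # CFP4 modules per blade
rate_per_module = 100      # Gbps each
blade_power_budget = 60    # W, total module power budget per blade

blade_rate_gbps = modules_per_blade * rate_per_module    # 1,600 Gbps = 1.6 Tbps
cfp4_efficiency = blade_rate_gbps / blade_power_budget   # ~27 Gbps/W

sfp_plus_efficiency = 10 / 1.0   # 10Gbps module at ~1W (assumption) -> 10 Gbps/W

print(f"CFP4 blade: {blade_rate_gbps / 1000:.1f} Tbps at {cfp4_efficiency:.0f} Gbps/W")
print(f"SFP+ reference: {sfp_plus_efficiency:.0f} Gbps/W")
```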
 

Source: Gazettabyte, Xilinx

The CFP2 is about half the size of the CFP while the CFP4 is half the size of the CFP2. The CFP4 is slightly wider and longer than the QSFP.

The two CFP modules will use a 4x25Gbps electrical interface, doing away with the need for a 10x10Gbps to 4x25Gbps gearbox IC used for current CFP 100GBASE-LR4 and -ER4 interfaces. The CFP2 and CFP4 are also defined for 40 Gigabit Ethernet use.

The CFP's maximum power rating is 32W, the CFP2 12W and the CFP4 5W. But vendors that put eight CFP2 or 16 CFP4s on a blade still want to meet the 60W total power budget.

 

Getting close: Four CFP modules deliver slightly less bandwidth than 48 SFP+ modules: 4x100Gbps versus 480Gbps. The four also consume more power - 60W versus 48W. Moving to the CFP2 module will double the blade's bandwidth without consuming more power, while the CFP4 will do the same again. A blade with 16 CFP4 modules promises 1.6Tbps while requiring 60W. Source: Xilinx

The first CFP2 modules are expected this year - there could be vendor announcements as early as the upcoming OFC/NFOEC 2012 show to be held in LA in the first week in March. The first CFP4 products are expected in 2013.

 

Further reading

The CFP MSA presentation: CFP MSA 100G roadmap and applications

 


Terabit Consortium embraces OFDM

A project to develop optical networks using terabit light paths has been announced by a consortium of Israeli companies and universities. The Tera Santa Consortium will spend 3-5 years developing orthogonal frequency division multiplexing (OFDM)-based terabit optical networking equipment.

 

“This project is very challenging and very important”

Shai Stein, Tera Santa Consortium

Given the continual growth in IP traffic, higher-speed light paths are going to be needed, says Shai Stein, chairman of the Tera Santa Consortium and ECI Telecom’s CTO: “If 100 Gigabit is starting to be deployed, within five years we’ll start to see links with tenfold that capacity, meaning one Terabit.”

The project is funded by the seven participating firms and the Israeli Government. According to Stein, the Government has invested little in optical projects in recent years. “When we look at the [Israeli] academies and industry capabilities in optical, there is no justification for this,” says Stein. “We went with this initiative in order to get Government funding for something very challenging that will position us in a totally different place worldwide.”

 

Orthogonal frequency division multiplexing

OFDM differs from traditional dense wavelength division multiplexing (DWDM) technology in how fibre bandwidth is used. Rather than sending all the information on a lightpath within a single 50 or 100GHz channel – dubbed single-carrier transmission – OFDM uses multiple narrow carriers.  “Instead of using the whole bandwidth in one bulk and transmitting the information over it, [with OFDM] you divide the spectrum into pieces and on each you transmit a portion of the data,” says Stein. “Each sub-carrier is very narrow and the summation of all of them is the transmission.”
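Stein’s description maps onto how an OFDM symbol is built digitally: data is mapped onto many narrow sub-carriers, which an inverse FFT then sums into one transmitted waveform. The sketch below is generic, textbook OFDM for illustration only, not the consortium’s design; the 128 sub-carriers and QPSK mapping are arbitrary example choices.

```python
import numpy as np

num_subcarriers = 128
rng = np.random.default_rng(0)

# Two random bits per sub-carrier -> one QPSK symbol per sub-carrier.
bits = rng.integers(0, 2, size=(num_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# The inverse FFT sums every sub-carrier into a single time-domain signal;
# the tightly packed sub-carriers remain orthogonal to one another.
time_domain = np.fft.ifft(qpsk)

# Receiver side (ideal channel): an FFT separates the sub-carriers again.
recovered = np.fft.fft(time_domain)
assert np.allclose(recovered, qpsk)
```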

“Each time there is a new arena in telecom we find that there is a battle between single carrier modulation and OFDM; VDSL began as single carrier and later moved to OFDM,” says Amitai Melamed, involved in the project and a member of ECI’s CTO office. “In the optical domain, before running to [use] single-carrier modulation as is currently done at 100 Gigabit, it is better to look at the OFDM domain in detail rather than jump at single-carrier modulation and question whether this was the right choice in future.”

OFDM delivers several benefits, says Stein, especially in the flexibility it brings in managing spectrum. OFDM allows a fibre’s spectrum band to be used right up to its edge. Indeed Melamed is confident that by adopting OFDM for optical, the spectrum efficiency achieved will eventually match that of wireless.

 

“OFDM is very tolerant to rate adaptation.”

Amitai Melamed, ECI Telecom

 

The technology also lends itself to parallel processing. “Each of the sub-carriers is orthogonal and in a way independent,” says Stein. “You can use multiple small machines to process the whole traffic instead of a single engine that processes it all.” With OFDM, the impact of chromatic dispersion is also reduced because each sub-carrier is narrow in the frequency domain.

Using OFDM, the modulation scheme used per sub-carrier can vary depending on channel conditions. This delivers a flexibility absent from existing single-carrier modulation schemes such as quadrature phase-shift keying (QPSK) that is used across all the channel bandwidth at 100 Gigabit-per-second (Gbps). “With OFDM, some of the bins [sub-carriers] could be QPSK but others could be 16-QAM or even more,” says Melamed.  

The approach enables the concept of an adaptive transponder. “I don’t always need to handle fibre as a time-division multiplexed link – either you have all the capacity or nothing,” says Melamed. “We are trying to push this resource to be more tolerant to the media: we can sense the channels and adapt the receiver to the real capacity.” Such an approach better suits the characteristics of packet traffic in general, he says: “OFDM is very tolerant to rate adaptation.”
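A small sketch illustrates the per-sub-carrier adaptation Melamed describes: each sub-carrier ("bin") is given the densest constellation its measured signal-to-noise ratio can support. The SNR thresholds and example values below are illustrative assumptions, not project figures.

```python
def bits_per_subcarrier(snr_db: float) -> int:
    """Pick a modulation order for one sub-carrier from its SNR estimate."""
    if snr_db >= 17:
        return 4   # 16-QAM
    if snr_db >= 10:
        return 2   # QPSK
    return 0       # sub-carrier left unused

snr_estimates_db = [8.0, 12.5, 19.0, 21.3, 11.1]   # example per-bin SNRs (dB)
bits = [bits_per_subcarrier(s) for s in snr_estimates_db]
print("bits per sub-carrier:", bits, "-> bits per OFDM symbol:", sum(bits))
```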

The Consortium’s goal is to deliver a 1 Terabit light path in a 175GHz channel. At present 160, 40Gbps channels can be crammed within a fibre’s C-band, equating to 6.4Tbps using 25GHz channels. At 100Gbps, 80 channels - or 8Tbps - are possible using 50GHz channels. A 175GHz channel spacing at 1Tbps would result in 23Tbps of overall capacity. However, this figure is likely to be reduced in practice since frequency guard-bands between channels are needed. The spectrum spacings at speeds greater than 100Gbps are still being worked out as part of ITU work on "gridless" channels (see OFC announcements and market trends story).
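The channel counts behind these capacity figures follow from simple division of the usable band. In the sketch below, a usable C-band of roughly 4,000GHz is assumed because it reproduces the 160- and 80-channel figures above; guard bands are ignored, which is why the 1Tbps figure is an upper bound.

```python
# Channel-count arithmetic behind the capacity figures quoted above.
C_BAND_GHZ = 4000   # assumed usable C-band width (approximation)

grids = {
    "40Gbps on a 25GHz grid":   (25, 40),
    "100Gbps on a 50GHz grid":  (50, 100),
    "1Tbps in 175GHz channels": (175, 1000),
}

for name, (spacing_ghz, rate_gbps) in grids.items():
    channels = C_BAND_GHZ / spacing_ghz
    capacity_tbps = channels * rate_gbps / 1000
    print(f"{name}: ~{channels:.0f} channels, ~{capacity_tbps:.1f} Tbps")
```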

ECI stresses that fibre capacity is only one aspect of performance, however, and that at 1Tbps the optical reach achieved is reduced compared to transmissions at 100Gbps. “It is not just about having more Gigabit-per-second-per-Hertz but how we utilize the resource,” says Melamed. “A system with an adaptive rate optimises the resource in terms of how capacity is managed.” For example if there is no need for a 1Tbps link at a certain time of the day, the system can revert to a lower speed and use the spectrum freed up for other services.  Such a concept will enable the DWDM system to be adaptive in capacity, time and reach.

 

Project focus

The project work is split between digital and analogue (optical) development. The digital part concerns OFDM and how the signals are processed in a modular way.

The analogue work involves overcoming several challenges, says Stein. One is designing and building the optical functions needed for modulation and demodulation with the accuracy required for OFDM. Another is achieving a compact design that fits within an optical transceiver. Dividing the 1Tbps signal into several sub-bands will require the optical components to be implemented as a photonic integrated circuit (PIC). The PIC will integrate arrays of components for sub-band processing and will be needed to achieve the required cost, space and power consumption targets.

Taking part in the project are seven Israeli companies - ECI Telecom, the Israeli subsidiary of Finisar, MultiPhy, Civcom, Orckit-Corrigent, Elisra-Elbit and Optiway - as well as five Israeli universities.

Two of the companies in the Consortium

“There are three types of companies,” says Stein. “Companies at the component level – digital components like digital signal processors and analogue optical components; sub-systems such as transceivers; and system companies that have platforms and a network view of the whole concept.”

The project goal is to provide the technology enablers to build a terabit-enabled optical network. A simple prototype will be built to check the concepts and the algorithms before proceeding to the full 1 Terabit proof-of-concept, says Stein. The five Israeli universities will provide a dozen research groups covering issues such as PIC design and digital signal processing algorithms.

Any intellectual property resulting from the project is owned by the company that generates it although it will be made available to any other interested Consortium partner for licensing.

Project definition, architecture and simulation work have already started. The project will take between three and five years but it has a deadline after three years, when the Consortium will need to demonstrate the project's achievements. “If the achievements justify continuation I believe we will get it [a funding extension],” says Stein. “But we have a lot to do to get to this milestone after three years.”

Project funding for the three years is around US $25M, with the Israeli Office of the Chief Scientist (OCS) providing 50 million NIS (US $14.5M) via the Magnet programme, which ECI says is “over half” of the overall funding.

 

Further reading:

Ofidium to enter 100Gbps module market using OFDM

Webinar: MultiPhy on the 100G direct detect market 


Infinera details Terabit PICs, 5x100G devices set for 2012

What has been announced?

Infinera has given first detail of its terabit coherent detection photonic integrated circuits (PICs). The pair - a transmitter and a receiver PIC – implement a ten-channel 100 Gigabit-per-second (Gbps) link using polarisation multiplexing quadrature phase-shift keying (PM-QPSK). The Infinera development work was detailed at OFC/NFOEC held in Los Angeles between March 6-10.

Infinera has recently demonstrated its 5x100Gbps PIC carrying traffic between Amsterdam and London within Interoute Communications’ pan-European network. The 5x100Gbps PIC-based system will be available commercially in 2012.

 

“We think we can drive the system from where it is today – 8 Terabits-per-fibre - to around 25 Terabits-per-fibre”

Dave Welch, Infinera 

 

Why is this significant?

The widespread adoption of 100Gbps optical transport technology will be driven by how quickly its cost can be reduced to compete with existing 40Gbps and 10Gbps technologies.

Whereas the industry is developing 100Gbps line cards and optical modules, Infinera has demonstrated a 5x100Gbps coherent PIC based on 50GHz channel spacing while its terabit PICs are in the lab. 

If Infinera meets its manufacturing plans, it will have a compelling 100Gbps offering as it takes on established 100Gbps players such as Ciena. Infinera was late to the 40Gbps market, competing with its 10x10Gbps PIC technology instead.

 

40 and 100 Gigabit 

Infinera views 40Gbps and 100Gbps optical transport in terms of the dynamics of the high-capacity fibre market: in particular, what is the right technology to get the most capacity out of a fibre, and what is the best dollar-per-gigabit technology at a given moment.

For the long-haul market, Dave Welch, chief strategy officer at Infinera, says 100Gbps provides 8 Terabits (Tb) of capacity using 80 channels versus 3.2Tb using 40Gbps (80x40Gbps). The 40Gbps total capacity can be doubled  to 6.4Tb (160x40Gbps) if 25GHz-spaced channels are used, which is Infinera’s approach.

“The economics of 100 Gigabit appear to be able to drive the dollar-per-gigabit down faster than 40 Gigabit technology,” says Welch. If operators need additional capacity now, they will adopt 40Gbps, he says, but if they have spare capacity and can wait till 2012 they can use 100Gbps. “The belief is that they [operators] will get more capacity out of their fibre and at least the same if not better economics per gigabit [using 100Gbps],” says Welch. Indeed Welch argues that by 2012, 100Gbps economics will be superior to 40Gbps coherent leading to its “rapid adoption”.

For metro applications, achieving terabits of capacity in fibre is less of a concern. What matters is matching speeds with services while achieving the lowest dollar-per-gigabit. And it is here – for sub-1000km networks – where 40Gbps technology is being mostly deployed. “Not for the benefit of maximum fibre capacity but to protect against service interfaces,” says Welch, who adds that 40 Gigabit Ethernet (GbE) rather than 100GbE is the preferred interface within data centres.

 

Shorter-reach 100Gbps

Companies such as ADVA Optical Networking and chip company MultiPhy highlight the merits of an additional 100Gbps technology to complement coherent, based on direct-detection modulation, for metro applications (for a MultiPhy webinar on 100Gbps direct detection, click here). Direct detection is suited to distances from 80km up to 1,000km, to connect data centres for example.

Is this market of interest to Infinera?  “This is a great opportunity for us,” says Welch.

The company’s existing 10x10Gbps PIC can address this segment in that it is at least 4x cheaper than emerging 100Gbps coherent solutions over the next 18 months, says Welch, who claims that the company’s 10x10Gbps PIC is making ‘great headway’ in the metro.

“If the market is not trying to get the maximum capacity but best dollar-per-gigabit, it is not clear that full coherent, at least in discrete form, is the right answer,” says Welch. But the cost reduction delivered by coherent PIC technology does make it more competitive for cost-sensitive markets like metro.

A 100Gbps coherent discrete design is relatively costly since it requires two lasers (one serving as the local oscillator (LO) at the receiver – see Figure 1), sophisticated optics and a power-hungry digital signal processor (DSP). “Once you go to photonic integration the extra lasers and extra optics, while a significant engineering task, are not inhibitors in terms of the optics’ cost.”

Coherent PICs can be used ‘deeper in the network’ (closer to the edge), shifting the trade-offs between coherent and on-off keying. However, even if the advent of a PIC makes coherent more economical, the DSP’s power dissipation remains a factor in the trade-off between on-off keying and coherent at 100Gbps line rates.

Welch does not dismiss the idea of Infinera developing a metro-centric PIC to reduce costs further. He points out that while such a solution may be of particular interest to internet content companies, their networks are relatively simple point-to-point ones. As such their needs differ greatly from cable operators and telcos, in terms of the services carried and traffic routing.

 

PIC challenges

Figure 1: Infinera's terabit PM-QPSK coherent receiver PIC architecture

There are several challenges when developing multi-channel 100Gbps PICs.  “The most difficult thing going to a coherent technology is you are now dealing with optical phase,” says Welch. This requires highly accurate control of the PIC’s optical path lengths.

The laser wavelength is 1.5 micron and, within the PIC's indium phosphide waveguides, this is reduced to about a third of that: 0.5 micron. Fine control of the optical path lengths is thus required to within tenths of a wavelength, or tens of nanometres (nm).
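The tolerance quoted follows directly from the wavelength inside the waveguide. A minimal sketch, assuming an effective index of about 3 for the indium phosphide waveguide (consistent with the roughly one-third reduction described above):

```python
vacuum_wavelength_nm = 1500    # ~1.5 micron laser wavelength
assumed_index = 3.0            # effective index of the InP waveguide (assumption)

guided_wavelength_nm = vacuum_wavelength_nm / assumed_index   # ~500 nm
tolerance_nm = guided_wavelength_nm / 10                      # tenths of a wavelength

print(f"Wavelength in the waveguide: ~{guided_wavelength_nm:.0f} nm")
print(f"Path-length control needed:  ~{tolerance_nm:.0f} nm (tens of nanometres)")
```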

Achieving a high manufacturing yield of such complex PICs is another challenge. The terabit receiver PIC detailed in the OFC paper integrates 150 optical components, while the 5x100Gbps transmit and receive PIC pair integrate the equivalent of 600 optical components.

Moving from a five-channel (500Gbps) to a ten-channel (terabit) PIC is also a challenge. There are unwanted interactions in terms of the optics and the electronics. “If I turn one laser on adjacent to another laser it has a distortion, while the light going through the waveguides has potential for polarisation scattering,” says Welch. “It is very hard.” 

But what the PICs show, he says, is that Infinera’s manufacturing process is like a silicon fab’s. “We know what is predictable and the [engineering] guys can design to that,” says Welch. “Once you have got that design capability, you can envision we are going to do 500Gbps, a terabit, two terabits, four terabits – you can keep on marching as far as the gigabits-per-unit [device] can be accomplished by this technology.”

The OFC post-deadline paper details Infinera's 10-channel transmitter PIC which operates at 10x112Gbps or 1.12Tbps.

 

Power dissipation

It is not the optical PIC that dictates the overall bandwidth achievable but rather the total power dissipation of the DSPs on a line card. This is determined by the CMOS process used to make the DSP ASICs, whether 65nm, 40nm or potentially 28nm.

Infinera has not said what CMOS process it is using. What Infinera has chosen is a compromise between “being aggressive in the industry and what is achievable”, says Welch. Yet Infinera also claims that its coherent solution consumes less power than existing 100Gbps coherent designs, partly because the company has implemented the DSP in a more advanced CMOS node than what is currently being deployed. This suggests that Infinera is using a 40nm process for its coherent receiver ASICs. And power consumption is a key reason why Infinera is entering the market with a 5x100Gbps PIC line card. For the terabit PIC, Infinera will need to move its ASICs to the next-generation process node, he says.

Having an integrated design saves power in terms of the speeds at which Infinera runs its serdes (serialiser/deserialiser) circuitry and the interfaces between blocks. “For someone else to accumulate 500Gbps of bandwidth and get it to a switch, this needs to go over feet of copper cable, and over a backplane when one 100Gbps line card talks to a second one,” says Welch. “That takes power - we don’t; it is all right there within inches of each other.”

Infinera can also trade analogue-to-digital (A/D) sampling speed of its ASIC with wavelength count depending on the capacity required. “Now you have a PIC with a bank of lasers, and FlexCoherent allows me to turn a knob in software so I can go up in spectral efficiency,” he says, trading optical reach with capacity. FlexCoherent is Infinera’s technology that will allow operators to choose what coherent optical modulation format to use on particular routes. The modulation formats supported are polarisation multiplexed binary phase-shift keying (PM-BPSK) and PM-QPSK.

 

Dual polarisation 25Gbaud constellation diagrams

What next?

Infinera says it is an adherent of higher quadrature amplitude modulation (QAM) rates to increase the data rate per channel beyond 100Gbps. As a result, FlexCoherent will in future enable the selection of higher-speed modulation schemes such as 8-QAM and 16-QAM. “We think we can drive the system from where it is today – 8 Terabits-per-fibre – to around 25 Terabits-per-fibre.”

But Welch stresses that with 16-QAM and even higher-order schemes, speed must be traded against optical reach. Fibre is different to radio, he says. Whereas radio uses higher QAM rates, it compensates by increasing the launch power. In contrast, there is a limit with fibre. “The nonlinearity of the fibre inhibits higher and higher optical power,” says Welch. “The network will have to figure out how to accommodate that, although there is still significant value in getting to that [25Tbps per fibre],” he says.

The company has said that its 500 Gigabit PIC will move to volume manufacturing in 2012. Infinera is also validating the system platform that will use the PIC and has said that it has a five terabit switching capacity.

Infinera is also offering a 40Gbps coherent (non-PIC-based) design this year. “We are working with third-party support to make a module that will have unique performance for Infinera,” says Welch.

The next challenge is getting the terabit PIC onto the line card. Based on the gap between previous OFC papers and volume manufacturing, the 10x100Gbps PIC can be expected in volume by 2014 if all goes to plan.

 

