Q&A: Ciena’s CTO on networking and technology

In Part 2 of the Q&A, Steve Alexander, CTO of Ciena, shares his thoughts about the network and technology trends.

Part 2: Networking and technology

"The network must be a lot more dynamic and responsive"

Steve Alexander, Ciena CTO

 

Q. In the 1990s dense wavelength division multiplexing (DWDM) was the main optical development while in the '00s it was coherent transmission. What's next?

A couple of perspectives.

First, the platforms that we have in place today: III-V semiconductors for photonics and collections of quasi-discrete components around them - ASICs, FPGAs and pluggables - that is the technology we have.  We can debate, based on your standpoint, how much indium phosphide integration you have versus how much silicon integration.

Second, the way that networks built in the next three to five years will differentiate themselves will be based on the applications that the carriers, service providers and large enterprises can run on them.

This will be in addition to capacity - capacity is going to make a difference for the end user and you are going to have to have adequate capacity with low enough latency and the right bandwidth attributes to keep your customers. Otherwise they migrate [to other operators], we know that happens.

You are going to start to differentiate based on the applications that the service providers and enterprises can run on those networks. I see the value of networking changing from a hardware-based problem-set to one largely software-based.

I'll give you an analogy: You bought your iPhone, I'll claim, not so much because it is a cool hardware box - which it is - but because of the applications that you can run on it.  

The same thing will happen with infrastructure. You will see the convergence of the photonics piece and the Ethernet piece, and you will be able to run applications on top of that network that will do things such as move large amounts of data, encrypt large amounts of data, set up transfers for the cloud, assemble bandwidth together so you can have a good cloud experience for the time you need all that bandwidth and then that bandwidth will go back out, like a fluid, for other people to use.

That is the way the network is going to have to operate in future. The network must be a lot more dynamic and responsive.

 

How does Ciena view 40 and 100 Gig and in particular the role of coherent and alternative transmission schemes (direct detection, DQPSK)? Nortel Metro Ethernet Networks (MEN) was a strong coherent adherent yet Ciena was developing 100Gbps non-coherent solutions before it acquired MEN.

If you put the clock back a couple of years, where were the classic Ciena bets and what were the classic MEN bets?

We were looking at metro, edge of network, Ethernet, scalable switches, lots of software integration and lots of software intelligence in the way the network operates. We did not bet heavily on the long distance, submarine space and ultra long-haul. We were not very active in 40 Gig, we were going straight from 10 to 100 Gig.

Now look at the bets the MEN folks placed: very strong on coherent and applying it to 40 and 100 Gig, strong programme at 100 Gig, and they were focussed on the long-haul. Well, to do long-haul when you are running into things like polarisation mode dispersion (PMD), you've got to have coherent. That is how you get all those problems out of the network. 

Our [Ciena's] first 100 Gig was not focussed on long-haul; it was focussed on how you get across a river to connect data centres.

When you look at putting things together, we ended up stopping our developments that were targeted at competing with MEN's long-haul solutions. They, in many cases, stopped developments coming after our switching, carrier Ethernet and software integration solutions. The integration worked very well because the intent of both companies was the same.

Today, do we have a position?  Coherent is the right answer for anything that has to do with physical propagation because it simplifies networks. There are a whole bunch of reasons why coherent is such a game changer.

The reason why the first 40 Gig implementations didn't go so well was cost. When we went from 10 to 40 Gig, the only tool was cranking up the clock rate.

At that time, once you got to 20GHz you were into the world of microwave. You leave printed circuit boards and normal manufacturing and move into a world more like radar. There are machined boxes, micro-coax and a very expensive manufacturing process.  That frustrated the desires of the 40 Gig guys to be able to say: Hey, we've got a better cost point than the 10 Gig guys.

Well, with coherent the fact that I can unlock the bit rate from the baud rate - the data rate from the symbol rate - that is fantastic. I can stay at 10GHz clocks and send four bits per symbol - that is 40Gbps.

My basic clock rate, which determines manufacturing complexity, fabrication complexity and the basic technology, stays with CMOS, which everyone knows is a great place to play. Apply that same magic to 100 Gig.  I can send 100Gbps but stay at a 25GHz clock - that is tremendous, that is a huge economic win.
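As an editorial aside, the sketch below illustrates the bit-rate versus symbol-rate arithmetic Alexander describes. It is a minimal illustration, not a Ciena figure, and it ignores forward error correction and framing overheads.

```python
# Minimal sketch of the bit-rate vs symbol-rate trade-off described above.
# Figures are illustrative only; FEC and framing overheads are ignored.

def symbol_rate_gbaud(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    """Symbol (baud) rate needed to carry a given bit rate."""
    return bit_rate_gbps / bits_per_symbol

# Dual-polarisation QPSK carries four bits per symbol
# (2 bits per symbol per polarisation x 2 polarisations).
for bit_rate in (40, 100):
    baud = symbol_rate_gbaud(bit_rate, bits_per_symbol=4)
    print(f"{bit_rate} Gbps at 4 bits/symbol needs roughly {baud:.0f} Gbaud")

# Output:
# 40 Gbps at 4 bits/symbol needs roughly 10 Gbaud
# 100 Gbps at 4 bits/symbol needs roughly 25 Gbaud
```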

Coherent lets you continue to use the commercial merchant silicon technology base, which is where you want to be. You leverage the year-on-year cost reduction - a world unto itself that is driving the economics - and we can leverage that.

So you get economics with coherent. You get improvement in performance because you simplify the line system - you can pop out the dispersion compensation, and you solve PMD with maths. You also get tunability. I'm using a laser - a local oscillator at the receiver - to measure the incoming laser. I have a tunable receiver that has a great economic cost point and makes the line system simpler.

Coherent is this triple win. It is just a fantastic change in technology.

 

What is Ciena’s thinking regarding bringing in-house sub-systems/ components (vertical integration), or the idea of partnerships to guarantee supply? One example is Infinera that makes photonic integrated circuits around which it builds systems. Another is Huawei that makes its own PON silicon.

The two examples are good ones.

With Huawei you have to treat them somewhat separately as they have some national intent to build a technology base in China. So they are going to make decisions about where they source components from that are outside the normal economic model. 

Anybody in the systems business that has a supply chain periodically goes through the classic make-versus-buy analysis. If I'm buying a module, should I buy the piece-parts and make it? You go through that portion of it. Then you look within the sub-system modules and the piece-parts I'm buying and say: What if I made this myself? It is frequently very hard to say that, if I had this component fully vertically integrated, I'd be better off.

A good question to ask about this is: Could the PC industry have been better if Microsoft owned Intel? Not at all.

You have to step back and say: Where does value get delivered with all these things? A lot of the semiconductor and component pieces were pushed out [by system vendors] because there was no way to get volume, scale and leverage. Unless you corner the market, that is frequently still true. But that doesn't mean you don't go through the make-versus-buy analysis periodically.

Call that the tactical bucket.  

The strategic one is much different. It says: There is something out there that is unique and so differentiated, it would change my way of thinking about a system, or an approach or I can solve a problem differently.

 

"Coherent is this triple win. It is just a fantastic change in technology" 

 

 

 

 

 

 

 

If it is truly strategic and can make a real difference in the marketplace - not a 10% or 20% difference but a 10x improvement - then I think any company is obligated to take a really close look at whether it would be better being brought inside or entering into a good strategic partnership arrangement.

Certainly Ciena evaluates its relationships along these lines.

 

Can you cite a Ciena example?

Early on, when Ciena started, there was a technology at the time that was differentiated: Fibre Bragg Gratings. We made them ourselves. Today you would buy them.

You look at it at points in time. Does it give me differentiation? Or source-of-supply control? Am I at risk? Is the supplier capable of meeting my needs? There are all those pieces to it.

 

Optical Transport Network (OTN) integrated versus standalone products. Ciena has a standalone model but plans to evolve to an integrated solution. Others have an integrated product, while others still launched a standalone box and have since integrated. Analysts say such strategies confuse the marketplace. Why does Ciena believe its strategy is right?

Some of this gets caught up in semantics.

Why I say that is because we today have boxes that you would say are switches but you can put pluggable coloured optics in. Whether you would call that integrated probably depends more on what the competition calls it.

The place where there is most divergence of opinion is in the network core.

Normally people look at it and say: one big box that does everything would be great - that is the classic God-Box problem. When we look at it - and we have been looking at it on and off for 15 years now - if you try to combine every possible technology, there are always compromises.

The simplest one we can point to now: If you put the highest performance optics into a switch, you sacrifice switch density.

You can build switches today that, because of the density of the switching ASICs, are I/O-port constrained: you can't get enough connectors on the face plate to talk to the switch fabric. That will change with time; there is always ebb and flow. In the past that would not have been true.

If I make those I/O ports datacom pluggables, that is about as dense as I'm going to get. If I make them long-distance coherent optics, I'm not going to get as many because coherent optics take up more space. In some cases, you can end up cutting your port density on the switch fabric in half. That may not be the right answer for your network depending on how you are using that switch.

We have both technologies in-house, and in certain applications we will do that. Years ago we put coloured optics on CoreDirector to talk to CoreStream; that was specific to certain applications. The reason is that in most networks, people try to optimise switch density and transport capacity, and these are different levers. If you bolt those levers together you don't often end up at the optimal point.

 

Any business books you have read that have been particularly useful for your job?

The Innovator's Dilemma (by Clayton Christensen). What is good about it is that it has a couple of constructs that you can use with people so they will understand the problem. I've used some of those concepts and ideas to explain where various industries are, where product lines are, and what is needed to describe things as innovation.

The second one is called: Fad Surfing in the Boardroom (by Eileen Shapiro). It is a history of the various approaches that have been used for managing companies. That is an interesting read as well.

 

Click here for Part 1 of the Q&A 

 


How ClariPhy aims to win over the system vendors

ClariPhy Communications will start volume production of its 40 Gig coherent IC in September and is working on a 28nm CMOS 100 Gig coherent ASIC. It also offers an ASIC design service, allowing customers to use their own IP as well as ClariPhy's silicon portfolio.


“We can build 200 million logic gate designs” 

Reza Norouzian, ClariPhy

ClariPhy is in the camp that believes that the 100 Gigabit-per-second (Gbps) market is developing faster than people first thought. “What that means is that instead of it [100Gbps] being deployed in large volumes in 2015, it might be 2014,” says Reza Norouzian, vice president of worldwide sales and business development at ClariPhy.  

Yet the fabless chip company is also glad it offers a 40Gbps coherent IC as this market continues to ramp while 100Gbps matures and overcomes hurdles common to new technology: The 100Gbps industry has yet to develop a cost-effective solution or a stable component supply that will scale with demand.

Another challenge facing the industry is reducing the power consumption of 100Gbps systems, says Norouzian. The need to remove the heat from a 100Gbps design - the ASIC and other components - is limiting the equipment port density achievable. “If you require three slots to do 100 Gig - whereas before you could use these slots to do 20 or 30, 10 Gig lines - you are not achieving the density and economies of scale hoped for,” says Norouzian.

 

40G and 100G coherent ASICs

ClariPhy has chosen a 40nm CMOS process to implement its 40Gbps coherent chip, the CL4010. But it has since decided to adopt 28nm CMOS for its 100Gbps design – the CL10010 - to integrate features such as soft-decision forward error correction (see New Electronics' article on SD-FEC) and reduce the chip’s power dissipation.

The CL4010 integrates analogue-to-digital and digital-to-analogue converters, a digital signal processor (DSP) and a multiplexer/ demultiplexer on-chip. “Normally the mux is a separate chip and we have integrated that,” says Norouzian.

The first CL4010 samples were delivered to select customers three months ago and the company expects volume production to start by the end of September.  The CL4010 also interoperates with Cortina Systems’ optical transport network (OTN) processor family of devices, says the company.

The start-up claims there is strong demand for the CL4010. “When we ask them [operators]: ‘With all the hoopla about 100 Gig, why are you buying all this 40 Gig?’, the answer is that it is a pragmatic solution and one they can ship today,” says Norouzian.

ClariPhy expects 40Gbps volumes to continue to ramp for the next three or four years, partly because of the current high power consumption of 100Gbps. The company says several system vendors are using the CL4010 in addition to optical module customers.

The 28nm 100Gbps CL10010 is a 100 million gate ASIC. ClariPhy acknowledges it will not be first to market with a 100Gbps ASIC but that by using the latest CMOS process it will be well positioned once volume deployments start from 2014.

ClariPhy is already producing a quad-10Gbps chip implementing the maximum likelihood sequence estimation (MLSE) algorithm used for dispersion compensation in enterprise applications.  The device covers links up to 80km (10GBASE-ZR) but the main focus is for 10GBASE-LRM (220m+) applications. “Line cards that used to have four times 10Gbps lanes now are moving to 24 and will use six of these chips,” says Norouzian. The device sits on the card and interfaces with SFP+ or Quad-SFP optical modules.

 

“The CL10010 is the platform to demonstrate all that we can do but some customers [with IP] will get their own derivatives”

 

System vendor design wins

The 100Gbps transmission ASIC market may be in its infancy but the market is already highly competitive with clear supply lines to the system vendors.

Several leading system vendors have decided to develop their own ASICs.   Alcatel-Lucent, Ciena, Cisco Systems (with the acquisition of CoreOptics), Huawei and Infinera all have in-house 100Gbps ASIC designs.

System vendors have justified the high development cost of the ASIC to get a time-to-market advantage rather than wait for 100Gbps optical modules to become available. Norouzian also says such internally-developed 100Gbps line card designs deliver a higher 100Gbps port density when compared to a module-based card.

Alternatively, system vendors can wait for 100Gbps optical modules to become available from the likes of an Oclaro or an Opnext. Such modules may include merchant silicon from the likes of a ClariPhy or may be internally developed, as with Opnext.

System vendors may also buy 100Gbps merchant silicon directly for their own 100Gbps line card designs. Several merchant chip vendors are targeting the coherent marketplace in addition to ClariPhy. These include such players as MultiPhy and PMC-Sierra while other firms are known to be developing silicon.

Given such merchant IC competition and the fact that leading system vendors have in-house designs, is the 100Gbps opportunity already limited for ClariPhy?

Norouzian's response is that the company, unlike its competitors, has already supplied 40Gbps coherent chips, proving the company’s mixed signal and DSP expertise. The CL10010 chip is also the first publicly announced 28nm design, he says: “Our standard product will leapfrog first generation and maybe even second generation [100Gbps] system vendor designs.”

The equipment makers' management will have to decide whether to fund the development of their own second-generation ASICs or consider using ClariPhy’s 28nm design.

ClariPhy acknowledges that leading system vendors have their own core 100Gbps intellectual property (IP) and so offers vendors a design service to develop their own custom systems-on-chip.  For example a system vendor could use ClariPhy's design but replace the DSP core with the system vendor’s own hardware block and software.

 


Norouzian says system vendors making 100Gbps ASICs develop their own intellectual property (IP) blocks and algorithms and use companies like IBM or Fujitsu to make the design. ClariPhy offers a similar service while also being able to offer its own 100Gbps IP as required. “The CL10010 is the platform to demonstrate all that we can do,” says Norouzian. “But some customers [with IP] will get their own derivatives.”

The firm has already made such custom coherent devices using customers’ IP but will not say whether these were 40 or 100Gbps designs.

 

Market view

ClariPhy claims operator interest in 40Gbps coherent is not so much because of its superior reach but its flexibility when deployed in networks alongside existing 10Gbps wavelengths. “You don't have to worry about [dispersion] compensation along routes,” says Norouzian, adding that coherent technology simplifies deployments in the metro as well as regional links.

And while ClariPhy’s focus is on coherent systems, the company agrees with other 100Gbps chip specialists, such as MultiPhy, on the need for 100Gbps direct-detect solutions for distances beyond 40km. “It is very likely that we will do something like that if the market demand was there,” says Norouzian. But for now ClariPhy views mid-range 100Gbps applications as a niche opportunity.

 

Funding

ClariPhy raised US $14 million in June. The biggest investor in this latest round was Nokia Siemens Networks (NSN).

An NSN spokesperson says working with ClariPhy will help the system vendor develop technology beyond 100Gbps. “It also gives us a clear competitive edge in the optical network markets, because ClariPhy’s coherent IC and technology portfolio will enable us to offer differentiated and scalable products,” says the spokesperson. 

The funding follows a previous round of $24 million in May 2010 where the investors included Oclaro. ClariPhy has a long working relationship with the optical components company that started with Bookham, which formed Oclaro after it merged with Avanex.

“At 100Gbps, Oclaro get some amount of exclusivity as a module supplier but there is another module supplier that also gets access to this solution,” says Norouzian. This second module supplier has worked with ClariPhy in developing the design.  

ClariPhy will also supply the CL10010 to the system vendors. “There are no limitations for us to work with OEMs,” he says.

The latest investment will be used to fund the company's R&D effort in 100, 200 and 400Gbps, and to get the CL4010 into production.

 

Beyond 100 Gig

The challenge at data rates higher than 100Gbps is implementing ultra-large ASICs: closing timing and laying out vast digital circuitry. This is an area the company has been investing in over the last 18 months. “Now we can build 200 million logic gate designs,” says Norouzian.

Moving from 100Gbps to 200Gbps wavelengths will require higher order modulation, says Norouzian, and this is within the realm of its ASIC. 

Going to 400Gbps will require using two devices in parallel. One Terabit transmission however will be far harder. “Going to one Terabit requires a whole new decade of development,” he says.


Further reading:

100G: Is market expectation in need of a reality check?

Terabit consortium embraces OFDM


Verizon plans coherent-optimised routes

Glenn Wellbrock, director of backbone network design at Verizon Business, was interviewed by gazettabyte as part of an upcoming feature on high-speed optical transmission.  Here are some highlights of what he shared. The topics will be expanded upon in the upcoming feature.

 

 "Next-gen lines will be coherent only"

 Glenn Wellbrock, Verizon Business

 

 

Muxponders at 40Gbps

Given the expense of OC-768 very short reach transponders, Verizon is a keen proponent of 4x10Gbps muxponders. Instead of using the OC-768 client side interface, Verizon uses 4x10Gbps pluggables which are multiplexed into the 40Gbps line-side interface. The muxponder approach is even more attractive when compared to 40Gbps IP core router interfaces, which are considerably more expensive than 4x10Gbps pluggables.

DQPSK will be deployed this year

Verizon has been selective in its use of differential phase-shift keying (DPSK) based 40Gbps transmission within its network. It must measure the polarisation mode dispersion (PMD) on a proposed 40Gbps route, and PMD's variable nature means that impairment issues can arise over time. For this reason Verizon favours differential quadrature phase-shift keying (DQPSK) modulation.

According to Wellbrock, DPSK has a typical PMD tolerance of 4 ps while DQPSK is closer to 8 ps. In contrast, 10Gbps DWDM systems have around 12 ps. “That [8 ps of DQPSK] is the right ballpark figure,” he says, pointing out that measuring a route's PMD must still be done.

Verizon is testing the technology in its labs and Wellbrock says Verizon will deploy 40Gbps DQPSK technology this year.

Cost of 100Gbps

Verizon Business has already deployed Nortel’s 100Gbps dual-polarisation quadrature phase-shift keying (DP-QPSK) coherent system in Europe, connecting Frankfurt and Paris. However, given 100Gbps is at the very early stages of development, it will take time to meet the goal of costing 2x 40Gbps.

That said, Verizon expects at least one other system vendor to have a 100Gbps system available for deployment this year. And around mid-2011, at least three 300-pin module makers will likely have products. It will be the advent of 100Gbps modules and the additional 100Gbps systems they will enable that will reduce the price of 100Gbps. This has already happened with 40Gbps line side transponders; with 100Gbps the advent of 300-pin MSAs will happen far more quickly, says Wellbrock.

Next-gen routes coherent only

When Verizon starts deploying its next-generation fibre routes they will be optimised for 100Gbps coherent systems. This means that there will be no dispersion compensation fibre used on the links, relying instead on the 100Gbps receiver’s electronics to perform the dispersion compensation.

The routes will accommodate 40Gbps transmission but only if the systems use coherent detection. Moreover, much care will be needed in how these links are architected since they will need to support future higher-speed optical transmission schemes.

Verizon expects to start deploying such routes in 2011 and “certainly” in 2012.


ECOC 2009: An industry view

One theme dominated all others for attendees at this year’s ECOC, held in Vienna in late September: high speed optical transmission technology.

“Most of the action was in 40 and 100 Gigabit,” said Stefan Rochus, vice president of marketing and business development at CyOptics. “There were many 40/ 100 Gigabit LR4 module announcements - from Finisar, Opnext and Sumitomo [Electric Industries].”

Daryl Inniss, practice leader, components at market research firm Ovum, noted a market shift regarding 40 Gigabit. “There has been substantial progress in lowering the cost, size and power consumption of 40 Gigabit technology,” he said.

John Sitch, Nortel’s senior advisor optical development, metro Ethernet networks, highlighted the prevalence and interest in coherent detection/ digital signal processing designs for 40 and 100 Gigabit per second (Gbps) transmission. Renewed interest in submarine was also evident, he said.

Rochus also highlighted photonic integration as a show theme, with the multi-source agreement from u2t Photonics and Picometrix, the integrated DPSK receiver involving Optoplex with u2t Photonics, Enablence Technologies, and CIP Technologies' monolithically integrated semiconductor optical amplifier with a reflective electro-absorption modulator.

Intriguingly, Rochus also heard talk of OEMs becoming vertically integrated again. “This time maybe by strategic partnerships rather than OEMs directly owning fabs,” he said.

The attendees were also surprised by the strong turnout at ECOC, which was expected to suffer given the state of the economy. “Attendance appeared to be thick and enthusiasm strong,” says Andrew Schmitt, directing analyst, optical at Infonetics Research. “I heard the organisers were expecting 200 people on the Sunday [for the workshops] but got 400.”

In general most of the developments at the show were as expected. “No big surprises, but the ongoing delays in getting actual 100 Gigabit CFP modules were a small surprise,” said Sitch. “And if everyone's telling the truth, there will be plenty of competition in 100 Gigabit.”

Inniss was struck by how 100 Gigabit technology is likely to fare: “The feeling regarding 100 Gigabit is that it is around the corner and that 40 Gigabit will somehow be subsumed,” he said. “I’m not so sure – 40 Gigabit  is growing up and while operators are cheerleading 100 Gigabit technology, it doesn’t mean they will buy – let’s be realistic here.”

As for the outlook, Rochus believes the industry has reason to be upbeat. “There is optimism regarding the third and fourth quarters for most people,” he said. “Inventories are depleted and carriers and enterprises are spending again.”

Inniss’ optimism stems from the industry's longer term prospects. He was struck by a quote used by ECOC speaker George Gilder: “Don’t solve problems, pursue opportunities.”

Network traffic continues to grow at a 40-50% yearly rate yet some companies continue to worry about taking a penny out of cost, said Inniss, when the end goal is solving the bandwidth problem.

For him 100 Gbps is just a data rate, as 400 Gbps will be the data rate that follows.  But given the traffic growth, the opportunity revolves around transforming data transmission. “For optical component companies innovation is the only way," said Inniss. "What is required here is not a linear, incremental solution."


40G and 100G Ethernet: First uses of the high-speed interfaces

 

Operators, enterprises and equipment vendors are all embracing 100 Gigabit technologies even though the standards will only be completed in June 2010.

Comcast and Verizon have said they will use 100Gbit/s transmission technology once it is available. Juniper Networks demonstrated a 100 Gigabit Ethernet (GbE) interface on its T1600 core router in June, while in May Ciena announced it will supply 100Gbit/s transmission technology to NYSE Euronext to connect its data centers.

Ciena’s technology is for long-haul transmission, outside the remit of the IEEE’s P802.3ba Task Force’s standards work defining 40GbE and 100GbE interfaces. But the two are clearly linked: the emergence of the Ethernet interfaces will drive 100Gbit/s long-haul transmission.

ADVA Optical Networking foresees two applications for metro and long-haul 100Gbps transmission: carrying 100Gbps IP router links, and multiplexing 10Gbps streams into a 100Gbps lightpath. “We see both: for router and switch interfaces, and to improve fibre bandwidth,” says Klaus Grobe, principal engineer at ADVA Optical Networking.

The trigger for 40Gbit/s market adoption was the advent of OC-768 SONET/SDH 2km reach interfaces on IP core routers. In contrast, 40GbE and 100GbE interfaces will be used more broadly. As well as IP routers and multiplexing operators’ traffic, the standards will be used across the data centre, to interconnect high-end switches and for high-performance computing.

The IEEE Task Force is specifying several 40GbE and 100GbE standards, with copper-based interfaces used for extreme short reach, while optical interfaces address reaches of 100m, 10km and 40km.

For 100m short-reach links, multimode fibre is used: four fibres at 10Gbps in each direction for 40GbE and ten fibres at 10Gbps in each direction for 100GbE interfaces. For 40 and 100GbE 10km long reach links, and for 100GbE 40km extended reach, single mode fibre is used. Here 4x10Gbps and 4x25Gbps are carried over a single fibre using wavelength division multiplexing (WDM).

“Short reach optics at 100Gigabit uses a 10x10 electrical interface that drives 10x10 optics,” says John D’Ambrosia, chair of the IEEE P802.3ba Task Force. “The first generation of 100GBASE-L/ER optics uses a 10x10 electrical interface that then goes to 4x25 WDM optics.”
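To make the lane arithmetic concrete, the following minimal sketch restates the configurations described above. The labels are illustrative, not official IEEE port-type names, and coding and framing overheads are ignored.

```python
# Minimal restatement of the 40GbE/100GbE optical lane structures described above.
# Labels are illustrative, not official IEEE port-type names; overheads ignored.

lane_configs = {
    "40GbE 100m short reach (multimode, parallel fibres)":  (4, 10),
    "100GbE 100m short reach (multimode, parallel fibres)": (10, 10),
    "40GbE 10km long reach (single mode, WDM)":             (4, 10),
    "100GbE 10km/40km reach (single mode, WDM)":            (4, 25),
}

for name, (lanes, gbps_per_lane) in lane_configs.items():
    print(f"{name}: {lanes} x {gbps_per_lane} Gbps = {lanes * gbps_per_lane} Gbps")
```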

The short reach interfaces reuse 10Gbps VCSEL and receiver technology and are designed for high density, power-sensitive applications. “The IEEE chose to keep the reach to 100m to give a low cost solution that hits the biggest market,” says D’Ambrosia, although he admits that a 100m reach is limiting for certain customers.

Cisco Systems agrees. “Short reach will limit you,” says Ian Hood, Cisco’s senior product marketing manager for service provider routing and switching. “It will barely get across the central office but it can be used to extend capacity within the same rack.” For this reason Cisco favours longer reach interfaces but will use short reach ‘where convenient’.

D’Ambrosia would not be surprised if a 1 to 2km single mode fibre variant will be developed though not as part of the current standards. Meanwhile, the Ethernet Alliance has called for an industry discussion on a 40Gbps serial initiative.

Within the data centre, both 40GbE and 100GbE reaches have a role.

A two-layer switching hierarchy is commonly used in data centres. Servers connect to top-of-rack switches that funnel traffic to aggregation switches that, in turn, pass traffic to the core switches. Top-of-rack switches will continue to receive 1GbE and 10GbE streams for a while yet but the interface to aggregation switches will likely be 40GbE. In turn, aggregation switches will receive 40GbE streams and use either 40GbE or 100GbE to interface to the core switches. Not surprisingly, first use of 100GbE interfaces will be to interconnect core Ethernet switches.
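The hierarchy and the expected interface speeds per hop can be summarised in a minimal sketch; the tier names and speed assignments simply restate the paragraph above and are not a standards mandate.

```python
# Sketch of the two-tier data-centre switching hierarchy described above,
# with the interface speeds the article expects for each hop (illustrative).
hierarchy = [
    ("server",             "top-of-rack switch", ["1GbE", "10GbE"]),
    ("top-of-rack switch", "aggregation switch", ["40GbE"]),
    ("aggregation switch", "core switch",        ["40GbE", "100GbE"]),
    ("core switch",        "core switch",        ["100GbE"]),
]

for src, dst, speeds in hierarchy:
    print(f"{src} -> {dst}: {' or '.join(speeds)}")
```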

Extended reach 100GbE interfaces will be used to connect equipment up to 40km apart, between two data centres for example, but only when a single 100GbE link over the fibre pair is sufficient. Otherwise dense WDM technology will be used.

Servers will take longer to migrate to 40 and 100GbE. “There are no 40GbE interfaces on servers,” says Daryl Inniss, Ovum’s vice president and practice leader components. “Ten gigabit interfaces only started to be used [on servers] last year.” Yet the IT manager in one leading German computing centre, albeit an early adopter, told ADVA that he could already justify using a 40GbE server interface and expected 100GbE interfaces would be needed by 2012.

Two pluggable form factors have already been announced for 100GbE. The CFP supports all three link distances and has been designed with long-haul transmission in mind, says Matt Traverso, senior manager of technical marketing at Opnext. The second, the CXP, is designed for compact short reach interfaces. For 40GbE more work is needed.

Juniper’s announced core router card uses the CFP to implement a 100m connection. Juniper’s CFP is being used to connect the router to a DWDM platform for IP traffic transmission between points-of-presence, and for data centre trunking.

So will one 40GbE or 100GbE interface standard dominate early demand? Opnext’s Traverso thinks not. “All the early adopters have one or two favourite interfaces – high-performance computing favours 40 and 100GbE short reach while for core routers it is long reach 100GbE,” he says. “All the early adopters have their chosen interfaces before they will round out their portfolio.”

This article appeared in the exhibition magazine at ECOC 2009.

 

