Acacia looks to co-package its coherent PIC and DSP-ASIC

  • Acacia Communications is working to co-package its coherent DSP and its silicon photonics transceiver chip.
  • The company is also developing a digital coherent optics module that will support 400 gigabits.

Acacia Communications is working to co-package its coherent DSP and its silicon photonics transceiver chip. The line-side optical transceiver company is working on a digital coherent optics module that will support 400 gigabits.

Acacia announced last November that it was sampling the industry’s first CFP2 Digital Coherent Optics (CFP2-DCO) module, which supports 100- and 200-gigabit line rates. The CFP2-DCO integrates the DSP and its silicon photonics chip within a CFP2 module, which is half the size of a CFP module, with each chip packaged separately.

The CFP2-DCO adds to the company’s CFP2-ACO design that was announced a year ago. In the CFP2-ACO, the CFP2 module contains just the optics with the DSP-ASIC chip on the same line card connected to the module via a special high-speed interface connector.

Now, Acacia is working to co-package the two chips, which will not only improve the performance of its CFP2-DCO but also enable new, higher-performance optical modules such as a 400-gigabit DCO. The Optical Internetworking Forum announced a new implementation agreement last December for an interoperable 400-gigabit ZR (80km) coherent interface.

 

Both [the DSP and silicon photonics chip] are based on CMOS processes. The next step for Acacia is to bring them into a single package.

 

Portfolio upgrades

Acacia has also upgraded its existing portfolio of coherent transceivers. The company has integrated the enhanced silicon photonics coherent transceiver in its AC100-CFP and its AC-400 5x7-inch modules.

The silicon-photonics transceiver achieves a more efficient coupling of light in and out of the chip and uses an improved modulator driver design that reduces the overall power consumption. The design also supports flexible grid, enabling channel sizes of 37.5GHz in addition to fixed-grid 50GHz channels.
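As a rough illustration of what the finer grid buys (a sketch only; the figure of roughly 4.8 THz of usable C-band spectrum is an assumption, not an Acacia number):

```python
# Rough channel-count comparison for the two grid options mentioned.
# Assumes ~4.8 THz of usable C-band spectrum (illustrative figure).
C_BAND_GHZ = 4800.0

for grid_ghz in (50.0, 37.5):
    channels = int(C_BAND_GHZ // grid_ghz)
    print(f"{grid_ghz:g} GHz grid: {channels} channels")

# 50 GHz grid:   96 channels
# 37.5 GHz grid: 128 channels, i.e. about a third more wavelengths per fibre
```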

The resulting AC100-CFP module has a longer reach of 2,500km and lower power consumption than the first-generation design announced in 2014. The enhanced PIC has also been integrated within the AC-400. The AC-400, announced in 2015, integrates two silicon photonics chips to support line rates of 200, 300 and 400 gigabits.

 

CFP2-DCO

Acacia is using the coherent transceiver photonic integrated circuit (PIC), first used in its CFP2-ACO, alongside a new coherent DSP to integrate the optics and DSP within the compact CFP2.

“The third-generation PIC is a mini PIC, in a gold box that is about the size of a dime, which is a third of the size of our original PIC,” says Benny Mikkelsen, founder and CTO of Acacia.

One design challenge with its latest DSP was retaining the reach of the original DSP used in the AC100-CFP while lowering its power consumption. Having an inherently low-power coherent DSP design in the first place is one important factor. Mikkelsen says this is achieved based on several factors such as the DSP algorithms chosen and how they are implemented in hardware, the clock frequencies used within the chip, how the internal busses are implemented, and the choice of bits-per-symbol used for the processing.

The resulting DSP’s power consumption can be further reduced by using an advanced CMOS process. Acacia uses a 16nm CMOS process for its latest DSP.

Other challenges to enable a CFP2-DCO module include reducing the power consumption of the optics and reducing the packaging size. “The modulator driver is the piece part that consumes the most power on the optics side,” says Mikkelsen.

Acacia's CFP2-DCO supports polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) for 100 gigabits, and two modulation schemes - polarisation-multiplexed 8-ary quadrature amplitude modulation (PM-8QAM) and 16-ary QAM (PM-16QAM) - for 200-gigabit line rates. In contrast, its CFP2-ACO supports just PM-QPSK and PM-16QAM.

At 100 gigabits, the DSP consumes about half the power of the Sky DSP used in the original AC100. Using PM-8QAM for 200 gigabits means the new DSP and optics support a higher baud rate - some 45 gigabaud compared to the traditional 32-35 gigabaud used for 100 and 200-gigabit transmission. However, while this increases the power consumption, the benefit of 8QAM is a 200-gigabit reach beyond 1,000km.
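A back-of-the-envelope calculation shows where those baud rates come from (a sketch; the ~28% FEC and framing overhead is an assumed round figure, not an Acacia specification):

```python
# Approximate symbol rate of a dual-polarisation coherent signal:
#   baud = line_rate * (1 + overhead) / (2 polarisations * bits_per_symbol)
def baud_rate_gbd(line_rate_gbps, bits_per_symbol, overhead=0.28):
    return line_rate_gbps * (1 + overhead) / (2 * bits_per_symbol)

print(f"100G PM-QPSK : {baud_rate_gbd(100, 2):.1f} GBd")  # ~32 GBd
print(f"200G PM-16QAM: {baud_rate_gbd(200, 4):.1f} GBd")  # ~32 GBd
print(f"200G PM-8QAM : {baud_rate_gbd(200, 3):.1f} GBd")  # ~43 GBd, close to the ~45 GBd quoted
```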

Mikkelsen stresses that a key reason the company can achieve a CFP2-DCO design is having both technologies in-house: “You can co-optimise the DSP and the silicon photonics”.

 

We think, at least in the near term, that the OSFP module seems to be a good form factor to work on

ACO versus DCO

Since Acacia now offers both the CFP2-ACO and CFP2-DCO modules, it is less concerned about how the relative demand for the two modules develops. “We don’t care too much which one is going to have the majority of the market,” says Mikkelsen. That said, Acacia believes that the CFP2-DCO market will become the larger of the two.

When the CFP2-ACO was first considered several years ago, the systems vendors and optical module makers shared a common interest. Systems vendors wanted to use their custom coherent DSP-ASICs while the -ACO module allowed component makers that didn't have the resources to develop their own DSP to address the market with their optics. It was also necessary to separate the DSP and the optics if the smaller CFP2 form factor was to be used.

But bringing CFP2-ACOs to volume production has proved more difficult than first envisaged. The CFP2-DCO is far easier to use, says Mikkelsen. The module can be plugged straight into equipment whereas the CFP2-ACO must be calibrated by a skilled optical engineer when a wavelength is first turned up.

 

Future work

Acacia is now looking at new module form factors and new packaging technologies. “Both [the DSP and silicon photonics chip] are based on CMOS processes,” says Mikkelsen. “The next step for Acacia is to bring them into a single package.”

In addition to the smaller size, a single package promises a slightly lower power consumption as well as manufacturing cost advantages. “We also expect to see higher performance once the DSP and optics are sitting next to each other which we believe will improve signal integrity between the two,” says Mikkelsen.

Acacia is not waiting on any industry challenges to be overcome before a single-package design can be achieved. The company points out that its silicon photonics chip is not temperature sensitive, aiding its co-packaging with the DSP.

Acacia is working on a 400-gigabit DCO design and is looking at several potential module types. The company is a member of the OSFP module MSA as well as the Consortium for On-Board Optics (COBO), which has started a coherent working group. “We think, at least in the near term, that the OSFP module seems to be a good form factor to work on,” says Mikkelsen.


Q&A with Kotura's CTO: Integration styles and I/O limits

The second and final part of the Q&A with Mehdi Asghari, CTO of silicon photonics start-up Kotura.

Part 2 of 2

 

"When do the big players adopt a new technology and go from an electrical to an optical solution? In my experience, usually when they absolutely have to."

Mehdi Asghari, CTO, Kotura

 

Q: Silicon photonics comes in two integration flavours: the monolithic approach where the modulators and detectors are implemented monolithically while the lasers are coupled externally (e.g. Kotura and Luxtera); and heterogeneous integration where III-V materials such as indium phosphide are bonded to the silicon to form a hybrid design yet are grown on a single die (e.g. Aurrion). Does one approach have an advantage?  

A: I have a III-V background and converted to silicon photonics over 15 years ago. The key issue here is what are you trying to do? Why are we going from III-V processing to silicon? Is it the yield and process maturity or the device performance for actives?

If it is the former, then heterogeneous integration does not really solve the problem since you are still processing III-V devices and are likely to need multiple fabs to do it. If it is the latter then you should stick to III-V wafers.

The fact is that silicon provides passive performance that is far superior to III-V while the active performance – the detector and modulator - is good enough. In fact our germanium detectors could be better and our electro-absorption modulators can be lower power and exhibit a broader working spectral range.

We have seen repeatedly that silicon only has to show it is good enough to win, and that is certainly what it is doing.

 

"It is not enough to offer a 10%, 20% or even a 50% cost saving when you are offering the customer a brand new solution that comes with all the risks and unknowns associated with that technology."

 

Kotura has developed components for telecom (variable optical attenuators, and the functions needed for a 100 Gig coherent receiver) yet its focus is on datacom. Why is that?

We started in telecom as we looked for low hanging fruit that could give us a good margin and an easy start in our early days. This is important for a start-up with a new technology. The well-entrenched incumbent technologies are hard to displace.

You have to find an application with a clear value proposition to get started. Once you have established yourself, your supply chain and manufacturing infrastructure, you can take on more challenging and larger market opportunities.

We see certain areas in datacom that are not well served by either short-reach optics or telecom-grade solutions. Extended-reach data centre links are one key area where short-reach optics based on VCSELs cannot cover the reach needed and conventional telecom solutions are inherently over-engineered, failing to meet the power, cost and size requirements.

We think silicon photonics can play a key role here as a starting point in datacom. A key advantage of our platform here is that we can do WDM [wavelength division multiplexing] and hence offer 100 Gig on a single fibre (per direction). This is a major cost saving for longer reaches (>>50m) deployed in such links.

 

There are some big system players with silicon photonics (Cisco Systems, Alcatel-Lucent) and several small merchant silicon photonics players, such as the companies mentioned in the previous question, which must develop products to sell while funding the development of their technologies. How do you expect the silicon photonics marketplace to evolve, especially now that the technology is being more widely embraced?

For silicon photonics to succeed commercially, we need a multitude of vibrant and successful players in the field. Some of these can be start-ups that lead the innovation in technology and manufacturing but others can be larger organisations that have invested to service an internal need or leverage an existing dominance in the market.  

There is room and a necessity for both. It takes a village to raise a child. One single company will not turn silicon photonics into a successful commercial reality.

 

Cisco Systems has been talking about its proprietary CPAK transceiver. Here is an example of a system vendor using in-house silicon photonics for its own products. What do you make of such a development? And is Kotura being approached by equipment vendors that want to work with you to develop a custom design?

It is not new for a large company like Cisco to try and make sure that it has its own proprietary components to go into its systems, to protect its products and its margins. We see that in all industries.

In terms of other companies coming up with their own proprietary solutions, we do see more and more of this - and a lot more this year than last year - especially when you come off the telecom bandwagon and into the datacom environment: data centres and high-performance computing.

That is because the customer is in charge of the entire environment - the two ends of the link - so they can leverage more value from the solution you have to offer without worrying about standards. This is one way for systems companies to leverage value from components.

People are starting to see that the conventional technologies they have deployed are hitting a wall. When they are deploying a new solution they are rethinking their hardware strategy, and how they leverage it to add more value and differentiation to their system.

New ways to architect systems are becoming possible. If you are able to avoid limitations such as distance between the processor and memory, the router and switches and so on, you can come up with a very different architecture for your system and solution.

When do the big players adopt a new technology and go from an electrical to an optical solution? In my experience, usually when they absolutely have to. Most people don't adopt a new solution until they really, really need to; when the value proposition completely outweighs the risk.

It is not enough to offer a 10%, 20% or even a 50% cost saving when you are offering the customer a brand new solution that comes with all the risks and unknowns associated with that technology.

You have to offer them something new, to enable a new application, to add value by enabling a feature, something they can leverage in their product.

 

When you say systems people adopt new technology when they hit a wall, can you highlight examples of these hurdles?

When you look at the adoption of optics coming from the copper-dominated connectivity, it is very interesting.

Originally, for optics to work its way into the copper world, it had to hide itself and look like a copper solution. People had no idea how to create connectors and they were worried about fibre. So it was disguised as a copper solution.

As customers have got used to it, we can now come out and be more open. We can now do more innovative things with optical transceivers. If you look at the adoption rate, it is being accelerated by customers' demand such as 25 Gigabit signalling.

We can see that the processors and the ASICs - a switch from Broadcom or a processor from Intel or AMD - are running into I/O [input/output] density bottlenecks. The chip area is pretty constant, the packages are about the standard size, and the number of pins is going beyond what they can support, so they have to ramp up the pin rate to about 25 Gigabit-per-second (Gbps), while there are also some 10Gbps pins.

But the number of 25Gbps pins is becoming so high, potentially many hundreds, that they are not going to be able to trace them into the PCB (printed circuit board). The PCB can only take a 25Gbps signal for about 4 inches (~10cm) and then you need serdes [serialiser/deserialiser] and repeaters.

You may imagine a current router or switch ASIC having ten 25Gbps pins and 100 10Gbps pins. The 10Gbps pins I can take to the edge and use 10Gbps transceivers, and with the ten 25Gbps pins I can still do something - I may need a lot of electronics and serdes, and use pre-emphasis and equalisation.

But the next generation, when it becomes 100 25Gbps pins, you just cannot do that at the board level. That is where we will start to have to use optics close to the chip.
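A quick tally of the hypothetical pin counts Asghari uses shows why the electrical I/O runs out of road (a sketch using his illustrative numbers, not a real device):

```python
# Aggregate electrical I/O for the hypothetical switch/router ASICs described above.
def aggregate_tbps(pins_25g=0, pins_10g=0):
    return (pins_25g * 25 + pins_10g * 10) / 1000.0  # Tbps

current_gen = aggregate_tbps(pins_25g=10, pins_10g=100)  # ~1.25 Tbps of I/O
next_gen = aggregate_tbps(pins_25g=100)                  # ~2.5 Tbps of I/O

print(f"current-style ASIC  : {current_gen:.2f} Tbps")
print(f"next-generation ASIC: {next_gen:.2f} Tbps")

# A 25 Gbps electrical signal survives only ~4 inches (~10 cm) of PCB trace
# before it needs serdes and repeaters, so hundreds of such pins push the
# electrical-to-optical conversion right up to the ASIC.
```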

Will they go for very compact transceivers that sit next to the ASIC or would they try and co-package it with the ASIC?

My perception is that the first generation will be next to the ASIC. People will not integrate an unknown technology into a multi-billion dollar business, they will hedge their bets and have an external solution that offers them some level of assurance that if one solution does not work, they can change to another. But once they get used to it, they can start to integrate these in a multi-chip module solution.

 

What are the timescales?

I see transceivers next to the ASICs being deployed around 2017-18, maybe a bit sooner, with the co-packaging around 2018-20. People are already talking about it but usually these things take longer.

 

For part 1 of the Q&A, click here

 

Further reading:

Altera optical FPGA in 100 Gigabit Ethernet traffic demo

Boosting high performance computing with optics


Silicon photonics: Q&A with Kotura's CTO

A Q&A with Mehdi Asghari, CTO of silicon photonics start-up Kotura. In part one, Asghari talks about a recent IEEE conference he co-chaired that included silicon photonics, the next Ethernet standard, and the merits of silicon photonics for system design.

Part 1

 

"Photons and electrons are like cats and dogs. Electrons are dogs: they behave, they stick by you, they are loyal, they do exactly as you tell them, whereas cats are their own animals and they do what they like. And that is what photons are like."

Mehdi Asghari, CTO of Kotura

 

Q: You recently co-chaired the IEEE International Conference on Group IV Photonics that included silicon photonics. What developments and trends would you highlight? 

A: This year I wanted to show that silicon photonics was ready to make a leap from an active area of scientific research to a platform for engineering innovation and product development.

To this end, I needed to show that the ecosystem was ready and present. Therefore, a key objective was to get the industry more involved with the conference. "This has always been a challenge," I was told.

To address this issue, I proposed to my co-chair, MIT's Professor Jurgen Michel, that we appoint joint-session chairs, one from industry and one from academia. We got people we knew from Google, Oracle and Intel as co-chairs, paired them with prominent academics, and asked them to ensure that there was an equal number of industry-invited talks in the schedule. We knew this would be a major attraction to industry attendees. We also got the industry to fund the conference at a level that set an IEEE record.

A key highlight of the show was a boat cruise on San Diego bay with Dr. Andrew Rickman as speaker, sharing his experiences and thoughts about setting up the first silicon photonics company - Bookham Technology - over 20 years ago.

Among other distinguished industry speakers we had Samsung telling us of the role of silicon photonics in consumer applications, Broadcom on the need for on-chip optical communication, Cisco on the role of silicon photonics in the future of the Internet, and Google on its broadband fibre-to-the-home (FTTh) initiative and what silicon photonics could offer in this area.

Oracle also shared its latest development in silicon photonics and the application of the technology in their systems, while Luxtera discussed the latest developments in its CMOS photonics platform, particularly the 4x25 Gigabit-per-second (Gbps) platform.

We also heard about the latest germanium laser development at MIT and had an invited speaker to talk about what III-V devices could do and to provide a comparison to silicon to make sure we are not blinded by our own rhetoric.

We ended up with a record number of attendees for the conference and, perhaps more importantly, close to half came from industry - also a record. That vindicated my motivation and perspective for the conference: silicon photonics is ready and coming.

 

Was there a trend or presentation at the IEEE event that stood out?

There are two areas creating excitement. One is the germanium laser. This is a topic of significant interest because these devices can operate at very high temperatures and therefore they can be next to the processor or ASIC. This can be a game-changer in how we envisage photonics and electronics being integrated.

We have germanium detectors and at Kotura we are working very hard to get a germanium electro-absorption modulator. We have shown this device can be extremely small and low power. And it can operate at very high speed - we have observed 3dB bandwidths in excess of 70GHz which means you can think of 100 Gigabit direct modulation for a device only 40 microns long and with a capacitance of a few femtofarads. So in terms of RF power, the dissipation of this device is virtually zero.
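To get a feel for why the RF dissipation is "virtually zero", consider the switching power of a capacitive load (a sketch; the few-femtofarad capacitance is from the interview, while the 1V drive swing and 50 gigabaud rate are assumptions for illustration):

```python
# Dynamic power of a small capacitive electro-absorption modulator:
#   P ~ C * V^2 * f   (a worst-case toggling bound)
C = 5e-15   # farads, "a few femtofarads" per the interview
V = 1.0     # volts drive swing (assumed)
f = 50e9    # symbols per second (assumed)

power_w = C * V**2 * f
energy_per_bit_j = C * V**2 / 4      # common rule of thumb for NRZ switching

print(f"RF power       ~ {power_w * 1e6:.0f} microwatts")   # ~250 uW
print(f"Energy per bit ~ {energy_per_bit_j * 1e15:.2f} fJ")  # ~1 fJ/bit
```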

I would say the MIT group is probably leading the [germanium laser] efforts. They reported on room-temperature, current-driven laser emission which is very exciting. The efficiency of these lasers is still low for commercial applications; they probably have to improve by a factor of 100 or so. But given the progress we've seen in the last two years, if they keep going at that pace we may have viable germanium lasers in a couple of years. Then someone in industry has to take that on and turn it into a product and that is usually the hardest part.

This is exciting because it enables us to forget about off-the-chip lasers and integrate them in the device. We can then get rid of a whole bunch of problems. For example, the high-temperature operation of the III-V devices is a real limit for us. Electronic devices can give off 100W and operate at 120°C, whereas optical devices often have to be stabilised, may go through multiple packaging layers, and the heat dissipation is usually directly related to cost.

If you could end up with a germanium laser that is happy at high temperatures - and we know our detectors and modulators work at high temperatures, and we know we can use electronic packaging to package these devices - then we can put these lasers next to the processor and address the bandwidth limitations that ASICs are facing today.

 

"Wavelength division multiplexing (WDM) is effectively a zero-power gearbox"

 

What was the second area?

The other area that was very interesting is graphene, a new material people are starting to work with and putting on silicon. They [researchers] are showing very low power, very high speed operation. It is still at a research level but that is another area we should watch.

 

The IEEE has started a group looking at the next speed Ethernet standard. No technical specification has been mentioned but it looks as though 400 Gigabit Ethernet (GbE) will be the approach. Do you agree, and what role can silicon photonics play in making the next speed Ethernet standard possible?

Industry is busy arguing about the different ways of doing 100 and 400GbE, and perhaps forgetting the fact that we have been here before.

The simple fact is that people always go for higher bit rate when it is cost-efficient and power-efficient to do so. After that, wavelengths are used.

Wavelength division multiplexing (WDM) is effectively a zero-power 'gearbox', mixing the signals in the optical domain. You do pay a power penalty for it in the form of photons lost in the multiplexer and demultiplexer. However that is not significant compared to the power consumption of an electronics gearbox chip.

Once we have exploited line rate and wavelength division multiplexing, we come to more complex modulation formats and pay the associated power and complexity penalty. Of course, more channels of fibre can always carry more information bandwidth but that is just a brute force solution that works while density and bandwidth requirements are moderate.

I think the right 100 Gigabit is based on a WDM 4x25 Gig solution. This can then scale to 400 Gigabit by adding more wavelengths, and can then scale to 1.6 Terabits. We have already demonstrated this in a single chip and will demonstrate it later in the form of a 100Gbps QSFP module.

 

How does the interface scale to 1.6Tbps?

Our devices are capable of running at 40 or 50Gbps, depending on the electronics. The electronics is going to limit the speed of our devices. We can very easily see going from four channels at 25Gbps to 16 channels at 25Gbps to provide a 400 Gigabit solution.

We can also see a way of increasing the line rate to 50Gbps perhaps, either a straightforward NRZ (non-return-to-zero) line rate or some people are talking about multi-level modulation, PAM-4 (pulse amplitude modulation) type of stuff, to get to 50Gbps.

The customers we are talking to about 100Gbps are already talking about 400Gbps. So we can see 16x25Gbps, or 8x50Gbps if that is the right thing to do at the time based on the availability of electronics.

To go to 1.6 Terabit transceivers, we envisage something running at 40Gbps times 40 channels or 50Gbps times 32 channels. We already have done a single receiver chip demonstrator that has 40 channels, each at 40Gbps.

These things in silicon are not a big deal. The III-V guys really struggle with yield and cost. But you can envisage scaling to that level of complexity in a silicon platform.
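The scaling Asghari describes is simply channel count times lane rate; the combinations he mentions work out as follows (a sketch of the arithmetic):

```python
# Channel-count x lane-rate combinations mentioned for each interface speed.
configs = [
    ("100G",  4, 25),
    ("400G", 16, 25),
    ("400G",  8, 50),
    ("1.6T", 40, 40),
    ("1.6T", 32, 50),
]

for label, channels, lane_gbps in configs:
    print(f"{label}: {channels} channels x {lane_gbps} Gbps = {channels * lane_gbps} Gbps")
```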

 

Silicon photonics is spoken of not just as an optical platform like traditional optical integration technologies, but also as a design approach, making use of techniques associated with semiconductor design. The implication is that the technology will enable designs and even systems in a way that traditional optics can't. Can you explain how silicon photonics is a design approach and just what the implications are?

I think this is a key promise of silicon photonics, but perhaps one that has been oversold in recent years.

The key here is that given the maturity of the silicon processing capabilities, process simulation tools available and inherent properties of silicon, it is possible to predict the performance of the optical circuits far better in this platform than in any other before it. I think this is true and very valuable, potentially even a game changer.

However, we have to realise that there still remains an inherent difference between electrons and photons and their behavior in such circuits. Photons remain in a quantum world in such circuits, where the wavelength of light is comparable to feature sizes we manufacture. Hence we are dealing with a statistical quantum process whether we like it or not.

In summary, silicon will be a key enabler for on-chip system design, but it is too early for the university courses to stop graduating photonics PhDs!

 

So there is an advantage to silicon photonics, but are you saying it is not as simple as using mature semiconductor design techniques?

Photons and electrons are like cats and dogs. Electrons are dogs: they behave, they stick by you, they are loyal, they do exactly as you tell them, whereas cats are their own animals and they do what they like. And that is what photons are like.

So it is really hard to predict what a photon does. The dimensions that we use for the structures we make are of the size of the wavelength of a photon. And that means it is more of a hit-and-miss process - there is always stray light, the stray light has a habit of interfering and you can always get unpredicted results.

When I interact with my electronic partners I find that they go through 6-9 months of very detailed simulation. They have very complex simulation tools.

When you come to photonics for sure we can borrow some of these simulation tools, we can simulate the process because we are using silicon. However some of the tolerances that we need are beyond what the silicon guys need, and the way the photons behave is very different. So in the end we don't spend 9 months simulating; we spend a month simulating and 3 months running the process and optimising it and re-running it and re-optimising it.

We end up with a reverse situation where the design is only 3 months, and the interaction between the designer and the manufacturing process is a 9-month process. So this is more of an iterative process. It is not as mature and is a little bit more statistical.

 


Dan Sadot on coherent's role in the metro and the data centre

Gazettabyte went to visit Professor Dan Sadot, academic, entrepreneur and founder of chip start-up MultiPhy, to discuss his involvement in start-ups, his research interests and why he believes coherent technology will not only play an important role in the metro but also the data centre.


"Moore's Law is probably the most dangerous enemy of optics"

Professor Dan Sadot 

 

The Ben-Gurion University campus in Beer-Sheba, Israel, is a mixture of brightly lit, sharp-edged glass-fronted buildings and decades-old palm trees.

The first thing you notice on entering Dan Sadot's office is its tidiness; a paperless desk on which sits a MacBook Air. "For reading maybe the iPad could be better but I prefer a single device on which I can do everything," says Sadot, hinting at a need to be organised, unsurprising given his dual role as CTO of MultiPhy and chairman of Ben-Gurion University's Electrical and Computer Engineering Department. 

The department, ranked in the country's top three, is multi-disciplinary. Just within the Electrical and Electronics Department there are eight tracks including signal processing, traditional communications and electro-optics. "That [system-oriented nature] is what gives you a clear advantage compared to experts in just optics," he says.

The same applies to optical companies: there are companies specialising in optics and ASIC companies that are expert in algorithms, but few have both. "Those that do are the giants: [Alcatel-Lucent's] Bell Labs, Nortel, Ciena," says Sadot. "But their business models don't necessarily fit that of start-ups so there is an opportunity here." 

 

MultiPhy  

MultiPhy is a fabless start-up that specialises in high-speed digital signal processing-based chips for optical transmission. In particular it is developing 100Gbps ICs for direct detection and coherent.

Sadot cites a rule of thumb that he adheres to religiously: "Everything you can do electronically, do not do optically. And vice versa: do optically only the things you can't do electronically." This is because using optics turns out to be more expensive.

And it is this that MultiPhy wants to exploit by being an ASIC-only company with specialist knowledge of the algorithms required for optical transmission.

"Electronics is catching up," says Sadot. "Moore's Law is probably the most dangerous enemy of optics."

 

 

Ben-Gurion University. Source: Gazettabyte

 

Direct detection

Developments in electronics are not the only thing that has made coherent transmission possible; advances in optical hardware have too. Coherent requires accurate retrieval of phase information, and that was not possible with the available hardware until recently. In particular, the phase noise of lasers was too high, says Sadot. Now optics is enabling coherent, and the issues that arise with coherent transmission can be solved electronically using DSP.

MultiPhy has entered the market with its MP1100Q chip for 100Gbps direct detection. According to Sadot, 100Gbps is the boundary data rate between direct detection and coherent. Below 100Gbps coherent is not really needed, he says, even though some operators are using the technology for superior long-haul optical transmission performance at 40Gbps.

"Beyond 100 Gig you need the spectral efficiency, you need to do denser [data] constellations so you must have coherent," says Sadot. "You are also much more vulnerable to distortions such as chromatic dispersion and you must have the coherent capability to do that." 

But at 100 Gig the two - coherent and direct detection - will co-exist.

MultiPhy's first device runs the maximum likelihood sequence estimation (MLSE) algorithm that is used to counter fibre transmission distortions. "MLSE offers the best possible theoretical solution on a statistical basis without retrieving the exact phase," says Sadot.  "That is the maximum you can squeeze out of direct detection."  

The MLSE algorithm benefits optical performance by extending the link's reach while allowing lower-cost, reduced-bandwidth optical components to be used. MultiPhy claims four 10Gbps-class optical channels can be used on both the transmit and the receive paths to implement the 4x28Gbps (100Gbps) design.

Sadot describes MLSE as a safety net in its ability to handle transmitter and/or receiver imperfections. "We have shown that performance is almost identical with a high quality transmitter and a lower quality transmitter; MLSE is an important addition," he says.

 

Ben-Gurion University. Source: Gazettabyte

 

Coherent metro

System vendors such as Ciena and Alcatel-Lucent have recently announced their latest generation coherent ASICs designed to deliver long-haul transmission performance. But this, argues Sadot, is overkill for most applications when ultra-long haul is not needed: metro alone accounts for 75% of all the line side ports.

He also says that the power consumption of long-haul solutions is over 3x what is required for metro: 75W versus the CFP pluggable module's 24W. This means the power available solely for the ASIC would be 15W. 
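The budget arithmetic looks roughly as follows (a sketch; only the 75W, 24W and 15W figures are from the article, and the split between the ASIC and the rest of the module is an assumption made for illustration):

```python
# Power budget for a metro coherent design in a CFP pluggable.
LONG_HAUL_SOLUTION_W = 75.0   # long-haul coherent solution
CFP_MODULE_LIMIT_W = 24.0     # CFP pluggable power envelope
OPTICS_AND_OVERHEAD_W = 9.0   # assumed optics, drivers and control overhead

asic_budget_w = CFP_MODULE_LIMIT_W - OPTICS_AND_OVERHEAD_W
print(f"Power left for the coherent ASIC: {asic_budget_w:.0f} W")                             # 15 W
print(f"Long-haul solution vs CFP limit : {LONG_HAUL_SOLUTION_W / CFP_MODULE_LIMIT_W:.1f}x")  # ~3.1x
```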

"This is not fine-tuning; you really need to design the [coherent metro ASIC] from scratch," says Sadot. "This is what we are doing."

To achieve this, MultiPhy has developed patented techniques involving “sub-Nyquist” sampling. This allows the analogue-to-digital converters and the DSP to operate at half the sampling rate, saving power. To use sub-Nyquist sampling, a low-pass anti-aliasing filter is applied, which degrades the received signal; sampling can then take place at half the rate, and the MLSE algorithm counters the effects of the low-pass filtering. And because of the low-pass filtering, reduced-bandwidth opto-electronics can be used, which reduces cost.
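A minimal sketch of the sampling-rate saving (the 32 gigabaud symbol rate and the two-samples-per-symbol baseline are illustrative assumptions, not MultiPhy figures):

```python
# Rough ADC-rate comparison behind the "sub-Nyquist" approach.
symbol_rate_gbd = 32                       # assumed symbol rate

conventional_gsps = 2 * symbol_rate_gbd    # typical coherent receiver: ~2 samples/symbol
sub_nyquist_gsps = 1 * symbol_rate_gbd     # low-pass filter, then sample at ~half that rate

print(f"Conventional ADC rate: {conventional_gsps} GSa/s per channel")
print(f"Sub-Nyquist ADC rate : {sub_nyquist_gsps} GSa/s per channel")
# The MLSE equaliser then compensates the distortion introduced by the
# anti-aliasing low-pass filter; ADC and DSP power scale roughly with sample rate.
```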

The result is a low power, cost-conscious design suited for metro networks.

 

Coherent elsewhere

Next-generation PON is also a likely user of coherent technology for such schemes as ultra-dense WDM-PON.

Sadot believes coherent will also find its way into the data centre. "Again you will have to optimise the technology to fit the environment - you will not find an over-design here," he says. 

Why would coherent, a technology associated with metro and long-haul, be needed in the data centre? 

"Even though there is the 10x10 MSA, eventually you will be limited by spectral efficiency," he says. Although there is a tremendous amount of fibre in the data centre, there will be a need to use this resource to the maximum. "Here it will be all about spectral efficiency, not reach and optical signal-to-noise," says Sadot.

 

 

Sadot's start-ups

Sadot had a research posting at the optical communications lab at Stanford University. The inter-disciplinary and systems-oriented nature of the lab was an influence on Sadot when he founded the optical communications lab at Ben-Gurion University around the time of the optical boom. "A pleasant time to come up with ideas," is how he describes that period - 1999-2000.  

The lab's research focus is split between optical and signal processing topics. Work there resulted in two start-ups that Sadot was involved in during the optical bubble: Xlight Photonics and TeraCross.

Xlight focused on ultra-fast lasers as part of a tunable transponder. Xlight eventually merged with another Israeli start-up Civcom, which in turn was acquired by Padtek. 

The second start-up, TeraCross, looked at scheduling issues to improve throughput in Terabit routers. "The start-up led to a reference design that was plugged into routers in Cisco's Labs in Santa Clara [California]," says Sadot. "It was the first time a scheduler showed the capability to support a one Terabit data stream, and route in a sophisticated, global manner."

But with the downturn of the market, the need for terabit routers disappeared and the company folded.

Sadot's third and latest start-up, MultiPhy, also has its origins in Ben-Gurion's optical communications lab's work on enabling system upgrades without adding to system cost. 

MultiPhy started as a PON company looking at how to upgrade GPON and EPON to 10 Gigabit PON without changing the hardware. "The magic was to use previous-generation hardware which introduces distortion as it doesn't really fit this upgrade speed, and then to compensate by signal processing," says Sadot.

After several rounds of venture funding the company shifted its focus from PON, applying the concept to 100 Gigabit optical transmission instead.


The CTO Interviews

Gazettabyte has produced an ebook that includes the recent CTO interviews with Ciena, JDS Uniphase and Alcatel-Lucent.

 

To download the ebook, click here.

The book can be read using any ebook reader (the file has an .epub extension) but for best results please use the iBooks app on the iPhone or iPad.

The download file is large: 19M.  

 

The ebook was created using the Book Creator app.


What innovation gives you: Marcus Weldon Q&A Part II

Marcus Weldon discusses network optimisation, the future of optical access, Wikipedia, and why a one-year technology lead is the best a system vendor can hope for, yet that one year can make all the difference. Part II of the Q&A with the corporate CTO of Alcatel-Lucent.

 

Photo: Denise Panyik-Dale

"The advantage window [of an innovative product] is typically only a year... Knowing that year exists doesn't mean that there is not a tremendous focus on innovation because that year is everything."

Marcus Weldon, Alcatel-Lucent

 

Q: Where is the scope for a system vendor to differentiate in the network? Even developments like Alcatel-Lucent's lightRadio have been followed by similar announcements.

A: There is potential for innovation and often other vendors say they are innovating in the same way and it looks like everyone is innovating at once. But when you dig down, there are substantial innovations that still exist and persist and give vendors advantage.

The advantage window is typically only a year because of the power of the R&D communities each of us has and the ability to leverage a rich array of components driven by Moore's Law that can cause even a non-optimal design to be effective.

You don't have to be the world's expert to create a design that works, and one that works at a reasonable price point. So there is a toolbox of components and techniques that people have now that allows them to catch up quickly without producing their own components.

 

"In wireless, historically, when you win, you win for a decade."

 

Innovation still exists but I believe the advantage - from the time you have it to the time the competition has it - is typically a year, though it varies by domain, whereas perhaps it used to be three to five years because you had to design your own components.

Knowing that year exists doesn't mean that there is not a tremendous focus on innovation because that year is everything.

That year gets you mindshare and gets you the early trials in operator labs and networks. And that relationship you build, even if by the time you have completed that cycle your competitors have the same technology, you have a mindshare and an engagement with the potential customer that allows you to win the long-term business.

In wireless, historically, when you win, you win for a decade.

There is still quite a bit of proprietary stuff in wireless networks that makes it easier to keep going with one vendor if that vendor has a product that is still compatible with their needs.

So the whole argument is that if you can innovate and gain a one-year advantage, you can gain a 10-year advantage potentially in some market segments - particularly wireless - for your product sets.

That is why innovation is still important.

 

What is Alcatel-Lucent's strategic focus? What are the key areas where you are putting your investments?

Clearly lightRadio is a huge one. We have massively scaled our investment in that area, the focus on the lightRadio architecture, the building of those cube arrays and working with operators to move some of the baseband processing into a pooled configuration running in a cloud.

In the optical domain we are streamlining that portfolio and focussing it around a core set of assets. The 1830 product has the OTN/WDM (optical transport network/wavelength division multiplexing) combination switch in it, with 100 Gig moving to 400 Gig. So there is a strategic focus and a highly optimised, leading-edge optical portfolio which is a significant part of our R&D.

Photo: Denise Panyik-Dale

To be honest we had a bit of a mess with an optical portfolio left over from the [Alcatel and Lucent] merger and we had not rationalised it appropriately. We have completed that and have a leading-edge portfolio. If you look at our market share numbers, where we were beginning to fall into second place in optics we have turned that around.

The IP layer, of course, is another area. TiMetra, which is the company we bought and is the basis of our IP portfolio, had $20 million revenues when we bought it and now that is over a billion [dollar] business for us.

That team is really one of the biggest innovation engines in our company. It is doing all the packet processing, all the routing work for the company and has a very efficient R&D team that allows them to move into other areas.

So that is the team that is producing our mobile packet core. It is the team that owns our caching product. It is the team that increasingly owns some interest we have in the data centre space. That team is a big focus and the IP and Ethernet expertise in that team is propagating across our portfolio.

In wireline it is 10G PON and the cross-talk cancellation in DSL. Those are the big focusses for us in terms of R&D effort.

On the software applications and services side, we are beginning to focus around new big themes. We have been a little bit all over the place in the applications business but now we have recently redefined what our business is and it is going to have some focussed themes.

Payment is one area which will remain important but payment moving to smart charging of different services is one focus area. Next-generation communications which means IMS [IP Multimedia Subsystem] but also immersive communications - rendering a communications service in the cloud in a virtual environment composed of the end participants - is a big focus.

The thing called customer experience optimisation is a big focus. Here you leverage all the network intelligence, all the device intelligence, all the call centre intelligence and allow the operator to understand whether a user is likely to churn. And it can optimise the network based on the output of that same decision and say, 'Ah, this seems to be an issue, I'm going to optimise the network so that this user is happier'. That is a big focus for us.

We are beginning to be active in machine-to-machine [communication] as well as this concept of cloud networking, which is distributed cloud, stitching together the network and moving resources around in a distributed resource way, as opposed to the centralised, monolithic data centre, the way that other people are focussing on. 

 

Photo: Denise Panyik-Dale

"I also like Wikipedia, I have to say. It is 80-90% right and that is often good enough to help you quickly think through a topic"

 

How do you stay on top of developments across all these telecom segments? Are there business books you have found useful for your job?

I generally read technology treatises rather than business books. It is a failing of mine.

I also like Wikipedia, I have to say. It is 80-90% right and that is often good enough to help you quickly think through a topic. It gets you where you need to be and then you can go and look further into the detail.

So I would argue that Wikipedia is the secret tool of all CTOs and even product marketing managers and R&D managers.

I am a fan of the Freakonomics books. That is the sort of business book I like to read, looking at how to parse whether things are true causal relationships or mere correlations, and how one thing affects another. I find those interesting and they have a business sense to them that explains how incentives motivate people.

I'm very interested in that aspect because I think in a company, the big issue a CTO has is how to influence the rest of the company. Increasingly our CTO and strategy teams are combined under the same leader, so we are looking at how to effectively evolve a company using the right set of incentives and the right sort of technology bases; you still need to provide an incentive for people to move in that new direction, whatever it is you choose.

 

"TiMetra, which is the company we bought and is the basis of our IP portfolio, had $20 million revenues when we bought it and now that is over a billion [dollar] business for us."

 

I'm fascinated by how to influence people effectively to believe in your vision. Ultimately they have to do more than believe; they have to move towards it, and that will involve some sort of incentive scheme for the target teams you assign to a new project, so that it gets started quickly and then influences the rest of the company. We have done that a few times.

I don't spend a lot of time reading business books. I spend a lot of time reading technical stuff. I think about how to influence corporate behavior. And I get my financial understanding just reading around work, reading lightly on business topics and talking to colleagues in the strategy department.

My answer would be Wikipedia, Freakonomics and technical treatises. Those are the things I use. 

 

Much work is being done to optimise the network across the IP, Ethernet transport and photonic layers. Once this optimisation has been worked through, what will the network look like?

We were one of the founders of this vision of convergence to the IP-optical layer. Two or three years ago we announced something called the converged backbone transport which we sell as a solution, which is a tight interworking between the IP and optical layers. 

Traffic at the optical layer doesn't need to be routed; it is kept at the photonic layer. Only traffic that needs additional treatment is forwarded up to the routing layer, and there is communication back and forth between the two layers.

So, for example, the IP layer has coloured optics and it can be told by the optical layer which wavelength to select in order to send the traffic into the optical layer without the optics having to do optical-electrical-optical regeneration, it can just do optical switching.

We have this intelligent interaction between the optical and IP layers which offloads the IP layer to the optical layer, a lower cost-per-bit proposition. But it also informs the IP layer about what colour wavelength or perhaps what VLAN (Virtual LAN) the optical layer expects to see so that the optical layer can more efficiently process the traffic and not have to do packet processing.

This is the interesting part, and this is where the industry is not aligned yet. We do not think that building full IP-functionality into the optical layer or building full optical functionality into an IP layer makes sense. It becomes essentially a 'God Box' and over the years such platforms ended up becoming a Jack of all trades and a master of none, being compromised in every dimension of performance.

They can't packet process at the density you would want, they can't optically switch at the density you would want, and all you have done is pushed two things into one box for the sake of physical appearance and not for any advantage.

What we believe you should do is keep them in separate boxes - they have separate processors, separate control planes and they even operate at different speeds - and have them tightly interworking. So they are communicating traffic information back and forth taking the traffic as much as possible themselves before forwarding to the other box when it is appropriate for it to handle the traffic.

Most operators agree that in the end having two boxes optimised for each of their activities is the right architecture, communicating effectively back and forth and acting on your traffic as a pair.

 

You didn't mention layer two.

I mentioned VLANs. And layer two does appear in the optical layer because the optical layer has to become savvy enough to understand Ethernet headers, and VLANs are an example of that.

We do not believe that sophisticated packet processing has to appear in the optical layer because if you start doing that, you are building a large part of the router infrastructure - the 400Gbps FlexPath processor we announced for the core of the router, for example. If we move that into the optical layer, the optical layer essentially has the price of the routing layer and that is what you are trying to avoid.

You are trying to use the optical layer for the lower price-per-bit forwarding and the IP layer at the higher price per bit when it needs to be about higher price per bit. The pricing being capex and opex - the total cost of forwarding that bit: the power consumption of the box as well as the cost of the box.

 

Photo: Denise Panyik-Dale

"TDM-PON always is good enough, meaning it comes in at an attractive price - often more attractive than WDM - and it doesn't require you to rework the outside plant."

 

What comes after GPON and EPON and their 10 Gigabit PON  variants?

Remarkably, as always, just as we thought we were running out of TDM (time-division multiplexing) PON options, there is a lot of work on 40 Gig PON in Bell Labs and other research institutes.

There are schemes that allow you to do 40 Gig TDM-PON. So once again TDM will survive longer than you thought, but there are options being proposed that are hybrids of WDM and TDM.

For example, it is easy to imagine four wavelengths of 10G PON and that is a flavour of 40G PON. In FSAN (Full Service Access Network), they have something called XG-PON2 which is meant to have all the forward-looking technologies in there.

Now they are getting serious about that because they are done with 10G PON to some extent so let's focus on what is next. There are a lot of proposals going into FSAN for combinations of different technologies.

One is pure 40G TDM, another is four wavelengths of 10G, and there are many other hybrids that I've seen going in there. But there is a sense that it is a combination of a TDM and a WDM solution that might make it into standards for 40G, and it might not.

And the 'might not' is always because you have to redo your outside plant a little bit, because you need to take the power splitters for TDM and replace them with wavelength splitters. So there is some reluctance by operators to go back outside and upgrade their plants. Very often they say: 'Well, if I can just do TDM again, why don't I do it that way and reuse the infrastructure already deployed'.

That is always the tension between the two.

It is not that WDM ultimately isn't a good option - it probably is - but TDM always is good enough, meaning it comes in at an attractive price - often more attractive than WDM - and it doesn't require you to rework the outside plant.

But at some point there will be a transition where WDM becomes more economically attractive than TDM and merits going back to your outside plant and changing out the splitters you deployed.

 

It is not clear how operators will make money from an open network so how will Alcatel-Lucent make money from open application programming interfaces (APIs)?

It is something, in all honesty, we have wrestled with. I think we are coming to a firm view on this.

To start, I'll answer the operator question which is important since if they aren't making money, it is very unlikely we'll be making money.

Operators are beginning to see that open APIs are not just about allowing access to their network to what you could call the over-the-top long-tail of applications, although that is part of it.

Netflix, for example, could be over-the-top but you would not call it long-tail because it has got 23 million subscribers. Long-tail is any web service that the user accesses and that might want to access network resources - it might need location information, or it might want the operator to do billing or quality-of-service treatment.

But it is also [operators allowing access] to their own IT organisations so they can more rapidly develop their own set of core service applications. I think of it as the voice application or video application; they open it to their own IT department which makes it much easier to innovate.

They open it to their partners, and those might be verticals in healthcare, banking or some sort of commerce space where they are going to offer a business service. And it is an easier way for partners to innovate on the network. And then of course it is also open to third parties to innovate.

So operators are beginning to see that it is not just about exposing [the network] to the long tail, where it might be hard to imagine any revenue coming, because the business models of those long-tail providers may not even be profitable - so how can they pay for something if they are not even profitable?

But for their partners and own IT, it is a no-brainer. 

Think of it as the new SDP (service delivery platform) in some ways. It's a web service-type SDP where they expose their own capabilities internally and you can, using a web services approach - which you can think of as a lightweight workflow where you make calls in sequence - you don't have a complex workflow engine that you are using as an orchestrator. They [operators] see that this is a much more efficient way to innovate and build partnerships. 

So that makes it interesting to them. That means they will buy a platform that does that. So there is a certain amount of money for Alcatel-Lucent in selling a platform that does that. However the big money is probably not in the selling of that platform, it is in the selling of the network assets that go with it.

There is a business case around that which falls through to the underlying network because the network has capabilities that this layer is exposing. So there is clearly an interest we have in that.

It is very similar to our philosophy about IP TV.  We never really owned preeminent IP TV assets. We had middleware that we acquired from Telefonica that we evolved but most of the time we partnered with Microsoft. And the reason we decided to partner was  because we saw the real value in pulling through the network and tying into the middleware layer, but not needing to own the middleware layer.

There are people that believe it makes sense to own the exposure layer because it is a point-of-importance to our customers. But in fact a lot of the revenue is probably associated with the network that that layer pulls through.

There is one more part of the API layer that is very interesting.

When you sit in that layer, you see all transactions. And you don't now have to use DPI (deep-packet inspection) to see those transactions - you are actually processing them. Think of the API as a request for a resource by a user for an application. If you sit in that layer, you see all these requests, and you can understand the customer needs and the demand patterns much better than by having to do DPI.

DPI has bad PR because it seems like it brings something illicit. In the API layer you are doing nothing illicit as you are naturally part of processing the request, so you get to see and understand the traffic in an open and interesting way. So there is a lot of value to that layer.

Can you monetise that? Maybe.

Maybe there is a play in analytics or definition of the customer that allows you to sell an advertising proposition. But certainly it helps you optimise the network because you can understand the traffic flows.

If you have those analytics in the API layer you can dynamically optimise the network which is then another value proposition to better sell the network but also an optimisation engine that runs on top of that network.

So there are lots of pull-through effects for open APIs but there is money associated with the layer itself.

 

For part one of the Q&A, click here.

 


Intelligent networking: Q&A with Alcatel-Lucent's CTO

Alcatel-Lucent's corporate CTO, Marcus Weldon, in a Q&A with Gazettabyte. Here, in Part 1, he talks about the future of the network, why developing in-house ASICs is important and why Bell Labs is researching quantum computing.


Marcus Weldon (left) with Jonathan Segel, executive director in the corporate CTO Group, holding the lightRadio cube. Photo: Denise Panyik-Dale

Q:  The last decade has seen the emergence of Asian Pacific players. In Asia, engineers’ wages are lower while the scale of R&D there is hugely impressive. How is Alcatel-Lucent, active across a broad range of telecom segments, ensuring it remains competitive? 

A: Obviously we have a Chinese presence ourselves and also in India. It varies by division but probably half of our workforce in R&D is in what you would consider a low-cost country.  We are already heavily present in those areas and that speaks to the wage issue.

But we have decided to use the best global talent. This has been a trait of Bell Labs in particular but also of the company. We believe one of our strengths is the global nature of our R&D. We have educational disciplines from different countries, and different expertise and engineering foci etc. Some of the Eastern European nations are very strong in maths, engineering and device design. So if you combine the best of those with the entrepreneurship of the US, you end up with a very strong mix of an R&D population that allows for the greatest degree of innovation.

We have no intention to go further towards a low-cost country model. There was a tendency for that a couple of years ago but we have pulled back as we found that we were losing our innovation potential.

We are happy with the mix we have even though the average salary is higher as a result. And if you take government subsidies into account in European nations, you can get almost the same rate for a European engineer as for a Chinese engineer, as far as Alcatel-Lucent is concerned.

One more thing: Chinese university students, interestingly, work so hard to get into university that university is a period where they actually slack off. There are several articles in the media about this. During the four years that students spend at university, away from home for the first time, they tend to relax.

Chinese companies were complaining that the quality of engineers out of university was ever decreasing because of what was essentially a slacker generation, they were arguing, of overworked high-school students that relaxed at college. Chinese companies found that they had to retrain these people once employed to bring them to the level needed.  

So that is another small effect which you could argue is a benefit of not being in China for some of our R&D.

 

Alcatel-Lucent's Bell Labs: Can you spotlight noteworthy examples of research work being done?

Certainly the lightRadio cube stuff is pure Bell Labs. The adaptive antenna array design, to give you an example, was done between the US - Bell Labs' Murray Hill - and Stuttgart, so two non-Asian Bell Labs sites were involved in the innovations. These are wideband designs that can operate at any frequency and are technology agnostic, so they can operate for GSM, 3G and LTE (Long Term Evolution).

 

"We believe that next-generation network intelligence, 10-15 years from now, might rely on quantum computing"

 

The designs can also form beams so you can be very power-efficient. Power efficiency in the antenna is great as you want to put the power where it is needed and not just have omni (directional) as the default power distribution. You want to form beams where capacity is needed.

That is clearly a big part of what Bell Labs has been focussing on in the wireless domain, along with all the overlaying technologies that allow you to do beam-forming. Power amplifier efficiency is another area: it is another place you lose power and incur a higher operational expense. The magic inside that is another Bell Labs focus in wireless.

In optics, it is moving from 100 Gig to 400 Gig coherent. We are one of the early innovators in 100 Gig coherent and we are now moving forward to higher-order modulation and 400 Gig. 

On the DSL side it is the vectoring/crosstalk cancellation work, where we have developed our own ASIC because the market could not meet the need we had. The algorithms ended up producing a component that will be in the first release of our products to maintain a market advantage.

We do see a need for some specialised devices: the FlexPath FP3 network processor, the IPTV product, the OTN (Optical Transport Network) switch at the heart of our optical products - our own ASIC - and the vectoring/crosstalk cancellation engine in our DSL products. Those are the innovations Bell Labs comes up with and very often they lead to our portfolio innovations.

There is also a lot of novel stuff like quantum computing that is on the fringes of what people think telecoms is going to leverage but we are still active in some of those forward-looking disciplines.  

We have quite a few researchers working on quantum computing, leveraging some of the material expertise that we have to fabricate novel designs in our lab and then create little quantum computing structures.

 

Why would quantum computing be useful in telecom? 

It is very good for parsing and pattern matching. So when you are doing complex searches or analyses, then quantum computing comes to the fore.

We do believe there will be processing that will benefit from quantum computing constructs to make decisions in ever-increasingly intelligent networks. Quantum computing has certain advantages in terms of its ability to recognise complex states and do complex calculations. We believe that next-generation network intelligence, 10-15 years from now, might rely on quantum computing.

We don't have a clear application in mind other than we believe it is a very important space that we need to be pioneering.

 

"Operators realise that their real-estate resource - including down to the central office - is not the burden that it appeared to be a couple of years ago but a tremendous asset

 

You wrote a recent blog on the future of the network. You mentioned the idea of the emergence of one network with the melding of wireless and wireline, and that this will halve the total cost of ownership. This is impressive but is it enough?

The half number relates to the lightRadio architecture. There are many ingredients in it. The most notable is that traffic growth is accounted for in that halving of the total cost of ownership. We calculated what the likely traffic demand would be going forward: a 30-fold increase in five years.

Based on that growth, we computed what the lightRadio architecture - the adaptive antenna arrays, small cells and the move to LTE - delivers when you combine these things and map them onto the traffic demand. The number that comes out is that you can build the network for that traffic demand, with those new technologies, and still halve the total cost of ownership.

It really is quite a bit more aggressive than it appears because it is taking account of a very significant growth in traffic.

Can we build that network and still lower the cost? The answer is yes.
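As a rough, back-of-envelope check of what that claim implies - assuming only the two figures given above, a 30-fold traffic increase over five years and a halving of total cost of ownership - the cost per bit carried has to fall to roughly one sixtieth of today's:

    traffic_growth = 30.0   # five-year traffic multiplier quoted above
    tco_ratio      = 0.5    # new total cost of ownership relative to today
    years          = 5

    annual_traffic_growth = traffic_growth ** (1 / years)          # ~1.97x per year
    cost_per_bit_ratio    = tco_ratio / traffic_growth             # 1/60 of today's cost per bit
    annual_cost_decline   = 1 - cost_per_bit_ratio ** (1 / years)  # ~56% lower each year

    print(f"traffic grows ~{annual_traffic_growth:.2f}x per year")
    print(f"cost per bit falls to 1/{1 / cost_per_bit_ratio:.0f} of today's")
    print(f"i.e. roughly {annual_cost_decline:.0%} lower every year")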

 

You also say that intelligence will be increasingly distributed in the network, taking advantage of Moore's Law.  This raises two questions. First, when does it make sense to make your own ASICs?

When I say ASICs I include FPGAs. FPGAs are your own design just on programmable silicon and normally you evolve that to an ASIC design once you get to the right volumes.

There is a thing called an NRE (non-recurring engineering) cost, an up-front engineering cost to produce an ASIC in a fab. So you have to have a certain volume that makes it worthwhile to produce that ASIC, rather than keeping it in an FPGA, which is a more expensive component because it is programmable and has excess logic. The economics say an FPGA is the right way for sub-10,000 volumes per annum whereas for millions of parts you would do an ASIC.
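The break-even logic can be sketched in a few lines. The dollar figures below are assumptions chosen purely for illustration - the interview only gives the sub-10,000 versus millions rule of thumb - but the crossover calculation is the point:

    nre_cost       = 5_000_000   # assumed one-off NRE to produce the ASIC
    asic_unit_cost = 50          # assumed per-part cost of the ASIC
    fpga_unit_cost = 600         # assumed per-part cost of an equivalent high-end FPGA

    # Total cost crosses over where NRE + asic_unit * volume equals fpga_unit * volume.
    break_even = nre_cost / (fpga_unit_cost - asic_unit_cost)
    print(f"the ASIC pays for itself above ~{break_even:,.0f} units per programme")

    for volume in (1_000, 10_000, 1_000_000):
        fpga_total = fpga_unit_cost * volume
        asic_total = nre_cost + asic_unit_cost * volume
        cheaper = "FPGA" if fpga_total < asic_total else "ASIC"
        print(f"{volume:>9,} units: FPGA ${fpga_total:>11,}  ASIC ${asic_total:>11,}  -> {cheaper}")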

We work on both those types of designs. And generally, and I think even Huawei would agree with us, a lot of the early innovation is done in FPGAs because you are still playing with the feature set.

 

Photo: Denise Panyik-Dale

Often there is no standard at that point, there may be preliminary work that is ongoing, so you do the initial innovation pre-standard using FPGAs. You use a DSP or FPGA that can implement a brand new function that no one has thought of, and that is what Bell Labs will do. Then, as it starts becoming of interest to the standard bodies, you have it implemented in a way that tries to follow what the standard will be, and you stay in a FPGA for that process. At some point later, you take a bet that the functionality is fixed and the volume will be high enough, and you move to an ASIC.

So it is fairly commonplace for novel technology to be implemented by the [system] vendors. Only in the end stage, when it has become commoditised, does it move to commercial silicon, meaning a Broadcom or a Marvell.

Also around the novel components we produce there are a whole host of commercial silicon components from Texas Instruments, Broadcom, Marvell, Vitesse and all those others. So we focus on the components where the magic is, where innovation is still high and where you can't produce the same performance from a commercial part. That is where we produce our own FPGAs and ASICs.

 

Is this trend becoming more prevalent? And if so, is it because of the increasing distribution of intelligence in the network?

I think it is but only partly because of intelligence. The other part is speed. We are reaching the real edges of processing speed and generally the commercial parts are not at that nanometer of [CMOS process] technology that can keep up.

To give an example, our FlexPath processor for the router product we have is on 40nm technology. Generally ASICs are a technology generation behind FPGAs. To get the power footprint and the packet-processing performance we need, you can't do that with commercial components. You can do it in a very high-end FPGA but those devices are generally very expensive because they have extremely low yields. They can cost hundreds or thousands of dollars.

The tendency is to use FPGAs for the initial design but very quickly move to an ASIC because those [FPGA] parts are so rare and expensive; nor do they have the power footprint that you want. So if you are running at very high speeds - 100Gbps, 400Gbps - you run very hot, it is a very costly part and you quickly move to an ASIC.

Because of intelligence [in the network] we need to be making our own parts but again you can implement intelligence in FPGAs. The drive to ASICs is due to power footprint, performance at very high speeds and to some extent protection of intellectual property.

FPGAs can be reverse-engineered so there is some trend to use ASICs to protect against loss of intellectual property to less salubrious members of the industry.

 

Second, how will intelligence impact the photonic layer in particular?

You have all these dimensions you can trade off against each other. There are things like flexible bit-rate optics and flexible modulation schemes to accommodate that. There is the intelligence of soft-decision FEC (forward error correction), where you squeeze more out of a channel by not just making a hard decision - is it a '0' or a '1'? - but giving the decoder a hint as to whether it is likely to be a '0' or a '1'. That improves your effective signal-to-noise ratio, which allows you to go further with a given optics.
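A toy example of that hard-versus-soft distinction, assuming a simple two-level (±1) symbol in Gaussian noise; real coherent DSPs use far more elaborate codes, so this only illustrates the "hint to the decoder" idea:

    noise_sigma = 0.5   # assumed noise level on a +/-1 symbol

    def hard_decision(sample):
        return 1 if sample > 0 else 0          # forced early choice: '0' or '1'

    def soft_decision(sample):
        # log-likelihood ratio for a +/-1 symbol in Gaussian noise:
        # a large magnitude means 'confident', near zero means 'could be either'
        return 2 * sample / noise_sigma ** 2

    for sample in (0.9, 0.1, -0.05):
        print(f"received {sample:+.2f}: hard -> {hard_decision(sample)}, "
              f"soft hint (LLR) -> {soft_decision(sample):+.2f}")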

So you have several intelligent elements that you are going to co-ordinate to have an adaptive optical layer.

I do think that is the largest area.

Another area is smart or next-generation ROADMs - what we call colourless, contentionless and directionless.

There is a sense that as you start distributing resources in the network - cacheing resources and computing resources - there will be far more meshing in the metro network. There will be a need to route traffic optically to locally positioned resources - highly distributed data centre resources - and so there will be more photonic switching of traffic. Think of it as photonic offload to a local resource.

We are increasingly seeing operators realise that their real-estate resource - including down to the central office - is not the burden that it appeared to be a couple of years ago but a tremendous asset if you want to operate a private cloud infrastructure and offer it as a service, as you are closer to the user with lower latency and more guaranteed performance.

So if you think about that infrastructure, with highly distributed processing resources and offloading that at the photonic layer, essentially you can easily recognise that traffic needs to go to that location. You can argue that there will be more photonic switching at the edge because you don't need to route that traffic, it is going to one destination only.

This is an extension of the whole idea of the converged backbone architecture we have, with interworking between the IP and optical domains: you don't route traffic that you don't need to route. If you know it is going to a peering point, you can keep that traffic in the optical domain and not send it up through the routing core to be constantly routed when you know from the start where it is going.

So as you distribute computing and cacheing resources, you would offload in the optical layer rather than attempt to packet process everything.

There are smarts at that level too - photonic switching - as well as the intelligent photonic layer. 

 

For the second part of the Q&A, click here


Rational and innovative times: JDSU's CTO Q&A Part II

Brandon Collings, JDS Uniphase's CTO for communications and commercial optical products, talks about fostering innovation and what is coming after 100 Gigabit optical transmission. Part II of a Q&A with Gazettabyte.


"What happens after 100 Gig is going to be very interesting"

Brandon Collings (right), JDSU

 

How has JDS Uniphase (JDSU) adapted its R&D following the changes in the optical component industry over the last decade?

JDSU has been a public company for both periods [the optical boom of 1999-2000 and now]. The challenge JDSU faced in those times, when there was a lot of venture capital (VC) money flowing into the system, was that the money was sort of free money for these companies. It created an imbalance in that the money was not tied to revenue, which was a challenge for companies like JDSU that tie R&D spend to revenue. You also have much more flexibility [as a start-up] in setting different price points if you are operating on VC terms.

The situation now is very straightforward, rational and predictable.

There is not a huge army of R&D going on. That lack of R&D does not speed up the industry but what it does do is allow those companies doing R&D - and there is still a significant number - a lot of focus and clarity. It also requires a lot of partnership between us, our customers [equipment makers] and operators. The people above us can't just sit back and pick and choose what they like today from myriad start-ups doing all sorts of crazy things.

We very much appreciate this rational time. Visions can be more easily discussed, things are more predictable and everyone is playing from a similar set of rules.

 

Given the changes at the research labs of system vendors and operators, is there a risk that insufficient R&D is being done, impeding optical networking's progress?

It is hard to say absolutely not, as fewer people doing things can slow things down. But the work those labs did covered a wide space, including outside of telecom.

There is still a sufficient critical mass of research at places like Alcatel-Lucent Bell Labs, AT&T and BT; there is increasing work going on in new regions like Asia Pacific, and a lot more in and across Europe. It is also much more focussed - the number of researchers may have decreased but the task at hand remains.

 

"There are now design tradeoffs [at speeds higher than 100Gbps] whereas before we went faster for the same distance" 

 

How does JDSU foster innovation and ensure it is focussing on the right areas?

I can't say that we have at JDSU a process that ensures innovation. Innovation is fleeting and mysterious.   

We stay very connected to our key customers who are more on the cutting edge. We have very good personal and professional relationships with their key people. We have the same type of relationship with the operators. 

I and my team very regularly canvass and have open discussions about what is coming. What does JDSU see? What do you see? What technologies are blossoming? We talk through those sort of things. 

That isn't where innovation comes from. But what that can do is sow the seeds for the opportunity for innovation to happen. 

We take that information and cycle it through all our technology teams. The guys in the trenches - the material scientists, the free-space optics design guys - we try to educate them with as much of an understanding of the higher-level problems that ultimately their products, or the products they design into, will address.  

What we find is that these guys are pretty smart. If you arm them with a wider understanding, you get a much more succinct and powerful innovation than if you dictate to a material scientist: 'Here is what we need, come back when you are done.'

It is a loose approach, there isn't a process, but we have found that the more we educate our keys [key guys] to the wider set of problems and the wider scope of their product segments, the more they understand and the more they can connect their sphere of influence from a technology point of view to a new capability. We grab that and run with it when it makes sense.

It is all about communicating with our customers and understanding the environment and the problem, then spreading that as wide as we can so that the opportunity for innovation is always there. We then nurse it back into our customers.

 

Turning to technology,  you recently announced the integration of a tunable laser into an SFP+, a product you expect to ship in a year. What platforms will want a tunable laser in this smallest pluggable form factor?

The XFP has been on routers and OTN (Optical Transport Network) boxes - anything that has 10 Gig - and those interfaces have been migrated over to SFP+ for compactness and face plate space. There are already packet and OTN devices that use SFP+, and DWDM formats of the SFP+, to do backhaul and metro ring applications. The expectation is that while there are more XFP ports today, the next round of equipment will move to SFP+.

Certainly the Ciscos, Junipers and the packet guys are using tunable XFPs in great volume for IP over DWDM and access networks, but the more telecom-centric players riding OTN links or maybe native Ethernet links over their metro rings are probably the larger volume.  

 

What distance can the tunable SFP+ achieve?

The distances will be pretty much the same as the tunable XFP. We produce that in a number of flavours, whether it is metro or long-haul. The initial SFP+ will likely be the metro reaches, 80km and things like that.

 

What is the upper limit of the tunable XFP?

We produce a negative chirp version which can do 80km of uncompensated dispersion, and then we produce a zero chirp which is more indicative of long-haul devices.

In that case the upper limit is more defined by the link engineering and the optical signal-to-noise ratio (OSNR), the extent of the dispersion compensation accuracy and the fibre type. It starts to look and smell like a long-haul lithium niobate transceiver where the distances are limited by link design as much as by the transceiver itself. As for the upper limit, you can push 1000km.

 

An XFP module can accommodate 3.5W while an SFP+ is about 1.5W. How have you reduced the power to fit the design into an SFP+?

It may be a generation before we get to that MSA level so we are working with our customers to see what level they can tolerate. We'll have to hit a lot less than 3.5W but it is not clear that we have to hit the SFP+ MSA specification. We are already closer now to 1.5W than 3.5W.

 

 

"I can't say that we have at JDSU a process that ensures innovation. Innovation is fleeting and mysterious."

  

Semiconductors now play a key role in high-speed optical transmission. Will semiconductors take over more roles and become a bigger part of what you do?

Coherent transmission [that uses an ASIC incorporating a digital signal processor (DSP)] is not going away. There is a lot of differentiation at the moment in what happens in that DSP, but I think overall it is going to be a tool the system houses use to get the job done.

If you look at 10 Gig, the big advancement there was FEC [forward error correction] and advanced FEC. In 2003 the situation was a lot like it is today: who has the best FEC was something that was touted.

If you look at coherent technology, it is certainly a different animal but it is a similar situation: that is, the big enabler for 40 and 100 Gig. Coherent is advanced technology, enhanced FEC was advanced technology back then, and over time it turned into a standardised, commoditised piece that is central and ubiquitously used for network links.

Coherent has more diversity in what it can do but you'll see some convergence and commoditisation of the technology. It is not going to replace or overtake the importance of photonics. In my mind they play together intimately; you can't replace the functions of photonics with electronics any time soon.

From a JDSU perspective, we have a lot of work to do because the bulk of the cost, the power and the size is still in the photonics components. The ASIC will come down in power, it will follow Moore's Law, but we will still need to work on all that photonics stuff because it is a significant portion of the power consumption and it is still the highest portion of the cost. 

 

JDSU has made acquisitions in the area of parallel optics. Given there is now more industry activity here, why isn't JDSU more involved in this area? 

We have been intermittently active in the parallel optics market.

The reality is that it is a fairly fragmented market: there are a lot of applications, each one with its own requirements and customer base. It is tough to spread one platform product around these applications. That said, parallel optics is now a mainstay for 40 and 100 Gig client [interfaces] and we are extremely active in that area: the 4x10, 4x25 and 12x10G [interfaces]. So that other parallel optics capability is finding its way into the telecom transceivers. 

We do stay active in the interconnect space but we are more selective in what we get engaged in. Some of the rules there are very different: the critical characteristics for chip interconnect are very different to transceivers, for example. It may be much better to have on-chip optics versus off-chip optics. Obviously that drives completely different technologies so it is a much more cloudy, fragmented space at the moment.

We are very tied into it and are looking for those proper opportunities where we do have the technologies to fit into the application.

 

How does JDSU view the issues of 200, 400 Gigs and 1 Terabit optical transmission? 

What happens after 100 Gig is going to be very interesting. 

Several things have happened. We have used up the 50GHz [DWDM] channel, we can't go faster in the 50GHz channel - that is the first barrier we are bumping into. 

Second, we're finding there is a challenge to do electronics well beyond 40 Gigabit. You start to get into electronics that have to operate at much higher rates - analogue-to-digital converters, modulator drivers - you get into a whole different class of devices.

Third, we have used all of our tools: we have used FEC, we are using soft-decision FEC and coherent detection. We are bumping into the OSNR problem and we don't have any more tools to run line rates that have less power relative to noise and somehow recover that with some magic technology, the way FEC did at 10 Gig, and soft-decision FEC and coherent did at 40 and 100 Gig.

This is driving us into a new space where we have to do multi-carrier and bigger channels. It is opening up a lot of flexibility because, well, how wide is that channel? How many carriers do you use? What type of modulation format do you use? 

What format you use may dictate the distance you go and inversely the width of the channel. We have all these new knobs to play with and they are all tradeoffs: distance versus spectral efficiency in the C-band. The number of carriers will drive potentially the cost because you have to build parallel devices. There are now design tradeoffs whereas before we went faster for the same distance.

We will be seeing a lot of devices and approaches from us and our customers that provide those tradeoffs flexibly so the carriers can do the best they can with what mother nature will allow at this point.

That means transponders that do four carriers: two of them do 200 Gig nicely packed together but only achieve a few hundred kilometres, while a couple of other carriers right next door go a lot further but are a little bit wider, so that density-versus-reach tradeoff is in play. That is what is going to be necessary to get the best of what we can do with the technology.
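A small sketch of that trade-off space, using assumed reach and channel-width figures (they are not JDSU numbers) simply to show how the knobs interact:

    carrier_types = {
        #  name                 (Gbaud, bits/symbol/polarisation, channel GHz, assumed reach km)
        "DP-16QAM (dense)":     (25, 4, 37.5,  500),
        "DP-QPSK (long haul)":  (25, 2, 50.0, 2000),
    }

    for name, (gbaud, bits, ghz, reach) in carrier_types.items():
        rate = gbaud * bits * 2          # two polarisations; FEC/framing overhead ignored
        print(f"{name}: {rate} Gbps per carrier in {ghz} GHz "
              f"({rate / ghz:.1f} b/s/Hz), ~{reach} km")

    # The mixed four-carrier transponder described above: two dense carriers for
    # short links plus two wider, longer-reach carriers in the same superchannel.
    mix = ["DP-16QAM (dense)"] * 2 + ["DP-QPSK (long haul)"] * 2
    total_gbps = sum(carrier_types[m][0] * carrier_types[m][1] * 2 for m in mix)
    total_ghz  = sum(carrier_types[m][2] for m in mix)
    print(f"mixed superchannel: {total_gbps} Gbps in {total_ghz} GHz")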

That is the transmission side. On the transport side - the ROADMs and amplifiers - they have to accommodate these quirky new formats and reach requirements.

We need to get amplifiers to get the noise down. So this is introducing new concepts like Raman and flex[ible] spectrum to get the best we can do with these really challenging requirements like trying to get the most reach with the greatest spectral efficiency.

 

How do you keep abreast of all these subject areas besides conversations with customers?

It is a challenge; there aren't many companies in this space with a portfolio broader than JDSU's optical comms portfolio.

We do have a team and the team has its area of focus, whether it is ROADMs, modulators, transmission gear or optical amplifiers. We segment it that way but it is a loose segmentation so we don't lose ideas crossing boundaries. We try to deal with the breadth that way.

Beyond that, it is about staying connected with the right people at the customer level, having personal relationships so that you can have open discussions. 

And then it is knowing your own organisation, knowing who to pull into a nebulous situation that can engage the customer, think on their feet and whiteboard there and then rather than [bringing in] intelligent people that tend to require more of a recipe to do what they are doing. 

It is all about how to get the most from each team member and creating those situations where the right things can happen.

 

For Part I of the Q&A, click here


Q&A with JDSU's CTO

In Part 1 of a Q&A with Gazettabyte, Brandon Collings, JDS Uniphase's CTO for communications and commercial optical products, reflects on the key optical networking developments of the coming decade, how the role of optical component vendors is changing and next-generation ROADMs. 


"For transmission components, photonic integration is the name of the game. If you are not doing it, you are not going to be a player"

Brandon Collings (left), JDSU

 

Q: What are the key optical networking trends of the next decade?

A: The two key pieces of technology at the photonic layer in the last decade were ROADMs [reconfigurable optical add-drop multiplexers] and the relentless reduction in size, cost and power of 10 Gigabit transponders.

If you look at the next decade, I see the same trends occupying us.

We are seeing a whole other generation of reconfigurable networks - this whole colourless, directionless, flexible spectrum - all this stuff is coming and it is requiring a complete overhaul of the transport network. We have to support Raman [amplifiers] and we need to support more flexible [optical] channel monitors to deal with flexible spectrum.

We have to overhaul every aspect of the transport system: the components, design, capability, usability and the management. It may take a good eight years for the dust to settle on how that all plays out.

The other piece is transmission size, cost and power.

Right now a 40 Gig or a 100 Gig transponder is large, power-hungry and extremely expensive. Ironically they don't look too different to a 10 Gig transponder in 1998 and you see where that has gone.

You have seen our recent announcement [a tunable laser in an SFP+ optical pluggable module]; that whole thing is now tunable, the size of your pinkie and costs a fraction of what it did in 1998.

I expect that same sort of progression to play out for 100 Gig, and we'll start to get into 400 Gig and some flexible devices in between 100 and 400 Gig.

The name of the game is going to be getting size, cost and power down to ensure density keeps going up and the cost-per-bit keeps going down; all that is enabled by the photonic devices themselves.

 

Is that what will occupy JDSU for the next decade?

This is what will occupy us at the component level. As you go up one level - and this will impact us more indirectly than it will our customers - we are seeing this ramp of capacity, driven by the likes of video, where the willingness to pay per bit is dropping through the floor but the cost to deliver that bit is dropping a lot less.

Operators are caught in the middle and they are after efficiency and cost advantages when operating their networks. We are seeing a re-evaluation of the age-old principles in how networks are operated: How they do protection, how they offer redundancy and how they do aggregation.

People are saying: 'Well, the optical layer is actually fairly cheap compared to layers two and three. Let's see if we can't ask more of the somewhat cheaper network and maybe pull some of the complexity and requirements out of the upper layers and make that simpler, to end up with an overall cheaper and easier network to operate.'

That is putting a lot of feature requirements on us at the hardware level to build optical networks that are more capable and do more, as well as on our customers that must make that network easier to operate. 

That is a challenge that will be a very interesting area of differentiation. There are so many knobs to turn as you try to build a better delivery system optimised over packets, OTN [Optical Transport Network] and photonics.

 

Are you noting changes among system vendors to become more vertically integrated?

I've heard whisperings of vendors wanting to figure out how they could be more vertically integrated.  That's because: 'Well hey, that could make our products cheaper and we could differentiate'. But I think the reality is moving in the opposite direction.

To build differentiated, compelling products, you have to have expertise, capability and technology control all the way down almost to the materials level. Take for example the tunable XFP; that whole thing is enabled by complete technology ownership of an indium-phosphide fab and all the manufacturing that goes around it. That is a herculean effort.

It is tough to say they [system vendors] want to be vertically integrated because to do so effectively you need just a gigantic organisation.

JDSU is vertically integrated. We have an awful lot of technology and a very large base of manufacturing infrastructure, expertise and know-how. We can produce competitive products because for this particular application we use a PLC [planar lightwave circuit], and for that one, gallium arsenide. We can do this because we spread all this infrastructure, operation and company size across a wide customer base.

Increasingly this also extends into adjacent markets like solar, gesture recognition and optical interconnects. These adjacent spaces would not be something that a system vendor would probably want to get into.

The bottom line is that it [the trend] is actually going in the opposite direction because the level, size and scope of the vertical integration would need to be very large and completely non-trivial if system vendors want to be differentiating and compelling. And the business case would not work very well because it would only be for their product line.

 

"No one says exactly what they will pay for next-gen ROADMs but all can articulate why they want it and what it will do in general terms"

 

 

 

 

Is this system vendor trend changing the role of optical component players?

At our level of the business, we and our competitors are looking to be more vertically integrated: semiconductors all the way to line cards.

We've proven with things like our Super Transport Blade that the more you have control over, the more knobs you can turn to create new things when merging multiple functions.

Instead of selling a lot of small black boxes and having the OEMs splice them together, we can integrate those functions and make a more compact and cost-effective solution. But you have to start with the ability to make all those blocks yourself.

Whether it is a line card, a tunable XFP or a 100 Gig module, the more you own and control, the more you can integrate and the more effective your solution will be.  This is playing out at the components level because you create more compelling solutions the more functional integration you accomplish.

 

How do you avoid competing with your customers? If system vendors are just putting cards together, what are they doing? Also, how do you help each vendor differentiate?

It is very true. There are several system vendors that don't build their line cards anymore. They have chosen to do so because they realise that from a design and manufacturing perspective, they don't add much value or even subtract value because we can do more functional integration and they may not be experts in wavelength-selective switch (WSS) construction and various other things.

A fair number of them basically acknowledge that giving these blades to the people who can do them is a better solution for them.

How they differentiate can go two ways.

First, they don't just say: 'Build me a ROADM card.' We work very closely; they are custom-designed cards for each vendor. They specify what the blade will do and they participate intimately in its design. They make their own choices and put in their own secret sauce.

That means we have very strong partnerships with these operations, almost to the extent that we are part of their development organisations.

The things above the photonic layer are collectively probably more important than the photonic layer itself. Usability, multiplexing, aggregation, security - all the things that go into the higher levels of a network - this is where system vendors are differentiating.

They can still differentiate at the photonic layer by building strong partnerships with technology engines like JDSU and it allows them to focus more resources at the upper levels where they can differentiate their complete network offering.

 

"The new generation of reconfigurable networks are not able to reuse anything that is being built today" 

 

 

 

 

What is happening with regard to photonic integration?

For transmission components, photonic integration is the name of the game. If you are not doing it, you are not going to be a player.

If you look at JDSU's tunable [laser] XFP, that is 100% photonic integration. Yes, we build an ASIC to control the device but it is just about getting a little bit extra volume and a little bit more power. The whole thing is about monolithic integration of a tunable laser, the modulator and some power control elements. And that is just 10 Gig.

If you look at 40 Gig, today's modulators are already putting in heavy integration and it is just the first round. These dual-polarisation QPSK modulators, they integrate multiple modulators - one for each polarisation as well as all the polarisation combining functionality, all into one device using waveguide-based integration. Today that is in lithium niobate, which is not a small technology.

100 Gig looks similar, it is just a little bit faster and when you go to 400 Gig, you go multi-carrier which means you make multiple copies of this same device.

So getting these things down in size, cost and power means photonic integration. And just the way 10 Gig migrated from lithium niobate down to monolithic indium phosphide, the same path is going to be followed for 40, 100 and 400 Gig.

It may be more complicated than 10 Gig but we are more advanced with our technology.

 

Operators are asking for advanced ROADM capabilities while system vendors are willing to provide such features but only once operators will pay for them. Meanwhile, optical component vendors must do significant ROADM development work without knowing when they will see a return.  How does JDSU manage this situation and is there a way of working smart here?

I don't think there is a terrifically clever way to look at this other than to say that we speak very carefully and closely with our customers.

Discussions about these next-generation ROADMs have been going on for three or four years now. We also meet operators globally and ask them very similar questions about when, how and to what extent their interest in these various features [colourless, directionless, contentionless, gridless (flexible spectrum)] lies.

We are a ROADM leader; this is a ROADM question so we'd be making critical decisions if we decided not to invest in this area. We have decided this is going to happen and we have invested very heavily in this space.

It is true; there is not a market there right now.

With anything that is new, if you want to be a market leader you can't enter a market that exists, otherwise you'll be late. So through those discussions with our customers and the trust we have with them, and understanding where their customers and their problems lie, we are confident in that investment.

If you look back at the initial round of ROADMs, the chitchat was the same. When WSSs and ROADMs first came out, the reaction was: 'Wow, these things are really expensive; why would I want this compared to a set of fixed filters which back then cost $100 a pop?'

The commentary on cost was all in that flavour but once they became available and the costs were known, the operators started adopting them because the operators could figure out how they could benefit from the flexibility. Today ROADMs are just about in every network in the world.

We expect the same track to follow. No one is going to say: 'Yes, I’m going to pay twice for this new functionality' because they are being cagey of course.

We are still in the development phase. We are starting to get to the end of that, so the costs and real capabilities - all enabled by the devices we are developing - are becoming clear enough that our customers can now go to their customers and say: 'Here's what it is, here's what it does and here's what it costs.'

Operators will require time to get comfortable with that and figure out how that will work in their respective networks.   

We have seen consistent interest in these next-generation ROADM features. No one says exactly what they will pay for it but all can articulate why they want it and what it will do in general terms.

 

You say you are starting to get to the end of the development phase of these next-generation ROADMs. What challenges remain?

The new generation of reconfigurable networks are not able to reuse anything that is being built today whether it is from JDSU or Finisar, whether it is MEMS or LCOS (liquid crystal on silicon).

All the devices that are on the shelf today simply are not adequate or you end up with extremely expensive solutions.

This requires us to have a completely new generation of products in the WSS and multiplexing/demultiplexing space - all the devices that will do the functions done today by AWGs or by a 1x9 WSS. What is under development just looks completely different.

They are still WSSs but they use different technologies so without saying exactly what they are and what they do, it is basically a whole new platform of devices.

 

Can you say when we will know what these look like?

I think the general architecture is fairly well known.

The exact details of the devices and components are still not publicly being talked about but it is the general combination of high-port-count WSSs that support flexible spectrum, fast switching and low loss, and are being used in a route-and-select approach rather than a broadcast-and-select one. That is the node building block.

Then there are the multicast switches being built, and fibre amplifier arrays - what comprise the colourless, directionless and contentionless multiplexing and demultiplexing.

That is the general architecture - it seems that that is what everyone is settling on. The devices to support that are what the industry is working on. 

 

For Part II of the Q&A with Brandon Collings, click here 


Q&A: Ciena’s CTO on networking and technology

In Part 2 of the Q&A, Steve Alexander, CTO of Ciena, shares his thoughts about the network and technology trends.

Part 2: Networking and technology

"The network must be a lot more dynamic and responsive"

Steve Alexander, Ciena CTO

 

Q. In the 1990s dense wavelength division multiplexing (DWDM) was the main optical development while in the '00s it was coherent transmission. What's next?

A couple of perspectives.

First, the platforms that we have in place today: III-V semiconductors for photonics and collections of quasi-discrete components around them - ASICs, FPGAs and pluggables - that is the technology we have.  We can debate, based on your standpoint, how much indium phosphide integration you have versus how much silicon integration.

Second, the way that networks built in the next three to five years will differentiate themselves will be based on the applications that the carriers, service providers and large enterprises can run on them.

This will be in addition to capacity - capacity is going to make a difference for the end user and you are going to have to have adequate capacity with low enough latency and the right bandwidth attributes to keep your customers. Otherwise they migrate [to other operators], we know that happens.

You are going to start to differentiate based on the applications that the service providers and enterprises can run on those networks. I see the value of networking changing from a hardware-based problem-set to one largely software-based.

I'll give you an analogy: You bought your iPhone, I'll claim, not so much because it is a cool hardware box - which it is - but because of the applications that you can run on it.  

The same thing will happen with infrastructure. You will see the convergence of the photonics piece and the Ethernet piece, and you will be able to run applications on top of that network that will do things such as move large amounts of data, encrypt large amounts of data, set up transfers for the cloud, assemble bandwidth together so you can have a good cloud experience for the time you need all that bandwidth and then that bandwidth will go back out, like a fluid, for other people to use.

That is the way the network is going to have to operate in future. The network must be a lot more dynamic and responsive.

 

How does Ciena view 40 and 100 Gig and in particular the role of coherent and alternative transmission schemes (direct detection, DQPSK)? Nortel Metro Ethernet Networks (MEN) was a strong coherent adherent yet Ciena was developing 100Gbps non-coherent solutions before it acquired MEN.

If you put the clock back a couple of years, where were the classic Ciena bets and what were the classic MEN bets?

We were looking at metro, edge of network, Ethernet, scalable switches, lots of software integration and lots of software intelligence in the way the network operates. We did not bet heavily on the long distance, submarine space and ultra long-haul. We were not very active in 40 Gig, we were going straight from 10 to 100 Gig.

Now look at the bets the MEN folks placed: very strong on coherent and applying it to 40 and 100 Gig, strong programme at 100 Gig, and they were focussed on the long-haul. Well, to do long-haul when you are running into things like polarisation mode dispersion (PMD), you've got to have coherent. That is how you get all those problems out of the network. 

Our [Ciena's] first 100 Gig was not focussed on long-haul; it was focussed on how you get across a river to connect data centres.

When you look at putting things together, we ended up stopping our developments that were targeted at competing with MEN's long-haul solutions. They, in many cases, stopped developments coming after our switching, carrier Ethernet and software integration solutions. The integration worked very well because the intent of both companies was the same.

Today, do we have a position?  Coherent is the right answer for anything that has to do with physical propagation because it simplifies networks. There are a whole bunch of reasons why coherent is such a game changer.

The reason why first 40 Gig implementations didn't go so well was cost. When we went from 10 to 40 Gig, the only tool was cranking up the clock rate.

At that time, once you got to 20GHz you were into the world of microwave. You leave printed circuit boards and normal manufacturing and move into a world more like radar. There are machined boxes, micro-coax and a very expensive manufacturing process.  That frustrated the desires of the 40 Gig guys to be able to say: Hey, we've got a better cost point than the 10 Gig guys.

Well, with coherent the fact that I can unlock the bit rate - the signalling rate - from the baud rate - the symbol rate - that is fantastic. I can stay at 10GHz clocks and send four bits per symbol - that is 40Gbps.

My basic clock rate, which determines manufacturing complexity, fabrication complexity and the basic technology, stays with CMOS, which everyone knows is a great place to play. Apply that same magic to 100 Gig.  I can send 100Gbps but stay at a 25GHz clock - that is tremendous, that is a huge economic win.
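The arithmetic behind those numbers, assuming dual-polarisation QPSK (two bits per symbol on each of two polarisations, four bits in total) and ignoring FEC and framing overhead:

    def line_rate_gbps(baud_ghz, bits_per_symbol=4):
        # line rate = symbol (baud) rate x bits carried per symbol
        return baud_ghz * bits_per_symbol

    print(line_rate_gbps(10))   # 10 Gbaud -> 40 Gbps, with electronics at a 10GHz clock
    print(line_rate_gbps(25))   # 25 Gbaud -> 100 Gbps, with electronics at a 25GHz clock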

Coherent lets you continue to use the commercial merchant silicon technology base, which is where you want to be. You leverage the year-on-year cost reduction - a world unto itself that is driving the economics - and we can leverage that.

So you get economics with coherent. You get improvement in performance because you simplify the line system - you can pop out the dispersion compensation, and you solve PMD with maths. You also get tunability. I'm using a laser - a local oscillator at the receiver - to measure the incoming laser. I have a tunable receiver that has a great economic cost point and makes the line system simpler.

Coherent is this triple win. It is just a fantastic change in technology.

 

What is Ciena’s thinking regarding bringing in-house sub-systems/ components (vertical integration), or the idea of partnerships to guarantee supply? One example is Infinera that makes photonic integrated circuits around which it builds systems. Another is Huawei that makes its own PON silicon.

The two examples are good ones.

With Huawei you have to treat them somewhat separately as they have some national intent to build a technology base in China. So they are going to make decisions about where they source components from that are outside the normal economic model. 

Anybody in the systems business that has a supply chain periodically goes through the classic make-versus-buy analysis. If I'm buying a module, should I buy the piece-parts and make it? You go through that portion of it. Then you look within the sub-system modules and the piece-parts I'm buying and say: What if I made this myself? It is frequently very hard to say if I had this component fully vertically integrated I'd be better off.

A good question to ask about this is: Could the PC industry have been better if Microsoft owned Intel? Not at all.

You have to step back and say: Where does value get delivered with all these things? A lot of the semiconductor and component pieces were pushed out [by system vendors] because there was no way to get volume, scale and leverage. Unless you corner the market, that is frequently still true. But that doesn't mean you don't go through the make-versus-buy analysis periodically.

Call that the tactical bucket.  

The strategic one is much different. It says: There is something out there that is unique and so differentiated, it would change my way of thinking about a system, or an approach or I can solve a problem differently.

 

"Coherent is this triple win. It is just a fantastic change in technology" 

 

 

 

 

 

 

 

If it is truly strategic and can make a real difference in the marketplace - not a 10% or 20% difference but a 10x improvement - then I think any company is obligated to take a really close look at whether it would be better being brought inside or entering into a good strategic partnership arrangement.

Certainly Ciena evaluates its relationships along these lines.

 

Can you cite a Ciena example?

Early on, when Ciena started, there was a technology at the time that was differentiated, and that was fibre Bragg gratings. We made them ourselves. Today you would buy them.

You look at it at points in time. Does it give me differentiation? Or source-of-supply control? Am I at risk? Is the supplier capable of meeting my needs? There are all those pieces to it.

 

Optical Transport Network (OTN) integrated versus standalone products. Ciena has a standalone model but plans to evolve to an integrated solution. Others have an integrated product, while others still launched a standalone box and have since integrated. Analysts say such strategies confuse the marketplace. Why does Ciena believe its strategy is right?

Some of this gets caught up in semantics.

Why I say that is because today we have boxes that you would say are switches but into which you can put pluggable coloured optics. Whether you would call that integrated probably depends more on what the competition calls it.

The place where there is most divergence of opinion is in the network core.

Normally people look at it and say: one big box that does everything would be great - that is the classic God-Box problem. When we look at it - and we have been looking at it on and off for 15 years now - if you try to combine every possible technology, there are always compromises.

The simplest one we can point to now: If you put the highest performance optics into a switch, you sacrifice switch density.

You can build switches today that, because of the density of the switching ASICs, are I/O-port constrained: you can't get enough connectors on the face plate to talk to the switch fabric. That will change with time; there is always ebb and flow. In the past that would not have been true.

If I make those I/O ports datacom pluggables, that is about as dense as I'm going to get. If I make them long-distance coherent optics, I'm not going to get as many because coherent optics take up more space. In some cases, you can end up cutting your port density on the switch fabric by half. That may not be the right answer for your network depending on how you are using that switch.
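Purely as an illustration of that face-plate arithmetic - the per-shelf port counts below are assumptions, not Ciena figures:

    port_rate_gbps = 100
    datacom_ports  = 20   # assumed short-reach pluggable ports per face plate
    coherent_ports = 10   # assumed long-distance coherent ports per face plate

    print("datacom pluggables:", datacom_ports * port_rate_gbps, "Gbps into the switch fabric")
    print("coherent optics   :", coherent_ports * port_rate_gbps, "Gbps into the switch fabric")
    # Halving the port count halves the traffic you can feed the fabric, which is
    # why bolting the highest-performance optics onto the densest switch is a compromise.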

We have both technologies in-house, and in certain applications we will do that. Years ago we put coloured optics on CoreDirector to talk to CoreStream; that was specific to certain applications. The reason is that in most networks, people try to optimise switch density and transport capacity, and these are different levers. If you bolt those levers together you often don't get the optimal point.

 

Any business books you have read that have been particularly useful for your job?

The Innovator's Dilemma (by Clayton Christensen). What is good about it is that it has a couple of constructs that you can use with people so they will understand the problem. I've used some of those concepts and ideas to explain where various industries are, where product lines are, and what is needed to describe things as innovation.

The second one is called: Fad Surfing in the Boardroom (by Eileen Shapiro). It is a history of the various approaches that have been used for managing companies. That is an interesting read as well.

 

Click here for Part 1 of the Q&A 

 

