Books in 2013 - Part 2

Alcatel-Lucent's President of Bell Labs and CTO, Marcus Weldon, on the history and future of Bell Labs, and titles for Christmas; Steve Alexander, CTO of Ciena, on underdogs, connectedness, and deep-sea diving; and Dave Welch, President of Infinera, on how people think, and an extraordinary WWII tale: the second part of Books 2013.

 

Steve Alexander, CTO of Ciena

David and Goliath: Underdogs, Misfits, and the Art of Battling Giants by Malcolm Gladwell

I’ve enjoyed some of Gladwell’s earlier works such as The Tipping Point and Outliers: The Story of Success. You often have to read his material with a bit of a skeptic's eye since he usually deals with people and events that are at least a standard deviation or two away from whatever is usually termed “normal.” In this case he makes the point that overcoming adversity (and it can come in many forms) is helpful in achieving extraordinary results. It also reminded me of the many people who were skeptical about Ciena’s initial prospects back in the mid-'90s when we first came to market as a “David” in a land of giant competitors. We clearly managed to prosper and have now outlived some of the giants of the day.

Overconnected: The Promise and Threat of the Internet by William Davidow. 

I downloaded this to my iPad a while back and finally got to read it on a flight back from South America. What had I been discussing with customers on my trip? Improving network connections, of course. I enjoyed it quite a bit because I see some of his observations within my own family. The desire to “connect” whenever something happens and the “positive feedback” that can result from an over-rich set of connections can be both quite amusing and a little scary! I don’t believe that all of the events the author attributes to being overconnected are really as cause-and-effect driven as he portrays, but I found the possibilities for fads, market bubbles, and market collapses entertaining.

For another insight into such extremes see Extraordinary Popular Delusions and the Madness of Crowds by Charles Mackay, first published in the 1840s. We, as a species, have been a bit wacky for a long time.

Shadow Divers: The True Adventure of Two Americans Who Risked Everything to Solve One of the Last Mysteries of World War II  by Robert Kurson. 

Having grown up in the New York / New Jersey area, and having listened to stories from my parents about the fear of sabotage in World War II (Google Operation Pastorius for some background) and from my grandparents, who had experienced the Black Tom explosion during WWI, this book was a “don’t put it down till done” for me. I found it by accident when browsing a used bookstore. It’s available on Kindle and is apparently somewhat controversial because another diver has written a rebuttal to at least some of what was described. It is a great example of what it takes to both dive deep and solve a mystery.

 

David Welch, President, Infinera

Here is my cut.  The first three books offer a perspective on how people think and I apply it to business.

My non-work related book is Unbroken: A World War II Story of Survival, Resilience, and Redemption by Laura Hillenbrand.

Unfortunately, I rarely get time to read books, so the picking can be thin at times.

 

Marcus Weldon, President of Bell Labs and CTO, Alcatel-Lucent

I am currently re-reading Jon Gertner's history of Bell Labs, The Idea Factory: Bell Labs and the Great Age of American Innovation, which should be no surprise as I have just inherited the leadership of this phenomenal place. Much of what he observes is still highly relevant today and will inform the future that I am planning.

I joined Bell Labs in 1995 as a post-doctoral researcher in the famous, Nobel Prize-winning Physics Division (Div111, as it was known) and so experienced much of this first hand. In particular, I recall being surrounded by the most brilliant, opinionated, odd, inspired, collaborative, competitive, driven, relaxed set of people I had ever met, all with the shared goal of solving the biggest problems in information and telecommunications.

Having recently returned to the 'bosom of Bell', I find that, remarkably, much of that environment and pool of talent still remains. And that is hugely exciting as it means that we still have the raw ingredients for the next great era of Bell Labs. My hope is that 10 years from now Gertner will write a second edition or updated version of the tale that includes the renewed success of Bell Labs, and not just the historical successes.

On the personal front, I am reading whatever my kids ask me to read them. Two of the current favourites are Turkey Claus, about a turkey trying to avoid becoming the centrepiece of a Christmas feast by adapting and trying various guises, and Pete the Cat Saves Christmas, about an ailing feline Claus who requires an average cat, Pete, to save the big day.

I am not sure there is a big message here, but perhaps it is that 'any one of us can be called to perform great acts, and can achieve them, and that adaptability is key to success'.  And of course, there is some connection in this to the Bell Labs story above, so I will leave it there!

 

Books in 2013: Part 1, click here


What innovation gives you: Marcus Weldon Q&A Part II

Marcus Weldon discusses network optimisation, the future of optical access, Wikipedia, and why a one-year technology lead is the best a system vendor can hope for, yet that one year can make all the difference. Part II of the Q&A with the corporate CTO of Alcatel-Lucent.

 

Photo: Denise Panyik-Dale

"The advantage window [of an innovative product] is typically only a year... Knowing that year exists doesn't mean that there is not a tremendous focus on innovation because that year is everything."

Marcus Weldon, Alcatel-Lucent

 

Q: Where is the scope for a system vendor to differentiate in the network? Even developments like Alcatel-Lucent's lightRadio have been followed by similar announcements.

A: There is potential for innovation, and often other vendors say they are innovating in the same way, so it looks like everyone is innovating at once. But when you dig down, there are substantial innovations that still exist and persist and give vendors advantage.

The advantage window is typically only a year because of the power of the R&D communities each of us has and the ability to leverage a rich array of components driven by Moore's Law that can cause even a non-optimal design to be effective.

You don't have to be the world's expert to create a design that works, and one that works at a reasonable price point. So there is a toolbox of components and techniques that people have now that allows them to catch up quickly without producing their own components.

 

"In wireless, historically, when you win, you win for a decade."

 

Innovation still exists, but I believe the advantage, from the time you have it to the time the competition has it, is typically a year (it varies by domain), whereas perhaps it used to be three to five years because you had to design your own components.

Knowing that year exists doesn't mean that there is not a tremendous focus on innovation because that year is everything.

That year gets you mindshare and gets you the early trials in operator labs and networks. And through that relationship you build, even if your competitors have the same technology by the time you have completed that cycle, you have a mindshare and an engagement with the potential customer that allows you to win the long-term business.

In wireless, historically, when you win, you win for a decade.

There is still quite a bit of proprietary stuff in wireless networks that makes it easier to keep going with one vendor if that vendor has a product that is still compatible with their needs.

So the whole argument is that if you can innovate and gain a one-year advantage, you can gain a 10-year advantage potentially in some market segments - particularly wireless - for your product sets.

That is why innovation is still important.

 

What is Alcatel-Lucent's strategic focus? What are the key areas where you are putting your investments?

Clearly lightRadio is a huge one. We have massively scaled our investment in that area, the focus on the lightRadio architecture, the building of those cube arrays and working with operators to move some of the baseband processing into a pooled configuration running in a cloud.

In the optical domain we are streamlining that portfolio and focussing it around a core set of assets: the 1830 product, which has the OTN/WDM (optical transport network/wavelength division multiplexing) combination switch in it, with 100 Gig moving to 400 Gig. So there is a strategic focus and a highly optimised, leading-edge optical portfolio, which is a significant part of our R&D.

Photo: Denise Panyik-Dale

To be honest we had a bit of a mess with an optical portfolio left over from the [Alcatel and Lucent] merger and we had not rationalised it appropriately. We have completed that and have a leading-edge portfolio. If you look at our market share numbers, where we were beginning to fall into second place in optics we have turned that around.

The IP layer, of course, is another area. TiMetra, which is the company we bought and is the basis of our IP portfolio, had $20 million revenues when we bought it and now that is over a billion [dollar] business for us.

That team is really one of the biggest innovation engines in our company. It is doing all the packet processing, all the routing work for the company and has a very efficient R&D team that allows them to move into other areas.

So that is the team that is producing our mobile packet core. It is the team that owns our cacheing product. It is the team that increasingly owns some interest we have in the data centre space. That team is a big focus and IP and Ethernet expertise in that team is propagating across our portfolio. 

In wireline it is 10G PON and the cross-talk cancellation in DSL. Those are the big focusses for us in terms of R&D effort.

On the software applications and services side, we are beginning to focus around new big themes. We have been a little bit all over the place in the applications business, but we have recently redefined what our business is and it is going to have some focussed themes.

Payment is one area that will remain important, but payment moving to smart charging of different services is one focus area. Next-generation communications, which means IMS [IP Multimedia Subsystem] but also immersive communications - rendering a communications service in the cloud in a virtual environment composed of the end participants - is a big focus.

The thing called customer experience optimisation is a big focus. Here you leverage all the network intelligence, all the device intelligence, all the call centre intelligence and allow the operator to understand whether a user is likely to churn. And it can optimise the network based on the output of that same decision and say, 'Ah, this seems to be an issue, I’m going to optimise the network so that this user is happier'. That is a big focus for us.
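As an illustration of the idea (the signal names, weights and threshold below are invented for this sketch, not Alcatel-Lucent's product logic), a churn-risk score can be fused from network, device and call-centre indicators and used to trigger a network-side action:

```python
# Hypothetical illustration of customer-experience optimisation:
# fuse a few quality indicators into a churn-risk score and act on it.
# Signal names, weights and the threshold are invented for this sketch.

def churn_risk(network_drop_rate, device_complaints, call_centre_tickets):
    """Return a 0..1 churn-risk score from three normalised indicators."""
    weights = {"network": 0.5, "device": 0.2, "care": 0.3}
    score = (weights["network"] * network_drop_rate +
             weights["device"] * device_complaints +
             weights["care"] * call_centre_tickets)
    return min(score, 1.0)

def act_on_user(user_id, score, threshold=0.6):
    """If the user looks likely to churn, request a network-side fix."""
    if score >= threshold:
        print(f"user {user_id}: risk {score:.2f} -> boost QoS / re-home cell")
    else:
        print(f"user {user_id}: risk {score:.2f} -> no action")

act_on_user("subscriber-42", churn_risk(0.9, 0.2, 0.6))
```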

We are beginning to be active in machine-to-machine [communication] as well as this concept of cloud networking, which is distributed cloud: stitching together the network and moving resources around in a distributed-resource way, as opposed to the centralised, monolithic data centre that other people are focussing on.

 

Photo: Denise Panyik-Dale

"I also like Wikipedia, I have to say. It is 80-90% right and that is often good enough to help you quickly think through a topic"

 

How do you stay on top of developments across all these telecom segments? Are there business books you have found useful for your job?

I generally read technology treatises rather than business books. It is a failing of mine.

I also like Wikipedia, I have to say. It is 80-90% right and that is often good enough to help you quickly think through a topic. It gets you where you need to be and then you can go and look further into the detail.

So I would argue that Wikipedia is the secret tool of all CTOs and even product marketing managers and R&D managers.

I am a fan of the Freakonomics books. That is the sort of business book I like to read, looking at how to parse whether things are true causal relationships or just correlations, and how one thing affects another. I find those interesting and they have a business sense to them that explains how incentives motivate people.

I'm very interested in that aspect because I think in a company, the big issue a CTO has is how to influence the rest of the company. Increasingly we combine our CTO and strategy teams under the same leader, so we are looking at how to effectively evolve a company using the right set of incentives and the right sort of technology bases; you still need to provide an incentive for people to move in that new direction, whatever it is you choose.

 

"TiMetra, which is the company we bought and is the basis of our IP portfolio, had $20 million revenues when we bought it and now that is over a billion [dollar] business for us."

 

I'm fascinated by how to influence people effectively to believe in your vision. Ultimately they have to do more than believe, they have to move towards it, and that will need to involve some sort of incentive scheme for the target teams you assign, so that a new project gets started quickly and then influences the rest of the company. We have done that a few times.

I don't spend a lot of time reading business books. I spend a lot of time reading technical stuff. I think about how to influence corporate behaviour. And I get my financial understanding just reading around work, reading lightly on business topics and talking to colleagues in the strategy department.

My answer would be Wikipedia, Freakonomics and technical treatises. Those are the things I use. 

 

Much work is being done to optimise the network across the IP, Ethernet transport and photonic layers. Once this optimisation has been worked through, what will the network look like?

We were one of the founders of this vision of convergence to the IP-optical layer. Two or three years ago we announced something called the converged backbone transport, which we sell as a solution; it is a tight interworking between the IP and optical layers.

Traffic that doesn't need to be routed is kept at the photonic layer. Only traffic that needs additional treatment is forwarded up to the routing layer, and there is communication back and forth between the two layers.

So, for example, the IP layer has coloured optics and it can be told by the optical layer which wavelength to select in order to send the traffic into the optical layer. The optics then do not have to do optical-electrical-optical regeneration; the optical layer can just do optical switching.

We have this intelligent interaction between the optical and IP layers, which offloads the IP layer to the optical layer - a lower cost-per-bit proposition. But it also informs the IP layer about what colour wavelength, or perhaps what VLAN (virtual LAN), the optical layer expects to see, so that the optical layer can more efficiently process the traffic and not have to do packet processing.
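A minimal sketch of that division of labour, assuming illustrative flow attributes and a made-up wavelength map rather than the converged backbone transport's actual logic: express traffic stays at the photonic layer on the wavelength the optical layer advertises, and only traffic needing packet treatment is handed up to the router.

```python
# Illustrative sketch of IP-optical interworking: keep transit traffic at the
# photonic layer and hand only traffic needing packet treatment to the router.
# The flow attributes and wavelength map are assumptions for this example.

WAVELENGTH_MAP = {"metro-east": "1550.12nm", "metro-west": "1549.32nm"}

def place_flow(flow):
    """Decide per flow whether to switch optically or route at the IP layer."""
    needs_packet_treatment = flow["qos_rewrite"] or flow["destination"] == "local"
    if needs_packet_treatment:
        return ("IP layer", None)  # forward up for routing / QoS treatment
    # express traffic: the optical layer tells the coloured optics which lambda to use
    return ("photonic layer", WAVELENGTH_MAP[flow["egress"]])

flow = {"destination": "peering-point", "egress": "metro-east", "qos_rewrite": False}
print(place_flow(flow))   # ('photonic layer', '1550.12nm')
```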

This is the interesting part, and this is where the industry is not aligned yet. We do not think that building full IP-functionality into the optical layer or building full optical functionality into an IP layer makes sense. It becomes essentially a 'God Box' and over the years such platforms ended up becoming a Jack of all trades and a master of none, being compromised in every dimension of performance.

They can't packet process at the density you would want, they can't optically switch at the density you would want, and all you have done is pushed two things into one box for the sake of physical appearance and not for any advantage.

What we believe you should do is keep them in separate boxes - they have separate processors, separate control planes and they even operate at different speeds - and have them tightly interworking. So they communicate traffic information back and forth, each handling as much of the traffic as possible itself and forwarding to the other box when it is appropriate for that box to handle the traffic.

Most operators agree that in the end having two boxes optimised for each of their activities is the right architecture, communicating effectively back and forth and acting on your traffic as a pair.

 

You didn't mention layer two.

I mentioned VLANs. And layer two does appear in the optical layer because the optical layer has to become savvy enough to understand Ethernet headers, and VLANs are an example of that.

We do not believe that sophisticated packet processing has to appear in the optical layer because if you start doing that, you are building a large part of the router infrastructure - the whole 400Gbps FlexPath processor that we announced for the core of the router. If we move that into the optical layer, the optical layer essentially has the price of the routing layer, and that is what you are trying to avoid.

You are trying to use the optical layer for the lower price-per-bit forwarding, and the IP layer, at the higher price per bit, only when the traffic needs that treatment. The pricing covers capex and opex - the total cost of forwarding that bit: the power consumption of the box as well as the cost of the box.
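The cost argument can be made concrete with a toy model - all figures below are invented placeholders - where the cost per bit is the annualised capex plus the power-driven opex divided by the traffic forwarded:

```python
# Toy total-cost-of-forwarding model (all figures are invented placeholders).
# Cost per bit = (annualised capex + power opex) / bits forwarded per year.

SECONDS_PER_YEAR = 365 * 24 * 3600

def cost_per_gbit(capex_per_year, power_kw, kwh_price, throughput_gbps):
    opex = power_kw * 24 * 365 * kwh_price                # yearly energy bill
    gbits_per_year = throughput_gbps * SECONDS_PER_YEAR   # traffic carried
    return (capex_per_year + opex) / gbits_per_year

router_cost = cost_per_gbit(capex_per_year=200_000, power_kw=8.0,
                            kwh_price=0.15, throughput_gbps=400)
optical_cost = cost_per_gbit(capex_per_year=60_000, power_kw=1.5,
                             kwh_price=0.15, throughput_gbps=400)
print(f"IP layer:      {router_cost:.2e} $/Gbit")
print(f"optical layer: {optical_cost:.2e} $/Gbit")
```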

 

Photo: Denise Panyik-Dale

"TDM-PON always is good enough, meaning it comes in at an attractive price - often more attractive than WDM - and it doesn't require you to rework the outside plant."

 

What comes after GPON and EPON and their 10 Gigabit PON variants?

Remarkably, as always, just as we thought we were running out of TDM (time-division multiplexing) PON options, there is a lot of work on 40 Gig PON in Bell Labs and other research institutes.

There are schemes that allow you to do 40 Gig TDM-PON. So once again TDM will survive longer than you thought, but there are options being proposed that are hybrids of WDM and TDM.

For example, it is easy to imagine four wavelengths of 10G PON and that is a flavour of 40G PON. In FSAN (Full Service Access Network), they have something called XG-PON2 which is meant to have all the forward-looking technologies in there.

Now they are getting serious about that because they are done with 10G PON to some extent so let's focus on what is next. There are a lot of proposals going into FSAN for combinations of different technologies.

One is pure 40G TDM, another is four wavelengths of 10G, and there are many other hybrids that I've seen going in there. But there is a sense that it is a combination of a TDM and a WDM solution that might make it into standards for 40G, and it might not.
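The arithmetic behind the hybrid option is straightforward; in the sketch below the 1:64 split ratio per wavelength is an illustrative assumption, not part of any standards proposal.

```python
# Back-of-envelope capacity of a hybrid WDM/TDM 40G PON.
# The 1:64 split ratio per wavelength is an illustrative assumption.

wavelengths = 4            # four stacked 10G PON wavelengths
rate_per_lambda_gbps = 10
split_per_lambda = 64      # ONUs time-sharing each wavelength

aggregate_gbps = wavelengths * rate_per_lambda_gbps
fair_share_gbps = rate_per_lambda_gbps / split_per_lambda

print(f"aggregate downstream: {aggregate_gbps} Gbps over {wavelengths * split_per_lambda} ONUs")
print(f"sustained fair share per ONU: {fair_share_gbps * 1000:.0f} Mbps (bursts up to 10 Gbps)")
```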

And the 'might not' is always because you have to redo your outside plant a little bit: you need to take the power splitters used for TDM and replace them with wavelength splitters. So there is some reluctance by operators to go back outside and upgrade their plants. Very often they say: 'Well, if I can just do TDM again, why don't I do it that way and reuse the infrastructure already deployed?'

That is always the tension between the two.

It is not that WDM ultimately isn't a good option - it probably is - but TDM always is good enough, meaning it comes in at an attractive price - often more attractive than WDM - and it doesn't require you to rework the outside plant.

But at some point there will be a transition where WDM becomes more economically attractive than TDM and does merit going back to your outside plant and changing out the splitters you deployed.

 

It is not clear how operators will make money from an open network so how will Alcatel-Lucent make money from open application programming interfaces (APIs)?

It is something, in all honesty, we have wrestled with. I think we are coming to a firm view on this.

To start, I'll answer the operator question which is important since if they aren't making money, it is very unlikely we'll be making money.

Operators are beginning to see that open APIs are not just about allowing access to their network to what you could call the over-the-top long-tail of applications, although that is part of it.

Netflix, for example, could be over-the-top but you would not call it long-tail because it has got 23 million subscribers. Long-tail is any web service that the user accesses and that might want to access network resources - it might need location information, or it might want the operator to do billing or quality-of-service treatment.

But it is also [operators allowing access] to their own IT organisations so they can more rapidly develop their own set of core service applications. I think of it as the voice application or video application; they open it to their own IT department which makes it much easier to innovate.

They open it to their partners, and those might be verticals in healthcare, banking or some sort of commerce space where they are going to offer a business service. And it is an easier way for partners to innovate on the network. And then of course it is also open to third parties to innovate.

So operators are beginning to see that it is not just about exposing to the long-tail, where it might be hard to imagine any revenue coming, because the business models of those long-tail providers may not even be profitable - so how can they pay for something if they are not even profitable?

But for their partners and their own IT, it is a no-brainer.

Think of it as the new SDP (service delivery platform) in some ways. It's a web service-type SDP where they expose their own capabilities internally using a web services approach - which you can think of as a lightweight workflow where you make calls in sequence - rather than a complex workflow engine that you are using as an orchestrator. They [operators] see that this is a much more efficient way to innovate and build partnerships.
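A hedged sketch of that 'lightweight workflow' idea: a plain sequence of web-service calls against exposed network capabilities, with no orchestrator in the middle. The base URL and endpoint names are hypothetical.

```python
# Hypothetical 'lightweight workflow' against exposed network APIs:
# a plain sequence of web-service calls rather than a workflow engine.
# The base URL and endpoint paths are invented for illustration.
import requests

BASE = "https://api.example-operator.com/v1"   # hypothetical exposure layer

def deliver_premium_video(user_id, session_id):
    # 1. ask the network where the user is
    location = requests.get(f"{BASE}/location/{user_id}").json()
    # 2. request quality-of-service treatment for the session
    requests.post(f"{BASE}/qos", json={"session": session_id, "profile": "video-hd"})
    # 3. charge for the service via the operator's billing capability
    requests.post(f"{BASE}/charge", json={"user": user_id, "item": "video-hd", "amount": 1.99})
    return location

# deliver_premium_video("subscriber-42", "sess-001")   # would call the (fictional) endpoints
```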

So that makes it interesting to them. That means they will buy a platform that does that. So there is a certain amount of money for Alcatel-Lucent in selling a platform that does that. However, the big money is probably not in the selling of that platform; it is in the selling of the network assets that go with it.

There is a business case around that which falls through to the underlying network because the network has capabilities that this layer is exposing. So there is clearly an interest we have in that.

It is very similar to our philosophy about IPTV. We never really owned preeminent IPTV assets. We had middleware that we acquired from Telefonica and evolved, but most of the time we partnered with Microsoft. And the reason we decided to partner was that we saw the real value in pulling through the network and tying into the middleware layer, without needing to own the middleware layer.

There are people that believe it makes sense to own the exposure layer because it is a point-of-importance to our customers. But in fact a lot of the revenue is probably associated with the network that that layer pulls through.

There is one more part of the API layer that is very interesting.

When you sit in that layer, you see all transactions. And you no longer have to use DPI (deep-packet inspection) to see those transactions - you are actually processing them. Think of the API as a request for a resource by a user for an application. If you sit in that layer, you see all these requests, and you can understand the customer needs and the demand patterns much better than by having to do DPI.

DPI has bad PR because it seems like it brings something illicit. In the API layer you are doing nothing illicit as you are naturally part of processing the request, so you get to see and understand the traffic in an open and interesting way. So there is a lot of value to that layer.
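A small sketch of what sitting in that layer buys you - the log fields below are assumptions for illustration: because the exposure layer already handles each request, simple aggregation over those requests yields demand patterns without any packet inspection.

```python
# Sketch: demand analytics straight from API-layer request logs, no DPI needed.
# The log record fields are assumptions for this illustration.
from collections import Counter

requests_log = [
    {"user": "u1", "api": "location", "app": "maps"},
    {"user": "u1", "api": "qos",      "app": "video"},
    {"user": "u2", "api": "charge",   "app": "video"},
    {"user": "u2", "api": "qos",      "app": "video"},
]

demand_by_app = Counter(r["app"] for r in requests_log)
demand_by_api = Counter(r["api"] for r in requests_log)

print("resource demand by application:", demand_by_app)
print("network capabilities requested:", demand_by_api)
```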

Can you monetise that? Maybe.

Maybe there is a play in analytics or definition of the customer that allows you to sell an advertising proposition. But certainly it helps you optimise the network because you can understand the traffic flows.

If you have those analytics in the API layer you can dynamically optimise the network, which is then another value proposition: it helps you better sell the network, but it is also an optimisation engine that runs on top of that network.

So there are lots of pull-through effects for open APIs, but there is also money associated with the layer itself.

 

For part one of the Q&A, click here.

 


Intelligent networking: Q&A with Alcatel-Lucent's CTO

Alcatel-Lucent's corporate CTO, Marcus Weldon, in a Q&A with Gazettabyte. Here, in Part 1, he talks about the future of the network, why developing in-house ASICs is important and why Bell Labs is researching quantum computing.


Marcus Weldon (left) with Jonathan Segel, executive director in the corporate CTO Group, holding the lightRadio cube. Photo: Denise Panyik-Dale

Q:  The last decade has seen the emergence of Asian Pacific players. In Asia, engineers’ wages are lower while the scale of R&D there is hugely impressive. How is Alcatel-Lucent, active across a broad range of telecom segments, ensuring it remains competitive? 

A: Obviously we have a presence in China ourselves, and also in India. It varies by division, but probably half of our workforce in R&D is in what you would consider a low-cost country. We are already heavily present in those areas and that speaks to the wage issue.

But we have decided to use the best global talent. This has been a trait of Bell Labs in particular but also of the company. We believe one of our strengths is the global nature of our R&D. We have educational disciplines from different countries, and different expertise and engineering foci etc. Some of the Eastern European nations are very strong in maths, engineering and device design. So if you combine the best of those with the entrepreneurship of the US, you end up with a very strong mix of an R&D population that allows for the greatest degree of innovation.

We have no intention to go further towards a low-cost country model. There was a tendency for that a couple of years ago but we have pulled back as we found that we were losing our innovation potential.

We are happy with the mix we have even though the average salary is higher as a result. And if you take government subsidies into account in European nations, you can get almost the same rate for a European engineer as for a Chinese engineer, as far as Alcatel-Lucent is concerned.

One more thing: Chinese university students, interestingly, work so hard at getting into university that university is a period where they actually slack off. There are several articles in the media about this. During the four years that students spend at university, away from home for the first time, they tend to relax.

Chinese companies were complaining that the quality of engineers out of university was ever decreasing because of what was essentially a slacker generation, they were arguing, of overworked high-school students that relaxed at college. Chinese companies found that they had to retrain these people once employed to bring them to the level needed.  

So that is another small effect which you could argue is a benefit of not being in China for some of our R&D.

 

Turning to Alcatel-Lucent's Bell Labs, can you spotlight noteworthy examples of research work being done?

Certainly the lightRadio cube stuff is pure Bell Labs. The adaptive antenna array design, to give you an example, was done between the US - Bell Labs' Murray Hill - and Stuttgart, so two non-Asian Bell Labs sites were involved in the innovations. These are wideband designs that can operate at any frequency and are technology agnostic, so they can operate for GSM, 3G and LTE (Long Term Evolution).

 

"We believe that next-generation network intelligence, 10-15 years from now, might rely on quantum computing"

 

The designs can also form beams so you can be very power-efficient. Power efficiency in the antenna is great as you want to put the power where it is needed and not just have omnidirectional as the default power distribution. You want to form beams where capacity is needed.

That is clearly a big part of what Bell Labs has been focussing on in the wireless domain, as well as all the overlaying technologies that allow you to do beam-forming. Power amplifier efficiency is another: it is another way you lose power and operate at a more costly operational expense. The magic inside that is another Bell Labs focus in wireless.

In optics, it is moving from 100 Gig to 400 Gig coherent. We are one of the early innovators in 100 Gig coherent and we are now moving forward to higher-order modulation and 400 Gig. 

On the DSL side it is the vectoring/crosstalk cancellation work, where we have developed our own ASIC because the market could not meet the need we had. The algorithms ended up producing a component that will be in the first release of our products to maintain a market advantage.

We do see a need for some specialised devices: the FlexPath FP3 network processor, the IPTV product, the OTN (Optical Transport Network) switch at the heart of our optical products - which is our own ASIC - and the vectoring/crosstalk cancellation engine in our DSL products. Those are the innovations Bell Labs comes up with, and very often they lead to our portfolio innovations.

There is also a lot of novel stuff like quantum computing that is on the fringes of what people think telecoms is going to leverage but we are still active in some of those forward-looking disciplines.  

We have quite a few researchers working on quantum computing, leveraging some of the material expertise that we have to fabricate novel designs in our lab and then create little quantum computing structures.

 

Why would quantum computing be useful in telecom? 

It is very good for parsing and pattern matching. So when you are doing complex searches or analyses, then quantum computing comes to the fore.

We do believe there will be processing that will benefit from quantum computing constructs to make decisions in ever-increasingly intelligent networks. Quantum computing has certain advantages in terms of its ability to recognise complex states and do complex calculations. We believe that next-generation network intelligence, 10-15 years from now, might rely on quantum computing.

We don't have a clear application in mind other than we believe it is a very important space that we need to be pioneering.

 

"Operators realise that their real-estate resource - including down to the central office - is not the burden that it appeared to be a couple of years ago but a tremendous asset

 

You wrote a recent blog on the future of the network. You mentioned the idea of the emergence of one network with the melding of wireless and wireline, and that this will halve the total cost of ownership. This is impressive but is it enough?

The half number relates to the lightRadio architecture. There are many ingredients in it. The most notable is that traffic growth is accounted for in that halving of the total cost of ownership. We calculated what the likely traffic demand would be going forward: a 30-fold increase in five years.

Based on that growth, we computed what the lightRadio architecture - involving the adaptive antenna arrays, small cells and the move to LTE - delivers. If you combine these things and map them onto that traffic demand, the number that comes out is that you can build the network for that traffic demand, with those new technologies, and still halve the total cost of ownership.

It really is quite a bit more aggressive than it appears because it is taking account of a very significant growth in traffic.
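Working through the figures quoted above makes the point: a 30-fold traffic increase over five years implies roughly a doubling of traffic every year, and halving the total cost of ownership at the same time implies the cost per bit falls to about one sixtieth of today's level.

```python
# Working through the figures quoted above: a 30-fold traffic increase over
# five years while the total cost of ownership is halved.

traffic_multiple = 30
years = 5
tco_multiple = 0.5

annual_growth = traffic_multiple ** (1 / years)          # ~1.97x per year
cost_per_bit_ratio = tco_multiple / traffic_multiple     # new cost/bit vs old

print(f"implied annual traffic growth: {annual_growth:.2f}x (~{(annual_growth - 1) * 100:.0f}% per year)")
print(f"cost per bit falls to 1/{1 / cost_per_bit_ratio:.0f} of today's level")
```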

Can we build that network and still lower the cost? The answer is yes.

 

You also say that intelligence will be increasingly distributed in the network, taking advantage of Moore's Law.  This raises two questions. First, when does it make sense to make your own ASICs?

When I say ASICs I include FPGAs. FPGAs are your own design just on programmable silicon and normally you evolve that to an ASIC design once you get to the right volumes.

There is a thing called an NRE (non-recurring engineering) cost, a one-off engineering cost to produce an ASIC at a fab. So you have to have a certain volume that makes it worthwhile to produce that ASIC, rather than keeping it in an FPGA, which is a more expensive component because it is programmable and has excess logic. Roughly, the economics say an FPGA is the right way for sub-10,000 volumes per annum, whereas for millions of parts you would do an ASIC.
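That economics reduces to a simple breakeven calculation; the NRE and unit prices below are placeholders chosen only to land the crossover near the volumes mentioned, not real quotes.

```python
# Breakeven volume between an FPGA and an ASIC implementation.
# NRE and unit prices are placeholder figures, not actual quotes.

fpga_unit_cost = 1_000      # programmable part, expensive per unit, no NRE
asic_unit_cost = 100        # cheap per unit once fabricated
asic_nre = 8_000_000        # one-off mask/engineering cost

def total_cost(volume, unit_cost, nre=0):
    return nre + unit_cost * volume

breakeven = asic_nre / (fpga_unit_cost - asic_unit_cost)
print(f"breakeven volume: ~{breakeven:,.0f} units")

for volume in (5_000, 50_000, 1_000_000):
    fpga = total_cost(volume, fpga_unit_cost)
    asic = total_cost(volume, asic_unit_cost, asic_nre)
    cheaper = "FPGA" if fpga < asic else "ASIC"
    print(f"{volume:>9,} units: FPGA ${fpga:,} vs ASIC ${asic:,} -> {cheaper}")
```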

We work on both those types of designs. And generally, and I think even Huawei would agree with us, a lot of the early innovation is done in FPGAs because you are still playing with the feature set.

 

Photo: Denise Panyik-Dale

Often there is no standard at that point, there may be preliminary work that is ongoing, so you do the initial innovation pre-standard using FPGAs. You use a DSP or FPGA that can implement a brand new function that no one has thought of, and that is what Bell Labs will do. Then, as it starts becoming of interest to the standards bodies, you have it implemented in a way that tries to follow what the standard will be, and you stay in an FPGA for that process. At some point later, you take a bet that the functionality is fixed and the volume will be high enough, and you move to an ASIC.

So it is fairly commonplace for novel technology to be implemented by the [system] vendors, and only in the end stage, when it has become commoditised, does it move to commercial silicon, meaning a Broadcom or a Marvell.

Also around the novel components we produce there are a whole host of commercial silicon components from Texas Instruments, Broadcom, Marvell, Vitesse and all those others. So we focus on the components where the magic is, where innovation is still high and where you can't produce the same performance from a commercial part. That is where we produce our own FPGAs and ASICs.

 

Is this trend becoming more prevalent? And if so, is it because of the increasing distribution of intelligence in the network?

I think it is but only partly because of intelligence. The other part is speed. We are reaching the real edges of processing speed and generally the commercial parts are not at that nanometer of [CMOS process] technology that can keep up.

To give an example, our FlexPath processor for the router product we have is on 40nm technology. Generally ASICs are a technology generation behind FPGAs. To get the power footprint and the packet-processing performance we need, you can't do that with commercial components. You can do it in a very high-end FPGA but those devices are generally very expensive because they have extremely low yields. They can cost hundreds or thousands of dollars.

The tendency is to use FPGAs for the initial design but very quickly move to an ASIC because those [FPGA] parts are so rare and expensive; nor do they have the power footprint that you want. So if you are running at very high speeds - 100Gbps, 400Gbps - you run very hot, it is a very costly part and you quickly move to an ASIC.

Because of intelligence [in the network] we need to be making our own parts but again you can implement intelligence in FPGAs. The drive to ASICs is due to power footprint, performance at very high speeds and to some extent protection of intellectual property.

FPGAs can be reverse-engineered so there is some trend to use ASICs to protect against loss of intellectual property to less salubrious members of the industry.

 

Second, how will intelligence impact the photonic layer in particular?

You have all these dimensions you can trade off against each other. There are things like flexible bit-rate optics and flexible modulation schemes to accommodate that, and there is the intelligence of soft-decision FEC (forward error correction), where you squeeze more out of a channel by not just making a hard decision - is it a '0' or a '1'? - but giving a hint to the decoder as to whether it is likely to be a '0' or a '1'. And that improves your signal-to-noise ratio, which allows you to go further with given optics.
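A toy illustration of the hard- versus soft-decision distinction, using BPSK in Gaussian noise as a stand-in for a real coherent optical link: the hard decision keeps only a 0/1 verdict, while the soft decision passes the decoder a log-likelihood ratio - the 'hint' - carrying both the verdict and the confidence in it.

```python
# Toy hard- vs soft-decision demodulation for BPSK in Gaussian noise.
# Real coherent optical links use higher-order modulation; this only
# illustrates what "a hint to the decoder" means.

def hard_decision(sample):
    """Just a 0/1 verdict - the decoder gets no confidence information."""
    return 0 if sample > 0 else 1          # +1 maps to bit 0, -1 maps to bit 1

def soft_decision_llr(sample, noise_var):
    """Log-likelihood ratio log(P(bit=0)/P(bit=1)) - sign plus confidence."""
    return 2.0 * sample / noise_var

for y in (+0.9, +0.05, -0.7):              # received samples after the channel
    print(f"y={y:+.2f}  hard={hard_decision(y)}  LLR={soft_decision_llr(y, 0.5):+.2f}")
```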

So you have several intelligent elements that you are going to co-ordinate to have an adaptive optical layer.

I do think that is the largest area.

Another area is smart or next-generation ROADMs - what we call colourless, contentionless, and directionless.

There is a sense that as you start distributing resources in the network - cacheing resources and computing resources - there will be far more meshing in the metro network. There will be a need to route traffic optically to locally positioned resources - highly distributed data centre resources - and so there will be more photonic switching of traffic. Think of it as photonic offload to a local resource.

We are increasingly seeing operators realise that their real-estate resource - including down to the central office - is not the burden that it appeared to be a couple of years ago but a tremendous asset if you want to operate a private cloud infrastructure and offer it as a service, as you are closer to the user with lower latency and more guaranteed performance.

So if you think about that infrastructure, with highly distributed processing resources and offloading that at the photonic layer, essentially you can easily recognise that traffic needs to go to that location. You can argue that there will be more photonic switching at the edge because you don't need to route that traffic, it is going to one destination only.

This is an extension of the whole idea of the converged backbone architecture we have, with interworking between the IP and optical domains: you don't route traffic that you don't need to route. If you know it is going to a peering point, you can keep that traffic in the optical domain and not send it up through the routing core to be constantly routed when you know from the start where it is going.

So as you distribute computing and cacheing resources, you would offload in the optical layer rather than attempt to packet process everything.

There are smarts at that level too - photonic switching - as well as the intelligent photonic layer. 

 

For the second part of the Q&A, click here

