Reporting the optical component & module industry
LightCounting recently published its six-monthly optical market research covering telecom and datacom. Gazettabyte interviewed Vladimir Kozlov, CEO of LightCounting, about the findings.
"When people forecast they always make a mistake on the timeline because they overestimate the impact of new technology in the short term and underestimate in the long term"
Q: How would you summarise the state of the optical component and module industry?
VK: At a high level, the telecom market is flat, even hibernating, while datacom is exceeding our expectations. In datacom, it is not only 40 and 100 Gig; 10 Gig is also growing faster than anticipated. Shipments of 10 Gigabit Ethernet (GbE) [modules] will exceed those of 1GbE this year.
The primary reason is data centre connectivity: the 'spine and leaf' switch architecture requires many more connections between the racks and the aggregation switches, and that is increasing demand. I suspect it is more than just data centres, however. I wouldn't be surprised if enterprises are adopting 10GbE because it is now inexpensive. Service providers also offer Ethernet as an access line and use it for mobile backhaul.
Can you explain what is causing the flat telecom market?
Part of the telecom 'hibernation' story is the rapidly declining SONET/SDH market. The decline has long been expected, but in fact the market was still growing as recently as two years ago. First, 40 Gigabit OC-768 declined, and then the second nail in the coffin was the decline in 10 Gig sales: 10GbE is all SFP+, whereas OC-192 SONET/SDH is still in the XFP form factor.
The steady dense WDM module market and the growth in wireless backhaul are compensating for the decline in the SONET/SDH market, as well as the sharp drop this year in FTTx transceiver and BOSA (bidirectional optical sub-assembly) shipments; there is also a big shift from transceivers to BOSAs.
LightCounting highlights strong growth of 100G DWDM in 2013, with some 40,000 line card port shipments expected this year. Yet LightCounting is cautious about 100 Gig deployments. Why the caution?
We have to be cautious, given past history with 10 Gig and 40 Gig rollouts.
If you look at 10 Gig deployments, during the optical bubble (1999-2000) there was huge anticipated demand before the market returned to normality, supporting real traffic demand. Whatever 10 Gig was installed in 1999-2000 was more than enough until 2005. In 2006 and 2007, 10 Gig picked up again, followed by 40 Gig, which reached 20,000 ports in 2008. But then the financial crisis occurred and the 40 Gig story was interrupted in 2009, only picking up from 2010 to reach 70,000 ports this year.
So 40 Gig volumes are higher than 100 Gig but we haven't seen any 40 Gig in the metro. And now 100 Gig is messing up the 40G story.
The question in my mind is: how much of a bottleneck is the metro today? There may be certain large cities which already require such deployments, but equally there was so much fibre deployed in metropolitan areas back in the bubble. If fibre cost is not an issue, why go to 100 Gig? The operator will use fibre and 10 Gig to make more money.
CenturyLink recently announced its first customer purchasing 100 Gig connections - DigitalGlobe, a company specialising in high-definition mapping technology - which will use 100 Gig connectivity to transfer massive amounts of data between its data centres. This is still a special case, despite the increasing number of data centres around the world.
There is no doubt that 100 Gig will be a must-have technology in the metro and even metro-access networks once 1GbE broadband access lines become ubiquitous and 10 Gig is widely used in the access-aggregation layer. It is starting to happen.
So 100 Gigabit in the metro will happen; it is just a question of timing. Is it going to be two to three years or 10-15 years? When people forecast they always make a mistake on the timeline because they overestimate the impact of new technology in the short term and underestimate in the long term.
LightCounting highlights strong sales in 10 Gig and 40 Gig within the data centre but not at 100 Gig. Why?
If you look at the spine and leaf architecture, most of the connections are 10 Gig, broken out from 40 Gig optical modules. This will begin to change as native 40GbE ramps in the larger data centres.
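To put a rough number on the connection count this architecture implies, here is a minimal back-of-the-envelope sketch in Python. The pod size and breakout ratio are illustrative assumptions, not figures from the interview.

# Back-of-the-envelope sketch of leaf-and-spine cabling.
# The leaf/spine counts below are assumed for illustration only.

def fabric_links(num_leaves, num_spines):
    """In a leaf-and-spine fabric, every leaf switch connects to every spine switch."""
    return num_leaves * num_spines

def breakout_modules(num_10g_links, lanes_per_module=4):
    """10 Gig links served as breakouts from 40 Gig (4 x 10G) optical modules; round up."""
    return -(-num_10g_links // lanes_per_module)

if __name__ == "__main__":
    leaves, spines = 32, 8                      # assumed pod size
    links = fabric_links(leaves, spines)        # 256 x 10G leaf-spine links
    modules = breakout_modules(links)           # 64 x 40G modules on the spine side
    print(links, "x 10G links,", modules, "x 40G breakout modules")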
If you go to the super-spine that takes data from aggregation to the data centre's core switches, there 100GbE could be used, and I'm sure some companies like Google are using 100GbE today. But the numbers are probably three orders of magnitude lower than in the spine and leaf layers. The volume demand for 100GbE today is not that high, and that also relates to the high price of the modules.
Higher volumes reduce the price, but then the complexity and size of the [100 Gig CFP] modules need to be reduced as well. With 10 Gig, the major [cost reduction] milestone was the transition to a 10 Gig electrical interface. It has to happen with 100 Gig, and there will be the transition to a 4x25Gbps electrical interface, but it is a big transition. Again, forget about it happening in two to three years; think rather of a five- to 10-year time frame.
"I suspect that one reason for Google offering 1Gbps FTTH services to a few communities in the U.S. is to find out what these new applications are, by studying end-user demand"
You also point out the failure of the IEEE working group to come up with a 100 GbE solution for the 500m-reach sweet spot. What will be the consequence of this?
The IEEE is talking about 400GbE standards now. Go back to 40GbE, which was only approved some three years ago; the majority of the IEEE was against having 40GbE at all, the objective being to go to 100GbE and skip 40GbE altogether. At the last moment a couple of vendors pushed 40GbE through. And look at 40GbE now, it is [deployed] all over the place: the industry is happy, suppliers are happy and customers are happy.
Again, look at 40GbE, which has a standard at 10km. If you look at what is being shipped today, only 10 percent of 40GBASE-LR4 modules are compliant with the standard. The rest of the volume is 2km parts - substandard devices that use Fabry-Perot instead of DFB (distributed feedback) lasers. The yields are higher and customers love them because they cost one tenth as much. The market has found its own solution.
The same thing could happen at 100 Gig. And then there is Cisco Systems with its own agenda. It has just announced a 40 Gig BiDi connection which is another example of what is possible.
What will LightCounting be watching in 2014?
One primary focus is what wireline revenues service providers will report, particularly additional revenues generated by FTTx services.
AT&T and Verizon reported very good results in Q3 [2013] and I'm wondering if this is the start of a longer trend. As wireline revenues from FTTx pick up, it will give carriers more of an incentive to invest in supporting those services.
AT&T and Verizon customers are willing to pay a little more for faster connectivity today, but it really takes new applications to develop for end-user spending on bandwidth to jump to the next level. Some of these applications are probably emerging, but we do not know what they are yet. I suspect that one reason for Google offering 1Gbps FTTH services to a few communities in the U.S. is to find out what these new applications are, by studying end-user demand.
A related issue is whether deployments of broadband services improve economic growth and by how much. The expectations are high but I would like to see more data on this in 2014.
OpenFlow extends its control to the optical layer

"We see OpenFlow as an additional solution to tackle the problem of network control"
Jörg-Peter Elbers, ADVA Optical Networking
The largest data centre players have a single-mindedness when it comes to service delivery. Players such as Google, Facebook and Amazon do not think twice about embracing and even spurring hardware and software developments if they will help them better meet their service requirements.
Such developments are also having a wider impact, attracting the interest of traditional telecom operators that have their own service challenges.
The latest development causing waves is the OpenFlow protocol. An open standard, OpenFlow is being developed by the Open Networking Foundation, an industry body that includes Google, Facebook and Microsoft, telecom operators Verizon, NTT and Deutsche Telekom, and various equipment makers.
OpenFlow is already being used by Google, and falls under the more general topic of software-defined networking (SDN). A key principle underpinning SDN is the separation of the data and control planes to enable more centralised and simplified management of the network.
OpenFlow is being used in the management of packet switches for cloud services. "The promise of software-defined networking and OpenFlow is to give [data centre operators] a virtualised network infrastructure," says Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking.
The growing interest in OpenFlow is reflected in the activities of the telecom system vendors that have extended the protocol to embrace the optical layer. But whereas the content service provider giants need only worry about tailoring their networks to optimise their particular services, telecom operators must consider legacy equipment and issues of interoperability.

OFELIA
ADVA Optical Networking has started the ball rolling by running an experiment to show OpenFlow controlling both the optical and packet layers of the network. Until now the protocol, which provides a software-programmable interface, has been used to manage packet switches; adding optical layer control is an industry first, the company claims.
The OpenFlow demonstration is part of the European “OpenFlow in Europe, Linking Infrastructure and Applications” (OFELIA) research project involving ADVA Optical Networking and the University of Essex. A test bed has been set up that uses the ADVA FSP 3000 to implement a colourless and directionless ROADM-based optical network.
"We have put a network together such that people can run the optical layer through an OpenFlow interface, as they do the packet switching layer, under one uniform control umbrella," says Elbers. "The purpose of this project is to set up an experimental facility to give researchers access to, and have them play with, the capabilities of an OpenFlow-enabled network."
"The fact that Google is doing it [SDN] is not a strong indication that service providers are going to do it tomorrow"
Mark Lutkowitz, Telecom Pragmatics
Remote researchers can access the test bed via GÉANT, a high-bandwidth pan-European backbone connecting national research and education networks.
ADVA Optical Networking hopes the project will act as a catalyst to gain useful feedback and ideas from the users, leading to further developments to meet emerging requirements.
OpenFlow and GMPLS
A key principle of SDN, as mentioned, is the separation of the data plane from the control plane. "The aim is to have a more unified control of what your network is doing rather than running a distributed specialised protocol in the switches," says Elbers.
That is not that much different from Generalized Multi-Protocol Label Switching (GMPLS), he says: "With GMPLS in an optical network you effectively have a data plane - a wavelength-switched data plane - and then you have a unified control plane implementation running on top, decoupled from the data plane."
But clearly there are differences. OpenFlow is being used by data centre operators to control their packet switches and generate packet flows. The goal is for their networks to gain flexibility and agility: "A virtualised network that can be run as you, the user, want it," says Elbers.
But the protocol only gives a user the capability to manage the forwarding behavior of a switch: an incoming packet's header is inspected and the user can program the forwarding table to determine how the packet stream is treated and the port it goes out on.
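As a purely illustrative sketch of that idea - simplified relative to the real OpenFlow wire protocol and not any vendor's implementation - a flow entry reduces to a match on header fields plus a list of actions, which a controller programs into the switch's forwarding table:

# Illustrative sketch of the OpenFlow model: match packet headers, apply actions.
# Real OpenFlow defines many more match fields, counters and message types.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict       # e.g. {"in_port": 1, "ipv4_dst": "10.0.0.5"}
    actions: list     # e.g. ["set_vlan:100", "output:3"]
    priority: int = 0

class ForwardingTable:
    def __init__(self):
        self.entries = []

    def install(self, entry):
        """What a controller's 'flow mod' does: program the table."""
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet):
        """What the switch does per packet: the first matching entry wins."""
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]   # table miss

table = ForwardingTable()
table.install(FlowEntry(match={"ipv4_dst": "10.0.0.5"}, actions=["output:3"], priority=10))
print(table.lookup({"in_port": 1, "ipv4_dst": "10.0.0.5"}))   # ['output:3']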
And while OpenFlow has since been extended to cater for circuit switches as well as wavelength circuits, there are aspects at the optical layer which OpenFlow is not designed to address - issues that GMPLS does address.
To run end-to-end, the control plane needs to be aware of the blocking constraints of an optical switch, and when provisioning it must also be aware of such aspects as optical power levels and optical performance constraints. "The management of optical is different from managing a packet switch or a TDM [circuit-switched] platform," says Elbers. "We need to deal with transmission impairments and constraints that simply do not exist inside a packet switch."
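To make those optical-layer constraints concrete, here is a hedged sketch of the kind of checks a controller would run before provisioning a lightpath. The field names and thresholds are invented for illustration; they are not ADVA's provisioning rules.

# Illustrative only: the constraint names and limits below are assumptions.

def lightpath_ok(hops, wavelength_nm, min_osnr_db=12.0, max_span_loss_db=25.0):
    """Check a candidate wavelength path against simple optical-layer constraints."""
    for hop in hops:
        # Blocking constraint: the wavelength must be free on every hop.
        if wavelength_nm in hop["lambdas_in_use"]:
            return False, "wavelength blocked on " + hop["name"]
        # Power budget constraint per span.
        if hop["span_loss_db"] > max_span_loss_db:
            return False, "span loss too high on " + hop["name"]
    # End-to-end performance constraint, e.g. worst-case OSNR.
    if min(hop["osnr_db"] for hop in hops) < min_osnr_db:
        return False, "OSNR below receiver threshold"
    return True, "path viable"

hops = [
    {"name": "A-B", "lambdas_in_use": {1550.12}, "span_loss_db": 18.0, "osnr_db": 20.0},
    {"name": "B-C", "lambdas_in_use": set(),     "span_loss_db": 22.0, "osnr_db": 16.5},
]
print(lightpath_ok(hops, wavelength_nm=1550.92))   # (True, 'path viable')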
That said, with GMPLS expertise it is relatively simple for a vendor to provide an OpenFlow interface to an optically controlled network, he says: "We see OpenFlow as an additional solution to tackle the problem of network control."
Operators want mature, proven and interoperable standards for network control that incorporate all the different network layers and that use GMPLS.
"We are seeing that in the data centre space, the players think that they may not have to have that level of complexity in their protocols and can run something lower level and streamlined for their applications," says Elbers.
While operators see the benefit of OpenFlow for their own data centres and managed service offerings, they also are eyeing other applications such as for access and aggregation to allow faster service mobility and for content management, says Elbers.
ADVA Optical Networking sees the adding of optical to OpenFlow as a complementary approach: the integration of optical networking into an existing framework to run it in a more dynamic fashion, an approach that benefits the data centre operators and the telcos.
"If you have one common framework, when you give server and compute jobs then you know what kind of connectivity and latency needs to go with this and request these resources and reconfigure the network accordingly," says Elbers.
But longer term the impact of OpenFlow and SDN will likely be more far-reaching: applications themselves could program the network, or it could be used to enable dial-up bandwidth services in a more dynamic fashion. "By providing software programmability in a network, you can develop your own networking applications on top of this - what we see as the heart of the SDN concept," says Elbers. "The long-term vision is that the network will also become a virtualised resource, driven by applications that require certain types of connectivity."
Providing the interface is the first step; the value-add will be what players do with the added network flexibility, whether that is vendors working with operators, operators' customers, or third-party developers.
"This is a pretty significant development that addresses the software side of things," says Elbers, adding that software is becoming increasingly important, with OpenFlow being an interesting step in that direction.
Reflections and predictions: 2011 & 2012 - Part 1

"For 2012, the macroeconomy is likely to dominate any other developments"
Martin Geddes, telecom consultant @martingeddes
Sometimes the important stuff is slow-burning: we're seeing a continued decline in the traditional network equipment providers, and the rise of Genband, Acme, Sonus and Metaswitch in their place. They are smaller, leaner, and more used to serving Tier 2 and Tier 3 operators and enterprise players and their lower cost structures.
The recognition of the decline of SMS and telephony became mainstream in 2011 -- maybe I can close down my Telepocalypse blog as what I foresaw is reality.
We've seen absolute declines in revenue and usage of telco voice and messaging in leading markets like Norway and the Netherlands. The creation of Telefonica Digital is a landmark reorganisation around new markets. No longer are those initiatives endlessly parked in business development whilst marketing dream up a new price plan for minutes, messages and megabytes.
If I had to pick one thing to characterise 2011, it was the year of the App.
For 2012, the macroeconomy is likely to dominate any other developments. The scenarios are "distress", "meltdown" and "collapse".
Telecoms is well-placed to weather the storm. Even £600 smartphones may remain in vogue as people defer purchases like cars and holidays, and hide their fiscal distress with status symbols hewn out of pure blocks of profit.
Voice will be much more prominent, after decades of languishing, as LTE sets up a complex dynamic of service innovation driven by over-the-top applications - which will increasingly come from telcos as well as telecoms outsiders. Microsoft's purchase of Skype is the one to watch - if they get it right, it joins Windows and Office in the hall of fame; get it wrong, and Microsoft is probably out of the smartphone game due to a lack of competitive differentiation and advantage.
So 2012 is the year when (mobile) voice gets vocal again - because we're going to have a lot to talk about, and want to do it much cheaper and better.
Brandon Collings, CTO for communications and commercial optical products at JDS Uniphase
Over the course of 2011, the tunable XFP shipped in volume and rather quickly supplanted the 300-pin transceiver. On the service/market trend, over-the-top consumer video (Netflix) grew rapidly to become the dominant traffic on the internet.
"Solutions for the next generation ROADM networks - self aware networks - are now firm"
I expect the maturation of 100 Gigabit to continue through 2012 with the introduction of a number of new 100 Gigabit solutions, both from network equipment makers and at the transceiver level.
Also, the percentage of consumers using over-the-top video still seems relatively small, yet it is growing strongly and is already the dominant traffic on the internet. It will be interesting to see how this trend continues, as it strongly drives bandwidth yet comes with potentially unfavorable revenue models for the network operators who must deliver it.
Lastly, I expect that as the solutions for the next generation ROADM networks - self aware networks - are now firm, the practical assessment of the value and advantages of these networks can quantitatively take place.
Eve Griliches, managing partner, ACG Research @EveGr
The Juniper PTX announcement really caught the market by surprise. I'm not sure why, but clearly it rocked some folks back on their heels. Momentum for the product has been good as well. I think you can count this as a success story.
Another one is the Infinera 500Gbps release with super-channels. It is a pretty impressive technology, and service providers are waiting for the final product to test.
The death of Steve Jobs rattled us all. I think it struck a note for everyone in how different he was and how he touched us all.
"Content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening."
I continue to be amazed at how much optical equipment content providers [the Googles, Facebooks, MSNs of this world] are deploying and how few folks at the vendor level are doing anything about getting into their networks. Maybe that is a 2012 thing, I don't know.
As for 2012, we'll definitely see some mergers and acquisitions - expect low acquisition prices too - and some companies exiting this market. I love optics and it really pains me to say that, but there are just more companies out there who can't support the declining margins. I think margin erosion will be key to who survives.
Cisco and Infinera should be bringing some cool products to market in the next six months. We hope the products are good because it will generate debate for the final vendor choices for operators such as AT&T and Verizon.
Again, content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening. Some don't listen, and will wonder what happened.
Peter Jarich, service director, service provider infrastructure, mobile ecosystem, Current Analysis @pnjarich
2012 is going to be the year for LTE-Advanced (LTE-A). Why? One, vendors always like to talk up what’s next, and LTE-A is what follows LTE (Long Term Evolution).
At the same time, operators who haven’t yet deployed LTE will want to look to start with the latest and greatest. Of course, LTE-A brings real advances for operators: carrier aggregation for dealing with fragmented spectrum assets; heterogeneous networks for dealing with the interaction of small cell and macrocell networks; relaying for improved cell edge performance.
Avi Shabtai, CEO of MultiPhy
The most significant development of 2011 was the availability of CMOS technology that allows next-generation optical transport solutions for 100 Gigabit. And specifically, metro-focused solutions that hit the cost and power numbers required by this industry.
On top of that, optical communication has entered the era of digital signal processing receivers. We have also seen the potential segmentation in 100 Gigabit of metro versus long-haul, each with its specific set of solutions.
"We will see a huge growth in video consumption. This has already started but it is just the tip of the iceberg."
The transition of the telecom and datacom market to 100 Gigabit has also begun - from the transport optical network all the way to copper backplanes - it's all a 4x25Gbps architecture. This year has also seen consolidation in the ecosystem, especially among module companies.
This consolidation will continue at all industry levels in 2012: semiconductors, subsystems, systems and the carriers. The consolidation will coincide with an across-the-board price reduction in emerging technologies like 100 Gigabit transport.
The increase in capacity demand will also force an increase in requirements for various solutions supporting 100 Gigabit. I expect to see more CMOS-based devices introduced.
From a services point of view, we will see huge growth in video consumption. This has already started but it is just the tip of the iceberg. Video will have a tremendous influence on network evolution.
Gilles Garcia, director, wired communication at Xilinx @gllsgarcia
The CFP2 and CFP4 optical modules are arriving a lot faster than the CFP did after the XFP optical module.
The CFP standard took 3-4 years to complete, while the standard for the CFP2 has just closed after two years. Now the CFP4 standard has been launched and is expected to take only 18 months. The new form factors are being driven by the cost-per-port of 100 Gigabit and how to reduce it. The CFP2 doubles the density compared with the CFP, while the CFP4 doubles it again.

"Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and deployment of new services."
Telecom application-specific standard product (ASSP) players have been relatively quiet in 2011. Word from customers is that such vendors are pushing out their roadmaps and product availability because of too much flux in the various IEEE and ITU-T telecom standards and the difficulty of justifying the return on investment. This is proving a perfect opportunity for FPGAs.
Large system vendors are growing their network services as operators continue to outsource their network management and maintenance. As reported in their financial reports, this is an important source of business for the likes of Ericsson, Huawei and Alcatel-Lucent.
It is leading the vendors to push more of their own hardware, as they look to add value-added services and integrate those services using their own platforms. Some equipment vendors realise they do not have a full portfolio and have established partnerships for the missing platforms. They are also starting to develop platforms to generate more revenue.
In 2012, I’m not expecting a telecom revolution but I do expect accelerated evolution. And I foresee big disruptions in the ASSP market as it continues to consolidate: I expect several mergers and acquisitions among the top 20 ASSP suppliers.
Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and the deployment of new services. Programmability also improves time-to-market for delivering these services and reduces time-to-revenue.
Mobile backhaul will be a market driver in 2012. The growth in mobile data terminals will lead to a new generation of mobile backhaul networks. This will drive the move from 1 to 10 Gigabit Ethernet, higher-feature packet processing, and the integration of traffic management into mobile infrastructure to better control and bill bandwidth usage, i.e. pay for what you use.
The 'God box' - packet optical transport systems and the like - is back, but really it is network needs that are driving this.
And one topic to watch that will become clearer in 2012 is how cloud computing impacts the networking market with regard to such issues as security, caching and higher-speed links.
Google is becoming an important networking equipment player internally, for its own usage. And Google will be joined by others - Facebook, Amazon and so on. What impact will this have on the traditional system networking vendors? Such new players are defining and building network platforms tailored to their needs. This is competition for the traditional system vendors, who are not getting this piece of the business. Semiconductor suppliers, including FPGA vendors, could serve those companies directly.
Other issues to note: What will Intel do in the networking space? Intel acquired Fulcrum in 2011 and has invested in several networking companies.
There are also technology issues.
What will happen to ternary content addressable memory (TCAM)? Broadcom's acquisition of NetLogic Microsystems has created a hole in the TCAM market. Will Broadcom continue with TCAM? Will customers want to give their TCAM business to Broadcom?
Xilinx has added network search engine IP to its FPGA solution portfolio as multi-core 'search engines' face increasing difficulty in sustaining the required performance.
And of course there is the continual issue of power optimisation.
For Part 2, click here
For Part 3, click here
Rafik Ward Q&A - final part

"Feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive"
Rafik Ward, Finisar
Q: Why has Finisar acquired Broadway Networks?
A: We spent quite some time talking to Broadway and understanding their business. We also talked to Broadway’s customers and the feedback we got on the technical team, the products and what this little start-up was able to accomplish was unanimously very positive.
We think what Broadway has done, for instance their EPON* stick product, is very interesting. With that product, an end user has the ability to make any SFP* port on a low-end Ethernet switch an EPON ONU* interface. This opens up a whole new set of potential customers and end users for EPON.
In reality, consumers will never have Ethernet switches with SFP ports in their house. Where we do see such Ethernet switches are in every major enterprise and many multi-dwelling units. It is an interesting technology that enables enterprises and multi-dwelling units to quickly tool-up for EPON.
* [EPON - Ethernet passive optical network, SFP - small form-factor pluggable optical transceiver, ONU - optical network unit]
Optical transceivers have been getting smaller and faster in the last decade yet laser and photo-detector manufacturing have hardly changed, except in terms of speed. Is this about to change?
Speed is one of the focus areas for the industry and will continue to be. Looking forward in a number of applications, though, we are going to hit the limit for these lasers and we are going to have to look more carefully outside of just raw laser speed to move up the data rate curve.
"We are going to hit the limit for these lasers"
A lot of this work has already started on the line side using different modulation formats and DSP* technology. Over time the question is: What happens on the client side? In future, do we look to other modulation formats on the client side? Eventually we will get there; it may take several years before we need to do things like that. But as an industry we would be foolish to think we won’t have to do this.
WDM* is going to be an increasingly important technology on the client side. We are already seeing this with the 40GBASE-LR4 and 100GBASE-LR4 standards.
* [DSP - digital signal processing, WDM - wavelength-division multiplexing]
Google gave a presentation at ECOC that argued for the need for another 100Gbps interface. What is Finisar’s view?
The feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive. We have spent a lot of time with customers helping them understand how the current LR4 standard, as written, actually enables a very low-cost optical interface, and that the timeframes for getting the cost of 100 Gig down considerably are, we believe, very short.
[Photo: Rafik Ward (right) giving Glenn Wellbrock, director of backbone network design at Verizon Business, a tour of Finisar's labs.]
That was part of the details that [Finisar’s] Chris Cole also presented at ECOC.
There has certainly been a lot of media attention on the two [ECOC] presentations from Finisar and Google. This really is not so much about the, quote, 'drama', or two companies disagreeing over which optical interface makes more sense. It is more fundamental than that.
What it comes down to is that, as an industry, we have pretty limited resources. The best thing all of us can do is try to direct these resources – this limited pool we have combined throughout the industry - on a path that makes the most sense to reduce bandwidth cost most significantly.
The best way to do that, and that is already established, is through standards. The [IEEE] standard got it right that the path the industry is on is going to enable the lowest cost 100 Gig [interface]. Like everything, there is some investment required to get us there. The 25 Gig technology now [used as 4x25 Gig] is becoming mainstream and will soon enable the lowest cost solution. My view is that within 18 months to two years this will be a moot point.
If the technology was available 18 months sooner, we wouldn’t even be having this discussion. But that is the position that we, as an industry, are in. With that, it creates some tensions, some turmoil, where customers don’t like to pay more than they perceive they have to.
There is the CFP form factor, which is relatively large. Is the point that if current technology had been available 18 months ago, 100Gbps could have come out in a QSFP?
The heart of the debate is cost.
There are other elements that always play into a debate like this. Beyond the cost argument, there is the question of how quickly two optical interfaces, like 4x25 Gig versus 10x10 Gig, can each enable a smaller form-factor solution.
But I think that is secondary. Had we not had the cost problem that we have now between 4x25 Gig and 10x10 Gig, I don't think we would be talking about it.
So it’s the current cost of the 4x25 Gig that is the issue?
Correct.
In September, the ECOC conference and exhibition was held. What were your impressions and did you detect any interesting changes?
There wasn't so much an overwhelming theme this year at ECOC. At ECOC 2009, it was the year of coherent detection. This year there wasn't a theme that resonated strongly throughout.
The mood was relatively upbeat. From our perspective, ECOC seemed a little bit smaller in terms of the size of the floor. But all the key people you would expect to be at the show were there.
Maybe the strongest theme – and I wrote about this in my blog – was colourless, directionless, contentionless (CDC) [ROADMs]. I think what I said is that they should have renamed it not ECOC but the ECDC show.
"A blog ... enables a much more informal mechanism to communicate to a broad audience."
Do you read business books and is there one that is useful for your job?
Probably the book I think about the most in my job is Clayton Christensen's The Innovator’s Dilemma.
He talks about how, when you look at very successful technology companies that have failed, what causes them to fail is often new solutions that come from the very low end of the market.
A lot of companies, and he cites examples from the disk drive industry, prided themselves on focussing on the high end of the market but ultimately ended up failing because there was a surprise upstart, someone who came in at the market's low end – in terms of performance, cost etc. – that continued to innovate using their low-end architecture, making it suitable for the core market.
For these large, well-established companies, once they realised they had this competitor, it was too late.
I think about that business book probably more than others. It’s a very interesting take on technology and the threat that can be posed to people in high-tech companies.
Your job sounds intensive and demanding. What do you do outside work to relax?
I’m a big [ice] hockey fan. I’ve been a hockey fan for many years; it’s a pretty intense sport. These days I tend to watch more hockey than I play but I very much enjoy the sport.
The other thing I started up this year that I had never done before – a little side project – was vegetable gardening. Surprisingly, it ended up taking a lot of my attention and I think it was a good distraction for me.
It can be quite remarkable, when you have your own little vegetable garden, how often you go and look at its progress. I'd often find, coming home from work, that the first thing I'd want to do is go and see how things were progressing in my vegetable garden.
You are the face of Finisar’s blog. What have you learnt from the experience?
A blog is an interesting tool to get information out to a broad audience. For companies like Finisar, it serves as a very important communication vehicle that didn’t exist previously.
In the old days, if you wanted to get information out to a broad group of customers, you either had to meet and communicate that information face-to-face, or via email; very targeted, one customer-at-a-time communication.
Another way was the press release. A press release was a very easy way to broadcast that information. But the challenge is that not all information that you want to broadcast is suitable for a press release.
The reason why I really like the blog is that it enables a much more informal mechanism to communicate to a broad audience.
Has it helped your job in any tangible way?
We found some interesting customer opportunities. These have come in through the blog when we’ve talked about specific products. That hasn’t happened extremely frequently but we have had a few instances. So it’s probably the most tangible thing: we can point to enhanced business because of it.
But the strength of something like a blog goes much deeper than that, in terms of the communication vehicle it enables.
You have about a year’s experience running a blog. If an optical component company is thinking about starting a blog, what is your advice?
The best advice I can give to anybody looking to do a blog is that it is something you have to commit to up-front.
A blog where you don’t continue to refresh the content regularly becomes a tired blog very quickly. We have made a conscious effort to have updated postings as best we can, on a weekly basis or even more frequently. There are certainly periods where we have gone longer than that but if you look back, in general, we have a wide variety of content that has been refreshed regularly.
I have to give credit to others - guest bloggers - within the organisation that help to maintain the content. This is critical. I would struggle to keep up with the pace if it was just myself every week.
Click here for the first part of Rafik Ward's Q&A.
Google and the optical component industry
According to a report by Pauline Rigby, Google wants something in between two existing IEEE interface standards. The 100GBase-SR10, which has 10 parallel channels and a 125m span, has too short a reach for Google.
“What is good for an 800-pound gorilla is not necessarily good for the industry. It [Google] should have been at the table when the IEEE was working on the standard."
Daryl Inniss, practice leader, components, Ovum
The second interface, the 100GBase-LR4, uses four channels that are multiplexed onto a single fibre and has a 10km reach. The issue here is that Google doesn't need a 10km reach, and while a single fibre is better than the multi-mode fibre-based SR10, the interface is costly because of its “gearbox” IC that translates between 10 lanes of 10Gbps and four lanes each at 25Gbps. Both IEEE interfaces are also implemented using the CFP form factor, which is bulky.
What Google wants
Google wants optical component vendors to develop a new 100 Gigabit Ethernet multi-source agreement (MSA) that is based on a single-mode interface with a 2km reach, reports Rigby. Such a design would use a ten-channel laser array whose output is multiplexed onto a fibre, a similar laser array-multiplexer arrangement that has already been developed by Santur. Using such a part, the new interface could be developed quickly and cheaply, says Google.
The proposed interface clearly has merits and Google, an important force with an appetite for optics, makes some valid points. But the industry is developing 4x25Gbps interfaces and while such interfaces may be challenging, no-one doubts they will come.
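To make the lane arithmetic of the competing options concrete, here is a small sketch using the reach and channel figures quoted above. The per-lane rate of Google's proposed interface is inferred from its ten-channel description, and cost and form-factor judgements are deliberately left out.

# Lane arithmetic for the 100 Gigabit Ethernet options discussed above.
options = {
    "100GBase-SR10":    {"lanes": 10, "gbps_per_lane": 10, "reach_m": 125,   "fibre": "multi-mode, parallel"},
    "100GBase-LR4":     {"lanes": 4,  "gbps_per_lane": 25, "reach_m": 10000, "fibre": "single-mode, WDM"},
    "Proposed 2km MSA": {"lanes": 10, "gbps_per_lane": 10, "reach_m": 2000,  "fibre": "single-mode, WDM"},
}

for name, o in options.items():
    total = o["lanes"] * o["gbps_per_lane"]          # every option sums to 100G
    print("{:18s} {:2d} x {} Gbps = {} Gbps, reach {} m, {}".format(
        name, o["lanes"], o["gbps_per_lane"], total, o["reach_m"], o["fibre"]))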
Google’s next moves
Google has a history of being contrarian if it believes it best serves its business. The way the internet giant designs data centres is one example, using massive numbers of cheap servers arranged in a fault-tolerant architecture.
But there is only so much it can do in-house and developing a new optical interface will require help from optical component players.
Google has the financial muscle to hire an optical component firm to engineer and manufacture a custom interface. A recent example of such a partnership is IBM's work with Avago Technologies to develop board-level optics – or an optical engine – for use within IBM’s POWER7 supercomputer systems.
According to Karen Liu, vice president, components and video technologies at market research firm Ovum, once such an interface is developed, Google could allow others to buy it to help reduce its price. “Remember the Lucent form factor which became a de facto standard but wasn’t originally intended to be?” says Liu. “This approach could work.”
Taking a longer term view, Google could also invest in optical component start-ups. The return may take years and as the experience of the last decade has shown, optical components is a risky business. But Google could encourage a supply of novel, leading-edge technologies over the next decade.
The optical component industry is right to push back against Google's request for a new 100 Gigabit Ethernet MSA, as Finisar has done. While Google may be an important player that can drive interface requirements, many players have helped frame the IEEE 100Gbps Ethernet standards work. In the last decade the optical industry has also seen other giant firms try to drive the industry, only to eventually exit.
“The industry needs to move on,” says Daryl Inniss, practice leader, components at Ovum. “What is good for an 800-pound gorilla is not necessarily good for the industry.” Inniss also suggests a simple and effective way Google could have influenced the 100 Gigabit Ethernet MSA work: “It [Google] should have been at the table when the IEEE was working on the standard."
OFC/NFOEC 2010: Technical paper highlights

Here is a sample of some of the noteworthy papers.
Optical transmission
Nortel’s Next Generation Transmission Fiber for Coherent Systems details how various fibre parameters impact coherent system performance. This is important for existing 40 and 100Gbps systems and for future ones based on even higher data rates.
In 40G and 100G Deployment on 10G Infrastructure: Market Overview and Trends, Coherent Versus Conventional Technology, Alcatel-Lucent discusses 40G and 100G deployment strategies over 10G infrastructures based on a trial using live commercial traffic.
Two papers demonstrate possible future optical modulation steps.
In Ultra-High Spectral Efficiency Transmission, Bell Labs Alcatel-Lucent details the generation, transmission and coherent detection of 14-Gbaud polarization-division multiplexed, 16-ary quadrature-amplitude-modulation (16-QAM) signals achieving spectral efficiencies as high as 6.2 b/s/Hz.
Meanwhile, NEC Labs America and AT&T Labs address 112.8-Gb/s PM-RZ-64QAM Optical Signal Generation and Transmission on a 12.5GHz WDM Grid. The optical signal was sent over 2x40km using an 8-channel WDM system with 12.5GHz grid spacing.
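For readers who want to sanity-check the headline numbers in these two papers, here is a quick worked calculation. It ignores FEC and framing overheads, and the spectral efficiency of the first paper additionally depends on the channel spacing used, so the figures are approximate.

# Quick sanity check of the modulation arithmetic in the two papers above.
import math

def raw_line_rate_gbps(baud_rate_gbaud, bits_per_symbol, polarizations=2):
    """Raw line rate = symbol rate x bits per symbol x number of polarizations."""
    return baud_rate_gbaud * bits_per_symbol * polarizations

# Bell Labs paper: 14-Gbaud PDM-16QAM; 16-QAM carries log2(16) = 4 bits per symbol.
rate_16qam = raw_line_rate_gbps(14, math.log2(16))      # 112 Gb/s raw

# NEC/AT&T paper: PM-RZ-64QAM at 112.8 Gb/s on a 12.5 GHz grid.
bits_64qam = math.log2(64)                               # 6 bits per symbol
baud_64qam = 112.8 / (bits_64qam * 2)                    # about 9.4 Gbaud
se_64qam = 112.8 / 12.5                                  # about 9.0 b/s/Hz before overheads

print("PDM-16QAM raw rate: %.0f Gb/s" % rate_16qam)
print("PM-64QAM: %.1f Gbaud, ~%.1f b/s/Hz" % (baud_64qam, se_64qam))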
Photonic integration
In High Performance Photonic Integrated Circuits for Coherent Fiber Communication, Chris Doerr of Bell Labs, Alcatel-Lucent presents how photonic integration can benefit high-speed transmission. In particular, how optical integration can be used to tackle the complex circuitry needed for coherent systems to reduce the area, cost, and power consumption of optical coherent transceivers.
Another photonic integration development is the CMOS-Integrated Low-Noise Germanium Waveguide Avalanche Photodetector Operating at 40Gbps from IBM T.J. Watson Research Center. The avalanche photodiode has a gain-bandwidth-product above 350GHz operating at 3V. The avalanche photodetector is monolithically integrated into CMOS.
Optical access
An update will be given on the EU’s Seventh Framework Programme project on WDM-PON, dubbed SARDANA (Scalable Advanced Ring-based passive Dense Access Network Architecture). The paper, Results from EU Project SARDANA on 10G Extended Reach WDM PONs, details the integration of WDM metro and PON access technologies to implement ring protection, 100km reach and up to 1024 users served at 10Gbps using a passive infrastructure.
In 44-Gb/s/λ Upstream OFDMA-PON Transmission with Polarization-Insensitive Source-Free ONUs, NEC Labs America details its work on colourless 44-Gb/s/λ upstream OFDMA-PON transmission using polarization-insensitive, source-free ONUs.
Green telecom and datacom
There are other, more subtle developments at OFC/NFOEC. Two papers from Japan have ‘Green’ in the title, highlighting how power consumption is increasingly a concern.
High Performance “Green” VCSELs for Data Centers from Furukawa Electric Co. Ltd details how careful design can achieve a 62% power conversion efficiency in the 1060nm VCSEL.
The second paper tackles power consumption in access networks. Key Roles of Green Technology for Access Network Systems from NTT Labs in Japan addresses the ITU-T’s standardisation activities.
Optics for flow and interconnect
In Optical Flow Switching, Vincent Chan of MIT will discuss optical flow switching, which promises significant growth, power efficiency and cost-effective scalability for next-generation networks.
Meanwhile Bell Labs, Alcatel-Lucent has a paper entitled Photonic Terabit Routers: The IRIS Project, detailing the results of the DARPA-MTO-funded program to develop a router with an all-optical data plane and a total capacity of more than 100 Tbps.
Another important topic is optical interconnect. Low Power and High Density Optical Interconnects for Future Supercomputers from IBM Research reviews the status and prospects of the technologies required to build the low-power, high-density board- and chip-level interconnects needed to meet future supercomputers' requirements.
NFOEC papers
There are also some noteworthy NFOEC papers bound to stir interest:
- Google reviews the optical communication technologies required to support data center operations and warehouse-scale computing.
- Verizon shares lessons learned during the five years of Verizon’s FiOS and the need to continually evolve product and service offerings.
- AT&T details the key decisions required in defining its new 100G backbone.
There is a comprehensive OFC/NFOEC preview in the February issue of IEEE Communications magazine (see the "conference preview" tab).
