2012: A year of unique change
The third and final part on what CEOs, executives and industry analysts expect during the new year, and their reflections on 2011.
Karen Liu, principal analyst, components, telecoms, Ovum @girlgeekanalyst

"We’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogeneous networks including Wi-Fi."
Last year was a long one. Looking back, it is hard to believe that only one year has elapsed between January 2011 and now.
In fact, looking back it is hard to remember how things looked a year ago: natural disasters were considered rare occurrences, WiMAX’s role was still being discussed, and some viewed TDD LTE as a Chinese peculiarity. For that matter, cloud-RAN was another weird Chinese idea. But no matter: China could do anything, given its immunity to economics and to any need for a return on investment.
Femtocells were consumer electronics for the occasional indoor coverage fix, and Wi-Fi was not for carriers.
Only optical could do 100Mbps to the subscriber, who, by the way, was moving on to 10 Gig PON in short order. Flexible spectrum ROADMs meant only Finisar could play, and high port-count wavelength-selective switches had come and gone. 100 Gigabit DWDM took several slots, hadn’t shipped for real, and even the client-side interface was a problem.
As for modules, 40 Gigabit Ethernet (GbE) client was CFP-sized, and high-density 100GbE looked so far away that the non-standard 10x10 MSA was welcomed.
NeoPhotonics was a private company, doing that wacky planar integration thing that works OK for passives but not actives.
Now it feels like we’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogeneous networks including Wi-Fi.
Optical is one of several ways to do backhaul or PC peripherals. 40GbE, even single-mode, comes in a QSFP package, and tunable comes in an SFP – both of which, by the way, use optical integration.
Most optical transport vendors, even metro specialists, have 100 Gigabit coherent in trial stage at least. Thousands of 100 Gig ports and tens of thousands of 40 Gig have shipped.
Flexible spectrum is being standardised and CoAdna went public. The tunable laser start-up phase concluded with Santur finding a home in NeoPhotonics, now a public company.
But we also have a new feeling of vulnerability.
Optical components revenues and margins slid back down. Bad luck can strike twice, with Opnext taking the hit from both the spring earthquake and the fall floods. China turns out not to be immune after all, and time hasn’t automatically healed Europe.
What will happen this year? At this rate, I think we’ll see a lot of news at OFC in a couple of months' time. By then I’ll probably think: "Was it as recently as January when the world looked so different?"
Brian Protiva, CEO of ADVA Optical Networking @ADVAOpticalNews
Last year was an incredible year for networks. In many respects it was a watershed moment. Optical transport took a huge step forward with the genuine availability of 100 Gigabit technologies.
What's even more incredible is that 100 Gigabit emerged in more than the core: we saw 100 Gig metro solutions enter the marketplace. This means that for the first time enterprises and service providers have the opportunity to deploy 100 Gig solutions that fit their needs. Thanks to the development of direct-detection 100 Gig technology, cost is becoming less and less of an issue. This is a game changer.
In 2012, 100 Gig deployments will continue to be a key topic, with more available choices and maturing systems. However, I firmly believe the central focus of 2012 will be automation and multi-layer network intelligence.

"We need to see networks that can effectively govern and optimise themselves."
Talking to our customers and the industry, it is clear that more needs to be done to develop true network automation. There are very few companies that have successfully addressed this issue.
We need to see networks that can effectively govern and optimise themselves - networks that automatically deliver bandwidth on demand, monitor and resolve problems before they disrupt service, and drive dramatically increased efficiency.
The future of our networks is all about simplicity. Today's complex, inefficient operations can no longer support the continued fierce growth in bandwidth. Streamlined operations are essential if operators are to drive further profitable growth.
I'm excited about helping to make this happen.
Arie Melamed, head of marketing, ECI Telecom @ecitelecom
The momentum of major traffic growth with no proportional revenue increase continued - even intensified - in 2011. Operators have to invest in their networks without being able to generate a proportional revenue increase from that investment. We expect to see new business models crop up as operators cope with over-the-top (OTT) services.
To differentiate themselves from the competition, operators must make the network a core part of the end-customer experience. To do so, we expect operators to introduce application-awareness in the network – optimising service delivery to avoid network expansion and to introduce new revenues.
We also expect operators to offer quality-of-service assurance to end users and content application providers, turning a lose-lose situation around.
Larry Schwerin, CEO of Capella Intelligent Subsystems @CapellaPhotonic
During 2011, we witnessed demand for broadband access increase at an accelerated rate. Much of this has been fuelled by continuing mass deployments of broadband access - PON/FTTH, wireless LTE and HFC, to name a few - as well as the ever-increasing adoption of cloud computing, which requires instantaneous broadband access. Video and rich media are a small but growing piece of this equation.
The full impact of this is yet to be felt, as people start to consume more narrowcast, as opposed to broadcast, content. The final element will come when upstream content from appliances such as Sling Media's, along with the various forms of video conferencing, becomes more widespread. This will drive demand for more symmetrical bandwidth, with growth coming upstream.

"Change is definitely in order for the optical ecosystem. The question is how and when?"
Along with this, the issue of falling revenue-per-bit is forcing network operators to develop more cost-effective ways for managing this traffic.
All of the aforementioned is driving demand for higher capacity and more flexible support at the fundamental optical layer.
I believe this will translate into more bits-per-wavelength, more wavelengths-per-fibre and, finally, more flexibility for network operators, who will be able to manage traffic at the optical layer more easily. This points to good news for transponder, tunable and ROADM/ WSS suppliers.
2011 also exposed certain issues within the optical communications sector. Most notably, entering 2011, the financial marketplace was bullish on the optical sector following rapid quarter-on-quarter growth from certain larger optical players. Then the “Ides of March” came, and optical stocks lost as much as 40% of their value when it was deemed there had been a pull-back in demand by a few, but nonetheless important, players in the sector.
Later in the year came the flooding in Thailand, which hampered the production capabilities of many of the optical components players.
Overall margins in the sector remain at unacceptable levels, furthering speculation that things need to change for a more robust environment to exist.
What will 2012 bring?
I believe demand for bandwidth will continue to grow. Data centres will gain more focus as cloud computing continues to gain traction. This will lead to more demand for fundamental technologies in the area of optical transmission and management.
The next phase of wavelength management solutions will start to emerge - at high port counts (1x20) as well as at low port counts (1x2, 1x4) for edge applications. More emphasis will be placed on monitoring and control as more complex optical networks are built.
Change is definitely in order for the optical ecosystem. The question is how and when? Will it simply be consolidation? How will vertical integration take shape? How will new technologies influence potential outcomes?
2012 should be a year of unique change.
Terry Unter, president and general manager, optical networks solutions, Oclaro
Discussion and progress on defining next-generation ROADM network architectures were a very important development in 2011. In particular, consensus on the feature requirements and technology choices needed to enable a more cost-efficient optical network layer was generally reached amongst the major network equipment manufacturers. Colourless, directionless and, to a significant degree, contentionless are clear goals as we continue to drive down the cost of the network.

"We expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year."
Coherent detection transponder technology is a critical piece of the puzzle ensuring scalability of network capacity while leveraging a common technology platform. We succeeded in volume production shipments of a 40 Gig coherent transponder and we announced our 100 Gig transponder.
2012 will be an important year for 100 Gig. The availability of 100 Gig transponder modules for deployment will enable a much wider list of system manufacturers to offer their customers more spectrally-efficient network solutions. The interest is universal from metro applications to the long haul and ultra-long haul market segments.
While there is much discussion about 400 Gig and higher rates, standards are in very early stages. The industry as a whole expects 100 Gig to be a key line rate for several years.
As we enter 2012, we expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year.
Reflections 2011, Predictions 2012 - Part 2
Gazettabyte asked industry analysts, CEOs, executives and commentators to reflect on the last year and comment on developments they most anticipate for 2012. Here are the views of Verizon's Glenn Wellbrock, Professor Rod Tucker, Ciena's Joe Berthold, Opnext's Jon Anderson, NeoPhotonics' Tim Jenks and Vladimir Kozlov of LightCounting.
Glenn Wellbrock, Verizon's director of optical transport network architecture & design
The most significant accomplishment from an optical transport perspective for me was the introduction of 100 Gigabit into Verizon's domestic - US - network.

"The key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit"
That accomplishment has paved the way for us to hit the ground running in 2012 with a very aggressive 100 Gigabit deployment plan. I also believe this accomplishment gives others the confidence to start taking advantage of this leading-edge technology.
With coherent receiver technology and its associated high-speed electronics eliminating external dispersion compensation fibre, we see a much cleaner line system design that lowers propagation latency by up to 15% while bringing down the cost, space and power per bit.
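The latency saving is simple to sketch. In a legacy dispersion-managed link, spools of dispersion compensation fibre (DCF) typically add something like 15-20% extra fibre length; coherent receivers compensate dispersion electronically instead, so that extra fibre - and its propagation delay - disappears. A rough back-of-the-envelope calculation (the figures are generic illustrative assumptions, not Verizon's):

```python
# Illustrative sketch: latency saved by removing dispersion compensation fibre.
C = 299_792.458   # speed of light in vacuum, km/s
N_FIBER = 1.468   # approximate group index of silica fibre

def one_way_latency_ms(length_km: float) -> float:
    """Propagation delay over a given fibre length."""
    return length_km / (C / N_FIBER) * 1000

link_km = 1000                # example long-haul route
dcf_km = 0.17 * link_km       # DCF often adds ~15-20% extra fibre length

legacy = one_way_latency_ms(link_km + dcf_km)   # with in-line DCF
coherent = one_way_latency_ms(link_km)          # DSP compensates dispersion instead
saving = 1 - coherent / legacy

print(f"legacy: {legacy:.2f} ms, coherent: {coherent:.2f} ms, saving: {saving:.1%}")
```

With a 17% DCF ratio the saving works out at roughly 14.5%, consistent with the "up to 15%" figure quoted.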
The value of the whole industry moving in this direction means higher volumes and, therefore, lower costs. This new infrastructure will allow operators to get ahead of customer demand, thus improving delivery intervals and introducing new, higher bandwidth services to those large key customers that require it.
In my opinion, the key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit and provides the framework to support colourless, directionless and contentionless optical nodes.
Today, field technicians must plug a new transmitter/ receiver into the appropriate direction and filter port at both circuit ends. With this new technology, operations personnel can simply plug the new card into the next available port and it can then be provisioned, tested and even moved to a new colour or direction remotely without any on-site personnel involvement - even when there are multiple copies of the same colour on the same add/ drop structure coming from different fibres.
This new nodal architecture takes advantage of the inherent channel selection capability of the coherent receiver to eliminate fixed filters and opens up the door for a truly reconfigurable optical add/ drop multiplexer (ROADM) - creating new flexibility that can be used for optical restoration, network defragmentation, operational simplicity, and more.
Rod Tucker, Director of the Institute for a Broadband Enabled Society (IBES), Director of the Centre for Energy-Efficient Telecommunications (CEET), and professor of electrical and electronic engineering at the University of Melbourne.
Australia's National Broadband Network (NBN) hit the ground running in 2011.
The project is still many years from completion, but in 2011 the roll-out of fibre-to-the-premises infrastructure began in earnest. This is a very noteworthy project - a wholesale broadband access network delivering advanced broadband services to the entire population of the country, including fibre to 93% of all premises and a mixture of fixed wireless and satellite to the remainder. At an estimated cost of around AUS$36 billion, the price tag is not small.

"The environment created by [Australia's] National Broadband Network will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery"
But the wholesale-only model maximises opportunities for competition at the service-provider level, and reduces wasteful duplication of infrastructure in the last mile. A remarkable aspect of the NBN project is that a deal has been struck between the incumbent telco, Telstra, and the government-owned company building the NBN.
Under this deal, Telstra will shut down its Hybrid-Fibre-Coax (HFC) network and decommission its legacy copper access network. Australia will become a truly fibre-connected country, with a future-proof broadband infrastructure.
My thoughts for 2012 also relate to Australia's National Broadband Network. The environment created by the NBN will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery.
I anticipate that in 2012 and beyond, new services providers and aggregators in areas such as health care, education, entertainment and energy will emerge.
I am very excited about the opportunities.
Joe Berthold, vice president of network architecture at Ciena
One of the most memorable developments from a network architecture point of view was the clear emergence of the category of packet-optical switching products to serve as the transport layer of backbone IP networks.
For years two competing points of view have been put forth. First, in the 'IP-over-glass' position, long-haul optics is incorporated into core routers. This has never taken off, with some disappointing attempts in the early days of 40 Gigabit. The second approach involves a separate, very much simpler, packet optical transport platform being introduced to interconnect core routers. The packet transport could be based on Ethernet protocols, MPLS, MPLS-TE or MPLS-TP.

"It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development"
What is quite significant in this development is that traditional router vendors seem to be going in this direction too, with the vision of a much simpler packet-switching platform to keep cost, space and power under control.
This is a clear response to the overwhelming need we see in the market, representing a separation of packet switching into two layers: one with global routing capability at strategic locations in the network, and the other with flexible transport functionality for network traffic engineering.
In 2012 it will be fascinating to see how the struggle for protocol dominance plays out within the data centre.
While the IETF has many competing proposals, worked on in multiple groups, the IEEE is now in final ballot for Shortest Path Bridging (IEEE 802.1aq).
Shortest Path Bridging has broad applicability in networks, but we might see it first emerge as a solution within the data centre.
The other contender within the data centre is OpenFlow, which has developed quite a momentum too.
It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development.
Jon Anderson, director of technology programme at Opnext
Our most significant 2011 events were the great earthquake in Japan in March and the Thailand floods in October. Both events caused major disruptions and challenges in optical component supply-chain management and manufacturing.
JDS Uniphase's tunable SFP+ announcement was well ahead of the technology curve.

"Our most significant 2011 events were the great earthquake in Japan in March and the Thailand floods in October."
In 2012 we expect initial production shipments and deployment of 100Gbps PM-QPSK/ coherent modules, as well as a fast production ramp of 40 Gigabit Ethernet (GbE) QSFP+ modules for data centre applications.
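For context, the arithmetic behind PM-QPSK's appeal is straightforward: QPSK carries two bits per symbol, and polarisation multiplexing doubles that again, so a 100Gbps signal needs only about a quarter of the symbol rate. A quick sketch (the ~112Gbps figure is an illustrative assumption reflecting typical OTN framing and FEC overhead):

```python
# Rough sketch of PM-QPSK symbol-rate arithmetic (illustrative, not a spec).
BITS_PER_QPSK_SYMBOL = 2   # QPSK encodes 2 bits per symbol
POLARISATIONS = 2          # polarisation multiplexing carries two streams at once

def symbol_rate_gbaud(line_rate_gbps: float) -> float:
    """Symbol rate needed to carry a given line rate with PM-QPSK."""
    return line_rate_gbps / (BITS_PER_QPSK_SYMBOL * POLARISATIONS)

print(symbol_rate_gbaud(100))  # 25.0 Gbaud for the raw payload
print(symbol_rate_gbaud(112))  # 28.0 Gbaud with framing and FEC overhead included
```

Keeping the symbol rate in the 25-28 Gbaud range is what lets 100 Gig reuse electronics and optics not far beyond what 40 Gig systems had already proven.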
Another development to watch is the next-generation 100 GbE interconnect technology and standards development for low-cost, high-density modules for data centre applications.
Lastly, there will be an increased focus on technologies and solutions for 100 Gigabit DWDM in metro and extended reach enterprise applications.
Tim Jenks, CEO of NeoPhotonics

NeoPhotonics made significant progress this year in the development of components and technologies for coherent transmission networks, including receivers, transmitters and advanced approaches to switching.
We continue to see increasing adoption of coherent transmission systems, broad-scale deployment of access networks and a continuing emergence of large scale data centres as a prominent element of the communications network landscape.
Vladimir Kozlov, CEO of LightCounting
The industry was strong enough to get over an earthquake, tsunami and flood in 2011. Softer demand for optics in 2011 helped - is still helping - many vendors to ride the disruptions. Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011. We are looking forward to a smoother ride in 2012, as demand/ supply reach equilibrium and nature cooperates.
"Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011"
Service provider revenue and capex were up significantly in 2011. Mobile data is driving the growth, but even wireline revenues are improving, with FTTx probably behind it. This should be a sustainable trend for 2012-2015: even as service providers curb expenses to improve profitability, a larger fraction of capex will be spent on equipment. New technology is critical to stay ahead of the competition.
Data centre optics had another good year with 10GBASE-T falling further behind schedule and with 100 Gigabit generating much action. This will probably get even more interesting in 2012.
Our conservative forecast for active optical cable, criticised by some vendors, was not conservative enough in 2011. It will take a while for this segment to unfold.
Rafik Ward Q&A - final part

"Feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive"
Rafik Ward, Finisar
Q: Broadway Networks, why has Finisar acquired the company?
A: We spent quite some time talking to Broadway and understanding their business. We also talked to Broadway’s customers and the feedback we got on the technical team, the products and what this little start-up was able to accomplish was unanimously very positive.
We think what Broadway has done, for instance their EPON* stick product, is very interesting. With that product, an end user has the ability to make any SFP* port on a low-end Ethernet switch an EPON ONU* interface. This opens up a whole new set of potential customers and end users for EPON.
In reality, consumers will never have Ethernet switches with SFP ports in their house. Where we do see such Ethernet switches are in every major enterprise and many multi-dwelling units. It is an interesting technology that enables enterprises and multi-dwelling units to quickly tool-up for EPON.
* [EPON - Ethernet passive optical network, SFP - small form-factor pluggable optical transceiver, ONU - optical network unit]
Optical transceivers have been getting smaller and faster in the last decade yet laser and photo-detector manufacturing have hardly changed, except in terms of speed. Is this about to change?
Speed is one of the focus areas for the industry and will continue to be. Looking forward in a number of applications, though, we are going to hit the limit for these lasers and we are going to have to look more carefully outside of just raw laser speed to move up the data rate curve.
"We are going to hit the limit for these lasers"
A lot of this work has already started on the line side using different modulation formats and DSP* technology. Over time the question is: What happens on the client side? In future, do we look to other modulation formats on the client side? Eventually we will get there; it may take several years before we need to do things like that. But as an industry we would be foolish to think we won’t have to do this.
WDM* is going to be an increasingly important technology on the client side. We are already seeing this with the 40GBASE-LR4 and 100GBASE-LR4 standards.
* [DSP - digital signal processing, WDM - wavelength-division multiplexing]
Google gave a presentation at ECOC that argued for the need for another 100Gbps interface. What is Finisar’s view?
Feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive. We have spent a lot of time with customers helping them understand how the current LR4 standard, as is written, actually enables a very low cost optical interface, and the timeframes we believe are very quick in terms of how we can get cost down considerably on 100 Gig.
Rafik Ward (right) giving Glenn Wellbrock, director of backbone network design at Verizon Business, a tour of Finisar's labs.
That was part of the details that [Finisar’s] Chris Cole also presented at ECOC.
There has certainly been a lot of media attention on the two [ECOC] presentations from Finisar and Google. This really is not so much about the, quote, ‘drama’, or two companies that disagree about which optical interface makes more sense. It is more fundamental than that.
What it comes down to is that, as an industry, we have pretty limited resources. The best thing we can all do is direct these resources – this limited pool we have combined throughout the industry - along the path that makes the most sense to reduce bandwidth cost most significantly.
The best way to do that, and that is already established, is through standards. The [IEEE] standard got it right that the path the industry is on is going to enable the lowest cost 100 Gig [interface]. Like everything, there is some investment required to get us there. The 25 Gig technology now [used as 4x25 Gig] is becoming mainstream and will soon enable the lowest cost solution. My view is that within 18 months to two years this will be a moot point.
If the technology had been available 18 months sooner, we wouldn’t even be having this discussion. But that is the position that we, as an industry, are in. That creates some tensions, some turmoil, as customers don’t like to pay more than they perceive they have to.
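The lane arithmetic underneath the debate is worth spelling out. Both approaches deliver the same ~103Gbps line rate; the trade-off is fewer-but-faster lanes (4x25 Gig: fewer lasers and detectors to package, but each needing then-immature 25 Gig optics) versus more-but-slower lanes (10x10 Gig: mature 10 Gig optics, but more components per module). A sketch using nominal lane rates (illustrative figures drawn from the 100GbE ecosystem, not Finisar data):

```python
# Hedged sketch: the 4x25 Gig versus 10x10 Gig lane arithmetic.
LANE_RATE_25G = 25.78125  # Gb/s per lane (100GBASE-LR4 style, 64b/66b line rate)
LANE_RATE_10G = 10.3125   # Gb/s per lane (10x10 MSA style)

def aggregate_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Total line rate carried by a parallel-lane optical interface."""
    return lanes * lane_rate_gbps

four_by_25 = aggregate_gbps(4, LANE_RATE_25G)
ten_by_10 = aggregate_gbps(10, LANE_RATE_10G)

print(four_by_25, ten_by_10)  # both 103.125 Gb/s - identical capacity,
                              # so the decision turns on component cost and maturity
```

Since capacity is identical either way, the argument reduces to which lane technology gets cheapest fastest - exactly the cost point at issue.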
There is the CFP form factor that is relatively large. Is the point that if current technology was available 18 months ago, 100Gbps could have come out in a QSFP?
The heart of the debate is cost.
There are other elements that always play into a debate like this. Beyond the cost argument, there is the question of how quickly each of the two optical interfaces - 4x25 Gig versus 10x10 Gig - can enable a smaller form-factor solution.
But I think that is secondary. Had we not had the cost problem we have now between 4x25 Gig and 10x10 Gig, I don’t think we would be talking about it.
So it’s the current cost of the 4x25 Gig that is the issue?
Correct.
In September, the ECOC conference and exhibition was held. What were your impressions and did you detect any interesting changes?
There wasn’t so much an overwhelming theme this year at ECOC. In ECOC 2009, it was the year of coherent detection. This year there wasn’t a theme that resonated strongly throughout.
The mood was relatively upbeat. From our perspective, ECOC seemed a little bit smaller in terms of the size of the floor. But all the key people you would expect to be at the show were there.
Maybe the strongest theme – and I wrote about this in my blog – was colourless, directionless, contentionless (CDC) [ROADMs]. I think what I said is that they should have renamed it not ECOC but the ECDC show.
"A blog ... enables a much more informal mechanism to communicate to a broad audience."
Do you read business books and is there one that is useful for your job?
Probably the book I think about the most in my job is Clayton Christensen's The Innovator’s Dilemma.
He talks about how, when you look at very successful technology companies that have failed, what causes them to fail is often new solutions that come from the very low end of the market.
A lot of companies - he cites examples from the disk drive industry - prided themselves on focussing on the high end of the market but ultimately failed because a surprise upstart came in at the market's low end, in terms of performance, cost and so on, and continued to innovate using its low-end architecture, making it suitable for the core market.
For these large, well-established companies, once they realised they had this competitor, it was too late.
I think about that business book probably more than others. It’s a very interesting take on technology and the threat that can be posed to people in high-tech companies.
Your job sounds intensive and demanding. What do you do outside work to relax?
I’m a big [ice] hockey fan. I’ve been a hockey fan for many years; it’s a pretty intense sport. These days I tend to watch more hockey than I play but I very much enjoy the sport.
The other thing I started up this year that I had never done before – a little side project – was vegetable gardening. Surprisingly, it ended up taking a lot of my attention and I think it was a good distraction for me.
It can be quite remarkable, when you have your own little vegetable garden, how often you go and look at its progress. Often, coming home from work, the first thing I’d want to do was go and see how things were progressing in my vegetable garden.
You are the face of Finisar’s blog. What have you learnt from the experience?
A blog is an interesting tool to get information out to a broad audience. For companies like Finisar, it serves as a very important communication vehicle that didn’t exist previously.
In the old days, if you wanted to get information out to a broad group of customers, you either had to meet and communicate that information face-to-face or via email - very targeted, one-customer-at-a-time communication.
Another way was the press release. A press release was a very easy way to broadcast that information. But the challenge is that not all information that you want to broadcast is suitable for a press release.
The reason why I really like the blog is that it enables a much more informal mechanism to communicate to a broad audience.
Has it helped your job in any tangible way?
We found some interesting customer opportunities. These have come in through the blog when we’ve talked about specific products. That hasn’t happened extremely frequently but we have had a few instances. So it’s probably the most tangible thing: we can point to enhanced business because of it.
But the strength of something like a blog goes much deeper than that, in terms of the communication vehicle it enables.
You have about a year’s experience running a blog. If an optical component company is thinking about starting a blog, what is your advice?
The best advice I can give to anybody looking to do a blog is that it is something you have to commit to up-front.
A blog where you don’t continue to refresh the content regularly becomes a tired blog very quickly. We have made a conscious effort to have updated postings as best we can, on a weekly basis or even more frequently. There are certainly periods where we have gone longer than that but if you look back, in general, we have a wide variety of content that has been refreshed regularly.
I have to give credit to others - guest bloggers - within the organisation that help to maintain the content. This is critical. I would struggle to keep up with the pace if it was just myself every week.
