Heavy Reading’s take on optical module trends
The industry knows what the next-generation 400-gigabit client-side interfaces will look like, but uncertainty remains over which form factors to use. So says Simon Stanley, who has just authored a report entitled From 25/100G to 400/600G: A Competitive Analysis of Optical Modules and Components.
Implementing the desired 400-gigabit module designs is also technically challenging, presenting 200-gigabit modules with a market opportunity should any slip occur at 400 gigabits.
Stanley, analyst-at-large at Heavy Reading and principal consultant at Earlswood Marketing, points to several notable developments over the last year. For 400 gigabits, the first CFP8 modules are now available. There are also numerous suppliers of 100-gigabit QSFP28 modules for the CWDM4 and PSM4 multi-source agreements (MSAs). He also highlights the new 100-gigabit SFP-DD MSA, and how coherent technology for line-side transmission continues to mature.
Routes to 400 gigabit
The first 400-gigabit modules, using the CFP8 form factor, support the 2km-reach 400GBase-FR8 and the 10km-reach 400GBase-LR8, standards defined by the IEEE 802.3bs 400 Gigabit Ethernet Task Force. Both the FR8 and LR8 employ eight 50Gbps wavelengths (in each direction) over single-mode fibre.
There is significant investment going into the QSFP-DD and OSFP modules
But while the CFP8 is the first main form factor to deliver 400-gigabit interfaces, it is not the form factor of choice for the data centre operators. Rather, interest is centred on two emerging modules: the QSFP-DD that supports double the electrical signal lanes and double the signal rates of the QSFP28, and the octal small form factor pluggable (OSFP) MSA.
“There is significant investment going into the QSFP-DD and OSFP modules,” says Stanley. The OSFP is a fresh design, has a larger power envelope - of the order of 15W compared to the 12W of the QSFP-DD - and has a roadmap that supports 800-gigabit data rates. In contrast, the QSFP-DD is backwards compatible with the QSFP and that has significant advantages.
“Developers of semiconductors and modules are hedging their bets which means they have got to develop for the QSFP-DD, so that is where the bulk of the development work is going,” says Stanley. “But you can put the same electronics and optics in an OSFP.”
Given there is no clear winner, both will likely be deployed for a while. “Will QSFP-DD win out in terms of high-volumes?” says Stanley. “Historically, that says that is what is going to happen.”
The technical challenges facing component and module makers are achieving 100 gigabits per wavelength and fitting four such wavelengths into a power- and volume-constrained optical module.
The IEEE 400 Gigabit Ethernet Task Force has also defined the 400GBase-DR4 which has an optical interface comprising four single-mode fibres, each carrying 100 gigabits, with a reach up to 500m.
“The big jump for 100 gigabits was getting 25-gigabit components cost-effectively,” says Stanley. “The big challenge for 400 gigabits is getting 100-gigabit-per-wavelength components cost effectively.” This requires optical components that will work at 50 gigabaud coupled with 4-level pulse-amplitude modulation (PAM-4) that encodes two bits per symbol.
That is what gives 200-gigabit modules an opportunity. Instead of 4x50 gigabaud and PAM-4 for 400 gigabits, a 200-gigabit module can use existing 25-gigabit optics and PAM-4. “You get the benefit of 25-gigabit components and a bit of a cost overhead for PAM-4,” says Stanley. “How big that opportunity is depends on how quickly people execute on 400-gigabit modules.”
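The lane arithmetic behind these module options can be sketched in a few lines. The figures below use the article's nominal rates; the actual IEEE lane rates run slightly higher to carry forward-error-correction and coding overhead.

```python
# Nominal line-rate arithmetic for the module options discussed above.
# Rates are the round numbers used in the article; real IEEE lanes are
# slightly faster to accommodate FEC overhead.

def lane_rate_gbps(baud_g, bits_per_symbol):
    """Raw bit rate of one optical lane, in Gbps."""
    return baud_g * bits_per_symbol

PAM4 = 2  # PAM-4 encodes two bits per symbol
NRZ = 1   # NRZ encodes one bit per symbol

# Next-generation 400G: four wavelengths at 50 gigabaud with PAM-4
rate_400g = 4 * lane_rate_gbps(50, PAM4)

# 200G: reuse 25-gigabit-class optics (25 gigabaud) with PAM-4
rate_200g = 4 * lane_rate_gbps(25, PAM4)

# First-generation CFP8 FR8/LR8: eight 50Gbps wavelengths
rate_fr8 = 8 * lane_rate_gbps(25, PAM4)

print(rate_400g, rate_200g, rate_fr8)
```

The 200-gigabit opportunity drops straight out of this arithmetic: the same PAM-4 electronics paired with today's 25-gigabaud optics halves the rate but avoids the hard 50-gigabaud component problem.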
The first 200-gigabit modules using the QSFP56 form factor are starting to sample now, he says.
100-Gigabit
A key industry challenge at 100 gigabits is meeting demand, and this is likely to tax the module suppliers for the rest of this year and next. Manufacturing volumes are increasing, in part because the optical module leaders are installing more capacity and because many smaller vendors are entering the marketplace.
Traditionally, end users buying a switch populate only some of its ports because of the up-front cost of modules, adding more as traffic grows. Now, internet content providers turn on entire data centres filled with equipment fully populated with modules. “The hyper-scale guys have completely changed the model,” says Stanley.
The 100-gigabit module market has been coming for several years and has finally reached relatively high volumes. Stanley attributes this not just to the volumes needed by the large-scale data centre operators but also the fact that 100-gigabit modules have reached the right price point. Another indicator of the competitive price of 100-gigabit is the speed at which 40-gigabit technology is starting to be phased out.
Developments such as silicon photonics and smart assembly techniques are helping to reduce the cost of 100-gigabit modules, says Stanley, and this will be helped further with the advent of the new SFP-DD MSA.
SFP-DD
The double-density SFP (SFP-DD) MSA was announced in July. It is the next step after the SFP28, similar to the QSFP-DD being an advance on the QSFP28. And just as the 100-gigabit QSFP28 can be used in breakout mode to interface to four 25-gigabit SFP28s, the 400-gigabit QSFP-DD promises to perform a similar breakout role interfacing to SFP-DD modules.
Stanley sees the SFP-DD as a significant development. “Another way to reduce cost, apart from silicon photonics and smart assembly, is to cut down the number of lasers,” he says. The number of lasers used for 100 gigabits can be halved from four to two by using 28-gigabaud signalling and PAM-4. Existing examples of two-wavelength PAM-4 100-gigabit designs are Inphi’s ColorZ module and Luxtera’s CWDM2.
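The laser-count saving Stanley describes is simple arithmetic, sketched below with nominal figures (real 100 Gigabit Ethernet lanes run slightly faster than these round numbers to carry coding and FEC overhead):

```python
# How many lasers (wavelengths) are needed to carry a target payload?
# Nominal figures only; actual lane rates include FEC/coding overhead.

def wavelengths_needed(target_gbps, baud_g, bits_per_symbol):
    """Ceiling of target rate divided by per-wavelength rate."""
    per_wavelength = baud_g * bits_per_symbol
    return -(-target_gbps // per_wavelength)  # ceiling division

# Conventional 100G: four lanes of 25-gigabaud NRZ -> four lasers
four_laser = wavelengths_needed(100, 25, 1)

# Two-wavelength designs: 28 gigabaud with PAM-4 (2 bits/symbol)
# gives 56Gbps per wavelength -> two lasers
two_laser = wavelengths_needed(100, 28, 2)
```

Halving the laser count directly attacks one of the largest cost items in a 100-gigabit module's bill of materials, which is why designs such as ColorZ and CWDM2 take this route.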
The industry’s embrace of PAM-4 is another notable development of the last year. The debate about the merits of using 56-gigabit symbol rate and non-return-to-zero signalling versus PAM-4 with its need for forward-error correction and extra latency has largely disappeared, he says.
The first 400-gigabit QSFP-DD and OSFP client-side modules are expected in a year’s time with volumes starting at the end of 2018 and into 2019
Coming of age
Stanley describes the coherent technology used for line-side transmissions as coming of age. Systems vendors have put much store in owning the technology to enable differentiation but that is now changing. To the well-known merchant coherent digital signal processing (DSP) players, NTT Electronics (NEL) and Inphi, can now be added Ciena which has made its WaveLogic Ai coherent DSP available to three optical module partners, Lumentum, NeoPhotonics and Oclaro.
CFP2-DCO module designs, where the DSP is integrated within the CFP2 module, are starting to appear. These support 100-gigabit and 200-gigabit line rates for metro and data centre interconnect applications. Meanwhile, the DSP suppliers are working on coherent chips supporting 400 gigabits.
Stanley says the CFP8 and OSFP modules are the candidates for future pluggable coherent module designs.
Meanwhile, the first 400-gigabit QSFP-DD and OSFP client-side modules are expected in a year’s time with volumes starting at the end of 2018 and into 2019.
As for 800-gigabit modules, that is unlikely before 2022.
“At OFC in March, a big data centre player said it wanted 800 Gigabit Ethernet modules by 2020, but it is always a question of when you want it and when you are going to get it,” says Stanley.
Giving telecom networks a computing edge
But a subtler approach is taking hold as networks evolve whereby what a user does will change depending on their location. And what will enable this is edge computing.
Source: Senza Fili Consulting
Edge computing
“This is an entirely new concept,” says Monica Paolini, president and founder at Senza Fili Consulting. “It is a way to think about service which is going to have a profound impact.”
Edge computing has emerged as a consequence of operators virtualising their networks. Virtualisation of network functions hosted in the cloud has promoted a trend to move telecom functionality to the network core. Functionality does not need to be centralised but initially, that has been the trend, says Paolini, especially given how virtualisation promotes the idea that network location no longer matters.
“That is a good story, it delivers a lot of cost savings,” says Paolini, who recently published a report on edge computing. *
But a realisation has emerged across the industry that location does matter; centralisation may save the operator some costs but it can impact performance. Depending on the application, it makes sense to move servers and storage closer to the network edge.
The result has been several industry initiatives. One is Mobile Edge Computing (MEC) being developed by the European Telecommunications Standards Institute (ETSI). In March, ETSI renamed the Industry Specification Group undertaking the work to Multi-access Edge Computing to reflect the operators' requirements beyond just cellular.
“What Multi-access Edge Computing does is move some of the core functionality from a central location to the edge,” says Paolini.
Another initiative is M-CORD, the mobile component of the Central Office Re-architected as a Datacenter (CORD) initiative, overseen by the Open Networking Lab non-profit organisation. Other initiatives Paolini highlights include the Open Compute Project, Open Edge Computing and the Telecom Infra Project.
This is an entirely new concept. It is a way to think about service which is going to have a profound impact.
Location
The exact location of the ‘edge’ where the servers and storage reside is not straightforward.
In general, edge computing is located somewhere between the radio access network (RAN) and the network core. Putting everything at the RAN is one extreme but that would lead to huge duplication of hardware and exceed what RAN locations can support. Equally, edge computing has arisen in response to the limitations of putting too much functionality in the core.
The matter of location is blurred further when one considers that the RAN itself is movable to the core using the Cloud RAN architecture.
Paolini cites another reason why the location of edge computing is not well defined: the industry does not yet know. And it will only be in the next year or two when operators start trialling the technology. “There is going to be some trial and error by the operators,” she says.
Use cases
An enterprise campus is one example use of edge computing, given how much of the content generated stays on-campus. If the bulk of voice calls and data stays local, sending traffic to the core and back makes little sense. There are also security benefits in keeping data local. An enterprise may also use edge computing to run services locally and share them across networks, for example using cellular or Wi-Fi for calls.
Another example is to install edge computing at a sports stadium, not only to store video of the game’s play locally - again avoiding going to the core and back with content - but also to cache video from games taking place elsewhere for viewing by attending fans.
Virtual reality and augmented reality are other applications that require low latency, another performance benefit of local computation.
Paolini expects the uptake of edge computing to be gradual. She also points to its challenging business case, or at least how operators typically assess a business case may not tell the full story.
Operators view investing in edge computing as an extra cost but Paolini argues that operators need to look carefully at the financial benefits. Edge computing delivers better utilisation of the network and lower latency. “The initial cost for multi-access edge computing is compensated for by the improved utilisation of the existing network,” she says.
When Paolini started the report, her aim was to research low latency and the issues of distributed network design, reliability and redundancy. But she soon realised that multi-access edge computing was something broader, and that edge computing extends beyond what ETSI is doing.
This is not like an operator rolling out LTE and reporting to shareholders how much of the population now has coverage. “It is a very different business to learn how to use networks better,” says Paolini.
* Click here to access the report, Power at the edge. MEC, edge computing, and the prominence of location
Daryl Inniss reflects on a career in market research
Daryl Inniss
Rocky beginnings
I jumped ship in 2001 joining RHK, a market research firm, knowing nothing about the craft. I had been a technical manager and loved research and development, but work was 500 miles from my family and the weekly commute was gruelling.
Back then, the telecom market was crashing and I believed my job was at risk. Moving to a small market research firm could hardly be described as good planning, but it turned out to be a godsend.
I had no idea what I was getting into and my first months did not help. My mother passed away within a month of joining and I was absent for half of my first 40 days. But my boss was very supportive. Meanwhile, work consisted of unintelligible, endless conference calls. And while in this daze, September 11th occurred.
The first report - getting the job done
Completing my first market research report helped ground me in the art. I wrote about optical dispersion compensators. After interviewing many companies, I wrote a long and complicated piece, an exercise that I found difficult. I also struggled with who would read the report and what would be done with the data.
The report aimed to explain technical issues simply and included a market forecast. Completing it proved hard because there was always more information to include, a better explanation, and better forecast data to be gathered.
I felt unsatisfied but the report received kudos. Internally, I was told that I was the second or third analyst to tackle the topic and the first to complete the work, and an optical company complimented me on the report. Still, I wished I had done better: I wanted to understand the subject more deeply, provide clearer, simpler explanations and deliver a better forecast.
Nonetheless, I learned the importance of completing assignments as they can go on forever.
A market researcher's role
An analyst tries to identify market opportunities and winning strategies. Looking at new products, for example, the goal is to explain what they are, why they are being introduced, who will use them, their value, and the competitive landscape. The issues must be explained to novices and experts alike. The technical novice may get a glimpse of what the technology means and how it works, while a technical expert may understand the ecosystem more deeply.
An analyst must strive to prepare simple messages that are steeped in facts. You need to have a story—say why something is happening and explain it in the context of the bigger picture.
Forecasts, market share, rankings, prices and volumes are all important. Everyone loves numbers. But the story underpinning the numbers is far more important and most people do not take the time to determine the causes behind the numbers.
Where is the industry going?
I have spent the last 15 years analysing the optical components market. Sustainable profitability is the biggest topic, and consolidation is viewed as providing the best approach. Notwithstanding the mergers and acquisitions, the market is fragmented, margins remain low, and there is still no evidence of true consolidation.
Independent of all the change, optical component suppliers post gross margins below 40 percent and most are below 30 percent while semiconductor companies are routinely above 50 percent. There is a force keeping the industry stuck at this level, in part because there is little product differentiation.
Forecasts, market share, rankings, prices and volumes are all important. Everyone loves numbers. But the story underpinning the numbers is far more important
Avago Technologies’ divestiture of its optical module business to Foxconn Interconnect Technology Group points to one high-margin path. Discrete components - particularly lasers and modulators, and to a lesser extent photodiodes and receivers - command higher margins. Vendors can offer differentiated products at this level. Total revenues are lower, so the challenge is to win enough business to fill the factory because these are fixed-cost-intensive businesses.
Subsystems offer another high-margin path, particularly for vertically-integrated companies. Here vendors are challenged with a long time-to-market, requiring a strong design team to support customer requests. Also business can be lumpy because solutions are customer-specific.
Acacia Communications' coherent 100-gigabit transponders are an example of a solution with the basis to win broad-based business and high margins. The products offer a one-stop-shop solution including optics, electronics and software. Acacia is developing silicon photonics so it controls most of the bill of materials, keeping down product cost. And its solution is differentiated in that it helps customers get their products to market while achieving a high level of performance.
Market research: even more important now
The communications industry is going through extensive change making market research more important than ever. The Web 2.0 companies are the new optical communication mindshare leaders, driving technology and business practices.
Simultaneously China is the biggest consumer of optical gear, both for long-haul and access networks. Optical component suppliers need to understand how to compete in this new environment. What are the new rules? How are they evolving? How can companies best position themselves to win more business?
Just like when I started, I ask how a market researcher can help component companies navigate this new world. No doubt, this is a challenge, but market researchers provide the collective market voice. They are the market mirror that shows the beauty spots and the warts. They are given license to say what everyone is thinking. They can raise market consciousness so participants may act fearlessly.
But market researchers need to understand the story from top to bottom—end customer to suppliers. They must communicate well which includes not only delivering the story but also being humble, admitting mistakes, keeping sources and information confidential, and taking corrective actions.
This is indeed a challenge and I feel honoured to have had the opportunity to participate. I could not have done the job without the help of wonderful people from all over the world. Their generosity, warmth, and kindness made all the difference. At bottom, it is these relationships that mattered as we tried to help each other navigate.
Biography
Dr. Daryl Inniss is Director, New Business Development at OFS Fitel, the designer, manufacturer and provider of optical fibre, fibre optic cable, connectivity, fibre-to-the-subscriber and specialty photonics products.
He was formerly Components Practice Leader at market research firm Ovum and RHK. Daryl was Technical Manager at JDSU and Lucent Technologies, Bell Laboratories, and started his career as a Member of the Technical Staff, AT&T Bell Labs.
Global optical networking market set for solid growth
Source: Ovum
The global optical networking market will grow at a year-over-year rate of 5 percent through 2019. So claims market research firm, Ovum, in its optical networking forecast for 2013 to 2019. North America will lead the market growth, with data centre deployments and demand for 100 Gigabit being the main drivers.
The building of data centres drives demand for optical interconnect. "It [data centre operators] is almost a new category of buyer," says Ian Redpath, principal analyst, network infrastructure at Ovum. The segment is growing faster than telco spending on fixed and mobile networks.
"This whole phenomenon of the large data centre operators is more pronounced in North America, and we think that will continue throughout the forecast period," says Redpath.
Demand for 100 Gigabit is coming from several segments: large incumbent operators, cable operators and internet content service providers. "All these entities are buying a technology [100 Gig] that is prime time," says Redpath.
Asia Pacific will be the region with the second largest growth for the forecast period, at 4.4 percent compound annual growth rate (CAGR).
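CAGR figures such as the 4.4 percent quoted here compound annually rather than adding up. A minimal sketch of the arithmetic follows; the start value is a made-up placeholder, not Ovum data.

```python
# Compound annual growth rate (CAGR) arithmetic.
# The 100.0 start value below is illustrative, not market data.

def cagr(start_value, end_value, years):
    """Compound annual growth rate, as a fraction."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def grow(start_value, rate, years):
    """Value after compounding at `rate` for `years` years."""
    return start_value * (1.0 + rate) ** years

# A market compounding at 4.4% over the six-year 2013-2019 window:
end_value = grow(100.0, 0.044, 6)

# Recovering the rate from the endpoints gives back 4.4%
recovered = cagr(100.0, end_value, 6)
```

Note that six years at 4.4 percent grows a market by roughly 29 percent in total, noticeably more than the 26.4 percent a simple-interest reading of the same figure would suggest.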
The deployment of optical equipment in China and Japan was down in 2013: China dipped 6 percent while Japan was down a huge 23 percent compared to 2012 market demand.
The underlying trend in China is one of growth, with the optical market valued at US $3 billion. "They just had to have a pause," says Redpath, who points out that the Chinese market has tripled in a relatively short period. "They are now retooling for the next big thing: LTE; it is just a matter of time," he says. The deployment of 100 Gig by the three large domestic operators may start by the year end or spill into 2015.
Optics is the foundation of an industry that is growing
Japan's sharp decline in 2013 follows massive growth in 2012, the result of replacing networks lost following the 2011 earthquake and tsunami. "That was a one-time bump followed by a one-time reset, with the market now back to normal," says Redpath.
Meanwhile, the EMEA optical networking market will grow at 4.1 percent. "This is a pretty modest growth rate, with more upside coming in the latter period," says Redpath. "The operators have been neglecting their core for so long, they are going to have to come back and reinvest."
Ovum says the weakness of the European market will run its course during the forecast period and expects Europe's northern countries - the UK and Germany - to lead the recovery, followed by the likes of Spain, Italy and Greece.
The market research firm singles out the UK market as being particularly dynamic, and an economy that will lead Europe out of recession. "It is probably closer to the North America market than any other country in terms of competitors and non-carrier spending," says Redpath. "The UK is also one of the leading data centre markets in the world."
Ovum remains upbeat about the long-term prospects of the global optical networking market. "Optics is the foundation of an industry that is growing," says Redpath.
He also points to recent developments in the net neutrality debate, and cites how over-the-top TV and film player, Netflix, has signed agreements with telecom and cable operators. "If over-the-top players realise that they can't keep free-riding on these networks, and to get performance they give a little money to the telcos, then that is a good thing for the ultimate food chain," says Redpath.
Further reading:
Global market soft in 1Q14; North America bucks trend, click here
WDM and 100G: A Q&A with Infonetics' Andrew Schmitt
The WDM optical networking market grew 8 percent year-on-year, with spending on 100 Gigabit now accounting for a fifth of the WDM market. So claims the first quarter 2014 optical networking report from market research firm, Infonetics Research. Overall, the optical networking market was down 2 percent, due to the continuing decline of legacy SONET/SDH.
In a Q&A with Gazettabyte, Andrew Schmitt, principal analyst for optical at Infonetics Research, talks about the report's findings.
Q: Overall WDM optical spending was up 8% year-on-year: Is that in line with expectations?
Andrew Schmitt: It is roughly in line with the figures I use for trend growth but what is surprising is how there is no longer a fourth quarter capital expenditure flush in North America followed by a down year in the first quarter. This still happens in EMEA but spending in North America, particularly by the Tier-1 operators, is now less tied to calendar spending and more towards specific project timelines.
This has always been the case at the more competitive carriers. A good example of this was the big order Infinera got in Q1, 2014.
You refer to the growth in 100G in 2013 as breathtaking. Is this growth not to be expected as a new market hits its stride? Or does the growth signify something else?
I got a lot of pushback for aggressive 100G forecasts in 2010 and 2011 when everyone was talking about, and investing in, 40G. You can read a White Paper I wrote in early 2011 which turned out to be pretty accurate.
My call was based on the fact that, fundamentally, coherent 100G shouldn’t cost more than 40G, and that service providers would move rapidly to 100G. This is exactly what has happened, outside AT&T, NTT and China which did go big with 40G. But even my aggressive 100G forecasts in 2012 and 2013 were too conservative.
I have just raised my 2014 100G forecast after meeting with Chinese carriers and understanding their plans. 100G will essentially take over almost all of the new installations in the core by 2016, worldwide, and that is when metro 100G will start. But there is too much hype on metro 100G right now given the cost, but within two years the price will be right for volume deployment by service providers.
There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not
You say the lion's share of 100G revenue is going to five companies: Alcatel-Lucent, Ciena, Cisco, Huawei, and Infinera. Most of the companies are North American. Is the growth mainly due to the US market (besides Huawei, of course)? And if so, is it due to Verizon, AT&T and Sprint preparing for growing LTE traffic? Or is the picture more complex, with cable operators, internet exchanges and large data centre players also a significant part of the 100G story, as Infinera claims?
It’s a lot more complex than the typical smartphone plus video-bandwidth-tsunami narrative. Many people like to attach the wireless metaphor to any possible trend because it is the only area perceived as having revenue and profitability growth, and it has a really high growth rate. But something big growing at 35 percent adds more in a year than something small growing at 70 percent.
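Schmitt's point about absolute versus percentage growth is worth making concrete. The sizes below are illustrative ratios only, not traffic measurements:

```python
# A large base growing slowly can add more absolute volume per year
# than a small base growing quickly. Figures are illustrative only.

big_market, small_market = 100.0, 10.0  # relative sizes (10:1 ratio)

big_added = big_market * 0.35     # 35% growth on the large base
small_added = small_market * 0.70  # 70% growth on the small base

print(big_added, small_added)  # the large base adds five times more
```

With a ten-to-one size ratio, the small base would need 350 percent annual growth just to match the large base's absolute gain, which is why headline wireless growth rates say little about where optical capacity is actually being consumed.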
The reality is that wireless bandwidth, as a percentage of all traffic, is still small. 100G is being used for the long lines of the network today as a more efficient replacement for 10G and, while good quantitative measures don’t exist, my gut tells me it is inter-data-centre traffic and consumer/business-to-data-centre traffic driving most of the network growth today.
I use cloud storage for my files. I’m a die-hard Quicken user with 15 years of data in my file. Every time I save that file, it is uploaded to the cloud – 100MB each time. The cloud provider probably shifts that around afterwards too. Apply this to a single enterprise user - think about how much data that is for just one person. There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not.
Each morning a hardware specialist must wake up and prove to the world that they still need to exist
Cisco is in this list yet does not seek much media attention about its 100G. Why is it doing well in the growing 100G market?
Cisco has a slice of customers that are fibre-poor who are always seeking more spectral efficiency. I also believe Cisco won a contract with Amazon in Q4, 2013, but hey, it’s not Google or Facebook so it doesn’t get the big press. But no one will dispute Amazon is the real king of public cloud computing right now.
You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider
In the data centre world, there is a sense that the value of specialist hardware is diminishing as commodity platforms - servers and switches - take hold. The same trend is starting in telecoms with the advent of Network Functions Virtualisation (NFV) and software-defined networking (SDN). WDM is specialist hardware and will remain so. Can WDM vendors therefore expect healthy annual growth rates to continue for the rest of the decade?
I am not sure I agree.
There is no reason transport systems couldn’t be white-boxed just like other parts of the network. There is an over-reaction to the impact SDN will have on hardware but there have always been constant threats to the specialist.
Each morning a hardware specialist must wake up and prove to the world that they still need to exist. This is why you see continued hardware vertical integration by some optical companies; good examples are what Ciena has done with partners on intelligent Raman amplification or what Infinera has done building a tightly integrated offering around photonic-integrated circuits for cheap regeneration. Or Transmode which takes a hacker’s approach to optics to offer customers better solutions for specific category-killer applications like mobile backhaul. Or you swing to the other side of the barbell, and focus on software, which appears to be Cyan’s strategy.
You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider. This is why Cisco and Intel are investing in silicon photonics – they can use this as an edge against commodity white-box assemblers and bare-metal suppliers.
Optical networking spending up in all regions except Europe
Source data: Ovum
Ovum forecasts that the global optical networking market will grow to US $17.5 billion by 2018, a compound annual growth rate of 3.1 percent.
Optical networking spending in North America will be up 9.1 percent in 2013 after two flat years. North American tier-1 service providers and cable operators are investing in the core network to support all traffic types, and 100 Gigabit is being deployed in volume.
In contrast, optical networking sales in EMEA will contract by nearly 10 percent in 2013. “Non-spending in Europe is the major factor in the overall EMEA decline,” says Ian Redpath, principal analyst, network infrastructure at Ovum.
The major technology trend for this forecast is the ascendancy of 100 Gig, whose sales exceeded 40 Gig revenues in 2Q13
EMEA optical networking spending has been down in four out of the past five years, and the lack of investment is becoming acute, says Ovum. Given that service providers are stretching their existing networks, spending will have to take place eventually to make up for the prolonged period of inactivity.
This year has seen 100 Gigabit become the wavelength of choice for large WDM systems, with sales surging. Spending on 100 Gigabit has now overtaken spending on 40 Gigabit which declined in the first half of the year.
"The major technology trend for this forecast is the ascendancy of 100 Gig, whose sales exceeded 40 Gig revenues in 2Q13," says Redpath.
Further reading:
Ovum: Optical networks forecast: top line steady, 100G surging, click here
Avago to acquire CyOptics
- Avago to become the second largest optical component player
- Company gains laser and photonic integration technologies
- The goal is to grow data centre and enterprise market share
- CyOptics achieved revenues of $210M in 2012
How the acquisition of CyOptics will expand Avago's market opportunities. SAM is the serviceable addressable market and TAM is the total addressable market. Source: Avago
Avago Technologies has announced its plan to acquire optical component player, CyOptics. The value of the acquisition, at US $400M, is double CyOptics' revenues in 2012.
CyOptics' sales were $210M last year, up 21 percent from the previous year. Avago's acquisition will make it the optical component industry's second largest company, behind Finisar, according to market research firm, Ovum. The deal is expected to be completed in the third quarter of the year.
The deal will add indium phosphide and planar lightwave circuit (PLC) technologies to Avago's vertical-cavity surface-emitting laser (VCSEL) and optical transceiver products. In particular, Avago will gain edge laser technology and photonic integration expertise. It will also inherit an advanced automated manufacturing site as well as entry into new markets such as passive optical networking (PON).
Avago stresses its interest in acquiring CyOptics is to bolster its data centre offerings - in particular 40 and 100 Gigabit data centre and enterprise applications - as well as benefit from the growing PON market.
The company has no plans to enter the longer distance optical transmission market beyond supplying optical components.

Significance
Ovum views the acquisition as a shift in strategy. Avago is known as a short distance interconnect supplier based on its VCSEL technology.
"Avago has seen that there are challenges being solely a short-distance supplier, and there are opportunities expanding its portfolio and strategy," says Daryl Inniss, Ovum's vice president and practice leader components.
Such opportunities include larger data centres now being built and their greater use of single-mode fibre that is becoming an attractive alternative to multi-mode as data rates and reach requirements increase.
"Avago's revenues can be lumpy partly because they have a few really large customers," says Inniss.
Another factor motivating the acquisition is that short-distance interconnect is being challenged by silicon photonics. "In the long run silicon photonics is going to win," he says.
What Avago will gain, says Inniss, is one of the best laser suppliers around. And its acquisition will impact adversely other optical module players. "CyOptics is a supplier to several transceiver vendors," says Inniss. "The outlook, two or three years' hence, is decreased business as a merchant supplier."
Inniss points out that CyOptics will represent the second laser manufacturer acquisition this year, following NeoPhotonics's acquisition of Lapis Semiconductor which has 40 Gigabit-per-second (Gbps) electro-absorption modulator lasers (EMLs).
These acquisitions will remove two merchant EML suppliers, given that CyOptics is a strong 10Gbps EML player, and lasers are a key technological asset.
See also:
For a 2011 interview with CyOptics' CEO, click here
P-OTS 2.0: 60s interview with Heavy Reading's Sterling Perrin

Q: Heavy Reading claims the metro packet optical transport system (P-OTS) market is entering a new phase. What are the characteristics of P-OTS 2.0 and what first-generation platform shortcomings does it address?
A: I would say four things characterise P-OTS 2.0 and separate 2.0 from the 1.0 implementations:
- The focus of packet-optical shifts from time-division multiplexing (TDM) functions to packet functions.
- Pure-packet implementations of P-OTS begin to ramp and, ultimately, dominate.
- Switched OTN (Optical Transport Network) enters the metro, removing the need for SONET/SDH fabrics in new elements.
- 100 Gigabit takes hold in the metro.
The last two points are new functions while the first two address shortcomings of the previous generation. P-OTS 1.0 suffered because its packet side was seen as sub-par relative to Ethernet "pure plays" and also because packet technology in general still had to mature and develop - such as standardising MPLS-TP (Multiprotocol Label Switching - Transport Profile).
Your survey's key findings: What struck Heavy Reading as noteworthy?
The biggest technology surprise was the tremendous interest in adding IP/MPLS functions to transport. There was a lot of debate about this 10 years ago. Then the industry settled on a de facto standard that transport includes layers 0-2 but no higher. Now, it appears that the transport definition must broaden to include up to layer 3.
A second key finding is how quickly SONET/SDH has gone out of favour. Going forward, it is all about packet innovation. We saw this shift in equipment revenues in 2012 as SONET/SDH spend globally dropped more than 20 percent. That is not a one-time hit - it's the new trend for SONET/SDH.
Heavy Reading argues that transport has broadened in terms of the networking embraced - from layers 0 (WDM) and 1 (SONET/SDH and OTN) to now include IP/MPLS. Is the industry converging on one approach for multi-layer transport optimisation? For example, IP over dense WDM? Or OTN, Carrier Ethernet 2.0 and MPLS-TP? Or something else?
We did not uncover a single winning architecture and it's most likely that operators will do different things. Some operators will like OTN and put it everywhere. Others will have nothing to do with OTN. Some will integrate optics on routers to save transponder capital expenditure, but others will keep hardware separate but tightly link IP and optical layers via the control plane. I think it will be very mixed.
You talk about a spike in 100 Gigabit metro starting in 2014. What is the cause? And is it all coherent or is a healthy share going to 100 Gigabit direct detection?
Interest in 100 Gigabit in the metro exceeds interest in OTN in the metro - which is different from the core, where those two technologies are more tightly linked.
Cloud and data centre interconnect are the biggest drivers for interest in metro 100 Gig but there are other uses as well. We did not ask about coherent versus direct in this survey, but based on general industry discussions, I'd say the momentum is clearly around coherent at this stage - even in the metro. It does not seem that direct detect 100 Gig has a strong enough cost proposition to justify a world with two very different flavours of 100 Gig.
What surprised you from the survey's findings?
It was really the interest level in IP functionality on transport systems that was the most surprising finding.
It opens up the packet-optical transport market to new players that are strongest on IP and also poses a threat to suppliers that were good at lower layers but have no IP expertise - they'll have to do something about that.
Heavy Reading surveyed 114 operators globally. All those surveyed were operators; no system vendors were included. The regional split was North America - 22 percent, Europe - 33 percent, Asia Pacific - 25 percent, and the rest of the world - Latin America mainly - 20 percent.
Optical transport to grow at a 10% CAGR through 2017
- Global optical transport market to reach US $13bn in 2017
- 100 Gigabit to grow at a 75% CAGR
"I won't be surprised if it [100 Gig] grows even faster"
Jimmy Yu, Dell'Oro Group
The Dell'Oro Group forecasts that the global optical transport market will grow to US $13 billion in 2017, equating to a 10-percent compound annual growth rate (CAGR).
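The arithmetic behind a compound-annual-growth-rate forecast is easy to check. A minimal sketch (illustrative only, assuming 2012 as the base year of the five-year forecast):

```python
# Back out the implied starting market size from an end value and a CAGR.
# A market reaching US $13bn in 2017 at a 10% CAGR implies a 2012 base of
# roughly US $8bn (an assumption for illustration, not a Dell'Oro figure).
def implied_base(final_value: float, cagr: float, years: int) -> float:
    """Starting value V0 such that V0 * (1 + cagr)**years == final_value."""
    return final_value / (1 + cagr) ** years

base_2012 = implied_base(13.0, 0.10, 5)  # in US $bn
print(round(base_2012, 2))
```

The same formula run in reverse, `base * (1 + cagr) ** years`, reproduces the $13bn end point.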
In 2012 SONET/SDH sales declined by over 20 percent, greater than Dell'Oro expected, while wavelength-division multiplexing (WDM) equipment sales held their own.
Regions
Dell'Oro expects optical transport growth across all the main regions, with no one region dominating. The market research company does foresee greater growth in Europe given the prolonged underspend of recent years.
European operators are planning broadband access investment such as fibre-to-the-cabinet/ VDSL vectoring as well as fibre-to-the-home. "That will drive demand for backhaul bandwidth and that is where WDM fits in well," says Jimmy Yu, vice president, microwave transmission, mobile backhaul and optical transport at Dell'Oro.
Technologies
Forty and 100 Gigabit optical transport will be the main WDM growth areas through 2017. Yu expects 40 Gigabit demand to grow over the forecast period even if the growth rate will taper off due to demand for 100 Gigabit.
The 100 Gigabit market continues to exceed Dell'Oro's forecasted growth. The market research company predicts 100-Gbps wavelength shipments to grow at a 75 percent CAGR over the next five years, accounting for 60 percent of the WDM capacity shipments by 2017. "I won't be surprised if it [100 Gig] grows even faster," says Yu.
"A lot of people wonder why have 40 Gig when there is 100 Gig? But that granularity does help service providers; having 40 Gig and 100 Gig rather than going straight from 10 Gig to 100 Gig," says Yu. The 100 Gig sales span metro and long-haul networks with the latter generating greater revenue due to the systems being pricier. "Forty Gigabit sales were predominantly long haul originally but we are seeing a good chunk of growth in metro as well," says Yu.
The current forecast does not include 400Gbps optical transport sales though Yu does expect sales to start in 2016.
Dell'Oro is seeing sales of 100 Gigabit direct detection but says it will remain a niche market. "We are talking tens of [shipped] units a quarter," says Yu.
There are applications where customers will need links of 80km or several hundred kilometres and will want the lowest cost solution, says Yu: "There is a market for direct detection; it will not be a significant driver for 100 Gig but it will be there."
China and the global PON market
China has become the world's biggest market for passive optical network (PON) technology even though deployments there have barely begun. That is because China, with approximately a quarter of a billion households, dwarfs all other markets. Yet according to market research firm Ovum, only 7% of Chinese homes were connected by year-end 2011.

"In 2012, BOSAs [board-based PON optical sub-assemblies] will represent the majority versus optical transceivers for PON ONTs and ONUs"
Julie Kunstler, Ovum
Until recently Japan and South Korea were the dominant markets. And while PON deployments continue in these two markets, the rate of deployments has slowed as these optical access markets mature.
According to Ovum, slightly more than 4 million PON optical line terminal (OLT) ports, located in the central office, were shipped in Asia Pacific in 2011, of which China accounted for the majority. Worldwide OLT shipments for the same period totaled close to 4.5 million. The relatively low ratio of optical network terminals (ONTs) - the end terminals at the home or building - to OLT ports deployed in China highlights that the significant growth in PON end terminals there is still to come.
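The headroom implied by those OLT shipments can be sketched with rough arithmetic. The split ratio below is an assumption for illustration (1:32 is a common GPON/EPON configuration; 1:64 is also deployed), not an Ovum figure:

```python
# Each PON OLT port feeds a passive splitter shared by many ONTs, so a low
# ONT-to-OLT ratio signals that most end-terminal shipments lie ahead.
OLT_PORTS = 4_000_000   # approximate Asia Pacific OLT port shipments, 2011
SPLIT_RATIO = 32        # assumed splitter configuration (1:32)

potential_onts = OLT_PORTS * SPLIT_RATIO
print(potential_onts)   # homes those ports could eventually serve
```

At a 1:32 split, 4 million OLT ports could ultimately serve on the order of 128 million homes, far more ONTs than have shipped so far.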
The strength of the Chinese market has helped local system vendors Huawei, ZTE and Fiberhome become leading global PON players, accounting for over 85% of the OLTs sold globally in 2011, says Julie Kunstler, principal analyst, optical components at Ovum. Moreover, around 60% of fibre-to-the-x deployments in Europe, Middle East and Africa were supplied by the Chinese vendors. The strongest non-Chinese vendor is Alcatel-Lucent.
Ovum says that the State Grid Corporation of China, the largest electric utility company in China, has begun to deploy EPON for its smart grid trial deployments. PON is preferred to wireless technology because of its perceived ability to secure the data. This raises the prospect of two separate PON lines going to each home. But it remains to be seen, says Kunstler, whether this happens or whether the telcos and utilities share the access network.
"After China the next region that will have meaningful numbers is Eastern Europe, followed by South and Central America and we have already seen it in places like Russia,” says Kunstler. Indeed FTTx deployments in Eastern Europe already exceed those in Western Europe.
EPON and GPON
In China both Ethernet PON (EPON) and Gigabit PON (GPON) are being deployed. Ovum estimates that in 2011, 65% of PON equipment shipments in China were EPON while GPON represented the remaining 35%.
China Telecom was the first of the large operators in China to deploy PON, and began with EPON. Ovum is now seeing deployments of GPON and, in the third quarter of 2012, GPON OLT deployments overtook EPON.
China Mobile, not a landline operator, started deployments later and chose GPON. But these GPON deployments are on top of EPON, says Kunstler: "EPON is still heavily deployed by China Telecom, while China Mobile is doing GPON but it is a much smaller player." Moreover, Chinese PON vendors are also supplying OLTs that support both EPON and GPON, allowing local decisions to be made as to which PON technology is used.
One trend that is impacting the traditional PON optical transceiver market is the growing use of board-based PON optical sub-assemblies (BOSAs). Such PON optics dispense with the traditional optical module form factor in the interest of trimming costs.
“A number of the larger, established ODMs [original design manufacturers] have begun to ship BOSA-based PON CPEs,” says Kunstler. “In 2012, BOSAs will represent the majority versus optical transceivers for PON ONTs/ONUs.”
10 Gigabit PON
Ovum says that there have been very few deployments of next-generation 10G EPON and XG-PON, the 10 Gigabit version of GPON.
"There have been small amounts of 10G [EPON] in China," says Kunstler. "We are talking hundreds or thousands, not the tens of thousands [of units]."
One reason for this is the relatively high cost of 10 Gigabit PON, which is still in its infancy. Another is the growing shift in China from fibre-to-the-building to fibre-to-the-home (FTTH) deployments. 10 Gigabit PON makes more sense in multi-dwelling units, where the incoming signal is split between apartments. Moving to 10G EPON boosts the incoming bandwidth by 10x while XG-PON would increase the bandwidth by 4x. "The need for 10 Gig for multi-dwelling units is not as strong as originally thought," says Kunstler.
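The 10x and 4x figures follow from the nominal downstream line rates: 10G EPON versus 1 Gbps EPON, and XG-PON's 10 Gbps versus GPON's 2.5 Gbps. A rough per-apartment sketch (the split ratio is an assumption for illustration):

```python
# Per-apartment downstream bandwidth in a multi-dwelling unit where one PON
# feed is shared. Line rates are nominal downstream figures; the 1:32 split
# is an assumed configuration, not from Ovum's data.
SPLIT = 32  # apartments sharing one PON feed (assumed)

rates_gbps = {
    "EPON":     1.0,   # ~1 Gbps downstream
    "10G EPON": 10.0,  # 10x the EPON feed
    "GPON":     2.5,   # 2.5 Gbps downstream
    "XG-PON":   10.0,  # 4x the GPON feed
}

for tech, rate in rates_gbps.items():
    print(f"{tech}: {rate / SPLIT * 1000:.0f} Mbps per apartment")
```

The upgrade multiplier is independent of the split ratio, which is why the comparison is usually quoted simply as 10x for EPON and 4x for GPON.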
It is a chicken-and-egg issue with 10G PON, says Kunstler. The price of 10G optics would go down if there was more demand, and if there was more demand, the optical vendors would work on bringing down cost. "10G GPON will happen but will take longer," says Kunstler, with volumes starting to ramp from 2014.
However, Ovum thinks that a stronger market application for 10G PON will be supporting wireless backhaul. The market research company is seeing early deployments of PON for wireless backhaul, especially for small cell sites such as picocells. Small cells are typically deployed in urban areas, which is where FTTx is deployed. It is too early to forecast the size of this application, but PON will join the list of communications technologies supporting wireless backhaul.
Challenges
Despite the huge expected growth in deployments, driven by China, challenges remain for PON optical transceiver and chip vendors.
The margins on optics and PON silicon continue to be squeezed. ODMs using BOSAs are putting pricing pressure on PON transceiver costs, while the vertical integration strategy of system vendors such as Huawei, which also develops some of its own components, squeezes out various independent players. Huawei has its own silicon arm called HiSilicon, and its activities in PON have impacted the chip opportunity of the PON merchant suppliers.
"Depending upon who the customer is, depending upon the pricing, depending on the features and the functions, Huawei will make the decision whether they are using HiSilicon or whether they are using merchant silicon from an independent vendor, for example," says Kunstler.
There has been consolidation in the PON chip space as well as several new entrants. For example, Broadcom acquired Teknovus and BroadLight, while Atheros acquired Opulan and was itself then acquired by Qualcomm. Marvell acquired a very small start-up and is now competing with Atheros and Broadcom. Most recently, Realtek is rumoured to have a very low-cost PON chip.
