ECOC 2014: Industry reflections on the show
Gazettabyte asked several attendees at the recent ECOC show, held in Cannes, to comment on key developments and trends they noted, as well as the issues they will track in the coming year.
Daryl Inniss, practice leader, components at market research firm, Ovum
It took a while to unwrap what happened at ECOC 2014. There was no one defining event or moment that was the highlight of the conference.
The location was certainly beautiful and the weather lovely. Yet I felt the participants were engaged with critical technical and business issues, given how competitive the market has become.
Kaiam raising US$35 million, Ranovus raising $24 million, InnoLight Technology raising $38 million with backing from Google Capital, and JDSU and Emcore each splitting into two companies are all examples of the shifting industry structure.
On the technology and product development front, advances in 100 Gig metro coherent solutions were reported, although products are coming to market later than first estimated. The client side of 100 Gig is transitioning to the CFP2. Datacom participants agree that the QSFP28 is the module of choice, but what goes inside will include both parallel single-mode and wavelength-multiplexed solutions.
Finisar’s 50 Gig transmission demonstration that used silicon photonics as the material choice surprised the market. Compared to last year, there were few multi-mode announcements. ECOC 2014 had little excitement and no one defining show event but there were many announcements showing the market’s direction.
One observation from the show, while not particularly exciting or sexy, is important and seems, in my opinion, to have gone unnoticed. Source Photonics demonstrated the 100GBASE-LR4, the 10km 100 Gigabit Ethernet standard, in the QSFP28 form factor. This is not new, as Source Photonics also demonstrated this module at OFC. What is interesting is that no one else has duplicated the result.
There will be demand for a denser -LR4 solution that’s backward compatible with the CFP, CFP2, and CFP4 form factors. It is unlikely that the PSM4, CWDM4, or CLR4 will go 10km and they are not optically compatible with the -LR4. The market is on track to use the QSFP28 for all 100 Gig distances so it needs the supporting optics. The Source Photonics demonstration shows a path for 10km. We expect to see other solutions for longer distances over time.
One surprise at the show was Finisar's and STMicroelectronics's demonstration of 50 Gig non-return-to-zero transmission over 2.2km of standard single-mode fibre. The transceiver was in the CFP4 form factor and used heterogeneous silicon technologies inside. The results were presented in a post-deadline paper (PD.2.4). The work is exciting because it demonstrates a directly modulated laser operating above 28 Gig, the current state-of-the-art.
The use of silicon photonics is surprising because Finisar has been forced to defend its legacy technology against the threat of transceivers based on silicon photonics. These results point to one path forward for next-generation 100 Gig and 400 Gig solutions.
In the coming year, I'm looking for the dominant metro 100 Gig solution to emerge. When will the CFP2 analogue coherent optical module become generally available? Multiple suppliers of this module will help unleash the 100 Gig line-side transmission market, drive revenue growth and spur development of the next-generation solution.
Slow product development gives competing approaches like the digital CFP a chance to become the dominant solution. At present, there is one digital CFP vendor with a generally available product, Acacia Communications, with a second, Fujitsu Optical Components, having announced general availability in the first half of 2015.
Neal Neslusan, vice president of sales and marketing at fabless chip company, MultiPhy.
It was impressive to see Oclaro's analogue CFP2 for coherent applications on the show floor, albeit only in loopback mode. Equally impressive was seeing ClariPhy's DSP on the evaluation board behind the CFP2.
I saw a few of the motherboard-based optics solutions at the show. They looked very interesting and in questioning various folks in the business I learned that for certain data centre applications these optics are considered acceptable. Indeed, they represent an ability to extract much higher bandwidth from a given motherboard as compared to edge-of-the-board based optics, but they are not pluggable.
Traditionally, pluggable optics has been the mainstay of the datacom and enterprise segments and these motherboard-based optics have been relegated to supercomputing. This is just another example, in my opinion, of how the data centre market is becoming distinct from the datacom market.
Were there any surprises at the show? I was surprised and alarmed at the cost of the Martini drinks at the hotel across the street from the show, and they weren't even that good!
Regarding developments in the coming year, the 8x50 Gig versus 4x100 Gig fight in the IEEE is clearly a struggle I will follow. I think it will have a great impact on product development in our industry. If 8x50 Gig wins, it may be one of the few times in the history of our industry that a less advanced solution is chosen over a more advanced and future-proofed one.
The physical size of the next-generation Terabit Ethernet switch chips will have a much larger impact on the optics they connect to in the coming years, compared to the past. This work combined with the motherboard-based optics may create a significant change in the solutions brought to bear for high-performance communications.
John Lively, principal analyst at market research firm, LightCounting.
There were several developments that I noted at the show. ECOC helped cement the view that 100 Gig coherent is mainstream for metro networks. More and more system vendors are also incorporating Raman/remote optically pumped amplifier (ROPA) technology into their toolkit; a ROPA is a Raman-based amplifier whose pump is located at one end of the link rather than in an intermediate node. Another trend evident at ECOC is how the network boundary between terrestrial and submarine is blurring.
As for developments to watch, I intend to follow mobile fronthaul/backhaul, higher-speed transceiver developments, of course, and how the mega-data-centre operators are disrupting networks, equipment, and components.
Verizon on 100G+ optical transmission developments
Feature: 100 Gig and Beyond. Part 1:
Verizon's Glenn Wellbrock discusses 100 Gig deployments and higher speed optical channel developments for long haul and metro.
The number of 100 Gigabit wavelengths deployed in the network has continued to grow in 2013.
According to Ovum, 100 Gigabit has become the wavelength of choice for large wavelength-division multiplexing (WDM) systems, with spending on 100 Gigabit now exceeding spending on 40 Gigabit. LightCounting forecasts that 40,000 100 Gigabit line cards will be shipped this year, 25,000 of them in the second half of the year alone. Infonetics Research, meanwhile, points out that while 10 Gigabit will remain the highest-volume speed, the most dramatic growth is at 100 Gigabit. By 2016, the majority of spending in long-haul networks will be on 100 Gigabit, it says.
The market research firms' findings align with Verizon's own experience deploying 100 Gigabit. The US operator said in September that it had added 4,800 100 Gigabit miles to its global IP network during the first half of 2013, for a total of 21,400 miles in the US network and 5,100 miles in Europe. Verizon expects to deploy another 8,700 miles of 100 Gigabit in the US and a further 1,400 miles in Europe by year end.
"We expect to hit the targets; we are getting close," says Glenn Wellbrock, director of optical transport network architecture and design at Verizon.
Verizon says several factors are driving the need for greater network capacity, including its FiOS bundled home communication services, Long Term Evolution (LTE) wireless and video traffic. But what triggered Verizon to upgrade its core network to 100 Gig was converging its IP networks and the resulting growth in traffic. "We didn't do a lot of 40 Gig [deployments] in our core MPLS [Multiprotocol Label Switching] network," says Wellbrock.
The cost of 100 Gigabit was another factor: a 100 Gigabit long-haul channel is now cheaper than ten 10 Gig channels. There are also operational benefits to using 100 Gig, such as having fewer wavelengths to manage. "So it is the lower cost-per-bit plus you get all the advantages of having the higher trunk rates," says Wellbrock.
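The cost-per-bit argument can be illustrated with a small back-of-the-envelope calculation. The sketch below uses hypothetical transponder prices (not Verizon's or any vendor's figures); only the structure of the comparison reflects the point being made:

```python
# Illustrative cost-per-bit comparison between one 100 Gig coherent channel
# and ten 10 Gig channels. All prices are hypothetical placeholders.

def cost_per_gbit(channel_cost_usd, channels, gbit_per_channel):
    """Total cost divided by total capacity, in USD per Gbit/s."""
    return (channel_cost_usd * channels) / (channels * gbit_per_channel)

# Hypothetical list prices for a long-haul link.
ten_gig_channel_cost = 10_000      # one 10 Gig transponder (assumed)
hundred_gig_channel_cost = 80_000  # one 100 Gig coherent transponder (assumed)

legacy = cost_per_gbit(ten_gig_channel_cost, channels=10, gbit_per_channel=10)
coherent = cost_per_gbit(hundred_gig_channel_cost, channels=1, gbit_per_channel=100)

print(f"10 x 10 Gig: {legacy:.0f} USD per Gbit/s")
print(f"1 x 100 Gig: {coherent:.0f} USD per Gbit/s")
# With these assumed prices the single 100 Gig channel is 20% cheaper per bit,
# and it leaves one wavelength to manage instead of ten.
```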
Verizon expects to continue deploying 100 Gigabit; it has a large network and much of the deployment will occur in 2014. "Eventually, we hope to get a bit ahead of the curve and have some [capacity] headroom," says Wellbrock.
We could take advantage of 200 Gig or 400 Gig or 500 Gig today
Super-channel trials
Operators, working with optical vendors, are trialling super-channels and advanced modulation schemes such as 16-QAM (quadrature amplitude modulation). Such trials involve links carrying data in multiples of 100 Gig: 200 Gig, 400 Gig, even a Terabit.
Super-channels are already carrying live traffic. Infinera's DTN-X system delivers 500 Gig super-channels using quadrature phase-shift keying (QPSK) modulation. Orange has a 400 Gigabit super-channel link between Lyon and Paris. The 400 Gig super-channel comprises two carriers, each carrying 200 Gig using 16-QAM, implemented using Alcatel-Lucent's 1830 photonic service switch platform and its photonic service engine (PSE) DSP-ASIC.
"We could take advantage of 200 Gig or 400 Gig or 500 Gig today," says Wellbrock. "As soon as it is cost effective, you can use it because you can put multiple 100 Gig channels on there and multiplex them."
The issue with 16-QAM, however, is its limited reach using existing fibre and line systems - 500-700km - compared to QPSK's 2,500+ km before regeneration. "It [16-QAM] will only work in a handful of applications - 25 percent, something of this nature," says Wellbrock. This is good for a New York to Boston link, he says, but not New York to Chicago. "From our end it is pretty simple, it is lowest cost," says Wellbrock. "If we can reduce the cost, we will use it [16-QAM]." However, if the reach requirement cannot be met, the operator will not go to the expense of putting in signal regenerators just to use 16-QAM, he says.
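The reach gap between QPSK and 16-QAM follows from the extra signal-to-noise ratio the denser constellation requires. The sketch below is a simplified, illustrative model, assuming textbook SNR requirements and a reach that scales inversely with the required OSNR; it is not a description of any vendor's line system:

```python
# Back-of-the-envelope comparison of QPSK vs 16-QAM reach at the same symbol rate.
# Assumptions: ASE-noise-limited link, reach roughly inversely proportional to
# the required OSNR, and textbook SNR requirements at a pre-FEC BER of ~1e-3.
# Real systems depend on fibre type, amplifiers and DSP, so treat this as a sketch.

REQUIRED_SNR_DB = {
    "QPSK": 9.8,     # approx. Es/N0 for BER ~1e-3 (assumed)
    "16-QAM": 16.5,  # approx. Es/N0 for BER ~1e-3 (assumed)
}

def relative_reach(baseline_format, other_format):
    """Reach of 'other_format' relative to 'baseline_format' under the assumptions above."""
    delta_db = REQUIRED_SNR_DB[other_format] - REQUIRED_SNR_DB[baseline_format]
    return 10 ** (-delta_db / 10.0)

qpsk_reach_km = 2500  # the article's figure for QPSK before regeneration
qam16_reach_km = qpsk_reach_km * relative_reach("QPSK", "16-QAM")

print(f"16-QAM needs ~{REQUIRED_SNR_DB['16-QAM'] - REQUIRED_SNR_DB['QPSK']:.1f} dB more SNR")
print(f"Estimated 16-QAM reach: ~{qam16_reach_km:.0f} km")
# Yields roughly 530 km, in the same ballpark as the 500-700 km quoted above,
# which is why 16-QAM suits a New York-Boston link but not New York-Chicago.
```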
Earlier this year Verizon conducted a trial with Ciena using 16-QAM. The goals were to test 16-QAM alongside live traffic and determine whether the same line card would work at 100 Gig using QPSK and 200 Gig using 16-QAM. "The good thing is you can use the same hardware; it is a firmware setting," says Wellbrock.
We feel that 2015 is when we can justify a new, greenfield network and that 100 Gig or versions of that - 200 Gig or 400 Gig - will be cheap enough to make sense
100 Gig in the metro
Verizon says there is already sufficient traffic pressure in its metro networks to justify 100 Gig deployments. Some of Verizon's bigger metro locations comprise up to 200 reconfigurable optical add/drop multiplexer (ROADM) nodes. Each node is typically a central office connected to the network via a ROADM, varying from a two-degree to an eight-degree design.
"Not all the 200 nodes would need multiple 100 Gig channels but in the core of the network, there is a significant amount of capacity that needs to be moved around," says Wellbrock. "100 Gig will be used as soon as it is cost-effective."
Unlike long-haul, 100 Gigabit in the metro remains costlier than ten 10 Gig channels. That said, Verizon has deployed metro 100 Gig when absolutely necessary, for example to connect two router locations that need a 100 Gig link. In such cases Verizon is willing to pay extra.
"By 2015 we are really hoping that the [metro] crossover point will be reached, that 100 Gig will be more cost effective in the metro than ten times 10 [Gig]." Verizon will build a new generation of metro networks based on 100 Gig or 200 Gig or 400 Gig using coherent receivers rather than use existing networks based on conventional 10 Gig links to which 100 Gig is added.
"We feel that 2015 is when we can justify a new, greenfield network and that 100 Gig or versions of that - 200 Gig or 400 Gig - will be cheap enough to make sense."
Data Centres
The build-out of data centres is not a significant factor driving 100 Gig demand. The largest content service providers do use tens of 100 Gigabit wavelengths to link their mega data centres but they typically have their own networks that connect relatively few sites.
"If you have lots of data centres, the traffic itself is more distributed, as are the bandwidth requirements," says Wellbrock.
Verizon has over 220 data centres, most being hosting centres. The data demand between many of the sites is relatively small and is served with 10 Gigabit links. "We are seeing the same thing with most of our customers," says Wellbrock.
Technologies
System vendors continue to develop cheaper line cards to meet the cost-conscious metro requirements. Module developments include smaller 100 Gig 4x5-inch MSA transponders, 100 Gig CFP modules and component developments for line side interfaces that fit within CFP2 and CFP4 modules.
"They are all good," says Wellbrock when asked which of these 100 Gigabit metro technologies are important for the operator. "We would like to get there as soon as possible."
The CFP4 may be available by late 2015 but more likely in 2016, and will reduce significantly the cost of 100 Gig. "We are assuming they are going to be there and basing our timelines on that," he says.
Greater line card port density is another benefit once 100 Gig CFP2 and CFP4 line-side modules become available. "Lower power and greater density is allowing us to get more bandwidth on and off the card," says Wellbrock.
Existing switches and routers are bandwidth-constrained: they have more traffic capability than the faceplate can provide. "The CFPs, the way they are today, you can only get four on a card, and a lot of the cards will support twice that much capacity," says Wellbrock.
With the smaller form factor CFP2 and CFP4, 1.2 and 1.6 Terabit cards will become possible from 2015. Another possible development is a 400 Gigabit CFP, which would achieve a similar overall capacity gain.
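The faceplate arithmetic behind these figures can be sketched as follows. Only the four-CFPs-per-card figure comes from the article; the CFP2 and CFP4 port counts are assumptions chosen to match the 1.2 and 1.6 Terabit card capacities mentioned above:

```python
# Rough faceplate-capacity arithmetic for 100 Gig line-side modules.
# Ports-per-card figures other than the CFP's are assumptions.

modules = {
    # form factor: (ports per line card, Gbit/s per port)
    "CFP":  (4, 100),    # per the article: "you can only get four on a card"
    "CFP2": (12, 100),   # assumed, consistent with a 1.2 Terabit card
    "CFP4": (16, 100),   # assumed, consistent with a 1.6 Terabit card
}

for name, (ports, rate) in modules.items():
    capacity_tbps = ports * rate / 1000
    print(f"{name}: {ports} ports x {rate}G = {capacity_tbps:.1f} Tbit/s per card")

# CFP: 0.4 Tbit/s - below what many switch/router cards can actually process,
# which is the faceplate constraint Wellbrock describes. CFP2 and CFP4 lift
# the card capacity to the 1.2-1.6 Tbit/s expected from 2015.
```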
Coherent, not just greater capacity
Verizon is looking for greater system integration and continues to encourage industry commonality in optical component building blocks to drive down cost and promote scale.
Indeed Verizon believes that industry developments such as MSAs and standards are working well. Wellbrock prefers standardisation to custom designs like 100 Gigabit direct detection modules or company-specific optical module designs.
Wellbrock stresses the importance of coherent receiver technology not only in enabling higher capacity links but also a dynamic optical layer. The coherent receiver adds value when it comes to colourless, directionless, contentionless (CDC) and flexible grid ROADMs.
"If you are going to have a very cost-effective 100 Gigabit because the ecosystem is working towards similar solutions, then you can say: 'Why don't I add in this agile photonic layer?' and then I can really start to do some next-generation networking things." This is only possible, says Wellbrock, because of the tunabie filter offered by a coherent receiver, unlike direct detection technology with its fixed-filter design.
"Today, if you want to move from one channel to the next - wavelength 1 to wavelength 2 - you have to physically move the patch cord to another filter," says Wellbrock. "Now, the [coherent] receiver can simply tune the local oscillator to channel 2; the transmitter is full-band tunable, and now the receiver is full-band tunable as well." This tunability can be enabled remotely rather than requiring an on-site engineer.
Such wavelength agility promises greater network optimisation.
"How do we perhaps change some of our sparing policy? How do we change some of our restoration policies so that we can take advantage of that agile photonics later," says Wellbroack. "That is something that is only becoming available because of the coherent 100 Gigabit receivers."
Reporting the optical component & module industry
LightCounting recently published its six-monthly optical market research covering telecom and datacom. Gazettabyte interviewed Vladimir Kozlov, CEO of LightCounting, about the findings.
When people forecast they always make a mistake on the timeline because they overestimate the impact of new technology in the short term and underestimate in the long term
Q: How would you summarise the state of the optical component and module industry?
VK: At a high level, the telecom market is flat, even hibernating, while datacom is exceeding our expectations. In datacom, it is not only 40 and 100 Gig; 10 Gig is also growing faster than anticipated. Shipments of 10 Gigabit Ethernet (GbE) [modules] will exceed those of 1GbE this year.
The primary reason is data centre connectivity: the 'spine and leaf' switch architecture requires many more connections between the racks and the aggregation switches, and that is increasing demand. I suspect it is more than just data centres, however. I wouldn't be surprised if enterprises are adopting 10GbE because it is now inexpensive. Service providers offer Ethernet as an access line and use it for mobile backhaul.
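The link-count arithmetic behind the spine-and-leaf point is simple: every leaf (top-of-rack) switch connects to every spine switch, so the number of optical links grows multiplicatively with the size of the fabric. A small illustration with assumed switch counts:

```python
# Why leaf-and-spine fabrics consume so many optical links: each leaf switch
# has an uplink to every spine switch. Switch counts below are illustrative.

def fabric_links(leaf_switches: int, spine_switches: int, uplinks_per_pair: int = 1) -> int:
    """Number of leaf-to-spine links in a full-mesh leaf/spine fabric."""
    return leaf_switches * spine_switches * uplinks_per_pair

# A modest data centre pod (assumed sizes):
small = fabric_links(leaf_switches=20, spine_switches=4)
# A larger deployment (assumed sizes):
large = fabric_links(leaf_switches=200, spine_switches=16)

print(f"20 leaves x 4 spines   = {small} links (a transceiver at each end)")
print(f"200 leaves x 16 spines = {large} links")
# Every link terminates in an optical module at both ends, which is why this
# architecture pulls through 10GbE (and later 40/100GbE) volumes so quickly.
```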
Can you explain what is causing the flat telecom market?
Part of the telecom 'hibernation' story is the rapidly declining SONET/SDH market. The decline had been expected, but the market was in fact still growing as recently as two years ago. First, 40 Gigabit OC-768 declined, and then the second nail in the coffin was the decline in 10 Gig sales: 10GbE is all SFP+, whereas OC-192 SONET/SDH is still in the XFP form factor.
The steady dense WDM module market and the growth in wireless backhaul are compensating for the decline in the SONET/SDH market, as well as the sharp drop this year in FTTx transceiver and BOSA (bidirectional optical sub-assembly) shipments; there is also a big shift from transceivers to BOSAs.
LightCounting highlights strong growth of 100G DWDM in 2013, with some 40,000 line card port shipments expected this year. Yet LightCounting is cautious about 100 Gig deployments. Why the caution?
We have to be cautious, given past history with 10 Gig and 40 Gig rollouts.
If you look at 10 Gig deployments, during the optical bubble (1999-2000) there was huge expected demand, before the market returned to normality and real traffic demand. Whatever 10 Gig was installed in 1999-2000 was more than enough until 2005. In 2006 and 2007, 10 Gig picked up again, followed by 40 Gig, which reached 20,000 ports in 2008. But then the financial crisis occurred and the 40 Gig story was interrupted in 2009, only picking up from 2010 to reach 70,000 ports this year.
So 40 Gig volumes are higher than 100 Gig but we haven't seen any 40 Gig in the metro. And now 100 Gig is messing up the 40G story.
The question in my mind is how much of a bottleneck the metro is today. There may be certain large cities that already require such deployments, but equally there was so much fibre deployed in metropolitan areas back in the bubble. If fibre cost is not an issue, why go to 100 Gig? The operator will use fibre and 10 Gig to make more money.
CenturyLink recently announced its first customer purchasing 100 Gig connections - DigitalGlobe, a company specialising in high-definition mapping technology - which will use 100 Gig connectivity to transfer massive amounts of data between its data centres. This is still a special case, despite the increasing number of data centres around the world.
There is no doubt that 100 Gig will be a must-have technology in the metro and even metro-access networks once 1GbE broadband access lines become ubiquitous and 10 Gig is widely used in the access-aggregation layer. It is starting to happen.
So 100 Gigabit in the metro will happen; it is just a question of timing. Is it going to be two to three years or 10-15 years? When people forecast they always make a mistake on the timeline because they overestimate the impact of new technology in the short term and underestimate in the long term.
LightCounting highlights strong sales in 10 Gig and 40 Gig within the data centre but not at 100 Gig. Why?
If you look at the spine and leaf architecture, most of the connections are 10 Gig, broken out from 40 Gig optical modules. This will begin to change as native 40GbE ramps in the larger data centres.
If you go to the super-spine layer, which takes data from the aggregation switches to the data centre's core switches, 100GbE could be used, and I'm sure some companies like Google are using 100GbE today. But the numbers are probably three orders of magnitude lower than in the spine and leaf layers. Volume demand for 100GbE is not that high today, and that is also related to the high price of the modules.
Higher volumes reduce the price, but then the complexity and size of the [100 Gig CFP] modules need to be reduced as well. With 10 Gig, the major [cost reduction] milestone was the transition to a 10 Gig electrical interface. The same has to happen with 100 Gig, and there will be a transition to a 4x25Gbps electrical interface, but it is a big transition. Again, forget about it happening in two to three years; think rather of a five- to 10-year time frame.
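The electrical-interface transition can be illustrated with simple lane arithmetic. The gearbox observation below is a general one about 10x10G versus 4x25G host interfaces, with assumed figures, rather than a statement about any specific module:

```python
# Lane arithmetic for the 100 Gig electrical interface transition.
# Early 100 Gig modules such as the CFP use a 10 x 10 Gbit/s electrical interface,
# so pairing them with 4 x 25 Gbit/s optics requires a gearbox chip inside the
# module. Moving the host interface to 4 x 25 Gbit/s removes that conversion,
# one of the cost and size reductions discussed above.

def lanes_needed(total_gbps: int, lane_rate_gbps: int) -> int:
    assert total_gbps % lane_rate_gbps == 0, "total must be a multiple of the lane rate"
    return total_gbps // lane_rate_gbps

old_electrical = lanes_needed(100, 10)   # 10 lanes at 10 Gbit/s
new_electrical = lanes_needed(100, 25)   # 4 lanes at 25 Gbit/s
optical_lanes  = lanes_needed(100, 25)   # e.g. LR4: 4 wavelengths at 25 Gbit/s

print(f"Old host interface: {old_electrical} x 10G lanes")
print(f"New host interface: {new_electrical} x 25G lanes")
print(f"Optical side (LR4): {optical_lanes} x 25G wavelengths")
# With 10 x 10G electrical and 4 x 25G optical, a 10:4 gearbox is needed;
# with 4 x 25G on both sides the lanes map one-to-one.
```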
I suspect that one reason Google is offering 1Gbps FTTH services to a few communities in the US is to find out what these new applications are, by studying end-user demand
You also point out the failure of the IEEE working group to come up with a 100 GbE solution for the 500m-reach sweet spot. What will be the consequence of this?
The IEEE is talking about 400GbE standards now. Go back to 40GbE, which was only approved some three years ago: the majority of the IEEE was against having 40GbE at all, the objective being to go to 100GbE and skip 40GbE altogether. At the last moment a couple of vendors pushed 40GbE through. And look at 40GbE now, it is [deployed] all over the place: the industry is happy, suppliers are happy and customers are happy.
Again, look at 40GbE, which has a standard at 10km. If you look at what is being shipped today, only 10 percent of 40GBASE-LR4 modules are compliant with the standard. The rest of the volume is 2km parts - substandard devices that use Fabry-Perot instead of DFB (distributed feedback) lasers. The yields are higher and customers love them because they cost one tenth as much. The market has found its own solution.
The same thing could happen at 100 Gig. And then there is Cisco Systems with its own agenda. It has just announced a 40 Gig BiDi connection which is another example of what is possible.
What will LightCounting be watching in 2014?
One primary focus is what wireline revenues service providers will report, particularly additional revenues generated by FTTx services.
AT&T and Verizon reported very good results in Q3 [2013] and I'm wondering if this is the start of a longer trend as wireline revenues from FTTx pick up, it will give carriers more of an incentive to invest in supporting those services.
AT&T and Verizon customers are willing to pay a little more for faster connectivity today, but it really takes new applications to develop for end-user spending on bandwidth to jump to the next level. Some of these applications are probably emerging, but we do not know what these are yet. I suspect that one reason for Google offerings of 1Gbps FTTH services to a few communities in the U.S. is to find out what these new application are, by studying end-user demand.
A related issue is whether deployments of broadband services improve economic growth and by how much. The expectations are high but I would like to see more data on this in 2014.
