ECOC 2015: Reflections

Gazettabyte asked industry executives what trends and highlights they noted at the recent European Conference on Optical Communication (ECOC) event, held in Valencia. Here are three views.

 

Valery Tolstikhin, head of a design consultancy, Intengent


ECOC was a big show and included a number of satellite events, such as the 6th European Forum on Photonic Integration, the 3rd Optical Interconnect in Data Center Symposium and Market Focus, all of which I attended. So, lots of information to digest. 

My focus was mainly on data centre optical interconnects and photonic integration.

 

Data centre interconnects

What became evident at ECOC is that 50 Gig modulation and the PAM-4 modulation format will be the basis of the next generation (after 100 Gig) data centre interconnect. This is in contrast to the current 100 Gig non-return-to-zero (NRZ) modulation using 25 Gig lanes.

This paves the way towards 200 Gig (four PAM-4 lanes at 25 Gbaud, or 50 Gig each) and 400 Gig (four PAM-4 lanes at 50 Gbaud, or 100 Gig each), a natural continuation of today's 4 x 25 Gig NRZ lanes – the state-of-the-art data centre interconnect that has yet to take off in practical deployment.
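The lane arithmetic behind these figures can be sketched in a few lines: PAM-4 carries two bits per symbol against NRZ's one, which is where the doubling comes from. (The function and variable names below are mine, for illustration only.)

```python
def aggregate_gbps(lanes, baud_gbd, bits_per_symbol):
    """Aggregate data rate of a parallel optical interface, in Gbps."""
    return lanes * baud_gbd * bits_per_symbol

nrz_100g  = aggregate_gbps(4, 25, 1)  # today's 100 Gig: 4 x 25G NRZ
pam4_200g = aggregate_gbps(4, 25, 2)  # 4 PAM-4 lanes at 25 Gbaud
pam4_400g = aggregate_gbps(4, 50, 2)  # 4 PAM-4 lanes at 50 Gbaud

print(nrz_100g, pam4_200g, pam4_400g)  # 100 200 400
```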

The transition from 100 Gig to 400 Gig seems to be happening much faster than the one from 40 Gig to 100 Gig. And 40 Gig serial finally seems to have gone; who needs 40 Gig when 50 Gig is available?

Another observation is that, despite common agreement that future new deployments should use single-mode rather than multi-mode fibre – given the latter's severe reach limitation, which worsens with modulation speed – the multi-mode fibre camp does not give up easily.

That is because of the tons of multi-mode fibre interconnects already deployed, and the low cost of the gallium arsenide 850nm VCSELs these links use. However, the spectral efficiency of such interconnects is low, resulting in a high multi-mode fibre count and associated cost. This is a strong argument against such fibre.

Now a short-wave WDM (SWDM) initiative, led by Finisar, is emerging as a partial solution to this problem. Both OM3 and OM4 multi-mode fibre can be used, extending link spans to 100m at 25 Gig speeds.

 


 

The SWDM Alliance was announced just before ECOC 2015, with major players like Finisar and Corning on board, suggesting this is a serious effort not to be ignored by the single mode fibre camp.

Lastly, single-mode fibre 4 x 25 Gig QSFP28 pluggables with a reach of up to 2 km, announced with some fanfare a year ago, seem to have become more of a commodity. Two major varieties – PSM and WDM – are claimed, and probably shipping, by a growing number of vendors.

Since these are pluggables with fixed specs, the only difference from the customer's viewpoint is price. That suggests a price war is looming, as happens in all mass markets. Current prices are still an order of magnitude or more above the $1/Gig target set by Facebook and the like, so there is a long way to go, but the trend is clear.

This reminds me of what I experienced in the PON market: a massive market addressed by a standardised product that can be assembled, at a certain point, using off-the-shelf components. Such a market creates intense competition in which low-cost labour eventually wins over technology innovation.

 

Photonic integration 

Two trends regarding photonic integration for telecom and datacom became clear at ECOC 2015.

One positive development is an emerging fabless ecosystem for photonic integrated circuits (PICs), or at least an understanding of the need for one. These activities are driven by silicon photonics, which is based on the fabless model since its central idea is to leverage existing silicon manufacturing infrastructure. For example, Luxtera, the most visible silicon photonics component vendor, is a fabless company.

There are also signs of a fabless ecosystem building up in the area of III-V photonics, primarily indium phosphide-based. The European JePPIX programme is one example. Here, companies providing foundry and design-house services are emerging, while the programme itself supports access to PIC prototyping through multi-project wafer (MPW) runs for a limited fee. That's how the ASIC business began 30 to 40 years ago.

A link to OEM customers is still a weak point, but I see this being fixed in the near future. Of course, Intengent, my design house company, does just that: links OEM customers and the foundries for customised photonic chip and PIC development.

 

As soon as PICs give a system advantage, which Infinera’s chips do, they become a system solution enabler, not merely ordinary components made a different way

 

The second, less positive development, is that photonic integration continues to struggle to find applications and markets where it will become a winner. Apart from devices like the 100 Gig coherent receiver, where phase control requirements are difficult to meet using discretes, there are few examples where photonic integration provides an edge. 

Even a 4 x 25 Gig assembly using discrete components has been demonstrated by several vendors for today's 100 Gig client-side and data centre interconnects. It then becomes a matter of economies of scale and cheap labour, leaving little room for photonic integration to play. This is what happened in the PON market, despite the photonic integrated products developed by my previous company, OneChip Photonics.

On the flip side, the example of Infinera shows where the power of photonic integration lies: its ability to create more complicated PICs as needed without changing the technology.

One-Terabit receiver and transmitter chips developed by Infinera are examples of complex photonic circuits that are simply not achievable with an optical sub-assembly. As soon as PICs give a system advantage, which Infinera's chips do, they become a system solution enabler, not merely ordinary components made a different way.

However, most of the photonic integration players - silicon photonics and indium phosphide alike - still try to do the same as what an optical sub-assembly can do, but more cheaply. This does not seem to be a winning strategy.

And a comment on silicon photonics. At ECOC 2015, I was pleased to see that, finally, there is a consensus that silicon photonics needs to aim at applications with a certain level of complexity if it is to provide any advantage to the customer. 

 

Silicon photonics must look for more complex things, maybe 400 Gig or beyond, but the market is not there yet

 

For simpler circuits, there is little advantage in using photonic integration, least of all silicon photonics. Where people disagree is on what this threshold level of complexity is. Some suggest that 100 Gig optics for data centres is the starting point, but I'm unsure. There are discrete optical sub-assemblies already on the market that will only become cheaper and cheaper. Silicon photonics must look for more complex things, maybe 400 Gig or beyond, but the market is not there yet.

One show highlight was the clear roadmap to 400 Gig and beyond, based on a very high modulation speed (50 Gig) and the PAM-4 modulation format, as discussed. These were supported at previous events, but never before have I seen the trend so clearly and universally accepted.

What surprised me, in a positive way, is that people have started to understand that silicon photonics does not automatically solve their problems just because it has the word silicon in its name. Rather, it creates new challenges, cost efficiency being an important one. The conditions for cost-efficient silicon photonics are yet to be found, but it is refreshing that only a few now believe that silicon photonics can be superior by virtue of just being 'silicon'.

I wouldn't highlight one thing that I learned at the show. Basically, ECOC is an excellent opportunity to check on the course of technology development and people's thoughts about it. And it is often better seen and felt on the exhibition floor than in the conference's technical sessions.

For the coming year, I will continue to track data centre interconnect optics, in all its flavours, and photonic integration, especially through the prism of the emerging fabless ecosystem.

 

 

Vishnu Shukla, distinguished member of technical staff in Verizon's network planning group

There were more contributions related to software-defined networking (SDN) and multi-layer transport at ECOC. There were not so much new technology breakthroughs as many incremental evolutions to high-speed optical networking technologies such as modulation, digital signal processors and filtering.

I intend to track technologies and test results related to transport layer virtualisation and similar efforts for 400 Gig-and-beyond transport.

 

 

Vladimir Kozlov, CEO and founder of LightCounting

I had not attended ECOC since 2000. It is a good event, a scaled down version of OFC but just as productive. What surprised me is how small this industry is even 15 years after the bubble. Everything is bigger in the US, including cars, homes and tradeshows. Looking at our industry on the European scale helps to grasp how small it really is.

 

What is the next market opportunity for optics? The data centre market is pretty clear now, but what next? 

 

Listening to the plenary talk by Sir David Payne, it struck me how infinite technology is. It is so easy to get overexcited about the possibilities, but very few technological advances lead to commercial success.

The market is very selective and it takes a lot of determination to get things done. How do start-ups handle this risk? Do people get delusional with their ideas and impact on the world? I suspect that some degree of delusion is necessary to deal with the risks.

As for issues to track in the coming year, what is the next market opportunity for optics? The data centre market is pretty clear now, but what next? 


OFDM promises compact Terabit transceivers

Source: ECI Telecom

 

A one Terabit super-channel, crafted using orthogonal frequency-division multiplexing (OFDM), has been transmitted over a live network in Germany. The OFDM demonstration is the outcome of a three-year project conducted by the Tera Santa Consortium comprising Israeli companies and universities.

Current 100 Gig coherent networks use a single carrier for the optical transmission whereas OFDM imprints the transmitted data across multiple sub-carriers. OFDM is already used as a radio access technology, the Long Term Evolution (LTE) cellular standard being one example.

With OFDM, the sub-carriers are tightly packed with a spacing chosen to minimise the interference at the receiver. OFDM is being researched for optical transmission as it promises robustness to channel impairments as well as implementation benefits, especially as systems move to Terabit speeds. 

"It is clear that the market has voted for single-carrier transmission for 400 Gig," says Shai Stein, chairman of the Tera Santa Consortium and CTO of system vendor, ECI Telecom. "But at higher rates, such as 1 Terabit, the challenge will be to achieve compact, low-power transceivers."

 

The real contribution [of OFDM] is implementation efficiency

Shai Stein

 

One finding of the project is that the OFDM optical performance matches that of traditional coherent transmission but that the digital signal processing required is halved. "The real contribution [of OFDM] is implementation efficiency," says Stein.

For the trial, the 175GHz-wide 1 Terabit super-channel signal was transmitted through several reconfigurable optical add/drop multiplexer (ROADM) stages. The 175GHz spectrum comprises seven 25GHz bands. Two OFDM schemes were trialled across each band: 128 sub-carriers and 1024 sub-carriers.

To achieve 1 Terabit, the net data rate per band was 142 Gigabits-per-second (Gbps). Adding the overhead bits for forward error correction and pilot signals, the gross data rate per band is closer to 200Gbps.

The 128 or 1024 sub-carriers per band are modulated using either quadrature phase-shift keying (QPSK) or 16-quadrature amplitude modulation (16-QAM). One modulation scheme - QPSK or 16-QAM - was used across a band, although Stein points out that the modulation scheme can be chosen on a sub-carrier by sub-carrier basis, depending on the transmission conditions. 
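The super-channel arithmetic reported above can be checked in a few lines; the overhead figure is approximate, derived from the quoted 142 Gbps net versus roughly 200 Gbps gross rates:

```python
bands = 7
band_width_ghz = 25.0
net_gbps_per_band = 142.0
gross_gbps_per_band = 200.0  # approximate, including FEC and pilot overhead

total_spectrum_ghz = bands * band_width_ghz       # the 175 GHz super-channel
net_total_gbps = bands * net_gbps_per_band        # ~994 Gbps, i.e. ~1 Terabit
overhead = 1 - net_gbps_per_band / gross_gbps_per_band  # ~29% overhead

print(total_spectrum_ghz, net_total_gbps, round(overhead, 2))
```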

The trial took place at the Technische Universität Dresden, using the Deutsches Forschungsnetz e.V. X-WiN research network. The signal recovery was achieved offline using MATLAB computational software. "It [the trial] was in real conditions, just the processing was performed offline," says Stein. The MATLAB algorithms will be captured in FPGA silicon and added to the transceiver in the coming months.

Using a purpose-built simulator, the Tera Santa Consortium compared the OFDM results with traditional coherent super-channel transmission. "Both exhibited the same performance," says David Dahan, senior research engineer for optics at ECI Telecom. "You get a 1,000km reach without a problem." And with hybrid EDFA-Raman amplification, 2,000km is possible. The system also demonstrated robustness to chromatic dispersion. Using 1024 sub-carriers, the impact of chromatic dispersion is sufficiently low that no compensation is needed, says ECI.

Stein says the project has been hugely beneficial to the Israeli optical industry: "There has been silicon photonics, transceiver and algorithmic developments, and benefits at the networking level."  For ECI, it is important that there is a healthy local optical supply chain. "The giants have that in-house, we do not," says Stein. 

One Terabit transmission will be realised in the marketplace in the next two years. Due to the project, the consortium companies are now well placed to understand the requirements, says Stein.

Set up in 2011, the Tera Santa Consortium includes ECI Telecom, Finisar, MultiPhy, Cello, Civcom, Bezeq International, the Technion Israel Institute of Technology, Ben-Gurion University, the Hebrew University of Jerusalem, Bar-Ilan University and Tel-Aviv University.


OFC 2014 industry reflections - Part 1

Gazettabyte is asking industry figures for their thoughts following the recent OFC 2014 exhibition and conference: the noteworthy developments and trends, what they learnt at the show, and the topics to track in the coming year.  

T.J. Xia, distinguished member of technical staff at Verizon

The CFP2 form factor pluggable - analogue coherent optics (CFP2-ACO) at 100 and 200 Gig will become the main choice for metro core networks in the near future. 

I learnt that the discrete multitone (DMT) modulation format seems the right choice for a low-cost, single-wavelength direct-detection 100 Gigabit Ethernet (GbE) interface for data ports, and 4xDMT for 400GbE ports.

As for developments to watch, photonic switches will play a much more important role for intra-data centre connections. As the port capacity of top-of-rack switches gets larger, photonic switches have more cost advantages over middle stage electrical switches.

 

Don McCullough, Ericsson's director of strategic communications at group function technology

The biggest trend in networking right now is software-defined networking (SDN) and Network Function Virtualisation (NFV), and both were on display at OFC. We see that the combination of SDN and NFV in the control and software domains will directly impact optical networks. The Ericsson-Ciena partnership embodies this trend with its agreement to develop joint transport solutions for IP-optical convergence and service provider SDN. 

We learnt that network transformation, both at the control layer (SDN and NFV) and at the data plane layer, including optical, is happening at the network operators. Related to that, we also saw interest at OFC in the announcement AT&T made at Mobile World Congress about its User-Defined Network Cloud and Domain 2.0 strategy, where AT&T has chosen to work with Ericsson on integration and transformation services.

We will continue to watch the ongoing deployment of SDN and NFV to control wide area networks, including optical. We expect more joint development agreements to connect SDN and NFV with optical networking, like the Ericsson-Ciena one.

One new thing for 2014 is that we expect to see open source projects like OpenStack and Open DayLight play increasingly important roles in the transformation of networks.

 

Brandon Collings, JDSU's CTO for communications and commercial optical products

The announcements of integrated photonics for coherent CFP2s were an important development in the 100 Gig progression. While JDSU did not make an announcement at OFC, we are similarly engaged with our customers on pluggable approaches for coherent 100 Gig.

 

I would like to see convergence around 400 Gig client interface standards

There is a lack of appreciation of the data centre operators who aren’t big household names.  While the mega data centre operators have significant influence and visibility, the needs of the numerous, smaller-sized operators are largely under-represented.

I would like to see convergence around 400 Gig client interface standards.  Lots of complex technology here, challenges to solve and options to do so.  But ambiguity in these areas is typically detrimental to the overall industry.

Mike Freiberger, principal member of technical staff, Verizon

The emergence of 100 Gig for metro, access, and data centre reach optics generated a lot of contentious debate. Maybe the best way forward as an industry isn’t really solidified just yet.

What did I learn? Verizon is a leader in wireless backhaul and is growing its options at a rate faster than the industry.

The two developments that caught my attention are 100 Gig short-reach and above-100-Gig research. 100 Gig short-reach because this will set the trigger point for the timing of 100 Gig interfaces really starting to sell in volume. Research on data rates faster than 100 Gig because price-per-bit always has to come downward.


Reporting the optical component & module industry

LightCounting recently published its six-monthly optical market research covering telecom and datacom. Gazettabyte interviewed Vladimir Kozlov, CEO of LightCounting, about the findings.

 

When people forecast they always make a mistake on the timeline because they overestimate the impact of new technology in the short term and underestimate in the long term

 

Q: How would you summarise the state of the optical component and module industry?

VK: At a high level, the telecom market is flat, even hibernating, while datacom is exceeding our expectations. In datacom, it is not only 40 and 100 Gig but 10 Gig is growing faster than anticipated. Shipments of 10 Gigabit Ethernet (GbE) [modules] will exceed 1GbE this year.

The primary reason is data centre connectivity - the 'spine and leaf' switch architecture that requires a lot more connections between the racks and the aggregation switch - that is increasing demand. I suspect it is more than just data centres, however. I wouldn't be surprised if enterprises are adopting 10GbE because it is now inexpensive. Service providers offer Ethernet as an access line and use it for mobile backhaul.    

 

Can you explain what is causing the flat telecom market?

Part of the telecom 'hibernation' story is the rapidly declining SONET/SDH market. The decline has been expected, but in fact the market had been growing until as recently as two years ago. First, 40 Gigabit OC-768 declined, and then the second nail in the coffin was the decline in 10 Gig sales: 10GbE is all SFP+ whereas OC-192 SONET/SDH is still in the XFP form factor.

The steady dense WDM module market and the growth in wireless backhaul are compensating for the decline in SONET/SDH market as well as the sharp drop this year in FTTx transceiver and BOSA (bidirectional optical sub assembly) shipments, and there is a big shift from transceivers to BOSAs.  

 

LightCounting highlights strong growth of 100G DWDM in 2013, with some 40,000 line card port shipments expected this year. Yet LightCounting is cautious about 100 Gig deployments. Why the caution?

We have to be cautious, given past history with 10 Gig and 40 Gig rollouts.

If you look at 10 Gig deployments, before the optical bubble (1999-2000) there was huge anticipated demand before the market returned to normality, supporting real traffic demand. Whatever 10 Gig was installed in 1999-2000 was more than enough until 2005. In 2006 and 2007, 10 Gig picked up again, followed by 40 Gig, which reached 20,000 ports in 2008. But then the financial crisis occurred and the 40 Gig story was interrupted in 2009, only picking up from 2010 to reach 70,000 ports this year.

So 40 Gig volumes are higher than 100 Gig but we haven't seen any 40 Gig in the metro. And now 100 Gig is messing up the 40G story.

The question in my mind is how much of a bottleneck the metro is today. There may be certain large cities that already require such deployments, but equally, so much fibre was deployed in metropolitan areas back in the bubble. If fibre cost is not an issue, why go to 100 Gig? The operator will use fibre and 10 Gig to make more money.

CenturyLink recently announced its first customer purchasing 100 Gig connections - DigitalGlobe, a company specialising in high-definition mapping technology - which will use 100 Gig connectivity to transfer massive amounts of data between its data centres. This is still a special case, despite the increasing number of data centres around the world.

There is no doubt that 100 Gig will be a must-have technology in the metro, and even metro-access, networks once 1GbE broadband access lines become ubiquitous and 10 Gig is widely used in the access-aggregation layer. It is starting to happen.

So 100 Gigabit in the metro will happen; it is just a question of timing. Is it going to be two to three years or 10-15 years? When people forecast they always make a mistake on the timeline because they overestimate the impact of new technology in the short term and underestimate in the long term.    

 

LightCounting highlights strong sales in 10 Gig and 40 Gig within the data centre but not at 100 Gig. Why?

If you look at the spine and leaf architecture, most of the connections are 10 Gig, broken out from 40 Gig optical modules. This will begin to change as native 40GbE ramps in the larger data centres.

If you go to the super-spine, which takes data from aggregation to the data centre's core switches, 100GbE could be used, and I'm sure some companies like Google are using 100GbE today. But the numbers are probably three orders of magnitude lower than in the spine and leaf layers. The volume demand for 100GbE today is not that high, and it also relates to the high price of the modules.

Higher volumes reduce the price, but then the complexity and size of the [100 Gig CFP] modules need to be reduced as well. With 10 Gig, the major [cost-reduction] milestone was the transition to a 10 Gig electrical interface. The same has to happen with 100 Gig, with the transition to a 4x25Gbps electrical interface, but it is a big transition. Again, forget about it happening in two to three years; think rather of a five- to 10-year time frame.

 

I suspect that one reason for Google's offering of 1Gbps FTTH services to a few communities in the U.S. is to find out what these new applications are, by studying end-user demand

 

You also point out the failure of the IEEE working group to come up with a 100 GbE solution for the 500m-reach sweet spot. What will be the consequence of this?  

The IEEE is talking about 400GbE standards now. Go back to 40GbE, which was approved only some three years ago; the majority of the IEEE was against having 40GbE at all, the objective being to go straight to 100GbE and skip 40GbE altogether. At the last moment, a couple of vendors pushed 40GbE through. And look at 40GbE now, it is [deployed] all over the place: the industry is happy, suppliers are happy and customers are happy.

Again, look at 40GbE, which has a standard at 10km. If you look at what is being shipped today, only 10 percent of 40GBASE-LR4 modules are compliant with the standard. The rest of the volume is 2km parts – substandard devices that use Fabry-Perot instead of DFB (distributed feedback) lasers. The yields are higher and customers love them because they cost one tenth as much. The market has found its own solution.

The same thing could happen at 100 Gig. And then there is Cisco Systems with its own agenda. It has just announced a 40 Gig BiDi connection which is another example of what is possible.

 

What will LightCounting be watching in 2014?

One primary focus is what wireline revenues service providers will report, particularly additional revenues generated by FTTx services.

AT&T and Verizon reported very good results in Q3 [2013] and I'm wondering if this is the start of a longer trend. As wireline revenues from FTTx pick up, carriers will have more of an incentive to invest in supporting those services.

AT&T and Verizon customers are willing to pay a little more for faster connectivity today, but it really takes new applications to develop for end-user spending on bandwidth to jump to the next level. Some of these applications are probably emerging, but we do not know what they are yet. I suspect that one reason for Google's offering of 1Gbps FTTH services to a few communities in the U.S. is to find out what these new applications are, by studying end-user demand.

A related issue is whether deployments of broadband services improve economic growth and by how much. The expectations are high but I would like to see more data on this in 2014.


Optical transport to grow at a 10% CAGR through 2017

  • Global optical transport market to reach US $13bn in 2017
  • 100 Gigabit to grow at a 75% CAGR

 

"I won't be surprised if it [100 Gig] grows even faster"

Jimmy Yu, Dell'Oro Group

 
The Dell'Oro Group forecasts that the global optical transport market will grow to US $13 billion in 2017, equating to a 10-percent compound annual growth rate (CAGR).
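As a rough illustration of the compounding arithmetic behind the forecast: reaching $13 billion in 2017 at a 10 percent CAGR implies a base of roughly $8 billion five years earlier. The base-year figure is not stated in the article and is back-derived here purely for illustration.

```python
def implied_base(final_value, cagr, years):
    """Back out the starting value implied by a final value and a CAGR."""
    return final_value / (1 + cagr) ** years

# $13bn in 2017 at a 10% CAGR implies a ~$8.1bn base five years earlier
# (an assumed 2012 base year, for illustration only).
base = implied_base(13.0, 0.10, 5)
print(round(base, 2))  # 8.07
```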

In 2012 SONET/SDH sales declined by over 20 percent, greater than Dell'Oro expected, while wavelength-division multiplexing (WDM) equipment sales held their own.

 

Regions

Dell'Oro expects optical transport growth across all the main regions, with no one region dominating. The market research company does foresee greater growth in Europe given the prolonged underspend of recent years.

European operators are planning broadband access investment such as fibre-to-the-cabinet/VDSL vectoring as well as fibre-to-the-home. "That will drive demand for backhaul bandwidth and that is where WDM fits in well," says Jimmy Yu, vice president, microwave transmission, mobile backhaul and optical transport at Dell'Oro.

 

Technologies

Forty and 100 Gigabit optical transport will be the main WDM growth areas through 2017. Yu expects 40 Gigabit demand to grow over the forecast period even if the growth rate will taper off due to demand for 100 Gigabit.

The 100 Gigabit market continues to exceed Dell'Oro's forecast growth. The market research company predicts 100Gbps wavelength shipments will grow at a 75 percent CAGR over the next five years, accounting for 60 percent of WDM capacity shipments by 2017. "I won't be surprised if it [100 Gig] grows even faster," says Yu.

"A lot of people wonder why have 40 Gig when there is 100 Gig? But that granularity does help service providers; having 40 Gig and 100 Gig rather than going straight from 10 Gig to 100 Gig," says Yu. The 100 Gig sales span metro and long-haul networks with the latter generating greater revenue due to the systems being pricier. "Forty Gigabit sales were predominantly long haul originally but we are seeing a good chunk of growth in metro as well," says Yu. 

The current forecast does not include 400Gbps optical transport sales though Yu does expect sales to start in 2016.

Dell'Oro is seeing sales of 100 Gigabit direct detection but says it will remain a niche market. "We are talking tens of [shipped] units a quarter," says Yu.

There are applications where customers will need links of 80km or several hundred kilometers and will want the lowest cost solution, says Yu: "There is a market for direct detection; it will not be a significant driver for 100 Gig but it will be there."

