ECOC reflections: final part
Gazettabyte asked several attendees at the recent ECOC show, held in Cannes, to comment on key developments and trends they noted, as well as the issues they will track in the coming year.
Dr. Ioannis Tomkos, Fellow of OSA & Fellow of IET, Athens Information Technology Center (AIT)
With ECOC 2014 celebrating its 40th anniversary, the technical programme committee did its best to mark the occasion. For example, at the anniversary symposium, notable speakers presented the history of optical communications. Actual breakthroughs discussed during the conference sessions were limited, however.
Ioannis Tomkos
It appears that after a period of significant advances from 2008 to 2012, the industry is now more mainstream, and significant shifts in technologies are limited. It is clear that the original focus four decades ago on novel photonics technologies is long gone. Instead, there is more and more of a focus on high-speed electronics, signal processing algorithms, and networking. These have little to do with photonics even if they greatly improve the overall efficient operation of optical communication systems and networks.
Coherent detection technology is making its way into metro, with commercial offerings becoming available, while in academia it is also discussed as a possible solution for future access network applications where long reach, very high power budgets and high bit rates per customer are required. However, this will only happen if someone can come up with cost-effective implementations.
Advanced modulation formats and the associated digital signal processing are now well established for ultra-high capacity spectral-efficient transmission. The focus is now on forward-error-correction codes and their efficient implementations to deliver the required differentiation and competitive advantage of one offering versus another. This explains why so many of the relevant sessions and talks were so well attended.
There were several dedicated sessions covering flexible/ elastic optical networking. It was also mentioned in the plenary session by operator Orange. It looks like a field that started only five years ago is maturing, and people are now convinced about the significant short-term commercial potential of related solutions. Regarding the latest research efforts in this field, people have realised that flexible networking using spectral super-channels will offer the most benefit if it becomes possible to access the contents of the super-channels at intermediate network locations/ nodes. To achieve that, besides traditional traffic grooming approaches such as those based on OTN, there were also several ground-breaking presentations proposing all-optical techniques to add/ drop sub-channels out of the super-channel.
Progress made so far on long-haul high-capacity space-division-multiplexed systems, as reported in a tutorial, invited talks and some contributed presentations, is amazing, yet the potential for wide-scale deployment of such technology was discussed by many as being at least a decade away. Certainly, this research generates a lot of interesting know-how, but the impact on the industry might come with a long delay, after flexible networking and terabit transmission become mainstream.
Much attention was also given at ECOC to the application of optical communications in data centre networks, from data-centre interconnection to chip-to-chip links. There were many dedicated sessions and all were well attended.
Besides short-term work on high-bit-rate transceivers, there is also much effort towards novel silicon photonic integration approaches for realising optical interconnects, space-division-multiplexing approaches that will surely first find their way into data centres, and even efforts related to the application of optical switching in data centres.
At the networking sessions, the buzz was around software-defined networking (SDN) and network functions virtualisation (NFV) now at the top of the “hype-cycle”. Both technologies have great potential to disrupt the industry structure, but scientific breakthroughs are obviously limited.
As for my interests going forward, I intend to look for more developments in the field of mobile traffic front-haul/ back-haul for the emerging 5G networks, as well as optical networking solutions for data centres since I feel that both markets present significant growth opportunities for the optical communications/ networking industry and the ECOC scientific community.
Dr. Jörg-Peter Elbers, vice president advanced technology, CTO Office, ADVA Optical Networking
The top topics at ECOC 2014 for me were elastic networks covering flexible grid, super-channels and selectable higher-order modulation; transport SDN; 100-Gigabit-plus data centre interconnects; mobile back- and front-hauling; and next-generation access networks.
For elastic networks, an optical layer with a flexible wavelength grid has become the de-facto standard. Investigations on the transceiver side are focussed not just on increasing the spectral efficiency, but also on increasing the symbol rate as a prospect for lowering the number of carriers for 400-Gigabit-plus super-channels, and the cost, while maintaining the reach.
Jörg-Peter Elbers
As we approach the Shannon limit, spectral efficiency gains are becoming limited. More papers were focussed on multi-core and/or few-mode fibres as a way to increase fibre capacity.
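The Shannon-limit point can be illustrated with a quick calculation: the bound caps spectral efficiency at log2(1 + SNR), so each additional bit/s/Hz eventually costs about 3 dB more SNR. A minimal sketch with illustrative SNR values (my own figures, not from the article):

```python
import math

def spectral_efficiency_limit(snr_db: float) -> float:
    """Shannon bound on spectral efficiency in bit/s/Hz for a given SNR."""
    snr = 10 ** (snr_db / 10)      # convert dB to linear
    return math.log2(1 + snr)

# Doubling spectral efficiency demands far more than double the SNR:
for snr_db in (10, 13, 16, 19, 22):
    print(f"{snr_db:2d} dB SNR -> {spectral_efficiency_limit(snr_db):.2f} bit/s/Hz")
```

The diminishing returns at high SNR are why researchers turn to multi-core and few-mode fibres rather than ever-denser constellations.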
Transport SDN work is focussing on multi-tenancy network operation and multi-layer/ multi-domain network optimisation as the main use cases. Due to a lack of a standard for north-bound interfaces and a commonly agreed information model, many published papers are relying on vendor-specific implementations and proprietary protocol extensions.
Direct detect technologies for 400 Gigabit data centre interconnects are a hot topic in the IEEE and the industry. Consequently, there were a multitude of presentations, discussions and demonstrations on this topic with non-return-to-zero (NRZ), pulse amplitude modulation (PAM) and discrete multi-tone (DMT) being considered as the main modulation options. 100 Gigabit per wavelength is a desirable target for 400 Gig interconnects, to limit the overall number of parallel wavelengths. The obtainable optical performance on long links, specifically between geographically-dispersed data centres, though, may require staying at 50 Gig wavelengths.
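The arithmetic behind the modulation options is simple: line rate equals symbol rate times bits per symbol, so reaching 100 Gig on one wavelength needs either a very high baud rate or more levels per symbol. A minimal sketch (the baud rates are illustrative, not from the article):

```python
import math

def line_rate_gbps(baud_gbd: float, levels: int) -> float:
    """Gross line rate: symbol rate times bits per symbol (log2 of PAM levels)."""
    return baud_gbd * math.log2(levels)

print(line_rate_gbps(25.0, 2))   # NRZ (2 levels) at 25 Gbaud -> 25 Gbit/s
print(line_rate_gbps(50.0, 4))   # PAM-4 at 50 Gbaud -> 100 Gbit/s per wavelength
```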
In mobile back- and front-hauling, people increasingly recognise the timing challenges associated with LTE-Advanced networks and are looking at WDM-based networks as solutions. In the next-generation access space, components and solutions around NG-PON2 and its evolution gained most interest. Low-cost tunable lasers are a prerequisite and several companies are working on such solutions, with some of them presenting results at the conference.
Questions around the use of SDN and NFV in optical networks beyond transport SDN point to the access and aggregation networks as a primary application area. The capability to programme the forwarding behaviour of the networks, and place and chain software network functions where they best fit, is seen as a way of lowering operational costs, increasing network efficiency and providing service agility and elasticity.
What did I learn at the show/ conference? There is a lot of development in optical components, leading to innovation cycles not always compatible with those of routers and switches. Meanwhile, the cost, density and power consumption of short-reach interconnects are continually improving, and these performance metrics are all better than what can be achieved with line interfaces. This raises the question of whether separating the photonic layer equipment from the electronic switching and routing equipment is a better approach than building integrated multi-layer god-boxes.
There were no notable new trends or surprises at ECOC this year. Most of the presented work continued and elaborated on topics already identified.
As for what we will track closely in the coming year, all of the above developments are of interest. Inter-data centre connectivity, WDM-PON and open programmable optical core networks are three to mention in particular.
For the first ECOC reflections, click here
Mobile backhaul chips rise to the LTE challenge
The Long Term Evolution (LTE) cellular standard has a demanding set of mobile backhaul requirements. Gazettabyte looks at two different chip designs for LTE mobile backhaul, from PMC-Sierra and from Broadcom.
"Each [LTE Advanced cell] sector will be over 1 Gig and there will be a need to migrate the backhaul to 10 Gig"
Liviu Pinchas, PMC-Sierra
LTE is placing new demands on the mobile backhaul network. The standard, with its use of macro and small cells, increases the number of network end points, while the more efficient bandwidth usage of LTE is driving strong mobile traffic growth. Smartphone mobile data traffic is forecast to grow by a factor of 19 globally from 2012 to 2017, a compound annual growth rate of 81 percent, according to Cisco's visual networking index global mobile data traffic forecast.
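The two Cisco figures are mutually consistent, as a quick check shows (a sketch of the compounding, not taken from the forecast itself):

```python
cagr = 0.81           # 81 percent compound annual growth rate
years = 5             # 2012 to 2017
growth = (1 + cagr) ** years
print(f"{growth:.1f}x")   # roughly the forecast 19x increase
```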
Mobile network backhaul links are typically 1 Gigabit. The advent of LTE does not require an automatic upgrade since each LTE cell sector is about 400Mbps, such that with several sectors, the 1 Gigabit Ethernet (GbE) link is sufficient. But as the standard evolves to LTE Advanced, the data rate will be 3x higher. "Each sector will be over 1 Gig and there will be a need to migrate the backhaul to 10 Gig," says Liviu Pinchas, director of technical marketing at PMC.
One example of LTE's more demanding networking requirements is the need for Layer 3 addressing and routing rather than just Layer 2 Ethernet. LTE base stations, known as eNodeBs, must be linked to their neighbours for call handover between radio cells. To do this efficiently requires IP (IPv6), according to PMC.
The chip makers must also take into account system design considerations.
Equipment manufacturers make several systems for the various backhaul media that are used: microwave, digital subscriber line (DSL) and fibre. The vendors would like common silicon and software that can be used for the various platforms.
Broadcom highlights how reducing the board space used is another important design goal, given that backhaul chips are now being deployed in small cells. An integrated design reduces the total integrated circuits (ICs) needed on a card. A power-efficient chip is also important due to thermal constraints and the limited power available at certain sites.
"Integration itself improves system-level power efficiency," says Nick Kucharewski, senior director for Broadcom’s infrastructure and networking group. "We have taken several external components and integrated them in one device."
WinPath4
PMC's WinPath4 supports existing 2G and 3G backhaul requirements, as well as LTE small and macro cells. A cell-site router that previously served one macrocell will now have to serve one macrocell and up to 10 small cells, says PMC. This means everything is scaled up: a larger routing table, more users and more services.
To support LTE and LTE Advanced, WinPath4 has added additional programmable packet processors - WinGines - and hardware accelerators to meet new protocol requirements and the greater data throughput.
The previous-generation 10Gbps WinPath3 has up to 12 WinGines. WinGines are multi-threaded processors, with each thread handling the processing of a packet. Tasks performed include receiving, classifying, modifying, shaping and transmitting a packet.
The 40Gbps WinPath4 uses 48 WinGines and micro-programmable hardware accelerators for such tasks as packet parsing, packet header extraction and traffic matching, tasks too processing-intensive for the WinGines.
WinPath4 also supports tables with up to two million IP destination addresses, up to 48,000 queues with four levels of hierarchical traffic shaping, encryption engines to implement the IP Security (IPsec) protocol, and the IEEE 1588v2 timing protocol.
Two MIPS processor cores are used for the control tasks, such as setting up and removing connections.
WinPath4 also supports the emerging software-defined networking (SDN) standard that aims to enhance network flexibility by making underlying switches and routers appear as virtual resources. For OpenFlow, the open standard used for SDN, the processor acts as a switching element, with the MIPS cores used to decode the OpenFlow commands.
StrataXGS BCM56450
Broadcom says its latest device, the BCM56450, will support the transition from 1GbE to 10GbE backhaul links, and the greater number of cells needed for LTE.
The BCM56450 will be used in what Broadcom calls the pre-aggregation network. This is a first level of aggregation in the wireline network that connects the radio access network's macro and small cells.
Pre-aggregation connects to the aggregation network, defined by Broadcom as having 10GbE uplinks and 1GbE downlinks. The BCM56450 meets these requirements but is referred to as a pre-aggregation device since it also supports slower links such as microwave or Fast Ethernet.
The BCM56450 is a follow-on to Broadcom's 56440 device announced two years ago. The BCM56450 upgrades the switching capacity to 100 Gigabit and doubles the size of the Layer 2 and Layer 3 forwarding tables.
The BCM56450 is one of a family of devices offering aggregation, from the edge through to 100GbE links deep in the network.
The network edge BCM56240 has 1GbE links and is designed for small cell applications, microwave units and small outdoor units. The 56450 is next in terms of capacity, aggregating the uplinks from the 240 device or linking directly to the backhaul end points.
The uplinks of the 56450 are 10GbE interfaces and these can be interfaced to the third family member, the BCM56540. The 56540, announced half a year ago, supports 10GbE downlinks and up to 40GbE uplinks.
The largest device, the BCM56640, used in large aggregation platforms takes 10GbE and 40GbE inputs and has the option for 100GbE uplinks for subsequent optical transport or routing. The 56640 is classed as a broadband aggregation device rather than just for mobile.
Features of the BCM56450 include support for MPLS (MultiProtocol Label Switching) and Ethernet OAM (operations, administration and maintenance), QoS and hardware protection switching. OAM performs such tasks as checking the link for faults, as well as performing link delay and packet loss measurements. This enables service providers to monitor the quality of the network's links. The device also supports the 1588 timing protocol used to synchronise the cell sites.
Another chip feature is sub-channelisation over Ethernet that allows the multiplexing of many end points into an Ethernet link. "We can support a higher number of downlinks than we have physical serdes on the device by multiplexing the ports in this way," says Kucharewski.
The on-chip traffic manager can also use additional, external memory if increasing the system's packet buffering size is needed. Additional buffering is typically required when a 10GbE interface's traffic is streamed to lower speed 1GbE or a Fast Ethernet port, or when the traffic manager is shaping multiple queues that are scheduled out of a lower speed port.
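The buffering requirement follows from the rate mismatch: the queue grows at the difference between input and output rates for the duration of a burst. A rough sizing sketch with illustrative numbers (not Broadcom's figures):

```python
def burst_buffer_bytes(in_gbps: float, out_gbps: float, burst_ms: float) -> float:
    """Bytes of buffering needed to absorb a burst at the given rate mismatch."""
    fill_rate_bps = (in_gbps - out_gbps) * 1e9   # bits/s the queue grows
    return fill_rate_bps * (burst_ms / 1e3) / 8  # bits -> bytes

# A 10 ms burst arriving on a 10GbE input, draining into a 1GbE port:
print(f"{burst_buffer_bytes(10, 1, 10) / 1e6:.2f} MB")
```

Even short bursts at a 9 Gbps mismatch quickly exceed on-chip memory, which is why the traffic manager can spill to external memory.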
The BCM56450 integrates a dual-core ARM Cortex-A9 processor to configure and control the Ethernet switch and run the control plane software. The chip also has 10GbE serdes enabling the direct interfacing to optical transceivers.
Analysis
The differing nature of the two devices - the WinPath4 is a programmable chip whereas Broadcom's is a configurable Ethernet switch - means that the WinPath4 is more flexible. However, the greater throughput of the BCM56450 - at 100Gbps - makes it more suited to Carrier Ethernet switch router platforms. So says Jag Bolaria, a senior analyst at The Linley Group.
The WinPath4 also supports legacy T1/E1 TDM traffic, whereas Broadcom's BCM56450 supports Ethernet backhaul only.
The Linley Group also argues that the WinPath4 is more attractive for backhaul designers needing SDN OpenFlow support, given the chip's programmability and larger forwarding tables.
The WinPath4 and the BCM56450 are available in sample form. Both devices are expected to be generally available during the first half of 2014.
Further reading:
A more detailed piece on the WinPath4 and its protocol support is in New Electronics. Click here
The Linley Group: Networking Report, "Broadcom focuses on mobile backhaul", July 22nd, 2013. Click here (subscription is required)
OFC/NFOEC 2013 industry reflections - Final part
Gazettabyte spoke with Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking about the state of the optical industry following the recent OFC/NFOEC exhibition.

"There were many people in the OFC workshops talking about getting rid of pluggability and the cages and getting the stuff mounted on the printed circuit board instead, as a cheaper, more scalable approach"
Jörg-Peter Elbers, ADVA Optical Networking
Q: What was noteworthy at the show?
A: There were three big themes and a couple of additional ones that were evolutionary. The headlines I heard most were software-defined networking (SDN), Network Functions Virtualisation (NFV) and silicon photonics.
Other themes include what needs to be done for next-generation data centres to drive greater capacity interconnect and switching, and how do we go beyond 100 Gig and whether flexible grid is required or not?
The consensus is that flex grid is needed if we want to go to 400 Gig and one Terabit. Flex grid gives us the capability to form bigger pipes and get those chunks of signals through the network. But equally it allows not only one interface to transport 400 Gig or 1 Terabit as one chunk of spectrum, but also the possibility to slice and dice the signal so that it can use holes in the network, similar to what radio does.
With the radio spectrum, you allocate slices to establish a communication link. In optics, you have the optical fibre spectrum and you want to get the capacity between Point A and Point B. You look at the spectrum, where the holes [spectrum gaps] are, and then shape the signal - think of it as software-defined optics - to fit into those holes.
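Finding a hole for a sliced signal is, in essence, a spectrum-slot allocation problem; flex-grid research commonly models it as first-fit assignment over a slot occupancy map. A toy sketch (the grid and slot counts are my own illustration, not from the interview):

```python
def first_fit(occupied: list, slots_needed: int):
    """Return the start index of the first contiguous run of free spectrum
    slots wide enough for the signal, or None if no hole fits (first-fit)."""
    run_start, run_len = 0, 0
    for i, busy in enumerate(occupied):
        if busy:
            run_start, run_len = i + 1, 0
        else:
            run_len += 1
            if run_len == slots_needed:
                return run_start
    return None

# A fibre's flex-grid occupancy map: True = slot in use.
grid = [True, False, False, True, False, False, False, True]
print(first_fit(grid, 3))  # the three-slot hole starts at index 4
print(first_fit(grid, 4))  # None: no hole is wide enough, so slice the signal
```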
There is a lot of SDN activity. People are thinking about what it means, and there were lots of announcements, experiments and demonstrations.
At the same time as OFC/NFOEC, the Open Networking Foundation agreed to found an optical transport work group to come up with OpenFlow extensions for optical transport connectivity. At the show, people were looking into use cases, the respective technology and what is required to make this happen.
SDN starts at the packet layer but there is value in providing big pipes for bandwidth-on-demand. Clearly with cloud computing and cloud data centres, people are moving from a localised model to a cloud one, and this adds merit to the bandwidth-on-demand scenario.
This is probably the biggest use case for extending SDN into the optical domain through an interface that can be virtualised and shared by multiple tenants.
"This is not the end of III-V photonics. There are many III-V players, vertically integrated, that have shown that they can integrate and get compact, high-quality circuits"
Network Functions Virtualisation: Why was that discussed at OFC?
At first glance, it was not obvious. But looking at it in more detail, much of the infrastructure over which those network functions run is optical.
Just take one Network Functions Virtualisation example: the mobile backhaul space. If you look at LTE/ LTE Advanced, there is clearly a push to put in more fibre and more optical infrastructure.
At the same time, you still have a bandwidth crunch. It is very difficult to have enough bandwidth to the antenna to support all the users and give them the quality of experience they expect.
Putting networking functions such as caching at a cell site, deeper within the network, and managing a virtualised session there, is an interesting trend that operators are looking at, and which we, with our partnership with Saguna Networks, have shown a solution for.
Network functions such as caching, firewalling and wide area network (WAN) optimisation are higher-layer functions. But as you virtualise them, the network infrastructure needs to adapt dynamically.
You need orchestration that combines the control and the co-ordination of the networking functions. This is more IT infrastructure - server-based blades and open-source software.
Then you have SDN underneath, supporting changes in the traffic flow with reconfiguration of the network infrastructure.
There was much discussion about the CFP2 and Cisco's own silicon photonics-based CPAK. Was this the main silicon photonics story at the show?
There is much interest in silicon photonics not only for short reach optical interconnects but more generally, as an alternative to III-V photonics for integrated optical functions.
For light sources and amplification, you still need indium phosphide and you need to think about how to combine the two. But people have shown that even in the core network you can get decent performance at 100 Gig coherent using silicon photonics.
This is an interesting development because such a solution could potentially lower cost, simplify thermal management, and from a fab access and manufacturing perspective, it could be simpler going to a global foundry.
But a word of caution: there is big hype here too. This is not the end of III-V photonics. There are many III-V players, vertically integrated, that have shown that they can integrate and get compact, high-quality circuits.
You mentioned interconnect in the data centre as one evolving theme. What did you mean?
The capacities inside the data centre are growing much faster than the WAN interconnects. That is not surprising because people are trying to do as much as possible in the data centre because WAN interconnect is expensive.
People are looking increasingly at how to integrate the optics and the server hardware more closely. This is moving beyond the concept of pluggables all the way to mounted optics on the board or even on-chip to achieve more density, less power and less cost.
There were many people in the OFC workshops talking about getting rid of pluggability and the cages and getting the stuff mounted on the printed circuit board instead, as a cheaper, more scalable approach.
"Right now we are running 28 Gig on a single wavelength. Clearly with speeds increasing and with these kind of developments [PAM-8, discrete multi-tone], you see that this is not the end"
What did you learn at the show?
There wasn't anything that was radically new. But there were some significant silicon photonics demonstrations. That was the most exciting part for me although I'm not sure I can discuss the demos [due to confidentiality].
Another area we are interested in revolves around the ongoing IEEE work on short reach 100 Gigabit serial interfaces. The original objective was 2km but they have now homed in on 500m.
PAM-8 - pulse amplitude modulation with eight levels - is one of the proposed solutions; another is discrete multi-tone (DMT). [With DMT] using a set of electrical sub-carriers and doing adaptive bit loading means that even with bandwidth-limited components, you can transmit over the required distances. There was a demo at the exhibition from Fujitsu Labs showing DMT over 2km using a 10 Gig transmitter and receiver.
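The adaptive bit loading Elbers describes assigns more bits to sub-carriers with higher SNR, commonly modelled via the SNR-gap approximation b = floor(log2(1 + SNR/Γ)). A simplified sketch with illustrative per-tone SNRs (my own assumptions, not the Fujitsu Labs implementation):

```python
import math

def bit_loading(snr_db_per_tone, gap_db=9.8):
    """Bits per DMT sub-carrier via the SNR-gap approximation:
    b_i = floor(log2(1 + SNR_i / Gamma))."""
    gap = 10 ** (gap_db / 10)
    bits = []
    for snr_db in snr_db_per_tone:
        snr = 10 ** (snr_db / 10)
        bits.append(max(0, math.floor(math.log2(1 + snr / gap))))
    return bits

# Bandwidth-limited channel: strong SNR at low frequencies, rolling off above.
tones = [30, 27, 24, 18, 12, 6]   # per-tone SNR in dB (illustrative)
print(bit_loading(tones))         # more bits loaded onto the strong tones
```

This is how DMT exploits bandwidth-limited components: weak tones carry few or no bits while strong tones carry many, so the aggregate rate survives the roll-off.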
This is of interest to us as we have a 100 Gigabit direct detection dense WDM solution today and are working on the product evolution.
We use the existing [component/ module] ecosystem for our current direct detect solution. These developments bring up some interesting new thoughts for our next generation.
So you can go beyond 100 Gigabit direct detection?
Right now we are running 28 Gig on a single wavelength. Clearly with speeds increasing and with these kind of developments [PAM-8, DMT], you see that this is not the end.
Part 1: Software-defined networking: A network game-changer, click here
Part 2: OFC/NFOEC 2013 industry reflections, click here
Part 3: OFC/NFOEC 2013 industry reflections, click here
Part 4: OFC/NFOEC industry reflections, click here
Melding networks to boost mobile broadband
In a Q&A, Bryan Kim, manager at SK Telecom's Core Network Lab, discusses the mobile operator's heterogeneous network implementation and the service benefits.
SK Telecom has developed an enhanced mobile broadband service that combines two networks: 3G and Wi-Fi or Long Term Evolution (LTE) and Wi-Fi. The mobile operator will launch the 3G/ Wi-Fi heterogeneous network service in the second quarter of 2012 to achieve a maximum data rate of 60 Megabits-per-second (Mbps), while the LTE and Wi-Fi integrated service will be offered in 2013, enabling up to a 100Mbps wireless Internet service.
Q. What exactly has SK Telecom developed?
A. SK Telecom has developed a technology that provides subscribers with a faster data service by using two different wireless networks simultaneously. For instance, customers can enjoy a much faster video streaming service supported by either 3G and Wi-Fi, or LTE and Wi-Fi networks.
To benefit, a handset must use two radio frequencies at the same time. We have also built a system that is installed in the core network for simultaneous transmission.
"If it takes 10s to download a 10MB file using a 3G network and 5s to download the same file using the heterogeneous solution, the impact on the battery life is the same."
Bryan Kim, SK Telecom
Q. LTE-Advanced is standardising heterogeneous networking. This suggests that what SK Telecom has done is pre-standard and proprietary. What have you done that is different to the emerging standard?
A. SK Telecom is not talking about LTE-Advanced technology. This is a technology that enables simultaneous use of heterogeneous wireless networks we’ve deployed.
Q. What are the technical challenges involved in implementing a heterogeneous network?
A. It is technically difficult to realise the technology as it involves using networks with different characteristics in terms of speed and latency. At the same time, the technology is designed to minimise the changes required to the existing networks.
There have not really been challenges when linking the two separate networks, but it is always a challenge to analyse the real-time network status to provide fast data services.
Q. What impact will simultaneous heterogeneous network operation have on a smartphone's battery life?
A. Using the heterogeneous network integration solution does increase the battery consumption: the device is using two radio frequencies. However, from a customer's perspective, if it takes 10s to download a 10MB file using a 3G network and 5s to download the same file using the heterogeneous solution, the impact on the battery life is the same.
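Kim's battery argument is straightforward energy arithmetic: roughly twice the power for half the time draws the same energy. A quick check with an illustrative 1W radio power figure (my assumption, not an SK Telecom number):

```python
def download_energy_j(power_w: float, seconds: float) -> float:
    """Energy drawn from the battery for one download: power x time."""
    return power_w * seconds

single_radio = download_energy_j(1.0, 10)  # one radio active for 10 s
dual_radio = download_energy_j(2.0, 5)     # two radios active for 5 s
print(single_radio, dual_radio)            # equal energy per download
```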
SK Telecom also plans to apply a scanning algorithm for selecting qualified Wi-Fi networks.
Q. What services can SK Telecom see benefiting from having a 3G/ LTE network combined with a Wi-Fi network?
A. Customers will experience greater convenience when using multimedia services and network games, for example, with increased available bandwidth.
Heavy users tend to consume a lot of video services through mobile broadband. With this solution, SK Telecom will be providing faster data services to customers compared to when using only one network. This will enhance data service markets. The company has no plans for now to provide services directly.
Q. What mobile services come close to using 60Mbps or 100Mbps?
A. The 60Mbps and 100Mbps are theoretical maximum speeds. People who sign up for a 100Mbps fixed-line network service rarely experience the 100Mbps speed. With this technology, SK Telecom aims to increase the amount of wireless network resources for subscribers by using two different types of networks in a simultaneous manner, which in turn will boost the services that require wider bandwidth including video streaming service and network games.
Q. With a combination of Wi-Fi and cellular, most operators want to get traffic off the cellular network onto the ‘hot spot’. Does SK Telecom really want to fill their cellular network by providing higher speeds?
A. From the customer’s perspective, a Wi-Fi network offers narrow coverage and small capacity and since it is not a managed network, wireless data access is made upon request from customers. Thus, data offloading often does not work as intended by the mobile carriers.
In contrast, cellular networks provide national coverage so if there is an available Wi-Fi network to add to the cellular network, we can simultaneously use the cellular and Wi-Fi networks to offer a data service. By doing so customers will enjoy greater speed data services and mobile operators will be able to naturally offload data.
Is wireless becoming a valid alternative to fixed broadband?
Are wireless technologies such as Long Term Evolution (LTE) and WiMAX2 closing the gap on fixed broadband?
A recent blog by The Economist discussed how Long Term Evolution (LTE) is coming to the rescue of one of its US correspondents, located 5km from the DSL cabinet and struggling to get a decent broadband service.

Peak rates are rarely achieved: the mobile user needs to be very close to a base station and a large spectrum allocation is needed.
Mark Heath, Unwired Insights
The correspondent makes some interesting points:
- The DSL link offered a download speed of 700kbps at best while Verizon's FiOS passive optical networking (PON) service is not available as an alternative.
- The correspondent upgraded to an LTE handset service that enabled up to eight PCs and laptops to achieve a 15-20x download speed improvement.
The blog suggests that wireless data is becoming fast enough to address users' broadband needs.
But is LTE broadband now good enough? Mark Heath, a partner at telecom consultancy, Unwired Insight, is skeptical: "Is the gap between landline and wireless broadband narrowing? I'm not convinced."
Peak wireless rates, and in particular LTE, may suggest that wireless is now a substitute for fixed. But peak rates are rarely achieved: the mobile user needs to be very close to a base station and a large spectrum allocation is needed.
"While peak rates on mobile look to be increasing exponentially, average throughput per base station and base station capacities are increasing at a much more modest rate," says Heath. Hence the operator and vendor focus on LTE Advanced, as well as much bigger spectrum allocations and the use of heterogeneous networks.
The advantage of landline broadband quality, in contrast, is that it does not suffer the degradation of a busy cell. There is much less disparity between peak rates and sustainable average throughputs with fixed broadband.
If fixed has advantages, it still requires operators to make the relevant investment, particularly in rural areas. "Wireless is better than nothing in rural areas," says Heath. But the gap between fixed and mobile isn't shrinking as much as peak data rates suggest.
Yet mobile networks do have a trump card: wide area mobility. With the increasing number of people dependent on smartphones, iPads and devices like the Kindle Fire, an ever increasing value is being placed on mobile broadband.
So if fixed broadband is keeping its edge over wireless, just what future services will drive the need for fixed's higher data rates?
This is a topic to be explored as part of the upcoming next-generation PON feature.
Further reading:
broadbandtrends: The Fixed versus mobile broadband conundrum, click here
Reflections and predictions: 2011 & 2012 - Part 1

"For 2012, the macroeconomy is likely to dominate any other developments"
Martin Geddes, telecom consultant @martingeddes
Sometimes the important stuff is slow-burning: we're seeing a continued decline in the traditional network equipment providers, and the rise of Genband, Acme, Sonus and Metaswitch in their place. They are smaller, leaner, and more used to serving Tier 2 and Tier 3 operators and enterprise players and their lower cost structures.
The recognition of the decline of SMS and telephony became mainstream in 2011 -- maybe I can close down my Telepocalypse blog as what I foresaw is reality.
We've seen absolute declines in revenue and usage of telco voice and messaging in leading markets like Norway and the Netherlands. The creation of Telefonica Digital is a landmark reorganisation around new markets. No longer are those initiatives endlessly parked in business development whilst marketing dreams up a new price plan for minutes, messages and megabytes.
If I had to pick one thing to characterise 2011, it was the year of the App.
For 2012, the macroeconomy is likely to dominate any other developments. The scenarios are "distress", "meltdown" and "collapse".
Telecoms is well-placed to weather the storm. Even £600 smartphones may remain in vogue as people defer purchases like cars and holidays, and hide their fiscal distress with status symbols hewn out of pure blocks of profit.
Voice will be much more prominent, after decades of languishing, as LTE sets up a complex dynamic of service innovation driven by over-the-top applications - which will increasingly come from telcos as well as telecoms outsiders. Microsoft's purchase of Skype is the one to watch - if they get it right, it joins Windows and Office in the hall of fame; get it wrong, and Microsoft is probably out of the smartphone game due to a lack of competitive differentiation and advantage.
So 2012 is the year when (mobile) voice gets vocal again - because we're going to have a lot to talk about, and want to do it much cheaper and better.
Brandon Collings, CTO for communications and commercial optical products at JDS Uniphase
Over the course of 2011, the tunable XFP shipped in volume and rather quickly supplanted the 300-pin transceiver. On the service/ market trend side, over-the-top consumer video (Netflix) grew rapidly to become the dominant traffic on the internet.
"Solutions for the next generation ROADM networks - self aware networks - are now firm"
I expect the maturation of 100 Gigabit to continue through 2012 with the introduction of a number of new 100 Gigabit solutions, both from network equipment makers and at the transceiver level.
Also, while the percentage of consumers using over-the-top video still seems to be relatively small, it is growing strongly and already dominates internet traffic. It will be interesting to see how this trend continues, as it strongly drives bandwidth demand yet comes with potentially unfavourable revenue models for the network operators who need to deliver it.
Lastly, I expect that as the solutions for the next generation ROADM networks - self aware networks - are now firm, the practical assessment of the value and advantages of these networks can quantitatively take place.
Eve Griliches, managing partner, ACG Research @EveGr
The Juniper PTX announcement really caught the market by surprise. I'm not sure why, but clearly it rocked some folks back on their heels. Momentum for the product has been good as well. I think you can count this as a success story.
Another one is the Infinera 500Gbps release with super-channels. It is a pretty impressive technology, and service providers are waiting for the final product to test.
The death of Steve Jobs rattled us all. I think it struck a note for everyone in how different he was and how he touched us all.
"Content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening."
I continue to be amazed at how much optical equipment content providers [the Googles, Facebooks, MSNs of this world] are deploying and how few folks at the vendor level are doing anything about getting into their networks. Maybe that is a 2012 thing, I don't know.
As for 2012, we'll definitely see some mergers and acquisitions - expect low acquisition prices too - and some companies exiting this market. I love optics and it really pains me to say that, but there are just too many companies out there that can't support the declining margins. I think margin erosion will be key to who survives.
Cisco and Infinera should be bringing some cool products to market in the next six months. We hope the products are good because it will generate debate for the final vendor choices for operators such as AT&T and Verizon.
Again, content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening. Some don't listen, and will wonder what happened.
Peter Jarich, service director, service provider infrastructure, mobile ecosystem, Current Analysis @pnjarich
2012 is going to be the year for LTE-Advanced (LTE-A). Why? One, vendors always like to talk up what’s next, and LTE-A is what follows LTE (Long Term Evolution).
At the same time, operators who haven't yet deployed LTE will want to start with the latest and greatest. Of course, LTE-A brings real advances for operators: carrier aggregation for dealing with fragmented spectrum assets; heterogeneous networks for dealing with the interaction of small cell and macrocell networks; relaying for improved cell edge performance.
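The appeal of carrier aggregation is simple arithmetic: several fragmented spectrum allocations behave like one wide carrier, so their bandwidths add before spectral efficiency is applied. A minimal sketch, where the carrier bandwidths and the peak spectral efficiency figure are illustrative assumptions rather than 3GPP values:

```python
# Illustrative sketch of carrier aggregation arithmetic. The carrier
# widths and spectral efficiency below are assumed for illustration.

def aggregate_peak_rate(carriers_mhz, bits_per_hz=15.0):
    """Estimate peak rate by summing component-carrier bandwidths
    and applying an assumed peak spectral efficiency (bit/s/Hz)."""
    total_mhz = sum(carriers_mhz)
    return total_mhz * 1e6 * bits_per_hz  # bit/s

# An operator holding three fragmented allocations: 10, 15 and 20 MHz.
carriers = [10, 15, 20]
rate = aggregate_peak_rate(carriers)
print(f"{sum(carriers)} MHz aggregated -> {rate / 1e6:.0f} Mbit/s peak")
# prints "45 MHz aggregated -> 675 Mbit/s peak"
```

The point the sketch makes is that no single allocation here could deliver that peak on its own; aggregation is what lets operators with scattered holdings advertise headline rates.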
Avi Shabtai, CEO of MultiPhy
The most significant development of 2011 was the availability of CMOS technology that allows next-generation optical transport solutions for 100 Gigabit. And specifically, metro-focused solutions that hit the cost and power numbers required by this industry.
On top of that, optical communication has entered the era of digital signal processing receivers. We have also seen the potential segmentation in 100 Gigabit of metro versus long-haul, each with its specific set of solutions.
"We will see a huge growth in video consumption. This has already started but it is just the tip of the iceberg."
The transition of the telecom and datacom market to 100 Gigabit has also begun - from the transport optical network all the way to copper backplanes - it's all a 4x25Gbps architecture. This year has also seen consolidation in the ecosystem, especially among module companies.
This consolidation will continue at all industry levels in 2012: semiconductors, subsystems, systems and the carriers. The consolidation will coincide with an across-the-board price reduction in emerging technologies like 100 Gigabit transport.
The increase in capacity demand will also force an increase in requirements for various solutions supporting 100 Gigabit. I expect to see more CMOS-based devices introduced.
From a services point of view, we will see a huge growth in video consumption. This has already started but it is just the tip of the iceberg. Video will have a tremendous influence on network evolution.
Gilles Garcia, director, wired communication at Xilinx @gllsgarcia
The CFP2 and CFP4 optical modules are arriving much faster than the CFP did after the XFP optical module.
The CFP standard took three to four years to complete, while the CFP2 standard closed after just two years. Now the CFP4 standard effort has been launched and is expected to take only 18 months. The new form factors are being driven by the cost-per-port of 100 Gigabit and how to reduce it. The CFP2 doubles the density compared to the CFP, while the CFP4 doubles it again.
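The density argument can be made back-of-the-envelope: halving module width roughly doubles the number of ports a line card's faceplate can carry, which directly cuts cost-per-port. The faceplate width and module widths below are illustrative assumptions, not MSA datasheet values:

```python
# Back-of-the-envelope sketch of faceplate density for 100G modules.
# Faceplate and module widths are assumed figures for illustration.

def ports_per_card(faceplate_mm, module_width_mm):
    """Whole number of modules that fit across an assumed faceplate."""
    return faceplate_mm // module_width_mm

faceplate = 440  # assumed usable faceplate width, in mm
for name, width in [("CFP", 82), ("CFP2", 41), ("CFP4", 21)]:
    n = ports_per_card(faceplate, width)
    print(f"{name}: {n} ports -> {n * 100} Gbit/s per card")
```

Under these assumed dimensions the progression is 5, 10 and then 20 ports per card, which is the doubling-and-doubling-again the quote describes.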

"Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and deployment of new services."
Telecom application-specific standard product (ASSP) players have been relatively quiet in 2011. Word from customers is that such vendors are pushing out their roadmap/ product availability because of too much flux in the various IEEE and ITU-T telecom standards and difficulty in justifying the return-on-investment. This is proving a perfect opportunity for FPGAs.
Large system vendors are growing their network services as operators continue to outsource their network management and maintenance. As reported in their financial reports, this is an important source of business for the likes of Ericsson, Huawei and Alcatel-Lucent.
It is leading the vendors to push more of their own hardware, as they look to add value-added services and integrate those services on their own platforms. Some equipment vendors realise they do not have a full portfolio and have established partnerships for the missing platforms. They are also starting to develop platforms to generate more revenue.
In 2012, I’m not expecting a telecom revolution but I do expect accelerated evolution. And I foresee big disruptions in the ASSP market as it continues to consolidate: I expect several mergers and acquisitions among the top 20 ASSP suppliers.
Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and the deployment of new services. Programmability also improves time-to-market for delivering these services and reduces time-to-revenue.
Mobile backhaul will be a market driver in 2012. The growth in mobile data terminals will lead to a new generation of mobile backhaul networks. This will drive the move from 1 to 10 Gigabit Ethernet, higher-feature packet processing, and traffic management integration into mobile infrastructure to better control and bill bandwidth usage i.e. pay for what you use.
The 'God box' - packet optical transport systems and the like - is back, but really it is network needs that are driving this.
And one topic to watch that will become clearer in 2012 is how cloud computing impacts the networking market with regard to such issues as security, caching and higher-speed links.
Google is becoming an important networking equipment player - for its own internal use. And Google will be joined by others - Facebook, Amazon and the like. What impact will this have on the traditional system networking vendors? Such new players are defining and building network platforms tailored to their needs. This is competition for the traditional system vendors, who are not getting this piece of the business. Semiconductor suppliers, including FPGA makers, could serve those companies directly.
Other issues to note: What will Intel do in the networking space? Intel acquired Fulcrum in 2011 and has invested in several networking companies.
There are also technology issues.
What will happen to ternary content addressable memory (TCAM)? Broadcom's acquisition of NetLogic Microsystems has created a hole in the TCAM market. Will Broadcom continue with TCAM? Will customers want to give their TCAM business to Broadcom?
Xilinx has added network search engine IP to its FPGA solution portfolio as multi-core 'search engines' face increasing difficulty in sustaining the performance required.
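What TCAM hardware does is compare a key against every stored ternary entry in parallel, returning the best match in a single cycle; the classic use is longest-prefix-match route lookup. In software or FPGA logic, the same lookup is typically built from a trie walked one bit at a time. A minimal sketch, with made-up routes for illustration:

```python
# A minimal bitwise trie for longest-prefix match, the lookup that
# TCAMs accelerate in hardware. Routes and ports are made up.

class PrefixTrie:
    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        """Store a route under its prefix, given as a bit string."""
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop

    def lookup(self, addr_bits):
        """Walk the address bit by bit, remembering the most
        specific (longest) prefix seen so far."""
        node, best = self.root, None
        for b in addr_bits:
            if "hop" in node:
                best = node["hop"]
            if b not in node:
                return best
            node = node[b]
        return node.get("hop", best)

trie = PrefixTrie()
trie.insert("10", "port-A")     # covers all addresses starting 10...
trie.insert("1011", "port-B")   # more specific route
print(trie.lookup("101100"))    # prints "port-B" (longest match wins)
print(trie.lookup("100000"))    # prints "port-A"
```

The trie trades the TCAM's single-cycle parallel compare for one memory access per bit, which is exactly the performance gap that dedicated search-engine IP is meant to close.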
And of course there is the continual issue of power optimisation.
For Part 2, click here
For Part 3, click here
LTE-Advanced - a mind map
A mind map of the emerging 3GPP LTE Release 10 standard, also known as LTE-Advanced. For a pdf, click here
