Is wireless becoming a valid alternative to fixed broadband?

Are wireless technologies such as Long Term Evolution (LTE) and WiMAX2 closing the gap on fixed broadband? 

A recent blog post by The Economist discussed how Long Term Evolution (LTE) is coming to the rescue of one of its US correspondents, located 5km from the DSL cabinet and struggling to get a decent broadband service. 

 

Peak rates are rarely achieved: the mobile user needs to be very close to a base station and a large spectrum allocation is needed.

Mark Heath, Unwired Insight

 

The correspondent makes some interesting points: 

  • The DSL link offered a download speed of 700kbps at best, while Verizon's FiOS passive optical networking (PON) service was not available as an alternative.
  • The correspondent upgraded to an LTE handset service that enabled up to eight PCs and laptops to achieve a 15-20x download speed improvement.

The blog suggests that wireless data is becoming fast enough to address users' broadband needs. 

But is LTE broadband now good enough?  Mark Heath, a partner at telecom consultancy Unwired Insight, is skeptical: "Is the gap between landline and wireless broadband narrowing? I'm not convinced." 

Peak wireless rates, and LTE's in particular, may suggest that wireless is now a substitute for fixed. But peak rates are rarely achieved: the mobile user needs to be very close to a base station, and a large spectrum allocation is needed. 

"While peak rates on mobile look to be increasing exponentially, average throughput per base station and base station capacities are increasing at a much more modest rate," says Heath. Hence the operator and vendor focus on LTE Advanced, as well as much bigger spectrum allocations and the use of heterogeneous networks.

Landline broadband, in contrast, does not suffer the degradation of a busy cell: there is much less disparity between peak rates and sustainable average throughput with fixed broadband. 

If fixed has advantages, it still requires operators to make the relevant investment, particularly in rural areas. "Wireless is better than nothing in rural areas," says Heath. But the gap between fixed and mobile isn't shrinking as much as peak data rates suggest. 

Yet mobile networks do have a trump card: wide area mobility. With the increasing number of people dependent on smartphones, iPads and devices like the Kindle Fire, ever greater value is being placed on mobile broadband.

So if fixed broadband is keeping its edge over wireless, just what future services will drive the need for fixed's higher data rates?

This is a topic to be explored as part of the upcoming next-generation PON feature.

 

Further reading:

Broadbandtrends: The fixed versus mobile broadband conundrum

 


2012: A year of unique change

The third and final part on what CEOs, executives and industry analysts expect in the new year, and their reflections on 2011.

Karen Liu, principal analyst, telecoms components, Ovum @girlgeekanalyst 

 

"We’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogeneous networks including Wi-Fi."

 

Last year was a long one. Looking back, it is hard to believe that only one year has elapsed between January 2011 and now.

In fact, looking back it is hard to remember how things looked a year ago: natural disasters were considered rare occurrences, WiMAX’s role was still being discussed, and some viewed TDD LTE as a Chinese peculiarity. For that matter, cloud-RAN was another weird Chinese idea. But no matter: China could do anything, given its immunity to economics and to the need for a return on investment.  

Femtocells were consumer electronics for the occasional indoor coverage fix, and Wi-Fi was not for carriers. 

Only optical could do 100Mbps to the subscriber, who, by the way, was moving on to 10 Gig PON in short order.  Flexible-spectrum ROADMs meant only Finisar could play, and high port-count wavelength-selective switches had come and gone. 100 Gigabit DWDM took several slots, hadn't shipped for real, and even the client-side interface was a problem. 

As for modules, 40 Gigabit Ethernet (GbE) client was CFP-sized, and high-density 100GbE looked so far away that the non-standard 10x10 MSA was welcomed. 

NeoPhotonics was a private company, doing that wacky planar integration thing that works OK for passives but not actives.  

Now it feels like we’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogeneous networks including Wi-Fi. 

Optical is one of several ways to do backhaul or PC peripherals. 40GbE, even single-mode, comes in a QSFP package, tunable comes in an SFP — both of which, by the way, use optical integration. 

Most optical transport vendors, even metro specialists, have 100 Gigabit coherent in trial stage at least. Thousands of 100 Gig ports and tens of thousands of 40 Gig have shipped. 

Flexible spectrum is being standardised and CoAdna went public. The tunable laser start-up phase concluded with Santur finding a home in NeoPhotonics, now a public company.  

But we also have a new feeling of vulnerability. 

Optical components revenues and margins slid back down. Bad luck can strike twice, with Opnext taking the hit from both the spring earthquake and the fall floods.  China turns out not to be immune after all, and time hasn’t automatically healed Europe.

What will happen this year? At this rate, I think we’ll see a lot of news at OFC in a couple of months' time. By then I’ll probably think: "Was it as recently as January when the world looked so different?"

 

Brian Protiva, CEO of ADVA Optical Networking @ADVAOpticalNews

Last year was an incredible year for networks. In many respects it was a watershed moment. Optical transport took a huge step forward with the genuine availability of 100 Gigabit technologies. 

What's even more incredible is that 100 Gigabit emerged in more than the core: we saw 100 Gig metro solutions enter the marketplace. This means that for the first time enterprises and service providers have the opportunity to deploy 100 Gig solutions that fit their needs. Thanks to the development of direct-detection 100 Gig technology, cost is becoming less and less of an issue. This is a game changer.

In 2012, 100 Gig deployments will continue to be a key topic, with more available choices and maturing systems. However, I firmly believe the central focus of 2012 will be automation and multi-layer network intelligence. 

 

"We need to see networks that can effectively govern and optimise themselves." 

Talking to our customers and the industry, it is clear that more needs to be done to develop true network automation. There are very few companies that have successfully addressed this issue. 

We need to see networks that can effectively govern and optimise themselves. That can automatically deliver bandwidth on demand, monitor and resolve problems before they become service disrupting, and drive dramatically increased efficiency.

The future of our networks is all about simplicity. The continued fierce bandwidth growth can no longer be supported by today's complex operational inefficiencies. Streamlined operations are essential if operators are to drive for further profitable growth. 

I'm excited about helping to make this happen.

 

Arie Melamed, head of marketing, ECI Telecom @ecitelecom

The existing momentum of major traffic growth with no proportional revenue increase continued - even intensified - in 2011. This means operators have to invest in their networks without generating a proportional revenue increase from that investment. We expect to see new business models crop up as operators cope with over-the-top (OTT) services. 

To differentiate themselves from competition, operators must make the network a core part of the end-customer experience. To do so, we expect operators to introduce application-awareness in the network – optimising service delivery to avoid network expansions and introduce new revenues.  

We also expect operators to offer quality-of-service assurance to end users and content application providers, turning a lose-lose situation around.

 

Larry Schwerin, CEO of Capella Intelligent Subsystems @CapellaPhotonic

Over 2011, we witnessed the demand for broadband access increase at an accelerated rate. Much of this has been fueled by the continuation of mass deployments of broadband access - PON/FTTH, wireless LTE, HFC, to name a few - as well as the ever-increasing implementation of cloud computing, requiring instantaneous broadband access. Video and rich media are a small but growing piece of this equation. 

The full impact of this is yet to be felt, as people start to draw more narrowcast, rather than broadcast, content. The final element will come when upstream content from appliances similar to Sling Media's, as well as the various forms of video conferencing, becomes more widespread. This will drive demand for more symmetrical upstream bandwidth. 

 

'Change is definitely in order for the optical ecosystem. The question is how and when?'

Along with this, falling revenue-per-bit is forcing network operators to develop more cost-effective ways of managing this traffic. 

All of the aforementioned is driving demand for higher capacity and more flexibility at the fundamental optical layer. 

I believe this will translate into more bits per wavelength, more wavelengths per fibre and, ultimately, more flexibility for network operators, who will be able to manage traffic at the optical layer more easily. This is good news for transponder, tunable laser and ROADM/ WSS suppliers.

2011 also pointed out certain issues within the optical communications sector. Most notably, entering 2011, the financial marketplace was bullish on the optical sector following rapid quarter-on-quarter growth of certain larger optical players. Then the “Ides of March” came and optical stocks lost as much as 40% of their value when it was deemed there had been a pull-back in demand by a few, but nonetheless important, players in the sector. 

Later in the year came the flooding in Thailand, which hampered the production capabilities of many of the optical components players. 

Overall margins in the sector remain at unacceptable levels furthering the speculation that things need to change in order for a more robust environment to exist.

What will 2012 bring? 

I believe demand for bandwidth will continue to grow. Data centres will gain more focus as cloud computing continues to gain traction. This will lead to more demand for fundamental technologies in the area of optical transmission and management. 

The next phase of wavelength management solutions will start to emerge, both at high port counts (1x20) and at low port counts (1x2, 1x4) for edge applications. More emphasis will be placed on monitoring and control as more complex optical networks are built.

Change is definitely in order for the optical ecosystem. The question is how and when? Will it simply be consolidation? How will vertical integration take shape? How will new technologies influence potential outcomes?

2012 should be a year of unique change.

 

Terry Unter, president and general manager, optical networks solutions, Oclaro

Discussion of, and progress on, defining next-generation ROADM network architectures was a very important development in 2011. In particular, consensus on the feature requirements and technology choices needed to enable a more cost-efficient optical network layer was reached amongst the major network equipment manufacturers. Colourless, directionless and, to a significant degree, contentionless are clear goals, while we continue to drive down the cost of the network. 

 

"We expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year."

Coherent detection transponder technology is a critical piece of the puzzle ensuring scalability of network capacity while leveraging a common technology platform. We succeeded in volume production shipments of a 40 Gig coherent transponder and we announced our 100 Gig transponder.

2012 will be an important year for 100 Gig. The availability of 100 Gig transponder modules for deployment will enable a much wider list of system manufacturers to offer their customers more spectrally-efficient network solutions. The interest is universal from metro applications to the long haul and ultra-long haul market segments. 

While there is much discussion about 400 Gig and higher rates, standards are in very early stages. The industry as a whole expects 100 Gig to be a key line rate for several years. 

As we enter 2012, we expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year.

 

Parts 1 and 2 of this series are published separately.

 


The CFP4 optical module to enable Terabit blades

The next-generation CFP modules - the CFP2 and CFP4 - promise to double and double again the number of 100 Gigabit-per-second (Gbps) optical module interfaces on a blade.

Using the CFP4, up to sixteen 100Gbps modules will fit on a blade, giving a total line rate of 1.6 Terabits-per-second (Tbps). With a goal of a 60W total module power budget per blade, that equates to 27Gbps/W. In comparison, the power-efficient SFP+ achieves 10Gbps/W.
 

Source: Gazettabyte, Xilinx
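Those headline numbers can be checked with quick arithmetic. The sketch below is an illustration, not a specification; the roughly 1W per SFP+ module is an assumption implied by the 10Gbps/W comparison:

```python
# Back-of-the-envelope check of the blade figures quoted above.
modules_per_blade = 16        # CFP4 modules per blade
rate_per_module_gbps = 100    # each carrying 100 Gigabit Ethernet
blade_power_budget_w = 60     # target optics power budget per blade

total_rate_gbps = modules_per_blade * rate_per_module_gbps
efficiency_gbps_per_w = total_rate_gbps / blade_power_budget_w

print(f"Total line rate: {total_rate_gbps / 1000:.1f} Tbps")   # 1.6 Tbps
print(f"Efficiency: {efficiency_gbps_per_w:.0f} Gbps/W")       # 27 Gbps/W

# The SFP+ comparison: 10 Gbps in roughly 1 W gives 10 Gbps/W.
print(f"SFP+ efficiency: {10 / 1.0:.0f} Gbps/W")
```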

The CFP2 is about half the size of the CFP while the CFP4 is half the size of the CFP2. The CFP4 is slightly wider and longer than the QSFP.

The two CFP modules will use a 4x25Gbps electrical interface, doing away with the need for a 10x10Gbps to 4x25Gbps gearbox IC used for current CFP 100GBASE-LR4 and -ER4 interfaces. The CFP2 and CFP4 are also defined for 40 Gigabit Ethernet use.

The CFP's maximum power rating is 32W, the CFP2's 12W and the CFP4's 5W. But vendors that put eight CFP2s or 16 CFP4s on a blade still want to meet the 60W total power budget.
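Dividing the 60W blade budget by the module counts shows why: each module must run well below its maximum rating. A short sketch (the four-CFP-per-blade figure is taken from the Xilinx comparison quoted in this article):

```python
# Per-module power budget if a blade's optics must stay within 60 W,
# versus each form factor's maximum power rating.
blade_budget_w = 60
max_rating_w = {"CFP": 32, "CFP2": 12, "CFP4": 5}
modules_per_blade = {"CFP": 4, "CFP2": 8, "CFP4": 16}

per_module_w = {m: blade_budget_w / n for m, n in modules_per_blade.items()}
for module, budget in per_module_w.items():
    print(f"{module}: {modules_per_blade[module]} per blade -> "
          f"{budget:.2f} W each (maximum rating {max_rating_w[module]} W)")
```

A CFP4 drawing its full 5W rating would put 16 modules at 80W, which is why designers must aim well below the rating to hit the 60W blade budget.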

 

Getting close: Four CFP modules deliver slightly less bandwidth than 48 SFP+ modules: 400Gbps versus 480Gbps. The four also consume more power: 60W versus 48W. Moving to the CFP2 module will double the blade's bandwidth without consuming more power, while the CFP4 will do the same again. A blade with 16 CFP4 modules promises 1.6Tbps while requiring 60W. Source: Xilinx

The first CFP2 modules are expected this year - there could be vendor announcements as early as the upcoming OFC/NFOEC 2012 show, to be held in LA in the first week of March. The first CFP4 products are expected in 2013.

 

Further reading

The CFP MSA presentation: CFP MSA 100G roadmap and applications

 


Altera unveils its optical FPGA prototype

Altera has been showcasing a field-programmable gate array (FPGA) chip with optical interfaces. The 'optical FPGA' prototype makes use of parallel optical interfaces from Avago Technologies.

Combining the FPGA with optics extends the reach of the chip's transceivers to up to 100m. Such a device, once commercially available, will be used to connect high-speed electronics on a line card without requiring exotic printed circuit board (PCB) materials. An optical FPGA will also be used to link equipment such as Ethernet switches in the data centre.

"It is solving a problem the industry is going to face," says Craig Davis, product marketing manager at Altera. "As you go to faster bit-rate transceivers, the losses on the PCB become huge."

 

What has been done  

Altera's optical FPGA technology demonstrator combines a large FPGA - a Stratix IV EP4S100G5 - with two Avago 'MicroPod' 12x10.3 Gigabit-per-second (Gbps) optical engines.

Avago's MicroPod 12x10Gbps optical engine device

The FPGA used has 28 transceivers running at 11.3Gbps. In the optical FPGA implementation, 12 of them connect to the two MicroPods: a transmitter optical sub-assembly (TOSA) and a receiver optical sub-assembly (ROSA).

The MicroPod measures 8x8mm and uses 850nm VCSELs. The two optical engines interface to an MTP connector and consume 2-3W. Each MicroPod sits in a housing - a land grid array compression socket - that is integrated as part of the FPGA package. 

"The reason we are doing it [the demonstrator] with a 10 Gig FPGA and 10 Gig transceivers is that they are known, good technologies," says Davis. "It is a production GT part and known Avago optics." 

 

Why it matters

FPGAs, with their huge digital logic resources and multiple high-speed electrical interfaces, are playing an increasingly important role in telecom and datacom equipment as the cost to develop application-specific standard product (ASSP) devices continues to rise. 

The 40nm-CMOS Stratix IV FPGA family has up to 32 transceivers at 11.3Gbps, while Altera's latest 28nm Stratix V FPGAs support up to 66x14.1Gbps transceivers, or 4x28Gbps and 32x12.5Gbps electrical transceivers on-chip.

Altera's FPGAs can implement the 10GBASE-KR backplane standard at spans of up to 40 inches. "You have got the distances on the line card, the two end connectors and whatever the distances are across a 19-inch rack," says Davis. Moving to 28Gbps transceivers, the reach is reduced significantly, to several inches only; countering such losses requires expensive PCB materials.   
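The reach collapse Davis describes can be illustrated with a crude loss model: copper trace loss grows roughly as the square root of frequency (skin effect) plus a term linear in frequency (dielectric loss). The coefficients and loss budgets below are illustrative guesses for FR-4-class material, not measured values:

```python
import math

def loss_db_per_inch(f_ghz, skin=0.15, dielectric=0.05):
    """Crude FR-4-like trace loss: skin-effect plus dielectric terms."""
    return skin * math.sqrt(f_ghz) + dielectric * f_ghz

def reach_inches(bit_rate_gbps, loss_budget_db):
    """Reach at which trace loss uses up the link's loss budget."""
    nyquist_ghz = bit_rate_gbps / 2.0  # fundamental frequency of NRZ data
    return loss_budget_db / loss_db_per_inch(nyquist_ghz)

# Assume a well-equalised 10G link tolerates ~25 dB of trace loss,
# but only ~10 dB of budget remains at 28G.
print(f"10 Gbps: ~{reach_inches(10, 25):.0f} inches")   # tens of inches
print(f"28 Gbps: ~{reach_inches(28, 10):.0f} inches")   # several inches
```

With these assumed numbers the model reproduces the article's shape of the problem: tens of inches at 10Gbps, a handful of inches at 28Gbps.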

One way to solve this problem is to go optical, says Davis. Adding 12-channel 10Gbps optical engines means that the reach of the FPGAs is up to 100m, simplifying PCB design and reducing cost while enabling racks and systems to be linked.

 

The multimode fibre connector to the MicroPod

Developing an optical FPGA prototype highlights that chip vendors already recognise the role optical interfaces will play. 

It is also good news for optical component players as the chip market promises a future with orders of magnitude greater volumes than the traditional telecom market.

The optical FPGA is one target market for silicon photonics players.  One, Luxtera, has already demonstrated its technology operating at 28Gbps.

 

What next

Altera stresses that this is a technology demonstrator only.  

The company has not made any announcements regarding when its first optical FPGA product will be launched, and whether the optical technology will enter the market interfacing to its FPGAs' 11.3Gbps, 14.1Gbps or highest-speed 28Gbps transceivers.  

 

The underside of the FPGA, showing the 1,932-pin ball grid array

 

 


OIF promotes uni-fabric switches & 100G transmitter

The OIF's OTN implementation agreement (IA) allows a packet fabric to also switch OTN traffic. The result is that operators can now use one switch for both traffic types, aiding IP/ Ethernet and OTN convergence. Source: OIF

Improving the switching capabilities of telecom platforms without redesigning the switch, and shrinking the 100 Gigabit transmitter, are just two recent initiatives of the Optical Internetworking Forum (OIF).

The OIF, the industry body tackling design issues not addressed by the IEEE and International Telecommunication Union (ITU) standards bodies, has just completed its OTN-over-Packet-Fabric protocol that enables optical transport network (OTN) traffic to be carried over a packet switch. The protocol works by modifying the line cards at the switch's input and output, leaving the switch itself untouched (see diagram above). 

Meanwhile, the OIF is starting a 100 Gigabit-per-second (Gbps) transmitter design project dubbed the integrated dual-polarisation quadrature modulated transmitter assembly (ITXA). The Working Group aims to expand 100Gbps applications with a transmitter design half the size of the OIF's existing 100Gbps transmitter module.

The Working Group also wants greater involvement from the system vendors to ensure the resulting 100 Gig design is not conservative. "We joke about three types of people that attend these [working group] meetings," says Karl Gass, the OIF’s Physical and Link Layer Working Group vice-chair. "The first group has something they want to get done, the second group has something already and they don't want something to get done, and the third group want to watch." Quite often it is the system vendors that fall into this third group, he says.

 

OTN-over-Packet-Fabric protocol  

The OTN protocol enables a single switch fabric to be used for both traffic types - packets and time-division multiplexed (TDM) OTN - saving operators cost. 

"OTN is out there while Ethernet is prevalent," says Winston Mok, technical author of the OTN implementation agreement. "What we would like to do is enable boxes to be built that can do both economically."

 

The existing arrangement where separate packet and OTN time-division multiplexing (TDM) switches are required. Source: OIF

 

Platforms using the protocol are coming to market. ECI Telecom says its recently announced Apollo family is one of the first OTN platforms to use the technique.

The protocol works by segmenting OTN traffic into a packet format that is then switched before being reconstructed at the output line card. To do this, the constant bit-rate OTN traffic is chopped up so that it can easily go through the switch as a packet. "We want to keep the [switch] fabric agnostic to this operation," says Mok. "Only the line cards need to do the adaptations." 

The OTN traffic also carries timing information, which the protocol must convey as it passes through the switch. The OIF's solution is to vary the size of the chopped-up OTN packets. The packet is nominally 128 bytes long, but the size is occasionally varied to 127 or 126 bytes as required. These variations are interpreted at the output of the switch as rate information and used to control a phase-locked loop.
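The rate-transfer idea can be mimicked in a few lines. This is a toy model of the mechanism, not the OIF wire format: the ingress picks segment sizes between 126 and 128 bytes so that their average tracks the client rate, and the egress recovers the rate from that average to steer its phase-locked loop:

```python
# Toy model of conveying a constant bit-rate client's rate through a
# packet fabric by dithering the segment size (illustrative only).

def segment_sizes(client_rate_ratio, n_packets, nominal=128):
    """Choose per-packet sizes so their average tracks the client rate.

    client_rate_ratio: client rate / nominal rate (e.g. 0.999 = slightly slow).
    """
    sizes = []
    credit = 0.0
    for _ in range(n_packets):
        credit += nominal * client_rate_ratio    # bytes that arrived this period
        size = int(credit)                       # send the whole bytes available
        size = max(nominal - 2, min(nominal, size))  # clamp to 126..128 bytes
        sizes.append(size)
        credit -= size
    return sizes

def recovered_rate(sizes, nominal=128):
    """Egress estimate of the client rate from the average packet size."""
    return sum(sizes) / (len(sizes) * nominal)

sizes = segment_sizes(0.999, 10_000)
print(sorted(set(sizes)))               # a mix of 127- and 128-byte packets
print(round(recovered_rate(sizes), 4))  # close to the 0.999 client rate
```

The fabric never inspects the sizes; only the egress line card does, which matches the design goal of keeping the switch fabric agnostic to the adaptation.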

Mok says the implementation agreement document that describes the protocol is now available. The protocol does not define the physical layer interface connecting the line card to the switch, however. "Most people have their own physical layer," says Mok.

 

100 Gig ITXA 

The ITXA project will add to the OIF's existing integrated transmitter document. The original document addresses the 100 Gigabit transmitter for dual-polarisation, quadrature phase-shift keying (DP-QPSK) for long-haul optical transmission. The OIF has also defined 100Gbps receiver assembly and tunable laser documents.

The latest ITXA Working Group has two goals: to shrink the size of the assembly to lower cost and increase the number of 100Gbps interfaces on a line card, and to expand the applications to include metro. The ITXA will still address 100Gbps coherent designs but will not be confined to DP-QPSK, says Gass.

"We started out with a 7x5-inch module and now there is interest from system vendors and module makers to go to smaller [optical module] form factors," says Gass. "There is also interest from other modulator vendors that want in on the game."

To reduce size, the ITXA will support modulator technologies other than the lithium niobate used for long-haul. These include indium phosphide, gallium arsenide and polymer-based modulators.

Gass stresses that the ITXA is not a replacement for the current transmitter implementation. "We are not going to get the 'quality' that we need for long-haul applications out of other modulator technologies," he says. "This is not a Gen II [design]."

The Working Group's aim is to determine the 'greatest common denominator' for this component. "We are trying to get the smallest form factor possible that several vendors can agree on," says Gass. "To come out with a common pin out, common control, common RF (radio frequency) interface, things like that."

Gass says the work directions are still open for consideration. For example, adding the laser with the modulator. "We can come up with a higher level of integration if we consider adding the laser, to have a more integrated transmitter module," says Gass.

As for greater system-vendor input, the Working Group wants more of their system-requirement insights to avoid the design becoming too restrictive. 

"You end up with component vendors that do all the work and they want to be conservative," says Gass. "The component vendors don't want to push the boundaries as they want to hit the widest possible customer base."

Gass expects the ITXA work to take a year, with system demonstrations starting around mid-2013.


Reflections 2011, Predictions 2012 - Part 2

Gazettabyte asked industry analysts, CEOs, executives and commentators to reflect on the last year and comment on developments they most anticipate for 2012. Here are the views of Verizon's Glenn Wellbrock, Professor Rod Tucker, Ciena's Joe Berthold, Opnext's Jon Anderson, NeoPhotonics' Tim Jenks and Vladimir Kozlov of LightCounting.

 

Glenn Wellbrock, Verizon's director of optical transport network architecture & design

The most significant accomplishment from an optical transport perspective for me was the introduction of 100 Gigabit into Verizon's domestic - US - network. 


"The key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit"

 

That accomplishment has paved the way for us to hit the ground running in 2012 with a very aggressive 100 Gigabit deployment plan. I also believe this accomplishment gives others the confidence to start taking advantage of this leading-edge technology. 

Coherent receiver technology and the associated high-speed electronics give us a much cleaner line system design that eliminates external dispersion-compensation fibre, lowering propagation latency by up to 15% while bringing down the cost, space and power per bit. 
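The latency figure is plausible from first principles. A sketch with typical textbook fibre parameters (assumed values, not Verizon's): standard fibre disperses about 17 ps/nm/km, while dispersion-compensating fibre (DCF) compensates at roughly -85 ps/nm/km, so a compensated route carries around 20% extra fibre length.

```python
# Rough estimate of the latency saved by eliminating dispersion-
# compensating fibre (DCF). Assumed textbook values, not Verizon data.
smf_dispersion_ps_nm_km = 17.0    # standard single-mode fibre
dcf_dispersion_ps_nm_km = -85.0   # dispersion-compensating fibre
route_km = 1000.0                 # example long-haul route

# DCF length needed to cancel the route's accumulated dispersion
dcf_km = route_km * smf_dispersion_ps_nm_km / abs(dcf_dispersion_ps_nm_km)

# Fraction of total propagation delay contributed by the DCF
latency_saving = dcf_km / (route_km + dcf_km)
print(f"DCF length: {dcf_km:.0f} km")
print(f"Latency saving from removing it: ~{latency_saving:.0%}")
```

Real DCF ratios and group indices vary by vendor and span design, which is why the figure is quoted as "up to 15%" rather than a single number.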

The whole industry moving in this direction means higher volumes and, therefore, lower costs. This new infrastructure will allow operators to get ahead of customer demand, improving delivery intervals and introducing new, higher-bandwidth services for those large key customers that require them.  

In my opinion, the key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit and provides the framework to support colourless, directionless and contentionless optical nodes.

Today, field technicians must plug a new transmitter/ receiver into the appropriate direction and filter port at both circuit ends. With this new technology, operations personnel can simply plug the new card into the next available port and it can then be provisioned, tested and even moved to a new colour or direction remotely without any on-site personnel involvement - even when there are multiple copies of the same colour on the same add/ drop structure coming from different fibres.

This new nodal architecture takes advantage of the inherent channel selection capability of the coherent receiver to eliminate fixed filters and opens up the door for a truly reconfigurable optical add/ drop multiplexer (ROADM) - creating new flexibility that can be used for optical restoration, network defragmentation, operational simplicity, and more. 

 

Rod Tucker, Director of the Institute for a Broadband Enabled Society (IBES), Director of the Centre for Energy-Efficient Telecommunications (CEET), and professor of electrical and electronic engineering at the University of Melbourne.

Australia's National Broadband Network (NBN) hit the ground running in 2011.

The project is still many years from completion, but in 2011 the roll-out of fibre-to-the-premises infrastructure began in earnest. This is a very noteworthy project - a wholesale broadband access network delivering advanced broadband services to the entire population of the country, including fibre to 93% of all premises and a mixture of fixed wireless and satellite to the remainder. At an estimated cost of around AUS$36 billion, the price tag is not small.

 

"The environment created by [Australia's] National Broadband Network will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery"  

 

But the wholesale-only model maximises opportunities for competition at the service provider level, and reduces wasteful duplication of infrastructure in the last mile. A remarkable aspect of the NBN project is that a deal has been struck between the incumbent telco, Telstra, and the government-owned operator of the NBN.  

Under this deal, Telstra will shut down its Hybrid-Fibre-Coax (HFC) network and decommission its legacy copper access network.  Australia will become a truly fibre-connected country, with a future-proof broadband infrastructure.

My thoughts for 2012 also relate to Australia's National Broadband Network.  The environment created by the NBN will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery.  

I anticipate that in 2012 and beyond, new service providers and aggregators in areas such as health care, education, entertainment and energy will emerge.  

I am very excited about the opportunities.

 

Joe Berthold, vice president of network architecture at Ciena

One of the most memorable developments from a network architecture point of view was the clear emergence of the category of packet-optical switching products to serve as the transport layer of backbone IP networks.

For years two competing points of view have been put forth. First, in the 'IP-over-glass' position, long-haul optics is incorporated into core routers. This has never taken off, with some disappointing attempts in the early days of 40 Gigabit. The second approach involves a separate, very much simpler, packet optical transport platform being introduced to interconnect core routers. The packet transport could be based on Ethernet protocols, MPLS, MPLS-TE or MPLS-TP.

 

"It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development"

What is quite significant in this development is that traditional router vendors seem to be going in this direction too, with the vision of a much simpler packet switching platform to keep cost, space and power under control. 

This is a clear response to the overwhelming need we see in the market, representing a separation of packet switching into two layers: one with global routing capability at strategic locations in the network, and the other with flexible transport functionality for network traffic engineering.

In 2012 it will be fascinating to see how the struggle for protocol dominance plays out within the data centre. 

While the IETF has many competing proposals, worked on in multiple groups, the IEEE is now in final ballot for Shortest Path Bridging (IEEE 802.1aq). 

Shortest Path Bridging has broad applicability in networks, but we might see it first emerge as a solution within the data centre. 

The other contender within the data centre is OpenFlow, which has developed quite a momentum too. 

It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development.

 

Jon Anderson, director of technology programme at Opnext

Our most significant 2011 events were Japan's great earthquake in March and the Thailand floods in October. Both caused major disruptions and challenges in optical component supply-chain management and manufacturing.

JDS Uniphase's tunable SFP+ announcement was well ahead of the technology curve.

 

"Our most significant 2011 events were Japan's great earthquake in March and the Thailand floods in October."

In 2012 we expect initial production shipments and deployment of 100Gbps PM-QPSK/ coherent modules, as well as a fast production ramp of 40 Gigabit Ethernet (GbE) QSFP+ modules for data centre applications. 

Another development to watch is the next-generation 100 GbE interconnect technology and standards development for low-cost, high-density modules for data centre applications. 

Lastly, there will be an increased focus on technologies and solutions for 100 Gigabit DWDM in metro and extended reach enterprise applications.

 

Tim Jenks, CEO of NeoPhotonics 

NeoPhotonics made significant progress this year in developments of components and technologies for coherent transmission networks, including receivers, transmitters and advanced approaches toward switching.

We continue to see increasing adoption of coherent transmission systems, broad-scale deployment of access networks and a continuing emergence of large scale data centres as a prominent element of the communications network landscape.

 

Vladimir Kozlov, CEO of LightCounting

The industry was strong enough to get over an earthquake, tsunami and flood in 2011. Softer demand for optics in 2011 helped - is still helping - many vendors to ride the disruptions. Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011.  We are looking forward to a smoother ride in 2012, as demand/ supply reach equilibrium and nature cooperates.

 

"Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011"

Service provider revenue and capex were up significantly in 2011. Mobile data is driving the growth, but even wireline revenues are improving, with FTTx probably behind it. This should be a sustainable trend for 2012-2015: even as service providers curb expenses to improve profitability, a larger fraction of capex will be spent on equipment. New technology is critical to stay ahead of the competition.

Data centre optics had another good year with 10GBASE-T falling further behind schedule and with 100 Gigabit generating much action. This will probably get even more interesting in 2012.

Our conservative forecast for active optical cable, criticised by some vendors, was not conservative enough in 2011. It will take a while for this segment to unfold.

 

For Part 1, click here

For Part 3, click here



Reflections and predictions: 2011 & 2012 - Part 1

Gazettabyte has asked industry analysts, CEOs, executives and commentators to reflect on the last year and comment on developments they most anticipate for 2012.

 

"For 2012, the macroeconomy is likely to dominate any other developments"

Martin Geddes, telecom consultant @martingeddes

Sometimes the important stuff is slow-burning: we're seeing a continued decline in the traditional network equipment providers, and the rise of Genband, Acme, Sonus and Metaswitch in their place. These firms are smaller, leaner, and more used to serving Tier 2 and Tier 3 operators and enterprise players, with their lower cost structures. 

The recognition of the decline of SMS and telephony became mainstream in 2011 -- maybe I can close down my Telepocalypse blog as what I foresaw is reality. 

We've seen absolute declines in revenue and usage of telco voice and messaging in leading markets like Norway and the Netherlands. The creation of Telefonica Digital is a landmark reorganisation around new markets. No longer are those initiatives endlessly parked in business development whilst marketing dreams up a new price plan for minutes, messages and megabytes.

If I had to pick one thing to characterise 2011, it was the year of the App.

For 2012, the macroeconomy is likely to dominate any other developments. The scenarios are "distress", "meltdown" and "collapse".

Telecoms is well-placed to weather the storm. Even £600 smartphones may remain in vogue as people defer purchases like cars and holidays, and hide their fiscal distress with status symbols hewn out of pure blocks of profit. 

Voice will be much more prominent, after decades of languishing, as LTE sets up a complex dynamic of service innovation driven by over-the-top applications - which will increasingly come from telcos as well as telecoms outsiders. Microsoft's purchase of Skype is the one to watch - if they get it right, it joins Windows and Office in the hall of fame; get it wrong, and Microsoft is probably out of the smartphone game due to a lack of competitive differentiation and advantage.

So 2012 is the year when (mobile) voice gets vocal again - because we're going to have a lot to talk about, and want to do it much cheaper and better.

 

Brandon Collings, CTO for communications and commercial optical products at JDS Uniphase

Over the course of 2011, the tunable XFP shipped in volume and rather quickly supplanted the 300-pin transceiver. On the service/ market trend, over-the-top consumer video (Netflix) grew rapidly to become the dominant traffic on the internet.

 

"Solutions for the next generation ROADM networks - self aware networks - are now firm"

I expect the maturation of 100 Gigabit to continue through 2012 with the introduction of a number of new 100 Gigabit solutions, both from network equipment makers and at the transceiver level.

Also, the percentage of consumers using over-the-top video still seems to be relatively small, yet it is growing strongly and is already the dominant traffic on the internet. It will be interesting to see how this trend continues, as it strongly drives bandwidth with potentially unfavorable revenue models for the network operators who need to deliver it.  

Lastly, I expect that as the solutions for the next generation ROADM networks - self aware networks - are now firm, the practical assessment of the value and advantages of these networks can quantitatively take place.

 

Eve Griliches, managing partner, ACG Research @EveGr

The Juniper PTX announcement really caught the market by surprise. I'm not sure why, but clearly it rocked some folks back on their heels. Momentum for the product has been good as well. I think you can count this as a success story.  

Another one is the Infinera 500Gbps release with super-channels.  A pretty impressive technology and service providers are waiting for final product to test.   

The death of Steve Jobs rattled us all. I think it struck a note for everyone in how different he was and how he touched us all.  

 

"Content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening."

I continue to be amazed at how much optical equipment content providers [the Googles, Facebooks, MSNs of this world] are deploying and how few folks at the vendor level are doing anything about getting into their networks.  Maybe that is a 2012 thing, I don't know.

As for 2012, we'll definitely see some mergers and acquisitions - expect low acquisition prices too - and some companies exiting this market.  I love optics and it really pains me to say that, but there are just more companies out there who can't support the declining margins. I think margin erosion will be key to who survives.  

Cisco and Infinera should be bringing some cool products to market in the next six months. We hope the products are good because it will generate debate for the final vendor choices for operators such as AT&T and Verizon. 

Again, content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening. Some don't listen, and will wonder what happened.

 

Peter Jarich, service director, service provider infrastructure, mobile ecosystem, Current Analysis @pnjarich

2012 is going to be the year for LTE-Advanced (LTE-A). Why? One, vendors always like to talk up what’s next, and LTE-A is what follows LTE (Long Term Evolution). 

At the same time, operators who haven’t yet deployed LTE will want to start with the latest and greatest. Of course, LTE-A brings real advances for operators: carrier aggregation for dealing with fragmented spectrum assets; heterogeneous networks for managing the interaction of small cell and macrocell networks; relaying for improved cell edge performance.

 

Avi Shabtai, CEO of MultiPhy

The most significant development of 2011 was the availability of CMOS technology that allows next-generation optical transport solutions for 100 Gigabit. And specifically, metro-focused solutions that hit the cost and power numbers required by this industry.

On top of that, optical communication has entered the era of digital signal processing receivers. We have also seen the potential segmentation in 100 Gigabit of metro versus long-haul, each with its specific set of solutions.

 

"We will see a huge growth in video consumption. This has already started but it is just the tip of the iceberg."

The transition of the telecom and datacom market to 100 Gigabit has also begun - from the transport optical network all the way to copper backplanes - it's all a 4x25Gbps architecture. This year has also seen consolidation in the ecosystem, especially among module companies.

This consolidation will continue at all industry levels in 2012: semiconductors, subsystems, systems and the carriers. The consolidation will coincide with an across-the-board price reduction in emerging technologies like 100 Gigabit transport.

The increase in capacity demand will also force an increase in requirements for various solutions supporting 100 Gigabit. I expect to see more CMOS-based devices introduced.

From a services point of view, we will see a huge growth in video consumption. This has already started but it is just the tip of the iceberg. Video will have a tremendous influence on network evolution.

 

Gilles Garcia, director, wired communication at Xilinx @gllsgarcia

The CFP2 and CFP4 optical modules are arriving a lot faster than the CFP did after the XFP optical module. 

The CFP standard took 3-4 years to complete while the standard for the CFP2 closed after just two years. Now the CFP4 standard has been launched and is expected to take only 18 months. The new form factors are being driven by the cost-per-port of 100 Gigabit and how to reduce it. The CFP2 doubles the density compared to the CFP while the CFP4 doubles it again.
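
The density doubling can be sanity-checked with a rough sketch. The module widths below are approximate figures from the CFP MSA documents, and the usable faceplate width is an assumption; the point is simply that each generation roughly halves the module width, doubling the number of 100 Gigabit ports per line card.

```python
# Approximate CFP-family module widths in mm (from the CFP MSA documents);
# the usable faceplate width is an assumption for illustration.
MODULE_WIDTH_MM = {"CFP": 82.0, "CFP2": 41.5, "CFP4": 21.5}
FACEPLATE_MM = 450.0  # assumed usable width of a line-card faceplate

def ports_per_faceplate(form_factor):
    """How many 100G ports of this form factor fit side by side."""
    return int(FACEPLATE_MM // MODULE_WIDTH_MM[form_factor])

for ff in ("CFP", "CFP2", "CFP4"):
    print(ff, ports_per_faceplate(ff))   # 5, 10 and 20 ports respectively
```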

 

"Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and deployment of new services."

Telecom application-specific standard product (ASSP) players have been relatively quiet in 2011. Word from customers is that such vendors are pushing out their roadmap/ product availability because of too much flux in the various IEEE and ITU-T telecom standards and the difficulty of justifying the return-on-investment. This is proving a perfect opportunity for FPGAs.

Large system vendors are growing their network services as operators continue to outsource their network management and maintenance. As reported in their financial reports, this is an important source of business for the likes of Ericsson, Huawei and Alcatel-Lucent. 

This is leading the vendors to push more of their own hardware, as they look to add value-added services and integrate those services using their own platforms. Some equipment vendors realise they do not have a full portfolio and have established partnerships for the missing platforms. They are also starting to develop platforms to generate more revenue.

In 2012, I’m not expecting a telecom revolution but I do expect accelerated evolution. And I foresee big disruptions in the ASSP market as it continues to consolidate: I expect several mergers and acquisitions among the top 20 ASSP suppliers.

Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and deployment of new services. Programmability also improves time-to-market to deliver these services and reduces time-to-revenue.  

Mobile backhaul will be a market driver in 2012. The growth in mobile data terminals will lead to a new generation of mobile backhaul networks. This will drive the move from 1 to 10 Gigabit Ethernet, higher-feature packet processing, and traffic management integration into mobile infrastructure to better control and bill bandwidth usage, i.e. pay for what you use.

The 'God box' - packet optical transport systems and the like - is back, but really it is network needs that are driving this.

And one topic to watch that will become clearer in 2012 is how cloud computing impacts the networking market with regard to such issues as security, caching and higher-speed links.

Google is becoming an important internal - for its own usage - networking equipment player. And Google will be joined by others - Facebook, Amazon etc. What impact will this have on the traditional system networking vendors? Such new players are defining and building network platforms tailored to their needs. This is competition for the traditional system vendors, who are not getting this piece of the business. Semiconductors, including FPGAs, could serve those companies directly.

Other issues to note: What will Intel do in the networking space? Intel acquired Fulcrum in 2011 and has invested in several networking companies.

There are also technology issues.

What will happen to ternary content addressable memory (TCAM)? Broadcom's acquisition of NetLogic Microsystems has created a hole in the TCAM market. Will Broadcom continue with TCAM? Will customers want to give their TCAM business to Broadcom?

Xilinx has added network search engine IP to its FPGA solution portfolio as multi-core ‘search engines’ face increasing difficulty in sustaining the performance required. 

And of course there is the continual issue of power optimisation.

 

For Part 2, click here

For Part 3, click here


ROADMs: core role, modest return for component players

Next-generation reconfigurable optical add/ drop multiplexers (ROADMs) will perform an important role in simplifying network operation, but optical component vendors making the core component on which such ROADMs will be based - the wavelength-selective switch (WSS) - should expect a limited return for their efforts.

 

"[Component suppliers] are going to be under extreme constraints on pricing and cost"

Sterling Perrin, Heavy Reading

That is one finding from an upcoming report by market research firm, Heavy Reading, entitled: "The Next-Gen ROADM Opportunity: Forecast & Analysis". 

"We do see a growth opportunity [for optical component vendors]," says Sterling Perrin, senior analyst and author of the report. “But in terms of massive pools of money becoming available, it's not going to happen; it is a modest growth in spend that will go to next-generation ROADMs." 

That is because operators’ capex spending on optical will grow only in single digits annually while system vendors that supply the next-generation ROADMs will compete fiercely, including using discounting, to win this business. "All of this comes crashing down on the component suppliers, such that they are going to be under extreme constraints on pricing and cost," says Perrin.  The report will quantify the market opportunity but Heavy Reading will not discuss numbers until the report is published.

Next-generation ROADMs incorporate such features as colourless (wavelength-independence on an input port), directionless (wavelength routing to any port), contentionless (more than one same-wavelength light path accommodated at a port) and flexible spectrum (variable channel width for signal rates above 100 Gigabit-per-second (Gbps)). 
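
The flexible-spectrum feature can be made concrete with a small sketch. The ITU-T G.694.1 flexible grid describes a channel slot by two integers: the centre frequency is 193.1 THz plus n x 6.25 GHz, and the slot width is m x 12.5 GHz. The slot widths chosen below for 100Gbps and 400Gbps channels are illustrative assumptions, not figures from the report.

```python
# Sketch of the ITU-T G.694.1 flexible grid that underpins the
# 'flexible spectrum' ROADM feature: a slot is described by two
# integers, n (position) and m (width).
def flexgrid_slot(n, m):
    """Return (centre_GHz, width_GHz) for flexible-grid parameters n, m."""
    centre_ghz = 193100.0 + n * 6.25   # anchor frequency is 193.1 THz
    width_ghz = m * 12.5               # width granularity is 12.5 GHz
    return centre_ghz, width_ghz

# A 100Gbps channel fits a conventional 50 GHz slot (m = 4), while a
# 400Gbps super-channel might need a wider 75 GHz slot (m = 6) -
# hypothetical widths for illustration.
print(flexgrid_slot(8, 4))   # (193150.0, 50.0)
print(flexgrid_slot(0, 6))   # (193100.0, 75.0)
```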

Networks using such ROADMs promise to reduce service providers' operational costs. And coupled with the wide deployment of coherent optical transmission technology, next-generation ROADMs are set to finally deliver agile optical networks.

The report also finds that operators have been deploying colourless and directionless ROADMs since 2010, even though implementing such features using current 1x9 WSSs is cumbersome and expensive. Operators wanting these features in their networks have nonetheless built such systems with existing components. "Probably about 10% of the market was using colourless and directionless functions in 2010," says Perrin.

Service providers are requiring ROADMs to support flexible spectrum even though networks are unlikely to adopt light paths faster than 100Gbps (400Gbps and beyond) for several years. 

The need to implement a flexible spectrum scheme will force optical component vendors with microelectromechanical system (MEMS) technology to adopt liquid crystal technology – and liquid-crystal-on-silicon (LCoS) in particular - for their WSSs (see Comments). "MEMS WSS technology is great for all the stuff we do today - colourless, directionless and contentionless - but when you move to flexible spectrum it is not capable of doing that function," says Perrin. "The technology they (vendors with MEMS technology) have set their sights on - and which there is pretty much agreement as the right technology for flexible spectrum - is the liquid crystal on silicon."  A shift from MEMS to LCoS for next-generation ROADM technology is thus to be expected, he says.

Perrin also highlights how coherent detection technology, now being installed for 100 Gbps optical transmission, can also implement a colourless ROADM by making use of the tunable nature of the coherent receiver.  "It knocks out a bunch of WSSs added to the add/ drop," says Perrin. "It is giving a colourless function for free, which is a huge advantage."

Perrin views next-gen ROADMs as a money-saving exercise for the operators, not a money-making one. "This is hitting on the capex as well as the opex piece, which is absolutely critical," he says. "You see the charts of the hockey stick of bandwidth growth and flat revenue growth; that is what ROADMs hit at." 

The Heavy Reading report will be published later this month. 

 

Further reading:

Capella: Why the ROADM market is a good place to be

Q&A with JDSU's CTO


New editorial calendar for 2012

Gazettabyte has posted the in-depth briefing features planned for 2012.  

Please click here for details. 


Boosting the 100 Gigabit addressable market

Alcatel-Lucent has enhanced the optical performance of its 100 Gigabit technology with the launch of its extended reach (100G XR) line card. Extending the reach of 100 Gigabit systems helps make the technology more attractive compared with existing 40 Gigabit optical transport. 

 

"We have built some rather large [data centre to data centre] networks with spans larger than 1,000km in totality"  

Sam Bucci, Alcatel-Lucent

Used with the Alcatel-Lucent 1830 Photonic Service Switch, the line card improves optical transmission performance by 30% through fine-tuning of the algorithm that runs on its coherent receiver ASIC. The system vendor says the typical optical reach extends to 2,000km. 

When Alcatel-Lucent first announced its 100 Gigabit technology in June 2010, it claimed a reach of 1,500-2,000km. Now this upper reach limit is met for most networking scenarios with the extended reach performance. 
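
As a rough consistency check on those figures, the sketch below assumes, as a simplification, that the quoted 30% performance gain maps linearly onto reach, starting from the lower end of the originally claimed range.

```python
# Back-of-the-envelope reach estimate; the linear mapping from a 30%
# performance gain to reach is a simplifying assumption.
def extended_reach_km(baseline_km, gain=0.30):
    """Scale a baseline reach by a fractional performance gain."""
    return baseline_km * (1 + gain)

print(extended_reach_km(1500))   # roughly 1950 km, near the quoted 2,000km
```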

"By announcing the extended reach, Alcatel-Lucent is able to highlight the 2,000km reach as well as draw attention to the fact that it has many deployments already, and that some of those customers are using 100 Gig in 1,000km+ applications," says Sterling Perrin, senior analyst at Heavy Reading.

Market research firm Ovum views the 100G XR announcement as a specific evolutionary improvement.

"But it is significant in that it makes the case for 100 Gig versus 40 Gig more attractive for terrestrial longer-reach applications," says Dana Cooperson, network infrastructure practice leader at Ovum. “The higher the performance vendors can make 100 Gig for more demanding applications - bad fiber, ultra long-haul and ultimately submarine - the quicker it will eclipse 40 Gig.” That said, Ovum does not expect 40 Gig to be eclipsed anytime soon. 

 

100G XR

The line card's improved optical performance equates to transmission across longer fibre spans before optical regeneration is required. This, says the vendor, saves on equipment cost, power and space. 

More complex network topologies can also be implemented such as mesh networks where the signal can encounter varying-length paths based on differing fibre types as well as multiple ROADM stages. Alcatel-Lucent says it has implemented a 1,700km link with 20 amplifiers and seven ROADM stages without the need for signal regeneration.

The improved optical performance of the 100G XR has been achieved without changing the line card's hardware. The card uses the same analogue-to-digital converter, digital signal processor (DSP) ASIC and forward error correction scheme as its existing 100 Gigabit line card. 

What has changed is the dispersion compensation algorithm that runs on the DSP, making use of the experience Alcatel-Lucent has gained from existing 100 Gigabit deployments. 

"We can tune various parameters, such as power and the way it [the algorithm] deals with impairments," says Sam Bucci, vice president, terrestrial portfolio management at Alcatel-Lucent. In particular the 100G XR has increased tolerance to polarisation mode dispersion and non-linear impairments.

Cooperson says Alcatel-Lucent has adjusted the receiver ASIC performance after 'mining' data from coherent deployments, something the company is used to doing with its wireless networks. She says Alcatel-Lucent has also worked closely with component vendors to achieve the improved performance. 

Perrin points out that Alcatel-Lucent's 100 Gig design uses a single laser while Ciena's system is dual laser. "Alcatel-Lucent is saying that over an identical plant the two-laser approach has no distance advantages over the one laser approach," he says. However, other system vendors have announced distances at and beyond 2,000km. "So Alcatel-Lucent's enhanced system is not record-setting," says Perrin.

 

100 Gigabit Market 

Alcatel-Lucent says it has more than 45 deployments comprising over 1,200 100 Gig lines since the launch of its 100 Gigabit system in 2010.

"It appears that Alcatel-Lucent has shipped more 100G line cards than anyone," says Cooperson. "Alcatel-Lucent has a good opportunity to make some serious 100 Gig inroads here, along with Ciena, while everyone else gears up to get their solutions to market in 2012." 

Cooperson also says the 100G XR announcement dovetails nicely with Alcatel-Lucent’s recent CloudBand announcement. Indeed, Bucci says that its deployments of 100 Gig include connecting data centres: "We have built some rather large [data centre to data centre] networks with spans larger than 1,000km in totality."

The 100G XR card is being tested by customers and will be generally available starting December 2011.
