ECOC 2009: Squeezing optics out of optical communications
An interview with Polina Bayvel, Professor of Optical Communications and Networks and head of the Optical Networks Group at University College London (UCL), on her impressions of the ECOC conference.
What did you find noteworthy at ECOC 2009?
PB: So much work on digital signal processing and coherent detection... will these techniques lead to another revolution in fibre optics? But there is still much to understand about how to design the DSP algorithms and how best to match them to appropriate fibre maps in an implementable way.
Did anything at the conference surprise you?
PB: Is there really a capacity crunch, or is it a cost crunch, and who will end up paying? There is much work on new fibres and new DSP, but why is no-one looking at new amplifiers?
What did you learn from ECOC?
PB: I learnt how little progress has been made in all-optical networking - the well-trodden ideas and arguments on wavelength routing, which have been circulating for over 15 years, are not being taken up by operators but are being re-discovered and re-offered as new... and just how conservative the operators still are, except those in Japan.
Did you see or hear anything that gives reason for industry optimism?
PB: Lots of buzz about linear and nonlinear DSP, error correcting codes, net coding gain, FPGAs and many other developments which, whilst invigorating the industry, are squeezing optics out of optical communications. Here's to the fightback for optics!
ECOC 2009: An industry view
“Most of the action was in 40 and 100 Gigabit,” said Stefan Rochus, vice president of marketing and business development at CyOptics. “There were many 40/100 Gigabit LR4 module announcements - from Finisar, Opnext and Sumitomo [Electric Industries].”
Daryl Inniss, practice leader, components at market research firm Ovum, noted a market shift regarding 40 Gigabit. “There has been substantial progress in lowering the cost, size and power consumption of 40 Gigabit technology,” he said.
John Sitch, Nortel’s senior advisor for optical development, metro Ethernet networks, highlighted the prevalence of, and interest in, coherent detection/digital signal processing designs for 40 and 100 Gigabit per second (Gbps) transmission. Renewed interest in submarine transmission was also evident, he said.
Rochus also highlighted photonic integration as a show theme, citing the multi-source agreement from u2t Photonics and Picometrix; the integrated DPSK receiver involving Optoplex, u2t Photonics and Enablence Technologies; and CIP Technologies' monolithically integrated semiconductor optical amplifier with a reflective electro-absorption modulator.
Intriguingly, Rochus also heard talk of OEMs becoming vertically integrated again. “This time maybe by strategic partnerships rather than OEMs directly owning fabs,” he said.
Attendees were also surprised by the strong turnout at ECOC, which had been expected to suffer given the state of the economy. “Attendance appeared to be thick and enthusiasm strong,” said Andrew Schmitt, directing analyst, optical at Infonetics Research. “I heard the organisers were expecting 200 people on the Sunday [for the workshops] but got 400.”
In general, most of the developments at the show were as expected. “No big surprises, but the ongoing delays in getting actual 100 Gigabit CFP modules were a small surprise,” said Sitch. “And if everyone's telling the truth, there will be plenty of competition in 100 Gigabit.”
Inniss was struck by how 100 Gigabit technology is likely to fare: “The feeling regarding 100 Gigabit is that it is around the corner and that 40 Gigabit will somehow be subsumed,” he said. “I’m not so sure – 40 Gigabit is growing up and while operators are cheerleading 100 Gigabit technology, it doesn’t mean they will buy – let’s be realistic here.”
As for the outlook, Rochus believes the industry has reason to be upbeat. “There is optimism regarding the third and fourth quarters for most people,” he said. “Inventories are depleted and carriers and enterprises are spending again.”
Inniss’ optimism stems from the industry's longer term prospects. He was struck by a quote used by ECOC speaker George Gilder: “Don’t solve problems, pursue opportunities.”
Network traffic continues to grow at 40-50% a year, yet some companies still worry about taking a penny out of cost, said Inniss, when the end goal is solving the bandwidth problem.
For him 100 Gbps is just a data rate, as 400 Gbps will be the data rate that follows. But given the traffic growth, the opportunity revolves around transforming data transmission. “For optical component companies innovation is the only way," said Inniss. "What is required here is not a linear, incremental solution."
40G and 100G Ethernet: First uses of the high-speed interfaces
Operators, enterprises and equipment vendors are all embracing 100 Gigabit technologies even though the standards will not be completed until June 2010.

Comcast and Verizon have said they will use 100Gbit/s transmission technology once it is available. Juniper Networks demonstrated a 100 Gigabit Ethernet (GbE) interface on its T1600 core router in June, while in May Ciena announced it will supply 100Gbit/s transmission technology to NYSE Euronext to connect its data centers.
Ciena’s technology is for long-haul transmission, outside the remit of the IEEE’s P802.3ba Task Force’s standards work defining 40GbE and 100GbE interfaces. But the two are clearly linked: the emergence of the Ethernet interfaces will drive 100Gbit/s long-haul transmission.
ADVA Optical Networking foresees two applications for metro and long-haul 100Gbps transmission: carrying 100Gbps IP router links, and multiplexing 10Gbps streams into a 100Gbps lightpath. “We see both: for router and switch interfaces, and to improve fibre bandwidth,” says Klaus Grobe, principal engineer at ADVA Optical Networking.
The trigger for 40Gbit/s market adoption was the advent of OC-768 SONET/SDH 2km reach interfaces on IP core routers. In contrast, 40GbE and 100GbE interfaces will be used more broadly. As well as IP routers and multiplexing operators’ traffic, the standards will be used across the data centre, to interconnect high-end switches and for high-performance computing.
The IEEE Task Force is specifying several 40GbE and 100GbE standards, with copper-based interfaces used for extreme short reach, while optical interfaces address reaches of 100m, 10km and 40km.
For 100m short-reach links, multimode fibre is used: four fibres at 10Gbps in each direction for 40GbE, and ten fibres at 10Gbps in each direction for 100GbE interfaces. For 40 and 100GbE 10km long reach links, and for 100GbE 40km extended reach, single mode fibre is used. Here 4x10Gbps (40GbE) and 4x25Gbps (100GbE) are carried over a single fibre using wavelength division multiplexing (WDM).
“Short reach optics at 100Gigabit uses a 10x10 electrical interface that drives 10x10 optics,” says John D’Ambrosia, chair of the IEEE P802.3ba Task Force. “The first generation of 100GBASE-L/ER optics uses a 10x10 electrical interface that then goes to 4x25 WDM optics.”
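To make the lane arithmetic concrete, the sketch below tabulates the optical PHY variants described above. The names and figures summarise the draft 802.3ba work as reported in this article and should be read as illustrative rather than normative.

```python
# Illustrative summary of the IEEE P802.3ba optical PHY lane structures
# described in the article. Names and figures reflect the draft work as
# reported here; treat this as a sketch, not a normative reference.

PHYS = {
    # name:           (reach,  medium,               lanes, gbps_per_lane)
    "40GBASE-SR4":    ("100m", "parallel multimode",  4,    10),
    "100GBASE-SR10":  ("100m", "parallel multimode",  10,   10),
    "40GBASE-LR4":    ("10km", "single mode, WDM",    4,    10),
    "100GBASE-LR4":   ("10km", "single mode, WDM",    4,    25),
    "100GBASE-ER4":   ("40km", "single mode, WDM",    4,    25),
}

for name, (reach, medium, lanes, rate) in PHYS.items():
    print(f"{name}: {lanes} x {rate}Gbps over {medium}, {reach} reach "
          f"= {lanes * rate}Gbps aggregate")
```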
The short reach interfaces reuse 10Gbps VCSEL and receiver technology and are designed for high density, power-sensitive applications. “The IEEE chose to keep the reach to 100m to give a low cost solution that hits the biggest market,” says D’Ambrosia, although he admits that a 100m reach is limiting for certain customers.
Cisco Systems agrees. “Short reach will limit you,” says Ian Hood, Cisco’s senior product marketing manager for service provider routing and switching. “It will barely get across the central office but it can be used to extend capacity within the same rack.” For this reason Cisco favours longer reach interfaces but will use short reach ‘where convenient’.
D’Ambrosia would not be surprised if a 1 to 2km single mode fibre variant were developed, though not as part of the current standards. Meanwhile, the Ethernet Alliance has called for an industry discussion on a 40Gbps serial initiative.
Within the data centre, both 40GbE and 100GbE reaches have a role.
A two-layer switching hierarchy is commonly used in data centres. Servers connect to top-of-rack switches that funnel traffic to aggregation switches that, in turn, pass traffic to the core switches. Top-of-rack switches will continue to receive 1GbE and 10GbE streams for a while yet but the interface to aggregation switches will likely be 40GbE. In turn, aggregation switches will receive 40GbE streams and use either 40GbE or 100GbE to interface to the core switches. Not surprisingly, first use of 100GbE interfaces will be to interconnect core Ethernet switches.
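A minimal sketch of that two-layer hierarchy, with the uplink rates the article anticipates for each tier; the rates are indicative assumptions, not a specification.

```python
# A minimal sketch of the two-layer data-centre switching hierarchy described
# above, with the uplink rates the article anticipates for each tier. The
# rates are indicative assumptions, not a specification.

from dataclasses import dataclass

@dataclass
class Link:
    src: str
    dst: str
    rate_gbe: int  # anticipated Ethernet rate of the uplink

HIERARCHY = [
    Link("server",             "top-of-rack switch", 10),   # 1GbE/10GbE for a while yet
    Link("top-of-rack switch", "aggregation switch", 40),   # likely 40GbE
    Link("aggregation switch", "core switch",        100),  # 40GbE or 100GbE
]

for link in HIERARCHY:
    print(f"{link.src} -> {link.dst}: {link.rate_gbe}GbE")
```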
Extended reach 100GbE interfaces will be used to connect equipment up to 40km apart, between two data centres for example, but only when a single 100GbE link over the fibre pair is sufficient. Otherwise dense WDM technology will be used.
Servers will take longer to migrate to 40 and 100GbE. “There are no 40GbE interfaces on servers,” says Daryl Inniss, Ovum’s vice president and practice leader components. “Ten gigabit interfaces only started to be used [on servers] last year.” Yet the IT manager in one leading German computing centre, albeit an early adopter, told ADVA that he could already justify using a 40GbE server interface and expected 100GbE interfaces would be needed by 2012.
Two pluggable form factors have already been announced for 100GbE. The CFP supports all three link distances and has been designed with long-haul transmission in mind, says Matt Traverso, senior manager of technical marketing at Opnext. The second, the CXP, is designed for compact short reach interfaces. For 40GbE more work is needed.
The core router card Juniper announced uses the CFP to implement a 100m connection. Juniper’s CFP is being used to connect the router to a DWDM platform for IP traffic transmission between points-of-presence, and for data centre trunking.
So will one 40GbE or 100GbE interface standard dominate early demand? Opnext’s Traverso thinks not. “All the early adopters have one or two favourite interfaces – high-performance computing favours 40 and 100GbE short reach while for core routers it is long reach 100GbE,” he says. Module makers will serve the early adopters’ chosen interfaces first, he adds, before rounding out their portfolios.
This article appeared in the exhibition magazine at ECOC 2009.
Next-Gen PON: An interview with BT
Peter Bell, Access Platform Director, BT Innovate & Design
Q: What is the status of 10 Gigabit PON – 10G EPON and 10G GPON (XG-PON) – its applications, where it is likely to be used, and why is it needed?
PB: IEEE 10G EPON: BT is not directly involved but we have been tracking it and believe the standard is close to completion. (gazettabyte: The standard was ratified in September 2009.)
ITU-T 10Gbps PON: This has been worked on in the Full Service Access Network group (FSAN) where it became known as XG-PON. The first version XG-PON1 is 10Gbps downstream and 2.5Gbps upstream and work has started on this in ITU-T with a view to completion in the 2010 timeframe. The second version XG-PON2 is 10Gbps symmetrical and would follow later.
This is not specific to BT’s plans, but an operator may use 10Gbps PON where its higher capacity justifies the extra cost: for example, business customers, or feeding multi-dwelling units (MDUs) or VDSL street cabinets.
Q: BT's interest in WDM-PON and how would it use it?
PB: BT is actively researching WDM-PON. In a paper presented at the ECOC '09 conference in Vienna (24th September 2009), we reported the operation of a compact DWDM comb source on an integrated platform in a 32-channel, 50km WDM-PON system using 1.25Gbps reflective modulation.
We see WDM-PON as a longer term solution providing significantly higher capacity than GPON. As such we are interested in the 1Gbps per wavelength variants of WDM-PON and not the 100Mbps per wavelength variants.
Q: FSAN has two areas of research regarding NG-PON. What is the status of this work?
PB: NG-PON1 work is focussed on 10 Gbps PON (known as XG-PON) and has advanced quite quickly into standardisation in ITU-T.
NG-PON2 work is longer term and is progressing in parallel with NG-PON1.
Q: What are BT's activities in next-gen PON – 10G PON and WDM-PON?
PB: It is fair to say BT has led research on 10Gbps PONs. For example, in an early 10Gbps PON paper by Nesset et al at ECOC 2005, we documented the first error-free physical layer transmission at 10Gbps, upstream and downstream, over a 100km reach PON architecture.
We then partnered with vendors to achieve early proof-of-concepts via two EU funded collaborations.
Firstly, in MUSE we collaborated with NSN and others to deliver essentially the first proof-of-concept of what has become known as XG-PON1 (see attached long reach PON paper).
Secondly, our work with NSN, Alcatel-Lucent et al on 10Gbps symmetrical hybrid WDM/TDMA PONs in EU project PIEMAN has very recently been completed.
Q: What are the technical challenges associated with 10G PON and especially WDM-PON?
For 10Gbps PONs in general the technical challenges are:
- Achieving the same loss budgets (and hence reach) as GPON, despite operating at a higher bit rate, without pushing up the cost.
- Coexistence on the same fibres as GPON to aid migration.
- For the specific case of 10Gbps symmetrical operation (XG-PON2), the 10Gbps burst-mode receiver needed in the headend is especially challenging. This has been a major achievement of our work in PIEMAN.
For WDM-PONs the technical challenges are:
- Reducing the cost and footprint of the headend equipment (this requires optical component innovation).
- Standardisation, to increase volumes of WDM-PON-specific optical components and thereby reduce costs.
- Upgrading from a live GPON/EPON network to WDM-PON (e.g. changing the splitter technology).
Q: There are several ways in which WDM-PON can be implemented. Does BT favour one, and why? Or is it less fussed about the implementation and more about meeting its cost points?
PB: We are only interested in WDM-PONs giving 1Gbps per wavelength or more and not the 100Mbps per wavelength variants. In terms of detailed implementation we would support the variant giving lowest cost, footprint and power consumption.
Q: What has been happening with BT's Long Reach PON work?
PB: We have done a lot of work on the long reach PON concept, which is summarised in a review paper published in IEEE JLT and includes details of our work to prototype a next-generation PON capable of 10Gbps, 100km reach and a 512-way split. This work includes the EU collaborations MUSE and PIEMAN.
From a technical perspective, Class B+ and C+ GPON (G.984.2) could reach a high percentage of UK customers from a significantly reduced number of BT exchanges. Longer reach PONs would then increase the coverage further.
Following our widely published work on amplified GPON, extended reach GPON has now been standardised (G.984.6) with a 60km reach and 128-way split, and some vendors have early products. The 10Gbps PON standards are expected to have the same reach as GPON.
BroadLight awarded a dynamic bandwidth allocation patent
Passive optical networking (PON) chip company Broadlight has been awarded a patent by the US Patent Office entitled ‘Method and grant scheduler for cyclically allocating time slots to optical network units’.
Why is this important?
Dynamic bandwidth allocation (DBA) performs a key role in point-to-multipoint PON networks. A PON comprises an optical line terminal (OLT) at an operator’s central office connected to several optical network units (ONUs) via fibre. An ONU typically resides in the building basement or in a home.
The OLT broadcasts data downstream to the ONUs. In a gigabit PON (GPON), the downstream data rate is 2.5Gbps. Each ONU identifies the data meant for it using a unique packet header. In the upstream path – 1.25Gbps for GPON – only one ONU transmits at a time.
DBA is needed to make efficient use of the uplink capacity by assigning slots when each ONU can transmit its data. DBA must also take into account quality of service (QoS) requirements associated with the various traffic types (video, voice and data). “DBA increases revenue for the network provider by ensuring that bandwidth is not wasted,” says Eli Elmoalem, a system architect at Broadlight.
Method used:
Broadlight’s patent implements two approaches to DBA. The first, dubbed status reporting DBA, involves periodically polling the ONUs to determine their latest traffic needs. The second approach - traffic monitoring DBA – requires the OLT to run an algorithm that predicts the ONUs’ bandwidth needs based on their traffic bandwidth usage history.
Broadlight’s patented technique for GPON runs either or both approaches to determine how much bandwidth to allocate to each ONU. The patent also details how best to partition the tasks between the OLT silicon and software executed on the chip.
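To make the two approaches concrete, here is a minimal, hypothetical grant-scheduler sketch. It is not Broadlight's patented algorithm or the GPON specification's mechanism; it simply combines a status-reporting input (the ONUs' queue reports) with a traffic-monitoring prediction (an average of recent usage) and shares the upstream cycle proportionally when demand exceeds capacity.

```python
# A minimal, hypothetical DBA grant scheduler illustrating the two approaches
# described above. This is an illustrative sketch, not Broadlight's patented
# algorithm or the GPON standard's mechanism.

from collections import deque

UPSTREAM_CAPACITY = 1_244_000_000 // 8   # GPON upstream (~1.25Gbps) in bytes/s
CYCLE_SECONDS = 0.002                    # assumed 2ms grant cycle
CYCLE_BYTES = int(UPSTREAM_CAPACITY * CYCLE_SECONDS)

class Onu:
    def __init__(self, onu_id):
        self.onu_id = onu_id
        self.reported_queue = 0          # bytes reported by the ONU (status reporting)
        self.history = deque(maxlen=8)   # recent per-cycle usage (traffic monitoring)

    def predicted_demand(self):
        # Traffic-monitoring DBA: predict demand from the usage history.
        return int(sum(self.history) / len(self.history)) if self.history else 0

def allocate(onus):
    """Grant upstream bytes per ONU for the next cycle, combining both approaches."""
    demand = {o.onu_id: max(o.reported_queue, o.predicted_demand()) for o in onus}
    total = sum(demand.values())
    if total <= CYCLE_BYTES:
        return demand                                   # everyone gets what they asked for
    # Oversubscribed: share the cycle in proportion to demand.
    return {oid: d * CYCLE_BYTES // total for oid, d in demand.items()}

# Example: three ONUs with different reported queues and usage histories.
onus = [Onu(i) for i in range(3)]
onus[0].reported_queue = 200_000
onus[1].history.extend([50_000, 60_000])
onus[2].reported_queue = 500_000
print(allocate(onus))
```

A real OLT scheduler would also weight the grants by the per-ONU service level agreements and the QoS classes mentioned above.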
This is Broadlight’s second DBA patent award. The first, entitled “Method of providing QoS and bandwidth allocation in a point to multi-point network”, describes a generic DBA approach, says Eli Weitz, Broadlight’s CTO, applicable to any point-to-multipoint network, whether cable, Broadband PON (BPON), GPON or Ethernet PON (EPON).
What next?
Developing DBA for 10G GPON. The 10G PON development work is being undertaken by the Full Service Access Network (FSAN) group and will be standardised by the ITU-T.
DBA for 10G GPON will be more demanding: the split ratio – the number of ONUs served by one OLT – is higher, with as many as 512 ONUs per PON, as is the upstream bandwidth. For 10G GPON, two upstream rates are being proposed: 2.5Gbps and 10Gbps.
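As a rough, illustrative calculation of why this is harder (the 2ms grant cycle is an assumption, not part of any standard), the per-ONU average bandwidth and per-cycle grant size shrink quickly as the split ratio and upstream rate scale:

```python
# Rough arithmetic on 10G GPON DBA demands, based on the split ratios and
# upstream rates mentioned above. The 2ms grant cycle is an illustrative
# assumption, not part of any standard.

CYCLE_SECONDS = 0.002  # assumed DBA grant cycle

for upstream_gbps in (2.5, 10.0):
    for onus in (128, 512):
        avg_mbps = upstream_gbps * 1000 / onus
        bytes_per_cycle = upstream_gbps * 1e9 / 8 * CYCLE_SECONDS / onus
        print(f"{upstream_gbps}Gbps upstream, {onus} ONUs: "
              f"~{avg_mbps:.1f}Mbps average per ONU, "
              f"~{bytes_per_cycle:,.0f} bytes per ONU per 2ms cycle")
```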
References:
[1] “Predictive DBA: The ‘Right’ Method for Dynamic Bandwidth Allocation in Point-to-MultiPoint FTTH Networks,” white paper, Broadlight.
[2] “The Importance of Dynamic Bandwidth Allocation in GPON Networks,” white paper, PMC-Sierra.
[3] “A Comparison of Dynamic Bandwidth Allocation for EPON, GPON and Next Generation TDM PON,” IEEE Communications Magazine, March 2009.
OneChip solution for Fibre-To-The-Home
An interview with Jim Hjartarson, CEO of OneChip Photonics.
Q. In March 2009, OneChip raised $19.5m. How difficult is it nowadays for an optical component firm to receive venture capital funding?
A. Clearly, the venture capital community, given the current macroeconomic environment, is being selective about the new investments it makes in the technology market in general, and photonics in particular. However, if you can demonstrate that you have a unique approach to a problem that has not yet been solved, and that there is a large, untapped market opportunity, VCs will be interested in your value proposition.
Q. What is it about your company's business plan that secured the investment?
A. We believe OneChip Photonics has three fundamental advantages that resulted in our securing our initial two rounds of funding, which totaled $19.5 million:
- A truly breakthrough approach and technology that will remove the cost and performance barriers that have been impeding the ubiquitous deployment of Fiber-to-the-Home (FTTH) and enable new business and consumer broadband applications.
- A large, untapped market opportunity. Ovum estimates that the FTTx optical transceiver market will grow from $387 million by the end of 2009 to $594 million by the end of 2013. OneChip also is poised to introduce photonics integration into other high-volume business and consumer markets, where our breakthrough photonic integrated circuit (PIC) technology can reduce costs and improve performance. These markets could be orders of magnitude larger than the FTTx optical transceiver market.
- A seasoned and successful management team. OneChip has attracted top talent – from industry leading companies such as MetroPhotonics, Bookham, Catena Networks, Fiberxon, Nortel and Teknovus – who have successful track records of designing, manufacturing, marketing and selling transceivers, PICs and mass-market broadband access solutions.
Q. The passive optical networking (PON) transceiver market faces considerable pricing pressures. Companies use TO cans and manual labour or more sophisticated hybrid integration where the laser and photodetectors are dropped onto a common platform to meet various PON transceiver specifications. Why is OneChip pursuing indium phosphide-based monolithic integration and why will such an approach be cheaper than a hybrid platform that can address several PON standards?
A. Most current FTTH transceiver providers base their transceivers on either discrete optics or planar lightwave circuit (PLC) designs. These designs offer low levels of integration and require assembly from multiple parts. There is little technical differentiation among them. Rather, vendors must compete on the basis of who can assemble the parts in a slightly cheaper fashion. And there is little opportunity to further reduce such costs.
While more integrated than fully discrete optics-based designs, PLC designs still require discrete active components and the assembly of as many as 10 parts. Great care must be taken, during the manufacturing process, to align all parts of the transceiver correctly. And while packaging can be non-hermetic, these parts can fall out of alignment through thermal or mechanical stress. PLC designs also have proven to be an expensive alternative. For all of these reasons, the PON system vendors with which OneChip has engaged have indicated that they are not interested in deploying PLC-based designs.
OneChip Photonics is taking a new approach with its breakthrough PIC technology. OneChip is monolithically integrating all the functions required for an optical transceiver onto a single, indium phosphide (InP)-based chip. All active AND passive components of the chip – including the distributed-feedback (DFB) laser, optically pre-amplified detector (OPAD), wavelength splitter (WS), spot-size converter (SSC), and various elements of passive waveguide circuitry – are, uniquely, integrated in one epitaxial growth step, without re-growth or post-growth modification of the epitaxial material.
With respect to transmit performance, OneChip’s single-frequency DFB lasers will offer a superior performance – much more suitable for longer-reach and higher bit-rate applications – than competing Fabry-Perot (FP) lasers. With respect to receive performance, OneChip’s optically pre-amplified detector design is a higher gain-bandwidth solution than competing avalanche photodiode (APD) solutions. It also is a lower-cost solution, as it does not require a high-voltage power source.
OneChip’s monolithic photonic integrated circuits (PICs) have the smallest footprint on the market, the optical parts are aligned for life, and the parts are highly robust (resistant to vibration and other outside elements). Further, OneChip’s PICs are designed for automated mounting on a silicon optical bench, without requiring active alignment, using industry-standard, automated assembly processes – resulting in high yields of good devices.
Utilizing automated production processes, OneChip can maintain the highest production scalability (easily ramping up and down) in the industry and respond rapidly to customer needs. Standard production processes also mean reliable supplies to customers, at the lowest prices on the market.
Q. Several companies have explored integrated PON solutions and have either dismissed the idea or have come to market with impressive integrated designs only to ultimately fail (e.g. Xponent Photonics). Why are you confident OneChip will fare better?
A. As noted earlier, PLC designs developed by vendors such as Xponent are not fully integrated. PLC designs still require discrete active components and the assembly of as many as 10 parts, using a glass substrate. This results in poor yields and high costs.
OneChip is taking a fundamentally different approach. We are the only company in the optical access market that is monolithically integrating all the active and passive functions required for an optical transceiver onto a single, indium phosphide (InP)-based chip. This enables us to achieve low cost, high performance, high yields and high quality.
OneChip is one of only a few companies with new core intellectual property and advanced technology in the optical transceiver business that can sustain a competitive advantage over other optical component providers, which rely on conventional technology and assembly processes. Carriers and system providers recognize that an approach that eliminates assembly from multiple parts is needed to lower the cost and improve the performance of transceivers, Optical Network Terminals (ONTs) and Optical Line Terminals (OLTs) in optical access networks. We believe OneChip’s fully integrated technology can help unleash the potential of FTTH and other mass-market optical communications applications.
Q. If integrated PON is a good idea why, in OneChip’s opinion, have silicon photonics startups so far ignored this market?
A. “Silicon photonics” designs face the inherent limitation that a laser cannot be implemented in silicon. Therefore, separate optical and electrical devices must be grown with different processes and then assembled together. With as many as 10 parts having to be interconnected on a ceramic substrate, the alignment, tuning and reliability issues can significantly add costs and reduce yields.
In addition, system providers and service providers need to be cognizant of the inherent performance limitations with transceivers built from discrete parts. While short-reach EPON transceivers already have been optimized down to below a U.S. $15 price, these implementations can only meet low-end performance requirements. Networks would require a switch to more costly transceivers to support longer-range EPON, 2.5G EPON, GPON or 10G PON. Because most service providers are looking to reap the payback benefits of their investments in fiber installations/retrofits over the shortest possible timeframes, it doesn’t make sense to risk adding the high cost of a forklift changeover of transceiver technology at some point during the payback period.
Q. PON with its high volumes has always been viewed as the first likely market for photonic integrated circuits (PICs). What will be the second?
A. OneChip recognizes that optical communications is becoming economically and technologically mandatory in areas outside of traditional telecommunications, such as optical interconnections in data centers and other short to ultra-short reach broadband optical networks. OneChip is poised to introduce photonics integration into these and other high-volume business and consumer markets, where our PIC technology can reduce costs and improve performance.
Cloud computing: where telecoms and IT collide
Originally appeared in FibreSystems - May 21st 2009
IT directors worldwide are considering whether it makes financial sense to move their computing resources offsite into the 'cloud'. Roy Rubenstein assesses the potential opportunities for network operators and equipment vendors.
Cloud computing is the latest industry attempt to merge computing with networking. While previous efforts have all failed, the gathering evidence suggests that cloud computing may have got things right this time. Indeed it is set to have a marked effect on how enterprises do business, while driving the growth of network traffic and new switch architectures for the data centre.
In the mid-1990s, Oracle proposed putting computing within a network, and coined the term "network computer". The idea centred on a diskless desktop for businesses on which applications were served. The concept failed in its bid to dislodge Intel and Microsoft, but was resurrected during the dot-com boom with the advent of application service providers (ASPs).
ASPs delivered computer-based services to enterprises over the network. The ASPs faltered partly because applications were adapted from existing ones rather than being developed with the web in mind. Equally, the ASPs' business models were immature, and broadband access was in scarce supply. But the idea has since taken hold in the shape of software-as-a-service (SaaS). SaaS provides enterprises with business software on demand over the web, so a firm does not need to buy and maintain that software on its own platforms.
SaaS can be viewed as part of a bigger trend in cloud computing. Examples of cloud services include Google's applications such as e-mail and online storage, and Amazon with its Elastic Compute Cloud service, where application developers configure the computing resources they need.
Cloudy thinking
The impact of cloud starts and finishes in the IT sector. "Cloud computing is not just [for] Web 2.0 companies, it is a game-changer for the IT industry," said Dennis Quan, director of IBM's software group. "In general it's about massively scalable IT services delivered over the network."
An ecosystem of other players is required to make cloud happen. The early movers in this respect are data-centre owners and IT services companies like Amazon and IBM, and the suppliers of data-centre hardware, which include router vendors Cisco Systems and Juniper Networks, and Ethernet switch makers such as Extreme Networks and Force10 Networks.
Telecommunications carriers too are jumping on the bandwagon, which is not surprising given their experience as providers of hosting and managed services coupled with the networking expertise needed for cloud computing. International carrier AT&T, for instance, launched its Synaptic Hosting service in August 2008, a cloud-based, on-demand managed service where enterprises define their networking, computing and storage requirements, and pay for what they use. "There is a base-level platform for the [enterprise's] steady-state need, but users can tune up and tune down [resources] as required," explained Steve Caniano, vice-president, hosting and application services at AT&T.
"The top 10 operators in Europe are all adding utility-based offerings [such as storage and computing], and are moving to cloud computing by adding management and provisioning on top," said Alfredo Nulli, solutions manager for service provision at Cisco. However, it is the second- and third-tier operators in Europe that are "really going for cloud", he says, as they strive to compete with the likes of Amazon and steal a march on the big carriers.
The idea of using IT resources on a pay-as-you-go basis rather than buying platforms for in-house use is appealing to companies, especially in the current economic climate. "Enterprises are tired of over-provisioning by 150% only for equipment to sit idle and burn power," said Steve Garrison, vice-president of marketing at Force10 Networks.
Danny Dicks, an independent consultant and author of a recent Light Reading Insider report on cloud computing, agrees. But he stresses it is a huge jump from using cloud computing for application development to an enterprise moving its entire operations into the cloud. For a start-up developing and testing an application, the cost and scalability benefits of cloud are so great that it makes a huge amount of sense, he says. Once an application is running and has users, however, an enterprise is then dependent on the reliability of the connection. "No-one would worry if a Facebook application went down for an hour but it would make a big difference to an enterprise offering financial services," he commented.
The network perspective
The importance of the network, and the implied demand for bandwidth as more and more applications and IT resources sit somewhere remote from the user, is good news for operators and equipment makers.
If done right, there is a tremendous opportunity for telecoms operators to increase the value of their networks and create new revenue streams. At a minimum, cloud computing is expected to increase the amount of traffic on their networks.
Service providers stress the need for high-bandwidth, low-latency links to support cloud-based services. AT&T has 38 data centres worldwide, which are connected via its 40 Gbit/s MPLS global backbone network, says Gregg Sexton, AT&T's director of product development. The carrier is concentrating Synaptic Hosting applications in five "super data centres" located across three continents, linked using its OPT-E-WAN virtual private LAN service (VPLS). Using the VPLS, enterprise customers can easily change bandwidth assigned between sites and to particular virtual LANs.
BT, which describes its data centres and network as a "global cloud", also highlights the potential need for higher capacity links. "The big question we are asking ourselves is whether to go to 40 Gbit/s or wait for 100 Gbit/s," said Tim Hubbard, head of technology futures, BT Design.
Likewise, systems vendors are seeing the impact of cloud computing. Ciena first noted interest from large data-centre players seeking high-capacity links some 12 to 24 months ago. "It wasn't a step jump, more an incremental change in the way networks were being built and who was building them," said John-Paul Hemingway, chief technologist, EMEA for Ciena.
Cloud is also having an impact on access network requirements, he says. There is a need to change dynamically the bandwidth per application over a connection to an enterprise. Services such as LAN, video conferencing and data back-up need to be given different priorities at different times of the day, which requires technologies such as virtual LAN with quality-of-service and class-of-service settings.
German vendor ADVA Optical Networking has also noticed a rise in connectivity links to enterprises via demand for its FSP-150 Ethernet access product, which may be driven in part by increased demand for cloud-based services. Another area that's being driven by computing over long distances is the need to carry Infiniband natively over a DWDM lightpath. "Infiniband is used for computing nodes due to its highest connectivity and lowest latency," explained Christian Illmer, ADVA's director of business development.
Virtualization virtues
Cloud computing is also starting to influence the evolution of the data centre. One critical enabling technology for cloud computing in the data centre is virtualization, which refers to the ability to separate a software function from the underlying hardware, so that the hardware can be shared between different software usages without the user being aware. Networks, storage systems and server applications can all be "virtualized", giving end-users a personal view of their applications and resources, regardless of the network, storage or computing device they are physically using.
Virtualization enables multiple firms to share the same SaaS application while maintaining their own unique data, compute and storage resources. Virtualization has also led to a significant improvement in the utilization of servers and storage. Traditionally usage levels have been at a paltry 10 to 15%.
However, virtualization remains just one of several components needed for cloud computing. A separate management-tools layer is also needed to ensure that IT resources are efficiently provisioned, used and charged for. "This reflects the main finding of our report, that the cloud-computing world is starting to stratify into clearly defined layers," said Dicks.
Such management software can also shift applications between platforms to balance loads. An example is moving what is called a virtual machine image between servers. A virtual machine image may comprise 100 GB of storage, middleware and application software. "If [the image] takes up 5% of a server's workload, you may consolidate 10 or 20 such images onto a single machine and save power," said IBM's Quan.
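A back-of-envelope version of Quan's consolidation arithmetic is sketched below; the per-server power figures are assumed purely for illustration.

```python
# Back-of-envelope version of the consolidation arithmetic in the example
# above. The per-server power figures are assumed purely for illustration.

IMAGES = 20            # virtual machine images, each ~5% of a server's workload
LOAD_PER_IMAGE = 0.05
SERVER_IDLE_W = 200    # assumed draw of a nearly idle server (watts)
SERVER_FULL_W = 400    # assumed draw of a fully loaded server (watts)

def server_power(load):
    """Simple linear power model between idle and full load."""
    return SERVER_IDLE_W + load * (SERVER_FULL_W - SERVER_IDLE_W)

# Before: one image per server, so 20 servers each running nearly idle.
before_w = IMAGES * server_power(LOAD_PER_IMAGE)

# After: all images consolidated onto a single, fully utilised server.
assert IMAGES * LOAD_PER_IMAGE <= 1.0, "the images would not fit on one server"
after_w = server_power(IMAGES * LOAD_PER_IMAGE)

print(f"Before consolidation: ~{before_w:.0f}W across {IMAGES} servers")
print(f"After consolidation:  ~{after_w:.0f}W on one server")
```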
Force10's Garrison notes that firms issuing request for proposals for new data centres typically don't mention cloud directly. Instead they ask questions like "Help me see how you can move an application from one rack to another, or between adjacent rows, or even between adjacent data centres 50 miles apart", he said.
Clearly, shuffling applications between servers and between data centres will drive bandwidth requirements. It also helps to explain why vendors are exploring how to consolidate and simplify the switching architecture within the data centre.
"Everything is growing exponentially, whether it is the number of servers and storage installed each year or the amount of traffic," said Andy Ingram, vice-president of product marketing and business development, data-centre business group at Juniper Networks. "The data centre is becoming a wonderful, dynamic and scary place."
This explains why vendors such as Juniper are investigating how current tiered Ethernet switching within the data centre — passing traffic between the platforms and users — can be adapted to handle the expected growth in data-centre traffic. Such growth will also strain connections between equipment: between servers, and between the servers and storage.
According to Ingram, the first approach is to simplify the existing architecture. With this in mind, Juniper is looking to collapse the tiered switching from three layers to two, by linking its top-of-rack switches in a loop. Longer term, Juniper is investigating the development of a single-tiered switch in a project code-named Stratus. “We are looking to develop a scalable, flat, non-blocking, lossless data-centre fabric,” said Ingram.
A flat fabric means processing a packet only once, while a non-blocking architecture removes the possibility of congestion. Such a switch fabric will scale to hundreds or even thousands of 10 Gigabit Ethernet access ports, says Ingram, who stresses that Juniper is in the first year of what will be a multi-year project to develop such an architecture.
Data centre convergence
Another development is Fibre Channel over Ethernet (FCOE), which promises to consolidate the various networks that run within a data centre. At present, servers connect to the LAN using Ethernet and to storage via Fibre Channel. This requires separate cards within the server: a LAN network interface card and a host-bus adapter for storage. FCOE promises to enable Ethernet, and one common converged network adapter card, to be used for both purposes. But this requires a new variant of Ethernet to be adopted within the data centre. Such an Ethernet development is already being referred to by a variety of names in the industry: Data Centre Ethernet, Converged Enhanced Ethernet, lossless Ethernet, and Data Centre Bridging.
Lossless Ethernet could be used to carry Fibre Channel packets since the storage protocol's key merit is that it does not lose packets. Such a development would remove one of the three main protocols in the data centre, leaving Ethernet to challenge Infiniband. But even though FCOE has heavyweight backers in the shape of Cisco and Brocade, it will probably be some years before a sole switching protocol rules the data centre.
Equipment makers believe they can benefit from the widespread adoption of cloud computing, at least in the short term. Although there will be efficiencies arising from virtualization and ever more enterprises sharing hardware, this will be eclipsed by the boost that cloud services provide to IT in general, meaning that more datacoms equipment will be sold rather than less. Longer term, however, it will probably impact hardware sales as fewer firms choose to invest in their own IT.
IBM's Quan notes that enterprises themselves are considering the adoption of cloud capabilities within their private data centres due to the efficiencies it delivers. The company thus expects to see growth of such "private" as well as "public" cloud-enabled data centres.
Dicks believes that cloud computing has a long road map. There will be plentiful opportunities for companies to deliver innovative products for cloud, from software and service support to underlying platforms, he says.
Further information
Cloud Computing: A Definition
Steve Garrison, vice-president of marketing at Force10 Networks, offers a straightforward description of cloud: accessing applications and resources and not caring where they reside.
Danny Dicks, an independent consultant, has come up with a more rigorous definition. He classifies cloud computing as "the provision and management of rapidly scalable, remote, computing resources, charged according to usage, and of additional application development and management tools, generally using the internet to connect the resources to the user."
