Chinese optical component vendors set for change

China’s optical component firms must adapt if they are to match their Western counterparts in market reach, company ambition and technology portfolios. These are the findings of a report - China: The New Land of Opportunity - on the local optical component (OC) industry by market research firm LightCounting.

 

“If [Chinese optical component] companies get $100m from an IPO, they have the resources to really do things”

Vladimir Kozlov, LightCounting

The local OC players have benefitted from the prolonged growth of China’s economy, the rise of global telecom system vendors Huawei and ZTE, and the significant expansion of Chinese operators’ networks. But such domestic growth will not continue indefinitely, and its slowing will likely lead to a shake-up of the local OC firms.

“They [Chinese OC players] all have the same industry pitch: they all have huge capacity, they have tons of people and they are growing fast but when you research that, you uncover different approaches to doing business,” says Vladimir Kozlov, CEO at LightCounting.

The market research firm has identified several classes of OC player. There are quite a few mid-size companies that focus on niche local opportunities. “Very few of them have an ambition of becoming a global player,” says Kozlov. “They have been set up with local government support, primarily with the aim of employing local people and being involved in local telecom projects.”

But there are other players with broader ambitions and resources. Companies such as HiSense Broadband and HG Genuine, established manufacturers of electronics and consumer products, have formed OC business units, recognising the growth potential of optical communications.

Another category that Western firms will do well to note, says Kozlov, is the Chinese OC players with a long history, such as WTD and Accelink. “WTD is 30 years old and grew from the Wuhan Research Institute that is also a founding body for Chinese system vendor FiberHome,” says Kozlov. WTD has been growing steadily and the pace has accelerated in the last two years. “WTD is becoming more aggressive and is gaining market share, while Accelink has had a successful IPO that brought in $100m,” he says.

Other companies will likely follow Accelink’s example and raise money through IPOs. But what will be interesting is whether such companies continue to focus on the Chinese market or start addressing issues such as what technologies they are missing and even make acquisitions, he says.  

“A lot more companies will have access to financial markets as the regulation that limits how many companies can become public is relaxed,” says Kozlov. “If [Chinese OC] companies get [US] $100m from an IPO, they have the resources to really do things.”

 

“It is unlikely that Huawei will keep on growing as fast as it did over recent years and continue to take market share from Alcatel-Lucent, Ericsson and others for much longer”

 

Yet another Chinese OC player segment is start-ups funded by venture capitalists (VCs). One example is Innolight which has received funding from local VCs and a Western company. “VCs will push firms to be as ambitious as possible as they are after returns,” says Kozlov. Interest among the financial investment community is also growing given the rise of the stock price of the OC industry’s leading firms in the last year. Such interest will likely lead to investment and restructuring of local Chinese firms, he says.

Chinese OC vendors have been helped by the rise of the system vendors Huawei and ZTE. The Chinese equipment makers have been disruptive in adopting technology quickly while reducing their costs. But having become global players, Huawei and ZTE now face their own challenges.

“Both [system vendors] companies have caught up on the technology and the next step for them is to see whether they can become leaders in technology and stay ahead of an Alcatel-Lucent or a Ciena,” says Kozlov. “They have the ambition but can they do it?” Kozlov notes that Chinese companies are now highly active with patent applications: “Chinese firms recognise that this is how they will achieve a longer-term advantage and protect their own technologies.”

Another challenge facing the system vendors, common to many technology industries, is that no one player dominates a market. “Usually three global companies share the dominance; the same if it is a local market,” says Kozlov. “It is therefore unlikely that Huawei will keep on growing as fast as it did over recent years and continue to take market share from Alcatel-Lucent, Ericsson and others for much longer.”

This will require Huawei and ZTE to adapt to more moderate growth in future. Meanwhile North American and European system vendors have long responded to the competitive threat, moving their manufacturing to Asia Pacific - and China in particular - to benefit from reduced operating costs. For the Chinese OC vendors, yet to become global players, the chance to be as disruptive as the Chinese system vendors has gone since leading OC vendors have established local manufacturing.

Can Western companies learn from the experience of Chinese system and OC vendors? Kozlov is not so sure.  

The Chinese have proved adept at learning the business and mastering new technologies. The examples of Huawei and ZTE, which have disrupted the market by being as efficient as possible, have proved a wake-up call for Western companies. “I don’t see anything beyond that that Western companies can learn; it is still the Chinese that are learning from Western companies,” says Kozlov. “This does not mean that the Western companies have nothing to worry about; there is plenty of room for improvement in the industry supply chain.”

Looking at the decade ahead, Kozlov expects Huawei to have a much greater penetration in the North American telecom market. “And as it [Huawei] builds up its own intellectual property, it will be better able to compete with Cisco Systems and H-P in the datacom market,” says Kozlov.  And as Chinese companies get access to greater finance he also expects they will start acquiring Western firms to gain expertise and greater access to markets.

 


Optical engines bring Terabit bandwidth on a card

Avago Technologies is now delivering to customers its 120 Gigabit-per-second optical engine devices. 

Such a parallel optics design offers several advantages when used on a motherboard. It offers greater flexibility in cooling, since traditional optics normally sit in pluggable slots at the card edge, furthest away from the fans. Such optical engines also simplify high-speed signal routing and reduce electromagnetic interference issues since fibre is used rather than copper traces.

 

Figure 1: Fourteen 120Gbps MiniPods on a board. Source: Avago Technologies

Avago has two designs – the 8x8mm MicroPod and the 22x18mm MiniPod. The 12x10.3125 Gigabit-per-second (Gbps) MicroPods are being used in IBM’s Blue Gene computer and Avago says it is already shipping tens of thousands of the devices a month.

“The [MicroPod’s] signal pins have a very tight pitch and some of our customers find that difficult to do,” says Victor Krutul, director of marketing for the fibre optics division at Avago Technologies. The MiniPod design tackles this by using the MicroPod optical engine but with a more relaxed pitch. The MiniPod uses a 9x9 electrical MegArray connector and is now sampling, says Avago.

Figure 1 shows 14 MiniPod optical engines on a board, each operating at 12x10Gbps. “If you were trying to route all those signals electrically on the board, it would be impossible,” says Krutul.  All 14 MiniPods go to one connector, equating to a 1.68Tbps interface.
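
To sanity-check that arithmetic, a minimal Python sketch using only the figures quoted in the article:

```python
# Aggregate bandwidth of the MiniPod board in Figure 1.
lanes_per_minipod = 12   # each MiniPod is a 12-lane optical engine
gbps_per_lane = 10       # 10 Gigabit-per-second per lane
minipods = 14            # MiniPods on the board

total_gbps = minipods * lanes_per_minipod * gbps_per_lane
print(f"{total_gbps} Gbps = {total_gbps / 1000} Tbps")  # 1680 Gbps = 1.68 Tbps
```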

 

Figure 2: Sixteen MicroPods in a 4x4 array. Source: Avago Technologies

Figure 2 shows 16 MicroPods in a 4x4 array. “Those [MicroPods] can get even closer,” says Krutul. Also shown are the connectors to the MicroPod array. Avago has worked with US Conec to design connectors whereby the flat ribbon fibres linking the MicroPods can stack on top of each other. In this example, there are four connections for each row of MicroPods.


OFC announcements and market trends

Avago Technologies, Finisar and Opnext spoke to Gazettabyte about market trends and their recent OFC/NFOEC announcements. 

More compact transceiver designs at 10, 40 and 100 Gigabit, advancements in reconfigurable optical add-drop multiplexer (ROADM) technology and parallel optical engine developments were all in evidence at this year’s OFC/NFOEC show held in Los Angeles in March.

 

“MSAs are designed by committee, and when you have a committee you throw away innovation and you throw away time-to-market”  

Victor Krutul, Avago Technologies

 

Finisar said that the show was one of the busiest in recent years. “There was an increasing system-vendor presence at OFC, and there was a lot more interest from investor analysts,” says Rafik Ward, vice president of marketing at Finisar.

 

Ethernet interfaces

Opnext demonstrated an IEEE 100GBASE-ER4 module design at the show, the 100 Gigabit Ethernet (GbE) standard with a 40km reach. Based on the company’s CFP-based 100GBASE-LR4 10km module, the design uses a semiconductor optical amplifier (SOA) on the receive path to achieve the extended reach. The IEEE standard calls for an SOA in front of the photo-detectors for the 100GBASE-ER4 interface.

“We don’t have that [SOA] integrated yet, we are just showing the [design] feasibility,” says Jon Anderson, director of technology programme at Opnext. The extended-reach interface will be used to connect IP core routers to transport systems when the two platforms reside in separate facilities. Such a 40km requirement for a 100GbE interface is not common but is an important one to meet, says Anderson.

Opnext’s first-generation LR4, currently shipping, is a discrete design comprising four transmitter optical sub-assemblies (TOSAs), four receiver optical sub-assemblies (ROSAs), and an optical multiplexer and demultiplexer. The company’s next-generation design will integrate the four lasers and the optical multiplexer into one package and will be used in future, more compact CFP2 and CFP4 modules.

The CFP2 module is half the size of the CFP module and the CFP4 is a quarter. In terms of maximum power, the CFP module is rated at 32W, the CFP2 12W and the CFP4 5W. “The CFP4 is a little bit wider and longer than the QSFP,” says Anderson. The first CFP2 modules are expected to become available in 2012 and the CFP4 in 2013.

System vendors are interested in the CFP4 as they want to support over one terabit of capacity on a 15-inch faceplate. Up to 16 ports (1.6Tbps) can be supported on a faceplate using the CFP4, and using a “belly-to-belly” configuration two rows of 16 ports will be possible, says Anderson.
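
Anderson’s faceplate figures can be checked the same way; a short sketch using the article’s numbers:

```python
# Faceplate capacity using CFP4 100GbE ports.
ports_per_row = 16
gbps_per_port = 100

single_row = ports_per_row * gbps_per_port  # 1,600 Gbps
belly_to_belly = 2 * single_row             # two rows of 16 ports
print(f"one row: {single_row / 1000} Tbps, belly-to-belly: {belly_to_belly / 1000} Tbps")
```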

Finisar demonstrated a distributed feedback (DFB) laser-based CFP module at OFC that implements the 10km 100GBASE-LR4 standard. The adoption of DFB lasers promises significant advantages compared with existing first-generation -LR4 modules that use electro-absorption modulated lasers (EMLs). “If you look at current designs, ours included, not only do they use EMLs, which are significantly more expensive, but each is in its own package and has its own thermo-electric cooler,” says Ward.

Finisar’s use of DFBs means an integrated array of the lasers can be packaged and cooled using a single thermo-electric cooler, significantly reducing cost and nearly halving the power to 12W. “Now that the power [of the DFB-based] LR4 is 12W, we can place it within a CFP2 with its 25-28 Gigabit-per-second (Gbps) electrical I/O,” says Ward.  

Moving to the faster input/output (I/O), compared with the CFP’s 10Gbps I/O, means that the serialiser/deserialiser (serdes) chipset can be replaced with simpler clock data recovery (CDR) circuitry. “By the time we move to the CFP4, we remove the CDRs completely,” says Ward. “It’s an un-retimed interface.” Finisar’s existing -LR4 design already uses an integrated four-photodetector array.

An early application of the 100GbE -LR4, as with the -ER4, is linking core routers with optical transport systems in operators’ central offices. Many Ethernet switch vendors have chosen to focus their early high-speed efforts on 40GbE but Finisar says the move to 100GbE has started.

Finisar argues that the adoption of DFBs will ultimately prove the cost-benefits of a 4-channel 100GbE design which faces competition from the emerging 10x10 multi-source agreement (MSA). “Everything we have heard about the 10x10 [MSA] has been around cost,” says Ward. “The simple view inside Finisar is that by the time the Gen2 100GbE module that we showed at OFC gets to market, this argument [4x25Gig vs. 10x10Gig] will be a moot point.” 

 

“40Gig is definitely still strong and healthy”

Jon Anderson, Opnext

By then the second-generation -LR4 module design will be cost-competitive with, if not lower cost than, the 10x10 MSA. “If you look at optoelectronic components, at the end of the day what really drives cost is yield,” says Ward. “If we can get our yields of 25Gig DFBs to a level that is similar to 10Gig DFB yields - it doesn’t have to match, just in the ballpark - then we have a solution where the 4x25Gig looks like a 4x10Gig solution, and then I believe everyone will agree that 4x25Gig is a less expensive architecture.” Finisar expects the Gen2 CFP -LR4 in production by the first half of 2012.
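
Ward’s yield argument can be illustrated with a toy cost model. The yields and die cost below are purely hypothetical - none come from Finisar - and serve only to show how per-module laser cost scales with yield and lane count:

```python
# Toy model: the effective cost of a good laser die rises as yield falls.
# All numbers are hypothetical, for illustration only.
def good_die_cost(cost_per_attempt, yield_fraction):
    return cost_per_attempt / yield_fraction

die_attempt_cost = 1.0   # normalised cost of processing one die
yield_10g = 0.9          # hypothetical mature 10Gig DFB yield
yield_25g = 0.8          # hypothetical 25Gig DFB yield, 'in the ballpark'

cost_4x25 = 4 * good_die_cost(die_attempt_cost, yield_25g)     # four 25Gig lasers
cost_10x10 = 10 * good_die_cost(die_attempt_cost, yield_10g)   # ten 10Gig lasers
print(f"4x25Gig lasers: {cost_4x25:.1f}, 10x10Gig lasers: {cost_10x10:.1f}")
# Once 25Gig yields approach 10Gig yields, four lasers beat ten.
```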

Opnext demonstrated a 40GBASE-LR4 (40Gbps, up to 10km) design in a QSFP+ module at OFC. Anderson says Opnext is seeing demand for such a design from data centre operators and from switch and transport vendors.

Avago Technologies announced a 40Gbps QSFP+ module at OFC that implements the 100m IEEE 40GBASE-SR4 standard. “It will interoperate with Avago’s SFP+ modules,” says Victor Krutul, director of marketing for the fibre optics division at Avago Technologies. The QSFP+ can interface to another QSFP+ module or to four 10Gbps SFP+ modules.

Avago also announced a proprietary mini-SFP+ design, 30% smaller than the standard SFP+ but which is electrically compatible. According to Krutul, the design came about following a request from one of its customers: “What it allows is the ability to have 64 ports on the front [panel] rather than 48.”

Did Avago consider making the mini-SFP+ design an MSA? “What we found with MSAs is that they are designed by committee, and when you have a committee you throw away innovation and you throw away time-to-market,” says Krutul. 

Krutul was previously a marketing manager for Intel’s LightPeak before joining Avago over half a year ago.

 

“There was an increasing system-vendor presence at OFC, and there was a lot more interest from investor analysts”

Rafik Ward, Finisar

Line-side interfaces

Opnext will be providing select customers with its 100Gbps DP-QPSK coherent module for trialling this quarter. The module has a 5-inch by 7-inch footprint and uses a 168-pin connector. “We are working to try and meet the OIF spec [with regard to power consumption], which is 80W,” says Anderson. “It is challenging and it may not be met in the first-generation [design].”

The company is also moving its 40Gbps 2km very short reach (VSR) transponder to support the IEEE 40GBASE-FR standard within a CFP module, dubbed the “tri-rate” design. “The 40GBASE-FR has been approved, with the specification building on the ITU’s 40Gig VSR,” says Anderson. “It continues to support the [OC-768] SONET/SDH rate, it will support the new OTN ODU3 40Gbps and the intermediate 40 Gigabit Ethernet.”

Opnext and Finisar are both watching with interest the emerging 100Gbps direct-detection market, an alternative to 100 Gigabit coherent aimed at shorter-reach metro applications.

“We certainly are watching this segment and do have an interest, but we don’t have any product plans to share at this point,” says Anderson. 

“The [100Gbps] direct-detection market is very interesting,” says Ward. Coherent is not going to be the only way people will deploy 100Gbps light paths. “There will be a market for shorter reach, lower performance 100 Gigabit DWDM that will be used primarily in datacentre-to-datacentre,” he says. Tier 2 and tier 3 carriers will also be interested in the technology for use in shorter metro reaches. “There is definitely a market for that,” says Ward.

Opnext also announced its small form-factor (3.5-inch by 4.5-inch) 40Gbps DPSK module. “With a smaller form factor, the next generation could move to a CFP-type pluggable,” says Anderson. “But that is if our customers are interested in migrating to a pluggable design for DPSK and DQPSK.”

Are there signs that the advent of 100 Gigabit is affecting 40Gbps uptake? “We are definitely not seeing that,” says Anderson. “We are continuing to see good solid demand for both 40G line side - DPSK and DQPSK - and a lot of pull to bring in this tri-rate VSR.”

Such demand is not just from China but also from North American carriers. “40 Gig is definitely still strong and healthy,” says Anderson. “But there are some operators that are waiting to see how 100G does before approving it for major build-outs.”

At 10Gbps, Opnext also had on show a tunable TOSA for use in an XFP module, while Finisar announced an 80km, 10Gbps SFP+ module. “SFP+ has become a very successful form factor at 10Gbps,” says Ward. “All the market data I see show SFP+ leads in overall volumes deployed by a significant margin.” Its success has been achieved despite the form factor not initially being designed to achieve all the 10Gbps reaches required. This is some achievement, says Ward, since the XFP form factor used for 80km has a power rating of 3.5W while the 80km SFP+ has to work within an upper limit of less than 2W.

 

Parallel Optics

Avago detailed its main parallel optic designs: the CXP module and its two optical engine designs.

The company claims it is seeing much interest from high-performance computing vendors such as IBM and Fujitsu for its CXP 120 Gigabit (12x10Gbps) parallel transceiver module. Avago is sampling the module and will start shipping it in the summer.

The company also gave an update on its embedded parallel optics devices (PODs). Such parallel optic designs offer several advantages, says Krutul. Embedding the optics on the motherboard offers greater flexibility in cooling, since traditional optics are normally at the edge of the card, furthest away from the fans. Such optics also simplify high-speed signal routing on the printed circuit board since fibre is used.

Avago offers two designs – the 8x8mm MicroPod and the 22x18mm MiniPod. The 12x10Gbps MicroPods are being used in IBM’s Blue Gene computer and Avago says it is already shipping tens of thousands of the devices a month. “The [MicroPod’s] signal pins have a very tight pitch and some of our customers find that difficult to do,” says Krutul.  The MiniPod design tackles this by using the MicroPod optical engine but a more relaxed pitch. At OFC, Avago said that the MiniPod is now sampling.

 

Gridless ROADMs

Finisar demonstrated what it claims is the first gridless wavelength-selective switch (WSS) module at the show. A gridless ROADM supports variable channel widths beyond the fixed spacings defined by the International Telecommunication Union (ITU). Such a capability enables ROADMs to support the variable channel spacings that may be required for transmission rates beyond 100Gbps: 400Gbps, 1Tbps and beyond.

“We have an increasing amount of customer interest in this [FlexGrid], and from what we can tell, there is also an increasing amount of carrier interest as well,” says Ward, adding that the company is already shipping FlexGrid WSSs to customers.

Finisar is contributing to the ongoing ITU work to define what the grid spacings and the central channels should be for future ROADM deployments. Finisar demonstrated its FlexGrid design implementing integer increments of 12.5GHz spacing. “We could probably go down to 1GHz or even lower than that,” says Ward. “But the network management system required to manage such [fine] granularity would become incredibly complicated.” What is required for gridless is a balance between making good use of the fibre’s spectrum and ensuring the system is manageable, says Ward.

 


Infinera details Terabit PICs, 5x100G devices set for 2012

What has been announced?

Infinera has given the first details of its terabit coherent-detection photonic integrated circuits (PICs). The pair - a transmitter and a receiver PIC - implement a ten-channel 100 Gigabit-per-second (Gbps) link using polarisation-multiplexed quadrature phase-shift keying (PM-QPSK). The Infinera development work was detailed at OFC/NFOEC, held in Los Angeles from March 6-10.

Infinera has recently demonstrated its 5x100Gbps PIC carrying traffic between Amsterdam and London within Interoute Communications’ pan-European network. The 5x100Gbps PIC-based system will be available commercially in 2012.

 

“We think we can drive the system from where it is today – 8 Terabits-per-fibre - to around 25 Terabits-per-fibre”

Dave Welch, Infinera 

 

Why is this significant?

The widespread adoption of 100Gbps optical transport technology will be driven by how quickly its cost can be reduced to compete with existing 40Gbps and 10Gbps technologies.

Whereas the industry is developing 100Gbps line cards and optical modules, Infinera has demonstrated a 5x100Gbps coherent PIC based on 50GHz channel spacing while its terabit PICs are in the lab. 

If Infinera meets its manufacturing plans, it will have a compelling 100Gbps offering as it takes on established 100Gbps players such as Ciena. Infinera was late to the 40Gbps market, competing with its 10x10Gbps PIC technology instead.

 

40 and 100 Gigabit 

Infinera views 40Gbps and 100Gbps optical transport in terms of the dynamics of the high-capacity fibre market: in particular, what is the right technology to get the most capacity out of a fibre, and what is the best dollar-per-gigabit technology at a given moment.

For the long-haul market, Dave Welch, chief strategy officer at Infinera, says 100Gbps provides 8 Terabits (Tb) of capacity using 80 channels versus 3.2Tb using 40Gbps (80x40Gbps). The 40Gbps total capacity can be doubled to 6.4Tb (160x40Gbps) if 25GHz-spaced channels are used, which is Infinera’s approach.
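
Welch’s capacity comparison in code form, using only the channel counts and line rates from the article:

```python
# Long-haul fibre capacity under the three options Welch compares.
capacity_100g = 80 * 100       # 80 channels at 100Gbps = 8,000 Gbps (8 Tb)
capacity_40g = 80 * 40         # 80 channels at 40Gbps = 3,200 Gbps (3.2 Tb)
capacity_40g_25ghz = 160 * 40  # 25GHz spacing doubles the channel count: 6.4 Tb

print(capacity_100g, capacity_40g, capacity_40g_25ghz)  # 8000 3200 6400
```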

“The economics of 100 Gigabit appear to be able to drive the dollar-per-gigabit down faster than 40 Gigabit technology,” says Welch. If operators need additional capacity now, they will adopt 40Gbps, he says, but if they have spare capacity and can wait till 2012 they can use 100Gbps. “The belief is that they [operators] will get more capacity out of their fibre and at least the same if not better economics per gigabit [using 100Gbps],” says Welch. Indeed Welch argues that by 2012, 100Gbps economics will be superior to 40Gbps coherent leading to its “rapid adoption”.

For metro applications, achieving terabits of capacity in fibre is less of a concern. What matters is matching speeds with services while achieving the lowest dollar-per-gigabit. And it is here – for sub-1000km networks – where 40Gbps technology is being mostly deployed. “Not for the benefit of maximum fibre capacity but to protect against service interfaces,” says Welch, who adds that 40 Gigabit Ethernet (GbE) rather than 100GbE is the preferred interface within data centres.

 

Shorter-reach 100Gbps

Companies such as ADVA Optical Networking and chip company MultiPhy highlight the merits of an alternative 100Gbps technology to coherent, based on direct-detection modulation, for metro applications (for a MultiPhy webinar on 100Gbps direct detection, click here). Direct detection is suited to distances from 80km up to 1000km, to connect data centres for example.

Is this market of interest to Infinera?  “This is a great opportunity for us,” says Welch.

The company’s existing 10x10Gbps PIC can address this segment in that it is at least 4x cheaper than emerging 100Gbps coherent solutions over the next 18 months, says Welch, who claims the 10x10Gbps PIC is making ‘great headway’ in the metro.

“If the market is not trying to get the maximum capacity but best dollar-per-gigabit, it is not clear that full coherent, at least in discrete form, is the right answer,” says Welch. But the cost reduction delivered by coherent PIC technology does make it more competitive for cost-sensitive markets like metro.

A 100Gbps coherent discrete design is relatively costly since it requires two lasers (one serving as a local oscillator (LO) at the receiver - see Figure 1), sophisticated optics and a power-hungry digital signal processor (DSP). “Once you go to photonic integration, the extra lasers and extra optics, while a significant engineering task, are not inhibitors in terms of the optics’ cost.”

Coherent PICs can be used ‘deeper in the network’ (closer to the edge) while shifting the trade-offs between coherent and on-off keying. However even if the advent of a PIC makes coherent more economical, the DSP’s power dissipation remains a factor regarding the tradeoff at 100Gbps line rates between on-off keying and coherent.

Welch does not dismiss the idea of Infinera developing a metro-centric PIC to reduce costs further. He points out that while such a solution may be of particular interest to internet content companies, their networks are relatively simple point-to-point ones. As such their needs differ greatly from cable operators and telcos, in terms of the services carried and traffic routing.

 

PIC challenges

Figure 1: Infinera's terabit PM-QPSK coherent receiver PIC architecture

There are several challenges when developing multi-channel 100Gbps PICs.  “The most difficult thing going to a coherent technology is you are now dealing with optical phase,” says Welch. This requires highly accurate control of the PIC’s optical path lengths.

The laser wavelength is 1.5 micron, and within the PIC’s indium phosphide waveguides this is reduced to a third of that - 0.5 micron. Fine control of the optical path lengths is thus required to tenths of a wavelength, or tens of nanometres (nm).
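
A short sketch of that calculation; the waveguide refractive index of roughly three is inferred from the article’s 1.5-to-0.5 micron figures rather than stated by Infinera:

```python
# Guided wavelength inside the InP waveguide and the implied path-length tolerance.
free_space_wavelength_um = 1.5
refractive_index = 3.0  # approximate value for indium phosphide, inferred above

guided_wavelength_um = free_space_wavelength_um / refractive_index  # 0.5 um
tolerance_nm = guided_wavelength_um * 1000 / 10  # tenths of a wavelength

print(f"guided wavelength: {guided_wavelength_um} um, tolerance: ~{tolerance_nm:.0f} nm")
```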

Achieving a high manufacturing yield of such complex PICs is another challenge. The terabit receiver PIC detailed in the OFC paper integrates 150 optical components, while the 5x100Gbps transmit and receive PIC pair integrate the equivalent of 600 optical components.

Moving from a five-channel (500Gbps) to a ten-channel (terabit) PIC is also a challenge. There are unwanted interactions in terms of the optics and the electronics. “If I turn one laser on adjacent to another laser it has a distortion, while the light going through the waveguides has potential for polarisation scattering,” says Welch. “It is very hard.” 

But what the PICs show, he says, is that Infinera’s manufacturing process is like a silicon fab’s. “We know what is predictable and the [engineering] guys can design to that,” says Welch. “Once you have got that design capability, you can envision we are going to do 500Gbps, a terabit, two terabits, four terabits - you can keep on marching as far as the gigabits-per-unit [device] can be accomplished by this technology.”

The OFC post-deadline paper details Infinera's 10-channel transmitter PIC which operates at 10x112Gbps or 1.12Tbps.

 

Power dissipation

It is not the optical PIC that dictates the overall bandwidth achievable but rather the total power dissipation of the DSPs on a line card. This is determined by the CMOS process used to make the DSP ASICs, whether 65nm, 40nm or potentially 28nm.

Infinera has not said what CMOS process it is using. What Infinera has chosen is a compromise between “being aggressive in the industry and what is achievable”, says Welch. Yet Infinera also claims that its coherent solution consumes less power than existing 100Gbps coherent designs, partly because the company has implemented the DSP in a more advanced CMOS node than what is currently being deployed. This suggests that Infinera is using a 40nm process for its coherent receiver ASICs. And power consumption is a key reason why Infinera is entering the market with a 5x100Gbps PIC line card. For the terabit PIC, Infinera will need to move its ASICs to the next-generation process node, he says.

Having an integrated design saves power in terms of the speeds at which Infinera runs its serdes (serialiser/deserialiser) circuitry and the interfaces between blocks. “For someone else to accumulate 500Gbps of bandwidth and get it to a switch, this needs to go over feet of copper cable, and over a backplane when one 100Gbps line card talks to a second one,” says Welch. “That takes power - we don’t; it is all right there within inches of each other.”

Infinera can also trade analogue-to-digital (A/D) sampling speed of its ASIC with wavelength count depending on the capacity required. “Now you have a PIC with a bank of lasers, and FlexCoherent allows me to turn a knob in software so I can go up in spectral efficiency,” he says, trading optical reach with capacity. FlexCoherent is Infinera’s technology that will allow operators to choose what coherent optical modulation format to use on particular routes. The modulation formats supported are polarisation multiplexed binary phase-shift keying (PM-BPSK) and PM-QPSK.

 

Dual polarisation 25Gbaud constellation diagrams

What next?

Infinera says it is an adherent of higher quadrature amplitude modulation (QAM) rates to increase the data rate per channel beyond 100Gbps. As a result, FlexCoherent will in future enable the selection of higher-speed modulation schemes such as 8-QAM and 16-QAM. “We think we can drive the system from where it is today - 8 Terabits-per-fibre - to around 25 Terabits-per-fibre.”

But Welch stresses that at 16-QAM and even higher levels, speed must be traded against optical reach. Fibre is different to radio, he says: whereas radio uses higher QAM rates and compensates by increasing the launch power, with fibre there is a limit. “The nonlinearity of the fibre inhibits higher and higher optical power,” says Welch. “The network will have to figure out how to accommodate that, although there is still significant value in getting to that [25Tbps per fibre].”

The company has said that its 500 Gigabit PIC will move to volume manufacturing in 2012. Infinera is also validating the system platform that will use the PIC and has said that it has a five terabit switching capacity.

Infinera is also offering a 40Gbps coherent (non-PIC-based) design this year. “We are working with third-party support to make a module that will have unique performance for Infinera,” says Welch.

The next challenge is getting the terabit PIC onto the line card. Based on the gap between previous OFC papers and volume manufacturing, the 10x100Gbps PIC can be expected in volume by 2014 if all goes to plan.

 


Operators want to cut power by a fifth by 2020

Briefing: Green ICT

Part 2: Operators’ power efficiency strategies

Service providers have set themselves ambitious targets to reduce their energy consumption by a fifth by 2020. The power reduction will coincide with an expected thirty-fold increase in traffic in that period. Given the cost of electricity and operators’ requirements, such targets are not surprising: KPN, with its 12,000 sites in The Netherlands, consumes 1% of the country’s electricity.
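
The scale of the challenge follows from simple arithmetic: if consumption falls by a fifth while traffic grows thirty-fold, energy per bit must improve by well over an order of magnitude. A quick sketch using the article’s figures:

```python
# Implied energy-per-bit improvement from the operators' 2020 targets.
energy_ratio = 0.8   # 2020 energy consumption relative to today (down a fifth)
traffic_ratio = 30   # 2020 traffic relative to today (thirty-fold increase)

energy_per_bit = energy_ratio / traffic_ratio
print(f"energy per bit must fall to {energy_per_bit:.1%} of today's level "
      f"(~{1 / energy_per_bit:.1f}x more efficient)")  # ~2.7%, ~37.5x
```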

 

“We also have to invest in capital expenditure for a big swap of equipment – in mobile and DSLAMs"

Philippe Tuzzolino, France Telecom-Orange

Operators stress that power consumption concerns are not new, but Marga Blom, manager of the energy management group at KPN, highlights that the issue has become pressing due to steep rises in electricity prices. “It is becoming a significant part of our operational expense,” she says.

"We are getting dedicated and allocated funds specifically for energy efficiency,” adds John Schinter, AT&T’s director of energy. “In the past, energy didn’t play anywhere near the role it does today.”

 

Power reduction strategies

Service providers are adopting several approaches to reduce their power requirements.

Upgrading their equipment is one. Newer platforms are denser with higher-speed interfaces while also supporting existing technologies more efficiently. Verizon, for example, has deployed 100 Gigabit-per-second (Gbps) interfaces for optical transport and for its IT systems in Europe. The 100Gbps systems are no larger than existing 10Gbps and 40Gbps platforms and while the higher-speed interfaces consume more power, overall power-per-bit is reduced.
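
The power-per-bit point is easy to illustrate. The wattages below are hypothetical, chosen only to show how a hotter, higher-speed interface can still be the most efficient per bit:

```python
# Hypothetical interface powers (not vendor figures): power-per-bit vs line rate.
interfaces = {"10G": (10, 3.5), "40G": (40, 12.0), "100G": (100, 25.0)}

for name, (gbps, watts) in interfaces.items():
    print(f"{name}: {watts:5.1f} W total, {watts / gbps:.3f} W per Gbps")
# The 100G port draws the most power in absolute terms yet the least per bit.
```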

 

 “There is a business case based on total cost of ownership for migrating to newer platforms.”

Marga Blom, KPN

Reducing the number of facilities is another approach. BT and Deutsche Telekom are reducing significantly the number of local exchanges they operate. France Telecom is consolidating a dozen data centres in France and Poland to two, filling both with new, more energy-efficient equipment. Such an initiative improves the power usage effectiveness (PUE), an important data centre efficiency measure, halving the energy consumption associated with France Telecom’s data centres’ cooling systems.

“PUE started with data centres but it is relevant in the future central office world,” says Brian Trosper, vice president of global network facilities/ data centers at Verizon. “As you look at the evolution of cloud-based services and virtualisation of applications, you are going to see a blurring of data centres and central offices as they interoperate to provide the service.”

Belgacom plans to upgrade its mobile infrastructure with 20% more energy-efficient equipment over the next two years as it seeks a 25% network energy efficiency improvement by 2020. France Telecom is committed to a 15% reduction in its global energy consumption by 2020 compared to the level in 2006. Meanwhile KPN has almost halted growth in its energy demands with network upgrades despite strong growth in traffic, and by 2012 it expects to start reducing demand.  KPN’s target by 2020 is to reduce energy consumption by 20 percent compared to its network demands of 2005.

 

Fewer buildings, better cooling

Philippe Tuzzolino, environment director for France Telecom-Orange, says energy consumption is rising in its core network and data centres due to the ever increasing traffic and data usage but that power is being reduced at sites using such techniques as virtualisation of servers, free-air cooling, and increasing the operating temperature of equipment. “We employ natural ventilation to reduce the energy costs of cooling,” says Tuzzolino.  

“Everything we do is going to be energy efficient.”

Brian Trosper, Verizon

Verizon uses techniques such as alternating ‘hot’ and ‘cold’ aisles of equipment and real-time smart-building sensing to tackle cooling. “The building senses the environment, where cooling is needed and where it is not, ensuring that the cooling systems are running as efficiently as possible,” says Trosper.

Verizon also points to vendor improvements in back-up power equipment such as DC power rectifiers and uninterruptible power supplies. Such equipment, which is always on, has traditionally been 50% efficient. “If they are losing 50% power before they feed an IP router, that is clearly very inefficient,” says Chris Kimm, Verizon's vice president, network field operations, EMEA and Asia-Pacific. Manufacturers have now raised the efficiency of such power equipment to 90-95%.

France Telecom forecasts that its data centre and site energy saving measures will only work till 2013 with power consumption then rising again. “We also have to invest in capital expenditure for a big swap of equipment – in mobile and DSLAMs [access equipment],” says Tuzzolino. 

Newer platforms support advanced networking technologies and more traffic while supporting existing technologies more efficiently. This allows operators to move their customers onto the newer platforms and decommission the older power-hungry kit.  

 

“Technology is changing so rapidly that there is always a balance between installing new, more energy efficient equipment and the effort to reduce the huge energy footprint of existing operations”

John Schinter, AT&T

 

Operators also use networking strategies to achieve efficiencies. Verizon is deploying a mix of equipment in its global private IP network used by enterprise customers. It is deploying optical platforms in new markets to connect to local Ethernet service providers. “We ride their Ethernet clouds to our customers in one market, whereas layer 3 IP routing may be used in an adjacent, next most-upstream major market,” says Kimm. The benefit of the mixed approach is greater efficiencies, he says: “Fewer devices to deploy, less complicated deployments, less capital and ultimately less power to run them.”

Verizon is also reducing the real estate it uses as it retires older equipment. “One trend we are seeing is more, relatively empty-looking facilities,” says Kimm. It is no longer facilities crammed with equipment that is the problem, he says; rather, what bounds sites is their power and cooling capacity.

“You have to look at the full picture end-to-end,” says Trosper. “Everything we do is going to be energy efficient.” That includes the system vendors and the energy-saving targets Verizon demands of them, how it designs its network, the facilities where the equipment resides and how they are operated and maintained, he says.

Meanwhile, France Telecom says it is working with 19 operators, including Vodafone, Telefonica, BT, Deutsche Telekom, China Telecom and Verizon, as well as organisations such as the ITU and ETSI, to define standards for DSLAMs and base stations that will aid operators in meeting their energy targets.

Tuzzolino stresses that France Telecom’s capital expenditure will depend on how energy costs evolve. Energy prices will dictate when, and to what degree, France Telecom will need to invest in equipment to deliver the required return on investment.

The operator has defined capital expenditure spending scenarios - from a partial to a complete equipment swap from 2015 - depending on future energy costs. New services will clearly dictate operators’ equipment deployment plans but energy costs will influence the pace.  

 

“If they [DC power rectifiers and UPSs] are losing 50% power before they feed an IP router, that is clearly very inefficient”

Chris Kimm, Verizon

Justifying capital expenditure based on energy and hence operational expense savings is now ‘part of the discussion’, says KPN’s Blom: “There is a business case based on total cost of ownership for migrating to newer platforms.”

 

Challenges

But if operators are generally pleased with the progress they are making, challenges remain.

“Technology is changing so rapidly that there is always a balance between installing new, more energy efficient equipment and the effort to reduce the huge energy footprint of existing operations,” says AT&T’s Schinter.

“The big challenge for us is to plan the capital expenditure effort such that we achieve the return-on-investment based on anticipated energy costs,” says Tuzzolino.

Another aspect is regulation, says Tuzzolino. The EC is considering how ICT can contribute to reducing the energy demands of other industries, he says. “We have to plan to reduce energy consumption because ICT will increasingly be used in [other sectors like] transport and smart grids.”

Verizon highlights the challenge of successfully managing large-scale equipment substitution and other changes that bring benefits while serving existing customers. “You have to keep your focus in the right place,” says Kimm. 

 

Part 1: Standards and best practices 


ICT could reduce global carbon emissions by 15%

Briefing: Green ICT

Part 1: Standards and best practices

Keith Dickerson is chair of the International Telecommunication Union's (ITU) working party on information and communications technology (ICT) and climate change.

In a Q&A with Gazettabyte, he discusses how ICT can help reduce emissions in other industries, where the power hot spots are in the network and what the ITU is doing.


"If you benchmark base stations across different countries and different operators, there is a 5:1 difference in their energy consumption"

Keith Dickerson

Q. Why is the ITU addressing power consumption reduction and will its involvement lead to standards?

KD: We are producing standards and best practices. The reason we are involved is simple: ICT - all IT and telecoms equipment - is generating 2% of [carbon] emissions worldwide. But traffic is doubling every two years and the energy consumption of data centres is doubling every five years. If we don’t watch out we will be part of the problem. We want to reduce emissions in the ICT sector and in other sectors. We can reduce emissions in other sectors by 5x or 6x what we emit in our own sector.

 

Just to understand that figure, you believe ICT can cut emissions in other industries by a factor of six?

KD: We could reduce emissions overall by 15% worldwide. Reducing things like travel and storage of goods and by increasing recycling. All these measures in conjunction, enabled by ICT, could reduce overall emissions by 15%. These sectors include travel, the forestry sector and waste management. The energy sector is huge and we can reduce emissions here by up to 30% using smarter grids.
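
Dickerson’s two figures hang together: a sector responsible for around 2.5% of worldwide emissions that can cut other sectors’ emissions by roughly six times its own footprint yields the 15% headline:

```python
# The leverage arithmetic behind the 15% claim (figures from the interview).
ict_share = 0.025  # ICT's share of worldwide emissions, including TV
leverage = 6       # ICT can cut other sectors' emissions by ~6x its own

enabled_reduction = ict_share * leverage
print(f"ICT-enabled reduction: ~{enabled_reduction:.0%} of worldwide emissions")  # ~15%
```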

 

What are the trends regarding ICT?

KD: ICT accounts for 2% at the moment, maybe 2.5% if you include TV, but it is growing very fast. By 2020 it could be 6% of worldwide emissions if we don’t do something. And you can see why: Broadband access rates are doubling every two years, and although the power-per-bit is coming down, overall power [consumed] is rising. 

 

Where are the hot spots in the network?

The areas where energy consumption is rising fastest are at the ends of the network: in home equipment and in data centres. Within the network it is still going up, but it is under control and there are clear ways of reducing it.

For example, all operators are moving to a next-generation network (NGN) - BT is doing this with its 21CN - and this alone leads to a power reduction. It leads to a significant reduction in switching centres, by a factor of ten. And you can collapse different networks into a single IP network, reducing the energy consumption [associated with running multiple networks]. The equipment in the NGN doesn’t need as much cooling or air conditioning. The use of more advanced access technologies such as VDSL2 and PON will by itself lead to a reduction in power-per-bit.

The EU has a broadband code of conduct which sets targets for reducing energy consumption in the access network, and that leads to technologies such as standby modes. My home hub, if I don’t use it for a while, switches to a low-power mode.

The ITU is looking at how to apply these low-power modes to VDSL2. There has also been a very recent proposal to reduce the power levels in PONs, with a contribution from the Chinese for a deep-sleep mode for XG-PON. The ITU-T Study Group 13 on future networks is also looking at such techniques, shutting down part of the core network when traffic levels are low, such as at night.

 

What about mobile networks?

If you benchmark them across different countries and different operators, there is a 5:1 difference in the energy consumption of base stations. They are running the same standard but their energy efficiency is quite different; they have been made at different times and by different vendors.

In a base station, about half of the power is lost in the [signal] coupling to the antenna. If you can make the amplifiers more efficient and reduce the amount of cooling and air-conditioning required by the base station, you can reduce energy consumption by 70 or 80%. If all operators and all countries used best practices here, energy consumption in the mobile network could be reduced by 50% to 70%.

If you could get overall power consumption of a base station down to 100W, you could power it from renewable energy. That would make a huge difference; it could work without having to worry about the reliability of the electricity grid which in India and Africa is a tricky problem. And at the moment the price of diesel fuel [to power standby generators] is going through the roof.  

I visited Huawei recently and they have examples of 100W base stations powered by renewable energy, making them independent of the electricity network. At the moment a base station consumes more like 1000W, and overall base stations consume over half the power used by a mobile operator. At 100W, that wouldn’t be the case.

Other power saving activities in mobile include sharing networks among operators such as Orange and T-Mobile in the UK. And BT has signed a contract with four out of the five UK mobile operators to provide their backhaul and core networks in the future.  

 

What is the ITU doing with regard to energy-saving schemes?

The ITU set up the working party on ICT and climate change less than two years ago. We have work in three different areas.

One is increasing energy efficiencies in ICT which we are doing through the widespread introduction of best practices. We are relying on the EC to set targets. The ITU, because it has 193 countries involved, finds it very difficult to agree targets. So we issue best practices which show how targets can be met. This covers data centres, broadband and core networks.

Another of our areas is agreeing a common methodology for how to measure the impact of ICT on carbon emissions. We have been working on this for 18 months and the first recommendations should be consented this summer. Overall this work will be completed in the next two years. This will enable you to measure the emissions of ICT by country, or sector, or an individual product or service, or within a company. If companies don’t meet their targets in future they will be fined so it is very important companies are measured in the same way.

A third area of our activities is things like recycling. We have produced a standard for a universal charger for mobile phones, so you won’t have to buy a new charger each time you buy a new phone. At the moment thousands of tonnes of chargers go to landfill [waste sites] every year. The standard introduced by the ITU last year only covers 25% of handsets; the revised standard will raise that to 80%.

At the last meeting the Chinese also proposed a universal battery - or a range of batteries. This would mean you don’t have to throw away your old battery each time you buy a new mobile. It is all about reducing the amount of equipment that goes into landfill.

We are also doing some other activities. Most telecom equipment uses a 50V power supply. We are taking that up to 400V, so a standard power supply for a data centre or a switch would be at 400V. This would mean you lose a lot less power in the wiring as you would be operating at a lower current - power losses vary according to the square of the current.
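
The physics behind the 400V move is straightforward I-squared-R loss. The load and wiring resistance below are hypothetical; only the two voltages come from the interview:

```python
# Resistive wiring loss at 50V versus 400V for the same delivered power.
# I = P / V, and the loss in the wiring is I^2 * R.
power_w = 2000           # hypothetical load
wiring_resistance = 0.1  # hypothetical wiring resistance in ohms

for volts in (50, 400):
    current = power_w / volts
    loss_w = current ** 2 * wiring_resistance
    print(f"{volts}V: {current:.1f}A, wiring loss {loss_w:.1f}W")
# 400V carries 8x less current, so I^2*R wiring losses fall by a factor of 64.
```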

 

These ITU activities coupled with operators moving to new architectures and adopting new technologies will all help yet traffic is doubling every two years. What will be the overall effect?

It all depends on the targets that are set. The EU is putting in more and more severe targets. If companies have to pay a fine if they don’t meet them, they will introduce new technologies more quickly. Companies won’t pay the extra investment unless they have to, I’m afraid, especially during this difficult economic period.

Every year the EC revises the code of conduct on broadband and sets stiffer targets. They are driving the introduction of new technology into the industry, and everyone wants to sign up to show that they are using best practices.  

What the ITU is doing is providing the best practices and the standards to help them do that. The rate at which they act will depend on how fast those targets are reduced.

Keith Dickerson is a director at Climate Associates

 

Part 2 Operators' power efficiency strategies



Webinar: MultiPhy on the 100G Direct Detect market

Gazettabyte has hosted a webinar with Israeli semiconductor firm MultiPhy. Entitled The Emerging 100 Gigabit Metro & Datacenter Connectivity Opportunity, the webinar includes:

  • An Ovum market forecast for 100 Gigabit Direct Detect to 2015
  • The changes in the network creating demand for 100 Gigabit Direct Detect optical transport
  • Emerging operator and vendor backing for the technology
  • MultiPhy’s IC technology and its 100 Gigabit Direct Detect solution
  • The performance metrics of 100 Gigabit Direct Detect

 

"An internet giant is now firmly committed to an 80km pluggable solution. And if it is 80km and pluggable we know it is not coherent"

Neal Neslusan, MultiPhy

 

Presenting the webinar for MultiPhy is Neal Neslusan, vice president of sales and marketing. 

To view, please register by clicking here. You will then receive an email with a link to the 100 Gigabit webinar.

 

Further reading: 

MultiPhy eyes 40 and 100 Gigabit direct detect and coherent


CyOptics gets $50m worth of new investors and funding

Optical component firm CyOptics has received a US $50 million investment. Gazettabyte discussed the company’s activities and plans with CEO Ed Coringrato and Stefan Rochus, the company’s vice president of marketing and business development.


“Volume production scale is very important to having a successful business”

Ed Coringrato, CyOptics

The $50m investment in CyOptics has two elements: the amount paid by new investors in CyOptics to replace existing ones and funding for the company.

“This is different from the years-ago, traditional funding round but not all that different from what is more and more taking place,” says Ed Coringrato, CEO of CyOptics. “Fifty million is a big number but it is a ‘primary/ secondary’: the secondary is tendering out current investors that are choosing to exit, while the primary is what people think of as a traditional investment.”  CyOptics has not detailed how the $50m is split between the two. 

The funding is needed to bolster the company’s working capital, says Coringrato, despite CyOptics achieving over $100m in revenues in 2010. The money is required because of growth, he says: inventories the company holds are growing, there is more cash outstanding and the company’s payments are also rising.

There is also a need to invest in the company. “For the first time in a long time we are starting to make significant capital investments in our business,” says Coringrato. “We are ramping the fab, the packaging capability, and the assembly and test.”

The company is also investing in R&D. At the moment 11 percent of its revenue is invested in R&D and the company wants to approach 13 percent. “That is a challenge in our industry - the investment in R&D is pretty significant,” says Coringrato. “If we are to continue to be significant and have leading-edge products, we must continue to make that investment.”

 

Manufacturing

CyOptics acquired TriQuint Semiconductor’s optoelectronics operations in 2005; before that, TriQuint had bought the optoelectronics operations of Agere Systems. CyOptics thereby inherited automated manufacturing facilities and has never felt the need to move manufacturing to the Far East to achieve cost benefits. It does use some contract manufacturing but its high-end products are made in-house.

“We have been focussed on automated production, cycle-time reduction and yield improvement,” says Coringrato.  “The capital investment is to replicate what we have, adding more machines to get more output.”

 

Markets

CyOptics supplies fibre-to-the-x (FTTx) components to transmit optical subassembly (TOSA) and receive optical subassembly (ROSA) makers, optical transceiver players and board manufacturers. FTTx is an important market for CyOptics as it is a volume driver. “Volume production scale is very important to having a successful business,” says Coringrato.

The company also supplies 2.5 and 10 Gigabit-per-second (Gbps) TOSAs and ROSAs for XFP and SFP pluggable modules for the metro. “We want to play at the higher end as well, as that is where the growth opportunities and the healthier margins are,” says Coringrato.

CyOptics is also active in what it calls high-end product areas.

One area is as a supplier of components to the US defence industry, a market CyOptics entered in 2005. “These are custom products designed for specific applications,” says Stefan Rochus, vice president of marketing and business development. These include custom chip fabrication and packaging undertaken for defence contractors that supply the US Department of Defense. “When you look around there are not many companies that can do that,” says Rochus. One example CyOptics cites is a 1480nm pump laser, part of a fibre-optic gyroscope for use in a satellite.

 

“We are shipping 40Gbps and 100Gbps coherent receivers into the PM-QPSK market”

Stefan Rochus, CyOptics

The defence market may require long development cycles but CyOptics believes that in the next few years several of its products could lead to reasonable volumes and a better average selling price than telecom components.

Another high-end product segment CyOptics is pursuing is photonic integrated circuits (PICs) using the company's indium-phosphide and planar lightwave circuit expertise.

Rochus says the company has several PIC developments including 10x10Gbps TOSAs and ROSAs as well as emerging 40GBASE-LR4 and coherent detection designs. “We are shipping 40Gbps and 100Gbps coherent receivers into the PM-QPSK market,” says Rochus.

CyOptics’ product portfolio is a good balance between high-volume and high average selling price components, says Rochus.

 

10x10 MSA

CyOptics is part of the recent 10x10 MSA, the 100Gbps multi-source agreement whose members include Google and Brocade. “There is a follow-up high-density 10x10Gbps MSA and we will be a member of this as well,” says Rochus. “This [10x10G design] is for short reach, up to 2km, but we are also shipping product for DWDM for an Nx10Gbps TOSA/ROSA solution.”

Why is CyOptics supporting the Google-backed 10x10Gbps MSA?

“The IEEE has only standardised the 100GBASE-SR10, which is 100m, and the 100GBASE-LR4, which is 10km; there is a gap in the middle for [a] 2km [interface] which the MSA tries to solve,” says Rochus. “This is particularly important for the larger data centres.”

Rochus claims the 10x10Gbps design is the cheapest solution and that the volumes resulting from growth in the 10 Gigabit PON market will further reduce the cost of the components used for the interface. The interface will also consume less power.

That said, CyOptics is backing both interface styles, selling TOSAs and ROSAs for the 10x10Gbps interface and lasers for the 4x25Gbps-styled 100 Gigabit interfaces.

 

What next?

“The bigger we can get in terms of volume and revenue, the better our financials,” says Coringrato. “Potentially CyOptics is not only attractive for our preferred path, which is an IPO offering at the right time, but also I think it won't discourage others from being interested in us.”

 

Further reading

CyOptics' work to achieve terabit-per-second interfaces 

Google and the optical component industry


10 Gigabit GPON gets broadband access support

Briefing: Next-Generation PON

Part 1: XG-PON1 goes commercial

Alcatel-Lucent is making available what it claims are the first broadband access platforms to support XG-PON1, the 10 Gigabit GPON standard. The company has developed an XG-PON1 line card for use in its latest ISAM-FX as well as its existing ISAM-FD access platforms. The ISAM platforms support copper and fibre-based broadband access.

 

“First [XG-PON1] deployments will likely be in Asia Pacific but we are seeing strong interest from other regions"

Stefaan Vanhastel, Alcatel-Lucent

 

 

Why is this significant?

System vendors and operators have been trialling 10 Gigabit GPON technology. Now Alcatel-Lucent has signalled that the technology is ready for commercial deployment. The vendor says operator deployments will start later this year, a claim backed by Infonetics Research. However, the market research firm forecasts 10 Gigabit GPON global deployments will only reach two million ports by 2014.

 

What has been done?

XG-PON1 is the asymmetrical version of the 10 Gigabit GPON standard delivering 10 Gigabit-per-second (Gbps) data rates downstream (to the user) and 2.5Gbps upstream.  This compares to GPON, which delivers 2.5Gbps downstream and 1.25Gbps upstream.

The Alcatel-Lucent XG-PON1 line card has four 10 Gigabit GPON ports, and is available on the existing ISAM-FD products as well as the latest ISAM-FX high-capacity shelves.

There are three ISAM-FX shelves, accommodating four, eight and 16 line cards. The ISAM-FX shelves have a dual-100Gbps backplane capacity, compared to the ISAM-FD’s 2x10Gbps. The ISAM-FX shelves house up to two controllers, and the role of the backplane is to connect each line card to each controller: the 100Gbps is the capacity of the link between each line card and each of the two controllers. Since the XG-PON1 line card’s four 10Gbps ports use only 40Gbps of that capacity, the backplane clearly has headroom for denser future line cards.

The controller acts as a central processing unit, taking traffic from the line cards and packaging it for the network uplink. Each controller has a 480Gbps switching matrix, four 10 Gigabit Ethernet uplinks and the service intelligence to handle the traffic flows. “You can have two controllers per shelf and then they work in a load-sharing mode,” says Stefaan Vanhastel, marketing director, wireline access, at Alcatel-Lucent. “This gives you a total of eight uplinks and you can add more if needed.”
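As a back-of-the-envelope check, the figures quoted above can be tallied for a fully loaded 16-slot ISAM-FX shelf. The short Python sketch below is illustrative only and uses nothing beyond the numbers Alcatel-Lucent has given.

```python
# Back-of-the-envelope capacity check for a fully loaded 16-slot ISAM-FX
# shelf, using only the figures quoted in the article (rates in Gbps).

PORTS_PER_CARD = 4          # XG-PON1 ports per line card
PORT_RATE = 10              # rate per XG-PON1 port
BACKPLANE_PER_CARD = 100    # each line card's link to each controller
CONTROLLERS = 2             # a shelf houses up to two, in load sharing
SWITCH_MATRIX = 480         # switching capacity per controller
UPLINKS_PER_CONTROLLER = 4  # 10 Gigabit Ethernet uplinks per controller
LINE_CARDS = 16             # the largest ISAM-FX shelf

card_traffic = PORTS_PER_CARD * PORT_RATE     # 40 Gbps per line card
headroom = BACKPLANE_PER_CARD - card_traffic  # 60 Gbps spare per link
print(f"Per card: {card_traffic} of {BACKPLANE_PER_CARD} Gbps used "
      f"({headroom} Gbps of headroom for denser cards)")

shelf_traffic = card_traffic * LINE_CARDS                        # 640 Gbps
switch_total = SWITCH_MATRIX * CONTROLLERS                       # 960 Gbps
uplink_total = UPLINKS_PER_CONTROLLER * PORT_RATE * CONTROLLERS  # 80 Gbps
print(f"Shelf: {shelf_traffic} Gbps of PON traffic vs {switch_total} Gbps "
      f"of switching and {uplink_total} Gbps of uplinks")
```

The 640Gbps of PON-facing traffic sits comfortably within the 960Gbps of combined switching capacity, while the far smaller 80Gbps of uplink capacity reflects the statistical multiplexing assumed in access networks; as Vanhastel notes, more uplinks can be added if needed.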

 

The PON architecture

The XG-PON1 standard gives operators a straightforward way to upgrade existing GPON networks. “The operator can put the two technologies on the same optical network, with some subscribers on GPON and others on 10 Gig GPON,” says Vanhastel.

[Diagram: GPON and XG-PON1 sharing the same optical network. Source: Alcatel-Lucent]

 

Moving to XG-PON1 not only provides greater bandwidth but also supports more subscribers on a single fibre. According to Alcatel-Lucent, the maximum number of PON end terminals, or optical network units (ONUs), that GPON supports is 128, dubbed a split ratio of 1:128. In contrast, 1:128 is the starting split ratio for XG-PON1 while the maximum is 1:512.

[Diagram: GPON and XG-PON1 split ratios. Source: Alcatel-Lucent]

What next?

Vanhastel admits that existing GPON already provides subscribers with more than enough bandwidth. To ensure each GPON subscriber gets sufficient bandwidth, operators use an average split ratio of 1:18. “With the higher-capacity XG-PON1, the average split ratio could go up significantly,” says Vanhastel.
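The arithmetic behind that statement is straightforward. The illustrative Python sketch below simply divides the downstream line rates quoted in this article by the various split ratios to give the average bandwidth available per subscriber.

```python
# Average downstream bandwidth per subscriber at different split ratios,
# using the line rates quoted in the article (illustrative arithmetic only).

def mbps_per_subscriber(line_rate_gbps: float, split: int) -> float:
    """Downstream line rate shared equally among one fibre's subscribers."""
    return line_rate_gbps * 1000 / split

for name, rate_gbps, splits in [
    ("GPON", 2.5, [18, 128]),       # 1:18 is today's average; 1:128 the maximum
    ("XG-PON1", 10.0, [128, 512]),  # 1:128 starting split; 1:512 the maximum
]:
    for split in splits:
        print(f"{name} at 1:{split}: "
              f"{mbps_per_subscriber(rate_gbps, split):.0f} Mbps per subscriber")
```

GPON at today's average 1:18 split gives roughly 139Mbps per subscriber; XG-PON1 matches that at around a 1:72 split, and even at its 1:512 maximum still averages roughly 20Mbps per subscriber, which is why the average split ratio can rise significantly.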

Alcatel-Lucent says initial deployments of XG-PON1 will start in the second half of this year with more widespread deployments occurring in 2012. “The first deployments will likely be in Asia Pacific but we are seeing strong interest from other regions,” says Vanhastel.  

Initial XG-PON1 deployments will likely be for backhauling traffic from fibre-to-the-building (FTTB) deployments. Here a fibre has a split ratio of 1:16 or 1:32, but each FTTB node typically serves 24 subscribers.

Meanwhile, the company announced in October 2010 trials with the operators Verizon and Portugal Telecom involving the symmetrical 10 Gigabit GPON variant known as XG-PON2, which delivers 10Gbps both downstream and upstream. XG-PON2 has yet to become a standard.


Next-generation access will redefine the telcos

Benoît Felten has left Yankee Group to set up a market research and consultancy company addressing next-generation access.

Gazettabyte caught up with him to understand the goals of his new company, Diffraction Analysis, and why he believes next-generation access is critical for service providers.

 

"As soon as you, the operator, make that investment decision, it has fundamental implications as to who you are as a company"

 

Benoît Felten, CEO, Diffraction Analysis

 

 

Gazettabyte: There are several established market research companies addressing access. What is Diffraction Analysis offering that is unique?

BF: There are two reasons [for setting up Diffraction Analysis]. The first came to me when I was doing consultancy work for a [Yankee Group] customer. He said: “You are the only guy I know working for an established company that only covers next-generation access.” All the other guys cover broadband, with next-generation access being a sub-topic.

At that moment it coalesced something that I had been thinking about for some time: the migration from legacy to next-generation access networks is probably the single most challenging issue that established players will face, and the single biggest opportunity for challengers to grab. If you drown that [topic] among legacy [broadband] issues you might be missing the point.

The second reason, much more pragmatic, is that there are many small companies that simply cannot afford generic telecom research from the established market research firms. Providing affordable access to research, for me, is a market opportunity.

 

When you say next-generation access, what do you mean?

BF: It refers to the replacement of the legacy copper network in all its incarnations – most cell towers are connected with copper today – with a fibre-rich network. Cable networks, wireline copper networks and mobile networks are all going to be fibre-rich.

 

What are the key issues facing operators regarding next-generation access?

BF: The first for the operators is: How do we finance a network deployment and why do we do it? The established players all agree that they have to do it, sooner or later and probably sooner, and the core question is: How do we do it?

The problem is that it places access at the core of the telco business model. Ever since the internet started being successful, most legacy players – and that includes cable players – have seen themselves as service providers rather than access providers. Effectively, they are faced with a major investment which, if they don’t make it, opens up opportunities for others to displace them. We are seeing that happen in small markets like Hong Kong, where a competitive player is on the path to eliminating the access network of the incumbent.

The threat is real and the customer need is real. The problem is operators don’t know how to use the network to generate their own revenues. They face a choice: become a long-term utility – investing in the network for 20 years and reaping revenues for another 50 – which is unpalatable for them, or find another way to derive revenues from the network, keeping in mind that most new services these days come not from telcos but from over-the-top players.

What we plan to examine are the alternative paths: What will be the operators’ role and where will the operators’ revenues come from once they have made this investment?

As soon as you, the operator, make that investment decision, it has fundamental implications as to who you are as a company. It is not just an upgrade.  

I was at a conference last year and a guy from NTT said: “We didn’t realise that when we made that [fibre access network] investment decision, we were rebuilding the company from scratch.” He said: “Now, 10 years on, at a strategy level, we have understood that – we are in a different business now.”

 

What is Diffraction Analysis going to do?

BF: We are a market research and consultancy firm. It is important to do both: consultancy keeps you grounded in what is happening in the market. Research is your ability to step back and articulate the global view.

I have already signed a couple of companies for whom I do advisory services. We also have classic consultancy projects. We are working for a vendor right now who is asking us to look at opportunities for them to enter the access market. They have disruptive technology and are looking to partner with companies and take a stake in the access market. We are in the middle of this and our advice might be: don’t do it.

One of the things we want to do is build modelling tools that allow legacy service providers to map the network deployment over time, not just as a single investment decision. Right now the question is: do I deploy fibre or not? But the reality is that even if the answer is yes, the deployment will take 15 years. And if it takes 15 years, what happens to all the people who don’t have fibre as I – the operator – gradually connect them?

We are trying to build a model that will optimise the cost and the service offered to end customers with a variety of technologies. This is where fibre-to-the-curb and various flavours like phantom mode DSL come into play.

We are aiming to do this by geographical area, to model where you should deploy fibre first and what you should do in non-fibre areas, and for how long, looking at the lifetime of these various technology options.*
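Felten gives no details of the model itself, so the Python sketch below is purely hypothetical: it illustrates the shape of such a per-area calculation by comparing deploying fibre immediately against running an interim technology and deferring the fibre capex. All the area names, costs, lifetimes and the discount rate are invented for illustration.

```python
# Hypothetical sketch of the per-area decision Felten describes: deploy fibre
# now, or run an interim technology (e.g. cabinet-based VDSL) and defer the
# fibre capex. Every number here is invented; a real model would use operator
# data on costs, take-up and technology lifetimes.

from dataclasses import dataclass

DISCOUNT_RATE = 0.06  # hypothetical cost of capital

@dataclass
class Area:
    name: str
    fibre_cost_per_home: float    # civil works dominate; varies by geography
    interim_cost_per_home: float  # e.g. a fibre-to-the-curb/VDSL upgrade
    interim_lifetime_years: int   # how long the interim technology suffices

def cost_fibre_now(area: Area) -> float:
    return area.fibre_cost_per_home

def cost_interim_first(area: Area) -> float:
    # Interim capex is paid now; the fibre capex is deferred until the interim
    # technology runs out of life, so it is discounted back to today.
    deferred = (area.fibre_cost_per_home /
                (1 + DISCOUNT_RATE) ** area.interim_lifetime_years)
    return area.interim_cost_per_home + deferred

areas = [
    Area("dense urban", fibre_cost_per_home=600,
         interim_cost_per_home=250, interim_lifetime_years=4),
    Area("suburban", fibre_cost_per_home=1200,
         interim_cost_per_home=300, interim_lifetime_years=8),
    Area("rural", fibre_cost_per_home=3500,
         interim_cost_per_home=400, interim_lifetime_years=15),
]

for a in areas:
    now, later = cost_fibre_now(a), cost_interim_first(a)
    choice = "fibre first" if now <= later else "interim technology first"
    print(f"{a.name}: fibre now {now:.0f} vs interim-then-fibre {later:.0f} "
          f"per home -> {choice}")
```

On these invented numbers the dense urban area favours fibre first while the suburban and rural areas favour an interim technology, illustrating Felten’s point that the answer varies by geography and with the lifetime of the technology options.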

 

What are the key lessons you learnt as a Yankee Group analyst?

BF: One of the things that strikes me is that in the economic shift we have experienced over the last 30 years, something has been lost, and that is long-term vision. That leads many organisations to make hugely inefficient decisions. These decisions may be rational but the long term is no longer part of the equation. In the telecom business it is striking how far this can lead people into making wrong decisions.

The second thing I learnt from interacting with many industry players is that the single toughest challenge every organisation has is fighting its own culture. There is a culture of business-as-usual which is at odds with the challenges of an ever-shifting technology market. Even in companies in the internet space that everyone views as agile and willing to reassess themselves, you find these cultural issues.

I’m not saying anything original, but interacting with these companies all around the world during my three years at Yankee highlighted this for me.

 

Most broadband users are still DSL-based. How will fibre-based access become massively deployed?

BF: Essentially there are three drivers for telcos to deploy. In order of importance they are: competition, network reboot and meeting customer demand. 

Competition is a clear driver. When your network access business is threatened as an organisation, every consideration about how fast you deploy for payback goes out of the window – you have to deploy. And then you learn the hard way since, by responding rather than anticipating, you make mistakes.

The second driver [network reboot] is not mature today. Smart CTOs around the world see fibre deployments as an opportunity to rethink far more than just their access infrastructure. And WDM-PON [wavelength division multiplexing – passive optical network] technology in access plays a significant part in that thinking.

If they deploy now, they may make savings and achieve network concentration, but these are not massive. If they wait, they might be able to save more, which is why this driver isn’t working right now.

The third driver is meeting customer needs. In their public discourse operators say this is first and foremost, but the reality is that since they have not found ways to make money out of traffic, they don’t want more traffic. So meeting customer needs is not a priority, except if you are in a competitive market and someone else is meeting customers’ needs, in which case you have to do it.

 

Diffraction Analysis’s team comprises people with wireline experience but the company plans to cover mobile too. “I do think that there is a great deal of sense in having a mobile arm too but I can’t build that myself – I don’t have the credibility or the knowledge,” says Felten, who is looking at partnerships or recruitment to add mobile to the operation.

*Diffraction Analysis has just published its research programme running to June 2011.

