Digital Home: Services, networking technologies and challenges facing operators
Source: Microsoft
The growth of internet-enabled devices and a maturing of networking technologies indicate the digital home is entering a new phase. But while operators believe the home offers important new service and revenue opportunities, considerable challenges remain. Operators are thus proceeding cautiously.
Here is a look at the status of the digital home in terms of:
- Services driving home networking
- Wireless and wireline home networking technologies
- Challenges
Services driving home networking
IPTV and video delivery are key services that place significant demands on the home network in terms of bandwidth and reach. Typically the residential gateway, which links the home to the broadband network, and the set-top box, where video is consumed, are located apart. Connecting the two has been a challenge for telcos. In contrast, cable operators (MSOs) have always networked video around the home; the MSOs’ challenge is adding voice and linking home devices such as PCs.
Now the telcos are meeting the next challenge: distributing video between multiple set-tops and screens in the home.
Other revenue-generating home services of interest to service providers include:
- A contract to support a subscriber’s home network
- E-health: remote monitoring of a patient’s health
- Home security using video cameras
- Media content: enabling a user to grab home-stored content when on the move
- Smart meters and energy management
One development that operators cannot ignore is ‘over-the-top’ services: users can get video from third parties directly over the internet. Such services are a source of competition for operators and complicate home networking requirements, since users can buy and connect their own media players and internet-enabled TVs. Yet when connectivity problems arise, it is the operator that gets the service call.
However, over-the-top services are also an opportunity in that they can be integrated as part of the operator’s own offerings and delivered with quality of service (QoS).
Wireless and wireline home networking technologies
Operators face a daunting choice of networking technologies. Moreover, no one technology promises complete, reliable home coverage due to wireless signal fades or wiring that is not where it is needed.
As a result operators must use more than one networking technology. Within wireline there are over half a dozen technology options available. And even for a particular wireline technology, power line for example, operators have multiple choices.
Wireless:
- Wi-Fi is the technology of choice with residential gateway vendors now supporting the IEEE 802.11n standard which extends the data rate to beyond 100 megabits-per-second (Mbps). An example is Orange’s Livebox2 home gateway, launched in June 2009.
- The second wireless option is the femtocell, now part of the defined feature set of the Home Gateway Initiative’s next-generation (Release 3) platform, planned for 2010. Mass deployment of femtocells has yet to happen, and femtocells will only serve handsets and consumer devices that are 3G-enabled.
Wireline:
- If new wiring of a home is possible, operators can use Ethernet Category-5 cabling, or plastic optical fibre (POF) which is flexible and thin.
- More commonly, existing home wiring – coaxial cable, electrical wiring (powerline) or telephone wiring – is used. Operators have adopted HomePNA, which runs over phone wiring; the Multimedia over Coax Alliance (MoCA) specification, which uses coaxial cabling; and the HomePlug Powerline Alliance’s HomePlug AV, which transmits data over a home’s power wiring.
- Gigabit home networking (G.hn) is a new standard being developed by the International Telecommunication Union. Set to appear in products in 2010, the standard can work over three wireline media: phone, coax and powerline. AT&T, BT and NTT are backing G.hn though analysts question its likely impact overall. Indeed one operator says the emerging standard could further fragment the market.
Challenges
- Building a home network is complex due to the many technologies and protocols involved.
- Users expect operators to solve their networking issues, yet operators own, and are chiefly interested in, only their own in-home equipment: the gateway and set-top box. Operators risk getting calls from frustrated users that have deployed a variety of consumer devices, and such calls impact the operators’ bottom line.
- Effective tools and protocols for home network monitoring and management are a must for operators. The Broadband Forum’s TR-069 and the Universal Plug and Play (UPnP) diagnostic and QoS software protocols continue to evolve, but so far only a fraction of their potential is being used.
Operators are understandably proceeding with care as they cross the home’s front door, to ensure their offerings are simple and reliable. Otherwise the revenue potential that home networking promises, as well as a long-term relationship with the subscriber, will be lost.
To read more:
- A full article including interviews with operators BT and Orange; vendors Cisco Systems, Alcatel-Lucent’s Motive division, Ericsson and Netgear; chip vendors Gigle Semiconductor and Broadcom; the Home Gateway Initiative and analysts Parks Associates, TelecomView and ABI Research will appear in the January 2010 issue of Total Telecom.
- Can residential broadband services succeed without home network management? Analysys Mason’s Patrick Kelly.
Active optical cable: market drivers

CIR’s report key findings
The global market for active optical cable (AOC) is forecast to grow to US$1.5bn by 2014, with the linking of data centre equipment the largest single market, valued at $835m. Other markets for the cabling technology include digital signage, PC interconnect and home theatre.
CIR’s report, entitled Active Optical Cabling: A Technology Assessment and Market Forecast, notes how AOC emerged with a jolt. Two years on, the technology is a permanent fixture that will continue to nimbly address applications as they appear. This explains why CIR views AOC as an opportunistic and tactical interconnect technology.

AOC: "Opportunistic and tactical"
Loring Wirbel
What is active optical cable?
An AOC converts an electrical interface to optical for transmission across a cable before being restored to the electrical domain. Optics are embedded as part of the cabling connectors with AOC vendors using proprietary designs. Being self-contained, AOCs have the opportunity to become a retail sale at electronics speciality stores.
A common interface for AOC is the QSFP but there are AOC products that use proprietary interfaces. Indeed the same interface need not be used at each end of the cable. Loring Wirbel, author of the CIR AOC report, mentions a MergeOptics’ design that uses a 12-channel CXP interface at one end and three 4-channel QSFP interfaces at the other. “If it gets traction, everyone will want to do it,” he says.
Origins
AOC products were launched by several vendors in 2007. Start-up Luxtera saw it as an ideal entry market for its silicon photonics technology; Finisar came out with a 10Gbps serial design; while Zarlink identified AOC as a primary market opportunity, says Wirbel.
Application markets
AOC is the latest technology targeting equipment interconnect in the data centre. Typical distances linking equipment range from 10 to 100m; 10m is where 10Gbps copper cabling starts to run out of steam while 100m and above are largely tackled by structured cabling.
“Once you get beyond 100 meters, the only AOC applications I see are outdoor signage and maybe a data centre connecting to satellite operations on a campus,” says Wirbel.
AOC is used to connect servers and storage equipment using either Infiniband or Ethernet. “Keep in mind it is not so much corporate data centres as huge dedicated data centre builds from a Google or a Facebook,” says Wirbel.
AOC’s merits include its extended reach and light weight compared to copper; servers can require metal plates to support the sheer weight of copper cabling. The technology also competes with optical pluggable transceivers, and here the battleground is cost, since an active optical cable combines both end transceivers and the cable in one.
To date AOC is used for 10Gbps links and for double data rate (DDR) and quad data rate (QDR) Infiniband. But it is the evolution of Infiniband’s roadmap - eight data rate (EDR, 20Gbps per lane) and hexadecimal data rate (HDR, 40Gbps per lane) - as well as the advent of 100m 40 and 100 Gigabit Ethernet links with their four and ten channel designs that will drive AOC demand.
The second largest market for AOC, about $450 million by 2014, and one that surprised Wirbel, is the ‘unassuming’ digital signage.
Until now, such video signage has been well served by 1Gbps Ethernet links. But with screens showing live high-definition feeds and four-way split screens, 10Gbps feeds are becoming the baseline. Moreover, distances of 100m to 1km are common.
PC interconnect is another market where AOC is set to play a role, especially with the inclusion of a high-definition multimedia interface (HDMI) port as standard on each netbook.
“A netbook has no local storage, using the cloud instead,” says Wirbel. Uploading video from a video camera to the server or connecting video streams to a home screen via HDMI will warrant AOC, says Wirbel.
Home theatre is the fourth emerging application for AOC though Wirbel stresses this will remain a niche application.
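Putting the quoted forecast figures together: the $835m data centre and roughly $450m signage segments account for most of the $1.5bn total, implying that the remaining two markets are comparatively small. A quick check (an inference from the numbers in the text, not a CIR-published split):

```python
# CIR 2014 AOC market forecast figures quoted above, in US$ millions.
TOTAL_2014 = 1500        # global AOC market
DATA_CENTRE = 835        # largest single market
DIGITAL_SIGNAGE = 450    # second largest market

# Implied remainder for PC interconnect and home theatre combined
# (our inference, not a figure CIR publishes in the text).
remainder = TOTAL_2014 - DATA_CENTRE - DIGITAL_SIGNAGE
print(f"PC interconnect + home theatre: ~${remainder}m")
```

The roughly $215m left over is consistent with Wirbel's view that home theatre, at least, will remain a niche application.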

AT&T rethinks its relationship with networking vendors

“We’ll go with only two players [per domain] and there will be a lot more collaboration.”
Tim Harden, AT&T
AT&T has changed the way it selects equipment suppliers for its core network. The development will result in the U.S. operator working more closely with vendors, and could spark industry consolidation. Indeed, AT&T claims the programme has already led to acquisitions as vendors broaden their portfolios.
The Domain Supplier programme was devised to ensure the financial health of AT&T’s suppliers as the operator upgrades its network to all-IP.
By working closely with a select group of system vendors, AT&T will gain equipment tailored to its requirements while shortening the time it takes to launch new services. In return, vendors can focus their R&D spending by seeing early the operator’s roadmap.
“This is a significant change to what we do today,” says Tim Harden, president, supply chain and fleet operations at AT&T. Currently AT&T, like the majority of operators, issues a request-for-proposal (RFP) before getting responses from six to ten vendors typically. A select few are taken into the operator’s labs where the winning vendor is chosen.
With the new programme, AT&T will work with players it has already chosen. “We’ll go with only two players [per domain] and there will be a lot more collaboration,” says Harden. “We’ll bring them into the labs and go through certification and IT issues.” Most importantly, operator and vendor will “interlock roadmaps”, he says.
The ramifications of AT&T’s programme could be far-reaching. The promotion of several broad-portfolio equipment suppliers into an operator’s inner circle promises them a technological edge, especially if the working model is embraced by other leading operators.
The development is also likely to lead to consolidation. Equipment start-ups will have to partner with domain suppliers if they wish to be used in AT&T’s network, or a domain supplier may decide to bring the technology in-house.
Meanwhile, selling to domain supplier vendors becomes even more important for optical component and chip suppliers.
Domain suppliers begin to emerge
AT&T first started work on the programme 18 months ago. “AT&T is on a five-year journey to an all-IP network and there was a concern about the health of the [vendor] community to help us make that transition, what with the bankruptcy of Nortel,” says Harden. The Domain Supplier programme represents 30 percent of the operator’s capital expenditure.
The operator began by grouping technologies. Initially 14 domains were identified before the list was refined to eight. The domains were not detailed by Harden but he did cite two: wireless access, and radio access including the packet core.
For each domain, two players will be chosen. “If you look at the players, all have strengths in all eight [domains],” says Harden.
AT&T has been discussing its R&D plans with the vendors, and where they have gaps in their portfolios. “You have seen the results [of such discussions] being acted out in recent weeks and months,” says Harden, who did not name particular deals.
In October Cisco Systems announced it planned to acquire IP-based mobile infrastructure provider Starent Networks, while Tellabs is to acquire WiChorus, a maker of wireless packet core infrastructure products. "We are not at liberty to discuss specifics about our customer AT&T,” says a Tellabs spokesperson. Cisco has still to respond.
Harden dismisses the suggestion that the programme will lead to vendors pursuing too narrow a focus. Vendors will be involved in a longer-term relationship – five years rather than the two or three common with RFPs – and will have an opportunity to earn back their R&D spending. “They will get to market faster while we get to revenue faster,” he says.
The operator is also keen to stress that there is no guarantee of business for a vendor selected as a domain supplier. Two are chosen for each domain to ensure competition. If a domain supplier continues to meet AT&T’s roadmap and has the best solution, it can expect to win business. Harden stresses that AT&T does not require a second-supplier arrangement here.
In September AT&T selected Ericsson as a domain supplier for wireline access, while suppliers for the Long Term Evolution (LTE) radio access domain will be announced in 2010.
WDM-PON: Can it save operators over €10bn in total cost of ownership?
Source: ADVA Optical Networking
“The focus of operators to squeeze the last dollar out of the system and optical component vendors is really nonsense.”
Klaus Grobe, ADVA Optical Networking.
Key findings
The total cost of ownership (TCO) of a widely deployed WDM-PON network is at least 20 percent lower than that of the broadband alternatives, VDSL and GPON. Given that deploying a wide-scale access network in a large western European country costs some €60bn, a 20 percent saving – €12bn, hence the figure in the title – is huge, even spread over 25 years.
What was modelled?
ADVA Optical Networking wanted to quantify the TCO of three access schemes: wavelength-division multiplexing passive optical networking (WDM-PON); gigabit PON (GPON), the PON scheme favoured by European incumbents; and copper-based VDSL (very-high-bit-rate digital subscriber line).
The company modelled a deployment serving 1 million residences and 10,000 enterprises. “We took seriously the idea of broadband roll out especially when operators talk about it being a strategic goal,” says Klaus Grobe, principal engineer at ADVA Optical Networking. “We wanted a single number that says it all.”
Assumptions
ADVA Optical Networking splits the TCO into four categories:
- Duct cost
- Other operational expense (OpEx)
- Energy consumption
- Capital expenditure (CapEx)
For ducting, it is assumed that VDSL already has fibre to the cabinet and copper linking the user, whereas for optical access – WDM-PON and GPON – the feeder fibre is present but distribution fibre must be added to connect each home and enterprise. “There is also a certain upgrade of the feeder fibre required but it is 5 percent of the distribution fibre costs,” says Grobe. Hence the ducting costs of GPON and WDM-PON are similar, and higher than VDSL’s.
A 25-year lifetime was used for the TCO analysis, during which three generations of upgrades are envisaged. For an end device such as a PON optical network unit (ONU), the cost is assumed to be the same for each generation, even though performance improves significantly each time.
The ‘other OpEx’ includes all the elements of OpEx except energy costs. The category includes planning and provisioning; operations, administration and maintenance (OA&M); and general overhead.
Planning and provisioning, as the name implies, covers the planning and provisioning of system links and bandwidth, says Grobe. Note also that a single WDM-PON network serves both residences and enterprises, whereas duplicate networks are required for GPON and VDSL, adding cost.
The ‘general overheads’ category includes an operator’s sales department. Grobe admits there is huge variation here depending on the operator and thus a common figure for all three cases was used.
Energy consumption is clearly important here. Three annual energy cost increase (AECI) rates were explored – 2, 5 and 10 percent (shown in the chart is the 5% case), with a cost of 80 €/MWh assumed for the first year.
The energy cost savings for WDM-PON come not from the individual equipment but from the reduced number of sites that deploying the access technology allows. The power consumption of a WDM-PON ONU is 1W, greater than that of VDSL, says Grobe, but far more local exchanges and cabinets are needed for VDSL than for WDM-PON.
And this is where the biggest savings arise: the difference in OA&M due to there being fewer sites for WDM-PON than for GPON and VDSL. That is because WDM-PON has a longer reach – up to 100km – from the central office to the end user. And, as mentioned, one WDM-PON network caters for enterprise and residential users, whereas GPON and VDSL require two distinct networks. This explains the large differences between VDSL, GPON and WDM-PON in the ‘other OpEx’ category.
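The annual energy cost increase compounds over the 25-year lifetime, so the energy category is a geometric sum. A minimal sketch of that part of the calculation, using the €80/MWh first-year price and the 5 percent AECI case quoted above (the consumption figure below is purely illustrative, not ADVA's):

```python
def lifetime_energy_cost_eur(annual_mwh, years=25, aeci=0.05,
                             first_year_eur_per_mwh=80.0):
    """Total energy cost over the network lifetime, with the per-MWh
    price rising by the annual energy cost increase (AECI) each year."""
    return sum(annual_mwh * first_year_eur_per_mwh * (1 + aeci) ** year
               for year in range(years))

# Illustrative only: one 1W ONU per home for 1 million homes,
# running continuously -> 8,760 MWh consumed per year in total.
annual_mwh = 1_000_000 * 1.0 * 8760 / 1e6   # watts x hours -> MWh
print(f"{lifetime_energy_cost_eur(annual_mwh) / 1e6:.1f} million EUR")
```

At a 5 percent AECI the 25-year multiplier on the first-year cost is roughly 48x, which is why the choice of AECI rate matters to the comparison.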
Grobe says it is difficult to estimate the site reduction that deploying WDM-PON will deliver, and operators are not forthcoming with such figures. However, the model and its assumptions have been presented to operators and no objections were raised. The model is also robust: varying any one parameter wildly does not change its main findings.
Lastly, for CapEx, WDM-PON equipment is, as expected, the most expensive. CapEx for all three cases, however, is by far the smallest contributor to TCO.
Mass roll outs on the way?
So will operators now deploy WDM-PON on a huge scale? Sadly no, says Grobe. Up-front costs are paramount in operators’ thinking despite the vast cost saving if the lifetime of the network is considered.
But the analysis highlights something else for Grobe that will resonate with the optical community. “The focus of operators to squeeze the last dollar out of system and optical component vendors is really nonsense,” he says.
See ADVA Optical Networking's White Paper
See an associated presentation
ECOC 2009: Squeezing optics out of optical communications
An interview with Polina Bayvel, Professor of Optical Communications and Networks and head of the Optical Networks Group at University College London (UCL), on her ECOC conference impressions.
What did you find noteworthy at ECOC 2009?
PB: So much work on digital signal processing and coherent detection...will these techniques lead to another revolution in fibre optics? But there is much to understand about how to design the DSP algorithms and how to best match these to appropriate fibre maps in some implementable way.
Did anything at the conference surprise you?
PB: Is there really a capacity crunch or is it a cost crunch and who will end up paying? There is much work on new fibres, new DSP but why is no-one looking at new amplifiers?
What did you learn from ECOC?
PB: I learnt how little progress has been made in all-optical networking – the well-trodden ideas and arguments on wavelength routing that have been circulating for over 15 years are not being taken up by operators but are being re-discovered and re-offered as new... and just how conservative the operators still are, except those in Japan.
Did you see or hear anything that gives reason for industry optimism?
PB: Lots of buzz about linear and nonlinear DSP, error-correcting codes, net coding gain, FPGAs and many other developments which, whilst invigorating the industry, are squeezing optics out of optical communications. Here’s to the fightback for optics!
ECOC 2009: An industry view
“Most of the action was in 40 and 100 Gigabit,” said Stefan Rochus, vice president of marketing and business development at CyOptics. “There were many 40/ 100 Gigabit LR4 module announcements - from Finisar, Opnext and Sumitomo [Electric Industries].”
Daryl Inniss, practice leader, components at market research firm Ovum, noted a market shift regarding 40 Gigabit. “There has been substantial progress in lowering the cost, size and power consumption of 40 Gigabit technology,” he said.
John Sitch, Nortel’s senior advisor for optical development, metro Ethernet networks, highlighted the prevalence of, and interest in, coherent detection/digital signal processing designs for 40 and 100 Gigabit-per-second (Gbps) transmission. Renewed interest in submarine systems was also evident, he said.
Rochus also highlighted photonic integration as a show theme, citing the multi-source agreement from u2t Photonics and Picometrix; the integrated DPSK receiver involving Optoplex, u2t Photonics and Enablence Technologies; and CIP Technologies’ monolithically integrated semiconductor optical amplifier with a reflective electro-absorption modulator.
Intriguingly, Rochus also heard talk of OEMs becoming vertically integrated again. “This time maybe by strategic partnerships rather than OEMs directly owning fabs,” he said.
The attendees were also surprised by the strong turnout at ECOC, which was expected to suffer given the state of the economy. “Attendance appeared to be thick and enthusiasm strong,” says Andrew Schmitt, directing analyst, optical at Infonetics Research. “I heard the organisers were expecting 200 people on the Sunday [for the workshops] but got 400.”
In general, most of the developments at the show were as expected. “No big surprises, but the ongoing delays in getting actual 100 Gigabit CFP modules were a small surprise,” said Sitch. “And if everyone's telling the truth, there will be plenty of competition in 100 Gigabit.”
Inniss was struck by how 100 Gigabit technology is likely to fare: “The feeling regarding 100 Gigabit is that it is around the corner and that 40 Gigabit will somehow be subsumed,” he said. “I’m not so sure – 40 Gigabit is growing up and while operators are cheerleading 100 Gigabit technology, it doesn’t mean they will buy – let’s be realistic here.”
As for the outlook, Rochus believes the industry has reason to be upbeat. “There is optimism regarding the third and fourth quarters for most people,” he said. “Inventories are depleted and carriers and enterprises are spending again.”
Inniss’ optimism stems from the industry's longer term prospects. He was struck by a quote used by ECOC speaker George Gilder: “Don’t solve problems, pursue opportunities.”
Network traffic continues to grow at a 40-50% yearly rate yet some companies continue to worry about taking a penny out of cost, said Inniss, when the end goal is solving the bandwidth problem.
For him 100 Gbps is just a data rate, as 400 Gbps will be the data rate that follows. But given the traffic growth, the opportunity revolves around transforming data transmission. “For optical component companies innovation is the only way," said Inniss. "What is required here is not a linear, incremental solution."
40G and 100G Ethernet: First uses of the high-speed interfaces
Operators, enterprises and equipment vendors are all embracing 100 Gigabit technologies even though the standards will only be completed in June 2010.

Comcast and Verizon have said they will use 100Gbit/s transmission technology once it is available. Juniper Networks demonstrated a 100 Gigabit Ethernet (GbE) interface on its T1600 core router in June, while in May Ciena announced it will supply 100Gbit/s transmission technology to NYSE Euronext to connect its data centres.
Ciena’s technology is for long-haul transmission, outside the remit of the IEEE’s P802.3ba Task Force’s standards work defining 40GbE and 100GbE interfaces. But the two are clearly linked: the emergence of the Ethernet interfaces will drive 100Gbit/s long-haul transmission.
ADVA Optical Networking foresees two applications for metro and long-haul 100Gbps transmission: carrying 100Gbps IP router links, and multiplexing 10Gbps streams into a 100Gbps lightpath. “We see both: for router and switch interfaces, and to improve fibre bandwidth,” says Klaus Grobe, principal engineer at ADVA Optical Networking.
The trigger for 40Gbit/s market adoption was the advent of OC-768 SONET/SDH 2km reach interfaces on IP core routers. In contrast, 40GbE and 100GbE interfaces will be used more broadly. As well as IP routers and multiplexing operators’ traffic, the standards will be used across the data centre, to interconnect high-end switches and for high-performance computing.
The IEEE Task Force is specifying several 40GbE and 100GbE standards, with copper-based interfaces used for extreme short reach, while optical addresses interfaces with reaches of 100m, 10km and 40km.
For 100m short-reach links, multimode fibre is used: four fibres at 10Gbps in each direction for 40GbE, and ten fibres at 10Gbps in each direction for 100GbE. For 40GbE and 100GbE 10km long-reach links, and for 100GbE 40km extended-reach links, single-mode fibre is used; here 4x10Gbps and 4x25Gbps respectively are carried over a single fibre using wavelength-division multiplexing (WDM).
“Short reach optics at 100Gigabit uses a 10x10 electrical interface that drives 10x10 optics,” says John D’Ambrosia, chair of the IEEE P802.3ba Task Force. “The first generation of 100GBASE-L/ER optics uses a 10x10 electrical interface that then goes to 4x25 WDM optics.”
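The lane arrangements described above can be tabulated. A small sketch, using the standard IEEE variant names for the configurations mentioned (the names are common IEEE usage rather than taken from the article):

```python
# 40GbE/100GbE optical lane configurations from IEEE P802.3ba:
# interface variant -> (number of lanes, per-lane rate in Gbps).
variants = {
    "40GBASE-SR4 (100m, multimode)":        (4, 10),
    "100GBASE-SR10 (100m, multimode)":      (10, 10),
    "40GBASE-LR4 (10km, single-mode WDM)":  (4, 10),
    "100GBASE-LR4 (10km, single-mode WDM)": (4, 25),
    "100GBASE-ER4 (40km, single-mode WDM)": (4, 25),
}

for name, (lanes, per_lane_gbps) in variants.items():
    # Aggregate rate is simply lanes x per-lane rate (encoding overhead ignored).
    print(f"{name}: {lanes} x {per_lane_gbps}Gbps = {lanes * per_lane_gbps}Gbps")
```

The table makes D'Ambrosia's point concrete: the first-generation 100GBASE-LR4/ER4 modules accept a 10x10 electrical interface but retime it onto 4x25Gbps WDM optics.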
The short reach interfaces reuse 10Gbps VCSEL and receiver technology and are designed for high density, power-sensitive applications. “The IEEE chose to keep the reach to 100m to give a low cost solution that hits the biggest market,” says D’Ambrosia, although he admits that a 100m reach is limiting for certain customers.
Cisco Systems agrees. “Short reach will limit you,” says Ian Hood, Cisco’s senior product marketing manager for service provider routing and switching. “It will barely get across the central office but it can be used to extend capacity within the same rack.” For this reason Cisco favours longer reach interfaces but will use short reach ‘where convenient’.
D’Ambrosia would not be surprised if a 1 to 2km single mode fibre variant will be developed though not as part of the current standards. Meanwhile, the Ethernet Alliance has called for an industry discussion on a 40Gbps serial initiative.
Within the data centre, both 40GbE and 100GbE reaches have a role.
A two-layer switching hierarchy is commonly used in data centres. Servers connect to top-of-rack switches that funnel traffic to aggregation switches that, in turn, pass traffic to the core switches. Top-of-rack switches will continue to receive 1GbE and 10GbE streams for a while yet but the interface to aggregation switches will likely be 40GbE. In turn, aggregation switches will receive 40GbE streams and use either 40GbE or 100GbE to interface to the core switches. Not surprisingly, first use of 100GbE interfaces will be to interconnect core Ethernet switches.
Extended-reach 100GbE interfaces will be used to connect equipment up to 40km apart, between two data centres for example, but only when a single 100GbE link over the fibre pair is sufficient. Otherwise dense WDM technology will be used.
Servers will take longer to migrate to 40 and 100GbE. “There are no 40GbE interfaces on servers,” says Daryl Inniss, Ovum’s vice president and practice leader components. “Ten gigabit interfaces only started to be used [on servers] last year.” Yet the IT manager in one leading German computing centre, albeit an early adopter, told ADVA that he could already justify using a 40GbE server interface and expected 100GbE interfaces would be needed by 2012.
Two pluggable form factors have already been announced for 100GbE. The CFP supports all three link distances and has been designed with long-haul transmission in mind, says Matt Traverso, senior manager of technical marketing at Opnext. The second, the CXP, is designed for compact short reach interfaces. For 40GbE more work is needed.
Juniper’s announced core router card uses the CFP to implement a 100m connection. The CFP is being used to connect the router to a DWDM platform for IP-traffic transmission between points-of-presence, and for data centre trunking.
So will one 40GbE or 100GbE interface standard dominate early demand? Opnext’s Traverso thinks not. “All the early adopters have one or two favourite interfaces – high-performance computing favours 40 and 100GbE short reach, while for core routers it is long-reach 100GbE,” he says. “All the early adopters will have their chosen interfaces before vendors round out their portfolios.”
This article appeared in the exhibition magazine at ECOC 2009.
Next-Gen PON: An interview with BT
Peter Bell, Access Platform Director, BT Innovate & Design
Q: The status of 10 Gigabit PON – 10G EPON and 10G GPON (XG-PON): Applications, where it will likely be used, and why is it needed?
PB: IEEE 10G EPON: BT not directly involved but we have been tracking it and believe the standard is close to completion (gazettabyte: The standard was ratified in September 2009.)
ITU-T 10Gbps PON: This has been worked on in the Full Service Access Network group (FSAN) where it became known as XG-PON. The first version XG-PON1 is 10Gbps downstream and 2.5Gbps upstream and work has started on this in ITU-T with a view to completion in the 2010 timeframe. The second version XG-PON2 is 10Gbps symmetrical and would follow later.
Not specific to BT’s plans, but an operator may use 10Gbps PON where its higher capacity justifies the extra cost: for example, for business customers, or for feeding multi-dwelling units (MDUs) or VDSL street cabinets.
Q: BT's interest in WDM-PON and how would it use it?
PB: BT is actively researching WDM-PON. In a paper presented at the ECOC '09 conference in Vienna (24th September 2009), we reported the operation of a compact DWDM comb source on an integrated platform in a 32-channel, 50km WDM-PON system using 1.25Gbps reflective modulation.
We see WDM-PON as a longer term solution providing significantly higher capacity than GPON. As such we are interested in the 1Gbps per wavelength variants of WDM-PON and not the 100Mbps per wavelength variants.
Q: FSAN has two areas of research regarding NG PON: What is the status of this work?
PB: NG-PON1 work is focussed on 10 Gbps PON (known as XG-PON) and has advanced quite quickly into standardisation in ITU-T.
NG-PON2 work is longer term and progressing in parallel to NG-PON1.
Q: BT's activities in next gen PON – 10G PON and WDM-PON?
PB: It is fair to say BT has led research on 10Gbps PONs. For example, in an early 10Gbps PON paper by Nesset et al. at ECOC 2005, we documented the first error-free physical-layer transmission at 10Gbps over a 100km-reach PON architecture, both upstream and downstream.
We then partnered with vendors to achieve early proof-of-concepts via two EU funded collaborations.
Firstly, in MUSE we collaborated with NSN et al. on what was essentially the first proof-of-concept of what has become known as XG-PON1 (see attached long-reach PON paper).
Secondly, our work with NSN, Alcatel-Lucent et al. on 10Gbps symmetrical hybrid WDM/TDMA PONs in the EU project PIEMAN has very recently been completed.
Q: What are the technical challenges associated with 10G PON and especially WDM-PON?
PB: For 10Gbps PONs in general, the technical challenges are:
- Achieving the same loss budgets - reach - as GPON despite operating at higher bitrate and without pushing up the cost.
- Coexistence on same fibres as GPON to aid migration.
- For the specific case of 10Gbps symmetrical PON (XG-PON2), the 10Gbps burst-mode receiver needed in the headend is especially challenging. Solving this has been a major achievement of our work in PIEMAN.
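The loss-budget challenge above can be made concrete with some back-of-envelope optics. The sketch below estimates reach from the standard GPON ODN budgets (28 dB for Class B+, 32 dB for Class C+, per G.984.2); the fibre attenuation, splitter excess loss and system margin are typical assumed figures, not values from the interview.

```python
import math

def max_reach_km(budget_db, split_ratio, fibre_db_per_km=0.35,
                 splitter_excess_db=0.5, margin_db=1.0):
    """Estimate maximum PON reach for a given ODN loss budget.

    budget_db: ODN loss budget (28 dB for GPON Class B+, 32 dB for C+)
    split_ratio: power-split ratio (assumed to be a power of two)
    Assumed typical figures: 0.35 dB/km fibre loss (1310 nm upstream),
    0.5 dB excess loss per 2-way splitter stage, 1 dB system margin.
    """
    stages = int(math.log2(split_ratio))
    # Each 2-way split costs 10*log10(2) ~ 3 dB, plus excess loss.
    splitter_loss = stages * (10 * math.log10(2) + splitter_excess_db)
    remaining = budget_db - splitter_loss - margin_db
    return remaining / fibre_db_per_km

# GPON Class B+ (28 dB) vs Class C+ (32 dB), both with a 32-way split
print(f"Class B+: {max_reach_km(28, 32):.0f} km")
print(f"Class C+: {max_reach_km(32, 32):.0f} km")
```

Under these assumptions a 32-way split leaves only about 10 dB of budget for fibre at Class B+, which is why matching GPON reach at 10Gbps (where receiver sensitivity is worse) is hard without amplification or better optics.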
For WDM-PONs the technical challenges are:
- Reducing the cost and footprint of the headend equipment (requires optical component innovation)
- Standardisation to increase volumes of WDM-PON specific optical components thereby reducing costs.
- Upgrade from live GPON/EPON network to WDM-PON (e.g. changing splitter technology)
Q: There are several ways in which WDM-PON can be implemented, does BT favour one and why, or is it less fussed about the implementation and more meeting its cost points?
PB: We are only interested in WDM-PONs giving 1Gbps per wavelength or more and not the 100Mbps per wavelength variants. In terms of detailed implementation we would support the variant giving lowest cost, footprint and power consumption.
Q: What has been happening with BT's long-reach PON work?
PB: We have done a great deal of work on the long-reach PON concept, which is summarised in a published review paper in IEEE JLT and includes details of our work to prototype a next-generation PON capable of 10Gbps, 100km reach and a 512-way split. This includes the EU collaborations MUSE and PIEMAN.
From a technical perspective, Class B+ and C+ GPON (G.984.2) could reach a high percentage of UK customers from a significantly reduced number of BT exchanges. Longer reach PONs would then increase the coverage further.
Following our widely published work on amplified GPON, extended-reach GPON has now been standardised (G.984.6) with 60km reach and a 128-way split, and some vendors have early products. 10Gbps PON standards are expected to have the same reach as GPON.
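A quick calculation shows why a PON with the prototype's parameters (100km reach, 512-way split) has to be optically amplified rather than purely passive. The fibre and splitter loss figures below are typical assumed values, not from the interview.

```python
import math

# Prototype parameters from the review paper: 100 km reach, 512-way split.
reach_km, split = 100, 512

# Assumed typical losses: 0.35 dB/km fibre (1310 nm upstream) and
# ~3.5 dB per 2-way splitter stage (3 dB split plus excess loss).
fibre_loss = reach_km * 0.35
splitter_loss = math.log2(split) * 3.5
total_loss = fibre_loss + splitter_loss

print(f"Fibre loss:     {fibre_loss:.0f} dB")
print(f"Splitter loss:  {splitter_loss:.1f} dB")
print(f"Total ODN loss: {total_loss:.1f} dB "
      f"(vs 32 dB for GPON Class C+) -> amplification required")
```

The roughly 66 dB of end-to-end loss is more than double the highest standard GPON budget, which is why the long-reach architecture places optical amplifiers in the network.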
OneChip solution for Fibre-To-The-Home
An interview with Jim Hjartarson, CEO of OneChip Photonics
Q. In March 2009, OneChip raised $19.5m. How difficult is it nowadays for an optical component firm to receive venture capital funding?
A. Clearly, the venture capital community, given the current macroeconomic environment, is being selective about the new investments it makes in the technology market in general, and photonics in particular. However, if you can demonstrate that you have a unique approach to a problem that has not yet been solved, and that there is a large, untapped market opportunity, VCs will be interested in your value proposition.
Q. What is it about your company's business plan that secured the investment?
A. We believe OneChip Photonics has three fundamental advantages that resulted in our securing our initial two rounds of funding, which totaled $19.5 million:
- A truly breakthrough approach and technology that will remove the cost and performance barriers that have been impeding the ubiquitous deployment of Fiber-to-the-Home (FTTH) and enable new business and consumer broadband applications.
- A large, untapped market opportunity. Ovum estimates that the FTTx optical transceiver market will grow from $387 million at the end of 2009 to $594 million by the end of 2013. OneChip also is poised to introduce photonics integration into other high-volume business and consumer markets, where our breakthrough photonic integrated circuit (PIC) technology can reduce costs and improve performance. These markets could be orders of magnitude larger than the FTTx optical transceiver market.
- A seasoned and successful management team. OneChip has attracted top talent – from industry leading companies such as MetroPhotonics, Bookham, Catena Networks, Fiberxon, Nortel and Teknovus – who have successful track records of designing, manufacturing, marketing and selling transceivers, PICs and mass-market broadband access solutions.
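For context, the Ovum figures quoted above imply a compound annual growth rate of roughly 11% over the four years from end-2009 to end-2013; the calculation below simply derives that from the numbers in the text.

```python
# Implied compound annual growth rate (CAGR) from the Ovum forecast
# quoted above: $387M (end of 2009) to $594M (end of 2013), i.e. 4 years.
start, end, years = 387e6, 594e6, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 11.3%
```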
Q. The passive optical networking (PON) transceiver market faces considerable pricing pressures. Companies use TO cans and manual labour or more sophisticated hybrid integration where the laser and photodetectors are dropped onto a common platform to meet various PON transceiver specifications. Why is OneChip pursuing indium phosphide-based monolithic integration and why will such an approach be cheaper than a hybrid platform that can address several PON standards?
A. Most current FTTH transceiver providers base their transceivers on either discrete optics or planar lightwave circuit (PLC) designs. These designs offer low levels of integration and require assembly from multiple parts. There is little technical differentiation among them. Rather, vendors must compete on the basis of who can assemble the parts in a slightly cheaper fashion. And there is little opportunity to further reduce such costs.
While more integrated than fully discrete optics-based designs, PLC designs still require discrete active components and the assembly of as many as 10 parts. Great care must be taken, during the manufacturing process, to align all parts of the transceiver correctly. And while packaging can be non-hermetic, these parts can fall out of alignment through thermal or mechanical stress. PLC designs also have proven to be an expensive alternative. For all of these reasons, the PON system vendors with which OneChip has engaged have indicated that they are not interested in deploying PLC-based designs.
OneChip Photonics is taking a new approach with its breakthrough PIC technology. OneChip is monolithically integrating all the functions required for an optical transceiver onto a single, indium phosphide (InP)-based chip. All active AND passive components of the chip – including the distributed-feedback (DFB) laser, optically pre-amplified detector (OPAD), wavelength splitter (WS), spot-size converter (SSC), and various elements of passive waveguide circuitry – are, uniquely, integrated in one epitaxial growth step, without re-growth or post-growth modification of the epitaxial material.
With respect to transmit performance, OneChip’s single-frequency DFB lasers will offer superior performance – much more suitable for longer-reach and higher bit-rate applications – than competing Fabry-Perot (FP) lasers. With respect to receive performance, OneChip’s optically pre-amplified detector design is a higher gain-bandwidth solution than competing avalanche photodiode (APD) solutions. It also is a lower-cost solution, as it does not require a high-voltage power source.
OneChip’s monolithic photonic integrated circuits (PICs) have the smallest footprint on the market, the optical parts are aligned for life, and the parts are highly robust (resistant to vibration and other outside elements). Further, OneChip’s PICs are designed for automated mounting on a silicon optical bench, without requiring active alignment, using industry-standard, automated assembly processes – resulting in high yields of good devices.
Utilizing automated production processes, OneChip can maintain the highest production scalability (easily ramping up and down) in the industry and respond rapidly to customer needs. Standard production processes also mean reliable supplies to customers, at the lowest prices on the market.
Q. Several companies have explored integrated PON solutions and have either dismissed the idea or have come to market with impressive integrated designs only to ultimately fail (e.g. Xponent Photonics). Why are you confident OneChip will fare better?
A. As noted earlier, PLC designs developed by vendors such as Xponent are not fully integrated. PLC designs still require discrete active components and the assembly of as many as 10 parts, using a glass substrate. This results in poor yields and high costs.
OneChip is taking a fundamentally different approach. We are the only company in the optical access market that is monolithically integrating all the active and passive functions required for an optical transceiver onto a single, indium phosphide (InP)-based chip. This enables us to achieve low cost, high performance, high yields and high quality.
OneChip is one of only a few companies with new core intellectual property and advanced technology in the optical transceiver business that can sustain a competitive advantage over other optical component providers, which rely on conventional technology and assembly processes. Carriers and system providers recognize that an approach, which would eliminate assembly from multiple parts, is needed to lower the cost and improve the performance of transceivers, Optical Network Terminals (ONTs) and Optical Line Terminals (OLTs) in optical access networks. We believe OneChip’s fully integrated technology can help unleash the potential of FTTH and other mass-market optical communications applications.
Q. If integrated PON is a good idea why, in OneChip’s opinion, have silicon photonics startups so far ignored this market?
A. “Silicon photonics” designs face the inherent limitation that a laser cannot be implemented in silicon. Therefore, separate optical and electrical devices must be grown with different processes and then assembled together. With as many as 10 parts having to be interconnected on a ceramic substrate, the alignment, tuning and reliability issues can significantly add costs and reduce yields.
In addition, system providers and service providers need to be cognizant of the inherent performance limitations with transceivers built from discrete parts. While short-reach EPON transceivers already have been optimized down to below a U.S. $15 price, these implementations can only meet low-end performance requirements. Networks would require a switch to more costly transceivers to support longer-range EPON, 2.5G EPON, GPON or 10G PON. Because most service providers are looking to reap the payback benefits of their investments in fiber installations/retrofits over the shortest possible timeframes, it doesn’t make sense to risk adding the high cost of a forklift changeover of transceiver technology at some point during the payback period.
Q. PON with its high volumes has always been viewed as the first likely market for photonic integrated circuits (PICs). What will be the second?
A. OneChip recognizes that optical communications is becoming economically and technologically mandatory in areas outside of traditional telecommunications, such as optical interconnections in data centers and other short to ultra-short reach broadband optical networks. OneChip is poised to introduce photonics integration into these and other high-volume business and consumer markets, where our PIC technology can reduce costs and improve performance.
[End]
