Will LTE lead to new revenues for the operators?
The opportunities and challenges the Long Term Evolution (LTE) standard poses for mobile operators: an article written for the magazine Informilo ahead of the Mobile World Congress show.
OFC/NFOEC 2012: Technical paper highlights
Source: The Optical Society
Novel technologies, operators' experiences with state-of-the-art optical deployments and technical papers on topics such as next-generation PON and 400 Gigabit and 1 Terabit optical transmission are some of the highlights of the upcoming OFC/NFOEC conference and exhibition, to be held in Los Angeles from March 4-8, 2012. Here is a taste of some of the technical paper highlights.
Optical networking
In Spectrum, Cost and Energy Efficiency in Fixed-Grid and Flex-Grid Networks (Paper number 1248601), the Athens Information Technology Center evaluates single- and multi-carrier networks at rates up to 400 Gigabit-per-second (Gbps). One finding is that efficient spectrum utilisation and fine bit-rate granularity are essential if cost and energy efficiencies are to be realised.
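As a back-of-the-envelope illustration of why grid granularity matters for spectrum efficiency, the sketch below compares the spectrum reserved for a channel on a conventional 50GHz fixed grid versus 12.5GHz flexible-grid slices (per ITU-T G.694.1). The signal bandwidths used are illustrative assumptions, not figures from the paper.

```python
import math

FIXED_SLOT_GHZ = 50.0   # conventional fixed ITU grid spacing
FLEX_SLICE_GHZ = 12.5   # flexible-grid slice width (ITU-T G.694.1)

def spectrum_needed(signal_bw_ghz, slot_ghz):
    """Spectrum actually reserved: whole grid slots, rounded up."""
    return math.ceil(signal_bw_ghz / slot_ghz) * slot_ghz

# Illustrative signal bandwidths (assumptions, not the paper's figures)
for label, bw in [("100Gbps coherent", 37.5), ("400Gbps single-carrier", 75.0)]:
    print(f"{label}: fixed grid reserves {spectrum_needed(bw, FIXED_SLOT_GHZ)}GHz, "
          f"flex grid {spectrum_needed(bw, FLEX_SLICE_GHZ)}GHz")
```

A 75GHz signal wastes a further 25GHz on the fixed grid but fits exactly in six flex slices, which is the spectrum-efficiency argument in a nutshell.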
In several invited papers, operators report their experiences with the latest networking technologies. AT&T Labs discusses advanced ROADM networks; NTT details the digital signal processing (DSP) aspects of 100Gbps DWDM systems and, in a separate paper, the challenge for Optical Transport Network (OTN) at 400Gbps and beyond, while Verizon gives an update on the status of MPLS-TP. As part of the invited papers, Finisar's Chris Cole outlines the next-generation CFP modules.
Optical access
Fabrice Bourgart of FT-Orange Labs details where the next-generation PON standards - NG-PON2 - are going, while NeoPhotonics's David Piehler outlines the state of photonic integrated circuit (PIC) technologies for PONs. This is also a topic tackled by Oclaro's Michael Wale: PICs for next-generation optical access systems. Meanwhile, Ao Zhang of Fiberhome Telecommunication Technologies discusses the state of FTTH deployments in the world's biggest market, China.
Switching, filtering and interconnect optical devices
NTT has a paper that details a flexible format modulator using a hybrid design based on a planar lightwave circuit (PLC) and lithium niobate. In a separate paper, NTT discusses silica-based PLC transponder aggregators for a colourless, directionless and contentionless ROADM, while Nistica's Tom Strasser discusses gridless ROADMs. Compact thin-film polymer modulators for telecoms is a subject tackled by GigOptix's Raluca Dinu.
One novel paper, from Ming Liu and colleagues at UC Berkeley, covers graphene-based optical modulators (Paper Number: 1249064). The optical loss of graphene can be tuned by shifting its Fermi level, the authors note, and the paper shows that such tuning can be used to make a high-speed optical modulator at telecom wavelengths.
Optoelectronic Devices
CMOS photonic integrated circuits is the topic discussed by MIT's Rajeev Ram, who outlines a system-on-chip with photonic input and output. Applications range from multiprocessor interconnects to coherent communications (Paper Number: 1249068).
A polarisation-diversity coherent receiver on polymer PLC for QPSK and QAM signals is presented by Thomas Richter of the Fraunhofer Institute for Telecommunications (Paper Number: 1249427). The device has been tested in systems using 16-QAM and QPSK modulation up to 112 Gbps.
Core network
Ciena's Maurice O'Sullivan outlines 400Gbps/ 1Tbps high-spectral efficiency technology and some of the enabling subsystems. Alcatel-Lucent's Steven Korotky discusses traffic trends: drivers and measures of cost-effective and energy-efficient technologies and architectures for the optical backbone networks, while transport requirements for next-generation heterogeneous networks is the subject tackled by Bruce Nelson of Juniper Networks.
Data centre
IBM's Casimir DeCusatis presents a future - 2015-and-beyond - view of data centre optical networking. The data centre is also tackled by HP's Moray McLaren, in his paper on future computing architectures enabled by optical and nanophotonic interconnects. Optically-interconnected data centres are also discussed by Lei Xu of NEC Labs America.
Expanding usable capacity of fibre symposium
There is a special symposium at OFC/ NFOEC entitled Enabling Technologies for Fiber Capacities Beyond 100 Terabits/second. The papers in the symposium discuss MIMO and OFDM, technologies more commonly encountered in the wireless world.
Huawei boosts its optical roadmap with CIP acquisition
Huawei has acquired UK photonic integration specialist, CIP Technologies, from the East of England Development Agency (EEDA) for an undisclosed fee. The acquisition gives the Chinese system vendor a wealth of optical component expertise and access to advanced European Union R&D projects.
"By acquiring CIP and integrating the company’s R&D team into Huawei’s own research team, Huawei’s optic R&D capabilities can be significantly enhanced," says Peter Wharton, CEO at the Centre for Integrated Photonics (CIP). CIP Technologies is the trading name of the Centre for Integrated Photonics.
Huawei now has six European R&D centres with the acquisition of CIP.
CIP Technologies has indium phosphide as well as planar lightwave circuit (PLC) technology which it uses as the basis for its HyBoard hybrid integration technology. HyBoard allows actives to be added to a silica-on-silicon motherboard to create complex integrated optical systems.
CIP has been using its photonic integration expertise to develop compact, more cost-competitive WDM-PON optical line terminal (OLT) and optical network unit (ONU) designs, including the development of an integrated transmitter array.
The company employs 50 staff, with 70% of its work coming from the telecom and datacom sectors. About a third of its revenues are from advanced products and two thirds from technical services.
The CEO of CIP says all current projects for its customers will be carried out as planned but CIP’s main research and development service will be focused on Huawei’s business priorities. “We expect all contracted projects to be completed and current customers are being assisted to find alternate sources of supply," says Wharton.
CIP is also part of several EU Seventh Framework programme R&D projects. These include BIANCHO, a project to reduce significantly the power consumption of optical components and systems, and 3CPO, which is developing colourless and coolerless optical components for low-power optical networks.
Huawei's acquisition will not affect CIP's continuing participation in such projects. "For EU framework and other collaborative R&D projects, the ultimate share ownership does not matter so long as it is a research organisation based in Europe, which CIP will continue to be," says Wharton.
CIP said it had interest from several potential acquirers but that the company favoured Huawei.
What this means
CIP has a rich heritage. It started as BT's fibre optics group. But during the optical boom of 1999-2000, BT shed its unit, a move also adopted by such system vendors as Nortel and Lucent.
The unit was acquired by Corning in 2000 but the acquisition did not prove a success and in 2002 the group faced closure before being rescued by the East of England Development Agency (EEDA).
CIP has always been an R&D organisation in character rather than a start-up. Now with Huawei's ambition, focus and deep pockets coupled with CIP's R&D prowess, the combination could prove highly successful if the acquisition is managed well.
Huawei's acquisition looks shrewd. Optical integration has been discussed for years but its time is finally arriving. The 40 Gigabit and 100 Gigabit technologies are based on designs with optical functions in parallel; at 400 Gigabit the number of channels only increases.
Optical access will also benefit from photonic integration - from bi-directional optical sub-assemblies for GPON and EPON, to WDM-PON, to ultra-dense WDM-PON. China is also the biggest fibre-to-the-x (FTTx) market by far.
A BT executive talking about the operator's 21CN mentioned how system vendors used to ask him repeatedly about Huawei. Huawei, in contrast, used to ask him about Infinera.
Huawei, like all the other systems vendors, has much to do to match Infinera's photonic integrated circuit expertise and experience. But the Chinese vendor's optical roadmap just got a whole lot stronger with the acquisition of CIP.
Further reading:
Reflecting light to save power
Melding networks to boost mobile broadband
In a Q&A, Bryan Kim, manager at SK Telecom's Core Network Lab, discusses the mobile operator's heterogeneous network implementation and the service benefits.
SK Telecom has developed an enhanced mobile broadband service that combines two networks: 3G and Wi-Fi or Long Term Evolution (LTE) and Wi-Fi. The mobile operator will launch the 3G/ Wi-Fi heterogeneous network service in the second quarter of 2012 to achieve a maximum data rate of 60 Megabits-per-second (Mbps), while the LTE and Wi-Fi integrated service will be offered in 2013, enabling up to a 100Mbps wireless Internet service.
Q. What exactly has SK Telecom developed?
A. SK Telecom has developed a technology that provides subscribers with a faster data service by using two different wireless networks simultaneously. For instance, customers can enjoy a much faster video streaming service supported by either 3G and Wi-Fi, or LTE and Wi-Fi networks.
To benefit, a handset must use two radio frequencies at the same time. We have also built a system that is installed in the core network for simultaneous transmission.
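At its simplest, simultaneous transmission amounts to splitting each flow across the two radios in proportion to their current link rates, so that both parts finish together. The sketch below illustrates the idea with assumed link speeds; SK Telecom's actual core-network mechanism has not been disclosed.

```python
def split_flow(file_mb, rates_mbps):
    """Split a download across links in proportion to link rate, so
    both links finish together; returns per-link share (MB) and the
    overall transfer time (seconds)."""
    total = sum(rates_mbps.values())
    shares = {link: file_mb * r / total for link, r in rates_mbps.items()}
    time_s = file_mb * 8 / total  # MB -> Mb, divided by aggregate rate
    return shares, time_s

# Illustrative rates (assumptions): 3G at 20Mbps plus Wi-Fi at 40Mbps,
# matching the 60Mbps aggregate quoted for the 3G/Wi-Fi service.
shares, t = split_flow(60, {"3G": 20, "Wi-Fi": 40})
print(shares, f"{t:.1f}s")  # 3G carries 20MB, Wi-Fi 40MB, done in 8.0s
```

The same file over the 20Mbps link alone would take 24 seconds, which is the user-visible speed-up the service is selling.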
"If it takes 10s to download a 10MB file using a 3G network and 5s to download the same file using the heterogeneous solution, the impact on the battery life is the same."
Bryan Kim, SK Telecom
Q. LTE-Advanced is standardising heterogeneous networking. This suggests that what SK Telecom has done is pre-standard and proprietary. What have you done that is different to the emerging standard?
A. SK Telecom is not talking about LTE-Advanced technology. This is a technology that enables simultaneous use of heterogeneous wireless networks we’ve deployed.
Q. What are the technical challenges involved in implementing a heterogeneous network?
A. It is technically difficult to realise the technology as it involves using networks with different characteristics in terms of speed and latency. At the same time, the technology is designed to minimise the changes required to the existing networks.
There have not really been challenges in linking the two separate networks, but it is always a challenge to analyse the real-time network status in order to provide fast data services.
Q. What impact will simultaneous heterogeneous network operation have on a smartphone's battery life?
A. Using the heterogeneous network integration solution does increase the battery consumption: the device is using two radio frequencies. However, from a customer's perspective, if it takes 10s to download a 10MB file using a 3G network and 5s to download the same file using the heterogeneous solution, the impact on the battery life is the same.
SK Telecom also plans to apply a scanning algorithm for selecting qualified Wi-Fi networks.
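Kim's battery argument is straightforward arithmetic: roughly double the radio power drawn for roughly half the time gives the same energy per download. A minimal sketch, using illustrative power figures that are assumptions rather than SK Telecom measurements:

```python
# Energy = power x time. Radio power draws below are hypothetical.
P_3G = 1.0    # watts with only the 3G radio active (assumption)
P_HET = 2.0   # watts with 3G and Wi-Fi radios active together (assumption)

t_3g = 10.0   # seconds to fetch a 10MB file over 3G alone
t_het = 5.0   # seconds over the combined heterogeneous link

energy_3g = P_3G * t_3g      # joules spent on the 3G-only download
energy_het = P_HET * t_het   # joules spent on the combined download

print(energy_3g, energy_het)  # both 10.0 J: same battery impact
```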
Q. What services can SK Telecom see benefiting from having a 3G/ LTE network combined with a Wi-Fi network?
A. Customers will experience greater convenience when using multimedia services and network games, for example, with increased available bandwidth.
Source: SK Telecom
Heavy users tend to consume a lot of video services through mobile broadband. With this solution, SK Telecom will be providing faster data services to customers compared to when using only one network. This will enhance data service markets. The company has no plans for now to provide services directly.
Q. What mobile services come close to using 60Mbps or 100Mbps?
A. The 60Mbps and 100Mbps are theoretical maximum speeds. People who sign up for a 100Mbps fixed-line network service rarely experience the 100Mbps speed. With this technology, SK Telecom aims to increase the amount of wireless network resources for subscribers by using two different types of networks in a simultaneous manner, which in turn will boost the services that require wider bandwidth including video streaming service and network games.
Q. With a combination of Wi-Fi and cellular, most operators want to get traffic off the cellular network onto the ‘hot spot’. Does SK Telecom really want to fill their cellular network by providing higher speeds?
A. From the customer's perspective, a Wi-Fi network offers narrow coverage and limited capacity, and since it is not a managed network, wireless data access happens only when customers request it. Thus, data offloading often does not work as intended by the mobile carriers.
In contrast, cellular networks provide national coverage, so if there is an available Wi-Fi network to add to the cellular network, we can use the two simultaneously to offer a data service. By doing so, customers will enjoy faster data services and mobile operators will be able to offload data naturally.
100 Gigabit 'unstoppable'
A Q&A with Andrew Schmitt (@aschmitt), directing analyst for optical at Infonetics Research.

"40Gbps has even less value in the metro than in the core"
Andrew Schmitt, Infonetics Research
A study from market research firm, Infonetics Research, has found that operators have a strong preference for deploying 100 Gigabit-per-second (Gbps) technology as they upgrade their networks.
Infonetics interviewed 21 incumbent service providers, competitive operators and mobile operators that have either 40Gbps, 100Gbps or both wavelength types installed in their networks, or that plan to install them by next year (2013).
The operators surveyed, from all the major regions, account for over a quarter (28%) of worldwide telecom carrier revenue and capital expenditure.
The study's findings include:
- A strong preference by the carriers for 100Gbps transport in both Brownfield and Greenfield installations. Carriers will use 40 and 100Gbps to the same degree in existing Brownfield networks while favouring 100Gbps for new, Greenfield builds.
- The reasons to deploy 40Gbps and 100Gbps optical transport equipment include lowering the cost per bit, taking advantage of the superior dispersion performance of coherent optics, and lowering incremental common equipment costs due to the increased spectral efficiency.
- Most respondents indicate 40Gbps is only a short-term solution and that they will move the majority of installations to 100Gbps once those products become widely available.
- Non-coherent 100Gbps is not yet viewed as an important technology.
- Colourless and directionless ROADMs and Optical Transport Network (OTN) switching are important components of Greenfield builds; gridless and contentionless ROADMs are much less so.
Q&A with Andrew Schmitt
Q. A key finding is that 40Gbps and 100Gbps are equally favoured for Brownfield routes. And is it correct that Brownfield refers to existing routes carrying 10Gbps and maybe 40Gbps wavelengths while Greenfield involves new 100Gbps wavelengths? What is it about Brownfield that 40Gbps and 100Gbps have equal footing? Equally, for Greenfield, is the thinking: "If we are deploying a new lit fibre, we might as well start with the newest and fastest"?
A: The assumptions on Brownfield versus Greenfield are correct; the definitions in the survey and the report are more detailed, but that is right.
It is more an issue that they [carriers] are building with 40Gbps now but will transition to 100Gbps where it can be used. Where it can't be used they stick with 40Gbps. There are many reasons why 100Gbps may not work in existing networks.
Q: Another finding is that 40Gbps is seen as a short-term solution. What is short term? And will that also be true for the metro or does metro have its own dynamic?
A: We didn't test timing explicitly for Greenfield versus Brownfield networks. It [40Gbps] doesn't necessarily peak; it is just not growing at the same rate as 100Gbps. And 40Gbps has even less value in the metro than in the core, particularly in Greenfield builds, where 100Gbps combined with soft-decision forward error correction (SD-FEC) performs almost as well as 40Gbps.
Q: The study found that non-coherent 100Gbps isn't yet viewed as an important technology. Why do you think that is so? And what is your take on the non-coherent 100Gbps opportunity?
A: The jury is still out.
The large customers I spoke with haven't looked at it and therefore can't form an opinion. A lot of promises and marketing at this point but that doesn't mean it won't work. Module vendors are pretty excited about it and they aren't stupid.
Q: You say colourless and directionless is seen as important ROADM attributes, gridless and contentionless much less so. If operators are building 100Gbps Greenfield overlays, is not gridless a must to future-proof the network investment?
A: The gridless requirement is completely overblown, and the folks positioning it as a requirement today haven't done the work to understand the issues of trying to use it now. This survey was even more negative than I expected.
Is wireless becoming a valid alternative to fixed broadband?
Are wireless technologies such as Long Term Evolution (LTE) and WiMAX2 closing the gap on fixed broadband?
A recent blog by The Economist discussed how Long Term Evolution (LTE) is coming to the rescue of one of its US correspondents, located 5km from the DSL cabinet and struggling to get a decent broadband service.

Peak rates are rarely achieved: the mobile user needs to be very close to a base station and a large spectrum allocation is needed.
Mark Heath, Unwired Insight
The correspondent makes some interesting points:
- The DSL link offered a download speed of 700kbps at best while Verizon's FiOS passive optical networking (PON) service is not available as an alternative.
- The correspondent upgraded to an LTE handset service that enabled up to eight PCs and laptops to achieve a 15-20x download speed improvement.
The blog suggests that wireless data is becoming fast enough to address users' broadband needs.
But is LTE broadband now good enough? Mark Heath, a partner at telecom consultancy Unwired Insight, is sceptical: "Is the gap between landline and wireless broadband narrowing? I'm not convinced."
Peak wireless rates, and in particular LTE, may suggest that wireless is now a substitute for fixed. But peak rates are rarely achieved: the mobile user needs to be very close to a base station and a large spectrum allocation is needed.
"While peak rates on mobile look to be increasing exponentially, average throughput per base station and base station capacities are increasing at a much more modest rate," says Heath. Hence the operator and vendor focus on LTE Advanced, as well as much bigger spectrum allocations and the use of heterogeneous networks.
The advantage of landline broadband quality, in contrast, is that it does not suffer the degradation of a busy cell. There is much less disparity between peak rates and sustainable average throughputs with fixed broadband.
If fixed has advantages, it still requires operators to make the relevant investment, particularly in rural areas. "Wireless is better than nothing in rural areas," says Heath. But the gap between fixed and mobile isn't shrinking as much as peak data rates suggest.
Yet mobile networks do have a trump card: wide-area mobility. With the increasing number of people dependent on smartphones, iPads and devices like the Kindle Fire, an ever-increasing value is being placed on mobile broadband.
So if fixed broadband is keeping its edge over wireless, just what future services will drive the need for fixed's higher data rates?
This is a topic to be explored as part of the upcoming next-generation PON feature.
Further reading:
broadbandtrends: The Fixed versus mobile broadband conundrum
2012: A year of unique change
The third and final part on what CEOs, executives and industry analysts expect during the new year, and their reflections on 2011.
Karen Liu, principal analyst, components telecoms, Ovum @girlgeekanalyst

"We’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogeneous networks including Wi-Fi."
Last year was a long one. Looking back, it is hard to believe that only one year has elapsed between January 2011 and now.
In fact, looking back it is hard to remember how things looked a year ago: natural disasters were considered rare occurrences. WiMAX’s role was still being discussed, and some viewed TDD LTE as a Chinese peculiarity. For that matter, cloud-RAN was another weird Chinese idea. But no matter; China could do anything, given its immunity to economics and to the need for a return on investment.
Femtocells were consumer electronics for the occasional indoor coverage fix, and Wi-Fi was not for carriers.
Only optical could do 100Mbps to the subscriber, who, by the way, was moving on to 10 Gigabit PON in short order. Flexible spectrum ROADMs meant only Finisar could play, and high port-count wavelength-selective switches had come and gone. 100 Gigabit DWDM took several slots, hadn’t shipped for real, and even the client-side interface was a problem.
As for modules, 40 Gigabit Ethernet (GbE) client was CFP-sized, and high-density 100GbE looked so far away that the non-standard 10x10 MSA was welcomed.
NeoPhotonics was a private company, doing that wacky planar integration thing that works OK for passives but not actives.
Now it feels like we’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogeneous networks including Wi-Fi.
Optical is one of several ways to do backhaul or PC peripherals. 40GbE, even single-mode, comes in a QSFP package, tunable comes in an SFP — both of which, by the way, use optical integration.
Most optical transport vendors, even metro specialists, have 100 Gigabit coherent in trial stage at least. Thousands of 100 Gig ports and tens of thousands of 40 Gig have shipped.
Flexible spectrum is being standardised and CoAdna went public. The tunable laser start-up phase concluded with Santur finding a home in NeoPhotonics, now a public company.
But we also have a new feeling of vulnerability.
Optical components revenues and margins slid back down. Bad luck can strike twice, with Opnext taking the hit from both the spring earthquake and the fall floods. China turns out not to be immune after all, and time hasn’t automatically healed Europe.
What will happen this year? At this rate, I think we’ll see a lot of news at OFC in a couple of months' time. By then I’ll probably think: "Was it as recently as January when the world looked so different?"
Brian Protiva, CEO of ADVA Optical Networking @ADVAOpticalNews
Last year was an incredible year for networks. In many respects it was a watershed moment. Optical transport took a huge step forward with the genuine availability of 100 Gigabit technologies.
What's even more incredible is that 100 Gigabit emerged in more than the core: we saw 100 Gig metro solutions enter the marketplace. This means that for the first time enterprises and service providers have the opportunity to deploy 100 Gig solutions that fit their needs. Thanks to the development of direct-detection 100 Gig technology, cost is becoming less and less of an issue. This is a game changer.
In 2012, 100 Gig deployments will continue to be a key topic, with more available choices and maturing systems. However, I firmly believe the central focus of 2012 will be automation and multi-layer network intelligence.

"We need to see networks that can effectively govern and optimise themselves."
Talking to our customers and the industry, it is clear that more needs to be done to develop true network automation. There are very few companies that have successfully addressed this issue.
We need to see networks that can effectively govern and optimise themselves. That can automatically deliver bandwidth on demand, monitor and resolve problems before they become service disrupting, and drive dramatically increased efficiency.
The future of our networks is all about simplicity. Today's complex, operationally inefficient networks can no longer support the continued fierce bandwidth growth. Streamlined operations are essential if operators are to drive further profitable growth.
I'm excited about helping to make this happen.
Arie Melamed, head of marketing, ECI Telecom @ecitelecom
The existing momentum of major traffic growth with no proportional revenue increase has continued - even intensified - in 2011. This means that operators have to invest in their networks without being able to generate the proportional revenue increase from this investment. We expect to see new business models crop up as operators cope with over-the-top (OTT) services.
To differentiate themselves from competition, operators must make the network a core part of the end-customer experience. To do so, we expect operators to introduce application-awareness in the network – optimising service delivery to avoid network expansions and introduce new revenues.
We also expect operators to offer quality-of-service assurance to end users and content application providers, turning a lose-lose situation around.
Larry Schwerin, CEO of Capella Intelligent Subsystems @CapellaPhotonic
Over 2011, we witnessed the demand for broadband access increase at an accelerated rate. Much of this has been fueled by the continuation of mass deployments of broadband access - PON/FTTH, wireless LTE, HFC, to name a few - as well as the ever-increasing implementation of cloud computing, requiring instantaneous broadband access. Video and rich media are a small but growing piece of this equation.
The ultimate impact of this is yet to be felt, as people start to draw more narrowcast rather than broadcast content. The final element will come when upstream content from appliances similar to Sling Media, as well as the various forms of video conferencing, becomes more widespread. This will lead to demand for more symmetrical bandwidth from an upstream perspective.

"Change is definitely in order for the optical ecosystem. The question is how and when?"
Along with this, the issue of falling revenue-per-bit is forcing network operators to develop more cost-effective ways for managing this traffic.
All of the aforementioned is driving demand for higher capacity and more flexible support at the fundamental optical layer.
I believe this will translate into more bits-per-wavelength, more wavelengths-per-fibre and, finally, more flexibility for network operators, who will be able to more easily manage traffic at the optical layer. This points to good news for transponder, tunable laser and ROADM/WSS suppliers.
2011 also pointed out certain issues within the optical communications sector. Most notably, entering 2011, the financial marketplace was bullish on the optical sector following rapid quarter-on-quarter growth of certain larger optical players. Then the “Ides of March” came and optical stocks lost as much as 40% of their value, when it was deemed there had been a pull-back in demand by a very few, but nonetheless important, players in the sector.
Later in the year came the flooding in Thailand, which hampered the production capabilities of many of the optical components players.
Overall margins in the sector remain at unacceptable levels, furthering speculation that things need to change for a more robust environment to exist.
What will 2012 bring?
I believe demand for bandwidth will continue to grow. Data centres will gain more focus as cloud computing continues to gain traction. This will lead to more demand for fundamental technologies in the area of optical transmission and management.
The next phase of wavelength management solutions will start to emerge, both at high port counts (1x20) and at low port counts (1x2, 1x4) for edge applications. More emphasis will be placed on monitoring and control as more complex optical networks are built.
Change is definitely in order for the optical ecosystem. The question is how and when? Will it simply be consolidation? How will vertical integration take shape? How will new technologies influence potential outcomes?
2012 should be a year of unique change.
Terry Unter, president and general manager, optical networks solutions, Oclaro
Discussion of, and progress on, defining next-generation ROADM network architectures was a very important development in 2011. In particular, the major network equipment manufacturers broadly agreed on the feature requirements and technology choices needed to enable a more cost-efficient optical network layer. Colourless, directionless and, to a significant degree, contentionless are clear goals, while we continue to drive down the cost of the network.

"We expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year."
Coherent detection transponder technology is a critical piece of the puzzle ensuring scalability of network capacity while leveraging a common technology platform. We succeeded in volume production shipments of a 40 Gig coherent transponder and we announced our 100 Gig transponder.
2012 will be an important year for 100 Gig. The availability of 100 Gig transponder modules for deployment will enable a much wider list of system manufacturers to offer their customers more spectrally-efficient network solutions. The interest is universal from metro applications to the long haul and ultra-long haul market segments.
While there is much discussion about 400 Gig and higher rates, standards are in very early stages. The industry as a whole expects 100 Gig to be a key line rate for several years.
As we enter 2012, we expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year.
The CFP4 optical module to enable Terabit blades
Source: Gazettabyte, Xilinx
The CFP2 is about half the size of the CFP while the CFP4 is half the size of the CFP2. The CFP4 is slightly wider and longer than the QSFP.
The CFP2 and CFP4 will use a 4x25Gbps electrical interface, doing away with the 10x10Gbps-to-4x25Gbps gearbox IC used for current CFP 100GBASE-LR4 and -ER4 interfaces. Both modules are also defined for 40 Gigabit Ethernet use.
The CFP's maximum power rating is 32W, the CFP2's 12W and the CFP4's 5W. But vendors that put eight CFP2s or 16 CFP4s on a blade still want to meet a 60W total power budget.
Getting close: Four CFP modules deliver slightly less bandwidth than 48 SFP+ modules - 4x100Gbps (400Gbps) versus 480Gbps - while consuming more power: 60W versus 48W. Moving to the CFP2 module will double the blade's bandwidth without consuming more power, and the CFP4 will do the same again: a blade with 16 CFP4 modules promises 1.6Tbps while requiring 60W. Source: Xilinx
The first CFP2 modules are expected this year - there could be vendor announcements as early as the upcoming OFC/NFOEC 2012 show to be held in LA in the first week of March. The first CFP4 products are expected in 2013.
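The blade arithmetic in the caption can be checked directly. Here is a sketch using the figures quoted above (the 60W blade budget and 100Gbps per module); the per-module wattage computed is the budget divided across modules, not the modules' maximum ratings.

```python
BUDGET_W = 60.0  # total blade optical power budget quoted in the article

def blade(module_count):
    """Face-plate bandwidth (Gbps) and allowed per-module power (W),
    assuming 100Gbps per module within the fixed blade budget."""
    return module_count * 100, BUDGET_W / module_count

for module, count in [("CFP", 4), ("CFP2", 8), ("CFP4", 16)]:
    gbps, per_module_w = blade(count)
    print(f"{module}: {count} x 100G = {gbps}Gbps, "
          f"needs <= {per_module_w:.2f}W per module")
```

The 16-CFP4 case yields 1.6Tbps at 3.75W per module, comfortably inside the 5W CFP4 rating, while eight CFP2s need to stay under 7.5W each, below the 12W CFP2 maximum.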
Further reading
The CFP MSA presentation: CFP MSA 100G roadmap and applications
OIF promotes uni-fabric switches & 100G transmitter
The OIF's OTN implementation agreement (IA) allows a packet fabric to also switch OTN traffic. The result is that operators can now use one switch for both traffic types, aiding IP/ Ethernet and OTN convergence. Source: OIF
Improving the switching capabilities of telecom platforms without redesigning the switch, and shrinking the 100 Gigabit transmitter, are just two recent initiatives of the Optical Internetworking Forum (OIF).
The OIF, the industry body tackling design issues not addressed by the IEEE and International Telecommunication Union (ITU) standards bodies, has just completed its OTN-over-Packet-Fabric protocol that enables optical transport network (OTN) traffic to be carried over a packet switch. The protocol works by modifying the line cards at the switch's input and output, leaving the switch itself untouched (see diagram above).
Separately, the OIF is starting a 100 Gigabit-per-second (Gbps) transmitter design project dubbed the integrated dual-polarisation quadrature modulated transmitter assembly (ITXA). The Working Group aims to expand 100Gbps applications with a transmitter design half the size of the OIF's existing 100Gbps transmitter module.
The Working Group also wants greater involvement from the system vendors to ensure the resulting 100 Gig design is not conservative. "We joke about three types of people that attend these [working group] meetings," says Karl Gass, the OIF’s Physical and Link Layer Working Group vice-chair. "The first group has something they want to get done, the second group has something already and they don't want something to get done, and the third group want to watch." Quite often it is the system vendors that fall into this third group, he says.
OTN-over-Packet-Fabric protocol
The OTN protocol enables a single switch fabric to be used for both traffic types - packets and time-division multiplexed (TDM) OTN - saving operators cost.
"OTN is out there while Ethernet is prevalent," says Winston Mok, technical author of the OTN implementation agreement. "What we would like to do is enable boxes to be built that can do both economically."
The existing arrangement where separate packet and OTN time-division multiplexing (TDM) switches are required. Source: OIF
Platforms using the protocol are coming to market. ECI Telecom says its recently announced Apollo family is one of the first OTN platforms to use the technique.
The protocol works by segmenting OTN traffic into a packet format that is then switched before being reconstructed at the output line card. To do this, the constant bit-rate OTN traffic is chopped up so that it can easily go through the switch as a packet. "We want to keep the [switch] fabric agnostic to this operation," says Mok. "Only the line cards need to do the adaptations."
The OTN traffic also has timing information which the protocol must convey as it passes through the switch. The OIF's solution is to vary the size of the chopped-up OTN packets. The packet is nominally 128-bytes long. But the size will occasionally be varied to 127 and 126 bytes as required. These sequences are interpreted at the output of the switch as rate information and used to control a phase-locked loop.
Mok says the implementation agreement document that describes the protocol is now available. The protocol does not define the physical layer interface connecting the line card to the switch, however. "Most people have their own physical layer," says Mok.
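A toy model may help illustrate the size-modulation idea. The segment count per interval, the 126/127/128-byte encoding rule and the averaging used for rate recovery below are illustrative assumptions, not details of the OIF implementation agreement:

```python
def segment_sizes(bytes_in_interval, n_segments=1000, nominal=128):
    """Sender side (sketch): emit exactly n_segments segments per
    interval, each 126, 127 or 128 bytes, summing to the number of
    OTN bytes that actually arrived in the interval."""
    deficit = n_segments * nominal - bytes_in_interval
    assert 0 <= deficit <= 2 * n_segments, "clock offset out of range"
    sizes = [nominal] * n_segments
    # Take 2-byte shaves first (126-byte segments), then one 1-byte shave.
    twos, one = divmod(deficit, 2)
    for i in range(twos):
        sizes[i] = nominal - 2
    if one:
        sizes[twos] = nominal - 1
    return sizes

def rate_error(sizes, nominal=128):
    """Receiver side (sketch): the mean observed segment size,
    relative to nominal, yields a fractional frequency-error signal
    that can drive the clock-recovery phase-locked loop."""
    mean = sum(sizes) / len(sizes)
    return (mean - nominal) / nominal

# Example: a source clock roughly 100 ppm slow delivers 127,987 bytes
# instead of 128,000 in a 1,000-segment interval.
sizes = segment_sizes(127987)
print(round(rate_error(sizes) * 1e6, 1), "ppm")  # about -100 ppm
```

The receiver never inspects the payload: the occasional shorter segments alone tell it how fast the original constant-bit-rate stream was running, which is what lets the switch fabric itself stay agnostic.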
100 Gig ITXA
The ITXA project will add to the OIF's existing integrated transmitter document. The original document addresses the 100 Gigabit transmitter for dual-polarisation, quadrature phase-shift keying (DP-QPSK) for long-haul optical transmission. The OIF has also defined 100Gbps receiver assembly and tunable laser documents.
The latest ITXA Working Group has two goals: to shrink the size of the assembly to lower cost and increase the number of 100Gbps interfaces on a line card, and to expand the applications to include metro. The ITXA will still address 100Gbps coherent designs but will not be confined to DP-QPSK, says Gass.
"We started out with a 7x5-inch module and now there is interest from system vendors and module makers to go to smaller [optical module] form factors," says Gass. "There is also interest from other modulator vendors that want in on the game."
To reduce size, the ITXA will support other modulator technologies besides the lithium niobate used for long-haul. These include indium phosphide, gallium arsenide and polymer-based modulators.
Gass stresses that the ITXA is not a replacement for the current transmitter implementation. "We are not going to get the 'quality' that we need for long-haul applications out of other modulator technologies," he says. "This is not a Gen II [design]."
The Working Group's aim is to determine the 'greatest common denominator' for this component. "We are trying to get the smallest form factor possible that several vendors can agree on," says Gass. "To come out with a common pin out, common control, common RF (radio frequency) interface, things like that."
Gass says the work directions are still open for consideration. For example, adding the laser with the modulator. "We can come up with a higher level of integration if we consider adding the laser, to have a more integrated transmitter module," says Gass.
As for wanting greater system-vendor input, the Working Group wants more of the vendors' system-requirement insights to avoid the design becoming too restrictive.
"You end up with component vendors that do all the work and they want to be conservative," says Gass. "The component vendors don't want to push the boundaries as they want to hit the widest possible customer base."
Gass expects the ITXA work to take a year, with system demonstrations starting around mid-2013.
Reflections 2011, Predictions 2012 - Part 2
Gazettabyte asked industry analysts, CEOs, executives and commentators to reflect on the last year and comment on developments they most anticipate for 2012. Here are the views of Verizon's Glenn Wellbrock, Professor Rod Tucker, Ciena's Joe Berthold, Opnext's Jon Anderson, NeoPhotonics' Tim Jenks and Vladimir Kozlov of LightCounting.
Glenn Wellbrock, Verizon's director of optical transport network architecture & design
The most significant accomplishment from an optical transport perspective for me was the introduction of 100 Gigabit into Verizon's domestic - US - network.

"The key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit"
That accomplishment has paved the way for us to hit the ground running in 2012 with a very aggressive 100 Gigabit deployment plan. I also believe this accomplishment gives others the confidence to start taking advantage of this leading-edge technology.
With coherent receiver technology and the associated high-speed electronics lowering the propagation latency by up to 15%, we see a much cleaner line system design that eliminates external dispersion compensation fibre while bringing down the cost, space and power per bit.
The value of the whole industry moving in this direction means higher volumes and, therefore, lower costs. This new infrastructure will allow operators to get ahead of customer demand, thus improving delivery intervals and introducing new, higher bandwidth services to those large key customers that require it.
In my opinion, the key technology enabler in 2012 will be the flexible grid optical switching that can support data rates beyond 100 Gigabit and provides the framework to support colourless, directionless and contentionless optical nodes.
Today, field technicians must plug a new transmitter/ receiver into the appropriate direction and filter port at both circuit ends. With this new technology, operations personnel can simply plug the new card into the next available port and it can then be provisioned, tested and even moved to a new colour or direction remotely without any on-site personnel involvement - even when there are multiple copies of the same colour on the same add/ drop structure coming from different fibres.
This new nodal architecture takes advantage of the inherent channel selection capability of the coherent receiver to eliminate fixed filters and opens up the door for a truly reconfigurable optical add/ drop multiplexer (ROADM) - creating new flexibility that can be used for optical restoration, network defragmentation, operational simplicity, and more.
Rod Tucker, Director of the Institute for a Broadband Enabled Society (IBES), Director of the Centre for Energy-Efficient Telecommunications (CEET), and professor of electrical and electronic engineering at the University of Melbourne.
Australia's National Broadband Network (NBN) hit the ground running in 2011.
The project is still many years from completion, but in 2011 the roll-out of fibre-to-the-premises infrastructure began in earnest. This is a very noteworthy project - a wholesale broadband access network delivering advanced broadband services to the entire population of the country, including fibre to 93% of all premises and a mixture of fixed wireless and satellite to the remainder. At an estimated cost of around AUS$36 billion, the price tag is not small.

"The environment created by [Australia's] National Broadband Network will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery"
But the wholesale-only model maximises opportunities for competition at the service provider level, and reduces wasteful duplication of infrastructure in the last mile. A remarkable aspect of the NBN project is that a deal has been struck between the incumbent telco, Telstra, and the government-owned owner of the NBN.
Under this deal, Telstra will shut down its Hybrid-Fibre-Coax (HFC) network and decommission its legacy copper access network. Australia will become a truly fibre-connected country, with a future-proof broadband infrastructure.
My thoughts for 2012 also relate to Australia's National Broadband Network. The environment created by the NBN will greatly enhance opportunities for innovations in new services and new modes of broadband service delivery.
I anticipate that in 2012 and beyond, new services providers and aggregators in areas such as health care, education, entertainment and energy will emerge.
I am very excited about the opportunities.
Joe Berthold, vice president of network architecture at Ciena
One of the most memorable developments from a network architecture point of view was the clear emergence of the category of packet-optical switching products to serve as the transport layer of backbone IP networks.
For years two competing points of view have been put forth. First, in the 'IP-over-glass' position, long-haul optics is incorporated into core routers. This has never taken off, with some disappointing attempts in the early days of 40 Gigabit. The second approach involves a separate, very much simpler, packet optical transport platform being introduced to interconnect core routers. The packet transport could be based on Ethernet protocols, MPLS, MPLS-TE or MPLS-TP.

"It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development"
What is quite significant in this development is that traditional router vendors seem to be going in this direction too, with the vision of a much simpler packet switching platform to keep cost, space and power under control.
This is a clear response to the overwhelming need we see in the market, representing a separation of packet switching into two layers: one with global routing capability at strategic locations in the network, and the other with flexible transport functionality for network traffic engineering.
In 2012 it will be fascinating to see how the struggle for protocol dominance plays out within the data centre.
While the IETF has many competing proposals, worked on in multiple groups, the IEEE is now in final ballot for Shortest Path Bridging (IEEE 802.1aq).
Shortest Path Bridging has broad applicability in networks, but we might see it first emerge as a solution within the data centre.
The other contender within the data centre is OpenFlow, which has developed quite a momentum too.
It will be interesting to see if a large internet data centre operator decides to embrace the OpenFlow concept at this very early stage of its development.
Jon Anderson, director of technology programme at Opnext
Our most significant 2011 events were the Japan great earthquake in March and the Thailand floods in October. Both events caused major disruptions and challenges in optical component supply-chain management and manufacturing.
JDS Uniphase's tunable SFP+ announcement was well ahead of the technology curve.

"Our most significant 2011 events were the Japan great earthquake in March and the Thailand floods in October."
In 2012 we expect initial production shipments and deployment of 100Gbps PM-QPSK/ coherent modules. We also expect a fast production ramp of 40 Gigabit Ethernet (GbE) QSFP+ modules for data centre applications.
Another development to watch is the next-generation 100 GbE interconnect technology and standards development for low-cost, high-density modules for data centre applications.
Lastly, there will be an increased focus on technologies and solutions for 100 Gigabit DWDM in metro and extended reach enterprise applications.
Tim Jenks, CEO of NeoPhotonics

NeoPhotonics made significant progress this year in developments of components and technologies for coherent transmission networks, including receivers, transmitters and advanced approaches toward switching.
We continue to see increasing adoption of coherent transmission systems, broad-scale deployment of access networks and a continuing emergence of large scale data centres as a prominent element of the communications network landscape.
Vladimir Kozlov, CEO of LightCounting
The industry was strong enough to get over an earthquake, tsunami and flood in 2011. Softer demand for optics in 2011 helped - is still helping - many vendors to ride the disruptions. Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011. We are looking forward to a smoother ride in 2012, as demand/ supply reach equilibrium and nature cooperates.
"Ironically, the industry was more stressed ramping up production in 2010 to meet demand than dealing with the disruptions of 2011"
Service provider revenue and capex were up significantly in 2011. Mobile data is driving the growth, but even wireline revenues are improving, and FTTx is probably behind it. This should be a sustainable trend for 2012-2015: even as service providers curb expenses to improve profitability, a larger fraction of capex will be spent on equipment. New technology is critical to stay ahead of the competition.
Data centre optics had another good year with 10GBASE-T falling further behind schedule and with 100 Gigabit generating much action. This will probably get even more interesting in 2012.
Our conservative forecast for active optical cable, criticised by some vendors, was not conservative enough in 2011. It will take a while for this segment to unfold.
For Part 1, click here
For Part 3, click here
