Glenn Wellbrock’s engineering roots

Glenn Wellbrock

After four decades shaping optical networking, Glenn Wellbrock has retired. He shares his career highlights, industry insights, and his plans to embrace a quieter life of farming and hands-on projects in rural Kansas.

Glenn Wellbrock’s (pictured) fascination with telecommunications began at an early age. “I didn’t understand how it worked, and I wanted to know,” he recalls.

Wellbrock’s uncle owned a small, rural telephone company where Wellbrock worked while studying, setting the stage for his first full-time job at telecom operator MCI. There, Wellbrock entered a world of microwave and satellite systems; MCI was originally named Microwave Communications Incorporated. “They were all ex-military guys, and I’m the rookie coming out of school trying to do my best and learn,” says Wellbrock.

The arrival of fibre optics in the late 1980s marked a pivotal shift. As colleagues hesitated to embrace the new “glass” technology, Wellbrock seized the opportunity. “I became the fibre guy,” he says. “My boss said, ‘Anything breaks over there, it’s your problem. You go fix it.’”

This hands-on role propelled him into the early days of optical networking, where he worked on asynchronous systems with bit rates ranging from hundreds of megabits to over a gigabit per second, before SONET/SDH standards took over.

By the 1990s, with a young family, Wellbrock moved to Texas, contributing to MCI’s development of OC-48 (2.5 gigabit-per-second or Gbps) systems, a precursor to the high-capacity networks that would define his career.

Hitting a speed wall

One of Wellbrock’s proudest achievements was breaking through the barrier to speeds faster than 10Gbps, a challenge that dominated the first decade of this century.

Polarisation mode dispersion (PMD) in an optical fibre was a significant hurdle, limiting the distance and reliability of high-speed links. By then, he was working at a start-up and was convinced that phase modulation was the answer.

Wellbrock recalls conversations he had with venture capitalists at the time: “I said: ‘Okay, I get we are a company of 40 guys and I don’t even know if they can build it, but somebody’s going to do it, and they’re going to own this place.’”

Wellbrock admits he didn’t know the answer would be coherent optics, but he knew intensity modulation direct detection had reached its limits.

For a short period, Wellbrock was part of Marconi before joining Verizon in 2006. In 2007, he was involved in a Verizon field trial between Miami and Tampa, 300 miles apart, which demonstrated a 100Gbps direct-detection system. “It was so manual,” he admits. “It took three of us working through the night to keep it working so we could show it to the executives in the morning.”

While the trial passed video, it was clear that direct detection wouldn’t scale. The solution lay in coherent detection, which Wellbrock’s team, working with Nortel (whose optical business was later acquired by Ciena), finally brought to market by 2009.

“Coherent was like seeing a door,” he says. “PMD was killing you, but you open the door, and it’s a vast room. We had breathing room for almost two decades.”

Verizon’s lab in Texas had multiple strands of production fibre that looped back to the lab every 80km. “We could use real-world glass with all the impairments, but keep equipment in one location,” says Wellbrock.

This setup enabled rigorous testing and led to numerous post-deadline papers at OFC, cementing Verizon’s reputation for optical networking innovation.

Rise of the hyperscalers

Wellbrock’s career spanned a transformative era in telecom, from telco-driven innovation to the rise of hyperscalers like Google and Microsoft.

He acknowledges the hyperscalers’ influence as inevitable due to their scale. “If you buy a million devices, you’re going to get attention,” he says. “We’re buying 100 of the same thing.”

Hyperscalers’ massive orders for pluggable modules and tunable lasers—technologies telcos like Verizon helped pioneer—have driven costs down, benefiting the industry.

However, Wellbrock notes that telcos remain vital for universal connectivity. “Every person, every device is connected,” he says. “Telcos aren’t going anywhere.”

Reliability remains a core challenge, particularly as networks grow. Wellbrock emphasises dual homing—redundant network paths—as telecom’s time-tested solution. “You can’t have zero failures,” he says. “Everything’s got a failure rate associated with it.”
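The value of dual homing can be made concrete with a little availability arithmetic. A minimal sketch, using an illustrative 99.9 per cent per-path availability (not an operator figure) and assuming the two paths fail independently:

```python
# Availability arithmetic behind dual homing (illustrative figures only).
# A single path that is up 99.9% of the time:
single = 0.999

# With dual homing, service is lost only if BOTH paths fail at once,
# assuming the two paths fail independently (shared conduits violate this).
dual = 1 - (1 - single) ** 2

hours_per_year = 365.25 * 24
print(f"single-path availability: {single:.6f}")  # 0.999000
print(f"dual-homed availability:  {dual:.6f}")    # 0.999999
print(f"single-path downtime: {(1 - single) * hours_per_year:.1f} h/yr")  # ~8.8
print(f"dual-homed downtime:  {(1 - dual) * hours_per_year:.3f} h/yr")    # ~0.009
```

The squared term is the point of Wellbrock’s remark: no component has a zero failure rate, but an independent second path drives the joint failure probability down far faster than hardening any single path could.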

He sees hyperscalers grappling with similar issues, as evidenced by a Google keynote at the Executive Forum at OFC 2025, which sought solutions for network failures linking thousands of AI accelerators in a data centre.

Wellbrock’s approach to such challenges is rooted in collaboration. “You’ve got to work with the ecosystem,” he insists. “Nobody solves every problem alone.”

Hollow-core fibre

Looking forward, what excites Wellbrock is hollow-core fibre, which he believes could be as transformative as SONET, optical amplifiers, and coherent detection.

Unlike traditional fibre, hollow-core fibre uses air-filled waveguides, offering near-zero loss, low latency, and vast bandwidth potential. “If we could get hollow-core fibre with near-zero loss and as much bandwidth as you needed, it would give us another ride at 20 years’ worth of growth,” he says. “It’s like opening another door.”

While companies like Microsoft are experimenting with hollow-core fibre, Wellbrock cautions that widespread adoption is years away. “They’re probably putting in [high-count] 864-strand standard glass and a few hollow-core [strands],” he notes.

For long-haul routes, the technology promises lower latency and freedom from nonlinear effects, but challenges remain in developing compatible transmitters, receivers, and amplifiers. “All we’ve got to do is build those,” he says, laughing, acknowledging the complexity.
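The latency claim is easy to estimate from first principles: light propagates at c divided by the fibre’s group index, roughly 1.47 for silica against near 1 for an air core. A rough sketch, with typical textbook indices rather than measured values:

```python
# One-way propagation delay: light travels at c / n_group in the fibre.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def delay_ms(distance_km: float, group_index: float) -> float:
    """One-way delay in milliseconds over distance_km at speed c / group_index."""
    return distance_km * group_index / C_KM_PER_S * 1000

route_km = 1000
silica = delay_ms(route_km, 1.468)  # typical group index, standard single-mode fibre
hollow = delay_ms(route_km, 1.003)  # near-unity effective index of an air core

print(f"silica fibre: {silica:.2f} ms")  # ~4.90 ms
print(f"hollow core:  {hollow:.2f} ms")  # ~3.35 ms
print(f"latency saving: {100 * (1 - hollow / silica):.0f}%")  # ~32%
```

A roughly one-third latency cut over every route is what makes hollow core attractive well before its loss matches solid glass.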

Wellbrock also highlights fibre sensing as a practical innovation, enabling real-time detection of cable damage. “If we can detect an excavator getting closer, we can stop it before it breaks a fibre link,” he explains. This technology, developed in collaboration with partners like NEC and Ciena, integrates optical time-domain reflectometry (OTDR) into transmission systems, thereby enhancing network reliability.
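The OTDR principle behind such sensing reduces to round-trip timing: an event’s distance follows from when its reflection returns, with a factor of two for the out-and-back path. A toy sketch with an assumed group index:

```python
# OTDR ranging: distance = c * t / (2 * n), where t is the round-trip time
# of the reflected pulse and n the fibre's group index.
C_M_PER_S = 299_792_458   # speed of light in vacuum, m/s
GROUP_INDEX = 1.468       # assumed group index of standard single-mode fibre

def event_distance_km(round_trip_seconds: float) -> float:
    return C_M_PER_S * round_trip_seconds / (2 * GROUP_INDEX) / 1000

# A reflection arriving 500 microseconds after launch sits roughly 51 km out:
print(f"{event_distance_km(500e-6):.1f} km")  # 51.1 km
```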

Learnings

Wellbrock’s approach to innovation centres on clearly defining problems to engage the broader ecosystem. “Defining the problem is two-thirds of solving it,” he says, crediting a Verizon colleague, Tiejun J. Xia, for the insight. “If you articulate it well, lots of smart people can help you fix it.”

This philosophy drove his success at OFC, where he used the conference to share challenges, such as fibre sensing, and rally vendor support. “You’ve got to explain the value of solving it,” he adds. “Then you’ll get 10 companies and 1,000 engineers working on it.”

He advises against preconceived solutions or excluding potential partners. “Never say never,” he says. “Be open to ideas and work with anybody willing to address the problem.”

This collaborative mindset, paired with a willingness to explore multiple solutions, defined his work with Xia, a PhD associate fellow at Verizon. “Our favourite Friday afternoon was picking the next thing to explore,” he recalls. “We’d write down 10 possible things and pull on the string that had legs.”

Glenn Wellbrock's son, Dave, in farming action

Fibre to Farming

As Wellbrock steps into retirement, he is teaming up with his brother.

The two own 400 acres in Kansas, where wheat farming, hunting, and fishing will define their days. “I won’t miss 100 emails a day or meetings all day long,” he admits. “But I’ll miss the interaction and building stuff.”

Farming offers a chance to work with one’s hands, doing welding and creating things from metal. “I love to build things,” he says. “It’s fun to go, ‘Why hasn’t somebody built this before?’”

Farming projects can be completed in a day or over a weekend. “Networks take a long time to build,” he notes. “I’m looking forward to starting a project and finishing it quickly.”

He plans to cultivate half their land to fund their hobbies, using “old equipment” that requires hands-on maintenance—a nod to his engineering roots.

OFC farewell

Wellbrock retired just before the OFC show in March 2025. His attendance was less about work and more about transition: he spent the conference introducing his successor to vendors and industry peers, ensuring a smooth handoff.

“I didn’t work as hard as I normally do at OFC,” he says. “It’s about meeting with vendors, doing a proper handoff, and saying goodbye to folks, especially international ones.” He also took part in this year’s OFC Rump Session.

Wellbrock admits to some sadness. Yet, he remains optimistic about his future, with plans to possibly return to OFC as a visitor. “Maybe I’ll come just to visit with people,” he muses.

Timeline 

  • 1984: MCI 
  • 1987: Started working on fibre 
  • 2000: Joined start-ups and, for a short period, was part of Marconi 
  • 2004: Joined WorldCom, which had bought MCI
  • 2006: Joined Verizon 
  • 2025: Retired from Verizon 

A tribute

Prof. Andrew Lord, Senior Manager, optical and quantum research, BT

I have had the privilege of knowing Glenn since the 1990s, when BT had a temporary alliance with MCI. We shared a vendor trip to Japan, where I first learnt of his appetite for breakfasting at McDonald’s!

Glenn has been a pivotal figure in our industry since then. A highlight would be the series of ambitious Requests For Information (RFIs) issued by Verizon, which would send vendor account managers scurrying to their R&D departments for cover.

Another highlight would be the annual world-breaking Post-Deadline Paper results at OFC: those thrilling sessions won’t be the same without a Wellbrock paper and neither will the OFC rump sessions, which have benefited from his often brutal pragmatism, always delivered with grace (which somehow made it even worse when defeating me in an argument!).

But it’s grace that defines the man who always has time for people and is always generous enough to share his views and experiences. Glenn will be sorely missed, but he deserves a fulfilling and happy retirement.


Will white boxes predominate in telecom networks?

Will future operator networks be built using software, servers and white boxes or will traditional systems vendors with years of network integration and differentiation expertise continue to be needed? 

 

AT&T’s announcement that it will deploy 60,000 white boxes as part of its rollout of 5G in the U.S. is a clear move to break away from the operator pack.

The service provider has long championed network transformation, moving from proprietary hardware and software to a software-controlled network based on virtual network functions running on servers and software-defined networking (SDN) for the control switches and routers.

Glenn Wellbrock

Now, AT&T is going a stage further by embracing open hardware platforms - white boxes - to replace traditional telecom hardware used for data-path tasks that are beyond the capabilities of software on servers.

For the 5G deployment, AT&T will, over several years, replace traditional routers at cell and tower sites with white boxes, built using open standards and merchant silicon.   

“White box represents a radical realignment of the traditional service provider model,” says Andre Fuetsch, chief technology officer and president, AT&T Labs. “We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors.”

But other operators have reservations about white boxes. “We are all for open source and open [platforms],” says Glenn Wellbrock, director, optical transport network - architecture, design and planning at Verizon. “But it can’t just be open, it has to be open and standardised.”

Wellbrock also highlights the challenge of managing networks built using white boxes from multiple vendors. Who will be responsible for integrating them, and who is accountable when a fault occurs? These are concerns SK Telecom has also expressed regarding the virtualisation of the radio access network (RAN), as reported by Light Reading.

“These are the things we need to resolve in order to make this valuable to the industry,” says Wellbrock. “And if we don’t, why are we spending so much time and effort on this?”

Gilles Garcia, communications business lead director at programmable device company, Xilinx, says the systems vendors and operators he talks to still seek functionalities that today’s white boxes cannot deliver. “That’s because there are no off-the-shelf chips doing it all,” says Garcia. 

 

We’re no longer constrained by the capabilities of proprietary silicon and feature roadmaps of traditional vendors

 

White boxes

AT&T defines a white box as an open hardware platform that is not made by an original equipment manufacturer (OEM).

A white box is a sparse design, built using commercial off-the-shelf hardware and merchant silicon, typically a fast router or switch chip, on which runs an operating system. The platform usually takes the form of a pizza box which can be stacked for scaling, while application programming interfaces (APIs) are used for software to control and manage the platform.

As AT&T’s Fuetsch explains, white boxes deliver several advantages. By using open hardware specifications for white boxes, they can be made by a wider community of manufacturers, shortening hardware design cycles. And using open-source software to run on such platforms ensures rapid software upgrades.

Disaggregation can also be part of an open hardware design. Here, different elements are combined to build the system. The elements may come from a single vendor such that the platform allows the operator to mix and match the functions needed. But the full potential of disaggregation comes from an open system that can be built using elements from different vendors. This promises cost reductions but requires integration, and operators do not want the responsibility and cost of both integrating the elements to build an open system and integrating the many systems from various vendors.   

Meanwhile, in AT&T’s case, it plans to orchestrate its white boxes using the Open Networking Automation Platform (ONAP) - the ‘operating system’ for its entire network made up of millions of lines of code. 

ONAP is an open software initiative, managed by The Linux Foundation, that was created by merging a large portion of AT&T’s original ECOMP software developed to power its software-defined network and the OPEN-Orchestrator (OPEN-O) project, set up by several companies including China Mobile and China Telecom.   

AT&T has also launched several initiatives to spur white-box adoption. One is an open operating system for white boxes, known as the disaggregated network operating system (dNOS). This too will be passed to The Linux Foundation.

The operator is also a key driver of the open reconfigurable optical add/drop multiplexer multi-source agreement, the OpenROADM MSA. Recently, the operator announced it will roll out OpenROADM hardware across its network. AT&T has also unveiled the Akraino open-source project, again under the auspices of The Linux Foundation, to develop edge-computing infrastructure.

At the recent OFC show, AT&T said it would limit its white-box deployments in 2018, as issues remain to be resolved, but that come 2019, white boxes will form the bulk of its platform deployments.

Xilinx highlights how certain data-intensive tasks - in-line security performed on a per-flow basis, routing exceptions, telemetry data, and deep packet inspection - are beyond the capabilities of white boxes. “White boxes will have their place in the network but there will be a requirement, somewhere else in the network for something else, to do what the white boxes are missing,” says Garcia.

 

Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction

 

AT&T also said at OFC that it expects considerable capital expenditure savings - as much as a halving - using white boxes, and talked about adopting quarterly reverse auctions in future to buy its equipment.

Niall Robinson, vice president, global business development at ADVA Optical Networking, questions where such cost savings will come from: “Transport has been so bare-bones for so long, there isn’t room to get that kind of cost reduction.” He also says that there are markets that already use reverse auctioning, but typically for items such as components. “For a carrier the size of AT&T to be talking about that, that is a big shift,” says Robinson.

 

Layer optimisation

Verizon’s Wellbrock first aired reservations about open hardware at Lightwave’s Open Optical Conference last November.

In his talk, Wellbrock detailed the complexity of Verizon’s wide area network (WAN) that encompasses several network layers. At layer-0 are the optical line systems - terminal and transmission equipment - onto which the various layers are added: layer-1 Optical Transport Network (OTN), layer-2 Ethernet and layer-2.5 Multiprotocol Label Switching (MPLS). According to Verizon, the WAN takes years to design and a decade to fully exploit the fibre.

“You get a significant saving - total cost of ownership - from combining the layers,” says Wellbrock. “By collapsing those functions into one platform, there is a very real saving.” But there is a tradeoff: encapsulating the various layers’ functions into one box makes it more complex.

“The way to get round that complexity is going to a Cisco, a Ciena, or a Fujitsu and saying: ‘Please help us with this problem’,” says Wellbrock. “We will buy all these individual piece-parts from you but you have got to help us build this very complex, dynamic network and make it work for a decade.”

 

Next-generation metro

Verizon has over 4,000 nodes in its network, each one deploying at least one ROADM - a Coriant 7100 packet optical transport system or a Fujitsu Flashwave 9500. Certain nodes employ more than one ROADM; once one is filled, a second is added.

“Verizon was the first to take advantage of ROADMs and we have grown that network to a very large scale,” says Wellbrock.

The operator is now upgrading the nodes with more sophisticated ROADMs as part of its next-generation metro network. Each node will need only one ROADM, which can be scaled. In 2017, Verizon started the ramp, upgrading several hundred ROADM nodes; this year it says it will hit its stride before completing the upgrades in 2019.

“We need a lot of automation and software control to hide the complexity of what we have built,” says Wellbrock. This is part of Verizon’s own network transformation project. Instead of engineers and operational groups in charge of particular network layers and overseeing pockets of the network - each pocket being a ‘domain’ - Verizon is moving to a system where all the network layers, including ROADMs, are managed and orchestrated using a single system.

The resulting software-defined network comprises a ‘domain controller’ that handles the lower layers within a domain and an automation system that co-ordinates between domains.

“Going forward, all of the network will be dynamic and in order to take advantage of that, we have to have analytics and automation,” says Wellbrock.

 

In this new world, there are lots of right answers and you have to figure what the best one is

 

Open design is an important element here, he says, but the bigger return comes from analytics and automation of the layers and from the equipment.

This is why Wellbrock questions what white boxes will bring: “What are we getting that is brand new? What are we doing that we can’t do today?”

He points out that the building blocks for ROADMs - the wavelength-selective switches and multicast switches - originate from the same sub-system vendors, such that the cost points are the same whether a white box or a system vendor’s platform is used. And using white boxes does nothing to make the growing network complexity go away, he says.

“Mixing your suppliers may avoid vendor lock-in,” says Wellbrock. “But what we are saying is vendor lock-in is not as serious as managing the complexity of these intelligent networks.”

Wellbrock admits that network transformation with its use of analytics and orchestration poses new challenges. “I loved the old world - it was physics and therefore there was a wrong and a right answer; hardware, physics and fibre and you can work towards the right answer,” he says. “In this new world, there are lots of right answers and you have to figure what the best one is.”

 

Evolution

If white boxes can’t perform all the data-intensive tasks, then they will have to be performed elsewhere. This could take the form of accelerator cards for servers using devices such as Xilinx’s FPGAs.

Adding such functionality to the white box, however, is not straightforward. “This is the dichotomy the white-box designers are struggling to address,” says Garcia. A white box is light and simple, so adding extra functionality requires customising its operating system to run these applications. And this runs counter to the white-box concept, he says.

 

We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have

 

Yet this is just what Garcia is seeing from traditional systems vendors, which are developing designs that bring differentiation to their platforms to counter the white-box trend.

One recent example that fits this description is Ciena’s two-rack-unit 8180 coherent network platform. The 8180 has a 6.4-terabit packet fabric, supports 100-gigabit and 400-gigabit client-side interfaces and can be used solely as a switch or, more typically, as a transport platform with client-side and coherent line-side interfaces.

The 8180 is not a white box but has a suite of open APIs and has a higher specification than the Voyager and Cassini white-box platforms developed by the Telecom Infra Project.  

“We are going through a set of white-box evolutions,” says Garcia. “We will see more and more functionalities that were not planned for the white box that customers will realise are mandatory to have.”

Whether FPGAs will find their way into white boxes, Garcia will not say. What he will say is that Xilinx is engaged with some of these players to have a good view as to what is required and by when.

It appears inevitable that white boxes will become more capable, to handle more and more of the data-plane tasks, and as a response to the competition from traditional system vendors with their more sophisticated designs.

AT&T’s white-box vision is clear. What is less certain is whether the rest of the operator pack will move to close the gap.


Verizon, Ciena and Juniper trial 400 Gigabit Ethernet

Verizon has sent a 400 Gigabit Ethernet signal over its network, carried using a 400-gigabit optical wavelength.

The trial’s goal was to demonstrate multi-vendor interoperability and in particular the interoperability of standardised 400 Gigabit Ethernet (GbE) client signals.

Glenn Wellbrock

“[400GbE] Interoperability with the client side has been the long pole in the tent - and continues to be,” says Glenn Wellbrock, director, optical transport network - architecture, design and planning at Verizon. “This was trial equipment, not generally-available equipment.”

It is only the emergence of standardised modules - in this case, an IEEE 400GbE client-side interface specification - that allows multi-vendor interoperability, he says. 

By trialling a 400-gigabit lightpath, Verizon also demonstrated a working dense wavelength-division multiplexing (DWDM) flexible grid and a baud rate nearly double the 32-35Gbaud in wide use for 100-gigabit and 200-gigabit wavelengths.

“It shows we can take advantage of the entire system; we don’t have to stick to 50GHz channel spacing anymore,” says Wellbrock.
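The grid-spacing point follows from the modulation arithmetic. Assuming the 400-gigabit wavelength uses dual-polarisation 16QAM (8 bits per symbol) and roughly 15 per cent FEC and framing overhead (illustrative assumptions, not disclosed trial parameters), the symbol rate lands near the cited range and outgrows a legacy 50GHz slot:

```python
# Why a 400 Gb/s wavelength outgrows the 50 GHz grid (illustrative numbers).
net_rate_gbps = 400
bits_per_symbol = 4 * 2   # assume DP-16QAM: 4 bits/symbol x 2 polarisations
overhead = 0.15           # assumed FEC plus framing overhead (~15%)

baud = net_rate_gbps * (1 + overhead) / bits_per_symbol
print(f"required symbol rate: {baud:.1f} Gbaud")  # 57.5 Gbaud

# The flexible grid allocates slots in 12.5 GHz increments; a guard-banded
# slot for a signal this wide might be 75 GHz rather than the fixed 50 GHz.
slot_ghz = 12.5 * round(baud * 1.2 / 12.5)
print(f"example flex-grid slot: {slot_ghz} GHz")  # 75.0 GHz
```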

 

[400GbE] Interoperability with the client side has been the long pole in the tent - and continues to be 

 

Trial set-up

The trial used Juniper Networks’ PTX5000 packet transport router and Ciena’s 6500 packet-optical platform, equipment already deployed in Verizon’s network.

The Verizon demonstration was not testing optical transmission reach. Indeed the equipment was located in two buildings in Richardson, within the Dallas area. Testing the reach of 400-gigabit wavelengths will come in future trials, says Wellbrock. 

The PTX5000 core router has a traffic capacity of up to 24 terabits and supports 10-gigabit, 40-gigabit and 100-gigabit client-side interfaces as well as 100-gigabit coherent interfaces for IP-over-DWDM applications. The PTX5000 uses a mother card - what Juniper calls a flexible PIC concentrator (FPC) - on which sit one or more daughter cards, known as physical interface cards (PICs), that host the interfaces.

Juniper created a PIC with a 400GbE CFP8 pluggable module implementing the IEEE’s 10km 400GBASE-LR8 standard.

“For us, it was simply creating a demo 400-gigabit pluggable line card to go into the line card Verizon has already deployed,” says Donyel Jones-Williams, director of product marketing management at Juniper Networks.

Donyel Jones-Williams

The CFP8 400GbE interface connected the router to Ciena’s 6500 packet-optical platform.

Ciena also used demonstration hardware developed for 400-gigabit trials. “We expect to develop other hardware for general deployment,” says Helen Xenos, senior director, portfolio marketing at Ciena. “We are looking at smaller form-factor pluggables to carry 400 Gigabit Ethernet.”

 

400-gigabit deployments and trials

Ciena started shipping its WaveLogic Ai coherent modem that implements 400-gigabit wavelengths in the third quarter of 2017. Since then, the company has announced several 400-gigabit deployments and trials.

Vodafone New Zealand deployed 400 gigabits in its national transport network last September, a world first, claims Ciena. German cable operator, Unitymedia, has also deployed Ciena’s WaveLogic Ai coherent modem to deliver a flexible grid and 400-gigabit wavelengths to support growing content delivered via its data centres. And JISC, which runs the UK’s national research and education network, has deployed the 6500 platform and is using 400-gigabit wavelengths.

Helen Xenos

Last September, AT&T conducted its own 400-gigabit trial with Ciena. With AT&T’s trial, the 400-gigabit signal was generated using a test bed. “An SDN controller was used to provision the circuit and the [400-gigabit] signal traversed an OpenROADM line system,” says Xenos.   

Using the WaveLogic Ai coherent modem and its support for a 56Gbaud rate means that tunable capacity can now be doubled across applications, says Xenos. The wavelength capacity used for long-haul distances can now be 200 gigabits instead of 100 gigabits, while metro-regional networks spanning 1,000km can use 300-gigabit wavelengths. Meanwhile, 400-gigabit lightpaths suit distances of several hundred kilometres.

It is the large data centre operators that are driving the majority of 400-gigabit deployments, says Ciena. The reason the announcements to date involve telecom operators is that the data centre players have not gone public with their deployments, says Xenos.

Juniper Networks’ PTX5000 core router with 400GbE interfaces will primarily be used by the telecom operators. “We are in trials with other providers on 400 gigabits,” says Jones-Williams. “Nothing is public as yet.”   


Talking markets: Oclaro on 100 gigabits and beyond

Oclaro’s chief commercial officer, Adam Carter, discusses the 100-gigabit market, optical module trends, silicon photonics, and why this is a good time to be an optical component maker.

Oclaro has started its 2017 fiscal year as it ended fiscal 2016: with another record quarter. The company reported revenues of $136 million in the quarter ending September, representing 8 percent sequential growth and the company's fifth consecutive quarter of 7 percent or greater revenue growth.

Adam Carter

A large part of Oclaro’s growth was due to strong demand for 100 gigabits across the company’s optical module and component portfolio.

The company has been supplying 100-gigabit client-side optics using the CFP, CFP2 and CFP4 pluggable form factors for a while. “What we saw in June was the first real production ramp of our CFP2-ACO [coherent] module,” says Adam Carter, chief commercial officer at Oclaro. “We have transferred all that manufacturing over to Asia now.”

The CFP2-ACO is being used predominantly for data centre interconnect applications. But Oclaro has also seen first orders from system vendors that are supplying US communications service provider Verizon for its metro buildout.

The company is also seeing strong demand for components from China. “The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward,” says Carter.

LightCounting Market Research in its latest optical market forecast report highlights the importance of China’s 100-gigabit market. China’s massive deployments of FTTx and wireless front-haul optics fuelled growth in 2011 to 2015, says LightCounting, but this year it is demand for 100-gigabit dense wavelength-division multiplexing and 100 Gigabit Ethernet optics that is increasing China’s share of the global market.

The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward 

QSFP28 modules

Oclaro is also providing 100-gigabit QSFP28 pluggables for the data centre, in particular, the 100-gigabit PSM4 parallel single-mode module and the 100-gigabit CWDM4 based on wavelength-division multiplexing technology.

2016 was expected to be the year these 100-gigabit optical modules for the data centre would take off.  “It has not contributed a huge amount to date but it will start kicking in now,” says Carter. “We always signalled that it would pick up around June.”

One reason why it has taken time for the market for the 100-gigabit QSFP28 modules to take off is the investment needed to ramp manufacturing capacity to meet the demand. “The sheer volume of these modules that will be needed for one of these new big data centres is vast,” says Carter. “Everyone uses similar [manufacturing] equipment and goes to the same suppliers, so bringing in extra capacity has long lead times as well.”

Once a large-scale data centre is fully equipped and powered, it generates instant profit for an Internet content provider. “This is very rapid adoption; the instant monetisation of capital expenditure,” says Carter. “This is a very different scenario from where we were five to ten years ago with the telecom service providers."

Data centre servers and their increasing interface speed to leaf switches are what will drive module rates beyond 100 gigabits, says Carter. Ten Gigabit Ethernet links will be followed by 25 and 50 Gigabit Ethernet. “The lifecycle you have seen at the lower speeds [1 Gigabit and 10 Gigabit] is definitely being shrunk,” says Carter.

Such new speeds will spur 400-gigabit links between the data centre's leaf and spine switches, and between the spine switches. “Two hundred Gigabit Ethernet may be an intermediate step but I’m not sure if that is going to be a big volume or a niche for first movers,” says Carter.

400 gigabit CFP8

Oclaro showed a prototype 400-gigabit module in the CFP8 form factor at the recent ECOC show in September. The demonstrator is an 8-by-50-gigabit design using 25-gigabaud optics and PAM-4 modulation. The module implements the 400GBASE-LR8 10km standard using eight 1310nm distributed feedback lasers, each with an integrated electro-absorption modulator. The design also uses two 4-wide photo-detector arrays.
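The lane arithmetic of the design described above is easy to check: eight lanes at 25 gigabaud, with PAM-4 carrying two bits per symbol, gives 400Gb/s. A quick sketch:

```python
# Aggregate rate of the CFP8 design: lanes x symbol rate x bits per symbol.
lanes = 8
gigabaud = 25            # 25 Gbaud per optical lane
bits_per_symbol = 2      # PAM-4 uses four amplitude levels, i.e. 2 bits/symbol

print(f"{lanes * gigabaud * bits_per_symbol} Gb/s")  # 400 Gb/s

# For contrast, 100G CWDM4 uses 4 lanes of NRZ (1 bit/symbol) at ~25 Gbaud:
print(f"{4 * 25 * 1} Gb/s")  # 100 Gb/s
```

Doubling the lane count and doubling the bits per symbol is what lifts the same 25-gigabaud optics from 100 to 400 gigabits.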

“We are using the four lasers we use for the CWDM4 100-gigabit design and we can show we have the other four [wavelength] lasers as well,” says Carter.

Carter says IP core routers will be the main application for the 400Gbase-LR8 module. The company is not yet saying when the 400-gigabit CFP8 module will be generally available.

We can definitely see the CFP2-ACO could support 400 gigabits and above

Coherent

Oclaro is already working with equipment customers to increase the line-side interface density on the front panel of their equipment.

The Optical Internetworking Forum (OIF) has already started work on the CFP8-ACO that will be able to support up to four wavelengths, each supporting up to 400 gigabits. But Carter says Oclaro is working with customers to see how the line-side capacity of the CFP2-ACO can be advanced. “We can definitely see the CFP2-ACO could support 400 gigabits and above,” says Carter. “We are working with customers as to what that looks like and what the schedule will be.”

And there are two other pluggable form factors smaller than the CFP2: the CFP4 and the QSFP28. “Will you get 400 gigabits in a QSFP28? Time will tell, although there is still more work to be done around the technology building blocks,” says Carter.

Vendors are seeking the highest aggregate front-panel density, he says: “The higher aggregate bandwidth we are hearing about is 2 terabits but there is a need to potentially go to 3.2 and 4.8 terabits.”

Silicon photonics

Oclaro says it continues to watch silicon photonics closely and to question whether it is a technology that can be brought in-house. But issues remain. “This industry has always used different technologies and everything still needs light to work, which means the basic III-V [compound semiconductor] lasers,” says Carter.

“Producing silicon photonics chips versus producing packaged products that meet various industry standards and specifications are still pretty challenging to do in high volume,” says Carter.  And integration can be done using either silicon photonics or indium phosphide.  “My feeling is that the technologies will co-exist,” says Carter.


Verizon's move to become a digital service provider

Verizon’s next-generation network based on network functions virtualisation (NFV) and software-defined networking (SDN) is rapidly taking shape.

Working with Dell, Big Switch Networks and Red Hat, the US telco announced in April it had already brought online five data centres. Since then it has deployed more sites but is not saying how many.

Source: Verizon

“We are laying the foundation of the programmable infrastructure that will allow us to do all the automation, virtualisation and the software-defining we want to do on top of that,” says Chris Emmons, director, network infrastructure planning at Verizon.

“This is the largest OpenStack NFV deployment in the marketplace,” says Darrell Jordan-Smith, vice president, worldwide service provider sales at Red Hat. “The largest in terms of the number of [server] nodes that it is capable of supporting and the fact that it is widely distributed across Verizon’s sites.”

OpenStack is an open source set of software tools that enables the management of networking, storage and compute services in the cloud. “There are some basic levels of orchestration while, in parallel, there is a whole virtualised managed environment,” says Jordan-Smith.

 

This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach


“Verizon is joining some of the other largest communication service providers in deploying a platform onto which they will add applications over time,” says Dana Cooperson, a research director at Analysys Mason. 

Most telcos start with a service-led approach so they can get direct experience with the technology and one or more quick wins, in the form of revenue in a new service arena, while containing the risk of something going wrong, explains Cooperson. As they progress, they can still lead with specific services while deploying their platforms, deciding over time what to put on the platform as custom equipment reaches end-of-life.

A second approach - a platform strategy - is a more sophisticated, longer-term one, but telcos need experience before they take that plunge.

“This announcement suggests that Verizon feels confident enough in its experience with its vendors and their technology to take the longer-term approach,” says Cooperson.

 

Applications

The Verizon data centres are located in core locations of its network. “We are focussing more on core applications but some of the tools we use to run the network - backend systems - are also targeted,” says Emmons.

The infrastructure is designed to support all of Verizon’s business units. For example, Verizon is working with its enterprise unit to see how it can use the technology to deliver virtual managed services to enterprise customers.

“Wherever we have a need to virtualise something - the Evolved Packet Core, IMS [IP Multimedia Subsystem] core, VoLTE [Voice over LTE] or our wireline side, our VoIP [Voice over IP] infrastructure - all these things are targeted to go on the platform,” says Emmons. Verizon plans to pool all these functions and network elements onto the platform over the next two years.   

Red Hat’s Jordan-Smith talks about a two-stage process: virtualising functions and then making them stateless so that applications can run on servers independent of the location of the server and the data centre.

“Virtualising applications and running them on virtual machines gives some economies of scale from a cost and density perspective,” says Jordan-Smith. But there is a further cost benefit, as well as greater performance and resiliency, once such applications can run across data centres.

And by having a software-based layer, Verizon will be able to add devices and create associated applications and services. “With the Internet of Things, Verizon is looking at connecting many, many devices and adding scale to these types of environments,” says Jordan-Smith.

 

Architecture

Verizon is deploying a ‘pod and core architecture’ in its data centres. A pod contains racks of servers, top-of-rack or leaf switches, and higher-capacity spine switches and storage, while the core network is used to enable communications between pods in the same data centre and across sites (see diagram, top).

Dell is providing Verizon with servers, storage platforms and white-box leaf and spine switches. Big Switch Networks is providing software that runs on the Dell switches and servers, while the OpenStack platform and Ceph storage software are provided by Red Hat.

Each Dell rack houses 22 servers - each with 24 cores supporting 48 hyperthreads - and all 22 servers connect to the leaf switch. Each rack is teamed with a sister rack and the two are connected to two leaf switches, providing switch-level redundancy.

“Each of the leaf switches is connected to however many spine switches are needed at that location and that gives connectivity to the outside world,” says Emmons. For the five data centres, a total of 8 pods have been deployed amounting to 1,000 servers and this has not changed since April.
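A back-of-the-envelope sketch of the rack figures quoted above. The rack-pair grouping follows the sister-rack description; the pod composition beyond that is not stated, and the helper name is an illustration rather than anything from Verizon:

```python
# Per-rack figures as quoted: 22 servers, each with 24 cores / 48 threads.
SERVERS_PER_RACK = 22
CORES_PER_SERVER = 24
THREADS_PER_SERVER = 48   # hyperthreaded

def rack_pair_capacity():
    """A rack plus its sister rack, each uplinked to two leaf switches."""
    servers = 2 * SERVERS_PER_RACK
    return {
        "servers": servers,
        "cores": servers * CORES_PER_SERVER,
        "threads": servers * THREADS_PER_SERVER,
    }

print(rack_pair_capacity())
# {'servers': 44, 'cores': 1056, 'threads': 2112}
```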

 

This is the largest OpenStack NFV deployment in the marketplace

 

Verizon has deliberately chosen to separate the pods from the core network so it can innovate at the pod level independently of the data centre’s network.

“We wanted the ability to innovate at the pod level and not be tied into any technology roadmap at the data centre level,” says Emmons who points out that there are several ways to evolve the data centre network.  For example, in some cases, an SDN controller is used to control the whole data centre network. 

“We don't want our pods - at least initially - to participate in that larger data centre SDN controller because we were concerned about the pace of innovation and things like that,” says Emmons. “We want the pod to be self-contained and we want the ability to innovate and iterate in those pods.”

Its first-generation pods contain equipment and software from Dell, Big Switch and Red Hat but Verizon may decide to swap out some or all of the vendors for its next-generation pod. “So we could have two generations of pod that could talk to each other through the core network,” says Emmons. “Or they could talk to things that aren't in other pods - other physical network functions that have not yet been virtualised.”  

Verizon’s core networks are its existing networks in the data centres. “We didn't require any uplift and migration of the data centre networks,” says Emmons. However, Verizon has a project investigating data-centre interconnect platforms for core networking.

 

What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to

 

Benefits

Verizon expects capital expenditure and operational expense benefits from its programmable network but says it is too early to quantify them. What excites the operator more is the ability to get services up and running quickly, and to adapt and scale the network according to demand.

“You can reconfigure and reallocate your network once it is all software-defined,”  says Emmons. There is still much work to be done, from the network core to the edge. “These are the first steps to that programmable infrastructure that we want to get to,” says Emmons.

Capital expenditure savings result from adopting standard hardware. “The more uniform you can keep the commodity hardware underneath, the better your volume purchase agreements are,” says Emmons. Operational savings also result from using standardised hardware. “Spares becomes easier, troubleshooting becomes easier as does the lifecycle management of the hardware,” he says. 

 

Challenges

“We are tip-of-the-spear here,” admits Emmons. “What we have been doing with Red Hat and Big Switch is not a normal position for a telco where you test something to death; it is a lot different to what people are used to.”

Red Hat’s Jordan-Smith also talks about the accelerated development environment enabled by the software-based network. The OpenStack platform undergoes a new revision every six months.

“There are new services that are going to be enabled through major revisions in the not too distant future - the next 6 to 12 months,” says Jordan-Smith. “That is one of the key challenges operators like Verizon have when they are moving at what is now a very fast pace.”

Verizon continues to deploy infrastructure across its network. The operator has completed most of the troubleshooting and performance testing at the cloud level and, in parallel, is working on the applications in several of its labs. “Now it is time to put it all together,” says Emmons.

One critical aspect of the move to become a digital service provider will be the operators' ability to offer new services more quickly - what people call service agility, says Cooperson. Only by changing their operations and their networks can operators create and, if needed, retire services quickly and easily. 

“It will be evident that they are truly doing something new when they can launch services in weeks instead of months or years, and make changes to service parameters upon demand from a customer, as initiated by the customer,” says Cooperson. “Another sign will be when we start seeing a whole new variety of services and where we see communications service providers building those businesses so that they are becoming a more significant part of their revenue streams.”

She cites as examples cloud-based services and more machine-to-machine and Internet of Things-based services.


Ciena enhances its 6500 packet-optical transport family

Ciena has upgraded its 6500 family of packet-optical transport platforms with the T-series that supports higher-capacity electrical and optical switching and higher-speed line cards.

"The 6500 T-Series is a big deal as Ciena can offer two different systems depending on what the customer is looking for," says Andrew Schmitt, founder and principal analyst of market research firm, Cignal AI.

 

Helen Xenos

If customers want straightforward transport and the ability to reach a number of different distances, there is the existing 6500 S-series, says Schmitt. The T-series is a system specifically for metro-regional networks that can accommodate multiple traffic types – OTN or packet.

"It has very high density for a packet-optical system and offers pay-as-you-grow with CFP2-ACO [coherent pluggable] modules," says Schmitt.

Ciena says the T-series has been developed to address new connectivity requirements service providers face. Content is being shifted to the metro to improve the quality of experience for end users and reduce capacity on backbone networks. Such user consumption of content is one factor accounting for the strong annual 40 percent growth in metro traffic.

According to Ciena, service providers have to deploy multiple overlays of network elements to scale capacity, including at the photonic switch layer, because they need more than 8-degree reconfigurable optical add/drop multiplexers (ROADMs).

 

Operators are looking for a next-generation platform for these very high-capacity switching locations to efficiently distribute content

 

But overlays add complexity to the metro network and slow the turn-up times of services, says Helen Xenos, director, product and technology marketing at Ciena: "Operators are looking for a next-generation platform for these very high-capacity switching locations to efficiently distribute content."

U.S. service provider Verizon is the first to announce the adoption of the 6500 T-series to modernise its metro and is now deploying the platform. "Verizon is dealing with a heterogeneous network in the metro with many competing requirements," says Schmitt. "They don’t have the luxury of starting over or specialising like some of the hyper-scale transport architectures."

The T-series, once deployed, will handle the evolving requirements of Verizon's network. "Sure, it comes with additional costs compared with bare-bones transport but my conversation with folks at Verizon would indicate flexibility is worth the price," says Schmitt.

Ciena has over 500 customers in 50 countries for its existing 6500 S-series. Customers include 18 of the top 25 communications service providers and three of the top five content providers.

Xenos says an increasing number of service providers are interested in its latest platform. The T-series features in six requests for proposals (RFPs) and is being evaluated in several service providers' labs. The 6500 T-series will be generally available this month.

 

6500 T-series

The existing 6500 S-series family comprises four platforms, from the 2 rack-unit (RU) 6500-D2 chassis to the 22RU 6500-S32 that supports Ethernet, time-division multiplexed traffic and wavelength-division multiplexing, and 3.2 terabit-per-second (Tbps) packet/Optical Transport Network (OTN) switching.

The two T-series platforms are the half rack 6500-12T and the full rack 6500-24T. The cards have been upgraded from 100-gigabit switching per slot to 500-gigabit per slot.

The 6500-12T has 12 service slots which house either service interfaces or photonic modules. There are also two control modules. Shown at the base of the chassis are four 500-gigabit switching modules. Source: Ciena

The 500 gigabits of switching per slot mean the 6500-12T supports 6 terabits of switching capacity, while the -24T will support 12 terabits by year end. The platforms have been tested and will support 1 terabit per slot, such that the -24T will deliver the full 24 terabits. Over 100 terabits of switching capacity will be possible in a multi-chassis configuration, managed as a single switching node.
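The slot arithmetic above can be checked with a small sketch (the helper name is an illustration, not anything from Ciena):

```python
# Sketch of the switching-capacity figures quoted above.
def fabric_capacity_tbps(slots, gbps_per_slot):
    """Aggregate fabric capacity in terabits per second."""
    return slots * gbps_per_slot / 1000

assert fabric_capacity_tbps(12, 500) == 6     # 6500-12T at 500G/slot
assert fabric_capacity_tbps(24, 500) == 12    # 6500-24T by year end
assert fabric_capacity_tbps(24, 1000) == 24   # tested 1 Tb/slot: full 24T
```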

The latest platforms can use Ciena's existing coherent line cards that support two 100 gigabit wavelengths. The T-Series also supports a 500-gigabit coherent line card with five CFP2-ACOs coupled with Ciena's WaveLogic 3 Nano DSP-ASIC.

"We will support higher-capacity wavelengths in a muxponder configuration using our existing S-series," says Xenos. "But for switching applications, switching lower-speed traffic across the shelf onto a very high-capacity wavelength, this is something that the T-series would be used for."

The T-series also adds a denser, larger-degree ROADM, from an existing 6500 S-series 8-degree to a 16-degree flexible grid, colourless, directionless and contentionless (CDC) design. Xenos says the ROADM design is also more compact such that the line amplifiers fit on the same card.

"The requirements of this platform is that it has full integration of layer 0, layer 1 and layer 2 functions," says Xenos.

The 6500 T-series supports open application programming interfaces (APIs) and is being incorporated as part of Ciena's Emulation Cloud. The Emulation Cloud enabling customers to test software on simulated network configurations without requiring 6500 hardware and is being demonstrated at OFC 2016.

The 6500 is also being integrated as part of Ciena's Blue Planet orchestration and management architecture. 


Verizon prepares its next-gen PON request for proposal

Verizon will publish its next-generation passive optical network (PON) requirements for equipment makers in the coming month.

Vincent O'Byrne

The NG-PON2 request for proposal (RFP) is being issued after the US operator completed a field test that showed a 40 gigabit NG-PON2 system working alongside Verizon’s existing GPON customer traffic.  

The field test involved installing a NG-PON2 optical line terminal (OLT) at a Verizon central office and linking it to a FiOS customer’s home 5 km away. A nearby business location was also included in the trial.

Cisco and PT Inovação, an IT and research company owned by Portugal Telecom, worked with Verizon on the trial and provided the NG-PON2 equipment. 

NG-PON2 is the follow-on development to XG-PON1, the 10 gigabit GPON standard. NG-PON2 supports both point-to-point links and a combination of time- and wavelength-division multiplexing that in effect supports a traditional time-division multiplexed PON per wavelength, known as TWDM-PON. The rates TWDM-PON supports include 10 gigabit symmetrical, 10 gigabit downstream and 2.5 gigabit upstream, and 2.5 gigabit symmetrical.
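The TWDM-PON rate options listed above, and the aggregate of the four-wavelength configuration Verizon field-tested, can be sketched as follows (an illustration only; the names are not from the standard):

```python
# TWDM-PON per-wavelength rate options as listed above,
# as (downstream, upstream) pairs in Gb/s.
RATE_OPTIONS_G = {
    "10G symmetric": (10, 10),
    "10G/2.5G": (10, 2.5),
    "2.5G symmetric": (2.5, 2.5),
}

# The field-tested system stacked four wavelengths for 40 gigabits.
WAVELENGTHS = 4
down, up = RATE_OPTIONS_G["10G symmetric"]
print(WAVELENGTHS * down)  # 40 Gb/s aggregate downstream
```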

Verizon field-tested the transmission of NG-PON2 signals over a fibre already carrying GPON traffic to show that the two technologies can co-exist without interference, including Verizon’s analogue RF video signal. Another test demonstrated how, in the event of an OLT card fault at the central office, the customer’s optical network terminal (ONT) equipment can detect the fault and retune to a new wavelength, restoring the service within seconds.

 

Now we know we can deploy this technology on the same fibre without interference and upgrade the customer when the market demands such speed 

 

Verizon is not saying when it will deploy the next-generation access technology. “We have not said as the technology has to become mature and the costs to reduce sufficiently,” says Vincent O'Byrne, director of access technology for Verizon. 

It will also be several years before such speeds are needed, he says. “But now we know we can deploy this technology on the same fibre without interference and upgrade the customer when the market demands such speed.”  

Verizon expects first NG-PON2 services will be for businesses, while residential customers will be offered the service once the technology is mature and cost-effective, says O’Byrne.

Vodafone is another operator conducting a TWDM-PON field trial based on four 10 gigabit wavelengths, using equipment from Alcatel-Lucent. Overall, Alcatel-Lucent says it has been involved in 16 customer TWDM-PON trials, half in Asia Pacific and the rest split between North America and EMEA.

 



Verizon tips silicon photonics as a key systems enabler

Verizon's director of optical transport network architecture and design, Glenn Wellbrock, shares the operator’s thoughts regarding silicon photonics.

 

Part 3: An operator view

Glenn Wellbrock is upbeat about silicon photonics’ prospects. Challenges remain, he says, but the industry is making progress. “Fundamentally, we believe silicon photonics is a real enabler,” he says. “It is the only way to get to the densities that we want.”

 

Glenn Wellbrock

Wellbrock adds that indium phosphide-based photonic integrated circuits (PICs) can also achieve such densities.

But there are many potential silicon photonics suppliers because of its relatively low barrier to entry, unlike indium phosphide. "To date, Infinera has been the only real [indium phosphide] PIC company and they build only for their own platform,” says Wellbrock.

That an operator must delve into emerging photonics technologies may at first glance seem surprising. But Verizon needs to understand the issues and performance of such technologies. “If we understand what the component-level capabilities are, we can help drive that with requirements,” says Wellbrock. “We also have a better appreciation for what the system guys can and cannot do.”    

Verizon can’t be an expert in the subject, he says, but it can certainly be involved. “To the point where we understand the timelines, the cost points, the value-add and the risk factors,” he says. “There are risk factors that we also want to understand, independent of what the system suppliers might tell us.” 

 

The cost saving is real, but it is also the space savings and power saving that are just as important  

 

All the silicon photonics players must add a laser in one form or another to the silicon substrate since silicon itself cannot lase, but pretty much all the other optical functions can be done on the silicon substrate, says Wellbrock: “The cost saving is real, but it is also the space savings and power saving that are just as important.”  

The big achievement of silicon photonics, which Wellbrock describes as a breakthrough, is getting rid of the gold boxes around the discrete optical components. “How do I get to the point where I don’t have fibre connecting all these discrete components, where the traces are built into the silicon, the modulator is built in, even the detector is built right in?” The resulting design is then easier to package. “Eventually I get to the point where the packaging is glass over the top of that.”

So what has silicon photonics demonstrated that gives Verizon confidence about its prospects? 

Wellbrock points to several achievements, the first being Infinera’s PICs. Yes, he says, Infinera’s designs are indium phosphide-based and not silicon photonics, but the company makes really dense, low-power and highly reliable components.

He also cites Cisco’s silicon photonics-based CPAK 100 Gig optical modules, and Acacia, which is applying silicon photonics and its in-house DSP-ASICs to get a lower power consumption than other, high-end line-side transmitters.

Verizon believes the technology will also be used in CFP4 and QSFP28 optical modules, and at the next level of integration that avoids pluggable modules on the equipment's faceplate altogether.  

But challenges remain. Scale is one issue that concerns Verizon. What makes silicon chips cheap is the fact that they are made in high volumes. “It [silicon photonics] couldn’t survive on just the 100 gigabit modules that the telecom world are buying,” says Wellbrock. 

 

If these issues are not resolved, then indium phosphide continues to win for a long time because that is where the volumes are today

 

When Verizon asks the silicon photonics players about how such scale will be achieved, the response it gets is data centre interconnect. “Inside the data centre, the optics is growing so rapidly," says Wellbrock. "We can leverage that in telecom."

The other issue is device packaging, both for silicon photonics and for indium phosphide. It is one thing to make a silicon photonics die cheaply, but unless the packaging costs can also be reduced, the overall cost saving is lost. “How do you make it reliable and mainstream so that everyone is using the same packaging to get cost down?” says Wellbrock.

All these issues - volumes, packaging, increasing the number of applications a single part can address - need to be resolved, and almost simultaneously. Otherwise, the technology will not realise its full potential and the start-ups will dwindle before the problems are fixed.

“If these issues are not resolved, then indium phosphide continues to win for a long time because that is where the volumes are today,” he says. 

Verizon, however, is optimistic. “We are making enough progress here to where it should all pan out,” says Wellbrock.     

 


OFC 2015 digest: Part 1

A survey of some of the key developments at the OFC 2015 show held recently in Los Angeles.  
 
Part 1: Line-side component and module developments 
  • Several vendors announced CFP2 analogue coherent optics   
  • 5x7-inch coherent MSAs: from 40 Gig submarine and ultra-long haul to 400 Gig metro  
  • Dual micro-ITLAs, dual modulators and dual ICRs as vendors prepare for 400 Gig
  • WDM-PON demonstration from ADVA Optical Networking and Oclaro 
  • More compact and modular ROADM building blocks  
  
Coherent optics within a CFP2  
 
Integrating line-side coherent optics into ever smaller pluggable modules promises higher-capacity line cards and transport platforms. Until now, the main pluggable module for coherent optical transmission has been the CFP but at OFC several optical module companies announced coherent optics that fit within the CFP2 module, dubbed CFP2 analogue coherent optics (CFP2-ACO).  
 
Oclaro, Finisar, Fujitsu Optical Components and JDSU all announced CFP2-ACO designs, capable of 100 Gigabit-per-second (Gbps) line rates using polarisation-multiplexing, quadrature phase-shift keying (PM-QPSK) and 200 Gbps transmission using polarisation-multiplexing, 16-quadrature amplitude modulation (PM-16-QAM).  
 
Unlike the CFP, the CFP2-ACO module houses only the photonics for coherent transmission; the accompanying coherent DSP-ASIC resides on the line card. The CFP2’s 12W power budget is insufficient to accommodate the combined power consumption of the optics and current DSP-ASIC designs.
 
With the advent of the CFP2-ACO, five or even six modules can be fitted on a line card. “With five CFP2s, if you do 100 Gigabit, you have a 500 Gigabit line card, but if you can do 200 Gigabit using 16-QAM, you have a one terabit line card,” says Robert Blum, director of strategic marketing at Oclaro. 
Such line cards can be used not just for metro and regional networks but for the emerging data centre interconnect market, says Blum. Using line-side pluggables also allows operators to add capacity as required.  
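Blum's line-card arithmetic can be sketched directly (the helper name is an illustration):

```python
# Sketch of the line-card capacity arithmetic in the quote above:
# five CFP2-ACO modules per card.
MODULES_PER_CARD = 5

def card_capacity_gbps(gbps_per_module):
    """Aggregate line-card capacity for a given per-module rate."""
    return MODULES_PER_CARD * gbps_per_module

assert card_capacity_gbps(100) == 500    # 100G PM-QPSK modules
assert card_capacity_gbps(200) == 1000   # 200G PM-16-QAM: a 1 Tb/s card
```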
 
Oclaro says its CFP2-ACO module has been shown to work with seven different DSP-ASICs; five developed by the system vendors and two merchant chips, from ClariPhy and NEL.  
 
Oclaro uses a single high-output power narrow line-width laser for its CFP2-ACO. The bulk of the laser’s light is used for the transmitter path but some of the light is split off and used for the local oscillator in the receive path. This saves the cost of using a separate, second laser but requires that the transmit and receive paths operate on a common wavelength.  
 
In contrast, Finisar uses two lasers for its CFP2-ACO: one for the transmit path and one as the local oscillator source. This allows independent transmit and receive wavelengths, and uses all the laser’s output power for transmission. Rafik Ward, Finisar’s vice president of marketing, says the company has invested significantly to develop its CFP2-ACO, using its own in-house components. Finisar acquired indium phosphide specialist u2t Photonics in 2014 specifically to address the CFP2-ACO design.
 
At OFC, fabless chip maker ClariPhy announced a CFP2-ACO reference design card. The design uses the company’s flagship CL20010 DSP-ASIC with a CFP2 cage into which various vendors’ CFP2-ACO modules can be inserted. The CL20010 DSP supports 100 Gbps and 200 Gbps data rates.  
 
“Every major CFP2 module maker is sampling [a CFP2-ACO],” says Paul Voois, co-founder and chief strategy officer at ClariPhy. Having coherent optics integrated into a CFP2 is a real game-changer, he says. Not only will the CFP2-ACO enable one terabit line cards, but the associated miniaturisation of the optics will lower the cost of coherent transmission.  
 
“The DSP’s cost will decline [with volumes] and so will the optics which account for two thirds of the transponder cost,” says Voois. Having a CFP2-ACO multi-source agreement (MSA) also promotes interoperability, further spurring the CFP2-ACO’s adoption, he says.   
 
NeoPhotonics announced a micro integrated coherent receiver (micro-ICR) for the CFP2-ACO. NeoPhotonics all but confirmed it will also supply a CFP2-ACO module. “That would be a logical assumption given that we have all the pieces,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.  
 
 
5x7-inch MSAs: 40 to 400 Gig  
    
Work continues to advance the line-side reach and line-speed capabilities of the fixed 5x7-inch MSA module. 
 
Acacia Communications announced a 5x7-inch coherent transponder that supports two carriers, each capable of carrying 100, 150 or 200 Gigabit  of data. The Acacia design uses two of the company’s silicon photonics chips, one for each carrier, coupled with Acacia’s DSP-ASIC. 
 
Finisar announced two 5x7 inch MSAs: one capable of 100 Gigabit and 200 Gigabit and one tailored for submarine and ultra long-haul applications using 40 Gig or 50 Gig binary phase-shift keying (PM-BPSK).  
 
Finisar claims it offers the industry’s broadest 200 Gigabit optical module portfolio with its 5x7 inch MSA and its CFP2-ACO. It demonstrated its 5x7-inch MSA also working with its CFP2-ACO at OFC. For the demonstration, Finisar used its CFP2-ACO module plugged into ClariPhy’s reference design.  
 
 
Micro-ITLAs, modulators and micro-ICRs go parallel   
 
Oclaro announced a dual micro-ITLA suited for two-carrier signals for a 400 Gig super-channel, with each carrier using PM-16-QAM.  
 
“People are designing discrete line cards using micro-ITLAs, lithium niobate modulators and coherent receivers for 400 Gig, for example, and they need two lasers, one for each channel,” says Oclaro’s Blum. This is the main application Oclaro is seeing for the design, but another use of the dual micro-ITLA is for networks where the receive wavelength is different to the transmitter one. “For that, you need a local oscillator that you tune independently,” says Blum.  
 

JDSU also showed a dual-carrier coherent lithium niobate modulator capable of 400 Gig for long-reach applications. The company is also sampling a dual 100 Gig micro-ICR also for multiple sub-channel applications. 

 

Avago announced a micro-ITLA device using its external cavity laser that has a line-width less than 100kHz. The micro-ITLA is suited for 100 Gig PM-QPSK and 200 Gig 16-QAM modulation formats and supports a flex-grid or gridless architecture.


Tunable SFP+

Oclaro announced a second-generation tunable SFP+ that has a power consumption below 1.5W, meeting the SFP+ MSA. The tunable SFP+ also operates over an extended temperature range of up to 85°C, but here the power consumption rises to 1.8W.
 
“We see a lot of applications that need these higher temperatures: racks running hot, WDM-PON and wireless front-hauling,” says Blum. Wireless fronthaul typically uses grey optics to carry the radio-head traffic sent to the wireless baseband unit. But operators are looking to WDM technology as a way to aggregate traffic and this is where the extended temperature tunable SFP+ can play a role, says Blum.         
 
 
WDM-PON demonstration

ADVA Optical Networking and Oclaro demonstrated a WDM-PON prototype at OFC. WDM-PON has been spoken of for over a decade as the ultimate optical access technology, delivering dedicated wavelengths to premises. More recently, WDM-PON has been deployed to deliver business services and is being viewed for mobile backhaul and fronthaul applications.  
 
The ADVA-Oclaro WDM-PON demonstration is a 40-wavelength system using the C- and L-bands. The system’s 10 Gigabit wavelengths are implemented using tunable SFP+ modules at the customer’s site.  
 
The difference between Oclaro’s second-generation tunable SFP+ and the module used in the WDM-PON demonstration is that the latter does not use a wavelength locker. Instead, a centralised wavelength controller monitors all 40 channels and, if a particular wavelength has drifted and needs adjustment, sends information back to the customer premises equipment via the L-band. “We can get away with a very low-cost tunable laser in the customer premises [using this approach],” says Blum.
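The control loop described above can be sketched in a few lines. This is purely illustrative: the drift tolerance and function names are assumptions of mine, and the real system signals its corrections over the L-band rather than through a function call.

```python
# Illustrative sketch of a centralised wavelength controller: compare each
# channel's measured frequency against its target and tell the customer-side
# tuner how far to shift back. Tolerance value is an assumption.
TOLERANCE_GHZ = 2.0  # assumed drift allowed before a correction is sent

def corrections(target_ghz: dict, measured_ghz: dict) -> dict:
    """Return the frequency offset (GHz) each drifted channel should apply."""
    out = {}
    for ch, target in target_ghz.items():
        drift = measured_ghz[ch] - target
        if abs(drift) > TOLERANCE_GHZ:
            out[ch] = -drift  # tuner shifts back by the measured drift
    return out

# 40 channels on an assumed 100 GHz grid; channel 7 has drifted 3 GHz high,
# so only it receives a correction.
targets = {ch: 191_300 + 100 * ch for ch in range(40)}
measured = dict(targets)
measured[7] += 3.0
print(corrections(targets, measured))  # {7: -3.0}
```

The point of the scheme is visible in the sketch: all the comparison logic lives centrally, so the customer-premises laser needs no wavelength locker of its own.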
  
 
ROADM building blocks 
 
JDSU showcased its latest ROADM line cards at OFC. These included its second-generation twin 1x20 wavelength-selective switch (WSS), part of its TrueFlex Super Transport blade, and its TrueFlex Multicast Switch blade that features a twin 4x16 multicast switch and a 4+4 array of amplifiers.  
 
JDSU’s first-generation twin 1x20 WSS required three chassis slots: two for the twin WSS and a third for amplification and optical channel monitoring. With its latest design, JDSU fits all these functions on one blade.  
 
The 4x16 multicast switch supports a four-degree (four-direction) ROADM and 16 add or drop ports. The twin multicast switch design is used for the multiplexing and demultiplexing of wavelengths. “This size multicast switch needs an amplifier on each of those four ports,” says Brandon Collings, CTO for communications and commercial optical products at JDSU. The 4+4 amplifier array serves the multicast switch’s multiplexing and demultiplexing: “four amps on the mux side of the multicast switch and four amps for the demux side of the multicast switch”, says Collings. 
 
NeoPhotonics announced a modular 4x16 multicast switch which it claims does not need drop amplifiers.  
 
Because the design is modular, operators can grow their systems with demand, avoiding up-front costs and the need to predict the ultimate size of a ROADM node. For example, by adding multicast switch modules they can go from a 4x16 through 8x16 and 12x16 to a full 16x16 switch configuration. “Carriers do not like to have to plan in advance, and they like to be future-proofed,” says NeoPhotonics’ Lipscomb.  
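The pay-as-you-grow arithmetic behind that progression is simple; as a hypothetical sketch (the function name is mine, not NeoPhotonics’):

```python
# Each added 4x16 multicast switch module extends the switch by four degrees,
# so four modules give the full 16x16 configuration (illustrative only).
def switch_size(modules: int) -> str:
    return f"{4 * modules}x16"

print([switch_size(n) for n in range(1, 5)])  # ['4x16', '8x16', '12x16', '16x16']
```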
 
The NeoPhotonics multicast switch uses planar lightwave circuit (PLC) technology and has a broadcast-and-select architecture. As such, the architecture uses optical splitters, which inevitably introduce signal loss. By concentrating on reducing switch loss and by increasing the sensitivity of the integrated coherent receiver, NeoPhotonics claims it can do away with the drop amplifiers for metro networks and even for certain long-haul routes. This can save up to $1,000 a switch, says Lipscomb.    
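The splitter loss that drop amplifiers normally have to make up is easy to estimate. A minimal sketch, assuming ideal splitters and ignoring excess loss:

```python
import math

def splitter_loss_db(ports: int) -> float:
    """Ideal 1xN broadcast splitter loss: power divides evenly over N ports."""
    return 10 * math.log10(ports)

# A 1x16 broadcast split alone costs roughly 12 dB, which is why
# broadcast-and-select drop paths have traditionally needed amplification.
print(round(splitter_loss_db(16), 1))  # 12.0
```

Against a budget of that order, shaving switch loss and tightening receiver sensitivity by a few dB each is what makes an amplifier-free drop path plausible for shorter reaches.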
 
NeoPhotonics’ multicast switch has already been implemented on a line card and introduced into a customer’s platform. It is now undergoing qualification before being made generally available.   
 
ROADM status 
 
“This type of stuff [advanced WSSes and multicast switches for ROADMs] is what Verizon has been pushing for all these years,” says JDSU’s Collings. “These developments have been completed because operators like Verizon are getting serious.” Earlier this year, Verizon selected Ciena and Cisco Systems as the equipment suppliers for its large metro contract.  
 
Some analysts argue that it is largely Verizon promoting advanced ROADM usage and that the rest of the industry is less keen. Collings points out that JDSU, being a blade supplier and not a system vendor, is one customer layer removed from the operators. But he argues that other operators besides Verizon also want to deploy advanced ROADM technology, and that two hurdles must be overcome first. 
 
“People are waiting to see the technology mature and Verizon really do it,” he says. “[Their attitude is:] Let Verizon run headlong into that, and let’s see how they fare before we invest.” Collings says that until now, ROADM hardware has not been sufficiently mature: “Even Verizon has had to wait to start deploying this stuff.” 
 
The second milestone is having a control plane to manage the systems’ flexibility and dynamic nature. This is where the system vendors have focused their efforts in the past year, convincing operators that the hardware and the control plane are up and running, he says. 
 
“There is lots of interest [in advanced ROADMs] from a variety of carriers globally,”  says Collings. “But they have been waiting for these two shoes to drop.”

 


60-second interview with Infonetics' Andrew Schmitt

Market research firm Infonetics Research, now part of IHS Inc., has issued its 2014 summary of the global wavelength-division multiplexing (WDM) equipment market. Andrew Schmitt, research director for carrier transport networking, discusses the findings in a Q&A with Gazettabyte.

 

Andrew Schmitt

Q: Infonetics claims the global WDM market grew 6% in 2014, to total US$10 billion. What accounted for such impressive growth in 2014?

AS: Primarily North American strength from data centre-related spending and growth in China.

 

Q: In North America, the optical vendors' fortunes were mixed: ADVA Optical Networking, Infinera and Ciena had strong results, balanced by major weakness at Alcatel-Lucent, Fujitsu and Coriant. You say those companies whose fortunes are tied to traditional carriers under-performed. What other markets drove those vendors' strong results?

These three vendors are leading the charge into the data centre market. ADVA had flat revenue overall; North America saved their bacon in 2014. Ciena is also there because it has suffered the least from the ongoing changes at AT&T and Verizon. And Infinera has just been killing it: it hasn't been exposed to legacy tier-1 spending and, despite the naysayers, has the platform the new customers want.

 

"People don’t take big risks and do interesting things to attack flat or contracting markets"

 

Q: Is this mainly a North American phenomenon, because many of the leading internet content providers are US firms?

Yes, but spending from Baidu, Alibaba, and Tencent in China is starting to scale. They are running the same playbook as the western data centre guys, with some interesting twists.

 

Q. You say the press and investors are unduly fascinated with AT&T's and Verizon's spending. Yet they are the two largest US operators, their combined capex was $39 billion in 2014, and their revenues grew. Are these other markets becoming so significant that this focus is misplaced?  

Growth is what matters.

People don’t take big risks and do interesting things to attack flat or contracting markets. Sure, it is a lot of spend, but those decisions are already made and that data is already seen and incorporated into people’s thinking and market opinion. What matters is what changes. And all signs are that these incumbents are trying to become more like the data centre folks.

 

Q. What will be the most significant optical networking trend in 2015?

Cheaper 100 gigabit, which lights up the metro 100 gigabit market for real in 2016.

