Glenn Wellbrock’s engineering roots

Glenn Wellbrock

After four decades shaping optical networking, Glenn Wellbrock has retired. He shares his career highlights, industry insights, and his plans to embrace a quieter life of farming and hands-on projects in rural Kansas.

Glenn Wellbrock’s (pictured) fascination with telecommunications began at an early age. “I didn’t understand how it worked, and I wanted to know,” he recalls.

Wellbrock’s uncle had a small, rural telephone company where he worked while studying, setting the stage for his first full-time job at the telecom operator MCI. Wellbrock entered a world of microwave and satellite systems; MCI was originally named Microwave Communications Incorporated. “They were all ex-military guys, and I’m the rookie coming out of school trying to do my best and learn,” says Wellbrock.


The arrival of fibre optics in the late 1980s marked a pivotal shift. As colleagues hesitated to embrace the new “glass” technology, Wellbrock seized the opportunity. “I became the fibre guy,” he says. “My boss said, ‘Anything breaks over there, it’s your problem. You go fix it.’”

This hands-on role propelled him into the early days of optical networking, where he worked on asynchronous systems with bit rates ranging from hundreds of megabits to over a gigabit, before SONET/SDH standards took over.

By the 1990s, with a young family, Wellbrock moved to Texas, contributing to MCI’s development of OC-48 (2.5 gigabit-per-second or Gbps) systems, a precursor to the high-capacity networks that would define his career.

Hitting a speed wall

One of Wellbrock’s proudest achievements was overcoming the barrier to get to speeds faster than 10Gbps, a challenge that dominated the first decade of this century.

Polarisation mode dispersion (PMD) in an optical fibre was a significant hurdle, limiting the distance and reliability of high-speed links. By then, he was working at a start-up and did not doubt that using phase modulation was the answer.

Wellbrock recalls conversations he had with venture capitalists at the time: “I said: ‘Okay, I get we are a company of 40 guys and I don’t even know if they can build it, but somebody’s going to do it, and they’re going to own this place.’”

Wellbrock admits he didn’t know the answer would be coherent optics, but he knew intensity modulation direct detection had reached its limits.

For a short period, Wellbrock was part of Marconi before joining Verizon in 2006. In 2007, he was involved in a Verizon field trial between Miami and Tampa, 300 miles apart, which demonstrated a 100Gbps direct-detection system. “It was so manual,” he admits. “It took three of us working through the night to keep it working so we could show it to the executives in the morning.”

While the trial successfully carried video, it was clear that direct detection wouldn’t scale. The solution lay in coherent detection, which Wellbrock’s team, working with Nortel (whose optical business was later acquired by Ciena), finally brought to market by 2009.

“Coherent was like seeing a door,” he says. “PMD was killing you, but you open the door, and it’s a vast room. We had breathing room for almost two decades.”

Verizon’s lab in Texas had multiple strands of production fibre that looped back to the lab every 80km. “We could use real-world glass with all the impairments, but keep equipment in one location,” says Wellbrock.

This setup enabled rigorous testing and led to numerous post-deadline papers at OFC, cementing Verizon’s reputation for optical networking innovation.

Rise of the hyperscalers

Wellbrock’s career spanned a transformative era in telecom, from telco-driven innovation to the rise of hyperscalers like Google and Microsoft.

He acknowledges the hyperscalers’ influence as inevitable due to their scale. “If you buy a million devices, you’re going to get attention,” he says. “We’re buying 100 of the same thing.”

Hyperscalers’ massive orders for pluggable modules and tunable lasers—technologies telcos like Verizon helped pioneer—have driven costs down, benefiting the industry.

However, Wellbrock notes that telcos remain vital for universal connectivity. “Every person, every device is connected,” he says. “Telcos aren’t going anywhere.”

Reliability remains a core challenge, particularly as networks grow. Wellbrock emphasises dual homing—redundant network paths—as telecom’s time-tested solution. “You can’t have zero failures,” he says. “Everything’s got a failure rate associated with it.”
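The arithmetic behind dual homing is worth making concrete. Here is a minimal sketch, in Python with hypothetical availability figures rather than Verizon data, of why two independently routed paths dramatically cut downtime:

```python
# Minimal sketch (hypothetical figures, not Verizon data): dual homing
# helps because two independently routed paths must both fail at the
# same time before the service goes down.

path_availability = 0.999            # assume each path is up 99.9% of the time
p_fail = 1 - path_availability       # single-path unavailability

single_homed = p_fail                # down whenever the one path is down
dual_homed = p_fail ** 2             # down only if both independent paths fail

HOURS_PER_YEAR = 24 * 365
print(f"Single-homed downtime: {single_homed * HOURS_PER_YEAR:.2f} hours/year")
print(f"Dual-homed downtime:   {dual_homed * HOURS_PER_YEAR:.4f} hours/year")
```

Under these assumed numbers, dual homing turns nearly nine hours of annual downtime into about half a minute, which is why it has been telecom’s time-tested answer to non-zero failure rates.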

He sees hyperscalers grappling with similar issues, as evidenced by a Google keynote at the Executive Forum at OFC 2025, which sought solutions for network failures linking thousands of AI accelerators in a data centre.

Wellbrock’s approach to such challenges is rooted in collaboration. “You’ve got to work with the ecosystem,” he insists. “Nobody solves every problem alone.”

Hollow-core fibre

Looking forward, what excites Wellbrock is hollow-core fibre, which he believes could be as transformative as SONET, optical amplifiers, and coherent detection.

Unlike traditional fibre, hollow-core fibre uses air-filled waveguides, offering near-zero loss, low latency, and vast bandwidth potential. “If we could get hollow-core fibre with near-zero loss and as much bandwidth as you needed, it would give us another ride at 20 years’ worth of growth,” he says. “It’s like opening another door.”

While companies like Microsoft are experimenting with hollow-core fibre, Wellbrock cautions that widespread adoption is years away. “They’re probably putting in [a high-count cable of] 864 [strands of] standard glass and a few hollow-core [strands],” he notes.

For long-haul routes, the technology promises lower latency and freedom from nonlinear effects, but challenges remain in developing compatible transmitters, receivers, and amplifiers. “All we’ve got to do is build those,” he says, laughing, acknowledging the complexity.
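The latency benefit follows from light travelling at c/n: in solid silica the group index is roughly 1.47, while an air core is close to 1. A small sketch using textbook values, not any vendor’s specifications:

```python
# Sketch: one-way latency over a route, standard fibre versus hollow-core.
# Group indices are typical textbook values, not vendor specifications.

C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
N_SILICA = 1.468           # approx. group index of standard single-mode fibre
N_HOLLOW = 1.003           # an air-filled core is close to vacuum

def one_way_latency_ms(route_km: float, group_index: float) -> float:
    return route_km * group_index / C_KM_PER_S * 1000

ROUTE_KM = 1000
print(f"Standard fibre: {one_way_latency_ms(ROUTE_KM, N_SILICA):.2f} ms")  # ~4.90
print(f"Hollow-core:    {one_way_latency_ms(ROUTE_KM, N_HOLLOW):.2f} ms")  # ~3.35
# Roughly a third less latency per kilometre with an air-filled core.
```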

Wellbrock also highlights fibre sensing as a practical innovation, enabling real-time detection of cable damage. “If we can detect an excavator getting closer, we can stop it before it breaks a fibre link,” he explains. This technology, developed in collaboration with partners like NEC and Ciena, integrates optical time-domain reflectometry (OTDR) into transmission systems, thereby enhancing network reliability.
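The ranging principle OTDR integration relies on is simple: a disturbance reflects light back, and the round-trip delay locates it along the cable. A minimal sketch of that arithmetic, with illustrative values; real distributed-sensing systems add far more signal processing:

```python
# Sketch of OTDR ranging: a back-reflection's round-trip delay locates
# an event along the fibre. Values are illustrative only.

C_M_PER_S = 299_792_458
GROUP_INDEX = 1.468   # approximate group index of the fibre

def event_distance_km(round_trip_seconds: float) -> float:
    # Factor of two: the light travels out to the event and back.
    return C_M_PER_S * round_trip_seconds / (2 * GROUP_INDEX) / 1000

print(f"{event_distance_km(100e-6):.1f} km")  # a 100 microsecond echo -> ~10.2 km
```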

Learnings

Wellbrock’s approach to innovation centres on clearly defining problems to engage the broader ecosystem. “Defining the problem is two-thirds of solving it,” he says, crediting a Verizon colleague, Tiejun J. Xia, for the insight. “If you articulate it well, lots of smart people can help you fix it.”

This philosophy drove his success at OFC, where he used the conference to share challenges, such as fibre sensing, and rally vendor support. “You’ve got to explain the value of solving it,” he adds. “Then you’ll get 10 companies and 1,000 engineers working on it.”

He advises against preconceived solutions or excluding potential partners. “Never say never,” he says. “Be open to ideas and work with anybody willing to address the problem.”

This collaborative mindset, paired with a willingness to explore multiple solutions, defined his work with Xia, a PhD associate fellow at Verizon. “Our favourite Friday afternoon was picking the next thing to explore,” he recalls. “We’d write down 10 possible things and pull on the string that had legs.”

Glenn Wellbrock's son, Dave, in farming action

Fibre to Farming

As Wellbrock steps into retirement, he is teaming up with his brother.

The two own 400 acres in Kansas, where wheat farming, hunting, and fishing will define their days. “I won’t miss 100 emails a day or meetings all day long,” he admits. “But I’ll miss the interaction and building stuff.”

Farming offers a chance to work with one’s hands, doing welding and creating things from metal. “I love to build things,” he says. “It’s fun to go, ‘Why hasn’t somebody built this before?’”

Farming projects can be completed in a day or over a weekend. “Networks take a long time to build,” he notes. “I’m looking forward to starting a project and finishing it quickly.”

He plans to cultivate half their land to fund their hobbies, using “old equipment” that requires hands-on maintenance—a nod to his engineering roots.

OFC farewell

Wellbrock retired just before the OFC show in March 2025. His attendance was less about work and more about transition, where he spent the conference introducing his successor to vendors and industry peers, ensuring a smooth handoff.

“I didn’t work as hard as I normally do at OFC,” he says. “It’s about meeting with vendors, doing a proper handoff, and saying goodbye to folks, especially international ones.” He also took part in this year’s OFC Rump Session.

Wellbrock admits to some sadness. Yet, he remains optimistic about his future, with plans to possibly return to OFC as a visitor. “Maybe I’ll come just to visit with people,” he muses.

Timeline 

  • 1984: MCI 
  • 1987: Started working on fibre 
  • 2000: Joined start-ups and, for a short period, was part of Marconi 
  • 2004: Joined WorldCom, which had bought MCI 
  • 2006: Joined Verizon 
  • 2025: Retired from Verizon 

A tribute

Prof. Andrew Lord, Senior Manager, Optical and Quantum Research, BT

I have had the privilege of knowing Glenn since the 1990s, when BT had a temporary alliance with MCI. We shared a vendor trip to Japan, where I first learnt of his appetite for breakfasting at McDonald’s!

Glenn has been a pivotal figure in our industry since then. A highlight would be the series of ambitious Requests For Information (RFIs) issued by Verizon, which would send vendor account managers scurrying to their R&D departments for cover.

Another highlight would be the annual record-breaking post-deadline paper results at OFC: those thrilling sessions won’t be the same without a Wellbrock paper, and neither will the OFC rump sessions, which have benefited from his often brutal pragmatism, always delivered with grace (which somehow made it even worse when he defeated me in an argument!).

But it’s grace that defines the man who always has time for people and is always generous enough to share his views and experiences. Glenn will be sorely missed, but he deserves a fulfilling and happy retirement.


OFC 2025 industry reflections - Part 2

Exhibition floor. Source: OFC

Gazettabyte is asking industry figures for their thoughts after attending the 50th-anniversary OFC show in San Francisco. In Part 2, the contributions are from BT’s Professor Andrew Lord, optical communications advisor Chris Cole, Coherent’s Vipul Bhatt, and Juniper Networks’ Dirk van den Borne.

Professor Andrew Lord, Head of Optical Network Research at BT Group

OFC was a highly successful and lively show this year, reflecting a sense of optimism in the optical comms industry. The conference was dominated by the need for optics in data centres to handle the large AI-driven demands. And it was exciting to see the conference at an all-time attendance peak.

From a carrier perspective, I continued to appreciate the maturing of 800-gigabit plugs for core networks and 100G ZR plugs (including bidirectional operation for single-fibre working) for the metro-access side.

Hollow-core fibre continues to progress with multiple companies developing products, and evidence for longer lengths of fibre in manufacturing. Though dominated by data centres and low-latency applications such as financial trading, use cases are expected to spread into diverse areas such as subsea cables and 6G xHaul.

There was also a much-increased interest in fibre sensing as an additional revenue generator for telecom operators, although compelling use cases will require more cost-effective technology.

Lastly, there has been another significant increase in quantum technology at OFC. There was an ever-increasing number of Quantum Key Distribution (QKD) protocols on display but with a current focus on Continuous-Variable QKD (CV-QKD), which might be more readily manufacturable and easier to integrate.

Chris Cole, Optical Communications Advisor

For the premier optics conference, the amount of time and floor space devoted to electrical interfaces was astounding.

Even more amazing is that while copper’s death at the merciless hands of optics continues to be reported, the percentage of time devoted to electrical work at OFC is going up. Multiple speakers commented on this throughout the week.

One reason is that as rates increase, the electrical links connecting optical links to ASICs are becoming disproportionately challenging. The traditional Ethernet model of electrical adding a small penalty to the overall link is becoming less valid.

Another reason is the introduction of power-saving interfaces, such as linear and half-retimed, which tightly couple the optical and electrical budgets.

Optics engineers now have to worry about the S-parameters and crosstalk of electrical connectors, vias, package balls, copper traces and more.

The biggest buzz in datacom was around co-packaged optics, helped by Nvidia’s switch announcements at GTC in March.

Established companies and start-ups were outbidding each other with claims of the highest bandwidth in the smallest space; typically the more eye-popping the claims, the less actual hard engineering data to back them up. This is for a market that is still approximately zero and faces its toughest hurdles of yield and manufacturability ahead.

To their credit, some companies are playing the long game and doing the slow, hard work to advance the field. For example, I continue to cite Broadcom for publishing extensive characterisation of their co-packaged optics and establishing the bar for what is minimally acceptable for others if they want to claim to be real.

The irony is that, in the meantime, pluggable modules are booming, and it was exciting to see so many suppliers thriving in this space, as demonstrated by the products and traffic in their booths.

The best news for pluggable module suppliers is that if co-packaged optics takes off, it will create more bandwidth demand in the data centre, driving up the need for pluggables.

I may have missed it, but no coherent ZR or other long-range co-packaged optics were announced.

A continued amazement at each OFC is the undying interest and effort in various incarnations of general optical computing.

Despite the idea having no merit, as is easily shown from first principles, the number of companies and researchers in the field is growing. This is also despite the market holding steady at zero.

The superficiality of the field is best illustrated by a slogan gaining popularity and heard at OFC: computing at the speed of light. This is despite the speed of propagation being similar in copper and optical waveguides. The reported optical computing devices are hundreds of thousands or millions of times larger than equivalent CMOS circuits, resulting in the distance, not the speed, determining the compute time.

Practical optical computing precision is limited to about four bits, unverified claims of higher precision notwithstanding, making it useless in data centre applications.

Vipul Bhatt, Vice President, Corporate Strategic Marketing at Coherent.

Three things stood out at OFC:

  • The emergence of transceivers based on 200-gigabit VCSELs
  • A rising entrepreneurial interest in optical circuit switching
  • And an accelerated momentum towards 1.6-terabit (8×200-gigabit) transceivers, alongside the push for 400-gigabit lanes, due to AI-driven bandwidth expansion.

The conversations about co-packaged optics showed increasing maturity, shifting from ‘pluggable versus co-packaged optics’ to their co-existence. The consensus is now more nuanced: co-packaged optics may find its place, especially if it is socketed, while front-panel pluggables will continue to thrive.

Strikingly, talk of optical interconnects beyond 400-gigabit lanes was almost nonexistent. Even as we develop 400 gigabit-per-lane products, we should be planning the next step: either another speed leap (this industry has never disappointed) or, more likely, a shift to ‘fast-and-wide’, blurring the boundary between scale-out and scale-up by using a high radix.

Considering the fast cadence of bandwidth upgrades, the absence of such a pivotal discussion was unexpected.

Dirk van den Borne, Director of System Engineering at Juniper Networks

The technological singularity is defined as the merger of man and machine. However, after a week at OFC, I will venture a different definition: the “AI singularity” is the point when we talk about AI every waking hour and about nothing else. The industry seemed close to this point at OFC 2025.

My primary interest at the show was the industry’s progress around 1.6-terabit optics, from scale-up inside the rack to data centre interconnect and long-haul using ZR/ZR+ optics. The industry here is changing and innovating at an incredible pace, driven by the vast opportunity that AI unlocks for companies across the optics ecosystem.

A highlight was the first demos of 1.6-terabit optics using DSPs in a 3nm CMOS process, which have started to tape out and bring the power consumption down from a scary 30W to a high but workable 25W. Beyond the power saving alone, this difference matters a lot in the design of high-density switches and routers.

It’s equally encouraging to see the first module demos with 200 gigabit-per-lane VCSELs and half-retimed, linear receive optics (LRO) pluggables. Both approaches can potentially reduce the optics’ power consumption to 20W and below.

The 1.6-terabit ecosystem is rapidly taking shape and will be ready for prime time once 1.6-terabit switch ASICs arrive in the marketplace. There’s still a lot of buzz around linear pluggable optics (LPO) and co-packaged optics, but both don’t seem ready yet. LPO mostly appears to be a case of too little, too late. It wasn’t mature enough to be useful at 800 gigabits, and the technology will be highly challenging for 1.6 terabits.

The dream of co-packaged optics will likely have to wait for two more years, though it does seem inevitable. But with 1.6 terabit pluggable optics maturing quickly, I don’t see it having much impact in this generation.

The ZR/ZR+ coherent optics are also progressing rapidly. Here, 800-gigabit is ready, with proven interoperability between modules and DSPs using the OpenROADM probabilistic constellation shaping standard, a critical piece for interoperability in more demanding applications.

The road to 1600ZR coherent optics for data centre interconnect (DCI) is now better understood, and power consumption projections seem reasonable for 2nm DSP designs.

Unfortunately, 1600ZR+ is more of a question mark to me, as ongoing standardisation is taking it in a different direction and, hence, towards a different DSP design from 1600ZR.

The most exciting discussions are around “scale-up” and how optics can replace copper for intra-rack connectivity.

This is an area of great debate and speculation, with wildly differing technologies being proposed. However, the goal of around 10 petabit-per-second (Pbps) in cross-sectional bandwidth in a single rack is a terrific industry challenge, one that can spur the development of technologies that might open up new markets for optics well beyond the initial AI cluster application.


ECOC 2023 industry reflections

Gazettabyte is asking industry figures for their thoughts after attending the recent ECOC show in Glasgow. In particular, what developments and trends they noted, what they learned and what, if anything, surprised them. Here are the first responses from BT, Huawei, and Teramount.

Andrew Lord, Senior Manager, Optical Networks and Quantum Research at BT

I was hugely privileged to be the Technical Co-Chair of ECOC in Glasgow, Scotland and have been working on the event for over a year. The overriding impression was that the industry is fully functioning again, post-covid, with a bumper crop of submitted papers and a full exhibition. Chairing the conference left little time to indulge in content. I will need to do my regular ECOC using the playback option. But specific themes struck me as interesting.

There were solid sessions and papers around free space optics, including satellite. The activities here are more intense than we would typically see at ECOC. This reflects a growing interest and the specific expertise within the Scottish research community. Similarly, more quantum-related papers demonstrated how quantum is integrating into the mainstream optical industry.

I was impressed by the progress towards 800-gigabit ZR (800ZR) pluggables in the exhibition. This will make for some interesting future design decisions, particularly if these can be used instead of the increasingly ubiquitous 400-gigabit ZR. I am still unclear whether 800-gigabit coherent can hit the required power consumption points for plugging directly into routers. The costs for these plugs, driven by volumes, will have a significant impact.

I also enjoyed a lively and packed rump session debating the invasion of artificial intelligence (AI) into our industry. I believe considerable care is needed, particularly where AI might have a role in network management and optimisation.

Maxim Kuschnerov, Director R&D at Huawei

ECOC usually has fewer major announcements than the OFC show. But ECOC was full of technical progress this time, making the OFC held in March seem a distant memory.

What was already apparent in September at the CIOE in Shenzhen was on full display on the exhibition floor in Glasgow: the linear drive pluggable optics (LPO) trend has swept everyone off their feet. The performance of 100-gigabit native signalling using LPO cannot be ignored, for both single-mode fibre and VCSELs.

Arista gave a technical deep-dive at the Market Focus with a surprising level of detail that went beyond the usual marketing. There was also a complete switch set-up at the Eoptolink booth, and the OIF interop demonstration.

While we must wait for a significant end user to adopt LPO, it raises the question: is this a one-off technological accident, or should the industry embrace the trend and have research set its eyes on 200 gigabits per lane? The latter would require a rearchitecting of today’s switches, a more powerful digital signal processor (DSP) and likely a new forward error correction (FEC) scheme, making the weak legacy KP4 code for the 224-gigabit serdes in IEEE 802.3dj look like a poor choice.
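For context on why KP4 is considered weak at these rates: it is the Reed-Solomon code RS(544,514) over 10-bit symbols inherited from earlier Ethernet generations. Its headline parameters, stated here for illustration, are modest:

```latex
% KP4 FEC: Reed-Solomon RS(544,514) with 10-bit symbols
\text{overhead} = \frac{544 - 514}{514} \approx 5.8\%, \qquad
t = \frac{544 - 514}{2} = 15 \ \text{correctable symbols per codeword}
```

A stronger FEC would trade more overhead (and DSP power) for coding gain, which is the rearchitecting trade-off referred to above.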

There was less emphasis on Ethernet 1.6 terabits per second (Tb/s) interfaces with 8x200G optical lanes. However, the arrival of a second DSP source with better performance was noted at the show.

The power of 1.6-terabit DR8 modules showed no significant technological improvement compared with 800Gbps DSP-based modules, and looked even more out of place when benchmarked against 800G LPO pluggables. Arista drove home that we can’t continue increasing the power consumption of modules at the faceplate, despite the 50W QSFP-DD1600 announcement.

The same is true for coherent optics.

Although the demonstration of the first 800ZR live modules was technically impressive, the efficiency of the power per bit hardly improved compared to 400ZR, making the 1600ZR project of OIF look like a tremendous technological challenge.

To explain, a symbol rate of 240 gigabaud (GBd) will drive the optics for 1600ZR. At 240GBd, four levels (two bits) per symbol in each of two dimensions creates 16QAM and gives a 400Gbps net rate, or a 480Gbps gross rate, electrically per lane, albeit at very short reach. Coherent has four lanes – two polarisations, each with in-phase and quadrature components – to deliver four times 400G, or 1.6Tbps. This mirrors what we have now: 200G on the optical side of 1.6T 8×200G PAM4 and 4×200G on 800ZR, while the electrical (longer-reach) host still uses 100 gigabits per lane.
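Restating that arithmetic as a worked calculation, using the figures given above:

```latex
% Per electrical lane: one dimension (I or Q) of one polarisation.
% 16QAM = 4 levels = 2 bits per symbol in each dimension.
R_{\text{lane}} = 240\,\mathrm{GBd} \times 2\,\mathrm{bit/symbol}
               = 480\,\mathrm{Gb/s\ gross} \;\approx\; 400\,\mathrm{Gb/s\ net}

% Four lanes (2 polarisations, each with I and Q):
R_{\text{total}} = 4 \times 400\,\mathrm{Gb/s} = 1.6\,\mathrm{Tb/s}
```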

The industry will have to analyse which data centre scenarios direct detection will be able to cover with the same analogue-to-digital & digital-to-analogue converters and how deeply coherent could be driven within the data centre.

ECOC also featured optical access evolution. With the 50G FTTx standard completed, components sampling at the show, and products shipping next year, the industry has set its eyes on the next generation of very high-speed PON.

There is some initial agreement on the technological choice for 200 gigabits: dual-lambda non-return-to-zero (NRZ) signalling. Much of the industry debate was around the use cases. It is unrealistic to assume that private consumers will continue driving bandwidth demand. Therefore, a stronger focus on 6G wireless fronthaul or enterprise seems a likely scenario for point-to-multi-point technology.

Hesham Taha, CEO of Teramount

Co-packaged optics had renewed vigour in ECOC, thanks partly to the recent announcements of leading foundries and other semiconductor vendors collaborating in silicon photonics.

One crucial issue, though, is that scalable fibre assembly remains an unsolved problem that is getting worse due to the challenging requirements of high-performance systems for AI and high-performance computing. These requirements include a denser “shoreline” with a higher fibre count and a denser fibre pitch, and support for an interposer architecture with different photonic integrated component (PIC) geometries.

Despite customers having different requirements for co-packaged optics fibre assembly, detachable fibres now have wide backing. Having fibre ribbons that can be separated from the co-packaged optics packaging process increases manufacturing yield and reliability. It also allows costly co-packaged optics-based servers and switches to be serviced in the field to replace faulty fibre.

Our company, Teramount, had an ECOC demo showing the availability of such a detachable fibre connector for CPO, dubbed Teraverse.

It is increasingly apparent that the solution for a commercially viable fibre assembly on chip lies with a robust manufacturing ecosystem rather than something tackled by any one system vendor. This fabless model has proven itself in semiconductors and must be extended to silicon photonics. This will allow each part of the production chain – IC designers, foundries, and outsourced semiconductor assembly and test (OSAT) players – to focus on what they do best.


Optical networking's future

Shown is Professor Polina Bayvel in her lab at University College London. Bayvel gave the opening plenary talk at ECOC.

Should the industry do more to support universities undertaking optical networking research? Professor Polina Bayvel thinks so and addressed the issue in her plenary talk at the ECOC conference and exhibition held in Glasgow, Scotland, earlier this month.

In 1994, Bayvel set up the Optical Networks Group at University College London (UCL). At the time, telecom operators and vendors like STC, GPT, and Marconi led optical networking research. However, setting up the UCL group proved far-sighted as industry players cut their research budgets or closed.

Universities continue to train researchers, yet firms do not feel a responsibility to contribute to the costs of their training to ensure the flow of talent. One optical systems vendor has hired eight of her team.

In her address, Bayvel outlined how her lab should be compensated, drawing an analogy with soccer: when a club sells a player, the team that developed him also gets part of the fee.

Such income would be welcome, says Bayvel, citing a talented student from Brazil who needs help to fund his university grant. Her lab would also benefit. During a visit to the lab, a pile of boxes – state-of-the-art test equipment – had just arrived.

Plenary talk

Bayvel mentioned how the cloud didn’t exist 18 years ago and that what has enabled it is optical networking and Moore’s law. She also tackled how technology will likely evolve in the next 18 years.

Digital data is being created at a remarkable rate, she said. Three exabytes (an exabyte is a billion billion bytes) are being added to the cloud, which holds several zettabytes (a zettabyte, ZB, is 1,000 exabytes) of data. By 2025, data in the cloud will total 275ZB.

The cited stats continued: 6.2 billion kilometres of fibre were deployed between 2005 and 2023, representing 60 zettabits of capacity. In comparison, all the data satellite systems now deployed offer 100 terabits, less than the capacity of a single fibre.

Moore’s law has enabled complex coherent digital signal processors (DSPs) that clean up the distortions of an optical signal sent over a fibre. The first coherent DSPs consumed 1W for each gigabit of data sent. Over a decade later, DSPs use 0.1W to send a gigabit.

Data growth will keep driving capacity, says Bayvel. Engineers have had to fight hard to squeeze more capacity using coherent optical technology. Further improvement will come from techniques such as non-linear compensation. One benefit of Moore’s law is that coherent DSPs will be more capable of tasks such as non-linear compensation. For example, Ciena’s latest 3nm CMOS process, the WaveLogic 6e DSP, uses one billion digital logic gates.

Extra wide optical comms

But only so much can be done by the DSP and increasing the symbol rate. The next step will be to ramp the bandwidth by combining a fibre’s O, E, S, C, L and U spectrum bands. New optical devices, such as hybrid amplifiers, will be needed, and pushing transmission distance over these bands will be hard.

“We fought for fractions of a decibel [of signal-to-noise ratio]; surely we’re not going to give up the wavelengths available through this [source of] bandwidth?” said Bayvel.

In his Market Focus talk at ECOC, BT’s Professor Andrew Lord argued the opposite. There will be places where combining the C- and L-bands will make sense, but why bother when spatial division multiplexing fibre deployments in the network are inevitable, he said.

“It is not spatial division multiplexing versus extra wide optical comms; they can co-exist,” said Bayvel.

Bayvel described work done in her lab, using data collected from the MAREA subsea cable, to model the performance of such a large amount of spectrum. Combining the fibre’s spectral bands – a total of 60 terahertz of spectrum – promises to quadruple the bandwidth currently available. However, this will require more powerful DSPs than are available today.

Another area ripe for development is intelligent optical networking using machine learning.

An ideas lag

Bayvel used her talk to pay tribute to her mentor, Professor John Midwinter.

Midwinter was an optical communications pioneer at BT and then UCL. He headed the team whose first trial systems led to BT becoming the first company in the world to introduce optical fibre communications systems in its network.

In 1983, his last year at BT, Midwinter wrote in the British Telecom Technology Journal that this was the year coherent optical systems would be taken seriously. It took another 20-plus years.

Bayvel noted how many ideas developed in optical research take considerable time before the industry adopts them. “Changes in the network are much slower,” she said. “Operators are conservative and focus on solving today’s problems.”

Another example she cited is Google’s Apollo optical switch being used in its data centres. Bayvel noted that the switch is relatively straightforward, using MEMS technology that has been around for 25 years.

Bayvel used her keynote to attack the telecom regulators.

“It is simply unfair that the infrastructure providers get such a small part of the profits compared to the content providers,” she said. “The regulators have done a terrible job.”


BT's IP-over-DWDM move

Professor Andrew Lord, BT's head of optical networking.

  • BT will roll out IP-over-DWDM using pluggable coherent optics in its network next year
  • At ECOC 2022, BT detailed network trials involving ZR+ and XR coherent pluggable modules

Telecom operators have been reassessing IP-over-DWDM with the advent of 400-gigabit coherent optics that plug directly into IP routers.

According to BT, using pluggables for IP-over-DWDM means a separate transponder box and associated ‘grey’ (short-reach) optics are no longer needed.

Until now, the transponder has linked the IP router to the dense wavelength-division multiplexing (DWDM) optical line system.

“Here is an opportunity to eliminate unnecessary equipment by putting coloured optics straight onto the router,” says Professor Andrew Lord, BT’s head of optical networking.

Removing equipment saves power and floor space too.

DWDM trends

Operators need to reduce the cost of sending traffic, the cost-per-bit, given the continual growth of IP traffic in their networks.

BT says its network traffic is growing at 30 per cent a year. As a result, the operator is starting to see the limits of its 100-gigabit deployments and says 400-gigabit wavelengths will be the next capacity hike.

Spectral efficiency is another DWDM issue. In the last 20 years, BT has increased capacity by lighting a new fibre pair using upgraded optical transport equipment.

Wavelength speeds have gone from 2.5 to 10, then to 40, 100, and soon 400 gigabits, each time increasing the total traffic sent over a fibre pair. But that is coming to an end, says BT.

“If you go to 1.2 terabits, it won’t go as far, so something has to give,” says Lord. “So that is a new question we haven’t had to answer before, and we are looking into it.”

Fibre capacity is no longer increasing because coherent optical systems are already approaching the Shannon limit; send more data on a wavelength and it occupies a wider channel bandwidth.
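The Shannon limit referenced here ties capacity to channel bandwidth and signal-to-noise ratio. In its textbook form, for a single polarisation:

```latex
C = B \log_2\!\left(1 + \mathrm{SNR}\right) \quad \text{bit/s}
```

Near the limit, sending more data on a wavelength means widening the channel bandwidth B (which is what raising the symbol rate does) or raising the SNR, which amplifier noise and fibre nonlinearity cap in practice.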

Optical engineers have improved transmission speeds by using higher symbol rates. Effectively, this enables more data to be sent using the same modulation scheme. And keeping the same modulation scheme means existing reaches can still be met. However, upping the symbol rate is increasingly challenging.

Other ways of boosting capacity include making use of more spectral bands of a fibre: the C-band and the L-band, for example. BT is also researching spatial division multiplexing (SDM) schemes.

IP-over-DWDM

IP-over-DWDM is not a new topic, says BT. To date, IP-over-DWDM has required bespoke router coherent cards that take an entire chassis slot, or the use of coherent pluggable modules that are larger than standard QSFP-DD client-side optics ports.

“That would affect the port density of the router to the point where it’s not making the best use of your router chassis,” says Paul Wright, optical research manager at BT Labs.

The advent of OIF-defined 400ZR optics has catalysed operators to reassess IP-over-DWDM.

The 400ZR standard was developed to link equipment housed in separate data centres up to 120km apart. The 120km reach is limiting for operators but BT’s interest in ZR optics stems from the promise of low-cost, high-volume 400-gigabit coherent optics.

“It [400ZR optics] doesn’t go very far, so it completely changes our architecture,” says Lord. “But then there’s a balance between the numbers of [router] hops and the cost reduction of these components.”

BT modelled different network architectures to understand the cost savings using coherent ZR and ZR+ optics; ZR+ pluggables have superior optical performance compared to 400ZR.

The networks modelled included IP routers in a hop-by-hop architecture where the optical layer is used for point-to-point links between the routers.

This worked well for traffic coming into a hub site but wasn’t effective when traffic growth occurred across the network, says Wright, since traffic cascaded through every hop.
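A toy model, with hypothetical numbers rather than BT’s actual study, illustrates Wright’s point: in a hop-by-hop chain, every demand between distant nodes transits the intermediate routers, so inner links and router ports fill up fastest:

```python
# Toy model (hypothetical numbers, not BT's study): in a hop-by-hop chain,
# a demand between distant nodes transits every intermediate router, so
# inner links carry far more than any single end-to-end demand.

N = 6             # nodes in a chain of routers
DEMAND_G = 100    # Gb/s between every node pair (uniform, made up)

link_load = [0] * (N - 1)          # link i joins node i and node i+1
for a in range(N):
    for b in range(a + 1, N):      # each node pair (a, b)
        for i in range(a, b):      # crosses links a .. b-1
            link_load[i] += DEMAND_G

print(link_load)   # [500, 800, 900, 800, 500] -> inner links are the hotspots
```

Optical bypass keeps that transit traffic off intermediate routers, which is what the ROADM-based architectures modelled next exploit.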

BT also modelled ZR+ optics in a reconfigurable optical add-drop multiplexer (ROADM) network architecture, as well as a hybrid arrangement using both ZR+ and traditional coherent optics. Traditional coherent optics, with its superior optical performance, can pass through a string of ROADM stages where ZR+ optics falls short.

BT compared the cost of the architectures assuming certain reaches for the various coherent optics and published the results in a paper presented at ECOC 2020. The study concluded that ZR and ZR+ optics offer significant cost savings compared to coherent transponders.

ZR+ pluggables have since improved, using higher output powers to better traverse a network’s ROADM stages. “The [latest] ZR+ optics should be able to go further than we predicted,” says Wright.

It means BT is now bought into IP-over-DWDM using pluggable optics.

BT is doing integration tests and plans to roll out the technology sometime next year, says Lord.

XR optics

BT is a member of the Open XR Forum, promoting coherent optics technology that uses optical sub-carriers.

Dubbed XR optics, the technology implements a point-to-point communication scheme if all the sub-carriers originate at the same point and are sent to a common destination.

Sub-carrier technology also enables traffic aggregation. Each sub-carrier, or a group of sub-carriers, can be sent from separate edge-network locations to a hub where they are aggregated. For example, 16 endpoints, each using a 25-gigabit sub-carrier, can be aggregated at a hub using a 400-gigabit XR optics pluggable module. Here, XR optics is implementing point-to-multipoint communication.
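A sketch of how that aggregation might be book-kept, in Python; the site names and demands are made up, and real XR provisioning is considerably more involved:

```python
# Sketch of XR-style point-to-multipoint aggregation (illustrative only):
# a 400-gigabit hub module exposes 16 x 25-gigabit sub-carriers that can
# be parcelled out to edge endpoints. Sites and demands are hypothetical.

HUB_G = 400
SUBCARRIER_G = 25
NUM_SUBCARRIERS = HUB_G // SUBCARRIER_G   # 16

demands_g = {"edge-A": 100, "edge-B": 50, "edge-C": 25, "edge-D": 225}

allocation, next_free = {}, 0
for site, gbps in demands_g.items():
    needed = gbps // SUBCARRIER_G                       # sub-carriers required
    allocation[site] = list(range(next_free, next_free + needed))
    next_free += needed

assert next_free <= NUM_SUBCARRIERS                     # hub capacity respected
print(allocation)   # edge-A -> sub-carriers [0, 1, 2, 3], i.e. 4 x 25G = 100G
```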

Lord views XR optics as innovative. “If only we could find a way to use it, it could be very powerful,” he says. “But that is not a given; for some applications, XR optics might be too big and for others it may be slightly too small.”

ECOC 2022

BT’s Wright shared the results of recent trial work using ZR+ and XR optics at the ECOC 2022 conference, held in Basel in September.

The 400ZR+ modules were plugged into Nokia 7750 SR-s routers for an IP-over-DWDM trial that included the traffic being carried over a third-party ROADM system in BT’s network. BT showed the -10dBm launch-power ZR+ optics working over the ROADM link.

For Wright, the work confirms that 0dBm launch-power ZR+ optics will be important for network operators when used with ROADM infrastructures.

BT also trialled XR optics where traffic flows were aggregated.

“These emerging technologies [ZR+ and XR optics] open up for the first time the ability to deploy a full IP-over-DWDM solution,” concluded Wright.


ECOC '22 Reflections - Part 3

Gazettabyte is asking industry and academic figures for their thoughts after attending ECOC 2022, held in Basel, Switzerland. In particular, what developments and trends they noted, what they learned, and what, if anything, surprised them. 

In Part 3, BT’s Professor Andrew Lord, Scintil Photonics’ Sylvie Menezo, Intel’s Scott Schube, and Quintessent’s Alan Liu share their thoughts.

Professor Andrew Lord, Senior Manager of Optical Networks Research, BT

There was strong attendance and a real buzz about this year’s show. It was great to meet face-to-face with fellow researchers and learn about the exciting innovations across the optical communications industry.

The clear standouts of the conference were photonic integrated circuits (PICs) and ZR+ optics.

PICs are an exciting technology, but they need a killer use case. There was a lot of progress and discussion on the topic, including an energetic Rump session hosted by Jose Pozo, CTO at Optica.

However, there is still an open question about what use cases will command volumes approaching 100,000 units, a critical milestone for mass adoption. PICs will be a key area to watch for me.

We’re also getting more clarity on the benefits of ZR+ for carriers, with transport through existing reconfigurable optical add-drop multiplexer (ROADM) infrastructures. Well done to the industry for getting to this point.

All in all, ECOC 2022 was a great success. As one of the Technical Programme Committee (TPC) chairs for ECOC 2023 in Glasgow, I am already building on the great show in Basel. I look forward to seeing everyone again in Glasgow next year.

Sylvie Menezo, CEO of Scintil Photonics

What developments and trends did I note at ECOC? There is a lot of development work on emergent hybrid modulators.

Scott Schube, Senior Director of Strategic Marketing and Business Development, Silicon Photonics Products Division at Intel.

There was not a huge number of disruptive announcements at the show. I expect the OFC 2023 event will have more, particularly around 200 gigabit-per-lane direct-detect optics.

Several optics vendors showed progress on 800 gigabit/ 2×400 gigabit optical transceiver development. There are now more vendors, more flavours and more components.

Generalising a bit, 800 gigabit seems to be one case where the optics are ready ahead of time, certainly ahead of the market volume ramp.

There may be common-sense lessons from this, such as the benefits of technology reuse, that the industry can take into discussions about the next generation of optics.

Alan Liu, CEO of Quintessent

Several talks focused on the need for high wavelength count dense wavelength division multiplexing (DWDM) optics in emerging use cases such as artificial intelligence/ machine learning interconnects.

Intel and Nvidia shared their vision for DWDM silicon photonics-based optical I/O. Chris Cole discussed the CW-WDM MSA on the show floor, looking past the current Ethernet roadmap at finer DWDM wavelength grids for such applications. Ayar Labs/Sivers had a DFB array DWDM light source demo, and we saw impressive research from Professor Keren Bergman’s group.

An ecosystem is coalescing around this area, with a healthy portfolio and pipeline of solutions being developed by multiple parties, including Quintessent.

The heterogeneous integration workshop was standing room only despite being the first session on a Sunday morning.

In particular, heterogeneously integrated silicon photonics at the foundry level was an emergent theme as we heard updates from Tower, Intel, imec, and X-Celeprint, among other great talks. DARPA has played – and plays – a key role in seeding the technology development and was also present to review such efforts.

Fibre-attach solutions are an area to watch, in particular for dense applications requiring a high number of fibres per chip. There is some interesting innovation in this area, such as from Teramount and Suss Micro-Optics among others.

Shortly after ECOC, Intel also showcased a pluggable fibre attach solution for co-packaged optics.

Reducing the fibre packaging challenge is another reason to employ higher wavelength count architectures and DWDM to reduce the number of fibres needed for a given aggregate bandwidth.


BT’s first quantum key distribution network

Professor Andrew Lord

The trial of a commercial quantum-secured metro network has started in London.

The BT network enables customers to send data securely between sites by first sending encryption keys over optical fibre using a technique known as quantum key distribution (QKD).

The attraction of QKD is that any attempt to eavesdrop and intercept the keys being sent is discernible at the receiver.

The network uses QKD equipment and key management software from Toshiba while the trial also involves EY, the professional services company.

EY is using BT’s network to connect two of its London sites and will showcase the merits of QKD to its customers.

London’s quantum network

BT has been trialling QKD for data security for several years. It had announced a QKD trial in Bristol in the UK that uses a point-to-point system linking two businesses.

BT and Toshiba announced last October that they were expanding their QKD work to create a metro network. This is the London network that is now being trialled with customers.

Building a quantum-secure network is a different proposition from creating point-to-point links.

“You can’t build a network with millions of separate point-to-point links,” says Professor Andrew Lord, BT’s head of optical network research. “At some point, you have to do some network efficiency otherwise you just can’t afford to build it.”

BT says quantum security may start with bespoke point-to-point links required by early customers but to scale a secure quantum network, a common pipe is needed to carry all of the traffic for customers using the service. BT’s commercial quantum network, which it claims is a world-first, does just that.

“We’ve got nodes in London, three of them, and we will have quantum services coming into them from different directions,” says Lord.

Not only do the physical resources need to be shared but there are management issues regarding the keys. “How does the key management share out those resources to where they’re needed; potentially even dynamically?” says Lord.

He describes the London metro network as QKD nodes with links between them.

One node connects Canary Wharf, London’s financial district. Another node is in the centre of London for mainstream businesses while the third node is in Slough to serve the data centre community.

“We’re looking at everything really,” says Lord. “But we’d love to engage the data centre side, the financial side – those two are really interesting to us.”

Customers’ requirements will also differ; one might want a quantum-protected Ethernet service while another may only want the network to provide them with keys.

“We have a kind of heterogeneous network that we’re starting to build here, where each customer is likely to be slightly different,” says Lord.

QKD and post-quantum algorithms

QKD uses physics principles to secure data, but cryptographic techniques based on clever maths are also being developed to make data secure, even against powerful future quantum computers.

Such quantum-resistant public-key cryptographic techniques are being evaluated and standardised by the US National Institute of Standards and Technology (NIST).

BT says it also plans to use such quantum-resistant techniques, which are part of its security roadmap.

“We need to look at both the NIST algorithms and the key QKD ones,” says Lord. “Both need to be developed and to be understood in a commercial environment.”

Lord points out that the encryption products that will come out of the NIST work are not yet available. BT also has plenty of fibre, he says, which can be used not just for data transmission but also for security.

He also points out that the maths-based techniques will likely become available as freeware. “You could, if you have the skills, implement them yourself completely freely,” says Lord. “So the guys that make crypto kits using these maths techniques, how do they make money?”

Also, can a user be sure that those protocols are secure? “How do you know that there isn’t a backdoor into those algorithms?” says Lord. “There’s always this niggling doubt.”

BT says the post-quantum techniques are valuable and their use does not preclude using QKD.

Satellite QKD

Satellites can also be used for QKD.

Indeed, BT has an agreement with UK start-up Arqit, which is developing satellite QKD technology, whereby BT has exclusive rights to distribute and market quantum keys in the UK and to UK multinationals.

BT says satellite and fibre will both play a role; the question is how much of each will be used.

“They work well together but the fibre is not going to go across oceans, it’s going to be very difficult to do that,” says Lord. “And satellite does that very well.”

However, satellite QKD will struggle to provide dense coverage.

“If you think of a low earth orbit satellite coming overhead, it’s only gonna be able to lock onto one ground station at a time, and then it’s gone somewhere else around the world,” says Lord. More satellites can be added but that is expensive.

He expects that a small number of satellite-based ground stations will be used to pick up keys at strategic points. Regional key distribution will then be used, based on fibre, with a reach of up to 100km.

“You can see a way in which the satellite and fibre solutions come together,” says Lord, the exact balance being determined by economics.

Hollow-core fibre

BT says hollow-core fibre is also attractive for QKD since the hollowness of the optical fibre’s core avoids unwanted interaction between data transmissions and the QKD.

With hollow-core, light carrying regular data doesn’t interact with the quantum light operating at a different wavelength whereas it does for standard fibre that has a solid glass core.

“The glass itself is a mechanism that gets any photons talking to each other and that’s not good,” says Lord. “Particularly, it causes Raman scattering, a nonlinear process in glass, where light, if it’s got enough power, creates a lot of different wavelengths.”

In experiments using standard fibre carrying classical and quantum data, BT has had to turn down the power of the data signal to avoid the Raman effect and ensure the quantum path works.

Classical data generate noise photons that get into the quantum channel and that can’t be avoided. Moreover, filtering doesn’t work because the photons can’t be distinguished. It means the resulting noise stops the QKD system from working.

In contrast, with hollow-core fibre, there is no Raman effect and the classical data signal’s power can be ramped to normal transmission levels.

Another often-cited benefit of hollow-core fibre is its low latency performance. But for QKD that is not an issue: the keys are distributed first and the encryption may happen seconds or even minutes later.

But hollow-core fibre doesn’t just offer low latency, it offers tightly controlled latency. With standard fibre, the latency ‘wiggles around’ a lot due to the fibre’s temperature and pressure. But with a hollow core, such jitter is 20x less, and this can be exploited when sending photons.

“As time goes on with the building of quantum networks, timing is going to become increasingly important because you want to know when your photons are due to arrive,” says Lord.

If a photon is expected, the detector can be opened just before its arrival. Detectors are sensitive and the longer they are open, the more likely they are to take in unwanted light.

“Once they’ve taken something in that’s rubbish, you have to reset them and start again,” he says. “And you have to tidy it all up before you can get ready for the next one. This is how these things work.”

The longer that detector can be kept closed, the better it performs when it is opened. It also means a higher key rate becomes possible.

“Ultimately, you’re going to need much better synchronisation and much better predictability in the fibre,” says Lord. “That’s another reason why I like hollow-core fibre for QKD.”

Quantum networks

“People focussed on just trying to build a QKD service miss the point; that’s not going to be enough in itself,” says Lord. “This is a much longer journey towards building quantum networks.”

BT sees building small-scale QKD networks as the first step towards something much bigger. And it is not just BT. There is the Innovate UK programme in the UK. There are also key European, US and Chinese initiatives.

“All of these big nation-states and continents are heading towards a kind of Stage I, building a QKD link or a QKD network but that will take them to bigger things such as building a quantum network where you are now distributing quantum things.”

This will also include connecting quantum computers.

Lord says different types of quantum computers are emerging and no one yet knows which one is going to win. He believes all will be employed for different kinds of use cases.

“In the future, there will be a broad range of geographically scattered quantum computing resources, as well as classical compute resources,” says Lord. “That is a future internet.”

To connect such quantum computers, quantum information will need to be exchanged between them.

Lord says BT is working with quantum computing experts in the UK to determine the capabilities of quantum computers and what they are good at solving. It is classifying quantum computing capabilities into different categories and matching them with problems BT has.

“In some cases, there’s a good match, in some cases, there isn’t,” says Lord. “So we try to extrapolate from that to say, well, what would our customers want to do with these and it’s a work in progress.”

Lord says it is still early days concerning quantum computing. But he expects quantum resources to sit alongside classical computing with quantum computers being used as required.

“Customers probably won’t use it for very long; maybe buying a few seconds on a quantum computer might be enough for them to run the algorithm that they need,” he says. In effect, quantum computing will eventually be another accelerator alongside classical computing.

“You already can buy time by the second on things like D-Wave Systems’ quantum computers, and you may think, well, how is that useful?” says Lord. “But you can do an awful lot in that time on a quantum computer.”

Lord already spends a third of his working week on quantum.

“It’s such a big growing subject, we need to invest time in it,” says Lord.


Building the data rate out of smaller baud rates

Professor Andrew Lord

In the second article addressing the challenges of increasing the symbol rate of coherent optical transport systems, Professor Andrew Lord, BT’s head of optical network research, argues that the time is fast approaching to consider alternative approaches.

Coherent discourse 2

Coherent optical transport systems have advanced considerably in the last decade to cope with the relentless growth of internet traffic.

One-hundred-gigabit wavelengths, long the networking standard, have been replaced by 400-gigabit ones while state-of-the-art networks now use 800 gigabits.

Increasing the data carried by a single wavelength requires advancing the coherent digital signal processor (DSP), electronics and optics.

It also requires faster symbol rates.

Moving from 32 to 64 to 96 gigabaud (GBd) has increased the capacity of coherent transceivers from 100 to 800 gigabits.
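A useful back-of-envelope relation underlies those steps; the figures below are approximate and ignore framing details:

```latex
% Net rate of a coherent transceiver, approximately:
R \;\approx\; 2 \times R_s \times b \times r
% polarisations x symbol rate x bits per symbol x FEC code rate.
% e.g. R_s = 64\,\mathrm{GBd},\ \text{16QAM } (b = 4),\ r \approx 0.8
% \;\Rightarrow\; R \approx 400\,\mathrm{Gb/s}
```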

Last year, Acacia, now part of Cisco, announced the first 1-terabit-plus wavelength coherent modem that uses a 128GBd symbol rate.

Other vendors will also be detailing their terabit coherent designs, perhaps as soon as the OFC show, to be held in San Diego in March.

The industry consensus is that 240GBd systems will be possible towards the end of this decade although all admit that achieving this target is a huge challenge.

Baud rate

Upping the baud rate delivers several benefits.

A higher baud rate increases the capacity of a single coherent transceiver while lowering the cost and power used to transport data. Simply put, operators get more bits for the buck by upgrading their coherent modems.

But some voices in the industry question the relentless pursuit of higher baud rates. One is Professor Andrew Lord, head of optical network research at BT.

“Higher baud rate isn’t necessarily a panacea,” says Lord. “There is probably a stopping point where there are other ways to crack this problem.”

Parallelism

Lord, who took part in a workshop at ECOC 2021 addressing whether 200+ GBd transmission systems are feasible, says he used his talk to get people to think about this continual thirst for higher and higher baud rates.

“I was asking the community, ‘Are you pushing this high baud rate because it is a competition to see who builds the biggest rate?’ because there are other ways of doing this,” says Lord.

One such approach is to adopt a parallel design, integrating two channels into a transceiver instead of pushing a single channel’s symbol rate.

“What is wrong with putting two lasers next to each other in my pluggable?” says Lord. “Why do I have to have one? Is that much cheaper?”

For an operator, what matters is the capacity rather than how that capacity is achieved.

Lord also argues that having a pluggable with two lasers gives an operator flexibility.

A single-laser transceiver can only go in one direction but with two, networking is possible. “The baud rate stops that, it’s just one laser so I can’t do any of that anymore,” says Lord.

The point is being reached, he says, where having two lasers, each at 100GBd, probably runs better than a single laser at 200GBd.

Excess capacity

Lord cites other issues arising from the use of ever-faster symbol rates.

What about links that don’t require the kind of capacity offered by very high baud rate transceivers?

If the link spans a short distance, it may be possible to use a higher modulation scheme such as 32-ary quadrature amplitude modulation (32-QAM) or even 64-QAM. With a 200GBd symbol rate transceiver, that equates to a 3.2-terabit transceiver. “Yet what if I only need 100 gigabits?” says Lord.

One option is to turn down the data rate using, say, probabilistic constellation shaping. But the high symbol rate would still require a 200GHz channel. Baud rate equals spectrum, says Lord, and that would waste the fibre’s valuable spectrum.
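The “baud rate equals spectrum” point can be made precise with the usual pulse-shaping relation; the roll-off value is assumed for illustration:

```latex
B \;\approx\; R_s\,(1+\rho), \qquad
R_s = 200\,\mathrm{GBd},\ \rho = 0.1 \;\Rightarrow\; B \approx 220\,\mathrm{GHz}
% Shaping the payload down to 100 Gb/s leaves the occupied spectrum unchanged.
```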

Another solution would be to insert a different transceiver but that causes sparing issues for the operators.

Alternatively, the baud rate could be turned down. “But would operators do that?” says Lord. “If I buy a device capable of 200GBd, wouldn’t I always operate it at its maximum or would I turn it down because I want to save spectrum in some places?”

Turning the baud rate down also requires the freed spectrum to be used and that is an optical network management challenge.

“If I have to think about defragmenting the network, I don’t think operators will be very keen to do that,” says Lord.

Pushing electronics

Lord raises another challenge: the coherent DSP’s analogue-to-digital and digital-to-analogue converters.

Operating at a 200+ GBd symbol rate means the analogue-to-digital converters at the coherent receiver must operate at no less than 200 giga-samples per second.
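As a back-of-the-envelope illustration, the converter speed scales directly with the symbol rate; the oversampling factors below are assumed, representative values rather than figures from the article.

```python
# Required converter speed = symbol rate x samples per symbol.
# The oversampling factors are assumptions for illustration.

def adc_rate_gsas(baud_gbd, samples_per_symbol):
    return baud_gbd * samples_per_symbol

print(adc_rate_gsas(200, 1.0))   # 200 GSa/s: the absolute floor
print(adc_rate_gsas(200, 1.25))  # 250 GSa/s: a more realistic design point
```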

“You have to start sampling incredibly fast and that sampling doesn’t work very well,” says Lord. “It’s just hard to make the electronics work together and there will be penalties.”

Lord cites research work at UCL that suggests that the limitations of the electronics – and the converters in particular – are not negligible. Just connecting two transponders over a short piece of fibre shows a penalty.

“There shouldn’t be any penalty but there will be, and the higher the baud rate, the bigger the back-to-back penalty, because the electronics are not perfect,” he says.

He suspects the penalty is of the order of 1 or 2dB. That is a penalty lost to the system margin of the link before the optical transmission even starts.

Such a loss is clearly unacceptable, especially when considering how hard engineers work to enhance algorithms for even a few tenths of a decibel of gain.

Lord expects that such compromised back-to-back performance will ultimately lead to the use of multiple adjacent carriers.

“Advertising the highest baud rate is obviously good for publicity and shows industry leadership,” he concludes. “But it does feel that we are approaching a limit for this, and then the way forward will be to build aggregate data rates out of smaller baud rates.”


BT bolsters research in quantum technologies

BT is increasing its investment in quantum technologies. “We have a whole team of people doing quantum and it is growing really fast,” says Andrew Lord, head of optical communications at BT.

The UK incumbent is working with companies such as Huawei, ADVA Optical Networking and ID Quantique on quantum cryptography, used for secure point-to-point communications. And in February, BT joined the Telecom Infra Project (TIP), and will work with Facebook and other TIP members at BT Labs in Adastral Park and at London’s Tech City. Quantum computing is one early project.

The topics of quantum computing and data security are linked. The advent of quantum computers promises to break the encryption schemes securing data today, while developments in quantum cryptography, coupled with advances in mathematics, promise new schemes resilient to the quantum-computer threat.

Securing data transmission

To create a secure link between locations, digital keys are used to scramble the data. Two common encryption schemes exist, one based on symmetric keys and the other on asymmetric keys.

A common asymmetric scheme is public-key cryptography, which uses a public and private key pair that are uniquely related. The public key is published along with its user’s name. Any party wanting to send data securely to the user looks up their public key and uses it to scramble the data. Only the user, who holds the associated private key, can unscramble the data. A widely used public-key cryptosystem is the RSA algorithm.


In contrast, symmetric schemes use the same key at both ends of the link to lock and unlock the data. A well-known symmetric-key algorithm is the Advanced Encryption Standard, which uses keys up to 256 bits long (AES-256); the more bits, the more secure the encryption.

The issue with a symmetric-key scheme, however, is getting the key to the recipient without it being compromised. One way is to deliver the secret key using a security guard handcuffed to a case. An approach more befitting the digital age is to send the secret key over a secure link, and here public-key cryptography can be used: an asymmetric key encrypts the symmetric key for transmission to the destination before secure communication begins.
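As an illustration of this hybrid pattern, here is a minimal Python sketch using the open-source cryptography package: an RSA (asymmetric) key pair protects the delivery of a freshly generated AES-256 (symmetric) session key, which then encrypts the bulk data. It sketches the mechanism only, and the RSA step is precisely what a future quantum computer would threaten.

```python
# Hybrid encryption sketch: RSA wraps an AES-256 session key.
# Requires the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient publishes the public key; the private key never leaves them.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a random 256-bit AES key and wrap it with the public key.
aes_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# Bulk data is encrypted symmetrically with the AES session key.
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"confidential traffic", None)

# Recipient: unwrap the AES key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"confidential traffic"
```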

But what worries governments, enterprises and the financial community is the advent of quantum computing and the risk it poses of cracking the public-key algorithms that are the predominant way data is secured. Quantum computers are not yet available, but government agencies and companies such as Intel, Microsoft and Google are investing in their development and are making progress.

Michele Mosca estimates that there is a 50 percent chance that a quantum computer will exist by 2030. Professor Mosca, co-founder of the Institute for Quantum Computing at the University of Waterloo, Canada and of the security firm, evolutionQ, has a background in cyber security and has researched quantum computing for 20 years.

This is a big deal, says BT’s Lord. “There are algorithms that can be run on quantum computers that can crack RSA,” he says. “Public key crypto has a big question mark over it in the future and anything using public key crypto now also has a question mark over it.”

A one-in-two chance by 2030 suggests companies have time to prepare, but that is not the case. Companies need to keep data confidential for a number of years, which means they must protect it against the threat of quantum computers at least that many years in advance: cyber-criminals could intercept and cache the data today, then wait for the advent of quantum computers to crack it.
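This timing argument is often condensed into what is known as Mosca’s inequality; the formulation below is a standard paraphrase, not wording from the article:

\[
x + y > z
\]

where \(x\) is the number of years the data must remain confidential, \(y\) is the time needed to migrate to quantum-safe encryption, and \(z\) is the time until a cryptographically relevant quantum computer exists. Whenever \(x + y\) exceeds \(z\), data harvested today will eventually be exposed.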

Upping the game

The need to have secure systems in place years in advance of quantum computer systems is leading security experts and researchers to pursue two approaches to data security. One uses maths while the other is based on quantum physics.

Maths promises new algorithms that are not vulnerable to quantum computing. These are known as post-quantum or quantum-resistant techniques. Several approaches are being researched, including lattice-based, coding-based and hash-function-based techniques, but these will take several years to develop. Such algorithms are deemed secure because they are based on sound maths believed resilient to algorithms run on quantum computers. Equally, though, part of that confidence rests on the fact that techniques to break them have not yet been widely investigated, by researchers and cyber-criminals alike.

The second approach, based on physics, uses quantum mechanics to distribute keys across an optical link in a way that is inherently secure.

“Do you pin your hopes on a physics theory [quantum mechanics] that has been around for 100 years or do you base it on maths?” says BT’s Lord. “Or do you do both?”

Quantum cryptography 

One way to create a secure link is to send the information encoded on photons, particles of light. Here, each photon carries a single bit of the key.

If an adversary steals a photon, it simply never arrives and, equally, the stolen bit is of no use to them, says Lord. A more sophisticated attack is to measure the photon as it passes, but here the eavesdropper runs up against a quantum mechanical effect: measuring a photon changes its state. The transmitter and receiver typically reserve, at random, a small number of the key’s photons to detect a potential eavesdropper; if the states the receiver measures differ from those sent, the discrepancy alerts them that the link has been compromised.
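A toy simulation can make this check concrete. The sketch below is a loose simplification in the spirit of protocols such as BB84; real systems compare measurement bases and run over photon-counting hardware, neither of which is modelled here, and the 25 per cent disturbance rate is a modelling assumption.

```python
# Toy model of the eavesdropper check: an intercepted photon is disturbed,
# so comparing a reserved sample of bits exposes the intruder.
import random

def detected_errors(n_bits=10_000, eavesdropper=False, sample_fraction=0.1):
    sent = [random.randint(0, 1) for _ in range(n_bits)]
    received = []
    for bit in sent:
        if eavesdropper and random.random() < 0.5:
            # Measuring in the wrong basis re-randomises the bit half the
            # time, flipping it in one case out of four overall.
            bit = random.randint(0, 1)
        received.append(bit)
    # Sender and receiver publicly compare a random sample of positions.
    sample = random.sample(range(n_bits), int(n_bits * sample_fraction))
    return sum(sent[i] != received[i] for i in sample)

print(detected_errors())                   # 0: a clean link shows no errors
print(detected_errors(eavesdropper=True))  # ~250: errors betray the intruder
```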

The issue with such quantum key distribution techniques is that the distance a single photon can be sent is limited to a few tens of kilometres. If longer links are needed, intermediate trusted sites are used to regenerate the key, and each of those sites must itself be kept secure.

Entanglement, whereby two photons are created such that they are linked even if they are physically in separate locations, is one way researchers are looking to extend the distance keys can be distributed. With such entangled photons, any change or measurement of one instantly affects the twin photon. “In the world of the very small, things are linked, even though they are not next to each other,” says Lord.

Entanglement could be used by quantum repeaters to extend the distance over which keys can be distributed, not least for satellites, says Lord: “A lot of work is going on into how to put quantum key distribution on orbiting satellites using entanglement.”

But quantum key distribution only solves a particular class of problem, such as protecting data sent across links: backing up data between a bank and a data centre, for example. The technique also depends on light and thus is not as widely applicable as post-quantum algorithms. “There is a view emerging in the industry that you throw both of these techniques [post-quantum algorithms and quantum key distribution] especially at data streams you want to keep secure,” says Lord.

Practicalities

BT, working with Toshiba and the optical transport equipment maker ADVA Optical Networking, has already demonstrated a quantum-protected link operating at 100 gigabits per second.

BT’s Lord says that while quantum cryptography has been a relatively dormant topic for the last decade, this is now changing. “There is lots of investment around the world and in the UK, with millions poured in by the government,” he says. BT is also encouraged that more companies are entering the market, including Huawei.

“What is missing is still a little bit more industrialisation,” says Lord. “Quantum physics is pretty sound but we still need to check that the way this is implemented, there are no ways of breaching it; to be honest we haven't really done that yet.”

BT says it has spent the last few months talking to financial institutions and claims there is much interest, especially with quantum computing getting much closer to commercialisation. “That is going to force people to make some decisions in the coming years,” says Lord. 


Books of the year 2016 - Part 1

Each year Gazettabyte asks industry figures to comment on books that they recommend. Here are recommendations from BT’s Andrew Lord and Cignal AI’s Andrew Schmitt to kick off this year’s reviews.

Andrew Lord, Head of Optical Research at BT.

Quantum technologies are flavour of the month, with huge government investments from around the world. The title and cover of Bananaworld: Quantum Mechanics for Primates by Jeffrey Bub suggest a book that will ‘unpeel’ a tough but increasingly important subject for general readers.

The book itself is, however, far deeper than its cover suggests, going way beyond the basics, and attempting to forge a link between quantum mechanics and the structure of information. 

Imagining a strange world in which bananas exhibit quantum effects might just confuse rather than aid the general reader, but those wishing to probe the deeper information theory questions will find much here to ‘chew on’.

Andrew Schmitt, founder of Cignal AI

Starting a company with a wide customer base requires a lot of ‘infrastructure’ that I didn’t realise would consume so much time. I like to build things so it has been a real thrill but also a lot of work. I think I gravitated towards fun things to read as a result of having my hands full. All of them were outstanding.

A Fire Upon The Deep by Vernor Vinge, and Seveneves by Neal Stephenson require a great deal of mental fortitude but unfold on such a grand scale that they are very appealing. Stephenson is a favourite of mine, ever since reading Snow Crash in college. He’s like William Gibson except with a sense of humour.

I also reread Fahrenheit 451 by Ray Bradbury. Let’s just say it felt a lot less sci-fi the second time around. If you look at the monoculture of ideas in politics, education, even business – it’s a dangerous situation. A big reason populism is emerging in the West is because people are sick of getting told what to think by “smart” people, and the perceived loss of control. It is a healthy rebellion despite a lot of the downside because the alternative – everyone thinking in lockstep – is far more dangerous.

I had greater ambitions for non-fiction and have several unread Kindle books on my iPhone. I wanted to read The Hard Thing About Hard Things by Ben Horowitz but have not. Other titles include The Comeback: How Larry Ellison's Team Won the America's Cup by G. Bruce Knecht and American Sniper: The Autobiography of the Most Lethal Sniper in U.S. Military History by Chris Kyle.

The book Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future by Ashlee Vance is a great read. There are a lot of haters out there who don’t like Tesla for various reasons – his government funding, climate-change skeptics who don’t like his views, and who knows what else? Fine by me. But after reading this book you have to acknowledge the massive, ridiculous undertaking of starting both a rocket company and an electric car company. It is insane. Yet this guy has managed to keep the wheels from coming off so far. He has burned through people, capital, and relationships but the results are impressive.

He may not be everyone's idea of a nice guy – whatever – but he is a walking, breathing, living image of the American ethos of invention and capitalism. Whatever money it costs the US government is more than offset by the example he sets for others that anything is possible provided you have enough time, money, and guts.

