ADCs key for high baud-rate coherent systems

Increasing the baud rate of coherent modems benefits optical transport. The higher the baud rate, the more data can be sent on a wavelength, reducing the cost-per-bit of traffic.
But engineers have become so good at designing coherent systems that they are now approaching the Shannon limit, leaving little headroom for further spectral-efficiency gains. That makes raising the baud rate the main lever left for growing wavelength capacity.
At the OFC show earlier this year, Ciena showcased a coherent module operating at 107 gigabaud (GBd). And last year, Acacia, now part of Cisco, announced its next-generation 1.2-terabit-per-second (Tbps) per-wavelength coherent module operating at up to 140GBd.
The industry believes that increasing the baud rate to 240+GBd is possible, but each new symbol-rate hike is challenging.
All the components in a modem – the coherent DSP and its digital-to-analogue (DAC) and analogue-to-digital (ADC) converters, the optics, and the analogue drive circuitry – must scale in lockstep.
Gigabaud and giga-samples
Coherent DSPs continue to improve in optical performance with each new CMOS process. The latest DSPs will use 5nm CMOS, while the semiconductor industry is developing 3nm CMOS and beyond.
Optical device performance is also scaling. For example, a 220GBd thin-film lithium niobate modulator has been demonstrated in the lab, while photodetectors are expected to achieve similar rates.
However, the biggest challenge facing coherent modem engineers is the analogue drive circuitry and the coherent DSP’s ADCs and DACs.
A key performance metric for these converters is the sampling rate, measured in giga-samples-per-second (Gsps).

According to the Nyquist criterion, a signal must be sampled at least twice as fast as its highest frequency component to be perfectly reconstructed. But that doesn’t mean sampling is always done at twice the baud rate. Instead, depending on the DSP implementation, the sampling rate is typically 1.2-1.6x the symbol rate.
“So, for a 200 gigabaud coherent modem, the DSP’s converters must operate at 240+ giga-samples per second,” says Tomislav Drenski, marketing manager, wireline, at Socionext Europe.
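As a back-of-the-envelope illustration of that arithmetic, the short Python sketch below uses oversampling factors taken from the 1.2-1.6x range quoted above to show how the required converter speed follows from the baud rate:

```python
# Sketch of the sampling-rate arithmetic: the converter must run at the
# symbol rate times an oversampling factor that depends on the DSP design.

def required_sampling_rate_gsps(baud_gbd: float, oversampling: float) -> float:
    """Converter sampling rate in Gsps for a given baud rate."""
    return baud_gbd * oversampling

for factor in (1.2, 1.4, 1.6):
    rate = required_sampling_rate_gsps(200, factor)
    print(f"200 GBd at {factor}x oversampling -> {rate:.0f} Gsps")

# Even the lowest factor, 1.2x, already demands 240 Gsps, matching the
# "240+ giga-samples per second" figure quoted above.
```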
Socionext
Socionext is a system-on-chip specialist founded in 2015 through the combination of the system LSI divisions of Fujitsu and Panasonic, with its headquarters in Japan. Its European arm focuses on mixed-signal design, especially ADCs, DACs and serialisers/deserialisers (serdes).
The company has developed 8-bit converters for several generations of long-haul optical designs, at 200Gbps, 400Gbps and greater than 1Tbps. These optical systems used ADCs and DACs operating at 65, 92 and 128Gsps, respectively.
Socionext works with leading coherent optical module and network system providers, and also provides 5G and wireless ASIC solutions.
“We design the ADCs and DACs, which are ultra-high-speed, state-of-the-art circuit blocks, while our partners have their ideas on how the DSP should look,” says Drenski. “They provide us the DSP block, and we integrate everything into one chip.”
It is not just the quality of the circuit block that matters but how the design is packaged, says Drenski: “If the crosstalk or the losses in the package are too high, then whatever you have got with the IP is lost in the packaging.”
Any package-induced loss or added capacitance decreases bandwidth. And bandwidth, like sampling rate, is key to achieving high baud-rate coherent systems.
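As a first-order illustration of why this matters, the sketch below models the input network as a simple RC low-pass filter; the 50-ohm termination and the femtofarad capacitance values are illustrative assumptions, not Socionext figures:

```python
import math

def f3db_ghz(r_ohms: float, c_farads: float) -> float:
    """First-order RC 3dB bandwidth: f = 1 / (2*pi*R*C), returned in GHz."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads) / 1e9

# A 50-ohm termination with increasing package parasitic capacitance:
for c_ff in (50, 100, 200):
    print(f"{c_ff} fF parasitic -> {f3db_ghz(50, c_ff * 1e-15):.1f} GHz bandwidth")

# Doubling the parasitic capacitance halves the available bandwidth.
```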
Design considerations
An important ADC metric is its resolution: the number of bits it uses to sample a signal. For high-performance coherent designs, 8-bit ADCs are used. However, because of the high sampling rate required and the associated jitter, the effective number of bits (ENOB) – another key ADC metric – drops to around 6 bits.
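The link between resolution and ENOB follows the standard converter formulas: an ideal N-bit converter has a quantisation-limited SNR of 6.02N + 1.76 dB, and ENOB is recovered from the measured signal-to-noise-and-distortion ratio (SINAD). A minimal sketch follows; the 38 dB SINAD figure is illustrative, chosen to land near the roughly 6-bit ENOB mentioned above:

```python
def ideal_snr_db(bits: int) -> float:
    """Quantisation-limited SNR of an ideal N-bit converter, in dB."""
    return 6.02 * bits + 1.76

def enob(sinad_db: float) -> float:
    """Effective number of bits from a measured SINAD, in dB."""
    return (sinad_db - 1.76) / 6.02

print(f"Ideal 8-bit SNR: {ideal_snr_db(8):.1f} dB")    # ~49.9 dB
print(f"ENOB at 38 dB SINAD: {enob(38.0):.1f} bits")   # ~6 bits
```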
“People are asking for 10-bit converters for newer generations of design; these are shorter reach, not ultra-long-haul,” says Drenski.
Extra bits add fidelity and enable the recovery of higher-order modulated signals. Still, for ultra-long-haul, where the optical loss is more significant, using a 10-bit ADC makes little sense.
For 5G and wireless applications, higher resolutions, up to 14 bits, are the recent trend. But such solutions use a lower sampling rate – 30Gsps – to enable the latest direct-RF applications.

ADC architecture
An interleaved architecture enables an 8-bit ADC to sample a signal 128 billion times a second.
At the input to the ADC sits a sample-and-hold circuit. This circuit feeds a hierarchy of interleaved ‘sub-ADCs’. The interleaving fans out from 1 to 4, then 4 to 16, and 16 to 64, with the outputs of the sub-ADCs multiplexed back together.
“You take the signal and sample-and-hold it, then push everything down to many sub-ADCs to have the necessary speed at the end, at the output,” says Drenski.
These sub-ADCs must be aligned, and that requires calibration.
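A toy model shows why. The sketch below interleaves 64 sub-ADCs in round-robin order and applies small per-channel gain, offset and timing-skew errors; all the mismatch values are hypothetical, but these are exactly the variations calibration must remove:

```python
import math
import random

M = 64        # number of interleaved sub-ADCs, as in the hierarchy above
FS = 128e9    # aggregate sampling rate: 128 Gsps
F_IN = 10e9   # input tone frequency (illustrative)

random.seed(0)
gain   = [1.0 + random.gauss(0, 0.01) for _ in range(M)]   # ~1% gain mismatch
offset = [random.gauss(0, 0.005) for _ in range(M)]        # small offset errors
skew   = [random.gauss(0, 0.05 / FS) for _ in range(M)]    # timing skew, seconds

def sample(n: int) -> float:
    """Sample n is taken by sub-ADC n % M, including that channel's errors."""
    ch = n % M
    t = n / FS + skew[ch]   # ideal sampling instant plus the channel's skew
    return gain[ch] * math.sin(2 * math.pi * F_IN * t) + offset[ch]

# Compare against an ideal, error-free converter:
err = [abs(sample(n) - math.sin(2 * math.pi * F_IN * n / FS))
       for n in range(4096)]
print(f"Worst-case interleaving error: {max(err):.4f} of a +/-1.0 signal")
```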
An ADC has three key metrics: sampling rate, bandwidth and ENOB. All three are interdependent.
For example, with a higher bandwidth come higher input frequencies, and clock jitter becomes a limiting factor for ENOB. Therefore, the number of sub-ADCs used must be well balanced and optimised to realise the high sampling frequencies needed without degrading ENOB. The challenge for the designer is keeping the gain, bias and timing variations between the channels to a minimum.
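The jitter trade-off can be put in numbers using the textbook relation SNR = -20·log10(2π·f·tj), where f is the input frequency and tj the rms clock jitter. In the sketch below, the 25-femtosecond jitter figure is an illustrative assumption:

```python
import math

def jitter_snr_db(f_in_hz: float, t_jitter_s: float) -> float:
    """Jitter-limited SNR: -20 * log10(2 * pi * f_in * t_jitter)."""
    return -20 * math.log10(2 * math.pi * f_in_hz * t_jitter_s)

def enob(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

# 25 fs rms clock jitter at rising input frequencies:
for f_ghz in (10, 25, 50):
    snr = jitter_snr_db(f_ghz * 1e9, 25e-15)
    print(f"{f_ghz} GHz input -> {snr:.1f} dB SNR, ENOB {enob(snr):.1f} bits")

# ENOB falls from ~9 bits at 10 GHz to ~6.7 bits at 50 GHz: higher
# bandwidth directly erodes resolution unless jitter improves too.
```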
Drenski says designing the ADC is more challenging than the DAC, but both share common challenges such as clock jitter and matching the path lengths of the interleaved sub-converters.
240 gigabaud coherent systems
Can ADC bandwidth scale to support 240+GBd operation?
“It all comes down to how much power you can spend,” says Drenski. “The more power you can spend to linearise, equalise, or optimise, the better.”
Noise is another factor. The amount of noise allowed determines how far the bandwidth can be increased. And with higher bandwidth, there is a need for higher clock speeds. “If we have higher clock speeds, we have higher complexity, so everything gets more complicated,” says Drenski.
The challenges don’t stop there.
Higher sampling rates mean the number of sub-ADCs must be increased, affecting circuit size and power consumption. And limiting the power consumption of the coherent DSP is a constant challenge.
At some point, the physical limitations of the process – the parasitics – limit bandwidth, independent of how the ADC circuitry is designed.
Coherent optics specialists like Acacia, Nokia, ADVA and Lumentum say that 220-240 gigabaud coherent systems are possible and will be achieved before the decade’s end.
Drenski agrees but stresses just how challenging this will be.
For him, such high baud rate coherent systems will only be possible if the electronics and optics are tightly co-integrated. Upping the bandwidth of each essential element of the coherent system, like the coherent DSP’s ADCs and DACs, is necessary but will not work alone.
What is needed is to bring the two worlds, the electronics and the optics, together.
FPGAs with 56-gigabit transceivers set for 2017
Xilinx demonstrated a 56-gigabit transceiver using 4-level pulse-amplitude modulation (PAM-4) at the recent OFC show. The 56-gigabit transceiver, also referred to as a serialiser-deserialiser (serdes), was shown working successfully over a backplane specified for 25-gigabit signalling only.
Xilinx's 56-gigabit serdes is implemented using a 16nm CMOS process node, but the first FPGAs featuring the design will be made using a 7nm process. Gilles Garcia says the choice of 7nm CMOS is solely a business decision and not a technical one.
"Optical module [makers] will take another year to make something decent using PAM-4," says Garcia, Xilinx's director of marketing and business development, wired communications. "Our 7nm FPGAs will follow very soon afterwards."
The company has yet to detail its next-generation FPGA family but says it will include an FPGA capable of supporting 1.6 terabits of Optical Transport Network (OTN) traffic using 56-gigabit serdes only. At first glance, that implies at least 29 PAM-4 transceivers on a chip, but OTN is a complex design that is logic- rather than I/O-limited, suggesting that the FPGA will feature more than the minimum number of 56-gigabit serdes.
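The lane arithmetic is simple division, as this minimal sketch shows:

```python
import math

def serdes_needed(capacity_gbps: float, lane_gbps: float) -> int:
    """Minimum number of serdes lanes to carry a given aggregate capacity."""
    return math.ceil(capacity_gbps / lane_gbps)

print(serdes_needed(1600, 56))   # -> 29 lanes of 56Gbps for 1.6Tbps
```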
Applications
Xilinx’s Virtex UltraScale and its latest UltraScale+ FPGA families feature 16-gigabit and 25-gigabit transceivers. Managing power consumption and maximising reach of the high-speed serdes are key challenges for its design engineers. Xilinx says it has 150 engineers for serdes design.
“Power is always a key challenge because as soon as you talk about 400-gigabit to 1-terabit per line card, you need to be cautious about the power your serdes will use,” says Garcia. He says the serdes need to adapt to the quality of the traces for backplane applications. Customers want serdes that will support 25 gigabit on existing 10-gigabit backplane equipment.
Xilinx describes its Virtex UltraScale as a 400-gigabit capable single-chip system supporting up to 104 serdes: 52 at 16 gigabit and 52 at 25 gigabit.
The UltraScale+ is rated as a 500-gigabit to 600-gigabit capable system, depending on the application. For example, the FPGA could support three 200-gigabit OTN wavelengths, says Garcia.
Xilinx says the UltraScale+ reduces power consumption by 35% to 50% compared to the same designs implemented on the UltraScale. The Virtex UltraScale+ devices also feature dedicated hardware to implement RS-FEC, freeing up programmable logic for other uses. RS-FEC is used for error correction with multi-mode fibre or copper interconnects, says Xilinx. Six UltraScale+ FPGAs are available, and the VU13P, not yet out, will feature up to 128 serdes, each capable of up to 32 gigabit.
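Assuming the hardened block implements the RS(528,514) code that IEEE 802.3 Clause 91 specifies for 100 Gigabit Ethernet (the article does not name the exact code), its overhead and correction strength work out as follows:

```python
# RS(528,514) over 10-bit symbols (IEEE 802.3 Clause 91) -- an assumption
# about which code the hardened RS-FEC block implements.
n, k = 528, 514
parity_syms = n - k               # 14 parity symbols per codeword
correctable = parity_syms // 2    # corrects up to 7 symbol errors
overhead_pct = (n / k - 1) * 100  # ~2.7% rate overhead

print(f"{parity_syms} parity symbols, corrects {correctable} symbol errors, "
      f"{overhead_pct:.1f}% overhead")
```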
The UltraScale and UltraScale+ FPGAs are being used in several telecom and datacom applications.
For telecom, 500-gigabit and 1-terabit OTN designs are an important market for the UltraScale FPGAs. Another use for the FPGA serdes is for backplane applications. “We don’t need retimers so customers can connect directly to the backplane at 25 gigabit, thereby saving space, power and cost,” says Garcia. Such backplane uses include OTN platforms and data centre interconnect systems.
The FPGA family’s 16-gigabit serdes are also being used in 10-gigabit PON and NG-PON2 systems. “When you have an 8-port or 16-port system, you need to have a dense serdes capability to drive the [PON optical line terminal’s] uplink,” says Garcia.
For data centre applications, the FPGAs are being employed in disaggregated storage systems that involve pooled storage devices. The result is many 16-gigabit and 25-gigabit streams accessing the storage, while the links to the data centre and its servers use 100-gigabit links. The FPGA serdes are used to translate between the two domains.
For its next-generation 7nm FPGAs with 56-gigabit transceivers, Xilinx is already seeing demand for several applications.
Data centre uses include server-to-top-of-rack links as the large Internet providers look to move from 25-gigabit to 50- and 100-gigabit links. Another application is connecting the adjacent buildings that make up a mega data centre, which can involve hundreds of 100-gigabit links. A third application is meeting the growing demands of disaggregated storage.
For telecom, the interest is in connecting directly to new optical modules over 50-gigabit lanes, without the need for gearbox ICs.
Optical FPGAs
Altera, now part of Intel, developed an optical FPGA demonstrator that used co-packaged VCSELs for off-chip optical links. Since then, Altera has announced its Stratix 10 FPGAs, which include connectivity tiles: transceiver logic co-packaged and linked with the FPGA using interposer technology.
Xilinx says it has studied the issue of optical I/O and that there is no technical reason why it can't be done. But integrating optics in an FPGA raises business questions, says Garcia: "Who is responsible for the yield? For the support?"
Garcia admits Xilinx could develop its own I/O designs using silicon photonics and then it would be responsible for the logic and the optics. “But this is not where we are seeing the business growing,” he says.

