An interview with John D'Ambrosia

The chairman of the Ethernet Alliance talks to Gazettabyte about the many ways Ethernet is evolving due to industry requirements.

"We are witnessing the evolution of Ethernet in ways that many of us never planned because there are markets that are demanding different things from it."

 

John D'Ambrosia describes the industry as feeling frenetic right now. "There is just so much stuff going on in terms of Ethernet," he says.

Besides the development of 400 Gigabit Ethernet (GbE), with specification work for the emerging standard well underway, new applications are creating requirements that the existing Ethernet specifications cannot meet. These include additional Ethernet speeds: the IEEE 802.3 Ethernet Working Group has created a Study Group to develop single-lane 25GbE for server interconnect.

One busy Ethernet activity involves 100 Gig mid-reach interfaces. Mid-reach covers distances from 500m to 2km. The interfaces are needed in the data centre to connect switches, such as in the leaf-spine switch architecture, and to connect switches to the data centre's edge router. The existing IEEE 802.3 Ethernet 100 Gig multi-mode standards - 100GBASE-SR4 and 100GBASE-SR10 - span only 100m (150m over OM4 fibre), too short for certain data centre applications.

"As we go faster, multimode's reach capabilities are coming down," says D'Ambrosia. "It has got to do with those pesky laws of physics." The next IEEE 802.3 100 Gig interface option, 100GBASE-LR4, has a 10km span, too much for many data centre applications. The 100GBASE-LR4 is also expensive, seven times the cost of the 100GBASE-SR4 interface, according to market research firm, LightCounting.

One of the reasons the IEEE 802.3 Ethernet Working Group created the 802.3bm Task Force was to develop an inexpensive 500m-reach specification. Four proposals resulted: parallel single mode (PSM4), coarse WDM (CWDM), pulse amplitude modulation and discrete multi-tone.  None were adopted since each failed to muster sufficient backing.  The optical industry then pursued a multi-source agreement (MSA) approach, and since January 2014, four single-mode mid-reach interfaces have emerged: the CLR4 Alliance, the CWDM4, the PSM4 and OpenOptics.

D'Ambrosia says the mid-reach optics debate first arose in 2007 when the IEEE 802.3ba group, developing 40 GbE and 100 GbE standards, discussed whether a 3-4km 100 Gig reach interface was required. "There was still enough people that needed 10km," says D'Ambrosia, and if 3-4km had been chosen then the 10km requirement would have been addressed with an even more complex 40km interface. "In hindsight, I'm not sure that was the right decision but it was the right decision at the time," says D'Ambrosia.

The PSM4 100 GbE mid-reach MSA uses four fibres in each direction, each fibre operating at 25 Gig, and targets a 500m reach. The other three mid-reach interfaces have a 2km reach and use 4x25 Gig wavelengths over duplex fibre, a single fibre in each direction.

The decision to use the ribbon-fibre PSM4 or one of the other three WDM-based schemes depends on the existing fibre plant used in a data centre, and the link distance required. The PSM4 module may prove less costly than the other three module types, but its ribbon fibre is more expensive than duplex fibre of a similar length; the longer the link, the larger the fibre's share of the overall link cost. "What someone really wants is the lowest cost solution for their application," says D'Ambrosia.
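The cost trade-off can be sketched with a toy model. Every price below is an invented placeholder purely to illustrate the crossover, not a LightCounting or vendor figure:

```python
# Toy link-cost model: two modules (one per end) plus the fibre run.
# All prices here are invented placeholders, not market data.

def link_cost(module_price, fibre_price_per_m, length_m):
    return 2 * module_price + fibre_price_per_m * length_m

# Assumption: the PSM4 module is cheaper, but its eight-fibre ribbon
# costs more per metre than the duplex fibre the WDM modules use.
def psm4(d):
    return link_cost(module_price=400, fibre_price_per_m=1.00, length_m=d)

def cwdm(d):
    return link_cost(module_price=600, fibre_price_per_m=0.15, length_m=d)

for d in (100, 300, 500):
    cheaper = "PSM4" if psm4(d) < cwdm(d) else "WDM"
    print(f"{d:>3}m: PSM4 ${psm4(d):.0f} vs WDM ${cwdm(d):.0f} -> {cheaper}")
```

With these made-up numbers the crossover sits near 470m; the point is only that fibre cost grows linearly with distance while module cost is fixed, so ribbon-fibre PSM4 loses its edge on longer links.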

The PSM4 has other, secondary uses that are part of its appeal. "With a breakout solution, even in copper, you can get to lower speeds," says D'Ambrosia. For example, a 40 GbE QSFP optical module using parallel fibre can be viewed as a 40 Gig interface or as a dense 4x10 Gig interface, with each fibre a 10 Gig interface. Such a 'breakout' solution is likely to be attractive earlier on, as applications transition to higher speeds. 

Does it serve the industry to have four mid-reach solutions? D'Ambrosia says opinion varies. "My own personal belief is that it would be better for the industry overall if we didn't have so many choices," he says. "But the reality is there are a lot of different applications out there."

 

 

25 Gigabit Ethernet

Work has also started on a 25 GbE standard. An IEEE 802.3 Study Group has been created to investigate copper-based and multi-mode server interconnects at 25 Gig. In July, Google, Microsoft, Arista, Mellanox and Broadcom announced the 25G Ethernet Consortium, which is also backing 25 GbE for server interconnect.

"There are a lot of people who are worried that 25 GbE will go everywhere; you just don't introduce a new rate of Ethernet," says D'Ambrosia. And as with 100 Gig mid-reach with its proliferation of MSAs, now there is a concern about a proliferation of Ethernet speeds, he says.

But if there is one thing that D'Ambrosia has learned in his years active in Ethernet standards, it is not to second-guess the market. "If there is a cool application out there that will help save money, the market will figure it out and it [the solution] will become popular."

For now, the IEEE 802.3 25G Study Group has chosen to focus on single lane server interconnects. "That is what the charter is," says D'Ambrosia. "But that doesn't mean 25 Gigabit Ethernet will end there; there is never a single rate project."

 

400 Gigabit Ethernet

D'Ambrosia, who also chairs the IEEE 802.3 400G Ethernet Task Force, highlights the latest developments of the next Ethernet speed increment. A multi-mode 400 GbE fibre objective is being worked on, as well as three single-mode fibre objectives.

The multi-mode solution will have a reach of 100m while the single mode options will span 500m, 2km and 10km. "For 500m, that is where everyone thinks parallel fibre can be used," says D'Ambrosia. At 10km, not surprisingly, it will be duplex fibre, while at 2km it is likely to be duplex simply because of the cost of long spans of parallel fibre.

In November, at the next Task Force meeting, proposals will be made as to how best to implement these differing requirements. For the multi-mode, talk is of a 16x25 Gig implementation. "I believe that is what we will see in the proposals in November," says D'Ambrosia. The Task Force is also looking at 50 Gig electrical interfaces for the longer 400 Gig reaches. Such an interface is likely to be ready by the time the 400G Task Force work is completed in 2017.

No one has suggested a 16x25 Gig single mode fibre optical interface, he says: "Do we do it as 50 Gig or 100 Gig?" Non-return-to-zero [NRZ], PAM4 and discrete multi-tone modulation schemes are all being considered. "For NRZ, we might see 8x50 Gig though that is not solidifying yet," he says. "For 500m there is talk of a x4 bundle and also pulse amplitude modulation for a single 100 Gig wavelength."
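The lane options being floated all factor 400 Gig differently. A small sanity check of the combinations mentioned, with the symbol rate each modulation implies (NRZ carries one bit per symbol, PAM4 two; the pairings below are illustrative, not Task Force decisions):

```python
# Candidate 400 GbE lane configurations, with the symbol rate each per-lane
# modulation implies (assumed bits per symbol: NRZ = 1, PAM4 = 2).
configs = [
    ("16 x 25G NRZ",  16,  25, 1),
    ("8 x 50G NRZ",    8,  50, 1),
    ("8 x 50G PAM4",   8,  50, 2),
    ("4 x 100G PAM4",  4, 100, 2),
]

for name, lanes, gbps, bits_per_symbol in configs:
    assert lanes * gbps == 400          # every option must sum to 400 Gig
    gbaud = gbps / bits_per_symbol
    print(f"{name:>14}: {lanes * gbps} Gb/s total, {gbaud:.0f} Gbaud per lane")
```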

The November meeting is the last one for new proposals and in January 2015 decisions will be made.   

The Ethernet Alliance is sponsoring an industry event, "The Rate Debate", at TEF 2014 in Santa Clara, CA, on October 16th. The event will look at whether 40 Gig or 50 Gig Ethernet makes more sense, and the likely evolution: if 50 Gig is adopted, will 100 GbE based on four channels evolve to 200 Gigabit? There is also interest in extending Category 5 cabling from 1 Gig to 2.5 Gig, and even 5 Gig, to prolong the useful life of campus cabling, and that will also be addressed. More recently, there have been two Calls-For-Interest - for a Next Generation Enterprise Access BASE-T PHY and for 25GBASE-T - and these will also likely be discussed.

Ethernet speeds used to evolve by a factor of 10, then by a factor of 4, and now 2.5. In future, with 50 Gig, the factor might be two. "With 40 Gig and 50 Gig, which one will dominate?" says D'Ambrosia. "But they are so close, why can't we come up with a solution that shares technology at both [speeds]?" These are just some of the issues to be discussed at the event.

"We are witnessing the evolution of Ethernet in ways that many of us never planned because there are markets that are demanding different things from it," says D'Ambrosia.  


OIF demonstrates its 25 Gig interfaces are ready for use

Eleven companies have been participating in nine demonstrations at the European Conference and Exhibition on Optical Communication (ECOC 2013), held in London this week.

The Optical Internetworking Forum (OIF) has demonstrated its specified 25 and 28 Gigabit-per-second (Gbps) electrical interfaces working across various vendors' 100 Gigabit modules and ICs.

"The infrastructure over the backplane is maturing to the point of 25 Gig; you don't need special optical backplanes" John Monson, Mosys

"The ecosystem is maturing," says John Monson, vice president of marketing at Mosys, one of the 11 firms participating in the demonstrations. "The demos are not just showing the electrical OIF interfaces but their functioning between multiple vendors, with optical standards running across them at 100 Gig."

The demonstrations - using the CFP2, QSFP and CPAK optical modules and the 28Gbps CEI-28G-VSR module-to-chip electrical interface - set the stage for higher density 400 and 800 Gigabit line cards, says Monson. The CEI-28G-VSR is specified for up to 10dB of signal loss, equating to some 4 to 6 inches of trace on a high-quality material printed circuit board.

Higher density system backplanes are also ready using the OIF's CEI-25G-LR interface. "Until I get backplanes capable of high rates, there are just too many pins at 10 Gig to support 800 Gig and Terabit [line card] solutions," says Monson.

The ECOC demonstrations include two 100Gbps modules linked over fibre. "You have two CFP2 modules, from different vendors, running at 4x28Gbps OTN [Optical Transport Network] rates over 10km," says Monson.

On the host side, the CEI-28G-VSR interface sits between a retimer inside the CFP2 module and a gearbox chip that translates between the 25Gbps lanes and the 10Gbps lanes that link to a framer or MAC IC on the line card.

The demonstrations cover different vendors' gearbox ICs talking to different optical module makers' CFP2s as well as Cisco's CPAK. "We are mixing and matching quite a bit in these demos," says Monson.

 

The OIF has already started work for the next-generation electrical interfaces that follow the 25 and 28 Gigabit ones


There is also a demo of a QSFP+ module driving active copper cable and one involving two 100 Gigabit SR10 modules and a gearbox IC. Three further demos involve the CEI-25G-LR backplane interface. Lastly, there is a demo involving the thermal modelling of a line card hosting eight slots of the CDFP 400Gbps optical module MSA.

The OIF's CEI-25G-LR is specified for up to 25dB of loss. The IEEE P802.3bj 100Gbps Backplane and Copper Cable Task Force is specifying an enhanced backplane electrical interface that supports 35dB of loss using techniques such as forward error correction.
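A loss budget translates into reach once a per-inch trace loss is assumed. The dB-per-inch figures below are illustrative assumptions, not values from the OIF or IEEE specifications:

```python
# Reach implied by a channel loss budget, given an assumed PCB trace loss
# (dB per inch at the signalling rate's Nyquist frequency).

def reach_inches(budget_db, loss_db_per_inch):
    return budget_db / loss_db_per_inch

# ~2 dB/inch is a plausible guess for a high-quality laminate at ~14 GHz.
vsr = reach_inches(budget_db=10, loss_db_per_inch=2.0)  # CEI-28G-VSR, chip-to-module
lr = reach_inches(budget_db=25, loss_db_per_inch=2.0)   # CEI-25G-LR, backplane budget

print(f"VSR budget supports ~{vsr:.0f} inches of trace; LR ~{lr:.0f} inches")
```

At roughly 2dB/inch the 10dB VSR budget lands in the 4-to-6-inch range quoted earlier; the 25dB LR budget buys the much longer runs, plus connectors, that a backplane needs.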

"What the demos say is that the electrical interfaces, at 25 Gig, can be used not just for a 4-6 inch trace, but also high-density backplanes," says Monson. As a result line card density will increase using the smaller form factor 100Gbps optical modules. It also sets the stage for 400 Gig individual optics, says Monson: "The infrastructure over the backplane is maturing to the point of 25 Gig; you don't need special optical backplanes."

Meanwhile, standards work for 400 Gigabit Ethernet is still at an early stage, but proposals for 56Gbps links have been submitted for consideration. "Such a rate would double capacity and reduce the number of pins required on the ASSPs and ASICs," says Monson.

As to how the electrical interface for 400 Gigabit Ethernet will be implemented, it could be 16x25Gbps or 8x50Gbps lanes and will also be influenced by the chosen optical implementation. The OIF has already started work for the next-generation electrical interfaces that follow the 25 and 28 Gigabit ones.

The 11 companies and the two test-and-measurement firms taking part, as well as the demonstrations, are detailed in an OIF white paper.


The uphill battle to keep pace with bandwidth demand

Chart: Relative traffic increase, normalised to 2010. Source: IEEE

Optical component and system vendors will be increasingly challenged to meet the expected growth in bandwidth demand.

According to a recent comprehensive study by the IEEE (The IEEE 802.3 Industry Connections Ethernet Bandwidth Assessment report), bandwidth requirements are set to grow 10x by 2015 compared to demand in 2010, and a further 10x between 2015 and 2020. Meanwhile, the technical challenges are growing for the vendors developing optical transmission equipment and short-reach high-speed optical interfaces. 

Fibre bandwidth is becoming a scarce commodity and various techniques will be required to scale capacity in metro and long-haul networks. The IEEE is not expected to complete the next higher-speed Ethernet standard, to follow 100 Gigabit Ethernet (GbE), before 2017. The IEEE is talking only about capacities, not interface speeds; yet, even at this early stage, 400 Gigabit Ethernet looks the most likely interface.

 

"The various end-user markets need technology that scales with their bandwidth demands and does so economically. The fact that vendors must work harder to keep scaling bandwidth is not what they want to hear"

 

A 400GbE interface will comprise multiple parallel lanes, requiring the use of optical integration. It may also embrace advanced modulation techniques, further adding to its size, complexity and cost. And to achieve a Terabit, three such interfaces will be needed.

All these factors are conspiring against what the various end-user bandwidth sectors require: line-side and client-side interfaces that scale economically with bandwidth demand. Instead, optical components, optical module and systems suppliers will have to invest heavily to develop more complex solutions in the hope of matching the relentless bandwidth demand.

The IEEE 802.3 Bandwidth Assessment Ad Hoc group, which produced the report that highlights the hundredfold growth in bandwidth demand between 2010 and 2020, studied several sectors besides core networking and data centre equipment such as servers. These include Internet exchanges, high-performance computing, cable operators (MSOs) and the scientific community. 

The differing growth rates in bandwidth demand it found for the various sectors are shown in the chart above.

 

Optical transport

A key challenge for optical transport is that fibre spectrum is becoming a precious commodity. Scaling capacity will require much more efficient use of spectrum.

To this aim, vendors are embracing advanced modulation schemes, signal processing and complex ASIC designs. The use of such technologies also raises new challenges such as moving away from a rigid spectrum grid, requiring the introduction of flexible-grid switching elements within the network. 

And it does not stop there. 

Already considerable development work is underway to use multi-carriers - super-channels - whose carrier count can be adapted on-the-fly depending on demand, and which can be crammed together to save spectrum. This requires advanced waveform shaping based on either coherent orthogonal frequency division multiplexing (OFDM) or Nyquist WDM, adding further complexity to the ASIC design.

At present, a single light path can be increased from 100 Gigabit-per-second (Gbps) to 200Gbps using the 16-QAM (quadrature amplitude modulation) scheme. Two such light paths give a 400Gbps data rate. But 400Gbps requires more spectrum than the standard 50GHz band used for 100Gbps transmission. And using 16-QAM reduces the overall optical transmission reach achieved.

The shorter resulting reach using 16-QAM or 64-QAM may be sufficient for metro networks (~1000km) but achieving long-haul and ultra-long-haul spans will require super-channels based on multiple dual-polarisation, quadrature phase-shift keying (DP-QPSK) modulated carriers, each occupying 50GHz. Building up a 400Gbps or 1 Terabit signal this way uses 4 or 10 such carriers, respectively - a lot of spectrum. Some 8Tbps to 8.8Tbps of long-haul capacity results using this approach.

The main 100Gbps system vendors have demonstrated 400Gbps using 16-QAM and two carriers. This doubles system capacity to 16-17.6Tbps. A further 30% saving in bandwidth using spectral shaping at the transmitter crams the carriers closer together, raising the capacity to some 23Tbps. The eventual adoption of coherent OFDM or Nyquist WDM will further boost overall fibre capacity across the C-band. But the overall tradeoff of capacity versus reach still remains. 
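These capacity figures follow from simple spectrum arithmetic. The sketch below assumes 4.0 to 4.4 THz of usable C-band spectrum, a commonly quoted range, on a 50GHz grid:

```python
# C-band capacity under three scenarios, assuming 4.0-4.4 THz of usable
# spectrum (an assumed range) divided into 50 GHz slots.

def cband_tbps(band_ghz, per_slot_gbps, shaping_saving=0.0):
    channels = band_ghz // 50                       # number of 50 GHz slots
    return channels * per_slot_gbps / (1 - shaping_saving) / 1000

for band in (4000, 4400):
    qpsk = cband_tbps(band, 100)           # 100G DP-QPSK per slot
    qam16 = cband_tbps(band, 200)          # 16-QAM doubles the per-slot rate
    shaped = cband_tbps(band, 200, 0.30)   # ~30% spectral-shaping saving
    print(f"{band / 1000:.1f} THz: {qpsk:.1f} / {qam16:.1f} / {shaped:.1f} Tb/s")
```

The lower end of each range reproduces the article's figures: 8Tbps with DP-QPSK, 16Tbps with 16-QAM, and roughly 23Tbps once spectral shaping crams the carriers 30% closer together.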

Optical transport thus has a set of techniques to improve the amount of traffic it can carry. But it is not at a pace that matches the relentless exponential growth in bandwidth demand.

After spectral shaping, even more complex solutions will be needed. These include extending transmission beyond the C-band, and developing exotic fibres. But these are developments for the next decade or two and will require considerable investment. 

The various end-user markets need technology that scales with their bandwidth demands and does so economically. The fact that vendors must work harder to keep scaling bandwidth is not what they want to hear.

 

"No-one is talking about a potential bandwidth crunch but if it is to be avoided, greater investment in the key technologies will be needed. This will raise its own industry challenges. But nothing like those to be expected if the gap between bandwidth demand and available solutions grows"

 

Higher-speed Ethernet 

The IEEE's Bandwidth Assessment study lays the groundwork for the development of the next higher-speed Ethernet standard.

Since the standards work has not yet started, the IEEE stresses that it is premature to discuss interface speeds. But based on the state of the industry, 400GbE already looks the most likely next speed hike after 100GbE. If 400GbE is adopted, several approaches could be pursued:

  • 16 lanes at 25Gbps: 100GbE is moving to a 4x25Gbps electrical interface and 400GbE could exploit such technology for a 16-lane solution, made up of four 4x25Gbps interfaces. "If I was a betting man, I'd probably put better odds on that [25Gbps lanes] because it is in the realm of what everyone is developing," John D'Ambrosia, chair of the IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group and chair of the IEEE 802.3 Bandwidth Assessment Ad Hoc group, told Gazettabyte. 
  • 10 lanes at 40Gbps: The Optical Internetworking Forum (OIF) has started work on an electrical interface operating between 39 and 56Gbps (Common Electrical Interface - 56G-Close Proximity Reach). This could lead to 40Gbps lanes and a 10x40Gbps implementation for a 400Gbps Ethernet design. 
  • Modulation: For the 100Gbps backplane initiative, the IEEE is working on pulse-amplitude modulation (PAM), says D'Ambrosia. Such modulation could be used for 400GbE. Modulation is also being considered by the IEEE to create a single-lane 100Gbps interface. Such a solution could lead to a 4-lane 400GbE solution. But adopting modulation comes at a cost: more sophisticated electronics, greater size and power consumption. 

 

As with any emerging standard, first designs will be large, power-hungry and expensive. The industry will have to work hard to produce more integrated 16-lane or 10-lane designs. Size and cost will also be important given that three 400GbE modules will be needed to implement a Terabit interface.

The challenge for component and module vendors is to develop such multi-lane designs yet do so economically. This will require design ingenuity and optical integration expertise.

 

Timescales

Super-channels exist now - Infinera is shipping its 5x100Gbps photonic integrated circuit. Ciena and Alcatel-Lucent are introducing their latest generation DSP-ASICs that promise 400Gbps signals and spectral shaping while other vendors have demonstrated such capabilities in the lab.

The next Ethernet standard is set for completion in 2017. If it is indeed based on a 400GbE Ethernet interface, it will likely use 4x25Gbps components for the first design, benefiting from emerging 100GbE CFP2 and CFP4 modules and their more integrated designs.  But given the standard will only be completed in five years' time, new developments should also be expected.

No-one is talking about a potential bandwidth crunch but if it is to be avoided, greater investment in the key technologies will be needed. This will raise its own industry challenges. But nothing like those to be expected if the gap between bandwidth demand and available solutions grows.


The next high-speed Ethernet standard starts to take shape

Source: Gazettabyte

The IEEE has begun work to develop the next-speed Ethernet standard beyond 100 Gigabit to address significant predicted growth in bandwidth demand. 

The standards body has set up the IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group, chaired by John D’Ambrosia, who previously chaired the 40 and 100 Gigabit IEEE P802.3ba Ethernet standards ratified in June 2010. "I guess I’m a glutton for punishment,” quips D'Ambrosia. 

The Higher Speed Ethernet standard could be completed by early 2017. 

The group has been set up after an extensive one-year study by the IEEE 802.3 Bandwidth Assessment Ad Hoc group investigating networking capacity growth trends in various markets. The study looked beyond core networking and data centres - the focus of the 40 and 100 Gigabit Ethernet (GbE) study work - to include high-performance computing, financial markets, Internet exchanges and the scientific community. 

One of the resulting report's conclusions (IEEE 802.3 Industry Connections Ethernet Bandwidth Assessment report) is that Terabit capacity will likely be required by 2015, growing a further tenfold by 2020. 

“By 2015 core networks on average will need ten times the bandwidth of 2010, and one hundred times [the bandwidth] by 2020,” says D’Ambrosia, who is also the chair of the IEEE 802.3 Ethernet Bandwidth Assessment Ad Hoc group, as well as chief Ethernet evangelist, CTO office at Dell. “If you look at Ethernet in 2010, it was at 100 Gigabit, so ten times 100 Gigabit in 2015 is a Terabit and a hundred times 2010 is 10 Terabit by 2020.” 
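The arithmetic is simple, but the sustained growth rate it implies is worth making explicit (a sketch of the study's headline numbers):

```python
# The Bandwidth Assessment headline: 10x growth 2010->2015, 100x by 2020,
# applied to Ethernet's 2010 flagship rate of 100 Gb/s.
base_2010_gbps = 100
by_2015 = base_2010_gbps * 10     # 1 Tb/s
by_2020 = base_2010_gbps * 100    # 10 Tb/s

# A tenfold rise every five years is a ~58% compound annual growth rate.
cagr = 10 ** (1 / 5) - 1
print(f"2015: {by_2015 // 1000} Tb/s, 2020: {by_2020 // 1000} Tb/s, "
      f"implied CAGR: {cagr:.0%} per year")
```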

 

"We have got to the point where the pesky laws of physics are challenging us"

John D'Ambrosia, chair of the IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group

 

D'Ambrosia stresses that the Ad Hoc group's role is to talk about capacity requirements, not interface speeds. The technical details of any interface implementation will only become clear once the standardisation effort is well under way. 

A second Ethernet Bandwidth Assessment study finding is that network aggregation nodes are growing faster, and hence require greater capacity earlier, than the network's end points. 

"There is also a growing deviation between the big guys and the rest of the market," says D'Ambrosia. He has heard individuals from the largest internet content providers say they need Terabit connections by 2013, while others claim it will be 2020 before a mass market develops for such an interconnect. 

D'Ambrosia says the main findings are not necessarily surprising but there were two 'aha' moments during the study. 

One was that the core networking growth rates predicted in 2007 by the 40 and 100 Gig High-speed Study Group are still valid five years on.

The other concerned the New York Stock Exchange that had forecast that it would need to install four 100Gbps links in its data centre yet ended up using 13. "If there is any company that has a lot of money on the line and would have the best chance of nailing down their needs, I would put the New York Stock Exchange up there," says D'Ambrosia. "That tells you something about bandwidth growth and that you can still underestimate what is going to happen."

 

"The reality is that I can't give you any solutions right now that are attractive to do a Terabit"

 

What next

The IEEE standardisation work for the next speed Ethernet has not started but the completed Ethernet Bandwidth Assessment study will likely form an important input for the Industry Connections Higher Speed Ethernet Consensus group. 

The start of the standardisation work is expected in either March or July 2013 with the Study Group phase then taking a further eight months. This compares to 18 months for the IEEE 40GbE and 100GbE Study Group work (see chart above). The Task Force's work - writing the specification - is then expected to take a further two and a half years, completing the standard in early 2017 if all goes to plan.

 

Technology options

While stressing that the IEEE is talking about capacities and not yet interface speeds, Terabit capacity could be solved using multiple 400 Gigabit Ethernet interfaces, says D'Ambrosia.

At present there is no 400GbE project underway. However, the industry does believe that 400GbE is "doable" economically and technically. "Much of the supply base, when we are talking about Ethernet, is looking at 400 Gigabit," says D'Ambrosia.

Achieving a 1TbE interface looks much more distant. "People pushing for 1 Terabit tend to be the people looking at it from the bandwidth perspective and then looking at upgrading their networks and making multiple investments," he says. "But the reality is that I can't give you any solutions right now that are attractive to do a Terabit."

All agree that the technical challenges facing the industry to meet growing bandwidth demands are starting to mount. "We have got to the point where the pesky laws of physics are challenging us," says D'Ambrosia.

 

Further reading:

IEEE 802.3 Industry Connections Higher Speed Ethernet Ad Hoc


Xilinx's 400 Gigabit Ethernet FPGA

Xilinx has detailed the latest member of its 28nm CMOS Virtex-7 FPGA family, which will support 400 Gigabit Ethernet on a single device. The Virtex-7HT completes the Virtex-7 line-up, joining the Virtex-7T and Virtex-7XT product families announced in June.

 

A single FPGA will support 400 Gigabit Ethernet duplex traffic. The FPGA can also support 4x100Gig MACs and 4x150Gbps Interlaken interfaces. Source: Xilinx

Why is it important?

Xilinx says its switch and router customers are more than doubling the traffic capacity of their platforms every three years. “They are looking for silicon that will support a doubling of capacity within the same form-factor and the same power budget,” says Giles Peckham, EMEA marketing director at Xilinx.  

An FPGA has an advantage when compared to an application-specific standard product (ASSP) chip or an ASIC: being programmable and a volume-manufactured device, it is easier for an FPGA design to contend with changes in standards and the escalating cost of implementing chip designs in ever-finer CMOS geometries.

The Virtex-7HT will support 28 Gigabit-per-second (Gbps) transceivers (serialiser/deserialiser, or serdes). Four channels combined can implement a 100Gbps interface. Indeed, the largest member of the Virtex-7HT family - the XC7VH870T - will have 16 x 28.05Gbps transceivers, enabling 4x100Gbps or even a 400 Gigabit Ethernet interface.

The 28Gbps transceivers will be used to interface to optical modules such as the emerging CFP2 pluggable form-factor. The CFP2 multi-source agreement  is expected to be ratified in the second half of 2011 and start shipping in the second half of 2012, says Xilinx.

 

“Network processors and ASICs are typically a [CMOS] process node or two behind us"

Giles Peckham, Xilinx

 

 

 

And with an additional 72 13.1Gbps transceivers on-chip, the XC7VH870T will have sufficient input-output (I/O) to support bi-directional 400 Gigabit Ethernet traffic. The FPGA's lower-speed 13.1Gbps serdes are included to interface to network processors (NPUs) or ASICs that only support the lower-speed transceivers. “Network processors and ASICs are typically a [CMOS] process node or two behind us – partly because of cost - such that they end up at a technology disadvantage, as in transceiver speed,” says Peckham.

The additional 13.1Gbps transceivers - only 40 of the 72 transceivers are needed for the 400 Gigabit Ethernet port – will enable the FPGA to interface to other chips.

Xilinx says it will be at least a year, and possibly 18 months, before samples of the Virtex-7HT FPGA family become available. But it is making the Virtex-7HT announcement now because it has successfully tested the 28Gbps transceiver design.

 

Front panel evolution from 48 SFP+ to 4 CFPs to 8 CFP2s. Source: Xilinx

 

What has been done

There are three devices in the Virtex-7HT family, with 4, 8 and 16 28Gbps transceivers respectively. Xilinx claims this is four times the transceiver count of any competing 28nm FPGA detailed to date. But Peckham admits that additional announcements from competitors are inevitable before the Virtex-7HT devices become available in 2012.

In September Altera announced that it had successfully demonstrated a 25Gbps transceiver test chip. And in November, Intel and Achronix Semiconductor formed a strategic relationship that will allow the FPGA start-up to use Intel's leading-edge 22nm CMOS manufacturing process.

The three Virtex-7HT FPGAs also come with different amounts of programmable logic cells, memory blocks and Xilinx’s XtremeDSP building blocks tailored for digital signal processing.

Xilinx says meeting the CEI-28G electrical interface jitter specification has proved challenging.  At 10 Gigabit the signal period is 100 picoseconds (ps) and the jitter allowance is 35ps, while the signal period at 28Gbps is 35ps. “When you realise the jitter spec on the 10 Gigabit interface is the same as the full period in the 28 Gigabit spec – 35 picoseconds – there is quite a lot of work to be done in reducing the jitter when migrating to 28 Gigabit,” says Peckham.
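Peckham's comparison comes straight from the unit interval (one bit period) at each rate. A quick check, using the 28.05Gbaud rate quoted for the XC7VH870T's transceivers:

```python
# Unit interval (one bit period) at the two line rates being compared.

def unit_interval_ps(gbaud):
    return 1e12 / (gbaud * 1e9)   # picoseconds per symbol

ui_10g = unit_interval_ps(10.0)     # 100 ps, as quoted
ui_28g = unit_interval_ps(28.05)    # ~35.7 ps

# The 35 ps jitter allowance at 10 Gig is an entire bit period at 28 Gig.
print(f"10G UI: {ui_10g:.0f} ps; 28G UI: {ui_28g:.1f} ps")
```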

Xilinx uses pre-emphasis techniques on the signals before they are transmitted across the printed circuit board to reduce loss. In addition, the FPGA maker has enhanced the noise isolation between the FPGA's digital and analogue CMOS circuitry. “The short spiky current loads in the digital circuitry can impact the noise in the analogue circuitry and increase the jitter,” says Peckham.

 

What next?

Xilinx has created a 28Gbps transceiver test vehicle, allowing it to validate and fine-tune the design. The rest of the FPGA design needs to be completed, and another design iteration of the 28Gbps test vehicle is likely. “We have a lot of things to do yet,” says Peckham.

Meanwhile system vendors can start to design their systems based on the FPGA family in advance of samples that are expected in the first half of 2012.

  • A video demonstration of the 28Gbps test vehicle is available.
