Next-gen 100 Gigabit optics
Briefing: 100 Gigabit
Part 2: Interview
Gazettabyte spoke to John D'Ambrosia about 100 Gigabit technology
John D'Ambrosia, chair of the IEEE 100 Gig backplane and copper cabling task force
John D'Ambrosia laughs when he says he is the 'father of 100 Gig'.
He spent five years as chair of the IEEE 802.3ba group that created the 40 and 100 Gigabit Ethernet (GbE) standards. Now he is the chair of the IEEE task force looking at 100 Gig backplane and copper cabling. D'Ambrosia is also chair of the Ethernet Alliance and chief Ethernet evangelist in the CTO office of Dell's Force10 Networks.
“People are also starting to talk about moving data operations around the network based on where electricity is cheapest”
"Part of the reason why 100 Gig backplane technology is important is that I don't know anybody that wants a single 100 Gig port off whatever their card is," says D'Ambrosia. "Whether it is a router, line card, whatever you want to call it, they want multiple 100 Gig [interfaces]: 2, 4, 8 - as many as they can."
Earlier this year, there was a call for interest for next-generation 100 Gig optical interfaces, with the goal of reducing the cost and power consumption of 100 Gig interfaces while increasing their port density. "This [next-generation 100 Gig optical interfaces] is going to become very interesting in relation to what is going on in the industry," he says.
Next-gen 100 Gig
The 10x10 MSA is an industry initiative that is an alternative 100 Gig interface to the IEEE 100 Gigabit Ethernet standards. Members of the 10x10 MSA include Google, Brocade, JDSU, NeoPhotonics (Santur), Enablence, CyOptics, AFOP, MRV, Oplink and Hitachi Cable America.
"Unfortunately, that [10x10 MSA] looks like it could cause potential interop issues,” says D'Ambrosia. That is because the 10x10 MSA has a 10-channel 10 Gigabit-per-second (Gbps) optical interface while the IEEE 100GbE use a 4x25Gbps optical interface.
The 10x10 interface has a 2km reach and the MSA has since added a 10km variant as well as 4x10x10Gbps and 8x10x10Gbps versions over 40km.
The advent of the 10x10 MSA has led to an industry discussion about shorter-reach IEEE interfaces. "Do we need something below 10km?” says D’Ambrosia.
Reach is always a contentious issue, he says. When the IEEE 802.3ba was choosing the 10km 100GBASE-LR4, there was much debate as to whether it should be 3 or 4km. "I won’t be surprised if you have people looking to see what they can do with the current 100GBASE-LR4 spec: There are things you can do to reduce the power and the cost," he says.
One obvious development to reduce size, cost and power is to remove the gearbox chip. The gearbox IC translates between the 10x10Gbps electrical and 4x25Gbps optical channels. The chip consumes several watts in each direction (transmit and receive). By adopting a 4x25Gbps electrical input interface, the gearbox chip is no longer needed: the electrical and optical channels are then matched in speed and channel count. As a result, 100GbE designs can fit into the upcoming, smaller CFP2 and even smaller CFP4 form factors.
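As a rough illustration of the lane arithmetic behind the gearbox's removal, here is a minimal sketch. The 10x10Gbps electrical and 4x25Gbps optical lane counts come from the discussion above; the function and variable names are purely illustrative.

```python
# Illustrative lane arithmetic for a 100GbE module (assumed names; the
# 10x10Gbps electrical and 4x25Gbps optical lane counts are from the
# discussion above, everything else is a sketch).

def lane_count(total_gbps: int, lane_gbps: int) -> int:
    """Parallel lanes needed to carry total_gbps at a given per-lane rate."""
    return total_gbps // lane_gbps

electrical_lanes = lane_count(100, 10)  # 10 electrical lanes (10x10Gbps host interface)
optical_lanes = lane_count(100, 25)     # 4 optical lanes (4x25Gbps)

# A gearbox is needed only when the electrical and optical sides differ
# in lane count and per-lane rate.
print("gearbox needed:", electrical_lanes != optical_lanes)   # True

electrical_lanes = lane_count(100, 25)  # move the host interface to 4x25Gbps
print("gearbox needed:", electrical_lanes != optical_lanes)   # False - lanes now match
```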
As for other next-gen 100Gbps developments, these will likely include a 4x25Gbps multi-mode fibre specification and a 100 Gig, 2km serial interface, similar to the 40GBASE-FR.
The industry focus, he says, is to reduce the cost, power and size of 100Gbps interfaces rather than develop multiple 100 Gig link interfaces or expand the reach beyond 40km. "We are going to see new systems introduced over the next few years not based on 10 Gig but designed for 25 Gig," says D'Ambrosia. The ASIC and chip designers are also keen to adopt 25Gbps signalling because they need to increase input-output (I/O) yet have only so many pins on a chip, he says.
D’Ambrosia is also part of an Ethernet bandwidth assessment ad-hoc committee that is part of the IEEE 802.3 work. The group is working with the industry to quantify bandwidth demand. “What you see is a lot of end users talking about needing terabit and a lot of suppliers talking about 400 Gig,” he says. Ultimately, what will determine the next step is what technologies are going to be available and at what cost.
Backplane I/O and switching
Many of the systems D'Ambrosia is seeing use a single 100Gbps port per card. "A single port is a cool thing but is not that useful,” he says. “Frankly, four ports is where things start to become interesting.”
This is where 25Gbps electrical interfaces come into play. "It is not just 25 Gig for chip-to-chip, it is 25 Gig chip-to-module and 25 Gig to the backplane."
Moreover, module interfaces, backplane speeds and switching capacity are all interrelated when designing systems. When designing a 10 Terabit switch, for example, the goal is to reduce the number of traces that run across the board and through the backplane to the switch fabric and other line cards.
Using 10Gbps electrical signals, between 1,200 and 2,000 signals are needed, depending on the architecture, says D'Ambrosia. With 25Gbps, the signal count falls to 500-750. "The electrical signal has an impact on the switch capacity," he says.
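A back-of-the-envelope sketch of where such figures come from is shown below. The 1.2x to 2x architectural overhead factor is an assumption chosen to bracket the quoted ranges, not a figure from D'Ambrosia.

```python
# Rough trace-count arithmetic for a 10 Terabit/s switch. The quoted
# signal counts (1,200-2,000 at 10Gbps, 500-750 at 25Gbps) come from
# the article; the ~1.2x-2x architecture overhead factor below is an
# assumption used purely to show how such ranges arise.

CAPACITY_GBPS = 10_000  # 10 Terabit/s of switching capacity

def signals(lane_gbps: float, overhead: float) -> int:
    """Electrical signals needed to carry CAPACITY_GBPS at a given lane rate."""
    return round(CAPACITY_GBPS / lane_gbps * overhead)

for lane in (10, 25):
    low, high = signals(lane, 1.2), signals(lane, 2.0)
    print(f"{lane}Gbps lanes: ~{low} to ~{high} signals")

# 10Gbps lanes: ~1200 to ~2000 signals
# 25Gbps lanes: ~480 to ~800 signals (roughly the 500-750 quoted above)
```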
100 Gig in the data centre
D’Ambrosia stresses that care is needed when discussing data centres as the internet data centres (IDC) of a Google or a Facebook differ greatly from those of enterprises. “In the case of IDCs, those people were saying they needed 100 Gig back in 2006,” he says.
Such mega data centres use tens of thousands of servers connected across a flat switching architecture, unlike traditional data centres that use three layers of aggregated switching. According to D'Ambrosia, such flat architectures can justify 100Gbps interfaces even when each server has only a 1 Gigabit Ethernet interface. And servers are now transitioning to 10GbE interfaces.
“You are going to have to worry about the architecture, you are going to have to worry about the style of data centre and also what the server applications are,” says D'Ambrosia. “People are also starting to talk about moving data operations around the network based on where electricity is cheapest.” Such an approach will require a truly wide, flat architecture, he says.
D'Ambrosia cites the Amsterdam Internet Exchange, which announced in May its first customer using a 100 Gig service. "We are starting to see this happen," he says.
One lesson D'Ambrosia has learnt is that there is no clear relationship between what comes in and out of the cloud and what happens within the cloud. Data centres themselves are one such example.
100 Gig direct detection
In recent months, ADVA Optical Networking and MultiPhy have announced lower-power, 100Gbps direct-detection interfaces with reaches of 200km to 800km that are cheaper than coherent transmission. Such interfaces have a role in the network and are of varying interest to telco operators. But these are vendor-specific solutions.
D'Ambrosia stresses the importance of standards such as the IEEE's and the work of the Optical Internetworking Forum (OIF), which has adopted coherent transmission. "I still see customers that want a standards-based solution," says D'Ambrosia, who adds that while the OIF work is not a standard, it is an interoperability agreement. "It allows everyone to develop the same thing," he says.
There are also other considerations regarding 100 Gig direct-detection besides cost, power and a pluggable form factor. Vendors and operators want to know how many people will be able to source this, he says.
D'Ambrosia says that new systems being developed now will likely be deployed in 2013. Vendors must weigh the attractiveness of any alternative technology against where industry-backed technologies such as coherent and the IEEE standards will be by then.
The industry will adopt a variety of 100Gbps solutions, he says, with individual decisions based on a customer's cost model, long-term strategy and network.
For Part 1, see: 100 Gig - An operator view
Calient brings optical switching to the data centre
Source: Calient
The California-based start-up has been selling its FC 320, a 320-port 3D MEMS-based switch, since 2006. The optical switch is used by Verizon and AT&T at submarine cable landing sites, and by government agencies.
Now Calient has raised US $19.4 million (€13.77M) in its latest funding round to complete the development and manufacturing of a more compact, power-efficient version of its optical switch.
The company has upgraded the electronics and software of its MEMS-based optical switch module. This, says Gregory Koss, Calient's senior vice president for products and partners, reduces the power consumption to 20W, a 90% reduction compared to its existing design.
The new switch module is also more compact. Using the module in a new 320-port switch platform more than halves the size: from 17 to 7 rack units.
The 3D MEMS optics has not been changed. The MEMS design uses mirrors to form a free-space connection between a fibre input port and any of the 320 output ports. A control system then adjusts the mirrors to maximise the output signal. In all the years Calient has been selling its systems, there has not been a single MEMS failure, says the company.
Calient is also changing its strategy by selling the switch as a module to system vendors. The switch module can be incorporated on a line card, while Calient will work with system vendor partners that want to integrate the module within their own platform designs.
"[Data centre] operators want a future-proofed network. They don't want to rebuild when links are upgraded from 10 to 40 and then 100 Gig."
Gregory Koss, Calient Technologies
Data centre and cloud
Calient's MEMS-based switch will be used to connect large server clusters in content service providers' 'mega' data centres.
According to Koss, content service providers are interested in using an optical switch to link their server clusters. In a typical configuration, 48 servers are connected to a top-of-rack switch. This top-of-rack switch, via a 10 Gigabit Ethernet link, would be one input to the 320-port optical switch.
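As a rough indication of the scale such a configuration implies, the sketch below is simple arithmetic based on the figures Koss quotes; the totals are illustrative, not numbers from Calient.

```python
# Simple fan-in arithmetic for the configuration described above
# (48 servers per top-of-rack switch, one 10GbE uplink per ToR into a
# 320-port optical switch). Illustrative totals, not Calient figures.

SERVERS_PER_TOR = 48
OPTICAL_PORTS = 320
UPLINK_GBPS = 10

max_servers = SERVERS_PER_TOR * OPTICAL_PORTS   # 15,360 servers reachable via the switch
uplink_capacity = OPTICAL_PORTS * UPLINK_GBPS   # 3,200 Gbps of aggregate ToR uplinks

print(max_servers, uplink_capacity)
```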
"[Data centre] operators want a future-proofed network," says Koss. "They don't want to rebuild when links are upgraded from 10 to 40 and then 100 Gig."
Common cabling used in the data centre includes copper and multi-mode fibre, while Calient's design uses single-mode fibre. According to Koss, data centre managers are installing more single-mode fibre: "It is not so much for reach but for bandwidth and for scaling."
The switch can also be used for what Calient calls cloud networking: to monitor and manage an enterprise's fibres as they enter the data centre.
ROADMs
The switch will also address agile optical networking, to enable colourless, directionless and contentionless ROADMs.
The optical module will be used for the add/drop, working alongside rather than replacing the 1x9 or 1x20 WSSs used for the pass-through lambdas.
Koss says that the company's main focus in 2012 is addressing the data centre market opportunity but that the switch is of interest to ROADM system vendors. Such a 3D MEMS-based ROADM design will take longer to bring to market.
Further reading:
CALIENT's 3D MEMS Technology Enables Exploding Bandwidth Demands (log-in required to download the White Paper)
Framing the information age
When writing features for FibreSystems Europe, I repeatedly asked for striking, high-resolution images. The magazine's editors always wanted photos that included people, like Maurice Broomfield's photographs. Getting hold of such images did happen, but not often.
Inspired by the Financial Times' interview and Maurice Broomfield's beautiful images, here are some of the better images that have been sent.
IBM data centre
I'm on the look-out for more. So if you handle media relations for an operator, equipment maker, or optical transceiver or component (optical or IC) vendor, can I please request some inspiring photos - ideally with people - and I'll create a photo gallery of the best.
Network Operations Centre (NOC) Source: AT&T
Source: Cisco Systems
An Intel silicon photonics device
And here is an image of Tokyo's data centre on Flickr

