TIP seeks to shake up the telecom marketplace
Niall Robinson
Now ten telcos, systems vendors, component makers and other players have joined Facebook as part of the Telecom Infra Project, or TIP, to bring the benefits of open-source design and white-box platforms to telecoms. TIP has over 300 members and seven ongoing projects across its three network segments of focus: access, backhaul, and core and management.
Facebook's involvement in a telecoms project serves its own business interests. The social media giant has 1.79 billion monthly active users and wants to make Internet access more broadly available. Facebook also has demanding networking requirements, both in linking its data centres and in supporting growing video traffic. It also wants better networks to support emerging services using technologies such as virtual reality headsets.
It is time to disrupt this closed market; it is time to reinvent everything we have today
The telecom operators want to collaborate with Facebook, having seen how its Open Compute Project has created flexible, scalable equipment for the data centre. The operators also want to shake up the telecom industry. At the inaugural TIP Summit held in November, the TIP chairman and CTO of SK Telecom, Alex Jinsung Choi, discussed how the scale and complexity of telecom networks make it hard for innovators and start-ups to enter the market. “It is time to disrupt this closed market; it is time to reinvent everything we have today,” said Choi during his TIP Summit talk.
Voyager
TIP unveiled a white-box packet optical platform, dubbed Voyager, at the summit. The one-rack-unit (1RU) box is part of TIP's backhaul focus area. Voyager has been designed by Facebook and the platform’s specification has been made available to TIP.
Voyager is based on another platform Facebook has developed: the Wedge top-of-rack switch for the data centre. Wedge switches are now being made by several contract manufacturers. Each can be customised based on the operating system used and the applications loaded onboard. The goal is to adopt a similar approach with Voyager.
“Eventually, there will be something that is definitely market competitive in terms of hardware cost,” says Niall Robinson, vice president, global business development at ADVA Optical Networking, one of the companies involved in the Voyager initiative. “And you have got an open-source community developing a feature set from a software perspective.”
Other companies backing Voyager include Acacia Communications, Broadcom and Lumentum, which are involved in the platform’s hardware design. Snaproute is delivering the software inside the box, while the first units are being made by the contract manufacturer, Celestica.
ADVA Optical Networking will provide a sales channel for Voyager and is interfacing it to its network management system. The systems vendor will also provide services and software support. Coriant is another systems vendor backing the project; it is providing networking support, including routeing and switching, as well as dense WDM transmission capabilities.
This [initiative] has shown me that the whole supply and design chains for transport can be opened up; I find that fascinating.
Robinson describes TIP as one of the most ambitious and creative projects he has been involved in. “It is less around the design of the box,” he says. “It is the shaking up of the ecosystem; that is what TIP is about.”
A 25-year involvement in transport has given Robinson an ingrained view that it is different to other aspects of telecom. For example, a vendor’s transport system must be at each end of the link due to the custom nature of platforms that are designed to squeeze maximum performance over a link. “In some cases, transport is different but what TIP maybe realises is that transport does not always have to be different,” says Robinson. “This [initiative] has shown me that the whole supply and design chains for transport can be opened up; I find that fascinating.”
Specification
At the core of the 1RU Voyager is the Broadcom StrataXGS Tomahawk. The 3.2-terabit switch chip is also the basis of the Wedge top-of-rack switch. The Tomahawk features 128 x 25 gigabit-per-second (Gbps) serdes to enable 32 x 100 gigabit ports, and supports layer-2 switching and layer-3 routeing.
Voyager uses twelve 100 Gigabit Ethernet client-side pluggable interfaces and four 200-gigabit networking interfaces based on Acacia’s AC-400 optical module. The AC-400 uses coherent optics and supports polarisation-multiplexed, 16-level quadrature amplitude modulation (PM-16QAM). “If it was a pure transport box the input rate would equal the output rate but because it is a packet box, you can take advantage of layer 2 over-subscription,” says Robinson.
At layer-3 the total routeing capacity is 2 terabits, the sum of the client and network interfaces. “At layer-3, the Tomahawk chip does not know what is a client port and what is a networking port; they are just Ethernet ports on that device,” says Robinson.
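Those capacities can be cross-checked with some quick arithmetic. The short Python sketch below is purely illustrative - it is not part of any TIP or vendor code - and simply adds up the client and line-side rates quoted above, then derives the resulting layer-2 oversubscription ratio.

```python
# Back-of-the-envelope check of the Voyager capacities quoted in the article.
# Illustrative only; the figures come from the text above.

CLIENT_PORTS = 12          # 100 Gigabit Ethernet client-side pluggables
CLIENT_RATE_GBPS = 100
LINE_PORTS = 4             # AC-400 based networking interfaces
LINE_RATE_GBPS = 200       # 200 Gbps per interface using PM-16QAM

client_capacity = CLIENT_PORTS * CLIENT_RATE_GBPS   # 1,200 Gbps
line_capacity = LINE_PORTS * LINE_RATE_GBPS         # 800 Gbps

# At layer 3 the Tomahawk treats every port as just another Ethernet port,
# so the total routeing capacity is the sum of client and network interfaces.
layer3_capacity = client_capacity + line_capacity   # 2,000 Gbps = 2 terabits

# At layer 2 the client side can be over-subscribed relative to the line side.
oversubscription = client_capacity / line_capacity  # 1.5

print(f"Client capacity : {client_capacity} Gbps")
print(f"Line capacity   : {line_capacity} Gbps")
print(f"Layer-3 capacity: {layer3_capacity} Gbps")
print(f"Layer-2 oversubscription: {oversubscription}:1")
```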
ADVA Optical Networking chose to back Voyager because it does not have a packet optical platform in its product portfolio. Until now, it has partnered with Juniper Networks and Arista Networks when such functionality has been needed. “We are chasing certain customers that are interested in Voyager,” says Robinson. “We are enabling ourselves to play in the packet optical space with a self-contained box.”
Status and roadmap
The Voyager is currently in beta-prototype status and has already been tested in trials. Equinix has tested the box working with Lumentum’s open line system over 140km of fibre, while operator MTN has also tested Voyager.
The platform is expected to be generally available in March or April 2017, by when ADVA Optical Networking will have completed the integration of Voyager with its network management system.
Robinson says there are two ways Voyager could develop.
One direction is to increase the interface and switching capacities of the 1RU box. Next-generation coherent digital signal processors that support higher baud rates will enable 400Gbps and even 600Gbps wavelengths using PM-64QAM. This could enable the line-side capacity to increase from the current 800Gbps to 2 or 3 terabits. And soon, 400Gbps client-side pluggable modules will become available. Equally, Broadcom is already sampling its next-generation Tomahawk II chip that has 6.4 terabits of switching capacity.
Another way the platform could evolve is to add a backplane to connect multiple Voyagers. This is something already done with the Wedge '6-pack', which combines six Wedge switch cards. A Voyager 6-pack would result in a packet-optical platform with multiple terabits of switching and routeing capacity.
“This is an industry-driven initiative as opposed to a company-driven one,” says Robinson. “Voyager will go whichever way the industry thinks the lowest cost is.”
Corrected on Dec 22nd. The AC-400 is a 5"x7" module and not as originally stated.
Interconnection networks - an introduction
Source: Jonah D. Friedman
If moving information between locations is the basis of communications, then interconnection networks represent an important subcategory.
The classic textbook, Principles and Practices of Interconnection Networks by Dally and Towles, defines interconnection networks as a way to transport data between sub-systems of a digital system.
The digital system may be a multi-core processor with the interconnect network used to link the on-chip CPU cores. Since the latest processors can have as many as 100 cores, designing such a network is a significant undertaking.
Equally, the digital system can be on a far larger scale: servers and storage in a data centre. Here the interconnection network may need to link as many as 100,000 servers, as well as the servers to storage.
The number of servers being connected in the data centre continues to grow.
“The market simply demands you have more servers,” says Andrew Rickman, chairman and CEO of UK start-up Rockley Photonics. “You can’t keep up with demand simply with the advantage of [processors and] Moore’s law; you simply need more servers.”
Scaling switches
To understand why networking complexity grows exponentially rather than linearly with server count, a simple switch scaling example is used.
With the 4-port switch shown in Figure 1, it is assumed that each port can connect to any of the other three ports. The 4-port switch is also non-blocking: if Port 1 is connected to Port 3, the remaining two ports can still be connected to each other without affecting the link between Ports 1 and 3. So, if four servers are connected to the ports, each can talk to any other server, as shown in Figure 1.
Figure 1: A 4-port switch. Source: Gazettabyte, Arista Networks
But once five or more servers need to be connected, things get more complicated. To double the size and create an 8-port switch, several 4-port building-block switches are needed, resulting in a more complex two-stage switching arrangement (Figure 2).
Figure 2: An 8-port switch made up of 4-port switch building blocks. Source: Gazettabyte, Arista Networks.
Indeed, the complexity increases non-linearly. Instead of one 4-port building-block switch, six are needed for a switch with twice the number of ports, along with a total of eight interconnections (the number of second-tier switches multiplied by the number of first-tier switches).
Double the number of ports again to create a 16-port switch and the complexity more than doubles once more: three tiers of switching are now needed, using 20 4-port switches and 32 interconnections (see Table 1).
Table 1: How the number of 4-port building-block switches and interconnects grows as the number of switch ports keeps doubling. Source: Gazettabyte and Arista Networks.
The exponential growth in switches and interconnections is also plotted in Figure 3.
Figure 3: The exponential growth in N-sized switches and interconnects as the switch size grows to 2N, 4N etc. In this example N=4. Source: Gazettabyte, Arista Networks.
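The numbers in Table 1 follow from a simple recursion: an N-port non-blocking switch is built from a tier of 4-port leaves (two external ports and two uplinks each) feeding two N/2-port switches, which are themselves constructed the same way. The Python sketch below is one plausible reading of that construction rather than anything published by Arista or Gazettabyte; it reproduces the figures quoted above (6 switches and 8 interconnects at 8 ports, 20 and 32 at 16 ports) and extrapolates larger sizes using the same rule.

```python
# Count the 4-port building-block switches and interconnects needed to build
# an N-port non-blocking switch, following the construction in Figures 1-3:
# a tier of 4-port leaves (two external ports, two uplinks each) feeding two
# N/2-port switches built the same way. Illustrative sketch only.

def building_blocks(ports):
    """Return (number of 4-port switches, number of interconnects)."""
    if ports == 4:
        return 1, 0                       # the basic building block itself
    leaves = ports // 2                   # each leaf exposes 2 external ports
    sub_switches, sub_links = building_blocks(ports // 2)
    switches = leaves + 2 * sub_switches  # leaf tier plus two smaller fabrics
    links = ports + 2 * sub_links         # one uplink per external port
    return switches, links

for n in (4, 8, 16, 32, 64):
    s, i = building_blocks(n)
    print(f"{n:3d}-port switch: {s:3d} x 4-port switches, {i:3d} interconnects")
```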
This exponential growth in complexity explains Rockley Photonics’ goal of using silicon photonics to make a larger basic building block. Not only would this reduce the number of switches and tiers needed for the overall interconnection network, it would also allow a larger number of servers to be connected.
Rockley believes its silicon photonics-based switch will not only improve scaling but also reduce the size and power consumption of the overall interconnection network.
The start-up also claims that its silicon photonics switch will scale with Moore’s law, doubling its data capacity every two years. In contrast, the data capacity of existing switch ASICs does not scale with Moore’s law, it says. However, the company has yet to launch its product and has yet to discuss its design.
Data centre switching
In the data centre, a common switching arrangement used to interconnect servers is the leaf-and-spine architecture. A ‘leaf’ is typically a top-of-rack switch while the ‘spine’ is a larger capacity switch.
A top-of-rack switch typically uses 10 gigabit links to connect to the servers. The connection between the leaf and spine is typically a higher capacity link - 40 or 100 gigabit. A common arrangement is to adopt a 3:1 oversubscription - the total input capacity to the leaf switch is 3x that of its output stream.
To illustrate the point with numbers, a 640 gigabit top-of-rack switch is assumed: 480 gigabits of capacity (48 x 10 Gig) used to connect the servers and 160 gigabits (4 x 40 Gig) to link the top-of-rack switch to the spine switches.
In the example shown (Figure 4) there are 32 leaf and four spine switches connecting a total of 1,536 servers.
Figure 4: An example to show the principles of a leaf and spine architecture in the data centre. Source: Gazettabyte
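The arithmetic behind this example is straightforward. The short Python sketch below is illustrative only: it uses the port counts assumed above to derive the oversubscription ratio and server count, and - assuming each leaf connects to every spine - the number of leaf-to-spine links.

```python
# Leaf-and-spine arithmetic for the example above. Illustrative only; the
# leaf-to-spine link count assumes each leaf has one uplink to every spine.

SERVER_LINKS_PER_LEAF = 48   # 48 x 10 Gig down to the servers
SERVER_LINK_GBPS = 10
UPLINKS_PER_LEAF = 4         # 4 x 40 Gig up to the spine switches
UPLINK_GBPS = 40

LEAF_SWITCHES = 32
SPINE_SWITCHES = 4

downstream = SERVER_LINKS_PER_LEAF * SERVER_LINK_GBPS   # 480 Gbps
upstream = UPLINKS_PER_LEAF * UPLINK_GBPS               # 160 Gbps
oversubscription = downstream / upstream                # 3.0 -> 3:1

servers = LEAF_SWITCHES * SERVER_LINKS_PER_LEAF         # 1,536 servers
leaf_spine_links = LEAF_SWITCHES * SPINE_SWITCHES       # 128 links

print(f"Per-leaf oversubscription: {oversubscription:.0f}:1")
print(f"Servers connected        : {servers}")
print(f"Leaf-to-spine links      : {leaf_spine_links}")
```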
In a data centre with 100,000 servers, clearly a more complicated interconnection scheme involving multiple leaf and spine clusters is required.
Arista Networks’ white paper details data centre switching and leaf-and-spine arrangements, while Facebook has published a blog (and video) discussing just how complex an interconnection network can be (see Figure 5).
Figure 5: How multiple leaf and spines can be connected in a large scale data centre. Source: Facebook
Data centres to give silicon photonics its chance
The scale of modern data centres and the volumes of transceivers they will use are going to have a significant impact on the optical industry. So claims Facebook, the social networking company.
Katharine Schmidtke
Facebook has been vocal in outlining the optical requirements of its large data centres.
The company will use duplex single-mode fibre and has chosen the 2 km mid-reach 100 gigabit CWDM4 interface to connect its equipment.
But the company remains open regarding the photonics used inside transceivers. “Facebook is agnostic to technology,” says Katharine Schmidtke, strategic sourcing manager, optical technology at Facebook. “There are multiple technologies that meet our requirements.”
That said, Facebook says silicon photonics has characteristics that are appealing.
Silicon photonics can produce integrated designs, with all the required functions placed in one or two chips. Such designs will also be needed in volume, given that a large data centre uses hundreds of thousands of optical transceivers, and that requires a high-yielding manufacturing process. This is a manufacturing model the chip industry excels at, and one that silicon photonics, which uses a CMOS-compatible process, can exploit.
When you bring up a data centre, you don’t just deploy, you deploy a data centre
New business model
What data centres bring to optics is scale. Optical transceiver volumes used by data centres are growing, and growing fast, and will account for half the industry’s demand for Ethernet transceivers by 2020, according to LightCounting Market Research.
Transceivers must be designed with high-volume, low-cost manufacturing in mind from the start. This is different to what the market has done traditionally. “With the telecom industry, you step into volume in more manageable, digestible chunks,” says Schmidtke. “When you bring up a data centre, you don’t just deploy, you deploy a data centre.”
Silicon photonics has already proven it can achieve the required optical performance, says Facebook; what remains open is whether the technology can meet the manufacturing demands of the data centre. What helps its cause is that the data centre provides the volumes needed to achieve such manufacturing maturity.
Schmidtke is upbeat about silicon photonics’ prospects.
“Why silicon photonics is attractive is integration; you are reducing the number of components and the bill of materials significantly, and that reduces cost,” she says. “Then there is all the alignment and assembly cost reductions; that is what makes this technology appealing.”
Her expectation is that the industry will demonstrate the required level of manufacturing maturity in the coming year. Then the role silicon photonics will play for this market will become clearer.
“Within a year it will be very obvious,” she says.
Terabit interconnect to take hold in the data centre
Intel and Corning have further detailed their 1.6 Terabit interface technology for the data centre.
The collaboration combines Intel's silicon photonics technology operating at 25 Gigabit-per-fibre with Corning's ClearCurve LX multimode fibre and latest MXC connector.
Silicon photonics wafer and the ClearCurve fibres. Source: Intel
The fibre has a 300m reach, triple the reach of existing multi-mode fibre at such speeds, and uses a 1310nm wavelength. Used with the MXC connector that supports 64 fibres, the overall capacity will be 1.6 Terabits-per-second (Tbps).
"Each channel has a send and a receive fibre which are full duplex," says Victor Krutul, director business development and marketing for silicon photonics at Intel. "You can send 0.8Tbps on one direction and 0.8Tbps in the other direction at the same time."
The link supports connections within a rack and between racks; for example, connecting a data centre's top-of-rack Ethernet switch with an end-of-row one.
James Kisner, an analyst at global investment banking firm, Jefferies, views Intel’s efforts as providing important validation for the fledgling silicon photonics market.
However, in a research note, he points out that it is unclear whether large data centre equipment buyers will be eager to adopt the multi-mode fibre solution, as it is more expensive than single mode. Equally, large data centres have increasingly long span requirements - 500m to 2km - further promoting the long-term use of single-mode fibre.
Rack Scale Architecture
The latest details of the silicon photonics/ClearCurve cabling were given as part of an Intel update on several data centre technologies, including its Atom C2000 processor family for microservers, the FM5224 72-port Ethernet switch chip, and Intel's Rack Scale Architecture (RSA) that uses the new cabling and connector.
Intel is a member of Facebook's Open Compute Project, which is based on a disaggregated system design that separates storage, computing and networking. "When I upgrade the microprocessors on the motherboard, I don't have to throw away the NICs [network interface controllers] and disc drives," says Krutul. The disaggregation can be within a rack or between rows of equipment. Intel's RSA is an example of a disaggregated design.
The chip company discussed an RSA design for Facebook. The rack has three 100Gbps silicon photonics modules per tray. Each module has four transmit and four receive fibres, or 24 fibres per tray and per cable. “Different versions of RSA will have more or less modules depending on requirements," says Krutul. Intel has also demonstrated a 32-fibre MXC prototype connector.
Corning says the ClearCurve fibre delivers several benefits. The fibre has a smaller bend radius of 7.5mm, enabling fibre routing on a line card. The 50 micron multimode fibre face is also expanded to 180 microns using a beam expander lens. The lenses make connector alignment easier and less sensitive to dust. Corning says the MXC connector comprises seven parts, fewer than other optical connectors.
Fibre and connector standardisation are key to ensure broad use, says Daryl Inniss, vice president and practice leader, components at Ovum.
"Intel is the only 1310nm multimode transmitter and receiver supplier, and expanding this optical link into other applications like enterprise data centres may require a broader supply base," says Inniss in a comment piece. But the fact that Corning is participating in the development signals a big market in the making, he says.
Intel has not said when the silicon photonics transceiver and fibre/connector will be generally available. "We are not discussing schedules or pricing at this time," says Krutul.
Silicon photonics: Intel's first lab venture
The chip company has been developing silicon photonics technology for a decade.
"As our microprocessors get faster, you need bigger and faster pipes in and around the servers," says Krutul. "That is a our whole goal - feeding our microprocessors."
Intel is setting up what it calls 'lab ventures', with silicon photonics chosen to be the first.
"You have a research organisation that does not do productisation, and business units that just do products," says Krutul. "You need something in between so that technology can move from pure research to product; a lab venture is an organisational structure to allow that movement to happen."
The lab ventures will be discussed more in the coming year.
