Meeting the many needs of data centre interconnect

High capacity. Density. Power efficiency. Client-side optical interface choices. Coherent transmission. Direct detection. Open line systems. These are just some of the capabilities vendors must offer to compete in the data centre interconnect market.

“A key lesson learned from all our interactions over the years is that there is no one-size-fits-all solution,” says Jörg-Peter Elbers, senior vice president of advanced technology, standards and IPR at ADVA Optical Networking. “What is important is that you have a portfolio to give customers what they need.”

 Jörg-Peter Elbers

Teraflex

ADVA Optical Networking detailed its Teraflex, the latest addition to its CloudConnect family of data centre interconnect products, at the OFC show held in Los Angeles in March.

The platform is designed to meet the demanding needs of the large-scale data centre operators that want high-capacity, compact platforms that are also power efficient. 

 

A key lesson learned from all our interactions over the years is that there is no one-size-fits-all solution

 

Teraflex is a one-rack-unit (1RU) stackable chassis that supports three hot-pluggable 1.2-terabit modules or ‘sleds’. A sled supports two line-side wavelengths, each capable of coherent transmission at up to 600 gigabits-per-second (Gbps). Each sled’s front panel supports various client-side interface module options: 12 x 100-gigabit QSFPs, 3 x 400-gigabit QSFP-DDs and lower speed 10-gigabit and 40-gigabit modules using ADVA Optical Networking’s MicroMux technology.

“Building a product optimised only for 400-gigabit would not hit the market with the right feature set,” says Elbers. “We need to give customers the possibility to address all the different scenarios in one competitive platform.”   

The Teraflex achieves 600Gbps wavelengths using a 64-gigabaud symbol rate and 64-ary quadrature-amplitude modulation (64-QAM). ADVA Optical Networking is using Acacia Communications' latest Pico dual-core coherent digital signal processor (DSP) to implement the 600-gigabit wavelengths. ADVA Optical Networking would not confirm Acacia as its supplier, but Acacia chose to detail the Pico DSP at OFC to end speculation about the source of the Teraflex's coherent DSP. That said, ADVA Optical Networking points out that the Teraflex's modular design means coherent DSPs from various suppliers can be used.
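As a rough sanity check on those figures, the sketch below maps 64 gigabaud and 64-QAM to a net 600Gbps rate. The roughly 20-25 percent overhead figure is an illustrative assumption, not a number ADVA or Acacia has published.

```python
# Rough dual-polarisation coherent line-rate arithmetic.
# The overhead fraction is an illustrative assumption, not a published figure.
import math

symbol_rate_gbaud = 64           # symbol rate in gigabaud
bits_per_symbol = math.log2(64)  # 64-QAM carries 6 bits per symbol
polarisations = 2                # dual-polarisation transmission

raw_gbps = symbol_rate_gbaud * bits_per_symbol * polarisations  # 768 Gbps raw
overhead_fraction = 0.22         # assumed FEC and framing overhead (~20-25%)
net_gbps = raw_gbps * (1 - overhead_fraction)

print(f"Raw line rate: {raw_gbps:.0f} Gbps, net after overhead: ~{net_gbps:.0f} Gbps")
# Roughly 768 Gbps raw and ~600 Gbps net, consistent with the quoted 600Gbps wavelength.
```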

 

The 1 rack unit Teraflex

The line-side optics supports a variety of line rates, from 600Gbps down to 100Gbps; the lower the rate, the longer the reach.

The resulting 3-sled 1RU Teraflex platform thus supports up to 3.6 terabits-per-second (Tbps) of duplex communications. This compares to a maximum 800Gbps per rack unit using the current densest CloudConnect 0.5RU Quadflex card.                                     
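As a quick check, the sketch below totals the line-side capacity from the figures quoted above and compares it with the QuadFlex density; it uses only numbers already given in the article.

```python
# Per-rack-unit capacity arithmetic using the figures quoted in the article.
sleds_per_chassis = 3
wavelengths_per_sled = 2
gbps_per_wavelength = 600

teraflex_tbps = sleds_per_chassis * wavelengths_per_sled * gbps_per_wavelength / 1000
quadflex_tbps_per_ru = 0.8  # 800 Gbps per rack unit with the 0.5RU QuadFlex card

print(f"Teraflex: {teraflex_tbps} Tbps per 1RU chassis")                            # 3.6 Tbps
print(f"Density gain over QuadFlex: {teraflex_tbps / quadflex_tbps_per_ru:.1f}x")   # 4.5x
```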

Markets

The data centre interconnect market is commonly split into metro and long haul.

The metro data centre interconnect market requires high-capacity, short-haul, point-to-point links of up to 80km. Large-scale data centre operators may have several sites spread across a city, since they must take suitable locations wherever they can find them. Sites are typically no more than 80km apart to keep latency low enough that, collectively, they appear as one large logical data centre.

“You are extending the fabric inside the data centre across the data-centre boundary, which means the whole bandwidth you have on the fabric needs to be fed across the fibre link,” says Elbers. “If not, then there are bottlenecks and you are restricted in the flexibility you have.”  

Large enterprises also use metro data centre interconnect. The enterprises’ businesses involve processing customer data - airline bookings, for example - and they cannot afford disruption. As a result, they may use twin data centres to ensure business continuity.

Here, too, latency is an issue, especially if synchronous mirroring of data using Fibre Channel takes place between sites. The storage protocol requires acknowledgement between the end points, so the round-trip time over the fibre is critical. “The average distance of these connections is 40km, and no one wants to go beyond 80 or 100km,” says Elbers, who stresses that this is not an application for Teraflex, which is aimed at massive Ethernet transport. Customers using Fibre Channel typically need lower capacities and use more tailored solutions for the application.
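To put those distance limits in context, here is a minimal latency sketch. It assumes the usual rule of thumb of roughly 5 microseconds of propagation delay per kilometre of fibre, a figure not quoted in the article.

```python
# Rough fibre round-trip-time estimate for metro DCI distances.
# ~5 microseconds per km is the usual rule of thumb for light in silica fibre (assumption).
US_PER_KM = 5.0

def round_trip_us(distance_km: float) -> float:
    """Return the propagation-only round-trip time in microseconds."""
    return 2 * distance_km * US_PER_KM

for km in (40, 80, 100):
    print(f"{km:>3} km link: ~{round_trip_us(km):.0f} microseconds round trip")
# 40 km -> ~400 us, 80 km -> ~800 us, 100 km -> ~1,000 us, before any switching
# or protocol-acknowledgement delay is added.
```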

The second data centre interconnect market - long haul - has different requirements. The links span long distances and the data sent between sites is limited to what is needed. Data centres are distributed to ensure continuity of operation and to improve quality of experience by delivering services closer to customers.

Hundreds of gigabits and even terabits are sent over the long-distance links between data centre sites, but this is commonly about a tenth of what is sent for metro data centre interconnect, says Elbers.

 

Direct Detection

Given the variety of customer requirements, ADVA Optical Networking is pursuing direct-detection line-side interfaces as well as coherent-based transmission.

At OFC, the system vendor detailed work with two proponents of line-side direct-detection technology - Inphi and Ranovus - as well as its coherent-based Teraflex announcement.

Working with Microsoft, Arista and Inphi, ADVA detailed a metro data centre interconnect demonstration that involved sending 4Tbps of data over an 80km link. The link comprised 40 Inphi ColorZ QSFP modules. A ColorZ module uses two wavelengths, each carrying 56Gbps using PAM-4 signalling. This is where having an open line system is important.
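The 4Tbps figure follows from the module count and per-module rate; the sketch below works through the arithmetic. It assumes each ColorZ QSFP nets 100Gbps from its two 56Gbps PAM-4 wavelengths, the difference being coding overhead, which is an assumption for illustration.

```python
# Arithmetic behind the 4 Tbps over 80 km demonstration.
# Assumes each ColorZ QSFP nets 100 Gbps from its two 56 Gbps PAM-4 wavelengths
# (the difference being coding overhead) - an assumption for illustration.
modules = 40
wavelengths_per_module = 2
gross_gbps_per_wavelength = 56
net_gbps_per_module = 100

gross_tbps = modules * wavelengths_per_module * gross_gbps_per_wavelength / 1000
net_tbps = modules * net_gbps_per_module / 1000

print(f"Gross line capacity: {gross_tbps:.2f} Tbps, net client capacity: {net_tbps:.1f} Tbps")
# ~4.48 Tbps of gross line capacity delivering the 4 Tbps quoted for the demonstration.
```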

Microsoft wanted to use QSFPs directly in its switches rather than deploy additional transponders, says Elbers. But this still requires line amplification, while data centre operators want the same straightforward provisioning they expect with coherent technology. To this end, ADVA demonstrated its SmartAmp technology, which not only sets the power levels of the wavelengths and provides optical amplification but also automatically measures and compensates for the chromatic dispersion experienced over a link.

ADVA also detailed a 400Gbps metro transponder card based on PAM-4 implemented using two 200Gbps transmitter optical subassemblies (TOSAs) and two 200Gbps receiver optical subassemblies (ROSAs) from Ranovus.      

 

Clearly there is also space for a direct-detection solution but that space will narrow down over time

 

Choices

The decision to use coherent or direct-detection line-side optics boils down to a link’s requirements and the price an end user is willing to pay, says Elbers.

As coherent-based optics has matured, it has migrated from long haul to metro and now data centre interconnect. One way to reduce the cost of coherent further is to cram more bits into each transmission. “Teraflex is adding chunks of 1.2Tbps per sled, which is great for people with very high capacities,” says Elbers, but small enterprises, for example, may only need a 100-gigabit link.

“For scenarios where you don’t need to have the highest spectral efficiency and the highest fibre capacity, you can get more cost-effective solutions,” says Elbers, explaining the system vendor’s interest in direct detection.

“We are seeing coherent penetrating more and more markets but still cost and power consumption are issues,” says Elbers. “Clearly there is also space for a direct-detection solution but that space will narrow down over time.”

Developments in silicon photonics that promise to reduce the cost of optics through greater integration and the adoption of packaging techniques from the CMOS industry will all help. “We are not there yet; this will require a couple of technology iterations,” says Elbers.

Until then, ADVA’s goal is for direct detection to cost half that of coherent.

“We want to have two technologies for the different areas; there needs to be a business justification [for using direct detection],” he says. “Having differentiated pricing between the two - coherent and direct detection - is clearly one element here.”   


Interview with Finisar's Jerry Rawls

Finisar is celebrating its 25th anniversary. Gazettabyte interviewed Finisar's executive chairman and company co-founder, Jerry Rawls, to mark the anniversary.

Part 1

 

Jerry Rawls, Finisar's executive chairman and co-founder

 

Q: How did you meet fellow Finisar co-founder Frank Levinson?

JR: I was a general manager of a division at Raychem, a company in Menlo Park, California. We were developing and manufacturing electrical interconnect products; our markets were mostly defence electronics and the computer industry.

Our customers were starting to talk a lot about fibre optics and we had no products. It seemed like it was going to be a hole in our portfolio. So I started a fibre optics product development group and hired a bright young physicist from Bell Labs to be the principal technologist. His name was Frank Levinson.    

What made you both decide to set up Finisar?

The division I was running was very successful: we were the fastest growing and the most profitable. Frank was lured away by our chairman to work on a fibre-optics start-up that was internally funded: Raynet.

Raynet lost almost a billion dollars over the next few years. It was, and may still be, the biggest venture capital loss in Silicon Valley history.

As they were losing money, and it was sucking money from the rest of the company, our division was unable to fund a lot of projects we would have liked to fund to continue to grow. Frank was very frustrated as they were jousting at windmills.

We had lunch one day and talked about the possibility of starting a fibre-optics company. It was as simple as that: we could do better on our own. This was in 1987.

What convinced you both that high-speed fibre optics was a business to pursue?


Frank Levinson

Frank had some original patents from Bell Labs on wavelength division multiplexing (WDM) and the use of fibre optics in telephony. That is where fibre optics first had a major impact.

As we started a little company, the thing that was happening in 1988 was that the Mac OS had just been introduced and Windows was right behind it. This was the first time colour and graphics were introduced to the PC. As we watched the change to graphics and colour, we knew video was not going to be too far behind. It was clear that files would be larger, and the bandwidth between systems, and between storage and systems, would need to be greater.

And so we started to think about high-speed optics for data centres. And the corollary to that was low-cost, high-speed optics for data centres.

We did not think we were up to competing with the telecommunications industry because in those days AT&T Bell Labs (Lucent), Alcatel and Nortel dominated the world of fibre optics. They built their own components, they built their own sub-systems and we did not think there was any chance of a start-up competing with them.

But in the world of computer networks, there were no established suppliers as fibre optics was almost non-existent there. Our goal was to focus on Gigabit-per-second speeds and how we could build low-cost Gigabit optical links for data centres. 

The reason low cost was so important was that an OC-12 (622 Megabit-per-second SONET) link cost thousands of dollars at each end. That was a telephony fibre link; there was no chance you could be successful in any sort of computer installation with an optical connection at such prices.

So the question was: How do you bring the cost down and the prices down to a level that networks could afford, and that were priced lower than the computers at each end?

 

"Frank and I started the company with our own money. We had no outside investors. I took a second mortgage on my house and off we went to start a company"

 

So we looked for compromises. One was distance. OC-12s went 20km, 40km, 80km, but data centres only needed a few hundred metres. OK, if we can build a link that goes 500m, we have covered any data centre in the world.

The next thing was: What does that open up? And what can we do? It quickly led us to multi-mode transmission, and multi-mode transmission turned out for us to be much, much cheaper to build because the core of the fibre was either 50 or 62.5 microns versus 8 microns in telephony fibre. That means that the core is enormous compared to telephone fibres, and our job for alignment [with the laser] was that much easier.

We built some early samples. We went through several iterations to get there. We put together the components and ICs and we finally had a product that we thought was pretty good. We had a 1 Gigabit transmitter with 17 pins and a 1 Gigabit receiver with 17 pins, and we had a Gigabit transceiver with 28 pins.    

Our first customers for these devices were the national laboratories. Lawrence Livermore National Lab was one of the pioneers in the world of Fibre Channel. They, working with IBM, had a big hand in the whole Fibre Channel protocol.

Our engagement with Lawrence Livermore led to other labs.  All these physicists, building high-energy physics experiments, all of a sudden started buying these optical transceivers from us by the thousands. That was our first product.

Finisar's initial focus included consulting. What sort of things was the company doing during this period?

Consulting, we did a tiny bit. Mostly, what we did was contract design engineering.

Frank and I started the company with our own money. We had no outside investors. I took a second mortgage on my house and off we went to start a company. That meant we had to be able to support ourselves and our employees. We had to have customers that paid their bills.

Early optical transceiver product from Finisar

So one of the things we did in the early days was find customers to do design work for. We designed fibre-optic systems, we designed cable TV fibre-optic systems, we designed special fibre interconnects, and we did some special fibre testing - which you might call consulting. We designed a scuba-diver computer that calculated dive tables - whether you would get the bends or not, how long you could stay down, and at what depth and pressure. We designed a swimming pool chlorination control system.

We did a lot of things along the way to generate revenue to support our simultaneous product development work to build the Gigabit optics devices.

We didn't start the company to be a contract design house; we started it to be a product company. But the financial reality was we had to have enough money coming in to support our employees and ourselves.

"His firm had so much inventory of the products from that company that he didn’t think they would buy anything for the next three or four years"

 

 

In the late 1990s, Finisar experienced the optical boom and then the crash. Do you recall when you first realised all was not well?

In November and December of 2000, we were about to acquire two companies. Both were component suppliers in the telecommunications industry. They both sold to big customers like Alcatel, Nortel and Lucent.

In the due-diligence process for one of the companies, I was on a phone call with Lucent, which had been a huge customer - maybe 40 percent of the company’s business came from Lucent. Talking to the VP of procurement about his history with this company and what its future prospects were - all the things you normally do in due diligence - he confirmed what his previous business had been and that he was satisfied with them as a supplier. They were a good company.

But, as we talked about future business, he went silent. And, then he came back with some devastating news: his firm had so much inventory of the products from that company that he didn’t think they would buy anything for the next three or four years. This fact was unknown to the company we were acquiring. That was my first signal that something bad was going on.

We did not acquire this company. We were in the late stages of the acquisition discussions – talking to their customers is usually one of the last things you do in due diligence – but there was obviously a material adverse change in the outlook of this company. So, we quickly terminated discussions.

A very similar thing happened with the other company only a couple of weeks later. This was late 2000, and it was clear the bell was ringing. Something bad was about to happen in the optics, telephony and networking industry.

In our January quarter of 2001, we could see the incoming order rate falling. And by our February-April quarter that year, our revenues had dropped something like 47 percent in two quarters. It rolled through the industry pretty fast.

How did Finisar navigate the turbulent aftermath?

We were a bit in shock, as most of the industry was.

To put it in perspective, our revenues dropped 47 percent in two quarters; Nortel’s High Performance Optical Components division, which had sales of $1.4 billion in one quarter during 2000, saw its revenues drop to something like $28 million. Some 98.5 percent of its revenue disappeared; it was that disastrous a time, particularly in telecom.

The issue with Finisar was that the business we built was predominantly about computer networks. We didn’t have that much business with telecom. We were selling optics for data centres and so our business didn’t decline as much as the Nortels, Alcatels and the Lucents. But it was still a precipitous decline and so we had to decide: Were we still going to stay in this business or were we going to open a hamburger stand or some other kind of a business? And our answer was we didn’t know much about the hamburger business or any other business.

We thought that, long term, fibre optics was going to be a good business. The use of information was only going to increase and that was a place where we had built a fundamental market position and we ought to continue.

To do that, we had to change our spots, that is, change our way of doing business. We were going to have to be more cost competitive. Enormous capacity had been created in the optics industry in the '90s and that capacity didn’t all evaporate [with the bust]. We knew we were going to have to be much more cost-competitive.

We decided that our strategy was to be a vertically-integrated company. In the ‘90s we were not vertically integrated: we bought lasers from the Japanese or Honeywell who made VCSELs, we bought photo-detectors from either US or Japanese suppliers, we bought ICs from merchant semiconductor companies, and we put it all together. We even outsourced all of our assembly and manufacturing. But in the future, we were convinced that we had to be more cost-competitive.

 

"One of the things that I think is really important here is that we allow people to make mistakes"

During this period Finisar had an IPO. How did it impact the company and this strategy?

We had previously had an IPO in 1999 that raised some money. The first thing we did after the crash was to buy a factory in Malaysia. This was around March 2001, business had started to crash, everyone was selling, and if you were buying, you could get a pretty good deal on almost anything. So we bought this factory from Seagate – 640,000 sq. ft. of almost brand new building, with 200,000 sq. ft. of clean room, 20 acres of land – we bought it for $10 million.

Then we decided we had to be vertically integrated with our ICs. We weren’t going to start an IC foundry but we had to start an IC design group. So we hired a senior IC design manager from National Semiconductor who had led their analogue design efforts and we started a semiconductor design group. Today we design almost all of the ICs that go into our datacom products. We have some 60 people worldwide who are involved in IC design, layout, testing and verification.

Next, we bought the Honeywell VCSEL fab. They were our big supplier, we were their largest customer. Honeywell decided that that business was not strategic and so we bought it.

We also bought a small laser fab in Fremont, California to make edge-emitting lasers. We could also make photo-detectors in both those fabs. So we were now in a position where we could make photo-detectors and lasers, and we could design ICs and go to a foundry with them instead of buying them from merchant semiconductor companies and paying their margins.

We had a beautiful big factory we could build our products in, and expand for years to come. We are still expanding in that factory. Today we have over 5,000 employees in that plant in Malaysia.

To finance all the tomfoolery, we needed a lot more money than we were able to raise with our IPO. I went to New York and Boston and peddled a convertible bond issue for $250 million. So we raised enough cash that we could finance these acquisitions and also support the company through the crash and downturn.

It was great that we were a public company because we could not have raised that much money as a private company. It worked out well, and we eventually paid all that debt off.

Fast-forward to today, we are targeting more than a billion dollars in revenue this year, we are the largest company in our industry and I think we are the most profitable.

In 2006 IEEE Spectrum Magazine ranked Finisar top in terms of patent power among telecom equipment manufacturers. Is this still a key strategic goal of Finisar?  And if so, how do you ensure innovation continues year after year?

I wouldn’t say patents are a strategic goal of ours. The IEEE Spectrum ranking was based on the number of patents you had, how many you had issued recently, but it also was importantly weighted by how many times your patents were referenced by other patent applications. A lot of ours were referenced by others who were filing patents. We ended up pretty high on the list.

We do have over 1,000 issued US patents, and we have about 500 issued international patents. We employ maybe as many as 1,300 engineers and almost 300 of them have Ph.Ds. We will continue to innovate. We have been a leader in this industry for years. Our goal is to try to be out in front, to deliver the products that meet the speeds, the power, the density that our customers need for high-speed transmission. That means we have to have a lot of talented people, we have to be focussed. And, I promise you that innovation is very important to our success.

It is not so much about how many patents we get issued. Patents are often as important for defensive purposes as anything else. People can’t come after us and sue us frivolously for patent infringement because we have so many patents that cover products they likely make. In the end, patents for defence are really important.

Is there something that you have learnt over the years that has proved successful regarding innovation?

First, we want to be an innovative company. When we hire, we look for innovative people, clever people, smart people, but also people with good interpersonal skills; that is part of our culture.

But one of the things that I think is really important here is that we allow people to make mistakes. We don’t encourage people to make mistakes but we allow people to make mistakes. If they are trying to do their job and they make a mistake, we don’t fire them. We try to learn from the mistakes.

Over time, we have had guys make what appeared to be pretty serious mistakes that I am sure people might have been fired for in many other companies. But, for us, we are supportive of our employees. As long as we know they are not being lazy or dishonest, we support them.

I think that environment, where you can try to innovate and work on projects knowing the culture of the company is not vengeful and will tolerate mistakes, is an important part of our innovative environment.

 



NextIO simplifies top of rack switching with I/O virtualisation

NextIO has developed virtualised input/output (I/O) equipment that simplifies switch design in the data centre.

 

"Our box takes a single virtual NIC, virtualises that and shares that out with all the servers in a rack"

John Fruehe, NextIO 

 

The platform, known as vNET, replaces both Fibre Channel and Ethernet top-of-rack switches in the data centre and is suited for use with small one rack unit (1RU) servers. The platform uses PCI Express (PCIe) to implement I/O virtualisation.

"Where we tend to have the best success [with vNET] is with companies deploying a lot of racks - such as managed service providers, service providers and cloud providers - or are going through some sort of IT transition," says John Fruehe, vice president of outbound marketing at NextIO.

Three layers of Ethernet switches are typically used in the data centre. The top-of-rack switches aggregate traffic from server racks and link to end-of-row, aggregator switches that in turn interface to core switches. "These [core switches] aggregate all the traffic from all the mid-tier [switches]," says Fruehe. "What we are tackling is the top-of-rack stuff; we are not touching end-of-row or the core."

A similar hierarchical architecture is used for storage: a top-of-rack Fibre Channel switch, end-of-row aggregation and a core that connects to the storage area network. NextIO's vNET platform also replaces the Fibre Channel top-of-rack switch.

"We are replacing those two top-of-rack switches - Fibre Channel and Ethernet - with a single device that aggregates both traffic types," says Fruehe.

vNET is described by Fruehe as an extension of the server I/O. "All of our connections are PCI Express; we have a simple PCI Express card that sits in the server, and a PCI Express cable," he says. "To the server, it [vNET] looks like a PCI Express hub with a bunch of I/O cards attached to it." The server does not discern that the I/O cards are shared across multiple servers or that they reside in an external box.

For IT networking staff, the box appears as a switch providing 10 Gigabit Ethernet (GbE) ports to the end-of-row switches, while for storage personnel, the box provides multiple Fibre Channel connections to the end-of-row storage aggregation switch. "Most importantly, there is no difference to the software," says Fruehe.

 

I/O virtualisation

NextIO's technology pools the I/O bandwidth available and splits it to meet the various interface requirements. A server is assigned I/O resources yet it believes it has the resources all to itself. "Our box directs the I/O the same way a hypervisor directs the CPU and memory inside a server for virtualisation," says Fruehe.  

There are two NextIO boxes available, supporting up to 15 or up to 30 servers. One has 30 links of 10 Gigabits-per-second (Gbps) each, the other 15 links of 20Gbps each. These are implemented as 30 x4 and 15 x8 PCIe connections, respectively, that connect directly to the servers.

A customer most likely uses two vNET platforms at the top of the rack, the second being used for redundancy. "If a server is connected to two, you have 20 or 40 Gig of aggregate bandwidth," says Fruehe.

NextIO exploits two PCIe standards known as single root I/O virtualisation (SRIOV) and multi-root I/O virtualisation (MRIOV).

SRIOV allows a server to take an I/O connection like a network card, a Fibre Channel card or a drive controller and share it across multiple server virtual machines. MRIOV extends the concept by allowing an I/O controller to be shared by multiple servers. "Think of SRIOV as being the standard inside the box and MRIOV as the standard that allows multiple servers to share the I/O in our vNET box," says Fruehe.

Each server uses only a single PCIe connection to the vNET with the MRIOV's pooling and sharing happening inside the platform.
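As a conceptual illustration only - this is not NextIO's software nor a real SRIOV/MRIOV API, and all the names are invented - the toy sketch below models the basic idea of carving a shared controller in the vNET box into slices handed out to individual servers over their single PCIe links.

```python
# Toy model of the MRIOV pooling idea: one physical controller in the vNET box
# is carved into slices shared by several servers. Conceptual only; not a real API.
from dataclasses import dataclass, field

@dataclass
class SharedController:
    """A physical I/O card in the vNET chassis, shared by several servers."""
    name: str
    total_gbps: int
    allocations: dict = field(default_factory=dict)

    def assign(self, server: str, gbps: int) -> None:
        """Hand a bandwidth slice of this controller to a server."""
        used = sum(self.allocations.values())
        if used + gbps > self.total_gbps:
            raise ValueError(f"{self.name}: not enough capacity left for {server}")
        self.allocations[server] = self.allocations.get(server, 0) + gbps

# One dual-port 10GbE card shared across three servers in the rack,
# each of which sees only its own slice over its single PCIe link.
nic = SharedController("dual-10GbE-card", total_gbps=20)
for srv in ("server-01", "server-02", "server-03"):
    nic.assign(srv, 5)

print(nic.allocations)  # {'server-01': 5, 'server-02': 5, 'server-03': 5}
```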

 

 

The vNET platform showing the PCIe connections to the servers, the 10GbE interfaces to the network and the 8 Gig Fibre Channel connections to the storage area networks (SANs). Source: NextIO

 

Meanwhile, vNET's front panel has eight shared slots. These house Ethernet and/or Fibre Channel controllers, which are shared across the multiple servers.

In practice, an application running on the server communicates with its operating system to send the traffic over the PCIe bus to the vNET platform, where it is passed to the relevant network interface controller (NIC) or Fibre Channel card.

The NIC encapsulates the data in Ethernet frames before it is sent over the network. The same applies to the host bus adaptor (HBA), which converts the data to be stored to Fibre Channel. "All these things are happening over the PCIe bus natively, and they are handled in different streams," says Fruehe.

In effect, a server takes a single physical NIC and partitions it into multiple virtual NICs for all the virtual machines running on the server. "Our box takes a single virtual NIC, virtualises that and shares that out with all the servers in a rack," says Fruehe. "We are using PCIe as the transport all the way back to the virtual machine and all the way forward to that physical NIC; that is all a PCIe channel."

The result is a high bandwidth, low latency link that is also scalable.

NextIO has a software tool that allows bandwidth to be assigned on the fly. "With vNET, you open up a console and grab a resource and drag it over to a server and in 2-3 seconds you've just provisioned more bandwidth for that server without physically touching anything."

The provisioning is between vNET and the servers. In the case of networking traffic, this is in 10GbE chunks. It is the server's own virtualisation tools that do the partitioning between the various virtual machines.

vNET has an additional four slots - for a total of 12 - for assignment to individual servers. "If you are taking all the I/O cards out of the server, you can use smaller form-factor servers," says Fruehe. But such 1RU servers may not have room for a specific I/O card. Accordingly, the four slots are available to host cards - such as a flash-based solid-state drive or a graphics processing unit accelerator - that may be needed by individual servers.

 

Operational benefits

There are power and cooling benefits to using the vNET platform: smaller form-factor servers draw less power, while using PCIe results in fewer cables and better air flow.

To understand why fewer cables are needed, consider that a typical server uses a quad 1GbE controller and a dual-ported Fibre Channel controller, resulting in six cables. For a redundant system, a second set of Ethernet and Fibre Channel cards is used, doubling the cables to a dozen. With 30 servers in a rack, the total is 360 cables.

Using NextIO's vNET, in contrast, only two PCIe cables are required per server or 60 cables in total.  

On the front panel, the eight shared slots can all house dual-port 10GbE cards or dual-port 8 Gigabit Fibre Channel cards, or a mix of both. This gives a total of 160 Gig of Ethernet or 128 Gig of Fibre Channel. NextIO plans to upgrade the platform to 40GbE interfaces for an overall capacity of 640 Gig of Ethernet.
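The cabling and capacity claims above reduce to simple multiplication; the short sketch below reproduces them using only the figures quoted in the article.

```python
# Cable-count and front-panel capacity arithmetic from the figures quoted above.
servers_per_rack = 30

# Conventional approach: 6 cables per server (quad 1GbE plus dual-port Fibre
# Channel), doubled for redundancy.
conventional_cables = servers_per_rack * 6 * 2   # 360 cables
vnet_cables = servers_per_rack * 2               # 2 PCIe cables per server = 60

# Front panel: 8 shared slots, each a dual-port 10GbE or dual-port 8G Fibre Channel card.
shared_slots = 8
max_ethernet_gbps = shared_slots * 2 * 10        # 160 Gig of Ethernet
max_fibre_channel_gbps = shared_slots * 2 * 8    # 128 Gig of Fibre Channel
future_40gbe_gbps = shared_slots * 2 * 40        # 640 Gig with planned 40GbE cards

print(conventional_cables, vnet_cables, max_ethernet_gbps, max_fibre_channel_gbps, future_40gbe_gbps)
# 360 60 160 128 640
```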


OFC/NFOEC 2013: Technical paper highlights

Source: The Optical Society

Network evolution strategies, state-of-the-art optical deployments, next-generation PON and data centre interconnect are just some of the technical paper highlights of the upcoming OFC/NFOEC conference and exhibition, to be held in Anaheim, California from March 17-21, 2013. Here is a selection of the papers.

Optical network applications and services

Fujitsu and AT&T Labs-Research (Paper Number: 1551236) present simulation results of shared mesh restoration in a backbone network. The simulation uses up to 27 percent fewer regenerators than dedicated protection while increasing capacity by some 40 percent.

KDDI R&D Laboratories and the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain (Paper Number: 1553225) show results of an OpenFlow/stateless PCE integrated control plane that uses protocol extensions to enable end-to-end path provisioning and lightpath restoration in a transparent wavelength switched optical network (WSON).

In invited papers, Juniper highlights the benefits of multi-layer packet-optical transport, IBM discusses future high-performance computers and optical networking, while Verizon addresses multi-tenant data centre and cloud networking evolution.


Network technologies and applications

A paper by NEC (Paper Number: 1551818) highlights 400 Gigabit transmission using four parallel 100 Gigabit subcarriers over 3,600km. Using optical Nyquist shaping, each carrier occupies 37.5GHz, for a total bandwidth of 150GHz.

In invited papers, Andrea Bianco of the Politecnico di Torino, Italy, details energy awareness in the design of optical core networks, while Verizon's Roman Egorov discusses next-generation ROADM architecture and design.


FTTx technologies, deployment and applications

In invited papers, operators share their analysis and experiences regarding optical access. Ralf Hülsermann of Deutsche Telekom evaluates the cost and performance of WDM-based access networks, while France Telecom's Philippe Chanclou shares the lessons learnt regarding its PON deployments and details its next steps.


Optical devices for switching, filtering and interconnects

In invited papers, MIT's Vladimir Stojanovic discusses chip- and board-scale integrated photonic networks for next-generation computers. Alcatel-Lucent Bell Labs' Nicholas Fontaine gives an update on devices and components for space-division multiplexing in few-mode fibres, while Acacia's Long Chen discusses silicon photonic integrated circuits for WDM and optical switches.

Optoelectronic devices

Teraxion and McGill University (Paper Number: 1549579) detail a compact (6mm x 8mm) silicon photonics-based coherent receiver. Using PM-QPSK modulation at 28 Gbaud, transmission over up to 4,800km is achieved.

Meanwhile, Intel and UC Santa Barbara (Paper Number: 1552462) discuss a hybrid silicon DFB laser array emitting over a 200nm range, integrated with EAMs (3dB bandwidth > 30GHz). Four bandgaps spread over more than 100nm are realised using quantum well intermixing.


Transmission subsystems and network elements

In invited papers, David Plant of McGill University compares OFDM and Nyquist WDM, while AT&T's Sheryl Woodward addresses ROADM options in optical networks and whether or not to use a flexible grid.

Core networks

Orange Labs' Jean-Luc Auge asks whether flexible transponders can be used to reduce margins. In other invited papers, Rudiger Kunze of Deutsche Telekom details the operator's standardisation activities to achieve 100 Gig interoperability for metro applications, while Jeffrey He of Huawei discusses the impact of cloud, data centres and IT on transport networks.

Access networks

Roberto Gaudino of the Politecnico di Torino discusses the advantages of coherent detection in reflective PONs. In other invited papers, Hiroaki Mukai of Mitsubishi Electric details an energy efficient 10G-EPON system, Ronald Heron of Alcatel-Lucent Canada gives an update on FSAN's NG-PON2 while Norbert Keil of the Fraunhofer Heinrich-Hertz Institute highlights progress in polymer-based components for next-generation PON.

Optical interconnection networks for datacom and computercom

Use of orthogonal multipulse modulation for 64 Gigabit Fibre Channel is detailed by Avago Technologies and the University of Cambridge (Paper Number: 1551341).

IBM T.J. Watson (Paper Number: 1551747) has a paper on a 35Gbps VCSEL-based optical link using 32nm SOI CMOS circuits. IBM is claiming record optical link power efficiencies of 1pJ/bit at 25Gbps and 2.7pJ/bit at 35Gbps.

Several companies detail activities for the data centre in the invited papers.

Oracle's Ola Torudbakken has a paper on a 50Tbps optically-cabled InfiniBand data centre switch, HP's Mike Schlansker discusses configurable optical interconnects for scalable data centres, Fujitsu's Jun Matsui details a high-bandwidth optical interconnection for a densely integrated server, while Brad Booth of Dell also looks at optical interconnect for volume servers.

In other papers, Mike Bennett of Lawrence Berkeley National Lab looks at network energy efficiency issues in the data centre. Lastly, Cisco's Erol Roberts addresses data centre architecture evolution and the role of optical interconnect.


Differentiation in a market that demands sameness

Transceiver feature: Part 2

At first sight, optical transceiver vendors have little scope for product differentiation. Modules are defined through a multi-source agreement (MSA) and used to transport specified protocols over predefined distances.

  

“Their attitude is let the big guys kill themselves at 40 and 100 Gig while they beat down costs"

 

Vladimir Kozlov, LightCounting

 

 

“I don’t think differentiation matters so much in this industry,” says Daryl Inniss, practice leader components at Ovum. “Over time eventually someone always comes in; end customers constantly demand multiple suppliers.”

It is a view confirmed by Luc Ceuppens, senior director of marketing, high-end systems business unit at Juniper Networks. “We do look at the different vendors’ products - which one gives the lowest power consumption,” he says. “But overall there is very little difference.”

For vendors, developing transceivers is time-consuming and costly yet with no guarantee of a return. The very nature of pluggables means one vendor’s product can easily be swapped with a cheaper transceiver from a competitor. 

Being one of the vendors defining an MSA is one way to steal a march, as it results in a time-to-market advantage. There have even been cases where non-founder companies have been denied sight of an MSA’s specification, ensuring they can never compete, says Inniss: “If you are part of an MSA, you are very definitely at an advantage.”

Rafik Ward, vice president of marketing at Finisar, cites other examples where companies have an advantage.

One is Fibre Channel where new data rates require high-speed vertical-cavity surface-emitting lasers (VCSELs) which only a few companies have.

Another is 100 Gigabit-per-second (Gbps) for long-haul transmission which requires companies with deep pockets to meet the steep development costs.  “One hundred Gigabit is a very expensive proposition whereas with the 40 Gigabit Ethernet LR4 (10km) standard, existing off-the-shelf 10Gbps technology can be used,” says Ward. 

 

"One hundred Gigabit is a very expensive proposition"

Rafik Ward, Finisar

 

Ovum’s Inniss highlights how optical access is set to impact wide area networking (WAN).  The optical transceivers for passive optical networking (PON) are using such high-end components as distributed feedback (DFB) lasers and avalanche photo-detectors (APDs), traditionally components for the WAN. Yet with the higher volumes of PON, the cost of WAN optics will come down.

“With Gigabit Ethernet, the price declines by 20% each time volumes double,” says Inniss. “For PON transceivers the decline is 40%.” As 10Gbps PON optics start to be deployed, the price benefit will migrate up to the SONET/Ethernet/WAN world, he says. Accordingly, the transceiver players that make and use their own components, and are active in both PON and WAN, will benefit most.
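Those percentages describe a learning curve in which each doubling of cumulative volume multiplies the price by a constant factor. The sketch below applies that relation to the quoted 20% and 40% figures; the starting price and volume steps are illustrative placeholders, not market data.

```python
# Learning-curve sketch for the quoted price declines per volume doubling.
# Starting price and volume steps are illustrative placeholders, not market data.
from math import log2

def price_after_volume_growth(start_price: float, volume_ratio: float,
                              decline_per_doubling: float) -> float:
    """Price after cumulative volume grows by volume_ratio, assuming a constant
    fractional price decline for every doubling of volume."""
    return start_price * (1 - decline_per_doubling) ** log2(volume_ratio)

start = 100.0  # arbitrary starting price of 100 units
for label, decline in (("Gigabit Ethernet, 20% per doubling", 0.2),
                       ("PON transceiver, 40% per doubling", 0.4)):
    prices = [round(price_after_volume_growth(start, 2 ** n, decline), 1) for n in range(4)]
    print(label, prices)
# Gigabit Ethernet: [100.0, 80.0, 64.0, 51.2]
# PON transceiver:  [100.0, 60.0, 36.0, 21.6]
```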

“Differentiation is hard but possible,” says Vladimir Kozlov, CEO of optical transceiver market research firm, LightCounting.  Active optical cables (AOCs) have been an area of innovation partly because vendors have freedom to design the optics that are enclosed within the cabling, he says.

AOCs, Fibre Channel and 100Gbps are all examples where technology is a differentiator, says Kozlov, but business strategy is another lever to be exploited.

On a recent visit to China, Kozlov spoke to ten local vendors. “They have jumped into the transceiver market and think a 20% margin is huge whereas in the US it is seen as nothing.” 

The vendors differentiate themselves by supplying transceivers directly to the equipment vendors’ end customers. “They [the Chinese vendors] are finding ways in a business environment; nothing new here in technology, nothing new in manufacturing,” says Kozlov.

He cites one firm that fully populated with transceivers a US telecom system vendor’s installation in Malaysia. “Doing this in the US is harder but then the US is one market in a big world,” says Kozlov.

Offshore manufacturing is no longer a differentiator.  One large Chinese transceiver maker bemoaned that everyone now has manufacturing in China. As a result its focus has turned to tackling overheads: trimming costs and reducing R&D. 

“Their attitude is let the big guys kill themselves at 40 and 100 Gig while they beat down costs by slashing Ph.Ds, optimising equipment and improving yields,” says Kozlov.   “Is it a winning approach long term? No, but short-term quite possibly.”

