Graphene prototype modulator shown working at 10 Gigabit
- Imec's graphene electro-absorption modulator works at 10 Gigabit-per-second
- The modulator is small and has been shown to be thermally stable
- Much work is required to develop the modulator commercially
Cross-section of the graphene electro-absorption modulator. The imec work was first detailed in a paper at the IEDM conference held in December 2014 in San Francisco. Source: imec
Imec has demonstrated an optical modulator using graphene operating at up to 10 Gigabit-per-second. The Belgian nano-electronics centre is exploring graphene - carbon atoms linked in a 2D sheet - as part of its silicon photonics research programme investigating next-generation optical interconnect. Chinese vendor Huawei joined imec's research programme late last year.
Several characteristics are sought for a modulator design. One is tiny dimensions to cram multiple interfaces in as tight a space as possible, as required for emerging board-to-board and chip-to-chip optical designs. Other desirable modulator characteristics include low power consumption, athermal operation, the ability to operate over a wide range of wavelengths, high speed (up to 50 Gbps) and ease of manufacture.
Imec's interest in graphene stems from the material's ability to change its light-absorbing characteristics over a wide spectral range. "Graphene has a high potential for a wide-band modulator solution and also for an athermal design," says Joris Van Campenhout, programme director for optical I/O at imec.
Modulation
For optical modulation, either a material's absorption coefficient or its refractive index is used. Silicon photonics has already been used to implement Mach-Zehnder interferometer and ring resonator modulators. These designs modify the material's refractive index and use interference to modulate the light's intensity.
"Mach-Zehnder modulators have been optimised dramatically over the last decade," says Van Campenhout. "They can generate at very high bit rates but they are still pretty big - 1mm or longer - and that prevents further scaling."
Ring resonators are more compact and have been shown working at up to 50 Gigabit. "But they are resonant devices; they are wavelength-specific and thermally dependent," says Van Campenhout. "A one degree change can detune the ring resonance from the laser's wavelength."
The other approach, the electro-absorption modulator, uses an electric field to vary the material's absorption coefficient, and this is the approach imec has chosen for its graphene modulator.
Electro-absorption modulators using silicon germanium meet the small footprint requirement, have a small capacitance and achieve broadband operation. Capacitance is an important metric as it defines the modulator's maximum data rate as well as such parameters as insertion loss (how many dB of signal are lost passing through the modulator) and the extinction ratio (the ratio of the modulator's on-state and off-state output intensities).
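For readers unfamiliar with these two figures of merit, the short sketch below shows how they are calculated from optical powers. The power values are invented purely for illustration (chosen to land near the figures reported later in the article); they are not imec's measurements.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double p_in  = 1.00;   /* mW entering the modulator (assumed)       */
    double p_on  = 0.40;   /* mW output in the transmitting 'on' state  */
    double p_off = 0.22;   /* mW output in the blocking 'off' state     */

    /* Insertion loss: signal lost even when the modulator is 'on'. */
    double insertion_loss_db = -10.0 * log10(p_on / p_in);
    /* Extinction ratio: contrast between the 'on' and 'off' states. */
    double extinction_db = 10.0 * log10(p_on / p_off);

    printf("insertion loss = %.1f dB, extinction ratio = %.1f dB\n",
           insertion_loss_db, extinction_db);
    return 0;
}
```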
"Silicon germanium offers a pretty decent modulation quality," says Van Campenhout but the wavelength drifts with temperature. Thermal drift is something that graphene appears to solve.
Imec's graphene electro-absorption modulator comprises a 50 micron graphene-oxide-silicon capacitor structure residing above a silicon-on-insulator rib waveguide. The waveguides are implemented using a 200mm wafer whereas the graphene is grown on a copper substrate before being placed on the silicon die. Van Campenhout refers to the design as hybrid or heterogeneous silicon photonics.
The graphene modulator exhibits a low 4dB insertion loss and an extinction ratio of 2.5dB. The device's performance is stable over a broad spectrum: an 80nm window centred around the 1550nm wavelength. The performance of up to 10Gbps was achieved over a temperature range of 20-49°C.
"The key achievement is that we have been able to show that you can operate at 10 Gigabit with very clean modulation eye diagrams," says Van Campenhout. However, much work is needed before the device becomes a viable technology.
What next?
Imec has modelled the graphene modulator using a simple resistor-capacitor circuit. "We have been able to identify sources of capacitance and resistance," says Van Campenhout. "We can now better optimise the design for speed or for efficiency."
The speed of the modulator is dictated by the resistance-capacitance product. Yet the higher the capacitance, the greater the efficiency: the better the extinction ratio and the lower the insertion loss. "So it comes down to reducing the resistance," says Van Campenhout. "We think we should be able to get to 25 Gigabit."
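A first-order RC model makes the trade-off concrete. The sketch below assumes illustrative resistance and capacitance values - they are not imec's figures - and shows why cutting the resistance raises the achievable bandwidth without touching the capacitance that sets the modulation efficiency.

```c
#include <stdio.h>

int main(void) {
    const double pi = 3.141592653589793;
    double resistance  = 200.0;    /* ohms: contact plus sheet resistance (assumed)      */
    double capacitance = 80e-15;   /* farads: graphene-oxide-silicon capacitor (assumed) */

    /* First-order RC low-pass cut-off: f_3dB = 1 / (2 * pi * R * C). */
    double f3db_hz = 1.0 / (2.0 * pi * resistance * capacitance);

    printf("RC-limited 3 dB bandwidth: %.1f GHz\n", f3db_hz / 1e9);
    /* Halving R doubles the bandwidth; halving C would too, but at the
       cost of extinction ratio and insertion loss. */
    return 0;
}
```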
With the first prototype, the absorption effect induced by the electric field is achieved between a single graphene plate and the silicon. Imec plans to develop a design using two graphene plates. "If two slabs of graphene are used, we expect to double the effect," says Van Campenhout. "All the charge on both plates of the capacitor will contribute to the modulation of the absorption."
However, the integration is more difficult with two plates, and two metal contacts to the graphene are needed. "This is still a challenge to do," says Van Campenhout.
Imec has also joined the Graphene Flagship, the European €1 billion programme that spans materials production, components and systems. "One of the work packages is to show you can process on a manufacturing scale graphene-based devices in a CMOS pilot line," he says. Another consideration is to use silicon nitride waveguides rather than silicon ones as these can be more easily deposited.
One challenge still to be overcome is the development of an efficient graphene-based photo-detector. "If this technology is ever going to be used in a real application, there should be a much more efficient graphene photo-detector being developed," says Van Campenhout.
Nuage uses SDN to address enterprise connectivity needs
"Across the WAN and out to the branch, the context is increasingly complicated, with the need to deliver legacy and cloud applications to users - and sometimes customers - that are increasingly mobile, spanning several networks," says Brad Casemore, research director, data centre networks at IDC. These networks can include MPLS, Metro Ethernet, broadband and 3G and 4G wireless.
The data centre is a great microcosm of the network - Houman Modarres
At present, remote offices use custom equipment that requires a visit from an engineer. In contrast, Nuage's Virtualized Network Services (VNS) uses SDN technology to deliver enterprise services to a generic box, or to software that runs on the enterprise's server. The goal is to cut the time it takes an enterprise to set up or change its business services at a remote site, while also simplifying the service provider's operations.
What has been done
Nuage designed its SDN-enabled connectivity products from the start for use in the data centre and beyond. "The data centre is a great microcosm of the network," says Modarres. "But we designed it in such a way that the end points could be flexible, within and across data centres but also anywhere."
Nuage uses open protocols like OpenFlow to enable the control plane to talk to any device, while its software agents that run on a server can work with any hypervisor. The control plane-based policies are downloaded to the end points via its SDN controller.
Using VNS, services can be installed without a visit from a specialist engineer. A user powers up the generic hardware or server and connects it to the network, whereupon policies are downloaded. The user enters a code they have been sent, which enables their privileges as defined by the enterprise's policies.
"Just as in the data centre, there is a real need for greater agility through automation, programmability, and orchestration," says IDC's Casemore. "One could even contend that for many enterprises, the pain is more acutely felt on the WAN, especially as they grapple with how to adapt to cloud and mobility."
Extending the connectivity end points beyond the data centre has required Nuage to bolster security and authentication procedures. Modarres points out that data centres and service provider central offices are secured environments; a remote office, which could be a worker's home, is not.

"You need to do authentication differently and IPsec connections are needed for security, but what if you unplug it? What if it is stolen?" he says. "If someone goes to the bank and steals a router, are they a bank branch now?"
To address this, once a remote office device is unplugged for a set time - typically several minutes - its configuration is reset. Equally, when a router is deliberately unplugged, for example during an office move, and notification is given, the user receives a new authentication code on the move's completion and the policies are restored.
Nuage's virtualised services platform comprises three elements: the virtualised services directory (VSD), the virtualised services controller (VSC) - the SDN controller - and the virtual routing and switching module (VR&S).
"The only thing we are changing is the bottom layer, the network end point, which used to be in the data centre as the VR&S, and is now broken out of the data centre, as in the network services gateway, to be anywhere," says Modarres. "The network services gateway has physical and virtual form factors based on standard open compute."
Nuage is finding that businesses are benefitting from an SDN approach in surprising ways.
The company cites banks, which are forced by regulation to ensure that there are no security holes at their remote locations. One bank with 400 branches periodically sends individuals to each branch to check its configuration and ensure no human errors in the set-up could lead to a security flaw. With 400 branches, this procedure takes months and is costly.
With SDN and its policy-level view of all locations - what each site and what each group can do - there are predefined policy templates. There may be 10, 20 or 30 templates but they are finite, says Modarres: "At the push of a button, an organisation can check the templates, daily if needed".
This is not why a bank will adopt SDN, says Modarres, but the compliance department will strongly encourage the technology's use, especially when it saves the department millions of dollars in ensuring regulatory compliance.
Nuage Networks says it has 15 customer wins and 60 ongoing trials globally for its products. Customers that have been identified include healthcare provider UPMC, financial services provider BBVA, cloud provider Numergy, hosting provider OVH, infrastructure providers IDC Frontier and Evonet, and telecom providers TELUS and NTT Communications.
Ciena offers enterprises pick-and-choose vNFs
Ciena, working with partners, has developed a platform for service providers to offer enterprises network functions they can select and configure with the click of a button.
Dubbed Agility Matrix, the product enables enterprises to choose their IT and connectivity services using software running on servers. It also promises to benefit service providers' revenues, enabling more adventurous service offerings due to the flexibility and new business models the virtual network functions (vNFs) enable. Currently, managed services require specialist equipment and on-site engineering visits for their set-up and management, while the contracts tend to be lengthy and inflexible.
"It offers an ecosystem of vNF vendors with a licensing structure that can give operators flexibility and vendors a revenue stream," says Eric Hanselman, chief analyst at 451 Research. "There are others who have addressed the different pieces of the puzzle, but Ciena has wrapped the products with the business tools to make it attractive to all of the players involved."
Ciena has created an internal division, dubbed Ciena Agility, to promote the venture. The unit has 100 staff while its technology, Agility Matrix, is being trialled by service providers although Ciena has declined to say how many.
"Why a separate devision? To move fast in a market that is moving rapidly," says Kevin Sheehan, vice president and general manager of Ciena Agility.
The unit inherits Agility products previously announced by Ciena. These include the multi-layer WAN controller that Ciena is co-developing with Ericsson, and certain applications that run on the software-defined networking (SDN) controller.
"The unique aspect of Ciena’s offering is the comprehensive approach to virtualised functions, says Hanselman. "It tackles everything from service orchestration out to monetisation."
What has been done
Agility Matrix comprises three elements: the vNF Market, Director and the host. The vNF Market is cloud-based and enables a service provider to offer a library of vNFs that its enterprise customers can choose from. An enterprise IT manager can select the vNFs required using a secure portal.
The Director, the second element, does the rest. Built using OpenStack software, the Director delivers the vNFs to the host, an x86 instruction set-based server located at the enterprise's premises or in the service provider's central office or data centre.
The Director generates a software licence and, once the enterprise customer confirms the vNFs are working, produces post-payment charging data records. The vNF Market then invoices the service provider and pays the selected vNF vendors.
"Agility Matrix enables a pay-as-you-earn model for the service provider, much different from today's managed services providers' experiences," says Sheenan, who points out that a service provider currently buys custom hardware in bulk based on their enterprise-demand forecast, shipping products one by one. Now, with Agility Matrix, the service provider pays for a licence only after its enterprise customer has purchased one.
Ciena has launched Agility Matrix with five vNF partners. The partners and their vNF products are shown in the table.
AT&T Domain 2.0 programme
Ciena is one of the vendors selected by AT&T for its Supplier Domain 2.0 programme. Does AT&T's programme influence this development?
“We are always working with our customers on addressing their current and future problems," says Sheehan. "When we bring something like Agility Matrix to the market, it is created by working with our partners and customers to develop a solution that is designed to meet everyone’s needs."
"Ciena has application programming interfaces that can support integration at several levels, but it is not clear that Agility is part of the deployment within Domain 2.0," says Hanselman. "The interesting things in Domain 2.0 are the automation and virtualisation pieces; Ciena can handle the automation part with its existing products."
Meanwhile, AT&T has announced its 'Network on Demand' that enables businesses to add and change network services in 'near real-time' using a self-service online portal.
Huawei joins imec to research silicon photonics
Huawei has joined imec, the Belgian nano-electronics research centre, to develop optical interconnect using silicon photonics technology. The strategic agreement follows Huawei's 2013 acquisition of Caliopa, a former imec silicon photonics spin-off.
“Having acquired cutting-edge expertise in the field of silicon photonics thanks to our acquisition of Caliopa last year, this partnership with imec is the logical next move towards next-generation optical communication,” says Hudson Liu, CEO at Huawei Belgium.
Imec's research focus is to develop technologies that are three to five years away from production. "Imec works with leading IC manufacturers and fabless companies in the field of CMOS fabrication," says Philippe Absil, department director for 3D and optical technologies at imec. "One of the programmes with our co-partners is about optical interconnect and silicon photonics, and Huawei is one of the participating companies."
Imec's research concentrates on board-to-board and chip-to-chip interconnect. The optical interconnect work includes increasing interface bandwidth density, reducing power consumption, and achieving thermal stability and system-cost reduction.
The research centre has demonstrated high-bandwidth interfaces as part of work with Chiral Photonics, which makes multi-core fibre. Imec has developed a 2D ring of grating couplers that allows coupling between the silicon photonics chip and Chiral's 61-core fibre. "A grating coupler is a sub-wavelength structure that diffracts the light from a waveguide in a vertical direction towards the fibre above the chip," says Absil. This contrasts with traditional edge coupling to a device, achieved by dicing or cleaving a facet on the waveguide, he says.
Another research focus is how to reduce device power consumption and achieve thermal stability. One silicon photonics component that dictates the overall power consumption is the modulator, says Absil. "The Mach-Zehnder modulator is known to consume significant amounts of power for chip-to-chip distances," he says. "The alternative is to use resonating-based modulators but these have to be thermally controlled, and that has an associated power consumption."
Imec is looking at ways to reduce the thermal control needed and is investigating the addition of materials to silicon to create resonator modulators that do away with the need for heating.
The system-cost reduction work looks at packaging. "Eventually, we want to get the optical transceiver inside a host IC," says Absil. "That package has to enable an optical pass-through, whether it is fibre or an optically-transparent package." Such a requirement differs from established CMOS packaging technology. "The programme is also looking to explore new types of packaging for enabling this optical pass-through," he says.
Absil says certain programme elements are two years away from being completed. "In the programme, we have topics that are closer to being adopted and some that are further away, maybe even to 2020."
Multi-project wafer service
Imec is part of a consortium of EC research institutes that provides low-cost access to companies that don't have the means to manufacture their own silicon photonics designs. Known as Essential, the EC's Seventh Framework (FP7) programme is an extension of the ePIXfab silicon photonics multi-project wafer initiative. "Imec is offering one flavour of the technology, Leti is also offering a flavour, and then there is IHP and VTT," says Absil. Once the Essential FP7 project is completed, the service will be continued by the Europractice IC service.
Has imec seen any growth now that the funding for OpSIS, the multi-project wafer provider, has come to an end? "We see decent contributions but I wouldn't say it is exponential growth," says Absil, who notes that the A*STAR Institute of Microelectronics in Singapore that OpSIS used continues to offer a multi-project wafer service.
Status of silicon photonics
Despite announcements from Acacia and Intel, and Finisar revealing at ECOC '14 that it is now active in silicon photonics, 2014 has been a quiet year for the technology.
"Right now it is a bit quiet because companies are investing in development," says Absil. "There is not so much incentive to publish this work." Another factor he cites for the limited news is that there are vertically-integrated vendors that are putting the technology in their servers rather than selling silicon-photonics products directly.
"This is only first generation," says Absil. "As it picks up, there will be more incentive to work on a second generation of silicon photonics which will depart from what we know from the early work published by Intel and Luxtera."
The opportunities this next-generation technology will offer are 'quite exciting', says Absil.
Mobile fronthaul: A Q&A with LightCounting's John Lively
LightCounting Market Research's report finds that mobile fronthaul networks will use over 14 million optical transceivers in 2014, resulting in a market valued at US $530 million. This is roughly the size of the FTTx market. However, unlike FTTx, sales of fronthaul transceivers will nearly double in the next five years, to exceed $900 million. A Q&A with LightCounting's principal analyst, John Lively.

Q. What is mobile fronthaul?
There is a simple explanation for mobile fronthaul but that belies how complicated it is.
The equipment manufacturers got together about 10 years ago and came up with the idea to separate the functionality within a base station. The idea is that if you separate the functionality into two parts, you can move some of it to the tower and thereby reduce the equipment, power and space needed in the hut below. That is the distributed base station.
So instead of a large chassis base station, the current equipment is split in two: a baseband unit or BBU, which is a smaller rack-mounted unit, and the remote radio unit (RRU), sometimes called the remote radio head, mounted at the top of the tower next to the antennas. The link between the two units is defined as fronthaul.
Q. What role does optics have in mobile fronthaul?
In the old monolithic base station, the connection between the two parts was an inch or two of copper. Once you have half the equipment up on the tower, obviously a few inches of copper is not going to suffice.
They found that copper is a poor choice even if the BBU is at the bottom of the tower. Because the signal between the two is a radio frequency analogue one, the signal is not compressed and so has a fairly high bandwidth.
One statistic I saw is that if you use copper cable instead of fibre, the difference between the two just in terms of weight is 13x. And there are things to consider like the wind load and ice load on these towers. So you want small diameter, lightweight cables. So even if there were no considerations of distance, there are basic physical factors that favour fibre for this link. That is the genesis of fronthaul.
But then people realised: We have a fibre connection, we can move the BBU; now we can go tens of kilometers if we want to. Operators can then consider aggregating BBUs in central locations that serve multiple radio macrocells. This is called centralised RAN.
Centralised RAN reduces cost simply by saving real-estate, space and power. With the right equipment, you can also allocate processing capacity dynamically among multiple cells and realise greater efficiencies.
So there are layers of benefits to fronthaul. It starts with simple things like weight and the inability to shed ice, getting down to annual operating costs and the investment needed in future wireless capacity. Fronthaul is a concept with much to offer.
Q. What is driving mobile fronthaul adoption?
What has brought fronthaul to the fore has been the global deployment of LTE. Fronthaul is not LTE-specific; distributed base station equipment has been available for HSPA and other 3G equipment. But in the last 3-4 years, we have had a massive upgrade in global infrastructure with many operators installing LTE. That is what has driven the growth in fronthaul, taking it from a niche to a mainstream part of the network.
Q. What are the approaches for mobile fronthaul?
The fronthaul that we have heard about from component vendors is simple point-to-point grey optics links. But let me start by defining CPRI. As part of the development of distributed base stations, a bunch of equipment vendors defined a way the signals would be transmitted between the BBU and the RRU, and it is called the Common Public Radio Interface or CPRI. As part of the specification, they define minimum requirements from the optical links, and they go so far as to say that these can be met with existing optics including several Fibre Channel devices.
As part of LightCounting's vendor surveys, we know that the predominant mode of implementation of fronthaul today is grey optics. That paints one picture: fronthaul is simple point-to-point grey optics. Some of the largest deployments recently have been of that mode, with China Mobile being the flagship example.
However, grey optics is not the only scheme, and some mobile operators have opted to do it differently.
A competing scheme is simple wavelength-division multiplexing (WDM) - a coarse WDM multi-channel coloured optical system. It is obviously simpler than long-haul: not 80 channels of closely spaced lambdas but systems more like first-generation WDM long-haul of 10 or 15 years ago, using 16 channels.
At first glance, it appears that the WDM approach is a next-generation scheme. But that is not the case; it has been deployed. South Korea's SK Telecom used a WDM fronthaul solution when building its LTE network.
Q. Is it clear what operators prefer?
Both schemes have pros and cons. If there is a scarcity of fibre - you are leasing fibre from a third party, for example - every additional fibre you use costs money. Or you have to deploy new fibre, which is super expensive. Then a WDM solution looks attractive.
Another benefit, which is interesting, is that if you are a third-party provider of fronthaul, such as a tower company or a cable operator that wants to provide fronthaul just as it provides mobile backhaul, you need a demarcation point so that when there is a problem, you can say where your responsibility begins and ends.
There is no demarcation point with point-to-point links, it is just fibre running directly from operator equipment from Point A to Point B. With WDM systems, you have a natural demarcation point: the add/ drop nodes where the signals get onto the WDM wavelengths.
For example, a tower may serve three operators. Each operator would then use short-reach grey optics from their RRU to connect to the add/drop node, which may be at the bottom of or on the tower. Otherwise, when there is a fault, who is responsible? That is another advantage of the WDM scheme.
It is not unlike the situation with fibre-to-the-x: some places have fibre-to-the-home, some fibre-to-the-curb, some fibre-to-the-basement. There are different scenarios having to do with density, operator environment or regulation that create different optimal solutions for each case. There is no one-size-fits-all.
Q. What optical modules are used for mobile fronthaul and how will this change over the next five years?
The RRHs typically require 3 or 6 Gigabit-per-second (Gbps). These are CPRI standard rates that are multiples of a basic rate. In some cases when they are loaded up with multiple channels - daisy-chaining the RRUs - you may require 10Gbps.
From our survey data, in 2013 the mix was 3 and 6Gbps devices primarily, and this year we saw a shift away from 3 and more towards 6 and 10 Gbps. We believe that was skewed to some degree by China Mobile, which in many areas is putting up high capacity LTE systems with multiple channels, unlike many other operators that are doing a multi-phase LTE deployment, lighting one channel to start with and adding capacity as needed.
There is also some demand for 12.5Gbps but nothing beyond that, and 12.5Gbps demand is rather small and unlikely to grow quickly. That is because the individual RRHs are not going up in capacity. Rather, fronthaul keeps up with bandwidth demand mainly through the proliferation of links rather than by increasing the speed of individual links.
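For reference, the CPRI line rates Lively refers to are fixed multiples of a 614.4 Mbps basic rate; the rounded '3, 6 and 10 Gbps' figures above sit on this ladder. A minimal sketch of the first seven rate options (later CPRI versions add faster ones):

```c
#include <stdio.h>

int main(void) {
    const double basic_mbps = 614.4;                 /* CPRI option 1  */
    const int multiple[] = {1, 2, 4, 5, 8, 10, 16};  /* options 1 to 7 */

    for (int opt = 0; opt < 7; opt++)
        printf("CPRI option %d: %7.1f Mbps\n", opt + 1,
               basic_mbps * multiple[opt]);
    return 0;
}
```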
Q. A market nearly doubling in five years, that is a healthy optical component segment?
The growth is good. But like everything in optical components, it is questionable whether vendors will find a way to make it profitable. The technology specifications are not particularly challenging, so you can expect competition to be pretty severe for this market.
We are already seeing several Chinese makers with low manufacturing costs establishing themselves among the top suppliers in this market.
Q. Besides market size, what were other findings of the report?
I do expect WDM systems to become more widespread over the next five years. It makes sense that not everyone will want to do the brute force method of a link for every RRU out there. This is probably the biggest area of uncertainty, too: to what extent will WDM catch up or displace first generation grey optics?
The other thing to think about is what happens next. LTE deployments are well underway, a bit more than half way done worldwide. And it will be at least five years before the next big cycle: people are only just starting to talk about 5G. What is fronthaul going to look like in a 5G system?
It is hard to answer that with any clarity because 5G systems are not yet defined. What I find fascinating is that they are talking about multi-service access networks instead of fixed and mobile broadband being separate.
With WDM-PON and other advanced access networks, there is a growing belief that fronthaul could be carried over existing networks rather than having purpose-built fronthaul and backhaul networks. Fronthaul may thus go away and just be a service that tags onto some other networking equipment in the 5-10 year timeframe.
Q. Did any of the findings surprise you?
One is the fact that WDM is being deployed today.
Another is the size of the market: the component revenues are as big as FTTx. If you think about it, it makes sense: they are both serving consumers and are similar types of applications in terms of what they are doing: one is fixed broadband and one is mobile broadband.
Q. What are the developments to watch in the next few years regarding mobile fronthaul?
The next five years, the key thing to watch is the adoption of WDM in lieu of point-to-point grey optics. Beyond that, for the next generation, what fronthaul will be needed in 5G networks?
Books in 2014 - Part 2
More book recommendations, from Infonetics Research's Andrew Schmitt and ADVA Optical Networking's Ulrich Kohn.
Andrew Schmitt, principal analyst for carrier transport networking at Infonetics Research
It has been a bit of a thin year for me. And what I’ve read is a little outside the job. It seems like with all of the new media at hand I make less time for long-form consumption.
My wife is a big rower and I bought The Boys in the Boat: Nine Americans and their epic quest for gold at the 1936 Berlin Olympics by Daniel James Brown for her. But then someone recommended it to me and I started reading it before she could. It is a fantastic underdog story about the University of Washington crew team and their road to the Berlin Olympics. It has lots of colour about what 1920s and 1930s America was like and it does a good job of conveying the subtleties of the sport. The central character has everything in life stacked against him but relies on a bottomless ability to suffer both in and out of the boat to grind his way towards a goal. There is so much vivid detail about the personalities and the races that I am slightly skeptical about whether it was all accurate but a fantastic read nonetheless.
Probably my favourite book of 2014 was Zero to One: Notes on startups, or how to build the future by Peter Thiel and Blake Masters, and I’m sure I won’t be the only one to mention it. It is like the anti-business book, blowing up all of the conventional thoughts surrounding start-ups and makes for a refreshing read. Excellent signal-to-noise ratio. Blake Masters deserves more credit than he has received for canning the thoughts of Peter Thiel in a very readable way.
I re-read Fahrenheit 451 by Ray Bradbury after 20-plus years. I decided to read this again with more life experience under my belt. Sci-fi when first written 60 years ago, it now outlines a frightening and plausible adjacent reality to ours. I don’t watch much TV but I see others consumed by popular culture media; it isn’t really that different than what happens in this book – only the outcome is. Like Brave New World, it was a remarkable window to the future when written.
In the last 20 years the media’s reach and power has soared with the Internet and mobile. Media is a weapon whose power has vastly expanded from 1950s America, not unlike the exponential increase in weaponry after World War II. This creation of so many new media sources was supposed to bring many more voices and result in a more balanced media. I am no longer confident this is happening as folks can now pick and choose the media sources that reinforce their own biases – just as the characters in 451 do. So far this has been harmless in the USA but I’m afraid of what might happen here or in other parts of the world. I suppose one benefit to the current media structure is it is still 'Antifragile', as it is free, but that is not the case in places like China.
Michael Lewis is one of those authors who can take any subject and make it interesting, and Wall St. certainly grants him a home-field advantage. Flash Boys is not one of my favourite books of his, but he does a great job outlining how high-frequency trading is conducted, allowing one to understand how it happens. It is shameful that the existing market structure knowingly allows buy/sell orders to be front-run; it was like the owners of a large bazaar allowing pickpockets to roam freely. There is some colourful mention of the use of optics and the physical means of getting ahead of legitimate orders at exchange points throughout the country.
Tyler Hamilton’s story The Secret Race: Inside the hidden world of the Tour de France closed the book on the Lance Armstrong scandal, and removed the mystery and shadows of doubt about what happened during the Tour de France during those years. The book walks the reader through what happened during Lance’s big victories, and Tyler’s solo career and subsequent downfall. You realise that everyone was doping, and that competing was not possible otherwise.
I had the chance to meet Tyler, ride with him and talk about his career at length. His brother is my son’s ski coach. He told me that the big lesson here applies to everyone – at many points in your career someone is going to make you do something wrong or illegal in order to get ahead, and they will tell you that it is OK because everyone else does it. That is where you have to be prepared to find another path, even at the expense of your dreams.
Once you lose trust, which he has, you can never go back. It is an extremely important lesson for my work too.
Ulrich Kohn, director of technical marketing at ADVA Optical Networking
The book that springs to mind is Economics of Good and Evil: The Quest for Economic Meaning from Gilgamesh to Wall Street by Tomáš Sedláček. It is one of those books you buy in passing, put aside, happen to start reading months later and become completely absorbed. It combines a broad outline of the history of economics with views on social, cultural and ethical development.
What’s clear is that much thought and research went into the writing. However, there is also plenty of speculation. So much so, that it is impossible to read without a great deal of personal reflection. This is especially true with Adam Smith’s concept of the ‘Invisible Hand,’ which assumes that stronger players who strive to further their own gains act in a way that also serves society.
As a communications engineer, this intrigues me.
The Internet has changed almost every aspect of our social and business lives. It enables us to continually stay in contact with our friends, family and colleagues. We have access to almost any desired information in real-time. We’re living in an era of shared wisdom and opinions. What or who controls this? Is there an Invisible Hand that makes sure we as a society make best use of innovative communication services similar to the model described by Adam Smith in the 18th century?
The book discusses Smith’s model and outlines the criticism and limitations of the Invisible Hand, which could not solve all economic problems, such as the high unemployment rates in Europe in the early 20th century. John Maynard Keynes' suggestions for governmental control arose because self-regulation did not lead to a fair balance among all participants in the economic system. Today, it is widely accepted that fiscal and legal measures need to act as a safeguard.
The Internet is mainly a self-controlled environment. This approach so far has served us well. An Invisible Hand made sure that all participants benefitted. However, there is increasing concern that strong players can gain value in a way that upsets that balance. Various organisations are initiating action, such as the EU’s intention to strengthen users' rights over ownership of their data and their network identities. It is fascinating to see those analogies between the historic development of economic systems and the present discussion about the Internet.
Tomáš Sedláček combines economic history with social, cultural and ethical development in a way that inspires and triggers further thought. In his book, he outlines how ethical principles shape economic systems. I would love to see a similar book on the impact of communication on our society and culture.
The development of the Internet is driven by technical capabilities and innovative applications. A reflection on the social and cultural impact of such innovation is limited to expert discussion; a wider discussion could be fruitful, and some would argue, urgently required.
For Part 1, click here
OpenCL and the reconfigurable data centre
Part 3: General purpose data centres
Xilinx's adoption of the Open Computing Language (OpenCL) as part of its SDAccel development tool is important, not just for FPGAs but also for the computational capabilities of the data centre.
The FPGA vendor is promoting its chips as server co-processors to tackle complex processing tasks such as image searches, encryption, and custom computation.
Search-engine specialists such as Baidu and Microsoft have seen a greater amount of traffic for image and video searches in the last two years, says Loring Wirbel, senior analyst at market research firm The Linley Group: "All of a sudden they are seeing these accelerator cards as being necessary for general-purpose data centres."
Xilinx and Altera have been way ahead of the niche FPGA vendors, indeed ahead of a lot of the network processor and graphics processor (GPU) vendors, in recognising the importance of OpenCL
OpenCL was developed by Apple and is being promoted by the Khronos Group, an industry consortium set up to promote the integration of general purpose microprocessors, graphics processors, and digital signal processing blocks. And it is the FPGA vendors that are playing a pivotal role in OpenCL's adoption.
"Xilinx and Altera have been way ahead of the niche FPGA vendors, indeed ahead of a lot of the network processor and graphics processor (GPU) vendors, in recognising the importance of OpenCL," says Wirbel.
Altera announced the first compiler kit for OpenCL in 2013. The significant thing Altera did was develop 'channels' for accelerator 'kernels'. Using the channels, kernels - the tasks to be accelerated in hardware - communicate with each other without needing the host processor. "It offers an efficient way for multiple co-processors to talk to each other," says Wirbel. The OpenCL community has since standardised elements of Altera's channels, now referred to as pipes.
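As a rough illustration of the mechanism, the sketch below uses the standardised OpenCL 2.0 'pipe' construct that grew out of Altera's channels: two kernels exchange data directly on the device, without a round trip through the host. The kernel names and the processing step are invented for illustration and are not taken from Altera's or Xilinx's tools.

```c
/* OpenCL C 2.0: a two-stage pipeline linked by a pipe. */
kernel void producer(global const float *in,
                     write_only pipe float stage_pipe,
                     const int n)
{
    int i = get_global_id(0);
    if (i < n)
        write_pipe(stage_pipe, &in[i]);   /* push one element downstream */
}

kernel void consumer(read_only pipe float stage_pipe,
                     global float *out,
                     const int n)
{
    int i = get_global_id(0);
    float v;
    if (i < n && read_pipe(stage_pipe, &v) == 0)
        out[i] = 2.0f * v;                /* a trivial second processing stage */
}
```

The host creates the pipe once and passes it to both kernels as an argument, so the intermediate data never leaves the accelerator.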
"What Xilinx has brought with SDAccel is probably more significant in that it changes the design methodology for bringing together CPUs and GPUs with FPGAs," says Wirbel. Xilinx's approach may be specific to its FPGAs but Wirbel expects other firms to adopt a similar design approach. "Xilinx has created a new way to look at design that will ease the use of parallelism in general, and OpenCL," says Wirbel. (see SDAccel design approach, below.)
"Altera and Xilinx should be saluted in that they have encouraged people to start looking at OpenCL as a move beyond C for programming everything," says Wirbel. This broadening includes programming multi-core x86 and ARM processors, where a good parallel language is desirable. "You get better performance moving from C to C++, but OpenCL is a big jump," he says.
The future says that every data centre is going to become an algorithmically-rich one that can suddenly be reallocated to do other tasks
Wirbel does not have hard figures as to how many of a data centre's servers will have accelerator cards but he believes that every data centre is going to have specialised acceleration for tasks such as imaging and encryption as a regular feature within the next year or two. His educated guess is that it will be one accelerator card per eight host CPUs and possibly one in four.
Longer term, such acceleration will change the computational nature of the data centre. "The future says that every data centre is going to become an algorithmically-rich one that can suddenly be reallocated to do other tasks," he says. It could mean that institutions such as national research labs that tackle huge-scale simulation work may no longer require specialist supercomputer resources.
"That is a little bit exaggerated because what will really happen is you will have to have whole clusters of data centres around the country allocated to ad-hoc virtual multiprocessing on very difficult problems," says Wirbel. "But the very notion that there needs to be assigned computers in data centres to one set of problems will be a thing of the past."
How does that relate to Xilinx's SDAccel and OpenCL?
"Some of this will happen because of tools like OpenCL as the language and tools like SDAccel for improving FPGAs," says Wirbel.
The SDAccel design approach
Xilinx has adopted the concept of co-simulation at an early stage of an FPGA-based co-processor design, alongside a server's x86 processor.
Wirbel says that despite all the talk about co-simulation over the last decade, little has been done in practice. With co-simulation, an x86 processor or a graphics processor is simulated with a designer's IP logic that makes up an ASIC or an FPGA design.
Making FPGAs with very tightly-packed processors and with a very low power dissipation is critical; it is a big deal
"What Xilinx did is they said: the biggest problem is designers having to redo an FPGA, even placing and routing elements and going back to using back-end EDA [electronic design automation] tools," says Wirbel. "Maybe the best way of doing this is recognising we have to do some early co-simulation on a target x86 CPU board."
This is where OpenCL plays a role.
"The power of OpenCL is that it lets you define an acceleration task as a kernel," says Wirbel. It is these acceleration kernels that are sent to the hardware emulator with the x86 on board. The kernels can then be viewed in the co-simulation environment working alongside the x86 such that any problems encountered can be tackled, and the two optimised. "Then, and only then, do you send it to a compiler for a particular FPGA architecture."

The challenge for Xilinx is keeping a lid on the FPGA accelerator card's power consumption given the huge number of servers in a data centre.
"The large internet players have got to be able to add these new features for almost zero extra power," says Wirbel. "Making FPGAs with very tightly-packed processors and with a very low power dissipation is critical; it is a big deal."
For Part 1, click here
For Part 2, click here
What role for FPGA server co-processors in virtual routing?
IP routing specialists have announced their first virtual edge router products that run on servers. These include Alcatel-Lucent with its Virtualized Service Router and Juniper with its vMX. Gazettabyte asked Alcatel-Lucent's Steve Vogelsang about the impact FPGA accelerator cards could have on IP routing.

Steve Vogelsang, IP routing and transport CTO, Alcatel-Lucent
The co-processor cards in servers could become interesting for software-defined networking (SDN) and network function virtualisation (NFV).
The main challenge is that we require that our virtualised network functions (vNFs) and SDN data plane can run on any cloud infrastructure; we can’t assume that any specific accelerator card is installed. That makes it a challenge.
I can imagine, over time, that DPDK, the set of libraries and drivers for packet processing, and other open source libraries will support co-processors, making it easier to exploit by an SDN data plane or vNF.
For now we’re not too worried about pushing the limits of performance because the advantage of NFV is the operational simplicity. However, when we have vNFs running at significant scale, we will likely evaluate co-processor options to improve performance. This is similar to what Microsoft and others are doing with search algorithms and other applications.
Note that there are alternative co-processors that are more focussed on networking acceleration. An example is Netronome, which makes purpose-built network co-processors for the x86 architecture. I am not sure how they compare to Xilinx for networking functionality, but they may outperform FPGAs and be a better option if networking is the focus.
Some servers are also built to enable workload-specific processing architectures. Some of these are specialised on a single processor architecture while others such as HP's Moonshot allow installation of various processors including FPGAs.
When we have vNFs running at significant scale, we will likely evaluate co-processor options to improve performance
I don’t expect FPGA accelerator cards will have much impact on network processors (NPUs). We or any other vendor could build an NPU using a Xilinx or another FPGA. But we get much more performance by building our own NPU because we control how we use the chip area.
When designing an FPGA, Xilinx and other FPGA vendors have to decide how to allocate chip space to I/O, processing cores, programmable logic, memory, and other functional blocks. The resulting structure can deliver excellent performance for a variety of applications, but we can still deliver considerably more performance by designing our own chips allocating the chip space needed to the required functions.
I have experience with my previous company which built multiple generations of NPUs using FPGAs, but they could not come close to the capabilities of our FP3 chipset.
For Part 1, click here
For Part 3, click here
FPGAs embrace data centre co-processing role
The PCIe accelerator card has a power budget of 25W. Hyper data centres can host hundreds of thousands of servers whereas other industries with more specialist computation requirements use far fewer servers. As such, they can afford a higher power budget per card. Source: Xilinx
Xilinx has developed a software-design environment that simplifies the use of an FPGA as a co-processor alongside the server's x86 instruction set microprocessor.
Dubbed SDAccel, the development environment enables a software engineer to write applications using OpenCL, C or the C++ programming language running on servers in the data centre.
Applications can be developed to run on the server's FPGA-based acceleration card without requiring design input from a hardware designer. Until now, a hardware engineer has been needed to convert the code into the RTL hardware description language that is mapped onto the FPGA's logic gates using synthesis tools.
"[Now with SDAccel] you suffer no degradation in [processing] performance/ Watt compared to hand-crafted RTL on an FPGA," says Giles Peckham, regional americas and EMEA marketing director at Xilinx. "And you move the entire design environment into the software domain; you don't need a hardware designer to create it."
Data centre acceleration
The data centre is the first application targeted for SDAccel along with the accompanying FPGA accelerator cards developed by Xilinx's three hardware partners: Alpha Data, Convey and Pico Computing.
The FPGA cards, which connect to the server's host processor via the PCI Express (PCIe) interface, are aimed not just at leading internet content providers but also at institutions and industries that have custom computational needs. These include oil and gas, financial services, medical and defence companies.
PCIe cards have a power budget of 25W, says Xilinx. The card's power can be extended by adding power cables but considering that hyper data centres can have hundreds of thousands of servers, every extra Watt consumed comes at a cost.
Microsoft has reported that a production pilot it set up that had 1,632 servers using PCIe-based FPGA cards, achieved a doubling of throughput, a 29 percent lower latency, and a 30 percent cost reduction compared to servers without accelerator cards
In contrast, institutions and industries use far fewer servers in their data centres. "They can stomach the higher power consumption, from a cost perspective and in terms of dissipating the heat, up to a point," says Peckham. Their accelerator cards may consume up to 100W. "But both have this limitation because of the power ceiling," he says.
China’s largest search-engine specialist, Baidu, uses neural-network processing to solve problems in speech recognition, image search, and natural language processing, according to The Linley Group senior analyst, Loring Wirbel.
Baidu has developed a 400 Gigaflop software-defined accelerator board that uses a Xilinx Kintex-7 FPGA that plugs into any 1U or 2U high server using PCIe. Baidu says that the FPGA board achieves four times higher performance than graphics processing units (GPUs) and nine times higher performance than CPUs, while consuming between 10-20W.
Microsoft has reported that a production pilot it set up that had 1,632 servers using PCIe-based FPGA cards, achieved a doubling of throughput, a 29 percent lower latency, and a 30 percent cost reduction compared to servers without accelerator cards.
"The FPGA can implement highly parallel applications with the exact hardware required," says Peckham. Since the dynamic power consumed by the FPGA depends on clock frequency and the amount of logic used, the overall power consumption is lower than a CPU or GPU. That is because the FPGA's clock frequency may be 100MHz compared to a CPU's or GPU's 1 GHz, and the FPGA implements algorithms in parallel using hardware tailored to the task.
FPGA processing performance/ W for data centre acceleration tasks compared to GPUs and CPUs. Note the FPGA's performance/W advantage increases with the number of software threads. Source: Xilinx
SDAccel
To develop a design environment that a software developer alone can use, Xilinx has to make SDAccel aware of the FPGA card's hardware, using what is known as a board support package. "There needs to be an understanding of the memory and communications available to the FPGA processor," says Peckham. "The processor then knows all the hardware around it."
Xilinx claims SDAccel is the industry's first architecturally optimising compiler for FPGAs. "It is as good as hand-coding [RTL]," says Peckham. The tool also delivers a CPU-/ GPU-like design environment. "It is also the first tool that enables designs to have multiple operations at different times on the same FPGA," he says. "You can reconfigure the accelerator card in runtime without powering down the rest of the chip."
SDAccel and the FPGA cards are available, and the tool is with several customers. "We have proven the tool, debugged it, created a GUI as opposed to a command line interface, and have three FPGA boards being sold by our partners," says Peckham. "More partners and more boards will be available in 2015."
Peckham says the simplified design environment appeals to companies not addressing the data centre. "One company in Israel uses a lot of Virtex-6 FPGAs to accelerate functions that start in C code," he says. "They are using FPGAs but the whole design process is drawn-out; they were very happy to learn that [with SDAccel] they don't have to hand-code RTL to program them."
Xilinx is working to extend OpenCL for computing tasks beyond the data centre. "It is still a CPU-PCIe-to-co-processor architecture but for wider applications," says Peckham.
For Part 2, click here
For Part 3, click here
Books in 2014 - Part 1
Gazettabyte is asking various industry figures to recommend key books they have read this year.
Joe Berthold, vice president, network architecture at Ciena
Antifragile: Things that Gain from Disorder by Nassim Nicholas Taleb
I really enjoyed The Black Swan: The Impact of the Highly Improbable when I read it several years ago, so when I learned about Antifragile from a friend during a chat at an NSF workshop at the end of 2013 I decided to read it. He warned me that it was tough going at times. I enjoyed it so much I decided to reread The Black Swan and then also read Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, which I had not read. Then I went back and read Antifragile again. Yes, it was tough going at times, but I found it a very worthwhile book to read and ponder.
Antifragile and Taleb’s other books are relevant to life in general, but have a special relevance to the broad networking and information technology industries as they undergo sweeping change.
Revolutionary change creates great risks and great opportunities, and plenty of disorder. We all have to be aware of what Taleb calls the 'Turkey problem'. The turkey enjoys his life, convinced that the farmer who feeds him every day will do so forever. Then early in November he loses his head, and becomes the guest of honour at a Thanksgiving dinner.
To avoid the fate of the turkey we have to avoid the misconception that we are in total control of the future, and that the narrative of the future we have created in our minds is pretty certain. Recognising the uncertainty the future holds, both in actions of customers and competitors, and avoiding paths that have limited upside potential and large downside impacts is, I believe, one of his main insights.
Thinking, Fast and Slow by Daniel Kahneman
I read this book a couple of years ago and was reminded of it again while binging on Taleb’s books, as he refers to Kahneman’s research many times. You can see a recording of them both during an interview about Antifragile at the New York Public Library on YouTube (click here)
Kahneman won a Nobel Prize for his research on how our brains work, and identified two systems in operation, the fast and slow systems. The fast system is intuitive and emotional, while the slow system is deliberate and logical. The fast system is easy, and the slow system is hard, so we often default to the fast and draw erroneous conclusions. It is a fascinating book, as it describes, in terms accessible to a layman, that we are all quite imperfect in our thinking. The real value of this book for our professional lives is he makes us aware of these systems so that we can try to avoid mental glitches that can get us into trouble.
Flash Boys: A Wall Street Revolt by Michael Lewis
This is a very interesting explanation of the workings of the capital markets that is worth reading from that perspective, but the anecdotes at the beginning and end were particularly interesting because the subject matter was optical communications. At the beginning Michael describes the construction of a new fibre optic cable from Carteret, NJ, the primary data centre of the NASDAQ, to Chicago. The cable, built by Spread Networks, aimed to shave a few milliseconds off the latency between the two markets in order for computer-driven traders to profit from price differences. Extreme care was taken to make the path as straight as possible, and they did shave a few milliseconds off the transit time delivered by current commercial cables.
Fast-forward through all the good stuff in the book about high-frequency trading to the end. The last description is of a visit back to the vicinity of the Spread Networks cable route, and a climb up a hill to the base of a microwave tower. Others had the bright idea of using a vastly inferior technology, in this case microwave transmission, to create communications links for the same purpose between the two markets. Microwave transmission is much inferior to optical fibre in channel speed and total link capacity. But since light in glass travels about a third slower than microwaves do in free space, the microwave system delivered much lower latency. It turned out the capacity of the microwave system was adequate for many purposes.
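The latency gap follows directly from the glass: light in silica fibre travels at roughly c divided by the fibre's group index of about 1.47, while a line-of-sight microwave hop runs at close to c. A small worked sketch, with the path length assumed for illustration rather than taken from the book:

```c
#include <stdio.h>

int main(void) {
    const double c_km_per_ms = 299792.458 / 1000.0;  /* speed of light, km per ms   */
    const double group_index = 1.47;                 /* typical for silica fibre    */
    const double path_km     = 1200.0;               /* assumed NJ-Chicago distance */

    double t_fibre_ms     = path_km * group_index / c_km_per_ms;  /* one-way */
    double t_microwave_ms = path_km / c_km_per_ms;                /* one-way */

    printf("fibre: %.2f ms, microwave: %.2f ms, saving: %.2f ms one-way\n",
           t_fibre_ms, t_microwave_ms, t_fibre_ms - t_microwave_ms);
    return 0;
}
```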
Again, the message to us in technology is not to be complacent, and not to dismiss technologies we deem to be inferior based on our current presumptions and biases. Stay vigilant and open to new ideas!
Peter Jarich, vice president for consumer and infrastructure services, Current Analysis
Undaunted Courage: Meriwether Lewis, Thomas Jefferson, and the Opening of the American West by Stephen Ambrose
I wish I could say I had to think about this, but I just don’t read many books. It’s always a new year’s resolution but rarely fulfilled.
But, here's one recommendation published in 1997. I decided to get deeper into the story of Lewis and Clark after a presentation that used their story as an introduction to changes taking place in the higher education system. I realised that the theme of exploring uncharted territories applied to much of what we’re going through in telecom right now, particularly with topics like 5G and SDN/NFV. In each case, you need the right combination of vision and execution to pull off a success. You saw that same dynamic embodied in Lewis and Clark.
I’d like to say there’s a way to link Ayn Rand’s 1937 novella Anthem to telecom, but I can’t. In reality, I’m still not quite sure what to make of it.
Brandon Collings, CTO for communications and commercial optical products at JDSU
As a father of younger children, I find I read a lot of children’s books. Quite a quantity and variety come through our hands and the good ones are always considerably more enjoyable for parent and kid. Assuming that many of the readers of this review are in a similar life situation (rather than at kindergarten reading level), here are some favourites of my family.
- The Ugly Pumpkin, by Dave Horowitz: A charming story about self-discovery and fitting in.
- Room on the Broom, by Julia Donaldson: The cadence, rhythm and rhyme make this story possibly more fun to read than to listen to. This book has it all: adventure, witches, dragons, magic, animals, good, evil, and even a broom with first-class seats.
- Ish, by Peter Reynolds: A cute story about viewing one’s self and the world through one’s own eyes rather than through others'.
Eric Hall, vice president of business development at Aurrion
It was tough to find time in 2014 for reading outside of work, but one memorable book was The Perfect Theory: A Century of Geniuses and the Battle over General Relativity by Pedro Ferreira, a professor of astrophysics at Oxford who traces the thinking on Einstein’s theory across the decades. The scientific narrative, in itself, provided a fascinating review of the physics, covering early theories through to more complex modern views of black holes and featuring cameos by notable names like Stephen Hawking.
Perhaps the more interesting takeaway, however, was the very circuitous route that the evolution followed. I think it is very easy to forget that scientific progress doesn’t follow a linear path driven exclusively by the 'correct solution' but is often more driven by the personalities involved. As a result, what might eventually become the more generally-accepted solution can be derailed and held up for years by the nay-saying of single strong voices (such as Einstein himself).
Such inefficiency is maybe more obvious in the product marketplace but also translates to the marketplace of ideas.
For Part 2, click here
