Rockley Photonics showcases its in-packaged design at OFC
The packaged design includes Rockley's own 2 billion transistor layer 3 router chip and its silicon photonics-based optical transceivers. The layer 3 router chip, described as a terabit device, also includes the mixed-signal circuits needed for the optical transceivers' transmit and receive paths.
Source: Rockley Photonics (annotated by Gazettabyte).
Rockley says it is using 500m-reach PSM4 transceivers for the design and that while a dozen ribbon cables are shown, this does not mean there are 12 100-gigabit PSM4 transceivers. The company is not saying what the total optical input-output is.
Source: Rockley Photonics (annotated by Gazettabyte).
The company has said it is not looking to enter the marketplace as a switch chip player competing with the likes of Broadcom, Intel, Cavium, Barefoot Networks and Innovium. To develop such a device and remain competitive requires considerable investment and that is not Rockley's focus. Instead, it is using its router chip as a demonstrator to show the marketplace what can be done and that the technology works.
When asked what progress Rockley is making showcasing its technology, its CEO Andrew Rickman said: “It is going very well but nothing we can say publicly."
The switch chip makers continue to use electrical interfaces for their state-of-the-art switches, which have a capacity of 12.8 terabits. It remains to be seen which generation of switch chip will finally adopt in-packaged optics and whether on-board optics designs such as COBO will be adopted first.
For the full interview with CEO Andrew Rickman, click here.
Rockley Photonics eyes multiple markets
Andrew Rickman, founder and CEO of silicon photonics start-up, Rockley Photonics, discusses the new joint venture with Hengtong Optic-Electric, the benefits of the company’s micron-wide optical waveguides and why the timing is right for silicon photonics.
Andrew Rickman
The joint venture between Rockley Photonics and Chinese firm Hengtong Optic-Electric is the first announced example of Rockley’s business branching out.
The start-up’s focus has been to apply its silicon photonics know-how to data-centre applications. In particular, Rockley has developed an Opto-ASIC package that combines optical transceiver technology with its own switch chip design. Now it is using the transceiver technology for its joint venture.
“It was logical for us to carve out the pieces generated for the Opto-ASIC and additionally commercialise them in a standard transceiver format,” says Andrew Rickman, Rockley’s CEO. “That is what the joint venture is all about.”
Rockley is not stopping there. Rickman describes the start-up as a platform business, building silicon photonics and electronics chipsets for particular applications including markets other than telecom and datacom.
Joint venture
Hengtong and Rockley have set up the $42 million joint venture to make and sell optical transceivers.
Known for its optical fibre cables, Hengtong is also a maker of optical transceivers and owns 75.1 percent of the new joint venture. Rockley gains the remaining 24.9 percent share in return for giving Hengtong its 100-gigabit QSFP transceiver designs. The joint venture also becomes a customer of Rockley’s, buying its silicon photonics and electronics chips to make the QSFP modules.
“Hengtong is one of the world’s largest optical fibre cable manufacturers, is listed on the Shanghai stock market, and sells extensively in China and elsewhere into the data centre market,” says Rickman. “It is a great conduit, a great sales channel into these customers.”
The joint venture will make three 100-gigabit QSFP-based products: a PSM4 and a CWDM4 pluggable module and an active optical cable. Rickman expects the joint venture to make other module designs and points out that Rockley participates in the IEEE standards work for 400 gigabits and is one of the co-founders of the 400-gigabit CWDM8 MSA.
Rockley cites several reasons why the deal with Hengtong makes sense. First, a large part of the bill of materials used for active optical cables is the fibre itself, something which the vertically integrated Hengtong can provide.
China also has a ‘Made in China 2025’ initiative that encourages buying home-made optical modules. Teaming with Hengtong means Rockley can sell to the Chinese telecom operators and internet content players.
In addition, Hengtong is already doing substantial business with all of the global data centres as a cable, patch panel and connector supplier, says Rickman: “So it is an immediate sales channel into these companies without having to break into these businesses as a qualified supplier afresh.”
A huge amount of learning happened and then what Rockley represented was the opportunity to start all over again with a clean sheet of paper but with all that experience
Bigger is Best?
At the recent SPIE Photonics West conference held in San Francisco, Rickman gave a presentation entitled Silicon Photonics: Bigger is Better. His talk outlined the advantages of Rockley’s use of three-micron-wide optical waveguides, bucking the industry trend of using relatively advanced CMOS processes to make silicon photonics components.
Rickman describes as seductive the idea of using 45nm CMOS for optical waveguides. “These things exist and work but people are thinking of them in the same physics that have driven microelectronics,” he says. Moving to ever-smaller feature sizes may have driven Moore’s Law but using waveguide dimensions that are smaller than the wavelength of light makes things trickier.
To make his point, he plots the effective index of a waveguide against its size in microns. The effective index is a unitless measure - the ratio of the phase delay over a unit length of waveguide to the phase delay over the same length in a vacuum. “Once you get below one micron, you get a waveguide that is highly polarisation-dependent and just a small variation in the size of the waveguide has a huge variation in the effective index,” says Rickman.
Such variations translate to inaccuracies in the operating wavelength. This impacts the accuracy of circuits, for example, arrayed-waveguide gratings built using waveguides to multiplex and demultiplex light for wavelength-division multiplexing (WDM).
“Above one micron is where you want to operate, where you can manufacture with a few percent variation in the width and height of a waveguide,” says Rickman. “But the minute you go below one micron, in order to hit the wavelength registration that you need for WDM, you have got to control the [waveguide’s] film thickness and line thickness to fractions of a percent.” That is a level of accuracy the semiconductor industry cannot match, he says.
A 100GHz WDM channel equates to 0.8nm when expressed using a wavelength scale. “In our technology, you can easily get a wavelength registration on a WDM grid of less than 0.1nm,” says Rickman. “Exactly the same manufacturing technology applied to smaller waveguides is 25 times worse - the variation is 2.5nm.”
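Rickman's 0.8nm figure follows from the standard small-spacing conversion between frequency and wavelength grids, Δλ ≈ λ²Δf/c, evaluated at a nominal 1550nm carrier (the 1550nm operating wavelength is an assumption here, not stated in the interview). A quick sketch in Python:

```python
# Convert a WDM channel spacing from frequency to wavelength terms.
# Uses the small-spacing approximation: delta_lambda ~= lambda^2 * delta_f / c.
C = 299_792_458  # speed of light in vacuum, m/s

def spacing_nm(delta_f_hz, wavelength_nm=1550.0):
    """Wavelength spacing (nm) equivalent to a frequency spacing (Hz)."""
    wavelength_m = wavelength_nm * 1e-9
    return (wavelength_m ** 2) * delta_f_hz / C * 1e9  # metres back to nm

# A 100GHz grid at 1550nm works out to roughly 0.8nm per channel,
# matching the figure quoted in the text.
print(round(spacing_nm(100e9), 2))
```

With a wavelength registration of 0.1nm versus 2.5nm, the claimed 25-times difference is simply the ratio of the two variations.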
Moreover, WDM technology is becoming increasingly important in the data centre. The 100-gigabit PSM4 uses a single wavelength, the CWDM4 uses four, while the newer CWDM8 MSA for 400 gigabits uses eight wavelengths. “In telecom, 90-plus wavelengths can be used; the same thing will come to pass in the years to come in data centre devices,” he says.
Rockley also claims it has a compact modulator that is 50 times smaller than competing modulators, even though those competitors are implemented using nanometre feature sizes.
We set out to generate a platform that would be pervasive across communications, new forms of advanced computing, optical signal processing and a whole range of sensor applications
Opto-ASIC reference design
Rockley’s first platform technology example is its Opto-ASIC reference design. The design integrates silicon photonics-based transceivers with an in-house 2 billion transistor switch chip all in one package. Rockley demonstrated the technology at OFC 2017.
“If you look around, this is something the industry says is going to happen but there isn't a single practical instantiation of it,” says Rickman who points out that, like the semiconductor industry, very often a reference design needs to be built to demonstrate the technology to customers. “So we built a complete reference design - it is called Topanga - an optical-packaged switch solution,” he says.
Despite developing a terabit-class packet processor, Rockley does not intend to compete with the established switch-chip players. The investment needed to produce a leading edge device and remain relevant is simply too great, he says.
Rockley has demonstrated its in-package design to relevant companies. “It is going very well but nothing we can say publicly,” says Rickman.
New Markets
Rockley is also pursuing opportunities beyond telecom and datacom.
“We set out to generate a platform that would be pervasive across communications, new forms of advanced computing, optical signal processing and a whole range of sensor applications,” says Rickman.
Using silicon photonics for sensors is generating a lot of interest. “We see these markets starting to emerge and they are larger than the data centre and communications markets,” he says. “A lot of these things are not in the public domain so it is very difficult to report on.”
Moreover, the company believes its technology gives it an advantage for such applications. “When we look across the other application areas, we don’t see the small waveguide platforms being able to compete,” says Rickman. Such applications can use relatively high power levels that exceed what the smaller waveguides can handle.
Rockley is sequencing the markets it will address. “We’ve chosen an approach where we have looked at the best match of the platform to the best opportunities and put them in an order that makes sense,” says Rickman.
Rockley Photonics represents Rickman’s third effort to bring silicon photonics to the marketplace. Bookham Technology, the first company he founded, built prototypes in several different areas but the market wasn't ready. In 2005 he joined start-up Kotura as a board member. “A huge amount of learning happened and then what Rockley represented was the opportunity to start all over again with a clean sheet of paper but with all that experience,” says Rickman.
Back in 2013, Rockley saw certain opportunities for its platform approach; since then, their maturity and relevance have increased dramatically.
“Like all things it is always down to timing,” says Rickman. “The market is vastly bigger and much more ready than it was in the Bookham days.”
Reflections on OFC 2017
Mood, technologies, notable announcements - just what are the metrics to judge the OFC 2017 show held in Los Angeles last week?
It was the first show I had attended in several years and the most obvious change was how natural the presence of the internet content providers now is alongside the telecom operators and systems vendors exhibiting at the show. Chip companies, while also present, were fewer than before.
Source: OSA
Another impression was the latest buzz terms: 5G, the Internet of Things and virtual reality-augmented reality. Certain of these technologies are more concrete than others, but their repeated mention suggests a consensus that the topics are real enough to impact optical components and networking.
It could be argued that OFC 2017 was the year when 400 Gigabit Ethernet became a reality
The importance of 5G needs no explanation while the more diffuse IoT is expected to drive networking with the huge amounts of data it will generate. But what are people seeing about virtual reality-augmented reality that merits inclusion alongside 5G and IoT?
Another change is the spread of data rates. No longer does one rate represent the theme of an OFC such as 40 Gigabits or 100 Gigabits. It could be argued that OFC 2017 was the year when 400 Gigabit Ethernet became a reality but there is now a mix of relevant rates such as 25, 50, 200 and 600 gigabits.
Highlights
There were several highlights at the show. One was listening to Jiajin Gao, deputy general manager at China Mobile Technology, open the OIDA Executive Forum event by discussing the changes taking place in the operator's network. Gao started by outlining the history of China Mobile's network before detailing the huge growth in ports at different points in the network over the last two years. He then outlined China Mobile's ambitious rollout of new technologies this year and next.
China's main three operators have 4G and FTTx subscriber numbers that dwarf the rest of the world. Will 2017 eventually be seen as the year when the Chinese operators first became leaders in telecom networking and technologies?
The Executive Forum concluded with an interesting fireside discussion about whether the current optical market growth is sustainable. The consensus among representatives from Huawei, Hisense, Oclaro and Macom was that it is; that the market is more varied and stable this time compared to the boom and bust of 1999-2001. As Macom’s Preetinder Virk put it: "The future has nothing to do with the past". Meanwhile, Huawei’s Jeffrey Gao still expects strong demand in China for 100 gigabits in 2017 even if growth is less strong than in 2016. He also expects the second quarter this year to pick up compared to a relatively weak first quarter.
OFC 2017 also made the news with an announcement that signals industry change: Ciena's decision to share its WaveLogic Ai coherent DSP technology with optical module vendors Lumentum, Oclaro and NeoPhotonics.
The announcement can be viewed several ways. One is that the initiative is a response to the success of Acacia as a supplier of coherent modules and coherent DSP technology. System vendors designed their own coherent DSP-ASICs to differentiate their optical networking gear. This still holds true but the deal reflects how merchant line-side optics from the likes of Acacia is progressing and squeezing the scope for differentiation.
The deal is also a smart strategic move by Ciena which, through its optical module partners, will address new markets and generate revenues as its partners start to sell modules using the WaveLogic Ai. The deal also gives Ciena a first-mover advantage. Other systems vendors may now decide to offer their coherent DSPs to the marketplace but Ciena has partnerships with three leading optical module makers and is working with them on future DSP developments for pluggable modules.
The deal also raises wider questions as to the role of differentiated hardware and whether it is subtly changing in the era of network function virtualisation, or whether it is a reflection of the way companies are now collaborating with each other in open hardware developments like the Telecom Infra Project and the Open ROADM MSA.
Another prominent issue at the show was the debate as to whether there is room for 200 Gigabit Ethernet modules or whether the industry is best served by going straight from 100 to 400 Gigabit Ethernet.
Facebook and Microsoft say they will go straight to 400 gigabits. Cisco agrees, arguing that developing an interim 200 Gigabit Ethernet interface does not justify the investment. In contrast, Finisar argues that 200 Gigabit Ethernet has a compelling cost-per-bit performance and that it will supply customers that want it. Google supported 200 gigabits at last year’s OFC.
Silicon photonics
Silicon photonics was one topic of interest at the show and in particular how the technology continues to evolve. Based on the evidence at OFC, silicon photonics continues to progress but there were no significant developments since our book (co-written with Daryl Inniss) on silicon photonics was published late last year.
One of the pleasures of OFC is being briefed by key companies in rapid succession. Intel demonstrated at its booth its silicon photonics products including its CWDM4 module which will be generally available by mid-year. Intel also demonstrated a 10km 4WDM module. The 4WDM MSA, created last year, is developing a 10km reach variant based on the CWDM4, as well as 20km and 40km based designs.
Meanwhile, Ranovus announced its 200-gigabit CFP2 module based on its quantum dot laser and silicon photonics ring resonator technologies with a reach approaching 100km. The 200-gigabit rate is achieved using 28Gbaud optics and PAM-4.
Elenion Technologies made several announcements including the availability of its monolithically integrated coherent modulator receiver after detailing it was already supplying a 200 gigabit CFP2-ACO to Coriant. The company was also demonstrating on-board optics and, working with Cavium, announced a reference architecture to link network interface cards and switching ICs in the data centre.
I visited Elenion Technologies in a hotel suite adjacent to the conference centre. One of the rooms had enough test equipment and boards to resemble a lab; a lab with a breathtaking view of the hills around Los Angeles. As I arrived, one company was leaving and as I left another well-known company was arriving. Elenion was using the suite to demonstrate its technologies with meetings continuing long after the exhibition hall had closed.
Two other silicon photonics start-ups at the show were Ayar Labs and Rockley Photonics.
Ayar Labs is developing a silicon photonics chip based on a "zero touch" CMOS process that will sit right next to complex ASICs and interface to network interface cards. The first chip will support 3.2 terabits of capacity. The advantage of the CMOS-based silicon photonics design is the ability to operate at high temperatures.
Ayar Labs is using the technology to address the high-bandwidth, low-latency needs of the high-performance computing market, with the company expecting the technology to eventually be adopted in large-scale data centres.
Rockley Photonics shared more details as to what it is doing as well as its business model but it is still to unveil its first products.
The company has developed silicon photonics technology that will co-package optics alongside ASIC chips. The result will be packaged devices with fibre-based input-output offering terabit data rates.
Rockley also talked about licensing the technology for a range of applications involving complex ICs including coherent designs, not just for switching architectures in the data centre that it has discussed up till now. Rockley says its first product will be sampling in the coming months.
Looking ahead
On the plane back from OFC I was reading The Undoing Project by Michael Lewis about the psychologists Danny Kahneman and Amos Tversky and their insights into human thinking.
The book describes the tendency of people to take observed facts, neglecting the many facts that are missed or could not be seen, and make them fit a confident-sounding story. Or, as the late Amos Tversky put it: "All too often, we find ourselves unable to predict what will happen; yet after the fact, we explain what did happen with a great deal of confidence. This 'ability' to explain that which we cannot predict, even in the absence of any additional information, represents an important, though subtle, flaw in our reasoning."
So, what to expect at OFC 2018? More of the same and perhaps a bombshell or two. Or to put it another way, greater unpredictability based on the impression at OFC 2017 of an industry experiencing an increasing pace of change.
Tackling system design on a data centre scale
Interview 1: Andrew Rickman
Silicon photonics has been a recurring theme in the career of Andrew Rickman. First, as a researcher looking at the feasibility of silicon-based optical waveguides, then as founder of Bookham Technology, and after that as a board member of silicon photonics start-up, Kotura.
Andrew Rickman
Now as CEO of start-up Rockley Photonics, his company is using silicon photonics alongside its custom ASIC and software to tackle a core problem in the data centre: how to connect more and more servers in a cost-effective and scalable way.
Origins
As a child, Rickman attended the Royal Institution Christmas Lectures given by Eric Laithwaite, a popular scientist who was also a professor of electrical engineering at Imperial College. As an undergraduate at Imperial, Rickman was reacquainted with Professor Laithwaite who kindled his interest in gyroscopes.
“I stumbled across a device called a fibre-optic gyroscope,” says Rickman. “Within that I could see people starting to use lithium niobate photonic circuits.” It was investigating the gyroscope design and how clever it was that made Rickman wonder whether the optical circuits of such a device could be made using silicon rather than exotic materials like lithium niobate.
“That is where the idea triggered, to look at the possibility of being able to make optical circuits in silicon,” he says.
If you try and force a photon into a space shorter than its wavelength, it behaves very badly
In the 1980s, few people had thought about silicon in such a context. That may seem strange today, he says, but silicon was not a promising candidate material. “It is not a direct band-gap material - it was not offering up the light source, and it did not have a big electro-optic effect like lithium niobate which was good for modulators,” he says. “And no one had demonstrated a low-loss single-mode waveguide.”
Rickman worked as a researcher at the University of Surrey’s physics department with such colleagues as Graham Reed to investigate whether the trillions of dollars invested in the manufacturing of silicon could also be used to benefit photonic circuits and in particular whether silicon could be used to make waveguides. “The fundamental thing one needed was a viable waveguide,” he says.
Rickman even wrote a paper with Richard Soref who was collaborating with the University of Surrey at the time. “Everyone would agree that Richard Soref is the founding father of the idea - the proposal of having a useful waveguide in silicon - which is the starting point,” says Rickman. It was the work at the University of Surrey, sponsored by Bookham which Rickman had by then founded, that demonstrated low-loss waveguides in silicon.
Fabrication challenges
Rickman argues that not having a background in CMOS processes has been a benefit. “I wasn’t dyed-in-the-wool-committed to CMOS-type electronics processing,” he says. “I looked upon silicon technology as a set of machine-shop processes for making things.”
Looking at CMOS processing completely afresh and designing circuits optimised for photonics yielded Bookham a great number of high-performance products, he says. In contrast, the industry’s thrust has been very much a semiconductor CMOS-focused one. “People became interested in photonics because they just naturally thought it was going to be important in silicon, to perpetuate Moore’s law,” says Rickman.
You can use the structures and much of the CMOS processes to make optical waveguides, he says, but the problem is you create small structures - sub-micron - that guide light poorly. “If you try and force a photon into a space shorter than its wavelength, it behaves very badly,” he says. “In microelectronics, an electron has got a wavelength that is one hundred times smaller than the features it is using.”
The results include light being sensitive to interface roughness and to the manufacturing tolerances - the width, height and composition of the waveguide. “At least an order of magnitude more difficult to control than the best processes that exist,” says Rickman.
“Our [Rockley’s] waveguides are one thousand times more relaxed to produce than the competitors’ smaller ones,” he says. “From a process point of view, we don’t need the latest CMOS node, we are more a MEMS process.”
If you take control of enough of the system problem, and you are not dictated to in terms of what MSA or what standard that component must fit into, and you are not competing in this brutal transceiver market, then that is when you can optimise the utilisation of silicon photonics
Rickman stresses that small waveguides do have merits - they go round tighter bends, and their smaller-dimensioned junctions make for higher-speed components. But using very large features solves the ‘fibre connectivity problem’, and Rockley has come up with its own solutions to achieve higher-speed devices and dense designs.
“Bookham was very strong in passive optics and micro-engineered features,” says Rickman. “We have taken that experience and designed a process that has all the advantages of a smaller process - speed and compactness - as well as all the benefits of a larger technology: the multiplexing and demultiplexing for doing dense WDM, and we can make a chip that already has a connector on it.”
Playing to silicon photonics’ strengths
Rickman believes that silicon photonics is a significant technological development: “It is a paradigm shift; it is not a linear improvement”. But what is key is how silicon photonics is applied and the problem it is addressing.
To make an optical component for an interface standard or a transceiver MSA using silicon photonics, or to use it as an add-on to semiconductors - a ‘band-aid’ - to prolong Moore’s law, is to undersell its full potential. Instead, he recommends using silicon photonics as one element - albeit an important one - in an array of technologies to tackle system-scale issues.
“If you take control of enough of the system problem, and you are not dictated to in terms of what MSA or what standard that component must fit into, and you are not competing in this brutal transceiver market, then that is when you can optimise the utilisation of silicon photonics,” says Rickman. “And that is what we are doing.” In other words, taking control of the environment that the silicon sits in.
It [silicon photonics] is a paradigm shift; it is not a linear improvement
Rockley’s team has been structured with the view to tackle the system-scale problem of interconnecting servers in the data centre. Its team comprises computer scientists, CMOS designers - digital and analogue - and silicon photonics experts.
Knowing what can be done with the technologies and organising them allows the problems caused by the ‘exhaustion of Moore’s law’ and the input/output (I/O) issues that result to be overcome. “Not how you apply one technology to make up for the problems in another technology,” says Rickman.
The ending of Moore’s law
Moore’s law continues to deliver a doubling of transistors every two years but the associated scaling benefits like the halving of power consumed per transistor no longer apply. As a result, while Moore’s law continues to grow the gate count that drives greater computation, overall power consumption no longer stays constant.
Rickman also points out that the I/O - the number of connections on and off a chip - is not doubling with transistor count. “I/O may be going from 25 gigabits to 50 gigabits using PAM-4 but there are many challenges and the technology has yet to be demonstrated,” he says.
The challenge facing the industry is that increasing the I/O rate inevitably increases power consumption. “As power consumption goes up, it also equates to cost,” says Rickman. Clearly that is unwelcome, he says, but cost is not the only issue. As power goes up, you cannot fully benefit from the doubling transistor counts, so things cannot be packed more densely.
“You are running into the end of Moore’s law and you don’t get the benefit of reducing space and cost because you’ve got to bolt on all these other things as it is very difficult to get all these signals off-chip,” he says.
This is where tackling the system as a whole comes in. You can look at microelectronics in isolation and use silicon photonics for chip-to-chip communications across a printed circuit board to reduce the electrical losses through the copper traces. “A good thing to do,” stresses Rickman. Or you can address, as Rockley aims to do, Moore’s law and the I/O limitations within a complete system the size of the data centre that links hundreds of thousands of computers. “Not the same way you’d solve an individual problem in an individual device,” says Rickman.
Rockley Photonics
Rockley Photonics has already demonstrated all the basic elements of its design. “That has gone very well,” says Rickman.
The start-up has stated its switch design uses silicon photonics for optical switching and that the company is developing an accompanying controller ASIC. It has also developed a switching protocol to run on the hardware. Rockley’s silicon photonics design performs multiplexing and demultiplexing, suggesting that dense WDM is being used as well as optical switching.
Rockley is a fabless semiconductor company and will not be building systems. Partly, this is because it is addressing the data centre, a market that has evolved differently to telecoms. For the data centre, there are established switch vendors and white-box manufacturers. As such, Rockley will provide its chipset-based reference design, its architecture IP and the software stack for its customers. “Then, working with the customer's contract manufacturer, we will implement the line cards and the fabric cards in the format that the particular customer wants,” says Rickman.
The resulting system is designed as a drop-in replacement for the switches the large-scale data centre players have already deployed, yet will be cheaper, more compact and consume less power, says Rockley.
“They [the data centre operators] can scale the way they do at the moment, or they can scale with our topology,” says Rickman.
The start-up expects to finally unveil its technology by the year end.
Books in 2015 - Final Part
Sterling Perrin, senior analyst, Heavy Reading
My ambitions to read far exceed my actual reading output, and because I have such a backlog of books on my reading list, I generally don’t read the latest.
Source: The Age of Spiritual Machines
I have long been fascinated by a graphic from futurist Ray Kurzweil which depicts the exponential growth of computing and plots it against living intelligence. The graphic is from Kurzweil’s 1999 book on artificial intelligence The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which I read in 2015.
The book contains several predictions, but this one about computer intelligence vastly exceeding collective human intelligence in our own lifetimes interested me most. Kurzweil translates the brain power of living things into computational speeds and storage capacity and plots them against exponentially growing computing power, based on Moore’s law and its expected successors.
He writes that by 2020, a $1,000 personal computer will have enough speed and memory to match the human brain. But science fiction (and beyond) becomes reality quickly because computational power continues to grow exponentially while collective human intelligence continues on its plodding linear progression. The inevitable future, in Kurzweil’s scenario, blends human intelligence and AI to the point where by the end of this century, it’s no longer possible or relevant to distinguish between the two.
There have been many criticisms of Kurzweil’s theory and methodologies on AI evolution, but reading a futures book 15 years after publication gives you the ability to validate its predictions. On this, Kurzweil has been quite amazing, including self-driving cars, headset-based virtual reality gaming (which I experienced this year at the mall), tablet computing coming of age in 2009, and the coming end of Moore’s law, to name a few in this book that struck me as astoundingly accurate.
Of newer books, I read Yuval Noah Harari’s Sapiens: A Brief History of Humankind (originally published in Hebrew in 2011 but first published in English in 2014). I was attracted to this book because it provides a succinct summary of millions of years of human history and, from its high level vantage point, is able to draw fascinating conclusions about why our human species of sapiens has been so successful.
Harari’s thesis is that it’s not our thumbs, or the invention of fire, or even our languages that led to our dominance over all animals and other humans but rather the creation of fictional constructs – enabled by our languages – that unified sapiens in collective groups vastly larger than otherwise achievable.
Here, the book can strike some nerves because all religions qualify as fictional constructs, but he’s really talking about all intangible constructs under which humans can massively align themselves, including nations, empires, corporations, money and even capitalism. Without fictional constructs, he writes, it’s hard for humans to form meaningful social organizations beyond 150 people – a number also famously cited by Malcolm Gladwell in The Tipping Point.
In fiction, I completed the fifth and final published installment of George RR Martin’s A Song of Ice and Fire series, A Dance with Dragons. I’ve been drawn to this series in large part, I think, because the simpler medieval setting is such a stark contrast to the ultra-high-tech world in which we live and work.
I thought I had timed the reading to coincide with the release of the 6th book, The Winds of Winter, but I’ve heard that the book is delayed again. Fortunately, I’m still two seasons behind on the HBO series.
Aaron Zilkie, vice president of engineering at Rockley Photonics
I recommend the risk assessment principles in the book, Projects at Warp Speed with QRPD: The Definitive Guidebook to Quality Rapid Product Development by Adam Josephs, Cinda Voegtli, and Orion Moshe Kopelman.
These principles provide valuable one-stop teaching of fundamental principles for the often under-utilised and taken-for-granted engineering practice of technology risk management and prioritisation. This is an important subject for technology and R&D managers in small-to-medium size technology companies to include in their thinking as they perform the challenging task of selecting new technologies to make next-generation products and product improvements.
The book Who: The A Method for Hiring by Geoff Smart and Randy Street teaches good practices for focused hiring, to build A-teams in technology companies, a topic of critical importance for the rapid success of start-up companies that is not taught in schools.
Tom Foremski, SiliconValleyWatcher
Return of a King: The Battle for Afghanistan, 1839-42 by William Dalrymple. This is one of the best reads, an amazing story! Only one survivor on an old donkey.
Interconnection networks - an introduction
Source: Jonah D. Friedman
If moving information between locations is the basis of communications, then interconnection networks represent an important subcategory.
The classic textbook, Principles and Practices of Interconnection Networks by Dally and Towles, defines interconnection networks as a way to transport data between sub-systems of a digital system.
The digital system may be a multi-core processor with the interconnect network used to link the on-chip CPU cores. Since the latest processors can have as many as 100 cores, designing such a network is a significant undertaking.
Equally, the digital system can be on a far larger scale: servers and storage in a data centre. Here the interconnection network may need to link as many as 100,000 servers, as well as the servers to storage.
The number of servers being connected in the data centre continues to grow.
“The market simply demands you have more servers,” says Andrew Rickman, chairman and CEO of UK start-up Rockley Photonics. “You can’t keep up with demand simply with the advantage of [processors and] Moore’s law; you simply need more servers.”
Scaling switches
To understand why networking complexity grows exponentially rather than linearly with server count, a simple switch scaling example is used.
With the 4-port switch shown in Figure 1, it is assumed that each port can connect to any of the other three ports. The 4-port switch is also non-blocking: if Port 1 is connected to Port 3, then the remaining input and output can also be used without affecting the link between Ports 1 and 3. So, if four servers are connected to the ports, each can talk to any other server, as shown in Figure 1.
Figure 1: A 4-port switch. Source: Gazettabyte, Arista Networks
But once five or more servers need to be connected, things get more complicated. To double the size to create an 8-port switch, several of the basic 4-port building-block switches are needed, creating a more complex two-stage switching arrangement (Figure 2).
Figure 2: An 8-port switch made up of 4-port switch building blocks. Source: Gazettabyte, Arista Networks.
Indeed, the complexity increases non-linearly. Instead of one 4-port building-block switch, six are needed for a switch with twice the number of ports, with a total of eight interconnections (the number of second-tier switches multiplied by the number of first-tier switches).
Doubling the number of effective ports again to create a 16-port switch more than doubles the complexity once more: three tiers of switching are now needed, using 20 4-port switches and 32 interconnections (see Table 1).
Table 1: How the number of 4-port building-block switches and interconnects grows as the number of switch ports keeps doubling. Source: Gazettabyte and Arista Networks.
The exponential growth in switches and interconnections is also plotted in Figure 3.
Figure 3: The exponential growth in N-sized switches and interconnects as the switch size grows to 2N, 4N etc. In this example N=4. Source: Gazettabyte, Arista Networks.
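The growth shown in Table 1 and Figure 3 can be captured with a simple recurrence. The sketch below is inferred from the table's numbers rather than taken from any published construction: each doubling uses two copies of the previous design plus one extra tier of building blocks, and adds a corresponding set of tier-to-tier links.

```python
# Growth of building-block switches and interconnects as port count
# doubles. The recurrence is an assumption inferred from Table 1's
# numbers, not a published construction:
#   switches(2p) = 2 * switches(p) + p
#   links(2p)    = 2 * links(p) + 2p

def scale(base_ports: int = 4, doublings: int = 4):
    """Yield (ports, building-block switches, interconnections)."""
    ports, switches, links = base_ports, 1, 0
    yield ports, switches, links
    for _ in range(doublings):
        switches = 2 * switches + ports  # two sub-switches + a new tier
        links = 2 * links + 2 * ports    # existing links + new tier links
        ports *= 2
        yield ports, switches, links

for row in scale():
    print(row)
# First rows: (4, 1, 0), (8, 6, 8), (16, 20, 32), ...
```

The first three rows reproduce the figures quoted in the text: one 4-port switch with no interconnects, six switches and eight interconnects for an 8-port design, and 20 switches with 32 interconnects for 16 ports.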
This exponential growth in complexity explains Rockley Photonics’ goal to use silicon photonics to make a larger basic building block. Not only would this reduce the number of switches and tiers needed for the overall interconnection network, it would also allow a larger number of servers to be connected.
Rockley believes its silicon photonics-based switch will not only improve scaling but also reduce the size and power consumption of the overall interconnection network.
The start-up also claims that its silicon photonics switch will scale with Moore’s law, doubling its data capacity every two years. In contrast, the data capacity of existing switch ASICs does not scale with Moore’s law, it says. However, the company has yet to launch its product and has still to discuss its design.
Data centre switching
In the data centre, a common switching arrangement used to interconnect servers is the leaf-and-spine architecture. A ‘leaf’ is typically a top-of-rack switch while the ‘spine’ is a larger capacity switch.
A top-of-rack switch typically uses 10 gigabit links to connect to the servers. The connection between the leaf and spine is typically a higher capacity link - 40 or 100 gigabit. A common arrangement is to adopt a 3:1 oversubscription - the total input capacity to the leaf switch is 3x that of its output stream.
To illustrate the point with numbers, a 640-gigabit top-of-rack switch is assumed, with 480 gigabit (48 x 10 Gig) of capacity used to connect the servers and 160 gigabit (4 x 40 Gig) used to link the top-of-rack switch to the spine switches.
In the example shown (Figure 4) there are 32 leaf and four spine switches connecting a total of 1,536 servers.
Figure 4: An example to show the principles of a leaf and spine architecture in the data centre. Source: Gazettabyte
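The arithmetic behind this example can be restated in a few lines. All the values below are taken from the article's 640-gigabit top-of-rack scenario; only the variable names are introduced here for illustration.

```python
# Leaf-spine arithmetic for the 640-gigabit top-of-rack example.

server_links, server_rate = 48, 10  # 48 x 10 Gig links down to servers
uplinks, uplink_rate = 4, 40        # 4 x 40 Gig links up to the spines
leaves, spines = 32, 4

downlink_gbps = server_links * server_rate  # 480 Gig to servers
uplink_gbps = uplinks * uplink_rate         # 160 Gig to spines

# One uplink from each leaf to every spine switch.
assert uplinks == spines

print(downlink_gbps + uplink_gbps)   # 640 Gig total switch capacity
print(downlink_gbps // uplink_gbps)  # 3, i.e. 3:1 oversubscription
print(leaves * server_links)         # 1,536 servers connected
```

The 3:1 ratio falls out directly: 480 gigabit of server-facing capacity funnels into 160 gigabit of spine-facing capacity, and 32 leaves with 48 server ports each gives the 1,536 servers of Figure 4.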
In a data centre with 100,000 servers, clearly a more complicated interconnection scheme involving multiple leaf and spine clusters is required.
Arista Networks’ white paper details data centre switching and leaf-and-spine arrangements, while Facebook has published a blog (and video) discussing just how complex an interconnection network can be (see Figure 5).
Figure 5: How multiple leaf and spines can be connected in a large scale data centre. Source: Facebook
Rockley demos a silicon photonics switch prototype
Rockley Photonics has made a prototype switch to help grow the number of servers that can be linked in a data centre. The issue with interconnection networks inside a data centre is that they do not scale linearly as more servers are added.
Dr. Andrew Rickman
“If you double the number of servers connected in a mega data centre, you don’t just double the complexity of the network, it goes up exponentially,” explains Andrew Rickman, co-founder, chairman and CEO at Rockley Photonics. “That is the problem we are addressing.”
By 2017 and 2018, it will still be possible to build the networks that large-scale data centre network operators require, says Rickman, but at ever-increasing cost and with growing power consumption. “The basic principles of what they are doing needs to be rethought,” he says.
Network scale
Modern data centre networks must handle significant traffic flow between servers, referred to as east-west traffic. A common switching arrangement in the data centre is the leaf-spine architecture, used to interconnect thousands of servers.
A ‘leaf’ may be a top-of-rack switch that is linked to multiple server chassis on one side and larger-capacity, ‘spine’ switches on the other. The result is a switch network where each leaf is connected to all the spine switches, while each spine switch is linked to all the leaves. In the example shown, four spine switches connect to 32 leaf switches.
A leaf-spine architecture
The leaf and spine switches are built using ASICs, with the largest ICs typically having 32 100-gigabit ports. One switch ASIC may be used in a platform but, as Rickman points out, larger switches may implement multiple stages, such as a three-stage Clos architecture. As a result, traffic between servers on different leaves, travelling up and down the leaf-spine, may pass through five stages or hops, but possibly as many as nine.
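One plausible way to arrive at the hop counts quoted above is to multiply the number of switch platforms a packet crosses by the number of internal switching stages per platform. This is an illustrative assumption, not Rockley's stated arithmetic.

```python
# Counting switching stages on a leaf-spine path (illustrative
# assumption, not Rockley's published arithmetic).

def hops(platforms_on_path: int, stages_per_platform: int) -> int:
    """Total switching stages a packet crosses end to end."""
    return platforms_on_path * stages_per_platform

# Simple leaf -> spine -> leaf path with single-ASIC platforms: 3 hops.
print(hops(3, 1))  # -> 3
# A larger multi-tier fabric may put five platforms on the path: 5 hops.
print(hops(5, 1))  # -> 5
# If each of the three platforms is itself a three-stage Clos: 9 hops.
print(hops(3, 3))  # -> 9
```

Under these assumptions, a path through five single-stage platforms gives the five hops mentioned in the text, while three platforms each built as a three-stage Clos gives the upper figure of nine.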
There is no replacement performance in this area
It is the switch IC’s capacity and port count that dictates the overall size of the leaf-spine network and therefore the number of servers that can be connected. Rockley’s goal is to develop a bigger switch building block making use of silicon photonics.
“The fundamental thing to address is making bigger switching elements,” says Rickman. “That way you can keep the number of stages in the network the same but still make bigger and bigger networks.” Rockley expects its larger building-block switch will reduce the switch stages needed.
The UK start-up is not yet detailing its switch beyond saying it uses optical switching and that the company is developing a photonic integrated circuit (PIC) and a controlling ASIC.
“In the field of silicon photonics, for the same area of silicon, you can produce a larger switch; you have more capacity than you do in electronics,” says Rickman. Moreover, Rockley says that its silicon photonics-based PIC will scale with Moore’s law, with its switch's data capacity approximately doubling every two years. “Previously, the network did not scale with Moore’s law,” says Rickman.
Customers can see something is real and that it works. We are optimising all the elements of the system before taping out the fully integrated devices
Status
The company has developed a switch prototype that includes ‘silicon photonics elements’ and FPGAs. “Customers can see something is real and that it works,” says Rickman. “We are optimising all the elements of the system before taping out the fully integrated devices.” Rockley expects to have its switch in volume production in 2017.
Last year the company raised its first round of funding and said that it would undergo a further round in 2015. Rockley has not said how much it has raised or the status of the latest round. “We are well-funded and we have a very supportive group of investors,” says Rickman.
Rickman has long been involved in silicon photonics, starting out as a researcher at the University of Surrey developing silicon photonics waveguides in the early 1990s, before founding Bookham Technology (now Oclaro). He was also chairman of silicon photonics start-up Kotura, which was acquired by Mellanox Technologies in 2013. Rickman co-founded Rockley in 2013.
“What I’ve learned about silicon photonics, and about all those electronics technologies, is how to design stuff from a process point of view to make something highly manufacturable and at the same time having the performance,” says Rickman.
There is no replacement performance in the area of data centre switching, he stresses: “The benefit of our technology is to deliver the performance, not the fact that it is cheap or [offers] average performance.”
For Part 2, Interconnection networks - an introduction, click here
