Silicon photonics: concerns but viable and still evolving
Blaine Bateman set himself an ambitious goal when he started researching the topic of silicon photonics. The president of management consultancy EAF LLC wanted to answer some key questions for a broad audience: not just the academics and researchers developing silicon photonics but also executives working in data centres, telecom and IT.
The result is a 192-page report entitled Silicon Photonics: Business Situation Report, 59 pages of which are references. In contrast to traditional market research reports, it contains no forecasts or company profiles.
Blaine Bateman's risk meter for silicon photonics. Eleven key elements needed to deploy a silicon photonics solution were considered and assessed from the perspective of the various communities involved in or impacted by the technology, from silicon providers to cloud-computing users. Source: EAF LLC.
“I thought it would be helpful to give people a business view,” says Bateman.
Bateman works with companies on strategy in such areas as antennas, wireless technologies and more recently analytics and machine learning. But a growing awareness of photonics made him want to research the topic. “I could see a convergence between the evolution of telecom switching centres to become more like data centres, and data centres starting to look more like telecoms,” he says.
The attraction of silicon photonics is that it is an emerging technology with wide applicability in communications.
Just watching entirely new technologies emerge and become commercially viable in the span of ten years; it is astonishing
“Silicon Photonics is a good topic to research and publish to help a broader community because it is highly technical,” says Bateman. “It is also a great case study, just watching entirely new technologies emerge and become commercially viable in the span of ten years; it is astonishing.”
Bateman spent two years conducting interviews and reading a vast number of academic papers and trade-press articles before publishing the report earlier this year.
The main near-term opportunity for silicon photonics he investigated is the data centre: not just the large-scale data centre players with an obvious need for cheaper optics to interconnect servers but also enterprises facing important decisions regarding their cloud-computing strategy.
“The view that I developed is that it is still very early,” he says. “The price points for a given performance [of optics] are significantly higher than a Facebook thinks it needs to meet its long-term business objectives.”
The price-performance figure commonly floated is one dollar per gigabit, but current 100-gigabit pluggable modules, whether using indium phosphide or silicon photonics, cost several times that.
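The arithmetic behind that target is straightforward; the sketch below simply restates the article's figures:

```python
# The $1-per-gigabit price-performance target implies a 100-gigabit
# module should cost around $100. The article notes current modules
# cost several times this.
target_dollars_per_gbps = 1.0
module_capacity_gbps = 100

target_module_cost = target_dollars_per_gbps * module_capacity_gbps
print(target_module_cost)   # 100.0 dollars per 100-gigabit module
```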
This is an important issue for cloud providers and for enterprises determining their cloud strategy.
Do cloud providers invest money in silicon photonics technologies for their data centres, or do they let others be early adopters and come in later once prices have dropped? Equally, an enterprise considering moving its business operations to the cloud is in a precarious position, says Bateman. “If you pick the wrong horse, you could be boxed into a level of price and performance, while you will have competitors starting with cloud providers that have a 30 to 50 percent price-performance advantage,” he says. “In my view, it will trickle all the way to the large consumers of cloud resources.”
Longer term, the market will resolve the relative success of silicon photonics versus traditional optics but, near term, companies have some expensive decisions to make. “The price curve is still in the early phase,” says Bateman. “It just hasn’t come down enough that it is an easy decision.”
Bateman’s advice to enterprises considering a potential cloud provider is to ask about its roadmap plans regarding the deployment of photonics.
Findings
To help understand the technology and business risks associated with silicon photonics, Bateman has created risk meters. These are intuitive graphics that show the status of the different elements making up silicon photonics and the issues involved when making silicon photonics devices. These include the light source, modulation method, formation of the waveguides, fibering the chip and the fabrication plants.
“The reason the fab is such a high risk is that even though the idea was to leverage existing foundries, in truth it is very much new processes,” says Bateman. “There is also a limited number of fabs that can build these things.”
The report also includes a risk meter summarising the overall status of silicon photonics (see above).
Bateman says there are concerns regarding silicon photonics which people need to be aware of but stresses that it is a viable technology.
This is the first of two main conclusions he highlights: silicon photonics is not yet mature enough to be priced as a commodity. Accordingly, adopting a non-commodity, early-adopter technology could damage a company’s business plan in terms of cost and performance.
The second takeaway is that for every single aspect of silicon photonics, much is still open. “One of the reasons I made all these lists in the report - and I studied research from all over the globe - is that I wanted to show the management level that silicon photonics is still emerging,” says Bateman.
China is focused on innovation now, and has formidable resources
This surprised him. When a new technology comes to market, it typically uses R&D developed decades earlier. “In this area, I was shocked by the huge amount of basic research that is still ongoing, and more and more is being done every day,” says Bateman. “It is daunting; it is moving so fast.”
Another aspect that surprised him was the amount of research coming out of Asia and in particular China. “This is also something new, seeing original work in China and other parts of the world,” he says.
The stereotypical view that China is a source of cheap manufacturing but little in terms of innovation must change, he says. In the US, in particular, there is still a large body of people that think this way, says Bateman: “I feel they have their head in the sand - China is focused on innovation now, and has formidable resources.”
The connected vehicle - driving in the cloud
Cars are already more silicon than steel. As makers add LTE high speed broadband, they are destined to become more app than automobile. The possibilities that come with connecting your car to the cloud are scintillating. No wonder Gil Golan, director at General Motors' Advanced Technical Center in Israel, says the automotive industry is at an 'inflection point'.
"If you put LTE to the vehicle ... you are going to open a very wide pipe and you can send to the cloud and get results with almost no latency"
Gil Golan, General Motors
After a century continually improving the engine, suspension and transmission, car makers are now busy embracing technologies outside their traditional skill sets. The result is a period of unprecedented change and innovation.
Gil Golan, director at General Motors' Advanced Technical Center in Israel, cites the use of in-car camera systems to aid driving and parking as an example. "Five years ago almost no vehicle used a camera whereas now increasing numbers have at least one, a fish-eye camera facing backwards." Vehicles offering 360-degree views using five cameras are taking to the road and such numbers will become the norm in the next five years.
The result is that the automotive industry is hiring people with optics and lens expertise, as well as image processing skills to analyse the images and video the cameras produce. "This is just the camera; the vehicle is going to be loaded with electronics," says Golan.
In 2004 the [automotive] industry crossed the point where, on average, we spend more on silicon than on steel
Moore's Law
Semiconductor advances driven by Moore's Law have already changed the automotive industry. "In 2004 the [automotive] industry crossed the point where, on average, we spend more on silicon than on steel," says Golan.
Moore's Law continues to improve processor and memory performance while driving down cost. "Every small system can now be managed or controlled in a better way," says Golan. "With a processor and memory, everything can be more accurate, more functionality can be built in, and it doesn't matter if it is a windscreen wiper or a sophisticated suspension system."
Current high-end vehicles have over 100 microprocessors. In turn, chip makers are developing 100 Megabit and 1 Gigabit Ethernet physical-layer devices, media access controllers and switching silicon for in-vehicle networking to link the car's various electronic control units (ECUs).
The growing number of on-board microprocessors is also reflected in the software within vehicles. According to Golan, the Chevrolet Volt has over 10 million lines of code while the latest Lockheed Martin F-35 fighter has 8.7 million. "These are software vehicles on four wheels," says Golan. Moreover, the design of the Chevy Volt started nearly a decade ago.
Car makers must keep vehicles, crammed with electronics and software, updated despite their far longer life cycles compared to consumer devices such as smartphones.
According to General Motors, each car model has its content sealed every four or five years. A car design sealed today may only come on sale in 2016 after which it will be manufactured for five years and remain on the road for a further decade. "A vehicle sealed today is supposed to be updated and relevant through to 2030," says Golan. "This, in an era where things are changing at an unprecedented pace."
As a result car makers work on ways to keep vehicles updated after the design is complete, during its manufacturing phase, and then when the vehicle is on the road, says Golan.
Industry trends
Two key trends are driving the automotive industry:
- The development of autonomous vehicles
- The connected vehicle
Leading car makers are all working towards the self-driving car. Such cars promise far greater safety and more efficient, economical driving, and will turn the driver into a passenger, free to do other things. Automated vehicles will need multiple sensors coupled to on-board algorithms and systems that can guide the vehicle in real time.
Golan says camera sensors are now available that see at night, yet some sensors can perform poorly in certain weather conditions and can be confused by electromagnetic fields - the car is a 'noisy' environment. As a result, multiple sensor types will be needed and their outputs fused to ensure key information is not missed.
"Remember, we are talking about life; this is not computers or mobile handsets," says Golan. "If you put more active safety systems on-board, it means you have to have a very solid read on what is going on around you."
The Chevrolet Volt has over 10 million lines of code while the latest Lockheed Martin F-35 fighter has 8.7 million
Wireless
Wireless communications will play a key role in vehicles. The most significant development is the advent of the Long Term Evolution (LTE) cellular standard that will bring broadband to the vehicle.
Golan says there are different perimeters within and around the car where wireless will play a role. The first is within the vehicle, for wireless communication between devices such as a user's smartphone or tablet and the vehicle's main infotainment unit.
Wireless will also enable ECUs to talk, eliminating wiring inside the vehicle. "Wires are expensive, are heavy and impact the fuel economy, and can be a source for different problems: in the connectors and the wires themselves," says Golan.
A second, wider sphere of communication involves linking the vehicle with the immediate surroundings. This could be other vehicles or the infrastructure such as traffic lights, signs, and buildings. The communication could even be with cyclists and pedestrians carrying cellphones. Such immediate environment communication would use short-range communications, not the cellular network.
Wide-area communication will be performed using LTE. Such communication could also be performed over wireline. "If it is an electric vehicle, you can exchange data while you charge the vehicle," says Golan.
This ability to communicate across the network and connect to the cloud is what excites the car makers.
You can talk to the vehicle and the processing can be performed in the cloud
Cloud and Big Data
"If you put LTE to the vehicle, you are showing your customers that you are committed to bringing the best technology to the vehicle, you are going to open a very wide pipe and you can send to the cloud and get results with almost no latency," says Golan.
LTE also raises the interesting prospect of enabling some of the current processing embedded in the vehicle to be offloaded onto servers. "I can control the vehicle from the cloud," says Golan. "You can talk to the vehicle and the processing can be performed in the cloud."
The processing and capabilities offered in the cloud are orders of magnitude greater than what can be done on the vehicle, says Golan: "The results are going to be by far better than what we are familiar with today."
Clearly, pooling and processing information centrally will offer a broader view than any one vehicle can provide, but just which car processing functions can be offloaded is less clear, especially when a broadband link will always depend on the quality of the cellular coverage.
Safety critical systems will remain onboard, stresses Golan, but some of the infotainment and some of the extra value creation will come wirelessly.
Choosing the LTE operator to use is a key decision for an automotive company. "We have to make sure you [the driver] are on a very good network," says Golan. "The service provider has to show us, prove to us [their network], and in some cases we run basic and sporadic tests with our operator to make sure that we do have the network in place."
Automotive companies see opportunity here.
"When you get into a vehicle, there is a new type of behaviour that we know," says Golan. "We know a lot about your vehicle, we know your behaviour while you are driving: your driving style, what coffee you like to drink and your favourite coffee store, and that you typically fill up when you have a half tank and you go to a certain station."
This knowledge - about the car and the driver's preferences when driving - when combined with the cloud, is a powerful tool, says Golan. Car companies can offer an ecosystem that supports the driver. "We can have everything that you need while in the vehicle, served by General Motors," says Golan. "Let your imagination think about the services because I'm not going to tell you; we have a long list of stuff that we work on."
If we don't see that what we work on creates tremendous value, we drop it
General Motors already owns a 'huge' data centre and being a global company with a local footprint, will use cloud service providers as required.
So automotive is part of the Big Data story? "Oh, big time," says Golan. "Business analytics is critical for any industry including the automotive industry."
Innovation
Given the opportunities new technologies such as sensors, computing, communication and cloud enable, how do automotive companies remain focussed?
"If we don't see that what we work on creates tremendous value, we drop it," says Golan. "We have no time or resources to spend on spinning wheels."
General Motors has its own venture capital arm to invest in promising companies and spends a lot of time talking to start-ups. "We talk to every possible start-up; if you see them for the first time you would say: 'where is the connection to the automotive industry?'," says Golan. "We talk to everybody on everything."
The company says it will always back ideas. "If some team member comes up with a great idea, it does not matter how thin the company is spread, we will find the resources to support that," says Golan.
General Motors set up its research centre in Israel a decade ago and is the only automotive company to have an advanced development centre there, says Golan. "The management had the foresight to understand that the industry is undergoing mega trends and an entrepreneurial culture - an innovation culture - is critically important for the future of the auto industry."
The company also has development sites in Silicon Valley and several other locations. "This is the pipe that is going to feed you innovation, and to do the critical steps needed towards securing the future of the company," says Golan. "You have to go after the technology."
Further reading:
Google's Original X-Man
Arista Networks embeds optics to boost 100 Gig port density
Arista Networks' latest 7500E switch is designed to improve the economics of building large-scale cloud networks.
The platform packs 30 Terabits-per-second (Tbps) of switching capacity into an 11 rack unit (RU) chassis, the same chassis as Arista's existing 7500 switch which, when launched in 2010, was described as capable of supporting several generations of switch design.

"The CFP2 is becoming available such that by the end of this year there might be supply for board vendors to think about releasing them in 2014. That is too far off."
Martin Hull, Arista Networks
The 7500E features new switch fabric and line cards. One of the line cards uses board-mounted optics instead of pluggable transceivers. Each of the line card's ports is 'triple speed', supporting 10, 40 or 100 Gigabit Ethernet (GbE). The 7500E platform can be configured with up to 1,152 10GbE, 288 40GbE or 96 100GbE interfaces.
The switch's Extensible Operating System (EOS) also plays a key role in enabling cloud networks. "The EOS software, run on all Arista's switches, enables customers to build, manage, provision and automate these large scale cloud networks," says Martin Hull, senior product manager at Arista Networks.
Applications
Arista, founded in 2004 and launched in 2008, has established itself as a leading switch player for the high-frequency trading market. Yet this is one market that its latest core switch is not being aimed at.
"With the exception of high-frequency trading, the 7500 is applicable to all data centre markets," says Hull. "That is not to say it couldn't be applicable to high-frequency trading but what you generally find is that their networks are not large, and are focussed purely on speed of execution of their transactions." Latency is a key networking performance parameter for trading.
The 7500E is being aimed at Web 2.0 companies and cloud service providers. The Web 2.0 players include large social networking and on-line search companies. Such players have huge data centres with up to 100,000 servers.
The same network architecture can also be scaled down to meet the requirements of large 'Fortune 500' enterprises. "Such companies are being challenged to deliver private cloud at the same competitive price points as the public cloud," says Hull.
The 7500 switches are typically used in a two-tier architecture. For the largest networks, 16 or 32 switches are used on the same switching tier in an arrangement known as a parallel spine.
A common switch architecture for traditional IT applications such as e-mail and e-commerce uses three tiers of switching. These include core switches linked to distribution switches, typically a pair of switches used in a given area, and top-of-rack or access switches connected to each distribution pair.
For newer data centre applications such as social networking, cloud services and search, the computation requirements result in far greater traffic shared on the same tier of switching, referred to as east-west traffic. "What has happened is that the single pair of distribution switches no longer has the capacity to handle all of the traffic in that distribution area," says Hull.
Customers address east-west traffic by throwing more platforms together. Eight or 16 distribution switches are used instead of a pair. "Every access switch is now connected to each one of those 16 distribution switches - we call them spine switches," says Hull.
The resulting two-tier design, comprising access switches and distribution switches, requires that each access switch has significant bandwidth between itself and any other access switch. As a result, many 7500 switches - 16 or 32 - can be used in parallel at the distribution layer.
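The leaf-spine sizing described above can be sketched as a back-of-envelope calculation. The figures below are illustrative assumptions, not Arista's numbers:

```python
import math

# Back-of-envelope sizing for a two-tier (leaf-spine) fabric: how many
# uplinks each access (leaf) switch needs toward the spine layer.
def leaf_uplinks(servers_per_leaf, server_link_gbps, uplink_gbps,
                 oversubscription):
    """Uplink ports per leaf switch, given a target oversubscription ratio."""
    downlink_capacity = servers_per_leaf * server_link_gbps
    return math.ceil(downlink_capacity / oversubscription / uplink_gbps)

# 48 servers at 10GbE per rack, 40GbE uplinks, 3:1 oversubscription:
print(leaf_uplinks(48, 10, 40, 3.0))   # 4 uplinks, one to each spine
                                       # switch in a 4-wide parallel spine
```

Scaling the spine wider (to 16 or 32 switches, as in the largest networks Hull describes) simply spreads the same uplink bandwidth across more distribution switches.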
Source: Arista Networks
"If I'm a Fortune 500 company, however, I don't need 16 of those switches," says Hull. "I can scale down, where four or maybe two [switches] are enough." Arista also offers a smaller 4-slot chassis as well as the 8 slot (11 RU) 7500E platform.
7500E specification
The switch has a capacity of 30Tbps. When the switch is fully configured with 1,152 10GbE ports, it equates to 23Tbps of duplex traffic. The system is designed with redundancy in place.
"We have six fabric cards in each chassis," says Hull, "If I lose one, I still have 25 Terabits [of switching fabric]; enough forwarding capacity to support the full line rates on all those ports." Redundancy also applies to the system's four power supplies. Supplies can fail and the switch will continue to work, says Hull.
The switch can process 14.4 billion 64-byte packets a second. This, says Hull, is another way of stating the switch capacity while confirming it is non-blocking.
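The quoted capacity figures can be sanity-checked with standard Ethernet framing arithmetic:

```python
# Sanity-check the quoted 7500E figures using standard Ethernet framing.
PORTS = 1152        # 10GbE ports when fully configured
RATE_GBPS = 10

# Duplex capacity counts traffic in both directions.
duplex_tbps = PORTS * RATE_GBPS * 2 / 1000
print(duplex_tbps)   # 23.04 -> the "23Tbps of duplex traffic" figure

# Line-rate throughput for minimum-size (64-byte) frames: each frame
# occupies 64 bytes plus 20 bytes of preamble and inter-frame gap.
pps_per_10g = 10e9 / ((64 + 20) * 8)
print(round(pps_per_10g / 1e6, 2))   # ~14.88 million packets/s per 10GbE port
```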
The 7500E comes with four line card options: three use pluggable optics while the fourth uses embedded optics, as mentioned, based on 12 10Gbps transmit and 12 10Gbps receive channels (see table).
Using line cards supporting pluggable optics provides the customer the flexibility of using transceivers with various reach options, based on requirements. "But at 100 Gigabit, the limiting factor for customers is the size of the pluggable module," says Hull.
Using a CFP optical module, each card supports four 100Gbps ports only. The newer CFP2 modules will double the number to eight. "The CFP2 is becoming available such that by the end of this year there might be supply for board vendors to think about releasing them in 2014," says Hull. "That is too far off."
Arista's board-mounted optics deliver 12 100GbE ports per line card.
The board-mounted triple-speed ports adhere to the IEEE 100 Gigabit SR10 standard, with a reach of 150m over OM4 fibre. The channels can be used discretely for 10GbE, grouped in four for 40GbE, while at 100GbE they are combined as a set of 10.
"At 100 Gig, the IEEE spec uses 20 out of 24 lanes (10 transmit and 10 receive); we are using all 24," says Hull. "We can do 12 10GbE, we can do three 40GbE, but we can still only do one 100Gbps because we have a little bit left over but not enough to make another whole 100GbE." In turn, the module can be configured as two 40GbE and four 10GbE ports, or 40GbE and eight 10GbE.
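The port combinations Hull lists follow directly from how the module's 12 lanes (per direction) can be carved up, as a short enumeration shows:

```python
# Enumerate how the embedded-optics module's 12 lanes (per direction)
# can be carved into ports: 10 lanes per 100GbE, 4 per 40GbE, 1 per 10GbE.
LANES = 12

configs = []
for n100 in range(LANES // 10 + 1):
    for n40 in range((LANES - n100 * 10) // 4 + 1):
        n10 = LANES - n100 * 10 - n40 * 4   # fill the rest with 10GbE
        configs.append((n100, n40, n10))

for n100, n40, n10 in configs:
    print(f"{n100} x 100GbE, {n40} x 40GbE, {n10} x 10GbE")
# Only one 100GbE port fits: the two leftover lanes are not enough for
# a second, matching Hull's "a little bit left over" description.
```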
Using board-mounted optics reduces the cost of 100Gbps line card ports. A full 96-port 100GbE switch configuration achieves a cost of $10k/port, while using existing CFP modules the cost is $30k-50k/port, claims Arista.
Arista quotes 10GbE as costing $550 per line card port, not including the pluggable transceiver. At 40GbE this scales to $2,200. For 100GbE, the $10k/port comprises the scaled-up port cost ($2.2k x 2.5) plus the cost of the optics. Power consumption is under 4W/port when the system is fully loaded.
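The cost scaling can be reproduced from the article's figures; note the optics cost at the end is inferred from the other numbers, not quoted by Arista:

```python
# Reproduce Arista's quoted per-port cost scaling.
cost_10g = 550            # $ per 10GbE line-card port, excluding optics
cost_40g = cost_10g * 4   # $2,200: scales with the 4x bandwidth

scaled_100g = cost_40g * 2.5        # the quoted $2.2k x 2.5 scale-up
optics_100g = 10_000 - scaled_100g  # remainder of the $10k/port figure
                                    # (inferred, not an Arista quote)
print(cost_40g, scaled_100g, optics_100g)   # 2200 5500.0 4500.0
```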
The company uses merchant chips rather than an in-house ASIC for its switch platform. Can't other vendors develop similar performance systems based on the same ICs? "They could, but it is not easy," says Hull.
The company points out that merchant chip switch vendors use a CMOS process node that is typically a generation ahead of state-of-the-art ASICs. "We have high-performance forwarding engines, six per line card, each a discrete system-on-chip solution," says Hull. "These have the technology to do all the Layer 2 and Layer 3 processing." All these devices on one board talk to all the other chips on the other cards through the fabric.
In the last year, equipment makers have decided to bring silicon photonics technology in-house: Cisco Systems has acquired Lightwire while Mellanox Technologies has announced its plan to acquire Kotura.
Arista says it is watching silicon photonics developments with keen interest. "Silicon photonics is very interesting and we are following that," says Hull. "You will see over the next few years that silicon photonics will enable us to continue to add density."
There is a limit to where existing photonics will go, and silicon photonics overcomes some of those limitations, he says.
Extensible Operating System
Arista highlights several characteristics of its switch operating system. EOS is standards-compliant, self-healing, and supports network virtualisation and software-defined networking (SDN).
The operating system implements such protocols as Border Gateway Protocol (BGP) and spanning tree. "We don't have proprietary protocols," says Hull. "We support VXLAN [Virtual Extensible LAN], an open-standards way of doing a Layer 2 overlay over Layer 3."
EOS is also described as self-healing. The modular operating system is composed of multiple software processes, each process described as an agent. "If you are running a software process and it is killed because it is misbehaving, when it comes back typically its work is lost," says Hull. EOS is self-healing in that should an agent need to be restarted, it can continue with its previous data.
"We have software logic in the system that monitors all the agents to make sure none are misbehaving," says Hull. "If it finds an agent doing stuff that it should not, it stops it, restarts it and the process comes back running with the same data." The data is not packet related, says Hull, rather the state of the network.
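The restart-with-state pattern Hull describes can be pictured with a minimal sketch. This is illustrative only, not EOS code: agents checkpoint their network state to a shared store, so a restarted agent resumes with its previous data.

```python
# Illustrative sketch of the self-healing agent pattern: this is NOT
# EOS code. Agents checkpoint state to a shared store so a restarted
# agent comes back with the same data.
state_store = {}   # stand-in for a shared state database

class Agent:
    def __init__(self, name):
        self.name = name
        # On (re)start, recover any state a previous incarnation left.
        self.state = state_store.get(name, {})

    def update(self, key, value):
        self.state[key] = value
        state_store[self.name] = self.state   # checkpoint

def supervise(agent, misbehaving):
    """Restart a misbehaving agent; the replacement resumes its state."""
    if misbehaving:
        return Agent(agent.name)   # old process 'killed', new one started
    return agent

a = Agent("routing")
a.update("bgp_peers", 12)
a = supervise(a, misbehaving=True)
print(a.state)   # {'bgp_peers': 12} -- the state survives the restart
```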
The operating system also supports network virtualisation, one aspect being VXLAN. VXLAN is one of the technologies that allows cloud providers to provide a customer with server resources over a logical network when the server hardware can be distributed over several physical networks, says Hull. "Even a VLAN can be considered as network virtualisation but VXLAN is the most topical."
Support for SDN is an inherent part of EOS from its inception, says Hull. “EOS is open - the customers can write scripts, they can write their own C-code, or they can install Linux packages; all can run on our switches." These agents can talk back to the customer's management systems. "They are able to automate the interactions between their systems and our switches using extensions to EOS," he says.
"We encompass most aspects of SDN," says Hull. "We will write new features and new extensions but we do not have to re-architect our OS to provide an SDN feature."
Arista is terse about its switch roadmap.
"Any future product would improve performance - capacity, table sizes, price-per-port and density," says Hull. "And there will be innovation in the platform's software."
Alcatel-Lucent adds networking to enhance the cloud
Alcatel-Lucent has developed an architecture that addresses the networking aspects of cloud computing. Dubbed CloudBand, the system will enable operators to deliver network-enhanced cloud services to enterprise customers. Operators can also use CloudBand to deliver their own telecom services.

“As far as we know there is no other system that bridges the gap between the network and the cloud"
Dor Skuler, Alcatel-Lucent
Alcatel-Lucent estimates that moving an operator's services to the cloud will reduce networking costs by 10% while speeding up new service introductions.
“As far as we know there is no other system that bridges the gap between the network and the cloud," says Dor Skuler, vice president of cloud solutions at Alcatel-Lucent.
In an Alcatel-Lucent survey of 3,500 IT decision makers, the biggest issue stopping their adoption of cloud computing was performance. Their issues of concern include service level agreements, customer experience, and ensuring low latency and guaranteed bandwidth.
Using CloudBand, a customer uses a portal to set such cloud parameters as the virtual machine to be used, the hypervisor and the operating system. Users can also set networking parameters such as latency, jitter, guaranteed bandwidth and whether a layer two or layer three VPN is used, for example. The user can even define where data is stored if regulation dictates that the data must reside within the country of origin.
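The kind of request the portal captures can be pictured as a simple structure. The field names below are invented for illustration; they are not Alcatel-Lucent's API:

```python
# A hypothetical CloudBand-style service request, illustrating the
# compute, network and placement parameters the article describes.
# All field names are invented for illustration.
request = {
    "compute": {
        "virtual_machine": "4-vCPU/8GB",
        "hypervisor": "KVM",
        "operating_system": "Linux",
    },
    "network": {
        "max_latency_ms": 20,
        "max_jitter_ms": 2,
        "guaranteed_bandwidth_mbps": 100,
        "vpn_layer": 2,            # Layer 2 or Layer 3 VPN
    },
    "placement": {
        "data_residency": "DE",    # keep data in-country if regulation dictates
    },
}
print(sorted(request))   # ['compute', 'network', 'placement']
```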
Architecture
CloudBand uses an optimisation algorithm developed at Alcatel-Lucent's Bell Labs. The algorithm takes the requested cloud and networking settings and, knowing the underlying topology, works out the best configuration.
“This is a complex equation to optimise,” says Skuler. “All these resources - all different and in different locations - need to be optimised; the network needs to be optimised, I also have the requirements of the applications and I want to optimise it on price.” Moreover, these parameters change over time.

"We recommend service providers have tiny clouds that look like one logical cloud yet have different attributes"
According to Alcatel-Lucent, operators have an advantage over traditional cloud service providers in owning and being able to optimise their networks for cloud. Operators also have lots of locations - central offices and exchanges - distributed across the network where they can site cloud nodes.
Having such distributed IT resources benefits the end user by having more localised resources even though it makes the optimisation task of the CloudBand algorithm more complicated. “We recommend service providers have tiny clouds that look like one logical cloud yet have different attributes,” says Skuler.
At the heart of the architecture is the management and orchestration system (See diagram). The system takes the output of the optimisation algorithm, and provisions the cloud resources - moving the virtual machine to a particular site, turning it on, assuring its performance, checking the service level agreement and creating the required billing record.

Once assigned, a service is fixed, but in future CloudBand will adapt existing services as new ones are set up to ensure continual cloud optimisation.
Benefits
"Not every [telecom] service can be virtualised but overall we believe we can shave 10% out of the cost of the network,” says Skuler.
Alcatel-Lucent has already implemented its application store software, content management applications and digital media for use in the cloud. Skuler says video, IP Multimedia Subsystem (IMS) and the applications that run on the IMS architecture can also be moved to the cloud, while Alcatel-Lucent's lightRadio wireless architecture, announced earlier this year, can pool and virtualise cellular base station resources.
But Skuler says that the real benefit for operators moving services to the cloud is agility: operators will be able to introduce new cloud-based services in days rather than months. This will reduce time-to-revenue and costs while allowing operators to experiment with new services.
CloudBand will be ready for trialling in operators’ labs come January. The system will be available commercially in the first half of 2012.
Calient brings optical switching to the data centre
Source: Calient
The California-based start-up has been selling its FC 320, a 320-port 3D MEMS-based switch, since 2006. The optical switch is used by Verizon and AT&T at submarine cable landing sites, and by government agencies.
Now Calient has raised US $19.4 million (€13.77M) in its latest funding round to complete the development and manufacturing of a more compact, power efficient version of its optical switch.
The company has upgraded the electronics and software of its MEMS-based optical switch module. This, says Gregory Koss, Calient's senior vice president for products and partners, reduces the power consumption to 20W, a 90% reduction compared to its existing design.
The new switch module is also more compact. Using the module in a new 320-port switch platform more than halves the size: from 17 to 7 rack units.
The 3D MEMS optics has not been changed. The MEMS design uses mirrors to form a free-space connection between a fibre input port and any of the 320 output ports. A control system then adjusts the mirrors to maximise the output signal. In all the years Calient has been selling its systems, there has not been a single MEMS failure, says the company.
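The mirror adjustment step can be pictured as a simple peak search: nudge the mirror, measure the output power, and keep whichever direction improves it. The sketch below uses a toy Gaussian coupling model and a basic hill-climb; it illustrates the principle only, not Calient's actual control algorithm:

```python
import math

def coupled_power(angle, target=0.37):
    """Toy model: power coupled into the output fibre falls off as a
    Gaussian of the mirror's angular error (arbitrary units)."""
    return math.exp(-((angle - target) ** 2) / 0.01)

def peak_search(angle, step=0.05, iterations=60):
    """Dither-and-step loop: move the mirror in whichever direction
    increases the measured output power, shrinking the step near the peak."""
    for _ in range(iterations):
        if coupled_power(angle + step) > coupled_power(angle):
            angle += step
        elif coupled_power(angle - step) > coupled_power(angle):
            angle -= step
        else:
            step /= 2  # neither direction helps: refine the search
    return angle

aligned = peak_search(angle=0.0)
print(round(coupled_power(aligned), 4))  # close to 1.0 at the power peak
```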
Calient is also changing its strategy by selling the switch as a module to system vendors. The switch module can be incorporated on a line card, while Calient will work with system vendor partners that want to integrate the module within their own platform designs.
"[Data centre] operators want a future-proofed network. They don't want to rebuild when links are upgraded from 10 to 40 and then 100 Gig."
Gregory Koss, Calient Technologies
Data centre and cloud
Calient's MEMS-based switch will be used to connect large server clusters in content service providers' 'mega' data centres.
According to Koss, content service providers are interested in using an optical switch to link their server clusters. In a typical configuration, 48 servers are connected to a top-of-rack switch. This top-of-rack switch, via a 10 Gigabit Ethernet link, would be one input to the 320-port optical switch.
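The fan-in arithmetic behind this configuration is worth spelling out. Assuming every one of the 320 ports carries one top-of-rack uplink (an illustrative reading, not Calient's stated deployment), a single optical switch fronts a sizeable server estate:

```python
SERVERS_PER_RACK = 48   # servers per top-of-rack switch (from the article)
OPTICAL_PORTS = 320     # ports on Calient's optical switch
UPLINK_GBPS = 10        # one 10 Gigabit Ethernet uplink per rack

racks = OPTICAL_PORTS                 # one rack uplink per optical port
servers = racks * SERVERS_PER_RACK    # servers reachable via the switch
aggregate_gbps = racks * UPLINK_GBPS  # total uplink bandwidth switched

print(servers, aggregate_gbps)  # 15360 servers, 3200 Gbps of uplinks
```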
"[Data centre] operators want a future-proofed network," says Koss. "They don't want to rebuild when links are upgraded from 10 to 40 and then 100 Gig."
Common cabling used in the data centre includes copper and multi-mode fibre, while Calient's design uses single-mode fibre. According to Koss, data centre managers are installing more single-mode fibre: “It is not so much for reach but for bandwidth and for scaling.”
The switch can also be used for what Calient calls cloud networking, to monitor and manage an enterprise's fibres as they enter the data centre.
ROADMs
The switch will also address agile optical networking, to enable colourless, directionless and contentionless ROADMs.
The optical module will be used for the add/drop, working alongside rather than replacing the 1x9 or 1x20 WSSs used for the pass-through lambdas.
Koss says that the company's main focus in 2012 is addressing the data centre market opportunity but that the switch is of interest to ROADM system vendors. Such a 3D MEMS-based ROADM design will take longer to bring to market.
Further reading:
CALIENT's 3D MEMS Technology Enables Exploding Bandwidth Demands (log-in required to download the White Paper)
Fulcrum's Alta switch chips add programmable pipeline to keep pace with standards
Part 2: Ethernet switch chips
Fulcrum Microsystems has announced its latest FocalPoint chip family of Ethernet switches. The Alta FM6000 family supports up to 72 10-Gigabit ports and can process over one billion packets a second.

“Instead of every top-of-rack switch having a CPU subsystem, you could put all the horsepower into a set of server blades”
Gary Lee, Fulcrum Microsystems
The company’s Alta FM6000 series is its third generation of FocalPoint Ethernet switches. Based on a 65nm CMOS process, the Alta switch architecture includes a programmable packet-processing pipeline that can support emerging standards for data centre networking. These include Data Center Bridging (DCB), Transparent Interconnection of Lots of Links (TRILL), and two server virtualisation protocols: the IEEE 802.1Qbg Edge Virtual Bridging and the IEEE 802.1Qbh Bridge Port Extension.
Why is this important?
Data centre networking is undergoing a period of upheaval due to server virtualisation. Data centre operators must cope with the changing nature of traffic flows, as the predominant traffic becomes server-to-server (east-west traffic) rather than between the servers and end users (north-south).
In turn, IT staff want to consolidate the multiple networks they must manage - for LAN, storage and high-performance computing - onto a single network based on DCB.
They also want to reduce the number of switch platforms they must manage. This is leading switch vendors to develop larger, flatter architectures; instead of the traditional three tiers of switches, vendors are developing sleeker two-tier and even a single-layer, logical switch architecture that spans the data centre.
“There are people out there that have enterprise gear where their data centre connection has to go through the access, aggregation and core [switches],” says Gary Lee, director of product marketing at Fulcrum Microsystems. “They may not want to swap out that gear so they are going to continue to have three tiers even if it is not that efficient.”
But other customers such as large cloud computing players do not require such a switch hierarchy and its associated software complexity, says Lee: “They are the ones that are driving to a ‘lean core’, made up of top-of-rack and the end-of-row switch that acts as a core switch.”
Switch vendor Voltaire, a customer of Fulcrum’s ICs, uses such an arrangement to create 288 10-Gigabit Ethernet ports based on two tiers of 24-port switch chips. With the latest 72-port FM6000 series, a two-tier architecture with over 2,500 10-Gigabit ports becomes possible. “The software can treat the entire structure of chips as a single large virtual switch,” says Lee.
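The port counts Lee cites follow from the arithmetic of a standard non-blocking two-tier (leaf-spine) fabric built from identical n-port chips, which yields n²/2 host-facing ports. This is an inferred reading of the topology, not a published design, but it reproduces both figures:

```python
def two_tier_ports(n: int) -> int:
    """Host ports of a non-blocking two-tier fabric of identical n-port
    switch chips: each leaf splits its ports evenly between hosts and
    uplinks, and each spine chip connects once to every leaf."""
    leaves = n                     # one spine port per leaf
    host_ports_per_leaf = n // 2   # the other half of a leaf's ports go up
    return leaves * host_ports_per_leaf   # = n**2 // 2

print(two_tier_ports(24))  # 288  - matches the 24-port chip arrangement
print(two_tier_ports(72))  # 2592 - "over 2,500" with the 72-port FM6000
```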
Alta architecture
Fulcrum's Alta FM6000 series architecture. Source: Fulcrum Microsystems

Fulcrum’s FocalPoint 6000 series comprises nine devices with capacities from 160 to 720 Gigabits-per-second (Gbps). The Alta chip architecture has three main components:
- Input-output ports
- RapidArray shared memory and
- the FlexPipe array pipeline.
Like Fulcrum’s second generation Bali architecture, the 6000 series has 96 serialiser/deserialiser (serdes) ports, but these have been upgraded from 3.125Gbps to 10Gbps. “We have very flexible port logic,” says Lee. “We can group four serdes to create a XAUI [10 Gigabit Ethernet] port or create an IEEE 40 Gigabit Ethernet port.”
RapidArray is a single shared memory which can be written to and read from at full line rate from all the ports simultaneously, says Fulcrum. Each memory output port has a set of eight class-of-service queues, while the shared memory can be partitioned to separate storage traffic from data traffic.
“The shared memory design is where we get the low latency, and good multicast performance which people in the broadband access market like for video distribution,” says Lee.
The architecture’s third main functional block is the FlexPipe array pipeline. The pipeline, new to Alta, is what enables up to a billion 64-byte packets to be processed each second. The packet-processing pipeline combines look-up tables and microcode-programmable functional blocks that process a packet’s fields. Being able to program the array pipeline means the device can accommodate standards changes as they evolve, as well as switch vendors’ proprietary protocols.
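The billion-packet figure is consistent with simple line-rate arithmetic, assuming standard Ethernet per-frame overheads (an 8-byte preamble and a 12-byte inter-frame gap):

```python
FRAME_BYTES = 64
OVERHEAD_BYTES = 20   # 8-byte preamble + 12-byte inter-frame gap
PORT_GBPS = 10
PORTS = 72            # the FM6000's maximum 10-Gigabit port count

bits_on_wire = (FRAME_BYTES + OVERHEAD_BYTES) * 8   # 672 bits per frame
pps_per_port = PORT_GBPS * 1e9 / bits_on_wire       # ~14.88 million
total_pps = PORTS * pps_per_port                    # ~1.07 billion

print(round(pps_per_port / 1e6, 2), "Mpps per port")
print(round(total_pps / 1e9, 2), "billion packets per second")
```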
OpenFlow
The FocalPoint software development kit that comes with the chips supports OpenFlow. OpenFlow is an academic initiative that allows networking protocols to be explored using existing hardware, but it is of growing interest to data centre operators.
“It creates an industry-standard application programming interface (API) to the switches,” explains Lee. It would allow the likes of a Google or a Yahoo! to swap one vendor's switch platform for another's, as long as both vendors supported OpenFlow.
OpenFlow also establishes the idea of a central controller that would run on a server to configure the network. “Instead of every top-of-rack switch having a CPU subsystem, you could put all the horsepower into a set of server blades,” says Lee. This promises to lower the cost of switches and more importantly enable operators to unshackle themselves from switch vendors’ software.
Lee points out that OpenFlow is still in its infancy. But Fulcrum has added an ‘OpenFlow vSwitch stub’ to its software that translates between OpenFlow APIs and FocalPoint APIs.
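The match-action model that OpenFlow introduces can be sketched in a few lines: a central controller installs flow entries in otherwise simple switches, and a packet that matches no entry is punted back to the controller. This mimics the model only, not the OpenFlow wire protocol or Fulcrum's actual APIs:

```python
class Switch:
    """A forwarding element with no control plane of its own."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []   # ordered list of (match_fn, action) pairs

    def install(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, packet):
        for match, action in self.flow_table:
            if match(packet):
                return action
        return "send-to-controller"   # table miss: ask the controller

class Controller:
    """One central controller configures every switch in the network."""
    def __init__(self, switches):
        self.switches = switches

    def push_rule(self, match, action):
        for sw in self.switches:
            sw.install(match, action)

tor = Switch("top-of-rack-1")
ctrl = Controller([tor])
ctrl.push_rule(lambda p: p["dst"].startswith("10.0."), "output:port-7")

print(tor.forward({"dst": "10.0.3.9"}))     # output:port-7
print(tor.forward({"dst": "192.168.1.1"}))  # send-to-controller
```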
What next?
Fulcrum says it continues to monitor the various evolving standards such as DCB, TRILL and the virtualisation work. Fulcrum is also getting requests to support latency measurement on its chips using techniques such as synchronous Ethernet to ensure service level agreements are met.
As for future FocalPoint designs, these will have greater throughput, with larger table sizes, packet buffers and higher-speed 100 Gigabit Ethernet interfaces.
Meanwhile, all nine chips in the FM6000 series will be available from the second quarter of 2011.
Click here for Part 1: Single-layer switch architectures
Click here for Part 3: Networking developments
Is Broadcom’s chip powering Juniper’s Stratus?
Part 1: Single-layer switch architectures
Juniper Networks’ Stratus switch architecture, designed for next-generation data centres, is several months away from trials. First detailed in 2009, Stratus is being engineered as a single-layer switch with an architecture that will scale to support tens of thousands of 10 Gigabit-per-second (Gbps) ports.

Stratus will be in customer trials in early 2011.
Andy Ingram, Juniper Networks
Data centres use a switch hierarchy, commonly made up of three layers. Multiple servers are connected to access switches, such as top-of-rack designs, which are connected to aggregation switches whose role is to funnel traffic to large, core data centre switches.
Moving to a single-layer design promises several advantages. Not only does a single-layer architecture reduce the overall number of managed platforms, bringing capital and operational expense savings, it also reduces switch latency.
Broadcom’s IC for Stratus?
The Stratus architecture has yet to be detailed by Juniper. But the company has said that the design will be based on a 64x10Gbps ASIC building block dubbed a path-forwarding engine (PFE).
“The building block – the PFE – that can have that kind of density (64x10Gbps) gives us the ability to build out the network fabric in a very economical way,” says Andy Ingram, vice president of product marketing and business development of the fabric and switching technologies business group at Juniper Networks.
Stratus is being designed to provide any-to-any connectivity and operate at wire speed. “You have a very dense, very high-cross-sectional bandwidth fabric,” says Ingram. “The only way to make it economical is to use dense ASICs.”
Broadcom’s latest StrataXGS Ethernet switch family - the BCM56840 series - comprises three devices to date, the largest of which - the BCM56845 – also has 64x10Gbps ports.
Juniper will not disclose whether it is using its own ASIC or a third-party device for Stratus.
Broadcom, however, has said that its BCM56840 series is being used by vendors developing flat, single-layer switch architectures. “Anyone using merchant Ethernet switching silicon to build a single-stage environment is probably using our technology,” says Nick Ilyadis, chief technical officer for Broadcom’s infrastructure networking group.
Stratus will be in customer trials in early 2011. “In a lot less than 6 months”, says Ingram. “We have some customers that have some very difficult networking challenges that are signed up to be our early field trials and we will work with them extensively.”
The timeline aligns with Broadcom’s claim that samples of the BCM56840 ICs have been available for months and will be in production by year-end.
According to Broadcom, only a handful of switch vendors have the resources to design such a complex switch ASIC and also expect to recoup their investment. Moreover, a switch vendor using Broadcom's IC has plenty of scope to differentiate their design using software, and even FPGA hardware if needed. It is software that brings out the many features of the BCM56845, says Broadcom.
The BCM56845
Broadcom’s BCM56840 ICs share a common feature set but differ in their switching capacity. The largest, the BCM56845, has a switching capacity of 640Gbps. The device’s 64x10 Gigabit Ethernet (GbE) ports can also be configured as 16x40 GbE ports.
The BCM56845 supports data center bridging (DCB), the Ethernet protocol enhancement that enables lossless transmission of storage and high-performance computing traffic. It also supports the Fibre Channel over Ethernet (FCoE) protocol that frames Fibre Channel storage traffic over DCB-enhanced Ethernet.
Besides DCB Ethernet, the series includes layer 3 packet processing and routing. There is also a multi-stage content-aware engine that allows higher-layer, more complex packet inspection (layers 4 to 7 of the Open Systems Interconnection model) and policy management.
The content-aware functional block can also be used for packet cut-through, a technique that reduces switch latency by inspecting the header information and forwarding a packet while its payload is still arriving. Broadcom says the switch’s latency is less than one microsecond.
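The latency advantage of cut-through is easy to quantify for the serialisation component alone. This toy calculation ignores fabric and processing delays, and assumes the switch can decide after the first 64 bytes:

```python
def serialisation_ns(num_bytes, gbps=10):
    """Time to clock `num_bytes` onto a link, in nanoseconds."""
    return num_bytes * 8 / gbps

FRAME = 1500   # bytes in a full-sized Ethernet frame
HEADER = 64    # bytes assumed needed before a forwarding decision

store_and_forward = serialisation_ns(FRAME)   # wait for the whole frame
cut_through = serialisation_ns(HEADER)        # forward once headers arrive

print(store_and_forward, "ns vs", cut_through, "ns")  # 1200.0 ns vs 51.2 ns
```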
Most importantly, the BCM56845 addresses the move to a flatter switching architecture in the data centre.
It supports the Transparent Interconnection of Lots of Links (TRILL) standard ratified by the Internet Engineering Task Force (IETF) in July. Ethernet uses a spanning tree technique to avoid the creation of loops within a network. However the spanning tree becomes unwieldy as the Ethernet network size grows and works only at the expense of halving the available networking bandwidth. TRILL is designed to allow much larger Ethernet networks while using all available bandwidth.
Broadcom has its own protocol known as HiGig that adds tags to packets. Using HiGig, a very large logical switch can be created and managed, made up of multiple interconnected switches. Any port of the IC can be configured as a HiGig port.
So has Broadcom’s BCM56845 been chosen by Juniper Networks for Stratus? “I really can’t comment on which customers are using this,” says Ilyadis.
Click here for Part 2: Ethernet switch chips
Click here for Part 3: Networking developments
"It's a bit like working with your wife; it has its ups and downs."
Siraj ElAhmadi, CEO of Menara Networks, on what it is like working with his brother, Salam, who is the company's CTO.
Titbits and tweets you may have missed
- ENISA has published a comprehensive report entitled Cloud Computing: Benefits, risks and recommendations for information security. The 125-page report can be downloaded from the ENISA site, click here.
- There is also a SecureCloud 2010 conference coming up in March involving ENISA and the Cloud Security Alliance, click here for details.
- A European court overruled German regulators that had allowed Deutsche Telekom to ban competitors from having access to its high-speed broadband network.
- Teknovus announced availability of the TK3401 EPON node controller that supports central office (OLT) to subscriber (ONU) distances of up to 100 km and up to 1,000 subscriber ONUs. And if you ever wondered what is the difference between the optical network unit (ONU) and optical network terminal (ONT), here is the answer.
- Broadcom announced its plan to acquire Dune Networks for $178m. Meanwhile, the 10GBASE-T copper interface got a shot in the arm with start-up Aquantia raising $44m in financing.
Cloud computing: where telecoms and IT collide
Originally appeared in FibreSystems - May 21st 2009
IT directors worldwide are considering whether it makes financial sense to move their computing resources offsite into the 'cloud'. Roy Rubenstein assesses the potential opportunities for network operators and equipment vendors.
Cloud computing is the latest industry attempt to merge computing with networking. While previous efforts have all failed, the gathering evidence suggests that cloud computing may have got things right this time. Indeed it is set to have a marked effect on how enterprises do business, while driving the growth of network traffic and new switch architectures for the data centre.
In the mid-1990s, Oracle proposed putting computing within a network, and coined the term "network computer". The idea centred on a diskless desktop for businesses on which applications were served. The concept failed in its bid to dislodge Intel and Microsoft, but was resurrected during the dot-com boom with the advent of application service providers (ASPs).
ASPs delivered computer-based services to enterprises over the network. The ASPs faltered partly because applications were adapted from existing ones rather than being developed with the web in mind. Equally, the ASPs' business models were immature, and broadband access was in scarce supply. But the idea has since taken hold in the shape of software-as-a-service (SaaS). SaaS provides enterprises with business software on demand over the web, so a firm does not need to buy and maintain that software on its own platforms.
SaaS can be viewed as part of a bigger trend in cloud computing. Examples of cloud services include Google's applications such as e-mail and online storage, and Amazon with its Elastic Compute Cloud service, where application developers configure the computing resources they need.
Cloudy thinking
The impact of cloud starts and finishes in the IT sector. "Cloud computing is not just [for] Web 2.0 companies, it is a game-changer for the IT industry," said Dennis Quan, director of IBM's software group. "In general it's about massively scalable IT services delivered over the network."
An ecosystem of other players is required to make cloud happen. The early movers in this respect are data-centre owners and IT services companies like Amazon and IBM, and the suppliers of data-centre hardware, which include router vendors Cisco Systems and Juniper Networks, and Ethernet switch makers such as Extreme Networks and Force10 Networks.
Telecommunications carriers too are jumping on the bandwagon, which is not surprising given their experience as providers of hosting and managed services coupled with the networking expertise needed for cloud computing. International carrier AT&T, for instance, launched its Synaptic Hosting service in August 2008, a cloud-based, on-demand managed service where enterprises define their networking, computing and storage requirements, and pay for what they use. "There is a base-level platform for the [enterprise's] steady-state need, but users can tune up and tune down [resources] as required," explained Steve Caniano, vice-president, hosting and application services at AT&T.
"The top 10 operators in Europe are all adding utility-based offerings [such as storage and computing], and are moving to cloud computing by adding management and provisioning on top," said Alfredo Nulli, solutions manager for service provision at Cisco. However, it is the second- and third-tier operators in Europe that are "really going for cloud", he says, as they strive to compete with the likes of Amazon and steal a march on the big carriers.
The idea of using IT resources on a pay-as-you-go basis rather than buying platforms for in-house use is appealing to companies, especially in the current economic climate. "Enterprises are tired of over-provisioning by 150% only for equipment to sit idle and burn power," said Steve Garrison, vice-president of marketing at Force10 Networks.
Danny Dicks, an independent consultant and author of a recent Light Reading Insider report on cloud computing, agrees. But he stresses it is a huge jump from using cloud computing for application development to an enterprise moving its entire operations into the cloud. For a start-up developing and testing an application, the cost and scalability benefits of cloud are so great that it makes a huge amount of sense, he says. Once an application is running and has users, however, an enterprise is then dependent on the reliability of the connection. "No-one would worry if a Facebook application went down for an hour but it would make a big difference to an enterprise offering financial services," he commented.
The network perspective
The importance of the network and the implied demand for bandwidth as more and more applications and IT resources sit somewhere remote from the user is good news for operators and equipment makers.
If done right, there is a tremendous opportunity for telecoms operators to increase the value of their networks and create new revenue streams. At a minimum, cloud computing is expected to increase the amount of traffic on their networks.
Service providers stress the need for high-bandwidth, low-latency links to support cloud-based services. AT&T has 38 data centres worldwide, which are connected via its 40 Gbit/s MPLS global backbone network, says Gregg Sexton, AT&T's director of product development. The carrier is concentrating Synaptic Hosting applications in five "super data centres" located across three continents, linked using its OPT-E-WAN virtual private LAN service (VPLS). Using the VPLS, enterprise customers can easily change bandwidth assigned between sites and to particular virtual LANs.
BT, which describes its data centres and network as a "global cloud", also highlights the potential need for higher capacity links. "The big question we are asking ourselves is whether to go to 40 Gbit/s or wait for 100 Gbit/s," said Tim Hubbard, head of technology futures, BT Design.
Likewise, systems vendors are seeing the impact of cloud computing. Ciena first noted interest from large data-centre players seeking high-capacity links some 12 to 24 months ago. "It wasn't a step jump, more an incremental change in the way networks were being built and who was building them," said John-Paul Hemingway, chief technologist, EMEA for Ciena.
Cloud is also having an impact on access network requirements, he says. There is a need to change dynamically the bandwidth per application over a connection to an enterprise. Services such as LAN, video conferencing and data back-up need to be given different priorities at different times of the day, which requires technologies such as virtual LAN with quality-of-service and class-of-service settings.
German vendor ADVA Optical Networking has also noticed a rise in connectivity links to enterprises via demand for its FSP-150 Ethernet access product, which may be driven in part by increased demand for cloud-based services. Another area that's being driven by computing over long distances is the need to carry Infiniband natively over a DWDM lightpath. "Infiniband is used for computing nodes due to its highest connectivity and lowest latency," explained Christian Illmer, ADVA's director of business development.
Virtualization virtues
Cloud computing is also starting to influence the evolution of the data centre. One critical enabling technology for cloud computing in the data centre is virtualization, which refers to the ability to separate a software function from the underlying hardware, so that the hardware can be shared between different software usages without the user being aware. Networks, storage systems and server applications can all be "virtualized", giving end-users a personal view of their applications and resources, regardless of the network, storage or computing device they are physically using.
Virtualization enables multiple firms to share the same SaaS application while maintaining their own unique data, compute and storage resources. Virtualization has also led to a significant improvement in the utilization of servers and storage. Traditionally, usage levels have been a paltry 10 to 15%.
However, virtualization remains just one of several components needed for cloud computing. A separate management-tools layer is also needed to ensure that IT resources are efficiently provisioned, used and charged for. "This reflects the main finding of our report, that the cloud-computing world is starting to stratify into clearly defined layers," said Dicks.
Such management software can also shift applications between platforms to balance loads. An example is moving what is called a virtual machine image between servers. A virtual machine image may comprise 100 GB of storage, middleware and application software. "If [the image] takes up 5% of a server's workload, you may consolidate 10 or 20 such images onto a single machine and save power," said IBM's Quan.
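Quan's consolidation example can be turned into a rough power calculation using an assumed linear power model; the idle and peak wattages below are illustrative, not IBM's figures:

```python
IMAGES = 15                 # images to consolidate (Quan cites 10 to 20)
LOAD_PER_IMAGE = 0.05       # each takes 5% of a server's workload
IDLE_W, PEAK_W = 150, 300   # assumed server power draw, idle vs full load

def server_power(load):
    """Assumed linear power model between idle and peak draw."""
    return IDLE_W + load * (PEAK_W - IDLE_W)

before = IMAGES * server_power(LOAD_PER_IMAGE)   # one server per image
after = server_power(IMAGES * LOAD_PER_IMAGE)    # all images on one server

print(before, "W before,", after, "W after consolidation")
```

On these assumptions the single consolidated server draws well under a tenth of the power of fifteen lightly loaded ones, which is the saving the quote gestures at.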
Force10's Garrison notes that firms issuing requests for proposals for new data centres typically don't mention cloud directly. Instead they ask questions like "Help me see how you can move an application from one rack to another, or between adjacent rows, or even between adjacent data centres 50 miles apart", he said.
Clearly, shuffling applications between servers and between data centres will drive bandwidth requirements. It also helps to explain why vendors are exploring how to consolidate and simplify the switching architecture within the data centre.
"Everything is growing exponentially, whether it is the number of servers and storage installed each year or the amount of traffic," said Andy Ingram, vice-president of product marketing and business development, data-centre business group at Juniper Networks. "The data centre is becoming a wonderful, dynamic and scary place."
This explains why vendors such as Juniper are investigating how current tiered Ethernet switching within the data centre — passing traffic between the platforms and users — can be adapted to handle the expected growth in data-centre traffic. Such growth will also strain connections between equipment: between servers, and between the servers and storage.
According to Ingram the first approach is to simplify the existing architecture. With this in mind, Juniper is looking to collapse the tiered switching from three layers to two, by linking its top-of-rack switches in a loop. Longer term, vendors are investigating the development of a single-tiered switch in a project code-named Stratus. "We are looking to develop a scalable, flat, non-blocking, lossless data-centre fabric," said Ingram.
A flat fabric means processing a packet only once, while a non-blocking architecture removes the possibility of congestion. Such a switch fabric will scale to hundreds or even thousands of 10 Gigabit Ethernet access ports, says Ingram, who stresses that Juniper is in the first year of what will be a multi-year project to develop such an architecture.
Data centre convergence
Another development is Fibre Channel over Ethernet (FCOE) which promises to consolidate the various networks that run within a data centre. At present, servers connect to the LAN using Ethernet and to storage via Fibre Channel. This requires separate cards within the server: a LAN network interface card and a host-bus adapter for storage. FCOE promises to enable Ethernet, and one common converged network adapter card, to be used for both purposes. But this requires a new variant of Ethernet to be adopted within the data centre. Such an Ethernet development is already being referred to by a variety of names in the industry: Data Centre Ethernet, Converged Enhanced Ethernet, lossless Ethernet, and Data Centre Bridging.
Lossless Ethernet could be used to carry Fibre Channel packets since the storage protocol's key merit is that it does not lose packets. Such a development would remove one of the three main protocols in the data centre, leaving Ethernet to challenge Infiniband. But even though FCOE has heavyweight backers in the shape of Cisco and Brocade, it will probably be some years before a sole switching protocol rules the data centre.
Equipment makers believe they can benefit from the widespread adoption of cloud computing, at least in the short term. Although there will be efficiencies arising from virtualization and ever more enterprises sharing hardware, this will be eclipsed by the boost that cloud services provide to IT in general, meaning that more datacoms equipment will be sold rather than less. Longer term, however, it will probably impact hardware sales as fewer firms choose to invest in their own IT.
IBM's Quan notes that enterprises themselves are considering the adoption of cloud capabilities within their private data centres due to the efficiencies it delivers. The company thus expects to see growth of such "private" as well as "public" cloud-enabled data centres.
Dicks believes that cloud computing has a long road map. There will be plentiful opportunities for companies to deliver innovative products for cloud, from software and service support to underlying platforms, he says.
Further information
Cloud Computing: A Definition
Steve Garrison, vice-president of marketing at Force10 Networks, offers a straightforward description of cloud: accessing applications and resources and not caring where they reside.
Danny Dicks, an independent consultant, has come up with a more rigorous definition. He classifies cloud computing as "the provision and management of rapidly scalable, remote, computing resources, charged according to usage, and of additional application development and management tools, generally using the internet to connect the resources to the user."
