Books in 2013 - Part 1
Gazettabyte is asking various industry figures to highlight books they have read this year and recommend, both work-related and more general titles.
Part 1:
Tiejun J. Xia (TJ), Distinguished Member of Technical Staff, Verizon
The work-related title is Optical Fiber Telecommunications, Sixth Edition, edited by Ivan Kaminow, Tingye Li and Alan E. Willner. This edition, published in 2013, covers nearly all of the latest developments in optical fibre communications.
My non-work-related book is Fortune: Secrets of Greatness by the editors of Fortune Magazine. While published in 2006, the book still sheds light on the 'secrets' of people with significant accomplishments.
Christopher N. (Nick) Del Regno, Fellow, Verizon
OpenStack Cloud Computing Cookbook, by Kevin Jackson is my work-related title. While we were in the throes of interviewing candidates for our open Cloud product development positions, I figured I had better bone up on some of the technologies.
One of those was OpenStack's cloud computing software. I had seen recommendations for this book and, after reading and using it, I agree. It is a good 'OpenStack for Dummies' book that walks one through quickly setting up an OpenStack-based cloud computing environment. Further, since this is more of a tutorial book, it rightly assumes that the reader will be using a lower-level virtualisation environment (e.g. VirtualBox) in which to run the OpenStack hypervisor and virtual machines, which makes single-system simulation of a data centre environment even easier.
Lastly, the fact that it is available as a Kindle edition means it can be referenced in a variety of ways in various physical locales. While this book would work for those interested in learning more about OpenStack and virtualisation, it is better suited to those of us who like to get our hands dirty.
My somewhat work-related suggestions include Brilliant Blunders: From Darwin to Einstein – Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe, by Mario Livio.
I discovered this book while watching Livio's interview on The Daily Show. I was intrigued by the subject matter, since many of the major discoveries of the past few centuries were accidental (e.g. penicillin, radioactivity, semiconductors). However, this book's focus is on the major mistakes made by some of the greatest minds in history: Darwin, Lord Kelvin, Pauling, Hoyle and Einstein.
It is interesting to consider how often pride unnecessarily blinded some of these scientists to contradictions of their own work. Further, the book reinforces my belief in the importance of collaboration and friendly competition. So many key discoveries throughout history have been made when two seemingly unrelated disciplines compared notes.
Another is Beyond the Black Box: The Forensics of Airplane Crashes, by George Bibel. As a frequent flyer and an aviation buff since childhood, I have always been intrigued by the process of accident investigation. This book offers a good exploration of the crash investigation process, with many case studies of various causes. The book explores the science of the causes and the improvements resulting from various accidents and related investigations. From the use of rounded openings in the skin (as opposed to square windows) to high-temperature alloys in the engines to ways to mitigate the impact of crash forces on the human body, the book is a fascinating journey through the lessons learned and the steps taken to avoid having to learn them again.
While enumerating the ways a plane could fail might dissuade some from flying, I found the book reassuring. The application of the scientific method to identifying the cause of, and solution to, airplane crashes has made air travel incredibly safe. In exploring the advances, I’m amazed at the bravery and temerity of early air travelers.
Outside work, my reading includes Doctor Sleep, by Stephen King. The sequel to The Shining follows the little boy, Dan Torrance, as an adult, his role now reversed as the protective mentor of a young child with a powerful shining.
I also recommend Joyland (Hard Case Crime), by Stephen King. King tries his hand at writing a hard-boiled crime novel, with great results. Not your typical King... think Stand by Me, Hearts in Atlantis, The Shawshank Redemption.
Andrew Schmitt, Principal Analyst, Optical at Infonetics Research
My work-related reading is Research at Google.
Very little signal comes out of Google on what they are doing and what they are buying. But this web page summarises public technical disclosures and has good detail on what they have done.
There are a lot of pretenders in the analyst community who think they know the size and scale of Google's data centre business, but the reality is that this company is sealed up tight in terms of disclosure. I put something together back in 2007 that tried to size its 10GbE consumption (5,000 10GbE ports a month), but I am the first to admit that getting a handle on the magnitude of their optical networking and enterprise networking business today is a challenge.
Another offending area is software-defined networking (SDN). Pundits like to talk about SDN and how Google implemented the technology in their wide area network but I would wager few have read the documents detailing how it was done. As a result, many people mistakenly assume that because Google did it in their network, other carriers can do the same thing - which is totally false. The paper on their B4 network shows the degree of control and customisation (that few others have) required for its implementation.
I also have to plug a Transmode book on packet-optical networks. It does a really good job of defining what is a widely abused marketing term, but I'm a little biased as I wrote the foreword. It should be released soon.
My non-work-related reading includes Nate Silver's book, The Signal and the Noise: Why So Many Predictions Fail - but Some Don't. I am enjoying it; I think he approaches the work of analysis the right way. I'm only halfway through but it is a good read so far. The description on Amazon summarises it well.
But some very important books that shaped my thinking are by Nassim Taleb. Fooled by Randomness is by far the best read and the most approachable. If you like that, then go for The Black Swan. Both are excellent and do a fantastic job of outlining the cognitive biases that can lead to poor outcomes. It is philosophy for engineers, and you should stop taking market advice from anyone who hasn't read at least one.
The Steve Jobs biography by Walter Isaacson was widely popular and rightfully so.
A Thread Across the Ocean is a great book about the first trans-Atlantic cable, but only for industry insiders - don't talk about it with people outside the industry or you'll be marked as a nerd.
If you are into crazy infrastructure projects try Colossus about the Hoover Dam and The Path Between the Seas about the Panama Canal. The latter discloses interesting facts like how an entire graduating class of French engineers died trying to build it – no joke.
Lastly, I have to disclose an affinity for some favourite fiction: Brave New World, by Aldous Huxley and The Fountainhead by Ayn Rand.
I could go on.
If anyone reading this likes these books and has more ideas please let me know!
Alcatel-Lucent dismisses Nokia rumours as it launches NFV ecosystem
Michel Combes, CEO. Photo: Kobi Kantor.
The CEO of Alcatel-Lucent, Michel Combes, has brushed off rumours of a tie-up with Nokia, after reports surfaced last week that Nokia's board was considering the move as a strategy option.
"You will have to ask Nokia," said Combes. "I'm fully focussed on the Shift Plan, it is the right plan [for the company]; I don't want to be distracted by anything else."
Combes was speaking at the opening of Alcatel-Lucent's cloud R&D centre in Kfar Saba, Israel, where the company's internal start-up CloudBand is developing cloud technology for carriers.
Network Functions Virtualisation
CloudBand used the site opening to unveil its CloudBand Ecosystem Program to spur adoption of Network Functions Virtualisation (NFV). NFV is a carrier-led initiative, set up by the European Telecommunications Standards Institute (ETSI), to benefit from the IT model of running applications on virtualised servers.
Carriers want to get away from vendor-specific platforms that are expensive to run and cumbersome to upgrade when new services are needed. Adding a service can take between 18 months and three years, said Dor Skuler, vice president and general manager of the CloudBand business unit. Moreover, such equipment can reside in the network for 15 years. "Most of the [telecom] software is running on CPUs that are 15 years old," said Skuler.
Instead, carriers want vendors to develop software 'network functions' executed on servers. NFV promises a common network infrastructure and reduced costs by exploiting the economies of scale associated with servers. Server volumes dwarf those of dedicated networking equipment, and servers are regularly upgraded with new CPUs.
Applications running on servers can also be scaled up and down, according to demand, using virtualisation and cloud orchestration techniques already present in the data centre. "This is about to make the network scalable and automated," said Combes.
Alcatel-Lucent stresses that not all networking functions are suited for virtualisation. Optical transport is one example. Another is routing, which requires dedicated silicon for packet processing and traffic management.
CloudBand was set up in 2011. The unit is focussed on the orchestration and automation of distributed cloud computing for carriers. "How do you operationalise cloud which may be distributed across 20 to 30 locations?" said Skuler.
CloudBand says it can add a "cloud node" - IT equipment at an operator's site - and have it up and running three hours after power-up. This requires processes that are fully automated, said Skuler. Also used are algorithms developed at Alcatel-Lucent Bell Labs that determine the best location for distributed cloud resources for a given task. The algorithms load-balance the resources based on an application's requirements.
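Alcatel-Lucent has not published the Bell Labs algorithms, but the placement idea Skuler describes - scoring candidate cloud nodes against an application's requirements and load-balancing across the feasible ones - can be sketched in a few lines. All names, fields and figures below are invented for illustration:

```python
# Hypothetical sketch of requirements-based placement across distributed
# cloud nodes; not Alcatel-Lucent's actual Bell Labs algorithm.

def place_workload(app, nodes):
    """Pick the cloud node that best fits an application's requirements."""
    candidates = [
        n for n in nodes
        if n["free_cpus"] >= app["cpus"]
        and n["latency_ms"] <= app["max_latency_ms"]
    ]
    if not candidates:
        raise RuntimeError("no node satisfies the application's requirements")
    # Load-balance: prefer the least-utilised of the feasible nodes.
    return min(candidates, key=lambda n: n["utilisation"])

nodes = [
    {"name": "node-east", "free_cpus": 64, "latency_ms": 12, "utilisation": 0.72},
    {"name": "node-west", "free_cpus": 32, "latency_ms": 28, "utilisation": 0.31},
]
app = {"cpus": 16, "max_latency_ms": 30}
print(place_workload(app, nodes)["name"])   # -> node-west
```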
The distributed cloud technology also benefits from software-defined networking (SDN) technology from Alcatel-Lucent's other internal venture, Nuage Networks. Nuage Networks automates and sets up network connections between data centres. "Just as SDN makes use of virtualisation to give applications more memory and CPU resources in the data centre, Nuage does the same for the network," said Skuler.
Open interfaces are needed for NFV to succeed and to avoid proprietary solutions and vendor lock-in. Alcatel-Lucent's NFV solution needs to support third-party applications, while the company's applications will have to run on other vendors' platforms. To this end, CloudBand has set up an NFV ecosystem for service providers, vendors and developers.
"We have opened up CloudBand to anyone in the industry to test network applications on top of the cloud," said Skuler. "We are the first to do that."
So far, 15 companies have signed up to the CloudBand Ecosystem Program including Deutsche Telekom, Telefonica, Intel and HP.
Technologies such as NFV promise operators a way to regain market traction and avoid the commoditisation of transport, said Combes. Operators can manage their networks more efficiently, and create new business models. For example, operators can sell enterprises network functions such as infrastructure-as-a-service and platform-as-a-service.
Doesn't running software functions on servers undermine a telecom equipment vendor's primary business? "We are still perceived as a hardware company yet 85 percent of systems is software-based," said Combes. Moreover, this is a carrier-driven initiative. "This is where our customers want to go," he said. "You either accept there will be a bit of cannibalisation or run the risk of being cannibalised by IT players or others."
The Shift Plan
Combes has been in place as Alcatel-Lucent's CEO for four months. In that time he has launched the Shift Plan, which focusses the company's activities on three broad areas: IP infrastructure, including routing and transport; cloud; and ultra-broadband access, including wireless (LTE) and wireline (FTTx).
Combes says the goal is to regain the competitiveness Alcatel-Lucent has lost in recent years by improving product innovation, quality of execution and the company's cost structure. He has also tackled the balance sheet, refinancing company debt over the summer.
The Shift Plan's target is to get the company back on track by 2015: growing, profitable and industry-leading in the three areas of focus, he said.
Coriant adds optical control to SDN framework
Coriant's CTO, Uwe Fischer, explains its Intelligent Optical Control and how the system will complement Transport SDN.

"You either master all that complexity at once, or you find the right entry point and provide value for each concrete challenge, and extend step-by-step from there"
Uwe Fischer, CTO of Coriant
Coriant has deployed a networking framework that it says will comply with Transport SDN, the software-defined networking (SDN) implementation for the wide area network (WAN).
The company's Intelligent Optical Control system is already deployed with one large North American operator while Coriant is working to install the system with other Tier 1 customers.
Work to extend SDN technology beyond the data centre, across operators' transport networks, has only just begun. The Open Networking Foundation (ONF), for example, has established an Optical Transport Working Group to define the extensions needed to enable SDN control of the transport layer and not just the packet layer.
"SDN and optical networking go together nicely; they are not decoupled but make up an end-to-end overall framework," says Uwe Fischer, CTO at Coriant.
The Intelligent Optical Control is designed to tackle immediate networking issues as Transport SDN is developed. Coriant says its system complies with the ONF's three-layer SDN model: the top, application layer interfaces with the middle, control layer, where the SDN controller oversees the network elements found in the bottom, infrastructure layer.
Intelligent Optical Control adds two components to this model. The first is an extra intelligence component in the control layer, sitting between the SDN controller and the infrastructure layer. This intelligence is designed to exploit the intricacies of the optical layer.
Coriant has also added an application at the topmost layer to automate operational procedures. "SDN at the application layer is centered around service creation," says Fischer. "We see a complete set of other applications which automate operational workflows."
Optical intelligence
One key benefit of SDN is the central view it has of the network and its resources. Such centralised control works well in the data centre and packet networking. Operators' networks are more complex, however, housing multiple vendors' equipment and multiple networking layers and protocols.
The ONF's Optical Transport Working Group is investigating two approaches - direct and abstract models - to enable the OpenFlow standard to extend its control across all the transport layers.
With the direct model, an SDN controller will talk to each network element, controlling its forwarding behaviour and port characteristics. The abstract model, in contrast, will enable the controller to talk to a network element or an intermediate controller or 'mediation'. This mediation performs a translator role, enacting requests from the SDN controller.
The direct model interests certain ONF members due to its potential to reduce the cost of networking equipment by moving much of the software from each element to the SDN controller. The abstract model, in contrast, has the benefit of limiting how much of the underlying network's detail the controller needs to be exposed to.
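Neither model prescribes an implementation, but the contrast can be sketched as follows; the classes and method names are invented purely to illustrate where the optical detail lives in each case:

```python
# Toy contrast of the ONF direct and abstract models; all names invented.

class NetworkElement:
    def set_crossconnect(self, in_port, out_port, wavelength):
        print(f"element: {in_port} -> {out_port} on {wavelength}")

class Mediation:
    """Abstract model: translates high-level controller requests into
    vendor- and layer-specific element commands."""
    def __init__(self, elements):
        self.elements = elements
    def request_path(self, a_end, z_end):
        # The controller never sees wavelengths or impairments; the
        # mediation picks them using its local knowledge of the layer.
        for element, (i, o) in self._route(a_end, z_end):
            element.set_crossconnect(i, o, wavelength="ch-32")
    def _route(self, a_end, z_end):
        return [(self.elements[0], (a_end, z_end))]  # trivial one-hop route

# Direct model: the SDN controller programs each element itself.
NetworkElement().set_crossconnect(1, 7, "ch-14")

# Abstract model: the controller only asks the mediation for a path.
Mediation([NetworkElement()]).request_path(1, 7)
```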
Coriant says it has yet to form a view as to the benefits of the direct and abstract ONF models. That said, Fischer does not see any mechanisms being discussed in the ONF that will fully exploit the potential of the photonic network. Accordingly, Coriant has added its own intelligence that sits between the SDN controller and the photonic layer.
“We fully comply with the approach of an SDN controller, however, we put another layer in between the control layer and the infrastructure layer,” says Fischer. “We consider it a part of the control layer, but adding the planning and routing intelligence to leverage the full performance of the infrastructure layer underneath."
Fischer says there is a role for abstraction at the photonic layer but perhaps only for metro networks. "We currently don't think this will really extend to the wide area photonic layer," he says.
"The added intelligence can leverage the full performance of the WDM network because it knows all the planning rules in detail," says Fischer. It does multi-layer optimisation across the transport layers. Coriant has added the intelligence because it does not think the transport-network-specific aspects can be centralised in a generic way.
Automated operations
Coriant's Intelligent Optical Control also adds an application to automate operational procedures. Fischer cites how the application-layer component benefits the workflow when a service is activated in the network.
With each service request, the Intelligent Optical Control determines whether the new service can be squeezed onto existing infrastructure and details the performance parameters to be expected, such as latency and the guaranteed bandwidth. "The operator can immediately judge the service level they would get," says Fischer.
Another planning mode supports the adding of equipment at the infrastructure layer. This enables a comparison to be made as to how the service level would improve with extra equipment in place.
If the operator can justify the business case for new hardware, the workflow is then automated. The tool creates the bill of materials, the electronic order, and the configuration and planning data needed to deploy the hardware in the network.
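A minimal sketch of the workflow described, with invented routes and figures - an illustration of the idea, not Coriant's tool:

```python
# Hypothetical sketch of the service-activation check described above.
EXISTING_PATHS = [  # candidate routes with spare capacity (invented)
    {"route": ("A", "Z"), "spare_gbps": 40, "latency_ms": 18},
    {"route": ("A", "Z"), "spare_gbps": 100, "latency_ms": 25},
]

def activate_service(a_end, z_end, bandwidth_gbps):
    """Report whether the service squeezes onto existing infrastructure,
    and the service level the operator could expect."""
    fits = [p for p in EXISTING_PATHS
            if p["route"] == (a_end, z_end)
            and p["spare_gbps"] >= bandwidth_gbps]
    if fits:
        best = min(fits, key=lambda p: p["latency_ms"])
        return {"fits": True, "latency_ms": best["latency_ms"],
                "guaranteed_gbps": bandwidth_gbps}
    # Planning mode: new hardware is needed; the real tool would also
    # generate the bill of materials, order and configuration data.
    return {"fits": False, "action": "generate bill of materials"}

print(activate_service("A", "Z", 30))   # fits on the 18 ms route
```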
Coriant says equipment and services can be time-tagged. If an engineer is known to be visiting a site when the hardware becomes available, the card can be pre-assigned and automatically brought into use once it is plugged in. "There is a full consistency as to how the hardware is managed and optimised towards service creation," says Fischer.
Coriant is working with its major customers to create a testbed to demonstrate an SDN implementation of IP-over-DWDM. "It will involve interworking with third-party routers, and using SDN controllers to control the packet part of the network with OpenFlow and other mechanisms, and then connected to the Intelligent Optical Controller."
The goal is to demonstrate that Coriant's approach complies with this use case while better exploiting the optical network's capabilities.
Fischer says optical networking is moving to a new phase as transmission speeds move beyond 100 Gigabit.
"We are entering an interesting phase as capacity and reach hit the limits of practical networks," he says. "This means we are talking about flexible modulation formats and variously composed super-channels for 400 Gigabit and 1 Terabit."
In effect, a virtualisation of bandwidth is taking place at the photonic layer. "This fits nicely into the SDN principle as on the one hand it virtualises capacity, which very much fits in the model of virtualising infrastructure."
But it also brings challenges.
"There is currently not a good practical means to manage such flexible capacity at the photonic layer," says Fischer. This, says Coriant, it what its customers are saying. It also explains Coriant's decision to add the optical controller. "You either master all that complexity at once, or you find the right entry point and provide value for each concrete challenge, and extend step-by-step from there," says Fischer.
OIF defines carrier requirements for SDN
The Optical Internetworking Forum (OIF) has achieved its first milestone in defining the carrier requirements for software-defined networking (SDN).

The orchestration layer will coordinate the data centre and transport network activities and give easy access to new applications
Hans-Martin Foisel, OIF
The OIF's Carrier Working Group has begun the next stage, a framework document, to identify missing functionalities required to fulfill the carriers' SDN requirements. "The framework document should define the gaps we have to bridge with new specifications," says Hans-Martin Foisel of Deutsche Telekom, and chair of the OIF working group.
There are three main reasons why operators are interested in SDN, says Foisel. SDN offers a way for carriers to optimise their networks more comprehensively than before; not just the network but also processing and storage within the data centre.
"IP-based services and networks are making intensive use of applications and functionalities residing in the data centre - they are determining our traffic matrix," says Foisel. The data centre and transport network need to be coordinated and SDN can determine how best to distribute processing, storage and networking functionality, he says.
SDN also promises to simplify operators' operational support systems (OSS) software, and separate the network's management, control and data planes to achieve new efficiencies.
SDN architecture
The OIF's focus is on Transport SDN, involving the management, control and data plane layers of the network. Also included is an orchestration layer that will sit above the data centre and transport network, overseeing the two domains. Applications then reside on top of the orchestration layer, communicating with it and the underlying infrastructure via a programmable interface.
"Aligning the thinking among different people is quite an educational exercise, and we will have to get to a new understanding"
"The orchestration layer will coordinate the data centre and transport network activities and give, northbound, easy access to new applications," says Foisel.
A key SDN concept is programmability and application awareness, he says. The orchestration layer will require specified interfaces to ease the adding of applications independent of whether they impact the data centre, transport network or both.
Foisel says the OIF work has already highlighted the breadth of vision within the industry regarding how SDN should look. "Aligning the thinking among different people is quite an educational exercise, and we will have to get to a new understanding," he says.
Having equipment prototypes is also helping in understanding SDN. "Implementations that show part of this big picture - it is doable, it is working and how it is working - is quite helpful," says Foisel.
The OIF Carrier Working Group is working closely with the Open Networking Foundation's (ONF) Optical Transport Working Group to ensure that the two groups are aligned. The ONF group is developing optical extensions to the OpenFlow standard.
Ciena and partners build SDN testbed for carriers
Ciena, working with partners, is building a network to enable the development of software-defined networking (SDN) applied to the wide area network (WAN). The motivation in creating the testbed network is to boost carrier confidence in SDN while aiding its development.
"When you get very serious vice presidents in tier-one carriers saying, 'This [SDN] is the biggest change in my career', there is something to it."
Chris Janz, Ciena
Many software elements will be needed for SDN in the carrier environment, spanning the network through to the back office. Much is made of the benefits SDN will deliver, but it is difficult for operators to gauge SDN's full potential until they transform their networks. Carriers also want the confidence that the industry will deliver the SDN components needed.
To this aim, Ciena, along with research and education partners, CANARIE, Internet2 and StarLight, are developing the SDN test network. Carriers, research partners and Ciena's R&D team will use the network to experiment and validate SDN's benefits for packet and optical WANs.
Parts of the SDN network have already been demonstrated and Ciena expects the SDN test environment to be up and running in the next couple of months. Many of the components are in prototype form.
"The goal is to leap to the end point by providing the key parts of the future system for a carrier-style SDN-powered WAN, thereby demonstrating conclusively the macro SDN service cases that people imagine can be delivered," says Chris Janz, vice president of market development at Ciena.
Another aim is to help carriers determine how best to migrate their networks to the future SDN framework.
Testbed
The testbed conforms to the Open Networking Foundation's SDN architecture that comprises three layers. "Two of them are software: business applications talking to a network controller system which drives the physical network," says Janz.
The business application layer includes such systems as customer management, service creation and billing. "What we think of as OSS/ BSS and cloud orchestration systems," says Janz. Components such as cloud orchestration systems and portals that simulate customer actions are being contributed by partners to exercise the testbed.
Ciena has chosen OpenFlow, the open standard, to drive the packet and transport layers. "This [SDN] is not the data centre, it is not all packet; it is a model carrier-style network," he says.
The SDN controller is designed to add flexibility and open up the design. "There is a clear spirit in SDN that customers want to take more affirmative control of their competitive destiny," says Janz. "They do not want to be locked into services, features and functions that their vendors deliver to them and their competitors."
Ciena is part of OpenDaylight, the Linux Foundation's SDN controller industry initiative, and this will be included. "There is a modular structure with internal interfaces," says Janz. "There is leveraging of some early-generation open-source components for part of the structure."
The control system is designed to be the heart of what Janz refers to as 'autonomic intelligence' to deliver the sought-after benefits of SDN.
One such benefit is for carriers to contain their capital costs by better filling their networks with traffic - running them 'hotter'. "Can they move their networks from 35 percent average utilisation to 95 percent?" says Janz.
Software-based intelligence, as delivered by SDN, can dynamically match demand with fulfilment. "You have all the service demands coming into the [SDN] control system, and you have control at that point of the entire configuration and state of the network," says Janz.
Ciena has added real-time analytics software to the controller prototype to aid such optimisation. "It is piece parts like this that prove the postulated benefits of SDN," he says.
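A toy illustration of the underlying idea: a controller that sees every demand and the whole network state can pack traffic toward a utilisation target that per-box protocols leave unreachable. The figures are invented:

```python
# Illustrative only: central admission of demands onto a link, packing it
# toward the 95 percent target Janz mentions. Not Ciena's analytics.

LINK_CAPACITY_GBPS = 100
demands = [37, 22, 18, 11, 7, 5]   # service demands seen by the controller

carried, utilisation = [], 0
for d in sorted(demands, reverse=True):               # simple greedy packing
    if utilisation + d <= 0.95 * LINK_CAPACITY_GBPS:  # run 'hotter', to 95%
        carried.append(d)
        utilisation += d

print(f"carried {carried}, link at {utilisation} Gbps "
      f"({utilisation / LINK_CAPACITY_GBPS:.0%})")    # -> 95 Gbps (95%)
```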
The platforms used for the network include 4 Terabit core switches with 400 Gigabit packet blades, and optical and Optical Transport Network (OTN) transport using Ciena's 6500 and 5400 converged packet-optical product families, all configured using OpenFlow.
The 2500km network will connect Ciena’s headquarters in Hanover, Maryland with the company's R&D center in Ottawa. International connectivity is provided by Internet2 through the StarLight International/National Communications Exchange in Chicago and CANARIE, Canada's national optical fiber based advanced R&E network.
"If we look ten years down the road, the whole [software] stack - from the bottom of the network to the top of the back office - will look different to what it does today"
Testbed goals
The initial goal is to implement key SDN services and prove use cases. The open testbed will run indefinitely, says Janz: "SDN will unfold and we view the testbed as a standing platform that will change over time with new software and hardware." In effect, the testbed will implement an end-to-end infrastructure whose state is controlled in fine detail by a centralised controller.
Janz cites mass-customised network-as-a-service (NaaS) as one service SDN-in-the-WAN can enable.
Traditional Ethernet connectivity is a static service where the customer requests a given bandwidth and specifies the end points. "It is a very limited template and once it is locked in, the customer generally can't change it," says Janz.
SDN promises more sophisticated connection services. "Instead of defining just the end points, you can define virtual end points," says Janz. All sorts of parameters can then be specified: bandwidth, latency, availability and the restoration required, and these can be changed with time. Moreover, all can be ordered using an application programming interface (API) to the orchestration system at the customer's site.
"It would enable the customer to have many effective service pipes rather than one big one, and resolve and match each of them to a specific application need or flow," says Janz. The customer can then optimise them as the needs of each changes with time.
The benefits to the operator include better meeting the customer's needs, and an ability to charge across multiple service parameters, not just two. "That should be the path to greater revenue," says Janz.
The trick is managing such a system. "Can you price it effectively and know that you are targeting maximum revenue? Can you co-manage all these customer changes while respecting the changing service parameters of each?" says Janz. "You need the critical mass of piece parts to show that such a situation is workable and that, hey, I can make more money with a service like that."
60-second interview with Michael Howard
Infonetics Research has interviewed global service providers regarding their plans for software-defined networking (SDN) and network functions virtualisation (NFV). Gazettabyte asked Michael Howard, co-founder and principal analyst, carrier networks, about Infonetics' findings.
"Data centres are simple when compared to carrier networks"
Michael Howard, Infonetics Research
What is it about SDN and NFV - technologies still in their infancy - that has already convinced 86 percent of operators to plan to deploy the technologies in their optical transport networks?
Michael H: Operators are universally drawn to SDN and NFV for two basic reasons:
1. They want to accelerate revenue by reducing the time to new services and applications.
2. They have operational drivers, which come in two parts:
- Carriers expect software-defined networks to give them a single view across multiple vendors' equipment, network layers and equipment types for mobile backhaul, consumer digital subscriber line (DSL), passive optical network (PON), optical transport, routers, mobile core and Ethernet access. This global view will allow them to provision, monitor and deliver service-level agreements while controlling services, virtual networks and traffic flows in an easier, more flexible and automated way.
- An additional function possible with such a global view across the multi-vendor network is that traffic can be monitored and re-distributed along pathways to make best use of the network. In this way, the network can run 'hotter' and thereby require less equipment, saving capital expenditure (CapEx).
Optical transport networks have a history of being engineered for predictable flows on transport arteries and backbones. Many operators have deployed, or have been experimenting with, GMPLS (Generalized Multi-Protocol Label Switching) and vendor control planes. So it is natural for them to want an industry-standard method of deploying an SDN control plane over the usually multi-vendor transport network.
In our conversations - independent of our survey - we find that several operators believe the biggest bang for the SDN buck is to use SDN as a single control plane over the multiple data layers - router, Ethernet - and the optical transport network.
"The virtualisation of data centre networks has inspired operators who want to apply the same general principles to their oh-so-much-more complex networks"
Early use of SDN has been in the data centre. How will the technologies benefit networks more generally and optical transport in particular?
SDNs were developed initially to solve the operational problems of un-automated networks - that is to say, the slow, labour-intensive manual network changes required as the automated hypervisor moves, adds and changes virtual machines across servers that may be in the same data centre or in multiple data centres.
The virtualisation of data centre networks has inspired operators who want to apply the same general principles to their oh-so-much-more complex networks. Data centres are simple when compared to carrier networks: basically large numbers of servers connected by Ethernet LANs and virtual LANs, with routers separating the LANs that connect the servers.
"It will be many years before SDNs-NFV will be deployed in major parts of a carrier network"
Service provider networks are a set of many different types of network, including consumer broadband, business virtual private networks, optical transport, access/aggregation Ethernet and router networks, mobile core and mobile backhaul. Each comprises multiple layers and almost certainly involves equipment from multiple vendors. This explains why operators are starting their SDN-NFV investigations with small network segments, which we call 'contained domains'. It will be many years before SDN and NFV are deployed in major parts of a carrier network.
You mention small SDN and NFV deployments. What will these early applications look like?
Our survey respondents indicated that intra-datacentre, inter-datacentre, cloud services, and content delivery networks (CDNs) will be the first to be deployed by the end of 2014. Other areas targeted longer term are optical transport, mobile packet core, IP Multimedia Subsystem, and more.
Was there a finding that struck you as significant or surprising?
Yes. A lot of current industry buzz is about optical transport networks, making me think that we'd see SDNs deployed soon. But what we heard from operators is that optical transport networks are further out in their deployment plans. This makes sense in that the Open Networking Foundation working group for transport networks has just recently got their standardisation efforts going, which usually takes a couple of years.
You say that it will be years before large parts or a whole network will be SDN-controlled. What are the main challenges here regarding SDN and will they ever control a whole network?
As I said earlier, carrier networks are complex beasts, and they are carrying revenue-generating services that cannot be risked by deployment of a new set of technologies that make fundamental changes to the way networks operate.
A major problem yet to be resolved, or even addressed much by the industry, is how to add SDN control planes to the router-controlled network that uses the MPLS control plane. SDN and MPLS control planes must cooperate or be coordinated in some way, since they both control the same network equipment - not an easy problem, and probably the thorniest of all the challenges to deploying SDN and NFV.
The study participants rated CDNs, IP multimedia subsystem (IMS), and virtual routers/security gateways as the main NFV applications. At least two of these segments already use servers, so just how impactful will NFV be for operators?
Many operators see that they can deploy NFV in a much simpler way than deploying control plane changes involved with SDNs.
Many network functions have already been virtualised - that is, software-only versions are available - and many more are under development. But these are individual vendor developments, not done according to any industry standards. This means that NFV - network functions run on servers rather than on specialised network equipment such as firewalls, intrusion prevention/detection systems and Evolved Packet Core hardware - is already in motion.
The formalisation of NFV by the carrier-driven ETSI standards group is underway, developing recommendations and standards so that these virtualised network functions can be deployed in a standardised way.
Infonetics interviewed purchase-decision makers at 21 incumbent, competitive and independent wireless operators from EMEA (Europe, Middle East, Africa), Asia Pacific and North America that have evaluated SDN projects or plan to do so. The carriers represent over half (53 percent) of the world's telecom revenue and CapEx.
Arista Networks embeds optics to boost 100 Gig port density
Arista Networks' latest 7500E switch is designed to improve the economics of building large-scale cloud networks.
The platform packs 30 Terabits-per-second (Tbps) of switching capacity into an 11 rack unit (RU) chassis - the same chassis as Arista's existing 7500 switch which, when launched in 2010, was described as capable of supporting several generations of switch design.

"The CFP2 is becoming available such that by the end of this year there might be supply for board vendors to think about releasing them in 2014. That is too far off."
Martin Hull, Arista Networks
The 7500E features new switch fabric and line cards. One of the line cards uses board-mounted optics instead of pluggable transceivers. Each of the line card's ports is 'triple speed', supporting 10, 40 or 100 Gigabit Ethernet (GbE). The 7500E platform can be configured with up to 1,152 10GbE, 288 40GbE or 96 100GbE interfaces.
The switch's Extensible Operating System (EOS) also plays a key role in enabling cloud networks. "The EOS software, run on all Arista's switches, enables customers to build, manage, provision and automate these large scale cloud networks," says Martin Hull, senior product manager at Arista Networks.
Applications
Arista, founded in 2004 and launched in 2008, has established itself as a leading switch player for the high-frequency trading market. Yet this is one market that its latest core switch is not being aimed at.
"With the exception of high-frequency trading, the 7500 is applicable to all data centre markets," says Hull. "That it not to say it couldn't be applicable to high-frequency trading but what you generally find is that their networks are not large, and are focussed purely on speed of execution of their transactions." Latency is a key networking performance parameter for trading.
The 7500E is being aimed at Web 2.0 companies and cloud service providers. The Web 2.0 players include large social networking and on-line search companies. Such players have huge data centres with up to 100,000 servers.
The same network architecture can also be scaled down to meet the requirements of large 'Fortune 500' enterprises. "Such companies are being challenged to deliver private cloud at the same competitive price points as the public cloud," says Hull.
The 7500 switches are typically used in a two-tier architecture. For the largest networks, 16 or 32 switches are used on the same switching tier in an arrangement known as a parallel spine.
A common switch architecture for traditional IT applications such as e-mail and e-commerce uses three tiers of switching. These include core switches linked to distribution switches, typically a pair of switches used in a given area, and top-of-rack or access switches connected to each distribution pair.
For newer data centre applications such as social networking, cloud services and search, the computation requirements result in far greater traffic shared on the same tier of switching, referred to as east-west traffic. "What has happened is that the single pair of distribution switches no longer has the capacity to handle all of the traffic in that distribution area," says Hull.
Customers address east-west traffic by throwing more platforms at the problem. Eight or 16 distribution switches are used instead of a pair. "Every access switch is now connected to each one of those 16 distribution switches - we call them spine switches," says Hull.
The resulting two-tier design, comprising access switches and distribution switches, requires that each access switch has significant bandwidth between itself and any other access switch. As a result, many 7500 switches - 16 or 32 - can be used in parallel at the distribution layer.
Source: Arista Networks
"If I'm a Fortune 500 company, however, I don't need 16 of those switches," says Hull. "I can scale down, where four or maybe two [switches] are enough." Arista also offers a smaller 4-slot chassis as well as the 8 slot (11 RU) 7500E platform.
7500E specification
The switch has a capacity of 30Tbps. When the switch is fully configured with 1,152 10GbE ports, it equates to 23Tbps of duplex traffic. The system is designed with redundancy in place.
"We have six fabric cards in each chassis," says Hull, "If I lose one, I still have 25 Terabits [of switching fabric]; enough forwarding capacity to support the full line rates on all those ports." Redundancy also applies to the system's four power supplies. Supplies can fail and the switch will continue to work, says Hull.
The switch can process 14.4 billion 64-byte packets a second. This, says Hull, is another way of stating the switch capacity while confirming it is non-blocking.
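The quoted figures are easy to sanity-check (vendor numbers, our arithmetic):

```python
# Verifying the headline capacity and redundancy figures quoted above.

ports_10gbe = 1152
duplex_tbps = ports_10gbe * 10e9 * 2 / 1e12        # both directions
print(duplex_tbps)        # 23.04 -> the quoted "23Tbps of duplex traffic"

fabric_cards, total_tbps = 6, 30
print(total_tbps * (fabric_cards - 1) / fabric_cards)  # 25.0 Tbps, one card lost
```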
The 7500E comes with four line card options: three use pluggable optics while the fourth uses embedded optics, as mentioned, based on 12 10Gbps transmit and 12 10Gbps receive channels (see table).
Using line cards supporting pluggable optics provides the customer the flexibility of using transceivers with various reach options, based on requirements. "But at 100 Gigabit, the limiting factor for customers is the size of the pluggable module," says Hull.
Using a CFP optical module, each card supports four 100Gbps ports only. The newer CFP2 modules will double the number to eight. "The CFP2 is becoming available such that by the end of this year there might be supply for board vendors to think about releasing them in 2014," says Hull. "That is too far off."
Arista's board-mounted optics deliver 12 100GbE ports per line card.
The board-mounted triple-speed ports adhere to the IEEE 100 Gigabit SR10 standard, with a reach of 150m over OM4 fibre. The channels can be used discretely for 10GbE, grouped in four for 40GbE, while at 100GbE they are combined as a set of 10.
"At 100 Gig, the IEEE spec uses 20 out of 24 lanes (10 transmit and 10 receive); we are using all 24," says Hull. "We can do 12 10GbE, we can do three 40GbE, but we can still only do one 100Gbps because we have a little bit left over but not enough to make another whole 100GbE." In turn, the module can be configured as two 40GbE and four 10GbE ports, or 40GbE and eight 10GbE.
Using board-mounted optics reduces the cost of 100Gbps line card ports. A full 96-port 100GbE switch configuration achieves a cost of $10k per port, whereas with existing CFP modules the cost is $30k-50k per port, claims Arista.
Arista quotes 10GbE as costing $550 per line card port, not including the pluggable transceiver. At 40GbE this scales to $2,200. For 100GbE, the $10k per port comprises the scaled-up port cost at 100GbE ($2.2k x 2.5) plus the cost of the optics. The power consumption is under 4W per port when the system is fully loaded.
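Working backwards from those figures (the optics cost is inferred, not quoted by Arista):

```python
# Reproducing the quoted per-port economics.
cost_10gbe = 550                  # $/port, line card only
cost_40gbe = cost_10gbe * 4       # -> $2,200, as quoted
port_100gbe = 2200 * 2.5          # -> $5,500 scaled-up 100GbE port cost
optics = 10000 - port_100gbe      # -> $4,500 implied for the optics
print(f"100GbE: ${port_100gbe:,.0f} port + ${optics:,.0f} optics = $10k")
```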
The company uses merchant chips rather than an in-house ASIC for its switch platform. Can't other vendors develop similar performance systems based on the same ICs? "They could, but it is not easy," says Hull.
The company points out that merchant chip switch vendors use a CMOS process node that is typically a generation ahead of state-of-the-art ASICs. "We have high-performance forwarding engines, six per line card, each a discrete system-on-chip solution," says Hull. "These have the technology to do all the Layer 2 and Layer 3 processing." All these devices on one board talk to all the other chips on the other cards through the fabric.
In the last year, equipment makers have decided to bring silicon photonics technology in-house: Cisco Systems has acquired Lightwire while Mellanox Technologies has announced its plan to acquire Kotura.
Arista says it is watching silicon photonics developments with keen interest. "Silicon photonics is very interesting and we are following that," says Hull. "You will see over the next few years that silicon photonics will enable us to continue to add density."
There is a limit to where existing photonics will go, and silicon photonics overcomes some of those limitations, he says.
Extensible Operating System
Arista highlights several characteristics of its switch operating system: EOS is standards-compliant and self-healing, and supports network virtualisation and software-defined networking (SDN).
The operating system implements such protocols as Border Gateway Protocol (BGP) and spanning tree. "We don't have proprietary protocols," says Hull. "We support VXLAN [Virtual Extensible LAN], an open-standards way of doing Layer 2 overlay of [Layer] 3."
EOS is also described as self-healing. The modular operating system is composed of multiple software processes, each process described as an agent. "If you are running a software process and it is killed because it is misbehaving, when it comes back typically its work is lost," says Hull. EOS is self-healing in that should an agent need to be restarted, it can continue with its previous data.
"We have software logic in the system that monitors all the agents to make sure none are misbehaving," says Hull. "If it finds an agent doing stuff that it should not, it stops it, restarts it and the process comes back running with the same data." The data is not packet related, says Hull, rather the state of the network.
The operating system also supports network virtualisation, one aspect being VXLAN. VXLAN is one of the technologies that allows cloud providers to give a customer server resources over a logical network even when the server hardware is distributed over several physical networks, says Hull. "Even a VLAN can be considered as network virtualisation but VXLAN is the most topical."
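For a sense of the mechanism: VXLAN prepends an eight-byte header, carried over UDP, whose 24-bit network identifier (VNI) keeps roughly 16 million logical Layer 2 networks apart on one Layer 3 underlay:

```python
# Packing a VXLAN header: 8 bytes in front of the original Ethernet frame.
import struct

def vxlan_header(vni):
    # Flags byte 0x08 marks the VNI field as valid; the VNI occupies the
    # upper 24 bits of the second 32-bit word.
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(vxlan_header(5000).hex())   # -> '08000000' + '00138800'
```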
Support for SDN is an inherent part of EOS from its inception, says Hull. “EOS is open - the customers can write scripts, they can write their own C-code, or they can install Linux packages; all can run on our switches." These agents can talk back to the customer's management systems. "They are able to automate the interactions between their systems and our switches using extensions to EOS," he says.
"We encompass most aspects of SDN," says Hull. "We will write new features and new extensions but we do not have to re-architect our OS to provide an SDN feature."
Arista is terse about its switch roadmap.
"Any future product would improve performance - capacity, table sizes, price-per-port and density," says Hull. "And there will be innovation in the platform's software.
Nuage Networks uses SDN to tackle data centre networking bottlenecks
Three planes of the network that host Nuage's Virtualised Services Platform (VSP). Source: Nuage Networks
Alcatel-Lucent has set up Nuage Networks, a business venture addressing networking bottlenecks within and between data centres.
The internal start-up combines staff with networking and IT skills, including web-scale services. "You can't solve new problems with old thinking," says Houman Modarres, senior director of product marketing at Nuage Networks. Another advantage of the adopted business model is that Nuage benefits from Alcatel-Lucent's software intellectual property.
"It [the Nuage platform] is a good approach. It should scale well, integrate with the wide area network (WAN) and provide agility"
Joe Skorupa, Gartner
Network bottlenecks
Networking in the data centre connects computing and storage resources. Servers and storage have already largely adopted virtualisation such that networking has now become the bottleneck. Virtual machines on servers running applications can be enabled within seconds or minutes but may have to wait days before network connectivity is established, says Modarres.
Nuage has developed its Virtualised Services Platform (VSP) software, designed to solve two networking constraints.
"We are making the network instantiation automated and instantaneous rather than slow, cumbersome, complex and manual," says Modarres. "And rather than optimise locally, such as parts of the data centre like zones or clusters, we are making it boundless."
"It [the Nuage platform] is a good approach," says Joe Skorupa, vice president distinguished analyst, data centre convergence, data centre, at Gartner. "It should scale well, integrate with the wide area network (WAN) and provide agility."
Resources to be connected can now reside anywhere: within the data centre, and between data centres, including connecting the public cloud to an enterprise's own private data centre. Moreover, removing restrictions as to where the resources are located boosts efficiency.
"Even in cloud data centres, server utilisation is 30 percent or less," says Modarres. "And these guys spend about 60 percent of their capital expenditure on servers."
It is not that the hypervisor, used for server virtualisation, is inefficient, stresses Modarres: "It is just that when the network gets in the way, it is not worthwhile to wait for stuff; you become more wasteful in your placement of workloads as their mobility is limited."

"A lot of money is wasted on servers and networking infrastructure because the network is getting in the way"
Houman Modarres, Nuage Networks
SDN and the Virtualised Services Platform
Nuage's Virtualised Services Platform (VSP) uses software-defined networking (SDN) to optimise network connectivity and instantiation for cloud applications.
The VSP comprises three elements:
- the Virtualised Services Directory,
- the Virtualised Services Controller,
- and the Virtual Routing & Switching module.
The elements each reside at a different network layer, as shown (see chart, top).
The top layer, the cloud services management plane, houses the Virtualised Services Directory (VSD). The VSD is a policy and analytics engine that allows the cloud service provider to partition the network for each customer or group of tenants.
"Each of them get their zones for which they can place their applications and put [rules-based] permissions as to whom can use what, and who can talk to whom," says Modarres. "They do that in user-friendly terms like application containers, domains and zones for the different groups."
Domains and zones are how an IT administrator views the data centre, explains Modarres: "They don't need to worry about VLANs, IP addresses, Quality of Service policies and access control lists; the network maps that through its abstraction." The policies defined and implemented by the VSD are then adopted automatically when new users join.
The layer below the cloud services management plane is the data centre control plane. This is where the second platform element, the Virtualised Services Controller (VSC), sits. The VSC is the SDN controller: the control element that communicates with the data plane using the OpenFlow open standard.
The third element, the Virtual Routing & Switching module (VRS), sits in the data path, on the hypervisor of each server, enabling virtual machines to communicate so that applications can be enabled rapidly. When a virtual machine is instantiated, it is detected by the VRS, which polls the SDN controller to see if a policy has already been set up for the tenant and the particular application. If a policy exists, connectivity is immediate. Moreover, this connectivity is not confined to a single data centre zone but extends across the whole data centre and even between data centres.
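A sketch of that sequence, with invented names and policy fields. Nuage's actual VRS-to-controller exchange runs over OpenFlow; only the logic is illustrated here:

```python
# Hypothetical sketch: a new VM is detected by the vswitch in the
# hypervisor, which asks the SDN controller for the tenant's policy.

POLICIES = {  # set up in advance through the directory (VSD)
    ("tenant-a", "web-app"): {"zone": "dmz", "qos": "gold",
                              "acl": ["allow 443"]},
}

def on_vm_instantiated(tenant, application):
    policy = POLICIES.get((tenant, application))  # VRS polls the controller
    if policy:
        return f"connected instantly into zone '{policy['zone']}'"
    return "no policy: connectivity waits on the administrator"

print(on_vm_instantiated("tenant-a", "web-app"))
```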
More than one data centre is involved in disaster-recovery scenarios, for example. Spanning data centres also boosts overall efficiency, by letting applications use spare resources in other data centres as appropriate.
Meanwhile, the linking to an enterprise's own data centre is done using a virtual private network (VPN), bridging a private data centre with the public cloud. "We are the first to do this," says Modarres.
The VSP works with whatever server, hypervisor, networking equipment and cloud management platform is used in a data centre. The SDN controller is based on the same operating system used in Alcatel-Lucent's IP routers, which supports a wealth of protocols. Meanwhile, the virtual switch in the VRS integrates with the various hypervisors on the market, ensuring interoperability.
Dimitri Stiliadis, chief architect at Nuage Networks, describes the VSP architecture as a distributed implementation of the functions performed by Alcatel-Lucent's router products.
The control plane of the router is effectively moved to the SDN controller. The router's 'line cards' become the virtual switches in the hypervisors. "OpenFlow is the protocol that allows our controller to talk to the line cards," says Stiliadis. "While the border gateway protocol (BGP) is the protocol that allows our controller to talk to other controllers in the rest of the network."
Michael Howard, principal analyst, carrier networks at Infonetics Research, says there are several noteworthy aspects to Nuage's product including the fact that operators participated at the company's launch and that the software is not tied to Alcatel-Lucent's routers but will run over other vendors' equipment.
"It also uses BGP, as other vendors are proposing, to tie together data centres and the carrier WAN," says Howard. "Several big operators say BGP is a good approach to integrate data centres and carrier WANs, including AT&T and Orange."
Nuage says that trials of its VSP began in April. The European and North America trial partners include UK cloud service provider Exponential-e, French telecoms service provider SFR, Canadian telecoms service provider TELUS and US healthcare provider, the University of Pittsburgh Medical Center (UPMC). The product will be generally available from mid-2013.
"There are other key use cases targeted for SDN that are not data centre related: content delivery networks, Evolved Packet Core, IP Multimedia Subsystem, service-chaining and cloudbox"
Michael Howard, Infonetics Research
Challenges
The industry analysts highlight that this market is still in its infancy and that challenges remain.
Gartner's Skorupa points out that the data centre orchestration systems still need to be integrated and that there is a need for cheaper, simpler hardware.
"Many vendors have proposed solutions but the market is in its infancy and customer acceptance and adoption is still unknown," says Skorupa.
Infonetics highlights dynamic bandwidth as a key use case for SDN, particularly between data centres.
"There are other key use cases targeted for SDN that are not data centre related: content delivery networks, Evolved Packet Core, IP Multimedia Subsystem, service-chaining and cloudbox," says Howard.
Cloudbox is a concept being developed by operators where an intelligent general purpose box is placed at a customer's location. The box works in conjunction with server-based network functions delivered via the network, although some application software will also run on the box.
Customers will sign up for different service packages drawn from a menu of firewall, intrusion detection system (IDS), parental control, turbo-button bandwidth bursting and so on, says Howard. Each customer's traffic is guided by SDN and uses Network Functions Virtualisation - network functions such as a firewall or IDS formerly housed in individual equipment - such that the services a user subscribes to are 'chained' using SDN software.
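A toy rendering of the chaining idea - each subscriber's traffic traverses only the virtualised functions in their package. Subscriber and function names are illustrative:

```python
# Illustrative SDN service chaining: steer each subscriber's traffic
# through their subscribed, server-based network functions, in order.

CHAINS = {
    "alice": ["firewall", "parental_control"],
    "bob":   ["firewall", "ids", "turbo_burst"],
}

def steer(subscriber, packet):
    for function in CHAINS[subscriber]:   # ordered chain of virtual functions
        packet = f"{function}({packet})"  # stand-in for the real function
    return packet

print(steer("bob", "pkt"))   # -> turbo_burst(ids(firewall(pkt)))
```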
OFC/NFOEC 2013 industry reflections - Final part
Gazettabyte spoke with Jörg-Peter Elbers, vice president, advanced technology at ADVA Optical Networking about the state of the optical industry following the recent OFC/NFOEC exhibition.

"There were many people in the OFC workshops talking about getting rid of pluggability and the cages and getting the stuff mounted on the printed circuit board instead, as a cheaper, more scalable approach"
Jörg-Peter Elbers, ADVA Optical Networking
Q: What was noteworthy at the show?
A: There were three big themes and a couple of additional ones that were evolutionary. The headlines I heard most were software-defined networking (SDN), Network Functions Virtualisation (NFV) and silicon photonics.
Other themes included what needs to be done in next-generation data centres to drive greater interconnect capacity and switching, how to go beyond 100 Gig, and whether a flexible grid is required.
The consensus is that flex grid is needed if we want to go to 400 Gig and one Terabit. Flex grid gives us the capability to form bigger pipes and get those chunks of signals through the network. But equally it allows not only one interface to transport 400 Gig or 1 Terabit as one chunk of spectrum, but also the possibility to slice and dice the signal so that it can use holes in the network, similar to what radio does.
With the radio spectrum, you allocate slices to establish a communication link. In optics, you have the optical fibre spectrum and you want to get the capacity between Point A and Point B. You look at the spectrum, where the holes [spectrum gaps] are, and then shape the signal - think of it as software-defined optics - to fit into those holes.
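The 'fitting into holes' idea can be sketched as a simple spectrum-assignment routine. The following Python sketch assumes a flex-grid fibre divided into 12.5 GHz slots (a common flex-grid granularity) and a naive first-fit policy; the function names and the policy are illustrative only.

```python
# A minimal flex-grid spectrum-assignment sketch; illustrative only.
import math

SLOT_GHZ = 12.5  # flex-grid slot width

def find_holes(occupied):
    """Return (start_slot, n_slots) for each run of free slots."""
    holes, start = [], None
    for i, busy in enumerate(occupied + [True]):  # sentinel closes last run
        if not busy and start is None:
            start = i
        elif busy and start is not None:
            holes.append((start, i - start))
            start = None
    return holes

def place_signal(occupied, bandwidth_ghz):
    """First-fit placement of a signal into a single spectral hole.

    Returns the start slot, or None if no hole is wide enough (a real
    system could then slice the signal across several holes, as the
    interview describes).
    """
    needed = math.ceil(bandwidth_ghz / SLOT_GHZ)
    for start, length in find_holes(occupied):
        if length >= needed:
            return start
    return None

# Spectrum with two holes: slots 2-4 (three slots) and slots 7-8 free.
spectrum = [True, True, False, False, False, True, True, False, False]
print(place_signal(spectrum, 37.5))  # needs 3 slots -> start slot 2
```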
There is a lot of SDN activity. People are thinking about what it means, and there were lots of announcements, experiments and demonstrations.
At the same time as OFC/NFOEC, the Open Networking Foundation agreed to found an optical transport working group to come up with OpenFlow extensions for optical transport connectivity. At the show, people were looking into use cases, the respective technology and what is required to make this happen.
SDN starts at the packet layer but there is value in providing big pipes for bandwidth-on-demand. Clearly with cloud computing and cloud data centres, people are moving from a localised model to a cloud one, and this adds merit to the bandwidth-on-demand scenario.
This is probably the biggest use case for extending SDN into the optical domain through an interface that can be virtualised and shared by multiple tenants.
"This is not the end of III-V photonics. There are many III-V players, vertically integrated, that have shown that they can integrate and get compact, high-quality circuits"
Q: Network Functions Virtualisation - why was that discussed at OFC?
At first glance, it was not obvious. But looking at it in more detail, much of the infrastructure over which those network functions run is optical.
Just take one Network Functions Virtualisation example: the mobile backhaul space. If you look at LTE/ LTE Advanced, there is clearly a push to put in more fibre and more optical infrastructure.
At the same time, you still have a bandwidth crunch. It is very difficult to have enough bandwidth to the antenna to support all the users and give them the quality of experience they expect.
Putting network functions such as caching at a cell site, deeper within the network, and managing a virtualised session there is an interesting trend that operators are looking at, and one for which we, in partnership with Saguna Networks, have shown a solution.
Caching, firewalling and wide area network (WAN) optimisation are higher-layer functions. But as you virtualise them, the network infrastructure needs to adapt dynamically.
You need orchestration that combines the control and the co-ordination of the networking functions. This is more IT infrastructure - server-based blades and open-source software.
Then you have SDN underneath, supporting changes in the traffic flow with reconfiguration of the network infrastructure.
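A minimal sketch of that two-step sequence - orchestrate the function on IT infrastructure, then have the SDN layer re-steer traffic - might look as follows in Python. All class and method names are hypothetical.

```python
# A toy orchestration sequence; every name here is invented.

class ToySDN:
    def redirect(self, flow, to):
        # In a real network this would reconfigure forwarding state;
        # here we just record the decision.
        print(f"flow {flow} now steered to {to}")

class Orchestrator:
    def __init__(self, sdn_controller):
        self.sdn = sdn_controller
        self.placements = {}  # function name -> site it runs on

    def deploy(self, function, site):
        """Step 1 (IT side): start the function on a server blade at the site."""
        self.placements[function] = site

    def activate(self, function, user_flows):
        """Step 2 (network side): ask the SDN layer to steer flows there."""
        site = self.placements[function]
        for flow in user_flows:
            self.sdn.redirect(flow, to=site)

orc = Orchestrator(ToySDN())
orc.deploy("cache", site="cell-site-42")
orc.activate("cache", user_flows=["video-session-1", "video-session-2"])
```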
Q: There was much discussion about the CFP2 and Cisco's own silicon photonics-based CPAK. Was this the main silicon photonics story at the show?
There is much interest in silicon photonics not only for short reach optical interconnects but more generally, as an alternative to III-V photonics for integrated optical functions.
For light sources and amplification, you still need indium phosphide and you need to think about how to combine the two. But people have shown that even in the core network you can get decent performance at 100 Gig coherent using silicon photonics.
This is an interesting development because such a solution could potentially lower cost, simplify thermal management, and from a fab access and manufacturing perspective, it could be simpler going to a global foundry.
But a word of caution: there is big hype here too. This is not the end of III-V photonics. There are many III-V players, vertically integrated, that have shown that they can integrate and get compact, high-quality circuits.
Q: You mentioned interconnect in the data centre as one evolving theme. What did you mean?
The capacities inside the data centre are growing much faster than the WAN interconnects. That is not surprising: people are trying to do as much as possible inside the data centre because WAN interconnect is expensive.
People are looking increasingly at how to integrate the optics and the server hardware more closely. This is moving beyond the concept of pluggables all the way to mounted optics on the board or even on-chip to achieve more density, less power and less cost.
There were many people in the OFC workshops talking about getting rid of pluggability and the cages and getting the stuff mounted on the printed circuit board instead, as a cheaper, more scalable approach.
"Right now we are running 28 Gig on a single wavelength. Clearly with speeds increasing and with these kind of developments [PAM-8, discrete multi-tone], you see that this is not the end"
Q: What did you learn at the show?
There wasn't anything that was radically new. But there were some significant silicon photonics demonstrations. That was the most exciting part for me although I'm not sure I can discuss the demos [due to confidentiality].
Another area we are interested in revolves around the ongoing IEEE work on short reach 100 Gigabit serial interfaces. The original objective was 2km but they have now homed in on 500m.
PAM-8 - pulse amplitude modulation with eight levels - is one of the proposed solutions; another is discrete multi-tone (DMT). [With DMT] using a set of electrical sub-carriers and doing adaptive bit loading means that even with bandwidth-limited components, you can transmit over the required distances. There was a demo at the exhibition from Fujitsu Labs showing DMT over 2km using a 10 Gig transmitter and receiver.
This is of interest to us as we have a 100 Gigabit direct detection dense WDM solution today and are working on the product evolution.
We use the existing [component/ module] ecosystem for our current direct detect solution. These developments bring up some interesting new thoughts for our next generation.
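As an illustration of the adaptive bit loading mentioned above, the following Python sketch loads more bits onto sub-carriers with good signal-to-noise ratio (SNR) and fewer onto those degraded by bandwidth-limited components. The SNR figures and the SNR-gap constant are invented for illustration, not drawn from any measured link.

```python
# An illustrative adaptive bit-loading sketch for DMT; figures are invented.
import math

SNR_GAP_DB = 9.8  # illustrative SNR gap to capacity for a target error rate

def bits_per_subcarrier(snr_db):
    """Bits loadable on one sub-carrier: floor(log2(1 + SNR/gap))."""
    effective_snr = 10 ** ((snr_db - SNR_GAP_DB) / 10)
    return max(0, int(math.log2(1 + effective_snr)))

# Bandwidth-limited components roll off at high frequency, so the upper
# sub-carriers see a lower SNR; bit loading simply assigns them fewer
# bits and puts more on the clean low-frequency sub-carriers.
snrs_db = [30, 28, 25, 20, 14, 8]  # hypothetical per-sub-carrier SNRs
loading = [bits_per_subcarrier(s) for s in snrs_db]
print(loading, "-> bits per DMT symbol:", sum(loading))
```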
Q: So you can go beyond 100 Gigabit direct detection?
Right now we are running 28 Gig on a single wavelength. Clearly with speeds increasing and with these kind of developments [PAM-8, DMT], you see that this is not the end.
Part 1: Software-defined networking: A network game-changer, click here
Part 2: OFC/NFOEC 2013 industry reflections, click here
Part 3: OFC/NFOEC 2013 industry reflections, click here
Part 4: OFC/NFOEC 2013 industry reflections, click here
OFC/NFOEC 2013 industry reflections - Part 4
Gazettabyte asked industry figures for their views after attending the recent OFC/NFOEC show.

"Spatial domain multiplexing has been a hot topic in R&D labs. However, at this year's OFC we found that incumbent and emerging carriers do not have a near-term need for this technology. Those working on spatial domain multiplexing development should adjust their efforts to align with end-users' needs"
T.J. Xia, Verizon
T.J. Xia, distinguished member of technical staff, Verizon
Software-defined networking (SDN) is an important topic. Looking forward, I expect SDN will involve the transport network so that all layers in the network are controlled by a unified controller to enhance network efficiency and enable application-driven networking.
Spatial domain multiplexing has been a hot topic in R&D labs. However, at this year's OFC we found that incumbent and emerging carriers do not have a near-term need for this technology. Those working on spatial domain multiplexing development should adjust their efforts to align with end-users' needs.
Several things are worth watching. Silicon photonics has the potential to drop the cost of optical interfaces dramatically. Low-cost pluggables such as the CFP2, CFP4 and QSFP28 will change the cost model of client connections. Also, I expect adaptive, DSP-enabled transmission to enable high spectral efficiencies for all link conditions.
Andrew Schmitt, principal analyst, optical at Infonetics Research
The Cisco CPAK announcement was noteworthy because the amount of attention it generated was wildly out of proportion to the product they presented. They essentially built the CFP2 with slightly better specs.
"It was very disappointing to see how breathless people were about this [CPAK] announcement. When I asked another analyst on a panel if he thought Cisco could out-innovate the entire component industry he said yes, which I think is just ridiculous."
Cisco has successfully exploited the slave labour and capital of the module vendors for over a decade and I don't see why they would suddenly want to be in that business.
The LightWire technology is much better used in other applications than modules, and ultimately the CPAK is most meaningful as a production proof-of-concept. I explored this issue in depth in a research note for clients.
It was very disappointing to see how breathless people were about this announcement. When I asked another analyst on a panel if he thought Cisco could out-innovate the entire component industry he said yes, which I think is just ridiculous.
There were also some indications surrounding CFP2 customers that cast doubt on the near-term adoption of the technology, with suppliers such as Sumitomo Electric deciding to forgo development entirely in favour of CFP4 and/ or QSFP.
I think CFP2 ultimately will be successful outside of enterprise and data centre applications but there is not a near-term catalyst for adoption of this format, particularly now that Cisco has bowed out, at least for now.
SDN is a really big deal for data centres and enterprise networking but its applications in most carrier networks will be constrained to a few areas related to multi-layer management.
Within carrier networks, I think SDN is ultimately a catalyst for optical vendors to potentially add value to their systems, and a threat to router vendors as it makes bypass architectures easier to implement.
"Pluggable coherent is going to be just huge at OFC/NFOEC 2014"
Optical companies like ADVA Optical Networking, Ciena and Infinera are pushing the envelope here and the degree to which optical equipment companies are successful is dependent on who their customers are and how hungry these customers are for solutions.
Meanwhile, pluggable coherent is going to be just huge at OFC/NFOEC 2014, followed by QSFP/ CFP4 prototyping and, more importantly, production planning and reliability. Everyone is going to use different technologies to get there and it will be interesting to see what works best.
I also think the second half of 2013 will see an increase in deployment of common equipment such as amplifiers and ROADMs.
Magnus Olson, director, hardware engineering, Transmode
Two clear trends from the conference, affecting quite different layers of the optical networks, are silicon photonics and SDN.
"If you happen to have an indium phosphide fab, the need for silicon photonics is probably not that urgent. If you don't, now seems very worthwhile to look into silicon photonics"
Silicon photonics, deep down in the physical layer, is now emerging rapidly from basic research to first product realisation. Whereas some module and component companies have barely taken the step from lithium niobate modulators to indium phosphide, others already have advanced indium phosphide photonic integrated circuits (PICs) in place.
If you happen to have an indium phosphide fab, the need for silicon photonics is probably not that urgent. If you don't, now seems very worthwhile to look into silicon photonics.
Silicon photonics is a technology that should help take out the cost of optics for 100 Gigabit and beyond, primarily for short distance, data centre applications.
SDN, on the other hand, continues to mature. There is considerable momentum and lively discussion in the research community as well as within the standardisation bodies that could perhaps help SDN to succeed where Generalized Multi-Protocol Label Switching (GMPLS) failed.
Ongoing industry consolidation has reduced the companies to meet and discuss issues with to a reasonable number. The larger optical module vendors all have full portfolios and hence the consolidation will likely slow down for a while. The spirit at the show was quite optimistic, in a very positive, sustainable way.
As for emerging developments, the migration of form factors for 100 Gigabit, from CFP via CFP2 to CFP4 and beyond, is important to monitor and influence from a wavelength-division multiplexing (WDM) vendor point of view.
We should learn from the evolution of the SFP+, originally invented purely for grey data centre applications. Once the form factor is well established and mature, coloured versions start to appear.
If such requirements - power classes, for example - are not properly taken into account from the start of the multi-source agreement (MSA) work, it is not easy to accommodate tunable dense WDM versions in these form factors. Pluggable optics are crucial for cost as well as flexibility, on both the client side and the line side.
Shai Rephaeli, vice president of interconnect products, Mellanox
At OFC, many companies demonstrated 25 Gigabit-per-second (Gbps) prototypes and solutions, both multi-mode and single-mode.
Thus, a healthy ecosystem for 100 Gigabit Ethernet (GbE) and EDR (Enhanced Data Rate) InfiniBand looks to be well aligned with our introduction of new NIC (network interface controller)/ HCA (InfiniBand host channel adapter) and switch systems.
However, these products show a significant increase in power consumption compared with current 10 Gbps and 14 Gbps products, which requires the industry to focus heavily on power optimisation and thermal solutions.
"One development to watch is 1310nm and 1550nm VCSELs"
Standardisation for 25 Gbps single-mode fibre solutions is a big challenge. All the industry leaders have products at some level of development, but each company is driving its own technology. There may be a real interoperability barrier, considering the different technologies: WDM/ 1310nm, parallel, and pulse-amplitude modulation (PAM), which itself may have several flavours: 4, 8 and 16 levels.
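The appeal, and the catch, of multi-level PAM can be seen in a few lines of Python: a PAM-N symbol carries log2(N) bits, but the amplitude levels sit closer together, so each extra bit costs signal-to-noise margin. The sketch below is illustrative only.

```python
# Why multi-level PAM raises the data rate -- and what it costs.
import math

def pam_levels(n):
    """Evenly spaced amplitude levels for PAM-N, normalised to [-1, 1]."""
    return [2 * i / (n - 1) - 1 for i in range(n)]

for n in (4, 8, 16):
    bits = int(math.log2(n))   # bits carried per symbol
    spacing = 2 / (n - 1)      # gap between adjacent levels (eye opening)
    print(f"PAM-{n}: {bits} bits/symbol, level spacing {spacing:.2f}")
```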
One development to watch is 1310nm and 1550nm VCSELs, which can bring data centre/ multi-mode fibre volumes and prices into the mid-reach market. This technology can be important for the new large-scale data centres, which require connections significantly longer than 100m.
Part 1: Software-defined networking: A network game-changer, click here
Part 2: OFC/NFOEC 2013 industry reflections, click here
Part 3: OFC/NFOEC 2013 industry reflections, click here
Part 5: OFC/NFOEC 2013 industry reflections, click here
