What the cable operators are planning for NFV and SDN
Cable operators are working on adding wireless to their fixed access networks using NFV and SDN technologies.
“Cable operators are now every bit as informed about NFV and SDN as the telcos are, but they are not out there talking too much about it,” says Don Clarke, principal architect for network technologies at CableLabs, the R&D organisation serving the cable operators.
Clarke is well placed to comment. While at BT, he initiated the industry collaboration on NFV and edited the original white paper which introduced the NFV concept and outlined the operators’ vision for NFV.
NFV plans
The cable operators are planning developments that build on the Central Office Re-architected as a Datacenter (CORD) initiative being pursued by the wider telecom community. Comcast is one cable operator that has already joined the Open Networking Lab's (ON.Lab) CORD initiative. The aim is to add data centre capability to the cable operators' access networks, onto which wireless will then be added.
CableLabs is investigating adding high-bandwidth wireless to the cable network using small cells, and the role 5G will play. The cable operators use DOCSIS as their broadband access network technology and it is ideally suited for small cells once these become mainstream, says Clarke: “How you overlay wireless on top of that network is probably where there is going to be some significant opportunities in the next few years.”
One project CableLabs is working on aims to help cable operators provision services more efficiently. At present, operators deliver services over several networks: DOCSIS, EPON and, in some cases, wireless. CableLabs has been working for a couple of years on simplifying the provisioning process so that it is agnostic to the underlying networks. “The easiest way to do that is to abstract and virtualize the lower-level functionality; we call that virtual provisioning,” says Clarke.
CableLabs recently published its Virtual Provisioning Interfaces Technical Report on this topic and is developing data models and information models for the various access technologies so that they can be abstracted. The result will be more efficient provisioning of services irrespective of the underlying access technology, says Clarke.
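The idea can be illustrated with a toy abstraction layer. The sketch below is not CableLabs' virtual provisioning interface or its data models; every class and method name is invented for illustration. It simply shows how a single, network-agnostic provisioning call might be mapped onto technology-specific back-ends such as DOCSIS or EPON.

```python
# Hypothetical illustration of access-technology abstraction for provisioning.
# None of these names come from the CableLabs Virtual Provisioning report.
from abc import ABC, abstractmethod


class AccessBackend(ABC):
    """Technology-specific provisioning logic hidden behind a common interface."""

    @abstractmethod
    def provision(self, subscriber_id: str, downstream_mbps: int, upstream_mbps: int) -> str:
        ...


class DocsisBackend(AccessBackend):
    def provision(self, subscriber_id, downstream_mbps, upstream_mbps):
        # In reality this would push a DOCSIS config to the CMTS and cable modem.
        return f"DOCSIS service flow created for {subscriber_id}"


class EponBackend(AccessBackend):
    def provision(self, subscriber_id, downstream_mbps, upstream_mbps):
        # In reality this would configure the OLT/ONU via its management interface.
        return f"EPON link provisioned for {subscriber_id}"


class VirtualProvisioner:
    """Network-agnostic front end: callers never deal with the access technology."""

    def __init__(self, backends):
        self.backends = backends  # maps an access-type label to a backend

    def provision_service(self, subscriber_id, access_type, downstream_mbps, upstream_mbps):
        return self.backends[access_type].provision(
            subscriber_id, downstream_mbps, upstream_mbps)


if __name__ == "__main__":
    vp = VirtualProvisioner({"docsis": DocsisBackend(), "epon": EponBackend()})
    print(vp.provision_service("sub-001", "docsis", 300, 30))
```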
SNAPS
CableLabs is also looking at how to virtualise functionality cable operators may deploy near the edge of their networks.
“As the cable network evolves to do different things and adds more capabilities, CableLabs is looking at the technology platform that would do that,” says Clarke.
To this end, CableLabs has created the SDN-NFV Application development Platform and Stack (SNAPS), which it has contributed to the Open Platform for NFV (OPNFV) group, hosted by the open source organisation, The Linux Foundation.
SNAPS is a reference platform to be located near the network edge, possibly at the cable head-end where cable operators deliver video over their networks. The reference platform makes use of the open source cloud operating system OpenStack and other open source components such as OpenDaylight, and is being used to instantiate virtual network functions (VNFs) dynamically, in real time. “The classic NFV vision,” says Clarke.
CableLabs' Randy Levensalor says one challenge facing cable operators is that, like telcos, they have separate cloud infrastructures for their services and that impacts their bottom line.
“You have one [cloud infrastructure] for business services, one for video delivery and one for IT, and you are operationally less efficient when you have those different stacks,” says Levensalor, lead software architect at CableLabs. “With SNAPS, you bring together all the capabilities that are needed in a reference configuration that can be replicated.”
This platform can support local compute with low latency. “We are not able to say much but there is a longer-term vision for that capability that we’ll develop new applications around,” says Clarke.
Challenges and opportunities
The challenges facing cable operators concerning NFV and SDN are the same as those facing the telcos, such as how to orchestrate and manage virtual networks and do it in a way that avoids vendor lock-in.
“The whole industry wants an open ecosystem where we can buy virtual network functions from one vendor and connect them to virtual network functions and other components from different vendors to create an end-to-end platform with the best capabilities at any given time,” says Clarke.
He also believes that cable operators can move more quickly than telcos because of how they collaborate via CableLabs, their research hub. However, the cable operators' progress is inevitably linked to that of the telcos given they want to use the same SDN and NFV technologies to achieve economies of scale. “So we can’t diverge in the areas that need to be common, but we can move more quickly in areas where the cable network has an inherent advantage, for example in the access network,” says Clarke.
WDM and 100G: A Q&A with Infonetics' Andrew Schmitt
The WDM optical networking market grew 8 percent year-on-year, with spending on 100 Gigabit (100G) now accounting for a fifth of the WDM market. So claims the first quarter 2014 optical networking report from market research firm, Infonetics Research. Overall, the optical networking market was down 2 percent, due to the continuing decline of legacy SONET/SDH.
In a Q&A with Gazettabyte, Andrew Schmitt, principal analyst for optical at Infonetics Research, talks about the report's findings.
Q: Overall WDM optical spending was up 8% year-on-year: Is that in line with expectations?
Andrew Schmitt: It is roughly in line with the figures I use for trend growth, but what is surprising is that there is no longer a fourth-quarter capital expenditure flush in North America followed by a weak first quarter. This still happens in EMEA, but spending in North America, particularly by the Tier-1 operators, is now less tied to the calendar and more to specific project timelines.
This has always been the case at the more competitive carriers. A good example of this was the big order Infinera got in Q1, 2014.
You refer to the growth in 100G in 2013 as breathtaking. Is this growth not to be expected as a new market hits its stride? Or does the growth signify something else?
I got a lot of pushback for aggressive 100G forecasts in 2010 and 2011 when everyone was talking about, and investing in, 40G. You can read a White Paper I wrote in early 2011 which turned out to be pretty accurate.
My call was based on the fact that, fundamentally, coherent 100G shouldn’t cost more than 40G, and that service providers would move rapidly to 100G. This is exactly what has happened, outside of AT&T, NTT and China, which did go big with 40G. But even my aggressive 100G forecasts in 2012 and 2013 were too conservative.
I have just raised my 2014 100G forecast after meeting with Chinese carriers and understanding their plans. 100G will take over almost all new installations in the core worldwide by 2016, and that is when metro 100G will start. There is too much hype around metro 100G right now given the cost, but within two years the price will be right for volume deployment by service providers.
You say the lion's share of 100G revenue is going to five companies: Alcatel-Lucent, Ciena, Cisco, Huawei and Infinera. Most of these companies are North American. Is the growth mainly due to the US market (Huawei aside)? And if so, is it due to Verizon, AT&T and Sprint preparing for growing LTE traffic? Or is the picture more complex, with cable operators, internet exchanges and large data centre players also a significant part of the 100G story, as Infinera claims?
It’s a lot more complex than the typical smartphone plus video-bandwidth-tsunami narrative. Many people like to attach the wireless metaphor to any possible trend because it is the only area perceived as having revenue and profitability growth, and it has a really high growth rate. But something big growing at 35 percent adds more in a year than something small growing at 70 percent.
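To put that point in numbers (the figures below are illustrative, not from the Infonetics report): a large traffic base growing slowly still adds more absolute traffic in a year than a small base growing quickly.

```python
# Illustrative only: absolute growth of a big, slower-growing base vs a small, fast one.
fixed_traffic = 100.0     # arbitrary units for the large, established traffic base
wireless_traffic = 10.0   # much smaller wireless base

fixed_added = fixed_traffic * 0.35       # 35% growth adds 35 units in a year
wireless_added = wireless_traffic * 0.70 # 70% growth adds only 7 units

print(fixed_added, wireless_added)
```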
The reality is that wireless bandwidth, as a percentage of all traffic, is still small. 100G is being used for the long lines of the network today as a more efficient replacement for 10G and, while good quantitative measures don’t exist, my gut tells me it is inter-data-centre traffic and consumer and business traffic to data centres that is driving most of the network growth today.
I use cloud storage for my files. I’m a die-hard Quicken user with 15 years of data in my file. Every time I save that file, it is uploaded to the cloud – 100MB each time. The cloud provider probably shifts that around afterwards too. Apply this to a single enterprise user - think about how much data that is for just one person. There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not.
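A back-of-the-envelope calculation (my own illustrative numbers, not Schmitt's) shows how quickly this kind of uncacheable upload traffic adds up for just one heavy user:

```python
# Illustrative only: uncacheable cloud-storage upload traffic for one heavy user.
file_size_mb = 100    # the whole file is re-uploaded on each save
saves_per_day = 20
working_days = 250

per_user_gb_per_year = file_size_mb * saves_per_day * working_days / 1000.0
print(per_user_gb_per_year)  # 500 GB a year, none of it cacheable
```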
Cisco is in this list yet does not seek much media attention about its 100G. Why is it doing well in the growing 100G market?
Cisco has a slice of customers that are fibre-poor who are always seeking more spectral efficiency. I also believe Cisco won a contract with Amazon in Q4, 2013, but hey, it’s not Google or Facebook so it doesn’t get the big press. But no one will dispute Amazon is the real king of public cloud computing right now.
In the data centre world, there is a sense that the value of specialist hardware is diminishing as commodity platforms - servers and switches - take hold. The same trend is starting in telecoms with the advent of Network Functions Virtualisation (NFV) and software-defined networking (SDN). WDM is specialist hardware and will remain so. Can WDM vendors therefore expect healthy annual growth rates to continue for the rest of the decade?
I am not sure I agree.
There is no reason transport systems couldn’t be white-boxed just like other parts of the network. There is an over-reaction to the impact SDN will have on hardware but there have always been constant threats to the specialist.
Each morning a hardware specialist must wake up and prove to the world that they still need to exist. This is why you see continued hardware vertical integration by some optical companies; good examples are what Ciena has done with partners on intelligent Raman amplification, or what Infinera has done building a tightly integrated offering around photonic integrated circuits for cheap regeneration. Or Transmode, which takes a hacker's approach to optics to offer customers better solutions for specific category-killer applications like mobile backhaul. Or you swing to the other side of the barbell and focus on software, which appears to be Cyan's strategy.
You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider. This is why Cisco and Intel are investing in silicon photonics – they can use this as an edge against commodity white-box assemblers and bare-metal suppliers.
Books in 2013 - Part 1
Gazettabyte is asking various industry figures to highlight books they have read this year and would recommend, both work-related and more general titles.
Part 1:
Tiejun J. Xia (TJ), Distinguished Member of Technical Staff, Verizon
The work-related title is Optical Fiber Telecommunications, Sixth Edition, edited by Ivan Kaminow, Tingye Li and Alan E. Willner. This edition, published in 2013, covers nearly all the latest developments in optical fibre communications.
My non-work-related book is Fortune: Secrets of Greatness by the editors of Fortune Magazine. While published in 2006, the book still sheds light on the 'secrets' of people with significant accomplishments.
Christopher N. (Nick) Del Regno, Fellow, Verizon
OpenStack Cloud Computing Cookbook, by Kevin Jackson is my work-related title. While we were in the throes of interviewing candidates for our open Cloud product development positions, I figured I had better bone up on some of the technologies.
One of those was OpenStack’s cloud computing software. I had seen recommendations for this book and, after reading and using it, I agree. It is a good 'OpenStack for Dummies' book which quickly walks one through setting up an OpenStack-based cloud computing environment. Further, since this is more of a tutorial book, it rightly assumes that the reader will be using a lower-level virtualisation environment (VirtualBox, for example) in which to run the OpenStack hypervisor and virtual machines, which makes single-system simulation of a data centre environment even easier.
Lastly, the fact that it is available as a Kindle edition means it can be referenced in a variety of ways in various physical locales. While this book would work for those interested in learning more about OpenStack and virtualisation, it is better suited to those of us who like to get our hands dirty.
My somewhat work-related suggestions include Brilliant Blunders: From Darwin to Einstein – Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe, by Mario Livio.
I discovered this book while watching Livio’s interview on The Daily Show. I was intrigued by the subject matter, since many of the major discoveries of the past few centuries were accidental (penicillin, radioactivity and semiconductors, for example). However, this book's focus is on the major mistakes made by some of the greatest minds in history: Darwin, Lord Kelvin, Pauling, Hoyle and Einstein.
It is interesting to consider how often pride unnecessarily blinded some of these scientists to contradictions of their own work. Further, this book reinforces my belief in the importance of collaboration and friendly competition. So many key discoveries have been made throughout history when two seemingly unrelated disciplines compared notes.
Another is Beyond the Black Box: The Forensics of Airplane Crashes, by George Bibel. As a frequent flyer and an aviation buff since childhood, I have always been intrigued by the process of accident investigation. This book offers a good exploration of the crash investigation process, with many case studies of various causes. The book explores the science behind the causes and the improvements that resulted from various accidents and related investigations. From the use of rounded openings in the skin (as opposed to square windows), to high-temperature alloys in the engines, to ways to mitigate the impact of crash forces on the human body, the book is a fascinating journey through the lessons learned and the steps taken to avoid having to learn them again.
While enumerating the ways a plane could fail might dissuade some from flying, I found the book reassuring. The application of the scientific method to identifying the cause of, and solution to, airplane crashes has made air travel incredibly safe. In exploring the advances, I’m amazed at the bravery and temerity of early air travelers.
Outside work, my reading includes Doctor Sleep, by Stephen King. The sequel to The Shining follows the little boy, Dan Torrance, as an adult, with Dan's role now reversed as the protective mentor of a young child with a powerful shining.
I also recommend Joyland (Hard Case Crime), by Stephen King. King tries his hand at writing a hard-case crime novel, with great results. Not your typical King – think Stand by Me, Hearts in Atlantis and The Shawshank Redemption.
Andrew Schmitt, Principal Analyst, Optical at Infonetics Research
My work-related reading is Research at Google.
Very little signal comes out of Google on what they are doing and what they are buying. But this web page summarises public technical disclosures and has good detail on what they have done.
There are a lot of pretenders in the analyst community who think they know the size and scale of Google's data centre business, but the reality is that this company is sealed up tight in terms of disclosure. I put something together back in 2007 that tried to size their 10GbE consumption (5,000 10GbE ports a month), but I am the first to admit that getting a handle on the magnitude of their optical networking and enterprise networking business today is a challenge.
Another offending area is software-defined networking (SDN). Pundits like to talk about SDN and how Google implemented the technology in their wide area network but I would wager few have read the documents detailing how it was done. As a result, many people mistakenly assume that because Google did it in their network, other carriers can do the same thing - which is totally false. The paper on their B4 network shows the degree of control and customisation (that few others have) required for its implementation.
I also have to plug a Transmode book on packet-optical networks. It does a really good job of defining what is a widely abused marketing term, but I’m a little biased as I wrote the foreword. It should be released soon.
The non-work-related reading includes Nate Silver’s book, The Signal and the Noise: Why So Many Predictions Fail - but Some Don't. I am enjoying it; I think he approaches the work of analysis the right way. I’m only halfway through but it is a good read so far. The description on Amazon summarises it well.
But some very important books that shaped my thinking are from Nassim Taleb. Fooled by Randomness is by far the best read and the most approachable. If you like that, then go for The Black Swan. Both are excellent and do a fantastic job of outlining the cognitive biases that can result in poor outcomes. It is philosophy for engineers, and you should stop taking market advice from anyone who hasn’t read at least one.
The Steve Jobs biography by Walter Isaacson was widely popular and rightfully so.
A Thread Across the Ocean is a great book about the first trans-Atlantic cable, but it is only for inside folks – don’t talk about it with people outside the industry or you’ll be marked as a nerd.
If you are into crazy infrastructure projects try Colossus about the Hoover Dam and The Path Between the Seas about the Panama Canal. The latter discloses interesting facts like how an entire graduating class of French engineers died trying to build it – no joke.
Lastly, I have to disclose an affinity for some favourite fiction: Brave New World, by Aldous Huxley and The Fountainhead by Ayn Rand.
I could go on.
If anyone reading this likes these books and has more ideas please let me know!

