Mobile fronthaul: A Q&A with LightCounting's John Lively
LightCounting Market Research's report finds that mobile fronthaul networks will use over 14 million optical transceivers in 2014, resulting in a market valued at US $530 million. This is roughly the size of the FTTx market. However, unlike FTTx, sales of fronthaul transceivers will nearly double in the next five years, to exceed $900 million. A Q&A with LightCounting's principal analyst, John Lively.
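A quick figure implied by those headline numbers (my arithmetic, not LightCounting's): the average selling price works out at roughly $38 per transceiver.

```python
# Implied average selling price from the report's headline figures.
market_usd = 530e6   # 2014 fronthaul transceiver market
units = 14e6         # transceivers shipped ("over 14 million")
print(f"~${market_usd / units:.0f} per transceiver")  # ~$38
```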

Q. What is mobile fronthaul?
There is a simple explanation for mobile fronthaul, but that belies how complicated it is.
The equipment manufacturers got together about 10 years ago and came up with the idea to separate the functionality within a base station. The idea is that if you separate the functionality into two parts, you can move some of it to the tower and thereby reduce the equipment, power and space needed in the hut below. That is the distributed base station.
So instead of a large chassis base station, the current equipment comes in two parts: a baseband unit or BBU, which is a smaller rack-mounted unit, and the remote radio unit (RRU), sometimes called the remote radio head, mounted at the top of the tower next to the antennas. The link between the two units is defined as fronthaul.
Q. What role does optics have in mobile fronthaul?
In the old monolithic base station, the connection between the two parts was an inch or two of copper. Once you have half the equipment up on the tower, obviously a few inches of copper is not going to suffice.
They found that copper is a poor choice even if the BBU is at the bottom of the tower. Because the signal between the two is a radio-frequency analogue one, it is not compressed and so has a fairly high bandwidth.
One statistic I saw is that if you use copper cable instead of fibre, the difference between the two just in terms of weight is 13x. And there are things to consider like the wind load and ice load on these towers. So you want small diameter, lightweight cables. So even if there were no considerations of distance, there are basic physical factors that favour fibre for this link. That is the genesis of fronthaul.
But then people realised: we have a fibre connection, we can move the BBU; now we can go tens of kilometres if we want to. Operators can then consider aggregating BBUs in central locations that serve multiple radio macrocells. This is called centralised RAN.
Centralised RAN reduces cost simply by saving real-estate, space and power. With the right equipment, you can also allocate processing capacity dynamically among multiple cells and realise greater efficiencies.
So there are layers of benefits to fronthaul. It starts with simple things like cable weight and ice loading, and extends to annual operating costs and the investment needed in future wireless capacity. Fronthaul is a concept with much to offer.
Q. What is driving mobile fronthaul adoption?
What has brought fronthaul to the fore has been the global deployment of LTE. Fronthaul is not LTE-specific; distributed base station equipment has been available for HSPA and other 3G equipment. But in the last three to four years we have had a massive upgrade in global infrastructure, with many operators installing LTE. That is what has driven the growth in fronthaul, taking it from a niche to a mainstream part of the network.
Q. What are the approaches for mobile fronthaul?
The fronthaul that we have heard about from component vendors is simple point-to-point grey optics links. But let me start by defining CPRI. As part of the development of distributed base stations, a group of equipment vendors defined the way signals would be transmitted between the BBU and the RRU, called the Common Public Radio Interface or CPRI. As part of the specification, they define minimum requirements for the optical links, and they go so far as to say that these can be met with existing optics, including several Fibre Channel devices.
As part of LightCounting's vendor surveys, we know that the predominant mode of implementation of fronthaul today is grey optics. That paints one picture: fronthaul is simple point-to-point grey optics. Some of the largest deployments recently have been of that mode, with China Mobile being the flagship example.
However, grey optics is not the only scheme, and some mobile operators have opted to do it differently.
A competing scheme is simple wavelength-division multiplexing (WDM) - a coarse WDM multi-channel coloured optical system. It is obviously simpler than long-haul: not 80 channels of closely spaced lambdas but systems more like first-generation WDM long-haul of 10 or 15 years ago, using 16 channels.
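For reference, such coarse WDM systems draw their wavelengths from the standard ITU-T G.694.2 CWDM grid. Which 16 of its 18 channels a 16-channel system uses is a vendor choice, so the selection in the sketch below is purely illustrative.

```python
# ITU-T G.694.2 CWDM grid: 18 channels on 20 nm spacing, with nominal
# centre wavelengths running from 1271 nm to 1611 nm.
cwdm_grid_nm = list(range(1271, 1612, 20))
assert len(cwdm_grid_nm) == 18

# A 16-channel fronthaul system uses a subset - here, the lowest 16.
fronthaul_channels_nm = cwdm_grid_nm[:16]
print(fronthaul_channels_nm)
```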
At first glance, it appears that the WDM approach is a next-generation scheme. But that is not the case; it has been deployed. South Korea's SK Telecom used a WDM fronthaul solution when building its LTE network.
Q. Is it clear what operators prefer?
Both schemes have pros and cons. If there is a scarcity of fibre - if you are leasing fibre from a third party, for example - every additional fibre you use costs money. Or you have to deploy new fibre, which is super expensive. Then a WDM solution looks attractive.
Another benefit, which is interesting, is that if you are a third-party provider of fronthaul, such as a tower company or a cable operator that wants to provide fronthaul just as it provides mobile backhaul, you need a demarcation point so that when there is a problem, you can say where your responsibility begins and ends.
There is no demarcation point with point-to-point links; it is just fibre running directly from operator equipment at Point A to Point B. With WDM systems, you have a natural demarcation point: the add/drop nodes where the signals get onto the WDM wavelengths.
For example, a tower may serve three operators. Each operator would then use short-reach grey optics from their RRU to connect to the add/drop node, which may be at the bottom of the tower or on it. Otherwise, when there is a fault, who is responsible? That is another advantage of the WDM scheme.
It is not unlike the situation with fibre-to-the-x: some places have fibre-to-the-home, some fibre-to-the-curb, some fibre-to-the-basement. There are different scenarios, having to do with density, operator environment or regulation, that create different optimal solutions. There is no one-size-fits-all.
Q. What optical modules are used for mobile fronthaul and how will this change over the next five years?
The RRHs typically require 3 or 6 Gigabit-per-second (Gbps) links. These are CPRI standard rates, multiples of a common base rate. In some cases, when the RRUs are loaded up with multiple channels or daisy-chained, you may require 10Gbps.
From our survey data, in 2013 the mix was primarily 3 and 6Gbps devices, and this year we saw a shift away from 3Gbps towards 6 and 10Gbps. We believe that was skewed to some degree by China Mobile, which in many areas is putting up high-capacity LTE systems with multiple channels, unlike many other operators that are doing multi-phase LTE deployments, lighting one channel to start with and adding capacity as needed.
There is also some demand for 12.5Gbps but nothing beyond that, and the 12.5Gbps demand is small and unlikely to grow quickly. That is because the individual RRHs are not going up in capacity. Rather, fronthaul keeps up with bandwidth demand mainly through the proliferation of links rather than by increasing the speed of individual links.
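For reference, the CPRI options behind those loosely quoted rates are sketched below; the figures come from the public CPRI specification, and mapping the interview's "3, 6, 10 and 12.5Gbps" to specific options is my reading rather than the report's.

```python
# CPRI line-rate options (Mbps), per the public CPRI specification.
# Options 1-7 use 8b/10b coding and are multiples of the 614.4 Mbps
# base rate; options 8-9 use 64b/66b coding.
CPRI_RATES_MBPS = {
    1: 614.4,
    2: 1228.8,
    3: 2457.6,
    4: 3072.0,    # the "3Gbps" devices discussed above
    5: 4915.2,
    6: 6144.0,    # the "6Gbps" devices
    7: 9830.4,    # the "10Gbps" devices
    8: 10137.6,
    9: 12165.12,  # the "12.5Gbps" devices
}

for option, rate in CPRI_RATES_MBPS.items():
    print(f"Option {option}: {rate:>9.2f} Mbps ({rate / 614.4:.1f}x base)")
```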
Q. A market nearly doubling in five years, that is a healthy optical component segment?
The growth is good. But like everything in optical components, it is questionable whether vendors will find a way to make it profitable. The technology specifications are not particularly challenging, so you can expect competition to be pretty severe for this market.
We are already seeing several Chinese makers with low manufacturing costs establishing themselves among the top suppliers in this market.
Q. Besides market size, what were other findings of the report?
I do expect WDM systems to become more widespread over the next five years. It makes sense that not everyone will want the brute-force method of a link for every RRU out there. This is probably the biggest area of uncertainty, too: to what extent will WDM catch up with or displace first-generation grey optics?
The other thing to think about is what happens next. LTE deployments are well underway, a bit more than halfway done worldwide. And it will be at least five years before the next big cycle: people are only just starting to talk about 5G. What is fronthaul going to look like in a 5G system?
It is hard to answer that with any clarity because 5G systems are not yet defined. What I find fascinating is that people are talking about multi-service access networks instead of fixed and mobile broadband being separate.
With WDM-PON and other advanced access networks, there is a growing belief that fronthaul could be carried over existing networks rather than having purpose-built fronthaul and backhaul networks. Fronthaul may thus go away and just be a service that tags onto some other networking equipment in the 5-10 year timeframe.
Q. Did any of the findings surprise you?
One is the fact that WDM is being deployed today.
Another is the size of the market: the component revenues are as big as FTTx's. If you think about it, it makes sense: both serve consumers and are similar types of application - one is fixed broadband, the other mobile broadband.
Q. What are the developments to watch in the next few years regarding mobile fronthaul?
Over the next five years, the key thing to watch is the adoption of WDM in lieu of point-to-point grey optics. Beyond that, for the next generation: what fronthaul will be needed in 5G networks?
WDM and 100G: A Q&A with Infonetics' Andrew Schmitt
The WDM optical networking market grew 8 percent year-on-year, with spending on 100 Gigabit now accounting for a fifth of the WDM market. So claims the first quarter 2014 optical networking report from market research firm, Infonetics Research. Overall, the optical networking market was down 2 percent, due to the continuing decline of legacy SONET/SDH.
In a Q&A with Gazettabyte, Andrew Schmitt, principal analyst for optical at Infonetics Research, talks about the report's findings.
Q: Overall WDM optical spending was up 8% year-on-year: Is that in line with expectations?
Andrew Schmitt: It is roughly in line with the figures I use for trend growth, but what is surprising is how there is no longer a fourth-quarter capital expenditure flush in North America followed by a down year in the first quarter. This still happens in EMEA, but spending in North America, particularly by the Tier-1 operators, is now less tied to the calendar and more to specific project timelines.
This has always been the case at the more competitive carriers. A good example of this was the big order Infinera got in Q1, 2014.
You refer to the growth in 100G in 2013 as breathtaking. Is this growth not to be expected as a new market hits its stride? Or does the growth signify something else?
I got a lot of pushback for aggressive 100G forecasts in 2010 and 2011 when everyone was talking about, and investing in, 40G. You can read a White Paper I wrote in early 2011 which turned out to be pretty accurate.
My call was based on the fact that, fundamentally, coherent 100G shouldn't cost more than 40G, and that service providers would move rapidly to 100G. This is exactly what has happened, outside AT&T, NTT and China, which did go big with 40G. But even my aggressive 100G forecasts in 2012 and 2013 were too conservative.
I have just raised my 2014 100G forecast after meeting with Chinese carriers and understanding their plans. 100G will take over almost all new installations in the core by 2016, worldwide, and that is when metro 100G will start. There is too much hype around metro 100G right now given the cost, but within two years the price will be right for volume deployment by service providers.
There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not
You say the lion's share of 100G revenue is going to five companies: Alcatel-Lucent, Ciena, Cisco, Huawei, and Infinera. Most of these companies are North American. Is the growth mainly due to the US market (besides Huawei, of course)? And if so, is it due to Verizon, AT&T and Sprint preparing for growing LTE traffic? Or is the picture more complex, with cable operators, internet exchanges and large data centre players also a significant part of the 100G story, as Infinera claims?
It’s a lot more complex than the typical smartphone plus video-bandwidth-tsunami narrative. Many people like to attach the wireless metaphor to any possible trend because it is the only area perceived as having revenue and profitability growth, and it has a really high growth rate. But something big growing at 35 percent adds more in a year than something small growing at 70 percent.
The reality is that wireless bandwidth, as a percentage of all traffic, is still small. 100G is being used for the long lines of the network today as a more efficient replacement for 10G and, while good quantitative measures don't exist, my gut tells me it is inter-data-centre traffic and consumer/business-to-data-centre traffic driving most of the network growth today.
I use cloud storage for my files. I’m a die-hard Quicken user with 15 years of data in my file. Every time I save that file, it is uploaded to the cloud – 100MB each time. The cloud provider probably shifts that around afterwards too. Apply this to a single enterprise user - think about how much data that is for just one person. There is so much 'blah blah blah' about video but 90 percent is cacheable. Cloud storage is not.
Each morning a hardware specialist must wake up and prove to the world that they still need to exist
Cisco is in this list yet does not seek much media attention about its 100G. Why is it doing well in the growing 100G market?
Cisco has a slice of customers that are fibre-poor who are always seeking more spectral efficiency. I also believe Cisco won a contract with Amazon in Q4, 2013, but hey, it’s not Google or Facebook so it doesn’t get the big press. But no one will dispute Amazon is the real king of public cloud computing right now.
You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider
In the data centre world, there is a sense that the value of specialist hardware is diminishing as commodity platforms - servers and switches - take hold. The same trend is starting in telecoms with the advent of Network Functions Virtualisation (NFV) and software-defined networking (SDN). WDM is specialist hardware and will remain so. Can WDM vendors therefore expect healthy annual growth rates to continue for the rest of the decade?
I am not sure I agree.
There is no reason transport systems couldn't be white-boxed just like other parts of the network. There is an over-reaction to the impact SDN will have on hardware, but there have always been threats to the specialist.
Each morning a hardware specialist must wake up and prove to the world that they still need to exist. This is why you see continued hardware vertical integration by some optical companies; good examples are what Ciena has done with partners on intelligent Raman amplification or what Infinera has done building a tightly integrated offering around photonic-integrated circuits for cheap regeneration. Or Transmode which takes a hacker’s approach to optics to offer customers better solutions for specific category-killer applications like mobile backhaul. Or you swing to the other side of the barbell, and focus on software, which appears to be Cyan’s strategy.
You’ve got to do hard stuff that others can’t easily do or you are just a commodity provider. This is why Cisco and Intel are investing in silicon photonics – they can use this as an edge against commodity white-box assemblers and bare-metal suppliers.
NFV moves from the lab to the network
Dor Skuler
In October 2012, several of the world's leading telecom operators published a document to spur industry action. Entitled Network Functions Virtualisation - Introductory White Paper, the document stressed the many benefits such a telecom transformation would bring: reduced equipment costs, power consumption savings, portable applications, and nimbler service launches.
Eighteen months on, much progress has been made. Operators and vendors have been identifying the networking functions to virtualise on servers, and the impact Network Functions Virtualisation (NFV) will have on the network.
A group within ETSI, the standards body behind NFV, is fleshing out the architectural layers of NFV: the virtual network functions layer, which resides above the management and orchestration layer that oversees the servers distributed in data centres across the network.
In the lab, network functions have been put on servers and then onto servers in the cloud. "Now we are at the start of the execution phase: leaving the lab and moving into first deployments in the network," says Dor Skuler, vice president and general manager of CloudBand, the NFV spin-in of Alcatel-Lucent. Skuler views 2014 as the year of experimentation for NFV. By 2015, there will be pockets of deployments but none at scale; that will start in 2016.
SDN is a simple way for virtual network functions to get what they need from the network through different commands
Deploying NFV in the network and at scale will require software-defined networking (SDN). That is because network functions place unique demands on the network, says Skuler. Because the network functions are distributed, each application must make connections to the different sites on demand. "SDN is a simple way for virtual network functions to get what they need from the network through different commands," he says.
CloudBand's customers include Deutsche Telekom, Telefonica and NTT. Overall, the company says it is involved in 14 customer projects.
CloudBand 2.0
CloudBand has developed a management and orchestration platform, and launched an 'ecosystem' that includes 25 companies. Companies such as Radware and Metaswitch Networks are developing virtual network functions that use the CloudBand platform.
More recently, CloudBand has upgraded its platform, what it calls CloudBand 2.0, and has launched its own virtualised network functions (VNFs) for the Long Term Evolution (LTE) cellular standard. In particular, VNFs for the Evolved Packet Core (EPC), IP Multimedia Subsystem (IMS) and the radio access network (RAN). "These are now virtualised and running in the cloud," says Skuler.
SDN technology from Nuage Networks, another Alcatel-Lucent spin-in, has been integrated into the CloudBand node that is set up in a data centre. The platform also has enhanced management systems. "How to manage the many nodes as a single logical cloud, with a lot of tools that help applications," says Skuler. CloudBand 2.0 has also added support for OpenStack alongside its existing support for CloudStack. OpenStack and CloudStack are open-source cloud platforms.
For the EPC, the functions virtualised are on the network side of the basestation: the Mobility Management Entity (MME), the Serving Gateway and Packet Data Network Gateway (S- and P-Gateways) and the Policy and Charging Rules Function (PCRF).
IMS is used for Voice over LTE (VoLTE). "Operators are looking for more efficient ways of delivering VoLTE," says Skuler. This includes reducing deployment times and improving scalability: growing the service as more users sign up.
The high-frequency parts of the radio access network, typically located in a remote radio head (RRH), cannot be virtualised. What can be virtualised is the baseband unit (BBU). The BBUs run on off-the-shelf servers, in pools up to 40km away from the radio heads. "This allows more flexible capacity allocation to different radio heads and easier scaling and upgrading," says Skuler.
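As a rough check on that 40km figure (my back-of-envelope numbers, not CloudBand's), light in fibre propagates at roughly 5 microseconds per kilometre, so the BBU-RRH separation adds a propagation delay that must fit within the tight fronthaul latency budget:

```python
# Back-of-envelope fronthaul delay, assuming ~5 us per km of fibre.
PROPAGATION_US_PER_KM = 5.0

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay over a fibre fronthaul link."""
    return distance_km * PROPAGATION_US_PER_KM

for km in (10, 20, 40):
    print(f"{km} km: {one_way_delay_us(km):.0f} us one way, "
          f"{2 * one_way_delay_us(km):.0f} us round trip")
# At 40 km the round trip is ~400 us; beyond a few tens of kilometres
# the delay eats into the radio protocol's processing budget, which is
# why BBU pools are kept relatively close to their radio heads.
```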
Skuler points out that virtualising a function is not simply a case of putting a piece of code on a server running a platform such as CloudBand. "The VNF itself needs to go through a lot of change; a big monolithic application needs to be broken up into small components," he says.
"The VNF needs to use the development tools we offer in CloudBand so it can give rules so it can run in the cloud." The VNF also needs to know what key performance indicators to look at, and be able to request scaling, and inform the system when it is unhealthy and how to remedy the situation.
These LTE VNFs are designed to run on CloudBand and on other vendors' platforms. "CloudBand won't be run everywhere which is why we use open standards," says Skuler.
Pros and cons
The benefits from adopting NFV include prompt service deployment. "Today it can take 9-18 months for an operator to scale [a service]," says Skuler. The services, effectively software on servers, can scale more easily, whereas today operators typically have to overprovision to ensure extra capacity is in place.
Less equipment also needs to be kept by operators for maintenance. "A typical North American mobile operator may have 450,000 spare parts," says Skuler - items such as line cards and power supplies. With automation and the use of off-the-shelf servers, the number of spare parts held is typically reduced by a factor of ten.
Services can be scaled and healed, while functionality can be upgraded using software alone. "If I have a new version of IMS, I can test it in parallel and then migrate users; all behind my desk at the push of a button," says Skuler.
The NFV infrastructure - comprising compute, storage, and networking resources - resides at multiple locations: the operator's points of presence. These resources are designed to be shared by applications - VNFs - and it is this sharing of a common pool of resources that is one of the biggest advantages of NFV, says Skuler.
But there are challenges.
"Operating [existing] systems has been relatively simple; if there is a faulty line card, you simply replace it," says Skuler. "Now you have all these virtual functions sitting on virtual machines across data centres and that creates complexities."
An application needs to be aware of this and provide the required rules to the management and orchestration system such as CloudBand. Such systems need to provide the necessary operational tools to operators to enable automated upgrades and automated scaling as well as pinpoint causes of failures.
For example, an IMS core might have 12 tiers. In cloud-speak, a tier is one of a set of virtual machines making up a virtual network function. Examples of a tier include a load balancer, an application or a database server. Each tier consists of one or more virtual machines. Scaling of capacity is enabled by adding or removing virtual machines from a tier.
In a cloud deployment, these linkages between tiers must be understood by the system to allow scaling. Two tiers may be placed in the same data centre to ensure low latency, but a second copy of the tier pair may be placed at a separate site in case one pair goes down. SDN is used to connect the different sites, says Skuler: "All this needs to be explained simply to the system so that it understands it and can execute it".
That, he says, is what CloudBand does.
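To illustrate the kind of description an orchestrator needs, here is a hypothetical sketch - not CloudBand's actual API - of a VNF declaring its tiers, scaling bounds and a co-location constraint, with scaling done by adding or removing virtual machines per tier:

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    """One tier of a VNF: a set of identical virtual machines."""
    name: str
    min_vms: int
    max_vms: int
    vms: int = 0

    def scale(self, delta: int) -> None:
        # Scaling adds or removes VMs within the tier's declared bounds.
        self.vms = max(self.min_vms, min(self.max_vms, self.vms + delta))

@dataclass
class VNF:
    """A virtual network function: tiers plus placement rules."""
    name: str
    tiers: list = field(default_factory=list)
    # Tier pairs that must share a data centre to keep latency low.
    colocate: list = field(default_factory=list)

ims = VNF("ims-core",
          tiers=[Tier("load-balancer", 2, 8, vms=2),
                 Tier("app-server", 2, 24, vms=4),
                 Tier("database", 2, 6, vms=2)],
          colocate=[("app-server", "database")])

ims.tiers[1].scale(+4)  # demand grows: add four app-server VMs
print([(t.name, t.vms) for t in ims.tiers])
```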
See also:
Telcos eye servers and software to meet networking needs, click here
The connected vehicle - driving in the cloud
Cars are already more silicon than steel. As makers add LTE high speed broadband, they are destined to become more app than automobile. The possibilities that come with connecting your car to the cloud are scintillating. No wonder Gil Golan, director at General Motors' Advanced Technical Center in Israel, says the automotive industry is at an 'inflection point'.
"If you put LTE to the vehicle ... you are going to open a very wide pipe and you can send to the cloud and get results with almost no latency"
Gil Golan, General Motors
After a century continually improving the engine, suspension and transmission, car makers are now busy embracing technologies outside their traditional skill sets. The result is a period of unprecedented change and innovation.
Golan cites the use of in-car camera systems to aid driving and parking as an example. "Five years ago almost no vehicle used a camera, whereas now increasing numbers have at least one, a fisheye camera facing backwards." Vehicles offering 360-degree views using five cameras are taking to the road, and such numbers will become the norm in the next five years.
The result is that the automotive industry is hiring people with optics and lens expertise, as well as image processing skills to analyse the images and video the cameras produce. "This is just the camera; the vehicle is going to be loaded with electronics," says Golan.
In 2004 the [automotive] industry crossed the point where, on average, we spend more on silicon than on steel
Moore's Law
Semiconductor advances driven by Moore's Law have already changed the automotive industry. "In 2004 the [automotive] industry crossed the point where, on average, we spend more on silicon than on steel," says Golan.
Moore's Law continues to improve processor and memory performance while driving down cost. "Every small system can now be managed or controlled in a better way," says Golan. "With a processor and memory, everything can be more accurate, more functionality can be built in, and it doesn't matter if it is a windscreen wiper or a sophisticated suspension system."
Current high-end vehicles have over 100 microprocessors. In turn, chip makers are developing 100 Megabit and 1 Gigabit Ethernet physical-layer devices, media access controllers and switching silicon for in-vehicle networking, to link the car's various electronic control units (ECUs).
The growing number of on-board microprocessors is also reflected in the software within vehicles. According to Golan, the Chevrolet Volt has over 10 million lines of code while the latest Lockheed Martin F-35 fighter has 8.7 million. "These are software vehicles on four wheels," says Golan. Moreover, the design of the Chevy Volt started nearly a decade ago.
Car makers must keep vehicles, crammed with electronics and software, updated despite their far longer life cycles compared to consumer devices such as smartphones.
According to General Motors, each car model has its content sealed every four or five years. A car design sealed today may only come on sale in 2016 after which it will be manufactured for five years and remain on the road for a further decade. "A vehicle sealed today is supposed to be updated and relevant through to 2030," says Golan. "This, in an era where things are changing at an unprecedented pace."
As a result, car makers work on ways to keep vehicles updated after the design is complete: during the manufacturing phase and then when the vehicle is on the road, says Golan.
Industry trends
Two key trends are driving the automotive industry:
- The development of autonomous vehicles
- The connected vehicle
Leading car makers are all working towards the self-driving car. Such cars promise far greater safety and more efficient and economical driving. Such a vehicle will also turn the driver into a passenger, free to do other things. Automated vehicles will need multiple sensors coupled to on-board algorithms and systems that can guide the vehicle in real-time.
Golan says camera sensors are now available that see at night, yet some sensors can perform poorly in certain weather conditions and can be confused by electromagnetic fields - the car is a 'noisy' environment. As a result, multiple sensor types will be needed and their outputs fused to ensure key information is not missed.
"Remember, we are talking about life; this is not computers or mobile handsets," says Golan. "If you put more active safety systems on-board, it means you have to have a very solid read on what is going on around you."
The Chevrolet Volt has over 10 million lines of code while the latest Lockheed Martin F-35 fighter has 8.7 million
Wireless
Wireless communications will play a key role in vehicles. The most significant development is the advent of the Long Term Evolution (LTE) cellular standard that will bring broadband to the vehicle.
Golan says there are different perimeters within and around the car where wireless will play a role. The first is within the vehicle, for wireless communication between devices such as a user's smartphone or tablet and the vehicle's main infotainment unit.
Wireless will also enable ECUs to talk to each other, eliminating wiring inside the vehicle. "Wires are expensive, are heavy and impact the fuel economy, and can be a source of different problems: in the connectors and the wires themselves," says Golan.
A second, wider sphere of communication involves linking the vehicle with the immediate surroundings. This could be other vehicles or the infrastructure such as traffic lights, signs, and buildings. The communication could even be with cyclists and pedestrians carrying cellphones. Such immediate environment communication would use short-range communications, not the cellular network.
Wide-area communication will be performed using LTE. Such communication could also be performed over wireline. "If it is an electric vehicle, you can exchange data while you charge the vehicle," says Golan.
This ability to communicate across the network and connect to the cloud is what excites the car makers.
You can talk to the vehicle and the processing can be performed in the cloud
Cloud and Big Data
"If you put LTE to the vehicle, you are showing your customers that you are committed to bringing the best technology to the vehicle, you are going to open a very wide pipe and you can send to the cloud and get results with almost no latency," says Golan.
LTE also raises the interesting prospect of enabling some of the current processing embedded in the vehicle to be offloaded onto servers. "I can control the vehicle from the cloud," says Golan. "You can talk to the vehicle and the processing can be performed in the cloud."
The processing and capabilities offered in the cloud are orders of magnitude greater than what can be done on the vehicle, says Golan: "The results are going to be by far better than what we are familiar with today."
Clearly, pooling and processing information centrally will offer a broader view than any one vehicle can provide, but just what car processing functions can be offloaded is less clear, especially as a broadband link will always depend on the quality of the cellular coverage.
Safety critical systems will remain onboard, stresses Golan, but some of the infotainment and some of the extra value creation will come wirelessly.
Choosing the LTE operator to use is a key decision for an automotive company. "We have to make sure you [the driver] are on a very good network," says Golan. "The service provider has to show us, prove to us [their network], and in some cases we run basic and sporadic tests with our operator to make sure that we do have the network in place."
Automotive companies see opportunity here.
"When you get into a vehicle, there is a new type of behaviour that we know," says Golan. "We know a lot about your vehicle, we know your behaviour while you are driving: your driving style, what coffee you like to drink and your favourite coffee store, and that you typically fill up when you have a half tank and you go to a certain station."
This knowledge - about the car and the driver's preferences when driving - when combined with the cloud, is a powerful tool, says Golan. Car companies can offer an ecosystem that supports the driver. "We can have everything that you need while in the vehicle, served by General Motors," says Golan. "Let your imagination think about the services because I'm not going to tell you; we have a long list of stuff that we work on."
If we don't see that what we work on creates tremendous value, we drop it
General Motors already owns a 'huge' data centre and, being a global company with a local footprint, will use cloud service providers as required.
So automotive is part of the Big Data story? "Oh, big time," says Golan. "Business analytics is critical for any industry including the automotive industry."
Innovation
Given the opportunities new technologies such as sensors, computing, communication and cloud enable, how do automotive companies remain focussed?
"If we don't see that what we work on creates tremendous value, we drop it," says Golan. "We have no time or resources to spend on spinning wheels."
General Motors has its own venture capital arm to invest in promising companies and spends a lot of time talking to start-ups. "We talk to every possible start-up; if you see them for the first time you would say: 'where is the connection to the automotive industry?'," says Golan. "We talk to everybody on everything."
The company says it will always back ideas. "If some team member comes up with a great idea, it does not matter how thin the company is spread, we will find the resources to support that," says Golan.
General Motors set up its research centre in Israel a decade ago and is the only automotive company to have an advanced development centre there, says Golan. "The management had the foresight to understand that the industry is undergoing mega trends and that an entrepreneurial culture - an innovation culture - is critically important for the future of the auto industry."
The company also has development sites in Silicon Valley and several other locations. "This is the pipe that is going to feed you innovation, and to do the critical steps needed towards securing the future of the company," says Golan. "You have to go after the technology."
Further reading:
Google's Original X-Man, click here
Mobile backhaul chips rise to the LTE challenge
The Long Term Evolution (LTE) cellular standard has a demanding set of mobile backhaul requirements. Gazettabyte looks at two different chip designs for LTE mobile backhaul, from PMC-Sierra and from Broadcom.
"Each [LTE Advanced cell] sector will be over 1 Gig and there will be a need to migrate the backhaul to 10 Gig"
Liviu Pinchas, PMC-Sierra
LTE is placing new demands on the mobile backhaul network. The standard, with its use of macro and small cells, increases the number of network end points, while LTE's more efficient bandwidth usage is driving strong mobile traffic growth. Smartphone mobile data traffic is forecast to grow by a factor of 19 globally from 2012 to 2017, a compound annual growth rate of 81 percent, according to Cisco's Visual Networking Index global mobile data traffic forecast.
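Those two figures are mutually consistent, as a quick check shows (the small gap comes from the growth multiplier being rounded to 19):

```python
# Sanity check: 19x growth over the five years 2012-2017 implies a
# compound annual growth rate of 19**(1/5) - 1.
cagr = 19 ** (1 / 5) - 1
print(f"{cagr:.1%}")  # 80.2%; Cisco's 81% reflects an unrounded multiplier
```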
Mobile backhaul links are typically 1 Gigabit. The advent of LTE does not force an automatic upgrade since each LTE cell sector is about 400Mbps, such that with several sectors a 1 Gigabit Ethernet (GbE) link is sufficient. But as the standard evolves to LTE Advanced, the data rate will be 3x higher. "Each sector will be over 1 Gig and there will be a need to migrate the backhaul to 10 Gig," says Liviu Pinchas, director of technical marketing at PMC-Sierra.
One example of LTE's more demanding networking requirements is the need for Layer 3 addressing and routing rather than just Layer 2 Ethernet. LTE base stations, known as eNodeBs, must be linked to their neighbours for call handover between radio cells. To do this efficiently requires IP (IPv6), according to PMC.
The chip makers must also take into account system design considerations.
Equipment manufacturers make several systems for the various backhaul media that are used: microwave, digital subscriber line (DSL) and fibre. The vendors would like common silicon and software that can be used for the various platforms.
Broadcom highlights how reducing the board space used is another important design goal, given that backhaul chips are now being deployed in small cells. An integrated design reduces the total integrated circuits (ICs) needed on a card. A power-efficient chip is also important due to thermal constraints and the limited power available at certain sites.
"Integration itself improves system-level power efficiency," says Nick Kucharewski, senior director for Broadcom’s infrastructure and networking group. "We have taken several external components and integrated them in one device."
WinPath4
PMC's WinPath4 supports existing 2G and 3G backhaul requirements, as well as LTE small and macro cells. A cell-site router that previously served one macrocell will now have to serve one macrocell and up to 10 small cells, says PMC. This means everything is scaled up: a larger routing table, more users and more services.
To support LTE and LTE Advanced, WinPath4 adds programmable packet processors - WinGines - and hardware accelerators to meet new protocol requirements and the greater data throughput.
The previous-generation, 10Gbps WinPath3 has up to 12 WinGines. WinGines are multi-threaded processors, with each thread performing packet processing. Tasks include receiving, classifying, modifying, shaping and transmitting a packet.
The 40Gbps WinPath4 uses 48 WinGines and micro-programmable hardware accelerators for such tasks as packet parsing, packet header extraction and traffic matching, tasks too processing-intensive for the WinGines.
WinPath4 also supports tables with up to two million IP destination addresses, up to 48,000 queues with four levels of hierarchical traffic shaping, encryption engines implementing the IP Security (IPsec) protocol, and the IEEE 1588v2 timing protocol.
Two MIPS processor cores are used for control tasks, such as setting up and removing connections.
WinPath4 also supports the emerging software-defined networking (SDN) approach that aims to enhance network flexibility by making the underlying switches and routers appear as virtual resources. For OpenFlow, the open standard used for SDN, the processor acts as a switching element, with the MIPS cores used to decode the OpenFlow commands.
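To make the switching-element role concrete, here is a schematic sketch - mine, not PMC's programming model - of the match-action behaviour an OpenFlow controller programs into such a device:

```python
# Schematic OpenFlow-style flow table: exact-match fields -> actions.
# Field names and actions are illustrative, not the OpenFlow wire format.
flow_table = [
    {"match": {"in_port": 1, "eth_type": 0x86DD},   # IPv6
     "actions": [("set_queue", 3), ("output", 4)]},
    {"match": {"in_port": 2, "eth_type": 0x8847},   # MPLS
     "actions": [("pop_mpls",), ("output", 1)]},
]

def lookup(packet: dict) -> list:
    """Return the actions of the first flow entry matching the packet."""
    for entry in flow_table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["actions"]
    return [("send_to_controller",)]  # table miss: ask the SDN controller

print(lookup({"in_port": 2, "eth_type": 0x8847}))
```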
StrataXGS BCM56450
Broadcom says its latest device, the BCM56450, will support the transition from 1GbE to 10GbE backhaul links, and the greater number of cells needed for LTE.
The BCM56450 will be used in what Broadcom calls the pre-aggregation network. This is a first level of aggregation in the wireline network that connects the radio access network's macro and small cells.
Pre-aggregation connects to the aggregation network, defined by Broadcom as having 10GbE uplinks and 1GbE downlinks. The BCM56450 meets these requirements but is referred to as a pre-aggregation device since it also supports slower links such as microwave or Fast Ethernet.
The BCM56450 is a follow-on to Broadcom's 56440 device announced two years ago. The BCM56450 upgrades the switching capacity to 100 Gigabit and doubles the size of the Layer 2 and Layer 3 forwarding tables.
The BCM56450 is one of a family of devices offering aggregation, from the edge through to 100GbE links deep in the network.
The network edge BCM56240 has 1GbE links and is designed for small cell applications, microwave units and small outdoor units. The 56450 is next in terms of capacity, aggregating the uplinks from the 240 device or linking directly to the backhaul end points.
The uplinks of the 56450 are 10GbE interfaces and these can be interfaced to the third family member, the BCM56540. The 56540, announced half a year ago, supports 10GbE downlinks and up to 40GbE uplinks.
The largest device, the BCM56640, used in large aggregation platforms takes 10GbE and 40GbE inputs and has the option for 100GbE uplinks for subsequent optical transport or routing. The 56640 is classed as a broadband aggregation device rather than just for mobile.
Features of the BCM56450 include support for MPLS (MultiProtocol Label Switching) and Ethernet OAM (operations, administration and maintenance), QoS and hardware protection switching. OAM performs such tasks as checking the link for faults, as well as performing link delay and packet loss measurements. This enables service providers to monitor the quality of the network's links. The device also supports the 1588 timing protocol used to synchronise cell sites.
Another chip feature is sub-channelisation over Ethernet that allows the multiplexing of many end points into an Ethernet link. "We can support a higher number of downlinks than we have physical serdes on the device by multiplexing the ports in this way," says Kucharewski.
The on-chip traffic manager can also use additional, external memory if the system's packet buffering needs to be increased. Additional buffering is typically required when a 10GbE interface's traffic is streamed to a lower speed 1GbE or Fast Ethernet port, or when the traffic manager is shaping multiple queues that are scheduled out of a lower speed port.
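As a rough sizing illustration of why that external memory helps (my own example figures, assuming a worst-case burst duration, not Broadcom's numbers):

```python
# Buffer needed when a fast ingress drains into a slower egress port:
# the buffer must absorb the rate difference for the burst's duration.
def buffer_megabytes(in_gbps: float, out_gbps: float, burst_ms: float) -> float:
    surplus_bits = (in_gbps - out_gbps) * 1e9 * (burst_ms / 1e3)
    return surplus_bits / 8 / 1e6

# A 10 ms burst arriving at 10GbE and draining at 1GbE needs ~11 MB,
# more than typical on-chip packet memory - hence the external option.
print(f"{buffer_megabytes(10, 1, 10):.1f} MB")
```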
The BCM56450 integrates a dual-core ARM Cortex-A9 processor to configure and control the Ethernet switch and run the control plane software. The chip also has 10GbE serdes enabling the direct interfacing to optical transceivers.
Analysis
The differing nature of the two devices - the WinPath4 is a programmable chip whereas Broadcom's is a configurable Ethernet switch - means that the WinPath4 is more flexible. However, the greater throughput of the BCM56450 - at 100Gbps - makes it more suited to Carrier Ethernet switch router platforms. So says Jag Bolaria, a senior analyst at The Linley Group.
The WinPath4 also supports legacy T1/E1 TDM traffic, whereas Broadcom's BCM56450 supports Ethernet backhaul only.
The Linley Group also argues that the WinPath4 is more attractive for backhaul designers needing SDN OpenFlow support, given the chip's programmability and larger forwarding tables.
The WinPath4 and the BCM56450 are available in sample form. Both devices are expected to be generally available during the first half of 2014.
Further reading:
A more detailed piece on the WinPath4 and its protocol support is in New Electronics. Click here
The Linley Group: Networking Report, "Broadcom focuses on mobile backhaul", July 22nd, 2013. Click here (subscription is required)
Melding networks to boost mobile broadband
In a Q&A, Bryan Kim, manager at SK Telecom's Core Network Lab, discusses the mobile operator's heterogeneous network implementation and the service benefits.
SK Telecom has developed an enhanced mobile broadband service that combines two networks: 3G and Wi-Fi, or Long Term Evolution (LTE) and Wi-Fi. The mobile operator will launch the 3G/Wi-Fi heterogeneous network service in the second quarter of 2012, achieving a maximum data rate of 60 Megabits-per-second (Mbps), while the LTE and Wi-Fi integrated service will be offered in 2013, enabling up to a 100Mbps wireless Internet service.
Q. What exactly has SK Telecom developed?
A. SK Telecom has developed a technology that provides subscribers with a faster data service by using two different wireless networks simultaneously. For instance, customers can enjoy a much faster video streaming service supported by either 3G and Wi-Fi, or LTE and Wi-Fi networks.
To benefit, a handset must use two radio frequencies at the same time. We have also built a system that is installed in the core network for simultaneous transmission.
"If it takes 10s to download a 10MB file using a 3G network and 5s to download the same file using the heterogeneous solution, the impact on the battery life is the same."
Bryan Kim, SK Telecom
Q. LTE-Advanced is standardising heterogeneous networking. This suggests that what SK Telecom has done is pre-standard and proprietary. What have you done that is different to the emerging standard?
A. SK Telecom is not talking about LTE-Advanced technology. This is a technology that enables simultaneous use of heterogeneous wireless networks we’ve deployed.
Q. What are the technical challenges involved in implementing a heterogeneous network?
A. It is technically challenging because it involves using networks with different characteristics in terms of speed and latency. At the same time, the technology is designed to minimise the changes required to the existing networks.
There have not really been challenges in linking the two separate networks, but it is always a challenge to analyse the real-time network status to provide fast data services.
Q. What impact will simultaneous heterogeneous network operation have on a smartphone's battery life?
A. Using the heterogeneous network integration solution does increase the battery consumption: the device is using two radio frequencies. However, from a customer's perspective, if it takes 10s to download a 10MB file using a 3G network and 5s to download the same file using the heterogeneous solution, the impact on the battery life is the same.
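The arithmetic behind that claim, as a quick sketch; the assumption that running two radios roughly doubles the radio power draw is mine, not SK Telecom's:

```python
# Energy = power x time. If two radios draw ~2x the power but finish the
# download in half the time, the energy per download - and hence the
# battery impact - is unchanged.
def download_energy_joules(radio_power_w: float, seconds: float) -> float:
    return radio_power_w * seconds

single = download_energy_joules(1.0, 10.0)  # one radio, 10 s for 10 MB
dual = download_energy_joules(2.0, 5.0)     # two radios, 5 s, same file
print(single == dual)  # True: same energy per download
```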
SK Telecom also plans to apply a scanning algorithm for selecting qualified Wi-Fi networks.
Q. What services can SK Telecom see benefiting from having a 3G/LTE network combined with a Wi-Fi network?
A. Customers will experience greater convenience when using multimedia services and network games, for example, with increased available bandwidth.
Heavy users tend to consume a lot of video services through mobile broadband. With this solution, SK Telecom will be providing faster data services to customers compared to when using only one network. This will enhance the data services market. The company has no plans for now to provide services directly.
Q. What mobile services come close to using 60Mbps or 100Mbps?
A. The 60Mbps and 100Mbps are theoretical maximum speeds. People who sign up for a 100Mbps fixed-line network service rarely experience the 100Mbps speed. With this technology, SK Telecom aims to increase the amount of wireless network resources for subscribers by using two different types of networks in a simultaneous manner, which in turn will boost the services that require wider bandwidth including video streaming service and network games.
Q. With a combination of Wi-Fi and cellular, most operators want to get traffic off the cellular network onto the ‘hot spot’. Does SK Telecom really want to fill their cellular network by providing higher speeds?
A. From the customer’s perspective, a Wi-Fi network offers narrow coverage and small capacity and since it is not a managed network, wireless data access is made upon request from customers. Thus, data offloading often does not work as intended by the mobile carriers.
In contrast, cellular networks provide national coverage, so if there is an available Wi-Fi network to add to the cellular network, we can use the two simultaneously to offer a data service. In doing so, customers will enjoy higher-speed data services and mobile operators will be able to offload data naturally.
Is wireless becoming a valid alternative to fixed broadband?
Are wireless technologies such as Long Term Evolution (LTE) and WiMAX2 closing the gap on fixed broadband?
A recent blog by The Economist discussed how Long Term Evolution (LTE) is coming to the rescue of one of its US correspondents, located 5km from the DSL cabinet and struggling to get a decent broadband service.

Peak rates are rarely achieved: the mobile user needs to be very close to a base station and a large spectrum allocation is needed.
Mark Heath, Unwired Insight
The correspondent makes some interesting points:
- The DSL link offered a download speed of 700kbps at best while Verizon's FiOS passive optical networking (PON) service is not available as an alternative.
- The correspondent upgraded to an LTE handset service that enabled up to eight PCs and laptops to achieve a 15-20x download speed improvement.
The blog suggests that wireless data is becoming fast enough to address users' broadband needs.
But is LTE broadband now good enough? Mark Heath, a partner at telecom consultancy Unwired Insight, is sceptical: "Is the gap between landline and wireless broadband narrowing? I'm not convinced."
Peak wireless rates, in particular LTE's, may suggest that wireless is now a substitute for fixed. But peak rates are rarely achieved: the mobile user needs to be very close to a base station, and a large spectrum allocation is needed.
"While peak rates on mobile look to be increasing exponentially, average throughput per base station and base station capacities are increasing at a much more modest rate," says Heath. Hence the operator and vendor focus on LTE Advanced, as well as much bigger spectrum allocations and the use of heterogenous networks.
The advantage of landline broadband, in contrast, is that its quality does not suffer the degradation of a busy cell. There is much less disparity between peak rates and sustainable average throughputs with fixed broadband.
Fixed may have the advantages, but it still requires operators to make the relevant investment, particularly in rural areas. "Wireless is better than nothing in rural areas," says Heath. But the gap between fixed and mobile isn't shrinking as much as peak data rates suggest.
Yet mobile networks do have a trump card: wide area mobility. With the increasing number of people dependent on smartphones, iPads and devices like the Kindle Fire, an ever increasing value is being placed on mobile broadband.
So if fixed broadband is keeping its edge over wireless, just what future services will drive the need for fixed's higher data rates?
This is a topic to be explored as part of the upcoming next-generation PON feature.
Further reading:
broadbandtrends: The Fixed versus mobile broadband conundrum, click here
2012: A year of unique change
The third and final part on what CEOs, executives and industry analysts expect during the new year, and their reflections on 2011.
Karen Liu, principal analyst, components telecoms, Ovum @girlgeekanalyst

"We’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogenous networks including Wi-Fi."
Last year was a long one. Looking back, it is hard to believe that only one year has elapsed between January 2011 and now.
In fact, looking back it is hard to remember how things looked a year ago: natural disasters were considered rare occurrences. WiMAX’s role was still being discussed, and some viewed TDD LTE as a Chinese peculiarity. For that matter, cloud RAN was another weird Chinese idea. But no matter: China could do anything, given its immunity to economics and to the need for a return on investment.
Femtocells were consumer electronics for the occasional indoor coverage fix, and Wi-Fi was not for carriers.
Only optical could do 100Mbps to the subscriber who, by the way, was moving on to 10 Gig PON in short order. Flexible-spectrum ROADMs meant only Finisar could play, and high port-count wavelength-selective switches had come and gone. 100 Gigabit DWDM took several slots, hadn’t shipped for real, and even the client-side interface was a problem.
As for modules, 40 Gigabit Ethernet (GbE) client was CFP-sized, and high-density 100GbE looked so far away that the non-standard 10x10 MSA was welcomed.
NeoPhotonics was a private company, doing that wacky planar integration thing that works OK for passives but not actives.
Now it feels like we’ve entered the next decade for real: the mobile world is unified around LTE and moving to LTE Advanced, complete with small cells and heterogeneous networks including Wi-Fi.
Optical is one of several ways to do backhaul or PC peripherals. 40GbE, even single-mode, comes in a QSFP package, tunable comes in an SFP — both of which, by the way, use optical integration.
Most optical transport vendors, even metro specialists, have 100 Gigabit coherent in trial stage at least. Thousands of 100 Gig ports and tens of thousands of 40 Gig have shipped.
Flexible spectrum is being standardised and CoAdna went public. The tunable laser start-up phase concluded with Santur finding a home in NeoPhotonics, now a public company.
But we also have a new feeling of vulnerability.
Optical components revenues and margins slid back down. Bad luck can strike twice, with Opnext taking the hit from both the spring earthquake and the fall floods. China turns out not to be immune after all, and time hasn’t automatically healed Europe.
What will happen this year? At this rate, I think we’ll see a lot of news at OFC in a couple of months' time. By then I’ll probably think: "Was it as recently as January when the world looked so different?"
Brian Protiva, CEO of ADVA Optical Networking @ADVAOpticalNews
Last year was an incredible year for networks. In many respects it was a watershed moment. Optical transport took a huge step forward with the genuine availability of 100 Gigabit technologies.
What's even more incredible is that 100 Gigabit emerged in more than the core: we saw 100 Gig metro solutions enter the marketplace. This means that for the first time enterprises and service providers have the opportunity to deploy 100 Gig solutions that fit their needs. Thanks to the development of direct-detection 100 Gig technology, cost is becoming less and less of an issue. This is a game changer.
In 2012, 100 Gig deployments will continue to be a key topic, with more available choices and maturing systems. However, I firmly believe the central focus of 2012 will be automation and multi-layer network intelligence.

"We need to see networks that can effectively govern and optimise themselves."
Talking to our customers and the industry, it is clear that more needs to be done to develop true network automation. There are very few companies that have successfully addressed this issue.
We need to see networks that can effectively govern and optimise themselves; that can automatically deliver bandwidth on demand, monitor and resolve problems before they become service-disrupting, and drive dramatically increased efficiency.
The future of our networks is all about simplicity. Today's complex, inefficient operations can no longer support the continued fierce growth in bandwidth. Streamlined operations are essential if operators are to drive further profitable growth.
I'm excited about helping to make this happen.
Arie Melamed, head of marketing, ECI Telecom @ecitelecom
The momentum of major traffic growth with no proportional revenue increase has continued - even intensified - in 2011. This means operators have to invest in their networks without being able to generate a proportional revenue increase from that investment. We expect to see new business models crop up as operators cope with over-the-top (OTT) services.
To differentiate themselves from the competition, operators must make the network a core part of the end-customer experience. To do so, we expect operators to introduce application awareness in the network, optimising service delivery to avoid network expansions and introduce new revenues.
We also expect operators to offer quality-of-service assurance to end users and content application providers, turning a lose-lose situation around.
Larry Schwerin, CEO of Capella Intelligent Subsystems @CapellaPhotonic
Over 2011, we witnessed demand for broadband access increase at an accelerated rate. Much of this has been fuelled by the continuing mass deployment of broadband access - PON/FTTH, wireless LTE and HFC, to name a few - as well as the ever-increasing adoption of cloud computing, which requires instantaneous broadband access. Video and rich media are a small but growing piece of this equation.
The ultimate effect of this is yet to be felt, as people start to draw more narrowcast rather than broadcast content. The final element will come when upstream content via appliances similar to Sling Media's, as well as the various forms of video conferencing, becomes more widespread. This will lead to more symmetrical bandwidth demand from an upstream perspective.

'Change is definitely in order for the optical ecosystem. The question is how and when?'
Along with this, the issue of falling revenue-per-bit is forcing network operators to develop more cost-effective ways of managing this traffic.
All of the aforementioned is driving demand for higher capacity and more flexible support at the fundamental optical layer.
I believe this will translate into more bits per wavelength, more wavelengths per fibre and, finally, more flexibility for network operators, who will be able to more easily manage traffic at the optical layer. This points to good news for transponder, tunable laser and ROADM/WSS suppliers.
2011 also exposed certain issues within the optical communications sector. Most notably, entering 2011 the financial marketplace was bullish on the optical sector following rapid quarter-on-quarter growth at certain larger optical players. Then the “Ides of March” came, and optical stocks lost as much as 40% of their value when it was deemed there had been a pull-back in demand by a few, but nonetheless important, players in the sector.
Later in the year came the flooding in Thailand, which hampered the production capabilities of many of the optical components players.
Overall margins in the sector remain at unacceptable levels, furthering speculation that things need to change for a more robust environment to exist.
What will 2012 bring?
I believe demand for bandwidth will continue to grow. Data centres will gain more focus as cloud computing continues to gain traction. This will lead to more demand for fundamental technologies in the area of optical transmission and management.
The next phase of wavelength management solutions will start to emerge, both at high port counts (1x20) and at low port counts (1x2, 1x4) for edge applications. More emphasis will be placed on monitoring and control as more complex optical networks are built.
Change is definitely in order for the optical ecosystem. The question is how and when? Will it simply be consolidation? How will vertical integration take shape? How will new technologies influence potential outcomes?
2012 should be a year of unique change.
Terry Unter, president and general manager, optical networks solutions, Oclaro
Discussion of, and progress on, defining next-generation ROADM network architectures was a very important development in 2011. In particular, consensus on the feature requirements and technology choices to enable a more cost-efficient optical network layer emerged amongst the major network equipment manufacturers. Colourless, directionless and, to a significant degree, contentionless are clear goals, while we continue to drive down the cost of the network.

"We expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year."
Coherent detection transponder technology is a critical piece of the puzzle ensuring scalability of network capacity while leveraging a common technology platform. We succeeded in volume production shipments of a 40 Gig coherent transponder and we announced our 100 Gig transponder.
2012 will be an important year for 100 Gig. The availability of 100 Gig transponder modules for deployment will enable a much wider list of system manufacturers to offer their customers more spectrally-efficient network solutions. The interest is universal from metro applications to the long haul and ultra-long haul market segments.
While there is much discussion about 400 Gig and higher rates, standards are in very early stages. The industry as a whole expects 100 Gig to be a key line rate for several years.
As we enter 2012, we expect to see a host of system manufacturers making decisions on 100 Gig supply partners. This should be an exciting year.
For Part 1, click here
For Part 2, click here
Mobile broadband: congestion is inevitable
The table is taken from a recent report by Peter Rysavy of Rysavy Research, entitled Mobile Broadband Capacity Constraints And the Need for Optimization.
The report looks at the huge growth in mobile broadband services and the resulting congestion. It includes a nice model showing how only a few intensive users can consume much of a cell's capacity, and discusses how operators must continue to add wireless capacity while being a lot smarter about the bandwidth consumed by applications.
To see a copy of the report, click here
| Application | Recommended Bandwidth |
| --- | --- |
| Mobile voice call | 6 kbps to 12 kbps |
| Text-based e-mail | 10 to 20 kbps |
| Low-quality music stream | 28 kbps |
| Medium-quality music stream | 128 kbps |
| High-quality music stream | 300 kbps |
| Video conferencing | 384 kbps to 3 Mbps |
| Entry-level, high-speed Internet | 1 Mbps |
| Minimum speed for responsive Web browsing | 1 Mbps |
| Internet streaming video | 1 to 2 Mbps |
| Telecommuting | 1 to 5 Mbps |
| Gaming | 1 to 10 Mbps |
| Enterprise applications | 1 to 10 Mbps |
| Standard-definition TV | 2 Mbps |
| Distance learning | 3 Mbps |
| Basic, high-speed Internet | 5 Mbps |
| High-definition TV | 7.5 to 9 Mbps |
| Multimedia Web interaction | 10 Mbps |
| Enhanced, high-speed Internet | 10 to 50 Mbps, 100 Mbps emerging |
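To illustrate the report's point that a few intensive users can consume much of a cell's capacity, here is a toy model using figures from the table above. The 30 Mbps aggregate cell throughput is an assumption of mine for illustration, not a number from the report.

```python
# Toy congestion model: how many simultaneous users of one application
# an assumed 30 Mbps of aggregate cell throughput can carry.
CELL_CAPACITY_MBPS = 30.0  # assumed for illustration, not from the report

demand_mbps = {
    "Mobile voice call": 0.012,     # upper end of 6-12 kbps
    "High-quality music stream": 0.3,
    "Internet streaming video": 2.0,
    "High-definition TV": 9.0,      # upper end of 7.5-9 Mbps
}

for app, mbps in demand_mbps.items():
    print(f"{app}: ~{CELL_CAPACITY_MBPS / mbps:.0f} users")
# Roughly three HD-TV streams saturate a cell that could carry some
# 2,500 voice calls - hence a few intensive users cause congestion.
```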
Best books of 2009?

Books I'd highlight this year are:
- The Nature of Technology by W Brian Arthur. A look at what technology is and how it advances. This is full of original thinking.
If you want to understand the latest cellular standards - the air interfaces and networking technologies - here are two recommended titles:
- 3G Evolution: HSPA and LTE for Mobile Broadband by Erik Dahlman, Stefan Parkvall, Johan Skold and Per Beming.
- LTE for UMTS: OFDMA and SC-FDMA based radio access by Harri Holma and Antti Toskala
The first title goes into more detail while the second is broader. As such the two are complementary. However, if only one title is to be chosen, it is 3G Evolution.
Click here for more information.
