Mobile fronthaul: A Q&A with LightCounting's John Lively

LightCounting Market Research's report finds that mobile fronthaul networks will use over 14 million optical transceivers in 2014, resulting in a market valued at US$530 million. This is roughly the size of the FTTx market. However, unlike FTTx, sales of fronthaul transceivers will nearly double in the next five years, to exceed $900 million. A Q&A with LightCounting's principal analyst, John Lively.


Q. What is mobile fronthaul?

There is a simple explanation for mobile fronthaul, but it belies how complicated the topic is.

The equipment manufacturers got together about 10 years ago and came up with the idea of separating the functionality within a base station. If you split that functionality into two parts, you can move some of it up the tower and thereby reduce the equipment, power and space needed in the hut below. That is the distributed base station.

So instead of a large chassis base station, the current equipment comes in two parts: the baseband unit (BBU), a smaller rack-mounted unit, and the remote radio unit (RRU), sometimes called the remote radio head (RRH), mounted at the top of the tower next to the antennas. The link between the two units is defined as fronthaul.

Q. What role does optics have in mobile fronthaul?

In the old monolithic base station, the connection between the two parts was an inch or two of copper. Once you have half the equipment up on the tower, obviously a few inches of copper is not going to suffice.

Copper turns out to be a poor choice even if the BBU is at the bottom of the tower. The signal between the two units is the uncompressed radio waveform, so it has a fairly high bandwidth.

One statistic I saw is that if you use copper cable instead of fibre, the difference in weight alone is 13x. And there are things to consider like the wind load and ice load on these towers, so you want small-diameter, lightweight cables. Even if distance were no consideration, there are basic physical factors that favour fibre for this link. That is the genesis of fronthaul.

But then people realised: we have a fibre connection, we can move the BBU; now we can go tens of kilometres if we want to. Operators can then consider aggregating BBUs in central locations that serve multiple radio macrocells. This is called centralised RAN.

Centralised RAN reduces cost simply by saving real estate, space and power. With the right equipment, you can also allocate processing capacity dynamically among multiple cells and realise greater efficiencies.

So there are layers of benefits to fronthaul. It starts with simple things like cable weight and ice loading, and extends to annual operating costs and the investment needed in future wireless capacity. Fronthaul is a concept with much to offer.

 

Q. What is driving mobile fronthaul adoption?

What has brought fronthaul to the fore has been the global deployment of LTE. Fronthaul is not LTE-specific; distributed base station equipment has been available for HSPA and other 3G equipment. But in the last three to four years we have had a massive upgrade in global infrastructure, with many operators installing LTE. It is that upgrade that has driven the growth in fronthaul, taking it from a niche to a mainstream part of the network.

Q. What are the approaches for mobile fronthaul?

The fronthaul that we have heard about from component vendors is simple point-to-point grey optics links. But let me start by defining CPRI. As part of the development of distributed base stations, a group of equipment vendors defined the way signals would be transmitted between the BBU and the RRU, called the Common Public Radio Interface or CPRI. As part of the specification, they define minimum requirements for the optical links, and they go so far as to say that these can be met with existing optics, including several Fibre Channel devices.

From LightCounting's vendor surveys, we know that the predominant mode of implementing fronthaul today is grey optics. That paints one picture: fronthaul is simple point-to-point grey optics. Some of the largest recent deployments have been of that mode, with China Mobile being the flagship example.

However, grey optics is not the only scheme, and some mobile operators have opted to do it differently.

A competing scheme is simple wavelength-division multiplexing (WDM): a coarse WDM, multi-channel, coloured-optics system. It is obviously simpler than long-haul: not 80 channels of closely spaced lambdas, but something more like the first-generation long-haul WDM of 10 or 15 years ago, using 16 channels.

At first glance, it appears that the WDM approach is a next-generation scheme. But that is not the case; it has been deployed. South Korea's SK Telecom used a WDM fronthaul solution when building its LTE network.
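
To make the 16-channel idea concrete, here is a minimal sketch, not taken from the report, of the kind of coarse WDM channel plan such a system could use. The 20nm-spaced grid from 1271nm to 1611nm is the ITU-T G.694.2 CWDM grid; which 16 of its 18 channels a particular fronthaul product uses, and the one-wavelength-per-RRU mapping, are assumptions made purely for illustration.

# Illustrative only: a coarse WDM fronthaul channel plan of the sort described above.
CWDM_GRID_NM = list(range(1271, 1612, 20))  # 1271, 1291, ..., 1611nm (18 channels)

def fronthaul_channel_plan(num_rru_links, grid=CWDM_GRID_NM):
    """Assign one coarse WDM wavelength to each RRU-to-BBU link sharing a fibre."""
    if num_rru_links > len(grid):
        raise ValueError("more RRU links than available CWDM channels")
    return {f"RRU link {i + 1}": f"{grid[i]}nm" for i in range(num_rru_links)}

for link, wavelength in fronthaul_channel_plan(16).items():
    print(link, "->", wavelength)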

Q. Is it clear what operators prefer?

Both schemes have pros and cons. If fibre is scarce, if you are leasing fibre from a third party for example, every additional fibre you use costs money, or you have to deploy new fibre, which is very expensive. In that case a WDM solution looks attractive.

Another benefit, which is interesting, is that if you are a third-party provider of fronthaul, such as a tower company or a cable operator that wants to provide fronthaul just as it provides mobile backhaul, you need a demarcation point so that when there is a problem, you can say where your responsibility begins and ends.

There is no demarcation point with point-to-point links; it is just fibre running directly from the operator's equipment at Point A to Point B. With WDM systems, you have a natural demarcation point: the add/drop nodes where the signals get onto the WDM wavelengths.

For example, a tower may serve three operators. Each operator would then use short-reach grey optics from its RRU to connect to the add/drop node, which may be at the bottom of the tower or on it. Otherwise, when there is a fault, who is responsible? That is another advantage of the WDM scheme.

It is not unlike the situation with fibre-to-the-x: some places have fibre-to-the-home, some fibre-to-the-curb, some fibre-to-the-basement. Differences in density, operator environment and regulation create a different optimal solution for each scenario. There is no one-size-fits-all.

Q. What optical modules are used for mobile fronthaul and how will this change over the next five years?

The RRHs typically require 3 or 6 Gigabits-per-second (Gbps). These are CPRI standard rates, which are multiples of a base rate. In some cases, when the RRUs are loaded up with multiple channels or daisy-chained, you may require 10Gbps.
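
For readers unfamiliar with CPRI, a rough illustration rather than anything from the report: the original CPRI line-rate options are integer multiples of a 614.4Mbps base rate, and the grouping into '3Gbps', '6Gbps' and '10Gbps' transceiver classes below is my shorthand mapping, not the report's (later CPRI options with different line coding are ignored).

# Sketch of how the commonly quoted 3/6/10Gbps figures relate to CPRI line rates.
CPRI_BASE_MBPS = 614.4
CPRI_MULTIPLES = {1: 1, 2: 2, 3: 4, 4: 5, 5: 8, 6: 10, 7: 16}  # option -> multiple of the base rate

for option, multiple in CPRI_MULTIPLES.items():
    rate_mbps = CPRI_BASE_MBPS * multiple
    print(f"CPRI option {option}: {rate_mbps:.1f} Mbps (~{rate_mbps / 1000:.1f} Gbps)")

# Roughly: options 3-4 (~2.5-3.1Gbps) map to '3Gbps' optics, options 5-6 (~4.9-6.1Gbps)
# to '6Gbps' optics, and option 7 (~9.8Gbps) needs a 10Gbps-class link.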

From our survey data, in 2013 the mix was primarily 3 and 6Gbps devices, and this year we saw a shift away from 3Gbps and towards 6 and 10Gbps. We believe that was skewed to some degree by China Mobile, which in many areas is putting up high-capacity LTE systems with multiple channels, unlike many other operators that are doing multi-phase LTE deployments, lighting one channel to start with and adding capacity as needed.

There is also some demand for 12.5Gbps but nothing beyond that, and 12.5Gbps demand is rather small and unlikely to grow quickly. That is because the individual RRHs are not going up in capacity. Fronthaul keeps up with bandwidth demand mainly through a proliferation of links rather than by increasing the speed of individual links.

Q. A market nearly doubling in five years: that is a healthy optical component segment?

The growth is good. But like everything in optical components, it is questionable whether vendors will find a way to make it profitable. The technology specifications are not particularly challenging, so you can expect competition to be pretty severe for this market.

We are already seeing several Chinese makers with low manufacturing costs establishing themselves among the top suppliers in this market.

 

Q. Besides market size, what were other findings of the report?

I do expect WDM systems to become more widespread over the next five years. It makes sense that not everyone will want the brute-force method of a dedicated link for every RRU out there. This is probably the biggest area of uncertainty, too: to what extent will WDM catch up with or displace first-generation grey optics?

The other thing to think about is what happens next. LTE deployments are well underway, a bit more than halfway done worldwide. And it will be at least five years before the next big cycle: people are only just starting to talk about 5G. What is fronthaul going to look like in a 5G system?

It is hard to answer that with any clarity because 5G systems are not yet defined. What I find fascinating is that they are talking about multi-service access networks instead of fixed and mobile broadband being separate.

With WDM-PON and other advanced access networks, there is a growing belief that fronthaul could be carried over existing networks rather than having purpose-built fronthaul and backhaul networks. Fronthaul may thus go away and just be a service that tags onto some other networking equipment in the 5-10 year timeframe.

Q. Did any of the findings surprise you?

One is the fact that WDM is being deployed today.

Another is the size of the market: the component revenues are as big as FTTx. If you think about it, it makes sense: both serve consumers and they are similar types of applications, one delivering fixed broadband and the other mobile broadband.

Q. What are the developments to watch in the next few years regarding mobile fronthaul?

Over the next five years, the key thing to watch is the adoption of WDM in lieu of point-to-point grey optics. Beyond that, for the next generation, what fronthaul will be needed in 5G networks?

 


Achieving 56 Gigabit VCSELs

A Q&A with Finisar's Jim Tatum, director of new product development. Tatum talks about the merits of the vertical-cavity surface-emitting laser (VCSEL) and the challenges of getting VCSELs to work at 56 Gigabits.

Briefing: VCSELs


VCSELs galore! A wafer of 28 Gig devices. Source: Finisar

Q. What are the merits of VCSELs compared to other laser technologies?

VCSELs have been a workhorse for the datacom industry for some 15 years. In that time, around 500 million devices have been deployed for data infrastructure links, with Finisar being a major producer of these VCSELs.

The competition is copper, which means you need to be at a cost that makes such [optical] links attractive. This is where VCSELs have value: they operate at 850nm, which means running on multi-mode fibre.

When coupling VCSELs to multi-mode fibre, the tolerances are in the tens of microns, whereas for single-mode fibre they are around one micron, and that is where the cost is. Also, with VCSELs and multi-mode fibre we don't need optical isolators, which add significant cost to the assemblies. It is not the cost of the laser die itself; the difference between the link [approaches] is the cost of the optics and getting light in and out of the fibre.

There are also advantages to the VCSEL itself: wafer-level testing that allows rapid testing of the die before you commit to further packaging costs. This becomes more important as the VCSEL speed gets higher.

 

Q. What are the differences between 850nm VCSELs and longer-wavelength (1300nm and 1550nm) VCSELs?

At 850nm you are growing devices that are all epitaxial: the laser mirrors and the quantum wells are grown in one shot. At the other wavelengths, it is much harder.

People have managed it at 1300nm but it is not yet proven to be a reliable material system for getting high-speed operation. When you go to 1550nm, you are doing wafer bonding of the mirrors and active regions or you are doing more complex epitaxial processing.

That is where 850nm VCSELs have a nice advantage: the whole thing is done in one shot and the epitaxy and the fabrication are relatively simple. You don't have the complex manufacturing of chip parts that you do at 1550nm.

 

Q. What link distances are served by 850nm VCSELs?

The longest standards are for 500m. As we venture to higher speeds, 28 Gigabits-per-second (Gbps), 100m is more the maximum. And this trend will continue: at 56Gbps I would anticipate less than 50m, maybe 25m.

The good news is that the number of links that become economically viable at those speeds grows exponentially at these shorter distances. Put another way, copper is very challenged at 56Gbps lane rates and we'll see optics and VCSEL technology move inside the chassis for board-to-board and even chip-to-chip interconnects. Such applications will deliver much higher volumes.

 

"Taking that next step - turning the 28Gbps VCSEL into a product - is where all the traps lie"

 

Q. What are the shortest distances?

There are the edge-mounted connections and those are typically 1-5m. There is also a lot of demonstrated work with VCSELs on boards doing chip-to-chip interconnect. That is a big potential market for these devices as well.

Q. The 28Gbps VCSEL has been demonstrated but commercial products are not yet available. Is such a device relatively straightforward to develop, or a challenge?

Achieving a 28Gbps VCSEL is hard. Certainly there have been many companies that have demonstrated a modulation capability at that speed. However, it is one thing to do it one time, another to put a reliable VCSEL product into a transceiver with everything around it.

Taking that next step - turning the 28Gbps VCSEL into a product - is where all the traps lie. That is where the bulk of the work is being done today. Certainly this year there will be 25Gbps/28Gbps products out in customers' hands.

 

"With a VCSEL, you have to fill up a volume of active region with enough carriers to generate photons and you can only put in so many, so fast. The smaller you can make that volume, the faster you can lase."

 

Q. What are the issues that dictate a VCSEL's speed?

When you think about going to the next VCSEL speed, it helps to think about where we came from.

All the devices shipped, from 1 to 10 Gig, had gallium arsenide active regions. Gallium arsenide has lots of wonderful attributes, but one of its less favourable ones is that it is not the highest speed. Going to 14Gbps and 28Gbps, we had to change the active region from gallium arsenide to indium gallium arsenide, and that gives us an enhancement of the differential gain, a key parameter for controlling speed.

What you really care about when you are dealing with speed is how much every incremental bit of current given to the [VCSEL] device translates into gain, or more photons coming out. If you can make that happen more efficiently, then the edge speed of the device increases. In other words, you don't have to deal with other parasitics, such as carriers going into non-radiative recombination centres; everything goes into the production of photons.

With a VCSEL, you have to fill up a volume of active region with enough carriers to generate photons and you can only put in so many, so fast. The smaller you can make that volume, the faster you can lase. 

Differential gain is a measure of how efficiently each additional carrier generates photons. If I can increase that efficiency of making photons, then the transition speed and the edge speed of the laser increase.
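
A standard small-signal laser relation (a textbook result rather than something stated in the interview) ties these points together: the relaxation-oscillation frequency that limits direct modulation rises as the square root of the current above threshold, with a prefactor set by the differential gain and the mode volume,

f_r = D \sqrt{I - I_{\mathrm{th}}}, \qquad D \approx \frac{1}{2\pi}\sqrt{\frac{\eta_i\, v_g\, a}{q\, V_p}} \;\propto\; \sqrt{\frac{a}{V_p}}

where a is the differential gain, V_p the optical mode volume, v_g the group velocity, η_i the injection efficiency and I_th the threshold current. Raising the differential gain or shrinking the volume raises D, which is the point being made here; the square-root dependence on current reappears below.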

The chart plots differential gain on the y-axis against the current density going into the part on the x-axis. The decay tells you that if I'm running really high currents, the differential gain is worse for indium gallium arsenide parts. So you want to operate your device at a carrier density that maximises the differential gain.

Part of that maximisation is using fewer carriers in smaller quantum wells, which moves you up the curve. You want to operate at a lower current density while also doing a better job of converting each carrier into photons.

 

Q. What else besides differential gain dictates VCSEL performance?

The speed of the laser increases above threshold as the square root of the current. That gives you a return on investment for how much current you put into the device.

However, the reliability of the part degrades with the cube of the current you put into it. So you reach a boundary condition: speed varies as the square root of the current while reliability degrades with the cube of the current. Where those two curves cross is where you choose to live in terms of reliability.

That is the trade-off we constantly have to deal with when designing lasers for high speed communications.
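
A minimal numeric sketch of that boundary condition, using the two scaling laws just described: bandwidth growing as the square root of the current above threshold and lifetime falling with the cube of the current. Every constant (D_FACTOR, I_TH, K_WEAR and both targets) is a hypothetical placeholder chosen only to show the shape of the trade-off, not a real device parameter.

import numpy as np

# Illustration only: the sqrt-of-current speed law and cube-of-current wear-out
# follow the description above; all numbers here are made up for the sketch.
D_FACTOR = 7.0       # GHz per sqrt(mA): bandwidth = D_FACTOR * sqrt(I - I_TH)
I_TH = 0.5           # threshold current, mA
K_WEAR = 1.0e5       # lifetime constant: MTTF ~ K_WEAR / I^3 (kilo-hours * mA^3)

BANDWIDTH_TARGET_GHZ = 18.0    # roughly what a 28Gbps NRZ link might ask of the laser
LIFETIME_TARGET_KHRS = 100.0   # required mean time to failure, kilo-hours

currents = np.linspace(1.0, 12.0, 200)            # bias current sweep, mA
bandwidth = D_FACTOR * np.sqrt(currents - I_TH)   # rises as the square root of current
lifetime = K_WEAR / currents**3                   # degrades as the cube of current

usable = (bandwidth >= BANDWIDTH_TARGET_GHZ) & (lifetime >= LIFETIME_TARGET_KHRS)
if usable.any():
    print(f"Bias window meeting both targets: {currents[usable][0]:.1f} to {currents[usable][-1]:.1f} mA")
else:
    print("No bias current satisfies both the speed and the reliability target")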

 

Q. Having explained the importance of this region of operation, what changes in terms of the laser when operating at 28Gbps and at 56Gbps?

At 14Gbps and even at 28Gbps the lasers are directly modulated with little analogue trickery. That said, 28Gbps Fibre Channel does allow you to use equalisation at the receiver.

My feeling today is that at 56Gbps, direct modulation of the laser is going to be pretty tricky. At that speed there is going to have to be dispersion compensation or equalisation built into the optical system.

There are a lot of ways to incorporate analogue or even digital methods so that the device does not need the full bandwidth a raw 56Gbps signal implies. One is a little pre-emphasis and equalisation. Another is to use multiple analogue modulation levels. Alternatively, you can borrow a whole lot more from the digital communications world and look at sub-carrier multiplexing or other more advanced modulation schemes. In other words, pull the required bandwidth of the laser down instead of doing 1, 0 on-off keying. At 56 Gig those things are going to be a requirement.
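
As a rough illustration of pulling the required bandwidth down (not a description of any particular Finisar design), multi-level signalling such as PAM-4 trades signal-to-noise margin for a lower symbol rate; the bandwidth rule of thumb in the comment below is an assumption for illustration.

import math

def required_baud(bit_rate_gbps, levels):
    """Symbol rate needed to carry the bit rate using `levels` amplitude levels."""
    return bit_rate_gbps / math.log2(levels)

for levels, name in [(2, "NRZ on-off"), (4, "PAM-4"), (8, "PAM-8")]:
    baud = required_baud(56, levels)
    # Rule of thumb (an assumption): the laser needs roughly 0.5-0.7x the baud rate in analogue bandwidth.
    print(f"56Gbps with {name}: {baud:.1f} GBd, roughly {0.5 * baud:.0f}-{0.7 * baud:.0f} GHz of laser bandwidth")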

The bottom line is that a 56 Gig part may be something pretty similar to a 28Gbps VCSEL design with the addition of bandwidth-enhancement techniques.

 

"I can see [VCSEL] modulation rates going to 100Gbps"

 

Q. So has VCSEL technology already reached its peak?

In terms of direct modulation of a VCSEL - pushing current into it and generating photons - 28 Gig is a reasonable place to be. And 56 Gig or 40 Gig VCSELs may happen with some electronic trickery around them.

For the next step, and even at 56Gbps, there is a fair amount of investigation of alternative modulation techniques for VCSELs.

Instead of modulating the current in the active region, you can modulate a passive absorber section inside the epitaxial structure. That starts to look like the modulated lasers you see in the telecom industry, but it is all grown epitaxially. Once you are modulating a passive component, the modulation speed can get significantly higher. I can see modulation rates going to 100Gbps, for example.

 

Q. The VCSEL roadmap isn't running out then, but it is getting more complicated. Will it take longer to achieve each device transition: from 28 to 56Gbps, and from 56Gbps to 112Gbps?

That is a difficult question to answer.

The timeline will probably stretch out every time you try to scale the bandwidth. But maybe not, if you are able to do things like combine other technologies at 56Gbps or do things that are more package-related. For example, one way to achieve a 56 Gig link is to multiplex two lasers together on a multi-core fibre. That is a significantly less challenging thing to do, from a technology development point of view, than making lasers fundamentally capable of 56Gbps. Is such a solution cost-optimised? It is hard to say at this point, but it may be time-to-market optimised, at least for the first generation.

Multi-core fibre is one form of spatial-division multiplexing. Another approach is coarse WDM: making lasers at 850nm, 980nm, 1040nm, a whole bunch of different colours, and multiplexing them.

There is more than one way to achieve a total aggregate throughput.
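
The common thread is simple multiplication: aggregate throughput is the per-lane rate times the number of lanes, whether the lanes are fibre cores or wavelengths. A trivial sketch, with lane counts and rates chosen only as examples rather than taken from any product:

def aggregate_gbps(lanes, per_lane_gbps):
    """Total throughput when parallel lanes (fibre cores or wavelengths) are combined."""
    return lanes * per_lane_gbps

# Hypothetical ways of reaching 56Gbps or more without a laser that itself runs at 56Gbps:
print(aggregate_gbps(2, 28))   # two 28Gbps lasers on a two-core fibre   -> 56
print(aggregate_gbps(4, 14))   # four 14Gbps lasers on a coarse WDM grid -> 56
print(aggregate_gbps(4, 28))   # four 28Gbps wavelengths                 -> 112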

 

Q. Does all this make your job more interesting, more stressful, or both?

It means I have options in my job, which is always a good thing.

