MultiPhy unveils 100G single-wavelength PAM-4 chip

A chip to enable 100-gigabit single-wavelength client-side optical modules has been unveiled by MultiPhy. The 100-gigabit 4-level pulse amplitude modulation (PAM-4) circuit will also be a key building block for 400 Gigabit Ethernet interfaces that use four wavelengths.

Source: MultiPhy

Dubbed the MPF3101, the 100-gigabit physical layer (PHY) chip is aimed at such applications as connecting switches within data centres and for 5G cloud radio access network (CRAN).

“The chip has already been sent out to customers and we are heading towards market introductions,” says Avi Shabtai, CEO of MultiPhy.

The MPF3101 will support 100 gigabits over 500m, 2km and 10km reaches.

The IEEE has developed the 100-gigabit 100GBASE-DR standard for 500m while the newly formed 100G Lambda MSA (multi-source agreement) is developing specifications for the 2km 100-gigabit single-channel 100G-FR and the 10km 100G-LR. 

MultiPhy says the QSFP28 will be the first pluggable module to implement a 100-gigabit single-wavelength design using its chip. The SFP-DD MSA, currently under development, will be another pluggable form factor for the single-wavelength 100-gigabit designs. 

 

"The chip has already been sent out to customers and we are heading towards market introductions"

 

400 Gigabit

The 100-gigabit IP will also be a key building block for a second MultiPhy chip for 400-gigabit optical modules needed for next-generation data centre switches that have 6.4 and 12.8 terabits-per-second of capacity. “This is the core engine for all these markets,” says Shabtai.

Companies have differing views as to how best to address the 400-gigabit interconnect market. There is a choice of form factors such as the OSFP, QSFP-DD and embedded optics based on the COBO specification, as well as emerging standards and MSAs.

The dilemma facing companies is what approach will deliver 400-gigabit modules to coincide with the emergence of next-generation data centre switches.

One consideration is the technical risk associated with implementing a particular design. Another is cost, with the assumption that 4-wavelength 400-gigabit designs will be cheaper than 8x50-gigabit based modules but that they may take longer to come to market.

For 400 gigabits, the IEEE 802.3bs 400 Gigabit Ethernet Task Force has specified the 400GBASE-DR4, a 500m-reach specification that sends four 100-gigabit lanes over four parallel single-mode fibres. The 100G Lambda MSA is also working on a 400-gigabit 2km specification based on coarse wavelength-division multiplexing (CWDM), known as 400G-FR4, with work on a 10km reach specification to start in 2018.

 

"We are hearing a lot in the industry about 50-gigabit-per-lambda. For us, this is old news; we are moving to 100-gigabit-per-lambda and we believe the industry will align with us."


And at the ECOC 2017 show, held last week in Gothenburg, another initiative - the CWDM8 MSA - was announced. The CWDM8 is an alternative design to the IEEE specifications that sends eight 50-gigabit non-return-to-zero signals rather than PAM-4 over a fibre.

“We are hearing a lot in the industry about 50-gigabit-per-lambda,” says Shabtai. “For us, this is old news; we are moving to 100-gigabit-per-lambda and we believe the industry will align with us.”

 

Chip architecture

The MPF3101, implemented using a 16nm CMOS process, supports PAM-4 at symbol rates up to 58 gigabaud.

The chip’s electrical input is four 25-gigabit lanes that are multiplexed and encoded into a 50-plus gigabaud PAM-4 signal that is fed to a modulator driver, part of a 100-gigabit single-channel transmitter optical sub-assembly (TOSA). A 100-gigabit receiver optical sub-assembly (ROSA) feeds the received PAM-4 encoded signal to the chip’s DSP before converting the 100-gigabit signal to 4x25 gigabit electrical signals (see diagram).
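To make the gearbox arithmetic concrete, here is a minimal sketch (the 6% FEC overhead is an illustrative assumption, not a MultiPhy figure) of how four 25-gigabit lanes become a 50-plus gigabaud PAM-4 stream:

```python
# Minimal sketch of the 4x25G-to-PAM-4 rate arithmetic. The 6% FEC
# overhead is an assumption for illustration, not MultiPhy's figure.

def pam4_symbol_rate(lane_gbps=25.0, lanes=4, bits_per_symbol=2,
                     fec_overhead=0.06):
    """Return (payload Gbps, PAM-4 symbol rate in gigabaud)."""
    payload = lane_gbps * lanes                 # 4 x 25G = 100 Gbps payload
    line_rate = payload * (1 + fec_overhead)    # rate after FEC encoding
    return payload, line_rate / bits_per_symbol

payload, baud = pam4_symbol_rate()
print(f"{payload:.0f} Gbps -> {baud:.1f} gigabaud PAM-4")
# 100 Gbps -> 53.0 gigabaud, comfortably inside the chip's 58-gigabaud limit
```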

“If you need now only one laser and one optical path [for 100 gigabits] instead of four [25 gigabits optical paths], that creates a significant cost reduction,” says Shabtai.

The advent of a single-wavelength 100-gigabit module promises several advantages to the industry. One is lower cost: estimates MultiPhy is hearing suggest a single-wavelength 100-gigabit module will be half the cost of existing 4x25-gigabit optical modules. Such modules will also enable higher-capacity switches as well as 100-gigabit breakout channels when connected to a 400-gigabit four-wavelength module. Lastly, MultiPhy expects the overall power consumption to be lower.

 

Availability

MultiPhy says the first 100-gigabit single-wavelength QSFP28s will appear sometime in 2018.

The company is being coy as to when it will have a 400-gigabit PAM-4 chip but it points out that by having working MPF3101 silicon, it is now an integration issue to deliver a 4-channel 400-gigabit design.

As for the overall market, new high-capacity switches using 400-gigabit modules will start to appear next year. The sooner four-channel 400-gigabit PAM-4 silicon and optical modules appear, the less opportunity there will be for eight-wavelength 400-gigabit designs to gain a market foothold.

“That is the race we are in,” says Shabtai.


Reflections and predictions: 2011 & 2012 - Part 1

Gazettabyte has asked industry analysts, CEOs, executives and commentators to reflect on the last year and comment on developments they most anticipate for 2012.

 

"For 2012, the macroeconomy is likely to dominate any other developments"

 

 

 

 

 

 

 

Martin Geddes, telecom consultant @martingeddes

Sometimes the important stuff is slow-burning: we're seeing a continued decline in the traditional network equipment providers, and the rise of Genband, Acme, Sonus and Metaswitch in their place. Smaller, leaner, and more used to serving Tier 2 and Tier 3 operators and enterprise players with their lower cost structures.

The recognition of the decline of SMS and telephony became mainstream in 2011 -- maybe I can close down my Telepocalypse blog as what I foresaw is reality. 

We've seen absolute declines in revenue and usage of telco voice and messaging in leading markets like Norway and the Netherlands. The creation of Telefonica Digital is a landmark reorganisation around new markets. No longer are those initiatives endlessly parked in business development whilst marketing dream up a new price plan for minutes, messages and megabytes.

If I had to pick one thing to characterise 2011, it was the year of the App.

For 2012, the macroeconomy is likely to dominate any other developments. The scenarios are "distress", "meltdown" and "collapse".

Telecoms is well-placed to weather the storm. Even £600 smartphones may remain in vogue as people defer purchases like cars and holidays, and hide their fiscal distress with status symbols hewn out of pure blocks of profit. 

Voice will be much more prominent, after decades of languishing, as LTE sets up a complex dynamic of service innovation driven by over-the-top applications - which will increasingly come from telcos as well as telecoms outsiders. Microsoft's purchase of Skype is the one to watch - if they get it right, it joins Windows and Office in the hall of fame; get it wrong, and Microsoft is probably out of the smartphone game due to a lack of competitive differentiation and advantage.

So 2012 is the year when (mobile) voice gets vocal again - because we're going to have a lot to talk about, and want to do it much cheaper and better.

 

Brandon Collings, CTO for communications and commercial optical products at JDS Uniphase

Over the course of 2011, the tunable XFP shipped in volume and rather quickly supplanted the 300-pin transceiver. On the service/market side, over-the-top consumer video (Netflix) grew rapidly to become the dominant traffic on the internet.

 

"Solutions for the next generation ROADM networks - self aware networks - are now firm"

 

I expect the maturation of 100 Gigabit to continue through 2012 with the introduction of a number of new 100 Gigabit solutions, both at the network equipment and transceiver levels.

Also, the percentage of consumers using over-the-top video still seems relatively small, yet it is growing strongly and is already the dominant traffic on the internet. It will be interesting to see how this trend continues, as it strongly drives bandwidth yet comes with potentially unfavorable revenue models for the network operators who need to deliver it.

Lastly, I expect that as the solutions for the next-generation ROADM networks - self-aware networks - are now firm, the practical assessment of the value and advantages of these networks can take place quantitatively.

 

Eve Griliches, managing partner, ACG Research @EveGr

The Juniper PTX announcement really caught the market by surprise. I'm not sure why, but clearly it rocked some folks back on their heels. Momentum for the product has been good as well. I think you can count this as a success story.

Another one is the Infinera 500Gbps release with super-channels. A pretty impressive technology, and service providers are waiting for the final product to test.

The death of Steve Jobs rattled us all. I think it struck a note for everyone in how different he was and how he touched us all.  

 

"Content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening."

 

I continue to be amazed at how much optical equipment content providers [the Googles, Facebooks, MSNs of this world] are deploying and how few folks at the vendor level are doing anything about getting into their networks.  Maybe that is a 2012 thing, I don't know.

As for 2012, we'll definitely see some mergers and acquisitions - expect low acquisition prices too - and some companies exiting this market.  I love optics and it really pains me to say that, but there are just more companies out there who can't support the declining margins. I think margin erosion will be key to who survives.  

Cisco and Infinera should be bringing some cool products to market in the next six months. We hope the products are good because it will generate debate for the final vendor choices for operators such as AT&T and Verizon. 

Again, content providers ask for simple, scalable and low-featured products. Those who deliver will be rewarded for listening. Some don't listen, and will wonder what happened.

 

Peter Jarich, service director, service provider infrastructure, mobile ecosystem, Current Analysis @pnjarich

2012 is going to be the year for LTE-Advanced (LTE-A). Why? One, vendors always like to talk up what’s next, and LTE-A is what follows LTE (Long Term Evolution). 

At the same time, operators who haven’t yet deployed LTE will want to look to start with the latest and greatest. Of course, LTE-A brings real advances for operators: carrier aggregation for dealing with fragmented spectrum assets; heterogeneous networks for dealing with the interaction of small cell and macrocell networks; relaying for improved cell edge performance.

 

Avi Shabtai, CEO of MultiPhy

The most significant development of 2011 was the availability of CMOS technology that allows next-generation optical transport solutions for 100 Gigabit. And specifically, metro-focused solutions that hit the cost and power numbers required by this industry.

On top of that, optical communication has entered the era of digital signal processing receivers. We have also seen the potential segmentation in 100 Gigabit of metro versus long-haul, each with its specific set of solutions.

 

"We will see a huge growth in video consumption. This has already started but it is just the tip of the iceberg."

 

The transition of the telecom and datacom market to 100 Gigabit has also begun - from the transport optical network all the way to copper backplanes - it's all a 4x25Gbps architecture. This year has also seen consolidation in the ecosystem, especially among module companies.

This consolidation will continue at all industry levels in 2012: semiconductors, subsystems, systems and the carriers. The consolidation will coincide with an across-the-board price reduction in emerging technologies like 100 Gigabit transport.

The increase in capacity demand will also force an increase in requirements for various solutions supporting 100 Gigabit. I expect to see more CMOS-based devices introduced.

From a services point of view, we will see a huge growth in video consumption. This has already started but it is just the tip of the iceberg. Video will have a tremendous influence on network evolution.

 

Gilles Garcia, director, wired communication at Xilinx @gllsgarcia

The CFP2 and CFP4 optical modules are arriving a lot faster than it took for the CFP to follow the XFP optical module. 

The CFP standard took 3-4 years to complete while the standard for the CFP2 closed after just two years. Now the CFP4 standard has been launched and is expected to take only 18 months. The new form factors are being driven by the cost-per-port of 100 Gigabit and how to reduce it. The CFP2 doubles the density compared to the CFP while the CFP4 doubles it again.

 

"Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and deployment of new services."

 

Telecom application-specific standard product (ASSP) players have been relatively quiet in 2011. Word from customers is that such vendors are pushing out their roadmap/product availability because of too much flux in the various IEEE and ITU-T telecom standards and the difficulty of justifying the return-on-investment. This is proving a perfect opportunity for FPGAs.

Large system vendors are growing their network services as operators continue to outsource their network management and maintenance. As reported in their financial reports, this is an important source of business for the likes of Ericsson, Huawei and Alcatel-Lucent. 

It is leading the vendors to push more of their own hardware, as they look to add value-added services and integrate the services using their own platforms. Some equipment vendors realise they do not have a full portfolio and have established partnerships for the missing platforms. They are also starting to develop platforms to generate more revenue.

In 2012, I’m not expecting a telecom revolution but I do expect accelerated evolution. And I foresee big disruptions in the ASSP market as it continues to consolidate: I expect several mergers and acquisitions among the top 20 ASSP suppliers.

Programmability is becoming the key trend among telecom system vendors as operators look to react faster to standards, new feature requests and deployment of new services. Programmability also improves time-to-market to deliver these services and reduce time-to-revenue.  

Mobile backhaul will be a market driver in 2012. The growth in mobile data terminals will lead to a new generation of mobile backhaul networks. This will drive the move from 1 to 10 Gigabit Ethernet, higher-feature packet processing, and traffic management integration into mobile infrastructure to better control and bill bandwidth usage i.e. pay for what you use.

The 'God box' - packet optical transport systems and the like - is back, but really it is network needs that are driving this.

And one topic to watch that will become clearer in 2012 is how cloud computing impacts the networking market with regard to such issues as security, caching and higher-speed links.

Google is becoming an important internal - for its own usage - networking equipment player. And Google will be joined by others - Facebook, Amazon etc. What impact will this have on the traditional system networking vendors? Such new players are defining and building network platforms tailored for their needs. This is competition for the traditional system vendors, who are not getting this piece of the business. Semiconductors, including FPGAs, could serve those companies directly.

Other issues to note: What will Intel do in the networking space? Intel acquired Fulcrum in 2011 and has invested in several networking companies.

There are also technology issues.

What will happen to ternary content addressable memory (TCAM)? Broadcom's acquisition of NetLogic Microsystems has created a hole in the TCAM market. Will Broadcom continue with TCAM? Will customers want to give their TCAM business to Broadcom?

Xilinx has added network search engine IP to its FPGA solution portfolio as multi-core 'search engines' face increasing difficulty in sustaining the performance required.

And of course there is the continual issue of power optimisation.

 

For Part 2, click here

For Part 3, click here


Boosting the 100 Gigabit addressable market

Alcatel-Lucent has enhanced the optical performance of its 100 Gigabit technology with the launch of its extended reach (100G XR) line card. Extending the reach of 100 Gigabit systems helps make the technology more attractive when compared to existing 40 Gigabit optical transport.

 

"We have built some rather large [data centre to data centre] networks with spans larger that 1,000km in totality"  

Sam Bucci, Alcatel-Lucent

 

 

 

 

Used with the Alcatel-Lucent 1830 Photonic Service Switch, the line card improves optical transmission performance by 30% through fine-tuning of the algorithm that runs on its coherent receiver ASIC. The system vendor says the typical optical reach extends to 2,000km.

When Alcatel-Lucent first announced its 100 Gigabit technology in June 2010, it claimed a reach of 1,500-2,000km. Now this upper reach limit is met for most networking scenarios with the extended reach performance. 

"By announcing the extended reach, Alcatel-Lucent is able to highlight the 2,000km reach as well as draw attention to the fact that it has many deployments already, and that some of those customers are using 100 Gig in 1,000km+ applications," says Sterling Perrin, senior analyst at Heavy Reading.

Market research firm Ovum views the 100G XR announcement as a specific evolutionary improvement.

"But it is significant in that it makes the case for 100 Gig versus 40 Gig more attractive for terrestrial longer-reach applications," says Dana Cooperson, network infrastructure practice leader at Ovum. “The higher the performance vendors can make 100 Gig for more demanding applications - bad fiber, ultra long-haul and ultimately submarine - the quicker it will eclipse 40 Gig.” That said, Ovum does not expect 40 Gig to be eclipsed anytime soon. 

 

100G XR

The line card's improved optical performance equates to transmission across longer fibre spans before optical regeneration is required. This, says the vendor, saves on equipment cost, power and space. 

More complex network topologies can also be implemented such as mesh networks where the signal can encounter varying-length paths based on differing fibre types as well as multiple ROADM stages. Alcatel-Lucent says it has implemented a 1,700km link with 20 amplifiers and seven ROADM stages without the need for signal regeneration.

The improved optical performance of the 100G XR has been achieved without changing the line card's hardware. The card uses the same analogue-to-digital converter, digital signal processor (DSP) ASIC and the same forward error correction scheme used for its existing 100 Gigabit line card. 

What has changed is the dispersion compensation algorithm that runs on the DSP, making use of the experience Alcatel-Lucent has gained from existing 100 Gigabit deployments. 

"We can tune various parameters, such as power and the way it [the algorithm] deals with impairments," says Sam Bucci, vice president, terrestrial portfolio management at Alcatel-Lucent. In particular the 100G XR has increased tolerance to polarisation mode dispersion and non-linear impairments.

Cooperson says Alcatel-Lucent has adjusted the receiver ASIC performance after 'mining' data from coherent deployments, something the company is used to doing with its wireless networks. She says Alcatel-Lucent has also worked closely with component vendors to achieve the improved performance. 

Perrin points out that Alcatel-Lucent's 100 Gig design uses a single laser while Ciena's system is dual laser. "Alcatel-Lucent is saying that over an identical plant the two-laser approach has no distance advantages over the one laser approach," he says. However, other system vendors have announced distances at and beyond 2,000km. "So Alcatel-Lucent's enhanced system is not record-setting," says Perrin.

 

100 Gigabit Market 

Alcatel-Lucent says it has more than 45 deployments comprising over 1,200 100 Gig lines since the launch of its 100 Gigabit system in 2010.

"It appears that Alcatel-Lucent has shipped more 100G line cards than anyone," says Cooperson. "Alcatel-Lucent has a good opportunity to make some serious 100 Gig inroads here, along with Ciena, while everyone else gears up to get their solutions to market in 2012." 

Cooperson also says the 100G XR announcement dovetails nicely with Alcatel-Lucent’s recent CloudBand announcement. Indeed Bucci says that its deployments of 100 Gig include connecting data centres: "We have built some rather large [data centre to data centre] networks with spans larger than 1,000km in totality."

The 100G XR card is being tested by customers and will be generally available starting December 2011.


Rational and innovative times: JDSU's CTO Q&A Part II

Brandon Collings, JDS Uniphase's CTO for communications and commercial optical products, talks about fostering innovation and what is coming after 100 Gigabit optical transmission. Part II of a Q&A with Gazettabyte.


"What happens after 100 Gig is going to be very interesting"

Brandon Collings, JDSU

 

How has JDS Uniphase (JDSU) adapted its R&D following the changes in the optical component industry over the last decade?

JDSU has been a public company for both periods [the optical boom of 1999-2000 and now]. The challenge JDSU faced in those times, when there was a lot of venture capital (VC) money flowing into the system, was that the money was sort of free money for these companies. It created an imbalance in that the money was not tied to revenue, which was a challenge for companies like JDSU that tie R&D spend to revenue. You also have much more flexibility [as a start-up] in setting different price points if you are operating on VC terms.

The situation now is very straightforward, rational and predictable.

There is not a huge army of R&D going on. That lack of R&D does not speed up the industry but what it does do is allow those companies doing R&D - and there is still a significant number - a lot of focus and clarity. It also requires a lot of partnership between us, our customers [equipment makers] and operators. The people above us can't just sit back and pick and choose what they like today from myriad start-ups doing all sorts of crazy things.

We very much appreciate this rational time. Visions can be more easily discussed, things are more predictable and everyone is playing from a similar set of rules.

 

Given the changes at the research labs of system vendors and operators, is there a risk that insufficient R&D is being done, impeding optical networking's progress?

It is hard to say absolutely not, as fewer people doing things can slow things down. But the work those labs did covered a wide space, including outside of telecom.

There is still a sufficient critical mass of research at places like Alcatel-Lucent Bell Labs, AT&T and BT; there is increasingly work going on in new regions like Asia Pacific, and a lot more in and across Europe. It is also much more focussed - the volume of workers may have decreased but the task remains in hand.

 

"There are now design tradeoffs [at speeds higher than 100Gbps] whereas before we went faster for the same distance" 

 

How does JDSU foster innovation and ensure it is focussing on the right areas?

I can't say that we have at JDSU a process that ensures innovation. Innovation is fleeting and mysterious.   

We stay very connected to our key customers who are more on the cutting edge. We have very good personal and professional relationships with their key people. We have the same type of relationship with the operators. 

I and my team very regularly canvass and have open discussions about what is coming. What does JDSU see? What do you see? What technologies are blossoming? We talk through those sort of things. 

That isn't where innovation comes from. But what that can do is sow the seeds for the opportunity for innovation to happen. 

We take that information and cycle it through all our technology teams. The guys in the trenches - the material scientists, the free-space optics design guys - we try to educate them with as much of an understanding of the higher-level problems that ultimately their products, or the products they design into, will address.  

What we find is that these guys are pretty smart. If you arm them with a wider understanding, you get a much more succinct and powerful innovation than if you try to dictate to a material scientist here is what we need, come back when you are done.

It is a loose approach, there isn't a process, but we have found that the more we educate our keys [key guys] to the wider set of problems and the wider scope of their product segments, the more they understand and the more they can connect their sphere of influence from a technology point of view to a new capability. We grab that and run with it when it makes sense.

It is all about communicating with our customers and understanding the environment and the problem, then spreading that as wide as we can so that the opportunity for innovation is always there. We then nurse it back into our customers.

 

Turning to technology, you recently announced the integration of a tunable laser into an SFP+, a product you expect to ship in a year. What platforms will want a tunable laser in this smallest pluggable form factor?

The XFP has been on routers and OTN (Optical Transport Network) boxes - anything that has 10 Gig - and those interfaces have been migrated over to SFP+ for compactness and face plate space. There are already packet and OTN devices that use SFP+, and DWDM formats of the SFP+, to do backhaul and metro ring applications. The expectation is that while there are more XFP ports today, the next round of equipment will move to SFP+.

Certainly the Ciscos, Junipers and the packet guys are using tunable XFPs in great volume for IP over DWDM and access networks, but the more telecom-centric players riding OTN links or maybe native Ethernet links over their metro rings are probably the larger volume.  

 

What distance can the tunable SFP+ achieve?

The distances will be pretty much the same as the tunable XFP. We produce that in a number of flavours, whether it is metro or long-haul. The initial SFP+ will likely be the metro reaches, 80km and things like that.

 

What is the upper limit of the tunable XFP?

We produce a negative chirp version which can do 80km of uncompensated dispersion, and then we produce a zero chirp which is more indicative of long-haul devices.

In that case the upper limit is more defined by the link engineering and the optical signal-to-noise ratio (OSNR), the extent of the dispersion compensation accuracy and the fibre type. It starts to look and smell like a long-haul lithium niobate transceiver where the distances are limited by link design as much as by the transceiver itself. As for the upper limit, you can push 1000km.
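As a back-of-envelope check (assuming a typical G.652 fibre dispersion of about 17 ps/nm/km, not a JDSU specification), 80km of uncompensated reach corresponds to roughly 1,360 ps/nm of accumulated dispersion:

```python
# Back-of-envelope dispersion budget for an uncompensated 10 Gig link.
# D_SMF is a typical value for standard single-mode fibre at 1550 nm
# (an assumption for illustration, not a JDSU specification).

D_SMF = 17.0          # chromatic dispersion, ps/nm/km
reach_km = 80
accumulated = D_SMF * reach_km
print(f"{accumulated:.0f} ps/nm over {reach_km} km")   # -> 1360 ps/nm
```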

 

An XFP module can accommodate 3.5W while an SFP+ is about 1.5W. How have you reduced the power to fit the design into an SFP+?

It may be a generation before we get to that MSA level so we are working with our customers to see what level they can tolerate. We'll have to hit a lot less than 3.5W but it is not clear that we have to hit the SFP+ MSA specification. We are already closer now to 1.5W than 3.5W.

 

 

"I can't say that we have at JDSU a process that ensures innovation. Innovation is fleeting and mysterious."

  

Semiconductors now play a key role in high-speed optical transmission. Will semiconductors take over more roles and become a bigger part of what you do?

Coherent transmission [that uses an ASIC incorporating a digital signal processor (DSP)] is not going away. There is a lot of differentiation at the moment in what happens in that DSP, but I think overall it is going to be a tool the system houses use to get the job done.

If you look at 10 Gig, the big advancement there was FEC [forward error correction] and advanced FEC. In 2003 the situation was a lot like it is today: who has the best FEC was something that was touted.

If you look at coherent technology, it is certainly a different animal but it is a similar situation: that is, the big enabler for 40 and 100 Gig. Coherent is advanced technology, enhanced FEC was advanced technology back then, and over time it turned into a standardised, commoditised piece that is central and ubiquitously used for network links.

Coherent has more diversity in what it can do but you'll see some convergence and commoditisation of the technology. It is not going to replace or overtake the importance of photonics. In my mind they play together intimately; you can't replace the functions of photonics with electronics any time soon.

From a JDSU perspective, we have a lot of work to do because the bulk of the cost, the power and the size is still in the photonics components. The ASIC will come down in power, it will follow Moore's Law, but we will still need to work on all that photonics stuff because it is a significant portion of the power consumption and it is still the highest portion of the cost. 

 

JDSU has made acquisitions in the area of parallel optics. Given there is now more industry activity here, why isn't JDSU more involved in this area? 

We have been intermittently active in the parallel optics market.

The reality is that it is a fairly fragmented market: there are a lot of applications, each one with its own requirements and customer base. It is tough to spread one platform product around these applications. That said, parallel optics is now a mainstay for 40 and 100 Gig client [interfaces] and we are extremely active in that area: the 4x10, 4x25 and 12x10G [interfaces]. So that other parallel optics capability is finding its way into the telecom transceivers. 

We do stay active in the interconnect space but we are more selective in what we get engaged in. Some of the rules there are very different: the critical characteristics for chip interconnect are very different to transceivers, for example. It may be much better to have on-chip optics versus off-chip optics. Obviously that drives completely different technologies so it is a much more cloudy, fragmented space at the moment.

We are very tied into it and are looking for those proper opportunities where we do have the technologies to fit into the application.

 

How does JDSU view the issues of 200, 400 Gigs and 1 Terabit optical transmission? 

What happens after 100 Gig is going to be very interesting. 

Several things have happened. We have used up the 50GHz [DWDM] channel, we can't go faster in the 50GHz channel - that is the first barrier we are bumping into. 

Second, we're finding there is a challenge to do electronics well beyond 40 Gigabit. You start to get into electronics that have to operate at much higher rates - analogue-to-digital converters, modulator drivers - you get into a whole different class of devices.

Third, we have used all of our tools: we have used FEC, we are using soft-decision FEC and coherent detection. We are bumping into the OSNR problem and we don't have any more tools to run line rates that have less power relative to noise yet somehow recover that with some magic technology, like FEC at 10 Gig, and soft-decision FEC and coherent at 40 and 100 Gig.

This is driving us into a new space where we have to do multi-carrier and bigger channels. It is opening up a lot of flexibility because, well, how wide is that channel? How many carriers do you use? What type of modulation format do you use? 

What format you use may dictate the distance you go and inversely the width of the channel. We have all these new knobs to play with and they are all tradeoffs: distance versus spectral efficiency in the C-band. The number of carriers will drive potentially the cost because you have to build parallel devices. There are now design tradeoffs whereas before we went faster for the same distance.

We will be seeing a lot of devices and approaches from us and our customers that provide those tradeoffs flexibly so the carriers can do the best they can with what mother nature will allow at this point.

That means transponders that do four carriers: two of them do 200 Gig nicely packed together but they only achieve a few hundred kilometres, while a couple of other carriers right next door go a lot further but are a little bit wider, so that density-versus-reach tradeoff is in play. That is what is going to be necessary to get the best of what we can do with the technology.
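A rough sketch of that tradeoff arithmetic (the modulation formats, 32-gigabaud symbol rate and 37.5GHz carrier spacing below are illustrative assumptions, not JDSU figures):

```python
# Illustrative superchannel tradeoff arithmetic. The formats, symbol
# rate and carrier spacing are assumptions chosen for illustration.

BITS_PER_SYMBOL = {"DP-QPSK": 4, "DP-16QAM": 8}   # over two polarisations

def superchannel(carriers, baud_gbd, fmt, spacing_ghz):
    """Return (capacity in Gbps, spectral efficiency in bits/s/Hz)."""
    capacity = carriers * baud_gbd * BITS_PER_SYMBOL[fmt]  # pre-FEC Gbps
    width = carriers * spacing_ghz                         # occupied GHz
    return capacity, capacity / width

for fmt in BITS_PER_SYMBOL:
    cap, se = superchannel(4, 32, fmt, 37.5)
    print(f"4-carrier {fmt}: {cap:.0f} Gbps, {se:.2f} b/s/Hz")
# DP-QPSK goes further; DP-16QAM packs twice the bits into the same width.
```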

That is the transmission side. On the transport side - the ROADMs and amplifiers - they have to accommodate these quirky new formats and reach requirements.

We need amplifiers to get the noise down. So this is introducing new concepts like Raman and flex[ible] spectrum to get the best we can with these really challenging requirements, like trying to get the most reach with the greatest spectral efficiency.

 

How do you keep abreast of all these subject areas besides conversations with customers?

It is a challenge; there aren't many companies in this space with a portfolio broader than JDSU's optical comms portfolio.

We do have a team and the team has its area of focus, whether it is ROADMs, modulators, transmission gear or optical amplifiers. We segment it that way but it is a loose segmentation so we don't lose ideas crossing boundaries. We try to deal with the breadth that way.

Beyond that, it is about staying connected with the right people at the customer level, having personal relationships so that you can have open discussions. 

And then it is knowing your own organisation, knowing who to pull into a nebulous situation that can engage the customer, think on their feet and whiteboard there and then rather than [bringing in] intelligent people that tend to require more of a recipe to do what they are doing. 

It is all about how to get the most from each team member and creating those situations where the right things can happen.

 

For Part I of the Q&A, click here


Q&A: Ciena’s CTO on networking and technology

In Part 2 of the Q&A, Steve Alexander, CTO of Ciena, shares his thoughts about the network and technology trends.

Part 2: Networking and technology

"The network must be a lot more dynamic and responsive"

Steve Alexander, Ciena CTO

 

Q. In the 1990s dense wavelength division multiplexing (DWDM) was the main optical development while in the '00s it was coherent transmission. What's next?

A couple of perspectives.

First, the platforms that we have in place today: III-V semiconductors for photonics and collections of quasi-discrete components around them - ASICs, FPGAs and pluggables - that is the technology we have.  We can debate, based on your standpoint, how much indium phosphide integration you have versus how much silicon integration.

Second, the way that networks built in the next three to five years will differentiate themselves will be based on the applications that the carriers, service providers and large enterprises can run on them.

This will be in addition to capacity - capacity is going to make a difference for the end user and you are going to have to have adequate capacity with low enough latency and the right bandwidth attributes to keep your customers. Otherwise they migrate [to other operators], we know that happens.

You are going to start to differentiate based on the applications that the service providers and enterprises can run on those networks. I see the value of networking changing from a hardware-based problem-set to one largely software-based.

I'll give you an analogy: You bought your iPhone, I'll claim, not so much because it is a cool hardware box - which it is - but because of the applications that you can run on it.  

The same thing will happen with infrastructure. You will see the convergence of the photonics piece and the Ethernet piece, and you will be able to run applications on top of that network that will do things such as move large amounts of data, encrypt large amounts of data, set up transfers for the cloud, assemble bandwidth together so you can have a good cloud experience for the time you need all that bandwidth and then that bandwidth will go back out, like a fluid, for other people to use.

That is the way the network is going to have to operate in future. The network must be a lot more dynamic and responsive.

 

How does Ciena view 40 and 100 Gig and in particular the role of coherent and alternative transmission schemes (direct detection, DQPSK)? Nortel Metro Ethernet Networks (MEN) was a strong coherent adherent yet Ciena was developing 100Gbps non-coherent solutions before it acquired MEN.

If you put the clock back a couple of years, where were the classic Ciena bets and what were the classic MEN bets?

We were looking at metro, edge of network, Ethernet, scalable switches, lots of software integration and lots of software intelligence in the way the network operates. We did not bet heavily on the long distance, submarine space and ultra long-haul. We were not very active in 40 Gig, we were going straight from 10 to 100 Gig.

Now look at the bets the MEN folks placed: very strong on coherent and applying it to 40 and 100 Gig, strong programme at 100 Gig, and they were focussed on the long-haul. Well, to do long-haul when you are running into things like polarisation mode dispersion (PMD), you've got to have coherent. That is how you get all those problems out of the network. 

Our [Ciena's] first 100 Gig was not focussed on long-haul; it was focussed on how you get across a river to connect data centres.

When you look at putting things together, we ended up stopping our developments that were targeted at competing with MEN's long-haul solutions. They, in many cases, stopped developments coming after our switching, carrier Ethernet and software integration solutions. The integration worked very well because the intent of both companies was the same.

Today, do we have a position?  Coherent is the right answer for anything that has to do with physical propagation because it simplifies networks. There are a whole bunch of reasons why coherent is such a game changer.

The reason why the first 40 Gig implementations didn't go so well was cost. When we went from 10 to 40 Gig, the only tool was cranking up the clock rate.

At that time, once you got to 20GHz you were into the world of microwave. You leave printed circuit boards and normal manufacturing and move into a world more like radar. There are machined boxes, micro-coax and a very expensive manufacturing process.  That frustrated the desires of the 40 Gig guys to be able to say: Hey, we've got a better cost point than the 10 Gig guys.

Well, with coherent, the fact that I can unlock the bit rate from the baud rate, the signalling rate from the symbol rate, that is fantastic. I can stay at 10GHz clocks and send four bits per symbol - that is 40Gbps.

My basic clock rate, which determines manufacturing complexity, fabrication complexity and the basic technology, stays with CMOS, which everyone knows is a great place to play. Apply that same magic to 100 Gig.  I can send 100Gbps but stay at a 25GHz clock - that is tremendous, that is a huge economic win.
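The arithmetic works out as follows (a worked sketch assuming dual-polarisation QPSK, the standard coherent format carrying two bits per symbol on each of two polarisations, though Alexander does not name it here):

```python
# Worked version of the clock-rate arithmetic above. Dual-polarisation
# QPSK (2 bits/symbol x 2 polarisations) is assumed as the format.

def coherent_bit_rate(symbol_rate_ghz, bits_per_symbol=2, polarisations=2):
    """Bit rate in Gbps for a coherent carrier at a given symbol rate."""
    return symbol_rate_ghz * bits_per_symbol * polarisations

print(coherent_bit_rate(10))   # 40  -> 40 Gbps from ~10 GHz clocks
print(coherent_bit_rate(25))   # 100 -> 100 Gbps from ~25 GHz clocks
```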

Coherent lets you continue to use the commercial merchant silicon technology base, which is where you want to be. You leverage the year-on-year cost reduction, a world unto itself that is driving the economics, and we can leverage that.

So you get economics with coherent. You get improvement in performance because you simplify the line system - you can pop out the dispersion compensation, and you solve PMD with maths. You also get tunability. I'm using a laser - a local oscillator at the receiver - to measure the incoming laser. I have a tunable receiver that has a great economic cost point and makes the line system simpler.

Coherent is this triple win. It is just a fantastic change in technology.

 

What is Ciena’s thinking regarding bringing in-house sub-systems/ components (vertical integration), or the idea of partnerships to guarantee supply? One example is Infinera that makes photonic integrated circuits around which it builds systems. Another is Huawei that makes its own PON silicon.

The two examples are good ones.

With Huawei you have to treat them somewhat separately as they have some national intent to build a technology base in China. So they are going to make decisions about where they source components from that are outside the normal economic model. 

Anybody in the systems business that has a supply chain periodically goes through the classic make-versus-buy analysis. If I'm buying a module, should I buy the piece-parts and make it? You go through that portion of it. Then you look within the sub-system modules and the piece-parts I'm buying and say: What if I made this myself? It is frequently very hard to say if I had this component fully vertically integrated I'd be better off.

A good question to ask about this is: Could the PC industry have been better if Microsoft owned Intel? Not at all.

You have to step back and say: Where does value get delivered with all these things? A lot of the semiconductor and component pieces were pushed out [by system vendors] because there was no way to get volume, scale and leverage. Unless you corner the market, that is frequently still true. But that doesn't mean you don't go through the make-versus-buy analysis periodically.

Call that the tactical bucket.  

The strategic one is much different. It says: There is something out there that is unique and so differentiated, it would change my way of thinking about a system, or an approach or I can solve a problem differently.

 

"Coherent is this triple win. It is just a fantastic change in technology" 

 

 

 

 

 

 

 

If it is truly strategic and can make a real difference in the marketplace - not a 10% or 20% difference but a 10x improvement - then I think any company is obligated to take a really close look at whether it would be better being brought inside or entering into a good strategic partnership arrangement.

Certainly Ciena evaluates its relationships along these lines.

 

Can you cite a Ciena example?

Early when Ciena started, there was a technology at the time that was differentiated and that was Fibre Bragg Gratings. We made them ourselves. Today you would buy them.

You look at it at points in time. Does it give me differentiation? Or source-of-supply control? Am I at risk? Is the supplier capable of meeting my needs? There are all those pieces to it.

 

Optical Transport Network (OTN) integrated versus standalone products. Ciena has a standalone model but plans to evolve to an integrated solution. Others have an integrated product, while others still launched a standalone box and have since integrated. Analysts say such strategies confuse the marketplace. Why does Ciena believe its strategy is right?

Some of this gets caught up in semantics.

Why I say that is because we today have boxes that you would call switches but into which you can put pluggable coloured optics. Whether you would call that integrated probably depends more on what the competition calls it.

The place where there is most divergence of opinion is in the network core.

Normally people look at it and say: one big box that does everything would be great - that is the classic God-Box problem. When we look at it - and we have been looking at it on and off for 15 years now - if you try to combine every possible technology, there are always compromises.

The simplest one we can point to now: If you put the highest performance optics into a switch, you sacrifice switch density.

You can build switches today that because of the density of the switching ASICs, are I/O-port constrained: you can't get enough connectors on the face plate to talk to the switch fabric. That will change with time, there is always ebb and flow. In the past that would not have been true.

If I make those I/O ports datacom pluggables, that is about as dense as I'm going to get. If I make them long-distance coherent optics, I'm not going to get as many because coherent optics take up more space. In some cases, you can end up cutting your port density on the switch fabric by half. That may not be the right answer for your network depending on how you are using that switch.

We have both technologies in-house, and in certain applications we will do that. Years ago we put coloured optics on CoreDirector to talk to CoreStream; that was specific to certain applications. The reason is that in most networks, people try to optimise switch density and transport capacity, and these are different levers. If you bolt those levers together you often don't get the optimal point.

 

Any business books you have read that have been particularly useful for your job?

The Innovator's Dilemma (by Clayton Christensen). What is good about it is that it has a couple of constructs that you can use with people so they will understand the problem. I've used some of those concepts and ideas to explain where various industries are, where product lines are, and what is needed to describe things as innovation.

The second one is called: Fad Surfing in the Boardroom (by Eileen Shapiro). It is a history of the various approaches that have been used for managing companies. That is an interesting read as well.

 

Click here for Part 1 of the Q&A 

 


How ClariPhy aims to win over the system vendors

ClariPhy Communications will start volume production of its 40 Gig coherent IC in September and is working on a 28nm CMOS 100 Gig coherent ASIC. It also offers an ASIC design service, allowing customers to use their own IP as well as ClariPhy's silicon portfolio.


“We can build 200 million logic gate designs” 

Reza Norouzian, ClariPhy

ClariPhy is in the camp that believes that the 100 Gigabit-per-second (Gbps) market is developing faster than people first thought. “What that means is that instead of it [100Gbps] being deployed in large volumes in 2015, it might be 2014,” says Reza Norouzian, vice president of worldwide sales and business development at ClariPhy.  

Yet the fabless chip company is also glad it offers a 40Gbps coherent IC as this market continues to ramp while 100Gbps matures and overcomes hurdles common to new technology: The 100Gbps industry has yet to develop a cost-effective solution or a stable component supply that will scale with demand.

Another challenge facing the industry is reducing the power consumption of 100Gbps systems, says Norouzian. The need to remove the heat from a 100Gbps design - the ASIC and other components - is limiting the achievable equipment port density. “If you require three slots to do 100 Gig - whereas before you could use these slots to do 20 or 30 10 Gig lines - you are not achieving the density and economies of scale hoped for,” says Norouzian.

 

40G and 100G coherent ASICs

ClariPhy has chosen a 40nm CMOS process to implement its 40Gbps coherent chip, the CL4010. But it has since decided to adopt 28nm CMOS for its 100Gbps design – the CL10010 - to integrate features such as soft-decision forward error correction (see New Electronics' article on SD-FEC) and reduce the chip’s power dissipation.

The CL4010 integrates analogue-to-digital and digital-to-analogue converters, a digital signal processor (DSP) and a multiplexer/ demultiplexer on-chip. “Normally the mux is a separate chip and we have integrated that,” says Norouzian.

The first CL4010 samples were delivered to select customers three months ago and the company expects volume production to start by the end of September.  The CL4010 also interoperates with Cortina Systems’ optical transport network (OTN) processor family of devices, says the company.

The start-up claims there is strong demand for the CL4010. “When we ask them [operators]: ‘With all the hoopla about 100 Gig, why are you buying all this 40 Gig?’, the answer is that it is a pragmatic solution and one they can ship today,” says Norouzian.

ClariPhy expects 40Gbps volumes to continue to ramp for the next three or four years, partly because of the current high power consumption of 100Gbps. The company says several system vendors are using the CL4010 in addition to optical module customers.

The 28nm 100Gbps CL10010 is a 100 million gate ASIC. ClariPhy acknowledges it will not be first to market with a 100Gbps ASIC but says that by using the latest CMOS process it will be well positioned once volume deployments start from 2014.

ClariPhy is already producing a quad-10Gbps chip implementing the maximum likelihood sequence estimation (MLSE) algorithm used for dispersion compensation in enterprise applications.  The device covers links up to 80km (10GBASE-ZR) but the main focus is for 10GBASE-LRM (220m+) applications. “Line cards that used to have four times 10Gbps lanes now are moving to 24 and will use six of these chips,” says Norouzian. The device sits on the card and interfaces with SFP+ or Quad-SFP optical modules.
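To give a flavour of what MLSE does (a toy sketch, not ClariPhy's implementation): the equaliser models the dispersed channel as a short inter-symbol interference filter and uses the Viterbi algorithm to find the transmitted sequence whose expected waveform best matches the received samples.

```python
# Toy MLSE equaliser: Viterbi search over a known 2-tap ISI channel with
# binary (+1/-1) symbols. Illustrative only - real 10G MLSE chips work on
# multi-tap channel models at line rate, and this is not ClariPhy's design.

def mlse_viterbi(rx, h):
    """Return the +1/-1 sequence minimising the squared error to rx,
    assuming rx[i] = h[0]*s[i] + h[1]*s[i-1] and s[-1] = +1."""
    symbols = (-1.0, 1.0)
    metric = [float("inf"), 0.0]      # state = previous symbol; start at +1
    paths = [[], []]
    for r in rx:
        new_metric = [float("inf")] * 2
        new_paths = [None, None]
        for prev in (0, 1):           # hypothesised previous symbol
            for cur in (0, 1):        # hypothesised current symbol
                expect = h[0] * symbols[cur] + h[1] * symbols[prev]
                m = metric[prev] + (r - expect) ** 2
                if m < new_metric[cur]:
                    new_metric[cur] = m
                    new_paths[cur] = paths[prev] + [symbols[cur]]
        metric, paths = new_metric, new_paths
    return paths[0] if metric[0] < metric[1] else paths[1]

h = [1.0, 0.5]                                    # assumed channel taps
tx = [1, -1, -1, 1, 1, -1, 1]
rx = [h[0] * s + h[1] * p for s, p in zip(tx, [1] + tx[:-1])]
print(mlse_viterbi(rx, h))                        # recovers tx as +/-1.0
```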

 

“The CL10010 is the platform to demonstrate all that we can do but some customers [with IP] will get their own derivatives”

 

System vendor design wins

The 100Gbps transmission ASIC market may be in its infancy but the market is already highly competitive with clear supply lines to the system vendors.

Several leading system vendors have decided to develop their own ASICs.   Alcatel-Lucent, Ciena, Cisco Systems (with the acquisition of CoreOptics), Huawei and Infinera all have in-house 100Gbps ASIC designs.

System vendors have justified the high development cost of the ASIC to get a time-to-market advantage rather than wait for 100Gbps optical modules to become available. Norouzian also says such internally-developed 100Gbps line card designs deliver a higher 100Gbps port density when compared to a module-based card.

Alternatively, system vendors can wait for 100Gbps optical modules to become available from the likes of an Oclaro or an Opnext. Such modules may include merchant silicon from the likes of a ClariPhy or may be internally developed, as with Opnext.

System vendors may also buy 100Gbps merchant silicon directly for their own 100Gbps line card designs. Several merchant chip vendors are targeting the coherent marketplace in addition to ClariPhy. These include such players as MultiPhy and PMC-Sierra while other firms are known to be developing silicon.

Given such merchant IC competition and the fact that leading system vendors have in-house designs, is the 100Gbps opportunity already limited for ClariPhy?

Norouzian's response is that the company, unlike its competitors, has already supplied 40Gbps coherent chips, proving the company’s mixed signal and DSP expertise. The CL10010 chip is also the first publicly announced 28nm design, he says: “Our standard product will leapfrog first generation and maybe even second generation [100Gbps] system vendor designs.”

The equipment makers' management will have to decide whether to fund the development of their own second-generation ASICs or consider using ClariPhy’s 28nm design.

ClariPhy acknowledges that leading system vendors have their own core 100Gbps intellectual property (IP) and so offers vendors a design service to develop their own custom systems-on-chip.  For example a system vendor could use ClariPhy's design but replace the DSP core with the system vendor’s own hardware block and software.

 

Source: ClariPhy Communications

Norouzian says system vendors making 100Gbps ASICs develop their own intellectual property (IP) blocks and algorithms and use companies like IBM or Fujitsu to make the design. ClariPhy offers a similar service while also being able to offer its own 100Gbps IP as required. “The CL10010 is the platform to demonstrate all that we can do,” says Norouzian. “But some customers [with IP] will get their own derivatives.”

The firm has already made such custom coherent devices using customers’ IP but will not say whether these were 40 or 100Gbps designs.

 

Market view

ClariPhy claims operator interest in 40Gbps coherent is not so much because of its superior reach but its flexibility when deployed in networks alongside existing 10Gbps wavelengths. “You don't have to worry about [dispersion] compensation along routes,” says Norouzian, adding that coherent technology simplifies deployments in the metro as well as regional links.

And while ClariPhy’s focus is on coherent systems, the company agrees with other 100Gbps chip specialists such as MultiPhy on the need for 100Gbps direct-detect solutions for distances beyond 40km. “It is very likely that we will do something like that if the market demand was there,” says Norouzian. But for now ClariPhy views mid-range 100Gbps applications as a niche opportunity.

 

Funding

ClariPhy raised US $14 million in June. The biggest investor in this latest round was Nokia Siemens Networks (NSN).

An NSN spokesperson says working with ClariPhy will help the system vendor develop technology beyond 100Gbps. “It also gives us a clear competitive edge in the optical network markets, because ClariPhy’s coherent IC and technology portfolio will enable us to offer differentiated and scalable products,” says the spokesperson. 

The funding follows a previous round of $24 million in May 2010 where the investors included Oclaro. ClariPhy has a long working relationship with the optical components company that started with Bookham, which formed Oclaro after it merged with Avanex.

“At 100Gbps, Oclaro get some amount of exclusivity as a module supplier but there is another module supplier that also gets access to this solution,” says Norouzian. This second module supplier has worked with ClariPhy in developing the design.  

ClariPhy will also supply the CL10010 to the system vendors. “There are no limitations for us to work with OEMs,” he says.

The latest investment will be used to fund the company's R&D effort in 100, 200 and 400Gbps, and getting the CL4010 to production.

 

Beyond 100 Gig

The challenge at data rates higher than 100Gbps is implementing ultra-large ASICs: closing timing and laying out vast digital circuitry. This is an area the company has been investing in over the last 18 months. “Now we can build 200 million logic gate designs,” says Norouzian.

Moving from 100Gbps to 200Gbps wavelengths will require higher order modulation, says Norouzian, and this is within the realm of its ASIC. 

Going to 400Gbps will require using two devices in parallel. One Terabit transmission however will be far harder. “Going to one Terabit requires a whole new decade of development,” he says.


Further reading:

100G: Is market expectation in need of a reality check?

Terabit consortium embraces OFDM


Optical networking market in rude health

 Quarterly market revenues, global optical networking (1Q 2011). Source: Ovum

Despite recent falls in optical equipment makers’ stock, the optical networking market remains in good health with analysts predicting 6-7% growth in 2011.

For Andrew Schmitt, directing analyst for optical at Infonetics Research, unfulfilled expectations are nothing new. Optical networking is a market of single-digit yearly growth yet in the last year certain market segments have grown above average: spending on ROADM-based wavelength division multiplexing (WDM) optical network equipment, for example, has grown 20% since the first quarter of 2010.

“Every few years people get this expectation that there is going to be this hockey stick [growth] and it is not,” says Schmitt. “There has been a lot of Wall Street money moving into this sector in the latter part of 2010 and first part of this year and they have just had their expectations reset, but operationally the industry is very healthy.”

 

“Nothing in this business changes quickly but the pace of change is starting to accelerate”

Andrew Schmitt, Infonetics Research

 

But Schmitt acknowledges that there is industry concern about the market outlook. “There have been lots of client calls in the first half of the year wanting to talk numbers,” says Schmitt. “When the market is growing rapidly there is no need for such calls but when it is uncertain, customers put more time into understanding what is going on.” 

Both Infonetics and market research firm Ovum say the optical networking market grew 7% globally in the last year (2Q10 to 1Q11).  

Ovum says the market reached US $3.5bn in the first quarter of 2011 and it expects 6% growth this year. “Most of the growth will come from North America—general recovery, stimulus-related spending, and LTE (Long Term Evolution)-inspired spending; and from South and Central America mostly mobile and fixed broadband-related,” says Dana Cooperson, network infrastructure practice leader at Ovum.

Ovum also notes that optical networking annualised spending for the last four quarters (2Q10-1Q11) finally went into the black with 1% growth, to reach $14.6bn. Annualised share figures are a strong indicator of longer-term market trends, says Ovum.

 

Market growth

Factors accounting for the growth include optical equipment demand for mobile and broadband backhaul. Carriers are also embarking on a multi-year optical upgrade to 40 and 100 Gigabit transmission over Optical Transport Network (OTN) and ROADM-based networks. Infonetics notes that ROADM spending in particular set a new high in the first quarter, rising 4% sequentially.

Ovum expects overall growth to come from metro and backbone WDM markets and from LTE. “For metro it is a combination of new builds, as DWDM continues to take over the metro core from SONET/SDH, and expansions of ROADM and 40 Gigabit,” says Cooperson. “For backbone it is a combination of retrofits for 40 and 100 Gigabit and overbuilds with 40 and 100 Gigabit coherent-optimised systems.”

Many operators are also looking at OTN switching and how it can help with network efficiency and manageability, she says, while mobile backhaul continues to be a hot spot at the access end of the network.

The Americas account for the market growth, whereas spending in Asia-Pacific and in Europe, the Middle East and Africa remains flat.

“We’re not as bullish on Europe as I’ve heard some others are,” says Cooperson. “We expected China to slow down as capital intensities in the 34-35% seen in 2008 and 2009 were unsustainable. We saw the cooling down a bit earlier in 2010 than we had expected, but it did cool down and will continue to.”

Ovum expects Asia-Pacific as a whole to be moribund, though the pullbacks in China will at least be countered by slow growth in Japan and a big upsurge in India, following a huge decline there last year due to delayed 3G-related builds among other issues.

 

Outlook

Ovum is optimistic about the optical networking market due to continued competitive pressures and traffic growth. “We don’t think traffic growth can just continue without attention to the underlying issues related to revenue pressure, regardless of competitive pressures,” says Cooperson. “But newer optical and packet systems offer significant improvements over the old in terms of power efficiency, manageability, and of course 40 and 100 Gigabit coherent and ROADM features.”

 

“Most of the growth will come from North America”

Dana Cooperson, Ovum.

 

Many networks worldwide are also due for a core infrastructure update to boost capacity and efficiency, while many other operators are upgrading their access networks for mobile backhaul and enterprise Ethernet services.

Schmitt stresses that while it is right to talk about a 'core reboot', there are all sorts of operators that make up the market: the established carriers, those focussed on Layer 2 and Layer 3 transport, dark fibre companies and cable companies.

“Everyone has a different business so there is not a whole lot of group-think in this industry,” says Schmitt. “So when you talk about a transition to 40 and 100 Gigabit, some carriers will make that transition earlier than others because the nature of their business demands it.”

However, there are developments in equipment costs that are leading to change. “Once you get out to 2013-14, 100 Gigabit [transport] looks really good relative to 40 Gigabit and tunable XFPs at 10 Gigabit look really, really good,” says Schmitt, who believes these are going to be two dominating technologies. “People are going to use 100 Gigabit and when they can afford to throw more 10 Gigabit at the [capacity] problem, in shorter metro and regional spans, they will use tunable XFPs,” he says. “That is a whole new level in terms of driving down cost at 10 Gigabit that people haven’t factored in yet.”

 

Pacier change

The move to 100 Gigabit will not lead to increased spending, stresses Schmitt. Rather, its significance is as a ‘mix shift’: the adoption of 100 Gigabit will shift spending from older systems to newer ones, making the technology interesting in terms of market-share shift rather than overall revenue growth.

That said, there are areas of optical spending where capital expenditure (capex) is growing faster than the single-digit trend. These include certain competitive telco providers and dark fibre providers such as AboveNet, TimeWarner Telecom and Colt. “You look at their capex year-over-year and it is increasing in some cases by more than 20% a year,” says Schmitt.

He also notes that while the likes of Google, Yahoo, Microsoft and Apple do not spend on optical equipment as much as established operators such as Verizon or AT&T, their growth rate is higher. “There are sectors of the market that are growing quickly, and competitors that are positioned to service those sectors successfully are going to see above-trend growth,” says Schmitt.

He highlights three areas of innovation - ‘big vectors’ - that are going to change the business.

One is optical transport's move away from simple on-off keying signalling, which opens up all kinds of innovation. Another is the shift in the players buying optical equipment. “A lot more of the R&D is driven by the AboveNets, Time Warners, Comcasts and the Googles and less by the old-time PTTs,” says Schmitt. “That is going to change the way R&D is done.”

The third is photonic integration, which Schmitt equates to the very early state of the electronics business. Infinera has done some interesting things with integration, and its latest 500 Gigabit PIC (photonic integrated circuit) is a big leap in density, he says: “It will be interesting if that sort of technology crosses over into other applications such as short- and intermediate-reach applications.”

“Nothing in this business changes quickly but the pace of change is starting to accelerate,” says Schmitt. “These three things, when you throw them together in a pot, are going to result in some unpredictable outcomes.”

 


Reflecting light to save power

CIP Technologies is bringing its reflective component expertise to an EU-funded project to reduce the power consumption of optical systems.  

System vendors will be held increasingly responsible for the power consumption of their telecom and datacom platforms. That's because for each watt the equipment dissipates, up to six watts can be required for cooling. It is a burden that will only get heavier given the relentless growth in network traffic.
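
To put that multiplier in context, here is a minimal back-of-the-envelope sketch; the six-to-one cooling ratio is the worst case cited above, while the 10kW equipment load is a hypothetical figure.

```python
# Back-of-the-envelope sketch of the cooling burden described above.
# The six-to-one cooling ratio is the article's worst case; the 10 kW
# equipment load is a hypothetical example figure.

equipment_kw = 10.0     # heat dissipated by the platform (assumed)
cooling_per_watt = 6.0  # up to six watts of cooling per watt dissipated

cooling_kw = equipment_kw * cooling_per_watt
total_kw = equipment_kw + cooling_kw

print(f"Equipment load:     {equipment_kw:.0f} kW")
print(f"Worst-case cooling: {cooling_kw:.0f} kW")
print(f"Total draw:         {total_kw:.0f} kW ({total_kw / equipment_kw:.0f}x the equipment alone)")
```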

 

"Enterprises are looking for huge capacity at low cost and are increasingly concerned about the overall impact on power consumption"

David Smith, CIP Technologies

 

No surprise, then, that the European 7th Framework Programme has kicked off a research project to tackle power consumption. The Colorless and Coolerless Components for Low-Power Optical Networks (C-3PO) project involves six partners, including component specialist CIP Technologies and system vendor ADVA Optical Networking.

CIP is the project’s sole opto-electronics provider while ADVA Optical Networking's role is as system integrator.

“It’s not the power consumption of the optics alone,” says David Smith, CTO of CIP Technologies. “The project is looking at component technology and architectural issues which can reduce overall power consumption.”

The data centre is an obvious culprit, requiring up to 5 megawatts. Power is consumed by IT and networking equipment within the data centre – not a C-3PO project focus – and by optical networking equipment that links the data centre to other sites. “Large enterprises have to transport huge amounts of capacity between data centres, and requirements are growing exponentially,” says Smith. “They [enterprises] are looking for huge capacity at low cost and are increasingly concerned about the overall impact on power consumption.”

One C-3PO goal is to explore how to scale traffic without impacting the data centre's overall power consumption. Conventional dense wavelength division multiplexing (DWDM) equipment isn't necessarily the most power-efficient given that DWDM tunable lasers require their own cooling. “There is the power that goes into cooling the transponder, and to get the heat away you need to multiply again by the power needed for air conditioning,” says Smith.

Another idea gaining attention is operating data centres at higher ambient temperatures to reduce the air conditioning needed. This works for chips that have a wide operating temperature range, but the performance of optics - indium phosphide-based actives - degrades with temperature such that extra cooling is required. As such, power consumption could even be worse, says Smith.

A more controversial optical transport idea is changing how line-side transport is done. Adding transceivers directly to IP core routers saves on the overall DWDM equipment deployed. This is not a new idea, says Smith, and an argument against it is that it places tunable lasers and their cooling on an IP router, which operates at a relatively high ambient temperature. The power reduction sought may not be achieved.

But a new transceiver design using coolerless and colourless (reflective) components could operate over a wider temperature range without needing significant cooling. “It is speculative but there is a good commercial argument that this could be effective,” says Smith.

C-3PO will also exploit material systems to extend devices' temperature range - 75°C to 85°C - to eliminate as much cooling as possible. Such material systems expertise is the result of CIP's involvement in other collaborative projects.

 

"If the [WDM-PON] technology is deployed on a broad scale - that is millions of user lines – every single watt counts"

Klaus Grobe, ADVA Optical Networking

 

Indeed a companion project, to be announced soon, will run alongside C-3PO based on what Smith describes as ‘revolutionary new material systems’. These systems will greatly improve the temperature performance of opto-electronics. “C-3PO is not dependent on this [project] but may benefit from it,” he says.

 

Colourless and coolerless

CIP’s role in the project will be to integrate modulators and arrays of lasers and detectors to make coolerless and colourless optical transmission technology.  CIP has its own hybrid optical integration technology called HyBoard.

“Coolerless is something that will always be aspirational,” says Smith. C-3PO will develop technology to reduce and even eliminate cooling where possible to reduce overall power consumption. “Whether you can get all parts coolerless, that is something to be strived for,” he says.

Colourless implies wavelength independence. For light sources, one way to achieve colourless operation is to use tunable lasers; another is to use reflective optics.

CIP Technologies has been working on reflective optics as part of its work on wavelength-division multiplexed passive optical networks (WDM-PON). Given that such reflective optics work over distances of up to 100km for optical access, CIP has considered using the technology for metro and enterprise networking applications.

Smith expects the technology to work over 200-300km, at data rates from 10 to 28 Gigabit-per-second (Gbps) per channel. Four 28Gbps channels would enable low-cost 100Gbps DWDM interfaces.
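
The channel arithmetic is worth making explicit: four 28Gbps lanes give 112Gbps, enough to carry a 100Gbps payload plus framing and forward error correction overhead. A minimal sketch, where the OTU4 line rate is a standard figure rather than one quoted in the article:

```python
# Four 28Gbps channels cover a 100Gbps payload plus its OTN framing and
# FEC overhead. The OTU4 line rate below is a standard figure, not one
# quoted in the article.

channels = 4
rate_per_channel_gbps = 28.0
otu4_line_rate_gbps = 111.81  # 100GbE payload wrapped in OTU4

aggregate_gbps = channels * rate_per_channel_gbps
print(f"Aggregate capacity: {aggregate_gbps:.0f} Gbps")
print(f"Margin over OTU4:   {aggregate_gbps - otu4_line_rate_gbps:.2f} Gbps")
```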

 

Reflective transmission

CIP’s building-block components used for colourless transmission include a multi-wavelength laser, an arrayed waveguide grating (AWG), reflective modulators and receivers (see diagram).

 

Reflective DWDM architecture. Source: CIP Technologies

 

Smith describes the multi-wavelength laser as an integrated component, effectively an array of sources. This is more efficient for longer distances than using a broadband source that is sliced to create particular wavelengths. “Each line is very narrow, pure and controlled,” says Smith.

The laser source is passed through the AWG, which feeds individual wavelengths to the reflective modulators where they are modulated and passed back through the AWG. The benefit of using a reflective modulator rather than a pass-through one is a simpler system: if the light source is passed through the modulator, a second AWG is needed to combine all the sources, as well as a second fibre. Single-ended fibre is also simpler to package.

For data rates of 1 or 2Gbps, the reflective modulator used can be a reflective semiconductor optical amplifier (RSOA). At speeds of 10Gbps and above, the complementary SOA-REAM (reflective electro-absorption modulator) is used; the REAM offers a broader bandwidth while the SOA offers gain.
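
That rate-dependent choice amounts to a simple rule of thumb. The helper below is purely illustrative and encodes only the figures given in the text:

```python
def reflective_modulator_for(rate_gbps: float) -> str:
    """Illustrative selection rule from the text: an RSOA suffices at
    1-2Gbps, while 10Gbps and above calls for the SOA-REAM combination,
    where the REAM provides the bandwidth and the SOA provides gain."""
    if rate_gbps <= 2:
        return "RSOA (reflective semiconductor optical amplifier)"
    if rate_gbps >= 10:
        return "SOA-REAM (SOA plus reflective electro-absorption modulator)"
    return "not specified in the article for intermediate rates"

for rate in (1, 2, 10, 28):
    print(f"{rate:>2} Gbps -> {reflective_modulator_for(rate)}")
```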

The benefit of a reflective scheme is that the laser source, made athermal and coolerless, consumes far less power than tunable lasers. “It has to be at least half the cost and we think that is achievable,” says Smith.

Using the example of the IP router, the colourless SFP transceiver - made up of a modulator and detector - would be placed on each line card, and the multi-wavelength laser source would be fed to each card's module.

Another part of the project is looking at using arrays of REAMs for WDM-PON. Such a modulator array would be used at the central office optical line terminal (OLT). “Here there are real space and cost savings using arrays of reflective electro-absorption modulators given their low power requirements,” says Smith. “If we can do this with little or no cooling required there will be significant savings compared to a tunable laser solution.”

ADVA Optical Networking points out that with an 80-channel WDM-PON system, there will be a total of 160 wavelengths (see the business case for WDM-PON). “If you consider 80 clients at the OLT being terminated with 80 SFPs, there will be a cost, energy consumption and form-factor overkill,” says Klaus Grobe, senior principal engineer at ADVA Optical Networking. “The only known solution for this is high integration of the transceiver arrays and that is exactly what C-3PO is about.”
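
The 160-wavelength figure is simple arithmetic: each of the 80 channels needs one downstream and one upstream wavelength. A minimal sketch of the count, and of the discrete-transceiver ‘overkill’ case the integration avoids:

```python
# An 80-channel WDM-PON uses one downstream and one upstream wavelength
# per client, hence the 160 wavelengths Grobe cites.

channels = 80
directions = 2  # downstream plus upstream

total_wavelengths = channels * directions
discrete_olt_sfps = channels  # one SFP per client: the 'overkill' case

print(f"Wavelengths in the system: {total_wavelengths}")
print(f"Discrete OLT transceivers replaced by arrays: {discrete_olt_sfps}")
```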

The low-power aspect of C-3PO for WDM-PON is also key. “In next-gen access, it is absolutely vital,” says Grobe. “If the technology is deployed on a broad scale - that is millions of user lines – every single watt counts, otherwise you end up with differences in the approaches that go into the megawatts and even gigawatts.”

There is also a benchmarking issue: the WDM-PON OLT will be compared with the XG-PON standard, the next-generation 10Gbps Gigabit passive optical network (GPON) scheme. Since XG-PON uses time-division multiplexing, it needs only one transceiver at the OLT - and that is the benchmark a 40- or 80-channel WDM-PON OLT will be judged against.

CIP will also be working closely with C-3PO partner IMEC on the design of the low-power ICs to drive the modulators.

 

Project timescales

The C-3PO project started in June 2010 and will last three years. The total funding of the project is €2.6 million with the European Union contributing €1.99 million.

The project will start by defining system requirements for the WDM-PON and optical transmission designs.

At CIP the project will employ the equivalent of two full-time staff for the project's duration, though Smith estimates that 15 CIP staff will be involved overall.

ADVA Optical Networking plans to use the results of the project – the WDM-PON and possibly the high-speed transmission interfaces - as part of its FSP 3000 WDM platform.

CIP expects that the technology developed as part of C-3PO will be part of its advanced product offerings.


Verizon plans coherent-optimised routes

Glenn Wellbrock, director of backbone network design at Verizon Business, was interviewed by gazettabyte as part of an upcoming feature on high-speed optical transmission. Here are some highlights of what he shared; the topics will be expanded upon in the feature.

 

 "Next-gen lines will be coherent only"

 Glenn Wellbrock, Verizon Business

 

 

Muxponders at 40Gbps

Given the expense of OC-768 very short reach transponders, Verizon is a keen proponent of 4x10Gbps muxponders. Instead of using the OC-768 client-side interface, Verizon uses 4x10Gbps pluggables that are multiplexed into the 40Gbps line-side interface. The muxponder approach is even more attractive compared to 40Gbps IP core router interfaces, which are considerably more expensive than 4x10Gbps pluggables.

DQPSK will be deployed this year

Verizon has been selective in its use of differential phase-shift keying (DPSK) based 40Gbps transmission within its network. It must measure the polarisation mode dispersion (PMD) on a proposed 40Gbps route, and PMD's variable nature means that impairment issues can arise over time. For this reason Verizon favours differential quadrature phase-shift keying (DQPSK) modulation.

According to Wellbrock, DPSK has a typical PMD tolerance of 4 ps while DQPSK is closer to 8 ps. In contrast, 10Gbps DWDM systems tolerate around 12 ps. “That [8 ps of DQPSK] is the right ballpark figure,” he says, pointing out that measuring a route's PMD must still be done.
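
Those tolerances turn route qualification into a straightforward comparison of the measured PMD against each format's limit. The helper below is hypothetical and uses only the ballpark figures quoted above:

```python
# Hypothetical route-qualification check using the ballpark PMD
# tolerances quoted in the article (in picoseconds).

PMD_TOLERANCE_PS = {
    "10Gbps DWDM": 12.0,
    "40Gbps DQPSK": 8.0,
    "40Gbps DPSK": 4.0,
}

def formats_supported(route_pmd_ps: float) -> list:
    """Return the formats whose tolerance exceeds the measured route PMD."""
    return [fmt for fmt, tol in PMD_TOLERANCE_PS.items() if route_pmd_ps < tol]

# A route measuring 6 ps of PMD (hypothetical) supports DQPSK but not DPSK.
print(formats_supported(6.0))
```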

Verizon is testing the technology in its labs and Wellbrock says Verizon will deploy 40Gbps DQPSK technology this year.

Cost of 100Gbps

Verizon Business has already deployed Nortel's 100Gbps dual-polarisation quadrature phase-shift keying (DP-QPSK) coherent system in Europe, connecting Frankfurt and Paris. However, given that 100Gbps is at a very early stage of development, it will take time to meet the goal of it costing twice that of 40Gbps.

That said, Verizon expects at least one other system vendor to have a 100Gbps system available for deployment this year. And around mid-2011, at least three 300-pin module makers will likely have products. It will be the advent of 100Gbps modules and the additional 100Gbps systems they enable that will reduce the price of 100Gbps. This has already happened with 40Gbps line-side transponders; with 100Gbps, the advent of 300-pin MSAs will happen far more quickly, says Wellbrock.

Next-gen routes coherent only

When Verizon starts deploying its next-generation fibre routes, they will be optimised for 100Gbps coherent systems. This means no dispersion compensation fibre will be used on the links, with the 100Gbps receiver's electronics performing the dispersion compensation instead.

The routes will accommodate 40Gbps transmission but only if the systems use coherent detection. Moreover, much care will be needed in how these links are architected since they will need to support future higher-speed optical transmission schemes.

Verizon expects to start building such routes in 2011 and “certainly” in 2012.


ECOC 2009: An industry view

One theme dominated all others for attendees at this year’s ECOC, held in Vienna in late September: high speed optical transmission technology.

“Most of the action was in 40 and 100 Gigabit,” said Stefan Rochus, vice president of marketing and business development at CyOptics. “There were many 40/100 Gigabit LR4 module announcements - from Finisar, Opnext and Sumitomo [Electric Industries].”

Daryl Inniss, practice leader, components at market research firm Ovum, noted a market shift regarding 40 Gigabit. “There has been substantial progress in lowering the cost, size and power consumption of 40 Gigabit technology,” he said.

John Sitch, Nortel's senior advisor for optical development, metro Ethernet networks, highlighted the prevalence of and interest in coherent detection/digital signal processing designs for 40 and 100 Gigabit-per-second (Gbps) transmission. Renewed interest in submarine transmission was also evident, he said.

Rochus also highlighted photonic integration as a show theme, citing the multi-source agreement from u2t Photonics and Picometrix; the integrated DPSK receiver involving Optoplex with u2t Photonics and Enablence Technologies; and CIP Technologies' monolithically integrated semiconductor optical amplifier with a reflective electro-absorption modulator.

Intriguingly, Rochus also heard talk of OEMs becoming vertically integrated again. “This time maybe by strategic partnerships rather than OEMs directly owning fabs,” he said.

The attendees were also surprised by the strong turnout at ECOC, which had been expected to suffer given the state of the economy. “Attendance appeared to be thick and enthusiasm strong,” said Andrew Schmitt, directing analyst, optical, at Infonetics Research. “I heard the organisers were expecting 200 people on the Sunday [for the workshops] but got 400.”

In general, most of the developments at the show were as expected. “No big surprises, but the ongoing delays in getting actual 100 Gigabit CFP modules were a small surprise,” said Sitch. “And if everyone's telling the truth, there will be plenty of competition in 100 Gigabit.”

Inniss was struck by how 100 Gigabit technology is likely to fare: “The feeling regarding 100 Gigabit is that it is around the corner and that 40 Gigabit will somehow be subsumed,” he said. “I’m not so sure – 40 Gigabit is growing up and while operators are cheerleading 100 Gigabit technology, it doesn’t mean they will buy – let’s be realistic here.”

As for the outlook, Rochus believes the industry has reason to be upbeat. “There is optimism regarding the third and fourth quarters for most people,” he said. “Inventories are depleted and carriers and enterprises are spending again.”

Inniss’ optimism stems from the industry's longer term prospects. He was struck by a quote used by ECOC speaker George Gilder: “Don’t solve problems, pursue opportunities.”

Network traffic continues to grow at a 40-50% yearly rate yet some companies continue to worry about taking a penny out of cost, said Inniss, when the end goal is solving the bandwidth problem.

For him, 100Gbps is just a data rate, as 400Gbps will be the data rate that follows. But given the traffic growth, the opportunity revolves around transforming data transmission. “For optical component companies innovation is the only way,” said Inniss. “What is required here is not a linear, incremental solution.”

