OTN hardware gets the 100 Gigabit treatment

AppliedMicro’s TPACK unit has unveiled the first of its 100 Gigabit Optical Transport Network (OTN) designs. Two devices - the TPOT414 and TPOT424 - were announced in November 2010 to perform 100 Gigabit mapping and framing functions, while in December a 100 Gigabit OTN muxponder (multiplexer-transponder) was announced that combines several of the company's designs.

 

 “The real market demand is for simple systems - the transponder and interfaces to the routers”

Lars Pedersen, AppliedMicro


Why is this significant?

The OTN standard, defined by the telecom standards body of the International Telecommunication Union (ITU-T), has existed for a decade but has emerged recently as a key networking technology.

“SONET/SDH is now legacy while packet optical is next-generation work,” says Sterling Perrin, senior analyst at Heavy Reading. “OTN has emerged as an interim step away from SONET/SDH that is able to handle packets.”

With the advent of 100 Gigabit-per-second (Gbps) optical transmission, OTN has been upgraded to handle 100Gbps signals and multiplex existing 10Gbps and 40Gbps OTN within the 100 Gigabit framing format. AppliedMicro claims to be first-to-market with merchant 100Gbps OTN hardware.

AppliedMicro’s 100Gbps OTN designs are implemented using field-programmable gate arrays (FPGAs) and will become available to system vendors this quarter. Using FPGAs allows vendors to start their hardware designs early, adding AppliedMicro’s FPGA software as the OTN design is completed.

 

What has been done?

The TPOT414 and TPOT424 designs, implemented on a line card, perform mapping - taking a 100 Gigabit client-side signal and turning it into a 100Gbps line-side signal for transmission - and regeneration of a 100Gbps signal.

The 100Gbps OTN muxponder uses framing and mapping but adds multiplexing between 10Gbps, 40Gbps and 100Gbps streams. One application is a router taking IP traffic at different rates and framing it before transmission over a 100Gbps dense wavelength division multiplexing (DWDM) network.

The 100Gbps muxponder comprises AppliedMicro’s PQ60 framer/mapper chip and multiplexing FPGA products, referred to by AppliedMicro as soft silicon. “It [soft silicon] is a combination of an FPGA and the programming image delivered as one unified component,” says Lars Pedersen, TPACK’s CTO. “There is still some uncertainty as to the specification and what is needed.”

The benefits of a soft silicon approach compared to an application-specific standard product (ASSP) include the ability to reprogram the design to accommodate changes in standards, and to let system vendors add new elements as they customise their designs.

AppliedMicro also provides an application programming interface (API) which simplifies control and maintenance when several of its designs are combined to implement a more complex function. “From a software perspective it looks like one combined function,” says Pedersen. The 100Gbps muxponder, for example, is controlled via the API. The API also allows software reuse were AppliedMicro to offer the functions as an ASSP chip.

 

The TPOT OTN architecture

The two functions – the TPOT414 and TPOT424 – are implemented on a common FPGA design.

 

The TPOT414. Source: AppliedMicro

The TPOT414 has a 100 Gigabit Ethernet (GbE) CAUI interface (10 x 10.3Gbps) and performs physical coding sub-layer (PCS) monitoring per lane before mapping the signal into OTU4, prior to long-haul transmission. The two signals - the 100GbE and the 100Gbps line side - have separate clocks, and the role of the mapper is to place the 100GbE stream into the OTN format.
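As a rough illustration of the rate bookkeeping the mapper performs, the sketch below uses approximate standard rates from IEEE 802.3ba and ITU-T G.709; these are not figures supplied by AppliedMicro.

```python
# Rate bookkeeping for mapping 100GbE into OTU4 (sketch). The rates
# are approximate standard figures (IEEE 802.3ba, ITU-T G.709), not
# numbers from AppliedMicro.
client_100gbe = 103.125   # Gbps: 100GbE after 64b/66b line encoding
otu4_line = 111.81        # Gbps: approximate OTU4 line rate

# The mapper absorbs this rate difference - OTN framing plus FEC -
# while bridging the two independently clocked domains.
overhead = otu4_line / client_100gbe - 1
assert 0.08 < overhead < 0.09   # OTN adds roughly 8-9% to the client signal
```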

The TPOT414 could be used to interface two optical modules on a line card: a CFP module that takes in a 100GbE client signal and an MSA-168 long-haul transponder whose electrical input is the OTN OTU4 signal.

 

The TPOT424. Source: AppliedMicro

The second design, the TPOT424, takes in an OTU4 signal made up of payload and overhead components. The overhead, which includes a forward error correction (FEC) code, is terminated - errors are corrected and signal measurements made - before the payload is put into a new OTU4 frame and a fresh overhead, including a new FEC scheme, is applied.

Both the TPOT414 and TPOT424 use standard FEC from the ITU-T G.709 standard. Separate devices in the optical module are needed if more powerful FECs are used. AppliedMicro says it will support more powerful FECs in future 100Gbps OTN devices.

“These [the TPOT414 and TPOT424] are the bulk of the emerging market and are the most needed components to start with,” says Pedersen.

The 100Gbps OTN muxponder also supports the multiplexing function, including support for 10GbE and 40GbE, OC-192 and OC-768 SONET/SDH, and 8Gbps and 10Gbps Fibre Channel signals.

 

What next?

Pedersen says there is now significant demand for the company's 100Gbps OTN designs as vendors prepare to launch systems supporting 100Gbps interfaces in 2011 and 2012. These include packet optical transport platforms and 100Gbps IP router line cards.

“The real market demand is for simple systems - the transponder and interfaces to the routers,” says Pedersen. “But at the same time there are many vendors working on packet optical transport platforms.”

The company does not rule out developing ASSP designs that support 100Gbps OTN.


MultiPhy eyes 40 and 100 Gigabit direct-detect and coherent schemes

Visiting Israeli start-up MultiPhy at its office in Ness Ziona, near Rehovot, involves dancing around boxes. “We are about to move,” apologises Ronen Weinberg, director of product management at MultiPhy. But the company will not have to travel far. It is crossing buildings in the same Ness Ziona Science Park, moving in next to Finisar’s Israeli headquarters.

 

MultiPhy's Avi Shabtai (left) and Ronen Weinberg

MultiPhy is developing transceiver designs to boost the transmission performance of metro and long-haul 40 and 100 Gigabit-per-second (Gbps) links. The start-up is aiming its advanced digital signal processing (DSP) chips at direct detection and coherent-based modulation schemes.

“We are the only company, as far as we know, who is doing DSP-based semiconductors for the 40G and 100G direct-detect world,” says Avi Shabtai, CEO of MultiPhy.

At 40Gbps the main direct-detection schemes are differential phase-shift keying (DPSK) and differential quadrature phase-shift keying (DQPSK), while at 100Gbps several direct-detect modulation schemes are being considered. “The fact that we are doing DSP at 40G and 100G enables us to achieve much better performance than regular hard-detection technology,” says Shabtai.

Established in 2007, the fabless semiconductor start-up raised US$7.2m in its latest funding round in May. MultiPhy is targeting its physical layer chips at module makers and system vendors. “While there is a clear ecosystem involving optical module companies and systems vendors, there is a lot of overlap,” says Shabtai. “You can find module companies that develop components; you can find system companies that skip the module companies, buying components to make their own line cards.”

MultiPhy’s CMOS chips include high-speed analogue-to-digital converters (ADC) and hardware to implement the maximum-likelihood sequence estimation (MLSE) algorithm. The company is operating the MLSE algorithm at “tens of gigasymbols-per-second”, says Shabtai. “We believe we are the only company implementing MLSE at these speeds.”

MultiPhy's office is alongside Finisar's Israeli headquarters

MultiPhy will not disclose the exact sampling rate but says it is sampling at the symbol rate rather than at the Nyquist rate of double the symbol rate. Since commercial ADCs for 100Gbps have been announced that sample at 65Gsample/s, this suggests MultiPhy is sampling at up to half that rate.
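The inference can be sketched numerically. The ~32.5 Gbaud symbol rate below is an assumed round figure chosen to be consistent with the announced 65Gsample/s ADCs; MultiPhy has not disclosed its actual rate.

```python
# Sketch of the sampling-rate inference. The symbol rate here is an
# assumption for illustration, not a MultiPhy figure.

def adc_rate_gsps(symbol_rate_gbaud, samples_per_symbol):
    """ADC sampling rate in Gsample/s for a given oversampling factor."""
    return symbol_rate_gbaud * samples_per_symbol

nyquist = adc_rate_gsps(32.5, 2)        # two samples per symbol
symbol_rate = adc_rate_gsps(32.5, 1)    # one-sample-per-symbol approach

assert nyquist == 65.0        # matches the 65 Gsample/s ADC announcements
assert symbol_rate == 32.5    # "up to half that rate"
```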

MLSE is used to compensate for the non-linear impairments of fibre transmission, to improve overall transmission performance. “We implement an anti-aliasing filter at the input to the ADC and we use the MLSE engine to compensate for impairments due to the low-bandwidth sampling,” says Shabtai.

 

“There is a good chance that 100Gbps will leapfrog 40Gbps coherent deployments”

Avi Shabtai, MultiPhy


MultiPhy benefits from using one-sample-per-symbol in terms of simplifying the chip design and its power consumption, but the MLSE algorithm must counter the resulting distortion. Shabtai claims the result is a significant reduction in power consumption compared to the traditional two-samples-per-symbol approach: “Tens of percent – I won’t say the exact number but it is not 10 percent.”

Other chip companies implementing MLSE designs for optical transmission include CoreOptics, which was acquired by Cisco in May, and Clariphy. (See Oclaro and Clariphy)

Does using MLSE make sense for 40Gbps DPSK and DQPSK?

“If you use DSP for DQPSK at 40Gbps you can significantly improve polarisation mode dispersion tolerance, the limiting factor today of DQPSK transceivers,” says Shabtai.  MultiPhy expects the 40 Gigabit direct-detect market to shift towards DQPSK, accounting for the bulk of deployments in two years’ time.

 

Market applications

MultiPhy is delivering two solutions: for 40 and 100Gbps direct-detect, and 40 and 100Gbps coherent designs. The company has not said when it will deliver products but hinted that first it will address the direct-detect market and that chip samples will be available in 2011.

Not only will the samples enhance the reach of DQPSK-modulation based links, they will also allow the optical component specifications to be relaxed. For example, cheaper 10Gbps optical components can be used which, says MultiPhy, will reduce total design cost by “tens of percent”.

This is noteworthy, says Shabtai, as the direct-detect markets are increasingly cost-sensitive. “Coherent is being positioned as the high-end solution, and there will be pressure on the direct-detect market to show lower cost solutions,” he says.

 

MultiPhy is eyeing two 100Gbps spaces

MultiPhy’s view is that direct-detect modulation schemes will be deployed for quite some time due to their price and power advantage compared to coherent detection.

Another factor against 40Gbps coherent technology will be the price difference between 40Gbps and 100Gbps coherent schemes. “There is a good chance that 100Gbps will leapfrog 40Gbps coherent deployments,” he says. “The 40Gbps coherent modules will need to go a long way to get to the right price.”  MultiPhy says it is hearing about the expense of coherent modules from system vendors and module makers, as well as industry analysts.

 

Metro and long-haul

The company says it has received several requests for 40Gbps and 100Gbps direct-detect schemes for the metro because of that segment's sensitivity to cost and power consumption. “We are getting to the point in optical communications where one solution does not fit all – that the same solution for long-haul will also suit metro,” says Shabtai.

He believes 100Gbps coherent will become a mainstream solution but that it will take time for the technology to mature and its costs to come down. It will thus take time before 100Gbps coherent expands beyond long-haul and into the metro. He also expects a different 100Gbps coherent solution to be used in the metro. “The requirements are different – in reach, in power constraints,” he says. “The metro will increasingly become a segment, not only for direct-detect but also for coherent.”

 

Coherent: Already a crowded market

There are at least a dozen companies actively developing silicon for coherent transmission, while half a dozen leading system vendors are developing designs in-house. In addition, no-one really knows when the 100Gbps market will take off. So how does MultiPhy expect to fare given the fierce competition and uncertain time-to-revenues?

“It is very hard to predict the exact ramp up to high volumes,” says Shabtai. “At the end of the day, 100Gbps will come instead of 10Gbps and when people look back in five or six years’ time, they will say: ‘Gee, who would have expected so much capacity would have been needed?’.”

The big question mark is when coherent technology will ramp, which explains why MultiPhy is also targeting next-generation direct-detect schemes with its technology. “We cannot come to market doing the same thing as everyone else,” says Shabtai. “Having a solution that addresses power consumption based on one-sample-per-symbol gives us a significant edge.”

MultiPhy admits it has received greater market interest following Cisco’s acquisition of CoreOptics. “While Cisco said it would fulfil all previous commitments, still it worried some of CoreOptics’ customers,” says Shabtai. The acquisition also tells Shabtai something else: 100Gbps coherent is a strategic technology.

Did Cisco consider MultiPhy as a potential acquisition target? “First, I can’t comment, and I wasn’t at the company at the time,” says Shabtai.

As for design wins, Shabtai says MultiPhy is in “advanced discussion” with several leading module and system vendor companies concerning its 40Gbps and 100Gbps direct-detect and coherent technologies.



Further reading

See Opnext's multiplexer IC plays its part in 100Gbps trial


Cisco's P-OTS: Denser and distributed

Cisco Systems’ carrier packet transport (CPT) product family adds metro packet optical transport to its existing switch and router offerings.

Cisco claims the CPT is its second-generation packet optical transport system (P-OTS), complementing the ONS 15454. But some analysts view the CPT as the vendor’s first true packet optical transport product.

 

"This announcement is an acknowledgement that P-OTS equipment is important and that operators are insisting on it"

Sterling Perrin, Heavy Reading


The CPT family comprises the CPT 200 and CPT 600 platforms, while the CPT 50 port extension shelf enables the CPT products to be implemented as a distributed switch architecture.

Gazettabyte spoke to Stephen Liu, manager of service provider marketing at Cisco Systems, about the announcement and asked three analysts about the significance of Cisco’s CPT, how the product family advances packet optical transport, and how the platforms will benefit operators.

 

Carrier packet transport family

The CPT platforms are aimed at operators transitioning their metro networks from traditional SONET/SDH to packet-based transport.

Cisco says the CPT is its second-generation P-OTS. A first generation P-OTS supports dense wavelength division multiplexing (DWDM) with some Ethernet capability. “The truly integrated P-OTS that unites the simplicity of optical delivery with packet routing is in the second generation,” says Cisco’s Liu.

Market research firm, Heavy Reading, defines P-OTS as a platform that combines SONET/SDH, connection-oriented Ethernet, DWDM and, depending on where the platform is used within the network, also optical transport network (OTN) switching and reconfigurable optical add-drop multiplexers (ROADMs). The global P-OTS market will total $870 million in 2010, says Heavy Reading.

The CPT combines DWDM, OTN, Ethernet, multi-protocol label switching – transport profile (MPLS-TP) and ROADMs. MPLS-TP is a stripped down version of the multi-protocol label switching (MPLS) protocol and is used for point-to-point communication. MPLS-TP’s ability to interoperate with IP-MPLS allows operators to combine packet-based technology with transport control in the access and aggregation part of the network, says Cisco.

So what is new with the introduction of the CPT platforms? “The ability to do high-density packet optical transport with MPLS-TP,” says Liu.  

Cisco has fitted 160 Gigabit-per-second (Gbps) switching capacity into the two-rack-unit CPT 200 platform and 480Gbps into the six-rack-unit CPT 600. The respective platform port counts are 176 and 352 Gigabit Ethernet (GbE) ports, says Liu.

Cisco also stresses the functionality integrated into the dense platforms. “We have ROADMs coming together with transponders that do the electrical-to-optical conversion, and the TDM/Ethernet switching functions,” says Liu. “It takes about 30 inches of ROADM/transponder and TDM/Ethernet switching functions on separate platforms; with the CPT it is condensed into 10.5 inches of rack space.”

The result, says Liu, is a 60% operational expense (OpEx) saving in power consumption, cooling and space.  Cisco also claims that unifying the management of the optical and packet transport domains will result in a 20% OpEx saving.

The CPT 50 satellite shelf complements the CPT platforms. The CPT 50 has 44 GbE ports and four 10GbE uplink ports. “The shelf can be deployed locally next to a CPT platform or up to 80km away, but from a management point-of-view it all looks like a single box,” says Liu.

The platforms do not support 40 or 100Gbps interfaces but that is part of the product roadmap, says Liu. Earlier this year, Cisco acquired 40 and 100 Gigabit transport specialist, CoreOptics. Nor will the platform family be limited to the metro. “Long-haul opportunities are certainly open to us,” says Liu.

Cisco says that the CPT platforms are being trialled and will be available from 1Q of 2011.  Several large operators including Verizon, XO Communications and BT are in various stages of platform evaluation.

 

Analysts’ comments

Sterling Perrin, senior analyst at Heavy Reading 

We believe the CPT is Cisco’s most significant optical announcement since its acquisition spree at the beginning of the decade.

Cisco has always positioned its legacy product, the ONS 15454, as packet transport but really it is a multi-service provisioning platform (MSPP) – or as Cisco calls it, a multi-service transport platform (MSTP) – with SONET/SDH and DWDM. We have not counted that as a P-OTS. What Cisco is doing now is entering the [P-OTS] market.

Cisco is an IP router and Ethernet switch company and is strong on IP-over-DWDM. It has pushed that story to operators for years and, while that has been happening, the packet optical transport trend has been gaining steam. Vendors have either used P-OTS for next-generation networks or have had a dual strategy of switches and routers and P-OTS. Cisco has always been in the switch-router space. This announcement is an acknowledgement that P-OTS equipment is important and that operators are insisting on it.

Cisco will be competitive with the CPT based on its newness. The density looks impressive – 480Gbps for the six-rack-unit and 160Gbps for the two-rack-unit platform. But this is a generational thing; in time, as everyone else releases their next product, they will also have a dense platform. But for now it is a differentiator. The remote shelf is also interesting but it is unclear to what degree it will sway operators.

As for the operators mentioned in the Cisco press release, Verizon has already picked Fujitsu and Tellabs as the P-OTS suppliers for its metro and regional networks. The big opportunity with Verizon is in the core, and the first two CPT platforms are not for core.

Mention of BT is also interesting as the operator is in favour of the opposite approach, based on switches and routers from Alcatel-Lucent and Juniper, and has moved away from P-OTS. XO is probably the most likely operator [of the three mentioned] to adopt the platform and already uses Cisco’s ONS 15454. 

The opportunity for Cisco is protecting the ONS 15454 customer base that is looking to move from MSPPs to packet optical transport.

Heavy Reading believes the standalone DWDM and MSPP markets are declining, but will remain large markets for the next two years. Accordingly, it makes sense for Cisco to continue supporting the legacy product line.

 

Eve Griliches, managing partner, ACG Research

The CPT is more along the lines of a purpose-built P-OTS than some variations that have come to market. It has all the requirements a P-OTS should have, including a hybrid switch fabric that supports packet and OTN. I suspect the packet functionality is very good, and possibly better than other transport vendors have delivered, but the operators are still testing and they will speak soon. I do know that operators I've spoken with are already very impressed with what they’ve seen.

 

"Operators I've spoken with are already very impressed with what they’ve seen"

Eve Griliches, ACG Research


In terms of how the CPT will benefit operators, the CPT is a metro aggregation P-OTS box, and it will have to compete with Tellabs and Fujitsu who have been shipping equipment for the metro for two years.  But Cisco will likely bring better packet functionality, which is what operators have been waiting for.

 

Rick Talbot, senior analyst, transport and routing infrastructure, Current Analysis

Cisco is introducing a product into a space recently defined by other vendors – packet-based access/aggregation devices for backhaul, currently mobile backhaul. Example devices are the Alcatel-Lucent 1850 TSS-100, ECI Telecom’s BG-64 and the Ericsson OMS 1410.

 

"The CPT will likely blur the line between metro P-OTS and packet-based access/ aggregation devices"

Rick Talbot, Current Analysis


CPT brings quite a significant advantage in port density and packet-switching capacity. The CPT 200’s 160Gbps capacity is twice that of the OMS 1410, the current leader in that category. The CPT 600 boasts the capacity of a full metro P-OTS in a chassis the size of a small MSPP. From Cisco’s perspective, the CPT product line is not about introducing a new access/aggregation device but extending the metro architecture closer to cell towers and end-users.

The CPT will likely blur the line between metro P-OTS and packet-based access/aggregation devices. It has a modest size and power consumption. It also extends MPLS, in the form of MPLS-TP, to the very edge of the operator’s network, enabling a single end-to-end packet-forwarding method.

The high capacity and low-power consumption of the CPT will, of course, save operators OpEx and CapEx.  In addition, the platform extends a single connection-oriented management view to the end-user site, minimising management expense.

The flexibility of the platform will further benefit the operator if and when the operator deploys cache content storage at the network edge. But such deployment of servers beyond the central office remains to be seen.

 

Related links:

See also Intune Networks' packet optical transport platform


Xilinx's 400 Gigabit Ethernet FPGA

Xilinx has detailed the latest member of its 28nm CMOS Virtex-7 FPGA family, which will support 400 Gigabit Ethernet on a single device. The Virtex-7HT completes the Virtex-7 range, joining the Virtex-7T and Virtex-7XT product families announced in June.

 

A single FPGA will support 400 Gigabit Ethernet duplex traffic. The FPGA can also support 4x100Gig MACs and 4x150Gbps Interlaken interfaces. Source: Xilinx

Why is it important?

Xilinx says its switch and router customers are more than doubling the traffic capacity of their platforms every three years. “They are looking for silicon that will support a doubling of capacity within the same form-factor and the same power budget,” says Giles Peckham, EMEA marketing director at Xilinx.  

An FPGA has an advantage when compared to an application-specific standard product (ASSP) chip or an ASIC: being programmable and a volume-manufactured device, it is easier for an FPGA design to contend with changes in standards and the escalating cost of implementing chip designs in ever-finer CMOS geometries.

The Virtex-7HT will support 28 Gigabit-per-second (Gbps) transceivers (serialiser/deserialisers, or serdes). Used in a four-channel configuration, the transceivers can implement a 100Gbps interface. Indeed, the largest member of the Virtex-7HT family - the XC7VH870T - will have 16 x 28.05Gbps transceivers, enabling 4x100Gbps or even a 400 Gigabit Ethernet interface.
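A quick tally of the XC7VH870T's fast-transceiver bandwidth shows how the 4x100Gbps claim adds up (a sketch using the figures quoted above; the four-lanes-per-port grouping is the conventional arrangement for 100Gbps interfaces).

```python
# Aggregate serial bandwidth of the XC7VH870T's fast transceivers.
lanes = 16
lane_rate = 28.05                 # Gbps per transceiver

aggregate = lanes * lane_rate     # raw serial bandwidth
ports_100g = lanes // 4           # four 28Gbps lanes per 100Gbps port

assert round(aggregate, 1) == 448.8   # enough capacity for 4x100Gbps...
assert ports_100g == 4                # ...or one 400 Gigabit interface
```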

The 28Gbps transceivers will be used to interface to optical modules such as the emerging CFP2 pluggable form-factor. The CFP2 multi-source agreement is expected to be ratified in the second half of 2011, with modules starting to ship in the second half of 2012, says Xilinx.

 

“Network processors and ASICs are typically a [CMOS] process node or two behind us"

Giles Peckham, Xilinx


And with an additional 72 13.1Gbps transceivers on-chip, the XC7VH870T will have sufficient input-output (I/O) to support bi-directional 400 Gigabit Ethernet traffic. The FPGA's lower-speed 13.1Gbps serdes are included to interface to network processors (NPUs) or ASICs that only support lower-speed transceivers. “Network processors and ASICs are typically a [CMOS] process node or two behind us – partly because of cost – such that they end up at a technology disadvantage, as in transceiver speed,” says Peckham.

The spare 13.1Gbps transceivers – only 40 of the 72 are needed for the 400 Gigabit Ethernet port – will enable the FPGA to interface to other chips.

Xilinx says it will be at least a year and possibly 18 months before samples of the Virtex-7HT FPGA family become available. But it is making the Virtex-7HT announcement now because it has successfully tested the 28Gbps transceiver design.

 

Front panel evolution from 48 SFP+ to 4 CFPs to 8 CFP2s. Source: Xilinx

 

What has been done

There are three devices in the Virtex-7HT family, with 4, 8 and 16 28Gbps transceivers respectively. Xilinx claims this is four times the transceiver count of any competing 28nm FPGA detailed to date. But Peckham admits that additional announcements from competitors are inevitable before the Virtex-7HT devices become available in 2012.

In September Altera announced that it had successfully demonstrated a 25Gbps transceiver test chip. And in November, Intel and Achronix Semiconductor formed a strategic relationship that will allow the FPGA start-up to use Intel's leading-edge 22nm CMOS manufacturing process.

The three Virtex-7HT FPGAs also come with different amounts of programmable logic cells, memory blocks and Xilinx’s XtremeDSP building blocks tailored for digital signal processing.

Xilinx says meeting the CEI-28G electrical interface jitter specification has proved challenging.  At 10 Gigabit the signal period is 100 picoseconds (ps) and the jitter allowance is 35ps, while the signal period at 28Gbps is 35ps. “When you realise the jitter spec on the 10 Gigabit interface is the same as the full period in the 28 Gigabit spec – 35 picoseconds – there is quite a lot of work to be done in reducing the jitter when migrating to 28 Gigabit,” says Peckham.
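Peckham's comparison can be reproduced with simple unit-interval arithmetic, sketched below.

```python
# Unit-interval arithmetic behind the jitter comparison (sketch).
def unit_interval_ps(bit_rate_gbps):
    """Duration of one bit period in picoseconds."""
    return 1000.0 / bit_rate_gbps

ui_10g = unit_interval_ps(10.0)   # 100 ps bit period at 10Gbps
ui_28g = unit_interval_ps(28.0)   # ~35.7 ps bit period at 28Gbps

# The entire 28Gbps bit period is about the same as the 35 ps
# jitter allowance at 10 Gigabit.
assert ui_10g == 100.0
assert round(ui_28g, 1) == 35.7
```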

Xilinx uses pre-emphasis techniques on the signals before they are transmitted across the printed circuit board to reduce loss. In addition, the FPGA maker has enhanced the noise isolation between the FPGA's digital and analogue CMOS circuitry. “The short spiky current loads in the digital circuitry can impact the noise in the analogue circuitry and increase the jitter,” says Peckham.

 

What next?

Xilinx has created a 28Gbps test-vehicle transceiver. This allows Xilinx to validate and fine-tune the design. The rest of the FPGA design needs to be completed, while another design iteration of the 28Gbps test vehicle is likely. “We have a lot of things to do yet,” says Peckham.

Meanwhile system vendors can start to design their systems based on the FPGA family in advance of samples that are expected in the first half of 2012.

  • For a video demonstration of the 28Gbps test vehicle, click here.

Fujitsu Labs adds processing to boost optical reach

Fujitsu Labs has developed a technique that compensates for non-linear effects in a coherent receiver. The technique promises to boost the reach of 100 Gigabit-per-second (Gbps) and future higher-speed optical transmission systems.

 

“That is one of the virtues of the technology; it is not dependent on the modulation format or the bit rate”

Takeshi Hoshida, Fujitsu Labs

 

Why is it important?

Much progress has been made in developing digital signal processing techniques for 100Gbps coherent receivers to compensate for undesirable fibre transmission effects such as polarisation mode dispersion and chromatic dispersion (See Performance of Dual-Polarization QPSK for Optical Transport Systems).  Both dispersions are linear in nature and are compensated for using linear digital filtering. What Fujitsu Labs has announced is the next step: a digital filter design that compensates for non-linear effects.

A key challenge facing optical-transmission designers is extending the reach of 100Gbps transmissions to match that of 10Gbps systems. In the simplest sense, reach falls with increased transmission speed because the shorter-pulsed signals contain fewer photons. Channel impairments also become more prominent the higher the transmission speed.

Engineers can increase system reach by boosting the optical signal-to-noise ratio but this gives rise to non-linear effects in the fibre.  “When the signal power is higher, the refractive index of the fibre changes and that distorts the phase of the optical signal,” says Takeshi Hoshida, a senior researcher at Fujitsu Labs.

The non-linear effect, combined with polarisation mode dispersion and chromatic dispersion, interacts with the signal in a complicated way. “The linear and non-linear effects combine to result in a very complex distortion of the received signal,” says Hoshida.

Fujitsu has developed a non-linear distortion compensation technique that recovers 2dB of the transmitted optical signal. Moreover, the compensation technique will equally benefit 400 Gigabit or 1 Terabit channels, says Hoshida: “That is one of the virtues of the technology; it is not dependent on the modulation format or the bit rate.”

Fujitsu plans to extend the reach of its long-haul optical transmission systems using the technique. The 2dB equates to a 1.6x distance improvement. But, as Hoshida points out, this is the theoretical benefit. In practice, the benefit is less since a greater transmission distance means the signal passes through more amplifier and optical add-drop stages that introduce their own signal impairments.
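The 1.6x figure follows from converting decibels to a linear ratio, as the sketch below shows; it assumes, as the article implies, that reach scales directly with the recovered signal margin.

```python
# Converting the quoted 2 dB gain into the 1.6x reach figure (sketch).
def db_to_linear(db):
    """Convert a decibel power ratio to a linear ratio."""
    return 10 ** (db / 10.0)

reach_multiple = db_to_linear(2.0)
assert round(reach_multiple, 2) == 1.58   # i.e. roughly a 1.6x improvement
```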

 

Method used

Fujitsu Labs has implemented a two-stage filtering block. The first filter stage is linear and compensates for chromatic dispersion, while the second unit counteracts the fibre's non-linear effect on the optical signal. To achieve the required compensation, Fujitsu Labs uses multiple filter-stage blocks in cascade.

According to Hoshida, optical phase is rotated according to the optical power: “If the power is higher, the more phase rotation occurs – that is the non-linear effect in the fibre.”  The effect is distributed, occurring along the length of the fibre, and is also coupled with chromatic dispersion.  “Chromatic dispersion changes the optical intensity waveform, and that intensity waveform induces the non-linear effect,” says Hoshida. “Those two problems are coupled to each other so you have to solve both.”

Fujitsu tackles the problem by applying a filter stage to compensate for each optical span – the fibre segment between repeaters. For a terrestrial transmission system there can be as many as 20 or 30 such spans. “But [using a filter stage per span] is rather inefficient,” says Hoshida.  By inserting a weighted-average technique, Fujitsu has reduced by a factor of four the filter stages needed.

Weighted-averaging is a filtering operation that smoothes the signal in the time domain.  “It is not necessary to change the weights [of the filter] symbol-by-symbol; it is almost static,” says Hoshida. Changes do occur but infrequently, depending on the fibre’s condition such as changes in temperature, for example.

Fujitsu has been surprised by how effective the weighted-averaging technique is. The technique and the subsequent 4x reduction in filter stages cut the hardware needed to implement the compensation by 70%. The reason it is not the full 75% is that extra hardware for the weighted averaging must be added to each stage.
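The 70%-versus-75% arithmetic can be checked with a toy model. The 20-span count comes from the article; the 20% per-stage overhead is back-solved from the quoted numbers for illustration, not a Fujitsu figure.

```python
# Toy model of the filter-stage hardware saving (sketch; the per-stage
# overhead is an assumed figure chosen to reproduce the quoted 70%).
stages_before = 20
stages_after = stages_before / 4          # 4x fewer stages

overhead = 0.2                            # extra weighting hardware per stage
hw_before = stages_before * 1.0
hw_after = stages_after * (1.0 + overhead)

saving = 1 - hw_after / hw_before
assert round(saving, 2) == 0.70           # 70%, short of the naive 75%
```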

 

What next?

Fujitsu has demonstrated that the technique is technically feasible but practical issues remain, such as power consumption. According to Hoshida, the power consumption is too high even using an advanced 40nm CMOS process, and implementation will likely require a 28nm process. Fujitsu thus does not expect the technique to appear in commercial systems before 2015.

There are also further optical performance improvements to be claimed, says Hoshida, by addressing cross-phase modulation. This is another non-linear effect where one lightpath affects the phase of another.

Fujitsu Labs has developed two algorithms to address cross-phase modulation, which is a more challenging problem since it is modulation-dependent.

 


 


Oclaro points its laser diodes at new markets

Yves LeMaitre has experienced much over the course of the last decade working in the optical components industry. He has been CEO of a start-up during the optical boom, lived through acquisitions, and has undertaken business development in telecom and non-telecom markets.

 

“To succeed in any market ... you need to be the best at something, to have that sustainable differentiator”

 Yves LeMaitre, Oclaro

 

 

 

Now LeMaitre is executive vice president at Oclaro, managing the company’s advanced photonics solutions (APS) arm. The APS division is tasked with developing non-telecom opportunities based on Oclaro’s high-power laser diode portfolio, and accounts for 10%-15% of the company’s revenues.

“The goal is not to create a separate business,” says LeMaitre. “Our goal is to use the infrastructure and the technologies we have, find those niche markets that need these technologies and grow off them.”

Recently Oclaro opened a design centre in Tucson, Arizona, that adds packaging expertise to its existing high-power laser diode chip business. The company bolstered its laser diode product line in June 2009 when it gained the Newport Spectra Physics division in a business swap. “We became the largest merchant vendor for high-power laser diodes,” says LeMaitre.

The products include single laser chips, laser arrays and stacked arrays that deliver hundreds of watts of output power. “We had all that fundamental chip technology,” says LeMaitre. “What we have been less good at is packaging those chips - managing the thermals as well as coupling that raw chip output power into fibre.”

The new design centre is focussed on packaging, which typically must be tailored for each product.

 

Laser diodes

There are three laser types that use laser diodes, either directly or as ‘pumps’:

  • Solid-state lasers, known as diode-pumped solid-state (DPSS) lasers.
  • Fibre lasers, where the fibre is the medium that amplifies light.
  • Direct diode lasers - here the semiconductor diode itself generates the light.

All three types use laser diodes that operate in the 800-980nm range. Oclaro has much experience in gallium arsenide pump-diode designs for telecom that operate at 920nm wavelengths and above.

Laser diode designs for non-telecom applications are also gallium arsenide-based but operate at 800nm and above. They are also scaled-up designs, says LeMaitre: “If you can get 1W on a single mode fibre for telecom, you can get 10W on a multi-mode fibre.”  Combining the lasers in an array allows 100-200W outputs. And by stacking the arrays while inserting cooling between the layers, several hundreds of watts of output power are possible.

The lasers are typically sold as packaged and cooled designs, rather than as raw chips. The laser beam can be collimated to precisely deliver the light, or the beam may be coupled when fibre is the preferred delivery medium.

“The laser beam is used to heat, to weld, to burn, to mark and to engrave,” says LeMaitre. “That beam may be coming directly from the laser [diode], or from another medium that is pumped by the laser [diode].”  Such designs require specialist packaging, says LeMaitre, and this is what Oclaro secured when it acquired the Spectra Physics division.

 

Applications

Laser diodes are used in four main markets which Oclaro values at US$800 million a year.

One is the mature, industrial market. Here lasers are used for manufacturing tasks such as metal welding and metal cutting, marking and welding of plastics, and scribing semiconductor wafers.

Another is high-quality printing where the lasers are used to mark large printing plates. This, says LeMaitre, is a small specialist market.

Health care is a growing market for lasers which are used for surgery, although the largest segment is now skin and hair treatment.

The final main market is consumer where vertical-cavity surface-emitting lasers (VCSELs) are used. The VCSELs have output powers in the tens or hundreds of milliwatts only and are used in computer mouse interfaces and for cursor navigation in smartphones.

“These are simple applications that use lasers because they provide reliable, high-quality optical control of the device,” says LeMaitre. “We are talking tens of millions of [VCSEL] devices [a year] that we are shipping right now for these types of applications.”

Oclaro is a supplier of VCSELs for Light Peak, Intel’s high-speed optical cable technology to link electronic devices.  “There will be adoptions of the initial Light Peak starting the end of this year or early next year, and we are starting to ramp up production for that,” says LeMaitre. “In the meantime, there are many alternative [designs] happening – the market is extremely active – and we are talking to a lot of players.” Oclaro sells the laser chips for such interface designs; it does not sell optical engines or the cables.

Is Oclaro pursuing optical engines for datacom applications, linking large switch and IP router systems? “We are actively looking at that but we haven’t made any public announcements,” he says.

 

Market status

LeMaitre has been at Oclaro since 2008 when Avanex merged with Bookham (to become Oclaro). Before that, he was CEO at optical component start-up, LightConnect.

How does the industry now compare with that of a decade ago?

“At that time [of the downturn] the feeling was that it was going to be tough for maybe a year or two but that by 2002 or 2003 the market would be back to normal,” says LeMaitre. “Certainly no-one expected the downturn would last five years.” Since then, nearly all of the start-ups have been acquired or have exited; Oclaro itself is the result of the merger of some 15 companies.

“People were talking about the need for consolidation, well, it has happened,” he says.  Oclaro’s main market – optical components for metro and long haul transmission – now has some four main players. “The consolidation has allowed these companies, including Oclaro, to reach a level of profitability which has not been possible until the last two years,” says LeMaitre.

Demand for bandwidth has continued even with the recent economic downturn, and this has helped the financial performance of the optical component companies.

“The need for bandwidth has still sustained some reasonable level of investment even in the dark times,” he says. “The market is not as sexy as it was in those [boom] days but it is much more healthy; a sign of the industry maturing.”

Industry maturity also brings corporate stability which LeMaitre says provides a healthy backdrop when developing new business opportunities.  

The industrial, healthcare and printing markets require greater customisation than optical components for telecom, he says, whereas the consumer market is the opposite, being characterised by vastly greater unit volumes.

“To succeed in any market – this is true for this market and for the telecom market – you need to be the best at something, to have that sustainable differentiator,” says LeMaitre. For Oclaro, its differentiator is its semiconductor laser chip expertise. “If you don’t have a sustainable differentiator, it just doesn’t work.” 


Packet optical transport: Hollowing the network core

Intune Networks has developed an optical packet switching and transport (OPST) system that effectively turns fibre into a distributed switch.

The platform enables a fully-meshed metropolitan network with software support for web-based services, claims the Irish start-up.

“What we have designed allows for the sharing of the same fibre switching assets across multiple services in the metro,” says Tim Fritzley, Intune’s CEO.

The company is in talks with several operators about its OPST system, which is being used for a nationwide network in Ireland. The system is also part of an EC seventh framework project that includes Spanish operator Telefónica.

 

OPST architecture

Intune’s OPST system, dubbed the Verisma iVX8000, uses dense wavelength division multiplexing (DWDM) technology but with a twist. Each wavelength is assigned to a particular destination port, over which the data is transmitted in bursts. The result is an architecture that uses both wavelength-division and time-division multiplexing.

To enable the approach, Intune has developed a control algorithm that can switch and lock a tunable laser’s wavelength “in nanoseconds”. Such rapid laser switching enables wavelength addressing - assigning a dedicated wavelength to each destination port.

As packets arrive at the iVX8000, they are ‘coloured’ and queued before being sent on the required wavelength to their destination. In effect, packets are routed at the optical layer, in contrast to traditional systems where traffic is packed onto a fixed, predefined point-to-point lightpath.

The packets are sent in bursts based on their class-of-service. Intune uses a proprietary framing scheme for transmission, with Ethernet frames restored at the destination. At the input port, all packets are queued based on their destination wavelength and class-of-service. The scheduler, which composes the bursts, picks data to transmit from the queues based on class; the data need not be aligned to frame boundaries.
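The queue-and-burst composition just described can be sketched as follows. This is a minimal illustration: the queue layout, class numbering and burst sizing are assumptions for the sketch, not Intune’s implementation. Note how capacity left unused by the higher class falls through to lower-priority traffic.

```python
from collections import deque

# Per-destination queues, one per class of service (0 = highest priority).
# In Intune's scheme each destination port has a fixed wavelength.
queues = {("lambda1", 0): deque(), ("lambda1", 1): deque(),
          ("lambda2", 0): deque(), ("lambda2", 1): deque()}

def enqueue(wavelength, cos, payload):
    """Queue a packet by destination wavelength and class-of-service."""
    queues[(wavelength, cos)].append(payload)

def compose_burst(wavelength, burst_size):
    """Fill a burst for one destination wavelength, highest class first;
    spare capacity is handed down to lower-priority queues."""
    burst = []
    for cos in sorted(c for (w, c) in queues if w == wavelength):
        q = queues[(wavelength, cos)]
        while q and len(burst) < burst_size:
            burst.append(q.popleft())
    return burst

enqueue("lambda1", 1, "best-effort-A")
enqueue("lambda1", 0, "premium-B")
enqueue("lambda1", 1, "best-effort-C")
print(compose_burst("lambda1", 2))  # ['premium-B', 'best-effort-A']
```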

 

“Instead of assigning an electrical address to a fixed wavelength, we are assigning electrical addresses to dynamic wavelengths”

Tim Fritzley, Intune Networks

 

 

Intune also uses dynamic bandwidth allocation: any bandwidth unused by the higher classes of service is assigned to lower priority traffic. This achieves over 80 percent utilisation of the Ethernet switching and the fibre, says Fritzley.

“You are responding to the dynamic loading of the traffic as it comes in, on a destination-by-destination, colour-by-colour basis,” says Fritzley. “Instead of assigning an electrical address to a fixed wavelength [as with traditional systems], we are assigning electrical addresses to dynamic wavelengths.”

The result is a fully meshed architecture with any transponder able to talk to any other transponder on the network, says Fritzley. 

 

System’s span

The network architecture is arranged as a ring with up to a 300km span. The ring connects up to 16 iVX8000 nodes each comprising four 10 Gigabit-per-second (Gbps) ports and switching hardware. Each port is assigned a particular wavelength, equating to a total switch capacity of 640Gbps.

Intune’s design supports 80 wavelengths even though only 64 are used. The system also uses two optical rings in parallel: the rings run in opposite directions, providing optical protection for each port and effectively doubling overall capacity.
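The capacity figures follow directly from the ring parameters:

```python
nodes = 16            # iVX8000 nodes per ring
ports_per_node = 4    # 10 Gigabit Ethernet ports per node
port_rate_gbps = 10

# One dedicated wavelength per destination port
wavelengths_used = nodes * ports_per_node
switch_capacity_gbps = wavelengths_used * port_rate_gbps

print(wavelengths_used)      # 64 (of the 80-wavelength design)
print(switch_capacity_gbps)  # 640
```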

For the client side interfaces, the iVX8000 uses four 10 Gigabit Ethernet ports. Since transmissions are in bursts, multiple ports can transmit data to the same destination port even though they share the same wavelength.

The system’s 300km span is an artificial value set by Intune to guarantee “plug-and-play” performance. If the individual chassis are less than 65km apart and the total ring is 300km or under, Intune guarantees no DWDM engineering is required.  “We auto-discover all the optical paths and nodes in the network; we automatically adjust all the amplification and set up the dispersion compensation,” says Fritzley. “This saves thousands of engineer-hours and truck rolls.”

Intune points out that it has engineered a 700km network but claims that for distances beyond 1,000km, point-to-point links connecting regions make more sense.

John Dunne, co-founder and CTO of Intune, claims the metro architecture simplifies networking greatly when connecting the network edge to the IP core. “It is different to what is there today because there are no routeing decisions to be made,” says Dunne. “All of the routes pre-exist, and that is because the tunable lasers contain all the colours of all the ports on the ring.”

As a result, setting up a flow of packets between the edge and core involves using a single interface to the ring. “You don’t have to talk to all the [ring’s] elements, you just talk to the ring,” says Dunne. “The ring is pre-engineered so it knows it’s a ring; it also knows how to guarantee the latency, the bandwidth, the jitter of any flow.”

This, says Dunne, is the system’s main merit: the pre-engineered ring hides all the difficulty of building a control layer on top of a dynamic optical and layer-two switching system.

 

Bringing the web into the network

Intune realised that traditional telecom software would struggle to make best use of its distributed optical packet switch architecture. The company has adopted the representational state transfer (REST) software approach for its architecture instead.

“REST is the heart of web services,” says Fritzley. “The reason we did this is that there are hundreds of thousands of programmers that understand how to program it, so you are not into the arcane telecom languages of SNMP and TL1.”  Adopting a 'RESTful' approach, claims Intune, reduces code development by 70 percent.

Moreover, REST is by nature distributed, so it lends itself to supporting distributed transactions across Intune’s switch. “We have put a mini-HTTP server on every card; we do not centralise control inside a node,” says Fritzley. “Every card peers with all of its peer-functions on the ring.”

In terms of the switch's operation, high-level XML commands are used instead of sending low-level instructions to numerous elements. “For example you ask the ring - set up this flow of packets with this bandwidth, this jitter and this delay,” says Dunne. “The ring replies that it can set this up and it performs the low-level stuff internal to the ring."
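A flow request in this style might look as follows. The element names and values here are hypothetical, invented for illustration; the article does not describe Intune’s actual XML schema.

```xml
<!-- Illustrative only: element names are hypothetical, not Intune's schema -->
<flow-request>
  <destination>ring-port-7</destination>
  <bandwidth unit="Mbps">500</bandwidth>
  <jitter-max unit="us">50</jitter-max>
  <delay-max unit="ms">2</delay-max>
</flow-request>
```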

Such a capability will ultimately enable a machine to provision bandwidth for services, and enable machine-to-machine communications, says Intune. It will also enable third-party application developers to use the switch for service provisioning.  This isn’t possible today because there is a lack of control, says Dunne.

“We have a full suite of XML-based interface commands,” he says. “All [the interface commands] would go to the carrier, the carrier would expose a subset to the Googles, the Googles would expose a subset to their application writers, and the application writers would expose a subset to the consumer.”  Were the consumer to send a command to request some bandwidth, the call would be passed through the various layers directly into the switch, all in a controlled manner.

Provisioning of bandwidth in such an automated fashion is possible because Intune’s underlying network is bounded and predictable, says Dunne, with the optical path pre-engineered to work with the data path.

Meanwhile, until XML-based interfaces become more commonplace, Intune uses a code translator that converts the XML commands to SNMP or TL1 to interface with existing systems.

“The ring is pre-engineered so it knows it’s a ring; it also knows how to guarantee the latency, the bandwidth, the jitter of any flow”

John Dunne, Intune Networks

 

Applications

The iVX8000 is being targeted at applications such as cloud computing services and the moving of virtualised environments between data centres. But the real target is using the platform to support multiple services – 3G and 4G wireless backhaul, on-demand IP TV as well as cloud. “No-one can do traffic planning anymore around such services,” says Fritzley.

The platform addresses what one large European operator calls ‘hollowing the core’. The operator wants to simplify its metro network by moving such networking elements as broadband remote access servers (BRASs) to the network edge. These will be connected using a simpler layer-two network that lessens the use of large, expensive IP core routers. “All the IP processing is on the edge and you go edge-to-edge on a flat layer two,” says Fritzley.

 

Market developments

Intune is using its system to enable the Exemplar network in Ireland, a nationwide network backed by the Irish Government. The first phase involves a lab for application development and testing. So far 40 multi-nationals have signed up to use the network. Starting next year, a ring network will be up and running around Dublin, to be followed by a nationwide roll-out in 2013.

The Irish start-up is also part of an EC Seventh Framework research project called MAINS. The project, which started in January, involves Telefónica which is using the iVX8000 to move virtualised resources between data centres depending on user demand and latency requirements.  The project uses XML commands to call for bandwidth from the networking layer. 

Meanwhile, Intune says that it is “deeply engaged” with four to five of the largest operators in North America and Europe.


Google and the optical component industry

Google caused a stir at ECOC by requesting a new 100 gigabit-per-second (Gbps) interface, claiming the existing 100 Gigabit standards fall short of what is needed.

According to a report by Pauline Rigby, Google wants something in between two existing IEEE interface standards. The 100GBase-SR10, which has 10 parallel channels and a 125m span, has too short a reach for Google.

 

“What is good for an 800-pound gorilla is not necessarily good for the industry. It [Google] should have been at the table when the IEEE was working on the standard."

Daryl Inniss, practice leader, components, Ovum

 

The second interface, the 100GBase-LR4, uses four channels multiplexed onto a single fibre and has a 10km reach. The issue here is that Google doesn’t need a 10km reach, and while a single fibre is better than the multi-mode fibre used by the SR10, the interface is costly because of its “gearbox” IC, which translates between ten lanes of 10Gbps and four lanes of 25Gbps. Both IEEE interfaces are also implemented in the bulky CFP form factor.
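The gearbox’s job is pure lane translation: the aggregate rate on each side is the same.

```python
# Host (electrical) side: 10 lanes at 10 Gbps each
host_lanes, host_lane_rate_gbps = 10, 10
# 100GBase-LR4 line (optical) side: 4 lanes at 25 Gbps each
line_lanes, line_lane_rate_gbps = 4, 25

host_total = host_lanes * host_lane_rate_gbps
line_total = line_lanes * line_lane_rate_gbps
print(host_total == line_total == 100)  # True: same 100 Gbps aggregate
```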

 

What Google wants

Google wants optical component vendors to develop a new 100 Gigabit Ethernet multi-source agreement (MSA) that is based on a single-mode interface with a 2km reach, reports Rigby. Such a design would use a ten-channel laser array whose output is multiplexed onto a fibre, a similar laser array-multiplexer arrangement that has already been developed by Santur. Using such a part, the new interface could be developed quickly and cheaply, says Google.

The proposed interface clearly has merits and Google, an important force with an appetite for optics, makes some valid points.  But the industry is developing 4x25Gbps interfaces and while such interfaces may be challenging, no-one doubts they will come. 

 

Google’s next moves

Google has a history of being contrarian if it believes it best serves its business. The way the internet giant designs data centres is one example, using massive numbers of cheap servers arranged in a fault-tolerant architecture.

But there is only so much it can do in-house and developing a new optical interface will require help from optical component players.

Google has the financial muscle to hire an optical component firm to engineer and manufacture a custom interface. A recent example of such a partnership is IBM's work with Avago Technologies to develop board-level optics – or an optical engine – for use within IBM’s POWER7 supercomputer systems.

According to Karen Liu, vice president, components and video technologies at market research firm Ovum, once such an interface is developed, Google could allow others to buy it to help reduce its price. “Remember the Lucent form factor which became a de facto standard but wasn’t originally intended to be?” says Liu. “This approach could work.”

Taking a longer term view, Google could also invest in optical component start-ups. The return may take years and as the experience of the last decade has shown, optical components is a risky business. But Google could encourage a supply of novel, leading-edge technologies over the next decade.

The optical component industry is right to push back with regard to Google’s request for a new 100 Gigabit Ethernet MSA, as Finisar has done. While Google may be an important player that can drive interface requirements, many players have helped frame the IEEE 100Gbps Ethernet standards work. Over the last decade the optical industry has also seen other giant firms try to drive the industry only to eventually exit.

“The industry needs to move on,” says Daryl Inniss, practice leader, components at Ovum.  “What is good for an 800-pound gorilla is not necessarily good for the industry.” Inniss also suggests a simple and effective way Google could have influenced the 100 Gigabit Ethernet MSA work: “It [Google] should have been at the table when the IEEE was working on the standard."


Cisco Systems' coherent power move

Cisco Systems’ acquisition of CoreOptics means the company has largely cornered the coherent market, says Telecom Pragmatics. 

Cisco Systems announced its intent to acquire the optical transmission specialist CoreOptics back in May. CoreOptics has digital signal processing expertise used to enhance high-speed long-haul dense wavelength division multiplexing (DWDM) optical transmission. Cisco’s acquisition values the German company at US $99m.

 

"Let me be clear, we don’t believe 100Gbps serial will dominate the market for a long time, or 40Gbps for that matter"

Mark Lutkowitz, Telecom Pragmatics

 

 

 

“It has become clear that Cisco, with a few exceptions, has cornered the coherent market for 40 Gig and 100 Gig,” says Mark Lutkowitz, principal at market research firm, Telecom Pragmatics, which has published a report on Cisco's move.

Prior to Cisco’s move, several system vendors were working with CoreOptics for coherent transmission technology at 40 and 100 Gigabit-per-second (Gbps). Nokia Siemens Networks (NSN) was one and had invested in the company, another was Fujitsu Network Communications. Telecom Pragmatics believes other firms were also working with CoreOptics including Xtera and Ericsson (CoreOptics had worked with Marconi before it was acquired by Ericsson).

ACG Research in its May report Cisco/ CoreOptics Acquisition: What Does It Mean for the Packet Optical Transport Space? also claimed that the Cisco acquisition would set back NSN and Ericsson and listed other system vendors such as ADVA Optical Networking and Transmode that may have been considering using CoreOptics’ 100Gbps multi-source agreement (MSA) design.

“The mere fact that you have all these companies working with CoreOptics - and we don’t know all of them – says it all,” says Lutkowitz. “This was the company they were initially going to be depending on and Cisco made a power move that was brilliant.” 

With Cisco bringing CoreOptics in-house, these system vendors will need to find a new coherent technology partner. “The next chance would be with a company like Opnext coming out with a sub-system,” says Lutkowitz. “There is no doubt about it – this was a major coup for Cisco.”

For Cisco, the deal is important for its router business more than its optical transmission business. “In terms of transceivers that go into routers and switches it was absolutely essential that Cisco comes up with coherent technology,” says Lutkowitz. Cisco views transport as a low-margin business unlike IP core routers. “This [acquisition] is about protecting Cisco’s bread and butter – the router business,” he says.

The acquisition also has consequences among the router vendors. Alcatel-Lucent has its own 100Gbps coherent technology which it could add to its router platforms. In contrast, the other main router player, Juniper Networks, must develop the technology internally or partner. Telecom Pragmatics claims Juniper has an internal coherent technology development programme.

 

40 and 100 Gig markets

Cisco kick-started the 40Gbps market when it added the high-speed interface on its IP core router and Lutkowitz expects Cisco to do the same at 100Gbps. “But let me be clear, we don’t believe 100Gbps serial will dominate the market for a long time, or 40Gbps for that matter.”

In Telecom Pragmatics’ view, multiple channels of 10Gbps will be the predominant approach. First, 10Gbps DWDM systems are widely deployed and their cost continues to come down. And while Alcatel-Lucent and Ciena already have 100Gbps systems, they remain expensive given the infancy of the technology.  

But with business from large US operators to be won, system vendors must have a 100Gbps optical transport offering. Verizon has an ultra-long-haul request for proposal (RFP), while AT&T has named Ciena as the first domain supplier for its optical and transport equipment, with a second partner still to be announced. And according to ACG Research, Google also has DWDM business.

 

What next?

Besides Alcatel-Lucent, Ciena, Infinera, Huawei, and now Cisco developing coherent technology, several optical module players are also developing 100Gbps line-side optics. These include Opnext, Oclaro and JDS Uniphase. There are also players, such as Finisar, that have yet to detail their plans. Lutkowitz believes that if Finisar is holding off developing 100Gbps coherent modules, it may prove a wise move given the continuing strength of the 10Gbps DWDM market.

Opnext acquired subsystem vendor StrataLight Communications in January 2009 and one benefit was gaining StrataLight’s systems expertise and its direct access to operators. Oclaro made its own subsystem move in July, acquiring Mintera. Oclaro has also partnered with Clariphy, which is developing coherent receiver ASICs.

But Telecom Pragmatics questions the long-term prospects of high-end line-side module/subsystem vendors. “This [technology] is the guts of systems and where the money is made,” says Lutkowitz. “Ultimately all the system vendors will look to develop their own subsystems.”

Lutkowitz highlights other challenges facing module firms. Since they are foremost optical component makers, it is challenging for them to make significant investments in subsystems. He also questions when the 100Gbps market will take off. “Some of our [market research] competitors talk about 2014 but they don’t know,” says Lutkowitz.

But is the trend not that, over time, 40Gbps and 100Gbps modules will gain an increasing share of line-side optics, as has happened at 10Gbps?

That is certainly LightCounting’s view: the market research firm sees Cisco’s move as good news for component and transceiver vendors developing 40 and 100Gbps products. LightCounting argues that with Cisco’s commitment to the technology, other system vendors will have to follow suit, boosting demand for these higher-margin products.

“There will be all types of module vendors but it is possible that going higher in the food chain will not work out,” says Lutkowitz. “There will be more module and component vendors than we have now but all I question is: where are the examples of companies that have gone into subsystems that have done relatively well?”

Opnext is likely to be the next vendor with 100Gbps product, says Lutkowitz, and Oclaro could easily come out with its own offering. “All I’m saying is that there is a possibility that, in the final analysis, systems vendors take the technology and do it themselves.”


AT&T domain suppliers

Date        Domain                                 Partners
Sept 2009   Wireline Access                        Ericsson
Feb 2010    Radio Access Network                   Alcatel-Lucent, Ericsson
April 2010  Optical and transport equipment        Ciena
July 2010   IP/MPLS/Ethernet/Evolved Packet Core   Alcatel-Lucent, Juniper, Cisco

 

The table shows the selected players in AT&T's domain supplier programme announced to date.

AT&T has stated that there will likely be eight domain supplier categories overall so four more have still to be detailed.

Looking at the list, several thoughts arise:

  • AT&T has already announced wireless and wireline infrastructure providers whose equipment spans the access network all the way to ultra long-haul. The networking technologies also address the photonic layer to IP or layer 3.
  • Alcatel-Lucent and Ericsson already play in two domains, while no Asian vendor has yet been selected.
  • One or two more players may be added to the wireline access and optical and transport infrastructure domains but this part of the network is pretty much done.

So what domains are left? Peter Jarich, service director at market research firm Current Analysis, suggests the following:

  • Datacentre
  • OSS/BSS
  • IP Service Layer (IP Multimedia Subsystem, subscriber data management, service delivery platform)
  • Voice Core (circuit, softswitch)
  • Content Delivery (IP TV, etc.)

AT&T was asked to comment but the operator said that it has not detailed any domains beyond those that have been announced.


