Macom readies its silicon photonics platform for 400G

  • Macom has announced a laser-integrated photonic integrated circuit (L-PIC) for the 400G-FR4 standard

  • The company is also working with GlobalFoundries to use the semiconductor foundry’s 300mm wafer silicon photonics process

Vivek Rajgarhia (centre) being interviewed at OFC. Source: Macom.

Macom has detailed its latest silicon photonics chip to meet the upcoming demand for 400-gigabit interfaces within the data centre. 

The chip, a laser-integrated photonic integrated circuit (L-PIC), was unveiled at the OFC show held last month in San Diego. The L-PIC implements the transmitter circuitry for the 400G FR4 2km interface standard.

Backing silicon photonics 

“Five to six years ago, we saw that silicon photonics would have a key role to play in photonics and optical interconnect,” says Vivek Rajgarhia, senior vice president and general manager, lightwave at Macom.

Macom acquired several companies to gain the capabilities needed to become a silicon photonics player. 

In 2014 the company paid $230 million for BinOptics, which provided Macom with etched-facet laser technology that plays a key role in how its L-PIC platform is assembled. Also acquired was the silicon photonics design company, Photonic Controls. In 2015 Macom added FiBest, a packaging specialist, for $60 million.

“We also have the electronics expertise to go alongside [the photonics] to provide chipset solutions,” says Rajgarhia.

 

Today, as a photonics company, if you don’t have a play in silicon photonics, you are legacy 
— Vivek Rajgarhia

 

Laser-integrated PIC 

The biggest challenge in silicon photonics is integrating the laser, says Rajgarhia. Coupling and aligning the laser, especially when developing optical interfaces for the high-volume data centre market, needs to be done in a cost-effective and scalable way, he says.

The L-PIC, a coarse wavelength division multiplexing (CWDM) design, tackles this by having four cavities for the lasers. “Each laser is flip-chipped and inserted into a cavity without any lens or isolator, and without active alignment,” says Rajgarhia. 

The self-alignment is possible by using the etched-facet laser technology from BinOptics. “When you cleave the laser facet, the dimensional control has a lot of play - the tolerance is very high - but with an etched facet, you lithographically define the mechanical dimensions,” he says. “We create a cavity in the silicon that matches the laser’s dimensions.” Macom has also incorporated multiple alignment structures as part of its L-PIC platform to enable the self-alignment.

Macom has already developed the L-PIC for the 100-gigabit CWDM4 standard. “We started with the CWDM4 because it had four wavelengths,” says Rajgarhia. “The CWDM4 is a more challenging design [than the 100-gigabit PSM4 interface] because it requires multiplexing.” 
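
For reference, the four wavelengths Rajgarhia mentions sit on the CWDM4 MSA's 20nm-spaced grid. A small sketch (Python, illustrative only, not Macom code) of the grid and the aggregate rate four 25-gigabit lanes deliver once multiplexed:

```python
# Illustrative sketch: the CWDM4 MSA's four centre wavelengths sit on a
# 20 nm grid, wide enough that uncooled lasers can drift with temperature
# yet stay within their channel.

CWDM4_GRID_NM = [1271, 1291, 1311, 1331]   # centre wavelengths
LANE_RATE_GBPS = 25.78125                  # 100GBASE-R lane rate (NRZ)

def aggregate_rate_gbps(lanes: int, lane_rate: float = LANE_RATE_GBPS) -> float:
    """Total line rate once the lanes are multiplexed onto one fibre."""
    return lanes * lane_rate

spacings = {b - a for a, b in zip(CWDM4_GRID_NM, CWDM4_GRID_NM[1:])}
# spacings == {20}; aggregate_rate_gbps(4) == 103.125 Gb/s
```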

The L-PIC has now been extended to support 100-gigabit channels, to address the single-channel DR and the four-channel 400-gigabit FR4 standards. The modulator bandwidth had to be extended and the laser power is different but the approach - the platform - remains the same, says Rajgarhia.

Macom refers to the L-PIC as a smart device. The electro-absorption modulated lasers (EMLs) used for the FR4 are uncooled. The L-PIC includes ‘structures’ in the silicon such as heaters for tuning the optical elements and photo-detectors that monitor the optical performance. Macom has developed an accompanying micro-controller that sets and controls the device using such structures.

“We have developed software which we give to customers,” says Rajgarhia. “You can type in what extinction ratio you want, what power you want and it sets that up.”
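
The software-driven setup Rajgarhia describes can be pictured as a simple closed loop: the micro-controller reads an on-chip monitor photodiode and nudges a bias or heater setting until the measured output matches the requested target. The sketch below is hypothetical; every name in it, and the toy device response, is invented for illustration, as Macom's actual interfaces are not public.

```python
# Hypothetical sketch of closed-loop power control on a monitored PIC.
# PicChannel, its fields and the 0.1 dB/mA toy response are invented for
# illustration - they do not describe Macom's real device or API.

from dataclasses import dataclass

@dataclass
class PicChannel:
    bias_ma: float = 20.0       # laser bias current, mA
    base_dbm: float = -6.0      # output power at the 20 mA starting bias

    def read_power_dbm(self) -> float:
        # Toy device model: +0.1 dB of output per extra mA of bias.
        return self.base_dbm + 0.1 * (self.bias_ma - 20.0)

def set_output_power(ch: PicChannel, target_dbm: float,
                     step_ma: float = 0.5, tol_db: float = 0.05) -> float:
    """Bang-bang loop: nudge the bias until power is within tolerance."""
    for _ in range(1000):       # bounded, in case the target is unreachable
        err = target_dbm - ch.read_power_dbm()
        if abs(err) <= tol_db:
            break
        ch.bias_ma += step_ma if err > 0 else -step_ma
    return ch.read_power_dbm()
```

The same loop structure applies whether the actuator is a bias current, a heater for wavelength tuning, or a modulator setting for extinction ratio.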

The company has also started development of the FR4 receiver, which will be an integrated design with a demultiplexer and four optical receiver channels.

Macom is not saying when the L-PIC will be available. However, the company says 'meaningful demand' for 400-gigabit interfaces will start from 2021.

GlobalFoundries

Macom also announced at OFC that it is working with GlobalFoundries to use the chip maker’s 90nm silicon-on-insulator 300mm wafer processing line. 

“Today, as a photonics company, if you don’t have a play in silicon photonics, you are legacy,” says Rajgarhia, adding that in order to make money, what is needed is a working solution that can scale. 

“When we started developing [silicon photonics devices], we and others used research foundries to get our products ready,” says Rajgarhia. “Now, what we have announced is that we are scaling this up at GlobalFoundries.” 

Macom has started the development at GlobalFoundries’ East Fishkill fab, the former IBM Microelectronics site that has undertaken a lot of research in silicon photonics, says Rajgarhia.

GlobalFoundries recently created a process development kit (PDK) for its silicon photonics line. Now Macom is an early user of the PDK.

Last year, silicon photonics start-up, Ayar Labs, entered into a strategic agreement with GlobalFoundries, providing the foundry with its optical input-output (I/O) technology while gaining access to its 45nm silicon photonics process.


Intel targets 5G fronthaul with a 100G CWDM4 module

  • Intel announced at ECOC that it is sampling a 10km extended temperature range 100-gigabit CWDM4 optical module for 5G fronthaul. 
  • Another announced pluggable module pursued by Intel is the 400 Gigabit Ethernet (GbE) parallel fibre DR4 standard.
  • Intel, a backer of the CWDM8 MSA, says the 8-wavelength 400-gigabit module will not be in production before 2020.

Intel has expanded its portfolio of silicon photonics-based optical modules to address 5G mobile fronthaul and 400GbE.

At the European Conference on Optical Communication (ECOC), being held in Rome this week, Intel announced it is sampling a 100-gigabit CWDM4 module in a QSFP form factor for wireless fronthaul applications.

The CWDM4 module has an extended temperature range, -20°C to +85°C, and a 10km reach.

“The final samples are available now and [the product] will go into production in the first quarter of 2019,” says Robert Blum, director of strategic marketing and business development at Intel’s silicon photonics product division.

Intel also announced it will support the 400GBASE-DR4, the IEEE’s 400 GbE standard that uses four parallel fibres for transmit and four for the receive path, each carrying a 100-gigabit 4-level pulse amplitude modulation (PAM-4) signal. 
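
The DR4 numbers can be sanity-checked with back-of-envelope arithmetic (a sketch using the figures from IEEE 802.3bs): each of the four fibres carries a 53.125 gigabaud PAM-4 signal, and PAM-4 encodes two bits per symbol, so each lane runs at 106.25 Gb/s. The 6.25 percent above the nominal 100 Gb/s absorbs FEC and encoding overhead.

```python
# Back-of-envelope check of the 400GBASE-DR4 rates: four fibres, each
# carrying a 53.125 GBd PAM-4 signal with log2(4) = 2 bits per symbol.

import math

def lane_rate_gbps(baud_gbd: float, levels: int) -> float:
    """Line rate of one lane: symbol rate times bits per symbol."""
    return baud_gbd * math.log2(levels)

BAUD_GBD = 53.125                       # optical symbol rate per DR4 lane
per_lane = lane_rate_gbps(BAUD_GBD, 4)  # 106.25 Gb/s
aggregate = 4 * per_lane                # 425 Gb/s raw, for a 400 Gb/s payload
```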

 

5G wireless

5G wireless will be used for a variety of applications. Already this year the first 5G fixed and mobile wireless services are expected to be launched. 5G will also support massive Internet of Things (IoT) deployments as well as ultra-low latency applications. 

The next-generation wireless standard uses new spectrum that includes millimetre wave spectrum in the 24GHz to 40GHz region. Such higher frequency bands will drive small-cell deployments. 

5G’s use of new spectrum, small cells and advanced air interface techniques such as multiple input, multiple output (MIMO) antenna technology is what will enable its greater data speeds and vastly expanded capacity compared to the current LTE cellular standard. 

Source: Intel.

The 5G wireless standard will also drive greater fibre deployment at the network edge. And it is here where mobile fronthaul plays a role, linking the remote radio heads at the antennas with the centralised baseband controllers at the central office (see diagram). Such fronthaul links will use 25-gigabit and 100-gigabit links. “We have multiple customers that are excited about the 100-gigabit CWDM4 for these applications,” says Blum.

Intel expects demand for 25-gigabit and 100-gigabit transceivers for mobile fronthaul to begin in 2019. 

 

Intel is now producing over one million PSM4 and CWDM4 modules a year

 

Client-side modules 

Intel entered the optical module market with its silicon photonics technology in 2016 with a 100-gigabit PSM4 module, quickly followed by a 100-gigabit CWDM4 module. Intel is now producing over one million PSM4 and CWDM4 modules a year. 

Intel will provide customers with 400-gigabit DR4 samples in the final quarter of 2018 with production starting in the second half of 2019. This is when Intel says large-scale data centre operators will require 400 gigabits.

“The initial demand in hyperscale data centres for 400 gigabits will not be for duplex [fibre] but parallel fibre,” says Blum. “So we expect the DR4 to go to volume first and that is why we are announcing the product at ECOC.”       

Intel says the advantages of its silicon photonics approach have already been demonstrated with its 100-gigabit PSM4 module. One is the optical performance resulting from the company’s heterogeneous integration technique combining indium-phosphide lasers with silicon photonics modulators on the one chip. Another advantage is scale using Intel’s 300mm wafer-scale manufacturing. 

Intel expects demand for the 500m-reach DR4 module to go hand-in-hand with that for the 100-gigabit single-wavelength DR1, given that the DR4 will also be used in breakout mode to interface with four DR1 modules.

“We don’t see the DR1 standard competing or replacing 100-gigabit CWDM4,” says Blum. “The 100-gigabit CWDM4 is now mature and at a very attractive price point.”

Intel is a leading proponent of the CWDM8 MSA, an optical module design based on eight wavelengths, each a 50 gigabit-per-second (Gbps) non-return-to-zero (NRZ) signal. The CWDM8 MSA was created to fast-track 400 gigabit interfaces by avoiding the wait for 100-gigabit PAM-4 silicon. 
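
The trade-off behind the MSA is simple lane arithmetic: with only 50-gigabit NRZ electronics available, 400 gigabits needs eight lanes; once 100-gigabit PAM-4 silicon arrives, four lanes suffice. A sketch, using rounded nominal rates for illustration:

```python
# Lane arithmetic behind the CWDM8 MSA versus a four-wavelength design
# such as 400G-FR4 (rounded nominal rates, for illustration).

def lanes_needed(total_gbps: int, per_lane_gbps: int) -> int:
    # Ceiling division: how many optical lanes carry the payload.
    return -(-total_gbps // per_lane_gbps)

cwdm8_lanes = lanes_needed(400, 50)    # 8 wavelengths of 50G NRZ
fr4_lanes = lanes_needed(400, 100)     # 4 wavelengths of 100G PAM-4
```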

When the CWDM8 MSA was launched in 2017, the initial schedule was to deploy the module by the end of this year. Intel also demonstrated the module working at the OFC show held in March. 

Now, Intel expects production of the CWDM8 in 2020 and, by then, other four-wavelength solutions using 100-gigabit PAM-4 silicon such as the 400G-FR4 MSA will be available. 

“We just have to see what the use case will be and what the timing will be for the CWDM8’s deployment,” says Blum. 


An insider's view on the merits of optical integration

One of the pleasures of attending the OFC show, held in Los Angeles last month, is the many conversations possible in one location. The downside is that too many are cut short due to the show's hectic schedule. One exception was a conversation with Valery Tolstikhin (pictured), held in a quiet room prior to the exhibition hall's opening.

Tolstikhin is president and CEO of Intengent, the Ottawa-based consultancy and custom design service provider, and an industry veteran of photonic integration. In 2005 he founded OneChip Photonics, a fabless maker of indium phosphide photonic integrated circuits for optical access.

One important lesson he learned at OneChip was how the cost benefit of a photonic integrated circuit (PIC) can be eroded with a cheap optical sub-assembly made from discrete off-the-shelf components. When OneChip started, the selling price for GPON optics was around $100 a unit but this quickly came down to $6. "We needed sales in volumes and they never came close to meeting $6," says Tolstikhin.

OneChip changed strategy, spotting early the emerging opportunity in 100-gigabit optics for the data centre. But despite being among the first to demonstrate fully integrated 100-gigabit transmitter and receiver chips - at OFC 2013 - the company eventually folded.

 

When OneChip started, the selling price for GPON optics was around $100 a unit but this quickly came down to $6

 

Intengent can be seen as the photonic equivalent of the electronic ASIC design houses once common in the chip industry, acting as the intermediary between an equipment vendor commissioning a chip design and the foundry making the chip.

Intengent creates designs for system integrators which it takes to a commercial foundry for manufacturing. The company makes stand-alone devices, device arrays, and multi-function PICs. Intengent uses the regrowth-free taper-assisted vertical integration (TAVI) indium phosphide process of the California-based foundry Global Communication Semiconductors (GCS). “We have also partnered with a prominent PIC design house, VLC Photonics, for PIC layout and verification testing,” says Tolstikhin. Together, Intengent, VLC and GCS offer a one-stop-shop for the development and production of PICs.

 

III-V and silicon photonics

Tolstikhin is a big fan of indium phosphide and related III-V semiconductor materials, pointing out that they can implement all the optical functions required for telecom and datacom applications. He is a firm believer that III-V will continue to be the material system of choice for various applications and argues that silicon photonics is not so much a competitor to III-V but a complement.

"Silicon photonics needs indium-phosphide-based sources but also benefits from III-V modulators and detectors, which have better performance than their silicon photonics counterparts," he says.

He admits that indium phosphide photonics cannot compete with the PIC scalability that silicon photonics offers. But indium phosphide will benefit as silicon photonics matures, he argues. Intengent already benefits from this co-existence, offering specialised indium phosphide photonic chip development for silicon photonics as well.

"Silicon photonics cannot compete with indium phosphide photonics in relatively simple yet highest volume optical components for telecom and datacom transceivers," says Tolstikhin. Partly this is due to silicon photonics' inferior performance, but mainly it is for economic reasons.

 

Silicon photonics will have its chance, but only where it beats competing technologies on fundamentals, not just cost

 

There are also few applications that need monolithic photonic integration. Tolstikhin highlights coherent optics as one example, but that is a market with limited volumes. Meanwhile, the most promising emerging market - transceivers for the data centre, whether 100-gigabit (4x25G NRZ) PSM4 or CWDM4 designs or, in future, 400-gigabit (4x100G PAM-4) transceivers - will likely be implemented using optical sub-assembly and hybrid integration technologies.

Tolstikhin may be a proponent of indium phosphide but he does not dismiss silicon photonics' prospects: "It will have its chance, but only where it beats competing technologies on fundamentals, not just cost."

One such area is large-scale optoelectronic systems, such as data processors or switch fabrics for large-scale data centres. These are designs that cannot be assembled using discretes and go beyond the scalability of indium phosphide PICs. "This is not silicon photonics-based optical components instead of indium phosphide ones but a totally different system and possibly network solutions," he says. This is also where co-integration of CMOS electronics with silicon photonics makes a difference and can be justified economically.

He highlights Rockley Photonics and Ayar Labs as start-ups doing just this: using silicon photonics for large-scale electro-photonic integration targeting system and network applications. "There may also be more such companies in the making," says Tolstikhin. "And should they succeed, the entire setup of optics for the data centre and the role of silicon photonics could change quite dramatically."


Talking markets: Oclaro on 100 gigabits and beyond

Oclaro’s chief commercial officer, Adam Carter, discusses the 100-gigabit market, optical module trends, silicon photonics, and why this is a good time to be an optical component maker.

Oclaro has started its 2017 fiscal year as it ended fiscal year 2016: with another record quarter. The company reported revenues of $136 million for the quarter ending in September, representing 8 percent sequential growth and its fifth consecutive quarter of 7 percent or greater revenue growth.

A large part of Oclaro’s growth was due to strong demand for 100 gigabits across the company’s optical module and component portfolio.

The company has been supplying 100-gigabit client-side optics using the CFP, CFP2 and CFP4 pluggable form factors for a while. “What we saw in June was the first real production ramp of our CFP2-ACO [coherent] module,” says Adam Carter, chief commercial officer at Oclaro. “We have transferred all that manufacturing over to Asia now.”

The CFP2-ACO is being used predominantly for data centre interconnect applications. But Oclaro has also seen first orders from system vendors that are supplying US communications service provider Verizon for its metro buildout.

The company is also seeing strong demand for components from China. “The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward,” says Carter. LightCounting Market Research, in its latest optical market forecast report, highlights the importance of China’s 100-gigabit market. China’s massive deployments of FTTx and wireless fronthaul optics fuelled growth in 2011 to 2015, says LightCounting, but this year it is demand for 100-gigabit dense wavelength-division multiplexing and 100 Gigabit Ethernet optics that is increasing China’s share of the global market.

The China market for 100 gigabits has really grown in the last year and we expect it to be pretty stable going forward 

QSFP28 modules

Oclaro is also providing 100-gigabit QSFP28 pluggables for the data centre, in particular, the 100-gigabit PSM4 parallel single-mode module and the 100-gigabit CWDM4 based on wavelength-division multiplexing technology.

2016 was expected to be the year these 100-gigabit optical modules for the data centre would take off.  “It has not contributed a huge amount to date but it will start kicking in now,” says Carter. “We always signalled that it would pick up around June.”

One reason why it has taken time for the market for the 100-gigabit QSFP28 modules to take off is the investment needed to ramp manufacturing capacity to meet the demand. “The sheer volume of these modules that will be needed for one of these new big data centres is vast,” says Carter. “Everyone uses similar [manufacturing] equipment and goes to the same suppliers, so bringing in extra capacity has long lead times as well.”

Once a large-scale data centre is fully equipped and powered, it generates instant profit for an Internet content provider. “This is very rapid adoption; the instant monetisation of capital expenditure,” says Carter. “This is a very different scenario from where we were five to ten years ago with the telecom service providers."

Data centre servers and their increasing interface speed to leaf switches are what will drive module rates beyond 100 gigabits, says Carter. Ten Gigabit Ethernet links will be followed by 25 and 50 Gigabit Ethernet. “The lifecycle you have seen at the lower speeds [1 Gigabit and 10 Gigabit] is definitely being shrunk,” says Carter.

Such new speeds will spur 400-gigabit links between the data centre's leaf and spine switches, and between the spine switches. “Two hundred Gigabit Ethernet may be an intermediate step but I’m not sure if that is going to be a big volume or a niche for first movers,” says Carter.

400 gigabit CFP8

Oclaro showed a prototype 400-gigabit module in a CFP8 form factor at the recent ECOC show in September. The demonstrator is an 8-by-50 gigabit design using 25 gigabaud optics and PAM-4 modulation. The module implements the 400Gbase-LR8 10km standard using eight 1310nm distributed feedback lasers, each with an integrated electro-absorption modulator. The design also uses two 4-wide photo-detector arrays.

“We are using the four lasers we use for the CWDM4 100-gigabit design and we can show we have the other four [wavelength] lasers as well,” says Carter.

Carter says IP core routers will be the main application for the 400Gbase-LR8 module. The company is not yet saying when the 400-gigabit CFP8 module will be generally available.

We can definitely see the CFP2-ACO could support 400 gigabits and above

Coherent

Oclaro is already working with equipment customers to increase the line-side interface density on the front panel of their equipment.

The Optical Internetworking Forum (OIF) has already started work on the CFP8-ACO that will be able to support up to four wavelengths, each supporting up to 400 gigabits. But Carter says Oclaro is working with customers to see how the line-side capacity of the CFP2-ACO can be advanced. “We can definitely see the CFP2-ACO could support 400 gigabits and above,” says Carter. “We are working with customers as to what that looks like and what the schedule will be.”

And there are two other pluggable form factors smaller than the CFP2: the CFP4 and the QSFP28. “Will you get 400 gigabits in a QSFP28? Time will tell, although there is still more work to be done around the technology building blocks,” says Carter.

Vendors are seeking the highest aggregate front panel density, he says: “The higher aggregate bandwidth we are hearing about is 2 terabits but there is a need potentially to go to 3.2 and 4.8 terabits.”

Silicon photonics

Oclaro says it continues to watch silicon photonics closely and to question whether it is a technology that can be brought in-house. But issues remain. “This industry has always used different technologies and everything still needs light to work, which means the basic III-V [compound semiconductor] lasers,” says Carter.

“Producing silicon photonics chips versus producing packaged products that meet various industry standards and specifications is still pretty challenging to do in high volume,” says Carter. And integration can be done using either silicon photonics or indium phosphide. “My feeling is that the technologies will co-exist,” says Carter.


ST makes its first PSM4 optical engine deliveries

Flavio Benetti is upbeat about the prospects of silicon photonics. “Silicon photonics as a market is at a turning point this year,” he says.

What gives Benetti confidence is the demand he is seeing for 100-gigabit transceivers in the data centre. “From my visibility today, the tipping point is 2016,” says Benetti, group vice president and general manager, digital and mixed processes ASIC division at STMicroelectronics.

 

Flavio Benetti

Benetti and colleagues at ST have spent the last four years working to bring to market the silicon photonics technology that the chip company licensed from Luxtera.

The company has developed a 300mm-wafer silicon photonics production line at its fabrication plant in Crolles that is now up and running. ST also has its first silicon photonics product - a mid-reach PSM4 100-gigabit optical engine - and has just started its very first deliveries.

At the OFC show in March, ST said it had already delivered samples to one unnamed 'customer partner', possibly Luxtera, and Benetti showed a slide of the PSM4 chips as part of a Lumentum transceiver.  

Another ST achievement Benetti highlights is the development of a complete supply chain for the technology. In addition to wafer production, ST has developed electro-optic wafer testing. This allows devices to be probed electrically and optically to select working designs before the wafer is diced. ST has also developed a process to 3D-bond chips.

“We have focussed on building an industrial environment, with a supply chain that can deliver hundreds of thousands and millions of devices,” says Benetti. 

 

PSM4 and CWDM4

ST’s first product, the components for a 4x25 gigabit PSM4 transceiver, is a two-chip design.

One chip is the silicon photonics optical engine which integrates the PSM4’s four modulators, four detectors and the grating couplers used to interface the chip to the fibres. The second chip, fabricated using ST’s 55nm BiCMOS process, houses the transceiver’s associated electronics such as the drivers, and trans-impedance amplifiers.

The two chips are combined using 3D packaging. “The 3D packaging consists of the two dies, one copper-pillar bonded to the other,” says Benetti. “It is a dramatic simplification of the mounting process of an optical module.” 

The company is also developing a 100-gigabit CWDM4 transceiver which unlike the PSM4 uses four 25-gigabit wavelengths on a single fibre.

The CWDM4 product will be developed using two designs. The first is an interim, hybrid solution that uses an external planar lightwave circuit-based multiplexer and demultiplexer,  followed by an integrated silicon photonics design. The hybrid design is being developed and is expected in late 2017; the integrated silicon photonics design is due in 2018.

With the hybrid design, it is not just a question of adding a mux-demux to the PSM4 design. “The four channels are each carrying a different wavelength so there are some changes that need to be done to the PSM4,” says Benetti, adding that ST is working with partners that will provide the mux-demux and do the integration.   

 

We need to have a 100-gigabit solution in high volume for the market, and the pricing pressure that is coming has convinced us that silicon photonics is the right thing to do

 

Opportunities 

Despite the growing demand for 100-gigabit transceivers that ST is seeing, Benetti stresses that these are not 'mobile-phone wafer volumes'. “We are much more limited in terms of wafers,” he says. Accordingly, there is probably only room for one or two large fabs for silicon photonics globally, in his opinion. 

So why is ST investing in a large production line? For Benetti, this is an obvious development for the company which has been a provider of electrical ICs for the optical module industry for years.

“ST has entered silicon photonics to provide our customers with a roadmap,” says Benetti. “We need to have a 100-gigabit solution in high volume for the market, and the pricing pressure that is coming has convinced us that silicon photonics is the right thing to do.”

It also offers chip players the possibility of increasing their revenues. “The optical engine integrates all the components that were in the old-fashioned modules so we can increase our revenues there,” he says.

ST is tracking developments for 200-gigabit and 400-gigabit links and is assessing whether there is enough of an opportunity to justify pursuing 200-gigabit interconnects.

For now though, it is seeing strong pricing pressure for 100-gigabit links for reaches of several hundred meters. “We do not think we can compete for very short reach distances,” says Benetti.  “We will leave that to VCSELs until the technology can no longer follow.” As link speeds increase, the reach of VCSEL links diminishes. “We will see more room for silicon photonics but this is not the case in the short term,” says Benetti.

 

Market promise

People have been waiting for years for silicon photonics to become a reality, says Benetti. “My target is to demonstrate it [silicon photonics] is possible, that we are serious in delivering parts to the market in an industrial way and in volumes that have not been delivered before.”

To convince the market, it is not enough to show the technological advantages of silicon photonics; there is also the great simplification it brings to constructing the optical module, along with the ability to deliver devices in volume. “This is the point,” he says.

Benetti’s other role at ST is overseeing advanced networking ASICs. He argues that over the mid- to long-term, there needs to be a convergence between ASIC and optical connectivity.

“Look at a switch board, for example: you have a big ASIC or two in the middle and a bunch of optical modules on the side,” says Benetti. For him, the two technologies - photonics and ICs - are complementary and the industry’s challenge is to make the two live together in an efficient way.


ECOC 2015 Review - Final Part

The second and final part of the survey of developments at the ECOC 2015 show held recently in Valencia.  

Part 2 - Client-side component and module developments   

  • The first SWDM Alliance module shown
  • More companies detail CWDM4, CLR4 and PSM4 mid-reach modules
  • 400 Gig datacom technologies showcased
  • The CFP8 MSA for 400 Gigabit Ethernet unveiled

The CFP MSA modules including the newest CFP8. Source: Finisar

  • Lumentum and Kaiam use silicon photonics for mid-reach modules
  • Finisar demonstrates a 10 km 25 Gig SFP28, and low-latency 25 Gig and 100 Gig SR4 interfaces 

 

Shortwave wavelength-division multiplexing

Finisar demonstrated the first 100 gigabit shortwave wavelength-division multiplexing (SWDM) module at ECOC. Dubbed the SWDM4, the 100 gigabit interface supports WDM over multi-mode fibre. Finisar showed a 40 gigabit version at OFC earlier this year. “This product [the SWDM4] provides the next step in that upgrade path,” says Rafik Ward, vice president of marketing at Finisar.

The SWDM Alliance was formed in September to exploit the large amount of multi-mode fibre used by enterprises. The goal of the SWDM Alliance is to extend the use of multi-mode fibre by enabling link speeds beyond 10 gigabit.

“We believe if you can do something with multi-mode fibre, you can achieve cost points that are not achievable with single-mode fibre,” says Ward. “SWDM4 allows us to have not only low-cost optics on either end, but allows customers to reuse their installed fibre.”

The SWDM4 interface uses four 25 gigabit VCSELs operating at wavelengths sufficiently far apart that cooling is not required. “By having this [wavelength] gap, you can keep to relatively low-cost components for multiplexing and de-multiplexing,” says Ward.

The 100 Gig SWDM4 achieves 70 meters over OM3 fibre and 100 meters over OM4 fibre. SWDM can scale beyond 100 gigabit, says Ward, but the challenge with multi-mode fibre remains the tradeoff between speed and distance.

Finisar is already shipping SWDM4 alpha samples to customers.

The SWDM Alliance founding members include CommScope, Corning, Dell, Finisar, H3C, Huawei, Juniper Networks, Lumentum, and OFS.

 

CWDM4, CLR4 and PSM4

Oclaro detailed a 100 gigabit mid-reach QSFP28 module that supports both the CWDM4 multi-source agreement (MSA) and the CLR4 MSA. “We can support either depending on whether, on the host card, there is forward-error correction or not,” says Robert Blum, director of strategic marketing at Oclaro.

Both MSAs have a 2 km reach and use four 25 gigabit channels. However, the CWDM4 uses a more relaxed optical specification as its overall performance is complemented with forward-error correction (FEC) on the host card. The CLR4, in contrast, does not use FEC and therefore requires a more demanding optical specification.

“The requirements are significantly harder to meet for the CLR4 specification,” says Blum. By avoiding FEC, the CLR4 module benefits low-latency applications such as financial trading.
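
The latency cost of FEC can be roughly bounded with arithmetic of my own (not from the article): the RS(528,514) 'KR4' code that the CWDM4 MSA assumes on the host works on 5,280-bit codewords, and a decoder cannot finish before a whole codeword has arrived.

```python
# Rough, illustrative arithmetic: the minimum buffering delay added by the
# RS(528,514) host-side FEC assumed by CWDM4. A CLR4 link, which has no
# FEC, avoids this delay entirely (at the cost of a tighter optical spec).

CODEWORD_BITS = 528 * 10        # RS(528,514) uses 10-bit symbols
LINE_RATE_GBPS = 103.125        # 4 x 25.78125 Gb/s aggregate

def codeword_fill_ns(bits: int = CODEWORD_BITS,
                     rate_gbps: float = LINE_RATE_GBPS) -> float:
    """Time to accumulate one codeword - a floor on added FEC latency."""
    return bits / rate_gbps     # bits divided by Gb/s comes out in nanoseconds
```

That works out to roughly 51 ns per codeword just to buffer it, before any decoding time is added, which is why latency-sensitive users such as financial traders favour the FEC-free CLR4.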

Oclaro showed its dual-MSA module achieving a 10 km reach at ECOC even though the two specifications call for 2 km only. “We have very large margins for the module compared to the specification,” says Blum, adding that customers now need to only qualify one module to meet their CWDM4 or CLR4 line card needs.

Other optical module vendors that announced support for CWDM4 in a QSFP28 module include Source Photonics, whose module is also CLR4-compliant. Kaiam is making CWDM4 and CLR4 modules using silicon photonics as part of its designs.

Lumentum also detailed its CWDM4 and the PSM4, a QSFP28 that uses a single-mode ribbon cable to deliver 100 Gig over 500 meters. Lumentum says its CWDM4 and PSM4 QSFP28 products will be available this quarter. “These 100 gigabit modules are what the hyper-scale data centre operators are clamouring for,” says Brandon Collings, CTO of Lumentum.

 

The question is who can ramp and support the 100 Gig deployments that are going to happen next year

 

Lumentum says it is using silicon photonics technology for one of its designs but has not said which. “We have both technologies [indium phosphide and silicon photonics], we use both technologies, and silicon photonics is involved with one of these [modules],” says Collings.

There is demand for both the PSM4 and CWDM4, says Lumentum. Which type a particular data centre operator chooses depends on such factors as what fibre they have or plan to deploy, whether they favour single-mode fibre pairs or ribbon cable, and if their reach requirements are beyond 500 meters.

Quite a few module companies have already sampled [100 Gig] products, says Oclaro’s Blum: “The question is who can ramp and support the 100 Gig deployments that are going to happen next year.”

 

Technologies for 400 gigabit

Several companies demonstrated technologies that will be needed for 400 gigabit client-side interfaces.

NeoPhotonics and chip company Inphi partnered to demonstrate the use of PAM-4 modulation to achieve 100 gigabit. “To do PAM-4, you need not only the optics but a special PAM-4 DSP,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.

The 400 Gigabit Ethernet standard under development by the IEEE 802.3bs Task Force supports several configurations using PAM-4, including a four-channel parallel single-mode fibre configuration, each channel at 100 gigabit with a 500m reach, and two 8 x 50 gigabit configurations for 2 km and 10 km links.

The company showcased its 4x28 Gig transmitter optical sub-assembly (TOSA) that uses a photonic integrated circuit comprising electro-absorption modulated lasers (EMLs). Paired with Inphi's PAM-4 chip, two channels were combined to achieve 100 gigabit. NeoPhotonics says its EMLs are also capable of supporting 56 gigabaud rates which, coupled with PAM-4, would achieve 100 gigabit on a single channel.
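The rate arithmetic behind these figures can be sketched as follows; the baud rates come from the article, while the snippet itself is purely illustrative:

```python
# PAM-4 encodes 2 bits per symbol (four amplitude levels), doubling the
# line rate relative to NRZ at the same baud rate.

def pam4_bit_rate_gbps(baud_rate_gbaud: float) -> float:
    bits_per_symbol = 2
    return baud_rate_gbaud * bits_per_symbol

# Two 28 Gbaud channels with PAM-4 combine to ~100 Gig
# (112 Gb/s raw, leaving headroom for coding overhead):
print(pam4_bit_rate_gbps(28) * 2)  # 112.0

# A single 56 Gbaud channel with PAM-4 achieves the same raw rate:
print(pam4_bit_rate_gbps(56))      # 112.0
```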

Lipscomb points out that not only are there several interfaces under development but also various optical form factors. “For 100 Gig and 400 Gig client-side data centre links, there are several competing MSA groups,” says Lipscomb. “The final winning approach has not yet emerged and NeoPhotonics wants its solution to be generic enough so that it supports this winning approach once it emerges.” 

Meanwhile, Teraxion announced its silicon photonics-based modulator technology for 100 gigabit (4 x 25 Gig) and 400 gigabit datacom interfaces. “People we talk to are interested in WDM applications for short-reach links,” says Martin Guy, Teraxion's CTO and head of strategic marketing.

Teraxion says a challenge using silicon photonics for WDM is supporting a broad band of wavelengths. “People use surface gratings to couple light into the silicon photonics,” says Guy. “But surface gratings have a strong wavelength-dependency over the C-band.”

Teraxion has instead developed an edge coupler, which sits on the same plane as the propagating light. This compares to a surface grating, where light is coupled vertically to the plane.

 

You hear a lot about the cost of silicon photonics but one of the key advantages is the density you can achieve on the chip itself. Having many modulators in a very small footprint has value for the platform; you can make smaller and smaller transceivers. 

 

“We can couple light efficiently with large-tolerance alignment and our approach can be used for WDM applications,” says Guy. Teraxion’s modulator array can be used for CWDM4 and CLR4 MSAs as well as optical engines for future 400 gigabit datacom systems. 

“You hear a lot about the cost of silicon photonics but one of the key advantages is the density you can achieve on the chip itself,” says Guy. “Having many modulators in a very small footprint has value for the platform; you can make smaller and smaller transceivers.” 

 

CFP8 MSA

Finisar demonstrated a 400 gigabit link that included a mock-up of the CFP8 form factor, the latest CFP MSA member being developed to support emerging standards such as 400 Gigabit Ethernet.

The 400 gigabit demonstration implemented the 400GE-SR16 multi-mode standard. A Xilinx FPGA was used to implement an Ethernet MAC and generated sixteen 25 Gig channels that were fed to four CFP4 modules, each implementing 100GBASE-SR4 but collectively acting as the equivalent of the 400GE-SR16. The 16 fibre outputs were then fed to the CFP8 prototype, which performed an optical loop-back function, sending the signals back to the CFP4s and FPGA.

 

The CFP8 will be able to support 6.4 terabit of switching on a 1U card when used in a two-row by eight-module configuration. The CFP8 has a size and power-consumption profile similar to that of the CFP2. “There is still a lot of work putting an MSA together for 400 gigabit,” says Ward, adding that there is still no timeframe for when the CFP8 MSA will be completed.
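The 6.4 terabit figure is simply sixteen CFP8 modules at 400 gigabit each; a quick check of the faceplate arithmetic quoted above:

```python
# 2 rows x 8 CFP8 modules on a 1U faceplate, 400 Gig per module.
rows, modules_per_row = 2, 8
gbps_per_module = 400

total_gbps = rows * modules_per_row * gbps_per_module
print(total_gbps)  # 6400, i.e. 6.4 terabit of switching on a 1U card
```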

 

25 Gig SFP28

Finisar also announced at ECOC a 1310nm SFP28 supporting 25 gigabit Ethernet over 10 km, complementing the 850nm SFP28 short reach module it announced at OFC 2015.

Ethernet vendors are designing their next-generation switches that use the SFP28, says Finisar, while the IEEE is completing the standardisation of 25 Gigabit Ethernet over copper and multi-mode fibre.

“There hasn’t yet been a motion to standardise a long-wave interface,” says Ward. “With the demo at ECOC, we have come out with a 25 Gig long-wave interface in advance of a standard.”       

Ward points out that several years ago, large-scale data centres had only 40 gigabit as a higher-speed option beyond 10 gigabit. Now enterprises will also have a 25 gigabit option.

Compared to 40 Gig, 25 gigabit delivers attractive cost-performance, he says. Forty gigabit short-reach and long-reach interfaces are based on four channels at 10 gigabit, whereas 25 gigabit uses one laser and one photo-detector that fit in an SFP28. This compares to a QSFP for 40 Gig.

“25 Gigabit Ethernet is a very interesting interface for the next set of customers after the Web 2.0 players that are looking to migrate beyond 10 gigabit,” said Ward.     

 

Low-latency 25 Gig SR and 100 Gig Ethernet SR4 modules

Also announced by Finisar are 25 Gigabit Ethernet SFP28 SR and 100GE QSFP28 SR4 transceivers that can operate without accompanying FEC on the host board. The transceivers achieve a 30 meter reach on OM3 fibre and 40 meters using OM4 fibre.

“Using FEC simplifies the optical link,” says Ward. “It can take the cost out of the optics by having FEC which gives you additional gain.”  But some customers have requested the parts for use without FEC to reduce link latency, similar to those that choose the CLR4 MSA for mid-reach 100 Gig.

Finisar has not redesigned its modules but is offering versions that use its higher-performing VCSELs and photo-detectors. “Think of it as a simple screen,” says Ward.

 

Click here for the ECOC 2015 Review - Part 1.  


Data centres to give silicon photonics its chance

Part 4: A large data centre operator’s perspective

The scale of modern data centres and the volumes of transceivers they will use are going to have a significant impact on the optical industry. So claims Facebook, the social networking company.

Katharine Schmidtke

Facebook has been vocal in outlining the optical requirements it needs for its large data centres.

The company will use duplex single-mode fibre and has chosen the 2 km mid-reach 100 gigabit CWDM4 interface to connect its equipment.

But the company remains open regarding the photonics used inside transceivers. “Facebook is agnostic to technology,” says Katharine Schmidtke, strategic sourcing manager, optical technology at Facebook. “There are multiple technologies that meet our requirements.”

That said, Facebook says silicon photonics has characteristics that are appealing. 

Silicon photonics can produce integrated designs, with all the required functions placed in one or two chips. Such designs will also be needed in volume, given that a large data centre uses hundreds of thousands of optical transceivers, and that requires a high-yielding process. This is a manufacturing model the chip industry excels at, and one that silicon photonics, which uses a CMOS-compatible process, can exploit.

 

When you bring up a data centre, you don’t just deploy, you deploy a data centre

 

New business model

What data centres bring to optics is scale. Optical transceiver volumes used by data centres are growing, and growing fast, and will account for half the industry's demand for Ethernet transceivers by 2020, according to LightCounting Market Research.

Transceivers must be designed with high-volume, low-cost manufacturing in mind from the start. This is different to what the market has done traditionally. “With the telecom industry, you step into volume in more manageable, digestible chunks,” says Schmidtke. “When you bring up a data centre, you don’t just deploy, you deploy a data centre.”

Silicon photonics has already proven it can achieve the required optical performance, says Facebook; what remains open is whether the technology can meet the manufacturing demands of the data centre. What helps its cause is that the data centre provides the volumes needed to achieve such manufacturing maturity.

Schmidtke is upbeat about silicon photonics’ prospects. 

“Why silicon photonics is attractive is integration; you are reducing the number of components and the bill of materials significantly, and that reduces cost,” she says. “Then there is all the alignment and assembly cost reductions; that is what makes this technology appealing.”

Her expectation is that the industry will demonstrate the required level of manufacturing maturity in the coming year. Then the role silicon photonics will play for this market will become clearer.  

“Within a year it will be very obvious,” she says.


IBM demos a 100 Gigabit silicon photonics transceiver

IBM has demonstrated a 100 gigabit transceiver using silicon photonics technology, its most complex design unveiled to date. The 100 gigabit design is not a product but a technology demonstrator, and IBM says it will not offer branded transceivers to the marketplace.

“It is a demonstration vehicle illustrating the complex design capabilities of the technology and the functionality of the optical and electrical components,” says Will Green, manager of IBM’s silicon integrated nano-photonics group. 

Will Green

IBM has been developing silicon photonics technology for over a decade, starting with building-block optical functions based on silicon, to its current monolithic system-on-chip technology that includes design tools, testing and packaging technologies.

Now this technology is nearing commercialisation.   

“We do plan to have the technology available for use within IBM’s systems but also within the larger market; large-volume applications such as the data centre and hyper-scale data centres in particular,” says Green. 

IBM is already working with companies developing their own optical component designs using its technology and design tools. “These are tools that circuit designers are familiar with, such that they do not need to have an in-depth knowledge of photonics in order to build, for example, an optical transceiver,” says Green.  

 

We do plan to have the technology available for use within IBM’s systems but also within the larger market

 

100 gig demonstrator

IBM refers to its silicon photonics technology as CMOS-integrated nano-photonics. CMOS-integrated refers to the technology’s monolithic nature that combines CMOS electronics with photonics on one substrate. Nano-photonics highlights the dimensions of the feature sizes used.   

IBM is rare among the silicon photonics community in combining electronics and photonics on one chip; other players implement photonics and electronics on separate dies before combining the two. What is not included is the laser, which is attached externally using fibre.

The platform supports 25 gigabit speeds as well as wavelength division multiplexing. Originally, IBM started with 90 nm CMOS using bulk silicon before transferring to a silicon-on-insulator (SOI) substrate. An SOI wafer is ideal for creating optical waveguides that confine light using the large refractive index difference between silicon and silicon dioxide. However, to make the electrical devices run at 25 gigabit, the resulting transistor gate length ended up being closer to a 65 nm CMOS process.   
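The tight confinement SOI provides stems from silicon's much higher refractive index than silicon dioxide. A rough sketch of the relative index contrast, the 'delta' figure used in waveguide design (the index values are standard textbook figures near 1310 nm, not from the article):

```python
# SOI's index contrast is roughly 100x that of a standard optical fibre,
# which is why sub-micron silicon waveguides confine light so tightly.
n_core, n_clad = 3.50, 1.44          # silicon, silicon dioxide (approx.)
delta_soi = (n_core**2 - n_clad**2) / (2 * n_core**2)

n_core_f, n_clad_f = 1.4682, 1.4629  # typical single-mode fibre (approx.)
delta_fibre = (n_core_f**2 - n_clad_f**2) / (2 * n_core_f**2)

print(round(delta_soi, 3))           # ~0.415
print(round(delta_fibre, 4))         # ~0.0036
```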

 

Source: IBM Corporation.

 

IBM's optical waveguides are sub-micron, having dimensions of a few hundred nanometers. This is the middle ground, says Green, trading off the density of smaller-dimensioned waveguides with larger, micron-plus ones that deliver low propagation loss.   

Also used are sub-wavelength optical 'metamaterial' structures that transition between the refractive index of the fibre and that of the optical waveguide to achieve a good match between the two. “These very tiny sub-wavelength structures are made using lithography near the limits of what is available,” says Green. “We are engineering the optical properties of the waveguide in order to achieve a low insertion loss when bringing the fibre onto the chip.” The single-mode fibre is attached to the chip using passive alignment.

The 100 gigabit transceiver demonstrator uses four 25 gigabit coarse wavelengths around 1310 nm.  The technology is suited to implement the CWDM4 MSA.

 

The whole technology is available to be commercialised by any chip manufacturer

 

“We are working with four wavelengths today but in the same way as telecom uses many wavelengths, we can follow a similar path,” says Green.

The chip design features transmitter electronics - a series of amplifiers that boost the voltage to drive the Mach-Zehnder interferometer modulators - and a multiplexer to combine the four wavelengths onto the fibre, while the receiver circuitry includes a demultiplexer, four photo-detectors, trans-impedance amplifiers and limiting amplifiers, says Green. What is lacking to make the 100 gigabit transceiver functional is a micro-controller, feedback loops to control the temperature of key circuits, and the circuitry to interface to standard electrical input/output.

Green highlights how the bill of materials of a chip is only a fraction of the total cost since assembly and testing must also be included.  

“We reduce the cost of assembly through automated passive optical alignment and the introduction of custom structures onto the wafer,” he says. “We believe we can make an impact on the cost structure of the optical transceiver and where this technology needs to be to access the data centre.” IBM has also developed a way to test the transceiver chips at the wafer level. 

Green admits that its CMOS-integrated nanophotonics process will not scale beyond 25 gigabit as the 90-65 nm CMOS is not able to implement faster serial rates. But IBM has already shown an optical implementation of the PAM-4 modulation scheme that doubles a link's rate to 50 gigabit.    

Meanwhile, IBM’s process design kit (PDK) is already with customers. A PDK includes documents and data files that describe the fabrication process and enable a user to complete a design. A PDK includes a fab’s process parameters, mask layout instructions, and the library of silicon photonics components: grating couplers, waveguides, modulators and the like [1].

“They [customers] have used the design kit provided by IBM but have built their own designs,” says Green. “And now they are testing hardware.”

IBM is keen that its silicon photonics technology will be licensed and used by circuit design houses. "Houses that bring their own IP [intellectual property], use the enablement tools and manufacture at a site that is licensing the technology from IBM,” says Green. "The whole technology is available to be commercialised by any chip manufacturer.”

 

Reference

[1] Silicon Photonics Design: From Devices to Systems, Lukas Chrostowski and Michael Hochberg, Cambridge University Press, 2015. Click here


Module makers rush to fill the 100 Gig mid-reach void

 

You may give little thought as to how your Facebook page is constructed each time you log in, or the data centre ramifications when you access Gmail. But for the internet giants, what is clear is that they need cheaper, higher-speed optical links to connect their equipment that match the growing size of their hyper-scale data centres. 

The challenge for the web players is that existing 100 Gig links are either too short or too expensive. Ten and 40 Gig multimode interfaces span 300m, but at 100 Gig the reach plummets; the existing IEEE 802.3 Ethernet 100GBASE-SR10 and 100GBASE-SR4 multi-mode standards are 100m only. Meanwhile, the 10km reach of the next IEEE interface option, the 100 Gig single-mode 100GBASE-LR4, is overkill and expensive; the LR4 being sevenfold the cost of the 100GBASE-SR10, according to market research firm, LightCounting.

"The largest data centre operators will tell you less than 1km, less than 500m, is their sweet spot," says Martin Hull, director of product management at switch vendor, Arista Networks. Hyperscale data centres use a flatter switching architecture known as leaf and spine. "The flatter switching architectures require larger quantities of economical links between the leaf and spine switches," says Dale Murray, principal analyst at LightCounting.

A 'leaf' can be a top-of-rack switch connecting the servers to the larger-capacity 'spine' of the switch architecture. Operators want 100GbE interfaces with sufficient optical link budget to span 500m and greater distances, to interconnect the leaf and spine, or the spine to the data centre's edge router.

The optical industry has been heeding the web companies' request.

One reason the IEEE 802.3 Ethernet Working Group created the 802.3bm Task Force is to address mid-reach demand by creating a specification for a cheaper 500m interface. Four proposals emerged: parallel single mode (PSM4), coarse WDM (CWDM), pulse amplitude modulation, and discrete multi-tone.  But none of the proposals passed the 75% voting threshold to become a standard. 

The optical industry has since pursued a multi-source agreement (MSA) strategy to bring the much-needed solutions to market. In the last year, no fewer than four single-mode interfaces have emerged: the CLR4 Alliance, and the CWDM4, PSM4 and OpenOptics MSAs.

"The MSA-based solutions will have two important advantages," says Murray. "All will be much less expensive than a 10km 100Gig LR4 module and all can be accommodated by a QSFP28 form factor."

The 100 GbE PSM4, backed by the leading optical module makers (see table above), differs from the other three designs in using parallel ribbon fibre and having a 500m rather than a 2km reach. The PSM4 uses four 25 Gig channels, each sent over its own fibre, such that four fibres are used in each direction. The PSM4 is technically straightforward and is likely to be the most economical for links up to 500m. In contrast, the CLR4, CWDM4 and OpenOptics all use 4x25 Gig WDM over duplex fibre. Thus, while the PSM4 will likely be the cheapest of the four modules, the link's cost advantage erodes with distance due to the cost of the ribbon fibre.
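The trade-off described above - cheaper PSM4 modules versus cheaper CWDM4 cabling - can be illustrated with a toy cost model. All the prices below are invented placeholders purely to show the crossover logic; the article gives no actual figures:

```python
# Hypothetical link cost: two modules per link plus the cabling between them.
def link_cost(module_cost: float, fibre_cost_per_m: float, length_m: float) -> float:
    return 2 * module_cost + fibre_cost_per_m * length_m

# Placeholder prices: PSM4 modules are cheaper, but eight-fibre ribbon
# costs more per metre than duplex single-mode fibre.
PSM4_MODULE, PSM4_RIBBON_PER_M = 300.0, 2.0
CWDM4_MODULE, CWDM4_DUPLEX_PER_M = 450.0, 0.4

for length in (100, 200, 500):
    psm4 = link_cost(PSM4_MODULE, PSM4_RIBBON_PER_M, length)
    cwdm4 = link_cost(CWDM4_MODULE, CWDM4_DUPLEX_PER_M, length)
    print(length, "m:", "PSM4" if psm4 < cwdm4 else "CWDM4", "is cheaper")
```

With these placeholder numbers the crossover falls below 200m; the qualitative point - PSM4 wins on short links, WDM over duplex fibre wins as distance grows - is what matters.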

The PSM4 is also attractive for secondary applications; the 25 Gig channels could be used as individual 'breakout' links. Already there is industry interest in 25GbE, while the module could be used in future for 32 Gig Fibre Channel and high-density 128 Gig Fibre Channel. 

The OpenOptics MSA, backed by Mellanox and start-up Ranovus, operates in the 1550nm C-band and uses dense WDM, whereas the CLR4 Alliance and CWDM4 operate around 1310nm and use CWDM. The 100 GbE OpenOptics is also 4x25 Gig, so its wavelengths could be spaced far apart, but DWDM promises a roadmap to even higher-speed interfaces.

The CLR4 Alliance is an Intel-Arista initiative that has garnered wide industry backing, but it is not an MSA. The specification is very similar to the CWDM4. Both the CLR4 and the CWDM4 include forward error correction (FEC) but whereas FEC is fundamental to the CWDM4, it is an option with the CLR4.

"We have focussed on the FEC-enabled [CWDM4] version so that optical manufacturers can develop the lowest possible cost components to support the interface," says Mitchell Fields, senior director, product marketing and strategy, fiber-optics product division at Avago. FEC adds flexibility, he says, not just in relaxing the components' specification but also simplifying module testing.

The backers of CWDM4 and CLR4 are working to align their specifications and while it is likely the two will interoperate, it remains unclear whether the two will merge.

The CWDM4 specification is likely to be completed in September with first products appearing as early as one or two quarters later. Arista points out that it already has a switch that could use CWDM4/CLR4 modules now if they were available.

John D'Ambrosia, chairman of the Ethernet Alliance, regrets that four specifications have emerged. "My own personal belief is that it would be better for the industry overall if we didn't have so many choices," he says. "But the reality is there are a lot of different applications out there." 

LightCounting expects the PSM4 and a merged CWDM offering will find strong market traction. "Avago, Finisar, JDSU and Oclaro are participating in both categories, demonstrating that each has its own value proposition," says Murray.

 

This article first appeared in the Optical Connections ECOC '14 magazine issue.

For a more detailed article on mid-reach optics, see p28 of the Autumn issue of Fibre Systems, click here

Article Revision: 30/10/2014: Updated members list of the OpenOptics MSA


Industry in a flurry of mid-reach MSA announcements

Another day, another multi-source agreement.

The CLR4 Alliance is the latest 100 Gig multi-source agreement (MSA) to address up-to-2km links in the data centre. The 100 Gig CLR4 Alliance is backed by around 20 companies including data centre operators, equipment vendors, optical module and component players and chip makers.

 

The table provides a summary of the latest MSAs and how they relate to the IEEE 100 Gigabit client interface standards. Source: Gazettabyte.

The announcement follows in the footsteps of the CWDM4 MSA, announced at the start of the week. The CWDM4 is another 100 Gig single-mode fibre MSA backed by optical module makers, Avago Technologies, Finisar, JDSU and Oclaro.

The two MSAs are the latest of several announced interfaces - three in the last three weeks - to tackle mid-reach distances from 100m-plus to 2km. The MSAs reflect an industry need to fill the void in the IEEE standards: the -SR4 multimode standard, with its 100m reach, and the 10km -LR4 that is seen as over-specified for data centre requirements.  

Below is a discussion of the recent data centre MSAs.

 

The PSM4 MSA

The PSM4 MSA is a four-channel parallel single mode interface that uses eight- or 12-fibre cabling based on the MTP/MPO optical connectors. The PSM4 uses simpler optics than the 10km IEEE 100GBASE-LR4 and the shorter-reach 2km offshoot, the CWDM4 MSA, and thus promises lower cost. But this is at the expense of using eight fibres and more expensive connectors compared to the single-mode CWDM4.

The PSM4 is expected to have a reach of at least 500m; above that the cost of the fibre becomes the dominant factor. "Note that a 500m PMD [physical medium dependent layer] at 100 Gig was an objective of the IEEE 802.3bm group but it did not happen, so the industry is defining products that fill the gap," says Dale Murray, principal analyst at LightCounting Market Research.

The PSM4 MSA was first detailed in January and includes such members as Avago Technologies, Brocade, JDSU, Luxtera, Oclaro and Panduit.

 

The 100 Gig CLR4 Alliance

The 100 Gig CLR4 MSA is backed by companies including ebay; equipment vendors Arista Networks, Brocade, Dell, Fujitsu, HP, and Oracle; silicon photonics players Aurrion, Intel, Skorpios Technologies (Oracle is also a proponent of silicon photonics); optical module and component players ColorChip, Kaiam, Oclaro, Oplink, NeoPhotonics; and chip vendors, Netronome and Semtech.    

The 100 Gig standard is based on a QSFP form factor module and uses two single-mode fibres - one send and one receive - for duplex communications. The MSA has a 2km reach and uses coarse wavelength-division multiplexing (CWDM). The 8.5mm x 18mm x 72mm QSFP has a maximum power consumption of 3.5W and enables a port density of 36 modules on the face plate of a 1 rack-unit card, or 3.6 terabits overall.
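The quoted density works out as a quick arithmetic check, using the figures from the paragraph above:

```python
# 36 QSFP ports on a 1U faceplate at 100 Gig each, 3.5 W max per module.
ports, gbps_per_port, watts_per_port = 36, 100, 3.5

print(ports * gbps_per_port)   # 3600 Gb/s, i.e. 3.6 terabits per 1U card
print(ports * watts_per_port)  # 126.0 W worst-case optics power per card
```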

At the recent OFC show, Skorpios Technologies demonstrated a QSFP28-CLR4. The silicon photonics player said its module was based on a single chip that integrates the lasers, modulators, detectors and optical multiplexer and de-multiplexer, to deliver significant size, cost and power benefits. It also said the transceiver achieved a reach of 10km, putting its CLR4 on a par with the IEEE -LR4.

At OFC, ColorChip announced its iLR4, a QSFP28 with a 2km reach, although, like Skorpios' demonstration, this came before the 100G CLR4 Alliance launch.

 

The CWDM4 MSA

The CWDM4 MSA also uses 4-channel CWDM optics and two-fibre cabling. The CWDM4 is being promoted as a complement to the PSM4. "It is MSA-based and has a 2km target," says Murray. "This is an LR4 with relaxed specs; it has no thermo-electric cooler but uses the same wavelengths."

"From the link solution point of view, the PSM4 may be more cost effective than the CWDM4 up to 200m-300m," says I-Hsing Tan, segment marketing manager for Ethernet and storage optical transceivers at Avago. "But CWDM4 is for sure the winner beyond 200m and can be more cost effective than the 100GBASE-LR4 solution up to 2km."

Companies backing the CWDM4 MSA include Avago Technologies, Finisar, JDSU and Oclaro.

 

The OpenOptics MSA

The OpenOptics MSA was launched at OFC by Mellanox Technologies and Ranovus. The MSA uses 1550nm optics and DWDM. The first implementation will be a 100G QSFP28 module and the distance it will address is up to 2km. The MSA will also support future 400 Gig and greater interface speeds.

The degree of acceptance of the OpenOptics MSA is still to be determined compared to the more broadly backed CWDM4.

 

Other developments

The CLR4 Alliance may not be the final word regarding MSA announcements for the data centre.

Work is ongoing to use advanced modulation for data centre links, such as PAM-8 and carrier multi-tone. 

Both Ciena's Joe Berthold and Ovum's Daryl Inniss address the importance of client-side interfaces and whether the rush to announce new MSAs is beneficial overall.

 

The story was first published on April 1st and has been updated to include the CLR4 Alliance MSA.

