Intel details its 800-gigabit DR8 optical module

The company earmarks 2023 for its first co-packaged optics product
Intel is sampling an 800-gigabit DR8 in an OSFP pluggable optical module, as announced at the recent OFC virtual conference and show.
“It is the first time we have done a pluggable module with 100-gigabit electrical serdes [serialisers/deserialisers],” says Robert Blum, Intel’s senior director, marketing and new business. “The transition for the industry to 100-gigabit serdes is a big step.”
The 800-gigabit DR8 module has eight electrical 100-gigabit interfaces and eight single-mode 100-gigabit optical channels in each transmission direction.
Intel demonstrated a prototype 12.8-terabit co-packaged optics design
The attraction of the single-module DR8 design, says Blum, is that it effectively comprises two 400-gigabit DR4 modules. “The optical interface allows you the flexibility that you can break it out into 400-gigabit DR4,” says Blum. “You can also do single 100-gigabit breakouts or you can do 800-gigabit-to-800-gigabit traffic.”
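The breakout options Blum lists reduce to simple lane arithmetic. A minimal sketch, modelling the DR8 as eight independent 100-gigabit optical lanes; the grouping names are illustrative, not Intel product definitions:

```python
# Model the DR8 as eight independent 100-gigabit optical lanes and
# enumerate the breakout groupings described in the text.

DR8_LANES = 8
LANE_GBPS = 100

def breakout_options(lanes=DR8_LANES, lane_gbps=LANE_GBPS):
    """Map each breakout grouping to its aggregate bandwidth in Gbps."""
    options = {}
    for group in (8, 4, 1):  # 1x800G, 2x400G (DR4 pair), 8x100G breakouts
        count = lanes // group
        options[f"{count}x{group * lane_gbps}G"] = count * group * lane_gbps
    return options

print(breakout_options())  # every grouping carries the same 800G aggregate
```

Whichever grouping is chosen, the module's aggregate stays 800 gigabits; only the granularity of the interfaces changes.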
Intel expects volume production of the DR8 in early 2022. Developing a DR8 in a QSFP-DD800 form factor will depend on customer demand, says Blum.
Intel will follow the 800-gigabit DR8 module with a dual 400-gigabit FR4, expected later in 2022. The company is also developing a 400-gigabit FR4 module on the same timescale.
Meanwhile, Intel is ramping its 200-gigabit FR4 and 400-gigabit DR4 modules.
51.2-terabit co-packaged optics
Intel demonstrated a prototype 12.8-terabit co-packaged optics design, where the optics is integrated alongside its Tofino 2 Ethernet switch chip, at last year’s OFC event.
The company says its first co-packaged optics design will be for 51.2-terabit switches and is scheduled for late 2023. “We see smaller-scale deployments at 51.2 terabits,” says Blum.

Moving the industry from pluggable optical modules to co-packaged optics is a big shift, says Intel. The technology brings clear system benefits, such as a 30 per cent power saving and lower cost, but these must be balanced against the established benefits of using pluggable modules and the need to create industry partnerships for the production of co-packaged optics.
The emergence of 800-gigabit client-side pluggable modules such as Intel’s also means a lesser urgency for co-packaged optics. “You have something that works even if it is more expensive,” says Blum.
Thirty-two 800-gigabit modules can serve a 25.6-terabit switch in a one rack unit (1RU) platform.
However, for Intel, the crossover point occurs once 102.4-terabit switch chips and 200-gigabit electrical interfaces emerge.
“We see co-packaged optics as ubiquitous; we think pluggables will no longer make sense at that point,” says Blum.
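The crossover argument can be checked with back-of-envelope faceplate arithmetic. A sketch, assuming a 1RU faceplate holds at most 32 OSFP-style modules (a common figure, not an Intel specification):

```python
# Back-of-envelope faceplate arithmetic: how many pluggable modules a
# switch of a given capacity needs, versus the slots a 1RU faceplate
# offers. Slot count is an assumed, illustrative figure.

FACEPLATE_SLOTS = 32

def modules_needed(switch_tbps, module_gbps):
    """Number of pluggable modules needed to serve the switch capacity."""
    return round(switch_tbps * 1000 / module_gbps)

for switch, module in [(25.6, 800), (51.2, 800), (102.4, 1600)]:
    n = modules_needed(switch, module)
    verdict = "fits" if n <= FACEPLATE_SLOTS else "exceeds a 1RU faceplate"
    print(f"{switch}T switch with {module}G modules: {n} modules ({verdict})")
```

A 25.6-terabit switch with 800-gigabit modules just fills the faceplate; at 102.4 terabits even hypothetical 1.6-terabit pluggables would need twice the slots, which is the crossover Intel describes.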
FPGA-based optical input-output
Intel published a paper at OFC 2021 highlighting its latest work as part of the U.S. DARPA PIPES programme.
The paper describes a co-packaged optics design that adds 8 terabits of optical input-output (I/O) to its Stratix 10 FPGA. The design uses Ayar Labs’ TeraPHY chiplet for the optical I/O.
The concept is to use optical I/O to connect compute nodes – in this case, FPGAs – that may be tens or hundreds of metres apart.
Intel detailed its first Stratix 10 with co-packaged optical I/O two years ago.
The latest multi-chip package also uses a Stratix 10 FPGA with Intel’s Advanced Interface Bus (AIB), a parallel electrical interface technology, as well as the Embedded Multi-die Interconnect Bridge (EMIB) technology which supports the dense I/O needed to interface the FPGA to the TeraPHY chiplet. The latest design integrates five TeraPHYs compared to the original one that used two. Each chiplet offers 1.6 terabits of capacity such that the FPGA-based co-package has 8 terabits of I/O in total.
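The chiplet arithmetic above reduces to a one-line calculation:

```python
# Each TeraPHY chiplet contributes 1.6 terabits of optical I/O, so the
# two-chiplet original and the latest five-chiplet package scale linearly.

TERAPHY_TBPS = 1.6

def package_io_tbps(chiplets):
    """Aggregate optical I/O for a given number of TeraPHY chiplets."""
    return chiplets * TERAPHY_TBPS

print(package_io_tbps(2), package_io_tbps(5))  # original: 3.2, latest: 8.0
```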
Optically enabling Ethernet silicon or an FPGA is part of the industry’s vision to bring optics close to the silicon. Other devices include CPUs and GPUs and machine-learning devices used in computing clusters that require high-density interconnect (see diagram below).

“It is happening first with some of the highest bandwidth Ethernet switches but it is needed with other processors as well,” says Blum.
The Intel OFC 2021 paper concludes that co-packaged optics is inevitable.
Milestones, LiDAR and sensing
Intel has shipped a total of over 5 million 100-gigabit optical modules, generating over $1 billion of revenues.
Blum also mentioned Intel’s Mobileye unit which in January announced its LiDAR-on-a-chip design for autonomous vehicles.
“We have more than 6,000 individual components on this LiDAR photonic integrated circuit,” says Blum. The count includes building blocks such as waveguides, taps, and couplers.
“We have this mature [silicon photonics] platform and we are looking at where else it can be applied,” says Blum.
LiDAR is one obvious example: the chip has dozens of coherent receivers and dozens of semiconductor optical amplifiers that boost the output power into free space. “You really need to integrate the different functionalities for it to make sense,” says Blum.
Intel is also open to partnering with companies developing biosensors for healthcare and for other sensing applications.
Certain sensors use spectroscopy and Intel can provide a multi-wavelength optical source on a chip as well as ring-resonator technology.
“We are not yet at a point where we are a foundry and people can come but we could have a collaboration where they have an idea and we make it for them,” says Blum.
Intel combines optics with its Tofino 2 switch chip

Part 1: Co-packaged Ethernet switch
The advent of co-packaged optics has moved a step closer with Intel’s demonstration of a 12.8-terabit Ethernet switch chip with optical input-output (I/O).
The design couples a Barefoot Tofino 2 switch chip to up to 16 optical ‘tiles’ – each tile, a 1.6-terabit silicon photonics die – for a total I/O of 25.6 terabits.
“It’s an easy upgrade to add our next-generation 25.6-terabit [switch chip] which is coming shortly,” says Ed Doe, Intel’s vice president, connectivity group, general manager, Barefoot division.
Intel acquired switch-chip maker Barefoot seven months ago, after which it started the co-packaged optics project.
Intel also revealed that it is in the process of qualifying four new optical transceivers – a 400Gbase-DR4, a 200-gigabit FR4, a 100-gigabit FR1 and a 100Gbase-LR4 – to add to its portfolio of 100-gigabit PSM4 and CWDM4 modules.
Urgency
Intel had planned to showcase the working co-packaged switch at the OFC conference and exhibition, held last week in San Diego. But after withdrawing from the show due to the coronavirus outbreak, Intel has instead been demonstrating the working switch at its offices in Santa Clara.

“We have some visionaries of the industry coming through and being very excited, making comments like: ‘This is an important milestone’,” says Hong Hou, corporate vice president, general manager, silicon photonics product division at Intel.
“There are a lot of doubts still [about co-packaged optics], in the reliability, the serviceability, time-to-market, and the right intercept point [when it will be needed]: is it 25-, 51- or 102-terabit switch chips?” says Hou. “But no one says this is not going to happen.”
If the timing for co-packaged optics remains uncertain, why the urgency?
“There have been a lot of doubters as to whether it is possible,” says Doe. “We had to show that this was feasible and more than just a demo.”
Intel has also been accumulating IP from its co-packaging work. Topics include the development of a silicon-photonics ring modulator, ensuring optical stability and signal integrity, 3D packaging, and passive optical alignment. Intel has also developed a fault-tolerant design that adds a spare laser to each tile to ensure continued working should the first laser fail.
“We can diagnose which laser is the source of the problem, and we have a redundant laser for each channel,” says Hou. “So instead of 16 lasers we have 32 functional lasers but, at any one time, only half are used.”
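The redundancy scheme Hou describes can be put in a simple failover model. This is an illustrative sketch, not Intel's implementation:

```python
# Illustrative per-channel laser redundancy: each of the 16 channels is
# fitted with an active and a spare laser (32 fitted, 16 in use), and a
# diagnosed failure swaps the spare in.

class Channel:
    def __init__(self):
        self.fitted = 2      # lasers fitted: one active, one spare
        self.failures = 0    # lasers diagnosed as failed so far

    @property
    def working(self):
        return self.failures < self.fitted

    def laser_failed(self):
        """Record a diagnosed laser failure; the spare takes over if present."""
        self.failures += 1
        return self.working

channels = [Channel() for _ in range(16)]
print(sum(c.fitted for c in channels))   # 32 lasers fitted in total
assert channels[0].laser_failed()        # first failure: spare takes over
assert not channels[0].laser_failed()    # second failure: channel is down
```

The design trades doubled laser count for continued operation after any single laser failure per channel.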
Co-packaged optics
Ethernet switches connected in the data centre currently use pluggable optics. The switch chip resides on a printed circuit board (PCB) and is interfaced to the pluggable modules via electrical traces.
But with the capacity of Ethernet switch ICs doubling every two years, the power consumption of the I/O continues to rise, while the power delivered to a data centre is limited. Accordingly, solutions are required that double switch speed without increasing the power consumed.
One option is embedded optics such as the COBO initiative. Here, optics are moved from the switch’s faceplate onto the PCB, closer to the switch chip. This shortens the electrical traces while overcoming the capacity limitations of the number of pluggable modules that can be fitted onto the switch’s faceplate. Freeing up the faceplate by removing pluggables also improves airflow to cool the switch.
The second, more ambitious approach is co-packaged optics where optics are combined with the switch ASIC in the one package.
Co-packaged optics can increase the overall I/O on and off the switch chip, something that embedded optics doesn’t address. And by placing the optics next to the ASIC, the drive requirements of the high-speed serialiser-deserialisers (serdes) are simplified.
Meanwhile, pluggable optics continue to advance in the form factors used and their speeds as well as developments such as fly-over cables that lower the loss connecting the switch IC to the front-panel pluggables.
Meanwhile, certain hyperscalers are not convinced about co-packaged optics.
Microsoft and Facebook announced last year the formation of the Co-Packaged Optics (CPO) Collaboration to help guide the industry to develop the elements needed for packaging optics. But Google and Alibaba said at OFC that they prefer the flexibility and ease of maintenance of pluggables.
Data centre trends
The data centre is a key market for Intel which sells high-end server microprocessors, switch ICs, FPGAs and optical transceivers.
Large-scale data centres deploy 100,000 servers, 50,000 switches and over one million optical modules. And a million pluggable modules equate to $150M to $250M of potential revenue, says Intel.

“One item that is understated is the [2:1] ratio of servers to switches,” says Doe. “We have seen a trend in recent years where the layers of switching in data centres have increased significantly.”
One reason for more switching layers is that traffic over-subscription is no longer used. With top-of-rack switches, a 3:1 over-subscription was common, which limited the uplink bandwidth a switch needed.
However, the changing nature of the computational workloads now requires that any server can talk to any other server.
“You can’t afford to have any over-subscription at any layer in the network,” says Doe. “As a result, you need to have a lot more bandwidth: an equal amount of downlink bandwidth to uplink bandwidth.”
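Doe's point can be put in numbers. A sketch, assuming a hypothetical top-of-rack switch with 48 server-facing 100-gigabit downlinks (illustrative figures only):

```python
# Over-subscription arithmetic for a hypothetical top-of-rack switch:
# at 3:1 only a third of the downlink bandwidth is provisioned as
# uplinks; at 1:1 uplinks must match downlinks.

def uplink_gbps(downlink_gbps, oversubscription):
    """Uplink bandwidth required for a given over-subscription ratio."""
    return downlink_gbps / oversubscription

downlink = 48 * 100                # 4800 Gbps of server-facing downlinks
print(uplink_gbps(downlink, 3))    # 3:1 over-subscribed: 1600.0 Gbps
print(uplink_gbps(downlink, 1))    # 1:1, no over-subscription: 4800.0 Gbps
```

Moving to 1:1 triples the uplink bandwidth of this hypothetical switch, and the same multiplication repeats at every layer above it.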
Another factor that has increased the data centre’s switch layer count is the replacement of chassis switches with disaggregated pizza boxes. Typically, a chassis switch encompasses three layers of switching.
“Disaggregation is a factor but the big one is the 1:1 [uplink-downlink bandwidth] ratio, not just at the top-of-rack switch but all the way through,” says Doe. “They [the hyperscalers] want to have uniform bandwidth throughout the entire data centre.”
Tofino switch IC
Barefoot has two families of Tofino chips. The first-generation Tofino devices have a switching capacity ranging from 1.2 to 6.4 terabits and are implemented using a 16nm CMOS process. The Tofino 2 devices, implemented using a 7nm CMOS process, range from 4 terabits to 12.8 terabits.
“What we have coming soon is the Tofino next-generation which will go to both 25 terabits and 51 terabits,” says Doe.
Intel is not discussing future products but Doe hints that both switch ICs will be announced jointly rather than the typical two-year delay between successive generations of switch IC. This also explains the urgency of the company’s co-packaging work.
The 12.8-terabit Tofino 2 chip comprises the switch core dies and four electrical I/O tiles that house the device’s serdes.
“The benefit of the tile design is that it allows us to easily swap the tiles for higher-speed serdes – 112 gigabit-per-second (Gbps) – once they become available,” says Doe. And switching the tiles to optical was already envisaged by Barefoot.
Optical tile
Intel’s 1.6-terabit silicon-photonics tile includes two integrated lasers (active and spare), a ring modulator, an integrated modulator driver, and receiver circuitry. “We also have on-chip a v-groove which allows for passive optical alignment,” says Hou.
Each tile implements the equivalent of four 400GBASE-DR4s. The 500m-reach DR4 comprises four 100-gigabit channels, each sent over single-mode fibre.
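The tile arithmetic is easy to verify: four DR4s of four 100-gigabit lanes per tile, and sixteen such tiles reach the 25.6 terabits quoted earlier:

```python
# Checking the tile arithmetic: each tile behaves like four 400GBASE-DR4
# interfaces, i.e. sixteen 100-gigabit lanes, and sixteen tiles give the
# 25.6 terabits of optical I/O quoted earlier in the article.

LANE_GBPS = 100
LANES_PER_DR4 = 4
DR4S_PER_TILE = 4
TILES = 16

tile_gbps = DR4S_PER_TILE * LANES_PER_DR4 * LANE_GBPS  # 1600 Gbps per tile
package_tbps = TILES * tile_gbps / 1000                # terabits in total
print(tile_gbps, package_tbps)
```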
“This is a standards-based interface,” says Robert Blum, Intel’s director of strategic marketing and business development, as the switch chip must interact with standards-based optics.
The switch chip and the tiles sit on an interposer. Having an interposer will enable different tiles and different system-on-chips to be used in future.
Hou says that having the laser integrated with the tile saves power. This contrasts with designs where the laser is external to the co-packaged design.
The argument for using an external laser is that it is remote from the switch chip which runs hot. But Hou says that the switch chip itself has efficient thermal management which the tile and its laser(s) can exploit. Each tile consumes 35W, he says.
As for laser reliability, Intel points to its optical modules that it has been selling since 2016 when it started selling the PSM4.
Hou claims Intel’s hybrid laser design, where the gain chip is separated from the cavity, is far more reliable than a III-V facet cavity.
“We have shipped over three million 100-gigabit transceivers, primarily the PSM4. The DPM [defects per million] is 28-30, about two orders of magnitude less than our closest competitor,” says Hou. “Eight out of ten times the cause of the failure of a transceiver is the laser, and nine out of ten times, the laser failure is due to a cavity problem.”
The module’s higher reliability reduces the maintenance needed, and enables data centre operators to offer more stringent service-level agreements, says Hou.
Intel says it will adopt wavelength-division multiplexing (WDM) to enable a 3.2-terabit tile which will be needed with the 51.2-terabit Tofino.

Switch platform
Intel’s 2-rack-unit (2RU) switch platform is a hybrid design: interfaced to the Tofino 2 are four tiles as well as fly-over cables to connect the chip to the front-panel pluggables.
“The hyperscalers are most interested in co-packaging but when you talk to enterprise equipment manufacturers, their customers may not have a fabric as complicated as that of the hyperscalers,” says Hou. “Bringing pluggables in there allows for a transition.”
The interposer design uses vertical plug-in connectors enabling a mix of optical and electrical interfaces. “It is pretty easy, at the last minute, to [decide to] bring in 10 optical [interfaces] and six fly-over cables [to connect] to the pluggables,” says Hou.
“This is not like on-board optics,” adds Blum. “This [connector arrangement] is part of the multi-chip package, it doesn’t go through the PCB. It allows us to have [OIF-specified] XSR serdes and get the power savings.”
Intel expects its co-packaged design to deliver a 30 per cent power saving as well as a 25 to 30 per cent cost saving. And now that it has a working platform, Hou expects more engagement with customers seeking these benefits and the design’s higher bandwidth density.
“This can stimulate more discussions and drive an ecosystem formation around this technology,” concludes Hou.
See Part 2: Ranovus outlines its co-packaged optics plans.
Intel targets 5G fronthaul with a 100G CWDM4 module
- Intel announced at ECOC that it is sampling a 10km extended temperature range 100-gigabit CWDM4 optical module for 5G fronthaul.
- Intel also announced support for the 400 Gigabit Ethernet (GbE) parallel-fibre DR4 standard.
- Intel, a backer of the CWDM8 MSA, says the 8-wavelength 400-gigabit module will not be in production before 2020.
Intel has expanded its portfolio of silicon photonics-based optical modules to address 5G mobile fronthaul and 400GbE.
At the European Conference on Optical Communication (ECOC), being held in Rome this week, Intel announced it is sampling a 100-gigabit CWDM4 module in a QSFP form factor for wireless fronthaul applications.
The CWDM4 module has an extended temperature range, -20°C to +85°C, and a 10km reach.
“The final samples are available now and [the product] will go into production in the first quarter of 2019,” says Robert Blum, director of strategic marketing and business development at Intel’s silicon photonics product division.
Intel also announced it will support the 400GBASE-DR4, the IEEE’s 400 GbE standard that uses four parallel fibres for transmit and four for the receive path, each carrying a 100-gigabit 4-level pulse amplitude modulation (PAM-4) signal.
5G wireless
5G wireless will be used for a variety of applications. The first 5G fixed and mobile wireless services are expected to launch this year. 5G will also support massive Internet of Things (IoT) deployments as well as ultra-low latency applications.
The next-generation wireless standard uses new spectrum that includes millimetre wave spectrum in the 24GHz to 40GHz region. Such higher frequency bands will drive small-cell deployments.
5G’s use of new spectrum, small cells and advanced air interface techniques such as multiple input, multiple output (MIMO) antenna technology is what will enable its greater data speeds and vastly expanded capacity compared to the current LTE cellular standard.
Source: Intel.
The 5G wireless standard will also drive greater fibre deployment at the network edge. And it is here where mobile fronthaul plays a role, linking the remote radio heads at the antennas with the centralised baseband controllers at the central office (see diagram). Such fronthaul links will use 25-gigabit and 100-gigabit links. “We have multiple customers that are excited about the 100-gigabit CWDM4 for these applications,” says Blum.
Intel expects demand for 25-gigabit and 100-gigabit transceivers for mobile fronthaul to begin in 2019.
Intel is now producing over one million PSM4 and CWDM4 modules a year
Client-side modules
Intel entered the optical module market with its silicon photonics technology in 2016 with a 100-gigabit PSM4 module, quickly followed by a 100-gigabit CWDM4 module. Intel is now producing over one million PSM4 and CWDM4 modules a year.
Intel will provide customers with 400-gigabit DR4 samples in the final quarter of 2018 with production starting in the second half of 2019. This is when Intel says large-scale data centre operators will require 400 gigabits.
“The initial demand in hyperscale data centres for 400 gigabits will not be for duplex [fibre] but parallel fibre,” says Blum. “So we expect the DR4 to go to volume first and that is why we are announcing the product at ECOC.”
Intel says the advantages of its silicon photonics approach have already been demonstrated with its 100-gigabit PSM4 module. One is the optical performance resulting from the company’s heterogeneous integration technique combining indium-phosphide lasers with silicon photonics modulators on the one chip. Another advantage is scale using Intel’s 300mm wafer-scale manufacturing.
Intel expects demand for the 500m-reach DR4 module to go hand-in-hand with that for the 100-gigabit single-wavelength DR1, given that the DR4 will also be used in breakout mode to interface with four DR1 modules.
“We don’t see the DR1 standard competing or replacing 100-gigabit CWDM4,” says Blum. “The 100-gigabit CWDM4 is now mature and at a very attractive price point.”
Intel is a leading proponent of the CWDM8 MSA, an optical module design based on eight wavelengths, each a 50 gigabit-per-second (Gbps) non-return-to-zero (NRZ) signal. The CWDM8 MSA was created to fast-track 400 gigabit interfaces by avoiding the wait for 100-gigabit PAM-4 silicon.
When the CWDM8 MSA was launched in 2017, the initial schedule was to deploy the module by the end of this year. Intel also demonstrated the module working at the OFC show held in March.
Now, Intel expects production of the CWDM8 in 2020 and, by then, other four-wavelength solutions using 100-gigabit PAM-4 silicon such as the 400G-FR4 MSA will be available.
“We just have to see what the use case will be and what the timing will be for the CWDM8’s deployment,” says Blum.
The CWDM8 MSA avoids PAM-4 to fast-track 400G
Another multi-source agreement (MSA) group has been created to speed up the market introduction of 400-gigabit client-side optical interfaces.
The CWDM8 MSA is described by its founding members as a pragmatic approach to provide 400-gigabit modules in time for the emergence of next-generation switches next year. The CWDM8 MSA was announced at the ECOC show held in Gothenburg last week.
The eight-wavelength coarse wavelength-division multiplexing (CWDM) MSA is being promoted as a low-cost alternative to the IEEE 802.3bs 400 Gigabit Ethernet Task Force’s 400-gigabit eight-wavelength specifications, and as less risky than the newly launched 100G Lambda MSA specifications based on four 100-gigabit wavelengths for 400 gigabits.
“The 100G Lambda has merits and we are also part of that MSA,” says Robert Blum, director of strategic marketing and business development at Intel’s silicon photonics product division. “We just feel the time to get to 100-gigabit-per-lambda is really when you get to 800 Gigabit Ethernet.”
Intel is one of the 11 founding companies of the CWDM8 MSA.
Specification
The CWDM8 MSA will develop specifications for 2km and 10km links. The MSA uses wavelengths spaced 20nm apart. As a result, unlike the IEEE’s 400GBASE-FR8 and 400GBASE-LR8 that use the tightly-spaced LAN-WDM wavelength scheme, no temperature control of the lasers is required. “It is just like the CWDM4 but you add four more wavelengths,” says Blum.
The CWDM8 MSA also differs from the IEEE specifications and the 100G Lambda MSA in that it does not use 4-level pulse-amplitude modulation (PAM-4). Instead, 50-gigabit non-return-to-zero (NRZ) signalling is used for each of the eight wavelengths.
The MSA will use the standard CDAUI-8 8x50-gigabit PAM-4 electrical interface. Accordingly, a retimer chip will be required inside the module to translate each input 50-gigabit PAM-4 electrical signal to 50-gigabit NRZ. According to Intel, several companies are developing such a chip.
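In symbol-rate terms, the retimer's job can be sketched as follows; FEC and coding overheads are ignored for simplicity:

```python
# PAM-4 carries two bits per symbol, NRZ one, so for the same 50-gigabit
# lane the NRZ optics runs at double the symbol rate of the electrical
# PAM-4 lanes. Overheads are ignored for simplicity.

def symbol_rate_gbd(lane_gbps, bits_per_symbol):
    """Symbol rate in gigabaud for a lane rate and modulation format."""
    return lane_gbps / bits_per_symbol

electrical_pam4 = symbol_rate_gbd(50, 2)  # CDAUI-8 lane: 25.0 GBd
optical_nrz = symbol_rate_gbd(50, 1)      # each wavelength: 50.0 GBd
print(electrical_pam4, optical_nrz)
```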
When we looked at what is available and how to do an optical interface, there was no good solution that would allow us to meet those timelines
Benefits
Customers are telling Intel that they need 400-gigabit duplex-fibre optical modules early next year and that they want to have them in production by the end of 2018.
“When we looked at what is available and how to do an optical interface, there was no good solution that would allow us to meet those timelines, fit the power budget of the QSFP-DD [module] and be at the cost points required for data centre deployment,” says Blum.
An 8x50-gigabit NRZ approach is seen as a pragmatic solution to meet these requirements.
No PAM-4 physical layer DSP chip is needed since NRZ is used. The link budget is significantly better compared to using PAM-4 modulation. And there is a time-to-market advantage since the technologies used for the CWDM8 are already proven.
We just think it [100-gigabit PAM4] is going to take longer than some people believe
This is not the case for the emerging 100-gigabit-per-wavelength MSA that uses 50-gigabaud PAM-4. “PAM-4 makes a lot of sense on the electrical side, a low-bandwidth [25 gigabaud], high signal-to-noise ratio link, but it is not ideal when you have high bandwidth on the optical components [50 gigabaud] and you have a lot of noise,” says Blum.
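Blum's link-budget point can be made with a first-order calculation: PAM-4 squeezes four levels into the same swing, so each eye is a third of the NRZ eye. This sketch ignores all implementation specifics:

```python
# First-order eye-amplitude penalty of PAM-4 relative to NRZ: each of the
# three PAM-4 eyes is one third of the single NRZ eye.
import math

eye_ratio = 1 / 3                                    # PAM-4 eye vs NRZ eye
electrical_penalty_db = -20 * math.log10(eye_ratio)  # ~9.5 dB electrical SNR
optical_penalty_db = -10 * math.log10(eye_ratio)     # ~4.8 dB optical power
print(round(electrical_penalty_db, 1), round(optical_penalty_db, 1))
```

Even before dispersion and component bandwidth are considered, this first-order penalty is why the CWDM8 camp argues NRZ's link budget is significantly better.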
One hundred gigabits per wavelength will be needed on the optical path, says Blum, but only for 800 Gigabit Ethernet with its eight electrical channels and eight optical ones. “We just think it [100-gigabit PAM4] is going to take longer than some people believe.” Meanwhile, the CWDM8 is the best approach to meet market demand for 400-gigabit duplex interfaces to support the next-generation data centre switches expected next year, says Blum.
The founding members of the CWDM8 MSA include chip and optical component players as well as switch system makers. Unlike the 100G Lambda MSA, no large-scale data centre operators have joined the MSA.
The members are Accton, Barefoot Networks, Credo Semiconductor, Hisense, Innovium, Intel, MACOM, Mellanox, Neophotonics and Rockley Photonics.
Oclaro demonstrates flexible rate coherent pluggable module
- The CFP2 coherent optical module operates at 100 and 200 Gig
- Samples are already with customers, with general availability in the first half of 2015
- Oclaro to also make more CFP2 100GBASE-LR4 products

The CFP2 is not just used in metro/regional networks but also in long-haul applications
Robert Blum
The advent of a pluggable CFP2, capable of multi-rate long-distance optical transmission, has moved a step closer with a demonstration by Oclaro. The optical transmission specialist showed a CFP2 transmitting data at 200 Gigabits-per-second.
The coherent analogue module demonstration, where the DSP-ASIC resides alongside rather than within the CFP2, took place at ECOC 2014, held in September in Cannes. Oclaro showcased the CFP2 to potential customers in March at OFC 2014, but then the line-side module supported 100 Gig only.
"What has been somewhat surprising to us is that the CFP2 is not just used in metro/ regional networks but also in long-haul applications," says Robert Blum, director of strategic marketing at Oclaro. "We are also seeing quite significant interest in data centre interconnect, where you want to get 400 Gig between sites using two CFP2s and two DSPs." Oclaro says that the typical distances are from 200km to 1,000km.
The CFP2 achieves 200 Gig using polarisation multiplexing, 16-quadrature amplitude modulation (PM-16-QAM) while working alongside ClariPhy's merchant DSP-ASIC. ClariPhy announced at ECOC that it is now shipping its 200 Gig LightSpeed-II CL20010 coherent system-on-chip, implemented using a 28nm CMOS process.
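The doubling from 100 to 200 Gig at an unchanged symbol rate follows from the modulation arithmetic. A sketch, assuming a nominal 25 gigabaud symbol rate (real modules run somewhat faster to carry FEC and framing overhead):

```python
# Bits per symbol decide the line rate at a fixed symbol rate: PM-QPSK
# carries 2 bits x 2 polarisations, PM-16-QAM 4 x 2, so the same optics
# doubles its rate by doubling bits per symbol. 25 GBd is nominal.

SYMBOL_RATE_GBD = 25

def data_rate_gbps(bits_per_symbol_per_polarisation,
                   polarisations=2, symbol_rate=SYMBOL_RATE_GBD):
    """Line rate in Gbps for a polarisation-multiplexed coherent format."""
    return bits_per_symbol_per_polarisation * polarisations * symbol_rate

print(data_rate_gbps(2))  # PM-QPSK:   100 Gbps
print(data_rate_gbps(4))  # PM-16-QAM: 200 Gbps
```

The catch, as the article goes on to note, is that the denser constellation needs a linear modulator driver and more careful calibration.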
"One of the beauties of an analogue CFP2 is that it works with a variety of DSPs," says Blum. Other merchant coherent DSPs are becoming available, while leading long-haul optical equipment vendors have their own custom coherent DSPs.
Oclaro's CFP2, even when operating at 200 Gig, falls within the module's 12W power rating. "One of the things you need to have for 200 Gig is a linear modulator driver, and such drivers consume slightly more power [200mW] than limiting modulator drivers [used for 100 Gig only]," says Blum.
Oclaro will offer two CFP2 line-side variants, one with linear drivers and one using limiting ones. The limiting driver CFP2 will be used for 100 Gig only whereas the linear driver CFP2 supports 100 Gig PM-QPSK and 200 Gig PM-16-QAM schemes. "Some customers prefer the simplicity of a limiting interface; for the linear interface you have to do more calibration or set-up," says Blum. "Linear also allows you to do pre-emphasis of the signal path, from the DSP all the way to the modulator." Pre-emphasis is used to compensate for signal path impairments.
Because it consumes under 12W, up to eight line-side CFP2 interfaces can fit on a line card, says Blum, who also stresses that the CFP2 has a 0dBm output power at 200 Gig. Achieving such an output power level means the 200 Gig signal is on a par with 100 Gig wavelengths. "When you launch a 200 Gig signal, you want to make sure that there is not a big difference between signals," says Blum.
To achieve the higher output power, the micro integrable tunable laser assembly (micro-iTLA) includes a semiconductor optical amplifier (SOA) with the laser, while SOAs are also added to the Mach–Zehnder modulator chip. "That allows us to compensate for some of the [optical] losses," says Blum.
Customers received first CFP2 samples in May, with the module currently at the design validation stage. Oclaro expects volume shipments to begin in the first half of 2015.
100 Gig and the data centre
Oclaro also announced at ECOC that it has expanded manufacturing capacity for its CFP2-based 100GBASE-LR4 10km-reach module.
One reason for the flurry of activity around 100 Gig mid-reach interfaces that span 500m-2km in the data centre is that the 100GBASE-LR4 module is relatively expensive. Oclaro itself has said it will support the PSM-4, CWDM4 and CLR4 Alliance mid-reach 100 Gig interfaces. So why is Oclaro expanding manufacturing of its CFP2-based 100GBASE-LR4?
It is about being pragmatic and finding the most cost-effective solution for a given problem
"There is no clear good solution to get 100 Gig over 500m or 2km right now," says Blum. "CFP2 is here, it is a mature technology and we have made improvements both in performance and cost."
Oclaro has improved its EML design such that the laser needs less cooling, reducing overall power dissipation. The accompanying electronic functions, such as clock data recovery, have also been redesigned using one IC instead of two, such that the CFP2-LR4's overall power consumption is below 8W.
Demand has been so strong, says Blum, that the company has been unable to meet it. Oclaro expects that towards year-end, it will have increased its CFP2 100GBASE-LR4 manufacturing capacity by 50 per cent compared with six months earlier.
"It is about being pragmatic and finding the most cost-effective solution for a given problem," says Blum. "There are other [module] variants that are of interest [to us], such as the CWDM4 MSA that offers a cost-effective way to get to 2km."
ECOC 2012 summary - Part 1: Oclaro
Gazettabyte completes its summary of key optical announcements at the recent ECOC show held in Amsterdam. Oclaro's announcements detailed here are followed by those of Finisar and NeoPhotonics.
Part 1: Oclaro

"Networks are getting more complex and you need automation so that they are more foolproof and more efficient operationally"
Per Hansen, Oclaro
Oclaro made several announcements at ECOC, including an 8-port flexible-grid optical channel monitor, a new small-form-factor pump laser MSA and its first CFP2 module. The company also gave an update regarding its 100 Gigabit coherent optical transmission module as well as the company's status following Oclaro's merger with Opnext (see below).
The 8-port flexible grid optical channel monitor (OCM) is to address emerging, more demanding requirements of optical networks. "Networks are getting more complex and you need automation so that they are more foolproof and more efficient operationally," says Per Hansen, vice president of product marketing, optical networks solutions at Oclaro.
The 8-port device can monitor up to eight fibres, for example the input and seven output ports of a wavelength-selective switch or an amplifier's outputs.
The programmable OCM can do more than simply go from fibre to fibre, measuring the spectrum. The OCM can dwell on particular ports, or monitor a wavelength on particular ports when the system is adjusting or turning up a wavelength, for example.
"There is processing power included such that you can do a lot of data processing which can then be exported to the line card in the format required," says Hansen. This is important as operators start to adopt flexible-grid network architectures. "[With flexible-grid spectrum] you don't know where channels stop and start such that an OCM that looks at fixed slots in no longer enough," says Hansen.
The OCM can monitor spectral bands as narrow as 6.25GHz, right through to the complete C-band.
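To make the programmable behaviour concrete, here is an illustrative sketch of how a host controller might schedule such a multi-port OCM, mixing routine port-by-port sweeps with dwell requests during wavelength turn-up. This is not Oclaro's firmware or API; all names and the scheduling policy are assumptions for illustration only.

```python
# Hypothetical host-side scheduler for an 8-port programmable OCM.
# Not Oclaro's actual interface - an illustration of sweep vs. dwell.
from collections import deque

class OCMScheduler:
    """Queues measurement jobs for a multi-port optical channel monitor."""

    def __init__(self, num_ports=8):
        self.ports = list(range(num_ports))
        self.jobs = deque()

    def scan_all(self):
        # Default behaviour: sweep the full spectrum on every port in turn.
        for port in self.ports:
            self.jobs.append(("sweep", port, None))

    def dwell(self, port, centre_ghz, repeats=5):
        # During wavelength turn-up, dwell repeatedly on one port/slice,
        # pre-empting the routine sweep (hence appendleft).
        for _ in range(repeats):
            self.jobs.appendleft(("dwell", port, centre_ghz))

    def next_job(self):
        return self.jobs.popleft() if self.jobs else None

sched = OCMScheduler()
sched.scan_all()
sched.dwell(port=3, centre_ghz=193100.0, repeats=2)
print(sched.next_job())  # dwell jobs run before the routine sweep resumes
```

The point of the sketch is the programmability Hansen describes: the monitor is not locked into a fixed round-robin, but can concentrate on a particular port or slice while a wavelength is being adjusted.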
Oclaro also detailed that its OMT-100 coherent 100 Gigabit optical module is entering volume production. "We have shipped well over 100 [units] to various customers," says Hansen. "There are a lot of system houses looking at this type of module this year." The OMT-100 was developed by Opnext and replaces Oclaro's own MI 8000XM 100 Gigabit module.
The company also announced its first 100 Gigabit CFP2 module and its second-generation CFP module with a 16W power consumption; both support the IEEE 100GBASE-LR4 10km standard.
A new small form factor multi-source agreement (MSA) for pump laser diodes was also announced at the show, involving Oclaro and 3S Photonics.
The 10-pin butterfly package is designed to replace the existing 14-pin design. "It is 75% smaller in volume - about two-thirds in each dimension," says Robert Blum, director of product marketing for Oclaro's photonic components. The MSA supports a single cooled or uncooled pump laser, and its smaller volume enables more integrated amplifier designs.
Oclaro says other companies have expressed interest in the MSA and it expects additional players to join.
The New Oclaro

Source: Ovum
Oclaro also gave an update on the company's status following the merger with Opnext earlier this year. The combined company, now 3,000 strong, has estimated annual revenues of US $800m, placing it second only to Finisar among optical component makers.
The merger has broadened the company's product line, adding Opnext's strength in datacom pluggable transceivers to Oclaro's core networking products. The company is also more vertically integrated, using its optical components such as tunable laser and VCSEL technologies, modulators and receivers within its line-side transponders and pluggable optical transceivers.
"You can drive technologies in different directions and not just be out there buying components and throwing them together," says Hansen.
The company also has a range of laser diodes for industrial and consumer applications. "We [Oclaro] were already the largest merchant supplier of high-power laser diodes but now we have a complete portfolio that covers all the wavelengths from 400 up to 1500nm," says Blum.
The company has a broad range of technologies that include indium phosphide, gallium arsenide, lithium niobate, MEMS, liquid crystal and gallium nitride.
A third business unit has also been created: alongside the existing optical networks solutions and photonic components businesses, there is now a modules and devices unit, based in Japan, covering pluggable and high-speed client-side transceivers.
Huawei's novel Petabit switch
The Chinese equipment maker showcased a prototype optical switch at this year's OFC/NFOEC that can scale to 10 Petabit.

"Although the numbers [400,000 lasers] appear quite staggering, they point to a need for photonic integration"
Reg Wilcox, Huawei
Huawei has demonstrated a concept Petabit Packet Cross Connect (PPXC), a switching platform to meet future metro and data centre requirements. The demonstrator is not expected to be a commercial product before 2017.
Current platforms have switching capacities of several Terabits. Yet Huawei believes a one thousand-fold increase in switching capacity will be needed. Fibre capacity will be filled to 20 and eventually 50 Terabits using higher-order modulation schemes and flexible spectrum. This will add up to Petabits (a Petabit being one million Gigabits) per site, assuming 200 switched fibres at busy network exchanges.
"We are not saying we will introduce a 10 Petabit product in five years' time, although the technology is capable of that," says Reg Wilcox, vice president of network marketing and product management at Huawei. "We will size it to what we deem the market needs at that time."
Source: Huawei
The PPXC uses optical burst transmission to implement the switching. Such burst transmission uses ultra-fast switching lasers, each set to a particular wavelength in nanoseconds. Like Intune Networks’ Verisma iVX8000 optical packet switching and transport system, each wavelength is assigned to a particular destination port. As OTN traffic or packets arrive, they are assigned a wavelength before being sent to a destination port.
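The routing principle described above - each wavelength bound to a destination port, so that steering a burst amounts to retuning the transmit laser - can be modelled in a few lines. The grid start, channel spacing and port numbering below are assumptions for illustration, not Huawei's actual mapping.

```python
# Toy model of burst switching: destination port i maps to wavelength i.
# Grid values are illustrative assumptions, not the PPXC's real channel plan.
NUM_PORTS = 80
FIRST_CHANNEL_GHZ = 191700.0   # assumed grid start
SPACING_GHZ = 50.0             # assumed 50GHz channel spacing

def wavelength_for_port(port):
    """Return the carrier frequency that routes a burst to 'port'."""
    assert 0 <= port < NUM_PORTS
    return FIRST_CHANNEL_GHZ + port * SPACING_GHZ

def send_burst(payload, dest_port):
    """Tag a burst with the laser frequency that steers it to dest_port."""
    return {"freq_ghz": wavelength_for_port(dest_port), "payload": payload}

burst = send_burst(b"OTN frame", dest_port=7)
# A fast-tunable laser such as the DS-DBR would now retune (in nanoseconds)
# to burst["freq_ghz"] before transmitting.
```

Because the mapping is static, no header processing is needed in the optical core: the switching decision is made entirely at the transmitter by the choice of wavelength.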
Huawei's switch demonstration linked two Huawei OSN8800 32-slot platforms, each with an Optical Transport Network (OTN) switching capacity of 2.56 Terabit-per-second (Tbps), to either side of the core optical switch, to implement what is known as a three-stage Clos switching matrix.
With each OSN8800, half the slots are for inter-machine trunks to the core optical switch, the middle stage of the Clos switch. "The other half [of the OSN8800] would be dedicated to whatever services you want to have: Gigabit Ethernet, 10 Gigabit Ethernet; whatever traffic you want riding over OTN," says Wilcox.
The core optical switch implements an 80x80 matrix using 80 wavelengths, each operating at 25Gbps. The 80x80 matrix is surrounded by MxM fast optical switches to implement a larger 320x320 matrix that has an 8 Terabit capacity. It is these larger matrices - 'switch planes' - that are stacked to achieve 10 Petabit. The PPXC grooms traffic starting at 1 Gigabit rates and can switch 100Gbps and even higher speed incoming wavelengths in future.
Oclaro provided Huawei with the ultra-fast lasers for the demonstrator. The laser - a digital supermode-distributed Bragg reflector (DS-DBR) - has an electro-optic tuning mechanism, says Robert Blum, director of product marketing for Oclaro's photonic components. Here, current is applied to the grating to set the laser's wavelength. The resulting tuning speed is in nanoseconds, although Oclaro will not say the exact switching speed specified for the switch.
Each switch plane uses 4x80, or 320, 25Gbps lasers. A 10 Petabit switch requires 400,000 (320 x 1,250) lasers. "Although the numbers appear quite staggering, they point to a need for photonic integration," says Wilcox. Huawei recently acquired photonic integration specialist CIP Technologies.
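The scaling arithmetic quoted in the article can be checked directly from the stated figures: 320 lasers of 25Gbps per plane gives 8 Terabits, and reaching 10 Petabits then requires 1,250 planes and 400,000 lasers.

```python
# Verifying the PPXC scaling figures stated in the article.
lasers_per_plane = 4 * 80                 # 320 lasers per switch plane
plane_capacity_gbps = lasers_per_plane * 25   # 25Gbps per laser
assert plane_capacity_gbps == 8_000       # 8 Terabits per plane, as stated

target_gbps = 10 * 1_000_000              # 10 Petabits in Gigabits
planes_needed = target_gbps // plane_capacity_gbps
total_lasers = planes_needed * lasers_per_plane
print(planes_needed, total_lasers)        # 1250 planes, 400000 lasers
```

The laser count, not the plane count, is the striking figure here, which is why Wilcox frames it as an argument for photonic integration.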
The demonstration highlighted the PPXC switching OTN traffic but Wilcox stresses that the architecture is cell-based and can support all packet types: "We are flexible in the technology as the world evolves to all-packet.” The design is therefore also suited to large data centres to switch traffic between servers and for linking aggregation routers. "It is applicable in the data centre as a flattened [switch] architecture," says Wilcox.
Huawei claims the Petabit switch will deliver other benefits besides scalability. "Rough estimates comparing this device to OTN switches, MPLS switches and routers yields savings of greater than 60% on power, anywhere from 15-80% on footprint and at least a halving of fibre interconnect," says Wilcox.
Meanwhile, Oclaro says Huawei is not the only vendor interested in the technology. "We have seen quite some interest recently in this area [of optical burst transmission]," says Oclaro's Blum. "I wouldn't be surprised if other companies make announcements in this space."
Further reading:
- OFC/NFOEC 2012 paper: An Optical Burst Switching Fabric of Multi-Granularity for Petabit/s Multi-Chassis Switches and Routers

