ADVA's 100 Terabit data centre interconnect platform
- The FSP 3000 CloudConnect comes in several configurations
- The data centre interconnect platform scales to 100 terabits of throughput
- The chassis use a thin 0.5 RU QuadFlex card with up to 400 Gig transport capacity
- The optical line system has been designed to be open and programmable
ADVA Optical Networking has unveiled its FSP 3000 CloudConnect, a data centre interconnect product designed to cater for the needs of the different data centre players. The company has developed platforms in several sizes to address the workloads and bandwidth needs of data centre operators such as Internet content providers, communications service providers, enterprises, cloud and colocation players.
Certain Internet content providers want to scale the performance of their computing clusters across their data centres. A cluster is a distributed computing grouping comprising a defined number of virtual machines and processor cores (see Clusters, pods and recipes explained, bottom). Yet there are also data centre operators that only need to share limited data between their sites.
ADVA Optical Networking highlights two internet content providers - Google and Microsoft with its Azure cloud computing and services platform - that want their distributed clusters to act as one giant global cluster.
“The performance of the combined clusters is proportional to the bandwidth of the interconnect,” says Jim Theodoras, senior director, technical marketing at ADVA Optical Networking. “No matter how many CPU cores or servers, you are now limited by the interconnect bandwidth.”
ADVA Optical Networking cites a Google study that involved running an application on different cluster configurations, starting with a single cluster; then two, side-by-side; two clusters in separate buildings through to clusters across continents. Google claimed the distributed clusters only performed at 20 percent capacity due to the limited interconnect bandwidth. “The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster,” says Theodoras.
Yet other internet content providers have far more modest interconnect demands. ADVA cites one, as large as the two global cluster players, that wants only 1.2 terabit-per-second (Tbps) between its sites. “It is normal duplication/ replication between sites,” says Theodoras. “They want each campus to run as a cluster but they don’t want their networks to behave as a global cluster.”
FSP 3000 CloudConnect
The FSP 3000 CloudConnect has several configurations. The company stresses that it designed CloudConnect as a high-density, self-contained platform that is power-efficient and that comes with advanced data security features.
All the CloudConnect configurations use the QuadFlex card that has an 800 Gigabit throughput: up to 400 Gigabit of client-side interfaces and 400 Gigabit line rates.
The QuadFlex card is thin, measuring only half a rack unit (RU). Up to seven can be fitted in ADVA’s four rack-unit (4 RU) platform, dubbed the SH4R, for a line-side transport capacity of 2.8 Tbps. The SH4R’s remaining, eighth slot hosts either one or two management controllers.
The QuadFlex line-side interface supports various rates and reaches, from 100 Gigabit ultra long-haul to 400 Gigabit metro/ regional, in increments of 100 Gigabit. Two carriers, each using polarisation-multiplexing, 16 quadrature amplitude modulation (PM-16-QAM), are used to achieve the 400 Gbps line rate, whereas for 300 Gbps, 8-QAM is used on each of the two carriers.
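The line-rate steps follow directly from the bits each modulation scheme carries per symbol. As a rough sketch of the arithmetic (the fixed symbol rate giving 100 Gbps per PM-QPSK carrier is an assumption for illustration; real modems also vary baud rate and FEC overhead):

```python
# Line-rate arithmetic for a dual-carrier coherent interface, as described
# in the article: each carrier runs PM-QPSK, PM-8-QAM or PM-16-QAM.
# Assumes a fixed symbol rate at which PM-QPSK yields 100 Gbps per carrier.

BITS_PER_SYMBOL = {"QPSK": 2, "8QAM": 3, "16QAM": 4}

def line_rate_gbps(modulation: str, carriers: int = 2) -> int:
    """Net line rate, scaling the 100 Gbps PM-QPSK baseline by bits per symbol."""
    per_carrier = 100 * BITS_PER_SYMBOL[modulation] // BITS_PER_SYMBOL["QPSK"]
    return per_carrier * carriers

print(line_rate_gbps("16QAM"))  # 400 Gbps, the QuadFlex maximum rate
print(line_rate_gbps("8QAM"))   # 300 Gbps, the longer-reach option
```

This reproduces the article's figures: 16-QAM on two carriers gives the 400 Gbps metro/regional rate, while dropping to 8-QAM on each carrier gives 300 Gbps with continental reach.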
“The reason you are hearing these ridiculous amounts of connectivity, in the hundreds of terabits, is only for those customers that want their clusters to behave as a global cluster”
The advantage of 8-QAM, says Theodoras, is that it offers 'almost 400 Gigabit of capacity' yet it can span continents. ADVA is sourcing the line-side optics but uses its own code for the coherent DSP-ASIC and module firmware. The company has not confirmed the supplier but the design matches Acacia's 400 Gigabit coherent module that was announced at OFC 2015.
ADVA says the CloudConnect 4 RU chassis is designed for customers that want a terabit-capacity box. To achieve a terabit link, three QuadFlex cards and an Erbium-doped fibre amplifier (EDFA) can be used. The EDFA is a bidirectional amplifier design that includes an integrated communications channel and enables the 4 RU platform to achieve ultra long-haul reaches. “There is no need to fit into a [separate] big chassis with optical line equipment,” says Theodoras. Equally, data centre operators don’t want to be bothered with mid-stage amplifier sites.
Some data centre operators have already installed 40 dense WDM channels at 100GHz spacing across the C-band which they want to keep. ADVA Optical Networking offers a 14 RU configuration that uses three SH4R units, an EDFA and a DWDM multiplexer, that enables a capacity upgrade. The three SH4R units house a total of 20 QuadFlex cards that fit 200 Gigabit in each of the 40 channels for an overall transport capacity of 8 terabits.
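The capacity arithmetic behind that upgrade is straightforward to check; a quick sketch using the figures quoted in the article (Python purely for the sums):

```python
# 14 RU upgrade configuration: 40 existing DWDM channels at 100GHz
# spacing, each now carrying 200 Gig from the QuadFlex cards.
channels = 40
gbps_per_channel = 200
total_tbps = channels * gbps_per_channel / 1000
print(total_tbps)  # 8.0 Tbps overall transport capacity

# Cross-check against the hardware: 20 QuadFlex cards, 400 Gig line-side each.
cards, line_gbps_per_card = 20, 400
assert cards * line_gbps_per_card == channels * gbps_per_channel
```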
ADVA CloudConnect configuration supporting 25.6 Tbps line side capacity. Source: ADVA Optical Networking
The last CloudConnect chassis configuration is for customers designing a global cluster. Here the chassis has 10 SH4R units housing 64 QuadFlex cards to achieve a total transport capacity of 25.6 Tbps and a throughput of 51.2 Tbps.
Also included are two EDFAs and a 128-channel multiplexer. Two EDFAs are needed because of the optical loss associated with the high number of channels, such that an EDFA is allocated to each set of 64 channels. “For the [14 RU] 40 channels [configuration], you need only one EDFA,” says Theodoras.
The vendor has also produced a similar-sized configuration for the L-band. Combining the two 40 RU chassis delivers 51.2Tbps of transport and 102.4 Tbps of throughput. “This configuration was built specifically for a customer that needed that kind of throughput,” says Theodoras.
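As a check on the quoted figures, the global-cluster configuration's capacity works out as follows (again, Python only for the arithmetic):

```python
# 40 RU C-band chassis: 64 QuadFlex cards at 400 Gig line-side each.
cards = 64
line_tbps = cards * 400 / 1000        # 25.6 Tbps transport capacity
throughput_tbps = 2 * line_tbps       # 51.2 Tbps (client plus line side)

# Pairing the C-band chassis with its L-band twin doubles both figures.
dual_band_transport = 2 * line_tbps          # 51.2 Tbps
dual_band_throughput = 2 * throughput_tbps   # 102.4 Tbps
print(dual_band_transport, dual_band_throughput)
```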
Other platform features include bulk encryption. ADVA says the encryption does not impact the overall data throughput while adding only a very slight latency hit. “We encrypt the entire payload; just a few framing bytes are hidden in the existing overhead,” says Theodoras.
The security management is separate from the network management. “The security guys have complete control of the security of the data being managed; only they can encrypt and decrypt content,” says Theodoras.
CloudConnect consumes only 0.5W/ Gigabit. The platform does not use electrical multiplexing of data streams over the backplane. The issue with such a switched backplane is that power is consumed independent of traffic, an approach the CloudConnect designers have avoided. “The reason we save power is that we don’t have all that switching going on over the backplane.” Instead, all the connectivity comes from the front panel of the cards.
The downside of this approach is that the platform does not support any-port to any-port connectivity. “But for this customer set, it turns out that they don’t need or care about that.”
Open hardware and software
ADVA Optical Networking claims its 4 RU basic unit addresses a sweet spot in the marketplace. The CloudConnect also has fewer inventory items for the data centre operators to manage compared to competing designs based on 1 RU or 2 RU pizza boxes, it says.
Theodoras also highlights the system’s open hardware and software design.
“We will let anybody’s hardware or software control our network,” says Theodoras. “You don’t have to talk to our software-defined networking (SDN) controller to control our network.” ADVA was part of a demonstration last year whereby an NEC and a Fujitsu controller oversaw ADVA’s networking elements.
Every vendor is always under pressure to have the best thing because you are only designed in for 18 months
By open hardware, what is meant is that programmers can control the optical line system used to interconnect the data centres. “We have found a way of simplifying it so it can be programmed,” says Theodoras. “We have made it more digital so that they don’t have to do dispersion maps, polarisation mode dispersion maps or worry about [optical] link budgets.” The result is that data centre operators can now access all the line elements.
“At OFC 2015, Microsoft publicly said they will only buy an open optical line system,” says Theodoras. Meanwhile, Google is writing a specification for open optical line systems dubbed OpenConfig. “We will be compliant with Microsoft and Google in making every node completely open.”
General availability of the CloudConnect platforms is expected at the year-end. “The data centre interconnect platforms are now with key partners, companies that we have designed this with,” says Theodoras.
Clusters, pods and recipes explained
A cluster is made up of a number of virtual machines and CPU cores and is defined in software. A cluster is a virtual entity, says Theodoras, unrelated to the way data centre managers define their hardware architectures.
“Clusters vary a lot [between players],” says Theodoras. “That is why we have had to make scalability such a big part of CloudConnect.”
The hardware definition is known as a pod or recipe. “How these guys build the network is that they create recipes,” says Theodoras. “A pod with this number of servers, this number of top-of-rack switches, this amount of end-of-row router-switches and this transport node; that will be one recipe.”
Data centre players update their recipes every 18 months. “Every vendor is always under pressure to have the best thing because you are only designed in for 18 months,” says Theodoras.
Vendors are informed well in advance of what the next hardware requirements will be, and of when they will be needed to meet the new recipe requirements.
In summary, pods and recipes refer to how the data centre architecture is built, whereas a cluster is defined at a higher, more abstract layer.
Ciena's stackable platform for data centre interconnect
Ciena is the latest system vendor to unveil its optical transport platform for the burgeoning data centre interconnect market. Data centre operators require scalable platforms that can carry significant amounts of traffic to link sites over metro and long-haul distances, and are power efficient.
The Waveserver stackable interconnect system delivers 800 Gig traffic throughput in a 1 rack unit (1RU) form factor. The throughput comprises 400 Gigabit of client-side interfaces and 400 Gigabit coherent dense WDM transport.
For the Waveserver’s client-side interfaces, a mix of 10, 40 and 100 Gigabit interfaces can be used, with the platform supporting the latest 100 Gig QSFP28 optical module form factor. One prominent theme at the recent OFC 2015 show was the number of interface types now supported in a QSFP28.
On the line side, Ciena uses two of its latest WaveLogic 3 Extreme coherent DSP-ASICs. Each DSP-ASIC supports polarisation multiplexing, 16 quadrature amplitude modulation (PM–16-QAM), equating to 200 Gigabit transmission capacity.
The Extreme was chosen rather than Ciena’s more power-efficient WaveLogic 3 Nano DSP-ASIC to maximise capacity over a fibre. “The amount of fibre the internet content providers have tends to be limited so getting high capacity is key,” says Michael Adams, vice president of product and technical marketing at Ciena. The Nano DSP-ASIC does not support 16-QAM.
A rack can accommodate up to 44 Waveserver stackable units to deliver 88 wavelengths, each 50GHz wide, or 17.6 Terabit-per-second (Tbps) of capacity. And up to 96 wavelengths, or 19.2 Tbps, are supported on a fibre pair.
"We are going down the path of opening the platform to automation"
“We could add flexible grid and probably get closer to 24 or 25 Tbps,” says Adams. Flexible grid refers to moving off the C-band's set ITU grid by using digital signal processing at the transmitter. By shaping the signal before it is sent, each carrier can be squeezed from a 50GHz channel into a 37.5GHz wide one, boosting overall capacity carried over the fibre.
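The rack, fibre-pair and flexible-grid figures can be reproduced with simple arithmetic. Note the 128-channel flexible-grid count below is an idealised ceiling that ignores guard bands, which is presumably why Adams quotes 24 to 25 Tbps rather than the full 25.6:

```python
# A rack of 44 Waveserver units, each contributing two 200 Gig wavelengths.
units, waves_per_unit, gbps_per_wave = 44, 2, 200
rack_tbps = units * waves_per_unit * gbps_per_wave / 1000   # 17.6 Tbps

# A fibre pair tops out at 96 wavelengths on the fixed 50GHz grid.
fibre_tbps = 96 * gbps_per_wave / 1000                      # 19.2 Tbps

# Flexible grid squeezes each carrier from 50GHz into 37.5GHz,
# fitting 4/3 as many channels into the same spectrum.
flex_channels = int(96 * 50 / 37.5)                         # 128 channels
flex_tbps = flex_channels * gbps_per_wave / 1000            # 25.6 Tbps ceiling
print(rack_tbps, fibre_tbps, flex_tbps)
```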
Adams says that it is not straightforward to compare the power consumption of different vendors’ data centre interconnect platforms but Ciena believes its platform is competitive. He estimates that the Waveserver consumes between 1W and 1.5W per Gigabit line side.
Ciena has stated that between five and 10 percent of its revenues come from web-scale customers, which account for a third of its total 100 Gig line-side port shipments.
Web-scale companies include Internet content providers, providers of data centre co-location and interconnect, and enterprises. Web-scale companies also drive the traditional telecom optical networking market as they also use large amounts of the telcos' network capacity to link their sites.
The global data centre interconnect market grew 16 percent in 2014 to reach US$2.5 billion, according to market research firm, Ovum. Almost half of the spending was by communications service providers, while Internet content providers' spending grew 64 percent last year.
Open software
Ciena also announced an open application development environment, dubbed emulation cloud, that allows applications to be developed without needing Waveserver hardware.
One obvious application is moving server virtual machines between data centres. But more novel applications can be developed by data centre operators and third-party developers. Ciena cites what it calls an augmented reality application that allows a mobile phone to be pointed at a Waveserver to inform the user of the status of the machine: which ports are active and how much bandwidth each port is consuming. “It can also show power and specific optical parameters of each line port,” says Adams. “Right there, you have all the data you need to know.”
The Waveserver platform also comes with software that allows data centre managers to engineer, plan, provision and operate links via a browser. More sophisticated users can benefit from Ciena’s OPn architecture and a set of open application programming interfaces (APIs).
“We are going down the path of opening the platform to automation,” says Adams. “We can foresee for the most sophisticated users, plugging into APIs and going to some very specific optical parameters and playing with them.”
Waveserver Status
Ciena is demonstrating its Waveserver platform to over 100 customers, as part of an annual event at the company’s Ottawa site.
“We are well engaged with a variety of Internet content providers,” says Adams. “We will be in trials with many of those folks this summer.” General availability is expected at the end of the third quarter.
In May, Ciena announced it had entered a definitive agreement to acquire Cyan. Cyan announced its own N-Series data centre interconnect platform earlier this year. Ciena says it is premature to comment on the future of the N-Series platform.
Ciena's Tom Mock reflects on a career in telecom
Working for one technology company for so long may be uncommon, says Mock, but not at Ciena: the CTO has clocked 20 years while the CEO boasts 15 years.
Tom Mock: “I’m about ready to go do something else.”
Mock studied electrical engineering and was at Scientific Atlanta running a product development group before joining Ciena where he crossed over from engineering to marketing. “I’ve been in telecom pretty much my entire career, 35 years worth of telecom,” says Mock. “I’m about ready to go do something else.”
A work colleague says that if there is one word that describes Mock, it is decency: “He has been a key role model of the ‘do the right thing’ culture at Ciena.”
Mock joined Ciena days before the company went public in 1997. He experienced the optical bubble of 1999-2000 and the bust that followed, and just when he thought the company had put that ‘nuclear winter’ behind it, Ciena endured the 2008 global financial crisis.
Now he leaves Ciena as senior vice president of corporate communications. A role, he says, that involves communicating the company's value proposition to the investment community and media, while helping Ciena’s sales staff communicate the company’s brand. The role also involves explaining the significance of the company’s technology: “It is great we can do 16-QAM [quadrature amplitude modulation] on optical, but why is it important?"
When Mock joined Ciena, optical technology in the form of dense wavelength-division multiplexing (DWDM) was starting to be deployed. “You could go to a service provider and say, look, I can increase the capacity of your network by a factor of 16 just by swapping out the bits at the end of your fibre route,” he says.
I remember sitting at my desk looking at stock prices and market capitalisations and realising that a start-up called Corvis ... had a market capitalisation larger than Ford Motor Company
The optical bubble quickly followed. The internet was beginning to change the world, and large enterprises were taking advantage of communication services in new ways. And with it came the inflated expectation that bandwidth demand would grow through the roof. As a result, optical communications became the hottest technology around.
"I remember sitting at my desk looking at stock prices and market capitalisations and realising that a start-up called Corvis, a competitor of ours started by one of the guys that founded Ciena, Dave Huber, had a market capitalisation larger than Ford Motor Company,” says Mock. Ford was the second largest auto manufacturer in the world at the time.
Yet despite all the expected demand for - and speculation in - bandwidth, conversations with Ciena’s customers revealed that their networks were lightly loaded. The inevitable shake-out, once it came, was brutal, particularly among equipment makers. In the end, all that capacity placed in the network was needed, but only from 2006 as the cloud began to emerge and enterprises started making greater use of computing.
“The one positive that came out of the bubble was that a lot of key technologies that enabled things that happened in the late 2000s were developed in that time,” says Mock.
Ciena made several acquisitions during the optical boom, and has done so since; some successful, others less so. Mock says that with most of the good ones, the technology and the market didn't overlap much with Ciena’s.
Speculation didn't work well for the industry in terms of building infrastructure, and it probably doesn't work well in terms of acquisitions.
One acquisition was Cyras Systems for $2.6 billion in 2000, a company developing 10 Gigabit multi-service provisioning platforms and add/ drop multiplexers. But so was Ciena. “That was one example that didn't work so well but if I look at the one that is going the best - Nortel MEN - that was a place where we didn't have as much technology and market overlap,” he says. That makes streamlining products easier and less disruptive for customers.
“The other thing that is important in a good acquisition is a very good understanding of what the end objective is,” he says. “Speculation didn't work well for the industry in terms of building infrastructure, and it probably doesn't work well in terms of acquisitions.”
Making sure the company cultures fit is also key. “In any of these technology acquisitions, it is not just about buying products and markets, it is about buying the capabilities of a workforce,” says Mock. It is important that the new workforce remains productive, and the way that is done is to make sure the staff feel an important part of the company, he says.
Mock highlights two periods that he found most satisfying at Ciena. One was 2006-2008 before the global economic crisis. Ciena was back on a sound financial footing and was making good money. “There was a similar feeling a year to 18 months after the Nortel acquisition,” he says. “The integration had been successful, the people were all pointing in the same direction, and employee morale was pretty high.”
You hear about white boxes in the data centre, there are areas in the network where that is going to happen.
What Mock is most proud of in his time at Ciena is the company’s standing. “We do a perception study with our customers every year to 18 months and one of the things that comes back is that people really trust the company,” he says. “Our customers feel like we have their best interest at heart, and that is something we have worked very hard to do; it is also the sort of thing you don't get easily.”
Now the industry is going through a period of change, says Mock. If the last 10-15 years can be viewed as a period of incremental change, people are now thinking about how networks are built and used in new ways. It is about shifting to a model that is more in tune with on-demand needs of users, he says: “That kind of shift typically creates a lot of opportunity.” Networks are becoming more important because people are accessing resources in different places and the networks need to be more responsive.
For Ciena it has meant investing in software as more things come under software control. The benefits include network automation and reduced costs for the operators, but it also brings risk. “There are parts of the infrastructure that are likely to become commoditised,” says Mock. “You hear about white boxes in the data centre, there are areas in the network where that is going to happen.”
We both came from small-town, working-class families. Over the years we have probably been more successful than we ever thought we would be, but a lot of that is due to people helping us along the way.
If this is a notable period, why exit now? “It’s a good time for me,” he says. “And there were some things that my wife and I wanted to start looking at.” Mock’s wife retired two years ago and both are keen to give something back.
“We both came from small-town, working-class families,” he says. “Over the years we have probably been more successful than we ever thought we would be, but a lot of that is due to people helping us along the way.”
Mock and his wife were their families’ first generation that got a good professional education. “One of the things that we have taken on board is helping others gain that same sort of opportunity,” he says.
“I’m excited for Tom but will miss having him around,” says his colleague. “Hopefully, in his next phase, he will make the rest of the world a little more decent as well.”
Oclaro demonstrates flexible rate coherent pluggable module
- The CFP2 coherent optical module operates at 100 and 200 Gig
- Samples are already with customers, with general availability in the first half of 2015
- Oclaro to also make more CFP2 100GBASE-LR4 products

The CFP2 is not just used in metro/ regional networks but also in long-haul applications
Robert Blum
The advent of a pluggable CFP2, capable of multi-rate long-distance optical transmission, has moved a step closer with a demonstration by Oclaro. The optical transmission specialist showed a CFP2 transmitting data at 200 Gigabits-per-second.
The coherent analogue module demonstration, where the DSP-ASIC resides alongside rather than within the CFP2, took place at ECOC 2014 held in September at Cannes. Oclaro showcased the CFP2 to potential customers in March, at OFC 2014, but then the line side module supported 100 Gig only.
"What has been somewhat surprising to us is that the CFP2 is not just used in metro/ regional networks but also in long-haul applications," says Robert Blum, director of strategic marketing at Oclaro. "We are also seeing quite significant interest in data centre interconnect, where you want to get 400 Gig between sites using two CFP2s and two DSPs." Oclaro says that the typical distances are from 200km to 1,000km.
The CFP2 achieves 200 Gig using polarisation multiplexing, 16-quadrature amplitude modulation (PM-16-QAM) while working alongside ClariPhy's merchant DSP-ASIC. ClariPhy announced at ECOC that it is now shipping its 200 Gig LightSpeed-II CL20010 coherent system-on-chip, implemented using a 28nm CMOS process.
"One of the beauties of an analogue CFP2 is that it works with a variety of DSPs," says Blum. Other merchant coherent DSPs are becoming available, while leading long-haul optical equipment vendors have their own custom coherent DSPs.
Oclaro's CFP2, even when operating at 200 Gig, falls within the 12W module's power rating. "One of the things you need to have for 200 Gig is a linear modulator driver, and such drivers consume slightly more power [200mW] than limiting modulator drivers [used for 100 Gig only]," says Blum.
Oclaro will offer two CFP2 line-side variants, one with linear drivers and one using limiting ones. The limiting driver CFP2 will be used for 100 Gig only whereas the linear driver CFP2 supports 100 Gig PM-QPSK and 200 Gig PM-16-QAM schemes. "Some customers prefer the simplicity of a limiting interface; for the linear interface you have to do more calibration or set-up," says Blum. "Linear also allows you to do pre-emphasis of the signal path, from the DSP all the way to the modulator." Pre-emphasis is used to compensate for signal path impairments.
By consuming under 12W, up to eight line-side CFP2 interfaces can fit on a line card, says Blum, who also stresses the CFP2 has a 0dBm output power at 200 Gig. Achieving such an output power level means the 200 Gig signal is on a par with 100 Gig wavelengths. "When you launch a 200 Gig signal, you want to make sure that there is not a big difference between signals," says Blum.
To achieve the higher output power, the micro integrable tunable laser assembly (micro-iTLA) includes a semiconductor optical amplifier (SOA) with the laser, while SOAs are also added to the Mach–Zehnder modulator chip. "That allows us to compensate for some of the [optical] losses," says Blum.
Customers received first CFP2 samples in May, with the module currently at the design validation stage. Oclaro expects volume shipments to begin in the first half of 2015.
100 Gig and the data centre
Oclaro also announced at ECOC that it has expanded manufacturing capacity for its CFP2-based 100GBASE-LR4 10km-reach module.
One reason for the flurry of activity around 100 Gig mid-reach interfaces that span 500m-2km in the data centre is that the 100GBASE-LR4 module is relatively expensive. Oclaro itself has said it will support the PSM-4, CWDM4 and CLR4 Alliance mid-reach 100 Gig interfaces. So why is Oclaro expanding manufacturing of its CFP2-based 100GBASE-LR4?
It is about being pragmatic and finding the most cost-effective solution for a given problem
"There is no clear good solution to get 100 Gig over 500m or 2km right now," says Blum. "CFP2 is here, it is a mature technology and we have made improvements both in performance and cost."
Oclaro has improved its EML design such that the laser needs less cooling, reducing overall power dissipation. The accompanying electronic functions such as clock data recovery have also been redesigned using one IC instead of two, such that the CFP2-LR4's overall power consumption is below 8W.
Demand has been so significant, says Blum, that the company has been unable to meet it. Oclaro expects that towards year-end, it will have increased its CFP2 100GBASE-LR4 manufacturing capacity by 50 percent compared to six months earlier.
"It is about being pragmatic and finding the most cost-effective solution for a given problem," says Blum. "There are other [module] variants that are of interest [to us], such as the CWDM4 MSA that offers a cost-effective way to get to 2km."
OFDM promises compact Terabit transceivers
Source: ECI Telecom
A one Terabit super-channel, crafted using orthogonal frequency-division multiplexing (OFDM), has been transmitted over a live network in Germany. The OFDM demonstration is the outcome of a three-year project conducted by the Tera Santa Consortium comprising Israeli companies and universities.
Current 100 Gig coherent networks use a single carrier for the optical transmission whereas OFDM imprints the transmitted data across multiple sub-carriers. OFDM is already used as a radio access technology, the Long Term Evolution (LTE) cellular standard being one example.
With OFDM, the sub-carriers are tightly packed with a spacing chosen to minimise the interference at the receiver. OFDM is being researched for optical transmission as it promises robustness to channel impairments as well as implementation benefits, especially as systems move to Terabit speeds.
"It is clear that the market has voted for single-carrier transmission for 400 Gig," says Shai Stein, chairman of the Tera Santa Consortium and CTO of system vendor, ECI Telecom. "But at higher rates, such as 1 Terabit, the challenge will be to achieve compact, low-power transceivers."

The real contribution [of OFDM] is implementation efficiency
Shai Stein
One finding of the project is that the OFDM optical performance matches that of traditional coherent transmission but that the digital signal processing required is halved. "The real contribution [of OFDM] is implementation efficiency," says Stein.
For the trial, the 175GHz-wide 1 Terabit super-channel signal was transmitted through several reconfigurable optical add/drop multiplexer (ROADM) stages. The 175GHz spectrum comprises seven, 25GHz bands. Two OFDM schemes were trialled: 128 sub-carriers and 1024 sub-carriers across each band.
To achieve 1 Terabit, the net data rate per band was 142 Gigabit-per-second (Gbps). Adding the overhead bits for forward error correction and pilot signals, the gross data rate per band is closer to 200 Gbps.
The 128 or 1024 sub-carriers per band are modulated using either quadrature phase-shift keying (QPSK) or 16-quadrature amplitude modulation (16-QAM). One modulation scheme - QPSK or 16-QAM - was used across a band, although Stein points out that the modulation scheme can be chosen on a sub-carrier by sub-carrier basis, depending on the transmission conditions.
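The super-channel arithmetic in the paragraphs above can be checked directly (Python only for the sums):

```python
# Seven 25GHz bands make up the 175GHz-wide super-channel.
bands, band_ghz = 7, 25
assert bands * band_ghz == 175

# A net rate of 142 Gbps per band gives roughly 1 Terabit overall.
net_per_band_gbps = 142
net_total = bands * net_per_band_gbps      # 994 Gbps, i.e. ~1 Tbps
print(net_total)

# With FEC and pilot overhead the gross rate per band nears 200 Gbps,
# putting the gross super-channel rate at around 1.4 Tbps.
gross_total = bands * 200                  # 1400 Gbps
```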
The trial took place at the Technische Universität Dresden, using the Deutsches Forschungsnetz e.V. X-WiN research network. The signal recovery was achieved offline using MATLAB computational software. "It [the trial] was in real conditions, just the processing was performed offline," says Stein. The MATLAB algorithms will be captured in FPGA silicon and added to the transceiver in the coming months.
Using a purpose-built simulator, the Tera Santa Consortium compared the OFDM results with traditional coherent super-channel transmission. "Both exhibited the same performance," says David Dahan, senior research engineer for optics at ECI Telecom. "You get a 1,000km reach without a problem." And with hybrid EDFA-Raman amplification, 2,000km is possible. The system also demonstrated robustness to chromatic dispersion. Using 1024 sub-carriers, the effect of chromatic dispersion is sufficiently low that no compensation is needed, says ECI.
Stein says the project has been hugely beneficial to the Israeli optical industry: "There has been silicon photonics, transceiver and algorithmic developments, and benefits at the networking level." For ECI, it is important that there is a healthy local optical supply chain. "The giants have that in-house, we do not," says Stein.
One Terabit transmission will be realised in the marketplace within the next two years, and thanks to the project, the consortium companies are now well placed to understand the requirements, says Stein.
Set up in 2011, the Tera Santa Consortium includes ECI Telecom, Finisar, MultiPhy, Cello, Civcom, Bezeq International, the Technion Israel Institute of Technology, Ben-Gurion University, the Hebrew University of Jerusalem, Bar-Ilan University and Tel-Aviv University.
ClariPhy samples a 200 Gigabit coherent DSP-ASIC
The CL20010 is the first of ClariPhy's LightSpeed-II family of coherent digital signal processing ASICs (DSP-ASICs), manufactured using a 28nm CMOS process. "We believe it is the first 28nm standard product, and leaps ahead of the current generation [DSP-ASIC] devices," says Paul Voois, co-founder and chief strategy officer at ClariPhy.
Paul Voois
ClariPhy has been shipping its 40 Gigabit coherent CL4010 LightSpeed chip since September 2011. Customers using the device include optical module makers Oclaro, NEC and JDSU. "We continue to go into new deployments but it is true that the 40 Gig market is not growing like the 100 Gig market," says Voois.
With the CL20010, ClariPhy now joins NTT Electronics (NEL) as a merchant supplier of high-speed coherent silicon. ClariPhy has said that the LightSpeed-II devices will address metro, long-haul and submarine applications.
No longer do the integrators need to buy a separate transmit multiplexer chip
Using an advanced 28nm CMOS process enables greater on-chip integration. The CL20010 includes the transmit and receive DSPs, soft-decision forward error correction, and mixed signal analogue-to-digital and digital-to-analogue converters. "No longer do the integrators need to buy a separate transmit multiplexer chip," says Voois.
The LightSpeed-II silicon also features an on-chip Optical Transport Network (OTN) framer/mapper and a general-purpose processor. The general-purpose processor enables the chip to be more network aware - for example, the state of a link - and support software-defined networking (SDN) in the WAN.
The LightSpeed-II ICs support three modulation schemes: polarisation-multiplexed binary phase-shift keying (PM-BPSK), quadrature phase-shift keying (PM-QPSK) and 16-quadrature amplitude modulation (PM-16-QAM). Using PM-16-QAM, the CL20010 can support 200 Gigabit traffic. "I believe that is also a first for merchant silicon," says Voois.
Having an on-chip framer enables the transmission of two 100 Gigabit client signals as a 200 Gigabit OTN signal. In turn, combining two CL20010 devices enables a 400 Gig flexible-grid super-channel to be transmitted. The on-chip transmit DSP enables the CL20010 to support flexible grid, with the dual-carrier 400 Gigabit super-channel occupying 75GHz rather than 100GHz. The CL20010 can achieve a reach of 3,500km at 100 Gig, and over 600km at 200 and 400 Gig.
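The rates above follow from simple symbol arithmetic. Assuming roughly 32 Gbaud carriers (typical for this generation of coherent silicon, not a figure ClariPhy has confirmed), a sketch of the raw line rates and the flexible-grid spectral-efficiency gain:

```python
def raw_rate_gbps(baud_gbaud, bits_per_symbol, n_polarisations=2):
    """Raw line rate before FEC and OTN framing overhead."""
    return baud_gbaud * bits_per_symbol * n_polarisations

# PM-QPSK: 2 bits/symbol per polarisation -> ~100G net after overhead
qpsk_raw = raw_rate_gbps(32, 2)    # 128 Gb/s raw
# PM-16-QAM: 4 bits/symbol at the same baud rate -> ~200G net
qam16_raw = raw_rate_gbps(32, 4)   # 256 Gb/s raw

# Dual-carrier 400G super-channel: flexible grid squeezes it into 75GHz
se_flexible = 400 / 75.0   # ~5.3 bit/s/Hz
se_fixed = 400 / 100.0     # 4.0 bit/s/Hz on two fixed 50GHz slots
```

Doubling the constellation doubles the bit rate in the same spectrum; the flexible grid then adds a further third in spectral efficiency by shrinking the super-channel's slot from 100GHz to 75GHz.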
ClariPhy has not announced the power consumption of its chips but says that it is also targeting the metro pluggable market. Given that a CFP coherent module consumes up to 32W and that the optics alone consume 12W, a LightSpeed-II metro DSP-ASIC will likely consume 18-20W.
Merchant market
Many of the leading optical transport equipment makers, such as Alcatel-Lucent, Ciena, Cisco Systems, Huawei and Infinera, use their own coherent DSP-ASICs. More recently, Acacia Communications announced a CFP 100 Gig coherent pluggable module that uses its internally developed DSP-ASIC.
Some of the OEMs will continue to develop internal technology but even they can't cover all possible applications
ClariPhy says that although the bulk of 100 Gigabit coherent ports shipped use internally developed designs, there are signs that the market is moving towards adopting merchant silicon. "It doesn't happen all at once," says Voois. "Some of the OEMs will continue to develop internal technology but even they can't cover all possible applications."
He cites coherent silicon for metro networks as an example. Equipment makers are focussed on DSP-ASIC designs that satisfy the most demanding, ultra-long-haul network applications. But such high-performance, high-power chips are not suited for the more cost-conscious, low-power and compact metro requirements.
"Our committed customer base includes a nice spectrum of applications and integration types: OEMs and module vendors; metro, long haul and submarine," says Voois.
General availability of the CL20010 is expected later this year. The company will also be demonstrating the device at OFC 2014.
Xtera demonstrates 40 Terabit using Raman amplification
- Xtera's Raman amplification boosts capacity and reach
- 40 Terabit optical transmission over 1,500km in Verizon trial
- 64 Terabit over 1,500km in 2015 using a Raman module operating over 100nm of spectrum
Herve Fevrier
System vendor Xtera is using all these techniques as part of its Nu-Wave Optima system but also uses Raman amplification to extend capacity and reach.
"We offer capacity and reach using a technology - Raman amplification - that we have been pioneering and working on for 15 years," says Herve Fevrier, executive vice president and chief strategy officer at Xtera.
The distributed amplification profile of Raman (blue) compared to an EDFA's point amplification. Source: Xtera
One way vendors are improving the amplification for 100 Gigabit and greater deployments is to use a hybrid EDFA/Raman design. This benefits the amplifier's power efficiency and the overall transmission reach, but the spectrum width is still dictated by erbium to around 35nm. "And Raman only helps you have spans which are a bit longer," says Fevrier.
Meanwhile, Xtera is working on programmable cards that will support the various transmission options. Xtera will offer a 100nm amplifier module this year that extends its system capacity to 24 Terabit (240, 100 Gig channels). Also planned this year is a super-channel PM-QPSK implementation that will extend transmissions to 32 Terabit using the 100nm amplifier module. In 2015 Xtera will offer PM-16-QAM, delivering 48 Terabit over 2,000km and 64 Terabit over 1,500km.
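The capacity figures scale directly from the amplifier's usable spectrum. A rough sketch, assuming a 50GHz channel grid (the exact channel count depends on guard bands and band edges, so the result only approximates Xtera's 240-channel figure):

```python
C_NM_GHZ = 2.998e8  # speed of light expressed in nm*GHz

def band_capacity_tbps(spectrum_nm, spacing_ghz, gbps_per_channel,
                       centre_nm=1550.0):
    """Total capacity of an amplifier band on a fixed channel grid."""
    # convert the band's wavelength width to an optical frequency width
    spectrum_ghz = C_NM_GHZ * spectrum_nm / centre_nm**2
    channels = int(spectrum_ghz // spacing_ghz)
    return channels * gbps_per_channel / 1000.0

# 100nm of spectrum, 100 Gig channels on a 50GHz grid: roughly 24 Terabit
pm_qpsk = band_capacity_tbps(100, 50, 100)
# the same spectrum with 200 Gig PM-16-QAM channels: roughly 48 Terabit
pm_16qam = band_capacity_tbps(100, 50, 200)
```

A 100nm band is about 12.5THz wide at 1550nm, nearly three times an EDFA's 35nm, which is where the headline capacity jump over conventional amplification comes from.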
Verizon on 100G+ optical transmission developments
Source: Gazettabyte
Feature: 100 Gig and Beyond. Part 1:
Verizon's Glenn Wellbrock discusses 100 Gig deployments and higher speed optical channel developments for long haul and metro.
The number of 100 Gigabit wavelengths deployed in the network has continued to grow in 2013.
According to Ovum, 100 Gigabit has become the wavelength of choice for large wavelength-division multiplexing (WDM) systems, with spending on 100 Gigabit now exceeding 40 Gigabit spending. LightCounting forecasts that 40,000, 100 Gigabit line cards will be shipped this year, 25,000 in the second half of the year alone. Infonetics Research, meanwhile, points out that while 10 Gigabit will remain the highest-volume speed, the most dramatic growth is at 100 Gigabit. By 2016, the majority of spending in long-haul networks will be on 100 Gigabit, it says.
The market research firms' findings align with Verizon's own experience deploying 100 Gigabit. The US operator said in September that it had added 4,800, 100 Gigabit miles to its global IP network during the first half of 2013, to total 21,400 miles in the US network and 5,100 miles in Europe. Verizon expects to deploy another 8,700 miles of 100 Gigabit in the US and 1,400 miles more in Europe by year end.
"We expect to hit the targets; we are getting close," says Glenn Wellbrock, director of optical transport network architecture and design at Verizon.
Verizon says several factors are driving the need for greater network capacity, including its FiOS bundled home communication services, Long Term Evolution (LTE) wireless and video traffic. But what triggered Verizon to upgrade its core network to 100 Gig was converging its IP networks and the resulting growth in traffic. "We didn't do a lot of 40 Gig [deployments] in our core MPLS [Multiprotocol Label Switching] network," says Wellbrock.
The cost of 100 Gigabit was another factor: A 100 Gigabit long-haul channel is now cheaper than ten, 10 Gig channels. There are also operational benefits using 100 Gig such as having fewer wavelengths to manage. "So it is the lower cost-per-bit plus you get all the advantages of having the higher trunk rates," says Wellbrock.
Verizon expects to continue deploying 100 Gigabit. First, it has a large network and much of the deployment will occur in 2014. "Eventually, we hope to get a bit ahead of the curve and have some [capacity] headroom," says Wellbrock.
We could take advantage of 200 Gig or 400 Gig or 500 Gig today
Super-channel trials
Operators, working with optical vendors, are trialling super-channels and advanced modulation schemes such as 16-QAM (quadrature amplitude modulation). Such trials involve links carrying data in multiples of 100 Gig: 200 Gig, 400 Gig, even a Terabit.
Super-channels are already carrying live traffic. Infinera's DTN-X system delivers 500 Gig super-channels using quadrature phase-shift keying (QPSK) modulation. Orange has a 400 Gigabit super-channel link between Lyon and Paris. The 400 Gig super-channel comprises two carriers, each carrying 200 Gig using 16-QAM, implemented using Alcatel-Lucent's 1830 photonic service switch platform and its photonic service engine (PSE) DSP-ASIC.
"We could take advantage of 200 Gig or 400 Gig or 500 Gig today," says Wellbrock. "As soon as it is cost effective, you can use it because you can put multiple 100 Gig channels on there and multiplex them."
The issue with 16-QAM, however, is its limited reach using existing fibre and line systems - 500-700km - compared to QPSK's 2,500+ km before regeneration. "It [16-QAM] will only work in a handful of applications - 25 percent, something of this nature," says Wellbrock. This is good for a New York-to-Boston link, he says, but not New York to Chicago. "From our end it is pretty simple, it is lowest cost," says Wellbrock. "If we can reduce the cost, we will use it [16-QAM]." However, if the reach requirement cannot be met, the operator will not go to the expense of putting in signal regenerators just to use 16-QAM, he says.
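Wellbrock's decision rule reduces to comparing the link length against each format's unregenerated reach. A sketch using the reach figures quoted above (600km for 16-QAM, 2,500km for QPSK; the city distances are approximate):

```python
def pick_format(link_km, qam16_reach_km=600, qpsk_reach_km=2500):
    """Choose the cheapest modulation that closes the link without
    paying for signal regenerators."""
    if link_km <= qam16_reach_km:
        return "PM-16QAM"            # 200 Gig per carrier, lowest cost per bit
    if link_km <= qpsk_reach_km:
        return "PM-QPSK"             # 100 Gig per carrier, longer reach
    return "PM-QPSK + regeneration"  # reach exceeded either way

# New York-Boston (~300km) suits 16-QAM; New York-Chicago (~1,200km) does not
boston = pick_format(300)    # "PM-16QAM"
chicago = pick_format(1200)  # "PM-QPSK"
```

The "25 percent" comment falls out of this rule: only the minority of long-haul routes short enough for 16-QAM get the cheaper format.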
Earlier this year Verizon conducted a trial with Ciena using 16-QAM. The goals were to test 16-QAM alongside live traffic and determine whether the same line card would work at 100 Gig using QPSK and 200 Gig using 16-QAM. "The good thing is you can use the same hardware; it is a firmware setting," says Wellbrock.
We feel that 2015 is when we can justify a new, greenfield network and that 100 Gig or versions of that - 200 Gig or 400 Gig - will be cheap enough to make sense
100 Gig in the metro
Verizon says there is already sufficient traffic pressure in its metro networks to justify 100 Gig deployments. Some of Verizon's bigger metro locations comprise up to 200 reconfigurable optical add/drop multiplexer (ROADM) nodes. Each node is typically a central office connected to the network via a ROADM, varying from a two-degree to an eight-degree design.
"Not all the 200 nodes would need multiple 100 Gig channels but in the core of the network, there is a significant amount of capacity that needs to be moved around," says Wellbrock. "100 Gig will be used as soon as it is cost-effective."
Unlike long-haul, 100 Gigabit in the metro remains costlier than ten 10 Gig channels. That said, Verizon has deployed metro 100 Gig when absolutely necessary, for example to connect two router locations that require 100 Gig. Here Verizon is willing to pay extra for such links.
"By 2015 we are really hoping that the [metro] crossover point will be reached, that 100 Gig will be more cost effective in the metro than ten times 10 [Gig]." Verizon will build a new generation of metro networks based on 100 Gig or 200 Gig or 400 Gig using coherent receivers rather than use existing networks based on conventional 10 Gig links to which 100 Gig is added.
"We feel that 2015 is when we can justify a new, greenfield network and that 100 Gig or versions of that - 200 Gig or 400 Gig - will be cheap enough to make sense."
Data Centres
The build-out of data centres is not a significant factor driving 100 Gig demand. The largest content service providers do use tens of 100 Gigabit wavelengths to link their mega data centres but they typically have their own networks that connect relatively few sites.
"If you have lots of data centres, the traffic itself is more distributed, as are the bandwidth requirements," says Wellbrock.
Verizon has over 220 data centres, most being hosting centres. The data demand between many of the sites is relatively small and is served with 10 Gigabit links. "We are seeing the same thing with most of our customers," says Wellbrock.
Technologies
System vendors continue to develop cheaper line cards to meet the cost-conscious metro requirements. Module developments include smaller 100 Gig 4x5-inch MSA transponders, 100 Gig CFP modules and component developments for line side interfaces that fit within CFP2 and CFP4 modules.
"They are all good," says Wellbrock when asked which of these 100 Gigabit metro technologies are important for the operator. "We would like to get there as soon as possible."
The CFP4 may be available by late 2015 but more likely in 2016, and will reduce significantly the cost of 100 Gig. "We are assuming they are going to be there and basing our timelines on that," he says.
Greater line card port density is another benefit once 100 Gig CFP2 and CFP4 line side modules become available. "Lower power and greater density, which is allowing us to get more bandwidth on and off the card," says Wellbrock.
Existing switches and routers are bandwidth-constrained: they have more traffic capability than the faceplate can provide. "The CFPs, the way they are today, you can only get four on a card, and a lot of the cards will support twice that much capacity," says Wellbrock.
With the smaller form factor CFP2 and CFP4, 1.2 and 1.6 Terabit cards will become possible from 2015. Another possible development is a 400 Gigabit CFP, which would achieve similar overall capacity gains.
Coherent, not just greater capacity
Verizon is looking for greater system integration and continues to encourage industry commonality in optical component building blocks to drive down cost and promote scale.
Indeed Verizon believes that industry developments such as MSAs and standards are working well. Wellbrock prefers standardisation to custom designs like 100 Gigabit direct detection modules or company-specific optical module designs.
Wellbrock stresses the importance of coherent receiver technology not only in enabling higher capacity links but also a dynamic optical layer. The coherent receiver adds value when it comes to colourless, directionless, contentionless (CDC) and flexible grid ROADMs.
"If you are going to have a very cost-effective 100 Gigabit because the ecosystem is working towards similar solutions, then you can say: 'Why don't I add in this agile photonic layer?' and then I can really start to do some next-generation networking things." This is only possible, says Wellbrock, because of the tunable filter offered by a coherent receiver, unlike direct detection technology with its fixed-filter design.
"Today, if you want to move from one channel to the next - wavelength 1 to wavelength 2 - you have to physically move the patch cord to another filter," says Wellbrock. "Now, the [coherent] receiver can simply tune the local oscillator to channel 2; the transmitter is full-band tunable, and now the receiver is full-band tunable as well." This tunability can be enabled remotely rather than requiring an on-site engineer.
Such wavelength agility promises greater network optimisation.
"How do we perhaps change some of our sparing policy? How do we change some of our restoration policies so that we can take advantage of that agile photonic layer?" says Wellbrock. "That is something that is only becoming available because of the coherent 100 Gigabit receivers."
100 Gigabit and packet optical loom large in the metro
"One hundred Gig metro has become critical in terms of new [operator] wins"
Michael Adams, Ciena
Ciena says operator interest in 100 Gigabit for the metro has been growing significantly.
"One hundred Gig metro has become critical in terms of new [operator] wins," says Michael Adams, vice president of product and technical marketing at Ciena. "Another request is integrated packet switching and OTN (Optical Transport Network) switching to fill those 100 Gig pipes."
The operator CenturyLink announced recently it had selected Ciena's 6500 packet optical transport platform for its network spanning 50 metropolitan regions.
The win is viewed by Ciena as significant given CenturyLink is the third largest telecommunications company in the US and has a global network. "We have already deployed Singapore, London and Hong Kong, and a few select US metropolitans and we are rolling that out across the country," says Adams.
Ciena says CenturyLink wants to offer 1, 10 and 100 Gigabit Ethernet (GbE) services. "In terms of the RFP (request for proposal) process with CenturyLink for next generation metro, the 100 Gigabit wavelength service was key and an important part of the [vendor] selection process."
The vendor offers different line cards based on its WaveLogic 3 coherent chipset depending on a metro or long haul network's specifications. "We firmly believe that 100 Gig coherent in the metro is going to be the way the market moves," says Adams.
At the recent OFC/NFOEC show, Ciena demonstrated WaveLogic 3 based production cards moving between several modulation formats, from binary phase-shift keying (BPSK) to quadrature PSK (QPSK) to 16-QAM (quadrature amplitude modulation).
Ciena showed a 16-QAM-based 400 Gig circuit using two, 200 Gig carriers. "With a flexible grid ROADM, the two [carriers] are pushed together into a spectral grid much less than 100GHz [wide]," says Adams.
The WaveLogic 3 features a transmit digital signal processor (DSP) as well as the receive DSP. "The transmit DSP is key to be able to move the frequencies to much less than 100GHz of spectrum in order to get greater than 20 Terabits [capacity] per fibre," says Adams. "With 88 wavelengths at 100 Gig that is 8.8 Terabits, and with 16-QAM that doubles to 17.6Tbps; we expect at least a [further] 20 percent uplift with the transmit DSP and gridless."
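Adams' per-fibre figures can be checked with straightforward arithmetic:

```python
# 88 wavelengths at 100 Gig on the standard 50GHz grid
base_tbps = 88 * 100 / 1000.0     # 8.8 Terabit per fibre
# 16-QAM doubles the bits per symbol, so the per-channel rate doubles
qam16_tbps = base_tbps * 2        # 17.6 Terabit
# transmit DSP plus gridless (flexible grid) packing: at least 20% more
flexgrid_tbps = qam16_tbps * 1.2  # ~21.1 Terabit, past the 20 Terabit mark
```

The 20 percent uplift is the piece flexible grid contributes: squeezing carriers closer than the fixed 50GHz spacing recovers spectrum that the rigid grid wastes as guard band.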
Adams says the company will soon detail the reach performance of its 400 Gig technology using 16-QAM.
It is still early in the market regarding operator demand for 400 Gig transmission. "2013 is the year for 100 Gig but customers always want to know that your platform can deliver the next thing," says Adams. "In the metro regional distances, we believe we can get a 50 percent improvement in economics using 16-QAM." That is because WaveLogic 3 can transmit 100GbE or 10x10GbE in a 50GHz channel, or double that - 2x100GbE or 20x10GbE - using 16-QAM modulation.
The system vendor is also one of AT&T's domain programme suppliers. Ciena will not expand on the partnership beyond saying there is close collaboration between the two. "We give them a lot of insight on roadmaps and on technology; they have a lot of opportunity to say where they would like their partner to be investing," says Adams.
Ciena came top in terms of innovation and leadership in a recent Heavy Reading survey of over 100 operators regarding metro packet-optical. Ciena was rated first, followed by Cisco Systems, Alcatel-Lucent and Huawei. "Our solid packet switching [on the 6500] is why CenturyLink chose us," says Adams.
OFC/NFOEC 2013 product round-up - Part 2
Second and final part
- Custom add/drop integrated platform and a dual 1x20 WSS module
- Coherent receiver with integrated variable optical attenuator
- 100/200 Gigabit coherent CFP and 100 Gigabit CFP2 roadmaps
- Mid-board parallel optics - from 150 to over 600 Gigabit.
- 10 Gigabit EPON triplexer
Add/drop platform and wavelength-selective switches
Oclaro announced an add/drop routing platform for next-generation reconfigurable optical add/drop multiplexers (ROADMs). The platform, which supports colourless, directionless, contentionless (CDC) and flexible grid ROADMs, can be tailored to a system vendor's requirements and includes such functions as cross-connect switching, arrayed amplifiers and optical channel monitors.
"If we make the whole thing [add/drop platform], we can integrate in a much better way"
Per Hansen, Oclaro
After working with system vendors on various line card designs, Oclaro realised there are significant benefits to engineering the complete design.
"You end up with a controller controlling other controllers, and boxes that get bolted on top of each other; a fairly unattractive solution," says Per Hansen, vice president of product marketing, optical networks solutions at Oclaro. "If we make the whole thing, we can integrate in a much better way."
The increasingly complex nature of the add/drop card is due to the dynamic features now required. "You have support for CDC and even flexible grid," says Hansen. "You want to have many more features so that you can control it remotely in software."
A consequence of the add/drop's complexity and automation is a need for more amplifiers. "It is a sign that the optics is getting mature; you are integrating more functionality within your equipment and as you do that, you have losses and you need to put amplifiers into your circuits," says Hansen.
Oclaro continues to expand its amplifier component portfolio. At OFC/NFOEC, the company announced dual-chip uncooled pump lasers in the 10-pin butterfly package multi-source agreement (MSA) it announced at ECOC 2012.
"We have two 500mW uncooled pumps in a single package with two fibres, each pump being independently controlled," says Robert Blum, director of product marketing for Oclaro's photonic components unit.
The package occupies half the space and consumes less than half the power compared to two standard discrete thermo-electrically cooled pumps. The dual-chip pump lasers will be available as samples in July 2013.
Oclaro gets requests to design four- and eight-degree nodes, the degree signifying the number of fibre pairs emanating from a node.
"Depending on what features customers want in terms of amplifiers and optical channel monitors, we can design these all the way down to single-slot cards," says Hansen. Vendors can then upgrade their platforms with enhanced switching and flexibility while using the same form factor card.
Meanwhile, Finisar demonstrated at OFC/NFOEC a module containing two 1x20 liquid-crystal-on-silicon-based wavelength-selective switches (WSSes). The module supports CDC and flexible grid ROADMs. "This two-port module supports the next-generation route-and-select [ROADM] architecture; one [WSS] on the add side and one on the drop side," says Rafik Ward, vice president of marketing at Finisar.
100Gbps line side components
NeoPhotonics has added two products to its 100 Gigabit-per-second (Gbps) coherent transport product line.
The first is a coherent receiver that integrates a variable optical attenuator (VOA). The VOA sits in front of the receiver to control the dynamic range of the incoming signal. "This is even more important in coherent systems as coherent is different to direct detection in that you do not have to optically filter the channels coming in," says Ferris Lipscomb, vice president of marketing at NeoPhotonics.
"That is the power of photonic integration: you do a new chip with an extra feature and it goes in the same package."
Ferris Lipscomb, NeoPhotonics
In a traditional system, he says, a drop port goes through an arrayed waveguide grating which filters out the other channels. "But with coherent you can tune it like a heterodyne radio," says Lipscomb. "You have a local oscillator that you 'beat' against the signal so that the beat frequency for the channel you are tuned to will be within the bandwidth of the receiver but the beat frequency of the adjacent channel will be outside the bandwidth of the receiver."
It is possible to do colourless ROADM drops where many channels are dropped, and using the local oscillator, the channel of interest is selected. "This means that the power coming in can be more varied than in a traditional case," says Lipscomb, depending on how many other channels are present. Since there can be up to 80 channels falling on the detector, the VOA is needed to control the dynamic range of the signal to protect the receiver.
"Because we use photonic integration to make our integrated coherent receiver, we can put the VOA directly on the chip," says Lipscomb. "That is the power of photonic integration: you do a new chip with an extra feature and it goes in the same package."
The VOA integrated coherent receiver is sampling and will be generally available in the third quarter of 2013.
NeoPhotonics also announced a narrow linewidth tunable laser for coherent systems in a micro integrated tunable laser assembly (micro-ITLA). This is the follow-on, more compact version of the Optical Internetworking Forum's (OIF) ITLA form factor for coherent designs.
While the device is sampling now, Lipscomb points out that it is for next-generation designs, so it is too early for any great demand.
Sumitomo Electric Industries and ClariPhy Communications demonstrated 100Gbps coherent CFP technology at OFC/NFOEC.
ClariPhy has implemented system-on-chip (SoC) analogue-to-digital (ADC) and digital-to-analogue (DAC) converter blocks in 28nm CMOS while Sumitomo has indium phosphide modulator and driver technology as well as an integrated coherent receiver, and an ITLA.
The SoC technology is able to support 100Gbps and 200Gbps using QPSK and 16-QAM formats. The companies say that their collaboration will result in a pluggable CFP module for 100Gbps coherent being available this year.
Market research firm Ovum points out that the announcement marks a change in strategy for Sumitomo as it enters the long-distance transmission business.
In another development, Oclaro detailed integrated tunable transmitter and coherent receiver components that promise to enable 100 Gigabit coherent modules in the CFP2 form factor.
The company has combined three functions within the transmitter. It has developed a monolithic tunable laser that does not require an external cavity. "The tunable laser has a high-enough output power that you can tap off a portion of the signal and use it as the local oscillator [for the receiver]," says Blum. Oclaro has also developed a discrete indium-phosphide modulator co-packaged with the laser.
The CFP2 100Gbps coherent pluggable module is likely to have a reach of 80-1,000km, suited to metro and metro regional networks. It will also be used alongside next-generation digital signal processing (DSP) ASICs that will use a more advanced CMOS process, resulting in a much lower power consumption.
To be able to meet the 12W power consumption upper limit of the CFP2, the DSP-ASIC will reside on the line card, external to the module. A CFP, however, with its upper power limit of 32W will be able to integrate the DSP-ASIC.
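The form-factor split follows from simple power budgeting, using the CFP's 32W and the CFP2's 12W limits quoted here and the 12W optics figure cited for coherent CFPs (the CFP2 optics figure below is an illustrative assumption):

```python
def dsp_budget_w(module_limit_w, optics_w):
    """Power left over for a coherent DSP-ASIC inside the module."""
    return module_limit_w - optics_w

cfp = dsp_budget_w(32, 12)  # 20W: room for an integrated DSP-ASIC
# assume ~8W of optics in the denser CFP2 (hypothetical figure)
cfp2 = dsp_budget_w(12, 8)  # 4W: too little, so the DSP sits on the line card
```

Only once next-generation DSP-ASICs in a more advanced CMOS process cut the chip's power draw does an in-module CFP2 DSP become plausible.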
Oclaro expects such a CFP2 module to be available from mid-2014 but there are several hurdles to be overcome.
One is that the next-generation DSP-ASICs will not be available till next year. Another is getting the optics and associated electronics ready. "One challenge is the analogue connector to interface the optics and the DSP," says Blum.
Achieving the CFP2 12W power consumption limit is non-trivial too. "We have data that the transmitter already has a low enough power dissipation," says Blum.
Board-mounted optics
Finisar demonstrated its board-mounted optical assembly (BOA) running at 28Gbps-per-channel. When Finisar first detailed the VCSEL-based parallel optics engine, it operated at 10Gbps.
The mid-board optics, being aimed at linking chassis and board-to-board interconnect, can be used in several configurations: 24 transmit channels, 24 receive channels or as a transceiver - 12 transmit and 12 receive. When operated at full rate, the resulting data rate is 672Gbps (24x28Gbps) simplex.
The BOA is protocol-agnostic, operating at several speeds ranging from 10Gbps to 28Gbps. For example, 25Gbps supports Ethernet lanes for 100Gbps while 28Gbps is used for Optical Transport Network (OTN) and Fibre Channel. Overall, the mid-board optics supports Ethernet, PCI Express, Serial Attached SCSI (SAS), Infiniband, Fibre Channel and proprietary protocols. Finisar has started shipping BOA samples.
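The quoted 672Gbps follows from the channel arithmetic; a sketch of the three configurations:

```python
def throughput_gbps(tx_channels, rx_channels, gbps_per_channel=28):
    """Aggregate transmit and receive data rates for a BOA configuration."""
    return tx_channels * gbps_per_channel, rx_channels * gbps_per_channel

tx_only = throughput_gbps(24, 0)       # (672, 0): 24x28Gbps simplex
rx_only = throughput_gbps(0, 24)       # (0, 672): 24 receive channels
transceiver = throughput_gbps(12, 12)  # (336, 336): 12 channels each way
```

At lower channel rates the same fabric scales down accordingly, e.g. 24x10Gbps gives 240Gbps simplex.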
Avago detailed samples of higher-speed Atlas optical engine devices based on its 12-channel MicroPod and MiniPod designs. The company has extended the channel speed from 10Gbps to 12.5Gbps and to 14Gbps, giving a total bandwidth of 150Gbps and 168Gbps, respectively.
"There is enough of a market demand for applications up to 12.5Gbps that justifies a separate part number," says Sharon Hall, product line manager for embedded optics at Avago Technologies.
The 12x12.5Gbps optical engines can be used for 100GBASE-SR10 (10x10Gbps) as well as quad data rate (QDR) Infiniband. The extra capacity supports Optical Transport Network (OTN) with its associated overhead bits for telecom. There are also ASIC designs that require 12.5Gbps interfaces to maximise system bandwidth.
The 12x14Gbps supports the Fourteen Data Rate (FDR) Infiniband standard and addresses system vendors that want yet more bandwidth.
The Atlas optical engines support channel data rates from 1Gbps. The 12x12.5Gbps devices have a reach of 100m while for the 12x14Gbps devices it is 50m.
Hall points out that while there is much interest in 25Gbps channel rates, the total system cost can be expensive due to the immaturity of the ICs: "It is going to take a little bit of time." Offering a 14Gbps-per-channel rate can keep the overall system cost lower while meeting bandwidth requirements, she says.
10 Gig EPON
Operators want to increase the split ratio - the number of end users supported by a passive optical network - to lower the overall cost.
A PON reach of 20km is another important requirement to operators, to make best use of their central offices housing the optical line terminal (OLT) that serves PON subscribers.
To meet both requirements, 10G-EPON has a PRX40 specification with a sufficiently high optical link budget. Finisar has announced a 10G-EPON OLT triplexer optical sub-assembly (OSA) that meets the PRX40 specification and can be used within an XFP module, among others.
The OSA triplexer supports 10Gbps and 1Gbps downstream (to the user) and 1Gbps upstream. The two downstream rates are needed as not all subscribers on a PON will transition to a 10G-EPON optical network unit (ONU).
To meet the standard, a triplexer design typically uses an externally modulated laser. Finisar has met the specification using a less complex directly modulated laser. The result is a 10G-EPON triplexer supporting a split ratio of 1:64 and higher, and that meets the 20km reach requirement.
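The split-ratio/reach trade-off is a loss-budget calculation: each doubling of the split costs roughly 3.5dB (3dB of ideal splitting plus excess loss), on top of fibre attenuation. A sketch with illustrative planning figures (0.35 dB/km fibre, 3.5 dB per split stage; these are assumptions, not PRX40 specification values):

```python
import math

def pon_loss_db(split_ratio, reach_km,
                fibre_db_per_km=0.35, split_stage_db=3.5):
    """Worst-case optical path loss of a power-split PON."""
    stages = math.log2(split_ratio)  # a 1:64 split needs six 2-way stages
    return stages * split_stage_db + reach_km * fibre_db_per_km

loss_64 = pon_loss_db(64, 20)  # 6*3.5 + 20*0.35 = 28 dB
loss_32 = pon_loss_db(32, 20)  # 24.5 dB: each split doubling adds 3.5 dB
```

Under these assumptions a 1:64 split over 20km needs roughly a 28dB link budget, which is why a higher-budget class like PRX40 matters for operators pushing the split ratio up.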
Finisar will sell the OSA to PON transceiver makers, with production starting in the first quarter of 2014. Until now, the company has used such designs only in its own PON transceivers.
See also:
OFC/NFOEC 2013 product round-up - Part 1
