An interview with John D'Ambrosia
The chairman of the Ethernet Alliance talks to Gazettabyte about the many ways Ethernet is evolving due to industry requirements.

"We are witnessing the evolution of Ethernet in ways that many of us never planned because there are markets that are demanding different things from it."
The industry feels frenetic right now, says John D'Ambrosia. "There is just so much stuff going on in terms of Ethernet," he says.
Besides the development of 400 Gigabit Ethernet (GbE) - specification work for the emerging standard is well underway - new applications are creating requirements that the existing Ethernet specifications cannot meet. These include additional Ethernet speeds: the IEEE 802.3 Ethernet Working Group has created a Study Group to develop single-lane 25GbE for server interconnect.
One busy Ethernet activity involves 100 Gig mid-reach interfaces. Mid-reach covers distances from 500m to 2km. The interfaces are needed in the data centre to connect switches, as in the leaf-spine switch architecture, and to connect switches to the data centre's edge router. The existing IEEE 802.3 Ethernet 100 Gig multi-mode standards - the 100GBASE-SR4 and the 100GBASE-SR10 - span 100m only (150m over OM4 fibre), too short for certain data centre applications.
"As we go faster, multimode's reach capabilities are coming down," says D'Ambrosia. "It has got to do with those pesky laws of physics." The next IEEE 802.3 100 Gig interface option, 100GBASE-LR4, has a 10km span, too much for many data centre applications. The 100GBASE-LR4 is also expensive, seven times the cost of the 100GBASE-SR4 interface, according to market research firm, LightCounting.
One of the reasons the IEEE 802.3 Ethernet Working Group created the 802.3bm Task Force was to develop an inexpensive 500m-reach specification. Four proposals resulted: parallel single mode (PSM4), coarse WDM (CWDM), pulse amplitude modulation and discrete multi-tone. None were adopted since each failed to muster sufficient backing. The optical industry then pursued a multi-source agreement (MSA) approach, and since January 2014, four single-mode mid-reach interfaces have emerged: the CLR4 Alliance, the CWDM4, the PSM4 and OpenOptics.
D'Ambrosia says the mid-reach optics debate first arose in 2007 when the IEEE 802.3ba group, developing the 40 GbE and 100 GbE standards, discussed whether a 3-4km 100 Gig reach interface was required. "There were still enough people who needed 10km," says D'Ambrosia, and had 3-4km been chosen, the 10km requirement would have had to be addressed with an even more complex 40km interface. "In hindsight, I'm not sure that was the right decision but it was the right decision at the time," says D'Ambrosia.
The PSM4 100 GbE mid-reach interface uses four individual fibres in each direction, each fibre operating at 25 Gig, to span 500m. The other three mid-reach interfaces have a 2km reach and use 4x25 Gig wavelengths over duplex fibre, a single fibre in each direction.
The decision to use the ribbon-fibre PSM4 or one of the other three WDM-based schemes depends on the existing fibre plant in a data centre and the link distance required. The PSM4 module may prove less costly than the other three module types, but its ribbon fibre is more expensive than duplex fibre of similar length; the longer the link, the more significant the fibre becomes as part of the overall link cost. "What someone really wants is the lowest cost solution for their application," says D'Ambrosia.
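The trade-off lends itself to a simple break-even calculation. The sketch below compares total link cost - two modules plus the fibre run - for a PSM4-style parallel link and a WDM duplex link; every price in it is an invented placeholder, not a vendor figure.

```python
# Illustrative break-even model for PSM4 (parallel fibre) versus WDM (duplex fibre)
# 100 Gig links. All prices are hypothetical placeholders, not vendor figures.

PSM4_MODULE = 700.0      # assumed PSM4 module cost, USD
WDM_MODULE = 1000.0      # assumed WDM (e.g. CWDM4-style) module cost, USD
RIBBON_PER_M = 1.50      # assumed parallel ribbon fibre cost per metre, USD
DUPLEX_PER_M = 0.30      # assumed duplex fibre cost per metre, USD

def link_cost(module: float, fibre_per_m: float, metres: float) -> float:
    """Total cost of one link: a module at each end plus the fibre run."""
    return 2 * module + fibre_per_m * metres

for metres in (100, 300, 1000, 2000):
    psm4 = link_cost(PSM4_MODULE, RIBBON_PER_M, metres)
    wdm = link_cost(WDM_MODULE, DUPLEX_PER_M, metres)
    winner = "PSM4" if psm4 < wdm else "WDM"
    print(f"{metres:>5} m: PSM4 ${psm4:,.0f}, WDM ${wdm:,.0f} -> {winner} cheaper")

# Break-even length, where 2*PSM4_MODULE + RIBBON_PER_M*d = 2*WDM_MODULE + DUPLEX_PER_M*d:
breakeven = 2 * (WDM_MODULE - PSM4_MODULE) / (RIBBON_PER_M - DUPLEX_PER_M)
print(f"Break-even at {breakeven:.0f} m")
```

With these made-up numbers the curves cross at 500m: the cheaper module wins on short runs and the cheaper fibre wins on long ones, which is exactly the dynamic D'Ambrosia describes.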
The PSM4 has other, secondary uses that are part of its appeal. "With a breakout solution, even in copper, you can get to lower speeds," says D'Ambrosia. For example, a 40 GbE QSFP optical module using parallel fibre can be viewed as a 40 Gig interface or as a dense 4x10 Gig interface, with each fibre a 10 Gig interface. Such a 'breakout' solution is likely to be attractive earlier on, as applications transition to higher speeds.
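As a sketch of the breakout idea: the same four-lane port can be provisioned either as one aggregate interface or as four independent lower-speed ones. The snippet below is purely illustrative; the interface naming is hypothetical rather than any particular switch's configuration model.

```python
# Illustrative breakout mapping for a four-lane parallel port (e.g. a 40G QSFP).
# Interface names and the provisioning model are hypothetical.

LANES = 4
LANE_RATE_GBPS = 10

def port_modes(port: str):
    """Return the two ways the same physical port can be provisioned."""
    aggregate = {port: LANES * LANE_RATE_GBPS}        # one 40 Gig interface
    breakout = {f"{port}/{lane}": LANE_RATE_GBPS      # four 10 Gig interfaces
                for lane in range(1, LANES + 1)}
    return aggregate, breakout

agg, brk = port_modes("Ethernet1")
print("aggregate:", agg)  # {'Ethernet1': 40}
print("breakout: ", brk)  # {'Ethernet1/1': 10, ..., 'Ethernet1/4': 10}
```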
Does it serve the industry to have four mid-reach solutions? D'Ambrosia says opinion varies. "My own personal belief is that it would be better for the industry overall if we didn't have so many choices," he says. "But the reality is there are a lot of different applications out there."

25 Gigabit Ethernet
Work has also started on a 25 GbE standard. An IEEE 802.3 Study Group has been created to investigate copper-based and multi-mode server interconnects at 25 Gig. In July, Google, Microsoft, Arista, Mellanox and Broadcom announced the 25G Ethernet Consortium, which is also backing 25 GbE for server interconnect.
"There are a lot of people who are worried that 25 GbE will go everywhere; you just don't introduce a new rate of Ethernet," says D'Ambrosia. And as with 100 Gig mid-reach with its proliferation of MSAs, now there is a concern about a proliferation of Ethernet speeds, he says.
But if there is one thing that D'Ambrosia has learned in his years active in Ethernet standards, it is not to second-guess the market. "If there is a cool application out there that will help save money, the market will figure it out and it [the solution] will become popular."
For now, the IEEE 802.3 25G Study Group has chosen to focus on single lane server interconnects. "That is what the charter is," says D'Ambrosia. "But that doesn't mean 25 Gigabit Ethernet will end there; there is never a single rate project."
400 Gigabit Ethernet
D'Ambrosia, who also chairs the IEEE 802.3 400 Gigabit Ethernet Task Force, highlights the latest developments for the next Ethernet speed increment. A multi-mode 400 GbE fibre standard is being worked on, as well as three single-mode fibre objectives.
The multi-mode solution will have a reach of 100m while the single mode options will span 500m, 2km and 10km. "For 500m, that is where everyone thinks parallel fibre can be used," says D'Ambrosia. At 10km, not surprisingly, it will be duplex fibre, while at 2km it is likely to be duplex simply because of the cost of long spans of parallel fibre.
In November, at the next Task Force meeting, proposals will be made as to how best to implement these differing requirements. For the multi-mode, talk is of a 16x25 Gig implementation. "I believe that is what we will see in the proposals in November," says D'Ambrosia. The Task Force is also looking at 50 Gig electrical interfaces for the longer 400 Gig reaches. Such an interface is likely to be ready by the time the 400G Task Force work is completed in 2017.
No one has suggested a 16x25 Gig single mode fibre optical interface, he says: "Do we do it as 50 Gig or 100 Gig?" Non-return-to-zero [NRZ], PAM4 and discrete multi-tone modulation schemes are all being considered. "For NRZ, we might see 8x50 Gig though that is not solidifying yet," he says. "For 500m there is talk of a x4 bundle and also pulse amplitude modulation for a single 100 Gig wavelength."
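The lane arithmetic behind these options can be laid out explicitly. The sketch below enumerates the candidate configurations mentioned above - pairings that were under discussion at the time, not an adopted standard - and checks that each multiplies out to 400 Gig.

```python
# Candidate 400 GbE lane configurations discussed in the text (2014 proposals,
# not a final standard): lane count x per-lane rate must equal 400 Gb/s.

candidates = [
    ("multi-mode, 16 x 25 Gig (NRZ)", 16, 25),
    ("single-mode, 8 x 50 Gig",        8, 50),
    ("single-mode, 4 x 100 Gig (PAM)", 4, 100),
]

for name, lanes, rate_gbps in candidates:
    total = lanes * rate_gbps
    assert total == 400, f"{name} does not sum to 400 Gig"
    print(f"{name}: {lanes} x {rate_gbps} Gb/s = {total} Gb/s")
```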
The November meeting is the last one for new proposals and in January 2015 decisions will be made.
The Ethernet Alliance is sponsoring an industry event this month, entitled "The Rate Debate", at TEF 2014 in Santa Clara, CA, on October 16th. The event will look at whether 40 Gig or 50 Gig Ethernet makes more sense, and the likely evolution: if 50 Gig is adopted, will 100 GbE based on four channels evolve to 200 Gigabit? There is also interest in taking Category 5 cabling from 1 Gig to 2.5 Gig and even 5 Gig to prolong the useful life of campus cabling, and that will also be addressed. More recently, there have been two Calls-For-Interest - one for a Next Generation Enterprise Access BASE-T PHY and one for 25GBASE-T - and these will also likely be discussed.
Ethernet speeds used to evolve by a factor of 10, then by a factor of four, and now 2.5. In future, with 50 Gig, the step may be a doubling. "With 40 Gig and 50 Gig, which one will dominate?" says D'Ambrosia. "But they are so close, why can't we come up with a solution that shares technology at both [speeds]?" These are just some of the issues to be discussed at the event.
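The scaling factors follow directly from the standardised rates; a quick arithmetic check:

```python
# Ethernet rate steps and their scaling factors (rates in Gb/s).
steps = [(1, 10), (10, 40), (40, 100), (25, 50), (50, 100)]
for old, new in steps:
    print(f"{old}G -> {new}G: x{new / old:g}")   # x10, x4, x2.5, x2, x2
```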
"We are witnessing the evolution of Ethernet in ways that many of us never planned because there are markets that are demanding different things from it," says D'Ambrosia.
Next-generation access will redefine the telcos
Gazettabyte caught up with Benoît Felten to understand the goals of his new company, Diffraction Analysis, and why he believes next-generation access is critical for service providers.

"As soon as you, the operator, make that investment decision, it has fundamental implications as to who you are as a company"
Benoît Felten, CEO, Diffraction Analysis
Gazettabyte: There are several established market research companies addressing access. What is Diffraction Analysis offering that is unique?
BF: There are two reasons [for setting up Diffraction Analysis]. The first came to me when I was doing consultancy work for a [Yankee Group] customer. He said: “You are the only guy I know working for an established company that only covers next-generation access.” All the other guys cover broadband, with next-generation access being a sub-topic.
At that moment it crystallised something that I had been thinking about for some time: the migration from legacy to next-generation access networks is probably the single most challenging issue that established players will face, and the single biggest opportunity for challengers to grab. If you drown that [topic] among legacy [broadband] issues, you might be missing the point.
The second reason, much more pragmatic, is that there are many small companies that simply cannot afford the cost of generic telecom research from established market research companies. Providing affordable access to research - for me, that is a market opportunity.
When you say next-generation access, what do you mean?
BF: It refers to the replacement of the legacy copper network in all its incarnations – most cell towers are connected with copper today - with a fibre-rich network. Cable networks, wireline copper networks, mobile networks are all going to be fibre-rich.
What are the key issues facing operators regarding next-generation access?
BF: The first for the operators is: How do we finance a network deployment and why do we do it? The established players all agree that they have to do it, sooner or later and probably sooner, and the core question is: How do we do it?
The problem is that it places access at the core of the telco business model. Ever since the internet started being successful, most legacy players – and that includes cable players – have seen themselves as service providers rather than access providers. Effectively, they are faced with a major investment which, if they don't make it, opens up opportunities for others to displace them. We are seeing that happen in small markets like Hong Kong, where a competitive player is on the path to eliminating the access network of the incumbent.
The threat is real, the customer need is real. The problem is that operators don't know how to use the network for their own revenues. They face a choice: become a long-term utility – investing in the network for 20 years and reaping revenues for another 50 – which is unpalatable for them, or find another way to use the network for revenues, keeping in mind that most new services these days come not from telcos but from over-the-top players.
What we plan to examine are the alternative paths: What will be the operators’ role and where will the operators’ revenues come from once they have made this investment?
As soon as you, the operator, make that investment decision, it has fundamental implications as to who you are as a company. It is not just an upgrade.
I was at a conference last year and a guy from NTT said: “We didn’t realise that when we made that [fibre access network] investment decision, we were rebuilding the company from scratch.” He said: “Now, 10 years on, at a strategy level, we have understood that – we are in a different business now.”
What is Diffraction Analysis going to do?
BF: We are a market research and consultancy firm. It is important to do both: consultancy keeps you grounded in what is happening in the market. Research is your ability to step back and articulate the global view.
I have already signed a couple of companies for whom I do advisory services. We also have classic consultancy projects. We are working for a vendor right now who is asking us to look at opportunities for them to enter the access market. They have disruptive technology and are looking to partner with companies and take a stake in the access market. We are in the middle of this and our advice might be: don’t do it.
One of the things we want to do is build modelling tools that allow legacy service providers to map the network deployment in time and not just based on a single investment decision. Right now the question is do I deploy fibre or not? But the reality is even if the answer is yes, the deployment will take 15 years. If it takes 15 years, what happens to all the people that don’t have fibre as I – the operator - gradually connect them?
We are trying to build a model that will optimise the cost and the service offered to end customers with a variety of technologies. This is where fibre-to-the-curb and various flavours like phantom mode DSL come into play.
We are aiming to do this by geographical area, to model where you should deploy fibre first and what you should do in non-fibre areas, and for how long, looking at the lifetime of these various technology options.*
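A minimal version of the kind of model Felten describes might look like the sketch below: each geographical area gets a fibre arrival year, and the programme cost combines the one-off build with the cost of keeping interim copper technologies running until fibre arrives. Every figure and area in it is an invented placeholder, not Diffraction Analysis's actual model.

```python
# Toy phased fibre-deployment model: fibre reaches different areas in different
# years, and interim copper technologies (DSL, fibre-to-the-curb, etc.) carry
# the remaining areas. All figures are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Area:
    name: str
    homes: int
    fibre_year: int              # years from now until fibre reaches this area
    fibre_cost_per_home: float   # one-off build cost
    interim_cost_per_home_per_year: float  # cost of keeping copper service alive

def programme_cost(areas, horizon_years: int) -> float:
    """Total cost over the horizon: fibre build plus interim copper service."""
    total = 0.0
    for a in areas:
        total += a.homes * a.fibre_cost_per_home            # one-off build
        interim_years = min(a.fibre_year, horizon_years)    # years still on copper
        total += a.homes * a.interim_cost_per_home_per_year * interim_years
    return total

areas = [
    Area("dense urban", 50_000, 2, 600, 20),
    Area("suburban", 30_000, 7, 1_000, 25),
    Area("rural", 10_000, 14, 2_500, 30),
]

print(f"15-year programme cost: ${programme_cost(areas, 15):,.0f}")
```

Varying the fibre arrival years and the interim technology per area is what turns the single deploy-or-not question into the time-phased optimisation Felten describes.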
What are the key lessons you learnt as a Yankee Group analyst?
BF: One of the things that strikes me is that in the economic shift we have experienced over the last 30 years, something has been lost, and that is long-term vision. That leads many organisations to make hugely inefficient decisions. These decisions may be rational but the long term is no longer part of the equation. In the telecom business it is striking how far this can lead people into making wrong decisions.
The second thing that I learnt interacting with many industry players is that the single toughest challenge each organisation has is fighting against its own culture. There is a culture of business-as-usual which is at odds with the challenges of an ever-shifting technology market. Even in companies in the internet space that everyone views as agile and willing to reassess themselves, you find these cultural issues.
I’m not saying anything original but interacting with these companies all around the world for the three years at Yankee highlighted this for me.
Most broadband users are still DSL-based. How will fibre-based access become massively deployed?
BF: Essentially there are three drivers for telcos to deploy. In order of importance they are: competition, network reboot and meeting customer demand.
Competition is a clear driver. When as an organisation your network access business is threatened, every consideration about how fast you deploy for payback goes out of the window - you have to deploy. And then you learn the hard way since by responding and not anticipating, you make mistakes.
The second driver [network reboot] is not mature today. Smart CTOs around the world are seeing fibre deployments as an opportunity to rethink way more than just their access infrastructure. And WDM-PON [wavelength division multiplexing – passive optical network] technology in access plays a significant part in that thinking.
If they deploy now, they may make savings and achieve network concentration but it is not massive. If they wait they might be able to save more which is why this driver isn’t working right now.
The third driver is meeting customer needs. Now, in their public discourse, operators say this is first and foremost but the reality is that since they have not found ways to make money out of traffic, they don’t want more traffic. So meeting customer needs is not a priority except if you are in a competitive market and someone else is meeting customers’ needs in which case you have to do it.
Diffraction Analysis’s team comprises people with wireline experience but the company does plan to also cover mobile. “I do think that there is a great deal of sense in having a mobile arm too but I can’t build that myself – I don’t have the credibility or the knowledge,” says Felten, who is looking at partnerships or recruitment to add mobile to the operation.
*Diffraction Analysis has just published its research programme till June 2011.
Rafik Ward Q&A - final part

"Feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive"
Rafik Ward, Finisar
Q: Broadway Networks – why has Finisar acquired the company?
A: We spent quite some time talking to Broadway and understanding their business. We also talked to Broadway’s customers and the feedback we got on the technical team, the products and what this little start-up was able to accomplish was unanimously very positive.
We think what Broadway has done, for instance their EPON* stick product, is very interesting. With that product, an end user has the ability to make any SFP* port on a low-end Ethernet switch an EPON ONU* interface. This opens up a whole new set of potential customers and end users for EPON.
In reality, consumers will never have Ethernet switches with SFP ports in their house. Where we do see such Ethernet switches is in every major enterprise and in many multi-dwelling units. It is an interesting technology that enables enterprises and multi-dwelling units to quickly tool-up for EPON.
* [EPON - Ethernet passive optical network, SFP - small form-factor pluggable optical transceiver, ONU - optical network unit]
Optical transceivers have been getting smaller and faster in the last decade yet laser and photo-detector manufacturing have hardly changed, except in terms of speed. Is this about to change?
Speed is one of the focus areas for the industry and will continue to be. Looking forward, though, in a number of applications we are going to hit the limit for these lasers, and we are going to have to look more carefully beyond just raw laser speed to move up the data-rate curve.
"We are going to hit the limit for these lasers"
A lot of this work has already started on the line side using different modulation formats and DSP* technology. Over time the question is: What happens on the client side? In future, do we look to other modulation formats on the client side? Eventually we will get there; it may take several years before we need to do things like that. But as an industry we would be foolish to think we won’t have to do this.
WDM* is going to be an increasingly important technology on the client side. We are already seeing this with the 40GBASE-LR4 and 100GBASE-LR4 standards.
* [DSP - digital signal processing, WDM - wavelength-division multiplexing]
Google gave a presentation at ECOC that argued for the need for another 100Gbps interface. What is Finisar’s view?
Feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive. We have spent a lot of time with customers helping them understand how the current LR4 standard, as written, actually enables a very low-cost optical interface, and how quickly we believe the cost of 100 Gig can be brought down considerably.
[Photo: Rafik Ward, right, giving Glenn Wellbrock, director of backbone network design at Verizon Business, a tour of Finisar's labs.]
That was part of the details that [Finisar’s] Chris Cole also presented at ECOC.
There has certainly been a lot of media attention on the two [ECOC] presentations from Finisar and Google. This really is not so much about the, quote, ‘drama’ of two companies disagreeing over which optical interface makes more sense. It is more fundamental than that.
What it comes down to is that, as an industry, we have pretty limited resources. The best thing all of us can do is try to direct these resources – this limited pool we have combined throughout the industry - on a path that makes the most sense to reduce bandwidth cost most significantly.
The best way to do that, and that is already established, is through standards. The [IEEE] standard got it right that the path the industry is on is going to enable the lowest cost 100 Gig [interface]. Like everything, there is some investment required to get us there. The 25 Gig technology now [used as 4x25 Gig] is becoming mainstream and will soon enable the lowest cost solution. My view is that within 18 months to two years this will be a moot point.
If the technology was available 18 months sooner, we wouldn’t even be having this discussion. But that is the position that we, as an industry, are in. With that, it creates some tensions, some turmoil, where customers don’t like to pay more than they perceive they have to.
The CFP form factor is relatively large. Is the point that, had current technology been available 18 months ago, 100Gbps could have come out in a QSFP?
The heart of the debate is cost.
There are other elements that always play into a debate like this. Beyond the cost argument, there is the question of how quickly two optical interfaces, such as a 4x25 Gig versus a 10x10 Gig, can each enable a smaller form-factor solution.
But I think that is secondary. Had we not had the cost problem that we have now between 4x25 Gig and 10x10 Gig, I don’t think we would be talking about it.
So it’s the current cost of the 4x25 Gig that is the issue?
Correct.
In September, the ECOC conference and exhibition was held. What were your impressions and did you detect any interesting changes?
There wasn’t so much an overwhelming theme this year at ECOC. ECOC 2009 was the year of coherent detection; this year there wasn’t a theme that resonated strongly throughout.
The mood was relatively upbeat. From our perspective, ECOC seemed a little bit smaller in terms of the size of the floor. But all the key people you would expect to be at the show were there.
Maybe the strongest theme – and I wrote about this in my blog – was colourless, directionless, contentionless (CDC) [ROADMs]. I think what I said is that they should have renamed it not ECOC but the ECDC show.
"A blog ... enables a much more informal mechanism to communicate to a broad audience."
Do you read business books and is there one that is useful for your job?
Probably the book I think about the most in my job is Clayton Christensen's The Innovator’s Dilemma.
He talks about how, when you look at very successful technology companies that have failed, what causes them to fail is often new solutions that come from the very low end of the market.
A lot of companies, and he cites examples from the disk drive industry, prided themselves on focussing on the high end of the market but ultimately ended up failing because there was a surprise upstart, someone who came in at the market's low end – in terms of performance, cost etc. – that continued to innovate using their low-end architecture, making it suitable for the core market.
For these large, well-established companies, once they realised they had this competitor, it was too late.
I think about that business book probably more than others. It’s a very interesting take on technology and the threat that can be posed to people in high-tech companies.
Your job sounds intensive and demanding. What do you do outside work to relax?
I’m a big [ice] hockey fan. I’ve been a hockey fan for many years; it’s a pretty intense sport. These days I tend to watch more hockey than I play but I very much enjoy the sport.
The other thing I started up this year that I had never done before – a little side project – was vegetable gardening. Surprisingly, it ended up taking a lot of my attention and I think it was a good distraction for me.
It can be quite remarkable, when you have your own little vegetable garden, how often you go and look at its progress. I often found that, coming home from work, the first thing I’d want to do was go and see how things were progressing in my vegetable garden.
You are the face of Finisar’s blog. What have you learnt from the experience?
A blog is an interesting tool to get information out to a broad audience. For companies like Finisar, it serves as a very important communication vehicle that didn’t exist previously.
In the old days, if you wanted to get information out to a broad group of customers, you either had to meet and communicate that information face-to-face, or via email; very targeted, one customer-at-a-time communication.
Another way was the press release. A press release was a very easy way to broadcast that information. But the challenge is that not all information that you want to broadcast is suitable for a press release.
The reason why I really like the blog is that it enables a much more informal mechanism to communicate to a broad audience.
Has it helped your job in any tangible way?
We found some interesting customer opportunities. These have come in through the blog when we’ve talked about specific products. That hasn’t happened extremely frequently but we have had a few instances. So it’s probably the most tangible thing: we can point to enhanced business because of it.
But the strength of something like a blog goes much deeper than that, in terms of the communication vehicle it enables.
You have about a year’s experience running a blog. If an optical component company is thinking about starting a blog, what is your advice?
The best advice I can give to anybody looking to do a blog is that it is something you have to commit to up-front.
A blog where you don’t continue to refresh the content regularly becomes a tired blog very quickly. We have made a conscious effort to have updated postings as best we can, on a weekly basis or even more frequently. There are certainly periods where we have gone longer than that but if you look back, in general, we have a wide variety of content that has been refreshed regularly.
I have to give credit to others - guest bloggers - within the organisation that help to maintain the content. This is critical. I would struggle to keep up with the pace if it was just myself every week.
Click here for the first part of Rafik Ward's Q&A.
Q&A with Rafik Ward - Part 1
"This is probably the strongest growth we have seen since the last bubble of 1999-2000." Rafik Ward, Finisar
Q: How would you summarise the current state of the industry?
A: It’s a pretty fun time to be in the optical component business, and it’s some time since we last said that.
We are at an interesting inflexion point. In the past few years there has been a lot of emphasis on the migration from 1 and 2.5 Gig to 10 Gig. The [pluggable module] form factors for these speeds have been known, and the work involved executing on the SFP, SFP+ and XFP.
But in the last year there has been a significant breakthrough; now a lot of the discussion with customers is around 40 and 100 Gig, around form factors like the QSFP and CFP – new form factors we haven’t discussed before – around new ways to handle data traffic at these data rates, and around new schemes like coherent modulation.
It’s a very exciting time. Every new jump is challenging but this jump is particularly challenging in terms of what it takes to develop some of these modules.
From a business perspective, certainly at Finisar, this is probably the strongest growth we have seen since the last bubble of 1999-2000. It’s not equal to what it was then and I don’t think any of us believes it will be. But certainly the last five quarters have been the strongest growth we’ve seen in a decade.
What is this growth due to?
There are several factors.
There was a significant reduction in spending at the end of 2008 and part of 2009, where end users did not keep up with their networking demands. Due to the global financial crisis, they [service providers] significantly cut capex, so some catch-up has been occurring. Keep in mind that during the global financial crisis, based on every metric we’ve seen, the rate of bandwidth growth was unfazed.
From a Finisar perspective, we are well positioned in several markets. The WSS [wavelength-selective switch] ROADM market has been growing at a steady clip while other markets are growing quite significantly – at 10 Gig, 40 Gig and even now 100 Gig. The last point is that, based on all the metrics we’ve seen, we are picking up market share.
Your job title is very clear but can you explain what you do?
I love my job because no two days are the same. I come in with certain things I expect to happen and get done, yet it rarely turns out how I envisaged.
There are really three elements to my job. Product management is the significant majority of where I focus my efforts. It’s a broad role – we are very focussed on the products and on the core business to win market share. There is a pretty heavy execution focus in product management but there is also a strategic element as well.
The second element of my job is what we call strategic marketing. We spend time understanding new, potential markets that we as Finisar can go after using our core competencies and a lot of the things we’ve built. These are not existing markets but adjacent ones: Are there opportunities for optical transceivers in things like military and consumer applications?
One of the things I’m convinced of is that, as the price of optical components continues to come down, new markets will emerge. Some of those markets we may not even know today, and that is what we are finding. That’s a pretty interesting part of my job but candidly I spend quite a bit less time on it [strategic marketing] than product management.
The third area is corporate communications: talking to media and analysts, press releases, the website and blog, and trade shows.
"40Gbps DPSK and DQPSK compete with each other, while for 40 Gig coherent its biggest competitor isn’t DPSK and DQPSK but 100 Gig."
Some questions on markets and technology developments.
Is it becoming clearer how the various 40Gbps line side optics – DPSK, DQPSK and coherent – are going to play out?
The situation is becoming clearer but that doesn’t mean it is easier to explain.
The market is composed of customers and end users that will use all of the above modulation formats. When we talk to customers, every one has picked one, two or sometimes all three modulation formats. It is very hard to point to any trend in terms of picks; it is more on a case-by-case basis. Customers are, like us at the component level, very passionate about the modulation format they have chosen and will have a variety of very good reasons why a particular format makes sense.
Unlike certain markets where you see a level of convergence, I don’t think that there will be true convergence at 40 Gbps. Coherent – DP-QPSK - is a very powerful technology but the biggest challenge 40 Gig has with DP-QPSK is that you have the same modulation format at 100 Gig.
The more I look at the market, the more I see 40Gbps DPSK and DQPSK competing with each other, while for 40 Gig coherent the biggest competitor isn’t DPSK or DQPSK but 100 Gig.
Finisar has been quiet about its 100 Gig line-side plans. What is its position?
We view these markets – 40 and 100 Gig line side – as potentially very large markets at the optical component level. Despite the fact that some customers are pursuing vertically integrated solutions, we still see these markets as large ones. It would be foolish for us not to look at these markets very carefully. That is probably all I would say on the topic right now.
"Photonic integration is important and it becomes even more important as data rates increase."
Finisar has come out with an ‘optical engine’, a [240Gbps] parallel optics product. Why now?
This is a very exciting part of our business. We’ve been looking for some time at the future challenges we expect to see in networking equipment. If you look at fibre optics today, they are used on the front panel of equipment. Typically it is pluggable optics, sometimes it is fixed, but the intent is that the optics is the interface that brings data into and out of a chassis.
People have been using parallel optics within chassis – for backplane and other applications – but it has been niche. The reason it’s niche is that the need hasn’t been compelling for intra-chassis applications. We believe that need will change in the next decade. Parallel optics intra-chassis will be needed just to be able to drive the amount of bandwidth required from one printed circuit board to another or even from one chip to another.
The applications driving this right now are the very largest supercomputers and the very largest core routers. So it is a market focussed on the extreme high-end but what is the extreme high-end today will be mainstream a few years from now. You will see these things in mainstream servers, routers and switches etc.
Photonic integration – what’s happening here?
Photonic integration is something that the industry has been working on for several years in different forms; it continues to chug on in the background but that is not to understate its importance.
For vendors like Finisar, photonic integration is important and it becomes even more important as data rates increase. What we are seeing is that a lot of emerging standards are based around multiple lasers within a module. Examples are the 40GBASE-LR4 and the 100GBASE-LR4 (10km reach) standards, where you need four lasers and four photo-detectors and the corresponding mux-demux optics to make that work.
The higher the number of lasers required inside a given module, and the more complexity you see, the more room you have to cost-reduce with photonic integration.
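The scaling argument can be made concrete with a toy cost model: with discrete optics, packaging and alignment cost is incurred roughly per laser, while an integrated photonic chip carries a higher fixed cost but a much smaller per-channel increment. All numbers below are invented for illustration only.

```python
# Toy comparison of discrete versus photonically integrated module optics.
# All costs are invented placeholders; only the scaling behaviour matters.

DISCRETE_PER_LASER = 120.0   # assumed laser + alignment + packaging, per channel
PIC_FIXED = 300.0            # assumed fixed cost of an integrated chip and package
PIC_PER_CHANNEL = 25.0       # assumed incremental cost per integrated channel

def discrete_cost(channels: int) -> float:
    return DISCRETE_PER_LASER * channels

def integrated_cost(channels: int) -> float:
    return PIC_FIXED + PIC_PER_CHANNEL * channels

for n in (1, 4, 10):   # e.g. a single-laser module, an LR4-style 4-laser module, 10x10
    print(f"{n:>2} lasers: discrete ${discrete_cost(n):,.0f}, "
          f"integrated ${integrated_cost(n):,.0f}")
# The more lasers a module needs, the faster the integrated approach's fixed
# cost is amortised - the cost-reduction room described above.
```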
