ADVA targets access with its latest pluggable module

- The 25 gigabit-per-second (Gbps) SFP28 is self-tuning and has a reach of 40km
- ADVA’s CEO, Christoph Glingener, in his plenary talk at ECOC 2022 addressed the unpredictable nature of technology adoption.
ADVA has expanded its portfolio of optical modules with an SFP28 for the access market.
The AccessWave25 is a self-tuning dense wavelength division multiplexing (DWDM) pluggable.
The SFP28 is designed to enable communications service providers to straightforwardly upgrade their access networks from 10Gbps to 25Gbps.
ADVA made the announcement just before ECOC 2022.
Features
The SFP28 module links switches and routers to DWDM open-line systems (see diagram below).
The 40km-reach pluggable uses 4-level pulse amplitude modulation (PAM-4) and supports 25 gigabit Ethernet and eCPRI traffic.
The module uses the G.metro self-tuning standard to negotiate a chosen C-band channel with the remote-end transceiver, simplifying configuration and removing human error.
The G.metro communication channel also enables remote monitoring of the module.
The SFP28 consumes 3W and works over the extended temperature range of -40°C to +85°C.

Strategy
ADVA says vertical integration is a critical part of its Optical Engine unit’s strategy.
Saeid Aramideh, vice president of business development for ADVA’s Optical Engine unit, says the unit focusses on technology disciplines such as silicon photonics, laser technology and digital signal processing.
The digital signal processing includes aggregation, as with ADVA’s MicroMux module products, PAM-4, as used by the AccessWave25, and coherent, as with its 100ZR module announced in June.
Advanced packaging is another technology area of interest.
“These are the fundamental innovation areas we focus on,” says Aramideh. “We build our product portfolio based on these platforms.”
ADVA also looks at the market to identify product gaps.
“Not so much every MSA module, but what is happening on the aggregation side,” says Aramideh. “What is it that other people are not paying attention to?”
This is what motivated ADVA’s MicroMux products. The MicroMux module family includes a 10-by-10-gigabit module feeding 100 gigabits, a 10-by-1-gigabit feeding 10 gigabits, and a 4-by-100-gigabit feeding 400 gigabits.
“The reality is over 10,000 MicroMux modules are carrying traffic with a top tier-one network provider in Europe,” says Aramideh. “Not on ADVA equipment but on another network equipment maker’s gear, which we haven’t made public.”
For access aggregation, ADVA unveiled at OFC its four-by-10 gigabit MicroMux Edge BiDi with a 40km reach.
“This is for Ethernet, backhaul, and services where fibre is limited and symmetric latency is important,” says Aramideh.
ADVA’s 100ZR module uses a coherent digital signal processor (DSP) developed with Coherent. The 100ZR is a QSFP28 module that dissipates 5W and reaches 300km.
Now, ADVA has added the AccessWave25, a tunable SFP28 that uses direct-detect technology and PAM-4, including ADVA’s IP for distance optimisation.
“The AccessWave25 works on legacy, so if you have a 10-gigabit network, you don’t have to change anything on the physical layer,” he says.
ADVA also looks at metro applications and says it will announce lower-power, smaller form factor coherent designs.
ECOC plenary talk
The CEO of ADVA, Christoph Glingener, gave a plenary talk at ECOC.
In his talk, entitled Never say never, Glingener reflected on technology adoption and its timing.
He pointed out how technologies that, at first, seem impractical or too difficult to adopt can subsequently become mainstream. He cited coherent optical communication as one example.
Glingener also discussed how such unpredictability impacts business, citing supply-chain issues, the global pandemic, and sovereignty.
Sovereignty and the influx of government capital for fibre rollout and semiconductors confirm that the optical communications industry is in a good place. But Glingener worries that the industry’s practitioners are ageing and stresses that more needs to be done to attract graduates.
Tracing optical communications’ progress, he talked about the 15-year cycles of first direct detect and then fibre amplification. Coherent then followed in 2010.
The industry is thus ripe for breakthrough technology.

Reaching limits
Shannon’s limit means spectral efficiency no longer improves, while Moore’s law continues its demise. Near-term trends are clear, he says: parallelism, whether that means multiple spectrum bands, multiple fibres, or multiple fibre cores. This, in turn, will drive new optical amplifier and wavelength-selective switch designs.
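Glingener’s case for parallelism follows directly from the Shannon capacity formula: capacity grows only logarithmically with signal-to-noise ratio but linearly with bandwidth, so adding bands, fibres, or cores is the better lever. A minimal sketch; the bandwidth and SNR figures are illustrative, not from the talk:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon limit: C = B * log2(1 + SNR), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 4e12  # roughly a C-band's worth of spectrum (~4THz), illustrative

baseline = shannon_capacity(B, 100)        # ~26.6 Tbit/s
more_snr = shannon_capacity(B, 200)        # doubling SNR adds under 1 bit/s/Hz
more_bands = shannon_capacity(2 * B, 100)  # doubling spectrum doubles capacity

print(round(more_snr / baseline, 2))    # ~1.15: modest gain from better SNR
print(round(more_bands / baseline, 2))  # 2.0: parallelism scales linearly
```

Doubling the SNR buys about 15% more capacity here; doubling the spectrum in use doubles it, which is why multiple bands, fibres and cores are the near-term trend.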
Further optimisation will be needed: integration at the device level and the creation of denser systems. Network automation is also essential, and that requires much work.
Glingener also argues for optical bypass rather than electrical packet processing. Large core routers overseeing routing at the IP and optical layer will not aid the greening of the internet.
Next wave
So what is the next technology wave?
Possibilities he cited include hollow-core fibre, photonic computing, and quantum entanglement for communications and the quantum internet.
Will they reach a large scale? Glingener is doubtful.
Whatever the technology proves to be, he said, it is likely already being discussed at ECOC 2022.
If he has a message for the audience, it is to apply their own filter whenever they hear people say, ‘it will never come,’ or ‘it is too difficult.’ Never say never, says Glingener.
Has the restructuring of the optical industry already started?
The view that consolidation in the optical networking industry is needed is not new. For a decade, ever since the end of the optical boom in 2001, consolidation has been called for and expected. And while the many optical start-ups funded then have long exited or been acquired, the industry continues to support numerous optical networking and component generalists and specialists.
Given the state of the telecom market, is a more fundamental industry restructuring finally on its way?

"The business model of the communication sector needs to change, and change in a relatively short order"
Larry Schwerin, CEO of Capella Intelligent Subsystems
Larry Schwerin, CEO of Capella Intelligent Subsystems, believes change is inevitable. He argues that the industry supply chain will change, especially as firms become more vertically integrated.
"This is not to say that the market and demand are not there," says Schwerin, but the industry is stuck with a decade-old structure while the market has changed.
Optical market dynamics
Schwerin starts his argument by highlighting certain fundamental drivers. IP traffic continues to grow at over 30% a year, while the nature of the traffic is changing, especially with cloud computing and as users generate more digital media content.
“The current rate of bandwidth growth coupled with the rate of CapEx spend, the gap is widening and the revenue-per-bit is dropping,” he says. “Some argue that bandwidth growth will slow down as operators charge [users] more, but to date this hasn't been seen.”
These trends are welcome for the optical companies, says Schwerin, as operators adopt lower-layer optical switching as a cheaper alternative to IP routing. "The number of [wavelength-selective] switches per node is growing quite dramatically," he says. "We are now seeing deployments with, on average, 6-8 switches per node, and people are projecting as many as 20 as operators start deploying colourless, directionless, contentionless-based switching."
But such demand is coupled with fierce competition among numerous players at each layer of the optical industry's supply chain.
"Some 80% of the optics used by system vendors are bought. How do you differentiate on features above and beyond what you are buying?"
Supply chain
The annual global operator market for wireless and wireline equipment is valued at US $250bn, says Schwerin, using market research and financial analyst firms' data.
The global optical networking equipment market is $15bn. The Chinese vendors Huawei and ZTE now account for 30% of the market, while Alcatel-Lucent is the only other major vendor with double-digit share. The rest of the market is split among numerous optical vendors. "If you think about that, if you have 5% or less [optical networking] market share, that really is not a sustainable business given the [companies'] overhead expenses," says Schwerin.
The global optical component market is valued at $5bn. It is likely larger, anything up to $8bn, argues Schwerin, because of the Chinese optical companies supplying Huawei and ZTE.
"You have a $5-8bn market selling products into $15bn, and then the $15bn is trying to repurpose that material and resell it to the carriers - is that really what is going on?" says Schwerin. To this vendor hierarchy is added contract manufacturers, with different players serving the component and the system vendors.
The slim profits operators make on their services are forcing them to place significant pricing pressure on the system companies, which already face fierce competition. Meanwhile, the optical component and contract manufacturers are also trying to make money in this environment.
Looking at gross margin data from Morgan Stanley, Schwerin says that the system vendors' figures range from 35% for the low end to 40% at the high end. "What the figures highlight is a lack of differentiation," he says. "And, in part, it is because they are buying all the same technology."
Schwerin says that some 80% of the optics used by system vendors are bought. "How do you differentiate on features above and beyond what you are buying?"
The optical components vendors' gross margins of a year ago were 30%. More recent data shows these figures are down, with the only segment showing a rise being optical sub-systems.
What next?
Schwerin says one way to improve the health of the industry is greater vertical integration. How this will be done - which players get consumed and how - will only become clear in the next 2-3 years but he is confident it will happen. "There are just too many layers of the ecosystem and it is just too fragmented," he says.
Operator mergers and slower spending put pressure on vendors at each layer of the supply chain, inducing revenue stalls. "These swings seem to be more and more violent," says Schwerin. "It is difficult for companies to maintain themselves in these cycles, let alone innovate."
Schwerin highlights Cisco Systems' acquisition of silicon photonics start-up Lightwire earlier this year as an example of a system vendor embracing vertical integration while also acquiring innovation. Another example is Huawei's acquisition of optical integration specialist CIP Technologies.
"The business model of the communication sector needs to change, and change in a relatively short order," says Schwerin, who believes it has already started. He cites the merger between the two large optical component vendors, Oclaro and Opnext, and expects a similar deal among the system vendors: "One of those 5 percenters will be absorbed."
As the market further consolidates, and as system companies drive fundamental technologies, the components' market will start to shrink. "It is then like a chain reaction; it forces itself," he says.
Schwerin's take is that, rather than the existing optical component and contract manufacturing model continuing, what is more likely is that only basic optical components will be supplied. Differentiation will be driven by the system vendors.
Q&A: Ciena’s CTO on networking and technology
In Part 2 of the Q&A, Steve Alexander, CTO of Ciena, shares his thoughts about the network and technology trends.
Part 2: Networking and technology

"The network must be a lot more dynamic and responsive"
Steve Alexander, Ciena CTO
Q. In the 1990s dense wavelength division multiplexing (DWDM) was the main optical development while in the '00s it was coherent transmission. What's next?
A couple of perspectives.
First, the platforms that we have in place today: III-V semiconductors for photonics and collections of quasi-discrete components around them - ASICs, FPGAs and pluggables - that is the technology we have. We can debate, based on your standpoint, how much indium phosphide integration you have versus how much silicon integration.
Second, the way that networks built in the next three to five years will differentiate themselves will be based on the applications that the carriers, service providers and large enterprises can run on them.
This will be in addition to capacity - capacity is going to make a difference for the end user and you are going to have to have adequate capacity with low enough latency and the right bandwidth attributes to keep your customers. Otherwise they migrate [to other operators], we know that happens.
You are going to start to differentiate based on the applications that the service providers and enterprises can run on those networks. I see the value of networking changing from a hardware-based problem-set to one largely software-based.
I'll give you an analogy: You bought your iPhone, I'll claim, not so much because it is a cool hardware box - which it is - but because of the applications that you can run on it.
The same thing will happen with infrastructure. You will see the convergence of the photonics piece and the Ethernet piece, and you will be able to run applications on top of that network that will do things such as move large amounts of data, encrypt large amounts of data, set up transfers for the cloud, assemble bandwidth together so you can have a good cloud experience for the time you need all that bandwidth and then that bandwidth will go back out, like a fluid, for other people to use.
That is the way the network is going to have to operate in future. The network must be a lot more dynamic and responsive.
Q. How does Ciena view 40 and 100 Gig, and in particular the role of coherent and alternative transmission schemes (direct detection, DQPSK)? Nortel Metro Ethernet Networks (MEN) was a strong coherent adherent, yet Ciena was developing 100Gbps non-coherent solutions before it acquired MEN.
If you put the clock back a couple of years, where were the classic Ciena bets and what were the classic MEN bets?
We were looking at metro, edge of network, Ethernet, scalable switches, lots of software integration and lots of software intelligence in the way the network operates. We did not bet heavily on the long distance, submarine space and ultra long-haul. We were not very active in 40 Gig, we were going straight from 10 to 100 Gig.
Now look at the bets the MEN folks placed: very strong on coherent and applying it to 40 and 100 Gig, strong programme at 100 Gig, and they were focussed on the long-haul. Well, to do long-haul when you are running into things like polarisation mode dispersion (PMD), you've got to have coherent. That is how you get all those problems out of the network.
Our [Ciena's] first 100 Gig was not focussed on long-haul; it was focussed on how you get across a river to connect data centres.
When you look at putting things together, we ended up stopping our developments that were targeted at competing with MEN's long-haul solutions. They, in many cases, stopped developments coming after our switching, carrier Ethernet and software integration solutions. The integration worked very well because the intent of both companies was the same.
Today, do we have a position? Coherent is the right answer for anything that has to do with physical propagation because it simplifies networks. There are a whole bunch of reasons why coherent is such a game changer.
The reason the first 40 Gig implementations didn't go so well was cost. When we went from 10 to 40 Gig, the only tool was cranking up the clock rate.
At that time, once you got to 20GHz you were into the world of microwave. You leave printed circuit boards and normal manufacturing and move into a world more like radar. There are machined boxes, micro-coax and a very expensive manufacturing process. That frustrated the desires of the 40 Gig guys to be able to say: Hey, we've got a better cost point than the 10 Gig guys.
Well, with coherent, the fact that I can unlock the bit rate from the baud rate, the signalling rate from the symbol rate, that is fantastic. I can stay at 10GHz clocks and send four bits per symbol - that is 40Gbps.
My basic clock rate, which determines manufacturing complexity, fabrication complexity and the basic technology, stays with CMOS, which everyone knows is a great place to play. Apply that same magic to 100 Gig. I can send 100Gbps but stay at a 25GHz clock - that is tremendous, that is a huge economic win.
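The arithmetic Alexander describes - decoupling bit rate from symbol rate - can be sketched as follows. A polarisation-multiplexed QPSK format gives four bits per symbol; the article's quoted numbers are consistent with that, though it does not name Ciena's exact modulation format:

```python
def line_rate_gbps(baud_gbd: float, bits_per_symbol: int) -> float:
    """Bit rate = symbol (baud) rate x bits carried per symbol."""
    return baud_gbd * bits_per_symbol

# PM-QPSK: 2 bits/symbol (QPSK) x 2 polarisations = 4 bits per symbol
PM_QPSK = 4

print(line_rate_gbps(10, PM_QPSK))  # 40.0: 40 Gbps from a 10GHz clock
print(line_rate_gbps(25, PM_QPSK))  # 100.0: 100 Gbps from a 25GHz clock
```

The clock rate, not the bit rate, is what determines manufacturing complexity, which is the economic win Alexander is pointing to.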
Coherent lets you continue to use the commercial merchant silicon technology base, which is where you want to be. You leverage the year-on-year cost reduction, a world unto itself that is driving the economics, and we can leverage that.
So you get economics with coherent. You get improvement in performance because you simplify the line system - you can pop out the dispersion compensation, and you solve PMD with maths. You also get tunability. I'm using a laser - a local oscillator at the receiver - to measure the incoming laser. I have a tunable receiver that has a great economic cost point and makes the line system simpler.
Coherent is this triple win. It is just a fantastic change in technology.
Q. What is Ciena’s thinking regarding bringing sub-systems/components in-house (vertical integration), or the idea of partnerships to guarantee supply? One example is Infinera, which makes photonic integrated circuits around which it builds systems. Another is Huawei, which makes its own PON silicon.
The two examples are good ones.
With Huawei you have to treat them somewhat separately as they have some national intent to build a technology base in China. So they are going to make decisions about where they source components from that are outside the normal economic model.
Anybody in the systems business that has a supply chain periodically goes through the classic make-versus-buy analysis. If I'm buying a module, should I buy the piece-parts and make it? You go through that portion of it. Then you look within the sub-system modules and the piece-parts I'm buying and say: What if I made this myself? It is frequently very hard to say if I had this component fully vertically integrated I'd be better off.
A good question to ask about this is: Could the PC industry have been better if Microsoft owned Intel? Not at all.
You have to step back and say: Where does value get delivered with all these things? A lot of the semiconductor and component pieces were pushed out [by system vendors] because there was no way to get volume, scale and leverage. Unless you corner the market, that is frequently still true. But that doesn't mean you don't go through the make-versus-buy analysis periodically.
Call that the tactical bucket.
The strategic one is much different. It says: There is something out there that is unique and so differentiated, it would change my way of thinking about a system, or an approach or I can solve a problem differently.

"Coherent is this triple win. It is just a fantastic change in technology"
If it is truly strategic and can make a real difference in the marketplace - not a 10% or 20% difference but a 10x improvement - then I think any company is obligated to take a really close look at whether it would be better being brought inside or entering into a good strategic partnership arrangement.
Certainly Ciena evaluates its relationships along these lines.
Q. Can you cite a Ciena example?
Early on, when Ciena started, there was a technology that was differentiated at the time: fibre Bragg gratings. We made them ourselves. Today you would buy them.
You look at it at points in time. Does it give me differentiation? Or source-of-supply control? Am I at risk? Is the supplier capable of meeting my needs? There are all those pieces to it.
Q. Optical Transport Network (OTN): integrated versus standalone products. Ciena has a standalone model but plans to evolve to an integrated solution. Others have an integrated product, while others still launched a standalone box and have since integrated. Analysts say such strategies confuse the marketplace. Why does Ciena believe its strategy is right?
Some of this gets caught up in semantics.
Why I say that is because today we have boxes that you would say are switches but you can put pluggable coloured optics in. Whether you would call that integrated probably depends more on what the competition calls it.
The place where there is most divergence of opinion is in the network core.
Normally people look at it and say: one big box that does everything would be great - that is the classic God-Box problem. When we look at it - and we have been looking at it on and off for 15 years now - if you try to combine every possible technology, there are always compromises.
The simplest one we can point to now: If you put the highest performance optics into a switch, you sacrifice switch density.
You can build switches today that, because of the density of the switching ASICs, are I/O-port constrained: you can't get enough connectors on the faceplate to talk to the switch fabric. That will change with time, there is always ebb and flow. In the past that would not have been true.
If I make those I/O ports datacom pluggables, that is about as dense as I'm going to get. If I make them long-distance coherent optics, I'm not going to get as many because coherent optics take up more space. In some cases, you can end up halving your port density on the switch fabric. That may not be the right answer for your network, depending on how you are using that switch.
We have both technologies in-house, and in certain applications we will combine them. Years ago we put coloured optics on CoreDirector to talk to CoreStream; that was specific to certain applications. The reason is that in most networks, people try to optimise switch density and transport capacity, and these are different levers. If you bolt those levers together, you often don't get the optimal point.
Q. Any business books you have read that have been particularly useful for your job?
The Innovator's Dilemma (by Clayton Christensen). What is good about it is that it has a couple of constructs that you can use with people so they will understand the problem. I've used some of those concepts and ideas to explain where various industries are, where product lines are, and what is needed to describe things as innovation.
The second one is called: Fad Surfing in the Boardroom (by Eileen Shapiro). It is a history of the various approaches that have been used for managing companies. That is an interesting read as well.
Click here for Part 1 of the Q&A
AT&T rethinks its relationship with networking vendors

“We’ll go with only two players [per domain] and there will be a lot more collaboration.”
Tim Harden, AT&T
AT&T has changed the way it selects equipment suppliers for its core network. The development will result in the U.S. operator working more closely with vendors, and could spark industry consolidation. Indeed, AT&T claims the programme has already led to acquisitions as vendors broaden their portfolios.
The Domain Supplier programme was conjured up to ensure the financial health of AT&T’s suppliers as the operator upgrades its network to all-IP.
By working closely with a select group of system vendors, AT&T will gain equipment tailored to its requirements while shortening the time it takes to launch new services. In return, vendors can focus their R&D spending by seeing early the operator’s roadmap.
“This is a significant change to what we do today,” says Tim Harden, president, supply chain and fleet operations at AT&T. Currently AT&T, like the majority of operators, issues a request-for-proposal (RFP) and typically gets responses from six to ten vendors. A select few are taken into the operator’s labs, where the winning vendor is chosen.
With the new programme, AT&T will work with players it has already chosen. “We’ll go with only two players [per domain] and there will be a lot more collaboration,” says Harden. “We’ll bring them into the labs and go through certification and IT issues.” Most importantly, operator and vendor will “interlock roadmaps”, he says.
The ramifications of AT&T’s programme could be far-reaching. The promotion of several broad-portfolio equipment suppliers into an operator’s inner circle promises them a technological edge, especially if the working model is embraced by other leading operators.
The development is also likely to lead to consolidation. Equipment start-ups will have to partner with domain suppliers if they wish to be used in AT&T’s network, or a domain supplier may decide to bring the technology in-house.
Meanwhile, selling to domain supplier vendors becomes even more important for optical component and chip suppliers.
Domain suppliers begin to emerge
AT&T first started work on the programme 18 months ago. “AT&T is on a five-year journey to an all-IP network and there was a concern about the health of the [vendor] community to help us make that transition, what with the bankruptcy of Nortel,” says Harden. The Domain Supplier programme represents 30 percent of the operator’s capital expenditure.
The operator began by grouping technologies. Initially 14 domains were identified before the list was refined to eight. The domains were not detailed by Harden but he did cite two: wireline access, and radio access including the packet core.
For each domain, two players will be chosen. “If you look at the players, all have strengths in all eight [domains],” says Harden.
AT&T has been discussing its R&D plans with the vendors, and where they have gaps in their portfolios. “You have seen the results [of such discussions] being acted out in recent weeks and months,” says Harden, who did not name particular deals.
In October Cisco Systems announced it planned to acquire IP-based mobile infrastructure provider Starent Networks, while Tellabs is to acquire WiChorus, a maker of wireless packet core infrastructure products. "We are not at liberty to discuss specifics about our customer AT&T,” says a Tellabs spokesperson. Cisco has yet to respond.
Harden dismisses the suggestion that the programme will lead to vendors pursuing too narrow a focus. Vendors will be involved in a longer-term relationship - five years rather than the two or three common with RFPs - and will have an opportunity to earn back their R&D spending. “They will get to market faster while we get to revenue faster,” he says.
The operator is also keen to stress that there is no guarantee of business for a vendor selected as a domain supplier. Two are chosen for each domain to ensure competition. If a domain supplier continues to meet AT&T’s roadmap and has the best solution, it can expect to win business. Harden stresses that AT&T does not require a second-supplier arrangement here.
In September AT&T selected Ericsson as one of the domain suppliers for wireline access, while suppliers for radio access Long Term Evolution (LTE) cellular will be announced in 2010.
