Infinera unveils its next-gen packet-optical platforms

  • Infinera has announced its first major metro product upgrade since it acquired Transmode in 2015.
  • The XTM II platforms use CFP2-DCO pluggable modules for the line-side optics, not Infinera’s photonic integrated circuit (PIC) technology.
  • Infinera’s XTM II achieves new levels of power efficiency by adopting CFP2-DCO pluggables and a distributed switch architecture.


    Infinera has unveiled its latest metro products that support up to 200-gigabit wavelengths using CFP2-DCO pluggable modules.

    The XTM II platform family is designed to support growing metro traffic, low-latency services and the trend to move sophisticated equipment towards the network edge. Placing computing, storage and even switching near the network edge contrasts with the classical approach of backhauling traffic, sometimes deep within the network.

    “If you backhaul everything, you really do not know if it belongs in that part of the network,” says Geoff Bennett, director, solutions and technology at Infinera. Backhauling inherently magnifies traffic whereas operators want greater efficiencies in dealing with bandwidth growth, he says: “This is where the more cloud-like architectures towards the network edge come in.”

    But locating equipment at the network edge means it must fit within existing premises or in installed prefabricated huts where space and the power supplied are constrained.

    “If you are asking service providers to put more complex equipment there, then you need low power utilisation,” says Bennett. “This has been a key piece of feedback from customers we have been asking as to how they want our existing products to evolve in the metro-access.”

     

    Having a distributed switch fabric is a long-term advantage for Infinera

     

    Infinera says its latest XTM II products are eight times denser in terms of transmission capacity while setting a new power-consumption low of 20W-27W per 100 gigabits, depending on the operating temperature (25°C to 55°C). Infinera claims its nearest metro equipment competitor achieves 47W per 100 gigabits.

    Sterling Perrin, principal analyst, optical networking and transport at Heavy Reading, says Infinera has achieved the power-efficient design by using a distributed switch architecture rather than a central switch fabric, and by adopting the CFP2-DCO pluggable module with its low-power coherent DSP.

    “If you have a centralised fabric and you put it into an edge application then for some cases it will be a perfect fit but for many applications, it will be overkill in terms of capacity and hence power,” says Perrin. “Infinera is able to do it in a modular fashion in terms of just how much capacity and power is put in an application.”

    Having a distributed switch fabric is a long-term advantage for Infinera for these applications, says Perrin, whereas competitor vendors will also benefit from the CFP2-DCO for their next designs.

    And even if a competitor uses a distributed design, they will not leapfrog Infinera, says Perrin, although he expects competitors’ designs to come down considerably in power with the adoption of the CFP2-DCO. 

    Infinera has chosen not to use its photonic integrated circuit (PIC) technology for its latest metro platform given the large installed base of XTM chassis that already use pluggable modules. “It would make sense that customers would give feedback that they want a product that has industry-leading performance but which is also backwards compatible,” says Bennett.

    Infinera has said it will evaluate whether its PIC technology will be applied to each new generation of the product line. “So when you get to the XTM III they will have another round looking at it,” says Perrin. “If I were placing bets on the XTM III, I would say they are going to continue down this route [of using pluggables].”

    Perrin expects line-side pluggable technology to continue to progress with companies such as Acacia Communications and the collaboration between Ciena with its WaveLogic DSP technology and several optical module makers.

    “At what point is the PIC going to be better than what is available with the pluggables?” says Perrin. “For this application, I don’t see it.”       

     

    XTM II family

    Infinera has already been shipping upgraded XTM chassis for the last 18 months in advance of the launch of its latest metro cards. The upgraded chassis - the one rack unit (1RU) TM-102/II, the 3RU TM-301/II and the 11RU TM-3000/II - all feature enhanced power management and cooling.

    What Infinera is unveiling now are three cards that boost the capacity and features of the upgraded chassis. The new cards will work with the older generation XTM chassis (without the ‘II’ suffix) as long as a vacant card slot is available and the chassis’ total power supply is not exceeded. This is important given over 30,000 XTM chassis have been deployed.

    The Infinera cards announced are the 400-flexponder, a 200-gigabit muxponder, and the EMXP440 packet-optical transport switch. The distributed switch architecture is implemented using the EMXP440 card.

    Operators will also be offered Infinera’s Instant Bandwidth feature as part of the XTM II whereby they can pay for the line side capacity they use: either 100-gigabit or 200-gigabit wavelengths using the CFP2-DCO. The Instant Bandwidth offered is not the superchannel format available for Infinera’s other platforms that use its PIC but it does offer operators the option of deploying a higher-speed wavelength when needed and paying later.

     

    400G flexponder 

    The flexponder can operate as a transponder and as a muxponder. For a transponder, the client signal and line-side data rate operate at the same data rate. In contrast, a muxponder aggregates lower data-rate client signals for transport on a single wavelength.

    Infinera’s 400-gigabit flexponder card uses four 100 Gigabit Ethernet QSFP28 client interfaces and two 200-gigabit CFP2-DCO pluggable line-side modules. Each CFP2-DCO can transport data at 100 gigabits using polarisation-multiplexed quadrature phase-shift keying (PM-QPSK) modulation or at 200 gigabits using polarisation-multiplexed 16-ary quadrature amplitude modulation (PM-16QAM).

    The 400-gigabit card can thus operate as a transponder when the CFP2-DCO transports at 100 gigabits and as a muxponder when it carries two 100-gigabit signals over a 200-gigabit lambda. Given the card has two CFP2 line-side modules, it can even operate as a transponder and muxponder simultaneously.
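The mode logic described above can be sketched in a few lines. This is an illustrative model only, with hypothetical function names, not Infinera software: each of the card's two CFP2-DCO line ports is classified by its provisioned rate, and the two ports can run different modes at once.

```python
# Illustrative sketch (hypothetical names): how each of a 400G flexponder
# card's two CFP2-DCO line ports can be provisioned independently.

def port_mode(line_rate_gbps: int) -> str:
    """Classify one CFP2-DCO line port by its provisioned rate."""
    if line_rate_gbps == 100:
        # One 100GbE client maps 1:1 onto a PM-QPSK wavelength.
        return "transponder"
    if line_rate_gbps == 200:
        # Two 100GbE clients are aggregated onto one PM-16QAM wavelength.
        return "muxponder"
    raise ValueError("CFP2-DCO line rate is 100G or 200G")

# The two line ports can operate in different modes simultaneously:
card = [port_mode(100), port_mode(200)]
print(card)  # ['transponder', 'muxponder']
```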

    The flexponder card also supports OTN block encryption using the AES-256 symmetric key protocol.

    The flexponder is an upgrade on Infinera’s existing 100-gigabit muxponder card. The eightfold density increase is achieved by using two 200-gigabit line-side ports instead of a single 100-gigabit module, and by halving the width of the line card.

    Using the flexponder card, the TM-102/II chassis has a transport capacity of 400 gigabits, up to 1.6 terabits with the TM-301/II and a total of 4 terabits using the TM-3000/II platform.
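The quoted chassis totals can be sanity-checked with simple arithmetic. The slot counts below are inferred from the capacities stated in the article, not taken from an Infinera datasheet.

```python
# Back-of-the-envelope check of the quoted chassis capacities, assuming
# each 400-gigabit flexponder card occupies one card position.
CARD_GBPS = 400

# Card counts inferred from the article's quoted totals (assumption).
chassis_cards = {"TM-102/II": 1, "TM-301/II": 4, "TM-3000/II": 10}

for name, cards in chassis_cards.items():
    print(f"{name}: {cards * CARD_GBPS / 1000:.1f} Tbit/s")
```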

     

    We can dial back the FEC if you need low latency and don't need the reach

     

    200G muxponder

    The double-width 200G card includes all the electronics needed for multi-service multiplexing. The line-side optics is a single CFP2-DCO module whereas the client side can accommodate two QSFP28s and 12 SFP+ 10-gigabit modules. The card can multiplex a mix of services including 10GbE, 40GbE, and 100GbE; 8-, 16- and 32-gigabit Fibre Channel; OTN and legacy SONET/SDH traffic.

    Other features include support for OTN block encryption using the AES-256 symmetric key protocol.

    The card’s forward error correction performance can also be traded to reduce the traffic latency. “We can dial back the FEC if you need low latency and don't need the reach,” says Bennett.

    OTN add-drop multiplexing can also be implemented by pairing two of the multiplexer cards.

     

    EMXP440 switch and flexible open line system

    The EMXP440 packet-optical transport switch card supports layer-two functionality such as Carrier Ethernet 2.0 and MPLS-TP. “Mobile backhaul and residential broadband, these are the cards the operators tend to use,” says Bennett.

    The two-slot EMXP440 card has two CFP2-DCOs and 12 SFP+ client-side interfaces. The reason why the line side and client side interface capacity differ (400 gigabits versus 120 gigabits) is that the card can be used to build simple packet rings (see diagram, top).

    The line-side interfaces can be used for ‘East’ and ‘West’ traffic while the SFP+ modules can be used to add and drop signals. The EMXP440 card also has an MPO port such that up to 12 further SFP+ ports can be added using Infinera’s PTIO-10G card, part of its PT Fabric products.

    A flexible grid open line system is also available for the XTM II. The XTM II’s 100-gigabit and 200-gigabit wavelengths fit within a 50GHz-wide fixed grid channel but Infinera is already anticipating future higher baud rates that will require channels wider than 50GHz. A flexible grid also improves the use of the fibre’s overall capacity. In turn, Raman amplification will also be needed to extend the reach using future higher-order modulation schemes such as 32- and 64-QAM.

    Infinera says the 400-gigabit flexponder card will be available in the next quarter while the 200-gigabit muxponder and the EMXP440 cards will ship in the final quarter of 2017.   


    Micro QSFP module to boost equipment port densities

    Twelve companies are developing a compact Quad Small-Form-Factor Pluggable (QSFP) module. Dubbed the Micro QSFP (μQSFP), the module defined by the multi-source agreement (MSA) will improve the port count on a platform's face plate by a third compared with the current QSFP.

     

    Nathan Tracy

    The μQSFP will support both copper and optical cabling, and will have an improved thermal performance, benefitting interfaces and platforms.

    “There is always a quest for greater port density or aggregate bandwidth,” says Nathan Tracy, technologist at TE Connectivity and chair of the μQSFP MSA.

    The challenge for the module makers is to provide denser form factors to increase overall system traffic. “As we go to higher densities, we are also increasing the thermal load,” says Tracy. “And so now it is a mechanical and a thermal [design] problem, and both need to be solved jointly.”

    The thermal load is increased since the μQSFP supports interfaces that consume up to 3.5 W - like the QSFP - while having the width of the smaller SFP rated at 1.5 W. 

    “We are limited in the directions we can pull the heat out,” says Tracy. “If we are going to enable a higher density form factor that has the same width as an SFP but it is going to have the functionality of a QSFP, now we have a problem.”

    This requires the MSA engineers to develop new ways to rid the μQSFP of its heat.

     

    If we are going to enable a higher density form factor that has the same width as an SFP but it is going to have the functionality of a QSFP, now we have a problem

     

    Heat transfer and other challenges

    The volume and surface area of a module determine the overall thermal capacity or thermal density. The module can be modelled as an electrical circuit, with heat flow equivalent to current, while each interface has a thermal resistance.

    There are three interfaces - thermal resistances - associated with a module: between the heat source and the module case, the case and the heat sink, and the heat sink and ambient air. These three thermal resistances are in series and the goal is to reduce them to ensure greater heat flow.

    The module’s circuitry generates heat and the interface between the circuitry and the module’s case is one of the thermal resistances. “You are going to have a heat source in the module and no matter what you do, there is going to be some thermal resistance from that source to the module housing,” says Tracy.

     

    You have to get good signal integrity through that electrical interface because we are working at 25 gigabit-per-second (Gbps) data rates today and we know 50 Gbps data rates are coming

     

    The second thermal resistance - one that the µQSFP eliminates - is between the module housing and the heat sink. Sliding a module into its cage puts it into contact with the heat sink. But the contact between the two surfaces is imperfect, making heat extraction harder.  Building the heat sink into the μQSFP module avoids using the sliding design. 

    The remaining thermal resistance is between the heat sink and the cooling air blown through the equipment. This thermal resistance between the heat sink's metal fin structure and the air flow exists however good the heat sink design, says Tracy. 

    Other design challenges include achieving signal integrity when cramming the four electrical lanes across the µQSFP’s smaller width, especially as it supports 25 Gbit/s lanes and likely 50 Gbit/s lanes in future, says Tracy.

    And the module's optical interface must also support duplex LC and MPO connectors to interoperate with existing cabling.

    “It is all a balancing act,” says Tracy.  

     

    Applications

    The μQSFP is aimed at platforms such as 4.8 and 6.4 Tbps capacity switches. The QSFP is used for current 3.2 Tbps platforms but greater port densities will be needed for these next-generation platforms. The size of the μQSFP means 48 ports will fit in the space 36 QSFPs currently occupy, while 72 μQSFPs will fit on a line card if three rows are used. 
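The density and bandwidth figures quoted above are internally consistent, as a quick check shows. The 100 Gbit/s-per-port figure assumes each port carries four 25 Gbit/s lanes, as described earlier in the article.

```python
# Sanity check of the quoted figures: 48 muQSFP ports in the face-plate
# space of 36 QSFPs is a one-third port-count increase, and 48 ports at
# 4 x 25 Gbit/s each matches a 4.8 Tbit/s switch.
qsfp_ports = 36
uqsfp_ports = 48

print(f"{uqsfp_ports / qsfp_ports - 1:.0%} more ports")  # 33% more ports

lanes, lane_gbps = 4, 25
print(uqsfp_ports * lanes * lane_gbps / 1000, "Tbit/s")  # 4.8 Tbit/s
```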

    The μQSFP may also find use outside the data centre for longer, 100 km reaches. “Today you can buy SFP modules that go 100 km,” says Tracy. “With this form factor, we are creating the capability to go up to four lanes in the same width as an SFP and, at the same time, we are improving the thermal performance significantly over what an SFP can do.”

    The Micro QSFP group is not saying when the µQSFP MSA will be done. But Tracy believes the μQSFP would be in demand were it available now. Its attraction is not just the greater port density, but how the µQSFP would aid systems engineers in tackling their thermal design challenges.

    The pluggable form factor will allow air to flow from the face plate and through the module to where ICs and other circuitry reside. Moreover, since 32 μQSFP ports will take up less face-plate area than 32 QSFPs, perforations could be added, further improving airflow.

    “If you look at the QSFP or SFP, it does not allow airflow through the cage from the front [plate] to the back,” says Tracy.  

    The μQSFP MSA founding members are Avago Technologies, Broadcom, Brocade, Cisco, Dell, Huawei, Intel, Lumentum (formerly JDSU), Juniper Networks, Microsoft, Molex, and TE Connectivity. 

     


    Rafik Ward Q&A - final part

    In the second and final part, Rafik Ward, vice president of marketing at Finisar, discusses Google’s call for a new 100 Gig interface, the ECOC show, and what Finisar has learnt from running a corporate blog.

     

    "Feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive"

    Rafik Ward, Finisar

     

    Q: Why has Finisar acquired Broadway Networks?

    A: We spent quite some time talking to Broadway and understanding their business. We also talked to Broadway’s customers and the feedback we got on the technical team, the products and what this little start-up was able to accomplish was unanimously very positive.

    We think what Broadway has done, for instance their EPON* stick product, is very interesting. With that product, an end user has the ability to make any SFP* port on a low-end Ethernet switch an EPON ONU*  interface. This opens up a whole new set of potential customers and end users for EPON. 

    In reality, consumers will never have Ethernet switches with SFP ports in their house. Where we do see such Ethernet switches are in every major enterprise and many multi-dwelling units. It is an interesting technology that enables enterprises and multi-dwelling units to quickly tool-up for EPON.

    * [EPON - Ethernet passive optical network, SFP - small form-factor pluggable optical transceiver, ONU - optical network unit]

     

    Optical transceivers have been getting smaller and faster in the last decade yet laser and photo-detector manufacturing have hardly changed, except in terms of speed. Is this about to change?

    Speed is one of the focus areas for the industry and will continue to be. Looking forward in a number of applications, though, we are going to hit the limit for these lasers and we are going to have to look more carefully outside of just raw laser speed to move up the data rate curve.

     

    "We are going to hit the limit for these lasers"

     

    A lot of this work has already started on the line side using different modulation formats and DSP* technology. Over time the question is: What happens on the client side? In future, do we look to other modulation formats on the client side? Eventually we will get there; it may take several years before we need to do things like that. But as an industry we would be foolish to think we won’t have to do this.

    WDM* is going to be an increasingly important technology on the client side. We are already seeing this with the 40GBASE-LR4 and 100GBASE-LR4 standards.

    * [DSP - digital signal processing, WDM - wavelength-division multiplexing]

     

    Google gave a presentation at ECOC that argued for the need for another 100Gbps interface. What is Finisar’s view? 

    Feedback we are getting from customers is that the current 100 Gig LR4 modules are too expensive. We have spent a lot of time with customers helping them understand how the current LR4 standard, as written, actually enables a very low-cost optical interface, and we believe the timeframes are very quick in terms of how we can get the cost down considerably on 100 Gig. That was part of the detail that [Finisar’s] Chris Cole also presented at ECOC.

    Rafik Ward (right) giving Glenn Wellbrock, director of backbone network design at Verizon Business, a tour of Finisar's labs

    There has certainly been a lot of media attention on the two [ECOC] presentations between Finisar and Google. This really is not so much about the, quote, ‘drama’, or two companies that disagree over which optical interface makes more sense. It is more fundamental than that.

    What it comes down to is that, as an industry, we have pretty limited resources. The best thing all of us can do is try to direct these resources – this limited pool we have combined throughout the industry - on a path that makes the most sense to reduce bandwidth cost most significantly.

    The best way to do that, and that is already established, is through standards. The [IEEE] standard got it right that the path the industry is on is going to enable the lowest cost 100 Gig [interface]. Like everything, there is some investment required to get us there. The 25 Gig technology now [used as 4x25 Gig] is becoming mainstream and will soon enable the lowest cost solution. My view is that within 18 months to two years this will be a moot point.

    If the technology was available 18 months sooner, we wouldn’t even be having this discussion. But that is the position that we, as an industry, are in. With that, it creates some tensions, some turmoil, where customers don’t like to pay more than they perceive they have to.

     

    There is the CFP form factor that is relatively large. Is the point that if current technology was available 18 months ago, 100Gbps could have come out in a QSFP?

    The heart of the debate is cost.

    There are other elements that always play into a debate like this. Beyond the cost argument, there is how quickly two optical interfaces, such as 4x25 Gig versus 10x10 Gig, can each enable a smaller form-factor solution.

    But I think that is secondary. Had we not had the cost problem that we have now between 4x25 Gig versus 10x10 Gig, I don’t think we would be talking about it.

     

    So it’s the current cost of the 4x25 Gig that is the issue?

    Correct.

     

    In September, the ECOC conference and exhibition was held. What were your impressions and did you detect any interesting changes?

    There wasn’t so much an overwhelming theme this year at ECOC. At ECOC 2009, it was the year of coherent detection. This year there wasn’t a theme that resonated strongly throughout.

    The mood was relatively upbeat. From our perspective, ECOC seemed a little bit smaller in terms of the size of the floor. But all the key people you would expect to be at the show were there.

    Maybe the strongest theme – and I wrote about this in my blog – was colourless, directionless, contentionless (CDC) [ROADMs]. I think what I said is that they should have renamed it not ECOC but the ECDC show.

     

    "A blog ... enables a much more informal mechanism to communicate to a broad audience."

     

    Do you read business books and is there one that is useful for your job?

    Probably the book I think about the most in my job is Clayton Christensen's The Innovator’s Dilemma.

    He talks about how, when you look at very successful technology companies that have failed, what causes them to fail is often new solutions that come from the very low end of the market.

    A lot of companies, and he cites examples from the disk drive industry, prided themselves on focussing on the high end of the market but ultimately ended up failing because there was a surprise upstart, someone who came in at the market's low end – in terms of performance, cost etc. – that continued to innovate using their low-end architecture, making it suitable for the core market.

    For these large, well-established companies, once they realised they had this competitor, it was too late. 

    I think about that business book probably more than others. It’s a very interesting take on technology and the threat that can be posed to people in high-tech companies.

     

    Your job sounds intensive and demanding. What do you do outside work to relax?

    I’m a big [ice] hockey fan. I’ve been a hockey fan for many years; it’s a pretty intense sport. These days I tend to watch more hockey than I play but I very much enjoy the sport.

    The other thing I started up this year that I had never done before – a little side project – was vegetable gardening. Surprisingly, it ended up taking a lot of my attention and I think it was a good distraction for me.

    It can be quite remarkable, when you have your own little vegetable garden, how often you go and look at its progress. Often, coming home from work, the first thing I’d want to do was see how things were progressing in my vegetable garden.

     

    You are the face of Finisar’s blog. What have you learnt from the experience?

    A blog is an interesting tool to get information out to a broad audience. For companies like Finisar, it serves as a very important communication vehicle that didn’t exist previously.

    In the old days, if you wanted to get information out to a broad group of customers, you either had to meet and communicate that information face-to-face, or via email; very targeted, one customer-at-a-time communication.

    Another way was the press release. A press release was a very easy way to broadcast that information. But the challenge is that not all information that you want to broadcast is suitable for a press release.

    The reason why I really like the blog is that it enables a much more informal mechanism to communicate to a broad audience.

     

    Has it helped your job in any tangible way?

    We found some interesting customer opportunities. These have come in through the blog when we’ve talked about specific products. That hasn’t happened extremely frequently but we have had a few instances. So it’s probably the most tangible thing: we can point to enhanced business because of it.

    But the strength of something like a blog goes much deeper than that, in terms of the communication vehicle it enables.

     

    You have about a year’s experience running a blog. If an optical component company is thinking about starting a blog, what is your advice?

    The best advice I can give to anybody looking to do a blog is that it is something you have to commit to up-front.

    A blog where you don’t continue to refresh the content regularly becomes a tired blog very quickly.  We have made a conscious effort to have updated postings as best we can, on a weekly basis or even more frequently. There are certainly periods where we have gone longer than that but if you look back, in general, we have a wide variety of content that has been refreshed regularly.

    I have to give credit to others - guest bloggers - within the organisation that help to maintain the content. This is critical. I would struggle to keep up with the pace if it was just myself every week.

     


