Infinera unveils its next-gen packet-optical platforms
Source: Infinera
Infinera has unveiled its latest metro products that support up to 200-gigabit wavelengths using CFP2-DCO pluggable modules.
The XTM II platform family is designed to support growing metro traffic, low-latency services and the trend to move sophisticated equipment towards the network edge. Placing computing, storage and even switching near the network edge contrasts with the classical approach of backhauling traffic, sometimes deep within the network.
“If you backhaul everything, you really do not know if it belongs in that part of the network,” says Geoff Bennett, director, solutions and technology at Infinera. Backhauling inherently magnifies traffic whereas operators want greater efficiencies in dealing with bandwidth growth, he says: “This is where the more cloud-like architectures towards the network edge come in.”
But locating equipment at the network edge means it must fit within existing premises or in prefabricated huts where space and power are constrained.
“If you are asking service providers to put more complex equipment there, then you need low power utilisation,” says Bennett. “This has been a key piece of feedback from customers we have been asking as to how they want our existing products to evolve in the metro-access.”
Having a distributed switch fabric is a long-term advantage for Infinera
Infinera says its latest XTM II products are eight times denser in terms of transmission capacity while setting a new power-consumption low of 20W-27W per 100 gigabits depending on the operating temperature (25°C to 55°C). Infinera claims its nearest metro equipment competitor achieves 47W per 100 gigabits.
Sterling Perrin, principal analyst, optical networking and transport at Heavy Reading, says Infinera has achieved the power-efficient design by using a distributed switch architecture rather than a central switch fabric and by adopting the CFP2-DCO pluggable module with its low-power coherent DSP.
“If you have a centralised fabric and you put it into an edge application then for some cases it will be a perfect fit but for many applications, it will be overkill in terms of capacity and hence power,” says Perrin. “Infinera is able to do it in a modular fashion in terms of just how much capacity and power is put in an application.”
Having a distributed switch fabric is a long-term advantage for Infinera for these applications, says Perrin, whereas competitor vendors will also benefit from the CFP2-DCO for their next designs.
And even if a competitor uses a distributed design, they will not leapfrog Infinera, says Perrin, although he expects competitors’ designs to come down considerably in power with the adoption of the CFP2-DCO.
Infinera has chosen not to use its photonic integrated circuit (PIC) technology for its latest metro platform given the large installed base of XTM chassis that already use pluggable modules. “It would make sense that customers would give feedback that they want a product that has industry-leading performance but which is also backwards compatible,” says Bennett.
Infinera has said it will evaluate whether its PIC technology will be applied to each new generation of the product line. “So when you get to the XTM III they will have another round looking at it,” says Perrin. “If I were placing bets on the XTM III, I would say they are going to continue down this route [of using pluggables].”
Perrin expects line-side pluggable technology to continue to progress with companies such as Acacia Communications and the collaboration between Ciena with its WaveLogic DSP technology and several optical module makers.
“At what point is the PIC going to be better than what is available with the pluggables?” says Perrin. “For this application, I don’t see it.”
XTM II family
Infinera has already been shipping upgraded XTM chassis for the last 18 months in advance of the launch of its latest metro cards. The upgraded chassis - the one rack unit (1RU) TM-102/II, the 3RU TM-301/II and the 11RU TM-3000/II - all feature enhanced power management and cooling.
What Infinera is unveiling now are three cards that enhance the capacity and features of the enhanced chassis. The new cards will work with the older generation XTM chassis (without the ‘II’ suffix) as long as a vacant card slot is available and the chassis’ total power supply is not exceeded. This is important given over 30,000 XTM chassis have been deployed.
The Infinera cards announced are the 400-flexponder, a 200-gigabit muxponder, and the EMXP440 packet-optical transport switch. The distributed switch architecture is implemented using the EMXP440 card.
Operators will also be offered Infinera’s Instant Bandwidth feature as part of the XTM II whereby they can pay for the line side capacity they use: either 100-gigabit or 200-gigabit wavelengths using the CFP2-DCO. The Instant Bandwidth offered is not the superchannel format available for Infinera’s other platforms that use its PIC but it does offer operators the option of deploying a higher-speed wavelength when needed and paying later.
400G flexponder
The flexponder can operate as a transponder and as a muxponder. For a transponder, the client signal and line-side data rate operate at the same data rate. In contrast, a muxponder aggregates lower data-rate client signals for transport on a single wavelength.
Infinera’s 400-gigabit flexponder card uses four 100 Gigabit Ethernet QSFP28 client interfaces and two 200-gigabit CFP2-DCO pluggable line-side modules. Each CFP2-DCO can transport data at 100 gigabits using polarisation-multiplexing, quadrature phase-shift keying (PM-QPSK) modulation or at 200 gigabits using 16-ary quadrature amplitude modulation (PM-16QAM).
The 400-gigabit card can thus operate as a transponder when the CFP2-DCO transports at 100 gigabits and as a muxponder when it carries two 100-gigabit signals over a 200-gigabit lambda. Given the card has two CFP2 line-side modules, it can even operate as a transponder and muxponder simultaneously.
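The per-port logic described above can be sketched in a few lines. This is an illustrative model only, not Infinera's software: the function names and the rule that a single full-rate client makes a port a transponder are assumptions drawn from the definitions in the text.

```python
# Hypothetical sketch of how a flexponder port's role could be classified.
# Rates follow the article: PM-QPSK carries 100G, PM-16QAM carries 200G.
CFP2_DCO_RATES = {"PM-QPSK": 100, "PM-16QAM": 200}  # Gb/s per line module

def port_mode(line_modulation: str, client_signals_gbps: list) -> str:
    """Classify one CFP2-DCO line port as transponder or muxponder."""
    line_rate = CFP2_DCO_RATES[line_modulation]
    if sum(client_signals_gbps) > line_rate:
        raise ValueError("clients exceed line capacity")
    # Transponder: one client at the full line rate; muxponder: aggregation.
    if len(client_signals_gbps) == 1 and client_signals_gbps[0] == line_rate:
        return "transponder"
    return "muxponder"

# One CFP2-DCO at 100G carrying a single 100GbE client, the other at 200G
# carrying two 100GbE clients - the mixed mode the card supports:
print(port_mode("PM-QPSK", [100]))        # transponder
print(port_mode("PM-16QAM", [100, 100]))  # muxponder
```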
The flexponder card also supports OTN block encryption using the AES-256 symmetric key protocol.
The flexponder is an upgrade on Infinera’s existing 100-gigabit muxponder card. The eightfold density increase is achieved by using two 200-gigabit ports instead of a single 100-gigabit module and by halving the width of the line card.
Using the flexponder card, the TM-102/II chassis has a transport capacity of 400 gigabits, up to 1.6 terabits with the TM-301/II and a total of 4 terabits using the TM-3000/II platform.
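The quoted chassis totals are consistent with simple multiples of the 400-gigabit card. The slot counts below are inferred from those totals, not confirmed by Infinera.

```python
# Back-of-envelope check of the quoted chassis capacities. Slot counts are
# assumptions derived from the stated totals divided by 400G per card.
CARD_CAPACITY_G = 400
slots = {"TM-102/II": 1, "TM-301/II": 4, "TM-3000/II": 10}

for chassis, n in slots.items():
    print(chassis, n * CARD_CAPACITY_G, "Gb/s")  # 400, 1600 and 4000 Gb/s
```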
We can dial back the FEC if you need low latency and don't need the reach
200G muxponder
The double-width 200G card includes all the electronics needed for multi-service multiplexing. The line-side optics is a single CFP2-DCO module whereas the client side can accommodate two QSFP28s and 12 SFP+ 10-gigabit modules. The card can multiplex a mix of services including 10GbE, 40GbE, and 100GbE; 8-, 16- and 32-gigabit Fibre Channel; OTN and legacy SONET/SDH traffic.
Other features include support for OTN block encryption using the AES-256 symmetric key protocol.
The card’s forward error correction performance can also be traded to reduce the traffic latency. “We can dial back the FEC if you need low latency and don't need the reach,” says Bennett.
OTN add-drop multiplexing can also be implemented by pairing two of the multiplexer cards.
EMXP440 switch and flexible open line system
The EMXP440 packet-optical transport switch card supports layer-two functionality such as Carrier Ethernet 2.0 and MPLS-TP. “Mobile backhaul and residential broadband, these are the cards the operators tend to use,” says Bennett.
The two-slot EMXP440 card has two CFP2-DCOs and 12 SFP+ client-side interfaces. The reason why the line side and client side interface capacity differ (400 gigabits versus 120 gigabits) is that the card can be used to build simple packet rings (see diagram, top).
The line-side interfaces can be used for ‘East’ and ‘West' traffic while the SFP+ modules can be used to add and drop signals. The EMXP440 card also has an MPO port such that up to 12 SFP+ further ports can be added using Infinera’s PTIO-10G card, part of its PT Fabric products.
A flexible grid open line system is also available for the XTM II. The XTM II’s 100-gigabit and 200-gigabit wavelengths fit within a 50GHz-wide fixed grid channel but Infinera is already anticipating future higher baud rates that will require channels wider than 50GHz. A flexible grid also improves the use of the fibre’s overall capacity. In turn, Raman amplification will also be needed to extend the reach when using future higher-order modulation schemes such as 32- and 64-QAM.
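A flexible-grid line system of this kind would typically follow the ITU-T G.694.1 numbering, where channel centres sit at 193.1 THz plus multiples of 6.25 GHz and channel widths are multiples of 12.5 GHz. A minimal sketch:

```python
# ITU-T G.694.1 flexible-grid channel numbering: centre frequency is
# 193.1 THz + n * 6.25 GHz, width is m * 12.5 GHz.

def flexgrid_channel(n: int, m: int):
    """Return (centre frequency in THz, width in GHz) for grid indices n, m."""
    centre_thz = round(193.1 + n * 6.25e-3, 6)  # 6.25 GHz granularity
    width_ghz = 12.5 * m
    return centre_thz, width_ghz

# A conventional 50 GHz channel (m=4) at the grid origin, and a 75 GHz
# channel (m=6) for a hypothetical higher-baud-rate carrier:
print(flexgrid_channel(0, 4))  # (193.1, 50.0)
print(flexgrid_channel(8, 6))  # (193.15, 75.0)
```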
Infinera says the 400-gigabit flexponder card will be available in the next quarter while the 200-gigabit muxponder and the EMXP440 cards will ship in the final quarter of 2017.
Ethernet access switch chip boosts service support
The Serval-2 architecture. Source: Vitesse
Vitesse Semiconductor has detailed its latest Carrier Ethernet access switch for mobile backhaul, cloud and enterprise services.
The Serval-2 chip broadens Vitesse's access switch offerings, adding 10 Gigabit Ethernet (GbE) ports while near-tripling the switching capacity to 32 gigabits; the Serval-2 has two 10GbE and twelve 1GbE ports.
The device features Vitesse's service aware architecture (ViSAA) that supports Carrier Ethernet 2.0 (CE 2.0). "We have built a hardware layer into the Ethernet itself which understands and can provision services," says Uday Mudoi, product marketing director at Vitesse.
CE 2.0, developed by the Metro Ethernet Forum (MEF), is designed to address evolving service requirements. The first equipment supporting the technology was certified in January 2013. What CE 2.0 does not do is detail how services are implemented, says Mudoi. Such implementations are the work of the ITU, IETF and IEEE standards bodies with protocols such as Multi-Protocol Label Switching (MPLS)/MPLS-Transport Profile (MPLS-TP) and provider bridging (Q-in-Q). "There is a full set of Carrier Ethernet networking protocols which comes on top of CE 2.0," says Mudoi.
Serval-2 switch
The Serval-2 switch features include 256 Ethernet virtual connections, hierarchical quality of service (QoS), provider bridging, and MPLS/ MPLS-TP.
An Ethernet Virtual Connection (EVC) is a logical representation of an Ethernet service, says Vitesse, a connection that an enterprise, data center or cell site uses to send traffic over the WAN.
Multiple EVCs can run on the same physical interface and can be point-to-point, point-to-multipoint, or multipoint-to-multipoint. Each EVC can have a bandwidth profile that specifies the committed information rate (CIR) and excess information rate (EIR) of the traffic transmitted to, or received from, the Ethernet service provider’s network.
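A CIR/EIR bandwidth profile is usually enforced with a two-bucket policer: frames within the committed rate are marked green, frames within the excess rate yellow, and the remainder red. The sketch below follows that standard pattern; it is an illustration of the concept, not Vitesse's hardware implementation, and the burst sizes (CBS/EBS) are invented.

```python
# Simplified MEF-style bandwidth profile: a two-rate, two-bucket policer.
# Green = within CIR, yellow = excess but within EIR, red = out of profile.

class BandwidthProfile:
    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps, cbs_bytes
        self.eir, self.ebs = eir_bps, ebs_bytes
        self.c_tokens, self.e_tokens = cbs_bytes, ebs_bytes
        self.last = 0.0

    def mark(self, frame_bytes: int, now: float) -> str:
        elapsed = now - self.last
        self.last = now
        # Refill both buckets from their rates, capped at their burst sizes.
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir / 8 * elapsed)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir / 8 * elapsed)
        if frame_bytes <= self.c_tokens:
            self.c_tokens -= frame_bytes
            return "green"       # conforms to CIR
        if frame_bytes <= self.e_tokens:
            self.e_tokens -= frame_bytes
            return "yellow"      # excess, but within EIR
        return "red"             # out of profile: typically dropped

profile = BandwidthProfile(cir_bps=10_000_000, cbs_bytes=1500,
                           eir_bps=5_000_000, ebs_bytes=1500)
# Three back-to-back 1500-byte frames exhaust first the committed bucket,
# then the excess bucket:
print(profile.mark(1500, 0.0))  # green
print(profile.mark(1500, 0.0))  # yellow
print(profile.mark(1500, 0.0))  # red
```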
The EVC also supports one or more classes of service and measurable QoS performance metrics. Such metrics include frame delay - latency - and frame loss, to meet a particular application's performance requirements.
The Serval-2 supports 256x8 class of service (CoS) EVCs, equivalent to over 4,000 bi-directional Ethernet services, says Mudoi.
The Serval-2 also supports per-EVC hierarchical queuing. It allows for 256 bi-directional EVCs with policing, statistics, and QoS guarantees for each CoS and EVC. Hierarchical QoS also enables a mix of any strict or byte-accurate weighting within the EVC, and supports the MEF's dual leaky bucket (DLB) algorithm that shapes traffic per-EVC and per-port.
"Service providers guarantee QoS to subscribers for the services that they buy," says Mudoi. "If each subscriber's traffic - even different applications per-subscriber - is treated using separate queues, then one subscriber's behavior does not impact the QoS of another." Supporting thousands of queues allows service providers to offer thousands of services, each with its own QoS.

Q-in-Q, defined in IEEE 802.1ad, allows for multiple VLAN headers - tags - to be inserted into a frame, says Mudoi, enabling service provider tags and customer tags.
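The double tagging can be shown by building the header bytes directly. This is a minimal sketch of the 802.1ad layout only: the service tag (TPID 0x88A8) precedes the customer tag (TPID 0x8100), and a real frame would also carry a payload and FCS.

```python
import struct

# Minimal 802.1ad (Q-in-Q) header construction: provider S-tag then
# customer C-tag, each a 2-byte TPID plus a 2-byte tag control field.

def vlan_tag(tpid: int, pcp: int, dei: int, vid: int) -> bytes:
    tci = (pcp << 13) | (dei << 12) | vid   # priority, drop-eligible, VLAN ID
    return struct.pack("!HH", tpid, tci)

def qinq_header(dst: bytes, src: bytes, s_vid: int, c_vid: int,
                ethertype: int = 0x0800) -> bytes:
    return (dst + src
            + vlan_tag(0x88A8, 0, 0, s_vid)    # service provider tag
            + vlan_tag(0x8100, 0, 0, c_vid)    # customer tag
            + struct.pack("!H", ethertype))

hdr = qinq_header(b"\x01" * 6, b"\x02" * 6, s_vid=100, c_vid=42)
print(len(hdr))  # 22 bytes: 12 of addresses, two 4-byte tags, 2-byte type
```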
Meanwhile, MPLS/ MPLS-TP direct data from one network node to the next based on shortest path labels rather than on long network addresses, thereby avoiding complex routing table look-ups. The labels identify virtual links between distant nodes rather than endpoints.
MPLS can encapsulate packets of various network protocols. Serval-2's MPLS-TP supports Label Edge Router (LER) with Ethernet pseudo-wires, Label Switch Router (LSR), and H-VPLS edge functions.
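The "shortest path labels rather than long network addresses" point comes down to the 32-bit MPLS label stack entry defined in RFC 3032: a 20-bit label, 3-bit traffic class, a bottom-of-stack flag and an 8-bit TTL. An LSR forwards on the label field alone. A sketch of packing and parsing one entry:

```python
import struct

# One 32-bit MPLS label stack entry (RFC 3032): label(20) | TC(3) | S(1) | TTL(8).

def mpls_entry(label: int, tc: int, bos: bool, ttl: int) -> bytes:
    word = (label << 12) | (tc << 9) | (int(bos) << 8) | ttl
    return struct.pack("!I", word)

def parse_entry(data: bytes):
    (word,) = struct.unpack("!I", data)
    return word >> 12, (word >> 9) & 0x7, bool((word >> 8) & 1), word & 0xFF

entry = mpls_entry(label=16000, tc=5, bos=True, ttl=64)
print(parse_entry(entry))  # (16000, 5, True, 64)
```

Because the label is a fixed-width field, a switch can forward with a simple table index rather than a longest-prefix-match look-up on a full network address.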
Q-in-Q is considered a basic networking function for enterprise and carrier networks, says Mudoi, while MPLS-TP is a more complex protocol.
Serval-2 also supports service activation and Vitesse's implementation of the IEEE 1588v2 timing standard, dubbed VeriTime.
"Before you provision a service, you need to run a test to make sure that once your service is provisioned, the user gets the required service level agreement," says Mudoi. Serval-2 supports the latest ITU-T Y.1564 service activation standard.
IEEE 1588v2 establishes accurate timing across a packet-based network and is used for such applications as mobile. The Serval-2 also benefits from Intellisec, Vitesse's MACsec Layer 2 security standard implementation (see Vitesse's Intellisec).
"Both [Vitesse's VeriTime IEEE 1588v2 and Intellisec technologies] highly complement what we are doing in ViSAA," says Mudoi.
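The core of any IEEE 1588v2 implementation, VeriTime included, is the same four-timestamp exchange: from a Sync message and a Delay_Req message the slave derives its clock offset and the one-way path delay, assuming the path is symmetric.

```python
# The basic IEEE 1588v2 (PTP) calculation. t1: master sends Sync;
# t2: slave receives it; t3: slave sends Delay_Req; t4: master receives it.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Slave clock offset from master and one-way delay (symmetric path)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: a slave clock running 5 us ahead over a 10 us symmetric path.
print(ptp_offset_and_delay(100.0, 115.0, 200.0, 205.0))  # (5.0, 10.0)
```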
Availability
Serval-2 samples will be available in the third quarter of 2013. Vitesse expects it will take six months for system qualification such that Ethernet access devices using the chip and carrying live traffic are expected in the first half of 2014.
P-OTS 2.0: 60s interview with Heavy Reading's Sterling Perrin

Q: Heavy Reading claims the metro packet optical transport system (P-OTS) market is entering a new phase. What are the characteristics of P-OTS 2.0 and what first-generation platform shortcomings does it address?
A: I would say four things characterise P-OTS 2.0 and separate 2.0 from the 1.0 implementations:
- The focus of packet-optical shifts from time-division multiplexing (TDM) functions to packet functions.
- Pure-packet implementations of P-OTS begin to ramp and, ultimately, dominate.
- Switched OTN (Optical Transport Network) enters the metro, removing the need for SONET/SDH fabrics in new elements.
- 100 Gigabit takes hold in the metro.
The last two points are new functions while the first two address shortcomings of the previous generation. P-OTS 1.0 suffered because its packet side was seen as sub-par relative to Ethernet "pure plays" and also because packet technology in general still had to mature and develop - such as standardising MPLS-TP (Multiprotocol Label Switching - Transport Profile).
Your survey's key findings: What struck Heavy Reading as noteworthy?
The biggest technology surprise was the tremendous interest in adding IP/MPLS functions to transport. There was a lot of debate about this 10 years ago. Then the industry settled on a de facto standard that transport includes layers 0-2 but no higher. Now, it appears that the transport definition must broaden to include up to layer 3.
A second key finding is how quickly SONET/SDH has gone out of favour. Going forward, it is all about packet innovation. We saw this shift in equipment revenues in 2012 as SONET/SDH spend globally dropped more than 20 percent. That is not a one-time hit - it's the new trend for SONET/SDH.
Heavy Reading argues that transport has broadened in terms of the networking embraced - from layers 0 (WDM) and 1 (SONET/SDH and OTN) to now include IP/MPLS. Is the industry converging on one approach for multi-layer transport optimisation? For example, IP over dense WDM? Or OTN, Carrier Ethernet 2.0 and MPLS-TP? Or something else?
We did not uncover a single winning architecture and it's most likely that operators will do different things. Some operators will like OTN and put it everywhere. Others will have nothing to do with OTN. Some will integrate optics on routers to save transponder capital expenditure, but others will keep hardware separate but tightly link IP and optical layers via the control plane. I think it will be very mixed.
You talk about a spike in 100 Gigabit metro starting in 2014. What is the cause? And is it all coherent or is a healthy share going to 100 Gigabit direct detection?
Interest in 100 Gigabit in the metro exceeds interest in OTN in the metro - which is different from the core, where those two technologies are more tightly linked.
Cloud and data centre interconnect are the biggest drivers for interest in metro 100 Gig but there are other uses as well. We did not ask about coherent versus direct in this survey, but based on general industry discussions, I'd say the momentum is clearly around coherent at this stage - even in the metro. It does not seem that direct detect 100 Gig has a strong enough cost proposition to justify a world with two very different flavours of 100 Gig.
What surprised you from the survey's findings?
It was really the interest-level in IP functionality on transport systems that was the most surprising find.
It opens up the packet-optical transport market to new players that are strongest on IP and also poses a threat to suppliers that were good at lower layers but have no IP expertise - they'll have to do something about that.
Heavy Reading surveyed 114 operators globally. All those surveyed were operators; no system vendors were included. The regional split was North America - 22 percent, Europe - 33 percent, Asia Pacific - 25 percent, and the rest of the world - Latin America mainly - 20 percent.
Transmode's evolving packet optical technology mix
- Transmode adds MPLS-TP, Carrier Ethernet 2.0 and OTN
- The three protocols make packet transport more mesh-like and service-aware
- The 'native' in Native Packet Optical 2.0 refers to native Ethernet
Transmode has enhanced its metro and regional network equipment to address the operators' need for more efficient and cost-effective packet transport.

“Native Packet Optical 2.0 extends what the infrastructure can do, with operators having the option to use MPLS-TP, Carrier Ethernet 2.0 and OTN, making the network much more service-aware”
Jon Baldry, Transmode
Three new technologies have been added to create what Transmode calls Native Packet Optical 2.0 (NPO2.0). Multiprotocol Label Switching - Transport Profile (MPLS-TP) was launched in June 2012 to which has now been added the Metro Ethernet Forum's (MEF) latest Carrier Ethernet 2.0 (CE2.0) standard. The company will also have line cards that support Optical Transport Network (OTN) functionality from April 2013.
Until several years ago operators had distinct layer 2 and layer 1 networks. “The first stage of the evolution was to collapse those two layers together,” says Jon Baldry, technical marketing director at Transmode. “NPO2.0 extends what the infrastructure can do, with operators having the option to use MPLS-TP, CE2.0 and OTN, making the network much more service-aware.”
By adopting the enhanced capabilities of NPO2.0, operators can use the same network for multiple services. “A ROADM based optical layer with native packet optical at the wavelength layer,” says Baldry. “That could be a switched video distribution network or a mobile backhaul network; doing many different things but all based on the same stuff.”
Transmode uses native Ethernet in the metro and OTN for efficient traffic aggregation. “We are using native Ethernet frames as the payload in the metro,” says Baldry. “A 10 Gig LAN PHY frame that is moved from node to node, once it is aggregated from Gigabit Ethernet to 10 Gig Ethernet; we are not doing Ethernet over SONET/SDH or Ethernet over OTN.”
Shown are the options as to how layer 2 services can be transported and interfaced to multiple core networks. The Ethernet muxponder supports MPLS-TP, native Ethernet and the option for OTN, all over a ROADM-based optical layer. “It is not just a case of interfacing to three core network types, we can be aware of what is going on in these networks and switch traffic between types,” says Transmode's Jon Baldry. Note: EXMP is the Ethernet muxponder. Source: Transmode.
Once the operator no longer needs to touch the Ethernet traffic, it is then wrapped in an OTN frame for aggregation and transport. This, says Baldry, means that unnecessary wrapping and unwrapping of OTN frames is avoided, with OTN being used only where needed.
There are economical advantages in adopting NPO2.0 for an operator delivering layer 2 services. There are also considerable operational advantages in terms of the way the network can be run using MPLS-TP, the service types offered with CE2.0, and how the metro network interworks with the core network, says Baldry.
MPLS-TP and Carrier Ethernet 2.0
Introducing MPLS-TP and the latest CE2.0 standard benefits transport and services in several ways, says Baldry.
MPLS-TP provides better traffic engineering as well as working practices similar to SONET/SDH that operators are familiar with. “MPLS-TP creates a transport-like way of dealing with Ethernet which is good for operators having to move from a layer-1-only world to a packet world,” says Baldry. MPLS-TP is also claimed to have a lower total cost of ownership compared to IP/MPLS when used in the metro.
The protocol is also more suited to the underlying infrastructure. “Quite a lot of the networks we are deploying have MPLS-TP running on top of a ROADM network, which is naturally mesh-like,” says Baldry.
In contrast, Ethernet provides mainly point-to-point and ring-based network protection mechanisms; there is no support for mesh-based restoration. MPLS-TP supports this resiliency option through its mesh-styled ‘tunnelling’. An MPLS-TP tunnel creates a service-layer path over which traffic is sent.
“You can build tunnels and restoration paths through a network in a way that is more suited to the underlying [ROADM-based] infrastructure, thereby adding resiliency when a fibre cut occurs,” says Baldry.
MPLS-TP also benefits service scalability. It is much easier to create a tunnel and its protection scheme and define the services at the end points than to create many individual circuits across the network, each time defining the route and the protection scheme.
“Because MPLS-TP is software-based, we can mix and match MPLS-TP and Ethernet on any port,” says Baldry. “You can use MPLS-TP as much or as little as you like over particular parts of the network.”
The second new technology, the MEF’s Carrier Ethernet 2.0, benefits services. The MEF has extended the range of services available, from three to eight with CE2.0, while improving class-of-service handling and management features.
Transmode says its equipment is CE2.0 compliant and suggests its systems will become CE2.0-certified in the new year.
Hardware
The packet-optical products of Transmode comprise the TM-Series transport platforms and Ethernet demarcation units.
The company's single- and double-slot cards - Ethernet muxponders - fit into the TM-Series transport platforms. The single-slot Ethernet muxponder has ten 1 Gigabit Ethernet (GbE) and two 10GbE interfaces while the double-slot card supports 22 1GbE and two 10GbE interfaces. Transmode also offers 10GbE-only cards: the single-slot card has four 10GbE interfaces and the double-slot card has eight. These cards are software-upgradable to support MPLS-TP and the MEF’s CE2.0.
“In early 2013, we are introducing a couple of new cards – enhanced Ethernet muxponders – with more gutsy processors and optional hardware support for OTN on 10 Gigabit lines,” says Baldry.
The Ethernet demarcation unit, also known as a network interface device (NID), is a relatively small unit that resides for example at a cell site. The unit undertakes such tasks as defining an Ethernet service and performance monitoring. The box or rack mounted units have Gigabit Ethernet uplinks and interface to Transmode’s platforms.
Baldry cites the UK mobile operator, Virgin Media, which is using its platforms for mobile backhaul. Here, the Ethernet demarcation units reside at the cell sites, and at the first aggregation point the 10- or 22-port GbE card is used. These Ethernet muxponder cards then feed 10GbE pipes to the 4- or 8-port 10GbE cards.
“For the first few thousand cell sites there are hundreds of these aggregation points,” says Baldry. “And those aggregation points go back to Virgin Media’s 50-odd main sites and it is at those points we put the 8x 10GbE cards.” Thus the traffic is backhauled from the edge of the network and aggregated before being handed over as a 10GbE circuit to Virgin Media’s various radio network controller (RNC) sites.
Transmode says that half of its customers use its existing native packet optical cards in their networks. Since MPLS-TP and CE2.0 are software options, these customers can embrace these features once they are required.
However, operators will only likely start deploying CE2.0-based services once Transmode’s offering becomes certified.
Further reading:
Transmode's detailed NPO2.0 application note
