Data centre interconnect drives coherent
- NeoPhotonics announced at OFC a high-speed modulator and intradyne coherent receiver (ICR) that support an 800-gigabit wavelength
- It also announced limited availability of its nano integrable tunable laser assembly (nano-ITLA) and demonstrated its pico-ITLA, an even more compact silicon photonics-based laser assembly
- The company also showcased a CFP2-DCO pluggable
NeoPhotonics unveiled several coherent optical transmission technologies at the OFC conference and exhibition held in San Diego last month.
“There are two [industry] thrusts going on right now: 400ZR and data centre interconnect pizza boxes going to even higher gigabits per wavelength,” says Ferris Lipscomb, vice president of marketing at NeoPhotonics.

Ferris Lipscomb
The 400ZR is an interoperable 400-gigabit coherent interface developed by the Optical Internetworking Forum (OIF).
Optical module makers are developing 400ZR solutions that fit within the client-side QSFP-DD and OSFP pluggable form factors, first samples of which are expected by year-end.
800-gigabit lambdas
Ciena and Infinera announced in the run-up to OFC their latest coherent systems - the WaveLogic 5 and ICE6, respectively - that will support 800-gigabit wavelengths. NeoPhotonics announced a micro intradyne coherent receiver (micro-ICR) and modulator components that are capable of supporting such 800-gigabit line-rate transmissions.
NeoPhotonics says its micro-ICR and coherent driver modulator are Class 50 devices that support the symbol rates of 85 to 90 gigabaud required for such a state-of-the-art line rate.
The OIF classification defines categories for devices based on their analogue bandwidth performance. “With Class 20, the 3dB bandwidth of the receiver and the modulator is 20GHz,” says Lipscomb. “With tricks of the trade, you can make the symbol rate much higher than the 3dB bandwidth such that Class 20 supports 32 gigabaud.” Thirty-two gigabaud is used for 100-gigabit and 200-gigabit coherent transmissions.
Class 50 refers to the highest component performance category where devices have an analogue bandwidth of 50GHz. This equates to a baud rate close to 100 gigabaud, fast enough to achieve data transmission rates exceeding a terabit. “But you have to allow for the overhead the forward-error correction takes, such that the usable data rate is less than the total,” says Lipscomb (see table).

Source: Gazettabyte, NeoPhotonics
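As a rough illustration of how such table entries arise, the net rate follows from the symbol rate, the modulation order and the FEC overhead. A minimal sketch, with assumed figures rather than NeoPhotonics' own:

```python
# Back-of-envelope coherent data-rate arithmetic. The FEC overhead and the
# example operating point are illustrative assumptions, not vendor figures.

def net_rate_gbps(symbol_rate_gbaud, bits_per_symbol_per_pol, fec_overhead=0.25):
    """Approximate usable data rate of a dual-polarisation coherent carrier."""
    raw = symbol_rate_gbaud * bits_per_symbol_per_pol * 2  # two polarisations
    return raw / (1 + fec_overhead)  # FEC bits reduce the usable payload

# An assumed Class 50 operating point: ~96 gigabaud with 64QAM (6 bits/symbol)
# puts ~1.15 Tb/s on the line but leaves roughly 920 Gb/s of usable data.
print(round(net_rate_gbps(96, 6), 1))
```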
Silicon photonics-based COSA
NeoPhotonics also announced a 64-gigabaud silicon photonics-based coherent optical subassembly (COSA). The COSA combines the receiver and modulator in a single package that is small enough to fit within a QSFP-DD or OSFP pluggable for applications such as 400ZR.
Last year, the company announced a similar COSA implemented in indium phosphide. In general, it is easier to build higher-speed devices in indium phosphide, says Lipscomb; the performance in silicon photonics is not quite as good, but it can be made good enough.
“It [silicon photonics] is now stretching certainly into the Class 40 [that supports 600-gigabit wavelengths] and there are indications, in certain circumstances, that you might be able to do it in the Class 50.”
Lipscomb says NeoPhotonics views silicon photonics as one more material that complements its indium phosphide, planar lightwave circuit and gallium arsenide technologies. “Our whole approach is that we use the material platform that is best for a certain application,” says Lipscomb.
In general, coherent products for telecom applications take time to ramp in volumes. “With the advent of data centre interconnect, the volume growth is much greater than it ever has been in the past,” says Lipscomb.
NeoPhotonics’ interest in silicon photonics is due to the manufacturing benefits it brings that help to scale volumes to meet the hyperscalers’ requirements. “Whereas indium phosphide has very good performance, the infrastructure is still limited and you can’t duplicate it overnight,” says Lipscomb. “That is what silicon photonics does, it gives you scale.”
NeoPhotonics also announced the limited availability of its nano integrable tunable laser assembly (nano-ITLA). “This is a version of our external cavity ITLA that has the narrowest line width in the industry,” says Lipscomb.
The nano-ITLA can be used as the source for Class 50, 800-gigabit systems and current Class 40, 600-gigabit-per-wavelength systems. It is also small enough to fit within the QSFP-DD and OSFP client-side modules for 400ZR designs. “It is a new compact laser that can be used with all those speeds,” says Lipscomb.
NeoPhotonics also showed a silicon photonics-based pico-ITLA that is even smaller than the nano-ITLA. “The [pico-ITLA’s] optical cavity is now made using silicon photonics so that makes it a silicon photonics laser,” says Lipscomb.
Instead of assembling piece parts, silicon photonics allows the laser to be made as one piece. “It means you can integrate that into the same chip you put your modulator and receiver on,” says Lipscomb. “So you can now put all three in a single COSA, what is called the IC-TROSA.” The IC-TROSA refers to an integrated coherent transmit-receive optical subassembly, defined by the OIF, that fits within the QSFP-DD and OSFP.
Despite the data centre interconnect market’s larger volumes and much faster product uptake, indium phosphide will still be used in many places that require higher optical performance. “But for bulk high-volume applications, there are lots of advantages to silicon photonics,” says Lipscomb.
400ZR and 400ZR+
A key theme at this year’s OFC was the 80km 400ZR. Also of industry interest is the 400ZR+, not an OIF specification but an interface that extends the coherent range to metro distances.
Lipscomb says that the initial market for the 400ZR+ will be smaller than that for the 400ZR, while the ZR+’s optical performance will depend on how much of the power budget is left once the optics is squeezed into a QSFP-DD or OSFP module.
“The next generation of DSP will be required to have a power consumption low enough to do more than ZR distances,” he says. “The further you go, the more work the DSP has to do to eliminate the fibre impairments and therefore the more power it will consume.”
Will not the ZR+ curtail the market opportunity for the 400-gigabit CFP2-DCO that is also aimed at the metro?
“It’s a matter of timing,” says Lipscomb. “The advantage of the 400-gigabit CFP2-DCO is that you can almost do it now, whereas the ZR+ won’t be in volume till the end of 2020 or early 2021.”
Meanwhile, NeoPhotonics demonstrated at the show a CFP2-DCO capable of 100-gigabit and 200-gigabit transmissions.
NeoPhotonics has not detailed the merchant DSP it is using for its CFP2-DCO except to say that it is working with ‘multiple ones’. This suggests it is using the merchant coherent DSPs from NEL and Inphi.
Acacia eyes pluggables as it demos its AC1200 module
The emerging market opportunity for pluggable coherent modules is causing companies to change their strategies.
Ciena is developing and plans to sell its own coherent modules. And now Acacia Communications, the coherent technology specialist, says it is considering changing its near-term coherent digital signal processor (DSP) roadmap to focus on coherent pluggables for data centre interconnect and metro applications.

Source: Gazettabyte
DSP roadmap
Acacia’s coherent DSP roadmap in recent years has alternated between an ASIC for low-power, shorter-reach applications followed by a DSP to address more demanding, long-haul applications.
In 2014, Acacia announced its Sky 100-gigabit DSP for metro applications that was followed in 2015 by its Denali dual-core DSP that powers its 400-gigabit AC-400 5x7-inch module. Then, in 2016, Acacia unveiled its low-power Meru, used within its pluggable CFP2-DCO modules. The high-end 1.2-terabit dual-core Pico DSP used for Acacia’s board-mounted AC1200 coherent module was unveiled in 2017.
“The 400ZR is our next focus,” says Tom Williams, senior director of marketing at Acacia.
The 400ZR standard, promoted by the large internet content providers, is being developed to link switches in separate data centres up to 80km apart. Acacia’s next coherent DSP, following the 400ZR, may also target pluggable applications such as 400-gigabit CFP2-DCO modules that will span metro and metro-regional distances.
“There is a trend to pluggable, not just the 400ZR but the CFP2-DCO [400-gigabit] for metro,” says Williams. “We are still evaluating whether that causes a shift in our overall cadence and DSP development.”
AC1200 trials
Meanwhile, Acacia has announced the results of two transatlantic trials involving its AC1200 module whose production is now ramping.
“There is a trend to pluggable, not just the 400ZR but the CFP2-DCO [400-gigabit] for metro”
In the first trial, Acacia, working with ADVA, transmitted a 300-gigabit signal over a 6,800km submarine cable. The 300-gigabit wavelength occupied a 70GHz channel and used ADVA’s Teraflex technology, part of ADVA’s FSP 3000 CloudConnect platform. Teraflex is a one-rack-unit (1RU) stackable chassis that supports three hot-pluggable 1.2-terabit sleds, each sled incorporating an Acacia AC1200 module.
In a separate trial, the AC1200 was used to send a 400-gigabit signal over 6,600km using the Marea submarine cable. Marea is a joint project between Microsoft, Facebook and Telxius that links the US and Spain. The cable is designed for performance and uses an open line system, says Williams: “It is not tailored to a particular company’s [transport] solution”.
The AC1200 module - 40 percent smaller than the 5x7-inch AC400 module - uses Acacia’s patented Fractional QAM (quadrature amplitude modulation) technology. The technology uses probabilistic constellation shaping that allows for non-integer constellations. “Instead of 3 or 4 bits-per-symbol, you can have 3.56 bits-per-symbol,” says Williams.
Acacia’s Fractional QAM also uses an adaptive baud rate. For the trial, the 400-gigabit wavelength was sent using the maximum baud rate of just under 70 gigabaud. Using the baud rate to the full allows a lower constellation to be used for the 400-gigabit wavelength, thereby achieving the best optical signal-to-noise ratio (OSNR) performance and hence reach.
In a second demonstration using the Marea cable, Acacia demonstrated a smaller-width channel in order to maximise the overall capacity sent down the fibre. Here, a lower baud rate/higher constellation combination was used to achieve a spectral efficiency of 6.41 bits-per-second-per-Hertz (b/s/Hz). “If you built out all the channels [on the fibre], you achieve of the order of 27 terabits,” says Williams.
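The arithmetic behind both Marea demonstrations is straightforward to sketch. In the sketch below, the 3.56 bits-per-symbol, the sub-70-gigabaud rate and the 6.41 b/s/Hz figure come from the article, while the usable optical band is an assumed value:

```python
# Fractional QAM and fibre-capacity arithmetic; the ~4.2 THz band is an
# assumption chosen to illustrate Williams' "order of 27 terabits" figure.

def line_rate_gbps(gbaud, bits_per_symbol_per_pol):
    """Dual-polarisation line rate for a (possibly fractional) constellation."""
    return gbaud * bits_per_symbol_per_pol * 2

def fibre_capacity_tbps(spectral_eff_bps_hz, usable_band_thz):
    """Total fibre capacity if every channel across the band is built out."""
    return spectral_eff_bps_hz * usable_band_thz

print(line_rate_gbps(69.5, 3.56))      # ~495 Gb/s on the line for a 400G payload
print(fibre_capacity_tbps(6.41, 4.2))  # ~26.9 Tb/s, of the order of 27 terabits
```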
Pluggable coherent
The 400ZR will be implemented using the same OSFP and QSFP-DD pluggable modules used for 400-gigabit client-side interfaces. This is why an advanced 7nm CMOS process is needed to implement the 400ZR DSP so that its power consumption will be sufficiently low to meet the modules’ power envelopes when integrated with Acacia’s silicon-photonics optics.
There is also industry talk of a ZR+, a pluggable module with a reach exceeding 80km. “At ECOC, there was more talk about the ZR+,” says Williams. “We will see if it becomes standardised or just additional proprietary performance.”
Another development is the 400-gigabit CFP2-DCO. At present, the CFP2-DCO delivers up to 200-gigabit wavelengths but the standard, as defined by the Optical Internetworking Forum (OIF), also supports 400 gigabits.
Williams says there is a greater urgency to develop the 400ZR than the 400-gigabit CFP2-DCO. “People would like to ramp the ZR pretty close to the timing of the 400-gigabit client-side interfaces,” says Williams. And that is likely to be from mid-2019.
In contrast, the 400-gigabit CFP2-DCO pluggable, while wanted by carriers for metro applications, is not locked to any other infrastructure build-out, says Williams.
T-API taps into the transport layer
The Optical Internetworking Forum (OIF), in collaboration with the Open Networking Foundation (ONF) and the Metro Ethernet Forum (MEF), has tested the second-generation transport application programming interface (T-API 2.0).
SK Telecom's Park Jin-hyo
T-API 2.0 is a standardised interface, released in late 2017 by the ONF, that enables the dynamic allocation of transport resources using software-defined networking (SDN) technology.
The interface has been created so that when a service provider, or one of its customers, requests a service, the required resources including the underlying transport are configured promptly.
The OIF-led interoperability demonstration tested T-API 2.0 in dynamic use cases involving equipment from several systems vendors. Four service providers - CenturyLink, Telefonica, China Telecom and SK Telecom - provided their networking labs, located on three continents, for the testing.
Packets and transport
SDN technology is generally associated with the packet layer but it is also needed for the transport layers, from fibre and wavelength-division multiplexing at Layer 0 through to Ethernet at Layer 2.
Transport SDN differs from packet-based SDN in several ways. Transport SDN sets up dedicated pipes whereas, with packet SDN, a path is established only when packets flow. “When you order a 100-gigabit connection in the transport network, you get 100 gigabits,” says Jonathan Sadler, the OIF’s vice president and Networking Interoperability Working Group chair. “You are not sharing it with anyone else.”
Another difference is that the packet layer, with its manipulation of packet headers, is a digital domain whereas the photonic layer is analogue. “A lot of the details of how a signal interacts with a fibre, with the wavelength-selective switches, and with the different componentry that is used at Layer 0, are important in order to characterise whether the signal makes it through the network,” says Sadler.
T-API 1.0 is a configure and step-away deployment, T-API 2.0 is where the dynamic reactions to things happening in the network become possible
Prior to SDN, control functions resided on a platform as part of a network’s distributed control plane. Each vendor had its own interface between the control and optical domains embedded within its platforms. T-API has been created to expose and standardise that interface such that applications can request transport resources independent of the underlying vendor equipment.
NBI refers to a northbound interface while SBI stands for a southbound interface. Source: OIF.
To fulfil a connection across an operator’s network involves a hierarchy of SDN controllers. An application’s request is first handled by a multi-domain SDN controller that decomposes the request for the various domain controllers associated with the vendor-specific platforms. T-API 2.0’s role is to link the multi-domain controller to the application layer’s orchestrator and also connect the individual domain controllers to the multi-domain SDN controller (see diagram above). T-API is an example of a northbound interface.
The same T-API 2.0 interface is used at both SDN controller levels; what differs is the information each handles. Sadler compares the upper T-API 2.0 interface to a high-level map whereas the individual T-API 2.0 domain interfaces can be seen as maps with detailed ‘local’ data. “Both [interfaces] work on topology information and both direct the setting-up of connections,” says Sadler. “But the way they are doing it is with different abstractions of the information.”
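To make the northbound interface concrete, the sketch below shows how an orchestrator might request a connectivity service from a multi-domain controller over T-API’s RESTCONF-style interface. It is a much-simplified illustration: the controller URL, the UUIDs and the capacity encoding are assumptions loosely based on the ONF’s TAPI YANG models, not a tested request.

```python
# Illustrative only: a simplified T-API-style connectivity-service request.
import requests

CONTROLLER = "https://mdsc.example.net"  # hypothetical multi-domain controller

service = {
    "tapi-connectivity:connectivity-service": [{
        "uuid": "example-service-0001",  # placeholder identifier
        # The two service-interface-points name the A and Z ends of the pipe.
        "end-point": [
            {"service-interface-point": {"service-interface-point-uuid": "sip-a"}},
            {"service-interface-point": {"service-interface-point-uuid": "sip-z"}},
        ],
        # A dedicated transport pipe: the full 100 gigabits, shared with no one.
        "requested-capacity": {"total-size": {"value": 100, "unit": "GBPS"}},
    }]
}

resp = requests.post(
    f"{CONTROLLER}/restconf/data/tapi-common:context/"
    "tapi-connectivity:connectivity-context",
    json=service,
    timeout=30,
)
resp.raise_for_status()  # the controller decomposes the request per domain
```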
T-API 2.0
The ONF developed the first T-API interface as part of its Common Information Model (CIM) work. The interface was tested in 2016 as part of a previous interoperability demonstration involving the OIF and the ONF.
One important shortfall revealed during the 2016 demonstrations, and which has slowed its deployment, is that the T-API 1.0 interface didn’t fully define how to notify an upper controller of events in the lower domains. For example, if a link became congested or, worse, was lost, it couldn’t inform the upper controller to re-route traffic. This has been put right with T-API 2.0.
“T-API 1.0 is a configure and step-away deployment, T-API 2.0 is where the dynamic reactions to things happening in the network become possible,” says Sadler.
When it comes to the orchestrator tying into the transport network, we do believe T-API will be one of the main approaches for these APIs
Interoperability demonstration
In addition to the four service providers, six systems vendors took part in the recent interoperability demonstration: ADVA Optical Networking, Coriant, Infinera, NEC/Netcracker, Nokia and SM Optics.
The recent tests focussed on the performance of the T-API 2.0 interface under dynamic network conditions. Another change since the 2016 tests was the involvement of the MEF. The MEF has adopted and extended T-API as part of its Network Resource Modeling (NRM) and Network Resource Provisioning (NRP) projects, elements of the MEF’s Lifecycle Service Orchestration (LSO) architecture. The LSO allows for service provisioning using T-API extensions that support the MEF’s Carrier Ethernet services.
Three aspects of the T-API 2.0 interface were tested as part of the use cases: connectivity, topology and notification.
Setting up a service requires both connectivity and topology. Topology refers to how a service is represented in terms of the node edge points and the links. Notification refers to the northbound aspect of the interface, pushing information upwards to the orchestrator at the application layer. This allows the orchestrator in a multi-domain network to re-route connectivity services across domains.
The four use cases tested included multi-layer network connections whereby topology information is retrieved from a multi-domain network with services provisioned across domains.
T-API 2.0 was also used to show the successful re-routing of traffic when network conditions change, such as a fault or congestion, or to accommodate maintenance work. Re-routing can be performed within the same layer, such as the IP, Ethernet or optical layer, or, more optimally, across two or more layers. Such a capability promises operators the ability to automate re-routing using SDN technology.
The two other use cases tested during the recent demonstration were the orchestrator performing network restoration across two or more domains, and the linking of data centres’ network functions virtualisation infrastructure (NFVI). Such NFVI interconnect is a complex use case involving SDN controllers using T-API to create a set of wide area networks connecting the NFV sites. The use case set up is shown in the diagram below.
Source: OIF
SK Telecom, one of the operators that participated in the interoperability demonstration, welcomes the advent of T-API 2.0 and says such APIs will allow operators to enable services more promptly.
“It has been difficult to provide services such as bandwidth-on-demand and networking services for enterprise customers enabled using a portal,” says Park Jin-hyo, executive vice president of the ICT R&D Centre at SK Telecom. “These services will be provided within minutes, according to the needs, using the graphical user interface of SK Telecom’s network-as-service platform.”
SK Telecom stresses the importance of open APIs in general as part of its network transformation plans. As well as implementing a 5G Standalone (SA) Core, SK Telecom aims to provide NFV and SDN-based services across its network infrastructure including optical transport, IP, data centres, wired access as well as networks for enterprise customers.
“Our final goal is to open the network itself to enterprise customers via an open API,” says Park. “Our mission is to create 5G-enabled network-slicing-based business models and services for vertical markets.”
Takeaways
The OIF says the use cases have shown that T-API 2.0 enables real-time orchestration and that the main shortcomings identified with the first T-API interface have been addressed with T-API 2.0.
The OIF recognises that while T-API may not be the sole approach available for the industry - the IETF has a separate activity - the successful tests and the broad involvement of organisations such as the ONF and MEF make a strong case for T-API 2.0 as the approach for operators as they seek to automate their networks.
“When it comes to the orchestrator tying into the transport network, we do believe T-API will be one of the main approaches for these APIs,“ says Sadler.
SK Telecom said participating in the interop demonstrations enabled it to test and verify, at a global level, APIs that the operators and equipment manufacturers have been working on. And from a business perspective, the demonstration work confirmed to SK Telecom the potential of the ‘global network-as-a-service’ concept.
Editor note: Added input from SK Telecom on September 1st.
ON2020 rallies industry to address networking concerns
Source: ON2020
The slide shows how router-blade capacity is scaling at 40% annually compared with the 20% growth rate of single-wavelength client interfaces (see chart).
Extrapolating the trend to 2024, router blades will support 20 terabits while client interfaces will only be at one terabit. Each blade will thus require 20 one-terabit Ethernet interfaces. “That is science fiction if you go off today’s technology,” says Peter Winzer, director of optical transmission subsystems research at Nokia Bell Labs and a member of the ON2020 steering committee.
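The 2024 endpoint falls out of simple compound growth. A minimal sketch, assuming starting points of roughly 3.6 terabits per router blade and 400 gigabits per client interface (the growth rates and 2024 endpoints come from the chart; the starting points are assumptions chosen to make the numbers work):

```python
# Compound-growth extrapolation behind the ON2020 chart (starting points assumed).

def extrapolate_tbps(start_tbps, annual_growth, years):
    """Capacity after compounding an annual growth rate over several years."""
    return start_tbps * (1 + annual_growth) ** years

print(extrapolate_tbps(3.6, 0.40, 5))  # router blade: ~19.4 Tb/s by 2024
print(extrapolate_tbps(0.4, 0.20, 5))  # client interface: ~1.0 Tb/s by 2024
```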
This is where ON2020 comes in, he says, to flag up such disparities and focus industry efforts so they are addressed.
ON2020
ON2020 was established in 2016 and is driven by Fujitsu, Huawei, Nokia, Finisar and Lumentum.
The reference to 2020 signifies how the group looks ahead four to five years, while the name is also a play on 20/20 vision, says Brandon Collings, CTO of Lumentum and also a member of the steering committee.
Brandon Collings
ON2020 addresses a void in the industry, says Collings. The Optical Internetworking Forum (OIF) may have a similar conceptual mission but it is more hands-on, focussing on components and near-term implementations. ON2020 looks further out.
“Maybe you could argue it is a two-step process,” says Collings. “First, ON2020 is longer term followed by the OIF’s definition in the near term.”
To build a longer-term view, ON2020 surveyed network operators worldwide, including the largest internet content providers and leading communications service providers.
ON2020 reported its findings at the recent ECOC show under three broad headings: traffic growth and the impact on fibre capacity and interfaces, interconnect requirements, and network management and operations.
Things will have to get cheaper; that is the way things are.
Network management
One key survey finding is the importance network operators attach to software-defined networking (SDN), although the operators are frustrated with the lack of SDN solutions available, forcing them to work with vendors to address their needs.
Peter Winzer
The network operators also see value in white boxes and disaggregation, to lower hardware costs and avoid vendor lock-in. But as with SDN, there are challenges with white boxes and disaggregation.
“Let’s not forget that SDN comes from the big webscales,” says Winzer, companies with abundant software and control experience. Telecom companies don’t have such sophisticated resources.
“This produces a big conundrum for the telecom operators: they want to get the benefits without spending what the webscales are spending,” says Winzer. The telcos also require higher network reliability, which makes their job even harder.
Responding to ON2020’s anonymous survey, the telecom players stress how SDN, disaggregation and the adoption of white boxes will require a change in practices and internal organisation and even the employment of system integrators.
“They are really honest. They say, nice, but we are just overwhelmed,” says Winzer. “It highlights the very important organisational challenges operators are facing.”
Operators are frustrated with the lack of SDN solutions available.
Capacity and connectivity
The webscales and telecom operators were also surveyed about capacity and connectivity issues.
Both classes of operator use 10-terabit links or more and this will soon rise to 40 terabits. The consensus is that the C-band alone is insufficient given their capacity needs.
Those operators with limited fibre want to grow capacity by using the L-band alongside the C-band, while operators with plenty of fibre want to combine fibre pairs - a form of spatial division multiplexing - and use the C and L bands. The implication here is that there is an opportunity for hardware integration, says ON2020.
Network operators use backbone wavelengths at 100, 200 and 400 gigabits. As for service feeds - what ON2020 refers to as granularity - webscale players favour 25 gigabit-per-second (Gbps) whereas telecom operators continue to deal with much slower feeds - 10Mbps, 100Mbps, and 1Gbps.
What can ON2020 do to address the demanding client-interface requirements of IP router blades, referred to in the chart?
Xiang Liu, distinguished scientist, transmission product line at Huawei and a key instigator in the creation of ON2020, says photonic integration and a tighter coupling between photonics and CMOS will be essential to reduce the cost-per-bit and power-per-bit of future client interfaces.
Xiang Liu
“As the investment for developing routers with such throughputs could be unprecedentedly high, it makes sense for our industry to collectively define the specifications and interfaces,” says Liu. “ON2020 can facilitate such an industry-wide effort.”
Another survey finding is that network operators favour super-channels once client interfaces reach 400 gigabits and higher rates. Super-channels are more efficient in their use of the fibre’s spectrum while also delivering operations, administration, and management (OAM) benefits.
The network operators were also asked about their node connectivity needs. While they welcome the features of advanced reconfigurable optical add-drop multiplexers (ROADMs), they don’t necessarily need them all. A typical response is that they will adopt such features if they come practically for free.
This, says Winzer, is typical of carriers. “Things will have to get cheaper; that is the way things are.”
Photonic integration and a tighter coupling between photonics and CMOS will be essential to reduce the cost-per-bit and power-per-bit of future client interfaces
Future plans
ON2020 is still seeking feedback from additional network operators, the survey questionnaire being available for download on its website. “The more anonymous input we get, the better the results will be,” says Winzer.
Huawei’s Liu says the published findings are just the start of the group’s activities.
ON2020 will conduct in-depth studies on such topics as next-generation ROADM and optical cross-connects; transport SDN for resource optimisation and multi-vendor interoperability; 5G-oriented optical networking that delivers low latency, accurate synchronisation and network slicing; new wavelength-division multiplexing line rates beyond 200 gigabit; and optical link technologies beyond just the C-band and new fibre types.
ON2020 will publish a series of white papers to stimulate and guide the industry, says Liu.
The group also plans to provide input to standardisation organisations to enhance existing standards and start new ones, create proof-of-concept technology demonstrators, and enable multi-vendor interoperable tests and field trials.
Discussions have started for ON2020 to become an IEEE Industry Connections programme. “We don’t want this to be an exclusive club of five [companies],” says Winzer. “We want broad participation.”
The OIF’s 400ZR coherent interface starts to take shape
Part 2: Coherent developments
The Optical Internetworking Forum’s (OIF) group tasked with developing two styles of 400-gigabit coherent interface is now concentrating its efforts on one of the two.
When first announced last November, the 400ZR project planned to define a dense wavelength-division multiplexing (DWDM) 400-gigabit interface and a single wavelength one. Now the work is concentrating on the DWDM interface, with the single-channel interface deemed secondary.
Karl Gass"It [the single channel] appears to be a very small percentage of what the fielded units would be,” says Karl Gass of Qorvo and the OIF Physical and Link Layer working group vice chair, optical, the group responsible for the 400ZR work.
The likelihood is that the resulting optical module will serve both applications. “Realistically, probably both [interfaces] will use a tunable laser because the goal is to have the same hardware,” says Gass.
The resulting module may also only have a reach of 80km, shorter than the original goal of up to 120km, due to the challenging optical link budget.
Origins and status
The 400ZR project began after Microsoft and other large-scale data centre players such as Google and Facebook approached the OIF to develop an interoperable 400-gigabit coherent interface they could then buy from multiple optical module makers.
The internet content providers’ interest in an 80km-plus link is to connect premises across the metro. “Eighty kilometres is the magic number from a latency standpoint so that multiple buildings can look like a single mega data centre,” says Nathan Tracy of TE Connectivity and the OIF’s vice president of marketing.
Since then, traditional service providers have shown an interest in 400ZR for their metro needs. The telcos’ requirements differ from those of the data centre players, causing the group to tweak the channel requirements. This is the current focus of the work, with the OIF collaborating with the ITU.
The catch is how much can we strip everything down and still meet a large percentage of the use cases
“The ITU does a lot of work on channels and they have a channel measurement methodology,” says Gass. “They are working with us as we try to do some division of labour.”
The group will choose a forward error correction (FEC) scheme once there is common agreement on the channel. “Imagine all those [coherent] DSP makers in the same room, each one recommending a different FEC,” says Gass. “We are all trying to figure out how to compare the FEC schemes on a level playing field.”
Meeting the link budget is challenging, says Gass, which is why the link might end up being 80km only. “The catch is how much can we strip everything down and still meet a large percentage of the use cases.”
The cloud is the biggest voice in the universe
400ZR form factors
Once the FEC is chosen, the power envelope will be fine-tuned and then the discussion will move to form factors. The OIF says it is still too early to discuss whether the project will select a particular form factor. Potential candidates include the OSFP MSA and the CFP8.
Nathan Tracy
The industry assumption is that the 80km-plus 400ZR digital coherent optics module will consume around 15W, requiring a very low-power coherent DSP that will be made using 7nm CMOS.
“There is strong support across the industry for this project, evidenced by the fact that project calls are happening more frequently to make the progress happen,” says Tracy.
Why the urgency? “The cloud is the biggest voice in the universe,” says Tracy. To support the move of data and applications to the cloud, the infrastructure has to evolve, leading to the data centre players linking smaller locations spread across the metro.
“At the same time, the next-gen speed that is going to be used in these data centres - and therefore outside the data centres - is 400 gigabit,” says Tracy.
OIF prepares for virtual network services
The Optical Internetworking Forum has begun specification work for virtual network services (VNS) that will enable customers of telcos to define their own networks. VNS will enable a user to define a multi-layer network (layer-1 and layer-2, for now) more flexibly than existing schemes such as virtual private networks.
Vishnu Shukla"Here, we are talking about service, and a simple way to describe it [VNS] is network slicing," says OIF president, Vishnu Shukla. "With transport SDN [software-defined networking], such value-added services become available."
The OIF work will identify what carriers and system vendors must do to implement VNS. Shukla says the OIF already has experience working across multiple networking layers, and is undertaking transport SDN work. "VNS is a really valuable extension of the transport SDN work," says Shukla.
The OIF expects to complete its VNS Implementation Agreement work by year-end 2015.
Meanwhile, the OIF's Carrier Working Group has published its recommendations document, entitled OIF Carrier WG Requirements for Intermediate Reach 100G DWDM for Metro Type Applications, that provides input for the OIF's Physical Link Layer (PLL) Working Group.
The PLL Working Group is defining the requirements needed for a compact, low-cost and low-power 100 Gig interface for metro and regional networks. This is similar to the OIF work that successfully defined the first 100 Gig coherent modules in a 5x7-inch MSA.
The Carrier Working Group report highlights key metro issues facing operators. One is the rapid growth of metro traffic which, according to Cisco Systems, will surpass long-haul traffic in 2014. Another is the change metro networks are undergoing. The metro is moving from a traditional ring to a mesh architecture with the increasing use of reconfigurable optical add/drop multiplexers (ROADMs). As a result, optical wavelengths have further to travel and must contend with passing through more ROADM stages and more fibre-induced signal impairments.
Shukla stresses there are differences among operators as to what is considered a metro network. For example, metro networks in North America span 400-600km typically and can be as much as 1,000km. In Europe such spans are considered regional or even long-haul networks. Metro networks also vary greatly in their characteristics. "Because of these variations, the requirements on optical modules varies so much, from unit to unit and area to area," says Shukla.
Given these challenges, operators want a module with sufficient optical performance to contend with the ROADM stages, and variable distances and network conditions encountered. "Sometimes we feel that the requirements [between metro and long-haul] won't be that much [different]," says Shukla. Indeed, the Carrier Working Group report discusses how the boundaries between metro and long-haul networks are blurring.
Yet operators also want such robust optical module performance at a greatly reduced price. One of the report's listed requirements is the need for the 100 Gig intermediate-reach interfaces to cost 'significantly' less than the cheapest long-haul 100 Gig.
To this end, the report recommends that 100 Gig pluggable optical modules such as the CFP or CFP2 be used. Standardising on industry-accepted pluggable MSAs will drive down cost, as happened with the introduction of 100 Gig long-haul 5x7-inch MSA modules.
Metro and regional coherent interfaces will also allow the specifications to be relaxed in terms of the DSP-ASIC requirements and the modulation schemes used. "When we come to the metro area, chances are that some of the technologies can be done more simply, and the cost will go down," says Shukla. Using pluggables will also increase 100 Gig line card densities, further reducing cost, while the report also favours the DSP-ASIC being integrated into the pluggable module, where possible.
Contributors to the Carrier Working Group report include representatives from China Telecom, Deutsche Telekom, Orange, Telus and Verizon, as well as module maker Acacia.
OIF defines carrier requirements for SDN
The Optical Internetworking Forum (OIF) has achieved its first milestone in defining the carrier requirements for software-defined networking (SDN).

The orchestration layer will coordinate the data centre and transport network activities and give easy access to new applications
Hans-Martin Foisel, OIF
The OIF's Carrier Working Group has begun the next stage, a framework document, to identify missing functionalities required to fulfill the carriers' SDN requirements. "The framework document should define the gaps we have to bridge with new specifications," says Hans-Martin Foisel of Deutsche Telekom, and chair of the OIF working group.
There are three main reasons why operators are interested in SDN, says Foisel. SDN offers a way for carriers to optimise their networks more comprehensively than before: not just the network but also processing and storage within the data centre.
"IP-based services and networks are making intensive use of applications and functionalities residing in the data centre - they are determining our traffic matrix," says Foisel. The data centre and transport network need to be coordinated and SDN can determine how best to distribute processing, storage and networking functionality, he says.
SDN also promises to simplify operators' operational support systems (OSS) software, and separate the network's management, control and data planes to achieve new efficiencies.
SDN architecture
The OIF's focus is on Transport SDN, involving the management, control and data plane layers of the network. Also included is an orchestration layer that will sit above the data centre and transport network, overseeing the two domains. Applications then reside on top of the orchestration layer, communicating with it and the underlying infrastructure via a programmable interface.
"Aligning the thinking among different people is quite an educational exercise, and we will have to get to a new understanding"
"The orchestration layer will coordinate the data centre and transport network activities and give, northbound, easy access to new applications," says Foisel.
A key SDN concept is programmability and application awareness, he says. The orchestration layer will require specified interfaces to ease the adding of applications independent of whether they impact the data centre, transport network or both.
Foisel says the OIF work has already highlighted the breadth of vision within the industry regarding how SDN should look. "Aligning the thinking among different people is quite an educational exercise, and we will have to get to a new understanding," he says.
Having equipment prototypes is also helping in understanding SDN. "Implementations that show part of this big picture - it is doable, it is working and how it is working - is quite helpful," says Foisel.
The OIF Carrier Working Group is working closely with the Open Networking Foundation's (ONF) Optical Transport Working Group to ensure that the two groups are aligned. The ONF's Optical Transport Group is developing optical extensions to the OpenFlow standard.


