Europe's first glimpse of a live US baseball game

The Radôme protecting the vast horn antenna

It is rare to visit a museum dedicated to telecoms, never mind one set in beautiful grounds. Nor does it often happen that the visit coincides with an important anniversary for the site.

La Cité des Télécoms, a museum set in 11 hectares of land in Pleumeur-Bodou, Brittany, France, is where the first live TV feed sent by satellite from the US to Europe was received.

The Telstar 1 communications satellite was launched 60 years ago, on July 10, 1962. The first transmission, which included part of a live baseball game from Chicago, followed almost immediately.

By then, a vast horn radio antenna had been constructed and was awaiting the satellite’s first signals. The antenna is housed in the Radôme, an inflated dome-shaped skin that protects it from the weather and is the largest unsupported inflated dome in the world. The antenna is built from 276 tonnes of steel and sits on 4,000 cubic metres of concrete; the bolts holding the structure together alone weigh 10 tonnes.

The antenna continued to receive satellite transmissions until 1985, after which the location was classed as a site of national historical importance. The huge horn antenna is now unique: its twin in the US has been dismantled.

The Cité des Télécoms museum was opened in 1991 and the site is a corporate foundation supported by Orange.

History of telecoms

A visitor to the museum is guided through a history of telecoms.

The tour begins with key figures of telecom such as Samuel Morse, Guglielmo Marconi, Lee de Forest and Thomas Edison. Lesser-known inventors are also included, such as Claude Chappe, who developed a semaphore system that eventually covered all of France.

The tour moves on to the advent of long-distance transmission of messages using telegraphy. Here, a variety of exquisitely polished wooden telegraphy systems are exhibited. Also included are rooms that explain the development of undersea cables and the advent of optical fibre.


In the optical section, an exhibit allows a user to point a laser at different angles to show how internal reflection of an optical fibre always guides the incident light to the receiver.
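The physics behind that exhibit is Snell’s law: light striking the core-cladding boundary at an angle beyond the critical angle is reflected back into the core rather than escaping. A small sketch of the calculation; the refractive indices are typical illustrative values for a silica fibre, not figures from the museum:

```python
import math

def critical_angle_deg(n_core, n_cladding):
    """Incidence angle (measured from the normal) beyond which light
    is totally internally reflected at the core-cladding boundary."""
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative refractive indices for a silica fibre core and cladding
theta_c = critical_angle_deg(1.48, 1.46)   # roughly 80 degrees
```

Because the two indices differ by only a percent or so, the critical angle is large: only light travelling nearly parallel to the fibre axis is guided, which is why the exhibit’s laser reaches the receiver at shallow angles.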

Four video displays expertly explain single-mode fibre, optical amplification, wavelength-division multiplexing, forward error correction, and digital signal processing to the general public.
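Of those topics, forward error correction is the easiest to sketch. The toy scheme below, a repetition code with majority-vote decoding, shows the core idea of spending extra transmitted bits to survive transmission errors; real optical links use far more efficient codes:

```python
def fec_encode(bits):
    """Repetition-3 encode: transmit each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority vote over each group of three received bits;
    corrects any single bit-flip per group."""
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
coded = fec_encode(msg)
coded[4] ^= 1                      # corrupt one bit in transit
assert fec_decode(coded) == msg    # the error is corrected
```

The price is threefold redundancy for single-error protection, which is why practical systems prefer stronger codes that achieve the same protection with far less overhead.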

The digital age

Radio waves and mobile communications follow before the digital world is introduced, starting with George Boole and an interactive display covering Boolean algebra. Other luminaries introduced include Norbert Wiener and Claude Shannon.
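In the spirit of that interactive display, a Boolean identity such as De Morgan’s law can be checked exhaustively, since a two-variable expression has only four truth assignments:

```python
from itertools import product

def tautology(f, arity):
    """True if Boolean function f holds for every combination of inputs."""
    return all(f(*vals) for vals in product([False, True], repeat=arity))

# De Morgan's laws hold for all inputs
assert tautology(lambda a, b: (not (a and b)) == ((not a) or (not b)), 2)
assert tautology(lambda a, b: (not (a or b)) == ((not a) and (not b)), 2)
```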

There is also an impressive collection of iconic computing and communications devices, including an IBM PC, the Apple II, an early MacBook, generations of mobile phones, and France’s effort to computerise the country, the Minitel system, which was launched in 1982 and only closed down in 2012.

The tour ends with interactive exhibits and displays covering the Web, Bitcoin and 5G.

The Radôme

The visit’s highlight is the Radôme.

On entering, you arrive in a recreated office containing 1960s engineering paraphernalia – a technical drawing board, slide rules, fountain pens, and handwritten documents. An engineer (in a video) looks up and explains what is happening in the lead-up to the first transmission.

The horn antenna used to receive the first satellite TV broadcasts from the US.

You then enter the antenna control centre and feel the tension and uncertainty as to whether the antenna will successfully receive the Telstar transmission. From there, you enter the vast dome housing the antenna.

TV displays take you through the countdown to the first successful transmission. Then a video display projected onto the vast ceiling gives a whistle-stop tour of the progress made since 1962: images sent from the moon landing in 1969, live World Cup football matches in 1970 through to telecom developments of the 1980s, 1990s, and 2000s.

The video ends with a glimpse of how telecoms may look in future.

Future of telecoms

The Radôme video is the closest the Cité des Télécoms museum comes to predicting the future, and more would have been welcome.

But perhaps such caution is wise: when you exit the Radôme, a display bordering a circular lawn shows each key year’s telecom highlight from 1987 to 2012.

In 1987, the first optical cable linked Corsica to mainland Europe. The following year the first transatlantic optical cable (TAT-8) was deployed, while Bell Labs demonstrated ADSL in 1989.

The circular lawn display continues. In 1992, the first SMS was sent, followed by the GSM standard in 1993. France Telecom’s national network became digital in 1995. And so it goes, from the iPhone in 2007 to the launch of 4G in Marseille in 2012.

There the display stops. There is no mention of Google, data centres, AI and machine learning, network functions virtualization, open RAN or 6G.


A day out in Brittany

The Radôme and the colossal antenna are a must-see, while the museum does an excellent job of demystifying telecoms. The museum is located on the Pink Granite Coast, a prime tourist destination in Brittany.

Perhaps the museum’s key takeaway is how quickly digitisation and the technologies it has spawned have changed our world.

What lies ahead is anyone’s guess.


OPNFV's releases reflect the evolving needs of the telcos

The Open Platform for NFV (OPNFV) is increasingly focused on supporting cloud-native technologies and the network edge.

Heather Kirksey

The open-source group, part of the Linux Foundation, specialises in the system integration of network functions virtualisation (NFV) technology.

The OPNFV issued Fraser, its latest platform release, earlier this year, while its next release, Gambia, is expected soon.

Moreover, the telcos’ continual need for new features and capabilities means the OPNFV’s work is not slowing down.

“I don’t see us entering maintenance-mode anytime soon,” says Heather Kirksey, vice president, community and ecosystem development, The Linux Foundation and executive director, OPNFV. 

 

Meeting a need

The OPNFV was established in 2014 to address an industry shortfall.  

“When we started, there was a premise that there were a lot of pieces for NFV but getting them to work together was incredibly difficult,” says Kirksey.

Open-source initiatives such as OpenStack, used to control computing, storage, and networking resources in the data centre, and the OpenDaylight software-defined networking (SDN) controller, lacked elements needed for NFV. “No one was integrating and doing automated testing for NFV use cases,” says Kirksey.

 

I don’t see us entering maintenance-mode anytime soon 

 

OPNFV set itself the task of identifying what was missing from such open-source projects to aid their deployment. This involved working with the open-source communities to add NFV features, testing software stacks, and feeding the results back to the groups.  

The nature of the OPNFV’s work explains why it is different from other, single-task, open-source initiatives that develop an SDN controller or NFV management and orchestration, for example. “The code that the OPNFV generates tends to be for tools and installation - glue code,” says Kirksey.

OPNFV has gained considerable expertise in NFV since its founding. It uses advanced software practices and has hardware spread across several labs. “We have a large diversity of hardware we can deploy to,” says Kirksey.

One of the OPNFV’s advanced software practices is continuous integration/continuous delivery (CI/CD). Continuous integration refers to adding code to a software build while it is still being developed, unlike the traditional approach of waiting for a complete software release before starting the integration and testing work. To be effective, however, this requires automated code testing.

Continuous delivery, meanwhile, builds on continuous integration by automating a release’s update and even its deployment. 

“Using our CI/CD system, we will build various scenarios on a daily, two-daily or weekly basis and write a series of tests against them,” says Kirksey, adding that the OPNFV has a large pool of automated tests, and works with code bases from various open-source projects.

Kirksey cites two examples to illustrate how the OPNFV works with the open-source projects.

When OPNFV first worked with OpenStack, the open-source cloud platform took far too long - about 10 seconds - to detect a faulty virtual machine used to implement a network function running on a server. “We had a team within OPNFV, led by NEC and NTT Docomo, to analyse what it would take to be able to detect faults much more quickly,” says Kirksey. 

The result required changes to 11 different open-source projects, while the OPNFV created test software to validate that the resulting telecom-grade fault-detection worked. 

Another example cited by Kirksey was enabling IPv6 support, which required changes to OpenStack, OpenDaylight and FD.io, the fast data plane open-source initiative.

 

The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines

 

OPNFV Fraser 

In May, the OPNFV issued its sixth platform release, dubbed Fraser, which progresses its technology on several fronts.

Fraser offers enhanced support for cloud-native technologies that use microservices and containers, an alternative to virtual machine-based network functions.

The OPNFV is working with the Cloud Native Computing Foundation (CNCF), another open-source organisation overseen by the Linux Foundation. 

CNCF is undertaking several projects addressing the building blocks needed for cloud-native applications. The best known is Kubernetes, used to automate the deployment, scaling and management of containerised applications.
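Kubernetes is declarative: you state the desired state of an application, such as how many container replicas should run, and the system converges the cluster to match. A minimal Deployment manifest, written here as a Python dict rather than the usual YAML; the application and image names are hypothetical:

```python
# Minimal Kubernetes Deployment manifest expressed as a Python dict
# (equivalent to the usual YAML). Names and image are hypothetical.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "vnf-demo"},
    "spec": {
        "replicas": 3,   # desired state: keep three copies running
        "selector": {"matchLabels": {"app": "vnf-demo"}},
        "template": {
            "metadata": {"labels": {"app": "vnf-demo"}},
            "spec": {"containers": [
                {"name": "vnf", "image": "example/vnf:1.0"}  # hypothetical image
            ]},
        },
    },
}

# Scaling is just a change of desired state; Kubernetes does the rest
deployment["spec"]["replicas"] = 5
```

This declarative model is part of what makes containers attractive at the network edge: the same manifest can be applied to many small clusters without per-site procedural scripting.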

“The reason cloud-native is getting a lot of excitement is that it is much more lightweight with its containers versus virtual machines,” says Kirksey. “It means more density of what you can put on your [server] box and that means capex benefits.” 

Meanwhile, for applications such as edge computing, where smaller devices will be deployed at the network edge, lightweight containers and Kubernetes are attractive, says Kirksey.

Another benefit of containers is faster communications. “Because you don’t have to go between virtual machines, communications between containers is faster,” she says. “If you are talking about network functions, things like throughput starts to become important.”

The OPNFV is working with cloud-native technology in the same way it started working with OpenStack. It is incorporating the technology within its frameworks and undertaking proof-of-concept work for the CNCF, identifying shortfalls and developing test software. 

OPNFV has incorporated Kubernetes in all its installers and is adopting other CNCF work such as the Prometheus project used for monitoring. 

“There is a lot of networking work happening in CNCF right now,” says Kirksey. “There are even a couple of projects on how to optimise cloud-native for NFV that we are also involved in.”

OPNFV’s Fraser also enhances carrier-grade features. Infrastructure maintenance work can now be performed without interrupting virtual network functions. 

Also expanded are the metrics that can be extracted from the underlying hardware, while the OPNFV’s Calipso project has added modules for service assurance as well as support for Kubernetes.  

Fraser has also improved the support for testing and can allocate hardware dynamically across its various labs. “Basically we are doing more testing across different hardware and have got that automated as well,” says Kirksey. 

 

Linux Foundation Networking Fund

In January, the Linux Foundation combined the OPNFV with five other open-source telecom projects it is overseeing to create the Linux Foundation Networking Fund (LNF). 

The other five LNF projects are the Open Network Automation Platform (ONAP), OpenDaylight, FD.io, the PNDA big data analytics project, and the SNAS streaming network analytics system.

 

Edge is becoming a bigger and more important use-case for a lot of the operators

 

“We wanted to break down the silos across the different projects,” says Kirksey. There was also overlap with members sitting on several projects’ boards. “Some of the folks were spending all their time in board meetings,” says Kirksey. 

Service provider Orange is using the OPNFV Fraser functional testing framework as it adopts ONAP. Orange used the functional testing to create its first test container for ONAP in one day. Orange also achieved a tenfold reduction in memory demands, going from a 1-gigabyte test virtual machine to a 100-megabyte container. And the operator has used the OPNFV’s CI/CD toolchain for the ONAP work.

By integrating the CI/CD toolchain across projects, OPNFV says it is much easier to incorporate new code on a regular basis and provide valuable feedback to the open source projects.

The next code release, Gambia, could be issued as early as November.

Gambia will offer more support for cloud-native technologies. There is also a need for more work around Layer 2 and Layer 3 networking as well as edge computing work involving OpenStack and Kubernetes. 

“Edge is becoming a bigger and more important use-case for a lot of the operators,” says Kirksey.

OPNFV is also continuing to enhance its test suites for the various projects. “We want to ensure we can support the service providers’ real-world deployment needs,” concludes Kirksey.


The Open ROADM MSA adds new capabilities in Release 2.0

The Multi-Source Agreement (MSA) group for open reconfigurable add-drop multiplexers (ROADMs) expects to publish its second release in the coming months. The latest MSA specifications extend optical reach by including line amplification and add support for flexible grid and lower-speed tributaries with OTN switching.

Xavier Pougnard

The Open ROADM MSA, set up by AT&T, Ciena, Fujitsu and Nokia, is promoting interoperability between vendors’ ROADMs by specifying open interfaces for their control using software-defined networking (SDN) technology. Now, one year on, the MSA has 10 members, equally split between operators and systems vendors.

Orange joined the Open ROADM MSA last July and says it shares AT&T’s view that optical networks lack openness given the proprietary features of the vendors’ systems.

“As service providers, we suffer from lock-in where our networks are composed of equipment from a single vendor,” says Xavier Pougnard, R&D manager for transport networks at Orange Labs. “When we want to introduce another vendor for innovation or economic reasons, it is nearly impossible.”

This is what the MSA group wants to tackle with its open specifications for the data and management planes. The goal is to enable an operator to swap equipment without having to change their control by using a common, open management interface. “Right now, for every new provider, we need IT development for the management of the [network] node,” says Pougnard.

 

As service providers, we suffer from lock-in where our networks are composed of equipment from a single vendor. When we want to introduce another vendor for innovation or economic reasons, it is nearly impossible.

 

MSA status

The Open ROADM MSA has published two data sets as part of its Release 1.2. One set tackles 100-gigabit data plane interoperability by defining what is needed for two line-side transponders to talk to each other. The second set of specifications uses the YANG modelling language to allow the management of the transponders and ROADMs.

The group is now working on Release 2.0, which will enable longer reaches and exploit OTN switching. The specifications will also support flexgrid, whereas Release 1.2 specifies 50 GHz fixed channels only. Release 2.0 is expected to be completed in the second quarter of 2017. “Service providers would like it as soon as possible,” says Pougnard.

Pougnard highlights the speed of development of an open MSA model, with new releases issued every few months, far quicker than traditional standardisation bodies. It was this frustration with the slow pace of the standards bodies that led Orange to join the Open ROADM MSA.

Orange stresses that the Open ROADM will not be used for all dense wavelength-division multiplexing cases. There will be applications which require extended performance where a specific vendor's equipment will be used. “We do specify the use of an FEC [forward error correction] in the specification but there are more powerful FECs that extend the reach for 100-gigabit interfaces,” says Pougnard. But the underlying flexibility offered by the MSA trumps performance.

 

Trials

AT&T detailed in December a network demonstration of the Open ROADM technology. The operator used a 100-gigabit optical wavelength in its Dallas area network to connect two IP-MPLS routers using transponders and ROADMs from Ciena and Fujitsu.

Orange is targeting its own lab trials in the first half of this year using a simplified OpenDaylight SDN controller working with ROADMs from three systems vendors. “We want to showcase the technology and prove the added value of an open ROADM,” says Pougnard. 

Orange is also a member of the Telecom Infra Project (TIP), a venture that includes Facebook and 10 operators to tackle telecom networks from access to the core. The two groups have discussed areas of possible collaboration, but while the Open ROADM MSA wants to promote a single YANG model that includes the amplifiers of the line system, TIP expects there to be more than one model. The two organisations also differ in their philosophies: the Open ROADM MSA concerns itself with the interfaces to the platforms, whereas TIP also tackles the internal design of the platforms.

Coriant, which is a member of TIP and the Open ROADM MSA, is keen for alignment. "As an industry we should try to make sure that certain elements such as open API definitions are aligned between TIP and the Open ROADM MSA," says Uwe Fischer, CTO of Coriant.  

Meanwhile, the Open ROADM MSA will announce another vendor member soon and says additional operators are watching the MSA’s progress with interest.

Pougnard stresses how open developments such as the ROADM MSA require WDM engineers to acquire new skills. “We have a tremendous shift in skills,” he says. “Now they need to work on the automation capability, on YANG modelling and Netconf.” Netconf, the IETF’s network configuration protocol, uses YANG models to enable the management of network devices such as ROADMs.
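Netconf is an XML-encoded protocol, so the kind of RPC an SDN controller sends to a ROADM can be sketched with the standard library alone. A hedged illustration: the Netconf base namespace below is the real one, but the Open ROADM filter element and its namespace are placeholders, not the MSA’s actual YANG model:

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"  # Netconf base namespace

def build_get_config(source="running", filter_xml=None):
    """Build a Netconf <get-config> RPC as an XML string.
    An optional YANG-model subtree filter is passed in as raw XML."""
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": "101"})
    get_cfg = ET.SubElement(rpc, f"{{{NS}}}get-config")
    src = ET.SubElement(get_cfg, f"{{{NS}}}source")
    ET.SubElement(src, f"{{{NS}}}{source}")
    if filter_xml is not None:
        filt = ET.SubElement(get_cfg, f"{{{NS}}}filter")
        filt.append(ET.fromstring(filter_xml))
    return ET.tostring(rpc, encoding="unicode")

# Hypothetical subtree filter; the element name and namespace are invented
rpc = build_get_config(filter_xml="<org-openroadm-device xmlns='urn:example:openroadm'/>")
```

Because every vendor’s device answers the same RPCs against the same YANG model, the controller code above would not need to change when equipment is swapped, which is precisely the lock-in problem the MSA targets.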

