Using an open-source model to spur AI adoption
The Linux Foundation’s (LF) Deep Learning Foundation has set itself the ambitious goal of providing companies with all the artificial intelligence (AI) software they will need.
“Everything AI, we want you to take from open source,” says Eyal Felstaine, a member of the LF Deep Learning governing board and also the CTO of Amdocs. “We intend to have the entire [software] stack.”
The Deep Learning Foundation is attracting telecom operators, large-scale data centre operators and other players. Orange, Ciena, Red Hat, the Chinese ride-sharing firm Didi, and Intel are the latest companies to join the initiative.
The Deep Learning Foundation’s first project is Acumos, a platform for developers to build, share and deploy AI applications. Two further projects have since been added: Angel and Elastic Deep Learning.
Goal
The 'democratisation of data' is what has motivated the founding of the deep-learning initiative, says Felstaine.
A company using a commercial AI platform must put its data in a single repository. “You are then stuck [in that environment],” says Felstaine. Fusing data from multiple sources only exacerbates the issue, since the various datasets must all be uploaded to that one platform.
Using an open-source approach will result in AI software that companies can download for free. “You can run it at your own place and you are not locked into any one vendor,” says Felstaine.
Deep learning, machine learning and AI
Deep learning is associated with artificial neural networks, which are one way to perform machine learning. And just as deep learning is a subset of machine learning, machine learning is a subset of AI, albeit the predominant way AI is undertaken today.
“Forty years ago if you had computer chess, the program’s developers had to know how to play chess,” says Felstaine. “That is AI but it is not machine learning.”
With machine learning, a developer need not know the rules of chess. “The software developer just needs to get the machine to see enough games of chess such that the machine will know how to play,” says Felstaine.
A neural network is composed of interconnected processing units, or neurons. Like AI, it is a decades-old computer science concept, but efficiently executing a neural network spread across processors has long been an issue due to input-output constraints. Now, with the advent of internet content providers and the cloud, not only can huge datasets be used to train neural networks, but the ‘hyper-connectivity’ between the servers’ virtual machines or containers means large-scale neural networks can be used.
Containers offer a more efficient way to run many elements on a server. “The number of virtual machines on a CPU is maybe 12 if you are lucky; with containers, it is several hundred,” says Felstaine. Another benefit of developing an application using containers is that it can be ported across different platforms.
“This [cloud clustering] is a quantitative jump in the enabling technology for traditional neural networks because you can now have thousands and even tens of thousands of nodes [neurons] that are interconnected,” says Felstaine. Running the same algorithms on much larger neural networks has only become possible in the last five years, he says.
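As a rough illustration of what such clustering enables, the sketch below uses TensorFlow’s multi-worker distribution strategy to spread the training of one wide network across a cluster of servers. The cluster configuration, layer sizes and data pipeline are assumptions for illustration and are not taken from any Foundation project.

```python
# Hypothetical sketch: training one large neural network across a cluster
# of workers, in the spirit of the 'cloud clustering' Felstaine describes.
# The cluster membership is supplied via the TF_CONFIG environment variable
# set outside this script; layer sizes are invented for illustration.
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # A deliberately wide network: thousands of interconnected nodes.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4096, activation="relu", input_shape=(1024,)),
        tf.keras.layers.Dense(4096, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Each worker runs the same script; the training data would be sharded
# across the workers automatically, e.g. model.fit(dataset, epochs=10).
```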
Felstaine cites as an example the analysis of X-ray images. Typically, X-rays are examined by a specialist. For AI, the images are first sent to a firm where they are assessed and given a ‘label’. Millions of labelled X-ray images can then be fed to a machine-learning framework such as TensorFlow or H2O. TensorFlow, for example, is open-source software that is readily accessible.
The resulting trained software, referred to as a predictor, is then capable of analysing an X-ray picture and giving a prognosis based on what it has learnt from the dataset of X-rays and labels created by experts. “This is pure machine learning because the person who defined TensorFlow doesn’t know anything about human anatomy,” says Felstaine. On its own, the framework only creates a model: “It’s an empty hollow brain that needs to be taught.”
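A minimal sketch of that workflow, assuming a folder of expert-labelled X-ray images (one sub-folder per label) and a deliberately small network; the directory layout, image size and architecture are invented for illustration and are not part of Acumos or any particular medical system:

```python
# Hypothetical sketch: teaching an 'empty brain' to become a predictor
# from expert-labelled X-ray images. Paths, image size and the network
# architecture are assumptions for illustration.
import tensorflow as tf

# Expert-labelled images, e.g. xray_data/normal/... and xray_data/abnormal/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_data", image_size=(224, 224), batch_size=32)

# The untrained model: the 'empty, hollow brain' that needs to be taught.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. normal / abnormal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The network learns purely from the expert labels; no anatomy knowledge
# is built into the framework itself.
model.fit(train_ds, epochs=5)

# The trained predictor can now score previously unseen X-ray images:
# predictions = model.predict(new_images)
```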
Moreover, the X-ray data could be part of a superset of data from several providers, such as lifestyle habits from a fitness watch, the results of a blood test, and heart data, to create a more complex model. And this is where an open-source framework that avoids vendor lock-in has an advantage.
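As a hedged sketch of what such fusion could look like, the model below combines image-derived X-ray features with tabular health data from several providers; all input names and shapes are invented for illustration.

```python
# Hypothetical sketch: fusing X-ray features with fitness-watch, blood-test
# and heart data in a single model. Input names and sizes are invented.
import tensorflow as tf

xray_features = tf.keras.Input(shape=(128,), name="xray_features")
fitness = tf.keras.Input(shape=(16,), name="fitness_watch")
blood_test = tf.keras.Input(shape=(24,), name="blood_test")
heart = tf.keras.Input(shape=(8,), name="heart_data")

# Concatenate the per-source features and learn over the combined view.
combined = tf.keras.layers.Concatenate()(
    [xray_features, fitness, blood_test, heart])
hidden = tf.keras.layers.Dense(64, activation="relu")(combined)
prognosis = tf.keras.layers.Dense(1, activation="sigmoid", name="prognosis")(hidden)

model = tf.keras.Model(
    inputs=[xray_features, fitness, blood_test, heart], outputs=prognosis)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Because each data source keeps its own input in such a design, providers could in principle contribute features without first forcing everything into a single vendor’s repository.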
Acumos
Acumos started as a collaboration between AT&T and the Indian IT firm, Tech Mahindra, and was contributed to the LF Deep Learning Foundation.
Felstaine describes Acumos as a way to combine, or federate, different AI tools that will enable users to fuse data from various sources “and make one whole out of it”.
There is already an alpha release of Acumos and the goal, as with other open-source projects, is to issue two new software releases a year.
How will such tools benefit telecom operators? Felstaine says AT&T is already using AI to cut costs by helping field engineers maintain its cell towers. A field engineer uses a drone to inspect the towers, and AI analyses the drone’s images to guide the engineer as to what maintenance, if any, is needed.
One North American operator has said it has over 30 AI projects including one that is guiding the operator as to how to upgrade a part of its network to minimise the project's duration and the disruption.
One goal for Acumos is to benefit the Open Network Automation Platform (ONAP) that oversees Network Functions Virtualisation (NFV)-based networks. ONAP is an open-source project that is managed by the Linux Foundation Networking Fund.
NFV is being adopted by operators to help them launch and scale services more efficiently and deliver operational and capital expenditure savings. But operating and managing NFV across a complex telecom network is a challenge to achieving such benefits, says Felstaine.
ONAP already has a Data Collection, Analytics, and Events (DCAE) subsystem which collects data regarding the network’s status. Adding Acumos to ONAP promises a way for machine learning to understand the network’s workings and provide guidance when faults occur, such as the freezing of a virtual machine running a networking function.
With such a fault, the AI could guide the network operations engineer, pointing out the action human operators typically take next and noting, for instance, that it has an 85 percent success rate. It then gives the staff member the option to proceed or not. Ultimately, AI will control the networking actions and humans will be cut out of the loop. “AI as part of ONAP? That is in the future,” says Felstaine.
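Purely as an illustration of that human-in-the-loop pattern (this is not ONAP or Acumos code; the telemetry features, action names and training data are all invented, with a classifier’s probability standing in for a historical success rate), such a recommender might be sketched like this:

```python
# Hypothetical sketch: suggesting a remediation action for a frozen VM,
# with an estimated success rate, then leaving the decision to the engineer.
# Features, actions and training data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

ACTIONS = ["restart_vm", "migrate_vnf", "escalate_to_engineer"]

# Toy stand-in for past incident records (in practice, collected via DCAE):
# telemetry features for each fault and the action that resolved it.
telemetry = np.random.rand(200, 5)
resolved_by = np.random.randint(0, len(ACTIONS), size=200)

model = RandomForestClassifier(random_state=0).fit(telemetry, resolved_by)

def recommend(fault_features):
    """Return the suggested action and its estimated success probability."""
    probs = model.predict_proba([fault_features])[0]
    best = int(np.argmax(probs))
    return ACTIONS[best], probs[best]

action, confidence = recommend(np.random.rand(5))
print(f"Suggested action: {action} (estimated success rate {confidence:.0%})")
proceed = input("Proceed? [y/n] ")  # the human stays in the loop for now
```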
The two new framework projects, Angel and Elastic Deep Learning, have been contributed to the Foundation by the Chinese internet content providers Tencent and Baidu, respectively.
Both projects address scale and how to do clustering. “They are not AI, more ways to distribute and scale neural networks,” says Felstaine.
The Deep Learning Foundation was launched in March by the firms Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa, and ZTE.