University researchers have developed neural networks – computing systems modelled loosely on how the human brain learns – that are adaptable and agile enough to work with compact devices.
Until now, the vast size and scope of neural networks have made it necessary for them to run primarily on servers with extensive memory and power. But a group of Massachusetts Institute of Technology researchers has developed new chip technology that would make the networks energy-efficient enough to operate on smartphones.
First, the MIT group found a way to measure how much energy a neural network consumes on a given device. They then applied that knowledge to design more streamlined neural networks whose energy use is low enough to let them run on smartphones.
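The general idea can be pictured as adding up the cost of each layer's arithmetic and memory traffic. The Python sketch below illustrates that idea only; the layer counts and per-operation energy figures are invented placeholders, not the MIT team's measurements or method.

```python
# Rough, illustrative estimate of a network's energy cost from per-layer
# operation counts. All numbers below are placeholders for illustration.

LAYERS = [
    # (name, multiply-accumulate ops, bytes moved to/from memory)
    ("conv1", 105_000_000, 1_200_000),
    ("conv2", 223_000_000, 2_500_000),
    ("fc1",    16_000_000, 8_000_000),
]

ENERGY_PER_MAC_PJ = 4.6      # placeholder: picojoules per multiply-accumulate
ENERGY_PER_BYTE_PJ = 640.0   # placeholder: picojoules per byte of memory access

def estimate_energy_pj(layers):
    """Sum compute and memory-access energy over all layers."""
    total = 0.0
    for name, macs, mem_bytes in layers:
        layer_energy = macs * ENERGY_PER_MAC_PJ + mem_bytes * ENERGY_PER_BYTE_PJ
        print(f"{name}: {layer_energy / 1e6:.1f} microjoules")
        total += layer_energy
    return total

print(f"total: {estimate_energy_pj(LAYERS) / 1e6:.1f} microjoules")
```

An estimate like this lets a designer compare candidate network architectures and trim the layers that dominate the energy budget before anything is deployed to a phone.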
This advance makes it possible for smartphones to serve a more versatile and useful role in sophisticated IT operations.
Deep neural networks could compress complex data
Other research into neural networks suggests they could master complex computing functions, such as those involved with air traffic control safety.
Researchers at Stanford University, Johns Hopkins University and MIT are using deep neural networks – networks with many layers of processing – to refine a data compression method that might make airborne collision avoidance systems more effective.
Sensors in these next-generation air traffic control systems use data from nearby planes to calculate the best, safest travel route. Because the planes’ changing trajectories cannot be predicted precisely, the calculations must account for many possible trajectories and score the risk of each.
The resulting data table is much too large for existing control systems, and current compression methods can shrink the database only slightly (by a factor of five) without sacrificing reliability. That reliability is critical, since the data serves to prevent collisions. But deep neural networks were able to compress the data table by a factor of 1,000 – and millions of simulations yielded fewer collisions and alerts.
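One way to picture this compression is as training a small network to reproduce a precomputed score table from its inputs, so the full table no longer has to be stored. The Python sketch below shows that general idea on a toy two-variable table; it is not the researchers' actual model, data or state space.

```python
# Illustrative sketch: approximating a large score table with a small neural
# network. The "table" here is a toy function of two inputs; real
# collision-avoidance tables index many more state variables.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Pretend this grid is a precomputed risk-score table keyed by two state variables.
grid_x, grid_y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
scores = np.sin(3 * grid_x) * np.cos(2 * grid_y)       # stand-in for stored scores

X = np.column_stack([grid_x.ravel(), grid_y.ravel()])  # 40,000 table entries
y = scores.ravel()

# A small fully connected network learns to reproduce the table from its keys.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# At runtime, scores are regenerated on demand instead of being looked up.
approx = model.predict(X)
print("mean absolute error vs. original table:", np.abs(approx - y).mean())
```

The storage saving comes from the fact that the network's weights are far smaller than the table they stand in for, while the scores it produces stay close enough to the originals to preserve the system's behaviour.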
“Having this neural network that can represent this gigantic amount of data and compress it by a factor of 1,000 or more opens up the door for a lot of other applications,” says Kyle Julian, a Stanford graduate student who led the research, in a Stanford magazine article.
For example, compressing data may make it easier to use artificial intelligence to analyse data sets.
Machine learning, data power up neural networks
As neural networks handle more data and get better at computing, they can also work with machine learning to perform industry-specific tasks.
“The data and the computational capability are increasing exponentially, and the more data you give these deep-learning networks and the more computational capability you give them, the better the result becomes, because the results of previous machine-learning exercises can be fed back into the algorithms,” data scientist Jeremy Howard said in an interview with McKinsey Quarterly.
Frequently, this makes it easier for organisations to spot patterns or classify data, so they can segment their customers by basic behavioural and demographic characteristics and, in theory, target them more accurately with offers and messaging.
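As a simple illustration of that kind of segmentation, the Python sketch below clusters customers on a few behavioural and demographic features using k-means; the features and values are hypothetical and chosen only to show the mechanics.

```python
# Minimal customer-segmentation sketch: cluster customers on a few invented
# behavioural and demographic features.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: age, monthly spend, visits per month (hypothetical features).
customers = np.array([
    [23,  40.0, 12],
    [31,  55.0,  9],
    [45, 210.0,  2],
    [52, 180.0,  3],
    [19,  25.0, 20],
    [38, 300.0,  1],
])

# Put features on a common scale so no single one dominates the distance metric.
scaled = StandardScaler().fit_transform(customers)

# Group customers into three segments; each row receives a segment label.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(segments)
```

Each segment label can then drive a different offer or message, which is the "more accurate targeting" the paragraph above describes.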
For example, machine learning can gather information from real-time telecommunications network operations – such as the amount and location of traffic, the types of calls being made and who’s making them – to create better calling plans for subscribers.