Overcoming AI’s greatest challenges through accelerated parallel learning

"Deploying edge AI as containerised cloud endpoints is inherently flawed."

In 1967, buoyed by rapid progress in early rule-based algorithms, Marvin Minsky, head of MIT's AI laboratory, confidently wrote that "within a generation, the problems of creating genuine 'artificial intelligence' will be substantially solved." Instead, by the mid-1970s, it was clear that transplanting AI successes from the lab into the real world was not as straightforward as AI's early evangelists had hoped, writes Daniel Warner, CEO, LGN. Today, with the rise of edge AI, ubiquitous real-world AI deployment is closer than it has ever been. However, important challenges remain.

Many of the obstacles that, following Minsky's premature prediction, catalysed the first of several AI winters still inhibit AI deployment today. Long-standing brakes on adoption, such as unmet customer expectations, outsized training and development costs, and the inherent difficulty of operating data-intensive models in the real world, continue to slow AI's momentum at "the edge." Fortunately, the same issues currently holding back edge AI may also be at the heart of AI's next paradigm shift: accelerated parallel learning.

The Challenge of Real-World Edge AI

Capable of empowering real-world devices with low-latency, secure, and adaptive decision-making, AI that runs autonomously within edge devices has limitless potential. Once a futuristic idea, edge AI is already used or being trialled in an array of sectors ranging from agri-tech to postal services. Yet while edge AI is a potential boon to countless industries, running technologies like deep learning and natural language processing inside real-world devices creates challenges. Variable environments, messy data, and bandwidth restrictions can turn deploying edge AI at scale into a technological and financial headache.

Data processing limits are an inherent factor of integrating AI into devices like drones, mobile phones, and cars, and they place natural restrictions on processing capability. With onboard memory and compute severely constrained by the hardware, inference times lag and on-device training becomes a challenge.
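
To make the memory constraint concrete, here is a minimal back-of-the-envelope sketch in Python. The 25-million-parameter model is hypothetical, and the fp16 and int8 rows illustrate quantisation, one common way of squeezing weights into limited onboard memory; the figures are illustrative assumptions rather than benchmarks:

    # Rough memory arithmetic for an on-device model.
    # The 25M-parameter figure is hypothetical and purely illustrative.
    PARAMS = 25_000_000

    for precision, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
        megabytes = PARAMS * bytes_per_param / 1e6
        print(f"{precision}: {megabytes:.0f} MB of weights")  # 100, 50, 25 MB

Even at int8, tens of megabytes of weights compete with everything else running on a phone or drone, which is why inference speed and trainability suffer on constrained hardware.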

On the other hand, deploying edge AI as containerised cloud endpoints is inherently flawed. In real-world environments, the several hundred milliseconds it takes for a decision to travel from device to cloud and back is an order of magnitude too long. As well as incurring unavoidable lag, edge AI in the cloud inevitably results in spiralling cloud computing costs, particularly when it comes to monitoring and reporting data.
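
As a back-of-the-envelope comparison of that gap, the sketch below assumes a 300 ms round trip (in line with the "several hundred milliseconds" above) against a hypothetical 20 ms on-device inference, and asks how many sequential decisions a control loop could make per second:

    # Illustrative latency arithmetic. The 300 ms round trip echoes the
    # "several hundred milliseconds" quoted above; the 20 ms on-device
    # figure is an assumption made purely for comparison.
    CLOUD_ROUND_TRIP_MS = 300.0
    ON_DEVICE_INFERENCE_MS = 20.0

    def decisions_per_second(latency_ms: float) -> float:
        """Sequential decisions a tight control loop can make per second."""
        return 1000.0 / latency_ms

    print(f"cloud endpoint: {decisions_per_second(CLOUD_ROUND_TRIP_MS):.1f}/s")    # ~3.3/s
    print(f"on device:      {decisions_per_second(ON_DEVICE_INFERENCE_MS):.1f}/s")  # 50.0/s

For a car or drone reacting to its surroundings, roughly three decisions per second is simply not enough.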

Networked AI Turns These Challenges Into Virtues

As any AI model is only as good as the data it receives and can process, the endlessly messy, confusing, and variable nature of the real world is the worst possible operating environment. Outside the lab, critical data, whether from IoT sensors or user input, is often incomplete, missing fields, or full of outliers. Far from theoretical, these problems can even cause fatal accidents when human users overestimate the abilities of edge AI devices such as autonomous cars.
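
The article does not prescribe a remedy, but a common first line of defence is to screen readings before they ever reach a model. A minimal sketch, assuming plain numeric sensor readings and using a simple median-absolute-deviation filter (all values hypothetical):

    import statistics

    def drop_outliers(readings, max_dev=3.0):
        """Discard readings more than max_dev scaled MADs from the median."""
        med = statistics.median(readings)
        mad = statistics.median(abs(r - med) for r in readings) or 1e-9
        return [r for r in readings if abs(r - med) / (1.4826 * mad) <= max_dev]

    raw = [20.1, 19.8, 20.3, None, 87.5, 20.0]        # a dropout and a spike
    clean = drop_outliers([r for r in raw if r is not None])
    print(clean)  # [20.1, 19.8, 20.3, 20.0]; the 87.5 spike is discarded

Filtering like this keeps a bad sensor from poisoning a model, but it cannot conjure the missing reading back, which is why retraining on imperfect data remains unavoidable.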

Continuously retraining models to make decisions in critical situations when data is imperfect is essential, but it remains a core challenge. As edge AI fleets expand, the challenge grows with them: an organisation's ability to deploy and train devices can be inversely related to how many devices it runs. This appears to add up to a case of "more edge AI, more problems", yet by networking edge AI devices, the opposite becomes true.

Networked AI turns edge devices into an interconnected, collaborating network of AI agents, allowing them to improve their understanding of the real world by themselves, a process known as accelerated parallel learning. By sharing information between sensors, devices, and models in real time, networked edge AI dramatically reduces training time, bandwidth costs, and the overall expense of adopting the technology.
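
The piece does not spell out the mechanism behind accelerated parallel learning, but the best-known published technique in this spirit is federated averaging, in which devices train locally and share compact model updates instead of raw data. A minimal sketch, with linear regression standing in for a real model and every name and figure a hypothetical:

    import numpy as np

    def local_update(weights, X, y, lr=0.01):
        """One gradient step of linear regression on a device's private data."""
        grad = 2 * X.T @ (X @ weights - y) / len(y)
        return weights - lr * grad

    def aggregate(updates):
        """Average the devices' updated weights instead of shipping raw data."""
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    true_w = np.array([3.0, -1.5])    # the pattern every device is trying to learn
    weights = np.zeros(2)             # the shared model, broadcast each round

    for _ in range(200):              # each round: train locally, then merge
        updates = []
        for _ in range(5):            # five devices, each with noisy local data
            X = rng.normal(size=(32, 2))
            y = X @ true_w + rng.normal(scale=0.5, size=32)
            updates.append(local_update(weights, X, y))
        weights = aggregate(updates)

    print(weights)  # converges towards [3.0, -1.5] without pooling raw data

The design point this illustrates is the one made above: the network spends bandwidth on compact weight updates rather than on streaming raw sensor data to the cloud, which is where the training-time and cost savings come from.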

By leveraging the ability of networked AI devices to optimise solutions to problems like data processing limits and data quality, not only does scaling device numbers become easier, but each device also gains genuine resilience and adaptability: critical assets in a world full of the unexpected. By inverting edge AI's scalability problem through networking, accelerated parallel learning transforms the relationship between cost, performance, and scale.

Accelerated Parallel Learning Unlocks a New Paradigm

Turning edge AI's biggest cost centres into opportunities, networked AI undoubtedly has the potential to make near-future edge AI projects more capable, cost-effective, and, ultimately, fit better into real-world business cases. However, the accelerated parallel learning unleashed by networked edge AI may also transform our understanding of how artificial intelligence works as a whole.

As opposed to the task-based, lab-trained applications we see today, networked AI systems will process information better, manage people and resources more efficiently, and make more insightful decisions. That level of capability will transform whole sectors and professions, from education to management, and lead to a massive shakeup of the economic landscape.

With models able to optimise themselves opaquely, networked AI will reimagine the world in ways we cannot yet comprehend. While Minsky's prophecy was ahead of its time in 1967, a similar prediction made today may not be too far off.
