The next step for AI: Make it smarter with Edge Computing and HCI

As the hyperconverged infrastructure (HCI) market continues to grow exponentially, the next big frontier for HCI now sits at the Edge of the network. Phil White, CTO, Scale Computing, tells us why it’s time for HCI to embrace Edge Computing to enable the growth of AI.

Given the major growth it has seen this year alone, it is unsurprising that the hyperconverged infrastructure (HCI) market is expected to reach US$17.1 billion by 2023. HCI has brought very positive changes to the tech industry, providing benefits such as simplified management, reduced rack space and power consumption, fewer vendors overall and an easy transition to commodity servers.

The next big step for HCI is towards the Edge of the network. Organisations are now turning to HCI and Edge Computing to support high-performance use cases, such as Artificial Intelligence (AI), by capturing data at its source of creation. With the help of HCI and Edge Computing, organisations can harness AI tools for smarter decision-making.

Meet Edge Computing in theory

In an increasingly data-driven world, much of the data is generated outside the traditional data centre. This is Edge Computing: the processing of data outside the traditional data centre, typically at the Edge of a network, on site.

Infrastructure at the Edge, despite its small hardware footprint, is able to collect, process and reduce vast quantities of data so that it can be uploaded to a centralised data centre or the cloud. Rather than sending data across long routes, this allows it to be processed and acted upon closer to the point of creation. Edge Computing has proved key in many use cases, from self-driving cars and quick-service restaurants to grocery stores and industrial settings such as energy plants and mines.
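To make the collect-process-reduce pattern concrete, here is a minimal Python sketch that aggregates a stream of raw readings into compact summaries before upload. The simulated sensor, window size and upload stub are hypothetical illustrations, not any vendor’s API:

import json
import random
import statistics
import time

def read_sensor():
    """Stand-in for a real sensor read; returns a simulated temperature."""
    return 20.0 + random.gauss(0, 2)

def summarise(window):
    """Reduce a window of raw readings to one compact summary record."""
    return {
        "ts": time.time(),
        "count": len(window),
        "mean": statistics.fmean(window),
        "min": min(window),
        "max": max(window),
    }

def upload(record):
    """Hypothetical uplink to the central data centre or cloud."""
    print("uploading:", json.dumps(record))

WINDOW_SIZE = 100  # every 100 raw readings leave the Edge as one record

window = []
for _ in range(300):
    window.append(read_sensor())
    if len(window) >= WINDOW_SIZE:
        upload(summarise(window))  # only the summary crosses the network
        window.clear()

In this toy run, 300 raw readings leave the device as just three summary records, which is the kind of reduction that makes central analysis across many sites practical.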

That said, there are still improvements to be made in how effectively information captured at the Edge is used. AI is still in its infancy and requires an incredible amount of resources to train its models. For training purposes, Edge Computing is best suited to letting information and telemetry flow into the cloud for deep analysis; models trained in the cloud should then be deployed back to the Edge. Cloud and data centres will always be the best resources for model creation.
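A minimal sketch of this train-in-the-cloud, deploy-to-the-Edge loop, assuming scikit-learn for the model, synthetic telemetry and a shared file standing in for a model registry:

import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# Cloud side: train on telemetry gathered from the Edge, publish the model.
rng = np.random.default_rng(0)
telemetry = rng.normal(size=(1000, 4))       # pooled Edge telemetry
labels = (telemetry[:, 0] > 0).astype(int)   # stand-in training labels

model = LogisticRegression().fit(telemetry, labels)
with open("model.pkl", "wb") as f:           # stand-in for a model registry
    pickle.dump(model, f)

# Edge side: load the published model and make decisions locally.
with open("model.pkl", "rb") as f:
    edge_model = pickle.load(f)

reading = rng.normal(size=(1, 4))            # fresh data at the point of creation
print("local decision:", edge_model.predict(reading)[0])

The heavy lifting of fitting the model happens where compute is plentiful, while the Edge only pays the cost of a single prediction.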

And now in practice

Cerebras, a next-generation silicon chip company, has just introduced its new ‘Wafer Scale Engine’, designed specifically for training AI models. With 1.2 trillion transistors and 400,000 processing cores, the new chip is phenomenally fast. However, all of that performance consumes a huge amount of power, which means it isn’t viable for most Edge deployments.

By consolidating Edge Computing workloads on HCI, organisations can create and make better use of data lakes. Once data is in a data lake, it is available to all applications for analysis. On top of this, Machine Learning can provide new insights from data shared across different devices and applications.
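As a toy illustration of this consolidation, the sketch below uses a SQLite file as a stand-in for the shared data lake; the device names and schema are invented for the example:

import sqlite3

# Stand-in data lake: one shared store every application can query.
lake = sqlite3.connect("edge_lake.db")
lake.execute("CREATE TABLE IF NOT EXISTS telemetry "
             "(device TEXT, metric TEXT, value REAL)")

# Summaries consolidated from different Edge devices and applications.
records = [
    ("freezer-03", "temp_c", -18.4),
    ("freezer-07", "temp_c", -17.9),
    ("checkout-01", "queue_len", 4.0),
]
lake.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", records)
lake.commit()

# Any application can now analyse across devices, e.g. fleet-wide averages.
for metric, avg in lake.execute(
        "SELECT metric, AVG(value) FROM telemetry GROUP BY metric"):
    print(metric, "fleet average:", round(avg, 1))
lake.close()

The point is the shared store: once records from many devices land in one place, any application, including a Machine Learning pipeline, can query across all of them.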

HCI creates ease of use by combining servers, storage and networking in one box, eliminating many of the configuration and networking challenges that come with Edge Computing. Additionally, HCI platforms can integrate management for hundreds or thousands of Edge devices across different geographical locations, network types and interfaces. Together, these capabilities avoid much of the complexity and significantly reduce operational expenses.

How does AI benefit from HCI and Edge Computing? 

With the introduction of smart home devices, wearable technology and self-driving cars, AI is becoming much more common and is only set to grow, with an estimated 80% of devices expected to have some form of AI feature by 2022.

Most AI technology relies on the cloud: it makes decisions based on the collection of data stored in the cloud it is accessing. However, because the data has to travel to a data centre and back to the device, every decision incurs latency. Latency is especially problematic for technologies such as self-driving cars, which cannot wait for the round trip of data to know when to brake or how fast to travel; a car moving at 100km/h covers nearly three metres during a 100ms round trip.

A key benefit of Edge Computing for AI is that the necessary data lives locally to the device, which reduces latency. New data can be stored and accessed at the Edge of the device’s network and uploaded to the cloud whenever a connection is available. This greatly benefits AI devices such as smartphones and self-driving cars, which don’t always have access to the cloud due to network availability or bandwidth, yet rely on data processing to make decisions.
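The store-and-forward behaviour described above might look like the following sketch, in which the decision threshold and connectivity checks are purely illustrative:

import collections
import random

class EdgeNode:
    """Toy store-and-forward node: decide locally, sync when a link exists."""

    def __init__(self):
        self.buffer = collections.deque()  # records awaiting upload

    def decide(self, reading):
        # Local inference: the decision needs no cloud round trip.
        decision = "brake" if reading > 0.8 else "cruise"
        self.buffer.append((reading, decision))
        return decision

    def sync(self, connected):
        # Flush buffered records only while the network is reachable.
        while connected and self.buffer:
            print("uploaded:", self.buffer.popleft())  # stand-in cloud call

node = EdgeNode()
for _ in range(5):
    print("decision:", node.decide(random.random()))
    node.sync(connected=random.random() > 0.5)  # connectivity comes and goes
node.sync(connected=True)  # back online: upload everything that queued up

The device keeps making decisions whether or not the uplink is available, and the cloud eventually receives the full record once connectivity returns.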

The combination of HCI and Edge Computing also enables smaller form factors for AI. HCI allows the technology to operate within a smaller hardware design; in fact, some companies are set to launch highly available HCI Edge Compute clusters no bigger than a cup of coffee.

It is time for HCI to embrace Edge Computing; doing so provides benefits that are important to the growth of AI. It also allows the technology to operate without much human involvement, giving AI the opportunity to optimise its Machine Learning models and make smarter decisions more efficiently.

Until now, the cloud has been sufficient in providing AI with the platform it needed to become available on nearly every technological device. From here, it will be the combination of HCI and Edge Computing that gives AI the tools to take its next big step, providing organisations with smarter and faster decision-making.
