Ian Jansen van Rensburg, VMware EMEA Senior Systems Engineer, looks at the rise of Artificial Intelligence and the birth of the next-generation data centre.
Artificial Intelligence has rapidly become a leading driver of innovation, creating competitive advantages and new business opportunities. The proliferation of data is enabling breakthroughs across disparate industries, from transportation and healthcare to energy and communications. However, one of the most profound AI-mediated transformations will occur within the world of enterprise technology.
Based on our extensive experience within the enterprise tech stack, we see three core factors that have created the perfect storm fuelling today’s AI innovation:
- Compute (the need for speed): From CPUs and GPUs through FPGAs and ASICs, computing hardware has advanced enormously in the past few years, allowing us to process data more quickly, more broadly, and more deeply than ever before. In addition, new deployment channels (such as public cloud GPUs/ASICs) allow customers to balance Capex against Opex in their AI initiatives.
- Algorithms (the modern-day equation): Algorithms are the theoretical foundation underlying Machine Learning and AI, from simple neural networks to more complex recurrent and convolutional architectures (a minimal sketch of one such network follows this list). Many of these algorithms trace back decades, yet have only recently led to applied breakthroughs. This is partly due to the advances in Compute, but even more so because of…
- Data (data is the new oil): Machine Learning techniques are famously data-inefficient compared to humans. Many of the headline advances in AI performance, such as AlphaGo, were trained on data sets so large that no human could review them in a lifetime. Without sufficient training volume, Machine Learning techniques fail to reach acceptable performance levels. And as recently as a decade ago, the quantity of enterprise data available for Machine Learning was a tiny fraction of what exists today, from logs and metrics to traces and configuration events.
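To make the "simple neural networks" in the Algorithms point concrete, here is a minimal sketch in plain NumPy: a tiny two-layer network learning the XOR function by gradient descent. The architecture, learning rate, and iteration count are illustrative choices, not a reference implementation.

```python
import numpy as np

# A minimal two-layer neural network trained on XOR.
# All hyperparameters (layer width, learning rate) are illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)       # hidden activations
    p = sigmoid(h @ W2 + b2)       # predicted probabilities
    # Backward pass: gradients of the mean binary cross-entropy loss
    dp = (p - y) / len(X)          # gradient w.r.t. output logits
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h ** 2)  # tanh derivative
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]]
```

Decades-old mathematics, as the Algorithms point notes: what changed is that modern Compute and Data let networks like this scale up by many orders of magnitude.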
The AI opportunity: Self-optimising data centres
This explosion of operational data is both a blessing and a curse. In the current world of data centre and cloud operations, companies are desperately trying to keep up with the flood of raw information and falling further behind each year. The volume of data has outpaced currently available tools and platforms, placing an increasing burden on human operators – even feature developers – to keep up.
In fact, a recent report by EMA finds that developers spend an average of 30-40% of their time on production deployment, configuration, testing, debugging, and support rather than on feature development. This operational ‘tax’ is unacceptable for firms in competitive industries where feature velocity is a key driver.
AI will allow companies to transform this operational burden into a strategic advantage. It will enable firms to move to a global operations model where they can leverage the deep value of their data, resulting in real-time insights that drive business value. AI will close the gap between operational complexity and operational capability. Common ways companies can apply AI to their data centres include improving operational efficiency, balancing cost against performance in real time, strengthening security, and even optimising business metrics.
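As a flavour of the operational-efficiency case, the sketch below flags anomalous readings in a stream of data-centre metrics using a simple rolling z-score. It is a deliberate simplification of the statistical and Machine Learning models an AIOps platform would run at scale, and the metric values, window size, and threshold are hypothetical.

```python
from collections import deque
import math

def zscore_anomalies(values, window=60, threshold=3.0):
    """Flag points that deviate strongly from the recent rolling mean.

    A simple stand-in for the models an AIOps platform would apply
    to metrics at scale; window and threshold are illustrative.
    """
    recent = deque(maxlen=window)
    flagged = []
    for i, v in enumerate(values):
        if len(recent) >= 10:  # wait for a minimal baseline
            mean = sum(recent) / len(recent)
            std = math.sqrt(sum((x - mean) ** 2 for x in recent) / len(recent))
            if std > 0 and abs(v - mean) / std > threshold:
                flagged.append((i, v))
        recent.append(v)
    return flagged

# Hypothetical CPU-utilisation samples with a sudden spike at index 120.
cpu = [40 + (i % 7) for i in range(120)] + [95] + [40 + (i % 7) for i in range(40)]
print(zscore_anomalies(cpu))  # -> [(120, 95)]
```

The same pattern, detect a deviation from learned normal behaviour and surface it before a human would, underpins use cases from capacity planning to security anomaly detection.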
Don’t fight the data deluge – embrace it
The increase in operational complexity won’t be slowing down anytime soon. The gap between human scale and machine scale continues to grow. Organisations that are not able to augment their data centre, cloud and edge computing strategies by adopting AI technologies will risk falling further behind. Conversely, organisations that are able to effectively leverage Machine Learning will build a significant competitive edge.
We envision a hybrid data centre, cloud and edge that is self-healing and self-optimising, greatly reducing administrative overload and allowing firms to focus on strategic innovation and customer experience. We also believe that an AI-enabled infrastructure will deliver degrees of self-regulation well beyond today’s policy-based capabilities.
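To make "self-healing" concrete, here is a minimal sketch of the closed observe-decide-act loop such an infrastructure implies. The check_health and restart_service functions are hypothetical placeholders for real telemetry and remediation APIs, and the fixed threshold stands in for the learned policies that would take self-regulation beyond today's rule-based approach.

```python
import random
import time

def check_health(service: str) -> float:
    """Return a health score in [0, 1]; stubbed with random data here."""
    return random.random()

def restart_service(service: str) -> None:
    """Hypothetical remediation action."""
    print(f"remediating: restarting {service}")

def control_loop(services, threshold=0.2, interval_s=1.0, iterations=5):
    # Observe -> decide -> act, repeated continuously in a real system.
    for _ in range(iterations):
        for svc in services:
            score = check_health(svc)
            # Decide: a learned model could replace this fixed policy,
            # which is what moves beyond today's policy-based rules.
            if score < threshold:
                restart_service(svc)
        time.sleep(interval_s)

control_loop(["web-frontend", "metrics-pipeline"])
```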
What next?
Preparing your data centre for the modern world is not a trivial task. Start by future-proofing your data centre as groundwork for the AI-driven approaches that are coming around the corner. For example, on the control surface, that means adopting infrastructure-as-code (IaC) patterns, as well as wrapping legacy manual processes in APIs where possible; a minimal sketch of the IaC pattern follows.
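IaC tooling varies widely, so rather than showing any particular product, the sketch below illustrates the underlying pattern: desired state declared as version-controlled data, and a reconciler that converges the live environment toward it. The resource names and the actual_state stub are hypothetical.

```python
# The core infrastructure-as-code idea: desired state is data kept
# under version control, and a reconciler converges reality toward it.
desired_state = {
    "vm-web-01": {"cpus": 4, "memory_gb": 16},
    "vm-db-01": {"cpus": 8, "memory_gb": 64},
}

def actual_state():
    """Stand-in for querying the real platform API."""
    return {
        "vm-web-01": {"cpus": 2, "memory_gb": 16},  # drifted from spec
        "vm-orphan": {"cpus": 1, "memory_gb": 2},   # not declared in code
    }

def reconcile(desired, actual):
    # Create what is missing, correct what has drifted, remove what
    # is undeclared; each print stands in for a real API call.
    for name, spec in desired.items():
        if name not in actual:
            print(f"create {name} with {spec}")
        elif actual[name] != spec:
            print(f"update {name}: {actual[name]} -> {spec}")
    for name in actual:
        if name not in desired:
            print(f"delete {name} (not declared in code)")

reconcile(desired_state, actual_state())
```

Once the control surface works this way, an AI-driven system can propose or apply changes through the same declarative interface a human operator would use.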
As for the data, consider implementing edge computing, which enables data gathering and analytics to occur close to the source of the data. Companies should also invest in software-defined infrastructure as a key enabler of this process. Last, but certainly not least, explore a multi-cloud strategy to give your IT infrastructure the agility and flexibility it will need for the high-velocity, machine-learning-driven future.
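To illustrate what "analytics near the source" means in practice, the sketch below aggregates raw sensor readings at the edge and ships only a compact summary upstream, cutting the volume of data that must cross the network. The sensor name and payload format are illustrative, not a real protocol.

```python
import json
import statistics

# Edge computing in miniature: summarise raw readings locally and send
# only a compact aggregate upstream. The payload shape is illustrative.
def summarise_window(sensor_id, readings):
    return {
        "sensor": sensor_id,
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

def send_upstream(payload):
    """Stand-in for an MQTT/HTTP publish to the central platform."""
    print(json.dumps(payload))

# A window of hypothetical temperature readings gathered at the edge.
window = [21.3, 21.5, 21.4, 29.8, 21.2, 21.6]
send_upstream(summarise_window("rack-7-temp", window))
```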