Embracing liquid cooling amid the rise of high-performance computing 

Chee Hoe Ling, Vice President of Product Management, Vertiv Asia, on how data centers can get liquid cooling solutions working optimally for their needs. 

From artificial intelligence (AI) to augmented and virtual realities, our world is now firmly in the era of advanced computational demands. But what does the need to run high-performance computing tasks across hundreds or thousands of nodes mean for data centers? 

For one, the growing adoption of high-performance computing (HPC) requires Graphics Processing Units (GPUs) to work harder.  

Supporting AI models, for example, requires up to five times the power and cooling capacity in the same space as traditional servers. Rack densities of 40 kW per rack now sit at the lower end of what AI deployments require, and densities surpassing 100 kW per rack are expected at large scale in the near future. In other words, traditional air-cooling methods simply cannot keep up. 
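The jump in density is easy to quantify. As a rough sketch of the heat a cooling system must reject, using the 40 kW and 100 kW figures above alongside an assumed 8 kW baseline for a traditional air-cooled rack (a hypothetical figure for illustration, not from the article):

```python
# Rough heat-rejection arithmetic for the rack densities cited above.
# The traditional-rack figure is an illustrative assumption.
TRADITIONAL_KW = 8    # assumed density of a typical air-cooled rack
AI_LOW_KW = 40        # lower end cited for AI deployments
AI_HIGH_KW = 100      # large-scale AI density cited

racks = 10  # a hypothetical row of ten racks
for label, kw_per_rack in [
    ("traditional", TRADITIONAL_KW),
    ("AI, low end", AI_LOW_KW),
    ("AI, at scale", AI_HIGH_KW),
]:
    heat_kw = racks * kw_per_rack  # essentially all rack power becomes heat
    print(f"{label}: {heat_kw} kW of heat to reject")
```

Even this simple sum shows a ten-rack AI row at scale rejecting over ten times the heat of a traditional row – the load that air cooling struggles to carry.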

Optimal performance and efficiency 

As data centers grapple with the growth of HPC, viable alternatives to traditional air cooling are highly sought after. For many operators, this has emerged in the shape of liquid cooling solutions.  

According to the Dell’Oro Group, revenue for the liquid cooling market is projected to reach US$2 billion before 2030.  

Meanwhile, a recent industry survey by Vertiv suggests that 17% of data centers are currently utilizing liquid cooling, with an additional 61% considering its implementation. 

But besides the obvious fact that traditional air cooling is becoming less effective at rejecting the heat of AI server workloads, especially in tropical climates, what is it about liquid cooling specifically that makes it a better option? 

Well, firstly, liquids have far higher thermal transfer properties than air, making liquid cooling up to 3,000 times more effective at cooling high-density racks. Furthermore, liquid cooling enables a lower Power Usage Effectiveness (PUE) for a large data center, reducing overall power consumption – which doesn’t just reduce costs but is also better for the environment.  
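PUE is simply total facility power divided by IT equipment power, so a lower ratio translates directly into less overhead power. A minimal sketch of that saving, with assumed PUE values and IT load chosen purely for illustration:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The PUE and load values below are illustrative assumptions, not measured figures.
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by a given IT load and PUE ratio."""
    return it_load_kw * pue

it_load = 1_000  # kW of IT load (assumed)
air_cooled = facility_power_kw(it_load, pue=1.6)     # assumed air-cooled PUE
liquid_cooled = facility_power_kw(it_load, pue=1.2)  # assumed liquid-cooled PUE
print(f"Overhead power saved: {air_cooled - liquid_cooled:.0f} kW")
```

Under these assumptions, the same 1 MW of IT load draws 400 kW less facility overhead with the lower PUE, continuously, which is where both the cost and environmental benefits accrue.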

Liquid cooling has long been used for supercomputers and gaming applications, proving its credentials as an enabler of enhanced energy efficiency, performance and scalability. 

Navigating challenges 

However, deploying liquid cooling poses challenges, particularly since data centers designed for air cooling will need to undergo at least some structural changes. Often, this entails retrofitting or redesigning the data center layout to accommodate coolant distribution systems, heat exchangers and associated components. If not planned carefully, this work can disrupt ongoing data center operations. 

Liquid cooling systems – composites of pumps, pipes and coolant circulation mechanisms – also introduce new maintenance and operational complexity. Without specialized expertise, ongoing monitoring and robust contingency plans, these components can heighten operational and safety risks. 

Another challenge is integration with existing hardware, software and management systems. This process may involve software updates, firmware modifications and compatibility testing to catch issues early and ensure smooth operation. 

To navigate these challenges, data center operators will need to transition through a hybrid approach.  

That hinges on a three-pronged strategy involving: 

  • Creating a team to oversee the addition of liquid cooling to cool high-density applications  

This group of subject matter experts – comprising internal experts, consultants, manufacturers, and other vendors – will provide input on hybrid cooling infrastructure design, selection, installation, and maintenance.  

  • Meticulously preparing for deployment  

Decision-makers should gather application and workload requirements to determine future-state needs. Through a well thought out design and budget guidance, data center operators can develop a target state infrastructure to support the new requirements. 

  • Using project services to deploy and maintain new systems  

Operators should commission systems in line with their design specifications. This must include starting up services, training teams on new systems, ensuring an effective handover to operations teams, and scheduling maintenance. The information gathered at this stage is also foundational for tailoring the solution to site requirements. 

Empowering HPC into the future 

A good rule of thumb is to look for solutions providers that support the entire transition, as this streamlines procurement, simplifies lifecycle management, and ensures comprehensive support.  

That, in turn, enables data center owners and operators to leverage the full range of tools to ensure compatibility and interoperability – while reining in complexity.  

By assisting with design, implementation, and ongoing maintenance, end-to-end providers significantly ease the transition – ultimately putting data centers on course to get liquid cooling solutions working optimally for their needs. 

Intelligent CIO APAC