How Schneider Electric’s industry-first blueprint will help optimise data centres to harness the power of AI

In an era of AI disruption, it can be a challenge for organisations to ensure responsible and efficient data centre operations. Steven Carlini, Vice President of Innovation and Data Center, Schneider Electric, tells us how the company’s industry-first blueprint for optimising data centres can help organisations meet this challenge.

Steven Carlini, Vice President of Innovation and Data Center, Schneider Electric

Adapting physical infrastructure design for data centres is essential in the era of AI disruption. Can you elaborate on the specific challenges that data centres face when accommodating AI-driven workloads?

Even though AI is a hot topic, very few people are talking about the physical infrastructure aspect of it. AI presents a different type of workload and technology compared to the traditional, more common x86 two-socket server. New AI workloads run on GPU accelerators that operate in parallel, like one giant computer, which is quite different from x86 servers that process a workload and then return to idle mode. These GPU clusters are capable of processing and training on data at very high speeds and capacities.

Schneider Electric’s white paper outlines key considerations related to power, cooling, racks and software tools in the context of AI. Can you provide insight into some of the most critical considerations and their impact on data centre design?

The servers are different – they are larger, heavier, deeper and have more connections. Frequently, they are now liquid-cooled, or will be in the future. There are tremendous changes to power, cooling and racks, which must be beefier to support the weight. In addition to the servers being larger and heavier, they also use more power.

Another consideration is that the density of these racks could range from 30 to 50 kilowatts per rack, and up to 100 kilowatts per rack. This denser orientation dramatically changes everything and presents challenges, as the power must be delivered in a smaller area and distributed at higher amperage.
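As a rough illustration (the voltage and power factor here are assumed for the example, not taken from the interview), the line current for a fully loaded 100 kW rack fed from 415 V three-phase at unity power factor is roughly

\[ I = \frac{P}{\sqrt{3}\,V_{LL}} = \frac{100\,000}{\sqrt{3}\times 415} \approx 139\ \text{A per phase} \]

compared with about 14 A per phase for a traditional 10 kW rack, so roughly ten times the current must be delivered and protected within the same rack footprint.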

Finally, you have the concentration of cooling and the piping coming in and out of the server, which brings in manifolds and cooling distribution units, all of which are new factors.

You cannot spread out the load because these servers and individual GPUs run in parallel and are connected through fibre. The fabric, or InfiniBand, runs at high speed, which makes it extremely expensive. If you tried to spread the load apart, you would spend a lot of money deploying this fibre network for all of the processors. In the servers, each GPU has its own network and fibre connection. This presents a large cost in addition to the real estate costs.

Due to these high costs, we are seeing a strong desire to deploy these servers, high-density racks and clusters with as small a footprint as possible. Due to their design, they operate very close to capacity and maximum thresholds. Previously, when you had 10 kilowatts per rack, you were usually running at three kilowatts per rack and it spiked up to 10 occasionally. The new training clusters run at capacity, so if you design for 100 kilowatts per rack, it will run at 100 kilowatts per rack.

It’s important to be cognizant of running at capacity and to use software tools to manage your environment, as you are on the critical edge with only a marginal buffer.

AI workloads are expected to grow significantly. What strategies and innovations can organisations implement to address the increasing power demand in existing and new data centres, optimising them for AI?

There are two approaches to consider. Starting from scratch would be the preferred option, as it allows the power train to be optimised with fewer voltage step-downs and transformers. For existing environments with sufficient power capacity, technologies like rear-door heat exchangers can be fitted to current racks, supporting higher densities of 40 to 70 kilowatts per rack. Depending on the power available, the current site can be retrofitted. However, if power is limited, a very dense deployment may leave excess floor space vacant.

Recommended strategies involve modelling everything with Digital Twins, from power systems to IT rooms. This allows organisations to better visualise the implications before deploying in the physical world.

With AI applications placing strain on power and cooling infrastructure, how can data centres balance energy efficiency and environmental responsibility with the demands of AI-driven applications?

Currently, a permit for the construction of a data centre requires a demonstration that the facility can operate at a very high efficiency level, or a very low PUE. In many cases, to obtain a permit, a data centre must show that it will be powered with a certain amount of renewable sources or use PPAs or renewable energy credits. It’s a given that these centres must be designed to be highly efficient.
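For context, PUE (Power Usage Effectiveness) is the ratio of total facility energy to the energy consumed by the IT equipment alone:

\[ \text{PUE} = \frac{\text{Total facility energy}}{\text{IT equipment energy}} \]

A facility drawing 12 MW to run 10 MW of IT load therefore operates at a PUE of 1.2; the closer the figure is to 1.0, the less energy is lost to cooling, power conversion and other overheads.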

Liquid cooling significantly contributes to making the cooling more efficient. The types of economisers and heat rejection used outside the facility for the liquid loop are also important considerations. Designing data centres to be as efficient as possible and using the highest percentage of renewable power is the approach that leading companies are taking.

Net Zero goals and the need to address physical infrastructure redundancy are important. Any design that’s considered can be modelled to see the effects on efficiency. With a shift in the utility grid to more distributed resources and more sustainable sources like wind and solar, sustainable strategies can be adopted.

With this changing utility grid, software is available that allows you to pick from different sources based on your preferences. If your company has a Net Zero commitment, this allows you to select the most sustainable source available and combine that with software to better manage the other sources.

Combined with that, emerging technologies like energy storage, using lithium-ion battery technology, offer the ability to store excess renewable power on-site. This can be used when the utility is constrained or charging more, or when there’s no sustainable power available. The fact that you can discharge those batteries is also interesting. There are a lot of big developments happening around grid management, energy storage and sustainability.

Can you share insights into the role of software platforms and end-to-end solutions in effectively deploying AI in data centres and ensuring responsible and efficient operations?

During design and procurement, software is used to specify the embodied carbon of many components. The procurement process is becoming more digital and the design process involves Digital Twins. Simulations can be run to see the effects on the operations side.

Software covering the complete power train allows you to examine transformers, switchgear, breakers and busway, and to monitor temperature. Operations software is critical and covers all power and IT room systems, including cooling systems.

Different types of AI can be used to adjust parameters like water temperature and flow, optimising efficiency. For example, in a big data centre, hundreds of variables can be at play. It is hard for a human to change all the variables and see the effect, but a computer running AI can zero in on the most efficient operation.
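To make that idea concrete, here is a minimal, purely illustrative sketch in Python (not Schneider's software; the efficiency model, parameter names and operating bounds are all invented for the example). A simple random search stands in for the AI: it samples combinations of water temperature and flow within assumed safe bounds and keeps whichever setpoints give the lowest estimated PUE, something a machine can do across hundreds of variables far faster than a person adjusting them one at a time.

import random

def estimated_pue(water_temp_c, flow_rate_lpm):
    # Hypothetical stand-in for a facility model or Digital Twin:
    # returns an estimated PUE for the given setpoints (figures made up).
    return 1.15 + 0.0004 * (water_temp_c - 24) ** 2 + 2e-7 * (flow_rate_lpm - 900) ** 2

def random_search(iterations=5000):
    # Sample setpoints within assumed safe operating bounds and keep the best PUE found.
    best_pue, best_temp, best_flow = float("inf"), None, None
    for _ in range(iterations):
        temp = random.uniform(18.0, 32.0)      # supply water temperature (degrees C)
        flow = random.uniform(400.0, 1400.0)   # coolant flow rate (litres per minute)
        pue = estimated_pue(temp, flow)
        if pue < best_pue:
            best_pue, best_temp, best_flow = pue, temp, flow
    return best_pue, best_temp, best_flow

if __name__ == "__main__":
    pue, temp, flow = random_search()
    print(f"Best estimated PUE {pue:.3f} at {temp:.1f} C supply water and {flow:.0f} L/min")

Production systems use far more sophisticated models and controls; the point is simply that software can explore a setpoint space that is impractical to tune by hand.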

Schneider Electric’s AI-Ready Data Centre Guide addresses critical intersections of AI and data centre infrastructure. Can you provide more details on the guidance it offers? 

AI breaks down into two distinct categories. There are the training models, the high-density, high-capacity GPU clusters, and there are the working models deployed as inference servers at the Edge, closer to the loads. Depending on the required response speed, accuracy and comprehensiveness, these models are deployed close to the load; predictive analytics in manufacturing and supply chain co-ordination are examples. These working models are deployed closer to users and applications in more traditional data centres. They may run GPU accelerators, but at much less than 100 kilowatts per rack.

Schneider offers solutions for all sizes of data centres, including pre-configured options like micro data centre solutions that can be rolled into place and self-configured by software. The larger projects are more custom and each one is different. The inference and working AI models use more standardised solutions that will be familiar from Schneider.

Looking at future technologies for seamless AI integration, what does this mean for the future of data centre design? 

AI is going to be used for predicting the weather to assist workload migration, for utility power delivery optimisation, for cooling and for predictive maintenance. AI has the capacity to look at the history of your data centre and at what is being supplied through the grid. It will impact various phases of operation, dynamically adapting to sustainable power sources and supporting workload migration, and it will be used in maintenance, predictive analytics and grid management.

How will the growth of AI impact data centre sustainability and what steps can CIOs take to minimise their environmental impact?

CIOs need to understand and measure the environmental footprint, especially with distributed or campus-based data centres. Schneider’s free tools, like the Life Cycle Carbon Footprint for Data Centres, can assist in modelling and predicting carbon emissions. This tool allows you to model what your current data centre looks like and change parameters like IT refresh cycles, the carbon intensity of your power and the usage of generators.

It offers a multitude of options that can be changed and will give you a detailed output on the Scope 1, Scope 2 and Scope 3 carbon of the data centre. It cannot be used for reporting, but you can use it to get an idea of where the carbon is being emitted for a specific data centre. Hence, this is a fantastic tool to use, whether the data centre already exists or you are planning a new one and want an idea of what it is going to look like.
