Vultr advances global AI cloud inference with AMD Instinct MI300X
New Cloud GPU offering unlocks enterprise agility, performance, and cost-efficiency for global scaling of AI-native applications with AMD Instinct MI300X.

Vultr has announced the availability of the AMD Instinct MI300X accelerator and ROCm open software within its composable cloud infrastructure.

The collaboration is pitched as unlocking ‘new frontiers’ of GPU-accelerated workloads from the data centre to the edge.

“Innovation thrives in an open ecosystem,” said J.J. Kardwell, CEO of Vultr.

“The future of enterprise AI workloads is in open environments that allow for flexibility, scalability, and security. AMD accelerators give our customers unparalleled cost-to-performance. The balance of high memory with low power requirements furthers sustainability efforts and gives our customers the capabilities to efficiently drive innovation and growth through AI.”

The open nature of AMD architecture and Vultr infrastructure gives enterprises access to thousands of open source, pre-trained models and frameworks with a drop-in code experience, creating an optimised environment for advancing AI projects at speed.

“We are proud of our close collaboration with Vultr, as its cloud platform is designed to manage high-performance AI training and inferencing tasks and provide improved overall efficiency,” said Negin Oliver, Corporate Vice President of Business Development, Data Center GPU Business Unit, AMD.

“With the adoption of AMD Instinct MI300X accelerators and ROCm open software for these latest deployments, Vultr’s customers will benefit from having a truly optimised system tasked to manage a wide range of AI-intensive workloads.”  

Designed for next-generation workloads, AMD architecture on Vultr infrastructure allows for true cloud-native orchestration of all AI resources.

AMD Instinct accelerators and ROCm software management tools integrate seamlessly with the Vultr Kubernetes Engine for Cloud GPU to create GPU-accelerated Kubernetes clusters that can power the most resource-intensive workloads anywhere in the world. These platform capabilities give developers and innovators the resources to build sophisticated AI and machine learning solutions to the most complex business challenges.
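In practice, running a workload on such a GPU-accelerated Kubernetes cluster means requesting the accelerator as an extended resource in the pod spec. The sketch below is illustrative only: the pod name, image, and entrypoint are hypothetical, and it assumes AMD's Kubernetes device plugin is installed, which advertises GPUs under the resource name `amd.com/gpu`.

```yaml
# Illustrative pod spec for a ROCm workload on a GPU-backed cluster.
# Assumes the AMD GPU device plugin is running on the node, exposing
# MI300X devices as the extended resource amd.com/gpu.
apiVersion: v1
kind: Pod
metadata:
  name: rocm-inference              # hypothetical name
spec:
  containers:
    - name: inference
      image: rocm/pytorch:latest    # a ROCm-enabled base image
      command: ["python3", "serve.py"]  # hypothetical entrypoint
      resources:
        limits:
          amd.com/gpu: 1            # request one GPU from the node
```

Because the GPU is requested as an ordinary Kubernetes resource, the scheduler places the pod on a node with a free accelerator, and the same manifest scales across regions wherever matching node pools exist.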

Other benefits include:

  • Improved price-to-performance: Vultr’s high-performance cloud compute, accelerated by AMD GPUs, offers exceptional processing power for demanding workloads while maintaining cost efficiency.
  • Scalable compute and optimised workload management: Vultr’s scalable cloud infrastructure, combined with AMD’s advanced processing capabilities, allows businesses to seamlessly scale their compute resources as demand grows.
  • Accelerated discovery and innovation in R&D: Vultr’s cloud infrastructure offers the computational power and scalability developers need to deploy AMD Instinct GPUs, AMD ROCm open software and the vast partner ecosystem, solving complex problems for faster discovery cycles and innovation.
  • Optimised for AI inference: Vultr’s platform is optimised for AI inference, with AMD Instinct MI300X GPUs providing faster, scalable and energy-efficient processing of AI models.
  • Sustainable computing: Vultr’s eco-friendly cloud infrastructure allows users to achieve energy-efficient and sustainable computing in large-scale operations with AMD’s efficient AI technologies.
Intelligent CIO North America