F5 BIG-IP Next for Kubernetes, F5’s new intelligent proxy, combined with NVIDIA BlueField-3 DPUs, transforms application delivery for AI workloads.
F5 has announced the availability of BIG-IP Next for Kubernetes, an AI application delivery and security solution that gives service providers and large enterprises a centralised control point to accelerate, secure and streamline the data traffic flowing into and out of large-scale AI infrastructure.
The solution harnesses high-performance NVIDIA BlueField-3 DPUs to improve the efficiency of the data centre traffic that is critical to large-scale AI deployments. With an integrated view of networking, traffic management and security, customers can maximise data centre resource utilisation while achieving optimal AI application performance.
This is pitched as not only improving infrastructure efficiency but also enabling faster, more responsive AI inference, ultimately delivering an enhanced AI-driven customer experience.
F5 BIG-IP Next for Kubernetes is purpose-built for Kubernetes environments and builds on technology proven in large-scale telco cloud and 5G infrastructures. It is now tailored to leading AI use cases such as inference, retrieval-augmented generation (RAG), and seamless data management and storage. Integration with NVIDIA BlueField-3 DPUs minimises hardware footprint, enables granular multi-tenancy and optimises energy consumption while delivering high-performance networking, security and traffic management.
The combination of F5 and NVIDIA technologies allows both mobile and fixed-line telco service providers to ease the transition to cloud-native (Kubernetes) infrastructure, addressing the growing demand for vendors to adapt their functions to a cloud-native network function (CNF) model.
F5 BIG-IP Next for Kubernetes offloads data-heavy tasks to the BlueField-3 DPUs, freeing CPU resources for revenue-generating applications. The solution is particularly beneficial at the network edge, for virtualised RAN (vRAN) or distributed access architecture (DAA) deployments by multiple-system operators (MSOs), and in the 5G core network, with future potential for 6G.
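In Kubernetes, hardware offload capacity of this kind is commonly surfaced to workloads as an extended resource advertised by a device plugin. The manifest below is a hypothetical sketch of that general pattern only; the resource name `example.com/bluefield3-dpu`, the image, and all other identifiers are illustrative assumptions, not F5's or NVIDIA's actual API:

```yaml
# Hypothetical example: a Deployment requesting DPU capacity as an
# extended resource advertised by a (hypothetical) device plugin.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-inference-gateway        # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ai-inference-gateway
  template:
    metadata:
      labels:
        app: ai-inference-gateway
    spec:
      containers:
      - name: proxy
        image: example.com/proxy:latest   # placeholder image
        resources:
          limits:
            # Extended resources take an integer count; the scheduler
            # places the pod only on nodes advertising this resource.
            example.com/bluefield3-dpu: 1
```

Scheduling against an extended resource like this is one way traffic-processing pods end up co-located with DPU-equipped nodes, so that data-path work lands on the DPU rather than the host CPU.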
Kunal Anand, Chief Technology and AI Officer, F5, said: “The synergy between F5’s robust application delivery and security services and NVIDIA’s full stack accelerated computing creates a powerful ecosystem. This integration provides customers with enhanced observability, granular control, and optimized performance for their AI workloads across the entire stack, from the hardware acceleration layer to the application interface.”
Ash Bhalgat, Sr. Director of AI Networking and Security Partnerships at NVIDIA, said: “Service providers and enterprises require accelerated computing to deliver high-performance AI applications securely and efficiently at cloud scale. NVIDIA is working with F5 to accelerate AI application delivery, better ensuring peak efficiency and seamless user experiences powered by BlueField-3 DPUs.”