Infinidat introduces Retrieval-Augmented Generation (RAG) workflow deployment architecture to make AI more accurate for enterprises

The company is bringing enterprise storage and GenAI together, providing a RAG architecture that will enhance the accuracy of AI.


Infinidat has announced its Retrieval-Augmented Generation (RAG) workflow deployment architecture to enable enterprises to fully leverage generative AI (GenAI).

This is pitched as dramatically improving the accuracy and relevancy of AI models with up-to-date, private data drawn from multiple company data sources – both unstructured and structured data, such as databases – held on existing Infinidat platforms.

With Infinidat’s RAG architecture, enterprises use their existing InfiniBox and InfiniBox SSA enterprise storage systems as the basis for optimizing the output of AI models, without needing to purchase any specialized equipment.

Infinidat also provides the flexibility of using RAG in a hybrid multi-cloud environment, with InfuzeOS™ Cloud Edition, making the storage infrastructure a strategic asset for unlocking the business value of GenAI applications for enterprises.  

“Infinidat will play a critical role in RAG deployments, leveraging data on InfiniBox enterprise storage solutions, which are perfectly suited for retrieval-based AI workloads,” said Eric Herzog, CMO, Infinidat.

“Vector databases that are central to obtaining the information to increase the accuracy of GenAI models run extremely well in Infinidat’s storage environment. Our customers can deploy RAG on their existing storage infrastructure, taking advantage of the InfiniBox system’s high performance, industry-leading low latency and unique Neural Cache technology – enabling delivery of rapid and highly accurate responses for GenAI workloads.”

RAG augments AI models using relevant and private data retrieved from an enterprise’s vector databases.

Vector databases are offered by a number of vendors, such as Oracle, PostgreSQL, MongoDB and DataStax Enterprise. These are used during the AI inference process that follows AI training.

As part of a GenAI framework, RAG enables enterprises to auto-generate more accurate, more informed and more reliable responses to user queries. It enables AI models, such as a Large Language Model (LLM) or a Small Language Model (SLM), to reference information and knowledge beyond the data on which they were trained. It not only customizes general models with a business’ most up-to-date information, but also eliminates the need to continually re-train AI models.
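The retrieve-then-augment flow described above can be sketched in a few lines of Python. This is a minimal, illustrative example only: the toy bag-of-words "embedding", the in-memory store and all document text are assumptions standing in for a real embedding model and a production vector database, not Infinidat's or any vendor's actual implementation.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words term-frequency vector.
    # A real RAG pipeline would use a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.docs = []  # list of (embedding, original_text) pairs

    def add(self, text):
        self.docs.append((embed(text), text))

    def top_k(self, query, k=2):
        # Rank stored documents by similarity to the query.
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(query, store, k=2):
    # The RAG step: retrieve relevant private context and prepend it
    # to the user query before it is sent to the language model.
    context = "\n".join(store.top_k(query, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical private enterprise documents.
store = VectorStore()
store.add("Q3 revenue grew 12% year over year")
store.add("The new product ships with an upgraded cache")
store.add("Office picnic is scheduled for June")

prompt = build_prompt("What was revenue growth in Q3?", store, k=1)
```

The key point is that the model's answer is grounded in whatever the store returns at query time, so the model itself never needs to be re-trained on the private data.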

“Infinidat is positioning itself the right way as an enabler of RAG inferencing in the GenAI space,” said Marc Staimer, President, Dragon Slayer Consulting.

“Retrieval-augmented generation is a high value proposition area for an enterprise storage solution provider that delivers high levels of performance, 100% guaranteed availability, scalability, and cyber resilience that readily apply to LLM RAG inferencing. With RAG inferencing being part of almost every enterprise AI project, the opportunity for Infinidat to expand its impact in the enterprise market with its highly targeted RAG reference architecture is significant.”

Stan Wysocki, President, Mark III Systems, said: “Infinidat is bringing enterprise storage and GenAI together in a very important way by providing a RAG architecture that will enhance the accuracy of AI. It makes perfect sense to apply this retrieval-augmented generation for AI to where data is actually stored in an organization’s data infrastructure. This is a great example of how Infinidat is propelling enterprise storage into an exciting AI-enhanced future.” 

Fine-tuning AI in the Enterprise Storage Infrastructure

Inaccurate or misleading results from a GenAI model, referred to as “AI hallucinations,” are a common problem that has held back the adoption and broad deployment of AI within enterprises.

An AI hallucination may present inaccurate information as “fact,” cite non-existent data or provide false attribution – all of which undermine trust in AI and point to the need for continual refinement of data queries.

Without a RAG strategy, AI models tend to rely on large amounts of publicly available data while under-utilizing an enterprise’s own proprietary data assets.

To address this major challenge in GenAI, Infinidat is making its architecture available for enterprises to continuously refine a RAG pipeline with new data, thereby reducing the risk of AI hallucinations.
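The refresh mechanism this describes – retrieval reflecting new data immediately, with no model retraining – can be illustrated with a toy keyword index standing in for the vector database. All names, documents and the overlap-scoring logic here are illustrative assumptions, not Infinidat's pipeline.

```python
# Toy RAG refresh: ingesting a new document changes what is retrieved
# for the same query, with no retraining of any model.
index = []  # (token_set, text) pairs standing in for a vector index

def ingest(text):
    # Add a document to the index; in a real pipeline this would
    # compute an embedding and write it to the vector database.
    index.append((set(text.lower().split()), text))

def retrieve(query):
    # Return the stored document with the largest token overlap.
    q = set(query.lower().split())
    return max(index, key=lambda d: len(q & d[0]))[1]

ingest("policy v1: refunds allowed within 30 days")
before = retrieve("what is the refund policy")

# Later, a newer document is ingested into the pipeline.
ingest("policy v2 update: the refund window is now 60 days")
after = retrieve("what is the refund policy")
```

Before the second `ingest` call the query resolves to the v1 document; afterwards the same query surfaces the v2 update, which is how a continuously refreshed pipeline keeps answers current and reduces hallucination risk.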

By enhancing the accuracy of AI model-driven insights, Infinidat is helping enterprises realize the promise of GenAI. Infinidat’s solution can encompass any number of InfiniBox platforms and extends to third-party storage solutions via file-based protocols such as NFS.

In addition, to simplify and accelerate the rollout of RAG for enterprises, Infinidat integrates with cloud providers, using its award-winning InfuzeOS Cloud Edition for AWS and Azure to make RAG work in a hybrid cloud configuration. This complements the work that hyperscalers are doing to build out LLMs at larger scale for the initial training of AI models. The combination of AI models and RAG is a key component in defining the future of GenAI.

Intelligent CIO North America
