What can AI models learn from food labels? 

Michael Boone, Trustworthy AI Product Manager, NVIDIA, on the case for model cards to make AI more transparent. 

Food labels provide vital information on a product's nutritional value. Without them, consumers would have no easy way of knowing how much fat, protein or sugar was in their food. But this wasn't always the case: food labels only became mainstream in the UK in the 1990s, and before that, consumers were left guessing what was in their favourite foods.

Food labels provide transparency about what's in a product, which enables consumers to make informed decisions and to trust what they buy. While a cutting-edge artificial intelligence tool and a loaf of bread may not appear to have much in common, the AI industry could learn a lot from food labels about the principles of, and necessity for, transparency.

What do we mean by trust and transparency, in relation to AI? 

As the adage goes, trust is earned in drops and lost in buckets. Trust in AI cannot be reduced to a single action or approach; it must be earned through a framework that continually builds and maintains it. Trustworthy AI is an approach to AI development that makes safety and transparency foundational: they must be built into every aspect of the value chain and cannot be left as an afterthought.

The key principles of trustworthy AI are privacy, safety and security, transparency, and nondiscrimination. Trustworthy AI models should avoid unintended biases and discrimination, comply with regulations, safeguard data and perform as intended.

Transparency is a core component of trust. For creators, users and stakeholders to trust AI models, they must understand how those models work. Best practice is to describe how a model was trained and to identify any known risks. Not only does this ensure that AI experts apply the right models to the right use cases; it also enables stakeholders beyond the expert community to understand and use these tools.

How can we improve AI trustworthiness?

Up and down the value chain, all stakeholders benefit from transparent AI. That is why, at NVIDIA, we believe the most effective way to enhance AI transparency and demonstrate trustworthiness is to use AI model cards: short documents that provide key information about a machine learning model at a glance.
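To make the idea concrete, a model card can be thought of as a small structured record rendered into a plain-language summary, much like a nutrition panel. The Python sketch below is a minimal illustration under assumed field names (name, version, intended_use, training_data, known_risks); it does not reflect NVIDIA's or any other published model card schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, illustrative model card: key facts at a glance."""
    name: str
    version: str
    intended_use: str
    training_data: str  # summary of where the training data came from
    known_risks: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the card as a plain-text 'label' a non-expert can read."""
        lines = [
            f"Model: {self.name} (v{self.version})",
            f"Intended use: {self.intended_use}",
            f"Training data: {self.training_data}",
            "Known risks:",
        ]
        for risk in self.known_risks:
            lines.append(f"  - {risk}")
        return "\n".join(lines)

# A hypothetical example model, for illustration only
card = ModelCard(
    name="sentiment-classifier",
    version="1.2.0",
    intended_use="Classifying sentiment in English-language product reviews",
    training_data="Public product-review corpora collected 2020-2023",
    known_risks=["Accuracy drops on non-English text",
                 "May reflect the demographics of the reviewers"],
)
print(card.render())
```

Just as a shopper scans a label rather than a lab report, a reader of such a card gets the essentials without having to inspect the model's weights or training pipeline.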

AI model cards can learn a lot from food labels. They should be easy to understand and offer clear insights into the AI model. You don't need to be a chef to tell whether a food product is healthy, and you shouldn't have to be an AI developer to understand how a model works. This means that model cards should be jargon-free and standardised. While we've seen some progress in this area, we must strive for greater harmonization across the industry. AI tools already play a foundational role in our society, so it is vital that these tools are democratized.

Without model cards, it is difficult to determine whether a model is suitable for an organisation's intended use. Model cards prevent time being wasted working backwards to recover information that should have been passed down the value chain, and they empower organisations and users to make informed decisions according to their needs.

It's true that knowledge begets knowledge. When it comes to model cards, there is nothing to be gained from withholding information or attempting to use it as leverage. In fact, companies increasingly demand and expect explainability from AI. When we're dealing with such powerful tools, it's important to recognise the responsibility that each part of the value chain holds.

Building a transparent and trustworthy future

While there's still progress to be made, we're pleased to see a growing consensus across the industry on implementing model cards. NVIDIA's own market research found that 90% of developers agree that model cards are important. That is why NVIDIA is investing in making model cards more accessible and easier to understand.

NVIDIA is championing the use of model cards through initiatives such as Model Card ++, in a push to make AI models more transparent. Research has shown that model cards increase model usage. To get the most out of current and future models, we need to understand their limits, have visibility into the data they've been trained on and have clarity on ethical considerations. Model Card ++ also captures model version identifiers, license restrictions and other qualitative information. Above all, we must be able to trust AI tools, and that trust is built by enhancing transparency for users.
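As a rough sketch of how such fields might travel with a model and inform a real decision, the snippet below gates a deployment step on hypothetical license and version metadata. The field names (model_version, license, ethical_considerations) are assumptions for illustration and are not Model Card ++'s actual schema.

```python
# Hypothetical card metadata; field names are illustrative,
# not NVIDIA's actual Model Card ++ schema.
card_metadata = {
    "model_version": "2.0.1",
    "license": "research-only",  # an example license restriction
    "ethical_considerations": "Not evaluated for use on medical text.",
    "training_data_summary": "Deduplicated public web text, 2023 snapshot",
}

def may_deploy(card: dict, commercial_use: bool) -> bool:
    """Gate a deployment decision on information the card makes visible."""
    if commercial_use and card["license"] == "research-only":
        print("Blocked: the card's license restricts commercial use.")
        return False
    print(f"Cleared to evaluate model version {card['model_version']}.")
    return True

may_deploy(card_metadata, commercial_use=True)  # prints the 'Blocked' message
```

The point is not the code itself but the workflow it enables: when a card's fields are machine-readable, checks like this can run automatically before a model ever reaches production.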

When we were younger, we would pick which cereal we wanted based on how colorful the box was or the free toy that came inside. As adults, we understand that to make an informed decision we need to take a look at the underlying information about what’s really in the box. In the context of AI models, accessible model cards enable these informed decisions and ultimately empower greater levels of trust in development and deployment.  
