Bad data, bad outcomes

Ravi Mayuram, Chief Technology Officer, Couchbase, says that addressing AI bias is not only in the best interests of Singapore but also the entire region.

Plagiarism, cheating and threats to the development of critical thinking are some of the concerns raised by Singapore educators as sophisticated Artificial Intelligence (AI) tools like OpenAI’s DALL-E 2 and ChatGPT cross into the mainstream.

While AI tools can take a prompt and return an intelligent textual or visual response, seemingly like magic, they are not perfect. Many carry risks and limitations, and the democratization of AI must be supported by policies covering ethics, transparency and more.

This is important for many sectors, especially business, where AI is streamlining processes and augmenting human intelligence.

According to Forrester: “Rapid progress in areas of fundamental AI research, novel applications of existing models, the adoption of AI governance and ethics frameworks and reporting and many more developments will make AI an intrinsic part of what makes a successful enterprise.”

AI is especially important in Singapore – and Asia Pacific – where companies have jumped on the bandwagon.

Spending on AI hardware, software and services across Asia Pacific is projected to rise from $20.6 billion in 2022 to $46.6 billion in 2026. China, India and Australia are predicted to invest the most, with Singapore not far behind.

As AI adoption rises, however, business leaders must invest in managing human biases that make their way into algorithms.

With the spotlight on ethical AI, Singapore is already laying the groundwork through initiatives like AI Verify, which provides the basis for the further development of AI governance principles.

The impact of AI bias

While some people have tipped ChatGPT to overtake Google, Google CEO Sundar Pichai says customers trust Google’s search results and that “you can imagine for search-like applications, the factuality issues are really important and for other applications, bias and toxicity and safety issues are also paramount.”

Phrasing a question to ChatGPT in a certain way, for instance, can produce offensive and biased results (e.g. ChatGPT has been used to rank people who should be tortured based on country of origin).

On the enterprise side, a biased dataset can drive decisions based on skewed predictions.

Legal and reputational risks are other considerations. If it emerged, for example, that a company’s AI-driven hiring practices were biased against marginalized groups, the backlash could bring down the organization.

AI bias also hurts the bottom line. In a recent survey of UK and US business owners, 36% said their organizations had suffered because of bias in AI algorithms, with lost revenue (62%) and lost customers (61%) the consequences most damaging to profitability.

Australia and Singapore lead the way in trying to reduce AI bias in the Asia Pacific today.

Singapore has introduced the AI Verify platform, a governance toolkit that allows industries to be more transparent about their AI deployments and processes.

The AI Ethics Framework in Australia aims to better protect citizens, consumers and companies.

Biased datasets lead to biased AI

When AI produces offensive results, the cause is usually models trained on datasets that contain questionable and problematic content. Online hate speech, for instance, has been produced and shared since the launch of the Internet; amplified primarily by bots, it spreads widely, gets captured in the datasets behind popular AI models and skews their results.
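As a loose illustration of the curation problem, the sketch below screens a toy corpus against a hypothetical blocklist before admitting documents to a training set. The blocklist terms and corpus are invented for illustration; production pipelines typically rely on trained toxicity classifiers rather than keyword lists, but the gating principle is the same.

```python
# Sketch: screening raw text before it enters a training corpus.
# BLOCKLIST and raw_corpus are hypothetical stand-ins; real pipelines
# use trained toxicity classifiers, not keyword lists.

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder terms, not a real lexicon

def is_acceptable(document: str) -> bool:
    """Reject documents that contain any blocklisted term."""
    tokens = set(document.lower().split())
    return tokens.isdisjoint(BLOCKLIST)

raw_corpus = [
    "a perfectly ordinary sentence",
    "a sentence containing slur_a",
]

training_corpus = [doc for doc in raw_corpus if is_acceptable(doc)]
print(training_corpus)  # only the clean document survives
```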

Based on these developments, pre- and post-AI-era data will probably be rated differently in the future.

Where and how do the biases originate? They are frequently traced to biased datasets or datasets that under-represent or ignore whole populations. These biased sample sets – which are used to train AI models – produce untrustworthy outcomes.
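To make the mechanism concrete, the minimal sketch below trains a classifier on synthetic data in which one group vastly outnumbers another, then evaluates each group separately. All names, group sizes and feature shifts are assumptions invented for this illustration, not data from any real system.

```python
# Sketch: an under-represented group receives less reliable predictions.
# Entirely synthetic; the groups and shifts are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Samples whose true decision boundary depends on `shift`."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training sample; group B barely appears.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=1.5)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced held-out sets reveal the gap: accuracy is high for A, poor for B.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    Xt, yt = make_group(500, shift)
    print(f"group {name}: accuracy {model.score(Xt, yt):.2f}")
```

Because the decision boundary is fitted almost entirely to group A, the model misclassifies a large share of group B, even though each group’s data is internally consistent.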

How to eliminate AI bias

Eliminating bias is widely discussed today, but it is a major challenge because bias enters systems through both human judgment and the data pipelines built around it. Although human bias is almost impossible to remove entirely, we can create fairer, more ethical and transparent data-gathering processes to train AI models.
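One concrete form such transparency can take is an automated representation audit run before any training job. The sketch below assumes a hypothetical dataset whose records carry a self-reported `group` field and compares observed shares against reference proportions, which in practice would come from census or domain population data; every name and number here is an illustrative assumption.

```python
# Sketch: flag under- or over-represented groups before training.
# REFERENCE_SHARES, TOLERANCE and the group labels are hypothetical.
from collections import Counter

REFERENCE_SHARES = {"group_x": 0.50, "group_y": 0.35, "group_z": 0.15}
TOLERANCE = 0.05  # flag deviations above five percentage points

def audit_representation(records):
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    for group, expected in REFERENCE_SHARES.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > TOLERANCE:
            print(f"WARNING: {group} is {observed:.0%}, expected ~{expected:.0%}")

# A skewed sample trips all three warnings.
audit_representation([{"group": "group_x"}] * 90 + [{"group": "group_y"}] * 10)
```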

A recent World Economic Forum article highlighted three focus areas, including design processes that give greater consideration to predicting the impact of AI systems and a stronger emphasis on inclusivity across gender, race and class.

User testing involving representatives from diverse groups is another recommendation to garner a wider range of views and insights before launching AI solutions.

STEEPV analysis (covering Social, Technological, Economic, Environmental, Political and Values factors) was also put forward as a way to enhance fairness and non-discrimination.

This is an analysis of the external environment, spanning social and political attitudes, demographics, cultural priorities and the technological and economic landscape, to build a bigger, clearer picture.

We have steps to move forward and, just as academia, industry and governments came together to find solutions at the height of the pandemic, we can take the same approach to reducing AI bias.

The time to act is now, and by acting we can make AI more trusted and more widely adopted.

From my discussions with stakeholders in Singapore, there is widespread consensus on this point: taking on AI bias sooner rather than later will truly serve everyone’s best interests.
