Editor’s Question: How do companies navigate responsible AI adoption in the APAC region?

Hemanta Banerjee, Vice President of Public Cloud Data Services, Rackspace Technology, says AI decision-making demands the same rigour and oversight applied to human decisions.

What exactly is responsible AI, and what are the core principles that guide its development and deployment?

When we talk about “responsible AI,” we are referring to an overall commitment to develop, deploy, and use AI in ethical, transparent, and beneficial ways. How is this different from accountability? Rather than placing responsibility on specific individuals or groups, responsible AI is about collectively making sure AI is used as a tool to support and enhance decisions, not relied upon to make decisions on its own. By drawing this distinction clearly, we can ensure fairness and trustworthiness and guard against potential biases in the decision-making process.

In practice, this can be done in several ways. Ethical use is emphasised through model supervision, explainability and safeguards against harmful biases, reinforcing transparency, accountability and fairness. AI systems follow the same procurement and internal oversight procedures as other applications, ensuring consistency in deployment.

Additionally, strict guidelines govern the handling of confidential and sensitive information, ensuring that AI systems comply with intellectual property protections and data regulations. Data retention, privacy and security measures follow corporate policies to maintain compliance and protect user information. Employees are also encouraged to report violations of AI standards in good faith.

How can companies ensure that AI systems are designed and implemented fairly?

This is a particularly important topic because the Asia-Pacific AI market is expected to reach US$110 billion by 2028. That is why it is absolutely essential that we design and deploy AI systems responsibly and ethically.

To get there, we first need to set parameters that prioritise data integrity, human oversight, transparency and accountability. Even with the remarkable advancements we have seen in AI, these systems still require humans in the loop and cannot run on their own.

Human oversight is essential in managing AI responsibly, especially in complex sectors like healthcare and finance. With this level of oversight, companies can mitigate biases and ensure ethical decision-making.

Another important parameter is transparency. As AI becomes more integrated into core business functions, we need clear policies and tooling around regulatory compliance, data usage and privacy. Companies need mechanisms that can log and audit AI decisions, assess their impact on stakeholders and create reporting structures for issues and complaints.
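
To make that concrete, here is a minimal sketch of what logging AI decisions for later audit might look like. The function names, log schema and JSON Lines format are illustrative assumptions, not a specific product’s API:

```python
import json
import time
import uuid

def audit_logged(model_fn, log_path="ai_audit.jsonl"):
    """Wrap a model call so every decision leaves an auditable record."""
    def wrapper(inputs, user_id):
        record = {
            "decision_id": str(uuid.uuid4()),  # stable ID for follow-up reviews
            "timestamp": time.time(),
            "user_id": user_id,
            "inputs": inputs,
        }
        output = model_fn(inputs)
        record["output"] = output
        # Append-only JSON Lines log: one decision per line, easy to audit.
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output
    return wrapper

# Hypothetical usage: any callable model gains an audit trail when wrapped.
score_loan = audit_logged(lambda x: {"approve": x["income"] > 50_000})
score_loan({"income": 62_000}, user_id="analyst-7")
```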

Validation is critical when it comes to fairness and data integrity. That is why all training data must be properly curated and managed; doing so goes a long way towards reducing biases and limiting AI hallucinations.
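
One simple validation step is checking that training data is not heavily skewed across a sensitive attribute. The sketch below uses an illustrative balance check; real fairness audits would apply richer metrics such as demographic parity or equalised odds:

```python
from collections import Counter

def check_group_balance(records, group_key, tolerance=0.2):
    """Flag groups whose share of the data deviates from an even split
    by more than `tolerance` (a crude proxy for representation bias)."""
    counts = Counter(r[group_key] for r in records)
    expected = sum(counts.values()) / len(counts)
    return {group: n for group, n in counts.items()
            if abs(n - expected) / expected > tolerance}

# Hypothetical dataset: a 70/30 split across regions is flagged as skewed.
data = [{"region": "SG"}] * 70 + [{"region": "JP"}] * 30
print(check_group_balance(data, "region"))  # {'SG': 70, 'JP': 30}
```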

How can companies ensure their AI systems comply with local and international laws?

Companies must establish clear policies, governance structures and oversight mechanisms that align with ethical principles and regulatory standards. From there, they should look at what is needed to implement controls and foster a culture of responsible AI use through awareness, training and guidelines. One useful point of reference is Singapore’s National Artificial Intelligence Strategy (NAIS), which was established to spearhead the development of responsible AI.

A key aspect of responsible AI is building a chain of trust in AI systems. We need to know what is collected, shared and processed. This is where governance policies are most effective: defining data classification standards, ensuring intellectual property protection and providing guidance on the secure use of AI tools.
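
As a sketch of how a data classification standard can gate what reaches an AI tool, the rules and labels below are hypothetical placeholders; a real deployment would use the company’s own classification scheme and a proper data-loss-prevention service:

```python
import re

# Hypothetical classification rules mapping labels to detection patterns.
CLASSIFICATION_RULES = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return every classification label the text triggers."""
    return [label for label, rule in CLASSIFICATION_RULES.items()
            if rule.search(text)]

def safe_to_share(text):
    """Gate: only unclassified text may be sent to an external AI tool."""
    return not classify(text)

print(safe_to_share("Summarise our Q3 growth strategy"))       # True
print(safe_to_share("Card 4111 1111 1111 1111 was declined"))  # False
```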

Responsible AI isn’t a ‘build it and forget it’ project. Building trust in AI, especially with LLMs, demands continuous monitoring through LLMOps. This proactive approach ensures models remain accurate, fair and secure throughout their lifecycle. Ongoing oversight is critical for bias detection, explainability, compliance with data privacy and maintaining robust security as models evolve in production environments.
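
A minimal illustration of that kind of ongoing oversight is comparing each new batch of evaluation scores against a baseline and alerting on drift. The scores, baseline and threshold here are assumed placeholders; production LLMOps pipelines would also track latency, refusal rates, toxicity and security signals:

```python
import statistics

def monitor_batch(scores, baseline_mean, drift_threshold=0.1):
    """Alert when a batch of evaluation scores (e.g. factuality ratings
    from an eval harness) drifts too far below an established baseline."""
    batch_mean = statistics.mean(scores)
    drifted = baseline_mean - batch_mean > drift_threshold
    return {"batch_mean": round(batch_mean, 3), "alert": drifted}

# Baseline quality was 0.90; this batch has slipped, so the check alerts.
print(monitor_batch([0.72, 0.78, 0.75], baseline_mean=0.90))
# {'batch_mean': 0.75, 'alert': True}
```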

How does human oversight play a part in responsible AI and how can companies balance automation with human judgment?

Rackspace Technology’s recent Global AI Report highlighted what’s essentially a paradox: while 75% of survey respondents placed unconditional trust in AI-generated answers, only 20% believed these outputs should always involve human validation.

Human oversight is an absolutely critical part of responsible AI and should be integrated into AI processes from the outset. It’s how organisations can make sure that automated systems are efficient, ethically sound and aligned with organisational and societal norms.

Companies should implement governance and auditing mechanisms that mandate and enforce human review of AI-driven decisions, especially in cases that involve compliance, security or ethical implications.
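
One way to mandate that review is a routing layer in front of the model, where high-risk categories always escalate to a human. The categories, threshold and field names below are illustrative assumptions:

```python
# Illustrative risk classes that always require a human decision.
HIGH_RISK_CATEGORIES = {"compliance", "security", "ethics"}

def route_decision(ai_output, category, confidence, threshold=0.9):
    """Send an AI decision straight through, or to a human review queue.

    High-risk categories always escalate; everything else escalates
    only when the model's confidence falls below the threshold."""
    if category in HIGH_RISK_CATEGORIES or confidence < threshold:
        return {"status": "pending_human_review", "output": ai_output}
    return {"status": "auto_approved", "output": ai_output}

# Risk class overrides confidence: security items always get a reviewer.
print(route_decision("flag transaction", "security", confidence=0.99))
print(route_decision("tag invoice", "operations", confidence=0.95))
```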

Stakeholders should also go one step further and layer additional transparency and accountability mechanisms on top of human oversight. Establishing clear reporting structures, auditing AI outputs and incorporating explainability features into AI systems will help companies maintain control over automated processes.
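
Explainability features can be as simple as attaching a “why” to every automated output. The sketch below measures how much a score moves when each input is removed, a crude stand-in for established attribution methods such as SHAP or LIME; the scoring model is hypothetical:

```python
def explain_decision(predict, features):
    """Score the contribution of each feature by zeroing it out and
    measuring how far the prediction moves (crude sensitivity analysis)."""
    base = predict(features)
    return {name: round(base - predict({**features, name: 0}), 3)
            for name in features}

# Hypothetical credit-scoring model: income matters more than tenure.
score = lambda f: 0.5 * (f["income"] / 100_000) + 0.3 * (f["tenure_years"] / 10)
print(explain_decision(score, {"income": 80_000, "tenure_years": 5}))
# {'income': 0.4, 'tenure_years': 0.15}
```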

The key to responsible AI? Let AI automate the mundane, freeing up humans for critical thinking. But remember, AI decision-making demands the same rigour and oversight we apply to human decisions. Just as we wouldn’t let an untrained person make critical calls, we can’t unleash AI without robust safeguards and continuous monitoring. Only then can we unlock AI’s power without sacrificing security, ethics, or human control.