Can AI sales agents keep up with the right side of the law?

Byron Fernandez, Group CIO and EVP, TDCX, says agentic AI for sales must handle compliance as proactively and intelligently as humans – if not better.

Since its launch last month, Manus has taken the internet by storm. Touted as the world’s first general AI agent, it leverages multiple AI models alongside independently operating agents to execute a wide range of tasks autonomously. In other words, it embodies the latest buzzword in my world: agentic AI.

In sales, the appeal is clear: AI that goes beyond automating tasks such as lead scoring, customer outreach, data entry and quote generation to independently negotiating deals, pricing and terms. But can agentic AI make sound judgments and stay within each jurisdiction’s rules?

The leap from assistive to agentic raises significant accountability questions. Businesses must trust AI not just to persuade, but to navigate complexities traditionally handled by people, who still need to be in the loop when negotiating or signing off on high-stakes business-to-business (B2B) deals.

Agentic AI won’t just automate tasks; it will make decisions and trigger downstream actions. When that autonomy enters sales, a small misstep in disclosures, internal approvals or contract terms can create legal exposure, damage trust or lock the business into unintended obligations. The risks are not just about what the AI agent says, but about what it sets in motion and who is accountable when things go wrong.

To illustrate, in Anti-Money Laundering (AML) compliance, the inability to explain AI-driven decisions could lead to regulatory scrutiny and potential fines.

To deploy agentic AI responsibly, businesses need to have foundational pillars and hygiene checks. A future-proof data infrastructure, for example, ensures that the AI agent’s decisions are based on accurate, up-to-date information. Process mapping is also crucial – unclear handoff points and exception criteria can lead to an AI agent making commitments beyond its scope.

Beyond infrastructure, businesses must take ownership of oversight. AI operates probabilistically, and its decisions don’t exist in a vacuum. Policy engines and compliance modules should be built in from the start to ensure that the AI agent’s actions remain defensible. This entails building explainability mechanisms and structured audit logs that document every decision: what choices were made, why they were made and how compliance criteria influenced the outcomes. These logs should be easily ingestible by compliance management tools and readily available for regulatory audits. None of these capabilities is automatic or guaranteed; only with clear guardrails in place can the AI agent act autonomously within its parameters.
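For illustration only, here is a minimal sketch of what such a guardrail layer could look like. Every rule, threshold and name in it (check_policy, log_decision, the 20% discount cap, the GDPR clause check) is a hypothetical placeholder rather than a reference implementation; the point is simply that each proposed action passes an explicit policy check and leaves a structured audit record before anything is executed.

```python
# Illustrative sketch: a policy gate between an AI sales agent's proposed
# action and its execution. All rules and names here are hypothetical;
# a production system would plug into real policy and compliance tooling.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProposedAction:
    action_type: str   # e.g. "offer_discount", "send_contract"
    details: dict      # parameters the agent wants to commit to
    rationale: str     # the agent's own explanation of the decision

@dataclass
class PolicyDecision:
    allowed: bool
    reasons: list = field(default_factory=list)

def check_policy(action: ProposedAction) -> PolicyDecision:
    """Apply explicit, auditable rules before any autonomous commitment."""
    reasons = []
    if action.action_type == "offer_discount" and action.details.get("percent", 0) > 20:
        reasons.append("Discount exceeds the 20% cap; requires human approval.")
    if action.action_type == "send_contract" and not action.details.get("gdpr_clause"):
        reasons.append("Contract is missing the required GDPR data-processing clause.")
    return PolicyDecision(allowed=not reasons, reasons=reasons)

def log_decision(action: ProposedAction, decision: PolicyDecision) -> None:
    """Append a structured audit record: what was decided, why, and when."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": asdict(action),
        "allowed": decision.allowed,
        "policy_reasons": decision.reasons,
    }
    with open("audit_log.jsonl", "a") as fh:
        fh.write(json.dumps(record) + "\n")

def execute_with_guardrails(action: ProposedAction) -> str:
    decision = check_policy(action)
    log_decision(action, decision)   # every decision is documented, allowed or not
    if not decision.allowed:
        return "escalated_to_human"  # hand off instead of acting outside its parameters
    return "executed"

# Example: a 30% discount proposal is logged and escalated rather than executed.
result = execute_with_guardrails(ProposedAction(
    action_type="offer_discount",
    details={"percent": 30, "customer": "ExampleCorp"},
    rationale="Customer asked us to match a competitor's quote.",
))
# result == "escalated_to_human"
```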

The EU AI Act alone carries penalties of up to €35 million, or 7% of global annual turnover, for the most serious violations. To avoid such pitfalls, businesses must intentionally design AI sales agents capable of navigating complex compliance demands transparently and responsibly.

What would this look like in practice? Take a global software provider deploying an autonomous AI sales copilot capable of independently negotiating deals, finalizing contracts and handling compliance paperwork across multiple regions.

Embedded compliance logic automatically tailors contract terms to meet Europe’s General Data Protection Regulation (GDPR) standards, while aligning pricing and offers with competition regulations in Southeast Asian markets.

Integrated directly with policy management and e-signature platforms, the AI agent autonomously completes the necessary paperwork, inserts accurate regulatory disclosures, routes contracts for internal approvals and automatically creates structured audit trails.

As ethical or compliance complexities emerge, the AI proactively collaborates with humans, ensuring every finalized agreement meets regulatory, ethical and reputational standards.
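One way to picture that handoff, continuing the same hypothetical sketch: region-specific clause lists and a small set of escalation triggers decide whether the agent can finalize a contract on its own or must route it to a human. The region codes, clause names and triggers below are invented for illustration, not actual legal or product content.

```python
# Illustrative sketch of region-aware contract assembly. Clause names,
# region codes and escalation triggers are placeholders only.
REGIONAL_CLAUSES = {
    "EU": ["gdpr_data_processing", "eu_standard_contractual_clauses"],
    "SEA": ["local_competition_disclosure", "regional_pricing_terms"],
}

ESCALATION_TRIGGERS = {"custom_liability_terms", "government_customer", "novel_data_use"}

def assemble_contract(region: str, base_terms: dict, flags: set) -> dict:
    """Attach region-specific clauses and decide whether a human must sign off."""
    contract = dict(base_terms)
    contract["clauses"] = REGIONAL_CLAUSES.get(region, [])
    contract["requires_human_review"] = bool(flags & ESCALATION_TRIGGERS)
    return contract

# Example: an EU deal with an unusual liability request gets its standard
# clauses inserted automatically but is flagged for human approval, and the
# routing itself would be written to the audit trail before anything is signed.
deal = assemble_contract("EU", {"customer": "ExampleCorp", "price": 120000},
                         flags={"custom_liability_terms"})
# deal["requires_human_review"] is True
```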

This scenario illustrates that agentic AI represents far more than incremental progress. It signifies a profound shift in sales technology, business strategy and especially data utilization, which remains a barrier for 95% of enterprises. Its real value will hinge on trust and responsibility – so much so that by 2028, 40% of CIOs are expected to require their companies to have AI agents that manage other AI agents.

Agentic AI for sales must handle compliance as proactively and intelligently as humans, if not better. It has every potential to become your company’s best salesperson, but only if compliance remains central, proactive and foundational to every AI-driven interaction – from design to deployment.