Now is the time for organisations to prioritise their ethics efforts and ensure AI is applied appropriately. Joanna Hu, Manager, Data Science (Machine Learning) at Exabeam, explores why CIOs need to make AI ethics a top priority, ensuring their organisation keeps pace with a fast-evolving regulatory landscape.
We are now facing the fourth industrial revolution. Data has become the most valuable asset, due in no small part to the fact that AI technology relies heavily on it. Companies need more access to data than ever before, but with this comes a need to be more open to sharing data – and, more importantly, to share it in a secure and well-regulated way that does not violate privacy. Consumer-facing companies also need to standardise and simplify the procedure that allows users to choose whether or not to share their data.
The UK government’s recent establishment of the world’s first national AI ethics advisory board – the Centre for Data Ethics and Innovation – is a landmark event. Working with government, regulators and industry to lay the foundations for AI, the Centre’s remit is to anticipate gaps in the governance landscape, agree and set out best practice to guide ethical and innovative uses of data, and advise government on the need for specific policy or regulatory action.
Organisations that develop or use AI-powered systems should take note. According to a recent Deloitte survey, 76% of executives say they expect AI to ‘substantially transform’ their companies within the next three years. Yet one-third identified ethical risks as one of their top three concerns about the technology.
The use – or potential misuse – of data
AI systems learn from the datasets they are trained on. But how these datasets are compiled can introduce assumptions or biases – around gender, race or income – that influence how the system behaves. Consider how a recruitment tool trained on CVs from a male-dominated field unintentionally learned to favour male candidates.
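To make the risk concrete, a dataset audit can be run before any model is trained. The sketch below is a minimal illustration, assuming a tabular training set; the column names (`gender`, `hired`) and the figures are entirely hypothetical.

```python
# Minimal sketch: auditing group representation in a training set
# before model development. Column names ('gender', 'hired') are
# purely illustrative, not from any real system.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Report each group's share of the data and its positive-label rate."""
    return df.groupby(group_col).agg(
        share=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),
    )

# Toy example: a heavily male-skewed historical dataset.
train = pd.DataFrame({
    "gender": ["M"] * 80 + ["F"] * 20,
    "hired":  [1] * 48 + [0] * 32 + [1] * 6 + [0] * 14,
})
print(representation_report(train, "gender", "hired"))
# Output shows men are 80% of the data and hired at twice the rate
# of women - a skew a model would likely learn and perpetuate.
```

A report like this does not prove bias on its own, but a large skew in either column is a signal to investigate before training proceeds.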
Similarly, a lack of transparency around the data models and information infrastructures that power AI systems – what data is used and how decisions are made – can create comparable problems across ethical AI design, development and deployment.
The fast-paced adoption of data and Machine Learning to automate decision-making and transactional tasks in every field of activity – government, healthcare, agriculture, policing, finance and so forth – means concerns about the potential misuse of these technologies cannot be ignored.
The fallout can be seen in the public mistrust of social networking platforms, fuelled by the Cambridge Analytica scandal and a growing focus on how organisations use personal information and data. Given the widespread recognition of the social and public dimensions of AI, concerns that organisations could use AI-powered tools to intentionally – or inadvertently – invade privacy, cause harm, or commit unfairness or moral wrongs must be addressed.
Risk and reward – The benefits of taking an ethical approach
The UK government’s £9 million investment in a national advisory board is being driven by a desire to ensure safe, ethical and ground-breaking innovation in AI and data-driven technologies – and propel UK leadership in this arena on the global stage. Earlier this year, a House of Lords select committee report on the impact of Artificial Intelligence on the UK’s economy and society identified ethics as an area where the country could gain a commercial edge on a global scale. Indeed, it has been estimated that AI could add an additional £630 billion to the UK economy by 2035.
Enterprises and public sector bodies looking to take advantage of these opportunities are recognising that ethical judgements will need to be made about how we use and apply data – and are putting appropriate accountability structures in place. The year 2018 saw a number of major technology vendors – including Google and IBM – publish AI ethics guidelines. Meanwhile, the European Commission recently published its Ethics Guidelines for Trustworthy AI.
These moves clearly signal that now is the time for organisations to prioritise their ethics efforts to ensure AI is being applied appropriately – in other words, that outcomes are fair, transparent, legal and aligned with human values. Future commercial performance and corporate reputations will depend on enforcing appropriate AI standards. At a bare minimum, that will require a process of oversight and evaluation, as well as individual accountability.
The issue of trust and data assets
The UK government is striving to enable sustainable and trusted data infrastructures that maximise data use and value to the benefit of the economy and society in general. It’s a concept that underpins eGovernment, as well as initiatives like Smart Cities.
Working with the Open Data Institute, the government is exploring the potential of ‘data trusts’ that allow two or more organisations to share data in a safe, fair and ethical way so they can work together to tackle issues at a local level – enabling open collaboration models that reduce cost and create value.
When it comes to the development and application of AI, an ethical framework should take into consideration three key deployment areas:
- Creation – Does the AI use training data that poses a significant risk to an individual’s right to privacy? Is it representative and does it contain historic biases that could be perpetuated?
- Function – Are the assumptions used by the AI, and the processes that power them, reasonable and fair? Can anyone understand how the AI works and audit how a given output was created? (A minimal audit sketch follows this list.) Can you protect against hacking or manipulation?
- Outcomes – Is AI being used to do anything unethical? Has appropriate oversight and evaluation been applied? Who is ultimately responsible for decisions made?
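One way to approach the audit question under ‘Function’ is to compare a deployed model’s outcome rates across groups. The following is a minimal sketch, not a compliance tool: the `gender` and `approved` columns and all figures are hypothetical, and the four-fifths threshold mentioned in the comments is a commonly cited rule of thumb rather than a universal legal standard.

```python
# Minimal sketch of the 'Function' audit above: comparing a model's
# positive-outcome rates across groups. Column names and data are
# hypothetical.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Ratio of the lowest group's positive-prediction rate to the
    highest's. A ratio near 1.0 suggests parity; the oft-cited
    'four-fifths rule' flags ratios below 0.8 for review."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.min() / rates.max()

# Toy predictions: men approved at 70%, women at 40%.
preds = pd.DataFrame({
    "gender":   ["M"] * 50 + ["F"] * 50,
    "approved": [1] * 35 + [0] * 15 + [1] * 20 + [0] * 30,
})
ratio = disparate_impact_ratio(preds, "gender", "approved")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.70 = 0.57 -> review
```

A check like this is only one piece of oversight – it says nothing about why the rates differ – but it gives the evaluation process a concrete, repeatable starting point.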
The EU’s General Data Protection Regulation (GDPR) represents the first step in a raft of regulations that aim to establish clear governance principles in relation to data. The California Consumer Privacy Act (CCPA) takes effect in 2020 and is widely considered the most comprehensive privacy legislation in the United States. Organisations must get ahead of this fast-evolving regulatory curve and integrate new control structures and processes designed to manage AI risks and ensure AI technologies are used appropriately.
For Luciano Floridi, Chair of The Alan Turing Institute’s Data Ethics Research Group and Ethics Advisory Board, the success of AI will depend on the use of well-curated, up-to-date and fully reliable datasets. For him, quality and provenance – where the data comes from – are critical.
When it comes to addressing legal compliance and ethical issues such as privacy, consent and other social concerns, he believes the answer lies in synthetic data generated by AI itself. In the foreseeable future, he predicts, a move from anonymised historic data to entirely synthetic data will be key to ensuring privacy and confidentiality are not infringed at the development stage of AI solutions – although he acknowledges that predictive privacy harms at the deployment stage will still need to be managed carefully.
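Floridi does not prescribe a particular technique, but the underlying idea can be illustrated with a deliberately naive generator: fit simple distributions to real records, then sample new ones, so no original record is ever released. Everything below is invented for illustration; production systems would use far richer generators (copula-based models, GANs) that preserve cross-column correlations, which this sketch ignores.

```python
# Naive illustration of the synthetic-data idea: fit simple per-column
# distributions to real data and sample new records from the fit.
# Column names and values are invented; real generators preserve
# cross-column correlations, which this sketch does not.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

def synthesize(real: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample n synthetic rows: Gaussians for numeric columns,
    empirical category frequencies for everything else."""
    out = {}
    for col in real.columns:
        if pd.api.types.is_numeric_dtype(real[col]):
            out[col] = rng.normal(real[col].mean(), real[col].std(), size=n)
        else:
            freqs = real[col].value_counts(normalize=True)
            out[col] = rng.choice(freqs.index.to_numpy(), size=n, p=freqs.to_numpy())
    return pd.DataFrame(out)

real = pd.DataFrame({"age": [34, 45, 29, 52, 41], "region": ["N", "S", "N", "E", "N"]})
print(synthesize(real, 3))  # three new records; no original row is exposed
```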
Action steps
Organisations need to act now or risk exposure to financial loss and reputational damage if their approach to AI is not ethical. With regulators and governments preparing to address the AI ecosystem, establishing a dedicated AI governance and ethics advisory board that includes cross-functional leaders and advisers should be the first priority.
When it comes to the development and deployment of AI technologies, companies will need to be confident these systems do not unintentionally encode bias or treat people – employees, customers, or other parties – unfairly. That means putting tools, processes and control structures in place and ensuring that development teams are appropriately trained on ethical AI practices.
Responsibility now falls on the shoulders of data scientists and organisations to act ethically and keep their finger on the pulse when it comes to the evolving regulatory landscape. The ability to take advantage of AI technologies and capture potential future value depends upon it.