Artificial Intelligence (AI) is on the rise in the 21st century and has become an essential part of IT. But even though AI can solve complex problems that are beyond human ability, is there still room for the human brain in the modern workforce? Andrew Quixley, Data Science & AI Leader at IBM, shared his views.
Artificial Intelligence is being used in two broad ways. The first of these is to analyse very big data sets much more deeply and effectively than ever before, to produce better answers.
The big improvement comes from the ability of AI to make sense of unstructured information, such as words, speech and images. Doing so with a computer means it can be done at a speed and scale that no human could match. The progress with interpreting the content of images (including video) is especially impressive.
The second main way that AI is being used is to automate tasks that require decision-making. If you have a machine that makes a widget, and it’s the same widget every time, then there’s no decision to be made. But if there are some decisions to be made, and the decisions are not extremely complex or nuanced, then a machine can literally learn to make the same decisions that the humans make, allowing the decision-making to be automated from then on. It’s called Robotic Process Automation, but to clarify, the term refers to a computer making decisions; it doesn’t usually involve a physical robot.
Let’s consider an example. Imagine a company that operates a life insurance business. When a claim comes in, it needs to be assessed for payment. Some of the decisions will be very clear-cut – usually at either end of the spectrum: definitely PAY or definitely REJECT. These decisions can be automated with low risk. The claims that lie in the grey area in between – where the right (or best) decision takes much more complex thinking – remain the task of the human assessors.
A system that can make decisions like insurance claim assessments is created by letting a Machine Learning model study the past decisions taken by humans and build an algorithm that codifies those prior decisions. That algorithm is then applied to decide future claims.
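To make the idea concrete, here is a minimal sketch in Python of how such a triage system might work. It is an illustration, not a description of any real insurer’s or IBM system: the features, the training data, the confidence thresholds and the choice of a scikit-learn classifier are all assumptions.

```python
# A minimal sketch, assuming invented claim features and thresholds:
# a classifier learns from past human claim decisions, auto-decides only
# the clear-cut cases, and refers the grey area to human assessors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical history: (claim_amount, policy_age_years, prior_claims),
# labelled with the human assessor's decision (1 = PAY, 0 = REJECT).
X_history = np.array([
    [1_000, 8, 0],
    [90_000, 1, 4],
    [2_500, 5, 1],
    [75_000, 2, 3],
])
y_history = np.array([1, 0, 1, 0])

model = RandomForestClassifier(random_state=0).fit(X_history, y_history)

def assess(claim):
    """PAY/REJECT clear-cut claims; refer everything else to a human."""
    p_pay = model.predict_proba([claim])[0][1]  # model's confidence in PAY
    if p_pay >= 0.9:
        return "PAY"            # definitely-pay end of the spectrum
    if p_pay <= 0.1:
        return "REJECT"         # definitely-reject end of the spectrum
    return "REFER TO HUMAN"     # the grey area in between

print(assess([1_200, 7, 0]))    # close to a past PAY case: likely "PAY"
print(assess([30_000, 3, 2]))   # in between: likely "REFER TO HUMAN"
```

The design point the sketch tries to capture is that the model only acts where its confidence is extreme; everything else stays with the human assessors, mirroring the clear-cut/grey-area split described above.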
What are the security implications of using Artificial Intelligence?
AI has numerous implications for security. Starting with data security and privacy, it’s clearly important to safeguard the data that AI uses. At IBM, many of our AI services are made available as web services in the cloud. Businesses need to know where their data is being processed or stored in order to comply with laws such as the GDPR. Our response to this need – as a vendor – is to make our AI available in private clouds as well as public clouds.
In the domain of cybersecurity, Artificial Intelligence and Machine Learning have significant implications in, for example, encryption and access control. AI is useful for both making and breaking codes, and the two sides can be thought of as constantly trying to leapfrog each other. We have to assume that the technology will find its way into the hands of ‘bad actors’, so it’s important that the good guys can stay one step ahead.
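As one small illustration of Machine Learning on the defensive side – an invented example, not a reference to any IBM product – an anomaly detector can be trained on a log of normal access events so that unusual login patterns are flagged for human review. The log features and the use of scikit-learn’s IsolationForest are assumptions made for the sketch.

```python
# A hedged sketch of ML-assisted access control: an IsolationForest
# learns what normal access events look like and flags outliers.
# The features (hour of day, failed attempts, MB downloaded) are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical log of normal access events.
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [13, 1, 9], [15, 0, 18], [9, 0, 11], [16, 1, 14],
])

detector = IsolationForest(random_state=0).fit(normal_logins)

# predict() returns 1 for normal-looking events and -1 for anomalies.
# A 3 a.m. login with seven failed attempts and a 900 MB download
# should stand out against the history above.
print(detector.predict([[3, 7, 900], [10, 0, 13]]))  # likely: [-1  1]
```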
If it were easy, there wouldn’t be companies that exist solely to develop and maintain anti-virus software, or that hire ethical hackers to deliberately try to break codes and expose their weaknesses. Inevitably, AI will be a weapon on both sides.
What advice would you give to industry leaders looking to use AI?
I believe it’s important to use AI ethically and responsibly. There are global standards – such as the 23 Asilomar AI Principles or the 8 Tenets of the Partnership on AI – that define codes of practice for the companies that have aligned with them, IBM included.
We have to be conscious that AI does cause disruption and does change the nature of work. My recommendation to business leaders in South Africa – where we have high unemployment – is to adopt AI at a pace humans can reasonably bear. For example, let’s suppose your business has 1,000 people working in a call centre and you don’t care about their welfare. You could lay off 950 of them tomorrow by switching on an automatic chatbot, but that would cause economic pain for 950 families. Clearly that is not humane or reasonable, nor is it necessary. Call centres generally have high levels of staff turnover, so, if you manage the transition carefully and responsibly, the call centre can shrink gradually over time simply by not hiring as many people as you used to.
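The arithmetic behind that gradual transition is simple to sketch. Assuming a 30% annual turnover rate – an illustrative figure, not one quoted in the interview – attrition alone shrinks the team quickly:

```python
# A toy version of the arithmetic: with an assumed 30% annual staff
# turnover and a freeze on replacement hiring, a 1,000-agent call
# centre shrinks by more than half in three years, with no lay-offs.
headcount = 1000
annual_turnover = 0.30  # illustrative assumption

for year in range(1, 6):
    headcount = round(headcount * (1 - annual_turnover))
    print(f"Year {year}: {headcount} agents")
# Year 1: 700 ... Year 3: 343 ... Year 5: 168
```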
There will always be things that humans can do better than machines. The most successful businesses are not going to be entirely automated; they will combine the very best that AI can bring with the best that humans can bring.
What is the future for the technology?
Getting the most out of advances in AI generally requires giant data sets and massive processing power. We’re seeing more graphics processing units (GPUs) being used in AI because of their ability to handle giant data sets. When quantum computing becomes mainstream, maybe in three to five years, we’re going to see a giant upswing in computing power, with the ability to crunch much bigger numbers and solve much more complicated problems.
Personally, I think AI might disappear as a term and simply be absorbed back into the computing field. In a sense, everything we’ve done since the first cave painting or the first abacus could be considered Artificial Intelligence: cognitive processes of the human mind being carried out outside the brain, by something other than a human.