By Alex Penk - 06/06/2018
When AI goes wrong


You may have heard that the robots are coming to take our jobs, thanks to artificial intelligence. But a recent New Zealand report, Artificial Intelligence: Shaping a Future New Zealand, says that only 10 percent of “normal job creation and destruction” will be due to AI. The biggest issues with AI may be about ethics, not employment.

In theory, AI offers an impartial tool to make evidence-based decisions, instead of leaving them up to the foibles and prejudices of an individual. Associate Professor Colin Gavaghan, of Otago University, points out that AI often has a “veneer of objectivity because people think machines can’t be biased.” The trouble is that the biases of developers can be built into the tool itself. For example, the Artificial Intelligence report notes that judges in the US have been using artificial intelligence to help sentence offenders. The AI they were using turned out to be biased against black defendants because it was based on “historical sentencing data.”

Militaries around the world are also considering the development of “lethal autonomous robotics” which, once enabled, would be able to kill humans without any direct human control. When and how machines should be empowered to kill is a fraught ethical question.

There are other issues with AI, like simple failure, but AI isn’t a bad thing by itself. There will also be benefits. AI could be used to carry out the kind of number-crunching necessary to detect complex fraud, as the New Zealand report Determining our Future has pointed out. The problem is that development of AI is running well in advance of public awareness, ethical reflection, and legal and regulatory frameworks that could make the most of the benefits and minimise the risks.

This is a common problem with technology because it’s hard to come up with good ways of thinking about things that haven’t been invented yet. Unfortunately, this gap in our ethical thinking is often replaced by what’s known as the “technological imperative,” the belief that if new technology exists, we should use it. This can lead to us deploying technology before we’ve worked through all the implications.

For example, the Artificial Intelligence report says that “robo-advisors” are coming to New Zealand. These AI advisors may be able to give consumers financial advice more cost-effectively and quickly than talking to a real person, but before we start to use them we need to answer questions like: who is responsible if the advice they give is wrong? The person who created them, the person who chose to rely on them, or someone else?

Determining Our Future called for the creation of a multidisciplinary “high-level working group” featuring “expertise in science, business, law, ethics, society and government,” and the recent creation of an AI and Public Policy Centre at Otago University is a positive step. These are the kinds of steps that could help our ethical and legal frameworks catch up with the technological development that’s already taking place, and prevent the technological imperative from pushing us into places we don’t want to go.
