How do we make artificial intelligence more humane?

Byline: Nicholas Davis and Edward Santow

We've all had something like it happen: one minute you're searching for a present suitable for a two-year-old; the next, ads for nappies and prams are on every site you visit.

It's unsettling. No one feels comfortable being followed surreptitiously by bots as we roam the web, or having companies use what they learn from our online behaviour to promote products and services in creepy ways.

But could concerns around privacy and informed consent - though undeniably important - be distracting us from what we should really be worried about?

The exploitation of personal information for marketing purposes is a real problem. But the more serious risk is that our personal information can be used against us - not just to advertise a product we don't want, but to discriminate against us on the basis of our age, race, gender or some other characteristic we can't control.

Precision prejudice

For example, if you have darker skin, facial-recognition technology is dramatically less accurate than if you have a light complexion. As this technology is progressively rolled out in law enforcement, border security and even financial services, the risk of being unfairly disadvantaged because of your ethnicity increases.

Similarly, there are examples of artificial intelligence (AI) preventing women or older people from seeing certain online employment opportunities.

Not only does this violate the human rights of anyone negatively affected, but it also undermines community trust in AI more broadly. A collapse in community trust in AI would be disastrous, because AI has the potential to be an enormous boon - not just for our economy, but also in making our community more inclusive.

For every instance of AI causing harm, there's also an uplifting counter-example. This could be anything from AI-powered smartphone applications allowing blind people to "see" the world around them, to huge strides in precision medicine.

Our challenge, therefore, is to build enduring trust in the development and use of a tremendously exciting set of technologies, so we can take advantage of the opportunities while addressing the threats to our basic rights. Unfortunately, this challenge is made harder by a damaging but pervasive myth.

Righting the wrongs

Too often we're told that if Australia is to compete globally in developing AI products, Australian researchers and companies must not be fettered by human...
