Autonomous cars 'won't kill insurance'

The chief customer officer of Insurance Australia Group says driverless cars will not kill the insurance sector, but they will substantially change how the industry operates.

Speaking at an Australian Financial Review Future Briefings event in Sydney, IAG's Julie Batch was asked about the impact of driverless cars on insurance, given they are predicted to dramatically reduce the number of accidents.

She said that while an accident-free future would be "amazing", it was very unlikely, adding that the insurance industry was working to predict the shifting make-up of Australia's vehicle fleet and the changing nature of the accidents that would occur.

Insurance Australia Group chief customer officer Julie Batch speaking at The AFR Future Briefings Breakfast at The Ivy Ballroom in Sydney. Janie Barrett

"There will be no need for insurance in the form it currently takes," Ms Batch said.

"There are likely to be less accidents, but because someone may run on the road, or an algorithm might fail, there will be a liability risk that emerges because of that.

"It might not be how the car and another vehicle interact, but it might be about how the car interacts with the environment or the road system that it's driving on."

About 60 per cent of IAG's overall revenue comes from motor vehicle insurance premiums, and Ms Batch said IAG spent considerable time preparing for an increasingly AI-dense world across its business lines.

This goes far beyond driverless cars to incorporate preparations for big structural economic changes set to hit other industries.

For instance, insurers have been among the first movers in modelling how climate-related disasters such as bushfires and floods will play out over the next hundred years. From a tech point of view, Ms Batch said AI was "probably the most profound technology of this century".

"We are saying, 'what do we think is going to happen to the way humans are going to interact with the assets that they put on the earth. And how are those assets changing'? That's kind of our job.

"We won't get that 100 per cent right, but we spend a lot of time thinking about global trends along long time horizons. What are the emerging risks that are coming? What are the societal trends? What political influences might change behaviour? That's what we talk about, so we can anticipate how best to protect our customers."

Ms Batch said IAG's own use of AI in its internal processes was still a long way from systems that involve complex emotional and social reasoning.

Instead, it is using machine learning algorithms so that computers, rather than people, assess pictures of accidents and analyse call centre data.
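
To make that concrete, the sketch below shows roughly what such an image-assessment pipeline can look like. It is an illustration only, not IAG's actual system: the severity labels, the photo filename and the choice of a pretrained ResNet are assumptions, and the classification head would still need fine-tuning on labelled claim photos before its output meant anything.

```python
# Minimal sketch of automated damage triage from accident photos.
# Labels, model choice and file path are hypothetical assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Illustrative severity classes an insurer might use.
CLASSES = ["minor", "moderate", "severe", "write_off"]

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained ResNet with its final layer swapped for our classes;
# in practice this head would be fine-tuned on labelled claim photos.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

def assess(photo_path: str) -> str:
    """Return the predicted damage-severity class for one photo."""
    image = preprocess(Image.open(photo_path).convert("RGB"))
    with torch.no_grad():
        logits = model(image.unsqueeze(0))  # add batch dimension
    return CLASSES[int(logits.argmax(dim=1))]

print(assess("claim_photo_front_bumper.jpg"))  # hypothetical photo
```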

Lawyer and author Jamie Susskind describes machine learning as "a profound development because it liberates AI systems from the limitations of their human creators".

It also has consequences for ethics and accountability, because these systems are making decisions about customers' premiums.

IAG-backed not-for-profit AI research company The Gradient Institute is working on designing ethical AI frameworks and teaching them to the people building systems in the industry.

"When we get to those ethical questions, it's important that it's not business people who are economically motivated, potentially, trying to answer questions that are best answered by people who have spent their whole careers in philosophy and moral science," Ms Batch said.

Gradient Institute chief executive Bill Simpson-Young said he hoped AI development would take a more careful approach than the one captured in Facebook's early motto of "move fast and break things".

"You've got a lot of data-driven automated decisions happening at scale. Those decisions are having massive impacts on people's lives," he said.

"If AI is used well, that's a great thing. But there's a lot that can go wrong.

"Because you are relying on data, there could be bias in the data, there could be bias in the world that you don't want to replicate in your system, there could be issues with the algorithm."

Mr Simpson-Young said because AI algorithms could not be right 100 per cent of the time, part of building ethical AI systems involved distributing the risk of error fairly across any given population.
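
One common way to express that idea in code is to compare a model's error rates across population groups, as in the sketch below. This is a hedged illustration rather than Gradient Institute's methodology: the two groups, the synthetic outcomes and the 90 per cent accuracy figure are all assumptions made for the example.

```python
# Minimal sketch of checking whether a model's errors fall evenly
# across groups (equalised error rates); the data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n)        # hypothetical segments
actual = rng.integers(0, 2, size=n)           # true outcomes (0 or 1)
predicted = np.where(rng.random(n) < 0.9,     # model right ~90% of the time
                     actual, 1 - actual)

for g in ["A", "B"]:
    mask = group == g
    error_rate = np.mean(predicted[mask] != actual[mask])
    print(f"group {g}: error rate {error_rate:.3f} over {mask.sum()} cases")

# A large gap between the groups' error rates would mean one segment
# bears more of the risk of being wrongly assessed than the other.
```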

Further, he said it was vital that data scientists working in areas such as insurance were deeply trained both in the technical skills required to do the job and in its more nuanced aspects.

This included recognising that the data they used to build systems, from photographs of faces to loan applications, was never value-neutral.

"There is no such thing as raw data,'' he said. "If you're going to be using data, you need to understand the context of capture, and how that data relates to the real world. If you don't understand that context, you're not going to build a good system."

Natasha Gillezeau is a journalist for The Australian Financial Review based in the Sydney office. Connect with Natasha on Twitter. Email Natasha at natasha.gillezeau@afr.com.au
