
Watch for ‘bad actors’ in AI power struggle, regulators urged

John Davidson, Columnist


One of the nation’s leading artificial intelligence experts has warned that regulators need to start paying attention to anyone throwing vast amounts of computer power at machine learning, to prevent “bad actors” from creating harmful new capabilities for AI.

Tiberio Caetano, chief scientist at the Gradient Institute.

The large language models (LLMs) that underpin services such as ChatGPT have placed new opportunities to create AI functionality in the hands of anyone with enough computing power, according to Dr Tiberio Caetano, co-founder and chief scientist of Gradient Institute, a think tank that works on AI ethics, accountability and transparency.

While regulators are focusing their attention on the algorithms used in machine learning, and on the data that’s used to train those algorithms, too little attention is being paid to the way raw computing power (called compute in the industry) affects the outcomes of AI.

“The really astonishing fact is that if you just increase the size or compute of these models – and size and compute are essentially the same thing – they acquire qualitatively new capabilities,” Dr Caetano said at ChatLLM23, a global AI ethics conference held at the University of Sydney on Friday.

“Traditionally, if you throw more compute power at a problem, everything goes faster. In AI, it gets smarter, too,” he said.


Dr Caetano said that although discussions about regulating AI development were well under way among governments and industry insiders around the world, most proposals had focused on the algorithms used for machine learning and on the data the LLMs are trained on.

He said that because the amount of computing power used fundamentally affected how an AI system would behave, compute needed to be added to the regulatory mix.

Emerging risk

“The reality is that if you have a hundred million dollars and if you are a bad actor, you can build systems because the algorithms are known, and you can get the data from the internet for free,” Dr Caetano said.

“Everyone can start to do these things, so there is this emerging risk of increasing power for agents that are without accountability.

“At the global policy level, it’s absolutely clear we need to start thinking seriously about compute governance.”


Professor Didar Zowghi, who leads the Diversity & Inclusion in AI unit at the CSIRO’s National AI Centre, said the governance of AI had become so problematic so quickly that there needed to be an “all hands on deck” response to dealing with emerging incidents of irresponsible AI usage.

As part of that response, regulators ought to take a leaf out of the aviation industry’s book and create a type of “black box” recorder for the AI sector, she said.

In the same way that the aviation industry investigates accidents and publishes its findings to prevent the same accident from happening again, the AI industry needs to start cataloguing and investigating incidents where the use of AI has led to people suffering discrimination.

The National AI Centre was calling on the AI community to help “develop the framework for this black-box-inspired approach to investigate AI incidents to understand how we can tackle diversity and inclusion,” she said.

John Davidson is an award-winning columnist, reviewer, and senior writer based in Sydney and in the Digital Life Laboratories, from where he writes about personal technology. Connect with John on Twitter. Email John at jdavidson@afr.com

