Report | December 08, 2023

Investigating Manipulative Applications of Generative AI


With support from the Minderoo Foundation, Gradient Institute is exploring how bad actors could leverage the latest pre-trained language models for personalised persuasion and manipulation.

Our focus is on uncovering and illustrating ethical risks associated with this technology, contributing to a more informed and responsible technological landscape. We are consulting our network of experts as we formulate scenarios where bad actors use large language models for applications such as:

  • Covertly collecting personal information by posing as a helpful assistant, exploiting trust to deceive individuals into sharing details unknowingly.

  • Applying personalised persuasion techniques to endorse products or political views without revealing motives, tailoring messages to exploit the information asymmetry.

  • Distorting government representatives' perceptions of sentiment and the salience of voting issues in their electorates, thereby influencing policy decisions and undermining the democratic process.

Moving beyond theory, we are actively developing software that demonstrates the viability of these risks, providing an interactive experience for senior decision-makers in government and industry. Our ultimate aim is to equip them with the knowledge needed to prompt a thoughtful reassessment of risks, practices, and legislative imperatives as they navigate the rapidly evolving AI ecosystem.

Related news

New Guidance for AI Adoption
Resource | October 21, 2025

The Department of Industry, Science and Resources (DISR) has released Guidance for AI Adoption to enable safe and responsible AI across Australian industry. The Guidance for AI Adoption: Foundations and supporting templates were developed b...

Gradient Institute Response to the Productivity Commission's Interim Recommendations on AI
Submission | September 25, 2025

Gradient Institute has responded to the Productivity Commission's interim report on 'Harnessing data and digital technology', arguing that its proposed regulatory approach to AI applies conventional governance principles to a fundamentally...

Gradient Institute provides Responsible AI Support to Australian Not-for-Profits
Education | August 22, 2025

Australian not-for-profit organisations (NFPs), which have socially aligned motivations, knowledge, and experience, could significantly amplify their impact by using artificial intelligence (AI) capabilities in a safe and responsible...
