Science-based clarity in an uncertain AI landscape

Trusted by individuals and organisations across government, industry, academia, civil society, and the community

ACT Education
Australian Human Rights Commission
The Australian National University
Commonwealth Bank of Australia
Consumer Policy Research Centre
Australian Government Department of Industry, Science and Resources
Google.org, Google's philanthropic arm
Komply AI
Monetary Authority of Singapore
Minderoo Foundation
National Artificial Intelligence Centre
NSW Department of Customer Service
Digital NSW
NSW Ombudsman
Reserve Bank of Australia
Seek
Telstra
Telstra Foundation
Timaeus
The Ethics Centre
UnionBank of the Philippines
IAG
University of Sydney
CSIRO's Data61
Latest Work

Guidance, research and events

Resources and reports to enable responsible AI practice.

New Guidance for AI Adoption
Resource · October 21, 2025

The Department of Industry, Science and Resources (DISR) has released Guidance for AI Adoption to enable safe and responsible AI across Australian industry. The Guidance for AI Adoption: Foundations and its supporting templates were developed by Gradient Institute with the National AI Centre. This package of practical tools and templates helps organisations of all sizes put responsible AI principles into action. Rather than adding complexity, it focuses on essential practices that build trust and accountability, helping organisations adopt AI safely, confidently, and to its fullest potential.

Read more
Gradient Institute Response to the Productivity Commission's Interim Recommendations on AI
Submission · September 25, 2025

Gradient Institute has responded to the Productivity Commission's interim report on 'Harnessing data and digital technology', arguing that its proposed regulatory approach applies conventional governance principles to a fundamentally unconventional technology, one that may lead to a paradigm shift in how society operates.

Read more
Gradient Institute provides Responsible AI Support to Australian Not-for-Profits
Education · August 22, 2025

Australian not-for-profit organisations (NFPs), with their socially aligned missions, knowledge, and experience, could significantly amplify their impact by using artificial intelligence (AI) capabilities in a safe and responsible way. Yet AI adoption poses unique challenges for the NFP sector, which often operates with constrained resources and a natural aversion to risk given its funding environment and impact profile.

In response to these challenges, over the past year Gradient Institute delivered a dedicated program to uplift the capability of Australian NFPs and social enterprises (SEs) to develop and use AI responsibly. With support from Google.org, the program provided mission-driven organisations with the knowledge, confidence, and practical guidance to explore AI innovation while remaining mindful of potential risks and ethical considerations.

The initiative was delivered through two streams: education, offering a suite of learning options including introductory and specialised courses, live webinars, and self-paced eLearning modules tailored to NFP and SE needs; and advisory, providing NFPs with actionable responsible AI goals and tailored support through exploratory workshops, individual consultations, and assistance with AI governance planning. This approach enabled organisations to build a strong foundation in responsible AI and receive targeted assistance relevant to their operational context and social mission.

Read more
New Report Analysing Multi-Agent Risks
Report · July 29, 2025

Organisations are starting to adopt AI agents based on large language models to automate complex tasks, with deployments evolving from single agents towards multi-agent systems. While this promises efficiency gains, multi-agent systems fundamentally transform the risk landscape rather than simply adding to it. A collection of safe agents does not guarantee a safe collection of agents: interactions between multiple LLM agents create emergent behaviours and failure modes that extend beyond individual components.

Read more
Australian AI Safety Forum 2024
Event · November 18, 2024

On 7–8 November 2024, the inaugural Australian AI Safety Forum was held at the Sydney Knowledge Hub at the University of Sydney, bringing together researchers, policymakers, and industry representatives to discuss AI safety as documented in the Interim International Scientific Report on the Safety of Advanced AI.

Read more
Australian Government releases mandatory AI guardrails paper and voluntary AI safety standard
Regulation · September 5, 2024

The Australian Government has released two key documents aimed at promoting safe and responsible AI in Australia, with Gradient Institute contributing expertise to their development.

Read more
About Gradient

Navigating AI with clarity and discernment

Gradient Institute is an independent nonprofit research organisation helping society understand, manage and shape AI as it transforms our world. We conduct, distil, and interpret scientific research to bring clarity where decisions are complex, stakes are high, and uncertainty is the norm. Our approach is built on:

Independent research

Rigorous research into AI systems, capabilities and risks, grounded in how they affect people, institutions, and society in practice.

Public-interest insight

Research designed to support policy, governance, and responsible use, with public benefit, not private advantage, as the guiding principle.

Practical clarity

Clear, science-based explanations that help decision-makers and the public understand trade-offs, limits, and consequences, so decisions become responsive rather than reactive.

What We Do

We help people build calibrated trust in AI: trust that is proportionate to the evidence, no more and no less. More trust than the evidence warrants creates risk. Less holds back AI’s potential. We provide the research, methods, and frameworks that ground trust in evidence and rigorous analysis, so that AI can be used responsibly, governed effectively, and questioned rigorously.

Rigorous Independent Research

We conduct, distil, and interpret research, including through sponsored research and strategic partnerships, on AI capability, safety, and societal impact. Our work generates new knowledge and insight that helps tackle high-stakes challenges where technical and scientific AI research is essential.

Explore research

Advisory and Expert Guidance

We partner with governments, organisations, civil society, and communities to bring scientific understanding into real decisions about AI systems and their implementation. We help people understand the terrain and support thoughtful implementation as AI reshapes their world.

Explore Advisory

Education and Capability Building

We help people across government, industry, civil society, and the public build genuine understanding of AI: what it is, what it can and cannot do at any point in time, where it can go wrong, and how to engage with it responsibly. Our programs build skills, discernment, and calibrated trust, supporting clear-eyed decision-making rather than blind or reluctant adoption.

Learn more about Education
Partners

Partner with Gradient Institute

We collaborate with governments, research institutions, universities, industry, and civil society on work aligned with our public-interest mission. Our partnerships focus on bringing rigorous research and clear judgment into decisions where AI carries real societal consequences.

Want to collaborate with us?

Contact us

Foundation Members:

IAG
The University of Sydney

Turn AI complexity into clarity

Contact us