
What we do
Advancing safe and responsible AI through research, policy, and practice.
Our Core Activities

Conduct Rigorous Research: We investigate AI and its impacts, synthesising findings into actionable knowledge for specific policy and practice contexts. This enables us to bridge the gap between scientific understanding and responsible real-world application.

Guide Safe and Responsible AI Practice: Through training, advisory services, technical assessments, and practical resources, we help businesses, government agencies, startups, and not-for-profits deploy AI systems safely and responsibly, with guidance grounded in the best available evidence.

Shape AI Policy: We provide evidence-based advice to governments on AI policy, regulation, standards, and safety frameworks. Our research informs scientifically well-founded policies that enable innovation while proactively protecting against genuine harms.

Engage with Pivotal Challenges: Sponsored research and strategic partnerships position us to tackle high-stakes problems where deep technical and scientific AI research is essential. Through collaborations with industry, academia, and government, we translate theoretical insights into practical solutions for these challenges.

Advance Public Understanding of AI: Outreach through training programs, expert forums, community partnerships, and media engagements helps us inform diverse audiences, from not-for-profit leaders to government officials, about the science of AI, its capabilities, and its risks.