What we do
We help society understand, govern, and use AI with judgment. We combine deep technical research, practical guidance, education, and public-benefit work to bring clarity to the fog of AI uncertainty.
Our work spans four pillars that together support calibrated, evidence-based decisions about AI across government, industry, civil society, and communities.
Pillar 1
Deep, independent research
Understanding AI progress, safety, and societal impact.
We conduct, distil, and interpret research on AI capability, safety, and sociotechnical impacts, often through sponsored projects and strategic partnerships. Our work generates new knowledge for high-stakes problems where deep technical and scientific understanding is essential—no hype, no hidden agendas, just science people can rely on.
- Study how AI systems actually behave over time.
- Analyse safety, robustness, and failure modes.
- Investigate societal and institutional impacts of AI adoption.
- Publish insights that distil what matters, clearly and responsibly.
Pillar 2
Advisory & expert guidance
Bringing scientific understanding into real decisions.
We partner with governments, organisations, civil society, and communities to integrate scientific understanding of AI into concrete decisions, policies, and systems. Beyond advice, we help people see the terrain clearly and design thoughtful implementation as AI reshapes their world.
- Translate technical research into actionable strategies and safeguards.
- Support policy, regulatory, and governance decisions on AI.
- Review and stress-test AI systems for risk and impact.
- Act as an independent, trusted voice in complex AI decisions.
Pillar 3
Education & capability building
Building skills, judgment, and calibrated trust.
We help people across government, industry, civil society, and the public build a grounded understanding of AI: what it is, what it can and cannot do, where it can fail, and how to engage with it responsibly. Our programs focus on skills, judgment, and calibrated trust—not blind adoption.
- Design learning programs and workshops tailored to different audiences.
- Explain AI capabilities and limitations in clear, non-hyped language.
- Explore real-world scenarios, risks, and governance challenges.
- Equip teams to ask better questions and make informed choices.
Pillar 4
Public benefit & societal engagement
Putting public interest at the centre of AI.
We put public benefit at the centre of everything we do. As an independent nonprofit, we focus on work that scales and helps society make thoughtful, evidence-based decisions about AI. We make AI more understandable, and we empower communities and civil society to shape the systems that shape their lives.
- Collaborate with civil society and community organisations.
- Make technical issues legible to non-technical audiences.
- Contribute to public debates, standards, and policy processes.
- Advocate for careful, purposeful, ethical development and use of AI.