The client is a market-leading online marketplace operating across multiple countries. The organisation uses artificial intelligence (AI) systems as core components of its marketplace business, including for matching participants in the marketplace.
The client's team has always recognised, and acted on, the need to use AI carefully and responsibly. They engaged Gradient Institute to help them further advance their responsible use of AI, including particular AI systems, and to help refine key responsible AI practices, roles and responsibilities.
As part of the engagement, Gradient reviewed the client's pre-existing AI governance and performed a detailed examination of two in-production AI systems to assess their key ethical risks and how those risks are managed.
Based on this assessment, Gradient provided technical subject-matter expertise to support system improvements, along with recommendations on incorporating additional responsible AI practices into the organisation's model governance framework. The client's staff were closely involved in all aspects of the work and made improvements to the systems during the project.
Contextualising principles such as fairness and transparency in the client's sector raises key ethical questions, such as which sources of inequality are unacceptable and what tradeoffs are posed by choosing a particular set of matching criteria. Over the course of the project we held illuminating discussions around value-laden concepts such as bias and fairness in this context, and how to measure the performance of recommendation models in an unbiased manner. In assessing bias, Gradient Institute distinguishes between algorithmic bias and measurement bias.
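To illustrate the kind of fairness measurement discussed above, a minimal sketch of one common approach is computing the disparity in match rates across participant groups. This is a generic, hypothetical example, not the client's actual data, metric choice or methodology; the group labels and match outcomes are invented.

```python
# Hypothetical illustration: measuring match-rate disparity across
# participant groups in a marketplace matching system. The data and
# groups below are invented; this is not the client's methodology.

def match_rates(matches, groups):
    """Return the fraction of participants matched, per group.

    matches: list of 0/1 outcomes (1 = participant was matched)
    groups:  list of group labels, aligned with `matches`
    """
    rates = {}
    for g in sorted(set(groups)):
        members = [m for m, grp in zip(matches, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def disparity(rates):
    """Largest gap in match rates between any two groups.

    0 means perfect demographic parity on this metric; larger values
    indicate some groups are matched far more often than others.
    """
    values = list(rates.values())
    return max(values) - min(values)

# Toy data: 1 = matched, 0 = not matched.
matches = [1, 0, 1, 1, 0, 1, 0, 0]
groups  = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = match_rates(matches, groups)
print(rates)            # {'A': 0.75, 'B': 0.25}
print(disparity(rates)) # 0.5
```

A metric like this captures only one narrow notion of fairness (parity of outcomes); choosing which metric to apply, and what disparity is acceptable, is exactly the kind of value-laden question the project discussions addressed.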
The client stated that the project was a success, that their team enjoyed working with the Gradient team, and that their AI governance and systems improved as a result of the project.