Design Research and Strategy

Making AI systems transparent through explainability

Craig Walker Design and Research

The Project

Bringing together startups from the Asia Pacific region and Europe, along with multidisciplinary experts, in a series of design jams and workshops for product and policy prototyping. The end deliverable was a comprehensive report intended to serve as a guiding tool for future product makers and a medium for change in policies around AI.

Design Challenge

To promote the adoption of trustworthy AI explainability practices, supported by personas and co-created prototypes; to bring to life Meta’s AI Explainability framework; and to identify opportunity areas where policymakers can better support and guide product makers in how they explain AI to their users.

Process & methods

Design and research process I was involved in

  • Analysis and synthesis of data collected from a design jam with the participant startups across the Asia Pacific region and Europe.

  • During the design jam, these startups developed personas based on their target users, which in turn informed the co-creation of design patterns and prototypes demonstrating the trustworthiness and transparency of AI for those personas.

  • Identification of three primary touchpoints of explainability in each startup prototype — ‘upfront’, ‘in context’ and ‘on demand’ — mapped against the pillars of Meta’s AI explainability framework. Assessing these in detail generated evidence-driven product design insights.

  • Further synthesis to bring the key insights to life by highlighting the examples and touchpoints within the fictional prototypes that gave rise to each insight, using feedback prompts such as: How would you bring this insight to life and make it more actionable? Does this insight help validate our internal thinking? How might this insight be further developed?

  • Diving into AI policy to understand where policy needs to align better with product goals and potential. Policymaking approaches to AI explainability need to account for technology, use cases, contexts, applications, ideas and inputs that are continuously and rapidly evolving. This complexity was handled by identifying opportunities for policymakers to determine and prescribe where explainability is required, and to what extent.

  • Find the full report here! This project won a 2023 Good Design Award.

Learnings

In a world where our data is everywhere and AI is part of our daily lives, people have growing anxieties and trust issues. To give people agency over the data they share, explainability must be surfaced at multiple levels of the user interface.

It is important for people to know how the AI works and why it makes their overall experience with the product better.

Explainability must be treated as an interactive process, not a one-time declaration.

Policy needs to continuously improve and update alongside the rapidly accelerating growth that AI is bound to undergo, so that it can match the pace of product makers.

A peek into the Miro board for synthesis, created by me

How explainability is done is more important than what is explained

Ensure that different users get the kind of explanation that is most relevant to them

Final report created by the design team, informed by insights and context set by the research team.
