

AI for Accessibility: Adding Real-Time Audio Description to Games Using Generative AI
114 million blind and visually impaired people are currently excluded from mainstream video games because of their disability.
Dr Zhang’s groundbreaking research demonstrates that live ‘Audio Description’ (AD) can provide these players with a major “unlock”, enabling them to enjoy mainstream video games for the very first time.
But there’s a problem: due to the dynamic nature of the medium, AD for video games has to be delivered one-to-one by a human Audio Describer. This means only a handful of players can ever hope to access it.
That’s why Dr Zhang partnered with Meaning Machine, an experimental AI game and tech studio, to create a generative AI solution that can deliver dynamic AD… at scale, and without one-to-one in-person support.
We call this technology “Game Conscious AD”. See it in action here: https://youtu.be/YFNQ4mbf3FA
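The talk abstract doesn’t specify how Game Conscious AD is implemented, but the core idea it describes, generating live description from what the game itself knows, can be sketched in a few lines. The following Python is a hypothetical illustration only: every name in it (SceneState, build_prompt, generate_description) is invented for this sketch, and the model/TTS wiring is deliberately left as a placeholder rather than presented as the team’s actual pipeline.

```python
# Hypothetical sketch of a "game state -> prompt -> spoken description" loop.
# None of these names come from the talk; they illustrate the general idea.
from dataclasses import dataclass


@dataclass
class SceneState:
    """A snapshot of what the player can currently 'see' in-game."""
    location: str
    characters: list[str]
    interactables: list[str]
    recent_event: str


def build_prompt(state: SceneState) -> str:
    """Serialize structured game state into an instruction for a text model."""
    return (
        "You are a live audio describer for a blind player. "
        "In one or two short sentences, describe only what is new or "
        "relevant. Do not repeat earlier descriptions.\n"
        f"Location: {state.location}\n"
        f"Characters present: {', '.join(state.characters)}\n"
        f"Interactable objects: {', '.join(state.interactables)}\n"
        f"Just happened: {state.recent_event}"
    )


def generate_description(prompt: str) -> str:
    """Placeholder for a call to any text-generation API.

    A real build would send `prompt` to a hosted or local LLM, then pass
    the result to a text-to-speech engine, trimmed to fit the gap before
    the next line of game dialogue.
    """
    raise NotImplementedError("Wire up your preferred LLM and TTS here.")


if __name__ == "__main__":
    # Invented example values, not actual Frog Detective game data.
    state = SceneState(
        location="a quiet village square",
        characters=["a nervous shopkeeper"],
        interactables=["magnifying glass", "shop door"],
        recent_event="The player just arrived from the docks.",
    )
    print(build_prompt(state))
```

Because the description is generated from live game state rather than pre-scripted, it can react to whatever the player actually does, which is what distinguishes this approach from the fixed AD tracks used in film and television.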
Using the hit indie title Frog Detective as our testbed (thanks, Worm Club!), we created a version of the game featuring Game Conscious AD, then conducted extensive formal playtests with blind and visually impaired players.
This talk will be the very first presentation of our findings, sharing how we achieved this technological milestone as well as the human impact this technology has on real players. It will also address the wider question of using AI for good, not just profit, calling on the industry to step up to the challenge of making video games more accessible and to enlist cutting-edge technologies to make that possible.




