Wesley Goatley

The Hope

2025

Commissioned by: Bridging Responsible AI Divides (BRAID) Programme with funding from the Arts & Humanities Research Council (AHRC).

Exhibitions: Tipping Point, InSpace Gallery, Edinburgh Festival 2025

A proposal for a better use of AI for everyday people: one focused on community, transparency, and ethics.

An image of The Hope, a gray device with a square base and a rectangular screen attached.

The Hope demonstrates how AI could be radically transformed, starting with one simple change: it’s no longer called ‘Artificial Intelligence’.

The device is set in a social housing tower block in Tower Hamlets, London, whose diverse residents have started to slowly adopt this low-cost, customisable, repairable, and open-source device in their homes. This voice and touch interactive ‘smart home’ device uses ‘AI’, which in this speculative scenario stands for ‘Assistive Interfaces’, meaning that all the technologies within it have been thoughtfully designed to assist the residents in their lives, rather than automate their labour or create generative hallucinations. The device allows for the typical uses of voice interfaces that people enjoy: setting timers, checking the weather and public transport, controlling connected devices, and playing music. Users can also ask general knowledge queries, but instead of relying on a hallucination-prone large language model like ChatGPT, the system uses foundation language models to extract the intent of the query and returns information drawn directly from Wikipedia. The voice interface does not refer to itself as ‘I’, and it does not have a name; after all, it’s an ‘assistive interface’, not something designed to masquerade as an artificial being.
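The query pipeline described above could be sketched roughly as follows. This is a minimal illustration, not the artwork's actual software: the Wikipedia REST summary endpoint is a real public API, but the keyword-based intent extraction below is a simple stand-in for the foundation language model the text describes.

```python
import urllib.parse

# Illustrative sketch of an 'Assistive Interfaces' style query pipeline:
# extract a topic from a spoken query, then point at Wikipedia rather
# than generating an answer with a large language model.

WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/"


def extract_topic(utterance: str) -> str:
    """Stand-in for model-based intent extraction: strip common
    question framing and keep the remaining topic words."""
    framing = {"what", "who", "is", "are", "was", "tell",
               "me", "about", "a", "an", "the"}
    words = [w.strip("?.,!").lower() for w in utterance.split()]
    return " ".join(w for w in words if w not in framing)


def wikipedia_query_url(utterance: str) -> str:
    """Turn a spoken query into a Wikipedia summary request URL."""
    topic = extract_topic(utterance)
    return WIKI_SUMMARY + urllib.parse.quote(topic.replace(" ", "_"))


print(wikipedia_query_url("Who was Ada Lovelace?"))
# → https://en.wikipedia.org/api/rest_v1/page/summary/ada_lovelace
```

Because the answer is retrieved from a single, auditable source rather than generated, the device can always show where its information came from.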

Close up of the Hope device as seen from the side; the screw holes on the arm are visible.
The Hope's design emphasises decolonialism, carbon literacy, accessibility, and transparency. It can be customised to use any combination of computer, microphone, and speaker that is available, and users can edit the 3D files for the design of the device itself. In the London tower block where this is set, the devices owned by each resident create a private mesh network just for the residents: a digital community space that allows them to talk privately with each other, to share data and files between themselves, and to pool excess computing power so that those with simpler or cheaper hardware can enjoy increased performance. This distributed computing allows them to train new personalised speech-to-text and text-to-speech AI models that work for the residents' own dialects, languages, and the nuances of their speech. All new models are shared across the local network, creating an ad-hoc co-operative driven by the diverse needs of the local community.

Audiences interacting with The Hope can browse the fully featured device through its touch screen and voice interface, personalise its functions, and access its library of models and data created by the community. Audiences can watch the simulated residents interacting in real-time via the device’s community forum and direct message services, where residents share their thoughts about life with the tool, ask questions, and have discussions with each other.

Mesh networks, distributed computing, Wikipedia, community forums, locally-hosted AI, and low-carbon computing are all existing techniques and technologies; The Hope simply brings them together and demonstrates how we already have the capacity to make better AI, and to make it work for us and our communities.

This work features additional voice performances from Pedro Oliveira, Maria Yablonina, and Irene Chung.
