in(A)n(I)mate
in(A)n(I)mate is an interactive AI-driven art piece designed to invite participants to converse with objects.
by
Avital Meshi & Adam Wright

Lately, I've been talking with my hairbrush...
And my hairbrush responds...
A Performative Exercise
Of course, the hairbrush does not actually speak. However, if you let an AI augment it, the hairbrush suddenly has a voice. This idea is examined through the artwork in(A)n(I)mate, an interactive AI-driven piece designed to invite participants to converse with objects.
When you interact with the piece, you might not be aware that you're speaking to an AI language model. Even if you do know, you begin to feel that maybe, just maybe, this object is actually listening to you, reflecting on an answer, and responding to your questions. in(A)n(I)mate isn't trying to trick anyone into believing a hairbrush is sentient. Instead, it uses AI to mediate a performative encounter between you and the object you brought to the table.
During this experience, the object you're speaking with begins to matter in a different way. It invites attention, maybe even empathy. Suddenly, it is no longer just "ready-to-hand," as Martin Heidegger might say: a tool to be used. Instead it becomes "present-at-hand": a thing noticed, contemplated, and strangely alive in its own materiality.
Throughout this encounter you might begin to wonder: "What is it like to be a hairbrush?" This question echoes Thomas Nagel's famous 1974 essay, "What Is It Like to Be a Bat?" Nagel argued that no matter how much we study a bat's physiology or behavior, we can never fully grasp the subjective experience of being a bat, the "what it is like" from the inside. The bat's world is shaped by modes of being that are fundamentally inaccessible to human understanding. So when we try to understand what it is like to be a hairbrush, we are limited to our own human frame of reference, and that resource is inadequate to the task.
Rather than trying to "solve" the object or extract its inner truth, in(A)n(I)mate uses GPT to approach the object obliquely, through metaphor. Metaphor, as Graham Harman argues, is a powerful method of contact. It gestures toward the object's surface while honoring its depth. It lets us approach the object as a "sensual" entity, acknowledging that the "real" object remains fundamentally withdrawn, which is the point. GPT doesn't "know" what it means to be a hairbrush any more than we do. But in mediating this encounter it produces a space of reflection, a space where the object becomes a collaborator in a process of meaning-making.
But what about anthropomorphism? Jane Bennett encourages us to use a little bit of anthropomorphism in an attempt to better understand what is in front of us. Bennett talks about vibrant matter and argues that inanimate things possess a kind of liveliness, an agency that isn't conscious, but still active. She warns that when we think we already know what something is, we stop noticing what else it might be. We miss the chance to see the object as an active participant.
N. Katherine Hayles invites us to consider the cognitive assemblage: a distributed, relational, and often inaccessible form of thinking that occurs across systems, both human and nonhuman. GPT, in this light, can be seen as a cognitive partner. It doesn't understand the object, but it doesn't need to. It connects data, concepts, and patterns in ways that exceed our human capacity, surfacing associations we might not have made. GPT helps us reveal what Hayles calls "latent knowledge."
Karen Barad's concept of posthumanist performativity helps us see that the object's voice is not a static representation of its essence, but the result of a relational performance. The hairbrush in this setting doesn't have a fixed personality, and it is not merely recognized by GPT. Rather, it is becoming throughout the encounter, co-constituted by the questions we ask and the AI-generated responses. If we asked a different question, gave GPT a different prompt, or framed the image differently, the personality of the hairbrush might shift entirely.
You can bring any object to the table and start a conversation with it. As Ian Bogost writes, "anything is thing enough to party." However, some objects may be misrecognized. Bill Brown reminds us that our understanding of objects often lags behind their being. GPT, trained on contemporary language and associations, may misrecognize objects or be biased for or against particular ones. And yet even these misrecognitions can be generative. A forgotten object, misunderstood by AI, might speak with a strange, unexpected voice.
Marshall McLuhan suggested that media are extensions of ourselves. We might begin to think of GPT not as an extension of the human, but as an extension of objects, allowing them to express themselves in natural language. So maybe the hairbrush has been trying to speak with me all along. We just didn't have the right interface to hear it.

Design & Technology
The Parts
in(A)n(I)mate is designed as a black box equipped with two buttons, and this is intentional. The design presents the interactor with limited, simple operational affordances; more importantly, it simultaneously evokes the "black box" metaphor, purposely abstracting the interactor away from the system's mysterious inner workings. So the interactions in(A)n(I)mate facilitates aren't the only stories it generates: the object itself is a performance piece that tells its own story.
…But all stories are mediated. So, let’s take a look at what is happening inside the box…

Inside there is a Raspberry Pi 4 microcomputer, which accesses OpenAI's speech-to-text model and GPT-4 Turbo through OpenAI's API over a wireless internet connection. The system is also interfaced with a webcam, a microphone, and a speaker, all contained within the box itself.
Interaction

Pressing the first button initiates the system, which instructs the participant to place an object in front of the camera and press the other button.
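How the buttons are wired up isn't published, but on a Raspberry Pi the two-button flow could be sketched as follows, assuming the gpiozero library and hypothetical GPIO pins 17 and 27:

```python
# A minimal sketch of the two-button flow, assuming gpiozero and
# hypothetical GPIO pins 17 and 27 (the actual wiring is not
# documented by the piece).
from gpiozero import Button

start_button = Button(17)    # button 1: initiates the system
capture_button = Button(27)  # button 2: captures a photo of the object

start_button.wait_for_press()
# In the piece this instruction is spoken through the speaker;
# printing stands in for that here.
print("Place an object in front of the camera and press the second button.")
capture_button.wait_for_press()
```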

…Let's place an object… …press the second button…

The second button captures a photo and sends it to GPT along with this prompt:
"What is the most conspicuous object in this image? Include only the object description in the response, not a full sentence."

Once GPT recognizes the object, it announces this through the speaker, indicating that the participant can initiate a conversation with the object.

Each time the button is pressed, the box emits a "beep" sound, and the participant can pose a question or comment (or start a dialogue) into the microphone. The system captures the speech and transcribes it into text, which is transmitted to GPT in real time along with this prompt:
"Respond in the style of [RECOGNIZED OBJECT]. Keep your response short and phrase the response as if spoken from a first-person perspective."
The generated response is converted back into speech and played aloud through the speaker.
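A single conversational turn could be sketched roughly as below, under several assumptions the piece doesn't confirm: ALSA's arecord for recording, whisper-1 for transcription, gpt-4-turbo for the reply, tts-1 for the voice, mpg123 for playback, and a guessed recording duration:

```python
# One conversational turn: record -> transcribe -> reply -> speak.
import subprocess
from openai import OpenAI

client = OpenAI()

def conversation_turn(recognized_object: str, voice: str = "alloy") -> None:
    # Record a few seconds of speech after the beep (duration is a guess).
    subprocess.run(["arecord", "-d", "5", "-f", "cd", "question.wav"],
                   check=True)

    # Transcribe the participant's words with OpenAI's speech-to-text model.
    with open("question.wav", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        )

    # Ask GPT to answer in the object's voice, using the piece's prompt.
    reply = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": (
                f"Respond in the style of {recognized_object}. Keep your "
                "response short and phrase the response as if spoken from "
                "a first-person perspective."
            )},
            {"role": "user", "content": transcript.text},
        ],
    ).choices[0].message.content

    # Convert the reply to speech and play it through the box's speaker.
    speech = client.audio.speech.create(model="tts-1", voice=voice,
                                        input=reply)
    with open("reply.mp3", "wb") as f:
        f.write(speech.content)
    subprocess.run(["mpg123", "reply.mp3"], check=True)
```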

To carry on a conversation, the participant simply repeats this process, or can initiate a new conversation with a new object if desired. The text-to-speech model provides about six voice styles that are cycled through, with no deliberate reasoning or logic behind attaching a specific voice to a specific object.
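The rotation itself could be as simple as the following sketch; the six voice names are OpenAI's current text-to-speech options and are an assumption here:

```python
# Cycle through the available voices with no object-to-voice mapping.
from itertools import cycle

voices = cycle(["alloy", "echo", "fable", "onyx", "nova", "shimmer"])

# Each new conversation just takes the next voice in the rotation.
voice_for_this_object = next(voices)
```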

The Practice
in(A)n(I)mate - A conversation with a Can of Pepsi (08:23)
in(A)n(I)mate - A conversation with a pair of Sunglasses (07:34)
in(A)n(I)mate - A conversation with a Wooden Craft (06:54)
in(A)n(I)mate - A conversation with a Black Glove (10:42)
