The AudienceBot Microphone is an interactive platform that enables audiences to convey feedback to performers in real time. Using a mobile application, audience participants collectively determine the “mood” of the microphone. The microphone uses six servo motors and a motion-tracking device to express a range of affective states, from inattentive to attentive and from playful to aggressive. Audiences decide how to modulate these parameters as they listen in real time. The project speculates about alternative rituals of audience-performer interaction and explores animistic behaviors as live data visualization.

Since the 2008 election, live graphical representations of audience sentiment have become a staple of debate coverage, yet there remain unexplored opportunities to use this kind of data to alter the unfolding structure of audience-performer interactions. This project aims to provoke speculation about alternative public rituals in which audiences take on a more agentive role. Previous research has explored how audience platforms can drive human or robot “Tele-actors” in remote space. My own work has built on the Tele-actor approach by positioning familiar rituals, such as “on the street” interviews, as audience-driven interactive experiences. Extending this emphasis on ritual, the AudienceBot Microphone project seeks to playfully disrupt the implicit rules of audience-performer interaction. In particular, the project explores how familiar features like the speaker’s lectern and microphone might be positioned as animistic entities. Animism as a design theme leverages the human capacity to see objects as having intention and autonomy, without requiring fully developed anthropomorphic features.
The AudienceBot Microphone uses animistic behaviors as a medium for live data visualization, an approach I have come to refer to as ‘representational animism.’ The design consists of a kinetic microphone prototype that moves according to the rotation of six independent servo joints. Using a camera and motion tracker, the microphone follows and engages with performers in proximity. This “attending” activity is modulated by a mobile application that aggregates audience input to determine the “mood” of the microphone at any given moment. Depending on the aggregate decision-making of a live audience, the microphone can embody various affective dimensions, from aggressive to playful or from attentive to inattentive. In playtests, the prototype helped provoke conversation about how audience-performer interactions might be hacked or tinkered with by positioning the audience as an in situ performative agent.
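The aggregation step described above can be sketched in code. This is a hypothetical illustration only: the actual prototype was built with the Netlab Toolkit, and the vote structure, value ranges, and servo parameter names below are my own assumptions, not the project’s implementation. The sketch averages live audience votes along the two affective axes (playful–aggressive, attentive–inattentive) and maps the result onto illustrative servo behavior parameters.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical data model: each audience member submits a vote on two axes.
@dataclass
class Vote:
    playfulness: float    # -1.0 (aggressive) .. +1.0 (playful) — assumed range
    attentiveness: float  # -1.0 (inattentive) .. +1.0 (attentive) — assumed range

def aggregate_mood(votes):
    """Average all live votes into a single aggregate 'mood'."""
    if not votes:
        return Vote(0.0, 0.0)  # neutral mood when no audience input
    return Vote(mean(v.playfulness for v in votes),
                mean(v.attentiveness for v in votes))

def mood_to_servo_params(mood):
    """Map the aggregate mood onto illustrative servo behavior parameters:
    attentiveness scales how tightly the arm tracks the performer, and
    playfulness scales the amplitude of an idle 'wiggle' motion."""
    tracking_gain = 0.5 + 0.5 * mood.attentiveness        # 0.0 .. 1.0
    wiggle_amplitude = max(0.0, mood.playfulness) * 30.0  # degrees of sway
    lunge_factor = max(0.0, -mood.playfulness)            # aggressive lunging
    return {"tracking_gain": tracking_gain,
            "wiggle_amplitude_deg": wiggle_amplitude,
            "lunge_factor": lunge_factor}

# Example: three audience members lean playful and attentive.
votes = [Vote(0.8, 0.6), Vote(0.4, 1.0), Vote(0.6, 0.8)]
params = mood_to_servo_params(aggregate_mood(votes))
```

In this sketch the mapping is a simple mean, so every vote carries equal weight; a deployed system might instead window votes over recent seconds so the mood responds to shifts in the room.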
AudienceBot Microphone and its kinetic behaviors designed by Joshua McVeigh-Schultz using the Netlab Toolkit.
Microphone arm form factor designed in collaboration with Lucas Ainsworth.
Mobile interface built by Max Kreminski.