The following portfolio represents a range of work and methodologies, including: physical computing, mobile design, experience prototyping, design research, performance, and experimental video.
This project consists of a series of workshops designed to explore new models of live audience interaction. I am interested here in tinkering with the interactional rule-sets that govern audience-performer relations. In November of 2011 I collaborated with Kevin Driscoll and A. J. Patrick Liszkiewicz on a workshop titled Occupy this Seminar.
The approach here eschews technologically sophisticated platforms, and instead places emphasis on a deliberate tweaking of the unspoken social contract that guides rituals of public assembly. This work resonates with approaches familiar within the Occupy movement, where rituals of consensus formation (like the General Assembly or human mic) point to new models of public interaction. These workshops often involve simple objects as props, but the emphasis is on novel interactional rule-sets, the symbolic affordances of objects, and the emergent properties of group attention and expression. For example, the “conch shell” scenario in Lord of the Flies conveys the kind of simple ritual invention that I have in mind. I like this example because it suggests both the symbolic potency of the conch shell and the opportunity for breakdown. It is this interstitial space between symbols, rules, and breakdown that I am interested in exploring.
Essentially the Synaptic Crowd platform enables online participants to conduct collaborative “on the street” interviews without actually having to be “on the street.” Interviews are conducted in physical space through an intermediary wielding a camera and a phone, but the responsibility of determining questions gets placed on the shoulders of the audience participants.
The Synaptic Crowd tool integrates browser and mobile interfaces, along with face-to-face interaction. Online participants submit potential questions or statements to a public pool, and the most-voted question or statement is then relayed to the intermediary’s phone.
Online participants watch the interviewee’s response as they formulate follow-ups. By enabling a live feedback loop between audience and subject, the Synaptic Crowd shuffles the agencies of the interview and enables participants to ask different kinds of questions than are normally licensed by a traditional interview format. These audience-generated questions often create contextual breakdown by forcing participants to mix intimate and professional registers. As the interviewee (and phone-wielding intermediary) try to make sense of these contextual curve-balls, a new kind of civic space gets carved out, with new intersection points between the personal and the political. In this sense, I situate the work as “making trouble” for the assumptions that traditional journalism creates when it uses social media sampling and vox pop interviews to curate the public back to itself.
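The submit-vote-relay loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the mechanic, not the project’s actual implementation; the class and method names (`QuestionPool`, `relay_next`) are my own invention for clarity.

```python
from collections import Counter

class QuestionPool:
    """Hypothetical sketch of the Synaptic Crowd mechanic:
    participants submit questions to a public pool, vote on them,
    and the top-voted question is relayed to the intermediary's phone."""

    def __init__(self):
        self.votes = Counter()

    def submit(self, question):
        # A new submission enters the public pool with zero votes.
        self.votes.setdefault(question, 0)

    def vote(self, question):
        # Only questions already in the pool can be voted up.
        if question in self.votes:
            self.votes[question] += 1

    def relay_next(self):
        # Relay the most-voted question (None if the pool is empty),
        # removing it so follow-up questions can rise to the top.
        if not self.votes:
            return None
        question, _ = self.votes.most_common(1)[0]
        del self.votes[question]
        return question
```

In use, each call to `relay_next()` would correspond to one message pushed to the intermediary’s phone, while the audience keeps submitting and voting between relays.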
The Synaptic Crowd: Vox Pop Experiments serves as a key example for me in demonstrating what it might look like to reimagine our civic rituals from the ground up. Drawing upon McLuhan’s imagery of electronic media as prosthetic extensions, the project explores a series of performative experiments that reposition the “street” (and other public spaces) as sites to be activated by remote audiences.
For more information, here is a talk I gave at the DIY Citizenship conference in which I discuss how the involvement of a live audience disrupts our expectations about the interview form. You can take a look at the prototype here (although keep in mind that the tool only works when it’s live, and right now that means “we” have to turn it on). Going forward, we’d like to create a scalable version that anyone can use, and our Knight News Challenge proposal aims to do just that.
(2009) Work originally exhibited at the Santa Cruz Museum of Art & History in InterActivate, the MFA Thesis Show for UC Santa Cruz’s Digital Arts and New Media program. Created in collaboration with developer Brian Alexakis.
[Project lead: Joshua McVeigh-Schultz, Flash and VXML development: Brian Alexakis, Videography: Lorenzo Estébanez and Joshua McVeigh-Schultz]
Completed for Phil van Allen’s New Ecologies of Things course at the Art Center, this project presents prospective interactants with a touch interface that doesn’t “want” to be touched.
When touch screen interface devices are found in the wild, their shielding appendages haven’t been clipped yet. As you might expect, these appendages have to be surgically removed before the devices can be domesticated and shipped for sale.
This design explores a familiar user interface paradigm (touch screen interaction) and reframes it as invasive, awkward, and potentially erotic. In this way, I treat the interactions between humans and objects as themselves rituals to be tinkered with and defamiliarized.
By frustrating typical user expectations about touch interfaces, I recast the iPod touch as an animistic object whose skittish behavior suggests trauma. While the object follows a user with its “gaze”, it clamps shut when one attempts to touch it forcefully. Instead, users need to earn the object’s trust before it will allow itself to be touched or stroked—an action that triggers a change in the object’s data-visualization display.
Designed as a critique of status monitoring in online contexts, this project presents a prototype of a prosthetic device that conversation partners wear in their mouths to provide visual and auditory feedback about the speaker’s level of online popularity (measured in retweets). The speaker with more current retweets experiences voice amplification (and their mouth glows a clear blue) while the less popular interlocutor gets quieter and their mouth glows red. The design aims to call attention to problematic features of the “marketplace of attention” that structures the amplification of “speech” in online contexts. In this way, I deliberately designed the objects to frustrate communication by awkwardly interjecting online status into meat-space.
‘Ambient storytelling’ — part of the design philosophy of USC’s Mobile and Environmental Media Lab — represents a departure from the customization algorithms familiar to discussions of pervasive computing. Rather than thinking about how a car can play the role of a glorified butler, anticipating its driver’s every need, we reposition the car as a co-participant in an evolving relationship. We use the framework of the Lifelog Interface as a portal through which drivers can engage with their vehicle as a new kind of experience platform. In our model, the car is no longer merely transport but a springboard for adventure, a “drivable” musical instrument, a 21st century scrapbook, and a playful reimagining of what it means to drive. Our prototype transforms the concept of a car key — which we see as a tangible avatar of the vehicle — into an evocative interface object. When drivers return home, this object beckons them and offers a portal into the Lifelog Interface. The interface itself plays on the metaphor of concentric tree rings to represent units of time. Navigating through these rings, drivers can visualize, plan, organize, and reflect on a variety of experiences including: (1) soundscape compositions generated by the car’s internal and environmental sensor data, (2) in-car augmented reality games, (3) guided-tour adventure modes, (4) networked trip planning, (5) car configuration and lifecycle milestones, (6) badges and social media portals. Finally, our system is designed to analyze not only the ways that drivers engage with space, media, and other drivers, but also the ways in which drivers choose (or refuse) to engage with the system itself. This iterative feedback loop between vehicle and driver allows the driver to build on their own projections and aspirations for their car as a resource for constructing the story of their vehicle as an evolving “character.”
[Mobile and Environmental Media Lab: Principal Investigator, Prof. Scott Fisher; Project Lead: Jen Stein; MEML Team: Emily Duff, Joshua McVeigh-Schultz, Jen Stein, Jeff Watson; Storyboard illustration: Cecilia Fletcher; Microsoft Surface Table programming: Emily Duff]
This project extended our work with automotive lifelogging by using in-car sensors to engage drivers in ongoing discoveries about their vehicle, driving environment, and social context throughout the lifecycle of their car. A goal of the design was to extend the contexts of automotive user-interface design by (1) looking inward to the imagined “character” of the car and (2) looking outward to the larger social context surrounding a drive. We deployed storytelling and theatrical strategies as a way of moving our thinking outside the familiar constraints of automotive design. These methods helped us extend the concept of a lifelog to consider the “lives” of objects and the relationship between humans and non-humans as fruitful areas of design research.
Within the Mobile and Environmental Media Lab we spend a lot of time using and thinking about strategies of narrative prototyping. Typical interaction-design prototypes are tested over minutes rather than years. By conducting narrative exercises and crafting scenarios in visual media such as storyboards, animations, and video, we gain a deeper understanding of how interaction unfolds over longer durations. In our work with BMW’s Mini line, this storytelling strategy helped us probe new possibilities for vehicular lifelogging by raising questions about multiple drivers and encouraging us to consider novel subjects like location-based memory annotation as conceivable topics of automotive design. This process allowed us to ask questions about longer chains of causality, probe more ambient modes of storytelling, and speculate about the ways that experience unfolds over the entire lifecycle of the car.
[Mobile and Environmental Media Lab: Principal Investigator, Prof. Scott Fisher; Research Assistant and Project Manager, Joshua McVeigh-Schultz; MEML Team: Michael Annetta, Jacob Boyle, Emily Duff, Hyung Oh, Jen Stein, Avimaan Syam, Amanda Tasse, Jeff Watson, Simon Wiscombe; iOS Programming: Jacob Boyle; Storyboard illustration: Bryant Paul Johnson]
[Project lead: Jen Stein; Dissertation Chair: Prof. Scott Fisher; MEML team: Jacob Boyle, Joshua McVeigh-Schultz, Hyung Oh, Amanda Tasse, Jeff Watson; Storyboard illustrations: Bryant Paul Johnson]
As a key member of the interactive media team, I collaborated on 6 Under 60 with colleagues from the School of Architecture and the Roski School of Art.
6 Under 60 is a collaborative research endeavor and interactive multi-media exhibition organized and presented by the University of Southern California (USC) School of Architecture, School of Cinematic Arts and Roski School of Fine Arts. An interdisciplinary team of USC faculty, research associates, and students in architecture, design, curatorial practice, and interactive media have analyzed six cities that emerged or were transformed within the last 60 years—Chandigarh, Brasilia, Gaborone, Almere, Shenzhen, and Las Vegas.
The Movie Tagger project (a continuation of work described here and here) was initially inspired by a grand vision to parse and richly tag every movie ever made. With an eye toward exploring new models of folksonomic “expert-sourcing”, I set out to interview 12 different film scholars in order to adapt their research to metadata tagging schemas.
CLEAN UP WALL STREET is a game-event inspired by the global financial crisis. The game explores the actions that led financial institutions to endanger the health of the global economy. Playfully assuming the roles of commodity traders and credit rating agents in the actual spaces of Wall Street in Lower Manhattan, players compete to trade and sell the most before the inflated system bursts and time runs out.
Working in teams, players collaboratively pursue wealth and prosperity. However, success in the game depends on face-to-face interactions with a cross-section of visitors to the Lower Manhattan Financial District. Performing an act of citizen journalism, players engage individuals to ask how the financial crisis (and recovery) may have impacted their lives. The stories are captured via a voicemail system and uploaded onto the game’s website along with additional gameplay imagery gathered by the players, forming a visual and audio mosaic of stories highlighting the financial, emotional, and mental consequences of an unstable and unregulated financial system.
I’m interested in interactive systems that get completed by leaps of imagination or by social mediation. One example I’ve pointed to is the elevator close-door button. This is a device that is often non-functional yet nevertheless encourages us to assign causality even where there is none. Building on this idea, I created a responsive system that is only triggered when a user closes their eyes.
(2010) Collaboration with Michael Annetta.
This design uses XML data from NAVTEQ to translate daily traffic flow at particular road sections into rhythmic pulses that map onto PWM voltages for vibrating motors.
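The core mapping — traffic flow reading in, PWM duty cycle out — can be sketched as below. This is an illustrative assumption on my part: the free-flow baseline, the linear mapping, and the 8-bit duty-cycle range are stand-ins for whatever calibration the actual device used, and the function name is hypothetical.

```python
def traffic_to_pwm(speed_mph, free_flow_mph=65.0):
    """Map a road-section speed reading (mph) to an 8-bit PWM duty cycle.

    Slower traffic produces a stronger vibration: congestion is taken as
    the shortfall from an assumed free-flow speed, clamped to [0, 1],
    then scaled to the 0-255 duty-cycle range typical of hobbyist PWM.
    """
    congestion = 1.0 - speed_mph / free_flow_mph
    congestion = max(0.0, min(1.0, congestion))  # clamp to [0, 1]
    return int(round(congestion * 255))
```

In the installation, values like these would be refreshed as new NAVTEQ readings arrive and written out to the motor driver, with the pulse rhythm coming from how often each road section is re-sampled.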
By the year 2060, all the humans who survived peak-oil live in giant honeycomb-like structures that contain self-sustaining mini-ecologies within each geodesic cell. Movement is tightly regulated, but residences are efficiently distributed such that all experiences of landscape are consistent.
There is no open space nor closed space; there is only space. Each individual residence is the same size and the same distance from every other. Experiences of proximity to other human beings are thus normalized, and travel is coordinated by cloud-based supercomputers, so that one never encounters more or less than the same number of people at any given time.
But ironically, years after peak oil, people start to nostalgicize the era of the automobile. Entranced by the tragic romance of our (once-upon-a-time) collective disregard for the future, consumers look to the car as a kind of thematic palette for restaurants, parties, films, etc. In this sense, the era of the automobile is experienced the way we think of pirates, the ’50s, or the Wild West today.
This design emerged from thinking about how social surveillance operates to regulate recycling practice in Japan. Unlike in the U.S., where recycling is practiced within a single household, in Japan recycling is deposited in a common neighborhood repository. All recycling must be separated and deposited in a specific manner. Paper goods, for example, must be folded and tied into an 8.5 x 11 stack. When I was living in Japan, I found these standards to be quite demanding, but I was intrigued by how much I internalized the watchful eyes of my neighbors, so that even when no one was looking I was aware of that gaze. My imagination about the potentially disapproving eyes of my neighbors in some ways eclipsed the actual experience. Foucault talks about this kind of internalized surveillance in relation to his concept of biopolitics and the state. But I think this case of localized self-surveillance is somewhat different from the way we think about the watchful eye of “Big Brother”, because the act is negotiated and enacted horizontally.
So, I wanted to come up with a design that leveraged this kind of imagination about the gaze of others — but instead of being surveilled by one’s cohabitants, one would be surveilled by tiny eyes that run along the floor. These eyes would be “cute” much the way that public service posters in Japan leverage the cuteness of mascots to encourage you to remember important instructions. Instead of focusing on recycling practice, though, I wanted these eyes to be regulating hot or cold air leakage (by guiding an inhabitant to close an open window for example).
Elephant in the Relationship is a game for 2 to 4 players in which players try to communicate deeply troubling relationship issues. Players take on the roles of two people in a personal relationship and turn the drama of a potentially risky (or intimate) interaction into spectacle for a group to enjoy. The game fuses Pictionary-style drawing and guessing mechanics with elements of doll-play and improvisational theater, asking players to place themselves into a difficult emotional scenario with their partner. The game includes a whiteboard arena, dry-erase markers, post-its, and colored playing pieces (designed to “stand-in” for the players). Using only these tools, a player tries to get their partner to guess the unspeakable relationship issue. By inserting players directly into the representational world of the drawing space, the game encourages empathy, divergent thinking, and novel communication strategies.
This locative game was designed in collaboration with Jeff Watson, Juli Griffo, and Ed Yee. It requires partners to collaborate via mobile phones as they navigate through physical and virtual worlds. One player navigates a text-based MUD modeled after the real-life rooms of the play space while the other navigates through physical space. The text-world-navigator can spot the interdimensional gnomes when they enter a room in the MUD and must quickly communicate the location to the physical-navigator without attracting the attention of the other players.
Interdimensional Gnomes are on the loose! Ingest dendritix pills to travel in the gnomes’ dimension. Teams compete to find and capture the elusive gnomes. Gnomes can only be captured when you utter their true name. One team member searches for gnomes in a drug facilitated virtual world while the other sneaks about in the physical world carrying out naming-missions and trying not to lose their tail. Watch out for other teams who may thwart your attempts to win the game.
Set in the context of telecom immunity debates of 2007 and 2008, Secrets for Senators is a performative intervention in which intimate secrets are confessed over the phone to senators who support warrantless wiretapping. This work considers the current threat of pervasive surveillance and illegal spying as a kind of psychic violence inflicted by the state. Secrets for Senators aims to subvert this violence by repositioning the violation of privacy as a deliberate and empowering act of self-exposure. I am interested in amplifying an awkward juxtaposition between private and public voices. In the video above callers leave answering machine messages sympathizing with the government’s need for secrets about its citizenry, and then, in an elaborate quid pro quo, proceed to divulge intimate secrets in exchange for the senator’s opposition to retroactive immunity. In other examples (not shown here), I spoke live to staffers while taking on an aggressive anti-privacy persona in order to co-opt the narrative that those who complain about surveillance must have “something” to hide.
Inspired by my own experience as a liminal subject in Japan, this project explores the tensions between my sense of self as a foreigner in Japan and the image of the westerner in Japanese media. I focused especially on the issue of translation (or translational adaptation), as the piece was originally written in English and then translated into more colloquial Japanese (which I read as voice-over). In the final act of adaptation, I tried to preserve linguistic nuance as I translated the Japanese text back into English subtitles.
An experimental video shot in Japan, A Different Self? explores the identity transformations of bilingual speakers. Structured as a series of interviews shot in 2005, the piece revolves around a central question: “Are you a different person when you speak a different language?”
I made this remix in January of 2006 when YouTube was still relatively new. I remember staying up late mesmerized by this new window onto our culture, and I recall being fascinated by the way that the ethos of online video in that particular moment seemed to reiterate so much of what I’d read about the early days of cinema, when the exhibition context was still so connected to its vaudevillian roots and the cinematic medium was driven by spectacle. This thread of media history has been described by Tom Gunning as the “Cinema of Attractions”. In 2006 web video seemed to have returned us to these themes of slapstick, erotica, violence, and bodily risk. Of course we are still in that moment to varying degrees, but there was something about the early days of YouTube that felt like such a defining moment. For me, the youth-driven meme of chugging pickle juice (as a proxy for alcohol binging), along with the erotic overtones of the pickle as a spectacularized object, seemed ripe as subjects of remix. This material also offered a compelling pairing with the YouTube-era fascination for “Epic Fail” violence—two sides of spectacle, one disturbingly macabre and the other pointing to a potential loss of innocence. In this way, while the two very different tags, ‘crash’ and ‘pickle,’ might at first seem unrelated, for me they represented poles along a much larger continuum. They were part of a new 21st century language of spectacle, but one that harkened back to the early nickelodeon era and its spirit of participation and performative risk-taking. I remember being inspired to edit this after discovering the clip that’s shown in the final shot; it was such a perfect crystallizing moment that the rest of the remix just fell into place.