
Hans and Umbach: Exploring the Boundary Between Physical and Digital

My final semester of graduate school is now long over; I have spent the last few weeks immersed in the awesome culture that is Adaptive Path, and yet embodied interaction continues to dominate my thoughts.

Grope

Today I have been reviewing my notes from the Hans and Umbach project, after using a terminal command to combine hundreds of text documents into a single file, which I converted to a PDF and loaded onto an iPad that I am currently borrowing from work. One thing that is striking, and perhaps disappointing, is that I originally set out with the modest goal of learning Arduino, hardware sketching and physical computing. Where others have made LED matrices that let you play Super Mario Brothers, or tanks that they can control with an iPhone, I made a few LED mixers, interfaced with a 7-segment display, took apart a Super Nintendo controller, and played Pong with a Wii Nunchuk.

Hans and Umbach: Arduino, 8-bit shift register, 7-segment display

My results didn’t quite meet my initial expectations. Electronics, it turns out, is still an archaic craft wrapped in cloaks of obtuse language and user-hostile encodings, and is certainly an art unto itself. I realized that to produce the robust interactions I had intended, with all the nuance and detail with which I approach my screen-mediated design work, would take an entire career’s worth of learning and refinement.

So then, were my efforts with Hans and Umbach all for naught? I don’t believe so. Physical computing exists at the intersection of the physical and the so-called digital worlds, which is why I was originally so interested in studying it. In reflecting extensively on my own process of learning electronics, and simultaneously diving deep into the academic research behind notions of embodiment, I came to realize that perhaps stumbling through the craft of linking these two worlds together wasn’t the best use of my strengths.

Because, I realized, the boundary between the physical and digital worlds was a false one.

Indeed, mentally compartmentalizing the physical from the digital makes sense from a computer science perspective, or from a system architecture perspective, but it is a wrong, dead-wrong approach from an interaction design perspective.

Every interaction, whether it is with a coffee cup or a keyboard or pixels on a screen, exists in the physical world, is perceived through our senses, is actively interpreted by us, and is thus rendered meaningful by our interpretation. Whether it is physical or digital, every interaction is embodied, as we only interact with the physical manifestations of digital information.

Musée Mécanique

This was a surprising conclusion to reach, as the whole reason I set out on this inquiry was to prove that the interactions afforded by the devices at the Musée Mécanique were of a different class than those afforded by screens and input devices. What I began to discover, though, is that even our most familiar, most natural, most culturally-embedded interactions are all technologically mediated.

Type Cliché Letterpress Project

There is nothing natural about plain paper, dark ink or the printing press; these are all technologies. However, a book differs greatly from an e-book in terms of the richness of its physicality. Screens typically afford an interaction that is physically impoverished, given the rich range of sensing capabilities we have as human beings. By not engaging our senses for texture, warmth, smell and sound, the e-book is limited in how it engages our sense of embodiment, but it is embodied nevertheless.

Indeed, too much effort has been wasted trying to explain how and why tangible computing is new and different from what came before it… what, intangible computing? I believe that the assertion is irrelevant, that tangible computing is not new, but as an area of inquiry it has given us a new perspective from which to reconsider all interactions, namely that of their embodied qualities. While tangible computing is mostly concerned with the sense of touch and physical manipulability, embodied interaction considers the larger notions of physicality as a whole, the human body as a mediator of experience, the nature of being, and the role of individual interpretation as central to the formation of meaning.

All interactions can benefit from an embodied perspective, not just analog, physical, in-the-world interactions, but so-called digital ones as well. There are all these things in the world, hardly perceptible but nonetheless important, that we use every day to create meaning.

What I continue to outline, through my consideration of embodied interaction, phenomenology and metaphor, is a means by which we can talk about these experiences in such a way that embodiment can better inform our design process.

I’m not there yet, but I’m continuing to work it out.

And now, I get to work it out with some of the coolest people in the whole entire world.

Hans and Umbach: Miscellaneous thoughts regarding the embodied future of interaction design.

Okay. It’s been a while. I’ve taken some time off. From this, as well as from embodied interaction.

But let’s get back at it. Embodied interaction, that is.

I’ve had quite some time to decompress about this, take a pause and see what sort of ideas keep bubbling to the surface. And the results are not surprising. Or are surprising. Or whatever.

On a postmodern interpretation of technology, and rejecting a sense of inevitability.

Jaron Lanier. And questioning underlying assumptions, tacit assumptions, the colorless, odorless nature of our technological surroundings. Of our environments. Render explicit to consciousness that our ecologies are not inevitable, that they are not natural, that they are not predestined, that they are constructed and hold no truths. That is not to say they are completely relative, but just that they are subjective. Indeed. Privacy on the internet. That you cannot hear people from your car. Certain responsibilities, the lines we draw between designer and user, producer and consumer, etc.

Who is responsible when my gas pedal sticks? How about when I swerve to miss a deer? Or if I hit the deer? Should the car allow me to swerve so sharply that I can flip it? Should it never be allowed to exceed 55 mph? What is the logical limit for the top speed? What is safe? What is safe enough? Given infinite resources, given no technological constraints, where would we find ourselves?

On the future of screen-mediated interactions.

Screens, for instance. There’s nothing inherently wrong with screens. A screen is just a collection of pixels. RGB emissive light, photons in this case. However, they could just as well be CMYK dots, like a newspaper. Like a magazine. Imagine how that would change our relationship with paper. Indeed, in that case you could pinch-zoom the 2010 Rand McNally U.S. Road Atlas. You’ve already tried that. Jake’s already tried to slide-unlock his wallet.

There is a feedback loop here. Digital technology influences how we interact with our analog environments. Not just vice versa. Twenty years ago the analog interaction of operating a Rolodex offered a logical metaphor for the digital interaction of browsing an address book. But no more. How many 15-year-olds today do you think have operated a Rolodex? How many do you think have operated an iPod? Or an iPhone? The one interaction can no longer effectively lend its metaphor to the other. Methinks the metaphor of the Rolodex interaction is dead.

So, you take Iron Man 2, with its transparent glass screens, and you think, man, that looks cool at first, but then you realize trying to focus on flat content projected on a transparent screen would be somewhat straining. But. If you can project an image on clear glass like that, who is to say you cannot also project a black background? And now, your office may consist of hundreds of screens, but when they’re off they’re transparent. Open. Airy. They barely exist. And the fact that a screen can be transparent, or can project its own black background for familiar contrast, opens up all sorts of options for augmented reality.

Though, for a truly transparent glass screen (which is merely transitory technology on the way to in-air displays) you come up against problems of auto-stereoscopy and determining the relative perspective of the user’s viewpoint… parallax and the like. I move my head, and the display needs to update accordingly. Or, what if I’m sharing the same augmented reality screen with someone else? They need to see a different display, from a different angle, than I do. Here we need some sort of holography that projects a unique image from every emissive point.

When we think of screens, we need to think beyond the current technological implementation of the screen, and instead think of the screen metaphorically. What are the terms we use to construct our thinking of this display? Can we touch it? Do we touch it? If we touch it does it get greasy? Not necessarily. I predict that in a few years nanotechnology will provide us with materials, perhaps inspired by the leaves of the lotus, that collect no dust and accept no grease (even from the infinitely-greasy human hand). Imagine if all glass were made of such a material.

Is a screen an extension of a book? A viewport into another world? A wormhole? How we align ourselves, socially and culturally, with these artifacts greatly influences how we perceive them, how we conceptualize them, how we imagine ourselves using them. We look back at old science fiction movies and laugh at their cornball conceptualization of the future, but it’s important to recognize that every piece of science fiction is a product of a unique society and culture. Mainstream science fiction especially (or any depiction of the future, as seen in Iron Man) needs to consider its sociocultural situatedness.

Ironically, technology in science fiction needs to appear futuristic, but not so much so that it seems unbelievable and unachievable given current understandings of the world. I recently read that plants may use quantum entanglement to maximize the efficiency of photosynthesis, and that quantum entanglement may allow birds to “see” the Earth’s magnetic field, aiding in migration. If these theories turn out to hold weight and thusly become popularized, they will influence our shared, intersubjective world, and become a resource that science fiction can leverage for believably futuristic renderings of the, well, future.

On questioning hegemonies.

I am realizing that one of my roles as a designer is to question, or at least render explicit, the tacit assumptions of the hegemonies in which we conduct our lives. As interaction designers, we have inherited the legacy, a powerful and important legacy at that, of a scientific approach to computation, as well as an initially cognitive-systems approach to interaction. The scientific, non-humanistic origins of our field, I believe, continue to silently influence the way we think about and talk about interaction.

There is a strong, increasingly strong, reaction against these rational histories of human-computer interaction, towards a more experiential model that considers the whole person, their emotions, desires, goals and fears, not only as something to design for, but something to design with. The user as a medium for design. Indeed, the interpretative abilities of the user are an incredible resource that can, nay must, be effectively leveraged by our designs.

The value in a design is not objectively measurable, and is not contained in the designed artifact itself, but in the union between the artifact and the user. The simplest designs are compelling not merely because they are simple, but because they so gracefully leverage the rich intersubjective world of the user (or users) to give them meaning. As phenomenology tells us, these meanings are situated not in the artifact, but in the consciousness of the user herself. Interaction design is concerned not with the objective world, but with the messy, subjective world of interpretation. Phenomenology, concerned with reality as it is revealed to and made manifest in consciousness, is at the very core of interaction design.

I am proud that interaction design is increasingly concerned with the messy subjective world, that it realizes that an account of the objective qualities of the world is insufficient to design compelling interactions. Nevertheless, I believe there is still significant work to be done in shrugging off the scientific cloak of computation, so that we can truly design future-facing interactions. I believe certain metaphors used for describing our systems have hung on past their prime, and silently and insidiously damage progress in our field. Most notable, as I have described recently, is the conceptualization of a virtual world that exists independently of the physical world.

On dispelling the myth of the virtual world.

While the difference between the physical and digital is certainly important from a technology and computation perspective, I believe it is meaningless from an interactive perspective. Nevertheless, we still speak of making virtual friends, roaming virtual worlds, or downloading digital information. I believe this categorization creates a false boundary between the physical and digital worlds, mischaracterizing the digital and trivializing the real, physical, embodied interactions that happen, that must happen, when a user interacts with the so-called virtual world.

Interacting with a friend in World of Warcraft is greatly different from interacting with them when they’re standing in your living room, but not because one is a “virtual” interaction and the other is a “real” interaction. No, they are both physical interactions, one mediated in co-present physical space (with all the available expressive faculties that come along with such co-presence), and one mediated through keyboard, mouse, screen and audio. To characterize the latter as “virtual” is to casually dismiss the embodied interactions that must happen in order for the conversation to take place, and to neglect possible opportunities to make the interaction more richly embodied.

On disentangling interaction design from its computational roots.

Computer science must necessarily distinguish between hardware and software layers, either of which can branch into any multitude of sub-disciplines. However, users do not necessarily make any such distinction. I have observed college freshmen working with computers, and their conceptual model of computers often does not distinguish between operating system and application, or even between local (as in, on their computer) and remote (as in, on the internet). To them, a computer (or even computation as a whole) is one amorphous interactive mass, which, whether we like it or not, is how we have to design it.

Also. We must design in the abstract, but ultimately our designs are interacted with at the level of the ultimate particular. People never abstractly interact with a product. They only particularly, specifically, interact with something.

Or something.

Hans and Umbach: The Virtual World You Requested Does Not Exist

Our interests in embodied interaction started almost a year ago, as we spent the summer in San Francisco. Confronted by the overwhelming colors and textures of a real living-and-breathing (and, based on olfactory sensations, clearly excreting) city, we realized how malnourished our computer-mediated interactions were, compared to the rich sensory experience of the real, physical world. Additionally, our time at the Musée Mécanique made us appreciate the aesthetic experience of real physical artifacts, of tactile materials like warm wood cabinets and cold metal handles. We liked heavy dials you had to twist, piano scrolls that spun, machines that would hum and shake as their gears and belts within worked their magic.

Musée Mécanique

There is something different, something tangibly different, about real objects in real space around you. Sound that emanates from two metal pieces clanging together in real space is so much more satisfying than a recording played through a speaker. That they move, that they displace the air around them, the same air that you breathe, is just one of the ways we seem inexplicably tied to the physical realm.

Electronics as a tool for extending computation into the physical world.

This was the goal as we began our inquiry: how to create physical interactions that exist in the real world and involve the manipulation of real artifacts, yet are invisibly backed by the strengths of modern computation and network technology. In our efforts to reintroduce interaction to the space around us we learned electronics, we experimented with Arduino, and we took our knowledge of programming and extended it into interactive electronic artifacts that existed in the world with us.

Hans and Umbach: Wiichuck Pong Components

As we tinkered with electronics we quickly discovered that all of the subtlety and nuance, as well as the challenges, that go into designing “digital” interactions are present in physical computing, only amplified, because we were now considering both an electronic and a physical layer. These are two layers that, say, when you build a website or application, you take for granted. With Arduino they become your responsibility, subject to whatever limited grasp you may have of the subject area. We are disappointed by the limited progress we made in learning electronics, but we certainly have a renewed appreciation for people with the wide array of skills necessary to make not only functional, not only good, but great computationally-backed physical interactions.

From a theoretical perspective, our original interests were to understand why these physical in-the-world interactions are so fulfilling and evocative, and why our virtual interactions feel so vapid in comparison. Our goal was to explain why these physical interactions should earn a privileged seat at the table, while virtual interactions should be sent to their room.

Tangible computing: digital information, physical interaction.

As we sifted through the layers of theory on embodiment, we realized we needed a better understanding of what defines a virtual interaction, and how it is different from a physical one. Traditionally, a virtual/digital interaction involves a screen comprised of pixels, with a keyboard and pointing device used for controlling the interface. Tangible computing has worked to categorize interactions, characterizing certain ones as ‘tangible’ and other ones as ‘digital’. Indeed, with ambient computing Ishii and Ullmer have dedicated much of their work to studying how we can render ‘digital’ information, or ‘bits’, in ‘physical’ space. A number of authors have sought to define tangible computing in a manner that differentiates it from ‘regular’ computing. A few common tenets:

  • Tangible computing unifies input and output surfaces. Instead of a keyboard that adds characters to a screen, or instead of a mouse that moves a cursor, tangible computing offers a new interactive model where the input mechanism and output display are one and the same artifact.
  • Tangible computing affords direct manipulation. Instead of mapping the physical space my mouse occupies to the virtual space on the screen, the physical object I am working with can be grasped, turned and shaped in order to change its unified output characteristics.

Three Flavors: Data-Centered, Perceptual-Motor-Centered, Space-Centered

Hornecker outlines three primary views of tangible interaction. The work of Ishii and Ullmer concerns a data-centered viewpoint, where physical artifacts are computationally-augmented with digital information. A second is a perceptual-motor-centered view of tangible interaction, which aims to leverage the affordance and rich sensory experience of physical objects. As championed by Djajadiningrat and Overbeeke among others, this view of tangible interaction emphasizes the expressive nature of human movement. Finally, Hornecker subscribes to a space-centered view of tangible interaction, which involves embedding virtual displays in real-world spaces.

I subscribe to a perceptual-motor-centered view of tangible interaction.

And I believe that the data-centered and space-centered views are absolute nonsense.

Let’s talk about digital and virtual.

Both the data-centered and space-centered views of tangible computing make reference to a ‘digital’ or ‘virtual’ world of information. These concepts are familiar enough, and we all have a gut feeling as to what they mean. Digital information is ones and zeroes. It lives in a computer or on a server somewhere. It is ephemeral, existing without really existing, and can be infinitely accessed, copied, reproduced and distributed without loss. It’s what got the music industry’s panties in a bunch.

The virtual world is the world in which this ‘digital’ information exists, and it lacks many of the familiar characteristics of the physical world. Things don’t actually ‘exist’ in the virtual world. A photo on Flickr is not a photograph in real life. You don’t need to be co-present with the photo in order to see it. Multiple people can look at the same photograph at the same time, without being aware of one another.

The metaphors we use to describe digital information and the virtual world emphasize its distributed, ephemeral nature. Deleted files disappear “into the ether” and we pull things down from “the cloud.” Like an atmosphere that envelops us, we think of it as existing independent of us, independent of the moments we perceive it through our devices.

Nevertheless, digital and virtual are only conceptual metaphors. They do not describe the objective qualities of our networked, computational systems, but rather our subjective framing of them. They are extremely effective metaphors, yes, as evidenced by their widespread use and prevalence in thought. But it is nonsense, utter nonsense, to claim that the virtual world exists, and is any different from the physical world.

There is no digital information. There is no virtual world. There is only the physical world, where we encounter mediated instances of so-called ‘digital’ information.

You never see, nor interact with, a virtual world. There is no such thing as a virtual display. The idea of augmenting real, physical objects with digital information is meaningless, as is the idea of augmenting physical spaces with virtual displays.

If a tree falls in the forest and no one hears the sound, it does not make a sound.

My mind’s telling me virtual, but my body says physical.

All of your interactions with the ‘virtual’ world are necessarily mediated by whatever system or tool you are using to access it. This post, for instance, is not virtual. It is a collection of physical pixels emitting physical photons of light, which enter your eye in a pattern that your brain recognizes and interprets as text. This is the case if you are reading this on your laptop, your phone or your iPad. If you want to comment on this post, you will perhaps press physical keys on your laptop to make recognizable characters appear on your physical screen, until they are in an amount and order you deem satisfactory.

Perhaps you will comment by pecking this out on an iPhone or iPad’s ‘virtual’ keyboard. Again, just because the keyboard is rendered on a screen, comprised of pixels, does not mean that it is virtual. Just because there isn’t tactile feedback (technically there is tactile feedback, as your finger doesn’t pass through the device like a ghost, but it may be feedback that doesn’t meet your expectations for a keyboard) doesn’t mean it isn’t physical.

You never have direct, unmediated access to what is metaphorically described as the virtual world. All of your interactions with ‘virtual’ information are necessarily physical, necessarily tangible, and therefore embodied. Thus, anyone who claims that tangibility is a new agenda for computing is sadly mistaken. Tangibility has always been core to our ability to interact with and experience computational devices, from pixels to keyboards to touch screens.

All interactions are not created equal.

The arguments over classification, determining what ‘is’ a tangible interaction versus what ‘is’ a virtual interaction, are completely misguided. All interactions are tangible, all interactions are physical, all interactions are embodied, but all interactions are not necessarily created equal. As humans we have highly-developed capacities to perceive, interpret, and make meaning out of our surroundings. In traditional desktop computing, as well as touch screen computing, devices tend to leverage only our most basic capacities for seeing and touching. These characteristics do not make an interaction virtual, they do not make it intangible, but they do make it physically impoverished.

This was the big surprise the boys and I encountered over the course of this project. We initially set out to explain why physical interactions were more fulfilling than virtual ones, and how the traditional screen, keyboard and mouse ignored all but the most rudimentary human capabilities for interacting with the world. What we realized was that we couldn’t merely categorize some interactions as physical and the other ones as virtual, because all interactions are necessarily situated in and mediated by the physical world. ‘Virtual’ is a convenient conceptual metaphor for describing a certain class of interactions, those that evoke only a limited set of our physical and perceptual capabilities, but the notion of a disembodied virtual world independent of the physical world is absolute nonsense. Moreover, I believe the appeal of the ‘physical’ and ‘virtual’ metaphors, and the territorial battles that have been fought under their banners, have distracted us from far more important agendas.

Traditional desktop interactions are unsatisfying not because they are ‘intangible’ or ‘virtual’, but because they offer an impoverished physical interaction that does little to leverage our unique abilities to perceive, interpret, and make meaning from our surroundings. Tangible computing differentiates itself not because it offers a ‘physical’ representation of ‘digital’ information, but because it uniquely focuses on the tactile qualities of interaction, and the rich sensory experiences that the world can afford.

Ultimately, all interactions are tangible. By acknowledging the metaphorical barrier between the ‘physical’ and ‘virtual’ worlds as a false one, and instead focusing on our ability to deliver richly evocative interactions through these different interactive paradigms, we are empowered to build more compelling interactions.

Hans and Umbach: The Role of Metaphor in Embodied Interaction

Through their research, Hans and Umbach have discovered that there is no shortage of brilliant work summarizing the primary concepts of embodied interaction. From Antle to Schiphorst, from Dourish to Hornecker, from Robertson to Sharlin to Lowgren to Fernaeus to Djajadiningrat to Fishkin, everyone seems to be reading the right stuff. Everyone is talking about Heidegger and his hermeneutical phenomenology, a philosophical approach to understanding the way the world is manifest in consciousness, how we interpret our experience with the world, and ultimately how we form meanings with it.

Everyone is channeling Dourish, and his work unifying social computing and tangible computing under the banner of embodied interaction. Many authors are channeling Lakoff and Johnson, and their profound work studying linguistics, metaphors and embodied cognition. Indeed, any text that discusses embodied interaction, without reference to Lakoff and Johnson, is immediately suspect in the boys’ book.

Lakoff and Johnson, and the role of metaphor in human thinking.

Lakoff and Johnson posit that much of our language, and thus much of our thinking, is dependent on our use of metaphors to describe the world. These metaphors are so ingrained in our thinking that we are rarely conscious of their use. For example, we describe time using spatial metaphors, or even material metaphors. Things that are in the future are “ahead” of us, and things in the past are “behind” us. We talk about the speed at which we perceive time passing, and we describe time as though it is water, a continually flowing substance. Time slips through our fingers, we don’t have enough of it, and we frequently run out of it.

Lakoff and Johnson argue that metaphors are not just convenient linguistic tricks we use that allow us to communicate more efficiently with one another, but that our brains are hard-wired to categorize and associate in such a way that we can’t help but think in metaphor. Hans and Umbach have definitely experienced that in the last few months, as they’ve been learning electronics. As they work with circuits and components, trying to build things that work and debug things that don’t, they’re constantly using spatial and material metaphors as a foundation to their thinking. We talk about electricity “flowing” from negative to positive, as though it is water. We talk about resistors resisting (or constricting) the flow of electricity. We talk about capacitors “filling up”, or buttons “closing” a circuit, or transistors “waiting” for a signal.

If we pause just for a moment, none of these thoughts regarding electricity make any sense at all. We can’t see it, so it’s meaningless to “know” or even to “think” that it acts like water, even while this particular mental model sets us up for success when creating a functional circuit. I close things in my environment all the time, such as doors, windows and notebooks, but to say that a pressed button “closes” a circuit is nonsense. Worst of all, how can a transistor “wait”? For something to wait implies that it perceives time, that it can anticipate the future, that it will respond in some manner when the appropriate stimulus presents itself.

Animals wait. Humans wait. Transistors do not wait, and yet this metaphor, that of the transistor as an organism that can anticipate and respond, tells us how to work with them. This, then, is where Lakoff and Johnson’s work gets particularly juicy. Humans are biological creatures with particular sensory capacities. We see light across a particular spectrum, can sense heat across a particular range at varying degrees of sensitivity, and have bodies with arms, fingers and hands that grant us certain abilities for interacting with the world.

Cognition is situated in the body, and the body influences cognition.

J.J. Gibson’s work in ecological psychology argues that action and cognition are radically situated in the environment and inseparable from it, such that you can make no predictions about an organism’s behavior without knowing about the environment in which it is situated. Lakoff and Johnson extend Gibson’s work by channeling the concept of embodied cognition, which similarly claims that cognition is radically situated in the body.

Indeed, according to embodied cognition, the reason we perceive the world the way we do is not necessarily because the world possesses certain perceptible qualities, but rather because our bodies perceive and make sense of the world in a certain way. We perceive time in a certain way because we are hard-wired to experience it in that way. We organize the physical world in time because it is impossible for us to organize it independent of time. The more we learn about quantum mechanics, too, the more we learn that there is little in the world that objectively reflects the common sense human experience of time.

This is not to say that the objective world does not exist, but rather that we need to deliberately consider the way our minds make sense of the world. Since our minds are situated in our bodies, and our bodies have certain capabilities that pre-filter our access to the world, the importance of considering subjective experience as a phenomenon independent of the objective world cannot be overstated.

“I can’t get my body out of my mind.”

The notion of embodied cognition has profound implications, and we can see some of them manifested in the way we talk about, and orient ourselves towards, the physical world. Our bodies are basically symmetrical from left to right, but strongly asymmetrical from front to back. We can see things when they are in front of us, but not when they are behind us. Our limbs are oriented in such a way that we walk in a forward vector, towards our line of sight.

Thus, things that we encounter “in the future” we typically encounter as we walk towards them, and things that we encountered “in the past” are things that are behind us. This asymmetry from front to back gives rise not only to the way we orient ourselves spatially, but also influences how we perceive the world. In this way our bodies’ unique configuration determines our understanding of time, spatially situating our temporal metaphors.

The richer notions of embodiment that Hans and Umbach have discovered over the course of our project consider these notions of metaphor as a fundamental part of how we interpret the world and make meaning of it. These metaphors arise out of the unique qualities and perceptual capabilities of our bodies, such that the way we make sense of and interact with the world is necessarily shaped by our own physical characteristics.

Hans and Umbach: Establishing a Language of Embodied Interaction for Design Practitioners

My work with Hans and Umbach on physical computing and embodied interaction took an interesting turn recently, down a path I hadn’t anticipated when I set out to pursue this project. My initial goal with this independent study was to develop the skills necessary to work with electronics and physical computing as a prototyping medium. In recent years, hardware platforms such as Arduino and programming environments such as Wiring have clearly lowered the barrier to entry for getting involved in physical computing, and have allowed even the electronics layman to build some super cool stuff.

Rob Nero presented his TRKBRD prototype at Interaction 10, an infrared touchpad built in Arduino that turns the entire surface of one’s laptop keyboard into a device-free pointing surface. Chris Rojas built an Arduino tank that can be controlled remotely through an iPhone application called TouchOSC. What’s super awesome is that most everyone building this stuff is happy to share their source code, and contribute their discoveries back to the community. The forums on the Arduino website are on fire with helpful tips, and it seems an answer to any technical question is only a Google search away. SparkFun has done tremendous work in making electronics more user-friendly and approachable, offering suggested uses, tutorials and data sheets right alongside the components they sell.

Dourish and Embodied Interaction: Uniting Tangible Computing and Social Computing

In tandem with my continuing education with electronics, I’ve been doing extensive research into embodied interaction, an emerging area of study in HCI that considers how our engagement, perception, and situatedness in the world influences how we interact with computational artifacts. Embodiment is closely related to a philosophical interest of mine, phenomenology, which studies the phenomena of experience and how reality is revealed to, and interpreted by, human consciousness. Phenomenology brackets off the external world and isn’t concerned with establishing a scientifically objective understanding of reality, but rather looks at how reality is experienced through consciousness.

Paul Dourish outlines a notion of embodied interaction in his landmark work, “Where The Action Is: The Foundations of Embodied Interaction.” In Chapter Four he iterates through a few definitions of embodiment, starting with what he characterizes as a rather naive one:

“Embodiment 1. Embodiment means possessing and acting through a physical manifestation in the world.”

He takes issue with this definition, however, as it places too high a priority on physical presence, and proposes a second iteration:

“Embodiment 2. Embodied phenomena are those that by their very nature occur in real time and real space.”

Indeed, in this definition embodiment is concerned more with participation than physical presence. Dourish uses the example of conversation, which is characterized by minute gestures and movements that hold no objective meaning independent of human interpretation. In “Technology as Experience” McCarthy and Wright use the example of a wink versus a blink. While closing and opening one’s eye is an objective natural phenomenon that exists in the world, the meaning behind a wink is more complicated; there are issues of the intent of the “winker”, whether they intend for the wink to represent flirtation or collusion, or whether they simply had a speck of dirt in their eye. There are also issues of interpretation by the “winkee”: whether they perceive the wink, how they interpret the wink, and whether or not they interpret it as intended by the “winker.”

Thus, Dourish’s second iteration on embodiment deemphasizes physical presence while allowing for these subjective elements that do not exist independent of human consciousness. A wink cannot exist independent of real time and real space, but its meaning involves more than just its physicality. Indeed, Edmund Husserl originally proposed phenomenology in the early 20th century, but it was his student Martin Heidegger who carried it forward into the realm of interpretation. Hermeneutics is an area of study concerned with the theory of interpretation, and thus Heidegger’s hermeneutical phenomenology (or the study of experience and how it is interpreted by consciousness) has since become the foundation of all recent phenomenological theory.

Beyond Heidegger, Dourish takes us through Alfred Schutz, who considered intersubjectivity and the social world of phenomenology, and Maurice Merleau-Ponty, who deliberately considered the human body by introducing the embodied nature of perception. In wrapping up, Dourish presents a third definition of embodiment:

“Embodiment 3. Embodied phenomena are those which by their very nature occur in real time and real space. … Embodiment is the property of our engagement with the world that allows us to make it meaningful.”

Thus, Dourish says:

“Embodied interaction is the creation, manipulation, and sharing of meaning through engaged interaction with artifacts.”

Dourish’s thesis behind “Where The Action Is” is that tangible computing (computer interaction that happens in the world, through the direct manipulation of physical artifacts) and social computing (computer-augmented interaction that involves the continual navigation and reconfiguration of social space) are two sides of the same coin; namely, that of embodied interaction. Just as tangible interactions are necessarily embedded in real space and real time, social interaction is embedded as an active, practical accomplishment between individuals.

According to Dourish, embodied computing is a larger frame that encompasses tangible computing and social computing. This is a significant observation, and “Where The Action Is” is a landmark achievement. But, as Dourish himself admits, there isn’t a whole lot new here. He connects the dots between two seemingly unrelated areas of HCI theory, unifies them under the umbrella term embodied interaction, and leaves it to us to work it out from there.

And I’m not so sure that’s happened. “Where The Action Is” came out nine years ago, and based on the papers I’ve read on embodied interaction, few have attempted to extend the definition beyond Dourish’s work. While I wouldn’t describe his book as inadequate, I would certainly characterize it as a starting point, a significant one at that, for extending our thoughts on computing into the embodied, physical world.

From Physical Computing to Notions of Embodiment

For the last two months I have been researching theories on embodiment, teaching myself physical computing, and reflecting deeply on my experience of learning the arcane language of electronics. Even with all the brilliantly-written books and well-documented tutorials in the world, I find that learning electronics is hard. It frequently violates my common-sense experience with the world, and authors often use familiar metaphors to compensate for this. Indeed, electricity is like water, except when it’s not, and it flows, except when it doesn’t.

In reading my reflections I can trace the evolution of how I’ve been thinking about electronics, how I discover new metaphors that more closely describe my experiences, reject old metaphors, and become increasingly disenchanted that this is a domain of expertise I can master in three months. What is interesting is not that I was wrong in my conceptualizations of how electronics work, however, but how I was wrong and how I found myself compensating for it.

Hans and Umbach: Arduino, 8-bit shift register, 7-segment display

While working with a seven-segment display, for instance, I could not figure out which segmented LED mapped to which pin. As I slowly began to figure this out, it did not seem to map to any recognizable pattern, and certainly did not adhere to my expectations. I thought the designers of the display must have had deliberately sinister motives in how their product so effectively violated any sort of common sense interpretation.

To compensate, I drew up my own spatial map, both on paper as well as in my mind, to establish a personal pattern where no external pattern was immediately perceived. “The pin in the upper lefthand corner starts on the middle, center segment,” I told myself, “and spirals out clockwise from there, clockwise for both the segments as well as the pins, skipping the middle-pin common anodes, with the decimal seated awkwardly between the rightmost top and bottom segments.”

It was this personal spatial reasoning, this establishment of my own pattern language to describe how the seven-segment display worked, that made me realize how strongly my own embodied experience determines how I perceive, interact with, and make sense of the world. So long as a micro-controller has been programmed correctly, it doesn’t care which pin maps to which segment. But for me, a bumbling human who is poor at numbers but excels at language, socialization and spatial reasoning, you know, those things that humans are naturally good at, I needed some sort of support mechanism. And that mechanism arose out of my own embodied experience as a real physical being with certain capabilities for navigating and making sense of a real physical world.
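
If it helps to see what that spatial reasoning turned into, the lookup-table approach translates into code something like the sketch below. To be clear, this is a hedged reconstruction rather than my actual sketch, and the pin numbers are hypothetical stand-ins; every display’s datasheet maps segments to pins differently.

    // Sketch of the lookup-table idea: map each segment (a-g, plus the
    // decimal point) to whatever Arduino pin it happens to be wired to,
    // then describe each digit as a pattern of segments. Pin numbers
    // here are hypothetical.
    const int segmentPins[8] = {2, 3, 4, 5, 6, 7, 8, 9}; // a,b,c,d,e,f,g,dp

    // Segment patterns for digits 0-9, least-significant bit = segment a.
    const byte digitPatterns[10] = {
      B0111111, // 0
      B0000110, // 1
      B1011011, // 2
      B1001111, // 3
      B1100110, // 4
      B1101101, // 5
      B1111101, // 6
      B0000111, // 7
      B1111111, // 8
      B1101111  // 9
    };

    void setup() {
      for (int i = 0; i < 8; i++) {
        pinMode(segmentPins[i], OUTPUT);
      }
    }

    void showDigit(int d) {
      for (int seg = 0; seg < 7; seg++) {
        // On a common-anode display, a segment lights when its pin is LOW.
        digitalWrite(segmentPins[seg], bitRead(digitPatterns[d], seg) ? LOW : HIGH);
      }
    }

    void loop() {
      for (int d = 0; d < 10; d++) { // count 0-9, half a second per digit
        showDigit(d);
        delay(500);
      }
    }

The micro-controller couldn’t care less about my clockwise spiral; the array is there entirely for the benefit of my bumbling human brain.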

Over time this realization, that I am constantly leveraging my own embodiment as a tool to interpret the world, dwarfed the interest I had in learning electronics. I’m still trying to figure out how to get an 8×8 64-LED matrix to interface with an Arduino through a series of 74HC595N 8-bit shift registers, so I can eventually make it play Pong with a Wii Nunchuk. That said, it’s frustrating that every time I try to do something, the chip I have is not the chip I need, and the chip I need is $10 plus $5 shipping and will arrive in a week, and by the way have I thought about how to send constant current to all the LEDs so they’re all of similar brightness because my segmented number “8” is way dimmer than my segmented number “1” because of all the LEDs that need to light up, and oh yeah, there’s an app for that.

Sigh.

Especially when I’m trying to play Pong on my 8×8 LED matrix, while someone else is already playing Super Mario Bros. on hers.
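
For what it’s worth, the shift-register dance itself is only a few lines. Here is a minimal sketch of pushing a byte out to a single 74HC595 with shiftOut(); the pin assignments are arbitrary assumptions, and the latch-low, shift, latch-high rhythm is the part that matters:

    // Minimal 74HC595 example: walk one lit LED across the register's
    // eight outputs. Pin assignments are arbitrary.
    const int latchPin = 8;  // ST_CP on the 74HC595
    const int clockPin = 12; // SH_CP
    const int dataPin  = 11; // DS

    void setup() {
      pinMode(latchPin, OUTPUT);
      pinMode(clockPin, OUTPUT);
      pinMode(dataPin, OUTPUT);
    }

    void loop() {
      for (int i = 0; i < 8; i++) {
        digitalWrite(latchPin, LOW);                   // hold outputs steady
        shiftOut(dataPin, clockPin, MSBFIRST, 1 << i); // clock in one byte
        digitalWrite(latchPin, HIGH);                  // latch it to the pins
        delay(100);
      }
    }

Getting from one register to a daisy-chained pair, with constant-current drive so the “8” is as bright as the “1”, is where the real learning (and the shipping charges) begins.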

Extending Notions of Embodiment into Design Practice

In accordance with Merleau-Ponty and his work introducing the human body to phenomenology, and the work of Lakoff and Johnson in extending our notions of embodied cognition, I believe that the human body itself is central to structuring the way we perceive, interact with, and make sense of the world. Thus, I aim to take up the challenge issued by Dourish, and extend our notions of embodiment as they apply to the design of computational interactions. The goal of my work is to establish a language of embodied interaction that will help design practitioners create more compelling, more engaging, more natural interactions.

Considering physical space and the human body is an enormous topic in interaction design. In a panel at SXSW Interactive last week, Peter Merholz, Michele Perras, David Merrill, Johnny Lee and Nathan Moody discussed computing beyond the desktop as a new interaction paradigm, and Ron Goldin from Lunar discussed touchless invisible interactions in a separate presentation. At Interaction 10, Kendra Shimmell demonstrated her work with environments and movement-based interactions, Matt Cottam presented his considerable work integrating computing technologies with the richly tactile qualities of wood, and Christopher Fahey even gave a shout-out specifically to “Where The Action Is” in his talk on designing the human interface (slide 50 in the deck). The migration of computing off the desktop and into the space of our everyday lives seems only to be accelerating, to the point where Ben Fullerton proposed at Interaction 10 that we as interaction designers need to begin designing not just for connectivity and ubiquity, but for solitude and opportunities to actually disconnect from the world.

Establishing a Language of Embodied Interaction for Design Practitioners

To recap, my goal is to establish a language of embodied interaction that helps designers navigate this increasing delocalization and miniaturization of computing. I don’t know yet what this language will look like, but a few guiding principles seem to be emerging from my work:

All interactions are tangible. There is no such thing as an intangible interaction. I reject the notion that tangible interaction, the direct manipulation of physical representations of digital information, is significantly different from manipulating pixels on a screen, interactions that involve a keyboard or pointing device, or even touch screen interactions.

Tangibility involves all the senses, not just touch. Tangibility considers all the ways that objects make their presence known to us, and involves all senses. A screen is not “intangible” simply because it is comprised of pixels. A pixel is merely a colored speck on a screen, which I perceive when its photons reach my eye. Pixels are physical, and exist with us in the real world.

Likewise, a keyboard or mouse is not an intangible interaction simply because it doesn’t afford direct manipulation. I believe the wall that has been erected between historic interactions (such as the keyboard and mouse) and tangible interactions (such as the wonderful Siftables project) is false, and has damaged the agenda of tangible interaction as a whole. These interactions exist on a continuum, not between tangible and intangible, but between richly physical and physically impoverished. A mouse doesn’t allow for a whole lot of nuance of motion or pressure, and a glass touch screen doesn’t richly engage our sense of touch, but they are both necessarily physical interactions. There is an opportunity to improve the tangible nature of all interactions, but it will not happen by categorically rejecting our interactive history on the grounds that they are not tangible.

Everything is physical. There is no such thing as the virtual world, and there is no such thing as a digital interaction. Ishii and Ullmer in the Tangible Media Group at the MIT Media Lab have done extensive work on tangible interactions, characterizing them as physical manifestations of digital information. “Tangible Bits,” the title of their seminal work, largely summarizes this view. Repeatedly in their work, they set up a dichotomy between atoms and bits, physical and digital, real and virtual.

The trouble is, all information that we interact with, no matter if it is in the world or stored as ones and zeroes on a hard drive, shows itself to us in a physical way. I read your text message as a series of Latin characters rendered by physical pixels that emit physical photons from the screen on my mobile device. I perceive your avatar in Second Life in a similar manner. I hear a song on my iPod because the digital information of the file is decoded by the software, which causes the thin membrane in my headphones to vibrate at a particular frequency. Even if I dive deep and study the ones and zeroes that comprise that audio file, I’m still seeing them represented as characters on a screen.

All information, in order to be perceived, must be rendered in some sort of medium. Thus, we can never interact with information directly, and all our interactions are necessarily mediated. As with the supposed wall between tangible interactions and the interactions that preceded them, the wall between physical and digital, or real and virtual, is equally false. We never see nor interact with digital information, only the physical representation of it. We cannot interact with bits, only atoms. We do not and cannot exist in a virtual world, only the real one.

This is not to say that talking with someone in-person is the same as video chatting with them, or talking on the phone, or text messaging back and forth. Each of these interactions is very different based on the type and quality of information you can throw back and forth. It is, however, to illustrate that there isn’t necessarily any difference between a physical interaction and a supposed virtual one.

Thus, what Ishii and Ullmer propose, communicating digital information by embodying it in ambient sounds or water ripples or puffs of air, is no different than communicating it through pixels on a screen. What’s more, these “virtual” experiences we have, the “virtual” friendships we form, the “virtual” worlds we live in, are no different than the physical world, because they are all necessarily revealed to us in the physical world. The limitations of existing computational media may rule out the high-bandwidth interactions that face-to-face conversation allows (think of how much we communicate through subtle facial expressions and body language), but the fact that these interactions are happening through a screen, rather than at a coffee shop, does not make them virtual. It may, however, make them an impoverished physical interaction, as they do not engage our wide array of senses as a fully in-the-world interaction does.

Again, the dichotomy between real and virtual is false. The dichotomy between physical and digital is false. What we have is a continuum between physically rich and physically impoverished. It is nonsense to speak of digital interactions, or virtual interactions. All interactions are necessarily physical, are mediated by our bodies, and are therefore embodied.

The traditional compartmentalization of senses is a false one. In confining tangible interactions to touch, we ignore how our senses work together to help us interpret the world and make sense of it. The disembodiment of sensory inputs from one another is a byproduct of the compartmentalization of computational output (visual feedback from a screen rendered independently from audio feedback from a speaker, for instance) that contradicts our felt experience with the physical world. “See with your ears” and “hear with your eyes” are not simply convenient metaphors, but describe how our senses work in concert with one another to aid perception and interpretation.

Humans have more than five senses. Our experience with everything is situated in our sense of time. We have a sense of balance, and our sense of proprioception tells us where our limbs are situated in space. We have a sense of temperature and a sense of pain that are related to, but quite independent from, our sense of touch. Indeed, how can a loud sound “hurt” our ears if our sense of pain is tied to touch alone? Further, some animals can see in a wider color spectrum than humans, can sense magnetic or electrical fields, or can detect minute changes in air pressure. If computing somehow made these senses available to humans, how would that change our behavior?

My goal in breaking open these senses is not to arrive at a scientific account of how the brain processes sensory input, but to establish a more complete subjective, phenomenological account that offers a deeper understanding of how the phenomena of experience are revealed to human consciousness. I aim to render explicit the tacit assumptions that we make in our designs as to how they engage the senses, and uncover new design opportunities by mashing them together in unexpected ways.

Embodied Interaction: A Core Principle for Designing the Next Generation of Computing

By transcending the senses and considering the overall experience of our designs in a deeper, more reflective manner, we as interaction designers will be empowered to create more engaging, more fulfilling interactions. By considering the embodied nature of understanding, and how the human body plays a role in mediating interaction, we will be better prepared to design the systems and products for the post-desktop era.

Hans and Umbach: WiiChuck Pong

Hans and Umbach recently had a huge breakthrough that they wanted to share with you. A few weeks ago they built the Monski Pong example from Tom Igoe’s Making Things Talk book, substituting a few potentiometers for the arms of their non-existent Monski monkey (and non-existent flex sensors). They learned a lot in the process, but the boys have become increasingly concerned that they haven’t done enough work with front-facing interactions.

Stuffing a few wires into a breadboard is great for proof-of-concept work, but it brings with it a delicate and fussy interaction environment that lacks robustness and aesthetics. In the last week they’ve refocused their efforts on interactive input methods, rather than raw electronics, taking apart a Super Nintendo controller and interfacing a Nintendo Wii Nunchuk in the process.

Hans and Umbach: Taking apart an SNES controller

Hans and Umbach: Arduino Hearts Wii Nunchuck

This got them thinking. “If we can access the accelerometers of the Wii Nunchuk as an input source, can we use them to play our Pong game?” The answer is yes, and the boys want to show you how they did it.

Hans and Umbach: Wiichuck Pong Components

First up, you’ll need a Nintendo Wii Nunchuk. These things are sweet, as they carry both an X and Y axis accelerometer (as well as a couple of buttons) for less than $20. Hans hasn’t found any libraries yet that interface with the analog control up top, but these other inputs have been more than enough to keep Umbach busy.

You need access to the wires and pins inside the controller, but it would be an awful shame to cut that beautiful cable. Lucky for us, Tod Kurt has created the WiiChuck adapter, a simple tiny PCB that takes the pins from the Nunchuk plug and breaks them out into a standard 4-pin header. You can get a WiiChuck adapter at SparkFun for a measly $3.

The adapter doesn’t come with the pins to plug them into your Arduino, though, so you’ll want to get a row of break-away headers so you can cut off a 4-pin header for yourself. You need to solder those pins into place, so now you’re also in the market for a soldering iron and some solder as well. And some wire cutters for separating those break-away headers from their kin. Yeah, it takes quite a bit of stuff to get started. We’re lucky to have Umbach on our team, who carries with himself a bandolier full of tools and electronics wherever he goes.

The whole point of the WiiChuck adapter is to be able to plug your Nunchuk into your Arduino, so you can do magic stuff like communicate serially with your computer, or control other things plugged into your Arduino. When it comes to writing code and working with the software side, Tod Kurt put together a WiiChuck library that makes it pretty easy to interface between the Arduino and the Nunchuk without doing everything yourself. If you download the WiiChuck Demo zip file, you’ll get the library of functions for connecting to the Nunchuk, as well as a demo that shows it all (hopefully) working.
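
To give a flavor of what the library handles for you, a bare-bones reader looks roughly like the sketch below. This is sketched from our memory of Tod Kurt’s nunchuck_funcs.h, so treat the demo in the zip file as the authoritative reference:

    // Bare-bones Nunchuk reader using Tod Kurt's nunchuck_funcs.h.
    #include <Wire.h>
    #include "nunchuck_funcs.h"

    void setup() {
      Serial.begin(9600);
      nunchuck_setpowerpins(); // power the Nunchuk from analog pins 2 and 3
      nunchuck_init();         // join the I2C bus and handshake
    }

    void loop() {
      nunchuck_get_data();     // pull the latest report from the Nunchuk
      Serial.print("accel x: ");
      Serial.print(nunchuck_accelx());
      Serial.print("  accel y: ");
      Serial.print(nunchuck_accely());
      Serial.print("  z button: ");
      Serial.print(nunchuck_zbutton());
      Serial.print("  c button: ");
      Serial.println(nunchuck_cbutton());
      delay(100);
    }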

The demo is great and all, but the boys wanted to make it do something. They had recently built the pong example from Tom Igoe’s book, and were interested in controlling the paddles with the accelerometer inside the Nunchuk. There are two pieces of software at work here. The first is the Pong game itself, written in Processing, which accepts incoming serial data and moves the paddles based on it. The second is the sensor reader, written in Arduino, which reads the incoming data from the Nunchuk and converts it into a format that the Pong game understands.

To get it all to work, Hans made some changes to the Arduino sensor reader example, blending it with the code from the WiiChuck demo. That way, the Arduino could translate input from the Nunchuk’s accelerometers (and buttons) into a format compatible with the Pong game. The game itself needed only one small modification: its minimum and maximum paddle values had to conform to the range of values produced by the accelerometers.
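If you’re curious what that blend boils down to, here’s an illustrative sketch rather than Hans’s exact code. The comma-separated line format mirrors the general shape of the book’s pong protocol (paddle, paddle, button, button), though the exact ordering here is our assumption, as is the 70-to-180 swing of the raw accelerometer values; calibrate against your own Nunchuk.

#include <Wire.h>
#include "nunchuck_funcs.h"

void setup() {
  Serial.begin(19200);
  nunchuck_setpowerpins();
  nunchuck_init();
}

void loop() {
  nunchuck_get_data();

  // Tilting up and down (accel Y) drives the right paddle,
  // tilting left and right (accel X) drives the left paddle.
  // The game's min/max paddle values were adjusted to this range.
  int rightPaddle = constrain(nunchuck_accely(), 70, 180);
  int leftPaddle  = constrain(nunchuck_accelx(), 70, 180);

  // One comma-separated line per reading: left paddle,
  // right paddle, serve button, reset button.
  Serial.print(leftPaddle);
  Serial.print(",");
  Serial.print(rightPaddle);
  Serial.print(",");
  Serial.print(nunchuck_zbutton());
  Serial.print(",");
  Serial.println(nunchuck_cbutton());

  delay(50);
}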

Hans and Umbach: Wiichuck Pong Game

Et voilà! C’est magnifique! The video up top shows the fruits of our labor: tilting the Nunchuk up and down moves the right paddle, and tilting it left and right moves the left paddle. One button starts the game rollin’, and the other resets the scores.

If you’re interested in trying it out for yourself, Hans and Umbach have packaged up all their code into a fine and handy zip file. Or, you can browse the individual files here:

WiichuckPongReader.pde (Arduino)
nunchuck_funcs.h (Arduino library)
WiichuckPongGame.pde (Processing)
WiichuckPong.zip (Everything zipped up)

Thanks, and happy hacking!

Hans and Umbach: Prototyping In Light

Hans and Umbach took some time out from their work to lend a hand with my capstone project, in which I’m trying to help people maintain a connection with the outdoors when they work inside for a living. In particular I’ve been studying how sunlight plays with indoor architectural spaces, and how the shapes of cast light change throughout the day as the sun moves across the sky. My explorations have been deeply inspired by the work of Daniel Rybakken, Adam Frank, and Philips’ efforts with dynamic lighting.

I wanted to create a device that would mimic the movement of the sun throughout the day, and I turned to Hans and Umbach for advice on how to build such a thing. They recommended something as simple as a clock movement carrying a paper screen, which would rotate and change the angle and position of a beam of light from a Maglite over the course of time. We deemed the prototype Chrono and set forth to build it, to see how it would work.

"Outside In" Chrono Prototype Construction

"Outside In" Chrono Prototype Construction

"Outside In" Chrono Prototype Construction

"Outside In" Chrono Prototype Stage

Light is a tricky beast to prototype with, to be sure, but these small steps begin to point us in the right direction. We recorded a few time-lapse videos that show the movement of the prototype in a simulated office desk environment, condensing thirteen minutes of movement into less than two minutes:

The electronics are simple, but it’s an interesting and subtle way to communicate the slow passage of time within “embodied” space!

Hans and Umbach: Arduino Party!

Our good friend Lorelei needed some electronics help the other day, so Hans and Umbach invited her over for a fun-filled Arduino Party. She’s prototyping a force-sensing coaster that encourages people to drink plenty of water throughout the day, and the first step towards that goal is getting a force sensor to communicate with her Arduino.

She had managed to pull an analog input from her FlexiForce pressure sensor and send it back out as a PWM signal to an LED, but she was still getting terrible noise from the sensor. Umbach dumped the readings to the serial monitor, and lo and behold, the Arduino was reporting values anywhere from 0 to 1023 and everything in between!

Hans and Umbach: Arduino Party!

Hans dismantled the circuit and rebuilt it, taking the Cat Sat On The Mat example from Tom Igoe’s wonderful Making Things Talk book as inspiration. To tune the sensitivity of the sensor’s output he tried all sorts of resistors, from 1K to 100K, before ultimately settling on 15K. Umbach wired the circuit up to the Arduino, ran it into the serial monitor on Lorelei’s computer, and whammo! Success! A clean analog signal coming from the FlexiForce!
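For anyone planning their own party, the software side of a circuit like Lorelei’s is mercifully short. Here’s a hedged sketch of the sort of thing we ran; the pin choices are illustrative assumptions (the FlexiForce and the 15K resistor form a voltage divider feeding analog pin 0, with an LED on PWM pin 9), so adjust to your own wiring.

// Read a force sensor through a voltage divider and echo it
// to the serial monitor and an LED. Pins are assumptions.
const int sensorPin = A0;  // junction of the FlexiForce and the 15K resistor
const int ledPin = 9;      // any PWM-capable pin will do

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int reading = analogRead(sensorPin);  // clean values, 0 to 1023
  Serial.println(reading);              // watch them scroll by in the serial monitor

  // Echo the pressure back out as LED brightness.
  analogWrite(ledPin, map(reading, 0, 1023, 0, 255));

  delay(10);
}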

Hans and Umbach: The Completed Circuit

We had a lot of fun, and encourage the rest of y’all to throw your own Arduino parties! Just don’t try to combine them with fondue parties… electronics don’t mix well with boiling oil and cheese. Then again, they might go just fine with a chocolate fondue, so don’t let us stop you!

Hans and Umbach: Tragedy!

We have some sad news to report on the Hans and Umbach side of things. Umbach was soldering the other day, putting together our second Arduino Proto Shield from Adafruit, when he burned himself pretty badly on his soldering iron. Don’t worry, he’s a quick healer!

You see, Umbach keeps his soldering iron to his left when he’s working. The strong affordance of the soldering iron seems to indicate that you should hold it like a pen, but of course that is a ridiculous notion. The long metal end of the iron is about a million degrees, and it will burn your skin in an instant. You should hold it not like a pen, but further back, like a… not-pen… or a paintbrush… or something.

But then, even that is not entirely accurate. As you get more comfortable with soldering you realize, or at least Umbach has realized, that the iron is not the most important thing you wield. The iron merely heats up the area, and it doesn’t require nearly as much fine motor control as the solder itself. Indeed, the solder should be held in your dominant hand, so you can be as precise as possible with whatever parts you may be slagging in liquid metal.

Umbach was in his groove, grabbed his soldering iron in his left hand, and without thinking made to pass it to his right hand, as he would a pen. His right hand closed around the hot end for only half a second, but it was enough to burn the back of his index finger and the inside of his middle finger.

There is a lesson here, and it’s not necessarily that Umbach was thoughtless, careless and stupid. As humans we are constantly filtering information, performing apparently routine tasks without deliberate thought. In much the same way, I am convinced that no one actually learns Photoshop or Illustrator; rather, over time we become able to unconsciously filter out the aspects of the interface that distract from our everyday usage. It’s an incredible ability, and one that frees up our mental capacity to dream of such awesome things as transistors, skee ball, and bears juggling chainsaws.

We go through life largely in a state of absorbed coping. In the case of Umbach, we see that this can get us into trouble sometimes. Grabbing the hot end of a soldering iron is clearly a poor decision, and had Umbach been consciously aware of the results that would inevitably follow from his actions, he would never have done it in the first place.

But we are people, and as people we adopt certain habits that are applicable in certain situations. When these situations unexpectedly cross one another, such as when the strong pen-like affordance of a soldering iron triggers the pen-like habit of holding it, we may find ourselves with burned fingers. As designers it’s important that we deliberately consider what the forms of our products communicate to our users, even unconsciously, and design in a manner that discourages the absent-minded adoption of an incorrect interaction model.

Or maybe Hans just needs to take over the soldering from now on.

Hans and Umbach: “You know how grip works.”

Over winter break, Kate and I were fortunate enough to attend the British Advertising Awards at the Walker Art Center in Minneapolis. One commercial from Audi in particular really stuck with me, because of its clear reference to our highly sophisticated ability to navigate and interact with our physical surroundings.

With the Hans and Umbach project, this is what I aim to render explicit: that we have these incredibly well-developed skills for working with the physical artifacts in our environment, and that by deliberately designing for these skills we can create more compelling, more engaging, more intuitive interactions.