Hans and Umbach: The Virtual World You Requested Does Not Exist

Our interests in embodied interaction started almost a year ago, as we spent the summer in San Francisco. Confronted by the overwhelming colors and textures of a real living-and-breathing (and, based on olfactory sensations, clearly excreting) city, we realized how malnourished our computer-mediated interactions were, compared to the rich sensory experience of the real, physical world. Additionally, our time at the Musée Mécanique made us appreciate the aesthetic experience of real physical artifacts, of tactile materials like warm wood cabinets and cold metal handles. We liked heavy dials you had to twist, piano scrolls that spun, machines that would hum and shake as their gears and belts within worked their magic.

Musée Mécanique

There is something different, something tangibly different, about real objects in real space around you. Sound that emanates from two metal pieces clanging together in real space is so much more satisfying than a recording played through a speaker. That they move, that they displace the air around them, the same air that you breathe, is just one of the ways we seem inexplicably tied to the physical realm.

Electronics as a tool for extending computation into the physical world.

This was the goal as we began our inquiry: how to create physical interactions that exist in the real world, involve the manipulation of real artifacts, and are invisibly backed by the strengths of modern computation and network technology. In our efforts to reintroduce interaction to the space around us we learned electronics, we experimented with Arduino, and we extended our knowledge of programming into interactive electronic artifacts that existed in the world with us.

Hans and Umbach: Wiichuck Pong Components

As we tinkered with electronics we quickly discovered that all of the subtlety and nuance, as well as the challenges, that go into designing “digital” interactions are present in physical computing, only amplified, because we were now considering both an electronic and a physical layer. These are two layers that you take for granted when you build, say, a website or application. With Arduino they become your responsibility, subject to whatever limited grasp you may have of the subject area. We are disappointed by the limited progress we made in learning electronics, but we certainly have a renewed appreciation for people with the wide array of skills necessary to make not only functional, not only good, but great computationally-backed physical interactions.

From a theoretical perspective, our original interests were to understand why these physical in-the-world interactions are so fulfilling and evocative, and why our virtual interactions feel so vapid in comparison. Our goal was to explain why these physical interactions should earn a privileged seat at the table, while virtual interactions should be sent to their room.

Tangible computing: digital information, physical interaction.

As we sifted through the layers of theory on embodiment, we realized we needed a better understanding of what defines a virtual interaction, and how it differs from a physical one. Traditionally, a virtual/digital interaction involves a screen composed of pixels, with a keyboard and pointing device used for controlling the interface. Tangible computing has worked to categorize interactions, characterizing some as ‘tangible’ and others as ‘digital’. Indeed, with ambient computing Ishii and Ullmer have dedicated much of their work to studying how we can render ‘digital’ information, or ‘bits’, in ‘physical’ space. A number of authors have sought to define tangible computing in a manner that differentiates it from ‘regular’ computing. A few common tenets:

  • Tangible computing unifies input and output surfaces. Instead of a keyboard that adds characters to a screen, or instead of a mouse that moves a cursor, tangible computing offers a new interactive model where the input mechanism and output display are one and the same artifact.
  • Tangible computing affords direct manipulation. Instead of mapping the physical space my mouse occupies to the virtual space on the screen, the physical object I am working with can be grasped, turned and shaped in order to change its unified output characteristics.

Three Flavors: Data-Centered, Perceptual-Motor-Centered, Space-Centered

Hornecker outlines three primary views of tangible interaction. The work of Ishii and Ullmer concerns a data-centered viewpoint, where physical artifacts are computationally-augmented with digital information. A second is a perceptual-motor-centered view of tangible interaction, which aims to leverage the affordance and rich sensory experience of physical objects. As championed by Djajadiningrat and Overbeeke among others, this view of tangible interaction emphasizes the expressive nature of human movement. Finally, Hornecker subscribes to a space-centered view of tangible interaction, which involves embedding virtual displays in real-world spaces.

I subscribe to a perceptual-motor-centered view of tangible interaction.

And I believe that the data-centered and space-centered views are absolute nonsense.

Let’s talk about digital and virtual.

Both the data-centered and space-centered views of tangible computing make reference to a ‘digital’ or ‘virtual’ world of information. These concepts are familiar enough, and we all have a gut feeling as to what they mean. Digital information is ones and zeroes. It lives in a computer or on a server somewhere. It is ephemeral, existing without really existing, and can be infinitely accessed, copied, reproduced and distributed without loss. It’s what got the music industry’s panties in a bunch.

The virtual world is the world in which this ‘digital’ information exists, and it lacks many of the familiar characteristics of the physical world. Things don’t actually ‘exist’ in the virtual world. A photo on Flickr is not a photograph in real life. You don’t need to be co-present with the photo in order to see it. Multiple people can look at the same photograph at the same time, without being aware of one another.

The metaphors we use to describe digital information and the virtual world emphasize its distributed, ephemeral nature. Deleted files disappear “into the ether” and we pull things down from “the cloud.” Like an atmosphere that envelops us, we think of it as existing independent of us, independent of the moments we perceive it through our devices.

Nevertheless, digital and virtual are only conceptual metaphors. They do not describe the objective qualities of our networked, computational systems, but rather our subjective framing of them. They are extremely effective metaphors, yes, as characterized by their widespread use and prevalence in thought. But it is nonsense, utter nonsense, to claim that the virtual world exists, and is any different from the physical world.

There is no digital information. There is no virtual world. There is only the physical world, where we encounter mediated instances of so-called ‘digital’ information.

You never see, nor interact with, a virtual world. There is no such thing as a virtual display. The idea of augmenting real, physical objects with digital information is meaningless, as is the idea of augmenting physical spaces with virtual displays.

If a tree falls in the forest and no one hears the sound, it does not make a sound.

My mind’s telling me virtual, but my body says physical.

All of your interactions with the ‘virtual’ world are necessarily mediated by whatever system or tool you are using to access it. This post, for instance, is not virtual. It is a collection of physical pixels emitting physical photons of light, which enter your eye in a pattern that your brain recognizes and interprets as text. This is the case if you are reading this on your laptop, your phone or your iPad. If you want to comment on this post, you will perhaps press physical keys on your laptop to make recognizable characters appear on your physical screen, until they are in an amount and order you deem satisfactory.

Perhaps you will comment by pecking this out on an iPhone or iPad’s ‘virtual’ keyboard. Again, just because the keyboard is rendered on a screen, comprised of pixels, does not mean that it is virtual. Just because there isn’t tactile feedback (technically there is tactile feedback, as your finger doesn’t pass through the device like a ghost, but it may be feedback that doesn’t meet your expectations for a keyboard) doesn’t mean it isn’t physical.

You never have direct, unmediated access to what is metaphorically described as the virtual world. All of your interactions with ‘virtual’ information are necessarily physical, necessarily tangible, and therefore embodied. Thus, anyone who claims that tangibility is a new agenda for computing is sadly mistaken. Tangibility has always been core to our ability to interact with and experience computational devices, from pixels to keyboards to touch screens.

All interactions are not created equal.

The arguments over classification, determining what ‘is’ a tangible interaction versus what ‘is’ a virtual interaction, are completely misguided. All interactions are tangible, all interactions are physical, all interactions are embodied, but all interactions are not necessarily created equal. As humans we have highly-developed capacities to perceive, interpret, and make meaning out of our surroundings. In traditional desktop computing, as well as touch screen computing, devices tend to leverage only our most basic capacities for seeing and touching. These characteristics do not make an interaction virtual, they do not make it intangible, but they do make it physically impoverished.

This was the big surprise the boys and I encountered over the course of this project. We initially set out to explain why physical interactions were more fulfilling than virtual ones, and how the traditional screen, keyboard and mouse ignored all but the most rudimentary human capabilities for interacting with the world. What we realized was that we couldn’t merely categorize some interactions as physical and the other ones as virtual, because all interactions are necessarily situated in and mediated by the physical world. ‘Virtual’ is a convenient conceptual metaphor for describing a certain class of interactions, those that evoke only a limited set of our physical and perceptual capabilities, but the notion of a disembodied virtual world independent of the physical world is absolute nonsense. Moreover, I believe the appeal of the ‘physical’ and ‘virtual’ metaphors, and the territorial battles that have been fought under their banners, have distracted us from far more important agendas.

Traditional desktop interactions are unsatisfying not because they are ‘intangible’ or ‘virtual’, but because they offer an impoverished physical interaction that does little to leverage our unique abilities to perceive, interpret, and make meaning from our surroundings. Tangible computing differentiates itself not because it offers a ‘physical’ representation of ‘digital’ information, but because it uniquely focuses on the tactile qualities of interaction, and the rich sensory experiences that the world can afford.

Ultimately, all interactions are tangible. By acknowledging the metaphorical barrier between the ‘physical’ and ‘virtual’ worlds as a false one, and instead focusing on our ability to deliver richly evocative interactions through these different interactive paradigms, we are empowered to build more compelling interactions.

Hans and Umbach: The Role of Metaphor in Embodied Interaction

Through their research, Hans and Umbach have discovered that there is no shortage of brilliant work summarizing the primary concepts of embodied interaction. From Antle to Schiphorst, from Dourish to Hornecker, from Robertson to Sharlin to Lowgren to Fernaeus to Djajadiningrat to Fishkin, everyone seems to be reading the right stuff. Everyone is talking about Heidegger and his hermeneutical phenomenology, a philosophical approach to understanding the way the world is manifest in consciousness, how we interpret our experience with the world, and ultimately how we form meanings with it.

Everyone is channeling Dourish, and his work unifying social computing and tangible computing under the banner of embodied interaction. Many authors are channeling Lakoff and Johnson, and their profound work studying linguistics, metaphors and embodied cognition. Indeed, any text that discusses embodied interaction, without reference to Lakoff and Johnson, is immediately suspect in the boys’ book.

Lakoff and Johnson, and the role of metaphor in human thinking.

Lakoff and Johnson posit that much of our language, and thus much of our thinking, is dependent on our use of metaphors to describe the world. These metaphors are so ingrained in our thinking that we are rarely conscious of their use. For example, we describe time using spatial metaphors, or even material metaphors. Things that are in the future are “ahead” of us, and things in the past are “behind” us. We talk about the speed at which we perceive time passing, and we describe time as though it is water, a continually flowing substance. Time slips through our fingers, we don’t have enough of it, and we frequently run out of it.

Lakoff and Johnson argue that metaphors are not just convenient linguistic tricks we use that allow us to communicate more efficiently with one another, but that our brains are hard-wired to categorize and associate in such a way that we can’t help but think in metaphor. Hans and Umbach have definitely experienced that in the last few months, as they’ve been learning electronics. As they work with circuits and components, trying to build things that work and debug things that don’t, they’re constantly using spatial and material metaphors as a foundation to their thinking. We talk about electricity “flowing” from negative to positive, as though it is water. We talk about resistors resisting (or constricting) the flow of electricity. We talk about capacitors “filling up”, or buttons “closing” a circuit, or transistors “waiting” for a signal.

If we pause just for a moment, none of these thoughts regarding electricity make any sense at all. We can’t see it, so it’s meaningless to “know” or even to “think” that it acts like water, even while this particular mental model sets us up for success when creating a functional circuit. I close things in my environment all the time, such as doors, windows and notebooks, but to say that a pressed button “closes” a circuit is nonsense. Worst of all, how can a transistor “wait”? For something to wait implies that it perceives time, that it can anticipate the future, that it will respond in some manner when the appropriate stimulus presents itself.

Animals wait. Humans wait. Transistors do not wait, and yet this metaphor, that of the transistor as an organism that can anticipate and respond, tells us how to work with them. This, then, is where Lakoff and Johnson’s work gets particularly juicy. Humans are biological creatures with particular sensory capacities. We see light across a particular spectrum, can sense heat across a particular range at varying degrees of sensitivity, and have bodies with arms, fingers and hands that grant us certain abilities for interacting with the world.

Cognition is situated in the body, and the body influences cognition.

J.J. Gibson’s work in ecological psychology argues that action and cognition are radically situated in the environment and inseparable from it, such that you can make no predictions about an organism’s behavior without knowing about the environment in which it is situated. Lakoff and Johnson extend Gibson’s work by channeling the concept of embodied cognition, which similarly claims that cognition is radically situated in the body.

Indeed, according to embodied cognition, the reason we perceive the world the way we do is not necessarily because the world possesses certain perceptible qualities, but rather because our bodies perceive and make sense of the world in a certain way. We perceive time in a certain way because we are hard-wired to experience it in that way. We organize the physical world in time because it is impossible for us to organize it independent of time. The more we learn about quantum mechanics, too, the more we learn that there is little in the world that objectively reflects the common sense human experience of time.

This is not to say that the objective world does not exist, but rather that we need to deliberately consider the way our minds make sense of the world. Since our minds are situated in our bodies, and our bodies have certain capabilities that pre-filter our access to the world, the importance of considering subjective experience as a phenomenon independent of the objective world cannot be overstated.

“I can’t get my body out of my mind.”

The notion of embodied cognition has profound implications, and we can see some of them manifested in the way we talk about, and orient ourselves towards, the physical world. Our bodies are basically symmetrical from left to right, but strongly asymmetrical from front to back. We can see things when they are in front of us, but not when they are behind us. Our limbs are oriented in such a way that we walk in a forward vector, towards our line of sight.

Thus, things that we encounter “in the future” we typically encounter as we walk towards them, and things that we encountered “in the past” are things that lie behind us. This asymmetry from front to back not only shapes the way we orient ourselves spatially, but also influences how we perceive the world. In this way our bodies’ unique configuration determines our understanding of time, spatially situating our temporal metaphors.

The richer notions of embodiment that Hans and Umbach have discovered over the course of our project consider these notions of metaphor as a fundamental part of how we interpret the world and make meaning of it. These metaphors arise out of the unique qualities and perceptual capabilities of our bodies, such that the way we make sense of and interact with the world is necessarily shaped by our own physical characteristics.

Hans and Umbach: Establishing a Language of Embodied Interaction for Design Practitioners

My work with Hans and Umbach on physical computing and embodied interaction took an interesting turn recently, down a path I hadn’t anticipated when I set out to pursue this project. My initial goal with this independent study was to develop the skills necessary to work with electronics and physical computing as a prototyping medium. In recent years, hardware platforms such as Arduino and programming environments such as Wiring have clearly lowered the barrier to entry for getting involved in physical computing, and have allowed even the electronic layman to build some super cool stuff.

Rob Nero presented his TRKBRD prototype at Interaction 10, an infrared touchpad built in Arduino that turns the entire surface of one’s laptop keyboard into a device-free pointing surface. Chris Rojas built an Arduino tank that can be controlled remotely through an iPhone application called TouchOSC. What’s super awesome is that most everyone building this stuff is happy to share their source code, and contribute their discoveries back to the community. The forums on the Arduino website are on fire with helpful tips, and it seems an answer to any technical question is only a Google search away. SparkFun has done tremendous work in making electronics more user-friendly and approachable, offering suggested uses, tutorials and data sheets right alongside the components they sell.

Dourish and Embodied Interaction: Uniting Tangible Computing and Social Computing

In tandem with my continuing education with electronics, I’ve been doing extensive research into embodied interaction, an emerging area of study in HCI that considers how our engagement, perception, and situatedness in the world influences how we interact with computational artifacts. Embodiment is closely related to a philosophical interest of mine, phenomenology, which studies the phenomena of experience and how reality is revealed to, and interpreted by, human consciousness. Phenomenology brackets off the external world and isn’t concerned with establishing a scientifically objective understanding of reality, but rather looks at how reality is experienced through consciousness.

Paul Dourish outlines a notion of embodied interaction in his landmark work, “Where The Action Is: The Foundations of Embodied Interaction.” In Chapter Four he iterates through a few definitions of embodiment, starting with what he characterizes as a rather naive one:

“Embodiment 1. Embodiment means possessing and acting through a physical manifestation in the world.”

He takes issue with this definition, however, as it places too high a priority on physical presence, and proposes a second iteration:

“Embodiment 2. Embodied phenomena are those that by their very nature occur in real time and real space.”

Indeed, in this definition embodiment is concerned more with participation than physical presence. Dourish uses the example of conversation, which is characterized by minute gestures and movements that hold no objective meaning independent of human interpretation. In “Technology as Experience” McCarthy and Wright use the example of a wink versus a blink. While closing and opening one’s eye is an objective natural phenomenon that exists in the world, the meaning behind a wink is more complicated: there are issues of intent for the “winker”, whether they mean the wink to signal flirtation or collusion, or whether they simply had a speck of dirt in their eye. There are also issues of interpretation for the “winkee”: whether they perceive the wink, how they interpret it, and whether or not they interpret it as the “winker” intended.

Thus, Dourish’s second iteration on embodiment deemphasizes physical presence while allowing for these subjective elements that do not exist independent of human consciousness. A wink cannot exist independent of real time and real space, but its meaning involves more than just its physicality. Indeed, Edmund Husserl originally proposed phenomenology in the early 20th century, but it was his student Martin Heidegger who carried it forward into the realm of interpretation. Hermeneutics is an area of study concerned with the theory of interpretation, and thus Heidegger’s hermeneutical phenomenology (or the study of experience and how it is interpreted by consciousness) has rather become the foundation of all recent phenomenological theory.

Beyond Heidegger, Dourish takes us through Alfred Schutz, who considered intersubjectivity and the social world of phenomenology, and Maurice Merleau-Ponty, who deliberately considered the human body by introducing the embodied nature of perception. In wrapping up, Dourish presents a third definition of embodiment:

“Embodiment 3. Embodied phenomena are those which by their very nature occur in real time and real space. … Embodiment is the property of our engagement with the world that allows us to make it meaningful.”

Thus, Dourish says:

“Embodied interaction is the creation, manipulation, and sharing of meaning through engaged interaction with artifacts.”

Dourish’s thesis behind “Where The Action Is” is that tangible computing (computer interactions that happen in the world, through the direct manipulation of physical artifacts) and social computing (computer-augmented interaction that involves the continual navigation and reconfiguration of social space) are two sides of the same coin; namely, that of embodied interaction. Just as tangible interactions are necessarily embedded in real space and real time, social interaction is embedded as an active, practical accomplishment between individuals.

According to Dourish, embodied computing is a larger frame that encompasses tangible computing and social computing. This is a significant observation, and “Where The Action Is” is a landmark achievement. But, as Dourish himself admits, there isn’t a whole lot new here. He connects the dots between two seemingly unrelated areas of HCI theory, unifies them under the umbrella term embodied interaction, and leaves it to us to work it out from there.

And I’m not so sure that’s happened. “Where The Action Is” came out nine years ago, and based on the papers I’ve read on embodied interaction, few have attempted to extend the definition beyond Dourish’s work. While I wouldn’t describe his book as inadequate, I would certainly characterize it as a starting point, a significant one at that, for extending our thoughts on computing into the embodied, physical world.

From Physical Computing to Notions of Embodiment

For the last two months I have been researching theories on embodiment, teaching myself physical computing, and reflecting deeply on my experience of learning the arcane language of electronics. Even with all the brilliantly-written books and well-documented tutorials in the world, I find that learning electronics is hard. It frequently violates my common-sense experience with the world, and authors often use familiar metaphors to compensate for this. Indeed, electricity is like water, except when it’s not, and it flows, except when it doesn’t.

In reading my reflections I can trace the evolution of how I’ve been thinking about electronics, how I discover new metaphors that more closely describe my experiences, reject old metaphors, and become increasingly disenchanted that this is a domain of expertise I can master in three months. What is interesting is not that I was wrong in my conceptualizations of how electronics work, however, but how I was wrong and how I found myself compensating for it.

Hans and Umbach: Arduino, 8-bit shift register, 7-segment display

While working with a seven-segment display, for instance, I could not figure out which segmented LED mapped to which pin. As I slowly began to figure this out, it did not seem to map to any recognizable pattern, and certainly did not adhere to my expectations. I thought the designers of the display must have had deliberately sinister motives in how their product so effectively violated any sort of common sense interpretation.

To compensate, I drew up my own spatial map, both on paper as well as in my mind, to establish a personal pattern where no external pattern was immediately perceived. “The pin in the upper lefthand corner starts on the middle, center segment,” I told myself, “and spirals out clockwise from there, clockwise for both the segments as well as the pins, skipping the middle-pin common anodes, with the decimal seated awkwardly between the rightmost top and bottom segments.”

It was this personal spatial reasoning, this establishment of my own pattern language to describe how the seven-segment display worked, that made me realize how strongly my own embodied experience determines how I perceive, interact with, and make sense of the world. So long as a micro-controller has been programmed correctly, it doesn’t care which pin maps to which segment. But for me, a bumbling human who is poor at numbers but excels at language, socialization and spatial reasoning, you know, those things that humans are naturally good at, I needed some sort of support mechanism. And that mechanism arose out of my own embodied experience as a real physical being with certain capabilities for navigating and making sense of a real physical world.
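For what it’s worth, code can quarantine that arbitrariness in a lookup table, so the mapping only has to be reasoned out once. Here’s a minimal sketch of the idea; the pin numbers are hypothetical (every display’s data sheet is different), and it assumes a common-anode display where pulling a segment pin LOW lights the segment:

    // Hypothetical wiring: segments a-g plus the decimal point,
    // connected to Arduino digital pins 2-9. Adjust to your display.
    const int segmentPins[8] = {2, 3, 4, 5, 6, 7, 8, 9}; // a, b, c, d, e, f, g, dp

    // Which segments (a-g) light up for each digit 0-9.
    const byte digits[10][7] = {
      {1,1,1,1,1,1,0}, // 0
      {0,1,1,0,0,0,0}, // 1
      {1,1,0,1,1,0,1}, // 2
      {1,1,1,1,0,0,1}, // 3
      {0,1,1,0,0,1,1}, // 4
      {1,0,1,1,0,1,1}, // 5
      {1,0,1,1,1,1,1}, // 6
      {1,1,1,0,0,0,0}, // 7
      {1,1,1,1,1,1,1}, // 8
      {1,1,1,1,0,1,1}  // 9
    };

    void showDigit(int d) {
      for (int i = 0; i < 7; i++) {
        // Common anode: LOW turns a segment on, HIGH turns it off.
        digitalWrite(segmentPins[i], digits[d][i] ? LOW : HIGH);
      }
    }

    void setup() {
      for (int i = 0; i < 8; i++) {
        pinMode(segmentPins[i], OUTPUT);
        digitalWrite(segmentPins[i], HIGH); // everything off to start
      }
    }

    void loop() {
      for (int d = 0; d < 10; d++) { // count from 0 to 9, once per second
        showDigit(d);
        delay(1000);
      }
    }

The table, needless to say, exists entirely for the humans.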

Over time this realization, that I am constantly leveraging my own embodiment as a tool to interpret the world, dwarfed the interest I had in learning electronics. I’m still trying to figure out how to get an 8×8 64-LED matrix to interface with an Arduino through a series of 74HC595N 8-bit shift registers, so I can eventually make it play Pong with a Wii Nunchuk. That said, it’s frustrating that every time I try to do something, the chip I have is not the chip I need, and the chip I need is $10 plus $5 shipping and will arrive in a week, and by the way have I thought about how to send constant current to all the LEDs so they’re all of similar brightness because my segmented number “8” is way dimmer than my segmented number “1” because of all the LEDs that need to light up, and oh yeah, there’s an app for that.

Sigh.

Especially when I’m trying to play Pong on my 8×8 LED matrix, while someone else is already playing Super Mario Bros. on hers.
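For the record, talking to a single 74HC595 is mercifully simple compared to everything layered on top of it. A rough sketch of where we started, with assumed pin choices; a full 8×8 matrix adds row scanning (and the aforementioned constant-current headaches) on top of this:

    // One 74HC595 turns three Arduino pins into eight outputs.
    // Pin assignments here are arbitrary; match them to your wiring.
    const int dataPin  = 11; // DS: serial data in
    const int clockPin = 12; // SHCP: shift register clock
    const int latchPin = 8;  // STCP: storage (latch) clock

    void setup() {
      pinMode(dataPin, OUTPUT);
      pinMode(clockPin, OUTPUT);
      pinMode(latchPin, OUTPUT);
    }

    void loop() {
      // Hold the outputs steady while the new byte shifts in...
      digitalWrite(latchPin, LOW);
      shiftOut(dataPin, clockPin, MSBFIRST, B10101010); // every other LED
      // ...then latch it onto the output pins all at once.
      digitalWrite(latchPin, HIGH);
      delay(500);
    }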

Extending Notions of Embodiment into Design Practice

In accordance with Merleau-Ponty and his work introducing the human body to phenomenology, and the work of Lakoff and Johnson in extending our notions of embodied cognition, I believe that the human body itself is central to structuring the way we perceive, interact with, and make sense of the world. Thus, I aim to take up the challenge issued by Dourish, and extend our notions of embodiment as they apply to the design of computational interactions. The goal of my work is to establish a language of embodied interaction that will help design practitioners create more compelling, more engaging, more natural interactions.

Considering physical space and the human body is an enormous topic in interaction design. In a panel at SXSW Interactive last week, Peter Merholz, Michele Perras, David Merrill, Johnny Lee and Nathan Moody discussed computing beyond the desktop as a new interaction paradigm, and Ron Goldin from Lunar discussed touchless invisible interactions in a separate presentation. At Interaction 10, Kendra Shimmell demonstrated her work with environments and movement-based interactions, Matt Cottam presented his considerable work integrating computing technologies with the richly tactile qualities of wood, and Christopher Fahey even gave a shout-out specifically to “Where The Action Is” in his talk on designing the human interface (slide 50 in the deck). The migration of computing off the desktop and into the space of our everyday lives seems only to be accelerating, to the point where Ben Fullerton proposed at Interaction 10 that we as interaction designers need to begin designing not just for connectivity and ubiquity, but for solitude and opportunities to actually disconnect from the world.

Establishing a Language of Embodied Interaction for Design Practitioners

To recap, my goal is to establish a language of embodied interaction that helps designers navigate this increasing delocalization and miniaturization of computing. I don’t know yet what this language will look like, but a few guiding principles seem to be emerging from my work:

All interactions are tangible. There is no such thing as an intangible interaction. I reject the notion that tangible interaction, the direct manipulation of physical representations of digital information, is significantly different from manipulating pixels on a screen, interactions that involve a keyboard or pointing device, or even touch screen interactions.

Tangibility involves all the senses, not just touch. It considers all the ways that objects make their presence known to us. A screen is not “intangible” simply because it is composed of pixels. A pixel is merely a colored speck on a screen, which I perceive when its photons reach my eye. Pixels are physical, and exist with us in the real world.

Likewise, a keyboard or mouse is not an intangible interaction simply because it doesn’t afford direct manipulation. I believe the wall that has been erected between historic interactions (such as the keyboard and mouse) and tangible interactions (such as the wonderful Siftables project) is false, and has damaged the agenda of tangible interaction as a whole. These interactions exist on a continuum, not between tangible and intangible, but between richly physical and physically impoverished. A mouse doesn’t allow for a whole lot of nuance of motion or pressure, and a glass touch screen doesn’t richly engage our sense of touch, but they are both necessarily physical interactions. There is an opportunity to improve the tangible nature of all interactions, but it will not happen by categorically rejecting our interactive history on the grounds that they are not tangible.

Everything is physical. There is no such thing as the virtual world, and there is no such thing as a digital interaction. Ishii and Ullmer in the Tangible Media Group at the MIT Media Lab have done extensive work on tangible interactions, characterizing them as physical manifestations of digital information. “Tangible Bits,” the title of their seminal work, largely summarizes this view. Repeatedly in their work, they set up a dichotomy between atoms and bits, physical and digital, real and virtual.

The trouble is, all information that we interact with, no matter if it is in the world or stored as ones and zeroes on a hard drive, shows itself to us in a physical way. I read your text message as a series of Latin characters rendered by physical pixels that emit physical photons from the screen on my mobile device. I perceive your avatar in Second Life in a similar manner. I hear a song on my iPod because the digital information of the file is decoded by the software, which causes the thin membrane in my headphones to vibrate at a particular frequency. Even if I dive deep and study the ones and zeroes that comprise that audio file, I’m still seeing them represented as characters on a screen.

All information, in order to be perceived, must be rendered in some sort of medium. Thus, we can never interact with information directly, and all our interactions are necessarily mediated. As with the supposed wall between tangible interactions and the interactions that preceded them, the wall between physical and digital, or real and virtual, is equally false. We never see nor interact with digital information, only the physical representation of it. We cannot interact with bits, only atoms. We do not and cannot exist in a virtual world, only the real one.

This is not to say that talking with someone in-person is the same as video chatting with them, or talking on the phone, or text messaging back and forth. Each of these interactions is very different based on the type and quality of information you can throw back and forth. It is, however, to illustrate that there isn’t necessarily any difference between a physical interaction and a supposed virtual one.

Thus, what Ishii and Ullmer propose, communicating digital information by embodying it in ambient sounds or water ripples or puffs of air, is no different than communicating it through pixels on a screen. What’s more, these “virtual” experiences we have, the “virtual” friendships we form, the “virtual” worlds we live in, are no different than the physical world, because they are all necessarily revealed to us in the physical world. The limitations of existing computational media may prevent us from allowing such high-bandwidth interactions as those allowed by face-to-face interaction (think of how much we communicate through subtle facial expressions and body language), but the fact that these interactions are happening through a screen, rather than at a coffee shop, does not make them virtual. It may, however, make them an impoverished physical interaction, as they do not engage our wide array of senses as a fully in-the-world interaction does.

Again, the dichotomy between real and virtual is false. The dichotomy between physical and digital is false. What we have is a continuum between physically rich and physically impoverished. It is nonsense to speak of digital interactions, or virtual interactions. All interactions are necessarily physical, are mediated by our bodies, and are therefore embodied.

The traditional compartmentalization of senses is a false one. In confining tangible interactions to touch, we ignore how our senses work together to help us interpret the world and make sense of it. The disembodiment of sensory inputs from one another is a byproduct of the compartmentalization of computational output (visual feedback from a screen rendered independently from audio feedback from a speaker, for instance) that contradicts our felt experience with the physical world. “See with your ears” and “hear with your eyes” are not simply convenient metaphors, but describe how our senses work in concert with one another to aid perception and interpretation.

Humans have more than five senses. Our experience with everything is situated in our sense of time. We have a sense of balance, and our sense of proprioception tells us where our limbs are situated in space. We have a sense of temperature and a sense of pain that are related to, but quite independent from, our sense of touch. Indeed, how can a loud sound “hurt” our ears if our sense of pain is tied to touch alone? Further, some animals can see in a wider color spectrum than humans, can sense magnetic or electrical fields, or can detect minute changes in air pressure. If computing somehow made these senses available to humans, how would that change our behavior?

My goal in breaking open these senses is not to arrive at a scientific account of how the brain processes sensory input, but to establish a more complete subjective, phenomenological account that offers a deeper understanding of how the phenomena of experience are revealed to human consciousness. I aim to render explicit the tacit assumptions that we make in our designs as to how they engage the senses, and uncover new design opportunities by mashing them together in unexpected ways.

Embodied Interaction: A Core Principle for Designing the Next Generation of Computing

By transcending the senses and considering the overall experience of our designs in a deeper, more reflective manner, we as interaction designers will be empowered to create more engaging, more fulfilling interactions. By considering the embodied nature of understanding, and how the human body plays a role in mediating interaction, we will be better prepared to design the systems and products for the post-desktop era.

Gleaming The Cube: Design Principles for Bringing the Outdoors Indoors

For Distant Viewing

I’ve been working on my capstone project for two semesters now, trying to figure out a way to introduce a slice of the outdoor experience to the inside world. Playing, recreating and simply being outside is something that is extremely important to me, and based on conversations with my research participants, important to them as well.

There’s an apparent dichotomy between the richly engaging, dynamically changing outside world, and the rather static, sterile, sensory-deprivation tank that is the typical indoor workspace. For the individual who has established a deep, personal connection to the outdoors, or to nature, or to wilderness, how do we improve their quality of life if they have to spend most of their waking hours in an indoor built environment? What sort of experiential qualities are present in an outdoor setting that we can appropriately introduce to an indoor space? How can we do this in a manner that is still aligned with work and business needs?

My interests are not in arriving at a factual, scientifically objective account of outdoor experience, but rather in how outdoor spaces are received by our senses, interpreted in our minds, and ultimately made meaningful to us. Mine is a phenomenological approach, where I am concerned with the experience of direct realism. How does nature reveal itself to our consciousness? How does our consciousness interpret the outdoors, and regard it as meaningful? How is the situatedness of the individual (their perceptual capabilities, their social and cultural values, their memories and lived experiences) evoked by a particular experience, and how does it determine the way the individual interprets it?

The goal of my capstone project is to establish a series of high-level design principles that help to guide interaction designers who find themselves trying to evoke a sense of the outdoors in an indoor space. I do not precisely know yet what these principles will be, but a few possible threads have bubbled to the surface.

The Biological Thread

Green Dude

Most animals have what is called a circadian rhythm, a biological clock that runs on a roughly 24-hour period and determines when an organism wakes up, does certain activities, and goes to sleep. Animals still heed this internal clock even when deprived of external stimuli, such as the movement of the sun and changes in temperature, and humans are no exception. Despite artificial lighting and built environments, we are still inexplicably bound to this rhythm.

The circadian rhythm is clearly an evolutionary response to the 24-hour day of our planet, and in this way our biology is not only situated in, but largely determined by our environment. Our biological nature is born from the nature of the Earth itself, and its subsequent rhythms. Indeed, the natural length of a day is inescapably woven into the biology of our own humanity.

It goes further than that, however. Lakoff and Johnson have done extensive work demonstrating that our use of language, and our thoughts themselves, are tightly coupled to a series of primary metaphors that arise out of our experience with our own bodies. The foundation of human thought is bound up not in some kind of disembodied rationality, argue Lakoff and Johnson, but is rather determined by our own embodied cognition. We talk of purpose as a destination, time in terms of motion, and things that are similar as being close together. These are not just convenient linguistic phrases, but are the very foundation of how we structure and make sense of the world.

Our perceptions and subsequent rationalism are a product of our own embodiment, and our embodiment is a product of our biology. Since our biology evolved in response to the inescapable rhythms of the natural world, it would seem that a connection to the outside world is an undeniably important component of our humanity. To deny the rhythms of the outside world is to deny the very thing that makes us human.

As humans we are unavoidably situated in our biology, which influences how we perceive, categorize and make meaning of the world. A design that aims to communicate a sense of the outdoors must consider the biological connection that makes the natural world intrinsically meaningful to us.

The Cultural Thread

I hope she said yes.

A longstanding claim has been that it is reason, our unique access to a transcendent and objective reality, that distinguishes humans from other animals. The implications of Lakoff and Johnson’s work, that rationality is not disembodied but is rather a product of our own embodiment, stands to elevate other uniquely human activities such as culture and art to a similar level as reason.

This is certainly not to undercut rational thought, which remains an incredibly powerful tool that, in the case of quantum mechanics, continues to unearth a world that is in direct violation of our common-sense notions of direct realism. It is, however, to demonstrate that reason is not the privileged, disembodied force we may think it is, but is rather determined by the unique nature of our own humanity. If reason (that is, human reason) is one important capability that makes us uniquely human, then our other capabilities such as culture and art may be equally important, despite their subjective nature.

Our relationship with the outdoors cannot be described fully in a purely biological, or purely rational, account, as our social and cultural experiences influence our attitudes towards the natural world as well. There is biological precedent for our connection, but the way we ultimately make meaning and form relationships with the outdoors will be highly dependent on the culture we are situated in, and the experiences with the outdoors that we have collected.

As a designer, it is inappropriate to assume that everyone will interpret a palm tree in the same way, or a cactus, or a coniferous tree. For a person in the midwestern United States a palm tree might signify a faraway exotic place to spend spring break, whereas for a person in Florida it may represent just another damn tree. Someone who lives in the mountains may not have the same appreciation for their local topography as someone who grew up in the plains.

The values we associate with the outdoors are heavily influenced by the society and culture we inhabit. A design that aims to communicate a sense of the outdoors must consider the sociocultural relationships its users have with the natural world, and how (or if) it intends to change them.

The Temporal and Perceptual Thread

Waning Sunlight

The natural world changes slowly, often at a rate below immediate human perception. We notice the leaves changing in autumn, but you can’t sit down and literally watch the leaves change. The sun moves across the sky throughout the day, the days get longer or shorter depending on one’s latitude and the time of year, and the phases of the moon change. There are, however, changes that we can perceive, such as wind blowing, clouds moving, rain falling, and certainly lightning striking nearby.

The indoor world has limited access to these natural processes, but it does possess some of its own. Co-workers arrive in the morning, fetch their coffee, take bathroom breaks, go to lunch, and eventually filter out for the evening. Human Resources may hang holiday decorations depending on the time of year, and the wear-and-tear of the hallway carpet may become a topic of conversation for bored individuals. Indeed, we are ambiently aware of these processes, often without consciously attending to them or deliberately marking them out.

From an informational standpoint the natural world is always communicating its status, albeit at a level below that of immediate human perception. We notice changes from time to time, but we cannot consciously focus and attend to them, because they cannot be actively witnessed by our senses. The sun moves, the phases of the moon change, the trees bud and the flowers bloom, and while all of these channels communicate information about the state of the outdoors, they are far from being distracting or overwhelming. Thus, a design for bringing a sense of the outdoors indoors would do well to capture and communicate these slow processes in an elegant manner.

However, part of the intrigue of the outside world is the interplay between these longer imperceivable processes, and the more immediate perceivable ones. I can’t sit down and watch the sun move across the sky, but on a partly cloudy day I can tell when it comes out from behind a cloud. I can feel and hear the breeze on a windy day, and while I can just barely perceive that thunderhead bearing down on me, I can certainly feel its drenching rain.

This interplay demonstrates how the processes of nature situate themselves in a multi-scalar, almost fractal relationship. Certain changes are perceivable minute-to-minute, hour-to-hour or day-to-day. Others are only noticeable at larger timescales, such as week-to-week, month-to-month or season-to-season. Still other changes are noticeable from year-to-year. The natural world of course works on timescales far beyond this, beyond the limits of human perception and even imagination, and certain creative designs cast a reflective light on even these vast timescales.

A design that aims to communicate a sense of the outdoors must allow for multiple levels of perception and temporal resolution, utilizing different magnitudes of perceivable change to communicate the multi-scalar cyclic relationships of the natural world.

So that largely summarizes my current work. I’m not sure if these are the actual design principles I’m going to roll with, but a few categories definitely seem to be emerging. I’m deeply interested in a phenomenological standpoint that considers sense-making, sensuality and embodied experience as core to my argument. I have found that a key component to my work is the temporal, multi-scalar, cyclic nature of outdoor processes, as well as the differing levels of human perception of those changes. Indeed, these two principles are tightly woven together at this point, but it may make more sense to split them apart.

I’m already realizing that I need a principle that considers space, such as the way sunlight filters through leaves or how crepuscular rays fill outdoor space, and mapping these to surfaces in the office or dust particles in the air. Nature has an interesting way of rendering space visible in subtle ways and using it to communicate information, and I’m fairly certain I need a principle that captures that. I also aim to further explain my design principles by applying them specifically to light as a design medium, based on my lighting studies.

In Summary

  • As humans we are unavoidably situated in our biology, which influences how we perceive, categorize and make meaning of the world. A design that aims to communicate a sense of the outdoors must consider the biological connection that makes the natural world intrinsically meaningful to us.
  • The values we associate with the outdoors are heavily influenced by the society and culture we inhabit. A design that aims to communicate a sense of the outdoors must consider the sociocultural relationships its users have with the natural world, and how (or if) it intends to change them.
  • A design that aims to communicate a sense of the outdoors must allow for multiple levels of perception and temporal resolution, utilizing different magnitudes of perceivable change to communicate the multi-scalar cyclic relationships of the natural world.

Hans and Umbach: WiiChuck Pong

Hans and Umbach recently had a huge breakthrough that they wanted to share with you. A few weeks ago they built the Monski Pong example from Tom Igoe’s Making Things Talk book, substituting a few potentiometers for the arms of their non-existent Monski monkey (and non-existent flex sensors). They learned a lot in the process, but the boys have become increasingly concerned that they haven’t done enough work with front-facing interactions.

Stuffing a few wires into a breadboard is great for proof-of-concept work, but it brings with it a delicate and fussy interaction environment that lacks robustness and aesthetics. In the last week they’ve refocused their efforts on interactive input methods, rather than raw electronics, taking apart a Super Nintendo controller and interfacing a Nintendo Wii Nunchuk in the process.

Hans and Umbach: Taking apart an SNES controller

Hans and Umbach: Arduino Hearts Wii Nunchuck

This got them thinking. “If we can access the accelerometers of the Wii Nunchuk as an input source, can we use them to play our Pong game?” The answer is yes, and the boys want to show you how they did it.

Hans and Umbach: Wiichuck Pong Components

First up, you’ll need a Nintendo Wii Nunchuk. These things are sweet, as they carry a three-axis accelerometer (as well as a couple of buttons) for less than $20. Hans hasn’t found any libraries yet that interface with the analog control up top, but these other inputs have been more than enough to keep Umbach busy.

You need access to the wires and pins inside the controller, but it would be an awful shame to cut that beautiful cable. Lucky for us, Tod Kurt has created the WiiChuck adapter, a simple tiny PCB that takes the pins from the Nunchuk plug and breaks them out into a standard 4-pin header. You can get a WiiChuck adapter at SparkFun for a measly $3.

The adapter doesn’t come with the pins to plug it into your Arduino, though, so you’ll want to get a row of break-away headers so you can cut off a 4-pin header for yourself. You need to solder those pins into place, so now you’re in the market for a soldering iron and some solder as well. And some wire cutters for separating those break-away headers from their kin. Yeah, it takes quite a bit of stuff to get started. We’re lucky to have Umbach on our team, who carries a bandolier full of tools and electronics wherever he goes.

The whole point of the WiiChuck adapter is to be able to plug your Nunchuk into your Arduino, so you can do magic stuff like communicate serially with your computer, or control other things plugged into your Arduino. When it comes to writing code and working with the software side, Tod Kurt put together a WiiChuck library that makes it pretty easy to interface between the Arduino and the Nunchuk without doing everything yourself. If you download the WiiChuck Demo zip file, you’ll get the library of functions for connecting to the Nunchuk, as well as a demo that shows it all (hopefully) working.
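To give you a flavor of what the library handles for you, reading the Nunchuk boils down to something like this. This is a simplified sketch based on our reading of the demo; exact function names may vary between versions of the library:

    #include <Wire.h>
    #include "nunchuck_funcs.h"

    void setup() {
      Serial.begin(9600);
      nunchuck_setpowerpins(); // power the Nunchuk from two analog pins
      nunchuck_init();         // open the I2C conversation
    }

    void loop() {
      nunchuck_get_data();     // poll the controller for fresh readings

      Serial.print("x: ");
      Serial.print(nunchuck_accelx()); // raw X-axis accelerometer byte
      Serial.print("  y: ");
      Serial.print(nunchuck_accely()); // raw Y-axis accelerometer byte
      Serial.print("  z-button: ");
      Serial.println(nunchuck_zbutton()); // 1 while the Z button is held

      delay(100);
    }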

The demo is great and all, but the boys wanted to make it do something. They had recently built the pong example from Tom Igoe’s book, and were interested in controlling the paddles with the accelerometer inside the Nunchuk. There are two pieces of software at work here. The first is the Pong game itself, written in Processing, which accepts incoming serial data and moves the paddles based on it. The second is the sensor reader, written in Arduino, which takes incoming sensor data from the Nunchuk and converts it into a format that the Pong game understands.

To get it all to work, Hans made some changes to the Arduino sensor reader example, blending it with the code from the WiiChuck demo. That way, the Arduino would pull down and translate input from the Nunchuk’s accelerometers (and buttons) into a format compatible with the Pong game. The game itself required minimal modification: we only adjusted the minimum and maximum ranges for the paddle values to conform to the range of values produced by the accelerometers.
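In spirit, the blended reader looks something like the sketch below. The raw accelerometer range (roughly 70 to 180) is what we observed with our Nunchuk, and the paddle range of 0 to 480 and the comma-separated serial format are stand-ins for whatever your copy of the game expects:

    #include <Wire.h>
    #include "nunchuck_funcs.h"

    void setup() {
      Serial.begin(9600);
      nunchuck_setpowerpins();
      nunchuck_init();
    }

    void loop() {
      nunchuck_get_data();

      // Map the raw accelerometer bytes onto the game's paddle range.
      int leftPaddle  = map(nunchuck_accelx(), 70, 180, 0, 480); // left/right tilt
      int rightPaddle = map(nunchuck_accely(), 70, 180, 0, 480); // up/down tilt

      // One comma-separated reading per line, buttons tacked on the end.
      Serial.print(leftPaddle);
      Serial.print(",");
      Serial.print(rightPaddle);
      Serial.print(",");
      Serial.print(nunchuck_zbutton());   // starts the game
      Serial.print(",");
      Serial.println(nunchuck_cbutton()); // resets the scores

      delay(50);
    }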

Hans and Umbach: Wiichuck Pong Game

Et voilà! C’est magnifique! The video up top shows the fruits of our labor… tilting the Nunchuk up and down moves the right paddle, and tilting it left and right moves the left paddle. One button starts the game rollin’, and the other button resets the scores.

If you’re interested in trying it out for yourself, Hans and Umbach have packaged up all their code into a fine and handy zip file. Or, you can browse the individual files here:

WiichuckPongReader.pde (Arduino)
nunchuck_funcs.h (Arduino library)
WiichuckPongGame.pde (Processing)
WiichuckPong.zip (Everything zipped up)

Thanks, and happy hacking!

Hans and Umbach: Prototyping In Light

Hans and Umbach took some time out from their work to help me with my capstone project, where I’m trying to help people maintain a connection with the outdoors when they work inside for a living. In particular I’ve been studying how sunlight plays with indoor architectural spaces, and how the shapes of cast light change throughout the day as the sun moves across the sky. My explorations have been deeply inspired by the work of Daniel Rybakken, Adam Frank, and Philips’ efforts with dynamic lighting.

I wanted to create a device that would mimic the movement of the sun throughout the day, and I turned to Hans and Umbach for advice as to how to build such a thing. They recommended something as simple as a clock movement with a paper screen that would rotate, changing the angle and position of a beam of light from a Maglite over the course of time. We deemed it Chrono, and set forth to build such a prototype to see how it would work.

"Outside In" Chrono Prototype Construction

"Outside In" Chrono Prototype Construction

"Outside In" Chrono Prototype Construction

"Outside In" Chrono Prototype Stage

Light is a tricky beast to prototype with, to be sure, but these small steps begin to point us in the right direction. We recorded a few time-lapse videos that show the movement of the prototype in a simulated office desk environment, condensing thirteen minutes of movement into less than two minutes:

The electronics are simple, but it’s an interesting and subtle way to communicate the slow passage of time within “embodied” space!

Hans and Umbach: Tragedy!

We have some sad news to report on the Hans and Umbach side of things. Umbach was soldering the other day, putting together our second Arduino Proto Shield from Adafruit, when he burned himself pretty badly on his soldering iron. Don’t worry, he’s a healer!

You see, Umbach keeps his soldering iron to the left of himself when he’s working. The strong affordance of the soldering iron seems to indicate that you should hold it like a pen, but of course that is a ridiculous notion. The long metal end of the iron is about a million degrees, and it will burn your skin in an instant. You should hold it not like a pen, but further back, like a… not pen… or a paint brush… or something.

But then, even that is not entirely accurate. As you get more comfortable with soldering you realize, or at least Umbach has realized, that the iron is not the most important thing you wield in your hands. The iron merely heats up the area, and it does not require nearly as much fine motor control as the solder itself. Indeed, the solder should be held in your dominant hand, so you can be as precise as possible with whatever parts you may be slagging in liquid metal.

Umbach was in his groove: he grabbed his soldering iron in his left hand and, without thinking, made to pass it to his right hand as he would a pen. He gripped the hot end for only half a second, but it was enough to burn the back of his index finger and the inside of his middle finger.

There is a lesson here, and it’s not necessarily that Umbach was thoughtless, careless, and stupid. As humans we are constantly filtering information, performing apparently routine tasks without deliberate thought. It’s much the same way that I’m convinced no one actually learns Photoshop or Illustrator; rather, over time, we unconsciously filter out the aspects of the interface that distract from our everyday usage. It’s an incredible ability, and one that frees up our mental capacity to dream of such awesome things as transistors, skee ball, and bears juggling chainsaws.

We go through life largely in a state of absorbed coping. In Umbach’s case, we see that this can sometimes get us into trouble. Grabbing the hot end of a soldering iron is clearly a poor decision, and had Umbach been consciously aware of the results that would inevitably follow from his actions, he would never have done it in the first place.

But we are people, and as people we adopt certain habits that are applicable in certain situations. When these situations unexpectedly cross one another, such as the strong pen-like affordance of a soldering iron triggering the pen-like habit of passing it between hands, we may find ourselves with burned fingers. As designers it’s important that we deliberately consider what the form of our products communicates to our users, even unconsciously, and design in a manner that discourages the absent-minded adoption of an incorrect interaction model.

Or maybe Hans just needs to take over the soldering from now on.

Your Workflow is the Battlefield

There’s been quite the wailing and gnashing of teeth over the Apple iPad not supporting Flash. Personally, I welcome this new landscape of the web, where a future without Flash seems not only possible, but bright indeed.

That said, what is unfolding here is of considerable gravity, and will likely determine the future of the web. Most web professionals use Adobe tools in some capacity to do their jobs, whether Photoshop, Illustrator, Dreamweaver (gasp), Flash, Flex, Flash Catalyst, or even Fireworks (which is, according to many, the best wireframing tool on the market, despite its quirks and crash-prone behavior).

Now, I am not privy to inside information, but based on what I’ve been able to glean, Adobe’s strategy is something like this. There is a deliberate reason that your workflow as a standards-based web professional sucks: that Photoshop doesn’t behave the way you want it to, that exporting web images is still a pain in the ass, and that you actually need to fight the software to get it to do what you want.

Adobe knows how you use its software. Adobe knows how you want to use its software. Adobe understands your existing workflow.

And it doesn’t fucking care.

You see, Adobe doesn’t view you, as a web professional, as someone engaged in building websites. It doesn’t view itself as the one building tools to support you in your job. Adobe does not view you as the author of images and CSS and HTML and JavaScript that all magically come together to create a website, but rather as the author of what could potentially become Adobe Web Properties™.

Adobe is not interested in supporting your workflow to create standards-based websites, because that is not in its strategic interest. It would much rather you consent to the cognitive model of Adobe Software™ to create proprietary Adobe Web Properties™ that render using Adobe Web Technologies™.

In essence, Adobe wants to be the gatekeeper for the production, as well as the consumption, of the web.

Apple knows this, and knows that the future of the web is mobile. Its actions are no less strategic than Adobe’s, and Apple has chosen a route that deliberately undermines Adobe’s strategy for controlling not just the consumption of rich interactive experiences on the web, but their production as well.

From the production side, as far as Adobe is concerned, if you’re not building your websites in Flash Catalyst and exporting them as Flash files, you’re doing it wrong.

Your frustrations with Photoshop and Fireworks, and their failure to support the “real way” web professionals build standards-based websites, are there not by accident, but by design. Adobe views each website as a potential property over which it can exert control: the look, the feel, the experience. As these “experiences” become more sophisticated, so do the tools necessary to create them. Adobe wants to be in the business of selling the only tools that do the job, controlling your production from end to end, and then even controlling the publication of and access to your creation.

Apple’s own domination plans for the mobile web undermine all this.

And Adobe is pissed.

Hans and Umbach: “You know how grip works.”

Over winter break, Kate and I were fortunate enough to attend the British Advertising Awards at the Walker Art Center in Minneapolis. One commercial from Audi in particular really stuck with me, because of its clear reference to our highly sophisticated ability to navigate and interact with our physical surroundings.

With the Hans and Umbach project, this is what I aim to render explicit: that we have these incredibly well-developed skills for working with the physical artifacts in our environment, and that by deliberately designing for these skills we can create more compelling, more engaging, more intuitive interactions.

Hans and Umbach: Soldering and Building

The Hans and Umbach Electro-Mechanical Computing Company

Phew, have we got a treat for y’all! Last night Hans was able to tame the wild beast that is Adobe Premiere Pro, and compiled some videos of Umbach (or was it Hans?) building some stuff with Arduino.

First up, the boys soldered together an Arduino Proto Shield kit from Adafruit. You can witness their amazing efforts in super-speed time, where sixty minutes of inhaling metallic fumes has been condensed into three power-packed minutes!

After that, the boys took their new creation and built a three-channel LED color mixer out of a few potentiometers and one of those kick-ass triple-output LEDs from SparkFun.
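If you want to try something similar, here’s a minimal sketch of the mixer idea, not necessarily the boys’ exact code: the pin assignments are illustrative, and it assumes a common-cathode RGB LED with current-limiting resistors on three PWM pins.

// Three pots set the red, green, and blue brightness of an RGB LED.
// For a common-anode LED, invert the levels (255 minus the reading).
const int potPins[3] = {A0, A1, A2};  // one potentiometer per color channel
const int ledPins[3] = {9, 10, 11};   // PWM-capable pins on an Uno

void setup() {
  for (int i = 0; i < 3; i++) {
    pinMode(ledPins[i], OUTPUT);
  }
}

void loop() {
  for (int i = 0; i < 3; i++) {
    // Scale the 10-bit analog reading (0-1023) down to the 8-bit
    // PWM range (0-255) and write it out to that color channel.
    analogWrite(ledPins[i], analogRead(potPins[i]) / 4);
  }
}

Turn any knob and the corresponding channel brightens or dims, mixing the three colors in real time.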

A huge shout goes out to Ryan Rapsys of Erratik Productions for the music!