Hans and Umbach: Establishing a Language of Embodied Interaction for Design Practitioners

My work with Hans and Umbach on physical computing and embodied interaction took an interesting turn recently, down a path I hadn’t anticipated when I set out to pursue this project. My initial goal with this independent study was to develop the skills necessary to work with electronics and physical computing as a prototyping medium. In recent years, hardware platforms such as Arduino and programming environments such as Wiring have clearly lowered the barrier to entry for getting involved in physical computing, and have allowed even the electronics layman to build some super cool stuff.

Rob Nero presented his TRKBRD prototype at Interaction 10, an infrared touchpad built on Arduino that turns the entire surface of one’s laptop keyboard into a device-free pointing surface. Chris Rojas built an Arduino tank that can be controlled remotely through an iPhone application called TouchOSC. What’s super awesome is that most everyone building this stuff is happy to share their source code, and contribute their discoveries back to the community. The forums on the Arduino website are on fire with helpful tips, and it seems an answer to any technical question is only a Google search away. SparkFun has done tremendous work in making electronics more user-friendly and approachable, offering suggested uses, tutorials and data sheets right alongside the components they sell.

Dourish and Embodied Interaction: Uniting Tangible Computing and Social Computing

In tandem with my continuing education with electronics, I’ve been doing extensive research into embodied interaction, an emerging area of study in HCI that considers how our engagement, perception, and situatedness in the world influence how we interact with computational artifacts. Embodiment is closely related to a philosophical interest of mine, phenomenology, which studies the phenomena of experience and how reality is revealed to, and interpreted by, human consciousness. Phenomenology brackets off the external world and isn’t concerned with establishing a scientifically objective understanding of reality, but rather looks at how reality is experienced through consciousness.

Paul Dourish outlines a notion of embodied interaction in his landmark work, “Where The Action Is: The Foundations of Embodied Interaction.” In Chapter Four he iterates through a few definitions of embodiment, starting with what he characterizes as a rather naive one:

“Embodiment 1. Embodiment means possessing and acting through a physical manifestation in the world.”

He takes issue with this definition, however, as it places too high a priority on physical presence, and proposes a second iteration:

“Embodiment 2. Embodied phenomena are those that by their very nature occur in real time and real space.”

Indeed, in this definition embodiment is concerned more with participation than physical presence. Dourish uses the example of conversation, which is characterized by minute gestures and movements that hold no objective meaning independent of human interpretation. In “Technology as Experience” McCarthy and Wright use the example of a wink versus a blink. While closing and opening one’s eye is an objective natural phenomenon that exists in the world, the meaning behind a wink is more complicated; there are issues of the intent of the “winker”: whether they intend the wink to represent flirtation or collusion, or whether they simply had a speck of dirt in their eye. There are also issues of interpretation on the part of the “winkee”: whether they perceive the wink, how they interpret it, and whether or not they interpret it as intended by the “winker.”

Thus, Dourish’s second iteration on embodiment deemphasizes physical presence while allowing for these subjective elements that do not exist independent of human consciousness. A wink cannot exist independent of real time and real space, but its meaning involves more than just its physicality. Indeed, Edmund Husserl originally proposed phenomenology in the early 20th century, but it was his student Martin Heidegger who carried it forward into the realm of interpretation. Hermeneutics is the area of study concerned with the theory of interpretation, and thus Heidegger’s hermeneutical phenomenology (the study of experience and how it is interpreted by consciousness) has become the foundation of much recent phenomenological theory.

Beyond Heidegger, Dourish takes us through Alfred Schutz, who considered intersubjectivity and the social world of phenomenology, and Maurice Merleau-Ponty, who deliberately considered the human body by introducing the embodied nature of perception. In wrapping up, Dourish presents a third definition of embodiment:

“Embodiment 3. Embodied phenomena are those which by their very nature occur in real time and real space. … Embodiment is the property of our engagement with the world that allows us to make it meaningful.”

Thus, Dourish says:

“Embodied interaction is the creation, manipulation, and sharing of meaning through engaged interaction with artifacts.”

Dourish’s thesis behind “Where The Action Is” is that tangible computing (computer interactions that happen in the world, through the direct manipulation of physical artifacts) and social computing (computer-augmented interaction that involves the continual navigation and reconfiguration of social space) are two sides of the same coin; namely, that of embodied interaction. Just as tangible interactions are necessarily embedded in real space and real time, social interaction is embedded as an active, practical accomplishment between individuals.

According to Dourish, embodied computing is a larger frame that encompasses tangible computing and social computing. This is a significant observation, and “Where The Action Is” is a landmark achievement. But, as Dourish himself admits, there isn’t a whole lot new here. He connects the dots between two seemingly unrelated areas of HCI theory, unifies them under the umbrella term embodied interaction, and leaves it to us to work it out from there.

And I’m not so sure that’s happened. “Where The Action Is” came out nine years ago, and based on the papers I’ve read on embodied interaction, few have attempted to extend the definition beyond Dourish’s work. While I wouldn’t describe his book as inadequate, I would certainly characterize it as a starting point, a significant one at that, for extending our thoughts on computing into the embodied, physical world.

From Physical Computing to Notions of Embodiment

For the last two months I have been researching theories on embodiment, teaching myself physical computing, and reflecting deeply on my experience of learning the arcane language of electronics. Even with all the brilliantly written books and well-documented tutorials in the world, I find that learning electronics is hard. It frequently violates my common-sense experience with the world, and authors often use familiar metaphors to compensate for this. Indeed, electricity is like water, except when it’s not, and it flows, except when it doesn’t.

In reading my reflections I can trace the evolution of how I’ve been thinking about electronics, how I discover new metaphors that more closely describe my experiences, reject old metaphors, and become increasingly disenchanted that this is a domain of expertise I can master in three months. What is interesting is not that I was wrong in my conceptualizations of how electronics work, however, but how I was wrong and how I found myself compensating for it.

Hans and Umbach: Arduino, 8-bit shift register, 7-segment display

While working with a seven-segment display, for instance, I could not figure out which segmented LED mapped to which pin. As I slowly began to figure this out, it did not seem to map to any recognizable pattern, and certainly did not adhere to my expectations. I thought the designers of the display must have had deliberately sinister motives in how their product so effectively violated any sort of common sense interpretation.

To compensate, I drew up my own spatial map, both on paper as well as in my mind, to establish a personal pattern where no external pattern was immediately perceived. “The pin in the upper lefthand corner starts on the middle, center segment,” I told myself, “and spirals out clockwise from there, clockwise for both the segments as well as the pins, skipping the middle-pin common anodes, with the decimal seated awkwardly between the rightmost top and bottom segments.”
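That spatial story translates naturally into a lookup table. Here’s a sketch of one, with the caveat that the pin numbers and segment assignments below are illustrative of my spiral scheme rather than copied from any datasheet (common-anode displays vary, so check yours):

```cpp
#include <cassert>
#include <map>
#include <string>

// A hypothetical pin-to-segment map for a ten-pin, common-anode
// seven-segment display, encoding the "spiral out clockwise from the
// middle" pattern described above. Segment letters A-G follow the
// usual convention (A on top, G in the middle); the pin numbers here
// are illustrative, not taken from a datasheet.
std::map<int, std::string> buildPinToSegmentMap() {
    return {
        {1,  "G (middle)"},
        {2,  "F (upper left)"},
        {4,  "A (top)"},           // pin 3 is a common-anode pin, skipped
        {5,  "B (upper right)"},
        {6,  "DP (decimal point)"},
        {7,  "C (lower right)"},
        {9,  "D (bottom)"},        // pin 8 is the other common anode
        {10, "E (lower left)"},
    };
}
```

Even this little table externalizes the pattern: once it lives in code, I no longer have to rehearse the spiral in my head every time I wire the display up.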

It was this personal spatial reasoning, this establishment of my own pattern language to describe how the seven-segment display worked, that made me realize how strongly my own embodied experience determines how I perceive, interact with, and make sense of the world. So long as a micro-controller has been programmed correctly, it doesn’t care which pin maps to which segment. But for me, a bumbling human who is poor at numbers but excels at language, socialization and spatial reasoning, you know, those things that humans are naturally good at, I needed some sort of support mechanism. And that mechanism arose out of my own embodied experience as a real physical being with certain capabilities for navigating and making sense of a real physical world.

Over time this realization, that I am constantly leveraging my own embodiment as a tool to interpret the world, dwarfed the interest I had in learning electronics. I’m still trying to figure out how to get an 8×8 64-LED matrix to interface with an Arduino through a series of 74HC595N 8-bit shift registers, so I can eventually make it play Pong with a Wii Nunchuk. That said, it’s frustrating that every time I try to do something, the chip I have is not the chip I need, and the chip I need is $10 plus $5 shipping and will arrive in a week, and by the way have I thought about how to send constant current to all the LEDs so they’re all of similar brightness because my segmented number “8” is way dimmer than my segmented number “1” because of all the LEDs that need to light up, and oh yeah, there’s an app for that.

Sigh.

Especially when I’m trying to play Pong on my 8×8 LED matrix, while someone else is already playing Super Mario Bros. on hers.
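To make the frustration concrete: driving a display through a 74HC595 comes down to clocking a bit pattern into the register, and the brightness problem falls out of simply counting lit segments. Here’s a host-side sketch of that logic; the digit patterns are the conventional gfedcba encodings, though any particular display may be wired differently, and the shift register below is a simulation, not real hardware:

```cpp
#include <cassert>
#include <cstdint>

// Conventional seven-segment digit patterns, bit order 0bP_GFEDCBA
// (decimal point in bit 7). A real display's wiring may differ.
const uint8_t DIGIT_PATTERNS[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

// Count how many segment LEDs a digit lights. With one shared
// current-limiting resistor, more lit segments means less current per
// LED, which is why "8" looks dimmer than "1".
int litSegments(uint8_t pattern) {
    int count = 0;
    for (int bit = 0; bit < 8; ++bit)
        if (pattern & (1 << bit)) ++count;
    return count;
}

// Minimal model of clocking a byte into a 74HC595, MSB first, the same
// order Arduino's shiftOut(dataPin, clockPin, MSBFIRST, value) uses.
struct ShiftRegister595 {
    uint8_t shift = 0;  // internal shift stage
    uint8_t latch = 0;  // output stage, visible on pins Q0..Q7

    void clockInBit(bool bit) { shift = (shift << 1) | (bit ? 1 : 0); }
    void pulseLatch() { latch = shift; }

    void shiftOutMSBFirst(uint8_t value) {
        for (int bit = 7; bit >= 0; --bit)
            clockInBit(value & (1 << bit));
        pulseLatch();
    }
};
```

An “8” lights seven LEDs to the “1”’s two, so if all segments share a single resistor, each of the eight’s LEDs gets a fraction of the current. Per-segment resistors (or a constant-current driver) even this out.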

Extending Notions of Embodiment into Design Practice

In accordance with Merleau-Ponty and his work introducing the human body to phenomenology, and the work of Lakoff and Johnson in extending our notions of embodied cognition, I believe that the human body itself is central to structuring the way we perceive, interact with, and make sense of the world. Thus, I aim to take up the challenge issued by Dourish, and extend our notions of embodiment as they apply to the design of computational interactions. The goal of my work is to establish a language of embodied interaction that will help design practitioners create more compelling, more engaging, more natural interactions.

Considering physical space and the human body is an enormous topic in interaction design. In a panel at SXSW Interactive last week, Peter Merholz, Michele Perras, David Merrill, Johnny Lee and Nathan Moody discussed computing beyond the desktop as a new interaction paradigm, and Ron Goldin from Lunar discussed touchless invisible interactions in a separate presentation. At Interaction 10, Kendra Shimmell demonstrated her work with environments and movement-based interactions, Matt Cottam presented his considerable work integrating computing technologies with the richly tactile qualities of wood, and Christopher Fahey even gave a shout-out specifically to “Where The Action Is” in his talk on designing the human interface (slide 50 in the deck). The migration of computing off the desktop and into the space of our everyday lives seems only to be accelerating, to the point where Ben Fullerton proposed at Interaction 10 that we as interaction designers need to begin designing not just for connectivity and ubiquity, but for solitude and opportunities to actually disconnect from the world.

Establishing a Language of Embodied Interaction for Design Practitioners

To recap, my goal is to establish a language of embodied interaction that helps designers navigate this increasing delocalization and miniaturization of computing. I don’t know yet what this language will look like, but a few guiding principles seem to be emerging from my work:

All interactions are tangible. There is no such thing as an intangible interaction. I reject the notion that tangible interaction, the direct manipulation of physical representations of digital information, is significantly different from manipulating pixels on a screen, interactions that involve a keyboard or pointing device, or even touch screen interactions.

Tangibility involves all the senses, not just touch. Tangibility considers all the ways that objects make their presence known to us, and involves all senses. A screen is not “intangible” simply because it is comprised of pixels. A pixel is merely a colored speck on a screen, which I perceive when its photons reach my eye. Pixels are physical, and exist with us in the real world.

Likewise, a keyboard or mouse is not an intangible interaction simply because it doesn’t afford direct manipulation. I believe the wall that has been erected between historic interactions (such as the keyboard and mouse) and tangible interactions (such as the wonderful Siftables project) is false, and has damaged the agenda of tangible interaction as a whole. These interactions exist on a continuum, not between tangible and intangible, but between richly physical and physically impoverished. A mouse doesn’t allow for a whole lot of nuance of motion or pressure, and a glass touch screen doesn’t richly engage our sense of touch, but they are both necessarily physical interactions. There is an opportunity to improve the tangible nature of all interactions, but it will not happen by categorically rejecting our interactive history on the grounds that they are not tangible.

Everything is physical. There is no such thing as the virtual world, and there is no such thing as a digital interaction. Ishii and Ullmer in the Tangible Media Group at the MIT Media Lab have done extensive work on tangible interactions, characterizing them as physical manifestations of digital information. “Tangible Bits,” the title of their seminal work, largely summarizes this view. Repeatedly in their work, they set up a dichotomy between atoms and bits, physical and digital, real and virtual.

The trouble is, all information that we interact with, no matter if it is in the world or stored as ones and zeroes on a hard drive, shows itself to us in a physical way. I read your text message as a series of Latin characters rendered by physical pixels that emit physical photons from the screen on my mobile device. I perceive your avatar in Second Life in a similar manner. I hear a song on my iPod because the digital information of the file is decoded by the software, which causes the thin membrane in my headphones to vibrate at a particular frequency. Even if I dive deep and study the ones and zeroes that comprise that audio file, I’m still seeing them represented as characters on a screen.

All information, in order to be perceived, must be rendered in some sort of medium. Thus, we can never interact with information directly, and all our interactions are necessarily mediated. As with the supposed wall between tangible interactions and the interactions that preceded them, the wall between physical and digital, or real and virtual, is equally false. We never see nor interact with digital information, only the physical representation of it. We cannot interact with bits, only atoms. We do not and cannot exist in a virtual world, only the real one.

This is not to say that talking with someone in-person is the same as video chatting with them, or talking on the phone, or text messaging back and forth. Each of these interactions is very different based on the type and quality of information you can throw back and forth. It is, however, to illustrate that there isn’t necessarily any difference between a physical interaction and a supposed virtual one.

Thus, what Ishii and Ullmer propose, communicating digital information by embodying it in ambient sounds or water ripples or puffs of air, is no different than communicating it through pixels on a screen. What’s more, these “virtual” experiences we have, the “virtual” friendships we form, the “virtual” worlds we live in, are no different than the physical world, because they are all necessarily revealed to us in the physical world. The limitations of existing computational media may not permit interactions as high-bandwidth as face-to-face conversation (think of how much we communicate through subtle facial expressions and body language), but the fact that these interactions happen through a screen, rather than at a coffee shop, does not make them virtual. It may, however, make them an impoverished physical interaction, as they do not engage our wide array of senses as a fully in-the-world interaction does.

Again, the dichotomy between real and virtual is false. The dichotomy between physical and digital is false. What we have is a continuum between physically rich and physically impoverished. It is nonsense to speak of digital interactions, or virtual interactions. All interactions are necessarily physical, are mediated by our bodies, and are therefore embodied.

The traditional compartmentalization of senses is a false one. In confining tangible interactions to touch, we ignore how our senses work together to help us interpret the world and make sense of it. The disembodiment of sensory inputs from one another is a byproduct of the compartmentalization of computational output (visual feedback from a screen rendered independently from audio feedback from a speaker, for instance) that contradicts our felt experience with the physical world. “See with your ears” and “hear with your eyes” are not simply convenient metaphors, but describe how our senses work in concert with one another to aid perception and interpretation.

Humans have more than five senses. Our experience with everything is situated in our sense of time. We have a sense of balance, and our sense of proprioception tells us where our limbs are situated in space. We have a sense of temperature and a sense of pain that are related to, but quite independent from, our sense of touch. Indeed, how can a loud sound “hurt” our ears if our sense of pain is tied to touch alone? Further, some animals can see in a wider color spectrum than humans, can sense magnetic or electrical fields, or can detect minute changes in air pressure. If computing somehow made these senses available to humans, how would that change our behavior?

My goal in breaking open these senses is not to arrive at a scientific account of how the brain processes sensory input, but to establish a more complete subjective, phenomenological account that offers a deeper understanding of how the phenomena of experience are revealed to human consciousness. I aim to render explicit the tacit assumptions that we make in our designs as to how they engage the senses, and uncover new design opportunities by mashing them together in unexpected ways.

Embodied Interaction: A Core Principle for Designing the Next Generation of Computing

By transcending the senses and considering the overall experience of our designs in a deeper, more reflective manner, we as interaction designers will be empowered to create more engaging, more fulfilling interactions. By considering the embodied nature of understanding, and how the human body plays a role in mediating interaction, we will be better prepared to design the systems and products for the post-desktop era.

Gleaming The Cube: Design Principles for Bringing the Outdoors Indoors

For Distant Viewing

I’ve been working on my capstone project for two semesters now, trying to figure out a way to introduce a slice of the outdoor experience to the inside world. Playing, recreating and simply being outside is something that is extremely important to me, and based on conversations with my research participants, important to them as well.

There’s an apparent dichotomy between the richly engaging, dynamically changing outside world, and the rather static, sterile, sensory-deprivation tank that is the typical indoor workspace. For the individual who has established a deep, personal connection to the outdoors, or to nature, or to wilderness, how do we improve their quality of life if they have to spend most of their waking hours in an indoor built environment? What sort of experiential qualities are present in an outdoor setting that we can appropriately introduce to an indoor space? How can we do this in a manner that is still aligned with work and business needs?

My interests are not in arriving at a factual, scientifically objective account of outdoor experience, but rather in how outdoor spaces are received by our senses, interpreted in our minds, and ultimately made meaningful to us. Mine is a phenomenological approach, where I am concerned with the experience of direct realism. How does nature reveal itself to our consciousness? How does our consciousness interpret the outdoors, and regard it as meaningful? How is the situatedness of the individual (their perceptual capabilities, their social and cultural values, their memories and lived experiences) evoked by a particular experience, and how does it determine how the individual interprets it?

The goal of my capstone project is to establish a series of high-level design principles that help to guide interaction designers who find themselves trying to evoke a sense of the outdoors in an indoor space. I do not precisely know yet what these principles will be, but a few possible threads have bubbled to the surface.

The Biological Thread

Green Dude

Most animals have what is called a circadian rhythm, a biological clock that runs on a roughly 24-hour period and determines when an organism wakes up, does certain activities, and goes to sleep. Animals still heed this internal clock even when deprived of external stimuli, such as the movement of the sun and changes in temperature, and humans are no exception. Despite artificial lighting and built environments, we are still inextricably bound to this rhythm.

The circadian rhythm is clearly an evolutionary response to the 24-hour day of our planet, and in this way our biology is not only situated in, but largely determined by our environment. Our biological nature is born from the nature of the Earth itself, and its subsequent rhythms. Indeed, the natural length of a day is inescapably woven into the biology of our own humanity.

It goes further than that, however. Lakoff and Johnson have done extensive work demonstrating that our use of language, and our thoughts themselves, are tightly coupled to a series of primary metaphors that rise out of our experience with our own bodies. The foundation of human thought is bound up not in some kind of disembodied rationality, argue Lakoff and Johnson, but is rather determined by our own embodied cognition. We talk of purpose as a destination, time in terms of motion, and things that are similar as being close together. These are not just convenient linguistic phrases, but are the very foundation of how we structure and make sense of the world.

Our perceptions and subsequent rationality are a product of our own embodiment, and our embodiment is a product of our biology. Since our biology evolved in response to the inescapable rhythms of the natural world, it would seem that a connection to the outside world is an undeniably important component of our humanity. To deny the rhythms of the outside world is to deny the very thing that makes us human.

As humans we are unavoidably situated in our biology, which influences how we perceive, categorize and make meaning of the world. A design that aims to communicate a sense of the outdoors must consider the biological connection that makes the natural world intrinsically meaningful to us.

The Cultural Thread

I hope she said yes.

A longstanding claim has been that it is reason, our unique access to a transcendent and objective reality, that distinguishes humans from other animals. The implications of Lakoff and Johnson’s work, that rationality is not disembodied but is rather a product of our own embodiment, stands to elevate other uniquely human activities such as culture and art to a similar level as reason.

This is certainly not to undercut rational thought, which remains an incredibly powerful tool that, in the case of quantum mechanics, continues to unearth a world that is in direct violation of our common-sense notions of direct realism. It is, however, to demonstrate that reason is not the privileged, disembodied force we may think it is, but is rather determined by the unique nature of our own humanity. If reason (that is, human reason) is one important capability that makes us uniquely human, then our other capabilities such as culture and art may be equally important, despite their subjective nature.

Our relationship with the outdoors cannot be described fully in a purely biological, or purely rational, account, as our social and cultural experiences influence our attitudes towards the natural world as well. There is biological precedent for our connection, but the way we ultimately make meaning and form relationships with the outdoors will be highly dependent on the culture we are situated in, and the experiences with the outdoors that we have collected.

As a designer, it is inappropriate to assume that everyone will interpret a palm tree in the same way, or a cactus, or a coniferous tree. For a person in the midwestern United States a palm tree might signify a faraway exotic place to spend spring break, whereas for a person in Florida it may represent just another damn tree. Someone who lives in the mountains may not have the same appreciation for their local topography as someone who grew up in the plains.

The values we associate with the outdoors are heavily influenced by the society and culture we inhabit. A design that aims to communicate a sense of the outdoors must consider the sociocultural relationships its users have with the natural world, and how (or if) it intends to change them.

The Temporal and Perceptual Thread

Waning Sunlight

The natural world changes slowly, often at a rate below immediate human perception. We notice the leaves changing in autumn, but you can’t sit down and literally watch the leaves change. The sun moves across the sky throughout the day, the days get longer or shorter depending on one’s latitude and the time of year, and the phases of the moon change. There are, however, changes that we can perceive, such as wind blowing, clouds moving, rain falling, and certainly lightning striking nearby.

The indoor world has limited access to these natural processes, but it does possess some of its own. Co-workers arrive in the morning, fetch their coffee, take bathroom breaks, go to lunch, and eventually filter out for the evening. Human Resources may hang holiday decorations depending on the time of year, and the wear-and-tear of the hallway carpet may become a topic of conversation for bored individuals. Indeed, we are ambiently aware of these processes, often without consciously attending to them or deliberately marking them out.

From an informational standpoint the natural world is always communicating its status, albeit at a level below that of immediate human perception. We notice changes from time to time, but we cannot consciously watch them unfold, because the changes are too slow for our senses to witness in the moment. The sun moves, the phases of the moon change, the trees bud and the flowers bloom, and while all of these channels communicate information about the state of the outdoors, they are far from being distracting or overwhelming. Thus, a design for bringing a sense of the outdoors indoors would do well to capture and communicate these slow processes in an elegant manner.

However, part of the intrigue of the outside world is the interplay between these longer imperceivable processes and the more immediate perceivable ones. I can’t sit down and watch the sun move across the sky, but on a partly cloudy day I can tell when it comes out from behind a cloud. I can feel and hear the breeze on a windy day, and while I can only barely perceive that thunderhead bearing down on me, I can certainly feel its drenching rain.

This interplay demonstrates how the processes of nature situate themselves in a multi-scalar, almost fractal relationship. Certain changes are perceivable minute-to-minute, hour-to-hour or day-to-day. Others are only noticeable at larger timescales, such as week-to-week, month-to-month or season-to-season. Still other changes are noticeable from year-to-year. The natural world of course works on timescales far beyond this, beyond the limits of human perception and even imagination, and certain creative designs cast a reflective light on even these vast timescales.

A design that aims to communicate a sense of the outdoors must allow for multiple levels of perception and temporal resolution, utilizing different magnitudes of perceivable change to communicate the multi-scalar cyclic relationships of the natural world.

So that largely summarizes my current work. I’m not sure if these are the actual design principles I’m going to roll with, but a few categories definitely seem to be emerging. I’m deeply interested in a phenomenological standpoint that considers sense-making, sensuality and embodied experience as core to my argument. I have found that a key component to my work is the temporal, multi-scalar, cyclic nature of outdoor processes, as well as the differing levels of human perception of those changes. Indeed, these two principles are tightly woven together at this point, but it may make more sense to split them apart.

I’m already realizing that I need a principle that considers space, such as the way sunlight filters through leaves or how crepuscular rays fill outdoor space, and mapping these to surfaces in the office or dust particles in the air. Nature has an interesting way of rendering space visible in subtle ways and using it to communicate information, and I’m fairly certain I need a principle that captures that. I also aim to further explain my design principles by applying them specifically to light as a design medium, based on my lighting studies.

In Summary

  • As humans we are unavoidably situated in our biology, which influences how we perceive, categorize and make meaning of the world. A design that aims to communicate a sense of the outdoors must consider the biological connection that makes the natural world intrinsically meaningful to us.
  • The values we associate with the outdoors are heavily influenced by the society and culture we inhabit. A design that aims to communicate a sense of the outdoors must consider the sociocultural relationships its users have with the natural world, and how (or if) it intends to change them.
  • A design that aims to communicate a sense of the outdoors must allow for multiple levels of perception and temporal resolution, utilizing different magnitudes of perceivable change to communicate the multi-scalar cyclic relationships of the natural world.

Hans and Umbach: WiiChuck Pong

Hans and Umbach recently had a huge breakthrough that they wanted to share with you. A few weeks ago they built the Monski Pong example from Tom Igoe’s Making Things Talk book, substituting a few potentiometers for the arms of their non-existent Monski monkey (and non-existent flex sensors). They learned a lot in the process, but the boys have become increasingly concerned that they haven’t done enough work with front-facing interactions.

Stuffing a few wires into a breadboard is great for proof-of-concept work, but it brings with it a delicate and fussy interaction environment that lacks robustness and aesthetics. In the last week they’ve refocused their efforts on interactive input methods, rather than raw electronics, taking apart a Super Nintendo controller and interfacing a Nintendo Wii Nunchuk in the process.

Hans and Umbach: Taking apart an SNES controller

Hans and Umbach: Arduino Hearts Wii Nunchuck

This got them thinking. “If we can access the accelerometers of the Wii Nunchuk as an input source, can we use them to play our Pong game?” The answer is yes, and the boys want to show you how they did it.

Hans and Umbach: Wiichuck Pong Components

First up, you’ll need a Nintendo Wii Nunchuk. These things are sweet, as they carry X- and Y-axis accelerometers (as well as a couple of buttons) for less than $20. Hans hasn’t found any libraries yet that interface with the analog stick up top, but these other inputs have been more than enough to keep Umbach busy.

You need access to the wires and pins inside the controller, but it would be an awful shame to cut that beautiful cable. Lucky for us, Tod Kurt has created the WiiChuck adapter, a simple tiny PCB that takes the pins from the Nunchuk plug and breaks them out into a standard 4-pin header. You can get a WiiChuck adapter at SparkFun for a measly $3.

The adapter doesn’t come with the pins to plug it into your Arduino, though, so you’ll want to get a row of break-away headers and cut off a four-pin strip for yourself. You’ll need to solder those pins into place, so now you’re also in the market for a soldering iron and some solder, plus some wire cutters for separating those break-away headers from their kin. Yeah, it takes quite a bit of stuff to get started. We’re lucky to have Umbach on our team, who carries a bandolier full of tools and electronics wherever he goes.

The whole point of the WiiChuck adapter is to be able to plug your Nunchuk into your Arduino, so you can do magic stuff like communicate serially with your computer, or control other things plugged into your Arduino. When it comes to writing code and working with the software side, Tod Kurt put together a WiiChuck library that makes it pretty easy to interface between the Arduino and the Nunchuk without doing everything yourself. If you download the WiiChuck Demo zip file, you’ll get the library of functions for connecting to the Nunchuk, as well as a demo that shows it all (hopefully) working.

The demo is great and all, but the boys wanted to make it do something. They had recently built the Pong example from Tom Igoe’s book, and were interested in controlling the paddles with the accelerometer inside the Nunchuk. There are two pieces of software at work here. The first is the Pong game itself, written in Processing, which accepts incoming serial data and moves the paddles accordingly. The second is the sensor reader, an Arduino sketch that takes incoming sensor data from the Nunchuk and converts it into a format that the Pong game understands.

To get it all to work, Hans made some changes to the Arduino sensor reader example, blending it with the code from the WiiChuck demo. That way, the Arduino would pull down and translate input from the Nunchuk’s accelerometers (and buttons) into a format compatible with the Pong game. The game itself required minimal modification: only the minimum and maximum ranges for the paddle values needed adjusting to match the range of values produced by the accelerometers.

Hans and Umbach: Wiichuck Pong Game

Et voilà! C’est magnifique! The video up top shows the fruits of our labor: tilting the Nunchuk up and down moves the right paddle, and tilting it left and right moves the left paddle. One button starts the game rollin’, and the other resets the scores.

If you’re interested in trying it out for yourself, Hans and Umbach have packaged up all their code into a fine and handy zip file. Or, you can browse the individual files here:

WiichuckPongReader.pde (Arduino)
nunchuck_funcs.h (Arduino library)
WiichuckPongGame.pde (Processing)
WiichuckPong.zip (Everything zipped up)

Thanks, and happy hacking!

Hans and Umbach: Prototyping In Light

Hans and Umbach took some time out from their work to help me with my capstone project, where I’m trying to help people maintain a connection with the outdoors when they work inside for a living. In particular I’ve been studying how sunlight plays with indoor architectural spaces, and how the shapes of cast light change throughout the day as the sun moves across the sky. My explorations have been deeply inspired by the work of Daniel Rybakken, Adam Frank, and Philips’ efforts with dynamic lighting.

I wanted to create a device that would mimic the movement of the sun throughout the day, and I turned to Hans and Umbach for advice on how to build such a thing. They recommended something as simple as a clock movement with a rotating paper screen, changing the angle and position of a beam of light from a Maglite over the course of time. We dubbed it Chrono and set forth to build a prototype, to see how it would work.

"Outside In" Chrono Prototype Construction

"Outside In" Chrono Prototype Stage

Light is a tricky beast to prototype with, to be sure, but these small steps begin to point us in the right direction. We recorded a few time-lapse videos that show the movement of the prototype in a simulated office desk environment, condensing thirteen minutes of movement into less than two minutes:

The electronics are simple, but it’s an interesting and subtle way to communicate the slow passage of time within “embodied” space!

Hans and Umbach: Arduino Party!

Our good friend Lorelei needed some electronics help the other day, so Hans and Umbach invited her over for a fun-filled Arduino Party. She’s prototyping a force-sensing coaster that encourages people to drink plenty of water throughout the day, and the first step towards that goal is getting a force sensor to communicate with her Arduino.

She managed to pull an analog input from her FlexiForce pressure sensor and send it as a PWM output to an LED, but was still getting some terrible noise from the sensor. Umbach managed to dump it all to the serial monitor, and lo and behold, the Arduino was reporting readings that ranged from 0 to 1023 and everything in between!

Hans and Umbach: Arduino Party!

Hans dismantled the circuit and rebuilt it, taking the Cat Sat On The Mat example from Tom Igoe’s wonderful Making Things Talk book as inspiration. In adjusting the sensitivity of the output from the sensor he tried all sorts of different resistors, from 1K to 100K, before ultimately settling on a 15K resistor. Umbach wired up the circuit to the Arduino, ran it into the serial monitor on Lorelei’s computer, and whammo! Success! A clean, analog signal coming from the FlexiForce!

Hans and Umbach: The Completed Circuit

We had a lot of fun, and encourage the rest of y’all to throw your own Arduino parties! Just don’t try to combine them with fondue parties… electronics don’t mix well with boiling oil and cheese… then again, they might go well with a chocolate fondue, so don’t let us stop you!

Hans and Umbach: Tragedy!

We have some sad news to report on the Hans and Umbach side of things. Umbach was soldering the other day, putting together our second Arduino Proto Shield from Adafruit, when he burned himself pretty badly on his soldering iron. Don’t worry, he’s a healer!

You see, Umbach keeps his soldering iron to the left of himself when he’s working. The strong affordance of the soldering iron seems to indicate that you should hold it like a pen, but of course that is a ridiculous notion. The long metal end of the iron is about a million degrees, and it will burn your skin in an instant. You should hold it not like a pen, but further back, like a… not pen… or a paint brush… or something.

But then, even that is not entirely accurate. As you get more comfortable with soldering you realize, or at least Umbach has realized, that the iron is not the most important thing you wield. The iron merely heats up the area, and it does not require nearly as much fine motor control as the solder itself. Indeed, the solder should be held in your dominant hand, so you can be as precise as possible with whatever parts you may be slagging in liquid metal.

Umbach was in his groove, grabbed his soldering iron in his left hand, and without thinking made to pass it to his right hand, as he would a pen. He grabbed it for only half a second, but it was enough to burn the back of his index finger and the inside of his middle finger.

There is a lesson here, and it’s not necessarily that Umbach was thoughtless, careless and stupid. As humans we are constantly filtering information, performing apparently routine tasks without deliberate thought. This is in much the same way that I am convinced no one actually learns Photoshop or Illustrator, but over time is able to unconsciously filter out the aspects of the interface that distract from their everyday usage. It’s an incredible ability, and one that frees up our mental capacity to dream of such awesome things as transistors, skee ball, and bears juggling chainsaws.

We go through life largely in a state of absorbed coping. In the case of Umbach, we see that this can get us into trouble sometimes. Grabbing the hot end of a soldering iron is clearly a poor decision, and had Umbach been consciously aware of the results that would inevitably follow from his actions he would never have done it in the first place.

But we are people, and as people we adopt certain habits that are applicable in certain situations. When these situations unexpectedly cross one another, such as the strong pen-like affordance of a soldering iron triggering the pen-like habit of holding, we may find ourselves with burned fingers. As designers it’s important that we deliberately consider what the form of our products communicates to our users, even unconsciously, and design in a manner that discourages the absent-minded adoption of an incorrect interaction model.

Or maybe Hans just needs to take over the soldering from now on.

Your Workflow is the Battlefield

There’s been quite the wailing and gnashing of teeth over the Apple iPad not supporting Flash. Personally, I welcome this new landscape of the web, where a future without Flash seems not only possible but bright indeed.

That said, what is unfolding here is of considerable gravity, and will likely determine the future of the web. Most web professionals use Adobe tools in some capacity to do their job, whether Photoshop, Illustrator, Dreamweaver (gasp), Flash, Flex, Flash Catalyst, or even Fireworks (which is, according to many, the best wireframing tool on the market, despite its quirks and crash-prone behaviors).

Now, I am not privy to inside information, but based on what I’ve been able to glean, Adobe’s strategy is something like this: there is a deliberate reason that your workflow as a standards-based web professional sucks; that Photoshop doesn’t behave the way you want it to, that exporting web images is still a pain in the ass, and that you actually have to fight the software to get it to do what you want.

Adobe knows how you use its software. Adobe knows how you want to use its software. Adobe understands your existing workflow.

And it doesn’t fucking care.

You see, Adobe doesn’t view you, as a web professional, as someone engaged in building websites. It doesn’t view itself as one who builds the tools to support you in your job. Adobe does not view you as the author of images and CSS and HTML and Javascript that all magically comes together to create a website, but rather as the author of what could potentially be Adobe Web Properties™.

They are not interested in supporting your workflow to create standards-based websites, because that is not in their strategic interest. They would much rather you consented to the cognitive model of Adobe Software™ to create proprietary Adobe Web Properties™ that render using Adobe Web Technologies™.

In essence, Adobe wants to be the gatekeeper for the production, as well as the consumption, of the web.

Apple knows this, and knows that the future of the web is mobile. Its actions are no less strategic than Adobe’s, and Apple has chosen a route that deliberately undermines Adobe’s strategy for controlling not just the consumption of rich interactive experiences on the web, but their production as well.

From the production side, as far as Adobe is concerned, if you’re not building your websites in Flash Catalyst and exporting them as Flash files, you’re doing it wrong.

Your frustrations with Photoshop and Fireworks not supporting the “real way” web professionals build standards-based websites are not by accident, but by design. Adobe views each website as a potential property over whose look, feel and experience it can exert control. As these “experiences” become more sophisticated, so do the tools necessary to create them. Adobe wants to be in the business of selling the only tools that do the job, controlling your production from end to end, and then even controlling the publication of and access to your creation.

Apple’s own domination plans for the mobile web undermine all this.

And Adobe is pissed.

Hans and Umbach: “You know how grip works.”

Over winter break, Kate and I were fortunate enough to attend the British Advertising Awards at the Walker Art Center in Minneapolis. One commercial from Audi in particular really stuck with me, because of its clear reference to our highly sophisticated ability to navigate and interact with our physical surroundings.

With the Hans and Umbach project, this is what I aim to render explicit; that we have these incredibly well-developed skills for working with the physical artifacts in our environment, and by deliberately designing for these skills we can create more compelling, more engaging, more intuitive interactions.

Hans and Umbach: Soldering and Building

The Hans and Umbach Electro-Mechanical Computing Company

Phew, have we got a treat for y’all! Last night Hans was able to tame the wild beast that is Adobe Premiere Pro, and compiled some videos of Umbach (or was it Hans?) building some stuff with Arduino.

First up, the boys soldered together an Arduino Proto Shield kit from Adafruit. You can witness their amazing efforts in super-speed time, where sixty minutes of inhaling metallic fumes has been condensed into three power-packed minutes!

After that, the boys took their new creation and built a three-channel LED color mixer, out of a few potentiometers and one of these kick-ass triple output LEDs from SparkFun.

A huge shout goes out to Ryan Rapsys of Erratik Productions for the music!

Hans and Umbach: Atoms Are the New Bits

The Hans and Umbach Electro-Mechanical Computing Company

Needless to say, Hans and Umbach are extremely excited about this new article in Wired magazine, which champions a trend of garage tinkerers and other DIYers acting in concert to bring the world its next generation of products. Just as the internet democratized digital publication, so will new prototyping technologies democratize physical production.

We’d better get crackin’.