Category Archives: Philosophy

The West

Gracious Living

I may be homing in on part of why I find the American West, not only the landscape but also its people and history, so interesting. And history not necessarily in the sense of wars fought or great leaders and historic influencers and such, but in the everyday sense. What did people, regular people, do out here? What was their lifeworld? What was their intersubjectivity?

In a way, it’s a means of defamiliarizing my own lifeworld, my own values and needs and hopes and dreams and goals and aspirations and fears… to try on someone else’s, to render more explicit to my own consciousness my own silently-held assumptions, biases and predispositions. And if I understand mine better, I can come to know those of others better. I can better empathize with them, knowing that my own convictions and beliefs are sourced in something, sourced in experiences I have had that have influenced my values and thinking, rather than some innate characteristic of mankind that others may or may not have discovered yet.

Indeed, this thought experiment allows me to reject the notion that I have attained some kind of objective, universal, transcendent truth or enlightenment. In some ways I have found my own enlightenment, yes. I have discovered, and continue to hammer out, a personal framework for meaning. But this is no guarantee. It is something that frames, that helps me make sense of human life, but not something that determines life.

Transactions for Saturday October 25th, 1884

And so, I look to the West, and the ephemera of the 1800s, yes, that era of westward expansion and exploration and such, and I find it fascinating to see what people, what ordinary people, did in those circumstances. What they were forced to do. What they chose to do. How they went about doing it, and why, and what they did once they got there.

And I’m figuring that a lot of the style of the West, the artifacts and such that we inescapably associate with the West, is not necessarily by design… but that these things were crafted from the only resources available. So yes, they were designed, if not in an aesthetic sense, then in the sense that there were strict constraints on the materials available to build with, and those in turn determined the styles of things, of buildings, of main streets in boom towns… indeed, the stark reality of building a town out of nothing, yes, that’s positively fascinating.

Wallpaper Strata

But also, what luxuries did we choose to bring along? From alcohol to prostitutes to sourdough bread to chandeliers in barrooms to player pianos to ornate ceiling tiles to framed art to wainscoting to wallpaper… these things didn’t just come from nowhere. In a lawless land such as the historic American West, they likely ended up there because someone decided they could make a buck by doing it. A piano is heavy and delicate and difficult to transport, but man, just as with any bumpin’ night club, I’m sure it could bring the crowds if you were the only saloon that had one.

But also, I’m interested in the West from an art direction standpoint: the way western films, with their spurs and pointed-toe cowboy boots and electric guitars and whistles and harmonicas and such, the way Sergio Leone has basically mediated the portrayal of the West, have given us these vast tools of shared meaning with which we can craft and express a certain experience. And in the case of diegetic sound, that can be pretty authentic, from the sounds of insects to wind to trotting horses. But in the electric guitars and whoops and hollers of The Good, the Bad and the Ugly, the non-diegetic sound has become a resource for emotional connection and, yes, designed experience.

So there’s West as nature, which I love. There’s West as life, historically, which I find interesting. There’s West as a shared cultural cinematic medium, the experience design of the West, which I find awesome.

And. There’s West as a raw portrayal of what we find quintessentially valuable as Americans. Or even, as humans. As people. I don’t want to draw too many universals independent of American culture, but there’s something telling to the West, the rawness of the hardscrabble life its history afforded, that tells a story of what we truly, truly value, yes as historic Americans, as a culture, but perhaps even as humans, when we have nothing and are presented with the rawest of living.

If we were to carve out an existence where there is nothing (taking on the perspective of a period pioneer, for a moment, and blindly ignoring the existing indigenous cultures we oh-so-ravaged, which already had indescribable “meanings” associated with all that we considered wide open “nothing” in the West), what are we going to carry with ourselves on our backs as we make our way westward? What are we going to build once we get there? What are we going to seek out, either from nature, or from others, in the hope that a loose regional society can provide what essentials we cannot provide for ourselves?

Entertainment District

What do we value, above all else, such that we will go through such pains to carry it with us, or build it, or seek it out? Shelter, water, food, fame, fortune, power, sex, liquor, the sublime, art, culture, music, the church, tobacco, opium… what motivates us, as a people, even at the fringes of civilization? What remains constant? What do we carry in our hearts, wherever we travel, whatever our society, whatever its wickedness and lawlessness?

In a way, I find the West a fascinating experiment in what we truly find most valuable as a culture, a society, a people, perhaps even a species… a perfect laboratory for seeing how, given an opportunity to start it all over with limited resources, we would desire to remake ourselves. The scarcity of resources and obvious constraints and harshness of life in the Old West fascinates me as a designer, perhaps even more so than the aesthetics of Western life, and I feel I’m beginning to understand that the aesthetics we associate with the West are inextricably tied to something that was very real, and very raw, for some people’s existence.

The stories of other people’s lives, and how they lived them from day to day, fascinate me as a writer and a storyteller. I continue my attempts to unravel the stories of the West, for I feel as though they hold some kind of truth as to what dwells in the hearts of humankind.

Hans and Umbach: Exploring the Boundary Between Physical and Digital

My final semester of graduate school is now long over, I have spent the last few weeks immersed in the awesome culture that is Adaptive Path, and yet embodied interaction continues to dominate my thoughts.


Today I have been reviewing my notes from the Hans and Umbach project, after using a terminal command to combine hundreds of text documents into a single file, which I converted to a PDF and loaded onto an iPad that I am currently borrowing from work. One thing that is striking, and perhaps disappointing, is that I originally set out on the modest goal of learning Arduino, hardware sketching and physical computing. Where others have made LED matrixes that let you play Super Mario Brothers, or tanks that they can control with an iPhone, I made a few LED mixers, interfaced with a 7-segment display, took apart a Super Nintendo controller, and played Pong with a Wii Nunchuk.
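For the curious, the combining step described above needs little more than `cat` and a shell glob. Here’s a minimal sketch, with stand-in filenames and folder layout (the actual notes and paths from the project are not shown, so everything named here is an assumption):

```shell
# Stand-in setup: a small folder of notes, in place of the hundreds
# of project text documents described above (filenames hypothetical)
mkdir -p notes
printf 'first note\n'  > notes/01-notes.txt
printf 'second note\n' > notes/02-notes.txt

# The combining step itself: concatenate every .txt file,
# in filename order, into a single document
cat notes/*.txt > combined.txt
```

From there, the single combined file can be handed to whatever PDF-conversion tool is handy; the point is that the shell’s glob expansion does the hundred-file tedium in one line.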

Hans and Umbach: Arduino, 8-bit shift register, 7-segment display

My results didn’t quite meet my initial expectations. Electronics, it turns out, is still an archaic craft wrapped in cloaks of obtuse language and user-hostile encodings, and is certainly an art unto itself. I realized that to produce the robust interactions I had intended, with all the nuance and detail with which I approach my screen-mediated design work, would take an entire career worth of learning and refinement.

So then, were my efforts with Hans and Umbach all for naught? I don’t believe so. Physical computing exists at the intersection of the physical and the so-called digital worlds, which is why I was originally so interested in studying it. In reflecting extensively on my own process of learning electronics, and simultaneously diving deep on academic research behind notions of embodiment, I came to realize that perhaps stumbling through the craft of linking these two worlds together wasn’t the best use of my strengths.

Because, I realized, the boundary between the physical and digital worlds was a false one.

Indeed, mentally compartmentalizing the physical from the digital makes sense from a computer science perspective, or from a system architecture perspective, but it is a wrong, dead-wrong approach from an interaction design perspective.

Every interaction, whether it is with a coffee cup or a keyboard or pixels on a screen, exists in the physical world, is perceived through our senses, is actively interpreted by us, and is thus rendered meaningful by our interpretation. Whether it is physical or digital, every interaction is embodied, as we only interact with the physical manifestations of digital information.

Musée Mécanique

This was a surprising conclusion to reach, as the whole reason I set out on this inquiry was to prove that the interactions afforded by the devices at the Musée Mécanique were of a different class than those afforded by screens and input devices. What I began to discover, though, is that even our most familiar, most natural, most culturally-embedded interactions, are all technologically-mediated.

Type Cliché Letterpress Project

There is nothing natural about plain paper, dark ink or the printing press; these are all technologies. However, a book differs greatly from an e-book in terms of the richness of its physicality. Screens typically offer an interaction that is physically impoverished, given the rich range of sensing capabilities we have as human beings. By not engaging our senses for texture, warmth, smell and sound, the e-book is limited in how it engages our sense of embodiment, but it is embodied nevertheless.

Indeed, too much effort has been wasted trying to explain how and why tangible computing is new and different from what came before it… what, intangible computing? I believe that the assertion is irrelevant, that tangible computing is not new, but as an area of inquiry it has given us a new perspective from which to reconsider all interactions, namely that of their embodied qualities. While tangible computing is mostly concerned with the sense of touch and physical manipulability, embodied interaction considers the larger notions of physicality as a whole, the human body as a mediator of experience, the nature of being, and the role of individual interpretation as central to the formation of meaning.

All interactions can benefit from an embodied perspective, not just analog, physical, in-the-world interactions, but so-called digital ones as well. There are all these things in the world, hardly perceptible but nonetheless important, that we use every day to create meaning.

What I continue to outline, through my consideration of embodied interaction, phenomenology and metaphor, is a means by which we can talk about these experiences in such a way that embodiment can better inform our design process.

I’m not there yet, but I’m continuing to work it out.

And now, I get to work it out with some of the coolest people in the whole entire world.

Hans and Umbach: Miscellaneous thoughts regarding the embodied future of interaction design.

Okay. It’s been a while. I’ve taken some time off. From this, as well as from embodied interaction.

But let’s get back at it. Embodied interaction, that is.

I’ve had quite some time to decompress about this, take a pause and see what sort of ideas keep bubbling to the surface. And the results are not surprising. Or are surprising. Or whatever.

On a postmodern interpretation of technology, and rejecting a sense of inevitability.

Jaron Lanier. And questioning underlying assumptions, tacit assumptions, the colorless, odorless nature of our technological surroundings. Of our environments. Render explicit to consciousness that our ecologies are not inevitable, that they are not natural, that they are not predestined, that they are constructed and hold no truths. That is not to say they are completely relative, but just that they are subjective. Indeed. Privacy on the internet. That you cannot hear people from your car. Certain responsibilities, the lines we draw between designer and user, producer and consumer, etc.

Who is responsible when my gas pedal sticks? How about when I swerve to miss a deer? Or if I hit the deer? Should the car allow me to swerve so strong that I can flip it? Should it never be allowed to exceed 55 mph? What is the logical limit for the top speed? What is safe? What is safe enough? Given infinite resources, given no technological constraints, where would we find ourselves?

On the future of screen-mediated interactions.

Screens, for instance. There’s nothing inherently wrong with screens. A screen is just a collection of pixels. RGB emissive light, photons in this case. However, they could just as well be CMYK dots, like a newspaper. Like a magazine. Imagine how that would change our relationship with paper. Indeed, in that case you could pinch-zoom the 2010 Rand McNally U.S. Road Atlas. You’ve already tried that. Jake’s already tried to slide-unlock his wallet.

There is a feedback loop here. Digital technology influences how we interact with our analog environments. Not just vice versa. Twenty years ago the analog interaction of operating a Rolodex offered a logical metaphor for the digital interaction of browsing an address book. But no more. How many 15-year-olds today do you think have operated a Rolodex? How many do you think have operated an iPod? Or an iPhone? The one interaction can no longer effectively be leveraged as a metaphor for the other. Methinks the metaphor of the Rolodex interaction is dead.

So, you take Iron Man 2, with its transparent glass screens, and you think, man, that looks cool at first, but then you realize trying to focus on flat content projected on a transparent screen would be somewhat straining. But. If you can project an image on clear glass like that, who is to say you cannot also project a black background? And now, your office may consist of hundreds of screens, but when they’re off they’re transparent. Open. Airy. They barely exist. And the fact that a screen can be transparent, or can project its own black background for familiar contrast, opens up all sorts of options for augmented reality.

Though, for a truly transparent glass screen (which is merely a transitory technology on the way to in-air displays) you come up against problems of auto-stereoscopy and determining the relative perspective of the user’s viewpoint… parallax and the like. I move my head, and the display needs to update accordingly. Or, what if I’m sharing the same augmented reality screen with someone else? They need to see a different display, from a different angle, than I do. Here we need some sort of holography that projects a unique image from every emissive point.

When we think of screens, we need to think beyond the current technological implementation of the screen, and instead think of the screen metaphorically. What are the terms we use to construct our thinking of this display? Can we touch it? Do we touch it? If we touch it does it get greasy? Not necessarily. I predict in a few years that nanotechnology will provide us with materials, perhaps inspired by the leaves of the lotus, that collect no dust, accept no grease (even from the infinitely-greasy human hand). Imagine if all glass were made of such a material.

Is a screen an extension of a book? A viewport into another world? A wormhole? How we align ourselves, socially and culturally, with these artifacts greatly influences how we perceive them, how we conceptualize them, how we imagine ourselves using them. We look back at old science fiction movies and laugh at their cornball conceptualization of the future, but it’s important to recognize that every piece of science fiction is a product of a unique society and culture. Mainstream science fiction especially (or depictions of the future, as seen in Iron Man) needs to consider its sociocultural situatedness.

Ironically, technology in science fiction needs to appear futuristic, but not so much so that it seems unbelievable and unachievable given current understandings of the world. I recently read that plants may use quantum entanglement to maximize the efficiency of photosynthesis, and that quantum entanglement may allow birds to “see” the Earth’s magnetic field, aiding in migration. If these theories turn out to hold weight and thus become popularized, they will influence our shared, intersubjective world, and become a resource that science fiction can leverage for believably futuristic renderings of the, well, future.

On questioning hegemonies.

I am realizing that one of my roles as a designer is to question, or at least render explicit, the tacit assumptions of the hegemonies in which we conduct our lives. As interaction designers, we have inherited the legacy, a powerful and important legacy at that, of a scientific approach to computation, as well as an initially cognitive-systems approach to interaction. The scientific, non-humanistic origins of our field, I believe, continue to silently influence the way we think about and talk about interaction.

There is a strong, increasingly strong, reaction against these rational histories of human-computer interaction, towards a more experiential model that considers the whole person, their emotions, desires, goals and fears, not only as something to design for, but something to design with. The user as a medium for design. Indeed, the interpretative abilities of the user are an incredible resource that can, nay must, be effectively leveraged by our designs.

The value in a design is not objectively measurable, and is not contained in the designed artifact itself, but in the union between the artifact and the user. The simplest designs are compelling not merely because they are simple, but because they so gracefully leverage the rich intersubjective world of the user (or users) to give them meaning. As phenomenology tells us, these meanings are situated not in the artifact, but in the consciousness of the user herself. Interaction design is concerned not with the objective world, but the messy, subjective world of interpretation. Phenomenology, concerned with reality as it is revealed to and manifest in consciousness, is at the very core of interaction design.

I am proud that interaction design is increasingly concerned with the messy subjective world, that it realizes that an account of the objective qualities of the world is insufficient to design compelling interactions. Nevertheless, I believe there is still significant work to be done in shrugging off the scientific cloak of computation, so that we can truly design future-facing interactions. I believe certain metaphors used for describing our systems have hung on past their prime, and silently and insidiously damage progress in our field. Most notably, as I have described recently, is the conceptualization of a virtual world that exists independently of the physical world.

On dispelling the myth of the virtual world.

While the difference between the physical and digital is certainly important from a technology and computation perspective, I believe it is meaningless from an interactive perspective. Nevertheless, we still speak of making virtual friends, roaming virtual worlds, or downloading digital information. I believe this categorization creates a false boundary between the physical and digital worlds, mischaracterizing the digital and trivializing the real, physical, embodied interactions that happen, that must happen, when a user interacts with the so-called virtual world.

Interacting with a friend in World of Warcraft is greatly different than interacting with them when they’re standing in your living room, but not because one is a “virtual” interaction and the other is a “real” interaction. No, they are both physical interactions, one mediated in co-present physical space (with all the available expressive faculties that come along with such co-presence), and one mediated through keyboard, mouse, screen and audio. To characterize the latter as “virtual” is to casually dismiss the embodied interactions that must happen in order for the conversation to take place, and to neglect possible opportunities to make the interaction more richly embodied.

On disentangling interaction design from its computational roots.

Computer science must necessarily distinguish between hardware and software layers, either of which can branch into any multitude of sub-disciplines. However, users do not necessarily make any such distinction. I have observed college freshmen working with computers, and their conceptual model of computers often does not distinguish between operating system and application, or even between local (as in, on their computer) and remote (as in, on the internet). To them, a computer (or even computation as a whole) is one amorphous interactive mass, which, whether we like it or not, is how we have to design it.

Also. We must design in the abstract, but ultimately our designs are interacted with at the level of the ultimate particular. People never abstractly interact with a product. They only particularly, specifically, interact with something.

Or something.

Outside In: Evoking a Sense of the Natural World in Indoor Spaces

Last night I delivered my thesis presentation, effectively completing my master’s degree in human-computer interaction design. Over the last seven months I’ve been conducting a design exploration into the ways we find nature meaningful to us, and uncovering ways to enliven indoor environments with a sense of the outdoors.

Here is the 20-minute presentation:

A big, hearty thanks to everyone who came out to see it live and in person!

Hans and Umbach: The Virtual World You Requested Does Not Exist

Our interests in embodied interaction started almost a year ago, as we spent the summer in San Francisco. Confronted by the overwhelming colors and textures of a real living-and-breathing (and, based on olfactory sensations, clearly excreting) city, we realized how malnourished our computer-mediated interactions were, compared to the rich sensory experience of the real, physical world. Additionally, our time at the Musée Mécanique made us appreciate the aesthetic experience of real physical artifacts, of tactile materials like warm wood cabinets and cold metal handles. We liked heavy dials you had to twist, piano scrolls that spun, machines that would hum and shake as their gears and belts within worked their magic.

Musée Mécanique

There is something different, something tangibly different, about real objects in real space around you. Sound that emanates from two metal pieces clanging together in real space is so much more satisfying than a recording played through a speaker. That they move, that they displace the air around them, the same air that you breathe, is just one of the ways we seem inextricably tied to the physical realm.

Electronics as a tool for extending computation into the physical world.

This was the goal as we began our inquiry: how to create physical interactions that exist in the real world and involve the manipulation of real artifacts, yet are invisibly backed by the strengths of modern computation and network technology. In our efforts to reintroduce interaction to the space around us we learned electronics, we experimented with Arduino, and we took our knowledge of programming and extended it into interactive electronic artifacts that existed in the world with us.

Hans and Umbach: Wiichuck Pong Components

As we tinkered with electronics we quickly discovered that all of the subtlety and nuance, as well as challenges, that go into designing “digital” interactions are present in physical computing, only amplified because we were now considering both an electronic and a physical layer. These are two layers that, say, when you build a website or application, you take for granted. With Arduino they become your responsibility, subject to whatever limited grasp you may have of the subject area. We are disappointed by the limited progress we made in learning electronics, but we certainly have a renewed appreciation for people with the wide array of skills necessary to make not only functional, not only good, but great computationally-backed physical interactions.

From a theoretical perspective, our original interests were to understand why these physical in-the-world interactions are so fulfilling and evocative, and why our virtual interactions feel so vapid in comparison. Our goal was to explain why these physical interactions should earn a privileged seat at the table, while virtual interactions should be sent to their room.

Tangible computing: digital information, physical interaction.

As we sifted through the layers of theory on embodiment, we realized we needed a better understanding of what defines a virtual interaction, and how it is different from a physical one. Traditionally, a virtual/digital interaction involves a screen composed of pixels, with a keyboard and pointing device used for controlling the interface. Tangible computing has worked to categorize interactions, characterizing certain ones as ‘tangible’ and other ones as ‘digital’. Indeed, with ambient computing Ishii and Ullmer have dedicated much of their work to studying how we can render ‘digital’ information, or ‘bits’, in ‘physical’ space. A number of authors have sought to define tangible computing in a manner that differentiates it from ‘regular’ computing. A few common tenets:

  • Tangible computing unifies input and output surfaces. Instead of a keyboard that adds characters to a screen, or instead of a mouse that moves a cursor, tangible computing offers a new interactive model where the input mechanism and output display are one and the same artifact.
  • Tangible computing affords direct manipulation. Instead of mapping the physical space my mouse occupies to the virtual space on the screen, the physical object I am working with can be grasped, turned and shaped in order to change its unified output characteristics.

Three Flavors: Data-Centered, Perceptual-Motor-Centered, Space-Centered

Hornecker outlines three primary views of tangible interaction. The work of Ishii and Ullmer concerns a data-centered viewpoint, where physical artifacts are computationally-augmented with digital information. A second is a perceptual-motor-centered view of tangible interaction, which aims to leverage the affordance and rich sensory experience of physical objects. As championed by Djajadiningrat and Overbeeke among others, this view of tangible interaction emphasizes the expressive nature of human movement. Finally, Hornecker subscribes to a space-centered view of tangible interaction, which involves embedding virtual displays in real-world spaces.

I subscribe to a perceptual-motor-centered view of tangible interaction.

And I believe that the data-centered and space-centered views are absolute nonsense.

Let’s talk about digital and virtual.

Both the data-centered and space-centered views of tangible computing make reference to a ‘digital’ or ‘virtual’ world of information. These concepts are familiar enough, and we all have a gut feeling as to what they mean. Digital information is ones and zeroes. It lives in a computer or on a server somewhere. It is ephemeral, existing without really existing, and can be infinitely accessed, copied, reproduced and distributed without loss. It’s what got the music industry’s panties in a bunch.

The virtual world is the world in which this ‘digital’ information exists, and it lacks many of the familiar characteristics of the physical world. Things don’t actually ‘exist’ in the virtual world. A photo on Flickr is not a photograph in real life. You don’t need to be co-present with the photo in order to see it. Multiple people can look at the same photograph at the same time, without being aware of one another.

The metaphors we use to describe digital information and the virtual world emphasize its distributed, ephemeral nature. Deleted files disappear “into the ether” and we pull things down from “the cloud.” Like an atmosphere that envelops us, we think of it as existing independent of us, independent of the moments we perceive it through our devices.

Nevertheless, digital and virtual are only conceptual metaphors. They do not describe the objective qualities of our networked, computational systems, but rather our subjective framing of them. They are extremely effective metaphors, yes, as evidenced by their widespread use and prevalence in thought. But it is nonsense, utter nonsense, to claim that the virtual world exists, and is any different than the physical world.

There is no digital information. There is no virtual world. There is only the physical world, where we encounter mediated instances of so-called ‘digital’ information.

You never see, nor interact with, a virtual world. There is no such thing as a virtual display. The idea of augmenting real, physical objects with digital information is meaningless, as is the idea of augmenting physical spaces with virtual displays.

If a tree falls in the forest and no one hears the sound, it does not make a sound.

My mind’s telling me virtual, but my body says physical.

All of your interactions with the ‘virtual’ world are necessarily mediated by whatever system or tool you are using to access it. This post, for instance, is not virtual. It is a collection of physical pixels emitting physical photons of light, which enter your eye in a pattern that your brain recognizes and interprets as text. This is the case if you are reading this on your laptop, your phone or your iPad. If you want to comment on this post, you will perhaps press physical keys on your laptop to make recognizable characters appear on your physical screen, until they are in an amount and order you deem satisfactory.

Perhaps you will comment by pecking this out on an iPhone or iPad’s ‘virtual’ keyboard. Again, just because the keyboard is rendered on a screen, composed of pixels, does not mean that it is virtual. Just because there isn’t tactile feedback (technically there is tactile feedback, as your finger doesn’t pass through the device like a ghost, but it may be feedback that doesn’t meet your expectations for a keyboard) doesn’t mean it isn’t physical.

You never have direct, unmediated access to what is metaphorically described as the virtual world. All of your interactions with ‘virtual’ information are necessarily physical, necessarily tangible, and therefore embodied. Thus, anyone who claims that tangibility is a new agenda for computing is sadly mistaken. Tangibility has always been core to our ability to interact with and experience computational devices, from pixels to keyboards to touch screens.

All interactions are not created equal.

The arguments over classification, determining what ‘is’ a tangible interaction versus what ‘is’ a virtual interaction, are completely misguided. All interactions are tangible, all interactions are physical, all interactions are embodied, but all interactions are not necessarily created equal. As humans we have highly-developed capacities to perceive, interpret, and make meaning out of our surroundings. In traditional desktop computing, as well as touch screen computing, devices tend to leverage only our most basic capacities for seeing and touching. These characteristics do not make an interaction virtual, they do not make it intangible, but they do make it physically impoverished.

This was the big surprise the boys and I encountered over the course of this project. We initially set out to explain why physical interactions were more fulfilling than virtual ones, and how the traditional screen, keyboard and mouse ignored all but the most rudimentary human capabilities for interacting with the world. What we realized was that we couldn’t merely categorize some interactions as physical and the other ones as virtual, because all interactions are necessarily situated in and mediated by the physical world. ‘Virtual’ is a convenient conceptual metaphor for describing a certain class of interactions, those that evoke only a limited set of our physical and perceptual capabilities, but the notion of a disembodied virtual world independent of the physical world is absolute nonsense. Moreover, I believe the appeal of the ‘physical’ and ‘virtual’ metaphors, and the territorial battles that have been fought under their banners, have distracted us from far more important agendas.

Traditional desktop interactions are unsatisfying not because they are ‘intangible’ or ‘virtual’, but because they offer an impoverished physical interaction that does little to leverage our unique abilities to perceive, interpret, and make meaning from our surroundings. Tangible computing differentiates itself not because it offers a ‘physical’ representation of ‘digital’ information, but because it uniquely focuses on the tactile qualities of interaction, and the rich sensory experiences that the world can afford.

Ultimately, all interactions are tangible. By acknowledging the metaphorical barrier between the ‘physical’ and ‘virtual’ worlds as a false one, and instead focusing on our ability to deliver richly evocative interactions through these different interactive paradigms, we are empowered to build more compelling interactions.

Hans and Umbach: The Role of Metaphor in Embodied Interaction

Through their research, Hans and Umbach have discovered that there is no shortage of brilliant work summarizing the primary concepts of embodied interaction. From Antle to Schiphorst, from Dourish to Hornecker, from Robertson to Sharlin to Lowgren to Fernaeus to Djajadiningrat to Fishkin, everyone seems to be reading the right stuff. Everyone is talking about Heidegger and his hermeneutical phenomenology, a philosophical approach to understanding the way the world is manifest in consciousness, how we interpret our experience with the world, and ultimately how we form meanings with it.

Everyone is channeling Dourish, and his work unifying social computing and tangible computing under the banner of embodied interaction. Many authors are channeling Lakoff and Johnson, and their profound work studying linguistics, metaphors and embodied cognition. Indeed, any text that discusses embodied interaction, without reference to Lakoff and Johnson, is immediately suspect in the boys’ book.

Lakoff and Johnson, and the role of metaphor in human thinking.

Lakoff and Johnson posit that much of our language, and thus much of our thinking, is dependent on our use of metaphors to describe the world. These metaphors are so ingrained in our thinking that we are rarely conscious of their use. For example, we describe time using spatial metaphors, or even material metaphors. Things that are in the future are “ahead” of us, and things in the past are “behind” us. We talk about the speed at which we perceive time passing, and we describe time as though it is water, a continually flowing substance. Time slips through our fingers, we don’t have enough of it, and we frequently run out of it.

Lakoff and Johnson argue that metaphors are not just convenient linguistic tricks that allow us to communicate more efficiently with one another, but that our brains are hard-wired to categorize and associate in such a way that we can’t help but think in metaphor. Hans and Umbach have definitely experienced that in the last few months, as they’ve been learning electronics. As they work with circuits and components, trying to build things that work and debug things that don’t, they’re constantly using spatial and material metaphors as a foundation for their thinking. We talk about electricity “flowing” from negative to positive, as though it is water. We talk about resistors resisting (or constricting) the flow of electricity. We talk about capacitors “filling up”, or buttons “closing” a circuit, or transistors “waiting” for a signal.
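For what it’s worth, the “filling up” metaphor tracks the underlying physics rather closely: a capacitor charging through a resistor follows an exponential curve, reaching about 63% of the supply voltage after one RC time constant. A minimal sketch of that curve, in Python rather than on the breadboard, with purely illustrative component values:

```python
import math

def capacitor_voltage(v_supply, r_ohms, c_farads, t_seconds):
    """Voltage across a charging capacitor: V(t) = Vs * (1 - e^(-t/RC))."""
    tau = r_ohms * c_farads  # the RC time constant, in seconds
    return v_supply * (1 - math.exp(-t_seconds / tau))

# A 10k resistor and 100uF capacitor on a 5V supply: tau = 1 second.
v_supply, r, c = 5.0, 10_000, 100e-6

# After one time constant the capacitor has "filled" to ~63% of supply.
print(capacitor_voltage(v_supply, r, c, 1.0))  # ~3.16 V
# After five time constants it is, for practical purposes, "full".
print(capacitor_voltage(v_supply, r, c, 5.0))  # ~4.97 V
```

The same curve is why a capacitor across a switch can help debounce it: the voltage cannot “fill” instantaneously, so a brief glitch never reaches the logic threshold.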

If we pause just for a moment, none of these thoughts regarding electricity make any sense at all. We can’t see it, so it’s meaningless to “know” or even to “think” that it acts like water, even while this particular mental model sets us up for success when creating a functional circuit. I close things in my environment all the time, such as doors, windows and notebooks, but to say that a pressed button “closes” a circuit is nonsense. Worst of all, how can a transistor “wait”? For something to wait implies that it perceives time, that it can anticipate the future, that it will respond in some manner when the appropriate stimulus presents itself.

Animals wait. Humans wait. Transistors do not wait, and yet this metaphor, that of the transistor as an organism that can anticipate and respond, tells us how to work with them. This, then, is where Lakoff and Johnson’s work gets particularly juicy. Humans are biological creatures with particular sensory capacities. We see light across a particular spectrum, can sense heat across a particular range at varying degrees of sensitivity, and have bodies with arms, fingers and hands that grant us certain abilities for interacting with the world.

Cognition is situated in the body, and the body influences cognition.

J.J. Gibson’s work in ecological psychology argues that action and cognition are radically situated in the environment and inseparable from it, such that you can make no predictions about an organism’s behavior without knowing about the environment in which it is situated. Lakoff and Johnson extend Gibson’s work by channeling the concept of embodied cognition, which similarly claims that cognition is radically situated in the body.

Indeed, according to embodied cognition, the reason we perceive the world the way we do is not necessarily because the world possesses certain perceptible qualities, but rather because our bodies perceive and make sense of the world in a certain way. We perceive time in a certain way because we are hard-wired to experience it in that way. We organize the physical world in time because it is impossible for us to organize it independent of time. The more we learn about quantum mechanics, too, the more we learn that there is little in the world that objectively reflects the common sense human experience of time.

This is not to say that the objective world does not exist, but rather that we need to deliberately consider the way our minds make sense of the world. Since our minds are situated in our bodies, and our bodies have certain capabilities that pre-filter our access to the world, the importance of considering subjective experience as a phenomenon independent of the objective world cannot be overstated.

“I can’t get my body out of my mind.”

The notion of embodied cognition has profound implications, and we can see some of them manifested in the way we talk about, and orient ourselves towards, the physical world. Our bodies are basically symmetrical from left to right, but strongly asymmetrical from front to back. We can see things when they are in front of us, but not when they are behind us. Our limbs are oriented in such a way that we walk forward, towards our line of sight.

Thus, things that we encounter “in the future” we typically encounter as we walk towards them, and things that we encountered “in the past” are things that are behind us. This asymmetry from front to back gives rise not only to the way we orient ourselves spatially, but also influences how we perceive the world. In this way our bodies’ unique configuration determines our understanding of time, spatially situating our temporal metaphors.

The richer notions of embodiment that Hans and Umbach have discovered over the course of this project consider these notions of metaphor as a fundamental part of how we interpret the world and make meaning of it. These metaphors arise out of the unique qualities and perceptual capabilities of our bodies, such that the way we make sense of and interact with the world is necessarily shaped by our own physical characteristics.

Hans and Umbach: Establishing a Language of Embodied Interaction for Design Practitioners

My work with Hans and Umbach on physical computing and embodied interaction took an interesting turn recently, down a path I hadn’t anticipated when I set out to pursue this project. My initial goal with this independent study was to develop the skills necessary to work with electronics and physical computing as a prototyping medium. In recent years, hardware platforms such as Arduino and programming environments such as Wiring have clearly lowered the barrier to entry for getting involved in physical computing, and have allowed even the electronic layman to build some super cool stuff.

Rob Nero presented his TRKBRD prototype at Interaction 10, an infrared touchpad built with Arduino that turns the entire surface of one’s laptop keyboard into a device-free pointing surface. Chris Rojas built an Arduino tank that can be controlled remotely through an iPhone application called TouchOSC. What’s super awesome is that most everyone building this stuff is happy to share their source code, and contribute their discoveries back to the community. The forums on the Arduino website are on fire with helpful tips, and it seems an answer to any technical question is only a Google search away. SparkFun has done tremendous work in making electronics more user-friendly and approachable, offering suggested uses, tutorials and data sheets right alongside the components they sell.

Dourish and Embodied Interaction: Uniting Tangible Computing and Social Computing

In tandem with my continuing education with electronics, I’ve been doing extensive research into embodied interaction, an emerging area of study in HCI that considers how our engagement, perception, and situatedness in the world influences how we interact with computational artifacts. Embodiment is closely related to a philosophical interest of mine, phenomenology, which studies the phenomena of experience and how reality is revealed to, and interpreted by, human consciousness. Phenomenology brackets off the external world and isn’t concerned with establishing a scientifically objective understanding of reality, but rather looks at how reality is experienced through consciousness.

Paul Dourish outlines a notion of embodied interaction in his landmark work, “Where The Action Is: The Foundations of Embodied Interaction.” In Chapter Four he iterates through a few definitions of embodiment, starting with what he characterizes as a rather naive one:

“Embodiment 1. Embodiment means possessing and acting through a physical manifestation in the world.”

He takes issue with this definition, however, as it places too high a priority on physical presence, and proposes a second iteration:

“Embodiment 2. Embodied phenomena are those that by their very nature occur in real time and real space.”

Indeed, in this definition embodiment is concerned more with participation than physical presence. Dourish uses the example of conversation, which is characterized by minute gestures and movements that hold no objective meaning independent of human interpretation. In “Technology as Experience” McCarthy and Wright use the example of a wink versus a blink. While closing and opening one’s eye is an objective natural phenomenon that exists in the world, the meaning behind a wink is more complicated; there are issues of the intent of the “winker”, whether they intend for the wink to represent flirtation, collusion, or whether they simply had a speck of dirt in their eye. There are also issues of interpretation of the “winkee”, whether they perceive the wink, how they interpret the wink, and whether or not they interpret it as intended by the “winker.”

Thus, Dourish’s second iteration on embodiment deemphasizes physical presence while allowing for these subjective elements that do not exist independent of human consciousness. A wink cannot exist independent of real time and real space, but its meaning involves more than just its physicality. Indeed, Edmund Husserl originally proposed phenomenology in the early 20th century, but it was his student Martin Heidegger who carried it forward into the realm of interpretation. Hermeneutics is an area of study concerned with the theory of interpretation, and thus Heidegger’s hermeneutical phenomenology (or the study of experience and how it is interpreted by consciousness) has become the foundation of all recent phenomenological theory.

Beyond Heidegger, Dourish takes us through Alfred Schutz, who considered intersubjectivity and the social world of phenomenology, and Maurice Merleau-Ponty, who deliberately considered the human body by introducing the embodied nature of perception. In wrapping up, Dourish presents a third definition of embodiment:

“Embodiment 3. Embodied phenomena are those which by their very nature occur in real time and real space. … Embodiment is the property of our engagement with the world that allows us to make it meaningful.”

Thus, Dourish says:

“Embodied interaction is the creation, manipulation, and sharing of meaning through engaged interaction with artifacts.”

Dourish’s thesis behind “Where The Action Is” is that tangible computing (computing interaction that happens in the world, through the direct manipulation of physical artifacts) and social computing (computer-augmented interaction that involves the continual navigation and reconfiguration of social space) are two sides of the same coin; namely, that of embodied interaction. Just as tangible interactions are necessarily embedded in real space and real time, social interaction is embedded as an active, practical accomplishment between individuals.

According to Dourish, embodied computing is a larger frame that encompasses tangible computing and social computing. This is a significant observation, and “Where The Action Is” is a landmark achievement. But, as Dourish himself admits, there isn’t a whole lot new here. He connects the dots between two seemingly unrelated areas of HCI theory, unifies them under the umbrella term embodied interaction, and leaves it to us to work it out from there.

And I’m not so sure that’s happened. “Where The Action Is” came out nine years ago, and based on the papers I’ve read on embodied interaction, few have attempted to extend the definition beyond Dourish’s work. While I wouldn’t describe his book as inadequate, I would certainly characterize it as a starting point, a significant one at that, for extending our thoughts on computing into the embodied, physical world.

From Physical Computing to Notions of Embodiment

For the last two months I have been researching theories on embodiment, teaching myself physical computing, and reflecting deeply on my experience of learning the arcane language of electronics. Even with all the brilliantly-written books and well-documented tutorials in the world, I find that learning electronics is hard. It frequently violates my common-sense experience with the world, and authors often use familiar metaphors to compensate for this. Indeed, electricity is like water, except when it’s not, and it flows, except when it doesn’t.

In reading my reflections I can trace the evolution of how I’ve been thinking about electronics, how I discover new metaphors that more closely describe my experiences, reject old metaphors, and become increasingly disenchanted that this is a domain of expertise I can master in three months. What is interesting is not that I was wrong in my conceptualizations of how electronics work, however, but how I was wrong and how I found myself compensating for it.

Hans and Umbach: Arduino, 8-bit shift register, 7-segment display

While working with a seven-segment display, for instance, I could not figure out which segmented LED mapped to which pin. As I slowly began to figure this out, it did not seem to map to any recognizable pattern, and certainly did not adhere to my expectations. I thought the designers of the display must have had deliberately sinister motives, given how effectively their product violated any sort of common-sense interpretation.

To compensate, I drew up my own spatial map, both on paper as well as in my mind, to establish a personal pattern where no external pattern was immediately perceived. “The pin in the upper lefthand corner starts on the middle, center segment,” I told myself, “and spirals out clockwise from there, clockwise for both the segments as well as the pins, skipping the middle-pin common anodes, with the decimal seated awkwardly between the rightmost top and bottom segments.”

It was this personal spatial reasoning, this establishment of my own pattern language to describe how the seven-segment display worked, that made me realize how strongly my own embodied experience determines how I perceive, interact with, and make sense of the world. So long as a micro-controller has been programmed correctly, it doesn’t care which pin maps to which segment. But for me, a bumbling human who is poor at numbers but excels at language, socialization and spatial reasoning, you know, those things that humans are naturally good at, I needed some sort of support mechanism. And that mechanism arose out of my own embodied experience as a real physical being with certain capabilities for navigating and making sense of a real physical world.
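That kind of personal map can be made literal by writing it down as data. The sketch below is purely illustrative, not the pinout of any particular display: it uses the conventional segment names a through g, and lets the code answer “which segments light up for this digit?” so the awkward pin spiral only has to be solved once, in one place.

```python
# Conventional 7-segment names: a (top), b (upper right), c (lower right),
# d (bottom), e (lower left), f (upper left), g (middle).
# These digit patterns are the standard ones; the pin-to-segment mapping of
# a real display is a separate (and, as described above, maddening) lookup
# that would live in its own table.
DIGIT_SEGMENTS = {
    "0": "abcdef", "1": "bc",     "2": "abdeg",  "3": "abcdg",
    "4": "bcfg",   "5": "acdfg",  "6": "acdefg", "7": "abc",
    "8": "abcdefg","9": "abcdfg",
}

def segments_for(digit):
    """Return the set of segments to light for a single digit character."""
    return set(DIGIT_SEGMENTS[digit])

print(sorted(segments_for("1")))  # ['b', 'c']
print(len(segments_for("8")))     # 7
```

Once the pattern lives in a table like this, the micro-controller code never has to reason spatially at all; only the human writing the table does, once.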

Over time this realization, that I am constantly leveraging my own embodiment as a tool to interpret the world, dwarfed the interest I had in learning electronics. I’m still trying to figure out how to get an 8×8 64-LED matrix to interface with an Arduino through a series of 74HC595N 8-bit shift registers, so I can eventually make it play Pong with a Wii Nunchuk. That said, it’s frustrating that every time I try to do something, the chip I have is not the chip I need, and the chip I need is $10 plus $5 shipping and will arrive in a week, and by the way have I thought about how to send constant current to all the LEDs so they’re all of similar brightness because my segmented number “8” is way dimmer than my segmented number “1” because of all the LEDs that need to light up, and oh yeah, there’s an app for that.


Especially when I’m trying to play Pong on my 8×8 LED matrix, while someone else is already playing Super Mario Bros. on hers.
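The dim “8”, at least, has a simple arithmetic explanation, assuming a single shared current-limiting resistor on the display’s common pin: whatever current that resistor allows gets divided among however many segments are lit. A back-of-the-envelope sketch (the 20 mA figure is illustrative):

```python
# Number of lit segments per digit on a standard 7-segment display.
SEGMENT_COUNT = {"0": 6, "1": 2, "2": 5, "3": 5, "4": 4,
                 "5": 5, "6": 6, "7": 3, "8": 7, "9": 6}

def current_per_segment(total_ma, digit):
    """With one shared resistor, the total current divides across lit segments."""
    return total_ma / SEGMENT_COUNT[digit]

# With ~20 mA available, "1" gives each of its two segments 10 mA,
# while "8" spreads the same current seven ways -- hence the dimness.
print(current_per_segment(20, "1"))            # 10.0
print(round(current_per_segment(20, "8"), 2))  # 2.86
```

The usual fixes are a resistor per segment, or a constant-current driver chip that regulates each segment independently, which is exactly the “constant current to all the LEDs” problem alluded to above.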

Extending Notions of Embodiment into Design Practice

In accordance with Merleau-Ponty and his work introducing the human body to phenomenology, and the work of Lakoff and Johnson in extending our notions of embodied cognition, I believe that the human body itself is central to structuring the way we perceive, interact with, and make sense of the world. Thus, I aim to take up the challenge issued by Dourish, and extend our notions of embodiment as they apply to the design of computational interactions. The goal of my work is to establish a language of embodied interaction that will help design practitioners create more compelling, more engaging, more natural interactions.

Considering physical space and the human body is an enormous topic in interaction design. In a panel at SXSW Interactive last week, Peter Merholz, Michele Perras, David Merrill, Johnny Lee and Nathan Moody discussed computing beyond the desktop as a new interaction paradigm, and Ron Goldin from Lunar discussed touchless invisible interactions in a separate presentation. At Interaction 10, Kendra Shimmell demonstrated her work with environments and movement-based interactions, Matt Cottam presented his considerable work integrating computing technologies with the richly tactile qualities of wood, and Christopher Fahey even gave a shout-out specifically to “Where The Action Is” in his talk on designing the human interface (slide 50 in the deck). The migration of computing off the desktop and into the space of our everyday lives seems only to be accelerating, to the point where Ben Fullerton proposed at Interaction 10 that we as interaction designers need to begin designing not just for connectivity and ubiquity, but for solitude and opportunities to actually disconnect from the world.

Establishing a Language of Embodied Interaction for Design Practitioners

To recap, my goal is to establish a language of embodied interaction that helps designers navigate this increasing delocalization and miniaturization of computing. I don’t know yet what this language will look like, but a few guiding principles seem to be emerging from my work:

All interactions are tangible. There is no such thing as an intangible interaction. I reject the notion that tangible interaction, the direct manipulation of physical representations of digital information, is significantly different from manipulating pixels on a screen, interactions that involve a keyboard or pointing device, or even touch screen interactions.

Tangibility involves all the senses, not just touch. Tangibility considers all the ways that objects make their presence known to us, and involves all senses. A screen is not “intangible” simply because it is composed of pixels. A pixel is merely a colored speck on a screen, which I perceive when its photons reach my eye. Pixels are physical, and exist with us in the real world.

Likewise, a keyboard or mouse is not an intangible interaction simply because it doesn’t afford direct manipulation. I believe the wall that has been erected between historic interactions (such as the keyboard and mouse) and tangible interactions (such as the wonderful Siftables project) is false, and has damaged the agenda of tangible interaction as a whole. These interactions exist on a continuum, not between tangible and intangible, but between richly physical and physically impoverished. A mouse doesn’t allow for a whole lot of nuance of motion or pressure, and a glass touch screen doesn’t richly engage our sense of touch, but they are both necessarily physical interactions. There is an opportunity to improve the tangible nature of all interactions, but it will not happen by categorically rejecting our interactive history on the grounds that it is not tangible.

Everything is physical. There is no such thing as the virtual world, and there is no such thing as a digital interaction. Ishii and Ullmer, in the Tangible Media Group at the MIT Media Lab, have done extensive work on tangible interactions, characterizing them as physical manifestations of digital information. “Tangible Bits,” the title of their seminal work, largely summarizes this view. Repeatedly in their work, they set up a dichotomy between atoms and bits, physical and digital, real and virtual.

The trouble is, all information that we interact with, no matter if it is in the world or stored as ones and zeroes on a hard drive, shows itself to us in a physical way. I read your text message as a series of Latin characters rendered by physical pixels that emit physical photons from the screen on my mobile device. I perceive your avatar in Second Life in a similar manner. I hear a song on my iPod because the digital information of the file is decoded by the software, which causes the thin membrane in my headphones to vibrate at a particular frequency. Even if I dive deep and study the ones and zeroes that comprise that audio file, I’m still seeing them represented as characters on a screen.

All information, in order to be perceived, must be rendered in some sort of medium. Thus, we can never interact with information directly, and all our interactions are necessarily mediated. As with the supposed wall between tangible interactions and the interactions that preceded them, the wall between physical and digital, or real and virtual, is equally false. We never see nor interact with digital information, only the physical representation of it. We cannot interact with bits, only atoms. We do not and cannot exist in a virtual world, only the real one.

This is not to say that talking with someone in-person is the same as video chatting with them, or talking on the phone, or text messaging back and forth. Each of these interactions is very different based on the type and quality of information you can throw back and forth. It is, however, to illustrate that there isn’t necessarily any difference between a physical interaction and a supposed virtual one.

Thus, what Ishii and Ullmer propose, communicating digital information by embodying it in ambient sounds or water ripples or puffs of air, is no different than communicating it through pixels on a screen. What’s more, these “virtual” experiences we have, the “virtual” friendships we form, the “virtual” worlds we live in, are no different than the physical world, because they are all necessarily revealed to us in the physical world. The limitations of existing computational media may not allow interactions as high-bandwidth as face-to-face conversation (think of how much we communicate through subtle facial expressions and body language), but the fact that these interactions are happening through a screen, rather than at a coffee shop, does not make them virtual. It may, however, make them an impoverished physical interaction, as they do not engage our wide array of senses as a fully in-the-world interaction does.

Again, the dichotomy between real and virtual is false. The dichotomy between physical and digital is false. What we have is a continuum between physically rich and physically impoverished. It is nonsense to speak of digital interactions, or virtual interactions. All interactions are necessarily physical, are mediated by our bodies, and are therefore embodied.

The traditional compartmentalization of senses is a false one. In confining tangible interactions to touch, we ignore how our senses work together to help us interpret the world and make sense of it. The disembodiment of sensory inputs from one another is a byproduct of the compartmentalization of computational output (visual feedback from a screen rendered independently from audio feedback from a speaker, for instance) that contradicts our felt experience with the physical world. “See with your ears” and “hear with your eyes” are not simply convenient metaphors, but describe how our senses work in concert with one another to aid perception and interpretation.

Humans have more than five senses. Our experience with everything is situated in our sense of time. We have a sense of balance, and our sense of proprioception tells us where our limbs are situated in space. We have a sense of temperature and a sense of pain that are related to, but quite independent from, our sense of touch. Indeed, how can a loud sound “hurt” our ears if our sense of pain is tied to touch alone? Further, some animals can see in a wider color spectrum than humans, can sense magnetic or electrical fields, or can detect minute changes in air pressure. If computing somehow made these senses available to humans, how would that change our behavior?

My goal in breaking open these senses is not to arrive at a scientific account of how the brain processes sensory input, but to establish a more complete subjective, phenomenological account that offers a deeper understanding of how the phenomena of experience are revealed to human consciousness. I aim to render explicit the tacit assumptions that we make in our designs as to how they engage the senses, and uncover new design opportunities by mashing them together in unexpected ways.

Embodied Interaction: A Core Principle for Designing the Next Generation of Computing

By transcending the senses and considering the overall experience of our designs in a deeper, more reflective manner, we as interaction designers will be empowered to create more engaging, more fulfilling interactions. By considering the embodied nature of understanding, and how the human body plays a role in mediating interaction, we will be better prepared to design the systems and products for the post-desktop era.

Introducing the Hans and Umbach Project

The Hans and Umbach Electro-Mechanical Computing Company

Last summer I began thinking about something that I referred to as “analog interactions”, those natural, in-the-world interactions we have with real, physical artifacts. My interest arose in response to a number of stimuli, one of which is the current trend towards smooth, glasslike capacitive touch screen devices. From iPhones to Droids to Nexus Ones to Mighty Mice to Joojoos to anticipated Apple tablets, there seems to be a strong interest in eliminating the actual “touch” from our interactions with computational devices.

Glass capacitive touch screens allow for incredible flexibility in the display of and interaction with information. This is clearly demonstrated by the iPhone and iPod Touch, where software alone can change the keyboard configuration from letters to numbers to numeric keypads to different languages entirely.

A physical keyboard that needed to make the same adaptations would be quite a feat, and while the Optimus Maximus is an expensive step towards allowing such configurability in the display of keys, its buttons do not move, change shape or otherwise physically alter themselves in a manner similar to these touch screen keys. Chris Harrison and Scott Hudson, two PhD students at CMU, built a touch screen that uses small air chambers that allow it to feature physical (yet dynamically configurable) buttons.

From a convenience standpoint, capacitive touch screens make a lot of sense, in their ability to shrink input and output into one tiny package. Their form factor allows incredible latitude in using software to finely tune their interactions for particular applications. However, humans are creatures of a physical world that have an incredible capacity to sense, touch and interpret their surroundings. Our bodies have these well-developed skills that help us function as beings in the world, and I feel that capacitive touch screens, with their cold and static glass surfaces, insult the nuanced capabilities of the human senses.

Looking back, in an effort to look forward.

Musée Mécanique

Much of this coalesced in my mind during my summer in San Francisco, and specifically in my frequent trips to the Musée Mécanique. Thanks to its brilliant collection of turn-of-the-century penny arcade machines and automated musical instruments, I was continually impressed by the rich experiential qualities of these historic, pre-computational devices. From their lavish ornamentation to the deep stained woodgrain of their cabinets, from the way a sculpted metal handle feels in the hand to the smell of electricity in the air, the machines at the Musée Mécanique do an incredible job of engaging all the senses and offering a uniquely physical experience despite their primitive computational insides.

Off the Desktop and Into the World

It’s clear from the trajectory of computing that our points of interaction with computer systems are going to become increasingly delocalized, mobile and dispersed throughout our environment. While I am not yet ready to predict the demise of computing on the desktop (whether via desktop or laptop computers), it is clear that our future interactions with computing are going to take place off the desktop, and out in the world with us. Indeed, I wrote about this on the Adaptive Path weblog while working there for the summer. These interactions may supplement, rather than supplant, our usual eight-hour days in front of the glowing rectangle. This growing share of time spent interacting with computing, through any number of forms and methods, makes it all the more important that we consider the nature of these interactions, and deliberately model them in ways that leverage our natural human abilities.

One model that can offer guidance in the design of these in-the-world computing interactions is the notion of embodiment, which, as Paul Dourish describes it, names the common way in which we encounter physical reality in the everyday world. We deal with objects in the world (we see, touch and hear them) in real time and in real space. Embodiment is the property of our engagement with the world that allows us to interpret and make meaning of it, and of the objects that we encounter in it. The physical world is the site and the setting for all human activity, and all theory, action and meaning arise out of our embodied engagement with the world.

From embodiment we can derive the idea of embodied interaction, which Dourish describes as the creation, manipulation and sharing of meaning through our engaged interaction with artifacts. Rather than situating meaning in the mind, as typical models of cognition do, embodied interaction posits that meaning arises out of our inescapable being-in-the-world. Our minds are necessarily situated in our bodies, and thus our bodies, our own embodiment in the world, play a strong role in how we think about, interpret, understand and make meaning of the world. Theories of embodied interaction therefore respect the human body as a source of information about the world, and take the user’s own embodiment into account as a resource when designing interactions.

Exploring Embodied Interaction and Physical Computing

And so, this semester I am pursuing an independent study into theories of embodied interaction, and practical applications of physical computing. For the sake of fun I am conducting this project under the guise of the Hans and Umbach Electro-Mechanical Computing Company, which is not actually a company, nor does it employ anyone by the name of Hans or Umbach.

In this line of inquiry I hope to untangle what it means when computing exists not just on a screen or on a desk, but is embedded in the space around us. I aim to explore the naturalness of in-the-world interactions, actions and behaviors that humans engage in every day without thinking, and how these can be leveraged to inform computer-augmented interactions that are more natural and intuitive. I am interested in exploring the boundary between the real/analog world (the physical world of time, space and objects in which we exist) and the virtual/digital world (the virtual world of digital information that effectively exists outside of physical space), and how this boundary is constructed and navigated.

Is it a false boundary, because the supposed “virtual” world can only be revealed to us by manipulating pixels or other artifacts in the “real” world? Is it a boundary that can be described in terms of the aesthetics of the experience with analog/digital artifacts, such as a note written on paper versus pixels representing words on a screen? Is it determined by the means of production, such as a laser-printed letter versus a typewriter-written letter on handmade paper? Is a handwritten letter more “analog” than an identical-looking letter printed off a high-quality printer? These are all questions I hope to address.

Interfacing Between the Digital and Analog

Paulo's Little Gadget by Han

I aim to explore these questions by learning physical computing, and the Arduino platform in particular, as a mechanism for bridging the gap between digital information and analog artifacts. Electronics is something that is quite unfamiliar to me, and so I hope that this can be an opportunity to reflect on my own experience of learning something new. Given my experience as a web developer and my knowledge of programming, I find electronics to be a particularly interesting interface, because it seems to be a physical manifestation of the programmatic logic that I have only engaged with in a virtual manner. I have coded content management systems for websites, but I have not coded something that takes up physical space and directly influences artifacts in the physical world.

Within the coding metaphor of electronics, too, there are two separate-but-related manifestations. The first is the raw “coding” of circuits, with resistors and transistors and the like, to achieve a certain result. The second is coding in the Arduino language, a dialect of C/C++ (written in an IDE derived from Processing), which I type in a text editor and upload to the Arduino board to make it work its magic. The Arduino platform is an incredibly useful tool for physical computing that I hope to learn more about in the coming semester, but it does put a layer of mysticism between one and one’s understanding of electronics. Thus, in concert with my experiments with Arduino I will be working through Charles Platt’s excellent Make: Electronics: Learning by Discovery, which takes you from zero to hero in regards to electronics. And really, I know a bit already, but I am quite a zero at this point.

In Summary

Over the next few months I aim to study notions of embodiment, and embodied interaction in particular, in the context of learning and working with physical computing. As computing continues its delocalization and migration into our environment, it is important that existing interaction paradigms be challenged based on their appropriateness for new and different interactive contexts. The future of computing need not resemble the input and output devices that we currently associate with computers, despite the recognizable evolution of the capacitive touch screen paradigm. By deliberately designing for the embodied nature of human experience, we can create new interactive models that result in naturally rich, compelling and intuitive systems.

Welcome to the Hans and Umbach Electro-Mechanical Computing Company. It’s clearly going to be a busy, ambitious, somewhat dizzying semester.

Quoth Heidegger

There are two kinds of people in the world. Those who engage in hermeneutical phenomenology, those who engage in phenomenological hermeneutics, and those who get beat up after philosophy class for being good at math.

In the context of how analog interactions tie into human experience and perception, Jared just pointed me to Henri Bergson’s Time and Free Will: An Essay on the Immediate Data of Consciousness.

All I can say is, holy shit. My long-dormant love for philosophy just rocketed so hard to the front of my consciousness that it threatens to break through my forehead.

Things are about to get interesting.