
Splitting Subversion into Multiple Git Repositories

For the last three years I’ve been maintaining all my projects and websites, including Daneomatic and Brainside Out, as well as I’ve Been To Duluth and Terra and Rosco and Siskiwit, in a single Subversion repository. At any given time I find myself tinkering with a number of different projects, and honestly it keeps me awake at night if I’m not tracking that work in some form of version control. Given the number of projects I work on, and my tendency to abandon them and start new ones, I didn’t feel it necessary to maintain a separate repository for each individual project.

Subversion is, frankly, kind of a stupid versioning system, which actually works in favor of someone wanting to manage multiple projects in a single repository. Since it’s easy to check out individual folders, rather than the entire repository itself, all you need to do is create a unique folder for each individual project. Unlike in Git, trunk, tags and branches are just folders in Subversion, so you can easily compartmentalize projects using a folder hierarchy.

This approach creates a terribly twisted and intertwined history of commits, with each project wrapped around the other. My goal, however, was not necessarily good version control, but any version control at all. Like living, keeping multiple projects in the same repo beats the alternative.

The folder hierarchy of my Subversion repository looks like this. Each project has its own folder:
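
Something along these lines, at least, with the folder names here standing in for the projects I listed above:

[prompt]$ svn ls http://brainsideout.svn.beanstalkapp.com/brainsideout/
brainsideout/
daneomatic/
duluth/
rosco/
siskiwit/
terra/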

Within each project is the standard folder structure for branches, tags and trunk:
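
For the Brainside Out project, for instance, that amounts to:

[prompt]$ svn ls http://brainsideout.svn.beanstalkapp.com/brainsideout/brainsideout/
branches/
tags/
trunk/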

In the trunk folder is the file structure of the project itself. Here’s the trunk for one of my CodeIgniter projects:
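
If we pretend, purely for illustration, that Brainside Out is the CodeIgniter project in question, its trunk would list something like a stock CodeIgniter install:

[prompt]$ svn ls http://brainsideout.svn.beanstalkapp.com/brainsideout/brainsideout/trunk/
application/
index.php
system/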

While it’s generally bad practice to keep multiple projects in the same repository in Subversion, near as I can tell it’s truly a recipe for disaster in Git. Git is real smart about a lot of things, including tagging and branching and fundamentally offering a distributed version control system (read: a local copy of your entire revision history), but that smartness will make your brain ache if you try to independently maintain multiple projects in the same repository on your local machine.

And so it came to pass that I wanted to convert my single Subversion repository into eight separate Git repositories, one for each of the projects I had been tracking. There are many wonderful tutorials available for handling the generic conversion of a Subversion repo to Git, but I found none that outlined how to manage this split.

I hope to shed some light on this process. These instructions are heavily influenced by Paul Dowman’s excellent post on the same subject, with the extra twist of splitting a single Subversion repository into multiple Git repositories. I would highly recommend you read through his instructions as well.

First things first: Install and configure Git.

First, I installed Git. I’m on OS X, and while I’m sure you can do this on Windows, I haven’t the foggiest how you would go about it.

After installing Git I had to do some initial global configuration, setting up my name and address and such. There are other tutorials that tell you how to do that, but ultimately it’s two commands in Terminal:

[prompt]$ git config --global user.name "Your Name"
[prompt]$ git config --global user.email [email protected]

Also, I needed to set up SSH keys between my local machine and a remote server, as the ultimate goal of this undertaking was to push my Git repositories to the cloud. I have an account at Beanstalk that allows me to host multiple repositories, and they have a great tutorial on setting up SSH keys in their environment. GitHub has a helpful tutorial on SSH keys as well.

Give yourself some space.

Next, I created a folder where I was going to do my business. I called it git_convert:
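
In Terminal, that amounts to:

[prompt]$ mkdir git_convert
[prompt]$ cd git_convert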

Then, I created a file in git_convert called authors.txt, which maps each user from my Subversion repository onto a full name and email address for my forthcoming Git repositories. My authors.txt file is super basic, as I’m the only dude who’s been rooting around in this repository. All it contains is this single line of text:

dane = Dane Petersen <[email protected]>

Now crank that Soulja Boy!

Now comes the good stuff. The git svn command will grab a remote Subversion repository, and convert it to a Git repository in a specified folder on my local machine. Paul Dowman’s tutorial is super handy, but it took some experimentation before I discovered that git svn works not only for an entire repository, but for its subfolders as well. All I needed to do was append the path for the corresponding project to the URL for the repository itself.

What’s awesome, too, is that if you convert a subfolder of your Subversion repository to Git, git svn will leave all the other cruft behind, and will convert only the files and commits that are relevant for that particular folder. So, if you have a 100 MB repository that you’re converting to eight Git repositories, you’re not going to end up with 800 MB worth of redundant garbage. Sick, bro!

After firing up Terminal and navigating to my git_convert directory, I used the following command to clone a subfolder of my remote Subversion repository into a new local Git repository:

[prompt]$ git svn clone http://brainsideout.svn.beanstalkapp.com/brainsideout/brainsideout --no-metadata -A authors.txt -t tags -b branches -T trunk git_brainsideout

After some churning, that created a new folder called ‘git_brainsideout’ in my git_convert folder:
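
That is, something along these lines:

[prompt]$ ls
authors.txt       git_brainsideout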

That folder’s contents are an exact copy of the corresponding project’s trunk folder of my remote Subversion repository:
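
Sticking with the illustrative assumption that Brainside Out is the CodeIgniter project from earlier, that works out to something like:

[prompt]$ ls git_brainsideout
application       index.php       system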

You’ll notice that the trunk, tags and branches folders have all disappeared. That’s because my git svn command mapped them to their appropriate places within Git, and also because Git is awesomely smart in how it handles tags and branches. Dowman has some additional commands you may want to consider for cleaning up after your tags and branches, but this is all it took for me to get up and running.
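
For reference, the cleanup he describes runs something along these lines, assuming git svn parked your old Subversion tags and branches as remote refs (under refs/remotes/tags and refs/remotes, respectively):

[prompt]$ git for-each-ref refs/remotes/tags | cut -d / -f 4- | grep -v @ | while read tagname; do git tag "$tagname" "tags/$tagname"; git branch -r -d "tags/$tagname"; done
[prompt]$ git for-each-ref refs/remotes | cut -d / -f 3- | grep -v @ | while read branchname; do git branch "$branchname" "refs/remotes/$branchname"; git branch -r -d "$branchname"; done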

Using git svn in the above manner, I eventually converted all my Subversion projects into separate local Git repositories:
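
By the end, my git_convert folder held something like the following, the names once again standing in for my actual projects:

[prompt]$ ls
authors.txt       git_brainsideout  git_daneomatic    git_duluth
git_rosco         git_siskiwit      git_terra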

Again, the trunk, tags and branches folders are gone, mapped and replaced by the invisibly magic files of Git:
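
You have to ask ls for hidden files before any of Git’s bookkeeping even shows up:

[prompt]$ ls -a git_brainsideout
.       ..      .git    application     index.php       system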

Push your efforts into the cloud.

I had a few empty remote Git repositories at Beanstalk where I wanted all my hard work to live, so my last step was pushing my local repositories up to my Git server. First, I navigated into the desired local Git repository, and set up a name for the remote repository using the remote command:

[prompt]$ git remote add beanstalk [email protected]:/brainsideout.git

I had previously set up my SSH keys, so it was easy to push my local repository to the remote location:

[prompt]$ git push beanstalk master

Bam. Dead. Done. So long, Subversion!

For more information on how to get rollin’ with Git, check out the official git/svn crash course, or Git for the lazy.

Happy Gitting!

Hans and Umbach: Establishing a Language of Embodied Interaction for Design Practitioners

My work with Hans and Umbach on physical computing and embodied interaction took an interesting turn recently, down a path I hadn’t anticipated when I set out to pursue this project. My initial goal with this independent study was to develop the skills necessary to work with electronics and physical computing as a prototyping medium. In recent years, hardware platforms such as Arduino and programming environments such as Wiring have clearly lowered the barrier to entry for getting involved in physical computing, and have allowed even the electronics layman to build some super cool stuff.

At Interaction 10, Rob Nero presented his TRKBRD prototype, an infrared touchpad built with Arduino that turns the entire surface of one’s laptop keyboard into a device-free pointing surface. Chris Rojas built an Arduino tank that can be controlled remotely through an iPhone application called TouchOSC. What’s super awesome is that most everyone building this stuff is happy to share their source code, and contribute their discoveries back to the community. The forums on the Arduino website are on fire with helpful tips, and it seems an answer to any technical question is only a Google search away. SparkFun has done tremendous work in making electronics more user-friendly and approachable, offering suggested uses, tutorials and data sheets right alongside the components they sell.

Dourish and Embodied Interaction: Uniting Tangible Computing and Social Computing

In tandem with my continuing education with electronics, I’ve been doing extensive research into embodied interaction, an emerging area of study in HCI that considers how our engagement, perception, and situatedness in the world influences how we interact with computational artifacts. Embodiment is closely related to a philosophical interest of mine, phenomenology, which studies the phenomena of experience and how reality is revealed to, and interpreted by, human consciousness. Phenomenology brackets off the external world and isn’t concerned with establishing a scientifically objective understanding of reality, but rather looks at how reality is experienced through consciousness.

Paul Dourish outlines a notion of embodied interaction in his landmark work, “Where The Action Is: The Foundations of Embodied Interaction.” In Chapter Four he iterates through a few definitions of embodiment, starting with what he characterizes as a rather naive one:

“Embodiment 1. Embodiment means possessing and acting through a physical manifestation in the world.”

He takes issue with this definition, however, as it places too high a priority on physical presence, and proposes a second iteration:

“Embodiment 2. Embodied phenomena are those that by their very nature occur in real time and real space.”

Indeed, in this definition embodiment is concerned more with participation than physical presence. Dourish uses the example of conversation, which is characterized by minute gestures and movements that hold no objective meaning independent of human interpretation. In “Technology as Experience” McCarthy and Wright use the example of a wink versus a blink. While closing and opening one’s eye is an objective natural phenomenon that exists in the world, the meaning behind a wink is more complicated; there are issues of intent on the part of the “winker”, whether they mean the wink to signal flirtation or collusion, or whether they simply had a speck of dirt in their eye. There are also issues of interpretation on the part of the “winkee”: whether they perceive the wink, how they interpret the wink, and whether or not they interpret it as intended by the “winker.”

Thus, Dourish’s second iteration on embodiment deemphasizes physical presence while allowing for these subjective elements that do not exist independent of human consciousness. A wink cannot exist independent of real time and real space, but its meaning involves more than just its physicality. Edmund Husserl originally proposed phenomenology in the early 20th century, but it was his student Martin Heidegger who carried it forward into the realm of interpretation. Hermeneutics is an area of study concerned with the theory of interpretation, and thus Heidegger’s hermeneutical phenomenology (or the study of experience and how it is interpreted by consciousness) has become the foundation of much recent phenomenological theory.

Beyond Heidegger, Dourish takes us through Alfred Schutz, who brought intersubjectivity and the social world into phenomenology, and Maurice Merleau-Ponty, who put the human body at the center of phenomenology by introducing the embodied nature of perception. In wrapping up, Dourish presents a third definition of embodiment:

“Embodiment 3. Embodied phenomena are those which by their very nature occur in real time and real space. … Embodiment is the property of our engagement with the world that allows us to make it meaningful.”

Thus, Dourish says:

“Embodied interaction is the creation, manipulation, and sharing of meaning through engaged interaction with artifacts.”

Dourish’s thesis behind “Where The Action Is” is that tangible computing (computer interactions that happen in the world, through the direct manipulation of physical artifacts) and social computing (computer-augmented interaction that involves the continual navigation and reconfiguration of social space) are two sides of the same coin; namely, that of embodied interaction. Just as tangible interactions are necessarily embedded in real space and real time, social interaction is embedded as an active, practical accomplishment between individuals.

According to Dourish, embodied computing is a larger frame that encompasses tangible computing and social computing. This is a significant observation, and “Where The Action Is” is a landmark achievement. But, as Dourish himself admits, there isn’t a whole lot new here. He connects the dots between two seemingly unrelated areas of HCI theory, unifies them under the umbrella term embodied interaction, and leaves it to us to work it out from there.

And I’m not so sure that’s happened. “Where The Action Is” came out nine years ago, and based on the papers I’ve read on embodied interaction, few have attempted to extend the definition beyond Dourish’s work. While I wouldn’t describe his book as inadequate, I would certainly characterize it as a starting point, a significant one at that, for extending our thoughts on computing into the embodied, physical world.

From Physical Computing to Notions of Embodiment

For the last two months I have been researching theories on embodiment, teaching myself physical computing, and reflecting deeply on my experience of learning the arcane language of electronics. Even with all the brilliantly-written books and well-documented tutorials in the world, I find that learning electronics is hard. It frequently violates my common-sense experience with the world, and authors often use familiar metaphors to compensate for this. Indeed, electricity is like water, except when it’s not, and it flows, except when it doesn’t.

In reading my reflections I can trace the evolution of how I’ve been thinking about electronics: how I discover new metaphors that more closely describe my experiences, reject old metaphors, and grow increasingly doubtful that this is a domain of expertise I can master in three months. What is interesting is not that I was wrong in my conceptualizations of how electronics work, however, but how I was wrong and how I found myself compensating for it.

Hans and Umbach: Arduino, 8-bit shift register, 7-segment display

While working with a seven-segment display, for instance, I could not figure out which segmented LED mapped to which pin. As I slowly began to figure this out, the mapping did not seem to follow any recognizable pattern, and certainly did not adhere to my expectations. I thought the designers of the display must have had deliberately sinister motives, given how effectively their product violated any sort of common-sense interpretation.

To compensate, I drew up my own spatial map, both on paper as well as in my mind, to establish a personal pattern where no external pattern was immediately perceived. “The pin in the upper lefthand corner starts on the middle, center segment,” I told myself, “and spirals out clockwise from there, clockwise for both the segments as well as the pins, skipping the middle-pin common anodes, with the decimal seated awkwardly between the rightmost top and bottom segments.”

It was this personal spatial reasoning, this establishment of my own pattern language to describe how the seven-segment display worked, that made me realize how strongly my own embodied experience determines how I perceive, interact with, and make sense of the world. So long as a micro-controller has been programmed correctly, it doesn’t care which pin maps to which segment. But for me, a bumbling human who is poor at numbers but excels at language, socialization and spatial reasoning, you know, those things that humans are naturally good at, I needed some sort of support mechanism. And that mechanism arose out of my own embodied experience as a real physical being with certain capabilities for navigating and making sense of a real physical world.

Over time this realization, that I am constantly leveraging my own embodiment as a tool to interpret the world, dwarfed the interest I had in learning electronics. I’m still trying to figure out how to get an 8×8 64-LED matrix to interface with an Arduino through a series of 74HC595N 8-bit shift registers, so I can eventually make it play Pong with a Wii Nunchuk. That said, it’s frustrating that every time I try to do something, the chip I have is not the chip I need, and the chip I need is $10 plus $5 shipping and will arrive in a week, and by the way have I thought about how to send constant current to all the LEDs so they’re all of similar brightness because my segmented number “8” is way dimmer than my segmented number “1” because of all the LEDs that need to light up, and oh yeah, there’s an app for that.

Sigh.

Especially when I’m trying to play Pong on my 8×8 LED matrix, while someone else is already playing Super Mario Bros. on hers.

Extending Notions of Embodiment into Design Practice

In accordance with Merleau-Ponty and his work introducing the human body to phenomenology, and the work of Lakoff and Johnson in extending our notions of embodied cognition, I believe that the human body itself is central to structuring the way we perceive, interact with, and make sense of the world. Thus, I aim to take up the challenge issued by Dourish, and extend our notions of embodiment as they apply to the design of computational interactions. The goal of my work is to establish a language of embodied interaction that will help design practitioners create more compelling, more engaging, more natural interactions.

Considering physical space and the human body is an enormous topic in interaction design. In a panel at SXSW Interactive last week, Peter Merholz, Michele Perras, David Merrill, Johnny Lee and Nathan Moody discussed computing beyond the desktop as a new interaction paradigm, and Ron Goldin from Lunar discussed touchless invisible interactions in a separate presentation. At Interaction 10, Kendra Shimmell demonstrated her work with environments and movement-based interactions, Matt Cottam presented his considerable work integrating computing technologies with the richly tactile qualities of wood, and Christopher Fahey even gave a shout-out specifically to “Where The Action Is” in his talk on designing the human interface (slide 50 in the deck). The migration of computing off the desktop and into the space of our everyday lives seems only to be accelerating, to the point where Ben Fullerton proposed at Interaction 10 that we as interaction designers need to begin designing not just for connectivity and ubiquity, but for solitude and opportunities to actually disconnect from the world.

Establishing a Language of Embodied Interaction for Design Practitioners

To recap, my goal is to establish a language of embodied interaction that helps designers navigate this increasing delocalization and miniaturization of computing. I don’t know yet what this language will look like, but a few guiding principles seem to be emerging from my work:

All interactions are tangible. There is no such thing as an intangible interaction. I reject the notion that tangible interaction, the direct manipulation of physical representations of digital information, is significantly different from manipulating pixels on a screen, interactions that involve a keyboard or pointing device, or even touch screen interactions.

Tangibility involves all the senses, not just touch. Tangibility considers all the ways that objects make their presence known to us, and involves all senses. A screen is not “intangible” simply because it is composed of pixels. A pixel is merely a colored speck on a screen, which I perceive when its photons reach my eye. Pixels are physical, and exist with us in the real world.

Likewise, a keyboard or mouse is not an intangible interaction simply because it doesn’t afford direct manipulation. I believe the wall that has been erected between historic interactions (such as the keyboard and mouse) and tangible interactions (such as the wonderful Siftables project) is false, and has damaged the agenda of tangible interaction as a whole. These interactions exist on a continuum, not between tangible and intangible, but between richly physical and physically impoverished. A mouse doesn’t allow for a whole lot of nuance of motion or pressure, and a glass touch screen doesn’t richly engage our sense of touch, but they are both necessarily physical interactions. There is an opportunity to improve the tangible nature of all interactions, but it will not happen by categorically rejecting our interactive history on the grounds that they are not tangible.

Everything is physical. There is no such thing as the virtual world, and there is no such thing as a digital interaction. Ishii and Ullmer (PDF link) in the Tangible Media Group at the MIT Media Lab have done extensive work on tangible interactions, characterizing them as physical manifestations of digital information. “Tangible Bits,” the title of their seminal work, largely summarizes this view. Repeatedly in their work, they set up a dichotomy between atoms and bits, physical and digital, real and virtual.

The trouble is, all information that we interact with, no matter if it is in the world or stored as ones and zeroes on a hard drive, shows itself to us in a physical way. I read your text message as a series of Latin characters rendered by physical pixels that emit physical photons from the screen on my mobile device. I perceive your avatar in Second Life in a similar manner. I hear a song on my iPod because the digital information of the file is decoded by the software, which causes the thin membrane in my headphones to vibrate at a particular frequency. Even if I dive deep and study the ones and zeroes that comprise that audio file, I’m still seeing them represented as characters on a screen.

All information, in order to be perceived, must be rendered in some sort of medium. Thus, we can never interact with information directly, and all our interactions are necessarily mediated. As with the supposed wall between tangible interactions and the interactions that preceded them, the wall between physical and digital, or real and virtual, is equally false. We never see nor interact with digital information, only the physical representation of it. We cannot interact with bits, only atoms. We do not and cannot exist in a virtual world, only the real one.

This is not to say that talking with someone in-person is the same as video chatting with them, or talking on the phone, or text messaging back and forth. Each of these interactions is very different based on the type and quality of information you can throw back and forth. It is, however, to illustrate that there isn’t necessarily any difference between a physical interaction and a supposed virtual one.

Thus, what Ishii and Ullmer propose, communicating digital information by embodying it in ambient sounds or water ripples or puffs of air, is no different than communicating it through pixels on a screen. What’s more, these “virtual” experiences we have, the “virtual” friendships we form, the “virtual” worlds we live in, are no different than the physical world, because they are all necessarily revealed to us in the physical world. The limitations of existing computational media may keep us from the kind of high-bandwidth exchange that face-to-face interaction affords (think of how much we communicate through subtle facial expressions and body language), but the fact that these interactions are happening through a screen, rather than at a coffee shop, does not make them virtual. It may, however, make them an impoverished physical interaction, as they do not engage our wide array of senses the way a fully in-the-world interaction does.

Again, the dichotomy between real and virtual is false. The dichotomy between physical and digital is false. What we have is a continuum between physically rich and physically impoverished. It is nonsense to speak of digital interactions, or virtual interactions. All interactions are necessarily physical, are mediated by our bodies, and are therefore embodied.

The traditional compartmentalization of senses is a false one. In confining tangible interactions to touch, we ignore how our senses work together to help us interpret the world and make sense of it. The disembodiment of sensory inputs from one another is a byproduct of the compartmentalization of computational output (visual feedback from a screen rendered independently from audio feedback from a speaker, for instance) that contradicts our felt experience with the physical world. “See with your ears” and “hear with your eyes” are not simply convenient metaphors, but describe how our senses work in concert with one another to aid perception and interpretation.

Humans have more than five senses. Our experience with everything is situated in our sense of time. We have a sense of balance, and our sense of proprioception tells us where our limbs are situated in space. We have a sense of temperature and a sense of pain that are related to, but quite independent from, our sense of touch. Indeed, how can a loud sound “hurt” our ears if our sense of pain is tied to touch alone? Further, some animals can see in a wider color spectrum than humans, can sense magnetic or electrical fields, or can detect minute changes in air pressure. If computing somehow made these senses available to humans, how would that change our behavior?

My goal in breaking open these senses is not to arrive at a scientific account of how the brain processes sensory input, but to establish a more complete subjective, phenomenological account that offers a deeper understanding of how the phenomena of experience are revealed to human consciousness. I aim to render explicit the tacit assumptions that we make in our designs as to how they engage the senses, and uncover new design opportunities by mashing them together in unexpected ways.

Embodied Interaction: A Core Principle for Designing the Next Generation of Computing

By transcending the senses and considering the overall experience of our designs in a deeper, more reflective manner, we as interaction designers will be empowered to create more engaging, more fulfilling interactions. By considering the embodied nature of understanding, and how the human body plays a role in mediating interaction, we will be better prepared to design the systems and products for the post-desktop era.

Your Workflow is the Battlefield

There’s been quite the wailing and gnashing of teeth over the Apple iPad not supporting Flash. Personally, I welcome this new landscape of the web, where a future without Flash seems not only bright but possible indeed.

That said, what is unfolding here is of considerable gravity, and will likely determine the future of the web. Most web professionals use Adobe tools in some capacity to do their job, whether Photoshop, Illustrator, Dreamweaver (gasp), Flash, Flex, Flash Catalyst, or even Fireworks (which is, according to many, the best wireframing tool on the market, despite its quirks and crash-prone behavior).

Now, I am not privy to inside information, but based on what I’ve been able to glean, Adobe’s strategy is something like this. There is a deliberate reason that your workflow as a standards-based web professional sucks; that Photoshop doesn’t behave the way you want it to, that exporting web images is still a pain in the ass, and that you actually need to fight the software to get it to do what you want.

Adobe knows how you use its software. Adobe knows how you want to use its software. Adobe understands your existing workflow.

And it doesn’t fucking care.

You see, Adobe doesn’t view you, as a web professional, as someone engaged in building websites. It doesn’t view itself as one who builds the tools to support you in your job. Adobe does not view you as the author of images and CSS and HTML and JavaScript that all magically come together to create a website, but rather as the author of what could potentially be Adobe Web Properties™.

They are not interested in supporting your workflow to create standards-based websites, because that is not in their strategic interest. They would much rather you consented to the cognitive model of Adobe Software™ to create proprietary Adobe Web Properties™ that render using Adobe Web Technologies™.

In essence, Adobe wants to be the gatekeeper for the production, as well as the consumption, of the web.

Apple knows this, and knows that the future of the web is mobile. Its actions are no less strategic than Adobe’s, and Apple has chosen a route that deliberately undermines Adobe’s strategy: a strategy for controlling not just the consumption of rich interactive experiences on the web, but their production as well.

From the production side, as far as Adobe is concerned, if you’re not building your websites in Flash Catalyst and exporting them as Flash files, you’re doing it wrong.

Your frustrations with Photoshop and Fireworks in not supporting the “real way” web professionals build standards-based websites are not by accident, but by design. Adobe views each website as a potential property whose look, feel and experience it can control. As these “experiences” become more sophisticated, so do the tools necessary to create them. Adobe wants to be in the business of selling the only tools that do the job, controlling your production from end to end, and then even controlling the publication of and access to your creation.

Apple’s own domination plans for the mobile web undermine all this.

And Adobe is pissed.

Hans and Umbach: Atoms Are the New Bits

The Hans and Umbach Electro-Mechanical Computing Company

Needless to say, Hans and Umbach are extremely excited about this new article in Wired magazine, which champions a trend of garage tinkerers and other DIYers acting in concert to bring the world its next generation of products. Just as the internet democratized digital publication, so will new prototyping technologies democratize physical production.

We’d better get crackin’.

Introducing the Hans and Umbach Project

The Hans and Umbach Electro-Mechanical Computing Company

Last summer I began thinking about something that I referred to as “analog interactions”, those natural, in-the-world interactions we have with real, physical artifacts. My interest arose in response to a number of stimuli, one of which is the current trend towards smooth, glasslike capacitive touch screen devices. From iPhones to Droids to Nexus Ones to Mighty Mice to Joojoos to anticipated Apple tablets, there seems to be a strong interest in eliminating the actual “touch” from our interactions with computational devices.

Glass capacitive touch screens allow for incredible flexibility in the display of and interaction with information. This is clearly demonstrated by the iPhone and iPod Touch, where software alone can change the keyboard configuration from letters to numbers to numeric keypads to different languages entirely.

A physical keyboard that needed to make the same adaptations would be quite a feat, and while the Optimus Maximus is an expensive step towards allowing such configurability in the display of keys, its buttons do not move, change shape or otherwise physically alter themselves in a manner similar to these touch screen keys. Chris Harrison and Scott Hudson at CMU built a touch screen that uses small air chambers to create physical (yet dynamically configurable) buttons.

From a convenience standpoint, capacitive touch screens make a lot of sense, in their ability to shrink input and output into one tiny package. Their form factor allows incredible latitude in using software to finely tune their interactions for particular applications. However, humans are creatures of a physical world that have an incredible capacity to sense, touch and interpret their surroundings. Our bodies have these well-developed skills that help us function as beings in the world, and I feel that capacitive touch screens, with their cold and static glass surfaces, insult the nuanced capabilities of the human senses.

Looking back, in an effort to look forward.

Musée Mécanique

Much of this coalesced in my mind during my summer in San Francisco, and specifically in my frequent trips to the Musée Mécanique. Thanks to its brilliant collection of turn-of-the-century penny arcade machines and automated musical instruments, I was continually impressed by the rich experiential qualities of these historic, pre-computational devices. From their lavish ornamentation to the deep stained woodgrain of their cabinets, from the way a sculpted metal handle feels in the hand to the smell of electricity in the air, the machines at the Musée Mécanique do an incredible job of engaging all the senses and offering a uniquely physical experience despite their primitive computational insides.

Off the Desktop and Into the World

It’s clear from the trajectory of computing that our points of interaction with computer systems are going to become increasingly delocalized, mobile and dispersed throughout our environment. While I am not yet ready to predict the demise of computing at the desk (whether on desktop or laptop machines), it is clear that our future interactions with computing are going to take place off the desktop, and out in the world with us. Indeed, I wrote about this on the Adaptive Path weblog while working there for the summer. These interactions may well supplement, rather than supplant, our usual eight-hour days in front of the glowing rectangle. This growing share of time spent interacting with computing, in whatever forms it takes, makes it all the more important that we consider the nature of these interactions, and deliberately model them in ways that leverage our natural human abilities.

Embodiment

One model that can offer guidance in the design of these in-the-world computing interactions is the notion of embodiment, which, as Paul Dourish describes it, is the common way in which we encounter physical reality in the everyday world. We deal with objects in the world (we see, touch and hear them) in real time and in real space. Embodiment is the property of our engagement with the world that allows us to interpret and make meaning of it, and of the objects that we encounter in it. The physical world is the site and the setting for all human activity, and all theory, action and meaning arises out of our embodied engagement with the world.

From embodiment we can derive the idea of embodied interaction, which Dourish describes as the creation, manipulation and sharing of meaning through our engaged interaction with artifacts. Rather than situating meaning in the mind through typical models of cognition, embodied interaction posits that meaning arises out of our inescapable being-in-the-world. Our minds are necessarily situated in our bodies, and thus our bodies, our own embodiment in the world, play a strong role in how we think about, interpret, understand, and make meaning of the world. Thus, theories of embodied interaction respect the human body as a source of information about the world, and take into account the user’s own embodiment as a resource when designing interactions.

Exploring Embodied Interaction and Physical Computing

And so, this semester I am pursuing an independent study into theories of embodied interaction, and practical applications of physical computing. For the sake of fun I am conducting this project under the guise of the Hans and Umbach Electro-Mechanical Computing Company, which is not actually a company, nor does it employ anyone by the name of Hans or Umbach.

In this line of inquiry I hope to untangle what it means when computing exists not just on a screen or on a desk, but is embedded in the space around us. I aim to explore the naturalness of in-the-world interactions, actions and behaviors that humans engage in every day without thinking, and how these can be leveraged to inform computer-augmented interactions that are more natural and intuitive. I am interested in exploring the boundary between the real/analog world (the physical world of time, space and objects in which we exist) and the virtual/digital world (the virtual world of digital information that effectively exists outside of physical space), and how this boundary is constructed and navigated.

Is it a false boundary, because the supposed “virtual” world can only be revealed to us by manipulating pixels or other artifacts in the “real” world? Is it a boundary that can be described in terms of the aesthetics of the experience with analog/digital artifacts, such as a note written on paper versus pixels representing words on a screen? Is it determined by the means of production, such as a laser-printed letter versus a typewriter-written letter on handmade paper? Is a handwritten letter more “analog” than an identical-looking letter printed off a high-quality printer? These are all questions I hope to address.

Interfacing Between the Digital and Analog

Paulo's Little Gadget by Han

I aim to explore these questions by learning physical computing, and the Arduino platform in particular, as a mechanism for bridging the gap between digital information and analog artifacts. Electronics is something that is quite unfamiliar to me, and so I hope that this can be an opportunity to reflect on my own experience of learning something new. Given my experience as a web developer and my knowledge of programming, I find electronics to be a particularly interesting interface, because it seems to be a physical manifestation of the programmatic logic that I have only engaged with in a virtual manner. I have coded content management systems for websites, but I have not coded something that takes up physical space and directly influences artifacts in the physical world.

Within the coding metaphor of electronics, too, there are two separate-but-related manifestations. The first is the raw “coding” of circuits, with resistors and transistors and the like, to achieve a certain result. The second is the code itself, written in the Arduino language (a C-like environment built on Wiring, with an IDE derived from Processing), which I compose in a text editor and upload to the Arduino board to make it work its magic. Indeed, the Arduino platform is an incredibly useful tool for physical computing that I hope to learn more about in the coming semester, but it does put a layer of mysticism between one and one’s understanding of electronics. Thus, in concert with my experiments with Arduino I will be working through the incredible Make: Electronics: Learning by Discovery book, which takes you from zero to hero in regards to electronics. And really, I know a bit already, but I am quite a zero at this point.

In Summary

Over the next few months I aim to study notions of embodiment, and embodied interaction in particular, in the context of learning and working with physical computing. As computing continues its delocalization and migration into our environment, it is important that existing interaction paradigms be challenged based on their appropriateness for new and different interactive contexts. The future of computing need not resemble the input and output devices that we currently associate with computers, despite the recognizable evolution of the capacitive touch screen paradigm. By deliberately designing for the embodied nature of human experience, we can create new interactive models that result in naturally rich, compelling and intuitive systems.

Welcome to the Hans and Umbach Electro-Mechanical Computing Company. It’s clearly going to be a busy, ambitious, somewhat dizzying semester.

Slippery Slope

Tonight, Kate and I are playing a game. It is called the Don’t Drink the Bacon Grease game. The first person to drink the bacon grease loses the game.

So far we’re both winning.

But I think Kate might be pulling ahead.

Quoth Heidegger

There are two kinds of people in the world. Those who engage in hermeneutical phenomenology, those who engage in phenomenological hermeneutics, and those who get beat up after philosophy class for being good at math.

Directions to Mark’s Work

Out to vine. Cross vine, straight to welding. Right side of welding. Two-track entry to nature. Entry behind. Natural area. Straight. Between ponds. Fenced-in area. Right of chain link fence. Straight. Little hill. Up it. On top of berm along river. Left on dirt single track, fifty feet to railroad tracks. Left to get on rails, cross river on rails. Immediately to right is Mark’s building.

The One In Which Dane Discusses AT&T Service Plans for Five Hundred Words, Much To Everyone’s Dismay

Well shucks. I was just about ready to toss my first-generation iPhone down a well, until I dug a bit further into how much this new service plan was going to cost me.

All I can say is, ouch.

Right now I’ve got 450 anytime minutes and 5,000 night-and-weekend minutes for $39.99 a month. In addition I have my iPhone data plan, which includes 200 text messages, for $20 a month. Finally, I do a lot of dialing across multiple time zones, and so to keep those long family conversations from bankrupting my lavish estate I have early nights and weekends for an extra $8.99 a month.

This comes to a grand monthly total of $68.98, or approximately $72 after those bullshit fees. The last I heard we had finally finished paying off the Spanish American War, and so the phone companies have been quietly rewriting their terms and conditions such that they are no longer charging you recovery fees for the taxes they incur, but service fees for whatever the hell they want. It was this breach of contract that allowed me to duck out of my Sprint contract back in January 2008 and avoid their $200 early termination fee.

So. The data plan for the iPhone 3G bumps everything up an extra $10 a month to $30. Now, from what I’ve heard 3G is pretty freakin’ amazing compared to EDGE, but unfortunately I never seem to live or recreate in a place that supports 3G (San Francisco, of course, being a civilized anomaly in my trek through the backwaters of America).

Thus, for my purposes it would be an extra $10 a month for the privilege of potentially enjoying a service that I will never get to use. Now, I do get other amazing things with an iPhone 3G S, such as GPS and voice control and more storage space and a compass and a faster CPU and a non-recessed headphone jack, but it hardly seems prudent that I should reward AT&T on a monthly basis for enjoying a set of features that has nothing to do with their service.

What’s more, I would have to pay an extra $5 a month to get the 200 text messages that are currently included in my (cheaper) data plan. Now, I don’t text much, but I find it indispensable when coordinating with friends, or sharing short bursts of information that don’t require a proper phone call. Indeed, it is criminal that they charge money for something that rides as heavy as a hobo fart on the network’s backchannel and costs them nothing to support. That said, if I have to pay for texting I want a flat rate, as the last thing I want to think about when I’m composing a text is whether or not it’s worth 25 cents.

I have been with AT&T long enough that I qualify for the $199 pricing on the new phone. The question is, then, how enthusiastic I am about getting burned for an extra $180 a year for the same mobile service that I currently enjoy (as 3G does not yet penetrate the windswept mountaintops and tree villages that I typically inhabit). To put it in perspective, that’s a monthly payment I could spend on hosting my intertubes at Media Temple.

Which. If things go as planned, both these expenditures might well be worth their while.

In other news, I wrote my first iPhone app today.

JJ Abrams Is All Over The New Star Trek Movie

And the world is a better place for it.

  • Location names are set in Futura. Come to think of it, Futura is all over the place.
  • They discuss “Slusho” in the bar at the beginning of the movie.
  • I swear one of the doctors who delivered James Kirk was the scientist from the Dharma Initiative videos in Lost.
  • Angry ice world beast thing looks like a puppy version of the Cloverfield angry beast thing.
  • Time travel.
  • Movie totally kicked ass and I want to see it again.
  • And again and again.