Category Archives: Web

Splitting Subversion into Multiple Git Repositories

For the last three years I’ve been maintaining all my projects and websites, including Daneomatic and Brainside Out, as well as I’ve Been To Duluth and Terra and Rosco and Siskiwit, in a single Subversion repository. At any given time I find myself tinkering with a number of different projects, and honestly it keeps me awake at night if I’m not tracking that work in some form of version control. Given the number of projects I work on, and my tendency to abandon them and start new ones, I didn’t feel it necessary to maintain a separate repository for each individual project.

Subversion is, frankly, kind of a stupid versioning system, which actually works in the favor of someone wanting to manage multiple projects in a single repository. Since it’s easy to check out individual folders, rather than the entire repository itself, all you need to do is create a unique folder for each individual project. Unlike in Git, trunk, tags and branches are just folders in Subversion, so you can easily compartmentalize projects using a folder hierarchy.
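For example, pulling down a single project is just a checkout of that project’s folder. The repository URL and project name below are placeholders, shown here as a dry run:

```shell
# Hypothetical repository URL and project name; substitute your own.
repo="http://example.svn.beanstalkapp.com/myrepo"

# Checking out one project's trunk leaves every other project behind.
cmd="svn checkout $repo/brainsideout/trunk brainsideout"
echo "$cmd"   # printed as a dry run; run the command itself against a real URL
```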

This approach creates a terribly twisted and intertwined history of commits, with each project wrapped around the other. My goal, however, was not necessarily good version control, but any version control at all. Like living, keeping multiple projects in the same repo beats the alternative.

The folder hierarchy of my Subversion repository looks like this. Each project has its own folder:
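A sketch of that layout, with project names standing in for whatever you happen to be tracking:

```
brainsideout/
daneomatic/
duluth/
rosco/
siskiwit/
terra/
```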

Within each project is the standard folder structure for branches, tags and trunk:
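That is, each project folder carries its own copy of the usual Subversion trio, illustrated here for one hypothetical project:

```
brainsideout/
    branches/
    tags/
    trunk/
```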

In the trunk folder is the file structure of the project itself. Here’s the trunk for one of my CodeIgniter projects:
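Roughly like so, assuming the stock CodeIgniter layout of the era; the listing is illustrative, not an exact inventory:

```
trunk/
    index.php
    system/
        application/
            config/
            controllers/
            models/
            views/
```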

While it’s generally bad practice to keep multiple projects in the same repository in Subversion, near as I can tell it’s truly a recipe for disaster in Git. Git is real smart about a lot of things, including tagging and branching and fundamentally offering a distributed version control system (read: a local copy of your entire revision history), but that smartness will make your brain ache if you try to independently maintain multiple projects in the same repository on your local machine.

And so it came to pass that I wanted to convert my single Subversion repository into eight separate Git repositories, one for each of the projects I had been tracking. There are many wonderful tutorials available for handling the generic conversion of a Subversion repo to Git, but none that outlined how to manage this split.

I hope to shed some light on this process. These instructions are heavily influenced by Paul Dowman’s excellent post on the same subject, with the extra twist of splitting a single Subversion repository into multiple Git repositories. I would highly recommend you read through his instructions as well.

First things first: Install and configure Git.

First, I installed Git. I’m on OS X, and while I’m sure you can do this on Windows, I haven’t the foggiest how you would go about it.

After installing Git I had to do some initial global configuration, setting up my name and email address and such. There are other tutorials that tell you how to do that, but ultimately it’s two commands in Terminal:

[prompt]$ git config --global user.name "Your Name"
[prompt]$ git config --global user.email [email protected]

Also, I needed to set up SSH keys between my local machine and a remote server, as the ultimate goal of this undertaking was to push my Git repositories to the cloud. I have an account at Beanstalk that allows me to host multiple repositories, and they have a great tutorial on setting up SSH keys in their environment. GitHub has a helpful tutorial on SSH keys as well.

Give yourself some space.

Next, I created a folder where I was going to do my business. I called it git_convert:

Then, I created a file in git_convert called authors.txt, which maps each user from my Subversion repository onto a full name and email address for my forthcoming Git repositories. My authors.txt file is super basic, as I’m the only dude who’s been rooting around in this repository. All it contains is this single line of text:

dane = Dane Petersen <[email protected]>

Now crank that Soulja Boy!

Now comes the good stuff. The git svn command will grab a remote Subversion repository, and convert it to a Git repository in a specified folder on my local machine. Paul Dowman’s tutorial is super handy, but it took some experimentation before I discovered that git svn works not only for an entire repository, but for its subfolders as well. All I needed to do was append the path for the corresponding project to the URL for the repository itself.

What’s awesome, too, is that if you convert a subfolder of your Subversion repository to Git, git svn will leave all the other cruft behind, and will convert only the files and commits that are relevant for that particular folder. So, if you have a 100 MB repository that you’re converting to eight Git repositories, you’re not going to end up with 800 MB worth of redundant garbage. Sick, bro!

After firing up Terminal and navigating to my git_convert directory, I used the following command to clone a subfolder of my remote Subversion repository into a new local Git repository:

[prompt]$ git svn clone http://brainsideout.svn.beanstalkapp.com/brainsideout/brainsideout --no-metadata -A authors.txt -t tags -b branches -T trunk git_brainsideout

After some churning, that created a new folder called ‘git_brainsideout’ in my git_convert folder:

That folder’s contents are an exact copy of the corresponding project’s trunk folder of my remote Subversion repository:

You’ll notice that the trunk, tags and branches folders have all disappeared. That’s because my git svn command mapped them to their appropriate places within Git, and also because Git is awesomely smart in how it handles tags and branches. Dowman has some additional commands you may want to consider for cleaning up after your tags and branches, but this is all it took for me to get up and running.

Using git svn in the above manner, I eventually converted all my Subversion projects into separate local Git repositories:
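A little loop can grind through the whole list, spitting out one git svn clone per project. The base URL and project names below are placeholders, and the commands are printed as a dry run; drop the echo layer to execute them for real:

```shell
#!/bin/sh
# Placeholder base URL and project names; substitute your own.
svn_base="http://example.svn.beanstalkapp.com/myrepo"
projects="brainsideout daneomatic terra"

for project in $projects; do
  # Each project folder becomes its own standalone Git repository.
  cmd="git svn clone $svn_base/$project --no-metadata -A authors.txt -t tags -b branches -T trunk git_$project"
  echo "$cmd"   # dry run; replace with: $cmd
done
```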

Again, the trunk, tags and branches folders are gone, mapped and replaced by the invisibly magic files of Git:

Push your efforts into the cloud.

I had a few empty remote Git repositories at Beanstalk where I wanted all my hard work to live, so my last step was pushing my local repositories up to my Git server. First, I navigated into the desired local Git repository, and set up a name for the remote repository using the remote command:

[prompt]$ git remote add beanstalk [email protected]:/brainsideout.git

I had previously set up my SSH keys, so it was easy to push my local repository to the remote location:

[prompt]$ git push beanstalk master

Bam. Dead. Done. So long, Subversion!

For more information on how to get rollin’ with Git, check out the official git/svn crash course, or Git for the lazy.

Happy Gitting!

Clean Slate

I’ve been cleaning a lot of the cruft out of my domains lately. Subdomains, development domains, MySQL databases originally setup to stage all sorts of nefarious dealings… they’ve all been pulled up by the roots and tossed into heaping piles of gzipped tarballs.

As part of this activity I’ve been cleaning out my Google Analytics account as well, as many of my analytics site profiles refer to domains long gone, testing procedures long concluded, directions I thought my web interests would go but didn’t. Having just made a Great and Terrible Mistake and irreversibly destroyed a trove of information courtesy of the slop that is the Google Analytics interface, I have penned a cautionary tale to make you aware of two of its most dangerous functions: pagination and deletion.

Google Analytics Pagination: Party like it’s 1995 (and your 14.4K U.S. Robotics Sportster just arrived)

The pagination tool in Google Analytics defaults to displaying only 10 site profiles per page. Using the dropdown menu you can change this to 5, 10, 20, 35, 50 or 100.

An option to display only five profiles per page? What the hell? In what universe would that be useful? Are we seriously so pressed for bandwidth in 2010 that we cannot afford to peer at the world through more than a pinhole? Further, the cognitive load of needing to choose between six freaking options is ridiculous. It’s a modest burden to bear but oftentimes interfaces manage to kill their users not through a single fatal flaw, but through an endless series of tiny papercuts such as this.

Seriously, Google Analytics. If you must have pagination, limit the options to 10, 50 and All. And for all that is holy, remember my choice for at least the duration of my session. Needing to reset the number of rows every time I go back to my profile list is maddening, and the fact that I can’t save this option as a personal setting is driving me insane.

Or would drive me insane, if I hadn’t screwed up in a much bigger way. Pagination in Google Analytics has an additional feature whose destructive tendencies are so finely tuned that they trump even the above critique. To expand on this, we’ll take a quick stroll through the flawed workflow for deleting a site profile.

Deletion: With great power comes insufficient gravity and illustrative consequence surrounding said power.

To delete a site profile, you click the “Delete” link in its corresponding row:

When you click “Delete” a beautiful alert box pops up, a charming implementation of the “Hello World” of any beginner’s JavaScript tutorial:

In the alert box, the profile that will be deleted is not mentioned by name. It is up to you to remember, guess or assume which profile you originally clicked on. The most prominent information on this alert is the domain of the website that initiated the alert. Is that really the most important thing you need to know at this point, in order to make an informed decision? More important than the fact that the profile data cannot be recovered? More important than the name of the profile that’s actually being deleted?

Also note that “OK” is selected by default, so that pressing the return key will delete the profile. With an action as destructive as the irrecoverable deletion of years’ worth of information, it’s insanely poor form to select this choice by default.

Perhaps if creating a sensible “Delete” workflow in Google Analytics were as precious as maximizing clickthrough rates on text ads, we’d see Google employing the same obsessive levels of testing that the color of hyperlinks currently enjoys. As it stands, all I can say is user experience my ass.

One Plus One Equals Gone

The ambiguous delete tool in Google Analytics, combined with its poorly-executed pagination functionality, creates a perfect storm of destruction. No matter what page you are on, when you click “OK” to confirm the deletion of a profile, Google Analytics redirects you to the first page of your profile list.


(The alert box for confirming the delete action appears over your current page. After clicking “OK” from the alert box you are redirected to the first page, losing the context of your delete action.)

Like most humans, I have a finely-tuned spatial memory. I instinctively take note of where things are located in space, I can predict where they will go, and I can remember where they were. If I’m performing a repetitive task, say spooning food into my mouth, I don’t check my plate after every bite to make sure it hasn’t turned into a bowl of snakes. There is an expectation, born from my experience with physical reality, that the plate and food will remain fairly consistent between mouthfuls such that it doesn’t demand constant conscious consideration. In the words of Heidegger, the spoon, plate and food are ready-to-hand, an extension of myself, part of my world of absorbed coping.

In Google Analytics I had identified two profiles that were outdated, and I moved to delete both of them. Spatially, they were located right next to each other, one after the other. I deleted the first one, and instinctively went to the location of the second one, and deleted it as well. The JavaScript alert, boldly declaring https://www.google.com/, was promptly ignored because it offered no useful information to confirm.

So long, dear friends.
Well, numerical representations of friends.

Unbeknownst to me, after deleting the first site profile I had been quietly redirected to the first page of my profiles list. And so, it came to pass that I deleted not the profile I intended to delete, but the profile documenting four years of activity here at Daneomatic. Clearly I’m not the first person to have accidentally (and irrecoverably) deleted a profile from Google Analytics.

Dear friends of Daneomatic, I ask that you enjoy your fresh start. Save your comments, I know nothing of you, of your browsers or activities or search terms.

Please, remake yourselves however you see fit. The gentle fellows at You Look Nice Today may offer some valuable suggestions as to how to best use this opportunity.

I, of course, would recommend the Mork from Ork suspenders.

Your Workflow is the Battlefield

There’s been quite the wailing and gnashing of teeth over the Apple iPad not supporting Flash. Personally, I welcome this new landscape of the web, where a future without Flash seems not only bright but possible indeed.

That said, what is unfolding here is of considerable gravity, and will likely determine the future of the web. Most web professionals use Adobe tools in some capacity to do their job, whether Photoshop, Illustrator, Dreamweaver (gasp), Flash, Flex, Flash Catalyst, or even Fireworks (which is, according to many, the best wireframing tool on the market, despite its quirks and crash-prone behaviors).

Now, I am not privy to inside information, but based on what I’ve been able to glean, Adobe’s strategy is something like this. There is a deliberate reason that your workflow as a standards-based web professional sucks; that Photoshop doesn’t behave the way you want it to, that exporting web images is still a pain in the ass, and that you actually need to fight the software to get it to do what you want.

Adobe knows how you use its software. Adobe knows how you want to use its software. Adobe understands your existing workflow.

And it doesn’t fucking care.

You see, Adobe doesn’t view you, as a web professional, as someone engaged in building websites. It doesn’t view itself as one who builds the tools to support you in your job. Adobe does not view you as the author of images and CSS and HTML and Javascript that all magically comes together to create a website, but rather as the author of what could potentially be Adobe Web Properties™.

They are not interested in supporting your workflow to create standards-based websites, because that is not in their strategic interest. They would much rather you consented to the cognitive model of Adobe Software™ to create proprietary Adobe Web Properties™ that render using Adobe Web Technologies™.

In essence, Adobe wants to be the gatekeeper for the production, as well as the consumption, of the web.

Apple knows this, and knows that the future of the web is mobile. Their actions are no less strategic than Adobe’s, and Apple has chosen a route that deliberately undermines Adobe’s strategy: a strategy for controlling not just the consumption of rich interactive experiences on the web, but their production as well.

From the production side, as far as Adobe is concerned, if you’re not building your websites in Flash Catalyst and exporting them as Flash files, you’re doing it wrong.

Your frustrations with Photoshop and Fireworks in not supporting the “real way” web professionals build standards-based websites are not by accident, but by design. Adobe views each website as a potential property over which they can exert control over the look, feel and experience. As these “experiences” become more sophisticated, so do the tools necessary to create them. Adobe wants to be in the business of selling the only tools that do the job, controlling your production from end-to-end, and then even controlling the publication of and access to your creation.

Apple’s own domination plans for the mobile web undermine all this.

And Adobe is pissed.

Scope

Home Sweet Home

Maybe you’ve already heard, but I recently helped launch Adaptive Path’s new home page. A few of the other kind folks at the office designed it, and I cut it up into its hot-and-buttery front-end code. I used 960.gs for the pixel-perfect CSS grid system, and cooked up some slick back-end code for streaming our recent essays and blog posts into their proper sections.

What’s more, I wrote some tight little scripts to hit up our Twitter feed and pull down our most recent tweets. JavaScript implementations are nice for low-volume sites, but when you get as much traffic as AP you need something a bit more robust. I developed a lightweight caching module that wraps around our call to the Twitter API, keeping our tweets fresh without hitting Twitter on every single (insanely frequent) page load.
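The module itself lives in our back-end code, but the underlying idea is plain time-based caching and fits in a few lines of anything. Here’s a rough shell sketch of the pattern, where fetch_tweets is a stand-in for the real Twitter API call:

```shell
#!/bin/sh
# Sketch of time-based caching. fetch_tweets is a stand-in for the real
# API call (e.g. a curl against the Twitter API); the rest is the pattern.
fetch_tweets() { echo '{"tweets":[]}'; }

cache_file="${TMPDIR:-/tmp}/tweets_cache.json"
max_age=300  # refresh at most once every five minutes

now=$(date +%s)
mtime=$(stat -c %Y "$cache_file" 2>/dev/null || echo 0)

# Hit the API only when the cached copy is missing or stale;
# otherwise every page load is served straight from the cache file.
if [ $((now - mtime)) -gt "$max_age" ]; then
  fetch_tweets > "$cache_file"
fi
cat "$cache_file"
```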

Meanwhile, I’ve pretty much been living at Musée Mécanique the last two weekends, digesting their incredible collection of antique coin-operated arcade machines. While these pictures certainly won’t leave that familiarly cold smell of metal on your hands after you’re done handling them, I’ve nevertheless been dropping my observations into a set on Flickr.

Drop Coin Here

The Cail-O-Scope

Love Tester

Grope

Musée Mécanique

Musée Mécanique

Baseball Score-Board

Oliver’s Simple Fluid Dynamics Generator

God damn this is cool. Click and drag in the black square to make the magic happen. Works best in the smokin’ Safari 4.0, because this beast is heavy on the JavaScript. In any other browser you’ll wonder what in the hell I’m gettin’ so worked up about.

Stuff like this just feeds my existing obsession with introducing deliberate thought and consideration into the texture and materiality of our digital interfaces. Seriously, computer interaction that exhibits natural physical properties, either felt, observed or otherwise perceived, really gets my blood going.

fluid-dynamics

Zoom, Zoom, Zoom

I started using Safari 4.0 yesterday, and I like what I see so far. The new JavaScript interpreter is fast. The controls for Google Maps are so quick they’re frightening. Just try using your scroll wheel (or two-finger gesture on the trackpad of your new non-removable-battery MacBook Pro) to zoom in. If there’s a faster way to reach the surface of the earth from space, I’m sure Burt Rutan is working on it.

Err, the opposite, I’m sure he’s working on the opposite. Sheesh.

Anyway, I will leave you with the coolest, blockiest typeface I have ever seen in my life. Seriously, check out these titles:

Golden Gate Bridge Plaque

Beautiful. Let’s get a little closer, shall we?

PAST OFFICERS, PAST DIRECTORS

And once more, for the people in the balcony:

DIRECTORS detail

Man.

Subtlety and Nuance in Physical Interaction

I had a great conversation during tea time at Adaptive Path this evening with Jesse James Garrett, about the role of subtlety and nuance in physical interaction design. Central to the conversation was Microsoft’s Project Natal, an upcoming system for the Xbox 360 that lets you use your full body to control games.

While large motions, like punching and kicking the air, make for an impressive flourish, it’s interesting to consider what a system like this would look like in a few years, as it becomes increasingly fine-tuned. What if it knows where each one of my fingers is, like a musical instrument? What kind of interactive applications could this have in a non-game environment? Or, as Jesse mused, how can we learn from gaming to bring more game-related themes, from the concept of play to the interactive vocabularies we establish therein, into everyday computer-mediated interactions?

Part of Jesse’s work on the Ajax approach to web development was based on a desire to make web interactions feel more game-like in nature. Before we had instant asynchronous updates, whether backed by XML or not, the web had a distinctly evaluative feel to it. The cost of submitting web input was high, as it resulted in a long pause before I would know whether or not my submission had been accepted. Games typically offer instantaneous feedback and so this delayed, high-cost transaction felt more like taking an exam than playing a game. Thus, the web-two-point-oh-social-media-user-generated-content revolution is not about Ajax or Prototype or Scriptaculous or jQuery or MooTools, but about removing the barriers of time and cost previously associated with contributing to the web.

And so, with sophisticated physical input devices on the horizon, how can we use the most familiar input devices ever, our own bodies, to enhance our computer-mediated experiences? Further, given the fine-grained control we have over our physical selves, how can we draw on the rich human tradition of having a body and allow people to interact with a system in a more subtle and nuanced manner?

Just something I’m pondering.

Your Online Banking System Can Go To Hell

online-statements

Dear Online Statements,

I have paper records that go back ten years. Ten years. These records do not expire. You propose that I enroll in a “convenient” system that forgets my records after a mere 18 months. If I want to access records older than that you will charge me a fee and send them via U.S. mail, which is what you were doing in the first place.

If your online system is so “secure” why can’t you entrust it with more than 18 months worth of records? If it is so “convenient” why does it do a worse job of managing my account history than I do?

This is supposed to sound compelling why?

Regards,
Dane

P.S. If you ever again mention the “greenness” of online statements versus mailed statements, so help me god I will claw out your throat. There is nothing green about a server farm that needs to run white-hot 24 hours a day, seven days a week, to allow me “access” to my “statements” whenever I “want”.

I’ll tell you what’s green and convenient, and it’s a fucking file cabinet.

Mother of All Funk Chords

I found this months ago, and watching it still gives me chills:

This was the reason the internet was invented, man.

Social Hygiene

The other day, Kate and I were discussing the difference between a “tool” and a “douche bag”. It is a subtle but important differentiation, and we came up with the following guide. We hope you find it helpful, and failing that, offensive.

Tool: Drives a champagne Lexus LX with gold trim.
Douche Bag: Drives a black Cadillac Escalade with gold trim.

Tool: Wears one polo shirt with one popped collar.
Douche Bag: Wears two or more polo shirts with one or more popped collars.

Tool: Tries to network with you at a party.
Douche Bag: Tries to network with you at a funeral.

Tool: Working on a Web 2.0 social networking application.
Douche Bag: Working on a Web 2.0 social networking application that will be the next Facebook/MySpace/YouTube.