Future Now
The IFTF Blog
Reimagining Work
One of the paradoxes of technological change is that as new tools are introduced, people often remain tethered to their existing tools and practices. It's a paradox because new technologies can be significantly beneficial, yet we are loath to incorporate them into our daily routines. We don't tolerate losses well: giving up a technology we've come to rely on, even to replace it with a better version, can feel genuinely painful.
It's also true that most new tools offer only marginal benefit, and even that benefit can be eroded by steep learning curves or the acute pain of using them. Sometimes we recognize better outcomes through experimentation, by taking a long-term view, or by seeing how broader use might serve a more diverse set of stakeholders. That recognition helps make those initial forays with a new technology feel worthwhile.
We live with deep mixtures of old and new technologies in everyday life. Each has varying degrees of complexity and sophistication, and this creates a lot of conflict, especially when we try to sort out and normalize new working practices.
What might be some of the mechanisms for resolving avoidable conflicts and redirecting our attention, work, and habits towards more productive and personally satisfying exchanges?
Are there technologies that can serve as proxies for working out the inconsistencies of social interactions, our values, and new working principles in order to help make a better fit between technology and everyday life?
Collaborative Authorship
Collaborative authorship and revision of documents is one of those areas where working practices can converge, can conflict, and are ripe as a proxy for sorting out new solutions for change.
Consider how complicated it is to collaborate as a group on a document. There are different ways of crafting statements, varied word choices, structure, formatting, and design. If we add the many ways that we share, trade, exchange, comment, validate, and track these changes, it quickly becomes surprisingly complicated. Furthermore, as a collaborative group grows, intra-group communication, sharing, and version conflicts intensify.
Despite all of these complications, we still retain many of the vestiges of paper trails, office files, and document authoring tools that we used a decade ago. With the exception of real-time editing (e.g. http://etherpad.com/ and its Google Docs offspring), we haven't yet embraced tools that 1) minimize version conflicts in distributed authorship environments, 2) make generic organizational structures an explicit variable in collaborative processes, 3) allow us to capture the meta-information around our work and creative processes to provide us with a richer perspective on our work, and 4) coordinate the different tasks that go into creating the document and make it a durable and mobile carrier of meaning.
Each of these four areas is a pain point for people who collaborate and share documents in their everyday lives. Collaboratively produced documents demand a tremendous amount of transactional effort, and this frequently amplifies the emotional conflicts that arise from reconciling different cultural and organizational values, working styles, and technical skills. We like to stick with what we know, because we know it works at least reasonably well. But often, a new tool can be better simply because it reduces the pain or wasted effort we normally experience.
What would happen if the coordination techniques of collaborative coding used by programmers were applied to other disciplines and organizing styles? What if the core value of conflict resolution between computer scripts could be applied to humanities manuscripts, PowerPoint decks, maps, and grant proposals?
Collaborative Coding
One outcome of the open source software movement is the embrace of distributed, collaborative methods for maintaining and developing programming scripts. Programmers need tools for collaborating remotely, and because open source projects aren't always run from a centralized authority, collaborators need ways to resolve different forms of conflict. And because computers and other machines run logical processes, they have a low tolerance for script errors and other conflicts. Programmers need version control to identify new errors, fix old ones, and track their progress. By archiving each iteration of a script instead of writing over it, a team can make incremental progress, with different members working on different parts of the script at once. This distributes the ability to contribute at a much finer resolution–from the whole document down to the individual letter, word, or phrase.
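The core mechanics here–saving every iteration instead of overwriting, then comparing versions at a fine resolution–can be sketched in a few lines. This is a toy illustration, not how Git actually stores data; the `VersionStore` class and its method names are invented for this example, using only Python's standard `difflib` module.

```python
import difflib

class VersionStore:
    """Toy version store: keeps every iteration of a document
    instead of writing over the previous one."""

    def __init__(self):
        self.history = []  # each entry is the full text of one version

    def commit(self, text):
        """Archive a new version and return its version number."""
        self.history.append(text)
        return len(self.history) - 1

    def diff(self, a, b):
        """Line-level differences between two stored versions."""
        return list(difflib.unified_diff(
            self.history[a].splitlines(),
            self.history[b].splitlines(),
            lineterm=""))

store = VersionStore()
v0 = store.commit("Call me Ishmael.\nSome years ago.")
v1 = store.commit("Call me Ishmael.\nMany years ago.")
changes = store.diff(v0, v1)  # pinpoints exactly which lines changed
```

Because nothing is overwritten, any earlier draft can be recovered, and the diff localizes changes down to individual lines (word- and character-level comparisons work similarly via `difflib.ndiff`).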
Computer-readable scripts are not very different from any other form of written document. This means that the same tools for distributed and collaborative authorship can be used for areas other than computer science–from the humanities to graphic design.
New Tools, New Capabilities
Tools like Google Docs and wikis handle distributed, synchronous tasks relatively well when the goal is a single resulting document. But how do you collaborate and coordinate around a document in order to emphasize and highlight different alternatives? How can big, asynchronous changes be incorporated? Sure, it's possible to use "track changes" in MS Word and have multiple people add their comments and edits, but would that really be LESS painful?
Instead I'm imagining what it would be like to use a platform like GitHub to apply the capabilities and experience of collaborative coding to other endeavors like proposal development, poetry, graphic design, or really any collaborative document. If you ever visit a hackathon or take a look at an open source programming project, you'll likely run across GitHub. It's one of the major platforms supporting collaborative coding around the world. It's based on Git, which was developed to support the maintenance and development of the Linux OS kernel. Git is a version control system that manages tiny differences between document versions. Programmers have embraced it for a variety of projects, but there's no reason it couldn't be given a spiffy graphical user interface and designed to support the version control and coordination needs of collaborative document authoring in other domains like comparative literature, science manuscript development, or foundation grant proposals.
I'm not the first to see the opportunity here. Julie Meloni in the ProfHacker column at the Chronicle of Higher Education offers this nice, gentle introduction to version control for academics aptly titled, A Gentle Introduction to Version Control.
One insight from her description is that the command-line interface definitely has to be replaced with an intuitive graphical control system for the document, revision histories, author community, and changes. I can barely remember my phone number; I'm not sure why programmers assume that everyone else has an easy time remembering all those minute task commands.
Another is this great comment to Meloni's article from Sean Gillies:
"After reading a tweet from Dan Cohen yesterday that scholarship might emulate software development, I wonder if distributed version control doesn’t point to a future of scholarship that’s less about editions and more about “diffs” or edits."
Science Fiction author and journalist Cory Doctorow discusses the process of archiving and version control using Git in his book, Context: Further Selected Essays on Productivity, Creativity, Parenting, and Politics in the 21st Century. He describes his desire for a tool that can capture what's happening as he writes, what his influences are, and bind that to the document itself–all while managing the different versions of his manuscripts.
But in the digital era, many authors work from a single file, modifying it incrementally for each revision. There are no distinct, individual drafts, merely an eternally changing scroll that is forever in flux. When the book is finished, all the intermediate steps that the manuscript went through disappear.
It occurred to me that there was no reason that this had to be so. Computers can remember an insane amount of information about the modification history of files—indeed, that’s the norm in software development, where code repositories are used to keep track of each change to the codebase, noting who made the changes, what s/he changed, and any notes s/he made about the reason for the change.
So I wrote to a programmer friend of mine, Thomas Gideon, who hosts the excellent Command Line podcast (http://thecommandline.net), and asked him which version control system he’d recommend for my fiction projects—which one would be easiest to automate so that every couple of minutes, it checked to see if any of the master files for my novels had been updated, and then check the updated ones in.
Thomas loved the idea and ran with it, creating a script that made use of the free and open-source control system “Git” (the system used to maintain the Linux kernel), checking in my prose at 15-minute intervals, noting, with each check-in, the current time-zone on my system clock (where am I?), the weather there, as fetched from Google (what’s it like?), and the headlines from my last three Boing Boing posts (what am I thinking?). Future versions will support plug-ins to capture even richer metadata—say, the last three tweets I twittered, and the last three songs my music player played for me.
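A stripped-down version of that idea–snapshot the manuscript on a schedule and bind contextual metadata to each check-in–might look like the sketch below. The real Flashbake drives Git and fetches live context (weather, blog headlines); here the metadata "fetchers" are stand-in functions and the snapshots are kept in a plain list, purely to show the shape of the workflow.

```python
import datetime

def system_timezone():
    # Stand-in for Flashbake's real context fetchers (time zone,
    # weather, recent posts); swap in live lookups as desired.
    return datetime.datetime.now().astimezone().tzname()

def snapshot(manuscript_text, fetchers, log):
    """Record the current text plus a metadata header answering
    'where am I / what am I thinking?' at the moment of check-in."""
    meta = {name: fn() for name, fn in fetchers.items()}
    meta["checked_in_at"] = datetime.datetime.now().isoformat()
    log.append({"meta": meta, "text": manuscript_text})

log = []
fetchers = {"timezone": system_timezone}
snapshot("Chapter one, draft A.", fetchers, log)
snapshot("Chapter one, draft B.", fetchers, log)
```

Run on a 15-minute timer (cron, or a loop around `time.sleep`), something like this turns the "eternally changing scroll" into discrete, annotated drafts.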
There's a GitHub release of Flashbake–the code for the Git-based tool–that can be downloaded, tinkered with, and added to. What Flashbake does well is add meta-information to the writing flow, capturing a small snapshot of what's happening. [here's the original project page]
Gina Trapani has a fantastic getting-started guide to Flashbake at Lifehacker, along with a whole bunch of little bits to help you decide if it's worth the learning curve. As she says, you have to be a little adventurous and not mind running a script or two.
Organizational Design 1.0
Expect more tools like Flashbake–ones that make it easy to cache fine-grained changes to documents and select from multiple threads of that work. My guess is they will be the basis for more user-friendly collaborative text-authoring platforms in the very near future, once collaborative authoring and distributed work start to be more widely adopted.
The important thing about Git and similar tools is that they support a wide range of organizational structures for getting work done. I'm a little out of my expertise here–in understanding how the technical specifics support different org styles–but it's apparent that Git and some other versioning tools support centralized, decentralized, hierarchical, heterarchical, and everything-in-between forms of coordination. What matters is the document, the changes that get made, and how those changes get committed or accepted into the latest version. Pull, the ability to draw from a master repository, and push, the ability to add to a master repository, are two of the most important permissions.
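The pull/push split can be made concrete with a minimal sketch. Everything here–the `Repo` class, the permission sets–is invented for illustration; real Git hosts implement this through access controls on remote repositories.

```python
class Repo:
    """Toy master repository with separate pull and push permissions."""

    def __init__(self, text, pullers, pushers):
        self.text = text
        self.pullers = set(pullers)  # may copy from the master
        self.pushers = set(pushers)  # may also write to the master

    def pull(self, user):
        if user not in self.pullers:
            raise PermissionError(f"{user} cannot pull")
        return self.text

    def push(self, user, new_text):
        if user not in self.pushers:
            raise PermissionError(f"{user} cannot push")
        self.text = new_text

master = Repo("Draft v1", pullers={"ana", "ben"}, pushers={"ana"})
copy_for_ben = master.pull("ben")  # anyone with pull can read a copy
master.push("ana", "Draft v2")     # only maintainers can write
```

A centralized team might grant everyone push; a decentralized project might grant push to a single maintainer and have everyone else submit changes for review–the same two permissions support both styles.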
It's not difficult to imagine scenarios where a team is working on the same document, drafting content, editing, revising, developmental editing, and copy editing, all at different times in different places. Obviously, the basic content needs to exist before the later stages of editing can begin. But what this supports is simultaneous editing by multiple people, where individual changes can be accepted or rejected, rather than wholesale edits or versions. These fine-tuned edits matter because they mean a host of authors can contribute and no contribution is too minuscule. What's kind of cool is that six different people can each be adding a sentence, and then when it comes time to merge versions, one can select which of those sentences to accept into the final.
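That sentence-by-sentence selection can be sketched in a few lines. Assume (hypothetically) each contributor proposes a replacement for a numbered sentence slot in the base draft; a merge step then accepts one proposal per slot and rejects the rest. The function and data shapes below are invented for the example.

```python
def merge(base_sentences, proposals, accepted):
    """Build a merged draft from a base plus per-sentence proposals.

    proposals: {slot_index: {author: proposed_sentence}}
    accepted:  {slot_index: author} -- the editor's choices;
               slots with no accepted proposal keep the base text.
    """
    merged = list(base_sentences)
    for slot, author in accepted.items():
        merged[slot] = proposals[slot][author]
    return merged

base = ["The future is unevenly distributed.", "Tools shape habits."]
proposals = {0: {"ana": "The future is already here.",
                 "ben": "The future arrived yesterday."}}
# Accept Ana's sentence for slot 0; Ben's is rejected,
# and the untouched base sentence in slot 1 is kept.
final = merge(base, proposals, accepted={0: "ana"})
```

No contribution is too small here: the unit of acceptance is a single sentence, and each author's proposal survives as a distinct, selectable alternative right up until the merge.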
To wrap up, here are some final characteristics of distributed version control systems:
- A distributed source control system (like Git) gives every collaborator a full copy of the repository, so work doesn't depend on access to a single central server.
- It's possible to recreate a project from any copy, and branching a repository means that duplicates of work can be built upon and modified without affecting the original. A difference command (diff) allows one to compare the changes made between file versions.
- One can work totally offline on their local repository and push changes to the central repository later.
- Permissions are suited to the tasks and can be shaped around different organizations, teams, and roles.
- The contributions that individuals make form another "social" layer–like following people or keeping track of their history.
- The history of changes made to the repository is one measure of productivity. Plus, using centralized file organization tends to increase productivity by minimizing the transaction costs of exchanging files and notifying others of changes. The transparency (who made what commits) is also a big motivator for participation and productivity.
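Several of the properties above–branching as cheap copying, working offline on your own copy, diffing versions–can be illustrated with a toy repository. The branch here is literally a copy of a file map; the file names and the dictionary structure are invented for the example.

```python
import copy
import difflib

# A toy "repository": file names mapped to their contents.
repo = {"proposal.txt": "Aim high.\nBudget: TBD."}

# Branch: a full copy that can be modified offline
# without touching the original.
branch = copy.deepcopy(repo)
branch["proposal.txt"] = "Aim high.\nBudget: $10,000."

# Diff: compare the two versions of the file, line by line.
changes = list(difflib.unified_diff(
    repo["proposal.txt"].splitlines(),
    branch["proposal.txt"].splitlines(),
    "master", "branch", lineterm=""))

# "Push": fold the branch back into the master copy
# once its changes have been accepted.
repo.update(branch)
```

The original stays intact until the merge, the diff makes every change inspectable before it's accepted, and a full project history could be rebuilt from either copy.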
It takes some effort to get something like this up and running. Actually starting collaborative coordination around documents then involves getting others onboard, and that means building the cultural tools to support ongoing practice. I have no doubt that a small group of dedicated individuals could do it; many probably already have. My hope is that by writing this article, people will share more examples of how Git and other versioning systems are developing as models for reimagining collaborative authorship, coordination, and work in domains other than computing.