“If we think that no-one is watching us and making value judgements about our community, our research, our relevance, and our output, then we are misguided.”
When Melissa Terras spoke these words in her plenary presentation at the recent Digital Humanities 2010 conference, people listened and took them to heart—at least in the corner of the DH community where I hang out, there’s been a noticeable shift toward stepping up the game. I’ve also seen an energy ripple through the group; sure, there are days filled with the drudgery of writing grant application after grant application, and it might take a week to accomplish what should have taken a day, but underneath all of that is the ongoing fervent desire to make something new—a new process, a new path, even a new tool—not out of a need to justify one’s scholarly existence, but because that never-ending push through the doldrums toward creativity and sharing new knowledge is one’s scholarly existence.
Although the One Week | One Tool summer institute was set in motion long before Terras’s plenary, I prefer to think of the experience as one of the first concentrated efforts since that time to combine community, research, relevance, and output into value. As a member of the OW|OT/Anthologize team, I’m going to (safely) speak for all of us when I say that I’m grateful the NEH Office of Digital Humanities saw the potential for value when they went out on a limb and funded this project.
I realize that given my description on the team page—"led the Development team and served as the ‘Glue’ across the entire project"—as well as my position as managing editor of ProfHacker, and the fact that I’ve been working on the development side as a vendor in the private sector for many, many years, there’s some expectation that I provide a comprehensive analysis of the tool we built and the process we went through to build it. That’s not really what this post is all about, although there are elements of that in it.
Instead, I’m going to talk briefly about the team and the tool, all the information constantly swirling around the group as it was being developed, and how and why this project is yet another CHNM-inspired success—a model, if you will.
The Tool: Anthologize
Anthologize is a WordPress 3.0 plugin that turns a WordPress installation into a platform for publishing electronic texts. From within the plugin in your WordPress dashboard, you can grab posts from your existing blog, import feeds from external sites, or create new content directly within the tool. At that point, you can outline, order, and edit your work, with the goal of crafting it into a single volume for export in several formats. Right now those formats are TEI, PDF, and ePub, along with rudimentary support for RTF.
It should be stated clearly that Anthologize is a very alpha release and is best installed in a test environment first. There are a lot of bugs. A lot. This release is a prototype that was put through a series of tests in five different types of development environments, but it certainly doesn’t account for the myriad system configurations of our user base.
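To make the export idea concrete: an ePub file is really just a ZIP archive with a fixed internal layout, so "gather ordered posts, package them as a book" is less magical than it sounds. The sketch below is not Anthologize’s actual code (the plugin is written in PHP against the WordPress API); it is a minimal, standard-library Python illustration of the same core move, and it deliberately skips pieces a fully valid ePub needs, such as the NCX table of contents. All names here are hypothetical.

```python
# A minimal sketch (NOT Anthologize's code) of exporting ordered blog
# posts as a bare-bones ePub: a ZIP archive containing a mimetype entry,
# a container.xml pointer, a package (OPF) file, and one XHTML chapter
# per post. Standard library only.
import zipfile

def make_epub(path, title, posts):
    """posts: list of (post_title, html_body) tuples, already ordered."""
    chapters = [(f"ch{i}.xhtml", t, body)
                for i, (t, body) in enumerate(posts, 1)]
    manifest = "".join(
        f'<item id="c{i}" href="{fn}" media-type="application/xhtml+xml"/>'
        for i, (fn, _, _) in enumerate(chapters, 1))
    spine = "".join(f'<itemref idref="c{i}"/>'
                    for i in range(1, len(chapters) + 1))
    opf = f"""<?xml version="1.0"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>{title}</dc:title>
    <dc:identifier id="id">urn:example:anthology</dc:identifier>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>{manifest}</manifest>
  <spine>{spine}</spine>
</package>"""
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must be first in the archive and stored
        # uncompressed, per the ePub (OCF) packaging rules.
        z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container)
        z.writestr("content.opf", opf)
        for fn, t, body in chapters:
            z.writestr(fn, f"<html><head><title>{t}</title></head>"
                           f"<body>{body}</body></html>")

make_epub("anthology.epub", "My Anthology",
          [("First Post", "<p>Hello.</p>"),
           ("Second Post", "<p>World.</p>")])
```

Everything Anthologize layers on top—pulling posts from the WordPress database, importing external feeds, letting you reorder and edit before export, and emitting TEI, PDF, and RTF as well—lives around this same assemble-then-package core.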
Since I want to dwell on the process more than the tool itself, I’ll point you toward the following information from the team:
a post by ProfHacker author (and non-humanities person!) Heather Whitney
There’s also this wonderful video about the making of the tool...
The Inspiration
Although we all came together Sunday night for a meet-and-greet, and knew most participants were leaving the institute the following Saturday—when we had to put pencils down—we didn’t know what we were going to build until Tuesday. On Monday at lunch, Tom Scheinfeldt asked us, “So, any thoughts?” As the table fell silent and we looked around to see who would offer a suggestion first, I thought about what I wanted to build. Although I had a specific idea for a project—and believe me, it will be built someday—I realized that I cared more about meeting a set of specific criteria.
I started to answer Tom’s question by throwing at him all the questions I start with when thinking up a project, such as:
Do we focus on something discipline-specific, or something useful regardless of discipline? Although this project was funded by the NEH Office of Digital Humanities, and given the team members, this tool would be deeply influenced by and invested in humanistic inquiry, I didn’t want us to build something that belonged only to the humanities.
Do we focus on something for research use, classroom use, or both? Given one shot to make a cool tool, I hoped we wouldn’t limit it to one group of users. The flipside to that? Being aware of an audience is difficult, let alone multiple audiences.
Do we build on existing technology or build something new? The time constraints made this an easy decision in the end, although there were a good number of suggestions in the final list that would have required development from the ground up.
I could have continued my list of questions in rapid-fire succession (I have many) but I noticed Tom was laughing at me and pointing out how incredibly logical that sounded—the implication being that such a logical process to begin with would preclude the production of something truly creative. So I shut up and trusted that the team would work through all those questions eventually (and hopefully quickly), and put it in my mental notebook of all the things we had to keep in mind over the next four days.
After we decided on the tool we were going to build—a conversation that included discussion of the points above, and others—we shifted quickly into design-and-build mode. But, importantly, that designing and building had a little bit of research around it. Specifically, the following articles were read, re-read, and remained a constant presence throughout the week:
Matt Cutts’s “Blog to Book?” and all the comments attached to that post.
Also swirling around the group were the discussions that came out of the recent Scholarly Communication Institute, the work at the University of Maryland on Computer Forensics and Born Digital Content, book history (such as the questions that will be posed at this upcoming Radcliffe seminar), and the future of the book. And of course we brought our own knowledge and experience to the table; as you do, the use cases for scholars, teachers, libraries, archives, museums, artists, and writers were created and discussed by everyone before development really started. By Wednesday, everyone on the team had a clear idea of what we were building, who we were building it for, all the ways in which it could extend, and the effort it would take to sustain it.
These few paragraphs do not do justice to the rich narrative of the week that was, but I would like to take a moment to reflect upon the One Week | One Tool development process in relation to a 2002 article by Martha Nell Smith (founding director of MITH), “Computing: What’s American Literary Study Got to Do with IT?” (American Literature 74.4). In this article, Smith discusses core technologies that should be driving digital humanities projects. In this case, “technologies” does not necessarily mean markup languages, or server-side programming, or disk arrays that can store terabytes upon terabytes of digital information. Instead, Smith refers to “technologies” as simply those means by which we might accomplish various ends. In digital humanities projects, she suggests those technologies are:
the technology of access, or making available to all what was previously limited in some way.
the technology of multimedia study objects or digital surrogates, or ensuring that the digital representation of the physical object retains the key features of that object, such that it may be manipulated and transformed (displayed, analyzed, etc.) without losing its core qualities.
the technology of collaboration, or accomplishing means to various ends in humanities computing projects by including a vast array of colleagues—managers, librarians, programmers, designers, visionaries, and so on.
the technology of self-consciousness, or a constant reflection upon how the core objects of study have been produced both in the first place, and in their new spaces.
Whether consciously or not, the OW|OT team worked entirely with these technologies to produce the Anthologize tool; it’s my not-so-humble opinion that the success of this tool will depend on continuing to keep these technologies in mind...while bugfixing with the others (those pesky language and platform technologies).
Although only Dan, Tom, Jeremy Boggs, and Trevor Owens were “officially” scheduled for interaction with the team, I’m going to speak for the group when I say that without Sharon Leon and Sheila Brennan setting aside their own work to spend hours with our team, this project would not have been as successful. Additionally, we need to give a special shout out to Ken Albers, Ammon Shepherd, and John Flatness for their help as well.
The reason CHNM is uniquely positioned as instigator of and support system for this project and the resulting tool is the longstanding tradition of enthusiasm, creativity, collaboration, and support put in place by its founding director, Roy Rosenzweig. It is impossible to spend any time around CHNM without learning something about this man and the reasons the center exists and is a success. In addition to actually making a tangible tool, One Week | One Tool was supposed to be a time to learn new skills and experience new methods of work and collaboration so that we could take these pieces with us and put them in place in our own labs, departments, and centers. It’s my hope that individually as well as professionally we all got a little bit of that Rosenzweig spirit on the way out the door, too.