User-testing a digital edition: Getting the feedback you need

I’ve been thinking about specific questions I want to ask during the user testing for Infinite Ulysses (my dissertation project)—more specifically, Rachel’s tweet had me thinking about how to describe to user-testing volunteers what kind of feedback I’m seeking.  I came up with some statements that model the types of thoughts users might have that I’d like to know about. For the first phase of beta-testing on my project, I’ll ask testers some abbreviated form of:

Did you have one of the following thoughts, or something similar? Please elaborate:

  1. “I wanted to see ___ but didn’t / couldn’t locate it / stopped looking for it even though it’s probably there”
  2. “This is broken”
  3. “This is cool”
  4. “This could be done better (by ___?)”
  5. “This doesn’t work how I expected (and that is good / bad / should be changed ___ way)”
  6. “Where is ___ again (that I used before)?”
  7. “This requires too many steps / is hard to remember how to use”
  8. “Don’t see how / why I’d use this”
  9. “I’d use this ___ way in my reading / teaching / work”
  10. “I gave up (where, when)”
  11. “____ would make me like this site / like it better / use it regularly”
  12. “I’m not interested in Ulysses, but I’d like to use this interface for a different novel / non-fiction text / (other thing)”
  13. “Starting to read the text on the site took too long (too much to learn / too much text or instruction to wade through) / took the right amount of time (intro text and instruction was appreciated or easily skippable, or site use was intuitive enough to get started)”
  14. “I would recommend this site (to x person or y type of person)”
  15. “The problem with this site is ___”
  16. “Reading on this site would be easier if ___”
  17. “I wish you’d add ___”

Testing stages for Infinite Ulysses

As I start to get all my design and code ducks in a row for this project this month, I’ll be moving into a cycle of user-testing and improving the website in response to user feedback. I’ll be testing in four chunks:

  1. Self-testing: Working through the site on my own to locate any really obvious issues; for example, I’ll step through the entire process of signing up and reading a chapter on the website to look for problems. I’ll step through the site with different user personas in mind (imitating the different backgrounds and needs of some of my target audiences, such as first-time readers of Ulysses and teachers using the site with a class). I’ll also apply various website assessment tools such as validators for code and accessibility.
  2. Alpha testing: Next, I’ll run some low-stakes testing by inviting my dissertation committee, close friends, and family to break the site. The goal is to reach a point where the next stage of testers won’t hit problems big enough to take the site down or to make them wait while I spend days fixing a difficult issue.
  3. Beta testing: I’ll conduct beta-testing this fall and spring by opening the site to exploration and use by the people who have generously volunteered via this sign-up form. Phase I will take place this fall and take feedback from volunteers using the site individually; Phase II will take place in winter and early spring, continuing individual use of the site, and adding in people using the site in groups, such as teachers with their classes, or book clubs reading together.
  4. Post-release testing: I’ll continue to take feedback once the site goes live for use by anyone in June 2015, although I’ll need to scale down work on requested enhancements and focus on bug fixes and continued data gathering/analysis on how people use the site to read. Setting up site logging and Google Analytics on my site will help me monitor use as time allows.

User testing how?

I’ll be building on my user-testing experience from my master’s research and the BitCurator project, as well as trying some new tactics.

The thesis for my information master’s degree involved a use study exploring how members of the public (and others with a content interest in a website but little experience with digital humanities and edition/archives commonplaces) experienced scholar-focused DH sites, using the Blake Archive and Whitman Archive as examples. I was particularly interested in identifying small design and development changes that could better welcome a public humanities audience to such sites. For my master’s research, I built on existing user-study metrics from a related field (learning technology) and also created and tested questions suggested by my research questions; feedback was gathered using a web survey, which produced both quantitative and qualitative data for coding and statistical analysis.

Building on that experience, I’m planning to set up:

  • web surveys for willing site visitors to fill out after using the site
  • shorter web pop-up questions—only for users who check a box agreeing to these—that ask quick questions about current site use (perhaps incentivized with special digital site badges, or with real stickers if I can get some funding for printing)
  • in-person meetings with volunteers where I observe them interacting with the site, sometimes having them talk aloud (to me or to a partner) about their reactions and questions as they use the site
  • various automated ways of studying site use, such as Google Analytics and Drupal site logging

For bug reports and feature requests, site visitors will be able to send me feedback (either via email or a web form) or submit an issue to the project’s GitHub repository. All bug and enhancement feedback will become GitHub issues, but I don’t want to make users create a GitHub account or figure out how to submit issues if they don’t want to. I’ll be able to add a label to each issue (bug; enhancement request; duplicate of another request; out of scope for finishing my dissertation, but a good idea for someday; and won’t fix for things I won’t address or can’t replicate). I’m using Zapier (a service similar to If This Then That) to automate sending any issues labeled as bugs or enhancements that I want to fix before my dissertation defense to Basecamp, in an appropriate task list and with a “due in x days” deadline tacked on.
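The label-driven triage above can be sketched as a small script. The label names and the `triage` helper here are my own illustrations (Zapier handles the forwarding in practice), though the GitHub REST API endpoint mentioned in the comment is real.

```python
from datetime import date, timedelta

# Labels applied to incoming GitHub issues (illustrative names; only
# bug and enhancement issues should become Basecamp tasks).
ACTIONABLE = {"bug", "enhancement"}
SKIP = {"duplicate", "out of scope", "won't fix"}

def triage(labels, today, due_in_days=7):
    """Decide whether a labeled issue should be forwarded to Basecamp.

    Returns a (forward, due_date) pair: actionable issues get a
    "due in x days" deadline; everything else stays in GitHub only.
    """
    names = {label.lower() for label in labels}
    if names & ACTIONABLE and not names & SKIP:
        return True, today + timedelta(days=due_in_days)
    return False, None

# In practice Zapier watches the repository, but labeled issues could
# also be fetched directly from the GitHub REST API, e.g.:
#   GET https://api.github.com/repos/OWNER/REPO/issues?labels=bug
```

A Zap (or a small script like this) only needs each issue’s labels and the current date to produce the Basecamp deadline.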

To read more about user testing in the digital humanities, check out my earlier posts on the topic.

User testing for the long haul

I’ve got one major technical concern about this project (which I’ll discuss in a later post) and one big research-design concern—both related to the “Infinite”-ness of this digital edition. My research-design concern is the duration of this user testing: I’m pursuing this project as my doctoral dissertation, and as such I’m hoping to defend the project and receive my degree in a timely manner. Usability testing can be done over the course of a few months of users visiting the site while I iterate on the design and code; testing use and usefulness, as in

  1. how people want to use the site (i.e. perhaps differently from how I imagined),
  2. how people read Ulysses (a long and complex book which, if you’re not attempting it in a class or book club, might take you months to read), and
  3. what happens to a text like Ulysses as it accrues readers, their annotations, and the assessments the social modules let readers place on others’ annotations (the more readers and annotations, the more we can learn)

are things I can begin to gather data on, and begin to speculate on what trends that data suggests we’ll see, but I won’t be able to give them the full treatment of years of data gathering within the scope of the dissertation. To address this, I’ll both analyze the data I do gather over the course of months of user testing, and try to automate further data-gathering on the site so that I can supplement that analysis every few months or years without requiring too much effort or funding to sustain this work.
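As a small, concrete example of the automated supplemental analysis I have in mind: if each annotation’s reader assessments are stored as simple up/down votes (a hypothetical data shape; the site’s actual social modules may record these differently), periodic summary statistics are easy to script.

```python
from collections import Counter

def annotation_summary(assessments):
    """Summarize reader assessments of annotations.

    `assessments` maps an annotation id to a list of +1/-1 votes
    (a hypothetical shape for the social modules' data).
    Returns (total_vote_count, net_score_by_annotation).
    """
    net = {aid: sum(votes) for aid, votes in assessments.items()}
    total = sum(len(votes) for votes in assessments.values())
    return total, net

def top_annotations(assessments, n=3):
    """Ids of the n annotations with the highest net score."""
    _, net = annotation_summary(assessments)
    return [aid for aid, _ in Counter(net).most_common(n)]
```

Run on a schedule (say, monthly), a report like this could track how the edition changes as annotations and assessments accumulate, without requiring much ongoing effort from me.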

Would you like to volunteer as an early user of the Infinite Ulysses site? Sign up here!

Getting Digital Humanities Done: Schedule, Software, Etc. for a Digital Dissertation

Attending the Scholars’ Lab/NEH Speaking in Code symposium emphasized for me how much of a research developer’s work involves tacit knowledge, and this opacity extends to scholars in general when it comes to how stuff—books, articles, projects—gets made. You get semi-polished glimpses of a work becoming itself, from its beginnings (grant applications, abstracts) to the draft stage (actual drafts, as well as blog posts and conference talks feeling out the research) to the end point (white papers, finished products, and sometimes honest discussions of a project’s struggles). But what does doing this work look like on a daily basis? What tools do you use, and where, and when? Matthew Kirschenbaum writes:

Nobody teaches you how to write a book. Yes, in graduate school, you may get “feedback” on your dissertation to a greater or lesser extent from mentors and peers. But that typically has very little to do with the process of executing on a marketable book project. (Kirschenbaum, blog post)

Kirschenbaum’s post attempts to remedy this lack of examples of how scholarly work gets done, discussing practical details of his book-writing process such as citation management. I’ve tried to emulate this example-sharing by blogging about my dissertational process, from the technical choices I’ve made to the theoretical questions I’m considering. In this post, I’ve asked myself what I’d most like to learn about others’ DH work set-ups, and decided that the physical and screen-space environments, most-used software and websites, and work habits were the aspects I would most benefit from seeing other examples of. I’ll describe the workplace set-up, schedule, and software that help me make progress on my Infinite Ulysses project, with the hope of hearing more from others about the day-to-day environments and behaviors that produce their digital humanities work.


MITH. I’m in residence at MITH as the Winnemore Digital Dissertation Fellow this academic year, and the plan is to do most of my remaining dissertation work at the office. MITH is generally either quiet or has a pleasant hum, but sometimes I’ll block out sounds while I’m blogging or working through a particularly frustrating problem. I saved up for some noise-canceling headphones, which are pretty magical at producing quiet even when I’ve been trying to work in an airport. In the past, I’ve used cheaper but still useful alternatives such as earplugs or normal headphones playing free white noise. Coffitivity is great when you want something a little more realistic in the background.

I use an external monitor, keyboard, and mouse, both for the extra screen space and to position everything at the optimum height for my wrists and neck.

A photo of my fellowship desk at MITH, Fall 2014.

Home. I worked entirely at home over the summer, but my set-up there is pretty similar (external monitor, desk/chair/externals adjusted to promote good posture/happy muscles).

In terms of screen space usage, I’ll have one Chrome browser window open with a number of tabs up (or two separate windows if I want to separate research/question-solving from website work), as well as a minimized Firefox window (for checking site design and script using the plugin Firebug). I’ll also have a text editor open with several different code files I’m working on or consulting, and the GitHub for Mac application open for committing (saving/uploading) new code to my private dissertation code repository (to be made public soon).

Work schedule

Time. I have the most energy in the morning; I was the kind of person who preferred taking classes in the 8 a.m. slot during college. Being up early lets my day start at a slow pace, and I have a bit of time by myself to get into a good working frame of mind and plan what I want to accomplish that day.

I get to MITH a little before 8 a.m. and stay until I hit a good stopping point for the day, either mid- or late afternoon. I do this for four consecutive days a week, with some extra email/organization/reading work on those evenings after I go home and try to jog. On the remaining three days, I try to do nothing related to the dissertation, focusing instead on fun projects (creating a new theme for this blog, learning to bake really good bread, trying to increase my running distance, learning Go). So far, this is a great schedule: I get way more done than if I try to say “I should be working on my dissertation 40+ hours a week and on every single day!”, and I’m always refreshed and excited to get back to my research after a few days resting my brain with other things. I think I’d allow myself to work on one of these days off if I was really excited, but I never want to feel like I need to; getting my project done a bit sooner isn’t worth burning out or not being able to think about anything but my project.

Scheduling tasks. I previously used TaskPaper, a task list app from the makers of the distraction-free writing app WriteRoom (which really helped with writing my candidacy exam presentation!), to keep track of what I needed to do on my dissertation. That got a bit unwieldy. I’ve recently switched to keeping a TaskPaper list for things I want to do on non-dissertating days (so I can write them down and not have them distracting me), and using Basecamp to manage teaching and research tasks. Trello is a good, free alternative, but I ended up finding the set-up for Basecamp matched the structure of my task list a bit better (I’m also more used to Basecamp from using it for MITH projects).

On Basecamp, I’ve attempted to break down everything I need to do in the remaining year (knock on wood) of my dissertation into small tasks, then added a date to the tasks I want to complete in the next month, so I can filter to a view of just what I need to accomplish today or this week in order to stay on track. Checking off tasks is pretty gratifying, and having a clear overview of what needs to be done by what point (after first, of course, creating a project timeline that I could reasonably follow, with a lot of empty buffer time at the end for all the things I can’t plan for) keeps me from getting stressed about what I haven’t done yet—if I complete this week’s tasks within the week, I know I’m on schedule.

Within a given week, I try to vary between coding-focused days and other activities such as organization of tasks and URL bookmarks, blogging, and reading/research. I generally start the week with coding so that if a problem proves frustrating, I can switch to a non-code day to clear my mind before trying to figure out the issue again.


Citations. I’ve managed citations in the past with the free and awesome Zotero (e.g. for my master’s thesis), and I expect to do so again once I start creating my final article about my project. If you do a lot of your research on the Web, Zotero is fantastic—you can use it to pull citation info from a page you’re reading, as well as take a snapshot of the page in case it disappears later.

Research for me right now means a mixture of theoretical articles (e.g. papers about the social reception of Ulysses, or studies on meaningful crowdsourcing for the humanities, or blog posts), example digital editions and other digital textual work (e.g. Alan Galey’s Visualizing Variation), and support for my research coding/design (e.g. code snippets, the Drupal StackExchange, tutorials, and documentation). I initially saved these bookmarks into various folders, but the folder names at the start of my project proved to be way too vague and all-encompassing two years down the road (“Scholarly Sources to Read”, anyone?). I recently re-organized these so it’s easier to find what I need, with more specific folder titles.

Screenshot of Chrome bookmarks menu with folders for various dissertation topics

I wanted to keep the folders in a certain order, but kept losing that when I’d sort their contents by title—thus the numbering per folder. In my research resources folder, you can see some more specific sub-folders, such as one where I’m keeping track of the various digital edition and textual annotation tools I considered and tested before ending up working with Annotator.js.

Coding. I read, edit, and write code in BBEdit (the pro version of the free TextWrangler), which has far more fancy features than I can regularly use. Features I’ve found useful include scripts that convert minified CSS and JS into readable, well-structured code; folding (collapsing) sections of a code file I’m not working on and don’t need to see; and searching across multiple folders.

Track changes. Keeping track of changes in my code is important for several reasons: I want to document effort for my dissertation committee (especially as my dissertation looks different than those they’re used to), give proper credit and correct licensing for any pieces of open-source code I build on, and be able to roll back my code to an earlier state if I need to. I use git to do this, mostly through the website GitHub and a GUI app (GitHub for Mac) that is quicker for me than using git on the command line.

I’ve started creating Drupal installation profiles after I make major changes to my work, so I can easily re-create my site without needing to reset all the options and reinstall various modules; eventually, I’ll package all my work as a Drupal distribution, so that anyone can spin up a participatory digital edition website like mine with no more trouble than setting up a vanilla Drupal website (and if setting up a vanilla Drupal website is difficult or unknown to you, I’ll provide user-friendly instructions on that as well—or you can check out Quinn Dombrowski’s awesome Drupal for Humanists).

Command line and local work. I don’t use the command line as much when I’m just working locally (i.e. on a version of the website that lives on my computer and isn’t accessible on the Web), and I’ve been mostly working locally lately. When I need it, I use Mac’s Terminal app to manage things that are easier to do from the command line than via the GUI, such as Drush (a set of commands that makes doing sysadminy and developer things with a Drupal site quick), or to move files around. When I’m working on a site on the Web, I use the command line to install things, edit files, and more. When I want to work locally on my Drupal site, I use MAMP Pro (the paid version of the free MAMP), which imitates the things I’d be using on a web server to make a Drupal site work (e.g. MySQL).

Technical support. An important piece of tacit understanding we discussed at Speaking in Code was the ability to successfully search for answers to technical problems; surprisingly, knowing how to Google well as a coder—especially when technical words have alternative popular meanings—is a difficult skill to impart. Since someone has probably encountered the same or a similar problem to yours before and written about it, and coding today is all about not re-inventing the wheel but building off of existing open-source code, knowing how to search for what you need is a vital technical skill.

I’ve been working on some of the core functionalities of my Drupal site lately (things related to highlighting and annotating text in a digital edition), so my main searches have to do with Drupal, PHP (which Drupal is largely written in), and JavaScript (the core functionality of my site is provided by Annotator.js). Besides pure Googling when I hit an obstacle, I visit the module pages for Annotator and Annotation as well as their issue queues (to see if anyone else has run into the same issue), and the Stack Overflow, Drupal Answers, and Server Fault Q&A sites. I’m also on the Annotator.js listserv, which is fairly low-traffic and offers a mix of news about the community of Annotator users/developers and developer questions.

Blogging. I’ve covered how I figured out what I needed to blog about in this post on affinity mapping.

Other work set-ups

If you’ve similarly written about the day-to-day process of your scholarship—or are inspired to do so—let me know and I’ll link to it!

Digital Dissertation Fellowship at MITH!

For the next academic year, I’ll be in residence at MITH (the Maryland Institute for Technology in the Humanities, a digital humanities think tank) as the Winnemore Digital Dissertation Fellow. The fellowship will support my digital dissertation work and wrap up with a Digital Dialogue presentation in the spring.

MITH has been integral to my development as a digital humanist, functioning like a second academic department and helping me gain experience in diverse subfields of DH (e.g. web development, digital archives, digital pedagogy, usability testing, and technical documentation)—I’m really grateful for this capstone opportunity! I’ll be blogging more about the fellowship here as well as on the MITH blog.