Encoding Financial Records

We held our NEH-funded meeting on encoding financial records at Wheaton College on August 18 and 19, and initial responses to our assessment instrument (read: a SurveyMonkey questionnaire) suggest that participants agree we had a productive and energizing series of discussions. Over the next couple of months we will be testing ideas based on ontological and embedded approaches to encoding, and we will complete our white paper by the end of the year.
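For readers who weren't in the room, a rough sketch of the distinction we will be testing. In an embedded approach, the financial data lives in the transcription itself as markup. A minimal example in TEI, with invented names and values (the element choices here are mine, not a model the group has agreed on), might look something like this:

    <!-- hypothetical entry: the names, values, and tagging here are
         illustrative only, not the project's agreed model -->
    <p>Sold to <persName ref="#jsmith">John Smith</persName> six
       bushels of corn at <measure quantity="0.50" unit="dollars"
       commodity="currency">fifty cents</measure> per bushel,
       <date when="1828-03-04">4 March 1828</date>.</p>

An ontological approach, by contrast, would model the transaction itself, its parties, commodities, amounts, and date, as structured data linked to the transcription rather than embedded within it. Part of what we will be testing is how well each approach supports aggregating records across projects.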

I was pleased by the level of enthusiasm for the endeavor over the course of the two days. Participants contributed experience and examples from their own projects, and I learned new things about current ideas on interoperability and on making data harvestable.

We have begun to build an exciting community of practice composed of participants with diverse expertise who see significant potential in developing models for digitizing financial records from the early nineteenth century and before.

User-Friendly XML

As I continue to think through how I do history digitally, I note both that historians have been using computers for a long time and that what I do differs from the statistics-heavy social science computing people were learning when I was in graduate school. Programs like SPSS didn't seem relevant to my dissertation project, which focused on communities too small to yield statistically significant results. I didn't know about ArcGIS, though it might be interesting to see what one could learn by overlaying census data on Whitney Cross's maps of the Burned-Over District. Perhaps at some other point.

I'm struck by how easily I accepted the idea that transcribing and marking up journals, diaries, and now financial records could yield interesting results for understanding the nineteenth-century United States. But an analogy that came to me this morning helps clarify how I got there.

I've noted here before that my comfort with code grew out of a coincidence: my post-secondary education began just as computing was becoming democratized. At Rice, my experience with mainframes began with learning to use word processors to type papers. In my early post-collegiate jobs, my ease in picking up similar applications earned me a position as the WordPerfect expert among the secretarial staff of a department at the UVA Medical School. I bought my first PC in grad school and developed minimal comfort with DOS, but I didn't become a power user until I bought my first Mac and discovered the joy of the Apple interface.

My development as an academic user coincided with the spread of the Internet in the 1990s, though I remained a low-end user focused on email and word processing until my first exposure to TEI and XML in 2004. The utility of statistical data remained relatively opaque to me, and my fondness for Macs, coupled with a parallel contempt for Windows as a DOS-impaired lesser version of the Apple interface, kept me from exploring other possibilities. My interest in pedagogical uses of technology and the advent of the World Wide Web led to my involvement in discussions about cross-platform applications, and I became more and more comfortable in conversations about technology. Thus I had been primed for the next stage: learning about XML through exposure to TEI and thereby becoming a different kind of academic user.

The analogy between DOS and SPSS on one side and the Mac and XML on the other has considerable explanatory power for me as I think about how I have come to be convinced that XML/TEI tools for transcription and markup have a place in undergraduate classrooms. I think it goes a long way toward expressing some of the assumptions behind my notion that liberal education should include exposure to computational thinking.
