Note: I'm in the process of reconstructing several older posts after a data breach. My site was hacked in a ransomware attack, and I discovered that I had failed to back up 3-4 posts written in relatively rapid succession between June 1 and August 1, 2019. My goal is not to rewrite every post but to create at least a brief stub for each post that was erased, especially in the unlikely event that someone has bookmarked an old post and is trying to revisit it. This message is meant to serve as an explanatory placeholder as I put everything back together as well as I can. In the meantime, feel free to shoot me an email if you're looking for a specific post. Thank you for your patience.
This post was originally a preamble to the book project I had begun in Spring 2019. It included a longer story about how my master’s and doctoral work investigated print culture topics (e.g., magazine serialization, publishing industry intermediaries, Willa Cather as ghostwriter) using traditional humanities methods. Ultimately, I found that these approaches were insufficient to address the deeper research questions behind the case studies I chose.
After finishing my doctorate, I pursued opportunities to develop computational training, including large-scale text analysis methods. Scholars of print and publishing have been largely skeptical of such cultural analytics methods, especially compared with literary studies scholars focused on genre and form, such as Jockers, Moretti, and Underwood. Numerous articles have examined the causes of this disparity, but I believe my home discipline will benefit most from one or more well-executed and convincing monographs that make the case for cultural analytics methods by example. I also want to argue that bringing computational methods into contact with humanities expertise can lead to methodological novelty and foster better understanding of large-scale, computational, and quantitative methods.