
Martin Paul Eve

Professor of Literature, Technology and Publishing at Birkbeck, University of London


Those who are not invested in the digital humanities, on either side of an often nasty, binary “for-or-against” argument, may have missed the bust-up in the past few days over Nan Z. Da’s “The Computational Case Against Computational Literary Studies” in Critical Inquiry. It’s probably rash of me to do so, but as I have just been discharged from hospital and am feeling better, I thought I would jot down a few notes from my initial reading of the piece. These move from the section on my work through to broader remarks.

First, I get off rather lightly, which is a kind of “sigh of relief” moment. I am deemed to be an “exemplary case” because I “only [use] the statistical tools that are needed, [explain] the relative simplicity of [my] measurements and [give] credit to these measurements as things already available in coding packages instead of presenting them as though he devised them from scratch”. OK, well, thanks: I could almost use that as a review blurb for the book! I am spared the harshest critique, in other words, and I want to offer some reasons for this.

Where I am at least mildly criticised, though, is for the disproportion of input labour to output literary-critical result. That is: in retyping a novel and then conducting textual analysis upon it using various relatively stable and well-understood statistical methods, I come only to the observation that repeated character names are associated with the detective-fiction section of Cloud Atlas. This seems fair enough; it is what the article at which Da looks does (although the book does a heap more; I did not re-type an entire novel just to come to this observation).
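For readers who want a concrete sense of how simple such a measurement can be, here is a minimal illustrative sketch in Python. It is emphatically not the method from the book: the section snippets are invented, and the crude capitalisation heuristic stands in for proper named-entity handling.

```python
# Illustrative only, not the book's actual method: a crude measure of how
# much the name-like (capitalised) vocabulary of a section repeats.
# Sentence-initial words will pollute the counts; a real analysis would
# need proper named-entity recognition.
from collections import Counter
import re

def repetition_ratio(section_text: str) -> float:
    """Of the capitalised tokens in a section, what share are repeats?"""
    tokens = re.findall(r"\b[A-Z][a-z]+\b", section_text)
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    repeats = sum(n - 1 for n in counts.values())
    return repeats / len(tokens)

# Hypothetical snippets standing in for sections of a novel.
print(repetition_ratio("Luisa Rey met Sixsmith. Sixsmith warned Luisa. Luisa fled."))  # 0.5
print(repetition_ratio("A stranger arrived. Nobody asked questions. The town slept."))  # 0.0
```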

I am by no means the only person to work closely with texts using computational methods; I would never claim to be. But what I have been working on in Close Reading with Computers is ways in which we can re-integrate the seemingly irreconcilable poles of “distance” and “depth”, or “proximity” and “profundity”. What types of repetitious computational task can help us to understand novels, poetry, and plays close up? How might the traditional metaphor of the telescope, so often used for distant reading, look if we instead thought of it as a microscope? (And yes, I know that there have been prominent objections to the “close reading is a microscope” argument; see Smith, Barbara Herrnstein, ‘What Was “Close Reading”?: A Century of Method in Literary Studies’, The Minnesota Review, 87 (2016), 57–75 <https://doi.org/10.1215/00265667-3630844>.)

Perhaps I was simply not as astute as others might have been in noticing the features to which I draw attention as a result of the computational methods; you could have got there without the digital tools. But I could not see most of the things on which I comment until after I had done some computational work. For my project, the important thing was to ask questions of a text – how does Mitchell write his different genres? How do the different versions of the novel differ? How is the stylistic imaginary of the nineteenth century represented in this text? – and then to use digital methods, where appropriate, to draw attention to elements that I might otherwise have overlooked. A good example is the present-tense narration of the Luisa Rey section of Cloud Atlas. Again, it may be my fault, but it took me a good while to notice that the pace of this section owes much to its being the only chapter of the novel written in the present tense. I knew this at some intuitive level, but if you had asked me “how many sections of Cloud Atlas are written in the present tense, and which are they?”, I am not sure I would have been that confident before I had flagged it using my digital approaches.
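To make that concrete, here is a rough, hedged sketch of how one might flag the dominant tense of a passage computationally. It is not code from the book; it leans on NLTK’s off-the-shelf part-of-speech tagger, with the Penn tags VBD (simple past) and VBP/VBZ (simple present) serving as crude proxies for narrative tense.

```python
# A sketch only: guess the dominant narrative tense via part-of-speech tags.
# Requires: pip install nltk
# plus the NLTK tokeniser and tagger data packages (e.g. 'punkt' and
# 'averaged_perceptron_tagger' on most NLTK versions).
import nltk

def dominant_tense(text: str) -> str:
    """Classify a passage as present- or past-tense by counting verb tags."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    past = sum(tag == "VBD" for tag in tags)               # simple past
    present = sum(tag in ("VBP", "VBZ") for tag in tags)   # simple present
    return "present" if present > past else "past"

# Invented example sentences, not quotations from Cloud Atlas.
print(dominant_tense("Luisa drives to the plant. She knows something is wrong."))
print(dominant_tense("Luisa drove to the plant. She knew something was wrong."))
```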

This led me to wonder what happens, with respect to tense, in other works of crime fiction. It turns out that present-tense narration is not necessarily as common in crime thrillers as a reading of the Luisa Rey chapter might suggest. Dan Brown’s Robert Langdon novels are written in the past tense. Robert Harris, in thrillers such as An Officer and a Spy (2013), alternates between present- and past-tense narration. Le Carré’s A Delicate Truth (2013) also segues between tenses. Ian Rankin deploys the same shifting technique in his crime fiction. Elly Griffiths’s Ruth Galloway novels are written in the present tense, but her Stephens and Mephisto series is written in the past.

For me, in this project, digital methods serve, then, as transforming and clarifying lenses: I use them to find salient formal features, only then to re-integrate these with argumentative close readings. And this is perhaps because of the specificity of my project; I am not seeking to give a paradigm-shifting account of literary periodisation, for example. I am interested here simply in the bridges between words and things, between the formal and the thematic, the text and its effects.

In a way, then, the group that Da puts under the label “computational literary studies” (“CLS Squad”, as I now think of it) is not coherent. I feel that I am doing something very different to the large-scale literary-history side of things (even when those large-scale approaches must first work on small-scale samples to build their methods). As a consequence, my tools are used, in Close Reading with Computers, alongside very close reading. The cross-validation of my digital techniques was conducted with eyeballs on the text. When I found something that I thought was interesting, the next step was to ask: what does this actually tell us? Is it present in the actual text? What can I now see, hidden in plain sight, that wasn’t clear before? This is, I think, very different to modelling, say, one hundred years of literary history and then having to interpret that secondary dataset. I have probably made some mistakes in my work (the full data and software for the book will be available so that people can check). But I was much less likely to misinterpret the findings, since my project is about closeness to a work. I am conducting a kind of formalist reading, not a literary-historical one.

Second, there is much in Da’s piece that needs answering, and I leave this to others. I do think the work would have been stronger, though, had elements of a routinized anti-DH polemic not crept in (without resorting to a critique of tone, there is a certain ungenerosity to some of this). Was it really necessary to criticise the fact that Andrew Piper has been successful at bringing in grant money? This money is not used to pay for software or infrastructure, as the piece implicitly claims (Da herself notes that most of the software is free and open source), but for the labour of researchers. Perhaps there is a fair comment to be made on DH’s allocation of funding (though it is hardly as large as some make out). But it is disconcerting to see people cheerleading for less money to be put into the study of humanistic objects of inquiry. Perhaps, though, it is not a call for less money in general, but for a reallocation away from digital approaches.

Third, there is something interesting about the piece’s critique of DH as incrementalist improvement. Da basically says: it is not good enough for digital methods to claim that they are getting better, because they are wrong at the moment and strong interpretative conclusions are being drawn erroneously. I do not think I agree, because all methods for studying literature are flawed and contain, within them, dialectical improvements and replacements (Ted Underwood might, as I seem to remember from Distant Horizons, disagree that this is a dialectical process in which paradigms are “replaced”). I think Alan Liu put this well on Twitter when he compared approaches in traditional literary studies to the Da critique: ‘e.g. (generic example): “Wordsworth uses ‘joy’ a lot in important poems like ‘Tintern Abbey’.” Evidence of that sort underlies much of literary studies, going back to close reading. Let’s compare the statistical validity of that to DH’s attempt to make it, if not right, better’.
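The measurement beneath Liu’s generic example is, of course, trivial to make; the statistical question is what “a lot” means in the absence of a baseline corpus. A toy sketch of just the measurement (the poem text below is an invented stand-in, not a Wordsworth quotation):

```python
# Toy illustration of the bare measurement behind "uses 'joy' a lot".
# What it cannot show is significance: that needs a comparison corpus.
import re

def per_thousand_words(text: str, word: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    return 1000 * tokens.count(word) / len(tokens) if tokens else 0.0

stand_in_poem = "joy in the fields and joy upon the hills and quiet joy at dusk"
print(per_thousand_words(stand_in_poem, "joy"))  # ~230.8 occurrences per 1,000 words
```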

Fourth, there are huge infrastructural implications to Da’s piece. In other disciplines, these are already being broached via the rhetorics of the reproducibility and replication crises. As Alan Thomas at the University of Chicago Press asked: “How realistic for authors and publishers” are Da’s recommendations of full datawork and replicable software? Well, in the present moment, this is possible. We can lodge these artefacts in various preservation-backed repositories with stable identifiers and the like. The real question is: for how long do we want to be able to replicate a finding? This is a question of usage, as opposed to one just of preservation. Sure, we can make bits and bytes available for a very long time indeed. But how are they to be interpreted? Usage half-lives of work in the humanities disciplines are long, and I might want to validate some work undertaken six years ago. What guarantee do I have that software written six years ago will still run on the newest operating system?
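One partial mitigation, and it is only a sketch of one, is to deposit a machine-readable record of the environment alongside the results, so that a future replicator at least knows what the code originally ran against. In Python, assuming nothing about any particular project’s workflow:

```python
# Record the Python version, platform, and installed package versions at
# the moment of publication. This does not keep old software running, but
# it documents what "running" meant at deposit time.
import json
import platform
import sys
from importlib import metadata

environment = {
    "python": sys.version,
    "platform": platform.platform(),
    "packages": {dist.metadata["Name"]: dist.version
                 for dist in metadata.distributions()},
}

with open("environment-manifest.json", "w") as fh:
    json.dump(environment, fh, indent=2)
```

Containers and archived virtual machines go further, but they relocate the longevity problem rather than solve it: something still has to run the container a decade on.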

The other challenge is that “data” actually means “stuff”. Data can range from a tiny CSV representation of a spreadsheet up to terabytes of information. To say to publishers and archivists ‘please can I deposit my “data”’, when the spectrum of what that may contain is so wide, is a problem. There is an economic scarcity underlying all systems of digital preservation, as David S. H. Rosenthal has argued for years, and part of managing that scarcity consists of pre-selection, to stop a single project from consuming all available resources. Blanket calls for all data and software to be available over decade-plus timespans for replication and repeatability will therefore only be viable while digital literary studies remains a small, niche area. It is either that or we agree a common framework of data and software standards (which does not seem feasible to my mind).
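Whatever “data” turns out to mean for a given project, one thing an archive can reasonably demand at any scale is fixity information. A sketch, with a hypothetical deposit directory:

```python
# Build a fixity manifest (sizes and SHA-256 checksums) for a deposit.
# Fine for small deposits; for terabyte-scale data you would hash in
# chunks rather than reading whole files into memory, which is exactly
# where the economics of preservation start to bite.
import hashlib
import json
from pathlib import Path

def build_manifest(deposit_dir: str) -> dict:
    manifest = {}
    for path in sorted(Path(deposit_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path)] = {"bytes": path.stat().st_size,
                                   "sha256": digest}
    return manifest

print(json.dumps(build_manifest("my_deposit"), indent=2))  # "my_deposit" is hypothetical
```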

Finally, because I really must stop now, there are plenty of things for others to argue with in the piece. What is perhaps interesting and important, though, is that Da’s article, within our disciplinary space, amalgamates a broad set of critiques against a range of articles. The article zooms through a set of works to which Da has taken exception (perhaps rightly, perhaps wrongly; these truth claims need to be examined) and tears them apart, bracketing them all under the rubric of CLS. Does this happen in, say, the natural sciences? Are there broad systematic reviews that dismantle a series of articles around a certain theme (e.g. “The Computational Biological Case Against Computational Biology”)? What about in computer science? Or would we instead see piece-by-piece takedowns: responses to non-replicability, or to flaws in method, in specific articles? It can be hard to validate findings, as Da points out. It would also, presumably, be exhausting to test systematically every single article in CLS (Da says it took her two years to work on this piece, which tallies with my timetable for her contact with me).

Even if you give precise instructions to replicate the digital findings, the problem is that it is often not just the computational part that needs reproducing. That part can be easy enough: “run this line of code and it will recreate my graphs”. The bigger problem is querying the underlying study design, and then working out precisely what the computational part does and shows in relation to the final interpretation of the data. Who are the reviewers who will make the connections between the literary-interpretative and statistical-design spaces? There are not many people who can do this. A literary reviewer can see the literary interpretations but cannot necessarily judge whether the statistical evidence supports such conclusions. A statistical reviewer can understand the study design but might miss the broader literary knowledge needed to judge whether it asks the right questions. Inter-reviewer cooperation seems ideal, but hard to come by in a time-scarce environment.
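The “easy half” of that division of labour might look something like the following: a single, deterministic entry point that rebuilds a figure from deposited data. The filenames are hypothetical, and nothing here can show whether the study design behind the figure was sound, which is precisely the point.

```python
# Recreate a figure from a deposited CSV: the reproducible-but-shallow
# half of replication. Requires: pip install matplotlib
import csv
import matplotlib.pyplot as plt

def recreate_figure(data_path: str = "word_counts.csv",
                    out_path: str = "figure1.png") -> None:
    """Rebuild the bar chart exactly as deposited; no analysis is re-run."""
    with open(data_path, newline="") as fh:
        rows = list(csv.DictReader(fh))
    sections = [row["section"] for row in rows]
    counts = [int(row["repeated_names"]) for row in rows]
    plt.bar(sections, counts)
    plt.ylabel("Repeated character names")
    plt.savefig(out_path)

if __name__ == "__main__":
    recreate_figure()
```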

Some thoughts, anyway.