"There are only 10 types of people in the world:
Those who understand binary and those who don't."
This week marked the 66th anniversary of the discovery (and subsequent removal) of the first computer bug. Ever wondered why it is called a "bug"? Because this first one actually was a bug – a moth – that had shorted out some points in Relay #70, Panel F, of the Harvard University Mark II Aiken Relay Calculator on September 9th, 1947 (Huggins). We have since continued to use the term "computer bug" to refer to problems with our computers, and fixing those problems is still described with the terminology of "debugging." Thankfully, those debugging processes now require (in most instances) much less expertise in extermination.
Yet, this little excursion into computer history – and, perhaps, computer terminology – happens to precisely coincide with some of the work (yes, work!!!) I have been doing this week. For the purpose of educating myself a bit more about what it might actually be that I am doing in the Digital Humanities, I somewhat haphazardly managed to sign myself up for a seminar in the philosophy and theory of Digital Humanities – if you know me, you'll understand what I just got myself into. For this week, then, I am working (yes, working!!!) my way through the history of mathematical thought that ultimately inspired contemporary automated computation (Martin Davis).
I have often made the statement to my students (as I'm sure many of you have as well) that I did not choose the study of English – and of literature, to be more precise – because I was good at math. In reading The Universal Computer: The Road from Leibniz to Turing, I therefore have to admit to having a few difficulties with the mathematical terminology and descriptions of mathematical theories and their implications (hence: work!). I am surprised, however, to find that the math informing the software we use is based much less on what I would have considered 'true' math (those pesky complicated computations they make you do in high school), and much more on logic. Yes, this is logic as it can be calculated, but the affinity of Leibniz et al. for logical, perhaps even rhetorical, questions within the field of mathematics is one that reveals computer science as much more historically related to the humanities than seems to have been evident in academia for the past few decades.
For example, I have often asked myself what the benefit of Digital Humanities would be for someone in Rhetoric and Composition studies. As a scholar of literature, I have created digital editions of texts; I have marked up texts for further analytical exploration; I will, this semester, even create a program for the purposes of text analysis. Yet all my endeavors thus far have clearly related to the study of literature – either to offer another representation of an already existing literary text, or to discover some previously unknown patterns in a work of literature relating to style, plot, character, speech… But what about rhetoric studies? Perhaps I could imagine some computational examination of the rhetorical devices used in, say, "Letter from Birmingham Jail," but how might Digital Humanities assist in formulating more precise rhetorical theories for pedagogical purposes in particular? I am asking this question specifically in regard to pedagogy, since much of my friends' work in Rhet/Comp focuses on formulating pedagogical strategies and theories. In other words, although I know that the Rhet/Comp field encompasses much more than teaching freshmen what ethos, logos, and pathos are all about, it is precisely the applicability of DH within Rhet/Comp pedagogy that thus far eludes me. How can Digital Humanities provide a methodology to further the development of very intricate pedagogical strategies and methods?
I am certainly not the person to answer this question, but reading Davis' book seems to indicate that rhetoric already has a much larger role in computer science and its history than many people might be aware of, and certainly a larger role than literature. When Davis explains Gödel's proof that there are propositions within Russell's Principia Mathematica that are true, yet not provable, it reminds me a bit of some of the deductive brainteasers I personally am so fond of: "U is true. Suppose that it were false. Then, what it says would be false. So it couldn't be unprovable and would have to be provable, and therefore true. This contradicts the supposition that U was false. Hence it must be true" (italics in original, 119). Know which brainteasers I'm alluding to? Where you have four or so 'people' who all say seemingly contradictory things about who stole an item, and you have to deduce who is the liar from the combination of their statements? But I am digressing…
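Incidentally, this kind of brainteaser is exactly "logic as it can be calculated": a machine can solve it by checking every possible scenario for consistency. As a small illustration (the particular puzzle below is invented for this sketch, not taken from Davis), here is how a computer might deduce the thief and the liar from four suspects' statements:

```python
# Brute-force solver for an invented liar-style deduction puzzle:
# one of four suspects stole an item, and exactly one of their
# four statements is a lie.

suspects = ["Alice", "Bob", "Carol", "Dave"]

# Each statement is modeled as a test against a hypothetical thief.
statements = {
    "Alice": lambda thief: thief == "Bob",    # "Bob stole it."
    "Bob":   lambda thief: thief == "Dave",   # "Dave stole it."
    "Carol": lambda thief: thief != "Carol",  # "I didn't steal it."
    "Dave":  lambda thief: thief != "Dave",   # "Bob is lying."
}

# Try every suspect as the thief; keep only the scenario in which
# exactly one statement comes out false.
solutions = []
for thief in suspects:
    lies = [name for name, claim in statements.items() if not claim(thief)]
    if len(lies) == 1:
        solutions.append((thief, lies[0]))

print(solutions)  # -> [('Bob', 'Bob')]: Bob is both thief and liar
```

The program simply formalizes the "suppose it were false" move in Gödel's argument: assume each scenario in turn and discard the ones that contradict themselves.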
What really is at stake here, at least for me, is the philosophical and rhetorical nature of these mathematicians' approach to computational logic. Deductive (and inductive) reasoning plays a large role in humanities studies, and perhaps even more so in the study of rhetoric, so a computational approach to those very pillars of scholarship constitutes perhaps less an alienation from and more an assimilation into humanities studies. If not only Digital Humanities, but the entire study and development of computer systems, can be considered as originating from humanistic thought from the very beginning, the inclusion of computational tools in the study of the humanities seems much less threatening. Ironically, this inclusion is then much less an importation of mathematics, or, as some might say, a deviation from humanistic thought, than it might first appear. Perhaps the distinction between the disciplines – mathematics, science, humanities… – has been a rather artificial one from the beginning (it is certainly a political one). In this sense, then, it may be true, as others have already said, that the Digital Humanities could be a reconciling force in starkly divided academic disciplines. We just never thought we might need bug spray to do it.