Pre-Christmas Reflections

Remember my very first blog post in this space, the one where I discussed the difficulty of simply explaining to someone outside the field what the heck it is we do?

Well, this will be the last blog post required for ENGL 946, and somehow I feel as if I've come full circle to the very beginning, albeit in a slightly modified way. One of my undergraduate professors once asserted that education, especially specialized graduate education, changes your perception of the world irreversibly. Applied to the pursuit of a Humanities degree, I think she meant you simply become very jaded about the corruptness of human nature. I mean, how could you not? After having read many, many mostly 19th-century novels like Sister Carrie, The Portrait of a Lady, or Uncle Tom's Cabin, how can you still believe in the success of pure human intentions? Corruption wins. After a while, everything seems to end like an episode of Law and Order: SVU — there is a conclusion, but it is always somewhat unsatisfactory. In the Humanities, you can't untake that red pill. Once you know, you cannot unknow.

Reading by Screen Light

As my blog post from a few weeks ago ("Glass Reality") already revealed, I live in a rather 'techie' household. And yes, the fact that my fiancé is a freelance software engineer and app developer certainly contributes to our always having the latest technology we (or better, he) can afford (I'm a grad student in the humanities, after all). After reading Hayles' considerations of How We Think, or perhaps even better, of how our thinking changes in relation to media usage, I could not help but wonder to what degree my own thinking has already been influenced by all this technology. I certainly lay no claim to understanding it all, or even a small fraction of the inside-the-black-box functionality (to use Kirschenbaum's terminology), nor do I possess more than a slight familiarity with coding. And this, as I shamefully admit, despite considering myself an aspiring Digital Humanist. My fiancé usually has a lot of explaining to do. A familiar scene: "Honey, I broke it!!! Help!!!" — He takes a quick glance at the document I've painstakingly encoded in TEI for the past few hours and have been trying to fix for the past 30 minutes, in hopes that Oxygen will finally grant me the great relief of that tiny little square in the lower right corner of the screen meaning that the document 'validated' — "You're not ending your tag in line 96732. It's a self-closer." Duh.
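
For anyone who has never run into this particular error: in TEI (which is an XML vocabulary), so-called empty elements such as page breaks close themselves, and forgetting the slash leaves the tag open, so the parser complains and the document will not validate. A minimal sketch of the difference (the element and page number here are purely illustrative, not from my actual document):

    <p>The chapter ends here.<pb n="42"></p>    <!-- invalid: the <pb> tag is never closed -->
    <p>The chapter ends here.<pb n="42"/></p>   <!-- valid: the self-closing form ends the tag in place -->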

However, I'm not only an aspiring Digital Humanist. I'm also a literary scholar. Even writing this out makes it seem as if there still exists a stark division — I'd almost say dichotomy — between the traditional literary scholar (the vampire-resembling kind that reads dusty books in dark libraries) and the computer science guy (the zombie-resembling kind with bloodshot eyes from staring at a screen for too long). And yet, as becomes clear in Hayles' book, the Digital Humanities sit somewhere in between; they constitute some kind of hybrid identity (to borrow from Manovich) in literary scholarship. And that, perhaps, changes our perception of what constitutes literary research in fundamental ways.

Diagrams, and Then Some

One assertion in particular caught my eye while reading Franco Moretti's (in)famous Graphs, Maps, Trees (again) this week. In his even shorter introduction (or preface?) to the already quite short triptych, Moretti (naturally) proposes "a new object of study: instead of concrete, individual works, a trio of artificial constructs — graphs, maps, and trees — in which the reality of the text undergoes a process of deliberate reduction and abstraction" (n.p.). As he goes on, however, Moretti seems to make an important distinction among the three. In Chapter One, he asserts that "graphs are not really models; they are not simplified, intuitive versions of a theoretical structure in the way maps and (especially) evolutionary trees will be in the next two chapters" (8, emphasis in original). Is Moretti suggesting here, then, that graphs offer the means of abstraction he mentioned earlier, whereas maps and trees constitute simplifications?

In this distinction, then, Moretti seems to follow Willard McCarty's findings in "Knowing…: Modeling in Literary Studies" — an article which I incidentally read only a few weeks ago for another class — in that "a model-of is made in a consciously simplifying act of interpretation" (n.p.). According to Moretti, as quoted above, maps and trees may constitute models — although he himself seems uncomfortable with the term 'model' and tends rather to refer to them as diagrams — but graphs clearly do not. So, what are graphs, then? The most obvious answer here, of course, would be that graphs provide a visualization of quantified data, displayed in such a manner that their qualities as a "collective system" (Moretti 4) are revealed. And yet, as Moretti goes on to explain, quantitative data "are useful because they are independent of interpretation, [while also being] … challenging because they often demand an interpretation that transcends the quantitative realm" (30). Even more importantly, Moretti asserts, "we see them falsifying existing theoretical explanations, and ask for a theory" (30, emphasis in original). And yet, cautions McCarty, "theoretical modeling, constrained only by language, is apt to slip from a consciously makeshift, heuristic approximation to hypothesized reality" (n.p.).

Glass Reality

As of this past Tuesday, my fiancé has a new "toy" (as I'm prone to call it): Google Glass (by the way, I still find the app name, Google Goggles, much wittier. Now they just need to make it run on Glass). Understandably, he is quite proud to have it. Since Glass is still in beta testing, getting it required an invite from someone who already owned one — which happened to be a close friend of his who had acquired his 'right' to beta-test at Google I/O — along with a ridiculous amount of money (something I will certainly use as a bargaining chip in the future). This, however, makes Glass quite special in a social sense, and he certainly gets a lot of attention from people intrigued by it (click here for a hilarious YouTube video compiled by someone who has owned it for a while, summarizing people's reactions to Glass — yep, those reactions are quite faithfully represented. And no. It's not recording you all the time.).

And yet, perhaps Glass is special not only in a social sense, but also as a piece of hardware that changes our understanding of media. We have known at least since reading Kirschenbaum a few weeks ago that the borders between hardware and software can get quite blurry, and Google Glass perhaps constitutes a prime example of a device in which the perception of which is which becomes almost impossible to untangle. When Manovich, in Software Takes Command, considers the hybridization of software, then, I cannot help but wonder if this hybridization extends also to hardware — in other words (Manovich's), if "one of the key mechanisms responsible for the invention of … new media … is hybridization" (176, emphasis in original), then perhaps it could also be the literal mechanism (in this case, Google Glass) that hybridizes with the medium itself, thereby constituting a new medium in its own right.

CSI: Afternoons Infected by Cherry Tree Mysteries

One of the most captivating moments in reading Matthew Kirschenbaum's forensic exploration of Mechanisms this week was his – as I initially thought – completely arbitrary need to specify precisely that he had viewed all objects discussed in his book "on a Dell Latitude x 300 Windows XP laptop with a 1.20 GHz processor and 632 MB of RAM, sound turned on" (22). I admit, at first I took this specification as an offhand joke with which Kirschenbaum merely indicates the scrutiny (in a forensic sense) he has applied to his examinations of electronic objects. After all, why would this matter to me as his reader? In reading (and thinking) further through the book, however, I soon learned that, yes, it does matter, and it matters in very deterministic ways, particularly when considering digital objects from a humanistic perspective – and not only as a way of demonstrating forensic scrutiny. As Kirschenbaum clarifies at the end of his introduction, the "forensic imagination" as he applies it to "new media" is "conceived as a deeply humanistic way of knowing, one that assigns value to time, history, and social or material circumstance – even trauma and wear – as part of our thinking… . Product and process, artifact and event, forensic and formal, awareness of the mechanism modulates inscription and transmission through the singularity of a digital present" (23). Perhaps the examination of digital objects is not so far removed from literary research after all.

In writing about books (here, I mean the word 'book' in the sense of codex), aren't we already used to describing such aspects as edition, printing, and provenance, even the particular exemplar from a specific library, in our bibliographies? Perhaps we might even state where (in terms of physical space) we encountered, examined, and read the book – say, 'in the reading room of a particular library's special archive.' And, as humanists, we make these specifications because we recognize that these bibliographic codes matter not only to our experience of the text, but also to our interpretation of it. It does make a difference – and a rather significant one – which particular rendition of a text I am examining, and this difference increases proportionally with the level of detail at which I examine the book. If, for example, I examine a specific artifact for an inscription, then this particular (in this case physical) object is individuated (to use Kirschenbaum's language) insofar as it is the only object in existence exhibiting this precise state of being. Perhaps this is precisely Kirschenbaum's point in his discussion of the forensic and formal materialities of digital objects: despite the perceived infinite duplicability of digital objects, their individual identity persists in a manner no less, and perhaps even more, significant than is the case for 'physical' objects (I am placing the word physical in scare quotes here to emphasize, in accordance with Kirschenbaum, that digital objects are no less physical than what we would generally perceive to be physical – in other words, those objects we can discern directly, unaided by digital means).

My Ingress to Virtual Reality

They are coming. A scientific experiment called the "Niantic Project," initiated by the National Intelligence Agency (NIA), has discovered that certain points on earth exhibit energy anomalies which influence us humans in unforeseeable ways. It is a mind virus. Harnessing these energy anomalies can help build fields that protect the mind from outside influence and control. The Niantic Project claims that the measures of mind control extending between these points of interest (portals) are subject to the highest scrutiny so as not to be harmful, yet no evidence supporting or disputing the ultimate effect of the energy matter on the human mind has thus far been discovered. All we know so far is that the energy anomalies center around exhibitions of human aesthetics and creativity – works of art in the form of sculptures, murals, statues, architectural masterpieces, and other landmarks scattered across the world. Now that you have been made aware of these energy anomalies, it is your choice whether to join the struggle – but on which side? Join the Enlightened, and you will fight to control the mind fields according to the standards described by the Niantic Project. Join the Resistance, and you will fight to protect your fellow citizens from any form of mind control. Become an agent.

Described above is the basic plot of a closed-beta, alternate-reality, GPS-based game called "Ingress," developed by Google. Yes, I'm an agent, and I have been playing the game since its release in November of 2012. I have observed the struggles of my team against the opposing force, battled over portals, and established myself as a distinguished agent on the side of the Resistance in Lincoln. I have seen treaties made and treaties broken; I have helped establish agreed-upon standards of conduct (not otherwise specified in the game's TOS) and contributed to strategy planning and execution. This is one of my identities. I am a cyborg.

Understanding McLuhan;
or, How I Recovered my Cool.

When I first picked up the book, I have to confess my reaction to McLuhan's philosophizing about media was very similar to that of the Duke of Gloucester to Edward Gibbon: "Another damned fat book, eh…? Scribble, scribble, scribble, eh…?" (15). Retrospectively speaking, however, I have to say that McLuhan has managed to reconcile me to some of the philosophy thus far discussed in this blog — most specifically, to Kittler. McLuhan, although writing much earlier than Kittler, seems much less prescriptive — and perhaps less outrageous — in regard to meaning formation and reality, and his assertions, neatly packed in a series of allegories and analogies, are much easier to reconcile with technology as I personally experience it. And here may be the key to McLuhan: unlike Kittler, who seems to make grand statements with (implicitly) universal application, McLuhan focuses much more on the individual situation and context of experience — the experience of media as it is informed by the situation in which it itself exists.

Something about McLuhan is strangely comforting, especially when I consider applying his own theories not only to Kittler, but also to McLuhan himself. As he himself argues, "'Comfort' consists in abandoning a visual arrangement in favor of one that permits casual participation of the senses, a state that is excluded when any one sense, but especially the visual sense, is hotted up to the point of dominant command of a situation" (32). Perhaps I'm taking McLuhan's consideration of 'hot' vs. 'cool' media much too literally, but if 'hot' media require less participation than 'cool' media, then the reading of philosophies of technology (relatively speaking) would certainly be 'cool' (particularly in comparison to the reading of some pop-culture sparkling vampire stories, for example). In this sense, then, it is self-explanatory why reading literature (should we call it that?) that requires less mental involvement and participation seems so much more comforting than reading highly theoretical and philosophical considerations like McLuhan's.

Reality, Representations, and the Nature of Media

The moment – the choke-on-your-coffee-and-snort-it-through-your-nose moment – snuck up on me unexpectedly. On the very last page of Friedrich Kittler's 263-page philosophical history of media development. I didn't see it coming. I don't think Kittler did either, despite his retrospectively quite accurate (perhaps too accurate) prediction from 15 or so years ago. But let me share – just make sure you're not drinking any hot beverages. Kittler declares that

"Of all long-distance connections on this planet today, from phone services to microwave radio, 0.1 percent flow through the transmission, storage, and decoding machines of the National Security Agency (NSA) … By its own account, the NSA has 'accelerated' the 'advent of the computer age,' and hence the end of history, like nothing else. …the NSA is preparing for the future." (263)

It might be an overstatement to refer to "the end of history," although Kittler did just finish discussing the prevalence and popularity of 'spy novels' following a World War II incident in which information was leaked to Moscow. But perhaps 'the end of history' simply refers to a situation in which history has come full circle, and we start again. And end up in Moscow. At the airport.

Questions over Questions, and the Answer Is Still 42

What is a robot? At first sight, this question seems relatively simple to answer if we define 'robot,' as Wikipedia does, as "a mechanical or virtual agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry." But even this relatively straightforward definition already carries philosophical, theoretical, and even definitional problems within itself. For example, if a robot is an "agent," then the robot acts on behalf of our authority in the performance of predesigned tasks. Yet, in philosophy, an "agent" is also considered to have "agency," which is something that even the most autonomous robot can only appear to have (at least as long as we consider such movie plots as those of Bicentennial Man or AI as fictional). Further, a robot certainly can learn, but this learning (if, as I do here, we use the concept of "learning" as an indication of, or perhaps the instantiation of, autonomy) again seems to be restricted to a set of pre-determined parameters that ask the robot to respond in an "intelligent" manner to a set of situational stimuli.

I come to the consideration of what constitutes a robot after reading Brian Cantwell Smith's critique of computational theory, and his caution against considering the 'universal' computer as in fact universal. Specifically, he points out that the computer (defined as a computational machine) would not be able to perform certain physical tasks, such as brewing coffee. Yet we do have automated coffee machines, don't we? And aren't those machines instantiations of computational, digital performances based on Turing? An autonomous robot, then, perhaps constitutes one of the least restrictive computational machines, since it can perform a multiplicity of tasks that are not limited to virtual computation. To say that virtual computation is not the limit here (robots do perform physical tasks), however, is not to say that the "ubiquitously-assumed metric" (Cantwell Smith 30) is not the basis. This is an important distinction to make, because it is very much that basis which informs the limitations of what machines can do, not the perceived (and inherently inexhaustible) limits of possibility.

To Math Or Not To Math

"There are only 10 types of people in the world:
Those who understand binary and those who don't."

This week marked the 66th anniversary of the discovery (and subsequent removal) of the first computer bug. Ever wondered why it is called a "bug"? Because this first one actually was a bug – a moth – that had shorted out some points in Relay #70, Panel F, of the Harvard University Mark II Aiken Relay Calculator on September 9th, 1947 (Huggins). We have since continued to use the term "computer bug" to refer to problems with our computers, and fixing those problems is still described with the terminology of "debugging." Thankfully, those debugging processes now require much less expertise (in most instances) in extermination.

Yet, this little excursion into computer history – and, perhaps, computer terminology – happens to precisely coincide with some of the work (yes, work!!!) I have been doing this week. For the purpose of educating myself a bit more about what it might actually be that I am doing in the Digital Humanities, I somewhat haphazardly managed to sign myself up for a seminar in the philosophy and theory of Digital Humanities – if you know me, you'll understand what I just got myself into. For this week, then, I am working (yes, working!!!) my way through the history of mathematical thought that ultimately inspired contemporary automated computation (Martin Davis).

Manifestos

Let me begin this, my first blog post (ever!), with a little anecdote that I hope will help illuminate some of the questions I will ask later in this blog. About two years ago – I was already quite a good way into my graduate studies in literature – my mom and I had a conversation about a book she'd read and liked: Hemingway's The Old Man and the Sea. Now, my mom is an avid reader, and probably more widely read than most people. However, she is not a literature scholar, so when, during this conversation, I pointed out some of the larger symbolism in the novel, her response (in essence) was that "this would be reading far too much into it. Because, really, the book was simply about a man who just really likes his boat." This response stunned me, coming from my own mother, and I was quick to point out that I was currently studying for a doctorate in "reading far too much into books." Certainly, I have since explained to her in much more detail what it actually is that I do – and I think she really does understand more about literary studies now. Yet this incident illuminated the problem of trying to explain a specialization, or better, a specialized field of study, to someone not in the field (kudos to all you teachers out there!).

Most of us in academia, especially in the humanities, are probably well familiar with the question "What do you actually do?" or with its more prevalent (and perhaps more offensive) counterpart, "What do you do with a degree in philosophy/literature/history/etc.? Teach?" The implication here seems to be that 'the study of the human record' has little more specific application than to stuff our students' class schedules with "basic requirements" that, yes, are good to have heard about at one time, but really only justify our own indulgences and distract students from the pursuit of "a real job." So, then, if the value of 'the study of the human record' is already hard to justify – even to explain – what are we to do with the Digital Humanities, which, to put it perhaps far too simply, is the digital, or computational, study of the human record?
