What is a robot? At first sight, this question seems to be a relatively simple one to answer if we define 'robot,' as Wikipedia does, as "a mechanical or virtual agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry." But even this relatively straightforward definition already has philosophical, theoretical, and even definitional problems in itself. For example, if a robot is an "agent," then the robot acts on behalf of our authority in the performance of predesigned tasks. Yet, in philosophy, an "agent" is also considered as having "agency," which is something that even the most autonomous robot can only appear to have (at least as long as we consider such movie plots as occur in Bicentennial Man or AI as fictional). Further, a robot certainly can learn, but this learning (if, as I do here, we take "learning" as an indication of, or perhaps the instantiation of, autonomy) again seems to be restricted to a set of pre-determined parameters that ask the robot to respond in an "intelligent" manner to a set of situational stimuli.
I come to the consideration of what constitutes a robot after reading Brian Cantwell Smith's critique of computational theory, and his caution against considering the 'universal' computer as in fact universal. Specifically, he points out that the computer (defined as a computational machine) would not be able to perform certain physical tasks, such as brewing coffee. Yet, we do have automated coffee machines, don't we? And aren't those machines instantiations of computational, digital performances based on Turing? An autonomous robot, then, perhaps constitutes one of the least restrictive computational machines, since it can perform a multiplicity of tasks that are not limited to virtual computation. To say that virtual computation is not the limit here (robots do perform physical tasks), however, is not to say that the "ubiquitously-assumed metric" (Cantwell Smith 30) is not the basis. This is an important distinction to make here, because it is very much the basis that informs limitations as to what machines can do, not the perceived (and inherently inexhaustible) limits of possibility.
Perhaps it is precisely the inexhaustibility of the possibilities for machinery, while failing to account for the limitations of the bases for it, which induces so much anxiety about human replacement in an ever more technologically informed world. Even further, at least according to Heidegger, an ever-increasing instrumentality of technology leads to a change not only in how we perceive technology itself, but also, more generally, in how we perceive the world. And yet, Heidegger's concerns seem very much akin to what Cantwell Smith identifies as problematic in computing theories, which is that computing has to be "best understood as a dialectical interplay of meaning and mechanism" (emphasis in original, 15), whereby the mechanism is as much limited by the meaning as the meaning by the mechanism. So, then, what do we mean by "meaning"?
The Turing model, as it is based on systems of logical symbolism and their computational models, perhaps simulates meaning much more than creates it – similar to the apparent "intelligence" of some autonomous or semi-autonomous machines. Yet, as I tend to argue, meaning is perhaps a retrospective endeavor applied to already existing structural systems (of order as well as chaos). Because of a certain retrospectivity, then, meaning simultaneously assumes a position prior to and resulting from mechanical computation; in other words, because of their dialectical interplay, both mechanism and meaning are positioned in a philosophical manner similar to the chicken-and-egg question – they have to be considered as parts of the same set while also being distinct from one another.
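The point that the Turing model simulates rather than creates meaning can be made concrete with a toy sketch (my own illustration, not drawn from Cantwell Smith): a minimal Turing machine manipulates symbols purely syntactically, following a rule table, and "knows" nothing about what the symbols stand for. Here the rules happen to increment a binary number, but that meaning is ours, applied retrospectively.

```python
# A minimal one-tape Turing machine (illustrative sketch).
# The machine only rewrites symbols and moves the head; any "meaning"
# of the tape contents is assigned by us, after the fact.

def run_turing_machine(tape, rules, state="start"):
    """Run the machine until it reaches the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rule table: scan right to the end of the number, then carry 1s leftward.
# We read this as "binary increment," but the machine does not.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", "_"): ("1", "L", "done"),
    ("done", "0"):  ("0", "L", "done"),
    ("done", "1"):  ("1", "L", "done"),
    ("done", "_"):  ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # "1100": 11 + 1 = 12, in binary
```

Nothing in the rule table mentions numbers at all; "increment" is an interpretation we bring to the mechanism retrospectively, which is precisely the dialectic at issue.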
This, perhaps, is precisely the problem in the creation of meaning. Because meaning informs mechanisms, but also results from them, it can only retrospectively be interpreted, and to do that we need the human brain. Part of the problem here, as Caterina Bernadini has suggested before me, is the question of translation. Since the Turing model functions based on a representational system of logic defined by symbols, computer coding can be likened to the translation of one language into another – in the sense that we are substituting signifiers of ideas for other signifiers. And yet, this method of substitution has obvious limits that significantly inform, because they alter, meaning. Very simply put, run a foreign-language text through Google Translate, and see what happens.
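The limits of substituting signifiers for signifiers can be sketched in a few lines (a deliberately naive illustration of my own, with a hypothetical four-word lexicon): word-for-word replacement preserves the symbol mapping but loses idiom, and with it the meaning.

```python
# A toy word-for-word "translation" (hypothetical lexicon, for illustration).
# Each German signifier is swapped for an English one; nothing else happens.

lexicon = {"ich": "I", "verstehe": "understand", "nur": "only", "bahnhof": "station"}

def substitute(sentence):
    """Replace each word with its dictionary gloss, or keep it unchanged."""
    return " ".join(lexicon.get(word, word) for word in sentence.lower().split())

print(substitute("Ich verstehe nur Bahnhof"))
# "I understand only station" -- but the German idiom means
# "it's all Greek to me"; the substitution keeps the symbols, not the sense.
```

Every signifier is "correctly" substituted, and the meaning is nonetheless gone, which is the alteration the paragraph above describes.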
Certainly, we can account for some of the inter-relations between symbols (or signifiers) by developing syntactical rules. However, even these meaning-creating, or better, meaning-reproducing, systems remain insufficient – because, again, meaning is partially retrospective. Initially, I had decided to read Heidegger in the original German, partially because I wondered whether the meaning of his assertions changes significantly between the translations, and partially because I had hoped that his ideas would be easier to penetrate in my (and his) first language (for the record, they are not). What I found myself doing, however, was reading Heidegger in between translations. I read Heidegger simultaneously in German and in English, and it was only in between those interpretations – and representations – that the meaning I created came to the forefront. And, what was even more striking is that the editor of this particular version of Heidegger's work admits to having altered even the translation represented here. In all four cases – Heidegger's, the original translator's, the editor's, and mine – then, we have created meaning specific to us based on our interpretational methods. But the meaning(s) we each ultimately derived may differ, however slightly or however greatly. In this sense, then, meaning is as much an informative act as it is a retrospective derivation, especially for and in computing.

As Cantwell Smith seems to imply, then, perhaps we should cease chasing indefinable and unformulable ideals, and instead simply accept that we work within limited parameters – as do our machines. Or, perhaps, Cantwell Smith is just messing with us, considering that he bases his claims on a number of analyses that we are still waiting for him to publish (as of yet, only the introduction to Age of Significance has been released – three years ago). Or, we can simply choose to believe that the answer to life, the universe, and everything is 42.
This, I do agree, can also be very liberating.