We’ve Always Been Digital

A Reaction to ‘Historical Reflections: We’ve Never Been Digital’, by Thomas Haigh, in Communications of the ACM, September 2014, Vol. 57, No. 9

This column is inspired by another column that appeared in the September 2014 edition of the Communications of the ACM. Despite my provocative title, I won’t suggest a rupture or take an opposite stance with respect to what Thomas Haigh, the original author, describes and comments on with great skill; on the contrary, I’ll try to emphasize some of his strongest points and flip others in a purely communicative and (why not) entertaining exercise. My illiteracy in history won’t allow me to be as formal and thorough as he is in the original article, which is incredibly well documented. I strongly recommend reading it to get a full grasp of his arguments, although I’ll offer a summary.

In general terms, Haigh takes a somewhat sceptical view of hype in the area of computing. He travels through a great part of the 20th century to recall the many occasions on which computers have been sold to the general public as a ticket to the future, promising changes “far beyond those effected by printing”. Along this line, historian of technology Gabrielle Hecht coined the term “rupture talk” for the social process by which an unfamiliar new technology unexpectedly became a central part of American life. This metaphor of “rupture talk” serves well not only to describe the message that all of our lives would eventually be ruled by computers and the digital (an idea only popularized in the 1980s and 1990s), but also the promises made about technology in general (the same enthusiasm arose in the 1950s, when France reoriented its colonial power and engineering skill around mastery of nuclear technology). In computing, this rupture talk has a special place in Being Digital, the book published in 1995 by Nicholas Negroponte. Its central claim is that being digital is indeed the defining characteristic of a new way of life, and would become so within a couple of years. Here is an excerpt:

Early in the next millennium, your left and right cuff links or earrings may communicate with each other by low-orbiting satellites and have more computer power than your present PC. Your telephone won’t ring indiscriminately; it will receive, sort, and perhaps respond to your calls like a well-trained English butler. Mass media will be refined by systems for transmitting and receiving personalized information and entertainment. Schools will change to become more like museums and playgrounds for children to assemble ideas and socialize with children all over the world. The digital planet will look and feel like the head of a pin. As we interconnect ourselves, many of the values of a nation-state will give way to those of both larger and smaller communities. We will socialize in digital neighborhoods in which physical space will be irrelevant and time will play a different role. Twenty years from now, when you look out of a window what you see may be five thousand miles and six time zones away…

While some of these promises have been fulfilled even beyond what Negroponte envisaged, others never happened and, on the contrary, still read rather oddly (even meaninglessly) if we try to fit them into our 2014 “digital” life. Haigh’s analysis continues with a reflection on Bruno Latour’s book We Have Never Been Modern, published in 1993. Latour’s thesis is that nature, technology, and society have never truly been separable, despite the Enlightenment and the Scientific Revolution, which defined their separation as “the hallmark of modernity”.

Interestingly, Haigh brings his analysis to the rise of the term digital humanities, and somewhat complains that (apparently) the movement of the digital humanities (a “push to apply the tools and methods of computing to the subject matter of the humanities”) and the fact of being a digital humanist (a shifting boundary of using computers to do things that most of one’s colleagues wouldn’t know how to do, from using email or a word processor to installing a Web server or programming) suffer from a disturbing glorification of technological tools. The fact that he is a trained computer scientist who switched to the humanities makes his point even stronger.

My first point is to doubly, even triply, highlight this last reflection. What attracted me to the digital humanities was not the use of computers to solve tasks that humanists used to carry out by hand. My impression is that this is exactly what computer scientists have been doing (in a long series of disparate fields like natural language processing, logic, argumentation, etc.) for the last 20 years, and that the question of the digital humanities was much more about how to understand computing from a cultural and social perspective, far from the optimistic speculation with which we scientists tend to advertise what we do. In this sense, the balance is still quite skewed towards making the humanities more digital, while I find there is an urgent need to apply the methods and perspectives of the humanities to understand computing and its relationship with human beings.

Another point of Haigh’s article I consider worth discussing is his clear preference for We Have Never Been Digital, supporting the argument that the Scientific Revolution never created a whole new world. Furthermore, his claim that even the digital does not really exist (bits still have, and always will have, a material, analog support) tickles my brain in the opposite direction. The broader concept of the digital, if we understand it as the same human curiosity that created philosophy, mathematics, algorithms and, only very recently and in combination with engineering (which some would say is older than science), computers, could have a stronger and older tradition than we might think. This leads me to think that, indeed, the technological revolution never happened: it has always been there, since the very moment the first humans engineered tools, fire and wheels, and, with the rise of science, found sound explanations of why they behave the way they do, all the way to Babbage’s Difference Engine and electronic computers. That’s why I think that encoding information in binary in integrated circuits is just a minor implementation detail, and that being digital or not is not the relevant question. Humans have always been digital.

My last thought is a plea for a more comprehensive and positive notion of hype in computing advocacy and research. I think it could be useful to put it in a more pragmatic and less dramatic frame. It is true that scientists and computing advocates have an intrinsic tendency to oversell and overpromise. The history of Artificial Intelligence is probably one of the best examples of this, with ups and downs from its inception to the present day. Still, AI did deliver cool stuff: machines are better than us at playing intellectual games, and while we may wonder why the promise of robots that feel and think like humans has not been fulfilled yet, AI has become one of the most (if not the most) successful fields in science. My point is that failing to deliver is not necessarily a bad thing. Overpromising can produce disappointment, but it also produces a list of challenges (some more plausible than others) to tackle. Having this list of challenges is useful to researchers, who can set agendas and go for short shots or longer ones. In a quickly changing field like AI and all things digital, it might be more efficient to go for all the points on these highly speculative lists rather than waste time discussing whether the challenges even make sense (normally, to some extent, they do). In the worst case, failure scratches them off the list (or, even better, postpones them until better times come); in the best case, our intuitive and overoptimistic visions of the future might turn out to be, in the end, not that crazy or far-fetched.