Computer Power and Human Reason by Joseph Weizenbaum was first published in 1976. My interest in ethics had not, until recently, steered me toward it, but having now obtained a copy and considered Weizenbaum’s arguments, I’m quite pleased I did.
It is not a book about computers, or programming, or “computer science,” nor is it an anti-AI screed. Rather, I’d call it a set of increasingly intense philosophical essays about what computers are capable of and what they should be used for, two things that are not necessarily the same, as well as what it means to be human and what it means to be ethical.
Weizenbaum holds that “computers,” speaking broadly, may eventually exhibit what we might call intelligent behavior, but that behavior, limited by the digital switching of 0’s and 1’s, will be alien and fundamentally different; they cannot ever have human intelligence. Why? Because humans have self-directed goals and purposes and wills, and interact with the world in a fundamentally different way (having experiences rather than data). We can both exhibit and feel pride and cowardice, fear and joy, all qualia, and most importantly, we can judge matters, rather than simply make decisions, based on our unique, relative experiences. While computers excel at laborious bureaucratic tasks beyond any single human, they cannot ever have the human experiences that actual humans use as the foundation for their values, which in turn allow humans to make judgments.
Weizenbaum repeatedly quotes a former colleague who challenged him to come up with something a judge (presumably of the legal variety) could do that a computer could not, the colleague’s answer being a flat “nothing.” The vigorous humanism of his book-length rejoinder is something to behold. Not only does he castigate the slippery-slope, positivist argument that human-like AI is inevitable as with all technological progress, but he twists the knife further, noting that the decision to pursue AI research blindly is itself inhuman. As computers cannot judge like humans, they cannot be ethical, and Weizenbaum warns that they should not ever be given work that involves judgment. There are hints of the networked world-to-come in his chapters, but just like anyone in 1976, he doesn’t see just how quickly miniaturized, networked computers are coming.
What he does see clearly are the ethical concerns. He notes that any future speech recognition will ultimately only serve the cause of increased surveillance – check. Weizenbaum was the programmer behind ELIZA, the famous therapist chatbot, and was alarmed at how quickly some people connected to its lines of code as if it were a real human being.
What would he think of modern speech recognition and generative AI? Nothing good. My earlier assessment of ChatGPT is more or less the same as his description of the limits of AI, though he pushes it much further, noting (as remains true today) the increased dehumanization and automation of modern society, and lamenting the passive acceptance of an overly computerized future in which humans cede more and more power to computers that can never have any real knowledge of human experience, and accept, without thinking, an overly technical approach to complicated human problems.
There are two related passages I’d like to reproduce here, as they spoke to me as a sometimes disgruntled English professor:
During the time of trouble on American university campuses, one could often hear well-meaning speakers say that the unrest, at least on their campuses, was mainly caused by inadequate communication among the university’s various constituencies, e.g. faculty, administration, students, staff. The “problem” was therefore seen as fundamentally a communication, hence a technical, problem. It was therefore solvable by technical means, such as the establishment of various “hotlines” to, say, the president’s or the provost’s office. Perhaps there were communication difficulties; there usually are on most campuses. But this view of the “problem” – a view entirely consistent with Newell and Simon’s view of “human problem solving” and with instrumental reasoning – actively hides, buries, the existence of real conflicts…
… instrumental reason converts each dilemma, however genuine, into a mere paradox that can then be unraveled by the application of logic, of calculation. All conflicting interests are replaced by the interests of technique alone.
p. 266
This man certainly worked at a university.
The last chapter, “Against the Imperialism of Instrumental Reason,” is a powerful attack on a soulless worship of reason as inhumane. The climax of the argument, for me, is this:
The lesson, therefore, is that the scientist and technologist must, by acts of will and of the imagination, actively strive to reduce such psychological distances, to counter the forces that tend to remove him from the consequences of his actions. He must – it is as simple as this – think of what he is actually doing. He must learn to listen to his own inner voice. He must learn to say “No!”
Finally, it is the act itself that matters. When instrumental reason is the sole guide to action, the acts it justifies are robbed of their inherent meanings and thus exist in an ethical vacuum. I recently heard an officer of a great university publicly defend an important policy decision he had made, one that many of the university’s students and faculty opposed on moral grounds, with the words: “We could have taken a moral stand, but what good would that have done?” But the good of a moral act inheres in the act itself. That is why any act can itself ennoble or corrupt the person who performs it. The victory of instrumental reason in our time has brought about the virtual disappearance of this insight and thus perforce the de-legitimization of the very idea of nobility.
p. 276
Bravo. The closing chapter is quite strong, but I’ll limit myself to one more paragraph:
… It is a widely held but a grievously mistaken belief that civil courage finds exercise only in the context of world-shaking events. To the contrary, its most arduous exercise is often in those small contexts in which the challenge is to overcome the fears induced by petty concerns over career, over our relationships to those who appear to have power over us, over whatever may disturb the tranquility of our mundane existence.
p. 276
When we do not think what we choose to do matters, that is a remarkably good indicator that it does.
The worldview Weizenbaum critiques, then, is insidious because it is a mental trap that shuts down what makes us human – our will and agency.
Computers, by the end of the book, become a metaphor or tool for understanding what makes us human – and what does not. There is a powerfully assembled argument that the highly specialized knowledge that computer science and data-driven research claims to possess is at a serious disadvantage when compared to the comfortable familiarity with ambiguity in the humanities. The discussion of language models and composition in earlier chapters suggests Weizenbaum was not field-cloistered from literature and writing – this is an interdisciplinary work.
When I read such arguments, I think about the contemporary anti-intellectual politics of Florida and Texas, but I also think about the larger awareness of the “rhetoric of science” concept since the writing of this 1976 book and the mixed and increasingly sour bag of candies that the Internet turned out to be. I also think about every interaction with a corporate entity I’ve ever had, and how my own university works.
It’s hard to find a print copy of this book, but an ebook version is not difficult to find. I highly recommend it. It has aged well. As a closing thought, the epistemology of The New Rhetoric seems quite compatible with Weizenbaum’s ideas here as a reckoning with WWII, though his examples primarily concern Vietnam.