Futurist, software engineer, and a director of engineering at Google, Ray Kurzweil has been the leading champion of transhumanist technotopianism. Central to this movement is the belief that the biological sentient and cognitive capacities of human beings are too constraining to be ultimately satisfying. In order to realize its full value, the capacities that make life meaningful must be developed to their furthest imaginable range and depth, limited only by the laws of physics (ultimately, the entropic decay of the universe). These limits cannot be reached within the biological form of sentience and intelligence. Therefore, human destiny is first to merge with Artificial Intelligence (the subtitle of Kurzweil’s latest book) to form the “Singularity,” after which point human evolution by natural selection will end and the conscious transcendence of all biological limits on human life-capacities will begin.
In 2005, Kurzweil predicted that the Singularity would occur around 2045, and he maintains that prediction in the current book. The new work does not add anything fundamental to the arguments he deployed in The Singularity is Near but seems to have been occasioned (perhaps at his publisher’s prompting) by the spectacular success of ChatGPT-4 in emulating human powers of argumentation and textual analysis. The title The Singularity is Nearer perhaps became too delicious to resist in the warm glow of the media’s embrace of ChatGPT’s apparent powers.
While the underlying transhumanist arguments are the same as in the 2005 work, Kurzweil’s tone is not quite so rhapsodic. In 2005 he prophesied (there is no other term for it) that the Singularity would evolve towards divine perfection: “Evolution moves toward greater complexity, greater elegance, … greater beauty, and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without limitation. … Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably toward this conception of God.” (389) If I were to be picky – and I will be – I would point out that evolution, as Daniel Dennett explained in Darwin’s Dangerous Idea, does not move toward anything at all. Evolution was a revolutionary idea precisely because it provided mechanistic explanations for dynamics which, in earlier ages, were assumed to require the existence of a divine entity or Idea to steer them. It is a fact that more complex neural systems have evolved, but not because “evolution” (which is, in any case, a process, not a thing) was being guided toward them as a goal. Moreover, it is at least debatable whether human beings have become more loving or politically or morally intelligent over time. We have a grasp on the problems of social life, but we have as yet proven incapable of solving them.
Kurzweil’s tone is thus more sober in the new work, his time frame limited to the period between now and 2045, when he expects the Singularity to burst forth, and his technical arguments focused for the most part on the development of existing engineering achievements in mind-machine interfaces (Musk’s Neuralink, for example) into full-scale brain-cloud interconnection. The Singularity is nearer because we understand the physics and mechanics of connecting minds and computers through sensors that translate electrochemical energy into binary code; it will be achieved when we fully merge with Artificial Intelligence. The existing engineering needs only to be scaled up (or, rather, down, since nanobots will be the interface linking the cerebral mass of humanity to the cloud). (72)
As we gradually merge with AI through the 2030s, Kurzweil foresees, first, an exhilarating increase in the speed of thinking and an expansion of the range of information to which we have near-immediate access, and then the emergence of virtual analogues of ourselves which will represent a new form of self-conscious existence. Kurzweil addresses the problem of whether a computational system can really become conscious with a functionalist answer: if the behavior of the computational system is in every respect identical to, or at least indistinguishable from, a biological consciousness, it is conscious. “And if an AI is able to eloquently proclaim its own consciousness, what ethical grounds could we have for insisting that only our own biology can give rise to worthwhile sentience?” (65) He develops this account in dialogue with the philosopher David Chalmers’s idea of zombies: entities that are indistinguishable from living beings but have no inner life, no self-consciousness, at all. (79-81) Whether or not one finds philosophers’ thought-experiments compelling means of advancing scientific arguments, there are problems with Kurzweil’s argument. The biggest issue is that he conflates the problem of the evolution of sentience with the design of neural networks.
Already ChatGPT can carry on conversations with people, but, if you ask it whether it understands what it is saying, even Kurzweil admits that it will tell you it does not. A more sophisticated AI might indeed – some day soon, perhaps – be able to “proclaim itself” conscious and even provide a cogent explanation of what that means, but it will not thereby have crossed the main ethical threshold from non-life to life. The ethical difference between conscious and self-conscious creatures and AI systems that can verbally assert their consciousness is life. Conscious beings feel themselves alive and strive to create the conditions in which they can feel more alive. My cats cannot argue with me that they are conscious, but they do not have to, because they prove by the (limited) repertoire of their expressions that they are alive. As such they have preferences, desires, and goals of which they are aware (in a cat-like way) and, more importantly, they can undertake self-directed action to bring those goals about. Unless and until an AI crosses the line between non-life and life it will not cross the threshold towards making a claim on ethical consideration.
More technically, Kurzweil’s argument makes two mistakes. The first is to collapse all the powers of consciousness (feelings, emotions, ratiocination, evaluation, etc.) into information processing; the second is to overlook the possibility (as Terrence Deacon has argued) that life-activity cannot be explained simply on the basis of what living systems are and do, but only by reference to what they are not and seek out. There is no doubt that brains operate by processing information from the environment, but it does not follow, I would argue, that feelings or logical inferences are nothing more than information. If life-activity were nothing more than information processing, then Kurzweil’s hopes for digital apotheosis might be sound. But human beings are not their brains and neural architecture alone: we are integrally unified bio-social agents whose relationships with the world have a qualitative, felt dimension which cannot be cashed out in informational terms alone. We prefer, or desire, or need some states more than others, and we actively shape our environment in response to these felt needs. Deacon has argued in exquisite detail that the emergence of life must be explained by the emergence of “teleodynamic” chemical systems which act so as to bring about a state of thermodynamic equilibrium. (See his Incomplete Nature and my review, here.) In simpler terms, the behaviour of these systems cannot be understood without reference to what they are not, but strive (at first purely unconsciously, via basic physical principles) to bring about. Living systems are aware of what they need, and the most complex among them posit goals which are not physical or chemical but moral and political. But there are no goals properly speaking until there is life and intentionality. No matter how complex or fast an information-processing system is, it is not alive until it seeks to maintain itself.
Living things are composed of non-living elements, so it is not impossible or inconceivable that new forms of artificial life might evolve. The crucial question will be not whether such an entity can generate cogent explanations of what it is, but whether it can become conscious of being the sort of entity it is and strive to maintain itself. At present, no matter how impressive ChatGPT’s responses to prompts are, it cannot do anything until it is prompted. My cat, indeed an amoeba or a paramecium, can act on its own initiative.
These criticisms are also relevant to the speculative engineering proposal that is central to Kurzweil’s project for practical immortality: the “uploading” of consciousness to a digital platform. “Freeing” consciousness from biological limitations is essential to the emergence of the post-Singularity superintelligence. Kurzweil assumes (as he also assumed in the 2005 book) that consciousness is some sort of pattern which could be precisely modeled and emulated in an artificial neural network. Perhaps. But I think it more likely that consciousness is not a fixed pattern that could be captured in some sort of snapshot and then re-printed, so to speak, in a neural network; it is much more likely a dynamic process that depends upon the coordinated functioning of the whole of the body’s organic systems in integral connection with the natural and social environment. If that is the case, then ChatGPT and its like might be not the first step towards the Singularity but the last step in the development of AI.
Kurzweil does not avoid criticisms, but his responses tend to sidestep the most difficult issues. Thus, he does not seriously inquire into the bio-chemical dynamics of life or consciousness but assumes that they are reducible to information processing. Since computers are information processors par excellence, they will eventually figure out how to transpose consciousness from a biological to a digital platform. The same sort of arguing around problems characterizes Kurzweil’s treatment of the economic dimension of technological development. Kurzweil is one of the few transhumanists to understand that scientific and technological development has social and economic dimensions. For Kurzweil, those economic dimensions involve a secure intellectual property rights regime on the one hand and an emergent quasi-evolutionary dynamic that he calls the “law of accelerating returns” on the other. “The law of accelerating returns describes a phenomenon wherein certain kinds of technologies create feedback loops that accelerate innovation. Broadly, these are technologies that give us greater mastery over information.” (112) Each increase in information-processing capacity catalyses a new round of innovation that increases our processing power even further, generating an exponential growth dynamic which is theoretically without limit.
Theoretically, yes, but Kurzweil forgets that statistics express historical trends. A historical trend may continue into the future, but then again it might not. It is one thing to plot a curve on a graph that extends from the present into the future; it is another thing for the future to play out like that. There is no causal relationship between the mathematical model of the future and what will in fact happen. As I noted above, the law of accelerating returns is an economic principle because its operation depends upon social conditions that encourage investment. Protectionism, weak intellectual property rights, and high taxes could all slow investment and therefore the innovations that depend upon it. Even if we assume propitious investment conditions, mainstream economists have wondered for some time why digital technologies have not increased productivity or catalysed growth in the real economy. Kurzweil’s answer is that economists are looking in the wrong place – productivity tables – when they should be looking at price. (213) Kurzweil argues that the major economic impact of computing technologies lies in the constant reduction of the price of computation per unit. The improvement here is truly mind-boggling: computer power that would have cost millions of dollars in the 1950s and been accessible only to governments or major corporations is now available to children for pennies. (See the Appendix, 293-312.)
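To get a feel for the scale of that claim, a back-of-the-envelope extrapolation helps (the doubling time here is my own illustrative assumption, in the spirit of Moore’s law, not a figure from the book): if the price-performance of computation doubles roughly every eighteen months, then over the roughly seventy years between the mid-1950s and today it improves by a factor of
\[
2^{\,70\,/\,1.5} \approx 10^{14}.
\]
On that assumption, a computation that cost a million dollars in 1955 would cost a negligible fraction of a cent now, which is the sense in which “millions of dollars” becomes “pennies.” But the exponent summarizes a past trend; it is not a law that guarantees the future.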
Be that as it may, Kurzweil does not so much address the problem of productivity as change the subject. It may be true that consumer purchasing power has gone up exponentially, but productivity is a measure of output relative to input (especially labour time), and that has not gone up nearly as much as mainstream economists would expect. As Robert Solow quipped in response to this puzzle, “you can see the computer age everywhere but in the productivity statistics.” The practical implications of this debate are significant for Kurzweil’s project: if innovation is linked to investment and investment to profitability in the real economy, growth might not be self-amplifying in the way he believes. Good old-fashioned economic stagnation (such as the globe has been experiencing) can limit technological development. And even if any slowdown proved temporary, there are serious scientific questions to be raised about Kurzweil’s speculative projections of what is technologically possible.
But let us assume that the law of accelerating returns operates as Kurzweil argues, that engineering problems like nanobots and mind uploading are solved, and that the Singularity does occur in 2045. Then the question becomes a philosophical one: should we let the new evolutionary course play out, or switch it off and go back to our slow-witted biological lives? In Embodiment and the Meaning of Life I argued that we should switch it off, precisely because the humanist values that Kurzweil believes he is serving depend upon – if I am correct – the frames of finitude (aging, disease, the possibility of failure, and death) within which we struggle and work. Kurzweil treats struggle and work as he treats aging and death: as problems to be solved. But we are embodied beings, and embodied beings must deal with a world and other people outside of themselves. Our successes are valuable not only because they express the achievement of a goal; they are valuable because they might not have worked out. No one is celebrated for climbing an imaginary mountain; imaginary friends cease to satisfy our emotional needs once we are no longer toddlers. Isn’t virtual reality just another word for imaginary?
Kurzweil and other transhumanists would argue vociferously that it is not. A mature cyberspace would be indistinguishable from material reality, except that we – or the Superintelligence that supplants us – could imagine into being anything that is logically possible. But whatever such a creature might be, it will not be a human being: human beings are individuals. Our identity is shaped by our differences; friendships and other forms of mutualistic relationship are valuable because they connect us to something we are not. Embodied humanism of the sort that I have defended works within these limitations to increase the value of human life by overcoming obstacles and socially created roadblocks to all-round need-satisfaction and the unfolding of our living capabilities.
But old-fashioned humanism and political struggle are too slow. Once we merge with AI, we can offload problem-solving to it and free ourselves to think “millions of times faster.” (265)
About what?