Genetic Engineering & Biotechnology News, “Artificial Intelligence, Bioenhancement, and the Singularity,” May 1, 2016

Artificial Intelligence, Bioenhancement, and the Singularity

Defer Radical Self-Modification, Says Historian, to Avoid Destabilizing Civilization
Genetic Engineering & Biotechnology News | By Michael Bess | May 1, 2016 (Vol. 36, No. 9)

They met for the first time in a hotel bar at Lake Tahoe in 1998, one evening after a technology conference. Bill Joy was an eminent computer-systems designer, chief scientist for Sun Microsystems. Ray Kurzweil was an award-winning inventor and technologist, whose many creations included a reading machine for the blind and an advanced music synthesizer. Their conversation focused on the future relationship between humans and machines.

What they saw that evening, as they gazed together into the coming decades, was something that has come to be called the Singularity. Both Joy and Kurzweil believed it might arrive as soon as the mid-21st century. Both of them felt that it would be the most dramatic turning point in the history of humankind thus far.

On the other side of that divide, humans would redesign their own bodies and minds, using the powerful tools of genetics and nanotechnology; they would reverse-engineer the human brain, applying this knowledge to design new forms of artificial intelligence far more potent than any human mind; they would endow these superintelligent machines with bodies more capable and versatile than any mere biological being could hope to emulate. In this way, humankind would in effect be giving birth to its own successor species, our own technological progeny, whose limitless potential would take them out into the cosmos to fulfill a destiny greater than any mortal of today could fully comprehend. It would be a moment of species metamorphosis, a collective transformation akin to the transition from a caterpillar to a butterfly.

Kurzweil looked upon this prospect with a mixture of awe and elation, embracing it as the fulfillment of humanity’s deepest ideals and dreams. Joy regarded it with a mixture of awe and horror, recoiling from what he viewed as a radical dehumanizing of Homo sapiens; he also sensed profound danger in these technologies—the risk of accidental cataclysms engulfing the biosphere.

Original Concerns Still Resonate

Today these kinds of hopes and fears continue to circulate. In 2014 the Oxford philosopher Nick Bostrom created a sensation with his book, Superintelligence, in which he surveyed the rapid progress in artificial intelligence (AI) research and described the inherent difficulty of controlling such machines, were they ever to reach or surpass human-level cognitive abilities. Such a machine could presumably redesign itself over time, improving its own software and hardware in a runaway cycle of rapidly accelerating power.

Though some scientists and engineers dismissed Bostrom’s concerns as sci-fi alarmism, his arguments were echoed by such luminaries as Elon Musk, Bill Gates, and Stephen Hawking. Musk even donated $10 million to the Boston-based Future of Life Institute to fund research into developing control systems for advanced AI machines.

To be sure, most of the people who make a living in these fields tend to regard such concerns as exaggerated or premature. Precisely because they spend their days (and nights) trying to solve the extremely complex problems posed by cutting-edge biology or informatics, they have a sober sense of how far we remain today from anything like the Singularity. As one scientist recently told me: “Worrying about the Singularity is a bit like telling a cave man who’s lighting his campfire that he should beware of global warming.”

This kind of self-deprecating attitude is certainly laudable, but I don’t find it particularly reassuring. When scientists emphasize how far we still have to go, I tend to think, “Yes, but look at how far we’ve already come—and how fast.” Although enthusiasts like Kurzweil are probably mistaken when they claim that progress in these fields will advance indefinitely along an exponentially rising curve, it would be churlish to deny the accelerating pace of innovation in genetics, nanotech, and AI during recent decades.

Our society has gradually put in place a complex and well-funded network of institutions specifically designed to generate rapid innovation in science and technology, and the growth of these institutions has been matched by the growth of a trained workforce, as indicated by statistics from the National Science Foundation and the Bureau of Labor Statistics. In 1850, only 0.03% of the total U.S. workforce was employed in fields of science, engineering, and technology; by 1950, this figure had multiplied 36-fold, to 1.1% of the workforce; and by 2001 it had grown still further, to 4.2%.

It is just as serious a mistake to underestimate the significance of this development as it is to exaggerate it.

In the domain of pharmaceuticals, we can now control which part of a chemical substance we wish to activate, and fine-tune the interaction between its molecules and our own cellular processes. Our bioelectronic prostheses are no longer like eyeglasses, adding an external layer to our senses: they now reach deep into our nervous system, changing how it performs in its fundamental workings. Today’s genetic interventions penetrate straight to the core, using detailed genomic maps to target specific sites in DNA and redirect their functions along the lines we desire. All these technologies illustrate the directness with which humans can now manipulate the innermost workings of nature’s processes.

Even though these innovations are not advancing smoothly, exponentially, or unstoppably, they are nonetheless harbingers of something big. Figures such as Kurzweil, Joy, Bostrom, and Musk are right to get excited about them. This transformation will probably arrive piecemeal, in untidy increments and jumps, extending over a period of many decades through the middle of the 21st century. Our children and grandchildren will experience it directly.

Digital Divides, Biological Breaks

The results will be mixed. Some of the new bioenhanced capabilities will be splendid to behold (and to experience). People will live longer, healthier, more productive lives; they will connect with each other in seamless webs of direct interactivity; they will be able to fine-tune their own moods and thought-processes; they will interact with machines in entirely new ways; they will use their augmented minds to generate staggeringly complex and subtle forms of knowledge and insight.

At the same time, these technologies will also create formidable challenges. If only the rich have access to the most potent bioenhancements, this will exacerbate the already grievous rift between haves and have-nots. Competition will be keen for the most sophisticated enhancement products—for an individual’s professional and social success will be at stake. As these technologies advance, they will continuously raise the bar of “normal” performance, forcing people to engage in constant cycles of upgrades and boosts merely to keep up—“Humans 95,” “Humans XP,” “Humans 10.”

People will tend to identify strongly with their particular “enhancement profiles,” clustering together in novel social and cultural groupings that could lead to new forms of prejudice, rivalry, and outright conflict. Some bioenhancements will offer such fine-grained control over feelings and moods that they risk turning people into emotional puppets. Individuals who boost their traits beyond a certain threshold may acquire such extreme capabilities that they will no longer be recognized as unambiguously human.

Channeling Icarus

Until recently in human history, the major technological watersheds all came about incrementally, spread out over centuries or longer. Think, for example, of the shift from stone to metal tools, the transition from nomadic hunter-gathering to settled agriculture, or the substitution of mechanical power for human and animal sources of energy.

In all these cases, people and social systems had time to adapt: they gradually developed new values, new norms and habits, to accommodate the transformed material conditions. But this is not the case with the current epochal shift. This time around, the radical innovations are coming upon us with relative suddenness—in a time frame that encompasses four or five decades, a century at most.

Some of the factors propelling this process will reflect our baser nature: greed, competition, envy, and the lust for power. Others will arise out of noble sentiments: the desire to see our loved ones succeed; the thirst for novelty; the aspiration to attain higher forms of achievement, knowledge, and sensation.

These forces will be hard enough in themselves to resist, but they will be further strengthened by the heavy involvement of large-scale business interests, for whom these technologies will offer major profits. Influential libertarian voices will also add to the mix, as they invoke the inalienable right of each individual freely to modify her own body and mind as she sees fit. This nexus of impulses and ideals, economic and social forces, will generate a seemingly irresistible pressure to go faster, faster, faster.

And yet, restraint is the smarter path: the deliberate postponement of radical forms of self-modification, or human-level AI, until our society has had a chance to gauge the consequences and acclimate to them. If we permit these kinds of technologies to advance too quickly, the resultant social stresses could end up destabilizing our civilization. The likelihood of major unintended effects should impel our society to proceed slowly, and with great humility, as we go down this road.

The Singularity can wait.