Kahn Liberal Arts Institute
Current Short-Term Projects

Beyond Human Intelligence:
The Possibility of Technological Singularity


Organizing Fellows:
Judith Cardell, Computer Science & Engineering
James Miller, Economics

The deadline to apply for a Faculty Fellowship in this project is Friday, October 8. To apply, email the Kahn Institute's Director, Rick Fantasia (rfantasi@smith.edu) by that date. In your email, please include the title of the project, and explain why you are interested in it, what you would bring to it, and what you hope to gain from it.


Will we reach a point in the future when artificial intelligences have been enhanced so much that machines are vastly smarter than people? Will there come a time when computers and networks have advanced to such an extent that they acquire a consciousness of their own? How far in the future might that happen? And at such a point, what would happen to humans? "Technological singularity" is the name given to a possible future in which such enhanced human or artificial intelligence exists and is billions of times more powerful than human intelligence, pushing the future beyond human imagination. Although the concept may seem outlandish (or at least centuries away), or reserved for the domain of science fiction, some argue that humanity is inexorably headed toward such a destination, and much sooner than we think. Indeed, there are those who think that a state of "technological singularity" may be reached by the middle of this century.

Beyond Science Fiction

The most basic argument for the plausibility of computer superintelligence, capable of reaching and even exceeding human intelligence, is that anything that exists can in principle be modeled by a computer program. Thus, a sufficiently detailed computer program that models the human brain could develop human-like intelligence. Current research is rapidly improving our understanding of how the brain operates, and it seems likely that we will soon have enough information about the human brain to create an accurate and comprehensive model of its structure and function, a model that could be used to build a machine that works in the same way: an "artificial general intelligence" (AGI). Given that we have access to the "source code" of the human brain (DNA), that we are getting exponentially better at reading it, and that we can perform increasingly detailed scans of the brain, some singularity proponents believe that by 2030 we will be able to create a machine with human-level intelligence.

Once a working model has been created to match human intelligence, the argument goes, it should be possible to surpass it relatively quickly, since computing power advances much faster than almost any other technology. Coupled with AGI, such a dramatic amplification of computer performance would create staggeringly fast applications of intelligence, capable of remaking human civilization. On one level, remarkably complex tasks could be accomplished within remarkably short time frames: writing a brilliant novel instantly, creating pharmaceuticals that anticipate all possible viral or bacterial mutations anywhere in the world, or designing unimaginably advanced systems of transportation. But such capacities would also permit the design of new and more powerful weapons (material, symbolic, biological) that could exert and enforce unimaginable degrees of control, repression, and exploitation over human populations. Indeed, a super-enhanced ability to manipulate human genetic material creates the possibility of wholly fabricated populations of human-machine hybrids.

Physicist Stephen Hawking warns that AGI poses an existential threat to humanity, asserting, "In contrast with our intellect, computers double their performance every 18 months.... So the danger is real that they could develop intelligence and take over the world.... We must develop as quickly as possible technologies that make possible a direct connection between brain and computer so that artificial brains contribute to human intelligence rather than opposing it."
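
The doubling rate Hawking cites compounds quickly. As a rough, illustrative sketch (assuming only the constant 18-month doubling period from the quote above; the function name and time horizons are hypothetical), the fold-increase after t years is 2^(t/1.5):

    # Illustrative sketch of the compound growth implied by a fixed
    # 18-month doubling period (the figure from the Hawking quote).
    DOUBLING_PERIOD_YEARS = 1.5  # assumed: 18 months

    def performance_multiplier(years: float) -> float:
        """Fold-increase in computing performance after `years`."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for horizon in (10, 20, 40):
        print(f"after {horizon} years: ~{performance_multiplier(horizon):,.0f}x")
    # -> ~102x after 10 years, ~10,321x after 20, ~107 million x after 40

At that rate, performance grows roughly a hundredfold per decade, which is the arithmetic behind arguments that such a transformation could arrive within decades rather than centuries.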

It seems that the very prospect of such a state of "technological singularity" can quickly erode any walls in our collective imagination separating utopian possibilities from dystopian applications. We think that there is an intellectually fruitful and important discussion to be had by tackling this issue frankly, rationally, and analytically. This short-term Kahn Institute project will be such a discussion. In it we will consider arguments by proponents and skeptics of a possible state of technological singularity, with attention to how, when, or whether it might occur, as well as to the broad range of potential outcomes. We welcome to this discussion the ideas and thinking of colleagues from the arts and humanities, the social sciences, and the physical sciences.

PROJECT SCHEDULE:

  • Friday, November 12, 2-6pm
  • Saturday, November 13, 9am-4pm