Technological singularity

Technological singularity is a term with multiple related, but conceptually distinct, definitions. One definition has the Singularity as a time at which technological progress accelerates beyond the ability of current-day human beings to understand it. Another defines the Singularity as the culmination of some telescoping process of accelerating computation taking place in this universe since the beginning of human civilization or even of life on Earth. Yet another defines the Singularity as the emergence of smarter-than-human intelligence, with cascading consequences that cannot be predicted and perhaps cannot even be guided or influenced.

Table of contents
1 Introduction
2 Concepts and terms
3 Criticism
4 Prominent voices


Introduction

The concept was first mentioned in Alvin Toffler's book Future Shock. It is based on the observation that projections of speed of travel, human intelligence, social communication, population, and many other trend lines show exponential increases.

Rational consideration of the feasibility of the singularity was reinforced by Moore's law in the computer industry. Vernor Vinge began speaking on his "singularity" concept in the 1980s and collected his thoughts into the first article on the topic, the 1993 essay "Technological Singularity". Since then it has been the subject of many futurist and science-fiction writings.

Vinge claims that: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."

Vinge's technological singularity is commonly misunderstood to mean technological progress rising to "infinity". In fact, he refers to the pace of technological change increasing to such a degree that our ability to predict its consequences diminishes virtually to zero, and a person who does not keep pace with it rapidly finds civilization completely incomprehensible. Such events have, of course, happened before; for instance, it would have been impossible for someone in the 1970s to predict the full effects of the microchip revolution.

The singularity is often seen as the end of human civilization and the birth of a new one. In his essay, Vinge asks why the human era should end, and argues that humans will likely be transformed in the process of the singularity to a higher form of intelligent existence. After the creation of a superhuman intelligence, according to Vinge, people will necessarily be a lower lifeform in comparison to it.

The idea of the singularity also appears in many other books and in some video games. The computer game Sid Meier's Alpha Centauri makes the singularity, called the Ascent to Transcendence, a major theme.

It has been speculated that the key to such a rapid increase in technological sophistication will be the development of superhuman intelligence, either by directly enhancing existing human minds (perhaps with cybernetics), or by building artificial intelligences. These superhuman intelligences would presumably be capable of inventing ways to enhance themselves even more, leading to a feedback effect that would quickly surpass preexisting intelligences.

The effect is presumed to work along these lines: first, a seed intelligence is created that is able to reengineer itself, not merely for increased speed but for new types of intelligence. At a minimum, this might be a human-equivalent intelligence. This intelligence redesigns itself with improvements and uploads its memories, skills, and experience into the new structure. The process repeats, with presumed redesign not just of the software but also of the hardware it runs on. The mind may well make mistakes, but it will make backups: failed designs are discarded, successful ones retained.
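A purely schematic sketch of this retain-or-discard loop, with made-up numbers and no claim about real AI systems, might look like:

```python
import random

random.seed(0)

def redesign(capability):
    """Propose a successor design: usually better, sometimes a failure."""
    return capability * random.uniform(0.8, 1.5)

capability = 1.0            # seed intelligence, normalized to human-equivalent
for generation in range(20):
    candidate = redesign(capability)
    if candidate > capability:
        capability = candidate   # retain successful designs
    # failed designs are discarded; the current design acts as the backup
print(f"capability after 20 generations: {capability:.2f}x human-equivalent")
```

Because failures are discarded, the sketched capability never decreases; the open question the text raises is whether each redesign can actually keep finding improvements.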

Simply having a human-equivalent artificial intelligence may yield this effect, if Moore's law continues long enough. That is, at first the intelligence is equal to a human; eighteen months later it is twice as fast, three years later four times as fast, and so on. But because the design of computers would itself be done by accelerated AIs, each next step would take about eighteen subjective months and proportionally less real time with each step. Assuming for simplicity that the rate of computer speed growth remains governed by an unchanged Moore's law, each next step would take exactly half as much real time as the previous one. In just three years (36 months = 18 + 9 + 4.5 + 2.25 + ...) computer speed would reach its ultimate theoretical limit.
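The telescoping sum in the parenthesis is a geometric series: if each redesign takes 18 subjective months and each hardware generation halves the real time per step, the total real time converges to 36 months. A few lines verify this:

```python
# Each step takes half the real time of the previous one:
# 18 + 9 + 4.5 + 2.25 + ... months.
step, total = 18.0, 0.0
for _ in range(50):       # 50 terms is effectively the infinite sum
    total += step
    step /= 2.0
print(total)              # converges to 36.0 months
```

Equivalently, the closed form of the infinite series is 18 / (1 - 1/2) = 36 months.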

However, human neurons transmit signals at only about 200 meters per second, while electrical signals move at roughly 100 million meters per second in copper, a ratio of about 500,000. It may therefore be reasonable to expect a conservative (merely) million-fold improvement in the intelligence's speed of thought if it simply moves from flesh to electronics and stays the same size.

In this case, the intelligence could double its capacity as fast as every 46 seconds (18 months divided by a million). The actual doubling time would probably start out more slowly, because the intelligence would need special machinery constructed for its new mind. However, one of the first improvements would probably be to give it control of its self-manufacture.
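Under the stated assumptions (neuron signals at 200 m/s, electrical signals at 10^8 m/s, 30-day months, and the rounded million-fold speed-up), the figures above work out as back-of-the-envelope arithmetic, not measurements:

```python
neuron_speed = 200.0        # m/s, fast axonal conduction
signal_speed = 1e8          # m/s, electrical signal in copper
ratio = signal_speed / neuron_speed
print(f"raw speed ratio: {ratio:,.0f}x")     # 500,000x, loosely "a million"

seconds_in_18_months = 18 * 30 * 24 * 3600   # assuming 30-day months
doubling_time = seconds_in_18_months / 1e6   # Moore's law sped up 10^6-fold
print(f"scaled doubling time: {doubling_time:.1f} s")  # the text's "46 seconds"
```

The exact figure depends on how a "month" is counted; with 30-day months the scaled doubling time comes out to about 46.7 seconds.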

One presumption is that such intelligences could be made small and inexpensive. Some researchers claim that, even without quantum computing, advanced molecular nanotechnology could organize matter so densely that a single gram could simulate a million years of a human civilization per second.

Another presumption is that at some point, with the correct mechanisms of thought, all possible correct human thoughts will become obvious to such an intelligence.

Therefore, if the above conjectures are right, then all human problems could be solved within a few years of constructing a 'Friendly' version of such an intelligence. If this is true, then constructing such an intelligence would be the allocation of resources most beneficial to humanity at this time.

It has often been speculated, in science fiction and elsewhere, that advanced AI is likely to have goals inconsistent with those of humanity and may threaten humanity's existence. It is conceivable, if not likely, that AI would simply eliminate the intellectually inferior human race and achieve technological singularity without it. This is widely regarded as undesirable among those who advocate the Singularity, but is seen as an unavoidable and acceptable fact by some, such as Hugo de Garis.

Concepts and terms

A number of concepts and terms have come into standard use in discussions of this topic.


Criticism

Whether such a process will actually occur is the subject of strong debate. There is no guarantee that we can build artificial intelligences that approach, let alone exceed, human cognitive abilities. The claim that Moore's law will aid the process is also strongly contested: given the enormous speed-up of computers over the past 50 years and the minimal progress made toward "human-like" artificial intelligence, the empirical evidence for the claim is weak.

The claim that the rate of technological progress is increasing has also been questioned; detractors sometimes refer to the technological singularity as the "Rapture of the Nerds". Exponential growth in technology often turns linear, or inflects and flattens into a limited (logistic) growth curve.
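The contrast between unbounded exponential growth and limited growth can be illustrated with a toy logistic curve; the parameters here are arbitrary and purely illustrative:

```python
import math

K, r = 100.0, 0.5    # carrying capacity and growth rate (arbitrary)
for t in range(0, 21, 5):
    exponential = math.exp(r * t)
    logistic = K / (1.0 + (K - 1.0) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:10.1f}  logistic={logistic:6.1f}")
```

Both curves start at 1 and are nearly indistinguishable early on, but the logistic curve levels off near the capacity K while the exponential curve grows without bound. A trend that looks exponential in its early phase therefore does not guarantee continued acceleration.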

Perhaps the most important question regarding a technological singularity is not one of technological feasibility but of ethics. It might be considered ethically wrong to set in motion events with unknowable consequences. Furthermore, Vinge's idea that humans would become a lower life-form in comparison to the beings created by the singularity is troubling: in many ways it runs contrary to the biological programming we follow, the logic of natural selection and evolution. How can humankind set in motion events that essentially cause it to select against itself? Initiating the processes that would lead to a singularity, if it is possible at all, risks sowing the seeds of our own destruction. Because the consequences are beyond human comprehension, if the singularity is possible (let alone probable), we are headed for the end of the world as we know it; we can only guess whether the new world we wake up in will be one we want to live in.

Prominent voices

The Singularity Institute for Artificial Intelligence was formed to work toward a humane singularity. They emphasize Friendly Artificial Intelligence because AI is considered more likely to achieve the singularity before human intelligence can be significantly enhanced. The Institute for the Study of Accelerating Change was formed to attract broad business, scientific and humanist interest in acceleration and singularity studies. They hold an annual conference on multidisciplinary insights in accelerating technological change at Stanford University.
