"The technological singularity hypothesis is that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing or even ending civilization in an event called the singularity.[1]
Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable or even unfathomable.[2]"
...A technological singularity includes the concept of an intelligence explosion, a term coined in 1965 by I. J. Good.[15]
Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[16]
However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humanity.[17]
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of.
It could then design an even more capable machine, or re-write its own software to become even more intelligent.
This more capable machine could then go on to design a machine of yet greater capability.
These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[18][19][20]
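The loop sketched above can be made concrete with a toy numerical model (an illustration only, not anything taken from the cited sources): each generation designs a successor whose capability grows in proportion to its own, with the gain shrinking as a hard ceiling, standing in for physical and computational limits, is approached. The constants CEILING and GAIN below are arbitrary assumptions.

# Toy sketch of recursive self-improvement (illustrative only).
# `capability` is an abstract, unitless score; CEILING stands in for
# limits imposed by physics or computability. All numbers are assumptions.

CEILING = 1_000_000.0   # hypothetical hard upper bound on capability
GAIN = 0.5              # fraction of current capability turned into improvement

def next_generation(capability: float) -> float:
    """Each system designs a successor; better designers make bigger jumps,
    but returns shrink as the hypothetical ceiling is approached."""
    headroom = 1.0 - capability / CEILING
    return capability * (1.0 + GAIN * headroom)

capability = 1.0  # a rough "human-level" baseline on this toy scale
for generation in range(1, 41):
    capability = next_generation(capability)
    print(f"generation {generation:2d}: capability = {capability:,.1f}")

In this toy model the growth is roughly geometric at first and then flattens smoothly against the ceiling, which is only one of many possible shapes the real dynamics could take.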
...Existential risk
Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind rather than inadvertently producing an AI that behaves in ways its creators never intended. Nick Bostrom's whimsical example is an AI originally programmed with the goal of manufacturing paper clips which, on achieving superintelligence, decides to convert the entire planet into a paper-clip manufacturing facility.[71][72][73]
Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[74]
AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race in order to gain access to scarce resources,[65][75] and that humans would be powerless to stop them.[76]
Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[68]
Bostrom (2002) discusses human extinction scenarios and lists superintelligence as a possible cause.
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[77]
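One way to see why the invariance requirement is the hard part is a minimal sketch (purely illustrative; the agent, its goal function, and the equality test below are invented for this example and are not a real verification method): a "careful" self-improver only adopts a more powerful successor whose goal ranks outcomes exactly as its own does, while a "reckless" one adopts any stronger successor regardless of what that successor optimises.

# Minimal illustration (not a real design) of "goal structure invariant
# under self-improvement": a self-modifying agent adopts a successor only
# if the successor's goal values outcomes exactly as its own goal does.

from collections.abc import Callable
from dataclasses import dataclass

Outcome = str

@dataclass
class Agent:
    goal: Callable[[Outcome], float]   # utility assigned to an outcome
    power: float                       # abstract optimisation power

def same_goal(a: Agent, b: Agent, test_outcomes: list[Outcome]) -> bool:
    """Check goal preservation on a (hypothetical) battery of test outcomes."""
    return all(a.goal(o) == b.goal(o) for o in test_outcomes)

def careful_self_improve(agent: Agent, candidate: Agent,
                         tests: list[Outcome]) -> Agent:
    """Adopt the stronger candidate only if its goal is demonstrably unchanged."""
    if candidate.power > agent.power and same_goal(agent, candidate, tests):
        return candidate
    return agent            # otherwise keep the current, verified self

def reckless_self_improve(agent: Agent, candidate: Agent) -> Agent:
    """Adopt any stronger candidate, whatever goal it happens to optimise."""
    return candidate if candidate.power > agent.power else agent

# Example: the candidate is more capable but values outcomes differently.
tests = ["humans flourish", "planet converted to paper clips"]
current = Agent(goal=lambda o: 1.0 if o == "humans flourish" else 0.0, power=1.0)
drifted = Agent(goal=lambda o: 1.0 if "paper clips" in o else 0.0, power=10.0)

print(careful_self_improve(current, drifted, tests) is current)   # True: rejected
print(reckless_self_improve(current, drifted) is drifted)         # True: adopted

The sketch also shows why the unfriendly case is easier: the reckless variant needs no test battery and no notion of which goals are acceptable, whereas the careful variant must somehow verify goal preservation before every self-modification.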