16 novembre 2024
2H19
Zaps
The existential horror of the paperclip factory
In 1960, Norbert Wiener described the alignment problem like this:
“If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere... we had better be quite sure that the purpose put into the machine is the purpose which we really desire.”
Many researchers are concerned about the alignment of future artificial general intelligences (also called human-level AI) and artificial superintelligences, hypothetical agents whose capabilities would far exceed human performance in most domains.
According to some scientists, creating a misaligned superhuman general AI could challenge humanity's position as the dominant species on Earth, potentially leading to a loss of control or even human extinction...
This website displays no ads and exists only thanks to donations.