Richard Dawkins on artificial intelligence, agnosticism, and utopia / Boing Boing
Evolutionary biologist and “passionate rationalist” Richard Dawkins has a new anthology of essays out today, titled Science in the Soul. Over at Scientific American, John Horgan posted an interview with Dawkins in which the two discuss a range of topics, from A.I. to agnosticism. From SciAm:
At the Templeton (Foundation) meeting, you described yourself as an agnostic, because you cannot be certain that God does not exist, correct?
This is a semantic matter. Some people define atheism as a positive conviction that there are no gods and agnosticism as allowing for the possibility, however slight. In this sense I am agnostic, as any scientist would be. But only in the same way I am agnostic about leprechauns and fairies. Other people define agnosticism as the belief that the existence of gods is as probable as their nonexistence. In this sense I am certainly not agnostic. Not only are gods unnecessary for explaining anything, they are overwhelmingly improbable. I rather like the phrase of a friend who calls himself a “tooth fairy agnostic”—his belief in gods is as strong as his belief in the tooth fairy. So is mine. We live our lives on the assumption that there are no gods, fairies, hobgoblins, ghosts, zombies, poltergeists or any supernatural entities. Actually, it is not at all clear what supernatural could even mean, other than something which science does not (yet) understand….


…Do you share the concerns of billionaire entrepreneur Elon Musk, who has said that artificial intelligence might pose an “existential risk” to humanity?
Elon Musk is a 21st-century genius. You have to listen to what he says. I am philosophically committed to “mechanistic naturalism,” from which follows the conclusion that anything humans can do, machines can in principle do, too. In many cases we already know they can do it better. Whether they can do it better in all cases remains to be seen, but I wouldn’t bet against it. The precautionary principle should lead us to behave as though there is a real danger—a danger we should take immediate steps to forestall. Unless, that is, we think robots could do a better job of running the world than we can. And a better job of being happy and increasing the sum of sentient happiness…

Read the full article from the Source…
