Saturday, June 3, 2017

Christof Koch, Artificial Intelligence, The Fermi Paradox & The Inevitability (?) Of Self Destruction


The Spiritual Reductionist Consciousness Of Christof Koch
What The Neuroscientist Is Discovering Is Both Humbling And Frightening Him

But you really believe artificial intelligence could develop a certain level of complexity and wipe us out?
This is independent of the question of computer consciousness. Yes, if you have an entity that has enough AI and deep machine learning and access to the Cloud, etc., it’s possible in our lifetime that we’ll see creatures that we can talk to with almost the same range of fluidity and depth of conversation that you and I have. Once you have one of them, you replicate them in software and you can have billions of them. If you link them together, you could get superhuman intelligence. That’s why I think it behooves all of us to think hard about this before it may be too late. Yes, there’s a promise of untold benefits, but we all know human nature. It has its dark side. People will misuse it for their own purposes.
How do we build in those checks to make sure computers don’t rule the world?
That’s a very good question. The only reason we don’t have a nuclear bomb in every backyard is because you can’t build it easily. It’s hard to get the material. It takes a nation state and tens of thousands of people. But that may be different with AI. If current trends accelerate, it may be that 10 programmers in Timbuktu could unleash something truly malevolent onto mankind. These days, I’m getting more pessimistic about the fate of a technological species such as ours. Of course, this might also explain the Fermi paradox.
Remind us what the Fermi paradox is.
We have yet to detect a single intelligent species, even though we know there are probably trillions of planets. Why is that? Well, one explanation is it’s just extremely unlikely for life to arise and we’re the only one. But I think a more likely possibility is that any time you get life that’s sufficiently complex, with advanced technology, it has somehow managed to annihilate itself, either by nuclear war or by the rise of machines.
You are a pessimist! You really think any advanced civilization is going to destroy itself?
If it’s very aggressive like ours and it’s based in technology. You can imagine other civilizations that are not nearly as aggressive and live more in harmony with themselves and nature. Some people have thought of it as a bottleneck. As soon as you develop technology to escape the boundary of the planet, there’s an argument that civilization will also develop computers and nuclear fusion and fission. Then the question is, can it grow up? Can it become a full-grown, mature adult without killing itself?

Neuroscientist Christof Koch's Homepage
