Artificial intelligence may well be better than natural stupidity. As in that great old science-fiction movie, if AI were to run the world, probably the first decision it would take would be to destroy all nuclear weapons, while we now see that natural-stupidity rulers (and not robots) are constantly increasing their stockpiles of atomic bombs.
Humans and computers are going through a transformative time, but there are still many more questions than answers. Nick Bostrom, founder of the Oxford Martin Programme, and Gray Scott, futurist and technology expert, debate the possible effects of this change.
Elon Musk, the CEO of SpaceX and Tesla, is backing a new idea that seeks to explore the possibility of connecting the human brain with artificial intelligence (AI).
Not everybody agrees with Mr. Musk, however, that this sweeping technological transformation is automatically a good thing for the human race. But like it or not, it seems there is no chance of reversing course.
RT: Whatever the scientists might describe as benefits, more people have seen The Terminator and The Matrix than have read New Scientist. Is it far-fetched for us to worry about the trajectory of humans and their relationship to computers?
Gray Scott: Well, first of all we have to remember that we’re a long way from the Matrix. Even if we wanted to do that now, it is just not possible. These are the primary stages. What Elon Musk is talking about is the primary stage of trying to use the brain to control other machines. So you won’t be able to download a library into your brain anytime soon.
RT: Nick, anything troubling you about this?
Nick Bostrom: I am skeptical that the way to control computers in the long term will be by plugging electronics into our brains. I think there are certain medical applications that can be exciting for people with Parkinson’s, epilepsy, and depression. But it is actually quite difficult to enhance the capabilities of a normal, healthy human brain, and most of the functionality that you might want to get from these implants you could get equally well by having the computer outside of your body and interacting with it using your eyeballs and your fingertips. It seems the bottleneck is not so much the rate at which we can funnel information in and out of the brain, but rather the brain’s ability to interpret and make sense of that information.
RT: Neuralink's technology will probably not come cheap. It claims it can improve cognitive ability, but surely only for those who can afford it. Down the line, might we have an app-enabled wealthy class with everyone else left behind?
GS: There are a lot of ethical implications of this technology. The fact is that we are transforming as a species into a technological species – that is a fact. The uptake of technology is usually from the top down – we’ve known that throughout history. I don’t expect that this is going to be something that is available to everyone in the beginning. There will be a trickle-down effect with this technology. The fact is that it is a transformative time on this planet and we are transforming into a new technological species. So even if you wanted to stop this, now that Elon Musk has sort of put it on the map, I don’t think you could stop it anyway.
RT: Nick, are you ready to be a robotic human?
NB: Well, if it really worked, if it gave me genuine benefits and limited risks, I’d love a life-extension pill or a cognitive-enhancement cocktail, or a brain implant for that matter. I think the real obstacles are the technical challenges in actually getting it to work. A lot of the time it is the little details. Current brain implants kind of work in principle, but they tend to move around, they are not very stable, and there is collateral damage when you implant them. So you have to get all of those things right – you don’t want infections when you drill through the skull. We see it with a lot of other medical technologies; sometimes the concept is quite simple, but when you actually try to roll it out there are all of these complications.