Should We Fear Artificial Intelligence?
Many people are afraid that teaching machines to think like humans will eventually lead the machines we build to overtake us. They wonder whether we will be safe in a world full of robots educated to surpass human intelligence.
Kurzweil sees the Singularity as an event to look forward to, one that will help humans advance rather than one to fear. Speaking at SXSW in 2017, the Google executive said, “We’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music. We’re going to be sexier. We’re really going to exemplify all the things that we value in humans to a greater degree.”
When put in those terms, General AI doesn’t sound like such a bad or scary thing.
One thing to keep in mind with artificial intelligence is that computers, even those that achieve General AI, will still rely on people to program them.
Artificial intelligence learns only the information programmers feed it. This is reassuring as long as those programmers act in the best interest of human society. Computers and machines rely on the help of humans because they do not have infinite knowledge; they need us to program them and to tell them what to think and how to act.
If you still feel some anxiety or fear about AI, you aren’t alone. But remember that the behavior of AI machines doesn’t depend on their intellect. It depends on their creators and the information those creators want them to learn. There is nothing in between.
I believe that the main reason people fear AI is that they worry about losing control over various aspects of their lives. This is a very human response. After all, we are autonomous beings who value the concepts of independent thought and freedom to make decisions and take actions.
It certainly doesn’t help that prominent figures in the field of technology are ringing alarm bells about AI. In a live Q&A session in 2014, Tesla and SpaceX founder Elon Musk called artificial intelligence “our biggest existential threat” and argued that “there should be some regulatory oversight, maybe at the national and international level just to make sure that we don’t do something very foolish.”
So, should we be scared of artificial intelligence?
I believe not; there is nothing in the technology itself that presents a danger. Remember, robots are not the ones thinking original thoughts. They are merely performing tasks in the best way they know how, based on what their creators and developers have taught them.
So the real potential threat of AI technology doesn’t come from the machines themselves but from whoever is developing them.
This gets to the heart of why people are scared of AI. There are two ways of looking at it.
1. The machines are going to become more intelligent than humans and will band together to destroy us.
2. When put into the wrong hands, artificial intelligence can be used for evil purposes that we will be powerless to defeat.
My goal here is not to say that people who hold these fears are wrong or that the fears are unfounded. Indeed, when we begin to think about the heretofore unexplored and untested world of General AI, there is a definite need to proceed with caution and care.
Instead of adding more fuel to the fire, I want to propose solutions for handling these fears and turning the challenges inherent in creating human-like minds inside of machines into positives for the human race.