Brian Weaver
answered on 3 Mar 2019:
last edited 3 Mar 2019 6:14 pm
Great question! I’m not a computer scientist, so the following is merely my opinion. I think machine learning is the best option for solving problems in extremely complex systems (e.g. self-driving vehicles). One risk I can imagine with AI is having a “slave-labor” force of AI robots. Would these robots have the same rights as humans?
*
Will the singularity happen within our lifetime? If so, will it be a gradual or an exponential event? Given the computational speed of computers now, I imagine that once a super-human artificial intelligence begins self-improving, the singularity will have occurred before anyone realizes what happened. Any programming to protect humans would be easily removed (hopefully we will have taught it the “good” emotions). My practical experience with programming makes me think that the singularity won’t happen in our lifetime.
*
Final thoughts… How different are humans from machines? We have sensors and a neural network, and we learn how to respond to our environment based on sensory inputs. I think the biggest difference is that humans are extraordinarily good learners. Also, is there a non-teachable soul component that will always separate humans from AI?
*
Here’s some AI humor (image not reproduced here); I’m not the author.
Yes, I think AI will be a really helpful tool in the advancement of humanity!
I am also a little worried about the potential risks, especially the possibility that AIs gain some form of consciousness while we abuse them the way we abuse animals (or worse, while believing they are ‘just stupid machines’). If they then end up literally in control of everything (think of EVERYTHING in your home being linked through the Internet of Things and controlled by your home assistant…), who knows what the consequences could be… :’D
Comments