Understanding natural language is what is sometimes called 'AI-complete,' meaning that if you can really do that, you can probably solve artificial intelligence.
Machine learning is looking for patterns in data. If you start with racist data, you will end up with even more racist models. This is a real problem.
If you believe everything you read, you are probably quite worried about the prospect of a superintelligent, killer AI.
When you think of driverless cars, there's a huge potential for these cars to save lives by preventing accidents and by reducing congestion on highways.
The truth is that behind any AI program that works is a huge amount of, A, human ingenuity and, B, blood, sweat and tears. It's not the kind of thing that suddenly takes off like 'Her' or in 'Ex Machina.'
I don't think that all the coal miners - or even more realistically, say, the truck drivers whose jobs may be put out by self-driving cars and trucks - they're all going to go and become web designers and programmers.
To take intellectual risks is to think about something that can't be done, that doesn't make any sense, and go for it responsibly.
Sooner or later, the U.S. will face mounting job losses due to advances in automation, artificial intelligence, and robotics.
Our highways and our roads are underutilized because of the allowances we have to make for human drivers.
It's hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I'd have to guess that talking about black holes gets boring after a while - it's a slowly developing topic.
To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations.
We don't want A.I. to engage in cyberbullying, stock manipulation, or terrorist threats; we don't want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don't want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.