This sentence is false

Why AI is Harder Than We Think

The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a headline in The Guardian predicted that “From 2020 you will become a permanent backseat driver.” In 2016 Business Insider assured us that “10 million self-driving cars will be on the road by 2020.” Tesla Motors CEO Elon Musk promised in 2019 that “A year from now, we’ll have over a million cars with full self-driving, software…everything” […]

None of these predictions has come true. […]

Like all AI systems of the past, deep-learning systems can exhibit brittleness: unpredictable errors when facing situations that differ from the training data. This is because such systems are susceptible to shortcut learning: learning statistical associations in the training data that allow the machine to produce correct answers, but sometimes for the wrong reasons. In other words, these machines don’t learn the concepts we are trying to teach them; rather, they learn shortcuts to correct answers on the training set, and such shortcuts will not lead to good generalizations. Indeed, deep-learning systems often cannot learn the abstract concepts that would enable them to transfer what they have learned to new situations or tasks. Moreover, such systems are vulnerable to attack from “adversarial perturbations”: specially engineered changes to the input that are either imperceptible or irrelevant to humans, but that induce the system to make errors.
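To make the shortcut-learning failure mode concrete, here is a minimal sketch in Python; the paper itself contains no code, so the toy dataset, the NumPy/scikit-learn stack, and every name below are illustrative assumptions, not anything from the source. A logistic regression is trained on data in which a spurious feature copies the label exactly while the genuine signal is noisy; when that spurious correlation is broken at test time, accuracy collapses toward chance:

# Hypothetical toy demo of shortcut learning (not from the paper).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
y_train = rng.integers(0, 2, n)

# Training set: feature 0 carries the genuine but noisy signal;
# feature 1 is a clean shortcut that simply copies the label.
X_train = np.column_stack([
    y_train + rng.normal(0.0, 2.0, n),   # genuine signal, heavily noised
    y_train.astype(float),               # spurious shortcut == label
])

# Test set: same genuine signal, but the shortcut is now random,
# so any model that leaned on it is misled.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    y_test + rng.normal(0.0, 2.0, n),
    rng.integers(0, 2, n).astype(float),  # shortcut decorrelated from label
])

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # ~1.0: the shortcut works
print("test accuracy:", clf.score(X_test, y_test))     # near chance: the shortcut fails

The same over-reliance on incidental statistics of the input is what adversarial perturbations exploit: a small, targeted change to exactly the features the model depends on can flip its output, even when a human would notice nothing.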

{ arXiv | Continue reading }