"4 key misunderstandings in AI" is an interesting blog entry discussing the following topics:
– Narrow AI and general AI are not on the same scale
– The easy things are hard to automate
– Anthropomorphizing AI doesn’t help
– AI without a body
– Common sense in AI
The post is based on a great paper by Melanie Mitchell titled "Why AI is Harder Than We Think."
Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts.
The four fallacies reveal flaws in how we conceptualize the current state of AI, and expose our limited intuitions about the nature of intelligence. Mitchell argues that these fallacies are at least part of why capturing humanlike intelligence in machines always turns out to be harder than we think.
Fallacy 1: Narrow intelligence is on a continuum with general intelligence
Fallacy 2: Easy things are easy and hard things are hard
Fallacy 3: The lure of wishful mnemonics
Fallacy 4: Intelligence is all in the brain
These fallacies raise several questions for AI researchers:
– How can we assess actual progress toward "general" or "human-level" AI?
– How can we assess the difficulty of a particular domain for AI as compared with humans?
– How should we describe the actual abilities of AI systems without fooling ourselves and others with wishful mnemonics?
– To what extent can the various dimensions of human cognition (including cognitive biases, emotions, objectives, and embodiment) be disentangled?
– How can we improve our intuitions about what intelligence is?
A video lecture on this paper is also available.