Melanie Mitchell has a great article, just announced at Complexity Digest:
Language models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning. How can we test if these machines actually understand what they’re doing?
Read the full article at: www.quantamagazine.org
Understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding.
Training and evaluating machines for baby-level intelligence may seem like a giant step backward compared to the prodigious feats of AI systems like Watson and GPT-3.
But if true and trustworthy understanding is the goal, this may be the only path to machines that can genuinely comprehend what “it” refers to in a sentence, and everything else that understanding “it” entails.
Consider what it means to understand “The sports car passed the mail truck because it was going slower.” You need to know what sports cars and mail trucks are, that cars can “pass” one another, and, at an even more basic level, that vehicles are objects that exist and interact in the world, driven by humans with their own agendas.
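The “sports car” sentence is a Winograd-schema-style test: swapping a single word (“slower” for “faster”) flips which noun “it” refers to, while the surface form of the sentence stays the same. A minimal sketch of why this defeats shallow cues, using a hypothetical schema list and a deliberately naive baseline resolver (both illustrative, not from the article):

```python
# Two Winograd-style variants: identical structure, opposite referents for "it".
SCHEMAS = [
    {
        "sentence": "The sports car passed the mail truck because it was going slower.",
        "candidates": ["the sports car", "the mail truck"],
        "answer": "the mail truck",
    },
    {
        "sentence": "The sports car passed the mail truck because it was going faster.",
        "candidates": ["the sports car", "the mail truck"],
        "answer": "the sports car",
    },
]

def naive_resolver(sentence: str, candidates: list[str]) -> str:
    """Pick whichever candidate noun phrase appears first in the sentence.

    Both variants have identical word order, so any purely surface-level
    heuristic like this one must get at least one of the two wrong.
    """
    lowered = sentence.lower()
    return min(candidates, key=lowered.index)

correct = sum(
    naive_resolver(s["sentence"], s["candidates"]) == s["answer"] for s in SCHEMAS
)
print(f"naive baseline: {correct}/{len(SCHEMAS)} correct")  # prints "naive baseline: 1/2 correct"
```

Because the two variants differ only in one word, resolving “it” correctly in both requires the world knowledge Mitchell describes (what passing means, which vehicle must be slower), not pattern matching over the text alone.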
All this is knowledge that we humans take for granted, but it’s not built into machines or likely to be explicitly written down in any of a language model’s training text. Some cognitive scientists have argued that humans rely on innate, pre-linguistic core knowledge of space, time and many other essential properties of the world in order to learn and understand language. If we want machines to similarly master human language, we will need to first endow them with the primordial principles humans are born with. And to assess machines’ understanding, we should start by assessing their grasp of these principles, which one might call “infant metaphysics.”