Category: AI
-
What Does It Mean for AI to Understand?
Melanie Mitchell has a great article, just announced at Complexity Digest: language models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning. How can we test whether these machines actually understand what they’re doing? Read the full article at https://www.quantamagazine.org/what-does-it-mean-for-ai-to-understand-20211216/ Understanding language requires understanding the world, and a machine exposed only…
-
World Model – “Free Energy” Selections of Perception & Policy
Throughout their lives, humans constantly interact with the physical environment, as well as with themselves and others. World model learning and inference are crucial concepts in brain and cognitive science, as well as in AI and robotics. The outstanding challenge of building a general-purpose AI requires world modelling and probabilistic inference, needed to realise a brain-like…
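As a pointer to what “free energy” refers to in the title: the standard variational free energy that active-inference accounts minimise for both perception and policy selection. The notation below is the generic textbook form, not taken from the linked post.

```latex
% Variational free energy for observations o, hidden states s,
% generative model p(o, s) and approximate posterior q(s):
F(q) = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     = D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] - \ln p(o)
% Minimising F over q(s) corresponds to perception (inference over hidden states);
% minimising the expected free energy over policies corresponds to action selection.
```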
-
Designing for Human-AI interaction is hard. (So steal like an artist :-)
Following is an interesting article/blog, just “stolen like an artist” from https://www.simonoregan.com/short-thoughts/the-design-difficulties-of-human-ai-interaction Designing for Human-AI interaction is hard. Here Yang et al. catalog where designers run into problems when applying the traditional 4Ds process to designing AI systems. These difficulties can be broadly attributed to two sources: This uncertainty and complexity combination then…
-
VVUQ your model and twin – should we trust?
The world is moving towards digital twins. I recently came across an insightful article: A probabilistic graphical model foundation for enabling predictive digital twins at scale, available on arXiv and published in Nature. The digital twin is a set of coupled computational models that evolve over time to persistently represent the structure, behavior, and context…
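To make the “coupled computational models that evolve over time” concrete, here is a minimal sketch of the kind of sequential Bayesian belief update a probabilistic-graphical-model digital twin performs; the states, transition matrix, observation model, and sensor sequence below are invented for illustration and are not taken from the paper.

```python
# Generic sketch of a digital-twin belief update over discrete health states.
# All states, matrices, and observations below are made up for illustration.
import numpy as np

states = ["healthy", "degraded", "failed"]      # hypothetical twin states
belief = np.array([0.9, 0.09, 0.01])            # prior belief over states

# Transition model p(s_t | s_{t-1}): slow degradation over time (assumed).
T = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

# Observation likelihood p(o | s) for a coarse sensor reading (assumed):
# column 0 = "nominal" reading, column 1 = "anomalous" reading.
O = np.array([[0.90, 0.10],
              [0.40, 0.60],
              [0.05, 0.95]])

def twin_step(belief, obs_idx):
    """One predict-update cycle: propagate dynamics, then assimilate data."""
    predicted = T.T @ belief                 # predict: p(s_t | data so far)
    posterior = O[:, obs_idx] * predicted    # update: weight by likelihood
    return posterior / posterior.sum()       # normalise to a distribution

for obs in [0, 0, 1, 1]:                     # a short made-up sensor sequence
    belief = twin_step(belief, obs)
    print(dict(zip(states, belief.round(3))))
```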
-
Learning: Brain vs xNN
The neural and cognitive architecture for learning from a small sample is a nice article I would like to recommend. It highlights how human learners avoid generalization issues found in machine learning and proposes a general model explaining how the brain may simplify complex problems. Synergy between cognitive functions and reinforcement learning allows simplification. Recurrent loops between…
-
Ready, Steady, Go AI
AI is a potential solution to the challenge of turning big phenomics data into insights. Ready, Steady, Go AI is an interactive tutorial for disease classification, significant as a foundation for achieving digital phenomics. It is designed as open-source, freely available code in virtual lab notebooks to empower not only…
-
Skills in Space
Mind in Motion: How Action Shapes Thought is a great book by Barbara Tversky. In this book, she argues that spatial thinking is the foundation of all thought, including abstract thinking. When there are too many thoughts to hold in mind, we put those thoughts into the world in various ways, and the way we put…
-
Does the quality of “Smart Information Processing” connect with the Default Mode Network?
I was triggered by an article in Nature explaining the atypical connectome hierarchy in ASD (Autism Spectrum Disorder). ASD is characterized by atypical sensory processing, along with deficits in high-level cognitive and social functions, including impairments in Theory of Mind and predictive abilities. The article points out that ASD might emerge from disturbances in macroscale cortical…
-
The Information Lens – helping to correct the failure of the Perceptron
In the rich history of information processing, the idea of the perceptron emerged, based on the founding idea of the artificial neuron (McCulloch and Pitts, 1943). Incorporating the then-available knowledge of learning, Frank Rosenblatt constructed the perceptron device in 1957, building one of the first artificial learning machines and, as such, creating the first neural nets. As referenced in “Calling Bullshit” (Ch. 8, intro),…
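For a concrete picture of what that first learning machine does, and where it fails, here is a minimal sketch of Rosenblatt’s perceptron learning rule in Python; it is illustrative only and not taken from the post or the book. The XOR case at the end is the classic linearly non-separable problem a single perceptron cannot solve, which is the “failure” alluded to in the title.

```python
# Minimal perceptron sketch (illustrative only).
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Rosenblatt's rule: nudge weights whenever a sample is misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: learnable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: the perceptron fails

w, b = train_perceptron(X, y_and)
print("AND:", predict(X, w, b))  # matches y_and

w, b = train_perceptron(X, y_xor)
print("XOR:", predict(X, w, b))  # cannot match y_xor for any choice of weights
```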
-
AI: Analogy Included?
There is a great Quanta article on Melanie Mitchell, discussing her effort to bring analogy into AI. Some quotes: “Today’s state-of-the-art neural networks are very good at certain tasks, but they’re very bad at taking what they’ve learned in one kind of situation and transferring it to another” — the essence of analogy. “Analogy isn’t…
