John McCarthy is reported to have said about artificial intelligence:
“As soon as it works, no one calls it AI any more.”
Melanie Mitchell is a Professor at the Santa Fe Institute - a very interesting place to work, indeed. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. I have recently finished her latest book, Artificial Intelligence: A Guide for Thinking Humans, and I enjoyed it a lot. It was in her work, in fact, that I found the McCarthy quote that opens this post. As you might guess, I'm an avid reader of AI- and robotics-related books, blogs, and articles, so I consider myself to have some basis for comparing between them.
Composed of interweaving stories about the science of AI and the people behind it, the book brims with clear-sighted, captivating, and accessible accounts of the most interesting and provocative modern work in the field, flavored with Mitchell's humor and personal observations.
Most introductory AI books offer yet another twist on the worn-out matchups of Deep Blue versus Kasparov (I wonder if the man from Baku is sick of being asked about it), AlphaGo, and Alan Turing. Mitchell, however, steps forward and goes a bit deeper technically, while keeping an accessible writing style.
The book starts with a historical background, one I had not often read, of the origins of the symbolic programming and neural network approaches. After the famous Dartmouth Summer Research Project on Artificial Intelligence in the summer of 1956, researchers held different points of view about this discipline. This historical grounding makes for a worthy and compelling narrative in itself. Throughout the book, however, there are also ample contemporary topics explored in great detail, such as AI applications in image recognition, autonomous vehicles, voice recognition, and the impressive translation that today's popular search engines now provide.
Many of the challenges of creating fully intelligent machines come down to the paradox, popular in AI research, that “easy things are hard”. Computers have famously vanquished human champions in chess and in Jeopardy, but they still have trouble, say, figuring out whether or not a given photo includes an animal. Machines are as yet incapable of generalizing, understanding cause and effect, or transferring knowledge from situation to situation, skills that we Homo sapiens begin to develop in infancy.
My only regret is that the book was published before LLMs came out, so its treatment of natural language understanding is based on older theories. Let's hope the author updates this part in a future edition.
To sum up, this work will mainly interest technologists who are exploring the computational and technological foundations of AI and the present implications these bring to the digital era.
Moreover, Mitchell also writes a blog, which I strongly recommend, where she demystifies many of today's hype headlines.
That quote from McCarthy was visionary and, for me, remains absolutely current in certain circles that are more reactionary toward the development of AI.
The "AI effect" was popularized by Larry Tesler's expression: “Intelligence is whatever machines haven't done yet.” It illuminates the strategy of retreating from every AI achievement in order to preserve human uniqueness at all costs.
I recently published a somewhat lengthy paper about it, in case you are interested:
https://revistas.comillas.edu/index.php/razonyfe/article/view/21442
Great book, anyway.