What to read this week: Laws of Thought by Tom Griffiths

Dwight Ellefsen/FPG/Archive

Laws of Thought
Tom Griffiths, William Collins (UK); Macmillan (US)

For almost 70 years, cognitive researchers have been fighting a civil war. On one side is computationalism, which argues that intelligence is best explained by rules, symbols, and logic that can be expressed in equations. On the other side is connectionism, which holds that intelligence emerges from vast, interconnected networks modelled on the brain's neurons: no single component is intelligent, but the system as a whole somehow is.

This battle has shaped everything from cognitive science to artificial intelligence, which is now transforming the global economy. Two new books wade in from opposite sides this month. My pick is Laws of Thought: The Search for a Mathematical Theory of Mind, in which Princeton professor Tom Griffiths traces the long-running attempt to formalise thought in mathematical laws and explains why modern AI is the way it is, and what the future might hold.

Griffiths frames the story around three competing and increasingly intertwined mathematical ways of formalising thought: rules and symbols, neural networks, and probability. The first of these views thinking as problem solving: breaking a task down into goals and sub-goals and then navigating between them using formal steps. It powered early AI, but also showed why human common sense is so difficult to capture, with the number of rules an AI had to follow soon ballooning into the tens of millions.

Neural networks trade explicit rules for learning from examples, building intelligence from many simple units that interact to produce complex behaviour. This is (roughly) how brains work. Probability and statistics add a third ingredient: uncertainty. The mind does not have access to perfect information, and how we weigh evidence and update our beliefs is part of what makes us human.

For Griffiths, none of the three frameworks is sufficient on its own. Realistic accounts of intelligence, whether human or machine, combine all three. He makes his case historically, looking at how people have tried to map the processes of the mind using mathematics, drawing on archives and interviews with researchers. This makes his book detailed and engaging, if a little heavy going.

Neuroscientists Gaurav Suri and Jay McClelland take a different tack in The Emergent Mind: How Intelligence Emerges in Humans and Machines, in which they argue that the mind is an emergent property of interacting networks of neurons, biological or artificial, capable of generating thoughts, emotions, and decisions. The book draws on McClelland's history as a pioneer of connectionism.

The two books offer interestingly contrasting perspectives on the generative AI revolution. For Griffiths, large language models (LLMs) confirm his hybrid vision: they are impressive, but they hallucinate and stumble, and a symbolic layer will be needed to correct them. For Suri and McClelland, the same LLMs are vindication: it is remarkable how much thinking has emerged from the network itself.

The problem with The Emergent Mind is not so much its thesis as its delivery: the tone flips between folksy asides and clumsy phrasing. Explaining maths and science is always going to be difficult, and no book manages it completely, but Laws of Thought comes closer, because describing the history of artificial intelligence means focusing on what each framework can and cannot explain.

The Emergent Mind has the more provocative manifesto, with the authors seeing no major obstacle to more autonomous, goal-driven AI built on purely neural architectures. As a result, it can feel less grounded in reality.

But Griffiths’ book leaves you with a strong sense of the “languages” we have to use to describe ideas, and why the future may lie in their messy overlaps.

Could this future signal peace between the two camps?

Two more great books on machine intelligence

New Scientist. Science news and long-form reads from expert journalists on developments in science, technology, health and the environment, on the website and in the magazine.

Algorithms to Live By
by Brian Christian and Tom Griffiths

This is a lively, non-technical tour of how ideas from computing can illuminate everyday decisions, including how an algorithmic approach can improve human decision-making. Christian and Griffiths wrote it a decade ago, before the ChatGPT revolution, but it remains relevant.


Rebooting AI
Building Artificial Intelligence We Can Trust
by Gary Marcus and Ernest Davis

Current neural networks may be impressive, but they are fragile, this book argues. It makes the case for hybrid systems that reclaim the strengths of the rules-and-symbols approach – one of the three mathematical frameworks in Griffiths’ new book.

Chris Stokel-Walker is a technical writer based in Newcastle upon Tyne, UK

