Chances are you’re thinking about artificial intelligence a lot more today than you did five years ago. Since the launch of ChatGPT in November 2022, we have become accustomed to interacting with artificial intelligence in most areas of life, from chatbots and smart home technology to banking and healthcare.
But such rapid change brings unexpected problems, as mathematician and broadcaster Hannah Fry shows in AI Confidential with Hannah Fry, a new three-part BBC documentary in which she talks to people whose lives have been changed by the technology. She spoke with New Scientist about how we should view artificial intelligence, its role in modern mathematics, and why it will upend the global economy.
Bethan Ackerley: In the show you explore what AI is doing to our relationships and sense of reality. Some of this stems from sycophancy, the idea that these tools tell us what we want to hear, not what we need to hear. How does this happen?
Hannah Fry: The earlier models were extremely sycophantic. Everything you wrote would get a response like, "Oh my god, you're so amazing, you're the best writer I've ever seen." They are a bit better now, but there is a fundamental tension. We want them to help us, encourage us and make us feel valued, which are things you get from a really good human relationship.
At the same time, a really good human relationship will also say the hard things out loud. But if you dial too much of that into an AI, it stops being useful and becomes argumentative and no fun to be around. There are also a huge number of people who have broken up with partners because they used an AI as a therapist and it said, "get rid of him".
There are people who have given up their jobs. There are people who have tried to use AI to make money and lost it because they had too much faith in its abilities. Once you start including all those people, it's a really big group. I think all of us know someone who has been affected by social media bubbles and radicalization. I think this is the new version of that.
Has witnessing these issues changed the way you use AI?
What has changed is the way I prompt it. I now regularly challenge it to tell me what I can't see, to find my biases. Don't be sycophantic, tell me the hard stuff.
So do we want AI to think the way we do?
The answer probably depends on the situation. There are amazing examples in the scientific space, such as AlphaFold [an AI that predicts protein structures]. There are incredible advances in mathematics where algorithms have an intelligence that isn't like ours. But I don't think you can have a good reasoning model if it doesn't conceptually overlap with how people understand the world. So I think it should be more human-like.
“There are certain situations where AI can do superhuman things, but so can forklifts”
It seems like every day there is a report about a math problem that had been unsolved for years but has now been cracked by AI. Does that excite you?
I like to think of it as if there is this big map of mathematics, and human mathematicians occupy certain territories and circle around them. They don't always see connections to things nearby. The great mathematicians have found bridges between areas of the map. With the Taniyama-Shimura conjecture, for example, Japanese mathematicians found a bridge between two otherwise separate areas of mathematics. Then everything we knew on one side applied to the other, and vice versa.
I think AI is really good at saying, “Look here, this looks like fertile territory that’s been underexplored,” and that’s really, really exciting. What AI isn’t so good at is pushing the boundaries further. And what it’s really not good at… is total abstraction, having broader and more expansive theories. People always say that if you gave AI everything up to 1900, it wouldn’t come up with general relativity. So I’m still excited that we’re in this very sweet spot where AI will make human math faster, more efficient, and more exciting, but it still needs us.
There are many misconceptions about AI. Which one would you dispel if you could?
People imagine it as omniscient, almost omnipotent. "The AI said it; the AI told me to buy this stock." There are certain situations where AI can do superhuman things, but so can forklifts. We have been creating tools that can do things humans can't for a long time. It doesn't mean they are gods or have infallible knowledge.
You don’t let a forklift access your bank account…
No! Exactly. I think that's what it is: the framing of these things. Because they speak our language and talk to us, they feel like creatures. We don't have that problem with Wikipedia. It would be better to think of them as a really capable Excel spreadsheet rather than a creature.
Why do we tend to anthropomorphize AI?
Our brains are attuned to social relationships. We are a smart, social species, and this is a seemingly smart, seemingly social entity. Of course we project a persona onto it. There is nothing in our evolutionary past or in our design that equips us to do anything else.
Is there no way to resist that anthropomorphic urge?
I think it's unfair to leave it in the hands of individuals. It's a bit like saying junk food is freely available and it's your responsibility to make sure you don't eat too much of it. The way these interfaces are designed, the conversations they have with you: we now have really good evidence that all of that leads people further and further into this trap. And I think it's only in the construction of these systems that you're going to be able to keep people from falling down these rabbit holes.
Artificial intelligence highlights many social problems such as people being very isolated and lonely. But couldn’t AI help with these problems?
If you say, “OK, you can’t talk to any chatbots, if you feel lonely, let’s ban it,” then you still have lonely people. And of course it would be wonderful if there were rich human relationships for everyone, but that doesn’t happen. So, this being the world we’re in, I think there are situations where talking to a chatbot can alleviate some of the worst issues around loneliness. But these are delicate subjects. When you start using technology to solve really human questions, there’s an incredible fragility to it all.
Let’s talk about the distant future. With AI, we often think about extreme scenarios—say, a super-intelligent AI designed to make paper clips turns us all into paper clips. How useful is it to think about such a doomsday scenario?
At one point I thought these crazy, far-fetched scenarios distracted from what really mattered, which was that decisions were made by algorithms that affected people’s lives. I’ve changed my mind over the last couple of years because I think that just by worrying about this sort of thing you can build in technical security mechanisms to prevent it.
So worrying is not pointless; worries really do have power. AI has some really bad potential outcomes, and the more honest we are about that, the more likely we are to mitigate them. I want it to be like the Y2K bug, you know? I want it to be the thing that we feared and, precisely because we feared it, did the work to prevent.
Do you think we will ever achieve artificial general intelligence?
We don't really have a clear definition of what AGI is. But if we take AGI to mean at least as good as most humans at any task that involves a computer, then yes, we're almost there, really. Some people take AGI to mean beyond human ability in every conceivable task. I'm not sure about that. But I think AGI is really not far off at all. I really think we're going to see seismic changes in the next five to ten years.
What changes?
I think there will be profound changes in the economic models we have become accustomed to throughout human history. I think there are going to be really huge leaps forward in science, which I'm really excited about, including in the design of medicines. The whole structure of our society is built on the idea that you exchange your work, knowledge and human intelligence for money, which you then use to buy things, and I think there's a certain fragility to that.
AI will almost certainly change our relationship with work. What do we need to do to ensure that artificial intelligence leads to us all working less rather than some being out of work altogether?
I have an answer for that; I just know how much trouble I'll get myself into if I say it out loud. OK, I'll give you a version of it. There are just a few undeniable facts, right? So far, society has been based on the exchange of labor for money. Our tax system is based on taxing income, not wealth. I think those two things will have to change.