Two of Google’s A.I. researchers shared the Nobel Prize in Chemistry. Demis Hassabis (CEO of Google DeepMind) and John Jumper (Senior Research Scientist at Google DeepMind) won their Nobels a day after Geoffrey Hinton, a former Google vice president and researcher, shared the Nobel Prize in Physics with John Hopfield for their pioneering work on artificial intelligence.
- In 2012, Dr. Hinton, then a professor at the University of Toronto, published a research paper with two of his graduate students that demonstrated the power of an A.I. technology called a neural network. Google paid $44 million to bring them to the company. About a year later, Google paid $650 million for Dr. Hassabis’s four-year-old British start-up, DeepMind, which specialized in the same kind of technology. Dr. Hinton and Dr. Hassabis were part of a small academic community that had nurtured neural networks for years while the rest of the world had largely ignored the technology. (1)
- In 2020, Demis Hassabis and John Jumper presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified. Since their breakthrough, AlphaFold2 has been used by more than two million people from 190 countries. Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic. (2)
“This is the year the Nobel committee got A.I.,” said Oren Etzioni, a professor emeritus of computer science at the University of Washington. “These prizes are a conscious recognition of how influential A.I. has become in the scientific world.” (1)
By emphasizing the contributions of A.I. to science, the Royal Swedish Academy of Sciences sets aside broader concerns about the technology’s potential dangers, concerns that the Nobel Prize winners themselves approach with greater caution.
- When Google acquired DeepMind, Dr. Hassabis and his co-founders asked for assurances that Google would not use DeepMind’s technologies for military purposes and that it would establish an independent board that would work to ensure that its technologies were not misused. “Of course it’s a dual-purpose technology,” Dr. Hassabis said during a news conference after winning the Nobel Prize. “It has extraordinary potential for good, but also it can be used for harm.” (1)
- Dr. Hinton left Google, using his retirement as an opportunity to speak freely about his worry that the race toward A.I. could one day be catastrophic. He said on Tuesday that he hoped “having the Nobel Prize could mean that people will take me more seriously.” (1)
Geoffrey Hinton’s first reactions
Interview – Telephone interview by Adam Smith, published on the website of the Nobel Prize
https://www.nobelprize.org/prizes/physics/2024/hinton/interview/
Adam Smith: How would you describe yourself? Would you say you were a computer scientist or would you say you were a physicist trying to understand biology when you were doing this work?
Geoffrey Hinton: I would say I am someone who doesn’t really know what field he’s in but would like to understand how the brain works. And in my attempts to understand how the brain works, I’ve helped to create a technology that works surprisingly well.
AS: It’s notable, I suppose, that you’ve very publicly expressed fears about what the technology can bring. What do you think needs to be done in order to allay the fears that you and others are expressing?
GH: I think it’s rather different from climate change. With climate change, everybody knows what needs to be done. We need to stop burning carbon. It’s just a question of the political will to do that. And large companies making big profits not being willing to do that. But it’s clear what you need to do. Here we’re dealing with something where we have much less idea of what’s going to happen and what to do about it. I wish I had a sort of simple recipe that if you do this, everything’s going to be okay. But I don’t. In particular with respect to the existential threat of these things getting out of control and taking over, I think we’re at a kind of bifurcation point in history where in the next few years we need to figure out if there’s a way to deal with that threat. I think it’s very important right now for people to be working on the issue of how we will keep control. We need to put a lot of research effort into it. I think one thing governments can do is force the big companies to spend a lot more of their resources on safety research. So that, for example, companies like OpenAI can’t just put safety research on the back burner.
AS: Is there a parallel with the biotechnology revolution when the biotechnologists themselves got together in those Asilomar conferences and sat down and said, you know, there is potential danger here and we need to be on it ourselves?
GH: Yes. I think there are similarities with that, and I think what they did was very good. Unfortunately, there are many more practical applications of AI than of the things, like cloning, that the biologists were trying to keep under control. And so I think it’s going to be a lot harder. But what the biologists did is a good model to look at. It’s impressive that they managed to achieve agreement, and the scientists did it themselves.
AS: So, for instance with the large language models, the thing that I suppose contributes to your fear is that you feel these models are much closer to understanding than a lot of people say. When it comes to the impact of the Nobel Prize in this area, do you think it will make a difference?
GH: Yes, I think it will make a difference. Hopefully it’ll make me more credible when I say these things really do understand what they’re saying.
AS: Do you worry that people don’t take you seriously?
GH: So, there is a whole school of linguistics that comes from Chomsky that thinks that it’s complete nonsense to say these things understand, that they don’t process language at all in the same way as we do. I think that school is wrong. I think it’s clear now that neural nets are much better at processing language than anything ever produced by the Chomsky School of Linguistics. But there’s still a lot of debate about that, particularly among linguists.
(1) Google Triumphs on the Nobel Stage as Tough Antitrust Fight Looms, The New York Times, October 9, 2024
(2) The Royal Swedish Academy of Sciences, Nobel Prizes 2024