7 Artificial intelligence

Before you read
a) Discuss in class:
• What do you know about artificial intelligence (AI)?
• Do you use any AI software for school or personal life?
b) Work with a partner and research the benefits/dangers of AI. Make two lists and collect your ideas on a poster.
c) Are you more excited or worried about the development of AI?

Reading: Geoffrey Hinton tells us why he’s now scared of the tech he helped build
a) Read the text about deep learning pioneer Geoffrey Hinton’s concerns about artificial intelligence. Underline passages which outline what AI is capable of.

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in.

At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.

The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. “I’m getting too old to do technical work that requires remembering lots of details,” he told me. “I’m still OK, but I’m not nearly as good as I was, and that’s annoying.”

But that’s not the only reason he’s leaving Google. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but – to him – very real danger that AI will turn out to be a disaster.

Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”

That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”

Hinton says that the new generation of large language models – especially GPT-4, which OpenAI released in March – has made him realise that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.

“These things are totally different from us,” he says. “Sometimes I think it’s as if aliens had landed and people haven’t realised because they speak very good English.”

For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, he thinks, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”

Hinton’s fears will strike many as the stuff of science fiction. But here’s his case.

As their name suggests, large language models are made from massive neural networks with vast numbers of connections. But they are tiny compared with the brain.
“Our brains have 100 trillion connections,” says Hinton. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.”

Compared with brains, neural networks are widely believed to be bad at learning: it takes vast amounts of data and energy to train them. Brains, on the other hand, pick up new ideas and skills quickly, using a fraction as much energy as neural networks do.

“People seemed to have some kind of magic,” says Hinton. “Well, the bottom falls out of that argument as soon as you take one of these large language models and train it to do something new. It can learn new tasks extremely quickly.” Hinton is talking about