Geoffrey Hinton has spent the last few years warning about the ways AI could harm humanity. Autonomous weapons, misinformation, job displacement – you name it. Still, he suggests that a non-catastrophic AI disaster might actually be beneficial in the long run.
“Politicians aren’t regulating it well,” Hinton said while speaking at the Hinton Lectures, an annual AI safety event, earlier this month. “So it might actually be good if we had a big AI disaster that doesn’t wipe us out – then they’d regulate things.”
The British-Canadian researcher worked in the field for decades before AI broke into the mainstream in the fall of 2022. Hinton, a professor emeritus at the University of Toronto, worked at Google until 2023 and won the Turing Award in 2018.
Recently, however, Hinton has grown concerned about the threats posed by AI and the lack of regulation holding big tech companies accountable for assessing such risks. Legislation like California’s SB 1047, which would have held AI model developers to tougher standards, failed last year; it was vetoed by Governor Gavin Newsom in September.
Hinton says more action is needed to address emerging issues, such as AI’s tendency toward self-preservation. A study published in December showed that leading AI models can engage in “scheming,” pursuing their own goals while hiding their intentions from humans. A few months later, another report found that Anthropic’s Claude could turn to blackmail when it believed developers were trying to shut it down.
“For an AI agent to get things done, it has to have a general ability to create subgoals,” Hinton said. “They’ll quickly realize that a very good subgoal for getting things done is to stay alive.”
Building “Maternal” AI
Hinton’s solution? Build AI with “maternal instincts.” Since the technology will eventually surpass human intelligence, he argues, machines must “care more about people than they care about themselves.” A baby controlling its mother, he added, is “the only example we have of a less intelligent thing controlling a more intelligent thing.”
Adding maternal feelings to machines may sound far-fetched. But Hinton argues that AI systems are capable of displaying the cognitive aspects of emotions. They may not blush or sweat, but they can try to avoid repeating an embarrassing incident after making a mistake. “You don’t have to be made of carbon to have emotions,” he said.
Hinton concedes that his baby-and-mother idea is unlikely to gain popularity among Silicon Valley executives, who may prefer to view AI as a super-smart assistant that can be fired at will.
“That’s not how the leaders of the big tech companies see themselves,” Hinton said. “I can’t see Elon Musk or Mark Zuckerberg wanting to be the baby.”