
AI Godfather Geoffrey Hinton: A Near-Miss Disaster May Be Needed to Spur Regulation


Hinton says a near-miss AI disaster may be needed to push lawmakers to act. Jorge OZONI / AFP via Getty Images

Geoffrey Hinton has spent the last few years warning about the ways AI can harm humanity. Autonomous weapons, misinformation, job displacement – you name it. Still, he suggests that a near-disaster caused by AI may actually be beneficial in the long run.

“Politicians don’t regulate it well,” Hinton said while speaking at the Hinton Lectures, an annual AI safety conference, earlier this month. “So actually, it might be good if we had a big AI disaster that doesn’t wipe us out – then, they’ll regulate things.”

The British-Canadian researcher worked in the field for decades before AI broke into the mainstream in the fall of 2022. Hinton, a professor emeritus at the University of Toronto, left Google in 2023 and received the Turing Award in 2018.

Recently, however, Hinton has grown concerned about the threats posed by AI and the lack of regulation holding big tech companies accountable for assessing such risks. Legislation like California’s SB 1047, which would have imposed tougher standards on AI model developers, was vetoed last year. A narrower version was signed by Governor Gavin Newsom in September.

Hinton says more action is needed to address emerging issues, such as AI’s tendency toward self-preservation. A study published in December showed that leading AI models can engage in “scheming,” pursuing their own goals while hiding their intentions from humans. A few months later, another report found that Anthropic’s Claude could turn to blackmail and extortion when it believed developers were trying to shut it down.

“For an AI agent to get things done, it has to have a general ability to create subgoals,” Hinton said. “You’ll quickly realize that a good subgoal for getting things done is staying alive.”

Building “Maternal” AI

Hinton’s solution? Build AI with “maternal instincts.” Since the technology will eventually surpass human intelligence, he argues, machines must “care about us more than they care about themselves.” The relationship would be like that of a mother and her baby, he added – “the only example of a less intelligent thing controlling a more intelligent thing.”

Adding maternal feelings to machines may seem far-fetched. But Hinton argues that AI systems are capable of displaying the psychological aspects of emotions. They may not blush or sweat, but they can try to avoid repeating an embarrassing incident after making a mistake. “You don’t have to be made of carbon to have feelings,” he said.

Hinton concedes that his maternal-AI idea is unlikely to gain popularity among Silicon Valley executives, who may prefer to view AI as a “super smart” secretary that can be fired at will.

“That’s not how the leaders of the big tech companies see themselves,” Hinton said. “You can’t see Elon Musk or Mark Zuckerberg wanting to be the baby.”


