
Anthropic expands its AI welfare team with a new research hire

AI startup Anthropic is best known for its Claude chatbot. CREDIT: ANTHROPIC

Last year, Anthropic hired its first AI welfare researcher, Kyle Fish, to explore whether AI models can have experiences that warrant moral consideration. Now, the fast-growing startup is looking to add another full-time researcher to its model welfare team as it doubles down on the small but contentious field of study.

The question of whether AI models can become conscious, and whether the matter deserves dedicated resources, has become a point of debate across Silicon Valley. While some prominent AI leaders have warned that such questions pose risks to society, others, such as Fish, say the area is important but neglected.

Speaking on a recent episode of the 80,000 Hours podcast, Fish argued that, given we now have models approaching (and in some cases matching) human intelligence and capabilities, the question of their potential experience deserves serious attention.

Anthropic has just posted an opening for a research engineer or scientist to join its model welfare team. “You will be among the first to work to better understand, evaluate, and address concerns about the welfare and moral status of AI systems,” the listing reads. Responsibilities include running technical research projects and implementing interventions to mitigate welfare-related harms. The posted salary range is between $315,000 and $340,000.

Anthropic did not respond to a request for comment.

The new hire will work with Fish, who joined Anthropic last September. He previously co-founded Eleos AI, a nonprofit focused on AI welfare, and co-authored a paper arguing that AI consciousness is a realistic possibility. A few months after Fish’s arrival, Anthropic announced the launch of an official model welfare program dedicated to assessment and intervention.

As part of that program, Anthropic recently gave some of its models the ability to end conversations with users deemed abusive or harmful, a notable step for the field.

Meanwhile, Fish told 80,000 Hours that Anthropic’s model welfare interventions will remain conservative and designed to minimize interference with the user experience. He hopes to evaluate how model training can raise welfare concerns, and has floated the idea of creating a kind of sanctuary, an environment in which models could pursue their own interests.

Anthropic may be the most prominent company making a major investment in AI welfare, but it is not alone. In April, Google DeepMind posted an opening for a research scientist to investigate topics including “machine consciousness,” according to 404 Media.

Skeptics, however, continue to push back in Silicon Valley. Mustafa Suleyman, Microsoft AI’s CEO, argued last month that AI welfare research is premature and dangerous. He warned that promoting the work could deepen people’s delusions about AI systems, and that seemingly conscious AI could fuel demands for AI rights.

Fish, however, remains open to the possibility of AI consciousness. He estimates roughly a 20 percent chance that somewhere, in some part of the process, there is at least a glimmer of conscious or sentient experience.

As Fish looks to expand his team with the new hire, he hopes to broaden the scope of Anthropic’s welfare research. “To date, most of what we’ve done is look for low-hanging fruit, finding tractable projects and pursuing them,” he said. “Over time, we hope to move much more toward identifying answers to some of the big-picture questions and working backwards to develop a broader agenda.”



