OpenAI researcher quits, says he was “terrified” by the rapid development of AI; post goes viral

A former OpenAI researcher has revealed on social media the reasons that led him to leave the technology company last November.

In a series of posts on X, Steven Adler, who worked on AI safety for four years, described his time at the company as a “wild ride with many chapters.”

Expressing concern about the pace of artificial intelligence (AI) development, Adler said AI labs are racing to achieve artificial general intelligence (AGI), which he finds frightening.

“Some personal news: after four years working on safety across @openai, I left in mid-November. It was a wild ride with lots of chapters: dangerous capability evals, agent safety/control, AGI and online identity, etc., and I’ll miss many parts of it,” Adler wrote.

“Honestly, I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?” he added.

In another post on X, Adler said the race toward AGI is a very risky gamble with significant downsides. He also said that no lab in the world has a solution to “AI alignment,” and that everyone seems stuck in a “really bad equilibrium.”

“In my opinion, an AGI race is a very risky gamble, with huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely anyone finds one in time,” Adler said in a follow-up post.

“Today, it seems like we’re stuck in a really bad equilibrium. Even if a lab truly wants to develop responsibly, others can still cut corners to catch up, perhaps disastrously. And this pushes everyone to speed up. I hope labs can be candid about the real safety regulations needed to stop this,” he added.

The former OpenAI researcher is now taking a break and is excited to explore upcoming developments in “control methods, scheming detection, and safety cases” in the field of AI.

“As for what’s next, I’m enjoying a bit of a break, but I’m curious: what do you see as the most important and neglected ideas in AI safety/policy? I’m especially excited about control methods, scheming detection, and safety cases,” he said.

OpenAI, an American artificial intelligence research organization, has now been dethroned from the top position in Apple’s free app store. DeepSeek currently occupies the first position in the market.

Posted by:

Girish Kumar Anshul


January 30, 2025
