Another security researcher quits OpenAI

These days, the AI news cycle is dominated by DeepSeek, a Chinese startup that launched a reasoning model as powerful as ChatGPT o1 despite lacking OpenAI’s massive hardware and infrastructure resources. DeepSeek used older, less powerful chips and software optimizations to train DeepSeek R1.

OpenAI’s accusations that DeepSeek might have distilled ChatGPT to train R1’s precursors don’t even matter.

The DeepSeek project probably achieved its goal on multiple fronts. It leveled the playing field in the AI wars, giving China a fighting chance. DeepSeek also dealt a huge financial blow to the US stock market, which lost almost $1 trillion, with AI hardware companies accounting for the largest market capitalization losses.

Finally, DeepSeek gave China a software weapon that could be even more powerful than TikTok. DeepSeek is the number one app in the App Store. On top of that, anyone can install the DeepSeek model on their computer and use it to build other models.
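For readers curious what “installing the DeepSeek model on your computer” can look like in practice, here is a minimal sketch that loads one of DeepSeek’s open-weight R1 distilled checkpoints with the Hugging Face transformers library. The specific model ID, prompt, and generation settings are illustrative assumptions, not steps taken from the article.

```python
# Minimal sketch: running a small, open-weight DeepSeek-R1 distilled checkpoint
# locally with Hugging Face transformers. Requires the transformers and accelerate
# packages; the model ID, prompt, and settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small distilled R1 variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a response.
prompt = "Explain step by step why 17 is a prime number."
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

A small distilled checkpoint like this one can run on a single consumer GPU; the full DeepSeek R1 model is far larger and needs much heavier hardware.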

With all that in mind, you may not have even noticed the reports that another key OpenAI safety researcher has resigned. Steven Adler is another name to add to a growing list of engineers who have left OpenAI in the past year.

It is also interesting that the engineer resigned in mid-November but made his departure public on Monday, just as the DeepSeek news crashed the market.

Adler said on X that he is leaving the company after four years. “It was a wild ride with lots of chapters (dangerous capability evals, agent safety/control, AGI and online identity, etc.), and I’ll miss many parts of it,” he said before leaving a gloomy statement.

OpenAI researcher Steven Adler on leaving the company. Image Source: X

The researcher said he is “pretty terrified by the pace of AI development these days.” Adler’s comment echoes the fears of other AI experts who think AI will bring about our inevitable doom. The former ChatGPT safety researcher doesn’t mince words, saying he is worried about the future.

“When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?” he asked.

OpenAI researcher Steven Adler on being terrified by the pace of AI development. Image Source: X

It would be interesting to hear what Adler saw at OpenAI that made him leave the company. It would be even more interesting to know why he did not stay to potentially help save humanity from bad AI by working at one of the most important AI companies in existence.

Adler may have witnessed AGI (artificial general intelligence) research at OpenAI, something the company is clearly pursuing. I am speculating here, but the speculation is based on a follow-up tweet in which Adler said: “An AGI race is a very risky gamble, with a huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely anyone finds one in time.”

OpenAI researcher Steven Adler on the AGI race. Image Source: X

Presumably, OpenAI is speeding up in that AGI race. AGI is the kind of AI that will match the creativity and ability of a human when tasked with solving any problem. But AGI will also have access to far more information, so it could tackle any task much better than a human. At least, that is the idea.

Alignment is the most important safety issue regarding AI, AGI, and superintelligence (ASI). AI has to be aligned with humanity’s interests at all levels. That is the only way to ensure that AI does not develop agendas of its own that could lead to our demise.

However, real-life products such as ChatGPT and DeepSeek already give us two kinds of alignment. ChatGPT is, hopefully, aligned with US and Western interests. DeepSeek is built with Chinese interests first and is aligned with them through censorship.

OpenAI researcher Steven Adler on the risks from other AI companies. Image Source: X

Adler also seemed to refer to DeepSeek in his thread on X on Monday without naming the Chinese startup.

“Today, it seems like we are stuck in a really bad equilibrium,” he said. “Even if a lab truly wants to develop responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes everyone to speed up. I hope labs can be candid about the real safety regulations needed to stop this.”

How does this relate to DeepSeek? Well, the DeepSeek R1 open-source model is available to anyone. Nefarious actors with enough resources, knowledge, and new ideas could stumble into AGI without knowing what they are doing, or without even realizing they would be unleashing a superior form of intelligence on the world.

This sounds like a science fiction movie scenario, of course. But it could happen. It is just as valid a scenario as, say, a large AI company developing AGI while ignoring the warnings of its safety researchers, who end up leaving the company one by one.

It is not clear where Adler will go next, but he seems interested in AI safety. He asked on X what people think are the “most important and neglected ideas in AI safety/policy,” adding that he is excited about “control methods, scheming detection, and safety cases.”
