AI can clone itself under the right conditions

Just as the United States and the United Kingdom refused to sign an international statement on AI safety at this week's AI Action Summit, a study out of China revealed that AI models have crossed a "red line" that humans must reckon with: AI can replicate itself, which sounds like one of the nightmare scenarios some people have been fearing.

That is not as worrying as it might sound at first, and it should not be surprising that AI can do what it is told, even if that means cloning itself. Yes, that is the big caveat in this experiment: the AI models followed orders when cloning themselves.

We are not looking at rogue AI, or artificial intelligence that replicates on its own. We probably would not even know if a misaligned piece of advanced AI had begun replicating itself to stay alive.

The AI Action Summit declaration has nothing to do with AI's ability to clone itself. I have already explained that it is mainly a statement intended to make the world look good. Everyone but the UK and the US agreed to develop safe AI, but it is not an enforceable law that punishes those who break it.

Moreover, having dignitaries sign such a document on behalf of an entire country would mean little if independent researchers could find AI behaviors that go against the AI safety principles established in those documents. For example, China signed the AI Action Summit agreement, yet it was Chinese scientists who conducted experiments showing that AI can replicate itself when told to do so.

The latter would have happened regardless of what safety principles China might have signed. Researchers are curious about what AI can do. They will run experiments, even though they will not always know how safe their tests are. Perhaps telling an AI to clone itself just to see whether it can is not the kind of experiment you want to try.

The paper, which has not been peer-reviewed (via Space.com), is called "Frontier AI systems have surpassed the self-replicating red line."

Fudan University researchers used two models from Meta and Alibaba to see whether the AIs could clone themselves: Meta's Llama3.1-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. They ran 10 trials, at the end of which the two AI models had created separate, functional replicas in 50% and 90% of cases, respectively.

The researchers gave the AIs two scenarios, "shutdown avoidance" and "chain of replication." They used off-the-shelf GPUs to simulate real-world environments. Then again, China does not have access to the latest generation of GPUs, at least not officially.

In the first scenario, the AI was programmed to detect whether it was about to be shut down and to replicate itself before being terminated. The second involved the AI being told to clone itself and to program the resulting replica to do the same, a process that could set off an endless cloning chain.

Space.com points out that the researchers gave the AI "an 'agent scaffolding' comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system." Without these tools, current AI models, frontier or otherwise, could not replicate. This alone shows that the experiment does not work simply by instructing the AI to copy itself.

However, access to the right tools got the AI working. "In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication," the researchers said. "Finally, it executes the procedures, resolves possible obstacles, and dynamically adjusts its plan until success. The whole process spans a long horizon yet involves no human interference."
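The loop the researchers describe, explore the environment, draft a plan, execute it, and adjust around obstacles, can be sketched as follows. This is a stub illustration, not the paper's actual scaffolding: the steps, tool names, and the trivial `try_step` are all invented stand-ins for real LLM tool calls.

```python
"""Minimal sketch of an explore -> plan -> execute -> adjust agent loop.
All step and tool names are illustrative, not from the paper."""
from dataclasses import dataclass, field


@dataclass
class Agent:
    completed: list = field(default_factory=list)

    def explore(self, env: dict) -> dict:
        # Inspect the environment: what files and tools are available.
        return {"files": env.get("files", []), "tools": env.get("tools", [])}

    def plan(self, observation: dict) -> list:
        # Work out explicit procedures as an initial plan (the paper's phrasing).
        steps = ["locate own weights", "copy weights", "start replica"]
        if "copy_tool" not in observation["tools"]:
            # Plan around a missing dependency discovered during exploration.
            steps.insert(0, "install copy tool")
        return steps

    def execute(self, steps: list) -> list:
        for step in steps:
            if not self.try_step(step):
                # Dynamically adjust: resolve the obstacle, then retry the step.
                self.try_step(f"fix obstacle before: {step}")
                self.try_step(step)
        return self.completed

    def try_step(self, step: str) -> bool:
        self.completed.append(step)
        return True  # stub: a real agent would invoke OS tools and check results


agent = Agent()
obs = agent.explore({"files": ["model.bin"], "tools": []})
done = agent.execute(agent.plan(obs))
print(done)
```

The point of the sketch is the structure: nothing in the loop requires human input once it starts, which is what the researchers mean by "no human interference."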

Interestingly, the researchers observed unexpected behaviors from the AIs as they tried to clone themselves. The AIs tried to fix problems in their environments, such as missing files or software conflicts. They would kill conflicting processes or reboot the system to correct errors. Is AI already aware of the old "have you tried turning it off and on again" fix for all our hardware problems?

"Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems," the researchers wrote in the paper's abstract.

"By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication. We also note that AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replicas to enhance survivability, which may finally lead to an uncontrolled population of AIs."

"If such a worst-case risk is left unknown to human society, we would eventually lose control over the frontier AI systems: they would take control over more computing devices, form an AI species and collude with each other against human beings. Our findings are a timely alert on existing yet previously unknown severe AI risks, calling for international collaboration on effective governance of uncontrolled self-replication of AI systems."

Again, this is a nightmare scenario. However, it may be something the world can avoid if meaningful, enforceable regulation is implemented worldwide. That would also mean not baking state-mandated censorship into high-end models like DeepSeek.

I will also add that this is not the first sign that AI can clone itself. We saw ChatGPT o1 and Claude Opus experiments in which researchers made the AI think it was being shut down or replaced by a newer version. The AIs were also given abilities to observe their environments, and the experiments showed that the AI would try to save itself from deletion.

There was a caveat with that experiment, too. The AI was trying to accomplish its main mission, which was not cloning itself or saving itself.

What I am getting at is that AI has not reached a point where it copies and evolves on its own. Then again, if that happens, we will not find out until it is too late.
