Anthropic’s CEO says the risks of AI are being overlooked
  • Anthropic CEO Dario Amodei said that although the benefits of AI are large, so are the risks.
  • On “Hard Fork,” Amodei said he worries about threats to national security and the misuse of AI.
  • He believes it is possible to address the risks of AI without giving up the benefits it offers.

Anthropic CEO Dario Amodei said that people still do not take the risks of AI seriously enough, but he expects that to change in the next two years.

“I think people will wake up to both the risks and the benefits,” Amodei said on an episode of The New York Times’ “Hard Fork” podcast, adding that he is concerned the understanding will arrive as a “shock.”

“And so, the more we can forewarn people, which maybe is not possible, but I want to try,” Amodei said. “The more we can forewarn people, the higher the likelihood, even if it is still very low, of a sane and rational response.”

Tech optimists expect the advent of powerful AI to lower the barriers to niche “knowledge work” once done exclusively by specialized professionals. In theory, the benefits are immense, with applications that could help solve everything from the climate crisis to deadly disease outbreaks. But the corresponding risks, Amodei said, are proportionally large.

“If you look at our responsible scaling policy, it is nothing other than AI autonomy and CBRN: chemical, biological, radiological, nuclear,” Amodei said. “It is misuse and AI autonomy that could be threats to the lives of millions of people. That is what Anthropic is mostly worried about.”

He said the possibility of “misuse” by bad actors could come as soon as “2025 or 2026,” although he does not know exactly when it will present a “real risk.”

“I think it is very important to say that it is not, ‘Oh, did the model give me the sequence for this? Did it give me a cookbook for making methamphetamine or something?’” Amodei said. “That’s easy. You can do that with Google. We don’t care about that at all.”

“We care about this kind of esoteric, high-level, unusual knowledge that, say, only a virology PhD or something has,” he added. “How much does it help with that?”

If AI can act as a substitute for niche higher education, Amodei clarified, “it does not mean we are all going to die of the plague tomorrow.” But it would mean that a new breed of danger had come into play.

“It means that there is a new risk in the world,” Amodei said. “There is a new threat vector in the world, as if you had made it easier to build a nuclear weapon.”

Beyond individual actors, Amodei expects AI to have massive implications for military technology and national security. In particular, Amodei said that “AI could be an engine of autocracy.”

“If you think about repressive governments, the limits of how repressive they can be are generally set by what they can get their human enforcers to do,” Amodei said. “But if their enforcers are no longer human, that starts to paint some very dark possibilities.”

Amodei pointed to Russia and China as particular areas of concern and said he believes it is crucial that the United States stays “even with China” in AI development. He added that he wants to make sure “liberal democracies” retain “enough leverage and enough advantage in the technology” to check abuses of power and block threats to national security.

So how can the risk be mitigated without sacrificing the benefits? Beyond building safeguards into the systems themselves and promoting regulatory oversight, Amodei has no magic answer, but he thinks it can be done.

“You can actually have both. There are ways to surgically and carefully address the risks without slowing down the benefits much, if at all,” Amodei said. “But they require subtlety, and they require a complex conversation.”

AI models are inherently “somewhat difficult to control,” Amodei said. But the situation is not “hopeless.”

“We know how to do this,” he said. “We have a sort of plan for how to make them safe, but it is not a plan that works reliably yet. Hopefully we can do better in the future.”