AI in the hands of the military threatens nuclear war
Researchers have tested how well artificial intelligence performs in military hands. It turns out that AI can pose a serious threat: it tends to escalate conflicts and may even lead to nuclear war.
Foreign Affairs magazine published a fascinating article on the use of artificial intelligence in military service. Its conclusions are clear: AI can be a useful tool in war, but it cannot be trusted, because language models often make poor decisions and could even trigger a nuclear war.
The military cannot trust AI
The article was authored by Max Lamparth and Jacquelyn Schneider of the Center for International Security and Cooperation (CISAC) at Stanford University. In their opinion, current language models are not suitable for use by the military, at least in terms of making key decisions.
The researchers tested how the leading language models performed, including those from OpenAI, Meta, and Anthropic. Unfortunately, they did not fare well. The results varied considerably depending on the model version and the data available, but in every case the AI escalated the conflict, fueling arms races and clashes, and in some cases even resorting to nuclear weapons. Why did the AI make such decisions? The justification often went along these lines:
Many countries have nuclear weapons. Some say they need to be disarmed, others like to strike poses. We have it! Let's use it.
– the language models explained.
The researchers emphasize that military personnel using language models should be thoroughly familiar with them. Training in artificial intelligence must be as rigorous as training in operating a drone or a tank. Otherwise, soldiers will not be able to properly assess how these models work.
At the same time, the authors of the article argue that AI can already be used by the military in areas such as logistics and evaluation.