Chinese scientists have developed AlphaWar, an artificial intelligence (AI) machine that behaves just like humans in military simulations, or so-called war games, the South China Morning Post reports.
Experienced military strategists who played with or against the AI for many rounds could not tell that it was a machine.
The team claimed in a paper published on February 17 in the Chinese-language journal Acta Automatica Sinica that AlphaWar had passed the Turing test.
The machine was named after Google DeepMind's AI AlphaGo, which became the first to outperform top human players in Go, a Chinese board game.
Chinese War Games AI Passes Turing Test
Alan Turing proposed the Turing Test in 1950 to determine whether a computer can display intelligent behavior equivalent to or indistinguishable from a human's.
The Turing Test asks whether a computer program can convince a human judge that they are interacting with another human rather than a machine. In the classic version, the judge converses with an unseen partner through questions and answers and tries to decide whether that partner is a computer or a real person.
The test is used to gauge how advanced an artificial intelligence has become and whether it can replicate human-like behavior and intelligence.
In AlphaWar's case, the machine avoided detection by the expert players in its war games.
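To make the idea concrete, here is a minimal sketch in Python of what such an imitation-game trial looks like. It is purely illustrative and not taken from the paper: every function is a hypothetical stand-in, and the "pass" criterion is simply that the judge's guesses land near chance level.

```python
import random

# Illustrative sketch of a Turing-test-style trial: a judge sees moves from two
# anonymous players (one human, one machine) and guesses which is the machine.
# All functions here are hypothetical stand-ins, not part of AlphaWar.

def human_move(situation):
    # Placeholder for a human strategist's decision.
    return random.choice(["advance", "hold", "flank"])

def machine_move(situation):
    # Placeholder for the AI's decision; a convincing AI mimics human choices.
    return random.choice(["advance", "hold", "flank"])

def judge_guess(move_a, move_b):
    # Placeholder judge: with no telltale differences, the guess is a coin flip.
    return random.choice(["A", "B"])

def run_trials(n=1000):
    correct = 0
    for _ in range(n):
        situation = "border skirmish"  # hypothetical scenario label
        # Randomly assign the machine to slot A or B so position gives nothing away.
        machine_slot = random.choice(["A", "B"])
        move_a = machine_move(situation) if machine_slot == "A" else human_move(situation)
        move_b = machine_move(situation) if machine_slot == "B" else human_move(situation)
        if judge_guess(move_a, move_b) == machine_slot:
            correct += 1
    return correct / n

if __name__ == "__main__":
    # An identification rate near 50% (chance) is the usual "pass" criterion.
    print(f"judge identified the machine in {run_trials():.1%} of trials")
```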
What's Next for Chinese-Backed AlphaWar?
AlphaWar was developed by a team led by Professor Huang Kaiqi at the Institute of Automation, Chinese Academy of Sciences, in Beijing.
The AI passed the test in 2020, Huang and his co-authors stated in their paper, without providing specifics. It is unclear why the news took more than two years to become public. Huang's team could not be reached for comment, SCMP reports.
Governments, Companies Joining the AI Race
Governments have made substantial investments in developing computer-assisted war game technology to enhance speed and precision. In 2007, the US Defense Advanced Research Projects Agency (DARPA) launched the Deep Green program to develop artificial intelligence for war games.
After Deep Green was discontinued, DARPA launched a new initiative in 2020, Gamebreaker, to continue its efforts.
Experts See Potential in AlphaWar
According to Huang's team, simulating a real-world battle has been difficult for even the most powerful computers.
In recent years, AI technology has advanced rapidly and beaten human players in complex computer games such as StarCraft, which is significantly more complex than Go.
But some of the most critical components of military war games are the human players, who can make random errors or pull off unexpected feats in a highly unstable situation with limited knowledge of their opponents.
This uncertainty can alter the course of a battle, and according to Huang's team, it is extremely difficult for an AI to learn and imitate. They added that a war game is not merely a game, because its outcomes can affect the lives and deaths of many people.
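One simple way to picture that difficulty: human players rarely follow doctrine exactly. The hedged sketch below, which is not AlphaWar's method, models a player that usually takes a scripted "best" move but occasionally deviates at random, the kind of unpredictability an AI opponent must learn to anticipate and imitate.

```python
import random

# Illustrative only: mimic human unpredictability with an epsilon-noisy policy
# that usually follows doctrine but sometimes does something unexpected.

ACTIONS = ["advance", "hold", "flank", "retreat"]

def scripted_best_action(state):
    # Hypothetical deterministic doctrine: advance when stronger, otherwise hold.
    return "advance" if state["own_strength"] > state["enemy_strength"] else "hold"

def human_like_action(state, epsilon=0.15):
    # With probability epsilon, deviate from doctrine to model surprise moves or errors.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return scripted_best_action(state)

state = {"own_strength": 7, "enemy_strength": 5}
print([human_like_action(state) for _ in range(10)])  # mostly "advance", with occasional surprises
```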
According to the research paper, AlphaWar can develop better strategies than humans. The machine improves its performance by learning from military strategists or by playing against itself. However, it still trails the best human strategists in several areas, including unit coordination and weapon use.
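The self-play idea mentioned above can be illustrated with a toy example. The sketch below is a deliberately simplified assumption, not AlphaWar's training code: two copies of a minimal agent play a toy game, and the parameters of whichever copy performs better are kept and slightly perturbed for the next generation.

```python
import random

# Hedged sketch of self-play improvement, not AlphaWar's actual training code.
# Each agent is a single parameter: the probability of choosing "aggressive".
# When the two sides differ, the aggressive side wins 60% of the time here.

def play_match(p_a, p_b, rounds=50):
    score_a = 0
    for _ in range(rounds):
        a_aggressive = random.random() < p_a
        b_aggressive = random.random() < p_b
        if a_aggressive == b_aggressive:
            score_a += random.choice([0, 1])      # symmetric outcome
        else:
            winner_is_a = a_aggressive if random.random() < 0.6 else not a_aggressive
            score_a += 1 if winner_is_a else 0
    return score_a

def self_play_improve(generations=200):
    champion = 0.5                                 # initial strategy parameter
    for _ in range(generations):
        challenger = min(1.0, max(0.0, champion + random.uniform(-0.1, 0.1)))
        if play_match(challenger, champion) > 25:  # challenger won more than half
            champion = challenger
    return champion

print(f"learned aggressiveness: {self_play_improve():.2f}")
```

In this toy setting the champion drifts toward the more effective strategy without any human examples, which is the basic appeal of self-play; learning from recorded expert play would instead supervise the agent on human decisions.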
Huang's team notes that some new tools, such as large language models similar to ChatGPT, may be used to improve the performance of AlphaWar and other AI war games.
Stay posted here at Tech Times.