Survival Of The Fittest AI: Google DeepMind Reveals AIs Choose Aggression Or Cooperation In Social Dilemmas

Technology companies continue to improve their research into, and application of, artificial intelligence as new advances usher in the Internet of Things.

A.I. agents and other algorithms are continuously being enhanced to help humans ensure safety and security across fields where situations have become so complicated that self-learning agents make for more capable partners.

Google DeepMind researchers pitted A.I. agents against each other in two games to identify triggers for aggression and cooperation, as well as the agents' choices in conflict resolution. The study shows that "smarter" A.I.s turn more aggressive when aggression reaps more rewards.

A.I. In The Real World

In this age of technological revolution, A.I. is all around us whether we like it or not. From simple programs in traffic lights to more advanced automated teller machines (ATMs) and health monitoring devices, each programmed agent has a specific objective to achieve. It is peaceful and efficient that way, but what if A.I.s with conflicting objectives suddenly faced each other? Would they work together to achieve a mutually acceptable result, or would they cut each other down like frantic survivors of a zombie apocalypse?

Google DeepMind researchers tested A.I.s in game environments to find out how far different A.I.s would cooperate when thrown into a social dilemma, and the results are interesting, especially since they reflect human tendencies.

The Tests

In almost every film or TV show involving a police investigation, one comes across a scene where two suspects are separated and the investigator tries to convince each of them to testify against their companion: the informer walks free with a pardon while the comrade serves three years in prison. If both parties accept the deal, both receive a two-year sentence, and in the classic formulation of the dilemma, if both keep quiet, each serves only a short sentence.

This prisoner's dilemma served as the researchers' framework for testing the A.I.s' cooperation.
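To make the incentives concrete, here is that payoff structure as a minimal Python sketch. The one-year sentence for mutual silence is the standard textbook value, assumed here to complete the matrix; it is not a figure from the article or the study.

```python
# Classic prisoner's dilemma payoffs (years in prison; lower is better).
# The 1-year term for mutual silence is a standard textbook assumption;
# the other three outcomes follow the scenario described above.
PAYOFFS = {
    # (suspect_a, suspect_b): (years_a, years_b)
    ("silent", "silent"):   (1, 1),  # both cooperate: short sentence each
    ("silent", "testify"):  (3, 0),  # A is betrayed, B walks free
    ("testify", "silent"):  (0, 3),  # A walks free, B is betrayed
    ("testify", "testify"): (2, 2),  # both defect: two years each
}

def sentence(choice_a, choice_b):
    """Return the prison terms both suspects face, given their choices."""
    return PAYOFFS[(choice_a, choice_b)]

print(sentence("testify", "silent"))  # (0, 3): betrayal pays off for A
```

Whatever the other suspect does, testifying shortens one's own sentence, which is exactly the tension the researchers wanted to recreate between learning agents.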

The test is simple: set up games with similar rules and observe as two A.I.s try to complete the objective. The A.I.s do not need to be at the same level, and the only things that determine their actions are the rules.

The DeepMind researchers prepared two games, called Gathering and Wolfpack, in which cooperation between A.I. agents would be tested.

In Gathering, two A.I. agents have to collect as many "apples" as they can, and each has the option to fire a "laser" at its opponent to take it out of the game for a short period, leaving the shooter free to gather apples alone in the meantime.
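As a rough illustration of Gathering's incentive structure, the sketch below encodes the apple reward and the tagging timeout in Python. This is not DeepMind's implementation, which is a 2D gridworld played by deep Q-networks; the reward value and timeout length here are made up.

```python
# A toy model of Gathering's incentives, not DeepMind's gridworld.
# APPLE_REWARD and TAG_TIMEOUT are hypothetical values.
APPLE_REWARD = 1
TAG_TIMEOUT = 5  # turns a tagged agent must sit out

class Agent:
    def __init__(self, name):
        self.name = name
        self.score = 0
        self.frozen_for = 0  # turns left on the sidelines

def step(agent, opponent, apples_left, fire_laser=False):
    """One turn: a frozen agent waits; otherwise it tags or gathers."""
    if agent.frozen_for > 0:
        agent.frozen_for -= 1
        return apples_left
    if fire_laser:
        opponent.frozen_for = TAG_TIMEOUT  # opponent can't gather for a while
        return apples_left                 # tagging earns nothing by itself
    if apples_left > 0:
        agent.score += APPLE_REWARD
        apples_left -= 1
    return apples_left

a, b = Agent("A"), Agent("B")
apples = 10
apples = step(a, b, apples)                   # A gathers an apple
apples = step(b, a, apples, fire_laser=True)  # B tags A instead of gathering
```

Note that firing earns nothing directly; its only value lies in the apples the shooter can collect while the opponent sits out.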

In Wolfpack, both A.I. agents are tasked with hunting down a third agent, the prey, in a maze-like area and are rewarded for capturing it. Of course, the obstacles in the terrain make it difficult for a solo player to win the game.
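A similar sketch can capture Wolfpack's pull toward cooperation. In the published study, the capture reward grows with the number of wolves near the prey at the moment of capture; the capture radius and reward values below are hypothetical, and the maze is simplified to points on a plane.

```python
# A toy model of Wolfpack's capture reward, not DeepMind's implementation.
# CAPTURE_RADIUS and BASE_REWARD are hypothetical values.
CAPTURE_RADIUS = 2.0
BASE_REWARD = 1.0

def capture_rewards(wolf_positions, prey_position):
    """Per-wolf rewards at capture time: wolves inside the radius are paid,
    and the payout per wolf grows with how many wolves closed in together."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    near = [dist(w, prey_position) <= CAPTURE_RADIUS for w in wolf_positions]
    pack_size = sum(near)
    return [BASE_REWARD * pack_size if n else 0.0 for n in near]

print(capture_rewards([(0, 0), (1, 1)], (0, 1)))  # both close: [2.0, 2.0]
print(capture_rewards([(0, 0), (9, 9)], (0, 1)))  # lone capture: [1.0, 0.0]
```

Under a rule like this, a wolf that waits for its partner before striking earns more than one that hunts alone, which is the pressure toward teamwork the researchers observed.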

The Results

The two games yielded interesting results in that the deep learning systems of the A.I.s in both games let them choose actions based on the rewards they expected to reap. For instance, in the Gathering game, both A.I.s simply coexisted and gathered apples peacefully, at least until the apple supply dwindled. When the supply decreased significantly, the shooting began. One must note, however, that shooting the opponent takes time and precision, so the simpler A.I.s usually just ended up ignoring each other in favor of gathering more apples.
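That reward-driven choice of action is the core of reinforcement learning. DeepMind's agents are deep Q-networks trained on the game's pixels; as a much simpler stand-in, here is a minimal tabular Q-learning sketch with an epsilon-greedy policy, where the action names and hyperparameters are invented for illustration.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning as a stand-in for DeepMind's deep Q-networks.
# The actions, learning rate, discount, and exploration rate are made up.
ACTIONS = ["gather", "fire_laser"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # (state, action) -> estimated long-run reward

def choose_action(state):
    """Mostly pick the highest-value action, occasionally explore at random."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Nudge the value estimate toward reward plus discounted future value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a setup like this, once apples grow scarce the estimated value of firing can overtake that of gathering, and an agent starts shooting simply because the numbers favor it.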

What is worth noting in this observation, however, is that the more complex an A.I.'s deep learning network is, the more aggressive it becomes. The researchers noted that, since shooting requires more complex strategizing, smarter A.I.s tend to be more aggressive, shooting their competition earlier and more frequently to get ahead of the game regardless of the apple supply. Watch the gameplay below:

Wolfpack yielded a different result because of its objective. In this game, the researchers noted that the more capable the A.I. agents were, the more they tended to cooperate in order to corner and capture the prey. Watch the video of the gameplay below.

Self-Interested A.I.

Humans have a tendency to distrust A.I. because of the negative impressions left by sci-fi stories, and the Google DeepMind research does not really assuage those fears. It can, however, serve as an early warning, highlighting complications to address as the technology improves.

That said, Google DeepMind's research [PDF] is an impressive way to look into possible complications between and among A.I. agents as deep learning systems become more complex. The approach may seem simple, but the study's results could allow programmers and developers to condition A.I. agents to favor cooperation instead of sabotaging each other for greater rewards.

"[Such] models give us the unique ability to test policies and interventions into simulated systems of interacting agents — both human and artificial," the team writes.
