DeepMind, the artificial intelligence subsidiary of Alphabet Inc., has developed a new AI system called Adaptive Agent (AdA) that can solve tasks it has never seen before as quickly and accurately as humans, NewScientist reports.
This innovation is a big step forward for AI technology: traditional AI systems can only solve the problems they were trained on, whereas AdA can move around, plan, and manipulate objects in a 3D virtual world it has never seen before.
Reinforcing the I in AI
In 2021, DeepMind published a preprint detailing its first steps in creating an AI agent that can play multiple games without human interaction data.
The company's approach, dubbed "Open-Ended Learning," trains an agent through a vast gaming environment called XLand, which includes multiple games within 3D worlds relatable to humans.
The agent's capabilities improve iteratively through the challenges it faces during training, producing an AI that can succeed at a wide range of tasks, from simple object-finding problems to complex multiplayer games like Hide and Seek and Capture the Flag. Unlike earlier game-playing agents built specifically for titles such as StarCraft II or Dota 2, it does so without game-specific training.
Back in July 2022, DeepMind's AlphaFold AI solved one of biology's biggest problems in just 18 months by predicting the structure of nearly every protein scientists have cataloged.
Massive Improvement
AdA's recent success comes from reinforcement learning, a process in which the AI is told only what success looks like, via a reward signal, and must work out for itself the rules of its environment and how to achieve that success through trial and error.
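To make that idea concrete, here is a minimal sketch of a reinforcement-learning loop in Python. The one-dimensional corridor environment, the Q-table, and all the hyperparameters are illustrative assumptions; AdA's real training uses neural networks at a vastly larger scale, but the principle is the same: the agent is only told that reaching the goal earns a reward, and it discovers how to get there by trial and error.

```python
# Toy tabular Q-learning sketch (illustrative only, not DeepMind's setup).
import random

N_STATES = 10            # corridor cells; the goal is the right-most cell
ACTIONS = [-1, +1]       # step left or step right
EPISODES = 500
MAX_STEPS = 200          # safety cap per episode
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q-table: estimated future reward for each (state, action) pair
q = [[0.0, 0.0] for _ in range(N_STATES)]

def greedy(state):
    """Pick the best-looking action, breaking ties at random."""
    best = max(q[state])
    return random.choice([i for i, v in enumerate(q[state]) if v == best])

for _ in range(EPISODES):
    state = 0
    for _ in range(MAX_STEPS):
        # explore occasionally, otherwise exploit current estimates
        a = random.randrange(2) if random.random() < EPSILON else greedy(state)
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        # the only signal of "what success looks like": +1 at the goal
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # temporal-difference update toward reward + discounted future value
        q[state][a] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][a])
        state = next_state
        if state == N_STATES - 1:
            break

print("learned action per cell (0 = left, 1 = right):",
      [greedy(s) for s in range(N_STATES)])
```

After enough episodes, the table settles on "step right" in every cell, even though the agent was never told the rules of the corridor.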
Reinforcement learning has allowed AdA to be trained on billions of tasks of increasing difficulty, an amount of experience equivalent to roughly 100 human years. The virtual world AdA inhabits contains more tasks than there are stars in the observable universe, and AdA has to learn a system of rules similar to the laws of physics.
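The phrase "tasks of increasing difficulty" describes an automatic curriculum: tasks are drawn at a difficulty that tracks the agent's current ability, so it keeps facing problems just beyond what it has already mastered. The sketch below illustrates that idea under stated assumptions; the logistic success model and all constants are stand-ins, not AdA's actual task-selection mechanism.

```python
# Hedged sketch of an adaptive curriculum (illustrative assumptions only).
import math
import random

def attempt_task(skill: float, difficulty: float) -> bool:
    """Stand-in for a rollout: success is likelier when skill exceeds difficulty."""
    return random.random() < 1.0 / (1.0 + math.exp(difficulty - skill))

skill, difficulty = 0.0, 0.0
for step in range(10_001):
    if attempt_task(skill, difficulty):
        skill += 0.01           # the agent improves on tasks it can solve
        difficulty += 0.02      # so the curriculum pushes difficulty upward
    else:
        difficulty = max(0.0, difficulty - 0.01)  # back off when tasks are too hard
    if step % 2000 == 0:
        print(f"step {step:5d}  skill={skill:.2f}  difficulty={difficulty:.2f}")
```

In this toy loop, difficulty rises only as fast as the agent's skill allows, mirroring how a curriculum keeps training productive rather than overwhelming.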
In addition to solving unseen tasks, AdA demonstrated coordination on tasks requiring multiple in-game agents; for example, it was able to work with other agents to complete cooperative cooking games.
This ability to solve tasks in changing environments has the potential to be helpful in real-world applications such as manual labor and self-driving cars.
DeepMind's creation of AdA represents a significant step forward in the evolution of artificial intelligence technology, and it could have a substantial impact across many areas of research and application.
It is a remarkable achievement that artificial intelligence has reached the point where it can complete tasks in shifting environments just as quickly and accurately as humans can. This opens up new opportunities for the future of AI.
Are you looking for more information about DeepMind's AdA? In the accompanying paper, researchers show how reinforcement learning can teach an AI agent to adapt quickly to its environment across a vast, open-ended task space at a speed similar to that of humans.
Stay posted here at Tech Times.