When Google AI Daydreams, The Images Can Be Pretty Disturbing

A group of software engineers from Google's AI research team has shared its work on artificial neural networks and their image recognition capabilities.

When the team asks an artificial neural network that has been trained on many images to identify and enhance patterns in an existing picture, the system tends to interpret the picture in abstract ways. The engineers call the technique Inceptionism.

The process is similar to how humans see different shapes in the sky while cloud watching. With a little imagination, we often perceive familiar objects in the clouds, such as cars, faces or flowers. Google is developing an advanced vision system that allows an artificial neural network to identify what a picture shows and distinguish between objects, such as an ant versus a starfish.

Google explains that its neural network is built from stacked layers of artificial neurons, sometimes as many as 30. When an engineer feeds a picture through the network, the first layers detect low-level features such as edges and corners. The following layers identify shapes within the picture, until the final layers combine all of the information gathered by the earlier layers to complete the network's interpretation of the picture.
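To make that layered idea concrete, here is a minimal sketch of a small image classifier in PyTorch. It is not Google's actual architecture (which is far deeper); the layer names and sizes here are illustrative assumptions, chosen only to show how early layers respond to simple features and later layers combine them into a final label.

```python
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    """Illustrative stack of layers, not Google's real network."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Early layers: respond to low-level features such as edges.
        self.low_level = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Middle layers: combine edges into shapes and textures.
        self.mid_level = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Final layers: merge everything into one interpretation (a class score per object).
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        x = self.low_level(x)
        x = self.mid_level(x)
        return self.head(x)

model = TinyImageClassifier()
scores = model(torch.randn(1, 3, 64, 64))  # one fake 64x64 RGB picture
print(scores.shape)  # torch.Size([1, 10]): one score per possible object class
```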

Google "trains" these Artificial Neural Networks by filling it up with millions of pictures, and the engineers can even tweak the system to focus on a specific type of photos such as animals or trees. The engineers recently discovered how the system has advanced when they requested it to generate images of certain objects starting from just noise or a bunch of dots with colors.

Check out the whole gallery of Inceptionism by Google.
