Yeah, Androids Dreaming Of Electric Sheep
Google has created a feedback loop in its image-recognition neural network: the software searches for patterns in an image and exaggerates them, producing hallucinatory pictures of buildings, animals, people and landscapes that range from beautiful to frightening.
Have you ever wondered what machines and gadgets dream about? New images released by Google give us one possible answer: fascinating landscapes, buildings, bridges and fountains merging into one.
The images, which range from beautiful to horrifying, were generated by Google’s image-recognition neural network, which has been “taught” to recognize features such as animals, objects and buildings in pictures and photographs.
They were made by feeding an image into the network, asking it to identify one of the features it sees, and then modifying the image to emphasize that feature. The modified image is fed back into the network, which again picks out features and exaggerates them, and so on. Over many iterations, the feedback loop transforms the image beyond recognition.
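If you want a feel for what that loop looks like in code, here is a minimal sketch using PyTorch and an off-the-shelf pretrained network. It is not Google’s released implementation; the layer index, step size, step count and the file name "landscape.jpg" are illustrative assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# An off-the-shelf pretrained network stands in for Google's model here.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

def amplify(img_tensor, layer_index, steps=20, lr=0.05):
    """Repeatedly nudge the image so the chosen layer's activations grow stronger."""
    img = img_tensor.clone().requires_grad_(True)
    for _ in range(steps):
        x = img
        for i, layer in enumerate(model):
            x = layer(x)
            if i == layer_index:          # stop at the layer whose features we amplify
                break
        loss = x.norm()                   # "stress" whatever this layer has detected
        loss.backward()
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

# Feed a photo in; the loop exaggerates whatever features the network sees in it.
to_tensor = T.Compose([T.Resize(384), T.ToTensor()])
image = to_tensor(Image.open("landscape.jpg").convert("RGB")).unsqueeze(0)
dreamed = amplify(image, layer_index=20)
```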
At the lowest level, the network might be asked only to pick out the edges in a picture. In that case the result is merely artistic, much like the effect people get when they play with Photoshop filters.
But it doesn’t stop there. The network can be asked to look for more complex features. If, for example, it is hunting for animals in the picture, the result is a far weirder and more disturbing hallucination.
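Reusing the amplify sketch above, the difference between the gentle, painterly look and the full-blown hallucinations is mostly a matter of which layer you ask it to exaggerate. The indices below are illustrative, not anything Google has published.

```python
# Early layers mostly respond to edges and textures; deep layers respond to
# object-like shapes. Reuses amplify() and image from the sketch above.
edges_only = amplify(image, layer_index=3)    # painterly, Photoshop-filter feel
creatures = amplify(image, layer_index=28)    # animal- and building-like shapes appear
```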
Lastly, the software can even be set loose on an image that contains nothing but random noise. It then generates features entirely of its own invention: machines having thoughts, in a sense. You can likewise hand a featureless picture to a network trained to find buildings and watch it discover and embellish them.
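The same loop works when the starting image is nothing but noise; whatever the chosen layer responds to is then conjured from scratch. Again, this is just a sketch reusing the amplify function above, with an arbitrary layer index.

```python
# Start from random noise instead of a photo; the network "imagines" its
# favorite features onto the blank canvas.
noise = torch.rand(1, 3, 384, 384)
imagined = amplify(noise, layer_index=28)
```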
The resulting images are stunning, but they are more than eye candy. Neural networks are one of the core techniques of machine learning: instead of explicitly programming a computer to recognize what is in a picture, Google feeds it example pictures and lets it work out the key features on its own.
But that leads to software that is fairly opaque. It is hard to know which traits the software is relying on and which it is overlooking. For example, asking the network to conjure a cowbell out of random noise reveals that, as far as the network is concerned, a cowbell comes with a cow wearing it. The fix is to feed in more images of cowbells without a cow attached; then, perhaps, it will learn that the cow isn’t an inherent part of the cowbell.
Understanding precisely what goes on at each layer is one of the challenges of neural networks. Google’s engineers know that after training, each layer progressively extracts higher and higher-level features of the picture, until the final layer essentially decides what the picture shows. The first layers might look for edges or corners. Middle layers interpret those basic features and search for components and shapes, like a leaf or a door. The last few layers assemble these elements into complete interpretations: neurons that activate in response to very complex things, such as entire trees or buildings.
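One common way to poke at those layers, sketched here with an ordinary PyTorch forward hook rather than any Google-internal tooling, is to capture a layer’s activations for a photo and see which channels fire hardest. This reuses the model and image from the first sketch, and the layer indices are again assumptions.

```python
# Capture intermediate activations with forward hooks. Early layers tend to
# fire on edges and textures, deep layers on object-like structure.
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[3].register_forward_hook(save_activation("early"))
model[28].register_forward_hook(save_activation("deep"))

with torch.no_grad():
    model(image)

for name, act in activations.items():
    strongest = act.mean(dim=(0, 2, 3)).argmax().item()
    print(f"{name}: strongest channel = {strongest}")
```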
The image-recognition software has already made its mark in consumer products. Google’s new photo service, Google Photos, lets you search your images with text: typing “cat”, for example, pulls up every picture Google can find that contains a cat. Occasionally it will also return other four-legged mammals, or people, alongside the cats.
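Google hasn’t published how Photos’ search works under the hood, but the basic idea can be sketched with any pretrained classifier: label each photo, then index the file paths by label so a text query becomes a simple lookup. The file names and model choice below are purely hypothetical.

```python
import torch
import torchvision.models as models
from PIL import Image
from collections import defaultdict

weights = models.VGG16_Weights.DEFAULT
classifier = models.vgg16(weights=weights).eval()
labels = weights.meta["categories"]          # ImageNet class names
preprocess = weights.transforms()            # resize, crop and normalize as the model expects

index = defaultdict(list)
for path in ["cat1.jpg", "beach.jpg", "dog_park.jpg"]:   # hypothetical photo library
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        label = labels[classifier(img).argmax(dim=1).item()]
    index[label].append(path)

for label, paths in index.items():   # a text query like "cat" then amounts to a dictionary lookup
    print(label, paths)
```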
So there you have your answer. Androids don’t just dream of electric sheep; they also dream of fascinating, multi-colored pictures.