These Are What the Google Artificial Intelligence’s Dreams Look Like
Google's servers process much of the world's data, and, according to a Google blog post written by two Google software engineers and an intern, they apparently dream as well.
Google’s artificial neural networks (ANNs) are stacked layers of artificial neurons (run on computers) used to process Google Images. To understand how computers dream, we first need to understand how they learn. In basic terms, Google's programmers teach an ANN what a fork is by showing it millions of pictures of forks, labeling each one as a fork. Each of the network's 10–30 layers extracts progressively more complex information from the picture, from edges to shapes to, finally, the idea of a fork. Eventually, the neural network understands that a fork has a handle and two to four tines, and if there are any errors, the team corrects what the computer is misreading and tries again.
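The correct-and-retry loop described above can be sketched in miniature. The toy Python below trains a single artificial neuron (a stand-in for Google's full 10–30-layer networks) to label made-up "fork" feature vectors; the features, data, and names are all invented for illustration and are not Google's.

```python
# A minimal, made-up sketch of the training loop described above: one
# artificial neuron learns to label hand-crafted feature vectors as
# "fork" (1) or "not fork" (0), and is corrected whenever it misreads
# an example. All features and data here are illustrative inventions.

def predict(weights, bias, features):
    # fire (1) if the weighted evidence for "fork" outweighs the bias
    activation = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if activation > 0 else 0

def train(examples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in zip(examples, labels):
            error = label - predict(weights, bias, features)
            if error:  # the "team" corrects the misreading and tries again
                weights = [w + lr * error * f
                           for w, f in zip(weights, features)]
                bias += lr * error
    return weights, bias

# invented features: [has_handle, tine_count / 4, shininess]
forks = [[1.0, 0.75, 0.9], [1.0, 1.0, 0.8], [1.0, 0.5, 0.7]]
not_forks = [[0.0, 0.0, 0.9], [1.0, 0.0, 0.2], [0.0, 0.25, 0.1]]
examples = forks + not_forks
labels = [1, 1, 1, 0, 0, 0]

weights, bias = train(examples, labels)
print([predict(weights, bias, x) for x in examples])  # matches labels
```

A real network stacks millions of such neurons in layers, which is what lets it build up from edges to shapes to whole objects, but the correct-the-error-and-retry rhythm is the same.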
The Google team realized that the same process used to discern images could be used to generate images as well. The logic holds: if you know what a fork looks like, you can ostensibly draw one.
These experiments showed that even when shown millions of photos, the computer couldn’t come up with a perfect Platonic form of an object. For instance, when asked to create a dumbbell, the computer depicted long, stringy arm-things stretching from the dumbbell shapes. Arms were often found in pictures of dumbbells, so the computer concluded that dumbbells sometimes have arms.
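The dumbbell-with-arms mistake falls out of the reversal idea naturally. The toy Python below (not Google's actual method) nudges an "image" to excite a trained "dumbbell" neuron instead of nudging weights to fit an image; the weights are invented, with a spurious positive weight on an "arm" feature standing in for the arms that co-occurred with dumbbells in training photos.

```python
# A made-up sketch of recognition run in reverse: adjust the image,
# not the weights, so a trained neuron fires harder. The weights below
# are invented stand-ins for a neuron trained on dumbbell photos; the
# positive "arm" weight is an artifact of arms appearing in that data.

FEATURES = ["bar", "round_weight", "arm"]
weights = [0.9, 0.8, 0.4]  # "arm" learned from co-occurrence, not dumbbells
bias = -0.5

def activation(img):
    # how strongly the "dumbbell" neuron responds to this image
    return bias + sum(w * f for w, f in zip(weights, img))

def dream(steps=100, lr=0.05):
    img = [0.5] * len(weights)  # start from a neutral gray image
    for _ in range(steps):
        # gradient ascent: the gradient of the activation with respect
        # to feature i is just weights[i], so push each feature along
        # its weight, clamped to the valid [0, 1] range
        img = [min(1.0, max(0.0, f + lr * w))
               for f, w in zip(img, weights)]
    return img

dreamed = dream()
# every positively weighted feature saturates -- including "arm",
# which is the toy version of dumbbells growing arms
print(dict(zip(FEATURES, dreamed)))
```

Because the generator can only amplify what the recognizer learned, any correlation baked into the training photos, like arms gripping dumbbells, comes back out in the dreamed image.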
READ MORE AT: https://www.popsci.com/these-are-what-google-artificial-intelligences-dreams-look#page-2
