16,000 Computers Find a Cat On the Internet

    June 26, 2012
    Sean Patterson

[UPDATE] Google has chimed in on the results of the experiment and why it thinks machine learning is important for its future.

[ORIGINAL] Google X is the secretive lab where Google engineers try to make their wildest science fiction dreams come true. Its two most famous publicly announced projects are the self-driving cars Google is now testing throughout the country and the recently announced Google Glass augmented reality headset, and there are no doubt other, even more ambitious projects still being kept secret.

The lid was lifted on another Google X project this week when Andrew Ng, the director of Stanford's Artificial Intelligence Lab, told The New York Times about his recent machine learning experiments with Google. Ng and Google engineers have built one of the largest artificial neural networks in the world, spanning 16,000 connected processors. Ng recently gave the network an interesting task: find a cat on YouTube.

Ng is an expert on machine learning, a branch of artificial intelligence research concerned with developing learning algorithms. Using state-of-the-art machine learning techniques, the Google X neural network taught itself what a cat looks like from 10 million images taken from YouTube videos. Ng said the result surprised researchers, as the network was never told what a cat was.
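The key idea here is unsupervised learning: the network receives unlabeled data and, by trying to model it, invents its own features for whatever patterns recur. Google's actual system is far larger and more sophisticated, but the principle can be illustrated with a toy autoencoder (all names and numbers below are illustrative assumptions, not details from the experiment):

```python
import numpy as np

# Toy sketch of unsupervised feature learning (NOT Google's actual system):
# a one-hidden-layer autoencoder learns to reconstruct unlabeled inputs.
# No labels are ever given; the hidden units end up encoding whatever
# structure recurs in the data -- the same principle that let the Google X
# network form a "cat" detector from unlabeled video frames.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden=4, lr=0.5, epochs=3000):
    """Learn encoder/decoder weights that reconstruct X via a hidden code."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, hidden))   # encoder weights
    W2 = rng.normal(0, 0.1, (hidden, n_in))   # decoder weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                   # hidden code (learned features)
        Y = sigmoid(H @ W2)                   # reconstruction of the input
        err = Y - X                           # reconstruction error
        delta2 = err * Y * (1 - Y)            # backprop through decoder
        delta1 = (delta2 @ W2.T) * H * (1 - H)  # backprop through encoder
        W2 -= lr * (H.T @ delta2)
        W1 -= lr * (X.T @ delta1)
    return W1, W2

# Unlabeled "data": two recurring patterns that are never named or labeled.
X = np.array([[1, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1, 1]], dtype=float)

W1, W2 = train_autoencoder(X)
recon = sigmoid(sigmoid(X @ W1) @ W2)
print(np.round(recon, 1))  # reconstructions approach the two input patterns
```

After training, the network reconstructs both patterns accurately even though it was never told they existed, which is a miniature version of the "nobody told it what a cat was" result.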

Ng and other researchers will present the results of the simulation later this week at the International Conference on Machine Learning. In addition to his machine learning research and teaching at Stanford, Ng is a co-founder of Coursera, the free online education platform that offers classes from professors at universities such as Stanford, the University of Michigan, and Princeton. If you are interested in just how the artificial neural network was able to identify a feline, take a look at the video below, in which Ng explains to the 2011 Bay Area Vision Meeting the concepts behind the technology used to accomplish the feat:

(Picture via arxiv.org)