Bing Improves Image Search With Deep Learning
By: Zach Walton - November 22, 2013
Image search is a cornerstone of any search engine. That’s why both Google and Bing are doing everything they can to improve image search to bring up the most relevant images for any search imaginable. While some may argue that recent changes made to Google image search make it worse, Bing is moving ahead with a new strategy that involves deep learning.
So, what is deep learning? In short, it's a type of machine learning that uses artificial neural networks to learn and understand multiple concepts, including abstract ones. In the past, computer systems had to be manually "trained" to recognize patterns or specific images. With deep learning, these systems can learn to recognize those patterns on their own.
When it comes to image search, Bing found that integrating deep learning into its systems greatly improved result quality. With deep learning enabled, a search for cats returns all cats except for two dogs that happen to look like cats. Using traditional search features, the same search returns only two cats, with the rest of the results featuring dogs, a baby, and a disembodied head.
In short, Bing hopes to use deep learning to provide better search results by connecting like images via a giant graph. Here’s the full explanation:
Two images can be connected if the distance between the respective features learned through deep learning is small enough. Extending this concept to all the images on the web, trillions of connected images form a gigantic graph where each image is connected via semantic links to other images. As illustrated in the graph below, by using deep learning features, the image of a motorcycle is connected with other images with motorcycles of different colors and shapes. By using traditional features such as colors and edges, the same image of a motorcycle is connected to images of different entities such as bicycles, or even waterfalls and landscapes. In contrast, deep learning keeps the semantics in the image neighborhood even though the visual patterns are not very similar.
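The idea in the quote above can be sketched in code: compare feature vectors pairwise and link two images whenever their distance falls below a threshold. This is a minimal illustration only, not Bing's implementation — the image names, three-dimensional toy vectors, and the cosine-distance threshold are all hypothetical stand-ins for real deep-learning features, which would have hundreds or thousands of dimensions.

```python
import math

def cosine_distance(a, b):
    """Cosine distance: 0 for identical directions, up to 2 for opposite ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def build_image_graph(features, threshold=0.3):
    """Connect two images when the distance between their
    feature vectors is small enough (below the threshold)."""
    graph = {name: set() for name in features}
    names = list(features)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine_distance(features[a], features[b]) < threshold:
                graph[a].add(b)
                graph[b].add(a)
    return graph

# Toy feature vectors (hypothetical values for illustration).
features = {
    "motorcycle_red":  [0.90, 0.10, 0.00],
    "motorcycle_blue": [0.85, 0.15, 0.05],
    "waterfall":       [0.10, 0.20, 0.95],
}

graph = build_image_graph(features)
# The two motorcycles end up linked; the waterfall stays disconnected.
```

With features that capture semantics rather than raw colors and edges, the two motorcycles land in the same neighborhood even though their pixels differ, which is the behavior the quote describes.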
The above may be a little hard to follow, so here's the same concept in visual form:
As you can see in the first image, all the connected images are of motorcycles. They may not be similar motorcycles, but the system recognizes that a person is searching for a motorcycle. In the bottom image, the search results are a little more chaotic as it returns some motorcycles, but it also returns images of waterfalls and bicycles simply because the images are similar in color, among other indicators.
With deep learning enabled, Bing should be able to return more relevant image search results than before. It probably won't fix its suggested image search problem, though.

[Image: Bing]