Google Using RAISR Technology on Google+ and Saving 75% in Bandwidth

Written by Rich Ord
Google+ has become a haven for high-end photos by professional photographers who obviously care about image quality. Google’s solution to the huge bandwidth requirements of its free service is a technology called RAISR. Lower bandwidth also benefits end users by speeding up load times and lowering data costs, a particular concern outside the United States, where usage-based internet pricing is the norm.

Back in November, Google introduced a machine learning technology called “RAISR: Rapid and Accurate Image Super-Resolution” that creates high-quality versions of low-resolution images. “RAISR produces results that are comparable to or better than the currently available super-resolution methods, and does so roughly 10 to 100 times faster, allowing it to be run on a typical mobile device in real-time,” explained Peyman Milanfar, Lead Scientist at Google Research. “Furthermore, our technique is able to avoid recreating the aliasing artifacts that may exist in the lower resolution image.”

Here’s how Google’s technical team (Yaniv Romano, John Isidoro, Peyman Milanfar) described it in June 2016:

Given an image, we wish to produce an image of larger size with significantly more pixels and higher image quality. This is generally known as the Single Image Super-Resolution (SISR) problem. The idea is that with sufficient training data (corresponding pairs of low and high resolution images) we can learn a set of filters (i.e. a mapping) that, when applied to a given image that is not in the training set, will produce a higher resolution version of it, where the learning is preferably low complexity. In our proposed approach, the run-time is more than one to two orders of magnitude faster than the best competing methods currently available, while producing results comparable or better than state-of-the-art.

A closely related topic is image sharpening and contrast enhancement, i.e., improving the visual quality of a blurry image by amplifying the underlying details (a wide range of frequencies). Our approach additionally includes an extremely efficient way to produce an image that is significantly sharper than the input blurry one, without introducing artifacts such as halos and noise amplification. We illustrate how this effective sharpening algorithm, in addition to being of independent interest, can be used as a pre-processing step to induce the learning of more effective upscaling filters with built-in sharpening and contrast enhancement effect.
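To make the idea of learning “a set of filters (i.e. a mapping)” concrete, here is a minimal sketch in Python. It is not Google’s RAISR implementation: RAISR learns many small filters bucketed by local gradient statistics (edge angle, strength, coherence) and picks one per pixel via hashing, whereas this sketch learns a single global least-squares filter that maps cheaply upscaled low-resolution patches to high-resolution pixels. The patch size, scale factor, and bilinear pre-upscale are assumptions chosen for illustration.

```python
import numpy as np
from scipy import ndimage

PATCH = 7   # filter size (assumption for illustration)
SCALE = 2   # upscaling factor (assumption for illustration)

def cheap_upscale(img, scale=SCALE):
    """Cheap bilinear upscaling; the learned filter refines this result."""
    return ndimage.zoom(img, scale, order=1)

def train_filter(pairs):
    """Fit one least-squares filter from (low-res, high-res) image pairs.

    Each row of A is a flattened patch of the cheaply upscaled low-res image;
    the target is the corresponding high-res pixel. RAISR instead fits many
    such filters, one per bucket of local gradient statistics.
    """
    r = PATCH // 2
    A, b = [], []
    for lr, hr in pairs:
        up = cheap_upscale(lr)
        for y in range(r, up.shape[0] - r):
            for x in range(r, up.shape[1] - r):
                A.append(up[y - r:y + r + 1, x - r:x + r + 1].ravel())
                b.append(hr[y, x])
    h, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return h.reshape(PATCH, PATCH)

def upscale(lr, h):
    """Upscale a new image: cheap interpolation, then the learned filter."""
    return ndimage.correlate(cheap_upscale(lr), h, mode='reflect')

# Hypothetical usage: build a training pair by downsampling a high-res image,
# learn the filter, then apply it to the low-res version.
hi = np.random.rand(64, 64)
lo = hi[::SCALE, ::SCALE]
h = train_filter([(lo, hi)])
restored = upscale(lo, h)
```

In RAISR itself, selecting among many such filters by hashed gradient features is what lets this kind of cheap, filter-based pipeline recover sharp, edge-dependent detail fast enough to run on a phone.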

“RAISR, which was introduced in November, uses machine learning to produce great quality versions of low-resolution images, allowing you to see beautiful photos as the photographers intended them to be seen,” noted John Nack, Product Manager of Digital Photography at Google. “By using RAISR to display some of the large images on Google+, we’ve been able to use up to 75 percent less bandwidth per image we’ve applied it to.”

“While we’ve only begun to roll this out for high-resolution images when they appear in the streams of a subset of Android devices, we’re already applying RAISR to more than 1 billion images per week, reducing these users’ total bandwidth by about a third,” said Nack. “In the coming weeks we plan to roll this technology out more broadly — and we’re excited to see what further time and data savings we can offer.”
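As a rough, back-of-the-envelope reading of those figures (my assumptions, not Google’s published methodology): serving an image at half its linear resolution and upscaling it on-device leaves a quarter of the pixels, which works out to roughly a 75 percent per-image saving if compressed size scales with pixel count; and if a ~75 percent saving on treated images cuts total bandwidth by about a third, treated images would account for a bit under half of those users’ image traffic.

```python
# Back-of-the-envelope arithmetic for the quoted figures (assumptions only).
linear_scale = 0.5                            # serve at half the linear resolution
per_image_saving = 1 - linear_scale ** 2      # quarter of the pixels -> ~75% saving
share_treated = (1 / 3) / per_image_saving    # ~0.44 of these users' image traffic
print(per_image_saving, round(share_treated, 2))
```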
