Search: 2010 – A Review
Yesterday, I had the tremendous privilege of moderating a Webinar with our Search 2010 Panel: Marissa Mayer from Google, Larry Cornett from Yahoo, Justin Osmer from Microsoft, Daniel Read from Ask, Jakob Nielsen from the Nielsen Norman Group, Chris Sherman from Search Engine Land and Greg Sterling from Sterling Market Intelligence.
It was a great conversation, and the full one hour Webinar is now available.
I won’t steal the panelists’ thunder, but the first question I posed to them was what they see as the biggest change to search in the coming year. Most pointed to the continued emergence of blended search results on the page, as well as more advances in disambiguating intent. A few panelists looked at the promise of mobile, driven by advances in mobile technology such as multi-touch displays, embodied in the iPhone. After listening again to the various comments, I’ve put them together into 4 major driving forces for Search in 2008 and beyond:
Disambiguating Intent
The quest to understand what we want when we launch a search is nothing new. How do you deal with the complexities and ambiguity of the English language (or any language, for that matter) when you’re trying to connect the vagaries of unexpressed intent to billions of possible matches? All we have to go by is a word or two, which may carry multiple meanings. While this has always been the Holy Grail of search, expect to see some new approaches tested in 2008. We’ve already seen some of this with the search refinement and assist features on Yahoo, Live and Ask. Google also has a query refinement tool (at the bottom of the results page), but as Marissa Mayer pointed out in the Webinar, Google believes that as much disambiguation as possible should happen behind the scenes, transparent to the user.
The challenge, as Marissa also acknowledged in the Webinar, is that there are no big innovations on the horizon to help untangle intent in the background. Personalization probably holds the biggest promise in this regard, and although it was regarded with varying degrees of optimism in the Webinar, no one believes personalization will make much of a difference to the user in the next year or so. All the engines are still just dipping their toes into the murky waters of personalization. Using the social graph, or tracking the behavior of communities, is another potential signal for disambiguation, but again, we’re at the earliest stages of this. And, as Jakob Nielsen pointed out, looking at community patterns might offer some help for head phrases, but the numbers get too small as we move into the long tail to offer much guidance.
For the foreseeable future, disambiguation seems to rest with the user, through tools that help refine and focus queries, perhaps supplemented by behind-the-scenes disambiguation on the most popular and least ambiguous queries, where the engines can be reasonably confident of the user’s intent. The example we used in the Webinar was Feist, a very popular Canadian recording artist. But “Feist” is also a breed of dog. For a search on Feist, the engines can be fairly confident, based on the popularity of the artist, that the user is probably looking for information on her, not the dog.
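The Feist example can be sketched as a simple confidence threshold: only disambiguate silently when historical query data makes one interpretation dominant, and otherwise fall back to user-facing refinement tools. This is a toy illustration, not any engine’s actual method; the queries and click counts below are invented.

```python
# Hypothetical counts of which interpretation users chose for a query.
# In a real engine these would come from massive behavioral logs.
INTENT_COUNTS = {
    "feist": {"recording artist": 9600, "dog breed": 400},
    "jaguar": {"car": 4200, "animal": 3100, "os": 2700},
}

def pick_intent(query, confidence=0.9):
    """Return the dominant intent if one interpretation clearly wins,
    else None -- meaning: show the user refinement tools instead."""
    counts = INTENT_COUNTS.get(query)
    if not counts:
        return None
    total = sum(counts.values())
    intent, hits = max(counts.items(), key=lambda kv: kv[1])
    return intent if hits / total >= confidence else None

print(pick_intent("feist"))   # confident: the artist interpretation dominates
print(pick_intent("jaguar"))  # ambiguous: None, so offer refinement options
```

The threshold captures the panel’s point: confident, silent disambiguation only works for popular, lopsided queries; everything else still needs the user’s help.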
More Useful Results
The second of the 4 major areas concerns the nature of the results themselves, and what is returned with our query. Universal (Federated, Blended, etc.) results are the first step in this direction. Expect to see more of this. Daniel Read from Ask led the charge here, with their much-lauded 3D interface. As engines crawl more sources of information, including videos, audio, news stories, books and local directories, they can match more of this information to users’ interpreted intent. This will drive the biggest visible changes in search over the short term. For head phrases, those high-volume, less ambiguous queries, engines will become increasingly confident in providing us a richer, more functional result set. This will mean media results for entertainment queries, maps and directory information for local queries, and news results for topics of interest.
But Marissa Mayer feels we’re still a long way from maximizing the potential of plain old traditional web results. She pointed out examples where Google’s teams had been working on pulling more relevant and informative snippets, and showing fresher results for time-sensitive topics. Jakob Nielsen chimed in, saying that none of the examples shown during the Webinar were particularly useful. And here lies the crux of a search engine’s job: using relevance as the sole criterion isn’t good enough. For someone looking for when the iPhone might be available in Canada, a number of pages could be equally relevant, based on content alone, but some of those pages could be far more useful than others. The concept of usefulness as a ranking factor hasn’t really been explored by any of the algorithms, and it’s a far more subtle and nuanced factor than pure relevance. It depends on gathering the interactions of users with the pages themselves. And, in this case, we’re again reliant on the popularity of a page: it will be much easier to gather data and accurately determine “usefulness” for popular queries than for long tail queries.
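One way to picture usefulness as a ranking factor is a blended score: content relevance weighted against an interaction signal such as the rate of satisfied (“long”) clicks, smoothed toward a neutral prior so that sparse long-tail data doesn’t swing the ranking. This is a minimal sketch under invented weights and data, not any engine’s real algorithm.

```python
def usefulness(long_clicks, impressions, prior=0.5, prior_weight=20):
    """Smoothed long-click rate: with few impressions (the long tail),
    the estimate falls back toward the neutral prior."""
    return (long_clicks + prior * prior_weight) / (impressions + prior_weight)

def rank(pages, w_relevance=0.7, w_usefulness=0.3):
    """Order pages by a weighted blend of relevance and usefulness."""
    def score(p):
        return (w_relevance * p["relevance"]
                + w_usefulness * usefulness(p["long_clicks"], p["impressions"]))
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "a.example", "relevance": 0.80, "long_clicks": 900, "impressions": 1000},
    {"url": "b.example", "relevance": 0.85, "long_clicks": 50,  "impressions": 1000},
    {"url": "c.example", "relevance": 0.82, "long_clicks": 3,   "impressions": 5},
]
for p in rank(pages):
    print(p["url"])
```

Note how a.example outranks the slightly more relevant b.example because users actually stay on it, while the long-tail c.example, with only 5 impressions, is scored mostly on its prior: exactly the data-sparsity problem described above.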
By the way, the concept of usefulness extends to advertising as well. A good portion of the Webinar was devoted to how advertising might remain in sync with organic results, whatever their form. Increasingly, as long as usefulness is the criterion, I see the line blurring between what is editorial content and what is advertising on the page. If it gets a user closer to their intent, then it’s served its purpose.
The Promise of Mobile
When we’re talking innovation, the panel sees only incremental innovation on the desktop in the near term. But as a few panelists pointed out, mobile is in the midst of disruptive innovation right now. The iPhone marked a significant raising of the bar, with its multi-touch capabilities and smoother user experience, moving the mobile experience up to a whole new level. With that, there’s suddenly a competitive storm brewing to meet and exceed the iPhone’s capabilities. As the hardware and operating systems queue up for a series of dramatic improvements, it can only bode well for the mobile online experience, including search.
Remember, there’s a pent-up flood of functionality waiting in the mobile space for the hardware to handle it. The triad of bottlenecks that has restricted mobile innovation – speed of connectivity, processing power and limitations of the user interface – appears poised to break loose all at the same time. When those bottlenecks give way, all the players are ready to significantly up the ante in what the mobile search experience could look like.
Search Everywhere
One area we were only able to touch on tangentially (an hour was far too short a time with this group!) is how search functionality will start showing up in more and more places. Already, we’re seeing search become a key component in many mashups. The ability to put this functionality under the hood and have it power more and more functional interfaces, combined with other Web 2.0 and 3.0 capabilities, will drive the web forward.
But it’s not only on the desktop that we’ll see search go undercover. We’ve already touched on mobile, but also expect to see search functionality built into smarter appliances (a fridge that scans for recipes and specials at the grocery store) and entertainment centers (on-the-fly searching for a video or audio file). Microsoft’s Surface computing technology will bring smart interfaces to every corner of our homes, and connectivity and searchability go hand in hand with these interfaces between our physical and virtual worlds.
That touches on just some of the topics we covered in our one hour with the panelists. You can access the full Webinar at http://www.enquiroresearch.com/future-of-search-2010.aspx. We’ll be following up in 2008 with more topics, so stay tuned!