Posted by willcritchlow
I roundly mocked voice search for such a long time.
I mocked it in public.
And I argued internally at Distilled against it being an important trend.
But I think I might have been wrong.
Before I explain why I think I might have been wrong, I want to give you a few bits of information in my defence:
- I don’t drive much, and almost never on my own; I commute on the train and most of my driving is with my family.
- I work in an open-plan office without so much as a cubicle to shield my embarrassing experiments with voice search from the world.
- I actually don’t like using the phone much, so it may have passed me by that talking into that small device is a perfectly acceptable thing that normal people do.
My main arguments for why voice search wasn’t an important trend were:
1. You look stupid talking into your phone
In hindsight, perhaps this was the most shortsighted of all my arguments. Of course we don’t always look entirely sensible holding a bit of technology up to our ears, but it seems like we have made it socially acceptable in most environments.
More importantly, I think that I underestimated the speed with which things can become socially normal. I’m personally more up for trying this kind of new thing than most, and I think I underestimated everyone else’s willingness to try new things.
I increasingly make calls on my computer. Between Google+ Hangouts, Skype, and GoToMeeting, I probably average 2-3/day, so even in my cubicle-less existence it’s becoming more and more normal for me to talk to my computer.
2. You can’t edit things easily
Anyone who tried early voice dictation software is familiar with the process of trying to get it to recognise a correction command, only to have it write out what you said:
“Delete word back. DELETE WORD BACK. Screw it.”
My imagined future of voice search had all kinds of similar problems. While some people are reporting that third parties can activate Google Glass, I imagine those are just teething difficulties.
There are two big things that give me hope for the future of voice search in terms of query editing:
(a) So much context is going with each query
You only have to look at Google Now to realise how far this has come.
You know that when they are capable of returning results for things you haven’t even searched for yet (see Danny’s write-up), they must be doing a lot of enhancement of queries with implicit data even when you are explicitly searching. Here’s how we’ve been thinking about it at Distilled.
All of this gives Google ever-increasing ability to get the query right by appending context and other information to it.
(b) Conversational search is amazing
Of all the many things that should impress me (like Google’s ability to return results for a never-seen-before query in a fraction of a second), conversational search is perhaps one of the more gimmicky in its current incarnation.
We’ve long had results that shift in response to previous queries, but being able to reference those earlier queries explicitly is new. It’s amazing how slick this is (when it works), and it feels futuristic to be able to ask your computer:
- “How old is Barack Obama?”
- “How tall is he?”
- “Who is his wife?”
- “How old is she?”
Or to ask for the time in multiple time zones.
All of this makes me think that query correction may not be needed very often, and that when it is, it won’t be too much of a problem. Voice input is already quicker than typing for relatively easily spoken mid-length queries.
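If you want to picture how the context-carrying might work, here’s a toy Python sketch of a follow-up query being rewritten against the most recently mentioned entity. To be clear: this is my own illustration, not anything Google has described; the names and the crude pronoun substitution are purely hypothetical.

```python
# A toy sketch (not Google's implementation) of carrying an entity across
# a conversational search session, so a follow-up like "How tall is he?"
# can be rewritten into a standalone query. All names are made up.

PRONOUNS = {"he", "she", "him", "her", "his", "hers", "it", "its"}

class ConversationContext:
    def __init__(self):
        self.last_entity = None  # e.g. "Barack Obama"

    def remember(self, entity):
        """Store the entity the last answer was about."""
        self.last_entity = entity

    def rewrite(self, query):
        """Substitute the remembered entity for any pronouns in a follow-up."""
        if not self.last_entity:
            return query
        words = query.rstrip("?").split()
        resolved = [self.last_entity if w.lower() in PRONOUNS else w for w in words]
        return " ".join(resolved) + "?"

ctx = ConversationContext()
ctx.remember("Barack Obama")            # from "How old is Barack Obama?"
print(ctx.rewrite("How tall is he?"))   # -> How tall is Barack Obama?
print(ctx.rewrite("Who is his wife?"))  # -> Who is Barack Obama wife? (crude,
                                        #    but enough to answer the question)
ctx.remember("Michelle Obama")          # the answer shifts the context
print(ctx.rewrite("How old is she?"))   # -> How old is Michelle Obama?
```

The real system clearly does far more than string substitution, but the principle (every query is interpreted against the session so far) is the interesting bit.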
3. It doesn’t matter anyway — they’re just queries
I honestly hadn’t thought too much about the marketing implications, because I figured that not only was voice search not going to catch on, but that even if it did, it would make no practical difference to us as marketers. I figured the way it would work would be something like:
Voice → text → query → result
In actuality, the clumsiness of voice input appears to be a driving force behind Google relying less on the query itself and more on the implicit and explicit input from the user.
I wonder if we should have seen this coming, with “(not provided)” foreshadowing the death of the keyword. I had interpreted the statements from Googlers about “the death of the number one ranking” as being all about naive personalisation (location, search history, etc.). In fact, it appears that they are talking about the capability to process a whole load of new implicit inputs, including things like:
- Device
- Current activity
- Daily routine
- Interests
- Significant places
- Social network
- Calendar entries
- Gmail information (flight confirmations, etc.)
Voice search is a powerful driver towards queryless search and (more importantly, I think) query-enhanced search, where sparse input information is combined with ambient and personal information to return the results you need right now.
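To make that a bit more concrete, here’s a rough, hypothetical Python sketch of the bundle of information I imagine reaching the engine: a three-word spoken query packaged up with implicit signals like the ones listed above. None of the field names or values are real; it’s just a way of visualising query-enhanced search.

```python
# A rough sketch of "query-enhanced search": a terse spoken query is
# bundled with implicit, ambient signals before anything is matched.
# The field names and context values are invented for illustration.

from datetime import datetime

def enhance_query(spoken_query, user_context):
    """Combine the explicit query with implicit information about the user."""
    return {
        "explicit_query": spoken_query,
        "device": user_context.get("device"),
        "significant_place": user_context.get("significant_place"),
        "local_time": datetime.now().isoformat(),
        "current_activity": user_context.get("activity"),
        "upcoming_events": user_context.get("calendar", []),
        "interests": user_context.get("interests", []),
    }

context = {
    "device": "mobile",
    "significant_place": "London King's Cross",
    "activity": "commuting",
    "calendar": ["Flight to New York, Friday 18:40"],
    "interests": ["triathlon", "SEO"],
}

# Three spoken words, but the request the engine sees is far richer:
print(enhance_query("time to leave", context))
```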
Is voice search the future, then?
I think it’s part of the future. I don’t see it cannibalising much of desktop search, where I imagine it’ll remain a novelty or an add-on, and I expect much of its application to mobile search will be incremental on top of more complex written queries.
The more important part, in my mind, is the impact of the technology it takes to power voice search. The fact that Google can roll out voice search as effective as this speaks not only to their natural language processing capability, but also to the maturity of their understanding of the web.
What should we do as marketers?
As web marketers, we need to realise that the dumb robot we’ve been considering all these years is rapidly becoming smarter. I think the actions for marketers have far less to do with voice search itself than with a real understanding of the underlying technology.
If you haven’t seen this video (I found it via Justin), I highly recommend taking the time to watch at least the first half hour (up to the Q&A).
…and that’s from over two years ago. It’s quite stunning how far Google’s understanding of the web has come, and technologies like Google Now highlight its ability to put it all together.
The biggest actions I would recommend are therefore to prioritise all the things that help Google understand rather than just index your site. That means things like:
- Authorship information
- Structured markup and structured data (there’s a rough sketch of what this can look like after this list)
- Accurate meta information for objects and pages
- Machine-readable feeds of anything they consume (location data, prices, new content)
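Here’s the kind of thing I mean by the structured markup point: describing a product’s price as schema.org data instead of leaving Google to infer it from prose. JSON-LD is only one way of serialising it (microdata and RDFa express the same vocabulary), and the product and values below are invented.

```python
# A small illustration of structured markup: a product's price expressed
# as schema.org data. The product and values are invented; JSON-LD is one
# serialisation among several (microdata, RDFa).

import json

product = {
    "@context": "http://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A product used purely for illustration.",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "GBP",
        "availability": "http://schema.org/InStock",
    },
}

# This would sit inside a <script type="application/ld+json"> block in the
# page's HTML so a crawler can read the price as data rather than prose.
print(json.dumps(product, indent=2))
```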
Conceptually, I think we need to change our mindset around keywords. “(not provided)” isn’t the only thing taking away query information; searches will increasingly be made up of implicit information alongside the explicit query.
Even if “(not provided)” were rolled back (some chance!), we would still be left with less and less information to explain why and how a particular visitor arrived on our site and why we ranked for them. I see analytics and reporting moving towards a content- and user-centric model (across repeat visits and across devices), and away from a transactional, session-based view of keywords. You can set yourself up for future success by moving towards content-centric metrics now, and by implementing user-centric tracking with your analytics platform of choice (or waiting for it to come to Universal Analytics).
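To show what I mean by user-centric rather than session-centric measurement, here’s a deliberately platform-agnostic Python sketch: sessions from different devices are keyed to one anonymised user ID, so the questions you ask of the data are about people and content rather than about the keyword that started a single visit. The data model here is entirely hypothetical.

```python
# A platform-agnostic sketch of user-centric measurement: sessions from
# several devices are keyed to one anonymised user id, so reporting can
# pivot on people and content rather than on the keyword behind one visit.
# The id scheme and storage here are hypothetical.

from collections import defaultdict

sessions_by_user = defaultdict(list)

def record_session(user_id, device, pages, converted):
    """Log a visit against a persistent user id rather than a lone session."""
    sessions_by_user[user_id].append(
        {"device": device, "pages": pages, "converted": converted}
    )

# The same person researching on a phone and converting later on a desktop:
record_session("user-123", "mobile", ["/guides/voice-search"], converted=False)
record_session("user-123", "desktop", ["/guides/voice-search", "/pricing"], converted=True)

# A content-centric question: which pages appear in journeys that convert?
pages_in_converting_journeys = {
    page
    for sessions in sessions_by_user.values()
    if any(s["converted"] for s in sessions)
    for s in sessions
    for page in s["pages"]
}
print(pages_in_converting_journeys)
```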
I’m looking forward to some disagreement in the comments, but remember: there’s a lot of science left to come.