
Building SEO-Focused Pages to Serve Topics & People Rather than Keywords & Rankings – Whiteboard Friday

Posted by randfish

With updates like Hummingbird, Google is getting better and better at determining what’s relevant to you and what you’re looking for. This can actually help our work in SEO, as it means we don’t have to focus quite so intently on specific keywords.

In today’s Whiteboard Friday, Rand explains how focusing on specific kinds of people and the topics they’re interested in can be even more effective in driving valuable traffic than ranking for specific keywords.


For reference, here’s a still of this week’s whiteboard:

Video Transcription

Howdy, Moz fans and welcome to another edition of “Whiteboard Friday.” This week, I want to talk to you a little bit about the classic technique of building SEO pages for keywords and rankings versus the more modern technique of trying to do this with people and topics in mind. So, let me walk you through the classic model and show you why we’ve needed to evolve.

So, historically, SEO has really been about keyword rankings. It’s “I want to rank well for this keyword because that particular keyword sends me traffic that is of high quality. The value of the people visiting my site from that is high.” The problem is, this doesn’t account for other types of traffic, channels, and sources, right? We’re just focused on SEO.

This can be a little bit problematic because it can mean that we ignore things like social and content marketing opportunities and email marketing opportunities. But, okay. Let’s stick with it. In order to do this, we do some keyword research. We figure out which terms and phrases are popular, which ones are high and low competition, which ones we expect to drive high-quality traffic.

We create some landing pages for each of these terms and phrases. We get links. And we optimize that content so that hopefully, it performs well in the search engines. And then we measure the success of this process based on both the rankings themselves and the keywords that drive traffic to those pages, and whether the people visiting from those keywords are high-quality visitors.

And then we decide “Yeah, I’m not ranking so well for this keyword. But gosh, it’s sending great traffic. Let me focus more on this one.” Or “Oh, I am ranking well for this. But the keyword is not sending me high-quality traffic. So, it doesn’t matter that much. I’m going to ignore it because of the problems.”

So, a lot of times, creating these landing pages with each particular term and phrase is doing a lot of unnecessary overlapping work, right? Even if you’re not doing this sort of hyper, slight modifications of each phrase. “Brown bowling shoes,” “red bowling shoes,” “blue bowling shoes.” Maybe you could just have a bowling shoes page and then have a list of colors to choose from. Okay.

But even still, you might have “bowling shoes” and “shoes for going bowling.” And “shoes for indoor sports,” all of these different kinds of things that could have a considerable amount of overlap. And many different topic areas do this.

The problem with getting links and optimizing these individual pages is that you’re only getting a page to rank for one particular term or maybe a couple of different terms, versus a group of keywords in a topic that might all be very well-served by the same content, by the same landing page.

And by the way, because you’re doing this, you’re not putting in the same level of effort, energy, quality and improvement, right? You’re not investing in making this content better and better. You’re just trying to churn out landing page after landing page.

And then, if you’re measuring success based on the traffic that the keyword is sending, this isn’t even possible anymore. Because Google has taken away keyword referral data and given us (not provided) instead.

And this is why we’re seeing this big shift to this new model, this more modern model, where SEO is really about the broad performance of search traffic across a website, and about the broad performance of the pages receiving search visits. So, this means that I look at a given set of pages, I look at a section of my site, I look at content areas that I’m investing in, and I say “Gosh, the visits that come from Google, that come from Bing, that come from Image Search, whatever they are, these are performing at a high quality, therefore, I want to invest more in SEO.” Not necessarily “Oh, look. This keyword sent me this good traffic.”

I’m still doing keyword research. I’m still using that same process, right? Where I go and I try to figure out “Okay, how many people are searching for this term? Do I think they’re going to be high-quality visitors? And is the competition low enough to where I think my website can compete?”

I’m going to then define groups of terms and phrases that can be well-served by that content. This is very different. Instead of saying “Blue bowling shoes” and “Brown bowling shoes,” I’m saying, “I think I can have one great page around bowling shoes, in general, that’s going to serve me really well. I’m going to have all different kinds, custom bowling shoes and all these different things.”

And maybe some of them deserve their own individual landing pages, but together, this group of keywords can be served by this page. And then these individual ones have their own targeted pages.

From there, I’m going to optimize for two things that are a little bit different than what I’ve done in the past. Both keyword targeting and being able to earn some links. But also, an opportunity for amplification.

That amplification can come from links. It could come from email marketing, it could come from social media. It could come from word-of-mouth. But, regardless, this is the new fantastic way to earn those signals that seem to correlate with things ranking well.

Links are certainly one of them. But we don’t need the same types of direct anchor text that we used to need. Broad links to a website can now help increase our domain authority, meaning that all of our content ranks well.

Google certainly seems to be getting very good at recognizing the relevancy of particular websites around topic areas. Meaning that if I’ve done a good job in the past of showing Google that I’m relevant for a particular topic like bowling shoes, then when I put together a custom, graphic-printed, leather bowling shoes page, that page might rank right away. Even if I haven’t done very much work to specifically earn links to it and get anchor text and those kinds of things, because of the relevancy signals I’ve built up in the past. And that’s what this process does.

And now, I can measure success based on how the search traffic to given landing pages is performing. Let me show you an example of this.

And here, I’ve got my example. So, I’m focusing beyond bowling shoes. I’m going to go with “Comparing mobile phone plans,” right? So, let’s say that you’re putting together a site and you want to try and help consumers who are looking at different mobile phone plans, figure out which one they should go with, great.

So, “Compare mobile phone plans” is where you’re starting. And you’re also thinking about “Well, okay. Let me expand beyond that. I want to get broad performance.” And so, I’m trying to get this broad audience to target. Everyone who is interested in this topic. All these consumers.

And so, what are things that they also might be interested in? And I’ll do some keyword research and some subject matter research. Maybe I’ll talk to some experts, I’ll talk to some consumers. And I’ll see providers, they’re looking for different phone providers. They might use synonyms of these different terms. They might have some concept expansion that I go through as I’m doing my keyword research.

Maybe I’m looking for queries that people search for before and after. So, after they make the determination if they like this particular provider, then they go look at phones. Or after they determine they like this phone, they want to see which provider offers that phone. Fine, fair.

So, now, I’m going to do this definition of the groups of keywords that I care about. I have comparisons of providers: Verizon, T-Mobile, Sprint, AT&T. Comparisons of phones: the Galaxy, iPhone, Nexus, by price or features. What about people who are really heavy into international calling or family plans, or who travel a lot? Who need data-heavy stuff or do lots of tethering to their laptops?

So, this type of thing is what’s defining the pages that I might build by the searcher’s intent. When they search for keywords around these topics, I’m not necessarily sure that I’m going to be able to capture all of the keywords that they might search for and that’s okay.

I’m going to take these specific phrases that I do put in my keyword research. And then, I’m going to expand out to, “All right, I want to try and have a page that reaches all the people who are looking for stuff like this.” And Google’s actually really helping you with search algorithms like Hummingbird, where they’re expanding the definition of what keyword relevancy and keyword matching really mean.

So, now, I’m going to go and I’m going to try and build out these pages. So, I’ve got my phone plans compared. Verizon versus T-Mobile versus AT&T versus Sprint. The showdown.

And that page is going to feature things like “I want to show the price of the services over time. I want to show which phones they have available.” And maybe pull in some expert ratings and reviews for those particular phones. Maybe I’ll toss in CNET’s rating on each of the phones and link over to that.

What add-ons do they have? What included services? Do I maybe want to link out to some expert reviews? Can I have sorting so that I can say “Oh, I only want this particular phone. So, show me only the providers that have got that phone” or those types of things.

And then, I’m going to take this and I’m going to launch it. All this stuff, all these features are not just there to help be relevant to the search query. They’re to help the searcher and to make this worthy of amplification.

And then, I can use the performance of all the search traffic that lands on any version of this page. So, this page might have lots of different URLs based on the sorting or what features I select or whatever that is. Maybe I rel canonical them or maybe I don’t, because I think it can be expanded out and serve a lot of these different needs. And that’s fine, too.

But this, this is a great way to effectively determine the ROI that I’ve gotten from producing this content, targeting these searchers. And then, I can look at the value from other channels in how search impacts social and social impacts search by looking at multi-channel and multi-touch. It’s really, really cool.

So, yes. SEO has gotten more complex. It’s gotten harder. There’s a little bit of disassociation away from just the keyword and the ranking. But this process still really works and it’s still very powerful. And I think SEOs are going to be using this for a long time to come. We just have to have a switch in our mentality.

All right, everyone. I look forward to the comments. And we’ll see you again next week for another edition of “Whiteboard Friday.” Take care.

Video transcription by Speechpad.com



I Am an Entity: Hacking the Knowledge Graph

Posted by Andrew_Isidoro

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of Moz, Inc.

For a long time Google has algorithmically led users towards web pages based on search strings, yet over the past few years, we’ve seen many changes which are leading to a more data-driven model of semantic search.

In 2010, Google hit a milestone with its acquisition of Metaweb and its semantic database, now known as Freebase. This database helps to make up the Knowledge Graph: an archive of over 570 million of the most searched-for people, places and things (entities), including around 18 billion cross-references. It’s a truly impressive demonstration of what a semantic search engine with structured data can bring to the everyday user.

What has changed?

The surge of Knowledge Graph entries picked up by Dr Pete a few weeks ago indicates a huge change in the algorithm. For some time, Google has been attempting to establish a deep associative context around entities to understand the query rather than just regurgitate what it believes is the closest result, but this has been focused on a very tight dataset reserved for high-profile people, places and things.

It seems that has changed.

Over the past few weeks, while looking into how the Knowledge Graph pulls data for certain sources, I have made a few general observations and have been tracking what, if any, impact certain practices have on the display of information panels.

If I’m being brutally honest, this experiment was to scratch a personal “itch.” I was interested in the constructs of the Knowledge Graph over anything else, which is why I was so surprised that a few weeks ago I began to see this:

Google Search for "Andrew Isidoro's Age"

It seems that anyone wishing to find out “Andrew Isidoro’s Age” could now be greeted with not only my age but also my date of birth in an information panel. After a few well-planned boasts to my girlfriend about my newfound fame (all of which were dismissed as “slightly sad and geeky”), I began to probe further and found that this was by no means the only piece of information that Google could supply users about me.

It also displayed data such as my place of birth and my job. It could even answer natural language queries and connect me to other entities, in queries such as: “Where did Andrew Isidoro go to school?”

and somewhat creepily, “Who are Andrew Isidoro’s parents?“.

Many of you may now be a little scared about your own personal privacy, but I have a confession to make. Though I am by no means a celebrity, I do have a Freebase profile. The information that I have inputted into this is now available for all to see as a part of Google’s search product.

I’ve already written about the implications of privacy so I’ll gloss over the ethics for a moment and get right into the mechanics.

How are entities born?

Disclaimer: I’m a long-time user of and contributor to Freebase, I’ve written about its potential uses in search many times and the below represents my opinion based on externally-visible interactions with Freebase and other Google products.

After taking some time to study the subject, there seems to be a structure around how entities are initiated within the Knowledge Graph:

Affinity

As anyone who works with external data will tell you, one of the most challenging tasks is identifying the levels of trust within a dataset. Google is no different here; to be able to offer a definitive answer to a query, they must be confident of its reliability.

After a few experiments with Freebase data, it seems clear that Google are pretty damn sure the string “Andrew Isidoro” is me. There are a few potential reasons for this:

  • Provenance

To take a definition from W3C:

“Provenance is information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness.”

In summary, provenance is the ‘who’. It’s about finding the original author, editor and maintainer of data; and through that information Google can begin to make judgements about their data’s credibility.

Google has been very smart with their structuring of Freebase user accounts. To log in to your account, you are asked to sign in via Google, which of course gives the search giant access to your personal details, and may offer a source of data provenance from a user’s Google+ profile.

Freebase Topic pages also allow us to link a Freebase user profile through the “Users Who Say They Are This Person” property. This begins to add provenance to the inputted data and, depending on the source, could add further trust.

  • External structured data

Recently, one area of tremendous growth for SEOs has been structured data. Understanding the schema.org vocabulary has become a big part of our roles within search, but there is still much that isn’t being experimented with.

Once Google crawls web pages with structured markup, it can easily extract and understand structured data based on the markup tags and add it to the Knowledge Graph.

No property has been more overlooked in the last few months than the sameAs relationship. Google has long used two-way verification to authenticate web properties, and even explicitly recommends using sameAs with Freebase within its documentation; so why wouldn’t I try and link my personal webpage (complete with person and location markup) to my Freebase profile? I used a simple itemprop to exhibit the relationship on my personal blog:

<link itemprop="sameAs" href="http://www.freebase.com/m/0py84hb">

Finally, my name is by no means common; according to howmanyofme.com there are just 2 people in the U.S. named Andrew Isidoro. What’s more, I am the only person with my name in the Freebase database, which massively reduces the amount of noise when looking for an entity related to a query for my name.

Data sources

Over the past few months, I have written many times about the Knowledge Graph and have had conversations with some fantastic people around how Google decides which queries to show information panels for.

Google uses a number of data sources and it seems that each panel template requires a number of separate data sources to initiate. However, I believe that it is less an information retrieval exercise and more of a verification of data.

Take my age panel example; this information is in the Freebase database yet in order to have the necessary trust in the result, Google must verify it against a secondary source. In their patent for the Knowledge Graph, they constantly make reference to multiple sources of panel data:

“Content including at least one content item obtained from a first resource and at least one second content item obtained from a second resource different than the first resource”

These resources could include any entity provided to Google’s crawlers as structured data, including code marked up with microformats, microdata or RDFa; all of which, when used to their full potential, are particularly good at making relationships between themselves and other resources.

The Knowledge Graph panels access several databases dynamically to identify content items, and it is important to understand that I have only been looking at initiating the Knowledge Graph for a person, not for any other type of panel template. As always, correlation ≠ causation; however it does seem that Freebase is a major player in a number of trusted sources that Google uses to form Knowledge Graph panels.

Search behaviour

As for influencing what might appear in a knowledge panel, there are many potential sources that information might come from, beyond just what we usually think of as knowledge bases.

Bill Slawski has written on what may affect data within panels; most notably that Google query and click logs are likely being used to see what people are interested in when they perform searches related to an entity. Google search results might also be used to unveil aspects and attributes that might be related to an entity as well.

For example, search for “David Beckham” and scan through the titles and descriptions of the top 100 search results, and you may see certain terms and phrases appearing frequently. It’s probably not a coincidence that his salary is shown within the Knowledge Graph panel when “David Beckham Net Worth” is the top autosuggest result for his name.
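
To make that kind of scan concrete, here is a rough Python sketch of the frequency count (nothing Google has confirmed, just simple counting); it assumes you have already saved the titles and descriptions of the top 100 results to a plain-text file, one per line:

    from collections import Counter
    import re

    # Assumes one title or description per line in serp_text.txt, collected
    # however you like (rank tracker export, manual copy/paste, etc.).
    with open("serp_text.txt", encoding="utf-8") as f:
        lines = [line.strip().lower() for line in f if line.strip()]

    # Count two-word phrases across all titles and descriptions.
    bigrams = Counter()
    for line in lines:
        words = re.findall(r"[a-z0-9']+", line)
        bigrams.update(zip(words, words[1:]))

    for (first, second), count in bigrams.most_common(15):
        print(f"{first} {second}: {count}")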

Why now?

Dr Pete wrote a fantastic post a few weeks ago on “The Day the Knowledge Graph Exploded” which highlights what I am beginning to believe was a major turning point in the way Google displays data within panels.

The Day the Knowledge Graph Exploded - Dr Pete

However, where Dr Pete’s “gut feeling is that Google has bumped up the volume on the Knowledge Graph, letting KG entries appear more frequently,” I believe that there was a change in the way they determine the quality of their data: a reduction in the affinity threshold needed to display information.

For example, not only did we see an increase in the number of panels displayed but we began to see a few errors in the data:

One such error can be traced back to a rogue Freebase entry added in December 2012 (almost a year ago) that sat unnoticed until this “update” put it into the public domain. This suggests that some sort of editorial control was relaxed to allow this information to show, and that Freebase can be used as a single source of data.

For person-based panels, my inclusion seems to mark the new era of the Knowledge Graph that Dr Pete reported on a few weeks ago. We can see that new “things” are being discovered as strings; then, using data, free-text extraction and natural language processing tools, Google is able to aggregate, clean, normalize and structure information from Freebase and the search index, with the appropriate schema and relational graphs, to create entities.

Despite the brash headline, this post is a single experiment and should not be treated as gospel. Instead, let’s use this as a chance to generate discussion around the changes to the Knowledge Graph, for us to start thinking about our own hypotheses and begin to test them. Please leave any thoughts or comments below.



The Shape of Things to Come: Google in 2014

Posted by gfiorelli1

We can’t imagine the future without first understanding the past.

In this post, I will present what I consider the most relevant events we experienced this year in search, and will try to paint a picture of things to come by answering this question: How will Google evolve now that it has acquired Wavii, Behav.io, PostRank, and Grapple, along with machine learning and neural computing technology?

The future of Google will be based on entity search, semantic search, and über-personalization, and all the technologies it acquired will interact with one another in order to shape the Google we will experience in 2014. I’ll show you how to deal with it.

The past

Last year, in my “preview” post The Cassandra Memorandum, besides presenting my predictions about the 2013 search marketing landscape, I presented a funny prophecy from a friend of mine: the “Balrog Update,” an algorithm that, wrapped in fire, would crawl the web, penalizing and incinerating sites which do not include the anchor text “click here” at least seven times and do not include a picture of a kitten asleep in a basket.

Thinking back, though, that hilarious preview wasn’t incorrect at all.

In the past three years, we’ve had all sorts of updates from Google: Panda, Penguin, Venice, Top-Heavy, EMD, (Not Provided) and Hummingbird (and that’s just its organic search facet).

Bing, Facebook, Twitter, and other inbound marketing outlets also had their share of meaningful updates.

For many SEOs (and not just for them), organic search especially has become a sort of Land of Mordor…

For this reason, and because I so often see in the Q&A, in tweets sent to me, and in requests for help popping up in my inbox how discouraged many SEOs feel in their daily work by all these frenzied changes, I thought it was better to have our own war speech before presenting my vision of what we need to expect in search in 2014.

Somehow we need it:

(Clip from the “The Lord of the Rings: The Return of the King” by Peter Jackson, distributed by Warner Bros)


A day may come when the courage of SEOs fails. But it is not this day.


A methodology

“I give you the light of Eärendil … May it be a light for you in dark places,
when all other lights go out.”

Even though I’m interested in large-scale correlation tests like the Moz Search Engine Ranking Factors, in reality I am convinced that the science in which we best excel is that of hindsight.

For example, when Caffeine was introduced, almost no one imagined that that magnification of the SERPs would also mean their deterioration.

Probably even Google hadn’t calculated the side effects of that epochal infrastructural change, and only the obvious decline in the quality of the SERPs (who remembers this post by Rand?) led to Panda, Penguin, and EMD.

But we understood only after they rolled out that Panda and Co. were necessary consequences of Caffeine (and of spammers’ greed).

And despite my belief that every technical marketer (as SEOs and social media marketers are) should devote part of their time to conducting experiments that test their theories, the best science we actually tend to apply is the science of inference.

AuthorRank is a good example of that. Give us a Patent, give us some new mark-up and new social-based user profiling, and we will create a new theory from scratch that may include some fundamentals but is not proven by the facts.

Hindsight and deduction, however, are not to blame. On the contrary; if done wisely, reading into the news (albeit avoiding paranoid theories) can help us perceive with some degree of accuracy what the future of our industry may be, and can prepare us for the changes that will come.

While we were distracted…

While we were distracted—first by the increasingly spammy nature of Google’s SERPs and, secondly, by the updates Google rolled out to fight that same spam—Big G was silently working on its evolution.

Our (justified) obsession with the Google zoo made us underestimate what were actually the most relevant Google “updates”: the Knowledge Graph, Google Now, and MyAnswers.

The first—which has become a sort of new obsession for us SEOs—was telling us that Google didn’t need an explicit query for showing us relevant information, and even more importantly, that people could stay inside Google to find that information.

The second was a clear declaration of which field Google is focusing its complete interest on: mobile.

The third, MyAnswers, tells us that Personalization—or, better, über-personalization—is the present and future of Google.

MyAnswers, recently rolled out in the regional Googles, is a good example of just how much we were distracted. Tell me: How many of you still talk about SPYW? And how many of you know that its page now redirects to the MyAnswers one? Try it: www.google.com/insidesearch/features/plus/.

What about Hummingbird?

Yes, Hummingbird, the update no SEO noticed was rolled out.

Hummingbird, as I described in my latest post here on Moz, is an infrastructural update that essentially governs how Google understands a query, applying to all the existing “ranking factors” (sigh) that draw the SERPs.

From the very few things we know, it is based on the synonym dictionaries Google was already using, but applies a concept-based analysis on top of them, where entities (both named entities and search entities) and “word coupling” play a very important role.

But, still, Google is attending primary school and must learn a lot, for instance not confusing Spain with France when analyzing the word “tapas” (or Italy with the USA for “pizza”):

But we also know that Google, which had already gained great experience in machine learning with Panda, has bought DNNresearch Inc. and its deep neural networks, and that people like Andrew Ng have moved from the Google X team to the Knowledge team (the same team as Amit Singhal and Matt Cutts), so it is quite probable that Google will be a very disciplined student and will learn very fast.

The missing pieces of the “future” puzzle

As with any other infrastructural change, Hummingbird will lead to visible changes. Some might already be here (the turmoil in the Local Search as described by David Mihm), but the most interesting ones are still to come.

Do you want to know what they are? Then watch and listen to what Oren Etzioni of Wavii (bought by Google in April 2013) says in this video:

As well described by Bill Slawski here:

The [open information] extraction approach identifies nouns and how they might be related to each other by the verbs that create a relationship between them, and rates the quality of those relationships. A “classifier” determines how trustworthy each relationship might be, and retains only the trustworthy relationships.
These terms within these relationships (each considered a “tuple”) are stored in an inverted index that can be used to respond to queries.

So, it can improve the usage of Google’s immense Knowledge Base, along with the predictive answers to queries based on context. Doesn’t all this remind you of what we already see in the SERPs?

Moreover, do you see the connection with Hummingbird, and how it can link together the Knowledge Graph, Google Now, and MyAnswers, and ultimately also determine how classic organic results (and ads) will be shown to users based on a purely entity-based and semantic analysis, where links will still play a role, but no longer such an overly determinant one?

So, if I have to preview the news that will shake our industry in 2014, I would look to the path Wavii has shown us, but also especially to the solutions that Google finds for answering the questions Etzioni himself was presenting in the video above as the challenges Wavii still needed to solve.

But another acquisition may hide the key to those questions: the team from Behav.io.

I say team, because Google did not buy Behav.io as a company; it hired the entire team, which became part of the Google Now area.

What was the objective of Behav.io? It was looking at how people’s locations, networks of phone contacts, physical proximity, and movement throughout the day could help in predicting a range of behaviors.

Moreover, Behav.io was based on smart analysis of all the data the sensors in our smartphones can reveal about us. Not only GPS data (have you ever looked at your Location History?), but also the speakers/microphones, proximity detection between two or more sensors, which apps we use and which we download and discard, the lighting sensors, browser history (no matter which search engine we use), the accelerometer, SMS…

You can imagine how Google could use all this information: again, to enhance the predictive answering of any query that could matter to us. The repercussions of this technology will be obvious for Google Now, but also for MyAnswers, which is substantially very similar to Google Now in its purposes.

The ability to understand app usage could allow Google to create an interest graph for each one of us, which could enhance the “simple” personalization offered by our web history. For instance, I usually read the news directly from the official apps of the newspapers and magazines I like, not from Google News or a browser. I also read 70% of the posts I’m interested in from my Feedly app. All that information would normally not be accessible by Google, but now that it owns the Behav.io technology, it could access it.

But the Behav.io technology could also be very important for helping Google understand what the real social graph of every single person is. The social graph is not just the connection between profiles in Facebook or Twitter or Google Plus or any other social network, nor is it the sum of all the connections of every social network. The “real life social graph” (this definition is mine) is also composed of the relations between people that we don’t have in our circles/followers/fans, people we contact only by phone, short text messages or WhatsApp.

Finally, we should remember that back in 2011 Google acquired two other interesting startups: PostRank and Social Grapple. It is quite certain that Google has already used their technology, especially for Google Plus Analytics, but I have the feeling that it (or its evolution) will be used to analyze the quality of the connections we have in our own “real life social graph,” hence helping Google distinguish who our real influencers are, and therefore to personalize our searches in any facet (predictive or not).

Image credit: Niemanlab.org

Another aspect that we will probably see introduced once and for all is sentiment analysis as a pre-rendering phase of the SERPs (something that Google could easily do with the science behind its Prediction API). Sentiment analysis is needed not just because it could help distinguish between documents that are appreciated by users and those that are not. If we agree that semantic search is key to Hummingbird; if we agree that semantics is not just about the triptych of subject, verb, and object; and if we agree that natural language understanding is becoming essential for Google because of Voice Search, then sentiment analysis is needed in order to understand rhetorical figures (i.e., the use of metaphors and allegories) and emotional inflections of the voice (ironic and sarcastic tones, for instance).

Maybe it is also for these reasons that Google is so interested in buying companies like Boston Dynamics? No, I am not thinking of Skynet; I am thinking of HAL 9000, which could be the ideal objective of Google in the years to come, even more so than the often-cited “Star Trek Computer.”

What about us?

Honestly, I don’t think that our daily lives as SEOs and inbound marketers will radically change in 2014 from what they are now.

Websites will still need to be crawled, parsed, and indexed; hence technical SEO will still maintain a huge role in the future.

Maybe, from a technical point of view, those who still have not embraced structured data will need to do so, even though structured data by itself is not enough to say that we are doing semantic SEO.

Updates like Panda and Penguin will still be rolled out, with Penguin possibly introduced as a layer in the Link Graph in order to automate it, as it happens now with Panda.

And Matt Cutts will still announce to us that some link network has been “retired.”

What I can predict with some sort of clarity—and for the simple reason that people, not search engines, are definitely our targets—is that real audience analysis and cohort analysis, not just keyword and competitor research, will become even more important as SEO tasks.

But if we already were putting people at the center of our jobs—if we already were considering SEO as Search Experience Optimization—then we won’t change the way we work that much.

We will still create digital assets that will help our sites be considered useful by the users, and we will organize our jobs in synergy with social, content, and email marketing in order to earn the status of thought leaders in our niche, and in doing so will enter into the “real life social graph” of our audience members, hence being visible in their private SERPs.

The future I have painted tells us that this is the route to follow. The only thing it urges us to do better is to integrate our internet marketing strategy with our “offline” marketing strategy, because that distinction no longer makes sense to users, nor does it make sense to our clients. Because marketing, not just analytics, is universal.



Easing the Pain of Keyword Not Provided: 5 Tactics for Reclaiming Your Data

Posted by timresnik

October 18th, 2011, the day Google announced “Secure Search,” was a dark day for many search marketers. We had hope, though; we were told only a small fraction of search referrals from Google would be affected. That was proven false in just a few weeks as (not provided) quickly hit 10+% for many sites. Then, a year later, seemingly out of the blue, Google started to encrypt almost all searches. Today, we are approaching the dreaded extinction of Google organic keyword data:

Oh keywords, how I will miss thee.

Knowing the keywords that send us traffic from Google Search has always been a major pillar on which search marketers execute and measure the effectiveness of an SEO strategy. With Google “Secure Search” and keywords being stripped from the referral string, it’s starting to look more like a crutch—or worse, a crutch that will very soon no longer exist at all. Here are five ideas and two bonus resources to help nurse keyword targeting and search ROI back to health. Will they solve all your problems? No. Will they inform a direction for future “provided” solutions? Maybe. Are they better than nothing? Most definitely.

1. Use custom variables to tag content with categories/topics

Most web analytics software allows site owners to pass custom variables through. In Google Analytics, a custom variable can be inserted into your code, and as the name implies, you can pass custom name/value pairs of your choice. It’s one of the most useful analytics tools for web traffic segmentation, with many different applications. Mix this functionality with the categories, topics, or tags from a page on your site and you can now analyze your organic web traffic based on those variables. If you are disciplined and creative in understanding and tagging your content, you will get insight into which topics are sending you traffic.

If you have some programming chops and can extract these variables from your CMS yourself and append them to your tracking code, more power to you! If not, and you are a WordPress user, I have some good news: There is a free plugin from our friends at Yoast. Install it and then simply select the following:

Once it is in GA, there are several ways to get at the data. One is to simply go to Acquisition > Channels > Organic Search, then select the primary dimension of “landing page” and the secondary dimension of your custom variable. You now have a list of your landing pages that received organic traffic and the categories/tags related to each. Valuable stuff.
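
If you export that report to CSV, the same rollup is easy to script. Here is a minimal sketch using pandas; the file and column names are assumptions, so match them to your actual export:

    import pandas as pd

    # Assumed CSV export of Acquisition > Channels > Organic Search with
    # "Landing Page" as the primary dimension and your category custom
    # variable as the secondary dimension; rename the columns to match yours.
    report = pd.read_csv("organic_landing_pages.csv")

    # Sum organic sessions per category/tag to see which topics send traffic.
    by_topic = (
        report.groupby("Category")["Sessions"]
        .sum()
        .sort_values(ascending=False)
    )
    print(by_topic.head(20))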

If you want some ideas of what tags you should be using, there are several auto-tag generator plugins for WordPress, Zemanta being one.

Requirements:

  • Programming chops or WordPress and Google Analytics

  • Being disciplined about entering tags and categories

Watch out:

  • It’s human-powered, for better or for worse, and your data is only as good as the humanoid at the controls of your CMS

  • Doesn’t help for long-tail targeting and reporting

2. Combining rank data with landing pages from Google Analytics

We can recapture some Google keywords by joining our rankings and analytics data. Download your rankings data from your favorite rankings tool; the more data you have, the better. In Google Analytics, go to Channels > Organic Search > Source = Google and add the secondary dimension of “Landing Page.” View the maximum number of rows and download the data into a CSV. Put your data in two separate tabs in a spreadsheet. Now, all you need to do is look up each landing page from the analytics tab against the ranking URLs in the rankings tab. This can be done using VLOOKUP. While you’re at it, add the ranking data to the analytics tab. The end result will look like this:

Requirements:

  • Rankings data

  • Google Analytics data

  • Basic Excel or Google Spreadsheet skills

Watch out:

  • Using the method above with VLOOKUP will only return one keyword per landing page. With some crafty Excel work, you can figure out how to get all the keywords for that page (or see the scripted alternative sketched below)
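
If you would rather script the join than fight with VLOOKUP, here is a minimal pandas sketch; the file and column names are assumptions, and you will likely need to normalize the ranking URLs so they match GA’s path-only landing pages. Unlike VLOOKUP, a merge keeps every keyword that ranks for a page:

    import pandas as pd

    # Assumed exports: rankings.csv (Keyword, Rank, URL) from your rank
    # tracker, and ga_landing_pages.csv (Landing Page, Sessions) from GA.
    rankings = pd.read_csv("rankings.csv")
    analytics = pd.read_csv("ga_landing_pages.csv")

    # GA reports path-only landing pages, so strip the domain from the
    # ranking URLs before joining (adjust the domain to your own site).
    rankings["Landing Page"] = rankings["URL"].str.replace(
        "http://www.example.com", "", regex=False
    )

    # A left merge keeps every ranking keyword for each landing page,
    # not just the first match the way VLOOKUP does.
    joined = analytics.merge(rankings, on="Landing Page", how="left")
    joined.to_csv("keywords_reclaimed.csv", index=False)
    print(joined.head())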

3. Site search: what users are searching for on your site

If you get enough people using the search feature of your site, it can be a gold mine for keyword data. After all, this keyword data will always be “provided.” Configuring Google Analytics to capture your internal search traffic is pretty straightforward. Once you have done so, you will be able to see the top keywords people are searching for on your site.

Step 1: Open the Google Analytics profile you want to set up Site Search for

Step 2: Navigate to Admin > Settings and scroll to the bottom for “Site Search Settings.” Enter the parameter that designates a search query on your site; for example, if your results page is /search_results.php?q=keyword, the parameter is q. If you use a POST-based method and do not pass a parameter through the URL, you can either configure your application to append one, or you can trigger a virtual pageview in your Google Analytics snippet, such as:

analytics.js: ga('send', 'pageview', '/search_results.php?q=keyword')

The category option allows you to look for an additional query parameter that can later be used to group the site search data. For example, if you had search on your site in different sections that you wanted to keep separate: help, content, documentation, etc.

Step 3: Let GA collect some data for a day or so and check out your results. Navigate to Behavior > Site Search > Search Terms to see a complete list of the terms users search for on your site. To dig deeper, add the secondary dimension of “destination page” to see where users landed after seeing the search results. Then, be sure to check out the secondary dimension of “search refinement” to see which keywords your users searched for after they searched for the original content. This can clue you into the gap between what people are looking for and what they’re not finding on your site.

Requirements:

  • A search box on your site

  • Google Analytics

Watch out:

  • It’s a limited data set (on Moz, only about half a percent to one percent of visits end up using our search)

4. Google (and Bing) Webmaster Tools

Google has created the headache with “Not Provided,” but they have also given us a bit of medicine in the form of Webmaster Tools. Released a few years back within Webmaster Tools, “Search Queries” provides webmasters with some basic information around their keywords, including average position, impressions, number of clicks, and click-through rate (CTR).

This data should be used, but has a few major limitations. First, only a small, Google-selected subset of the keywords is represented. There is no transparency about how or why they select the keywords, so using it to measure results of specific content optimization efforts can be inaccurate and even misleading.

Second, the data is limited to 90 days. If you ranked for a query 91 days ago, you’ll never know. Webmaster Tools also has an API, but unfortunately the “search queries” data isn’t available through it yet. According to Mr. Cutts, that is imminent. If you want to store your data for longer than 90 days and know how to program, you can use this PHP library or this Python library.
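
Until the API catches up, a roll-your-own archive is straightforward: download the Search Queries CSV on a schedule and append it to a running file so your history outlives the 90-day window. A minimal sketch (the export filename and columns are assumptions; match them to your download):

    import csv
    import datetime
    import os

    ARCHIVE = "gwt_search_queries_archive.csv"

    def archive_export(export_path):
        """Append a downloaded Webmaster Tools 'Search Queries' CSV to a
        cumulative archive, stamping each row with today's date so the
        data survives past Google's 90-day window."""
        today = datetime.date.today().isoformat()
        write_header = not os.path.exists(ARCHIVE)

        with open(export_path, newline="", encoding="utf-8") as src, \
             open(ARCHIVE, "a", newline="", encoding="utf-8") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=["date"] + reader.fieldnames)
            if write_header:
                writer.writeheader()
            for row in reader:
                row["date"] = today
                writer.writerow(row)

    # Run this (e.g. via cron) each time you download a fresh export.
    archive_export("TOP_QUERIES.csv")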

Finally, there is a limitation in how you can use Webmaster Tools data in Google Analytics. The good news is that you can integrate this data into Google Analytics with some basic authentication between the services. The bad news is that you can only segment the data in Google Analytics with 2 dimensions: country and Google property. Joining this data with behavior, demographics, goals, etc. would be extremely valuable.

Requirement:

  • Google Webmaster Tools account

Watch out:

  • (Limitations noted above)

5. Deeper topical analysis

Avinash Kaushik, one of my favorite speakers at MozCon this year, wrote about understanding the “personality” of the page as a future solution for “not provided”. He says:

“I wonder if someone can create a tool that will crawl our site and tell us what the personality of each page represents. Some of this is manifested today as keyword density analysis (which is value-deficient, especially because search engines got over “density” nine hundred years ago). By personality, I mean what does the page stand for, what is the adjacent cluster of meaning that is around the page’s purpose? Based on the words used, what attitude does the page reflect, and based on how others are talking about this page, what other meaning is being implied on a page?”

I think this could be accomplished by performing topical analysis on the body content of pages as they are published, and then passing the results through to Google Analytics with custom variables, similar to what I described above with categories. This could be done by using DBpedia and one of the open-source annotation applications that use it, such as DBpedia Spotlight. Spotlight detects mentions of terms in your content and scores the relevance of those mentions against structured data created from Wikipedia. Once the topics of the page are “extracted” and passed to your web analytics platform, you’ll be able to use them as a dimension against organic search referrals to landing pages. (Thanks to Jay Leary for walking me through Spotlight.)
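
For the adventurous, here is a rough sketch of calling the DBpedia Spotlight annotation web service from Python. The endpoint, parameters, and JSON field names below reflect the public service as I understand it and may differ for your Spotlight version, so treat them as assumptions to verify:

    import requests

    def extract_topics(text, confidence=0.4):
        """Ask DBpedia Spotlight to annotate a block of page copy and
        return the surface forms and DBpedia resources it detects
        (assumed public endpoint and JSON field names)."""
        response = requests.get(
            "http://spotlight.dbpedia.org/rest/annotate",
            params={"text": text, "confidence": confidence},
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        resources = response.json().get("Resources", [])
        return [(r.get("@surfaceForm"), r.get("@URI")) for r in resources]

    # The extracted topics could then be passed to your analytics platform
    # as custom variables, just like the manual categories in tactic #1.
    for surface_form, uri in extract_topics(
        "Moz builds software for SEO, content marketing, and link building."
    ):
        print(surface_form, uri)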

Bonus: some other “not provided” resources

Mike King is not too worried about “Not Provided.” His deck argues we should be focusing on segmenting our data by personas and affinity groups, and paying more attention to “implicit” rather than “explicit” intent. Good stuff.

Ten industry experts, including two Mozzers, weigh in here and answer a series of questions on the “Not Provided” landscape, including tools and techniques that they use, and even a few “Top Tips for 2014.”

Conclusion

Keyword data from Google organic search is owned and controlled by Google and can never be replaced. Secure Search is here to stay and nearing 100%. There is no cure-all solution. That being said, search marketers are a GSD and generous group, and will continue to hack away at the problem and share solutions. What are some of the data sources and hacks you are using to deal with “not provided?” Are there future algorithmic solutions to this problem, or are we doomed to have to take our Google medicine and be happy with what they decide to provide in Webmaster Tools?

