30+ Important Takeaways from Google’s Search Quality Rater’s Guidelines

Posted by jenstar

For many SEOs, a glimpse at Google’s Search Quality Rater’s Guidelines is akin to looking into Google’s ranking algorithm. While they don’t give away the secret sauce for ranking number one on Google, they do offer some incredible insight into what Google views as quality – and not-so-quality – and the types of pages they want to serve at the top of their search results.

Last week, Google made the unprecedented move of releasing the entire Search Quality Rater’s Guidelines, following an analysis of a leaked copy obtained by The SEM Post. While Google released a condensed version of the guidelines in 2013, until last week it had never released in their entirety the full guidelines that search quality raters receive.

First, it’s worth noting that quality raters themselves have no bearing on the rankings of the sites they rate. So quality raters could assign a low score to a website, but that low rating would not be reflected at all in the actual live Google search results.

Instead, Google uses the quality raters for experiments, assessing the quality of the search results when they run these experiments. The guidelines themselves are what Google feels searchers are looking for and want to find when they do a Google search. The type of sites that rate highest are the sites and pages they want to rank well. So while it isn’t directly search algorithm-related, it shows what they want their algos to rank the best.

The document itself weighs in at 160 pages, with hundreds of examples of search results and pages with detailed explanations of why each specific example is either good, bad, or somewhere in between. Here’s what’s most important for SEOs and webmasters to know in these newly-released guidelines.

Your Money or Your Life Pages (aka YMYL)

SEOs were first introduced to the concept of Your Money or Your Life pages last year in a leaked copy of the guidelines. These are the types of pages that Google holds to the highest standards because they’re the types of pages that can greatly impact a person’s life.

While anyone can make a webpage about a medical condition or offer advice about things such as retirement planning or child support, Google wants to ensure that these types of pages that impact a searcher’s money or life are as high-quality as possible.

In other words, if low-quality pages in these areas could “potentially negatively impact users’ happiness, health, or wealth,” Google does not want those pages to rank well.

If you have any web pages or websites that deal in these market areas, Google will hold your site to a higher standard than it would a hockey team fan page or a page on rice cooker recipes.

It is also worth noting that Google does consider any website that has a shopping component, such as an online store, as a type of site that also falls under YMYL for ratings. Therefore, ensuring the sales process is secure would be another thing raters would consider.

If a rater wouldn’t feel comfortable ordering from the site or submitting personal information to it, then it wouldn’t rate well. And if a rater feels this way, it’s very likely visitors would feel the same too — meaning you should take steps to fix it.

Market areas for YMYL

Google details five areas that fall into this YMYL category. If your website falls within one of these areas, or you have web pages within a site that do, you’ll want to take extra care that you’re supporting this content with things like references, expert opinions, and helpful supplementary or additional content.

  • Shopping or financial transaction pages
    This doesn’t apply merely to sites where you might pay bills online, do online banking, or transfer money. Any online store that accepts orders and payment information will fall under this as well.
  • Financial information pages
    There are a ton of low-quality websites that fall under this umbrella of financial information pages. Google considers these types of pages to be in the areas of “investments, taxes, retirement planning, home purchase, paying for college, buying insurance, etc.”
  • Medical information pages
    Google considers this category to go well beyond standard medical conditions and pharmaceuticals; it also covers things such as nutrition and very niche health sites for sufferers of specific diseases or conditions — the types of sites that are often set up by those suffering from the medical condition themselves.
  • Legal pages
    We’ve seen a ton of legal-related sites pop up by webmasters who are looking to cash in on AdSense or affiliate revenue. But Google considers all types of legal information pages as falling under YMYL, including things such as immigration, child custody, divorce, and even creating a will.
  • All-encompassing “Other”
    Then, of course, there are a ton of other types of pages and sites that can fall under YMYL that aren’t necessarily in any of the above categories. These are still things where having the wrong information can negatively impact the searcher’s happiness, health, or wealth. For example, Google considers topics such as child adoption and car safety information as falling under this as well.

Google makes frequent reference to YMYL pages within the quality guidelines and repeatedly stresses the importance of holding these types of sites to a higher bar than others.

Expertise / Authoritativeness / Trustworthiness, aka E-A-T

Expertise / Authoritativeness / Trustworthiness — shortened to E-A-T — refers to what many think of as a website’s overall value. Is the site lacking in expertise? Does it lack authoritativeness? Does it lack trustworthiness? These are all things that raters are asked to consider when it comes to the overall quality of the website or web page, particularly for pages that fall into the YMYL category.

This is also a good rule of thumb for SEO in general. You want to make sure that your website has a great amount of expertise, whether it’s coming from you or contributors. You also want to show people why you have that expertise. Is it the experience, relevant education, or other qualities that give the writer of each page that stamp of expertise? Be sure to show and include it.

Authoritativeness is similar, but from the website perspective. Google wants websites that have high authority on the topic. This can come from the expertise of the writers, or even the overall quality of the community if it’s something like a forum.

When it comes to trustworthiness, again Google wants raters to decide: Is this a site you’d feel you can trust? Or is it somewhat sketchy, making it hard to believe what the website is trying to tell you?

Why you need E-A-T

This goes well beyond just the quality raters and how they view E-A-T. It’s something you should consider for your site even if these quality raters didn’t exist.

Every website should make a point of either showing how their site has a high E-A-T value or figure out what it is they can do to increase it. Does it mean bringing contributors on board? Or do you merely need to update things like author bios and “About Me” pages? What can you do to show that you have the E-A-T that not only quality raters are looking for, but also just the general visitors to your site?

If you run a forum, can your posters show their credentials on publicly-visible profile pages, with additional profile fields for anything specific to the market area? This can really help to show expertise, and your forum contributors will appreciate being showcased as experts, too.

This comes back to the whole concept of quality content. When a searcher lands on your page and they can easily tell that it’s created by someone (or a company) with high E-A-T, this not only tells that searcher that this is great authoritative content, but they’re also that much more likely to recommend or share it with others. It gives them the confidence that they’re sharing trustworthy and accurate information in their social circles.

Fortunately for webmasters, Google does discuss how someone can be an authority with less formal expertise; they’re not looking for degrees or other formal education for someone to be considered an expert. Things like great, detailed reviews, experiences shared on forums or blogs, and even life experience are all things that Google takes into account when considering whether someone’s an authority.

Supplementary content

Supplementary content is where many webmasters are still struggling. Sometimes it’s not easy to add supplementary content, like sidebar tips, into something like your standard WordPress blog for those who are not tech-savvy.

However, supplementary content doesn’t have to require technical know-how. It can comprise things such as similar articles. There are plenty of plug-ins that allow users to add suggested content and can be used to provide helpful supplementary content. Just remember: the key word here is helpful. Things like those suggested-article ad networks, particularly when they lead to Zergnet-style landing pages, are not usually considered helpful.

Think about the additional supporting content that can be added to each page. Images, related articles, sidebar content, or anything else that could be seen as helpful to the visitor of the page is all considered supplementary content.

If you’re questioning whether something on the page counts as supplementary content or not, look at the page — anything that isn’t either the main article or advertising can be considered supplementary content. Yes, this includes strong navigation, too.

Page design

By now you’d think this is a no-brainer, but there are still some atrocious page designs out there with horrible user experiences. But this goes much further than how easy the website is to use.

Google wants raters to consider the focus of the pages. Ideally, the main content of the page, such as the main article, should be “front and center” and the highlight of the page. Don’t make your user scroll down to see the article. Don’t have a ton of ads above the fold that push the content lower. And don’t try to disguise your ad content. These are all things that will affect the rating.

They do include a caveat: Ugly does not equal bad. There are some ugly websites out there that are still user-friendly and meet visitors’ needs; Google even includes some of them as examples of pages with positive ratings.

More on advertising & E-A-T

Google isn’t just looking for ads that are placed above the fold and in a position where one would expect the article to begin. They examine some other aspects as well that can impact the user experience.

Are you somehow trying to blend your advertising too much with the content of the page? This can be an issue. Google says that ads can be present for any visitors who may want to interact with them, but they should also be easy to ignore for those who aren’t interested.

They also want there to be a clear separation between advertising and the content. This doesn’t mean you must slap a big “ads” label on them, or anything along those lines. But there should be a distinction to differentiate the ads from the main content. Most websites do this, but many try to blur the lines between ads and content to incite accidental clicks by those who don’t realize it’s actually an ad.

All about the website

There are still a ton of websites out there that lack basic information about the site itself. Do you have an “About” page? Do you have a “Contact Us” page so that visitors can contact you? If you are selling a service or a product, do you have a customer service page?

If your site falls into the YMYL category, Google considers this information imperative. But if your site isn’t a YMYL site, Google suggests that a simple email address or something like a contact form is fine.

Always make sure there’s a way for a visitor to find a little bit more about you or your site, if they’re so inclined. But be sure to go above and beyond this if it’s a YMYL site.

Reputation

For websites to get the highest possible rating, Google is looking at reputation as well. They ask the raters to consider the reputation of the site or author, and also ask them to do reputation research.

They direct the raters to look at Wikipedia and “other informational sources” as places to start doing reputation research when it comes to more formal topics. So if you’re giving medical advice or financial advice, for example, make sure that you have your online reputation listed in places that would be easy to find. If you don’t have a Wikipedia page, consider professional membership sites or similar sites to showcase your background and professional reputation.

Google also considers that there are some topics where this kind of professional reputation isn’t available. In these cases, they say that the rater can look at things such as “popularity, user engagement, and user reviews” to discover reputation within the community or market area. This can often be represented simply by a site that is highly popular, with plenty of comments or online references.

What makes a page low-quality?

On the other end of the spectrum, we have pages that Google considers low-quality. And as you can imagine, a lot of what makes a page low-quality should be obvious to many in the SEO industry. But as we know, webmasters aren’t necessarily thinking from the perspective of a user when gauging the quality of their sites, or they’re looking to take advantage of shortcuts.

5 clues

Google does give us insight into exactly what they consider low-quality, in the form of five things raters should look for. Any one of these will usually result in the lowest ratings.

  1. The quality of the main content is low.
    This shouldn’t be too surprising. Whether it’s spun content or just poorly-written content, low-quality content means a low rating. Useless content is useless.
  2. There is an unsatisfying amount of main content for the purpose of the page.
    This doesn’t mean that short content cannot be considered great-quality content. But if your three-sentence article needs a few more paragraphs to fully explain what the title of that article implies or promises, then you need to rethink that content and perhaps expand it. Thin content is not your SEO friend.
  3. The author of the page or website doesn’t have enough expertise for the topic of the page, and/or the website is not trustworthy or authoritative enough for the topic. In other words, the page/website is lacking E-A-T.
    Again, Google wants to know that the person has authority on the subject. If the site isn’t displaying the characteristics of E-A-T, it can be considered low-quality.
  4. The website has a negative reputation.
    This is where reputation research comes back into play. Ensure you have a great online reputation for your website (or your personal name, if you’re writing under your own name). That said, don’t be overly concerned about it if you have a couple of negative reviews; almost every business does. But if you have overwhelmingly negative reviews, it will be an issue when it comes to how the quality raters see and rate your site.
  5. The supplementary content is distracting or unhelpful for the purpose of the page.
    Again, don’t hit your visitors over the head with all ads, especially if they’re things like autoplay video ads or super flashy animated ads. Google wants the raters to be able to ignore ads on the page if they don’t need them. And again, don’t disguise your ads as content.

Sneaky redirects

If you include links to affiliate programs on your site, be aware that Google does consider these to be “sneaky redirects” in the Quality Rater’s Guidelines. While there isn’t necessarily anything bad about one affiliate link on the page, bombarding visitors with those affiliate links can impact the perceived quality of the page.

The raters are also looking for other types of redirects. These include the ones we usually see used as doorway pages, where you’re redirected through multiple URLs before you end up at the final landing page — a page which usually has absolutely nothing to do with the original link you clicked.

Spammy main content

There’s a wide variety of things that Google is asking the raters to look at when it comes to quality of the main content of the page. Some are flags for what Google considers to be the lowest quality — things that are typically associated with spam. A lot of things are unsurprising, such as auto-generated main content and gibberish. But Google wants their raters to consider other things that signal low quality, in their eyes.

Keyword stuffing

While we generally associate keyword stuffing with content so heavy with keywords that it comes across as almost unreadable, Google also considers it keyword stuffing when the overuse of those keywords seems only a little bit annoying. So for those of you that think you’re being very clever about inserting a few extra keywords in your content, definitely consider it from an outsider’s point of view.

Copied content

This shouldn’t come as a surprise, but many people feel that unless someone is doing a direct comparison, they can get away with stealing or “borrowing” content. Whether you’re copying or scraping the content, Google asks the raters to look specifically at whether the content adds value or not. They also instruct them on how to find stolen content using Google searches and the Wayback Machine.

Abandoned

We still come across sites where the forum is filled with spam, where there’s no moderation on blog comments (so they’re brimming with auto-approved pharmaceutical spam), or where they’ve been hacked. Even if the content seems great, this still signals an untrustworthy site. If the site owner doesn’t care enough to prevent it, why should a visitor care enough to consider it worthy?

Scam sites

Whether a site is trying to solicit extensive personal information, is a known scam, or is a phishing page, these are all signs of a lowest-quality page. Also included are pages with suspicious download links. If you’re offering a download, make sure it comes across as legitimate, or use a verified third-party service for offering downloads.

Mobile-friendly

If you haven’t taken one of the many hints from Google to make your site mobile friendly, know that this will hurt the perceived quality of your site. In fact, Google tells their raters to rate any page that is not mobile-friendly (a page that becomes unusable on a mobile device) at the lowest rating.

In this latest version of the quality guidelines, all ratings are now being done on a mobile device. Google has been telling us over and over for the last couple of years that mobile is where it’s at, and many countries have more mobile traffic than desktop. So, if you still haven’t made your site mobile-friendly, this should tell you emphatically that it needs to be a priority.

If you have an app, raters are also looking at things like app installs and in-app content in the search results.

Know & Know Simple Queries

Google added a new concept to their quality guidelines this year. It comes down to what they consider “Know Queries” and “Know Simple Queries.” Why is this important? Because Know Simple Queries are the driving force behind featured snippets, something many webmasters are coveting right now.

Know Simple

Know Simple Queries are the types of searches that could be answered in either one to two sentences or in a short list. These are the types of answers that can be featured quite easily in a featured snippet and contain most of the necessary information.

These are also queries where there’s usually a single accepted answer that most people would agree on. These are not controversial questions or types of questions where there are two very different opinions on the answer. These include things such as how tall or how old a particular person is – questions with a clear answer.

These also include implied queries. These are the types of searches where, even though it’s not in the form of a question, there’s clearly a question being asked. For example, someone searching for “Daniel Radcliffe’s height” is really asking “How tall is Daniel Radcliffe?”
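As a toy illustration of this idea — not Google’s actual method — an implied query can be mapped to its explicit question form with a simple rewrite table. The patterns and attribute names below are invented for the example:

```python
import re

# Hypothetical mapping from a possessive attribute to its question form.
# A real system would infer this from language models, not a lookup table.
REWRITES = {
    "height": "How tall is {}?",
    "age": "How old is {}?",
}

def rewrite(query):
    """Turn an implied query like "X's height" into an explicit question."""
    m = re.match(r"(.+)'s (\w+)$", query)
    if m and m.group(2) in REWRITES:
        return REWRITES[m.group(2)].format(m.group(1))
    return query  # not an implied query we recognize; leave it unchanged

print(rewrite("Daniel Radcliffe's height"))  # How tall is Daniel Radcliffe?
print(rewrite("Barack Obama"))               # Barack Obama (unchanged)
```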

If you’re looking for featured snippets, these are the types of questions you want to answer with your webpages and content. And while the first paragraph may only be 1–2 sentences long as a quick answer, you can definitely expand on it in subsequent paragraphs, particularly for those who are concerned about the length of content on the page.

Know Queries

The Know Queries are all the rest of the queries that would be too complex or have too many possible answers. For example, searches related to stock recommendations or a politician wouldn’t have a featured snippet because it’s not clear exactly what the searchers are looking for. “Barack Obama” would be a Know Query, while “Barack Obama’s age” would be a Know Simple Query.

Many controversial topics are considered to be Know Queries, because there are two or more very different opinions on the topic that usually can’t be answered in those 1–2 sentences.

The number of keywords in the search doesn’t necessarily determine whether it is a Know Query or a Know Simple Query. Many long-tail searches would still be considered Know Queries.

Needs Met

Needs Met is another new section in the Quality Rater’s Guidelines. It looks at how well the search result meets the searcher’s query. This is where sites that are trying to rank for content they don’t have supporting content for will have a hard time, since those landing pages won’t meet what searchers are actually looking for.

Ratings for this range from “Fully Meets” to “Fails to Meet.”

The most important thing to know is that any site that is not mobile-friendly will get “Fails to Meet.” Again, if your site is not mobile-friendly, you need to make this an immediate priority.

Getting “Highly Meets”

Essentially, your page needs to be able to answer whatever the search query is. This means that the searcher can find all the information they were looking for from their search query on your page, without having to visit other pages or websites for the answer. This is why it’s so crucial to make sure that your titles and keywords match your content, and that your content is high-quality enough to fully answer whatever searchers are looking for when your page surfaces in the SERPs.

Local Packs & “Fully Meets”

If your site is coming up in a local 3-pack, as long as those results in the 3-pack match what the query was, they can be awarded “Fully Meets.” The same applies when it’s a local business knowledge panel — again, provided that it matches whatever the search query is. This is where local businesses that spam Google My Business will run into problems.

Product pages

If you have a quality product page and it matches the search query, this page can earn “Highly Meets.” It can be for both more general queries — the type that might lead to a page on the business website that lists all the products for that product type (such as a listing page for backpacks) — or for a specific product (such as a specific backpack).

Featured snippets

Raters also look at featured snippets, gauging how well those snippets answer the question. We’ve all seen instances where a featured snippet seems quite odd compared to what the search query is, so Google seems to be testing how well their algorithm is choosing those snippets.

“Slightly Meets” and “Fails to Meet”

Google wants the raters to look at things like whether the content is outdated, or is far too broad or too specific relative to what the page is primarily about. Also included is content that’s created without any expertise or has other signals that make it low-quality and untrustworthy.

Dated & updated content

There’s been a trend lately where webmasters change the dates on some of their content to make it appear more recent than it really is, even if they don’t change anything on the page. In contrast, others add updated dates to their content when they do a refresh or check, even when the publish date remains the same. Google now takes this into account and asks raters to check the Wayback Machine if there are any questions about the content date.

Heavy monetization

Often, YMYL sites run with heavy monetization. This is one of the things that Google asks the raters to look for, particularly if it’s distracting from the main content. If your page is YMYL, then you’ll want to balance the monetization with usability.

Overall

First and foremost, the biggest takeaway from the guidelines is to make your site mobile-friendly (if it’s not already). Without being mobile-friendly, you’re already missing out on the mobile-friendly ranking boost, which means your site will get pushed down further in the results when someone searches on a mobile device. Clearly, Google is also looking at mobile-friendliness as a sign of quality. You might have fabulous, high-quality content, but Google sees those non-mobile-friendly pages as low-quality.

Having confirmation about how Google looks at queries when it comes to featured snippets means that SEOs can take more advantage of getting those featured snippets. Gary Illyes from Google has said that you need to make sure that you’re answering the question if you want featured snippets. This is clearly what’s at the heart of Know Simple Queries. Make sure that you’re answering the question for any search query you hope to get a featured snippet on.

Take a look at your supplementary content on the page and how it supports your main content. Adding related articles and linking to articles found on your own site is a simple way to provide additional value for the visitor — not to mention the fact that it will often keep them on your site longer. Think usefulness for your visitors.

And while looking at that supplementary content, make sure you’re not going overboard with advertising, especially on sites that are YMYL. It can sometimes be hard to find that balance between monetization and user experience, but this is where looking closely at your monetization efforts and figuring out what’s actually making money can really pay off. It’s not uncommon to find that some ad units generate pennies a month and are really not worth cluttering up the page for fifty cents of monthly revenue.

Make sure you provide sufficient information to a visitor, or a quality rater, that can answer simple questions about your site. Is the author reputable? Does the site have authority? Should people consider the site trustworthy? And don’t forget to include things like a simple contact form. Your site should reflect E-A-T: Expertise, Authoritativeness and Trustworthiness.

Bottom line: Make sure you present the highest-quality content from highly reputable sources. The higher the perceived value of your site, the higher the quality ratings will be. While this doesn’t translate directly into higher rankings, doing well with regards to these guidelines can translate into the type of content Google wants to serve higher in the search results.



RankBrain Unleashed

Posted by gfiorelli1

Disclaimer: Much of what you’re about to read is based on personal opinion. A thorough reflection about RankBrain, to be sure, but still personal — it doesn’t claim to be correct, and certainly not “definitive,” but has the aim to make you ponder the evolution of Google.

Introduction

Whenever Google announces something as important as a new algorithm, I always try to hold off on writing about it immediately, to let the dust settle, digest the news and the posts that talk about it, investigate, and then, finally, draw conclusions.

I did so in the case of Hummingbird. I do it now for RankBrain.

In the case of RankBrain, this is even more correct, because — let’s be honest — we know next to nothing about how RankBrain works. The only things that Google has said publicly are in the video Bloomberg published and the few things unnamed Googlers told Danny Sullivan for his article, FAQ: All About The New Google RankBrain Algorithm.

Dissecting the sources

As I said before, the only direct source we have is the video interview published on Bloomberg.

So, let’s dissect what Jack Clark, the Bloomberg reporter, said in that video and what Greg Corrado — senior research scientist at Google and one of the founding members and co-technical lead of Google’s large-scale deep neural networks project — said to Clark.

RankBrain is already worldwide.

I wanted to say this first: If you’re wondering whether or not RankBrain is already affecting the SERPs in your country, now you know — it is.

RankBrain is Artificial Intelligence.

Does this mean that RankBrain is our first evidence of Google as the Star Trek computer? No, it does not.

It’s true that many Googlers — like Peter Norvig, Corinna Cortes, Mehryar Mohri, Yoram Singer, Thomas Dean, Jeff Dean and many others — have been investigating and working on machine/deep learning and AI for a number of years (since 2001, as you can see when scrolling down this page). It’s equally true that much of the Google work on language, speech, translation, and visual processing relies on machine learning and AI. However, we should consider the topic of ANI (Artificial Narrow Intelligence), which Tim Urban of Wait But Why describes as: “Machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.”

Considering how Google is still buggy, we could have some fun and call it HANI (Hopefully Artificial Narrow Intelligence).

All jokes aside, Google clearly intends for its search engine to be an ANI in the (near) future.

RankBrain is a learning system.

With the term “learning system,” Greg Corrado surely means “machine learning system.”

Machine learning is not new to Google. We SEOs discovered how Google uses machine learning when Panda rolled out in 2011.

Panda, in fact, is a machine learning-based algorithm able to learn through iterations what a “quality website” is — or isn’t.

In order to train itself, it needs a dataset and yes/no factors. The result is an algorithm that is eventually able to achieve its objective.

Iterations, then, are meant to provide the machine with a constant learning process, in order to refine and optimize the algorithm.
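A minimal sketch of this kind of iterative training, using a simple perceptron over hypothetical yes/no quality signals — an illustration of the general technique only, not Google’s actual Panda implementation (the signals and dataset are invented):

```python
# Each site is a vector of invented yes/no signals:
# [has_author_bio, has_contact_page, thin_content]
# labeled 1 ("quality") or 0 ("not quality"). Repeated iterations
# nudge the weights until the model separates the two groups.

def train(examples, labels, epochs=50, lr=0.1):
    """Train a perceptron on binary feature vectors."""
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):  # each iteration refines the model
        for x, y in zip(examples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = y - pred  # 0 when the prediction is already correct
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

def predict(model, x):
    weights, bias = model
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

dataset = [
    ([1, 1, 0], 1),  # quality site
    ([1, 0, 0], 1),
    ([0, 0, 1], 0),  # low-quality site
    ([0, 1, 1], 0),
]
model = train([x for x, _ in dataset], [y for _, y in dataset])
print(predict(model, [1, 1, 0]))  # → 1: a site with good signals
```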

Hundreds of people are working on it, and on building computers that can think by themselves.

Uhhhh… (Sorry, I couldn’t resist.)

RankBrain is a machine learning system, but — from what Greg Corrado said in the video — we can infer that in the future, it will probably be a deep learning one.

We do not know when this transition will happen (if ever), but assuming it does, then RankBrain won’t need any input — it will only need a dataset, over which it will apply its learning process in order to generate and then refine its algorithm.

Rand Fishkin visualized in a very simple but correct way what a deep learning process is:

Remember — and I repeat this so there’s no misunderstanding — RankBrain is not (yet) a deep learning system, because it still needs inputs in order to work. So… how does it work?

It interprets languages and interprets queries.

Paraphrasing the Bloomberg interview, Greg Corrado gave this information about how RankBrain works:
It works when people make ambiguous searches or use colloquial terms, trying to solve a classic breakdown computers have because they don’t understand those queries or never saw them before.

We can consider RankBrain to be the first 100% post-Hummingbird algorithm developed by Google.

Even if we had some new algorithms rolling out after the Hummingbird release (e.g. Quality Update), those were based on pre-Hummingbird algos and/or were serving a very different phase of search (the Filter/Clustering and Ranking ones, specifically).

Credit: Enrico Altavilla

RankBrain seems to be a needed “patch” to the general Hummingbird update. In fact, we should remember that Hummingbird itself was meant to help Google understand “verbose queries.”

However, as Danny Sullivan wrote in the above-mentioned FAQ article on Search Engine Land, RankBrain is not a sort of Hummingbird v.2, but rather a new algorithm that “optimizes” the work of Hummingbird.

If you look at the image above while reading Greg Corrado’s words, you can say with a high degree of confidence that RankBrain acts between the “Understanding” and the “Retrieving” phases of the overall search process.

Evidently, the too-ambiguous queries and the ones based on colloquialisms were too hard for Hummingbird to understand — so much so, in fact, that Google needed to create RankBrain.

RankBrain, like Hummingbird, generalizes and rewrites those kinds of queries, trying to match the intent behind them.

In order to understand a never-before-seen or unclear query, RankBrain uses vectors, which are — to quote the Bloomberg article — “vast amounts of written language embedded into mathematical entities,” and it tries to see if those vectors may have a meaning in relation to the query it’s trying to answer.
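To make the vector idea concrete, here's a toy sketch: if queries and documents are embedded as vectors, "meaning" becomes geometric closeness. The three-dimensional vectors below are hand-made for illustration; real embeddings are learned from huge corpora and have hundreds of dimensions:

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means "pointing the same way" in meaning-space.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend dimensions: [food-ness, place-ness, tech-ness]
vectors = {
    "best pizza nearby":        [0.9, 0.6, 0.0],  # a never-seen query
    "top pizzerias in my area": [0.8, 0.7, 0.1],  # a known document
    "history of computing":     [0.0, 0.1, 0.9],  # an unrelated document
}

query = vectors["best pizza nearby"]
docs = ["top pizzerias in my area", "history of computing"]
best = max(docs, key=lambda d: cosine(query, vectors[d]))
print(best)  # the semantically closest document wins
```

Even though the query and the winning document share no words, their vectors sit close together, which is the whole trick.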

Vectors, though, don’t seem to be a completely new feature in the general Hummingbird algorithm. We have evidence of a very similar thing in 2013 via Matt Cutts himself, as you can see from the Twitter conversation below:

At that time, Google was still a ways from being perfect.

Upon discovering web documents that may answer the query, RankBrain retrieves them and lets them proceed through the remaining steps of the search process, until those documents are presented in a visible SERP.

It is within this context that we must accept the definition of RankBrain as a “ranking factor,” because for the specific set of queries RankBrain handles, this is substantially true.

In other words, the more RankBrain considers a web document to be a potentially correct answer to an unknown or not understandable query, the higher that document will rank in the corresponding SERP — while still taking into account the other applicable ranking factors.
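To picture what “while still taking into account the other applicable ranking factors” might mean in practice, here's a toy weighted blend. The signal names and weights are pure invention for the sake of the sketch; Google publishes neither:

```python
# Hypothetical: a document's final score as a weighted mix of signals,
# of which RankBrain's match score is only one.
def blended_score(signals, weights):
    return sum(weights[name] * value for name, value in signals.items())

weights = {"rankbrain_match": 0.4, "links": 0.4, "freshness": 0.2}

strong_match = {"rankbrain_match": 0.9, "links": 0.4, "freshness": 0.5}
weak_match   = {"rankbrain_match": 0.2, "links": 0.7, "freshness": 0.9}

print(round(blended_score(strong_match, weights), 2))  # 0.62
print(round(blended_score(weak_match, weights), 2))    # 0.54
```

A stronger RankBrain match lifts the blend, but the other signals can still pull a document up or down.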

Of course, it will be the choice of the searcher that ultimately informs Google as to what the answer to that unclear or unknown query is.

As a final note, necessary in order to head off the claims I saw when Hummingbird rolled out: No, your site did not lose visibility because of a mysterious RankBrain penalty.

Dismantling the RankBrain gears

Kristine Schachinger, a wonderful SEO geek whom I hold in deep esteem, relates RankBrain to Knowledge Graph and Entity Search in this article on Search Engine Land. However — while I’m in agreement that RankBrain is a patch of Hummingbird and that Hummingbird is not yet the “semantic search” Google announced — our opinions do differ on a few points.

I do not consider Hummingbird and Knowledge Graph to be the same thing. They surely share the same mission (moving from strings to things), and Hummingbird uses some of the technology behind Knowledge Graph, but still — they are two separate things.

This is, IMHO, a common misunderstanding among SEOs. So much so, in fact, that I tend not to consider the Featured Snippets (aka the answer boxes) part of the Knowledge Graph itself, as is commonly believed.

Therefore, if Hummingbird is not the same as Knowledge Graph, then we should think of entities not only as named entities (people, concepts like “love,” planets, landmarks, brands), but also as search entities, which are quite different altogether.

Search entities, as described by Bill Slawski, are as follows:

  • A query a searcher submits
  • Documents responsive to the query
  • The search session during which the searcher submits the query
  • The time at which the query is submitted
  • Advertisements presented in response to the query
  • Anchor text in a link in a document
  • The domain associated with a document

The relationships between these search entities can create a “probability score,” which may determine whether a web document is shown in a given SERP or not.
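As a rough illustration of how relationships between search entities could yield a probability score, here's a sketch estimating P(document | query) from logged sessions. The log data is made up, and real search entities involve many more dimensions (time, ads, anchor text, and so on):

```python
from collections import Counter, defaultdict

# Hypothetical session log: (query, document clicked) pairs.
session_log = [
    ("cheap flights nyc", "travelsite.example/flights"),
    ("cheap flights nyc", "travelsite.example/flights"),
    ("cheap flights nyc", "blog.example/ny-trip"),
]

counts = defaultdict(Counter)
for query, doc in session_log:
    counts[query][doc] += 1

def probability_score(query, doc):
    # Fraction of sessions for this query that ended on this document.
    total = sum(counts[query].values())
    return counts[query][doc] / total if total else 0.0

print(probability_score("cheap flights nyc", "travelsite.example/flights"))
```

A score like this could then act as one qualitative signal among many when deciding which documents deserve a SERP slot.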

We cannot rule out that RankBrain utilizes search entities in order to find the most probable and correct answers to a never-before-seen query, then uses the probability score as a qualitative metric in order to offer reasonable, substantive SERPs to the querying user.

The biggest advancement with RankBrain, though, is in how it deals with the quantity of content it analyzes in order to create the vectors. It seems bigger than the classic “link anchor text and surrounding text” that we always considered when discussing, for instance, how the Link Graph works.

There is a patent filed by Google that cites one of the AI experts cited by Greg Corrado — Thomas Strohmann — as an author.

That patent — again very well explained by Bill Slawski, in this post on Gofishdigital.com — describes a process through which Google can discover potential meanings for non-understandable queries.

In the patent, huge importance is attributed to context and “concepts,” and the fact that RankBrain uses vectors (again, “vast amounts of written language embedded into mathematical entities”). This is likely because those vectors are needed to secure a higher probability of understanding context and detecting already-known concepts, thus resulting in a higher probability of positively matching those unknown concepts it’s trying to understand in the query.

Speculating about RankBrain

As the section title says, we now enter the most speculative part of this post.

What I wrote before, though it may also be considered speculation, has the distinct possibility of being true. What I am going to write now may or may not be true, so please, take it with a grain of salt.

DeepMind and Google Search

In 2014, Google acquired DeepMind, a company specializing in learning systems. I cannot help but think that some of its technology, and the evolutions of that technology, are used by Google to improve its search algorithm — hence the machine learning process of RankBrain.

In this article, published last June on technologyreview.com, it’s explained in detail how the lack of a correctly formatted database is the biggest obstacle to a correct machine and deep learning process. Without it, the neural computing behind machine and deep learning cannot work.

In the case of language, then, having “vast amounts of written language” is not enough if there’s no context — for instance, n-grams within the text — that the machine can use to understand it.

However, Karl Moritz Hermann and some of his DeepMind colleagues described in this paper how they were able to discover the kind of annotations they were looking for in classic “news highlights,” which are independent from the main news body.

Allow me to quote the Technology Review article in explaining their experiment:

Hermann and co anonymize the dataset by replacing the actors in sentences with a generic description. An example of some original text from the Daily Mail is this: “The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the “Top Gear” host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon “to an unprovoked physical and verbal attack.”

An anonymized version of this text would be the following:

The ent381 producer allegedly struck by ent212 will not press charges against the “ent153” host, his lawyer said friday. ent212, who hosted one of the most – watched television shows in the world, was dropped by the ent381 wednesday after an internal investigation by the ent180 broadcaster found he had subjected producer ent193 “to an unprovoked physical and verbal attack.”

In this way it is possible to convert the following Cloze-type query to identify X from “Producer X will not press charges against Jeremy Clarkson, his lawyer says” to “Producer X will not press charges against ent212, his lawyer says.”

And the required answer changes from “Oisin Tymon” to “ent212.”

In that way, the anonymized actor can only be identified through some kind of understanding of the grammatical links and causal relationships between the entities in the story.
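The anonymization step itself is simple to sketch: replace each named entity with an opaque placeholder, so the model must lean on grammatical structure rather than world knowledge. A real system would use a named-entity recognizer to find the entities; here the list is given by hand:

```python
def anonymize(text, entities):
    # Swap each entity for a generic token like "ent0", "ent1", ...
    # (the paper's tokens, e.g. "ent212", are just arbitrary IDs).
    mapping = {}
    for i, entity in enumerate(entities):
        placeholder = f"ent{i}"
        mapping[entity] = placeholder
        text = text.replace(entity, placeholder)
    return text, mapping

sentence = "Producer Oisin Tymon will not press charges against Jeremy Clarkson."
anon, mapping = anonymize(sentence, ["Oisin Tymon", "Jeremy Clarkson"])
print(anon)     # "Producer ent0 will not press charges against ent1."
print(mapping)  # {'Oisin Tymon': 'ent0', 'Jeremy Clarkson': 'ent1'}
```

With the names stripped out, answering "Producer X will not press charges against ent1" requires parsing the sentence, which is exactly what the training is meant to force.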

Using the Daily Mail, Hermann was able to provide a large, useful dataset to the DeepMind deep learning machine, and thus train it. After the training, the computer was able to correctly answer up to 60% of the questions asked.

Not a great percentage, you might think. Besides, not all documents on the web come with the kind of highlights the Daily Mail or CNN sites have.

However, let me speculate: What are the search index and the Knowledge Graph if not a giant, annotated database? Would it be possible for Google to train its neural machine learning computing systems using the same technology DeepMind used with the Daily Mail-based database?

And what if Google were experimenting and using the Quantum Computer it shares with NASA and USRA for these kinds of machine learning tasks?

Or… What if Google were using all the computers in all of its data centers as one unique neural computing system?

I know, science fiction, but…

Ray Kurzweil’s vision

Ray Kurzweil is usually known for the “futurist” facets of his credentials. It’s easy for us to forget that he’s been working at Google since 2012, personally hired by Larry Page “to bring natural language understanding to Google.” Natural language understanding is essential both for RankBrain and for Hummingbird to work properly.

In an interview with The Guardian last year, Ray Kurzweil said:

When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.

The DeepMind technology I cited above seems to be going in that direction, even though it’s still a non-mature technology.

The biggest problem, though, is not really being able to read billions of documents, because Google is already doing that (go read the EULA of Gmail, for instance). The biggest problem is understanding the implicit meaning within the words, so that Google may properly answer users’ questions, or even anticipate the answers before the questions are asked.

We know that Google is hard at work to achieve this, because Kurzweil himself told us so in the same interview:

“We are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.”

The vectors used by RankBrain may be our first glimpse of the technology Google will end up using for understanding all context, which is fundamental for giving a meaning to language.

How can we optimize for RankBrain?

I’m sure you’re asking this question.

My answer? This is a useless question, because RankBrain targets non-understandable queries and those using colloquialisms. Therefore, just as it’s not very useful to create specific pages for every single long-tail keyword, it’s even less useful to try targeting the queries RankBrain targets.

What we should do is insist on optimizing our content using semantic SEO practices, in order to help Google understand the context of our content and the meaning behind the concepts and entities we are writing about.

What we should do is treat the factors of personalized search as priorities, because search entities are strictly related to personalization. From this perspective, branding surely is a strategy that may correlate positively with how RankBrain and Hummingbird interpret and classify web documents and their content.

RankBrain, then, may not mean that much for our daily SEO activities, but it is offering us a glimpse of the future to come.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


We’re Pleased to Announce Moz Content – A New Product for Content Marketers

Posted by JayLeary

Stating the obvious here, but content is a massively important part of any inbound marketing campaign. The problem that most of us run into — and I know this well from years of SEO consulting with publishers — is that even “good” content can fade from view without a share, link, or conversion. Engaging an audience isn’t as simple as clicking “publish.”

So, how do we avoid making phantom content a habit?

For Moz, timely data has been a big part of the answer. Over the years, we’ve built internal tools like 1Metric to guide our work. It’s a simple strategy, but the more analysis we perform, the better we understand our audience. The better we understand our audience, the easier it is to produce engaging content.

When we blog and talk about those tools, folks in the community remind us that having something similar for their own use would be really helpful.

Well, we took that feedback to heart and, about a year ago, set out to create a product that helps marketers and content creators optimize their content efforts. Now, after lots of hard work, we’re ready to roll back the curtain on our latest offering: Moz Content.

 

Enough talk – let’s check it out!

Here’s a quick overview of what we came up with…

The Content Audit

At the heart of Moz Content is the Content Audit. With an Audit, you can crawl and analyze any site, including a competitor’s. The Audit inventories a site’s pages and uncovers wins based on social and link activity… In other words, the basic analysis you’re probably already cobbling together in Excel.

More importantly, Moz Content helps you find meaning in that mess of data with automatic tagging and filtering based on topics, authors, and even content types (think lists, videos, news articles, and more). With an Audit, you can answer important questions about a site’s strategy, like:

How do Guides on the Moz Blog stack up against Lists?

vs.

Average links and shares are almost double for Guides. Let’s keep it up!

The filtering lets you segment content to easily surface insights about your current strategy. Are “social media” or “link building” articles generating more links? How do Whiteboard Fridays compare with other videos? Audits let you shortcut the analysis and answer pressing questions about your audience’s interests.

That point-in-time analysis is helpful when you’re researching or course-correcting, but we also know that ongoing performance reporting is critical to a content marketer’s workflow. That’s where Tracked Audits come in.

With a Tracked Audit, Moz Content will automatically re-crawl a site every week and trend your performance metrics. Then, with the handy Audit Selector, you can compare the Audits we’ve archived in order to measure your progress.

By comparing two Audits, you can easily surface gains or losses and learn if your latest efforts are resonating.

Content Search

When we built Moz Content, we knew that we’d need to help sites at both ends of the content creation spectrum. Tracked Audits are great if a site has an active audience, but if you’re just getting started, the focus is usually on research. That’s where Content Search comes in.

Content Search lets you explore popular articles from across the web with simple topic searches. Interested in SEO and content? A quick search for (no surprises here) “SEO AND content” surfaces competitor articles that have garnered lots of attention.

(You can also search for content on domains with the “site:” operator.)

Moz Content monitors hundreds of thousands of English-language sites in order to surface new content about the niches you play in. Use the tool to analyze competitors or research topics that are important to your audience.

For social media marketers, Content Search also helps with curation. After you find something interesting, you can share it directly with your followers:

It’s worth mentioning that our index is still growing and you may see some gaps in the reporting. If that’s the case, feel free to reach out with topics you’d like covered in the future.

And a final note: you’ll probably notice we’re not reporting Twitter shares. Twitter, as of a few days ago, shut down the endpoint that many of us were using to measure Tweet counts. We didn’t want this wrinkle to hold up the launch, but we’re on the case and working on alternatives.

Time to test drive

There are other details we could cover, but I’m guessing you’d rather just dive in and see for yourself. With Moz Content, we’re providing free, limited access to the Audit and Content Search. Just head over to https://moz.com/content and take it for a spin. (Tip: Log in to your Community account first for elevated page limits, more searches, and a saved Audit.)

Try it out!

If you need more data and higher limits, you can always subscribe to Moz Content on a monthly or annual plan. The Strategists tier goes for $59/month and we’ll be adding higher limit tiers with Google Analytics integration soon.

This is just the beginning for Moz Content — we’ll need your help as we improve and expand upon the functionality. Don’t hesitate to let us know what you’d like to see, and feel free to send any feedback our way with a comment below, a note to our Help Team, or outreach on social.




5 Steps to Content Marketing Success

Posted by Paddy_Moogan

Content marketing is hard.

The problem is that the process looks easy. You brainstorm some ideas, choose one that you like, design and build it, do some outreach and you get traffic, links and social shares. Job done.

It’s a bit like link building, where someone may say, “Just build great content and the links will come.”

Unfortunately, it’s very rarely that straightforward.

Yes, sometimes you can get lucky and something will fly with little effort. But anyone that says that content marketing is easy has probably never done it over and over again. This is one of the reasons that I really liked this post last week by Simon Penson, because he admitted that he’d failed many times before getting it right. Simon pointed out that the plan he shared just increases the possibility of success — it doesn’t guarantee it.

In this post, I’m going to share our process for putting together a content marketing campaign. It doesn’t guarantee success either, but I’m positive that it puts us in a much better position than if we didn’t have a process at all. We’re always trying to improve this process, and it’s never going to be 100% perfect; with each campaign we do, there’s usually something we add or take away, which reflects the ever-changing nature of our industry. It’s also hard to manufacture or force that “ah-ha” moment — the flash of insight that generates a great idea — although this slide deck by Mark Johnstone does an excellent job of making sense of how those moments happen.

One thing to point out before we get into the meat of this post is that it’s not just about “big” content. Our role as digital marketers (many of us with an SEO background) goes much wider than content that is purely designed to generate links and social shares.

A content strategy needs to include more than just one type of content, and for most clients at Aira, we do multiple types of content based on their objectives. But that’s a post for another day, because today I’m going to talk about our process in the context of content that is designed to generate links and social shares, driving traffic as a result.

There are five broad steps in the process:

  1. Research and idea generation
  2. Idea validation
  3. Production
  4. Promotion
  5. Conversion

Step 1 – Research and idea generation

It’s easy to dive straight into brainstorming and idea generation, and sometimes that can work. However, I’d always recommend a period of research into an industry first, so that you can get a feel for what’s been done before, what has worked, and what hasn’t. That way, you go into a brainstorming session far better equipped to generate ideas that may work.

One thing to point out at this stage is that you shouldn’t put yourself under pressure to come up with a completely new idea. It’s great if you can, but the reality is that it’s unlikely that something has never, ever been done before in some form or another. The following quote is an apt one:

“An idea is nothing more or less than a new combination of old elements.”

This is from the book A Technique for Producing Ideas, published in 1939 by James Webb Young. It’s a short — but excellent — read, and I’d highly recommend it.

I think you’d agree that, over 75 years later, this quote rings truer than ever!

Therefore, a big part of the thinking behind our process is looking for inspiration in what others have done and asking ourselves if we can do it a little bit better or a bit differently. I’m certainly not saying you shouldn’t try to come up with brand new ideas, but don’t let an idea fall by the wayside just because it has been done before.

I’m going to frame the rest of this step with a claim: the most successful* content that you find will come down to at least one of three things:

  1. The story – If something has a strong story or hook behind it, it’s more likely to grab attention and be picked up by mainstream news websites and publishers.
  2. The data – Often tied into the story, though not always explicitly. Unique data, or data that has been sliced and interpreted differently, can be of much more interest to someone.
  3. The production – Sometimes a piece of content may just look visually stunning, and that is enough to generate links and shares.

There is one more which I want to point out, but it’s been deliberately left out of the list above. The one other thing that can make a piece of content successful is an existing audience to market that content to. A prime example in our industry here is Moz, which has a very large existing audience. This means that this very blog post is more likely to get links and shares than it would if I published it on my own blog, which has a very small audience.

This is important to remember because, when looking at your competitors and the success of their content, the numbers may be skewed a bit because of the audience they have. I’ll show you how to offset this below.

* Successful, in the context of this post, means generating links and social shares that drive quality traffic. Success can mean many things to different businesses, so I just wanted to remind you of this.

Find your content competitors

The first key step is to research your content competitors, and it’s very important to recognize the difference between your product/service competitors and your content competitors. Let’s look at an example.

Let’s say you’re a travel website. You may be trying to rank for keywords such as “flights to New York” or “holiday apartments in Italy” because you provide those things. You’ll have competitors who are trying to rank for the same kind of keywords and, of course, you should take a look at what they’re up to. However, there is a whole other set of websites that don’t compete for these types of keywords, but from whom you can learn a lot when it comes to content. In this example, those websites are travel bloggers and publishers who have travel sections. They produce the exact kind of content that generates links, social shares, and traffic — exactly what we’re trying to do with our own websites.

Examples in the travel world could be Nomadic Matt and Jayne Gorman, who both produce great content that generates links and social shares. If I run a travel website and I wanted to learn what content can work well in my industry, I’d definitely take a closer look at these kind of people for inspiration. They may even be people who I could partner with on content ideas, but that’s a bit outside the scope of this post.

It’s pretty simple to find our content competitors. The quickest way is to think of a few non-commercial keywords. Examples related to travel may be “guide to New York City” or “planning a trip to Italy,” which are likely to show search results that include publishers/blogs as opposed to direct competitors. You can also use the keyword search function in Buzzsumo to do these kind of searches:

The results will show you content that contains this keyword, ordered by social shares:


If you’re not familiar with Buzzsumo and would like to learn the basics, take a look at this post that I wrote on Moz a few months ago, which talks about this and shows how we use the API for one of our internal tools at Aira.

Finding stories and topics

Once you’ve found a handful of content competitors (we try and find at least 4–5), it’s time to start taking a closer look at what they’re doing. Buzzsumo allows us to do this quickly and easily; all we need to do is run a domain search and use an advanced search operator to search multiple domains at once:

You just need to put OR in between the domains that you want to do research on. The resulting search looks like this:

As you can see, Nomadic Matt is dominating the results, which is likely to be because of a combination of writing great content and having a larger audience than the other websites we searched for. This is a good example of where we may actually want to temporarily remove him from the list, so that we see a more diverse set of results. However, you can also just download a CSV from Buzzsumo and filter his domain out if you wish.

The important step here is to scan the list of results to try and find patterns and trends. In the screenshot below, I can immediately see a pattern:

Some of the best-performing posts are lists. We can see this quickly by noticing the numbers at the start of the title. Going a bit further down, I notice another pattern:

Lots of these posts are “How to”-style posts, which are clearly popular with his audience, given how high they feature on the list of results.

It doesn’t take long to start noticing these patterns. Make a note of them and we’ll come back to how we’re going to use them later.
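The pattern-spotting above can even be roughed out in code: flag titles that start with a number ("list posts") or with "How to". The sample titles below are made up; in practice you'd feed in an export of your competitors' articles:

```python
import re

titles = [
    "61 Travel Tips to Make You the World's Savviest Traveler",
    "How to Travel Anywhere for Free",
    "A Weekend in Lisbon",
    "23 Secrets to Booking Cheap Flights",
]

def classify(title):
    # Titles starting with a number are usually list posts.
    if re.match(r"^\d+\b", title):
        return "list"
    # Case-insensitive "How to ..." openers.
    if re.match(r"(?i)^how to\b", title):
        return "how-to"
    return "other"

for t in titles:
    print(classify(t), "-", t)
```

Counting each bucket across a few hundred exported titles gives you the same "lists and how-tos dominate" insight without eyeballing every row.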

Another way to find patterns is to analyze the titles in bulk. We can do this by doing an export from Buzzsumo so that we get a list of titles:

You can then copy and paste these titles into a word cloud generator tool, such as Wordle, and get something like this:

You’ll need to remove common words, such as the website names and domains, but the result above is basically a summary of the words that get the most shares — which is really handy to know in bulk. Again, make a note of these kind of themes.
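A word cloud is just a picture of word frequency, so the same summary can be pulled from a title export with a few lines of Python. The stopword list and sample titles here are illustrative:

```python
from collections import Counter
import re

titles = [
    "How to Travel the World on $50 a Day",
    "How to Find Cheap Flights to Anywhere",
    "The Best Travel Insurance for Long Trips",
]

# Common words to drop, plus anything site-specific you spot.
stopwords = {"how", "to", "the", "a", "on", "for", "of", "and", "in"}

words = Counter(
    w
    for title in titles
    for w in re.findall(r"[a-z$0-9]+", title.lower())
    if w not in stopwords
)

print(words.most_common(3))  # the themes that show up most often
```

The top entries are the same themes a Wordle would render large, but in a form you can sort, filter, and paste into a report.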


I know what you may be thinking at this point: What about links? Buzzsumo can give you backlink data, but you have to click on each individual result to get it. This is fine for a small number of articles, but we’re trying to do bulk analysis. So instead, we’re going to export the results to a CSV and then upload those results into URL Profiler, which can fetch link metrics for us in bulk.

These are the settings you want:

You can select your choice of Mozscape, Majestic, or Ahrefs data, or all three — it’s up to you. The point is that we need to know how many links our content competitors are generating to their individual content pieces. The results will then look something like this when you export the results to Excel:

Once you’ve got this, you can do some pivot table magic to make the data easier to consume. Here are the settings that you need:

Then you’ll end up with a graph that looks something like this (you can, of course, make it prettier!):

As we can see, A Luxury Travel Blog is leading the way in terms of generating links to their content, so they’re worthy of a closer look. The beauty of this process is that Buzzsumo does a pretty good job of excluding the homepage from their results, so the results are showing links just to the content they produce. From here, we can do a deeper dive into their links using Mozscape, Majestic, or Ahrefs — whichever you prefer.
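The pivot-table step boils down to summing link counts per domain, which you can also do in a few lines. The CSV columns below mirror what a Buzzsumo + URL Profiler export might contain, but the exact headers in your own export may differ:

```python
import csv
import io
from collections import defaultdict

# Stand-in for the exported CSV file (columns are assumptions).
export = io.StringIO("""domain,url,linking_root_domains
aluxurytravelblog.com,/post-1,120
aluxurytravelblog.com,/post-2,80
nomadicmatt.com,/post-a,95
""")

totals = defaultdict(int)
for row in csv.DictReader(export):
    totals[row["domain"]] += int(row["linking_root_domains"])

# Domains ranked by total links to their content.
for domain, links in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(domain, links)
```

The sorted totals are the same numbers the pivot table (and the resulting graph) would show.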


Before moving on, I want to mention a few other tools that we use in this step of the process. Epic Beat is very similar to Buzzsumo in that you can enter a domain or keyword to find what content is being shared the most. Combining the results from Epic Beat and Buzzsumo can give you lots of information on what is working for competitors in your industry:

Another cool tool — which is more for qualitative analysis than quantitative — is Brandtale, which curates digital content/advertising campaigns on large publishers. Sticking with our travel example, I can browse their travel section to find brands who are running campaigns:

I can drill into any of these and see what these brands are doing and if I can learn anything. Trust me, running content campaigns like this on large publishers, such as National Geographic or the New York Times, is expensive. A lot of work will have gone into them, which means they’re worth looking at.

Finding data sources

Our next step is to try and find data sources that could lead to us creating a piece of content or a story that can be pitched to publications. I’d highly recommend Statista for this, which is a growing resource of statistics and facts. Sticking with our travel example, here is a snapshot of the kind of data it has available with a simple search:

If Statista doesn’t have what you need, a few simple searches on Google will often yield good results. Just remember to do a bit of due diligence on where the data comes from and make sure that it’s as sound as possible.

Failing that, can you get your own data? There are many organizations and services out there who will gather data on your behalf. Yes, you have to pay for them, but if you think the data can help you generate links and shares for your website, then it could be worth the investment. Here are a few options:

Some of these can be expensive to use, so I recommend using something like Google Consumer Surveys to poll a small sample of people. Then, if the data is looking promising, run the full survey.
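The arithmetic behind "poll a small sample first" is the standard margin-of-error formula for a proportion, roughly z * sqrt(p*(1-p)/n). The sample sizes below are illustrative:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # z = 1.96 corresponds to ~95% confidence; p = 0.5 is the
    # worst case (widest margin) when you don't know the split.
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(100), 3))   # small pilot: ~±0.098 (about 10 points)
print(round(margin_of_error(1000), 3))  # full survey: ~±0.031 (about 3 points)
```

A 100-person pilot is enough to see whether a result is promising; the full run then buys you the tighter margin you'd want before publishing the data.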

Finding visual content

The final piece of this research is finding visual content which has done well and seeing if we can do better. Like finding data, don’t overthink this, and start with a few simple searches. Google Images is always a good place to start with keywords such as this:

You can get more specific based on the website you’re working with, but what we’re looking to do here is scan the results quickly and see if anything stands out to us:

If we find any that look particularly good or interesting, we take a closer look and ask ourselves the question, “Can we do it better?” While some visuals may look okay and performed well, there are often ways to improve on something, such as:
  • Making the core story or headline more obvious
  • Making it interactive to make the key messages easier to consume
  • Making the design cleaner so that key messages are communicated better

There are any number of ways a good designer can make an existing idea much better, and as we discussed earlier, making something beautiful can sometimes be enough to make it successful.

Once we’ve found something we think we can do better, the next question is: how successful was it? One way to find out is to use this Google Chrome extension to automatically do a Google Image reverse search and see how many other websites have used that visual. If the answer is a fair few, then you know that a better version is likely to be of interest to a number of websites.
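When judging how widely a visual spread, unique referring domains are a better signal than raw URL counts (one site republishing a graphic ten times is not ten placements). A small sketch, assuming you've pasted the reverse-image-search result URLs into a list:

```python
from urllib.parse import urlparse

def unique_domains(urls):
    """Count distinct referring domains from a list of page URLs."""
    domains = set()
    for url in urls:
        netloc = urlparse(url).netloc.lower()
        # Treat www.example.com and example.com as the same site.
        if netloc.startswith("www."):
            netloc = netloc[4:]
        domains.add(netloc)
    return len(domains)

pages = [
    "https://www.example.com/roundup",
    "https://example.com/another-post",
    "https://blog.other-site.org/feature",
]
print(unique_domains(pages))  # 2
```

The example URLs are hypothetical; the point is simply to deduplicate by domain before deciding whether a visual was widely picked up.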

Putting all of this research together

That was a lot to go through! But trust me, it’s worth it. The next step is to take all of this information and put it into a brainstorm session brief for your team. When it comes to brainstorming, many people will say “all ideas are good ideas” — but this simply isn’t true.

A brief is very important here, because your team needs to walk into that session with the right information and context. If they don’t, then the majority of ideas that are generated may not actually be usable — which isn’t a very good use of time.

To make this easier, I’ve put together a Google Doc template which you’re welcome to download and make a copy of. You can find it here.

Step 2 – Idea validation

The more I work on content marketing campaigns, the more I value this step in the process. You can think that you have a great idea, but how do you know for sure? The fact is that you can never predict this 100%, but you can improve the odds by using a framework to validate an idea.

The key thing here is not the specific frameworks that I talk about below, but to make sure you use some kind of framework so that you can consistently and fairly assess the quality of your ideas.

One of the frameworks I’d recommend, which some of you may have heard of, is from Made to Stick by Chip and Dan Heath. I’m not going to go into too much detail here simply because lots has already been written on the topic, including this post from Distilled and this more recent post by Hannah Smith, which references the framework. There is also this summary of the book, which talks about the key takeaways and what the principles of Made to Stick are.

In summary, the book outlines six principles which, through their research, the authors feel make an idea stick in our minds.

  1. Simplicity – An idea needs to be easy for us to comprehend quickly. A good way to test this is to write the headline and see if you can communicate the idea within the restraints of a headline (i.e., you only have a short sentence).
  2. Unexpectedness – While the idea doesn’t have to be 100% brand new, there needs to be something new or unexpected about it.
  3. Concreteness – This can often be mixed up with simplicity, but is subtly different. Concreteness is all about the idea not allowing room for ambiguity or misinterpretation of what you’re trying to say.
  4. Credibility – The basis of the idea needs to be credible. This can be via credible data, a credible (expert) author or a credible company behind the idea.
  5. Emotion – If an idea provokes an emotional response, we’re much more likely to remember it.
  6. Story – We touched upon this earlier; it goes back to when we were children. We were told stories, and all of us can remember certain ones. We’re used to the structure of a story and how it piques our interest.

The key here isn’t the framework itself, although that is very important. The key is the ability for you and your team to give each other valuable, constructive feedback on an idea. It’s often easy to just say “I don’t like that idea” or “That idea won’t work,” which, even if you’re right, isn’t the most useful feedback to receive. With a good framework, someone can reference it in their feedback. So if you’re using the framework above, you can say “I don’t think the idea is simple” or “It’s not concrete enough” — this is far more useful feedback to hear and it may mean that an idea simply needs tweaking rather than dumping completely.

As mentioned earlier, this isn’t the only framework you can use. Another one goes back to what we talked about earlier:

  1. Do we have a story or an interesting hook?
  2. Do we have unique, interesting data?
  3. Can we make the idea look beautiful?

Answering yes to at least one of these questions can increase the chances of your idea being a success.
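Whichever framework you choose, the value comes from scoring every idea against the same questions. A minimal sketch using the three questions above (the idea names are made up, and a simple yes-count is just one possible scoring scheme):

```python
def validate_idea(has_story, has_unique_data, can_look_beautiful):
    """Score an idea against the three validation questions.

    Returns the number of 'yes' answers; the more yeses, the better
    the idea's chances of success under this framework.
    """
    return sum([has_story, has_unique_data, can_look_beautiful])

# Hypothetical ideas from a brainstorm session:
ideas = {
    "Airline legroom comparison": validate_idea(True, True, True),
    "Generic travel tips listicle": validate_idea(False, False, False),
}
# Rank ideas by score, best first:
for name, score in sorted(ideas.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

Even a crude scorecard like this makes feedback concrete: "it scored zero on data" is far more actionable than "I don't like it."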

Step 3 – Production

I’m not a designer or a developer, so I’m not going to tell you how to design or develop a piece of content. But there are some things that we’ve learned (sometimes the hard way) when it comes to producing a piece of content.

Function over form

The first thing I want to share here which is important is to remember function over form. Never, ever say “I want an infographic” or “I want a video” or “I want an interactive piece of content.” You should focus on getting the right idea first, then ask what the best way to present that idea is. If it turns out that an infographic is the best way to present your idea, then great. But don’t start with the form; start with the idea and see where it takes you.

This may help reduce the number of terrible infographics on the web which, unfortunately, our industry is at least partly responsible for!

Mobile-first design

There are stats upon stats showing the growth of mobile, so I’m not going to tell you those again. If you want to do some digging, I’d highly recommend the work and analysis from Ben Evans, who specializes in this area.

In relation to content, what we need to remember is that content discovery is becoming more and more mobile-centric. We typically think of content discovery as someone browsing on their laptop/desktop machines and clicking through from a blog, Twitter, or Facebook. In reality, though, it actually looks more like this:

When someone clicks on a link like this on their mobile device, they expect the content they land on to work perfectly on that device. If it doesn’t, the user is unlikely to enjoy or engage with the content, let alone share it or link to it from somewhere.

This deck from Vicke Cheung does a great job of showing the importance of designing for mobile, along with practical tips for doing this:

Ten Lessons in Designing Content for Mobile from Vicke Cheung

Another key thing here is to let designers design. Try not to restrict them by providing a brief that tells them 100% how something needs to be done. Give them the goals of the piece and some guidelines, then let them design. Of course give them feedback along the way, but try not to be too prescriptive.

Go-live checklist

One of the lessons we’ve learned the hard way is that in your excitement to get something live, you can forget some of the basics. A few common things that need to be thought about, but are easily forgotten, include:

  • Social/Open Graph tagging
  • Analytics code
  • Responsive testing

To help with this, here is another Google Doc which you can download and use which contains a few things to remember:

While the things on this list seem basic, they can be very easy to forget!
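Some of the checklist can even be automated. A rough sketch that scans a page's HTML for the Open Graph tags and responsive viewport meta before launch (the set of "required" OG tags here is an assumption; adjust it to whatever your checklist demands):

```python
from html.parser import HTMLParser

class GoLiveChecker(HTMLParser):
    """Collects Open Graph tags and the viewport meta from a page."""

    def __init__(self):
        super().__init__()
        self.og_tags = set()
        self.has_viewport = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta":
            prop = attrs.get("property") or ""
            if prop.startswith("og:"):
                self.og_tags.add(prop)
            if attrs.get("name") == "viewport":
                self.has_viewport = True

def check_page(html):
    """Return what's missing from a go-live perspective."""
    checker = GoLiveChecker()
    checker.feed(html)
    required = {"og:title", "og:image", "og:description"}
    return {
        "missing_og": sorted(required - checker.og_tags),
        "responsive_meta": checker.has_viewport,
    }

sample = ('<meta property="og:title" content="My piece">'
          '<meta name="viewport" content="width=device-width">')
print(check_page(sample))
```

Running something like this against a staging URL takes seconds and catches the embarrassing omissions before a journalist does.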

Step 4 – Promotion

Here is one of the key takeaways: Spend just as much time on promotion as you do on production. It’s so easy to get caught up in design, development, and the idea itself that you can end up spending most of your time producing a piece of content and not nearly enough time promoting it.

There are three different types of promotion we work on at Aira. These differ by client, but ideally, we spend time on all of the following:

A combination of all three can help ensure that your content reaches as many people as possible. I used to rely solely on organic content promotion via traditional link building outreach/digital PR, but this may not be enough and ignores some useful techniques.

Paid promotion

Paying to promote your content can be very useful in generating traffic to a piece of content, which in turn, can also help generate social shares and sometimes links. Larry Kim goes into detail on this in his post over on Search Engine Land. The basic principle is that you can use paid promotion to get your content in front of writers, bloggers, journalists, and influencers.

There are a few options for how you can do this. Firstly, to reach a wide audience, you can use platforms such as Taboola or Outbrain. These can work well for reaching a very big audience, but targeting options for specific demographics on these platforms is still rather limited.

Wil Reynolds ran an experiment using these (and other) platforms, which is definitely worth looking at:

The $10,000 Paid Content + Paid Linking Test that is 100% Google Safe from Wil Reynolds

Our experience with these particular platforms is very mixed, with it working well for some clients but sending very untargeted traffic for others. So we’d advise starting with a small budget and assessing the quality of traffic before spending too much.

Other options are more regular social channels such as:

  • Facebook
  • Twitter
  • LinkedIn
  • Pinterest

The one I want to focus on is Facebook, where the targeting options are almost scary. But they’re useful to us nonetheless. You can do things such as specifically targeting journalists using options such as:

You can put whatever list you’d like in here, but I’m sure you get the idea!

You can also go one step further in targeting people by uploading their email address into the custom audience feature of Facebook:

It’s straightforward here to upload your outreach list; if Facebook can find a match for the email addresses, you can advertise directly to those people. If you’d like to go into more detail on this, take a look at this post I wrote last year.
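If you upload through the Marketing API rather than the web interface, Facebook matches on a normalized, SHA-256-hashed version of each email address rather than the raw address. A minimal sketch of that preparation step (the example addresses are hypothetical):

```python
import hashlib

def prepare_emails(raw_emails):
    """Normalize and SHA-256 hash emails for custom audience matching.

    Matching is done on the hash of the trimmed, lowercased address,
    so '  Jane@Example.COM ' and 'jane@example.com' must produce
    identical values.
    """
    hashed = []
    for email in raw_emails:
        normalized = email.strip().lower()
        hashed.append(hashlib.sha256(normalized.encode("utf-8")).hexdigest())
    return hashed

outreach_list = ["  Jane@Example.COM ", "editor@news-site.co.uk"]
print(prepare_emails(outreach_list))
```

The web uploader does this hashing for you in the browser; the sketch just shows what normalization actually happens to your outreach list.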

Earned promotion

This is likely to be more familiar to most of us because this section covers traditional link building outreach and digital PR. Essentially, we need to find a list of influencers and contact them in order to promote our content to them. This sounds simple, but can often be the trickiest part of the process… because it’s here that you may find out that you don’t actually have a great idea! This is why the idea validation step is so important — because it reduces the chances of promotion going wrong.

I’ve written multiple times about finding outreach prospects before, so I won’t repeat everything here. But I will point out my favourite techniques for doing this.

Finding existing lists of prospects

I honestly start every single piece of link building research with these kinds of searches:

Simply switch out [INDUSTRY] for your own industry and you’ll find more than enough prospects to keep you busy!
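Those footprint searches are easy to expand in bulk for a given industry. A tiny sketch (the footprint patterns are examples of common link-prospecting searches; substitute whichever ones work for you):

```python
def prospect_queries(industry):
    """Expand a set of link-prospecting search footprints for an industry."""
    footprints = [
        'top {industry} blogs',
        'best {industry} websites',
        '{industry} "write for us"',
        'intitle:"{industry}" inurl:links',
    ]
    return [f.format(industry=industry) for f in footprints]

for query in prospect_queries("travel"):
    print(query)
```

Paste the output into your search tool of choice and you have a morning's worth of prospecting queued up.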

Finding mainstream publications and journalists

Here, we’re trying to find high-level outreach targets who write for national newspapers and mainstream publications. The value of these can be huge, because many websites like this have the ability to send a LOT of traffic to your website.

Here are a few tools (mostly paid, unfortunately) that you can use for this kind of research:

You can of course do manual research as well, but these tools can help speed things up a bit.

Owned promotion

This one will depend heavily on what your client already has, but essentially we’re talking about using their own channels, such as:

  • Social channels like Facebook, Twitter, Instagram, LinkedIn, or Pinterest
  • Their existing blog
  • Their email marketing list

This may sound easy, but I’ve worked with some companies where the social team sits separately to the SEO/content team — which can make it harder to get them to work together! If you can bridge this gap, though, it’s a pretty easy win to get eyeballs on your content.

Step 5 – Conversion and tracking

So here we are, at the final step of our process, and I want to be really honest about this bit. It can be quite hard to convert a visitor to a piece of content that is designed for links and social shares. These kinds of content pieces are often not designed to “sell” to the visitor, so getting them to click across to the main website or a product page (let alone getting them to buy something) is difficult. There are exceptions; this piece from Bellroy is one that comes to mind which is informational but very related to their product:

Generally though, this is difficult to pull off. So what can we do instead?

Micro-conversions

If we can’t convert someone into a buyer, what else can we do? One thing we’ve done for some clients is to try and capture a visitor’s email address so that we can then target them on Facebook or via email marketing. Or it could be any number of other things, such as:

  • Commenting on a piece of content (so you also get their email address)
  • Sharing a piece of content
  • Spending a certain amount of time on your content

Build retargeting lists

If someone visits a piece of content, you can build a retargeting list and then advertise to them in the future. There are two ways you can do this:

  1. Advertise your products and services to try and encourage clicks and future purchases
  2. Advertise your future content pieces — this can work very well if you’re working on a content series, i.e. a series of blog posts that all tie together

Build retargeting lists based on interactions with a page

This is a post for another day, but it is possible to go more targeted when building your retargeting lists, by building them based on how someone interacts with your content. For example, you can fire Facebook retargeting pixels when someone clicks on a certain link or when someone selects a certain option (if your content is interactive). This means that you can build lists that are very specific, and you can cater your advertising based on the interactions that users have carried out.

To wrap up

So that’s about it for today’s post! These are the five broad steps that we take for a content marketing campaign, and while we’re always iterating on them and improving them, they have increased our chances of success — which is what this is all about. You’ll never guarantee success, but whether you use the process above or your own, you certainly should utilize a process to enhance your chances.

I’d love to hear your feedback in the comments!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


When Data Just Isn’t Enough: The Hidden Context that’s Key to Content Loyalty

Posted by ronell-smith

(Image source)

When the client asked to “go mute” during our monthly client call, there was no reason to sound the alarm. After all, being able to talk through what they’d heard as a team, in private, was normal. But when the always-skeptical global marketing director said the COO (who had been in the room for the ten minutes of analytics discussion) wanted to take the discussion offline for a bit but asked us to hold on, I knew things had likely gone off the rails.

“Thanks for holding, you guys,” said the head of marketing upon taking the phone off mute after what seemed like an eternity. “Tim was just in here, and he had some questions about the data. He expressed concern that it appears [the team] is simply regurgitating a bunch of numbers.”

After it was explained that the numbers actually exceeded everyone’s expectations for how the site would perform after the redesign, the link detox and having new content in place, she made things crystal clear.

“Let me cut to the chase,” she said. “The numbers are great. We’re happy with the numbers. But this is the same thing our last agency provided us: great data. What we’re looking for is someone to share what the data is telling us about what to do in the future, so we can focus only on those areas that are likely to benefit the brand. We’d like to know what will help us attain success in the future, not what [your team] thinks will lead to success in the future.”

What this client needed was the Oracle of Delphi, not someone to analyze their data.

But she was right. They were looking for all-important insight, insight that could not be gleaned from data alone. However, this agency and all the others she’d worked with had led her to believe the data is gospel. Follow it to the Promised Land.

She knew better.

Data alone is never enough.

Though many in online marketing prefer to see data as the be-all and end-all, at best data alone tells us what’s likely to be effective in the future. It does not provide the “if this, then that” clarity we crave.

The more we share “according-to-the-data” insight, the more we walk a tightrope that never ends. Data tells us what happened, can yield great insight into what’s likely to happen, and is at its best when used to discern what is happening.

However, in the real world, things change constantly and often without warning, a fact that cannot be accounted for via data alone.

“[Data] is an abstract description of reality,” writes Jim Harris on his blog, Obsessive-Compulsive Data Quality. ”…The inconvenient truth is that the real world is not the same thing as these abstract descriptions of it—not even when we believe that data perfection is possible (or have managed to convince ourselves that our data is perfect).”

To be sure, data is integral to attaining success in the information-rich online marketing arena. Everything from our websites to our campaigns to conversions depends on it. In fact, data is a large part of what sets online marketing apart from traditional marketing, which can, at times, feel like so much guesswork.

But over the course of the last two years, through interviews with more than 300 folks in the content marketing/inbound marketing space, I’ve come to realize that many wonder if data (insofar as how it’s used to make decisions) isn’t as much a curse as it is a blessing.

(Image source)

In conversation after conversation, I’ve heard CEOs, SEOs, CMOs, PPC nerds, and content folks say the same things, which is summed up nicely by these comments from a director-level SEO at one of the most successful agencies in the US: “Even in those cases where we deliver to clients data that far exceeds their expectations, they often fire us. Heck, especially when we deliver those amazing results, they fire us.”

I think this occurs for one of two reasons:

  1. They realize data doesn’t yield the solution they’d hoped for, or
  2. They falsely believe data highlights the end-game, meaning they can now thrive on autopilot.

As any of us working in online marketing can attest, nothing could be further from the truth.

Data is an important part of a large picture, one that is as nuanced and as varied as it is ever-changing.

Because of that, we need context.

“Data doesn’t come with context,” says Tim Gillman, an analytics nerd at Portent Interactive in Seattle. “For example: measuring content. If your data says people spend ~15 mins reading your post, there’s always the chance that they simply left their computer for awhile. You don’t know for certain they were loving your content.”

I struggled with this reality for months, wondering what, if anything, could be done to bridge this gap, which would allow us to (a) be given the time to do quality work for our clients and (b) have clients realize the efficacy of our efforts.

I read big data and data science books, followed the words and works of big data nerds active on social media, listened to podcasts, watched YouTube videos, and talked to as many people as I could to discern how we, as online marketers, can be successful.

Training ourselves to think about data differently

In the end, it was the sage words from Harvard Business School professor Clayton Christensen that helped me gain some clarity.

Data, at best, can only tell us about the past, he writes. It cannot help us see into the future.

For that, he adds, we need a theory to help explain what’s likely to happen. Taken together, data and theory provide us with the building blocks of what can become the framework for success we crave.

To make this work, he says, we must go “dumpster diving” — hanging out in the real world, observing and noticing how things occur in real life — which will lead us to more effectively posit the hows (things really work) and whys (they work as they do).

Then, once we have the data, we use it to empirically assess the observed behavior, devoid of emotion.

The framework looks a lot like this:

  • Observe – Dumpster-diving in the real world
  • Theorize – Posit the how and the why
  • Test – Assess and compile data
  • Construct – Develop a framework for future efforts

With this model, we’re training ourselves to think about data in a different, but no less valuable, way. In the above scenario, data is an important part of the equation; it is not treated as the equation in its entirety.

This, to my mind, gets us closer to seeing data in the proper context. That is a part of the solution. But changing how we think about data won’t, by itself, help us keep clients, won’t immediately make us better marketers, and cannot lead to better overall decisions.

For that to occur, we have to change two things: the data we act upon, and how we choose to act upon it.

A framework for finding your data goalposts

(Image source)

Without knowing it, Matthew Brown at MozCon 2015 provided us with the veritable playbook for how to use data to improve our content marketing efforts. During his talk, which was one of the best of the entire event, he highlighted the key to content marketing success: content loyalty.

The more loyal our audiences, the better able we are to sustain our content marketing efforts. (A loyal audience comprises the folks who most frequently visit your site.)

The key, Brown said during the talk, is to find the goalpost that helps you determine content loyalty for your brand, then optimize for that metric. So, instead of chasing Likes, shares, or links to your content, you’re focused on creating loyal visitors to your site.

This is important because one of the reasons content marketers end up getting lost down the data rabbit hole is that we too often chase the wrong metrics (e.g., they highlight activity but don’t lead to conversions) or we attempt to track too many metrics, most of which don’t lead to the goal we, or our clients, are hoping for.

Here’s how such an effort could work for your brand, using the OTTC framework borrowed from Christensen’s work:

  • Observe
    Determine what comprises “loyal visitors” for your brand. It could be visits per day, per week, or per month. This is the crucial first step. Get this wrong and nothing else matters. What you’re looking for is the metric that correlates with visitors becoming loyal to your site. Put simply, you’re looking for the gotcha that says “These folks are now loyal visitors.”
  • Theorize
    Gather the team and spend some time thinking through what it is about your site and/or content that likely leads to these audience members becoming loyal fans and followers. Is it the length of the content? The number of images? The author? The amount of content above the fold? The number of ads?
  • Test
    Use the information gleaned from that meeting with the team to begin testing the various on-page elements until you have a good idea of what it is that leads folks to become loyal. This is the fun part. To make it even more rewarding, you can rest assured that many of your competitors won’t be following suit, as many of them are content to guess at what works, then throw more of the same at the wall.
  • Construct
    Develop a process by which you continue to optimize for content loyalty, in large part by creating the types and formats of content that you’ve uncovered as leading to content loyalty. Keep in mind, however, that this process is not static, as your audience’s needs are likely to change with time. But by analyzing the data, dumpster diving by interacting with the audience via emails, polls, Q&A, and sundry other methods of staying connected, your brand will be in great shape to continue putting the ball through the uprights.

Summation

This is a post I thought long and hard about writing. During this quest to better understand data and shine a light on how to make it work for us and not against us, I’ve developed a deep, sincere fascination for big data and the role it can play in answering some of our biggest questions.

I’m in no way anti-data. Hardly. What I’m against is the “data-tells-us-all-we-need-to-know” mindset I so often encounter.

I’m hopeful that, in the future, more and more of us are willing to be honest with ourselves and our clients, acknowledging what we know to be true: the data alone won’t save us.

