The Anatomy of a Link – What Makes a Good (and Bad) Link?

Posted by Paddy_Moogan

The following is an excerpt from The Linkbuilding Book, an e-book by Paddy Moogan available for purchase and download. This chapter, entitled “The Anatomy of a Link,” offers deeper insight into what makes for a quality link. Enjoy!

Not all links are created equal. One part of the Google algorithm is the number of links pointing at your website, but it would be foolish for Google to use a raw count without taking the quality of those links into account. Otherwise it would just be a free-for-all, with everyone trying to get as many links as they could, regardless of quality.

Back in the early days of search engine optimization, it was pretty much a free-for-all because the search engines were not as good at determining the quality of a link. Even the introduction of PageRank, combined with anchor text as a signal, didn’t deter link spammers. As search engines have become more advanced, they have been able to expand the link-related signals they can use beyond raw numbers. Search engines can look at a number of factors, which can all combine and give them an indicator of quality. More to the point, they can tell whether the link is likely to be a genuine, editorially-given link, or a spammy link.

These factors are outlined in more detail below. There is something important to remember here, though: to a certain degree, it isn’t the link itself you care about; it is the page and the domain the link comes from. Once we know what these factors are, it helps set the scene for the types of links you should (and shouldn’t) be getting for your own website.

Before diving into the finer details of links and linking pages, I want to take a much broader look at what makes a good link. To me, there are three broad elements of a link:

  • Trust
  • Diversity
  • Relevance

If you can get a link that ticks off all three of these, you’re onto a winner! However, the reality is that this is quite hard to do consistently. But you should always have it in the back of your mind.

Links that are trusted

In an ideal world, all links that you get would be from trusted websites. By trust, we often mean what Google thinks of a website, and some will also refer to this as authority. As we’ve discussed, Google came up with PageRank as a way to objectively measure the trust of every single link they find on the web. Generally, the more PageRank a page has, the more trusted it is by Google, the more likely it is to rank and the more likely it is to help you rank better if it links to you.

However, there is another concept here that you need to be aware of—TrustRank.

TrustRank differs from PageRank in that it is designed to be harder to game if you’re a spammer. Taken from the TrustRank paper, written in 2004:

Let us discuss the difference between PageRank and TrustRank first. Remember, the PageRank algorithm does not incorporate any knowledge about the quality of a site, nor does it explicitly penalize badness. In fact, we will see that it is not very uncommon that some site created by a skilled spammer receives high PageRank score. In contrast, our TrustRank is meant to differentiate good and bad sites: we expect that spam sites were not assigned high TrustRank scores.

Source: http://www.vldb.org/conf/2004/RS15P3.PDF

If you click through to this PDF to read the full paper on TrustRank, you’ll notice that it is a joint effort between Stanford and Yahoo. There was some confusion as to who came up with the original idea for TrustRank because of this. Also, a patent granted to Google in 2009 referring to “Trust Rank” describes a very different process to the one in the original paper from 2004.

For now, we’re going to briefly discuss the idea of TrustRank from 2004 and how it may be used by the search engines to calculate trust.

Let’s start with this simple diagram:

Starting from the left-hand side, imagine you have a list of websites that you trust 100%; it might include sites like the BBC, CNN, The New York Times, etc. This “seed list” contains no spam whatsoever, because these are very high-quality websites with a high level of editorial control. Moving one step to the right, we have a list of websites that are one link away from the trusted seed set. The amount of spam increases ever so slightly, but not by much. Hat tip to Rand for the original visualization of this.

Now go to the far right of the diagram, and we can see that, even if a list of websites is just three links away from the trusted seed set, websites in that list are more likely to be spam—as many as 14% of them, in fact.

Therefore, the search engines could define their own trusted seed set of websites and use this as a starting point for crawling the web. As they crawl through these websites and follow the external links, they can see how far away any given website is from the trusted seed set. The implication is that the further a website is from the seed set, the more likely it is to be spam. While this isn’t an exact science, when you think of the billions of pages online which need to be measured for trust, it is a highly scalable way of doing it, and the tests in the original paper showed that it worked well, too.

Links that are diverse

There are two types of diversity that I want to cover here:

  • Diversity of linking domains
  • Diversity of link type

Both of these are important if we want to build good links and have a strong, robust link profile.

Diversity of linking domains simply means getting links from lots of different domains—not the same ones over and over again. I discuss this in much more detail below.

Diversity of link type means getting links from different types of domains. If all of your links are from web directories, that isn’t very diverse. If all of your links come from press release syndicators, that isn’t very diverse. I’m sure you see what I mean. A natural link profile will contain links from many different types of websites.

Links that are relevant

The word “relevant” here is not referring to the page that the link is on, but rather to the link itself. Anchor text allowed Google to discover the possible topic of a page without even having to crawl it, and it became a strong signal for them.

Therefore, we need to acquire links to our website that are relevant to us—we can do this by trying to make the anchor text contain a keyword that we are targeting and is relevant to us. However, caution is needed here in light of Google updates in 2012, namely Penguin, which had a massive effect on link building.

I won’t go into too much detail here so that I don’t distract from the goal of this chapter, but the key takeaway is that building links with overly-commercial, keyword-focused anchor text is a lot riskier than it used to be. It can still definitely work, but does pose a risk if you overdo it.

Elements of a page that may affect the quality of a link

As we discussed at the start of this chapter, Google does not simply look at the raw number of links pointing at your website. They look at many other factors to try to assess the quality of a link and how much value it should pass to the target page. In this chapter, we will take a detailed look at what these factors could be and what they mean for your work as a link builder.

Some of these factors are mentioned in a patent filed by Google in 2004 and granted in 2010, which became known as the “reasonable surfer” model. It basically outlines how various elements of a link, as well as the page containing the link, may affect how Google treats a link.

Below we’ll take a look at these and explore how they may affect your work and what you need to remember about each of them.

Number of other outgoing links on a page

If the link pointing to your website is among hundreds or thousands of other outgoing links on a single page, then chances are that it isn’t as valuable. If you think about it from a user’s point of view, they probably are not going to find a page with hundreds of links particularly useful. There are, of course, exceptions, but on the whole, these types of pages do not provide a good user experience. There is also the probability that these types of pages are only created for links and do not have much real content on them, which is also a signal of a page that ultimately isn’t a good user experience.

Also, going back to our knowledge of how PageRank works, the more outgoing links there are on a page, the less value each of those links passes. This isn’t a hard-and-fast rule, though, and it has been the topic of hot debate in the SEO industry for many years, particularly in relation to PageRank sculpting, which is discussed in another chapter.

How this affects your work as an SEO

When seeking to get links from existing pages on a website, as opposed to new pages, take a look at the number of other outgoing links on a page using a tool such as Search Status (Firefox) or OpenSEO stats (Chrome). If the number looks very high, then you may want to consider whether the link is worth going for and spending time acquiring. Obviously you should take account of other factors too, such as whether the domain is a particularly strong one to get a link from, even if it is among hundreds of other links.

You may also want to consider whether there is a genuine reason for a high number of other links on the page. If there is, then the link may still be worth going for. One thing you should definitely look out for is a lot of links to other websites which are not related to the topic of your page. In particular, look for links which appear to go to gambling, poker, pills, and health websites. If you see these, then you may be looking at a link exchange page where the webmaster has only put those links in place because he or she got one back from the site being linked to. These are the types of reciprocal links that Google does not like to see and will probably reduce the value of.

One good rule of thumb here is to ask whether the page is of value to a real user and whether someone might actually click through to your website from it. If the answer to both of these is no, then it may not be the best link to pursue.

The page having a penalty or filter applied

This is a bit of a controversial one. Traditionally, the official line from Google has always been that links from bad pages can’t hurt you. There have been a few comments from Google employees to the contrary, but, on the whole, their stance has stayed the same. In recent years, however, we have seen that stance softened a little, with various comments from Googlers no longer explicitly saying that links from bad pages can’t hurt you. My own personal experience (and that of many SEOs) is that links from bad or penalized pages can hurt you, and of course we’ve seen the Penguin update reduce the rankings of many websites that had low-quality links pointing at them. The release of the disavow tool was pretty much an acknowledgement that bad links could hurt you and that you needed a tool to help you deal with them.

I can see why Google, up until recently, held this public stance. They do not want to encourage people to deliberately point bad links at their competitors in an effort to hurt their rankings. The fact is that this is a practice which does happen a lot more than people think. Therefore, I feel it is one that every SEO should be aware of and know how to deal with. We will get into a lot more detail on identifying and removing link-based penalties in a later chapter, but for now we will stick within the context of this chapter.

How this affects your work as an SEO

You need to be able to identify links from pages which may be low quality in the eyes of Google. You also need to be able to spot low-quality pages when identifying possible link targets. We will explore a method for identifying large numbers of low-quality links in a link profile later on.

The quality of other websites being linked to from that page

There is the concept of being within a “bad neighborhood” when it comes to your link profile. This stems from the idea that if you are seen to be clustered and associated with a load of other low-quality websites, your website could be hurt and the trust lowered. One way to get into a bad neighborhood is to get links from the same places as low-quality, spammy websites. So if your website is linked to from the same page as 25 other websites, most of which are low quality, it isn’t a good signal to send to Google.

This ties in with your analysis of the number of outgoing links on a page which we discussed earlier. Quite often, you will find that pages with very high numbers of outgoing links will have lower editorial standards. This naturally means that they are more likely to be linking to lower-quality websites. There is also the possibility that the list of links isn’t checked very often for quality.

You definitely want to avoid instances of your website getting links from the same pages as low-quality websites. This helps Google see that you are a genuine website that doesn’t partake in any low-quality link building. If you find one or two instances of getting these types of links, then you probably will not have any issues. But if you find that you are getting many of your links from low-quality pages and bad neighborhoods, then you will want to take a closer look and see if these links are hurting you.

How this affects your work as an SEO

It can be hard to investigate the quality of every website being linked to from the page you are considering as a link target. You could do some scraping and assess the quality of outgoing links using some metrics, but doing this at scale can be quite intensive and take a lot of time. What I’d advise is trying to develop your gut feeling and instincts for link building. Many experienced link builders can look at a page and know right away if the outgoing links point to low-quality websites. This gut feeling only comes with time and practice.

Personally, if I look at a page of links and it looks like a link exchange page that doesn’t appeal to me as a user, it probably isn’t a high-quality page. I’d also look for lots of exact match keyword links to other websites, which is a tell-tale sign of low editorial standards.

Again, it can help to put yourself in the position of a user and assess whether the page is genuinely useful or not.

Number of incoming links to the page

If the page you are getting a link from has lots of other links pointing at it, then that gives the page a level of authority that is then passed on to your website. Chances are, if the page is a genuinely good resource, it will accrue links over time, giving it a level of link equity that many spammy pages will never get. Therefore, a link from a page with lots of link equity is going to be far more valuable to you.

At the same time, if this page is a genuinely good resource, the editorial standards will be a lot higher, and you’ll have a tougher time getting your link placed. This is actually a good thing: the harder a link is to get, the more valuable it usually is.

How this affects your work as an SEO

When you are looking at a page as a possible link target, take a quick look at a few metrics to get a feel for how strong that page is and how many links it has. By far, the quickest way to do this is to have a few extensions or plugins added to your browser that can instantly give you some info. For example, if you have the Moz Toolbar installed, you can get a quick measure of the Page Authority and the number of links Moz has discovered pointing to that page.

Number of incoming links to the domain

This is similar to the above factor, but looking at the number of links pointing to the domain as a whole instead. The more quality links a domain has, the more likely it is to be a high-quality website.

Age of the domain

I’m not sure, personally, whether the age of a domain is strictly a factor, but with age comes authority if the website is high quality. Also, if you get a link from a brand-new domain, that domain is naturally not going to be very strong, as it has not had time to attract many links. The reality is that you can’t affect the age of a domain, so you shouldn’t worry about it too much. The only way you could realistically use it is as a way to filter a huge set of possible link targets. For example, you could filter link targets to only show ones which are more than two years old, which may give you a slightly higher-quality set of results.

How this affects your work as an SEO

As mentioned, you can’t really affect this factor, so it’s generally something you shouldn’t worry too much about. You can use it as a way to filter large sets of link targets, but there are many other better metrics to use rather than domain age.

Link from within a PDF

Within a PDF file, you can link out to external websites, much in the same way you can on a normal webpage. If this PDF is accessible on the web, the search engines are capable of crawling it and finding the links.

How this affects your work as an SEO

In most cases, your day-to-day work will probably not be affected much, given that most link building techniques involve standard links on webpages. But if you work in an industry where PDFs are regularly created and distributed (e.g., white papers), you should take the time to make sure you include links and that they’re pointing to the right pages.

In this case, you can also take advantage of various websites that offer submission of PDFs or white papers to get more links. This can work well because some of these sites may not usually link to you from a standard web page. You need to ensure you’re submitting to relevant and quality websites; otherwise, the links you get are not likely to make much difference to you in terms of rankings or traffic.

The page being crawlable by the search engines

This is a big one. If the search engines never find the page where your link is placed, it will never count. This is usually not a problem, but it is something you should be aware of. The main way a page can be blocked is with a robots.txt file, so you should get into the habit of checking that the pages you get links from are crawlable; a simple JavaScript bookmarklet, or a quick look at the site’s robots.txt file, will tell you whether a page is blocked.
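
For example, a rule like the following in a site’s robots.txt file (a generic illustration with a placeholder path) would stop search engines from crawling anything under /resources/, so a link to you on one of those pages might never be discovered:

User-agent: *
Disallow: /resources/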

There are other ways that a page may be blocked from search engines, keeping them from discovering your links. For example, if a page has elements such as JavaScript or AJAX, it is possible that search engines may not be able to crawl those elements. If your link is inside one of these elements, it may never be discovered and counted.

In general, the search engines are getting much better at discovering links and content within these elements, so it isn’t something to worry about too much, but you should be aware of it.

To check whether or not a page is cached by Google, you can simply type “cache:” before the URL in your browser’s address bar or the Google search box (for example, cache:www.example.com). If the page is cached, you will see a copy of it; if it isn’t, you’ll see an error page instead.

Elements of a link that affect its quality

Above, we have looked at the elements of a page that can affect the quality of a link. We must also consider what elements of a link itself the search engines can use to assess its quality and relevance. They can then decide how much link equity to pass across that link.

As mentioned above, many of these elements are part of the “reasonable surfer” model and may include things such as:

  • The position of the link on the page (e.g., in the body, footer, or sidebar)
  • Font size/color of the link
  • If the link is within a list, and the position within that list
  • If the link is text or an image, and if it is an image, how big that image is
  • Number of words used as the anchor text

There are more, and we’ll look at a few in more detail below. Here is the basic anatomy of a link:
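
As a simple sketch (with a placeholder URL), a standard text link looks like this in HTML:

<a href="https://www.example.com/page">Anchor Text</a>

The href attribute holds the URL being linked to, and the text between the opening and closing tags is the anchor text.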

URL

The most important part of a link is the URL that is contained within it. If the URL is one that points to your website, then you’ve built a link. At first glance, you may not realize that the URL can affect the quality and trust that Google puts into that link, but in fact it can have quite a big effect.

For example, the link may point to a URL that:

  • Goes through lots of redirects
  • Is blocked by a robots.txt file
  • Is a spammy page (e.g., keyword-stuffed, sells links, or machine-generated)
  • Contains viruses or malware
  • Contains characters that Google can’t/won’t crawl
  • Contains extra tracking parameters at the end of the URL

All of these things can alter the way that Google handles that link. It could choose not to follow the link, or it could follow the link, but choose not to pass any PageRank across it. In extreme cases, such as linking to spammy pages or malware, Google may even choose to penalize the page containing the link to protect their users. Google does not want its users to visit pages that link to spam and malware, so it may decide to take those pages out of its index or make them very hard to find.

How this affects your work as an SEO

In general, you probably don’t need to worry too much on a daily basis about this stuff, but it is certainly something you need to be aware of. For example, if you’re linking out to other websites from your own, you really need to make sure that the page you’re linking to is good quality. This is common sense, really, but SEOs tend to take it a lot more seriously when they realize that they could receive a penalty if they don’t pay attention!

In terms of getting links, there are a few things you can do to make your links as clean as possible:

  • Avoid getting links to pages that redirect to others—certainly avoid linking to a page that has a 302 redirect because Google doesn’t tend to pass PageRank across these unless they’re in place for a long time
  • Avoid linking to pages that have tracking parameters on the end, because sometimes Google will index two copies of the same page and the link equity will be split. If you absolutely can’t avoid doing this, then you can use a rel=canonical tag to tell Google which URL is the canonical so that they pass the link equity across to that version
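
As a sketch of that second point (with placeholder URLs), the version of the page carrying tracking parameters would declare the clean URL as canonical in its <head>:

<link rel="canonical" href="https://www.example.com/page" />

Google should then consolidate the link equity from https://www.example.com/page?utm_source=partner into the clean version.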

Position of the link on a page

As a user, you are probably more likely to click on links in the middle of the page than in the footer. Google understands this, and in 2004 it filed a patent which was covered very well by Bill Slawski. The patent outlined a model which became known as the “reasonable surfer” model (which we briefly mentioned earlier), and it included the following:

“Systems and methods consistent with the principles of the invention may provide a reasonable surfer model that indicates that when a surfer accesses a document with a set of links, the surfer will follow some of the links with higher probability than others.

This reasonable surfer model reflects the fact that not all of the links associated with a document are equally likely to be followed. Examples of unlikely followed links may include “Terms of Service” links, banner advertisements, and links unrelated to the document.”

Source: http://www.seobythesea.com/2010/05/googles-reasonable-surfer-how-the-value-of-a-link-may-differ-based-upon-link-and-document-features-and-user-data/

A diagram from Moz helps explain this further: with crawling technology improving, the search engines are able to determine the position of a link on a page as a user would see it and, therefore, treat it appropriately.

If you’re a blogger and you want to share a really good resource with your readers, you are unlikely to put the link in the footer, where very few of them will actually look. Instead, you’re likely to place it front and center in your post so that as many people as possible see it and click on it. Now, compare this to a link in your footer pointing to your earnings disclosure page. It seems a little unfair to pass the same amount of link equity to both pages, right? You’d want to pass more to the genuinely good resource than to a standard page that users don’t worry about too much.

Anchor text

For SEOs, this is probably second in importance only to the URL, particularly as Google has historically put so much weight on it as a ranking signal, even though, arguably, it isn’t as strong a signal as it used to be.

Historically, SEOs have worked very hard to make anchor text of incoming links the same as the keywords which they want to rank for. So, if you wanted to rank for “car insurance,” you’d try to get a link that has “car insurance” as the anchor text.

However, since the rollout of Penguin into search results, SEOs have started to be a lot more cautious with their approach to anchor text. Many SEOs reported that a high proportion of unnatural anchor text in a link profile led to a penalty from Google after Penguin was launched.

The truth is that an average blogger, webmaster, or Internet user will NOT link to you using your exact keywords. It is even more unlikely that lots of them will! Google seems to be finally picking up on this and hitting websites that have over-done their anchor text targeting.

Ultimately, you want the anchor text in your link profile to be a varied mix of words. Some of it should be keyword-focused, some of it focused on the brand, and some of it not focused on anything at all. This helps reduce the chance of you being put on Google’s radar for having unnatural links.

Nofollow vs. followed

The nofollow attribute was adopted in 2005 by Yahoo, Google, and MSN (now Bing) and was intended to tell the search engines when a webmaster didn’t trust the website they were linking to. It was also intended to be a way of declaring paid links, such as advertising.

In terms of the quality of a link, if it has the nofollow attribute applied, it shouldn’t pass any PageRank. This effectively means that nofollow links are not counted by Google and shouldn’t make any difference when it comes to organic search results.

Therefore, when building links, you should always try to get links that are followed, which means they should help you with ranking better. Having said that, having a few nofollow links in your profile is natural, and you should also think of the other benefit of a link: traffic. If a link is nofollow but receives lots of targeted traffic, then it is worth building.
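
For reference, a nofollowed link is just a standard link with a rel attribute added (placeholder URL):

<a href="https://www.example.com/" rel="nofollow">Anchor Text</a>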

Link title

If you’re not very familiar with it, the link title is an HTML attribute that can be added to a link to describe it in more detail.

The intention is to provide more context about the link, particularly for accessibility, giving people extra information if they need it. If you hover over a link without clicking it, most modern browsers will display the link title, much in the same way they’d show the ALT text of an image. Note that it is not meant to duplicate the anchor text; it is an aid to help describe the link.
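
A quick sketch of the title attribute in use (with placeholder values):

<a href="https://www.example.com/guide" title="A beginner's guide to building links">our guide</a>

Hovering over “our guide” would display the longer description, giving users extra context before they click.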

In terms of SEO, the link title doesn’t appear to carry much weight at all when it comes to rankings. In fact, Google appeared to confirm as much at PubCon in 2005, according to a forum thread from the time. Obviously that was a few years ago now, but my own testing seems to confirm it as well.

Text link vs. image link

This section so far has been discussing text-based links, by which we mean a link that has anchor text containing standard letters or numbers. It is also possible to get links directly from images. The HTML for this looks slightly different:
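
<!-- A generic example, with placeholder URLs -->
<a href="https://www.example.com/">
  <img src="https://www.example.com/image.jpg" alt="Example Alt Text" />
</a>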

Notice the addition of the <img> tag, whose src attribute contains the image itself. Also note that there is no anchor text, as we’d usually find with a text link. Instead, the ALT text (in this example, the words “Example Alt Text”) is used.

My limited testing on this has shown that the ALT text acts in a similar way to anchor text, but doesn’t appear to be as powerful.

Link contained within JavaScript/AJAX/Flash

In the past, the search engines have struggled with crawling certain web technologies, such as JavaScript, Flash and AJAX. They simply didn’t have the resources or technology to crawl through these relatively advanced pieces of code. These pieces of code were mainly designed for users with full browsers capable of rendering them. For a single user who can interact with a web page, it’s pretty straightforward for them to execute things like JavaScript and Flash. However, a search engine crawler isn’t like a standard web browser and doesn’t interact with a page the way a user does.

This meant that if a link to your website was contained within a piece of JavaScript code, it was possible that the search engines would never see it, meaning your link would not be counted. Believe it or not, this actually used to be a good way of intentionally stopping them from crawling certain links.

However, the search engines and the technology they use have developed quite a bit, and they are now more capable of understanding things like JavaScript. They can sometimes execute it and find what happens next, such as links and content being loaded. In May 2014, Google released a blog post explicitly stating that they were working to get better at understanding websites that use JavaScript. They also released a new feature in Google Webmaster Tools so that we could better see when Google was having problems with JavaScript.

But this doesn’t mean we can stop paying attention. My advice is still to make links as clean as possible and to make it straightforward for the search engines to find them. This means building them in simple HTML wherever possible.
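
As a simple sketch of the difference (placeholder URL), the first “link” below exists only inside JavaScript and may never be discovered, while the second is a plain HTML anchor that any crawler can follow:

<!-- Risky: the destination URL is only reachable by executing the script -->
<span onclick="window.location='https://www.example.com/page'">Read more</span>

<!-- Clean: a standard HTML link -->
<a href="https://www.example.com/page">Read more</a>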

How this affects your work as an SEO

You should know how to check whether a search engine can find your link. This is actually pretty straightforward, and there are a few methods:

  • Disabling Flash, JavaScript, and AJAX in your browser using a tool like the Web Developer Toolbar for Chrome and Firefox
  • Checking the cache of your page
  • Looking at the source code and seeing if the linked-to page is there and easy to understand

Text surrounding the link

There was some hot debate around this topic toward the end of 2012, mainly fueled by a Whiteboard Friday on Moz in which Rand predicted that anchor text, as a signal, would become weaker. In the video, Rand gave a number of examples of what appeared to be strange rankings for certain websites that didn’t have any exact-match anchor text for the keyword being searched. In Rand’s opinion, a possible reason for this could be co-occurrence of related keywords, which Google may use as another signal.

Bill Slawski followed up with a post giving a number of alternative reasons why these apparently strange rankings may occur, and Joshua Giardino then wrote another great post diving into the topic in a lot of detail. I’d recommend a good read of both.

Having said all of that, there is some belief that Google could certainly use the text surrounding a link to infer relevance, particularly where the anchor text isn’t very descriptive (such as “click here”). If you’re building links, you may not always have control of the anchor text itself, let alone the content surrounding it. But if you do, it’s worth thinking about how you can surround the link with related keywords and possibly tone down the use of exact keywords in the anchor text itself.

This has been a chapter from Paddy Moogan’s e-book, The Linkbuilding Book, available for full purchase and download below.

Buy The Linkbuilding Book

Moz Pro subscribers will receive an additional 25% discount via the Moz Perks page. If you’re not a Pro subscriber, you can always take a free 30-day trial to access your very own discount code!



Beyond Responsive: Design and Development Trends for Adaptable Marketers

Posted by Carson-Ward

A friend of mine recently asked me to review and explain a series of site recommendations sent over by a well-known digital marketing agency with roots in SEO. We talked through the (generally good) recommendations for content and search optimization, and then we got to this:

“* Mobile accounts for 53% of your traffic. We recommend building a mobile-friendly responsive website. Google recommends using responsive design so that your site looks good on all devices, and it may help increase mobile rankings.”

And that was it. A bullet point that says “build a responsive site” is like getting a home inspection back with a bunch of minor repairs and a bullet point that says, “Also, build a new house with modern specs.”

We, as professional marketers, need to realize that this advice is not good enough. We’re not helping anyone with broad statements that give no guidance on where to start or what to think about. Google might recommend responsive, but that doesn’t mean it’s the only option or that it’s always the right option. Even if it is the right option, we need to have some idea on how to do responsive right.

If we’re going to tell people to redesign their websites, we’d better have something more profound than a single bullet point on a 20-page document. Implying that “Google will reward you for responsive” and leaving it at that could do more harm than good. It also misses a tremendous opportunity to help clients build a great website with an awesome user experience.

It’s fine if you’re not well-versed in site architecture, design, user experience, and/or user intent. Just don’t mention a gargantuan project like a site redesign if all you have to say is “build a responsive site, because Google.”

This post is a look at how companies are handling the future of the web, for better or worse. My goal is to help SEOs, content marketers, and all other digital marketers to speak more intelligently about responsive, mobile, and other design and development trends.

Don’t follow the crowd: you risk going full Windows 8.

We learned some important lessons about cross-platform design from the disaster that was Windows 8. It was a mess for lots of reasons – and yet I see the same people who mocked Windows 8 beginning to make some of the same mistakes on their websites. For those who never used Windows 8 in its early days, let me explain why it was so bad.

  • “Metro” (or “Modern” or whatever) shunned navigation for modern simplicity. It featured big icons – and no clear way to do more than click icons. Desktop users hated it.
  • There were a bunch of useful features and options most people never knew about hidden in sub-navigation. Windows 8 could actually do some cool new stuff – but few people knew it could, because it wasn’t visible.
  • Users didn’t know how to do what they wanted. Menus and buttons were shunned in favor of bloated pictures of app icons. Common features like the start menu, control panel, and file search were suddenly moved to non-standard places. Thousands of people turned to Google every month to figure out how to do simple things like turn their computer off and run a search. That’s RIDICULOUS.

A small sample of people asking Google to help them navigate a Microsoft product. Also interesting: Windows 7 has always had lower searches for these terms despite 4-5x the number of active users.

Now here we are, three years later, watching websites go full Windows 8 on their users. Menus are scaled down into little hamburgers on desktop. Don’t do that! You’re alienating your desktop users just like Windows 8 did. Users have to click two or three times instead of just once to find what they need in your menu. And don’t kid yourself: you’re not Windows. No one’s going to ask Google how to use your site’s nav. They’re just going to look at result number two.

Let’s look at an example of making the Windows 8 mistake on the web. Let’s go big. Let’s go Honda.

Honda’s corporate homepage is what happens when you take a design trend and try to force it on your site without thinking about users or why they’re coming. What does the site sell? Dreams? Clouds? Stock images? The text on the page could be placed on almost any corporate site in the world. Honda has gone full Windows 8 on their corporate site.

Aside: I’m picking on Honda because I know they can take a beating here and keep running – just like my CR-V (which I love).

I’m obviously not a fan of the expanding mobile-style navigational menu on desktop, but Honda blew me away with an overly-complicated mess of a menu.

I understand the company makes major engines, boats, and aircraft parts. Having lots of parts to your business doesn’t mean that each part deserves equal emphasis. Honda needs to step back and ask what users want when they get to the site, and realize that it’s unfeasible to serve every intent – especially if it wants to maintain its simplistic design.

What about the competition?

Toyota and other competitors know most users visiting the site want to look at automobile options or find a dealer. Both Honda and Toyota have sites for racing, and both companies sell industrial engines. But Toyota understands that most users landing on Toyota.com want the consumer brand, and that racing enthusiasts will Google “Toyota racing” instead. There’s also a link way down in the footer.

The exception to the rule of avoiding what I’m calling mobile-only design might be a design firm. Here’s Big Spaceship’s site. They’re a design agency that knows more about web design than I ever will. It’s a great site, and it’s probably going to get them sales. Do not copy them. Don’t imitate a design agency’s website unless you are a design agency. I’m talking to you, Honda.

When a user visits a design firm’s site, they want to see the company’s skills. Design agencies like Big Spaceship are wise to immediately showcase and sell users on their design capabilities. In essence, the home page acts as a full-page product shot and sales page.

I’ve seen SEO/Design/Marketing agencies create what are essentially design-only websites, and then wonder why no one is interested in their SEO services. I’ve seen product companies use a logo + hamburger menu + massive product image layout and have problems selling anything but the product featured in the first image. That’s what you get for copying the cool kids.

It only makes sense to show one thing if you only do one thing. Good design in Amazon’s case is very different. Amazon has millions of products, and they don’t want people clicking through categories, choosing the wrong ones, and getting lost or frustrated. The search function is key with a mega-site: thus the not-so-pretty search bar on every Amazon page.

Align your users’ intents with nav items and landing page content. Show them how to browse or search your goods and services without making them click unnecessarily. Keep browse-able items to a manageable level, and make sure you have a simple click path to things people want to do on your site. Look at how Medium aligns intent with design.

Simplicity works for Medium posts: the user wants to read the post they’ve landed on, and the focus of the site’s design is on reading the post. Medium will hold off on getting you to read or share more until you’re done reading. Most of those calls to action are at the bottom of the article. Now look at the home page.

Smart. When someone lands on a post, they want to read the post. So show them the post! When someone lands on the home page, their intents vary. Give them options that aren’t hidden behind a hamburger menu. Show them what they can do.

Figure out what your users want to know or see, and build those elements prominently into the site. Don’t blindly copy web design, or you risk following Windows 8 in alienating your core users, especially on desktop.

So how do you know what your users want to see?

1. Run on-page surveys

One of the best ways to figure out what people are looking for is to ask them. Don’t continually annoy people with popups, but if you’re just starting out it’s worth gathering the information up-front. Ask people what they’re looking for when they visit your site. We use Qualaroo, but there are lots of simple tools that can be implemented quickly.

If you already know what people are looking for, you should make sure you know what their primary considerations are for buying. Does price matter to them more than power or quality? If price matters most to your buyers, price should be featured prominently in the design.

2. Use split tests to understand intent

There are lots of reasons to run split tests, and the focus should usually be on conversion. The problem is that sometimes we focus exclusively on which version converted better, and forget to ask why.

We use Optimizely, and it’s awesome. We also keep a log of test results with our pre-test hypothesis, pages tested, a link to results, and why we think it won. Then we try to think about the implications if we’re right about our conclusion.

  • Where else might we be making the mistake of the losing version?
  • What other pages are impacted if we’re right about what our users want?
  • Is there content we can create to solve the users’ problems? Are there key pages or explanations that are missing?

It’s a little bit dangerous to over-apply a single test’s conclusions on the whole site, so this usually leads to more testing. After three or four tests you might be ready to make moderate changes without running a split test, allowing you to move on to the next big test.

3. Look at in-market segments

Try to figure out where your users are mentally by looking at in-market segments. Don’t mistake in-market segments for what users are trying to buy. Instead, use them to understand what else the user has been looking at. Take a home services site we work on, for example: its visitors’ top in-market segments include real estate, employment, hotels, new cars, and home furniture.

So what is this telling us? What do real estate, employment, hotels, new cars, and home furniture have in common? They’re all things people need when they’re moving. If we’re smart about it, our site should have messaging and navigation options clearly intended for people who are moving. Maybe moving guides would be a good content idea. These are all opportunities that go unnoticed if we’re only focused on what people want to buy.

Some sites are going back to mobile sites, and that’s okay

It’s been said that Google “likes” responsive design and will reward responsive sites with higher search rankings. I disagree on that second point. Google likes sites that give the user what they want, regardless of the technology used.

Yes, Google has recommended responsive design. So do I, but I do so because it’s by far the easiest multi-device approach to maintain and the hardest to completely mess up. That doesn’t mean it’s the only way, and that does not mean that Google will penalize a site for providing a superior mobile experience in a different way.
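
To be clear about terms: a responsive site sends the same HTML to every device and adapts the layout with CSS media queries. A minimal sketch (the class name is a placeholder):

<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  .nav a { display: inline-block; } /* menu items sit side by side on large screens */
  @media (max-width: 640px) {
    .nav a { display: block; }      /* menu items stack on small screens */
  }
</style>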

There are lots of benefits to mobile sites. On some sites the intent and behavior of mobile users is different enough from desktop users that it justifies creating a mobile-specific experience. It’s also compatible with the goal of a fast-loading site.

Responsive sites are generally much slower to load, according to a report from The Search Agency.

You can and should make your site fast with responsive, but there are a host of reasons most responsive sites end up slower on mobile. Both dynamically-served sites and mobile sites naturally lend themselves to building with speed in mind. A mobile-specific site can also offer an experience that is ideal for the user intent at that time.

This past July, Cindy Krum talked about “mobile intent” during her MozCon presentation. It might sound like a buzzword, but it’s true that mobile users are in a different spot. They’re not looking to compare as much; they want to either buy quickly or get some quick details on the product.

If you’re thinking about doing a mobile site, make sure you have lots of people ready to build it out correctly and maintain it. Don’t underestimate the dev time it will take to make the entire site work. You’ll need SEOs who know how to set up rel tags and ideally make sure the mobile site has an identical URL structure. You’ll need lots of QA to make sure all your page types are being served correctly.

Some SEOs will say that a mobile sub-domain or sub-folder is worse for SEO because links to one won’t count as links to the other. Nonsense! That’s what the rel=”canonical” and rel=”alternate” tags are for. Just like fretting over the non-www version 301 redirecting to the www version, these are things that made a big difference at one point but are no longer as critical as they once were. Google is smart enough to understand what’s happening – unless you don’t implement the tags correctly.
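
As a sketch of the standard annotations (with placeholder URLs), the desktop page declares its mobile equivalent, and the mobile page points back at the desktop version:

<!-- On https://www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="https://m.example.com/page">

<!-- On https://m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page">

Implemented this way, Google understands the two URLs are the same page, and link signals should be consolidated between them.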

Responsive design is still a better option for most companies, but there’s no reason to be dogmatic about it. There’s a reason Google gives you three options. A mobile site can work for larger companies, and is often the best option for mega e-commerce sites.

Web development continues to evolve – including JavaScript libraries

JavaScript usage is one place where SEOs are often guilty of giving dated advice. SEO should enable great content to appear in front of more people in more searches; it should not be used to restrict useful content-creation tools unless absolutely necessary.

Traditional SEO wisdom has always been to avoid putting any content into JavaScript that we want the crawlers to see. This is outdated advice for websites in 2015. Libraries like React and Angular can be amazing tools. They’re full of features, fun to use, and can make your website feel faster and more responsive.

If Google wants to reward a positive user experience, and if JavaScript can help site owners provide a stellar user experience, then SEOs should embrace JavaScript. Rather than lobbying against any JavaScript on the site, it’s time to get a little more sophisticated in our approach and help the team use their tools correctly.

React and Angular can definitely make your dynamic content more fun to use, but they also make heavy use of AJAX-like client-side execution, which Google doesn’t really understand (yet). Developers and SEOs should be aware of how to make it work.

Making AJAX Google-friendly could be its own post. In fact, there are already several great posts. Google also has some great guides – make sure to check the linked resources, too. One small warning: there’s a lot of outdated info out there on the topic.

You can get around a lot of the nitty-gritty technical SEO using things like Prerender or V8. Try to find a tool that will automatically generate a crawlable version while using AJAX. Communicate with your developers to find a solution that works with your setup.
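
One pattern that keeps AJAX navigation crawlable is progressive enhancement: render real links in the HTML, then let the script intercept them. A hedged sketch, with placeholder URLs and a hypothetical loadResultsViaAjax helper:

<ul id="results">
  <li><a href="/speed-test/results">See your results</a></li>
</ul>
<script>
  // Intercept the click and load content with AJAX; crawlers that
  // don't execute the script can still follow the real href.
  document.querySelector('#results a').addEventListener('click', function (e) {
    e.preventDefault();
    loadResultsViaAjax(this.href); // hypothetical helper
  });
</script>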

A humbling example

As I said, it’s important to communicate with developers before construction begins. I’ll use a painful recent experience as an example. We just built a React-based tool that helps beginners estimate how much internet speed they need. It immediately redirected all visitors to a URL with a hash fragment, the rest of the survey sat behind that fragment, and none of the text could be crawled without client-side execution.

Oops.

We built an awesome tool, and then hid it from Google. Someone fire the guy who missed that… just don’t tell anyone it was me. We used React.js here, and it was a blast. We’ve also received great feedback from users. The lesson here is not to avoid React and AJAX. The lesson here is to communicate SEO requirements to the developers early. The fix will be done soon, but it took a lot longer than if I’d done my due diligence beforehand.

Understanding Google-friendly JavaScript implementation is the job of every SEO. Other digital marketers should at least be aware that there’s a potential problem and a technical solution.

I love interactive tools that are fast and useful. SEOs should be facilitating the building of things that are awesome. That means helping find solutions rather than lobbying against an entire toolset that’s widely used on the modern web.

Don’t forget about indexable apps

Google can now index and rank apps, and they have some decent guidelines on how to do it. It’s possible that app-based companies with an exclusively mobile client base don’t even need a traditional website.

Most companies will still want to build and maintain websites, but be open to the idea that a responsive site might not be the best option for a small mobile game developer. The right option might instead be to add links to content and discussion and then support deep linking within the app.

Even if app-only isn’t the right option, consider that content within apps could be a more engaging medium for people who have already installed the app. For example, a discussion board for players of the game might work better within the game app itself. It could definitely feel more engaging and immersive if users never have to leave the app to ask a fellow user a question or rant about the latest update.

Final thoughts

A site might look awesome when you shrink and expand the window while presenting the design to the C-suite, but if the real decision makers, the users, don’t know what a cheeseburger menu is, you’re not going to sell very many stock photos of Earth. Responsive design is a great option – often the right option – but it isn’t the only option. Hopefully this post can help get some thoughts started on how to do responsive right.

I’m absolutely not saying that responsive is dead. My point is that if our advice drifts into design and development we should be able to give more concrete advice. Don’t just build websites that respond to screen size. Build websites that respond immediately to your customer’s needs.



Remarketing to People That Have Already Visited Your Website – Whiteboard Friday

Posted by randfish

Someone visits your website once, doesn’t convert, and goes on with their day. How in the world do you win them back? Well, the answer may lie in a topic we haven’t discussed for a while: remarketing.

In today’s Whiteboard Friday, Rand discusses how to get back in front of folks who have visited your site or engaged with your industry, new options in retargeted ads, and offers some best practices to follow.


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about remarketing to people who’ve already visited your website and then left, or already interacted with your niche, your service, your community, and then gone off somewhere else.

This is actually pretty interesting. A lot of times when we talk about the organic marketing funnel—someone performs a search, they follow you on a social network, they see a tweet or a Facebook update from you, and they come to your website—we focus a lot on trying to convert that person either to a customer or to signing up for an email newsletter, subscribing to something, following you on a social network, or becoming a part of your community.

But there’s actually a lot of data suggesting that the overwhelming majority of people who visit your website… I’ll use Fitbit as an example here. Brad, one of Moz’s investors and also Fitbit’s investor, sent me a Fitbit recently, which is very nice. What am I at today? Let’s see, 5696 steps.

A lot of people who visit Fitbit’s website, I don’t actually know this for sure, but probably about a tenth of a percent of them are converting to a sale or actually buying one of these things. Then, 99.9% are going somewhere else. The idea here is: What can we do to capture this audience again, to get in front of them? We know that at some point they were interested in our product or our service. We want to get in front of them again.

Retargeting

This is something we’ve covered a little bit, but there’s actually a bunch of new options that have surfaced from the advertising and web marketing world that we should probably be aware of. A few of these include things like classic retargeting, aka we follow them around the web like a lost puppy dog. The ads that you see on the side of everything after you looked at that one pair of Zappos shoes that one time, and now you just can’t seem to get them out of your head or your browser. Maybe someone’s visiting The Next Web and if page X over here on Fitbit’s website was visited in the last 1, 2, 30, or 60 days, we want to show this particular ad with a bid price of XYZ.

This is kind of cool. I think where retargeting has really become more sophisticated is in some of the options. We can filter and configure and modify this and model it in such a way that we can say if you visited this page but not these other pages, or if you visited these three pages in a row, we want to show you this. If you interacted on our site in this particular way, we can now do things with apps. If we know that someone has interacted with an app, we can start to do retargeting and remarketing personalized to them.

Moz has used a service called AdRoll in the past. There are a number of them out there. Obviously, Google has a pretty powerful display network around this, too.

RLSA (Remarketed Lists for Search Ads)

Another thing that has been around for a couple of years but we haven’t talked about too much on Whiteboard Friday here is RLSA. That’s remarketed lists for search ads.

This means that if we know Sonja visited the Tory Burch page on Fitbit—I think it’s Tory Burch, the fashion designer, who designs a special kind of Fitbit—and then searched for bracelets or watches, we can act on that. Bracelets and watches are keywords Fitbit would never ever want to bid on, because Fitbit isn’t in the fashion category. But since Sonja has previously visited a page on Fitbit—potentially any page on Fitbit’s website—now that she’s doing these fashion-related searches, we might say, “You know what? Let’s show our Tory Burch ad specifically for that product, which is a fashion product, in the search results in the ads there.” That’s pretty cool.

We can customize this in a ton of ways. You can imagine a bunch of different uses based on what people visited and then what they searched for. Of course, you can bid a lot higher for those types of ads because you know the prior behavior. You can also expect a much higher click-through rate and probably a much higher conversion rate from those ads because that person has already visited your website and is familiar with your product or your brand.

If you have their email address…

If you have an email address, a social ID, an app ID, or even a phone number, you can use Facebook’s and Twitter’s custom audiences, which are pretty cool: they let you target ads specifically to people on Facebook or Twitter whose email addresses you’ve uploaded. If a lot of people have signed up for your email newsletter or have started your product purchase process—maybe they went to Fitbit, entered their email address to sign up, and then never completed a purchase—we can get back in front of them using Facebook or Twitter custom audiences or using AdWords.

Actually, just a couple of days before we filmed this (and probably a week or two before this Whiteboard Friday comes out), Google introduced something called Customer Match in AdWords. You can upload an email list and then get in front of those people specifically when they’re performing searches or across the display ad network.

You can do those via places like Retargeter and AdRoll and Google. Those are the CRM retargeting models and services. That’s pretty cool.

Or their social ID…

If we have social IDs, for example, if you Facebook connect to Fitbit or if you connected via Twitter, I think you can also use Facebook’s connection on Instagram for Instagram ads now if you’re part of Instagram’s ad program. A bunch of options there as well.

A few best practices before we finish here.

  1. First off, whenever you’re doing any type of remarketing or retargeting through any of these types of services, make sure that you have smart burn pixels and burn pages, meaning that if someone finishes the checkout at Fitbit, you don’t show them the ad anymore. You don’t want to keep marketing to someone who’s already completed that conversion process. Likewise, you probably want to have a burn after a certain number of days. If you can see that after 8 days, or 12 days, or 15 days you’re just getting very low click-through and very low conversion, you know what, maybe it’s time to give up on the ad. (There’s a minimal sketch of this suppression logic after this list.)
  2. You also want to be smart about limiting the exposure and/or changing the message. If someone has seen your ad four, five, or six times as they’re browsing across the web, maybe you want to say, “Hey, let’s either give them a new message or wait for them to visit again before we keep trying to advertise.” Otherwise, we could be burning dollars and bids that could be better spent on other customers or other marketing channels.
  3. We want to customize based on behavior. I think one of the big advancements here is that remarketing, when it initially came out, used to be pretty dumb and pretty basic. It was, “Did they visit your site? Then you can show them this one ad.” Now people have gotten way more sophisticated, and ad networks have gotten way more sophisticated. We can say, “Hey, they performed this action. We only want to be in this network. We only want to do this if they’ve done this specific group of things in a row or completed these processes.” That can really improve your click-through rates, improve your conversion rates, and improve your targeting.
  4. Don’t ever assign 100% credit to any one of these. Remember that whatever initially brought them to the website should receive at least as much credit and investment, if not more, than whatever brought them back to purchase. This is a way of recapturing folks, not of reaching them in the first place. If you assign 100% credit, what happens is that you’ll stop investing at the top of the funnel, and soon you’ll just be remarketing to the same small, shrinking group of visitors over time. That can get really dangerous.
  5. Don’t limit your ads to a sales focus only. If you know that you can convert from other sources, from content, from multiple visits, from someone signing up for an email newsletter, from someone attending an event, from participation on your platform or in your community in a certain way, you don’t need to market only the product that you’re selling. I think this is something where folks have gotten very narrow. You can see some innovative companies doing really smart stuff in retargeting and remarketing, looking earlier in their funnel and saying, “Hey, we know that 30% of people who do this activity will eventually become a customer of ours, so let’s also remarket around this activity, and we can bid a third of the price of whatever we know the conversion leads to directly.”
  6. You can also try remarketing for really creative stuff. I’ve seen it for job ads, which I think is brilliant. If someone visits your Jobs page and you’re having trouble hiring, hey, follow them around the web like a lost puppy dog. Get in front of them on their social networks. If they have been to an event of yours and you have their email address, you can get in front of them there, too.

The same goes for campaigns to influencers. I’ve seen some really creative content marketers who said, “Hey, you know what, we know that here’s a list of journalists and bloggers that we’ve reached out to. We can take that email list and upload it.” You need a minimum of a thousand email addresses for all three (Facebook, Twitter, and Google) for the CRM-style stuff, so make sure that you have that many emails before you try to upload. If you do, you can get in front of those influencers with content. If that’s leading to links and press coverage and stories, and the bid prices are low, which they often will be, you may have some big advantages there.
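
To tie a few of these best practices together, here's a minimal sketch of the suppression logic from points 1 and 2 above: burn on conversion, burn after the audience goes stale, and change the message once a frequency cap is hit. The thresholds and profile fields are illustrative, not recommendations.

```python
from datetime import datetime, timedelta

BURN_AFTER_DAYS = 12   # give up if no conversion after ~12 days (illustrative)
FREQUENCY_CAP = 5      # rotate or pause after 5 impressions (illustrative)

def next_action(profile):
    """profile: dict with 'converted', 'last_visit', 'impressions'."""
    if profile["converted"]:
        return "suppress"              # burn pixel fired at checkout
    if datetime.now() - profile["last_visit"] > timedelta(days=BURN_AFTER_DAYS):
        return "suppress"              # stale audience: stop spending
    if profile["impressions"] >= FREQUENCY_CAP:
        return "rotate_creative"       # new message before more spend
    return "serve"

profile = {"converted": False,
           "last_visit": datetime.now() - timedelta(days=4),
           "impressions": 6}
print(next_action(profile))  # rotate_creative
```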

Hopefully, I will see some very creative ads from all of you following me around the web. I look forward to discussion in the comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Campaign Tracking Without Going Crazy: Keeping Order in AdWords Optimization

Posted by anthonycoraggio

Pay-per-click advertising generates vast amounts of data, which presents us with tremendous potential for optimization and success. However, this formidable sword cuts both ways—even skilled managers can quickly find themselves adrift if tests and changes are not carefully tracked. Here’s a quick, actionable guide to keeping order in your AdWords account with a simple and professional activity log.

The philosophy of orderly management

Good AdWords management is an exacting science: every tweak and change made should be for a specific reason, with a particular goal in mind. Think in terms of the scientific method: we’re always moving forward from hypothesis, to test, to result, and back again.

When it comes time to evaluate the results of these changes and iterate to the next step, it’s very important to know exactly what changes were made (and when). Likewise, when the numbers break unexpectedly, it’s vital to be able to eliminate as many variables as possible, as quickly as possible, in our analysis.

To be able to do that, we need a system that defines when and where these changes happened, and clearly explains the nature of the change. Beyond that, we also need to keep it user-friendly for two very important reasons. First, many of us operate in collaborative environments, so this information needs to be readily accessible to teammates, supervisors, and clients that may need it. Second, it’s vital to remember that the most elaborate, brilliantly-detailed tracking plan is going to be useless if you don’t actually use it consistently. To get started building a good system, let’s take a look at the tools we have at hand.

Tools of the trade

AdWords changelog

The first and most obvious tool that might come to mind is the AdWords native changelog, but this should be viewed as a tool of last resort in most cases. Anyone who has had to dig through that information line-by-line trying to diagnose an issue will tell you that it’s less than optimal, even with the improved filtering options Google has provided. The crux of the issue here is that there is no indicator of intent: why was the change made? Was it a considered part of a test? What other changes were made as part of the same move?

That said, the changelog can be a handy feature when it comes to quick refreshers on a former budget cap or tracing a trend in bids—especially when downloaded to Excel. Just don’t rely on it for everything!

Google Analytics annotations

This is our second UI option, and a key one. Obviously this isn’t in AdWords itself (though that would be a lovely feature), but if you spend even half your time in online marketing, chances are you’ve got GA open in a second tab or window already! If you commit the effort to nothing else, do it for this. Placing annotations for major changes or tests doesn’t only help you: it provides a touchpoint for anyone else who might need to look into traffic ups and downs, and can save hours of time in the future. Note that I said “major”; remember that this is a shared system, and you can easily swamp it if you get too granular.

Spreadsheets

This is where most of my logs go, as proper coding and some simple filtering make it a breeze to find the information you need quickly. I’ll get into more detail on practical usage below, but basically this is where the when/where/why goes for future reference. My preference here is usually Google Sheets, for the simple collaboration features, but you can do just as well with a shared Excel file on OneDrive.

Project management tools

Keeping your test tracking connected to and aligned with your project management tools is always wise. There are myriad project management software tools out there, but I favor agile PM for SEM applications; Trello, Jira, Mingle, Basecamp, and more are all useful. The key here is really that your activity and test logs are easily available wherever you keep project resources, and linked from whatever cards or items are associated with a particular test. For example, if you have a task card titled “Client-128: A/B Ad Test For {Campaign>Ad Group}”, note “per task Client-128” in your activity log and link directly to that card if your tool permits it. You can also link to the activity log from the card or a project resource file if you’re using a cloud sheet, as with Google Sheets.

Creating a system & putting it all together

Now you know all the tools—here’s how to put them together. To get you started, there are two primary areas you’ll want to address with your activity log: ongoing changes/optimizations, and major planned tests.

Tracking ongoing changes: the standard activity log

The standard activity log is your rock. It’s the one place where the hundreds of changes and thoughts the human brain could never hope to perfectly recall will always be waiting, ready to answer any question you (or your client, or your boss) might come up with down the line. An activity log should, at minimum, tell us the following:

  • What happened?
  • When did it happen?
  • Who was involved?
  • Why did it happen?

If I notice an inflection point on a particular graph starting on 9/28 and need more information, I should be able to go back and see that User X paused out Campaign Y that morning, because they had spoken with the client and learned that budget was to be shifted out to Campaign Z. Instant context, and major time saved! If I want to know more, I know who to ask and how to ask the right question for a quick and productive conversation.

Ongoing optimizations and relatively small changes can stack up very quickly over time, so we also want to be sure that it’s an easy system to sort through. This is part of why I prefer to use a spreadsheet, and recommend including a couple of columns for simple filtering and searching. Placing a unique sequential ID on every item gives you a reliable point of return if you muddle up the order or dates, and a note indicating the type and magnitude of the change makes searching for the highlights far easier.

Anything you can do with your chosen tool to simplify and speed up the process is fair game, as long as you can reasonably expect others to understand what you’ve put in there. Timestamp hotkeys and coded categories (e.g., “nkw” denoting a negative keyword expansion) in particular can save headaches and encourage compliance. Finally, always keep your logs open. It’s easy to forget early on, and even the few extra clicks to open them back up when you’re in the zone can be a bigger obstacle than you might expect!
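
If a spreadsheet ever feels too manual, the same idea is trivial to script. Here's a minimal sketch of a CSV-backed activity log with the columns described above (sequential ID, timestamp, user, coded category, change, reason); the file name, helper function, and category codes are arbitrary choices for illustration:

```python
import csv
import os
from datetime import datetime

LOG_FILE = "adwords_activity_log.csv"  # arbitrary file name
FIELDS = ["id", "timestamp", "user", "code", "change", "reason"]

def log_change(user, code, change, reason):
    """Append one coded, timestamped entry with a sequential ID."""
    new_file = not os.path.exists(LOG_FILE)
    if new_file:
        next_id = 1
    else:
        with open(LOG_FILE) as existing:
            next_id = sum(1 for _ in existing)  # header is line 1
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(FIELDS)
        writer.writerow([next_id,
                         datetime.now().isoformat(timespec="seconds"),
                         user, code, change, reason])

log_change("user_x", "nkw",
           "Added negative keyword 'free' to Campaign Y",
           "Budget shifting to Campaign Z per client call (task Client-128)")
```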

Formal test tracking

When you’re conducting formal A/B or multivariate tests in your account, a higher standard of documentation is a good idea. Even if you’re not presenting this to a client formally, put together a quick record detailing the following for every major test you plan and execute:

  • Purpose. Every test should have a reason behind it. Documenting this is a good exercise in holding yourself to account on smart testing in general, but this is most important for future analysis and test iterations—it’s what sets up the “why.”
  • Hypothesis. Marketers have a reputation for playing fast and loose with statistical methods, but remember that for results you can trust, you should have a falsifiable hypothesis. Again, get this down so you can say what exactly your results do and do not prove.
  • Procedure. Exactly what it sounds like—what did you do in implementing this test? You need to record what the controlled and experimental variables were, so you can appropriately account for what might have influenced your results and what might be worth trying again differently in the future.
  • Results. Again, easy: what was the outcome? Don’t be stingy with the details here; confidence level, effect size, and the actual ad copy or landing page that was tested should be recorded for posterity and later reference. (A quick sketch of computing that confidence level follows this list.)
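
For the results entry, it helps to actually compute the confidence level rather than eyeball it. Here's a quick sketch of a two-proportion z-test on click-through rate for an A/B ad copy test; the function name and the click/impression counts are made up for illustration:

```python
from math import sqrt, erf

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test comparing the CTRs of variants A and B."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ctr_z_test(clicks_a=120, imps_a=4000,
                            clicks_b=165, imps_b=4100)
print(f"CTR A={p_a:.2%}, CTR B={p_b:.2%}, z={z:.2f}, p={p:.4f}")
# With these made-up numbers, p is roughly 0.01: significant at 95%.
```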

I like putting at least the hypothesis and results in a combined test results spreadsheet for quick future reference. Over time, as people shift through roles, what was tested a year ago can quickly fade from organizational memory. When planning your next test, you need to be able to quickly go back and see if it’s been done before, and whether it’s worth trying again. I’ve seen a lot of effort wasted on duplicate tests at companies I’ve consulted for, for this exact reason. Don’t let that be you!

I also recommend adding a quick line to your standard activity log for each action on a test (e.g., launched, finalized, paused), since these are often pretty high-impact changes and it’s helpful to have this information in your go-to spot.

Make it work

I’ll close with a brief reiteration of what I believe is the most important part of activity logging and test tracking: actually doing it. Internal adoption of any new tool or process is almost always the toughest hurdle (ask anyone who’s ever overseen a CRM implementation). As with any habit, there are a few simple behaviors that can help you make good tracking practices a reliable part of your routine:

  • Start small. It won’t hurt to start by logging just the biggest, most important activities. You’ll have an easier time remembering to do it, and you’ll soon start doing it for more and more tweaks automatically.
  • Be accountable. Even if you’re the only one touching the account, tell someone else what you’re doing and ask them to check in on you. There’s nothing like social accountability to reinforce a behavior!
  • Have a goal in mind. If you don’t feel a sense of purpose in what you’re doing, you’re probably just not going to do it. Make a pact with yourself or your team that you’ll review your activity logging one week from when you start and share thoughts and ideas on improving it. You’ve then got a clear and present point of reference for success and moving forward.

Do you have any favorite tricks or tactics for keeping good track of your SEM campaigns? Share them with us in the comments!

