
How to Use Search Analytics in Google Sheets for Better SEO Insights

Posted by mihai.aperghis

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of Moz, Inc.

As an SEO, whether you’re working in-house or handling many clients in an agency, you’ve likely been using Google Search Console’s Search Analytics for a bunch of reasons. Whether it’s diagnosing traffic and position changes or finding opportunities for optimizations and content ideas, it has been at the core of most SEOs’ toolsets.

The scope of this small guide is to give you a few ideas on how to use Search Analytics together with Google Sheets to help you in your SEO work. As with the guide on how to do competitive analysis in Excel, this one is also focused around a tool that I’ve built to help me get the most of Search Analytics: Search Analytics for Sheets.

The problem with the Search Analytics UI

Sorting out and managing data in the Google Search Console Search Analytics web UI in order to get meaningful insights is often difficult to do, and even the CSV downloads don’t make it much easier.

The main problem with the Search Analytics UI is grouping.

If you’d like to see a list of all the keywords in Search Analytics and, at the same time, get their corresponding landing pages, you can’t do that. You instead need to filter query-by-query (to see their associated landing pages), or page-by-page (to see their associated queries). And this is just one example.


Basically, with the Search Analytics UI, you can’t do any sort of grouping on a large scale. You have to filter by each keyword, each landing page, each country, etc. in order to get the data you need, which would take a LOT of time (and possibly a part of your sanity as well).

In comes the API for the save

Almost one year ago (and after quite a bit of pressure from webmasters), Google launched the official API for Search Analytics.


With it, you can do pretty much anything you can do with the web UI, with the added benefit of applying any sort of grouping and/or filtering.

Excited yet?

Imagine you can now have one column filled with keywords, the next column with their corresponding landing pages, then maybe the next one with their corresponding countries or devices, and have impressions, clicks, CTR, and positions for each combination.

Everything in one API call


| Query     | Page                        | Country | Device  | Clicks | Impressions | CTR    | Position |
|-----------|-----------------------------|---------|---------|--------|-------------|--------|----------|
| keyword 1 | https://domain.com/us/page/ | usa     | DESKTOP | 92     | 2,565       | 3.59%  | 7.3      |
| keyword 1 | https://domain.com/us/page/ | usa     | MOBILE  | 51     | 1,122       | 4.55%  | 6.2      |
| keyword 2 | https://domain.com/gb/      | gbr     | DESKTOP | 39     | 342         | 11.4%  | 3.8      |
| keyword 1 | https://domain.com/au/page/ | aus     | DESKTOP | 21     | 55          | 38.18% | 1.7      |
| keyword 3 | https://domain.com/us/page/ | usa     | MOBILE  | 20     | 122         | 16.39% | 3.6      |

Getting the data into Google Sheets

I have traditionally enjoyed using Excel but have since migrated over to Google Sheets due to its cloud nature (which means easier sharing with my co-workers) and expandability via scripts, libraries, and add-ons.

After being heavily inspired by Seer Interactive’s SEO Toolbox (an open-source Google Sheets library that offers some very nice functions for daily SEO tasks), I decided to build a Sheets script that would use the Search Analytics API.

I liked the idea of speeding up and improving my daily monitoring and diagnosing for traffic and ranking changes.

Also, using the API gave me the pretty useful option of automatically backing up GSC data once a month. (Before, you needed to do this manually, use a paid Sheets add-on, or run a Python script.)

Once things started to take shape with the script, I realized I could take this public by publishing it into an add-on.

What is Search Analytics for Sheets?

Simply put, Search Analytics for Sheets is a (completely free) Google Sheets add-on that allows you to fetch data from GSC (via its API), grouped and filtered to your liking, and create automated monthly backups.

If your interest is piqued, installing the add-on is fairly simple. Either install it from the Chrome Web Store, or:

  • Open a Google spreadsheet
  • Go to Add-ons -> Get add-ons
  • Search for Search Analytics for Sheets
  • Install it (It’ll ask you to authorize a bunch of stuff, but you can rest easy: the add-on has been reviewed by Google, and no data is saved/monitored/used in any way except to grab it and put it in your spreadsheets.)

Once that’s done, open a spreadsheet where you’d like to use the add-on and:


  • Go to Add-ons -> Search Analytics for Sheets -> Open Sidebar
  • Authorize it with your GSC account (make sure you’re logged in to Sheets with your GSC account, then close the window once it says it was successful)

You’ll only have to do this once per user account, so once you install it, the add-on will be available for all your spreadsheets.

PS: You’ll get an error if you don’t have any websites verified on your logged-in account.

How Search Analytics for Sheets can help you

Next, I’ll give you some examples of what you can use the add-on for, based on how I mainly use it.

Grab information on queries and their associated landing pages

Whether it is to diagnose traffic changes, find content optimization opportunities, or check for appropriate landing pages, getting data on both queries and landing pages at the same time can usually provide instant insights. Other than automated backups, this is by far the feature that I use the most, especially since it’s fairly hard to replicate the process using the standard web UI.

Best of all, it’s quite straightforward to do this and requires only a few clicks:

  • Select the website
  • Select your preferred date interval (by default it will grab the minimum and maximum dates available in GSC)
  • In the Group field, select “Query,” then “Page”
  • Click “Request Data”

That’s it.

You’ll now have a new sheet containing a list of queries, their associated landing pages, and information about impressions, clicks, CTR, and position for each query-page pair.


What you do with the data is up to you:

  • Check keyword opportunities

Use a Sheets filter to only show rows with positions between 10 and 21 (usually second-page results) and see whether those landing pages can be further optimized to push their queries onto the first page. Maybe work a bit on the title tag, content, and internal linking to those pages.

  • Diagnose landing page performance

Check position 20+ rows to see whether there’s a mismatch between the query and its landing page. Perhaps you should create more landing pages, or there are pages that target those queries but aren’t accessible by Google.

  • Improve CTR

Look closely at position and CTR. Check low-CTR rows with associated high position values and see if there’s any way to improve titles and meta descriptions for those pages (a call-to-action might help), or maybe even add some rich snippets (they’re pretty effective in raising CTR without much work).

  • Find out why your traffic dropped
    • Had significant changes in traffic? Do two requests (for example, one for the last 30 days and one for the previous 30 days), then use VLOOKUP to compare the data (a script version of this comparison is sketched below).
    • Positions dropped across the board? Time to check GSC for increased 4xx/5xx errors, manual actions, or faulty site or protocol migrations.
    • Positions haven’t dropped, but clicks and impressions did? Might be seasonality, time to check year-over-year analytics, Google Trends, Keyword Planner.
    • Impressions and positions haven’t dropped, but clicks/CTR did? Manually check those queries, see whether the Google UI has changed (more top ads, featured snippet, AMP carousel, “In the news” box, etc.)

I could go on, but I should probably leave this for a separate post.
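
To make that first bullet concrete, here’s a rough sketch of the two-window comparison done in Python instead of VLOOKUP, assuming the `service` object and verified `site` URL from the earlier sketch:

```python
import datetime

def clicks_by_query_page(service, site, start, end):
    """Return {(query, page): clicks} for one date window."""
    body = {
        'startDate': start.isoformat(),
        'endDate': end.isoformat(),
        'dimensions': ['query', 'page'],
        'rowLimit': 5000,
    }
    resp = service.searchanalytics().query(siteUrl=site, body=body).execute()
    return {tuple(r['keys']): r['clicks'] for r in resp.get('rows', [])}

today = datetime.date.today()
last_30 = clicks_by_query_page(service, site,
                               today - datetime.timedelta(days=30), today)
prev_30 = clicks_by_query_page(service, site,
                               today - datetime.timedelta(days=60),
                               today - datetime.timedelta(days=31))

# Biggest click drops first, across query/page pairs seen in either window.
deltas = {k: last_30.get(k, 0) - prev_30.get(k, 0)
          for k in set(last_30) | set(prev_30)}
for (query, page), delta in sorted(deltas.items(), key=lambda kv: kv[1])[:25]:
    print(delta, query, page)
```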

Get higher granularity with further grouping and filtering options

Even though I don’t use them as much, the date, country, and device groupings let you dive deep into the data, while filtering allows you to fetch data specific to one or more dimensions.


Date grouping creates a new column with the actual day when the impressions, clicks, CTR, and position were recorded. This is particularly useful together with a filter for a specific query, so you can basically have your own rank tracker.
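
Under the hood, that rank-tracker setup is just the date dimension plus a query filter. A minimal sketch of the equivalent API request, with a placeholder keyword and the same `service` and `site` assumptions as before:

```python
body = {
    'startDate': '2016-09-01',
    'endDate': '2016-09-30',
    'dimensions': ['date'],  # one row per day
    'dimensionFilterGroups': [{
        'filters': [{
            'dimension': 'query',
            'operator': 'equals',
            'expression': 'keyword 1',  # the query you want to track
        }],
    }],
}
response = service.searchanalytics().query(siteUrl=site, body=body).execute()
for row in response.get('rows', []):
    # e.g. "2016-09-01 7.3"
    print(row['keys'][0], round(row['position'], 1))
```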

Grouping by country and device lets you understand where your audience is.

Using country grouping will let you know how your site fares internationally, which is of course highly useful if you target users in more than one country.

However, device grouping is probably something you’ll play more with, given the rise in mobile traffic everywhere. Together with query and/or page grouping, this is useful for understanding how Google ranks your site on desktop and mobile, and where you might need to improve (generally speaking, you’ll probably be more interested in mobile rankings here than desktop, since those can pinpoint problems with certain pages on your site and their mobile usability).


Filtering is exactly what it sounds like.

Choose between query, page, country and/or device to select specific information to be retrieved. You can add any number of filters; just remember that, for the time being, multiple filters are added cumulatively (all conditions must be met).


Other than the rank tracking example mentioned earlier, filtering can be useful in other situations as well.

If you’re doing a lot of content marketing, perhaps you’ll use the page filter to only retrieve URLs that contain /blog/ (or whatever subdirectory your content is under), while filtering by country is great for international sites, as you might expect.
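
For reference, that /blog/-plus-country combination maps onto the API’s dimensionFilterGroups field, where (as noted above) all filters in a group are applied cumulatively. A sketch with example values:

```python
# `service` and `site` as in the earlier sketches; the /blog/ substring
# and the 'gbr' country code are example placeholders.
body = {
    'startDate': '2016-09-01',
    'endDate': '2016-09-30',
    'dimensions': ['query', 'page'],
    'dimensionFilterGroups': [{
        'groupType': 'and',  # all filters in the group must match
        'filters': [
            {'dimension': 'page', 'operator': 'contains',
             'expression': '/blog/'},
            {'dimension': 'country', 'operator': 'equals',
             'expression': 'gbr'},  # ISO 3166-1 alpha-3 code
        ],
    }],
    'rowLimit': 5000,
}
response = service.searchanalytics().query(siteUrl=site, body=body).execute()
```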

Just remember one thing: Search Analytics offers a lot of data, but not all the data. Google tends to leave out data that is too individual (as in, very few users can be aggregated in that result, such as long-tail queries).

This also means that, the more you group/filter, the less aggregated the data is, and certain information will not be available. That doesn’t mean you shouldn’t use groups and filters; it’s just something to keep in mind when you’re adding up the numbers.

Saving the best for last: Automated Search Analytics backups

This is the feature that got me into building this add-on.

I use GSC data quite a bit, from client reports to comparing data from multiple time periods. If you’ve used GSC/WMT in the past, you almost certainly know that the data available in Search Analytics only spans about the last 90 days.

While the guys at Google have mentioned that they’re looking into expanding this window, most SEOs have had to rely on various ways of backing up data in order to access it later.

This usually requires either remembering to manually download the data each month, or using a more complicated (but automated) method such as a Python script.
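
If you’re curious what the Python route looks like, here’s a rough sketch of a monthly backup job (reusing the `service` and `site` assumptions from the earlier snippets) that you could schedule with cron:

```python
import csv
import datetime

def backup_last_month(service, site, path):
    """Dump last month's query/page data from GSC into a CSV file."""
    first_of_this_month = datetime.date.today().replace(day=1)
    end = first_of_this_month - datetime.timedelta(days=1)  # last day of previous month
    start = end.replace(day=1)                              # first day of previous month
    body = {
        'startDate': start.isoformat(),
        'endDate': end.isoformat(),
        'dimensions': ['query', 'page'],
        'rowLimit': 5000,
    }
    resp = service.searchanalytics().query(siteUrl=site, body=body).execute()
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['query', 'page', 'clicks', 'impressions', 'ctr', 'position'])
        for r in resp.get('rows', []):
            writer.writerow(r['keys'] + [r['clicks'], r['impressions'],
                                         r['ctr'], r['position']])

# Run a few days into the month, once GSC data has caught up, e.g.:
# backup_last_month(service, 'https://domain.com/', '2016-09-gsc-backup.csv')
```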

The Search Analytics for Sheets add-on allows you to do this effortlessly.

Just like when requesting data, select the site and set up any grouping and filtering that you’d like to use. I highly recommend using query and page grouping, and maybe country filtering to cut some of the noise.

Then simply enable the backup.

That’s it. The current spreadsheet will host that backup from now on, until you decide to disable it.


What happens now is that once per month (typically on the 3rd day of the month) the backup will run automatically and fetch the data for the previous month into the spreadsheet (each month will have its own sheet).

In case there are delays (sometimes Search Analytics data can be delayed by up to a week), the add-on will re-attempt to run the backup every day until it succeeds.

It’ll even keep a log with all backup attempts, and send you an email if you’d like.


It’ll also create a separate sheet for monthly aggregated data (the total number of impressions and clicks plus CTR and position data, without any grouping or filtering), so you’ll be sure you’re ‘saving’ the real overview information as well.

If you’d like more than one backup (either another backup for the same site but with different grouping/filtering options or a new backup for a different site), simply open a new spreadsheet and enable the backup there. You’ll always be able to see a list with all the backups within the “About” tab.

For the moment, only monthly backups are available, though I’m thinking about including a weekly and/or daily option as well. However, that might be more complicated, especially in cases where GSC data is delayed.

Going further

I hope you’ll find the tool as useful as I think it is.

There may be some bugs, even though I tried squashing them all (thanks to Russ Jones and Tori Cushing, Barry Schwartz from Search Engine Roundtable, and Cosmin Negrescu from SEOmonitor for helping me test and debug it).

If you do find anything else or have any feature requests, please let me know via the add-on feedback function in Google Sheets or via the form on the official site.

If not, I hope the tool will help you in your day-to-day SEO work as much as it helps me. Looking forward to seeing more use cases for it in the comments.

PS: The tool doesn’t support more than 5,000 rows at the moment; working on getting that improved!



How to Build a Facebook Funnel That Converts – Whiteboard Friday

Posted by ryanwashere

How are you using remarketing on Facebook? If you’ve ever felt frustrated about the ROI on FB ads, it just may be time to give them another chance. In today’s guest-hosted Whiteboard Friday, Ryan Stewart outlines his process for using remarketing and targeted content creation to get more conversions out of your Facebook ad spend.


Video Transcription

Hello, Moz fans. My name is Ryan Stewart. I own a digital consultancy agency, WEBRIS, and I am ecstatic to be doing this week’s version of Whiteboard Friday.

Now, as a marketing consultant I get the pleasure of talking to fellow marketers and business owners all the time, and one of the first questions I ask them is what they’re doing on Facebook, because I firmly believe there’s no better way to spend your money online right now. Nine times out of ten, what they tell me is this. “Hey, look Ryan, we spent some money, we got some fans, we got some video views, we got a lot of clicks, but ultimately that return on investment wasn’t quite there, so we stopped.” So I’m going to show you a framework today that’s going to help you get more return on investment from your Facebook ad spend.

Common key mistakes when it comes to Facebook ads

Before we get into that framework, there are a couple of key things that I want to just check off right off the bat that might help you. These are key mistakes that I see people making all the time.

  1. The misuse of Facebook technology. What that means is not having the pixel installed, not using custom conversions, not using a tag management solution to help you out, and not really understanding and using custom audiences the right way, because custom audiences are ultimately what make remarketing, and really this whole funnel, work. We need to understand and use custom audiences the right way, and I’m going to talk about that in here.
  2. We need between six and nine brand touchpoints. Another thing, and I kind of call this the SEO mentality: we think that just because somebody is interested in or searching for red shoes, that means they want to buy them. That’s not the case. Especially on Facebook, there’s a completely different mentality, and we need to understand that. There are a lot of studies that show we need between six and nine brand touchpoints before somebody is an interested sales prospect or lead, and…
  3. We need to build that value, that relationship and really build that purchase intent. We do that through content, which I’m going to talk about in depth over here as well.
  4. The lack of trust in Facebook. Again, with the SEO mentality we’re constantly working against Google; Facebook is the complete opposite. They have done an amazing job optimizing their platform. All you have to do is tell it what you want it to do and it will go out and find the right audiences. Just have faith in the process. Trust that Facebook will get it done for you, and then you can focus on what really matters.

The Facebook marketing funnel framework

This is the framework that I have drawn up over here. It looks like your traditional marketing funnel. It’s got your awareness, interest, consideration, and purchase. But what’s in it is specific to this process and Facebook ads in particular.

Now I’m going to start actually not at the top. I’m going to run you through the whole thing, but I’m going to start here with interest, because this is where most people start. They build a landing page, a squeeze page that says, “Hey, opt in for our free e-book,” and they just start promoting it. They push it to fans. They push it to audiences that they think are interested in it. Unless you’re a huge brand that has all of these touchpoints and awareness taken care of, it’s very tough because people don’t know who you are and they’re not just going to start giving you their information and start buying just because you put up a nice landing page.

1. Build your content — whatever form works best for you.

So what we need to do is build the content on top of that. That’s what I have right here. So you can see I’ve got different types of content that you can use — videos, blog posts, webinars, e-books. Whatever it is, it doesn’t really matter.

Create something that you’re comfortable with. But what you need to focus on is really two things.

  1. Making sure that it’s on your website, because then we can retarget people and get them down that funnel.
  2. Creating and building value for the people that you’re targeting. So again, if you’re Moz and you sell a bunch of different products — you’ve got your local solutions, you’ve got your keyword research, you’ve got link building solutions — we don’t want to just create one piece of content. We want to create specific pieces of content that are engaging to those little sub-niches of the audience and relevant to the product, and that’s key because it allows us to expand and scale this out.

2. Gather insights from people that are already aware to inform your lookalike audiences.

Now once you’ve got that content built, what I like to start doing is promoting it to what I call a warm audience. These are people that are already aware. These are your Facebook fans, your website retargeting list, your customer list, all of these people. You start by promoting the content to them, and you start analyzing the data and seeing what’s performing best.

Understand the audience segments that are driving the most engagement, driving the most purchases right off the bat, because what we can do is create lookalike audiences based on these and we can pivot into promoting this to cold audiences. Again, this is really where you can start to scale, because this is only going to take you so far. Unless you’re a massive brand, it’s not going to take you very far. This is really mostly for data analysis and getting some initial people into the funnel.

This up here, the cold audiences, is where you really start to make your money. So again, lookalike audiences are a tremendous thing. You need to trust in Facebook that what you tell it to do, it’s going to go out and find the right people. But there’s other stuff you can do as well. Again, because we’re not taking a landing page, we can actually go out and do some form of outreach to get more eyeballs on the page. We can go to Facebook groups. We can go to other Facebook Pages. We can say, “Hey, I’ve got this really great guide, ’19 Things To Do to Build Links for Local SEO.” We can start to do some exchanges. All we need to do is start getting people to here, getting people to this content, because once they’re on this content, they’re in our funnel. So let me show you how this works, and this is where the Facebook ads really start to pick up and retargeting really starts to come into play.

3. Remarket with an initial offer to move your audience from aware to interested.

What we want to do is we want to set parameters that tell Facebook, “Hey, anytime somebody’s been to this blog post but hasn’t been to our landing page, show them this ad.” I love to use video in this case, again because video is a great way to build the brand, to hack those touchpoints, to get your face out there, and to start getting some recognition. So I like to use a video that says, “Hey, thank you for checking out our blog posts, our webinar. We really appreciate it. But we left some things out, and those things are included on this page.” That’s how you can start to introduce your offer and get people to your landing page, your squeeze page, or your product page.

4. Use another remarketing ad to move them from the interested stage to consideration.

We’re not done there, because there are some other things that we can do on Facebook to start really building this thing up and driving a lot more conversions.

Once we get people to the landing page, not everyone is going to convert. So what we can do is we can set up another remarketing ad that says, “Hey Facebook, anytime that somebody has been to our landing page but hasn’t been to the next page, which is our trial page, our thank you page, whatever that may be, we want to run this ad.” Again, I like to use video, and we can say, “Hey, thank you for checking out our landing page, but you didn’t opt in. Did you know that we have a free trial? Did you know that we offer a discount? Did you know we have a free e-book?”

Whatever it is that you’re offering to get people to opt in, run that. What happens is then you get your people in your email sequence, your traditional marketing sequence. You run that on the side, but again we’re not done. Because we found these people on Facebook, they’re still on Facebook. There’s still more that we can do.

5. Build trust with ads that share benefits, testimonials, etc.

If you’ve got a free trial that you’re giving away, a free consultation, whatever it is, a discount for your products, we want to tell people about it. We want to make sure that they’re taking advantage of it, because once you get somebody on email, you might have a 20% open rate, which means you’re cutting off 80% of people. But we know they’re on Facebook, so what we can do is run another remarketing ad that says, “Hey Facebook, anytime that somebody has been to our free trial page but hasn’t actually purchased, run this ad.” You can start talking about the benefits of your product, start showing testimonials from people. Whatever it is that can drive people to use your product and really build trust in it, you want to take advantage of that.

6. Use your final remarketing ad to sweeten the pot and ask for the hard sell.

Finally, we’re still not done, because we still haven’t asked for that hard sell. This is where we use our final ad that says, “Hey Facebook, anytime that somebody has used our trial but hasn’t been to our ultimate checkout page, we want to run this final ad.” For example, I have a course that I sold using Facebook ads. What I did was run a very personalized video ad that said, “Hey, thank you for checking out my content. Thank you for attending my webinar. Thank you for checking out the free trial. Look, there’s something that’s holding you up from purchasing. I am willing to jump on a call with you and answer any questions that you may have.” Obviously, that’s not going to apply to every business. But figure out a final piece of value that you can add for those people to really drive them to purchase and ask for that hard sell.

Again, this is kind of a quick overview of this process, but the key point here is that this part down here is automated. All you have to focus on now is building more content and building more traffic to that content, because once you get traffic to this content — and you know tons of ways to do that; you can even rank it in organic search and get people in your funnel that way — all you have to do is focus on getting people in here. This whole funnel is automated, and it’s a beautiful thing. When you do this, it takes patience. You’re not going to get as many email conversions upfront, but it works.

I’m telling you, if you just have faith in this process and use this to your advantage, use remarketing with everything that you can do, it will work.

Again guys, my name is Ryan Stewart. Hopefully you enjoyed this presentation. For more information, again there’s a ton of stuff on Moz. I have some stuff on my blog. I appreciate your time. Take care.

Video transcription by Speechpad.com



Why Didn’t You Recover from Penguin?

Posted by Dr-Pete

After almost a two-year wait, the latest Penguin update rolled out in late September and into early October. This roll-out is unusual in many ways, and it only now seems to be settling down. In the past couple of weeks, we’ve seen many reports of recoveries from previous Penguin demotions, but this post is about those who were left behind. What if you didn’t recover from Penguin?

I’m going to work my way from unlikely, borderline conspiracy theories to difficult truths. Theories #1 and #2 might make you feel better, but, unfortunately, the truth is more likely in #4 or #5.


1. There is no Penguin

Then you’ll see that it is not the spoon that bends, it is only yourself. Ok, this is the closest I’ll get to full-on conspiracy theory. What if this new Penguin is a ruse, and Google did nothing or rolled out something else? We can’t know anything 100% without peering into the source code, but I’m 99% confident this isn’t the case. Interpreting Google often means reading between the lines, but I don’t know of any recent confirmed announcement that ended up being patently false.

Google representatives are confirming details about the new Penguin both publicly and privately, and algorithm flux matches the general timeline. Perhaps more importantly, we’re seeing many anecdotal reports of Penguin recoveries.

Given the severity of Penguin demotions and the known and infrequent update timelines, these reports are unlikely to be coincidences. Some of these reports are also coming from reliable sources, like Marie Haynes and Glenn Gabe, who closely track sites hit by Penguin.


2. Penguin is still rolling out

This Penguin update has been unusual in many ways. It’s probably best not to even call it “Penguin 4.0” (yes, I realize I keep calling it that). The new, “real-time” Penguin is not simply an update to Penguins 1–3. It replaces them and works very differently.

Because real-time Penguin is so different, the roll-out was broken up into a couple of phases. I believe the new code went live roughly on Google’s announced date of September 23rd. It might have happened a day or two before that, but probably not weeks before. This new code, though, was the kinder, gentler Penguin, which devalues bad links.

For this new code to fully take effect, the entire link graph had to be refreshed, and this takes time, especially for deeper links. So, the impact of the initial roll-out may have taken a few days to fully kick in. In terms of algorithm flux, the brunt of the initial release hit MozCast around September 27th. Now that the new Penguin is real-time, we’ll be feeling its impact continuously, although that impact will be unnoticeable for the vast majority of sites on the vast majority of days.

In addition, Google has rolled back previous Penguin demotions. This happened after the new code launched, but we don’t have an exact timeline. This process also took days, possibly a week or more. We saw additional algorithm spikes around October 2nd and 6th, although the entire period showed sustained flux.

On October 7th, Gary Illyes from Google said that the Penguin roll-out was in the “final stage” (presumably, the removal of demotions) and would take a “few more days”. As of this writing, it’s been five more days.

My best guess is that 95%+ of previous Penguin demotions have been removed at this point. There’s a chance you’re in the lucky 5% remaining, but I wouldn’t hold my breath.


3. You didn’t cut nearly deep enough

During the few previous Penguin updates, it was assumed that sites didn’t recover because they simply hadn’t cut deep enough. In other words, site owners and SEOs had tried to surgically remove or disavow a limited number of bad links, but those links were either not the suspect links or were just the tip of the iceberg.

I think it’s true that many people were probably trying to keep as many links as possible, and were hesitant to make the deep cuts Penguin required. However, this entire argument is misleading and possibly self-destructive, because this isn’t how the new Penguin works.

Theoretically, the new Penguin should only devalue bad links, and its impact will be felt on a more “granular” (in Google’s own words) level. In other words, your entire site won’t be demoted because of a few or even a lot of bad links, at least not by Penguin. Should you continue to clean up your link profile? Possibly. Will cutting deeper help you recover from Penguin down the road? Probably not.


4. Without bad links, you’d have no links at all

Here’s the more likely problem, and it’s a cousin of #3. Your link profile is so bad that there is practically no difference between “demotion” and “devaluation.” It’s quite possible that your past Penguin demotion was lifted, but your links were so heavily devalued that you saw no ranking recovery. There was simply no link equity left to provide SEO benefit.

In this case, continuing to prune those bad links isn’t going to help you. You need to build new quality signals and authoritative links. The good news is that you shouldn’t have to wait months or years now to see the positive impact of new links. The bad news is that building high-quality links is a long, difficult road. If it were easy, you probably wouldn’t have taken shortcuts in the first place.


5. Your problem was never Penguin

This is the explanation no one wants to hear, but I think it’s more common than most of us think. We’re obsessed with the confirmed update animals, especially Penguin and Panda, but these are only a few of the hundreds of animals in the Google Zoo.

There were algorithmic link demotions before Penguin, and there are still parts of the algorithm that look for and act on bad links. Given the power that links still hold over ranking, this should come as no surprise. The new Penguin isn’t a free pass on all past link-building sins.

In addition, there are still manual actions. These should (hopefully) show up in Google Search Console, but Google will act on bad links manually where it’s warranted.

It’s also possible that you have a very different algorithmic problem in play or any of a number of technical SEO issues. That diagnostic is well beyond the scope of this blog post, but I’ll offer this advice — dig deeper. If you haven’t recovered from Penguin, maybe you’ve got different or bigger problems.



3 New Upgrades Make the Web’s Best Keyword Research Tool Even Better

Posted by randfish

If you know me, you know I’m hyper-critical of the software, data, and products Moz releases. My usual response to someone asking about our tools vs. others used to be to give a rundown of the things I like about the competition and why they’re great, then look down at the ground, shuffle my feet in embarrassment, and say “and Moz also has a good tool for that.”

But Keyword Explorer (and the progress Moz Pro & Local have made this year) brings out a different behavior in me. I’m still a little embarrassed to admit it, but admit it I must. KW Explorer is the best keyword research tool on the market, period*.

But we are never satisfied, so today, it’s getting even better with the addition of some killer new functionality.

#1: Rank checking inside KW Explorer lists

First on the list is the ability to easily see whether a given domain (or URL) already ranks on page 1 for any of the keywords on a list. Just enter a domain or page, hit “check rankings,” and the Rank column will fill in with your data.

Why is this crucial?

Because many of us who do keyword research need to know whether to add a list of keywords to our “already visible/getting traffic” set, or to the “in need of content creation or optimization” set. This feature makes it simple to build up a multi-hundred-keyword list for targeting, and quickly include or exclude the keywords for which we’re already ranking on page 1 (or above/below any given position). This column now appears in the CSV export, too, so you can mash up and filter the data however you’d like.

Quick aside: If you have a keyword list with expired SERPs (after 14 days, KW Explorer assumes that Google’s results may have changed substantially enough to invalidate the prior Difficulty & Opportunity scores), you’ll be prompted to refresh when checking rankings. Just refresh the keywords on the list to fetch the latest SERPs and you’ll be good to go.

But, of course, there’s also the need to get more ranking data — the ranking positions beyond page 1, tracking over time, comparison to competitors, etc. And that’s why we’ve also added…

#2: Send keywords directly from a list to Pro Campaigns for rank tracking

Undoubtedly, our most-requested feature of the summer was the ability to import a list (or selected keywords from a list) over to a campaign to track. The previous export/import system worked, but it was an unnecessary hassle. Today, you can simply use the “I want to” menu, choose “Add XYZ to Campaign,” and then select which campaign you want (or create a new one).

The keywords will auto-magically copy themselves into your campaign, using whatever default settings you’ve got for rank tracking (US-English, Google.com is most common, but you can rank track in any country or language).

Why is this crucial?

Because once you know the keywords you’re targeting, you need to know how you’re performing over time, how your competition’s doing on those terms/phrases, and how the rankings are changing to include or exclude various SERP features (yup, as of August, we also track all the SERP features in Pro Campaigns).

The challenge, of course, is that you’ve got to know which keywords are worth targeting in the first place, and how relatively important they are, which is why we’ve worked like mad to deliver…

#3: Better, more accurate keyword volume and coverage than ever

(that’s way, way frickin’ better than whatever Google AdWords is doing with their “low spending” accounts)

Russ Jones and the Keyword Explorer team have been going full-force on a new, more powerful solution for replacing Google AdWords’ weird, imprecise, always-30-days-or-more-behind keyword data with better information. We started working with clickstream data (searches and click patterns gathered from browser extensions, anonymized, and sold to us by various folks) early this year; Russ wrote a detailed account of the process here.

But now our volume numbers are even better, with the addition of dramatically more data via a partnership with the awesome crew at Jumpshot. Their clickstream-based search behavior, plus what we get from other sources, combined with our modeling against AdWords’ impression counts on real campaigns, gives us higher accuracy, more coverage, and faster recognition of volume trends than ever before.

Why is this crucial?

When you enter a term or phrase into Keyword Explorer, you can now expect that we’re providing the best, most accurate volume ranges available*. Marketers need to be able to trust the numbers in their keyword tools, or else risk prioritizing the wrong search terms, the wrong content, and the wrong investments. We have confidence, thanks to our test comparisons, that the volume ranges you see in Keyword Explorer will match real volume for the prior 30 days 95%+ of the time.

In the months ahead, Russ will have more to share comparing Moz’s keyword volume data to AdWords’ and, hopefully, an external API for search volume, too (especially after all the resounding requests on Twitter).

If that wasn’t enough, we’ve also added volume numbers to Pro Campaigns, so you can see this high-quality information in the context of the keywords you’re tracking.

Not too shabby, eh?


Let’s get real. Moz had a number of years where getting one change to one product, even a small one, felt like pulling teeth. It took forever. I think you could rightly point at our software and say “What’s going on over there?” But those days are long gone. Just look at all the useful, quality updates in 2016. This team is firing. on. every. cylinder. If you work on Moz’s software, you should be proud. If you use our software, you can feel like you’re getting your money’s worth and more. And if, like me, you tie far too much of your self-worth to the quality of your company’s products, well, even you can start holding your head high.

Rock on, fellow Mozzers and Moz subscribers. Rock on.


* In the English-language market, that is; outside of the United States, Canada, the United Kingdom, and Australia (where we get Jumpshot and other clickstream data), the suggestions aren’t as comprehensive and the volume numbers are often missing. Sadly, it’ll probably be this way for a while as we’re focusing on English markets for the time being, and will need to find and make deals with clickstream providers in each country/language in order to match up.



We Fought the Comment Spam (and the Comment Spam Didn’t Win)

Posted by FeliciaCrawford

All across the Internet, comments sections are disappearing.

From your high-profile news sites to those that share the online marketing space, more and more sites are banishing that unassuming little text box at the bottom of a post. And frankly, it’s not hard to understand why.

First, you have your good ol’-fashioned spam comments. These are the commenters that hold dear the idea that those nofollowed comment links are valuable: the usual bare link drop, the one spicing it up a bit with some solid industry advice, and the one really going for the gold.

Then you have your thin comments. Often left with (we assume) good intentions, they don’t add much value to the discussion.

These poor souls usually end up with a lot of downvotes, and if they receive upvotes, it’s often a clear sign that there’s a nefarious MozPoint scheme afoot.

Sometimes even the best of us are lured by the glamour of spamming.

Finally, lest we forget, you have your inflammatory comments. Those comments that, although perhaps on-topic, are derailing or just downright unkind. We don’t get too much of that here on the Moz blog, thank goodness (or thank TAGFEE), but I’m sure we’ve all read enough of those to last us several lifetimes.

And comment moderation is a thankless, wearying task. Though we fight the good fight, comment spammers are constantly finding ways around our barriers, poking their links into the chinks in our armor. It takes valuable time out of a Mozzer’s busy workday to moderate those comments.

So why are we battling to keep them?

In the beginning, there was the blog.

Before the Moz Pro toolset was even a twinkle in Roger’s optical sensors, Moz was a blog. A community of brave folks banding together to tackle the mysteries and challenges of SEO. If you look back across the years and rifle through the many, many comments, you’ll begin to notice a few things:

  • People learned from one another.
  • People leaned on one another.
  • People networked and cultivated relationships that otherwise may not have blossomed.

Google says they’re good for SEO, and I’m not gonna fight with Google.

Now, I don’t want to cheapen the sentiment here, but it has to be said: the smart folks over at Google have made it clear that a healthy, vibrant online community is one signal of a site’s quality. Comments can be considered part and parcel of what constitutes good (nay, even great) content, and have even been spotted in a featured snippet or two.

I don’t know about you, but I’m not one to argue with the Big G.

But there’s always been comment spam. Why do you care now?

Comment spam isn’t a new or novel phenomenon. It’s been plaguing blogs almost since the very first public bloggers put fingers to keyboard. Most blog posts on Moz show traces of its corrupt spamalicious influence in the comments section. So what was the catalyst that steeled our resolve?

It just got annoying.

Authors pour heart and soul into crafting their posts. They take valuable time out of their regular work day to engage in the comments section, answering questions and driving thoughtful discussion. They deserve better than a slew of spammers aiming to place a link.

Readers devote hours of their ever-so-precious lives to reading the blog. Some folks even read for the comment conversations alone. They deserve to benefit from those invested hours, to be inspired to join the conversation.

We knew we had to do something. What that was seemed unclear, though.

We began to notice something. When we promoted a YouMoz post to the main blog, it tended to garner more of what we’d call quality comments. Comments with depth, that ask pertinent questions, that respectfully challenge the article in question. These posts came prepackaged with their own discussions already in full swing from their time on YouMoz; often, the first few comments were engaging ones, and they were just as often upvoted to remain on top (the blog auto-sorts comments by popularity).

Conversely, when the first several comments on a brand-new post were thin, spammy, or otherwise low-quality, it seemed to grind any potential discussion to a screeching halt. Internally, our Mozzer authors like Dr. Pete and Rand began to take notice. I received some concerned questions from other frequent contributors. At first, I wasn’t sure how to tackle the problem. After all, we already seemed to be doing so much.

Comment moderation? Check. Certain triggers catch comments in a queue, which we clear out daily.

Subject every comment to approval by an editor? No, that would stymie the natural discussions that make our blog comments section special in the first place. No one should have to wait for my morning meetings to finish before they can engage in intellectual banter with their peers.

Close the comments section? No way. This was never on the table. It simply didn’t make sense; we’re fortunate in that a good majority of comments on the blog are high quality.

It boiled down to the fact that there was the potential for our comments section to nurture not only good content, not only great content, but fantastic content. 10X, if you prefer that term. The most royal darn content outside of Buckingham Palace.

Okay, that might be going a little far. But something incredibly special happens here on the blog. You can ask questions about a Whiteboard Friday and Rand will do his best to answer, thoughtfully refute, or discuss your point. You can get to know your peers in an industry largely cooped up behind a screen half a world away. You can joke with them, disagree with them, metaphorically high-five them. And it’s not limited to a relatively low character count, nor is there pressure to approve the friend request of anyone you’ve just hotly debated.

We had to preserve that.

And that’s when we devised our grand experiment.

We began to seed discussion questions as the first comment.

Inspired by sites like the New York Times with their “NYT Pick” featured comment option, we decided there was a better way.

[Screenshot: an “NYT Pick” featured comment]

Marvel at that nifty gold badge!

For one week in August (8/1 through 8/5), I asked authors to contribute a discussion question, something to spark a decent conversation in the comments early on, before you could even say “thanks for the nice post.”

This question would appear at the top of our comments section, the first thing a reader would see after consuming the post and potentially feeling inspired enough to share their thoughts.

Rand kicked it off a little early, in fact, with this zinger on July 29th:

[Screenshot: Rand’s seeded discussion question, posted July 29th]

Those upvotes looked mighty promising to a despairing blog editor.

Keep in mind that, for the most part, posting these discussion questions is a very manual process. We don’t currently have the framework built to display a “featured question.” We tend to publish around 12am Seattle time; to get these little puppies in place early enough to make a difference, I would…

  • Stay up until midnight
  • Assume the identity of the author (with permission, of course) using magical Moz admin abilities
  • Publish the comment
  • Sneak back to my main account and — yes, here’s the shady bit — thumb it up to ensure it stayed “on top” for a few hours

I do struggle with the guilt of these small betrayals (that is, gaming the thumb system), but ‘twas for the greater good, I swear it! As you can see from the screenshot above, that high visibility — combined with a ready-to-go thought-provoking question — earned more upvotes as the day wore on. Almost without fail, each seeded discussion question remained the top-voted comment on every post that week. And it seemed to be working — more and more comments seemed to be good quality. Great quality. Sometimes even fantastic quality. (I just shivered.)

What’s spam to me might be a sandwich to you.

Now, quality is a very subjective thing. I can’t vouch for the absolute science of this experiment, because it was very squarely rooted in a subjective analysis of the comments. But when we compared the results from our experiment week (8/1 through 8/5) to two separate weeks in which we didn’t make any special effort in the comments (7/18 through 7/22 and 6/27 through 7/1), the results were quite telling.

Cut to the chase — what happened?!

Manually going through the comments section of each post, I tallied how many comments I considered high-quality or useful that were not given by the author, and how many comments I considered so thin or spammy as to be detrimental to the section as a whole.

| Week | High-quality comments | Spammy comments |
|------|-----------------------|-----------------|
| June 27th – July 1st (control) | 26% | 26% |
| July 18th – 22nd (control) | 23% | 29% |
| August 1st – 5th (discussion questions) | 35% | 11% |

My subjective, unscientific experiment had great results. Since then, I’ve asked our authors to contribute discussion questions to kick off a good conversation in the comments. Every time, I can anecdotally say that the commentary was more vibrant, more overtly helpful, and more alive than when we don’t meddle.

You like it, you really like it!

Seeded discussion questions far and away have more upvotes than your regularly scheduled top comments. Often they top the double digits, and this very apt discussion question by Gianluca (a long-time supporter and champion of the Moz community) earned a whopping 27 thumbs pointing toward the heavens:

[Screenshot: Gianluca’s discussion question, with 27 upvotes]

In addition, people are answering those questions. They’re answering each other answering those questions. The questions are helping to get the gears turning, adding another layer of thoughtfulness to a piece that you otherwise might be content to skim and then bounce off to another magical corner of the Internet.

The greatest and most humbling triumph, of course, would be to help transform the spammers into supporters, to inspire everyone to think critically and communicate boldly. If even one person hesitates before dropping in a promotional link and instead asks the community’s advice, my spirit shall rest easy forevermore.

There’s a light at the end of the tunnel.

Sure, there are still comment spammers. There have always been comment spammers. And, though it pains me to say it, there will always be comment spammers. It’s just a part of life we must accept, like the mud that comes along with a beautifully rainy Seattle afternoon or when your last sip of delicious coffee is muddled with grounds.

But I want to give you hope, O ye commenters and readers and editors of the world. You need not sacrifice the intrinsic goodness of a community-led comments section to the ravages of spam. There is another way. And though the night is dark and full of spammers, we’re strong enough and smart enough to never yield, to hold firm to our values, and to nourish what goodness and helpfulness we can in our humble territory of the Internet.
