Announcing Moz Academy!

Posted by Nick_Sayers

We’re stoked to announce Moz Academy!

Have you ever wanted a resource to learn inbound marketing or a place your team can reference marketing best practices? Well, we hope you do a backflip over Moz Academy. If you have a Moz Subscription, check it out now!

Moz Academy

Subscription-based content

At Moz we produce a wealth of free content in the blog, our guides, Q&A, and pretty much everywhere on the site. We want to do something special for Moz subscribers by transforming our free content and reinventing it for Moz Academy. You could probably scour the Moz Blog and other websites to obtain the information in Moz Academy, but we think having it easily digestible and all in one place is a huge win for Moz subscribers. Moz is excited to add the simplicity and power of Moz Academy to the list of Moz subscription benefits.

Why create an inbound marketing school?

Moz is extremely passionate about educating our community. In fact, our entire business started as a blog where people could learn about SEO. Moz Academy gives subscribers the power to be better marketers, which will enable them to use our products in more depth and with greater confidence. We want to provide a hub of marketing knowledge that will create a stronger community where people can teach each other while using the Academy as a frame of reference. One could say that Moz Academy is the Mr. Miyagi of inbound marketing. The key to this project is empowering you to kick even more butt than you already do!

We hope Moz Academy turns into the one-stop shop for inbound knowledge for Moz subscribers. Everyone on the team is committed to continually refreshing content and adding new lessons. Again, we really want this to be the easiest and most comprehensive place on the web to learn internet marketing.

Furthermore, we’ve designed each lesson with empathy in mind; they will be easily digestible and considerate of your time. That means you can drop in whenever you like and have comfortable breakpoints if your brain is exploding with inbound marketing knowledge.

Wait, how do I use Moz Academy?

Moz Academy is easy to use! Check out these six simple steps:

Step 1: Log into your Moz account.

Step 2: Go to moz.com/academy.

Step 3: Look through the lessons.

Step 4: Click a lesson you find interesting.

Step 5: Enjoy a video and/or read the lesson below it!

Step 6: Crane kick.

What lessons do you have right now?

We’re starting with the following lessons:

  • Inbound Marketing
  • SEO
  • Link Building
  • Social Media
  • Content Marketing

We plan to add a lot more! Look for lessons on local SEO, community management, video marketing, email marketing, and web analytics. Yup, it’s going to be pretty sweet!

Well, Moz, what’s next for Moz Academy?

The future of Moz Academy really depends on how everyone uses it. In the next few months, we want to create a good foundation for beginners and subsequently build up to intermediate-level content. Eventually, we’d like to have sections for beginner lessons, intermediate lessons, and advanced lessons. Keep your eyes peeled, because we’ll be releasing a lot of new stuff! Some of our longer-term goals for Moz Academy are to have interactive quizzes and some sort of gamification. Yes, we know you’d like to track your progress and unlock achievements. That way you can show off how awesome you are at Moz Academy!

Eventually, we want Moz Academy to look more like Treehouse and Code School’s online learning platforms. We have a long way to go, but are excited about the journey to get there. With your help and feedback, we can make Moz Academy something awesome. Thanks in advance, and enjoy!



SEO Finds in Your Server Logs, Part 2: Optimizing for Googlebot

Posted by timresnik

This is a follow-up to a post I wrote a few months ago that goes over some of the basics of why server log files are a critical part of your technical SEO toolkit. In this post, I provide more detail around formatting the data in Excel in order to find and analyze Googlebot crawl optimization opportunities.

Before digging into the logs, it’s important to understand the basics of how Googlebot crawls your site. There are three basic factors that Googlebot considers. First is which pages should be crawled. This is determined by factors such as the number of backlinks that point to a page, the internal link structure of the site, the number and strength of the internal links that point to that page, and other internal signals like sitemaps.

Next, Googlebot determines how many pages to crawl. This is commonly referred to as the “crawl budget.” Factors that are most likely considered when allocating crawl budget are domain authority and trust, performance, load time, and clean crawl paths (Googlebot getting stuck in your endless faceted search loop costs them money). For much more detail on crawl budget, check out Ian Lurie’s post on the subject.

Finally, the rate of the crawl — how frequently Googlebot comes back — is determined by how often the site is updated, the domain authority, and the freshness of citations, social mentions, and links.

Now, let’s take a look at how Googlebot is crawling Moz.com (NOTE: the data I am analyzing is from SEOmoz.org prior to our site migration to Moz.com. Several of the potential issues that I point out below are now solved. Wahoo!). The first step is getting the log data into a workable format. I explained in detail how to do this in my last server log post. However, this time make sure to include the parameters with the URLs so we can analyze funky crawl paths. Just make sure the option that strips URL parameters is left unchecked when importing your log file.
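If you’d rather script this import step, a minimal Python sketch along the lines below can turn a raw access log into a workable CSV. It assumes the Apache/NGINX combined log format and illustrative file names (access.log, crawl.csv), and it deliberately keeps the query string on each URL:

    import csv
    import re

    # Minimal parser for the Apache/NGINX "combined" log format.
    # File names here are illustrative; point them at your own log.
    LINE_RE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<url>\S+) \S+" '
        r'(?P<status>\d{3}) (?P<bytes>\S+) '
        r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    with open("access.log") as logfile, open("crawl.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["url", "status", "agent"])  # query strings stay on the URLs
        for line in logfile:
            match = LINE_RE.match(line)
            if match:  # lines that don't match the format are skipped
                writer.writerow([match["url"], match["status"], match["agent"]])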

The first thing that we want to look at is where on the site Googlebot is spending its time and dedicating the most resources. Now that you have exported your log file to a .csv file, you’ll need to do a bit of formatting and cleaning of the data.

1. Save the file with an Excel extension (for example, .xlsx).

2. Remove all the columns except for Page/File, Response Code, and User Agent; it should look something like this (formatted as a table, which can be done by selecting your data and pressing Ctrl+L):

3. Isolate Googlebot from other spiders by creating a new column and writing a formula that searches for “Googlebot” in the cells of the third column. (A scripted version of steps 3 through 6 follows this list.)

4. Scrub the Page/File column down to the top-level directory so we can later run a pivot table and see which sections Google is crawling the most.

5. Since we left the parameters on the URLs in order to check crawl paths, we’ll want to remove them here so that the data is included in the top-level directory analysis that we do in the pivot table. The URL parameter always starts with “?”, so that is what we want to search for in Excel. This is a little tricky because Excel uses the question mark character as a wildcard. In order to indicate to Excel that the question mark is literal, use a preceding tilde, like this: “~?”

6. The data can now be analyzed in a pivot table (data > pivot table). The number associated with the directory is the total number of times Googlebot requested a file in the timeframe of the log, in this case a day.
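If Excel isn’t your thing, steps 3 through 6 can be scripted instead. Here is a rough pandas equivalent; it assumes the hypothetical crawl.csv from the earlier sketch, with url, status, and agent columns:

    import pandas as pd

    df = pd.read_csv("crawl.csv")

    # Step 3: isolate Googlebot from the other spiders.
    googlebot = df[df["agent"].str.contains("Googlebot", na=False)]

    # Step 5: strip the query string so parameterized URLs roll up into
    # the same directory (no tilde escaping needed outside of Excel).
    paths = googlebot["url"].str.split("?").str[0]

    # Step 4: pull the top-level directory out of each path.
    top_dirs = "/" + paths.str.strip("/").str.split("/").str[0]

    # Step 6: the pivot-table equivalent, i.e. requests per directory.
    print(top_dirs.value_counts())

The counts printed at the end are the same numbers the pivot table gives you: total Googlebot requests per top-level directory for the timeframe of the log.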

Is Google allocating crawl budget properly? We can dive deeper into several different pieces of data here:

  • Over 70% of Google’s crawl budget focuses on three sections, while over 50% goes towards /qa/ and /users/. Moz should look at search referral data from Google Analytics to see how much organic search value these sections provide. If it is disproportionately low, crawl management tactics or on-page optimization improvements should be considered.
  • Another potential insight from this data is that /page-strength/, a URL used for posting data for a Moz tool, is being crawled nearly 1,000 times. These crawls are most likely triggered by external links pointing to the results of the Moz tool. The recommendation would be to exclude this directory using robots.txt (a one-line rule, sketched below).
  • On the other end of the spectrum, it is important to understand the directories that are rarely being crawled. Are there sections being under-crawled? Let’s look at a few of Moz’s:

In this example, the directory /webinars pops out as not getting enough Google attention. In fact, only the top directory is being crawled, while the actual webinar content pages are being skipped.
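As an aside, the robots.txt exclusion recommended above for /page-strength/ only takes a couple of lines. This is just a sketch; double-check it against your existing disallow rules before shipping it:

    User-agent: *
    Disallow: /page-strength/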

These are just a few examples of crawl resource issues that can be found in server logs. A few additional issues to look for include:

  • Are spiders crawling pages that are excluded by robots.txt?
  • Are spiders crawling pages that should be excluded by robots.txt? (A scripted check for both of these questions appears after this list.)
  • Are certain sections consuming too much bandwidth? What is the ratio of the number of pages crawled in a section to the amount of bandwidth required?
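For the first two questions, Python’s standard-library robotparser can cross-check the crawled URLs against your live robots.txt. Here is a rough sketch, again reusing the hypothetical crawl.csv from earlier (the robots.txt URL is illustrative):

    from urllib import robotparser

    import pandas as pd

    # Load and parse the site's live robots.txt.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://moz.com/robots.txt")
    rp.read()

    # Pull the Googlebot requests out of the crawl data.
    df = pd.read_csv("crawl.csv")
    googlebot = df[df["agent"].str.contains("Googlebot", na=False)]

    # Any URL in this list was crawled even though robots.txt disallows it.
    blocked = [url for url in googlebot["url"] if not rp.can_fetch("Googlebot", url)]
    print(len(blocked), "crawled URLs are disallowed by robots.txt")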

As a bonus, I have done a screencast of the above process for formatting and analyzing the Googlebot crawl.

In my next post on analyzing log files, I will explain in more detail how to identify duplicate content and look for trends over time. Feel free to share your thoughts and questions in the comments below!

