
Click-Through Rate Isn’t Everything: 8 Ways to Improve Your Online Display Ads

Posted by rMaynes1

You are exposed to an average of 362 online display ads a day. How close are you to buying anything when you see those ads?

Online display ads have been around for over 20 years. They’re nothing new. But over the past 2 decades, the content, format, and messaging of display ads have changed dramatically—because they have had to!

The click-through rate of that first banner ad in 1994 was 44%. CTRs have steadily declined, and were sitting at around 0.1% in 2012 for standard display ads (video and rich media excluded), according to DoubleClick. Advertisers had to do something to ensure that their ads were seen, and engaged with—ads had to be a useful resource, and not an annoying nuisance.

It’s important, however, that the focus is not firmly fixed on CTRs. Yes, online display ads have largely been considered a tool for direct response advertising, but more recently, advertisers are understanding the importance of reaching the right person, in the right mindset, with an ad that can be seen. This ad may not be clicked on, but does that mean it wasn’t noticed and remembered? Advertisers are increasingly opting to pay for performance as opposed to clicks and/or impressions. Advertisers want their ad to drive action that leads to purchase—and that isn’t always in the form of a click.

Mediative recently conducted and released a research study that looks at how display ads can drive purchase behaviour. If someone is browsing the web and sees an ad, can it influence a purchase decision? Are searchers more responsive to display ads at different stages in the buying cycle? What actions do people take after seeing an ad that captures their interest? Ultimately, Mediative wanted to know how indicative of purchase behaviour a click on an ad was, and if clicks on display ads even matter anymore when it comes to driving purchase behaviour and measuring campaign success. The results from an online survey are quite interesting.

1. The ability of online display ads to influence people increases as they come closer to a purchase decision.

In fact, display ads are 39% more likely to influence web users when they are researching a potential purchase versus when they have no intent to buy.

Advertiser action item #1:

Have different ad creatives with different messaging that will appeal separately to the researcher and to the purchaser of your product or service. Combined with targeted impressions, this makes advertisers more likely to reach and engage their target audience when they are most receptive to the particular messaging in the ad.

Here are a few examples of Dell display ads and different creatives that have been used:

This creative is focusing on particular features of the product that might appeal more to researchers.

This ad injects the notion of “limited time” to get a deal, which might cause people who are on the fence to act faster—but it doesn’t mention pricing or discounts.

These creatives introduce price discounts and special offers which will appeal to those in the market to buy.

2. The relevancy of ads cannot be overstated.

40% of people took an action (clicked the ad, contacted the advertiser, searched online for more information, etc.) after seeing an ad because it was relevant to a need or want, or relevant to something they were doing at the time.

Advertiser action item #2:

Use audience data or lookalike modeling in display campaigns to ensure ads will be targeted to searchers who have a higher likelihood of being interested in the product or service. Retargeting ads to people based on their past activity or searches is valuable at this stage, as potential customers can be reached all over the web while they comparison shop.

An established Canadian charitable organization ran an awareness campaign in Q2 2015 using retargeting, first- and third-party data lookalike modeling, and contextual targeting to help drive both existing and new users to their website. The goal was to drive donations while reducing the effective cost per action (eCPA) of the campaign. This combination added granularity to the targeting, enabling the most efficient spending possible. The result was an eCPA of $76 against a goal of $600—roughly 87% below target.

3. Clicks on ads are not the only actions taken after seeing ads.

53% of people said they were likely to search online for the product featured in the ad (the same as those who said they would click on the ad). Searching for more information online is just as likely as clicking the ad after it captures attention, just not as quickly as a click (74% would click on the ad immediately or within an hour, 52% would search online immediately or within an hour).

Advertiser action item #3:

It is critical not to measure the success of a display campaign by clicks alone. Advertisers can get caught up in CTRs, but it’s important to remember that ads will drive other behaviours in people, not just a click. Website visits, search metrics, etc. must all be taken into consideration.

A leading manufacturer of PCs, laptops, tablets, and accessories wanted to increase sales in Q2 of 2014, with full transparency on the performance and delivery of the campaign. The campaign was run against specific custom audience data focusing on people of technological, educational, and business interest, and was optimized using various tactics. The result? The campaign achieved a post-view ROI revenue (revenue from target audiences who were presented with ad impressions, yet did not necessarily click through at that time) that was 30x the amount of post-click revenue.

4. Clicks on ads are not the only actions that lead to purchase.

33% of respondents reported making a purchase as a direct result of seeing an ad online. Of those, 61% clicked and 44% searched (multiple selections were allowed), which led to a purchase.

Advertiser action item #4:

Revise the metrics you measure. Measuring “post-view conversions” accounts for the fact that people may see an ad but act later—the ad still triggers an action, whether a search, a visit, or a purchase, even though that action doesn’t happen immediately and can’t be tied directly to a click.

5. The age of the target audience can impact when ads are most likely to influence them in the buying cycle.

  • Overall, 18–25 year olds are most likely to be influenced by online advertising.
  • At the beginning of the buying cycle, younger adults aged 18–34 are likely to notice and be influenced by ads much more than people aged over 35.
  • At the later stages of the buying cycle, older adults aged 26–54 are 12% more likely than 18–25 year olds to have made a purchase as a result of seeing an ad.

Advertiser action item #5:

If your target audience is older, multiple exposures of an ad might be necessary in order to increase the likelihood of capturing their attention. Integrated campaigns could be more effective, where offline campaigns run in parallel with online campaigns to maximize message exposure.

6. Gender influences how much of an impact display ads have.

More women took an online action that led to a purchase in the last 30 days, whereas more men took an offline action that led to a purchase.

  • 76% more women than men visited an advertiser’s website without clicking on the ad.
  • 47% more women than men searched online for more information about the advertiser, product, or service.
  • 43% more men than women visited the advertiser’s location.
  • 33% more men than women contacted the advertiser.

Advertiser action item #6:

Ensure you know as much about your target audience as possible. What is their age, their average income? What sites do they like to visit? What are their interests? The more you know about who you are trying to reach, the more likely you will be to reach them at the right times when they will be most responsive to your advertising messages.

7. Income influences how much of an impact display ads have.

  • Web users who earned over $100K a year were 35% more likely to be influenced by an ad when exposed to something they hadn’t even thought about than those making under $50K a year.
  • When ready to buy, people who earned under $20K were 12.5% more likely to be influenced by ads than those making over $100K.

Advertiser action item #7:

Lower earners (students, part-time workers, etc.) are more influenced by ads when ready to buy, so will likely engage more with ads offering discounts. Consider income differences when you are trying to reach people at different stages in the buying cycle.

8. Discounts don’t influence people if they are not relevant.

We were surprised that the survey results indicated that discounts or promotions in ads did not have more of an impact on people—but it’s likely that the ads with coupons were irrelevant to the searcher’s needs or wants, and therefore had no impact. We asked people what their reasons were for taking action after seeing an online ad. 40% of respondents took an action for a more purchase-related reason than simply being interested—they acted because the ad was relevant to a need or want, or relevant to something they were doing at the time.

Advertiser action item #8:

Use discounts strategically. Utilizing data in campaigns can ensure ads reach people with a high intent to buy and a high likelihood of being interested in your product or service. Turn interest into desire with coupons and/or discounts—it will have more of an impact if directly tied to something the searcher is already considering.

In conclusion, to be successful, advertisers need to ensure their ads provide value to web users—to be noticed, remembered, and engaged with, an ad must be relevant. Serving relevant ads that relate to a searcher’s current need or want is far more likely to capture attention than a “one-size-fits-all” approach.

Advertisers will be rewarded for their attention to personalization with more interaction with ads and a higher likelihood of a purchase. Analyzing lower funnel metrics, such as post-view conversions, rather than simply concentrating on the CTR will allow advertisers to have a far better understanding of how their ads are performing, and the potential number of consumers that have been influenced.

Rebecca Maynes, Manager of Content Marketing and Research with Mediative, was the major contributor on this whitepaper. The full research study is available for free download at Mediative.com.



Why All SEOs Should Unblock JavaScript & CSS… And Why Google Cares

Posted by jenstar

If you’re a webmaster, you probably received one of those infamous “Googlebot cannot access CSS and JS files on example.com” warning letters that Google sent out to seemingly every SEO and webmaster. This was a brand new alert from Google, although we had already been hearing from the search engine about the need to ensure all resources are unblocked—including both JavaScript and CSS.

There was definite confusion around these letters, supported by some of the reporting in Google Search Console. Here’s what you need to know about Google’s desire to see these resources unblocked and how you can easily unblock them to take advantage of the associated ranking boosts.

Why does Google care?

One of the biggest complaints about the warning emails lay in the fact that many felt there was no reason for Google to see these files. This was especially true because it was flagging files that, traditionally, webmasters blocked—such as files within the WordPress admin area and Wordpress plugin folders.

Here’s the letter in question that many received from Google. It definitely raised plenty of questions and concerns:

Of course, whenever Google does anything that could devalue rankings, the SEO industry tends to freak out. And the confusing message in the warning didn’t help the situation.

Why Google needs it

Google needs to render these files for a couple of key reasons. The most visible and well known is the mobile-friendly algorithm. Google needs to be able to render the page completely, including the JavaScript and CSS, to ensure that the page is mobile-friendly and to apply both the mobile-friendly tag in the search results and the associated ranking boost for mobile search results. Unblocking these resources was one of the things that Google was publicly recommending to webmasters to get the mobile-friendly boost for those pages.

However, there are other parts of the algorithm that rely on using it, as well. The page layout algorithm, the algorithm that looks at where content is placed on the page in relation to the advertisements, is one such example. If Google determines a webpage is mostly ads above the fold, with the actual content below the fold, it can devalue the rankings for those pages. But with the wizardry of CSS, webmasters can easily make it appear that the content is front and center, while the ads are the most visible part of the page above the fold.

And while it’s an old-school trick and not very effective, people still use CSS and JavaScript to hide things like keyword stuffing and links—including, in the case of a hacked site, hiding them from the actual website owner. By crawling the CSS and JavaScript, Googlebot can determine whether they are being used spammily.

Google also has hundreds of other signals in their search algo, and it is very likely that a few of those use data garnered from CSS and JavaScript in some fashion as well. And as Google changes things, there is always the possibility that Google will use it for future signals, as well.

Why now?

While many SEOs had their first introduction to the perils of blocking JavaScript and CSS when they received the email from Google, Matt Cutts was actually talking about it three-and-a-half years ago in a Google Webmaster Help video.


Then, last year, Google made a significant change to their webmaster guidelines by adding it to their technical guidelines:

Disallowing crawling of Javascript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings.

It still got very little attention at the time, especially since most people believed they weren’t blocking anything.

However, one major issue was that some popular SEO Wordpress plugins were blocking some JavaScript and CSS. Since most Wordpress users weren’t aware this was happening, it came as a surprise to learn that they were, in fact, blocking resources.

It also began showing up in a new “Blocked Resources” section of Google Search Console in the month preceding the mobile-friendly algo launch.

How many sites were affected?

In usual Google fashion, they didn’t give specific numbers about how many webmasters received these blocked resources warnings. But Gary Illyes from Google did confirm that the number sent was about 18.7% of the number of mobile-friendly warnings sent out earlier this year:

@jenstar about 18.7% of that sent for mobile issues a few months back

— Gary Illyes (@methode) July 29, 2015

Finding blocked resources

The email that Google sent to webmasters alerting them to the issue of blocked CSS and JavaScript was confusing. It left many webmasters unsure of what exactly was being blocked and what was blocking it, particularly because they were receiving warnings for JavaScript and CSS hosted on other third-party sites.

If you received one of the warning letters, the suggestion for how to find blocked resources was to use the Fetch tool in Google Search Console. While this might be fine for checking the homepage, for sites with more than a handful of pages, this can get tedious quite quickly. Luckily, there’s an easier way than Google’s suggested method.

There’s a full walkthrough here, but for those familiar with Google Search Console, you’ll find a section called “Blocked Resources” under “Google Index” which will tell you which JavaScript and CSS files are blocked and which pages they’re found on.

You also should make sure that you check for blocked resources after any major redesign or when launching a new site, as it isn’t entirely clear if Google is still actively sending out these emails to alert webmasters of the problem.

Homepage

There’s been some concern about those who use specialized scripts on internal pages and don’t necessarily want to unblock them for security reasons. John Mueller from Google said that they are looking primarily at the homepage—both desktop and mobile—to see what JavaScript and CSS are blocked.

So at least for now, while it is certainly a best practice to unblock CSS and JavaScript from all pages, at the very least you want to make it a priority for the homepage, ensuring nothing on that page is blocked. After that, you can work your way through other pages, paying special attention to pages that have unique JavaScript or CSS.

Indexing of Javascript & CSS

Another reason many sites give for not wanting to unblock their CSS and JavaScript is that they don’t want those files to be indexed by Google. But neither is a file type that Google will index, according to their long list of supported file types for indexation.

All variations

It is also worth remembering to check both the www and the non-www versions for blocked resources in Google Search Console. This is often overlooked by webmasters who tend to look only at the version they prefer to use for the site.

Also, because the blocked resources data shown in Search Console is based on when Googlebot last crawled each page, you could find additional blocked resources when checking them both. This is especially true for sites that are older or not updated as frequently, and so aren’t crawled daily (as a more popular site would be).

Likewise, if you have both a mobile version and a desktop version, you’ll want to ensure that both are not blocking any resources. It’s especially important for the mobile version, since it impacts whether each page gets the mobile-friendly tag and ranking boost in the mobile search results.

And if you serve different pages based on language and location, you’ll want to check each of those as well. Don’t just check the “main” version and assume it’s all good across the entire site. It’s not uncommon to discover surprises in other variations of the same site. At the very least, check the homepage for each language and location.

Wordpress and blocking Javascript & CSS

If you use one of the “SEO for Wordpress”-type plugins for a Wordpress-based site, chances are you’re blocking JavaScript and CSS due to that plugin. Blocking everything in the /wp-admin/ folder used to be an “out-of-the-box” default setting for some of them.

When the mobile-friendly algo came into play, the majority of Wordpress users left that robots block intact, because those admin pages were not being individually indexed. But this new Google warning does require that all Wordpress-related JavaScript and CSS be unblocked, and Google will show an error if you block them.

Yoast, creator of the popular Yoast SEO plugin (formerly Wordpress SEO), also recommends unblocking all the JavaScript and CSS in Wordpress, including the /wp-admin/ folder.

Third-party resources

One of the ironies of this was that Google was flagging third-party JavaScript, meaning JavaScript hosted on a third-party site that was called from each webpage. And yes, this includes Google’s own Google AdSense JavaScript.

Initially, Google suggested that website owners contact those third-party sites to ask them to unblock the JavaScript being used, so that Googlebot could crawl it. However, not many webmasters were doing this; they felt it wasn’t their job, especially when they had no control over what a third-party site blocks from crawling.

Google later said that they were not concerned about third-party resources because of that lack of control webmasters have. So while it might come up on the blocked resources list, they are truly looking for URLs for both JavaScript and CSS that the website owner can control through their own robots.txt.

John Mueller revealed more recently that they were planning to reach out to some of the more frequently cited third-party sites in order to see if they could unblock the JavaScript. While we don’t know which sites they intend to contact, it was something they planned to do; I suspect they’ll successfully see some of them unblocked. Again, while this isn’t so much a webmaster problem, it’ll be nice to have some of those sites no longer flagged in the reports.

How to unblock your JavaScript and CSS

For most users, it’s just a case of checking the robots.txt and ensuring you’re allowing all JavaScript and CSS files to be crawled. For Yoast SEO users, you can edit your robots.txt file directly in the admin area of Wordpress.

Gary Illyes from Google also shared some detailed robots.txt changes on Stack Overflow. You can add these directives to your robots.txt file in order to allow Googlebot to crawl all Javascript and CSS.

To be doubly sure you’re unblocking all JavaScript and CSS, you can add the following to your robots.txt file, provided you don’t have any directories being blocked in it already:

User-Agent: Googlebot
Allow: .js
Allow: .css
If you have a more specialized robots.txt file, where you’re blocking entire directories, it can be a bit more complicated.

In these cases, you also need to allow the .js and .css files for each of the directories you have blocked.

For example:

User-Agent: Googlebot
Disallow: /deep/
Allow: /deep/*.js
Allow: /deep/*.css

Repeat this for each directory you are blocking in robots.txt.

This allows Googlebot to crawl those files, while disallowing other crawlers (if you’ve blocked them). However, the chances are good that the kind of bots you’re most concerned about being allowed to crawl various JavaScript and CSS files aren’t the ones that honor robots.txt files.

You can change the User-Agent to *, which would allow all crawlers to crawl it. Bing does have its own version of the mobile-friendly algo, which requires crawling of JavaScript and CSS, although they haven’t sent out warnings about it.
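Using the same directives as above with a wildcard user-agent, that would look like this (again assuming nothing else in your robots.txt blocks these files):

User-Agent: *
Allow: .js
Allow: .css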

Bottom line

If you want to rank as well as you possibly can, unblocking JavaScript and CSS is one of the easiest SEO changes you can make to your site. This is especially important for those with a significant amount of mobile traffic, since the mobile ranking algorithm does require they both be unblocked to get that mobile-friendly ranking boost.

Yes, you can continue blocking Googlebot from crawling either of them, but your rankings will suffer if you do so. And in a world where every position gained counts, it doesn’t make sense to sacrifice rankings in order to keep those files private.



Are Your Analytics Telling the Right Story?

Posted by Bill.Sebald

A process can easily become a habit. A habit may not change without awareness or intervention.

Before it becomes a habit, a process should be adjusted to change along with new goals, constant learning, experimentation, and so on.

Considering your time in analytics, are you engaging in a process, or in an outdated habit?

That’s a real question digital marketing practitioners should ask themselves. Inevitably, marketers tend to be buried in work, reusing templates to speed up results. But many agencies lean on those templates a little too much, in my opinion.

Templates should never be written in stone.

If your company is pumping out canned reports, you’re not alone. I do the business development for our company and regularly ask prospects to explain or share the reports they’ve received in the past. Sometimes what they share is truly discouraging—outdated, wasteful, and the very reason businesses search for new SEO vendors.

Look—I’m all for scalability. It’s a huge help. But some things can’t be scaled and still be successful, especially in today’s SEO climate—or, frankly, marketing in general. Much of what was scalable in SEO prior to 2011 is now penalty-bait. Today’s analytics tools and platforms can slice and dice data faster than anything Ron Popeil ever sold, but the human element will always be necessary if you want your marketing to dominate.

Find the stories to tell

I like to tell stories. I’m real fun in the pub. What I’ve always loved about marketing is the challenge to not only find a story, but have that story change something for the better. I like adding my layer based on real data and experimenting.

Analytics work is all about finding the story. It’s detective work. It’s equal parts Sherlock Holmes, Batman, and Indiana Jones. If you’re lucky, the story jumps out with very little digging. However, it’s more likely you’ll be going on some expeditions. It’s common to start with a hunch or a random click through the reports, but you need to always be looking for the story.

A great place to start is through client conversations. We schedule at least one monthly call with our clients, where it’s truly a discussion session. We get conversations going to pull intel out of the key stakeholders. Case in point: Recently, we discovered through an open discussion that one of our clients had great success with an earlier email campaign targeted to business owners. There was specific information customers positively responded to, which was helpful in recent content development on their website. It’s amazing what you can learn by asking questions and simply listening to responses.

We should be true consultants, not report monkeys. Dive into the discussions started and enjoy the ride. I guarantee you’ll take note of a few ripe areas to review next time you log into your Google Analytics account.

An impromptu survey says it’s a time issue

Most SEO engagements are designed around a block of purchased hours. Hopefully the client understands they’re not only buying your time to complete SEO tasks, but also your expertise and analysis. If someone on your team were to say, “I don’t have time to do analysis because all my tasks used up their budget this month,” then you really need to question the value of the chosen tasks. Were they picked based on front-loaded analysis, or were they simply tasks pulled out of guesswork?

A few weeks ago I pushed a quick SurveyMonkey survey out on Twitter and LinkedIn. Thanks to a few retweets, 94 people responded (please consider the following results more directional than scientific—I’m well aware it’s a shallow survey pool). I asked two questions:

  1. If you work in-house or have clients, how often do you log into your clients’ analytics? (Multiple choices ranged from several times a day to a few times a month).
  2. Do you, or do you not, get enough time in Analytics to interpret the data?

The responses:


While some do make a habit of logging into analytics once or more a day, more do not. Is it required to check under the hood every day? Personally, I believe it is—but your answer may vary on that one. If something went south overnight, I want to be aware of it before my client tells me. After all, that’s one of the things I’m paid for. I like the idea of being proactive—not reactive.

More notable is that most respondents didn’t feel they get enough time in analytics. That should absolutely change.

There was also a field for respondents to elaborate on their selections. There were several comments that jumped out at me:

“In house, day to day tasks and random projects prevent me from taking the deep dives in analytics that I feel are valuable.”

“It’s challenging to keep up with the changes and enhancements made in Google Analytics in particular, amongst other responsibilities and initiatives.”

“Too many things are on my plate for me to spend the time I know I should be spending in Google Analytics.”

“Finding the actionable info in Analytics always takes more time than expected—never enough time to crunch the numbers!”

“I log in to ‘spot check’ things but rarely do I get to delve into the data for long enough to suss out the issues and opportunities presented by the data.”

These results suggest that many marketers are not spending enough time with analytics. And possibly not because they don’t see the value, but simply because they don’t have time. “Either you run the day, or the day runs you (Jim Rohn)” is apropos here—you must make time. You need to get on top of all the people filling your plate. It’s not easy, but it needs to be done.

Get on top of those filling your plate. Kind of like professional crowd surfing.

Helpful resources

Dashboards are fantastic, but I rarely see them set up in analytics platforms. One of the best ways to get a quick glimpse of your key metrics is with dashboards. All good analytics platforms provide the ability to make custom dashboards. Get into work, grab a coffee, fire up the computer, click your dashboard bookmark. (I recommend that order!) Google Analytics, which most of us probably use, provides some decent options with its dashboards, though they’re limited compared to enterprise analytics platforms.

However, this basic dashboard is the minimum you should review in analytics. We’ll get deeper soon.

Building these widgets is quite easy (I recently created a tutorial on my site). There are also websites that provide dashboards you can import into Google Analytics. Dashboard Junkie is a fun one. Here are some others from Econsultancy and Google themselves.

It’s not just analytics platforms that offer dashboards. There are several other vendors in the SEO space that port in analytics data and mesh with their own data—from Moz Analytics to SearchMetrics to Conductor to many, many others.

SEMrush has a unique data set that marketers should routinely review. While the traffic data in your analytics will be more accurate, if you’re targeting specific pages you may be interested in monitoring keyword rank counts:

Are backlinks a target? Maybe you’d find Cognitive SEO’s dashboard valuable:


RankRanger is another SaaS we use. It’s become way more than just our daily rank tracking software. The data you can port in creates excellent snapshots and graphs, and strong dashboards:


It also offers other graphing functionality to make pretty useful views:

While some of the bigger platforms, like SearchMetrics and Conductor, make it easier to get a lot of information within one login, I’m still finding myself logging into several programs to get the most useful data possible. C’est la vie.

Analytics is your vehicle for identifying problems and opportunities

Remember, dashboards are simply the “quick and dirty” window into your site. They help spotlight drastic changes, and make your website’s general traction more visible. Certainly valuable for when your CMO corners you by the Keurig machine. It’s a state of the union, but doesn’t focus on subsections that may need attention.

Agencies and consultants tend to create SEO reports for their clients as a standard practice, though sometimes these reports become extremely boilerplate. Boilerplate reports essentially force you to look under the same rocks month after month. How can you get a bigger view of the world if you never leave your comfortable neighborhood? A new routine needs to be created by generating new reports and correlations, finding trends that were hidden, and using all the tools at your disposal (from Analytics to link tools to competitive tools).

Your analytics app is not a toy—it’s the lifeblood of your website.

Deeper dives with Google Analytics

Grouped pages lookup

A quick way to look at chunks of the site is by identifying a footprint in the URL and searching with that. Go to Behavior > Site Content > All Pages or Landing Pages, then, in the search bar right below the graph, search for the footprint. For example, take www.mystoreisdabomb.com/blog/2015/ as a sample URL: if you want to see everything in the blog, enter */blog/ into the search bar. This is especially useful in getting the temperature of an eCommerce category.

Segment sessions with conversions/transactions

So often in SEO we spend our time analyzing what’s not working or posing as a barrier. This report helps us take a look at what is performing (by leads or sales generated) and the customer behavior, channels, and demographic information that goes along with that. Then we can identify opportunities to make use of our success and improve our overall inbound strategy.

Below is a deeper dive into the conversions “Lead Generation” segment, although these same reports can just as aptly be applied to transactions. Ultimately, there are a lot of ways to slice and dice the analysis, so you’ll have to know what makes sense for your client, but here are three different reports from this segment that provided useful insights that will enhance our strategy.

  • Conversions
    One of the easy and most valuable ones! Directions: Under any report, go to Add a Segment > Sessions with Conversions > Apply.
  • Demographics – age, gender, location
    For example, our client is based in Pennsylvania, but is receiving almost as many request form submissions from Texas and New York, and has a high ratio of request form submissions to visitors for both of these other states. Given our client’s industry, this gives us ideas on how to market to these individuals and additional information the Texans may need given the long distance.
  • Mobile – overview, device type, landing pages
    For this client, we see more confirmation of what has been called the “micro-moment,” in that our mobile users spend less time on the site, view fewer pages per visit, have a higher bounce rate, and are more likely to be new users (less brand affinity). This would indicate that the site is mobile optimized and performing as expected. From here, I would next go into mobile traffic segments to find pages that aren’t receiving a lot of mobile traffic, but are similar to those that are, and find ways to drive traffic to those pages as well.
  • Acquisition
    Here we’re looking at how the inbound channels stack up for driving conversions. Organic and Paid channels are neck and neck, although referral and social are unexpected wins (and social, glad we’ve proven your viability to make money!). We’ll now dig deeper into the referring sites and social channels to see where the opportunities are here.

Assisted conversions

There’s more to the story than last click. In Analytics, go to Conversions > Multi-Channel Funnels > Assisted conversions. Many clients have difficulty understanding the concept of attribution. This report seems to provide the best introduction to the world of attribution. Last click isn’t going to be replaced anytime soon, but we can start to educate and optimize for other parts of the funnel.

True stories from analytics detective work

Granted, this is not a post about favorite reports. But this is a post about why digging through analytics can open up huge opportunities. So, it’s real-life example time from Greenlane’s own experience!

Story 1: The Forgotten Links

The client is a big fashion brand. They’ve been a popular brick-and-mortar retail destination since the early 80s, but only went online in 1996. This is the type of company that builds links based on their brand ambassadors and trendy styles. SEO wasn’t the mainstream channel it is today, so it’s likely they had some serious architecture changes since the 90s, right?

For this company, analytics data can only be traced back about seven years. We thought, “Let’s take a look at what drove traffic in their early years. Let’s see if there were any trends that drove volume and sales where they may be slipping today. If they had authority then, and are slipping now, it might be easier to recoup that authority versus building from scratch.”

The good news—this brand had been able to essentially maintain the authority they launched with, as there were no real noticeable gaps between search data then and search data today. But, in the digging, we uncovered a gem. We found a lot of URLs that used to draw traffic that are not on their tree today. After digging further, we found a redesign occurred in the late 90s. SEO wasn’t factored in, creating a ton of 404s. These 404s were not even being charted in Google Webmaster Tools, yet they are still being linked to today from external sites (remember, GWT is still quite directional in terms of the data they provide). Better yet, we pulled links from OSE and Majestic, and saw that thousands of forgotten links existed.

This is an easy campaign—create a 301 redirect matrix for those dead pages and bring those old backlinks to life.

But we kept wondering what pages were out there before the days where analytics was implemented. Using the Wayback Machine, we found that even more redesigns had occurred in the first few years of the site’s life. We didn’t have data for these pages, so we had to get creative. Using Screaming Frog, we crawled the Wayback Machine to pull out URLs we didn’t know existed. We fed them into the link tools, and sure enough, there were links there, too.

Story 2: To “View All” or Not To “View All”

Most eCommerce sites have pagination issues. It’s a given. A seasoned SEO knows immediately to look for these issues. SEOs use rel=”next” and “prev” to help Google understand the relationships. But does Google always behave the way we think they should? Golly, no!

Example 2 is a company that sells barware online. They have a lot of products, and tend to show only “page 1” of a given category. Yet, the analytics showed instances where Google preferred to show the view all page. These were long “view all” pages, which, after comparing to the “page 1” pages, showed a much lower bounce rate and higher conversions. Google seemed to prefer them in several cases anyway, so a quick change to default to “view all” started showing very positive returns in three months.

Story 3: Selling What Analytics Says to Sell

I have to change some details of this story because of NDAs, but once upon a time there was a jewelry company that sold artisan products. They were fond of creating certain kinds of keepsakes based on what sold well in their retail stores. Online, though, they weren’t performing very well selling these same products. The website was fairly new and hadn’t quite earned the footing they thought their brand should have, but that wasn’t the terminal answer we wanted to give them. Instead, we wanted to focus on areas they could compete with, while building up the entire site and turning their offline brand into an online brand.

Conversion rates, search metrics, and even PPC data showed a small but consistent win on a niche product that didn’t perform nearly as well in the brick-and-mortar stores. It wasn’t a target for us or the CEO. Yet online, there was obvious interest. Not only that, with low effort, this series of products was poised to score big in natural search due to low competition. The estimated search volume (per Google Keyword Planner) wasn’t extraordinary by any stretch, but it led to traffic that spent considerable dollars on these products. So much so, in fact, that this product became a focus point of the website. Sometimes, mining through rocks can uncover gold (jewelry pun intended).

Conclusion

My biggest hope is that your takeaway after reading this piece is a candid look at your role as an SEO or digital marketer. You’re a person with a “unique set of skills,” being called upon to perform works of brilliance. Being busy does create pressure; that pressure can sometimes force you to look for shortcuts or “phone it in.” If you really want to find the purest joy in what you’ve chosen as a career, I believe it’s from the stories embedded within the data. Go get ’em, Sherlock!



How to Get Your App Content Indexed by Google

Posted by bridget.randolph

As mobile technology becomes an increasingly common way for users to access the internet, you need to ensure that your mobile content (whether on a mobile website or in a mobile app) is as accessible to users as possible. In the past this process has been relatively siloed, with separate URLs for desktop and mobile content and apps tucked away in app stores.

But as app and mobile web usage continues to rise, the ways in which people access this content are beginning to converge, which means it’s becoming more important to keep all of these different content locations linked up. This means that the way we think about managing our web and mobile content is evolving:

So how do we improve the interaction between these different types of content and different platforms, getting to the point of being able to have a single URL which takes the user to the most appropriate version of the content based on their personal context?

The first step is to ensure that we are correctly implementing deep linking (e.g., linking to a particular screen within an app) for apps which have comparable webpage content, to allow for our app content to rank in mobile search.

Image credit: Google Developers

Google indexation provides benefits for both Android and iOS apps. The benefits for Android apps are twofold:

  • users searching on an Android device who have not yet installed your app will see the app show up in mobile search results; and
  • Android users who do have your app installed will get query autocompletions when they use browser search which can include results from your app, as well as seeing enhanced display elements in the SERP (such as the app icon). It’s basically like rich snippets for apps.

Image credit: Google Developers

On iOS, app ranking is currently only supported for apps already installed on the device. Apple users should see search results which include links to installed apps and also include the enhanced display elements mentioned above.

In addition, Google recently announced that mobile apps which use the new App Indexing API for deep linking may receive a rankings boost in mobile web search. They are releasing a new and improved version of Google Now, “Now on Tap,” in their latest OS update (Android M), which allows you to search content across your phone without navigating out of whatever app (or website) you are currently using. The catch is, that app content has to be in their index in order to be included in a “Now on Tap” search.

It’s not just Google, either; Apple is implementing their own version of a search index to allow iOS9 users to search and discover web and app content without using a third-party search engine, Bing has its own approach to app indexation and ranking, and other services aren’t far behind.

This post, however, will focus on how to set up your Android and iOS apps to appear in Google search results. While the idea of app indexation isn’t new, it is an area of rapid innovation, and the process for getting your apps indexed by Google has recently been simplified. This post is therefore intended to provide a brief overview of that process and to serve as an update to the information that is currently available.

The implementation

The good news is that it’s getting simpler to add the relevant markup to your web content and get your app content indexed and ranking in mobile search results.

The basic process is only three steps:

  1. Support HTTP deep links in your mobile app. For iOS you will need to do this by setting up support for “Universal Links.” “Universal Links” are what Apple calls HTTP links that have a single URL which can open both a specific page on a website and the corresponding view in an app.
    Note: At this point, you can register your app with Google, associate it with your website and stop there—as long as you are using the same URLs for your web content and your app content, they should be able to automatically crawl, index, and attempt to rank your app content based on your website’s structure. However, implementing App Indexing and explicitly mapping your web content to your app content using on-page markup can provide additional benefits and allow for a bit more control. Therefore, I recommend following the full process, if possible.
  2. Implement Google App Indexing using the App Indexing API for Android, or by integrating the App Indexing SDK for iOS 9.
  3. Explicitly map your web pages to their corresponding app screens using either a rel=alternate link element on the individual page, a reference to the app URLs in your XML sitemaps, or schema.org markup.

You can find a more step-by-step explanation of this process (looking at Android and iOS separately) below.


The app indexation process used to be a bit more complex, because HTTP links aren’t supported by older iOS versions. Instead, developers had to use something called “Custom URL Schemes” to link to iOS app content. This meant that you essentially had to create a unique scheme for your app URLs and then add support for these in the app code.

Custom URL schemes have a couple other downsides besides adding complexity, namely:

  • different app developers can claim the same custom URL scheme, whereas with HTTP links you can associate the app to a particular domain or set of domains; and
  • with custom URL schemes, tapping the URL when the app isn’t installed results in a broken link (because it only links to content within the app), whereas HTTP links are web links as well and can take the user to a webpage if the app isn’t installed (as long as the URL is the same for both the app view and the corresponding webpage).

While you can still use the custom URL scheme approach, the good news is that Google’s App Indexing is now compatible with HTTP deep link standards for iOS 9, which Apple calls “Universal Links.”

You should still add markup to any webpages which have content corresponding to a particular app screen. Think of it like rel=canonical or like mobile switchboard tags, but for apps. Be aware that when Google finds a link between a webpage and an app page which they think are equivalent, they will compare the two pages, and you will receive a ‘Content Mismatch’ error in the Search Console if they don’t believe the content is similar enough.

Getting Android apps indexed in Google

Step 1: Support HTTP deep links in your app by adding intent filters to your manifest.

An intent filter is a way of specifying how an app responds to a particular action. Intent filters for deep links have three required elements: <action>, <category>, and <data>. You can find more guidance on this from Google Developers. Here is their example of an intent filter which enables support for HTTP deep links:

<!-- The intent filter is declared inside the deep-linked activity's <activity> element
     in AndroidManifest.xml; the activity name below is just a placeholder. -->
<activity android:name=".RecipeActivity">
  <intent-filter android:label="@string/filter_title_viewrecipes">
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="http"
          android:host="recipe-app.com"
          android:pathPrefix="/recipes" />
  </intent-filter>
</activity>

Noindex option:
Just like for websites, you can add noindex directives for app content as well. Include a noindex.xml file in your app to indicate which deep links should not be indexed, and then reference that file in the app’s manifest (AndroidManifest.xml) file. You can find more detail on how to create and reference the noindex.xml file here.
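For reference, a minimal noindex.xml might look something like the following—this is a sketch based on the element names Google documented at the time, and the URIs are hypothetical:

<?xml version="1.0" encoding="utf-8"?>
<search-engine xmlns:android="http://schemas.android.com/apk/res/android">
  <!-- Exclude a single deep link from indexing -->
  <noindex uri="http://recipe-app.com/recipes/hidden-recipe"/>
  <!-- Exclude everything under a path prefix -->
  <noindex uriPrefix="http://recipe-app.com/internal"/>
</search-engine>

It is then referenced from the <application> element of AndroidManifest.xml with a meta-data entry along these lines:

<meta-data android:name="search-engine" android:resource="@xml/noindex"/>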

Step 2: Associate your app to your site in Google Search Console.

This is done in Google Search Console (you can also do it from the Developer Console). As long as your app is set up to support deep links, this step is technically all you have to do to allow Google to start indexing your app. It will allow Google to index and crawl your app automatically by attempting to figure out the app structure from your website structure.

However, if you do stop here, you will not have as much control over how Google understands your content, which is why the explicit mapping of pages to app versions is recommended. Also, if you can’t use the API for some reason, you need to make sure that Googlebot can access your content. You can check that this is configured correctly in your site’s robots.txt file by testing some of your deep links using the robots.txt tester tool in the Search Console.

Step 3: Implement app indexing using the App Indexing API.

Using the App Indexing API is definitely worthwhile; apart from anything else, apps which use the API should receive a rankings boost in mobile search results, and you don’t need to worry about Googlebot struggling to access your content.

The App Indexing API allows you to annotate information about the activities within your app that support deep links (as laid out in your intent filters). For details on how to set this up, see the Google Developers guidance.
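As a rough sketch only—class, field, and method names as they appeared in Google’s sample code for the App Indexing API at the time, with a hypothetical package name, URLs, and title—the calls wrap each deep-linked activity’s visible lifetime like this:

import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;

import com.google.android.gms.appindexing.Action;
import com.google.android.gms.appindexing.AppIndex;
import com.google.android.gms.common.api.GoogleApiClient;

public class RecipeActivity extends Activity {

    // Hypothetical deep link and canonical web URL for this screen
    private static final Uri APP_URI =
            Uri.parse("android-app://com.recipeapp.android/http/recipe-app.com/recipes/grilled-potato-salad");
    private static final Uri WEB_URL =
            Uri.parse("http://recipe-app.com/recipes/grilled-potato-salad");
    private static final String TITLE = "Grilled Potato Salad";

    private GoogleApiClient mClient;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Build a client that includes the App Indexing API
        mClient = new GoogleApiClient.Builder(this).addApi(AppIndex.API).build();
    }

    @Override
    protected void onStart() {
        super.onStart();
        mClient.connect();
        // Tell Google the user has started viewing this deep-linked content
        Action viewAction = Action.newAction(Action.TYPE_VIEW, TITLE, WEB_URL, APP_URI);
        AppIndex.AppIndexApi.start(mClient, viewAction);
    }

    @Override
    protected void onStop() {
        // Mark the view as ended and disconnect the client
        Action viewAction = Action.newAction(Action.TYPE_VIEW, TITLE, WEB_URL, APP_URI);
        AppIndex.AppIndexApi.end(mClient, viewAction);
        mClient.disconnect();
        super.onStop();
    }
}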

Step 4: Test your implementation.

You can test your implementation (always on a fresh installation of your app!) with the following tools. (Find more info about how to use each of these tools here.)

Android Debug Bridge – to test deep links from the command line

Fetch as Google (Search Console) – to test what Google sees when it crawls your app deep links

You can also track search traffic to these deep links in the Search Console’s Search Analytics report.

Getting iOS apps indexed in Google

Step 1: Support HTTP deep links in your app by setting up support for “Universal Links.”

To support universal links in your iOS app, you need to first ensure that your app handles these links correctly by adopting the UIApplicationDelegate methods (if it doesn’t already use this protocol). Once this is in place, you can associate your app with your domain.

You’ll do this by:

  • adding an “associated domains” entitlement file to your app’s project in XCode that lists each domain associated with your app; and
  • uploading an apple-app-site-association file to each of these domains with the content your app supports—note that the file must be hosted at the root level and on a domain that supports HTTPS.

To learn more about supporting Universal Links, view the Apple Developer guidance.
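For context, a bare-bones apple-app-site-association file (in its iOS 9 form) looks roughly like the following—the team ID, bundle identifier, and paths here are placeholders, not values from any real app:

{
    "applinks": {
        "apps": [],
        "details": [
            {
                "appID": "ABCDE12345.com.gizmos.ios",
                "paths": [ "/example", "/recipes/*" ]
            }
        ]
    }
}

The "apps" key must be present but left empty, and the "paths" array lists the URL paths your app is able to handle.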

Step 2: Register your app with Google (using the GoogleAppIndexing SDK for iOS 9).

You’ll need to add the App Indexing SDK to your app using the CocoaPods dependency manager. For step-by-step instructions, check the Google Developers guide. Basically, this allows you to register your app with Google, just as Android apps are registered via the Search Console. It also means that Google can read the apple-app-site-association file to understand which URLs your app can open.
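If you haven’t worked with CocoaPods before, the Podfile entry was a one-liner at the time—the pod name below is taken from Google’s documentation of that period, and the target name is just a placeholder:

platform :ios, '9.0'
target 'Gizmos' do
  pod 'GoogleAppIndexing'
end

Run pod install afterwards and open the generated .xcworkspace rather than the .xcodeproj.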

Step 3: Test your implementation.

You can test whether this is set up correctly by tapping a universal link in Safari on an iOS 9 device and checking that it opens the right location in your app.

Mapping your webpages to your app with on-page markup or sitemaps

Once you’ve set up the deep linking support for your Android and/or iOS app(s), the final step is to explicitly map the corresponding webpages to the correct app screens using one of the supported markup options. This step allows you to indicate more clearly to Google what the relationship is between a given page and its corresponding app link (both of which should already share the same URL if you are using HTTP links). Following this step also allows you to indicate the relationship to Bing crawlers, which otherwise wouldn’t see the app content, and allows Apple to index your iOS app.

You can do this mapping either in the head of the individual page using a link element, using schema.org markup (for Android only), or in an XML sitemap.

A note on formats for app links

An Android HTTP app link uses the following format:

android-app://{package_name}/http/{host_path}

The {package_name} is the app’s “Application ID,” which is how it is referenced in the Google Play Store. So a link to the (example) Gizmos app might look like this:

android-app://com.gizmos.android/http/gizmos.com/example

For iOS links, you use the app’s iTunes ID instead of the Package Name. So an iOS app URL uses this format:

ios-app://{itunes_id}/{scheme}/{host_path}

For HTTP links the {scheme} is “http,” which would mean your URL would look like this:

ios-app://{itunes_id}/http/{host_path}
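So, using a hypothetical iTunes ID of 123456, a link to the Gizmos example page from above would look like this:

ios-app://123456/http/gizmos.com/example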

How to reference your app links

Note: Google provides guidance on the three currently supported deep link methods here.

Option 1: Link rel=alternate element

To add an app link reference to an individual page, you can use an HTML <link> element in the <head> of the page.

Here is an example of how this might look if you have both an iOS and Android app:

<html>
<head>
...
<link rel="alternate" href="android-app://com.gizmos.android/http/gizmos.com/example" />
<link rel="alternate" href="ios-app://123456/http/gizmos/example" />
</head>
<body> … </body>
</html>

Option 2: Schema.org markup (currently supported on Android only)

Alternatively, if you have an Android app, you can use schema.org markup for the ViewAction potential action on an individual page to reference the corresponding app link.

Here is an example of how this might look:

<script type="application/ld+json">
{
"@context": "http://schema.org",
"@type": "WebPage",
"@id": "http://gizmos.com/example",
"potentialAction": {
"@type": "ViewAction",
"target": "android-app://com.gizmos.android/http/gizmos.com/example"
}
}
</script>

Option 3: Add your app deep links to your XML sitemap

Instead of marking up individual pages, you can use an <xhtml:link> element in your XML sitemap, inside the <url> element specifying the relevant webpage.

Here is an example of how this would look if you have both an iOS and an Android app:

<?xml version="1.0" encoding="UTF-8" ?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml">
<url>
<loc>http://gizmos.com/example</loc>
<xhtml:link rel="alternate" href="android-app://com.gizmos.android/http/gizmos.com/example" />
<xhtml:link rel="alternate" href="ios-app://123456/http/gizmos/example" />
</url>
...
</urlset>

Additional information

What about apps which don’t have corresponding web pages?

Unfortunately, as of this writing, Google does not officially offer app indexation for apps which don’t have corresponding web content. However, they are trying to move in this direction, and as such are beginning to try this out with a handful of apps with “app-only” content. If you have an app with app-only content, and would like to get this content indexed, you can express interest using this form.

What about getting my app indexed in Bing?

Bing supports two open standard options for linking webpages to app links:

To learn more about how to implement these types of markup, see the guidance on the Bing blog.

Quick reference checklists

Will Critchlow recently spoke about app indexation in his presentation at Searchlove London. He provided two useful checklists for Android and iOS app indexing:

Image source: http://www.slideshare.net/DistilledSEO/searchlove-…

To learn more about app indexing by Google, check out Emily Grossman and Cindy Krum’s excellent post over on SearchEngineLand.



Will Google Bring Back Google Authorship?

Posted by MarkTraphagen

Recently, Google Webmaster Trends analyst Gary Illyes surprised many of us with a remark he made during his keynote Q&A with Danny Sullivan at SMX East in New York City. Illyes said that he recommended webmasters not remove the rel=author tag from their site content.

Google had used rel=author as part of its Google Authorship feature that (potentially) displayed a special author rich snippet in search results for content using the tag. Google ended support of this feature in August 2014.

The phrase that made everyone sit up and say, “Did he just say that?” was this: “…because it is possible Google might make use of [rel=author] again in the future.”

Even though Google’s John Mueller made the same recommendation when he announced that Google was no longer making use of Google Authorship in search (to be precise, Mueller said leaving the tag in place “did no harm”), Illyes’s statement seemed to shock many because Google had said nothing about Google Authorship or the rel=author tag since announcing the end of support.

In a subsequent Twitter exchange I had with Gary Illyes, he explained that if enough users are implementing something, Google might consider using it. I asked him if that meant specifically if more people started using rel=author again, that Google might make use of it again. Illyes replied, “That would be safe to say.”

Before I provide my commentary on what all this means, and whether we should expect to see a resumption of Google Authorship in Google Search, let me provide a brief overview of Authorship for anyone who may not be familiar with it. If you already understand Google Authorship, feel free to skip down to the Will Google Bring Back Authorship? section.

A brief history of Google Authorship

Google Authorship was a feature that showed in Google Search results for about three years (from July 2011 until August 2014). It allowed authors and publishers to tag their content, linking it to an author’s Google+ profile, in order to provide a more-certain identification of the content author for Google.

In return, Google said they might display an authorship rich snippet for content so tagged in search results. The authorship rich snippet varied in form over the three years Authorship was in use, but generally it consisted of the author’s profile photo next to the result and his or her byline name under the title. For part of the run of Authorship, one could click on an author byline in search to see results showing related content from that author.
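
For those who never implemented it, the tagging itself was simple: a page pointed at the author’s Google+ profile with a rel=author link, and the Google+ profile in turn had to list the site under “Contributor to” for the connection to be verified. A minimal sketch (the profile ID below is a made-up placeholder):

<!-- In the page head: -->
<link rel="author" href="https://plus.google.com/112345678901234567890" />

<!-- Or as a byline link in the body: -->
<a rel="author" href="https://plus.google.com/112345678901234567890">Author Name</a>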

Google Authorship began with an official blog post in June of 2011 where Othar Hansson announced that Google would begin supporting the rel=author tag, but with no specifics on how they might use it.

Then in a July 2011 video, Hansson and Matt Cutts explained that Google+ would be the hub for author identification, and that Google might start showing a special Authorship rich snippet result for properly tagged content.

Those rich snippets slowly began appearing for more and more authors using rel=author over the next several months. During the three years of the program, Google experimented with many different configurations of the rich snippet, and also which authors and content would get it in response to various search queries.

Interest in Google Authorship from the SEO and online marketing communities was spurred even more by its possible connection to Google’s Agent Rank patent, first revealed by Bill Slawski. In this patent, Google described a system by which particular “agents” or “entities” could be identified, scored by their level of authority, and that score then be used as a search ranking factor.

Since one of the types of agents identified in the patent was a content author, the patent rapidly became known as “author rank” in the SEO community. The connection with Authorship in particular, though, came from Cutts and Hansson stating in the above-mentioned Authorship video that Google might someday use Authorship as a search ranking factor.

Speculation about so-called Author Rank, and whether or not it was “on” as a ranking factor, continued throughout the life of the Authorship program. Throughout that period, however, Cutts continued to refer to it as something Google might do in the future. (You can find my own take on why I believed Authorship was never used as a direct ranking factor here.)

The first hint that Google might be drawing back from Authorship came at Pubcon Las Vegas in October 2013 when Matt Cutts, in his keynote “State of Search” address, revealed that at some point in the near future Google would be cutting back on the amount of Authorship rich snippets shown by “around 15%.” Cutts said that in experiments, Google found that reducing Authorship rich snippets by that much “improved the quality of those results.”

Sure enough, in early December of that year, Moz’s Peter Meyers detected a rapid decline over several days in the number of Authorship rich snippets in search results, as measured by his Mozcast Features tool.

Around that same time Google implemented what I called “two-class Authorship,” a first class of authors who continued to get the full rich snippet, and a second class who now got only a byline (no author photo).

Finally, in August 2014, this author was contacted directly by John Mueller, offering to share some information under an NDA embargo until the information was made public. In my call with Mueller, he told me that he was letting me know 24 hours in advance that Google Authorship was going to be discontinued. He added that he was making this call as a courtesy to me since I had become the primary non-Google source of information about Authorship.

With that information, Eric Enge and I were able to compose an in-depth article on Authorship and its demise for Search Engine Land that went live within two minutes of John Mueller’s own public announcement on Google+. In our article linked above, Eric and I give our takes on the reasons behind the death of Authorship and the possible future of author authority on Google.

Will Google bring back Authorship?

From the day Authorship was “killed” in August 2014, we heard no more about it from Google—until Gary Illyes’s remarks at SMX East. So do Gary’s remarks mean we should expect to see a return of Google Authorship to search results?

I don’t think so, at least not in any form similar to what we saw before.

Let me explain why.

1. Illyes made no promise. Far too often people take statements about what Google “could” or “might” do from spokespersons like Gary Illyes, Matt Cutts, and John Mueller and translate “could/might” to “will.” That is unfair to those spokespeople, and an abuse of what they are saying. Just because something is spoken of as a possibility, it does not follow that a promise is being made.

2. It ain’t broke so…. So if there are no actual plans by Google to restore Google Authorship, why would Illyes make a point of stating publicly that authors and publishers should continue to use the rel=author tag? I think a primary reason may be that once Google gets any set of people to begin using any kind of schema, they’d rather have it remain in place. Anything that helps better organize the information on web pages is good for a search engine, whether or not that particular information is “in play” at present.

In the case of rel=author, I think it may still be useful to Google to be able to identify with confidence the content connected with certain authors. When Authorship ended, many people asked me whether I was going to remove the tags from my content. My response: why would I? Having them there doesn’t hurt anything. More importantly, as an author trying to build my personal brand reputation online, why wouldn’t I want to give Google every possible hint about the content with which I should be identified?

3. The reasons why Authorship was killed still remain. As with any change in Google search, we’ll probably never know all the reasons behind it, but the public reasons stated by John Mueller centered on Google’s commitment to a “mobile first” user experience strategy. Mobile first is a recognition that search is more and more a mobile experience. Recently, Google announced that more searches are now done on mobile devices than on desktop. That trend will likely never reverse.

In response, we’ve seen Google continually moving toward simpler, cleaner, less-cluttered design in all its products, including search. Even their recent logo redesign was motivated by the requirements of the small screen. According to Mueller, Authorship snippets were too much clutter for a mobile world, with not enough user benefit to justify their continuation.

In our Search Engine Land article, Eric Enge and I speculated that another reason Google may have ended the Authorship experiment was relatively poor adoption of the tagging, low participation in Google+ (which was being used as the “anchor” on Google’s side for author identification), and incorrect implementation of the tags by many who did try to use them.

On the latter point, Enge conducted a study of major publishers, which showed that even among those who bothered to implement the authorship tagging, the majority was doing it wrong. That was true even among high-tech and SEO publications!

All of that points to a messy and lopsided signal, not the kind of signal a search engine wants. At the end of the day, Google couldn’t guarantee that a result showing an Authorship rich snippet was really any better than the surrounding results, so why give it such a prominent highlight?

Despite Gary Illyes saying that if more sites used rel=author Google might begin using it again, I don’t see that doing so would change any of the conditions stated above. Therefore, I believe that any future use of rel=author by Google, if it ever occurs, will look nothing like the Authorship program we knew and loved.

So is there any future for author authority in search?

To this question, I answer a resounding “Yes!”

Every indication I’ve had from Googlers, both publicly and privately, is that author authority continues to be of interest to them, even if they have no sound way to implement it yet.

So how would Google go about assessing author identity and authority in a world where authors and publishers will never mass-tag everything accurately?

The answer: the Knowledge Graph, entity search, and machine learning.

The very first attempts at search engines were mostly human-curated. For example, the original Yahoo search was fed by a group of editors who attempted to classify every web page they came across. But as the World Wide Web took off and started growing exponentially, it was quickly obvious that such attempts couldn’t scale. Hyperlinks between web pages as a means of assessing both the subject matter and relative authority of web pages proved to be a better solution. Search at the scale of the web was born.

Remember that Google’s actual mission statement is to “organize the world’s information.” Over time, Google realized that just knowing about web pages was not enough. The real world is organized by relationships between entities—persons, places, things, concepts—and Google needed a way to learn the relationships between those things, also at scale.

The Knowledge Graph is the repository of what Google is learning, and machine learning is the engine that helps them do that learning at scale. At a simple level, search engine machine learning is the development of an algorithm that learns on its own as a result of feedback mechanisms. Google is applying this technology to the acquisition of and linking together of entities and their relationships at scale.

It’s my contention that this process will be the next evolutionary step that will eventually enable Google to identify authors who matter on a given topic with their actual content, evaluate the relative authority of that content in the perceptions of readers, and use that as a search ranking factor.

In fact, Matt Cutts seemed to hint at a Knowledge Graph-based approach in a June 2013 video about the future of authorship where he talked about how Google was moving away from dependence on keywords, from “strings to things,” figuring out how to discover the “real-world people” behind web content and “their relationships” to improve search results.

Notice that nothing in a machine learning process is dependent upon humans doing anything other than what they already do on the web.

The project is already underway. Take a moment right now and ask Google, “Who is Mark Traphagen?” If you are in the US or most English-speaking countries, you’ll probably see this at the top of the results:

That’s a Knowledge Panel result from Google’s Knowledge Graph. It reveals a couple of things:

1. Google has a high confidence that I’m likely the droids, er, the “Mark Traphagen” you’re looking for. There are a few other Mark Traphagens in the world who potentially show up in Google Search, but Google sees that the vast majority of searchers who search for “Mark Traphagen” are looking for a result about me. Thanks, everybody!

2. Google has high confidence that the Mark Traphagen you’re looking for is the guy who writes for Search Engine Land, so that site’s bio for me is likely a good instant answer to your lifelong quest to find the Real Mark Traphagen (a quest some compare to the search for the Holy Grail).

If Google can continue to do that at scale, then they can lick a problem like assessing author authority for search rankings without any help from us, thank you very much.

How does all this fit with Gary Illyes’s recommendation? I think that while Google knows it ultimately has to depend on machine learning to carry off such projects at scale, any help we can give the machine along the way is appreciated. Back in the Google Authorship I days, some of us (myself included) believed that one of the real purposes for the Authorship project was to enlist our help in training the machine learning algorithm. It may be that rel=author is still useful for that.

What might Authorship look like in the future?

Allow me to speculate a bit.

I don’t expect we’ll ever again see the mass implementation of author rich snippets we saw before, where almost anyone could get highlighted just for having used the tagging on their content and having a Google+ profile. As I stated above, I think Google saw that doing that was a non-useful skewing of the results, as more people were probably clicking on those rich snippets without necessarily getting a better piece of content on the other end.

Instead, I would expect that Google would see the most value in identifying the top few authors for any given topic, and boosting them. This would be similar to their behavior with major brands in search. We often see major, well-known brands dominating the top results for commercial queries because user behavior data tells Google that’s what people want to see. In a similar way, people might be happy to be led directly to authors they already know and trust. They really don’t care about anyone else, no matter how dashing their profile image might be.

Furthermore, for reasons also stated above, I don’t expect that we’ll see a return to the full rich snippets of the glory days of Authorship I. Instead, the boost to top authors might simply be algorithmic; that is, other factors being equal, their content would get a little ranking boost for queries where they are relevant to the topic and the searcher.

It’s also possible that such authors’ content could be featured in a highlighted box, similar to how we see local search results or Google News results now.

But notice what I said above: “…when [the authors] are relevant to the topic and the searcher.” That latter part is important, because I believe it is likely that personalization will come into play here as well. It makes sense that boosting or highlighting a particular author has the most value when my search behavior shows that author already has value to me.

We already see this at work with Google+ posts in personalized (logged in) search. When I search for something that AJ Kohn has posted on Google+ while I’m logged in to my Google account, Google will elevate that result to my first page of results and even give it a good old-fashioned Authorship rich snippet! Google has high confidence that’s a result I might want to see because AJ is in my circles, and my interactions with him and his content show that he is probably very relevant and useful to me. Good guess, Google, you’re right!

It is now obvious that Google knows they have to expand beyond Google+ in entity identification and assessment. If Google+ had taken off and become a real rival to Facebook, Google’s job might have been a lot easier. But in the end, building machine learning algorithms that sniff out our “who’s who” and “who matters to whom” may be an even better, if vastly more difficult, solution.

So to sum up, I do expect that at some point in the future, author authority will become a factor in how Google assesses and ranks search results. However, I think that boost will be a “rich get richer” benefit for only the top, most reputable, most trusted authors in each topic. Finally, I think the output will be more subtle and personalized than we saw during the first attempt at Authorship in search.

How to prepare for Authorship II

Since it is unlikely that Authorship II, the future implementation of author identity and authority in search, will be anything like Authorship I, is there anything you can be doing to increase the odds that Authorship II will benefit you and your content? I think there are several things.

1. Set a goal of being the 10X content creator in your niche. Part of the Gospel According to Rand Fishkin these days is that “good, unique” content is not good enough anymore. In order to stand out and get the real benefits of content, you’ve got to be producing and publishing content that is ten times better than anything currently on page one of Google for your topic. That means it’s time to sacrifice quantity (churning out posts like a blogging machine) for quality (publishing only that which kicks butt and makes readers stand up, take notice, and share, recommend and link).

2. Publishers need to become 10X publishers. If you run a publishing site that accepts user-generated content, you’ve got to raise your standards. Accepting any article from any writer just to fill space on your pages won’t cut it.

3. Build and encourage your tribe. If you are authoring truly great, useful stuff, sooner or later you will start to attract some fans. Work hard to identify those fans, to draw them into a community around your work, and to reward and encourage them any way you can. Become insanely accessible to those people. They are the ones who will begin to transmit the signals that will say to Google, “This person matters!”

4. Work as hard offline as you do online. Maybe harder. More and more as I talk with other authors who have been working hard at building their personal brands and tribes, I’m hearing that their offline activities seem to be driving tremendous benefit that flows over into online. I’m talking about speaking at conferences and events, being available for interviews, being prominent in your participation in the organizations and communities around your topic, and dozens of other such opportunities.

BONUS: Doing all four of those recommendations will reap rewards for you in the here and now, whether or not Google ever implements any kind of “author rank.”

The fact that people trust other people long before they trust faceless brands remains, in my opinion, one of the least understood and most underutilized forces in online marketing. Those who work hard to build real author authority in their topic areas will reap the rewards as Google begins to seek them out in the days to come.

