
Foursquare Quietly Unlocks Its Own "Local Data Aggregator" Badge

Posted by David-Mihm

I was wrong about Foursquare.

While five of my 2013 local search prognostications came to fruition, my sixth prediction—that Foursquare would be bought—doesn’t look like it will come true (unless Apple has silently acquired Foursquare in the last couple of days).

In fact, Foursquare has been turning away from an acquisition path, setting off on a fundraising spree in 2013. While this quest for cash has struck some analysts as a desperate tactic, PR from the company indicates that it remains focused on growing its userbase and its revenues for the foreseeable future. It’s one of the few companies in tech to successfully address both sides of the merchant and consumer marketplace, and as a result, might even have a chance at an IPO.

As the company matures, we hear less and less about mayorships, badges, and social gamification—perhaps a tacit admission that checkins are indeed dying as the motivational factor underlying usage of Foursquare.

Foursquare: the data aggregator

Instead, the company is pivoting into a self-described position as “the location layer for the Internet.”

Google, Bing, Nokia, and other mapping companies have built their own much broader location layers with varying degrees of success, but it’s the human activity associated with location data that makes Foursquare unique. Its growing database of keyword-rich tips and comments, and its widening network of social interactions, even make predictive recommendations possible.

But I’m considerably less excited about these consumer-facing recommendations than I am about Foursquare’s data play. If “location layer for the internet” is not a synonym for “data aggregator,” I’m not sure what would be.

In the last several months, Foursquare has been prompting its users to provide business details about the places they check in at: whether a business has wi-fi, its relative price range, delivery and payment options, and more. It’s also accumulating one of the biggest photo libraries in all of local search. For companies that have not yet built their own services like StreetView and Mapmaker, Foursquare’s “ground truth” position is enviable.

So from my standpoint, Foursquare’s already achieved the status of a major data aggregator, and seems to have its sights set on becoming the data aggregator.

Foursquare: The Data Aggregator?

That statement would have sounded preposterous 18 months ago, when Foursquare had “only” 15 million users and 250,000 claimed venues.

But while many of us in the local search space have been distracted by the shiny objects of Google+ Local and Facebook Graph Search, Foursquare has struck deals with the two largest up-and-coming social apps (Instagram and Pinterest) to provide the location backbone for their geolocation features. Not to mention Uber, WhatsApp, and a host of other conversational and transactional apps.

And buried in the December 5th TechCrunch article about Foursquare’s latest iOS release was this throwaway line:

“Foursquare has a sharing deal with Apple already — it’s one of over a dozen contributors to Apple’s Maps data.”

So, doing some quick math across the user bases of Foursquare, Instagram, Pinterest, and its other partner apps, all of a sudden that’s a substantial number of people contributing location information to Foursquare. Granted, there’s considerable overlap among those users, but even a conservative 80-100 million would be a pretty large number of touchpoints.
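For illustration, here’s that back-of-the-envelope math in Python. The per-app user counts are placeholders of my own (the post’s actual figures came from a list that isn’t preserved here); only the 80-100 million conservative estimate above comes from the article.

```python
# Back-of-the-envelope reach estimate. All per-app figures below are
# illustrative placeholders, NOT reported user counts.
app_users_millions = {
    "foursquare": 45,   # hypothetical
    "instagram": 150,   # hypothetical
    "pinterest": 70,    # hypothetical
}

gross_total = sum(app_users_millions.values())

# Assume heavy overlap between user bases: keep only a conservative
# fraction as unique contributors of location data.
unique_fraction = 0.35  # assumed

unique_estimate = gross_total * unique_fraction
print(f"Gross total: {gross_total}M; conservative unique estimate: ~{unique_estimate:.0f}M")
```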

In fact, one thing that Wil Reynolds and I realized at a recent get-together in San Diego is that for many people outside the tech world, Foursquare and Instagram are basically the same app (see screenshots below). I’m seeing more and more of my decidedly non-techie Instagram friends tagging their photos with location. And avid Foursquare users like Matthew Brown have always made photography their primary network activity.

Providing the geographic foundation for two apps—Pinterest and Instagram—that are far more popular than Foursquare gives it a strong running start on laying the location foundation for the Internet.

What’s next for Foursquare?

While Facebook is undoubtedly building its own location layer, Zuckerberg and company have long ignored local search. And they’ve got plenty of other short- and mid-term priorities. Exposing Facebook check-in data to the extent Foursquare has, and forcing Instagram to update a very successful API integration, would seem to be pretty far down the list.

As I suggested in my Local Search Ecosystem update in August, to challenge established players like Infogroup, Neustar, and Acxiom, in the long run Foursquare does need to build out its index considerably beyond the current sweetspots of food, drink, and entertainment.

But in the short run, the quality and depth of Foursquare’s popular venue information in major cities gives start-up app developers everything they need to launch and attract users to their apps. And Foursquare’s independence from Google, Facebook, and Apple is appealing for many of them—particularly for non-U.S. app developers who have a hard time finding publicly-available location databases outside of Google or Facebook.

Foursquare’s success with Instagram and Pinterest has created a self-perpetuating growth strategy: it will continue to be the location API of choice for most “hot” local startups.

TL;DR

Foursquare venues have been contributing to a business’s citation profile for years, so hopefully most of you have already included venue creation and management in your local SEO service packages. Even if you optimize non-retail locations like insurance agencies, accounting offices, and the like, make a higher level of engagement with Foursquare one of your 2014 New Year’s resolutions.

The bottom line is that irrespective of its user growth and beyond just SEO, Foursquare is going to get more important to the SoLoMo ecosystem in the coming year.



The IdeaGraph – Whiteboard Friday

Posted by wrttnwrd

There can be important links between topics that seem completely unrelated at first glance. These random affinities are factoring into search results more and more, and in today’s Whiteboard Friday, Ian Lurie of Portent, Inc. shows us how we can find and benefit from those otherwise-hidden links.

[Video: Whiteboard Friday – Ian Lurie – The IdeaGraph]

For reference, here’s a still of this week’s whiteboard!

Video Transcription

Howdy Moz fans. Today we’re going to talk about the IdeaGraph. My name’s Ian Lurie, and I want to talk about some critical evolution that’s happening in the world of search right now.

Google and other search engines have existed in a world of words and links. Words establish relevance. Links establish connections and authority. The problem with that is Google takes a look at this world of links and words and has a very hard time with what I call random affinities.

Let’s say all cyclists like eggplant, or some cyclists like eggplant. Google can’t figure that out. There is no way to make that connection. Maybe if every eggplant site on the planet linked to every cycling site on the planet, there would be something there for them, but there really isn’t.

So Google exists purely on words and links, which means there’s a lot of things that it doesn’t pick up on. The things it doesn’t pick up on are what I call the IdeaGraph.

The IdeaGraph is something that’s always existed. It’s not something new. It’s the thing that creates connections that are formed only by people, between things that are totally unrelated, like eggplant and cyclists. (By the way, that one’s not true as far as I know. I’m a cyclist and I hate eggplant.) All these things that randomly connect are part of the IdeaGraph.

The IdeaGraph has been used by marketers for years and years and years. If you walk into a grocery store and go from one aisle to the next, you see products in semi-random order. There’s research behind that: stores test different configurations to see, if someone’s walking to the dairy section way at the back of the store, which products placed along that walk they’re most likely to pick up. Those products, even if the marketers don’t know it, are part of the IdeaGraph. You could put chocolate there, and maybe chocolate is what people want; but if you put cleaning supplies there, nobody wants them, because the IdeaGraph doesn’t connect those ideas tightly enough.

The other place that you run into issues with the IdeaGraph on search and on the Internet is with authorship and credibility and authority.

Right now, if you write an article, and it gets posted on a third-party site, like The New York Times, and it’s a huge hit, and it gets thousands and thousands and thousands of links, you might get a little authority sent back to your site, and your site is sad. See? Sad face website. Because it’s not getting all the authority it could. Your post is getting tons. It’s happy. But your site is not.

With the IdeaGraph it will be easier because the thing that connects your site to your article is you. So just like you can connect widely varying ideas and concepts, you can also connect everything you contribute to a single central source, which then redistributes that authority.

Now Google is starting to work on this, on how to make it work for them in search results. What they’ve started to do is build on these random affinities. So if you take cyclists and eggplant, theoretically some of the things Google is doing could eventually create this space where Google would be able to tell you that there is this overlap.

The place where they’re starting to do it, I think (remember, Google doesn’t come and tell us these things), is Google+. With authorship and publisher, rel=author and rel=publisher, they’re actually tying these different things together into a single receptacle: your Google+ profile. Remember, anyone who has Gmail has a Google+ profile. They may not know it, but they do. Now Google’s gathering all sorts of demographic data with that as well.

So what they’re doing is, let’s say you’re using rel=author and you publish posts all over the Internet, good posts. If you’re just doing crappy guest blogging, this probably won’t work. You’ll just send yourself all the lousy credit. You want the good credit. So you write all these posts, and you have the rel=author on the post, and they link back to your Google+ profile.

So your Google+ profile gets more and more authoritative. As it gets more and more authoritative, it redistributes that authority, that connection to all the places you publish. What you end up with is a much more robust way of connecting content to people and ideas to people, and ideas to each other. If you write about cycling on one site and eggplant on another, and they both link back to your Google+ profile, and a lot of other people do that, Google can start to say, “Huh, there might be a connection here. Maybe, with my new enhanced query results, I should think about how I can put these two pieces of information together to provide better search results.” And your site ends up happier. See? Happy site. Total limit of my artistic ability.

So that becomes a very powerful tool for creating exactly the right kind of results that we, as human beings, really want, because people create the IdeaGraph. Search engines create the world of words and links, and that’s why some people have so much trouble with queries, because they’re having to convert their thinking from just ideas to words and links.

So what powers the IdeaGraph is this concept of random affinities. You, as a marketer, can take advantage of that, because as Google figures this out through Google+, you’re going to be able to find these affinities, and just like all those aisles in the grocery store, or when you walk into a Starbucks and there’s a CD there—you’re buying coffee and there’s a CD? How do those relate? When you find those random affinities, you can capitalize on them and make your marketing message that much more compelling, because you can find where to put that message in places you might never expect.

An example I like: I went on Amazon once and searched for “lonely planet,” and in the “people who bought this also bought” section I found a book on making really great smoothies. That tells me there’s a random affinity between people who travel Lonely Planet style and people who like smoothies. It might be a tiny relationship, but it’s a great place to do some cross-marketing and to target content.

So if you take a look here, if you want to find random affinities and build on them, take a look at the Facebook Ad Planner. When you’re building a Facebook ad, you can put in a precise interest, and it’ll show you other related precise interests. Those relationships are built almost purely on the people who have them in common. So sometimes there is no match, there’s no relationship between those two different concepts or interests, other than the fact that lots of people like them both. So that’s a good place to start.

Any site that uses collaborative filtering works too; Amazon, for example. Any site that has “people who bought this also bought that” is a great place to try this. Go on Amazon, look at those recommendations, and you’ll find all sorts of cool relationships.
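If you have engagement or purchase data of your own, you can approximate that same signal. Here’s a minimal sketch, using invented data, that surfaces candidate random affinities by counting how often pairs of interests co-occur across users:

```python
from collections import Counter
from itertools import combinations

# Invented sample data: each set is one person's interests/purchases.
users = [
    {"cycling", "eggplant", "coffee"},
    {"cycling", "eggplant"},
    {"cycling", "comic books"},
    {"travel guides", "smoothies"},
    {"travel guides", "smoothies", "coffee"},
]

# Count how often each pair of interests appears together.
pair_counts = Counter()
for interests in users:
    for pair in combinations(sorted(interests), 2):
        pair_counts[pair] += 1

# The most frequent pairs are candidate "random affinities."
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```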

Followerwonk is a fantastic tool for this. This one takes a little more work, but the data you can find is incredible. Let’s say you know that Rand is one of your customers. He’s a perfect customer, and he’s typical of your perfect customer. You can go on Followerwonk and find all the people who follow him and then pull all of their bios, do a little research into the bios and find what other interests those people express.

So they’re following Randfish, but maybe a whole bunch of them express an interest in comic books, and it’s more than just one or two; it’s a big number of them. You just found a random affinity: people who like Rand also like comic books. You can then focus on that overlap, and it’s always easier to sell and generate interest there.
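Here’s a rough sketch of that bio-mining step, assuming you’ve already exported follower bios from Followerwonk into a text file (the filename and stopword list are placeholders of my own):

```python
from collections import Counter
import re

# Assumed input: one follower bio per line, exported from Followerwonk.
with open("follower_bios.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)

# Drop filler words so genuine interests float to the top.
stopwords = {"the", "and", "of", "a", "i", "to", "in", "for", "my", "at", "on"}
interests = Counter(w for w in words if len(w) > 2 and w not in stopwords)

# Terms that recur across thousands of bios hint at shared affinities.
print(interests.most_common(20))
```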

Again, you can use that to drive content strategy. You can use that to drive keyword selection in a world where we don’t really know what keywords are driving traffic anymore, but we can find out what ideas are. You can use it to target specific messages to people.

As for ways to capitalize on this: on your own site, you want to make sure that you have rel=author and rel=publisher set up, because that’s the most obvious IdeaGraph implementation we have right now.

Make sure you’re using schemas from Schema.org whenever you can. For example, use the article mark-up on your site, because Google’s enhanced article results, which are showing up at the bottom of search results right now, are powered in part by pages that have the article mark-up; or at least there’s a very high correlation between the two. We don’t know if it’s causal, but it seems to be.

Use product mark-up and review mark-up as well. I’ve seen a few instances, and some of my colleagues have seen more, where schema mark-up on a page allows content to show up in search results attributed to that page, even when that content is populated by JavaScript or something else.
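As a quick self-check on those recommendations, a small script along these lines can confirm that a page actually exposes the rel=author/rel=publisher links and Schema.org mark-up described above. This is just a sketch: it assumes the third-party requests and beautifulsoup4 packages, and the URL is a placeholder.

```python
import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-post/"  # placeholder URL
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Look for rel=author / rel=publisher links (e.g. to a Google+ profile).
for rel in ("author", "publisher"):
    tags = soup.find_all("a", rel=rel) + soup.find_all("link", rel=rel)
    print(f"rel={rel}:", [t.get("href") for t in tags] or "missing")

# Look for Schema.org microdata, e.g. itemtype="http://schema.org/Article".
itemtypes = [t.get("itemtype") for t in soup.find_all(itemtype=True)]
print("Schema.org itemtypes:", itemtypes or "none found")
```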

Get yourself set up with Google Analytics Demographics, as Google rolls it out. You’ll be able to get demographic data and categorical data in Google Analytics based on visitors to your site. Then again, if you have a demographic profile, you can look at the things that that demographic profile is interested in and find those random affinities.

So just to summarize all of this: links and words have worked for a long time, but we’re starting to see the limitations of that approach, particularly with mobile devices and other kinds of search. Google has been trying to find a way to fix this, as has Bing, and they’re both working very hard at it. They’re trying to build on this thing that has always existed, which I call the IdeaGraph, and they’re building on it using random affinities. Selling to random affinities is much, much easier. You can find them using lots of tools out on the web, like collaborative-filtering sites, Facebook, and Followerwonk. You can position your site to take advantage of it by making sure you have these basic mark-up elements in place and that you’re already collecting data.

I hope that was helpful to all Moz fans out there, and I look forward to talking to you online. Thanks.

Video transcription by Speechpad.com



Mission ImposSERPble 2: User Intent and Click Through Rates

Posted by CatalystSEM

This post was originally in YouMoz, and was promoted to the main blog because it provides great value and interest to our community. The author’s views are entirely his or her own and may not reflect the views of Moz, Inc.

It’s been quite a while since I first read (and bookmarked) Slingshot SEO’s YouMoz blog post, Mission ImposSERPble: Establishing Click-through Rates, which showcased their study examining organic click-through rates (CTR) across search engine result pages. The Slingshot study is an excellent example of how one can use data to uncover trends and insights. However, that study is over two and a half years old now, and the Google search results have evolved significantly since then.

Using the Slingshot CTR study (and a few others) as inspiration, Catalyst thought it would be beneficial to take a fresh look at some of our own click-through rate data and dive into the mindset of searchers and their proclivity for clicking on the different types of modern organic Google search results.

Swing on over to Catalyst’s website and download the free Google CTR Study: How User Intent Impacts Google Click-Through Rates

TANGENT: I’m really hoping that the Moz community’s reception of this ‘sequel’ post follows the path of some of the all-time great movie sequels (think Terminator 2, The Godfather: Part II) and not that of Jaws 2.

How is the 2013 Catalyst CTR study unique?

  • RECENT DATA: This CTR study is the most current large-scale US study available, containing data from Oct. 2012 through June 2013. Google is constantly tweaking its SERP UI, which can influence organic CTR behavior.
  • MORE DATA: This study contains more keyword data, too. The keyword set spans 17,500 unique queries across 59 different websites. More data can lead to more accurate representations of the true population.
  • MORE SEGMENTS: This study segments queries into categories not covered in previous studies, which allows us to compare CTR behavior across different keyword types; for example, branded v. unbranded queries, and question v. non-question queries.

How have organic CTRs changed over time?

The most significant change since the 2011 Slingshot study is the higher CTRs for positions 3, 4, and 5.

Ranking on the first page of search results is great for achieving visibility; however, the search result for your website must be compelling enough to make searchers want to click through to your website. In fact, this study shows that having the most compelling listing in the SERPs could be more important than “ranking #1” (provided you are still ranking within the top five listings, anyway).

Read on to learn more.

Catalyst 2013 CTRs vs. Slingshot SEO 2011 CTRs

[Image: data table of Catalyst CTRs compared to Slingshot SEO CTRs]

Since Slingshot’s 2011 study, click-through rates have not dramatically shifted, with the total average CTR for first page organic results dropping by just 4%.

While seemingly minor, these downward shifts could be a result of Google’s ever-evolving user interface. For example, with elements such as Product Listing Ads, Knowledge Graph information, G+ authorship snippets, and other microdata becoming more and more common in a Google SERP, users’ eyes may tend to stray further from the historical “F shape” pattern, impacting the CTR by ranking position.

Positions 3-5 showed slightly higher average CTRs than what Slingshot presented in 2011. A possible explanation for this shift is that users could be more aware of the Paid Search listings located at the top of the results page; in an attempt to “bypass” these results, they may have modified their browsing behavior to quickly scan or wheel-scroll past the first few listings on the page.

What is the distribution of clicks across a Google SERP?

[Image: example Google search engine result page click distributions]

Business owners need to understand that even if your website ranks in the first organic position for your target keyword, your site will almost certainly never receive traffic from every one of those users/searchers.

On average, the top organic SERP listing (#1) drives visits from around 17% of Google searches.

The top four positions, or the typical rankings “above the fold” for many desktop users, receive 83% of first page organic clicks.

The Catalyst data also reveals that only 48% of Google searches result in a page one organic click (meaning any click on listings ranking 1-10). So what is the other 52% doing? One of two things: clicking on a Paid Search listing, or “abandoning” the search, which we define as one of the following:

  • Query Refinement – based on the displayed results, the user alters their search
  • Instant Satisfaction – based on the displayed results, the user gets the answer they were interested in without having to click
  • 2nd Page Organic SERP – the user navigates to other SERPs
  • Leave Search Engine – the user exits the Google search engine
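As a quick arithmetic restatement of those figures (no new data, just the percentages already quoted above):

```python
# Percentages of all Google searches, as quoted in this study.
page_one_organic_click = 48      # any click on organic positions 1-10
top_result_share = 17            # clicks on the #1 organic listing
top_four_share_of_page_one = 83  # top four positions' share of page-one clicks

print(f"No page-one organic click: {100 - page_one_organic_click}%")         # 52%
print(f"Top four positions, as a share of all searches: "
      f"~{page_one_organic_click * top_four_share_of_page_one / 100:.0f}%")  # ~40%
```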

How do branded query CTRs differ from unbranded queries?

Branded CTRs for top ranking terms are lower than unbranded CTRs, likely due to both user intent and the way Google presents results.

[Image: branded query CTRs vs. unbranded query CTRs]

[Image: data table of branded and unbranded organic CTRs]

These numbers shocked us a bit. On the surface, you might assume that listings with top rankings for branded queries would have higher CTRs than those for unbranded queries. But when you take a closer look at the current Google UI and place yourself in the mindset of a searcher, our data actually makes sense.

Consumers who search unbranded queries are oftentimes higher in the purchasing funnel: looking for information, without a specific answer or action in mind. As a result, they may be more likely to click on the first result, particularly when the listing belongs to a strong brand that they trust.

Additionally, take a look at the example below and notice how many organic results are presented “above the fold” for an unbranded query compared to a branded query (note: these SERP screenshots were taken at 1366×768 screen resolution). There are far fewer potential organic click paths for a user to take when presented with the branded query’s result page (1 organic result v. 4.5 results). It really boils down to ‘transactional’ v. ‘informational’ queries. Typically, keywords that are more transactional (e.g., signaling purchase intent) and/or drive higher ROI are more competitive in the PPC space, and as a result will have more paid search ads encroaching on valuable SERP real estate.

[Image: example branded search query v. unbranded search query result page]

We all know the makeup of every search result page is different, and the number of organic results above the fold can be influenced by a number of factors, including device type, screen size/resolution, paid search competitiveness, and so on.

You can use your website analytics platform to see what screen resolutions your visitors are using and predict how many organic listings your target audience would typically see for different search types and devices. In the example below, you can see that my desktop visitors most commonly use screen resolutions higher than 1280×800, so I can be fairly certain that my current audience typically sees up to five organic results in a desktop Google search.

[Image: Google Analytics screen resolution report for my audience]
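One way to make that check repeatable is to map viewport sizes to an estimated count of above-the-fold organic listings. The mapping below is a rough assumption of mine, loosely calibrated to the screenshots discussed in this post; you’d want to recalibrate it against your own SERP screenshots.

```python
# Assumed viewport-height thresholds -> organic results visible above
# the fold. These cutoffs are illustrative, not a measured standard.
FOLD_ESTIMATES = [
    (1050, 7),  # e.g. 1680x1050 displays
    (800, 5),   # e.g. 1280x800
    (768, 4),   # e.g. 1366x768, as in the screenshots above
    (600, 3),   # small laptops
]

def visible_organic_results(viewport_height_px: int) -> int:
    """Rough count of above-the-fold organic listings for a viewport."""
    for min_height, results in FOLD_ESTIMATES:
        if viewport_height_px >= min_height:
            return results
    return 2  # very small screens

print(visible_organic_results(768))  # -> 4
```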

Does query length/word count impact organic CTR?

As a user’s query length approaches the long tail, the average CTR for page one rankings increases.

[Image: head vs. long tail organic CTR]

The organic click percentage totals represented in this graph suggest that as a user’s query becomes more refined they are more likely to click on a first page organic result (~56% for four+ word queries v. ~30% for one-word queries).

Furthermore, as a query approaches the long tail, click distributions across the top ten results begin to spread more evenly down the page. In other words, when a consumer’s search becomes more refined and specific, they likely spend more time scanning the SERPs looking for the best possible listing to answer their inquiry. This is where compelling calls-to-action and eye-catching page titles and meta descriptions can really make or break your organic click-through rates.

As previously stated, only about 30% of one-word queries result in a first page organic click. Why so low? One potential reason is that searchers use one-word queries simply as a starting point, refining their search based on their initial impression of the SERP. If the user does not find what they are looking for within the first results, they modify their search to be more specific, which usually turns the single-word query into a multiple-word query.

Additionally, for one-word queries, 60% of the total first page organic clicks (17.68% of searches) were attributed to the first ranking. Maybe, by nature, one-word queries are very similar to navigational queries, as the keywords are oftentimes very broad terms or a specific brand name.

Potential business uses

Leveraging click-through rate data enables us to further understand user behavior on a search result and how it can differ depending on search intent. These learnings can play an integral role in defining a company’s digital strategy, as well as forecasting website traffic and even ROI. For instance:

  1. Forecasting Website Performance and Traffic: Given a keyword’s monthly search volume, we can predict the number of visits a website could expect to receive at each ranking position (see the sketch after this list). This becomes increasingly valuable when we have conversion rate data attributed to specific keywords.
  2. Identifying Search Keyword Targets: With Google Webmaster Tools’ CTR/search query data, we can easily determine the keywords that are “low-hanging fruit”. We consider low-hanging fruit to be keywords that a brand ranks fairly well on, but that fall just short of high visibility and high organic traffic because the site currently ranks “below the fold” on page 1 of the SERPs, or somewhere within pages 2-3 of the results. Once targeted and integrated into the brand’s keyphrase strategy, SEOs can then work to improve the site’s rankings for that particular query.
  3. Identifying Under-performing Top Visible Keywords: By comparing a brand’s specific search query CTR against the industry averages identified in this report, we can identify under-performing keyphrases. Next, an SEO can perform an audit to determine if the low CTR is due to factors within the brand’s control, or if it is caused by external factors.
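Here’s a minimal sketch of use case #1 in Python. The position-to-CTR table is a placeholder standing in for the study’s published figures (only the ~17% value for position 1 is quoted in the text above); substitute your own data from the full report or from Google Webmaster Tools.

```python
# Placeholder CTRs by organic ranking position (position 1 mirrors the
# ~17% figure cited above; the rest are illustrative).
CTR_BY_POSITION = {1: 0.17, 2: 0.10, 3: 0.08, 4: 0.05, 5: 0.04}

def forecast_monthly_visits(search_volume: int, position: int) -> int:
    """Expected organic visits = monthly searches x CTR at that position."""
    return round(search_volume * CTR_BY_POSITION.get(position, 0.02))

# e.g. a keyword with 40,000 monthly searches, currently ranking #3:
print(forecast_monthly_visits(40_000, 3))  # -> 3200
```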

Data set, criteria, and methodology

First, some information about our data set and methodology. If you’re like me and want to follow along using your own data, you can review our complete process in our whitepaper. All websites included in the study are Consumer Packaged Goods (CPG) brands; as such, the associated CTRs and hypothesized user behaviors reflect only those brands and users.

Data was collected via each brand’s respective Google Webmaster Tools account, which was then processed and analyzed using a powerful BI and data visualization tool.

Catalyst analyzed close to 17,500 unique search queries (with an average ranking between 1–10, and a minimum of 50 search impressions per month) across 59 unique brands over a 9 month timeframe (Oct. 2012 – Jun 2013).

Here are a few definitions so we’re all on the same page (we mirrored definitions as provided by Google for their Google Webmaster Tools)…

  • Click-Through Rate (CTR) – the percentage of impressions that resulted in a click for a website.
  • Average Position – the average top position of a website on the search results page for that query. To calculate average position, Google takes into account the top ranking URL from the website for a particular query.
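Expressed as code, those two definitions work out to the following (the sample values are invented):

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR: the percentage of impressions that resulted in a click."""
    return 100 * clicks / impressions if impressions else 0.0

def average_position(top_positions: list[int]) -> float:
    """Mean of the site's top-ranking position across observations."""
    return sum(top_positions) / len(top_positions)

# Invented sample: 120 clicks on 2,000 impressions; positions 2, 3, 3.
print(f"{click_through_rate(120, 2000):.1f}%")  # 6.0%
print(f"{average_position([2, 3, 3]):.2f}")     # 2.67
```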

Final word

I have learned a great deal from the studies and blog posts shared by Moz and other industry experts throughout my career, and I felt I had an opportunity to meaningfully contribute back to the SEO community by providing an updated, more in-depth Google CTR study for SEOs to use as a resource when benchmarking and measuring their campaigns and progress.

For more data and analysis relating to coupon-based queries, question-based queries, desktop v. mobile devices, and more, download our complete CTR study.

Have any questions or comments on our study? Did anyone actually enjoy Jaws 2? Please let us know and join the discussion below!



Moz Holiday Traditions

Posted by ssnsestak

Here at Moz, we often feel like a big family. Just like any other family, over the years we’ve developed an eclectic set of traditions to celebrate the holiday season. We’d like to share a few of our favorites and welcome you to join us in these most joyous of festivities. 😉

Season’s Greetings!

[Video: Moz Holiday Traditions 2013]

