The True Cost of Local Business Directories
Posted by kristihines
If you’re a local business owner, you’ve likely heard that you should submit your business to local business directories like Yelp, Merchant Circle, Yellow Pages, and similar networks in order to help boost your local search visibility on Google. It sounds easy at first: you think you’ll just go to a few websites, enter your contact information, and be set. After all, all you really want is a few links back to your website from these profiles.
But the truth is, there are a lot of local business listings to obtain if you go the DIY route. There are local business directories that offer free listings, paid listings, and package listings on multiple networks. There are also local data providers that aren’t necessarily directories themselves, but they push your information out to other directories.
In this post, we’re going to look at the real cost of getting local business listings for your local business.
Finding the right directories
Since one of a business owner’s most important commodities is time, it’s important to note the time investment that you must make to individually create and manage local business listings. Here’s what you’ll need to do to find the right directories for your business.
Directories ranking for your business
You can start by looking your business up on Google by name to see where you already have listings that need to be claimed.
These are the first directories you’ll want to tackle, as they’re the ones that people are viewing when they search for your business by name. This is especially important for local businesses that don’t have their own website or social media presence. Updating these directories will help customers get to know your business, your hours, and what you have to offer.
These are going to be the easiest, in many cases, because the listing is already there. Most local business directories offer a link to help you start the process.
Depending on the directory, you’ll need to look in several places to find the link to claim your business. Sometimes it can be found near the top of your listing. Other times, it may be hidden in the directory’s header or footer.
It’s important to claim your listings so you can add your website link, business hours, and photos to help your listing stand out from others. Claiming your listing will also help make sure you’re notified about any reviews or public updates your business receives.
Directories ranking for your competitors
Once you’ve claimed the listings you already have, you’ll want to start finding new ones. Creating listings on local business directories where your competitors have listings will help you get in front of your target audience. If you notice your competitors have detailed profiles on some networks, but not others, that should clue you in to which ones are going to be most effective.
To find these directories, search for your competitors by name on Google. You should be able to spot which ones you haven’t claimed for yourself already and go from there.
Directories ranking for your keywords
What keywords and phrases does your business target in search? Do a quick search for them to see which local directories rank in the top ten search results. Most keyword searches related to local businesses will lead you to your website, your competitors’ websites, specific business listings in local business directories, and categories on local business directories.
You should make sure you have a listing on the local business directories that rank for your competitors, as well as the ones whose categories rank. For the latter, you may even want to consider doing paid advertising or sponsorship to make sure your business is first for the category, since that page is likely receiving traffic from your target customers.
Directories ranking in mobile search
After you’ve looked for the directories that rank for your business name, your competitors, and your target keywords, you’ll want to do the same research on mobile search. This will help you find additional directories that are favorites for mobile users. Considering the studies showing that 50% of mobile searchers end up visiting a local store to make a purchase, getting your business in local business directories that rank well in mobile is key to business success.
Claiming and creating local business directory listings
If you think finding the right local business directories is time-consuming, wait until you start to claim and create them. Some directories make it simple and straightforward. Others have a much more complicated process.
Getting your business listing verified is usually the toughest part. Some networks will not require any verification past confirming your email address. Some will have an automated call or texting system for you to use to confirm your phone number. Some will have you speak to a live representative in order to confirm your listing and try to sell you paid upgrades and advertising.
The lengthiest ones from start to finish are those that require you to verify your business by postal mail. It means that you will have to wait a couple of days (or weeks, depending on the directory) to complete your listing.
In the event that you’re trying to claim a listing for your business that needs the address or phone number updated, you’ll need to invest additional time to contact the directory’s support team directly to get your information updated. Otherwise, you won’t be able to claim your business by phone or mail.
The cost of local business listings
Now that you know the time investment of finding, claiming, and creating local business directories, it’s time to look at the actual cost. While some of the top local business directories are free, others require payment if you want anything beyond a basic listing, such as the addition of your website link, a listing in more than one category, removal of ads from your listing, and the ability to add media.
Pricing for local directory listings can range from $29 to $499 per year. You will find some directories that sell listings for their site alone, while others are grouped under plans like this one where you can choose to pay for one directory or a group of directories annually.
With the above service, you’re looking at a minimum of $199 per year for one network, or $999 per year for dozens of networks. While it might look like a good deal, in reality, you are paying for listings that you could have gotten for free (Yahoo, Facebook, Google+, etc.) in addition to ones that have a paid entry.
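One way to sanity-check a package price against the DIY route is a back-of-the-envelope time-cost comparison. Every number below (hourly rate, minutes per listing, package price) is an assumption to replace with your own:

```python
# A rough way to compare the DIY route with a paid package.
# All numbers here are assumptions - plug in your own.
HOURLY_RATE = 50.0        # what an hour of your time is worth ($)
MINUTES_PER_LISTING = 45  # find, claim, verify, and fill out one listing
PACKAGE_PRICE = 999.0     # e.g. an annual "dozens of networks" plan

def diy_cost(num_listings, hourly_rate=HOURLY_RATE,
             minutes_per_listing=MINUTES_PER_LISTING):
    """Estimated cost of creating listings yourself, valued at your hourly rate."""
    return num_listings * minutes_per_listing / 60.0 * hourly_rate

for n in (10, 20, 30):
    print(n, "listings ->", round(diy_cost(n), 2), "of your time")
```

Under these made-up numbers, the package only breaks even somewhere past 25 listings, and that’s before subtracting the listings in the bundle you could have created for free.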
So how can you decide what listings are worth paying for? If they are not listings that appear on the first page of search results for your business name, your competitors, or your keywords, you can do some additional research in the following ways:
Check the directory’s search traffic
You can use SEMrush for free (10 queries prior to registering + 10 after entering your email address) to see the estimated search traffic for any given local business directory. For example, you can check Yelp’s traffic by searching for their domain name:
Then, compare it with other local business directories you might not be familiar with, like this one:
This can help you decide whether or not it’s worth upgrading to an account at $108 per month to get a website link and featured placement.
Alternatively, you can use sites like Alexa to compare traffic by seeing which site has the lower Alexa ranking. For example, you can check Yelp’s Alexa ranking:
Then compare it with other local business directories, like this one:
Instantly, you can see that between the two sites, Yelp is more popular in the US, while the other directory is more popular in India. You can scroll down further through the profile to see what countries a local business directory gets the majority of their traffic from to determine if they are getting traffic from your target customer base.
If you have a business in the US, and the directory you’re researching doesn’t get a lot of US traffic, it won’t be worth getting a listing there, and certainly not worth paying for one.
Determine the directory’s reputation
The most revealing search you can do for any local business directory you’re considering paying is a search for the directory’s name plus the word “scam.” If the directory is a scam, you’ll find out pretty quickly. Even if it’s not, you’ll learn what businesses and customers alike find unappealing about the directory’s service.
The traffic a directory receives may trump a bad reputation, however. If you look at Yelp’s Better Business Bureau page, you will find over 1,700 complaints. It goes to show that while some businesses have a great experience on Yelp, others do not.
If you find a directory with little traffic and bad reviews or complaints, it’s best to steer clear, regardless of whether they want payment for your listing.
Look for activity in your category
Are other businesses in your category getting reviews, tips, or other engagement? If so, that means there are people actually using the website. If not, it may not be worth the additional cost.
The “in your category” part is particularly important. Photography businesses may be getting a ton of traffic, but if you have an air conditioning repair service, and none of the businesses in that category have reviews or engagement, then your business likely won’t, either.
This also goes for local business directories that allow you to create a listing for free, but make you pay for any leads that you get. If businesses in your category are not receiving reviews or engagement, then the leads you receive may not pan out into actual paying customers.
See where your listing would be placed
Does paying for a listing on a specific local business directory guarantee you first-page placement? In some cases, that will make the listing worth it—if the site is getting enough traffic from your target customers.
This is especially important for local business directories whose category pages rank on the first page for your target keyword. For these directories, it’s essential that your business gets placed in the right category and at the top of the first page, if possible.
Think of that category page as search results—the further down the page you are, the less likely people are to click through to your business. If you’re on the second or third page, those chances go down even further.
In conclusion
Local business directories can be valuable assets for your local business marketing. Be sure to do your due diligence in researching the right directories for your business. You can also simplify the process and see what Moz Local has to offer. Once your listings are live, be sure to monitor them for new reviews, tips, and other engagement. Also be sure to monitor your analytics to determine which local business directory is giving you the most benefit!
User Behaviour Data as a Ranking Signal
Posted by Dan-Petrovic
Question: How does a search engine interpret user experience?
Answer: They collect and process user behaviour data.
Types of user behaviour data used by search engines include click-through rate (CTR), navigational paths, time, duration, frequency, and type of access.
Click-through rate
Click-through rate analysis is one of the most prominent search quality feedback signals in both commercial and academic information retrieval papers. Both Google and Microsoft have made considerable efforts towards development of mechanisms which help them understand when a page receives higher or lower CTR than expected.
For example, user reactions to particular search results or search result lists may be gauged, so that results on which users often click will receive a higher ranking. The general assumption under such an approach is that searching users are often the best judges of relevance, so that if they select a particular search result, it is likely to be relevant, or at least more relevant than the presented alternatives.
Source: Google, 2015
To get an idea of how much research has already been done in this area, I suggest you query Google Scholar.
Position bias
CTR values are heavily influenced by position because users are more likely to click on top results. This is called “position bias,” and it’s what makes it difficult to accept that CTR can be a useful ranking signal. The good news is that search engines have numerous ways of dealing with the bias problem. In 2008, Microsoft found that the “cascade model” worked best in bias analysis. Despite slight degradation in confidence for lower-ranking results, it performed really well without any need for training data and it operated parameter-free. The significance of their model is in the fact that it offered a cheap and effective way to handle position bias, making CTR more practical to work with.
Search engine click logs provide an invaluable source of relevant information, but this information is biased. A key source of bias is “presentation order,” where the probability of a click is influenced by a document’s position in the results page. This piece focuses on explaining that bias, modeling how the probability of a click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data is not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A “cascade model,” where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early rank.
Source: Microsoft, 2008
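The cascade model quoted above is simple to sketch: assuming each result has a per-rank probability of being judged relevant, the user scans top to bottom and clicks the first relevant document. A minimal illustration:

```python
def cascade_click_probability(relevances):
    """Under the cascade model, a user scans results top to bottom and
    clicks the first document they find relevant.  The click probability
    at rank i is r_i times the probability that every higher-ranked
    document was skipped."""
    probs = []
    skip_all_above = 1.0
    for r in relevances:
        probs.append(skip_all_above * r)
        skip_all_above *= (1.0 - r)
    return probs

# Identical relevance (0.3) at every rank: clicks still concentrate at
# the top, which is exactly the position bias the model explains.
print(cascade_click_probability([0.3, 0.3, 0.3]))
```

The appeal for a search engine is that the bias correction falls out of the model itself: no training data, no tuned parameters.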
Result attractiveness
Good CTR is a relative term. A 30% CTR for a top result in Google wouldn’t be a surprise, unless it’s a branded term; then it would be a terrible CTR. Likewise, the same value for a competitive term would be extraordinarily high if nested between “high-gravity” search features (e.g. an answer box, knowledge panel, or local pack).
I’ve spent five years closely observing CTR data in the context of its dependence on position, snippet quality and special search features. During this time I’ve come to appreciate the value of knowing when deviation from the norm occurs. In addition to ranking position, consider other elements which may impact the user’s choice to click on a result:
- Snippet quality
- Perceived relevance
- Presence of special search result features
- Brand recognition
- Personalisation
Practical application
Search result attractiveness is not an abstract academic problem. When done right, CTR studies can provide a lot of value to a modern marketer. Here’s a case study where I take advantage of CTR average deviations in my phrase research and page targeting process.
In the graph below, we see position-based CTR averages for xbmc-skins.com retrieved from Google’s Search Console:
The site caught my attention, as it outranks the official page for a fairly competitive term. Mark Whitney (the site owner) explains that people like his website better than the official page or even Kodi’s own skin selection process and often jump on his site instead of using the official tools simply because it provides a better user experience.
“There was no way to easily compare features and screenshots of XBMC/Kodi skins. So I made the site to do that and offer faceted filters so users can view only the skins that suit their requirements.”
If a search query outperforms the site’s average CTR for a specific position, then we’re looking at a high-quality snippet or a page of particularly high interest and relevance to users. After processing all available data, I identified a query with particularly good growth potential. The phrase “kodi skins” currently ranks at position 2 with a CTR of 39%, as opposed to the 27% expected at that position. That’s 12 percentage points above the domain’s average CTR for position 2. Part of that success can be attributed to a richer search snippet with links to the most popular skins. One of the reasons for the links to appear in the first place was, of course, user choices, from both a navigational (page visits, navigational paths) and an editorial point of view (links, shares, and discussion). It’s a powerful loop.
With this information, I was able to project a more optimistic CTR for position #1 in Google and raise the traffic projection to 3,758 clicks. The difficulty score for the result above us is only 23/100, which, in combination with the expected click gain, gives an amazing potential score of 831. Potential score is a relative value representing the balance between difficulty and traffic gain. It’s critical when prioritising lists of hundreds or even thousands of keywords. I usually just sort by potential and schedule my campaign work top-down.
After mapping all keywords to their corresponding landing pages, I was able to produce a list of high priority pages ordered by the total keyword potential score. At the top of the list are pages guaranteed to bring good traffic with low effort, and at the bottom of the list are pages that will either never move up, or the extra traffic won’t be attractive enough if they do.
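The deviation check and prioritisation described above can be sketched in a few lines. The per-position averages and the potential-score formula below are assumptions (the exact formula isn’t published), so treat this as an illustration of the approach rather than the tool’s implementation:

```python
# Hypothetical position-based CTR averages for a domain, modeled on the
# kind of data Google's Search Console reports.
site_avg_ctr = {1: 0.42, 2: 0.27, 3: 0.18}

def ctr_deviation(position, observed_ctr, avg_ctr=site_avg_ctr):
    """Positive deviation suggests an unusually attractive snippet or a
    query of particularly high interest; negative suggests the opposite."""
    return observed_ctr - avg_ctr[position]

def potential_score(expected_click_gain, difficulty):
    """One plausible way to balance traffic gain against ranking
    difficulty; the article's exact formula isn't published, so this
    is an assumption."""
    return expected_click_gain * (100 - difficulty) / 100

# "kodi skins": observed 39% CTR at position 2 vs. the 27% domain average.
print(round(ctr_deviation(2, 0.39), 2))  # -> 0.12
```

Sorting queries (or their landing pages) by a score like this is what turns raw Search Console exports into a prioritised work list.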
Additional factors
By aggregating position-based CTR data from multiple reports, I ended up with an up-to-date CTR trends graph for 2015. It shows an interesting dip at position 5, likely influenced by high-gravity SERP elements (e.g. a local pack):
Separating branded and non-branded terms gave me different results, showing much lower CTR values for the top three positions. Finally, URL mapping and phrase tagging also allowed me to determine averages for:
- Page type
- Page topic
- Language
- Location
- File format
Google’s title bolding study
Google is also aware of additional factors that contribute to result attractiveness bias, and they’ve been busy working on non-position click bias solutions.
Leveraging click-through data has become a popular approach for evaluating and optimizing information retrieval systems. For instance, since users must decide whether to click on a result based on its summary (e.g. the title, URL, and abstract), one might expect clicks to favor “more attractive” results. In this piece, we examine result summary attractiveness as a potential source of presentation bias. This study distinguishes itself from prior work by aiming to detect systematic biases in click behavior due to attractive summaries inflating perceived relevance. Our experiments conducted on a commercial web search engine show substantial evidence of presentation bias in clicks, leaning in favor of results with more attractive titles.
Source: Google, 2010
They show strong interest in finding ways to improve the effectiveness of CTR-based ranking signals. In addition to solving position bias, Google’s engineers have gone one step further by investigating SERP snippet title bolding as a result attractiveness bias factor. I find it interesting that Google recently removed bolding in titles for live search results, likely to eliminate the bias altogether. Their paper highlights the value in further research focused on the bias impact of specific SERP snippet features.
“It would be interesting and useful to identify more sophisticated ways to measure attractiveness; e.g., we have not considered the attractiveness of the displayed result URL. Its length, bolding, and recognizable domain may have a significant impact.”
Source: Google, 2010
URL access, duration, frequency, and trajectory
Logged click data is not the only useful user behaviour signal. Session duration, for example, is a high-value metric, but only if it’s measured correctly: a user could navigate to a page and leave it idle while they go out for lunch. This is where active user monitoring systems become useful.
Source: Microsoft, 2012
There are many assisting user-behaviour signals which, while not indexable, aid measurement of engagement time on pages. This includes various types of interaction via keyboard, mouse, touchpad, tablet, pen, touch screen, and other interfaces.
Google’s John Mueller recently explained that user engagement is not a direct ranking signal, and I believe this. Kind of. John said that this type of data (time on page, filling out forms, clicking, etc.) doesn’t do anything automatically.
“So I’d see that as a positive thing in general, but I wouldn’t assume it is something that Google would pick up as a ranking factor and use to kind of promote your web site in search automatically.”
At this point in time, we’re likely looking at a sandbox model rather than a live listening and reaction system when it comes to the direct influence of user behaviour on a specific page. That said, Google does acknowledge limitations of quality-rater and sandbox-based result evaluation. They’ve recently proposed an active learning system, which would evaluate results on the fly with a more representative sample of their user base.
“Another direction for future work is to incorporate active learning in order to gather a more representative sample of user preferences.”
Google’s result attractiveness paper was published in 2010. In early 2011, Google released the Panda algorithm. Later that year, Panda went into flux, indicating an implementation of one form of an active learning system. We can expect more of Google’s systems to run on their own in the future.
The monitoring engine
Google has designed and patented a system in charge of collecting and processing of user behaviour data. They call it “the monitoring engine”, but I don’t like that name—it’s too long. Maybe they should call it, oh, I don’t know… Chrome?
The user behavior data might be obtained from a web browser or a browser assistant associated with clients. A browser assistant may include executable code, such as a plug-in, an applet, a dynamic link library (DLL), or a similar type of executable object or process that operates in conjunction with (or separately from) a web browser. The web browser or browser assistant might send information to the server concerning a user of a client.
Source: Ranking documents based on user behavior and/or feature data, Google, 2012
The actual patent describing Google’s monitoring engine is a truly dreadful read, so if you’re in a rush, you can read my highlights instead.
- Google’s client behavior data processor can retrieve client-side behavior data associated with a web page.
- This client-side behavior data can then be used to help formulate a ranking score for the article.
The monitoring engine can:
- Distinguish whether the user is actually viewing an article, such as a web page, or whether the web page has merely been left active on the client device while the user is away from the client.
- Monitor a plurality of articles associated with one or more applications and create client-side behavior data associated with each article individually.
- Determine client-side behavior data for multiple user articles and ensure that the client-side behavior data associated with an article can be identified with that particular article.
- Transmit the client-side behavior data, together with identifying information that associates the data with a particular article to which it relates, to the data store for storage in a manner that preserves associations between the article and the client behaviors.
Source: Methods and systems for improving a search ranking using article information, Google, 2015
MetricsService
Let’s step away from patents for a minute and observe what’s already out there. Chrome’s MetricsService is a system in charge of the acquisition and transmission of user log data. Transmitted histograms contain very detailed records of user activities, including opened/closed tabs, fetched URLs, maximized windows, et cetera.
Enter this in Chrome: chrome://histograms/
- ET_KEY_PRESSED
- ET_MOUSEWHEEL
- ET_MOUSE_DRAGGED
- ET_MOUSE_EXITED
- ET_MOUSE_MOVED
- ET_MOUSE_PRESSED
- ET_MOUSE_RELEASED
- MouseDown
- MouseMove
- MouseUp
- BrowsingSessionDuration
- NewTabPage.NumberOfMouseOvers
- NewTabPage.SuggestionsType
- NewTabPage.URLState
- Omnibox.SaveStateForTabSwitch.UserInputInProgress
- SessionRestore.TabClosedPeriod
- SessionStorageDatabase.Commit
- History.InMemoryTypedUrlVisitCount
- Sync.FreqTypedUrls
- Autofill.UserHappiness.
- The number of mousedown events detected at HTML anchor-tag links’ default event handler.
- The HTTP response code returned by the Domain Reliability collector when a report is uploaded.
- A count of form activity (e.g. fields selected, characters typed) in a tab. Recorded only for tabs that are evicted due to memory pressure and then selected again.
- Track the different ways users are opening new tabs. Does not apply to opening existing links or searches in a new tab, only to brand-new empty tabs.
Here are a few external links with detailed information about Chrome’s MetricsService, reasons and types of data collection, and a full list of histograms.
Use in rankings
Google can process duration data in an eigenvector-like fashion using nodes (URLs), edges (links), and labels (user behaviour data). Page engagement signals, such as session duration value, are used to calculate weights of nodes. Here are the two modes of a simplified graph comprised of three nodes (A, B, C) with time labels attached to each:
In an undirected graph model (undirected edges), the weight of the node A is directly driven by the label value (120 second active session). In a directed graph (directed edges), node A links to node B and C. By doing so, it receives a time-label credit from the nodes it links to.
Source: Google, 2015
Source: Google, 2015
What’s interesting is that the implicit quality signals of deeper pages also flow up to higher-level pages.
Source: Google, 2015
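A minimal sketch of the directed variant described above, assuming a made-up damping factor of 0.5 (the patents don’t specify one) and the three-node A, B, C graph with time labels:

```python
# Directed-graph case: each node carries a session-time label, and a node
# that links out also receives a damped share of its targets' time-label
# credit.  The damping factor is invented for illustration.
DAMPING = 0.5

time_labels = {"A": 120, "B": 40, "C": 60}   # seconds of active session
links = {"A": ["B", "C"], "B": [], "C": []}  # A links out to B and C

def node_weight(node):
    """Own engagement time plus damped credit from link targets."""
    credit = sum(time_labels[target] for target in links[node])
    return time_labels[node] + DAMPING * credit

print(node_weight("A"))  # -> 170.0
```

Node A ends up weighted by its own 120-second session plus credit flowing back from B and C, which is the “quality flows up to higher-level pages” effect in miniature.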
Reasonable surfer model
“Reasonable surfer” is the random surfer’s successor. The PageRank dampening factor reflects the original assumption that after each followed link, our imaginary surfer is less likely to click on another random link, resulting in an eventual abandonment of the surfing path. Most search engines today work with a more refined model encompassing a wider variety of influencing factors.
[…]
For example, model generating unit may generate a rule that indicates that links with anchor text greater than a particular font size have a higher probability of being selected than links with anchor text less than the particular font size. Additionally, or alternatively, model generating unit may generate a rule that indicates that links positioned closer to the top of a document have a higher probability of being selected than links positioned toward the bottom of the document. Additionally, or alternatively, model generating unit may generate a rule that indicates that when a topical cluster associated with the source document is related to a topical cluster associated with the target document, the link has a higher probability of being selected than when the topical cluster associated with the source document is unrelated to the topical cluster associated with the target document. These rules are provided merely as examples. Model generating unit may generate other rules based on other types of feature data or combinations of feature data. Model generating unit may learn the document-specific rules based on the user behavior data and the feature vector associated with the various links. For example, model generating unit may determine how users behaved when presented with links of a particular source document. From this information, model generating unit may generate document-specific rules of link selection.
For example, model generating unit may generate a rule that indicates that a link positioned under the “More Top Stories” heading on the cnn.com web site has a high probability of being selected. Additionally, or alternatively, model generating unit may generate a rule that indicates that a link associated with a target URL that contains the word “domainpark” has a low probability of being selected. Additionally, or alternatively, model generating unit may generate a rule that indicates that a link associated with a source document that contains a popup has a low probability of being selected. Additionally, or alternatively, model generating unit may generate a rule that indicates that a link associated with a target domain that ends in “.tv” has a low probability of being selected. Additionally, or alternatively, model generating unit may generate a rule that indicates that a link associated with a target URL that includes multiple hyphens has a low probability of being selected.
Source: Google, 2012
For example, the likelihood of a link being clicked on within a page may depend on:
- Position of the link on the page (top, bottom, above/below fold)
- Location of the link on the page (menu, sidebar, footer, content area, list)
- Size of anchor text
- Font size, style, and colour
- Topical cluster match
- URL characteristics (external/internal, hyphenation, TLD, length, redirect, host)
- Image link, size, and aspect ratio
- Number of links on page
- Words around the link, in title, or headings
- Commerciality of anchor text
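A toy version of this weighting can be sketched as follows. The features and their weights are invented for illustration; the patent describes learned, document-specific rules rather than fixed global weights:

```python
# Instead of spreading a page's outbound score evenly across its links
# (the random surfer), weight each link by features of the kind listed
# above.  The feature weights here are assumptions.
FEATURE_WEIGHTS = {
    "in_content": 3.0,     # content-area links beat footer links
    "above_fold": 2.0,
    "large_font": 1.5,
    "topical_match": 2.5,
}

def link_scores(page_links):
    """page_links: list of (name, set_of_features).  Returns each link's
    share of the page's outbound weight under the feature model."""
    raw = {name: sum(FEATURE_WEIGHTS.get(f, 1.0) for f in feats) or 1.0
           for name, feats in page_links}
    total = sum(raw.values())
    return {name: weight / total for name, weight in raw.items()}

shares = link_scores([
    ("B", {"in_content", "above_fold", "topical_match"}),
    ("C", {"footer"}),  # unknown features fall back to a weight of 1.0
])
print(shares)
```

A prominent in-content link ends up with the lion’s share of the page’s outbound weight, while the footer link gets a sliver, mirroring the illustration that follows.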
In addition to perceived importance from on-page signals, a search engine may judge link popularity by observing common user choices. A link on which users click more within a page can carry more weight than one with fewer clicks. Google in particular mentions user click behaviour monitoring in the context of balancing out traditional, more easily manipulated signals (e.g. links).
Source: Google, 2012
In the following illustration, we can see two outbound links on the same document (A) pointing to two other documents: (B) and (C). On the left is what would happen in the traditional “random surfer model,” while on the right we have a link which sits on a more prominent location and tends to be a preferred choice by many of the pages’ visitors.
This method can be used on a single document or in a wider scope, and is also applicable to both single users (personalisation) and groups (classes) of users determined by language, browsing history, or interests.
For example, the web browser or browser assistant may record data concerning the documents accessed by the user and the links within the documents (if any) the user selected. Additionally, or alternatively, the web browser or browser assistant may record data concerning the language of the user, which may be determined in a number of ways that are known in the art, such as by analyzing documents accessed by the user. Additionally, or alternatively, the web browser or browser assistant may record data concerning interests of the user. This may be determined, for example, from the favorites or bookmark list of the user, topics associated with documents accessed by the user, or in other ways that are known in the art. Additionally, or alternatively, the web browser or browser assistant may record data concerning query terms entered by the user. The web browser or browser assistant may send this data for storage in repository.
Source: Google, 2012
Pogo-sticking
One of the most telling signals for a search engine is when users perform a query and quickly bounce back to search results after visiting a page that didn’t satisfy their needs. The effect was described and discussed a long time ago, and numerous experiments show its effect in action. That said, many question the validity of SEO experiments largely due to their rather non-scientific execution and general data noise. So, it’s nice to know that the effect has been on Google’s radar.
Additionally, the user can select a first link in a listing of search results, move to a first web page associated with the first link, and then quickly return to the listing of search results and select a second link. The present invention can detect this behavior and determine that the first web page is not relevant to what the user wants. The first web page can be down-ranked, or alternatively, a second web page associated with the second link, which the user views for longer periods of time, can be up-ranked.
Source: Google, 2015
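A toy sketch of the behaviour the patent describes might look like the following. The threshold, the multipliers, and the function names are all my own illustrative assumptions, not values from the patent:

```python
# Hypothetical pogo-sticking adjustment: a quick bounce back to the SERP
# down-ranks a result; a sustained visit up-ranks it. The threshold and
# multipliers are invented for illustration, not taken from the patent.
POGO_THRESHOLD = 10.0  # seconds

def adjust_scores(scores, click_log):
    """click_log: list of (url, dwell_seconds) events for one query."""
    adjusted = dict(scores)
    for url, dwell in click_log:
        if dwell < POGO_THRESHOLD:
            adjusted[url] *= 0.9  # quick return: likely irrelevant
        else:
            adjusted[url] *= 1.1  # long dwell: likely satisfied intent
    return adjusted

scores = {"page-a": 1.0, "page-b": 1.0}
adjusted = adjust_scores(scores, [("page-a", 3.2), ("page-b", 95.0)])
# page-a is demoted, page-b promoted
```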
Address bar
URL data can include whether a user types a URL into an address field of a web browser, or whether a user accesses a URL by clicking on a hyperlink to another web page or a hyperlink in an email message. So, for example, if users type in the exact URL and hit enter to reach a page, that represents a stronger signal than when visiting the same page after a browser autofill/suggest or clicking on a link.
- Typing in full URL (full significance)
- Typing in partial URL with auto-fill completion (medium significance)
- Following a hyperlink (low significance)
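As a rough illustration, that three-tier hierarchy could be encoded as a simple lookup. The numeric weights here are invented for the example; the patent only establishes the relative ordering:

```python
# Invented weights encoding the relative ordering described above;
# only the ordering (full > autofill > link) comes from the source.
ACCESS_WEIGHTS = {
    "typed_full": 1.0,      # typed the complete URL and hit enter
    "typed_autofill": 0.6,  # partial typing completed by the browser
    "followed_link": 0.3,   # arrived via a hyperlink
}

def visit_signal(access_type):
    """Return the significance weight for how a URL was reached."""
    return ACCESS_WEIGHTS.get(access_type, 0.0)
```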
Login pages
Google monitors users and maps their journey as they browse the web. They know when users log into something (e.g. social network) and they know when they end the session by logging out. If a common journey path always starts with a login page, Google will add more significance to the login page in their rankings.
“A login page can start a user on a trajectory, or sequence, of associated pages and may be more significant to the user than the associated pages and, therefore, merit a higher ranking score.”
I find this very interesting. In fact, as I write this, we’re setting up a login experiment to see if repeated client access and page engagement impacts the search visibility of the page in any way. Readers of this article can access the login test page with username: moz and password: moz123.
The idea behind my experiment is to have all the signals mentioned in this article ticked off:
- URL familiarity, direct entry for maximum credit
- Triggering frequent and repeated access by our clients
- Expected session length of 30-120 seconds
- Session length credit up-flow to home page
- Interactive elements add to engagement (export, chart interaction, filters)
Combining implicit and traditional ranking signals
Google treats various user-generated data with different degrees of importance. Combining implicit signals such as day of the week, active session duration, visit frequency, or type of article with traditional ranking methods improves reliability of search results.
The ranking processor determines a ranking score based at least in part on the client-side behavior data, retrieved from the client behavior data processor, associated with the nth article. This can be accomplished, for example, by a ranking algorithm that weights the various client behavior data and other ranking factors associated with the query signal to produce a ranking score. The different types of client behavior data can have different weights, and these weights can be different for different applications. In addition to the client behavior data, the ranking processor can utilize conventional methods for ranking articles according to the terms contained in the articles. It can further use information obtained from a server on a network (for example, in the case of web pages). The ranking processor can request a PageRank value for the web page from a server and additionally use that value to compute the ranking score. The ranking score can also depend on the type of article. The ranking score can further depend on time, such as the time of day or the day of the week. For example, a user can typically be working on and interested in certain types of articles during the day, and interested in different kinds of articles during the evening or weekends.
Source: Google, 2015
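The weighted-sum approach the patent outlines can be sketched as follows. The signal names and weights below are illustrative assumptions only; the patent says the weights differ per application without specifying values:

```python
# Illustrative weighted sum blending client behaviour data with
# traditional signals; feature names and weights are assumptions.
def ranking_score(features, weights):
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

weights = {"pagerank": 0.5, "session_duration": 0.2,
           "visit_frequency": 0.2, "day_of_week_match": 0.1}
doc = {"pagerank": 0.8, "session_duration": 0.7,
       "visit_frequency": 0.4, "day_of_week_match": 1.0}
score = ranking_score(doc, weights)  # ≈ 0.72
```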
I first suspected that Google’s results change in regular patterns (weekdays, weekends, seasonal events) back in 2013. In a follow-up study this year, we analysed the last 186 days of Algoroo volatility data. Our results showed behaviourally-triggered changes trending upward from Wednesday onward, usually peaking around Friday and Saturday, with a small decline on Sunday and a dramatic drop at the beginning of the week:
Values presented in the chart above are a sum of daily volatility scores for each day of the week during the observation period of 186 days. Our daily fluctuation values are aggregated from result movement, factoring in ~17,000 keywords, 100 deep.
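The aggregation behind a chart like this is straightforward: sum each day’s volatility score into its weekday bucket over the observation window. The sketch below uses synthetic numbers, not our Algoroo data:

```python
# Synthetic illustration of the aggregation: sum daily volatility
# scores by weekday over the window. Dates and values are made up.
from collections import defaultdict
from datetime import date, timedelta

DAY_NAMES = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]

def weekday_totals(daily_scores, start):
    """daily_scores: one volatility value per consecutive day from start."""
    totals = defaultdict(float)
    for offset, score in enumerate(daily_scores):
        day = start + timedelta(days=offset)
        totals[DAY_NAMES[day.weekday()]] += score
    return dict(totals)

# Two weeks of synthetic scores, starting on a Monday
sample = [1.0, 1.1, 1.3, 1.6, 2.0, 2.1, 1.8] * 2
totals = weekday_totals(sample, date(2015, 8, 3))  # 2015-08-03 is a Monday
# totals["Friday"] is the weekly peak in this synthetic series
```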
Impact on SEO
The fact that behaviour signals are on Google’s radar stresses the rising importance of user experience optimisation. Our job is to incentivise users to click, engage, convert, and keep coming back. This complex task requires a multidisciplinary mix, including technical, strategic, and creative skills. We’re being evaluated by both users and search engines, and everything users do on our pages counts. The evaluation starts at the SERP level and follows users during the whole journey throughout your site.
“Good user experience”
Search visibility will never depend on subjective user experience, but on search engines’ interpretation of it. Our most recent research into how people read online shows that users don’t react well when facing large quantities of text (this article included) and will often skim content and leave if they can’t find answers quickly enough. This type of behaviour may send the wrong signals about your page.
My solution was to present all users with a skeletal content form, with supplementary content available on demand through the use of hypotext. As a result, our test page (~5,000 words) increased the average time per user from 6 to 12 minutes and reduced the bounce rate from 90% to 60%. The very article where we published our findings shows clicks, hovers, and scroll-depth activity at double or triple the values of the rest of our content. To me, this was convincing enough.
Google’s algorithms disagreed, however, devaluing the content not visible on the page by default. Queries contained within unexpanded parts of the page aren’t bolded in SERP snippets and currently don’t rank as well as pages which copied that same content but made it visible. This is ultimately something Google has to work on, but in the meantime we have to be mindful of this perception gap and make calculated decisions in cases where good user experience doesn’t match Google’s best practices.
Relevant papers
- Ranking documents based on user behavior and/or feature data, Google, 2010
- Active Exploration for Learning Rankings from Clickthrough Data, Cornell, 2007
- Beyond Position Bias, Google, 2010
- An Experimental Comparison of Click Position-Bias Models, Microsoft, 2008
- Improving Searcher Models Using Mouse Cursor Activity, Microsoft, 2012
- Inferring Search Behaviors Using Partially Observable Markov (POM) Model, Microsoft, 2010
- A Dynamic Bayesian Network Click Model for Web Search Ranking, Yahoo!, 2009
- Modifying search result ranking based on implicit user feedback and a model of presentation bias, Google, 2015
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
Is Online Marketing Gender-Diverse?
Posted by JackieRae
What is your gender?
It’s a standard demographic question that many of us answered on the Online Marketing Industry Survey. It’s a compelling indicator of how our industry is evolving in an area that truly helps us become better marketers: gender equality. Why does gender equality matter to marketing, and why do we need to understand this trend? There are hundreds of reasons why gender diversity is good, and we can name at least a few. Diversity in the workplace fosters better performance by forcing us to think more deeply about decisions and to consider new perspectives. Studies have also shown that creating a more gender-diverse organization will boost a company’s profits. As we discuss the following data, I encourage you to consider how you and your organization can foster more gender diversity in your workplace.
When collecting the data, our goal was to reach the general online marketing population through a diverse range of channels. However, our results skew towards the Moz Community, as most participants discovered the survey through Moz’s blog and Twitter feed.
Of the 3,618 respondents, 1,089 respondents marked “female,” 2,516 marked “male,” and 13 marked “I’d rather not say.” Compared to December 2013, females have only gained about 2% representation in the Online Marketing Industry Survey, with a 7.3% increase in women since 2012. Every year, more women are joining the online marketing workforce. However, there’s still a long way to go before women represent anywhere close to 50% of that population.
Here are the percentages of each gender in overall participation for the last 4 years:
Of the 1,089 women who took the survey, 63.32% are from the United States. Overall, there are more U.S. women represented, but when we filter down to individual countries, we see a more erratic breakdown of men versus women:
Many countries are missing from the diagram, as we only included countries represented by more than 100 participants in the study. According to the survey, the smallest gender difference goes to Canada. (Go Canada!) The largest gender difference is found in India.
Breaking down job titles
Like many fields, online marketing shows job-title inequality in poignant ways. Job titles are a good indicator of which levels and functions are geared towards men or women throughout different types of organizations.
Based on these responses, the most equal fields are web analytics and public relations. Women were more likely to have words like “social media” and “content” in their job titles, while men dominate the fields of engineering, web development, paid advertising, SEO, and e-commerce (all fields that are considered more technical).
The next chart looks at job level, where we see that males dominate higher-level jobs. The largest gaps between men and women are in the titles of president, business owner, chief, consultant, director, analyst, vice president, and strategist. The lowest disparities in job level: intern, project manager, and specialist. Women are more likely to have “editor,” “assistant,” “writer,” and “coordinator” within their titles.
How do these inequalities compare to other industries? According to HBR, on average, women make up 40% of managers, 35% of directors, 27% of vice presidents, 24% of senior vice presidents, and 19% of executives and chiefs. If we compare that data to our own, it would appear that online marketers are making strides in higher-level jobs—we’re above average for vice presidents and chiefs—but falling short for managers, directors, and presidents.
Why are those titles stacked the way they are? I don’t have a specific answer, but I do know what it’s not based on, according to a few other questions in the survey:
- Education? Nope. 80.80% of female respondents have 4-year and/or Master’s-level degrees, while only 64.50% of men reported having 4-year or Master’s-level degrees. Women who took this survey have a higher level of education than their male counterparts.
- Years of experience? Maybe. There are higher percentages of males with more than 3 years’ experience in online marketing. Yet that’s only correlation; it could be that women aren’t staying in online marketing as long as men.
The money question: How do salaries stack up?
Based on the survey responses, there is a suggested pattern that men make more than women, and are paid differently depending on their job level.
A few notes about our methodology: We’re reporting salary in U.S. dollars, because we asked survey participants to report their salaries in that form. In order to analyze the salaries, which were collected as ranges, we took the midpoint of each range. This isn’t the best model, but it allows us to test the significance of the differences between genders and job titles. For you data junkies out there, we used the 2-sample Z-test. If a group had fewer than 30 respondents, it was excluded from testing. Thanks to our resident statistician, Mitch, for helping with the analysis.
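For readers who want to replicate this approach, a 2-sample Z-test on range midpoints takes only a few lines of Python. The salary figures below are synthetic examples, not the survey data:

```python
# Two-sample Z-test on salary-range midpoints, as described above.
# The figures used here are synthetic, not from the survey.
from math import erf, sqrt

def two_sample_z(mean1, sd1, n1, mean2, sd2, n2):
    """Return (z, two-sided p-value); assumes large samples (n >= 30)."""
    z = (mean1 - mean2) / sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p

# Synthetic male vs. female salary-range midpoints (USD)
z, p = two_sample_z(72000, 18000, 400, 64000, 16000, 200)
# here z ≈ 5.5 and p falls well below 0.05: a significant difference
```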
We show that men earn more than women in the United States, Canada, and the United Kingdom. Australian responses showed a slight difference between salaries, but it wasn’t significant enough to say with confidence that salaries were unequal.
*No inequality proven: P-Values are too great to suggest inequality between men and women in Australia. However, the averages show, directionally, that men may make more than women.
Personally, I was shocked at the disparity in salary. However, as we saw in the previous section, males are more likely to have higher-power jobs; could this be a reflection of that? Let’s look at the average salaries of men and women by job title in all responses (globally):
*No inequality proven: P-Values are too great to lend statistical significance to inequality between men and women. However, the averages show, directionally, where salaries swing.
Comparing salaries for job titles, we can see that the differences in salary permeate through nearly all levels and functions. We show that men make more than women as business owners, consultants, and specialists. While we can’t prove disparity in all job functions, it’s worth noting that women may make more than men in some functions. Interestingly, when we filtered our results for only U.S. responses, the most unequal job levels are business owner, manager, and strategist.
Improving gender diversity
We know, based on the data presented, that there is a gender gap in online marketing. The question that remains is: What is your company or organization doing to support a gender-diverse workplace? As I circulated this data around the office, it spurred a really cool discussion: What is being done within online marketing organizations to tackle gender inequalities? I thought I’d share some examples here:
- Tech companies are discussing salaries more openly, like SumAll and Buffer. This will shed light on any disparities occurring within these tech organizations and support equal pay.
- Companies are partnering with nonprofits, such as Ada Developers Academy, that aim to bring women into the technical space.
- Seer Interactive made a point to focus on helping women find mentorship from other women leaders.
- Marketing conferences are making strides to include more female speakers and track attendee participation. (SMX just blogged about this recently, and MozCon had 12 women representing out of 26 total speakers in 2015.)
If you’re feeling stuck or wondering how you can help, here are some great resources and tools for implementing and enacting change in your organization:
- Ways to Proactively Welcome Women into Online Marketing, by Erica McGillivray. A recently published blog post, giving some great tips and tricks on how to engage women in online marketing.
- 6 Ways to Fix Inequality at Work, by the World Economic Forum. The WEF also has some great research and resources for improving education, accessibility, and more across the world.