Affordable, Stat-Based Retail Strategy For Your Agency’s Clients

Posted by MiriamEllis

Retail clients are battling tough economics offline and tough competitors online. They need every bit of help your agency can give them. 

I was heartened when 75 percent of the 1,400+ respondents to the Moz State of Local SEO Industry Report 2019 shared that they contribute to offline strategy recommendations either frequently or at least some of the time. I can’t think of a market where good and relatively inexpensive experiments are more needed than in embattled retail. The ripple effect of a single new idea, offered up generously, can spread out to encompass new revenue streams for the client and new levels of retention for your agency.

And that’s why win-win seemed written all over three statistics from a 2018 Yes Marketing retail survey when I read it: they speak to motivating roughly one quarter to one half of the 1,000 polled customers without going to any extreme expense. Take a look:

I highly recommend downloading Yes Marketing’s complete survey which is chock-full of great data, but today, let’s look at just three valuable stats from it to come up with an actionable strategy you can gift your offline retail clients at your next meeting.

Getting it right: A little market near me

For the past 16 years, I’ve been observing the local business scene with a combination of professional scrutiny and personal regard. I’m inspired by businesses that open and thrive and am saddened by those that open and close.

Right now, I’m especially intrigued by a very small, independently owned grocery store which set up shop last year in what I’ll lovingly describe as a rural, half-a-horse town not far from me. This locale has a single main street with fewer than 20 businesses on it, but I’m predicting the shop’s ultimate success based on several factors. A strong one is that the community is flanked by several much larger towns with lots of through traffic, and the market is several miles from any competitor. But other factors, which match point-for-point with the data in the Yes Marketing survey, make me feel especially confident that this small business is going to “get it right”.

Encourage your retail clients to explore the following tips.

1) The store is visually appealing

43–58 percent of Yes Marketing’s surveyed retail customers say they’d be motivated to shop with a retailer who has cool product displays, murals, etc. Retail shoppers of all ages are seeking appealing experiences.

At the market near me, many things are working in its favor. The building is historic on the outside and full of natural light on the inside, and the staff sets up creative displays, such as all of the ingredients you need to make a hearty winter soup gathered up on a vintage table. The Instagram crowd can have selfie fun here, and more mature customers will appreciate the aesthetic simplicity of this uncluttered, human-scale shopping experience.

For your retail clients, it won’t break the bank to become more visually appealing. Design cues are everywhere!

Share these suggestions with a worthy client:

Basic cleanliness is the starting point

This is an old survey, but I think it’s safe to say that at least 45 percent of retail customers are still put off by dirty premises — especially restrooms. Janitorial duties are already built into the budget of most businesses and simply need to be done properly. I continually notice how many reviewers use the word “clean” when a business deserves it.

Inspiration is affordable

Whatever employees are already being paid is the cost of engaging their creativity to build merchandise displays that draw attention and/or solve problems. My hearty winter soup example is one idea (complete with boxed broth, pasta, veggies, bowls, and cookware).

For your retail client? It might be everything a consumer needs to recover from a cold (medicine, citrus fruit, electric blanket, herbal tea, tissue, a paperback, a sympathetic stuffed animal, etc.). Or everything one needs to winterize a car, take a trip to a beach, build a beautiful window box, or pamper a pet. Retailers can inexpensively encourage the hidden artistic talents in staff.

Feeling stuck? The Internet is full of free retail display tips, design magazines cost a few bucks, and your clients’ cable bills already cover a subscription to channels like HGTV and DIY Network that trade on style. A client who knows that interior designers are all using grey-and-white palettes and that one TV ad after another features women wearing denim blue with aspen yellow right now is well on their way to catching customers’ eyes.

Aspiring artists live near your client and need work

The national average cost to have a large wall mural professionally painted is about $8,000, with much less expensive options available. Some retailers even hold contests surrounding logo design, and an artist near your client may work quite inexpensively if they’re trying to build up their portfolio. I can’t predict how long the Instagram mural trend will last, but wall art has been a crowd-pleaser since Paleolithic times. Any shopper who stops to snap a photo of themselves has been brought into close proximity to the client’s front door.

I pulled this word cloud out of the reviews of the little grocery store:

While your clients’ industries and aesthetics will vary, tell them they can aim for a similar, positive response from at least 49 percent of their customers with a little more care put into the shopping environment.

2) The store offers additional services beyond the sale of products

19–40 percent of survey respondents are influenced by value-adds. Doubtless, you’ve seen the TV commercials in which banks double as coffee houses to appeal to the young, and small hardware chains emphasize staff expertise over loneliness in a warehouse. That’s what this is all about, and it can be done at a smaller scale, without overly straining your retail clients’ budgets.

At the market near me, reviews like this are coming in:

The market has worked out a very economical arrangement with a massage therapist, who can build up their clientele out of the deal, so it’s a win for everybody.

For your retail clients, sharing these examples could inspire appealing added services:

The cost of these efforts is either nominal, free, or already covered by an employee’s salary.

3) The store hosts local events

20–36 percent of customers feel the appeal of retailers becoming destinations for things to learn and do. Coincidentally, this corresponds with two of the tasks Google dubbed micro-moments a couple of years back, and while not everyone loves that terminology, we can at least agree that large numbers of people use the Internet to discover local resources.

At the market near me, they’re doing open-mic readings, and this is a trend in many cities to which Google Calendar attests:

For your clients, the last two words of that event description are key. When there’s a local wish to build community, retail businesses can lend the space and the stage. This can look like:

Again, costs here can be quite modest, and the client will be bringing the community together under the banner of their business.

Putting it in writing

The last item on the budget for any of these ventures is whatever it costs to publicize it. For sure, your client will want:

  • A homepage announcement and/or one or more blog posts
  • Google Posts, Q&A, photos and related features
  • Social mentions
  • If the concept is large enough (or the community is small), some outreach to local news in hopes of a write-up and inclusion on local/social calendars
  • Link building would be great if the client can afford a reasonable investment in your services, where necessary
  • And, of course, be sure your client’s local business listings are accurate so that newcomers aren’t getting lost on their way to finding the cool new offering

Getting the word out about events, features, and other desirable attributes doesn’t have to be exorbitant, but it will put the finishing touch on ensuring the community knows the business is ready to offer the desired experience.

Seeing opportunity

Sometimes, you’ll find yourself in a client meeting and things will be a bit flat. Maybe the client has been disengaged from your contract lately, or sales have been leveling out for lack of new ideas. That’s the perfect time to put something fresh on the table, demonstrating that you’re thinking about the client’s whole picture beyond CTR and citations.

One thing that I find to be an inspiring practice for agencies is to audit competitors’ reviews looking for “holes.” In many communities, shopping is really dull, and reviews reflect that, with few shoppers feeling genuinely excited by a particular vertical’s local offerings. Your client could be the one to change that, with a little extra attention from you.

Every possibility won’t be the perfect match for every business, but if you can help the company see a new opportunity, the few minutes spent brainstorming could benefit you both.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

14 SEO Predictions for 2019 & Beyond, as Told by Mozzers

Posted by TheMozTeam

With the new year in full swing and an already busy first quarter, our 2019 predictions for SEO in the new year are hopping onto the scene a little late — but fashionably so, we hope. From an explosion of SERP features to increased monetization to the key drivers of search this year, our SEO experts have consulted their crystal balls (read: access to mountains of data and in-depth analyses) and made their predictions. Read on for an exhaustive list of fourteen things to watch out for in search from our very own Dr. Pete, Britney Muller, Rob Bucci, Russ Jones, and Miriam Ellis!

1. Answers will drive search

People Also Ask boxes exploded in 2018, and featured snippets have expanded into both multifaceted and multi-snippet versions. Google wants to answer questions, it wants to answer them across as many devices as possible, and it will reward sites with succinct, well-structured answers. Focus on answers that naturally leave visitors wanting more and establish your brand and credibility. [Dr. Peter J. Meyers]

2. Voice search will continue to be utterly useless for optimization

Optimizing for voice search will still be no more than optimizing for featured snippets, and conversions from voice will remain a black box. [Russ Jones]

3. Mobile is table stakes

This is barely a prediction. If your 2019 plan is to finally figure out mobile, you’re already too late. Almost all Google features are designed with mobile-first in mind, and the mobile-first index has expanded rapidly in the past few months. Get your mobile house (not to be confused with your mobile home) in order as soon as you can. [Dr. Peter J. Meyers]

4. Further SERP feature intrusions in organic search

Expect Google to find more and more ways to replace organic with solutions that keep users on Google’s property. This includes interactive SERP features that replace, slowly but surely, many website offerings in the same way that live scores, weather, and flights have. [Russ Jones]

5. Video will dominate niches

Featured Videos, Video Carousels, and Suggested Clips (where Google targets specific content in a video) are taking over the how-to spaces. As Google tests search appliances with screens, including Home Hub, expect video to dominate instructional and DIY niches. [Dr. Peter J. Meyers]

6. SERPs will become more interactive

We’ve seen the start of interactive SERPs with People Also Ask Boxes. Depending on which question you expand, two to three new questions will be generated below it that directly pertain to your expanded question. This real-time engagement keeps people on the SERP longer and helps Google better understand what a user is seeking. [Britney Muller]

7. Local SEO: Google will continue getting up in your business — literally

Google will continue asking more and more intimate questions about your business to your customers. Does this business have gender-neutral bathrooms? Is this business accessible? What is the atmosphere like? How clean is it? What kind of lighting do they have? And so on. If Google can acquire accurate, real-world information about your business (your percentage of repeat customers via geocaching, price via transaction history, etc.) they can rely less heavily on website signals and provide more accurate results to searchers. [Britney Muller]

8. Business proximity-to-searcher will remain a top local ranking factor

In Moz’s recent State of Local SEO report, the majority of respondents agreed that Google’s focus on the proximity of a searcher to local businesses frequently emphasizes distance over quality in the local SERPs. I predict that we’ll continue to see this heavily weighting the results in 2019. On the one hand, hyper-localized results can be positive, as they allow a diversity of businesses to shine for a given search. On the other hand, with the exception of urgent situations, most people would prefer to see best options rather than just closest ones. [Miriam Ellis]

9. Local SEO: Google is going to increase monetization

Look to see more of the local and maps space monetized uniquely by Google, both through AdWords and potentially new lead-gen models. This space will become more and more competitive. [Russ Jones]

10. Monetization tests for voice

Google and Amazon have been moving towards voice-supported displays in hopes of better monetizing voice. It will be interesting to see their efforts to get displays into homes and how they integrate display advertising. Bold prediction: Amazon will provide sleep-mode display ads similar to how Kindle displays them today. [Britney Muller]

11. Marketers will place a greater focus on the SERPs

I expect we’ll see a greater focus on the analysis of SERPs as Google does more to give people answers without them having to leave the search results. We’re seeing more and more vertical search engines like Google Jobs, Google Flights, Google Hotels, and Google Shopping. We’re also seeing more in-depth content make it onto the SERP than ever in the form of featured snippets, People Also Ask boxes, and more. With these new developments, marketers will increasingly want to measure and report on their general brand visibility across all the elements within a SERP, not just their own website’s ranking. [Rob Bucci]

12. Targeting topics will be more productive than targeting queries

2019 is going to be another year in which we see the emphasis on individual search queries start to decline, as people focus more on clusters of queries around topics. People Also Ask queries have made the importance of topics much more obvious to the SEO industry. With PAAs, Google is clearly illustrating that they think about searcher experience in terms of a searcher’s satisfaction across an entire topic, not just a specific search query. With this in mind, we can expect SEOs to more and more want to see their search queries clustered into topics so they can measure their visibility and the competitive landscape across these clusters. [Rob Bucci]

13. Linked unstructured citations will receive increasing focus

I recently conducted a small study in which there was a 75% correlation between organic and local pack rank. Linked unstructured citations (the mention of partial or complete business information + a link on any type of relevant website) are a means of improving organic rankings which underpin local rankings. They can also serve as a non-Google dependent means of driving traffic and leads. Anything you’re not having to pay Google for will become increasingly precious. Structured citations on key local business listing platforms will remain table stakes, but competitive local businesses will need to focus on unstructured data to move the needle. [Miriam Ellis]

14. Reviews will remain a competitive difference-maker

A Google rep recently stated that about one-third of local searches are made with the intent of reading reviews. This is huge. Local businesses that acquire and maintain a good and interactive reputation on the web will have a critical advantage over brands that ignore reviews as fundamental to customer service. Competitive local businesses will earn, monitor, respond to, and analyze the sentiment of their review corpus. [Miriam Ellis]

We’ve heard from Mozzers, and now we want to hear from you. What have you seen so far in 2019 that’s got your SEO Spidey senses tingling? What trends are you capitalizing on and planning for? Let us know in the comments below (and brag to friends and colleagues when your prediction comes true in the next 6–10 months). 😉

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Advanced Linkbuilding: How to Find the Absolute Best Publishers and Writers to Pitch

Posted by KristinTynski

In my last post, I explained how using network visualization tools can help you massively improve your content marketing PR/outreach strategy — understanding which news outlets have the largest syndication networks empowers your outreach team to prioritize high-syndication publications over low-syndication ones. The result? The content you are pitching enjoys significantly more widespread link pickups.

Today, I’m going to take you a little deeper — we’ll be looking at a few techniques for forming an even better understanding of the publisher syndication networks in your particular niche. I’ve broken this technique into two parts:

  • Technique One — Leveraging Buzzsumo influencer data and Twitter scraping to find the most influential journalists writing about any topic
  • Technique Two — Leveraging the Gdelt dataset to reveal deep story syndication networks between publishers using in-context links.

Why do this at all?

If you are interested in generating high-value links at scale, these techniques provide an undeniable competitive advantage — they help you to deeply understand how writers and news publications connect and syndicate to each other.

In our opinion at Fractl, creating data-driven content stories that have strong news hooks, finding writers and publications who would find the content compelling, and pitching them effectively is the single highest-ROI SEO activity possible. Done correctly, it is entirely possible to generate dozens, sometimes even hundreds or thousands, of high-authority links with one or a handful of content campaigns.

Let’s dive in.

Using Buzzsumo to understand journalist influencer networks on any topic

First, you want to figure out who the top influencers are for your topic. A very handy feature of Buzzsumo is its “Influencers” tool. You can locate it on the Influencers tab, then follow these steps:

  • Select only “Journalists.” This will limit the result to only the Twitter accounts of those known to be reporters and journalists of major publications. Bloggers and lower authority publishers will be excluded.
  • Search using a topical keyword. If it is straightforward, one or two searches should be fine. If it is more complex, create a few related queries and collate the Twitter accounts that appear in all of them. Alternatively, use Boolean “and/or” operators in your search to narrow your results. It is critical to be sure your search results return journalists that match your target criteria as closely as possible.
  • Ideally, you want at least 100 results. More is generally better, so long as you are sure the results represent your target criteria well.
  • Once you are happy with your search result, click export to grab a CSV.

The next step is to grab all of the people each of these known journalist influencers follows — the goal is to understand which of these 100 or so influencers impacts the other 100 the most. Additionally, we want to find people outside of this group that many of these 100 follow in common.

To do so, we leveraged Twint, a handy Twitter scraper available on GitHub, to pull all of the people each of these journalist influencers follows. Using our scraped data, we built an edge list, which allowed us to visualize the result in Gephi.
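
As a rough illustration of that step, here is a minimal Python sketch — not Fractl’s actual script. It assumes Twint’s Config/run.Following interface, and the handles, file names, and CSV column position are placeholders to adapt to your own export:

# A minimal sketch of the follow-graph scrape, using Twint.
# Handles and file names are illustrative.
import csv
import twint

influencers = ["maiasz", "radleybalko", "johannhari101"]  # from the Buzzsumo export

edges = []
for handle in influencers:
    config = twint.Config()
    config.Username = handle
    config.Store_csv = True
    config.Output = f"{handle}_following.csv"
    twint.run.Following(config)  # scrape everyone this journalist follows

    with open(f"{handle}_following.csv", newline="") as f:
        for row in csv.reader(f):
            # Adjust the column index to match Twint's CSV layout
            edges.append((handle, row[0]))

# Write Source,Target pairs that Gephi can import directly
with open("edges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Source", "Target"])
    writer.writerows(edges)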

Here is an interactive version for you to explore, and here is a screenshot of what it looks like:

This graph shows us which nodes (influencers) have the most In-Degree links. In other words: it tells us who, of our media influencers, is most followed. 

These are the top 10 nodes:

  • @maiasz
  • @radleybalko
  • @johannhari101
  • @davidkroll
  • @narcomania
  • @milbank
  • @samquinones7
  • @felicejfreyer
  • @jeannewhalen
  • @ericbolling

Who is the most influential?

Using the “Betweenness Centrality” score given by Gephi, we get a rough understanding of which nodes (influencers) in the network act as hubs of information transfer. Those with the highest “Betweenness Centrality” can be thought of as the “connectors” of the network. These are the top 10 influencers:

  • Maia Szalavitz (@maiasz) Neuroscience journalist, VICE and TIME
  • Radley Balko (@radleybalko) Opinion journalist, Washington Post
  • Johann Hari (@johannhari101) New York Times best-selling author
  • David Kroll (@davidkroll) Freelance healthcare writer, Forbes Health
  • Max Daly (@Narcomania) Global Drugs Editor, VICE
  • Dana Milbank (@milbank) Columnist, Washington Post
  • Sam Quinones (@samquinones7) Author
  • Felice Freyer (@felicejfreyer) Boston Globe reporter, mental health and addiction
  • Jeanne Whalen (@jeannewhalen) Business reporter, Washington Post
  • Eric Bolling (@ericbolling) New York Times best-selling author

@maiasz, @davidkroll, and @johannhari101 are standouts. There’s considerable overlap between the winners in “In-Degree” and “Betweenness Centrality,” but the two lists are still quite different.
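
If you’d like to sanity-check these two measures outside of Gephi, both are easy to reproduce in Python with networkx. A minimal sketch, assuming the Source,Target edge list built during the scrape:

# Reproduce Gephi's two measures -- In-Degree and betweenness centrality --
# from the edge list using networkx.
import csv
import networkx as nx

G = nx.DiGraph()
with open("edges.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the Source,Target header
    for source, target in reader:
        G.add_edge(source, target)

# Most followed: nodes with the highest In-Degree
top_followed = sorted(G.in_degree(), key=lambda pair: -pair[1])[:10]

# Connectors: nodes with the highest betweenness centrality
centrality = nx.betweenness_centrality(G)
top_connectors = sorted(centrality.items(), key=lambda pair: -pair[1])[:10]

print("Most followed (In-Degree):", top_followed)
print("Top connectors (betweenness):", top_connectors)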

What else can we learn?

The middle of the visualization holds many of the largest nodes. The nodes in this view are sized by “In-Degree.” The large, centrally located nodes are disproportionately followed by other members of the graph and enjoy popularity across the board (from many of the other influential nodes). These are journalists commonly followed by everyone else. Sifting through these centrally located nodes will surface many journalists who behave as influencers of the group initially pulled from Buzzsumo.

So, if you had a campaign about a niche topic, you could consider pitching to an influencer surfaced from this data — according to the visualization, an article shared in their network would have the most reach and potential ROI.

Using Gdelt to find the most influential websites on a topic with in-context link analysis

The first example was a great way to find the best journalists in a niche to pitch, but top journalists are often the most pitched-to overall. Oftentimes, it can be easier to get a pickup from lesser-known writers at major publications. For this reason, understanding which major publishers are most influential, and enjoy the widest syndication on a specific theme, topic, or beat, can be hugely helpful.

By using Gdelt’s massive and fully comprehensive database of digital news stories, along with Google BigQuery and Gephi, it is possible to dig even deeper and yield important strategic information that will help you prioritize your content pitching.

We pulled all of the articles in Gdelt’s database that are known to be about a specific theme within a given timeframe. In this case (as with the previous example) we looked at “behavioral health.” For each article we found in Gdelt’s database that matched our criteria, we also grabbed only the links found within the context of the article.

Here is how it is done:

  • Connect to Gdelt on Google BigQuery — you can find a tutorial here.
  • Pull data from Gdelt. You can use this command: SELECT DocumentIdentifier, V2Themes, Extras, SourceCommonName, DATE FROM [gdelt-bq:gdeltv2.gkg] where (V2Themes like '%Your Theme%').
  • Select any theme you find here — just replace the part between the percent signs.
  • Extract the links found in each article and build an edge file. This can be done with a relatively simple Python script that pulls all of the <PAGE_LINKS> from the results of the query, cleans the links to show only their root domain (not the full URL), and puts them into an edge file format (see the sketch after this list).

Note: The edge file is made up of Source -> Target pairs. The Source is the article and the Targets are the links found within the article. The edge list will look like this:

  • Article 1, First link found in the article.
  • Article 1, Second link found in the article.
  • Article 2, First link found in the article.
  • Article 2, Second link found in the article.
  • Article 2, Third link found in the article.
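
Here is a rough sketch of what that Python script might look like. It assumes the BigQuery results were exported to CSV and that in-context links sit in the Extras column as a <PAGE_LINKS> element with semicolon-delimited URLs — verify both against your own export:

# A minimal sketch: parse <PAGE_LINKS> out of a Gdelt CSV export and
# write a Source,Target edge file of root domains.
import csv
import re
from urllib.parse import urlparse

PAGE_LINKS_RE = re.compile(r"<PAGE_LINKS>(.*?)</PAGE_LINKS>", re.DOTALL)

def root_domain(url):
    """Reduce a full URL to its root domain, e.g. https://www.cnn.com/a -> cnn.com"""
    netloc = urlparse(url if "://" in url else "http://" + url).netloc.lower()
    return netloc[4:] if netloc.startswith("www.") else netloc

with open("gdelt_results.csv", newline="") as infile, \
     open("edges.csv", "w", newline="") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.writer(outfile)
    writer.writerow(["Source", "Target"])
    for row in reader:
        source = root_domain(row["SourceCommonName"])  # the publishing domain
        match = PAGE_LINKS_RE.search(row.get("Extras") or "")
        if not match:
            continue
        for link in match.group(1).split(";"):
            if link.strip():
                writer.writerow([source, root_domain(link.strip())])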

From here, the edge file can be used to build a network visualization where the nodes are publishers and the edges between them represent the in-context links found in our Gdelt data pull around whatever topic we desired.

This final visualization is a network representation of the publishers who have written stories about addiction, and where those stories link to.

What can we learn from this graph?

This tells us which nodes (publisher websites) have the most In-Degree links. In other words: who is the most linked-to. We can see that the most linked-to sites for this topic are:

  • tmz.com
  • people.com
  • cdc.gov
  • cnn.com
  • go.com
  • nih.gov
  • ap.org
  • latimes.com
  • jamanetwork.com
  • nytimes.com

Which publisher is most influential?

Using the “Betweenness Centrality” score given by Gephi, we get a rough understanding of which nodes (publishers) in the network act as hubs of information transfer. The nodes with the highest “Betweenness Centrality” can be thought of as the “connectors” of the network. Getting pickups from these high-betweenness-centrality nodes gives a much greater likelihood of syndication for that specific topic/theme.

  • dailymail.co.uk
  • nytimes.com
  • people.com
  • cnn.com
  • latimes.com
  • washingtonpost.com
  • usatoday.com
  • cvslocal.com
  • huffingtonpost.com
  • sfgate.com

What else can we learn?

Similar to the first example, the higher a node’s betweenness centrality score and number of In-Degree links, and the more centrally located it is in the graph, the more “important” that node can generally be said to be. Using this as a guide, the most important pitching targets can be easily identified.

Understanding some of the edge clusters gives additional insight into other potential opportunities — these include a few clusters specific to regional or state-level local news, and a few foreign-language publication clusters.

Wrapping up

I’ve outlined two different techniques we use at Fractl to understand the influence networks around specific topical areas, both in terms of publications and the writers at those publications. The visualization techniques described are not obvious guides, but instead, are tools for combing through large amounts of data and finding hidden information. Use these techniques to unearth new opportunities and prioritize as you get ready to find the best places to pitch the content you’ve worked so hard to create.

Do you have any similar ideas or tactics to ensure you’re pitching the best writers and publishers with your content? Comment below!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

We Dipped Our Toes Into Double Featured Snippets

Posted by TheMozTeam

This post was originally published on the STAT blog.

Featured snippets, a vehicle for voice search and the answers to our most pressing questions, have doubled on the SERPs — but not in the way we usually mean. This time, instead of appearing on twice the number of SERPs, two snippets are appearing on the same SERP. Hoo!

In all our years of obsessively stalking snippets, this is one of the first documented cases of them doing something a little different. And we are here for it.

While it’s still early days for the double-snippet SERP, we’re giving you everything we’ve got so far. And the bottom line is this: double the snippets means double the opportunity.

Google’s case for double-snippet SERPs

The first time we heard mention of more than one snippet per SERP was at the end of January in Google’s “reintroduction” to featured snippets.

Not yet launched, details on the feature were a little sparse. We learned that they’re “to help people better locate information” and “may also eventually help in cases where you can get contradictory information when asking about the same thing but in different ways.”

Thankfully, we only had to wait a month before Google released them into the wild and gave us a little more insight into their purpose.

Calling them “multifaceted” featured snippets (a definition we’re not entirely sure we’re down with), Google explained that they’re currently serving “‘multi-intent’ queries, which are queries that have several potential intentions or purposes associated,” and will eventually expand to queries that need more than one piece of information to answer.

With that knowledge in our back pocket, let’s get to the good stuff.

The double snippet rollout is starting off small

Since the US-en market is Google’s favorite testing ground for new features and the largest locale being tracked in STAT, it made sense to focus our research there. We chose to analyze mobile SERPs over desktop because of Google’s (finally released) mobile-first indexing, and also because that’s where Google told us they were starting.

After waiting for enough two-snippet SERPs to show up so we could get our (proper) analysis on, we pulled our data at the end of March. Of the mobile keywords currently tracking in the US-en market in STAT, 122,501 had a featured snippet present, and of those, 1.06 percent had more than one to their name.

With only 1,299 double-snippet SERPs to analyze, we admit that our sample size is smaller than our big-data-nerd selves would like. That said, it is indicative of how petite this release currently is.

Two snippets appear for noun-heavy queries

Our first order of business was to see what kind of keywords two snippets were appearing for. If we can zero in on what Google might deem “multi-intent,” then we can optimize accordingly.

By weighting our double-snippet keywords by TF-IDF, we found that nouns such as “insurance,” “computer,” “job,” and “surgery” were the primary triggers — like in [general liability insurance policy] and [spinal stenosis surgery].
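
If you want to run the same kind of term-weighting on your own keyword set, here is a rough sketch using scikit-learn — the keyword list is illustrative, and this is not STAT’s actual pipeline:

# A rough sketch: rank the highest TF-IDF-weighted terms in a keyword set.
from sklearn.feature_extraction.text import TfidfVectorizer

keywords = [  # illustrative stand-ins for keywords that returned two snippets
    "general liability insurance policy",
    "spinal stenosis surgery",
    "business insurance policy",
    "computer science job",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(keywords)

# Sum each term's weight across all keywords, then rank descending
weights = matrix.sum(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, weight in sorted(zip(terms, weights), key=lambda pair: -pair[1])[:10]:
    print(term, round(weight, 3))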

It’s important to note that we don’t see this mirrored in single-snippet SERPs. When we refreshed our snippet research in November 2017, we saw that snippets appeared most often for “how,” followed closely by “does,” “to,” “what,” and “is” — all words that typically compose full-sentence questions.

Essentially, without those interrogative words, Google is left to guess what the actual question is. Take our [general liability insurance policy] keyword as an example — does the searcher want to know what a general liability insurance policy is or how to get one?

Because of how vague the query is, it’s likely the searcher wants to know everything they can about the topic. And so, instead of having to pick, Google’s finally caught onto the wisdom of the Old El Paso taco girl — why not have both?

Better leapfrogging and double duty domains

Next, we wanted to know where you’d need to rank in order to win one (or both) of the snippets on this new SERP. This is what we typically call “source position.”

On a single-snippet SERP and ignoring any SERP features, Google pulls from the first organic rank 31 percent of the time. On double-snippet SERPs, the top snippet pulls from the first organic rank 24.84 percent of the time, and the bottom pulls from organic ranks 5–10 more often than solo snippets.

What this means is that you can leapfrog more competitors in a double-snippet situation than when just one is in play.

And when we dug into who’s answering all these questions, we discovered that 5.70 percent of our double-snippet SERPs had the same domain in both snippets. This begs the obvious question: is your content ready to do double duty?

Snippet headers provide clarity and keyword ideas

In what feels like the first new addition to the feature in a long time, there’s now a header on top of each snippet, which states the question it’s set out to answer. With reports of headers on solo snippets (and “People also search for” boxes attached to the bottom — will this madness never end?!), this may be a sneak peek at the new norm.

Instead of relying on guesses alone, we can turn to these headers for what a searcher is likely looking for — we’ll trust in Google’s excellent consumer research. Using our [general liability insurance policy] example once more, Google points us to “what is general liabilities insurance” and “what does a business insurance policy cover” as good interpretations.

Because these headers effectively turn ambiguous statements into clear questions, we weren’t surprised to see words like “how” and “what” appear in more than 80 percent of them. This trend falls in line with keywords that typically produce snippets, which we touched on earlier.

So, not only does a second snippet mean double the goodness that you usually get with just one, it also means more insight into intent and another keyword to track and optimize for.

Both snippets prefer paragraph formatting

Next, it was time to give formatting a look-see to determine whether the snippets appearing in twos behave any differently than their solo counterparts. To do that, we gathered every snippet on our double-snippet SERPs and compared them against our November 2017 data, back when pairs weren’t a thing.

While Google’s order of preference is the same for both — paragraphs, lists, and then tables — paragraph formatting was the clear favorite on our two-snippet SERPs.

It follows, then, that the most common pairing of snippets was paragraph-paragraph — this appeared on 85.68 percent of our SERPs. The least common, at 0.31 percent, was the table-table coupling.

We can give two reasons for this behavior. One, if a query can have multiple interpretations, it makes sense that a paragraph answer would provide the necessary space to explain each of them, and two, Google really doesn’t like tables.

We saw double-snippet testing in action

When looking at the total number of snippets we had on hand, we realized that the only way everything added up was if a few SERPs had more than two snippets. And lo! Eleven of our keywords returned anywhere from six to 12 snippets.

For a hot minute we were concerned that Google was planning a full-SERP snippet takeover, but when we searched those keywords a few days later, we discovered that we’d caught testing in action.

Here’s what we saw play out for the keyword [severe lower back pain]:

After testing six variations, Google decided to stick with the first two snippets. Whether this is a matter of top-of-the-SERP results getting the most engagement no matter what, or the phrasing of these questions resonating with searchers the most, is hard for us to tell.

The multiple snippets appearing for [full-time employment] left us scratching our heads a bit:

Our best hypothesis is that searchers in Florida, New York State, Minnesota, and Oregon have more questions about full-time employment than other places. But, since we’d performed a nationwide search, Google seems to have thought better of including location-specific snippets.

Share your double-snippet SERP experiences

It goes without saying — but here we are saying it anyway — that we’ll be keeping an eye on the scope of this release and will report back on any new revelations.

In the meantime, we’re keen to know what you’re seeing. Have you had any double-snippet SERPs yet? Were they in a market outside the US? What keywords were surfacing them?

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

What a Two-Tiered SERP Means for Content Strategy – Whiteboard Friday

Posted by willcritchlow

If you’re a big site competing to rank for popular head terms, where’s the best place to focus your content strategy? According to a hypothesis by the good folks at Distilled, the answer may lie in perfectly satisfying searcher intent.

https://fast.wistia.net/embed/iframe/z392yvpkam?videoFoam=true

https://fast.wistia.net/assets/external/E-v1.js

Click on the whiteboard image above to open a high resolution version in a new tab!

If you haven’t heard the news, the Domain Authority metric discussed in this episode will be updated on March 5th, 2019 to better correlate with Google algorithm changes. Learn about what’s changing below:

Learn more about the new DA


Video Transcription

Hi, Whiteboard Friday fans. I’m Will Critchlow, one of the founders at Distilled. What I want to talk about today is joining the dots between some theoretical work that some of my colleagues have been doing, some of the client work that we’ve been doing recently, the results that we’ve been seeing from that in the wild, and what I think it means for strategies for different-sized sites from here on.

Correlations and a hypothesis

The beginning of this I credit to one of my colleagues, Tom Capper, THCapper on Twitter, who presented at our SearchLove London conference a presentation entitled “The Two-Tiered SERP,” and I’m going to describe what that means in just a second. But what I’m going to do today is talk about what I think the two-tiered SERP means for content strategy going forward, and base that a little bit on some of what we’re seeing in the wild with some of our client projects.

What Tom presented at SearchLove London was that the correlation between domain authority and rankings has decreased over time. He pulled out some stats from February 2017 and looked at those same stats 18 months later and saw a significant drop in the correlation between domain authority and rankings. This ties into a bunch of work that he’s done and presented elsewhere around potentially less reliance on links going forward, and some other data that Google might be using, some other metrics and ranking factors that they might be using in their place, particularly branded metrics and so forth.

But Tom saw this drop and had a hypothesis that it wasn’t just an across-the-board drop. This wasn’t just Google not using links anymore or using links less. It was actually a more granular effect than that. This is what we mean by the two-tiered SERP: a search engine result page where the results at the top and the results further down the page behave differently.

What Tom found — he had this hypothesis that was borne out in the data — was that the correlation between domain authority and rankings was much higher among positions 6 through 10 than it was among the top half of the search results page. This can be explained by essentially somewhat traditional ranking factors operating lower down the page and in lower-competition niches, while at the top of the page — where there’s more usage data, greater search volume, and so forth — traditional ranking factors play less of a part.
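
If you track rankings yourself, the hypothesis is easy to test on your own data. Here is a small sketch — the CSV and its column names are assumptions, and this is not Distilled’s actual analysis:

# Compare the DA-vs-rank correlation in positions 1-5 against positions 6-10.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("serp_sample.csv")  # assumed columns: keyword, rank, domain_authority

for label, lo, hi in [("positions 1-5", 1, 5), ("positions 6-10", 6, 10)]:
    tier = df[df["rank"].between(lo, hi)]
    rho, p = spearmanr(tier["domain_authority"], tier["rank"])
    print(label, "Spearman rho =", round(rho, 2), "p =", round(p, 3))

# Higher DA should mean a better (lower) rank, so the correlation is negative;
# the two-tiered-SERP hypothesis predicts a notably stronger correlation
# for positions 6-10 than for positions 1-5.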

They maybe get you into the consideration set. There are no domains ranking up here that are very, very weak. But once you’re in the consideration set, there’s much less of a correlation between these different positions. So it’s still true on average that positions 1 through 5 are probably more authoritative than the sites appearing in lower positions. But within that top set there’s less predictive value: domain authority is less predictive of ranking within positions 1 through 5 than it is within positions 6 through 10. So this is the two-tiered SERP, and it’s consistent with a bunch of data that we’ve seen across the place and, in particular, with the outcomes that we’re seeing among content campaigns and content strategies for different kinds of sites.

At Distilled, we get quite a lot of clients coming to us wanting either a content strategy put together or, in some cases, coming to us essentially with their content strategy and saying, “Can you execute this? Can you help us execute this plan?” It’s very common for that plan to be, “We want to create a bunch of big pieces of content that get a ton of links, and we’re going to use that link authority to make our site more authoritative, and that is going to result in our whole site doing better and ranking better.”

An anonymized case study

We’ve seen that that is performing differently in different cases, and in particular it’s performing better on smaller sites than it is on big sites. So this is a little anonymized case study. This is a real example of a story that happened with one of our consulting clients, where we put in place a content strategy for them that did include a plan to build the domain authority, because this was a site that came to us with a domain authority significantly below that of their key competitors, with none of these sites having a ton of domain authority.

This was working in a B2B space, relatively small domains. They came to us with that, and we figured that actually growing the authority was a key part of this content strategy, and over the next 18 months put out a bunch of pieces that have done really well and generated a ton of press coverage and traction and things. Over that time, they’ve actually outstripped their key competitors in the domain authority metrics, and crucially we saw that tie directly to increases in traffic that went hand-in-hand with this increase in domain authority.

But this contrasts with what we’ve seen with some much larger sites in much more competitive verticals, where they’re already very, very high domain authority, maybe they’re already stronger than some of their competitors and adding to that. So adding big content pieces that get even more big authoritative links has not moved the needle in the way that it might have done a few years ago.

That’s totally consistent with this kind of setup, where if you are currently trying to edge in at the bottom or you’re competing for less competitive search terms, then this kind of approach might really work for you, and it might, in fact, be necessary to get into the consideration set for the more competitive end. But if you’re operating on a much bigger site, you’ve already got the competitive domain authority, and you and your competitors are all very powerful sites, then our hypothesis is that you’re going to need to look more towards the user experience, the conversion rate, and intent research.

Are you satisfying searcher intent for competitive head terms?

What is somebody who performs this search actually looking to do? Can you satisfy that intent? Can you make sure that they don’t bounce back to the search results and click on a competitor? Can you make sure that in fact they stay on your site, they get done the thing they want to get done, and it all works out for them? We think that these kinds of things are going to be much more powerful for moving up through the very top end of the most competitive head terms.

So when we’re working on a content strategy or putting our creative team to work on these kinds of things on bigger sites, we’re more likely to be creating content directly designed to rank. We might be creating content based off a ton of this research, and we’re going to be incrementally improving those things to try and say, “Have we actually satisfied the perfect intent for this super competitive head term?”

What we’re seeing is that’s more likely to move the needle up at this top end than growing the domain authority on a big site. So I hope you found that interesting. I’m looking forward to a vigorous discussion in the comments on this one. But thank you for joining me for this week’s Whiteboard Friday. I’ve been Will Critchlow from Distilled. Take care.

Video transcription by Speechpad.com

Learn about Domain Authority 2.0!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

The Influence of Voice Search on Featured Snippets

Posted by TheMozTeam

This post was originally published on the STAT blog.

We all know that featured snippets provide easy-to-read, authoritative answers and that digital assistants love to say them out loud when asked questions.

This means that featured snippets have an impact on voice search — bad snippets, or no snippets at all, and digital assistants struggle. By that logic: Create a lot of awesome snippets and win the voice search race. Right?

Right, but there’s actually a far more interesting angle to examine — one that will help you nab more snippets and optimize for voice search at the same time. In order to explore this, we need to make like Doctor Who and go back in time.

From typing to talking

Back when dinosaurs roamed the earth and queries were typed into search engines via keyboards, people adapted to search engines by adjusting how they performed queries. We pulled out unnecessary words and phrases, like “the,” “of,” and, well, “and,” which created truncated requests — robotic-sounding searches for a robotic search engine.

The first ever dinosaur to use Google.

Of course, as search engines have evolved, so too has their ability to understand natural language patterns and the intent behind queries. Google’s 2013 Hummingbird update helped pave the way for such evolution. This algorithm rejigging allowed Google’s search engine to better understand the whole of a query, moving it away from keyword matching to conversation having.

This is good news if you’re a human person: We have a harder time changing the way we speak than the way we write. It’s even greater news for digital assistants, because voice search only works if search engines can interpret human speech and engage in chitchat.

Digital assistants and machine learning

By looking at how digital assistants do their voice search thing (what we say versus what they search), we can see just how far machine learning has come with natural language processing and how far it still has to go (robots, they’re just like us!). We can also get a sense of the kinds of queries we need to be tracking if voice search is on the SEO agenda.

For example, when we asked our Google Assistant, “What are the best headphones for $100,” it queried [best headphones for $100]. We followed that by asking, “What about wireless,” and it searched [best wireless headphones for $100]. And then we remembered that we’re in Canada, so we followed that with, “I meant $100 Canadian,” and it performed a search for [best wireless headphones for $100 Canadian].

We can learn two things from this successful tête-à-tête: Not only does our Google Assistant manage to construct mostly full-sentence queries out of our mostly full-sentence asks, but it’s able to accurately link together topical queries. Despite us dropping our subject altogether by the end, Google Assistant still knows what we’re talking about.

Of course, we’re not above pointing out the fumbles. In the string of: “How to bake a Bundt cake,” “What kind of pan does it take,” and then “How much do those cost,” the actual query Google Assistant searched for the last question was [how much does bundt cake cost].

Just after we finished praising our Assistant for being able to maintain the same subject all the way through our inquiry, we needed it to be able to switch tracks. And it couldn’t. It associated the “those” with our initial Bundt cake subject instead of the most recent noun mentioned (Bundt cake pans).

In another important line of questioning about Bundt cake-baking, “How long will it take” produced the query [how long does it take to take a Bundt cake], while “How long does that take” produced [how long does a Bundt cake take to bake].

They’re the same ask, but our Google Assistant had a harder time parsing which definition of “take” our first sentence was using, spitting out a rather awkward query. Unless we really did want to know how long it’s going to take us to run off with someone’s freshly baked Bundt cake? (Don’t judge us.)

Since Google is likely paying out the wazoo to up the machine learning ante, we expect there to be fewer awkward failures over time. Which is a good thing, because when we asked about Bundt cake ingredients (“Does it take butter”) we found ourselves looking at a SERP for [how do I bake a butter].

Snippets are appearing for different kinds of queries

So, what are we to make of all of this? That we’re essentially in the midst of a natural language renaissance. And that voice search is helping spearhead the charge.

As for what this means for snippets specifically? They’re going to have to show up for human-speak-type queries. And wouldn’t you know it, Google is already moving forward with this strategy, and not simply creating more snippets for the same types of queries. We’ve even got proof.

Over the last two years, we’ve seen an increase in the number of words in a query that surfaces a featured snippet. Long-tail queries may be a nuisance and a half, but snippet-having queries are getting longer by the minute.

When we bucket and weight the terms found in those long-tail queries by TF-IDF, we get further proof of voice search’s sway over snippets. The term “how” appears more than any other word and is followed closely by “does,” “to,” “much,” “what,” and “is” — all words that typically compose full sentences and are easier to remove from our typed searches than our spoken ones.

This means that if we want to snag more snippets and help searchers using digital assistants, we need to build out long-tail, natural-sounding keyword lists to track and optimize for.

Format your snippet content to match

When it’s finally time to optimize, one of the best ways to get your content into the ears of a searcher is through the right snippet formatting, which is a lesson we can learn from Google.

Taking our TF-IDF-weighted terms, we found that the words “best” and “how to” brought in the most list snippets of the bunch. We certainly don’t have to think too hard about why Google decided they benefit from list formatting — it provides a quick comparative snapshot or a handy step-by-step.

From this, we may be inclined to format all of our “best” and “how to” keyword content into lists. But, as you can see in the chart above, paragraphs and tables are still appearing here, and we could be leaving snippets on the table by ignoring them. If we have time, we’ll dig into which keywords those formats are a better fit for and why.

Get tracking

You could be the Wonder Woman of meta descriptions, but if you aren’t optimizing for the right kind of snippets, then your content’s going to have a harder time getting heard. Building out a voice search-friendly keyword list to track is the first step to lassoing those snippets.

Want to learn how you can do that in STAT? Say hello and request a tailored demo.

Need more snippets in your life? We dug into Google’s double-snippet SERPs for you — double the snippets, double the fun.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

SEO Channel Context: An Analysis of Growth Opportunities

Posted by BrankoK

Too often you see SEO analyses and decisions being made without considering the context of the marketing channel mix. Equally often you see large budgets being poured into paid ads in ways that seem to forget there’s a whole lot to gain from catering to popular search demand.

Both instances can lead to leaky conversion funnels and missed opportunity for long-term traffic flows. This article will show you a case of an SEO context analysis we used to determine the importance and role of SEO.

This analysis was one of our deliverables for a marketing agency client who hired us to inform their SEO decisions; we then turned it into a report template for you to get inspired by and duplicate.

                Case description

                The included charts show real, live data. You can see the whole SEO channel context analysis in this Data Studio SEO report template.

                The traffic analyzed is for of a monetizing blog, whose marketing team also happens to be one of most fun to work for. For the sake of this case study, we’re giving them a spectacular undercover name — “The Broze Fellaz.”

                For context, this blog started off with content for the first two years before they launched their flagship product. Now, they sell a catalogue of products highly relevant to their content and, thanks to one of the most entertaining Shark Tank episodes ever aired, they have acquired investments and a highly engaged niche community.

As you’ll see below, organic search is their biggest channel in many ways. Facebook runs as both organic and paid, and the team spends many an hour inside the platform. Email has elaborate automated flows that strive to leverage the subscribers who come from the stellar content on the website. We therefore chose those three — organic search, Facebook, and email — as a combination that would yield a comprehensive analysis with insights we can easily act on.

                Ingredients for the SEO analysis

                This analysis is a result of a long-term retainer relationship with “The Broze Fellaz” as our ongoing analytics client. A great deal was required in order for data-driven action to happen, but we assure you, it’s all doable.

                From the analysis best practice drawer, we used:

                • 2 cups of relevant channels for context and analysis via comparison.
                • 3 cups of different touch points to identify channel roles — bringing in traffic, generating opt-ins, closing sales, etc.
                • 5 heads of open-minded lettuce and readiness to change the current status quo, for a team that can execute.
                • 457 oz of focus on finding what is going on with organic search, why it is going on, and what we can do about it (otherwise, we’d end up with another scorecard export).
                • Imperial units used in arbitrary numbers that are hard to imagine and thus feel very large.
                • 1 to 2 heads of your analyst brain, baked into the analysis. You’re not making an automated report — even a HubSpot intern can do that. You’re being a human and you’re analyzing. You’re making human analysis. This helps avoid having your job stolen by a robot.
                • Full tray of Data Studio visualizations that appeal to the eye.
                • Sprinkles of benchmarks, for highlighting significance of performance differences.

                From the measurement setup and stack toolbox, we used:

                • Google Analytics with tailored channel definitions, enhanced e-commerce and Search Console integration.
                • Event tracking for opt-ins and adjusted bounce rate via MashMetrics GTM setup framework.
                • UTM routine for social and email traffic implemented via Google Sheets & UTM.io (a toy sketch of the idea follows this list).
                • Google Data Studio. This is my favorite visualization tool. Despite its flaws and gaps (as it’s still in beta) I say it is better than its paid counterparts, and it keeps getting better. For data sources, we used the native connectors for Google Analytics and Google Sheets, then Facebook community connectors by Supermetrics.
                • Keyword Hero. Thanks to semantic algorithms and data aggregation, you are indeed able to see 95 percent of your organic search queries (check out Onpage Hero, too, you’ll be amazed).
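For illustration, here’s a toy version of what a UTM routine boils down to. Ours lives in Google Sheets and UTM.io; the function and parameter names below are invented for this sketch:

from urllib.parse import urlencode

def utm_url(base_url, source, medium, campaign, content=None):
    # Consistent UTM parameters keep social and email traffic in the
    # right Google Analytics channel definitions.
    params = {"utm_source": source, "utm_medium": medium, "utm_campaign": campaign}
    if content:
        params["utm_content"] = content
    return base_url + "?" + urlencode(params)

print(utm_url("https://example.com/blog/post", "facebook", "paid_social", "spring_launch"))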

                Inspiration for my approach comes from Lea Pica, Avinash, the Google Data Studio newsletter, and Chris Penn, along with our dear clients and the questions they have us answer for them.

                Ready? Let’s dive in.

Analysis of the client’s SEO in the context of their channel mix

                1) Insight: Before the visit

                What’s going on and why is it happening?

                Organic search traffic volume blows the other channels out of the water. This is normal for sites with quality regular content; yet, the difference is stark considering the active effort that goes into Facebook and email campaigns.

The CTR of organic search is up to par with Facebook’s. That’s a lot to say when comparing an organic channel to a channel with a high level of targeting control.

It looks like email flows are the clear winner in terms of CTR to the website. That makes sense for this client: the site has a highly engaged community of users who return fairly often and advocate passionately, and its product and content are incredibly relevant to those users, something few other companies appear to be good at.

A high CTR on search engine results pages often indicates that organic search may support funnel stages beyond just the top.

As well, email flows are sent to a very warm audience — interested users who went through a double opt-in. It’s to be expected that this CTR is high.

                What’s been done already?

                There’s an active effort and budget allocation being put towards Facebook Ads and email automation. A content plan has been put in place and is being executed diligently.

                What we recommend next

                1. Approach SEO as systematically as you approach Facebook and email flows.
                2. Optimize meta titles and descriptions via testing tools such as Sanity Check. The organic search CTR may become consistently higher than that of Facebook ads.
                3. Assuming you’ve worked on improving CTR for Facebook ads, have the same person work on the meta text and titles. Most likely, there’ll be patterns you can replicate from social to SEO.
                4. Run a technical audit and optimize accordingly. Knowing that you haven’t done that in a long time, and seeing how much traffic you get anyway, there’ll be quick, big wins to enjoy.

                Results we expect

You can easily increase the organic CTR by at least 5 percent. You could also clean up the technical state of your site in the eyes of crawlers — you’ll then see faster indexing by search engines when you publish new content and increased impressions for existing content. As a result, you may enjoy a major spike within a month.

                2) Insight: Engagement and options during the visit

With over 70 percent of traffic coming to this website from organic search, the metrics in this analysis will be heavily skewed towards organic search. So, comparing a rate for organic search against the site-wide rate is sometimes conclusive and sometimes not.

Adjusted bounce rate — via GTM events in the measurement framework used, we do not count a visit as a bounce if it lasts 45 seconds or longer. We prefer this approach because an adjusted bounce rate is much more actionable for content sites. Users who find what they were searching for often read the page they land on for several minutes without clicking through to another page; that’s still a memorable visit for the user. Further, staying on the landing page for a while, or keeping the page open in a browser tab, are both good indicators for distinguishing quality, interested traffic from traffic at large.
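In spirit, the adjustment works like the following pandas sketch. The session data and field names are made up; in production, the 45-second threshold comes from a GTM timer event:

import pandas as pd

sessions = pd.DataFrame({
    "channel": ["Organic Search", "Facebook", "Organic Search", "Email"],
    "pageviews": [1, 1, 3, 1],
    "seconds_on_page": [170, 12, 310, 95],
})

# A single-page visit only counts as a bounce if it lasted under 45 seconds.
sessions["adjusted_bounce"] = (
    (sessions["pageviews"] == 1) & (sessions["seconds_on_page"] < 45)
)
print(sessions.groupby("channel")["adjusted_bounce"].mean())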

We included all Facebook traffic here, not just paid. We know from the client’s data that the majority is from paid content (they have a solid UTM routine in place), but due to boosted posts, we’ve experienced big inaccuracies when splitting paid and organic Facebook for the purposes of channel attribution.

                What’s going on and why is it happening?

                It looks like organic search has a bounce rate worse than the email flows — that’s to be expected and not actionable, considering that the emails are only sent to recent visitors who have gone through a double opt-in. What is meaningful, however, is that organic has a better bounce rate than Facebook. It is safe to say that organic search visitors will be more likely to remember the website than the Facebook visitors.

                Opt-in rates for Facebook are right above site average, and those for organic search are right below, while organic is bringing in a majority of email opt-ins despite its lower opt-in rate.

                Google’s algorithms and the draw of the content on this website are doing better at winning users’ attention than the detailed targeting applied on Facebook. The organic traffic will have a higher likelihood of remembering the website and coming back. Across all of our clients, we find that organic search can be a great retargeting channel, particularly if you consider that the site will come up higher in search results for its recent visitors.

                What’s been done already?

The Facebook ad campaigns of “The Broze Fellaz” have been built and optimized for driving content opt-ins. The site content that ranks in organic search is far less intentional about opt-ins.

                Opt-in placements have been tested on some of the biggest organic traffic magnets.

                Thorough, creative and consistent content calendars have been in place as a foundation for all channels.

                What we recommend next

                1. It’s great to keep using organic search as a way to introduce new users to the site. Now, you can try to be more intentional about using it to drive opt-ins. It’s already serving both stages of the funnel.
                2. Test and optimize opt-in placements on more traffic magnets.
                3. Test and optimize opt-in copy for top 10 traffic magnets.
                4. Once your opt-in rates have improved, focus on growing the channel. Build on the content work with a three-month sprint of an extensive SEO project.
                5. Assign Google Analytics goal values to non-e-commerce actions on your site. The current opt-ins have different roles and levels of importance and there’s also a handful of other actions people can take that lead to marketing results down the road. Analyzing goal values will help you create better flows toward pre-purchase actions.
                6. Facebook campaigns seem to be at a point where you can pour more budget into them and expect a proportionate increase in opt-in count.

                Results we expect

Growth in your opt-ins from Facebook should be proportionate to the increase in budget, with a near-immediate effect. At the same time, it’s fairly realistic to bring the opt-in rate of organic search closer to the site average.

                3) Insight: Closing the deal

For channel attribution with money involved, you want to make sure that your Google Analytics channel definitions, view filters, and UTMs are in top shape.

                What’s going on and why is it happening?

                Transaction rate, as well as per session value, is higher for organic search than it is for Facebook (paid and organic combined).

Organic search contributes far more last-click revenue than Facebook and email combined. For their relatively low volume of traffic, email flows are outstanding in the volume of revenue they bring in.

                Thanks to the integration of Keyword Hero with Google Analytics for this client, we can see that about 30 percent of organic search visits are from branded keywords, which tends to drive the transaction rate up.

So, why is this happening? Most of the products on the site are highly relevant to the information people search for on Google.

                Multi-channel reports in Google Analytics also show that people often discover the site in organic search, then come back by typing in the URL or clicking a bookmark. That makes organic a source of conversions where, very often, no other channels are even needed.

                We can conclude that Facebook posts and campaigns of this client are built to drive content opt-ins, not e-commerce transactions. Email flows are built specifically to close sales.

                What’s been done already?

There is dedicated staff for Facebook campaigns and posts, as well as a thorough system dedicated to automated email flows.

A consistent content routine is in place, with experienced staff at the helm. A piece has been published every week for the last few years, with the content calendar filled with ready-to-publish content for the next few months. The community is highly engaged, reading times are high, comment counts are soaring, and the content’s usefulness is outstanding. This, along with partnerships with influencers, helps “The Broze Fellaz” take up half of the first page of the SERP for several lucrative topics. They’ve achieved this even without a comprehensive SEO project. Content seems to be king indeed.

Google Shopping has been tried. The campaign looked promising but didn’t yield incremental sales. There’s much more search demand for informational queries than there is for products.

                What we recommend next

                1. Organic traffic is ready to grow. If there is no budget left, resource allocation should be considered. In paid search, you can often simply increase budgets. Here, with stellar content already performing well, a comprehensive SEO project is begging for your attention. Focus can be put into structure and technical aspects, as well as content that better caters to search demand. Think optimizing the site’s information architecture, interlinking content into a cornerstone structure, log analysis and technical cleanup, meta text testing for CTR gains that would also lead to ranking gains, strategic ranking for long-tail topics, and intentional growth of the backlink profile.
                2. A three- or six-month intensive sprint of comprehensive SEO work would be appropriate.

                Results we expect

Increasing last-click revenue from organic search and direct by 25 percent would lead to a gain as high as all of the current revenue from automated email flows. Considering how large the growth has been already, this gain is more than achievable in 3–6 months.

                Wrapping it up

The organic search presence of “The Broze Fellaz” should continue to play the number-one role in bringing new people to the site and bringing people back to it. Doing so supports sales that happen with the contribution of other channels, e.g. email flows. The analysis also points out that organic search is effective at playing the role of the last-click channel for transactions, oftentimes without the help of other channels.

                We’ve worked with this client for a few years, and, based on our knowledge of their marketing focus, this analysis points us to a confident conclusion that a dedicated, comprehensive SEO project will lead to high incremental growth.

                Your turn

                In drawing analytical conclusions and acting on them, there’s always more than one way to shoe a horse. Let us know what conclusions you would’ve drawn instead. Copy the layout of our SEO Channel Context Comparison analysis template and show us what it helped you do for your SEO efforts — create a similar analysis for a paid or owned channel in your mix. Whether it’s comments below, tweeting our way, or sending a smoke signal, we’ll be all ears. And eyes.


                Make sense of your data with these essential keyword segments

                Posted by TheMozTeam

                This blog post was originally published on the STAT blog.


                The first step to getting the most out of your SERP data is smart keyword segmentation — it surfaces targeted insights that will help you make data-driven decisions.

                But knowing what to segment can feel daunting, especially when you’re working with thousands of keywords. That’s why we’re arming you with a handful of must-have tags.

                Follow along as we walk through the different kinds of segments in STAT, how to create them, and which tags you’ll want to get started with. You’ll be a fanciful segment connoisseur by the time we’re through!

                Segmentation in STAT

                In STAT, keyword segments are called “tags” and come as two different types: standard or dynamic.

                Standard tags are best used when you want to keep specific keywords grouped together because of shared characteristics — like term (brand, product type, etc), location, or device. Standard tags are static, so the keywords that populate those segments won’t change unless you manually add or remove them.

                Dynamic tags, on the other hand, are a fancier kind of tag based on filter criteria. Just like a smart playlist, dynamic tags automatically populate with all of the keywords that meet said criteria, such as keywords with a search volume over 500 that rank on page one. This means that the keywords in a dynamic tag aren’t forever — they’ll filter in and out depending on the criteria you’ve set.
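If it helps to see the mechanics, a dynamic tag behaves like a saved filter that gets re-applied every time your data refreshes. Roughly, in pandas terms (this is an analogy, not STAT’s internals):

import pandas as pd

keywords = pd.DataFrame({
    "keyword": ["best blinds", "blinds near me", "how to hang curtains"],
    "search_volume": [880, 590, 320],
    "rank": [4, 12, 8],
})

# "Search volume over 500 that rank on page one" (re-applied on every update).
dynamic_tag = keywords[(keywords["search_volume"] > 500) & (keywords["rank"] <= 10)]
print(dynamic_tag["keyword"].tolist())  # ['best blinds']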

                How to create a keyword segment

                Tags are created in a few easy steps. At the Site level, pop over to the Keywords tab, click the down arrow on any table column header, and then select Filter keywords. From there, you can select the pre-populated options or enter your own metrics for a choose-your-own-filter adventure.

                Once your filters are in place, simply click Tag All Filtered Keywords, enter a new tag name, and then pick the tag type best suited to your needs — standard or dynamic — and voila! You’ve created your very own segment.

                Segments to get you started

                Now that you know how to set up a tag, it’s time to explore some of the different segments you can implement and the filter criteria you’ll need to apply.

                Rank and rank movement

                Tracking your rank and ranking movements with dynamic tags will give you eyeballs on your keyword performance, making it easy to monitor and report on current and historical trends.

                There’s a boatload of rank segments you can set up, but here’s just a sampling to get you started:

                • Keywords ranking in position 1–3; this will identify your top performing keywords.
                • Keywords ranking in position 11–15; this will suss out the low-hanging, top of page two fruit in need of a little nudge.
                • Keywords with a rank change of 10 or more (in either direction); this will show you keywords that are slipping off or shooting up the SERP.

                Appearance and ownership of SERP features

                Whether they’re images, carousels, or news results, SERP features have significantly altered the search landscape. Sometimes they push you down the page and other times, like when you manage to snag one, they can give you a serious leg up on the competition and drive loads more traffic to your site.

Whatever industry-related SERP features you want to keep apprised of, you can create dynamic tags that show you their prevalence and movement within your keyword set. Segment even further for tags that show which keywords own those features and which have fallen short.

                Below are a few segments you can set up for featured snippets and local packs.

                Featured snippets

                Everyone’s favourite SERP feature isn’t going anywhere anytime soon, so it wouldn’t be a bad idea to outfit yourself with a snippet tracking strategy. You can create as many tags as there are snippet options to choose from:

                • Keywords with a featured snippet.
                • Keywords with a paragraph, list, table, and/or carousel snippet.
                • Keywords with an owned paragraph, list, table, and/or carousel snippet.
                • Keywords with an unowned paragraph, list, table, and/or carousel snippet.

                The first two will allow you to see over-arching snippet trends, while the last two will chart your ownership progress.

                If you want to know the URL that’s won you a snippet, just take a peek at the URL column.

                Local packs

If you’re a brick-and-mortar business, we highly advise creating tags for local packs since they provide a huge opportunity for exposure. These two tags will show you which local packs you have a presence in and which you need to work on:

                • Keywords with an owned local pack.
                • Keywords with an unowned local pack.

                Want all the juicy data squeezed into a local pack, like who’s showing up and with what URL? We created the Local pack report just for that.

                Landing pages, subdomains, and other important URLs

                Whether you’re adding new content or implementing link-building strategies around subdomains and landing pages, dynamic tags allow you to track and measure page performance, see whether your searchers are ending up on the pages you want, and match increases in page traffic with specific keywords.

                For example, are your informational intent keywords driving traffic to your product pages instead of your blog? To check, a tag that includes your blog URL will pull in each post that ranks for one of your keywords.

                Try these three dynamic tags for starters:

                • Keywords ranking for a landing page URL.
                • Keywords ranking for a subdomain URL.
                • Keywords ranking for a blog URL.

                Is a page not indexed yet? That’s okay. You can still create a dynamic tag for its URL and keywords will start appearing in that segment when Google finally gets to it.

                Location, location, location

                Google cares a lot about location and so should you, which is why keyword segments centred around location are essential. You can tag in two ways: by geo-modifier and by geo-location.

                For these, it’s better to go with the standard tag as the search term and location are fixed to the keyword.

                Geo-modifier

                A geo-modifier is the geographical qualifier that searchers manually include in their query — like in [sushi near me]. We advocate for adding various geo-modifiers to your keywords and then incorporating them into your tagging strategy. For instance, you can segment by:

                • Keywords with “in [city]” in them.
                • Keywords with “near me” in them.

                The former will show you how you fare for city-wide searches, while the latter will let you see if you’re meeting the needs of searchers looking for nearby options.
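If you build your geo-modified keyword list outside of STAT first, a small script can pre-sort it for tagging. A sketch with an assumed city list (swap in your own markets):

import re

keywords = ["sushi near me", "best sushi in seattle", "sushi rolls recipe"]
city_pattern = re.compile(r"\bin (seattle|portland|vancouver)\b")

for kw in keywords:
    if "near me" in kw:
        tag = "geo: near me"
    elif city_pattern.search(kw):
        tag = "geo: in [city]"
    else:
        tag = "no geo-modifier"
    print(kw, "->", tag)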

                Geo-location

                Geo-location is where the keyword is being tracked. More tracked locations mean more searchers’ SERPs to sample. And the closer you can get to searchers standing on a street corner, the more accurate those SERPs will be. This is why we strongly recommend you track in multiple pin-point locations in every market you serve.

                Once you’ve got your tracking strategy in place, get your segmentation on. You can filter and tag by:

                • Keywords tracked in specific locations; this will let you keep tabs on geographical trends.
                • Keywords tracked in each market; this will allow for market-level research.

                Search volume & cost-per-click

                Search volume might be a contentious metric thanks to Google’s close variants, but having a decent idea of what it’s up to is better than a complete shot in the dark. We suggest at least two dynamic segments around search volume:

                • Keywords with high search volume; this will show which queries are popular in your industry and have the potential to drive the most traffic.
                • Keywords with low search volume; this can actually help reveal conversion opportunities — remember, long-tail keywords typically have lower search volumes but higher conversion rates.

                Tracking the cost-per-click of your keywords will also bring you and your PPC team tonnes of valuable insights — you’ll know if you’re holding the top organic spot for an outrageously high CPC keyword.

                As with search volume, tags for high and low CPC should do you just fine. High CPC keywords will show you where the competition is the fiercest, while low CPC keywords will surface your easiest point of entry into the paid game — queries you can optimize for with less of a fight.

                Device type

                From screen size to indexing, desktop and smartphones produce substantially different SERPs from one another, making it essential to track them separately. So, filter and tag for:

                • Keywords tracked on a desktop.
                • Keywords tracked on a smartphone.

                Similar to your location segments, it’s best to use the standard tag here.

                Go crazy with multiple filters

                We’ve shown you some really high-level segments, but you can actually filter down your keywords even further. In other words, you can get extra fancy and add multiple filters to a single tag. Go as far as high search volume, branded keywords triggering paragraph featured snippets that you own for smartphone searchers in the downtown core. Phew!

Want to talk shop about segmentation or see dynamic tags in action? Say hello (don’t be shy) and request a demo.


                Build a Search Intent Dashboard to Unlock Better Opportunities

                Posted by scott.taft

                We’ve been talking a lot about search intent this week, and if you’ve been following along, you’re likely already aware of how “search intent” is essential for a robust SEO strategy. If, however, you’ve ever laboured for hours classifying keywords by topic and search intent, only to end up with a ton of data you don’t really know what to do with, then this post is for you.

I’m going to share how to take all that sweet keyword data you’ve categorized, put it into a Power BI dashboard, and start slicing and dicing to uncover a ton of insights — faster than you ever could before.

                Building your keyword list

                Every great search analysis starts with keyword research and this one is no different. I’m not going to go into excruciating detail about how to build your keyword list. However, I will mention a few of my favorite tools that I’m sure most of you are using already:

                • Search Query Report — What better place to look first than the search terms already driving clicks and (hopefully) conversions to your site.
                • Answer The Public — Great for pulling a ton of suggested terms, questions and phrases related to a single search term.
                • InfiniteSuggest — Like Answer The Public, but faster and allows you to build based on a continuous list of seed keywords.
                • MergeWords — Quickly expand your keywords by adding modifiers upon modifiers.
                • Grep Words — A suite of keyword tools for expanding, pulling search volume and more.

                Please note that these tools are a great way to scale your keyword collecting but each will come with the need to comb through and clean your data to ensure all keywords are at least somewhat relevant to your business and audience.

                Once I have an initial keyword list built, I’ll upload it to STAT and let it run for a couple days to get an initial data pull. This allows me to pull the ‘People Also Ask’ and ‘Related Searches’ reports in STAT to further build out my keyword list. All in all, I’m aiming to get to at least 5,000 keywords, but the more the merrier.

                For the purposes of this blog post I have about 19,000 keywords I collected for a client in the window treatments space.

                Categorizing your keywords by topic

                Bucketing keywords into categories is an age-old challenge for most digital marketers but it’s a critical step in understanding the distribution of your data. One of the best ways to segment your keywords is by shared words. If you’re short on AI and machine learning capabilities, look no further than a trusty Ngram analyzer. I love to use this Ngram Tool from guidetodatamining.com — it ain’t much to look at, but it’s fast and trustworthy.

After dropping my 19,000 keywords into the tool and analyzing by unigram (or one-word phrases), I manually select categories that fit with my client’s business and audience. I also make sure each unigram accounts for a decent number of keywords (e.g. I wouldn’t pick a unigram with a count of only two keywords).
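If you’d rather script the count than use a web tool, the unigram analysis takes only a few lines of Python (toy keywords shown; feed in your full list):

from collections import Counter

keywords = [
    "white blinds near me",
    "living room blinds ideas",
    "blackout curtains sale",
    "how to clean drapes",
]

# Count 1-word phrases across the whole keyword list.
unigrams = Counter(word for kw in keywords for word in kw.lower().split())
print(unigrams.most_common(10))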

                Using this data, I then create a Category Mapping table and map a unigram, or “trigger word”, to a Category like the following:

                You’ll notice that for “curtain” and “drapes” I mapped both to the Curtains category. For my client’s business, they treat these as the same product, and doing this allows me to account for variations in keywords but ultimately group them how I want for this analysis.

                Using this method, I create a Trigger Word-Category mapping based on my entire dataset. It’s possible that not every keyword will fall into a category and that’s okay — it likely means that keyword is not relevant or significant enough to be accounted for.
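Mechanically, the Category Mapping table boils down to a trigger-word lookup. Here’s a minimal sketch; the trigger words and categories are examples, and yours would come from the unigram step:

category_map = {
    "curtain": "Curtains", "curtains": "Curtains", "drapes": "Curtains",
    "blinds": "Blinds", "shades": "Shades",
}

def categorize(keyword):
    # Return the first category whose trigger word appears in the keyword;
    # keywords with no trigger word stay uncategorized, and that's okay.
    for word in keyword.lower().split():
        if word in category_map:
            return category_map[word]
    return None

print(categorize("white blinds near me"))      # Blinds
print(categorize("drapes for sliding doors"))  # Curtains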

                Creating a keyword intent map

Just as I identified common topics by which to group keywords, I’m going to follow a similar process, this time with the goal of grouping keywords by intent modifier.

                Search intent is the end goal of a person using a search engine. Digital marketers can leverage these terms and modifiers to infer what types of results or actions a consumer is aiming for.

For example, if a person searches for “white blinds near me”, it is safe to infer that this person is looking to buy white blinds as they are looking for a physical location that sells them. In this case I would classify “near me” as a “Transactional” modifier. If, however, the person searched “living room blinds ideas”, I would infer their intent is to see images or read blog posts on the topic of living room blinds. I might classify this search term as being at the “Inspirational” stage, where a person is still deciding what products they might be interested in and, therefore, isn’t quite ready to buy yet.

There is a lot of research on generally accepted intent modifiers in search, and I don’t intend to reinvent the wheel. This handy guide (originally published in STAT) provides a good review of intent modifiers you can start with.

                I followed the same process as building out categories to build out my intent mapping and the result is a table of intent triggers and their corresponding Intent stage.

                Intro to Power BI

There are tons of resources on how to get started with the free tool Power BI, one of which is our own founder Wil Reynolds’ video series on using Power BI for Digital Marketing. This is a great place to start if you’re new to the tool and its capabilities.

Note: it’s not necessarily about the tool (although Power BI is a super powerful one). It’s more about being able to look at all of this data in one place and pull insights from it at speeds Excel just won’t give you. If you’re still skeptical of trying a new tool like Power BI by the end of this post, I urge you to get the free download from Microsoft and give it a try.

                Setting up your data in Power BI

                Power BI’s power comes from linking multiple datasets together based on common “keys.” Think back to your Microsoft Access days and this should all start to sound familiar.

                Step 1: Upload your data sources

First, open Power BI and you’ll see a button called “Get Data” in the top ribbon. Click that, then select the data format you want to upload. All of my data for this analysis is in CSV format, so I will select the Text/CSV option for all of my data sources. Follow these steps for each data source, clicking “Load” each time.

                Step 2: Clean your data

                In the Power BI ribbon menu, click the button called “Edit Queries.” This will open the Query Editor where we will make all of our data transformations.

The main things you’ll want to do in the Query Editor are the following:

                • Make sure all data formats make sense (e.g. keywords are formatted as text, numbers are formatted as decimals or whole numbers).
                • Rename columns as needed.
                • Create a domain column in your Top 20 report based on the URL column.

Close and apply your changes by hitting the “Close & Apply” button.

                Step 3: Create relationships between data sources

                On the left side of Power BI is a vertical bar with icons for different views. Click the third one to see your relationships view.

                In this view, we are going to connect all data sources to our ‘Keywords Bridge’ table by clicking and dragging a line from the field ‘Keyword’ in each table and to ‘Keyword’ in the ‘Keywords Bridge’ table (note that for the PPC Data, I have connected ‘Search Term’ as this is the PPC equivalent of a keyword, as we’re using here).

                The last thing we need to do for our relationships is double-click on each line to ensure the following options are selected for each so that our dashboard works properly:

                • The cardinality is Many to 1
                • The relationship is “active”
                • The cross filter direction is set to “both”

                We are now ready to start building our Intent Dashboard and analyzing our data.

                Building the search intent dashboard

                In this section I’ll walk you through each visual in the Search Intent Dashboard (as seen below):

                Top domains by count of keywords

                Visual type: Stacked Bar Chart visual

                Axis: I’ve nested URL under Domain so I can drill down to see this same breakdown by URL for a specific Domain

                Value: Distinct count of keywords

                Legend: Result Types

                Filter: Top 10 filter on Domains by count of distinct keywords

                Keyword breakdown by result type

                Visual type: Donut chart

                Legend: Result Types

                Value: Count of distinct keywords, shown as Percent of grand total

                Metric Cards

                Sum of Distinct MSV

                Because the Top 20 report shows each keyword 20 times, we need to create a calculated measure in Power BI to only sum MSV for the unique list of keywords. Use this formula for that calculated measure:

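// DISTINCT() builds the unique keyword list, FIRSTNONBLANK() picks a single
// MSV value for each keyword, and SUMX() sums those values, so each keyword's
// MSV is counted once rather than 20 times.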
                Sum Distinct MSV = SUMX(DISTINCT('Table'[Keywords]), FIRSTNONBLANK('Table'[MSV], 0))
                

                Keywords

This is just a distinct count of keywords.

                Slicer: PPC Conversions

                Visual type: Slicer

                Drop your PPC Conversions field into a slicer and set the format to “Between” to get this nifty slider visual.

                Tables

                Visual type: Table or Matrix (a matrix allows for drilling down similar to a pivot table in Excel)

                Values: Here I have Category or Intent Stage and then the distinct count of keywords.

                Pulling insights from your search intent dashboard

                This dashboard is now a Swiss Army knife of data that allows you to slice and dice to your heart’s content. Below are a couple examples of how I use this dashboard to pull out opportunities and insights for my clients.

                Where are competitors winning?

                With this data we can quickly see who the top competing domains are, but what’s more valuable is seeing who the competitors are for a particular intent stage and category.

                I start by filtering to the “Informational” stage, since it represents the most keywords in our dataset. I also filter to the top category for this intent stage which is “Blinds”. Looking at my Keyword Count card, I can now see that I’m looking at a subset of 641 keywords.

                Note: To filter multiple visuals in Power BI, you need to press and hold the “Ctrl” button each time you click a new visual to maintain all the filters you clicked previously.

                The top competing subdomain here is videos.blinds.com with visibility in the top 20 for over 250 keywords, most of which are for video results. I hit ctrl+click on the Video results portion of videos.blinds.com to update the keywords table to only keywords where videos.blinds.com is ranking in the top 20 with a video result.

From all this I can now say that videos.blinds.com is ranking in the top 20 positions for about 30 percent of keywords that fall into the “Blinds” category and the “Informational” intent stage. I can also see that most of the keywords here start with “how to”, which tells me that people searching for blinds at the informational stage are most likely looking for how-to instructions, and that video may be a desired content format.

                Where should I focus my time?

Whether you’re in-house or at an agency, time is always a hot commodity. You can use this dashboard to quickly identify opportunities that you should be prioritizing first — opportunities that can guarantee you’ll deliver bottom-line results.

                To find these bottom-line results, we’re going to filter our data using the PPC conversions slicer so that our data only includes keywords that have converted at least once in our PPC campaigns.

                Once I do that, I can see I’m working with a pretty limited set of keywords that have been bucketed into intent stages, but I can continue by drilling into the “Transactional” intent stage because I want to target queries that are linked to a possible purchase.

                Note: Not every keyword will fall into an intent stage if it doesn’t meet the criteria we set. These keywords will still appear in the data, but this is the reason why your total keyword count might not always match the total keyword count in the intent stages or category tables.

                From there I want to focus on those “Transactional” keywords that are triggering answer boxes to make sure I have good visibility, since they are converting for me on PPC. To do that, I filter to only show keywords triggering answer boxes. Based on these filters I can look at my keyword table and see most (if not all) of the keywords are “installation” keywords and I don’t see my client’s domain in the top list of competitors. This is now an area of focus for me to start driving organic conversions.

                Wrap up

I’ve only just scratched the surface — there’s tons that can be done with this data inside a tool like Power BI. Having a solid data set of keywords and visuals that I can revisit repeatedly for a client and continuously pull out opportunities to help fuel our strategy is, for me, invaluable. I can work efficiently without having to go back to keyword tools whenever I need an idea. Hopefully you find this makes building an intent-based strategy more efficient and sound for your business or clients.


                Detecting Link Manipulation and Spam with Domain Authority

                Posted by rjonesx.

                Over 7 years ago, while still an employee at Virante, Inc. (now Hive Digital), I wrote a post on Moz outlining some simple methods for detecting backlink manipulation by comparing one’s backlink profile to an ideal model based on Wikipedia. At the time, I was limited in the research I could perform because I was a consumer of the API, lacking access to deeper metrics, measurements, and methodologies to identify anomalies in backlink profiles. We used these techniques in spotting backlink manipulation with tools like Remove’em and Penguin Risk, but they were always handicapped by the limitations of consumer facing APIs. Moreover, they didn’t scale. It is one thing to collect all the backlinks for a site, even a large site, and judge every individual link for source type, quality, anchor text, etc. Reports like these can be accessed from dozens of vendors if you are willing to wait a few hours for the report to complete. But how do you do this for 30 trillion links every single day?

                Since the launch of Link Explorer and my residency here at Moz, I have had the luxury of far less filtered data, giving me a far deeper, clearer picture of the tools available to backlink index maintainers to identify and counter manipulation. While I in no way intend to say that all manipulation can be detected, I want to outline just some of the myriad surprising methodologies to detect spam.

                The general methodology

                You don’t need to be a data scientist or a math nerd to understand this simple practice for identifying link spam. While there certainly is a great deal of math used in the execution of measuring, testing, and building practical models, the general gist is plainly understandable.

                The first step is to get a good random sample of links from the web, which you can read about here. But let’s assume you have already finished that step. Then, for any property of those random links (DA, anchor text, etc.), you figure out what is normal or expected. Finally, you look for outliers and see if those correspond with something important – like sites that are manipulating the link graph, or sites that are exceptionally good. Let’s start with an easy example, link decay.

                Link decay and link spam

                Link decay is the natural occurrence of links either dropping off the web or changing URLs. For example, if you get links after you send out a press release, you would expect some of those links to eventually disappear as the pages are archived or removed for being old. And, if you were to get a link from a blog post, you might expect to have a homepage link on the blog until that post is pushed to the second or third page by new posts.

                But what if you bought your links? What if you own a large number of domains and all the sites link to each other? What if you use a PBN? These links tend not to decay. Exercising control over your inbound links often means that you keep them from ever decaying. Thus, we can create a simple hypothesis:

                Hypothesis: The link decay rate of sites manipulating the link graph will differ from sites with natural link profiles.

                The methodology for testing this hypothesis is just as we discussed before. We first figure out what is natural. What does a random site’s link decay rate look like? Well, we simply get a bunch of sites and record how fast links are deleted (we visit a page and see a link is gone) vs. their total number of links. We then can look for anomalies.

                In this case of anomaly hunting, I’m going to make it really easy. No statistics, no math, just a quick look at what pops up when we first sort by Lowest Decay Rate and then sort by Highest Domain Authority to see who is at the tail-end of the spectrum.
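To make the sort concrete, here’s a back-of-the-napkin version in pandas with toy numbers; the real hunt obviously runs over the full link index:

import pandas as pd

sites = pd.DataFrame({
    "domain": ["site-a.com", "site-b.com", "site-c.com", "site-d.com"],
    "domain_authority": [52, 61, 48, 55],
    "links_deleted": [310, 0, 2280, 290],
    "links_total": [1500, 1200, 2300, 1400],
})

sites["decay_rate"] = sites["links_deleted"] / sites["links_total"]

# Sort by lowest decay rate, then highest DA: a high-DA site with zero
# decay (site-b.com here) is the kind of outlier worth a closer look.
print(sites.sort_values(["decay_rate", "domain_authority"],
                        ascending=[True, False]))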

                spreadsheet of sites with high deleted link ratios

Success! Every example we see of a good DA score but zero link decay appears to be powered by a link network of some sort. This is the “Aha!” moment of data science that is so fun. What is particularly interesting is that we find spam on both ends of the distribution — that is to say, sites with 0 percent decay or near 100 percent decay rates both tend to be spammy. The first type tends to be part of a link network; the second tends to drop its backlinks on pages that others are spamming too, so its links quickly shuffle off to other pages.

                Of course, now we do the hard work of building a model that actually takes this into account and accurately reduces Domain Authority relative to the severity of the link spam. But you might be asking…

                These sites don’t rank in Google — why do they have decent DAs in the first place?

                Well, this is a common problem with training sets. DA is trained on sites that rank in Google so that we can figure out who will rank above who. However, historically, we haven’t (and no one to my knowledge in our industry has) taken into account random URLs that don’t rank at all. This is something we’re solving for in the new DA model set to launch in early March, so stay tuned, as this represents a major improvement on the way we calculate DA!

                Spam Score distribution and link spam

One of the most exciting new additions to the upcoming Domain Authority 2.0 is the use of our Spam Score. Moz’s Spam Score is a link-blind metric (we don’t use links at all) that predicts the likelihood a domain will be penalized or banned by Google. The higher the score, the worse the site.

Now, we could just ignore any links from sites with Spam Scores over 70 and call it a day, but it turns out that common link manipulation schemes leave behind fascinating patterns waiting to be discovered. The methodology is the same simple one as before: use a random sample of URLs to find out what a normal backlink profile looks like, then see if there are anomalies in the way Spam Score is distributed among the backlinks to a site. Let me show you just one.

                It turns out that acting natural is really hard to do. Even the best attempts often fall short, as did this particularly pernicious link spam network. This network had haunted me for 2 years because it included a directory of the top million sites, so if you were one of those sites, you could see anywhere from 200 to 600 followed links show up in your backlink profile. I called it “The Globe” network. It was easy to look at the network and see what they were doing, but could we spot it automatically so that we could devalue other networks like it in the future? When we looked at the link profile of sites included in the network, the Spam Score distribution lit up like a Christmas tree.

                spreadsheet with distribution of spam scores

Most sites get the majority of their backlinks from low Spam Score domains and get fewer and fewer as the Spam Score of the linking domains goes up. But this link network couldn’t hide, because we were able to detect the sites in its network as having quality issues using Spam Score. If we had relied only on ignoring the bad Spam Score links, we would never have discovered this issue. Instead, we found a great classifier for finding sites that are likely to be penalized by Google for bad link building practices.
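As a crude illustration of the distribution check (the bucket shares below are invented, and Moz’s actual classifier is far more involved):

# Share of linking domains per Spam Score bucket, from low scores to high.
normal_profile = [0.55, 0.25, 0.12, 0.06, 0.02]
network_profile = [0.10, 0.15, 0.20, 0.25, 0.30]  # "lit up like a Christmas tree"

def looks_unnatural(profile):
    # A natural profile tapers off as Spam Score rises; any bucket that
    # grows relative to the one before it is a red flag.
    return any(later > earlier for earlier, later in zip(profile, profile[1:]))

print(looks_unnatural(normal_profile))   # False
print(looks_unnatural(network_profile))  # True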

                DA distribution and link spam

                We can find similar patterns among sites with the distribution of inbound Domain Authority. It’s common for businesses seeking to increase their rankings to set minimum quality standards on their outreach campaigns, often DA30 and above. An unfortunate outcome of this is that what remains are glaring examples of sites with manipulated link profiles.

                Let me take a moment and be clear here. A manipulated link profile is not necessarily against Google’s guidelines. If you do targeted PR outreach, it is reasonable to expect that such a distribution might occur without any attempt to manipulate the graph. However, the real question is whether Google wants sites that perform such outreach to perform better. If not, this glaring example of link manipulation is pretty easy for Google to dampen, if not ignore altogether.

spreadsheet with distribution of domain authority

A normal link graph for a site that is not targeting high link equity domains will have the majority of their links coming from DA0–10 sites, slightly fewer for DA10–20, and so on and so forth until there are almost no links from DA90+. This makes sense, as the web has far more low DA sites than high. But all the sites above have abnormal link distributions, which make it easy to detect and correct — at scale — link value.

                Now, I want to be clear: these are not necessarily examples of violating Google’s guidelines. However, they are manipulations of the link graph. It’s up to you to determine whether you believe Google takes the time to differentiate between how the outreach was conducted that resulted in the abnormal link distribution.

                What doesn’t work

                For every type of link manipulation detection method we discover, we scrap dozens more. Some of these are actually quite surprising. Let me write about just one of the many.

                The first surprising example was the ratio of nofollow to follow links. It seems pretty straightforward that comment, forum, and other types of spammers would end up accumulating lots of nofollowed links, thereby leaving a pattern that is easy to discern. Well, it turns out this is not true at all.

                The ratio of nofollow to follow links turns out to be a poor indicator, as popular sites like facebook.com often have a higher ratio than even pure comment spammers. This is likely due to the use of widgets and beacons and the legitimate usage of popular sites like facebook.com in comments across the web. Of course, this isn’t always the case. There are some sites with 100% nofollow links and a high number of root linking domains. These anomalies, like “Comment Spammer 1,” can be detected quite easily, but as a general measurement the ratio does not serve as a good classifier for spam or ham.

                So what’s next?

Moz is continually traversing the link graph, looking for ways to improve Domain Authority using everything from basic linear algebra to complex neural networks. The goal is simple: we want to make the best Domain Authority metric ever, one that users can trust in the long run to root out spam just like Google does (and that helps you determine when you or your competitors are pushing the limits), while at the same time maintaining or improving correlations with rankings. Of course, we have no expectation of rooting out all spam — no one can do that. But we can do a better job. Led by the incomparable Neil Martinsen-Burrell, our metric will stand alone in the industry as the canonical method for measuring the likelihood a site will rank in Google.


                We’re launching Domain Authority 2.0 on March 5th! Check out our helpful resources here, or sign up for our webinar this Thursday, February 21st for more info on how to communicate changes like this to clients and stakeholders:

                Save my spot!
