The Functional Content Masterplan – Own the Knowledge Graph Goldrush with this On-Page Plan

Posted by SimonPenson

[Estimated read time: 17 minutes]

On-page content is certainly not one of the sexier topics in digital marketing.

Lost in the flashing lights of “cool digital marketing trends” and things to be seen talking about, it’s become the poor relative of many a hyped “game-changer.”

I’m here to argue that, in being distracted by the topics that may be more “cutting-edge,” we’re leaving our most valuable assets unloved and at the mercy of underperformance.

This post is designed not only to make it clear what good on-page content looks like, but also how you should go about prioritizing which pages to tackle first based on commercial opportunity, creating truly customer-focused on-page experiences.

What is “static” or “functional” content?

So how am I defining static/functional content, and why is it so important to nurture in 2016? The answer lies in the recent refocus on audience-centric marketing and Google’s development of the Knowledge Graph.

Whether you call your on-page content “functional,” “static,” or simply “on-page” content, they’re all flavors of the same thing: content that sits on key landing pages. These may be category pages or other key conversion pages. The text is designed to help Google understand the relevance of the page and/or help customers with their buying decisions.

Functional content has other uses as well, but today we’re focusing on its use as a customer-focused conversion enhancement and discovery tactic.

And while several years ago it would have been produced simply to aid a relatively immature Google to “find” and “understand,” the focus is now squarely back on creating valuable user experiences for your targeted audience.

Google’s ability to better understand and measure what “quality content” really looks like — alongside an overall increase in web usage and ease-of-use expectation among audiences — has made investment in key pages critical to success on many levels.

We should now be looking to craft on-page content to improve conversion, search visibility, user experience, and relevance — and yes, even as a technique to steal Knowledge Graph real estate.

The question, however, is “how do I even begin to tackle that mountain?”

Auditing what you have

For those with large sites, the task of even beginning to understand where to start with your static content improvement program can be daunting. Even if you have a small site of a couple of hundred pages, the thought of writing content for all of them can be enough to put you off even starting.

As with any project, the key is gathering the data to inform your decision-making before simply “starting.” That’s where my latest process can help.

Introducing COAT: The Content Optimization and Auditing Tool

To help the process along, we’ve been using a tool internally for months — for the first time today, there’s now a version that anyone can use.

This link will take you to the new Content Optimization and Auditing Tool (COAT), and below I’ll walk through exactly how we use it to understand the current site and prioritize areas for content improvement. I’ll also show you the manual step-by-step process, should you wish to take the scenic route.

The manual process

If you enjoy taking the long road — maybe you feel an extra sense of achievement in doing so — then let’s take a look at how to pull the data together to make data-informed decisions around your functional content.

As with any solid piece of analysis, we begin with an empty Excel doc and, in this case, a list of keywords you feel are relevant to and important for your business and site.

In this example, we’ll take a couple of keywords and our own site:

Keywords:

Content Marketing Agency
Digital PR

Site:

http://ift.tt/10MOZUy

Running this process manually is labor-intensive (hence the need to automate it!), and adding dozens more keywords creates a lot of work for little extra knowledge gain; by focusing on a couple, you can see how to build the fuller picture.

Stage one

We start by adding our keywords to our spreadsheet alongside a capture of the search volume for those terms and the actual URL ranking, as shown below (NOTE: all data is for google.co.uk).

Next we add in ranking position…

We then look to the page itself and give each of the key on-page elements a score based on our understanding of best practice. If you want to be really smart, you can score the most important factors out of 20 and the lesser ones out of 10.

In building our COAT tool to enable this to be carried out at scale across sites with thousands of pages, we made a list of many of the key on-page factors we know to affect rank and indeed conversion. They include:

  • URL optimization
  • Title tag optimization and clickability
  • Meta description optimization and clickability
  • H1, H2, and H3 optimization and clickability (as individual scores)
  • Occurrences of keyword phrases within body copy
  • Word count
  • Keyword density
  • Readability (as measured by the Flesch-Kincaid readability score)

This is far from an exhaustive list, but it’s a great place to start your analysis. The example below shows an element of this scored:

Once you have calculated a score for every key factor, your job is then to turn these into a weighted average score out of 100. In this case, you can see I’ve done this across the listed factors to produce a final score for each keyword and URL:
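If you’d rather script this calculation than maintain it by hand in Excel, the weighting step is simple. Below is a minimal Python sketch; the factor names, individual scores, and the 20/10 maxima are illustrative placeholders, not COAT’s actual scoring model.

    # Hypothetical per-factor scores for one keyword/URL pair, stored as
    # (score, maximum) with major factors out of 20 and minor ones out of 10.
    scores = {
        "url": (16, 20),
        "title_tag": (14, 20),
        "meta_description": (12, 20),
        "h1": (15, 20),
        "body_keyword_usage": (7, 10),
        "word_count": (6, 10),
        "keyword_density": (8, 10),
        "readability": (7, 10),
    }

    def weighted_score(scores):
        """Collapse (score, maximum) pairs into one score out of 100."""
        earned = sum(s for s, _ in scores.values())
        possible = sum(m for _, m in scores.values())
        return round(100 * earned / possible, 1)

    print(weighted_score(scores))  # -> 70.8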

Stage two

Once you have scores for a larger number of pages and keywords, it’s then possible to begin organizing your data in a way that helps prioritize action.

You can do this simply enough by using filters and organizing the table by any number of combinations.

You may want to sort by highest search volume and then by those pages ranking between, say, 5th and 10th position.

Doing this enables you to focus on the pages that may yield the most potential traffic increase from Google, if that is indeed your aim.

Working this way makes it much easier to deliver the largest positive net impact fastest.
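If your audit table lives in a CSV rather than Excel, the same prioritization takes a few lines of pandas. The column names and figures below are made up for illustration:

    import pandas as pd

    df = pd.DataFrame({
        "keyword": ["content marketing agency", "digital pr", "seo audit"],
        "search_volume": [1900, 880, 720],
        "rank": [7, 12, 5],
        "content_score": [48, 62, 35],
    })

    # Pages ranking 5th-10th, ordered by volume: the likeliest quick wins.
    quick_wins = (
        df[df["rank"].between(5, 10)]
          .sort_values("search_volume", ascending=False)
    )
    print(quick_wins)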

Doing it at scale

Of course, if you have a large site with tens (or even hundreds) of thousands of pages, the manual option is almost impossible — which is why we scratched our heads and looked for a more effective option. The result was the creation of our Content Optimization and Auditing Tool. Here’s how you can make use of it to paint a fuller picture of your entire site.

Here’s how it works

When it comes to using COAT, you follow a basic process:

  • Head over to the tool.
  • Enter your domain, or a sub-directory of the site if you’d like to focus on a particular section.
  • Add the keywords you want to analyze in a comma-separated list.
  • Click “Get Report,” making sure you’ve chosen the right country.

Next comes the smart bit: adding target keywords to the system before it crawls enables the algorithm to cross-reference all pages against those phrases and then score each combination against a list of critical attributes you’d expect the “perfect page” to have.

Let’s take an example:

You run a site that sells laptops. You enter a URL for a specific model, such as /apple-15in-macbook/, and a bunch of related keywords, such as “Apple 15-inch MacBook” and “Apple MacBook Pro.”

The system works out the best page for those terms and measures the existing content against a large number of known ranking signals and measures, covering everything from title tags and H1s to readability tests such as the Flesch-Kincaid system.
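To make that matching step concrete, here’s a deliberately tiny sketch of the idea. COAT weighs many more signals; this toy version only checks token overlap between each phrase and each URL slug, so treat it as an illustration rather than the tool’s real matching logic.

    pages = ["/apple-15in-macbook/", "/dell-xps-13/", "/laptop-bags/"]
    keywords = ["apple 15-inch macbook", "apple macbook pro"]

    def overlap(keyword, page):
        # Crude relevance: count words shared by the keyword and the URL slug.
        kw_tokens = set(keyword.lower().replace("-", " ").split())
        page_tokens = set(page.strip("/").replace("-", " ").split())
        return len(kw_tokens & page_tokens)

    for kw in keywords:
        best = max(pages, key=lambda p: overlap(kw, p))
        print(kw, "->", best)  # both map to /apple-15in-macbook/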

This outputs a spreadsheet that scores each URL or even categories of URLs (to allow you to see how well-optimized the site is generally for a specific area of business, such as Apple laptops), enabling you to sort the data, discover the pages most in need of improvement, and identify where content gaps may exist.

In a nutshell, it’ll provide:

  • What the most relevant target page for each keyword is
  • How well-optimized individual pages are for their target keywords
  • Where content gaps exist within the site’s functional content

It also presents the top-level data in an actionable way. An example of the report landing page can be seen below (raw CSV downloads are also available — more on that in a moment).

You can see the overall page score and simple ways to improve it. This is for our “Digital PR” keyword:

The output

As we’ve already covered in the manual process example, in addition to pulling the “content quality scores” for each URL, you can also take the data to the next level by adding in other data sources to the mix.

The standard CSV download includes data such as keyword, URL, and scores for the key elements (such as H1, meta, canonical use, and static content quality).

This level of detail makes it possible to create a priority order for fixes based on lowest-scoring pages easily enough, but there are ways you can supercharge this process even more.

The first thing to do is run a simple rankings check using your favorite rank tracker for those keywords and add them into a new column in your CSV. It’ll look a little like this (I’ve added some basic styling for clarity):

I also try to group keywords by adding a third column using a handful of grouped terms. In this example, you can see I’m grouping car model keywords with brand terms manually.

Below, you’ll see how we can then group these terms together in an averaged cluster table to give us a better understanding of where the keyword volume might be from a car brand perspective. I’ve blurred the keyword grouping column here to protect existing client strategy data.

As you can see from the snapshot above, we now have a spreadsheet with keyword, keyword group, search volume, URL, rank, and the overall content score pulled in from the base Excel sheet we have worked through. From this, we can do some clever chart visualization to help us understand the data.
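The averaged cluster table itself is a one-step group-by once those columns are in place. A hedged pandas sketch, with dummy brands and numbers standing in for real client data:

    import pandas as pd

    df = pd.DataFrame({
        "keyword_group": ["subaru", "subaru", "audi", "audi"],
        "search_volume": [1300, 590, 2400, 880],
        "rank": [31, 37, 55, 48],
        "content_score": [52, 46, 28, 33],
    })

    # One row per keyword group: total volume, average rank, average score.
    clusters = (
        df.groupby("keyword_group")
          .agg(total_volume=("search_volume", "sum"),
               avg_rank=("rank", "mean"),
               avg_score=("content_score", "mean"))
          .sort_values("total_volume", ascending=False)
    )
    print(clusters)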

Visualizing the numbers

To really understand where the opportunity lies and to take this process past a simple I’ll-work-on-the-worst-pages-first approach, we need to bring it to life.

This means turning our table into a chart. We’ll utilize the chart functionality within Excel itself.

Here’s an example of the corresponding chart for the table shown above, showing performance by category and ranking correlation. We’re using dummy data here, but you can look at the overall optimization score for each car brand section alongside how well they rank (the purple line is average rank for that category):

If we focus on the chart above, we can begin to see a pattern between those categories that are better optimized and generally have better rankings. Correlation does not always equal causation, as we know, but it’s useful information.

Take the very first column, or the Subaru category. We can see that it’s one of the better-optimized categories (at 49%) and average rank is at 34.1. Now, these are hardly record-breaking positions, but it does point towards the value of well-worked static pages.
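You don’t have to stay inside Excel for this chart, either. A rough matplotlib equivalent of the bars-plus-rank-line view, using dummy data like the chart above:

    import matplotlib.pyplot as plt

    brands = ["Subaru", "Audi", "BMW", "Ford"]
    opt_score = [49, 38, 31, 24]         # average content score, %
    avg_rank = [34.1, 42.0, 55.3, 61.7]  # average rank per category

    fig, ax1 = plt.subplots()
    ax1.bar(brands, opt_score, color="#cccccc")
    ax1.set_ylabel("Average optimization score (%)")

    ax2 = ax1.twinx()    # second y-axis for the rank line
    ax2.plot(brands, avg_rank, color="purple", marker="o")
    ax2.set_ylabel("Average rank")
    ax2.invert_yaxis()   # so better rankings sit higher on the chart

    plt.title("Content score vs. average rank by car brand")
    plt.show()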

Making the categories as granular as possible can be very valuable here, as you can quickly build up a focused picture of where to put your effort to move the needle quickly. The process for doing so is an entirely subjective one, often based on your knowledge of your industry or your site information architecture.

Add keyword volume data into the mix and you know exactly where to build your static content creation to-do list.

Adding in context

Like any data set, however, this one requires a level of benchmarking and context to give you the fullest picture possible before you commit time and effort to the content improvement process.

It’s for this reason that I always look to run the same process on key competitors, too. An example of the resulting comparison charts can be seen below.

The process is relatively straightforward: take an average of all the individual URL content scores, which will give you a “whole domain” score. Add competitors by repeating the process for their domain.

You can take a more granular view manually by following the same process for the grouped keywords and tabulating the result. Below, we can see how our domain sizes up against those same two competitors for all nine of our example keyword groups, such as the car brands example we looked at earlier.
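Scripted, the whole-domain benchmark is just an average of the per-URL scores for each site (the domains and numbers here are placeholders):

    domain_scores = {
        "oursite.com": [52, 46, 61, 39],
        "competitor-a.com": [58, 64, 49, 70],
        "competitor-b.com": [31, 44, 38, 27],
    }

    # One "whole domain" score per site, for side-by-side benchmarking.
    for domain, scores in domain_scores.items():
        print(f"{domain}: {sum(scores) / len(scores):.1f}")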

With that benchmark data in place, you can move on to the proactive improvement part of the process.

The perfect page structure

Having identified your priority pages, the next step is to ensure you edit (or create) them in the right way to maximize impact.

Whereas a few years ago it was all about creating a few paragraphs almost solely for the sake of helping Google understand the page, now we MUST be focused on usability and improving the experience for the right visitor.

This means adding value to the page. To do that, you need to stand back and really focus in on the visitor: how they get to the page and what they expect from it.

This will almost always involve what I call “making the visitor smarter”: creating content that ensures they make better and more informed buying decisions.

To do that requires a structured approach to delivering key information succinctly and in a way that enhances — rather than hinders — the user journey.

The best way of working through what that should look like is to share a few examples of those doing it well:

1. Tredz Top 5 Reviews

Tredz is a UK cycling ecommerce business. They do a great job of understanding what their audience is looking for and ensuring they’re set up to make those visitors smarter. The “Top 5” pages are certainly not classic landing pages, but they’re brilliant examples of how you can sell and add value at the same time.

Below is the page for the “Top 5 hybrids for under £500.” You can clearly see how the URL (http://ift.tt/29eH2DW), meta, H tags, and body copy all support this focus and are consistently aligned:

2. Read it for me

This is a really cool business concept, and they also do great landing pages. You get three clear reasons to try them out — presented cleanly and utilizing several different content types — all in one package.

3. On Stride Financial

Finance may not be where you’d expect to see amazing landing pages, but this is a great example. Not only is it an easy-to-use experience, it answers all the user’s key questions succinctly, starting with “What is an installment loan?” It’s also structured in a way to capture Knowledge Graph opportunity — something we’ll come to shortly.

Outside of examples like these and supporting content, you should be aiming to create impactful headlines, testimonials (where appropriate), directional cues (so it’s clear where to “go next”), and high-quality images to reflect the quality of your product or services.

Claiming Knowledge Graph

There is, of course, one final reason to work hard on your static pages. That reason? To claim a massively important piece of digital real estate: Google Featured Snippets.

Snippets form part of the wider Knowledge Graph, the tangible visualization of Google’s semantic search knowledge base that’s designed to better understand the associations and entities behind words, phrases, and descriptions of things.

The Knowledge Graph comes in a multitude of formats, but one of the most valuable (and attainable from a commercial perspective) is the Featured Snippet, which sits at the top of the organic SERP. An example can be seen below from a search for “How do I register to vote” in google.co.uk:

In recent months, Zazzle Media has done a lot of work on landing page design to capture featured snippets with some interesting findings, most notably the level of extra traffic such a position can achieve.

Having now measured dozens of these snippets, we see an average of 15–20% extra traffic from them versus a traditional position 1. That’s a definite bonus, and makes the task of claiming them extremely worthwhile.

You don’t have to be first

The best news? You don’t even have to be in first position to be considered for a snippet. Our own research shows that almost 75% of the examples we track have been claimed by pages ranked between 2nd and 10th position. The data isn’t yet robust enough for us to formalize a full report, but early indications across more than 900 claimed snippets (heavily weighted to the finance sector at present) support these findings.

Similar research by search data specialists STAT has also supported this theory, revealing that objective words are more likely to appear in snippets. General question and definition words (like “does,” “cause,” and “definition”) as well as financial words (like “salary,” “average,” and “cost”) are likely to trigger a featured snippet. Conversely, the word “best” triggered zero featured snippets in over 20,000 instances.

This suggests that writing in a factual way is more likely to help you claim featured results.

Measuring what you already have

Before you run into this two-footed, you must first audit what you may (or may not) already have. If you run a larger site, you may already have claimed a few snippets by chance, and with any major project it’s important to benchmark before you begin.

Luckily, there are a handful of tools out there to help you discover what you already rank for. My favorite is SEMrush.

The paid-for tool makes it easy to find out if you rank for any featured snippets already. I’d suggest using it to benchmark and then measure the effect of any optimization and content reworking you do as a result of the auditing process.

Claiming Featured Snippets

Claiming your own Featured Snippet then requires a focus on content structure and on answering key questions in a logical order. This also means paying close attention to on-page HTML structure to ensure that Google can easily and cleanly pick out specific answers.

Let’s look at a few examples showing that Google can pick up different types of content for different types of questions.

1. The list

One of the most prevalent examples of Featured Snippets is the list.

As you can see, Media Temple has claimed this incredibly visual piece of real estate simply by creating an article with a well-structured, step-by-step guide to answer the question:

“How do I set up an email account on my iPhone?”

If we look at how the page is formatted, we can see that the URL matches the search almost exactly, while the H1 tag serves to reinforce the relevance still further.

As we scroll down we find a user-friendly approach to the content, with short sentences and paragraphs broken up succinctly into sections.

This allows Google to quickly understand relevance and extract the most useful information to present in search; in this case, the step-by-step how-to process to complete the task.

Here are the first few paragraphs of the article, highlighting key structural elements. Below this is the list itself that’s captured in the above Featured Snippet:
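If you want to sanity-check whether your own pages expose this kind of extractable structure, a few lines of BeautifulSoup will do it. The markup below is hypothetical; the point is simply that the answer lives in a clean H1 plus an ordered list Google can lift wholesale:

    from bs4 import BeautifulSoup

    html = """
    <h1>How to Set Up an Email Account on Your iPhone</h1>
    <ol>
      <li>Open the Settings app.</li>
      <li>Tap Mail, then Accounts.</li>
      <li>Tap Add Account and follow the prompts.</li>
    </ol>
    """

    soup = BeautifulSoup(html, "html.parser")
    print("H1:", soup.find("h1").get_text())
    steps = [li.get_text() for li in soup.select("ol li")]
    print(f"{len(steps)} extractable steps found")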

2. The table

Google LOVES to present tables; clearly there’s something about the logical nature of how the data is presented that resonates with its team of left-brained engineers!

In the example below, we see a site listing countries by size. Historically, this page may well not have ranked so highly (it isn’t usually the page in position one that claims the snippet result), but because it has structured the information so well, Geohive will be enjoying a sizable spike in traffic to the page.

The page itself looks like this — clear, concise, and well-structured:

3. The definition

The final example is the description, or definition snippet; it’s possibly the hardest to claim consistently.

It’s difficult for two key reasons:

  • There will be lots of competition for the space, with many pages answering the search query in prose format.
  • It requires a focus on HTML structure and brilliantly crafted content to win.

In the example below, we can see a very good example of how you should be structuring content pages.

We start with a perfect URL (/what-is-a-mortgage-broker/), and this follows through to the H1 (What is a Mortgage Broker). The author then cleverly uses subheadings to extend the rest of the post into a thorough piece on the subject area. Subheadings include the key How, What, Where, and When areas of focus that any journalism tutor will lecture you on using in any good article or story. Examples might include:

  • So how does this whole mortgage broker thing work?
  • Mortgage brokers can shop the rate for you
  • Mortgage brokers are your loan guide
  • Mortgage broker FAQ

The result is a piece that leaves no stone unturned. Because of this, it’s been shared plenty of times — a surefire signal that the article is positively viewed by readers.

Featured Snippet Cheatsheet

Not being one to leave you to figure this out alone, though, I have created this simple Featured Snippet Cheatsheet, designed to take the guesswork out of creating pages worthy of being selected for the Knowledge Graph.

Do it today!

Thanks for making it this far. My one hope is for you to go off and put this plan into action for your own site. Doing so will quickly transform your approach to both landing pages and to your ongoing content creation plan (but that’s a post for another day!).

And if you do have a go, remember to use the free COAT tool and guides associated with this article to make the process as simple as possible.

Content Optimization and Auditing Tool: Click to access

The Balanced Digital Scorecard: A Simpler Way to Evaluate Prospects

Posted by EmilySmith

[Estimated read time: 10 minutes]

As anyone who’s contributed to business development at an agency knows, it can be challenging to establish exactly what a given prospect needs. What projects, services, or campaigns would actually move the needle for this organization? While some clients come to an agency with specific requests, others are looking for guidance — help establishing where to focus resources. This can be especially difficult, as answering these questions often requires large amounts of information to be analyzed in a small period of time.

To address the challenge of evaluating prospective clients and prioritizing proposed work, we’ve developed the Balanced Digital Scorecard framework. This post is the first in a two-part series. Today, we’ll look at:

  • Why we developed this framework,
  • Where the concept came from, and
  • Specific areas to review when evaluating prospects

Part two will cover how to use the inputs from the evaluation process to prioritize proposed work — stay tuned!

Evaluating potential clients

Working with new clients, establishing what strategies will be most impactful to their goals… this is what makes working at an agency awesome. But it can also be some of the most challenging work. Contributing to business development and pitching prospects tends to amplify this with time constraints and limited access to internal data. While some clients have a clear idea of the work they want help with, this doesn’t always equal the most impactful work from a consultant’s standpoint. Balancing these needs and wants takes experience and skill, but can be made easier with the right framework.

The use of a framework in this setting helps narrow down the questions you need to answer and the areas to investigate. This is crucial to working smarter, not harder — words which we at Distilled take very seriously. Often when putting together proposals and pitches, consultants must quickly establish the past and present status of a site from many different perspectives.

  • What type of business is this and what are their overall goals?
  • What purpose does the site serve and how does it align with these goals?
  • What campaigns have they run and were they successful?
  • What does the internal team look like and how efficiently can they get things done?
  • What is the experience of the user when they arrive on the site?

The list goes on and on, often becoming a vast amount of information that, if not digested and organized, can make putting the right pitch together burdensome.

To help our consultants understand both what questions to ask and how they fit together, we’ve adapted the Balanced Scorecard framework to meet our needs. But before I talk more about our version, I want to briefly touch on the original framework to make sure we’re all on the same page.

The Balanced Scorecard

For anyone not familiar with this concept, the Balanced Scorecard was created by Robert Kaplan and David Norton in 1992. First published in the Harvard Business Review, it was their attempt to create a management system, as opposed to a measurement system (which was more common at that time).

Kaplan and Norton argued that “the traditional financial performance measures worked well for the industrial era, but they are out of step with the skills and competencies companies are trying to master today.” They felt the information age would require a different approach, one that guided and evaluated the journey companies undertook. This would allow them to better create “future value through investment in customers, suppliers, employees, processes, technology, and innovation.”

The concept suggests that businesses be viewed through four distinct perspectives:

  • Innovation and learning – Can we continue to improve and create value?
  • Internal business – What must we excel at?
  • Customer – How do customers see us?
  • Financial – How do we look to shareholders?

Narrowing the focus to these four perspectives reduces information overload. “Companies rarely suffer from having too few measures,” wrote Kaplan and Norton. “More commonly, they keep adding new measures whenever an employee or a consultant makes a worthwhile suggestion.” By limiting the perspectives and associated measurements, management is forced to focus on only the most critical areas of the business.

The image below shows how each perspective relates to the others:

[Image: diagram of the four Balanced Scorecard perspectives and their relationships]

And now, with it filled out as an example:

[Image: an example scorecard with goals and corresponding measures filled in]

As you can see, this gives the company clear goals and corresponding measurements.

Kaplan and Norton found that companies solely driven by financial goals and departments were unable to implement the scorecard, because it required all teams and departments to work toward central visions — which often weren’t financial goals.

“The balanced scorecard, on the other hand, is well suited to the kind of organization many companies are trying to become… put[ting] strategy and vision, not control, at the center,” wrote Kaplan and Norton. This would inevitably bring teams together, helping management understand the connectivity within the organization. Ultimately, they felt that “this understanding can help managers transcend traditional notions about functional barriers and ultimately lead to improved decision-making and problem-solving.”

At this point, you’re probably wondering why this framework matters to a digital marketing consultant. While it’s more directly suited for evaluating companies from the inside, so much of this approach is really about breaking down the evaluation process into meaningful metrics with forward-looking goals. And this happens to be very similar to evaluating prospects.

Our digital version

As I mentioned before, evaluating prospective clients can be a very challenging task. It’s crucial to limit the areas of investigation during this process to avoid getting lost in the weeds, instead focusing only on the most critical data points.

Since our framework is built for evaluating clients in the digital world, we have appropriately named it the Balanced Digital Scorecard. Our scorecard likewise has five main perspectives through which to view the client:

  1. Platform – Does their platform support publishing, discovery, and discoverability from a technical standpoint?
  2. Content – Are they publishing content that strikes an appropriate blend of the effective, informative, entertaining, and compelling?
  3. Audience – Are they building visibility through owned, earned, and paid media?
  4. Conversions – Do they have a deep understanding of the needs of the market, and are they creating assets, resources, and journeys that drive profitable customer action?
  5. Measurement – Are they measuring all relevant aspects of their approach and their prospects’ activities to enable testing, improvement, and appropriate investment?

These perspectives make up the five areas of analysis to work through when evaluating most prospective clients.
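In practice, the output of a review can be as simple as a score per perspective. A minimal sketch, assuming an arbitrary 10-point scale (the numbers are invented, and this isn’t Distilled’s formal scoring method):

    scorecard = {
        "platform": 7,
        "content": 4,
        "audience": 6,
        "conversions": 3,
        "measurement": 5,
    }

    # The lowest-scoring perspectives become candidates for proposed work.
    priorities = sorted(scorecard, key=scorecard.get)
    print("Focus first on:", priorities[:2])  # -> ['conversions', 'content']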

1. Platform

Most consultants or SEO experts have a good understanding of the technical elements to review in a standard site audit. A great list of these can be found on our Technical Audit Checklist, created by my fellow Distiller, Ben Estes. The goal of reviewing these factors is, of course, to “ensure site implementation won’t hurt rankings,” says Ben. While you should definitely evaluate these elements (at a high level), there is more to look into when using this framework.

Evaluating a prospect’s platform does include standard technical SEO factors but also more internal questions, like:

  • How effective and/or differentiated is their CMS?
  • How easy is it for them to publish content?
  • How differentiated are their template levels?
  • What elements are under the control of each team?

Additionally, you should look into areas like social sharing, overall mobile-friendliness, and site speed.

If you’re thinking this seems like quite the undertaking because technical audits take time and some prospects won’t be open with platform constraints, you’re right (to an extent). Take a high-level approach and look for massive weaknesses instead of every single limitation. This will give you enough information to understand where to prioritize this perspective in the pitch.

2. Content

Similar to the technical section, evaluating content looks like a lightweight version of a full content audit. What content do they have, which pieces are awesome, and what is missing? Also look to competitors to understand who is creating content in the space and where the bar is set.

Beyond looking at these elements through a search lens, aim to understand what content is being shared and why. Is this taking place largely on social channels, or are publications picking these pieces up? Evaluating content on multiple levels helps to understand what they’ve created in the past and their audience’s response to it.

3. Audience

Looking into a prospect’s audience can be challenging, depending on how much access they grant you during the pitch process. If you’re able to get access to analytics, this task is much easier; without it, there are many tools you can leverage to get some of the same insights.

In this section, you’re looking at the traffic the site is receiving and from where. Are they building visibility through owned, earned, and paid media outlets? How effective are those efforts? Look at metrics like Search Visibility from SearchMetrics, social reach, and email stats.

A large amount of this research will depend on what information is available or accessible to you. As with previous perspectives, you’re just aiming to spot large weaknesses.

4. Conversion

Increased conversions are often a main goal stated by prospects, but without transparency from them, this can be very difficult to evaluate during a pitch. This means that often you’re left to speculate or use basic approaches. How difficult or simple is it to buy something, contact them, or complete a conversion in general? Are there good calls to action for micro-conversions, such as joining an email list? How different is the mobile experience of this process?

Look at the path to these conversions. Is there a clear funnel, and does it make sense from a user’s perspective? Understanding the journey a user takes (which you can generally experience first-hand) can tell you a lot about expected conversion metrics.

Lastly, many companies’ financials are available to the public and offer a general idea of how the company is doing. If you can establish how much of their business takes place online, you can start to speculate about the success of their web presence.

5. Measurement

Evaluating a prospect’s measurement capabilities is (not surprisingly) vastly more accurate with analytics access. If you’re granted access, evaluate each platform not just for validity but also accessibility. Are there useful dashboards, management data, or other data sources that teams can use to monitor and make decisions?

Without access, you’re left to simply check for the presence of analytics and a data layer. While this doesn’t tell you much, you can often deduce from conversations how much data is a part of the internal team’s thought process. If people are monitoring, engaging with, and interested in analytics data, changes and prioritization might be an easier undertaking.

Final thoughts

Working with prospective clients is something all agency consultants will have to do at some point in their career. This process is incredibly interesting — it challenges you to leverage a variety of skills and a range of knowledge to evaluate new clients and industries. It’s also a daunting task. Often your position outside the organization or unfamiliarity with a given industry can make it difficult to know where to start.

Frameworks like the original Balanced Scorecard created by Kaplan and Norton were designed to help a business evaluate itself from a more modern and holistic perspective. This approach turns the focus to future goals and action, not just evaluation of the past.

This notion is crucial at an agency needing to establish the best path forward for prospective clients. We developed our own framework, the Balanced Digital Scorecard, to help our consultants do just that. By limiting the questions you’re looking to answer, you can work smarter and focus your attention on five perspectives to evaluate a given client. Once you’ve reviewed these, you’re able to identify which ones are lagging behind and prioritize proposed work accordingly.

Next time, we’ll cover the second part: how to use the Balanced Digital Scorecard to prioritize your work.

If you use a framework to evaluate prospects or have thoughts on the Balanced Digital Scorecard, I’d love to hear from you. I welcome any feedback and/or questions!

10 Illustrations of How Fresh Content May Influence Google Rankings (Updated)

Posted by Cyrus-Shepard

[Estimated read time: 11 minutes]

How fresh is this article?

Through patent filings over the years, Google has explored many ways that it might use “freshness” as a ranking signal. Back in 2011, we published a popular Moz Blog post about these “Freshness Factors” for SEO. Following our own advice, this is a brand new update of that article.

In 2003, Google engineers filed a patent named Information retrieval based on historical data that shook the SEO world. The patent not only offered insight into the mind of Google engineers at the time, but also seemingly provided a roadmap for Google’s algorithm for years to come.

In his series on the “10 most important search patents of all time,” Bill Slawski’s excellent writeup shows how this patent spawned an entire family of Google child patents, the latest from October 2011.

This post doesn’t attempt to describe all the ways that Google may determine freshness to rank web pages; instead, it focuses on the areas we’re most likely to influence through SEO.

Giant, great big caveat: Keep in mind that while multiple Google patent filings describe these techniques — often in great detail — we have no guarantee of how Google uses them in its algorithm. While we can’t be 100% certain, evidence suggests Google uses at least some, and possibly many, of these techniques to rank search results.

For another take on these factors, I highly recommend reading Justin Briggs’ excellent article Methods for Evaluating Freshness.

When “Queries Deserve Freshness”

Former Google Fellow Amit Singhal once explained how “Different searches have different freshness needs.”

The implication is that Google measures all of your documents for freshness, then scores each page according to the type of search query.

Singhal describes the types of keyword searches most likely to require fresh content:

  • Recent events or hot topics: “occupy oakland protest” “nba lockout”
  • Regularly recurring events: “NFL scores” “dancing with the stars” “exxon earnings”
  • Frequent updates: “best slr cameras” “subaru impreza reviews”

Google may determine exactly which queries require fresh content by monitoring the web and their own huge warehouse of data, including:

  1. Search volume: Are queries for a particular term spiking (e.g., “Earthquake Los Angeles”)?
  2. News and blog coverage: If a number of news organizations start writing about the same subject, it’s likely a hot topic.
  3. Social media: A spike in mentions of a particular topic may indicate the topic is “trending.”

While some queries need fresh content, other search queries may be better served by older content.

Fresh is often better, but not always. (More on this later.)

Below are ten ways Google may determine the freshness of your content. Images courtesy of my favorite graphic designer, Dawn Shepard.

1. Freshness by inception date

Initially, a web page can be given a “freshness” score based on its inception date, which decays over time. This freshness score may boost a piece of content for certain search queries, but degrades as the content becomes older.

The inception date is often when Google first becomes aware of the document, such as when Googlebot first indexes a document or discovers a link to it.
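The patents don’t publish a formula, but the described behavior (an inception-date boost that decays over time) is easy to picture with a simple exponential decay. This sketch is purely illustrative, not Google’s actual math:

    import math

    def freshness_boost(age_in_days, half_life_days=90):
        # Hypothetical: the boost halves every `half_life_days`.
        return math.exp(-math.log(2) * age_in_days / half_life_days)

    for age in (0, 30, 90, 365):
        print(age, round(freshness_boost(age), 2))
    # 0 -> 1.0, 30 -> 0.79, 90 -> 0.5, 365 -> 0.06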


“For some queries, older documents may be more favorable than newer ones. As a result, it may be beneficial to adjust the score of a document based on the difference (in age) from the average age of the result set.”
– All captions from US Patent Document Scoring Based on Document Content Update

2. Amount of change influences freshness: How much

The age of a webpage or domain isn’t the only freshness factor. Search engines can score regularly updated content for freshness differently from content that doesn’t change. In this case, the amount of change on your webpage plays a role.

For example, changing a single sentence won’t have as big of a freshness impact as a large change to the main body text.


“Also, a document having a relatively large amount of its content updated over time might be scored differently than a document having a relatively small amount of its content updated over time.”

In fact, Google may choose to ignore small changes completely. That’s one reason why when I update a link on a page, I typically also update the text surrounding it. This way, Google may be less likely to ignore the change. Consider the following:

“In order to not update every link’s freshness from a minor edit of a tiny unrelated part of a document, each updated document may be tested for significant changes (e.g., changes to a large portion of the document or changes to many different portions of the document) and a link’s freshness may be updated (or not updated) accordingly.”

3. Changes to core content matter more: How important

Changes made in “important” areas of a document will signal freshness differently than changes made in less important content.

Less important content includes:

  • JavaScript
  • Comments
  • Advertisements
  • Navigation
  • Boilerplate material
  • Date/time tags

Conversely, “important” content often means the main body text.

So simply changing out the links in your sidebar, or updating your footer copy, likely won’t be considered a signal of freshness.


“…content deemed to be unimportant if updated/changed, such as Javascript, comments, advertisements, navigational elements, boilerplate material, or date/time tags, may be given relatively little weight or even ignored altogether when determining UA.”

This brings up the issue of timestamps on a page. Some webmasters like to update timestamps regularly — sometimes in an attempt to fake freshness — but there exists conflicting evidence on how well this works. Suffice to say, the freshness signals are likely much stronger when you keep the actual page content itself fresh and updated.

4. The rate of document change: How often

Content that changes more often is scored differently than content that only changes every few years.

For example, consider the homepage of the New York Times, which updates every day and has a high degree of change.


“For example, a document whose content is edited often may be scored differently than a document whose content remains static over time. Also, a document having a relatively large amount of its content updated over time might be scored differently than a document having a relatively small amount of its content updated over time.”

Google may treat links from these pages differently as well (more on this below). For example, a fresh “link of the day” from the Yahoo homepage may be assigned less significance than a link that remains more permanently.

5. New page creation

Instead of revising individual pages, fresh websites often add completely new pages over time. (This is the case with most blogs.) Websites that add new pages at a higher rate may earn a higher freshness score than sites that add content less frequently.


“UA may also be determined as a function of one or more factors, such as the number of ‘new’ or unique pages associated with a document over a period of time. Another factor might include the ratio of the number of new or unique pages associated with a document over a period of time versus the total number of pages associated with that document.”

Some webmasters advocate adding 20–30% new pages to your site every year. Personally, I don’t believe this is necessary as long as you send other freshness signals, including keeping your content up-to-date and regularly earning new links.

6. Rate of new link growth signals freshness

Not all freshness signals are restricted to the page itself. Many external signals can indicate freshness as well, oftentimes with powerful results.

If a webpage sees an increase in its link growth rate, this could indicate a signal of relevance to search engines. For example, if folks start linking to your personal website because you’re about to get married, your site could be deemed more relevant and fresh (as far as this current event goes).


“…a downward trend in the number or rate of new links (e.g., based on a comparison of the number or rate of new links in a recent time period versus an older time period) over time could signal to search engine 125 that a document is stale, in which case search engine 125 may decrease the document’s score.”

Be warned: an unusual increase in linking activity can also indicate spam or manipulative link building techniques. Search engines are likely to devalue such behavior. Natural link growth over time is usually the best bet.

7. Links from fresh sites pass fresh value

Links from sites that have a high freshness score themselves can raise the freshness score of the sites they link to.

For example, if you obtain a link from an old, static site that hasn’t been updated in years, it may not pass the same level of freshness value as a link from a fresh page, e.g. the homepage of Wired. Justin Briggs coined the term “FreshRank” for this.


“Document S may be considered fresh if n% of the links to S are fresh or if the documents containing forward links to S are considered fresh.”

8. Traffic and engagement metrics may signal freshness

When Google presents a list of search results to users, the results the users choose and how much time they spend on each one can be used as an indicator of freshness and relevance.

For example, if users consistently click a search result further down the list, and they spend much more time engaged with that page than the other results, this may mean the result is more fresh and relevant.


“If a document is returned for a certain query and over time, or within a given time window, users spend either more or less time on average on the document given the same or similar query, then this may be used as an indication that the document is fresh or stale, respectively.”

You might interpret this to mean that click-through rate is a ranking factor, but that’s not necessarily the case. A more nuanced interpretation might say that the increased clicks tell Google there is a hot interest in the topic, and this page — and others like it — happen to match user intent.

For a more detailed explanation of this CTR phenomenon, I highly recommend reading Eric Enge’s excellent article about CTR as a ranking factor.

9. Changes in anchor text may devalue links

If the subject of a web page changes dramatically over time, it makes sense that any new anchor text pointing to the page will change as well.

For example, if you buy a domain about racing cars, then change the format to content about baking, over time your new incoming anchor text will shift from cars to cookies.

In this instance, Google might determine that your site has changed so much that the old anchor text is now stale (the opposite of fresh) and devalue those older links entirely.


“The date of appearance/change of the document pointed to by the link may be a good indicator of the freshness of the anchor text based on the theory that good anchor text may go unchanged when a document gets updated if it is still relevant and good.”

The lesson here is that if you update a page, don’t deviate too much from the original context or you may risk losing equity from your pre-existing links.

10. Older is often better

Google understands the newest result isn’t always the best. Consider a search query for “Magna Carta.” An older, authoritative result may be best here.

In this case, having a well-aged document may actually help you.

Google’s patent suggests they determine the freshness requirement for a query based on the average age of documents returned for the query.


“For some queries, documents with content that has not recently changed may be more favorable than documents with content that has recently changed. As a result, it may be beneficial to adjust the score of a document based on the difference from the average date-of-change of the result set.”

A good way to determine this is to simply Google your search term, and gauge the average inception age of the pages returned in the results. If they all appear more than a few years old, a brand-new fresh page may have a hard time competing.

Freshness best practices

The goal here shouldn’t be to update your site simply for the sake of updating it and hoping for better ranking. If this is your practice, you’ll likely be frustrated with a lack of results.

Instead, your goal should be to update your site in a timely manner that benefits users, with an aim of increasing clicks, user engagement, and fresh links. These are the clearest signals you can pass to Google to show that your site is fresh and deserving of high rankings.

Aside from updating older content, other best practices include:

  1. Create new content regularly.
  2. When updating, focus on core content, and not unimportant boilerplate material.
  3. Keep in mind that small changes may be ignored. If you’re going to update a link, you may consider updating all the text around the link.
  4. Steady link growth is almost always better than spiky, inconsistent link growth.
  5. All other things being equal, links from fresher pages likely pass more value than links from stale pages.
  6. Engagement metrics are your friend. Work to increase clicks and user satisfaction.
  7. If you change the topic of a page too much, older links to the page may lose value.

Updating older content works amazingly well when you also earn fresh links to the content. A perfect example of this is when Geoff Kenyon updated his Technical Site Audit Checklist post on Moz. You can see the before and after results below:


Be fresh.

Be relevant.

Most important, be useful.

Predicting Intent: What Unnatural Outbound Link Penalties Could Mean for the Future of SEO

Posted by Angular

[Estimated read time: 8 minutes]

As SEOs, we often find ourselves facing new changes implemented by search engines that impact how our clients’ websites perform in the SERPs. With each change, it’s important that we look beyond its immediate impact and think about its future implications so that we can try to answer this question: “If I were Google, why would I do that?”

Recently, Google implemented a series of manual penalties that affected sites deemed to have unnatural outbound links. Webmasters of affected sites received messages like this in Google Search Console:

[Image: the unnatural outbound links manual action notice in Google Search Console]

Webmasters were notified in an email that Google had detected a pattern of “unnatural artificial, deceptive, or manipulative outbound links.” The manual action itself described the link as being either “unnatural or irrelevant.”

The responses from webmasters varied in their usual extreme fashion, with recommendations ranging from “do nothing” to “nofollow every outbound link on your site.”

Google’s John Mueller posted in product forums that you don’t need to nofollow every link on your site, but you should focus on nofollowing links that point to a product, sales, or social media page as the result of an exchange.

Now, on to the fun part of being an SEO: looking at a problem and trying to reverse-engineer Google’s intentions to decipher the implications this could have on our industry, clients, and strategy.

The intent of this post is not to decry those opinions that this was specifically focused on bloggers who placed dofollow links on product/business reviews, but to present a few ideas to incite discussion as to the potential big-picture strategy that could be at play here.

A few concepts that influenced my thought process are as follows:

  • Penguin has repeatedly missed its “launch date,” which indicates that Google engineers don’t feel it’s accurate enough to release into the wild.

  • The growth of negative SEO makes it even more difficult for Google to identify/penalize sites for tactics that are not implemented on their own websites.
  • Penguin temporarily impacted link-building markets in a way Google would want. The decline reached its plateau in July 2015, as shown in this graph from Google Trends:
    [Image: Google Trends graph showing interest in “link building” declining to a plateau]

If I were Google, I would expect webmasters impacted by the unnatural outbound links penalty to respond in one of these ways:

  1. Do nothing. The penalty is specifically stated to “discount the trust in links on your site.” As a webmaster, do you really care if Google trusts the outbound links on your site or not? What about if the penalty does not impact your traffic, rankings, visibility, etc.? What incentive do you have to take any action? Even if you sell links, if the information is not publicly displayed, this does nothing to harm your link-selling business.
  2. Innocent site cleanup effort. A legitimate site that has not exchanged goods for links (or wants to pretend they haven’t) would simply go through their site and remove any links that they feel may have triggered the issue and then maybe file a bland reconsideration request stating as much.
  3. Guilty site cleanup effort. A site that has participated in link schemes would know exactly which links are the offenders and remove them. Now, depending on the business owner, some might then file a reconsideration request saying, “I’m sorry, so-and-so paid me to do it, and I’ll never do it again.” Others may simply state, “Yes, we have identified the problem and corrected it.”

In scenario No. 1, Google wins because this helps further the fear, uncertainty, and doubt (FUD) campaigns around link development. It is suddenly impossible to know if a site’s outbound links have value because they may possibly have a penalty preventing them from passing value. So link building not only continues to carry the risk of creating a penalty on your site, but it suddenly becomes more obvious that you could exchange goods/money/services for a link that has no value despite its MozRank or any other external “ranking” metric.

In scenarios No. 2 and No. 3, Google wins because they can monitor the links that have been nofollowed/removed and add potential link scheme violators to training data.

In scenario No. 3, they may be able to procure evidence of sites participating in link schemes through admissions by webmasters who sold the links.

If I were Google, I would really love to have a control group of known sites participating in link schemes to further develop my machine-learned algorithm for detecting link profile manipulation. I would take the unnatural outbound link data from scenario No. 3 above and run those sites as a data set against Penguin to attempt 100% confidence, knowing that all those sites definitely participated in link schemes. Then I would tweak Penguin with this training dataset and issue manual actions against the linked sites.

This wouldn’t be the first time SEOs have predicted a Google subtext of leveraging webmasters and their data to help them further develop their algorithms for link penalties. In 2012, the SEO industry was skeptical regarding the use of the disavow tool and whether or not Google was crowdsourcing webmasters for their spam team.

“Clearly there are link schemes that cannot be caught through the standard algorithm. That’s one of the reasons why there are manual actions. It’s within the realm of possibilities that disavow data can be used to confirm how well they’re catching spam, as well as identifying spam they couldn’t catch automatically. For example, when web publishers disavow sites that were not caught by the algorithm, this can suggest a new area for quality control to look into.” — Roger Montti, Martinibuster.com


What objectives could the unnatural outbound links penalties accomplish?

  1. Legit webmasters could become more afraid to sell/place links because they get “penalized.”
  2. Spammy webmasters could continue selling links from their penalized sites, which would add to the confusion and devaluation of link markets.
  3. Webmasters could become afraid to buy/exchange links because they could get scammed by penalized sites and be more likely to be outed by the legitimate sites.
  4. The Penguin algorithm could have increased confidence scoring and become ready for real-time implementation.


“There was a time when Google would devalue the PR of a site that was caught selling links. With that signal gone, and Google going after outbound links, it is now more difficult than ever to know whether a link acquired is really of value.” — Russ Jones, Principal Search Scientist at Moz

Again, if I were Google, the next generation of Penguin would likely heavily weight irrelevantly placed links, and not just commercial keyword-specific anchor text. Testing this first on the sites I think are guilty of providing the links and simply devaluing those links seems much smarter. Of course, at this point, there is no specific evidence to indicate Google’s intention behind the unnatural outbound links penalties were intended as a final testing phase for Penguin and to further devalue the manipulated link market. But if I were Google, that’s exactly what I would be doing.

“Gone are the days of easily repeatable link building strategies. Acquiring links shouldn’t be easy, and Penguin will continue to change the search marketing landscape whether we like it or not. I, for one, welcome our artificially intelligent overlords. Future iterations of the Penguin algorithm will further solidify the “difficulty level” of link acquisition, making spam less popular and forcing businesses toward legitimate marketing strategies.” — Tripp Hamilton, Product Manager at Removeem.com

Google’s webmaster guidelines show that link schemes are judged by intent. I wonder what happens if I start nofollowing links from my site with the intent of devaluing a site’s rankings? The intent is manipulation. Am I at risk of being considered a participant in link schemes? If I do link building as part of an SEO campaign, am I inherently conducting a link scheme?

Google Webmaster Guidelines for Link Scheme

So, since I’m an SEO, not Google, I have to ask myself and my colleagues, “What does this do to change or reinforce my SEO efforts?” I immediately think back to a Whiteboard Friday from a few years ago that discusses the Rules of Link Building.


“At its best, good link building is indistinguishable from good marketing.” — Cyrus Shepard, former Content Astronaut at Moz

When asked what type of impact SEOs should expect from this, Garrett French from Citation Labs shared:


“Clearly this new effort by Google will start to dry up the dofollow sponsored post, sponsored review marketplace. Watch for prices to drop over the next few months and then go back and test reviews with nofollowed links to see which ones actually drive converting traffic! If you can’t stomach paying for nofollowed links then it’s time to get creative and return to old-fashioned, story-driven blog PR. It doesn’t scale well, but it works well for natural links.”

In conclusion, as SEOs, we are responsible for predicting the future of our industry; we do not simply act in the present. Google does not wish for its results to be gamed, and it has departments full of data scientists dedicated to building algorithms that identify and devalue manipulative practices. If you are incapable of legitimately building links, then you must mimic legitimate links in all aspects (or consider a new career).

Takeaways

Most importantly, any links we build must provide value. If a link points to a landing page that is not contextually relevant to its source page, that irrelevant link is likely to be flagged and devalued. Remember, Google can do topical analysis, too.

In link cleanup mode or Penguin recovery, we’ve typically treated a link as obviously unnatural when it uses a commercial keyword (e.g., “insurance quotes”) as anchor text, because natural links tend to use the URL, brand, or navigational labels instead. It’s also safe to assume that natural links tend to occur in content related to the destination they point to, so link relevance should be considered as well.

Finally, we should continue to identify and present clients with methods for naturally building authority by providing value in what they offer and working to build real relationships and brand advocates.

What are your thoughts? Do you agree? Disagree?


Long Tail SEO: When & How to Target Low-Volume Keywords – Whiteboard Friday

Posted by randfish

The long tail of search can be a mysterious place to explore, often lacking the volume data that we usually rely on to guide us. But the keyword phrases you can uncover there are worth their weight in gold, often driving highly valuable traffic to your site. In this edition of Whiteboard Friday, Rand delves into core strategies you can use to make long tail keywords work in your favor, from niche-specific SEO to a bigger content strategy that catches many long tail searches in its net.


Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about long tail SEO.

Now, for those of you who might not be familiar, there’s basically a demand curve in the search engine world. Lots and lots of searchers are searching for very popular keywords in the NBA world like “NBA finals.” Then we have a smaller number of folks who are searching for “basketball hoops,” but it’s still pretty substantial, right? Probably hundreds to thousands per month. Then maybe there are only a few dozen searches a month for something like “Miami Heat box ticket prices.”

Then we get into the very long tail, where there are one, two, maybe three searches a month, or maybe not even. Maybe it’s only a few searches per year for something like “retro Super Sonics customizable jersey Seattle.”

Now, it’s pretty tough to do keyword research anywhere in this long tail region. The long tail is almost a mystery to us because the search engines themselves don’t get enough volume to show these terms in a tool like AdWords or in Bing’s research. Even Search Suggest or related searches will often not surface these kinds of terms and phrases; they just don’t get enough volume. But for many businesses, and yours may be one of them, these keywords are actually quite valuable.

2 ways to think about long tail keyword targeting

#1: I think that there’s this small set of hyper-targeted, specific keyword terms and phrases that are very high value to my business. I know they’re not searched for very much, maybe only a couple of times a month, maybe not even that. But when they are, if I can drive the search traffic to my website, it’s hugely valuable to me, and therefore it’s worth pursuing a handful of these. A handful could be half a dozen, or it could be in the small hundreds of terms that you decide are worth going after even though they have a very small number of keyword searches. Remember, if we were to build 50 landing pages targeting terms that only get one or two searches a month, we still might get a hundred or a couple hundred searches every year coming to our site that are super valuable to the business. So when we’re doing this hyper-specific targeting, these terms in general need to be…

  • Conversion-likely, meaning that we know we’re going to convert those searchers into buyers if we can get them or searchers into whatever we need them to do.
  • They should be very low competition, because not a lot of people know about these keywords. There’s not a bunch of sites targeting them already. There are no keyword research tools out there that are showing this data.
  • It should be a relatively small number of terms that we’re targeting. Like I said, maybe a few dozen, maybe a couple hundred, generally not more than that.
  • We’re going to try and build specifically optimized pages to turn those searchers into customers or to serve them in whatever way we need.

#2: The second way is to have a large-scale sort of blast approach, where we’re less targeted with our content, but we’re covering a very wide range of keyword targets. This is what a lot of user-generated content sites, large blogs, and large content sites are doing with their work. Maybe they’re doing some specific keyword targeting, but they’re also kind of trying to reach this broad group of long tail keywords that might be in their niche. It tends to be the case that there’s…

  • A ton of content being produced.
  • It’s less conversion-focused in general, because we don’t know the intent of all these searchers, particularly on the long tail terms.
  • We are going to be targeting a large number of terms here.
  • There are no specific keyword targets available. So, in general, we’re focused more on the content itself and less on the specificity of that keyword targeting.

Niche + specific long tail SEO

Now, let’s start with the niche and specific. The way I’m going to think about this is I might want to build these pages — my retro Super Sonics jerseys that are customizable — with my:

  • Standard on-page SEO best practices.
  • I’m going to do my smart internal linking.
  • I really don’t need very many external links. One or two will probably do it. In fact, a lot of times, when it comes to long tail, you can rank with no external links at all, internal links only.
  • Quality content investment is still essential. I need to make sure that this page gets indexed by Google, and it has to do a great job of converting visitors. So it’s got to serve the searcher intent. It can’t look like automated content, it can’t look low quality, and it certainly can’t dissuade visitors from coming, because then I’ve wasted all the investment that I’ve made getting that searcher to my page. Especially since there are so few of them, I better make sure this page does a great job.

A) PPC is a great way to go. You can do a broad-term PPC buy in AdWords or in Bing, and then discover these hyper-specific opportunities. So if I’m buying keywords like “customizable jerseys,” I might see that, sure, most of them are for teams and sports that I’ve heard of, but there might be some that come to me that are very, very long tail. This is actually a reason why you might want to do those broad PPC buys for discovery purposes, even if the ROI isn’t paying off inside your AdWords campaign. You look and you go, “Hey, it doesn’t pay to do this broad buy, but every week we’re discovering new keywords for our long tail targeting that does make it worthwhile.” That can be something to pay attention to.

B) You can use some keyword research tools, just not AdWords itself, because AdWords is biased toward showing you more commercial terms, and toward terms and phrases that actually have search volume. What you want to do is find keyword research tools that can show you keywords with zero searches, no search volume at all. So you could use something like Moz’s Keyword Explorer. You could use KeywordTool.io. You could use Übersuggest. You could use some of the keyword research tools from the other providers out there, like a Searchmetrics or what have you. With all of these, what you want to find are those 0–10 searches keywords, because those are going to be the ones that have very, very little volume but are potentially super high-value for your specific website or business.

C) Be aware that the keyword difficulty scores may not actually be that useful in these cases. Keyword difficulty scores — this is true for Moz’s keyword difficulty score and for all the other tools that do keyword difficulty — what they tend to do is they look at a search result and then they say, “How many links or how high is the domain authority and page authority or all the link metrics that point to these 10 pages?” The problem is in a set where there are very few people doing very specific keyword targeting, you could have powerful pages that are not actually optimized at all for these keywords that aren’t really relevant, and therefore it might be much easier than it looks like from a keyword difficulty score to rank for those pages. So my advice is to look at the keyword targeting to spot that opportunity. If you see that none of the 10 pages actually includes all the keywords, or only one of them seems to actually serve the searcher intent for these long tail keywords, you’ve probably found yourself a great long tail SEO opportunity.

Large-scale, untargeted long tail SEO

This is very, very different in approach. It’s going to be for a different kind of website, different application. We are not targeting specific terms and phrases that we’ve identified. We’re instead saying, “You know what? We want to have a big content strategy to own all types of long tail searches in a particular niche.” That could be educational content. It could be discussion content. It could be product content, where you’re supporting user-generated content, those kinds of things.

  • I want a bias to the uniqueness of the content itself and real searcher value, which means I do need content that is useful to searchers, useful to real people. It can’t be completely auto-generated.
  • I’m worrying less about the particular keyword targeting. I know that I don’t know which terms and phrases I’m going to be going after. So instead, I’m biasing to other things, like usefulness, amount of uniqueness of content, the quality of it, the value that it provides, the engagement metrics that I can look at in my analytics, all that kind of stuff.
  • You want to be careful here. Anytime you’re doing broad-scale content creation or enabling content creation on a platform, you’ve got to keep low-value, low-uniqueness content pages out of Google’s index. That can be done in two ways. One, you limit the system to only allow a certain amount of content in before a page can even be published. Or you look at the quantity of content that’s being created, or the engagement metrics from your analytics, and you essentially block — via robots.txt or via the meta robots tag — any of the pages that look like low-value, low-uniqueness content (see the sketch after this list).
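Here’s a minimal sketch of that blocking step. The page and path shown are hypothetical; which pages you flag is driven by your own quality thresholds:

```html
<!-- On a page judged low-value: keep it out of Google's index while
     still letting crawlers follow its links. -->
<meta name="robots" content="noindex, follow">

<!-- The robots.txt alternative blocks crawling of a whole section
     (hypothetical path shown):
     User-agent: *
     Disallow: /thin-content-section/
-->
```

Note the difference: robots.txt stops the pages being crawled at all, while meta robots lets them be crawled but keeps them out of the index.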

A) This approach requires a lot of scalability, and so you need something like a:

  • Discussion forum
  • Q&A-style content
  • User-posted product or service or business listings. Think something like an Etsy or a GitHub or a Moz Q&A, discussion forums like Reddit. These all support user-generated content.
  • You can also go with non-UGC content if it’s editorially created. Something like a frequently updated blog or news content can also work, particularly if you have enough staff to create that content consistently so that you’re pumping out good stuff on a regular basis. It’s generally not as scalable, but you have to worry less about the uniqueness and quality of the content.

B) You don’t want to fully automate this system. The worst thing you can possibly do is take a site that has been doing well, pump out hundreds, thousands, or tens of thousands of low-quality, low-uniqueness pages, and throw them up on the site. Google can hit you with something like the Panda penalty, which has happened to a lot of sites that we’ve seen over the years. They continue to iterate and refine that algorithm, so be very cautious. You need some human curation in order to make sure the uniqueness of content and value remain above the level you need.

C) If you’re going to be doing this large-scale content creation, I highly advise you to make the content management system or the UGC submission system work in your favor. Make it do some of that hard SEO legwork for you (there’s a rough sketch of this after the list), things like…

  • Nudging users to give more descriptive, more useful content when they’re creating it for you.
  • Require some minimum level of content in order to even be able to post it.
  • Use spam software to be able to catch and evaluate stuff before it goes into your system. If it has lots of links, if it contains poison keywords, spam keywords, kick it out.
  • Encourage and reward the high-quality contributions. If you see users or content that is consistently doing well through your engagement metrics, go find out who those users were, go reward them. Go promote that content. Push that to higher visibility. You want to make this a system that rewards the best stuff and keeps the bad stuff out. A great UGC content management system can do this for you if you build it right.
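Pulling those last few bullets together, here’s a minimal sketch of the kind of pre-publish gate a UGC system might run. The thresholds, spam keyword list, and function name are all hypothetical, not drawn from any particular platform:

```javascript
// Hypothetical pre-publish checks for a UGC submission system.
// Thresholds and the spam keyword list are illustrative only.
const MIN_CONTENT_LENGTH = 300;   // require some minimum level of content
const MAX_LINKS = 3;              // lots of links is a spam signal
const SPAM_KEYWORDS = ['cheap pills', 'casino bonus']; // poison keywords

function isPublishable(submission) {
  const text = submission.body.trim();

  // Require a minimum amount of content before the page can go live.
  if (text.length < MIN_CONTENT_LENGTH) return false;

  // Kick out submissions stuffed with links.
  const linkCount = (text.match(/https?:\/\//g) || []).length;
  if (linkCount > MAX_LINKS) return false;

  // Kick out submissions containing known spam keywords.
  const lowered = text.toLowerCase();
  if (SPAM_KEYWORDS.some(function (kw) { return lowered.includes(kw); })) {
    return false;
  }

  return true; // passed the basic quality gate
}
```

Submissions that fail the gate could be held for human review rather than deleted, which keeps the human curation in the loop.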

All right, everyone, look forward to your thoughts on long tail SEO, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Diving for Pearls: A Guide to Long-Tail Keywords – Next Level

Posted by jocameron

[Estimated read time: 15 minutes]

Welcome to the fifth installment of our educational Next Level series! Last time, we led you on a journey to perform a heroic site audit. This time around we’re diving into the subject of long tail keywords, equipping you with all the tools you’ll need to uncover buried treasure.


One of the biggest obstacles to driving forward your business online is being able to rank well for keywords that people are searching for. Getting your lovely URLs to show up in those precious top positions — and gaining a good portion of the visitors behind the searches — can feel like an impossible dream.

Particularly if you’re working on a newish site on a modest budget within a competitive niche.

Well, strap yourself in, because today we’re going to live that dream. I’ll take you through the bronze, silver, and gold levels of finding, assessing, and targeting long tail keywords so you can start getting visitors to your site that are primed and ready to convert.

So what the bloomin’ heck are long tail keywords?

The ‘long tail of search’ refers to the many weird and wonderful ways the diverse people of the world search for what they’re after in any given niche.

People (yes, people! Shiny, happy, everyday, run-of-the-mill, muesli-eating, bogie-picking, credit-card-toting people!) don’t just stop searching broad and generic ‘head’ keywords like “web design” or “camera” or “sailor moon”.

They clarify their search with emotional triggers and technical terms they’ve learned from reading forums, and they compare features and prices before mustering up the courage to commit and convert on your site.

The long tail is packed with searches like “best web designer in Nottingham” or “mirrorless camera 4k video 2016” or “sailor moon cat costume.”

This lovely chart visualizes the long tail of search by using the tried and tested “Internet loves cats + animated gifs are the coolest = SUCCESS” formula.

All along that tail are searches being constantly generated by people seeking answers from the Internet hive mind. There’s no end to what you’ll find if you have a good old rummage about, including questions, styles, colors, brands, concerns, peeves, desires, hopes, dreams… and everything in between.

Fresh, new, outrageous, often bizarre keywords. If you’ve done any keyword research you’ll know what I mean by bizarre. Things a person wouldn’t admit to their therapist, priest, or doctor they’ll happily pump into Google and hit search. And we’re going to go diving for pearls: keywords with searcher intent, high demand, low competition, and a spot on the SERPs just for you.

Bronze medal: Build your own keyword

It’s really easy to come up with a long tail keyword. You can use your brain, gather some thoughts, take a stab in the dark, and throw a few keyword modifiers around your head keyword.

Have you ever played with that magnetic fridge poetry game? It’s a bit like that. You can play online if (like me) you have an aversion to physical things.

I’m no poet, but I think I deserve a medal for this attempt, and now I really want some “hot seasonal berry water.”

Magnetic poetry not doing it for you? Don’t worry — that’s only the beginning.

Use your industry knowledge

Time to draw on that valuable industry knowledge you’ve been storing up, jot down some ideas, and think about intent and common misconceptions. I’m going to use “pearls” or “freshwater pearls” as the head term in this post, because that’s something I’m interested in.

Let’s go!

How do I clean freshwater pearls

Ok, cool, adding to my list.

Search your keyword

Now you can get some more ideas by manually entering your keyword into Google and prompting it to give you popular suggestions, like I’ve done below:

Awesome, I’m adding “freshwater pearls price” to my list.

Explore the language of social media

Get amongst the over-sharers and have a look at what people are chatting about on social media by searching your keyword in Twitter, Instagram, and YouTube. These are topics in your niche that people are talking about right now.

Twitter and Instagram are proving tricky to explore for my head term because it’s jam-packed with people selling pearl jewelry.

Shout out to a cheeky Moz tool, Followerwonk, for helping with this stage. I’m searching Twitter bios to find Twitter accounts with “freshwater pearls.”

Click these handy little graph icons for a more in-depth profile analysis

I can now explore what they’re tweeting, I can follow them and find out who is engaging with them, and I can find their most important tweets. Pretty groovy!

YouTube is also pulling up some interesting ideas around my keyword. This is simultaneously helping me gather keyword ideas and giving me a good sense about what content is already out there. Don’t worry, we’ll touch on content later on in this post. 🙂

I’m adding “understanding types of pearls” and “difference between saltwater and freshwater pearls” to my list.

Ask keyword questions?

You’ll probably notice that I’ve added a question mark to a phrase that is not a question, just to mess with you all. Apologies for the confusing internal-reading-voice-upwards-inflection.

Questions are my favorite types of keywords. What!? You don’t have a fav keyword type? Well, you do now — trust me.

Answer the Public is packed with questions, and it has the added bonus of having this tooth-picking (not bogie-picking, thank goodness!) dude waiting for you to impress him.

So let’s give him something to munch on and pop freshwater pearls in there, too, then grab some questions for our growing list.

To leave no rock unturned (or no mollusk unshucked), let’s pop over to Google Search Console to find keywords that are already sending you traffic (and discover any mismatches between your content and user intent).

Pile these into a list, like I’ve done in this spreadsheet.

Now this is starting to look interesting: we’ve got some keyword modifiers, some clear buying signals, and a better idea of what people might be looking for around “freshwater pearls.”

Should you stop there? I’m flabbergasted — how can you even suggest that?! This is only the beginning. 🙂

Silver medal: Assess demand and explore topics

So far, so rosy. But we’ve been focusing on finding keywords, picking them up, and stashing them in our collection like colored glass at the seaside.

To really dig into the endless tail of your niche, you’ll need a keyword tool like our very own Keyword Explorer (KWE for short). It’s invaluable for finding topics within your niche that present a real opportunity for your site.

If you’re trying out KWE for the first time, you get 2 searches free per day without having to log in, but you get a few more with your Community account and even more with a Moz Pro subscription.

Find search volume for your head keyword

Let’s put “pearls” into KWE. Now you can see how many times it’s being searched per month in Google:

Now try “freshwater pearls.” As expected, the search volume goes down, but we’re getting more specific.

We could keep going like this, but we’re going to burn up all our free searches. Just take it as read that, as you get more specific and enter all the phrases we found earlier, the search volume will decrease even more. There may not be any data at all. That’s why you need to explore the searches around this main keyword.

Find even more long tail searches

Below the search volume, click on “Keyword Suggestions.”

Well, hi there, ever-expanding long tail! We’ve gone from a handful of keywords pulled together manually from different sources to 1,000 suggestions right there on your screen. Positioned right next to that we have search volume to give us an idea of demand.

The diversity of searches within your niche is just as important as that big number we saw at the beginning, because it shows you how much demand there is for this niche as a whole. We’re also learning more about searcher intent.

I’m scanning through those 1,000 suggestions and looking for other terms that pop up again and again. I’m also looking for signals and different ways the words are being used to pick out words to expand my list.

I like to toggle between sorting by relevancy and search volume, and then scroll through all the results to cherry-pick those that catch my eye.

Now reverse the volume filter so that it’s showing lower-volume search terms and scroll down through the end of the tail to explore the lower-volume chatter.

This is where your industry knowledge comes into play again. Bots, formulas, spreadsheets, and algorithms are all well and good, but don’t discount your own instincts and knowledge.

Use the suggestions filters to your advantage and play around with broader or more specific suggestion types. Keyword Explorer pulls together suggestions from AdWords, autosuggest, related searches, Wikipedia titles, topic modeling extractions, and SERPscape.

Looking through the suggestions, I’ve noticed that the word “cultured” has popped up a few times.

To see these all bundled together, I want to look at the grouping options in KWE. I like the high lexicon groups so I can see how much discussion is going on within my topics.

Scroll down and expand that group to get an idea of demand and assess intent.

I’m also interested in the words around “price” and “value,” so I’m doing the same and saving those to my sheet, along with the search volume. A few attempts at researching the “cleaning” of pearls wasn’t very fruitful, so I’ve adjusted my search to “clean freshwater pearls.”

Because I’m a keyword questions fanatic, I’m also going to filter by questions (the bottom option from the drop-down menu):

OK! How is our list looking? Pretty darn hot, I reckon! We’ve gathered together a list of keywords and dug into the long tail of these sub-niches, and right alongside we’ve got search volume.

You’ll notice that some of the keywords I discovered in the bronze stage don’t have any data showing up in KWE (indicated by the hyphen in the screenshot above). That’s ok — they’re still topics I can research further. This is exactly why we have assessed demand; no wild goose chase for us!

Ok, we’re drawing some conclusions, we’re building our list, and we’re making educated decisions. Congrats on your silver-level keyword wizardry! 😀

Gold medal: Find out who you’re competing with

We’re not operating in a vacuum. There’s always someone out there trying to elbow their way onto the first page. Don’t fall into the trap of thinking that just because it’s a long tail term with a nice chunk of search volume all those clicks will rain down on you. If the terms you’re looking to target already have big names headlining, this could very well alter your roadmap.

To reap the rewards of targeting the long tail, you’ll have to make sure you can outperform your competition.

Manually check the SERPs

Check out who’s showing up in the search engine results pages (SERPs) by running a search for your head term. Make sure you’re signed out of Google and in an incognito tab.

We’re focusing on the organic results to find out if there are any weaker URLs you can pick off.

I’ll start with “freshwater pearls” for illustrative purposes.

Whoooaaa, this is a noisy page. I’ve had to scroll a whole 2.5cm on my magic mouse (that’s very nearly a whole inch for the imperialists among us) just to see any organic results.

Let’s install the MozBar to discover some metrics on the fly, like domain authority and backlink data.

Now, if seeing those big players in the SERPs doesn’t make it clear, looking at the Mozbar metrics certainly does. This is exclusive real estate. It’s dominated by retailers, although Wikipedia gets a place in the middle of the page.

Let’s get into the mind of Google for a second. It — or should I say “they” (I can’t decide if it’s more creepy for Google to be referred to as a singular or plural pronoun. Let’s go with “they”) — anyway, I digress. “They” are guessing that we’re looking to buy pearls, but they’re also offering results on what they are.

This sort of information is offered up by big retailers who have created content that targets the intention of searchers. Mikimoto drives us to their blog post all about where freshwater pearls come from.

As you get deeper into the long tail of your niche, you’ll begin to come across sites you might not be so familiar with. So go and have a peek at their content.

With a little bit of snooping you can easily find out:

  • how relevant the article is
  • if it looks appealing, up to date, and sharable
  • be really judge-y: why not?

Now let’s find some more:

  • when the article was published
  • when their site was created
  • how often their blog is updated
  • how many other sites are linking to the page with Open Site Explorer
  • how many tweets, likes, etc.

You can also pop your topic into Moz Content to see how other articles are performing in your niche. I talk about competitor analysis a bit more in my Bonnie Tyler Site Audit Manifesto, so check it out.

Document all of your findings in our spreadsheet from earlier to keep track of the data. This information will now inform you of your chances of ranking for that term.

Manually checking out your competition is something that I would strongly recommend. But we don’t have all the time in the world to check each results page for each keyword we’re interested in.

Keyword Explorer leaps to our rescue again

Run your search and click on “SERP Analysis” to see what the first page looks like, along with authority metrics and social activity.

All the metrics for the organic results, like Page Authority, go into calculating the Difficulty score above (lower is better).

And all those other factors — the ads and suggestions taking up space on the SERPs — that’s what’s used to calculate Opportunity (higher is better).

Potential is all of the other metrics tallied up into a single score. You definitely want this to be higher.

So now we have 3 important numerical values we can use to gauge our competition. We can use these values to compare keywords.

After a few searches in KWE, you’re going to start hankering for a keyword list or two. For this you’ll need a paid subscription, or a Moz Pro 30-day free trial.

It’s well worth the sign-up: not only do you get 5,000 keyword reports per month and 30 lists (on the Medium plan), but you also get to check out the super-magical-KWE-mega-list-funky-cool metric page. That’s what I call it; it just rolls off the tongue, you know?

Ok, fellow list buddies, let’s go and add those terms we’re interested in to our lovely new list.

Then head up to your lists on the top right and open up the one you just created.

Now we can see the spread of demand, competition, and SERP features for our whole list.

You can compare Volume, SERPS, Difficulty, Opportunity, and Potential across multiple lists, topics, and niches.

How to compare apples with apples

Comparing keywords is something we get asked about quite a bit on the Moz Help Team.

Should I target this word or that word?

For the long tail keyword, the Volume is a lot lower, Difficulty is also down, the Opportunity is a bit up, and overall the Potential is down because of the drop in search volume.

But don’t discount it! By targeting these sorts of terms, you’re focusing more on the intent of the searcher. You’re also making your content relevant for all the other neighboring search terms.

Let’s compare “difference between freshwater and cultured pearls” with “how much are freshwater pearls worth.”

Search volume is the same, but for the keyword “how much are freshwater pearls worth,” Difficulty is up, and so is the overall Potential, because the Opportunity is higher.

But just because you’re picking between two long tail keywords doesn’t mean you’ve fully understood the long tail of search.

You know all those keywords I grabbed for my list earlier in this post? Well, here they are sorted into topics.

Look at all the different ways people search for kind of the same thing. This is what drives the long tail of search — searcher diversity. If you tally all the volume up for the cultured topic, we’ve got a bigger group of keywords and overall more search volume. This is where you can use Keyword Explorer and the long tail to make informed decisions.

You’re laying out your virtual welcome mat for all the potential traffic these terms send.

Platinum level: I lied — there’s one more level!

For all you lovely overachievers out there who have reached the end of this post, I’m going to reward you with one final tip.

You’ve done all the snooping around on your competitors, so you know who you’re up against. You’ve done the research, so you know what keywords to target to begin driving intent-rich traffic.

Now you need to create strong, consistent, and outstanding content. For the best explanation on how and why you must do this, you can’t go past Rand’s 10x Whiteboard Friday.

Here’s where you really have to tip your hat to long tail keywords, because by targeting the long tail you can start to build enough authority in the industry to beat stronger competition and rank higher for more competitive keywords in your niche.

Wrapping up…

The keyword phrases that make up the long tail of your industry are vast in number, often easier to rank for, and indicative of stronger intent from the searcher. By targeting them you’ll find you can start to rank for relevant phrases sooner than if you just targeted the head. And over time, if you get the right signals, you’ll be able to rank for keywords with tougher competition. Pretty sweet, huh? Give our Keyword Explorer tool a whirl and let me know how you get on 🙂


Context is King: A Million Examples of Creative Ad Campaigns Getting it Right

Posted by Daniel_Marks

[Estimated read time: 6 minutes]

This was one of the first television commercials to ever air:

Talking to the camera on a mic was the obvious way to leverage television: after all, that’s how radio commercials worked. Now, advertisers could just put radio commercials on television. What an exciting new advertising medium!

As it turns out, putting radio commercials on television wasn’t really the best use of this new medium. Sound familiar? This seems awfully similar to the current practice of turning your television commercial into a YouTube pre-roll ad. However, the difference this time isn’t the media format, which is largely similar (YouTube videos are still video, banner ads are still text + image, podcast sponsorships are still voice, etc.). Instead, the difference is how people consume the content; in other words, the context.

A television commercial is a relatively static experience: 30 seconds of video placed within a few appropriate time slots, reaching people in their living room (or possibly bedroom). A Facebook newsfeed ad is a little more dynamic: it can be seen anywhere (home, office, bus, etc.), at anytime, by anyone, in almost any format and next to almost any content. The digital age has basically exacerbated the “problem” of context by offering up a thousand different ways for consumers to interact with your marketing.

But, with great problems comes great opportunity — or something like that. So, what are some ways to leverage context in the digital age?

Intent context

Different channels have different user intents. On one end of the funnel are channels like Facebook and Snapchat that are great fillers of the empty space in our lives. This makes them well-suited for top-of-funnel brand advertising because you aren’t looking for something specific and are therefore more receptive to brand messaging (though you can certainly use Facebook for direct marketing purposes).

BuzzFeed, for example, has done a great job of tailoring their Snapchat content to the intent of the channel — it’s about immediate gratification, not driving off-channel behaviors:

This feels like you’re watching your friend’s Snapchat story, not professionally produced branded content. However, it’s still the early days for Snapchat — all companies, including BuzzFeed, are trying to figure out what kind of content makes sense for their goals.

As for Facebook, there are plenty of examples of doing brand awareness right, but one of the more famous ones is by A1 Steak Sauce. It was both set and promoted (in part) on Facebook:

Critically, the video works with or without sound.

On the other end of the funnel is something like AdWords: great when you know what you’re looking for, not so great when you don’t. This subway ad for health insurance from Oscar feels pretty out of place when you use the same copy for AdWords:

Getting intent right means you need to actually experience your ad as a user would. It’s not enough to put a bunch of marketers together in a conference room and watch the YouTube ad you created. You need to feel the ad as a user would: in the living room, having just clicked a friend’s YouTube link from Facebook to watch a soccer highlight (or whatever).

Situational context

Situational context (is that redundant?) can be leveraged with a whole range of strategies, but the overarching theme is the same: make users feel like the ad they’re seeing is uniquely built for their current situation. It’s putting a YouTube star in pre-roll ads on their own channel, or quickly tweeting something off the back of a current event:


…or digital experiences that are relevant to the sporting event a user is watching:

There are thousands of examples of doing this right.

Behavioral context

You might want people on Facebook to watch your video with sound, but the reality is that 85% of Facebook video views are silent. You might want people to watch your brilliant one-minute YouTube ad, but the reality is that 94% of users skip an ad after 5 seconds. You need to embrace user behaviors instead of railing against them, like these smart people:

  • Wells Fargo creates a Facebook version of their television ad: http://ift.tt/1SThs0Z

    The important takeaways are making it short, having captions to make it understandable without sound, and putting the brand mention earlier in the video.

  • Geico makes an “unskippable” 5 second YouTube ad:

    How do you reach people who skip your commercial after 5 seconds? Make the ad 5 seconds long!

Understanding channel behaviors means not using channel features for the sake of channel features while still taking advantage of behaviors that allow for richer ad experiences. It means using the channel yourself, looking up the relevant research, talking to experts, and making informed decisions about how people will actually engage with your creative work.

Location context

A user’s location can prompt geographic-specific advertising (for example, Facebook Local Awareness Ads or in-store Snapchat filters). It can feel gimmicky when used needlessly, but can provide a compelling marketing experience when done right.

Airbnb’s slogan is “belong anywhere.” One of the ways to feel like a local in a new city is to have locals give you a personal tour — which is exactly what Airbnb provides by targeting people on mobile when they’re looking for directions:

Or you can just make use of location services in more straightforward ways, like how the Bernie Sanders campaign targeted his core demographics in New York before the important primary by using Snapchat Geofilters.

However, be careful about inferring location from device — only 17% of mobile searches are on the go.

Audience context

Audience targeting is likely the most powerful form of context provided by digital marketing. You can segment your audience in a thousand different ways — from Facebook Lookalikes to Google Customer Match — that a billboard could only dream of. The more you customize your ad copy to the audience you’re targeting, the better off you’ll be. (There seems to be a running theme here…)

You could directly speak to the audience of your competitors by targeting branded keywords:

Or better yet, target competitor customers that are about to change services:


Retargeting is another powerful way to use audience context by changing your copy to reflect the actions a user has taken on your site (more great retargeting examples here):

Then, of course, there are all the obvious ways of leveraging audience, such as adjusting your value proposition, using a slightly different tone, or tweaking the offer you provide.

There’s a cliché that the digital age has killed advertising creativity. Forget about clever copy or innovative work; it’s all about spreadsheets and algorithms now. This couldn’t be further from the truth. The Internet didn’t kill advertising creativity — it just raised the bar. Content in all its forms (video ads, blog posts, tweets, etc.) will always be important. It might be harder to buy engaged eyeballs for your 30-second commercial online, but content done right can reach millions of people who are voluntarily consuming it. More importantly, though, the Internet lets you engage with your audience in a thousand innovative ways, providing a revamped arena for marketing creativity: context.


Using Google Tag Manager to Dynamically Generate Schema.org/JSON-LD Tags

Posted by serpschris

[Estimated read time: 7 minutes]

One of the biggest takeaways from SearchFest in Portland earlier this year was the rapidly rising importance of semantic search and structured data — in particular Schema.org. And while implementing Schema used to require a lot of changes to your site’s markup, the JSON-LD format has created a great alternative to adding microdata to a page with minimal code.


Check out Mike Arnesen’s deck from his SearchFest talk, “Understanding & Facilitating Semantic Search,” for a great overview on using structured data.

What was even more exciting was the idea that you could use Google Tag Manager to insert JSON-LD into a page, allowing you to add Schema markup to your site without having to touch the site’s code directly (in other words, no back and forth with the IT department).

Trouble is, while it seemed like Tag Manager would let you insert a JSON-LD snippet on the page no problem, it didn’t appear to be possible to use other Tag Manager features to dynamically generate that snippet. Tag Manager lets you create variables by extracting content from the page using either CSS selectors or some basic JavaScript. These variables can then be used dynamically in your tags (check out Mike’s post on semantic analysis for a good example).

So if we wanted to grab that page URL and pass it dynamically to the JSON-LD snippet, we might have tried something like this:

Using tag manager to insert JSON-LD with dynamic variables
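To make the idea concrete, here’s a sketch of that naive attempt: a Custom HTML tag containing a JSON-LD block, with Tag Manager’s built-in {{Page URL}} variable dropped straight into the JSON (the Organization values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": {{Page URL}}
}
</script>
```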

But that doesn’t work. Bummer.

Meaning that if you wanted to use GTM to add the BlogPosting Schema type to each of your blog posts, you would have to create a different tag and trigger (based on the URL) for each post. Not exactly scalable.

But, with a bit of experimentation, I’ve figured out a little bit of JavaScript magic that makes it possible to extract data from the existing content on the page and dynamically create a valid JSON-LD snippet.

Dynamically generating JSON-LD

The reason our first example doesn’t work is that Tag Manager replaces each variable with a little piece of JavaScript that calls a function, returning the value of whatever variable is called.

We can see this error in the Google Structured Data Testing Tool:

JSON-LD Google Tag Manager variable error

The error is the result of Tag Manager inserting JavaScript into what should be a JSON tag — this is invalid, and so the tag fails.

However, we can use Tag Manager to insert a JavaScript tag, and have that JavaScript tag insert our JSON-LD tag.

Google Tag Manager JSON-LD insertion script
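A minimal sketch of such a tag, assuming placeholder Organization values and Tag Manager’s built-in {{Page URL}} variable:

```html
<script>
  // Build the Schema data as a plain JavaScript object. Inside a
  // normal script tag, Tag Manager can swap {{Page URL}} in as a
  // JavaScript value (this is what failed inside the JSON-LD tag).
  var data = {
    "@context": "http://schema.org",
    "@type": "Organization",
    "name": "Example Company",
    "url": {{Page URL}}
  };

  // Create the JSON-LD script tag, serialize the data into it with
  // JSON.stringify, and append it to the page.
  var script = document.createElement('script');
  script.type = 'application/ld+json';
  script.text = JSON.stringify(data);
  document.head.appendChild(script);
</script>
```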

If you’re not super familiar with JavaScript, this might look pretty complicated, but it actually works the exact same way as many other tags you’re probably already using (like Google Analytics, or Tag Manager itself).

Here, our Schema data is contained within the JavaScript “data” object, which we can dynamically populate with variables from Tag Manager. The snippet then creates a script tag on the page with the right type (application/ld+json), and populates the tag with our data, which we convert to JSON using the JSON.stringify function.

The purpose of this example is simply to demonstrate how the script works (dynamically swapping out the URL for the Organization Schema type wouldn’t actually make much sense). So let’s see how it could be used in the real world.

Dynamically generating Schema.org tags for blog posts

Start with a valid Schema template

First, build out a complete JSON/LD Schema snippet for a single post based on the http://ift.tt/KdiOY5 specification.

example article schema template
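As an illustration, a completed template might look like the sketch below. Every value is a placeholder, and the image dimensions stand in for whatever size your CMS generates:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "BlogPosting",
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "http://www.example.com/blog/sample-post/"
  },
  "headline": "Sample Post Title",
  "datePublished": "2016-06-01T09:00:00+00:00",
  "dateModified": "2016-06-02T10:30:00+00:00",
  "author": {
    "@type": "Person",
    "name": "Jane Author"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Company",
    "logo": {
      "@type": "ImageObject",
      "url": "http://www.example.com/logo.png"
    }
  },
  "image": {
    "@type": "ImageObject",
    "url": "http://www.example.com/uploads/sample-image.jpg",
    "height": 628,
    "width": 1200
  },
  "description": "A placeholder meta description for the sample post."
}
</script>
```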

Identify the necessary dynamic variables

There are a number of variables that will be the same between articles; for example, the publisher information. Likewise, the main image for each article has a specific size generated by WordPress that will always be the same between posts, so we can keep the height and width variables constant.

In our case, we’ve identified 7 variables that change between posts that we’ll want to populate dynamically:
identify schema properties for dynamic substitution by tag manager

Create the variables within Google Tag Manager

  • Main Entity ID: The page URL.
  • Headline: We’ll keep this simple and use the page title.
  • Date Published and Modified: Our blog is on WordPress, so we already have meta tags for “article:published_time” and “article:modified_time”. The modified_time isn’t always included (unless the post is modified after publishing), but the Schema specification recommends including it, so we should set dateModified to the published date if there isn’t already a modified date. In some circumstances, we may need to re-format the date — fortunately, in this case, it’s already in ISO 8601 format, so we’re good.
  • Author Name: In some cases we’re going to need to extract content from the page. Our blog lists the author and published date in the byline. We’ll need to extract the name, but leave out the timestamp, trailing pipe, and spaces.
  • Article Image: Our blog has Yoast installed, which has specified image tags for Twitter and Open Graph. Note: I’m using the meta twitter:image instead of the og:image tag value due to a small bug that existed with the open graph image on our blog when I wrote this.
  • Article Description: We’ll use the meta description.

Here is our insertion script, again, that we’ll use in our tag, this time with the properties swapped out for the variables we’ll need to create:

google tag manager json-ld insertion script with dynamic variables
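The insertion mechanics are the same as in the Organization sketch earlier; only the data object changes. Something like this, where each {{…}} name is a hypothetical Tag Manager variable standing in for whatever you call yours:

```html
<script>
  // Same insertion pattern as before; the {{...}} names are
  // hypothetical GTM variables.
  var data = {
    "@context": "http://schema.org",
    "@type": "BlogPosting",
    "mainEntityOfPage": {
      "@type": "WebPage",
      "@id": {{Page URL}}
    },
    "headline": {{Page Title}},
    "datePublished": {{Published Date}},
    // dateModified is handled separately (see below)
    "author": {
      "@type": "Person",
      "name": {{Author Name}}
    },
    "publisher": {
      "@type": "Organization",
      "name": "Example Company",
      "logo": {
        "@type": "ImageObject",
        "url": "http://www.example.com/logo.png"
      }
    },
    "image": {
      "@type": "ImageObject",
      "url": {{Article Image}},
      "height": 628,
      "width": 1200
    },
    "description": {{Meta Description}}
  };

  var script = document.createElement('script');
  script.type = 'application/ld+json';
  script.text = JSON.stringify(data);
  document.head.appendChild(script);
</script>
```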

I’m leaving out dateModified right now — we’ll cover that in a minute.

Extracting meta values

Fortunately, Tag Manager makes extracting values from DOM elements really easy — especially because, as is the case with meta properties, the exact value we need will be in one of the element’s attributes. To extract the page title, we can get the value of the <title> tag. We don’t need to specify an attribute name for this one:

configuring a google tag manager tag to extract the title value

For meta properties, we can extract the value from the content attribute:

configuring a google tag manager variable to extract a meta content value
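If it helps to see what that configuration is doing, a DOM Element variable with a CSS selector is doing something equivalent to this one-liner (using the article:published_time property from the list above; the date shown is a placeholder):

```javascript
// Equivalent of a GTM DOM Element variable: select the element with a
// CSS selector, then read the named attribute.
document.querySelector('meta[property="article:published_time"]')
  .getAttribute('content'); // e.g. "2016-06-01T09:00:00+00:00"
```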

Tag Manager also has some useful built-in variables that we can leverage — in this case, the Page URL:

Tag Manager Page URL built in variable

Processing page elements

For extracting the author name, the markup of our site makes it so that just a straight selector won’t work, meaning we’ll need to use some custom JavaScript to grab just the text we want (the text of the span element, not the time element), and strip off the last 3 characters (” | “) to get just the author’s name.

In case there’s a problem with this selector, I’ve also put in a fallback (just our company name), to make sure that if our selector fails a value is returned.

custom JavaScript google tag manager variable to extract and process copy
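A Custom JavaScript variable along these lines would do it. The selector and fallback company name are hypothetical stand-ins for your own markup and brand:

```javascript
// GTM Custom JavaScript variables are anonymous functions that
// return a value. Selector and fallback name are hypothetical.
function() {
  try {
    // Take the byline span's own text node, skipping the nested
    // <time> element.
    var span = document.querySelector('.entry-byline span');
    var text = span.firstChild.nodeValue;
    // Strip the trailing " | " (3 characters) plus stray whitespace.
    return text.slice(0, -3).trim();
  } catch (e) {
    // If the selector fails, return the company name so the author
    // property is never empty.
    return 'Example Company';
  }
}
```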

Testing

Tag Manager has a great feature that allows you to stage and test tags before you deploy them.

google tag manager debug mode

Once we have our variables in place, we can enter the Preview mode and head to one of our blog posts:

testing tag manager schema variables

Here we can check the values of all of our variables to make sure that the correct values are coming through.

Finally, we set up our tag, and configure it to fire where we want. In this case, we’re just going to fire these tags on blog posts:

tag manager trigger configuration

And here’s the final version of our tag.

For our dateModified parameter, we added a few lines of code that check whether our modified variable is set, and if it’s not, sets the “dateModified” JSON-LD variable to the published date. You can find the raw code here.
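The added lines might look something like this inside the tag, again with hypothetical variable names:

```javascript
// Use the modified date when it exists; otherwise fall back to the
// published date, since the Schema spec recommends always including
// dateModified.
var modified = {{Modified Date}};
data.dateModified = modified ? modified : {{Published Date}};
```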

dynamic schema json-ld tag

Now we can save the tag, deploy the current version, and then use the Google Structured Data Testing Tool to validate our work:

google structured data testing tool validates dynamically generated JSON-LD

Success!!


This is just a first version of this code, which is serving to test the idea that we can use Google Tag Manager to dynamically insert JSON-LD/Schema.org tags. However, after just a few days, we checked in with Google Search Console, and it confirmed that the BlogPosting Schema was successfully found on all of our blog posts with no errors, so I think this is a viable method for implementing structured data.

valid structured data found in Google Search Console

Structured data is becoming an increasingly important part of an SEO’s job, and with techniques like this we can dramatically improve our ability to implement structured data efficiently, and with minimal technical overhead.

I’m interested in hearing the community’s experience with using Tag Manager with JSON-LD, and I’d love to hear if people have success using this method!

Happy tagging!


“A Complete Failure”: What Tech Businesses Can Learn from a Sports Blog’s Scandal

Posted by RuthBurrReedy

[Estimated read time: 16 minutes]

On February 17th, sports news and opinion blog SB Nation posted a longform story titled “Who is Daniel Holtzclaw?” The story, which took a sympathetic viewpoint on convicted rapist Daniel Holtzclaw, sparked a huge amount of controversy among SB Nation readers and garnered the blog a great deal of media attention. It took less than 24 hours for SB Nation to pull the piece, calling it “a complete failure.”

In the aftermath, SB Nation’s parent company, Vox Media, convened a panel of editors and journalists to investigate how a piece taking such a controversial stance on such a sensitive topic could have been published on the site without anyone saying “Hang on, is this a good idea?”

What does this have to do with digital marketing?

When I read the peer review report, I was expecting a fascinating glimpse into the inner workings of a very popular media site. What I wasn’t expecting was how many of the review’s findings were exactly the kinds of struggles, missteps, and roadblocks I’ve seen at my own and my friends’ workplaces over the years. If you’re a startup, a rapidly growing business, or a company that relies heavily on technology for communication, chances are you have run into, or will run into, many of the same pitfalls that plagued SB Nation.

Regardless of your feelings about the case itself or SB Nation’s coverage of it, the Vox peer review has some valuable lessons on how a business can avoid inadvertently launching a product (or website, or piece of content) it’s not proud of. The entire peer review is worth a read, but I’ve pulled out the biggest lessons for businesses here.

Realistic production schedules

The Longform blog was publishing one longform piece of content every week, a pace that the peer review called “…a furious rate of output that’s not conducive to consistently excellent reporting and editing.”

In the tech world, there’s enormous pressure to ship and keep shipping. For content creators, there is often a quota of pieces per week or per month to be met. An environment that stresses pressure to consistently produce over pressure to do excellent work is one that is ultimately going to result in some inferior products getting pushed out there. Taking shortcuts in the name of speed also results in technical debt that can take a business years to catch up with.


“Done is better than perfect” is a common mantra at tech companies, and there’s a lot of value in not sweating the small stuff. Scope creep is a real problem that can bog a project down indefinitely. Still, it’s important to make sure that your organization isn’t encouraging a breakneck pace at the expense of quality.

Encourage your team to be honest about how long a project might take, and then build in some squish time to handle the unforeseen (more on that next). Recognize that not all product updates (blog posts, ad campaigns) are created equal, and some will take more time than others. If your team is constantly in “crunch mode” trying to get things out the door, that’s a big clue that your pace is too fast and mistakes are going to be missed.

Build in time for Quality Assurance

This is one I’ve seen time and again around a new website launch. As delays are almost unavoidable when building a new site, the “final product” often isn’t ready for review until a day or two before launch is scheduled to happen. It certainly can be possible to test and review a site in that amount of time, but only if everything is actually ready to go. If the QA review finds any major problems, there’s no time to fix them before the launch deadline.

At SB Nation, editors and reviewers had between two and four days to turn a longform piece around for publishing before it was slated to go live. This meant heavy revisions were out of the question without pushing back the timeline to publish. Furthermore, the peer review discovered that many pieces were being published with only one other person having reviewed them — and then only from a copy-editing perspective, not for content.


Even if your team are all geniuses who consistently do a dynamite job, any writer will tell you that it’s impossible to proofread your own work. An outside perspective is absolutely vital to finding and fixing the problems any project might have; the person who built it is just going to be too close to the work to do it themselves.

Say it with me: a QA process is meaningless without a strategy to fix the problems it uncovers. Do not expect that your projects will just need a quick review-and-polish before they’re ready to go live. Instead, make sure that you’ve budgeted enough time to both thoroughly test and review, and assume that the QA process will uncover at least one major issue that you’ll need to build in time to fix. In order to do that, you’ll have to make sure you:

Don’t get married to your deadlines

There are occasions where a hard deadline is unavoidable. You need the new website to be built before your major trade show event, or your blog post is responding to a developing news situation that won’t be as relevant tomorrow. There are plenty of situations, however, when deadlines are more malleable than you might think. Sometimes, the cost of launching something that’s not fully ready to go is much higher than the cost of pushing back a deadline by a week or two.


As a leader, one thing you can do is build in internal deadlines throughout the life of a project, so as each deadline approaches you’re checking in regularly about whether or not it’s realistic. This can mean saying things like “We will need two weeks for QA, so all work on the product will need to be completed by June 15th to launch on July 1st.” “The blue widget team will need 3 weeks to complete their portion of the project, so if it’s not handed off to them by May 25th, we are jeopardizing the launch date.” By setting these internal deadlines, you’re building in an early warning system for yourself so you have plenty of time to plan and manage expectations. You’ll also set a precedent for treating QA time as sacred, instead of cutting into it when other portions of the project overrun.

You should also decide ahead of time what you do and don’t need in order to launch something. This is commonly referred to as defining the MVP, or Minimum Viable Product. If you have to sacrifice a few bells and whistles to hit your deadline, it may be worth it — but make sure you don’t get so focused on “done” that you lose sight of what you’re trying to do with this project in the first place.

Be clear about roles and responsibilities

Vox uncovered a problem at SB Nation that is so, so typical at companies that have undergone a period of fast growth: their internal leadership structure had not grown to scale with their increased numbers. The result was confusion about who was responsible for what, and who had the power to say “No” to whom.

This type of problem can be especially insidious at an organization with a “flat” org chart, where there aren’t a ton of pre-defined internal hierarchies. Too much middle management does run the risk of turning your business into the company from Office Space — but too little oversight can be just as dangerous. A clearly-defined org chart can help your employees figure out who they need to contact in a crisis, and make sure that everyone who needs to be in the loop is.

Taking the time to build some internal structure, define roles and responsibilities, and have a clear process to escalate problems as needed might sound unbearably stuffy for your hip young tech business. You may feel that “everyone knows” who’s responsible for what because you’re such a tight-knit bunch. I’m sure I’m not the only one who felt an unpleasant shiver of recognition when I read this quote from the peer review, though:

“Asked why he thought senior staffers felt they couldn’t overrule Stout, Hall told us, ‘I think people didn’t know how…We have a very tight, personality-based workplace. And I think sometimes if you don’t know that person — people didn’t feel like there was a formal way to do it.’”

Make sure there is a formal way to do it. Believe me, you will be so happy that you got these things in place before you needed them, instead of waiting until you have a huge, public, embarrassing incident to make you realize you need them.

Empower feedback at a cultural level

There were multiple people within the SB Nation organization who were concerned about the Holtzclaw piece, but most of them felt it was not their job, or their place, to give feedback. The copy editor who was the first to review the piece specifically said he thought sharing his concerns was “above his pay grade.”

It can be difficult to know whether or not criticism or raising concerns is appropriate in a given situation, especially for less-experienced team members who may not be familiar with workplace etiquette. This is exacerbated when decisions are made via email chains — a given observer may not want to derail the thread with their concerns, or risk looking foolish or overstepping their bounds in front of their colleagues.

Quote: "It won’t matter if you ask for feedback if you’re not taking the time to support it culturally."

As a leader in your organization, pay attention to decision-making processes and make sure that you are creating a space to actively solicit and encourage feedback from the people involved, regardless of their pay grade. You need to pay attention to the “unwritten rules” of your workplace, too — what happens when someone voices a critique? Are they listened to? Taken seriously? Dismissed? Ridiculed? Does change ever come out of it? It won’t matter if you ask for feedback if you’re not taking the time to support it culturally.

This is one of many instances in which defining and living by a set of company values can be so important. At UpBuild, two of our values are transparency and integrity, so creating a culture where feedback is warmly encouraged fits right in with what we’ve said we want to do. Knowing that they’re taking an action that’s supported by the company’s core values can encourage someone to risk speaking up when they otherwise might not have.

Avoid isolation and concentration of power

Quote: "There should be no one person who holds so much exclusive knowledge about an area of your business that it wouldn’t be able to continue if they left."

Most startups have that one person. They’ve been there for a while, maybe since the beginning, and since the business started out lean and mean, they’ve worn a lot of hats while they’ve been there. Maybe they helped build some of the systems the company runs on. At any rate, they’ve got one product, or one area, that is “theirs” alone — nobody else really touches it, and in some cases nobody else really knows exactly what they do with it. What people do know is that they have to stay on this person’s good side, because any time their work intersects with this person’s individual fiefdom within the company, they’ll need their cooperation to get it done. In the case of SB Nation’s Longform blog, even people who were ostensibly editor Glenn Stout’s peers didn’t feel empowered to kill a piece he’d championed, because it was “his” blog.

Additionally, the team was distributed across the country, which means they used a combination of email and Slack to communicate. According to Stout, “When I first started hearing about Slack, I had no idea even what it was. And then the edict came down that we have to go on Slack. And I had to find out what it was. And I would use it occasionally, but I’m just much more comfortable with emails.”

What I found so fascinating about this portion of the report was that not only did Stout’s non-participation in Slack distance him from his co-workers, giving them less of a sense of what he was working on, it also created the impression that he was exempt from the rules and expectations that governed much of the rest of the organization.

If you recognize an internal fiefdom forming (or know of one that’s already there), break it up by building in some redundancies — there should be no one person who holds so much exclusive knowledge about an area of your business that it wouldn’t be able to continue if they were to leave. Get other people involved in the project, emphasize cross-team collaboration and information sharing, and make it a requirement for everyone. Even your top performers shouldn’t be exempt from the policies and procedures laid out for your organization.

Building in redundancies can also help guard against another contributing factor to the SB Nation debacle: the one person who would usually dictate whether or not a piece would be published, the editorial director, was on vacation. Two other editors were meant to be taking on his work in his absence, but neither was sure if they had the power to kill the article. If a core decision-maker is on vacation, there should be other team members conversant enough with his or her work to step in, and everyone should be on the same page about their power to make decisions in his or her absence.

Quote: "If important discussions are happening [on a tool like Slack], everyone needs to be using it."

To combat isolation, be consistent in setting and reinforcing expectations for communication and collaboration. Don’t just let everyone use communication tools to whatever extent they prefer; figure out which conversations need to be happening in which channel, and then nudge your team to make sure information is being communicated in the right way/place.

The informality and high interruption factor of a tool like Slack can turn people off, and it’s easy to see participation as voluntary, but if important discussions are happening there, everyone needs to be using it. Set the expectation with your direct reports: “You don’t have to look at it constantly, but I expect you to be checking Slack (x) times per day, participating in discussions that concern you, and responding promptly to direct messages.”

Speaking of tools:

Pick a project management solution and stick with it

Quote: "Adoption of a new tool or process has to be set at a cultural level."

Growing companies tend to leave a trail of discarded project management systems in their wake. It can be difficult to get a whole team to fully adopt a new tool, and when you’re small, it’s pretty easy to keep track of everything that’s going on. The challenge comes, once again, with growth — sooner or later a company gets big enough for things to start falling through the cracks. Finding the next shiny new project management tool and rolling it out to great fanfare is easy; getting your team to actually use it is the hard part.

Adoption of a new tool or process has to be set at a cultural level. Build use of the new tool into existing processes in an explicit way, and then reinforce that that’s how you expect them to be done from now on. I’ve found it helpful to structure touch-base meetings about a given project around the project management tool (we use Trello); that gives the people working on the project incentive to make sure everything’s up-to-date before our meeting.

Prioritize empathy

When you’re spending all your time thinking about and making something, it can be really hard to think about it in any different way. Most SEOs have encountered this with their clients; people get so used to industry terminology or niche product descriptions that they have a hard time taking a step back and asking, “What are our customers really looking for? What problems are they trying to solve?”

Quote: "Empathy should be a required skill and consideration for anyone who is going to be communicating on behalf of your business."

Empathy can come into play at many points during the marketing process — user testing, focus groups, persona building, user experience, accessibility concerns, etc. — but it is of particular importance when dealing with sensitive subjects. There are countless examples of businesses that have tried to take an “edgy” tone on social media and wound up in a media firestorm. Empathy should be a required skill and consideration for anyone who is going to be communicating on behalf of your business.

As with empowering feedback, prioritizing empathy is something that is best done at a cultural level. You may decide you want to make empathy part of your core values, but even if you don’t, you should clearly define your company’s stance on addressing sensitive topics — and the tone and brand voice you’ve defined as part of your overall brand strategy should reflect that as well. Make empathy part of the training and onboarding process for anyone who will be communicating about your brand, as well as for all senior staff, and you’ll start to see the cultural shift.

Fail fast, apologize faster

The peer review covered a lot of “what went wrong,” but let’s close on what went right. It took SB Nation less than 24 hours to realize their mistake, take down the piece, and issue an apology — not a “we’re sorry if you were offended” apology, but a real, honest-to-goodness, “this was a mistake and we are heartily sorry” apology.

Quote: "The rules are simple, and you probably learned them in elementary school: 1.) Say you’re sorry. 2.) Show you understand why what you did was wrong or hurt somebody. 3.) Don’t do it again."

It would have been easy to beat a hasty retreat, engage in some quick and public firings, and sweep the whole thing under the rug. Instead, SB Nation and parent company Vox Media proved that their apology was sincere by taking concrete steps to figure out what had happened and how they could prevent it from happening again.

If you take nothing else away from this, learn from this master class in public apology. The rules are simple, and you probably learned them in elementary school: 1.) Say you’re sorry. 2.) Show you understand why what you did was wrong or hurt somebody. 3.) Don’t do it again.

What lessons did you take away from this peer review? In addition to everything I’ve outlined above, it served as an important reminder for me that lessons can be found in many places, often outside the tech bubble we find ourselves in.

Why We Only Accept 1 Out of Every 10 Guest Blog Pitches

Posted by BopDesign

[Estimated read time: 6 minutes]

We’ve been pitched a blog post about hoverboards.

While hoverboards are pretty cool and I’d like to own one, our business and our blog have absolutely nothing to do with hoverboards.

Why did we get pitched a post about hoverboards? Most likely because the person pitching the post saw that we have decent domain authority and wanted a piece of it in the form of a backlink from us. I’m sure their blog post on hoverboards would have been very interesting, but it likely would have caused our audience of B2B marketing professionals to scratch their heads in confusion.

We know who we are, and who we are not

We are a boutique digital marketing firm that focuses on creating websites and providing content marketing services.

Writing about anything else doesn’t provide value for our brand.

Over the past eight years, we’ve built up the blog on our website, writing (mostly) weekly posts about all aspects of web design and digital marketing. This blogging strategy has enabled us to add two to four new blog posts to our website each month. Our main goal has always been to provide our clients and prospects with helpful, actionable information that helps them do their jobs better or aids them in making a decision about digital marketing.

Potential clients get helpful tips and can do their jobs better.

We get great content that may help us rank better and attract more potential clients.

We hate rejection, too

While we love adding insightful information to our blog, we hate having to reject guest post submissions.

Below is an actual pitch we received (sender’s information not included to protect their privacy).

[Screenshot: the guest blog pitch email we received]

Any smart website owner should be excited to get a guest post pitch. Not only is it flattering (you really like us and want to write for us?), but you get free, hopefully useful content for your website. You can use someone else’s writing to drive traffic to your website.

It’s not us; it’s you

So, why the heck do we end up rejecting nine out of ten pitches we receive?

Simply stated, many of the guest post pitches we receive “aren’t a good fit,” which can mean a variety of things.

Here are the top reasons we reject a guest blog (and why you should, too):

  • The topic is irrelevant
  • The company pitching the blog isn’t related to our industry
  • The writing is terrible
  • The blog is tailored to the wrong audience (B2C vs B2B or CTO vs CMO)
  • The website they want us to link to is sketchy
  • We’ve published a blog post from them recently
  • The writing in the email is terrible and full of grammar issues
  • The person hasn’t researched our business or even looked at our website
  • The topic is too inflammatory
  • The topic is relevant but not in line with our firm’s philosophy
  • The topic is tired and overused
  • There is no value for our audience

When I read a guest blog pitch, I evaluate it for all of these things.

Don’t make me hate helping you

Recently, I made the mistake of tentatively accepting a guest post pitch even though the grammar in the email wasn’t up to our standards. We work with CEOs, founders, and marketing directors in a variety of industries, including biotech and finance, all of whom tend to have advanced educations and expect quality writing.

As such, we require all the content on our website to be grammatically correct, to flow well, and to be coherent.

I ignored my instinct because the proposed topic was really interesting and I felt it would make a great blog post for our current clients. I ended up paying for it. The draft the guest writer sent over was subpar, to put it nicely. Any blog post will undergo revisions, but this one was grammatically challenged and incoherent, jumping from point to point and back again.

[Screenshot: the redlined draft]

I marked up the draft with tracked changes during the revision process and returned the post to the writer. I never heard back from them.

The winning 10%

We’ve noticed that winning guest pitches — whether our own or the ones pitched to our blog — have a few things in common. The successful pitchers seem to realize the following:

  • It’s not easy and it does take time
  • Always be professional and respectful
  • Know your audience (both the person you’re emailing and the folks who are reading their website/blog)
  • Read their existing blog posts
  • Pitch a relevant topic
  • Follow-up is key to getting a response (rejection or approval)
  • Don’t push it
  • Don’t get discouraged

We don’t anticipate this 90/10 ratio of rejected to accepted pitches changing. It’s unfortunate, but we know that many digital marketers will never fully understand guest blog pitching and will continue the machine-gun pitching strategy.

7 tips for a successful guest blog pitch

Based on our experience both pitching and accepting guest blogs, we have several insights to share with writers, marketers, and website owners.

1. Steer clear of paying for guest post opportunities

This one always surprises me. It’s only a matter of time before sites that sell space on their blog are nixed from the SERPs. We always decline when a website we pitch tells us they will publish it for a fee.

2. Do your own research

We always perform our own research to vet a website, ensure it’s relevant, and make sure it actually has a blog we’d like to write for.

3. Don’t always go after 60+ DA websites

It’s great to land a guest blog on a high-DA site, but those placements are often very tough to get. It’s often better to start with the “low-hanging fruit”: relevant sites that might have lower domain authority.

4. Write a thoughtful article that adds value

Don’t write crap. Consider every guest blog you write to be a graded assignment. Your professional reputation still matters in a digital world. If you write crap, you will be judged for it.

5. Provide options

People, including editors, like to have options. You might have a great topic, but it’s always best to present several great topics. You never know; the editor may have already accepted a similar topic.

6. Be genuine

Ditch your generic email pitch. You may start with a template, but spend 15 minutes or so tailoring it to the site you’re pitching.

If you can, find the person’s name and personalize the message. Keep in mind that many of the people you pitch receive lots of unsolicited pitches every day. Stand out from the rest by being genuine and unique.

7. Don’t spam or waste people’s time

If the website you’re pitching isn’t relevant to your industry, don’t pitch them. If they take the time to send you a rejection notice, be gracious and respectful. Take it as a learning experience and thank them for their time.

The last thing I’ve learned about rejecting and submitting guest blog posts is that success comes from creating a partnership between the person doing the pitching and the person being pitched. Our approach is always to offer something of value, be respectful, and, hopefully, create a connection that’s beneficial for everyone.

Have you been successful in pitching guest posts? What’s worked for you?
