Moz Keyword Explorer vs. Google Keyword Planner: The Definitive Comparison

Posted by BritneyMuller

Keyword research, the blueprint to any successful SEO strategy

If you’ve been doing keyword research for a while, you’ve probably fallen into a routine. And that routine has likely been recently disrupted… thanks, Google.

If you’re new to keyword research, getting comfortable with new keyword research tools will come more easily to you. Lucky pups. But us change-averse old dogs can still learn new tricks when we need to. Are you ready to see which tool is right for you? –Woof.

My hesitations about writing this article:

  • I’m new to Moz and don’t want to be crucified for criticizing our own keyword research tool. This concern has only been met with acceptance and encouragement, so…*fingers crossed* they don’t change their minds. Love you guys!
  • My methods of keyword research revolve around finding qualified traffic for increasing conversions, not just any large search volume numbers (to make traffic look good).
  • I fear that this will come across as a Moz Keyword Explorer soft sell. It’s not. It’s a very honest comparison of Moz Keyword Explorer versus Google’s Keyword Planner. It’s a post that I’ve been wanting to read for a while.

Here are some great guides if you need a Moz Keyword Explorer refresher, or a Google Keyword Planner refresher.

<< TL;DR: Skip to the conclusion here >>

Google Keyword Planner’s recent change

Any habits we’ve held onto with Google Keyword Planner were disrupted early September when they decided to stop providing average monthly search volume data (unless you’re in that special group of higher-paying ad buyers who can still access the more precise search volume data). Instead, we now see huge swings of min-max search volume, which really starts to muddy the keyword research waters. Google recently came forward to explain that this change was done to deter scrapers from pulling their search volume data.

For a more comprehensive write-up on this change, read Google Keyword Unplanner by Russ Jones. He explains a little more about how this change affects various data sources and what Moz has been doing to mitigate the impact.

But, showing is better than telling. So let’s take a look for ourselves:

[Screenshot: Google Keyword Planner's wide average monthly search volume range]

A 900,000 average monthly search volume swing is crazy! In fact, Google now only provides one of seven volume buckets: 0–10, 10–100, 100–1K, 1K–10K, 10K–100K, 100K–1M, and 1M+.

Moz's Keyword Explorer also gives ranges, but they're not nearly as vast (or as arbitrary). The machine-learning model behind Keyword Explorer is designed to predict monthly fluctuations in search volume. It's mathematically tied to the most accurate keyword data available, and you can see exactly how it works, and how accurate it is, in the Clickstream Data to the Rescue article.

[Screenshot: Moz Keyword Explorer's narrower search volume range]

Which is why I wanted to know:

What are quality keywords?

Quality keywords successfully target your demographic during their acquisition phase (from education to purchase), have a specific searcher intent, low-to-medium organic competition, and medium-to-high search volume (this will vary based on what part of the acquisition funnel you're targeting).

However, it’s important to keep in mind that some longer-tail queries (with little to no search volume) can be highly profitable as well.


Tier 1 keyword research setup

Google Keyword Planner:

This is my familiar ol', kooky friend that has been acting very strange lately (anyone else noticing all of the delays and glitches?). I'm a little worried.

Anywho, here’s how I begin keyword research within Keyword Planner:

[GIF: keyword research setup in Google Keyword Planner]

  1. Enter your keyword under “Search for more keywords using a phrase, website or category.”
  2. Make sure the region is set to United States (if wanting to research nationally).
  3. Set keyword options to “broad.” –Settle down, we’ll go back and change this to “closely related” after our first swoop.
  4. Sort keyword volume by highest to lowest and change the “show rows” to 100.
  5. IMPORTANT: Always scroll top to bottom! Otherwise, new keywords will populate from the bottom that you’ll miss.
  6. Select keywords with unique intents as you scroll down the first 100 rows, click “next,” and start again from the top until you're through all of the keyword results.

Moz Keyword Explorer:

My hip new friend that I’m not sure I can trust just yet. However, multiple trusted friends vouch for her integrity and… I really dig her style.

Here’s how I begin keyword research within Keyword Explorer:

[GIF: keyword research setup in Moz Keyword Explorer]

  1. Enter your keyword into the Keyword Explorer search bar.
  2. Navigate to “Keyword Suggestions” on the left-hand menu.
  3. Set “Display keyword suggestions that” to “include a mix of sources.”
  4. Set “Group Keywords” to “no.”
  5. Sort keyword list by highest search volume to lowest.
  6. Scroll down and select keywords with unique searcher intent.

Either way, this will give you one giant list of 1,000 keywords, which can be tough to pace through (compared to the 100 keyword chunks in GKP). A progress bar of sorts would be nice.

The thing that’s taken the most getting used to is not seeing a competition/difficulty metric adjacent to the search volume. The whole goal of keyword research is to discover opportunity gaps that offer mid-to-high search volume with low competition. If you’re anything like me, you’ve run hundreds if not thousands of strange SEO tests and are very aware of what you can achieve “competition”-wise (domain-dependent) and what you can’t. (Or when a higher-competition keyword should take the form of a longer SEO plan.)

It's important to note that the GKP “Competition” metric is an advertising metric.

While this metric occasionally correlates with organic competition, it's often misleading and not an accurate representation of how competitive the organic results are.

The MKE “Difficulty” metric, on the other hand, is an organic search metric. It also leverages a smarter CTR curve model to show when weaker pages are ranking higher (in addition to other ranking signals).

That being said, having to wait to find out the competition metric of a keyword until after I add it to a list is frustrating. I can’t help but feel that I’m not selecting keywords as strategically as I could be. Hopefully, Moz will add a historical competition metric up front (adjacent to search volume) sometime in the near future to help us better select ripe keyword opportunities.

The relevancy metric doesn’t do much to help my research because I’m already relying on the keywords themselves to tell me whether or not they’re relevant/have a unique user intent.

(I told you guys I would be honest!)

Label by keyword type:

Navigational: Searchers seeking a destination on the web.

Example: “University of Minnesota tuition”

Informational: Searchers researching, getting quick answers, often using what, who, where, how, etc. modifiers.

Example: “what is a conker”

Commercial Investigation: Searchers investigating beyond an informational query. Comparing brands, searching for “best,” researching potential clients, etc.

Example: “ppc experts in london”

Transactional: Searchers looking to purchase something, comparing rates, seeking prices for things, etc.

Example: “affordable yoda action figure”

Transactional and Commercial Investigation types tend to be most profitable (depending on business model). For example, a blog could do very well from Informational-type keywords.

If you want a more in-depth understanding of keyword types, read Rand's Segmenting Search Intent. (An oldie, but a goodie!)
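
If you're labeling hundreds of exported keywords, you can script a rough first pass before reviewing each query by hand. The sketch below is a minimal Python example using made-up modifier heuristics; it is not how Moz or Google classify intent, and the patterns and fallback bucket are assumptions you should tune to your own niche (brand and destination queries, for instance, really need a brand list).

```python
import re

# Illustrative modifier patterns only; adjust for your market and language.
# Order matters: the first matching intent wins.
INTENT_PATTERNS = [
    ("Transactional", r"\b(buy|order|cheap|affordable|price|prices|coupon|now|today)\b"),
    ("Commercial Investigation", r"\b(best|top|compare|review|reviews|vs|alternative|rates)\b"),
    ("Navigational", r"\b(login|sign in|account|official site|website|\.com)\b"),
    ("Informational", r"\b(what|who|where|when|why|how|guide|definition)\b"),
]

def classify_intent(keyword: str) -> str:
    """Return a best-guess keyword type for a single query string."""
    query = keyword.lower()
    for intent, pattern in INTENT_PATTERNS:
        if re.search(pattern, query):
            return intent
    # Everything unmatched lands here for manual review.
    return "Unlabeled"

if __name__ == "__main__":
    for kw in ["affordable yoda action figure", "what is a conker",
               "best private student loans", "great lakes student loans login"]:
        print(f"{kw} -> {classify_intent(kw)}")
```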

Compare results & answer:

  • Which tool provided better long-tail results?
  • Which tool provided better top-of-funnel queries?
  • What percentage of “keyword types” did each tool provide?
  • What are the advantages and disadvantages of each tool?

For whatever reason, “student loans” painted an accurate picture (of what I’ve found to be true across other competitive keywords) of each respective tool's wheelhouse. So, “student loans” will serve as our point of reference throughout this comparative analysis.


Tier 1 keyword research overview:

                         Moz Keyword Explorer        Google Keyword Planner
Term:                    "student loans"             "student loans"
Region:                  United States               United States
Spectrum:                Include a mix of sources    Broad
Group Keywords:          No                          N/A
Total Results:           1,000                       700
#Keywords With Intents:  43                          40


Moz Keyword Explorer results:

Keyword | Modifier | Type | Min Volume | Max Volume | Difficulty | Opportunity | Importance | Potential
student loan consolidation consolidation Commercial Investigation 11501 30300 60 83 3 79
student loan calculator calculator Informational 11501 30300 75 100 3 76
student loan Informational 118001 300000 82 84 3 82
federal student loan federal Navigational 30301 70800 63 48 3 76
student loan refinance refinance Commercial Investigation 11501 30300 55 83 3 77
student loan repayment calculator repayment calculator Informational 11501 30300 67 100 3 74
student loan interest rates interest rates Commercial Investigation 6501 9300 53 54 3 69
student loan hero hero Navigational 1701 2900 49 19 3 53
student loan forgiveness forgiveness Commercial Investigation 70801 118000 62 86 3 86
student loans information information Informational 501 850 90 55 3 39
applying for student loans applying for Informational 4301 6500 72 55 3 60
fafsa student loans fafsa Navigational 2901 4300 98 56 3 28
bad credit student loan bad credit Commercial Investigation 1701 2900 44 83 3 70
student loan websites websites Commercial Investigation 851 1700 79 53 3 48
where to get student loan where to get Informational 501 850 76 55 3 47
citibank student loans pay citibank pay Navigational 201 500 29 94 3 64
how to get a school loan how to get a Informational 201 500 68 55 3 45
how to find my student loans how to find my Navigational 101 200 54 58 3 48
how to check student loans how to check Navigational 101 200 63 55 3 45
discover private student loan discover private Navigational 101 200 53 21 3 36
check my student loan balance check my balance Navigational 101 200 55 100 3 52
apply for student loan online apply for online Transactional 101 200 68 53 3 41
look up student loans look up Commercial Investigation 101 200 53 90 3 51
student loan now now Transactional 51 100 72 86 3 42
stafford student loans login stafford login Navigational 51 100 76 60 3 36
federal student loan lookup federal lookup Navigational 11 50 55 100 3 46
how to view my student loans how to view my Informational 11 50 57 64 3 39
how do i find out who has my student loan how do i find out who has my Informational 11 50 59 86 3 42
apply for additional student loans apply for additional Commercial Investigation 11 50 73 64 3 34
what student loans do i owe what do i owe Informational 11 50 50 41 3 34
student loan application status application status Navigational 0 10 72 100 3 33
what is federal student loans what is federal Informational 0 10 78 58 3 25
who services federal student loans who services federal Informational 0 10 68 100 3 22
apply for student loan by phone apply for by phone Transactional 0 10 86 86 3 11
national student loan locator phone number national locator phone number Informational 0 0 58 29 3 11
i owe student loans who do i call i owe who do i call Informational 0 0 50 94 3 26
where do i find my student loan interest where do i find my interest Informational 0 0 78 58 3 11
how to find my student loan account number how to find my account number Informational 0 0 55 100 3 25
how much federal student loans do i have how much federal do i have Navigational 0 0 80 46 3 8
where do i pay my government student loans where do i pay my government Navigational 0 0 77 55 3 11
student loans lookup lookup Navigational 0 0 55 100 3 26
student loans payment history payment history Navigational 0 0 66 46 3 14
how many school loans do i have how many do i have Navigational 0 0 68 90 3 21

Additional tool features:

The Importance metric is powerful! However, I've left all of my results at a neutral Importance (3) so you can see the downloaded results without any customization (and to keep things fair, because I'm not prioritizing GKP keywords).

If you choose to use this metric, you set a priority level for each keyword (1=not important, 10=most important) that will then influence the keyword’s Potential score. This allows you to more easily prioritize a keyword plan, which is very helpful.

[GIF: adjusting the Importance metric in Keyword Explorer]

Group keywords with low lexical similarity: While this can save you time, it can also lead to missing keyword opportunities. In my example below, if I select “student loans” (and not “Select 821 keywords in group”), I would miss all of the nested keywords.

Use this feature carefully:

[GIF: the "group keywords with low lexical similarity" option in Keyword Explorer]


Google Keyword Planner results:

Keyword | Modifier | Type | Avg. Monthly Searches (exact match only) | Competition | Suggested Bid
student loan forgiveness forgiveness Commercial Investigation 100K – 1M 0.58 3.38
student loan refinance refinance Commercial Investigation 10K – 100K 0.96 34.57
student loan consolidation consolidation Commercial Investigation 10K – 100K 0.98 22.52
private student loans private Commercial Investigation 10K – 100K 0.99 28.51
student loans without a cosigner without a cosigner Commercial Investigation 1K – 10K 0.98 23.85
parent student loans parent Commercial Investigation 1K – 10K 0.96 10.27
best private student loans best private Commercial Investigation 1K – 10K 0.93 21.33
bad credit student loans bad credit Commercial Investigation 1K – 10K 0.97 4.02
best student loans best Commercial Investigation 1K – 10K 0.93 18.61
compare student loans compare Commercial Investigation 100 – 1K 0.98 23.8
medical student loans medical Commercial Investigation 100 – 1K 0.91 10.16
student loans from banks from banks Commercial Investigation 100 – 1K 0.97 13.09
student loans for international students for international students Commercial Investigation 100 – 1K 0.88 14.01
no credit check student loans no credit check Commercial Investigation 100 – 1K 0.98 5.74
nursing student loans nursing Commercial Investigation 100 – 1K 0.94 15.53
alternative student loan options alternative options Commercial Investigation 10 – 100 1 30.32
best student loan consolidation program best consolidation program Commercial Investigation 10 – 100 0.91 36.91
student loan bankruptcy bankruptcy Commercial Investigation 1K – 10K 0.42 9.48
student loan deferment deferment Commercial Investigation 1K – 10K 0.35 10.31
student loans Informational 100K – 1M 0.98 25.97
student loan calculator calculator Informational 10K – 100K 0.42 5.41
types of student loans types of Informational 1K – 10K 0.82 13.61
student loan options options Informational 1K – 10K 0.99 23.63
how to consolidate student loans how to consolidate Informational 1K – 10K 0.84 13.79
student loan default default Informational 1K – 10K 0.28 8.18
student loan help help Informational 1K – 10K 0.96 15.48
where to get student loans where to get Informational 100 – 1K 0.97 17.19
average student loan average Informational 100 – 1K 0.33 18.59
private education loans private Informational 100 – 1K 0.98 16.76
what is a student loan what is Informational 100 – 1K 0.6 8.75
how do you get a student loan how do you get Informational 100 – 1K 0.94 5.22
no credit student loans no credit Informational 100 – 1K 0.98 7.85
about student loans about Informational 10 – 100 0.92 14.9
information on student loans information Informational 10 – 100 0.94 14.08
iowa student loan iowa Navigational 10K – 100K 0.23 9.08
great lakes student loans great lakes Navigational 10K – 100K 0.18 7.05
fafsa student loans fafsa Navigational 1K – 10K 0.61 7.41
student loan interest rates interest rates Transactional 1K – 10K 0.7 10.11
low interest student loans low interest Transactional 100 – 1K 0.98 21.07
need student loan today need today Transactional 10 – 100 1 9.8
i need a student loan now i need now Transactional 10 – 100 0.99 13.7

Tier 1 conclusion:

Google Keyword Planner largely uncovered Commercial Investigation and Informational queries. GKP also better identified a broader set of top-of-funnel keyword opportunities: student loan help, parent student loans, types of student loans, etc.

Moz Keyword Explorer largely uncovered Informational and Navigational queries. MKE better identified longer-tail keyword opportunities: how to get a school loan, apply for student loan online, apply for student loan by phone, etc.


Tier 2 keyword research setup

“closely related search terms” vs. “only include keywords with all of the query terms”

[GIF: selecting "Only show ideas closely related to my search terms" in Google Keyword Planner]

Google Keyword Planner: Perform the same setup, but select “Only show ideas closely related to my search terms.”

[GIF: selecting "only include keywords with all of the query terms" in Keyword Explorer]

Moz Keyword Explorer: Perform the same setup, but select “only include keywords with all of the query terms.”

Note: Your .csv download will still say “Broad” for Google Keyword Planner, even though you’ve selected “Closely related”… Told you she was acting funny.


Tier 2 keyword research overview:

                         Moz Keyword Explorer                               Google Keyword Planner
Term:                    "student loans"                                    "student loans"
Region:                  United States                                      United States
Spectrum:                Only include keywords with all of the query terms  Closely related
Group Keywords:          No                                                 N/A
Total Results:           1,000                                              700
#Keywords With Intents:  66                                                 30


Moz Keyword Explorer results:

Keyword | Modifier | Type | Min Volume | Max Volume | Difficulty | Opportunity | Importance | Potential
student loan Informational 118001 300000 82 84 3 82
student loan forgiveness forgiveness Commercial Investigation 70801 118000 62 86 3 86
student loan calculator calculator Commercial Investigation 11501 30300 75 100 3 76
citi student loan citi Navigational 11501 30300 34 94 3 86
student loan consolidation consolidation Commercial Investigation 11501 30300 60 83 3 79
private student loan loan Commercial Investigation 11501 30300 62 80 3 77
student loan refinance refinance Commercial Investigation 11501 30300 55 83 3 77
student loan repayment calculator repayment calculator Commercial Investigation 11501 30300 67 100 3 74
student loan interest rates interest rates Transactional 6501 9300 53 54 3 69
application for student loan application for Commercial Investigation 4301 6500 64 54 3 63
apply for student loan apply for Commercial Investigation 4301 6500 60 53 3 64
student loan forgiveness for teachers forgiveness for teachers Commercial Investigation 4301 6500 58 100 3 71
bad credit student loan bad credit Commercial Investigation 1701 2900 44 83 3 70
student loan hero hero Navigational 1701 2900 49 19 3 53
student loan servicing servicing Commercial Investigation 1701 2900 70 90 3 62
discovery student loan discovery Navigational 851 1700 47 28 3 51
fsa student loan fsa Navigational 851 1700 90 58 3 41
student loan providers providers Commercial Investigation 501 850 66 53 3 51
where to get student loan where to get Informational 501 850 76 55 3 47
check student loan balance check balance Navigational 201 500 54 46 3 49
department of education student loan servicing center department of education servicing center Navigational 201 500 78 58 3 42
student loan status status Navigational 201 500 61 86 3 54
us student loan debt us debt Informational 201 500 66 56 3 49
all student loan all Informational 101 200 58 56 3 45
discover private student loan discover private Navigational 101 200 53 21 3 36
how do i find my student loan how do i find my interest Informational 101 200 59 86 3 51
student loan management management Commercial Investigation 101 200 57 53 3 45
student loan resources resources Commercial Investigation 101 200 49 83 3 52
where is my student loan where is Informational 51 100 61 55 3 42
student loan corporation citibank corporation citibank Navigational 11 50 36 94 3 45
student loan enquiries enquiries Commercial Investigation 11 50 61 100 3 43
fafsa student loan consolidation fafsa consolidation Navigational 11 50 99 53 3 1
federal student loan options federal options Commercial Investigation 11 50 75 54 3 34
federal student loan terms federal terms Commercial Investigation 11 50 81 90 3 31
get a student loan today get a today Transactional 11 50 66 83 3 41
need student loan now need now Transactional 11 50 71 83 3 37
student loan overview overview Informational 11 50 79 94 3 35
student loan payment history payment history Navigational 11 50 55 100 3 46
student loan website down website down Informational 11 50 42 90 3 44
apply for student loan by phone apply for by phone Commercial Investigation 0 10 86 86 3 11
apply online for student loan apply online for Commercial Investigation 0 10 68 53 3 28
citibank student loan promotional code citibank promotional code Navigational 0 10 38 94 3 28
student loan corporation sallie mae corporation sallie mae Commercial Investigation 0 10 63 100 3 23
dsl student loan dsl Navigational 0 10 51 90 3 38
how do i take out a federal student loan how do i take out a federal Informational 0 10 80 55 3 22
how to pay student loan online how to pay online Informational 0 10 52 55 3 32
student loan management app management app Commercial Investigation 0 10 43 83 3 26
my student loan account number my account number Informational 0 10 65 64 3 18
student loan servicing center pennsylvania servicing center pennsylvania Navigational 0 10 52 88 3 38
where to pay my student loan where to pay my Informational 0 10 68 100 3 22
student loan counseling center counseling center Commercial Investigation 0 0 58 83 3 23
deadline for student loan application deadline for application Informational 0 0 68 60 3 16
educated borrower student loan educated borrower Commercial Investigation 0 0 54 83 3 24
get subsidized student loan get subsidized Commercial Investigation 0 0 64 90 3 22
how do i find my student loan account number how do i find my account number Informational 0 0 55 100 3 26
how much student loan can i have how much can i have Informational 0 0 71 55 3 14
how to check the status of a student loan from direct loans how to check the status of a Informational 0 0 86 90 3 11
how to find out who is my student loan lender how to find out who is my lender Informational 0 0 60 60 3 19
how to get your student loan money how to get your money Informational 0 0 39 56 3 22
student loan information eligibility information eligibility Commercial Investigation 0 0 85 86 3 11
is financial aid a student loan is financial aid a Informational 0 0 72 60 3 15
national student loan data system for parents national data system for parents Commercial Investigation 0 0 53 22 3 10
national student loan database contact number national database contact number Navigational 0 0 57 64 3 20
nslds student loan login nslds login Navigational 0 0 73 46 3 11
subsidized loan and unsubsidized student loan subsidized and unsubsidized Commercial Investigation 0 0 57 94 3 24
what is a national direct student loan what is a national direct Informational 0 0 66 64 3 17

Google Keyword Planner results:

Keyword | Modifier | Type | Avg. Monthly Searches (exact match only) | Competition | Suggested Bid
student loan application application Commercial Investigation 1K – 10K 0.98 22.37
student loan bankruptcy bankruptcy Commercial Investigation 1K – 10K 0.42 9.48
how to get a student loan how to get Informational 1K – 10K 0.92 10.59
student loan help help Informational 1K – 10K 0.96 15.48
student loan deferment deferment Commercial Investigation 1K – 10K 0.35 10.31
alaska student loan alaska Navigational 1K – 10K 0.54 2.21
south carolina student loan south carolina Navigational 1K – 10K 0.45 23.59
texas guaranteed student loan texas guaranteed Navigational 1K – 10K 0.5 17.34
student loan interest rates interest rates Transactional 1K – 10K 0.7 10.11
student loan consolidation rates consolidation rates Transactional 1K – 10K 0.94 17.44
student loan refinance refinance Commercial Investigation 10K – 100K 0.96 34.57
student loan consolidation consolidation Commercial Investigation 10K – 100K 0.98 22.52
student loan calculator calculator Informational 10K – 100K 0.42 5.41
student loan gov gov Navigational 10K – 100K 0.28 16.42
iowa student loan iowa Navigational 10K – 100K 0.23 9.08
student loan forgiveness forgiveness Commercial Investigation 100K – 1M 0.58 3.38
what is a student loan what is Informational 100 – 1K 0.6 8.75
how can i get a student loan how can I get Informational 100 – 1K 0.97 7.71
how to get a private student loan how to get a private Informational 100 – 1K 0.96 14.82
student loan app application Navigational 100 – 1K 0.83 11.89
student loan cancellation cancellation Transactional 100 – 1K 0.41 4.5
student loan tax tax Transactional 100 – 1K 0.25 47.05
medical student loan consolidation medical consolidation Commercial Investigation 10 – 100 0.93 0
federal student loan options federal options Commercial Investigation 10 – 100 0.75 7.45
student loan consolidation faq consolidation faq Commercial Investigation 10 – 100 0.76 15.94
how to figure out student loan interest how to figure out interest Informational 10 – 100 0.38 10.52
how to apply for a student loan online how to apply Informational 10 – 100 1 20.61
how much is my student loan payment how much is my Informational 10 – 100 0.22 20.96
need a student loan now need now Transactional 10 – 100 0.99 12.02
need student loan today need today Transactional 10 – 100 1 9.8

Tier 2 conclusion:

Google Keyword Planner uncovered a fairly even mix of all four keyword types (30% Informational, 20% Navigational, 30% Commercial Investigation, and 20% Transactional). GKP also continued to provide a broader set of top-of-funnel keyword opportunities: student loan bankruptcy, student loan gov, student loan help, how to get a student loan, etc.

Moz Keyword Explorer largely uncovered Commercial Investigation and Informational queries. MKE also continued to provide a broader set of long-tail keyword opportunities: student loan forgiveness for teachers, student loan providers, student loan status, how do i find my student loan, etc.


While this is the end of the road for Google's results, Moz has some other filters up its sleeve:

[GIF: Keyword Explorer's additional keyword suggestion filters]

Let’s explore the other available Moz keyword filters and examine the discovered keyword results (keywords with unique intent).

Exclude your query terms to get broader ideas: 25 keywords

Most results are longer-tail queries around college tuition, educational expenses, private school tuition, etc., resulting in an even mix of Informational, Navigational, and Transactional keywords:

Based on closely related topics: 35 keywords

One of the more evenly distributed result sets (in terms of search volume) in this example. Most keyword results are around other types of loans or grants: payday loan, pell grants, auto loan, private loans, etc.

Based on broadly related topics and synonyms: 74 keywords

Results are mostly three words or longer and revolve around more specific types of loans: great lakes student loans, wells fargo student loans, student loan chase, etc.

Related to keywords with similar results pages: 187 keywords

Results are mostly long-tail Commercial Investigation queries around loan payments, student loan consolidation, student loan forgiveness for teachers, student loan payment help, etc.

Are questions: 111 keywords

Last, but certainly not least. The crème de la crème of an FAQ page.

Results reveal long-tail student loan questions (mostly Informational), like: can you file bankruptcy on student loans, do student loans affect credit score, are student loans tax deductible, where can i get a student loan, etc.


TL;DR

Conclusion:

Google Keyword Planner has limited search volume data, but continues to provide a broader set of top-of-funnel keywords (high volume, low competition, keeping in mind that "competition" here is an ad metric). Despite the "closely related" filter resulting in a more even percentage of all four keyword types, it provided fairly similar results (35.4% duplicate) to "broad." Commercial Investigation and Informational keyword types were most commonly found.

Moz Keyword Explorer provides more accurate search volume data and a broader set of long-tail keywords (mid-to-low volume, low competition). The many keyword filters provide a wide range of keyword results (17% duplicate in the first two filters) and keyword types, depending on which filter you use. However, Informational, Commercial Investigation, and Navigational keyword types were most commonly found.
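
If you want to sanity-check duplicate percentages like these against your own exports, a few lines of Python will do it. This is a minimal sketch that assumes two CSV downloads, each with a column named "Keyword"; the file names and column name are placeholders, so adjust them to match your actual exports. Note that it uses the combined keyword list as the denominator, which may differ from how the figures above were calculated.

```python
import csv

def load_keywords(path: str, column: str = "Keyword") -> set[str]:
    """Read one CSV export and return its keywords, normalized for comparison."""
    with open(path, newline="", encoding="utf-8") as f:
        return {
            row[column].strip().lower()
            for row in csv.DictReader(f)
            if row.get(column)
        }

def overlap_report(path_a: str, path_b: str) -> None:
    """Print how many keywords the two exports share."""
    a, b = load_keywords(path_a), load_keywords(path_b)
    shared = a & b
    print(f"{path_a}: {len(a)} keywords")
    print(f"{path_b}: {len(b)} keywords")
    print(f"Shared: {len(shared)} ({len(shared) / max(len(a | b), 1):.1%} of the combined list)")

if __name__ == "__main__":
    # Placeholder file names for the two exports you want to compare.
    overlap_report("gkp_broad.csv", "gkp_closely_related.csv")
```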

Pros:

Moz Keyword Explorer:

  • The keyword search volume accuracy (IMO) is the most impressive part of this tool.
  • Better UX.
  • Keyword suggestion filters reveal far more keyword results.
  • The "are questions" filter is incredibly useful for things like FAQ pages and content marketing ideas.
  • Saved keyword lists (that can be refreshed!? Say whaa!?)
  • Detailed SERP data for SERP feature opportunities.
  • Organic competition metric.
  • Ability to prioritize keywords, which influences the Potential metric (for smarter keyword prioritization).

Google Keyword Planner:

  • The ability to view monthly trends, mobile versus desktop searches, and geo-popular areas is wonderful.
  • Can add negative keywords/keywords to not include in results.
  • Sorting by 100 keywords is a nice cadence.
  • Google Sheet download integration.
  • Average keyword bid (for further competition insight).
  • Monthly keyword trend data (on hover).
  • Ability to target specific hyper-local areas.

Cons:

Moz Keyword Explorer:

  • The Min Volume | Max Volume | Difficulty | Opportunity | Importance | Potential columns can be overwhelming.
  • No Google Sheet download integration.
  • No "select all" option.
  • The list of 1,000 keyword results can be daunting when doing lots of keyword research.
  • Inability to target specific local regions.

Google Keyword Planner:

  • Search volume ranges are widely skewed and bucketed.
  • Individuals who start adding keywords from the bottom of a list up (scrolling up) will miss newly populated results.
  • Broad & Closely Related filters tend to provide very similar results.
  • No SERP feature data.
  • Can't save lists.
  • Clunky, slow UX.

Which is right for you?

I’d consider where you want to target people in your sales funnel, and where you need to improve your current website traffic. If you have wide top-of-funnel traffic for your product/service and need to better provide long-tail transactions, check out Moz Keyword Explorer. If you need a brief overview of top-level searches, take a look at Google Keyword Planner results.

Which do I use?

I’m a little ashamed to say that I still use both. Checking Google Keyword Planner gives me the peace of mind that I’m not missing anything. But, Moz Keyword Explorer continues to impress me with its search volume accuracy and ease of list creation. As it gets better with top-of-funnel keywords (and hopefully integrates competition up front) I would love to transition completely over to Moz.

Other keyword research tips:

I've also been a big fan of ubersuggest.io for giving your initial keyword list a boost. You can add your selected keywords directly to Google Keyword Planner or Moz Keyword Explorer for instant keyword data. This can help identify where you should take your keyword research in terms of intent, sub-topics, geography, etc.

Answer the Public is also a great resource for FAQ pages. Just make sure to change the location if you are not based in the UK.

Would love your feedback!

  • Please let me know if you can think of other ways to determine the quality of keywords from each tool.
  • Any other pros/cons that you would add?
  • What other tools have you been using for keyword research?

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

How to Craft the Best Damn E-commerce Page on the Web – Whiteboard Friday

Posted by randfish

From your top-level nav to your seal-the-deal content, there are endless considerations when it comes to crafting your ecommerce page. Using one of his personal favorite examples, Rand takes you step by detailed step through the process of creating a truly superb ecommerce page in today’s Whiteboard Friday.

[Whiteboard image]

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy all and welcome to a special edition of Whiteboard Friday. My name is Rand Fishkin. I'm the founder of Moz, and today I want to talk with you about how to craft the best damn ecommerce page on the web. I'm actually going to be using the example of one of my very favorite ecommerce pages. That is the Bellroy Slim Wallet page. Now, Bellroy makes wallets, and they market them online primarily. They make some fantastic products. I've been an owner of one for a long time, and it was this very page that convinced me to buy it. So what better example to use?

So what I want to do today is walk us through the elements of a fantastic ecommerce page, talk about some things where I think perhaps even Bellroy could improve, and then walk through, at the very end, the process for improving your own ecommerce page.

The elements of a fantastic e-commerce page

So let’s start with number one, the very first thing which a lot of folks, unfortunately, don’t talk about but is critical to a great ecommerce process and a great ecommerce page, and that is…

1. The navigation at the very top

The navigation at the top needs to do a few things. It’s got to help people:

  • Understand and know where they are in the site structure, especially if you have a more complex site. In Bellroy’s case, they don’t really need to highlight anything. You know you’re on a wallet page. That’s probably in Shop, right? But for Amazon, this is critically important. For Best Buy, this is hugely important. Even for places like Samsung and Apple, critical to understand where I am in the site structure.
  • I want to know something about the brand itself. So if this is the first time that someone is visiting the website, which is very often the case with ecommerce pages, they’re often entry points for the first exposure that you have to a brand. Let’s recall, from what we know about conversion rate optimization, it is uncommon, unusual for someone to convert on their first visit to a brand or a website’s page, but you can make a great first impression, and part of that is what your top navigation needs to do. So it should help people identify with the brand, get a sense for the style and the details of who you are.
  • You need to know where, broadly, you can go in the website. Where can I explore from here? If this is my first visit or if this is my second visit and I’m trying to learn a little bit more about the company, I want to be able to easily get to places like About, or I want to be able to easily learn more about their products or what they do, learn more about the potential solutions, learn more about their collections and what other things they offer me.
  • I also want to have this simple navigation around Cart, especially for ecommerce repeat visitors and for folks who are buying more than one thing. I don't, in fact, love how Bellroy minimizes this, but you want to make sure that the Search bar is there as well. Search is actually a function: about 10% to 12% of visitors on average to ecommerce pages will use Search as their primary navigation. So if you make the Search feature really subtle, hard to find, or difficult to use, you can really limit the impact that you have with that group.
  • I want that info about the shopping process that comes from having the Cart. In Bellroy’s case, I love what they do. They actually put “Free shipping in the United States” in their nav on every page, which I think, clearly for them, it must be one of the key questions that they get all the time. I have no doubt that they’ve done some A/B testing and optimization to make sure, “Hey, you know what? Let’s just put it in front of everyone because it doesn’t hurt and it helps to improve our conversion rates.”

2. Core product information

Core product information tends to be that above-the-fold key part here. In Bellroy's case, it's very minimalist. We're just talking about a photo of the wallet itself, and then you can click left or right, or I think sometimes it auto-scrolls as well on desktop but not mobile. I can see a lot more photos of how many cards the wallet can hold and what it looks like in my pants, how it measures up compared to a ruler, and all that kind of stuff. So there's some great photography in here and that's important, as well as the name and the price.

These core details may differ from product to product. For example, if you are selling a more complex piece of technology, the core features may, in fact, be fairly substantive, and that's okay. But we are trying to help. With this core product information, we're trying to help people understand what the product is and what it does. So wallet, very, very obvious. If we're talking about lab equipment or scientific machinery, well, a little more complicated. We better make sure that we're communicating that. We want…

  • Visuals that are going to serve to… in this case, I think they do a great job, but comprehensively communicate the positioning, the positioning of the product itself. So Bellroy is clearly going with minimalist. They’re going with craft. They’re a small, niche shop. They don’t do 10,000 things. They just make wallets, and they are trying to make that very clear. They also are trying to make their quality a big part of this, and they are trying to make the focus of the product itself, the slimness. You can really see that as you go into, well obviously, the naming convention, but also the photography itself, which is showing you just how slim this wallet can be in comparison to bulky other wallets. They take the same number of cards, they put them in two different kinds of wallets, they show you the thickness, and the Bellroy is very, very slim. So that’s clearly what the positioning is going for.
  • Potentially here, we might want video or animation. But I'm going to say that this is only a part of the core content when it truly makes sense. A great example of when it does make sense would be Zappos. Zappos, obviously, has their videos for nearly every shoe and shoe brand that they promote on their website. They saw tremendous conversion rate improvements because people had a lot of questions about how it moves and walks and how it looks with certain pieces of clothing. The detail of having someone explain it to you, as I'm explaining ecommerce pages to you in video form, turned out to have a great impact on their conversion rate. You might want to test this, but it's also the case that this video or animation content might live down below. We'll talk about how that can live in more of the photos and process at the very bottom at the end of this video.
  • Naming convention. We want price. We want core structural details. I like that Bellroy here has made their core content very, very slim, just the photos, the name, and the price.

3. Clear options to the path to purchase

This is somewhere where, I think, a lot of folks unfortunately get torn by the Amazon model. Amazon.com, yes, has phenomenal click-through rates, phenomenal engagement rates, and phenomenal conversion rates, but you are not Amazon. Repeat after me, "I am not Amazon." One of the things that Amazon does is they clutter this page with hundreds of different things that you could do, and they built that up over decades, literally decades. They built it up so that we are all familiar with an Amazon ecommerce page and what we expect on it. We know there's going to be a lot of clutter. We know there's going to be a ton of calls-to-action, other things we could buy, things that are often bought with this, and things that could be bundled with this. That is fine for Amazon. It is almost definitely not fine for you unless you are extremely similar to what Amazon does. For that reason, I see many, many folks getting dragged in this direction of, "Hey, I want to have 10 different calls-to-action because people might want to X, Y, and Z." There are ways to do the "might want to X, Y, and Z" without making those specific calls-to-action in the core part of the landing page for the ecommerce product. I'll talk about those in just a second.

But what I do want you to do here is:

  • Help people understand what is available. Quick example, you can select the color. That is the only thing you can do with this wallet. There are no different sizes. There are no different materials that they could be made of. There’s just color. Color, Checkout, and by the way, once again, free shipping.
  • I am trying to drive them to the primary action, and that is what this section of your ecommerce page needs to do a great job of. Make the options clear, if there are any, and make the path to purchase really, really simple.
  • We’re trying to eliminate roadblocks, we’re trying to eliminate any questions that might arise, and we want to eliminate any future frustration. So, for example, one of the things that I would do here, that Bellroy does not do, is I would geo-target based on IP address. So I’d look at the IP address of the visitor who’s coming to this page, and I would say, “I am pretty sure you are located in Washington State right now. Therefore, I know that this is the sales tax amount that I need to charge.” Or, “Bellroy isn’t in Washington State. I don’t need to charge you sales tax.” So I might have a little thing here that says, “Sales Tax” and then a little drop-down that’s pre-populated with Washington or pre-populated with the ZIP code if you know that and “$0.” That way it’s predictive. It’s saying already, “Oh, good. I know that the next page I’m going to click on is going to ask me about sales tax, or the page after I enter my credit card is.” You know what, it’s great to have that question answered beforehand. Now, maybe Bellroy has tested this and they found that it doesn’t convert as well, but I would guess that it probably, probably would convert even better with that messaging on there.
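
To make that last idea concrete, here's a minimal, hypothetical sketch in Python: estimate_region_from_ip stands in for whatever geo-IP service your stack already uses, and the tax rates are placeholder numbers, not real rates. The point is simply to answer the sales tax question on the product page instead of waiting for checkout.

```python
# Placeholder rates for illustration only; use real, current rates per jurisdiction.
PLACEHOLDER_TAX_RATES = {"WA": 0.065, "OR": 0.0}

def estimate_region_from_ip(ip_address: str) -> str:
    """Hypothetical stand-in for a real geo-IP lookup service."""
    return ""  # unknown by default in this sketch

def sales_tax_hint(ip_address: str, price: float) -> str:
    """Build a pre-filled sales tax line to show near the checkout button."""
    region = estimate_region_from_ip(ip_address)
    rate = PLACEHOLDER_TAX_RATES.get(region)
    if rate is None:
        # Fall back gracefully when we can't guess the region.
        return "Sales tax calculated at checkout"
    return f"Estimated sales tax ({region}): ${price * rate:.2f}"

print(sales_tax_hint("203.0.113.7", 79.00))  # -> "Sales tax calculated at checkout"
```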

4. Detailed descriptions of the features of the product

This is where a lot of the bulk of the content often lives on product pages, on ecommerce pages. In this case, they’ve got a list of features, including all sorts of dimension stuff, how it’s built, what it’s made from, and what it can hold, etc., etc.

What I’m trying to do here is a few things:

  • I want to help people know what to expect from this product. I don’t want high returns. Especially if I’m offering free shipping, I definitely don’t want high returns. I want people to be very satisfied with this product, to know exactly what they’re going to get.
  • I want to help them determine if the product fits their needs, fits what they are trying to accomplish, fits the problem they’re trying to solve.
  • I want to help them, lead them to answers quickly for frequently asked questions. So if I know that lots of people who reach this page have this sort of, “Oh, gosh, you know, I wonder, what is their delivery process like? How long does it take to get to me because I kind of need a wallet for this trip that I’m going on, and, you know, I’m bringing pants that just won’t hold my thick wallet, and that’s what triggered me to search for slim wallets in Google and that’s what led me to this page?” Aha, delivery. Great job. You’ve answered the question before or as they are asking it, and that is really important. We want answers to the unasked questions before people start to panic in the Checkout process.

You can go through this with folks who you say, “Hey, I want you to imagine that you are about to buy this. Give me the 10 things in your head. I want you to say out loud everything that you think when you see this page.” You can do this with actual customers, with customers who are returning, with people who fit your target demographic and target customer profile but have not yet bought from you, with people who’ve bought from your competitors. As you do this, you will find the answers to be very, very similar time after time, and then you can answer them right in this featured content. So warranty is obviously another big one. They note that they have a three-year warranty. You can click plus here, and you can get more information.

I also like that they answer that unasked question. So when they say, “Okay, it’s 80 millimeters by 95 millimeters.” “Man, I don’t know how big a millimeter is. I just can’t hold that information in my head.” But look, they have a link “Compare to Others.” If you click that, it will show you an overlay comparison of this wallet against other wallets that they offer and other wallets that other people offer. Awesome. Fantastic. You are answering that question before I have it.

5. A lot of the seal-the-deal content

When we were talking before about videos or animations or some of the content that maybe belongs in the featured section or possibly could be around Checkout, but doesn’t quite reach the level of importance that we’ve dictated for those, this is where you can put that content. It can live below the fold, scrolling way down. I have yet to see the ecommerce page that has suffered from providing too much detail about things people actually care about. I have seen ecommerce pages suffer from bloating the page with tons of content that no one cares about, especially as it affects page load speed which hurts your conversions on mobile and hurts your rankings in Google because site speed is a real issue. But seal-the-deal content should:

  • Help people get really comfortable and build trust. So if I scroll down here, what I'm seeing is more photos about how the wallet is made and how people are using it. They call this the nude approach, which, cleverly titled as it is, I'm sure makes for a lot of clicks: the nude approach to building a wallet, why the leather is so slender, why it adds so little weight and depth, why it lasts so long, all these kinds of things.
  • It’s trying to use social proof or other psychological triggers to get rid of any remaining skepticism. So if you know what the elements of skepticism are from your potential buyers, you can answer that in this deeper content as people get down and through this.

Now, all right, you might say to yourself, “These all sound like great things. How do I actually run this process, Rand?” The answer is embedded in what we just talked about. You’re going to need to ask your customers, your potential customers, your customers who bought from you before, and customers who did not buy from you but ended up buying from a competitor, about these elements. You’re going to need to test, which means that you need some infrastructure, something like an Unbounce or an Optimizely, or your own testing platform if you feel like building one, your engineers do, in order to be able to change out elements and see how well they convert, change out pieces of information. But it is not helpful to change things like button color, or to change lists of features, or to change out the specific photos when the problem is, overall, you have not solved these problems. If you don’t solve these problems, the best button color in the world will not help your conversion rate nearly enough, which is why we need to form theories and have hypotheses about what’s stopping people from buying. That should be informed by our real research.

SEO for ecommerce pages

SEO for ecommerce pages is based on only a few very, very simple things. Our SEO elements here are keywords, content, engagement, links, and in some cases freshness. You hit these five and you’ve basically nailed it.

  • Keywords, do you call your products the same thing people call your products when they search for them? If the answer is no, you have an opportunity to improve. Even if you want to use a branded name, I would suggest combining that with the name that everyone else calls your things. So if this is the slim sleeve wallet, if historically Bellroy had called this the sleeve wallet, I would highly recommend to them, “Hey, people are searching for slim wallet. How about we find a way to merge those things?”
  • Content is around what is on this page, and Google is looking for content that solves the searcher’s problem, the searcher’s issue. That means doing all of these things right and having it in a format that Google can actually read. Video is great. Transcripts of the video should also be available. Visuals are great. Descriptions should also be available. Google needs that text content.
  • Engagement, that is going to come from people visiting this page and not clicking the back button and going back to Google and searching for other stuff and clicking on your competitor’s links. It’s going to come from people clicking that Checkout button or browsing deeper in the website and from engaging with this page by spending time on the site and not bouncing. That’s your job and responsibility, and this stuff can all help.
  • Links come from press. It can come from blogs. It can come from some high-quality directories. Be very careful in the directory link-building world. It can come from partnerships. It can come from suppliers. It can come from fans of the product. It can come from reviews. All that kind of stuff. People who give you their testimonials, you can potentially ask them for links, so all that kind of stuff. Those links, if they are from diverse sets of domains and they contain good anchor text, meaning the name of your actual product, and they are pointing specifically to this page, they will tremendously help you rank above your competition.
  • Freshness. In some industries and in some cases, when you know that there is a lot of demand for the latest and greatest, you should be updating this page as frequently as you can with the new information that is most pertinent and relevant to your audience.

You do these things, and you do these things, and you will have the best damn ecommerce page on the web.
All right, everyone, thanks for joining us. We’ll see you again hopefully on Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Local Empathy: The New Tool in Your Brand’s Emergency Kit

Posted by MiriamEllis

Implement generosity.

If I could sum up all of the thoughts I’m about to share with local enterprises, it would be with those two words.

Image via Lewis Kelly

Disasters and emergencies are unavoidable challenges faced by all local communities. How businesses respond to these stressful and sometimes devastating events spotlights company policy for cities to see. Once flood waters recede or cyclones trail away, once the dust settles, which of these two brands would you wish to call yours?

How two brands’ reaction to disaster became a reputation-defining moment

As Hurricane Matthew moved toward the southeastern United States this month (in October 2016), millions of citizens evacuated, many of them not knowing where to find safe shelter. Brand A (a franchise location of an international hotel chain) responded by allegedly quadrupling the prices of its rooms — a practice known as ‘price gouging,’ which is illegal during declared emergencies in 34 states. Brand B (the international accommodations entity Airbnb) responded by sourcing thousands of free local rooms from its hosts for victims of the hurricane.

And then professional and social media responded with news stories, social communications, and reviews, trying both brands in the court of public opinion, doling out blame and praise.

This is how reputations are broken and made in today’s connected world, and the extremity of this tragic emergency situation brought two factors into high relief for these two brands:

Culture and preparedness

“I don’t know about the prices. I just run the hotel. I don’t set the prices. Corporate sets the prices.”

This is how the manager of the Brand A hotel franchise location responded when questioned by a TV news reporter regarding alleged price gouging, set amid a backdrop of elders and families with small children unable to afford a room at 4x its normal rate.

“We are deeply troubled by these allegations as they in no way reflect our brand values. This hotel is franchised. We don’t manage inventory or rates.”

This is the official response from corporate issued to the news network, and while Brand A promised to investigate, the public impression was made that the buck was being passed back and forth between different entities while evacuees were in danger. Based on the significant response from social media, including non-guideline-compliant user reviews from people who had never even stayed at this hotel, corporate culture was being perceived as greedy rather than fair to an extreme degree. It’s important to note here that I didn’t encounter a single sentiment expressed by consumers expecting that the rooms at this hotel would be given away for free. It was the quadrupling of the price that brought public condemnation.

Consumers are not privy to the creation of company policy. They aren't able to puzzle out who made the decision to raise prices at this hotel, or at the many other hotels, gas stations, and stores in Florida which viewed an emergency as an opportunity for profit. Doubtless, the concept of supply and demand fuels this type of decision-making, but in an atmosphere lacking adequate transparency, the consumer is left judging whether policy feels fair or unfair, and whether it aligns with their personal value system.

While we’ll likely never know the internal communications which led to this franchise location being cited by the public and investigated by the authorities for alleged price gouging, it is crystal clear that the corporate brand was not prepared in advance with a policy for times of emergency to be enacted by all franchisees. This, then, leaves the franchisee working within vague latitudes of allowable practices, which can result in long-lasting damage to the overall brand, coupled with damage to the local community being served. It’s a scenario of universal negativity and one that certainly can’t be made up for by a few days’ worth of increased profits.

You’ve likely noticed by now that I am specifically not naming this hotel. In the empathetic spirit of the carefully-crafted TAGFEE policy of Moz, my goal here is not to shame a particular business. Rather, it’s my hope that seeing the outcomes of policy will embolden companies to aim high in mirroring the value systems of consumers who reward fairness and generosity with genuine loyalty.

Ideally, I’d love to live in a world in which all businesses are motivated by concern for the common good, but barring this, I would at least like to demonstrate how generous policy is actually good policy and good business. Let’s turn our eyes to Brand B, which lit a beacon of hope in the midst of this recent disaster, as described in this excerpt from Wired:

“This was profound,” says Patrick Meier, a humanitarian technology expert who consults for the World Bank, the Australian Red Cross, and Facebook. “Airbnb changed its code in order to allow people to rent out their place for zero dollars, because you could not do that otherwise.”

Innovation shines brightly in this account of Airbnb recognizing that communities around the world contain considerable resources of goodwill, which can then be amplified via technology.

The company has dedicated its own resources to developing an emergency response strategy, including the hire of a disaster response expert and an overhaul of the website's code to enable free rentals. Thanks to the generosity of hosts who are willing to open their doors to their fellow man in a time of trouble, Airbnb has been able to facilitate relief in more than twenty major global events since 2013. Of course, the best part of this is the lives that have been eased and even saved in times of trouble, but numerous industries should also pay attention to how Airbnb has benefitted from this exemplary outreach.

Here’s a quick sampling of the exceptionally favorable media coverage of the emergency response strategy:

emergency8.jpg

That is a set of national and local references any business would envy. And the comments on articles like this one show just how well the public has received Airbnb’s efforts:

emergency12.jpg

In utter marketing-ese, these consumers have not only been exposed or re-exposed to the Airbnb brand via the article, but have also just gained one new positive association with it. They are on the road to becoming potential brand advocates.

What I appreciate most about this scenario is that, in contrast to Brand A’s situation, this one features universal positivity in which all parties share in the goodwill, and that is literally priceless. And, by taking an organized approach to emergency preparedness and creating policy surrounding it, Airbnb can expect to receive ongoing appreciative notice for their efforts.

Room for hero brands, large and small

The EPA predicts a rise in extreme weather events in the United States due to climate change, including increases in the precipitation and wind of storms in some areas, and the spread of drought in others. Added to inevitable annual occurrences such as tornadoes, blizzards, and earthquakes, there are two questions every intelligent brand should be asking and answering internally right now: How can we help in the short term and how can we help in the long term?

Immediate relief

In the short term, your business can take a cue from Airbnb and discover available resources or develop new ones for providing help in a disaster. I noticed a Hurricane Matthew story in which a Papa John’s pizza deliverer helped a man in Nebraska get in touch with his grandmother in Florida whom he had been trying to reach for three anxious days. What if the pizza chain developed a new emergency preparedness policy from this human interest story, using their delivery fleet to reconnect loved ones… perhaps with a free pizza thrown into the bargain?

Or, there are restaurants with the ability to provide food or a percentage of profits to local food banks if they are lucky enough to still have electricity while their neighbors are less fortunate.

Maybe your company doesn’t have the resources of Everbridge, which has helped some 900+ counties and organizations communicate critical safety information in emergencies, but maybe your supermarket or the lobby of your legal practice can offer a free, warm, dry Wi-Fi hotspot to neighbors in an emergency.

In brief, if your business offers goods and services to your local community, create a plan for how, if you are fortunate enough to escape the worst effects of a disaster, you can share what you have with neighbors in need.

Long-term plans

According to Pew Research, 77% of Latin Americans, 60% of Europeans, 48% of the population of Asia and the Pacific, and 41% of the U.S. population are worried about the immediacy of the impacts of global warming. A global median of 51% say that climate change is affecting people right now.

From a business perspective, this means that the time for your brand to form and announce its plans for contributing to the climate solution is right now. Your efficient, green, and renewable energy practices, if made transparent, can do much to let the public know that not only will you be there for them in the short term in sudden emergencies, but that you are also doing your part to reduce future extreme weather events.

Whether your business model is green-based or you incorporate green practices into your existing brand, sharing what you are doing to be a good neighbor in both the short and long term can earn the genuine goodwill of the local communities you wish to serve.

Do something great

I often imagine the future unlived when I see brands making awkward or self-damaging decisions. I rub my forehead and squint my eyes, envisioning what they might have done differently.

Imagine if Brand A had implemented generosity. Imagine if, instead of raising its prices during that dreadful emergency, Brand A had offered a deep discount on its rooms to be sure that even the least fortunate community members had a secure place to stay during the hurricane. Imagine if they had opened up their lounges and lobbies and invited in homeless veterans for the night, granting them safety in exchange for their service. Imagine if they had warmly reached out to families, letting them know that cherished pets would be welcome during the storm, too.

Imagine the gratitude of those who had been helped.

Imagine the social media response, the links, the news stories, unstructured citations, reviews…

Yes, it might have been unprofitable monetarily. It might have even been mayhem. But it would have been great.

To me, firemen have always exemplified a species of greatness. In moments of extreme danger, they forget themselves and act for the good of others. Imagine putting a fireman’s heart at the heart of your brand, to be brought out during times of emergency. Why not bring it up at the next all-staff meeting? Brainstorm existing resources, develop new ones, write out a plan, make it a policy… Stand tall on the local business scene, stand up, be great!


New Moz Local Product Packages Are Coming in November

Posted by dudleycarr

We’ve been working behind the scenes to make Moz Local serve your needs better than ever. It’s not quite ready yet, but we just can’t hold it in any longer — we had to give you a teaser of what we’re planning.

Moz Local has grown tremendously over the past 2.5 years. Our initial $84 Listing Distribution offering pioneered the use of data aggregators to bring new efficiency and value to local businesses and agencies. We followed that with Search Insights, which added Local SEO analytics that gave businesses insight into how their location data performed in local searches.

The changes we’re making now will help all of our customers — local businesses, enterprise brands, and our agency partners — get the most out of Moz Local by delivering greater business value.

We’ll provide that value by making Moz Local available in 3 different packages:

Moz Local Essential

For the majority of our local business customers who have anywhere from a single location to dozens of locations, Moz Local Essential is our base-level Active Location Data Management and Reputation Monitoring solution. At $99 per year, this new entry-level offering adds Reputation Monitoring, the ability to monitor the latest reviews of your business locations on the most popular review sites, all from one place — while remaining priced at less than 50% of the leading competition.

Moz Local Professional

For Enterprise brands and agencies that need an enterprise-class solution to manage at scale (hundreds to thousands of business locations), Moz Local Professional includes everything in the Essential package — plus Local SEO Analytics to analyze results and make informed decisions that improve local marketing performance, and SEO expertise and support from the Moz Local customer success team.

Moz Local Premium

For Enterprise brands and agencies that have a higher level of business need, Moz Local Premium includes everything in the Professional package — plus all of the advantages that Moz Local has to offer made available via the Moz Local API, and augmented with the full suite of organic SEO tools from Moz.

At all levels, we continue to ensure that Moz Local is the industry’s most effective location data management solution. And as the global leader in SEO software, we’re committed to bringing the power of SEO and location analytics to your increasingly complex local marketing challenges, whether you’re a brand or an agency.

We’re busy putting the finishing touches on these new offerings, but we just couldn’t wait to tell you. On November 17th, we’ll share full details about each of our packages, features, and pricing.


The Technical SEO Renaissance: The Whys and Hows of SEO’s Forgotten Role in the Mechanics of the Web

Posted by iPullRank

Web technologies and their adoption are advancing at a frenetic pace. Content is a game that every type of team and agency plays, so we’re all competing for a piece of that pie. Meanwhile, technical SEO is more complicated and more important than ever before and much of the SEO discussion has shied away from its growing technical components in favor of content marketing.

As a result, SEO is going through a renaissance wherein the technical components are coming back to the forefront and we need to be prepared. At the same time, a number of thought leaders have made statements that modern SEO is not technical. These statements misrepresent the opportunities and problems that have sprouted on the backs of newer technologies. They also contribute to an ever-growing technical knowledge gap within SEO as a marketing field and make it difficult for many SEOs to solve our new problems.

That resulting knowledge gap that’s been growing for the past couple of years influenced me to, for the first time, “tour” a presentation. I’d been giving my Technical SEO Renaissance talk in one form or another since January because I thought it was important to stoke a conversation around the fact that things have shifted and many organizations and websites may be behind the curve if they don’t account for these shifts. A number of things have happened that prove I’ve been on the right track since I began giving this presentation, so I figured it’s worth bringing the discussion here to continue the conversation. Shall we?

An abridged history of SEO (according to me)

It’s interesting to think that the technical SEO has become a dying breed in recent years. There was a time when it was a prerequisite.

Image via PCMag

Personally, I started working on the web in 1995 as a high school intern at Microsoft. My title, like everyone else who worked on the web then, was “webmaster.” This was well before the web profession splintered into myriad disciplines. There was no Front End vs. Backend. There was no DevOps or UX person. You were just a Webmaster.

Back then, before Yahoo, AltaVista, Lycos, Excite, and WebCrawler entered their heyday, we discovered the web by clicking linkrolls, using Gopher, Usenet, IRC, from magazines, and via email. Around the same time, IE and Netscape were engaged in the Browser Wars and you had more than one client-side scripting language to choose from. Frames were the rage.

Then the search engines showed up. Truthfully, at this time, I didn’t really think about how search engines worked. I just knew Lycos gave me what I believed to be the most trustworthy results to my queries. At that point, I had no idea that there was this underworld of people manipulating these portals into doing their bidding.

Enter SEO.

Image via Fox

SEO was born of a cross-section of these webmasters, the subset of computer scientists that understood the otherwise esoteric field of information retrieval and those “Get Rich Quick on the Internet” folks. These Internet puppeteers were essentially magicians who traded tips and tricks in the almost dark corners of the web. They were basically nerds wringing dollars out of search engines through keyword stuffing, content spinning, and cloaking.

Then Google showed up to the party.

Image via droidforums.net

Early Google updates started the cat-and-mouse game that would shorten some perpetual vacations. To condense the last 15 years of search engine history into a short paragraph, Google changed the game from being about content pollution and link manipulation through a series of updates starting with Florida and more recently Panda and Penguin. After subsequent refinements of Panda and Penguin, the face of the SEO industry changed pretty dramatically. Many of the most arrogant “I can rank anything” SEOs turned white hat, started software companies, or cut their losses and did something else. That’s not to say that hacks and spam links don’t still work, because they certainly often do. Rather, Google’s sophistication finally discouraged a lot of people who no longer have the stomach for the roller coaster.

Simultaneously, people started to come into SEO from different disciplines. Well, people always came into SEO from very different professional histories, but it started to attract a lot more actual “marketing” people. This makes a lot of sense because SEO as an industry has shifted heavily into a content marketing focus. After all, we’ve got to get those links somehow, right?

Image via Entrepreneur

Naturally, this begat a lot of marketers marketing to marketers about marketing who made statements like “Modern SEO Requires Almost No Technical Expertise.”

Or one of my favorites, that may have attracted even more ire: “SEO is Makeup.”

Image via Search Engine Land

While I, naturally, disagree with these statements, I understand why these folks would contribute these ideas in their thought leadership. Irrespective of the fact that I’ve worked with both gentlemen in the past in some capacity and know their predispositions towards content, the core point they’re making is that many modern Content Management Systems do account for many of our time-honored SEO best practices. Google is pretty good at understanding what you’re talking about in your content. Ultimately, your organization’s focus needs to be on making something meaningful for your user base so you can deliver competitive marketing.

If you remember the last time I tried to make the case for a paradigm shift in the SEO space, you’d be right in thinking that I agree with that idea fundamentally. However, not at the cost of ignoring the fact that the technical landscape has changed. Technical SEO is the price of admission. Or, to quote Adam Audette, “SEO should be invisible,” not makeup.

Changes in web technology are causing a technical renaissance

In SEO, we often criticize developers for always wanting to deploy the new shiny thing. Moving forward, it’s important that we understand the new shiny things so we can be more effective in optimizing them.

SEO has always had a healthy fear of JavaScript, and with good reason. Despite the fact that search engines have had the technology to crawl the web the same way we see it in a browser for at least 10 years, it has always been a crapshoot as to whether that content actually gets crawled and, more importantly, indexed.

When we’d initially examined the idea of headless browsing in 2011, the collective response was that the computational expense prohibited it at scale. But it seems that even if that is the case, Google believes enough of the web is rendered using JavaScript that it’s a worthy investment.

Over time more and more folks would examine this idea; ultimately, a comment from this ex-Googler on Hacker News would indicate that this has long been something Google understood needed conquering:

This was actually my primary role at Google from 2006 to 2010.

One of my first test cases was a certain date range of the Wall Street Journal’s archives of their Chinese language pages, where all of the actual text was in a JavaScript string literal, and before my changes, Google thought all of these pages had identical content… just the navigation boilerplate. Since the WSJ didn’t do this for its English language pages, my best guess is that they weren’t trying to hide content from search engines, but rather trying to work around some old browser bug that incorrectly rendered (or made ugly) Chinese text, but somehow rendering text via JavaScript avoided the bug.

The really interesting parts were (1) trying to make sure that rendering was deterministic (so that identical pages always looked identical to Google for duplicate elimination purposes) (2) detecting when we deviated significantly from real browser behavior (so we didn’t generate too many nonsense URLs for the crawler or too many bogus redirects), and (3) making the emulated browser look a bit like IE and Firefox (and later Chrome) at the same time, so we didn’t get tons of pages that said “come back using IE” or “please download Firefox”.

I ended up modifying SpiderMonkey’s bytecode dispatch to help detect when the simulated browser had gone off into the weeds and was likely generating nonsense.

I went through a lot of trouble figuring out the order that different JavaScript events were fired off in IE, FireFox, and Chrome. It turns out that some pages actually fire off events in different orders between a freshly loaded page and a page if you hit the refresh button. (This is when I learned about holding down shift while hitting the browser’s reload button to make it act like it was a fresh page fetch.)

At some point, some SEO figured out that random() was always returning 0.5. I’m not sure if anyone figured out that JavaScript always saw the date as sometime in the Summer of 2006, but I presume that has changed. I hope they now set the random seed and the date using a keyed cryptographic hash of all of the loaded javascript and page text, so it’s deterministic but very difficult to game. (You can make the date deterministic for a month and dates of different pages jump forward at different times by adding an HMAC of page content (mod number of seconds in a month) to the current time, rounding down that time to a month boundary, and then subtracting back the value you added earlier. This prevents excessive index churn from switching all dates at once, and yet gives each page a unique date.)

Now, consider these JavaScript usage statistics across the web from BuiltWith:

JavaScript is obviously here to stay. Most of the web is using it to render content in some form or another. This means there’s potential for search quality to plummet over time if Google couldn’t make sense of what content is on pages rendered with JavaScript.

Additionally, Google’s own JavaScript MVW framework, AngularJS, has seen pretty strong adoption as of late. When I attended Google’s I/O conference a few months ago, the recent advancements of Progressive Web Apps and Firebase were being harped upon due to the speed and flexibility they bring to the web. You can only expect that developers will make a stronger push.

Image via Builtwith

Sadly, despite BuiltVisible’s fantastic contributions to the subject, there hasn’t been enough discussion around Progressive Web Apps, Single-Page Applications, and JavaScript frameworks in the SEO space. Instead, there are arguments about 301s vs 302s. Perhaps the latest spike in adoption and the proliferation of PWAs, SPAs, and JS frameworks across different verticals will change that. At iPullRank, we’ve worked with a number of companies who have made the switch to Angular; there’s a lot worth discussing on this specific topic.

Additionally, Facebook’s contribution to the JavaScript MVW frameworks, React, is being adopted for the very similar speed and benefits of flexibility in the development process.

However, regarding SEO, the key difference between Angular and React is that, from the beginning, React had a renderToString function built in which allows the content to render properly from the server side. This makes the question of indexation of React pages rather trivial.

AngularJS 1.x, on the other hand, has birthed an SEO best practice wherein you pre-render pages using a headless browser-driven snapshot appliance such as Prerender.io, Brombone, etc. This is somewhat ironic, as AngularJS is Google’s own product. More on that later.

View Source is dead

As a result of the adoption of these JavaScript frameworks, using View Source to examine the code of a website is an obsolete practice. What you’re seeing in View Source is not the computed Document Object Model (DOM). Rather, you’re seeing the code before it’s processed by the browser. Not understanding why you might need to view a page’s code differently is another case where a more detailed grasp of the technical components of how the web works makes you more effective.

Depending on how the page is coded, you may see variables in the place of actual content, or you may not see the completed DOM tree that’s there once the page has loaded completely. This is the fundamental reason why, as soon as an SEO hears that there’s JavaScript on the page, the recommendation is to make sure all content is visible without JavaScript.

To illustrate the point further, consider this View Source view of Seamless.com. If you look for the meta description or the rel-canonical on this page, you’ll find variables in the place of the actual copy:

If instead you look at the code in the Elements section of Chrome DevTools or Inspect Element in other browsers, you’ll find the fully executed DOM. You’ll see the variables are now filled in with copy. The URL for the rel-canonical is on the page, as is the meta description:

Since search engines are crawling this way, you may be missing out on the complete story of what’s going on if you default to just using View Source to examine the code of the site.
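If you want to sanity-check those values quickly, a couple of lines in the DevTools console will read them straight from the rendered DOM rather than the raw source. This is just an illustrative snippet; the selectors assume standard meta description and rel-canonical markup:

// Run these in the Console panel; they return the values as the browser (and a headless crawler) sees them
document.querySelector('meta[name="description"]').getAttribute('content');
document.querySelector('link[rel="canonical"]').getAttribute('href');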

HTTP/2 is on the way

One of Google’s largest points of emphasis is page speed. An understanding of how networking impacts page speed is definitely a must-have to be an effective SEO.

Before HTTP/2 was announced, the HyperText Transfer Protocol specification had not been updated in a very long time. In fact, we’ve been using HTTP/1.1 since 1999. HTTP/2 is a large departure from HTTP/1.1, and I encourage you to read up on it, as it will make a dramatic contribution to the speed of the web.

Image via Slideshare

Quickly though, one of the biggest differences is that HTTP/2 will make use of one TCP (Transmission Control Protocol) connection per origin and “multiplex” the stream. If you’ve ever taken a look at the issues that Google PageSpeed Insights highlights, you’ll notice that one of the primary things that always comes up is limiting the number of HTTP requests. This is what multiplexing helps eliminate; HTTP/2 opens up one connection to each server, pushing assets across it at the same time, often making determinations of required resources based on the initial resource. With browsers requiring Transport Layer Security (TLS) to leverage HTTP/2, it’s very likely that Google will make some sort of push in the near future to get websites to adopt it. After all, speed and security have been common threads throughout everything in the past five years.

Image via Builtwith

As of late, more hosting providers have been highlighting the fact that they are making HTTP/2 available, which is probably why there’s been a significant jump in its usage this year. The beauty of HTTP/2 is that most browsers already support it and you don’t have to do much to enable it unless your site is not secure.
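To give a sense of how little is involved once TLS is in place, enabling HTTP/2 is often a one-directive change at the server level. Here’s roughly what that looks like in an nginx server block (a sketch with placeholder paths, not a drop-in config):

server {
    listen 443 ssl http2;   # the http2 flag is the only HTTP/2-specific addition
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
    # ...the rest of your existing configuration stays the same
}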

Image via CanIUse.com

Definitely keep HTTP/2 on your radar, as it may be the culmination of what Google has been pushing for.

SEO tools are lagging behind search engines

When I think critically about this, SEO tools have always lagged behind the capabilities of search engines. That’s to be expected, though, because SEO tools are built by smaller teams and the most important things must be prioritized. A lack of technical understanding may lead you to believe the information from the tools you use even when it’s inaccurate.

When you review some of Google’s own documentation, you’ll find that some of my favorite tools are not in line with Google’s specifications. For instance, Google allows you to specify hreflang, rel-canonical, and x-robots in HTTP headers. There’s a huge lack of consistency in SEO tools’ ability to check for those directives.

It’s possible that you’ve performed an audit of a site and found it difficult to determine why a page has fallen out of the index. It very well could be because a developer was following Google’s documentation and specifying a directive in an HTTP header, but your SEO tool did not surface it. In fact, it’s generally better to set these at the HTTP header level than to add bytes to your download time by filling up every page’s <head> with them.
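For reference, here’s roughly what those directives look like when they’re set at the HTTP header level instead of in the <head> (the example.com URLs are placeholders):

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
X-Robots-Tag: noindex
Link: <https://example.com/page>; rel="canonical"
Link: <https://example.com/de/page>; rel="alternate"; hreflang="de"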

Google is crawling headless, despite the computational expense, because they recognize that so much of the web is being transformed by JavaScript. Recently, Screaming Frog made the shift to render the entire page using JS:

To my knowledge, none of the other crawling tools are doing this yet. I do recognize the fact that it would be considerably more expensive for all SEO tools to make this shift because cloud server usage is time-based and it takes significantly more time to render a page in a browser than to just download the main HTML file. How much time?

A ton more time, actually. I wrote a simple script that loads the HTML using both cURL and HorsemanJS. cURL took an average of 5.25 milliseconds to download the HTML of the Yahoo homepage. HorsemanJS, on the other hand, took an average of 25,839.25 milliseconds, or roughly 26 seconds, to render the page. It’s the difference between crawling 686,000 URLs an hour and 138.
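My script isn’t reproduced here, but you can get a feel for the gap yourself by timing a plain download with cURL against a full render in a headless browser. A throwaway PhantomJS sketch like this one (the Yahoo URL is just the example from above) is enough:

// Time a full headless render; compare against a plain download timed with
// curl -s -o /dev/null -w "%{time_total}\n" https://www.yahoo.com/
var page = require('webpage').create();
var start = Date.now();

page.open('https://www.yahoo.com/', function (status) {
    console.log('Full render took ' + (Date.now() - start) + ' ms (status: ' + status + ')');
    phantom.exit();
});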

Ideally, SEO tools would extract the technologies in use on the site or perform some sort of DIFF operation on a few pages and then offer the option to crawl headless if it’s deemed worthwhile.

Finally, Google’s specs on mobile also say that you can use client-side redirects. I’m not aware of a tool that tracks this. Now, I’m not saying leveraging JavaScript redirects for mobile is the way you should do it. Rather, my point is that Google allows it, so we should be able to inspect it easily.

Luckily, until SEO tools catch up, Chrome DevTools does handle a lot of these things. For instance, the HTTP Request and Response headers section will show you x-robots, hreflang, and rel-canonical HTTP headers.

You can also use DevTools’ GeoLocation Emulator to view the web as though you are in a different location. For those of you who have fond memories of the nearEquals query parameter, this is another way you can get a sense of where you rank in precise locations.

Chrome DevTools also allows you to plug in your Android device and control it from your browser. There’s any number of use cases for this from an SEO perspective, but Simo Ahava wrote a great instructional post on how you can use it to debug your mobile analytics setup. You can do the same on iOS devices in Safari if you have a Mac.

What truly are rankings in 2016?

Rankings are a funny thing and, truthfully, have been for some time now. I, myself, was resistant to the idea of averaged rankings when Google rolled them out in Webmaster Tools/Search Console, but average rankings actually make a lot more sense than what we look at in standard ranking tools. Let me explain.

SEO tools pull rankings based on a situation that doesn’t actually exist in the real world. The machines that scrape Google are meant to be clean and otherwise agnostic unless you explicitly specify a location. Effectively, these tools look to understand how rankings would look to users searching for the first time with no context or history with Google. Ranking software emulates a user who is logging onto the web for the first time ever and the first thing they think to do is search for “4ft fishing rod.” Then they continually search for a series of other related and/or unrelated queries without ever actually clicking on a result. Granted, some software may do other things to try and emulate that user, but either way they collect data that is not necessarily reflective of what real users see. And finally, with so many people tracking many of the same keywords so frequently, you have to wonder how much these tools inflate search volume.

The bottom line is that we are ignoring true user context, especially in the mobile arena.

Rankings tools that allow you to track mobile rankings usually let you define one context or they will simply specify “mobile phone” as an option. Cindy Krum’s research indicates that SERP features and rankings will be different based on the combination of user agent, phone make and model, browser, and even the content on their phone.

Rankings tools also ignore the user’s reality of choice. We’re in an era where there are simply so many elements that comprise the SERP, that #1 is simply NOT #1. In some cases, #1 is the 8th choice on the page and far below the fold.

With AdWords having a 4th ad slot, organic being pushed far below the fold, and users not being sure of the difference between organic and paid, being #1 in organic doesn’t mean what it used to. So when we look at rankings reports that tell us we’re number one, we’re often deluding ourselves as to what outcome that will drive. When we report that to clients, we’re not focusing on actionability or user context. Rather, we are focusing entirely on vanity.

Of course, rankings are not a business goal; they’re a measure of potential or opportunity. No matter how much we talk about how they shouldn’t be the main KPI, rankings are still something that SEOs point at to show they’re moving the needle. Therefore we should consider thinking of organic rankings as being relative to the SERP features that surround them.

In other words, I’d like to see rankings include both the standard organic 1–10 ranking as well as the absolute position with regard to Paid, local packs, and featured snippets. Anything else is ignoring the impact of the choices that are overwhelmingly available to the user.

Recently, we’ve seen some upgrades to this effect with Moz making a big change to how they are surfacing features of rankings and I know a number of other tools have highlighted the organic features as well. Who will be the first to highlight the Integrated Search context? After all, many users don’t know the difference.

What is cloaking in 2016?

Cloaking is officially defined as showing search engines something different from what you show users. What does that mean when Google allows adaptive and responsive sites and crawls both headless and text-based? What does that mean when Googlebot respects 304 response codes?

Under adaptive and responsive models, it’s often the case that more or less content is shown for different contexts. This is rare for responsive, as it’s meant to reposition and size content by definition, but some implementations may instead reduce content components to make the viewing context work.

When a site responds to screen resolution by changing what content is shown, and more content exists beyond the resolution that Googlebot renders at, how does Google distinguish that from cloaking?

Similarly, the 304 response code is a way to indicate to the client that the content has not been modified since the last time it visited; therefore, there’s no reason to download it again.

Googlebot adheres to this response code to keep from being a bandwidth hog. So what’s to stop a webmaster from getting one version of the page indexed, changing it, and then returning a 304?
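To make the mechanics concrete, here’s a minimal sketch (my own, using Node’s built-in http module) of a server honoring If-Modified-Since with a 304. It also illustrates how the indexed copy could drift from what’s actually being served:

var http = require('http');

// Pretend the page was last changed on this date
var lastModified = new Date('2016-10-01').toUTCString();

http.createServer(function (req, res) {
    // If the client (or Googlebot) says it already has this version, send 304 with no body
    if (req.headers['if-modified-since'] === lastModified) {
        res.writeHead(304);
        res.end();
        return;
    }
    res.writeHead(200, {
        'Content-Type': 'text/html',
        'Last-Modified': lastModified
    });
    res.end('<html><body>The version of the page that gets indexed</body></html>');
}).listen(8080);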

I don’t know that there are definitive answers to those questions at this point. However, based on what I’m seeing in the wild, these have proven to be opportunities for technical SEOs that are still dedicated to testing and learning.

Crawling

Accessibility of content as a fundamental component that SEOs must examine has not changed. What has changed is the type of analytical effort that needs to go into it. It’s been established that Google’s crawling capabilities have improved dramatically, and people like Eric Wu have done a great job of surfacing the granular detail of those capabilities with experiments like JSCrawlability.com.

Similarly, I wanted to try an experiment to see how Googlebot behaves once it loads a page. Using LuckyOrange, I attempted to capture a video of Googlebot once it gets to the page:

I installed the LuckyOrange script on a page that hadn’t been indexed yet and set it up so that it only fires if the user agent contains “googlebot.” Once I was set up, I then invoked Fetch and Render from Search Console. I’d hoped to see mouse scrolling or an attempt at a form fill. Instead, the cursor never moved and Googlebot was only on the page for a few seconds. Later on, I saw another hit from Googlebot to that URL and then the page appeared in the index shortly thereafter. There was no record of the second visit in LuckyOrange.

While I’d like to do more extensive testing on a bigger site to validate this finding, my hypothesis from this anecdotal experience is that Googlebot will come to the site and make a determination of whether a page/site needs to be crawled using the headless crawler. Based on that, they’ll come back to the site using the right crawler for the job.

I encourage you to give it a try as well. You don’t have to use LuckyOrange — you could use HotJar or anything else like it — but here’s my code for LuckyOrange:

jQuery(function() {
    window.__lo_site_id = XXXX; // your LuckyOrange site ID
    if (navigator.userAgent.toLowerCase().indexOf('googlebot') > -1)
    {
        var wa = document.createElement('script');
        wa.type = 'text/javascript';
        wa.async = true;
        wa.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://cdn') + '.luckyorange.com/w.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(wa, s);
        // Tag the recording with Googlebot so it's easy to find in LuckyOrange
        window._loq = window._loq || [];
        window._loq.push(["tag", "Googlebot"]);
    }
});

The moral of the story, however, is that what Google sees, how often they see it, and so on are still primary questions that we need to answer as SEOs. While it’s not sexy, log file analysis is an absolutely necessary exercise, especially for large-site SEO projects — perhaps now more than ever, due to the complexities of sites. I’d encourage you to listen to everything Marshall Simmonds says in general, but especially on this subject.

To that end, Google’s Crawl Stats in Search Console are utterly useless. These charts tell me what, exactly? Great, thanks Google, you crawled a bunch of pages at some point in February. Cool!

There are any number of log file analysis tools out there, from Kibana in the ELK stack to other tools such as Logz.io. However, the Screaming Frog team has made leaps and bounds in this arena with the recent release of their Log File Analyzer.

Of note with this tool is how easily it handles millions of records, which I hope is an indication of things to come with their Spider tool as well. Irrespective of who makes the tool, the insights that it helps you unlock are incredibly valuable in terms of what’s actually happening.

We had a client last year that was adamant that their losses in organic were not the result of the Penguin update. They believed that it might be due to turning off other traditional and digital campaigns that may have contributed to search volume, or perhaps seasonality or some other factor. Pulling the log files, I was able to layer all of the data from when all of their campaigns were running and show that it was none of those things; rather, Googlebot activity dropped tremendously right after the Penguin update and at the same time as their organic search traffic. The log files made it definitively obvious.

It follows conventionally held SEO wisdom that Googlebot crawls based on the pages that have the highest quality and/or quantity of links pointing to them. In layering the number of social shares, links, and Googlebot visits for our latest clients, we’re finding that there’s more correlation between social shares and crawl activity than links. In the data below, the section of the site with the most links actually gets crawled the least!

These are important insights that you may just be guessing at without taking the time to dig into your log files.
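If you want a quick, code-level starting point before reaching for a full log analysis tool, something as simple as the sketch below (my own illustration, assuming a combined-format access.log) will give you Googlebot hit counts per URL:

var fs = require('fs');
var readline = require('readline');

var counts = {};
var rl = readline.createInterface({ input: fs.createReadStream('access.log') });

rl.on('line', function (line) {
    if (line.indexOf('Googlebot') === -1) return;
    // In combined log format, the request path sits inside the quoted "GET /path HTTP/1.1" section
    var match = line.match(/"[A-Z]+ ([^ ]+) HTTP/);
    if (match) counts[match[1]] = (counts[match[1]] || 0) + 1;
});

rl.on('close', function () {
    Object.keys(counts)
        .sort(function (a, b) { return counts[b] - counts[a]; })
        .slice(0, 20)
        .forEach(function (url) { console.log(counts[url] + '\t' + url); });
});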

How log files help you understand AngularJS

Like any other web page or application, every request results in a record in the logs. But depending on how the server is set up, there are a ton of lessons that can come out of it with regard to AngularJS setups, especially if you’re pre-rendering using one of the snapshot technologies.

For one of our clients, we found that oftentimes when the snapshot system needed to refresh its cache, it took too long and timed out. Googlebot understands these as 5XX errors.

This behavior leads to those pages falling out of the index, and over time we saw pages jump back and forth between ranking very highly and disappearing altogether, or another page on the site taking its place.

Additionally, we found that there were many instances wherein Googlebot was being misidentified as a human user. In turn, Googlebot was served the AngularJS live page rather than the HTML snapshot. However, despite the fact that Googlebot was not seeing the HTML snapshots for these pages, these pages were still making it into the index and ranking just fine. So we ended up working with the client on a test to remove the snapshot system on sections of the site, and organic search traffic actually improved.

This is directly in line with what Google is saying in their deprecation announcement of the AJAX Crawling scheme. They are able to access content that is rendered using JavaScript and will index anything that is shown at load.

That’s not to say that HTML snapshot systems are not worth using. The Googlebot behavior for pre-rendered pages is that they tend to be crawled more quickly and more frequently. My best guess is that this is due to the crawl being less computationally expensive for them to execute. All in all, I’d say using HTML snapshots is still the best practice, but it’s definitely not the only way for Google to see these types of sites.

According to Google, you shouldn’t serve snapshots just for them, but for the speed enhancements that the user gets as well.

In general, websites shouldn’t pre-render pages only for Google — we expect that you might pre-render pages for performance benefits for users and that you would follow progressive enhancement guidelines. If you pre-render pages, make sure that the content served to Googlebot matches the user’s experience, both how it looks and how it interacts. Serving Googlebot different content than a normal user would see is considered cloaking, and would be against our Webmaster Guidelines.

These are highly technical decisions that have a direct influence on organic search visibility. From my experience in interviewing SEOs to join our team at iPullRank over the last year, very few of them understand these concepts or are capable of diagnosing issues with HTML snapshots. These issues are now commonplace and will only continue to grow as these technologies continue to be adopted.

However, if we’re to serve snapshots to the user too, it begs the question: Why would we use the framework in the first place? Naturally, tech stack decisions are ones that are beyond the scope of just SEO, but you might consider a framework that doesn’t require such an appliance, like MeteorJS.

Alternatively, if you definitely want to stick with Angular, consider Angular 2, which supports the new Angular Universal. Angular Universal serves “isomorphic” JavaScript, which is another way to say that it pre-renders its content on the server side.

Angular 2 has a whole host of improvements over Angular 1.x, but I’ll let these Googlers tell you about them.

Since before all of the crazy frameworks reared their confusing heads, Google has had one line of thought about emerging technologies: “progressive enhancement.” With many new IoT devices on the horizon, we should be building websites to serve content for the lowest common denominator of functionality and save the bells and whistles for the devices that can render them.

If you’re starting from scratch, a good approach is to build your site’s structure and navigation using only HTML. Then, once you have the site’s pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses.

In other words, make sure your content is accessible to everyone. Shoutout to Fili Wiese for reminding me of that.
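As a trivial illustration of that guideline, the markup below (hypothetical, not from any site mentioned here) keeps the navigation as plain crawlable links and layers the AJAX behavior on top only for browsers that run the script:

<ul id="category-nav">
    <li><a href="/shoes/">Shoes</a></li>
    <li><a href="/boots/">Boots</a></li>
</ul>
<script>
    // Enhancement only: capable browsers get the AJAX experience,
    // while crawlers and older clients still see and follow the plain links above
    var links = document.querySelectorAll('#category-nav a');
    for (var i = 0; i < links.length; i++) {
        links[i].addEventListener('click', function (event) {
            event.preventDefault();
            loadCategoryViaAjax(this.href); // hypothetical function that swaps content in without a full page load
        });
    }
</script>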

Scraping is the fundamentally flawed core of SEO analysis

Scraping is fundamental to everything that our SEO tools do. cURL is a library for making and handling HTTP requests. Most popular programming languages have bindings for the library and, as such, most SEO tools leverage the library or something similar to download web pages.

Think of cURL as working similarly to downloading a single file from an FTP server; for a web page, that single file doesn’t let you view the page in its entirety, because you’re not downloading all of the required files.

This is a fundamental flaw of most SEO software for the very same reason View Source is not a valuable way to view a page’s code anymore. Because there are a number of JavaScript and/or CSS transformations that happen at load, and Google is crawling with headless browsers, you need to look at the Inspect (element) view of the code to get a sense of what Google can actually see.

This is where headless browsing comes into play.

One of the more popular headless browsing libraries is PhantomJS. Many tools outside of the SEO world are written using this library for browser automation. Netflix even has one for scraping and taking screenshots called Sketchy. PhantomJS is built from a rendering engine called QtWebkit, which is to say it’s forked from the same code that Safari (and Chrome before Google forked it into Blink) is based on. While PhantomJS is missing the features of the latest browsers, it has enough features to support most things we need for SEO analysis.
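To show how little code it takes, here’s a bare-bones PhantomJS script (illustrative, not pulled from any of the tools above) that prints the rendered DOM of a page rather than its raw source:

var page = require('webpage').create();

page.open('https://www.example.com/', function (status) {
    if (status !== 'success') {
        console.log('Failed to load the page');
        phantom.exit(1);
        return;
    }
    // Give client-side JavaScript a moment to finish building the DOM before reading it
    window.setTimeout(function () {
        console.log(page.content); // the computed DOM, not what View Source shows
        phantom.exit();
    }, 2000);
});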

As you can see from the GitHub repository, HTML snapshot software such as Prerender.io is written using this library as well.

PhantomJS has a series of wrapper libraries that make it quite easy to use in a variety of different languages. For those of you interested in using it with NodeJS, check out HorsemanJS.

For those of you that are more familiar with PHP, check out PHP PhantomJS.

A more recent and better-qualified addition to the headless browser party is Headless Chromium. As you might have guessed, this is a headless version of the Chrome browser. If I were a betting man, I’d say what we’re looking at here is some sort of toned-down fork of Googlebot.

To that end, this is probably something that SEO companies should consider when rethinking their own crawling infrastructure in the future, if only for a premium tier of users. If you want to know more about Headless Chrome, check out what Sami Kyostila and Alex Clarke (both Googlers) had to say at BlinkOn 6:

Using in-browser scraping to do what your tools can’t

Although many SEO tools cannot examine the fully rendered DOM, that doesn’t mean that you, as an individual SEO, have to miss out. Even without leveraging a headless browser, Chrome can be turned into a scraping machine with just a little bit of JavaScript. I’ve talked about this at length in my “How to Scrape Every Single Page on the Web” post. Using a little bit of jQuery, you can effectively select and print anything from a page to the JavaScript Console and then export it to a file in whatever structure you prefer.

Scraping this way allows you to skip a lot of the coding that’s required to make sites believe you’re a real user, like authentication and cookie management that has to happen on the server side. Of course, this way of scraping is good for one-offs rather than building software around.
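For a flavor of what that looks like, a few lines like these in the Console (hypothetical selectors, and assuming the page already loads jQuery or you inject it) will pull out a tab-separated list you can paste into a spreadsheet:

var rows = [];
jQuery('h2 a').each(function () {
    rows.push(jQuery(this).text().trim() + '\t' + this.href);
});
console.log(rows.join('\n'));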

ArtooJS is a bookmarklet made to support in-browser scraping and automating scraping across a series of pages and saving the results to a file as JSON.

A more fully featured solution for this is the Chrome Extension, WebScraper.io. It requires no code and makes the whole process point-and-click.

How to approach content and linking from the technical context

Much of what SEO has been doing for the past few years has devolved into the creation of more content for more links. I don’t know that adding anything to the discussion around how to scale content or build more links is of value at this point, but I suspect there are some opportunities for existing links and content that are not top-of-mind for many people.

Google Looks at Entities First

Googlers announced recently that they look at entities first when reviewing a query. An entity is Google’s representation of proper nouns in their system to distinguish persons, places, and things, and inform their understanding of natural language. At this point in the talk, I ask people to put their hands up if they have an entity strategy. I’ve given the talk a dozen times now, and only two people have ever raised their hands.

Bill Slawski is the foremost thought leader on this topic, so I’m going to defer to his wisdom and encourage you to read:

I would also encourage you to use a natural language processing tool like AlchemyAPI or MonkeyLearn. Better still, use Google’s own Natural Language Processing API to extract entities. The difference between your standard keyword research and entity strategies is that your entity strategy needs to be built from your existing content. So in identifying entities, you’ll want to do your keyword research first and then run those landing pages through an entity extraction tool to see how they line up. You’ll also want to run your competitor landing pages through those same entity extraction APIs to identify what entities are being targeted for those keywords.
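As a rough sketch of what that extraction step might look like in practice (my own illustration, assuming you’ve created an API key and exported it as API_KEY), you can POST landing page copy to the Natural Language API’s analyzeEntities endpoint and log what comes back:

var https = require('https');

var body = JSON.stringify({
    document: { type: 'PLAIN_TEXT', content: 'Paste the landing page copy you want to analyze here.' },
    encodingType: 'UTF8'
});

var req = https.request({
    hostname: 'language.googleapis.com',
    path: '/v1/documents:analyzeEntities?key=' + process.env.API_KEY,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
}, function (res) {
    var data = '';
    res.on('data', function (chunk) { data += chunk; });
    res.on('end', function () {
        // Each entity comes back with a type (PERSON, LOCATION, etc.) and a salience score
        JSON.parse(data).entities.forEach(function (entity) {
            console.log(entity.name + ' (' + entity.type + ') salience: ' + entity.salience);
        });
    });
});

req.write(body);
req.end();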

TF*IDF

Similarly, Term Frequency/Inverse Document Frequency, or TF*IDF, is a natural language processing technique that doesn’t get much discussion on this side of the pond. In fact, topic modeling algorithms have been the subject of much heated debate in the SEO community in the past. The issue of concern is that topic modeling tools have the tendency to push us back towards the Dark Ages of keyword density, rather than considering the idea of creating content that has utility for users. However, in many European countries they swear by TF*IDF (or WDF*IDF — Within Document Frequency/Inverse Document Frequency) as a key technique that drives up organic visibility even without links.
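If the math is unfamiliar, the toy calculation below (my own example, not from any of the tools discussed) shows the core idea: a term scores highly for a document when it’s frequent there but rare across the rest of the corpus:

function tfidf(term, doc, corpus) {
    var tf = doc.filter(function (w) { return w === term; }).length / doc.length;
    var docsWithTerm = corpus.filter(function (d) { return d.indexOf(term) > -1; }).length;
    var idf = Math.log(corpus.length / (1 + docsWithTerm));
    return tf * idf;
}

var corpus = [
    'rendering javascript requires a headless browser for crawling'.split(' '),
    'content marketing is about earning links and shares'.split(' '),
    'links and shares drive crawling of popular pages'.split(' ')
];

// 'rendering' appears only in the first document, so it scores higher there
// than 'crawling', which shows up in other documents too
console.log(tfidf('rendering', corpus[0], corpus));
console.log(tfidf('crawling', corpus[0], corpus));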

After hanging out in Germany a bit last year, some folks were able to convince me that taking another look at TF*IDF was worth it. So, we did and then we started working it into our content optimization process.

In Searchmetrics’ 2014 study of ranking factors, they found that while TF*IDF itself actually had a negative correlation with visibility, relevant and proof terms had strong positive correlations.

Image via Searchmetrics

Based on their examination of these factors, Searchmetrics made the call to drop TF*IDF from their analysis altogether in 2015 in favor of the proof terms and relevant terms. Year over year the positive correlation holds for those types of terms, albeit not as high.

Images via Searchmetrics

In Moz’s own 2015 ranking factors, we find that LDA- and TF*IDF-related items remain among the highest on-page content factors.

In effect, no matter what model you look at, the general idea is to use related keywords in your copy in order to rank better for your primary target keyword, because it works.

Now, I can’t say we’ve examined the tactic in isolation, but I can say that the pages that we’ve optimized using TF*IDF have seen bigger jumps in rankings than those without it. While we leverage OnPage.org’s TF*IDF tool, we don’t follow it using hard and fast numerical rules. Instead, we allow the related keywords to influence ideation and then use them as they make sense.

At the very least, this sort of technical optimization of content needs to be revisited. While you’re at it, you should consider the other tactics that Cyrus Shepard called out as well in order to get more mileage out of your content marketing efforts.

302s vs 301s — seriously?

As of late, a reexamination of the 301 vs. 302 redirect has come back up in the SEO echo chamber. I get the sense that Webmaster Trends Analysts in the public eye either like attention or are just bored, so they’ll issue vague tweets just to see what happens.

For those of you who prefer to do work rather than wait for Gary Illyes to tweet, all I’ve got is some data to share.

Once upon a time, we worked with a large media organization. As is par for the course with these types of organizations, their tech team was resistant to implementing much of our recommendations. Yet they had millions of links both internally and externally pointing to URLs that returned 302 response codes.

After many meetings, and a more compelling business case, the one substantial thing that we were able to convince them to do was switch those 302s into 301s. Nearly overnight there was an increase in rankings in the 1–3 rank zone.

Despite seasonality, there was a jump in organic Search traffic as well.

To reiterate, the only substantial change at this point was the 302 to 301 switch. It resulted in a few million more organic search visits month over month. Granted, this was a year ago, but until someone can show me the same happening or no traffic loss when you switch from 301s to 302s, there’s no discussion for us to have.

Internal linking, the technical approach

Under the PageRank model, it’s an axiom that the flow of link equity through the site is an incredibly important component to examine. Unfortunately, so much of the discussion with clients is only on the external links and not about how to better maximize the link equity that a site already has.

There are a number of tools out there that bring this concept to the forefront. For instance, Searchmetrics calculates and visualizes the flow of link equity throughout the site. This gives you a sense of where you can build internal links to make other pages stronger.

Additionally, Paul Shapiro put together a compelling post on how you can calculate a version of internal PageRank for free using the statistical computing software R.

Either of these approaches is incredibly valuable for offering more visibility to content, and both fall very much in the bucket of what technical SEO can offer.
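For those who’d rather not jump straight into R, the simplified sketch below (my own toy version, not Paul Shapiro’s script) runs a basic power iteration over a small internal link graph and shows which URLs accumulate the most internal equity:

function internalPageRank(graph, iterations, damping) {
    var pages = Object.keys(graph);
    var rank = {};
    pages.forEach(function (p) { rank[p] = 1 / pages.length; });

    for (var i = 0; i < iterations; i++) {
        var next = {};
        pages.forEach(function (p) { next[p] = (1 - damping) / pages.length; });
        pages.forEach(function (p) {
            graph[p].forEach(function (target) {
                next[target] += damping * rank[p] / graph[p].length;
            });
        });
        rank = next;
    }
    return rank;
}

// Toy graph: every page listed here links out to at least one other page
var internalLinks = {
    '/': ['/products/', '/blog/'],
    '/products/': ['/'],
    '/blog/': ['/', '/products/']
};

console.log(internalPageRank(internalLinks, 20, 0.85));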

Structured data is the future of organic search

The popular one-liner is that Google is looking to become the presentation layer of the web. I say, help them do it!

There has been much discussion about how Google is taking our content and attempting to cut our own websites out of the picture. With the traffic boon that the industry has seen from sites making it into the featured snippet, it’s pretty obvious that, in many cases, there’s more value for you in Google taking your content than in them not.

With voice search on mobile devices and the forthcoming Google Home, there’s only one answer that the user receives. That is to say that the Star Trek computer Google is building is not going to read every result — just one. These answers are fueled by rich cards and featured snippets, which are in turn fueled by structured data.

Google has actually done us a huge favor regarding structured data in updating the specifications to allow JSON-LD. Before this, Schema.org was a matter of making very tedious and specific changes to code with little ROI. Now structured data powers a number of components of the SERP and can simply be placed in the <head> of a document quite easily. Now is the time to revisit implementing the extra markup. Builtvisible’s guide to Structured Data remains the gold standard.
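As a simple example of how painless the JSON-LD flavor is, a block like this (with placeholder values) can be dropped into the <head> of a page:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://twitter.com/example",
    "https://www.facebook.com/example"
  ]
}
</script>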

Page speed is still Google’s obsession

Google has very aggressive expectations around page speed, especially for the mobile context. They want the above-the-fold content to load within one second. However, 800 milliseconds of that time is pretty much out of your control.

Image via Google

Based on what you can directly affect, as an SEO, you have 200 milliseconds to make content appear on the screen. A lot of what can be done on-page to influence the speed at which things load is optimizing the page for critical rendering path.

Image via Nianpeng Li

To understand this concept, first we have to take a bit of a step back to get a sense of how browsers construct a web page.

  1. The browser takes the uniform resource locator (URL) that you specify in your address bar and performs a DNS lookup on the domain name.
  2. Once a socket is open and a connection is negotiated, it then asks the server for the HTML of the page you’ve requested.
  3. The browser begins to parse the HTML into the Document Object Model until it encounters CSS, then it starts to parse the CSS into the CSS Object Model.
  4. If at any point it runs into JavaScript, it will pause the DOM and/or CSSOM construction until the JavaScript completes execution, unless it is asynchronous.
  5. Once all of this is complete, the browser constructs the Render Tree, which then builds the layout of the page and finally the elements of the page are painted.

In the Timeline section of Chrome DevTools, you can see the individual operations as they happen and how they contribute to load time. In the timeline at the top, you’ll always see the visualization as mostly yellow because JavaScript execution takes the most time out of any part of page construction. JavaScript causes page construction to halt until the script execution is complete. This is called “render-blocking” JavaScript.

That term may sound familiar to you because you’ve poked around in PageSpeed Insights looking for answers on how to make improvements and “Eliminate Render-blocking JavaScript” is a common one. The tool is primarily built to support optimization for the Critical Rendering Path. A lot of the recommendations involve issues like sizing resources statically, using asynchronous scripts, and specifying image dimensions.
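The fix for render-blocking scripts is often as simple as marking non-critical scripts async or defer so the parser can keep building the DOM (the file names here are placeholders):

<!-- async: download in parallel and execute as soon as it arrives -->
<script src="/js/analytics.js" async></script>
<!-- defer: download in parallel, execute only after the document has been parsed -->
<script src="/js/non-critical-widgets.js" defer></script>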

Additionally, external resources contribute significantly to page load time. For instance, I always see Chartbeat’s library taking 3 or more seconds just to resolve the DNS. These are all things that need to be reviewed when considering how to make a page load faster.

If you know much about the Accelerated Mobile Pages (AMP) specification, a lot of what I just highlighted might sound very familiar to you.

Essentially, AMP exists because Google believes the general public is bad at coding. So they made a subset of HTML and threw a global CDN behind it to make your pages hit the 1 second mark. Personally, I have a strong aversion to AMP, but as many of us predicted at the top of the year, Google has rolled AMP out beyond just the media vertical and into all types of pages in the SERP. The roadmap indicates that there is a lot more coming, so it’s definitely something we should dig into and look to capitalize on.

Using pre-browsing directives to speed things up

To support site speed improvements, most browsers have pre-browsing resource hints. These hints allow you to indicate to the browser that a file will be needed later in the page, so while the components of the browser are idle, it can download or connect to those resources now. Chrome specifically looks to do these things automatically when it can, and may ignore your specification altogether. However, these directives operate much like the rel-canonical tag — you’re more likely to get value out of them than not.

Image via Google

  • Rel-preconnect – This directive allows you to resolve the DNS, initiate the TCP handshake, and negotiate the TLS tunnel between the client and server before you need to. When you don’t do this, these things happen one after another for each resource rather than simultaneously. As the diagram below indicates, in some cases you can shave nearly half a second off just by doing this. Alternatively, if you just want to resolve the DNS in advance, you could use rel-dns-prefetch.

    If you see a lot of idle time in your Timeline in Chrome DevTools, rel-preconnect can help you shave some of that off.

    You can specify rel-preconnect with

    <link rel="preconnect" href="https://domain.com">

    or rel-dns-prefetch with

    <link rel="dns-prefetch" href="//domain.com">

  • Rel-prefetch – This directive allows you to download a resource for a page that will be needed in the future. For instance, if you want to pull the stylesheet of the next page or download the HTML for the next page, you can do so by specifying it as
    <link rel="prefetch" href="nextpage.html">
  • Rel-prerender – Not to be confused with the aforementioned Prerender.io, rel-prerender is a directive that allows you to load an entire page and all of its resources in an invisible tab. Once the user clicks a link to go to that URL, the page appears instantly. If the user instead clicks on a link that you did not specify as the rel-prerender, the prerendered page is deleted from memory. You specify the rel-prerender as follows:
    <link rel="prerender" href="nextpage.html">

    I’ve talked about rel-prerender in the past in my post about how I improved our site’s speed 68.35% with one line of code.

    There are a number of caveats that come with rel-prerender, but the most important one is that you can only specify one page at a time and only one rel-prerender can be specified across all Chrome threads. In my post I talk about how to leverage the Google Analytics API to make the best guess at the URL the user is likely going to visit next.

    If you’re using an analytics package that isn’t Google Analytics, or if you have ads on your pages, prerender hits will be falsely counted as actual views of the page. What you’ll want to do is wrap any JavaScript that you don’t want to fire until the page is actually in view in the Page Visibility API, so that you only fire analytics or show ads when the page is actually visible (see the sketch after this list).

    Finally, keep in mind that rel-prerender does not work with Firefox, iOS Safari, Opera Mini, or Android’s browser. Not sure why they didn’t get invited to the pre-party, but I wouldn’t recommend using it on a mobile device anyway.

  • Rel-preload and rel-subresource – Following the same pattern as above, rel-preload and rel-subresource allow you to load things within the same page before they are needed. Rel-subresource is Chrome-specific, while rel-preload works for Chrome, Android, and Opera.
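
Tying back to the rel-prerender caveat above, here’s a minimal sketch of gating analytics behind the Page Visibility API (trackPageview() is a hypothetical stand-in for whatever analytics call you actually use):

// Don't record a pageview while the page is prerendering in a hidden tab;
// wait until the user actually brings it into view.
function firePageview() {
  trackPageview(); // hypothetical analytics call
}

if (document.visibilityState === 'visible') {
  firePageview();
} else {
  document.addEventListener('visibilitychange', function onVisible() {
    if (document.visibilityState === 'visible') {
      document.removeEventListener('visibilitychange', onVisible);
      firePageview();
    }
  });
}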

Finally, keep in mind that Chrome is sophisticated enough to attempt all of these things on its own. Your resource hints simply give it the confidence it needs to act on them. Chrome makes a series of predictions based on everything you type into the address bar, and it keeps track of whether those predictions were right in order to decide what to preconnect and prerender for you. Check out chrome://predictors to see what Chrome has been predicting based on your behavior.

Image via Google

Where does SEO go from here?

Being a strong SEO requires a series of skills that’s difficult for a single person to be great at. For instance, an SEO with strong technical skills may find it difficult to perform effective outreach or vice-versa. Naturally, SEO is already stratified between on- and off-page in that way. However, the technical skill requirement has continued to grow dramatically in the past few years.

There are a number of skills that have always given technical SEOs an unfair advantage, such as web and software development skills or even statistical modeling skills. Perhaps it’s time to officially further stratify technical SEO from traditional content-driven on-page optimizations, since much of the skillset required is more that of a web developer and network administrator than that of what is typically thought of as SEO (at least at this stage in the game). As an industry, we should consider a role of an SEO Engineer, as some organizations already have.

At the very least, the SEO Engineer will need to have a grasp of all of the following to truly capitalize on these technical opportunities:

  • Document Object Model – An understanding of the building blocks of web browsers is fundamental to understanding how front-end developers manipulate the web as they build it.
  • Critical Rendering Path – An understanding of how a browser constructs a page and what goes into the rendering of the page will help with the speed enhancements that Google is more aggressively requiring.
  • Structured Data and Markup – An understanding of how metadata can be specified to influence how Google understands the information being presented (see the brief sketch after this list).
  • Page Speed – An understanding of the rest of the coding and networking components that impact page load times is the natural next step to getting page speed up. Of course, this is a much bigger deal than SEO, as it impacts the general user experience.
  • Log File Analysis – An understanding of how search engines traverse websites and what they deem as important and accessible is a requirement, especially with the advent of new front-end technologies.
  • SEO for JavaScript Frameworks – An understanding of the implications of leveraging one of the popular frameworks for front-end development, as well as a detailed understanding of how, why, and when an HTML snapshot appliance may be required and what it takes to implement them is critical. Just the other day, Justin Briggs collected most of the knowledge on this topic in one place and broke it down to its components. I encourage you to check it out.
  • Chrome DevTools – An understanding of one of the most powerful tools in the SEO toolkit, the Chrome web browser itself. Chrome DevTools’ features coupled with a few third-party plugins close the gaps for many things that SEO tools cannot currently analyze. The SEO Engineer needs to be able to build something quick to get the answers to questions that were previously unasked by our industry.
  • Accelerated Mobile Pages & Facebook Instant Articles – Facebook Instant Articles is a similar specification to AMP, and if the AMP Roadmap is any indication, I suspect it will be difficult for the two to continue to exist separately.
  • HTTP/2 – An understanding of how this protocol will dramatically change the speed of the web and the SEO implications of migrating from HTTP/1.1.
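
To make the Structured Data point above concrete, here’s a minimal, hypothetical JSON-LD block (the headline, author name, and date are placeholders) showing what it looks like to tell Google that a page is an Article:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "A Placeholder Headline",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2016-10-20"
}
</script>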

Let’s Make SEO Great Again

One of the things that always made SEO interesting and its thought leaders so compelling was that we tested, learned, and shared that knowledge so heavily. It seems that that culture of testing and learning was drowned in the content deluge. Perhaps many of those types of folks disappeared as the tactics they knew and loved were swallowed by Google’s zoo animals. Perhaps our continually eroding data makes it more and more difficult to draw strong conclusions.

Whatever the case, right now, there are far fewer people publicly testing and discovering opportunities. We need to demand more from our industry, our tools, our clients, our agencies, and ourselves.

Let’s stop chasing the content train and get back to making experiences that perform.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

A Guide on How to Use XPath and Text Analysis to Pitch Content

Posted by petewailes

In my day-to-day role at Builtvisible, I build tools to break down marketing challenges and simplify tasks. One of the things we as marketers often need to do is pitch content concepts to sites. To make this easier, you want to pitch something on-topic. To do that more effectively, I decided to spend some time creating a process to help in the ideation stage.

In the spirit of sharing, I thought I’d show you how that process was created and share it with you all.

Tell me what you write

The first challenge is making sure that your content will be on-topic. The starting point, therefore, needs to be creating a title that relates to the site’s own recent content. Assuming the site has a blog or recent news area, you can use XPath to help with that.

Here we see the main Moz blog page. Lots of posts with titles. If we use Chrome and open up Web Inspector, we see the following:

We can see here the element that corresponds to a single blog post title. Right-click the element, hover over “Copy,” and we can copy the XPath to it.

Now we’re going to need a handy little Chrome plugin called XPath Helper. Once installed, we can open it and paste our XPath into XPath Helper. That’ll highlight the title we copied the path to. In this case, that XPath looks like this:

//*[@id="wrap"]/main[1]/div[1]/article[1]/header/h2/a</pre>

This only selects one title, though. Fortunately, we can modify this to pick up all the titles. That XPath looks like this:

//*[@id="wrap"]/main/div/article/header/h2/a</pre>

By removing the nth selectors (where it says [1]), we can make it select all instances of links in h2 headings in headers in articles. This will create a list of all the titles we need in the results box of XPath Helper. Doing that, I got the following…
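
If you’d rather skip the plugin, the same extraction can be run straight from the Chrome DevTools console using its built-in $x() and copy() utilities. A rough sketch, assuming the same page structure:

// Grab the text of every matching title link on the current page
var titles = $x('//*[@id="wrap"]/main/div/article/header/h2/a').map(function (link) {
  return link.textContent.trim();
});

// Put the list on your clipboard, one title per line
copy(titles.join('\n'));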

Recent Moz post titles

  • Digital Strategy Basics: The What, the Why, & the How
  • Should My Landing Page Be SEO-Focused, Conversion-Focused, or Both? – Whiteboard Friday
  • A Different Kind of SEO: 5 Big Challenges One Niche Faces in Google
  • Google’s Rolling Out AMP to the Main SERPs – Are You Prepared?
  • Diagramming the Story of a 1-Star Review
  • Moz Content Gets More Robust with the Addition of Topic Trends
  • Wake Up, SEOs – the NEW New Google is Here
  • 301 Redirects Rules Change: What You Need to Know for SEO
  • Should SEOs and Marketers Continue to Track and Report on Keyword Rankings? – Whiteboard Friday
  • Case Study: How We Created Controversial Content That Earned Hundreds of Links
  • Ranking #0: SEO for Answers
  • The Future of e-Commerce: What if Users Could Skip Your Site?
  • Does Voice Search and/or Conversational Search Change SEO Tactics or Strategy? – Whiteboard Friday
  • Architecting a Unicorn: SEO & IA at Envato (A Podcast by True North)

Doing this for a few pages gave me a handy list of titles. This can then be plugged into a text analysis tool like this one, which lets us see what the posts are about. This is especially useful when we may have lists of hundreds of titles.

Having done this, I got a table of phrases from which I could determine what Moz likes to feature. For example:

Top two-word phrases and their number of occurrences:

  • how to: 13
  • guide to: 6
  • accessibility seo: 4
  • local seo: 3
  • for accessibility: 3
  • in 2016: 2
  • online marketing: 2
  • how google: 2
  • you need: 2
  • future of: 2
  • conversion rates: 2
  • the future: 2
  • seo for: 2
  • long tail: 2
  • 301 redirects: 2
Assuming that Moz is writing about things people care about, we can look at this and make a few educated guesses. “How,” “guide,” and “you need” sound like phrases around educating how to do specific tasks. “Future of” and “the future” indicate people might be looking for ways to stay ahead of the curve. And, of course, “SEO” turns up with various modifiers. A blog post that might resonate with the Moz crowd, then, would be something focused on unpacking a tactic that delivers results and that not many people are using yet.
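
If you’d rather script the phrase counting yourself instead of relying on a third-party tool, a rough sketch in plain JavaScript could look like this (assuming titles is the array of titles gathered earlier):

// Count every adjacent two-word phrase across the collected titles
function topTwoWordPhrases(titles) {
  var counts = {};
  titles.forEach(function (title) {
    var words = title.toLowerCase().replace(/[^a-z0-9\s]/g, ' ').split(/\s+/).filter(Boolean);
    for (var i = 0; i < words.length - 1; i++) {
      var phrase = words[i] + ' ' + words[i + 1];
      counts[phrase] = (counts[phrase] || 0) + 1;
    }
  });
  // Return phrases sorted by frequency, most common first
  return Object.keys(counts)
    .sort(function (a, b) { return counts[b] - counts[a]; })
    .map(function (phrase) { return phrase + ': ' + counts[phrase]; });
}

console.log(topTwoWordPhrases(titles).slice(0, 15).join('\n'));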

Who’s writing what?

So we’ve decided we’re going to write a guide about something to do with SEO, focused on enabling SEOs to better address a task. Where do we go from here?

In the course of creating ideas for what became this post (and a few other posts), I started to turn to other sites that I knew the community hung around on, and used the same trick with XPath and content analysis on those areas. (For the sake of completeness, I looked at Inbound, HackerNews, Lobsters, and Twitter.) Things that came up repeatedly included content marketing, {insert type here} content, and phrases around the idea of effective/creative/innovative methods to {insert thing here}.

With this in mind, I had a sit and a think about what I do when I want to pitch something, and how I’ve optimized that process over the years for speed and efficacy. It fit into the types of content Moz seems to like, and what the community at large is talking about at the moment, with a twist that is reasonably unique.

The same data gives a list of people who are interested in and writing about similar stories. This makes it easy to create a list of people to reach out to for research, who you can get to contribute, and who’ll be happy to promote it when it’s live. Needless to say, in a world where content is anything but scarce, that network of people shouting about what you’ve created is going to help you get the word out and make the community take more notice of it.

Taking this further

For the moment, and because I’m a developer first, I don’t have much problem with the slightly technical and convoluted nature of this. However, as SEOs, you might want to swap out some of the tools. You could, for example, use Screaming Frog to compile the titles, and people might want to use their own text analysis tools to break down phrases, remove stop words, and other useful things.

If you’ve got any similar processes or any ideas of how you would extend this, I’d love to hear about it in the comments!


Content Gating: When, Whether, and How to Put Your Content Behind an Email/Form Capture – Whiteboard Friday

Posted by randfish

Have you ever considered gating your content to get leads? Whether you choose to have open-access content or gate it to gather information, there are benefits and drawbacks you should be aware of. In today’s Whiteboard Friday, Rand weighs the pros and cons of each approach and shares some tips for improving your process, regardless of whichever route you go.

http://ift.tt/2ervjB7

http://ift.tt/1GaxkYO

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about content gating.

This is something that a lot of content marketers use, particularly those who are interested in generating leads, individuals that their salespeople or sales teams or outreach folks or business development folks can reach out to specifically to sell a product or start a conversation. Many content marketers and SEOs use this type of content as a lure to essentially attract someone, who then fills in form fields to give enough information so that the sales pipeline gets filled or the leads pipeline gets filled, and then the person gets the content.

As opposed to the classic model that we’re used to in a more open content marketing and open SEO world of, “Let me give you something and then hopefully get something in return,” it’s, “You give me something and I will give you this thing in return.” This is a very, very popular tactic. You might be familiar with Moz and know that my general bias and Moz’s general bias is against content gating. We sort of have a philosophical bias against it, with the exception of some enterprise stuff on the Moz Local side, where that marketing team may include some gating in the future. But generally, at Moz, we’re sort of against it.

However, I don’t want to be too biased. I recognize that it does have benefits, and I want to explain some of those benefits and drawbacks so that you can make your own choices of how to do it. Then we’re going to rock through some recommendations, some tactical tips that I’ve got for you around how you can improve how you do it, no matter whether you are doing open content or full content gating.

Benefits of gating content

Let’s compare the two models. This is the gated idea. So you get this free report on the state of artificial intelligence in 2016. But first, before you get that report, you fill in all these fields: name, email, role, company website, Twitter, LinkedIn, and what your budget for AI in 2017 is (you fill in a number). I’m not kidding here. Many of these reports require these and many other fields to be filled in. I have personally filled in several intense ones in order to get a report back. So it’s even worked on me at times.

The opposite of that, of course, would be the report is completely available. You get to the webpage, and it’s just here’s the state of AI, the different sections, and you get your graphs and your charts, and all your data is right in there. Fantastic, completely free access. You’ve had to give nothing, just visit the website.

The benefits of gating are you actually get:

  • More information about who specifically accessed the report. Granted, some of this information could be faked. There are people who work around that by verifying and validating at least the email address or those kinds of things.
  • Those who expend the energy to invest in the report may view the data or the report itself as more valuable, more useful, more trustworthy, to carry generally greater value. This is sort of an element of human psychology, where we value things that we’ve had to work harder to get.
  • Sales outreach to the folks who did access it may be much easier and much more effective because you obviously have a lot of information about those people, versus if you collected only an email or no information at all, in which case outreach would be close to impossible.

Drawbacks of gating content

Let’s walk through the drawbacks of gating, some things that you can’t do:

  • Smaller audience potential. It is much harder to get this in front of tons of people. Maybe not this page specifically, but certainly it’s hard to get amplification of this, and it’s very hard to get an audience, get many, many people to fill out all those form fields.
  • Harder to earn links and amplification. People generally do not link to content like this. By the way, the people who do link to and socially amplify stuff like this usually do it with the actual file. So what they’ll do is they’ll look for State of AI 2016, filetype:pdf, site:yourdomain.com, and then they’ll find the file behind whatever you’ve got. I know there are some ways to gate that even such that no one can access it, but it’s a real pain.
  • It’s also true that this leaves a very bad taste in some folks’ mouths. They have a negative brand perception around it. Now negative brand perception could be around having to fill this out. It could be around whether the content was worth it after they filled this out. It could be about the outreach that happens to them after they filled this out, when their interest in getting this data was not to start a sales conversation. You also lose a bunch of your SEO benefits, because you don’t get the links, you don’t get the engagement. If you do rank for this, it tends to be the case that your bounce rate is very high, much higher than other people who might rank for things like the state of AI 2016. So you just struggle.

Benefits of open access

What are the benefits and drawbacks of open access? Well, benefits, pretty obvious:

  • Greater ability to drive traffic from all channels, of course — social, search, word of mouth, email, whatever it is. You can drive a lot more people here.
  • There’s a larger future audience for retargeting and remarketing. So the people who do reach the report itself in here, you certainly have an opportunity. You could retarget and remarket to them. You could also reach out to them directly. Maybe you could retarget and remarket to people who’ve reached this page but didn’t fill in any information. But these folks here are a much greater audience potential for those retargeting and remarketing efforts. Larry Kim from WordStream has shown some awesome examples. Marty Weintraub from Aimclear also has shown some awesome examples of how you can do that retargeting and remarketing to folks who’ve reached content.
  • SEO benefits via links that point to these pages, via engagement metrics, via their ranking ability, etc. etc. You’re going to do much better with this. We do much better with the Beginner’s Guide to SEO on Moz than we would if it were gated and you had to give us your information first, of course.

Overall, if what you are trying to achieve is, rather than leads, simply to get your message to the greatest number of people, this is a far, far better effort. This is likely to reach a much bigger audience, and that message will therefore reach that much larger audience.

Drawbacks of open access

There are some drawbacks for this open access model. It’s not without them.

  • It might be hard or even totally impossible to convert many or most of the visits that come to open access content into leads or potential leads. It’s just the case that those people are going to consume that content, but they may never give you information that will allow you to follow up or reach out to them.
  • Information about the most valuable and important visitors, the ones who would have filled this thing out and would have been great leads is lost forever when you open up the content. You just can’t capture those folks. You’re not going to get their information.

So these two drawbacks are what drive many folks toward the gated model, along with the benefits of gated content we covered earlier.

Recommendations

So, my recommendations. It’s a fairly simple equation. I urge you to think about this equation from as broad a strategic perspective and then a tactical accomplishment perspective as you possibly can.

1. If audience size, reach, and future marketing benefits are greater than detailed leads as a metric or as a value, then you should go open access. If the reverse is true, if detailed leads are more valuable to you than the audience size, the potential reach, the amplification and link benefits, and all the future marketing benefits that come from those things, the ranking benefits and SEO benefits, if that’s the case, then you should go with a gated model. You get lots of people at an open access model. You get one person, but you know all their information in a gated content model.

2. It is not the case that this has to be completely either/or. There are modified ways to do both of these tactics in combination and concert. In fact, that can be potentially quite advantageous.

So a semi-gated model is something we’ve seen a few content marketers and companies start to do, where they have a part of the report or some of the most interesting aspects of the report or several of the graphics or an embedded SlideShare or whatever it is, and then you can get more of the report by filling in more items. So they’re sharing some stuff, which can potentially attract engagement and links and more amplification, and use in all sorts of places and press, and blog posts and all that kind of stuff. But then they also get the benefit of some people filling out whatever form information is critical in order to get more of that data if they’re very interested. I like this tease model a lot. I think that can work really, really well, especially if you are giving enough to prove your value and worth, and to earn those engagement and links, before you ask for a lot more.

You can go the other way and go a completely open model but with add-ons. So, for example, in this, here’s the full report on AI. If you would like more information, we conducted a survey with AI practitioners or companies utilizing AI. If you’d like the results of that survey, you can get that, and that’s in the sidebar or as a little notification in the report, a call to action. So that’s full report, but if you want this other thing that maybe is useful to some of the folks who best fit the interested in this data and also potentially interested in our product or service, or whatever we’re trying to get leads for, then you can optionally put your information in.

I like both of these. They sort of straddle that line.

3. No matter which one or which modified version you do, you should try and optimize the outcomes. That means in an open content model:

  • Don’t ignore the fact that you can still do retargeting to all the people who visited this open content and get them back to your website, on to potentially a very relevant offer that has a high conversion rate and where you can do CRO testing and those kinds of things. That is completely reasonable and something that many, many folks do, Moz included. We do a lot of remarketing around the web.
  • You can drive low-cost, paid traffic to the content that gets the most shares in order to bump it up and earn more amplification, earn more traffic to it, which then gives you a broader audience to retarget to or a broader audience to put your CTA in front of.
  • If you are going to go completely gated, a lot of these form fields, you can infer or use software to get and therefore get a higher conversion rate. So for example, I’m asking for name, email, role, company, website, Twitter, and LinkedIn. In fact, I could ask exclusively for LinkedIn and email and get every single one of those from just those two fields. I could even kill email and ask them to sign in with LinkedIn and then request the email permission after or as part of that request. So there are options here. You can also ask for name and email, and then use a software service like FullContact’s API and get all of the data around the company, website, role and title, LinkedIn, Twitter, Facebook, etc., etc. that are associated with that name or in that email address. So then you don’t have to ask for so much information.
  • You can try putting your teaser content in multiple channels and platforms to maximize its exposure so that you drive more people to this get more. If you’re worried that hey this teaser won’t reach enough people to be able to get more of those folks here, you can amplify that through putting it on SlideShare or republishing on places like Medium or submitting the content in guest contributions to other websites in legit ways that have overlapped audiences and share your information that you know is going to resonate and will make them want more. Now you get more traffic back to these pages, and now I can convert more of those folks to the get more system.

So content gating, not the end of the world, not the worst thing in the world. I personally dislike a lot of things about it, but it does have its uses. I think if you’re smart, if you play around with some of these tactical tips, you can get some great value from it.

I look forward to your ideas, suggestions, and experiences with content gating, and we’ll see you next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Earning the Link: How to Pitch and Partner with the 5 Publisher Personas

Posted by QuezSays

I stood up from my office chair, stepped behind it and leaned on its back with both hands so I could stare at the email from a new angle. I was silenced by the response of the blogger:

“We’ve had a recent policy change here, and we no longer offer followed links. It’s hurting our reputation and being flagged by Google.”

In that moment, the game changed for me. I’ve received some interesting responses from editors and bloggers about links before, but never as adamant and uninformed as this. I realized that I needed to develop a communication strategy for my emails to publishing partners about links.

The challenge

Content marketing is a great way to amp up the reputation and visibility of your business. This includes well-placed bylines on high-authority sites that cover your marketplace. From our perspective, it’s completely appropriate to receive an attribution link in return. Creating interesting, authoritative, and valuable content is something my team excels at — that’s not the issue. The issue is working with publishing partners who have preconceived notions about links.

Publishers, bloggers, and editors have a wide range of opinions when it comes to links and how they’re treated by Google. This can create challenges for content creators who want to submit their work to these publishers but are being refused a link back to their site in their author attribution. A variety of people find themselves in this situation — SEOs, content marketing professionals, freelancers, thought leaders, etc.

The fact that people have different opinions on links is not exactly breaking news. My CEO, Eric Enge, does a good job recapping how this nofollow madness came about.

So how do you communicate with publishers in these circumstances in a way that’s credible, respectful, and effective?

After placing roughly 150 pieces of content on a wide range of sites, I’ve learned that it’s crucial to identify someone’s perspective to effectively communicate with them. There are so many myths and misconceptions about links and how Google treats links — you never know what perspective you’ll be dealing with.

This piece will help you quickly identify the perspective at hand, personify it, and from there, help you strategically communicate to give you the best chance of attaining that well-earned attribution link.

Step 1 – Pitch properly

As Rand Fishkin said in his 2012 Whiteboard Friday, “Stop link building and start link earning.” This context is the foundation of all communication with publishing partners.

Practice good pitching etiquette and do your homework researching the site. There are many resources that cover this, so I won’t go in-depth here. However, I will touch on my pitching strategy because I truly believe in its effectiveness.

When I draft all of my pitch emails, I refer to a sticky note stuck to my monitor that outlines the four sequential questions an editor is going to have when they receive my email:

Sticky note.jpg

1. What does this person want?

what they want.png

Answer this question in the subject of your email, and in the first sentence. Eric Enge suggests you treat this as your value proposition.

2. Is this credible?

is this credible.png

Ask yourself the question, “What would make my communication more credible in this person’s eyes?”

For example:

  • Name-dropping a big brand that is a part of the collaboration
  • Mentioning an accolade that your writer has earned (e.g. rated top Southern mommy blogger back-to-back years)
  • Highlighting other places the author has been published (e.g. monthly Forbes and USA Today contributor)
  • Mentioning a very specific piece of information that proves you’ve spent a lot of time on their site:
    • “Penny Pens has a lot of tips to share on how to road trip around the Midwest. I think this would complement your travel-heavy July editorial calendar. It would also build nicely off of Christina WritesALot’s piece on Choosing Travel Buddies Wisely.”
  • Speaking directly to their content strategy:
    • “I think that Bobby Beers’ UCLA Tailgating guide would be a great piece to help promote football ticket sales on your events page.”

Worth noting: If you’re unwilling to do the in-depth research that allows you to speak this way, don’t slapdash this communication. Go another route. “I read your recent article on plants and found it very interesting” doesn’t give you any credibility, and it can even hurt you by coming off as insincere. Emails like that already plague editors.

Don’t believe me? Check out Michael Smart’s article on how we’ve ruined the compliment approach to pitch introductions.

In fact, I’ve even seen software that mimics this approach for marketers that are trying to scale their outreach. The user selects the publication and editor and the software creates an email template that automatically pulls in the title of the last article the editor published. That is how manipulative the email outreach environment has become.

3. Is this valuable?

is this valuable.png

And

4. Will this work?

will this work.png

Ask yourself what details would be worth including here. Is the detail crucial to the communication? Would including it potentially prevent the recipient from understanding something or from responding?

For example, when I pitch writers that work for a big brand, sometimes I mention that we’re not interested in giving or receiving any compensation for the contribution I’m offering. I’ve had experiences where the editor sees the name of my Fortune 100 client and immediately thinks that I’m offering a sponsored post. Or they think that my writer wants payment and will immediately write off the opportunity because they don’t have the budget for another writer at that time.

By answering these questions clearly and in this order, I’m giving the editor permission to close the email at any time with the information they need to know that this opportunity is not going to work. This is the best gift you can give an editor. It shows that you respect their time and will keep the door open for future opportunities. It’s how to begin building trust in a long-term relationship.

Side note: Talking about links during pitching

I generally don’t talk about links with an editor upfront and often wait until they’ve had a chance to see the completed content. First of all, the attribution link is only one of the benefits we’re looking for (reminder: the others are reputation and visibility). It just doesn’t seem fair to talk to the editor about your author attribution before they see the piece. They don’t know you and want to see that you can deliver something valuable and non-promotional first.

It can also come off as unnatural to some editors. Do you really want to risk having your email mistaken for one of the hundreds of spam emails they regularly get promising “high-quality relevant content in exchange for only one dofollowed link!”? Unfortunately, talking about links right away can sometimes trigger an editor to see your content opportunity as low quality.

Step 2 – Earn the link

Once the editor requests your content, work with the writer or content creators until you have something that you’re proud to represent. Ask yourself this question: “Is this link-worthy?” If the answer isn’t a resounding “Heck yeah,” then you won’t have the leverage that you need later on if you end up in a sticky situation (i.e. if you aren’t given a link or you’re given a nofollow link). In those situations, you need to make a powerful request to remedy the situation. Are you willing to make that request for a piece of content your team created half-heartedly? That’s up to you. You need to decide what type of content you want associated with your personal brand.

In short, there are no shortcuts. Earn the editor’s respect and earn the link.

link building vs link earning.png

Step 3 – Write a simple “white-hat SEO” author attribution and submit

For example:

  • Usually no more than two to three sentences
  • Avoid direct-match rich anchor text
  • Link to a page that has high relevance to the author or the content
  • Don’t include more than one to two links

Step 4 – When encountering a nofollow link or missing link, communicate strategically

Once in a blue moon, when you check to see if an article you’ve submitted has been published, you’ll find a nofollow tag or a missing link.

What you SHOULDN’T do in this situation is send an email that justifies or explains why you deserve the link, or why the link is important to you. Don’t make an assumption as to why the link isn’t there. You don’t know what happened.

What you SHOULD do is make a simple request. There is no need for the email to be longer than three sentences:

“Hi Max, thanks for making Sally McWritesALot’s article look so great. It looks like the link in her attribution is nofollowed. Can you remove that nofollow tag?”

The editor’s response will give you hints on how to proceed. Below, I’ve outlined some of the flavors of responses you might get, with a publisher persona associated with each one that will help guide your communication strategy.

Skeptical Sally

skeptical sally.png

  • How to identify:
    • Skeptical Sally might respond with something like this (real examples I’ve received):
      • I don’t allow follow links on the site in sponsored or guest content. As I’m sure you are aware, it can dramatically damage our Google ranking. I love Andrea’s piece, but can’t risk a portion of the site … this is my full-time job — and one that I love. My Google ranking can affect future business opportunities.
      • We do not allow dofollow links any longer; this is in an effort to abide by SEO best practices for our blog.
  • Skeptical Sally’s perspective:
    • Sees links in general as very risky, especially a link that may be associated with a brand
    • Due to their policy change, she now plans to put a nofollow tag on every outbound link, “just to be safe”
    • Has an immediate skepticism of people asking for links
  • Communication strategy:
    • Move on. It is unlikely that Skeptical Sally will be open to a new perspective about links. If you try to educate her on the issue or talk through it, she may even get offended. Oftentimes, it’s just not worth impacting the relationship. After all, there may be ways to collaborate in the future that don’t involve content links (social media cross-promotion, interviews, etc.). Best to say thanks and move on. You still get the reputation and visibility benefits of the article that was published, but you now know that Sally’s site isn’t one where you can expect fair attribution.

Pseudo-Smart Steve

pseudo smart steve.png

  • How to identify:
    • Pseudo-Smart Steve might respond with something like this (real examples I’ve received):
      • “The [client] link is just to [client] and will appear spammy to Google. Big red flag.”
      • Other language to look out for — any mention of “PageRank sculpting” or “retaining link juice”
  • Pseudo-Smart Steve’s perspective:
    • Has absorbed some SEO advice from outdated or unreliable sources
    • Knows that links are important, and wants to cash in on the best way to use them on his site
    • May attempt some type of “page sculpting” strategy to prevent precious PageRank from leaking off of his domain (note that this notion is a myth)
  • Communication strategy:
    • Make an attempt to educate these people, standing shoulder to shoulder with them. Sometimes they are just doing the best they can with the knowledge that they have and are open to new information.
      • For example, if the editor responds, “We prefer to nofollow as it retains the link juice,” then perhaps there is an opportunity to send them a link to a resource that will explain that the PageRank that would have been distributed to that nofollow link is NOT redistributed; it is essentially wasted (such as this Matt Cutts blog post).
      • Important – I wouldn’t recommend explaining SEO concepts in-depth over email. What would be more credible and powerful is to make your point in a sentence or two and then provide a link to a resource that backs up your point from an obviously credible source (Google’s blog, something that contains a quote from Google, a reputable study, etc.). Empowering an editor with the information they need to make their own decision is powerful and helpful.
      • Here are a couple of recently published resources to have bookmarked in case you are in a situation like this:

Here’s the extent I’d recommend explaining something in the email itself (real example):

  • Me: “Regarding the link, you can nofollow if it’s an absolute sticking point for you. However, we do feel that since the link is going to a relevant page (where you can find more writing by Julia), there won’t be any risk. Also, there are millions of websites linking to [client], so we feel from that standpoint, it’s not really going to raise any red flags.”
  • Editor: “OK — that all works. The nofollow link really isn’t a sticking point … I appreciate your feedback.”

Savvy Shelby

savvy shelby.png

  • How to identify:
    • Responses that comment on how a topic relates to user experience, engagement, visibility, or other editorial areas
  • Savvy Shelby’s perspective:
    • Knows what she needs to know about links — that they are important to people, relevant to search engines, and are a form of currency when working with writers and freelancers
    • Knows that there are things that she doesn’t know about links — that search engines and technical marketers know a lot more than she does about exactly how links work
    • Knows that user experience is what really matters — that if a link doesn’t feel valuable to a user and isn’t a gesture to reward a contributor for a brilliant piece (trusting the contributor enough to know that it won’t be harmful), it may not be something she wants to include
  • Communication strategy:
    • If the link was omitted entirely, explain why including that link will positively impact user experience.
      • Will it provide author credibility?
      • Help users find more content that the author has written?
      • Expand on the topic somehow?
    • If the link has a nofollow tag, let the editor know that the author you are working with prefers to have the freedom to include a followed link in their attribution. This is why it’s so important that you’ve earned the link and provided incredibly valuable content to her and her audience. Trust must have been built by now.

Side note: Make this editor your best friend. They are your most powerful publishing partner.

Oblivious Oliver

oblivious oliver.png

  • How to identify:
    • There may not be specific language to look out for here, besides hints that suggest complete apathy or a lack of editorial structure or direction. Look for off-topic content or grammatical errors during your initial research. You probably don’t want to do a lot of work (and build an association) with a site that doesn’t scrutinize the work of their guest contributors.
  • Oblivious Oliver’s perspective:
    • Doesn’t know anything about links or an association between links and search engines
    • He’s willing to do almost anything with links, as long as it doesn’t make the page look bad
    • May be so hungry for original content that he’s willing to sacrifice quality in general
  • Communication strategy:
    • If you’re just realizing that you’re dealing with an Oblivious Oliver at this stage, it may be a sign that you’re not doing enough detailed research on the site upfront. Perhaps there were some hints within content on his site that you could have picked up on.
    • Regardless, at this point it doesn’t matter. Follow through on your word to deliver a high-quality piece of content and move on to the next opportunity.

The biggest takeaway here is the simplest one: Email communication around controversial or misunderstood topics (such as links) is difficult. Because of this, it will benefit you to keep your communication in simple editorial vernacular until you have earned the right to talk about links — by providing something valuable. When you identify a Savvy Shelby, cultivate the relationship. And for the rest, I hope that this guide empowers you to respond in a manner that’s more effective and will get you results.


All Content is Not Created Equal: Comparing Results Across 15 Verticals

Posted by kerryjones

Are you holding your content marketing to unrealistic standards?

No matter how informative your infographic about tax law may be, it’s not going to attract the same amount of attention as a BuzzFeed Tasty video. You shouldn’t expect it to. In order to determine what successful content looks like for your brand, you first need to have realistic expectations for what content can achieve in your particular niche.

Our analysis of hundreds of Fractl content marketing campaigns looked at the factors which have worked for our content across all topics. Now we’ve dived a little deeper into this data to develop a better understanding of what to expect from content in different verticals.

What follows is based on data we’ve collected over the years while working with clients in these industries. Keep in mind these aren’t definitive industry benchmarks – your mileage may vary.

Content-by-Vertical-01 (1).jpg

First, we categorized our sample of over 340 Fractl client campaigns into one of 15 different verticals:

  • Health and Fitness
  • Travel
  • Education
  • Entertainment
  • Drugs and Alcohol
  • Politics, Safety, and Crime
  • Sex and Relationships
  • Business and Finance
  • Science
  • Technology
  • Sports
  • Automotive
  • Home and Garden
  • Pets
  • Fashion

We then looked at placements and social media shares for each project. We also analyzed content characteristics like visual asset type and formatting. A “placement” refers to any time a publisher wrote about the campaign. Regarding links, a placement could mean a dofollow, cocitation, nofollow, or text attribution.

Across the entire sample, an average campaign received 90 placements and just over 11,800 social shares. As expected, the results deviated greatly from the average when we looked at the average number of placements and social shares per vertical.

Content-by-Vertical-19.jpg

Some verticals, such as Health and Fitness, outperformed the average benchmarks by more than double, with 195 average placements and roughly 62,600 social shares. Not surprisingly, verticals with more niche audiences had lower numbers. For example, Automotive campaigns earned an average of 43 placements and 1,650 social shares.

What were the top-performing topics?

The average campaigns in Health and Fitness, Drugs and Alcohol, and Travel outperformed the average campaigns in other verticals. So what does it take to be successful in each of these three verticals?

Health and Fitness

Our Health and Fitness campaigns were nearly nine times more likely to include side-by-side images than the average vertical.

Many of these side-by-side image campaigns were centered around body image issues. For instance, we Photoshopped women in video games to have body types closer to that of the average American woman. We also used this tactic to highlight male body image issues and differences in beauty standards around the world.

Takeaway: Contrasting images immediately pass along a wealth of information that can be difficult to capture as effectively with standard data visualizations like charts or graphs. Additionally, they carry emotional power.

For instance, we created a morphing GIF of Miss America from 1922 to 2015. The difference between Miss America in 1922 and Miss America in 2015 is stark, and the GIF makes a powerful statement. Readers and publishers were also able to access information about the images that wouldn’t have come across in figures alone (such as the change in clothing styles and the relative lack of diverse contestants).

As part of the project, we also charted the decline in BMI for pageant winners. Depending on the project and available information, it may be helpful to provide some quantitative data to support the narrative told through images.

Interestingly, although Health and Fitness campaigns were 36.4 percent more likely to use social media data than the average vertical, each of the social media campaigns was in the bottom 68 percent of all Health and Fitness campaigns by social shares.

Drugs and Alcohol

Our Drugs and Alcohol campaigns were 2.2 times more likely to use curated data (65 percent versus 30 percent) and 1.4 times more likely to have interactive elements (26 percent versus 19 percent) than the average campaign.

Takeaway: When dealing with emotional and controversial topics like drugs and alcohol, you don’t necessarily need to collect new data to make an impact. Readers and publishers value visualizations that can help explain complex information in simple ways. An added benefit comes from creating interactive experiences that allow your audience to explore the data on their own and draw their own conclusions.

One good example of these principles is our “Pathways to Addiction” campaign, in which we created interactive platforms for exploring data from the National Survey on Drug Use and Health, including information about the sequence in which people have tried different substances.

This format allowed readers to explore a controversial topic on their own and draw independent conclusions.

Pro Tip: Whether you choose to use curated data or collect your own data, it is imperative to be impartial in your presentation and open in your methodology when working on campaigns around sensitive or controversial topics. You don’t need to stay away from controversial topics, but you do need to take precautions for your agency and your client.

Travel

Our Travel campaigns were 28.6 percent more likely to use social media data and 30.5 percent more likely to use rankings and comparisons than a campaign in the average vertical.

Takeaway: Travel is an inherently social behavior. For many people, travel isn’t complete until they’ve captured the perfect photo – or five. Travel content that acknowledges this social aspect can be really powerful. Rankings, which also feature heavily in travel content, are strong geographic egobait for readers and publishers and play up the social aspect of the Travel vertical.

Stratos Jet Charters’ Talking Tourists, which combined social media data with rankings, is a great example. For this campaign, we gathered over 37,000 tweets to determine which places were the most and least friendly to tourists.

Talking Tourists was successful (96 placements and over 56,000 social shares) because it used content types with proven success in the travel vertical (social media data, rankings, and maps) to explore a topic that isn’t often explored quantitatively.

How to achieve content marketing success in every vertical

The three industries listed above are ripe for highly successful content, but does that mean less popular verticals should go in with low expectations?

Even with topics where it’s more difficult to attract the attention of readers and publishers, it is still possible for your content to perform well beyond other content in the vertical. This is particularly true when campaigns align with trending stories or tell a completely unique story.

However, not every piece of content can hit it out of the park. Rand estimates that it will take five to ten attempts to create a piece of successful content. Even then, the average high-performing Science content will not receive the same amount of attention as the average Health and Fitness content.

So how can you maximize the chances for success? Here’s what we’ve observed about our top-performing campaigns in the following verticals:

  • Automotive: If you want to create automotive content that appeals to a wider audience, consider using data from social media. Four of our top seven campaigns (by placements) in this vertical featured data from social networks.
  • Business and Finance: When it comes to money, people want to know how they stack up. Our top Business and Finance campaigns (by social shares) relied on comparisons or rankings. If you’re looking for social shares, this is the way to go.
  • Drugs and Alcohol: Finding interesting correlations or stories in existing datasets can prove popular in this vertical – the majority of top campaigns used curated data.
  • Education: Our top Education campaigns featured social media data and interactive features.
  • Entertainment: Timely content that connects with a passionate fan base is a recipe for success.
  • Fashion: Successful fashion campaigns focused on solving problems for the audience.
  • Health and Fitness: Side-by-side images that show a strong contrast perform extremely well in this vertical.
  • Home and Garden: To attract attention from readers and publishers in this niche, make your content timely or pop culture-related.
  • Pets: The highest-performing campaign in this vertical appealed to readers and publishers because it focused on the social aspects of pet ownership. We also included a geographic egobait component by highlighting distinct regional differences in popular dog breeds.
  • Politics, Safety, and Crime: Our top-performing campaign (by social shares) in Politics, Safety, and Crime used social media data to explore a trending topic.
  • Science: In this vertical, relating complex topics to pop culture figures, like superheroes, can boost your content’s social appeal. Creating interactive platforms to explore complicated data can also help your audience connect with your campaign.
  • Sex and Relationships: Talking about sex and relationships feels a little scandalous, which piques interest. Two of our top three campaigns in this vertical, both by placements and by social shares, used social media data to measure conversation around these topics.
  • Sports: This vertical naturally lends itself well to regional egobait. Although only two of our Sports campaigns included maps, these were the most shared of all our sports campaigns.

Browse through the flipbook below to see examples of top-performing campaigns in each vertical.

http://ift.tt/2dziXHV

Try a mixed-vertical strategy

For many of our clients at Fractl, we create content both within and outside of the client’s vertical to maximize reach. Our work with Movoto, a real estate research site, illustrates how one company’s content marketing can span multiple verticals while still remaining highly relevant to the company’s core business.

When developing a content marketing strategy, it is helpful to look at the average placement and social share rate for various verticals. Let’s say a brand that sells decor for log cabins wants to focus 60 percent of its energy on creating highly targeted, niche-specific projects and 40 percent on content designed to raise general awareness about its brand.

Three out of every five campaigns produced for this client should be geared toward publishers who write about log cabins and readers who are on the verge of purchasing log cabin decor. This type of content might include targeted blog posts, industry-specific research, and product comparisons that would appeal to folks at the bottom of the sales funnel.

For the other two campaigns, it’s important to look at adjacent verticals before determining how to move forward. For this particular client, primarily creating Travel content (which yields high average social shares and placements) may be the best course of action.

In addition to the data I’ve shared here, I encourage you to analyze your own content performance data by vertical to set realistic expectations. Vertical-specific metrics can also help identify opportunities to create cross-vertical content for greater traction.


Is Your Camera the New Search Box? How Visual Intelligence is Turning Search Keyword-less

Posted by purna_v

My neighbor has the most beautiful garden ever.

Season after season, she grows the most exotic, gorgeous plants that I could never find in any local nursery. Slightly green with envy over her green thumb, I discovered a glimmer of hope.

There are apps that will identify any plant you take a photo of. Problem solved. Now the rest of the neighborhood is getting prettied up as several houses, including mine, have sprouted exotic new blooms easily ordered online.

Take a photo, get an answer. The most basic form of visual search.

Visual search addresses both convenience and curiosity. If we wanted to learn something more about what we’re looking at, we could simply upload a photo instead of trying to come up with words to describe it.

This isn’t new. Google Visual Search was demoed back in 2009. CamFind rolled out its visual search app in 2013, following similar technology that powered Google Glass.

What’s new is that a storm of visual-centric technologies is coming together to point to a future of search that makes the keyword less…key.

Artificial intelligence and machine learning are the critical new components in the visual game. Let’s focus on what this means and how it’s going to impact your marketing game.

How many kinds of reality do we actually need?

The first thing we think about with the future of visual is virtual reality or augmented reality.

There’s also a third one: mixed reality. So what’s the difference between them and how many kinds of reality can we handle?

Virtual reality (VR) is full immersion in another universe – when you have the VR headset on, you cannot see your actual reality. Virtual reality is a closed environment, meaning that you can only experience what’s been programmed into it. Oculus Rift is an example of virtual reality.

Augmented reality (AR) uses your real environment, but enhances it with the addition of a computer-generated element, like sound or graphics. Pokémon Go is a great example of this, where you still see the world around you but the Pokémon-related graphics – as well as sounds – are added to what you see.

Mixed reality (MR) is an offshoot of augmented reality, with the added element of augmented virtuality. It merges your virtual world with your real world and allows you to interact with both through gestures and voice commands. HoloLens from Microsoft (my employer) is an example of mixed reality – this headset can be programmed to layer any kind of environment over your reality and make it interactive.

The difference is a big fat deal – because an open environment, like HoloLens, becomes a fantastic tool for marketers and consumers.

Let me show you what I mean.

Pretty cool, right? Just think of the commercial implications.

Retail reality

Virtual and augmented reality will reshape retail because they solve a real problem – for the consumer.

Online shopping has become a driving force, and we already know what its limitations are: not being able to try clothing on, feel the fabric on the couch or get a sense of the heft of a stool. All of these are obstacles to the online shopper.

According to the Harvard Business Review, augmented reality will eliminate pain points specific to every kind of retail shopping – not just finding the right size, but also things like envisioning how big a two-man tent actually is. With augmented reality, you can climb inside it!

If you have any doubt that augmented reality is coming, and coming fast, look no further than Pokémon Go’s recent takeover. We couldn’t get enough.

Some projections put investment in AR technology at close to $30 billion by 2020 – that’s in the next three years. HoloLens is already showing early signs of being a game-changer for advertisers.

For example, if I’m shopping for a kitchen stool, I can not only look at it on the website, but also see what it would look like in my home:

Holo_1.png

Holo2.png

It’s all about being able to get a better feel for how things will look.

Fashion is one industry that has been trying to solve this, and it’s increasingly embracing augmented reality.

Rebecca Minkoff debuted the use of augmented reality in her New York Fashion Week show this September. Women could use the AR app Zeekit – live during the show – to see how the clothes would look on their own bodies.

Zeekit.png

Image credit: Zeekit

Why did they do this? To fix a very real problem in retail.

According to Uri Minkoff, a partner in his sister’s clothing company, 20 to 40 percent of retail purchases get returned – that’s the industry standard.

If a virtual try-on can eliminate the hassle of the wrong fit, the wrong size, the wrong everything, then they will have solved a business problem while also making their customers super happy.

The trend caught on: at London Fashion Week a few weeks later, a host of other designers followed suit.

Let’s get real about reality

Let’s bring our leap into the visual back down to earth just a bit – because very few of us will be augmenting our reality today.

What’s preventing AR and VR from taking over the world just yet is slow market penetration: the hardware is relatively expensive and entirely new.

On the other hand, something like voice search – another aspect of multi-sensory search – is becoming widely adopted because it relies on a piece of hardware most of us already carry with us at all times: our mobile phone.

The future of visual intelligence relies on tying it to a platform that is already commonly used.

Imagine this. You’re reading a magazine and you like something a model is wearing.

Your phone is never more than three feet from you, so you pick it up, snap a photo of the dress, and the artificial intelligence (AI) – via your digital personal assistant – uses image search to find out where to buy it, no keywords necessary at all.

Take a look at how it could work:

Talk about a multi-sensory search experience, right?

Voice search and conversation as a platform combine with image search so you can transact right within your digital personal assistant – which is already used by 66% of 18- to 26-year-olds and 59% of 27- to 35-year-olds, according to Forrester Research.

graph_genzz.jpg

As personal digital assistants rise, so will the prevalence of visual intelligence.

Digital personal assistants, with their embedded artificial intelligence, are the key to the future of visual intelligence in everybody’s hands.

What’s already happening with visual intelligence?

Amazon

One of the most common uses exists right within the Amazon app. Here, the app gives you the option to find a product simply by taking a photo of something or of the bar code:

Amazon1.jpg or Amazon2.jpg

CamFind

The CamFind app can identify the contents of a photo you’ve taken and offer links to places where you can shop for the item. Their website touts the fact that users can get “fast, accurate results with no typing necessary.”

For example, I took a photo of my (very dusty) mouse and it not only recognized it, but also gave me links to places I could buy it or learn more about it.

Pinterest

Pinterest already has a handy visual search tool for “visually similar results,” which returns results from other pins that are a mix of commerce and community posts. This is a huge benefit for retailers to take advantage of.

For example, if you were looking for pumpkin soup recipe ideas and came across a kitchen towel you liked within the Pin, you could select the part of the image you wanted to find visually similar results for.

Pinterest.png

Image credit: Pinterest

Google

Google’s purchase of Moodstocks is also very interesting to watch. Moodstocks is a startup that has developed machine learning technology to boost image recognition for the cameras on smartphones.

For example, you see something you like. Maybe it’s a pair of shoes a stranger is wearing on the subway, and you take a picture of it. The image recognition software identifies the make and model of the shoe, tells you where you can buy it and how much it costs.

Mood.jpg

Image credit: Moodstocks

Captionbot.ai

Microsoft has developed an app that describes what it sees in images. It understands thousands of objects as well as the relationships between them. That last bit is key – and is the “AI” part.

Capbot.png

Captionbot.ai was created to showcase some of the intelligence capabilities of Microsoft Cognitive Services, such as Computer Vision, Emotion API, and Natural Language. It’s all built on machine learning, which means it will get smarter over time.

You know what else is going to make it smarter over time? It’s integrated into Skype now. This gives it a huge practice field – exactly what all machine learning technology craves.
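
To make that a bit more concrete, here’s a minimal sketch of how a photo could be turned into a caption using the Computer Vision service that Captionbot showcases. The region, API version, and subscription key are placeholders for illustration, and the response handling is simplified:

// Minimal sketch: ask the Computer Vision "describe" endpoint for a caption.
// The region, API version, and subscription key below are placeholders.
const endpoint = 'https://westus.api.cognitive.microsoft.com/vision/v1.0/describe';

async function captionImage(imageUrl) {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Ocp-Apim-Subscription-Key': 'YOUR_SUBSCRIPTION_KEY'
    },
    body: JSON.stringify({ url: imageUrl })
  });
  const result = await response.json();
  // The service returns ranked captions with confidence scores; take the top one,
  // e.g. "a vase of flowers sitting on a table".
  return result.description.captions[0].text;
}

captionImage('https://example.com/photo.jpg').then(caption => console.log(caption));

Captionbot layers additional services (such as the Emotion API and natural language processing) on top of this kind of call, but the basic pattern – send an image, get structured understanding back – is the same.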

As I said at the start, where we are now with something like plant identification leads directly to a future where visual search puts your product into the hands of consumers who are dying to buy it.

What should I do?

Let’s make our marketing more visual.

We saw the signs with rich SERP results – we went from text only to images, videos and more. We’re seeing pictures everywhere in a land that used to be limited to plain text.

Images are the most important deciding factor when making a purchase, according to research by Pixel Road Designs. They also found that consumers are 80% more willing to engage with content that includes relevant images. Think about your own purchase behavior – we all do this.

This is also why all the virtual reality shenanigans are going to take root.

Up the visual appeal

Without the keyword, the image is now the star of the show. It’s almost as if the understudy suddenly got thrust into the spotlight. Is it ready? Will it succeed?

To get ready for keywordless searches, start by reviewing the images on your site. The goal here is to ensure they’re fully optimized and still recognizable without the surrounding text.

First and foremost, we want to look at the quality of the image and answer yes to as many of the following questions as possible:

  • Does it clearly showcase the product?
  • Is it high-resolution?
  • Is the lighting natural with no distortive filters applied?
  • Is it easily recognizable as being that product?

Next, we want to tell the search engines as much about the image as we can, so they can best understand it. For the same reasons that SEOs benefit from using Schema markup, we want to ensure the images tell as much of a story as they can.

The wonderfully brilliant Ronell Smith touched upon this subject in his recent Moz post, and the Yoast blog offers some in-depth image SEO tips as well. To summarize a few of their key points (there’s a quick markup sketch after the list):

  • Make sure file names are descriptive
  • Provide all the information: titles, captions, alt attribute, description
  • Create an image XML sitemap
  • Optimize file size for loading speed
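
To make those points concrete, here’s a minimal sketch of what an optimized product image and a matching image sitemap entry might look like. The file name, URLs, and alt text are illustrative placeholders, not a prescription:

<!-- Descriptive file name, alt text, title, and a visible caption -->
<figure>
  <img src="/images/walnut-kitchen-stool-24-inch.jpg"
       alt="24-inch walnut kitchen stool with black steel legs"
       title="Walnut kitchen stool"
       width="1200" height="1200">
  <figcaption>The 24-inch walnut kitchen stool, shown in a bright kitchen.</figcaption>
</figure>

<!-- Matching entry in an image XML sitemap (inside a <urlset> that declares
     xmlns:image="http://www.google.com/schemas/sitemap-image/1.1") -->
<url>
  <loc>https://www.example.com/kitchen-stools/walnut-24-inch</loc>
  <image:image>
    <image:loc>https://www.example.com/images/walnut-kitchen-stool-24-inch.jpg</image:loc>
    <image:caption>24-inch walnut kitchen stool with black steel legs</image:caption>
    <image:title>Walnut kitchen stool</image:title>
  </image:image>
</url>

The same descriptive phrase doing duty in the file name, alt text, and sitemap caption gives both crawlers and image-recognition systems far more to work with than “IMG_0231.jpg” ever could.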

Fairly simple to do, right? This primes us for the next step.

Act now by taking advantage of existing technology:

1. Pinterest:

On Pinterest, optimize your product images for clean matches from lifestyle photos. You can reverse-engineer searches to your products via the “visually similar results” tool by posting Pins of lifestyle shots (always more compelling than a white-background product shot) that feature your products across relevant categories.

visual-search-results-blog.gif

In August, Pinterest added video to its visual search machine-learning functionality. The tool is still working out the kinks, but keep your eye on it so you can create relevant content with a commerce angle.

For example, a crafting video about jewelry might be tagged with places to buy the tools and materials in it.

2. Slyce:

Integrate Slyce’s tool, which gives your customer’s camera a “buy” button: using image recognition technology, it matches the product in a photo so shoppers can go straight to purchasing it.

slyce.png

Image credit: Slyce.it

Does it work? There are certainly several compelling case studies from the likes of Urban Outfitters and Neiman Marcus on their site.

3. Snapchat:

Snap your way to your customer using Snapchat’s soon-to-come object recognition ad platform. This will let you deliver an ad to a Snapchatter based on objects recognized in the picture they’ve just taken.

The Verge shared images from the patent Snapchat had applied for, such as:

snapchat.png

For example, someone who snaps a pic of a woman in a cocktail dress could get an ad for cocktail dresses. Mind-blowing.

4. Blippar:

The Blippar app is practically a two-for-one in the world of visual intelligence, offering both AR and visual discovery options.

They’ve helped brands pave the way to AR by turning static content into interactive AR content. A past example is Domino’s Pizza in the UK, which let users of the Blippar app interact with its static posters to take actions such as downloading deals for their local store.

Blippar.jpg

Now the company has expanded into visual discovery. When a user “Blipps” an item, the app shows a series of bubbles, each related to the original item. For example, “Blipping” a can of soda could surface information about the manufacturer, the latest news, offers, and more.

blippars.jpg

Image credit: Blippar.com

Empowerment via inclusivity

Just in case you imagine all these developments exist only to serve commerce, I wanted to share two examples of how visual intelligence can improve accessibility for the visually impaired.

TapTapSee

taptapsee logo.PNG

From the creators of CamFind, TapTapSee is an app specifically designed for the blind and visually impaired.

It recognizes the objects in a photo and identifies them out loud for the user. All the user needs to do to take a photo is double-tap the device’s screen.

The Seeing AI

Created by a Microsoft engineer, the Seeing AI project combines artificial intelligence and image recognition with a pair of smart glasses to help a visually impaired person better understand who is around them and what is going on.

Take a look at them in action:

While wearing the glasses, the user simply swipes the touch panel on the eyewear to take a photo. The AI will then interpret the scene and describe it back out loud, using natural language.

It can describe what people are doing, how old they are, what emotion they’re expressing, and it can even read out text (such as a restaurant menu or newspaper) to the user.

Innovations like this are what make search even more inclusive.

Keep Calm and Visualize On

We are visual creatures. We eat first with our eyes, we love with our eyes, we become curious with our eyes.

The camera as the new search box is brilliant. It removes obstacles to search and helps us get answers in a more intuitive way. Our technology is adapting to us, to our very human drive to see everything.

And that is why the future of search is visual.
