Should SEOs and Marketers Continue to Track and Report on Keyword Rankings? – Whiteboard Friday

Posted by randfish

Is the practice of tracking keywords truly dying? There’s been a great deal of industry discussion around the topic of late, and some key points have been made. In today’s Whiteboard Friday, Rand speaks to the biggest challenges keyword rank tracking faces today and how to solve for them.

(Whiteboard Friday video and whiteboard image)

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about keyword ranking reports. There have been a few articles that have come out recently on a number of big industry sites around whether SEOs should still be tracking their keyword rankings.

I want to be clear: Moz has a little bit of a vested interest here. So the question is: Can you actually trust me? I’m obviously a big shareholder in Moz and I’m the founder, so I care a lot about how Moz does as a software business, and we help people track rankings. Does that mean I’m biased? I’m going to do my best not to be. So rather than saying you absolutely should track rankings, I’m instead going to address what most of these articles have brought up as the problems of rank tracking and then talk about some solutions for each.

My suspicion is you should probably be rank tracking. I think that if you turn it off, it becomes very hard to get a lot of the value and intelligence that we need as SEOs. It’s true there are challenges with keyword ranking reports, but they’re not serious enough to justify abandoning the practice entirely. We still get too much value from them.

The case against — and solutions for — keyword ranking data

A. People, places, and things

So let’s start with the case against keyword ranking data. First off, “keyword ranking reports are inaccurate.” Personalization, localization, and device type all bias results, and they’ve removed the notion of a single “one true ranking.” We’ve done a bunch of analyses of these, and this is absolutely the case.

Personalization, it turns out, doesn’t change rankings that much on average. For an individual it can change rankings dramatically. If they’ve visited your website before, they could be historically biased toward you. Or if they’ve visited your competitor’s, they could be biased toward them. Their previous search history might have biased them in a single session, those kinds of things. But with the removal of Google+ from search results, personalization is actually not changing rankings as dramatically as it used to. Localization, though, is still huge, absolutely, and device differences, still huge.

Solution

But we can address this, and the way to do that is by tracking these things separately. So here you can see I’ve got a ranking report that shows me my mobile rankings versus my desktop rankings. I think this is absolutely essential. Especially if you’re getting a lot of traffic from both mobile and desktop search, you need to be tracking those separately. Super smart. Of course we should do that.

We can do the same thing on the local side as well. So I can say, “Here, look. This is how I rank in Seattle. Here’s how I rank in Minneapolis. Here’s how I rank in the U.S. with no geographic personalization,” to whatever extent Google allows that. Those types of rankings can also be pretty good.

It is true that local rank tracking has gotten a little more challenging, but folks like Moz itself, STAT (GetStat), SERPs.com, and Searchmetrics have all adjusted their rank tracking methodologies in order to have accurate local rank tracking. It’s pretty good. Same with device type, pretty darn good.
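
To make this concrete, here’s a minimal Python sketch of what device- and geo-segmented rank tracking looks like under the hood. All of the data and field names are hypothetical; tools like Moz, STAT, and SERPs.com collect these dimensions for you.

```python
from collections import defaultdict

# Hypothetical rank observations, each tagged with the device and
# location it was collected from.
observations = [
    {"keyword": "flower delivery", "device": "desktop", "geo": "Seattle",     "rank": 4},
    {"keyword": "flower delivery", "device": "mobile",  "geo": "Seattle",     "rank": 7},
    {"keyword": "flower delivery", "device": "desktop", "geo": "Minneapolis", "rank": 12},
    {"keyword": "floral gifts",    "device": "mobile",  "geo": "Seattle",     "rank": 9},
]

# Group observations by (device, geo) so each segment gets its own report line.
segments = defaultdict(list)
for obs in observations:
    segments[(obs["device"], obs["geo"])].append(obs["rank"])

for (device, geo), ranks in sorted(segments.items()):
    avg = sum(ranks) / len(ranks)
    print(f"{device:8} | {geo:12} | avg rank {avg:.1f} across {len(ranks)} keywords")
```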

B. Keyword value estimation

Another big problem expressed by a number of folks is that we no longer know how much traffic an individual keyword sends. Because we don’t know how much traffic an individual keyword sends, we can’t really say, “What’s the value of ranking for that keyword?” Therefore, why bother to even track keyword rankings?

I think this is a little bit of spurious logic. The leap there doesn’t quite make sense to me. But I will say this. If you don’t know which keywords are sending you traffic specifically, you still know which pages are receiving search traffic. That is reported. You can get it in your Google Analytics, your Omniture report, whatever you’re using, and then you can tie that back to keyword ranking reports showing which pages are receiving traffic from which keywords.

Almost all of the rank tracking platforms, Moz included, have a report that shows you something like this. It says, “Here are the keywords that we believe are likely to have sent these percentages of traffic to this page,” based on the keywords that you’re tracking, the pages that are ranking for them, and how much search traffic those pages receive.

Solution

So let’s track that. We can look at pages receiving visits from search, and we can look at which keywords they rank for. Then we can tie those together, which gives us the ability to then make not only a report like this, but a report that estimates the value contributed by content and by pages rather than by individual keywords.

In a lot of ways, this is almost superior to our previous methodology of tracking by keyword. Keyword can still be estimated through AdWords, through paid search, but this can be estimated on a content basis, which means you get credit for how much value the page has created, based on all the search traffic that’s flowed to it, and where that’s at in your attribution lifecycle of people visiting those pages.
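
As a rough illustration of how that estimate can be assembled, here’s a hedged Python sketch. The CTR-by-position curve and the keyword volumes are invented for the example; in practice you’d plug in your own click data or a published curve.

```python
# Assumed CTR-by-position curve -- illustrative numbers only.
ctr_by_position = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

# Keywords tracked for a single page, with rank and hypothetical
# monthly search volume from a keyword research tool.
tracked = [
    {"keyword": "flower delivery", "rank": 2, "volume": 12000},
    {"keyword": "floral gifts", "rank": 4, "volume": 3000},
    {"keyword": "flower arrangements for offices", "rank": 1, "volume": 800},
]

page_search_visits = 2400  # actual search visits to the page, from analytics

# Expected clicks per keyword = search volume x CTR for its position.
expected = [(t["keyword"], t["volume"] * ctr_by_position.get(t["rank"], 0.02))
            for t in tracked]
total_expected = sum(clicks for _, clicks in expected)

# Apportion the page's real search visits by each keyword's expected share.
for keyword, clicks in expected:
    share = clicks / total_expected
    print(f"{keyword}: ~{share:.0%} of search traffic "
          f"(~{share * page_search_visits:.0f} visits)")
```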

C. Tracking rankings and keyword relevancy

Pages often rank for keywords that they aren’t specifically targeting, because Google has gotten way better with user intent. So it can be hard or even impossible to track those rankings, because we don’t know what to look for.

Well, okay, I hear you. That is a challenge. This means basically what we have to do is broaden the set of keywords that we look at and deal with the fact that we’re going to have to do sampling. We can’t track every possible keyword, unless you have a crazy budget, in which case go talk to Rob Bucci up at STAT, and he will set you up with a huge campaign to track all your millions of keywords.

Solution

If you have a smaller budget, what you have to do is sample, and you sample by sets of keywords. Like these are my high conversion keywords — I’m going to assume I have a flower delivery business — so flower delivery and floral gifts and flower arrangements for offices. My long tail keywords, like artisan rose varieties and floral alternatives for special occasions, and my branded keywords, like Rand’s Flowers or Flowers by Rand.

I can create a bunch of different buckets like this, sample the keywords that are in them, and then I can track each of these separately. Now I can see, ah, these are sets of keywords where I’ve generally been moving up and receiving more traffic. These are sets of keywords where I’ve generally been moving down. These are sets of keywords that perform better or worse on mobile or desktop, or better or worse in these geographic areas. Right now I can really start to get true intelligence from there.
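
Here’s a small sketch of that bucketing approach, using the hypothetical flower-business keywords from above (the rank numbers are made up):

```python
buckets = {
    "high-conversion": ["flower delivery", "floral gifts",
                        "flower arrangements for offices"],
    "long-tail": ["artisan rose varieties",
                  "floral alternatives for special occasions"],
    "branded": ["rand's flowers", "flowers by rand"],
}

# (last month's rank, this month's rank) per keyword -- fabricated data.
rank_history = {
    "flower delivery": (8, 5),
    "floral gifts": (12, 9),
    "flower arrangements for offices": (15, 16),
    "artisan rose varieties": (22, 14),
    "floral alternatives for special occasions": (30, 25),
    "rand's flowers": (1, 1),
    "flowers by rand": (2, 1),
}

for bucket, keywords in buckets.items():
    # Positive delta = moved up the rankings (toward position 1).
    deltas = [rank_history[k][0] - rank_history[k][1] for k in keywords]
    avg = sum(deltas) / len(deltas)
    trend = "up" if avg > 0 else "down" if avg < 0 else "flat"
    print(f"{bucket:16} avg movement: {avg:+.1f} positions ({trend})")
```

The same grouping works for the device and geography splits discussed earlier; just add those fields to each observation and report per bucket, per segment.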

Don’t let your keyword targeting — your keyword targeting meaning what keywords you’re targeting on which pages — determine what you rank track. Don’t let it do that exclusively. Sure, go ahead and take that list and put that in there, but then also do some more expansive keyword research to find those broad sets of search terms and phrases that you should be monitoring. Now we can really solve this issue.

D. Keyword rank tracking with a purpose

This one, I think, is a pretty insidious problem: for many organizations, ranking reports are more of a historical artifact. We’re not tracking them for a particular reason. We’re tracking them because that’s what we’ve always tracked and/or because we think we’re supposed to track them. Those are terrible reasons to track things. You should be looking for reasons of real value and actionability. Let’s give some examples here.

Solution

What I want you to do is identify the goals of rank tracking first, like: What do I want to solve? What would I do differently based on whether this data came back to me in one way or another?

If you don’t have a great answer to that question, definitely don’t bother tracking that thing. That should be the rule of all analytics.

So if your goal is to say, “Hey, I want to be able to attribute a search traffic gain or a search traffic loss to what I’ve done on my site or what Google has changed out there,” that is crucially important. I think that’s core to SEO. If you don’t have that, I’m not sure how we can possibly do our jobs.

We attribute search traffic gains and losses by tracking broadly, a broad enough set of keywords, hopefully in enough buckets, to be able to get a good sample set; by tracking the pages that receive that traffic so we can see if a page goes way down in its search visits. We can look at, “Oh, what was that page ranking for? Oh, it was ranking for these keywords. Oh, they dropped.” Or, “No, they didn’t drop. But you know what? We looked in Google Trends, and the traffic demand for those keywords dropped,” and so we know that this is a seasonality thing, or a fluctuation in demand, or those types of things.

And we can track by geography and device, so that we can say, “Hey, we lost a bunch of traffic. Oh, we’re no longer mobile-friendly.” That is a problem. Or, “Hey, we’re tracking and, hey, we’re no longer ranking in this geography. Oh, that’s because these two competitors came in and they took over that market from us.”

We could also look at something like identifying pages that are in need of work but require only a small amount of work to produce a big change in traffic. So we could track pages that rank on page two for given keywords. If we have a bunch of those, we can say, “Hey, maybe just a few on-page tweaks, a few links to these pages, and we could move up substantially.” We previously had a Whiteboard Friday where we talked about how you could do that with internal linking, and we’ve seen some remarkable results there.

We can track keywords that rank in position four to seven on average. Those are your big wins, because if you can move up from position four, five, six, or seven to one, two, or three, you can double or triple the search traffic you’re receiving from those keywords.
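
Both of those filters fall straight out of any rank report. A minimal sketch, assuming a simple keyword-to-average-rank mapping (the data is invented):

```python
# Hypothetical average ranks from a tracking tool.
keyword_ranks = {
    "flower delivery seattle": 13,
    "office flower arrangements": 5,
    "same day flower delivery": 18,
    "artisan rose varieties": 6,
    "wedding flowers minneapolis": 4,
}

# Page-two rankings (positions 11-20): a few on-page tweaks or internal
# links might push these onto page one.
page_two = {k: r for k, r in keyword_ranks.items() if 11 <= r <= 20}

# Positions 4-7: the "big win" candidates, where moving into the top
# three can double or triple search traffic.
big_wins = {k: r for k, r in keyword_ranks.items() if 4 <= r <= 7}

print("Page-two opportunities:", page_two)
print("Big-win candidates:", big_wins)
```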

You should also track long tail, untargeted keywords. If you’ve got a long tail bucket, like we’ve got up here, I can then say, “Aha, I don’t have a page that’s even targeting any of these keywords. I should make one. I could probably rank very easily because I have an authoritative website and some good content,” and that’s really all you might need.

We might look at some up-and-coming competitors. I want to track who’s in my space, who might be creeping up there. So I should track the most common domains that rank on page one or two across my keyword sets.

I can track specific competitors. I might say, “Hey, Joel’s Flower Delivery Service looks like it’s doing really well. I’m going to set them up as a competitor, and I’m going to track their rankings specifically, or I’m going to see…” You could use something like SEMrush and see specifically: What are all the keywords they rank for that you don’t rank for?
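
Spotting those up-and-comers can be as simple as counting how often each domain appears across the SERPs you track. A hedged sketch with invented SERP data:

```python
from collections import Counter

# Hypothetical page-one domains for each tracked keyword.
serps = {
    "flower delivery": ["ftd.example", "joelsflowers.example",
                        "randsflowers.example", "proflowers.example"],
    "floral gifts": ["joelsflowers.example", "randsflowers.example",
                     "etsy.example"],
    "artisan rose varieties": ["joelsflowers.example", "gardenblog.example"],
}

# Count appearances of each domain across all tracked SERPs.
domain_counts = Counter(d for domains in serps.values() for d in domains)

# The domains that show up most often across your keyword sets are the
# competitors worth watching or tracking explicitly.
for domain, count in domain_counts.most_common(5):
    print(f"{domain}: appears on {count} of {len(serps)} tracked SERPs")
```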

This type of data, in my view, is still tremendously important to SEO, no matter what platform you’re using. But if you’re having these problems or if these problems are being expressed to you, now you have some solutions.

I look forward to your comments. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Case Study: How We Created Controversial Content That Earned Hundreds of Links

Posted by KelseyLibert

Content marketers, does the following scenario sound familiar?

You’re tasked with creating content that attracts publicity, links, and social shares. You come up with great ideas for content that you’re confident could accomplish these goals. However, any ideas that push the envelope or might offend anyone in the slightest get shot down by your boss or client. Even if a provocative idea gets approved, after feedback from higher-ups and several rounds of editing, you end up with a boring, watered-down version of what you originally envisioned.

Given the above, you’re not surprised when you achieve lackluster results. Repeat this cycle enough times, and it may lead to the false assumption that content marketing doesn’t work for the brand.

In this post, I’ll answer two questions:

  1. How can I get my boss or clients to sign off on envelope-pushing content that will attract the attention needed to achieve great results?
  2. How can we minimize the risk of backlash?

Why controversy is so powerful for content marketing

To get big results, content needs to get people talking. Oftentimes, the best way to do this is by creating an emotional reaction in the audience. Content that deals with a controversial or polarizing topic can be a surefire way to accomplish this.

On the other hand, when you play it too safe with your content, it becomes extremely difficult to ignite the emotional response needed to drive social sharing. Ultimately, you don’t attract the attention needed to earn high-quality links.

Below is a peek at the promotions report from a recent controversial campaign that resulted in a lot of high-quality links, among other benefits.

(Image: Abodo promotions report)

Overcoming a client’s aversion to controversy

We understand and respect a client’s fierce dedication to protecting their brand. The thought of attaching their company to anything controversial can set off worst-case-scenario visions of an angry Internet mob and bad press (which isn’t always a terrible thing).

One such example of balancing a sensitive topic while minimizing the potential risk is a recent campaign we created for apartment listing site Abodo. Our idea was to use Twitter data to pinpoint which states and cities had the highest concentration of prejudiced and tolerant tweets. Bigotry in America is an extremely sensitive topic, yet our client was open to the idea.

Want to get a contentious idea approved by your boss or client? Here’s how we did it.

1. Your idea needs to be relevant to the brand, either directly or tangentially.

Controversy for the sake of controversy is not going to provide value to the brand or the target audience.

I asked Michael Taus, VP of Growth and Business Development at Abodo, why our campaign idea got the green light. He said Abodo’s mission is to help people find a home, not to influence political discourse. But they also believe that when you’re moving to a new community, there’s more to the decision than what your house or apartment looks like, including understanding the social and cultural tone of the location.

So while the campaign dealt with a hot topic, ultimately this information would be valuable to Abodo’s users.

2. Prove that playing it safe isn’t working.

If your “safe” content is struggling to get attention, make the case for taking a risk. Previous campaign topics for our client had been too conservative. We knew by creating something worth talking about, we’d see greater results.

3. Put safeguards in place for minimizing risk to the brand.

While we couldn’t guarantee there wouldn’t be a negative response once the campaign launched, we could guarantee that we’d do everything in our power to minimize any potential backlash. We were confident in our ability to protect our client because we’d done it so many times with other campaigns. I’ll walk you through how to do this throughout the rest of the post.

On the client’s end, they can get approval from other internal departments; for example, having the legal and PR teams review and give final approval can help mitigate the uncertainty around running a controversial campaign.

Did taking a risk pay off?

The campaign was a big success, with results including:

  • More than 620 placements (240 dofollow links and 280 co-citation links)
  • Features on high-authority sites including CNET, Slate, Business Insider, AOL, Yahoo, Mic, The Daily Beast, and Adweek
  • More than 67,000 social shares
  • A whole lot of discussion

(Image: CNET coverage)

Beyond these metrics, Abodo has seen additional benefits such as partnership opportunities. Since this campaign launched, they’ve been approached by a nonprofit organization to collaborate on a similar type of piece. They hope to repeat their success by leveraging the nonprofit’s substantial audience and PR capabilities.

Essential tips for minimizing risk around contentious content

We find that good journalism practices can greatly reduce the risk of a negative response. Keep the following five things in mind when creating attention-grabbing content.

1. Presenting data vs. taking a stance: Let the data speak

Rather than presenting an opinion, just present the facts. Our clients are usually fine with controversial topics as long as we don’t take a stance on them and instead allow the data we’ve collected to tell the story for us. Facts are facts, and that’s all your content needs to offer.

If publishers want to put their own spin on the facts you present, or audiences see the story the data are telling and want to respond, the conversation opens up and can generate a lot of engagement.

For the Abodo campaign, the data we presented weren’t a direct reflection of our client but rather came from an outside source (Twitter). We packaged the campaign on a landing page on the client’s site, which includes the design assets and an objective summary of the data.

(Image: Abodo landing page)

The publishers then chose how to cover the data we provided, and the discussion took off from there. For example, Slate called out Louisiana’s unfortunate achievement of having the most derogatory tweets.

(Image: Slate coverage)

2. Present more than one side of the story

How do you feel when you watch a news report or documentary that only shares one side of the story? It takes away credibility from the reporting, doesn’t it?

To keep the campaign topic from being too negative and one-sided, we looked at the most prejudiced and least prejudiced tweets. Including states and cities with the least derogatory tweets added a positive angle to the story. This made the data more objective, which improved the campaign’s credibility.

(Image: least derogatory tweets)

Regional publishers showed off that their state had the nicest tweets.

(Image: Idaho article)

And residents of these places were proud to share the news.

If your campaign topic is negative, try to show the positive side of it too. This keeps the content from being a total downer, which is important for social sharing since people usually want to pass along content that will make others feel good. Our recent study on the emotions behind viral content found that even when viral content evokes negative emotions, it’s usually not purely negative; the content also makes the audience feel a positive emotion or surprise.

Aside from objective reporting, a huge benefit to telling more than one side of the story is that you’re able to pitch the story for multiple angles, thus maximizing your potential coverage. Because of this, we ended up creating 18 visual assets for this campaign, which is far more than we typically do.

3. Don’t go in with an agenda

Be careful of twisting the data to fit your agenda. It’s okay to have a thesis when you start, but if your aim is to tell a certain story you’re apt to stick with that storyline regardless of what the data show. If your information is clearly slanted to show the story you want to tell, the audience will catch on, and you’ll get called out.

Instead of gathering research with an intent of “I’m setting out to prove XYZ,” adopt a mindset of “I wonder what the reality is.”

4. Be transparent about your methodology

You don’t want the validity of your data to become a point of contention among publishers and readers. This goes for any data-heavy campaign but especially for controversial data.

To combat any doubts around where the information came from or how the data were collected and analyzed, we publish a detailed methodology alongside all of our campaigns. For the Abodo campaign, we created a PDF document of the research methodology which we could easily share with publishers.

(Image: methodology example)

Include the following in your campaign’s methodology:

  • Where and when you received your data.
  • What kind and how much data you collected. (Our methodology went on to list exactly which terms we searched for on Twitter.)
  • Any exceptions within your collection and analysis, such as omitted information.
  • A list of additional sources. (We only use reputable, recent sources, ideally published within the last year.)

(Image: sources example)

For even more transparency, make your raw data available. This gives publishers a chance to comb through the data to find additional story angles.

5. Don’t feed the trolls

This is true for any content campaign, but it’s especially important to have an error-free campaign when dealing with a sensitive topic since it may be under more scrutiny. Don’t let mistakes in the content become the real controversy.

Build multiple phases of editing into your production process to ensure you’re not releasing inaccurate or low-quality content. Keep these processes consistent by creating a set of editorial guidelines that everyone involved can follow.

We put our campaigns through fact checking and several rounds of quality assurance.

Fact checking should play a complementary role to research and involves verifying accuracy by making sure all data and assertions are true. Every point in the content should have a source that can be verified. Writers should be familiar with best practices for making their work easy to fact-check; this fact-checking guide from Poynter is a good resource.

Quality assurance looks at both the textual and design elements of a campaign to ensure a good user experience. Our QA team reviews things like grammar, clarity (Is this text clearly making a point? Is a design element confusing or hard to read?), and layout/organization.

Include other share-worthy elements

Although the controversial subject matter helped this campaign gain attention, we also incorporated other proven elements of highly shareable content:

  • Geographic angle. People wanted to see how their state or city ranked. Many took to social media to express their disappointment or pride in the results.
  • Timeliness. Bigotry is a hot-button issue in the U.S. right now amidst racial tension and a heated political situation.
  • Comparison. Rankings and comparisons stimulate discussion, especially when people have strong opinions about the rankings.
  • Surprise. The results were somewhat shocking, since some of the cities and states that ranked “most PC” or “most prejudiced” were unexpected.

The more share-worthy elements you can tack onto your content, the greater your chances for success.

Have you seen success with controversial or polarizing content? Did you overcome a client’s objection to controversy? Be sure to share your experience in the comments.


Ranking #0: SEO for Answers

Posted by Dr-Pete

It’s been over two years since Google launched Featured Snippets, and yet many search marketers still see them as little more than a novelty. If you’re not convinced by now that Featured Snippets offer a significant organic opportunity, then today is my attempt to change your mind.

If you somehow haven’t encountered a Featured Snippet searching Google over the past two years, here’s an example (from a search for “ssl”):

This is a promoted organic result, appearing above the traditional #1 ranking position. At minimum, Featured Snippets contain an extracted answer (more on that later), a display title, and a URL. They may also have an image, bulleted lists, and simple tables.

Why should you care?

We’re all busy, and Google has made so many changes in the past couple of years that it can be hard to sort out what’s really important to your customer or employer. I get it, and I’m not judging you. So, let’s get the hard question out of the way: Why are Featured Snippets important?

(1) They occupy the “#0” position

Here’s the top portion of a SERP for “hdmi cable,” a commercial query:

There are a couple of interesting things going on here. First, Featured Snippets always (for now) come before traditional organic results. This is why I have taken to calling them the “#0” ranking position. What beats #1? You can see where I’m going with this… #0. In this case, the first organic is pushed down even more, below a set of Related Questions (the “People also ask” box). So, the “#1” organic position is really third in this example.

In addition, notice that the “#0” (that’s the last time I’ll put it in quotes) position is the same URL as the #1 organic position. So, Amazon is getting two listings on this result for a single page. The Featured Snippet doesn’t always come from the #1 organic result (we’ll get to that in a minute), but if you score #0, you are always listed twice on page one of results.

(2) They’re surprisingly prevalent

In our 10,000-keyword tracking data set, Featured Snippets rolled out at approximately 2% of the queries we track. As of mid-July, they appear on roughly 11% of the keywords we monitor. We don’t have good historical data from the first few months after roll-out, but here’s a 12-month graph (July 2015 – July 2016):

Featured Snippets have more than doubled in prevalence in the past year, and they’ve increased by a factor of roughly 5X since launch. After two years, it’s clear that this is no longer a short-term or small-scale test. Google considers this experiment to be a success.

(3) They often boost CTR

When Featured Snippets launched, SEOs were naturally concerned that, by extracting and displaying answers, click-through rates to the source site would suffer. While extracting answers from sites was certainly uncharted territory for Google, and we can debate their use of our content in this form, there’s a growing body of evidence to suggest that Featured Snippets not only haven’t harmed CTR, but they actually boost it in some cases.

In August of 2015, Search Engine Land published a case study by Glenn Gabe that tracked the loss of a Featured Snippet for a client on a competitive keyword. In the two-week period following the loss, that client lost over 39K clicks. In February of 2016, HubSpot did a larger study of high-volume keywords showing that ranking #0 produced a 114% CTR boost, even when they already held the #1 organic position. While these results are anecdotal and may not apply to everyone, evidence continues to suggest that Featured Snippets can boost organic search traffic in many cases.

Where do they come from?

Featured Snippets were born out of a problem that dates back to the early days of search. Pre-Google, many search players, including Yahoo, were human-curated directories first. As content creation exploded, humans could no longer keep up, especially in anything close to real-time, and search engines turned to algorithmic approaches and machine curation.

When Google launched the Knowledge Graph, it was based entirely on human-curated data, such as Freebase and Wikidata. You can see this data in traditional “Knowledge Cards,” sometimes generically called “answer boxes.” For example, this card appears on a search for “Who is the CEO of Tesla?”:

The answer is short and factual, and there is no corresponding source link for it. This comes directly from the curated Knowledge Graph. If you run a search for “Tesla,” you can see this more easily in the Knowledge Panel on that page:

In the middle, you can see an entry for “CEO: Elon Musk.” This isn’t just a block of display text — each of these line items is a factoid that exists individually as structured data in the Knowledge Graph. You can test this by running searches against other factoids, like “When was Tesla founded?”

While Google does a decent job of matching many forms of a question to answers in the Knowledge Graph, they can’t escape the limits of human curation. There are also questions that don’t easily fit the “factoid” model. For example, if you search “What is ludicrous mode Tesla?” (pardon the weird syntax), you get this Featured Snippet:

Google’s solution was obvious, if incredibly difficult — take the trillions of pages in their index and use them to generate answers in real-time. So, that’s exactly what they did. If you go to the source page on Engadget, the text in the Featured Snippet is taken directly from on-page copy (I’ve added the green highlighting):

It’s not as simple as just scraping off the first paragraph with a spatula and flipping it onto the SERP, though. Google does seem to be parsing content fairly deeply for relevance, and they’ve been improving their capabilities constantly since the launch of Featured Snippets. Consider a couple of other examples with slightly different formats. Here’s a Featured Snippet for “How much is a Tesla?”:

Note the tabular data. This data is being extracted and reformatted from a table on the target page. This isn’t structured data — it’s plain-old HTML. Google has not only parsed the table but determined that tabular data is a sensible format in response to the question. Here’s the original table:

Here’s one of my favorite examples, from a search for “how to cook bacon.” For any aspiring bacon wizards, please pay careful attention to step #4:

Note the bulleted (ordered) list. As with the table, not only has Google determined that a list is a relevant format for the answer, but they’ve created this list. Now look at the target page:

There’s no HTML ordered list (<ol></ol>) on this page. Google is taking a list-like paragraph style and converting it into a simpler list. This content is also fairly deep into a long page of text. Again, there is no structured data in play. Google is using any and all content available in the quest for answers.
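
To make “plain-old HTML” concrete, here’s a hedged sketch of the two kinds of unannotated markup Google is extracting from in these examples. This is illustrative, not the actual Tesla or bacon pages, and note that there’s no structured data anywhere in it:

```html
<!-- A plain table: no schema.org markup, just HTML that Google can
     parse and reformat into a tabular Featured Snippet.
     The models and prices are invented. -->
<table>
  <tr><th>Model</th><th>Base price</th></tr>
  <tr><td>Sedan S</td><td>$66,000</td></tr>
  <tr><td>SUV X</td><td>$74,000</td></tr>
</table>

<!-- A list-like paragraph: no <ol> element, but numbered steps that
     Google can lift out and convert into an ordered-list snippet. -->
<p>1. Heat the pan. 2. Lay out the bacon. 3. Flip occasionally.
4. Pay careful attention. 5. Drain and serve.</p>
```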

How do you get one?

So, let’s get to the tactical question — how can you score a Featured Snippet? You need to know two things. First, you have to rank organically on the first page of results. Every Featured Snippet we’ve tracked also ranks on page one. Second, you need to have content that effectively targets the question.

Do you have to rank #1 to get the #0 position? No. Ranking #1 certainly doesn’t hurt, but we’ve found examples of Featured Snippet URLs from across all of page one. As of June, the graph below represents the distribution of organic rankings for all of the Featured Snippets in our tracking data set:

Just about 1/3 of Featured Snippets are pulled from the #1 position, with the bulk of the remainder coming from positions #2–#5. There are opportunities across all of page one, in theory, but searches where you rank in the top five are going to be your best targets. The team at STAT produced an in-depth white paper on Featured Snippets across a very large data set that showed a similar pattern, with about 30% of Featured Snippet URLs ranking in the #1 organic position.

If you’re not convinced yet, here’s another argument for the “Why should you care?” column. Once you’re ranking on page one, our data suggests that getting the Featured Snippet is more about relevance than ranking/authority. If you’re ranking #2–#5 it may be easier to compete for position #0 than it is for position #1. Featured Snippets are the closest thing to an SEO shortcut you’re likely to get in 2016.

The double-edged sword of Featured Snippets (for Google) is that, since the content comes from our websites, we ultimately control it. I showed in a previous post how we fixed a Featured Snippet with updated data, but let’s get to what you really want to hear — can we take a Featured Snippet from a competitor?

A while back, I did a search for “What is Page Authority?” Page Authority is a metric created by us here at Moz, and so naturally we have a vested interest in who’s ranking for that term. I came across the following Featured Snippet.

At the time, DrumbeatMarketing.net was ranking #2 and Moz was ranking #1, so we knew we had an opportunity. They were clearly doing something right, and we tried to learn from it. Their page title addressed the question directly. They jumped quickly to a concise answer, whereas we rambled a little bit. So, we rewrote the page, starting with a clear definition and question-targeted header:

This wasn’t the only change, but I think it’s important to structure your answers for brevity, or at least summarize them somewhere on the page. A general format of a quick summary at the top, followed by a deeper dive seems to be effective. Journalists sometimes call this an “inverted pyramid” structure, and it’s useful for readers as well, especially Internet readers who tend to skim articles.
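
In markup terms, that structure looks something like the sketch below. The headings and copy are illustrative, not Moz’s actual page:

```html
<!-- Question-targeted title/header that addresses the query directly. -->
<h1>What is Page Authority?</h1>

<!-- A short, extractable definition comes first... -->
<p>Page Authority is a score that predicts how well a specific page
will rank on search engines.</p>

<!-- ...followed by the deeper dive for readers who want more. -->
<h2>How is Page Authority calculated?</h2>
<p>The longer explanation, methodology, and examples go here.</p>
```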

In very short order, our changes had the desired impact, and we took the #0 position:

This didn’t take more authority, deep structural changes, or a long-term social media campaign. We simply wrote a better answer. I believe we also did a service to search users. This is a better page for people in a hurry and leads to a better search snippet than before. Don’t think of this as optimizing for Featured Snippets, or you’re going to over-optimize and be haunted by the Ghost of SEO Past. Think of it as being a better answer.

What should you target?

Featured Snippets can require a slightly different and broader approach to keyword research, especially since many of us don’t routinely track questions. So, what kind of questions tend to trigger Featured Snippets? It’s helpful to keep in mind the 5 Ws (Who, What, When, Where, Why) + How, but many of these questions will generate answers from the Knowledge Graph directly.

To keep things simple, ask yourself this: is the answer a matter of simple fact (or a “factoid”)? For example, a question like “How old is Beyoncé?” or “When is Labor Day?” is going to be pulled from the Knowledge Graph. While human curation can’t keep up with the pace of the web, Wikidata and other sources are still impressive and cover a massive amount of territory. Typically, these questions won’t produce Featured Snippets.

What and implied-what questions

A good starting point is “What…?” questions, such as our “What is Page Authority?” experiment. This is especially effective for industry terms and other specialized knowledge that can’t be easily reduced to a dictionary definition.

Keep in mind that many Featured Snippets appear on implied “What…” questions. In other words, “What” never appears in the query. For example, here’s a Featured Snippet for “PPC”:

Google has essentially decided that this fairly ambiguous query deserves an answer to “What is PPC?” In other words, they’ve implied the “What.” This is fairly common now for industry terms and phrases that might be unfamiliar to the average searcher, and is a good starting point for your keyword research.

Keep in mind that common words will produce a dictionary entry. For example, here’s a Knowledge Card for “What is search?”:

These dictionary cards are driven by human-curated data sources and are not organic, in the typical sense of the word. Google has expanded dictionary results in the past year, so you’ll need to focus on less common terms and phrases.

Why and how questions

“Why… ?” questions are good fodder for Featured Snippets because they can’t easily be answered with factoids. They often require some explanation, such as this snippet for “Why is the sky blue?”:

Likewise, “How…?” questions often require more in-depth answers. An especially good target for Featured Snippets is “How to… ?” questions, which tend to have practical answers that can be summarized. Here’s one for “How to make tacos”:

One benefit of “Why,” “How,” and “How to” questions is that the Featured Snippet summary often just serves as a teaser to a longer answer. The summary can add credibility to your listing while still attracting clicks to in-depth content. “How… ?” may also be implied in some cases. For example, a search for “convert PDF to Word” brings up a Featured Snippet for a “How to…” page.

What content is eligible?

Once you have a question in mind, and that question/query is eligible for Featured Snippets, there’s another piece of the targeting problem: which page on your site is best equipped to answer that question? Let’s take, for example, the search “What is SEO?”. It has the following Featured Snippet from Wikipedia:

Moz ranks on page one for that search, but that still raises two questions: (1) Is the ranking page the best answer to the question (in Google’s eyes)? And (2) what content on the page does Google see as best matching the question? Fortunately, you can use the “site:” operator along with your search term to help answer both. Here’s a Featured Snippet for [site:moz.com “what is seo”]:

Now, we know that, within just our own site, Google is seeing The Beginner’s Guide as the best match to the question, and we have an idea of how they’re parsing that page for an answer. If we were willing to rewrite the page just to answer this question (and that certainly involves trade-offs), we’d have a much better sense of where to start.

What about Related Questions?

Featured Snippets have a close cousin that launched more recently, known to Google as Related Questions and sometimes called the “People Also Ask” box. If I run a search for “page authority,” it returns the following set of Related Questions (nestled into the organic results):

Although Related Questions have a less dominant position in search results than Featured Snippets (they’re not generally at the top), they’re more prevalent, occurring on almost 17% of the searches in our tracking data set. These boxes can contain up to four related questions (currently), and each question expands to look something like this:

At this point, that expanded content should look familiar — it’s being generated from the index, has an organic link, and looks almost exactly like a Featured Snippet. It also has a link to a Google search for the related question. Clicking on that search brings up the following Featured Snippet:

Interestingly, and somewhat confusingly, that Featured Snippet doesn’t exactly match the snippet in the Related Questions box, even though they’re answering the same question from the same page. We’re not completely sure how Featured Snippets and Related Questions are connected, but they share a common philosophy and very likely a lot of common code. Being a better answer will help you rank for both.

What’s the long game?

If you want to know where all of this is headed in the future, you have to ask a simple question: what’s in it for Google? It’s easy to jump to conspiracy theories when Google takes our content to provide direct answers, but what do they gain? They haven’t monetized this box, and a strong, third-party answer draws attention and could detract from ad clicks. They’re keeping you on their page for another few seconds, but that’s little more than a vanity metric.

I think the answer is that this is part of a long shift toward mobile and alternative display formats. Look at the first page of a search for “what is page authority” on an Android device:

Here, the Featured Snippet dominates the page — there’s just not room for much more on a mobile screen. As technology diversifies into watches and other wearables, this problem will expand. There’s an even more difficult problem than screen space, though, and that’s when you have no screen at all.

If you do a voice search on Android for “what is page authority,” Google will read back to you the following answer:

“According to Moz, Page Authority is a score developed by Moz that predicts how well a specific page will rank on search engines.”

This is an even more truncated answer, and voice search appends the attribution (“According to Moz…”). You can still look at your phone screen, of course, but imagine if you had asked the question in your car or on Google’s new search appliance (their competitor to Amazon’s Echo). In those cases, the Featured Snippet wouldn’t just be the most prominent answer — it would be the only answer.

Google has to adapt to our changing world of devices, and often those devices require succinct answers and aren’t well-suited to a traditional SERP. This may not be so much about profiting from direct answers for Google as it is about survival. New devices will demand new formats.

How do you track all of this?

After years of tracking rich SERP features, watching the world of organic search evolve, and preaching that evolution to our customers and industry, I’m happy to say that our Product Team has been hard at work for months building the infrastructure and UI necessary to manage the rich and complicated world of SERP features, including Featured Snippets. Spoiler alert: expect an announcement from us very soon.


The Future of e-Commerce: What if Users Could Skip Your Site?

Posted by tallen1985

Have you taken a look at Google Shopping recently? Okay, so it isn’t quite the ecommerce monster that Amazon or eBay are, and yes, it’s only filled with sponsored posts. Playing around with it, however, proves that it provides a decent experience.

And that experience got me thinking. What if, instead of being sponsored ads, Google Shopping completely replaced organic search results for transactional queries? Would this be a better user experience? I would have a comparison of products from multiple retailers without even having to visit a website. Would this be a better experience than just “ten blue links?”

In this post I want to share why I think Google Shopping could replace organic search results in the future, and how websites can begin to prepare for this.

A closer look at Google Shopping

We’ve already seen evidence of Google trying to keep users within their search engine with local packs, flights, knowledge graphs, and instant answers. What’s to say shopping isn’t next? Google has already been using Google Shopping ads within search results for a while now, and they recently started testing Showcase Shopping ads, increasing the level of product exposure in a search result.

Check out this Google Shopping result for “red shoes” below:

On first impression, this could easily be an organic shopping result.

Google doesn’t make it crystal clear that these are paid ads, only displaying a small notification in the top right. Do users clearly understand that these products and brands are paying to appear here? As the potential customer, does it even matter, as long as I find the red shoes I’m looking for?

If this had been my search result instead of the typical organic search result, it wouldn’t have been a disappointing experience. In fact, Google would be putting me closer to my desired action of actually researching/purchasing red shoes, without me ever needing to leave Google.

Why do I think the long-term plan could be to use the layout of Google Shopping as a replacement for the current organic result? For me, the Google Shopping landing pages offer:

  • An overall better user experience than most sites — it has familiarity and loads quickly.
  • A range of products from multiple suppliers all in one place.
  • Price comparison of multiple suppliers without me having to load multiple domains.
  • Easy-to-understand faceted navigation.
  • Mobile-friendly — I don’t have to gamble on the search result I’m clicking on.

More intuitive for voice search

This plugs in perfectly with the development and improvement of voice search and the use of compound search queries, which Tom Anthony and I discussed in Distilled’s Searchscape series.

Here’s a previous example of a compound query that Tom Anthony shared at SMX Munich:

I thought I’d test this same process out by trying to find a pair of red shoes using just voice search. The results weren’t perfect and, at this time, not a great user experience. However, compare this to Google Shopping results and you’ll see where we could be heading in the future with organic results.

Below is how the current search results look for a mobile voice search (on the left) versus search results if you click through to Google Shopping (images on the right).

“Okay Google, show me shoes”

Yup, those are definitely shoes. So far, so good for both results!


“Okay Google, under £40”

Not quite under £40, but they are shoes within a reasonable price range. Google’s organic results have dropped product listings and are now showing sales pages for shoe stores.


“Okay Google, in red”

Organic search now lists red shoe landing pages. However, the ads seem way off target, displaying bikes. Google Shopping, on the other hand, is getting pretty close to the product I may be looking to purchase.


“Okay Google, for men”

Organic continues to show me predominantly men’s shoes page results, despite a very specific search query. Compare that to Google Shopping, which now matches the majority of my criteria except price.


While the above search shows the organic SERPs aren’t producing high-quality results for conversational queries, you can be confident that these types of results will continue to improve. And when they do, the Google Shopping result will produce the best answer to the user’s query, getting them to their desired action with the fewest number of clicks.

Time and again we’ve seen Google attempt to reduce the number of steps it takes for a user to get their answer via features such as car insurance, flight comparison, and instant answers. This seems the logical next step for shopping, as well, once search results are dependable.

Will the user still have to come to my site to complete a transaction?

Initially, yes, the user will have to click through to your page in order to purchase. Currently, Google Shopping allows users to find more information about a product within Google before clicking through to a landing page to complete their purchase.

But in the long run, Google could facilitate the transaction for your business without a user ever hitting a website. We saw Google testing this within paid search back in 2015. And while at the time Google stated they have no intention of becoming a retailer (and I still believe this to be true), we certainly know that Google wants to get the user to complete their goal as quickly and easily as possible, ideally remaining within the Google ecosystem.

(Image: Google Shopping testing instant purchase)

What could this mean for webmasters?

A change such as this could be a double-edged sword for businesses. If Google decided to rank your product more prominently than competitors, its ease of use could see an uplift in sales. The downside? If Google decided to monetize this feature, they could look to take a cut from any sales, similar to Amazon and eBay.

Secondly, we would have to refine the way we measure traffic to our site (or not). It’s likely that measurement would have to be based on impressions and conversions rather than sessions. Based on the current reporting format available for Google Shopping, users may have access to clicks and click-through rate, but as no actual data is being passed to Google Analytics this would likely be reported within Google Search Console.

Of course, we’d still want ranking reports as well. Rank tracking companies such as STAT (GetStat) and SEMrush would have to adapt their products to track product listings in the same way that we’ve seen them improve tracking for local packs and structured data over the last 12 months.

How could we prepare for this?

Preparation for a world where Google looks like this falls into two buckets: what you should do if you own the physical products, and what you should do if you don’t (for example, if you’re an affiliate site).

If you own the product:

If you own the product (for example, you stock and sell TVs), then you should be looking to give Google as much information about your products as possible to ensure they have the optimal opportunity to appear within search engine results. Ensure product pages are well-optimized so Google understands the product being displayed. Most importantly, we recommend you get structured data in place (Google’s current preference is for webmasters to use JSON-LD).
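
As a starting point, here’s a minimal JSON-LD Product sketch using the schema.org vocabulary. The product, URLs, and prices are placeholders, and Google’s structured data documentation covers the full set of recommended properties:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example 40-inch LED TV",
  "image": "https://www.example.com/images/led-tv.jpg",
  "description": "40-inch 1080p LED television with two HDMI ports.",
  "sku": "TV-40-LED",
  "brand": {
    "@type": "Brand",
    "name": "ExampleBrand"
  },
  "offers": {
    "@type": "Offer",
    "url": "https://www.example.com/led-tv",
    "priceCurrency": "GBP",
    "price": "299.99",
    "availability": "http://schema.org/InStock"
  }
}
</script>
```

You can validate markup like this with Google’s Structured Data Testing Tool before rolling it out.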

There may also be immediate benefits, such as getting more rich snippets within search results and an increased opportunity of being featured in answer boxes (and leapfrogging competitors), but this will help future-proof your site.


Additionally, we need to start looking higher up the funnel and creating content that will make users come back. I know, I hate saying it, but we have to produce great content! I’ll discuss how The Wirecutter has been approaching this in just a moment.

Further down the pipeline, if Google decided it can handle processing user transactions within Google itself, you’ll want to consider opening up your checkout as an API. This was a requirement in Google’s paid experiment and, as such, could be a necessity to appear here in the future.

If you don’t own the product & are an affiliate or review site, etc.

Ranking for both transactional and information search queries could become even more difficult. It may even become impossible to rank for very specific long-tail search terms.

The recommendations don’t differ too much from above. We should still get structured data in place to reap the rewards now and start producing great content that sits higher up the funnel.

Producing great and useful content

Will Critchlow recently introduced me to The Wirecutter as one of his go-to websites. This is a site that’s taken product research to an extreme. With extremely in-depth articles about which products users should buy, they take the thought process out of “which product should I buy?” and instead, based on my needs, say, “Don’t worry about doing any more research, we’ve done it for you. Just buy this one.”

I’ve recently purchased a range of products, from pens to printers, based on their recommendations. They’ve created useful content, which, after numerous purchases, I now trust, and as a result I’m encouraged to return to their site over and over again.

To finish up, I’d love to hear your thoughts:

  • How might the future of ecommerce look?
  • How have you been using voice search, particularly compound and revised queries?
  • Do you think Google Shopping replacing the current organic search layout would provide an improved user experience?

Reach out to me in the comments below or over on Twitter — @the_timallen.


Does Voice Search and/or Conversational Search Change SEO Tactics or Strategy?

Posted by randfish

We’re hearing a lot about voice search lately, and that trend doesn’t seem likely to disappear. But does it have a direct impact on how you should be thinking about your SEO strategy? In today’s Whiteboard Friday, Rand discusses what to expect when it comes to the future of search and what you can do to stay on top.

(Whiteboard Friday video and whiteboard image)

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about voice search, conversational search, Internet of things search, and how these attributes and the rise in these trends may or may not play a big role in our SEO strategy and tactics for the future.

Today

Today, we have a few sort of nascent beginnings of this, and I made a prediction at the beginning of this year, in my traditional predictions post, that voice search, conversational search, and Internet of things search wouldn’t actually have much of an impact on the web marketing world. What we are hearing from the engines, specifically Google and Bing, is that a higher and higher percentage of their queries are coming in through voice search. However, what we’re not hearing is how this might be changing SEO, or whether it’s changing SEO at all.

“Okay, Google”
So today what we have going on is things like folks asking their device, their Android device, “Okay, Google, what’s the difference between libel and slander?” You might hear this. Maybe you have a question, something you want answered, and Google will respond verbally to you, or they might just show you the results on the screen, and then you can click through to there, or some combination of the two.

“Alexa”
You can ask your Amazon Echo, “Alexa, did Iceland beat England in the Euro soccer game?” (or football game, as English and Icelandic people would call it). In fact, they did. Really sorry about that, England, but I kind of want to see the Icelandic commentator freak out again. That seems exciting.

“Hey, Siri”
For Apple products, “Hey Siri, where can I get Vietnamese rice noodles near here?” And Siri will look around you, and then return some results, that sort of thing.

Talking to cars
Of course, there’s also the idea that more and more cars are becoming hotspots for searches, as drivers ask their cars, or the phones in their cars, things like, “All right, Tesla, when is my brother-in-law’s birthday, and does he drink whiskey?” (This is not real; you can’t actually say this to a Tesla yet, but I’m sure it’s coming.) Hopefully, your Tesla will be smart enough, through whatever partnerships it has with these other technology companies, to be able to answer that.

This is what’s happening today. We’re seeing the rise in conversational and voice search. So there’s a new and different kind of keyword demand and also a new and different kind of result set that returns because of that. Does it really make a huge difference from an SEO perspective? Well, I’m going to argue that not yet, no, it doesn’t. However, I think there are strategic and tactical things that we should be paying attention to as this world progresses, this world of voice search, conversational search progresses.

Strategically speaking

1. The rise in instant answers without search results will continue

We’re going to see a continual rise in instant answers. What is happening is that when a lot of these voice and conversational search queries are coming through, they tend to be longer, and they tend to be seeking out an answer that a device can quickly give you a direct answer to. Things like, what I placed here, and this requires some logic and some understanding from the machines, some contextual understanding, but it’s not that challenging, and the machines are doing a good job of this.

I suspect that what we’ll continue to see is that the percent of queries with an instant answer result keeps rising over time. Now this is percent, not absolute numbers. I mean, obviously the absolute number is rising, but that doesn’t necessarily mean that the traditional kinds of queries that have been made to search engines are going to disappear.

In fact, one of the things that I would urge you and caution against is to say, “Oh, because voice and conversational search are rising, we should stop paying attention to direct, traditional web search and web results.” It may in fact be the case that even with the rise of all these instant answers and new SERP features and voice search that the raw number of clicks on search results in your industry, in your field, for your keywords may actually have gone up despite all these trends. Because these trends are additive, they are not necessarily taking away from other forms of queries, at least not necessarily.

2. Google (& Apple, Amazon, etc.) will continue to disintermediate simplistic answer/data problems

I think Google, Apple, Amazon, and all of the engines that participate in this will continue to disintermediate simplistic data and answer publishers. So it behooves you to question what types of information you're publishing.

The way I’d phrase this is if a certain percent, X percent of queries that result in traffic can be answered in fewer than Y words, or with a quick image or a quick graphic, a quick number, then the engine is going to do it themselves. They don’t need you, and very frankly they’re faster than you are. They can answer that more quickly, more directly than you can. So I think it pays to consider: Are you in the safe or dangerous portion of this strategic framework with the current content that you publish and with the content plans that you have out in the future?

SAFETY DANCE VS. DANGER ZONE

  • Safe: Recipes
  • Dangerous: Cooking conversions

So if you’re in the world of food and cooking, recipes probably very safe. It’s very, very difficult for an engine to say, “Okay, here let me read you the ingredients. Let me show you the photos. Let me give you the entire rundown. I’ll give you the comments. I’ll give you the star rating.” This is too complex.

What’s very simple is cooking conversions. “Alexa, how many pounds of flour do I need to make up a cup?” Very simple cooking conversion, instant answer very possible. Pretty dangerous to be relying on a ton of your click-through traffic for that dangerous stuff.

  • Safe: Sports analysis
  • Dangerous: Sport scores

Sports analysis, very, very difficult for any of these services to try and provide analysis of a game, very easy for them to provide a score.

  • Safe: In-depth product comparison
  • Dangerous: Simplistic product price comparison

Very difficult for them to do an in-depth product comparison, very easy for them to do a specific, simplistic product price comparison. “What are the prices of X on these?”

  • Safe: Learn to code tutorials
  • Dangerous: Quick function lookups

Learn to code tutorials, almost impossible to disintermediate, but a quick function look-up, very easy to disintermediate.

SAFE: If it’s hard to aggregate and present simply, you have a competitive advantage, and you probably will be able to keep that traffic.

DANGEROUS: If it is easy to aggregate and present simply, you’re probably in dangerous territory long term.

Tactically speaking

There are three things that we really think about as we move to the conversational and voice search world. Those are…

1. Keyword research & targeting requires SERP feature analysis

It requires SERP analysis of both desktop and mobile, and preferably in the future I think we’re actually going to be looking for keyword research tools that can perform a voice query and then can tell us what the results either look like or sound like from the engine.

We need to do our prioritization of keyword targeting, which keywords we actually want to select and which keywords we want to create content for and try to rank for, based on our click-through opportunity and our value. If we don’t have that information and that data, then we’re probably going to be choosing some keywords unwisely compared to our competition who is thinking about this.

2. Content structure should optimize toward formats engines will use in their instant answers

If someone searches for libel versus slander, and you rank on the first page with the right content structure, Google may pull you into that instant answer box. What we've seen from our research is that being in that instant answer box is not a bad thing. In fact, it tends to increase click-through rate and overall traffic for many, many publishers. That's not true for everyone. Some instant answers really do disintermediate queries, like "Iceland versus England, what was the score?" If Google just tells you, you don't need to click through. But for libel versus slander, you may see that libel is a written or published defamatory statement, while slander is spoken, and it's very likely that people will still click through to learn more about the subject. Then you have an opportunity to serve up ads, your services, or whatever product you're selling. So format things intelligently.

Dr. Pete did a great blog post* on how to rank number zero, i.e. how to get into those instant answer results. He recently did a presentation at SMX Advanced that he's published on SlideShare, which you can check out as well. Both of those resources are very handy.

*Editor’s note: This is indeed a great blog post, but it’s still a draft. Stay tuned — we’ll share this with you on Tuesday, July 26th. 🙂

3. Keep an eye on absolute volume and search volume demand trends, NOT just percentages of queries and aggregated stats

So imagine keyword search volume for the terms and phrases you care about, where the orange is typed search and the green is voice search. It can look like voice has reached 60% or more and overtaken typed search. But what's actually happened is that, year over year, typed search has gone up as well. It didn't stop paying to rank for these keywords; in fact, it paid more and more dividends. It's just that voice search grew even faster. So we have to be cautious about thinking of voice as disintermediating or taking over our industry or our content. Rather, we should think of it as additive and pay close attention to true overall volume demand, both typed and voice, over time.

All right gang, look forward to your tactics, your strategies for voice and conversational search, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Architecting a Unicorn: SEO & IA at Envato (A Podcast by True North)

Posted by Loop11

Rapid growth. In the business world, it’s generally thought of as a good thing — scratch that, a great thing. But when you’re an ecommerce site, that rapid growth can also mean more hurdles to jump, especially when it comes to your SEO and information architecture.

In this episode of True North, you’ll be given a firsthand look at how one company found a way to overcome the obstacles and unite their processes of search, discovery, and transactions.

Architecting a Unicorn – SEO & IA at Envato

http://ift.tt/2aaGhtB

Episode transcript

Ben Newton: "Hi. I'm a serial entrepreneur." We hear that a lot these days, don't we? I don't know about you, but when I hear a person say that, I find it kind of repellent, as though someone has blown cigarette smoke into my personal space. It didn't always use to be this way. Ten years ago, for instance, being an entrepreneur wasn't the buzzword it is today. Another thing that happened 10 years ago was the founding of a bootstrapped company called Envato.

Chris Thelwell: We’re a marketplace for creative professionals. So we have people that produce assets for us like WordPress themes, like graphic assets, like photography and videos, and then we have people that want to buy that kind of stuff. And we’re the marketplace that fits in between.

Ben: Unlike the many wantrepreneurs of today, Envato has actually helped to create thousands of real entrepreneurs who have been hugely successful.

Fiorella Rizzà: A lot of our authors were able to quit their day-to-day job and just focus on doing what they love. We have stories of authors that have built houses and were able to provide for their children and were able to stay at home and spend more time with their family or travel around the world. To me it’s — we actually help people reach for freedom.

Ben: In the last six years, Envato has grown exponentially. Today, it has roughly 10 million assets available for sale through its network of websites. Some of their sites, such as ThemeForest, are among the most trafficked websites in the world. All of this is good news, but with a rapid growth fueled by user-generated content, problems are created. Many of these issues can be thrown into two main buckets: information architecture and search engine optimization, also known as IA and SEO.

Kate Hunter: We’ve got sort of two streams. We have new products. So when it comes to new products, it’s about working with those teams prior to development and mapping out a structure that will allow it to scale and not run into architecture problems down the track, that will inhibit the ability to grow organic traffic. And then conversely on the marketplaces, if you architect for a small amount of categories or a small amount of content 10 years ago, and you now have a large amount of content and you now have to shoehorn that into the original structure you created, it’s not necessarily the best fit.

Ben: So put yourself in Envato’s shoes. You have a runaway success, which is only growing in momentum, yet you know some things have to change. The bones of Envato need to be altered to not only handle future growth, but also to get the most out of what is currently there.

There’s no easy path here. And no step forward seems to be without two steps back, but they’re not letting that stop them. Let’s find out how they’re planning for success.

Search, discovery, and transactions in one seamless experience

Hi, and welcome to True North. My name is Ben Newton and I’m from Loop11. This is the podcast where we share stories of discovery and innovation.

As we found out, Envato manages roughly a dozen websites which are geared towards connecting creatives, from around the world, with people who need their services or assets. Having millions of visitors pouring into their sites would seem like the kind of problem you want to have, and it is. They’re not complaining.

What they are trying to do, though, is figure out how to give these visitors the best product, and this ultimately means wrapping search, discovery, and transactions into one seamless experience.

So whose problem is this? Is it an SEO problem? Or maybe it’s IA. Maybe it’s a UX or development issue.

Fiorella: From 2010 to 2015, the number of items that were uploaded to our marketplaces grew exponentially, roughly 50,000 items at the beginning of 2010 to almost 10 million items now.

Ben: That’s Fiorella Rizzà. Her role straddles copywriting, search, and information architecture.

Fiorella: But of course, what happens when you get all this content coming through is you just want to put it out there and make sure that the users are going to see it. So we have this technical constraint where an item cannot fall under multiple categories. It might sound like it’s not a big deal, but it’s actually extremely . . . like it’s a huge constraint, and it doesn’t allow for flexibility.

So what happened was the easiest way to go was just create new categories that would accommodate for a new technology that would come up or a new type of item. But of course, the result of that is that the IA is, at the moment, extremely complex and not intuitive at all.

Ben: Often problems with IA don’t become apparent until you’re literally using the product and a completely rational use case exposes a severe limitation.

Fiorella: So we have a top-level category on VideoHive that’s called Stock Footage, which is pretty straightforward. You’re going to go there, you’re going to find Stock Footage. But then Stock Footage has a number of subcategories that — I’ll give you a few examples: Holiday, Water, Nature, Hobby.

So there is an item where you can just see a boat on water, and that’s it. That’s all you see, and it’s currently falling under the Water subcategory of Stock Footage. But there’s also a Vehicles subcategory, a Hobby subcategory. All these categories would be okay for this video.

The problem there is that those pieces of information are all good to describe that item. The problem is they have been treated in the wrong way. It’s not really about the content in this case. The content is fine. The problem is how we captured the information and how we presented it to the user.

So if you search for it from the homepage, then you’re going to find it. But you click “Stock Footage” and then you click “Vehicles,” and then you’re inside the Vehicles category, and you search for a boat, you’re not going to find that one because that one belongs to another subcategory. So you’re not going to find it.

SEO is what happens when everything else is done right — including UX & IA

Ben: Although navigation and discoverability are arguably two of the most important facets for an ecommerce website, poor IA has wider-reaching implications. For a company like Envato, organic search is a massive source of traffic, and the level of your IA relates directly to your potential to perform in search engines.

Kate: To steal a quote from a conference I was just at, SEO isn’t something you can sprinkle on or apply over the top of something. It’s what happens when everything else is done right, and one of those things that has to be done right is UX and IA.

Ben: That’s Kate Hunter. She’s the Organic Performance Manager at Envato.

Kate: Search engines are trying to emulate human behavior. If it’s hard for them to crawl, if it’s hard for them to understand, then they’re not going to rank you as high as possible because they don’t think you’re doing as good a job as you could.

So at the moment I can say we've mapped our click-through rates based on where you can possibly rank, and our content in some cases deserves to rank two positions higher than it currently does, but it doesn't because search engines aren't able to crawl it efficiently. That also means we aren't able to distribute our PageRank efficiently between our pages, which means discoverability and authority are very hard to achieve, which is why we don't rank those two positions higher.

So the other thing is competition. So sometimes if you were to launch a niche and you launch with a terrible IA and it stays that way, but no one ever competes with you, you’ll always rank for that because you are still the best content. But the Envato business competes in a highly competitive field with a lot of money attached to them, and the reality is our competitors are building sites the way we would build them, if we built ThemeForest today, except we built ThemeForest 10 years ago.

Ben: So this cuts to the core of the problem Envato is facing. The direction that information architecture should head is clear, but how and when to implement those changes isn’t. Their decisions are bookended by the urgency to stay in front of newer competitors and the realization that the old architecture takes a long time to change. It’s the common scenario of too big to start again, but too important not to address.

Chris: Legacy is a huge issue. You can plan where you want to be, you can plan the future, but it's a similar issue with design. We could design a really great marketplace, but we can't deliver that all at once. We're too big. We have to deliver in little tiny steps, and we'll probably never reach that end goal.

Ben: That’s Chris Thelwell. He’s the head of UX and Design at Envato.

Chris: So that kind of vision idea of where you want to get to is really hard to achieve, and you have to kind of work out how we can take those little steps together. One of the examples is, who believes in clicking logos to go back to the homepage? Now, we’ve seen a significant amount of traffic that goes to our homepage. That’s the exits on that page. The theory behind that is that people are clicking on that logo expecting to go somewhere different than where we send them. So it’s kind of trying to understand why you get the results you do.

We’ve got a page with a very high exit rate. You’re trying to understand, why has it gotten that higher exit rate? Where do the people come from to visit? And it’s not necessarily a problem of the page, it’s maybe a problem of where the link to that page was, and we’re constantly trying to understand those things.

Ben: So knowing your product and the user data behind it is key to understanding the problems you need to address. We’ve also heard that SEOs are a big reason for getting your IA right, but can it work the other way? Can improving your SEO help your information architecture?

Can improving your SEO help improve your IA?

Kate: Increasingly important for SEO are tone of voice and authenticity. Google has always said it's trying to get its algorithms to understand results and websites in the same way that humans do, and it's never been able to do that more than it does now.

A great example is that, two years ago, best practice would be to not use stop words. So stop words being things like “on, a, by, from” because you’ve got character limits in your title tags and that’s a waste of characters in that title.

Since Hummingbird, the difference is that these stop words are actually really, really important, because they're no longer a waste of characters: stop words help to define intent. So that's where copywriting comes in. For example, in that page title, instead of using "WordPress templates" or "WordPress themes" with the pipe character and then the term "ThemeForest," it will now say "WordPress themes from ThemeForest," because we need to indicate that we are a platform that allows people to sell these.

If we said "buy," that would indicate that we make these ourselves. We need to say they're from us because we are the platform. If you think about how people talk or use voice search, they wouldn't just say "WordPress themes ThemeForest"; they'd say "WordPress themes from ThemeForest." So how you'd search if you were searching verbally is a good indicator of how you should be looking at your on-page text.
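To picture that change, here's roughly what the before-and-after title tags would look like (illustrative HTML only, not Envato's actual source):

```html
<!-- Old best practice: strip stop words to save characters -->
<title>WordPress Themes | ThemeForest</title>

<!-- Post-Hummingbird: stop words like "from" help signal intent -->
<title>WordPress Themes from ThemeForest</title>
```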

Ben: What this can teach us is to keep coming back to what real people would respond to. How would normal people group or search for your information? This is easily forgotten, especially when the focus is on rankings and not the end user.

Now, Kate has said that good SEO is a result of everything else being done right. But that doesn't mean it isn't constantly being monitored during each and every process of UX, design, and IA development. Both Kate and Fiorella are constantly forming benchmarks, running tests, and measuring results to ensure that the waves of constant improvement keep flowing.

Fiorella: When you create or change the information architecture, basically what you’re doing is you are defining or redefining the discovery patterns of users. So whether they’re going to be successful or unsuccessful is all up to how you structure the information. What we’ve found to be extremely useful is card sorting and especially tree-testing exercises. First we test the current structure and then we come up with a new proposed way of organizing the content, and we test it again and see how the results compare.

Ben: The way Fiorella executes tree testing is to use an online tool that is completely removed from their website's design and content, so as not to influence the results. It's an interface that shows just the categories, and then the subsequent subcategories as the user proceeds through the test. She then configures tasks for the user to complete, which provide feedback.

For example, if she was testing the ThemeForest IA, the task might be for a user to pretend they were a restaurant owner looking for a new website template. They’d be asked to navigate the categories until they found an area which they believed would contain the content they were looking for.

Fiorella would then analyze the user data in aggregate, looking for the most common paths taken by users and what percentage of them found the right pages.

This example also acts as a counterpoint to the SEO monitoring and testing done by Kate. While IA can be done offline and removed from the actual website, SEO is a constant monitoring and tweaking process, based on what’s actually happening out in the wild.

Kate: One of the first things I did coming into the business was put in place a rank tracking tool, which allows me to see how we currently rank for the popular terms that are relevant to us. I now know where everything ranks, and I know where I think we deserve to rank. For me, SEO isn't about number one. Number one is a very old-school place to play, particularly because we're a global company. You have to be one of the best 10 results worldwide, so it's a very competitive game.

In WordPress, for example, we're highly relevant to WordPress, but WordPress is not our business. WordPress is WordPress.org or WordPress.com. So ThemeForest ranks number two in the U.S. for "WordPress themes." We cannot aim to be any higher than that because we will never trump the original source.

So for a business like ours, which doesn't actually own its products, in a lot of cases we can't be number one. The best we can hope for is number two, which is not a bad thing by any means. We're simply not the most relevant result in every case. So the goal is to improve. I have an idea of where we deserve to rank, and that's my goal. Number one is not the goal.

Build with a vision of the future in mind

Ben: So as Envato moves forward, there are two clear and separate ways in which they’re addressing the problem of IA and SEO.

Kate: We have, I guess, two streams. There’s the one working with the existing aging platform which we’re retrofitting, and then there’s working with the new products. Baking in IA before development, so mapping out what the future looks like. If you’re an online clothing retailer, you might only have pants and t-shirts to sell right now, but could you imagine where you sold all sorts of apparel in the future? And if so, what would the IA for a really detailed clothing store or online clothing store look like in 10 years? Map that structure and then build for that structure, but only populate the content you have now.

Fiorella: To me, the thing that helped most was that we made the decision to go with the best-case scenario. Imagine we don't have any constraints: what is this going to look like? That helps you have a vision. You know where you're going and what you're heading towards, and it prevents you from losing track of what you're doing and just wandering off, thinking about other possible solutions, which is very likely to happen otherwise.

Ben: To find out more about the team at Envato and how they’re thinking about the challenges they face, go to inside.envato.com. If you have a story you’d like us to consider for the show, please visit our website and send us an email.

You can subscribe to our show on iTunes where you can also rate and review us. Or go to truenorthpodcast.com and join the community.

Our music is by the Mysterious Breakmaster Cylinder.

True North is produced by Loop11. We’ll see you next time.


​How to Create Images That Attract & Convince Your Target Niche

Posted by nikkielizabethdemere


Any old picture might be worth a thousand words. But your target niche doesn’t need or want a thousand words. Your ideal audience needs the right words, paired with the right images, to tell a story that uniquely appeals to their deepest desires.

Studies show that people understand images faster than words, remember them longer, and if there’s a discrepancy between what we see and what we hear, our brains will choose to believe what they see. Our brains prioritize visual information over any other kind, which makes images the fast-track to connection all marketers are looking for.

So don’t just slap some text on a stock photo and call it good. You can do better. Much better. And I’ll show you how.

Understand the symbolic underpinnings

This homepage from Seer Interactive does a lot right. The copy below this central image is golden: “We’re Seer. We pride ourselves on outcaring the competition.” Outcaring? That’s genius!

But, I would argue, pairing this image with these words, “It’s not just marketing, it’s personal,” is less than genius. There’s nothing personal about this picture. Sure, there are people in it, but chatting with a group of coworkers doesn’t say “personal” to me. It says corporate.

What if they paired those words with this free image by Greg Rakozy from Unsplash?

SOgjVjt.pngThere’s something about this image that isn’t just personal; it’s intimate. Two people connecting in the dark, surrounded by snowflakes that almost look like white noise. Could this be a metaphor for reaching out through the noise of the Internet to make a personal connection? To get someone to fall in love (with your brand) even?

Many philosophers, anthropologists, sociologists, and psychologists have pointed out that humans are uniquely symbolic creatures.
– Clay Routledge Ph.D., The Power of Symbolism, Psychology Today

A truly powerful image speaks to us on a symbolic level, feeding us information by intuition and association. Humans are associative creatures. We naturally derive deep, multifaceted meanings from visual cues, an idea brought into prominence by both Sigmund Freud and Carl Jung.

The magic behind an effective symbol is its ability to deliver messages to both our conscious minds and subconscious awareness. When choosing the right image for marketing copy — whether an ad or the “hero” section of your website — consider not just what you want to tell people, but what you want them to feel.

A symbol must possess at one and the same time a double or a multiple significance … Thus all symbols possess both a ‘face’ and a ‘hidden’ value, and it is one of the great achievements of psychology to have shown how the ‘hidden’ value is generally, from the point of view of function, the more important. …Behind this face value lies a mass of undifferentiated feelings and impulses, which do not rise into consciousness, which we could not adequately put into words even if we wanted to… and which, though they go unattended to, powerfully influence our behavior.
– F.C. Bartlett, 'The social functions of symbols,' Australasian Journal of Psychology and Philosophy

And, of course, as you’re looking through images, consider this:

What type of images and experiences will resonate with your target audience’s deepest desires?

This, of course, requires you to have built out a robust buyer persona that includes not just their demographic information with a catchy name but also their extracurricular passions: the driving forces that get them out of bed and into the office each day.

As with conversion copywriting, the key to success is identifying motivations and using them to create a visual representation of your niche’s most desired outcomes.

Set the stage for an experience, not just a product

In keeping with the theme of images that deliver the desired outcome, the most effective online ads do this in a way that invites the viewer to experience that outcome. Instead of featuring simply a product, for example, these ads set the stage for the experience that buying the product just might enable you to have.

ModCloth is a master of this. Doesn’t this image make you want to take a nap in a nice, cozy cabin? You can get that experience (or something like it) if you buy their $200 hammock.

Unless you live in the deep woods of the Appalachian mountains, your home will never look like this. But some of us wish ours did, and we're clearly the target audience. This picture speaks to our deepest need to get away from everyone and everything for some much-needed rest and recuperation.

When choosing images, it’s just as important to consider symbolism as it is to consider the target viewers. What experience will resonate with them most? What images will sell their desired experiences?

ModCloth’s recent “road trip” slider doesn’t say anything about the clothes they’re trying to sell, for example. But it does speak to a sense of adventure and the power of female friendships, both of which are defining characteristics of their target niche of millennial women with a delightfully quirky fashion sense.

cWEVdqk.pngYou don’t have to be a clothing company to capitalize on this idea or even a B2C company. Check out how these B2B companies use images to make their words not just read, but felt.

LU9kd3l.pngDon’t you feel like you’re Superman out for a midnight joyride? All the world at your fingertips? Yeah, that’s the point. What they’re selling, essentially, is omniscience via data. All the benefits of DC Comics-like superpowers, minus the kryptonite.

You might not catch it at first glance, but look at how cozy these people are. They're wearing knit sweaters (not suits) while cradling warm cappuccinos in their hands — clearly, this sales meeting is going well. No pressure tactics here. Quite the opposite.

For this example from Blitz Marketing, you'll have to visit their website, because this isn't a static image — it's a video montage designed to get you PUMPED! Energy practically radiates off the screen (which, we are left to infer, is the feeling you'd get all the time if you worked with this creative marketing agency).


Piston, another ad agency, takes a more subtle approach, which I love. Instead of having your standard stock photo of “man in a suit,” they did a custom photo shoot and added quirky elements, like a pink candy ring. I find this image particularly powerful because it effectively sets up an expectation (man in a suit), then adds a completely unexpected element (candy ring), which is conveniently located behind the word CREATIVE. This illustrates just how creative this agency is while remaining utterly professional.

Numbers are compelling. Numbers with visual aids? Unstoppable.

Let’s say your buyer persona isn’t driven by emotion. Show this persona a grid of city lights from 2,000 feet up, and he or she won’t feel like Superman. They’ll be wondering what this has to do with the ROI they can expect.

Someone get this persona some numbers already.

When conversion depends heavily on gaining credibility, pictures can be very compelling. In fact, one study out of the Victoria University of Wellington in New Zealand showed that simply having an image makes the text alongside that image more believable, even if the image had nothing at all to do with the text.

When people evaluate claims, they often rely on what comedian Stephen Colbert calls ‘truthiness,’ or subjective feelings of truth.
– Nonprobative photographs (or words) inflate truthiness, by E.J. Newman, M. Garry, D.M. Bernstein, J. Kantner, D.S. Lindsay

Essentially, any image is better than nothing. But the right image? It’s worth even more. In a similar study by the Psychology departments at both Colorado State University and the University of California, researchers experimented with brain images.

Brain images are believed to have a particularly persuasive influence on the public perception of research on cognition. Three experiments are reported showing that presenting brain images with articles summarizing cognitive neuroscience research resulted in higher ratings of scientific reasoning for arguments made in those articles, as compared to articles accompanied by bar graphs, a topographical map of brain activation, or no image.
– Seeing is believing: The effect of brain images on judgments of scientific reasoning, by David P. McCabe and Alan D. Castel

However, what if we traded in this either/or philosophy (either picture or no picture, either picture or bar graph) for a philosophy that uses the best of all resources?

Having the right image, supported by the right words, and given credibility by real numbers (as statistics or in graphs/charts) is the most effective possible combination.

Statistics have also proven to be compelling. In Blinded with science: Trivial graphs and formulas increase ad persuasiveness and belief in product efficacy, the study out of Cornell University reveals that just the appearance of being scientific increases an ad’s persuasiveness. What does that “appearance” require?

Graphs. Simple, unadorned graphs.

And, those graphs were even more effective at persuading people who had “a greater belief in science” (e.g., your logical buyer persona).

Put the right words together with the right image, then overlay with a supportive set of numbers, and you can convince even the most logical persona that you have the solutions they seek.

Caveat: When the name of the game is building credibility, don’t undermine yourself with shoddy data and lazy analysis. One of your smart customers will, without fail, call you out on it.

Graphs and charts don’t have to be fancy or complicated to be convincing. Check out these two graphs from the Kissmetrics article Most of Your A/B Test Results are Illusory and That’s Okay by Will Kurt.

Do you even need to read the rest of the article to get the point? (Though you will want to read the article to find out exactly what that scientist is doing so right.) This is highly effective data storytelling that shows you, at a glance, the central point the author is trying to make.

CubeYou, a social data mining company that turns raw numbers into actionable insights, does great data storytelling by combining stats and images. Not only do these visuals deliver demographic information, they put a face on the target at the same time, effectively appealing to both logical and more intuitive personas in one fell swoop.

And for even more powerful images, look at the data visualizations Big Mountain Data put together of the #WhyIStayed domestic violence hashtag. Talk about telling an impactful story.

Then there are infographics that include data visualization, images, and analysis. I love this one from CyberPRMusic.com.

qelQyNp.pngIt’s all about telling their story

Uninspired visuals are everywhere. Seriously, they’re easy to find. In researching this article, I could find 20 bad images for every one good one I’ve included here.

Herein lies an opportunity to stand out.

Maybe the intersection of words, images, and numbers isn’t well understood in online marketing. Maybe having free stock photos at our fingertips has made us lazy in their use. Maybe there aren’t enough English majors touting the benefits of effective symbolism.

Whatever the reason, you now have the chance to go beyond telling your target niche about your product or service’s features and benefits. You have the ability to set your brand apart by showing them just how great life can be. Free tools such as Visage make it possible.


But first, you have to care enough to make compelling images a priority.

What are your thoughts on using stunning visuals as needle-movers for your brand?


The 15 Most Popular Myths About International SEO, Debunked

Posted by Kaitlin

There are lots of myths and misconceptions surrounding the subject of international SEO. I recently gave a Mozinar on this; I’d like to share the basis of that talk in written form here. Let’s first explore why international SEO is so confusing, then dive into some of the most common myths. By the end of this article, you should have a much clearer understanding of how international SEO works and how to apply the proper strategies and tactics to your website.

One common trend is the lack of clarity around the subject. Let’s dig into that:

Why is international SEO so confusing?

There are several reasons:

  • Not everyone reads Google Webmaster Guidelines and has a clear understanding of how they index and rank international content.
  • Guidelines vary among search engines, such as Bing, Yandex, Baidu, and Google.
  • Guidelines change over time, so it’s difficult to keep up with changes and adapt your strategy accordingly.
  • It’s difficult to implement best practices on your own site. There are many technical and strategic considerations that can conflict with business needs and competing priorities. This makes it hard to test and find out what works best for your site(s).

A little history

Let’s explore the reasons behind the lack of clarity on international SEO a bit further. Looking at its development over the years will help you better understand the reasons why it’s confusing, laying some groundwork for the myth-busting that is about to come. (Also, I was a history major in college, so I can’t help but think in terms of timelines.)

Please note: This timeline is constructed almost entirely on Google Webmaster blog announcements. There are a few notes in here about Bing and Yandex, but it’s mostly focused on Google and isn’t meant to be a comprehensive timeline. Mostly this is for illustrative purposes.


2006–2008

Our story begins in 2006. In 2006 and 2007, things are pretty quiet. Google makes a few announcements, the biggest being that webmasters could use geo-targeting settings within Webmaster Tools. They also clarify some of the signals they use for detecting the relevance of a page for a particular market: ccTLDs, and the IP of a server.

2009

In 2009, Bing reveals its secret sauce, which includes ccTLDs, reverse IP lookup, language of body content, server location, and location of backlinks.


2010

In 2010, things start to get really exciting. Google reveals some of the other hints that they use to detect geo-targeting, presents the pros and cons of the main URL structures that you can use to set up your international sites, and gives loads of advice about what you should or shouldn't do on your site. Note that at just about the same time Google says they ignore the meta language tag, Bing says that they do use it.

Then, in fall of 2010, hreflang tags are introduced to the world. Until then, there was no standard page-level tag to tell a search engine what country or language you were specifically targeting.

2011

Originally, hreflang tags were only meant to help Google sort out multi-regional pages (that is, pages in the same language that target different countries). Then, in 2011, Google expands hreflang tag support to work across languages as well. Also during this time, Google removes the requirement to use canonical tags in conjunction with hreflang tags, saying they want to simplify the process.

2012

Then in 2012, hreflang tags are supported in XML sitemaps (not just page tags). Also, the Google International Help Center is created, with a bunch of useful information for webmasters.


2013

In 2013, the concept of the “x-default” hreflang tag is introduced, and we learn that Yandex is also supporting hreflang tags. This same year, Bing adds geo-targeting functionality to Bing Webmaster Tools, a full 5 years after Google did.

2014

Note that it isn’t until 2014 that Google begins including hreflang tag reporting within Google Webmaster Tools. Up until that point, webmasters would have had to read about hreflang tags somewhere else to know that they exist and should be used for geo-targeting and language-targeting purposes. Hreflang tags become much more prominent after this change.

2015

In 2015, we see improvements to locale-adaptive crawling, and some clarity on the importance of server location.

To sum up, this timeline shows several trends:

  • Hreflang tags were super confusing at first
  • There were several iterations to improve hreflang tag recommendations between 2011 and 2013
  • Hreflang tag reporting was only added to Google Search Console in 2014
  • Even today, only Google and Yandex support hreflang. Bing and the other major search engines still do not.

There are good reasons for why webmasters and SEO professionals have misconceptions and questions about how best to approach international SEO.

At least 25% of hreflang tags are incorrect

Let’s look at the adoption of hreflang tags specifically. According to NerdyData, 1.7 million sites have at least one hreflang tag.

I did a quick search to find out:

438,417 sites have hreflang="uk"

7,829 sites have hreflang="en-uk"

Both of these tags are incorrect. The correct ISO code for the United Kingdom is actually gb, not uk. Plus, you can’t target by country alone — you have to target by language-country pairs or by language. Thus, just writing “uk” is incorrect as well.
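To make the mistakes concrete, here's a minimal sketch of wrong versus right values (example.com and its URLs are placeholders, not a real site):

```html
<!-- Incorrect: "uk" is not a valid ISO code, and you can't target a country alone -->
<link rel="alternate" hreflang="uk" href="https://example.com/uk/" />
<link rel="alternate" hreflang="en-uk" href="https://example.com/uk/" />

<!-- Correct: ISO 639-1 language code, optionally paired with an ISO 3166-1 alpha-2 country code -->
<link rel="alternate" hreflang="en" href="https://example.com/" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/uk/" />
```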

That means at least 25% of hreflang tags are incorrect: those two mistakes alone account for 446,246 of the 1.7 million sites, roughly 26%, and I only did a brief search for a couple of the most common errors. You can imagine just how many sites out there are getting these hreflang values wrong.

All of this is to prove a point: the field is ripe for optimization when it comes to global SEO. Now, let’s debunk some myths!

Myth #1: I need to have multiple websites in order to rank around the world.

There’s a lot of talk about needing ccTLDs or separate websites for your international content. (A ccTLD is a country-coded top-level domain, such as example.ca, which is country-coded for Canada).

However, it is possible for your website to rank in multiple locations around the world. You don’t necessarily need multiple websites or sub-domains to rank internationally; in many cases, you can work within the confines of your current domain.

In fact, if you take a look at your analytics on your website, even if it has no geo-targeting whatsoever, chances are you already have traffic coming in from various languages and countries.

Many global brands have only one site, using subfolders for their multilingual or multi-regional content. Don’t feel that international SEO is beyond your reach because you believe it requires multiple websites. You may only need one!

The most important thing to remember when deciding whether you need separate websites is that new websites start with zero authority. You will have to fight an uphill battle to establish and rank those new ccTLDs, and for some companies, organic traffic growth may take many years to materialize after launching them. This is not to say that ccTLDs aren't a good option; you just need to keep in mind that they're not the only option.

Myth #2: “The best site structure for international rankings is _________.”

There’s a lot of debate about what the best site structure is for international rankings. Is it subfolders? Subdomains? ccTLDs?

Some people swear by ccTLDs, saying that in some markets users prefer to buy from local sites, resulting in higher click-through rates. Others champion subdomains or sub-directories.

There is no one answer to the best international site structure; you can dominate using any of these options, and I've seen websites of all structures do so in their verticals. However, there are certain advantages and disadvantages to each, so it's best to research your options and decide which is best for you.


Google has published their pros and cons breakdown of the URL structures you can use for international targeting. There are 4 options listed here:

  • Country-specific, aka ccTLDs
  • Subdomains
  • Subdirectories with gTLDs (generic top-level domains, like .com or .org)
  • URL parameters. These are not recommended.

Subdirectories with gTLDs have the added benefit of consolidating domain authority, while subdomains and ccTLDs make it harder to build domain authority up. In my opinion, subdomains are the least advantageous of the three viable options, because they have neither the distinct geo-targeting advantage of ccTLDs nor the consolidated backlink profile of subdirectories.
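To keep the options straight, here's what each structure might look like for a hypothetical site targeting Germany (example.com and example.de are placeholders):

```
https://example.de/           <- ccTLD
https://de.example.com/       <- subdomain
https://example.com/de/       <- subdirectory on a gTLD
https://example.com/?loc=de   <- URL parameter (not recommended)
```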

The most important thing to think about is what’s best for your business. Consider whether you want to target at a language level or a country level. Then decide how much effort you want (or can) put behind building up domain authority to your new domains.

Or, for those who are more visual learners:


  • ccTLDs are a good option if you’re Godzilla. If branding isn’t a problem for you, if you have your own PR, if building up domain authority and handling multiple domains is no big deal, then ccTLDs are a good way to go.
  • Subdirectories are a good option if you're MacGyver. You're able to get the job done using only what you've got.
  • Subdomains are a good option if you’re Wallace and Gromit. Somehow, everything ends well despite the many bumps in the road.


I researched the accuracy of each type of site structure. First, I looked at Google Analytics data and SEMRush data to find out what percentage of the time the correct landing page URL was ranking in the correct version of Google. I did this for 8 brands and 30 sites in total, so my sample size was small, and there are many other factors that could skew the accuracy of this data. But it’s interesting all the same. ccTLDs were the most accurate, followed by subdirectories, and then subdomains. ccTLDs can be very effective because they give very clear, unambiguous geo-targeting signals to search engines.

However, there’s no one-size-fits-all approach. You need to take a cold, hard look at your business and consider things like:

  • Marketing budget you have available for each locale
  • Crawl bandwidth and crawl budget available for your site
  • Market research: which locales should you target?
  • Costs associated with localization and site maintenance
  • Site performance concerns
  • Overall business objectives

As SEOs, we’re responsible for forecasting how realistically our websites will be able to grow and improve in terms of domain authority. If you believe your website can gain fantastic link authority and your team can manage the work involved in handling multiple websites, then you can consider ccTLDs (but whichever site structure you choose will be effective). But if your team will struggle under the added burden of developing and maintaining multiple (localized!) content efforts to drive traffic to your varied sites, then you need to slow down and perhaps start with subdirectories.

Myth #3: I can duplicate my website on separate ccTLDs or geo-targeted sub-folders & each will rank in their respective Googles.

This myth refers to taking a site, duplicating it exactly, and then putting it on another domain, subdomain, or subfolder for the purposes of geo-targeting.

And when I say “in their respective Googles,” I mean the country-specific versions of Google (such as google.co.uk, where searchers in the United Kingdom will typically begin a search).

You can duplicate your site, but it’s kind of pointless. Duplication does not give you an added boost; it gives you added cruft. It reduces your crawl budget if you have all that content on one domain. It can be expensive and often ineffective to host your site duplicated across multiple domains. There will be cannibalization.

Often I’ll see a duplicate ccTLD get outranked by its .com sister in its local version of Google. For example, say a site like example.co.uk is a mirror of example.com, and the example.com outranks the example.co.uk site in google.co.uk. This is because geo-targeting is outweighed by the domain authority of the .com. We saw in an earlier chart that ccTLDs can be the most accurate for showing the right content in the right version of Google, but that’s because those sites had a good spread of link authority among each of their ccTLDs, as well as localized content.


There’s a big difference between the accuracy of ccTLDs when they’re localized and when they are dupes. I did some research using the SEMRush API, looking at 3 brands using ccTLDs in 26 country versions of Google, where the .com outranked the ccTLD 42 times. You shouldn’t just host your site mirrored across multiple ccTLDs just for the heck of it; it’s only effective if you can localize each one.

To sum it up: Avoid simply duplicating your site if you can. The more you can do to localize and differentiate your sites, the better.

Myth #4: Geo-targeting in Search Console will be enough for search engines to understand and rank my content correctly.

Geo-targeting your content is not enough. Like we covered in the last example, if you have two pages that are exactly the same and you geo-target them in Search Console, that doesn’t necessarily mean that those two pages will show up in the correct version of Google. Note that this doesn’t mean you should neglect geo-targeting in Google Search Console (or Bing or Yandex Webmaster Tools) — you should definitely use those options. However, search engines use a number of different clues to help them handle international content, and geo-targeting settings do not trump those other signals.

Search engines have revealed some of the international ranking factors they use. Here are a few that have been confirmed:

  • Translated content of the page
  • Translated URLs
  • Local links from ccTLDs
  • NAP info — this could also include local currencies and links to Google My Business profiles
  • Server location*

*Note that I included server location in this list, but with a caveat — we’ll talk more about that in a bit.

You need to take into account all of these factors, and not just some of them.

Myth #5: Why reinvent the wheel? There are multinational companies who have invested millions in R&D — just copy what they do.

The problem here is that large multinational companies don't always prioritize SEO. They make SEO mistakes all the time. It's a myth that you should look to Fortune 500 websites or top e-commerce websites to see how to structure your website; they don't always get it right. Imitation may be the best form of flattery, but it shouldn't replace careful thought.

Besides, what the multinational companies do in terms of site structure and SEO differs widely. So if you were to copy a large brand’s site structure, which should you copy? Apple, Amazon, TripAdvisor, Ikea…?

Myth #6: Using URL parameters to indicate language is OK.

Google recommends against this, and from my experience, it’s definitely best to avoid URL parameters to indicate language or region.

What this looks like in the wild is:

http://ift.tt/2a6Qud7

or

http://ift.tt/2a5eHOi

…where the target language or region of the page changes depending on the parameter. The problem is that parameters aren’t dependable. Sometimes they’ll be indexed, sometimes not. Search engines prefer unique URLs.
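The shortened links above hide the actual pattern, so here's a hypothetical illustration of what to avoid versus what search engines prefer:

```
https://example.com/products?lang=de   <- language in a URL parameter (avoid)
https://example.com/de/products        <- one language, one unique URL (preferred)
```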

Myth #7: I can just proxy localized content into my existing URLs.

In this situation, a website will use a visitor's IP address or Accept-Language header to detect their location or browser language preference, then change the content of the page based on that information. So the URL stays the same, but the content changes.
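For instance, a locale-adaptive server might key off a request header like this (a hypothetical request; real header values vary by browser and user settings):

```
GET /products HTTP/1.1
Host: example.com
Accept-Language: de-DE,de;q=0.9,en;q=0.5
```

The same URL returns German content to this visitor and English content to the next, which is exactly the ambiguity that trips up crawlers.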

Google and Bing have clearly said they don’t like parameters and recommend keeping one language on one URL. Proxied content, content served by a cookie, and side-by-side translations all make it very problematic for search engines to index a page in one language. Search engines will appear to crawl from all over the world, so they’ll get conflicting messages about the content of a page.

Basically, you always want to have 1 URL = 1 version of a page.

Google has improved and will continue to improve its locale-aware crawling. In early 2015, they announced that Googlebot will crawl from a number of IP addresses around the world, not just the US, and will use the Accept-Language header to see if your website is locale-adaptive and changes the content of the page depending on the user. But in the same breath, they made it very clear that this technology is not perfect, that it does not replace the recommendation to use hreflang, and that they still recommend you NOT use locale-adaptive content.

Myth #8: Adding hreflang tags will help my multinational content rank better.

Hreflang tags are one of the most powerful tools in the international SEO toolbox. They’re foundational to a successful international SEO strategy. However, they’re not meant to be a ranking factor. Instead, they’re intended to ensure the correct localized page is shown in the correct localized version of Google.

In order to get hreflang tags right, you have to follow the documentation exactly. With hreflang, there is no margin for error. Make sure to use the correct language (in ISO 639-1 format) and country codes (in ISO 3166-1 Alpha 2 format) when selecting the values for your hreflang tags.

Hreflang requirements:

  • Exact ISO codes for language, and for language-country if you target by country
  • Return tags
  • Self-referential tags
  • Point to correct URLs
  • Include all URLs in an hreflang group
  • Use page tags or XML sitemaps, preferably not both
  • Use HTTP headers for non-HTML files like PDFs (both of these methods are sketched below)
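For reference, here's roughly what the sitemap and HTTP header methods look like, as minimal sketches with placeholder URLs based on Google's documented formats:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/en/page</loc>
    <!-- Each URL lists the full hreflang group, including itself -->
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page"/>
  </url>
  <url>
    <loc>https://example.com/de/page</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page"/>
  </url>
</urlset>
```

And for a PDF or other non-HTML file, the equivalent goes in the HTTP response header:

```
Link: <https://example.com/en/guide.pdf>; rel="alternate"; hreflang="en",
      <https://example.com/de/guide.pdf>; rel="alternate"; hreflang="de"
```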

Be sure to check your Google Search Console data regularly to make sure no return tag errors or other errors have been found. A return tag error is when Page A has an hreflang tag that points to Page B, but Page B doesn’t have an hreflang tag pointing back to Page A. That means the entire hreflang association for that group of pages won’t work, and you’ll see return tag errors for those pages in Google Search Console.
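In other words, the tags have to point both ways. A minimal valid pair might look like this (placeholder URLs again):

```html
<!-- On https://example.com/en/ -->
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="de" href="https://example.com/de/" />

<!-- On https://example.com/de/ -->
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="de" href="https://example.com/de/" />
```

Note that this also satisfies the self-referential requirement: each page includes itself in the group.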

Either the page tagging method or the XML hreflang sitemap method works well. For some sites, an XML sitemap can be advantageous because it eliminates code bloat from page tags. Whichever implementation allows you to add hreflang tags programmatically is a good choice. If you use one of the popular CMS platforms, there are tools on the market to assist with page tagging.

Here are some tools to help you with hreflang:

Myth #9: I can’t use a canonical tag on a page with hreflang tags.

When it comes to hreflang tags AND canonical tags, many eyes glaze over. This is where things get really confusing. I like to keep it super simple.

The simplest thing is to keep all your canonical tags self-referential. This is a standard SEO best practice anyways. Regardless of whether you have hreflang tags on a page, you should be implementing self-referential canonical tags.
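Put together, a hypothetical German page would carry a canonical tag pointing at itself alongside its hreflang group:

```html
<!-- On https://example.com/de/page -->
<link rel="canonical" href="https://example.com/de/page" />
<link rel="alternate" hreflang="de" href="https://example.com/de/page" />
<link rel="alternate" hreflang="en" href="https://example.com/en/page" />
```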

Myth #10: I can use flag icons on my site to indicate the site’s language.

Flags are not languages — there’s even a whole website dedicated to talking about this common myth: http://ift.tt/2a6QmKL. It has many examples of sites that mistakenly use flag icons to indicate languages.

For example, the UK’s Union Jack doesn’t represent all speakers of English in the world. Thanks to the course of history, there are at least 101 countries in the world where English is a common tongue. A flag of a country to represent speakers of a language is very off-putting for any users who speak the language but aren’t from that country.

Here’s an example where flag icons are used to indicate language. A better (and more creative) approach is to replace the flag icons with localized greetings:


If you have a multilingual site, you should not use flags to represent languages. Instead, use the name of each language, written in that language: English should be "English," Spanish should be "Español," German should be "Deutsch," and so on. You'd be surprised how many websites forget to use localized language or country spellings.
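A simple language switcher along those lines might look like this (illustrative markup only; class names and paths are placeholders):

```html
<!-- Language names written in their own language, no flags -->
<ul class="language-switcher">
  <li><a lang="en" href="/en/">English</a></li>
  <li><a lang="es" href="/es/">Español</a></li>
  <li><a lang="de" href="/de/">Deutsch</a></li>
</ul>
```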

Myth #11: I can get away with automated translations.

The technology for automated translations or machine translations has been improving in recent years, but it’s still better to avoid automated translations, especially machine translation that involves no human editing.

Automatic translations can be inaccurate and off-putting. They can hurt a website trying to rank in a competitive landscape. A great way to get an edge on your competitors is to use professional, high-quality native translators to localize your content into your target languages. High-quality localization is one of the key factors in improving your rankings when it comes to international SEO.

If you have a very large amount of content that you cannot afford to translate, choose some of the most important content for human translation, such as your main category and product pages.

Myth #12: Whichever site layout and user experience works best in our core markets should be rolled out across all our markets.

This is something I’ve seen happen on many, many sites, and it was part of the reason why eBay failed in China.

Porter Erisman tells the story in his book Alibaba’s World, which I highly recommend. He spoke of how, when eBay and Alibaba were duking it out in China, eBay made the decision to apply its Western UX principles to its Chinese site.

In Alibaba’s World, Erisman writes about how eBay “eliminated localized features and functions that Chinese Internet users enjoyed and forced them to use the same platform that had been popular in the US and Germany. Most likely, eBay executives figured that because the platform had thrived in more industrialized markets, its technology and functionality must be superior to a platform from a developing country.

“Chinese users preferred Alibaba’s Taobao platform over eBay, because it had an interface that Chinese users were used to – cute icons, flashing animations, and had a chat feature that connected customers with sellers. In the West, bidding starts low and ends high, but Chinese users preferred to haggle with sellers, who would start their bids high and end low.”

From this story, you can tell how localization, in terms of site design, UX, and holistic business strategy, can be of paramount importance.

Here is an example of Lush’s Japanese site, which uses bright colors, has a lot going on, and is almost completely localized into Japanese. Also notice the chat box in the bottom right.

Now compare that to the Lush USA site. There’s a lot more white space here, fewer tiles, and the chat box is only a small button on the right sidebar.

They’ve made the effort to adjust the layout to how they want to express their brand in each market, rather than just swapping localized tiles into the same CMS layout. Yet the two sites still share plenty of elements; they’re a good example of keeping a unified global brand while leaving plenty of room for local expression.

The key to success internationally is localizing your online presence while at the same time having a unified global brand. From an SEO perspective, you should make sure there’s a logical organization to your global URLs so that localized content can be geo-targeted by subdirectory, subdomain, or domain. You should focus on getting hreflang tags right, etc. But you should also work with a content strategy team to make sure that there will be room for trans-creation of content, as well as with a UX design team to make sure that localized content can be showcased appropriately.

Design, UX, site architecture — all of these things play increasingly important roles in SEO. By localizing your design, you’re reducing duplicate content and you’re potentially improving your site engagement metrics (and by corollary, your clickstream data).

Things that an SEO definitely wants to localize are:

  • URLs
  • Meta titles & descriptions
  • Navigation labels
  • Headings
  • Image file names, internal anchor text, & alt text
  • Body content

Make sure to focus on keyword variations between countries, even within the same language. For example, there are differences in names and spellings for many things in the UK versus the US. A travel agency might describe its tours to a British audience as “tailor-made, bespoke holidays,” while telling its American audience it sells “customized vacation packages.”

If you use the same keywords to target every country that shares a common tongue, you lose the ability to choose the best keywords for each country. Take this into account when planning your keyword optimization.

Myth #13: We can just use IP sniffing and auto-redirect users to the right place. We don’t need hreflang tags or any type of geo-targeting.

A lot of sites use some form of automatic redirection, detecting the user’s IP address and redirecting them to another website or to a different page on their site that’s localized for their region. Another common practice is to use the Accept-Language header to detect the user’s browser language preference, redirecting users to localized content that way.

However, Google recommends against automatic redirection. It can be inaccurate, it can prevent users from reaching (and search engines from indexing) your whole site, and it’s frustrating for users who get redirected to a page they didn’t want. In fact, hreflang annotations, when correctly added to all your localized content and correctly cross-referenced, should eliminate or greatly reduce the need for auto-redirection. Avoid automatic redirection as much as possible.

Here are all the reasons (that I can think of) why you shouldn’t do automatic redirection:

  • User agents like Googlebot may have a hard time reading all versions of your page if you keep redirecting them.
  • IP detection can be inaccurate.
  • Multiple countries can have multiple official languages.
  • Multiple languages can be official in multiple countries.
  • Server response times can suffer from having to run all these redirects.
  • Computers shared by spouses, children, and so on may serve users with different language preferences.
  • Expats and travelers may be treated as locals by a website, making it frustrating for them to switch languages.
  • Internet cafes, hotel computer centers, and school computer labs may have diverse users.
  • A user may prefer to browse in one language but transact in another. For example, many people are fluent in English and will search in English if they think they’ll get better results that way, but when it comes to the checkout process, and especially the legalese, they’d rather switch to their native language.
  • A person sends a link to a friend, but the friend lives in a different place and can’t see what her friend sees.

Instead, a much better user experience is to show a small, unobtrusive banner when you detect that a user may find another portion of your site more relevant. TripAdvisor and Amazon do a great job of this, and the Google Webmaster Blog has shared an example of how to do it well.

One exception to the never-use-auto-redirection rule: when a user selects a country and/or language preference on your site, store that preference in a cookie and redirect the user to their preferred locale on future visits. Make sure they can set a new preference at any time, which resets the cookie.

On that note, always make sure your website has a country and/or language selector on every page, one that’s easy for users to see and for search engine bots to crawl.
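
To make the banner-plus-cookie pattern concrete, here's a minimal client-side sketch in TypeScript. It's an illustration under assumptions, not any particular site's implementation: the "locale" cookie, the /{locale}/ URL structure, and the bare banner markup are all hypothetical.

```typescript
// Suggest a locale instead of forcing one; redirect only when the user
// has explicitly chosen a locale before (stored in a hypothetical cookie).
function handleLocale(): void {
  const chosen = document.cookie.match(/(?:^|; )locale=([^;]+)/)?.[1];
  if (chosen) {
    // The one acceptable "automatic" redirect: honouring an explicit past choice.
    if (!location.pathname.startsWith(`/${chosen}/`)) {
      location.assign(`/${chosen}${location.pathname}`);
    }
    return;
  }
  const browserLang = navigator.language.slice(0, 2); // e.g. "de" from "de-DE"
  const pageLang = document.documentElement.lang.slice(0, 2);
  if (browserLang && pageLang && browserLang !== pageLang) {
    showSuggestionBanner(browserLang); // suggest, never redirect
  }
}

// Render a small, dismissible banner (markup kept deliberately bare here).
function showSuggestionBanner(lang: string): void {
  const banner = document.createElement("div");
  banner.textContent = `This page may be available in your language (${lang}).`;
  document.body.prepend(banner);
}

// Call this from the country/language selector; re-selecting resets the cookie.
function rememberLocale(locale: string): void {
  document.cookie = `locale=${locale}; path=/; max-age=31536000`;
}
```

The design point is that the only automatic redirect left is the one the user explicitly asked for.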

Myth #14: I need local servers to host my global content.

Many website owners believe they need local servers in order to rank well abroad, because Google and Bing stated in the past that server location was an important international ranking factor.

However, Google confirmed last year that local server signals are not as important as they once were. With the rise in popularity of CDNs, local servers are generally not necessary. You definitely need a local server for hosting sites in China, and it may be useful in some other markets like Japan. It’s always good to experiment. But as a general rule, what you need is a good CDN that will serve up content to your target markets quickly.

Myth #15: I can’t have multi-country targeted content that’s all in the same language, because then I’d incur a duplicate content penalty.

This myth is born from an underlying fear of duplicate content. Something like 30% of the web contains duplicate content (according to a recent RavenTools study); it’s simply a fact of life on the web. You have to do something spammy with that duplicate content, such as creating doorway pages or scraping content, in order to incur a penalty.

Geo-targeted, localized content is not spammy or manipulative. There are valid business reasons for wanting to have very similar content geared for different users around the world. Matt Cutts confirmed that you will not incur a penalty for having similar content across multiple ccTLDs.

The reality is, you CAN have multi-country targeted content in the same language. It’s just that you need to combine hreflang tags + localization in order to get it right. Here are some ways to avoid duplicate content problems:

  • Use hreflang tags
  • Localize your keyword optimization
  • Add local info, such as telephone numbers, currencies, and addresses, in schema markup and Google My Business profiles (see the JSON-LD sketch after this list)
  • Localize HTML sitemaps
  • Localize navigation and home page features to cater to specific audiences
  • Localize images so they resonate with the audience (American football, for example, is not very popular outside the US; also be mindful of holidays around the world and of current events)
  • Transcreate content (take an idea and tailor it for a specific locale) rather than translating it (word-for-word rather than concept-for-concept)
  • Obtain links from local ccTLDs pointing to your localized content
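
For the schema markup point above, here's a minimal JSON-LD sketch. All the business details are hypothetical; the idea is that local phone numbers, addresses, and price ranges in the local currency give otherwise similar pages genuinely local signals:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Shop Ltd",
  "telephone": "+44-20-7946-0958",
  "priceRange": "££",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "London",
    "postalCode": "EC1A 1AA",
    "addressCountry": "GB"
  }
}
</script>
```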

As you can see, there are many common myths surrounding international SEO, but hopefully you’ve gained some clarity and feel better equipped to build a great global site strategy. I believe interest in international SEO will only keep growing as globalization continues. Cross-border e-commerce is booming, and Facebook and Google are eyeing emerging markets in Africa, India, and Southeast Asia, where more and more people are going online and getting comfortable buying online.

International SEO is still ripe for optimization, so you, as SEO experts, are in a very good position if you understand how to set a website up for international success.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

A Guide to Sampling in Google Analytics

Posted by Tom.Capper

Sampling is a process used in statistics when it’s unfeasible or impractical to analyse all the data that exists. Instead, a small, randomly selected subset is used to keep things manageable. Many analytics platforms use some sort of sampling to keep report loading times in check, and there seem to be three schools of thought when it comes to sampling in analytics. There are those who are terrified of it, insisting on unsampled versions of any report. Then there are those who are relaxed about it, trusting the statistical logic. And then, lastly, there are those who are oblivious.

All three are misguided.

Sampling isn’t something to fear, but, in Google Analytics in particular, it can’t always be trusted. Because of that, it’s definitely worth your time to understand when it occurs, how it affects your work, and how it can be avoided.

When it happens

You can always tell when sampling is being used, thanks to the “This report is based on N% of sessions” line at the top of every report.

If the percentage is less than 100%, then sampling is in progress. You’ll notice above that I’ve produced a report based on more than half a billion sessions without any sampling — sampling isn’t just about the sheer number of sessions involved in a report. It’s about the complexity of what you’re asking the platform to report on. Contrast the below (apologies for the small screenshots; I wanted to make sure the whole context was included, so have added captions explaining just what you’re looking at):

No segment applied, report based on 100% of sessions

Segment applied, report based on 0.17% of sessions

The two are identical apart from the use of a segment in the second case. Google Analytics can always provide unsampled data for top-line totals like that first case, but segments in particular are very prone to prompting sampling.

The exact same level of sampling can also be induced through use of a secondary dimension:

Secondary dimension applied, report based on 0.17% of sessions

A few other specialised reports are also prone to this level of sampling, most notably:

  • The Ecommerce Overview
  • “Flow Reports”

Report based on 0.17% of sessions

Report based on <0.1% of sessions

To summarise so far, sampling can happen when we use:

  • A segment
  • More than one dimension
  • Certain detailed reports (including Ecommerce Overview and AdWords Campaigns)
  • “Flow” reports

The accuracy of sampling

Sampling, for the most part, is actually pretty reliable. Take the below two numbers for organic traffic over the same period, one taken from a tiny 0.17% sample, and one taken without sampling:

Report based on 0.17% of sessions, reports 303,384,785 sessions via organic

Report based on 100% of sessions, reports 296,387,352 sessions via organic

The difference is just 2.4%, from a sample of 0.17% of actual sessions. Interestingly, when I repeated this comparison over a shorter period (last quarter), the size of the sample went up to 71.3%, but the margin of error was fairly similar at 2.3%.

It’s worth noting, of course, that the deeper you dig into your data, the smaller the effective sample becomes. If you’re looking at a sample of 1% of data and you notice a landing page with 100 sessions in a report, that’s based on 1 visit — simply because 1 is 1% of 100. For example, take the below:

Report based on 45 sessions

Eight percent of a whole year’s traffic to Distilled is a lot, but 8% of organic traffic to my profile page is not, so we end up viewing a report (above) based on 45 visits. Whether or not this should concern you depends on the size of the changes you’re looking to detect and your threshold for acceptable levels of uncertainty. These topics will be familiar to those with experience in CRO, but I recommend this tool to get you started, and I’ve written about some of the key concepts here.

In extreme cases like the one above, though, your intuition should suffice: that click-through from my /about/ page to /resources/…tup-guide/ claims to feature in 12 sessions, in a report based on 8.11% of sessions. As 8.11% of 12 is roughly 1, we know that figure is in fact based on a single session. Not something you’d want to base a strategy on.
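
If you'd rather not do that arithmetic in your head, the rule of thumb reduces to a one-liner. This is just a sketch: GA only shows you the rounded, scaled-up figure, so treat the output as an estimate.

```typescript
// Reported figures are scaled up by 1/samplingRate, so multiplying back down
// estimates how many real sessions the reported number is actually based on.
function observationsBehind(reported: number, samplingRate: number): number {
  return Math.round(reported * samplingRate);
}

observationsBehind(100, 0.01);   // ≈ 1 session behind "100 sessions" at 1% sampling
observationsBehind(12, 0.0811);  // ≈ 1 session behind the 12-session example above
```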

If any of the above concerns you, then I’ve some solutions later in this post. Either way, there’s one more thing you should know about. Check out the below screenshot:

Report based on 100% of sessions, but “All Users” only accounts for 38.81% “of Total”

There’s no sampling here, but the number displayed for “All Users” in fact only accounts for 38.81% of sessions. That’s down to the combination of the report having more than 1,000,000 rows (as indicated by the yellow “high-cardinality” warning at the top) and the use of a segment: rows beyond that limit get grouped into “(other)”, and the “(other)” bucket is hidden when a segment is active. Regardless of any sampling, the numbers in the individual rows remain as accurate as they would be otherwise (apart from “(other)” being missing), but the segment totals at the top end up of limited use.

So, we’ve now gone over:

  • Sampling is generally pretty accurate (+/- 2.5% in the examples above).
  • When you’re looking at small numbers in reports with a high level of sampling, you can work out how many sessions they’re actually based on.
    • For example, 1% sampling showing 100 sessions means 1 session was the basis of the number in the report.
  • You should keep an eye out for that yellow high-cardinality warning when also using segments.

What you can do about it

Often it’s possible to recreate the key data you want in alternative ways that do not trigger sampling. Mainly this means avoiding segments and secondary dimensions. For example, if we wanted to view the session counts for the top organic landing pages, we might ordinarily use the Landing Pages report and apply a segment:

Landing Pages report with Organic Traffic segment, based on 71.27% of sessions

In the above report, I’ve simply applied a segment to the landing pages report, resulting in sampling. However, I can get the same data unsampled — in the below case, I’ve instead gone to the “Channels” report and clicked on “Organic Search” in the report:

Channels > Organic Search report, with primary dimension “Landing Page”, based on 100% of sessions

This takes me to a report where I’m only looking at organic search sessions, and I can pick a primary dimension of my choice — in this case, Landing Page. It’s worth noting, however, that this trick does not function reliably — when I replicated the same method starting from the “Source / Medium” report, I still ended up with sampling.

A similar trick applies to custom segments — if I wanted to create a segment to show me only visits to certain landing pages, I could instead write a regex advanced filter to replicate the functionality with less chance of sampling:
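
For example (the paths here are hypothetical), an advanced filter on the Landing Page dimension set to “Matching RegExp” with a pattern like:

```
^/(pricing|features|blog)/
```

shows only the landing pages you care about, with far less chance of sampling than the equivalent custom segment.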

Lastly, there are a few more extreme solutions. Firstly, you can create duplicate views, then apply view-level filters, to replicate segment functionality (permanently for that view).

Secondly, you can use the API and Google Sheets to break up a report into smaller date ranges, then aggregate them. My colleague Tian Wang wrote about that tool here.
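
Tian's post covers the Sheets tool itself; as a rough sketch of the same idea (not that tool, and the view ID and OAuth token below are placeholders you'd supply yourself), here's what querying the Core Reporting API month by month and aggregating looks like:

```typescript
const VIEW_ID = "ga:12345678"; // hypothetical view ID
const ACCESS_TOKEN = "...";    // hypothetical OAuth 2.0 access token

// Query one date range; shorter ranges mean fewer sessions per query,
// so each piece is less likely to be sampled.
async function sessions(startDate: string, endDate: string): Promise<number> {
  const params = new URLSearchParams({
    ids: VIEW_ID,
    "start-date": startDate,
    "end-date": endDate,
    metrics: "ga:sessions",
  });
  const res = await fetch(
    `https://www.googleapis.com/analytics/v3/data/ga?${params}`,
    { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } }
  );
  const data = await res.json();
  return Number(data.totalsForAllResults["ga:sessions"]);
}

// Break a year into months, query each, then sum. This works because
// sessions are additive across date ranges (unlike, say, users).
async function yearlySessions(year: number): Promise<number> {
  let total = 0;
  for (let m = 1; m <= 12; m++) {
    const start = `${year}-${String(m).padStart(2, "0")}-01`;
    const end = new Date(Date.UTC(year, m, 0)).toISOString().slice(0, 10);
    total += await sessions(start, end);
  }
  return total;
}
```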

Lastly, there’s GA Premium, which, for a not inconsiderable cost, gets you the option of requesting unsampled reports.

To summarise, then, here’s how you can avoid sampling:

  • You can construct reports differently to avoid segments or secondary dimensions and thus reduce the chance of sampling being triggered.
  • You can create duplicate views to show you subsets of your data that you’d otherwise have to view sampled.
  • You can use the GA API to request large numbers of smaller reports then aggregate them in Google Sheets.
  • For larger businesses, there’s always the option of GA Premium to receive unsampled reports.

Discussion

I hope you’ve found this post useful. I’d love to read your thoughts and suggestions in the comments below.

How to Choose a Domain Name – Whiteboard Friday

Posted by randfish

One decision that you’ll have to live with for quite a long time is the domain name you choose for your site. You may have a list of options that you know are available, but what should you keep in mind when you sit down to make the decision? In today’s Whiteboard Friday, Rand covers eight criteria for picking a winner.

http://ift.tt/29TcZk3

http://ift.tt/1GaxkYO

How to Choose a Domain Name for SEO & Branding Whiteboard

Click on the whiteboard image above to open a high-resolution version in a new tab!

Welcome to Rand’s rules (for choosing an effective domain name)

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are going to chat about choosing domain names, and, in fact, I’ve got eight rules for you that will help guide your domain name choices.

Now, it could be you’re starting a new brand. It could be that you have an existing brand and you’re trying to take it online. It might be that you’re working with clients who are taking their brand online. It could be that you’re starting a new company entirely. I love entrepreneurship, congratulations. Any of these ways, you’re going to need a website.

Before you do that, you should really think long and hard about the domain name that you choose and, in fact, the brand name that you choose and how that’s represented through your domain name online. Domain names have a massive impact all over the web in terms of click-through rate, from search to social media results, to referring links, to type-in traffic, brandability, offline advertising. There’s a huge wealth of places that your domain name impacts your brand and your online marketing, and we can’t ignore this.

So first rule that Rand has for how to choose a domain name.

1) Make it brandable.

Brandable, meaning when you hear the domain name, when you hear yourself or someone else say it, does it sound like a brand, or does it sound like a generic? So that means that hyphens and numbers are a real problem because they don’t make something sound like a brand. They make it sound generic, or they make it sound strange.

For example, if I try and say to you, “Look, let’s imagine that our new company that we’re starting together, you and I, is a website that has pasta recipes and potentially sells some pasta-related e-commerce products on it.” If I tell you that I have pasta-shop.com, well, that’s hard to brand. It’s hard to say. It’s hard to remember.

Speaking of, is this brand memorable? So generic keyword strings are a big no-no. Generic keyword strings are really tough to remember, really tough to stand out in the brain. You want something unique, which means try and avoid those exact and partial keyword match domain names. They tend not to do so well, in fact. If you look at the numbers that we see in MozCast, for example, or in correlation studies, you can see that, over the past 10 years, they have done nothing but trend down over time in terms of their correlation with rankings and their ability to show in the search results. Dangerous there.

I would probably stay away from something like a PastaRecipesOnline.com. I think BestPasta.com, maybe that’s getting a little bit better. PastaAficionado, well, it sounds brandable. For sure, it’s a little bit challenging to say. But it’s definitely unique.

I really like PastaLabs.com. Very brandable, unique, memorable, stands out. I’m going to remember it. It has kind of a scientific connotation to it. Fascinating. I might think about the domain name space that way.

2) Make it pronounceable.

You might say to yourself, “Rand, why is it so important that it’s pronounceable? Most people are going to be typing this in or they’re going to be clicking on a link, so why does it matter?”

In fact, it matters because of a concept called “processing fluency.” It’s a cognitive bias that human beings have where, essentially, we remember and have more positive associations with things that we can easily say and easily think about, and that includes pronounceability in our own minds. This is going to be different depending on the language that you’re targeting and which countries you’re targeting. But certainly, if you can’t easily say the name and others are not easily able to guess how to say that name, you’re going to lose that processing fluency, you’re going to lose that memorability and all the benefits of the brandability that you’ve created.

So I might stay away from things like FlourEggsH20.com. It’s clever. Don’t get me wrong. It’s unique. It might even be brandable, but it’s very difficult to pronounce and to recall. When you see it, you don’t know if that zero is an O, and there are questions about what it’s even supposed to mean.

Raviolibertine.com. Even I’m having trouble saying it. Raviolibertine? I would stay away from getting too clever for your own good, and many, many domain names do try to do that.

I might say, “You know what? Something like LandOfNoodles.com, while it doesn’t fulfill every requirement that we’ve got here, it is eminently pronounceable, easy to remember.” These are easy words that many people are very familiar with, at least in English. LandOfNoodles, whoops, I like LandOfNoodles. I’m giving it a check mark. Well, now I’ve messed up the Whiteboard. Hopefully, Elijah took a picture before I did that. Oh, he’s giving me the thumbs up. Good.

3) Make it as short as you possibly can, but no shorter.

This means obey these other rules before you just go for raw length. But length matters. Length matters because of the processing fluency stuff we talked about before. But the fewer characters a domain name has, the easier it is to type, the easier it is to say, to share, the less it gets shortened on social media sharing platforms and in search results. So when you’ve seen those long domain names, they get compressed, or they might not show fully, or the URL might get cut off, or you might see just the t.co, all those kinds of things.

Therefore, short as possible. Shorter is definitely better. I might go for something short like MyPasta.com, but I’d be careful about going too short. For example, PastaScience.com is a pretty good domain name. PastaSci, I’ve lost that pronounceability and a little bit of that memorability. It’s a little bit tougher. It’s clearly a brand, but it’s a little awkward. I would probably stay away from that one and I’d stick with PastaScience.

4) Bias to .com.

I know, it’s 2016. Why are we still talking about .com? The internet’s been around 20-plus years. Why does .com matter so much when there are so many TLD extension options? The answer is, again, this is the most recognized, most easily accessible brand outside of the tech world.

If you’re talking about, “Look, all I’m doing is addressing developers and my pasta website only wants to talk to very, very tech savvy individuals, people who already work in the web world,” well okay, maybe it’s all right to go with a .pasta domain name. Perhaps you can actually buy that TLD extension now that ICANN has approved all these new domain names.

But cognitive fluency, or processing fluency, dictates that we should go with something that’s easy, that people have an association with already, and .com is still the primary thing that non-tech-savvy folks have an association with. If you want to build up a very brandable domain that can do well, you want that .com. Probably, eventually, if you are very successful, you’re going to have to try and go capture it anyway, and so I would bias you to get it if you can.

If it’s unavailable, my suggestion would be to go with the .net, .co, or a known ccTLD. Those are your best bets. A known ccTLD might be something like .ca in Canada or .it in Italy, those kinds of things. That’s your next best bet. I’d still bias you to .com. But the PenneIsMightier.com, I’m particularly proud of this one. I think it’s a terrible pun, but a man’s got to do.

MacaroniMan.net, would I potentially think about that if I couldn’t get the .com? Yeah, possibly if I thought I was targeting a little bit more of a savvy audience and if I was pretty sure that MacaroniMan.com was owned by a squatter who just wouldn’t give it up, or it was owned by a small restaurant somewhere that I never had to really worry about competition with and they wouldn’t sell to me, yeah, okay, I might do it.

What about Impastable.co? Avoiding the fact that this is another terrible pun of mine, I might consider that if I absolutely couldn’t get Impastable.com and that was already my domain name and I felt like I had the branding ability to make the .co something people would associate with. I could consider that too.

5) Avoid names that infringe on another company or another organization’s existing trademark or could be confused with that trademark.

You have to be very careful here because it’s not whether you think it could be confused. It’s whether you think any judge in the jurisdiction in which they might take legal action against you would consider those two things to be potentially misrepresented or potentially confusable. So it’s not your judgment. It’s not even your audience’s judgment. It’s what you think a judge in the jurisdiction might have the judgment about.

So this is dangerous waters. I would urge you to talk to your attorney or a legal professional about this if you have real concerns. But there is the danger and this does happen regularly throughout the web’s history where a trademark owner will come and sue a domain owner, someone who’s owning the domain legitimately and using it for business purposes or just someone who’s purchased it and is sitting on it, and that sucks.

This can also create brand confusion, which is hard for your brandability. So you might be familiar with some pasta brands that have done particularly well here in the U.S., like Barilla and Ronzoni and Rustichella d’Abruzzo. Well, I probably would not go get Barzilla.com even if you have a hilarious, Godzilla themed pun that you want to make about the pasta. Just because your name might be Ron and you are covering pasta, I still would not go with RonsZoni. Oops, I’m going to X those both out. Likewise, Rustichella — apologies for my poor Italian pronunciation — but Rustichella owns Rustichella.it. They don’t seem to own Rustichella.com. I think that’s owned by a domain name owner. But I would not go start up a website there. Rustichella certainly could, with their U.S. presence, go and claim trademark ownership of that domain and potentially get it from you. I would think that was risky.

6) Make the domain name instantly intuitive.

If you believe that a member of your target audience, the audience that you’re trying to reach now and in the future, could immediately associate the domain name with a good guess of what they think you do, that is a big positive. Being able to look at that domain name and say, “Oh, I’m guessing they probably do this. This is probably what that company is up to.”

So something clever and subtle, like SavoryThreads.com, okay, yeah, once I get to your site, I might realize, “Oh, I see it’s sort of a playful word game there and ‘savory,’ I get that it’s about food.” But it’s too clever, in my opinion, and it doesn’t instantly suggest to a majority of your audience what it is that you do.

Likewise, AnnelloniToZiti.com, well, yeah, maybe I could guess that these are probably pasta names and it probably means that the website has something to do with that. But they’re not traditionally very well-known pastas. At least here in the United States, those shapes are not particularly well-known, and so I might cross that one out too, versus something where it is clearly, clearly about recipes for and potentially sales of goods, PastaPerfected.com. That’s obviously, intuitively about what it is going to be, and anyone from your audience could figure that out.

7) Use broad keywords when sensible, but don’t stress keyword inclusion.

Keyword use in domain names, you might think, is an important thing and that would be something that I would mention here from an SEO perspective. It can help. Don’t get me wrong, it can help. It can help mostly for this instant intuition portion and the cognitive fluency and processing fluency biases that we’ve talked about, but also a little bit from an SEO perspective because of the anchor text that you generally will accrue when people link over to your domain. But what we’ve been seeing, as I mentioned earlier, is that Google’s been biasing away from these exact match and partial match keywords.

I would say that if you can get a keyword mention in your domain name that helps make it obvious what you’re about, go for it. But if you’re trying to target what would be called keyword rich or keyword targeted domains, I would generally stay away from those actually in 2016. They just don’t carry the weight that they used to, and there are a lot of associations, negative associations that users and search engines have about them that would make me stay away.

So I would not do something like a RecipesForPasta.com. I wouldn’t do something like BuyPastaOnline.com. I would be tempted to, in fact, go for something very, very broad like Gusto.com. Think about a brand like an Amazon.com, which clearly has no association with what it is, or Google itself, Google.com, or a domain here in Seattle area that serves lawyers that’s called Avvo.com. These are very, very well-branded and associated with their niches, but they don’t necessarily need to have a keyword richness to them.

Another great example, the find a dog sitter or find pet care website, Rover.com. Well, “rover” has an association with dogs, but it’s not really keyword rich. It’s more of a creative association just like “gusto” means “taste” in Italian. So I might be tempted to go in that direction instead. Same thing with something like Handcut.com. People have that, especially foodies are going to have that association between handcut and pasta.

8) If your name isn’t available, it’s okay to append or modify it.

If your domain name is not available, last one, it is okay to go out there and add a suffix or a prefix. It is okay to use an alternate TLD extension, like we talked about previously, and it’s okay to be a little bit creative with your online brand.

For example, let’s say my brand name is Pastaterra. Maybe I’ve already got a shop somewhere, maybe in the Seattle area, and I have been selling pasta at my shop and now I’m going online with it. Well, it is okay for me to do something like ThePastaterra.com, or PastaterraShop.com, or even Pastaterra.net. If I wanted to target a much more tech-savvy set and was aware of the branding difficulties, I could conceivably go with something like Terra.pasta, because that .pasta TLD extension is now available. But I could also get a little bit broader. In fact, I might prefer this: going with something like RandOfTheTerra or RandsTerra.com, or, if I were a restaurant, something like EatAtTerra.com.

So with these rules in mind, I would love to hear from all of you about your domain choices and your domain name biases and what you think is working in 2016 and potentially not working, and hopefully we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com.
