The Google Analytics Add-On for Sheets: An Intro to an Underutilized Tool

Posted by tian_wang

With today’s blog post I’m sharing everything one needs to know about an underappreciated tool: the Google Analytics add-on for Google Sheets. In this post I’ll be covering the following:

1. What is the Google Analytics add-on?

2. How to install and set up the Google Analytics add-on.

3. How to create a custom report with the Google Analytics add-on.

4. A step-by-step worked example of setting up an automated report.

5. Further considerations and pitfalls to avoid.

Thanks to Moz for having me, and for giving me the chance to write about this simple and powerful tool!

1. What is the Google Analytics add-on and why should I care?

I’m glad I asked. Simply put, the Google Analytics add-on is an extension for Google Sheets that allows you to create custom reports within Sheets. The add-on works by linking up to an existing Analytics account, using Google’s Analytics API and Regular Expressions to filter the data you want to pull, and finally gathering the data into an easy and intuitive format that’s ripe for reporting.

The Google Analytics add-on’s real value-add to a reporting workflow is that it’s extremely flexible, reliable, and a real time-saver. Your reporting will still be constrained by the limitations of Sheets itself (as compared to, say, Excel), but the Sheets framework has served almost every reporting need I’ve come across to date and the same will probably be true for most of you!

In a nutshell, the add-on allows you to:

  • Pull any data that you’d be able to access through the Analytics API (i.e., the data you see at analytics.google.com) directly into a spreadsheet
  • Easily compare historical data across time periods
  • Filter and segment your data
  • Automate regular reporting
  • Make tweaks to existing reports to get new data (no more re-inventing wheels!)

If this all sounds like you could use it, read on!

2. Getting started: How to install and set up the Google Analytics add-on

2A. Installing the Google Analytics add-on

  • Go into Google Sheets.
  • On the header bar, under your workbook’s title, click “Add-ons.”
  • This opens a drop-down menu — click “Get add-ons.”
  • In the following window, type “Google Analytics” into the search bar on the top right and hit enter.

  • The first result is the add-on we want, so go ahead and install it.

  • Refresh your page and confirm the add-on is installed by clicking “Add-ons” again. You should see an option for “Google Analytics.”

That’s all there is to installation!

2B. Setting up the Google Analytics add-on

Now that we have the Google Analytics add-on installed, we need to set it up by linking it to an Analytics account before we can use it.

  • Under the “Add-ons” tab in Sheets, hover over “Google Analytics” to expose a side menu, as shown below.

  • Click “Create New Report.” You’ll see a menu appear on the right side of your screen.

  • In this menu, set the account information to the Analytics account you want to measure.
  • Fill out the metrics and dimensions you want to analyze. You can further customize segmentation within the report itself later, so just choose a simple set for now.
  • Click “Create Report.” The output will be a new sheet, with a report configuration that looks like this:

  • Note: This is NOT your report. This is the setup configuration for you to let the add-on know exactly what information you’d like to see in the report.

Once you’ve arrived at this step, your set-up phase is done!

Next we’ll look at what these parameters mean, and how to customize them to tailor the data you receive.

3. Creating a custom report with the Google Analytics add-on

So now you have all these weird boxes and you’re probably wondering what you need to fill out and what you don’t.

Before we get into that, let’s take a look at what happens if you don’t fill out anything additional, and just run the report from here.

To run a configured report, click back into the “Add-Ons” menu and go to Google Analytics. From there, click “Run Reports.” Make sure you have your configuration sheet open when you do this!

You’ll get a notification that the report was either successfully created, or that something went wrong (this might require some troubleshooting).

Following the example above, your output will look something like this:

This is your actual report. Hooray! So what are we actually seeing? Let’s go back to the “Report Configuration” sheet to find out.

The report configuration:

Type and View ID are defaults that don’t need to be changed. Report Name is what you want your report to be called, and will be the name generated for the report sheet created when you run your reports.

So really, in the report configuration above, all the input we’re seeing is:

  • Last N Days = 7
  • Metrics = ga:users

In other words, this report shows the total number of users the specified view recorded over the last week. Interesting maybe, but not that helpful. Let’s see what happens if we make a few changes.

I’ve changed Last N Days from 7 to 30, and added Date as a Dimension. Running the report again yields the following output:

By increasing the range of data pulled from the last 7 to the last 30 days, we get data from a larger set of days. By adding date as a dimension, we can see how much traffic the site registered each day.

This is only scratching the surface of what the Google Analytics add-on can do. Here’s a breakdown of the parameters, and how to use them:

  • Report Name (optional) — The name of your report; it becomes the name of the report sheet generated when you run your reports. If you’re running multiple reports and want to exclude one without deleting its configuration, delete its report name and that column will be ignored the next time you run your reports. Example: “January Organic Traffic”

  • Type (optional) — Either “core” or “mcf,” for Google’s Core Reporting API and Multi-Channel Funnels API respectively. “core” is the default and will serve most of your needs. Example: “core” / “mcf”

  • View (Profile) ID (required) — The Analytics view that your report will pull data from. You can find your view ID in the Analytics interface, under the Admin tab. Example: ga:12345678

  • Start / End Date (optional) — Used as an alternative to Last N Days (i.e., you must use exactly one of the two); specifies the date range to pull data from. Example: 2/1/2016 – 2/29/2016

  • Last N Days (optional) — Used as an alternative to Start / End Date (i.e., you must use exactly one of the two); pulls data for the last N days, counting backward from the current date. Example: any integer

  • Metrics (required) — The metrics you want to pull; you can include multiple metrics per report. Documentation on metrics and dimensions can be found in Google’s Metrics & Dimensions Explorer. Example: “ga:sessions”

  • Dimensions (optional) — The dimensions you want your metrics to be segmented by; you can include multiple dimensions per report. Example: “ga:date”

  • Sort (optional) — Specifies an order to return your data in, which can be used to organize data before generating a report. Note: you can only sort by metrics/dimensions that are included in your report. Example: “sort=ga:browser,ga:country”

  • Filters (optional) — Filters the data included in your report, based on any dimension (not just those included in the report). Example: “ga:country==Japan;ga:sessions>5”

  • Segment (optional) — Applies segments from the main reporting interface. Example: “users::condition::ga:browser==Chrome”

  • Sampling Level (optional) — Controls the sampling level for the data you’re pulling. Analytics samples large data sets by default; “HIGHER_PRECISION” requests the most precise data available. Example: “HIGHER_PRECISION”

  • Start Index (optional) — Returns results starting from the given index (default = 1, not 0). Use with Max Results when you want to retrieve paginated data (e.g., if you’re pulling 2,000 results and want results 1,001 – 2,000). Example: any integer

  • Max Results (optional) — The maximum number of rows returned; the default is 1,000 and it can be raised to 10,000. Example: an integer up to 10,000

  • Spreadsheet URL (optional) — Sends your report output to another spreadsheet. Example: the URL of the sheet where you want data to be sent
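Since Start / End Date and Last N Days are two mutually exclusive ways of describing the same window, it can help to see the date arithmetic spelled out. Here’s a small Python sketch of how “Last N Days” counts backward from the current date (whether the add-on includes or excludes the current day is an assumption here, not documented behavior):

```python
from datetime import date, timedelta

def last_n_days_window(n, today=None):
    """Return (start, end) for the last n complete days, counting
    backward from the current date (assumed to exclude today)."""
    today = today or date.today()
    end = today - timedelta(days=1)    # most recent complete day
    start = today - timedelta(days=n)  # n days back from today
    return start, end

start, end = last_n_days_window(30, today=date(2016, 2, 15))
print(start.isoformat(), end.isoformat())  # 2016-01-16 2016-02-14
```

Running the same configuration with Start Date = 1/16/2016 and End Date = 2/14/2016 would describe the identical window, but pinned in place rather than rolling forward each day.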

By using these parameters in concert, you can arrive at a customized report detailing exactly what you want. The best part is, once you’ve set up a report in your configuration sheet and confirmed the output is what you want, all you have to do to run it again is run your reports in the add-on! This makes regular reporting a breeze, while still bringing all the benefits of Sheets to bear.

Some important things to note and consider, when you’re setting up your configuration sheet:

  • You can include multiple report configurations in the same sheet (see below):

In the image above, running the reports will produce four separate reports — you don’t need (and shouldn’t create) a separate configuration sheet for each report.

  • Although you can have your reports generated in the same workbook as your configuration sheet, I recommend copying the data into another workbook or using the Spreadsheet URL parameter to do the same thing. Loading multiple reports in one workbook can create performance problems.
  • You can schedule your reporting to run automatically by enabling scheduled reporting within the Google Analytics add-on. Note: this is only helpful if you’re using “Last N Days” for your time parameter. If you’re using a fixed date range, your report will simply return the same data for that range on every run.

The regularity options are hourly, daily, weekly, and monthly.

4. Creating an automated report: A worked example

So now that we’ve installed, set up, and configured a report, next up is the big fish, the dream of anyone who’s had to do regular reporting: automation.

As an SEO, I use the Google Analytics add-on for this exact purpose for many of my clients. I’ll start by assuming you’ve installed and set up the add-on, and are ready to create a custom report configuration.

Step one: Outline a framework

Before we begin creating our report, it’s important we understand what we want to measure and how we want to measure it. For this example, let’s say we want to view organic traffic to a specific set of pages on our site from Chrome browsers and that we want to analyze the traffic month-over-month and year-over-year.

Step two: Understand your framework within the add-on

To get everything we want, we’ll use three separate reports: organic traffic in the past month (January 2016), organic traffic in the month before that (December 2015), and organic traffic in the past month, last year (January 2015). It’s possible to include this all in one report, but I recommend creating one report per date period, as it makes organizing your data and troubleshooting your configuration significantly easier.

Step three: Map your key elements to add-on parameters

Report One parameter breakdown:

Report Name – 1/1/2016

  • Make it easily distinguishable from the other reports we’ll be running

Type – core

  • The GA API default

View (Profile) ID

  • The account we want to pull data from

Start Date – 1/1/2016

  • The beginning date we want to pull data from

End Date – 1/31/2016

  • The cutoff date for the data we want to pull

Metrics – ga:sessions

  • We want to analyze sessions for this report

Dimensions – ga:date

  • Allows us to see traffic the site received each day in the specified range

Filters – ga:medium==organic;ga:landingPagePath=@resources

  • We’ve included two filters: one that restricts the data to organic traffic, and another that restricts it to sessions whose landing page contains “resources” in the URL (resources is the subdirectory on Distilled’s website that houses our editorial content)
  • Properly filling out filters and segments requires specific syntax, which you can find in Google’s Core Reporting API documentation.

Segments – sessions::condition::ga:browser==Chrome

  • Specifies that we only want session data from Chrome browsers

Sampling Level – HIGHER_PRECISION

  • Specifies that we want to minimize sampling for this data set
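Since the add-on is built on Google’s Core Reporting API, Report One’s settings map almost one-to-one onto that API’s query parameters. As a rough sketch (the view ID is a placeholder and the dict layout is illustrative, not the add-on’s actual internals):

```python
# Sketch: Report One's configuration expressed as Core Reporting API
# (v3) query parameters. The view ID below is a placeholder.
report_one = {
    "ids": "ga:12345678",              # View (Profile) ID
    "start_date": "2016-01-01",
    "end_date": "2016-01-31",
    "metrics": "ga:sessions",
    "dimensions": "ga:date",
    # Two filters joined by ';' (logical AND): organic medium AND a
    # landing page whose path contains "resources".
    "filters": "ga:medium==organic;ga:landingPagePath=@resources",
    # Session-level segment restricted to Chrome browsers.
    "segment": "sessions::condition::ga:browser==Chrome",
    "samplingLevel": "HIGHER_PRECISION",
}

# With google-api-python-client and an authorized `service` object,
# such a query would be executed along the lines of:
#   service.data().ga().get(**report_one).execute()
print(report_one["filters"])
```

The point of the sketch is the mapping: each row in the configuration sheet is just a named API parameter, which is why the add-on’s syntax rules for filters and segments are the API’s syntax rules.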

Report One output: Past month’s sessions

Now that we’ve set up our report, it’s time to run it and check the results.

So, in the month of January 2016, the resources section on Distilled’s website saw 10,365 sessions that satisfied the following conditions:

  • organic source/medium
  • landing page containing “resources”
  • Chrome browser

But how do we know this is accurate? It’s impossible to tell at face value, but you can reliably check accuracy of a report by looking at the analogous view in Google Analytics itself.

Confirming Report One data

Since the Google Analytics add-on pulls the same data you see at analytics.google.com in your account, we can combine separate settings in the GA interface to reproduce our report:

Date Range

Organic Source/Medium

Landing Page Path & Browser

The result

Hooray!

Now that we’ve confirmed our framework works, and is showing us what we want, creating our other two reports can be done by simply copying the configuration and making minor adjustments to the parameters.

Since we want a month-over-month comparison and a year-over-year comparison for the exact same data, all we have to do is change the date range for the two reports.

One should detail the month before (December 2015) and the other should detail the same month in the previous year (January 2015). We can run these reports immediately.

The results?

Total Sessions In January 2015 (Reporting Month, Previous Year): 2,608

Total Sessions In December 2015 (Previous Month): 7,765

Total Sessions In January 2016 (Reporting Month): 10,365

We’re up 33% month-over-month and 297% year-over-year. Not bad!
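Those month-over-month and year-over-year figures are plain percentage changes; a quick sketch of the arithmetic:

```python
def pct_change(old, new):
    """Percentage change from old to new, rounded to a whole percent."""
    return round((new - old) / old * 100)

# Session totals from the three reports above.
jan_2016, dec_2015, jan_2015 = 10365, 7765, 2608

print(pct_change(dec_2015, jan_2016))  # 33  (month-over-month)
print(pct_change(jan_2015, jan_2016))  # 297 (year-over-year)
```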

Every month, we can update the dates in the configuration. For example, next month we’ll be examining February 2016, compared to January 2016 and February 2015. Constructing a dashboard can be done in Sheets, as well, by creating an additional sheet that references the outputs from your reports!

5. Closing observations and pitfalls to avoid

The Google Analytics add-on probably isn’t the perfect reporting solution that all digital marketers yearn for. When I first discovered the Google Analytics add-on for Google Sheets, I was intimidated by its use of Regular Expressions and thought that you needed to be a syntax savant to make full use of the tool. Since then, I haven’t become any better at Regular Expressions, but I’ve come to realize that the Google Analytics add-on is versatile enough that it can add value to most reporting processes, without the need for deep technical fluency.

I was able to cobble together each of the reports I needed by testing, breaking, and researching different combinations of segments, filters, and frameworks and I encourage you to do the same! You’ll most likely be able to arrive at the exact report you need, given enough time and patience.

One last thing to note: the Google Analytics interface (i.e., what you use when you access your Analytics account online) has built-in safeguards to ensure that the data you see matches the reporting level you’ve chosen. For example, if I click into a session-level report (e.g., landing pages), I’ll see mostly session-level metrics. Similarly, clicking into a page-level report will return page-level metrics. In the Google Analytics add-on, this safeguard doesn’t exist, because the add-on is designed for greater versatility. It’s therefore all the more important to be thorough in outlining, designing, and building your reporting framework within the add-on. After you’ve configured a custom report and successfully run it, be sure to check your results against the Google Analytics interface!

Abraham Lincoln famously said, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” Good advice in general that also holds true for using the Google Analytics add-on for Google Sheets.

Supplementary resource appendix:

  • RegExr – General Regular Expressions resource.
  • Debuggex – Visual Regular Expressions debugging tool.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Overcoming Objections on Your Landing Pages – Whiteboard Friday

Posted by randfish

[Estimated read time: 9 minutes]

How do you take your potential customers’ problems and turn them into a conversion success? If you’re having trouble with low conversion rates on high-traffic landing pages, don’t worry — there’s help. In today’s Whiteboard Friday, Rand shares a process to turn your landing page objections into improved conversion rates.


Overcoming Objections on Your Landing Pages in Order to Improve Your Conversion Rates Whiteboard

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about overcoming objections on your landing pages in order to improve conversion rates. So this is a process that I have stolen part and parcel from Conversion Rate Experts, a British consulting company that Moz has used a couple of times to help with our campaigns. Karl Blanks and Ben Jesson have just been phenomenal for this stuff.

Look, they’re not the only ones who do it. A lot of people in conversion rate optimization use a process similar to this, but it’s something I talk about and share so often that I thought, hey, let’s bring it to Whiteboard Friday.

Enter a problem…

So a lot of the time marketers have this problem where a lot of people are visiting a page, a landing page where you’re trying to sell someone or get someone to take a conversion action, maybe sign up for an email list or join a community or download an app, take a free trial of something, test out a free tool or buy an actual product, like in this case my minimalist noise-canceling headphones.

They are very minimalist indeed thanks to my subpar drawing skills. But when lots of people are visiting this page and very few are converting, you’ve got a conversion rate optimization problem and challenge, and this process can really help you through it.

So first off, let’s start with the question around what’s a low conversion rate?

The answer to that is it really depends. It depends on who you are and what you’re trying to accomplish. If you’re in business to consumer ecommerce, like selling headphones, then you’re getting what I’d say is relatively qualified traffic. You’re not just blasting traffic to this page that’s coming from sources that maybe don’t even know what they’re getting, but in fact people who clicked here knew that they were looking for headphones. 1.5% to 2%, that’s reasonably solid. Below that you probably have an issue. It’s likely that you can improve it.

With email signups, if you’re trying to get people to convert to an email list, 3% to 5% with B2B. Software as a service, it’s a little bit lower, 0.5% to 1%. Those tend to be tougher to get people through. This number might be higher if the B2B product that you’re serving and the SaaS product is a free trial or something like that. In fact, a software free trial usually is in the 1.5% to 2% range. A free app install, like if people are getting to an app download page or to an app’s homepage or download page, and you’re seeing below 4% or 5%, that’s probably a problem. Free account signup, if you’re talking about people joining a community or maybe connecting a Facebook or a Google account to start a free account on a website, that’s maybe in the 2% to 3% range.

But these are variable. Your mileage may vary. But I want to say that if you start from these assumptions and you’re looking and you’re going, “Wow, we’re way under these for our target,” yeah, let’s try this process.

Collect contact information

So what we do to start, and what Conversion Rate Experts did to start, is they collect contact information for three different groups of people. The first group is people who’ve heard of your product, your service, your company, but they’ve never actually tried it. Maybe they haven’t even made their way to a landing page to convert yet, but they’re in your target demographic. They’re the audience you’re trying to reach.

The second group is people who have tried out your product or service but decided against it. That could be people who went through the shopping cart but abandoned it, and so you have their email address. It could be people who’ve signed up for an email newsletter but canceled it, or signed up for an account but never kept using it, or signed up for a free trial but canceled before the period was over. It could be people who have signed up for a mailing list to get a product but then never actually converted.

Then the third one is people who have converted, people who actually use your stuff, like it, have tried it, bought it, etc.

You want to interview them.

You can use three methods, and I recommend some combination of all of these. You can do it over email, over the phone, or in person. When we’ve done this specifically in-house for Moz, or when Conversion Rate Experts did it for Moz, they did all three. They interviewed some folks over email, some folks they talked to over the phone, some folks they went to, literally, conferences and events and met with them in person and had those interviews, those sit-down interviews.

Then they grouped them into these three groups, and then they asked slightly different questions, variations of questions to each group. So for people who had heard of the product but never actually tried it, they asked questions like: “What have you heard about us or about this product? What would make you want to try it, and what objections do you currently have that’s stopping you from doing that?”

For people who sort of walked away, they maybe tried or they didn’t get all the way through trying, but they walked away, they didn’t end up converting or they didn’t stick with it, we could say: “What made you initially interested? What objections did you have, and how did you overcome those? What made you change your mind or decide against this product?” Oftentimes that’s a mismatch of expectations versus what was delivered.

Then for the people who loved it, who are loyal customers, who are big fans, you can say: “Well, what got you interested? What objections did you have and how did you overcome them? What has made you stick with us? What makes you love us or this product or this service, this newsletter, this account, this community, and if you did love it, can we share your story?” This is powerful because we can use these later on for testimonials.

Create a landing page

Then C, in this process, we’re going to actually create a landing page that takes the answers to these questions, which are essentially objections, reasons people didn’t buy, didn’t convert or weren’t happy when they did, and we’re going to turn them into a landing page that offers compelling explanations, compelling reasons, examples, data and testimonials to get people through that process.

So if you hear, for example, “Hey, I didn’t buy this because I wasn’t sure if the right adapters would be included for my devices,” or, “I travel on planes a lot and I didn’t know whether the headphones would support the plane use that I want to have,” great, terrific. We’re going to include what the adapters are right on there, which airlines they’re compatible with, all that kind of information. That’s going on the page.

If they say, “Hey, I actually couldn’t tell how big the headphones were. I know you have dimensions on there, but I couldn’t tell how big they were from the photos,” okay, let’s add some photos of representative sample sizes of things that people are very familiar with, maybe a CD, maybe an iPhone that people are like, “Oh yeah, I know the size of a CD. I know the size of an iPhone. I can compare that against the headphones.” So now that’s one of the images in there. Great, we’ve answered the objection.

“I wasn’t sure if they had volume control.” Great. Let’s put that in a photo.

“Is tax and shipping included in the cost? I didn’t want to get into a shopping cart situation where I wasn’t sure.” Perfect. We’re going to put in there, “Tax included. Free shipping.”

“Is the audio quality good enough for audiophiles and pros because I’m really . . .” well, terrific. Let’s find a known audiophile, let’s add their testimonial to the page.

We’re essentially going one by one through the objections that we hear most frequently here, and then we’re turning those into content on the page. That content can be data, it can be reasons, it can be examples, it can be testimonials. It’s whatever we needed to be to help get people through that purchase process.

Split test

Then, of course, with every type of conversion rate optimization test and landing page optimization, we want to actually try some variations. So we’re going to do a split test of the new page against the old one, and if we see there’s stronger conversion rate, we know we’ve had success.

If we don’t, we can go back to the drawing board and potentially broaden our audience here, try and understand how have we not overcome these objections, maybe show this new page to some of these people and see what additional objections they’ve got, all that kind of stuff.
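Whether the new page’s conversion rate is genuinely stronger, rather than noise, can be checked with a standard two-proportion z-test. Here’s a minimal sketch using only the standard library, with made-up visitor and conversion counts (this statistical check is an editorial addition, not part of the Conversion Rate Experts process described above):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split test: old page converts 15/1,000 (1.5%),
# new page converts 30/1,000 (3.0%).
z, p = two_proportion_z(15, 1000, 30, 1000)
print(round(z, 2), round(p, 3))
```

With these numbers the p-value comes in under 0.05, so the lift would be unlikely to be chance; with smaller samples the same rates might not be distinguishable from noise.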

This process is really powerful. It helps you uncover the problems and issues that you may not even know exist. In my experience, it’s the case that when companies try this, whether it’s for products or for services, for landing pages, for new accounts, for apps, whatever it is, they tend to uncover the same small set of answers from these groups over and over again. It’s just a matter of getting those four or five questions right and answering them on the landing page in order to significantly improve conversion.

All right, everyone. Look forward to your suggestions, your ideas, your feedback, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Is Any Press Good Press? Measuring the SEO Impact of PR Wins and Fails

Posted by KelseyLibert

[Estimated read time: 15 minutes]

Is the saying “any press is good press” really true? Whether it happens as part of a carefully orchestrated PR stunt or accidentally, the potential payoffs and drawbacks when a brand dominates the news can be huge.

In our latest collaboration, Fractl and Moz explored how a surge of media coverage impacted seven companies.


By looking at brands that dominated headlines within the last year, we set out to answer the following questions:

  • Does positive press coverage always bring more benefits than negative press coverage?
  • Beyond the initial spikes in traffic and backlinks, what kind of long-term SEO value can be gained from massive media coverage?
  • Do large brands or unknown brands stand to gain more from a frenzy of media attention?
  • Are negative PR stunts worth the risk? Can the potential long-term benefits outweigh the short-term damage to the brand’s reputation?

Methodology

Our goal was to analyze the impact of major media coverage on press mentions, organic traffic, and backlinks, based on seven companies that appeared in the news between February 2015 and February 2016. Here’s how we gathered the data:

  • Press mentions were measured by comparing how often the brand appeared in Google News search results the month before and the month after the PR event occurred.
  • A combination of Moz’s Open Site Explorer, SEMrush, and Ahrefs was used to measure traffic and backlinks. Increases and decreases in traffic and backlinks were determined by calculating the percentage change from the month before the story broke compared to the month after.
  • BuzzSumo was used to measure how often brand names appeared in headlines around the time of the PR event and how many social shares those stories received.

Note: We left out a few metrics for some brands, due to incomplete or unavailable data. For example, backlink percentage growth was not measured for Airbnb or Miss Universe, since these events happened too recently before this study was published for us to provide an accurate count of new backlinks. Additionally, organic traffic and backlink percentage growth were not measured for Peeple, since it launched its site around the same time as its news appearance.


I. How media coverage affects press mentions, organic traffic, and backlinks

We looked at seven brands, both well-known and unknown, which received a mix of positive and negative media attention. Before we dive into our overall findings, let’s examine why these companies made headlines and how press coverage impacted each one. Be sure to check out our more detailed graphics around these PR events, too.

Impact of positive media coverage

During the last year, Roman Originals, Airbnb, and REI were part of feel-good stories in the press.

Roman Originals cashes in on #TheDress

What happened

Were you Team Black and Blue or Team Gold and White? It was the stuff PR teams dream of when this UK-based retail brand inadvertently received a ton of press when a photo of one of its dresses ignited a heated debate over its color.


The story was picked up by major publishers including BuzzFeed, Time, Gawker, and Wired. Some A-list celebrities chimed in with their dress-color opinions on social media as well.


The results

Roman Originals was by far the biggest winner out of the brands we analyzed, seeing a 17.5K% increase in press mentions, nearly a 420% increase in US organic traffic, and 2.3K% increase in new backlinks. By far the greatest benefit was the impact on sales — Roman Originals’ global sales increased by 560% within a day of the story hitting the news.

Beyond the short-term increases, it appears Roman Originals gained significant long-term benefits from the media frenzy. Its site has seen a lift in both UK and US organic traffic since the story broke in February 2015.

In addition to the initial spikes directly after the story broke, RomanOriginals.co.uk saw a solid lift in backlinks over time, too.

Man lists igloo on Airbnb for $200

What happened

After Blizzard Jonas hit the Northeast, a man built an igloo in Brooklyn and listed it on Airbnb for $200 per night as a joke. Airbnb deleted the listing shortly after it was posted. Media pickups included ABC News, USA Today, Washington Post, Mashable, and The Daily Mail.


The results

Of all the PR events we analyzed, the igloo story was the most recent, having occurred at the end of January. Although we can’t yet gauge the long-term impact this media hit will have on Airbnb, the initial impact appears to be minimal. Since Airbnb is frequently in the news, it’s not very surprising that one PR event doesn’t have a significant effect.

Airbnb’s site only saw a 2% increase in organic traffic, despite an 83% increase in press mentions.

It’s also too soon to measure the story’s impact on new backlinks. However, the chart below shows the backlinks around the time of the story breaking relative to the new backlinks acquired during the rest of the year.

REI opts out of Black Friday

What happened

The retail chain announced it would be closed on Black Friday and created the #OptOutside campaign urging Americans to spend Black Friday outdoors instead of shopping. Major media outlets picked up the story, including CNN, USA Today, CBS News, and Time.


The results

While REI received great publicity by saying “no” to Black Friday, the media coverage appeared to have little impact on organic traffic to REI.com. In fact, traffic decreased by 5% the month after the story broke compared to the previous month.

REI.com did see a 51% increase in new backlinks after the story broke. Additionally, the subdomain created as part of the #OptOutside campaign has received nearly 8,000 backlinks since its launch.

When good press turns bad (and vice versa)

In addition to both positive and negative spins being put on a story, the sentiment around the story can change as more details emerge. Sometimes a positive story turns negative or a bad story turns positive. Such is the case with Gravity Payments and Miss Universe, respectively.

CEO of Gravity Payments announces $70K minimum wage

What happened

The CEO of this credit card-processing company announced he was cutting his salary to provide a minimum staff salary of $70K. It was hard to miss this story, which was covered by nearly every major US media outlet (and some global), and included a handful of TV appearances by the CEO. The brand later received backlash when it was discovered that the CEO, Dan Price, may have increased employee wages in response to a lawsuit from his brother.


The results

Initial spikes after the story broke included a 90% increase in press mentions, 139% increase in organic traffic, and 146% increase in new backlinks. But it didn’t end there for Gravity Payments.

What’s been most incredible about this story is its longevity in the press. Six months after the story broke, publishers were doing follow-up stories about the CEO signing a book deal and how business was booming. In December 2015, Bloomberg wrote a piece revealing that there was more to the story and suggested the wage increase was motivated by a lawsuit.

So far it looks like the benefits from the good press have outweighed any negative stories. In addition to the initial spike, to date GravityPayments.com has seen a 1,888% increase in organic traffic from the month before the story broke (March 2015).

The site has also received a substantial lift in new backlinks since the story broke.

Steve Harvey crowns the wrong Miss Universe winner

What happened

Host Steve Harvey accidentally announced the wrong winner during the 2015 Miss Universe pageant. Some speculated the slip-up was an elaborate PR stunt organized to combat the pageant’s falling ratings.

While there was initial backlash over the mistake, after several public apologies from Harvey, the incident may end up being best remembered for the memes it inspired.

The results

It appears the negative sentiment around this story has not hurt the brand. With a 199% increase in press mentions compared to the previous year’s pageant, this year’s Miss Universe stayed top of mind long after the pageant was over.

After the incident, there was nearly a 123% increase in monthly organic traffic to MissUniverse.com compared to the month following the 2014 Miss Universe pageant. However, organic traffic had steadily increased throughout 2015. For this reason, it’s difficult to give Steve Harvey’s flub all the credit for any increases in organic traffic. It’s also too early to measure the long-term impact on traffic.

It’s also difficult to gauge how much of an effect it had on backlinks to MissUniverse.com. Judging from the chart below, so far there has been a minimal impact on new backlinks, but this may change as more articles related to this story are indexed.

For a brand that relies on TV viewership, perhaps the greatest payoff from this incident has yet to come. You can bet the world will tune in when Steve Harvey hosts next year’s Miss Universe pageant (he signed a multi-year hosting contract).

Is there any value to bad publicity?

Crafting controversial stories around a brand can have a huge payoff. After all, the press loves conflict. But too much negative press coverage can lead to a company’s downfall, as is the case with Turing Pharmaceuticals and Peeple.

Turing Pharmaceuticals raises drug price by 5,000%

What happened

You may not recognize the company name, but you’ve most likely heard of its former CEO Martin Shkreli. This pharmaceutical company bought a prescription drug and raised the price by 5,000%. The story made global headlines, including coverage by the New York Times, BBC, NBC News, and NPR, and the CEO had multiple TV interviews.


Shkreli defended the price hike, saying the profits would be funneled back into new treatment research, but his assertions that the pricing was a sound business decision weren’t enough to save face. He later stepped down as Turing’s CEO after being arrested by the FBI on fraud charges.

The results

Like Gravity Payments, the Turing Pharma story has had a long lifespan in the news cycle. After the story broke on September 20, press mentions of Turing Pharmaceuticals increased by 821% over the previous month.

During the month after the story first broke, turingpharma.com saw a 318% increase in organic traffic. Traffic also spiked in December and February, which is when Shkreli’s arrest, resignation as Turing CEO, and congressional hearing were making headlines.

Turingpharma.com also saw a significant increase in backlinks after the story broke. Within a month after the story broke, the site had a 382% increase in new backlinks.

While Turing Pharmaceuticals gained SEO value and brand recognition from the media frenzy, the benefits don’t make up for the negative sentiment toward the brand; the company posted a $14.6 million loss during the third quarter of 2015.

Peeple promotes new app as “Yelp for people”

What happened

A new site announcing a soon-to-be-launched “Yelp for people” app caused a huge social media and press backlash. The creepy nature of the app, which allowed people to review one another like businesses, sparked criticism as well as concerns that it would devolve into a virtual “burn book.”


The Washington Post broke the story, and from there it was picked up by the New York Times, BBC, Wired, and Mashable.

The results

Peeple is an exceptional case since the app’s site launched right before the brand received the flurry of media coverage. Because of that, it’s possible that forthepeeple.com had not yet been indexed by Google at the time of the press coverage. Unlike the other brands we looked at in this study, we have no pre-coverage traffic and backlink benchmarks to compare against. Still, the Peeple story serves as a cautionary tale for brands hoping to attract attention to a new product with negative press.

Peeple received a 343% increase in press mentions during the month after the story broke. But since it was a new site, it’s difficult to accurately gauge how much of an impact media attention had on organic traffic and backlinks. Despite all of the attention, to date, the site only receives an estimated 1,000 visitors per month.

Since the story broke, the site has received around 3,800 backlinks.

An abundance of negative media coverage buried Peeple before its product even launched. By the time the founders backtracked and repositioned Peeple in a more positive light, it was too late to turn the brand’s image around. The app still hasn’t launched.

II. What marketers can learn from these 7 PR wins and fails

A substantial relative increase in press mentions, rather than sheer volume, can yield significant benefits.

Overall, the stories about large brands (Airbnb, REI, Miss Universe) received more exposure than the unknown brands (Turing Pharmaceuticals, Roman Originals, Peeple, Gravity Payments). The well-known brands were mentioned in 148% more headlines than the unknown brands, and those stories received on average 190% more social shares than stories about the lesser-known brands.

Although stories about smaller brands received less press coverage than large brands, the relatively unknown companies saw a greater impact from being in the news than large brands. Roman Originals, Gravity Payments, and Turing Pharmaceuticals saw the greatest increases in organic traffic and backlinks. Comparatively, a surge of press coverage did not have as dramatic of an impact on the large companies. Of the well-known brands, Miss Universe saw the greatest impact, with a 199% increase in press mentions and 123% increase in site traffic compared to the previous year’s pageant.

Negative stories attracted more coverage and social shares than positive stories.

On average, the brands with negative stories (Miss Universe, Turing Pharma, and Peeple) appeared in 172% more headlines, which received 176% more social shares than the positive stories.

Have you noticed that the news feels predominantly negative? This is for good reason, since conflict is a pillar of good storytelling. Just as a novel or movie needs conflict, so do news stories.

That being said, there is such a thing as too much conflict. As we saw with Turing Pharmaceuticals and Peeple, company reputations can be irreversibly damaged when the brand itself is the source of conflict.

An element of unexpectedness is a key ingredient for massive press coverage.

There’s an old saying in journalism: “When a dog bites a man, that is not news because it happens so often. But if a man bites a dog, that is news.”

From a CEO paying all employees $70,000 salaries to a major retailer closing on the busiest shopping day of the year to a seasoned TV host announcing the wrong beauty pageant winner, all of the stories we analyzed were surprising in some way.

Surprising stories attract initial attention and then ignite others to share it. This crucial element of newsworthiness also plays a role in making content go viral.

A quick, positive reaction when the brand isn’t controlling the story may help boost the beneficial impact of media coverage.

A carefully orchestrated PR stunt allows a company to plan for the potential press reaction, but what’s a brand to do when it unexpectedly ends up in the news?

While this may sound like a bureaucratic company’s worst nightmare, nimble brands can cash in on the attention with a quick, good-spirited reaction. Roman Originals masterfully news-jacked a story about itself by doing just that.

First, it put out a tweet that settled the debate over the dress’ color and updated its homepage to showcase #TheDress.

Soon after, a white and gold version of the dress was put up for auction, with the proceeds donated to charity. Had Roman Originals spent too much time planning a response, it may have missed out while the story was still relevant in the news cycle.

Key takeaways

While most brands will never achieve this level of media coverage, the instances above teach pertinent lessons about what makes a story catch fire in the media:

  • A PR win for a little-known brand doesn’t necessarily require thousands of press mentions. For this reason, unknown companies stand to benefit more from riskier tactics like PR stunts. On the flipside, it may be more difficult for a large brand to initiate a PR stunt that makes a significant impact.
  • An element of unexpectedness may be a primary driver of what makes a news story go viral. When possible, work an unexpected angle into your PR pitches by focusing on what’s unique, bizarre, or novel about your brand.
  • Plan for the unexpected by having processes in place that empower marketing and PR teams to act fast with a public response to sudden media attention.
  • As we saw in our study, controversial stories are a big hit with journalists, but make sure your brand is the hero, not the villain. Look for opportunities to weave the “bad guys” your company is fighting into your pitches. Your company’s villain could be as obvious as a competitor or more subtle adversaries like the establishment (Uber vs. taxi industry).

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Does Ad Viewability Always Equal Views?

Posted by rMaynes1

[Estimated read time: 6 minutes]

There’s a lot of talk about ad viewability at present, and with big players such as Google announcing in 2015 that it would only charge advertisers for 100% viewable impressions, it’s easy to see how it’s become such a hot topic in the digital marketing world.

But what exactly does it mean to be “viewable?” Does this mean people will look at your ad? We recently conducted a research study that set out to answer these questions.

Conducting the eye-tracking study

The study was conducted in two parts: an online survey of 1,400 participants for quantitative data, and an eye-tracking study designed to observe the actual behaviors of searchers online for more qualitative data.

The goal was to measure the type of ads people noticed and engaged with the most, determining whether behavior changed depending on the intent behind the search task (research or purchase) and the relevancy of the ad. We also wanted to determine how viewable online display ads truly are and what other factors influenced whether or not people viewed them.

Participants performed tasks in Mediative’s lab while being recorded using the Tobii T60 desktop eye tracker. The key metrics measured were:

  • Time to First Fixation – How long it took the searcher to fixate on an ad. A fixation is when we hold our eyes still and actually take in visual information. A typical fixation lasts from 100–300 milliseconds, and we generally fixate 3–4 times every second (Source: tobii.com).
  • Total Visit Duration – How long the searcher spent in total fixating on the ad.
  • Visit Count – How many times the searcher came back to look at the ad.
  • Percentage Fixated – The percentage of all participants who looked at the ad.
  • Percentage Clicked – The percentage of all participants who clicked on the ad.
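
As a concrete illustration, metrics like these can be derived from a raw log of fixation events. The data shapes and function below are hypothetical (this is not Tobii's actual API), and each fixation on the ad is counted as one "visit" for simplicity:

```python
from collections import defaultdict

def ad_metrics(fixations, clicks, target, n_participants):
    """fixations: (participant_id, target, start_ms, duration_ms) tuples.
    clicks: set of (participant_id, target) pairs."""
    per_participant = defaultdict(list)
    for pid, tgt, start_ms, dur_ms in fixations:
        if tgt == target:
            per_participant[pid].append((start_ms, dur_ms))
    if not per_participant:
        return None  # nobody looked at this ad

    first_fix = [min(s for s, _ in fx) for fx in per_participant.values()]
    visit_dur = [sum(d for _, d in fx) for fx in per_participant.values()]
    visit_cnt = [len(fx) for fx in per_participant.values()]
    clickers = {pid for pid, tgt in clicks if tgt == target}
    return {
        # averaged over the participants who fixated on the ad at all
        "time_to_first_fixation_ms": sum(first_fix) / len(first_fix),
        "total_visit_duration_ms": sum(visit_dur) / len(visit_dur),
        "visit_count": sum(visit_cnt) / len(visit_cnt),
        # percentages are over all participants in the study
        "pct_fixated": 100 * len(per_participant) / n_participants,
        "pct_clicked": 100 * len(clickers) / n_participants,
    }

# Three participants; two fixate on the big box ad, one clicks it:
fixations = [(1, "big_box", 1200, 250), (1, "big_box", 4000, 180),
             (2, "big_box", 900, 300),
             (3, "leaderboard", 400, 150)]
m = ad_metrics(fixations, clicks={(2, "big_box")},
               target="big_box", n_participants=3)
# m["time_to_first_fixation_ms"] == 1050.0, m["pct_fixated"] ≈ 66.7
```

In a real study these aggregates would come out of the eye tracker's analysis software; the sketch just shows how the five definitions relate to the underlying fixation data.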

A participant conducting a calibration test on the T60 Tobii Eye-tracker in Mediative’s research lab

“The findings in this study are a powerful reminder to create engaging advertising programs that responsibly leverage first- and second-party data. Marketers are still better off complementing user experiences than disrupting them.”

– Sonia Carreno, President, Interactive Advertising Bureau of Canada

What is viewability?

“Viewability,” as defined by the Media Ratings Council, means an ad has 50% of its pixels in view for a minimum of one second. Essentially, an ad is viewable if there’s an opportunity for it to be viewed. 76% of ads in Mediative’s study were served in a “viewable” position as defined above.
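
To make that definition concrete, the rule can be sketched in a few lines of code. This is a simplified illustration, not any real measurement SDK: ads and viewports are modeled as plain rectangles, and visibility is checked against sampled scroll positions.

```python
# Hypothetical sketch of the MRC viewability rule: >= 50% of the ad's
# pixels inside the viewport for at least one continuous second.
# Rectangles are (left, top, right, bottom) in page coordinates.

def visible_fraction(ad, viewport):
    """Fraction of the ad's area that falls inside the viewport."""
    left, top = max(ad[0], viewport[0]), max(ad[1], viewport[1])
    right, bottom = min(ad[2], viewport[2]), min(ad[3], viewport[3])
    if right <= left or bottom <= top:
        return 0.0  # no overlap at all
    ad_area = (ad[2] - ad[0]) * (ad[3] - ad[1])
    return (right - left) * (bottom - top) / ad_area

def is_viewable(ad, samples, threshold=0.5, min_seconds=1.0):
    """samples: time-ordered list of (timestamp_seconds, viewport_rect)."""
    run_start = None  # when the current streak of visibility began
    for t, viewport in samples:
        if visible_fraction(ad, viewport) >= threshold:
            if run_start is None:
                run_start = t
            elif t - run_start >= min_seconds:
                return True  # visible long enough, continuously
        else:
            run_start = None  # streak broken; start over
    return False

# A 300x250 "big box" ad straddling the fold of a 1000px-tall viewport,
# then scrolled fully into view and held there:
ad = (100, 900, 400, 1150)
samples = [(0.0, (0, 0, 1280, 1000)),    # only 40% visible: doesn't count
           (0.5, (0, 200, 1280, 1200)),  # fully visible: streak starts
           (1.0, (0, 200, 1280, 1200)),
           (1.6, (0, 200, 1280, 1200))]  # >= 1s of continuous visibility
print(is_viewable(ad, samples))  # True
```

In a real browser this kind of measurement is typically done with the IntersectionObserver API rather than manual geometry, but the logic is the same: track the ad's intersection ratio over time.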

An opportunity for an ad to be viewed, however, does not mean that it will be viewed. 16.6% of the ads that were served throughout the study were viewed. 50% more ads were viewed above the fold compared to below the fold, and ads above the fold were viewed for 87% longer.

Increasing viewability to increase views

Although click-through rate is a clear indicator of whether an ad was viewed or not, it doesn’t give the whole picture. It gives no measurement of how many ads were seen, but not clicked on. Therefore, CTR cannot be the sole measurement of a display ad campaign’s success. Ads can be seen, noticed, and influence a purchase — all without generating a click.

Buying a viewable ad impression is only the first step, however. Here are some areas for you to consider improving in order to maximize the chances of your display ads being seen.

1. Ad relevancy

The research showed that ads relevant to the searcher’s current task are 80% more likely to be noticed than ads relevant to something the searcher had looked for in the past. Additionally, ads relevant to the search query are viewed for 67% longer than irrelevant ads. Relevant ads were visited on average 2.59 times per visitor per page, versus only 1.6 times for irrelevant ads. Relevant ads received 5.7x more clicks.

Below are heat maps for web pages containing two big box ads. The page on the left features an ad that was irrelevant to the search task. The page on the right features a relevant ad. The areas in red had the most views, followed by orange, yellow, and green.

Your action item:

You can advertise on sites that are relevant to the audience you are trying to reach (e.g. a car ad on a car information site). However, adding data into campaigns and purchasing impressions in real-time will increase the relevancy of your ads, no matter the site the user is visiting. With demographic data and/or intent- and interest-based data, specific audiences of people can be targeted, rather than specific sites. This is more likely to result in a higher return on ad investments, as impressions land on the most likely buyers.

2. Ad type

In a survey, we asked people which ads they pay attention to the most on a webpage. The responses show that people believe they pay attention to the leaderboard ads at the top of the page the most. Our eye-tracking study confirmed that, yes, this ad type was noticed the fastest, and by the most people.

However, it was the ads to the side of the page (skyscraper ads) and within the page content (big box ads) that were viewed for the longest and received the most clicks. A November 2014 report by Google had similar findings, reporting that the most viewable ads on a page are those that are positioned just above the fold, not at the top of the page.

Your action item:

Don’t rule out ads that might traditionally have poor click performance. This doesn’t mean the ad isn’t seen!

3. Ad design

Poor display ad design is often to blame for a poor click-through rate; if people don’t notice the ad, they won’t click it. When it comes to online display ads, images, videos, and animations are more important than what’s actually being said with the text.

Your action item:

Invest in good ad creative. Keep ads simple, yet eye-catching. Ensure the ad features a clear call-to-action to indicate why the searcher should click on your ad so that they don’t lose interest.

4. Multiple ad exposures

Multiple relevant ads on the same page were viewed, on average, by 2.7x more participants and captured 2.8x more clicks than the individual relevant ads.

The more times an ad is shown across different pages, the more engagement it receives: the average number of clicks increased by 162% between one exposure and two, and by 39% between two exposures and three.

Your action item:

Consider advertising placements such as home page takeovers or run-of-site/run-of-network advertising, where multiple exposures of the same ad will be served. Retargeted ads will also likely result in multiple exposures to the same ad. When retargeted ads were presented to a searcher, they were viewed, on average, 65% faster than ads that were not retargeted.

In summary

Ultimately, what we’ve discovered through this research is that buying a “viewable” ad impression does not guarantee that it’s going to be seen and/or clicked on, and that there are many ways you can maximize the chances of your ad being viewed. It’s critical to understand, however, that online ad success cannot be determined by views and clicks alone. You must consider the customer’s entire purchase journey, and how ads can influence behavior at different stages. Display advertising is just one part of an integrated digital campaign for most advertisers.

For more tips on how to maximize display ad viewability, download the full Mediative paper for free.

Let us know your thoughts and questions in the comments!


How to Find Your Brand’s Disruptive Opportunity

Posted by ronell-smith

[Estimated read time: 9 minutes]

In 2009, I wrote a magazine story about a Japanese lure company selling high-end fishing lures in the US for $20 to $50 each. With anglers buying the lures in droves, it was only a matter of time before competitors followed suit, creating carbon copies of the best sellers, which led to a market frenzy the industry had never seen.

Months later, I interviewed the owner of the lure company everyone was copying. His comments were eye-opening and accurate.

“I don’t get it,” he said, referring to the competition. “I sell one million lures for $25. They sell 10 to 15 million lures for $5 to $10. I should be copying them.”

Basic math highlights the truth of his words: one million lures at $25 is $25 million in revenue, while 10 to 15 million lures at $5 to $10 is $50 million to $150 million.

A good idea doesn’t mean good for your brand

What looked like a good idea — selling more expensive products to eager buyers — blinded competitors to what would become an amazing opportunity: finding a way to sell more low-cost lures.

For those of us involved in content marketing, we’re used to scenarios like these. Right?

The competition does something cool or interesting or that gets links, likes, or conversions, and we lose our minds in an attempt to copy them, even if it makes zero sense for us to do so.

Buzzfeed, anyone?

Sure, list posts can be and have been effective, but other than throwing some traffic your way, for most businesses the long-term value simply isn’t there.

But we live in a monkey-see, monkey-do world, so whatever the competition does, we attempt to do it better.

Never mind the fact that (a) we don’t really know how successful they are, or (b) how successful attempting the same tactics will prove for us.

Most important, because our resources are limited, we don’t often see how choosing to chase others’ ideas means we typically cannot adequately focus on the opportunities right in front of us.

Opportunities > ideas

At Mozcon 2015, the word “disruption” kept being spoken by speakers on stage. In fact, a prominent theme of Rand’s opening talk was how a number of prominent brands were willing to disrupt themselves:

  • Facebook: The brand’s Little Red Book, given to all employees, contains many useful, guiding tidbits, among them, “If we don’t create the thing that kills Facebook, someone else will.”
  • Microsoft: After years of openly expressing contempt for Linux, the brand is welcoming working with the open-source outsider.

And as someone who was fortunate enough to stumble onto disruption (correctly called disruptive innovation) after college, hearing Rand talk about this theory made me very proud.

Problem is, what these businesses he described are doing is not really disruption.

In a strict business sense, the examples he shared are better described as pivots, in which businesses re-imagine (or refashion) themselves and/or their assets in an entirely different light as a way to grow, ward off competitors, solve big problems, or expand their audience.

What is disruption?

Disruption is something far more significant, especially for brands looking to set themselves apart in competitive markets.

Coined by Harvard Business School professor Clayton Christensen in his 1997 book “The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail,” disruptive innovation refers to the transformation of a product or service in a way that makes it more affordable and accessible to a wider audience.

These products start at the bottom, catering to a market that cannot afford the more expensive (and more popular) option. But, as in the case of the lure companies, by focusing on the bottom of the market, there is less competition and greater numbers who can afford the product.

One of the most prominent and most vivid examples of disruptive innovation is how smartphones disrupted the laptop market, which in the 1980s disrupted the desktop market, which itself disrupted the mainframe computer.

Disruptors enter the market at the bottom, where people or industries are being underserved, which is a result of the major players in a vertical choosing to go upstream and over-serve the market, often through added features and benefits people cannot use and don’t need.

But while they continue to cater to the high end of the market, disruptors slide in and gobble up market share at the bottom, before moving upstream to challenge their biggest competitors as the former’s products or services improve in quality, grow in popularity, and suffice as a viable option for even those with more discerning taste.

Why should content marketers care?

For those of you wondering what any of this has to do with content marketing, I say “plenty.” Think about the biggest challenges currently weighing down content marketing, beginning with brands…

  • Laboring to create engaging content
  • Failing to understand their audience and their needs
  • Lacking clarity on which metrics to use in determining the success of their marketing efforts

I’m convinced this occurs because brands are too focused on better serving audiences they neither own nor enjoy the full support of, instead of looking for the next opportunity on the horizon. Or they’re too focused on ideas, instead of opportunities.

Finding your brand’s disruptive opportunity means you’re not competing in the same sandbox as everyone else, which means your chances of dominating the category are much, much higher.

Brands making it work

When Facebook announced 2G Tuesdays, whereby it asks some of its employees to use their brand’s apps over a slower 2G connection, it was billed as an opportunity for the company to see the challenges people in the developing world have when using their products.

Maybe.

That’s probably only part of the story. It’s just as likely that the company, which has nearly one billion active users, is looking at what additional services it can offer people in developing countries. It’s a good bet that, to garner interest from the other 6.4 billion people on earth, the service they offer won’t look anything like the Facebook we now know and (mostly) love.

Or what about Wavestorm, a company that sells surfboards for $99.99, offered exclusively at discount warehouse Costco? The company entered at the bottom of the market, where surfboards regularly cost $300 to $1,000 or more. Even the brand’s owner is not shy about saying he realizes the folks who can only pay $99 today will likely be willing to spend far more in the future.

Is your brand ready to find its disruptive opportunity?

Seize your brand’s disruptive opportunity

The beginning is always the most painful part, and this exercise will be no different. Begin by pulling the marketing team together for a brainstorming session.

Then throw two questions on the table:

  1. What are the verticals where a large portion of the audience is underserved?
  2. What are we uniquely qualified to offer at least one of these markets that the competition would have a hard time beating us on?

Here’s the kicker: You cannot limit yourself to any specific vertical.

To many of you, the idea will sound crazy at first. That is, until you re-read the questions and see that what you’re really being asked to do is consider where you should be looking for growth — expanded opportunities you might not otherwise have pursued.

When I talk to brands, what I frequently hear is that they have maxed out in a market, lack the skilled staff to compete in their vertical, or are no longer able to connect with the audience in a meaningful way.

As tastes have changed, these brands have not been able to keep up.

So instead of playing catch-up, my suggestion is to slowly but surely start charting a new path, one where the territory is fertile (i.e., lots of opportunities) and the competition is not entrenched.

Don’t let fear get in your way.

Maybe you’re an agency sick of losing clients. You could take options off the table, offering only those services that you’ve determined, during the discovery process, will benefit the client. Any prospect who wants to cherry-pick services would have to look elsewhere.

How is this disruptive? The vast majority of agencies offer a smorgasbord of services, many of which they do poorly, while many others specialize in areas where they have deep expertise. The sweet spot is often in the middle, where you identify needs but only take on the most glaring of those needs, or very specific needs, which could move the business forward. (The management consulting field is currently being disrupted in similar fashion.)

Let’s take a look at a couple of examples of disruption at work.

Disruption in action

When I first encountered strength coaches Dean Somerset and Tony Gentilcore, they were both making a name for themselves as bloggers and trainers in Canada and Boston, respectively. Fast forward seven years, and they are now two of the most well-known, most-sought-after young experts in the field.

While most trainers go after the largest piece of the pie — fat loss clients — they’ve focused on helping folks to move better (i.e., mobility) rather than just look better, which means clients can enjoy their newfound size and weight. Also, they spend a considerable amount of time traveling the US, Canada and Europe, teaching other strength coaches how to be better at their jobs.

One of my favorite examples of a small brand that’s taken up the challenge to disrupt a sector is Boston-based Wistia, the video-hosting company that makes it easier for businesses to add their videos to the web.

The brand was founded in 2006, which is significant because video-hosting juggernaut YouTube came to fruition in 2005.

You might ask yourself, “What were [Wistia] thinking?” One word: Opportunity.

Where others saw a dominant player owning a category, they saw a dominant player opening a category so wide that others had room to thrive.


“We had an opportunity to go deeper on one segment of this market and create specialized features that YouTube would never build as a broad-based platform,” says Wistia co-founder and CEO Chris Savage.

So while YouTube focuses on being everything to everyone, Wistia has singled out a lucrative, largely ignored piece of the pie they can own and dominate.

Next steps

Let me be emphatic: I have no expectation that businesses will read this post, then dramatically reshape their products, product lines, or services overnight. The point of this article is to make it clear that opportunities are all around, and the more open we are to these opportunities, the more we’ll increase our chances of continued success and limit the number of missed opportunities.

In the end, the lure companies chasing the Japanese brands realized their error too late: The category is now dominated by low-cost alternatives that cost a fraction of the price of the originals.

If only one of the copycats had looked more closely at the numbers, they could have seen the opportunity ahead.


Four Ads on Top: The Wait Is Over

Posted by Dr-Pete

For the past couple of months, Google has been testing SERPs with 4 ads at the top of the page (previously, the top ad block had 1-3 ads), leading to a ton of speculation in the PPC community. Across the MozCast data set, 4 ads accounted for only about 1% of SERPs with top ads (which matches testing protocol, historically). Then, as of yesterday, this happened:

Over the past 2 weeks, we’ve seen a gradual increase, but on the morning of February 18, the percentage of top ad blocks displaying 4 ads jumped to 18.9% (it’s 19.3% as of this morning). Of the 5,986 page-1 SERPs in our tracking data that displayed top ads this morning, here’s how the ad count currently breaks down:

As you can see, 4-ad blocks have overtaken 2-ad blocks and now account for almost one-fifth of all top ad blocks. Keep in mind that this situation is highly dynamic and will continue to change over time. At the 19% level, though, it’s unlikely that this is still in testing.
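The breakdown above is just a share-of-total calculation. As a quick illustration, here is a sketch in Python; the per-block counts are hypothetical stand-ins (only the 5,986 total comes from the data above), not MozCast’s actual tallies:

```python
# Hypothetical counts of page-1 SERPs whose top ad block contains 1-4 ads.
# Only the total (5,986) matches the figure cited above.
ad_block_counts = {1: 600, 2: 950, 3: 3300, 4: 1136}

total = sum(ad_block_counts.values())  # 5,986 SERPs with top ads
for ads, count in sorted(ad_block_counts.items()):
    # The 4-ad share works out to roughly 19%, in line with the text above.
    print(f"{ads} ads: {count / total:.1%} of top ad blocks")
```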

Sample SERPs & Keywords

The 4-ad blocks look the same as other, recent top ad blocks, with the exception of the fourth listing. Here’s one for “used cars,” localized to the Chicago area:

Here’s another example, from an equally competitive search, “laptops”:

As you can see, the ads continue to carry rich features, including site-links and location enhancements. Other examples of high-volume searches that showed 4 top ads in this morning’s data include:

    • “royal caribbean”
    • “car insurance”
    • “smartphone”
    • “netbook”
    • “medicare”
    • “job search”
    • “crm”
    • “global warming”
    • “cruises”
    • “bridesmaid dresses”

Please note that our data set tends toward commercial queries, so it’s likely that our percentages of occurrence are higher than the total population of searches.

Shift in Right-column Ads

Along with this change, we’ve seen another shift – right-hand column ads seem to be moving to other positions. This is a 30-day graph for the occurrence of right-hand ads and bottom ads in our data set:

The same day that the 4-ad blocks jumped, there was a substantial drop in right-column ad blocks and a corresponding increase in bottom ad blocks. Rumors are flying that AdWords reps are confirming this change to some clients, but confirmation is still in progress as of this writing.

Where is Google Headed?

We can only speculate at this point, but there are a couple of changes that have been coming for a while. First, Google has made a public and measurable move toward mobile-first design. Since mobile doesn’t support the right-hand column, Google may be trying to standardize the advertising ecosystem across devices.

Second, many new right-hand elements have popped up in the last couple of years, including Knowledge Panels and paid shopping blocks (PLAs). These entities push right-hand column ads down, sometimes even below the fold. At the same time, Knowledge Panels have begun to integrate with niche advertising in verticals including hotels, movies, music, and even some consumer electronics and other products.

This is a volatile situation and the numbers are likely to change over the coming days and weeks. I’ll try to update this post with any major changes.


Should SEOs Only Care About DIRECT Ranking Signals in Google? – Whiteboard Friday

Posted by randfish

Can a new friend you connect with at a conference be as strong of a ranking signal as a quality backlink? Can it be stronger? The power of indirect ranking signals is something that can often be overlooked or brushed aside in favor of what we know as hard truth from Google, but doing so is a mistake. In today’s Whiteboard Friday, Rand talks about the importance of broadening your perspective and tactics when it comes to considering both direct and indirect ranking signals in your SEO.

http://ift.tt/1TtOudC


Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about direct ranking signals versus indirect ranking signals. I see people ask questions like this all the time. Should I only care about the direct ranking signals? Do I even really care, as an SEO, if something indirectly impacts my rankings in Google, because I can’t really influence that, can I? Or I can’t be confident that Google is going to make those changes. The answer, from my perspective, is “Well, hang on a second. Let’s walk through this together.”

Direct ranking signals

Direct ranking signals are pretty obvious. These are things like links. You earn a bunch of new links. They point to your page. Your rankings go up. Now my page for bookcases ranked a little higher because all these people just linked over to me. Great, that’s very nice.

Direct ranking signal, my page takes eight seconds to load. Now I improve a bunch of things about it and it only takes three seconds to load. Maybe I move up. Maybe I move up a lot less. I put this from 3 to 2 and this one from 32 to 31. Page load speed, a very small ranking factor, but as Google says a direct one. All right, great. Page load speed, that’s improved.

Direct ranking signal, keyword and language usage. I go from having my page about media storage furniture, and then I realize no one looks for media storage furniture. They look for bookcases, because in fact the thing that I’m storing here, the media is books. Boom, I’m moving up for bookcases again.

These are direct ranking signals. They’re very obvious. We know that they are in Google’s ranking algorithm. Google is public about a lot of them. They talk about them. We can test them and observe them. They are consistent. They move the needle. Great.

Indirect ranking influencers

What about stuff like this? I go to a conference. I meet people, this friendly person with a hat. Friendly person with a hat goes home and they write an article. Friendly person with a hat’s article contains a link to my bookcase website. Well, Google would never say that going to conferences gives you higher rankings. That’s not a ranking signal. Even building relationships with friendly people who have hats on, also not a ranking signal. Did it move the needle? Yeah, it probably did, because it ended up indirectly influencing a direct ranking signal.

Maybe Gillian Anderson — she probably has better things to do what with “The X-Files” being back on the air, very exciting — Gillian Anderson: “Rand’s bookcases are my favorite thing in the apartment.” Wow, look. She sent thousands of people searching for Rand’s bookcases. Hmm, what happens then? Well, people pick it up and write about it. Lots of people searching for it. Maybe Google’s entity associations and topic modeling algorithm starts to associate Rand’s bookcases as being an entity and associates the word “Rand’s” with the word “bookcases.” Maybe now I rank higher. Is it suddenly the case then that tweets equal higher rankings? Google would certainly tell you tweets don’t impact rankings. Tweets are not a ranking factor. What’s going on? It’s indirect.

What about I go to my page and I decide, “Hey, you know what I think would be really cool is if I had a feature where you take a photo of your bookshelf and I will tell you all the books that you’ve got on it and then I’ll even show them on my bookshelf on the website so that you can see how your books look on Rand’s bookcases.” You upload a photo. Bam, you can see all your books on there. Super sweet feature. Gets me some news. Gets me a bunch of shares. Adds time to my time on site. Improves my conversion rate. Also, weirdly, influences maybe a bunch of direct ranking factors that lead to higher rankings. Is it the case that photo upload features mean higher rankings? Again, Google would never tell you that. It’s not consistent. It’s not like every time I add a photo upload feature to a website it’s suddenly going to rank higher.

The problems

1. What Google says

What’s going on here? Well, indirect ranking features, indirect ranking influencers are powerful. It doesn’t matter that they’re indirect. They can have powerful impacts on your rankings. I think for SEOs this is really hard, because Google’s representatives will often say things like, “We don’t use that in the rankings. That signal, that is not a ranking signal for us,” which shouldn’t shut down the conversation, but it really does in our industry. A lot of times we hear that Google says social signals are not ranking signals, tweets are not a ranking signal, or time on site is not a ranking signal, so why should SEOs try and influence that?

2. What clients, teams, and managers will say

That brings us to the second problem. Clients, teams, managers, what do they say? They say, “That’s not SEO. That’s not your job, SEO person.” They’ll go back and they’ll cite Google saying, “Hey, this doesn’t influence rankings.” Well, guess what? Both of these are really problematic because they may be technically accurate, but they don’t capture the big picture, and because of that you miss out.

3. It takes time

The other one I hear is “indirect influences take time.” This is absolutely the case. You get a bunch of links. They’re probably going to be counted ASAP as soon as Google finds them. You add this new feature here, it’s going to take a while for all of these other things to propagate over to the ranking signals that are actually going to impact your position. That’s tough. They only have the desired impact when (a) they get counted by the direct signals, and (b) when they actually work to influence the direct signals. So indirect signals are tough in all these ways.

My advice

Focus on what leads to improvements

My advice is to take a broader perspective. Stop focusing exclusively on “this directly impacts SEO and therefore is my job,” and “this doesn’t directly influence SEO and therefore is not my job.” Say holistically I know that lots of things impact searcher satisfaction:

  • User experience,
  • Amplification and amplification likelihood,
  • Engagement,
  • Branding through memory and association,
  • Relationships with people,
  • Brand coverage, and
  • Saturation.

All of these things will indirectly positively impact your rankings, but not just your rankings. They’ll impact positively your conversion rate. They’ll impact positively your user experience. They’ll impact positively your bottom line. That’s the one that you really care about. SEO is just a path to the bottom line to sales and brand building and amplification and the things that you are actually trying to grow.

If you focus on these and you’re aware of what is direct versus indirect and how the indirect things impact the direct things, I think you can craft a very smart holistic SEO strategy. If you throw out all the indirect stuff, because it’s not direct, you are killing yourself. You’re shooting yourself in the foot. Your competitors, frankly, the smart ones are the ones who are going to concentrate on both of these. Sometimes indirect stuff can be more powerful in the short term and the long term than direct stuff. That’s just how it goes.

Fight and work influencers for the future

I’d urge you to fight for the ability to have influence on these indirect signals, especially when they also have positive impacts on other channels. Don’t let rank influence be a short-term measurement only. I think one of the big problems is that folks look at their rankings and they say, “Okay, we did this. It moved up the next week. That clearly had an impact.” No. Look, a lot of the work that we do in SEO has rank impact for months and years to come. We can’t just measure things right away. If it has a positive impact on other signals you care about on the bottom line, on all of these types of factors, then it’s going to influence rankings positively as well.

All right, everyone, look forward to your comments, and we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Success Metrics in a World Without Twitter Share Counts

Posted by EricaMcGillivray

On November 20, 2015, Twitter took away share counts on their buttons and from their accessible free metrics. Site owners lost an easy signal of popularity of their posts. Those of us in the web metrics business scrambled to either remove, change, or find alternatives for the data to serve to our customers. And all those share count buttons, on sites across the Web, started looking a tad ugly:

Where's my shares? Yep, this is a screenshot from our own site.

Why did Twitter take away this data?

When asked directly, Twitter’s statement about the removal of tweet counts has consistently been:

“The Tweet counts alone did not accurately reflect the impact on Twitter of conversation about the content. They are often more misleading to customers than helpful.”

On the whole, I agree with Twitter that tweet counts are not a holistic measurement of actual audience engagement. They aren’t the end-all-be-all to showing your brand’s success on the channel or for the content you’re promoting. Instead, they are part of the puzzle — a piece of engagement.

However, if Twitter were really concerned about false success reports, they would’ve long ago taken away follower counts, the ultimate social media vanity metric. Or taken strong measures to block automated accounts and follower buying. Not taking action against shallow metrics, while “protecting” users from share counts, makes their statement ring hollow.

OMG, did Twitter put out an alternative?

About a year ago, Twitter acquired Gnip, an enterprise metrics solution. Gnip mostly looks to combine social data and integrate it into a brand’s customer reputation management software, making for some pretty powerful intelligence about customers and community members. But since it’s focused on an enterprise audience, it’s priced out of the reach of most brands. Plus, the fact that it’s served via API means brands must have the knowledge and development skills/talent in order to really customize the data.

Since the share count shutdown, Gnip released a beta Engagement API and has promised an upcoming Audience API. This API seems to carry all the data you’d need to put those share counts back together. However, an important note:

“Currently only three metrics are available from the totals endpoint: Favorites, Replies, and Retweets. We are working to make Impressions and Engagements available.”

For those of you running to your favorite tools: Gnip’s TOS currently forbids reselling their data, which effectively bars integrating it into third-party tools, although some companies, like Buzzsumo, have paid for and received permission to use the data in their software. The share count removal also led Apple to quietly kill Topsy.

Feel social media’s dark side, Twitter

Killing share counts hasn’t been without damage to Twitter as a brand. In his post about which brands lost and won in Google search, Dr. Pete Meyers notes that Twitter dropped from #6 to #15. That has to hurt their traffic.

Twitter lost as a major brand on Google in 2015

However, Twitter also made a deal with Google in order to show tweets directly in Google searches, which means Twitter’s brand may not be as damaged as it appears.

Star Wars tweet stream in Google results

Perhaps the biggest ding to Twitter is in their actual activity and sharing articles on their platform. Shareaholic reports sharing on Twitter is down 11% since the change was implemented.

Share of voice chart on Twitter from Shareaholic

It’s hard to sell Twitter as a viable place to invest social media time, energy, and money when there’s no easy proof in the pudding. You might have to dig further into your strategy and activities for the answers.

Take back your Twitter metrics!

The bad news: Almost none of these metrics actually replicate or replace the share count metric. Most of them cover only what you tweet, and they don’t capture the other places your content’s getting shared.

The good news: Some of these are probably better metrics and better goals.

Traffic to your site

Traffic may be an oldie, but it’s a goodie. You should probably already be tracking this. And please don’t just use Google Analytics’ default settings, as they’re probably slightly inaccurate.

Google Analytics traffic from Social and Twitter

Some defaults for one of my blogs, since I’m lazy.

Instead, make sure you tag what you’re sharing on social media and you’ll be better able to attribute your hard, hard work to the proper channels. Then you can really figure out if Twitter is the channel for your brand’s content (or if you’re using it right).
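Tagging boils down to appending UTM parameters to the URLs you share. Here’s a minimal sketch using Python’s standard library; the campaign labels are placeholders, so substitute whatever naming scheme your team has agreed on:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tag_url(url, source, medium, campaign):
    """Append UTM parameters so Google Analytics attributes the visit
    to the right channel instead of lumping it into direct/referral."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # keep any existing parameters
    query.update({
        "utm_source": source,      # e.g. "twitter"
        "utm_medium": medium,      # e.g. "social"
        "utm_campaign": campaign,  # your own campaign label
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_url("https://example.com/blog/post", "twitter", "social", "spring-launch"))
# → https://example.com/blog/post?utm_source=twitter&utm_medium=social&utm_campaign=spring-launch
```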

Use shortening services and their counters

Alternatively, especially if you’re sharing content not on your own site, you can use share and click counting from various URL shortening services. But this will only count toward individual links you share.

Bit.ly's analytics around share counts for individual links

Twitter’s own free analytics

No, you won’t find the share count here, either. Twitter’s backend is pretty limited: specific stats on individual tweets and some audience demographics. It can be especially challenging if you have multiple accounts and are working with a team. You can, however, download reports for further Excel wizardry.

Tweet impressions and Twitter's other engagement metrics

Twitter’s engagement metric is “the number of engagements (clicks, retweets, replies, follows, and likes) divided by the total number of impressions.” While this calculation seems like a good idea, it’s not my favorite, because it’s hard to scale as you grow your audience. You’re always going to have more lurkers than people actively engaging with your content, so it takes a lot of massaging in your reporting to explain why your audience grew while the rate went down, or why a company with 100 followers does way better on Twitter’s engagement metric.

TrueSocialMetrics’ engagement numbers

Now these are engagement metrics that you can scale, grow, and compare. Instead of looking at impressions, TrueSocialMetrics gives conversation, amplification, and applause rates for your social networks. This digs into the type of engagement you’re having. For example, your conversation rate for Twitter is calculated by taking how many comments you got and dividing it by how many times you tweeted.

TrueSocialMetric's engagement numbers

At Moz, we use a combination of TrueSocialMetrics and traffic to report on the success of our social media efforts to our executives. We may use other metrics internally for testing or for other needs, depending on that specific project.
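The rates described above are simple per-post ratios. Here’s a minimal sketch of that arithmetic; the function and field names are my own, not TrueSocialMetrics’ actual API:

```python
# A sketch of TrueSocialMetrics-style rates: each is an engagement count
# divided by the number of posts (tweets) you published in the period.
def engagement_rates(posts, comments, shares, likes):
    return {
        "conversation": comments / posts,   # replies per tweet
        "amplification": shares / posts,    # retweets per tweet
        "applause": likes / posts,          # likes per tweet
    }

rates = engagement_rates(posts=200, comments=50, shares=120, likes=300)
# conversation 0.25, amplification 0.6, applause 1.5
```

Because these rates are normalized by how much you posted, they stay comparable as your audience grows, unlike an impressions-based denominator.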

Twitcount

Shortly after the removal of share counts was announced, Twitcount popped up. It works by installing their share counter on your site; however, counts only begin accruing the day you install the code and button, so it can’t recover your historical totals. There are also limitations, since they use Twitter’s API, and these limitations may cause data inaccuracies. I haven’t used their solution, but if you have, let us know in the comments how it went!

Buffer’s reach and RT metrics

Again, this only covers your individual tweets’ metrics, and Buffer only grabs metrics on tweets sent out via their platform. Buffer’s reach metric is similar to what many traditional advertisers and people in public relations are used to, and it resembles Twitter’s general impressions metric. Reach estimates how far your tweet may have traveled based on the size of each retweeter’s audience.

Like most analytics tools, Buffer lets you export the metrics and play with them in Excel. Or you can pay for Buffer’s business analytics, which runs $50–$250/month.

Trending topics and hashtag reports

There are many tools out there where you can track specific trends and hashtags around your brand. At MozCon, we know people are tweeting using #MozCon. But not every brand has a special hashtag, or even knows the hot topics around their brand.

SproutSocial’s trends report is unique in that it pulls both the topics and hashtags most associated with your brand and the engagement around those.

Obviously, last July, #MozCon was hot. But you can also see from what else is trending that there’s positive community sentiment around our brand.

Buzzsumo

Our friends at Buzzsumo can be used as a Topsy topic replacement and share counter. They did a great write-up on how to use their tool for keyword research. They are providing share counts from Gnip’s data.

Share counts from BuzzSumo

When I ran some queries on Moz’s blog posts, though, there seemed to be a big gap in their share counts. While we’d expect Moz’s counts to dip a bit on the weekends, there should still be something there:

BuzzSumo on Moz's share counts over the week

I’m unsure whether this is Buzzsumo’s or Gnip’s data issue. It’s also possible that there are limits on the data, especially since Moz has large numbers of followers and gets large numbers of shares on our posts.

Use Fresh Web Explorer’s Mention Authority instead

While Fresh Web Explorer’s index only covers recent data — the tool’s main function being to find recent mentions of keywords around the web, a la Google Alerts — it can be helpful if you’re running a campaign and need fresh data no older than a month. Mention Authority does include social data. (Sorry, the full formula involved in creating the score is one of Moz’s few trade secrets.) What’s nice about this score is that it’s comparable across different disciplines, especially publicity campaigns, and can serve as a holistic alternative.

Fresh Web Explorer's mention authority

Embedded tweets for social proof

Stealing this one from our friends at Buffer, but if you’re looking to get social proof back for people visiting your post, embedded tweets can work well. This allows others to see that your tweet about the post was successful, perhaps choosing to retweet and share with their audience.

Obviously, this won’t capture your goals to hand to a boss. But this will display some success and provide an easy share option for people to retweet your brand.

Predictions for the future of Twitter’s share count removal

Twitter will see this as a wash for engagement

The inclusion of tweets directly in Google search results balances out the need for direct social proof. That said, with the recent timeline discussions and other changes, people are watching Twitter closely, with many predicting the death of the platform. (Oh, the irony of trending hashtags when #RIPTwitter is popular.)

Twitter may not relent fully, but it may cheapen the product through Gnip. Alternatively, it may release some kind of “sample” share count metric instead. Serving up share count data on all links certainly costs a lot of money from a technical side. I’m sure this removal decision was reached with a “here’s how much money we’ll save” attached to it.

Questions about Twitter’s direction as a business

For a while, Twitter focused itself on being a breaking news business. At SMX East in 2013, Twitter’s Richard Alfonsi spoke about Twitter being in competition with media and journalism and being a second screen while consuming other media.

The lack of share counts, however, makes it hard for companies to prove direct value. (Though I’m sure there are many advertisers wanting only lead generation and direct sales from the platform.) Small businesses, who can’t easily prove other value, aren’t going to see the platform as an easy investment.

Not to mention that issues around harassment have caused problems for even celebrities with large followings, like Sue Perkins (UK comedian), Joss Whedon (director and producer), Zelda Williams (daughter of Robin Williams), and Anne Wheaton (wife of Wil Wheaton). This garners extremely bad publicity for the company, especially since most of them were active users of Twitter.

No doubt Twitter shareholders were on edge when stock prices went down and the platform added a net of zero new users in Q4 of 2015. Is the removal of share counts on the long list of reasons why Twitter didn’t grow in Q4? Twitter has made some big revenue and shipping promises to shareholders in response.

Someone will build a tool to scrape Twitter and sell share counts.

When Google rolled out (not provided), every SEO software company clamored to make tools to get around it. Since Gnip data is so expensive, it’s pretty impractical for most companies. The only way to actually build this tool would be to scrape all of Twitter, which has many perils. Companies like Hootsuite, Buffer, and SproutSocial are the best set up to do it more easily, but they may not want to anger Twitter.

What are your predictions for Twitter’s future without share counts? Did you use the share counts for your brand, and how did you use them? What will you be using instead?

Header image by MKH Marketing.


Revved Up Rankings: History & Filtering at Your Fingertips, New in Moz Pro

Posted by jmodjeska

Today I’m proud to announce some new features in Moz Pro that help you get a lot more value out of your keyword rankings reports. You can now view your full rankings history for any campaign, select specific date ranges for your charts and tables, better segment your rankings data to get a clearer understanding of your performance and visibility, and effectively manage large campaigns with numerous keywords. Did I also mention it’s lightning-fast? To get started, visit the keyword rankings page in any of your campaigns or test drive Moz Analytics with a free trial today.

http://ift.tt/217ngdZ

Want a quick recap? Tori goes over the highlights in this quick 1:22 video!


Historical rankings: getting from 12 to infinity

The major value of today’s release is that it enables customers to visualize their campaign’s entire rankings history. This is thanks to an ongoing effort to completely overhaul our data assembly architecture. I’m excited about today’s release because it lets loose the first phase of this overhaul initiative, and marks the end of the 12-cycle limitation in our rankings reports.

As of today, timeframe selection has no bounds. You can report on rankings data with start and end dates anywhere in the life of your campaign, up to and including the entire campaign’s history, even on campaigns with long histories and lots of keywords. Your full rankings histories have been liberated.

12 weeks of keyword rankings history in Moz Analytics — a limitation until today

Success! A campaign’s entire rankings history in Moz Analytics

And more new features

In addition to unlimited rankings history, we’re giving users the freedom to compare rankings, search visibility, engine performance, and competitive metrics within customizable timeframes. We want our users’ reporting needs to drive the application, and not vice-versa. Here are some other features available as of today:

  • Customizable timeframe selection. In addition to weekly and monthly views, you can now select and display start and end dates, and export reports for specific timeframes. Rankings deltas (changes over time) are now calculated over the duration of the selected timeframe.

Calendar controls to select your data display range

Quick-select menu for common timeframes

  • Flexible, universal filtering. Fast response times and full keyword history means no more limits on how you view and filter your data. Use the new universal filter to narrow displayed keywords by locality, labels, and keyword text.

  • On-the-fly aggregate calculations. Rankings summaries, deltas, search visibility, and universal results all update on-demand whenever you select a new timeframe.
  • Flexible, fast sorting. Data points — like difference between rankings by engine — that previously took so much overhead to calculate that they couldn’t be sorted in-place, are now easily sortable on-demand.

Sort by anything, anywhere
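To make the timeframe-delta and filtering behavior above concrete, here’s a hypothetical sketch of the logic; the data shape and function names are my own invention, not Moz’s actual schema:

```python
# Hypothetical rankings data: one entry per keyword, with rank by date.
rankings = [
    {"keyword": "bookcases", "history": {"2016-01-04": 9, "2016-02-15": 4}},
    {"keyword": "media storage", "history": {"2016-01-04": 22, "2016-02-15": 25}},
]

def delta(entry, start, end):
    # Positive delta means the keyword moved up the SERP over the timeframe.
    return entry["history"][start] - entry["history"][end]

def filter_keywords(entries, text):
    # Mimics the universal filter: narrow displayed keywords by keyword text.
    return [e for e in entries if text in e["keyword"]]

for e in filter_keywords(rankings, "book"):
    print(e["keyword"], delta(e, "2016-01-04", "2016-02-15"))  # bookcases 5
```

The key idea is that deltas are computed over whatever start and end dates you select, rather than over a fixed reporting cycle.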

And performance improvements, too

These new features are built on an entirely new architecture. We’ve been running the new and old systems in full parallel mode for about two months now to ensure everything was ready to switch over. This has also given us the opportunity to measure some key performance improvements:

  • 30X faster pipeline. Our data assembly and storage processes run up to 30X faster, eliminating delays between data collection and in-app availability. The low latency between data collection and availability is what facilitates the delivery of full campaign histories.
  • 20X faster server response times. For most in-app requests, our response times are dramatically faster than the previous system. We’re seeing rankings datasets delivered in 50 ms for average-sized campaigns (compared to 800+ ms in the previous system). We’ve also moved many calculations into the browser, reducing network calls and wait times for filter and sort requests.

Why we did all of this

Rankings data is important to our customers

Keyword rankings data is a core component of the Moz Pro suite of tools. We gather localized and national data on millions of keywords each day across hundreds of search engine locales so that our customers can analyze their SEO keyword performance. Moz Analytics users spend the bulk of their time in the Rankings section, where we present metrics that include mobile and desktop keyword rankings, historical SERP analysis, local and national keywords, search visibility scores, and competitive metrics.

The data was already there

We store deep historical rankings data going back to the moment of a campaign’s creation. While this information has always been accessible via historical rankings CSV downloads, we’ve known for some time that this workflow is frustrating and that the data would be much more useful in the UI. What held us back was our architecture. If you’re interested in the technical challenges and how we overcame them to deliver these new features, I offer a detailed explanation on our Developer Blog, covering the project background and the architecture that makes all of this possible.

Where we’ll go next

We plan to round out our rankings overhaul project with backend and UI updates to the Analyze a Keyword page. We’ll also speed up Page Optimization, at which point the entire corpus of ranking-related data will be on our new platform.

Ultimately, all of our numerous datasets, including crawl and links, will be assembled and stored on the new architecture, unlocking new features and delivering data faster as we go. We’ll continue to be agile and iterative, progressively releasing updates as soon as they’re ready.

So go check it out!

To experience the new features in the rankings section, visit your ranking report in any Moz Analytics campaign. If you’re not already a Moz Pro subscriber, why not take a free trial and see how our software can help you do better marketing? As always, we would love to hear your feedback below.


Beyond App Streaming & AMP: Connection Speed’s Impact on Mobile Search

Posted by Suzzicks

Most people in the digital community have heard of Facebook’s 2G Tuesdays. They were established to remind Facebook’s own employees that much of the world still accesses the Internet on slow 2G connections, rather than 3G, 4G, LTE, or WiFi.

For an online marketer in the developed world, it’s easy to forget about slow connections, but Facebook is particularly sensitive to them. A very high portion of their traffic is mobile, and a large portion of their audience uses their mobile device as their primary access to the Internet, rather than a desktop or laptop.

Facebook and Google agree on this topic. Most digital marketers know that Google cares about latency and page speed, but many don’t realize that Google also cares about connection speed.

Last year they began testing their revived mobile transcoding service, which they call Google Web Light, to make websites faster in countries like India and Indonesia, where connection speed is a significant problem for a large portion of the population. They also recently added Data Saver Mode in Chrome, which has a similar impact on browsing.

AMP pages begin ranking in mobile results this month

This February, Google will begin ranking AMP pages in mobile search results. These will provide mobile users access to news articles that universally render in about one second. If you haven’t seen it yet, use this link on your phone to submit a news search, and see how fast AMP pages really are. The results are quite impressive.

In addition to making web pages faster, Google wants to make search results faster. They strive to provide results that send searchers to sites optimized for the device they’re searching from. They may alter mobile search results based on the connection speed of the searcher’s device.

To help speed up websites and search results at the same time, Google is also striving to make Chrome faster and lighter. They're even trying to ensure that it doesn't drain device batteries, which is something that Android users will especially appreciate! Updated versions of Chrome actually have a new compression method called Brotli, which promises to compress website files 26% more than previous versions of Chrome.

We’ll review the impact of Google’s tests on changing search results based on connection speed. We’ll outline how and why results from these tests could become more salient and impact search results at various different speeds. Finally, we’ll explain why Google has a strong incentive to push this type of initiative forward, and how it will impact tracking and attribution for digital marketers now and in the future.

The diagram below provides a sneak peek of the various connection speeds at which Google products are best accessed and how these relationships will likely impact cross-device search results in the future.

| Connection Speed | Best for these Google Products | Impact on SERP |
|---|---|---|
| WiFi & Fiber | Fiber, ChromeCast, ChromeCast Music, Google Play, Google Music, Google TV, ChromeBooks, Nest, YouTube, YouTube Red | Streaming Apps, Deep-Linked Media Content |
| 3G, 4G, LTE | Android Phones, Android Wear, Android Auto, ChromeBooks, YouTube, YouTube Red | Standard Results, App Packs, Carousels, AMP Pages |
| 2G & EDGE | Android Phones, Android Auto | Basic Results, Google Web Light, AMP Pages |

Basic vs. standard mobile search results

The image below shows the same search on the same phone. The phone on the left is set to search on EDGE speeds, and the one on the right is set to 4G/LTE. Google calls the EDGE search results “Basic,” and the 4G/LTE results “Standard.” They even include a note at the bottom of the page explaining “You’re seeing a basic version of this page because your connection is slow” with an option to “switch to standard version.” In some iterations of the message, this sentence was also included: “Google optimized some pages to use 80% less data, and rest are marked slow to load.”

Notice that the EDGE connection has results that are significantly less styled and interactive than the 4G/LTE results.

Serving different results for slower connection speeds is something that Google has tested before, but it’s a concept that seems to have been mostly dormant until the middle of last year, when these Basic results started popping up. Google quietly announced it on Google+, rather than with a blog post. These results are not currently re-creatable (at least for me), but the concept and eventual implementation of this kind of variability could have a significant impact on the SEO world, further deprecating our ability to monitor keyword rankings effectively.
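Sites can experiment with something analogous themselves. Browsers expose a rough connection estimate via the (partially supported) Network Information API, and a page could fall back to a lighter template on slow connections. The sketch below is purely illustrative: `chooseVariant` is a hypothetical helper of my own, not how Google builds Basic results, and `navigator.connection` is unavailable in some browsers (and in Node), so the read is guarded:

```javascript
// Sketch: picking a page variant from an estimated connection speed.
// chooseVariant is a hypothetical helper mirroring the Basic/Standard
// split described above; it is NOT Google's implementation.
function chooseVariant(effectiveType) {
  switch (effectiveType) {
    case 'slow-2g':
    case '2g':
      return 'basic';    // stripped-down template: minimal styling, no app prompts
    case '3g':
    default:
      return 'standard'; // 3g/4g/unknown: full experience
  }
}

// In a browser, read the live estimate from the Network Information API,
// guarded because the API is missing in some browsers and in Node:
const effectiveType =
  (typeof navigator !== 'undefined' && navigator.connection)
    ? navigator.connection.effectiveType
    : '4g'; // assume fast when the API is unavailable

console.log(chooseVariant(effectiveType));
console.log(chooseVariant('2g')); // a 2G/EDGE visitor would get "basic"
```

The point isn't the specific thresholds; it's that connection speed is now a signal a page (or a search engine) can act on at request time.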

The presentation of the mobile search results isn't all that's changing. The websites included and the order in which they're ranked change as well. Google knows that searchers with slow connections will have a bad experience if they try to download apps, so App Packs are not included in any Basic search results. That means a website ranking in position #7 in Standard search results (after the six apps in the App Pack) can jump to position #1 in a Basic search. That's great news if you're the top website being pushed down by the App Pack!

The full list of search results is included below – items that only appear in one result are bolded.

| Standard Search Result "Superman Games" | Basic Search Result "Superman Games" |
|---|---|
| **App – City Jump** | Web – herogamesworld.com>superman-games |
| **App – Man of Steel** | Web – www.heroesarcade.com>play-free>sup… |
| **App – Superman Homepage** | Web – LEGO>dccomicssuperheroes>games |
| **App – Superbman** | Web – Wikipedia>wiki>List_of_Superman_vi… |
| **App – Batman Arkham Origins** | **Web – http://ift.tt/1Lr0u7G>tags>supe…** |
| **App – Subway Superman Run** | **Web – YouTube>watch (Superman vs Hulk – O Combate – YouTube)** |
| Web – Herogamesworld.com>superman-games | **Web – http://ift.tt/1Lr0sNl** |
| Web – Heroesarcade.com>play-free>sub… | **Web – fanfreegames.com>superman-games** |
| Web – Wikipedia>wiki>List_of_Superman_vi… | **Web – moviepilot.com>posts>2015/06/25** |
| Web – LEGO>dccomicssuperheroes>games | **Web – m.batmangamesonly.com>superman-ga…** |

You may have the urge to write this off, thinking all of your potential mobile customers have great phones and fast connections, but you’d be missing the bigger picture here.

First, slow connection speeds can happen to everyone: when they’re in elevators, basements, subways, buildings with thick walls, outside of city centers, or simply in places where the mobile connection is overloaded or bad. Regardless of where they are, users will still try to search, often ignorant of their connection speed.

Second, this testing probably indicates that connection speed is an entirely new variable which could even be described as a ranking factor.

Responsive design does not solve everything

Google’s desire to reach a growing number of devices might sound fantastic if you’re someone who’s recently updated a site to a responsive or adaptive design, but these new development techniques may have been a mixed blessing. Responsive design and adaptive design can be great, but they’re not a panacea, and have actually caused significant problems for Google’s larger goals.

Responsive sites face speed and development challenges.

Responsive design sites are generally slow, which means there is a strong chance that they won’t rank well in Basic search results. Responsive sites can be built to function much more quickly, but it can be an uphill battle for developers. They face an ever-growing set of expectations, frameworks are constantly changing, and they’re already struggling to cram extra functionality and design into clean, light, mobile-first designs.

They can have negative repercussions.

Despite Google’s insistence that responsive design is easier for them to crawl, many webmasters that transitioned saw losses in overall conversions and time-on-site. Their page speed and UX were both negatively impacted by the redesigns. Developers are again having to up their skills and focus on pre-loading, pre-rendering, and pre-fetching content in order to reduce latency — sometimes just to get it back to what it was before their sites went responsive. Others are now forced to create duplicate AMP pages, which only adds to the burden and frustration.

Wearables/interactive media pose new problems.

Beyond the UX and load time concerns, responsive design sites also don’t allow webmasters to effectively target these new growth channels that Google cares about — wearables and interactive media. Unfortunately, responsive design sites are nearly unusable on smartwatches, and probably always will be.

Similarly, Google is getting much more into media, linking search with large-screen TVs, but even when well-built, responsive design sites look wonky on popular wide-screen TVs. It seems that the development of mobile technology may have already out-paced Google’s recommended “ideal” solution.

Regardless, rankings on all of these new devices will likely be strongly influenced by the connection speed of the device.

Is AMP the future of mobile search for slow connections?

The good news is that AMP pages are great candidates for ranking in a Basic search result, because they work well over slow connections. They’ll also be useful on things like smart watches and TVs, as Google will be able to present the content in whichever format it deems appropriate for the device requesting it — thus allowing them to provide a good experience on a growing number of devices.

App streaming & connection speed

A couple months ago, Google announced the small group of apps in a beta test for App Streaming. In this test, apps are hosted and run from a virtual device in Google’s cloud. This allows users to access content in apps without having to download the app itself. Since the app is run in the cloud, over the web, it seems that this technology could eventually remove the OS barrier for apps — an Android app will be able to operate from the cloud on an iOS device, and an iOS app will be able to run on an Android device the same way. Great for both users and developers!

Since Google is quietly testing and perfecting its connection-speed-based changes to the algorithm, it's easy to see how this new ranking factor will be relied upon even more heavily when App Streaming becomes a reality. App Streaming will only work over WiFi, so Google will be able to leverage what it's learned from Basic mobile results to provide yet another divergent set of results to devices that are on a WiFi connection.

The potential for App Streaming will make apps much more like websites, and deep links much more like…regular web links. In some ways, it may bring Google back to its “Happy Place,” where everything is device and OS-agnostic.

How do app plugins & deep links fit into the mix?

The App Streaming concept actually has a lot in common with the basic premise of the Chrome OS, which was native on ChromeBooks (but has now been unofficially retired and functionally replaced with the Android OS). The Chrome OS provided a simple software framework that relied heavily on the Chrome browser and cloud-based software and plugins. This allowed the device to leverage the software it already had, without adding significantly more to the local storage.

This echoes the plugin phenomenon that we're seeing emerge in the mobile app world. Mobile operating systems and apps use deep links to other local apps and plugins. Options like emoji keyboards and image aggregators like GIPHY can be downloaded and automatically pulled into the Facebook Messenger app.

Deep-linked plugins will go a long way toward freeing storage space and improving UX on users’ phones. That’s great, but App Streaming is also resource-intensive. One of the main problems with the Chrome OS was that it relied so heavily on WiFi connectivity — that’s relevant here, too.

What does music & video casting have to do with search?

Most of the apps that people engage with on a regular basis, for hours at a time, are media apps used over WiFi. Google wants to be able to index and rank that content as deep links, so that it can open and run in the appropriate app or plugin.

In fact, the indexing of deep-linked media content has already begun. The ChromeCast app uses new OS crawler capabilities in the Android Marshmallow OS to scan a user's device for deep-linked media. It then creates a local cache of deep links to watched and un-watched media that a user might want to "cast" to another device, then organizes it and makes it searchable.

For instance, if you want to watch a documentary on dogs, you could search your Netflix and Hulu apps, then maybe Amazon Instant Video, and maybe even the NBC, TLC, BBC, or PBS apps for a documentary on dogs.

Or, you could just do one search in the ChromeCast app and find all the documentaries on dogs that you can access. Assuming the deep links on those apps are set up correctly, you will be able to compare the selection across all apps that you have, choose one, and cast it. Again, these types of results are less relevant if you are on a 2G or 3G connection and thus not able to cast the media over WiFi.

This is an important move for Google. Recently, they've been putting a lot of time and energy into their media offerings. They successfully launched ChromeCast 2 and ChromeCast Music at about the same time as they dramatically improved their Google Music subscription service (a competitor to Spotify and Pandora) and launched YouTube Red (their rival for Hulu, Netflix, and Amazon Prime Video). They may eventually even begin to include the "cast" logo directly in SERPS, as they have in the default interface of Google+ and YouTube.

Google’s financial interest in adapting results by connectivity

Google’s interest in varying search results by connection speed is critical to their larger goals. A large portion of mobile searches are for entertainment, and the need for entertainment is unending and easy to monetize. Subscription models provide long-term stable revenue with minimal upkeep or effort from Google.

Additionally, the more time searchers spend consuming media, either by surfacing it in Google or the ChromeCast app, or through Now on Tap, the more Google can tailor its marketing messages to them.

Finally, the passive collection and aggregation of people’s consumption data also allows Google to quickly and easily evaluate which media is popular or growing in popularity, so they can tailor Google Play’s licensing strategy to meet users’ demands, improving the long-term value to their subscribers.

As another line of business, Google also offers ChromeCast Music and Google Music, which are subscription services designed to compete with Amazon Music and iTunes. You might think that all this streaming — streaming apps, streaming music, streaming video and casting it from one device to another — would slow down your home or office connection speed as a whole, and you would be right. However, Google has a long-term solution for that too: Google Fiber. The more reliant people become on streaming content from the cloud, the more important it will be for them to get on Google’s super-fast Internet grid. Then you can stream all you want, and Google can collect even more data and monetize as they see fit.

Image credit: The NextWeb

What’s the impact of connection variability in SERPS on SEO strategy & reporting?

So what might this mean for your mobile SEO strategy? Variability by connection speed will make mobile keyword rank reporting and attribution nearly impossible. Currently, most keyword reporting tools either work by aggregating ranking results that are reported from ISPs, or by submitting test queries and aggregating the results.

Unfortunately, while that’s usually sufficient for desktop reporting (though still error-prone and very difficult for highly local searches), it’s nearly impossible for mobile. All of the SEO keyword reporting tools out there are struggling to report on mobile search results, and none take connection speed into account. Most don’t even take OS into account, either, so App Packs and the website rankings around them are not accurately reported.

Similarly, most tools are not able to report on anything about deep links, so it’s hard to know if click-through traffic is even getting to the website, or if it might be getting to a deep screen in an app instead. In short, ranking tools have a long way to go before they will be accurate in mobile, and this additional factor makes the reporting even harder.

In mobile, there are additional factors that can change the mobile rankings and click-through rates dramatically:

  • Localization
  • Featured Rich Snippets (Answer Boxes)
  • Results that are interactive directly in the SERP (playable YouTube videos, news, Twitter and image carousels)
  • AJAX expansion opportunities

All of these things are nightmares for the developers who write ranking software that scrapes search results. Even worse, Google is constantly testing new presentation schemes, so even if the tools could briefly get it right, they risk a constant game of catch-up.

One of the reasons Google is constantly testing new presentation schemes? They’re trying to make their search results work on an ever-growing list of new devices while minimizing the need for additional page loads or clicks. This is what drives all the testing.

If you think about a traditional set of search results, they’re an ordered list that goes from top to bottom. Google has gotten so fast that the ten-link restriction actually hurts the user experience when the mobile connection is good.

In response, Google has started to include carousels that scroll left to right. Only one or two search results can show on a smart watch at one time, so this feature allows searchers to delve deeper into a specific type of result without the additional click or page load.

However, carousels don't appear in Basic search results. Also, the carousels only count as one result in the vertical list, but can add as many as 5 or 10 results to the page. Again, SEOs and SEO software really haven't settled on a way to represent this effectively in their tracking, and little has been reported about the impact on CTR for either the items in the carousel or the items below it.

Conclusion

Speed matters.

Not just latency and page speed, but also connection speed. While we can’t directly impact the connection speed of our mobile users, we should at least anticipate that search results might vary based on the use-case of their search and strategize accordingly.

In the meantime, SEOs and digital marketers should be wary of tools that report mobile keyword rankings without specifying things like OS, app pack rankings, location and, eventually, connection speed.