How to Deliver JSON-LD Recommendations the Easy Way – Whiteboard Friday

Posted by sergeystefoglo

When you work with large clients whose sites comprise thousands (or hundreds of thousands) of pages, it’s a daunting task to add the necessary markup. In today’s Whiteboard Friday, we welcome Sergey Stefoglo to share his framework for delivering JSON-LD recommendations in a structured and straightforward way.


Video Transcription

Hello, Moz fans. My name is Serge. I’m a consultant at Distilled, and this is another edition of Whiteboard Friday. Today I want to take the next few minutes to talk to you about one of my processes for delivering JSON-LD recommendations.

Now it’s worth noting upfront that at Distilled we work with a lot of large clients that have a lot of pages on their website, thousands, hundreds of thousands of pages. So if you work at an agency that works with local businesses or smaller clients, this process may be a bit overkill, but I hope you find some use in it regardless.

So as I mentioned, oftentimes at Distilled we get clients that have hundreds of thousands of pages on their site, and you can imagine if your point of contact comes to you and essentially asks, “Hey, we don’t have any markup on our site. Can you recommend all of the JSON-LD for all of the pages, please?” If you’re anything like me, that can feel daunting; it’s a big ask. Your wheels start spinning, so to speak, and oftentimes that leads to a bit of paralysis. So I hope this process helps you get unstuck, get started, and get to work.

Step 1: List out all the page templates

The first step in this process essentially is to list out all of the templates on the site. I’m assuming you’re going to be dealing with an e-commerce site or something like that. That’s really the way that you’re going to break down this problem and take it from kind of a larger picture, where someone comes to you and says, “Hey, I need all of the things on all of the things,” and you break it down and say, “Okay, well, really what I need to focus on is a section at a time, and what I need to do is give recommendations for each section at a time.” To me, that’s a much more kind of organized way to come at this, and it’s helped me a lot.

So when you list out the templates, if you’ve had this client for a while, you probably already know the templates that they have. If they’re new, it’s worth getting familiar with their site and thinking about things at a template level regardless. So just simply hopping on the site, browsing around, and making a list of, yes, they have product pages and category pages and some different variations of those. They have blog pages and a bunch of other kinds of pages. It’s good to be familiar with them. Our goal is to essentially recommend JSON-LD for each of those templates. So that’s really the first step is getting clear on which templates we’re looking at and what exists on the site.

Step 2: Choose one template and note what can be marked up

The second step is to choose one of those templates, just one (for example, the product page template), and essentially go through that page and jot down anything you think can be marked up. Now if you’ve recommended schema before, or if you’ve worked with JSON-LD or any other kind of markup, you’ll be familiar with a lot of the standards across the board, and it becomes second nature over time. So once you’ve done this a 2nd time or 3rd time or 10th time, you’ll have a good idea of what kind of markup goes on a product page or what kind of markup goes on a category page.

If it’s your first time, just go on the page and I’d encourage you to just browse through and look at schema.org or some other example sites that are similar, see what they’re doing, and kind of jot down by yourself, in a notebook or something, what you think can be marked up. So on a product page, you can note down that, yes, there’s an image of the product. There’s a price. There’s a URL. There are breadcrumbs on the page. There are reviews, etc. You’re just going through and kind of making a list of that very simply.
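To make that concrete, here’s a minimal sketch of what product-page notes like these can eventually turn into. It’s written in Python (handy when you need to template markup across thousands of pages), and every name, URL, price, and rating below is a made-up placeholder, not taken from any real site:

```python
import json

# Hypothetical values jotted down from a product page; swap in real data.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "image": "https://www.example.com/images/widget.jpg",
    "url": "https://www.example.com/products/widget",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89",
    },
}

# This JSON string is what would sit inside a <script type="application/ld+json"> tag.
print(json.dumps(product_markup, indent=2))
```

The printed JSON is what you’d paste into the schema testing tool, and ultimately what the developer drops into the page template.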

Step 3: Convert notes into JSON-LD, validate with the schema testing tool, and paste into doc

The next step is to essentially take those notes and convert them into JSON-LD. At this point, people tend to kind of freak out a little bit, but you don’t have to be a developer to do this. It’s very accessible. If this is your first time going about it, I’m not going to get into all of the specifics on how to do that. This is more of a framework of approaching that. But there are a lot of great articles that I can link to. Just reach out to me and I can hook you up with that.

But the third step, again, is to convert those notes into actual JSON-LD. That process is fairly straightforward. What I like to do is open up the page, or a representative URL from the template that I’m working on. So for a product page, I’d open that up in my browser. I like to have schema.org open, too; that’s the canonical resource for schema information. Then I also like to have a few competitor sites open that are similar. If you’re working on an e-commerce brand, you’re fortunate that there are a lot of great examples of sites doing this well, and that’s publicly available to you, so you can check out what they’re doing and how they’re doing it.

So my process is just going through that list, going on schema.org, or going on a competitor’s site or a previous site you’ve worked on. If you’re looking at something like, let’s say, the cost of the product, you can look that up on schema.org. You’ll see that there’s an Offer type for that markup. You can copy that into the schema testing tool and essentially validate that it works. Once you validate it, you just go down the list further. If you started with the price, you can move on to breadcrumbs, etc.
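Breadcrumbs work the same way: look up the BreadcrumbList type on schema.org, mirror the structure, and validate. Here’s a hedged sketch with an invented three-level trail; the names and URLs are placeholders:

```python
import json

# Hypothetical breadcrumb trail: Home > Cameras > Example Widget.
breadcrumb_markup = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Home",
         "item": "https://www.example.com/"},
        {"@type": "ListItem", "position": 2, "name": "Cameras",
         "item": "https://www.example.com/cameras/"},
        # The final crumb is the current page, so it doesn't need an "item" URL.
        {"@type": "ListItem", "position": 3, "name": "Example Widget"},
    ],
}

print(json.dumps(breadcrumb_markup, indent=2))
```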

At the end of step three, you essentially have all of the JSON-LD that you need, and certainly the core elements, to start down the next step.

Step 4: Check with your point of contact/developer!

The next step is to pause and check in with your point of contact, because if you’re working on a large-scale site, you’re going to have 10 or 15 of these templates you’re working on for JSON-LD, and it’s worthwhile to essentially say, “Hey, can we do a 30-minute check-in? I’m done with the first template, and I want to make sure this all makes sense and is in a format that’s going to be good for you.”

Speaking of format, what I like to do personally is use Google Drive: set up a folder in the client folder, title it JSON-LD, give the client access to it, and within that folder keep a bunch of documents, one per template. So for the product page example, you would have a document in that folder titled “Product JSON-LD,” and you would copy any of the JSON-LD that you validated in the schema testing tool and paste it into that doc. That’s what you walk through with your point of contact or with the developer. Take any feedback they have; if they want it in a different format, take that into account, revise it, and meet with them again. But get a green light before moving on to the other templates.

Step 5: Repeat from Step 2 onward for all your templates

That’s really the next step: once you have the green light and the developer or your point of contact feels good about it, you’re just going to rinse and repeat. So you’re going to go back to Step 2 and choose another template. If you’ve done the product page, hop over to the category page template and do the same thing. Jot down what can be marked up. Transfer those notes into JSON-LD using competitor sites or similar sites, using schema.org, and using the structured data testing tool. It’s the same process. At that point, you’re on cruise control. It’s nice because it takes something that initially could have been fairly stressful, at least for me, and breaks it down in a way that makes sense, and you can focus because of that.

So again, this process has worked really well for me. At Distilled, we like to think about frameworks and how to approach bigger problems like this, break them down, and make them simpler, because we’ve found that allows us to do our best work. This is just one of those processes.

So that’s all I have for you today. Thank you so much for tuning in. If you have any questions or comments, or if you have any experiences implementing or recommending JSON-LD, I’d love to hear them. So give me a shout on Twitter or in the comments. Thank you so much for tuning in, and we will see you next time.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Using STAT: How to Uncover Additional Value in Your Keyword Data

Posted by TheMozTeam

Changing SERP features and near-daily Google updates mean that single-keyword strategies are no longer viable. Brands have a lot to keep tabs on if they want to stay visible and keep that coveted top spot on the SERP.

That’s why we asked Laura Hampton, Head of Marketing at Impression, to share some of the ways her award-winning team leverages STAT to surface all kinds of insights and make informed decisions.

Snag her expert tips on how to uncover additional value in your keyword data, including how Impression’s web team uses STAT’s API to improve client reporting, how to spot quick wins with dynamic tags, and what new projects they have up their sleeves. Take it away, Laura!

Spotting quick wins

We all remember the traditional CTR chart. It suggests that websites ranking in position one on the SERPs can expect roughly 30 percent of the available clicks, with position two getting around 12 percent, position three seeing six percent, and so on (disclaimer: these may not be the exact numbers, but let’s face it, this formula is way outdated at this point anyway).

Today, the SERP landscape has changed, so we know the chances of any of the above numbers being correct are minimal, especially when you consider the influence of elements like featured snippets on click-through rates.

But the practical reality remains that if you can improve your ranking position, it’s highly likely you’ll get at least some uplift in traffic for that term. This is where STAT’s dynamic tags can really help. Dynamic tags are a special kind of tag that automatically populates with keywords based on changeable filter criteria.

We like to set up dynamic tags based on ranking position. We use this to flag keywords which are sitting just outside of the top three, top five, or top 10 positions. Layer in some form of traffic benchmark, and you can easily uncover keywords with decent traffic potential that just need an extra bit of work to tip them into a better position.
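If you’d rather prototype that logic outside the tool, the same idea is easy to sketch in pandas. The column names, sample keywords, and thresholds below are illustrative assumptions, not STAT’s actual export schema:

```python
import pandas as pd

# Hypothetical ranking export: one row per keyword.
df = pd.DataFrame({
    "keyword": ["blue widgets", "widget reviews", "buy widgets", "widget faq"],
    "rank": [4, 12, 6, 45],
    "search_volume": [5400, 880, 2900, 90],
})

# Mimic a position-based dynamic tag: keywords just outside the top three,
# filtered by a traffic benchmark so only meaningful opportunities surface.
quick_wins = df[df["rank"].between(4, 10) & (df["search_volume"] >= 1000)]
print(quick_wins["keyword"].tolist())  # ['blue widgets', 'buy widgets']
```

Swapping the `between` bounds gives you the top-five or top-ten variants of the same flag.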

Chasing position zero with featured snippets and PAAs

There’s been a lot of chat in our industry about the growing prevalence of SERP features like featured snippets and “People also ask” (PAA) boxes. In fact, STAT has been instrumental in leading much of the research into the influence of these two SERP features on brand visibility and CTRs.

If your strategy includes a hunt for the coveted position zero, you’re in luck. We like to use STAT’s dynamic tagging feature to monitor the keywords that result in featured snippets. This way, we can track keywords where our client owns the snippet and where they don’t. We can also highlight new opportunities to create optimized content and attempt to capture the spot from their competitors.

This also really helps guide our overall content strategy, since STAT is able to provide quick feedback on the type of content (and, therefore, the assumed intent) that will perform best amongst a keyword set.

Making use of data views

Data views are one of the most fundamental elements of STAT. They are tools that allow you to organize your data in ways that are meaningful to you. By holding multiple keyword segments (tags) and producing aggregate metrics, they make it possible for us to dissect keyword information and then implement strategically driven decisions.

For us at Impression, data views are essential. They reflect the tactical aspirations of the client. While you could create a single templated dashboard for all your clients with the same data views, our strategists will often set up data views that mirror the way each client and account works.

Even if we’re not yet actively working on a keyword set, we usually create data views to enable us to quickly spot opportunities and report back on strategic progress.

Here are just some of the data views we’ve grouped our keyword segments into:

The conversion funnel

Segmenting keywords into the stages of the conversion funnel is a fairly common strategy for search marketers: it makes it possible to focus on and prioritize higher-intent queries first, and then extrapolate out.

Many of our data views are set up to monitor keywords tagged as “conversion,” “education,” and “awareness.”

Client goals

Because we believe successful search marketing is only possible when it integrates with wider business goals, we like to spend time getting to know our clients’ audiences, as well as their specific niches and characteristics.

This way, we can split our keywords into those which reflect the segments that our clients wish to target. For example, in some cases, this is based on sectors, such as our telecommunications client who targets audiences in finance, marketing, IT, and general business. In others, it’s based on locations, in which case we’ll leverage STAT’s location capabilities to track the visibility of our clients to different locales.

Services and/or categories

For those clients who sell online — whether it’s products or services — data views are a great way to track their visibility within each service area or product category.

Our own dashboard (for Impression) uses this approach to split out our service-based keywords, so our data view is marked “Services” and the tags we track within are “SEO,” “PPC,” “web,” and so on. For one of our fashion clients, the data view relates to product categories, where the tracked tags include “footwear,” “accessories,” and “dresses.”

At-a-glance health monitoring

A relatively new feature in STAT allows us to see the performance of tags compared to one another: the Tags tab.

Because we use data views and tags a lot, this has been a neat addition for us. The ability to quickly view those tags and see how the keywords within them are progressing is immensely valuable.

Let’s use an example from above. For Impression’s own keyword set, one data view contains tags that represent different service offerings. When we click on that data view and choose “Tags” in the tabbed options, we can see how well each service area is performing in terms of its visibility online.

This means we can get very quick strategic insights: our ranking positions for SEO are consistently pretty awesome, while those around CRO (which we are arguably less well known for) tend to fluctuate more. We can also make a quick comparison between them thanks to the layout of the tab.

Identifying keyword cannibalization risk through duplicate landing pages

While we certainly don’t subscribe to any notion of a content cannibalization penalty per se, we do believe that having multiple landing pages for one keyword or keyword set is problematic.

That’s where STAT can help. We simply filter the keywords table to show a given landing page, and we’re able to track instances where it’s ranking for multiple keywords.

By exporting that information, we can then compare the best and worst ranking URLs. We can also highlight where the ranking URL for a single keyword has changed, signaling internal conflict and, therefore, an opportunity to streamline and improve.
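As a rough sketch of that check, here’s one way to scan an exported keyword/URL table for keywords with more than one ranking URL. The column names and sample rows are assumptions for illustration; adjust them to whatever your export actually contains:

```python
import pandas as pd

# Hypothetical export: one row per (keyword, ranking URL) pair.
df = pd.DataFrame({
    "keyword": ["red shoes", "red shoes", "blue shoes", "red shoes sale"],
    "ranking_url": [
        "https://example.com/red-shoes",
        "https://example.com/shoes-guide",
        "https://example.com/blue-shoes",
        "https://example.com/red-shoes",
    ],
    "rank": [5, 9, 3, 7],
})

# Flag keywords where more than one distinct URL is ranking:
# a possible cannibalization risk worth investigating.
url_counts = df.groupby("keyword")["ranking_url"].nunique()
at_risk = url_counts[url_counts > 1].index.tolist()
print(at_risk)  # ['red shoes']
```

From there you’d eyeball each flagged keyword and decide whether to consolidate, redirect, or differentiate the competing pages.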

Monitoring the competitive landscape

No search strategy is complete without an understanding of the wider search landscape. Specifically, this means keeping track of your and/or your client’s rankings compared to the sites ranking around them.

We like to use STAT’s Competitive Landscape tab to view this information for a specific data view, or across the whole account. In particular, the Share of Voice: Current Leaders board tells us very quickly who we’re up against for a keyword set.

This leads to insights such as the competitiveness of the keyword set, which makes it easier to set client expectations. It also surfaces the relevance of the tracked keywords: if the share of voice is going to brands that aren’t your own, it may indicate that the keywords you’re targeting aren’t that relevant to your audience.

You can also take a look at the Share of Voice: Top 10 Trending to see where competitors are increasing or decreasing their visibility. This can be indicative of changes on the SERPs for that industry, or in the industry as a whole.

Creating a custom connector for GDS

Reporting is a fundamental part of agency life. Our clients appreciate formalized insights into campaign progression (on top of regular communications throughout the month, of course), and one of our main challenges in growing our agency lies in identifying the best way to present reports.

We’ll be honest here: There was a point where we had started to invest in building our own platform, with all sorts of aspirations of bespoke builds and highly branded experiences that could tie into a plethora of other UX considerations for our clients.

But at the same time, we’re also big believers that there’s no point in trying to reinvent the wheel if an appropriate solution already exists. So, we decided to use Google Data Studio (GDS) when it was released in beta, and we moved onto the platform in 2017.

Of course, ranking data — while we’d all like to reserve it for internal insight to drive bigger goals — is always of interest to clients. At the time, the STAT API was publicly available, but there was no way to pull data into GDS.

That’s why we decided to put some of our own time into creating a GDS connector for STAT. Through this connector, we’re able to pull live data into our GDS reports, which can easily be shared with our clients. It was a relatively straightforward process and, because GDS caches the data for a short amount of time, it doesn’t hammer the STAT API for every request.

Though our clients do have access to STAT (made possible through their granular user permissions), the GDS integration is a simpler way for them to see top-level stats at a glance.

We’re in the process of building pipelines through BigQuery to feed into this and facilitate date-specific tracking in GDS too — keep an eye out for more info and get access to the STAT GDS connector here.

Want more?

Ready to learn how to get cracking and tracking some more? Reach out to our rad team and request a demo to get your very own tailored walkthrough of STAT.

If you’re attending MozCon this year, you can see the ins and outs of STAT in person — grab your ticket before they’re all gone!


Link Building in 2019: Get by With a Little Help From Your Friends

Posted by kelseyreaves

Editor’s note: This post first appeared in December of 2015, but because SEO (and Google) changes so quickly, we figured it was time for a refresh!

The link building world is in a constant state of evolution. New tools are continually introduced to the market, with SEOs ready to discover what works best.

In 2015, I wrote an article for Moz about how our team switched over to a new email automation tool that drastically improved our overall outreach system; we increased our email reply rates by 187 percent in just one month, which meant that our number of attainable backlinks also drastically increased.

I wanted to see what’s changed since I last wrote that post, because in 2019, you need a lot more than new tools to excel in link building.

But first…

Looking back, it was pretty ingenious: Our link building program had automated almost every step in the outreach process. We were emailing hundreds of people a week, guest posting on numerous websites, and raking in 20–30 links per week. If you’ve been in the game long enough, you know that’s an insane number of links.

With its success at my first company, I took the concept and applied it to several freelance link building projects I was working on. It proved to work for those sites, too. Later on, I built out a similar system for the second startup I worked for. And again, it proved to be just as successful. On every link building project I took on, my thinking was: How can I scale this thing to get me 10x the number of links? How can I email 5x the number of people? How can I automate this as much as possible so I can create a link building machine that’s completely hands-off?

Well… at least for a period of time.

While I had the best of intentions, this thinking is what ultimately got me in trouble and led to the inevitable: I was hit with a manual action for participating in link schemes.

I remember opening up Search Console and reading that message. At that moment, I felt like a kid caught with their hand in the cookie jar. My stomach was in knots. I had heard of people getting manual actions before, but I didn’t think it was something that would happen to me.

In hindsight, this was probably one of the most important moments of my SEO/growth career. It sobered me up, pushed me into thinking about outreach in a whole different light, and taught me the most important lesson to date: Building links isn’t about using automation to create processes that scale. It’s about building relationships — and value — that scale.

What outreach looked like in 2015

I’m not surprised I got away with what I was doing for so long. From 2015 to 2017, it seemed like everyone and their mom was guest posting. During that time, this is what I noticed:

1. It was a numbers game

Most of the SEOs I talked to from 2015 to 2017 were using a similar strategy. It was all about finding tools that could help scale your guest posting program and contact as many people as possible. Most companies had some arbitrary link quota for their outreach teams to hit every month, mine included.

2. It promoted somewhat decent content that was templatized

In our outreach program, we were pitching the same three to four topics over and over again, and while the content we wrote was always original, there was nothing novel about the articles we were putting out there. They were cute and engaging, but none of it was on the cutting edge or had a solid opinion. It’s what our friend John Collins from Intercom calls Happy Meal content:

“It looks good from a distance, but you’re left feeling hungry not long after you consume it.”

3. It idolized automation and processes

At the time, most outreach programs were about leveraging tools to automate processes and scale every step of the way. We were using several tools to scrape websites, and we hired virtual assistants off of Upwork to find email addresses of just about anyone associated with a company, whether they were actually the ideal person to contact or not.

This process worked in 2015. But in 2019, there’s no way it could.

What outreach looks like in 2019

Since joining the team at OG Marketing last fall, I’ve vastly altered the way I approach outreach and link building. Our strategy now focuses on three main concepts.

1. Helping editors cite good sources

The link building relationships I’ve built this year are almost entirely centered around editors and content managers of notable sites who only want to link to high-quality, relevant content.

And luckily for us, we work with some of the best content creators in the B2B SaaS-verse. We don’t have to go out and beg for links to mediocre (at best) content: We’re building authority to pages that truly deserve it. More importantly, we’re actually fulfilling a need by providing great sources of information for other quality content.

2. Understanding backlinks are only one piece of the puzzle

Link building is only one lever and shouldn’t be your whole SEO strategy. Depending on the site you’re working on, building links may be a good use of your time — or not at all.

In our strategy, we account for the fact that links aren’t always necessary. They will definitely help, but it’s possible to excel without them.

For example, Hotjar recently published an article on 5 ways to use scroll maps. Looking at the backlink profiles for the top three results for “scroll map,” CrazyEgg has more referring domains than Hotjar but sits in position three, while Omniconvert has zero backlinks yet ranks above CrazyEgg in position two. With only three referring domains, Hotjar has earned the first position and a coveted featured snippet.

2015 me would’ve had a knee-jerk reaction to kick off an outreach campaign as soon as we hit publish on the new article. But considering that you may not even need a ton of links to rank well, you can spend your time more efficiently elsewhere.

3. Creating quality content that earns links naturally

There’s definitely a tipping point when it comes to generating backlinks naturally. When your article appears on page one for the query you’re targeting, your chances of having that article cited by other publications, with zero effort on your part, naturally go up.

Why? Because people looking to add credible citations to their article will turn to Google to find that content.

This prompts our team to always ensure that each piece of content we create for our clients satisfies searcher intent. To do this, we start off by researching whether the keyword we want to rank for has purchase, consideration, or informational intent.

For example, the keyword “best video conferencing camera” has consideration-based intent. We can determine this by looking at the SERPs, where Google clearly understands that users are trying to compare different types of cameras.

By seeing this, we know that our best bet for creating content that will rank well is writing a listicle-style post comparing the best video cameras on the market. If we had instead created an informational article targeting the same keyword, about why you should invest in a video conferencing camera, without a list of product comparisons, the article probably wouldn’t perform well in search.

Therefore, if we start off on the right foot by creating the right type of content from the very beginning, we make it easier for ourselves down the road. In other words, we won’t have to build a million links just to get a piece of content to rank that wasn’t the right format to begin with.

What we’ve found with our outreach strategy

Centering our strategy around creating the right content, and then determining whether or not that content needs links, has helped us prioritize which articles actually need to be part of an outreach campaign.

Once this is determined, we call on our friends — our content partners — to help us drive link equity quickly, efficiently, and in a way that enhances the source content and makes sense for end users (readers).

A few months into building out our homie program, there are several things we’ve noticed.

1. Response rates increased

Probably because the outreach isn’t as templatized and, generally, I care more deeply about the email I’m sending and the person I’m reaching out to. On average, I get about a 65–70 percent response rate.

2. Opt-in rates increased

Once I get a response, build the relationship, and then ask if they want to become a content partner (“friend”), we typically see a 75 percent opt-in rate.

3. You get the same amount of links with half the work, in half the time

I’m gonna repeat that: We generate the same, if not more, backlinks month over month with less effort, time, and manpower than with the process I built out in 2015.

And the more partners we add, the more links we acquire, with less effort.

I (somewhat) paid attention during economics class in college, and I remember that a trajectory like that (more output for less input) is a really good thing. So, I think we’re on to something…

How our outreach process works (and how you can create your own)

Our current link building program still leverages some of the tools mentioned in my post from 2015, but we’ve simplified the process. Essentially, it works like this:

1. Identify your friends

Do you have friends or acquaintances who work at sites that touch on topics in your space? Start there!

I got connected to the CEO of Proof, who connected me with their Content Director, Ben. We saw that there was synergy between our content, and that each of us needed sources covering what the other wrote about. He was able to connect me with other writers and content managers in the space, and now we’re all best of friends.

2. Find new friends

Typically we look for similar sites in the B2B SaaS space that are relevant to our client sites and that we want to partner with. Then we use tools like Clearbit, Hunter.io, and Voila Norbert to identify the person we want to reach out to (usually SEO managers, marketing directors, or content managers) and find their email.

This step has been crucial in our process. In the past, we left it to the virtual assistants. But since bringing it in house, we’ve been able to better identify the right person to reach out to, which has increased response rates.

3. Reach out in an authentic way

In our outreach message, we cut to the chase. If you’ve identified the right person in the previous step, they should know exactly what you’re trying to do and why it’s important. If the person you reached out to doesn’t get the big picture and you have to explain yourself, you’re talking to the wrong person. Plain and simple.

Compared to 2015, our lists are much smaller (we’re definitely not using the spray-and-pray method), and we determine on a case-by-case basis what the best outreach channel is, whether that’s email, LinkedIn, or, at times, Instagram.

    4. Share content priorities

    Once someone expresses interest, I’ll find a place on their website using a site search where they can reference one of our client’s content priorities for the month. In return, I’ll ask them what content they’re trying to get more eyes on and see if it aligns with our other client sites or the other partners we work with.

    If I think their content is the perfect source for another article, I’ll cite it. If not, I’ll share it with another partner to see if it could be a good resource for them.

    5. See if they want to be a “friend”

    Once we have that first link nailed down, I’ll explain how we can work together by using each other’s awesome content to enhance new blog articles or article contributions on other sites.

    If they’re down to be content friends, I’ll share their priorities for the month with our other partners who will then share it with their wider network of websites and influencers who are contributing articles to reputable sites and are in need of content resources to cite. From there, the writers can quickly scan a list of URLs and cite articles when it makes sense to help beef up new content or improve existing content with further resources. It’s a win-win.

    If the site is interested in being friends, I’ll send over a spreadsheet where we can track placements and our priorities for the month.

    Here’s the link to a partner template you can download.

    Unlike the guest posting programs I was doing over the last few years, with this process, we’re not leaving a digital footprint for Google to follow.

    In other words, we don’t have author bios mentioning our website plastered all over the internet, essentially saying, “Hey, Google! We guest posted here and inserted these links with rich anchor text to try and help our page rank. Oh, and we did the same thing here, and here, and here.”

    With this process, we’re just offering a list of resources to well-known writers and other websites creating badass content. Ultimately, it’s their choice if they want to link to it or not. I’ll definitely make suggestions but in the end, it’s their call.

    6. Grow the friend list

    Now, if I’m looking to drive link equity to a certain page, I don’t have to build a new list, queue up a campaign, and kick off a whole automation sequence to an ungodly amount of people like I did in the past.

    I just hit up one of our partners on our friend’s list and voila! — quality citation in 0.45 seconds.

    And on a personal note, waking up to emails in my inbox of new citations added with zero effort on my part feels like the Link Gods have blessed me time and time again.

    Results

    With our friend network, the numbers speak for themselves. This last month, we were able to generate 74 links. In 2015, I was hitting similar monthly numbers, but link building was my full-time job.

    Now, link building is something I do on the side (I’d estimate a few hours every week), giving me time to manage my client accounts and focus on everything else I need to do — like drive forward technical SEO improvements, conduct keyword research, optimize older pages, and use SEO as an overall means to drive a company’s entire marketing strategy forward.

    Building out a friend network has also opened up the door to many other opportunities for our clients that I had never dreamed of when I viewed my link building relationships as one and done. With the help of our friends, we’ve had our clients featured on podcasts (shout out to Proof’s Scale or Die podcast!), round-ups, case studies, video content, and many, many more.

    Final thoughts

    As an instant-gratification junkie, it pains me to share the honest truth about building a friend network: it’s going to take time.

    But think of the tradeoffs: everything I mentioned above, plus the fact that, in a way, you’re acting as a matchmaker between high-quality content and sites that are open to referencing it.

    I also believe that this type of outreach campaign makes us better marketers. Spamming people gets old. And if we can work together to find a way to promote each other’s high-quality content, then I’m all for it. Because in the end, it’s about making a better user experience for readers and promoting content that deserves to be promoted.

    How has your link building program evolved over the years? Have you been able to create a network of friends for your space? Leave a comment below!

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    How to Monitor Hreflang Performance With Dynamic Tags in STAT

    Posted by TheMozTeam

    This post was originally published on the STAT blog.


    If you’re familiar with hreflang, you’ll know just how essential this teensy bit of code is to a successful international campaign. You’ll also know that it comes with a boatload of moving parts — multiple pages, sections, and subdomains per country.

    That’s a lot of data to track. And if you aren’t hyper-organized, it’s easy to miss out on some big time insights.

    Lucky for you, there’s a handy way to track your hreflang campaigns: all you need are a few dynamic tags in STAT. And even luckier for you, Dan Nutter, Technical SEO Specialist at twentysix, agreed to share his wisdom on this very subject.

    Below, you’ll learn how to set up your own dynamic tags in STAT, monitor all your pages, and even visualize all that data to impress your team, boss, and clients.

    The origins of hreflang 

    The hreflang attribute, for those unfamiliar, tells Google which language you are using on a specific page. Introduced back in 2011, it essentially allows us to speak to our target audience in different countries in their languages.
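To make the attribute concrete, here’s a minimal sketch (in Python, purely for illustration) that builds the `rel="alternate"` hreflang link elements a page would carry for a few language/region variants. The domain and locale list are placeholders, not from any real site:

```python
# Hypothetical example: build the hreflang annotations for a site
# serving several locales from sub-folders on a gTLD.
locales = ["en-us", "fr-fr", "de-de"]
base = "https://www.sitename.com"  # placeholder domain

# Each page lists every language/region variant, including itself.
tags = [
    f'<link rel="alternate" hreflang="{loc}" href="{base}/{loc}/" />'
    for loc in locales
]
# An x-default fallback covers searchers whose locale matches none of the above.
tags.append(f'<link rel="alternate" hreflang="x-default" href="{base}/" />')

for tag in tags:
    print(tag)
```

The key detail is reciprocity: every variant must reference all the others (and itself), which is part of why deployments at scale get complicated so quickly.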

    Implementing it, though, has been described by Google’s John Mueller as one of the most difficult aspects of SEO:

    While certainly complex, hreflang has been immensely helpful for companies looking to increase their site’s (or, in our case, our clients’ sites’) visibility and grow their audience. This is because when searchers see content in the right language, bounce rates decrease and conversions increase.

    Since hreflang requires such a significant amount of time and effort from both SEO and development teams, clients (rightly) want to see tangible benefits post-deployment.

    Monitoring hreflang (the standard way) 

    To show the benefits of such a massive change to the technical architecture of a site, SEOs can do one of two things — either highlight the increase in the number of hreflang tags or point out the reduction in the number of errors being detected in Google Search Console.

    The problem is that telling a valuable story about complex code, one that will resonate with clients, is no easy feat, particularly when the information is being communicated to the C-suite.

    This is why dynamic tags in STAT are an incredible tool for SEOs and are invaluable to our team.

    Monitoring hreflang with dynamic tags (the easier way) 

    For those of you running international SEO campaigns, I highly recommend using STAT’s dynamic tags to monitor changes of ranking URLs after new hreflang deployments.

    Not only do dynamic tags allow for a fast diagnosis of potential issues with the hreflang mark-up, they also provide a tangible way to tell a compelling, positive story for our team or a client of twentysix.

    In STAT, dynamic tags are automatically populated by a pre-determined criterion — you select them with the filtering options in the Keywords tab at the site level. For instance, you could filter the SERP Features column to see all keywords that generate “People also ask” boxes.

    All your tags are then at the ready in the Tags tab, so you can get quick snapshots of how your data is performing.

    Creating your hreflang tags in STAT 

    To track your new hreflang mark-up with dynamic tags in STAT, your international content must be delivered via either sub-folders or sub-domains on a site using a gTLD (e.g., www.sitename.com/fr-fr/ or fr.sitename.com).

    If your international content is served on ccTLDs (e.g., www.sitename.fr), dynamic tags won’t be able to track any incorrectly ranking URLs, as they will be attributed to a different domain.

    First, you’ll need to separate the sites in your project for all relevant country and language combinations. To enable this, you simply filter ranking URLs for a specific text string. This will generate tags that can track all the ranking keywords for a particular sub-folder — or even a specific URL — and monitor their performance.

    Under the URL column, apply the Wildcard Search and/or Exclusion Search functions. This will allow you to detect any changes in your ranking URLs.

    Applying Wildcard Search and Exclusion Search helps to surface any changes in your URLs.

    The Wildcard Search filter can locate URLs that include the text string for the correct region, thereby tracking the improvement in the number of correctly ranking URLs.

    Sites using sub-folders will require filtering for all URLs that include the country and language combination you want to track, such as “/fr-fr/” when tracking URLs for France in French.

    For sites using sub-domains, you’ll need to filter for the sub-domain and root domain combined, such as “fr.sitename.com.” To track sub-domains, you’ll need to select Ignore “www.” prefix when matching in the site settings.
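STAT applies these filters for you, but the underlying logic is just a substring check on each ranking URL. As a rough illustration (not STAT’s actual code, and with made-up URLs), classifying ranking URLs into “correct” and “incorrect” buckets for a French sub-folder deployment looks like this:

```python
# Rough illustration of the idea behind STAT's Wildcard/Exclusion
# filters (not STAT's implementation). URLs here are invented.
ranking_urls = [
    "https://www.sitename.com/fr-fr/pricing",
    "https://www.sitename.com/en-us/pricing",  # wrong locale ranking in France
    "https://fr.sitename.com/blog/guide",      # sub-domain, lacks the "/fr-fr/" marker
]

def is_correct(url, marker="/fr-fr/"):
    # Wildcard Search keeps URLs containing the marker ("correct");
    # Exclusion Search is simply the complement ("incorrect").
    # A sub-domain site would use a marker like "fr.sitename.com" instead.
    return marker in url

correct = [u for u in ranking_urls if is_correct(u)]
incorrect = [u for u in ranking_urls if not is_correct(u)]
```

Watching the `correct` bucket grow (and the `incorrect` one shrink) over time is exactly the story the dynamic tags let you tell.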

    To track subdomains, you need to select Ignore ‘www.’ prefix when matching in the site settings.

    Once you have filtered the URL column for your chosen country, select Tag All Filtered Keywords and create a dynamic tag called “Correct URL.”

    If you opt to track the decrease in the number of incorrectly ranking URLs, you’ll need to create a dynamic tag using the exact same steps as above, only this time with the Exclusion Search functionality.

    Telling a positive story

    When you track the performance of your ranking URLs, it’s easier to demonstrate the value of the changes being implemented to the technical architecture of the site.

    In addition, when that value is visually represented (like in a graph), it gives clients a clear idea of just how effective a technical change is, and that can be communicated clearly throughout all levels of their business.

    This shows the increase in correctly ranking URLs.

    After your tags have been created, you can monitor the increase in correctly ranking URLs using the Dashboard tab.

    The bonus round 

    An unexpected benefit of tracking the success of a hreflang deployment? It highlights any changes made to the technical setup of a site, which can prevent the hreflang from functioning correctly.

    For instance, during a recent campaign, our team noticed an increase in the number of incorrectly ranking URLs, indicating that a site-level change had negatively impacted the hreflang markup.

    At the time, Google Search Console was experiencing a number of time-lag errors, which meant that if we weren’t keeping a close eye on things, we would have missed the issue entirely. With our dynamic tags set up in STAT, we were able to pick up on these changes before Google Search Console.

    Using STAT’s dynamic tags, Dan was able to catch the error before Google Search Console.

    By leveraging STAT’s dynamic tags, we were able to catch the increase and our team rectified the issue before any long-term damage was done.

    Liked what you read?

    Want to know your best and worst-performing tags? Keen to compare all their metrics side-by-side?

    If you answered yes to both and you’re a STAT client, then check out our Tags tab to see what kinds of insights you can uncover for your international (and national) campaigns.

    Not a STAT client (yet)? Book a demo to get a customized walkthrough. You can also chat with our team at MozCon to see it up close and personal! 


    The Easiest PR-Focused Link Building Tip in the Book – Whiteboard Friday

    Posted by randfish

    Focused on new link acquisition for your clients or company? Link building is always a slog, but Rand has a PR-focused tip that makes it much easier to find people and publications that’ll cover and amplify you. Check it out in this week’s edition of Whiteboard Friday!

    Click on the whiteboard image above to open a high-resolution version in a new tab!

    Video Transcription

    Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we are talking about the easiest link building tip in the book. It is PR-focused, meaning press and public relations focused, and I’ll dive right in.

    If you are trying to get some new links for your new client, for your website, or for your company, start with this process. 

    Step 1: Identify some of your site’s or your business’ unique attributes

    • The type of company that you are. Are you a startup or a scale-up? Are you mid-stage? Are you a small business? Are you a family-owned business?
    • What’s the background of your founders? Do they come from a special place, something that is unique? Almost certainly the answer is yes. But in what kinds of ways?
    • What type of financing do you have? 
    • What is your customer focus, your customer target? 
    • What is your purpose, values, culture? 
    • Geography. Sector or market. 
    • Other attributes, like accessibility. Maybe you do a great job of serving differently-abled folks. Maybe you are a very sustainable business, a super green business. Maybe you have a very high bar of ethics. Or your facilities are absolutely outstanding and super Instagram-worthy.

    Step 2: Find 5–10 others that share these attributes

    Whatever it is, some combination of these, you’re going to take and you’re going to find other people who share those attributes, other businesses, some other businesses that share some of those attributes or some combination of them. For example, I’ve taken a type of business, a startup, and a geography — startups in San Diego. Or a type of financing, angel-financed, but a type of business that is unusually angel-financed, a physical, retail location business. That’s fairly atypical. B corps, a benefit corp that is in the healthcare space. Again, somewhat atypical, somewhat unique. A black-owned business that’s in tech. Tragically, also unusual.

    Step 3: Find publications and people that have covered/amplified others like you

    Now Step 3, I’m going to find publications and people that have covered or amplified other people like you, some combination of other people like you. So we’ll start with my first example here — startups in San Diego. If I am a startup in San Diego, I will plug in several other startups in San Diego.

    So I did a search for “startups in San Diego” in Google. I found Cloudbeds and Hire A Helper, two startups that are in San Diego, and searching for the combination of the two of them surfaced a bunch of coverage opportunities. ProgrammableWeb has a page that lists both of them because they both have APIs. San Diego Startup Week lists both of them because they were both panelists or speakers there. Snip2Code features both in a machine learning directory because they both had some interesting uses for machine learning. The Tampa Bay Times covered both of them because of a data content piece. These are your link opportunities, your press and PR coverage opportunities.

    You can repeat this again and again with combinations like this. The best part is you are using just your brain and Google search. Super, super simple. Of course, you could take this and you could apply this, you could plug in the websites for Cloudbeds and Hire A Helper to Moz’s Link Explorer, and you could get a bunch of other link opportunities. You could plug those two in and you could plug in your own website, and then you could say, “Show me sites that link to these two, but not to me,” through the Link Intersect function.
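The Link Intersect idea Rand describes boils down to simple set arithmetic: take the domains linking to both peers, then subtract the domains already linking to you. Here’s a tiny sketch with invented domain lists (the real tool pulls these from Moz’s link index):

```python
# Sketch of the Link Intersect concept. Domain sets are made up
# for illustration; a real run would come from Moz's Link Explorer.
links_to_cloudbeds = {"programmableweb.com", "sdstartupweek.com", "tampabay.com"}
links_to_hireahelper = {"programmableweb.com", "sdstartupweek.com", "snip2code.com"}
links_to_me = {"sdstartupweek.com"}

# Domains covering both sites like yours, but not you yet:
opportunities = (links_to_cloudbeds & links_to_hireahelper) - links_to_me
print(sorted(opportunities))
```

Each domain left in `opportunities` has already demonstrated it covers businesses like yours, which is what makes it a warm outreach target rather than a cold one.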

    Find new link opportunities!

    So there are ways to advance this with tools, but this is some of the simplest, best ways to launch to get coverage, to get people to know you and like you and start to have heard of your brand, and to get those links that Google is going to need to rank your website higher.

    All right, everyone. Hope you’ve enjoyed this. Look forward to some of your tips and advice around easy, PR-focused link building tips. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

    Video transcription by Speechpad.com


    The New Moz Local Is Here! Can’t-Miss Highlights & How to Get Started

    Posted by MiriamEllis

    Last month we announced that the new Moz Local would be arriving soon. We’re so excited — it’s here! If you’re a current Moz Local customer, you may have already been exploring the new and improved platform this week! If not, signing up now will get you access to all the new goodies we have in store for you.

    

    With any major change to a tool you use, it can take a bit for you to adjust. That’s why I wanted to write up a quick look at some of the highlights of the product, and from there encourage you to dig into our additional resources.

    What are some key features to dig into?

    Full location data management

    More than 90% of purchases happen in physical stores. The first objective of local SEO is ensuring that people searching online for what you offer:

    1. Encounter your business
    2. Access accurate information they can trust about it
    3. See the signals they’re looking for to choose you for a transaction

    Moz Local meets this reality with active and continuous syncing of location data so that you can grow your authority, visibility, and the public trust by managing your standard business information across partnered data aggregators, apps, sites, and databases. This is software centered around real-time location data and profile management, providing updates as quickly as partners can support them. And, with your authorized connection to Google and Facebook, updates you make to your business data on these two powerhouse platforms are immediate. Moz Local helps you master the online consumer encounter.

    And, because business data changes over time, ongoing management of your online assets is essential. 80% of customers lose trust in a brand when its local business listings mislead them with incorrect information like wrong names, phone numbers, or hours of operation. No brand can afford to lose this trust! Moz Local’s data cleansing service delivers ongoing accuracy and proper formatting for successful submission to the platforms that matter most.

    Finally, Moz Local supports the distribution of rich data beyond the basics. Give customers compelling reasons to choose your business over others by uploading photos, videos, descriptions, social links, and more. Full control over these elements can greatly enhance customer encounters and improve conversions.

    Automated duplicate deletion

    Duplicate listings of a business location can turn profile management into a tangle, mislead consumers, dilute ranking strength, and sometimes even violate platform guidelines. But historically, detection and resolution of duplicates has been cumbersome and all but impossible to scale when handled manually.

    One of the most exciting improvements you’ll experience with the new Moz Local is that duplicate workflows are now automated! Our next-level algorithmic technology will identify, confirm and permanently delete your duplicate listings in a fully automated fashion that requires no interaction or involvement on your part. This is a major development that will save local brands and agencies an amazing amount of time.

    Deep Google and Facebook reporting & management

    Logging in and out of multiple dashboards can be such a hassle, but with Moz Local, you’ll have insights about all of your locations and clients in a single space. Moz Local is now hooked up with Facebook management (hooray!) and we’ve deepened our Google My Business integration.

    We’ll capture Facebook insights data for impressions and clicks for your location’s published Facebook content. And you’ll find it convenient that we surface impressions data for both Google Maps and Search. This means you’ll have easy access to click data for the familiar attributes: clicks-for-directions, clicks-to-website, and clicks-to-call, plus tracking of direct, indirect, and branded queries. Whether you’re dealing with just one listing or 100,000 of them, all the data will be at your fingertips.

    One new feature I’m especially keen to share is the alerts you’ll receive every time a new photo is uploaded to your Google listing by a third party. Image spam is real, and awareness of public uploads of imagery that violates guidelines is part and parcel of reputation management.

    Local dashboard

    Our goal is to make your local SEO work as simple as possible, and very often, the at-a-glance summary in the new Moz Local dashboard will tell you all you need to know for routine check-ups. The default view of all the locations you manage can, of course, be easily filtered and segmented to look at specific clients or locations. Almost effortlessly, you’ll get a very quick overview of data like:

      • Average Profile Completeness
      • Locations requiring attention
      • Total listings in sync (sync is the new term for what we previously referred to as “published”)
      • Listings being updated
      • Listings requiring sync
      • Duplicate Reporting
      • Facebook Insights data
      • Google My Business Insights data

    Profile suggestion engine

    Who has time for guesswork when you’re trying to make the most of your online assets? Our powerful new profile suggestion engine tells you exactly what data you need to provide to reach maximum profile completeness.

    Quickly drill down to a specific location. From there, Moz Local surfaces multiple fields (like long description, photos, opening hours, fax numbers, etc.) along with suggestions based on other verifiable online sources to improve consistency across the data publisher and partner network. Again, this is a big time-saver, especially if your agency has multiple clients or your enterprise has multiple locations to manage.

    Email alerts, notifications, activity feed

    Choose how you’d like to stay up-to-date on the status of your listings.

    • Every Moz Local dashboard contains an activity feed that continuously streams the latest information, updates, and alerts for all of your listings
    • Opt-in for email alerts if that’s your preferred method of notification. Digest emails are configurable to be sent on a weekly, monthly, or quarterly basis
    • Optional upgrade for email alerts for new reviews. If you upgrade, you’ll receive these notifications daily, ensuring you aren’t missing complaints, praise, and conversion opportunities

    Review management

    Google has revealed that about one-third of people looking for local business information are actually trying to find local business reviews. From the viewpoint of consumers, your online reviews are your brand’s reputation. Our own large-scale marketing survey found that 90% of respondents agree that reviews impact local rankings, but that 60% of participants lack a comprehensive review management strategy. The result is that platforms like Google have become mediums of unheard customer voices, neglected leads, and reputation damage.

    The good news is that Moz Local customers have the option to upgrade their subscriptions to turn this unsustainable scenario completely around. Be alerted to incoming reviews on multiple platforms and respond to them quickly. See right away if a problem is emerging at one of your locations, necessitating in-store intervention, or if you’ve been hit with a review spam attack. And go far beyond this with insight into other types of customer sentiment, like photo uploads and Google Q&A.

    The truth is that in 2019, and for the foreseeable future, no business in a competitive market can afford to neglect public sentiment management, because it has become central to customer service. Every brand is in the business of customer service, but awareness, responsiveness, accountability, and action require strategy and the right tools. Let Moz Local help you take control of your priceless reputation.

    Social posting

    Manage the interactive aspects of your local business profiles with this optional upgrade. Share news, special offers, and questions & answers with customers on social platforms and in directories. This includes:

    • Engaging with customers on social media. News posts can be shared on Facebook and eligible directories, offers can be posted in eligible directories, and questions & answers can be posted to your Google Business Profile.
    • Publishing Posts instantly or scheduling them for a future date. And here’s something you’ll be excited to hear: you can submit the same post for multiple locations at once, create and save templates for posts, and edit/delete posts from the publishing dashboard!

    In competitive local markets, transitioning from passive observation of online assets to interactive engagement with the public can set your brand apart.

    What should my next steps in the new Moz Local be?

    1. Ensure that your location data and your profile are complete and accurate within the new Moz Local. Be sure to add in as much data as you can in the Basic Data, Rich Data, and Photos & Videos sections to reach high profile completeness. Doing so will ensure that your locations’ listings throughout the local search ecosystem are as informative as possible for potential customers. Moz Local acts as a “source of truth” for your location data and overwrites data on third party platforms like Google and Facebook, so be sure the data you’ve provided us is accurate before moving on to step two.
    2. Gain immediate insights into your local search presence by connecting your Google My Business and Facebook profiles. Once connected, these will begin to pull in tons of data, from impressions, to clicks, to queries.
    3. Once your profile is complete and Google My Business and Facebook profiles are connected, it’s time to sync your data to ensure that what you’ve provided to Moz Local is shared out to our network. Simply click the Sync button in the top right to push your information to our partners.

    Where can I find more information?

    I’m glad you’ve asked! Our resource center will be a great place to start. There, a user guide and video tutorial can show you the ropes, and you can also get registered for our upcoming webinar on June 25th at 10:00am PST:

    Save my spot

    The Help Hub has also been given a complete refresh with the new Moz Local. There you will find ample resources, FAQs, and descriptions of each area of the tool to dig into.

    For any questions that you can’t find answers to, you can always reach out to our wonderful Help Team.

    What’s next from Moz?

    Expect a number of exciting new updates to continue rolling out — both in the new Moz Local tool as well as in other areas of our platform. As I mentioned before, it’s our serious plan to devote everything we’ve got into putting the power of local SEO into your hands. Keep an eye out for more to come from Moz to support your local search marketing.


    The Ultimate Guide to Exploring Seattle This MozCon

    Posted by Kirsten_Barkved

    So, you’ve been debating for years about whether to attend MozCon and you’re finally ready to pull the trigger. Or, maybe you’re still not sure if MozCon is right for you and you’re wondering what the big deal is (a fair and reasonable thought).

    Whether you’re still on the fence or looking to get hyped, here’s the spiel for why you should attend this year’s MozCon. And if, after seeing our awesome agenda, you need more than our stellar line-up and amazing donuts to convince you, then look no further than this post.

    We’re less than four weeks away from MozCon, so we thought we’d dust off the old “things to do while in Seattle” list. So, if you’re attending or still doing research to see if the juice is worth the squeeze (how responsible of you!), here’s a sampling of the places you’ll go whilst in Seattle for MozCon this July 15–17. 

    Get your tickets before they’re gone!

    We asked our Mozzers where to go

    Not only do our Mozzers have their fingers on the pulse of the city itself, but they’ve also got a few MozCons under their belt, so they know exactly what you need after a day’s worth of information-absorbing and networking.

    The Underground Tour — “It’s strange and very Seattle specific.” — Rob Lisy

    Fremont Brewery — “Great beer and outdoor seating with a view of Lake Union and the city.” — Kelley Manuel

    Cinerama — “Movie theatre with the best chocolate popcorn in the world.” — Tyler Taggart

    Canon — “I have to advocate for Canon. Best chicharron I’ve ever had and incredible cocktails, obviously.” —Kavi Kardos

    Pacific Inn — “Best fish and chips.” —David Joslin

    Rachel’s Ginger Beer — “I like to get something from anywhere and then eat it here — hint: they will put booze in your ginger beer if you ask nicely. And pay more.” — David Pierce

    Michou — “A good choice for a quick grab-and-go sandwich.” — David Pierce

    Museum of Flight — “They have the Apollo 11 spacecraft on display. First time the National Air and Space Museum has shown it outside of DC!” — Chris Lowe

    Alki Beach — “Water taxi to West Seattle to walk along the beach and soak up some sun!” — Katarina Anderson

    Intrigued? We’re just easing you in.

    Iconic stops

    We’d be remiss if we didn’t include a few “of course” stops in our post — there’s a reason these make it to every “30 things to do in Seattle” blog post. Cross a few of these iconic Seattle stops off your bucket list this July.

    The Space Needle 

    Picturesque views of Puget Sound and a rotating glass floor make this spot a must for the ‘gram.

    Seattle Great Wheel

    Want to see Seattle from 157 ft above? Unless you’re afraid of heights, of course, you do! Tip: Stop by at sunset to see the sun dip behind the Olympic mountain range.

    Gas Works Park

    Beautiful, expansive views of downtown Seattle. Unwind after a day of being constantly “on” and enjoy the sun and the Pacific Northwest air. 

    Insider Mozzer tip from David Pierce: “Get a sandwich from Paseo on Fremont and then go down the hill to eat it at Gasworks Park.”

    Fremont Troll

    For obvious reasons.

    Fun fact: The film crew behind the show, Once Upon a Time, filmed the Fremont Troll scenes right outside our Vancouver office. It was fun to watch them turn an underpass into the troll. But the magic quickly waned — ask our YVR Mozzers how much fun it was to not be able to park (or walk, or talk) outside the office during filming for a week or two.

    Weird stops

    Sometimes, you have to go off the beaten path to really get an idea of the soul of a city. And Seattle certainly has some soul. Here’s just a sprinkling of some of the weird things you can do in Seattle.

    Hat n’ Boots

    It’s exactly how it sounds. Originally a gas station, this 1954 must-see “soul of Georgetown” has been billed as the largest hat and boots in North America, and we truly don’t know how you could live with yourself if you made it to 80 without ever seeing the largest hat and boots in North America. 

    Official Bad Art Museum 

One man’s trash is another man’s treasure at the “OBAMA.” Enjoy a cup of coffee or a pint as you peruse the uniquely curated selection of bad art at Cafe Racer.

    Twin Peaks Cafe 

If you 1) have a car, or know someone who would carpool, and 2) more importantly, are an uber-fan of Twin Peaks, the greatest show to ever live, then it is definitely worth the 40-minute drive up to Snoqualmie Falls to visit the actual town and cafe (Twede’s Cafe) where the series was filmed. Bring us back a piece of cherry pie, please.

    Go and see this house that looks like it’s from the movie Up

Every few years, rumors swirl that the house that Edith Macefield refused to sell to developers is finally being sold. But while the outside may have changed, this little holdout home isn’t going anywhere anytime soon and is symbolic of a changing Seattle. You can find Edith’s home here — it’s hard to miss. Bonus points if you bring a balloon and know a dog named Doug.

    Meowtropolitan Cat Café 

Okay, this one isn’t really all that weird — it’s plain freakin’ adorable! This cat café focuses on placing rescue cats and kittens into loving homes, but if you aren’t able to house a kitty or two, that’s perfectly fine! Cats need to be socialized and told they are very pretty and have nice whiskers. If you go, take a pic or it didn’t happen. Just think of the conversation starters you’d have at our Birds of a Feather tables if you went. Tuesdays and Thursdays are for Cat Yoga. Just saying.

    Outdoor stops

We know that people move to Seattle for all the tech jobs. But a close second? The great outdoors. Seattle has SO much to do in its own backyard — hikes, bike paths, beaches, lakes. And enjoying nature is always free. So stretch your legs and get out to any one of these stellar spots our locals haunt.

    Kerry Park

If you’re a camera buff, this is a must-see, especially at sunset. You get a full view of the city, the water, the Space Needle, all with the glorious backdrop of Mount Rainier. Be prepared for a crowd, though — this spot gets pretty popular. Insider tip from Mozzer Marcin Narozny: “Take postcard photos from Queen Anne.”

    Golden Gardens Park

People don’t really equate sandy beaches with Seattle, but we have them in spades! Golden Gardens is a popular destination for strolls along the seawall. There’s also a designated dog park if you’re in the mood for dog spotting (which, like, is our favorite game).

    Waterfall Garden Park

    Want something a little more urban that doesn’t require a ton of travel? This hidden retreat is one of Seattle’s best-kept secrets in the heart of Pioneer Square. You can find it behind Occidental Square Park on 2nd Ave. Plus? It marks the birthplace of UPS!

    Myrtle Edwards Park 

    Birkenstocks are optional. Dog pats are non-negotiable. 

    Booze-y stops

    We’re barely scratching the surface here with the best bars and pubs of Seattle, but for the sake of time, we had to keep it short and sweet. If there’s something you didn’t see on our list and feel strongly that it should have made it, don’t be afraid to @ us in the comments.

    Rock Box

For obvious reasons, this karaoke bar is top of the list for post-MozCon feels — it’s the perfect afterparty to let all that pent-up conference energy out. Bring your best rendition of “Total Eclipse of the Heart” for some all-night, much-needed crooning.

    Bathtub Gin Co.

    Don’t go if you don’t like gin. We can’t be more transparent than that.

    Needle & Thread

    In the mood for something a little more low-key? Scope out this speakeasy, hidden above Tavern Law. There’s no official drink menu, but they take their cocktails seriously — just tell the barkeep your poison of choice and they’ll concoct something just for you.

    Shultzy’s 

    We do love our beer in the Pacific Northwest, and this little German bar is home to some of Germany’s best brews. Plus: sausages.  

    Unicorn & Narwhal 

    Whimsical food and drink options galore, complete with an arcade, claw machine, and photo booth. Go on Sunday for their Mimosas Cabaret!

    Coffee stops

The best coffee in Seattle isn’t in a Starbucks cup. It’s also not Seattle’s Best (is anyone shocked?). Because we take our coffee as seriously as we do our SEO, we updated this list and curated the five best coffee places in Seattle.

    Bedlam 

    For a taste of old Seattle, go to Bedlam. It has that pre-boom feel of old Belltown. Plus, real good espresso, comfy seating, toast and pie, and private meeting rooms to go and ponder over all the SEO magic you absorbed.

    Victrola Coffee 

    There’s a reason locals haunt this cafe. Besides having one of the best pour-over cuppas in town, this cafe is also one of the quieter spaces, with ample seating and plenty of outdoor space should you want to bask in the sun. Bonus: There’s a roastery on site, so if it ain’t too busy, ask for a tour!

    Espresso Vivace 

    If you’re looking for the best coffee in the city, look no further. Their scientific attention to detail and flavor is legendary, so much so that they’ll even offer you advice on how best to actually drink your coffee in order to achieve the fullest experience.

    Sound & Fog

    We’re cheating a little with this one because it’s not just a cafe — it’s also a wine bar, offering beer on tap and rotating coffee roasters.

    Tougo Coffee Co. 

    We can’t not have Tougo on the list. As one of Seattle’s oldest coffee shops, it also has some of the most down-to-earth, passionate baristas who are happy to answer all your brewing and roasting questions.

    Hanging out in Seattle longer than just for MozCon?

    If you’re looking for more things to do and you’re staying in our neck of the woods for longer than three days, we have tons more you can busy yourself with! 

    Soccer fan? See the Sounders FC vs. Portland Timbers

    The Pacific Northwest’s biggest rivalry is on Sunday, July 21st at 6:30 p.m. Make sure to join our MozCon Facebook Group and make plans to see the game with other MozCon attendees.

More of a baseball fan? Stop by to catch a Mariners game.

    In town until the 21st? You better be now: July 21st is Bark at the Park. Tickets also include a postgame walk around the bases, so bring your goodest boy or girl. 

    In the mood for a festival?

    The Capitol Hill Block Party is where it’s at. Local music, great food, art (both good and bad), people watching. 

    Interested in exploring some of Seattle’s neighborhoods and cultural celebrations?

    Not convinced yet? Take a peek at why conferences like MozCon belong on your resume and how you can convince your boss to send you there.

    Grab your ticket!

    Obviously, this is just a small sampling of what Seattle has to offer. If you’re a returning visitor, we’d love to know what you got up to during your post-MozCon hours — any suggestions to new Seattle-goers?

    Don’t forget to buy your ticket to MozCon! We’re 80 percent sold out and you don’t want to miss this one.

    Grab my MozCon ticket now!

    Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

    Did Google’s Site Diversity Update Live Up to its Promise?

    Posted by Dr-Pete

    On June 6th, on the heels of a core update, Google announced that a site diversity update was also rolling out. This update offered a unique opportunity, because site diversity is something we can directly measure. Did Google deliver on their promise, or was this announcement mostly a PR play?

    There are a lot of ways to measure site diversity, and we’re going to dive pretty deep into the data. If you can’t wait, here’s the short answer — while Google technically improved site diversity, the update was narrowly targeted and we had to dig to find evidence of improvement.

    How did average diversity improve?

Using the 10,000-keyword MozCast set, we looked at the average diversity across page-one SERPs. Put simply, we measured how many unique sub-domains were represented within each results page. Since page one of Google can have fewer than ten organic results, this was expressed as a percentage — specifically, the ratio of unique sub-domains to total organic results on the page. Here’s the 30-day graph (May 19 – June 17):

A site diversity of 90 percent on a 10-result SERP would mean that nine out of ten sub-domains were unique, with one repeat. It’s hard to see, but between June 6th and 7th, average diversity did improve marginally, from 90.23 percent to 90.72 percent (an improvement of 0.49 percentage points). If we zoom in quite a bit (10x) on the Y-axis, we can see the trend over time:
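The metric itself is simple to compute. Here’s a minimal Python sketch — the URLs and the `site_diversity` helper are illustrative, not MozCast’s actual code:

```python
from urllib.parse import urlparse

def site_diversity(result_urls):
    """Ratio of unique sub-domains to total organic results on one SERP."""
    hosts = [urlparse(u).hostname for u in result_urls]
    return len(set(hosts)) / len(hosts)

# A hypothetical 10-result SERP where one sub-domain appears twice:
serp = [
    "https://www.a.com/1", "https://www.b.com/1", "https://www.c.com/1",
    "https://www.d.com/1", "https://www.e.com/1", "https://www.f.com/1",
    "https://www.g.com/1", "https://www.h.com/1", "https://www.i.com/1",
    "https://www.a.com/2",  # repeat of www.a.com
]
print(site_diversity(serp))  # 0.9 -- the "90 percent diversity" case above
```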

    Zooming in to just a 10 percent range (85–95 percent diversity), you can see that most of the change happened in a single day, and the improvement has remained in place for the week since the update. Even zooming in, though, the improvement hardly seems impressive.

    Was the improvement more isolated?

    Being as fair to Google as possible, we need to consider one of their follow-up statements:

What if Google improved the worst-case scenarios, but the change wasn’t obvious when we averaged across all SERPs? We can isolate situations with more than two listings from the same site by tracking the share of SERPs with a site diversity score of 80 percent or better (at least eight out of ten sub-domains unique). Here’s the 30-day graph for just those cases:

On June 6th, 84.58 percent of SERPs in our data set had a diversity of 80 percent or better. On June 7th, that increased to 86.68 percent — an improvement of 2.1 percentage points. Let’s dig even deeper to see what’s happening for individual counts.

    How did the impact break down?

    A single data point doesn’t tell us much about what’s happening within each of the buckets. For this analysis, I’m going to use the exact duplicate count, since percentages can get a bit confusing once we have to put them in bins. Another complication is that, on occasion, two sites have more than one organic result — this brings down the overall diversity of the SERP, but doesn’t necessarily mean that one site is dominating.

    So, what if we look just at the count of the dominant site (the site with the most duplicates) across the 10,000 SERPs? We’ll compare June 6th (blue) to June 7th (purple):

    For slightly over half of SERPs in our data set, there were no duplicates (every site had one listing), and this number didn’t change much after the update. The number of sites with two listings (i.e. one duplicate) increased pretty noticeably after the update (up by 346 SERPs). This was offset almost entirely by a decrease in SERPs with three to five listings (down by 345 SERPs across the three bins).
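To make the binning concrete, here’s one way to count the dominant site per SERP and build the histogram, sketched in Python with hypothetical helper names (not MozCast’s pipeline):

```python
from collections import Counter
from urllib.parse import urlparse

def dominant_site_count(result_urls):
    """Listings held by the single most-repeated sub-domain (1 = no duplicates)."""
    counts = Counter(urlparse(u).hostname for u in result_urls)
    return counts.most_common(1)[0][1]

def bucket_serps(serps):
    """Histogram of dominant-site counts across many SERPs."""
    return Counter(dominant_site_count(s) for s in serps)

serp_a = ["https://x.com/1", "https://y.com/1", "https://z.com/1"]
serp_b = ["https://x.com/1", "https://x.com/2", "https://y.com/1"]
print(bucket_serps([serp_a, serp_b]))  # one SERP with no duplicates, one with a 2-count
```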

    The numbers get too small to see at 5K scale after the four-count SERPs, so I’ll restrict the Y-axis:

    SERPs with dominant sites owning six to ten organic listings accounted for only 117 of 10,000 SERPs (just over 1 percent) on June 6th. After the update, this actually went up a tiny bit, to 119 SERPs.

    We still see SERPs where one site dominates, and that story didn’t change much after the update. That said, these six to ten-count SERPs are fairly rare. Looking at the keywords, we also see that many of them have brand or navigational intent. Here are a few keywords where we still see a ten-count:

    • “kohl’s hours”
    • “macy’s hotel collection”
    • “lowes outlet”
    • “dillard’s sales”
    • “edd unemployment”

Many dominant-intent searches show sitelinks in the #1 position (which allow up to six additional links from one site). It’s hard to say why Google isn’t using sitelinks in these extreme cases. These may be situations where the intent isn’t quite as clear, but we can only speculate based on a handful of examples. Keep in mind, too, that Google determines intent algorithmically, so it can shift over time.

    This isn’t an easy problem. Site diversity isn’t a lever you can pull in isolation, especially when it’s left to the algorithm. Reducing repetition too much could harm quality, in some cases (especially SERPs with brand intent). Similarly, many algorithm updates unrelated to diversity seem to have unintended consequences for site diversity.

    So, what’s the final verdict?

When evaluating site diversity, we have to be careful not to rely too much on anecdotes. Anecdotally, there are definitely SERPs where a single domain seems to have too much power. For example, here’s the main results column on a search for “pure green coffee extract” (I’ve removed a local pack for the purposes of this post):

    The shopping results at the top suggest commercial intent, but the organic results are a mix of informational and commercial results. Amazon has a block of five product results, and this is not a situation where the query suggests brand or navigational intent (I haven’t indicated any specific interest in Amazon). 

    It’s easy to cherry-pick, and we can certainly say that Google hasn’t solved the problem, but what are the broader results telling us? It’s fair to say that there was some amount of improvement, and the improvement tracked with Google’s public statements. SERPs with three to five results (two to four duplicates) from the dominant site decreased a bit — in most cases, these SERPs still had two results from the dominant site (one duplicate).

    Even isolating the change, though, it was fairly small, and there was no improvement for SERPs where six to ten results came from the dominant domain. This may be because many of those queries had strong brand or navigational intent, and the six to ten count SERPs were rare both before and after the update.

While the improvements were real and Google’s statements were technically accurate, the impact of the site diversity update doesn’t feel on par with the pre-announcement and the PR it received. Regarding the state of site diversity in SERPs, Google has made minor improvements but still has work to do.


    5 Ways You Might Mess up When Running SEO Split Tests

    Posted by sam.nemzer

    SEO split testing is a relatively new concept, but it’s becoming an essential tool for any SEO who wants to call themselves data-driven. People have been familiar with A/B testing in the context of Conversion Rate Optimisation (CRO) for a long time, and applying those concepts to SEO is a logical next step if you want to be confident that what you’re spending your time on is actually going to lead to more traffic.

    At Distilled, we’ve been in the fortunate position of working with our own SEO A/B testing tool, which we’ve been using to test SEO recommendations for the last three years. Throughout this time, we’ve been able to hone our technique in terms of how best to set up and measure SEO split tests.

    In this post, I’ll outline five mistakes that we’ve fallen victim to over the course of three years of running SEO split tests, and that we commonly see others making.

    What is SEO Split testing?

    Before diving into how it’s done wrong (and right), it’s worth stopping for a minute to explain what SEO split testing actually is.

    CRO testing is the obvious point of comparison. In a CRO test, you’re generally comparing a control and variant version of a page (or group of pages) to see which performs better in terms of conversion. You do this by assigning your users into different buckets, and showing each bucket a different version of the website.

    In SEO split testing, we’re trying to ascertain which version of a page will perform better in terms of organic search traffic. If we were to take a CRO-like approach of bucketing users, we would not be able to test the effect, as there’s only one version of Googlebot, which would only ever see one version of the page.

    To get around this, SEO split tests bucket pages instead. We take a section of a website in which all of the pages follow a similar template (for example the product pages on an eCommerce website), and make a change to half the pages in that section (for all users). That way we can measure the traffic impact of the change across the variant pages, compared to a forecast based on the performance of the control pages.

    For more details, you can read my colleague Craig Bradford’s post here.

    Common SEO Split Testing Mistakes

    1. Not leaving split tests running for long enough

As SEOs, we know that it can take a while for the changes we make to take effect in the rankings. When we run an SEO split test, this is borne out in the data. As you can see in the graph below, it takes a week or two for the variant pages (in black) to start outstripping the forecast based on the control pages (in blue).

    A typical SEO split test — it often takes a couple of weeks for the uplift to show.

It’s tempting to panic after a week or so, assume the test isn’t making a difference, and call it off as a neutral result. However, we’ve seen over and over again that things often change after a week or two, so don’t call it too soon!

    The other factor to bear in mind here is that the longer you leave it after this initial flat period, the more likely it is that your results will be significant, so you’ll have more certainty in the result you find.

A note for anyone reading with a CRO background — I imagine you’re shouting at your screen that it’s not OK to leave a test running longer to try to reach significance, and that you must pre-determine your end date for the results to be valid. You’d be correct for a CRO test measured using standard statistical models. In the case of SEO split tests, we measure significance using Bayesian statistical methods, which means it’s valid to keep a test running until it reaches significance, and you can be confident in your results at that point.
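For intuition only, here’s a toy Python sketch of the Bayesian framing, using a flat prior and a normal approximation on daily (variant minus forecast) differences. Distilled’s actual methodology is more sophisticated, and the helper and numbers below are invented for illustration:

```python
import math

def prob_positive_uplift(daily_diffs):
    """Approximate posterior probability that the true mean uplift exceeds zero
    (flat prior, normal approximation)."""
    n = len(daily_diffs)
    xbar = sum(daily_diffs) / n
    s = math.sqrt(sum((d - xbar) ** 2 for d in daily_diffs) / (n - 1))
    z = xbar / (s / math.sqrt(n))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Made-up daily differences (variant sessions minus forecast):
diffs = [120, -40, 210, 90, 160, 30, 180, 75, 140, 60]
print(prob_positive_uplift(diffs))  # close to 1.0 for this data
```

Under this framing, you stop the test once the posterior probability crosses your chosen threshold (say, 95 percent), which is why running until significance is valid here.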

    2. Testing groups of pages that don’t have enough traffic (or are dominated by a small number of pages)

The sites we’ve been able to run split tests on using Distilled ODN have ranged enormously in traffic levels, as have the site sections on which we’ve attempted to run split tests. Over the course of our experience with SEO split testing, we’ve developed a rule of thumb: if a site section of similar pages doesn’t receive at least 1,000 organic sessions per day in total, it’s going to be very hard to measure any uplift from your split test. With less traffic than that to the pages you’re testing, any signal of a positive or negative result would be swamped by the level of uncertainty involved.

    Beyond 1,000 sessions per day, in general, the more traffic you have, the smaller the uplift you can detect. So far, the smallest effect size we’ve managed to measure with statistical confidence is a few percent.

On top of having a good amount of traffic in your site section, you need to make sure that the traffic is well distributed across a large number of pages. If more than 50 percent of the site section’s organic traffic goes to three or four pages, your test is vulnerable to fluctuations in those pages’ performance that have nothing to do with the test. This may lead you to conclude that the change you are testing is having an effect when the result is actually being swayed by an irrelevant factor. By having the traffic well distributed across the site section, you ensure that these page-specific fluctuations even themselves out, and you can be more confident that any effect you measure is genuine.
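A quick way to sanity-check a candidate site section is to compute how concentrated its traffic is. The page counts below are invented for illustration:

```python
def traffic_concentration(sessions_per_page, top_n=4):
    """Share of a section's organic sessions held by its top_n pages."""
    ranked = sorted(sessions_per_page, reverse=True)
    return sum(ranked[:top_n]) / sum(ranked)

# Daily organic sessions for ten pages in a hypothetical site section:
pages = [900, 850, 400, 300, 50, 40, 30, 20, 10, 10]
share = traffic_concentration(pages)
print(f"top 4 pages hold {share:.0%} of traffic")  # 94% -- too concentrated to test safely
```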

    3. Bucketing pages arbitrarily

In CRO tests, the best practice is to assign every user randomly to either the control or the variant group. This works to ensure that both groups are essentially identical, because of the large number of users that tends to be involved.

    In an SEO split test, we need to apply more nuance to this approach. For site sections with a very large number of pages, where the traffic is well distributed across them, the purely random approach may well lead to a fair bucketing, but most websites have some pages that get more traffic, and some that get less. As well as that, some pages may have different trends and spikes in traffic, especially if they serve a particular seasonal purpose.

    In order to ensure that the control and variant groups of pages are statistically similar, we create them in such a way that they have:

    • Similar total traffic levels
    • Similar distributions of traffic between pages within them
    • Similar trends in traffic over time
    • Similarity in a range of other statistical measures
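One simple way to approximate the first criterion is a greedy split: assign the heaviest pages first, always to the lighter bucket. This sketch only balances total traffic; a real bucketing (ODN’s included) would also match distributions and trends:

```python
def bucket_pages(page_traffic):
    """Split pages into two buckets (control, variant) with similar total traffic.

    page_traffic: dict mapping page URL -> total organic sessions.
    """
    control, variant = [], []
    control_total = variant_total = 0
    # Heaviest pages first, so a single big page can't unbalance the split
    for url, sessions in sorted(page_traffic.items(), key=lambda kv: -kv[1]):
        if control_total <= variant_total:
            control.append(url)
            control_total += sessions
        else:
            variant.append(url)
            variant_total += sessions
    return control, variant

pages = {"/p1": 500, "/p2": 450, "/p3": 100, "/p4": 90, "/p5": 60}
control, variant = bucket_pages(pages)
print(control, variant)  # totals come out at 590 vs. 610 sessions
```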

    4. Running SEO split tests using JavaScript

    For a lot of websites, it’s very hard to make changes, and harder still to split test them. A workaround that a lot of sites use (and that I have recommended in the past), is to deploy changes using a JavaScript-based tool such as Google Tag Manager.

    Aside from the fact that we’ve seen pages that rely on JavaScript perform worse overall, another issue with this is that Google doesn’t consistently pick up changes that are implemented through JavaScript. There are two primary reasons for this:

    • The process of crawling, indexing, and rendering pages is a multi-phase process — once Googlebot has discovered a page, it first indexes the content within the raw HTML, then there is often a delay before any content or changes that rely on JavaScript are considered.
    • Even when Googlebot has rendered the JavaScript version of the page, it has a cut-off of five seconds after which it will stop processing any JavaScript. A lot of JavaScript changes to web pages, especially those that rely on third-party tools and plugins, take longer than five seconds, which means that Google has stopped paying attention before the changes have had a chance to take effect.

    This can lead to inconsistency within tests. For example, if you are changing the format of your title tags using a JavaScript plugin, it may be that only a small number of your variant pages have that change picked up by Google. This means that whatever change you think you’re testing doesn’t have a chance of demonstrating a significant effect.

    5. Doing pre/post tests instead of A/B tests

    When people talk colloquially about SEO testing, often what they mean is making a change to an individual page (or across an entire site) and seeing whether their traffic or rankings improve. This is not a split test. If you’re just making a change and seeing what happens, your analysis is vulnerable to any external factors, including:

    • Seasonal variations
    • Algorithm updates
    • Competitor activity
    • Your site gaining or losing backlinks
    • Any other changes you make to your site during this time

    The only way to really know if a change has an effect is to run a proper split test — this is the reason we created the ODN in the first place. In order to account for the above external factors, it’s essential to use a control group of pages from which you can model the expected performance of the pages you’re changing, and know for sure that your change is what’s having an effect.

    And now, over to you! I’d love to hear what you think — what experiences have you had with split testing? And what have you learned? Tell me in the comments below! 


    SEO Whitepaper: How Distance and Intent Shape a Local Pack

    Posted by TheMozTeam

    In August 2017, a Think With Google piece stated that local searches without “near me” had grown by 150 percent and that searchers were beginning to drop other geo-modifiers — like zip codes and neighborhoods — from their local queries as well.

    Since we can’t always rely on searchers to state when their intent is local, we should be looking at keywords where that intent is implied. But, before we start optimizing, we need to know whether Google is any good at interpreting implicit local intent and if it’s treated the same as explicit intent.

    Consider these queries: [sushi near me] would indicate that close proximity is essential; [sushi in Vancouver] seems to cast a city-wide net; while [sushi] is largely ambiguous — are they hungry for general info or actual sushi? And what happens with [best sushi], where quality could take priority over proximity? Google decides what these queries mean, so it’s important for us to understand those decisions.

    In this whitepaper, we put local packs under the microscope to determine:

    • How Google interprets different kinds of local intent.
    • How geo-location and geo-modification influence local packs and organic results.
    • How distance, Google ratings, and organic rank shape a local pack.
    • How Google handles competing needs.

    Plus, we’ll make the case for tracking local and show you how to set up your own local tracking strategy.

    Download the whitepaper
