Good Place Word Clouds

Everything's fine

I am a huge fan of The Good Place, so I created these Good Place word clouds, one for each core member of “team cockroach”. Zoom in to find phrases or words you recognise from the show.

I’ll follow up with more detail about how these were created, but in short: I took scripts from the show, found the points where each character was involved, and grabbed the words around those points. Then I used Andreas Mueller’s awesome word cloud library to generate the word clouds. I did tweak the weightings a bit to get the interesting phrases to show up (thanks very much to @nocontextgoodplace on Twitter for inspiration).
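
If you want to try something similar yourself, here’s a minimal sketch using the wordcloud package. The phrase weights here are made up, and the character-by-character script filtering described above is left out:

```python
# Minimal sketch, not the exact script used for these images:
# generate a word cloud from hand-weighted phrase frequencies.
from wordcloud import WordCloud

# Hypothetical weights - in practice these came from the show's scripts
frequencies = {
    "everything is fine": 120,
    "ya basic": 90,
    "holy forking shirtballs": 75,
    "take it sleazy": 40,
}

wc = WordCloud(width=1200, height=800, background_color="white")
wc.generate_from_frequencies(frequencies)
wc.to_file("eleanor_wordcloud.png")
```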

Continue reading “Good Place Word Clouds”

How to fix broken or redirecting links

As I said in my post about why we should fix broken or redirecting links: even though broken links and redirects aren’t ideal, we can’t hope to get rid of every single one. As with anything in business, we need to prioritise what will have the biggest impact. We need to find the worst offenders.

By the time we’ve finished this post, we will have found just three changes lego.com could make to their site which could:

  • Make sure that Google sees their UK product pages
  • Fix over 9,000 internal redirecting links.
Continue reading “How to fix broken or redirecting links”

Why Melt (unpivot) is the most powerful function in Pandas

Pandas is a Python library that lets us do Excel-type-stuff. Well, that’s not really giving it the credit it deserves. Pandas is a Python library which makes Excel-type stuff waaaaaaaaaaaaaaaaaay easier.

You might have seen me speak about how Jupyter Notebooks can make our lives easier as marketers (if not – you’ve clearly been missing out on Distilled Searchlove and you should absolutely buy a ticket). A lot of the examples I use are to do with how using Pandas is much much easier than trying to do the same stuff in Excel.

One function I haven’t been able to talk about on-stage is melt. As I said in the title, melt is kind of like unpivot and it is one of the best functions in Pandas because it lets us easily do things that wouldn’t just be harder in Excel – they would be pretty much impossible for anyone who isn’t a pretty advanced Excel user.
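
To give a flavour of what that looks like (this is a made-up example, not one from the talk), melt takes a “wide” table with one column per month and turns it into a “long” one with a row per observation:

```python
import pandas as pd

# Wide data: one row per keyword, one column per month (made-up numbers)
wide = pd.DataFrame({
    "keyword": ["red shoes", "blue shoes"],
    "2019-01": [4, 9],
    "2019-02": [3, 7],
    "2019-03": [2, 5],
})

# melt ("unpivot") stacks the month columns into rows:
# one (keyword, month, rank) row per observation
long = wide.melt(id_vars="keyword", var_name="month", value_name="rank")
print(long)
```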

Continue reading “Why Melt (unpivot) is the most powerful function in Pandas”

Why should I fix my site links?

Photo by Zdeněk Macháček on Unsplash

If you feel like you already have a good understanding of why you should fix broken or redirecting links on your site and just want to get fixing, go to my post here, which shares a free Google Colab notebook that will identify and prioritise problems for you and produce easy, dev-readable Excel files.

Otherwise – strap in. Let’s talk about why having redirecting or broken links on your site is a problem and why you should fix it.

Some terminology that will come in useful later

What are templated links?

In short – lots of links across lots of pages, to the same place. Think about your navigation menu or footer. Templated links are often present on pretty much every page; they always have the same content and always link to the same places. Templated links are very useful when you need a page to be accessible from anywhere on your site, but it’s also easy to overlook mistakes that can cause you issues.

What are broken links?

A broken link is any link which points to a page which has been deleted and not redirected. That often means a 404 page, named after the status code 404 meaning “not found”.

What are redirect chains?

One redirect pointing to another redirect, and so on. So instead of:

page-a ==> page-b

the redirects go like this:

page-a ==> page-b ==> page-c ==> page-d

Now instead of asking for just one page, we’re having to go through three hops to get to the active page. Redirect chains make the usual redirect problems even worse.
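
If you want to see how many hops a URL goes through, the requests library records each redirect it followed along the way. A quick sketch (the URL is a placeholder):

```python
import requests

# Follow redirects and list every hop along the way
response = requests.get("https://www.example.com/page-a", allow_redirects=True)

for hop in response.history:               # one entry per redirect followed
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(response.status_code, response.url)  # the page we finally landed on
```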

What are redirect loops?

This is like a redirect chain but worse. Instead of going:

page-a ==> page-b ==> page-c ==> page-d

It’d be something like:

page-a ==> page-b ==> page-c ==> page-a ==> page-b ==> page-c ==> page-a

And so on until whatever is trying to access the page just gives up. These make redirect problems even worse than redirect chains do.

What are 302 redirects?

The standard redirect involves your website responding with status code 301 which means – “this page has been permanently moved”. An alternative is status code 302 which means “this page has been temporarily moved”.

So essentially, a 302 redirect is just a redirect where you send a different, weaker message at the same time.

Don’t be fooled by the terminology – if you are redirecting a page and don’t have imminent plans to change it back (like, within the week), 301 is the way to go. If you use a 302 redirect, things like Google aren’t as sure what’s going on. They’re thinking “Sure, you tell me the content is in this new place, but you don’t sound very certain of it, so I’m going to keep an eye on the old page, I’ll probably let it compete with the new page in search results and I definitely won’t treat this as you transferring all of the authority from page-a to page-b.”
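
If you want to check which kind of redirect a URL is returning, you can ask for it without following the redirect and look at the status code yourself (again, the URL is a placeholder):

```python
import requests

# Ask for the old URL without following the redirect,
# so we can see exactly what status code it returns
r = requests.get("https://www.example.com/old-page", allow_redirects=False)
print(r.status_code, r.headers.get("Location"))
# 301 = permanent (usually what you want), 302 = temporary
```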

Why fix internal redirects

Imagine we’re moving our whole blog. So we’re redirecting mysite.com/blog/* to blog.mysite.com/*.

When we redirect a page, we aren’t actually moving it. All we’re really doing is deleting the old page and saying to everything which tries to visit it (person or machine), “don’t look here, look in this other place instead”. We don’t really notice it as people, but the machines are doing something like this:

Request: page-a

Response: 301 this page has moved permanently to page-b

Request: page-b

Response: 200 here’s page-b

The first problem – authority

We often talk about search engines like Google trying to understand the internet in terms of authority – why should this site appear for a search rather than that site, even when both are talking about the same topic?

One early way Google used to judge this is links. Well respected, high-value sites tend to get more links than less respected, low-value sites. If you treat every link on the internet as a kind of a vote of confidence for the page it’s linking to, you start to get an idea of what people think is worthy of attention.

Not only do these votes of confidence help a page rank, they also mean that when that page links out to another page, its vote of confidence bears more weight.

It kind of makes sense right? If we trust a page, we trust what it says more too. A page can’t pass on all of the clout that other pages have given it, but most of that authority gets split between all of the pages it links out to.

Pages on your site will have links going to them; even if they aren’t links from other sites, you will have internal links. That means your pages have some votes of confidence that they can use.

That means that this authority kind of flows around your site. Pages like your homepage pass authority to the higher level pages on your site, then it trickles down to the lower pages, but they link up to other pages so the authority can flow back up to the top.

The problem is, if you redirect a page, all of those votes of confidence aren’t for the new, active page – they are for the old page which doesn’t exist any more. So how does Google interpret this? Let’s use the example above.

When we redirect www.mysite.com/blog/post-1 to blog.mysite.com/post-1 we are essentially replacing all of the content of /blog/post-1 with one giant vote for the blog.mysite page.

As we said, a page can’t give away all of its authority (that’s not how a vote of confidence works), so while we preserve a good amount of the authority that page has built up, it’s still not everything. Some of that authority is still tied up in the old page that’s not doing any good any more.

So, with each unnecessary redirect, we are losing those hard-earned votes of confidence which could help this page rank. What’s more, our new page has less authority to help our other pages rank. It basically throws away some of the votes of confidence we could use across our site – we’re hurting this page specifically and our whole site in general.

How do templated links make this worse?

Imagine we have a site with 500 pages (which would be smaller than most) and each of those pages has a footer link to a redirected page. That means that 500 times across our site we’re giving a vote of confidence to a page that doesn’t exist – every page is losing some of the authority it’s trying to pass on and the whole site is losing 500x the votes it would be if we were just talking about one link.

How do redirect chains make this worse?

We’ve already said that we lose a bit of authority with one redirect; if we redirect again we lose a little bit more, and a little bit more again on the hop after that. So we’re losing even more authority unnecessarily.

The second problem – time

The second and more intuitive problem is that redirects take a little bit more time and resources. Instead of just asking for one thing, computers (or Google) have to ask for it, wait to be told it’s the wrong thing, then ask for another thing and wait to be told that’s the right thing.

That probably seems relatively insignificant but these things stack up quickly. Google is trying to see and understand the whole internet. That’s billions and billions of pages, which means they have to be careful with where they spend their time. If, every time Google tries to access a page on your site, it has to go through multiple steps – that’s all taking away resources Google could be using on pages you actually care about.

What’s more, when users are trying to use your site, everything is going to seem slower because their computer is having to go through these additional hops, which means users are less likely to do what you want. (If you want to know why having a slow site is bad, I touch upon that in this Distilled post.)

How do templated links make this worse?

As you’d expect, it means that more often users and robots are having to deal with these hops.

How do redirect chains make this worse?

With each redirect hop it’s taking more and more time and resources to get to the page a person or machine actually wants to access.

So do I have to get rid of all redirects?!

The key thing to remember here is redirects are a necessary and expected part of the internet. It’s just not practical to get external websites to update their links whenever we change a page so we need a way to make sure users get to the page we want them to. What’s more, Google remembers old pages it has seen so if we don’t redirect those pages it’ll just keep going back to them.

Why fix broken links?

As we said above, links on our sites are a way for our pages to give votes of confidence to each other. However, a 404 page doesn’t exist at all (404 means “not found”), so if we give a vote of confidence to something that doesn’t exist, that vote is pretty much wasted.

Again, kind of makes sense right? If we say to Google – “Hey, this thing is great!” and the thing doesn’t exist any more we’ve just wasted our vote.

Similar to redirects – because of the way all of our pages are giving votes of confidence to all of our other pages – every time we link to a 404 page we’re throwing away votes that could be used across the site. We are limiting the strength of all of our pages by a little bit.

Having lots of links to 404 pages is also a Bad Sign for Google. If a site often links to 404 pages it’s more likely to be a site in disrepair and less likely to be a good user experience. Google doesn’t want to send users to a bad site so we’re less likely to appear in search results.

How do templated broken links make this worse?

More lost votes

As we said, every link to a 404 page is us throwing away a vote. A templated link is often a link that is present on every page of our site. Imagine all of the pages on our site have about 20 links on them. If two of our templated links go to a broken 404 page, that means that we’re throwing away 10% of our possible voting power and we’re reducing our site strength by quite a lot.

Waylaying enthusiastic users

Even if we take a cue from our favourite dictator and ignore all of those lost votes, even if we say we don’t care about Google’s evaluation of our site, this kind of problem could still cause havoc. Say we want a user to buy our product but they want to find out a bit more about it first. If links in our navigation, say, are going to broken pages, the user doesn’t get the information they want, they don’t trust the product, and they don’t buy.

So do I have to fix all 404 pages on my site?

I mean, that would be nice, but I am not saying that having any 404 will be the death of your business. If you were running a physical shop and one of your shelves was broken, that’s not going to kill the store, right? If, on the other hand, you were running a shop and half of your shelves were broken, that’s a problem you’ve got to fix pretty quickly.

Don’t believe people who email you saying you have to fix every single broken link on your site or everything will go up in flames, or who tell you that any links on your site which go to broken pages on other sites could make Google penalise you. These people are trying to sell you something.

As ever – all of this comes down to prioritising what is having the biggest impact. You just need to find the patterns of worst offenders.

What should I do next?

As I said above – we can’t hope to get rid of all redirects or broken links, the trick is to find the worst offenders.

Check out this post I wrote sharing a free notebook which will help you find redirect chains and templated broken links, and prioritise your fixes for you so you can work with your devs to fix the problem.

How to Do Change Detection with Screaming Frog and Google Sheets

I made a Google Sheet that does change detection for you based on two Screaming Frog crawls. I’ll tell you why that’s important. 

Two problems frequently come up for SEOs, regardless of whether we’re in-house or external.

  1. Knowing when someone else has made key changes to the site
  2. Keeping a record of specific changes we made to the site, and when.

Both can sound trivial, but unnoticed changes to a site can undo months of hard work and, particularly with large e-commerce sites, it’s often necessary to update internal links, on-page text, and external plugins in search of the best possible performance. That doesn’t just go for SEO; it applies just as much to CRO and Dev teams.

Keeping a record of even just our changes can be really time-consuming but without it, we often have to rely on just remembering what we did when, particularly when we see a pattern of changing traffic or rankings and want to know what might have caused it. 

These things are people problems. When we can’t rely on other teams to work with us on their planned changes, that needs to be fixed at a team level. When we don’t have a system for listing the changes we make, it’s understandable, particularly for smaller keyword or linking tweaks, but if we compare ourselves to a Dev team, for example, a record of changes is exactly the kind of thing we’d expect them to just include in their process. At the end of the day, when we don’t keep track of what we’re doing, it’s because we either don’t have the time or don’t have the commitment to a process.

We shouldn’t really be trying to fix people problems with tools. That said, people problems are hard. Sometimes you just need a way of staying on top of things while you fight all the good fights. That’s exactly what this is for. 

This is a way to highlight the changes other teams have made to key pages, so you can quickly take action if needed, and to keep track of what you’ve done in case you need to undo it.

As a completely separate use-case, you can also use this sheet to check for differences between different versions of your site. Say, for the sake of argument, that you need to know the difference between the mobile and desktop versions of your site, or your site with and without JavaScript rendering, or even the differences between your live site and a private developer version you’re about to release. There are tools that offer change detection and cover some of the functions of this sheet, but I really like the flexibility this offers to check for changes between versions as well as over time.

What sites is this good for?

This sheet is for anyone who needs an idea of what is changing on a fairly large number of pages but can’t afford to pay for big, expensive change detection systems. It’ll work its way through around 1,000 key pages. 

That said, 1,000 key pages stretches further than you would think. For many small sites, that’ll more than cover all the pages you care about, and even larger e-commerce sites get the vast majority of their ranking potential through a smaller number of category pages. You would be surprised how big a site can get before more than 1,000 category pages are needed.

That 1,000 URL limit is a guideline; this sheet can probably stretch a bit further than that, it’s just going to start taking quite a while to process all of the formulas.

So what changes does it detect?

This Google Sheet looks at your “new crawl” and “old crawl” data and gives you tabs for each of the following (if you’d rather script it, there’s a rough pandas sketch of the first few checks after the list):

  • Newly found pages – any URL in the new crawl that isn’t in the old crawl
  • Newly lost pages – any URL in the old crawl that isn’t in the new crawl
  • Indexation changes – i.e. Any URL which is now canonicalised or was noindexed
  • Status code changes – i.e. Any URL which was redirected but is now code 200
  • URL-level Title Tag or Meta Description changes
  • URL-level H1 or H2 changes
  • Any keywords that are newly added or missing sitewide.
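
For anyone who’d rather do the comparison in Python, here’s a rough pandas sketch of the first few checks. The file names are placeholders and the column names (“Address”, “Status Code”, “Title 1”) follow a Screaming Frog internal_all export, so they may differ by version:

```python
import pandas as pd

# Placeholders: exports of the internal_all report from each crawl
old = pd.read_csv("old_crawl.csv")
new = pd.read_csv("new_crawl.csv")

merged = old.merge(new, on="Address", how="outer",
                   suffixes=(" (old)", " (new)"), indicator=True)

newly_found = merged.loc[merged["_merge"] == "right_only", "Address"]
newly_lost = merged.loc[merged["_merge"] == "left_only", "Address"]

both = merged[merged["_merge"] == "both"]
status_changes = both[both["Status Code (old)"] != both["Status Code (new)"]]
title_changes = both[both["Title 1 (old)"] != both["Title 1 (new)"]]

print(len(newly_found), "new URLs,", len(newly_lost), "lost URLs")
```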

What’s that about keyword change detection?

On many sites, we’re targeting keywords in multiple places at a time. Often we would like to have a clear idea of exactly what we’re targeting where but that’s not always possible.

The thing is, as we said, your pages keep changing – you keep changing them. When we update titles, meta descriptions and H1s we’re not checking every page on the site to confirm our keyword coverage. It’s quite easy to miss that we’re removing some middlingly important keyword from the site completely.

Thanks to a custom function, the Google Sheet splits apart all of your title tags, meta descriptions, and H1s and H2s into their component words and finds any that, as of the last crawl, have either been newly added, or removed from the site completely.

It then looks the freshly removed words up against Search Console data to find all the searches you were getting clicks from before, to give you an idea of what you might be missing out on now.

The fact that it’s checking across all your pages means you don’t end up with a bunch of stopwords in the list (stopwords being: it, and, but, then, etc.) and you don’t have to worry about branded terms being pulled through either – it’s very unlikely that you’ll completely remove your brand name from all of your title tags and meta descriptions by accident, and if you do, that’s probably something you’d want to know about.
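
If you’re curious what that check looks like outside of a spreadsheet, here’s a rough Python equivalent. The file names are placeholders and the column names are assumptions based on a Screaming Frog export:

```python
import re
import pandas as pd

def sitewide_words(df, columns=("Title 1", "Meta Description 1", "H1-1", "H2-1")):
    """Split the given on-page fields into lowercase words, across the whole crawl."""
    words = set()
    for col in columns:
        if col in df.columns:
            for text in df[col].dropna():
                words.update(re.findall(r"[a-z0-9']+", str(text).lower()))
    return words

old_words = sitewide_words(pd.read_csv("old_crawl.csv"))  # placeholder file names
new_words = sitewide_words(pd.read_csv("new_crawl.csv"))

removed_completely = sorted(old_words - new_words)  # candidates for lost rankings
newly_added = sorted(new_words - old_words)
print(removed_completely, newly_added)
```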

How do I use it?

Start by accessing a copy of this Google Sheet so you can edit it. There are step-by-step instructions in the first tab but broadly all you need to do is:

  1. Run a Screaming Frog crawl of all the pages you want to detect changes on
  2. Wait a bit (like a couple of weeks) or crawl the mobile, JavaScript, or dev version right away for comparison
  3. Run another SF crawl of the pages you want to detect changes on
  4. Export the internal_all report for both crawls and paste them into the “old crawl” and “new crawl” tabs respectively
  5. Wait a bit (like 30 minutes)
  6. Check the results tabs for changes
  7. (Optional) Import Search Console data to give “value lost” information for keywords you removed.

How to Check Your Site Speed: 5 Things You Need to Know About the Google User Experience Report

This is a copy of a post at distilled.net and is canonicalised there.

You’ve done your keyword research, your site architecture is clear and easy to navigate, and you’re giving users really obvious signals about how and why they should convert. But for some reason, conversion rates are the lowest they’ve ever been, and your rankings in Google are getting worse and worse.

You have two things in the back of your mind. First, recently a customer told your support team that the site was very slow to load. Second, Google has said that it is using site speed as part of how rankings are calculated.

It’s a common issue, and one of the biggest problems with site speed is that it’s so hard to prove it’s making the difference. We often have little-to-no power to impact site speed (apart from sacrificing those juicy tracking snippets and all that content we fought so hard to add in the first place). Even worse – some fundamental speed improvements can be a huge undertaking, regardless of the size of your dev team, so you need a really strong case to get changes made.

Sure, Google has the site speed impact calculator which gives an estimate of how much revenue you could be losing by loading more slowly, and if that gives you enough to make your case – great! Crack on. Chances are, though, that isn’t enough. A person could raise all kinds of objections, for instance:

  1. That’s not real-world data
    1. That tool is trying to access the site from one place in the world, our users live elsewhere so it will load faster for them
    2. We have no idea how the tool is trying to load our site, our users are using browsers to access our content, they will see different behaviour
  2. That tool doesn’t know our industry
  3. The site seems pretty fast to me
  4. The ranking/conversion/money problems started over the last few months – there’s no evidence that site speed got worse over that time.

Tools like webpagetest.org are fantastic but are usually constrained to accessing your site from a handful of locations

Pretty much any site speed checker will run into some combination of the above objections. Say we use webpagetest.org (which wouldn’t be a bad choice): when we give it a URL, an automated system accesses our site, tests how long it takes to load, and reports back to us on that. As I say, not a bad choice, but it’s very hard to test accessing our site from everywhere our users are, using the browsers they are using, getting historic data that was recorded even when everything was hunky-dory and site speed was far from our minds, and getting comparable data for our competitors.

Or is it?

Enter the Chrome User Experience (CRUX) report

In October 2017 Google released the Chrome User Experience report. The clue is in the name – this is anonymised, domain-by-domain, country-by-country site speed data they have been recording through real-life Google Chrome users since October 2017. The data only includes records from Chrome users who have opted into syncing browser history and have usage statistic reporting enabled; however, many will have this on by default (see Google’s post). So this resource offers you real-world data on how fast your site is.

That brings us to the first thing you should know about the CRUX report.

1. What site speed data does the Chrome User Experience report contain?

In the simplest terms, the CRUX report gives recordings of how long it took your webpages to load. But loading isn’t on-off. Even if you’re not familiar with web development, you will have noticed that when you ask for a web page, it thinks a bit, some of the content appears, maybe the page shuffles around a bit, and eventually everything falls into place.

Example of a graph showing performance for a site across different metrics. Read on to understand the data and why it’s presented this way.

There are loads of reasons that different parts of that process could be slower, which means that getting recordings for different page load milestones can help us work out what needs work.

Google’s Chrome User Experience report gives readings for a few important stages of webpage load. They have given definitions here but I’ve also written some out below:

  • First Input Delay
    • This one is more experimental; it’s the length of time between a user clicking a button and the site registering the click
    • If this is slow the user might think the screen is frozen
  • First Paint
    • The first time anything is loaded on the page; if this is slow the user will be left looking at a blank screen
  • First Contentful Paint
    • Similar to first paint, this is the first time any user-visible content is loaded onto the screen (i.e. text or images).
    • As with First Paint, if this is slow the user will be waiting, looking at a blank screen
  • DOM Content Loaded
    • This is when all the HTML has been loaded. According to Google, it doesn’t include CSS and all images, but by and large, once you reach this point the page should be usable, so it’s quite an important milestone.
    • If this is slow the user will probably be waiting for content to appear on the page, piece by piece.
  • Onload
    • This is the last milestone and potentially a bit misleading. A page hits Onload when all the initial content has finished loading, which could lead you to believe users will be waiting for Onload. However, many web pages can be quite operational, as the Emperor would say, before Onload. Users might not even notice that the page hasn’t reached Onload.
    • To what extent Onload is a factor in Google ranking calculations is another question but in terms of User Experience I would prioritise the milestones before this.

All of that data is broken down by:

  • Domain (called ‘origin’)
  • Country
  • Device – desktop, tablet, mobile (called ‘client’)
  • Connection speed

So for example, you could see data for just visitors to your site, from Korea, on desktop, with a slow connection speed.

2. How can I access the Chrome User Experience report?

There are two main ways you can access Google’s Chrome user site speed data. The way I strongly recommend is getting it out using BigQuery, either by yourself or with the help of a responsible adult.

DO USE BIGQUERY

If you don’t know what BigQuery is, it’s a way of storing and accessing huge sets of data. You will need to use SQL to get the data out but that doesn’t mean you need to be able to write SQL. This tutorial from Paul Calvano is phenomenal and comes with a bunch of copy-paste code you can use to get some results. When you’re using BigQuery, you’ll ask for certain data, for instance, “give me how fast my domain and these two competitors reach First Contentful Paint”. Then you should be able to save that straight to Google Sheets or a csv file to play around with (also well demonstrated by Paul).
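
As a rough illustration of the kind of query involved (the table name follows the public chrome-ux-report dataset; swap in the month and country you care about, and your own origin), you could pull the First Contentful Paint distribution for one site like this:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes a GCP project with billing and credentials set up

sql = """
SELECT
  bin.start AS bin_start,
  SUM(bin.density) AS density
FROM
  `chrome-ux-report.country_gb.201907`,
  UNNEST(first_contentful_paint.histogram.bin) AS bin
WHERE
  origin = 'https://www.example.com'
GROUP BY
  bin_start
ORDER BY
  bin_start
"""

df = client.query(sql).to_dataframe()  # ready to chart or save to CSV / Google Sheets
```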

DO NOT USE THE PREBUILT DATA STUDIO DASHBOARD

The other, easier option, which I actually recommend against, is the CRUX Data Studio dashboard. On the surface, this is a fantastic way to get site speed data over time. Unfortunately, there are a couple of key gotchas for this dashboard which we need to watch out for. As you can see in the screenshot below, the dashboard will give you a readout of how often your site was Fast, Average, or Slow to reach each loading point. That is actually a pretty effective way to display the data over time for a quick benchmark of performance. One thing to watch out for with Fast, Average, and Slow is that the description of the thresholds for each isn’t quite right.

If you compare the percentages of Fast, Average, and Slow in that report with the data direct from BigQuery, they don’t line up. It’s an understandable documentation slip but please don’t use those numbers without checking them. I’ve chatted with the team and submitted a bug report on the GitHub repo for this tool. I’ve also listed the true definitions below, in case you want to use Google’s report despite the compromises, or use the Fast, Average, Slow categorisations in the reports you create (as I say, it’s a good way to present the data). The link to generate one of these reports is g.co/chromeuxdash.

Another issue is that it uses the “all” dataset – meaning data from every country in the world. That means data from US users is going to be influenced by data from Australian users. It’s an understandable choice given the fact that this report is free, easily generated, and probably took a bunch of time to put together, but it’s taking us further away from that real-world data we were looking for. We can be certain that internet speeds in different countries vary quite a lot (for instance, South Korea is well known for having very fast internet speeds), but also that expectations of performance could vary by country as well. You don’t care if your site speed looks better than your competitor’s because you’re combining countries in a convenient way; you care if your site is fast enough to make you money. By accessing the report through BigQuery we can select data from just the country we’re interested in and get a more accurate view.

The final big problem with the Data Studio dashboard is that it lumps desktop results in with mobile and tablet. That means that, even looking at one site over time, it could look like your site speed has taken a major hit one month just because you happened to have more users on a slower connection that month. It doesn’t matter whether desktop users tend to load your pages faster than mobile, or vice versa – if your site speed dashboard can make it look like your site speed is drastically better or worse because you’ve started a Facebook advertising campaign, that’s not a useful dashboard.

The problems get even worse if you’re trying to compare two domains using this dashboard – one might naturally have more mobile traffic than the other, for example. It’s not a direct comparison and could actually be quite misleading. I’ve included a solution to this in the section below, but it will only work if you’re accessing the data with BigQuery.

Wondering why the Data Studio dashboard reports % of Fast, Average, and Slow, rather than just how long it takes your site to reach a certain load point? Read the next section!

3. Why doesn’t the CRUX report give me one number for load times?

This is important – your website does not have one amount of time that it takes to load a page. I’m not talking about the difference between First Paint or Dom Content Loaded, those numbers will of course be different. I’m talking about the differences within each metric every single time someone accesses a page.

It could take 3 seconds for someone in Tallahassee to reach DOM Content Loaded and 2 seconds for someone in London. Then another person in London loads the page on a different connection type and DOM Content Loaded takes 1.5 seconds. Then another person in London loads the page when the server is under more stress and it takes 4 seconds. The amount of time it takes to load a page looks less like this:

Median result from webpagetest.org

And more like this:

Distribution of load times for different page load milestones

That chart is showing a distribution of load times. Looking at that graph, you could say that 95% of the time the site is reaching DOM Content Loaded in under 8 seconds. On the other hand, you could look at the peak and say it most commonly loads in around 1.7 seconds, or you could, for example, see a strange peak at around 5 seconds and realise that something is intermittently going wrong which means the site sometimes takes much longer to load.

So you see, saying “our site loads in X seconds; it used to load in Y seconds” can be useful when you’re trying to deliver a clear number to someone who doesn’t have time to understand the finer points, but it’s important for you to understand that performance isn’t constant and your site is being judged by what it tends to do, not what it does under sterile testing conditions.

4. What limitations are there in the Chrome User Experience report?

This data is fantastic (in case you hadn’t picked up before, I’m all for it) but there are certain limitations you need to bear in mind.

No raw numbers

The Chrome User Experience report will give us data on any domain contained in the data set. You don’t have to prove you own the site to look it up. That is fantastic data, but it’s also quite understandable that they can’t get away with giving actual numbers. If they did, it would take approximately 2 seconds for an SEO to sum all the numbers together and start getting monthly traffic estimates for all of their competitors.

As a result, all of the data comes as a percentage of the total for the month, expressed as decimals. A good sense check when you’re working with this data is that all of your categories should add up to 1 (or 100%) unless you’re deliberately ignoring some of the data and know the caveats.
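
In code, that sense check is one line. Assuming df is a distribution of bin densities like the one pulled from BigQuery earlier in this post:

```python
# The densities for a complete distribution should sum to roughly 1 (i.e. 100%)
total = df["density"].sum()
assert abs(total - 1.0) < 0.01, f"only {total:.0%} of the data accounted for - check your filters"
```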

Domain-level data only

The data available from BigQuery is domain-level only; we can’t break it down page by page, which does mean we can’t find the individual pages that load particularly slowly. Once you have confirmed you might have a problem, you could use a tool like Sitebulb to test page load times en masse to get an idea of which pages on your site are the worst culprits.

No data at all when there isn’t much data

There will be some sites which don’t appear in some of the territory data sets, or at all. That’s because Google hasn’t added their data to the dataset, potentially because they don’t get enough traffic.

Losing data for the worst load times

This data set is unlikely to be effective at telling you about very very long load times. If you send a tool like webpagetest.org to a page on your site, it’ll sit and wait until that page has totally finished loading, then it’ll tell you what happened.

When a user accesses a page on your site there are all kinds of reasons they might not let it load fully. They might see the button they want to click early on and click it before too much has happened; if it’s taking a very long time they might give up altogether.

This means that the CRUX data is a bit unbalanced – the further we look along the “load time” axis, the less likely it is it’ll include representative data. Fortunately, it’s quite unlikely your site will be returning mostly fast load times and then a bunch of very slow load times. If performance is bad the whole distribution will likely shift towards the bad end of the scale.

The team at Google have confirmed that if a user doesn’t meet a milestone at all (for instance Onload) the recording for that milestone will be thrown out but they won’t throw out the readings for every milestone in that load. So, for example, if the user clicks away before Onload, Onload won’t be recorded at all, but if they have reached Dom Content Loaded, that will be recorded.

Combining stats for different devices

As I mentioned above – one problem with the CRUX report is that all of the reported data is given as a percentage of all requests.

So for instance, it might report that 10% of requests reached First Paint in 0.1 seconds. The problem with that is that response times are likely different for desktop and mobile – different connection speeds, processor power, probably even different content on the page. But desktop and mobile are lumped together for each domain and in each month, which means that a difference in the proportion of mobile users between domains or between months can mean that site speed could even look better, when it’s actually worse, or vice versa.

This is a problem when we’re accessing the data through BigQuery, as much as it is if we use the auto-generated Data Studio report, but there’s a solution if we’re working with the BigQuery data. This can be a bit of a noodle-boiler so let’s look at a table.

Device    Response time (seconds)    % of total
Phone     0.1                        10
Desktop   0.1                        20
Phone     0.2                        50
Desktop   0.2                        20

In the data above, 10% of total responses were for mobile, and returned a response in 0.1 seconds. 20% of responses were on desktop and returned a response in 0.1 seconds.

If we summed that all together, we would say 30% of the time, our site gave a response in 0.1 seconds. But that’s thrown off by the fact that we’re combining desktop and mobile which will perform differently. Say we decide we are only going to look at desktop responses. If we just remove the mobile data (below), we see that, on desktop, we’re equally likely to give a response at 0.1 and at 0.2 seconds. So actually, for desktop users we have a 50/50 chance. Quite different to the 30% we got when combining the two.

Device    Response time (seconds)    % of total
Desktop   0.1                        20
Desktop   0.2                        20


Fortunately, this sense-check also provides our solution: we need to calculate each of these percentages as a proportion of the overall volume for that device. While it’s fiddly and a bit mind-bending, it’s quite achievable. Here are the steps:

  1. Get all the data for the domain, for the month, including all devices.
  2. Sum together the total % of responses for each device; if you’re doing this in Excel or Google Sheets, a pivot table will do this for you just fine.
  3. For each row of your original data, divide the % of total by the total amount for that device, e.g. below

Percent by device

Device    % of total
Desktop   40
Phone     60

Original data with adjusted volume

Device    Response time (seconds)    % of total    Device % (from table above)    Adjusted % of total
Phone     0.1                        10            60                             10% / 60% = 16.7%
Desktop   0.1                        20            40                             20% / 40% = 50%
Phone     0.2                        50            60                             50% / 60% = 83.3%
Desktop   0.2                        20            40                             20% / 40% = 50%
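
If you’ve pulled the data into pandas, a groupby does steps 2 and 3 in a couple of lines. This uses the example rows from the tables above:

```python
import pandas as pd

df = pd.DataFrame({
    "device": ["Phone", "Desktop", "Phone", "Desktop"],
    "response_time": [0.1, 0.1, 0.2, 0.2],
    "pct_of_total": [10, 20, 50, 20],
})

# Step 2: total % of responses per device (Phone = 60, Desktop = 40)
device_totals = df.groupby("device")["pct_of_total"].transform("sum")

# Step 3: re-express each row as a proportion of its own device's traffic
df["adjusted_pct"] = df["pct_of_total"] / device_totals * 100
print(df)
```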

5. How should I present Chrome User Experience site speed data?

Because none of the milestones in the Chrome User Experience report have one number as an answer, it can be a challenge to visualise more than a small cross section of the data. Here are some visualisation types that I’ve found useful.

% of responses within “Fast”, “Average”, and “Slow” thresholds

As I mention above, the CRUX team have hit on a good way of displaying performance for these milestones over time. The automatic Data Studio dashboard shows the proportion of each metric over time, which gives you a way to see whether a slowdown is a result of being Average or Slow more often, for example. Trying to visualise more than one of the milestones on one graph becomes a bit messy, so I’ve found myself splitting out Fast and Average so I can chart multiple milestones on one graph.

In the graph above, it looks like there isn’t a line for First Paint but that’s because the data is almost identical for that and First Contentful Paint

I’ve also used the Fast, Average, and Slow buckets to compare a few different sites during the same time period, to get a competitive overview.

Comparing competitors “Fast” responses by metric

An alternative which Paul Calvano demonstrates so well is histograms. This helps you see how distributions break down. The Fast, Average, and Slow bandings can hide some sins, in that movement within those bands will still impact user experience. Histograms can also give you an idea of where you might be falling down in comparison to others, or your past performance, and could help you identify things like inconsistent site performance. It can be difficult to understand a graph with more than a couple of time periods or domains on it at the same time, though.

I’m sure there are many other (perhaps better) ways to display this data so feel free to have a play around. The main thing to bear in mind is that there are so many facets to this data it’s necessary to simplify it in some way, otherwise we just won’t be able to make sense of it on a graph.
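
As a starting point for your own charts, here’s a rough sketch that buckets BigQuery bin data into “Fast” shares (using the corrected thresholds from the table at the end of this post) and plots them by month. The CSV layout and column names are assumptions:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one row per (month, milestone, bin_start) with a density column,
# bin_start in milliseconds, built up from monthly BigQuery pulls
crux = pd.read_csv("crux_monthly.csv")

# "Fast" thresholds in seconds, per the corrected table at the end of this post
fast_threshold = {
    "first_contentful_paint": 1.0,
    "dom_content_loaded": 1.5,
    "onload": 2.5,
}

crux["is_fast"] = crux.apply(
    lambda row: row["bin_start"] / 1000 < fast_threshold[row["milestone"]], axis=1
)

fast_share = (
    crux[crux["is_fast"]]
    .groupby(["month", "milestone"])["density"].sum()
    .unstack("milestone")
)

fast_share.plot(title="Share of page loads that were 'Fast', by month")
plt.ylabel("Proportion of loads")
plt.show()
```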

What do you think?

Hopefully, this post gives you some ideas about how you could use the Chrome User Experience report to identify whether you should improve your site speed. Do you have any thoughts? Anything you think I’ve missed? Let me know in the comments!

If this has inspired you to dig into your site speed page-by-page, my colleague Meagan Sievers has written a post explaining how to use the Google Page Speed API and Google Sheets to bulk test pages. Happy testing.

Bonus – what are the actual thresholds in the CRUX Data Studio report?

As mentioned above, the thresholds in the CRUX Data Studio report aren’t 100% correct. I have submitted a GitHub issue, but in the meantime here are the updated thresholds.

(Times are in seconds unless marked ms.)

Metric                       Listed definition    Actual threshold
FCP Fast                     X < 1 second         X < 1 second
FCP Average                  1 < X < 3            1 < X < 2.5
FCP Slow                     X >= 3 seconds       X >= 2.5 seconds
First Paint Fast             X < 1 second         X < 1 second
First Paint Average          1 < X < 3            1 < X < 2.5
First Paint Slow             X >= 3 seconds       X >= 2.5
First Input Delay Fast       X < 100 ms           X < 50 ms
First Input Delay Average    100 ms < X < 1       50 ms < X < 250 ms
First Input Delay Slow       X > 1                X > 250 ms
DOM Content Load Fast        X < 1                X < 1.5
DOM Content Load Average     1 < X < 3            1.5 < X < 3.5
DOM Content Load Slow        X > 3                X > 3.5
Onload Fast                  X < 1                X < 2.5
Onload Average               1 < X < 3            2.5 < X < 6.5
Onload Slow                  X > 3                X > 6.5

Tips for social media competitor analysis: Let’s stop talking about follower count

This is a copy of a blog post on distilled.net and is canonicalised there.

This year, Hootsuite announced that 3.196 billion people are now active social media users. That is 42% of all the people on earth. In the UK, that percentage climbs to 66%, and it’s 71% in the US. Even with recent data protection scandals, platforms like Facebook, Twitter, Instagram, LinkedIn, WeChat, and Pinterest are a huge part of daily life.

This kind of impressive cut through makes it more likely that we can use social media to find our audience, but that doesn’t mean that everyone on the platform is desperate to hear from us. In reality, when we use social media as businesses we’re competing for what might be a very small, very niche, but very valuable cross-section of a network. This means that whenever we do social media marketing, we need a strategy, and to have a successful social media marketing strategy it’s vital to know how we compare to our competitors, what we’re doing well and what threats we should be worrying about.

Without effective social media competitor analysis we’re working in the dark. Unfortunately, a lot of the time when we compare social media communities we keep coming back to the same metrics which aren’t always as informative as we might like. Fear not! Here’s a guide to find the social media stats which really tell us which competitors to watch out for and why.

What are we trying to achieve with social media?

One of the biggest problems with creating a social media strategy is that the subjectivity of social can make it incredibly hard to get solid, reliable performance data that can tell us what to do next. If we want to get actionable information about how we compare to competitors, it’s important for us to start with why we’re on the platforms to begin with (we’ll use these agreed facts in later sections). If we agree that:

  1. The value of a social media competitor analysis is to help us perform better on social
  2. The value of social is to help us achieve the business objectives that we set out in the first place.

Then we can agree that the numbers we look at in a social media competitor analysis must be defined by what we actually need the networks to achieve (even if it takes a while for engagement to become page views).

With that in mind, here are the most common aims I think we try to achieve through social media, ordered roughly from high commitment on the part of our audience, to low. When we are comparing social networks we need to make sure we have an idea of how the numbers we look at can contribute to at least one of the items below (and how efficiently).

  1. Sales (this can include donations or affiliate marketing as well as traditional sales)
  2. Support (event attendance etc. paid event attendance being included in sales)
  3. Site visits (essentially ad sales, visits to websites that don’t run on ads can be considered a step towards a sale)
  4. Impressions/staying front of mind (this is also a prerequisite for each of the above).

Why we should stop talking about raw follower counts

We often hear social media accounts evaluated and compared based on raw follower counts. If we agree we should look at numbers that are defined by our key goals I have some reasons why I don’t think we should talk about follower counts as much as we do.

“Followers” is a static number trying to represent a dynamic situation

When we compare social communities we don’t care how effective they were in 2012. The only reason we care about how effective they were over the last six months is because it’s a better predictor of how much of the available audience attention, and conversions they’ll take up over the next six months. What’s more, as social networks grow, and implement or update sharing algorithms, the goal posts are moving, so what happened a few years ago becomes even less relevant to the present.

Unfortunately, raw follower count includes none of that context, it’s just a pile of people who have expressed an interest at some point. Trying to judge how successful a community will be based on follower count is like trying to guess the weather at the top of a large hill based solely on its height – if it gets really big you can probably guess it’ll be colder or windier, but you’re having to ignore a whole bunch of far more relevant factors.

Follower buying can also really throw off these numbers. If you want to check competitors for follower buying you may be able to find some signs by checking for sudden, unusual changes in follower numbers (see “What we should look at instead”) or try exporting all their followers with a service like Export Tweet and check for a large number of accounts with short lifespans, low follower numbers or matching follower numbers.
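
If you do export a follower list, a quick pandas pass can flag the suspicious accounts. The file name and column names here are assumptions and will depend on the export tool you use:

```python
import pandas as pd

# Placeholder file: a CSV export of a competitor's followers
followers = pd.read_csv("competitor_followers.csv", parse_dates=["account_created"])

recently_created = followers["account_created"] > pd.Timestamp.now() - pd.Timedelta(days=90)
barely_followed = followers["followers_count"] < 10

suspicious = followers[recently_created & barely_followed]
print(f"{len(suspicious) / len(followers):.1%} of followers look recently created with few followers")
```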

An “impression” is required for every other social goal

I’m going to move on to what other numbers we should look at in the next section, but we have to agree that in order for anyone to do anything you want with your content, they have to have come into contact with it in some way.

Because of the nature of social networks we can also agree the number of impressions is unlikely to exactly match the follower number, even in a perfect system – some people who aren’t following will see your content, some people who are following won’t. So we’ve started to decouple “follows” from “impressions” – the most basic unit of social media interaction.

Next we can agree – if an account stops producing effective content, or stops producing content altogether, follower count will make no difference. A page that posts nothing will not have people viewing its nonexistent posts. So follower count isn’t sufficient for impressions and impressions are necessary for any other kind of success.

Depending on the kind of social network, the way in which content spreads through it will change, which means follower count can be outweighed by other mechanisms in different ways. We’ll look at each format below in isolation; where a network relies on more than one means (for instance hashtags and shares), the effect is compounded rather than cancelled out.

Discovery driven by hashtags

Ignoring other amplification mechanisms (which we’ll discuss below), follower count can be much less relevant in comparison to the ability to cut through hashtags. The end result of either a large, active following or content effectively cutting through a hashtag (or both) will be shown in the engagement metrics on the content itself; we have those numbers, so why rely on follows?

Discovery driven by shares and interaction

The combined followings or networks of everyone who follows you (even at relatively small numbers) can easily outweigh your audience or the audience of your competitors. Engagement or shares (whatever mechanism the platform uses to spread data via users) becomes a better predictor of how far content will reach, and we have those numbers, so why rely on follows?

If you’re interested in analysing your followers or competitor followers to find out how many followers those followers have and compare those numbers, services like Export Tweet will let you export a CSV of all the followers of an account, complete with their account creation date and follower number. Also, if you have to look into raw follower numbers this can be a way of checking for fake followers.

Discovery guided by algorithms

In this case, content won’t be shown to the entire following; the platform will start by showing it to a small subsection to gather data about how successful the post is. A successful post is likely to be seen by most of the following, and probably users that don’t follow that account too; a less successful post will not be shown to much more than the testing group. Key feedback the platforms will use to gauge post success is engagement and, as we’ve said, we have those numbers, so why rely on follows?

This particular scenario is interesting because having a very large audience of mostly disengaged followers can actually harm reach – when the platform tests your content with your audience, it’s less likely to be seen by the engaged subset, early post success metrics are likely to fare worse, and so the content will look less worthy of being shared more widely by the platform. This can mean that tactics like buying followers, or running short-term competitions just to boost follower count without a strategy for how to continually engage those followers, can backfire.

I’m not saying follower count has no impact at all

A large number of follows does give an advantage and makes it more likely that content is widely seen. The fact is that, in most cases, engagement metrics tell us if posts were widely seen, so they are a much more accurate way to get a snapshot of current effectiveness. Engagement numbers are also far closer to the business objectives we laid out above, so I’ll say again, why rely on follows?

At most I’d only ever want to use follower count to prioritise the first networks to investigate – as far as I’m concerned it isn’t a source of the actionable insights we said we wanted.

What we should look at instead

Engagements

In many ways, engagement-based numbers are the best to look at if we want to put together a fair and informative comparison including accounts we don’t own.

Engagement numbers are publicly visible on almost every social network (ignoring private-message platforms), meaning we aren’t having to work with estimates. What’s more, engagement is content-specific and requires some level of deliberate action on behalf of the user, meaning they can be a much better gauge of how many people have actually seen and absorbed a message, rather than glancing at something flying past their screen at roughly the top speed of a Honda Civic.

What business goal does this relate to?

Impressions. As mentioned above, engagements require the content to be on-screen and for the user to have recognised it at some level. Because engagements are like opt-in impressions, we can judge comparative success at staying front of mind. We could also use it as a sign that our audience is likely to take further action, like visiting our site or attending an event, depending on how you interpret the numbers (as long as it’s consistent). It’s fuzzy, but in a lot of ways less fuzzy than follows (due to removal from actual business goals) and actual impressions (due to lack of data). What’s more, the inaccuracy of this data leans towards only counting users who cared about the content, so it’s something I’m happy to live with.

That being said, when you’re comparing your own community to itself over time (and not worrying about competitors) impressions itself is still a good metric to use – most social platforms will give you that number and it can give you a fuller idea of your funnel (we’ll cover impressions more below).

What numbers should you use?

As with follower change and impressions (which I discuss below), we need to control for varying follower base and posts-per-day. I’d recommend:

  • Engagements per (post*follower) (where you multiply total follower count by total updates posted)
  • Engagements per post
  • Total engagements per account.

The first number should help you compare how well a follower base is being engaged, the second should give an idea of return on investment, and the third is to avoid being totally thrown off by tiny communities which might not actually be moving the needle for business objectives.
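
With toy numbers, the three calculations look like this (swap in real exports for the account and period you’re comparing):

```python
# Made-up numbers for one account over one period
posts = 40                 # updates published in the period
followers = 12_000         # follower count for the account
total_engagements = 6_400  # likes + shares + comments etc. across those posts

engagements_per_post_follower = total_engagements / (posts * followers)
engagements_per_post = total_engagements / posts

print(f"{engagements_per_post_follower:.2%} per post per follower")
print(f"{engagements_per_post:.0f} engagements per post")
print(f"{total_engagements} total engagements")
```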

It’s worth checking the Facebook and Twitter ad reporting (relatively new additions to each platform) to see if the page is spending money promoting that content.

What tools should you use?

The platforms themselves are an option for gathering engagement numbers, which is one of the reasons this kind of check is ideal. This can be as simple as scrolling through competitor timelines and making notes of what engagement they’ve received. Unfortunately, sometimes this is time-consuming and many platforms take steps to block scraping of elements. However, I’ve found some success with scraping engagement numbers from Facebook and Twitter and I’ve included my selectors in case you do manage to use a tool like Agenty or Artoo.js to help automate this.

Facebook

Number                Selector
Shares                .UFIShareLink
Likes                 ._4arz span
Comments              .UFICommentActorAndBody
Additional comments   .UFIPagerLink
All visible posts     ._q7o

Twitter

Number               Selector
Interactions         span.ProfileTweet-actionCountForPresentation
All visible posts    span._timestamp

Facebook Insights is another great source of information because it’ll give you some direct comparisons between your page and others. It’s not quite the level of granularity we’d like but it’s easy, free, and direct, so gift horses and all that.

NapoleonCat – I don’t work for this company, but they have a 14-day free trial and their reports offer exactly the kind of information I’d be looking for, for both managed profiles and ones you are watching. That includes daily raw engagement numbers, calculated engagement rate, and SII, their “Social Interaction Index”, which claims to account for differing audience size, allowing direct comparison between communities.

The hitch is that Twitter and Instagram only start collecting information from when you add them to the account, so if you want to collect data over time you’ll need to pay the premium fees. On the other hand, their support team has confirmed that they’re perfectly happy with you upgrading for a month, grabbing the stats you need, removing your payment card for a few months (losing access in the process) and repeating six months later for another snapshot.

Socialblade – offers some engagement rate metrics for platforms like Instagram and Twitter.  It doesn’t require you to log in but the data isn’t over time so your information is only as good as your dedication to recording it. 

Fanpage Karma does an impressive job of trying to give you actionable information about what is engaging. For instance, it’ll give you a scatter chart of engagement for other pages, colour coded by post type. Unfortunately,  anything more than a small number of posts can make that visualisation incredibly noisy and hard to read. The engagement-by-post-type charts are easier to read but sacrifice some of that granularity (honestly I don’t think there is a visualisation that has engagement number and post type over time that isn’t noisy).

It’ll also let you compare multiple pages in the same kind of visualisation, where the dots still show the number of engagements but are colour coded by page instead of post type; patterns can be a bit easier to divine with that one, but the same tension can arise.

If you’re tracking these stats for your own content Twitter analytics and Instagram Insights are great, direct, sources of information. Any profile can view Twitter analytics, but you’ll need an Instagram business profile to look at the Instagram data. At the very least, each can be a quick way of gathering stats about your own contents’ impressions and engagement numbers, so you don’t have to manually collect numbers.

If you have to include a follower metric…

If you have to include a follower metric, I’d advise focusing on something far more representative of recent activity. Rather than total or raw number of follows, we can use recent change in followers.

While I still think this is a bit too close to raw followers for my liking, there’s one important difference – this can give you more of an idea of what’s happening now. A big growth in followers could mean a network is creating better content; it could also mean they’ve recently bought a bunch of followers. Either way, we know they’re paying attention.

What business goal does this relate to?

Some people might use this number to correlate with impressions, but as I said we can use other numbers to more accurately track that. This number (along with raw post frequency) is one means of gauging effort put into a social network, and so can inform your idea of how efficient that network is, when you are looking at the other metrics.

These numbers are also likely to be closer to what senior managers are expecting, so they can be a nice way to begin refocusing the conversation.

What number should you use?

We need to account for differing community histories. One way to do this is to consider both:

  • Raw followers gained over a recent period
  • Followers gained over a recent period as a proportion of total current followers.

We can use these two numbers to get an idea of how quickly networks are growing at the moment. The ideal would be to graph these numbers over time; that way we can see if follower growth has recently spiked, particularly in comparison to other accounts of similar focus or size.

Once we've identified times where an account has achieved a significant change in growth, we can start to examine its activity around that time.
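To make that concrete, here's a minimal Pandas sketch of the two numbers above, assuming you've already collected follower snapshots by hand or exported them from one of the tools below. The account names, dates, and figures are all made up for illustration.

```python
import pandas as pd

# Hypothetical monthly follower snapshots for a few accounts
# (account names, dates, and numbers are all invented)
snapshots = pd.DataFrame({
    "account": ["us", "us", "us", "competitor_a", "competitor_a", "competitor_a"],
    "month": ["2020-01", "2020-02", "2020-03", "2020-01", "2020-02", "2020-03"],
    "followers": [10_000, 10_600, 10_650, 52_000, 52_100, 55_000],
})

snapshots = snapshots.sort_values(["account", "month"])

# Raw followers gained since the previous snapshot, per account
snapshots["followers_gained"] = snapshots.groupby("account")["followers"].diff()

# Followers gained as a proportion of current total followers
snapshots["growth_rate"] = snapshots["followers_gained"] / snapshots["followers"]

print(snapshots)
```

Graphing `growth_rate` by month for each account is then a quick way to spot the spikes worth investigating.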

What tools should you use?

NapoleonCat (I promise I'm not getting paid for this) can give you historic follower growth data for accounts you don't own, although unfortunately it only reports Twitter follower growth from the point an account starts being monitored (other networks seem to be backdated).

Socialblade offers historic follower stats for accounts you don't own. The first time anyone searches for stats on an account, that account is added to Socialblade's watchlist and it starts gathering stats from that point. If you're lucky, someone will already have checked; otherwise you can have a look now and check back later.

Impressions

It can be harder to get a comparison of impressions for content, but it's one of our most foundational business objectives – a way to stay front of mind and ideally build towards sales. Everything we've covered in terms of follower numbers is a step removed from actual impressions, so it's worth comparing real impression numbers for recent content where we can.

What business goal does this relate to?

Impressions, but as impressions are the minimum bar to clear for all of our other business goals, this can also be considered top of the funnel for other things.

What numbers should you use?

  • Impressions per (post*follower) (where you multiply total follower count by total updates posted)
  • Impressions per post
  • Total impressions per account as a proportion of all impressions across competitor accounts during the same period

Once we've collected impression numbers from a range of accounts on the same platform targeting the same audiences, we can sum them and compare each account's total impressions against total impressions overall to get a very rough share-of-voice estimate. This number will be heavily impacted by users who view content from one account again and again, but as those users are likely to be the most engaged, it's a bias we can live with. Again, comparing this over time can give us an idea of trajectory and growth.

Some accounts may try to drive up key metrics by posting a huge number of times a day. There's definitely a law of diminishing returns, so as with engagements I'd also calculate an average per-post impression number to gauge comparative economy.

As this is post-specific, I would also recommend breaking these numbers down by post type (whether that be "meme", "blog post", or "video") to spot trends in effectiveness.
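If you already have per-account totals for the same period (gathered from the tools below or by hand), the arithmetic is simple enough to do in a few lines of Pandas. Here's a minimal sketch, where every account name and number is invented for illustration.

```python
import pandas as pd

# Hypothetical per-account totals for the same period (all values made up)
accounts = pd.DataFrame({
    "account": ["us", "competitor_a", "competitor_b"],
    "followers": [10_600, 52_400, 8_200],
    "posts": [40, 120, 25],
    "impressions": [180_000, 610_000, 95_000],
})

# Average impressions per post, to gauge comparative economy
accounts["impressions_per_post"] = accounts["impressions"] / accounts["posts"]

# Impressions per (post * follower), to account for audience size and posting volume
accounts["impressions_per_post_follower"] = accounts["impressions"] / (
    accounts["posts"] * accounts["followers"]
)

# Very rough share of voice: each account's impressions as a share of the combined total
accounts["share_of_voice"] = accounts["impressions"] / accounts["impressions"].sum()

print(accounts)
```

Running the same calculation per post type (rather than per account) is then just a different grouping of the same data.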

What tools should you use?

Fanpage Karma again goes out of its way to give you means of slicing this data. Just like with engagement, you can show impressions by post type for one Facebook page, or compare multiple pages at the same time. It can result in the same information overload, but I definitely can't fault the platform for a lack of granularity. Unlike with engagement, the platform will pretty much only give you impression data for Facebook, and unfortunately it's sometimes patchy (see the SEMrush and Moz graph below).

It'll also give you YouTube view information, including a breakdown of video views and interactions based on when the video was posted, as well as cumulative figures which show how a video's performance improved over time.

Tweetreach will give you estimated reach for hashtags and keywords. By searching for a specific enough phrase, you can get an idea of the reach of individual tweets, or of a group of related tweets if you're smart about it.

Content shares

This is specifically people sharing a page of your site on a social network. It may help us flesh out some of the impressions metrics we’ve been dancing around, particularly in terms of content from your site or competitors’ being shared by site visitors rather than an official account.

What business goal does this relate to?

Impressions, site visits generating ad revenue

What numbers should you use?

To control for the volume of content created by different sites, I would look at both the total number of shares and shares per blog post, for example, during the same time period. It could also be valuable to sum the total follower count of the accounts that shared the content, to weight shares by reach, but that could be a huge task and it also opens us up to the problems of follower counts.
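As a rough illustration, here's a minimal Pandas sketch of shares per post by domain, assuming you've already pulled share counts together (for example, from a Buzzsumo export). The domains, URLs, and numbers are made up.

```python
import pandas as pd

# Hypothetical share counts per article (domains, URLs, and figures are invented)
shares = pd.DataFrame({
    "domain": ["oursite.com", "oursite.com", "competitor.com"],
    "url": ["/post-1", "/post-2", "/their-post"],
    "total_shares": [120, 45, 300],
})

# Total shares and number of shared posts per domain, then shares per post
per_domain = shares.groupby("domain").agg(
    total_shares=("total_shares", "sum"),
    posts=("url", "count"),
)
per_domain["shares_per_post"] = per_domain["total_shares"] / per_domain["posts"]

print(per_domain)
```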

What tools should you use?

Buzzsumo will let you search for shared content by domain, and will let you dig into which accounts shared a particular item. It can give a slightly imbalanced picture because it's only looking for shares of your website content (so don't expect the figures to include particularly successful social-only content, for example), but it's an excellent tool for quickly understanding which content is doing well, and with whom.

Link clicks

This can be difficult information to gather, but given its potential value to our business goals it's worth collecting where we can.

What business goal does this relate to?

Site visits generating ad revenue, event attendance, sales, depending on where the link is pointing.

In my experience it’s usually much harder to get users to click away from a social media platform than it is to get them to take any action within the same platform. Sharing links can also cause a drop in engagement, often because the primary purpose of the content isn’t to encourage engagement – success with a user often won’t be visible at all on the platform.

What numbers should you use?

  • Clicks per (link post*follower) (where you multiply total follower count by total link posts)
  • Clicks per link post
  • Total link clicks
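The arithmetic here mirrors the impressions sketch earlier, so if you're doing this in Pandas it may be worth wrapping it in a small reusable helper. A sketch, with invented column names and figures:

```python
import pandas as pd

def per_post_metrics(df: pd.DataFrame, total_col: str, posts_col: str) -> pd.DataFrame:
    """Add per-post and per-(post*follower) versions of a raw total, e.g. clicks or impressions."""
    df = df.copy()
    df[f"{total_col}_per_post"] = df[total_col] / df[posts_col]
    df[f"{total_col}_per_post_follower"] = df[total_col] / (df[posts_col] * df["followers"])
    return df

# Hypothetical link-click totals for the same period (all values made up)
link_stats = pd.DataFrame({
    "account": ["us", "competitor_a"],
    "followers": [10_600, 52_400],
    "link_posts": [12, 30],
    "link_clicks": [900, 1_750],
})

print(per_post_metrics(link_stats, total_col="link_clicks", posts_col="link_posts"))
```

The same helper works for impressions by passing different column names, which keeps the two comparisons consistent.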

What tools should you use?

Understandably, this data is fairly locked down. Fanpage Karma again goes out of its way to get you what it can, and does offer to plot posts against link clicks in one of those scatter graphs we love. I've reached out to them for information on how they collect this data and will update when I hear back. As with impression data, click data can sometimes be patchy – the platform seems to miss data consistently across metrics.

Outside of that, the best trick I've found is taking advantage of link shortener tracking. For example, anyone who uses the free service Bit.ly to shorten their links also gets access to link click stats over time. The thing is, those stats aren't password protected; anyone can access them just by copying the Bit.ly link and putting a + sign at the end before following it.

Here are the stats for a link Donald Trump recently shared in a tweet.
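If you want to check a batch of shortened links, the "+" trick is easy to script. A tiny sketch, using made-up Bit.ly links:

```python
# Build the public stats URL for each shortened link by appending "+",
# as described above. The links below are invented for illustration.
short_links = [
    "https://bit.ly/example1",
    "https://bit.ly/example2",
]

for link in short_links:
    print(f"{link}  ->  stats at {link.rstrip('/')}+")
```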

Go forth and analyse

Hopefully, some of the metrics and processes I've included above prove helpful when you're next directing your social media strategy. I would never argue that every single one of these numbers should be included in every competitor analysis, and there are a whole host of other factors to include in determining the efficacy of a community – for instance: does the traffic you send convert in the way you want?

That being said, I think these numbers are a great place to start working out what will make the difference, and will hopefully get us away from that frequent focus on follower numbers. If there are any numbers you think I’ve missed or any tips and tricks you know of that you particularly like, I’d love to hear about them in the comments below.