Google’s mission is to connect people who are using their search engine with content that delivers on their expectations. They do this because they want people to keep using their engine, not someone else’s.
Through machine learning, Google do a pretty good job of promoting content on the Search Engine Results Pages (SERPs) that is relevant to searchers’ queries (the words they type into Google). The machine has learnt to interpret the intent of the search and deliver appropriate results – more often than not.
They employ raters to help ensure the content they promote on the SERPs is the best available. This article summarises the advice in the 160-page document they provide to those raters to ensure they’re doing their job correctly.
As with our digest of Google’s Webmaster Guidelines, we appreciate this topic could be a little dry for some – not us, we love it – so we’ve interspersed the text with photos of animals that all have a link to the article, albeit a tenuous one at times.
Please read on!
Page Quality Rating Guidelines
What is the purpose of the page?
Guide dog image by Amy_Gillard
The goal here is for raters to evaluate how well a page achieves its purpose, the purpose being the reason or reasons it was created. Most pages are created to be helpful for users. Why? Because those are the types of pages Google are keen to promote on the SERPs.
They go on to confirm that as long as the goal of the page is to help the user, then no particular type of page should be considered higher quality than any other. What they’re saying is that a page created to inform a user about a news topic shouldn’t necessarily be considered higher quality than another page whose purpose is to make the user laugh.
They provide a list of some page purposes to help clarify this point:
- To share information about a topic.
- To share personal or social information.
- To share pictures, videos, or other forms of media.
- To express an opinion or point of view.
- To entertain.
- To sell products or services.
- To allow users to post questions for other users to answer.
- To allow users to share files or to download software.
Source: searchqualityevaluatorguidelines.pdf page 8, 2018
Types of pages
Clearly there are a lot of different types of pages on the internet. Google use the term Your Money or Your Life (YMYL) pages for pages that could potentially impact the future happiness, health or financial stability of the user who reads them. These are, in Google’s eyes, the most important pages when it comes to deciding whether they are high quality or not, as there is much more at stake. YMYL pages include shopping pages, pages with financial information, medical or legal content and so on.
What’s on a page?
Google define the areas of a page as the main content (the part that helps the page to achieve its purpose), the supplementary content and advertisements. Supplementary content is everything else returned on a page aside from the main content and ads. This should contribute to a good user experience, but does not directly help the page to achieve its purpose.
Ads can apparently help contribute to a good user experience. I can’t say I concur with that, but then I’m not running a multi-billion pound company that raises revenue through ads. What’s important for us as webmasters to know is that running ads on a page will not adversely affect the rating it is given, provided the ads are not of particularly low quality and don’t interfere with the user’s ability to read the main content!
Is the website important as well as the page?
Piranha image by annca
In short, yes! Some of the criteria by which the page is judged are related to the website where the page lives. In particular the rater will be looking for signals that confirm the creator of the content has the required authority to create it. They will look for confirmation of their authority away from the website itself too, from independent sources, be they professional associations, societies or something else.
The raters are told to consider reviews from customers (but not to be too judgemental if there are one or two negative reviews) and also to be sceptical about claims website owners make about themselves!
They add that having no reviews at all, for example in the case of a small organisation, isn’t indicative of positive or negative reputation, and therefore shouldn’t be considered an indication of low quality content.
Some websites, especially those with YMYL content, as well as demonstrating the authoritativeness and expertise required to write on the topic, will need to have sufficient information on their contact pages should the user need to get in touch. This isn’t so important for other types of content.
What are the most important factors that determine the quality of the page?
Owl image by TonW
Google are very clear on this. The factors are as below:
- Expertise, Authoritativeness, Trustworthiness
- Main Content Quality and Amount
- Website Information/information about who is responsible for the website
- Website Reputation
Source: searchqualityevaluatorguidelines.pdf page 18, 2018
Expertise, Authoritativeness, Trustworthiness (E-A-T)
This is the most important factor. Google say the other three items in their list above inform the E-A-T of a website.
In short, Google’s raters are looking for evidence of expertise and factually correct information, and this becomes even more important for YMYL pages. The level of expertise required depends on the topic and the purpose of the page.
How do the ratings work?
There is a sliding scale from Lowest to Highest, with Low, Medium and High in between!
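For those who think in code, the five-point scale maps naturally onto an ordered enumeration. This is purely our illustrative sketch – the names are ours, and the guidelines describe the scale only in prose:

```python
from enum import IntEnum

class PageQuality(IntEnum):
    """Sketch of the Page Quality sliding scale as an ordered
    enumeration (our naming, not anything Google publish)."""
    LOWEST = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    HIGHEST = 5

# Because the scale is ordered, ratings can be compared directly:
assert PageQuality.HIGH > PageQuality.MEDIUM
```

The point of the ordering is simply that every rating sits somewhere on a single axis, which is exactly how the guidelines present it.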
The guidelines provide lots of information to help its raters choose the appropriate rating for a page, along with many real examples.
Here follows a synopsis of some things that I found interesting.
Higher quality pages typically have a satisfying amount of high quality main content. The actual amount depends on the topic and purpose of the page – broad topics typically warrant more, narrow topics perhaps less. This is Google telling webmasters pretty clearly that long form content, on its own, isn’t necessarily going to deliver and isn’t always required.
A high rating cannot be used for any page where the website has a convincing negative reputation. So even if the quality of the content is good, the overarching reputation overrides this. This confirms to me how important reputation management is for any organisation.
Lower quality ratings should be given to pages where the writer does not have enough expertise and/or the website is not trustworthy. Such a page may have an unsatisfying amount of content for its purpose, distracting or misleading ads (think pop-ups that don’t have large enough close buttons, particularly important on mobile), an unsatisfying amount of information about the website, and/or a negative reputation.
The quality of the content is determined not only by the expertise, but by how much time and effort has gone into creating it. A small amount of content on a broad topic will be regarded as unsatisfying.
The lowest quality pages are created to harm, mislead or misinform – usually to make money for the publisher. The guidelines touch on copied content, and it’s worth flagging here for the record that Google don’t look kindly on duplicate content that has been copied intentionally; they go on to provide raters with the means to identify duplicate content as part of their research into a page.
Finally they instruct raters to assess the page on their mobile phone, though they’re happy for the main analysis to be conducted on a desktop device. They expand on mobile in the second section of the guidelines, which conveniently follows next…
Understanding Mobile User Needs
Cheetah image by mtanenbaum
How your content displays on mobile devices is very important to Google. This section of the guidelines focuses on mobile and, much more, on the intent of searches (queries) and how content (landing pages) needs to deliver on that intent (user intent) if it is to rank as a result on the SERPs.
They start by saying that understanding the query is the first step in this evaluation process. They make the crucial distinction that users in different locations may have different expectations for the same query. They highlight that more users are searching for terms without an explicit location as they know that Google know where they are!
They introduce the concept of queries with multiple meanings, which their machine learning is doing its best to address. They use the three query interpretations below:
- Dominant Interpretation – what most users mean when they use this query
- Common Interpretation – what many or some users mean
- Minor Interpretations – what just a few users mean
They identify the four broad user intents; users either want to:
- Know something (know and know simple)
- Do something (do and device action)
- Visit a specific site or page (website)
- Visit somewhere (visit-in-person).
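Putting the two ideas together – this is our own illustrative sketch, not anything from the guidelines – the intents and interpretations could be modelled like this:

```python
from enum import Enum

class UserIntent(Enum):
    """The four broad user intents described in the guidelines.
    The Python modelling and naming are ours, for illustration only."""
    KNOW = "know"                        # find information (incl. Know simple)
    DO = "do"                            # accomplish a goal (incl. device action)
    WEBSITE = "website"                  # reach a specific site or page
    VISIT_IN_PERSON = "visit-in-person"  # visit a physical location

# A single query can carry several interpretations, each with a
# different weight – dominant, common or minor – as described above:
interpretations = {
    "dominant": UserIntent.WEBSITE,
    "common": UserIntent.KNOW,
    "minor": UserIntent.VISIT_IN_PERSON,
}
```

The design point is that intent isn’t a single label per query: the same words can map to several intents, each held with a different strength.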
Below is a little key info on each.
Know and Know simple
The intent is to find information on a topic. The vast majority of searches have this (Know) intent. Know simple queries are a subset with a specific, often short answer; most searches are not Know simple.
Do and Device Action
The intent is to accomplish a goal or engage in an activity on their phone (a device action, which might be to call someone).
Website
The intent is to visit a specific website or page that the user has requested.
Visit-in-person
This is often the intent for searches made on mobile devices, of course. The user’s location becomes all important here.
The guidelines then introduce web search result blocks, and the elements you typically find in each i.e. title of content, url and description – these are also known as snippets, and special content result blocks – aka featured snippets. The latter often do not require the searcher to click through, assuming the result delivers on the required intent.
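Loosely sketched (the field names here are our own invention, for illustration only), a web search result block carries:

```python
from dataclasses import dataclass

@dataclass
class ResultBlock:
    """Illustrative model of a web search result block ('snippet').
    Field names are ours, not Google's."""
    title: str        # title of the content
    url: str
    description: str
    # Special content result blocks ('featured snippets') can answer
    # the query directly, so a click-through may not be needed.
    answers_in_block: bool = False

snippet = ResultBlock(
    title="Search quality rating guidelines",
    url="https://example.com/guidelines",
    description="A summary of how raters assess pages.",
)
```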
They show a number of examples and advise that the rater needs to consider these results blocks and intent for the final section of their guidelines, which is next…
Needs Met Rating Guideline
Robin image by CountryGirl1
The focus in this section is mobile users’ needs. They introduced the key themes in the previous section; now they want raters to use this knowledge to consider how helpful and satisfying the result they’re being asked to rate is for mobile users.
Once again there’s a sliding scale, this time from Fully Meets to Fails to Meet, with Highly Meets, Moderately Meets and Slightly Meets in between!
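As with the Page Quality scale, this is an ordered five-point scale – again sketched here purely for illustration, with naming of our own:

```python
from enum import IntEnum

class NeedsMet(IntEnum):
    """Sketch of the Needs Met sliding scale (our naming,
    not Google's implementation)."""
    FAILS_TO_MEET = 1
    SLIGHTLY_MEETS = 2
    MODERATELY_MEETS = 3
    HIGHLY_MEETS = 4
    FULLY_MEETS = 5
```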
In some instances they are being asked to rate the content in the result block AND the landing page associated with the result. So it’s the key on-page fields (page title, url and meta description), as well as the main content, they’re rating.
They go on to qualify this by saying that for the featured snippets they’re generally being asked to rate the information in the box only. For web search result blocks where generally a click is required, it’s both, and for Device action results, they should rate the helpfulness of the suggested action!
To achieve a Fully Meets rating the user should not need to look any further than the result provided. This is typical for Know simple queries, but impossible for broad know queries or queries without a dominant interpretation.
A rating of Moderately Meets should be assigned to results that are helpful/satisfying to many users, or very satisfying to a few. Slightly Meets less so. And so on.
They touch on product queries and say that most do not specify intent, and that even though the ultimate goal may be to purchase a product, often there’ll be a different intent prior to this – usually information gathering.
And (he said fairly confidently) this whistle stop tour is pretty much all website owners like yourself need to know! If you want to know more, you’re going to have to read the pdf.
Right, who wants to borrow my printout to save their printer ink?