Recently, Facebook announced changes to the Relevance Score and other metrics advertisers use to evaluate their campaigns – good news for brands struggling to benchmark their results.
Breaking down the announcement – as of April 30, 2019, Facebook is replacing several metrics:
- Offers Saved, Cost per Offers Saved – REPLACED WITH the Post Saves metric
- Messaging Replies, Cost per Messaging Reply – REPLACED WITH New Messaging Connections and Messaging Conversations Started metrics
- Mobile App Purchase Return On Ad Spend (ROAS) and Web Purchase (ROAS) – FOLDED IN to the holistic Purchase ROAS metric
- Relevance Score – REPLACED WITH Quality Ranking, Conversion Rate Ranking, and Engagement Rate Ranking metrics
That last change – replacing the Relevance Score – is the most significant part of this announcement because it gives us a limited opportunity to benchmark our performance against competitors. To understand why, we need some background on what the Relevance Score is and how it came to be.
That brings us to the two reasons why the Relevance Score changes are exciting:
- More granular metrics mean prescriptive solutions for improving ad campaigns
- New benchmark insights will be available to measure your brand against competitors
THE HISTORY OF THE RELEVANCE SCORE
As web platforms (like social media and search) have evolved, their owners realized that everything about the experience a user has – including the ads – has to be relevant to that individual user. As a result, Google introduced the Quality Score in 2008, and Facebook later followed with the Relevance Score (other platforms have since followed suit).
Quality/relevance scores are typically determined by user interactions with ads: the more positive signals an ad receives, the higher the score; the more negative signals, the lower. Those signals can take a variety of forms (on Google, it might be clicks or the bounce rate on the landing page; on Facebook, it can mean clicks, video views, comments, conversions, etc.).
These scores are used to determine (based on how users interacted with ads) how well the ads were targeted and crafted – in certain contexts, ADS THAT SCORE BETTER COST LESS to run because they're less intrusive to users. If scores are low enough, the ads won't run at all (the platform owners don't want to risk users spending less time on their sites).
EXCITING CHANGE 1: GRANULAR DATA MEANS EASIER TROUBLESHOOTING
The primary benefit of this change to the Relevance Score (segmenting it into Quality Ranking, Conversion Rate Ranking, and Engagement Rate Ranking) is more granular data that will help you make better decisions about your marketing. A relevance score is better than nothing, but it doesn’t necessarily tell you if your ads are accomplishing the goal you want them to. In order to make that determination, you would have to look at several other metrics for any given ad.
For example: Let's say we want to drive sign-ups for our email newsletter. We create a conversion campaign and design several ad variants to test different images/video and ad copy. All of the ads could have a good Relevance Score, but some may be better at getting users to complete the email subscription form. Looking only at the Relevance Score, we wouldn't be able to tell which variants those were, because the score is influenced by signals we don't care about (like driving reactions, comments, or shares).
With this new trio of Ranking scores, we can more easily weed out some of those irrelevant signals to see how our ads are actually performing. If a conversion ad is high-quality, but doesn’t have a corresponding high Conversion Rate Ranking – we’ll want to dig in to find out why and make changes to improve it. Better data combined with a three-part framework means we can more easily prescribe ways to improve under-performing campaigns.
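That troubleshooting rule – quality is fine, but conversions lag – is easy to automate once the three rankings are exported. The sketch below is a minimal, hypothetical example: the field names and the `ads` data are illustrative placeholders, not the Ads Manager export format or API.

```python
# Ordered worst-to-best, matching Facebook's announced ranking buckets.
RANK_ORDER = [
    "Below Average (bottom 10% of ads)",
    "Below Average (bottom 20% of ads)",
    "Below Average (bottom 35% of ads)",
    "Average",
    "Above Average",
]

def needs_review(ad: dict) -> bool:
    """Flag ads whose quality is at least Average but whose
    Conversion Rate Ranking lags behind their Quality Ranking."""
    quality = RANK_ORDER.index(ad["quality_ranking"])
    conversion = RANK_ORDER.index(ad["conversion_rate_ranking"])
    return quality >= RANK_ORDER.index("Average") and conversion < quality

# Illustrative data only (not a real export):
ads = [
    {"name": "Variant A", "quality_ranking": "Above Average",
     "conversion_rate_ranking": "Below Average (bottom 35% of ads)"},
    {"name": "Variant B", "quality_ranking": "Average",
     "conversion_rate_ranking": "Average"},
]

to_review = [ad["name"] for ad in ads if needs_review(ad)]  # ["Variant A"]
```

Variant A is the one to dig into: its creative resonates, but something between the click and the conversion is leaking.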
Hopefully this is a trend – we would love to see Facebook roll out even more Ranking scores for other objectives (like video views or event RSVPs).
EXCITING CHANGE 2: UNDERSTAND HOW YOU COMPARE
Buried in the announcement about the change to Facebook's Relevance Score is the news that we'll start to see BENCHMARK metrics (however rudimentary).
Benchmarks in digital marketing are the Holy Grail. They’re very difficult to establish because that data is only available to the platforms themselves (Facebook/Google) and they’re not really interested in publishing them for a variety of reasons.
Currently the best we can do for benchmarks (aside from benchmarking against our own past performance) is to rely on large-scale marketing platforms that have eyes on hundreds or thousands of client ad campaigns to publish periodic studies about what they’re seeing in their (limited) view. Bless their hearts for doing it – otherwise we would have nothing. If you’re interested – here are some examples of the most recent and reliable:
- SocialBakers – Regional Platform Benchmarks and Insights by Industry
- Adstage – Facebook Ads Benchmarks for CPC, CPM, & CTR in Q3 2018
- Wordstream – Facebook Ad Benchmarks by Industry
- Wordstream – Google Ads Benchmarks by Industry
Here’s where the new Facebook Ranking scores come in: they should be able to tell you – in real time – how you’re doing against other advertisers targeting similar audiences with a similar campaign objective.
In the new metrics, we'll see whether a score is Average (representing the 35th to 55th percentile), Above Average, or Below Average – with Below Average broken into three tiers: bottom 35% of ads, bottom 20% of ads, and bottom 10% of ads.
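Those buckets map cleanly onto percentiles. The sketch below is an assumption-laden illustration: the announcement only defines Average as the 35th-55th percentile and names the three Below Average tiers, so treating everything above the 55th percentile as Above Average is our inference.

```python
def ranking_label(percentile: float) -> str:
    """Map an ad's percentile (0-100, among ads competing for a
    similar audience and objective) to the announced ranking buckets.
    The Above Average boundary is assumed; Facebook only states that
    Average covers the 35th to 55th percentile."""
    if percentile > 55:   # assumption: everything above the Average band
        return "Above Average"
    if percentile >= 35:  # stated range: 35th to 55th percentile
        return "Average"
    if percentile >= 20:
        return "Below Average (bottom 35% of ads)"
    if percentile >= 10:
        return "Below Average (bottom 20% of ads)"
    return "Below Average (bottom 10% of ads)"
```

So an ad at the 60th percentile would read Above Average, while one at the 15th would land in the bottom-20% tier.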
Finally – some feedback!
It’s not perfect, though, because the Ranking doesn’t take into account the difficulty of the outcome you’re trying to produce – which Facebook is quick to note:
“Some products and services naturally exhibit lower conversion rates than others competing in the same ad auction. High-price or high-consideration products like jewelry should expect lower conversion rate rankings than lower-price or lower-consideration products like t-shirts.”
In the future – since it's very difficult for AI to account for all the nuance of a conversion campaign – I would love to see advertisers given the ability to rate how difficult their desired outcome is as part of the ad campaign creation process, so they can more accurately compare themselves to similar campaigns. If we're trying to get prospective customers to buy a car (a big purchase decision with a long sales cycle), we could rate it a 10 in difficulty; if we're trying to get customers to sign up for a coupon, we could rate it a 1.
If you’re ever in need of a fresh perspective on your digital marketing efforts – please don’t hesitate to reach out and contact us.
Derek DeVries is a director of digital and social strategy at Lambert.