Help Center Article Ratings: Do They Matter?
You've partnered with product managers and marketers to create benefit-driven documentation on a shiny new feature set. Support gurus have given you feedback on your troubleshooting tips and screenshots. Maybe you've even done some usability testing on your instructions with a customer.
But a month later, when you check in to see how your article is doing, your heart drops when you see a 23% customer satisfaction rating. What the heck went wrong?
I've been there. Content strategists at Google, Twitter, SurveyMonkey, and Airbnb have been there.
We're all facing the same struggle: how do we determine if our help center content is working?
Most out-of-the-box help center setups (like Desk.com or Zendesk) include an article rating feature. Sounds great, right? Let your customers tell you if your content is good or not!
Except here's the thing:
- Votes skew negative. Anyone who's been on the Internet knows that angry folks are far likelier to air their opinions than satisfied people. On top of that, users who get their answer by step 4 of 10 have little incentive to scroll to the bottom of your article, where voting modules typically live.
- Most article voting systems are binary. Thumbs up or thumbs down, smiley face or frowny face, a simple yes or no (although some companies, like Facebook, have incorporated scaled satisfaction systems). This limits what you can learn about why content works or doesn't.
- It's hard to achieve statistical significance. Even with hundreds of thousands of help center sessions every month, at Eventbrite we still only see single-digit percentages of customers who vote on our content, which leaves most articles with too few votes to draw confident conclusions (the sketch below shows why).
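To make that sample-size problem concrete, here's a quick sketch of the underlying math (not a feature of any help center tool, just an illustration): it computes a Wilson score interval around a satisfaction rating and shows that a 23% rating built on 30 votes is consistent with a "true" rating anywhere from roughly 12% to 41%.

```python
import math

def rating_confidence_interval(positive_votes, total_votes, z=1.96):
    """Wilson score interval for a thumbs-up proportion at ~95% confidence."""
    if total_votes == 0:
        return (0.0, 1.0)
    p = positive_votes / total_votes
    denom = 1 + z**2 / total_votes
    center = (p + z**2 / (2 * total_votes)) / denom
    margin = (z * math.sqrt(p * (1 - p) / total_votes
                            + z**2 / (4 * total_votes**2))) / denom
    return (center - margin, center + margin)

# A "23% satisfaction" rating based on 30 votes could plausibly sit anywhere
# between roughly 12% and 41% -- too wide a range to act on with confidence.
print(rating_confidence_interval(7, 30))
```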
So what's a support content creator to do?
Don't require login for article voting.
An authenticated help center can drastically improve your customer's experience with features like content personalization and recommendations.
But requiring a login—or worse, requiring a login to a special support-only account separate from their product account (I'm looking at you, Squarespace)—severely limits the amount of valuable customer insight you could be using to improve your content.
Opening up your feedback modules to all visitors will help you get far more of the feedback you need to spot article improvement opportunities.
Ask the right question.
There's a very real (and important) difference between these seemingly similar feedback questions:
- Was this article helpful?
- Did this article answer your question?
The first option is the most common phrasing, but the second option will help you understand your content's success better.
Why? Put yourself in your customer's shoes: just because an article was helpful doesn't mean it gave them what they needed or wanted.
As content creators, it feels good to get those positive votes from customers, but asking the softer question robs us of understanding whether our content actually resolves the customer's issue, which is what should really matter.
Structure how you're collecting customer feedback.
Even if it's skewed to the negative, customer feedback is an absolute goldmine if you're collecting it the right way. The problem with simple voting, however, is that it provides zero insight into why an article has failed.
Sometimes you can use your best content judgment to know what to change—but you'd be surprised how often it's hard to know what's wrong (for example, 5-10% of our customer feedback is about product functionality and feature requests, so we pass that customer feedback to the product team). Why guess if you can know?
Open feedback fields are great for getting specific feedback in your customers' own words—but buckets of feedback types can make it easier for customers to give more insight with minimal effort.
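One possible shape for that structured feedback is sketched below; the bucket labels and field names are hypothetical, not Eventbrite's actual categories or schema. A "no" vote prompts the reader to pick a reason and, optionally, elaborate in their own words.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical reason buckets a reader can pick after a "no" vote.
FEEDBACK_BUCKETS = [
    "The steps didn't work for me",
    "The article doesn't cover my question",
    "The instructions are hard to follow",
    "This is feedback about the product, not the article",
]

@dataclass
class ArticleFeedback:
    article_id: str
    answered_question: bool          # "Did this article answer your question?"
    bucket: Optional[str] = None     # one of FEEDBACK_BUCKETS, for "no" votes
    comment: Optional[str] = None    # optional free-text detail

feedback = ArticleFeedback(
    article_id="how-to-issue-refunds",
    answered_question=False,
    bucket=FEEDBACK_BUCKETS[3],
    comment="I want partial refunds, which doesn't seem possible.",
)
```

Buckets like these also make triage easier: product-related feedback, for example, can be routed straight to the product team instead of sitting in a content queue.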
Use directional signals.
Maybe you have an article that has always just sat at a sad 31% customer satisfaction rating. You've eventually come to accept that this rating is going to stay pretty low unless your product changes—something you may or may not have control over.
What you can do, however, is understand how this rating fluctuates over time. At ratings this low, pay attention to even small fluctuations, especially around product changes, seasonal trends, or big updates to the content.
In other words, don't worry too much if this article is at 31% month after month. Raise an eyebrow if it all of a sudden drops to 17%.
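If you can export a monthly satisfaction rate per article, a few lines of scripting are enough to flag those sudden movements. This is a rough sketch with made-up numbers and an arbitrary alert threshold, not a prescribed monitoring setup.

```python
# Monthly satisfaction ratings for one article (illustrative numbers).
monthly_ratings = {
    "2017-01": 0.31,
    "2017-02": 0.30,
    "2017-03": 0.32,
    "2017-04": 0.17,  # something changed -- worth investigating
}

ALERT_THRESHOLD = 0.10  # flag month-over-month drops of 10 points or more

months = sorted(monthly_ratings)
for prev, curr in zip(months, months[1:]):
    drop = monthly_ratings[prev] - monthly_ratings[curr]
    if drop >= ALERT_THRESHOLD:
        print(f"{curr}: rating fell {drop:.0%} (from {monthly_ratings[prev]:.0%} "
              f"to {monthly_ratings[curr]:.0%}) -- check recent product or content changes")
```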
Compare content performance against support cases.
A full-picture view of the customer experience is more valuable than a single article rating metric.
The Eventbrite team uses article ratings primarily for insight on what needs attention and for specific customer feedback so we don't spin our wheels guessing what to fix.
But when it comes to understanding bottom-line success, we cross-reference article metrics with inbound support cases.
- At a high level, we compare overall help center sessions to the number of contacts our support team gets.
- For more specific content, we compare article views to cases tagged with a specific contact driver classification.
We also look at the next page path from the article—specifically, what percentage of users on the article go to our contact form next—as a sign of success or failure at resolving customer issues with content.
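Pulled together, those cross-references boil down to a few simple ratios. The sketch below assumes you can export the underlying counts from your help center and support analytics; every number in it is made up for illustration.

```python
# Illustrative monthly numbers -- substitute exports from your own analytics.
help_center_sessions = 250_000
total_support_cases = 18_000

article_views = 12_000               # views of one specific article
cases_for_its_contact_driver = 900   # cases tagged with that article's contact driver
views_then_contact_form = 600        # article views whose next page was the contact form

# Overall: how often does a help center visit turn into a support contact?
overall_contact_rate = total_support_cases / help_center_sessions

# Per article: how does readership compare to case volume on the same topic?
topic_contact_rate = cases_for_its_contact_driver / article_views

# Per article: how many readers head straight to the contact form after reading?
next_page_contact_rate = views_then_contact_form / article_views

print(f"Overall contact rate:   {overall_contact_rate:.1%}")
print(f"Topic contact rate:     {topic_contact_rate:.1%}")
print(f"Next-page contact rate: {next_page_contact_rate:.1%}")
```

Tracked month over month, a falling topic or next-page contact rate says more about self-service success than the article rating on its own.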
This triangulated way of examining article performance—directional success rates, structured and open-ended customer feedback, and performance against support trends—is how my team looks for self-service success.
What methods does your team use to determine article success?