At MailChimp, we spend a lot of time thinking about our users, and we strive to design our technical content with empathy and understanding for them. But we know that despite our best, most educated efforts, there’s still a human factor that’s difficult to account for.
Our users are human, and they play an active role in the success or failure of our message, so we know we need to hear from them. That’s the simple part. But how do we elicit helpful feedback, and how do we accurately interpret that feedback so we can refine and add to our content in a meaningful way?
We’re still trying to figure this out, but we’ve also learned a few things along the way. Here’s where we’ve been on our journey to collect feedback from our users, and where we think we need to go next.
Like most of our company, our user feedback process grew somewhat organically. We got small pieces of secondhand feedback on article content from our Support agents, but we didn’t have a direct way to hear from users ourselves.
So, we implemented an article feedback form at the beginning of 2014. On that first iteration, users needed to click a Give Feedback link at the bottom of an article to access and submit the form.
(Spoiler alert: if you’re thinking that Give Feedback link gets a little lost down there with all the related article links, you’re right.)
We did get feedback from this first iteration, and we used it. Each morning, I’d read through the submissions we received, and I’d make a ticket in our project management system if anything was actionable.
We floated along like this for a while, but we consistently saw a couple of central issues:
- We weren’t getting a lot of feedback overall.
- Most of the feedback we were getting wasn’t helpful or actionable.
When we did get helpful feedback, it resulted in improvements to our content and cross-linking, and helped eliminate confusion due to search or taxonomy issues. So, we knew there was value there.
About a year after we rolled out the first iteration of the feedback form, we started to talk about where we needed to go next.
Most feedback submissions consisted of a simple Yes/No response, which held some quantitative value, but didn’t give us an idea of exactly how the article was helpful or unhelpful. The limited number of comments we received often didn’t tell us much, either. “You rock!” is nice to hear, and “You suck” isn’t, but neither sentiment actually helps us pinpoint what is or isn’t working about a piece of content.
Users also frequently mistook the article feedback form for an avenue to Support. Not only did this result in a lot of unusable feedback, it also set a false expectation we didn’t want to create: users thought they were reaching Support when they weren’t.
So many things! But, we knew that if we were going to make changes to the form, we needed to be able to determine which changes were driving which results.
Baby steps… kind of
In the interest of starting small and tracking our progress, we decided the next iteration of the form would simply remove that Give Feedback link and display the form directly at the bottom of each article, and we’d see what that did for our volume and quality of feedback.
It did something, alright!
We immediately began receiving about 11 times the feedback we’d been getting before. To give you some rough numbers, we went from about 1,330 submissions per month to 15,000 submissions per month.
Whoops. That was a heck of a lot more than we bargained for. Monitoring feedback quickly became a multi-person job, yet the quality of the feedback was largely the same.
Which brings us to today. We know people are seeing the opportunity to submit feedback now, which is a good thing. But, we need to get to a place where we’re collecting more usable feedback, and we need better ways of processing and implementing that feedback.
We’re now looking ahead to our third iteration. Our focus going forward is on collecting more specific data, both qualitative and quantitative, without adding so much burden that users are discouraged from giving feedback at all.
To accomplish these goals, we’ve come up with a few additions and changes we’d like to implement over the next several months.
- Add a Likert Scale
Rather than ask users to make a sweeping judgment on an article, we’ll replace our current yes/no response and let them rate the helpfulness of articles on a scale.
- Add a Follow-Up for Negative Responses
When a user selects a negative article rating, we’ll give them a multiple-choice question and ask them to choose the option that best describes the problem they had with the content.
- Add a Clarifying Statement
Add a statement to clarify the form isn’t an avenue to our Support team and provide a link to our contact form for those users looking for support.
- Add Inline Commenting
When a user highlights a portion of text, we’ll show them a comment icon that lets them submit a piece of feedback on that specific part of the article.
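To make these ideas concrete, here’s a minimal sketch of how submissions from the planned form might be modeled and sorted into buckets. All of the names here (the problem categories, the field names, the bucket labels) are hypothetical illustrations, not MailChimp’s actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical categories for the negative-response follow-up question.
PROBLEM_CATEGORIES = {
    "inaccurate",     # content is wrong or out of date
    "unclear",        # hard to follow
    "incomplete",     # missing the information the user needed
    "wrong_article",  # user was looking for something else
}

@dataclass
class FeedbackSubmission:
    article_id: str
    rating: int                              # Likert scale: 1 (unhelpful) to 5 (helpful)
    problem_category: Optional[str] = None   # asked only after a negative rating
    highlighted_text: Optional[str] = None   # inline comment: the passage the user selected
    comment: Optional[str] = None

def bucket(submission: FeedbackSubmission) -> str:
    """Sort a submission into a triage bucket for review."""
    if submission.rating <= 2:
        if submission.problem_category in PROBLEM_CATEGORIES:
            return f"needs-review:{submission.problem_category}"
        return "needs-review:uncategorized"
    if submission.highlighted_text:
        return "inline-comment"
    return "positive"
```

The point of a structure like this is that a low rating plus a multiple-choice category routes itself to the right queue without anyone having to read and interpret a free-text comment first.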
Our hope is that a combination of these efforts will let us more easily sort feedback into manageable “buckets” and allow us to get a well-rounded picture of how articles are performing.
Implementing these ideas means making user interface changes, which means a lot of cross-team collaboration. We’ll need to communicate consistently and balance our goals and timetable with those of our collaborators. Most important, we’ll need to remain flexible and humble. If there’s one thing we know, it’s that almost nothing ever goes exactly as planned.
We’ll be updating the blog as we move forward with this ongoing project, so stay tuned!