How we use feedback to make product decisions

Published June 6, 2017

In November 2016, the team at Rome2Rio began experimenting with ways to ask our users about their experience whilst using the site. The goal was to give us an additional data point, besides revenue and conversions, that we could use to gauge the success of site changes and improved transit data. We began by simply asking our users the question “Was this result helpful?” and presenting Yes and No options along with a feedback field.

Segment Survey question form

We initially collected 200 responses per day, with lots of really useful feedback about ways we could improve, as well as acknowledgements of areas where we were providing good service.

Segment Survey initial results

As someone who is focussed on UI at Rome2Rio, I was really interested in seeing whether we could boost the response rate whilst maintaining a similar quality of feedback by changing only the user interface.

Experiment 1: Primary engagement

The first experiment we tried was to collect the Yes/No answer separately from the feedback message. By progressively disclosing the message box and send button, we collected 1.9x as many responses whilst maintaining a similar satisfaction rating.

Two variants of the segment survey form
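
As a rough sketch of what this progressive disclosure looks like on the front end (the element IDs and the /survey/answer endpoint below are hypothetical, not our production code), the Yes/No answer is submitted on its own, and the message box is only revealed afterwards:

```typescript
// Hypothetical markup: #survey-yes and #survey-no buttons, plus a hidden
// #survey-follow-up container holding the message box and send button.
const yesButton = document.querySelector<HTMLButtonElement>('#survey-yes');
const noButton = document.querySelector<HTMLButtonElement>('#survey-no');
const followUp = document.querySelector<HTMLElement>('#survey-follow-up');

function recordAnswer(wasHelpful: boolean): void {
  // Record the lightweight Yes/No answer immediately, so it counts as a
  // response even if the user never types a feedback message.
  void fetch('/survey/answer', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ wasHelpful }),
  });

  // Only now disclose the message box and send button.
  followUp?.classList.remove('hidden');
}

yesButton?.addEventListener('click', () => recordAnswer(true));
noButton?.addEventListener('click', () => recordAnswer(false));
```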

Experiment 2: Different scales

For the next experiment we wanted to see how changing the type of scale we used would influence responses and our satisfaction rating. After some research we found that the 5-point Likert scale (https://en.wikipedia.org/wiki/Likert_scale) is regarded as the most widely used format for survey responses. The experiment variants we chose were the Likert scale, a 10-point scale and NPS, with the Yes/No-style response from the previous experiment as the baseline for comparison.

4 other variants of the segment survey

To compare the satisfaction rating between variants with different scales, we mapped the highest option to 100% and the lowest to 0%. For example, in the YesNo variant the Yes option counted as a 100% satisfaction rating and the No option as 0%. Satisfaction ratings remained fairly consistent between the variants; what changed most was the number of responses each variant collected.
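
As a minimal sketch (a hypothetical helper, not our production code, and assuming a 1–10 range for the TenPoint variant), this mapping is a simple linear normalisation of each scale onto 0–100%:

```typescript
// Map a response on an arbitrary scale onto a 0-100% satisfaction rating.
function satisfaction(score: number, minScore: number, maxScore: number): number {
  return ((score - minScore) / (maxScore - minScore)) * 100;
}

satisfaction(1, 0, 1);  // YesNo: "Yes" -> 100%
satisfaction(0, 0, 1);  // YesNo: "No"  -> 0%
satisfaction(7, 1, 10); // TenPoint: 7 out of 10 -> ~66.7%
satisfaction(4, 1, 5);  // Likert: "Agree" on a 5-point scale -> 75%
```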

Based on the engagement rates, we decided to take the highest-performing variants, YesNoDisclose and TenPoint, through to the next round of experimentation.

Experiment 3: Iterating our design

Up until this point we’d been very conservative with the appearance of the surveys. In this experiment we tested whether we could increase engagement with our surveys by improving the user interface alone. To protect the user experience, we needed to make sure the survey wouldn’t draw attention away from our core information, so we replaced the radio buttons from the previous version with a subtle box design in our brand colors.

Design iteration of segment survey

The change resulted in a 200% increase in responses. This meant that 1% of all users who saw the survey were now answering it.

Experiment 4: Secondary engagement

In our final experiment we focused on how to get actionable, quality feedback alongside our satisfaction ratings. We wanted to know what users liked, so that we could keep working on the features they enjoy as well as fixing the parts of the product they were less satisfied with.

The content team and UI team worked together to come up with a list of options to present to users after they’d selected their satisfaction level. They derived the following options from common feedback messages we were receiving in the free-text field.

If a user selected No the options we showed were:

  • Booking
  • Schedules/Timetables
  • Prices
  • Difficult to use
  • Phone numbers
  • Station locations
  • Safety
  • Other

If a user selected Yes the options we showed were:

  • Schedules/Timetables
  • Prices
  • Booking
  • Photos
  • Easy to use
  • Other

To combat bias related to the position of the options, we randomised the order in which they were displayed, always placing ‘Other’ at the end.
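
A minimal sketch of that randomisation (illustrative, not our exact implementation) is a Fisher–Yates shuffle of every option except ‘Other’, which is always appended last:

```typescript
// Shuffle the reason options into a random order, keeping 'Other' last.
function shuffleOptions(options: string[]): string[] {
  const rest = options.filter((o) => o !== 'Other');

  // Fisher-Yates shuffle of everything except 'Other'.
  for (let i = rest.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [rest[i], rest[j]] = [rest[j], rest[i]];
  }

  // 'Other' always goes at the end, if it was present.
  return options.includes('Other') ? [...rest, 'Other'] : rest;
}

shuffleOptions(['Booking', 'Prices', 'Safety', 'Other']);
// e.g. ['Safety', 'Booking', 'Prices', 'Other']
```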

YesNo and 10 Point experiment results

After running the experiment for 3 weeks we observed the following:

  1. 45% of YesNo respondents and 51.2% of TenPoint respondents performed a secondary engagement interaction, telling us why they were giving us the rating.
  2. 6.8% of YesNo respondents and 9.5% of TenPoint respondents left a feedback message.
  3. Despite a higher proportion of TenPoint respondents performing a secondary interaction, the larger overall number of YesNo respondents meant that we received more feedback in total from the YesNo variant than from the TenPoint variant.

We decided to graduate the YesNo variant to 100% of users. In February of this year, 43,325 users responded to our user surveys, providing some great insight into ways we can improve our product.

Word cloud from secondary interaction

What we learned along the way

Asking users up front for detailed feedback lowers engagement

We found that we could increase the amount of overall engagement by gradually introducing questions rather than displaying them all upfront.

Granularity doesn’t necessarily increase accuracy

Adding more granularity to a scale (e.g. going from a 2-point to a 5-point or 10-point scale) doesn’t necessarily increase the accuracy of your responses unless your question specifically calls for it.

Granularity of 10-point survey

In the above analysis of the 10-point scale responses, you can see that most users answered at the extreme ends of the scale.

Satisfaction ratings remain steady over time

We’ve noticed that our average satisfaction rating remains steady from day to day. This gives us a snapshot of how we’re performing over time, and the secondary engagement gives us the why.

Graph of ranking over time

Randomisation matters

In our secondary engagement (the reason for giving the score), users have a slight preference for selecting options at the beginning of the list over the others.

Reason index spread

Therefore randomising the display of options in our survey is crucial to getting an accurate representation of results.

Randomized results

Easy access to data

Our data science team has built a tool to allow any team member to easily access up-to-date survey data. Using the exported user survey data, the content team has fixed data issues and the UI team has improved the way schedules are displayed to users.

The UI team’s improvements to provide extra contact information for transport agencies.

Teamwork

Finally, on a lighter note, the process of working on user surveys has helped ground the data science team, UI team and content team by bringing us closer to the joys and frustrations of our users. Working on a project that combines so many parts of the business has been a really enjoyable process and reinforced the benefits of cross-functional collaboration when creating a product that serves the needs of millions of users every month.

Written by
Software Development Engineer
Rome2rio, based in Melbourne, Australia, is organising the world’s transport information. We offer a multi-modal, door-to-door travel search engine that returns itineraries for air, train, coach, ferry, mass transit and driving options to and from any location. Discover the possibilities at rome2rio.com