January 2023
I was asked to determine whether a significant portion of our users use our software on devices which are 10+ years old. This data would inform the business decision of whether it is worthwhile for our developers to maintain the necessary software for these devices.
I decided that a survey would be the best way to answer this question, and as I was developing a research plan, I recognized several elements which would influence the response rate. I decided it would be best to run experiments with a small subset of users so I could understand how certain elements - subject line, email copy, and incentives - affected our response rates before sending the survey wide.
Ultimately, I wanted a high response rate I could feel confident in, without reaching out to too large a sample, which would have been an excessive use of resources. While not guaranteed, I take the view that when response rates are low, there is more potential for non-response error (when the users who respond differ from the users who do not respond in a way which affects the data), which I wanted to avoid. My goal for these small experiments was to get data which was significant in practice and would help me make decisions about the design of the main survey.
Our primary demographic is retired folks who RV full-time in large rigs. They tend to be cost-sensitive but rich in time. They are typically enthusiastic about RVing and the tools they use, and are often eager to provide feedback and participate in research.
There is research on the effectiveness of survey elements and incentives, but these studies typically focus on younger audiences, so I wanted to conduct my own research. From my experience and understanding of our users, I had theories about which elements would prove most effective, but nothing compares to firsthand data.
The Set Up
I sent out 4 batches of emails, 12 users per batch: 48 total
I wanted to understand whether a time callout in the subject line would affect the email open rate. Since the estimated completion time was so short - about 5 minutes - I thought it would be a draw for users. Ultimately, the open rates were nearly the same, but I went with V1 because I felt the additional context might be beneficial for users.
From user interviews and user feedback, I knew that our users varied in their tech-savviness, so I experimented with the email verbiage in case some found the word 'equipment' intimidating or vague. I ultimately went with V2 since it had a higher completion rate.
There is a lot of good research on the positive effect of monetary incentives on survey response rates. Our demographic - primarily retired and cost-sensitive - differs from the tested subjects, so I wanted to conduct my own research.
While the incentive-driven version performed slightly better, and typically I am an advocate of offering an incentive as a token of appreciation, I decided not to offer an incentive due to the high cost relative to the business decision's significance. I also felt that with this survey's length and ease, and based on the response rates during this experimentation, we wouldn’t have too hard of a time recruiting participation compared to some heftier research projects we had upcoming. Interestingly, only 7 out of 10 users claimed their incentive, suggesting that monetary rewards were not the primary motivator in this case.
Reminder Emails Work!
Something which surprised me was just how well the reminder emails worked. I sent one out on Thursday afternoon; 27 users opened the reminder email and 8 completed the survey. While I was initially hesitant because I didn't want to pester our users, I felt that one light reminder is respectful, and it really did help our response rate.
Better Understanding our Users
While the nature of this survey is imperfect - in trying to isolate a single element, I still had to include elements which were not controlled - my goal was to understand whether any of these elements would affect the survey results in a meaningful way, so that I could take that into consideration for the wider survey and possibly future projects.
I found that none of the elements I tested had a significant impact - at most there was a 6% difference - which reinforced that we have a unique user base who seem to enjoy providing feedback and engaging in user research. This was a big insight on its own.
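To put a difference of that size in context: with only 12 users per batch, a gap of a few percentage points is statistically indistinguishable from noise. Here is a quick back-of-the-envelope check using Fisher's exact test (the 8/12 vs. 7/12 response counts below are hypothetical, chosen only to illustrate a gap of roughly that size - they are not my actual batch numbers):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table (with the same
    margins) that is at least as extreme as the observed one.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_of(k):
        # Probability of seeing k responders in the first batch,
        # given fixed row and column totals.
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p_of(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    # Include ties with a small relative tolerance for float comparison.
    return sum(p_of(k) for k in range(lo, hi + 1)
               if p_of(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 8/12 responders in one batch vs. 7/12 in another
# (responded, did not respond) - an ~8-point gap in response rate.
p = fisher_exact_two_sided(8, 4, 7, 5)
print(f"p = {p:.3f}")  # far above 0.05: the gap is noise at this sample size
```

Fisher's exact test is a reasonable choice here rather than a z-test for proportions, since the normal approximation breaks down with batches this small.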
Iterating, Iterating, Iterating!
Beyond the experimentation with the survey elements, I enjoyed the practice of sending out a pilot survey to ensure the survey didn't have any issues. This approach was inspired by Caroline Jarrett's wonderful book 'Surveys that Work.' Adopting this iterative approach minimized the Total Survey Error and gave me peace of mind as I launched the final survey. The data from these small experiments also proved to be significant in practice, as it helped me make final decisions about the main survey.
This iterative approach to survey design is something I'll carry with me as I continue to do research.