*Please note that due to the sensitivity of the work, most of the images below have been pixelated.

OVERVIEW

ROLE: Lead User Researcher; Storyteller

TEAM: Jenny Broutin-Farah; Kamal Farah; Andrea McClave; Sean Olszewski

CHALLENGE: Plan and execute the SproutsIO Beta Testing Program. The goal was to surface information that could inform marketing, product design and development, and customer service, and build confidence with investors.

TIMEFRAME: 18 weeks

SOLUTION: I led the development and implementation of an assortment of study methodologies to efficiently gather the most relevant information addressing the goals of the Beta Testing Program.

SKILLS USED: Competitive Analysis, User Research, User Flows and Journeys, Feature Prioritization, Contextual Inquiry, User Testing, Usability Lab studies, Focus Groups, Survey Design

BEHAVIORAL INSIGHTS USED: Construct Development; Reciprocity

TOOLS USED: Illustrator; TypeForm; Excel; STATA; MailChimp


PROCESS

 

Planning & Research

Reviewed Segment Survey: Results from the Segment Survey (i.e. proximity to the SproutsIO office, likelihood of social spread, and ability/desire to participate) guided how we divided the beta applicants into three different study groups.

The three groups were: Home Beta Testers (HBT) - individuals who would test the first beta prototype at their home; Office Beta Testers (OBT) - individuals who would come into the SproutsIO office to participate in focus groups and usability-lab studies; and General Beta Testers (GBT) - everyone else who was not close enough to participate in-person.
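To make the triage concrete, here is a minimal sketch of the assignment logic in Python. The field names and the 25-mile threshold are hypothetical stand-ins for the actual Segment Survey criteria, not our real cutoffs.

```python
# Sketch of the beta-group triage logic. Field names and thresholds are
# hypothetical stand-ins for the actual Segment Survey criteria.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    miles_from_office: float   # proximity to the SproutsIO office
    social_reach: int          # rough proxy for likelihood of social spread
    can_host_device: bool      # ability/desire to test a prototype at home

def assign_group(a: Applicant) -> str:
    """Assign an applicant to HBT, OBT, or GBT."""
    if a.can_host_device and a.miles_from_office <= 25:
        return "HBT"   # tests the first beta prototype at home
    if a.miles_from_office <= 25:
        return "OBT"   # joins focus groups and usability-lab studies
    return "GBT"       # everyone else, surveyed remotely

for a in [Applicant("A", 5, 40, True),
          Applicant("B", 12, 800, False),
          Applicant("C", 300, 150, True)]:
    print(a.name, assign_group(a))   # -> A HBT, B OBT, C GBT
```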

Next, I paired a variety of study methods with each beta group based upon how well each pairing maximized the quality and quantity of actionable data.

The Segment Survey also provided our team with valuable insights regarding the app features and hardware interaction features (e.g. blinking lights, misting control) to develop and target in the initial rounds of beta testing.

Brainstorming beta groups and study methodologies

User Journey: To identify the app features, hardware design attributes, points of human-product interaction, behaviors, and other outcomes we wanted to study, we sketched out potential user journeys. These included primary touch-points like unpacking, setup, growth period, harvest, and re-order.

Beta Schedule: Based upon what we wanted to study and when we needed to study it, I developed a schedule for the beta program. In the original version of the schedule, we anticipated that our HBT group would include enough beta testers in each user segment (persona) for the results to be statistically significant.

 

Original beta schedule

 

Although the response rate to our first study was surprisingly high, once we distilled who could participate within each segment, we were left with too few individuals for a meaningful sample size. We decided to collapse our segments into one group so that we could still run our quantitative analysis on the HBT program. This meant that our insights applied generally to our audience rather than specifically to our user segments. With this new information, I redesigned the beta schedule.
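For a sense of the arithmetic behind that decision, here is a minimal power-analysis sketch; the effect size, alpha, power, and headcounts are illustrative defaults, not our study's actual numbers.

```python
# Why we pooled the segments: a standard power calculation shows how
# quickly per-segment samples fall short. All numbers are illustrative.
from statsmodels.stats.power import TTestIndPower

n_needed = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"~{n_needed:.0f} testers needed per group")   # ~64

# With, say, 40 eligible HBT testers split across 4 personas, each
# segment holds ~10 people -- far below ~64, so we analyzed one pooled group.
eligible_testers, segments = 40, 4
print(eligible_testers / segments)   # 10.0
```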

 

Updated beta schedule

 
 

DESIGN

Focus Group & Usability Lab Study (OBT Group): Through contextual inquiry, I borrowed insights from my graduate school research and combined them with information decks published online by MadPow. With these insights, and many, many brainstorming sessions with the team, I developed guides for the focus groups and usability lab studies.

 

Focus group guide

 

Planning the quantitative questions was difficult because it involved standardizing the type of information to be collected across future studies with the HBT and GBT groups. This meant far more upfront focus on strategy and question design.

 
Usability lab study guide

 

Quantitative Surveys: As for the folks taking our product home with them, I developed a hybrid of cross-sectional and time-varying surveys so that we could measure both instantaneous metrics and metrics that might change over time. For example, if the usability-lab study revealed issues, we might want to ask specific follow-up questions about that moment in the set-up process. We also needed time-varying questions to track engagement throughout the process so that we could identify insightful moments of using a SproutsIO at home.
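One way to picture the hybrid design is a long-format response table in which cross-sectional items appear once and time-varying items repeat per wave. The column names, waves, and values below are invented for illustration.

```python
# Sketch of a long-format table for the hybrid survey design.
# Columns, waves, and values are invented for illustration.
import pandas as pd

responses = pd.DataFrame({
    "tester_id":  [1, 1, 1, 2, 2, 2],
    "wave":       ["setup", "week_2", "harvest"] * 2,   # time-varying waves
    "engagement": [5, 4, 5, 3, 2, 4],                   # asked every wave
    "setup_ease": [4, None, None, 2, None, None],       # asked once, at setup
})

# Time-varying items chart engagement across the growing cycle...
print(responses.groupby("wave", sort=False)["engagement"].mean())
# ...while cross-sectional items capture one-time moments like set-up.
print(responses["setup_ease"].dropna().mean())
```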

Master list

Master list (continued)

Master list (continued)

I used a combination of competitive analysis (e.g. signing up for beta testing at other companies), contextual inquiry (e.g. referencing academic textbooks on validated and reliable question constructs), and my own expertise in question design to roll out a series of surveys for the HBT, OBT, and GBT groups.

 
Competitive analysis

Contextual inquiry

 
 

We provided this recipe and an acorn squash to participants in our focus groups.

Behavioral Insights: We lowered the expense of incentivizing participation by replacing costly monetary rewards with thoughtful gifts. Where many studies use gift cards, we created recipes, gifted inexpensive gourds, cultivated feelings of exclusivity, and delivered the gifts as variable (unexpected) rewards to foster positive experiences and increase participation.
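As a toy sketch of the variable-reward idea: rather than a fixed gift at every session, gifts arrive on an unpredictable subset of sessions. The gift names and probability below are invented for illustration.

```python
# Toy sketch of variable (unexpected) rewards: gifts arrive on a random
# subset of sessions rather than predictably. Names and odds are invented.
import random
from typing import Optional

GIFTS = ["acorn squash + recipe", "seed refill", "handwritten thank-you"]

def maybe_reward(p: float = 0.4) -> Optional[str]:
    """Return a surprise gift on roughly p of sessions, else nothing."""
    return random.choice(GIFTS) if random.random() < p else None

random.seed(7)
for session in range(1, 6):
    print(f"session {session}:", maybe_reward() or "no gift this time")
```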

 

Minimizing Risk: We selected individuals who were least likely to be opinion leaders (i.e. social nodes) for the early stages of the Beta Program, when the risk of things going awry was higher. Later, as we worked out the preliminary issues with our product, we integrated the opinion leaders into the HBT group.
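A rough sketch of that phasing rule, where "social_reach" is a hypothetical proxy for opinion leadership rather than a measure we actually collected:

```python
# Sketch of risk-based phasing: onboard low-influence testers first and
# opinion leaders later. "social_reach" is a hypothetical proxy score.
def phase_rollout(applicants, early_slots):
    """Return (early_wave, later_wave), least influential testers first."""
    ranked = sorted(applicants, key=lambda a: a["social_reach"])
    return ranked[:early_slots], ranked[early_slots:]

applicants = [
    {"name": "A", "social_reach": 40},
    {"name": "B", "social_reach": 900},   # likely opinion leader
    {"name": "C", "social_reach": 120},
]
early, later = phase_rollout(applicants, early_slots=2)
print([a["name"] for a in early])   # ['A', 'C'] join while bugs are likely
print([a["name"] for a in later])   # ['B'] joins once issues are worked out
```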

 

TESTING

Feedback from Focus Groups: These open-ended discussion sessions helped us prioritize the initial features that we would begin to test with SproutsIO in the usability and home experiments.

Feedback from Usability Lab Studies: We used the feedback from these studies to quickly iterate the user experience of the app and the hardware in preparation for the HBT rounds of testing at home.

 

SOLUTION

Issues: One of the most difficult parts of the process was determining how to prioritize what to study and when to study it. One way we addressed this was by anticipating how certain types of information (quantitative vs. qualitative) from each study could inform the next set of studies. Other times, we learned from missed opportunities to gain insights.

Results: The Beta Program provided us with a plethora of organized insights for improving our product experience. On one occasion during a focus group, we noted a general desire to learn about the growing process through SproutsIO. This led us to track the effect of the SproutsIO app content on users' understanding of the growing process through specific questions during the HBT tests. We used this insight to create a question that could reveal the necessary information about the construct we wanted to measure.
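To make the construct-measurement step concrete: several related Likert items can be averaged into a single "understanding of the growing process" score and compared across HBT waves. The items and responses below are hypothetical, not the questions we fielded.

```python
# Hypothetical sketch of scoring a multi-item construct; the items stand
# in for our "understanding of the growing process" questions.
import statistics

# One tester's 1-5 Likert answers before and after using the app content.
items_before = {"knows_ph": 2, "knows_misting": 3, "knows_harvest_timing": 2}
items_after  = {"knows_ph": 4, "knows_misting": 4, "knows_harvest_timing": 5}

def construct_score(items: dict) -> float:
    """Average related Likert items into one construct score."""
    return statistics.mean(items.values())

delta = construct_score(items_after) - construct_score(items_before)
print(round(delta, 2))   # 2.0 -- the before/after change tracks the effect
```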

As I entered my final semester of graduate school, I developed a protocol for the team to follow so they could continue the beta testing program as it entered the HBT phases.