Case Study 2: Generative research to launch a new app
No time? Quick skim:
Generative research to inform development of a new app. With a moderate budget and a flexible timeline, I started gently and allowed for multiple iterations. Concept testing via survey, plus a diary study and focus groups, shed light on potential user motivations and context. Iterations are still in progress, but this early testing highlighted what mattered most to users, informed scope, and allowed stakeholders to take next steps in design.
ABOUT: Solvr is a new app in early development. The creator had an early prototype in Figma and a vision for how the app would work, but no information on user needs or the validity of the design.
PROBLEM: Stakeholders needed user insights to explore the market need for this product and tailor scope and scale to meet potential user needs.
PROJECT OBJECTIVES: Identify user needs not currently met by competing products, validate concept and inform design, define scope and scale.
MY ROLE: I was the sole UX researcher and we had a moderate budget to work with.
Discovery
Stakeholder meetings
Concept testing (already in progress by stakeholders)
Familiarization with 2 preliminary prototypes in development (one mock-up in Figma and one process-flow test in Google Docs)
Competitive analysis of existing similar products
The trickiest part of this discovery phase for me was that the founder just had SO many ideas and exciting directions he could take this that it was hard for us both to really focus our research questions and goals. Multiple stakeholder meetings and constantly revisiting the research plan ultimately kept me focused on concrete goals.
Research questions
Who will use Solvr? Is it only for college-educated professionals? What defines the target audience?
What kind of questions/topics will it cover? What parameters should exist around the question content? How will this impact the definition of the target audience?
What visual format will appeal to potential users? What design features will be most helpful?
How should categories be used to sort questions, if at all?
How is Solvr different from its main competitors? What sets it apart, and what is its niche?
How will answers on the platform be curated for quality?
(Last two added after concept testing highlighted these as concerns of potential users)
Design Comprehension Test
I started with this test to gauge general interest in the product, perception of the idea in its proposed format, and first impressions.
Stakeholders really wanted idea validation as a starting point, so this concept testing step felt helpful.
A brief (mean time to complete = 3 minutes) survey of first impressions created in UsabilityHub
16 participants recruited via Latticework network (screening criteria = tech-savvy, working professionals)
Participants were shown a static image of some elements of Solvr in its initial design phase and answered questions about their impressions, attitudes, and opinions
Image shared with participants in the design comprehension test
*A limitation of this participant pool was that 85% of its members are male; not ideal, since we wanted a representative sample of the general population. However, as a ready-to-go potential user group who met all other screening criteria (educated, tech-savvy, working professionals), stakeholders and I felt this was an effective early-stage option that saved the time of developing a screener. We noted the sample limitation to ensure we captured other perspectives elsewhere in the research process.
Diary Study
With the product idea validated in concept testing, we started to consider scope and scale. I wanted more longitudinal insights about the context in which users might turn to Solvr (or competing products) and why.
2 participants
3 consecutive days of logging each time they went online to ask a question, with prompts in a Google Sheets doc
Daily check-ins with participants via text
*3 days wasn’t long, but given the frequency of logging required, I didn’t want participants to burn out! The intention was short, frequent entries any time they went online to ask a question, so 3 days felt like enough to capture the information I needed without overkill.
Results from both methods yielded some great findings, and focus groups felt like a natural next step to ensure we were solving a problem for users at the right scale. However, in crafting my focus group questions, I hit a block.
Back to the research goals…
At this point, I realized I was struggling with designing my focus group guide because I had lost clarity on the research questions. I went back to my research plan and re-read my initial stakeholder interview notes, and clarified the business objectives and my research goals with some rough sketching and concept mapping (see image). I then conducted a follow-up stakeholder meeting to verify the newly re-shaped goals. As a result of this process, we decided to narrow the goal of the focus groups to concentrate only on goal #1, since insights gained around this goal would have an impact on goals 2 & 3. The approach now felt much clearer and I revised my focus group plan to a much more productive version.
Focus Groups
Back on track!
2 remote focus groups of 5 people each, conducted via Google Meet and Zoom. Each was 1 hour long.
Participants were tech-savvy professionals recruited via professional and personal networks
Incentive offered was a gift card to cover their meal on the day of the focus group - the intention was to create a casual “eat dinner and chat with us” atmosphere for a relaxed, candid discussion
“Quora and Reddit are intimidating - it’s basically just a bunch of dudes yelling at each other”
— Focus group participant
“The vibe is a bit bro-ey, it looks like some guy’s music site. If we’re going for trustworthy, I wouldn’t trust a lightning bolt logo”
— Focus group participant
Affinity mapping qualitative survey data
Data Analysis
The design comprehension test yielded a lot of qualitative data, so I chose physical affinity mapping with sticky notes to draw out themes and supporting quotes.
I calculated a few descriptive statistics, such as percentages, pulled out strong quotes, and created graphs to emphasize key findings for stakeholders. *Survey findings were only used where supported by other data points, since the small sample size meant results were not statistically significant
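To illustrate why a 16-person survey can't support statistically significant claims on its own, here's a minimal Python sketch computing a normal-approximation 95% confidence interval for a survey proportion. The 12-of-16 figure is a made-up example, not a result from this study:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical: 12 of 16 participants (75%) reacted positively.
low, high = proportion_ci(12 / 16, 16)
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 54% to 96%
```

With n = 16 the interval spans more than 40 percentage points, which is exactly why survey findings here were only leaned on when corroborated by the diary study or focus groups.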
To analyze the diary study results, I conducted thematic analysis in Excel, identifying 4 distinct themes.
Videos of the focus groups were uploaded and automatically transcribed, and I then conducted thematic analysis using Condens.
Insights
One of the major themes across all testing was TRUST. Participants repeatedly cited it as the most important factor in whether they could successfully find an answer to their question online: trust in the source of the answer.
Expertise or credibility emerged as a secondary theme; participants want answers from real humans, but only informed, verified opinions from people who know what they are talking about.
Human connection mattered to participants, but focus groups revealed that a big downside of existing platforms is the perception that users are berated for their opinions, which makes those platforms intimidating.
The current design did not support the trustworthy and credible brand that users want.
Outcomes
Coming soon! Iteration still in progress on this project.
Further research recommendations following this round of testing:
Since findings highlighted some issues with aesthetics/brand perception, a desirability study is recommended to ensure design aesthetic matches brand goals.
A/B testing using third-party software such as Optimizely will be conducted to determine which feature options are most user-friendly. Potential users will be shown two alternate layouts of the profile page (e.g., with stats like follower count displayed or hidden) and give feedback on which they prefer and why.
Card sorting should be utilized to determine which categories/buckets for question types make the most sense to potential users.
A tree test should be conducted after applying the findings of the above tests, to determine whether users can find what they are looking for quickly and easily.
Further iterations of usability tests should be conducted in the second phase of design.
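If the recommended A/B test is run, preference between the two profile-page layouts could be evaluated with a simple two-proportion z-test. A minimal sketch (the sample counts below are hypothetical, not data from this project):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: do variants A and B differ in preference rate?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 60 of 100 prefer layout A (stats shown), 45 of 100 prefer B.
z = two_proportion_z(60, 100, 45, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at p < .05
```

Tools like Optimizely report this kind of significance automatically, but the calculation is worth understanding when deciding how many participants the test needs.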