
I explore how people learn, stay motivated, and grow in confidence.
I help teams turn insights into thoughtful product decisions.

A multi-phase research programme investigating when learners recognise value in AI-powered language learning — and why strong aha moments don’t always translate into Premium intent.
Led a four-phase programme combining usability testing, interviews, and surveys across Free and Premium users
Found that AI features trigger strong aha moments, but limited discoverability and unclear framing reduce their impact
Shaped Premium messaging, tier differentiation, and roadmap priorities for AI and vocabulary review iteration in H1 2026

A large-scale behavioural analysis of Community exercise submissions, examining how learners actually use social learning features — and why many exercises fail to deliver meaningful learning without clearer structure, guidance, and placement.
Analysed 160k+ Community submissions across English and Spanish to uncover real usage patterns
Identified distinct learner behaviours revealing misalignment between task design and learner expectations
Informed clearer guidelines for when and how Community exercises should be introduced to support meaningful learning

A 4-week diary study exploring how adult language learners stay motivated, build habits, and experience AI-supported learning in everyday life.
Led a longitudinal diary study with 26 learners (A1–C1), combining in-context self-reports and product analytics
Identified strong intrinsic motivation alongside a clear mid-journey dip, revealing key moments for product intervention
Informed AI feature expansion and prioritisation across Q3–Q4 2025

A large-scale efficacy study translating independent learning outcome data into actionable product insights, helping ground AI and Premium strategy in evidence rather than assumptions.
Led internal research synthesis and product interpretation of a six-language efficacy study conducted with an external research team
Analysed learning outcomes across 1,200+ users using pre/post testing, study time, and feature usage data
Informed evidence-based product strategy around AI features, Live Lessons, and Premium positioning

A mixed-methods study evaluating whether AI-powered mistake review delivers meaningful learning and how trust, relevance, and visible progress shape perceived value.
Led end-to-end research combining moderated user testing and a large-scale global survey
Found that generic feedback and limited transparency erode trust, with learners expecting review of their own mistakes
Informed the redesign of Mistake Repair and review features toward more personalised, adaptive feedback