Legibility Audit · Findings Report

Where Claro Skin loses users before they trust the recommendation.

A review of the consultation flow, results experience, and post-prescription onboarding

Client
Claro Skin (anonymized)
Delivered by
Legible Research
Engagement
Legibility Audit
Date
March 2026
Client name and details have been anonymized in accordance with NDA obligations.

Claro Skin's consultation flow is well-designed at the surface level — the quiz is clean, the inputs feel manageable, and the transition to a results page is fast. The breakdown happens at the moment that matters most: when users receive their formula and need to understand why it's right for them.

The results page delivers a prescription without delivering comprehension. Users receive ingredient names without plain-language explanations, a treatment plan without a timeline, and a product without the "why this, for you" story that would make them confident enough to commit. The education resources that could bridge this gap exist — they're simply buried three clicks away from the moment a user needs them most.

The consequence is measurable: App Store reviews and Reddit threads show a consistent pattern of cancellations within the first 4–8 weeks, citing confusion about whether the product is working or is appropriate for the user's skin. This is a comprehension problem, not a formulation problem.

5 findings identified
2 critical severity
3 addressable in <4 weeks

How this audit was conducted.

This Legibility Audit uses three complementary methods to surface comprehension gaps — no participant recruitment required, results in two weeks.

🔬
Expert review
Full walkthrough of the product flow with a researcher's lens — documenting every moment where a user would need to understand something in order to proceed confidently.
📡
Signal analysis
Systematic review of App Store reviews, Reddit discussions, and Trustpilot — coded for comprehension-related themes: confusion, unexpected outcomes, "I didn't understand."
🗺
Mental model mapping
Mapping the assumptions the product makes about what users know against what users in this category typically know — identifying the gaps that create friction.

Five places the comprehension gap costs you.

Each finding includes the observation, supporting signal, and a specific recommendation. Findings are ordered by severity.

01
The results page explains what — not why.
Users receive an ingredient list with no explanation of the reasoning behind it. The connection between their skin concerns and the formula they're given is never made explicit.
Critical
What we observed

After completing the 12-question consultation, users land on a results page showing three product names and four ingredients. There is no sentence — anywhere on this page — that says "we recommended this because [your specific concern]." The formula appears authoritative but unexplained, like a prescription pad without a diagnosis conversation.

Users who don't already know what "tretinoin" or "azelaic acid" does have no path to understanding why these were selected for them specifically, versus any other user who completed the quiz.

App Store signal

"I got my results but I have no idea why they gave me this specific combination. It just feels random." — 2-star review, repeated 14 times in varying forms across 90-day review window.

Recommendation

Add a single "Why this formula for you" section to the results page — 2–3 sentences that explicitly connect the user's quiz answers to their specific ingredients. This doesn't require a redesign; it's a copy and logic problem.

Example: "Based on your concerns about acne scarring and sensitivity, your formula includes azelaic acid (which fades post-acne marks without irritating sensitive skin) rather than stronger actives like tretinoin."
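Because this is a copy-and-logic change rather than a design change, the explanation can be generated from the quiz answers the product already collects. A minimal sketch, assuming a hypothetical quiz schema (the concern keys, skin types, and rationale copy below are invented for illustration, not Claro's actual data model):

```python
# Hypothetical sketch: generate a "Why this formula for you" sentence
# from quiz answers. The (concern, skin_type) keys and the rationale
# copy are illustrative assumptions, not Claro's real quiz schema.

RATIONALES = {
    ("acne scarring", "sensitive"): (
        "azelaic acid",
        "fades post-acne marks without irritating sensitive skin",
    ),
    ("acne scarring", "resilient"): (
        "tretinoin",
        "speeds up cell turnover to resurface post-acne marks",
    ),
}

def why_this_formula(concern: str, skin_type: str) -> str:
    """Turn a quiz-answer pair into one plain-language explanation."""
    ingredient, reason = RATIONALES[(concern, skin_type)]
    return (
        f"Based on your concerns about {concern} and {skin_type} skin, "
        f"your formula includes {ingredient} (which {reason})."
    )

print(why_this_formula("acne scarring", "sensitive"))
```

The point of the sketch is that the recommendation engine already knows the mapping; the fix is surfacing one sentence of it to the user.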

📝 Copy change
⚡ Low effort
📈 High retention impact
02
There is no timeline for results — and users are inventing one.
When a product doesn't set expectations about when improvement will appear, users set their own. That expectation is almost always too soon.
Critical
What we observed

The results page, the confirmation email, and the product packaging all omit a timeline for expected improvement. This is a known problem in prescription skincare: tretinoin and azelaic acid typically require 8–12 weeks to show meaningful results, with a potential "purging" phase in weeks 3–6.

Without this context, users who see no improvement at week 4 — or who experience purging — have no framework to understand what's happening. Reddit shows them drawing the rational conclusion: the product isn't working.

Reddit signal (r/SkincareAddiction)

"Tried Claro for 6 weeks, my skin actually got worse at first. Nobody told me this would happen. Cancelled." — cross-posted in 3 threads, 47 upvotes.

Recommendation

Add a "What to expect" timeline — a simple visual or short-form explanation that covers: weeks 1–3 (adjustment), weeks 4–6 (potential purging, this is normal), weeks 8–12 (visible improvement begins). This should appear on the results page and in the onboarding email sequence.

Setting accurate expectations is the single highest-leverage retention lever for prescription skincare. Users who understand the purging phase don't cancel — users who don't, do.

📧 Onboarding email
📄 Results page
⚡ Low effort
03
Medical terminology is used without translation.
Terms like "retinoid," "hyperpigmentation," and "sebaceous activity" appear throughout the product experience with no plain-language alternative.
High
What we observed

A term-by-term review of the results page and first onboarding email finds 11 clinical or technical terms used without definition. The quiz itself uses "seborrheic dermatitis," "post-inflammatory hyperpigmentation," and "comedogenic" — terms that require dermatological literacy to answer accurately.

This creates two failure modes: users who don't know these terms answer inaccurately (reducing formula precision), and users who receive a results page full of clinical language feel they're receiving a medical document, not a personalized recommendation.

Recommendation

Audit all clinical terminology in the quiz and results flow. For each term: either replace with plain language, add an inline tooltip/definition, or add a parenthetical — "retinoid (a form of Vitamin A that speeds up skin cell turnover)".

The goal isn't to dumb down the science — it's to make users feel like the formula understands their skin, not like they're reading a chart note. Comprehension builds trust; clinical opacity erodes it.

✏️ Content audit
🔧 Medium effort
04
The quiz treats uncertainty as a problem, not a data point.
Several quiz questions have no "I'm not sure" option. Users who don't know the answer are forced to guess — which they do, and which reduces their confidence in the result.
High
What we observed

Questions like "How would you describe your skin type?" and "Do you currently use any retinoids?" require a definitive answer. A significant portion of Claro's target users — people new to prescription skincare — don't know their skin type in clinical terms and haven't used retinoids before.

Forced-choice questions on topics of genuine uncertainty cause users to guess, then doubt the recommendation that follows. "I wasn't sure how to answer some questions, so I wonder if the formula is actually right for me" is a trust collapse before the product arrives.

Recommendation

Add "I'm not sure" / "I don't know" as a valid quiz response where relevant. When selected, the product should acknowledge this and explain how it handles uncertainty — e.g., "No problem — we'll start with gentler actives and adjust based on how your skin responds."

This turns an anxiety point into a trust moment: the user learns the system is designed for people who don't have all the answers, not just dermatology enthusiasts.
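In logic terms, "I'm not sure" becomes a third valid branch rather than a forced guess. A minimal sketch of what that branch could look like, using hypothetical option values and a gentler-default rule that is an assumption on our part, not Claro's actual formula logic:

```python
# Hypothetical sketch: treat "I'm not sure" as a data point for the
# question "Do you currently use any retinoids?". The answer values,
# strengths, and gentler-default rule are illustrative assumptions.

def pick_active(retinoid_answer: str) -> tuple[str, str]:
    """Return (active, user-facing acknowledgement) for one quiz answer."""
    if retinoid_answer == "yes":
        return ("tretinoin 0.05%",
                "Since your skin already tolerates retinoids, "
                "we can start at a standard strength.")
    if retinoid_answer == "no":
        return ("tretinoin 0.025%",
                "We'll start at a lower strength so your skin can adjust.")
    # "I'm not sure" is valid input: default to the gentlest option and
    # tell the user explicitly how the system handles uncertainty.
    return ("azelaic acid",
            "No problem — we'll start with gentler actives "
            "and adjust based on how your skin responds.")

active, message = pick_active("not sure")
print(active, "->", message)
```

The design choice worth noting: the uncertainty branch returns copy as well as a formula decision, so the acknowledgement the recommendation shows the user is generated in the same place the formula is chosen.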

🧩 Quiz UX
🔧 Medium effort
05
Educational resources exist — but are three clicks away from where users need them.
Claro has a solid ingredient glossary and a "how it works" guide. They are linked from the footer, not from the results page or onboarding sequence where comprehension failures actually occur.
Medium
What we observed

The ingredient glossary at claro.com/ingredients is genuinely useful — it explains each active, why it's used, and what to expect. A "How your formula is made" guide is also available under /learn. Neither is linked or referenced from the results page, confirmation email, or product packaging.

The educational content was likely built to drive SEO or for curious users who go looking. It's not surfaced at the moment of need — when a user receives a formula they don't yet understand.

Recommendation

Surface these resources contextually. On the results page, link each ingredient name to its glossary entry. In the first onboarding email (after product ships), include a "Before your formula arrives" section with a link to the what-to-expect guide.

This is a positioning win as much as a UX win: it signals that Claro is a brand that wants users to understand their skin, not just sell them a product.

🔗 Link changes
📧 Email sequence
⚡ Low effort

Where to start — impact vs. effort.

Three of the five findings are low-effort, high-impact copy and content changes. They don't require engineering resources or design overhaul — they require someone to write better explanations.

Finding                                        Severity   Effort   Start with
01  Results page: add "why this formula"       Critical   Low      ✓ This week
02  Add results timeline + purging warning     Critical   Low      ✓ This week
05  Surface existing educational content       Medium     Low      ✓ This week
03  Plain-language terminology audit           High       Medium   Sprint 2
04  Quiz "I'm not sure" option + logic         High       Medium   Sprint 2

How to move from findings to fixes.

The three low-effort findings can be addressed in a single focused sprint without engineering involvement. Here's a suggested path forward:

Week 1: Share this report with the team. Assign Findings 01 and 02 to a copywriter or PM — they're writing tasks, not design tasks. Finding 05 is a link-placement decision that can be made in a day.

Week 2–3: Run a quick usability check on the updated results page with 3–5 users to validate that the new "why this formula" explanation lands as intended before shipping more broadly.

Sprint 2: Tackle the terminology audit and quiz uncertainty handling together — they benefit from the same content strategy thinking.

I'm available to support implementation review or to run a follow-up validation study on the updated flow. A 5-person usability test on the revised results experience would directly measure whether these changes close the comprehension gap.

Prepared by Monica S. — Legible Research

Legible Research is a specialist UX research practice for consumer health and beauty products. Questions about this report or next steps: hello@legibleresearch.com

legibleresearch.com