Usability Testing · Findings Report

Why power users weren't adopting
the feature built for them.

Moderated usability sessions on a newly shipped workflow automation feature with near-zero adoption

Client
Fieldline (anonymized)
Delivered by
Legible Research
Engagement
Usability Testing
Date
February 2026
Client name and details have been anonymized in accordance with NDA obligations.

Fieldline's workflow automation feature was built in direct response to sales call feedback — power users consistently said they wanted to automate repetitive task sequences. The feature shipped. Adoption was 3% after 90 days. The problem wasn't that users didn't want automation. It was that they couldn't map what the feature did to the mental model they already had for how their work was structured.

Eight moderated usability sessions revealed a consistent pattern: users understood the concept of automation but couldn't identify which of their own workflows the feature applied to. The trigger-action model the product used was unfamiliar. The entry point buried the feature behind a configuration UI that felt like setup cost. And the only examples in the onboarding flow used generic use cases that didn't reflect how Fieldline's actual user base works.

This is a discoverability and framing problem, not a demand problem. The users who want this feature exist — they just couldn't see themselves in it.

8
Participants
4
Findings identified
3%
Feature adoption at 90 days

How this study was conducted.

Eight moderated usability sessions, each 60 minutes, conducted over two weeks via video call. Participants were existing Fieldline power users — defined as users who log in 4+ days per week and have been on the platform for at least 3 months.

🎯
Task-based sessions
Each participant was given 3 structured tasks designed to surface where the automation feature broke down — from first discovery through to successfully creating a working automation.
💬
Think-aloud protocol
Participants narrated their thinking as they worked through each task. This surfaces the gap between what users expect to happen and what actually happens — the source of most usability failures.
🗺
Mental model interview
A 15-minute structured interview at the end of each session explored how users currently think about their own workflows — to understand what framing would make the automation feature legible to them.

Participant breakdown

ID Role Team size Usage pattern
P1 Operations Manager 12 people Daily
P2 Product Manager 8 people Daily
P3 Project Lead 5 people Daily
P4 Engineering Manager 10 people Daily
P5 Operations Manager 20 people Daily
P6 Account Director 6 people Daily
P7 Product Manager 7 people Daily
P8 Team Lead 9 people Daily

Four reasons the feature isn't being used.

Each finding includes the observed behavior, supporting participant quotes, and a specific recommendation. Findings are ordered by severity.

01
Users can't find the feature — and those who do don't recognize it as automation.
The entry point is buried in the Settings menu under "Workflow Rules." Six of eight participants navigated past it without registering it as the automation feature they'd heard about in the release email.
Critical
What we observed

When asked to find the automation feature, 6 of 8 participants went first to the main navigation, then to the project view, then to the task detail panel — in that order. None looked in Settings on the first pass. When prompted to try Settings, most still passed over "Workflow Rules" because the label didn't say "automation."

"I thought automation would be, like, a dedicated thing — not buried in settings. Settings is where you go to change your password." — P3, Project Lead
Recommendation

Surface automation as a first-class feature. Add an "Automate" entry point directly on the project or task view — contextually, where users are already doing the work the feature is meant to streamline. Rename "Workflow Rules" to "Automations" to match the vocabulary users already have.

A persistent, contextual entry point (e.g., "Automate this" next to recurring task types) would eliminate the discovery problem entirely and frame the feature in the user's own language.

🧭 Navigation
✏️ Copy/labelling
🔧 Medium effort
02
The trigger-action model is unfamiliar — users think in terms of tasks and sequences, not events and responses.
The feature UI uses "When [trigger] → Then [action]" logic. This is standard for automation tools, but most participants had no prior exposure to automation software and didn't have a mental model for it.
Critical
What we observed

When participants reached the automation builder, most stalled at the "Select a trigger" dropdown. The concept of a trigger — an event that initiates a response — wasn't intuitive. The two participants who use other automation tools moved through this step quickly; the remaining six spent an average of 3 minutes trying to work out what "trigger" meant before giving up or guessing.

"I don't know what a trigger is in this context. Like, I trigger things. Does it mean I have to do something first?" — P5, Operations Manager
Recommendation

Reframe the builder UI around the user's own language. Instead of "Select a trigger," use "When does this happen?" Instead of "Add an action," use "What should happen next?" This maps directly to how participants described their own workflows in the post-session interview: in terms of situations and responses, not events and actions.

Consider adding 3–4 template automations pre-built with Fieldline-specific scenarios (e.g., "When a task is marked complete, notify the next assignee"). Templates lower the abstraction cost significantly.
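To make the template recommendation concrete, here is a minimal sketch of how pre-built templates might be represented — the field names, trigger strings, and `AutomationTemplate` type are all hypothetical illustrations, not Fieldline's actual data model:

```python
from dataclasses import dataclass

# Hypothetical sketch only — none of these names come from Fieldline's codebase.
@dataclass
class AutomationTemplate:
    trigger: str   # the event that starts the automation
    action: str    # what happens in response
    label: str     # user-facing framing, written in the user's own language

TEMPLATES = [
    AutomationTemplate(
        trigger="task.completed",
        action="notify.next_assignee",
        label="When a task is marked complete, notify the next assignee",
    ),
    AutomationTemplate(
        trigger="task.overdue",
        action="notify.assignee",
        label="When a task is overdue, send a message to the assignee",
    ),
]

# The builder could present these labels as one-click starting points
# instead of opening on an empty "Select a trigger" dropdown.
for template in TEMPLATES:
    print(template.label)
```

The point of the sketch is that the user only ever sees the `label` — the trigger-action machinery stays behind a scenario they already recognize.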

✏️ UI copy
🧩 Templates
🔧 Medium effort
03
Example automations in onboarding use generic use cases that don't reflect how Fieldline users actually work.
The three example automations shown during setup reference "CRM status changes," "ticket priority updates," and "calendar sync" — none of which are core to Fieldline's primary use case as a project coordination tool.
High
What we observed

When participants saw the example automations, 5 of 8 concluded — incorrectly — that the feature wasn't relevant to their work. The examples felt like they were from a different product. One participant said she'd decided the feature was "for sales teams" based on the CRM reference. None of the examples referenced task assignment, project status, or team notifications — the three most common workflow triggers participants described in the mental model interview.

"These examples don't really apply to how I use this. I thought it was more of a CRM thing." — P6, Account Director
Recommendation

Replace the three onboarding examples with use cases drawn from actual Fieldline workflows. Based on the mental model interviews, the highest-resonance scenarios are: "When a task is overdue, send a Slack message to the assignee," "When a project moves to In Review, notify the team lead," and "When all tasks in a milestone are complete, mark the milestone done."

These are writing-only changes — no engineering required. They would directly address the "this isn't for me" conclusion that currently blocks adoption before users even try the feature.

✏️ Onboarding copy
⚡ Low effort
04
There's no feedback loop — users can't tell if an automation is working after they set it up.
The two participants who successfully created an automation during their sessions both raised the same concern: they had no way of knowing whether it had run, was going to run, or had failed silently.
Medium
What we observed

After completing an automation setup, both participants immediately looked for a confirmation state, a run log, or some signal that the automation was "on." The only indication was a toggle in the Workflow Rules list — no history, no last-run timestamp, no indication of what triggered the automation or what it did. One participant turned the automation off and back on twice trying to understand if it had activated.

"I set it up, but I have no idea if it actually did anything. I'd want to see a log or something — like, it ran at 3pm, it did this." — P2, Product Manager
Recommendation

Add a lightweight activity log to each automation — showing the last time it ran, what triggered it, and what it did. Even a simple "Last triggered: today at 3:14pm" would resolve the primary anxiety. For users who've never used automation tools before, this feedback loop is what builds trust in the feature over time.

This is a medium-effort engineering change, but it addresses a retention concern: without it, users who do set up automations are likely to abandon them when they can't verify they're working.
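As a rough illustration of how lightweight this log could be, here is a sketch of a run record and the single reassurance string the recommendation describes — the `AutomationRun` type and its fields are assumptions for illustration, not Fieldline's schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch only — field names are illustrative, not Fieldline's.
@dataclass
class AutomationRun:
    ran_at: datetime   # when the automation fired
    trigger: str       # what caused it to fire
    result: str        # what it did

def last_triggered_line(runs: list[AutomationRun]) -> str:
    """Render the minimal 'Last triggered' string that resolves the
    did-it-actually-run anxiety observed in the sessions."""
    if not runs:
        return "Not triggered yet"
    latest = max(runs, key=lambda r: r.ran_at)
    return f"Last triggered: {latest.ran_at:%Y-%m-%d %H:%M}"
```

Even this single line per automation closes the feedback loop; a fuller history view (trigger + result per run) can layer on top of the same records later.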

📋 Activity log
🔧 Medium effort

Where to start — impact vs. effort.

Two of the four findings are copy and labelling changes that require no engineering work. They address the discovery and framing problems that are blocking adoption before users even attempt to use the feature.

Finding Severity Effort Start with
03  Replace onboarding examples with Fieldline use cases High Low ✓ This week
02  Reframe builder UI language ("When does this happen?") Critical Low ✓ This week
01  Add contextual entry point + rename to "Automations" Critical Medium Sprint 2
04  Add automation activity log Medium Medium Sprint 2

How to move from findings to fixes.

The two low-effort findings — better onboarding examples and reframed UI copy — can be addressed in a single focused sprint without engineering involvement. They directly attack the framing and relevance problems that are preventing adoption before users engage with the feature at all.

This week: Assign Finding 03 to a copywriter or PM. Replace the three CRM-oriented onboarding examples with the Fieldline-specific scenarios identified in this report. For Finding 02, update the builder UI labels — "When does this happen?" and "What should happen next?" require a string change, nothing more.

Sprint 2: Surface the feature with a contextual entry point in the project view (Finding 01). Rename "Workflow Rules" to "Automations" in the navigation. These changes require design and engineering, but directly address the discovery failure that affects all users before they even reach the builder.

Sprint 3: Build the activity log (Finding 04). This is a retention play — it won't drive initial adoption, but it will determine whether users who do set up automations stick with them.

I'd recommend a lightweight follow-up validation study — 4–5 sessions — after the Sprint 2 changes ship, to confirm that the entry point and labelling changes close the discovery gap. Given that two copy changes alone could meaningfully move the adoption number, validating quickly is worth the investment.

Prepared by Monica S. — Legible Research

Legible Research is a UX research practice for product teams. Questions about this report or next steps: hello@legibleresearch.com

legibleresearch.com