Moderated usability sessions on a newly shipped workflow automation feature with near-zero adoption
Fieldline's workflow automation feature was built in direct response to sales call feedback — power users consistently said they wanted to automate repetitive task sequences. The feature shipped. Adoption was 3% after 90 days. The problem wasn't that users didn't want automation. It was that they couldn't map what the feature did to the mental model they already had for how their work was structured.
Eight moderated usability sessions revealed a consistent pattern: users understood the concept of automation but couldn't identify which of their own workflows the feature applied to. The trigger-action model the product used was unfamiliar. The entry point buried the feature behind a configuration UI that felt like setup cost. And the onboarding flow's only examples were generic use cases that didn't reflect how Fieldline's users actually work.
This is a discoverability and framing problem, not a demand problem. The users who want this feature exist — they just couldn't see themselves in it.
Eight moderated usability sessions, each 60 minutes, were conducted over two weeks via video call. Participants were existing Fieldline power users, defined as users who log in 4+ days per week and have been on the platform for at least 3 months.
Each finding includes the observed behavior, supporting participant quotes, and a specific recommendation. Findings are ordered by severity.
Finding 01: Users couldn't find the feature

When asked to find the automation feature, 6 of 8 participants went first to the main navigation, then to the project view, then to the task detail panel, in that order. None looked in Settings on the first pass. When prompted to try Settings, most still passed over "Workflow Rules" because the label didn't say "automation."
Surface automation as a first-class feature. Add an "Automate" entry point directly on the project or task view — contextually, where users are already doing the work the feature is meant to streamline. Rename "Workflow Rules" to "Automations" to match the vocabulary users already have.
A persistent, contextual entry point (e.g., "Automate this" next to recurring task types) would eliminate the discovery problem entirely and frame the feature in the user's own language.
Finding 02: "Trigger" doesn't match users' vocabulary

When participants reached the automation builder, the majority stalled at the "Select a trigger" dropdown. The concept of a trigger, an event that initiates a response, wasn't intuitive. Users who do use other automation tools (2 of 8) moved through this step quickly; the other 6 spent an average of 3 minutes trying to understand what "trigger" meant before giving up or guessing.
Reframe the builder UI around the user's own language. Instead of "Select a trigger," use "When does this happen?" Instead of "Add an action," use "What should happen next?" This maps directly to how participants described their own workflows in the post-session interview: in terms of situations and responses, not events and actions.
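To make the scale of this change concrete, here is a minimal sketch of the copy swap as a string-table update, in TypeScript. The key names and structure are illustrative assumptions, not Fieldline's actual code.

```ts
// Hypothetical builder copy table; keys are illustrative, not Fieldline's real strings.
const builderCopy = {
  // Before: "Select a trigger"
  triggerStepLabel: "When does this happen?",
  // Before: "Add an action"
  actionStepLabel: "What should happen next?",
} as const;
```

Nothing about the rule engine changes; only the words users see.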
Consider adding 3–4 template automations pre-built with Fieldline-specific scenarios (e.g., "When a task is marked complete, notify the next assignee"). Templates lower the abstraction cost significantly.
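As a sketch of what those templates might look like as data, one plausible shape is a pre-filled trigger-action pair. The event names and field structure below are assumptions for illustration; Fieldline's actual rule model may differ.

```ts
// A plausible shape for pre-built template automations.
// Event and action identifiers are invented for illustration.
interface AutomationTemplate {
  name: string;
  trigger: { event: string; filter?: Record<string, string> };
  action: { type: string; params: Record<string, string> };
}

const starterTemplates: AutomationTemplate[] = [
  {
    name: "Notify the next assignee when a task is completed",
    trigger: { event: "task.completed" },
    action: { type: "notify", params: { recipient: "next_assignee" } },
  },
  {
    name: "Notify the team lead when a project moves to In Review",
    trigger: { event: "project.status_changed", filter: { to: "In Review" } },
    action: { type: "notify", params: { recipient: "team_lead" } },
  },
];
```

Because each template arrives with the trigger and action already filled in, users never face the empty "Select a trigger" dropdown that stalled 6 of 8 participants.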
Finding 03: Onboarding examples read as "not for me"

When participants saw the example automations, 5 of 8 concluded, incorrectly, that the feature wasn't relevant to their work. The examples felt like they were from a different product. One participant said she'd decided the feature was "for sales teams" based on the CRM reference. None of the examples referenced the three most common workflow triggers participants described in the mental model interview: task assignment, project status, and team notifications.
Replace the three onboarding examples with use cases drawn from actual Fieldline workflows. Based on the mental model interviews, the highest-resonance scenarios are: "When a task is overdue, send a Slack message to the assignee," "When a project moves to In Review, notify the team lead," and "When all tasks in a milestone are complete, mark the milestone done."
These are writing-only changes — no engineering required. They would directly address the "this isn't for me" conclusion that currently blocks adoption before users even try the feature.
Finding 04: No feedback that an automation is working

The two participants who completed an automation setup both immediately looked for a confirmation state, a run log, or some signal that the automation was "on." The only indication was a toggle in the Workflow Rules list: no history, no last-run timestamp, no indication of what triggered the automation or what it did. One participant turned the automation off and back on twice trying to understand if it had activated.
Add a lightweight activity log to each automation — showing the last time it ran, what triggered it, and what it did. Even a simple "Last triggered: today at 3:14pm" would resolve the primary anxiety. For users who've never used automation tools before, this feedback loop is what builds trust in the feature over time.
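A rough sketch of the minimum viable version: each automation carries a list of run records, and the UI surfaces only the most recent one. The field names below are assumptions, not a spec.

```ts
// Minimal run-log shape; field names are illustrative assumptions.
interface AutomationRun {
  triggeredAt: Date;
  triggeredBy: string; // e.g. the task or status change that fired the rule
  actionTaken: string; // e.g. "notified the assignee via Slack"
}

// Produces the single line of feedback that would resolve the observed anxiety.
function lastTriggeredLabel(runs: AutomationRun[]): string {
  if (runs.length === 0) return "Not triggered yet";
  const latest = runs.reduce((a, b) =>
    a.triggeredAt.getTime() > b.triggeredAt.getTime() ? a : b
  );
  return `Last triggered: ${latest.triggeredAt.toLocaleString()}`;
}
```

An explicit "Not triggered yet" state is worth keeping: it distinguishes an automation that is on but idle from one that is silently broken.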
This is a medium-effort engineering change, but it addresses a retention concern: without it, users who do set up automations are likely to abandon them when they can't verify they're working.
Two of the four findings, the reframed builder copy (Finding 02) and the replacement onboarding examples (Finding 03), are copy and labelling changes that require no engineering work. Both can be addressed in a single focused sprint, and they directly attack the framing and relevance problems that block adoption before users even attempt to use the feature.
This week: Assign Finding 03 to a copywriter or PM. Replace the three CRM-oriented onboarding examples with the Fieldline-specific scenarios identified in this report. For Finding 02, update the builder UI labels — "When does this happen?" and "What should happen next?" require a string change, nothing more.
Sprint 2: Surface the feature with a contextual entry point in the project view (Finding 01). Rename "Workflow Rules" to "Automations" in the navigation. These changes require design and engineering, but directly address the discovery failure that affects all users before they even reach the builder.
Sprint 3: Build the activity log (Finding 04). This is a retention play — it won't drive initial adoption, but it will determine whether users who do set up automations stick with them.
I'd recommend a lightweight follow-up validation study — 4–5 sessions — after the Sprint 2 changes ship, to confirm that the entry point and labelling changes close the discovery gap. Given that two copy changes alone could meaningfully move the adoption number, validating quickly is worth the investment.
Legible Research is a UX research practice for product teams. Questions about this report or next steps: hello@legibleresearch.com