Acceptance Criteria Examples: Gherkin & Plain Language

TL;DR:

  • Get 15+ ready-to-use acceptance criteria examples you can copy and adapt
  • Learn when to use Gherkin format vs plain language for maximum clarity
  • Apply proven templates that reduce development confusion and rework
  • Avoid the 5 most common acceptance criteria mistakes that slow teams down

Context and why it matters in 2025

Acceptance criteria bridge the gap between what you want to build and what gets delivered. Without clear examples, developers make assumptions, designers create inconsistent experiences, and QA tests the wrong scenarios.

Strong acceptance criteria examples can dramatically reduce back-and-forth questions and cut story refinement time. They become executable specifications that everyone understands before coding starts.

The challenge is knowing when to use structured Gherkin format versus plain language, and having proven templates you can adapt quickly. Most PMs either write vague criteria that leave too much open to interpretation, or over-engineer complex scenarios that slow down the team.

Success means your user stories and acceptance criteria (see User Stories Best Practices: Job Stories & Acceptance Criteria) become a reliable system that scales with your team size and product complexity.

Step-by-step playbook

1. Choose your format based on complexity

Goal: Match the acceptance criteria format to your story's complexity and team preferences.

Actions:

  • Use plain language for simple UI changes, content updates, or straightforward workflows
  • Choose Gherkin (Given-When-Then) for complex business logic, multi-step flows, or integration scenarios
  • Default to plain language unless the story involves conditional logic or multiple user paths

Example: A simple "Add delete button to user profile" uses plain language. A "Process subscription upgrade with proration" uses Gherkin because it involves multiple conditions and calculations.

Pitfall: Using Gherkin for every story creates unnecessary overhead and slows down writing.

Done: You have a clear decision on format before writing any criteria.

2. Start with the happy path scenario

Goal: Define the primary success scenario that delivers core user value.

Actions:

  • Write the main flow first, assuming everything works perfectly
  • Focus on user-visible outcomes, not technical implementation details
  • Include specific data examples where relevant (usernames, amounts, error messages)
  • Verify the scenario connects to your analytics plan (see Product Analytics Instrumentation: Complete Setup Guide)

Example: For a login feature, start with "User enters valid email and password, clicks login, and sees their dashboard."

Pitfall: Starting with edge cases before nailing the happy path creates confusion about the core functionality.

Done: The primary success scenario is clear and testable.

3. Add error and edge case scenarios

Goal: Cover failure modes and boundary conditions that users will encounter.

Actions:

  • List common error conditions (invalid inputs, network failures, permission issues)
  • Define specific error messages and user guidance for each scenario
  • Include boundary cases (empty states, maximum limits, timeout scenarios)
  • Reference relevant patterns from your refinement process (see Backlog Grooming Best Practices: Cadence & Criteria Guide)

Example: "When user enters invalid email format, show 'Please enter a valid email address' below the email field."

Pitfall: Writing generic error handling without specific messages leads to inconsistent user experience.

Done: Error scenarios have specific, actionable user guidance defined.

4. Validate with three-amigos review

Goal: Get developer, designer, and QA input before the story enters development.

Actions:

  • Schedule a 15-minute review with the PM, developer, designer, and QA
  • Walk through each acceptance criteria scenario
  • Ask "What questions would you have while building this?"
  • Update criteria based on technical constraints or design considerations

Example: Developer points out that email validation happens on blur, not submit. Update criteria to reflect the actual interaction timing.

Pitfall: Skipping technical review leads to criteria that can't be implemented as written.

Done: All three disciplines agree the criteria are clear and achievable.

5. Connect criteria to analytics

Goal: Connect acceptance criteria to metrics and analytics tracking.

Actions:

  • Add tracking requirements for user actions mentioned in criteria
  • Specify event names and properties for analytics implementation
  • Include success metrics that connect to broader product goals
  • Reference measurement approaches from your analytics setup

Example: "Track 'login_attempted' event with email_format_valid property when user clicks login button."

Pitfall: Forgetting to specify tracking leads to missing data for feature success measurement.

Done: Analytics requirements are explicit and implementable.
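The tracking requirement in the example above can be expressed as a thin wrapper around whatever analytics SDK your team uses. In this sketch, `send` is a stand-in for a real client (Segment, Amplitude, etc.), so the wrapper stays testable without any SDK installed:

```python
from typing import Any, Callable

# A sink is anything that accepts an event name plus a properties dict.
AnalyticsSink = Callable[[str, dict[str, Any]], None]

def track_login_attempt(send: AnalyticsSink, email_format_valid: bool) -> None:
    """Emit the 'login_attempted' event named in the acceptance criteria."""
    send("login_attempted", {"email_format_valid": email_format_valid})
```

Keeping event names and properties in the criteria (and mirrored in code like this) is what makes the "missing data" pitfall above avoidable.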

Templates and examples

Plain Language Template

**User Story:** As a [user type], I want to [action] so that [benefit].

**Acceptance Criteria:**

**Happy Path:**
- [ ] When user [action], then [specific outcome]
- [ ] User sees [specific UI element/message/data]
- [ ] System [specific behavior/update/calculation]

**Error Scenarios:**
- [ ] When [error condition], show "[specific error message]"
- [ ] When [boundary condition], then [fallback behavior]

**Edge Cases:**
- [ ] When [unusual but valid scenario], then [expected behavior]
- [ ] System handles [maximum/minimum/empty] state by [specific action]

**Definition of Done:**
- [ ] Feature works on [specified browsers/devices]
- [ ] Analytics tracking implemented for [specific events]
- [ ] Error handling tested for [specific scenarios]

Gherkin Format Template

**User Story:** As a [user type], I want to [action] so that [benefit].

**Acceptance Criteria:**

**Scenario 1: Happy path**
Given [initial state/context]
When [user action]
Then [expected outcome]
And [additional verification]

**Scenario 2: Error handling**
Given [initial state]
When [invalid action/input]
Then [error message/behavior]
And [user guidance/recovery option]

**Scenario 3: Edge case**
Given [boundary condition]
When [user action]
Then [expected behavior]
And [system state verification]
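Tools like Cucumber or behave turn scenarios in this shape into executable step definitions. A framework-free sketch of the same idea, using a plain dict where those tools would pass a context object (the login fixture is hypothetical):

```python
# Minimal Given-When-Then wiring without a BDD framework.
# Each step reads and writes a shared context dict, mirroring how
# behave or Cucumber pass state between step definitions.

def given_user_on_login_page(ctx: dict) -> None:
    ctx["page"] = "login"
    ctx["accounts"] = {"ada@example.com": "s3cret"}  # hypothetical fixture

def when_user_logs_in(ctx: dict, email: str, password: str) -> None:
    ok = ctx["accounts"].get(email) == password
    ctx["page"] = "dashboard" if ok else "login"
    ctx["error"] = None if ok else "Invalid email or password"

def then_user_sees_dashboard(ctx: dict) -> None:
    assert ctx["page"] == "dashboard" and ctx["error"] is None

# Scenario 1: Happy path
ctx = {}
given_user_on_login_page(ctx)
when_user_logs_in(ctx, "ada@example.com", "s3cret")
then_user_sees_dashboard(ctx)
```

This is why Gherkin earns its overhead on complex stories: each line of the scenario maps one-to-one onto a step a machine can run.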

Real Examples

E-commerce Add to Cart (Plain Language):

  • When user clicks "Add to Cart" on product page, item appears in cart with correct name, price, and quantity
  • Cart icon updates to show total item count
  • Success message displays "Item added to cart" for 3 seconds
  • When product is out of stock, "Add to Cart" button is disabled and shows "Out of Stock"
  • When user is not logged in, clicking "Add to Cart" redirects to login page

Subscription Upgrade (Gherkin):

Scenario: Successful upgrade with proration
Given user has active Basic plan ($10/month, billed monthly)
And user has 15 days remaining in current billing cycle
When user selects Pro plan ($20/month)
Then system calculates prorated amount ($5.00)
And user sees "Upgrade to Pro: $5.00 due today, then $20/month"
And payment processes successfully
And user immediately gets Pro features access
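The $5.00 in this scenario follows from a standard proration formula: the price difference times the fraction of the billing cycle remaining. A sketch using Decimal to avoid float rounding (the fixed 30-day cycle is an assumption; real billing systems prorate against the actual cycle dates):

```python
from decimal import Decimal, ROUND_HALF_UP

def prorated_upgrade_charge(old_price: Decimal, new_price: Decimal,
                            days_remaining: int, cycle_days: int = 30) -> Decimal:
    """Charge only the price difference for the unused part of the cycle."""
    fraction_left = Decimal(days_remaining) / Decimal(cycle_days)
    charge = (new_price - old_price) * fraction_left
    return charge.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Basic $10 -> Pro $20 with 15 of 30 days left: ($20 - $10) * 15/30 = $5.00
```

Putting the concrete dollar amount in the criteria, as the scenario above does, lets the developer and QA verify the calculation independently.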

Metrics to track

Criteria Clarity Score

  • Formula: (Stories completed without clarification questions / Total stories) × 100
  • Instrumentation: Track Slack/Jira comments asking for story clarification
  • Example range: 75-90% (higher indicates clearer acceptance criteria)

Development Rework Rate

  • Formula: (Stories requiring post-development changes / Total completed stories) × 100
  • Instrumentation: Track stories that reopen after initial completion
  • Example range: 5-15% (lower indicates better upfront specification)

Story Refinement Time

  • Formula: Average minutes spent in refinement per story point
  • Instrumentation: Time tracking in refinement meetings divided by story points estimated
  • Example range: 8-15 minutes per story point (varies by team size and complexity)

Acceptance Criteria Coverage

  • Formula: (Stories with complete AC / Total stories) × 100
  • Instrumentation: Automated check for AC presence in story template
  • Example range: 95-100% (should be consistently high)

QA Test Case Generation Time

  • Formula: Average time to create test cases from acceptance criteria
  • Instrumentation: QA team tracks time from AC review to test case completion
  • Example range: 15-30 minutes per story (faster indicates clearer criteria)

Criteria Update Frequency

  • Formula: (Stories requiring AC updates during development / Total stories in development) × 100
  • Instrumentation: Track AC field edits after story moves to "In Progress"
  • Example range: 10-25% (some updates are normal; too many indicate poor initial quality)
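Most of these metrics share one shape: a count over a total, times 100. A quick helper makes the arithmetic explicit (the story counts below are illustrative, chosen to land inside the example ranges above):

```python
def pct(numerator: int, denominator: int) -> float:
    """Percentage-style metric, e.g. clarity score or rework rate."""
    if denominator == 0:
        return 0.0  # avoid division by zero before any stories ship
    return round(100 * numerator / denominator, 1)

# Criteria Clarity Score: 42 of 50 stories finished without clarification
clarity = pct(42, 50)   # 84.0, inside the 75-90% example range
# Development Rework Rate: 4 of 50 completed stories were reopened
rework = pct(4, 50)     # 8.0, inside the 5-15% example range
```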

Common mistakes and how to fix them

  • Writing implementation details instead of user outcomes. Fix: Focus on what the user experiences, not how the system achieves it. Write "User sees confirmation message" not "System calls confirmation API."

  • Using vague language like "user-friendly" or "intuitive." Fix: Specify exact UI elements, messages, and behaviors. Replace "error handling is user-friendly" with "show specific error message below input field."

  • Missing negative test cases and error scenarios. Fix: For every happy path, write at least one error scenario. Ask "What happens when this goes wrong?"

  • Forgetting mobile and responsive behavior. Fix: Explicitly state how features work on different screen sizes when relevant to the story.

  • Overcomplicating simple stories with unnecessary Gherkin format. Fix: Use plain language for straightforward features. Save Gherkin for complex conditional logic.

  • Not involving developers and designers in criteria review. Fix: Always get technical input before finalizing acceptance criteria. Implementation constraints affect what's possible.

  • Mixing acceptance criteria with technical tasks. Fix: Keep AC focused on user-facing behavior. Put technical implementation notes in story description or subtasks.

  • Writing criteria that can't be tested or measured. Fix: Every criterion should be verifiable. Ask "How would QA test this?" for each point.

FAQ

Q: When should I use Gherkin format vs plain language for acceptance criteria examples? A: Use Gherkin for complex workflows with multiple conditions, user roles, or integration points. Use plain language for simple UI changes, content updates, or straightforward features. Gherkin adds structure but also overhead.

Q: How many acceptance criteria should each user story have? A: Typically 3-7 criteria per story. If you need more than 10, consider splitting the story. Focus on happy path, key error scenarios, and critical edge cases. Quality matters more than quantity.

Q: Should acceptance criteria examples include technical implementation details? A: No. Focus on user-observable behavior and outcomes. Technical implementation belongs in story descriptions, subtasks, or developer notes. AC should be understandable by non-technical stakeholders.

Q: How do I write good acceptance criteria examples for API or backend features? A: Focus on the API consumer's perspective. Specify request/response formats, status codes, error conditions, and performance requirements. Treat the API consumer as your "user."
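Each API criterion maps directly onto a check against the response. A sketch against a hypothetical `GET /users/{id}` endpoint (the endpoint, fields, and status codes are invented for illustration):

```python
# Hypothetical acceptance criteria for GET /users/{id}:
# - 200 response includes 'id' and 'email' fields for an existing user
# - 404 response includes an 'error' message for an unknown user

def check_user_response(status: int, body: dict) -> None:
    """Assert a (status, body) pair satisfies the criteria above."""
    if status == 200:
        assert {"id", "email"} <= body.keys(), "200 body must include id and email"
    elif status == 404:
        assert "error" in body, "404 body must include an error message"
    else:
        raise AssertionError(f"unexpected status {status}")

check_user_response(200, {"id": 7, "email": "ada@example.com"})
check_user_response(404, {"error": "User not found"})
```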

Q: What's the difference between acceptance criteria and definition of done? A: Acceptance criteria are specific to each story and define what makes that feature complete. Definition of done applies to all stories (testing, code review, documentation) and defines team quality standards.

Why CraftUp helps

Writing clear acceptance criteria becomes second nature when you practice consistently with real examples and get feedback on your approach.

  • 5-minute daily lessons for busy people - Practice writing acceptance criteria with guided exercises that fit into your schedule
  • AI-powered, up-to-date workflows PMs need - Get personalized feedback on your acceptance criteria quality and suggestions for improvement
  • Mobile-first, practical exercises to apply immediately - Work through real user story scenarios and build your template library on the go

Start free on CraftUp to build a consistent product habit. Visit https://craftuplearn.com

Andrea Mezzadra

Published on December 15, 2025

Ex Product Director turned Independent Product Creator.
