
AI Analysis

How FixSense uses AI to analyze E2E test failures and identify root causes.

Overview

Every time a Playwright or Cypress test fails in your CI, FixSense uses AI to perform intelligent root cause analysis. The AI examines the failure logs, test context, and error patterns to deliver actionable insights.

What Gets Analyzed

For each failed test, the AI receives:

  • Error message and stack trace
  • Test file name and test title
  • Expected vs actual behavior
  • Test framework log output (actions, assertions, network)
  • Failure screenshot — the AI visually inspects the screenshot to see the actual UI state at the moment of failure. This is especially useful for toast/notification mismatches: if a success toast is visible on screen but the test couldn't find it, the AI can see the toast and correctly classify it as a test bug (wrong selector or expected text) rather than an app bug.
  • Console errors and network responses — extracted from Playwright traces and test artifacts when available. HTTP 5xx responses and JavaScript errors are strong signals for application bugs.
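As a rough sketch, the context above can be pictured as a single structured payload. The field names below are illustrative, not FixSense's actual schema:

```typescript
// Hypothetical shape of the failure context sent to the AI.
// Field names are illustrative, not FixSense's actual schema.
interface FailureContext {
  testFile: string;
  testTitle: string;
  errorMessage: string;
  stackTrace: string;
  expected?: string;
  actual?: string;
  frameworkLog: string[];        // actions, assertions, network events
  screenshotBase64?: string;     // failure screenshot, when captured
  consoleErrors: string[];       // from traces/artifacts, when available
  networkResponses: { url: string; status: number }[];
}

// Example: an HTTP 5xx among the captured responses is a strong
// application-bug signal.
const hasServerError = (ctx: FailureContext): boolean =>
  ctx.networkResponses.some((r) => r.status >= 500);
```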

Analysis Output

Each analysis produces a structured result:

Root Cause

A clear explanation of why the test failed. Examples:

  • "The login button selector #submit was changed to .btn-primary in the latest commit"
  • "A race condition: the API response arrives after the assertion timeout of 5000ms"
  • "The test relies on a specific data seed that was modified in the database migration"

Failure Classification

Every failure is classified as either an application bug or a test bug — no ambiguity:

  • Application Bug — the application itself is broken and the test correctly detected a real issue. Examples: backend returning errors, UI not rendering after a successful API call, redirect going to the wrong page, operations failing silently.
  • Test Bug — the test code needs fixing; the application is working correctly. Examples: wrong selector, timeout waiting for an element, assertion expecting an outdated value, test data conflict, undefined passed to an action.

FixSense uses multi-signal analysis to make this determination. It examines error patterns, network responses (when available), console logs, retry behavior, and historical data to classify each failure with high confidence.
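A minimal sketch of what such multi-signal classification could look like, assuming a handful of simplified boolean signals (FixSense's real analysis weighs many more inputs, including history):

```typescript
// Simplified sketch of multi-signal app-bug vs test-bug classification.
// The signal set and rules here are illustrative assumptions.
type Classification = 'application-bug' | 'test-bug';

interface Signals {
  http5xx: boolean;         // any 5xx in captured network responses
  jsConsoleError: boolean;  // uncaught JS errors in the console log
  selectorTimeout: boolean; // error is a selector/element timeout
  passedOnRetry: boolean;   // same test passed on retry in this run
}

function classify(s: Signals): Classification {
  // Server errors and uncaught JS errors point at the application.
  if (s.http5xx || s.jsConsoleError) return 'application-bug';
  // A selector timeout with an otherwise healthy app usually means
  // stale test code (wrong selector or expected text).
  if (s.selectorTimeout) return 'test-bug';
  // A pass on retry suggests the test, not the app, is at fault.
  return s.passedOnRetry ? 'test-bug' : 'application-bug';
}
```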

Flakiness Score

In addition to the app/test classification, each failure receives a flakiness score (0-100):

  • 0-20: Deterministic failure — happens consistently
  • 21-50: Uncertain — investigate further
  • 51-80: Likely flaky — intermittent pattern
  • 81-100: Almost certainly flaky — fails randomly

Tests that pass on retry in the same CI run are automatically flagged as flaky with a high score. Tests that fail consistently with the same error across multiple runs receive a low score — they are real issues, not flakes.
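The two rules above can be sketched as a simple scoring function; the thresholds and default value here are assumptions, and the real scoring is more nuanced:

```typescript
// Rough sketch of flakiness scoring (0-100) from retry behavior and
// failure history. Thresholds are illustrative assumptions.
interface RunHistory {
  passedOnRetry: boolean;    // test passed on retry in the same CI run
  sameErrorFailures: number; // consecutive runs failing with this error
}

function flakinessScore(h: RunHistory): number {
  if (h.passedOnRetry) return 90;          // almost certainly flaky
  if (h.sameErrorFailures >= 3) return 10; // deterministic failure
  return 40;                               // uncertain: investigate
}
```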

Fix Suggestion

Specific code changes to resolve the issue:

```javascript
// Before (failing)
await page.click('#submit');

// After (suggested fix)
await page.click('.btn-primary');
```

Confidence Score

A 0-100% score indicating how certain the AI is about its analysis. Scores above 80% are typically very accurate.

Analysis Actions

Each analysis card in the dashboard has actions you can take:

Reanalyze

Triggers a new AI API call on the same failure data (test name, error message, and diff context). The AI may produce different root cause explanations, flakiness scores, and fix suggestions on each run. Useful when:

  • The original analysis had low confidence
  • An AI error occurred during the first attempt
  • You want a fresh perspective after understanding the failure better

The card shows a spinner while re-analyzing and updates in-place with the new results — root cause, flakiness score, suggested fix, and confidence are all replaced.

Each reanalyze counts as one analysis toward your monthly quota, since it makes a real AI API call. If you've reached your plan limit, the reanalysis request will fail with an error.

Copy

Copies the full analysis to your clipboard in a clean text format — test name, root cause, error message, and suggested fix. Useful for pasting into Slack, Jira, or team discussions.
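For illustration, a formatter producing a clipboard-friendly layout might look like the sketch below; the exact text format FixSense copies may differ:

```typescript
// Illustrative clipboard formatter for a completed analysis.
// The field names and layout are assumptions, not FixSense's actual output.
interface Analysis {
  testName: string;
  rootCause: string;
  errorMessage: string;
  suggestedFix: string;
}

function toClipboardText(a: Analysis): string {
  return [
    `Test: ${a.testName}`,
    `Root cause: ${a.rootCause}`,
    `Error: ${a.errorMessage}`,
    `Suggested fix: ${a.suggestedFix}`,
  ].join('\n');
}
```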

Delete

Permanently removes the analysis from your dashboard and database. Includes an inline confirmation step to prevent accidental deletion.

Pattern Learning

FixSense learns from your team's feedback and failure history within your account to get smarter over time:

  • When you mark an analysis as helpful, FixSense remembers the failure pattern. If the same test fails with the same error again, you get an instant cached result — no waiting, no extra analysis usage.
  • When you mark an analysis as unhelpful, FixSense ensures it won't reuse that result. The next time the same failure occurs, a fresh analysis runs with improved context.
  • When the same failure occurs multiple times, FixSense uses the previous classification to maintain consistency — the same error always gets the same answer.
  • Repeat failures anchor the flakiness score — a test that fails 5 times with the same error is deterministic (not flaky), regardless of what the error message looks like.

This means the more you use FixSense, the faster and more accurate it becomes for your specific codebase.
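One way to picture the caching behind instant repeat results is a fingerprint built from the test name plus a normalized error message, so the same failure shape maps to the same cache key. The normalization rule and key derivation below are assumptions for illustration:

```typescript
// Sketch of a failure fingerprint for pattern caching. Normalization
// (here: collapsing digits) is an illustrative assumption, so that
// volatile parts like timeout values don't break cache hits.
import { createHash } from 'crypto';

function fingerprint(testName: string, errorMessage: string): string {
  const normalized = errorMessage.replace(/\d+/g, 'N');
  return createHash('sha256')
    .update(`${testName}::${normalized}`)
    .digest('hex');
}
```

With this scheme, "timeout 5000ms" and "timeout 3000ms" on the same test produce the same key, so a previously marked-helpful result can be served instantly without a new AI call.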

Efficiency

FixSense only analyzes failed tests. Passing tests are completely ignored, keeping your usage efficient and your monthly analysis count low.

A typical team with 500 tests uses only a fraction of their monthly analysis quota, even on the Free plan.