# Dashboard
Track CI failures, usage, and analytics from the FixSense dashboard.
## Overview
The FixSense Dashboard gives you real-time visibility into your CI health, failure patterns, and auto-fix performance across all your repositories.
## Analysis Stats
The Analysis widget shows a full breakdown of your test failure data:
- Total Analyzed — total CI failure analyses processed
- App Bugs — failures caused by application code, not tests
- Test Bugs — failures caused by test code issues
- AI Errors — analyses that could not be completed due to AI provider issues
- Avg Flakiness — average flakiness score across all analyses (0–100%)
- Flaky — tests with a flakiness score of 60 or higher (likely intermittent)
- Confirmed — tests with a flakiness score below 60 (reproducible, real failures)
- Positive Feedback — percentage of user votes that rated analyses as accurate
- Feedback Rate — percentage of analyses that received a vote
The breakdown is additive: App Bugs + Test Bugs + AI Errors = Total Analyzed.
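The additive invariant and the flakiness threshold above can be sketched in a few lines of Python. The field names and sample numbers here are illustrative, not the FixSense API:

```python
# Illustrative stats; field names are assumptions, not the FixSense schema.
stats = {
    "app_bugs": 42,
    "test_bugs": 31,
    "ai_errors": 2,
    "total_analyzed": 75,
}

def breakdown_is_consistent(s: dict) -> bool:
    """App Bugs + Test Bugs + AI Errors must equal Total Analyzed."""
    return s["app_bugs"] + s["test_bugs"] + s["ai_errors"] == s["total_analyzed"]

def classify_flakiness(score: float) -> str:
    """Scores of 60 or higher count as Flaky; below 60 as Confirmed."""
    return "flaky" if score >= 60 else "confirmed"

print(breakdown_is_consistent(stats))  # True
print(classify_flakiness(72))          # flaky
```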
## Auto-Fix Stats
The Auto-Fix widget tracks the performance of the AI fix agent:
- Success Rate — percentage of completed fix attempts that succeeded
- Fixes — total analyses that entered the auto-fix pipeline
- Skipped — analyses where auto-fix was skipped (e.g., app bug detected, low confidence)
- Agent Runs — fix attempts that were actually executed
- Fixed — successful fixes generated by the AI agent
- Merged — fix PRs that were merged into the codebase
- Verified (CI green) — merged fixes confirmed passing by a subsequent CI run
- Fixes Failed — fix attempts that did not produce a working solution
The pipeline flow: Fixes = Skipped + Agent Runs, and Agent Runs = Fixed + Fixes Failed.
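This arithmetic can be checked with a short sketch, using illustrative counter values:

```python
# Illustrative counters for the auto-fix pipeline described above.
fixes = 50          # analyses that entered the auto-fix pipeline
skipped = 20        # auto-fix skipped (app bug detected, low confidence, ...)
agent_runs = 30     # fix attempts actually executed
fixed = 24          # successful fixes
fixes_failed = 6    # attempts that did not produce a working solution

# The two pipeline invariants from the text:
assert fixes == skipped + agent_runs
assert agent_runs == fixed + fixes_failed

# Success Rate is computed over completed attempts, not all analyses.
success_rate = fixed / agent_runs * 100
print(f"{success_rate:.0f}%")  # 80%
```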
## Repository Filter
The Repositories section lists all repos connected to your FixSense installation. Each repo has an ON/OFF toggle:
- ON (green) — the repo's data is included in all dashboard stats
- OFF (gray) — the repo's data is excluded from all widgets, filters, and charts
Use this to focus on a single repository's performance, or to temporarily hide repos you're not interested in. The toggle takes effect instantly and applies to everything on the dashboard: the stat widgets, the analysis list, and the analytics charts.
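Conceptually, the toggle behaves like a set-membership filter over the dashboard's data. A minimal sketch, with a hypothetical data shape:

```python
# Hypothetical analysis records; the real FixSense data model may differ.
analyses = [
    {"repo": "web-app", "test": "test_login"},
    {"repo": "api", "test": "test_rate_limit"},
    {"repo": "web-app", "test": "test_signup"},
]

enabled_repos = {"web-app"}  # repos toggled ON

# Every widget, list, and chart would draw from this filtered view.
visible = [a for a in analyses if a["repo"] in enabled_repos]
print([a["test"] for a in visible])  # ['test_login', 'test_signup']
```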
## Usage Overview
Shows your current billing cycle usage:
- Analyses used — how many test failure analyses you've consumed
- Auto-fixes used — how many auto-fix agent runs this month
- Plan limits — visual progress bars showing remaining capacity
## Analytics (Pro & Team)
### Top Failing Tests
Shows the 5 tests that fail most frequently. Use this to prioritize test maintenance and reduce CI noise. Each test links directly to its latest PR or fix PR.
### Failures by Repository
A breakdown of failures per repository with visual bars, helping you identify which projects need the most attention.
## Recent Analyses
A chronological list of all failure analyses with:
- Test name and file path
- Repository and PR reference
- Root cause summary
- Flakiness score and confidence indicator
- Fix suggestion and auto-fix status
Click any analysis card to expand it and see the full details, including the original error message.
### Filtering
Use the filter tabs to narrow down analyses:
| Filter | Shows |
|---|---|
| All | Every analysis |
| Analyzed | Completed analyses without auto-fix |
| Fixed | Analyses where the AI agent successfully generated a fix |
| Verified | Fixed analyses where the fix passed CI after merge |
| Failed | Analyses where the auto-fix attempt did not succeed |
| App Bugs | Failures identified as application bugs, not test issues |
| AI Errors | Analyses that failed due to AI provider issues |
Combine with the search bar (filters by test name or repo) and time filter (1h, 24h, 7d, 30d, All time). Team plan users also get a Custom date range picker.
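The three filter dimensions compose with AND semantics: an analysis is shown only if it matches the tab, the search text, and the time window. A sketch of that logic, with illustrative field names:

```python
from datetime import datetime, timedelta, timezone

def matches(analysis: dict, tab: str, search: str,
            window: "timedelta | None") -> bool:
    """Apply tab, search, and time filters together (AND semantics).

    Field names ('status', 'test', 'repo', 'created_at') are assumptions
    for illustration, not the FixSense API.
    """
    if tab != "All" and analysis["status"] != tab:
        return False
    q = search.lower()
    if q and q not in analysis["test"].lower() and q not in analysis["repo"].lower():
        return False
    if window is not None:
        if datetime.now(timezone.utc) - analysis["created_at"] > window:
            return False
    return True

now = datetime.now(timezone.utc)
items = [
    {"test": "test_login", "repo": "web-app", "status": "Fixed",
     "created_at": now - timedelta(hours=2)},
    {"test": "test_cache", "repo": "api", "status": "Fixed",
     "created_at": now - timedelta(days=10)},
]

# "Fixed" tab, empty search, 7-day window: only the recent fix matches.
recent_fixed = [i for i in items if matches(i, "Fixed", "", timedelta(days=7))]
print([i["test"] for i in recent_fixed])  # ['test_login']
```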
### Card Actions
Each expanded card has these actions:
| Action | Description |
|---|---|
| Copy | Copies test name, root cause, error, and fix to clipboard |
| Reanalyze | Re-runs AI analysis on the same failure (updates in-place) |
| Feedback | Rate the analysis as helpful or unhelpful to improve future results |
| Delete | Permanently removes the analysis (with inline confirmation) |
## Feedback
Every completed analysis shows thumbs up / thumbs down buttons with an "Accurate?" prompt. Your feedback directly improves FixSense:
- Thumbs up — confirms the analysis was accurate. FixSense remembers this, so if the same test fails with the same error again, you get an instant result without waiting for a new analysis.
- Thumbs down — marks the analysis as inaccurate. A dropdown appears asking why: wrong category, not flaky, not an app bug, wrong root cause, bad suggestion, or other. This ensures the same mistake won't be repeated for this failure pattern.
Your feedback is private to your team and only affects your own analyses.
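The "instant result on a repeat failure" behavior described above amounts to a lookup keyed by the failure pattern. A purely illustrative sketch, caching confirmed analyses by (test name, error message):

```python
from __future__ import annotations

# Hypothetical cache of thumbs-up-confirmed analyses, keyed by failure pattern.
confirmed: dict[tuple[str, str], str] = {}

def record_thumbs_up(test: str, error: str, analysis: str) -> None:
    """A thumbs up stores the analysis for this exact failure pattern."""
    confirmed[(test, error)] = analysis

def lookup(test: str, error: str) -> str | None:
    """If the same test fails with the same error, return the cached result."""
    return confirmed.get((test, error))

record_thumbs_up("test_login", "TimeoutError: 5s", "flaky network wait")
print(lookup("test_login", "TimeoutError: 5s"))  # flaky network wait
print(lookup("test_login", "AssertionError"))    # None (new pattern, re-analyze)
```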
## Weekly Trends (Pro & Team)
Shows failure volume per week for the last 8 weeks, with breakdowns for:
- Total failures per week
- Fixed — failures resolved by auto-fix
- Flaky — failures with a flakiness score of 60+
Use this to track whether your CI health is improving or degrading over time.
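Bucketing failures into weeks like this chart does can be sketched with ISO calendar weeks; the dates below are illustrative:

```python
from collections import Counter
from datetime import date

# Illustrative failure dates; the real data would come from your analyses.
failures = [date(2024, 5, 6), date(2024, 5, 7), date(2024, 5, 14)]

# Bucket by (ISO year, ISO week) so year boundaries are handled correctly.
per_week = Counter(d.isocalendar()[:2] for d in failures)

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count}")
# 2024-W19: 2
# 2024-W20: 1
```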
## Export CSV (Pro & Team)
The Export CSV button in the filter bar downloads all currently filtered analyses as a CSV file. The export includes test name, repository, PR number, root cause, flakiness score, confidence, auto-fix status, merge/verification status, feedback, and date.
Use this for offline reporting, sharing with stakeholders, or importing into BI tools.
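The export is plain CSV, so the standard library is enough to post-process it. A sketch, assuming hypothetical column labels (the actual header names in the export may differ):

```python
import csv
import io

# A tiny stand-in for the downloaded file; column labels are assumptions.
export = """test,repository,pr,root_cause,flakiness,status
test_login,web-app,142,race condition in session setup,78,Fixed
test_cache,api,97,stale fixture data,12,Verified
"""

rows = list(csv.DictReader(io.StringIO(export)))

# Example offline report: tests above the flakiness threshold.
flaky = [r["test"] for r in rows if int(r["flakiness"]) >= 60]
print(flaky)  # ['test_login']
```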
## Custom Date Range (Team)
Team plan users get a Custom option in the time filter that reveals date pickers for selecting any start and end date. This is especially useful with the Team plan's 1-year data retention — filter analyses to a specific sprint, release window, or incident period.
## SLA Metrics (Team)
The SLA Metrics widget shows operational performance indicators:
- Failures / Day — average daily failure rate across the selected time range
- Fix Rate — percentage of completed analyses that were auto-fixed
- Merge Rate — percentage of generated fixes that were merged
- End-to-End Verified — percentage of all analyses that reached CI-green verification
These metrics help engineering managers track auto-fix pipeline efficiency and set improvement targets.
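As a sketch, the four indicators reduce to simple ratios. The counter values are illustrative, and the exact denominators FixSense uses are assumptions based on the definitions above:

```python
# Illustrative counters over the selected time range.
days = 30
completed_analyses = 120
fixed = 36      # auto-fixed
merged = 27     # fix PRs merged
verified = 21   # merged fixes confirmed CI-green

failures_per_day = completed_analyses / days          # Failures / Day
fix_rate = fixed / completed_analyses * 100           # % of analyses auto-fixed
merge_rate = merged / fixed * 100                     # % of generated fixes merged
end_to_end_verified = verified / completed_analyses * 100  # % reaching CI-green

print(f"{failures_per_day:.1f}/day, fix {fix_rate:.0f}%, "
      f"merge {merge_rate:.0f}%, verified {end_to_end_verified:.1f}%")
# 4.0/day, fix 30%, merge 75%, verified 17.5%
```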
## Feature Availability by Plan
| Feature | Free | Pro | Team |
|---|---|---|---|
| Analysis stats | ✅ | ✅ | ✅ |
| Auto-fix stats | ✅ | ✅ | ✅ |
| Repository filter | ✅ | ✅ | ✅ |
| Usage overview | ✅ | ✅ | ✅ |
| Recent analyses | ✅ | ✅ | ✅ |
| Feedback | ✅ | ✅ | ✅ |
| Top Failing Tests | — | ✅ | ✅ |
| Failures by Repo | — | ✅ | ✅ |
| Weekly Trends | — | ✅ | ✅ |
| Export CSV | — | ✅ | ✅ |
| Custom Date Range | — | — | ✅ |
| SLA Metrics | — | — | ✅ |