Findings

A finding is something Hyrax surfaced for your attention — anything from a SQL injection in production code to a refactor opportunity in a tangled module. Findings are the primary unit of work in Hyrax: workflows produce them, fixes resolve them, tickets track them.

There are two kinds:

  • Findings — problems worth fixing (audit's output). Security holes, missing indexes, flaky tests, unhandled error paths, deprecated dependencies. The action verb is Fix.
  • Suggestions — opportunities worth considering (improve's output, plus the architectural-suggestion pass that runs at the end of every audit). Refactors, modernizations, abstractions worth introducing or deleting. The action verb is Implement.

Both kinds share the same shape, the same lifecycle, the same dedup pipeline, the same HYRAX-N ref space, and the same backing schema. The kind shows up as a badge on the row and changes the action verb on the detail page; everything else is unified.

What's in a finding

Every finding includes:

  • Title — one sentence stating what was found.
  • Description — what's there and why it matters. No file paths stuffed inside; just the why.
  • Suggested change — what to do about it.
  • File and line (when applicable) — exactly where it lives.
  • Code excerpt — the relevant snippet, syntax-highlighted.
  • Kind — finding or suggestion.
  • Priority, category, tool, source — for sorting and filtering.
  • Ref — a stable ID like HYRAX-42 you can reference in conversations and tickets.

Findings are designed to be readable by both humans and AI agents. Copy one into Claude Code, Cursor, or a Slack thread and it has everything someone needs to act.
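The fields above can be pictured as a single record. This is an illustrative sketch only — the field names are assumptions for the example, not Hyrax's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    """Illustrative shape of a finding; names are assumed, not Hyrax's schema."""
    ref: str                        # stable ID, e.g. "HYRAX-42"
    kind: str                       # "finding" or "suggestion"
    title: str                      # one sentence stating what was found
    description: str                # what's there and why it matters
    suggested_change: str           # what to do about it
    priority: str                   # "P0" .. "P3"
    category: str                   # one of the six categories
    tool: str                       # the audit tool that produced it
    source: str                     # "agent" or "scanner"
    file: Optional[str] = None      # where it lives, when applicable
    line: Optional[int] = None
    excerpt: Optional[str] = None   # the relevant code snippet

f = Finding(
    ref="HYRAX-42", kind="finding",
    title="SQL injection in search endpoint",
    description="User input is interpolated into a query string.",
    suggested_change="Use parameterized queries.",
    priority="P0", category="Security",
    tool="injection-scan", source="agent",
)
```

Because the record is self-contained, serializing one of these is all it takes to hand a finding to an agent or paste it into a thread.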

Priority

Priority is one column with four values — P0, P1, P2, P3 — and a kind-aware label so the badge matches how teams talk about each kind:

| Priority | Finding label | Suggestion label | What it means |
|----------|---------------|------------------|---------------|
| P0 | Critical | High impact | Active risk or top-leverage opportunity. Drop everything (findings) / next sprint (suggestions). |
| P1 | High | Medium impact | Significant defect or worthwhile change. Schedule this sprint / cycle. |
| P2 | Medium | Low impact | Real, not urgent. Pick up when convenient. |
| P3 | Low | — | Minor — nits, cosmetic, deferred work. (Most suggestions land at P0–P2.) |

Priority is set by the tool that produced the finding and double-checked during post-processing. Scanner findings are capped at P2 unless the underlying pattern is explicitly high-confidence.
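The scanner cap can be sketched as a small post-processing rule. This is an assumption-laden illustration of the behavior described above, not Hyrax's actual code:

```python
PRIORITIES = ["P0", "P1", "P2", "P3"]  # P0 is highest

def normalize_priority(priority: str, source: str, high_confidence: bool) -> str:
    """Cap scanner findings at P2 unless the pattern is explicitly
    high-confidence. A sketch of the rule, not Hyrax's implementation."""
    if source == "scanner" and not high_confidence:
        # Taking the max index picks the *lower* of the two priorities.
        capped = max(PRIORITIES.index(priority), PRIORITIES.index("P2"))
        return PRIORITIES[capped]
    return priority

normalize_priority("P0", "scanner", high_confidence=False)  # capped to "P2"
normalize_priority("P0", "scanner", high_confidence=True)   # stays "P0"
normalize_priority("P0", "agent",   high_confidence=False)  # stays "P0"
```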

Categories

Findings are grouped into six domains so you can scan a category and triage in batches:

  • Security — auth, injection, secrets, data exposure
  • Correctness — error handling, concurrency, edge cases, validation, tests
  • Maintainability — readability, complexity, dead code, refactors, modernization
  • Performance — hot paths, memory, queries, frontend perf, bundle size
  • Architecture — module boundaries, abstractions, schema, contracts
  • Operations — config, deployment, observability, dependencies, CI

The full list of tools (35+) lives in the audit tool selector — each one is a focused lens that maps onto one of the six categories.

Source: agent vs scanner

Findings come from two engines:

  • Agent (LLM) — a Claude agent read your code, followed the logic, and reasoned about it. High-context, high-confidence, slower. Most findings in a standard audit.
  • Scanner (SCAN) — a fast pattern match. No reasoning, just regex and AST checks against a curated pattern library. Cheap and complete, but blunter.

When both engines flag the same thing, the agent finding wins — it has more context. You'll see a SCAN badge on scanner-only findings; agent findings show Agent (LLM).

Lifecycle

A finding moves through four states:

new → triaged → in_progress → closed
  • new — newly detected, awaiting triage.
  • triaged — reviewed; possibly pushed to Linear or your tracker (ticket_url is orthogonal — set independently of state).
  • in_progress — a fix or task PR is open.
  • closed — terminal. Carries a close_reason:
    • fixed — verified resolved (PR merged or marked fixed).
    • dismissed — false positive, won't-fix, duplicate, or irrelevant. Won't appear again.
    • expired — terminal but unverified (e.g. superseded).
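The state machine above can be sketched as a transition guard. The allowed moves here are assumptions inferred from the lifecycle description (e.g. that a closed-fixed finding re-opens as new); this is not Hyrax's actual code:

```python
# Assumed legal moves, inferred from the lifecycle above.
TRANSITIONS = {
    "new": {"triaged", "in_progress", "closed"},
    "triaged": {"in_progress", "closed"},
    "in_progress": {"closed"},
    "closed": {"new"},  # closed-fixed findings re-open if the issue comes back
}
CLOSE_REASONS = {"fixed", "dismissed", "expired"}

def transition(state, new_state, close_reason=None):
    """Guard one lifecycle move; raises on an illegal transition."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move {state} -> {new_state}")
    if new_state == "closed" and close_reason not in CLOSE_REASONS:
        raise ValueError("closing a finding requires a close_reason")
    return new_state
```

Note that ticket_url is deliberately absent from the guard — per the lifecycle above, it is set independently of state.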

Re-runs merge with previous results — they don't replace. Open findings get their last_seen bumped. Closed-fixed findings re-open if the underlying issue comes back. Dismissed findings stay quiet. Genuinely new findings get added.
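The merge rules can be sketched as a pass over previous results. This assumes rows keyed by ref with status, close_reason, and last_seen fields — an illustration of the behavior, not Hyrax's implementation:

```python
def merge_rerun(existing, detected_refs, now):
    """Merge a re-run into previous results, per the rules above:
    bump open findings, re-open closed-fixed ones, leave dismissed quiet.
    (Sketch: `existing` maps ref -> row dict; mutates rows in place.)"""
    for ref, row in existing.items():
        if ref not in detected_refs:
            continue  # not re-detected this run; untouched
        if row["status"] != "closed":
            row["last_seen"] = now            # open finding seen again
        elif row["close_reason"] == "fixed":
            row["status"] = "new"             # fixed issue came back: re-open
            row["close_reason"] = None
            row["last_seen"] = now
        # dismissed / expired: stays quiet, no change
    # Genuinely new findings (refs not in `existing`) are inserted separately.
```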

Refs

Every finding gets a per-tenant sequential ref rendered as HYRAX-N — HYRAX-1, HYRAX-2, … — regardless of kind. The kind shows up as a badge on the row, not in the ref. Refs are stable across re-runs and PRs; quote a ref in commit messages, Linear tickets, or Slack and anyone on the team can paste it back into Hyrax to pull up the finding.

Dedup

When new findings come in, the upsert path matches against existing rows in two gates: an exact signature match (cheap, indexed), then on miss an embedding-based similarity match (Bedrock Titan amazon.titan-embed-text-v2:0 + pgvector cosine ANN, threshold 0.85). Re-detections collapse onto the existing row.
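The two gates can be sketched in miniature. The signature fields and the in-memory cosine scan below are illustrative stand-ins (production uses an indexed signature column and pgvector's ANN search, per the description above):

```python
import hashlib

def signature(finding):
    """Gate 1 key: a cheap exact-match hash. The fields used here are
    an assumption for illustration, not Hyrax's real signature."""
    key = f"{finding['tool']}|{finding.get('file')}|{finding['title']}"
    return hashlib.sha256(key.encode()).hexdigest()

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def find_match(new, existing, threshold=0.85):
    """Two-gate dedup: exact signature first, then embedding similarity."""
    sig = signature(new)
    for row in existing:
        if signature(row) == sig:
            return row                                   # gate 1: exact hit
    best = max(existing, default=None,
               key=lambda r: cosine(new["embedding"], r["embedding"]))
    if best and cosine(new["embedding"], best["embedding"]) >= threshold:
        return best                                      # gate 2: similarity hit
    return None                                          # genuinely new
```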

Tie-break across kinds: if the same underlying issue is flagged once by audit (finding) and once by improve (suggestion), the finding wins. The action you really want there is "Fix," not "Implement."

Filters that matter

The findings list supports filtering by kind, priority, category, source, status, and tool. The three filters most teams use:

  • Kind = Findings — focus on problems first; come back for suggestions on a separate pass.
  • Priority ≥ P1 — the "what should I be working on" view.
  • Source = Agent — skip scanner findings when you want only the high-context calls during deep triage.
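Those three views reduce to simple predicates. A sketch over assumed row dicts (note that "P0" sorts before "P1" lexically, so "priority ≥ P1" means P0 or P1):

```python
def filter_findings(rows, kind=None, min_priority=None, source=None):
    """Apply the common findings-list filters; None means "don't filter".
    Illustrative only -- the real list filters server-side."""
    out = rows
    if kind:
        out = [r for r in out if r["kind"] == kind]
    if min_priority:
        # "P0" < "P1" < "P2" < "P3" holds lexically, so <= keeps
        # everything at or above the requested urgency.
        out = [r for r in out if r["priority"] <= min_priority]
    if source:
        out = [r for r in out if r["source"] == source]
    return out
```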

Re-running

Re-audit (or re-improve) when:

  • A release shipped — see what landed.
  • You merged a fix or task PR — confirm the finding is gone.
  • A week passed — keep the backlog fresh.
  • You changed Hyrax's audit configuration — see the delta.

Cost is the same as the first run; the merge logic ensures you don't see the same thing twice.