Hosted MCP server
Hyrax exposes a hosted Model Context Protocol server so target-repo agents (Claude Code, Cursor, Copilot) can query the live discovery surface instead of grepping the published markdown skills. MCP is additive — repo-committed skill files still ship via publish. The MCP path wins when the answer needs to be fresh ("what migrations are pending right now?") or precise ("what skill applies to this file path?").
- Hosted endpoint: https://mcp.get-hydra.dev/mcp
- Auth: `Authorization: Bearer hk_live_…` — the key is a self-contained credential; tenant is resolved from the key's sha256 hash via `public.api_keys.key_hash` (UNIQUE), no separate header needed.
- Transport: streamable HTTP (one POST/GET endpoint, JSON-RPC 2.0)
- Source: `apps/api/app/mcp/` in this repo (mounted at `/mcp` of the FastAPI app — #29 collapsed the standalone `apps/mcp/` deployment surface)
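The key-to-tenant resolution in the auth bullet can be sketched in a few lines. `API_KEYS` and `lookup_tenant` are hypothetical stand-ins for the real `public.api_keys` query; only the sha256-of-key detail comes from the doc:

```python
import hashlib
from typing import Dict, Optional

# Hypothetical stand-in for the public.api_keys table (key_hash is UNIQUE).
API_KEYS: Dict[str, str] = {}  # hex sha256 of the raw key -> tenant slug

def key_hash(raw_key: str) -> str:
    # The bearer key itself is the credential; only its sha256 is stored.
    return hashlib.sha256(raw_key.encode("utf-8")).hexdigest()

def lookup_tenant(bearer_key: str) -> Optional[str]:
    # Resolve the tenant from the key hash alone; no separate tenant header.
    return API_KEYS.get(key_hash(bearer_key))
```

Registering a key's hash once is enough for every later request to resolve its tenant.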
Catalog as source of truth
Every MCP tool is defined by a record in src/hyrax/api_catalog/_seed_records.py paired with a handler in src/hyrax/api_catalog/agent_ops.py. Three projections read from that single catalog: the MCP server (apps/api/app/mcp/tools.py), the REST agent router (apps/api/app/routers/agent.py), and the in-app chat agent via src/hyrax/api_catalog/pydantic_ai_projection.py::project_to_pydantic_ai. None of the surfaces declares operations inline. Adding a new agent operation means appending one record to SEED_RECORDS plus its handler; all three projections pick it up on next process boot.
The catalog entry shape: name, description, Pydantic params model, Pydantic response model, auth class, mcp_eligible flag, requires_approval flag (chat-only — see below), REST binding (HTTP method + path), and a handler function (conn, params) -> response. The same handler runs in all projections, so MCP, REST, and chat cannot drift on tool semantics, parameter shapes, or response contracts.
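A rough dataclass mirror of that entry shape (field names are illustrative; the real records use Pydantic models and live in `src/hyrax/api_catalog/_seed_records.py`):

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass(frozen=True)
class CatalogRecord:
    # Illustrative mirror of the catalog entry shape described above.
    name: str
    description: str
    params_model: type                # Pydantic params model in the real catalog
    response_model: type              # Pydantic response model
    auth: str                         # auth class
    mcp_eligible: bool = True
    requires_approval: bool = False   # chat-only gate
    rest: Optional[dict] = None       # {"method": ..., "path": ...} when REST-bound
    handler: Optional[Callable[[Any, Any], Any]] = None  # (conn, params) -> response
```

Because the handler rides in the record, every projection invokes the same function object, which is what keeps the three surfaces from drifting.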
requires_approval is chat-only. Mutation entries (e.g. submit_job, dismiss_finding) carry requires_approval=True. The MCP and REST projections refuse to register approval-required entries — programmatic clients hit the typed mutation routes (POST /api/repos/.../jobs, POST /api/findings/{ref}/dismiss, etc.) directly. The chat router catches the resulting DeferredToolRequests output, surfaces the proposal as a tool_pending_approval SSE frame, and resumes via DeferredToolResults when the SPA approves.
The same read-only operations are reachable via REST under /api/agent/... for SDK consumers — useful when an agent already speaks HTTP+JSON and doesn't need an MCP transport. Both REST and MCP surfaces accept the same hk_live_ key, return the same Pydantic-shaped responses, and respect the same tenant scope.
Tool surface
Every per-repo tool takes the natural-key triple (github_org, github_repo, github_base_branch) — three separate string fields, not a slug. Call list_repos first to learn which triples this tenant owns; everything else takes those three fields plus tool-specific params.
| Tool | What it answers |
|---|---|
| `list_repos` | Which repos does this tenant have? Returns the triple per repo (no positional args) |
| `repo_overview` | Stack profile + principles.md + domain.md for one repo, in a single call |
| `read_repo_knowledge` | Single bundle: profile + principles + skills + open directives, version-pinned by publish-meta hash |
| `canonical_pattern` | Patterns markdown (optionally filtered by area keyword) — answers "the canonical form is X" |
| `applicable_rules` | Skills that apply to a given file path — answers "what should I know before editing this file?" |
| `recent_issues` | Open issues (any kind) touching a file path; pass kind='finding'/'improvement'/'directive' to scope |
| `pending_migrations` | Open I-* improvement directives — the "what migrations are pending?" feed |
| `explain_directive` | Full body of a single finding by ref (A-12, I-7) |
| `search_issues` | Substring search across open issues, optionally scoped by kind |
| `list_howtos` | Slugs of every published how-to guide |
| `howto_guide` | Body of one how-to guide by slug |
| `list_issues` | Cursor-paginated unified issues list with kind/status/severity/repo filters |
| `triage_issue` | Keep or dismiss one issue (chat-only, approval-free, gated on triage_issues) |
| `reopen_issue` | Move a kept/dismissed/fixed issue back to open (gated on triage_issues) |
| `create_ticket` | File a tracker ticket (Linear/Jira) for one finding ref |
| `refine_task` | Pre-job directive expansion (~$0.05); does NOT create a job |
| `update_repo` | Flip the four chat-exposed repo settings (review_enabled, fix_mode, …) |
| `set_repo_guidance_slot` / `clear_repo_guidance_slot` | Upsert / drop one repo guidance slot |
| `set_org_guidance_slot` / `clear_org_guidance_slot` | Upsert / drop one org-level guidance slot |
| `set_organization_context` | Set the tenant-wide LLM context blob |
| `set_tenant_setting` / `clear_tenant_setting` | Upsert / drop one of the typed tenant_config knobs |
| `set_learning_flags` | Toggle tenant self-improvement / global-contribution flags |
| `discovery_doc_opened` | Telemetry callback — record that the calling external agent just opened a .hyrax/discovery/ doc (MCP-only) |
Example invocation (JSON-RPC tools/call):

```json
{
  "name": "recent_issues",
  "arguments": {
    "github_org": "myorg",
    "github_repo": "myrepo",
    "github_base_branch": "main",
    "file_path": "src/api/handlers.py",
    "kind": "finding",
    "limit": 5
  }
}
```
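On the wire, that `name`/`arguments` pair travels inside a standard JSON-RPC 2.0 envelope under `method: "tools/call"`. A minimal builder, with illustrative `id` handling:

```python
import json
from itertools import count

_ids = count(1)  # simple monotonically increasing request id

def tools_call(name: str, arguments: dict) -> str:
    # Wrap a tool invocation in the JSON-RPC 2.0 envelope MCP expects.
    payload = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(payload)
```

POSTing the resulting string to the endpoint with the bearer header is all a hand-rolled client needs.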
Approval-required mutations (submit_job, cancel_job, request_fix, retry_job, register_repo, delete_repo, reset_repo) are not projected to MCP — programmatic clients hit the typed REST routes directly. Approval-free mutations (triage_issue, reopen_issue, create_ticket, update_repo, guidance / settings ops) are MCP-projected so target-repo agents can take routine actions without round-tripping through the SPA.
discovery_doc_opened — telemetry callback (#192)
Cooperating external IDE agents (Claude Code, Cursor, Copilot) can call discovery_doc_opened(repo, doc_path) whenever they read a file under the target repo's .hyrax/discovery/ tree. The handler INSERTs one row into the per-tenant discovery_doc_reads table tagged agent='external-mcp' — distinct from the agent='hydra-<tool>' rows the internal audit / improve / review / learn workers stamp. The aggregate read by the next discover run uses both buckets to decide which guides to keep publishing; the external-mcp slice is the load-bearing signal for "is the target-audience IDE actually opening this file?"
The tool is MCP-only — the /api/agent REST surface is admin-gated, which is the wrong shape for telemetry from any authenticated tenant member. The MCP per-key rate limit (120/min, the same ceiling every MCP tool shares) is generous enough for the natural cadence of doc opens during an IDE session.
Calling it is optional; an external agent that ignores it costs nothing beyond the missing signal. Hyrax's own publish pruning is conservative — a guide with zero reads is still kept until the heuristic in run_deep_discover decides to retire it.
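For an agent that does cooperate, the call shape is small. The argument names below are an assumption (the usual repo triple plus a `doc_path`, with an invented file name); confirm them against the tool's schema from `tools/list`:

```python
def should_report(doc_path: str) -> bool:
    # Only files under the discovery tree are worth reporting.
    return doc_path.replace("\\", "/").startswith(".hyrax/discovery/")

# Hypothetical arguments shape for discovery_doc_opened.
arguments = {
    "github_org": "myorg",
    "github_repo": "myrepo",
    "github_base_branch": "main",
    "doc_path": ".hyrax/discovery/example-guide.md",
}
```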
Same operations via REST
The catalog also projects to FastAPI routes under /api/agent/... so SDK clients (TypeScript or Python) get typed bindings via make api-types. Path shape mirrors the existing /api/repos/{org}/{repo}/{branch}/... convention:
```
GET /api/agent/repos                                              -> list_repos
GET /api/agent/repos/{org}/{repo}/{branch}/overview               -> repo_overview
GET /api/agent/repos/{org}/{repo}/{branch}/knowledge              -> read_repo_knowledge
GET /api/agent/repos/{org}/{repo}/{branch}/issues?kind=finding    -> recent_issues
GET /api/agent/repos/{org}/{repo}/{branch}/issues/{ref}           -> explain_directive
GET /api/agent/repos/{org}/{repo}/{branch}/issues/search?query=…  -> search_issues
…
```
OpenAPI emits a typed response_model for every entry, so the SPA SDK + Python hydra client can call any agent operation with full type checking.
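Without generated bindings, the path convention is simple enough to assemble by hand. A sketch of the URL and header assembly only (`BASE` and the helper names are illustrative; plug the result into `httpx`/`requests` as usual):

```python
BASE = "https://api.get-hydra.dev"

def agent_url(org: str, repo: str, branch: str, op_path: str) -> str:
    # Mirror the /api/agent/repos/{org}/{repo}/{branch}/... convention.
    return f"{BASE}/api/agent/repos/{org}/{repo}/{branch}/{op_path}"

def auth_headers(key: str) -> dict:
    # Same hk_live_ bearer key as the MCP surface.
    return {"Authorization": f"Bearer {key}"}
```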
Auth + rate limits
The MCP server validates the hk_live_ bearer key against the tenant schema on every request — same revoke window, same expiry semantics, same audit trail as the JSON API. Per-key rate limit defaults to 120 requests / 60 seconds (override with HYRAX_MCP_RATE_LIMIT_PER_WINDOW). Hitting the cap returns HTTP 429 with a Retry-After header.
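A client that respects the cap only needs to read `Retry-After` off the 429 before retrying. A small helper, assuming the delay-in-seconds form of the header (the HTTP-date form is treated as absent for brevity):

```python
def retry_after_seconds(headers: dict, default: float = 1.0) -> float:
    # Parse Retry-After as a plain number of seconds; fall back to a
    # default wait when the header is missing or not numeric.
    value = headers.get("Retry-After")
    if value is None:
        return default
    try:
        return max(0.0, float(value))
    except ValueError:
        return default
```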
There is no MCP-side OAuth flow — target-repo agents just need the same hk_live_ key the team already uses for the JSON API. Mint one in the Settings → API keys tab if you don't have one yet.
Wiring it into your agent
Claude Code (~/.claude/mcp.json)
```json
{
  "mcpServers": {
    "hydra": {
      "type": "http",
      "url": "https://mcp.get-hydra.dev/mcp",
      "headers": {
        "Authorization": "Bearer hk_live_REPLACE_ME"
      }
    }
  }
}
```
Then in a session: /mcp lists configured servers; tools appear as mcp__hydra__list_repos, mcp__hydra__canonical_pattern, etc.
Cursor (.cursor/mcp.json in the target repo)
```json
{
  "mcpServers": {
    "hydra": {
      "url": "https://mcp.get-hydra.dev/mcp",
      "transport": { "type": "streamable-http" },
      "headers": {
        "Authorization": "Bearer hk_live_REPLACE_ME"
      }
    }
  }
}
```
Cursor surfaces MCP tools in the agent panel — turn the hydra block on and the tools appear in the picker.
GitHub Copilot
Copilot is moving toward MCP support; once it ships, the same URL + headers apply. Until then, the JSON API at api.get-hydra.dev/api/... is the supported path for Copilot agents.
Generic MCP client (e.g. for testing)
```shell
curl -sS https://mcp.get-hydra.dev/mcp \
  -H 'Authorization: Bearer hk_live_REPLACE_ME' \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```
The response lists every tool with its JSON-Schema input shape — handy for sanity-checking that auth works before pointing an agent at it.
When to use MCP vs. published skills
- Use MCP when the answer changes frequently (open findings, pending migrations, last-audit drift) or when the agent's question is parametric ("what applies to this file?").
- Use the published `.hyrax/discovery/` files when the agent needs offline access (no network) or when the repo's CLAUDE.md / Cursor rules must reference a stable in-repo path.
Both surfaces are kept in sync by publish — MCP reads the same repo_files rows the publish workflow writes, so a freshly published skill is queryable through MCP within seconds.
Adding a new agent operation
- Add the params + response Pydantic models and the handler function to `src/hyrax/api_catalog/agent_ops.py`.
- Append a record to `SEED_RECORDS` in `src/hyrax/api_catalog/_seed_records.py` referencing the handler. Default `mcp_eligible=True`; provide a `rest` dict (method + path) if you want REST too.
- Update the `_REQUIRED_TOOLS` set in `tests/test_mcp_tools_registry.py` and the `_EXPECTED_AGENT_OPS` set in `tests/test_api_catalog.py`.
- Update the table above.
The MCP and REST projections regenerate automatically from the registry on next pod boot. No changes needed to apps/api/app/mcp/tools.py or the agent router.
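The steps above, sketched end to end. Dataclasses stand in for the real Pydantic models, and both the record shape and the `ping_repo` operation are invented for illustration:

```python
from dataclasses import dataclass

# Stand-ins for the real Pydantic params/response models in agent_ops.py.
@dataclass
class PingParams:
    github_org: str
    github_repo: str
    github_base_branch: str

@dataclass
class PingResponse:
    ok: bool

def ping_repo(conn, params: PingParams) -> PingResponse:
    # Handler signature per the catalog contract: (conn, params) -> response.
    return PingResponse(ok=True)

# The record appended to SEED_RECORDS (field names are illustrative):
NEW_RECORD = {
    "name": "ping_repo",
    "description": "Hypothetical example operation.",
    "params_model": PingParams,
    "response_model": PingResponse,
    "mcp_eligible": True,  # default
    "rest": {"method": "GET",
             "path": "/api/agent/repos/{org}/{repo}/{branch}/ping"},
    "handler": ping_repo,
}
```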
Operating notes
- The MCP surface ships inside the API image (`apps/api/Dockerfile`) and rides the API deployment — `app.mount("/mcp", ...)` in `apps/api/app/main.py` exposes it. The same uvicorn worker pool serves REST + MCP; if worker-pool starvation ever shows up under real load profiles, re-split.
- Health checks live at the API's `/health/alive` (process-up) and `/health/ready` (DB + tenant-cache OK) — the parent FastAPI app owns those routes; the mounted `/mcp` sub-app no longer carries duplicates.
- Failures show up in the standard Hyrax log channel (`hyrax.mcp.*`) — `hyrax.mcp.auth.db_error` is the one to watch for tenant-side outages.
- Rate-limit state is in-process today (single-pod). When we scale horizontally, swap `apps/api/app/mcp/rate_limit.py` for a Redis-backed counter; the `check_rate_limit` interface is the only call site.
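A fixed-window sketch of what could sit behind that `check_rate_limit` call site. This in-memory version is illustrative (the real module is `apps/api/app/mcp/rate_limit.py`); the Redis swap would amount to replacing the dict with INCR plus EXPIRE on a per-window key:

```python
import time
from typing import Dict, Optional, Tuple

WINDOW_SECONDS = 60
LIMIT = 120  # documented default; HYRAX_MCP_RATE_LIMIT_PER_WINDOW overrides it

_counters: Dict[str, Tuple[int, int]] = {}  # key_hash -> (window index, count)

def check_rate_limit(key_hash: str, now: Optional[float] = None) -> bool:
    # Fixed-window counter: True means allowed, False means respond 429.
    now = time.time() if now is None else now
    window = int(now // WINDOW_SECONDS)
    start, count = _counters.get(key_hash, (window, 0))
    if start != window:
        start, count = window, 0  # new window: reset the count
    if count >= LIMIT:
        return False
    _counters[key_hash] = (start, count + 1)
    return True
```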