# AI editor setup overview
How SiteCMD's MCP server lets your AI editor read scan results, request fixes, and verify them.
If your AI editor speaks MCP (Model Context Protocol), it can talk to SiteCMD directly. Your AI sees your scan results, can pull fix prompts written specifically for the failing checks, and can verify its own fixes by comparing scans before and after.
This page covers what MCP is, what SiteCMD exposes through it, and the pieces that work the same across every supported editor. For per-editor setup commands, see the page for your editor:
- Cursor
- Claude Code
- Windsurf
- VS Code (with GitHub Copilot or Claude Code extension)
- GitHub Copilot
- Cline
- JetBrains IDEs
- Zed
- OpenAI Codex CLI
## What MCP is
Model Context Protocol is an open standard for letting AI tools talk to other tools. An MCP server exposes a set of named functions (called “tools”), each with a description and a parameter schema. The AI editor decides when to call which one based on what you’re asking it to do.
Three things make this useful:
- The AI doesn’t have to guess. When you say “fix the failing accessibility issues on this page,” it can call `get_issues` to see the real list, not invent plausible-looking ones.
- Permissions stay with the user. MCP servers run locally, on your machine, under your account. The AI can’t reach into SiteCMD’s data without your explicit consent to use the MCP integration.
- The protocol is the same across tools. Configure your editor once, and the same MCP server works whether you switch editors next month.
SiteCMD ships an MCP server (`sitecmd-mcp`) as part of the desktop app. When configured, your editor spawns it as a subprocess and talks to it over stdio.
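The per-editor pages have the exact setup commands, but most MCP-capable editors accept a JSON entry along these lines. Note that the `mcpServers` key and the bare `sitecmd-mcp` command shown here are assumptions based on the config shape several MCP clients use; your editor’s page has the real file location and command path.

```json
{
  "mcpServers": {
    "sitecmd": {
      "command": "sitecmd-mcp",
      "args": []
    }
  }
}
```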
## What SiteCMD exposes
The MCP server provides these tools to your AI editor:
| Tool | What it does |
|---|---|
| `get_projects` | List every project tracked in SiteCMD, with URLs and detected frameworks. |
| `get_scan_score` | Get the latest score and category breakdown for a specific site URL. |
| `get_issues` | List failing issues from the latest scan. Can be filtered by severity, category, and source. |
| `get_fix_prompts` | Return fix prompts for selected failing checks. These are written with enough context that the AI can act on them directly. |
| `get_scan_history` | Return scan score history over time for a URL, useful for trend analysis. |
| `get_dismissed_issues` | Return issues that have been dismissed or marked not applicable. The AI should skip these when suggesting fixes. |
| `compare_scans` | Compare the two most recent scans for a URL. Shows what was fixed, what’s new, and what regressed. The right tool to call after the AI has made changes. |
| `request_scan` | Return guidance to the AI about how to ask the user to run a scan. The actual scan still runs in the desktop app, not via MCP. |
The naming and behavior of these tools are identical across every editor. If the editor’s documentation says “this is what we’ll send to the MCP server,” that’s what SiteCMD will receive.
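To make the filtering behavior of `get_issues` concrete, here is a small simulation. This is an illustrative sketch only: the parameter names mirror the table above, but the issue fields and return shape are assumptions, not SiteCMD’s actual wire format.

```python
# Stand-in data for "the latest scan" of a site. Field names are
# hypothetical; real results come from the SiteCMD MCP server.
SCAN_ISSUES = [
    {"check": "img-alt-text", "category": "accessibility", "severity": "high"},
    {"check": "missing-csp", "category": "security", "severity": "critical"},
    {"check": "meta-description", "category": "seo", "severity": "low"},
]

def get_issues(url, severity=None, category=None):
    """Simulate the get_issues tool: failing checks, optionally filtered."""
    issues = SCAN_ISSUES  # stand-in for the latest scan of `url`
    if severity:
        issues = [i for i in issues if i["severity"] == severity]
    if category:
        issues = [i for i in issues if i["category"] == category]
    return issues

print(get_issues("https://example.com", category="security"))
```

The AI editor composes filters like these itself, based on what you asked for; you never write this call by hand.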
## The typical workflow
A normal MCP-driven session looks like this:
- You ask your editor a question. Something like “fix the failing security issues on this scan.”
- The AI calls `get_issues`. It sees the actual list of failing checks, with severity, location, and category.
- The AI calls `get_fix_prompts`. For the issues it’s chosen to act on, it pulls fix prompts that include the relevant context: detected framework, check ID, what the check is testing.
- The AI makes edits. It modifies your source files with the changes the fix prompts suggested.
- You re-run the scan. Either by clicking Run Scan in SiteCMD, or by letting the AI prompt you to. The AI does not start scans on its own.
- The AI calls `compare_scans`. It sees what its changes fixed, what they didn’t fix, and what (if anything) they broke.
- Iterate. If something regressed, the AI sees that immediately and can address it.
This loop is the entire point of the integration. Without MCP, your AI is guessing at what’s wrong with your site. With MCP, it’s working from the actual scan output.
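The comparison step at the heart of this loop can be sketched with set operations. This is a hypothetical reimplementation, not SiteCMD’s code: it diffs two scans’ failing-check IDs into buckets. (It lumps brand-new failures and regressions of previously fixed checks together; telling them apart would need older history, which is what `get_scan_history` is for.)

```python
# Hypothetical sketch of what compare_scans reports, given the sets of
# failing check IDs from the previous and latest scans.
def compare_scans(previous, latest):
    prev, curr = set(previous), set(latest)
    return {
        "fixed": sorted(prev - curr),          # failing before, passing now
        "new": sorted(curr - prev),            # failing now, not before
        "still_failing": sorted(prev & curr),  # unchanged by the edits
    }

before = ["img-alt-text", "missing-csp", "meta-description"]
after = ["meta-description", "mixed-content"]
print(compare_scans(before, after))
```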
## What MCP does not do
- Run scans by itself. Scans are a user action. The desktop app runs them. The AI can ask you to run one (via `request_scan`), but it can’t push the button.
- Modify SiteCMD data. The MCP server is read-only with respect to your scan results. It can’t dismiss issues, change snooze states, or mark things fixed. Those are user actions in the desktop app.
- Reach across projects. When the AI calls `get_issues`, it asks for a specific site URL. There’s no “give me everything” mode.
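The per-project scoping rule amounts to a guard like the following. Everything here (the handler name, the data shape, the error message) is an assumption made for illustration; the point is simply that every read tool requires a specific site URL up front.

```python
# Illustrative sketch of per-URL scoping: a read tool rejects any call
# that doesn't name a site, so there is no unscoped "all projects" query.
ALL_ISSUES = [
    {"url": "https://shop.example.com", "check": "missing-csp"},
    {"url": "https://blog.example.com", "check": "img-alt-text"},
]

def handle_get_issues(params):
    url = params.get("url")
    if not url:
        raise ValueError("get_issues requires a specific site URL")
    return [i for i in ALL_ISSUES if i["url"] == url]

print(handle_get_issues({"url": "https://blog.example.com"}))
```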
## Auth model
The MCP server runs on your machine, as your user, against the same local database the desktop app uses. There is no separate API key for MCP. If the desktop app can see a project, the MCP server can read it. If you don’t want a specific editor to have access, don’t configure the MCP server in that editor.
This is intentionally simpler than the API-key model some MCP servers use. SiteCMD’s data is local; no remote endpoint needs to authenticate the call.
## Multiple editors
If you use more than one AI editor (Cursor in the morning, Claude Code in the afternoon), they can all point at the same SiteCMD MCP server. You configure each one separately, but the underlying server and data are shared.
You don’t need to “switch” SiteCMD between editors. Each editor spawns its own subprocess copy of `sitecmd-mcp` and talks to it independently. The desktop app doesn’t have to be running for MCP to work, but if it isn’t, scan results will only be as fresh as the last time the app was open.
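Under the hood, each editor is doing the same thing over the subprocess’s stdin/stdout: exchanging JSON-RPC 2.0 messages, one per line, as the MCP stdio transport specifies. The sketch below builds such a message on the editor’s side; `make_request` is a hypothetical helper, but `tools/call` with `name` and `arguments` is the standard MCP method shape.

```python
import json

# Build one newline-delimited JSON-RPC 2.0 message, as an MCP client
# would write it to the sitecmd-mcp subprocess's stdin.
def make_request(req_id, method, params=None):
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# e.g. the first tool call of a session:
line = make_request(1, "tools/call", {"name": "get_projects", "arguments": {}})
print(line)
```

Because every editor speaks this same wire format, the server side never needs to know which editor is on the other end.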
## When MCP isn’t enough
For workflows that need full control (run a scan from CI, fail a deploy on a critical regression, ingest results into a custom pipeline), the CLI is the right tool. See CLI for CI/CD for that.
MCP is for AI editors. CLI is for scripts and pipelines. They don’t compete; they cover different surfaces.