
Fixing with AI editors

Use Cursor, Claude Code, Windsurf, or any MCP-capable AI editor to actually fix the issues SiteCMD finds.

You’ve run a scan. You have findings. Some of them are obvious one-line fixes; most of them aren’t. This is where AI editors earn their keep, and where SiteCMD’s MCP integration changes how the work feels.

This page is the overview of the fix-with-AI workflow. For setting up your specific editor, see AI editors.

The loop

The basic shape of an AI-assisted fix loop:

  1. You ask your editor to fix something. “Fix the failing security issues on this scan.”
  2. The editor calls SiteCMD’s MCP server. It gets the real list of failing checks, not a hallucinated one.
  3. The editor reads the relevant fix prompts. Each prompt is written for a specific check, with framework-detection applied if you’ve linked a project folder.
  4. The editor makes the edits. Modifies your source files based on the prompts.
  5. You re-run a scan. Click Run Scan in SiteCMD (or, on the CLI, run sitecmd scan --diff).
  6. The editor confirms the fix. It calls SiteCMD’s compare_scans tool and sees which findings actually got resolved by its changes.
  7. Iterate if needed.

What makes this different from “ask the AI to write some code” is the verification step. The AI sees its own work product, evaluated against the same checks that caught the issue in the first place. If the fix didn’t work, the AI sees that immediately.
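
To make steps 2 and 3 concrete, here is a rough sketch of what an MCP-capable editor does on your behalf; you never write this code yourself. The server launch command, the list_findings tool name, and the argument shape are assumptions for illustration — only compare_scans is named in these docs, and it comes back under “The verification habit” below.

  // Sketch of the editor's side of steps 2 and 3, using the MCP TypeScript SDK.
  // The "sitecmd mcp" launch command and the "list_findings" tool name are
  // hypothetical; your editor wires this up from its own config snippet.
  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
  import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

  const transport = new StdioClientTransport({ command: "sitecmd", args: ["mcp"] });
  const client = new Client({ name: "fix-loop-sketch", version: "0.0.1" }, { capabilities: {} });
  await client.connect(transport);

  // Step 2: ask the server for the real list of failing checks.
  const findings = await client.callTool({
    name: "list_findings",               // hypothetical tool name
    arguments: { category: "security" }, // hypothetical filter
  });

  // Step 3: each finding carries its own fix prompt, which the editor reads
  // before it edits your source files (step 4).
  console.log(findings.content);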

What the AI sees

When the editor calls SiteCMD’s MCP server, it gets back structured data, not vibes:

  • The check ID that failed (so the editor can look it up in its own context if needed)
  • The exact severity, category, and confidence
  • The location (URL, file path, line number where applicable)
  • The fix prompt written specifically for that check
  • Detected framework (Next.js, Astro, Rails, etc.) so framework-specific fix steps apply
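
Shape-wise, each finding travels as a small structured record. A minimal sketch of that record is below; the field names are illustrative rather than SiteCMD’s actual schema, but each field corresponds to one of the items above.

  // Illustrative sketch only: field names are assumptions; the categories of
  // information come from the list above.
  interface Finding {
    checkId: string;        // the check that failed
    severity: string;       // e.g. "low" | "medium" | "high" (exact scale is an assumption)
    category: string;
    confidence: string;
    location: {
      url: string;
      filePath?: string;    // present when a project folder is linked
      line?: number;        // where applicable
    };
    fixPrompt: string;      // the check-specific fix prompt
    framework?: string;     // e.g. "nextjs", "astro", "rails" when detected
  }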

The fix prompts include enough context that the editor can act without asking you for the same info. “Fix the missing Content-Security-Policy header” comes with the actual headers your site is currently sending, the platform you’re deployed on, and where in your code you’d typically add headers for that platform.
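
As an example of where that lands in code: if the detected framework is Next.js, a missing Content-Security-Policy header is typically added through the headers() option in the Next config. This is a minimal sketch, not SiteCMD’s output; the policy value is a placeholder, and the real one should come from the fix prompt’s evidence about your site.

  // next.config.ts (recent Next.js versions accept a TypeScript config;
  // otherwise use next.config.js with module.exports).
  // Adds a Content-Security-Policy header to every route.
  import type { NextConfig } from "next";

  const config: NextConfig = {
    async headers() {
      return [
        {
          source: "/(.*)",
          headers: [
            {
              key: "Content-Security-Policy",
              value: "default-src 'self'", // placeholder; tailor to the sources your site uses
            },
          ],
        },
      ];
    },
  };

  export default config;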

Why this is better than copy-pasting

The old workflow:

  1. You read the SiteCMD finding.
  2. You copy-paste the finding into your AI chat.
  3. The AI guesses what your codebase looks like.
  4. You implement what it suggests.
  5. You re-run the scan and hope it’s fixed.

The MCP-integrated workflow:

  1. You ask the AI to fix it.
  2. The AI reads the finding, the fix prompt, your detected framework, your file structure.
  3. It makes the edit.
  4. You re-run the scan and the AI verifies.

The differences add up. Less context-passing. Less hallucination about what your codebase contains. Verifiable feedback after each attempt.

What the AI can’t do

A few things to know upfront:

  • The AI can’t start scans on its own. Scans are a user action. The AI can prompt you (“ready for me to verify? Run the scan in SiteCMD”), but it can’t push the button. This is on purpose: scans can take time and you should decide when they run.
  • The AI can’t dismiss issues. Snooze, ignore, block, and mark-fixed are all desktop-app actions. The AI sees findings and proposes fixes; it doesn’t quietly mark things “not applicable.”
  • The AI can’t modify your SiteCMD data. The MCP server is read-only with respect to scan results. Findings, history, dismissals — all of that is yours to manage.

Supported editors

Editor              Setup page
Cursor              Cursor integration
Claude Code         Claude Code integration
Windsurf            Windsurf integration
VS Code             VS Code integration
GitHub Copilot      GitHub Copilot integration
Cline               Cline integration
JetBrains IDEs      JetBrains integration
Zed                 Zed integration
OpenAI Codex CLI    Codex integration

If your editor speaks MCP, it works. Setup takes about a minute per editor (paste a config snippet, restart the editor, you’re done). For the protocol-level overview, see AI editor overview.
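
The config snippet usually takes the same general shape across editors: a JSON entry that tells the editor how to launch the SiteCMD MCP server. The sketch below follows the common mcpServers convention; the command and args are placeholders, not SiteCMD’s documented values, and the file location varies by editor, so copy the exact snippet from your editor’s setup page.

  {
    "mcpServers": {
      "sitecmd": {
        "command": "sitecmd",
        "args": ["mcp"]
      }
    }
  }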

Without MCP

If you can’t use MCP (different editor, restricted environment, just don’t want to), you can still get most of the value:

  • The CLI’s sitecmd fix command prints fix prompts to stdout. Pipe them into any AI you can talk to.
  • Reports and CSV exports give you structured outputs you can paste into any chat-style AI session.
  • The Issues page detail view shows the full fix prompt for any finding. Copy and paste.

The MCP workflow is faster because there’s no copy-paste step. But the underlying fix prompts are the same regardless of how you get them to your AI.

What fix prompts look like

Each fix prompt is structured to give an AI editor enough context to act:

  • What the check is verifying (in plain language)
  • What was found (the actual evidence: the headers you’re missing, the page that’s slow, the dependency that’s bad)
  • Framework-specific fix steps when SiteCMD detected your framework
  • Verification path (what to check after the fix)
  • Effort estimate (quick / moderate / involved)

The “involved” effort tag is the AI’s signal that the fix isn’t a one-line change. It might involve refactoring, new files, or a design decision. Treat those as conversation starters, not autonomous fixes.
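
The prompt itself arrives as text, but its sections map onto a predictable structure. A rough type sketch is below; the field names are illustrative, not SiteCMD’s schema.

  // Illustrative only: the section names mirror the list above; the
  // identifiers are assumptions.
  interface FixPromptSections {
    whatTheCheckVerifies: string;   // plain-language description
    evidence: string;               // what was actually found
    frameworkSteps?: string;        // present when a framework was detected
    verificationPath: string;       // what to check after the fix
    effort: "quick" | "moderate" | "involved";
  }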

The verification habit

The single biggest improvement to your fix-with-AI flow is making verification a habit. After every set of fixes:

  1. Re-run the scan.
  2. Have the AI call compare_scans.
  3. Look at what got fixed, what didn’t, and what (if anything) broke.

This is the loop that turns AI-assisted fixing from “guess and check” into something reliable. Without it, you’re back to copy-pasting and hoping.
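
Under the same assumptions as the sketch in “The loop” (a connected MCP client, with argument and result shapes unknown), step 2 is a single tool call whose output the AI reads back:

  // After you re-run the scan, the AI calls compare_scans and reports which
  // findings were resolved, which remain, and whether anything new appeared.
  // compare_scans is the documented tool name; the argument shape is an assumption.
  const comparison = await client.callTool({
    name: "compare_scans",
    arguments: {},
  });

  // MCP tool results arrive as content blocks; the AI reads them back and
  // summarizes what got fixed, what didn't, and what broke.
  console.log(comparison.content);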