The SiteCMD Score
How the score is calculated, what changes it, and how to read it.
Your SiteCMD Score is a number out of 100. It’s the headline answer to “how is this site doing?” Everything on the Dashboard ladders up to it.
This page explains the math behind the number so you can read it accurately, predict how a fix will move it, and avoid the two common misreads.
Where the number comes from
The score starts at 100. Each active issue deducts a penalty. The total is then run through a curve so that “many small issues” doesn’t push you to zero as quickly as a flat sum would.
The penalty for a single issue is:
penalty = base × confidence × status × occurrence_boost
Base by severity:
| Severity | Base penalty |
|---|---|
| Critical | 25 |
| High | 12 |
| Medium | 5 |
| Low | 1.5 |
Confidence multiplier:
| Confidence | Multiplier |
|---|---|
| Confirmed | 1.0 |
| High | 0.85 |
| Needs review | 0.55 |
A confirmed issue counts at full weight. A needs review issue counts at about half. This means SiteCMD doesn’t punish you for findings it isn’t sure about.
Status:
- A failing check (`fail`) counts at full weight.
- A warning (`warn`) counts at half.
Occurrence boost:
If the same issue shows up in multiple places (say, a meta-tag problem on five pages), the first occurrence counts at full base. Each additional occurrence adds a small boost (0.75 points), capped at four extra occurrences. A problem that shows up everywhere is worth more than a problem that shows up once, but the curve flattens so a 50-page site isn’t punished 50x for the same template bug.
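Putting the formula and the tables together, the per-issue penalty can be sketched like this. This is a sketch, not SiteCMD's published API: the names are illustrative, and the occurrence boost is treated as additive on the base before the multipliers apply, following the worked examples later on this page.

```python
# Base penalties, confidence and status multipliers from the tables above.
BASE = {"critical": 25, "high": 12, "medium": 5, "low": 1.5}
CONFIDENCE = {"confirmed": 1.0, "high": 0.85, "needs_review": 0.55}
STATUS = {"fail": 1.0, "warn": 0.5}

def issue_penalty(severity, confidence, status, occurrences=1):
    # First occurrence counts at full base; each extra occurrence adds
    # 0.75 points, capped at four extras, before the multipliers.
    boost = 0.75 * min(occurrences - 1, 4)
    return (BASE[severity] + boost) * CONFIDENCE[confidence] * STATUS[status]

issue_penalty("medium", "high", "fail", occurrences=5)  # (5 + 3) x 0.85 = 6.8
```

The cap on the boost is what keeps a 50-page template bug from costing 50x: past five occurrences, the penalty stops growing.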
The penalty curve
Once all per-issue penalties are summed, the total is run through a curve:
- The first 50 points of total penalty come off the score linearly.
- Anything beyond 50 points is compressed into a 38-point saturated range, approaching but never quite hitting 88 points total.
This means a site with a single critical (25 points) and a couple of highs drops linearly. A site with dozens of issues doesn’t collapse to zero; it lands in the 10-20 range with diminishing returns, so fixing five of fifty issues actually moves the score, instead of being lost in saturation.
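The exact saturation function isn't published; the sketch below uses an exponential tail as one shape that satisfies the documented constraints (linear up to 50 points of penalty, then approaching but never reaching 88 total):

```python
import math

def curved_penalty(raw_total):
    # Linear segment: the first 50 points come off the score one-for-one.
    if raw_total <= 50:
        return raw_total
    # Saturating tail: the remainder is compressed into a 38-point range,
    # so total penalty approaches (but never reaches) 50 + 38 = 88.
    # The exponential shape is an assumption, not the production formula.
    return 50 + 38 * (1 - math.exp(-(raw_total - 50) / 38))

def score(raw_total):
    return 100 - curved_penalty(raw_total)

score(25)   # 75: a single confirmed critical, before caps
score(250)  # ~12: fifty-odd issues, dampened instead of collapsing to zero
```

A nice property of this shape: the tail's initial slope matches the linear segment, so crossing the 50-point threshold doesn't suddenly change how much each new issue costs.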
Score caps
Two caps sit on top of the curve:
- One or more confirmed critical issues caps your score at 49.
- One or more confirmed high issues caps your score at 79.
These caps only trigger for confirmed or high confidence findings. A needs review critical does not cap your score, because SiteCMD isn’t certain the issue is real.
The caps exist to make the score honest. A site that scores 92 with one confirmed critical buried in the breakdown would mislead you about whether the site is ready to launch. The cap forces the headline number to reflect the worst confirmed problem.
If your score is exactly 49, you have at least one confirmed critical. Find it, fix it (or mark it not applicable if it really is), and your score will jump back to whatever the underlying math says.
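The cap logic amounts to a clamp applied after the curve. A minimal sketch, with illustrative field names (not SiteCMD's actual schema):

```python
def apply_caps(curved_score, issues):
    # Only confirmed or high-confidence findings trigger a cap;
    # a needs-review critical leaves the headline alone.
    capping = {i["severity"] for i in issues
               if i["confidence"] in ("confirmed", "high")}
    if "critical" in capping:
        return min(curved_score, 49)
    if "high" in capping:
        return min(curved_score, 79)
    return curved_score
```

Note the order: the critical cap (49) is checked first, so a site with both a confirmed critical and a confirmed high lands at 49, not 79.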
What this means in practice
A few patterns you’ll see:
- A single confirmed critical, nothing else. Penalty = 25. Score before cap = 75. The critical cap clamps the headline to 49. Fix it and you jump to a clean 100 (or close, if other issues exist).
- One confirmed high, two high-confidence mediums, nothing else. Penalty ≈ 12 + (2 × 5 × 0.85) ≈ 20.5. Score before cap ≈ 80. The high cap clamps the headline to 79. The cap is doing real work here: it’s making sure you can’t sit at 80 while a confirmed high is still on the board.
- Five medium issues across one template. Penalty = (5 + 4 × 0.75) × 0.85 ≈ 6.8 at high confidence. Score ≈ 93. No cap.
- A dozen mediums and a few lows, no critical or high. Penalty ≈ 60 raw, but the curve dampens that. Score lands around 45-50. No cap applies, so the headline just reflects the curve.
- Many needs-review findings. Each one counts at 0.55× base. The score takes a small hit, no cap applies. Useful for triage without blocking the headline.
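The first two patterns reduce to a few lines of arithmetic (base penalties and confidence multipliers from the tables above):

```python
BASE = {"critical": 25, "high": 12, "medium": 5}

# Pattern 1: one confirmed critical, nothing else.
raw_1 = BASE["critical"] * 1.0                 # 25
headline_1 = min(100 - raw_1, 49)              # pre-cap 75, clamped to 49

# Pattern 2: one confirmed high plus two high-confidence mediums.
raw_2 = BASE["high"] * 1.0 + 2 * BASE["medium"] * 0.85   # 20.5
headline_2 = min(100 - raw_2, 79)              # pre-cap 79.5, clamped to 79
```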
Risk categories
The Dashboard breaks the score down by risk category:
- Security
- Performance
- SEO
- Accessibility
- Database
- Dependencies
- Reliability
- Compliance
- Polish
- AI safety
- Architecture
Categories are aggregated by total impact (sum of penalties), not by issue count. A category with one confirmed critical will appear at the top of the breakdown, ahead of a category with twenty lows.
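Aggregation by total impact rather than issue count can be sketched as follows (the `category` and `penalty` fields are illustrative, not SiteCMD's schema):

```python
from collections import defaultdict

def category_breakdown(issues):
    # Sum per-issue penalties per category, then rank by that sum.
    impact = defaultdict(float)
    for issue in issues:
        impact[issue["category"]] += issue["penalty"]
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

# One confirmed critical in Security outranks twenty needs-review lows
# in Polish (1.5 base x 0.55 confidence each).
ranked = category_breakdown(
    [{"category": "Security", "penalty": 25.0}]
    + [{"category": "Polish", "penalty": 1.5 * 0.55}] * 20
)
# ranked[0] is ("Security", 25.0)
```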
The category bucket an issue lands in is determined by the engine that found it. Live-site security checks land in Security. Source-audit dependency findings land in Dependencies. Operational gaps from the source audit land in Reliability.
What does not change the score
- Snoozed issues are paused until the snooze date and do not count toward your score in the meantime.
- Ignored / not-applicable issues are removed from active counts and the score entirely.
- Verified (fixed) issues are removed from active counts. If a later scan re-finds the same issue, it gets reopened.
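As a sketch, the filter deciding which issues still feed the score might look like this (state names and fields are illustrative, not SiteCMD's actual schema):

```python
from datetime import date

def active_issues(issues, today):
    # Keep only issues that still count toward the score.
    active = []
    for issue in issues:
        if issue.get("state") in ("ignored", "not_applicable", "verified"):
            continue  # removed from active counts entirely
        snoozed_until = issue.get("snoozed_until")
        if snoozed_until and snoozed_until > today:
            continue  # paused until the snooze date passes
        active.append(issue)
    return active
```

A snoozed issue whose date has passed falls through both checks and counts again, which matches the "paused until the snooze date" behavior above.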
See Working with issues for the full lifecycle.
The two common misreads
“The score is dropping but I haven’t changed anything.” Two real things can do this without code changes:
- A new check shipped in a SiteCMD update and found a real issue that was already there.
- An integration (uptime, analytics, search console) reported new evidence: a slowdown, a drop, an outage.
“I fixed an issue and the score didn’t move.” Usually one of:
- The fix is in source but hasn’t shipped. The live-site engine still sees the broken version.
- You marked it fixed instead of running a scan. The score updates when a scan confirms the fix, not when you self-report. Use Verify on the issue to re-run the relevant check.
- There’s a cap in effect from a different confirmed critical. Fixing a high won’t move the headline number while a critical is still capping it at 49.