
Overview

Before activating a health score configuration, test it on sample customers to ensure scores are accurate and predictive.

What to Test

| Test | What to Look For |
| --- | --- |
| Score Accuracy | Do scores match your intuition about each customer? |
| Distribution | Are customers spread across tiers (not all in one)? |
| Category Balance | Is one category dominating the overall score? |
| Edge Cases | Are new customers and missing data handled gracefully? |
Target distribution:
  • Healthy (80-100): 30-40%
  • Medium (60-79): 35-45%
  • High Risk (40-59): 15-20%
  • Critical (0-39): 5-10%
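The tier cutoffs and target bands above can be checked programmatically. A minimal sketch in Python, assuming scores are plain numbers on a 0-100 scale (the function names are illustrative, not part of any product API):

```python
# Tier cutoffs and target distribution bands taken from this guide.
TIERS = {
    "Healthy": (80, 100, 0.30, 0.40),
    "Medium": (60, 79, 0.35, 0.45),
    "High Risk": (40, 59, 0.15, 0.20),
    "Critical": (0, 39, 0.05, 0.10),
}

def tier_for(score):
    """Map a 0-100 score to its tier name."""
    for name, (lo, hi, *_) in TIERS.items():
        if lo <= score <= hi:
            return name
    raise ValueError(f"score out of range: {score}")

def distribution(scores):
    """Fraction of customers in each tier."""
    counts = {name: 0 for name in TIERS}
    for s in scores:
        counts[tier_for(s)] += 1
    return {name: counts[name] / len(scores) for name in TIERS}

def check_distribution(scores):
    """Return tiers whose share falls outside the target band."""
    dist = distribution(scores)
    return {
        name: dist[name]
        for name, (_, _, lo_pct, hi_pct) in TIERS.items()
        if not (lo_pct <= dist[name] <= hi_pct)
    }
```

An empty result from `check_distribution` means every tier's share is inside its target band; anything returned names the tiers to investigate.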

Creating a Test Set

Select 10-15 customers representing different scenarios:
| Type | Count | Expected Score | What to Include |
| --- | --- | --- | --- |
| Healthy | 3-4 | 75-100 | High MRR, active usage, regular engagement |
| At-Risk | 3-4 | 30-60 | Declining usage, payment issues, low engagement |
| Medium | 2-3 | 60-75 | Steady but not growing |
| Recently Churned | 1-2 | 0-40 | Validates that scores would have predicted churn |
| Edge Cases | 2-3 | Varies | New customers, missing data, seasonal accounts |
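One way to capture the test set is as plain data with an expected score range per customer, so the later comparison step can be automated. A sketch; the customer names are made up and the ranges mirror the table above:

```python
# Illustrative test set: each entry pairs a customer with the score
# range you expect the configuration to produce for them.
TEST_SET = [
    {"name": "Acme Co", "type": "Healthy", "expected": (75, 100)},
    {"name": "Globex", "type": "At-Risk", "expected": (30, 60)},
    {"name": "Initech", "type": "Medium", "expected": (60, 75)},
    {"name": "Umbrella", "type": "Recently Churned", "expected": (0, 40)},
    {"name": "NewCo", "type": "Edge Case", "expected": (0, 100)},
]

def within_expected(customer, actual_score):
    """True if the actual score falls in the customer's expected range."""
    lo, hi = customer["expected"]
    return lo <= actual_score <= hi
```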

Analyzing Results

For each test customer, compare expected vs. actual scores:
  • Score matches - Configuration is working
  • Score too high - Check if a category (often Revenue) is masking problems
  • Score too low - Check if one bad metric is dragging down the score unfairly
Review the category breakdown to identify which categories need weight adjustments.
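The comparison above can be sketched as a small helper that also records the direction of each mismatch, so you know whether to look for a masking category (too high) or a single metric dragging the score down (too low). Field names here are assumptions, not a product API:

```python
def analyze(results):
    """results: list of dicts with 'name', 'expected' (lo, hi), 'actual'.

    Returns (name, finding) pairs, one per test customer.
    """
    findings = []
    for r in results:
        lo, hi = r["expected"]
        if r["actual"] > hi:
            findings.append((r["name"], "too high: check for a masking category"))
        elif r["actual"] < lo:
            findings.append((r["name"], "too low: check for one metric dragging the score"))
        else:
            findings.append((r["name"], "matches"))
    return findings
```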

Common Issues

| Issue | Likely Cause | Fix |
| --- | --- | --- |
| All scores too high | Thresholds too lenient or missing data | Tighten thresholds, check integrations |
| All scores too low | Thresholds too strict | Loosen thresholds to match reality |
| Doesn't predict churn | Wrong categories weighted | Analyze churned customers, adjust weights |
| One category dominates | Weight too high or no variance in others | Rebalance weights (none above 40%) |
| Scores too volatile | Short time windows | Use 90-day windows and trend metrics |

Iteration Workflow

  1. Test - Run scores on your 10-15 customer set
  2. Identify issues - Note where actual vs. expected differ
  3. Make one change - Adjust one weight or threshold
  4. Retest - Compare to previous results
  5. Repeat - Typically 3-5 iterations needed
  6. Validate with CSMs - Get feedback before activating

Before Activating

  • Scores align with expectations for all test customers
  • Distribution looks reasonable across tiers
  • No single category dominates
  • Edge cases handled gracefully
  • CSMs have reviewed and validated

After Activation

  • First week: Check daily for obviously wrong scores and ask CSMs to flag issues.
  • First month: Track whether scores correlate with actual churn and renewals.
  • Ongoing: Review quarterly and adjust as the business changes.

Next Steps