Why Generic Assessments Don't Work for Your Company
SHL, TestGorilla, Harver — they offer the same tests to everyone. Why that's a problem and what the alternative is.
By Ingmar van Maurik · Founder & CEO, Making Moves
The assessment industry sells one-size-fits-all
The big assessment publishers — SHL, Harver, TestGorilla, Saville, Aon Assessment Solutions — have a solid business model: develop one test and sell it to thousands of companies. It's scalable, profitable, and the marketing sounds convincing: "validated on 100,000+ candidates", "proven predictive value", "scientifically backed".
The problem: your company isn't the same as those thousands of other companies. The developer who succeeds at a corporate bank fits a different profile than the developer who thrives at a fast-growing startup. The sales manager who delivers top performance in an enterprise environment has different traits than someone who excels in an SMB team.
And yet all these companies use the same test, with the same norm group, and the same interpretation.
The fundamental problem with generic tests
Generic norm groups miss the context
Generic assessment scores are compared to a norm group of thousands of random people. This norm group is a statistical average that isn't specifically relevant for anyone:
The consequence: you filter out good candidates and let less suitable ones through, simply because you're using the wrong benchmark.
No company context or culture fit
A personality test measuring "teamwork" doesn't account for what teamwork looks like in your specific culture:
The same "teamwork" score can be a perfect match at company A and a complete mismatch at company C. The generic test doesn't make this distinction.
Static models in a dynamic world
The test doesn't change. Whether you administer it in 2020 or 2026, it's the same test with the same norm group and the same interpretation. But your company changes continuously:
By definition, a static assessment cannot keep up with this dynamism.
Limited predictive value
Let's look at the numbers honestly. Research shows that generic cognitive tests have a correlation of r = 0.30-0.50 with job performance. That sounds reasonable, but what does it mean in practice?
This is better than nothing — and certainly better than unstructured interviews (r = 0.20) — but far from the accuracy companies think they're getting when paying for a "scientifically validated" assessment.
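One way to make those correlation figures concrete is the coefficient of determination, r², which is the share of variance in job performance that a test score actually explains. A quick sketch, using only the r values quoted above:

```python
# Variance in job performance explained by a selection method, via r^2.
# The correlations are the figures quoted in the text; everything else
# follows from the definition r^2 = r * r.
for label, r in [
    ("unstructured interview", 0.20),
    ("generic cognitive test (low end)", 0.30),
    ("generic cognitive test (high end)", 0.50),
]:
    explained = r ** 2
    print(f"{label}: r = {r:.2f} -> explains {explained:.0%} of performance variance")
```

Even at the optimistic end of the range, r = 0.50, the test explains only a quarter of the variance in performance; the remaining three quarters is driven by factors the generic test doesn't capture.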
The costs add up
Generic assessments aren't cheap:
And this excludes ATS integration, training for interpretation, and the time recruiters spend reading reports they often don't fully understand.
The alternative: custom assessments
Step 1: analyze your top performers
Instead of using generic tests, start by analyzing your best employees:
This produces a success profile unique to your organization — and it can vary per role.
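As an illustration of what such an analysis can look like in its simplest form, here is a minimal sketch: compare the average trait scores of your top performers against the rest of the team and look for the largest gaps. The trait names and all scores below are invented for the example.

```python
# Hypothetical sketch: find which traits separate top performers from
# the rest of the team. Trait names and scores are invented.
from statistics import mean

employees = [
    # (is_top_performer, {trait: score on a 1-10 scale})
    (True,  {"autonomy": 8, "structure": 4, "risk_tolerance": 7}),
    (True,  {"autonomy": 9, "structure": 3, "risk_tolerance": 8}),
    (False, {"autonomy": 5, "structure": 7, "risk_tolerance": 4}),
    (False, {"autonomy": 6, "structure": 8, "risk_tolerance": 5}),
]

for trait in employees[0][1]:
    top = mean(e[trait] for is_top, e in employees if is_top)
    rest = mean(e[trait] for is_top, e in employees if not is_top)
    print(f"{trait}: top={top:.1f} rest={rest:.1f} gap={top - rest:+.1f}")
```

The traits with the largest positive or negative gaps are candidates for the success profile; in practice you would do this with far more employees and a proper significance check rather than a raw difference of means.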
Step 2: build your own norm groups
Candidate scores are compared against your own employees, not against the market:
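Mechanically, this can be as simple as a z-score computed against an internal norm group instead of a vendor's market sample. A minimal sketch, with invented scores:

```python
# Hypothetical sketch: position a candidate relative to your own
# employees (internal norm group) rather than a generic market sample.
from statistics import mean, stdev

internal_norm = [62, 70, 75, 68, 71, 74, 66, 73]  # your employees' scores
candidate_score = 72

mu = mean(internal_norm)
sigma = stdev(internal_norm)          # sample standard deviation
z = (candidate_score - mu) / sigma    # position relative to YOUR team
print(f"z-score vs internal norm group: {z:+.2f}")
```

The same raw score of 72 could land well above or well below zero depending on whose norm group you use, which is exactly the point: the benchmark, not the test, decides what "good" means.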
Step 3: continuous improvement
After every hire, the model is validated:
1. The candidate scores on the assessment
2. After 6 months: performance review
3. Calculate correlation: was the prediction correct?
4. Adjust the model based on results
5. Repeat — the system gets smarter
This is the principle of continuous validation: your assessment evolves with your organization. After 50-100 hires, you have a model that predicts significantly better than any generic assessment.
Step 4: integrate into the hiring flow
Custom assessments deliver the most value when seamlessly integrated:
The business case
Say you make 100 hires per year with an average failure rate of 15% (the market average).
Current situation with generic assessments:
After implementing custom assessments:
The investment in your own assessment system — as part of a complete hiring platform — pays for itself within the first year. And it becomes more valuable every year as the model learns.
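A back-of-the-envelope version of this business case: the 100 hires per year and the 15% market-average failure rate come from the text above, but the cost per mis-hire and the improved failure rate below are assumptions chosen purely for illustration, not claims.

```python
# Back-of-the-envelope business case. The 100 hires and 15% failure
# rate come from the text; the cost per mis-hire and the improved
# failure rate are ASSUMPTIONS for illustration only.
hires_per_year = 100
baseline_failure_rate = 0.15          # market average (from the text)
cost_per_mishire = 50_000             # assumed, in euros
improved_failure_rate = 0.09          # assumed after custom assessments

baseline_cost = hires_per_year * baseline_failure_rate * cost_per_mishire
improved_cost = hires_per_year * improved_failure_rate * cost_per_mishire
print(f"baseline mis-hire cost:  EUR {baseline_cost:,.0f}")
print(f"with custom assessments: EUR {improved_cost:,.0f}")
print(f"annual savings:          EUR {baseline_cost - improved_cost:,.0f}")
```

Plug in your own cost per mis-hire and expected improvement to get a figure for your situation; the structure of the calculation stays the same.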
Beyond financial savings:
Key takeaways
Generic assessments are a compromise born of necessity: they were the best option when company-specific solutions were too expensive and too complex. But that time has passed.
With modern technology and your own data, you can build assessments that:
The future of assessments isn't generic. It's company-specific, data-driven, and continuously learning.
Want to discover how company-specific assessments work in practice? Schedule a demo or explore our AI hiring system that offers custom assessments as core functionality.