AI in Hiring · 8 min read

How to Use AI to Reduce Hiring Bias

Unconscious bias costs companies talent and diversity. Learn how AI helps hire more fairly, with concrete techniques and implementation tips.

By Ingmar van Maurik · Founder & CEO, Making Moves


The bias problem in hiring

Bias in hiring is not a theoretical problem. It is a daily reality that costs companies talent and makes teams more homogeneous than necessary. Research consistently shows that unconscious biases play a major role in hiring decisions:

  • Candidates with a Dutch-sounding name receive up to 40 percent more interview invitations than candidates with a non-Western name on identical CVs
  • Female candidates for technical roles are rated up to 25 percent lower than male candidates with the same qualifications
  • Recruiters spend an average of 6 seconds on a CV, allowing first impressions and superficial characteristics to dominate
  • The halo effect causes one positive trait to color all other evaluations

These are not conscious choices. They are patterns deeply embedded in how our brains process information. And precisely because of that, they are so difficult to combat with training or good intentions alone.

    Why traditional solutions fall short

    Many companies try to address bias with awareness training, anonymous applications, or structured interviews. These measures help but are insufficient:

    Awareness training increases awareness but barely changes behavior. Studies show that the effect of diversity training largely disappears after 2 to 3 months.

    Anonymous applications remove names and photos but leave other signals intact that trigger bias: school names, neighborhoods, hobbies, and writing style.

    Structured interviews improve comparability but do not eliminate bias. Interviewers are still influenced by appearance, voice, body language, and first impressions.

    The fundamental limitation of all these approaches is that they rely on human discipline to overcome human bias. That is like asking someone to break a habit through willpower alone.

    How AI addresses bias

    AI offers a fundamentally different approach. Not by replacing human evaluators, but by adding an objective layer to the process that structurally reduces the impact of bias.

    Technique 1: blind screening on competencies

    AI can evaluate candidates based on competencies without access to demographic information. The system extracts skills, experience, and potential from the CV and assessment results, and scores candidates purely on relevance to the role.

    The difference from anonymous applications is that AI also ignores the subtle signals that are hard for humans to disregard. The university name is converted to a qualification level. Location becomes irrelevant. Writing style is analyzed for content, not linguistic flair.
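As a minimal sketch of what competency-only screening might look like: demographic fields and their common proxies are dropped before scoring, and the remaining features are matched against a role profile. The field names, weights, and role profile below are illustrative, not taken from a real system.

```python
# Sketch of competency-only screening on a parsed-CV dict.
# Field names, weights, and the role profile are illustrative assumptions.

ROLE_PROFILE = {
    "required_skills": {"python", "sql", "data modelling"},
    "min_degree_level": 2,   # 0 = none, 1 = bachelor, 2 = master
    "min_years": 3,
}

# Fields that never reach the scorer: demographics and their common proxies.
EXCLUDED_FIELDS = {"name", "photo", "birth_date", "address", "university_name"}

def blind_score(cv: dict) -> float:
    """Score a candidate on competencies only; excluded fields are dropped first."""
    features = {k: v for k, v in cv.items() if k not in EXCLUDED_FIELDS}
    skill_match = (
        len(ROLE_PROFILE["required_skills"] & set(features.get("skills", [])))
        / len(ROLE_PROFILE["required_skills"])
    )
    degree_ok = 1.0 if features.get("degree_level", 0) >= ROLE_PROFILE["min_degree_level"] else 0.0
    experience_ok = 1.0 if features.get("years_experience", 0) >= ROLE_PROFILE["min_years"] else 0.0
    return round(0.6 * skill_match + 0.2 * degree_ok + 0.2 * experience_ok, 2)

candidate = {
    "name": "REDACTED",
    "university_name": "REDACTED",   # already mapped to degree_level upstream
    "degree_level": 2,
    "years_experience": 5,
    "skills": ["python", "sql", "communication"],
}
print(blind_score(candidate))  # 2 of 3 required skills -> 0.8
```

Note that the university name contributes only through the qualification level it was mapped to, mirroring the point above.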

    Technique 2: standardized assessments

    Validated assessments are one of the most powerful tools against bias. When every candidate takes the same test under the same conditions, the comparison becomes fairer.

    AI takes this further with:

  • Adaptive tests that adjust difficulty to the candidate, so everyone can perform optimally
  • Multi-modal assessment that measures different skills in different ways, giving candidates from different backgrounds equal opportunities
  • Automatic scoring that is not influenced by the evaluator's mood or fatigue
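Evaluator-independent automatic scoring can be as simple as standardizing each raw result against the norm group, so every candidate lands on the same scale. A minimal sketch (the norm-group scores are invented):

```python
import statistics

def norm_score(raw: float, norm_group: list[float]) -> float:
    """Standardize a raw assessment result against the norm group (z-score),
    so scores are comparable regardless of who evaluates or when."""
    mean = statistics.mean(norm_group)
    sd = statistics.stdev(norm_group)
    return round((raw - mean) / sd, 2)

norm = [52, 61, 58, 47, 55, 63, 50, 59]   # illustrative norm-group results
print(norm_score(66, norm))   # well above the norm-group mean
```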

    Technique 3: continuous bias monitoring

    An AI system can continuously monitor whether patterns emerge that indicate bias:

    | Monitored metric | Bias indicator | Action |
    |------------------|----------------|--------|
    | Conversion rate per demographic group | Significant difference | Analyze cause and adjust criteria |
    | Average score per background | Systematic difference | Review features causing the difference |
    | Cycle time per group | Unequal treatment | Investigate process-related causes |
    | Hiring ratio vs. applicant ratio | Disproportionate dropout | Evaluate each process step |

    This type of monitoring is virtually impossible manually but a routine task for AI. The system flags patterns that human evaluators would only discover after months or years.
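One common way to operationalize the first row of the table is the "four-fifths rule": flag any group whose selection rate falls below 80 percent of the best-performing group's rate. A minimal sketch, with illustrative group names and funnel counts:

```python
# Sketch of continuous bias monitoring via the four-fifths rule.
# Group names and (applicants, hires) counts are illustrative.

def selection_rates(funnel: dict) -> dict:
    """Selection rate per group: hires divided by applicants."""
    return {g: hired / applied for g, (applied, hired) in funnel.items()}

def flag_disparate_impact(funnel: dict, threshold: float = 0.8) -> list:
    """Flag groups whose selection rate is below `threshold` times the top rate."""
    rates = selection_rates(funnel)
    top = max(rates.values())
    return [g for g, r in rates.items() if r / top < threshold]

funnel = {"group_a": (200, 40), "group_b": (180, 18), "group_c": (150, 27)}
print(flag_disparate_impact(funnel))  # group_b: 10% vs. top rate of 20% -> flagged
```

Running this on every process step, not just the final hire, is what turns a one-off audit into the continuous monitoring described above.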

    Technique 4: feature debiasing

    Sometimes the data itself contains bias. If historical hiring data shows that primarily men were hired for technical roles, a naive model learns that being male is a positive predictor.

    Feature debiasing prevents this by:

  • Explicitly excluding protected characteristics as model input
  • Identifying and correcting proxy variables: features that indirectly correlate with protected characteristics
  • Applying adversarial debiasing: a technique where the model is explicitly trained not to discriminate
  • Building in fairness constraints that guarantee scores are independent of protected groups
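The first two bullets can be sketched in a few lines: drop protected columns outright, then flag remaining features whose correlation with a protected attribute suggests they act as proxies. The column names, the 0/1 encoding of the protected attribute, and the 0.7 threshold are all illustrative assumptions:

```python
import statistics

PROTECTED = {"gender", "age", "nationality"}

def correlation(xs: list, ys: list) -> float:
    """Pearson correlation between two equal-length numeric columns."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    denom = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / denom

def debias_features(rows: list, protected_values: list, proxy_threshold: float = 0.7):
    """Drop protected columns outright, then flag features that correlate
    strongly with a protected attribute (likely proxies)."""
    feature_names = [k for k in rows[0] if k not in PROTECTED]
    proxies = [
        name for name in feature_names
        if abs(correlation([row[name] for row in rows], protected_values)) >= proxy_threshold
    ]
    return feature_names, proxies

rows = [  # illustrative training rows; gender encoded 0/1
    {"gender": 1, "postcode_score": 9, "skill_score": 7},
    {"gender": 1, "postcode_score": 8, "skill_score": 4},
    {"gender": 0, "postcode_score": 2, "skill_score": 6},
    {"gender": 0, "postcode_score": 3, "skill_score": 5},
]
kept, proxies = debias_features(rows, [r["gender"] for r in rows])
print(proxies)  # postcode_score tracks gender almost perfectly -> flagged
```

Adversarial debiasing and hard fairness constraints require an actual training loop and are beyond a sketch like this, but the proxy check above is the step teams most often skip.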

    Technique 5: explainable decisions

    One of the strongest weapons against bias is transparency. When every score is accompanied by an explanation of why the candidate scored high or low, it becomes much harder to hide biased decisions behind vague arguments.

    AI scoring systems can report for each candidate:

  • Which factors contributed positively to the score
  • Which factors lowered the score
  • How the candidate scores relative to the norm group
  • Whether there are red flags in the scoring that deserve manual review

    This forces hiring managers to base decisions on objective criteria instead of gut feeling.
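For a simple weighted scoring model, such a per-candidate report can be generated directly from the model's weights. The factor names and weights below are illustrative:

```python
# Sketch of an explainable score report for a weighted scoring model.
# Factor names and weights are illustrative assumptions.

WEIGHTS = {"skill_match": 0.5, "assessment": 0.3, "experience": 0.2}

def explain_score(factors: dict) -> dict:
    """Return the total score plus each factor's signed contribution,
    so a hiring manager can see exactly what drove the result."""
    contributions = {name: round(WEIGHTS[name] * value, 2) for name, value in factors.items()}
    return {"score": round(sum(contributions.values()), 2), "contributions": contributions}

report = explain_score({"skill_match": 0.9, "assessment": 0.6, "experience": 0.4})
print(report["score"], report["contributions"])
# 0.71 {'skill_match': 0.45, 'assessment': 0.18, 'experience': 0.08}
```

For non-linear models, attribution methods exist that produce the same kind of per-factor breakdown, but the principle is identical: every score ships with its reasons.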

    The limits of AI in combating bias

    It would be unfair to present AI as a magic solution. There are real limitations:

    Historical bias in data: if you train your model on historical data that contains bias, the model can reproduce that bias. This requires conscious data curation and debiasing techniques.

    Measurement bias: some groups perform differently on certain types of assessments, not because of lower competence but because of cultural differences in test behavior. Good assessments account for this.

    Definition of success: if success is defined in a way that inherently favors certain groups, AI reproduces that inequality. It is crucial to define success broadly and fairly.

    Over-reliance: the risk exists that teams blindly trust AI scores without thinking critically. AI should be a tool, not a replacement for human judgment.

    Implementation: a practical approach

    Step 1: measure your current situation

    Before deploying AI against bias, you need to know where you stand. Analyze your current hiring data for:

  • Conversion rate per demographic group at each process step
  • Average cycle time per group
  • Hiring ratio versus applicant ratio
  • Diversity of your current team versus the available talent pool
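A minimal sketch of such a baseline measurement, computing the conversion rate per group at each process step (step names and counts are invented):

```python
# Sketch of a hiring-funnel baseline: conversion per group at each step.
# Group names, step names, and counts are illustrative assumptions.

FUNNEL = {  # candidates remaining at each step, per group
    "group_a": {"applied": 120, "screened": 60, "interviewed": 24, "hired": 12},
    "group_b": {"applied": 100, "screened": 35, "interviewed": 10, "hired": 4},
}

def step_conversions(counts: dict) -> dict:
    """Conversion rate between each pair of consecutive funnel steps."""
    steps = list(counts)
    return {f"{a}->{b}": round(counts[b] / counts[a], 2)
            for a, b in zip(steps, steps[1:])}

for group, counts in FUNNEL.items():
    print(group, step_conversions(counts))
```

Comparing the per-step rates between groups shows exactly where candidates drop out disproportionately, which is the input the later monitoring step needs.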

    Step 2: implement standardized assessments

    Start by introducing validated assessments that are relevant to the role. Ensure every candidate takes the same assessments, regardless of background.

    Step 3: activate AI scoring with bias monitoring

    Implement AI scoring with built-in bias monitoring. Set alerts for significant differences between groups and review these regularly.

    Step 4: train your team

    Ensure recruiters and hiring managers understand how AI scoring works, what the limits are, and how to interpret the output. AI works best when the team trusts it and engages with it critically.

    Step 5: iterate and improve

    Combating bias is not a one-time project but an ongoing process. Monitor continuously, adjust where needed, and communicate transparently about your progress.

    The business case for fairer hiring

    Beyond the ethical argument, there is a strong business case for reducing bias:

  • Larger talent pool: by reducing bias, you reach candidates you previously overlooked
  • Better hires: more objective selection demonstrably leads to [better hiring outcomes](/artikelen/predictive-hiring-data)
  • Lower turnover: employees selected on competencies rather than fit with a homogeneous group perform well for longer
  • Stronger employer brand: companies that demonstrably hire fairly attract more diverse talent
  • Compliance: regulation around AI in hiring is becoming stricter and fair systems are future-proof

    Key takeaways

  • Unconscious bias is a structural problem that traditional solutions like training and anonymous applications insufficiently address
  • AI reduces bias through blind screening, standardized assessments, continuous monitoring, feature debiasing, and transparent decisions
  • AI is not a perfect solution and requires conscious data curation, broad definitions of success, and human oversight
  • The business case for fairer hiring is strong: larger talent pool, better hires, and lower turnover
  • Start by measuring, implement step by step, and make bias reduction an ongoing process

    Want to know how your organization can hire more fairly with AI? Schedule a conversation and discover the possibilities of our AI hiring system.


    Book an intake call · View our AI Hiring System