Algorithm aversion
4 min read · Updated December 2025

Recruiters trust humans more than algorithms

How algorithmic aversion shapes recruiter trust and decision-making.

The Trust Gap

Human Preference

The study finds lower tolerance for algorithm errors than for human errors in recruiter contexts.1

The Error Penalty Curve

Recruiters generally prefer human recommendations. When an algorithm makes a mistake, trust drops faster than when a human makes the same mistake.1

Trust comparison

| Dimension | Human | Algorithm |
| --- | --- | --- |
| Initial trust | Higher | Lower |
| Error tolerance | More forgiving | Less forgiving |
| Recovery after mistake | Easier | Harder |
| Perceived explainability | Intuitive | Opaque |

Algorithm aversion: Recruiters are quicker to distrust algorithms after errors, even when performance is similar.

Fig. 1 — Trust Asymmetry: How recruiters respond to human vs algorithm recommendations
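The asymmetry in Fig. 1 can be sketched as a simple trust-updating model. This is an illustrative toy, not the study's methodology: the function names, starting trust levels, and penalty sizes below are all assumptions chosen to show the shape of the effect, where the same error sequence costs an algorithm more trust than a human.

```python
# Hypothetical model of asymmetric trust updating (illustrative only,
# not taken from the cited study). Trust moves up a little after each
# correct recommendation and down after each error; the error penalty
# is assumed to be larger for the algorithm than for the human.

def update_trust(trust, correct, error_penalty, reward=0.02):
    """Return trust clamped to [0, 1] after one observed recommendation."""
    delta = reward if correct else -error_penalty
    return min(1.0, max(0.0, trust + delta))

def simulate(outcomes, start, error_penalty):
    """Fold a sequence of True/False outcomes into a final trust level."""
    trust = start
    for correct in outcomes:
        trust = update_trust(trust, correct, error_penalty)
    return trust

# Identical outcome sequence for both; only the penalty differs.
outcomes = [True, True, False, True, False, True]
human_trust = simulate(outcomes, start=0.70, error_penalty=0.05)
algo_trust = simulate(outcomes, start=0.60, error_penalty=0.15)
print(round(human_trust, 2), round(algo_trust, 2))
```

With these (assumed) parameters, two errors leave human trust near its starting point while algorithm trust falls well below it, mirroring the "harder recovery" row of the table above.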

Where automation breaks recruiter judgment

The study reveals a phenomenon known as "algorithm aversion." Even when algorithms perform well, people are quicker to lose trust in them after a mistake compared to a human making the same mistake.1

Governance pressures around auditing and explainability further reinforce why human judgment remains central in high-stakes decisions.2

Recruiter lens: optimizing only for a score can miss the trust signals that humans look for. The end user is a person who values consistency, narrative, and credibility.

Trust is Fragile

Recruiter lens: manipulation signals are hard to recover from once trust is lost.

Definition: algorithm aversion

Algorithm aversion is the tendency to reject algorithmic recommendations after observing errors, even when the algorithm performs well on average.1

FAQ

Why do recruiters distrust algorithms?

The study shows lower tolerance for algorithm errors than human errors, which creates an aversion effect.

Does this mean scores are useless?

No. Scores can help structure evaluation, but the final decision still leans on human trust signals.

How should candidates respond?

Optimize for clarity and credibility so a human reviewer feels confident in the narrative.


What this changes in RIYP

01

We prioritize clarity over 'perfect scores'

Because a human is reading, clarity and signal strength matter more than gaming a specific score.

02

Human-first decision model

We model the heuristic, context-heavy judgment recruiters actually use.

Sources

  1. Algorithm Aversion in Recruitment - Frontiers in Psychology (2022).
  2. Data-Driven Discrimination at Work - North Carolina Journal of Law & Technology (2017).
