Job applicants suspicious of AI, so the less said the better, research shows
LAWRENCE — Job applicants these days are hoping for some transparency, and they know companies are using artificial intelligence to screen them. But hiring managers ought not to try to convince applicants that AI is unbiased: the claim won’t be believed, and it may even prompt resentment.
Those are among the findings of a new study published in the journal Media Psychology under the title “Who’s the Fairest of Them All: An Experiment Testing How Source, Outcome, and Process Description Affect Perceived Fairness in Hiring Decisions.”
The study, co-written by Cameron Piercy, University of Kansas associate professor of communication studies, and his former graduate student Rebecca Baumler, surveyed nearly 250 people after they first participated in an online job-application exercise in which they were accepted or rejected for a role they wanted.
When that decision, thumbs up or down, was paired with a disclosure of who made it (an algorithm, a hiring manager or a hiring manager using an algorithm), applicants’ perceptions of fairness did not seem to vary.
“We expected that the combination of the two would get the benefits of both worlds. But not so much,” Piercy said.
“For some participants, we made a claim that the decision-maker was de-biased. So whether it was the hiring manager or the algorithm or both, we said, ‘Hey, this hiring manager or algorithm has made 1,000 hiring decisions and is known to not be biased.’ For the hiring manager, it was helpful to say this, and for the hiring manager using the algorithm, similarly, job seekers preferred the seasoned decision-maker.
“But when we said, ‘The algorithm has made 1,000 decisions that are unbiased,’ people were like, ‘No, I don't buy that.’ That's when we saw what we call the boomerang effect. Perceptions of fairness plummeted.”
Piercy said that for some respondents, this skepticism might be attributed to hard-won experience.
“I think it really comes to the forefront when a machine's making a decision about you, and, all of a sudden, you're like, ‘Machines don't get humans. They don't understand what my internship was worth, what I’ve written. Only a human could truly get me.’ I think what the data shows is that people are OK with algorithms making decisions — up to a certain point,” Piercy said. “We're capturing the cultural dialogue about what can or should machines do.”
“But I'm a numbers guy. I want to know: If I make the claim that this decision-maker is unbiased, what does it do to people? And in the case of AI making a decision, it makes them say, ‘No, absolutely not.’ It's worse than the control condition. ‘I'd rather you tell me nothing than tell me AI is not biased,’ right?”
That’s why the authors urge hiring managers not to go there. Rather, the study found evidence that offering a justification for the decision helps: for example, telling the applicant they lack a particular qualification that other candidates possess.
“People seem to accept that and say, ‘Thanks for letting me know who or how the decision is being made. But don't try to sell me that your algorithm is special in some way.’ It's not special,” Piercy said.
“We acknowledge that there is a strong call in the academic community, from an ethical standpoint, to say that if you're using technology to make decisions, we ought to make it transparent. We agree with the premise of explainable AI, or XAI — that the way AI is used in important decision-making processes ought to be transparent. The reality is that companies are at a disadvantage if they disclose the way that they're using AI because it opens them up to legal action.
“People remain skeptical, and probably the less you say about your use of AI in hiring, the better for you, the company, whether or not that is the just thing to do.”