AI-Driven Job Interviews: Efficiency vs. Dehumanization in the Automated Hiring Era

As companies adopt AI-driven job interviews to streamline hiring, candidates face digital bots instead of humans. While AI promises efficiency and fairness, technical glitches, impersonal interactions, and unchecked algorithmic bias fuel frustration. Candidates report dehumanizing experiences with malfunctioning bots, lack of feedback, and increased risk of unfair, automated rejection without recourse.
Written by John Smart

The Algorithms Will See You Now: Inside the Surge—and Stumbles—of AI Job Interviews

As companies scramble to automate and streamline hiring, job candidates are increasingly meeting a new kind of interviewer: a digital bot powered by artificial intelligence. The appeal is obvious—AI promises faster, “fairer,” and round-the-clock screening at scale. But for a growing number of candidates, the reality is far less glossy, riddled with technical glitches, impersonal interactions, and, at worst, algorithmic rejection with zero recourse.

Mike Peditto, a Chicago-based consultant with 15 years’ experience advising on hiring practices, told Slate that the adoption of AI recruiters is rapidly taking hold, particularly after high-profile moves like IBM reportedly laying off several hundred HR workers in favor of AI agents. “It’s becoming a huge thing,” Peditto said, describing an environment where cost pressures and efficiency drives now eclipse human contact.

The human costs have been stark. Candidates recount donning business attire, rehearsing answers, and preparing as they would for any high-stakes interview—only to be greeted by a malfunctioning bot. Houston resident Leo Humphries shared his experience of dressing professionally for an interview in which the AI recruiter became stuck repeating the same phrase, forcing him to sit powerless as the session looped without resolution. “It was very disrespectful and a waste of time,” another candidate, Colin, told 404 Media, remarking on the dehumanizing nature of being instructed to go the extra mile for “just a robot.” Afterward, like many others, Colin was ghosted by both the AI and the company, receiving no follow-up or feedback.

The issue has reached viral proportions. On TikTok and other platforms, candidates share videos of their interviews derailed by malfunctioning bots—one example cited by Slate involved Kendiana Colin, a 20-year-old Ohio State student, whose AI interviewer became stuck, endlessly repeating “vertical-bar Pilates.” While some of these viral clips are staged for satire, legitimate technical breakdowns are mounting, exacerbated by companies racing to deploy off-the-shelf AI tools that often “wrap around the same core models or APIs,” as reported by Business Insider. According to Unaizah Obaidellah, a senior lecturer in AI at the University of Malaya, glitches are often the result of insufficient or irrelevant training data, leaving the bots ill-equipped to handle the nuance and unpredictability of real human interviews.

As Emily DeJeu, assistant professor at Carnegie Mellon’s Tepper School of Business, told Business Insider, the automation of interviews is only likely to become more routine as companies seek to streamline early-stage hiring. Early adopters claim these tools help reduce human bias and expedite screening, but critics point to mounting evidence of discrimination, systemic glitches, and the erasure of vital human context.

A recent deep dive by Thred underscores a larger risk: studies now indicate that algorithmic interviews are prone to discrimination, amplifying rather than correcting the very biases they are meant to cure. Candidates from non-traditional backgrounds, or those whose speech or appearance deviates from the algorithm’s training norms, are often unfairly filtered out. And with little transparency into how decisions are made—or recourse for contesting a digital rejection—frustration boils over.

Slate’s reporting highlights the psychological impact: candidates describe the process as “dehumanizing,” noting that being rejected without any human interaction or feedback feels uniquely dispiriting. “I do think we’re heading to where this will be pretty commonplace,” one source told Slate, signaling the likely persistence of AI-driven interviews even as backlash mounts.

The implications for companies are significant. Commentators on LinkedIn have labeled the result a “broken candidate experience”: reliance on glitch-prone bots risks alienating talent while failing to deliver on promises of fairness and efficiency. As hiring becomes increasingly digital, insiders warn that treating applicants as data points rather than people may prove costly—not only to reputation, but to the bottom line, as top candidates walk away from companies that seem to value algorithms over authenticity.

For now, the message from industry voices is clear: while the march of automation in recruitment is inexorable, companies must weigh efficiency against the risk of dehumanization—and remain vigilant for the unintended consequences of letting the algorithms take the lead.
