The Truth About AI in Hiring: Where It Works and Where It Fails

AI is changing how companies hire. But it’s also introducing new problems, while making old ones harder to catch.

 

Fake candidates. AI-written resumes. Interviews where someone’s getting fed answers in real time. These aren’t hypotheticals. We’ve seen them. And we’ve seen companies try to solve them by replacing the human part of hiring with automation. So now we have machines hiring and evaluating machines.

 

That’s where things start to break.

 

AI has a place in hiring. It’s very useful when used right: for screening, detecting patterns, and flagging obvious mismatches. But it can’t do what experienced managers do. Humans are far too complicated to be evaluated without other humans to assess them. AI can’t read intent. It can’t assess potential. And it can’t tell you who’s going to step up when things go sideways.

 

At Mirigos, we’ve tested AI across multiple steps in the process. Some things it handles well. Others still need a real person who’s built and led actual teams.

 

Here’s where it fits. And where it doesn’t.

 

 

You’re Not Hiring a Resume. You’re Hiring for Trust.

 

A resume can tell you what someone claims to know. It can list tools, titles, and even years of experience. But it won’t tell you how they work. How they respond under pressure. Or whether they’re the kind of person you’d trust to handle a critical situation.

 

That’s real hiring. Judgment. Reliability. Fit. Not just keywords.

 

AI can help surface resumes with relevant skills. But it can’t tell you if someone inflated their role. Or if they were the ones solving problems, or just sitting in the room while others did.

 

Human involvement in the interview benefits not only the hiring company but also the candidates. AI hiring agents are gaining popularity and, in some companies, replacing hiring managers outright. Intrigued by the notion, we decided to see for ourselves how well that works.

 

At Mirigos, we ran a controlled experiment with AI-led interviews. Volunteers and real candidates alike were told ahead of time what the experiment was and consented to try it. Most dropped out in under two minutes, saying it just felt impersonal and wrong.

 

Why? Because people don’t want to talk to a bot when they’re deciding who to work with. And neither do we.

 

We don’t use AI to run interviews. We use experienced people who know how to spot red flags, test for potential, and actually connect with candidates.

 

 

Use AI for Filtering. Just Don’t Get Fancy With It.

 

AI has a place. The first screen.

 

It’s fine to use AI or ATS tools to filter out resumes from candidates who are clearly nowhere near qualified, or who applied to every open role on your site. That’s basic filtering. That’s what the tools are good at.

 

But the moment you try to get clever (“show me only resumes that match this role 70 percent or higher”), you’re already filtering out great people. Machines can’t read nuance. They don’t know how to spot potential. They don’t understand context. A candidate who’s a strong fit might not use your keywords. A 70 percent match score can easily be a false no.

 

That’s where hiring breaks.

 

The better approach is simpler. Set a low threshold, maybe 30 to 40 percent, just to knock out the obvious mismatches. Then review everything else with a real human.
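As an illustration, that first screen fits in a few lines of Python. The keyword-overlap score below is a hypothetical stand-in for whatever match score your ATS produces, and the 0.35 cutoff is our illustrative number, not a vendor default:

```python
import re

# Sketch of a low-threshold first screen. The keyword-overlap score is a
# stand-in for whatever match score your ATS computes; the deliberately
# low cutoff only knocks out the obvious mismatches.

def match_score(resume_text: str, role_keywords: set[str]) -> float:
    """Fraction of role keywords that appear anywhere in the resume."""
    words = set(re.findall(r"[a-z0-9+#]+", resume_text.lower()))
    return len(role_keywords & words) / len(role_keywords) if role_keywords else 0.0

def first_screen(resumes: dict[str, str], role_keywords: set[str],
                 threshold: float = 0.35) -> list[str]:
    """Keep everything at or above the low bar; humans review the rest."""
    return [name for name, text in resumes.items()
            if match_score(text, role_keywords) >= threshold]
```

Everything that survives the cut goes to a human reviewer. The code is doing sorting, nothing more.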

 

Because filtering is not assessment. It’s not an evaluation. It’s just sorting. If you treat it like more than that, you’ll miss the best people before you even speak to them.

 

 

Cheating Isn’t New. It’s Just Easier Now.

 

Candidates were cheating in interviews long before ChatGPT. Some used someone else’s resume. Others had a second person feeding them answers behind the screen. AI can help with some of these red flags, but it won’t tell you who’s real.

 

Yes, it can detect duplicate resumes. Spot inconsistencies. Flag unusual patterns. It’s useful. But fraud isn’t new.
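Duplicate detection, for instance, doesn’t need anything exotic. Here is a minimal sketch using only Python’s standard library; the 0.9 similarity threshold and the function name are illustrative assumptions, not a tuned or standard value:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(resumes: dict[str, str],
                    threshold: float = 0.9) -> list[tuple[str, str]]:
    """Return pairs of candidates whose resume text is suspiciously similar."""
    def normalize(text: str) -> str:
        # Collapse case and whitespace so trivial edits don't hide a copy.
        return " ".join(text.lower().split())

    flagged = []
    for (a, text_a), (b, text_b) in combinations(resumes.items(), 2):
        if SequenceMatcher(None, normalize(text_a), normalize(text_b)).ratio() >= threshold:
            flagged.append((a, b))
    return flagged
```

A flagged pair is a lead, not a verdict. It tells a human where to look; it says nothing about which of the two candidates, if either, is real.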

 

AI just made the cheating faster, cleaner, and harder to detect, if you’re not paying attention.

 

The most dangerous version now? Identity mismatches. Deepfakes. Fake documents. Someone pretending to be someone else entirely. AI won’t catch that. But a trained human might.

 

Here’s what you can do to catch fraud:

  • Ask them to raise a hand in front of their face on camera.
  • Have them write a word or sentence on paper and hold it up.
  • Have them share their screen and walk through a task live.

 

None of this is complex. But it’s real. And it works.

 

Because AI can flag technical mismatches. But it can’t spot hesitation, discomfort, or someone who’s clearly reading a script. Fraud will happen. It always has. But the best filter is still a live conversation with someone who knows what they’re looking for.

 

 

The Real Stack: AI Where It Helps. Humans Where It Matters.

 

The best hiring processes don’t ignore AI. But they don’t lean on it blindly either.

 

Use AI to sort. Use it to highlight resumes with the right surface-level indicators. Use it to flag patterns or check consistency. But know where its limits are. And stop it there.

 

The real assessment still needs to come from a human. Not just any human. Someone who’s built teams. Someone who knows what recruitment really looks like. Someone who knows the difference between a strong developer and a strong resume.

 

At Mirigos, we use AI in the early stages. Screening. Sorting. Pattern checking. After that, it’s human-led. Every candidate talks to someone who understands the role, the culture, and the red flags. No bots. No scripts. Just conversation, experience, and context.

 

We’ve tested different models. We’ve run experiments. The answer is never full automation. It’s balance. AI where it helps. People where it matters.

 

AI can screen resumes. It can’t tell you who you’d trust on your team.