I recently had the opportunity to join Matt Alder on the "Recruiting Future" podcast to explore how AI tools are reshaping how quickly organizations screen, filter, and rank candidates. But matching a candidate to a job description and confidently predicting how they’ll actually perform are two very different outcomes. Most hiring processes have never been tested against real performance data, and many organizations still lack a clear, measurable definition of success for each role. We've included the live podcast below as well as a full transcript for you to skim and digest.
Without that foundation, even the most advanced AI is simply scaling a process that was never truly evidence-based to begin with—speeding up decisions, but not necessarily improving them.
In the interview, Matt and I explore the difference between matching candidates to roles and truly predicting performance, why most hiring practices aren’t grounded in evidence, and what it takes to clearly define success and meaningful performance indicators. We dig into how AI hiring tools hold up under scrutiny, the risks of automating poor decisions, and the questions TA leaders should be asking their vendors. We also discuss whether a reckoning is coming for hiring technology and what the future of hiring is likely to look like.
Many organizations have never validated their hiring processes against actual job performance, which means they often rely on assumptions rather than evidence to make selection decisions. Because they lack clear, measurable definitions of what success looks like in each role, there is no reliable benchmark for determining whether their hiring steps predict meaningful outcomes. This gap leaves hiring teams vulnerable to using tools and methods that feel effective but have no proven link to real-world performance.
When AI is introduced into this environment, it simply automates these untested or incomplete processes, increasing efficiency without improving accuracy. As a result, organizations risk making the same poor decisions—only faster—unless they first establish validated, evidence‑based criteria for evaluating talent.
Defining what good performance looks like is essential because organizations cannot meaningfully evaluate candidates without a clear understanding of the behaviors, outcomes, and competencies that drive success in a role. Once this clarity is established, identifying the specific indicators that truly predict performance allows hiring teams to focus on metrics that matter rather than relying on assumptions or superficial criteria. These indicators create a measurable framework that links hiring decisions to real workplace results.
When organizations skip this step, they risk building selection processes that feel structured but have no proven connection to actual performance. Only after these evidence‑based foundations are in place should AI or assessment technologies be introduced, ensuring they enhance the decision‑making process rather than automating flawed methods.
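To make the idea of "validated, evidence-based criteria" concrete, here is a minimal sketch in Python of a criterion-related validity check: correlating scores from one hiring step with a later, quantified performance measure. The numbers and 1-to-5 scales are hypothetical, and this illustrates the general technique, not any particular vendor's method.

```python
import numpy as np

# Hypothetical data: structured-interview ratings given at hire (1-5 scale)
# and the same employees' six-month performance ratings (1-5 scale).
interview_scores = np.array([3.5, 4.0, 2.5, 4.5, 3.0, 5.0, 2.0, 4.0, 3.5, 4.5])
performance_6mo  = np.array([3.0, 4.5, 2.0, 4.0, 3.5, 4.5, 2.5, 3.5, 3.0, 5.0])

# Pearson r is the classic validity coefficient: how strongly do scores at
# hiring time track results on the job?
r = np.corrcoef(interview_scores, performance_6mo)[0, 1]
print(f"Validity coefficient (Pearson r): {r:.2f}")
```

A coefficient near zero suggests the step adds little evidence. In practice a real study would use far more cases and account for issues like range restriction, but the logic is exactly what the conversation below describes.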
Concerns continue to grow around whether many AI‑driven hiring tools genuinely predict job performance or simply replicate the biases and assumptions already embedded in existing processes. Without rigorous validation, these tools may give a false sense of precision while reinforcing patterns that have never been shown to correlate with real‑world outcomes.
Automating poor decision‑making can escalate the consequences of flawed or biased methods, making bad hires faster and more frequent rather than improving overall accuracy. Organizations must critically evaluate vendors by examining the scientific basis, validation evidence, and performance correlation behind their technologies. This evidence‑based scrutiny helps ensure that AI tools contribute to better hiring decisions instead of amplifying systemic weaknesses.
Organizations should press vendors to explain exactly how their hiring tools have been validated, because without proven methodological grounding, there is no assurance that these systems measure what they claim to measure. They must also understand each tool’s true predictive accuracy, since many hiring technologies have never been tested against real‑world performance outcomes and may only simulate precision rather than deliver it. By digging into how well a model correlates with actual job success, organizations can avoid adopting tools that merely automate guesswork.
In addition, verifying that every model is built on solid scientific principles—rather than marketing claims or untested assumptions—is critical for ensuring that technology enhances the hiring process instead of reinforcing flawed methods. This level of scrutiny helps organizations make informed decisions about which tools genuinely add predictive value and which ones may introduce unnecessary risks.
As organizations recognize the critical difference between merely filtering candidates and actually predicting future job performance, hiring practices are likely to shift toward methods grounded in empirical evidence rather than assumptions. This growing awareness highlights the limitations of tools that rank applicants quickly but lack demonstrated links to real‑world outcomes, pushing employers to demand more scientific rigor in their selection processes. At the same time, understanding the true predictive accuracy of hiring tools becomes essential, since many systems have never been validated against actual performance data and may only provide an illusion of precision.
As organizations scrutinize these tools more closely, they are better positioned to differentiate between technologies that meaningfully forecast success and those that simply automate existing biases. Ultimately, verifying that each model is built on solid scientific principles—not just marketing claims—will be central to creating hiring processes that genuinely improve decision‑making and elevate long‑term workforce quality.
[The complete transcript of my conversation with Matt Alder is included below for those who prefer to read or want to revisit specific insights.]
00:00
Matt Alder
AI is changing how companies screen and shortlist candidates, enabling faster matching than ever before. However, most hiring processes have never been tested against actual job performance. Without that foundation, even the smartest technology is just helping organizations make the same old hiring mistakes faster. So what does the science of predicting performance really require? Keep listening to find out. Support for this podcast comes from Eightfold AI. Eightfold AI’s interviewer agent is transforming how organizations hire. Powered by Eightfold’s Talent Intelligence foundation, the AI interviewer is an autonomous and assistive agentic interviewer that thinks, adapts and evaluates like your best recruiter, operating 24/7 across time zones and languages while assessing every candidate fairly and consistently.
01:01
Matt Alder
It works across the entire interview process, from initial screening to final assessment, enabling teams to make faster hiring decisions, deliver a seamless candidate experience and complete interviews in under a business week so you can find your best-fit talent. To find out more, just head over to Eightfold.ai. That’s Eightfold.ai. Hi there. Welcome to episode 768 of Recruiting Future with me, Matt Alder. AI tools are changing the pace at which organizations filter and rank candidates. However, matching a resume to a job description and actually predicting whether that person will perform well in the role are two very different things. Most hiring processes have never been validated against real performance outcomes, and organizations often don’t have a clear, measurable definition of what success looks like in a role.
02:19
Matt Alder
Without that foundation, even the most sophisticated AI is just automating something that was never evidence based in the first place. So what would it actually take to build hiring processes that genuinely predict performance? My guest this week is Jennifer Yugo, Managing Director and owner of Corvirtus and an organisational psychologist specialising in evidence based hiring. In our conversation she explains the science behind predicting job performance and why most hiring processes are far from where they need to be. Hi Jennifer and welcome to the podcast.
02:54
Jennifer Yugo
Thank you for having me.
02:56
Matt Alder
An absolute pleasure to have you on the show. Please could you introduce yourself and tell everyone what you do?
03:02
Jennifer Yugo
My pleasure. Again, my name is Jennifer Yugo. I am the Managing Director, and I have the unique privilege of being the owner of Corvirtus, which is a four-decades-young organizational consulting firm that focuses on hiring and developing remarkable people for culture. Our name is rooted in Latin and means heart and values and passion. And so our passion, our greatest calling, is focusing on how we can help build remarkable places to work and do business that do exceptional things for their chosen purpose, and are also exceptional places to work that really build up and grow the people that work for them. By background, I have a PhD in industrial-organizational psychology. I know that’s quite a mouthful; I often just say organizational psychology.
03:56
Jennifer Yugo
I heard about IO Psych as a young high school student and was passionate about it all the way through college, and then had the opportunity to attend nationally ranked Bowling Green State University in the 2000s. And then I was very much passionate about being an academic and having an academic lineage and contributing to research, and working specifically in a business school. So I did that for a handful of years, but found out quickly that I had the drive to work hands-on with organizations. And so that led me to Corvirtus in very late 2013. And I got the opportunity to work alongside our founders for a number of years, from late 2013 until 2019, when I acquired the company as they entered retirement. And time moved so quickly.
04:49
Jennifer Yugo
I just realized the other day that I’ve been in the owner role longer now than I was working under Marta Earhart and Tom DeCotiis, our founders, which is mind-boggling to me. It feels more like 2/3 of the time was working with them and a third was owning the company. But that probably says something about what it’s like running a business.
05:11
Matt Alder
I know exactly what you mean. Let’s dive into this a little bit. So in terms of building these kinds of exceptional organizations, what does good performance look like in a role? How should organizations be thinking about measuring success? And why do so many get that wrong?
05:26
Jennifer Yugo
Yeah, that’s kind of the ultimate question, whether we’re talking about hiring or development; in this case, hiring. I think the biggest challenge is not understanding what success looks like, kind of conflating success and effort and personality, and then wondering why our hiring and promotion decisions don’t hold up. So from a science perspective, we look at performance in terms of patterns of behavior that really drive those end results that we’re looking at. And so when we have a good understanding of what those behaviors are, we’re able to look at what are the hiring tools that are going to get us there. But even before that, what qualities do we need in order to have those behaviors?
06:12
Jennifer Yugo
So if someone’s going to show persistence in pursuing a goal, even when there’s new obstacles constantly in their path, or they’re going to work on collaborating with others and supporting and growing others, then we can better understand, well, what tools in the hiring process, and at what point, can we have multiple measures of that, so that we can get the people that we need in the roles. So we focus really strongly on prediction and on having precise models of the competencies that we want to achieve. We get really brutally specific about what performance is, what are the behaviors under that, the qualities, the abilities, the characteristics that are going to get us there. And then how do we measure those? And also, how do we attract for them?
07:04
Jennifer Yugo
Because, especially in today’s landscape with constant new hiring technologies, we’re both wanting to understand the candidates, but also educate them, engage them, and recruit them at the same time to join us.
07:18
Matt Alder
To follow that up, really, with hiring, how do you validate a hiring process against what people are looking for? What does that look like for people who might not be familiar with how the science can work here?
07:32
Jennifer Yugo
So validation, it’s one of those words that sounds really technical, but in essence, it’s quite practical. And there’s other types of validation that we do in other industries, just making sure that something is working. I was talking to a healthcare client this morning and used the analogy of making sure that a lab test is measuring what we intended to measure, and just the different problems that you could have if you were using a test, a blood test, or some other screen, and it wasn’t accurately measuring the disease state or the indicator of disease or challenge that you expected it to. So we’re wanting to understand: are we basing our hiring decisions on evidence, or are we just basing them on our intuition, on something that feels good?
08:19
Jennifer Yugo
So we want to understand exactly what the job is, not just how we hope someone exists within it. So what are the core behaviors, responsibilities, things that they’re accountable for? And then we want to look at how it’s being measured. So you could take your hiring process today and look at the interviews; look at any assessments or other tools, other screens; look at what we sometimes call biodata, experience, other qualities, how long someone’s been in a similar role or an adjacent role; and then see how that relates to performance. So taking each touch point in the hiring process, each process of evaluation, and then ideally being able to have some quantitative variable that we can attach to that.
[Image: example of defined competencies from a competency model]
09:06
Jennifer Yugo
So if we’re rating candidates at the beginning as, you know, not meeting expectations or exceeding expectations on their experience, if we can incorporate a number value to that, we can then see how that relates to the hiring decision and then to some indicator of performance, whether it’s making it through training; a lot of times we focus on that. There’s a large call center environment that I just spoke to, where we were able to help make sure not just that people were making it through training, but they even noticed that people coming in were just much more receptive to being engaged. We look at that.
09:40
Jennifer Yugo
Another organization that we helped was in manufacturing, and they really value having people that just naturally want to learn new workstations, pick up new tasks, new key points on the manufacturing line. With their hiring process and how we were able to help them, we were able to see that there was a strong link between scoring higher on the assessment and some of the other tools that we put into place, and learning more of those stations on their own. And so they’re more capable and they have a stronger workforce because of the hiring process. And we knew that was a key variable, something that they wanted to focus on, from talking to them and hearing that for production, having people that take the initiative to learn multiple stations is key.
10:25
Jennifer Yugo
Or for the call center, training was a point of challenge, and getting people to make it through training, engage in training, those were all things that they really needed. And so then looking at, okay, well, what qualities would help someone be engaged in both of those scenarios, even though they’re very different, and putting those in place and then measuring that. And so that’s what a lot of hiring processes and organizations are missing: that evidence. They haven’t been validated; they’re kind of built on tradition, maybe what we’ve always done or feel is effective, our own intuition, instead of really looking at the numbers. I think sometimes when we have a new marketing solution or customer engagement solution, we’re very much like, okay, what are the numbers? How is this driving behavior? How is it driving results?
11:16
Jennifer Yugo
But for hiring, we’re maybe a little bit less ready to gather that data and look at the numbers.
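[To make Jennifer’s call-center example concrete, here is a minimal Python sketch, with entirely hypothetical numbers, of relating an assessment score at time one to a binary outcome later, in this case completing training. With a 0/1 criterion, the Pearson correlation is the point-biserial correlation.]

```python
import numpy as np

# Hypothetical data: pre-hire assessment scores and whether each new hire
# completed training (1) or dropped out (0).
scores    = np.array([62, 75, 58, 90, 70, 84, 55, 78, 66, 88], dtype=float)
completed = np.array([ 0,  1,  0,  1,  1,  1,  0,  1,  0,  1], dtype=float)

# With a binary criterion, Pearson r reduces to the point-biserial correlation.
r_pb = np.corrcoef(scores, completed)[0, 1]
print(f"Point-biserial r (score vs. training completion): {r_pb:.2f}")

# A complementary, decision-friendly view: completion rates above and below
# a score cut (here, the median).
cut = np.median(scores)
print("Completion rate, high scorers:", completed[scores >= cut].mean())
print("Completion rate, low scorers: ", completed[scores < cut].mean())
```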
11:23
Matt Alder
Yeah, I think that’s really interesting. I think that in many cases, organizations sometimes think that it’s too difficult to align performance with hiring because there’s no data, or the data takes a long time to come. But as you say, picking those things is crucial to that.
11:39
Jennifer Yugo
Absolutely.
11:40
Matt Alder
So there’s been a huge influx of technology in this space in the last couple of years, primarily driven by the AI revolution that we’re going through. A number of the tools and platforms that are out there are talking about predicting job performance in some way. Some of them are being quite explicit about that; some of them are being sort of implicit in the way that their systems rank and select people. Given everything that we’ve just talked about in terms of validation, does that hold up to scrutiny? Are you seeing these tools being able to do what you just described?
12:16
Jennifer Yugo
That’s the big question: does it hold up? And some of it does, but a lot of it collapses once you start asking follow-up questions. So if a tool is, let’s say, gauging the Big Five model of personality, agreeableness, neuroticism, introversion, extraversion, well, that’s great, that could be really useful information to have. But what does that mean for the job? Are those traits strongly predictive? In what ways? What should we be looking at? So AI can absolutely identify patterns, but is it just automating based on past decisions? Some tools will take your top performers, which sounds great, and say, okay, we’ve built a model based on your top performers. But what about those people, like we were just talking about with the call center, that drop out during training?
13:04
Jennifer Yugo
What are the qualities that derailed them, that led them to leave early, that made it difficult for them to thrive in that environment? Or what if we look at the people in the manufacturing environment that are just happy at their one station and not looking at opportunities to learn new things and move forward? If we train algorithms, a lot of times, on messy, subjective, maybe historically biased data, then it’s not going to drive results and be as objective as we hope it is. It’s not prediction. So again, that goes back to that idea of auditing your hiring process and doing validation and looking to see exactly what the measurement of performance is, and how do scores and results at time one drive results later?
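[A quick sketch of the survivorship problem Jennifer describes: a model fit only on retained top performers never sees the people who derailed during training. The fields and labels below are hypothetical.]

```python
import pandas as pd

# Hypothetical hiring history, including the people who left during training.
history = pd.DataFrame({
    "assessment_score": [82, 74, 91, 60, 88, 55, 77, 83],
    "outcome": ["top_performer", "left_in_training", "top_performer",
                "left_in_training", "solid", "left_in_training",
                "solid", "top_performer"],
})

# The common shortcut: model only the survivors, discarding the cases that
# carry the derailment signal.
survivors_only = history[history["outcome"] != "left_in_training"]
print(f"{len(survivors_only)} of {len(history)} cases survive the filter")

# The alternative: keep everyone and code attrition as an explicit outcome,
# so the derailment pattern stays visible to the model.
history["derailed"] = (history["outcome"] == "left_in_training").astype(int)
print(history.groupby("derailed")["assessment_score"].mean())
```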
13:53
Jennifer Yugo
And then in addition, just also being a useful tool during the onboarding and early employment experience. We know that the first few months on the job are absolutely critical. We hear again and again, even for leadership roles, that there’s painful, unexpected turnover in the first few months or even the first year, and it’s very damaging. And so how can we take information that we learn about the person and make sure that we’re measuring those potentially derailing forces? And if we know that someone is maybe above the benchmark that we need but still has an opportunity in a certain area, knowing that if I support them, engage them, check in with them in certain ways, it can help connect leader behavior to supporting the person making it through that difficult first year, when so much is on the line.
14:40
Matt Alder
Yeah, I think that’s really interesting. And you really identified there the different ways that these tools might be working, and we’ll dive into that in a second. But before we do, do you see anything out there that does live up to the hype? Or is there a way that AI-powered tools could work differently to be able to deliver the kind of things that are starting to be promised?
15:00
Jennifer Yugo
So again, just starting with that understanding of the job. Sometimes we talk about job analysis, which is another very technical, maybe eyes-glaze-over term that just means a deep-rooted understanding of the job: understanding what the roles are, the job expectations, the trajectory, looking strategically at what we’re going to need five years from now in terms of the job. And so starting with that science perspective. And again, I might be biased with my background, but a tool that was built by people trained and skilled and experienced in the science of industrial and organizational psychology, where even if it’s using artificial intelligence, they have that science mindset to look and make sure that outcomes are clearly defined, and we’re not just dressing something up with marketing and calling it prediction.
15:55
Jennifer Yugo
So again, just making sure that there’s been rigorous validation of the data, and making sure there’s transparency around it. Sometimes there’s a lot of opacity around what a tool can do; the vendor maybe can’t explain why a candidate scored the way they did. Whereas with other solutions, like in our approach, we’re able to tell you exactly why, and you’re able to see from the report exactly why the candidate maybe wasn’t recommended, or received a lower overall score or rating than another, or just landed on different qualities the way they did. And, you know, one model isn’t going to fit all jobs; that’s something that often happens with these tools. And how is validation built into the product life cycle?
16:39
Jennifer Yugo
So, my approach as an IO psychologist with our business is providing ongoing validation, because your jobs change, your culture changes. I had another call today just talking about how different the workforce is and how different their work environment is than even just a few years ago. And so being able to pivot and adapt the solution, both for the candidate experience as well as for understanding the person and making good decisions, all of those things need to be taken into account.
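[The transparency Jennifer contrasts with black-box scoring can be as simple as a weighted model whose report shows each competency’s contribution to the overall result. A minimal sketch follows; the competencies and weights are hypothetical and are not the Corvirtus model.]

```python
# Hypothetical competency weights for a single role.
WEIGHTS = {"persistence": 0.40, "collaboration": 0.35, "learning_agility": 0.25}

def score_report(ratings: dict) -> None:
    """Print each competency's contribution so the overall score is traceable."""
    total = 0.0
    for competency, weight in WEIGHTS.items():
        contribution = weight * ratings[competency]
        total += contribution
        print(f"{competency:>16}: {ratings[competency]:.1f} x {weight:.2f} = {contribution:.2f}")
    print(f"{'overall':>16}: {total:.2f}")

score_report({"persistence": 4.0, "collaboration": 3.5, "learning_agility": 4.5})
```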
17:08
Matt Alder
You touched on this a little bit, but I just want to expand on it to give people the advice that they need. So with the tools that are out there, what would your advice be for TA leaders, in terms of what they should look at and the questions they should be asking about how these tools are built and how they might know they’re effective?
17:29
Jennifer Yugo
That’s a deep question, but simple advice: think about not outsourcing your thinking to technology, because then you’re outsourcing the ability to understand a person and to actually make a better decision because of that information that you have. So TA leaders, talent acquisition leaders, should really think about just what is this improving? What is the problem? Are we adding this just because we think it’s cool? I mean, definitely improving the candidate experience is an important goal, but what exactly are we tasking this with doing? Is it narrowing a broad applicant pool in terms of high-volume hiring? Is it reducing risk? That’s something that we hear about, and I mentioned leadership earlier and trying to reduce early manager turnover.
18:23
Jennifer Yugo
If we’re able to do that, then we also reduce a lot of risk, or, in those cases of turnover, the things that are happening that introduce risk into the business. And on the other side, what risks are we introducing if this technology derails, and how can we help mitigate that? And then once we’re evaluating the tool, thinking about its validity. What evidence do they have that it’s effective, that it’s measuring performance? With that in mind, is it also fair? Is it providing a positive candidate experience, and is it not biased in any way? Is it supporting people in making the best decision, so that we’re not just choosing a shortcut or something that looks cool to have, but really making the right decisions for performance, for our culture, for helping people thrive once they enter our company?
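[One concrete fairness check a TA team can ask any vendor to reproduce is the EEOC’s four-fifths rule, which compares selection rates across applicant groups. The counts below are hypothetical, and the 0.80 threshold is a screening heuristic, not proof of bias either way.]

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the screen."""
    return selected / applicants

# Hypothetical counts for two applicant groups passing an AI screen.
rate_a = selection_rate(selected=45, applicants=100)
rate_b = selection_rate(selected=30, applicants=100)

# The four-fifths rule compares the lower selection rate to the higher one.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.80:
    print("Below 0.80: potential adverse impact; investigate further.")
```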
19:13
Matt Alder
I mean, it is a very disruptive time; things are kind of developing almost constantly. How do you think things are going to move forward over the next two or three years? I mean, obviously making predictions is very hard, but what would you like to see happen?
19:27
Jennifer Yugo
Yeah, I think, it’s a daunting word, but we might be headed into a reckoning phase. And that’s what a lot of other people in the field are also feeling. So over the next few years, I think we’re going to see a lot more scrutiny, both in terms of legal and governmental intervention, but just overall. So decision makers and people in the field of talent acquisition asking tough questions, in addition to that regulatory and legal pressure, and less tolerance for just “trust us and believe the results” or “look at this flashy report or flashy experience we’re able to deliver.” We’ll start to see organizations investing in some of the things that I was talking about: how do we make clear definitions of performance, have discipline in making decisions, and make sure that everything is supported by evidence?
20:16
Jennifer Yugo
So not replacing that human decision making with an algorithm, but supporting sound decision making with technology. And that’s really where we’ll see the biggest gains come from.
20:26
Matt Alder
Jennifer, thank you very much for talking to me.
20:29
Jennifer Yugo
Thank you, Matt. This was a pleasure.
20:31
Matt Alder
You can follow this podcast on Apple Podcasts, on Spotify, or wherever you listen to your podcasts. You can also subscribe to our weekly newsletter, Recruiting Future Feast, and get the inside track on everything that’s coming up on the show. Thanks very much for listening. I’ll be back next time and I hope you’ll join me.