Your Interviewers Are Auditioning for Themselves

I’ve sat on both sides of enough interview tables to notice a pattern nobody talks about openly. Your interviewers, the engineers and senior ICs you pull into hiring loops, are often not evaluating candidates. They’re performing.
They’re positioning themselves as the gatekeeper. The expert. The authority who decides whether you’re worthy of entering their world. Nobody asked them to play that role. No hiring manager is sitting in the room watching this happen. They do it because the power dynamic lets them, and it feels good.
The candidate sitting across from them becomes a prop in that ego exercise.
How This Actually Plays Out
Here’s what I keep seeing. A company needs to hire. They pull two or three engineers into an interview loop. There’s no rubric. No structured scoring. No shared understanding of what “good” looks like for this specific role. The engineers walk in with one implicit instruction: judge this person.
So they do. They ask questions that can only be answered well if you’ve spent real time working through a particular type of problem. The kind of problem their company faces, the kind they’ve personally spent years encountering, struggling with, experimenting around, failing at, and eventually building intuition for. The candidate hasn’t failed at those problems yet. They haven’t had the months of runway to sit with them, try things, and arrive at the answer the interviewer now carries around effortlessly.
That doesn’t mean the candidate can’t solve them. It means they haven’t seen them before. They need time to map from what they know to what’s being asked, and an interview doesn’t give them that time. The interviewer had years. The candidate has minutes. And when the candidate pauses or stumbles, the interviewer marks them down.
What’s happening here isn’t assessment. It’s a territory display. The interviewer is unconsciously saying: I know things you don’t, and that makes me valuable.
The worst part? They don’t even realize they’re doing it. They genuinely believe they’re maintaining standards.
The Experience Confusion
When someone has worked at a company for two or three years, they don’t just learn the tools and the codebase. They build experience solving a particular type of problem. They develop judgment, pattern recognition, instincts about what works and what doesn’t within a specific problem space. That’s real expertise. It takes years to build and it has genuine depth.
But here’s what interviewers forget: that expertise was built because the company gave them the opportunity to build it. They didn’t walk in with it. They accumulated it by working on those specific problems, in that specific context, over time.
The candidate sitting across from them might have equivalent depth, built in a completely different problem space. They’ve solved problems with similar underlying structures but different surface characteristics. The skills transfer. The judgment transfers. But the mapping between “how I solved this in my context” and “how I’d approach it in yours” isn’t instant. It requires understanding your domain, your constraints, your customers. And that mapping is genuinely hard to do on the spot in a 45-minute interview when you’ve never seen the problem framed that way before.
So the interviewer asks a question rooted in their specific experience, the candidate pauses because they’re trying to map from their world to this unfamiliar one, and the interviewer reads that pause as a gap in skill. It isn’t. It’s the absence of context that your company would provide in the first two months anyway.
The question worth asking is whether this person has the capacity to do that mapping. Whether they can take their existing depth and apply it to your problems. Not whether they’ve already done it before walking through the door.
The Structural Failure
This isn’t really about individual interviewers being arrogant. Some are, sure. But most are decent people operating inside a system that hasn’t given them guidance.
Think about what you’re actually doing when you put an employee onto an interview panel. You’re asking them to make a judgment call about another human being’s professional worth. And in most companies, here’s the preparation they get for it: none.
No calibration on what the role actually requires. No agreement on which skills are must-haves versus nice-to-haves. No scoring framework. No conversation about where the bar sits and why it sits there. You just send them in and say “tell me what you think.”
So they default to the only benchmark they have: themselves. Their own knowledge becomes the bar. And since they know things the candidate doesn’t, because they work there and the candidate doesn’t, the candidate almost always falls short.
Frank Schmidt and John Hunter published a meta-analysis in 1998 synthesizing 85 years of data on hiring methods [1]. Unstructured interviews, the kind where you walk in and ask whatever comes to mind, had a validity coefficient of 0.38. Structured interviews, where every candidate gets the same questions scored against the same rubric, came in at 0.51. That gap is the difference between a process that adds meaningful signal and one that leaves most of the outcome to chance dressed up as judgment. The reason structured interviews do better has nothing to do with cleverer questions. The rubric takes away the interviewer’s ability to use themselves as the reference point.
This is a delegation failure wrapped inside a culture problem. You’ve delegated evaluation authority without delegating the framework for how to use it. And your culture hasn’t established that interviewing is a skill that requires training, not just seniority.
What I Actually Do
I’m an interviewer too, so let me tell you what I’ve landed on after years of getting this wrong.
At the beginning of every interview, I tell the candidate exactly what’s going to happen. I say: I’m going to start with straightforward questions and progressively increase the complexity. My goal is to find the edge of your knowledge, not to test whether you meet the minimum bar for this role.
I tell them the questions will eventually go beyond the scope of the position they applied for. And I tell them it’s completely fine to say “I don’t know.” When they hit that point, it probably means we’ve left the boundaries of the role anyway.
You can feel the tension leave the room when you say this. The candidate stops performing. They stop trying to bluff through answers they’re unsure about. They start actually thinking, actually engaging, because the stakes just changed. It’s no longer pass/fail. It’s a mapping exercise.
What I get from this is the full shape of what this person knows. Where their knowledge is deep, where it’s shallow, where it drops off entirely, and (this matters) where it extends beyond the role. Sometimes you find someone whose ceiling is way above what you’re hiring for. That’s information you want to have.
The Taste It Leaves
There’s a practical cost to getting this wrong that goes beyond missed hires.
When a candidate walks out of an interview feeling talked down to, feeling like the process was designed to make them fail, feeling like the interviewer was more interested in demonstrating superiority than understanding capability, that candidate talks. They tell other engineers. They post about it. They remember.
And the candidates who actually are exceptional? They have options. They’re interviewing at four other companies simultaneously. If your process feels adversarial while another company’s process feels respectful and curious, you lose them. Not because your bar was higher. Because your process was worse.
You’re not just evaluating candidates. Candidates are evaluating you. Every interview is a two-way signal about what it’s like to work at your company. If your interviewers are running ego auditions, the signal you’re sending is: this is a place where people protect status instead of building together.
The Fix Is Boring
The solution here isn’t complicated, and that’s partly why it doesn’t happen. It’s unglamorous work.
Train your interviewers. Give them rubrics. Align on what the role actually needs before the loop starts. Make the scoring explicit. Review how interviewers perform over time, not just whether their hires work out, but whether candidates report a fair experience.
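To make “give them rubrics” and “make the scoring explicit” concrete, here is a minimal sketch of what a shared scorecard could look like. Everything in it, the competencies, the 1–4 anchor descriptions, the disagreement threshold, is an illustrative assumption, not a prescribed framework; the point is only that the bar lives in an artifact the whole loop agreed on, not in any one interviewer’s head.

```python
# Sketch of a structured interview scorecard. All competency names,
# anchor wordings, and thresholds below are illustrative assumptions.
from statistics import mean, stdev

# Agreed BEFORE the loop starts: what the role needs, and what each
# score means, so no interviewer defaults to themselves as the bar.
RUBRIC = {
    "problem_decomposition": {
        1: "Cannot break the problem into parts without heavy prompting",
        2: "Decomposes with hints; misses key constraints",
        3: "Decomposes independently; surfaces trade-offs",
        4: "Decomposes and reframes; spots constraints we missed",
    },
    "mapping_from_prior_experience": {
        1: "Only restates what they did at a previous job",
        2: "Draws analogies but cannot adapt them to the new context",
        3: "Adapts prior solutions once given the missing context",
        4: "Asks for the missing context, then maps their depth onto it",
    },
}

def score_candidate(scores_by_interviewer: dict[str, dict[str, int]]) -> dict:
    """Aggregate per-competency scores across interviewers and flag large
    disagreement -- disagreement is a calibration problem to discuss,
    not automatically a candidate problem."""
    report = {}
    for competency in RUBRIC:
        scores = [s[competency] for s in scores_by_interviewer.values()]
        report[competency] = {
            "mean": round(mean(scores), 2),
            # More than one point of spread means the panel is not
            # using the anchors the same way.
            "needs_calibration": len(scores) > 1 and stdev(scores) > 1.0,
        }
    return report

report = score_candidate({
    "interviewer_a": {"problem_decomposition": 3,
                      "mapping_from_prior_experience": 4},
    "interviewer_b": {"problem_decomposition": 3,
                      "mapping_from_prior_experience": 2},
})
print(report)
```

Note what the flag does: when two interviewers land two points apart on the same competency, the output sends the panel back to the rubric, not back to the candidate.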
And most importantly, ask yourself this: is the bar your interviewer is setting the same bar you’d set? Because if you haven’t explicitly aligned on that, I promise you it’s not. Your interviewer is setting their bar, calibrated to their ego, and you’re treating the output as if it’s organizational judgment.
It’s not. It’s one person’s need to feel smart in a room where they hold all the power.
Fix the structure and the behavior follows.
References
[1] Schmidt, F. L. & Hunter, J. E. (1998). “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings.” Psychological Bulletin, 124(2), 262-274.
Daron Yondem advises senior technology leaders on AI-driven organizational transformation.