As much as I’d like you to make this newsletter your favorite reading, if you haven’t read Bent Flyvbjerg’s How Big Things Get Done, stop reading this and do that first. It’s really a fantastic distillation of what it takes to get projects done well. I wish I’d written it.
One of the biggest lessons the book gets across is the importance of modularity. To summarize one of his examples, the Empire State Building in New York City was such a success because it wasn’t a 100-floor building. It was really 100 nearly-identical one-floor buildings stacked on top of each other. By the time the crew got to the top, they had the construction down to a science.
He calls this “finding your Lego,” like the excellent system of bricks and plates from the Danish company of the same name.
The idea of making things modular and repeatable is pretty common in the world of software engineering. It’s the concept that underpins the object-oriented programming paradigm, for example. Rather than build everything from scratch, you define primitives (classes) that can then be built upon (subclassed) to extend that base functionality.
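In Python, for instance, that pattern looks something like this (the shapes here are purely illustrative):

```python
# A base class defines the shared primitive; subclasses build on it
# rather than starting from scratch.
class Shape:
    def __init__(self, name: str):
        self.name = name

    def area(self) -> float:
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, width: float, height: float):
        super().__init__("rectangle")
        self.width = width
        self.height = height

    def area(self) -> float:
        return self.width * self.height

class Square(Rectangle):
    # A Square reuses Rectangle's logic instead of reimplementing it.
    def __init__(self, side: float):
        super().__init__(side, side)

print(Square(3).area())  # 9
```

Each new class is a small, predictable extension of something already tested, which is exactly the "Lego" quality we want.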
Today I wanted to explore bringing this idea to the world of recruiting. More specifically, how we design the questions we give candidates as they’re being interviewed.
In the same way we want reusable components in software, there’s nothing better than consistent interview questions. That’s the “finding your Lego” of the interview process.
What does that mean?
Many interview processes leave the questions up to the interviewers. Yes, they may have a goal or general mandate (a technical interview versus a behavioral one), but beyond that the interviewers can do whatever they like. It’s also tempting to adjust or tailor the questions to the candidate you’re interviewing. The person who’s applied doesn’t have experience in computer vision? Maybe skip the computer vision questions.
A consistent interview is one that dispenses with both these ideas. The interview questions are agreed with the whole hiring team, in advance. Everyone uses the same questions, every time. The scoring guidelines are written down, tested, and applied as universally as possible by the hiring panel. It shouldn’t matter if one interviewer gets swapped out for another.
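To make that concrete, here’s a minimal sketch of what a shared question bank with written scoring guidelines might look like. Everything here, the question, the rubric wording, the function names, is hypothetical, not a prescription:

```python
# Hypothetical question bank: every interviewer asks the same
# agreed-upon questions and scores against the same written rubric.
QUESTION_BANK = [
    {
        "id": "cv-1",  # invented identifier for illustration
        "prompt": "How would you detect near-duplicate images at scale?",
        "rubric": {
            1: "No coherent approach.",
            2: "Exact-match hashing only.",
            3: "Perceptual hashing or embeddings, with tradeoffs.",
        },
    },
]

def average_score(answers: dict) -> float:
    """Average rubric score across the agreed question set,
    so candidates are compared on identical questions."""
    return sum(answers.values()) / len(answers)

print(average_score({"cv-1": 3}))  # 3.0
```

The point isn’t the code itself, it’s that the questions and scoring live in one shared artifact the whole panel signed off on, so swapping one interviewer for another changes nothing.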
That might sound constraining, boring, or even counterproductive. You’re taking away space for creativity or a given interviewer’s unique take. There’s so much work upfront. You could be sinking a great candidate by not adjusting to her specific circumstances.
In my experience, the exact opposite happens.
Doing all the work upfront provides the glue that makes a project successful: alignment.
The variation that people want to bring to the table — asking their preferred questions, tailoring which questions go to which candidates — is really a sign that not everyone has agreed what you’re screening for.
If one person wants to ask computer vision questions but another person doesn’t, that suggests to me that the hiring panel doesn’t agree that’s a relevant qualification. If it is, then it seems strange to me that you’d ever want to skip that question. If it’s not, then why would you ask that question?
This doesn’t necessarily restrict the kinds of fuzzy weighting that hiring teams often want to capture either. Perhaps computer vision expertise is a “nice to have”: something you’d like, but consider less important and aren’t sure you can find in a candidate who fits the rest of the hiring profile. Ask the question anyway. There’s nothing to stop you from advancing a candidate who didn’t give a great answer to that question.
The results of this upfront planning and great alignment pay off as the process gets executed. There will, of course, be some good discussions about judgements and decisions at the margin. But fundamentally, it brings everyone together. No one’s going to waffle when it comes to making decisions.
Moreover, in the same way the crew building the Empire State Building got better as they built each of the 100 floors, the repetition and practice of reusing questions gives those questions a whole lot more power.
Once you’ve had your entire engineering team ask the same question the same way many times, you get better at asking that question. You begin to see patterns in what a good, bad, or great answer looks like. You can revise and refine the question as you discover what’s more or less diagnostic.
It’s so much easier to compare candidates when you’re asking consistent questions. There’s no additional pretzel twist necessary to figure out whether the two questions are comparable in difficulty or in relevance to the role you’re trying to fill.
And candidates get a better experience, too. You’re not at risk of giving someone a question that’s too hard or accidentally passing someone on an easy question. You can assure them that the process is fairer and more objective with credibility: you’ve made a structural change to the way you do interviews.
This approach gets at another of the pillars of How Big Things Get Done: plan slowly and deliberately where it’s cheap — before you’re talking to candidates and anything’s really on the line — and act quickly.
It’s a lot easier to hash out the differences in people’s perspectives on why you’re making a hire and what they’re bringing to the team without the ticking clock of people being screened and interviewed. You want to have those debates when the candidate isn’t waiting for you to get back with an offer or a rejection. This is the moment to be creative.
My teams have even gone so far as to run mock internal interviews, to really road test the questions. Sometimes we’ll discover a question as written is too ambiguous, or the wrong level of difficulty. That’s a lot easier to do in the safety of the planning stage than live with a candidate.
Finding great people is a challenge. And no interview or screening process will ever be perfect. At the same time, leaving everything to the whims of the people on the hiring panel is a recipe for muddiness and discord.
Consistent interviews, the repeatable “Lego” of candidate screening, are better for everyone. The hiring panel irons out issues upfront, can truly align before the delicate part of the process kicks off, and will get better over time by practicing something modular. You can compare candidates on a like-for-like basis. Candidates have a better experience: you can move faster and more confidently, and they know you’re running a fair process that respects them. When it comes to interview design, don’t be afraid to keep it consistent.
Bonus: Another Video Worth Watching
Having worked for so long on recommender systems, I’m still pretty dissatisfied with their ability to operate in “discovery” mode: that is, to recommend something that’s genuinely novel but that I will actually like.
This treatment on one of my favorite YouTube channels covers a recent paper exploring a few interesting facets of recommender systems.
Enjoy this? Have an idea for something you’d like a perspective on? Drop me a line: I’d love to hear from you.