LeetCode: In defense of the least worst thing
Why I hate LeetCode, and why it's still the least worst thing around.
I’m far from an objective observer on LeetCode. To be honest, I don’t have many kind things to say about your run-of-the-mill company trying to use LeetCode exercises to hire people. But I get the utility.
As the infamous tweet by Max Howell goes, good, well-rounded engineers are often overlooked in favour of other engineers who can “invert a binary tree”. And this happens more often than you’d think - especially in big companies.
And I don’t think this is a bug. It’s a feature.
The thing that LeetCode solves for is “evaluation at scale”.
You have hundreds of standardised tests that you can select, and send to candidates. These candidates then try and complete these in a closed environment. The code gets evaluated based on the resources it consumed running a pre-defined set of tests. Overall, it’s as standardised as it can get when evaluating a programmer for good programming foundations.
But the world is also filled with a class of people who only know how to leetcode.
As anyone who has spent weeks on LeetCode will tell you, after a while, you become better at doing LeetCode. You don’t necessarily become good at “building things” or “thinking in systems” - but you become really good at data structure fundamentals, avoiding O(n^2) time complexity, and inverting binary trees.
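For reference, here’s a minimal sketch of the infamous exercise itself - inverting a binary tree. The `Node` class and function names are illustrative, not any particular LeetCode template:

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """A minimal binary tree node (illustrative, not a LeetCode template)."""
    val: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def invert(root: Optional[Node]) -> Optional[Node]:
    """Mirror the tree by recursively swapping left and right children."""
    if root is None:
        return None
    root.left, root.right = invert(root.right), invert(root.left)
    return root
```

Five lines of recursion - which is exactly the point: it tests whether you’ve drilled the pattern, not whether you can build systems.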
So, the problem is you’re optimising for filtering out a certain class of good engineers who are not good at doing LeetCode, and filling the pipeline with people who are good at LeetCode, regardless of their engineering prowess.
Basically, you’re knocking out good engineers who can “do the job well”, in favour of not-so-good engineers who can just do the test [1].
This is ok for any FAANG company - even preferable.
They can live with the false negative of rejecting good engineers, because they can attract the top engineering talent who are also good at doing leetcode. In other words, there’s enough of an intersection between people who want to work at FAANG, good engineers, and have good LeetCode skills (or are motivated to cultivate them).
In return for employing LeetCode, they get to screen thousands of people at scale, fairly cheaply. Some false positives (bad engineers, good at leetcode) go through the process, and get eliminated later on in the system design interviews.
The problem is when all companies try to apply the same cookie-cutter approach to hiring. To use LeetCode effectively, both of these things must be true:

1. The top of the funnel on your hiring pipeline (application stage) is getting smashed with more CVs than you can keep up with.
2. Enough good engineers want to work at your company.
A criticism I hear a lot from engineers about LeetCode is that they’ve never had to invert a binary tree, or even write a simple sort with a `for` loop by hand, in the past 10 years. And that’s a valid criticism.
I can’t remember the last time I wrote a quicksort by hand, without the aid of a library. What’s more, if I saw a PR in which someone implemented quicksort by hand instead of using the stdlib, I’d leave a “changes requested” comment. As an industry, we’ve moved towards higher-level abstractions subsidised by compiler optimisations, to the degree that doing LeetCode feels like an accountant being evaluated on number theory.
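To make that concrete, here’s a sketch of the kind of hand-rolled quicksort an interview round rewards - followed by what I’d actually approve in a PR. The implementation is illustrative, not a production sort:

```python
def quicksort(xs: list[int]) -> list[int]:
    """Textbook quicksort: fine for an interview, flagged in a code review."""
    if len(xs) <= 1:
        return xs
    pivot, *rest = xs
    smaller = [x for x in rest if x <= pivot]
    larger = [x for x in rest if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)


# What I'd approve instead: the stdlib one-liner.
# sorted() uses Timsort, is implemented in C, and has no
# recursion-depth or worst-case-pivot footguns.
def sort_for_real(xs: list[int]) -> list[int]:
    return sorted(xs)
```

Both produce the same output; only one of them is a good idea in a codebase.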
So, what’s the solution here?
Clearly, we all agree that coding exercises based on real projects and tooling are much better for the candidate, and much better at producing a good signal. I personally know people who can ace LeetCode interviews, but struggle with third normal form in relational databases.
But evaluating these generic, non-standardised exercises at scale is a problem [2]. There’s an essential trade-off between “real-world usefulness” and “cost of evaluation” [3]. So as a FAANG / FAANG-adjacent company, you’re back to the least worst option - LeetCode.
[1] Or bad engineers, who cheat with AI - which is fast becoming a problem.
[2] This is not a problem if you haven’t got scale. You’d get a much better signal by getting an engineer to screen your CVs.
[3] Possibly something LLMs can solve. If someone is solving this problem, please reach out.