
I've found that asking them to review some obviously bad code with glaring errors and problems is more informative than asking them to solve some random DSA problem.

Candidates who can code well can point out code that has obvious problems. Just ask if this is good or bad, and if it is bad, how they could improve it. This demonstrates competency and doesn't make the interview seem like a grind but instead more like a conversation.



Just be careful with the “how to improve” part. In my experience as an interviewee, it sometimes becomes a regular algorithmic interview. A couple of times I found some N+1 queries or inner loops and was asked to “fix it”, which can just turn into leetcode.
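For anyone unfamiliar with the N+1 query pattern mentioned above, here is a minimal sketch using an in-memory SQLite database (the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
""")

# N+1: one query for the users, then one more query per user.
totals_n_plus_1 = {}
for uid, name in conn.execute("SELECT id, name FROM users"):
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?", (uid,)
    ).fetchone()
    totals_n_plus_1[name] = total

# Same result in a single query with a JOIN and GROUP BY.
totals_joined = dict(conn.execute("""
    SELECT u.name, COALESCE(SUM(o.total), 0)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
"""))
```

Spotting the loop-of-queries is the code-review part; being asked to restructure it under time pressure is where it starts feeling like leetcode.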

The best code review interviews are the ones with a healthy amount of actual code: a handful of functions and classes, some badly named variables, bad comments, some misleading code paths, a couple of bad patterns, etc. The worst ones are a non-optimal solution where you're asked to make it optimal. That's just leetcode disguised as “code review”.


Leave it open-ended and include code with multiple levels of bad so it's not just a quiz. I used real C code that research scientists had given me. If they look at it and say there's no reason this should be in C and in a dynlang instead, that is fine. If I hand them C code where the entire program is a 1000-line main function, with lots of repetition, hard coded file names, and fixed-sized string buffers, and all they tell me is the indentation is icky: that's a negative signal.
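A compact analog of that “multiple levels of bad” idea, as a hypothetical Python exercise rather than the actual C code: the smells range from surface-level style to a genuine correctness bug, so you can see how deep a candidate digs.

```python
# Deliberately flawed review-exercise code (hypothetical example).
# Levels of bad, shallow to deep:
#   1. style: cryptic names, no docstring, magic number 10
#   2. structure: copy-pasted branches instead of a loop
#   3. correctness: the mutable default argument is shared
#      across calls, so results accumulate between invocations

def proc(x, out=[]):
    if x[0] > 10:
        out.append(x[0] * 2)
    if x[1] > 10:
        out.append(x[1] * 2)
    if x[2] > 10:
        out.append(x[2] * 2)
    return out
```

A candidate who only flags the naming is giving the “indentation is icky” answer; one who notices that a second call to `proc` returns results from the first call has found the real problem.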


I'm in ops and we've found that simple exercises are better at weeding people out than complex ones.


This reminds me of this discussion on cocktails and bartender skills from a while ago https://news.ycombinator.com/item?id=36492450

"The martini may be simple, but it is not easy to make an excellent one. It's a very solid test of a bartender's skill because, unlike many drinks, ingredients alone cannot carry the cocktail. A piña colada for example, is mostly about ingredients (are you using a good coconut cream? fresh pineapple?) For the martini the chilling and dilution need to be just right. This tests the bartender's most important skill: mixing. Proper mixing of the beverage is ultimately what makes a martini."

[..]

"martinis are shockingly easy to fuck up. and this conversation is exactly the reason why the martini is a good test of a bartender's capability. being a bartender is more than putting fixed quantities of ingredients in a glass. how do you know when your martini is properly diluted, either by shaking or stirring? a good bartender will know. a bad bartender will not. a terrible bartender won't even realize dilution is crucial."

I don't really drink much and never had a martini in my life, but I thought it was pretty interesting.


And chefs are supposedly asked to cook eggs.


True enough... Even in software, I was pretty amazed at how effective a filter this was: here's a CSV, use one of N languages to load the data, validate each entry, and output the valid entries to one file and the invalid ones to an error file. You can use any libraries you like; please create a GitHub repo with your solution and share it with $ACCOUNT.

I know not everyone works with CSV necessarily, but there are dozens of CSV libraries across many languages, even if N is limited to the languages supported in a given company/org. It should be less than an hour of work. Bonus points for any tests, automation, etc.
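A minimal sketch of that exercise in Python, using only the standard library (the column name and validity rule are made up; writing each list out to a file is the remaining step):

```python
import csv
import io

def split_rows(reader, is_valid):
    """Partition CSV rows into (valid, invalid) lists."""
    valid, invalid = [], []
    for row in reader:
        (valid if is_valid(row) else invalid).append(row)
    return valid, invalid

# Hypothetical rule: the 'age' column must be a non-negative integer.
data = "name,age\nann,34\nbob,-2\ncid,oops\n"
reader = csv.DictReader(io.StringIO(data))
valid, invalid = split_rows(reader, lambda r: r["age"].isdigit())
```

Even at this size there's signal: does the candidate reach for the `csv` module or hand-roll `split(",")`, and do they think about what “invalid” should mean?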


"Here is a bug report, and the patch to fix it, review the patch"

And the patch:

- does fix something but not the described bug.

- could do the same thing in a third of the added line count.

- has typos or other errors.


I use the "launch ramp" technique: ask a series of prompts instead of a single prompt with a long answer. I explain up front that the prompts will get progressively more complex, i.e. the ramp starts gentle and gets very steep later. I can stop the interview quickly if the candidate can't find simple mistakes and suggest remedies, and I can jump ahead to complex issues to engage highly qualified candidates.


I agree.

Last time I got hit with an interview question like that, my answer ultimately had to be "block the merge and counsel the person who wrote this about performance." I'm still not sure if that's the answer they were looking for (this was for a staff engineer position), but I'd stand by it 100% of the time.


> I've found that asking them to review some obviously bad code with glaring errors and problems is more informative than asking them to solve some random DSA problem.

I once had a coding interview like this, but the problem was that the code was so obviously bad, I couldn't even make sense of what the code was supposed to do if it were good. It felt like the interviewer had just come up with an example of bad code without any context for how the code would make sense if made "good". It was just totally artificial.

If someone had presented the bad code in some Stack Overflow question, I would have started by stepping back to ask, "What are you trying to do?" Except in this case, the interviewer wasn't actually trying to do anything except quiz me.

Identifying a bug in production code would be better, I think.


This is a good idea. It would also show someone's ability to read and contribute to existing code, which is a large part of our day-to-day tasks. There are some people who can only solve problems their own way, which often means trying to rewrite everything.


This is pretty interesting, I hadn't heard of this approach before. I'll have to give it a try some time.



