The benchmarks are interesting, but that's not what caught my attention.
I'm not convinced finding increasingly esoteric bugs in an interviewer-selected language is an effective gauge of coding ability. I expect this is actually worse than whiteboard coding.
We don't know what the actual coding question was; the claim that casting int to int32 is expensive seems to have been an unprompted remark from the interviewee. I'm guessing they didn't get the job.
> The filter for this is a simple review exercise. We present a small chunk of code and ask them to review it over 15 minutes pointing out any issues they see. The idea is to respect their and our time. It works pretty well and we can determine how much experience someone has by their ability to pick up the obvious vs subtle bugs.
So maybe there's more? But it's definitely a mandatory, initial part of the interview process at the very least.
The code may be pretty bad, and it's interesting to see whether a candidate spots that it's bad (if they don't, you're presumably going to get more poor code from them if hired) and what, in particular, they point out about it. It's an opportunity for them to show you what they know, rather than a box-checking exercise where you know the answers.
I actually did an exercise like this about a year ago, in Java. I remember I really cared about iterators at the time, so even though it probably wasn't the worst thing wrong with the code, I kept coming back to the C-style for loops, which I thought were reprehensible. I should ask my interviewer (whose department I now work in) what their impression was as a result of this obsession.
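To illustrate what I mean (a made-up Java snippet, not the actual exercise code), the difference I was fixating on was roughly this:

    import java.util.List;

    class LoopStyles {
        // The C-style indexed loop I kept flagging: it works, but it invites
        // off-by-one mistakes and only really makes sense for random-access lists.
        static int sumIndexed(List<Integer> xs) {
            int total = 0;
            for (int i = 0; i < xs.size(); i++) {
                total += xs.get(i);
            }
            return total;
        }

        // The enhanced for loop uses the list's iterator under the hood,
        // so there is no index to get wrong and it works for any Iterable.
        static int sumIterated(List<Integer> xs) {
            int total = 0;
            for (int x : xs) {
                total += x;
            }
            return total;
        }
    }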
While a programmer should produce clean, correct and optimised code, finding bugs and errors is more of a job for a pentester. Experienced programmers can easily detect these, but that is not their main job.
A programmer should be able to solve problems and apply the chosen language to solve them as efficiently as possible.
You think a penetration tester should find bugs in code? They're looking for security weaknesses only, and almost never by looking at the code. That may be the most expensive possible approach.
Every bug is a weakness at some level.
Currently, the most efficient way to make money in bug bounties is by reviewing open-source code. Manual testing takes a huge amount of time.
You can also automate code review for low-hanging fruit with tools like semgrep or GitHub's code scanning.
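For example, classic patterns like these (an illustrative, made-up Java snippet, not from any real project) are the kind of thing such scanners flag without a human reviewer in the loop:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    class LowHangingFruit {
        // SQL built by string concatenation: a textbook injection pattern
        // that rule-based scanners catch reliably.
        static ResultSet findUser(Connection conn, String name) throws Exception {
            Statement st = conn.createStatement();
            return st.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // Comparing strings with == instead of equals(): this compares
        // references, not contents, so the check can silently misbehave.
        static boolean isAdmin(String role) {
            return role == "admin";
        }
    }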
Of course programmers should test their code themselves and minimise the bugs, but their job is not to look for them.
>Of course programmers should test their code themselves and minimise the bugs, but their job is not to look for them.
I have to respectfully WAT. Code review should be a part of everybody's workflow. And all programmers involved should be looking out for bugs. The best bugs are those which were never merged in the first place.
Ops tech 1 - "hey... my database just dropped an entire table, i lost a week of work"
Ops tech 2 - "that's a serious bug, you should escalate"
Ops tech 1 - "hey, you wrote this thing, it dropped my db table, i lost a week of work..."
Great and Mighty Programmer - "not my job, i am a programmer, you see, and looking for bugs is beneath me, some peasant task. now begone, i must solve more problems!"
Ops tech 1 - "so what do we do? this doesn't work, like, at all. completely broken, not even the most rudimentary testing was done by whoever created it"
Ops tech 2 - "stop using the database, we will build an excel spreadsheet on a shared network drive"
Why a pentester and not a QA team more broadly? QA won’t necessarily review the code (haven’t met a team that did), but they will typically hammer a system with test cases and scenarios that expose unusual behavior and uncover bugs.
I’ve had pentesters review code looking for things like insecure hashing or encryption, or low-hanging fruit like creds in the code, but I wouldn’t be inclined to leave what is essentially a QA process to a pentester.