> or having to use that same crap spyware that schools use to lock down your computer by essentially rooting it.
The most extreme one I'm aware of is "Lockdown Browser" and the student who lives with me has effectively rendered it useless. Students use MST with DisplayPort to mirror a monitor such that the OS cannot see that two monitors exist. (I don't think there's a Windows API for LDB to see this? Could be wrong.)
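For what it's worth, the OS-side check such software can realistically make is just counting attached displays, and that is exactly what a hardware MST mirror sidesteps. A minimal sketch of that check on Windows (my own guess at the approach, not anything LDB is documented to do):

    # Hypothetical sketch of the naive display-count check a proctoring tool might
    # run on Windows. A downstream MST hub mirroring the signal in hardware never
    # shows up here: the OS still reports a single attached display.
    import ctypes

    SM_CMONITORS = 80  # GetSystemMetrics index for "number of display monitors"

    def visible_monitor_count():
        return ctypes.windll.user32.GetSystemMetrics(SM_CMONITORS)

    if visible_monitor_count() > 1:
        print("flag for review: multiple displays reported by the OS")
    else:
        print("single display as far as the OS can tell")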
Anyways, one student faces the other, each looking at a mirrored display, one student out of view of the webcam. Then a microphone with a hardwired switch soldered in is switched off; since the switch is purely physical, the OS is never alerted that a microphone has been disconnected.
Then student 2 can freely speak at the student taking the test and announce all the answers. The test taker is able to keep their eyes on the monitor at all times (so eye tracking won’t show anything weird).
If I were black-hat enough to lead a product like Lockdown Browser, I would beat this technique by looking for electrical hum frequency signatures in the audio feed, which can pinpoint when the audio was recorded thanks to fluctuations in the grid electricity. https://en.m.wikipedia.org/wiki/Electrical_network_frequency...
If there is zero audio waveform, that would be a flag for review. If the cheater attempts to loop pre-recorded audio, it will be noticeable that the ~60 Hz frequency signature is "wrong" for the time the test was being taken.
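A rough sketch of what that check could look like, purely my own illustration (I have no idea what LDB actually does): run a short-time Fourier transform over the proctoring audio, track the dominant frequency in a narrow band around the mains frequency, and compare that drift curve against a logged grid-frequency reference for the test window.

    # Illustrative only; not anything LockDown Browser is known to do. Assumes a
    # 60 Hz grid and a reference log of grid frequency for the test window
    # (grid_reference_hz) to compare the recording's hum drift against.
    import numpy as np
    from scipy.signal import stft

    def enf_track(audio, sample_rate, nominal_hz=60.0, band_hz=0.5, window_s=4.0):
        """Dominant hum frequency (Hz) in each analysis window."""
        nperseg = int(sample_rate * window_s)
        freqs, _, spec = stft(audio, fs=sample_rate, nperseg=nperseg)
        band = (freqs > nominal_hz - band_hz) & (freqs < nominal_hz + band_hz)
        mag = np.abs(spec[band, :])             # magnitude in the mains band only
        return freqs[band][mag.argmax(axis=0)]  # per-window peak frequency

    def looks_suspicious(audio, sample_rate, grid_reference_hz, tolerance_hz=0.02):
        """Flag silence, or hum drift that doesn't match the grid's logged drift."""
        if np.max(np.abs(audio)) < 1e-4:        # essentially zero waveform
            return True
        measured = enf_track(audio, sample_rate)
        n = min(len(measured), len(grid_reference_hz))
        return np.mean(np.abs(measured[:n] - grid_reference_hz[:n])) > tolerance_hz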
This would then in turn be defeated by the cheating student running a remote microphone into a quiet room and switching over to it.
I’ve put a lot of thought into this and no matter what, the cheater eventually wins given infinite cat-and-mouse iterations. But most would be caught somewhere along the way.
The solution is to trust the people you hire, treat them with respect and decency. And when they start and don't know what a for-loop is because they obviously gamed the process, you fire them. It's really that simple.
If people complain it's impossible to fire anyone, that's the issue. The solution isn't to implement an insane anti-cheat hiring pipeline that will drive away competent people.
The situation OP is describing would drastically cut costs by not filtering out good employees who are bad test takers, are having an off day, or didn't have an "aha" moment early enough in the evaluation.
I was trying to find a source for this, and I see that it is really variable. Most sources I see are saying more like <$10k, though some go as high as "3-4x the employee's yearly salary", which is what I'd like to know more about.
I don't see anyone explaining how that latter number gets so high. I assume most of the cost is the time of multiple interviewers across multiple candidates, but that still seems outrageous to me. Even 1x the employee's salary would be 4 similarly-paid colleagues doing nothing except interviewing candidates 40 hours a week for three months, which I've never seen happen. Even the most grueling interview process I've been a part of has been more like 2 hours a week for a couple months.
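Running that arithmetic explicitly (purely illustrative numbers) shows why the high-end figure is hard to square with any process I've seen:

    # "1x the employee's salary" in interviewer time is roughly one person-year
    # of skilled labor. Numbers here are illustrative, not from any source.
    hours_per_person_year = 40 * 52                  # ~2080 hours
    interviewers, hours_per_week, weeks = 4, 40, 13  # 4 people, full time, ~3 months
    interview_hours = interviewers * hours_per_week * weeks
    print(interview_hours, "interview hours vs", hours_per_person_year,
          "hours in a person-year")   # 2080 vs 2080: the same claim, restated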
> A common estimate is that the hiring process costs a year of salary.
If the company wants to make hiring incredibly difficult (too many companies do), I can see it costing that much.
So stop doing that.
Having been in many startups doing a lot of hiring: if each hire cost a year of salary we'd have burned through our funding many times over, and yet we didn't. So it doesn't have to cost that much. Keep it simple, hire fast.
Is there any citation for this stat? I'm having a very hard time wrapping my head around it because it doesn't seem even remotely reasonable on the surface.
It's going to be the cost of filling one headcount, not the cost of hiring a specific person. In other words it will cover (a rough tally follows the list):
• Recruiter/sourcer fees. These are typically a fraction of the salary, so that right there is a large chunk of it.
• The cost of all the interviews needed to locate enough candidates that one accepts. If you interview 60 people in order to make one hire, and conservatively assume an interview takes one hour for doing it and one hour for prep+writeup+hiring manager/committee analysis, then that's 120 hours of skilled labor.
• Hiring bonuses.
• Relocation fees.
• Travel costs for all the people you interviewed on-site.
• On-boarding cost (HR, legal, IT setup, possibly desk provisioning and equipment purchase).
• For some types of employees, time spent in negotiation, meets and greets etc.
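Tallied up with placeholder figures (every number below is an assumption, not data), those items land well into the tens of thousands per filled seat, though still short of a full year's salary:

    # Rough per-hire tally of the items above. All figures are made-up placeholders;
    # swap in your own. Salary is used only to scale the recruiter fee and labor rate.
    salary = 150_000
    loaded_hourly_rate = 1.5 * salary / 2080   # salary plus overhead, per hour

    costs = {
        "recruiter fee (20% of salary)": 0.20 * salary,
        "interview labor (120 hrs of skilled time)": 120 * loaded_hourly_rate,
        "signing bonus": 10_000,
        "relocation": 5_000,
        "on-site travel for candidates": 3_000,
        "onboarding (HR/legal/IT/equipment)": 5_000,
    }

    total = sum(costs.values())
    for item, cost in costs.items():
        print(f"{item:45s} ${cost:10,.0f}")
    print(f"{'total':45s} ${total:10,.0f}  ({total / salary:.2f}x salary)")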
I can see an inefficient company with a bad manager that doesn't really know what they want spending a year's salary in a bad process where they're continuously changing requirements, meeting in committees, inventing new rounds of interviews, etc.
What throws me is the claim that this is possibly around the average.
There are efficient companies out that there that can pick through the resumes in a few hours, arrange a few interviews, and have somebody hired in a week or two.
So if there are efficient companies out there pulling the average down and it still comes out to a year, does that mean there are outliers spending 2 years, 3 years, or more worth of salary to fill a position?
If you spend an entire man-year of senior productivity to vet a single employee, perhaps that's the problem then?
I have often seen this problem explained (for example by Paul Graham) that a bad employee is a negative value, they make bad commits and stupid decisions that net out as a massive negative for the company. But it seems very counterproductive to try to solve this problem at the selection moment using stupid proxies such as leetcode memorization, instead of during the trial period, when the employee is, you know, interacting with your company.
(Speaking as a non-cheater) cheating is an honest signal to some extent. Someone who builds out a second setup, disables their mic, has an accomplice, etc., is going above and beyond to get to the goal.
It's the lazy cheating that's really disappointing.
Thanks for the advice - this is a cool technique, and yeah, I'm ex-defence! It's funny: in my experience this remote-video stuff is hardly ever reviewed by the clients anyway; at this stage in its maturity they use it basically as a bluff. Don't discount the use of manual reviews though, our software would pick up anomalies if you're moving your mouth constantly.
We have organised crime targeting our events; they have done things like infiltrate IT departments and Trojan-horse PCs. For high-stakes exams, we advise clients that in-person invigilation in combination with LDBs on a freshly provisioned SOE is the safest way.
If you want to make this your job, work remotely, and like C#, you can find our careers page with this hint: 3F27483F97C94ECF0C8F11148FBBD048DFFCDECBE5C62FA23076297AE804F6C6
Send us an unsolicited resume and say you're from HN LDB
> Students use MST with DisplayPort to mirror a monitor such that the OS cannot see that two monitors exist. (I don't think there's a Windows API for LDB to see this? Could be wrong.)
I have a cheap $50 capture card that is untraceable. My $75 HDMI mux has HDCP stripping and EDID pass-through. I use these for streaming video games, and this is a pretty standard set of kit.
The only way to discover this is hoping that custom control commands are passed through to the monitor. OEMs use these to give you access to the OSD options from the operating system, such as a crosshair at the center of the screen for gamers that turns on when their game is running and off when it exits.
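Those control commands travel over DDC/CI, so one hedged way to probe whether anything is sitting between the GPU and the panel is to ask the physical monitor for a standard VCP feature and see if it answers. A sketch on Windows via the dxva2.dll physical-monitor APIs; this is my own guess at an approach, and a non-reply is only a weak signal, since plenty of monitors simply don't speak DDC/CI:

    # Hypothetical probe: ask the physical panel for a VCP feature over DDC/CI.
    # If the request fails, something between the GPU and the panel (a capture
    # card or mux that doesn't forward DDC/CI) may be in the way, or the monitor
    # just doesn't support DDC/CI at all, so treat a failure as a weak signal.
    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32
    dxva2 = ctypes.windll.dxva2
    user32.MonitorFromPoint.restype = wintypes.HANDLE

    class PHYSICAL_MONITOR(ctypes.Structure):
        _fields_ = [("hPhysicalMonitor", wintypes.HANDLE),
                    ("szPhysicalMonitorDescription", wintypes.WCHAR * 128)]

    MONITOR_DEFAULTTOPRIMARY = 1
    VCP_INPUT_SOURCE = 0x60   # standard MCCS code; any supported code would do

    def ddcci_responds():
        hmon = user32.MonitorFromPoint(wintypes.POINT(0, 0), MONITOR_DEFAULTTOPRIMARY)
        count = wintypes.DWORD()
        if not dxva2.GetNumberOfPhysicalMonitorsFromHMONITOR(hmon, ctypes.byref(count)):
            return False
        monitors = (PHYSICAL_MONITOR * count.value)()
        if not dxva2.GetPhysicalMonitorsFromHMONITOR(hmon, count, monitors):
            return False
        ok = False
        for m in monitors:
            current, maximum = wintypes.DWORD(), wintypes.DWORD()
            if dxva2.GetVCPFeatureAndVCPFeatureReply(
                    m.hPhysicalMonitor, VCP_INPUT_SOURCE, None,
                    ctypes.byref(current), ctypes.byref(maximum)):
                ok = True
        dxva2.DestroyPhysicalMonitors(count, monitors)
        return ok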
> This would then in turn be defeated by the cheating student running a remote microphone into a quiet room and switching over to it.
Honestly, most of the noise comes from the pickup. Using a simple switch to ground will still allow that hum through. You can also just loop a power cord around the mic line a few times and "actively" induce that noise.
My headset picks up a terrible warble, modulated at a frequency of 1/120th Hz, from the bad isolation in my KVM's power supply.
No, a far lower-tech option exists: more and more console-gaming-targeted headsets, like those from SteelSeries, include an aux in, which can easily be used for the snitch. Egg cartons for sound dampening as an impromptu vocal booth and you're well on your way to defeating any technological measure.