Show HN: Live coaching app for remote SWE interviews, uses Whisper and GPT-4 (github.com/leetcode-mafia)
301 points by leetcode-mafia on April 5, 2023 | 114 comments
Posting from a throwaway account to maintain privacy.

This project is a salvo against leetcode-style interviews that require candidates to study useless topics and confidently write code in front of a live audience, in order to get a job where none of that stuff matters.

Cheetah is an AI-powered macOS app designed to assist users during remote software engineering interviews by providing real-time, discreet coaching and integration with CoderPad. It uses Whisper for audio transcription and GPT-4 to generate hints/answers. The UI is intentionally minimal to allow for discreet use during a video call.

It was fun dipping into the world of LLMs, prompt chaining, etc. I didn't find a Swift wrapper for whisper.cpp, so in the repo there's also a barebones Swift framework that wraps whisper.cpp and is designed for real-time transcription on M1/M2.
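
If you're curious, the core usage boils down to something like this (a simplified sketch assuming whisper.cpp's C API is exposed through a bridging header; the actual framework in the repo has more plumbing for streaming):

    import Foundation

    // Transcribe a buffer of 16 kHz mono Float32 samples with whisper.cpp.
    // Function names follow whisper.cpp's C header (whisper.h).
    func transcribe(samples: [Float], modelPath: String) -> String {
        guard let ctx = whisper_init_from_file(modelPath) else { return "" }
        defer { whisper_free(ctx) }

        var params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY)
        params.print_progress = false
        params.print_realtime = false

        var text = ""
        samples.withUnsafeBufferPointer { buf in
            guard whisper_full(ctx, params, buf.baseAddress, Int32(buf.count)) == 0 else { return }
            for i in 0..<whisper_full_n_segments(ctx) {
                text += String(cString: whisper_full_get_segment_text(ctx, i))
            }
        }
        return text
    }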

I'll be around if anyone has questions or comments!




When I saw this, I had a good belly laugh! As someone who does a lot of interviewing and hates all the leetcode-style interviews, I love this.

I hope it ushers in a new era of non-leetcode style interviews.

But I suspect what it will actually do is usher in an era of in-person only interviews or having to use that same crap spyware that schools use to lock down your computer by essentially rooting it.


> or having to use that same crap spyware that schools use to lock down your computer by essentially rooting it.

The most extreme one I’m aware of is “Lockdown Browser” and the student who lives with me has effectively rendered it useless. Students use MST with DisplayPort to mirror a monitor in a way such that the OS cannot see that two monitors exist. (I don’t think there’s a Windows API for LDB to see this? Could be wrong)

Anyways, one student faces the other, each looking at a mirrored display, one student out of view of the webcam. Then a microphone with a hardwired mute switch soldered in (which, again, doesn’t alert the OS that a microphone has been disconnected) is switched off.

Then student 2 can freely speak at the student taking the test and announce all the answers. The test taker is able to keep their eyes on the monitor at all times (so eye tracking won’t show anything weird).

If I were black-hat enough to lead a product like lockdownbrowser, I would beat this technique by looking for electrical hum frequency signatures in the audio feed which can pinpoint what time the audio was recorded due to fluctuations in the grid electricity. https://en.m.wikipedia.org/wiki/Electrical_network_frequency...

If there is zero audio waveform, that would be a flag for review. If the cheater attempts to loop pre-recorded audio, it will be noticeable that the ~60hz frequency signature is “wrong” for the time that the test was being taken.
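
The flagging half is simple to sketch. A toy single-frequency probe (nothing like a production ENF pipeline, which would track the extracted frequency over time and match it against grid records) might look like:

    import Foundation

    // Single-bin DFT at 60 Hz: correlate the frame against sin/cos at the
    // mains frequency. A dead-silent or hum-free frame scores near zero.
    func mainsHumLevel(frame: [Float], sampleRate: Float = 8_000) -> Float {
        let w = 2 * Float.pi * 60 / sampleRate   // 60 Hz in radians per sample
        var re: Float = 0, im: Float = 0
        for (i, x) in frame.enumerated() {
            re += x * cos(w * Float(i))
            im += x * sin(w * Float(i))
        }
        return sqrt(re * re + im * im) / Float(frame.count)
    }

    // Hypothetical review heuristic; the threshold is made up:
    // if mainsHumLevel(frame: chunk) < 1e-6 { flagForReview() }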

This would then in turn be defeated by the cheating student running a remote microphone to a quiet room which can be switched to.

I’ve put a lot of thought into this and no matter what, the cheater eventually wins given infinite cat-and-mouse iterations. But most would be caught somewhere along the way.


The solution is to trust the people you hire, treat them with respect and decency. And when they start and don't know what a for-loop is because they obviously gamed the process, you fire them. It's really that simple.

If people complain it's impossible to fire anyone, that's the issue. The solution isn't to implement an insane anti-cheat hiring pipeline that will drive away competent people.


A common estimate is that the hiring process costs a year of salary. You kinda want to nail it and not have too many do-overs, even if firing is easy.


The situation OP is describing would drastically cut costs by not filtering out good employees who are either bad test takers, having an off day, or didn’t have an “aha” moment early enough in the evaluation.


It would also filter in people who have no business applying for the job in the first place.


I was trying to find a source for this, and I see that it is really variable. Most sources I see are saying more like <$10k, though some go as high as "3-4x the employee's yearly salary", which is what I'd like to know more about.

I don't see anyone explaining how that latter number gets so high. I assume most of the cost is the time of multiple interviewers across multiple candidates, but that still seems outrageous to me. Even 1x the employee's salary would be 4 similarly-paid colleagues doing nothing except interviewing candidates 40 hours a week for three months, which I've never seen happen. Even the most grueling interview process I've been a part of has been more like 2 hours a week for a couple months.


I’ll admit that my source is mostly LinkedIn.

I imagine it includes onboarding.


> A common estimate is that the hiring process costs a year of salary.

If the company wants to make hiring incredibly difficult (too many companies do), I can see it costing that much.

So stop doing that.

Having been in many startups doing a lot of hiring, if each hire cost a year of salary we'd have burned through our funding many times over, and yet we didn't. So it doesn't have to cost that much. Keep it simple, hire fast.


Is there any citation for this stat? I'm having a very hard time wrapping my head around this stat because it doesn't seem even remotely reasonable on the surface.


It's going to be the cost of filling one headcount, not the cost of hiring a specific person. In other words it will cover:

• Recruiter/sourcer fees. These are typically a fraction of the salary, so that right there is a large chunk of it.

• The cost of all the interviews needed to locate enough candidates that one accepts. If you interview 60 people in order to make one hire, and conservatively assume an interview takes one hour for doing it and one hour for prep+writeup+hiring manager/committee analysis, then that's 120 hours of skilled labor.

• Hiring bonuses.

• Relocation fees.

• Travel costs for all the people you interviewed on-site.

• On-boarding cost (HR, legal, IT setup, possibly desk provisioning and equipment purchase).

• For some types of employees, time spent in negotiation, meets and greets etc.

• Cost of the ATS.

etc. There's a lot that goes into hiring someone!


I can see an inefficient company with a bad manager that doesn't really know what they want spending a year's salary in a bad process where they're continuously changing requirements, meeting in committees, inventing new rounds of interviews, etc.

What throws me is the claim that this is possibly around the average.

There are efficient companies out there that can pick through the resumes in a few hours, arrange a few interviews, and have somebody hired in a week or two.

So if there are efficient companies out there dragging the average down, and the average is still a year, does that mean there are outliers out there spending 2 years, 3 years, or more worth of salary to fill a position?


> A common estimate is that the hiring process costs a year of salary.

To do what? Why would the cost of hiring scale with the salary of the position?


The cost of time from the interviewers is likely to be more expensive since you have senior people interview candidates for senior roles.


If you spend an entire man year in senior productivity to vet a single employee, perhaps that's the problem then?

I have often seen this problem explained (for example by Paul Graham) that a bad employee is a negative value, they make bad commits and stupid decisions that net out as a massive negative for the company. But it seems very counterproductive to try to solve this problem at the selection moment using stupid proxies such as leetcode memorization, instead of during the trial period, when the employee is, you know, interacting with your company.


Recruiters work on commission and usually charge a fraction of the salary, because a higher paid role is harder to recruit for.


(As a non-cheater), cheating is an honest signal to some extent. Someone who sets up a second setup, disables their mic, has an accomplice, etc, is going above and beyond to get to the goal.

It's the lazy cheating that's really disappointing.


Thanks for the advice; this is a cool technique, and yeah, I'm ex-defence! It's funny, as in my experience this remote-video stuff is hardly ever reviewed by the clients anyway; at this stage in its maturity they use it basically as a bluff. Don't discount the use of manual reviews, though: our software would pick up anomalies if you're moving your mouth constantly.

We have organised crime targeting our events; they have done things like infiltrate IT departments and Trojan-horse PCs. For high-stakes exams, we advise clients that in-person invigilation in combination with LDBs, on a freshly provisioned SOE, is the safest way.

If you want to make this your job, work remotely, and like C#, you can find our careers page with this hint: 3F27483F97C94ECF0C8F11148FBBD048DFFCDECBE5C62FA23076297AE804F6C6 Send us an unsolicited resume and say you're from HN LDB


Shrouq is that you?


No I don't think it's him (without revealing the solution either).

The person you found probably thought that SHA256 is enough to obfuscate customer names and didn't know that rainbow tables exist.


> Students use MST with DisplayPort to mirror a monitor in a way such that the OS cannot see that two monitors exist. (I don’t think there’s a Windows API for LDB to see this? Could be wrong)

I have a cheap $50 capture card that is untraceable. My $75 HDMI mux has HDCP stripping and EDID passthrough. I use these for streaming video games, and this is a pretty standard set of kit.

The only way to discover this is hoping that custom control commands are passed through to the monitor. OEMs use these to give you access to the OSD options from the operating system, such as a crosshair at the center of the screen for gamers, shown while their game is running and hidden when it exits.

> This would then in turn be defeated by the cheating student running a remote microphone to a quiet room which can be switched to.

Honestly, most of the noise comes from the pickup. Using a simple switch to ground will still allow that hum through. You can also just loop a power cord around the mic line a few times and "actively" induce that noise.

My headset makes a terrible warbling, modulated at a frequency of 1/120th Hz, from the bad isolation in my KVM's power supply.

No, a far lower-tech option exists: more and more console-gaming-targeted headsets, like those from SteelSeries, include an aux in, which can easily be used for the snitch. Egg cartons for sound dampening, an impromptu vocal booth, and you're well on your way to defeating any technological measure.


They need to come up with something better than leetcode. Build a project and explain it is a much better way to get a sense of skill. It's the sort of skill that won't be made extinct by ChatGPT. Your creativity and ability to architect and build a project and then explain it should be the standard. Leetcode also penalizes good engineers who have anxiety during interviews.


Worked one place where we received seven (7!) resumes that were almost identical, each with the same project as evidence of what they could do.

Not saying leetcode is the answer, but it does solve for some things.

My interviews tend to walk through the framework of a project, and they can speak to me in the abstract, or with an example of what that part of the project does. Say there is a standard way of connecting to the database, like in Rails. They can tell me about database.yml and how it has different entries for each of the databases. Then we have a conversation about checking in passwords to git, env variables, secrets managers, etc. This avoids asking stock questions which might be coached/studied more, and aims for what the person doing the work might practically know.

It also keeps the discussion in a context that makes understanding the questions (hopefully) easier. My style of interviews is very much non-standardized, and there has to be some trust that I have some idea what I'm doing.

Leetcode at least has some standardization around the problem. Of course everyone could have looked them up and studied the exact solution, and these solutions don't correspond very well to daily efforts. But, given everyone knows the game, it does demonstrate some horsepower I guess. Or maybe I like these interviews because I'm good at them.


They should all fail if they are all identical. You look for someone who can build something extraordinary, who takes the parameters and runs with them.


Extraordinary is too subjective in my experience

When I’m competing for one position, “we just want to see how you think” is never actually true; instead there’s a completely arbitrary set of criteria not presented to the candidate. That should be sanctionable.


> Build a project and explain it is a much better way to get a sense of skill.

I HATE project interviews, because the work isn't transferable to the next interview. Take-home projects also tend to "go over" the stated time budget, because you want to put your best foot forward and you're betting that the other people they interview are also going to exceed the time.


I don't know if this is what they meant but I read it as you describe an existing project that you've worked on, not a mini one just for the interview. That's how I've interviewed before.


I’ve encountered both, significantly more of the “build a mini project for us” variety.

I also dislike those because I’ve already got a full time engineering job and a family and don’t have time to put 4-5 hours each into every 3-hour project the interviewers ask for. But 5 years ago I did.

I’ve concluded there really isn’t a single way to interview that checks all the boxes, and if you’re running the interview process you have to pay the costs somewhere, unfortunately.


This is more of a problem than it seems at first. At one place I worked that did mini-project interviews, most of the hires were currently unemployed and could spend all week on it (and then claim they did it in 3 hours).


> I read it as you describe an existing project that you've worked on

IMHO, there is some risk of violating NDAs. For the most part companies don't care if you share how their tech stack works, but I get nervous about revealing IP.


I’ve been doing interviews like this for a few years now. Leetcode sucks.

Bonus points the next time a candidate (correctly) uses ChatGPT or Copilot. Let the machines do what the machines do well (grinding leetcode), good riddance.


Yeah exactly. Since these tools commoditize repetitive skills, the goal should be to see how creative and awesome the project is within the parameters given. Being creative is what you should be testing for since all these other skills are now commodities. It's the one thing we have that AI can't replicate (yet?).


Often one of the parameters given is a timeframe that no sane person would request in a real-world setting. The justification I've heard for that is "we don't want people spending too long on it."


Discussion: what does correctly programming with chatbots look like, specifically?

Probably easier for most to list what it doesn't look like.

I'm interested in both.


Asking a chatbot a question rather than Google. I never minded if candidates said they’d use Google for looking up something in the docs. But now you’ve got something a heckuva lot more efficient.

For example: What’s the method to select a random item from a range in Ruby? (ChatGPT used to get this wrong.) I don’t mean to say that I give trivia questions in interview, but if the need to know this came up during an interview or code pairing, having the candidate know when a chatbot response was invalid (and where to look for a correct answer) is a good sign.

I’m also open to a candidate stubbing out an idea with a chatbot/copilot and then checking the solution and adapting it to fit a given context.


My cross-platform C# apps are used by millions of students weekly to take assessments. We try to work as closely with the OS vendors as possible to implement lockdowns via native kiosk functionality (I've filed bugs with all the majors, MS/Google/Chrome/Android/Apple, as their update/move-fast attitudes are probably my biggest enemies right now). Sometimes we can't, though, and have to find other means, because our clients demand we support xOS, because vendors aggressively push hardware onto students. We definitely aren't spyware; schools, a lot of the time, are very much against anything that will phone home. We have pentests and issues like every other game/app in this cat-and-mouse race, but as soon as you close our app it's done; nothing lingers.

ChatGPT is a growing problem, but we have defenses against it (until it's on-device, maybe?). Things like Frida are probably worse right now.


I also do tons of interviews for a Big Tech company. I ran my interview questions through ChatGPT (GPT-3, the one available at the time), and the answers varied from pretty good to total garbage. I brought this to the interviewer community in the company, suggesting that it might be time to put leetcode-style interviews in the bin and start over.

They dismissed my concerns and will continue as usual. A part of me wants cheating to be rampant, to force companies' hands in the matter. Unfortunately I'm with you, and I suspect they will just enforce in-person, whiteboard interviews again (a colleague explicitly asked for it), rather than trying to come up with a better system.


This “burn it all to the ground” mentality is naive.

There is nothing leetcode-specific about this cheating platform. OP just angled it that way because HN readers would lap it up that way.

Assuming someone was using this cheating platform, how would you run your interviews? Wouldn’t this screw up the actual legit interviews (whatever that is…) too?


I don't do live coding in my interviews because it's silly. We talk about the person's past projects, their contributions to them, insights they gained, mistakes they made and how they would avoid them again, etc. Sometimes they may share some code they wrote, and we walk through it and they explain it to me and why they made the choices they made. Sometimes I'll make suggestions and see how well they receive that feedback.

I'm looking for people who can demonstrate that they have faced challenges and overcame them, that they communicate their decisions effectively, and that they can learn new information quickly, can receive and incorporate feedback, and so on.

Can't really cheat your way out of that.


I’d argue that pieces of that interview could still be gamed using this cheating platform, depending on how accurate, fast, and realistic-sounding the GPT responses are. And all of those attributes will just get better as new models are released.

I realize that this one project isn’t the only way to cheat in interviews, but I still think it’s naive to think that this tech will only harm what you perceive to be “bad interviews” and not affect your own preferred interviews. At the absolute minimum, it adds additional overhead to performing interviews where you have to also be aware and try to figure out if the interviewee is being coached like this.


If the candidate uses a tool to "cheat", and then keeps using that tool while they work for me, is anything lost?

Right now people "cheat" by using an IDE, but no one has trouble with that (and rightly so!).

So why should I care if someone is using LLMs to pass the interview and do their job if they are being successful?


> But I suspect what it will actually do is usher in an era of in-person only interviews

A man can dream! I come across really badly on video. I hate the compressed audio, lack of body language, latency, etc.


Couldn't you just run a second PC right behind your monitor that was listening and providing tips?


Yeah but then you'd have to retype the output instead of just copy/pasting it. :)


I'll just have gpt email it to me


Unfortunately, I think you're right, at least initially. Not much happens unless the cost of losing access to remote talent becomes too significant to ignore.


The example video isn't of leetcode, though; it's general questions about database tech, and it gives you all the right points to mention.


The answer to this is simple. Make the "on-site" interview on-site again.

* You can rule out many types of cheating, which is becoming a more expensive problem.

* You can get many more details for the 'culture fit' in person.

* If you purchase airplane tickets for the candidate then you have built in identity verification via TSA.

* It demonstrates a higher degree of buy-in before the final interview from both parties.


This is categorically the right solution and the startups I work with that still have some form of office have all shifted to this structure. Take-home technical tests and virtual live tests are dead in the water.


When I went through a bunch of interviews in 2021 all my interviews were remote. After the first few hour-long interviews it became apparent that there were many repeat questions. I started writing the questions down along with my answers. I ended up with a notepad with ~20 of the most frequent questions and answers that I kept open on my screen next to the video conference. It was super helpful and now I have one of those $300k tech jobs!

Edit: these were the questions, the answers are left as an exercise to the reader (or your preferred AI):

What are you looking for in a role?

How do you deal with a conflict with a coworker?

What leadership experience do you have?

Do you have experience working with multiple teams?

What APIs have you designed?

What is priority inversion?

What are the differences between a mutex and a semaphore?

What is preemption?

How do interrupts work?

What is interrupt latency?

What is the difference between an ISR and a function?

What is the difference between an interrupt and an exception?

What is hard vs soft real time?

What is the boot process of a CPU?

What do you do for board bring up?

What are different memory sections used for (code, data, bss, etc.)?

What is a TLB?

What is the difference between big and little endian?

What is the difference between 32 and 64 bit processors?

What happens if a null pointer is dereferenced?


Courtesy of GPT-4:

Priority inversion: A situation where a higher-priority task is indirectly blocked by a lower-priority task holding a shared resource.

Mutex vs semaphore: Mutex ensures mutual exclusion for a shared resource, while semaphore controls access to a resource by multiple tasks with a counter.

Preemption: The act of interrupting and temporarily suspending a task, allowing another task to execute.

Interrupts: Signals to a CPU to temporarily stop its current task to handle an event or perform a specific function.

Interrupt latency: The time between the arrival of an interrupt and the start of the interrupt service routine (ISR).

ISR vs function: An ISR handles an interrupt, cannot be called directly, and must complete quickly; a function is a reusable block of code that can be called as needed.

Interrupt vs exception: Interrupts are external events requiring CPU attention, while exceptions are internal events caused by the execution of an instruction.

Hard vs soft real time: Hard real-time systems have strict deadlines that must be met, while soft real-time systems have more flexible deadlines.

CPU boot process: Initialization sequence a CPU follows upon startup, including loading firmware, running tests, and loading an operating system.

Board bring up: Process of validating and configuring new hardware to ensure correct functionality.

Memory sections: Code (executable instructions), Data (initialized variables), BSS (uninitialized variables).

TLB: Translation Lookaside Buffer, a cache for memory address translations in virtual memory systems.

Big vs little endian: Big endian stores the most significant byte first, while little endian stores the least significant byte first.

32 vs 64 bit processors: 32-bit processors have 32-bit wide registers and address spaces, while 64-bit processors have 64-bit wide registers and address spaces, allowing for larger memory and better performance.

Null pointer dereference: Undefined behavior occurs, often leading to a crash or unpredictable results.
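
To make the mutex-vs-semaphore line concrete, here is the distinction in a few lines of Swift (my own illustration, not GPT-4's):

    import Foundation

    // Mutex: mutual exclusion, one owner at a time.
    let lock = NSLock()
    var counter = 0
    func increment() {
        lock.lock()      // the thread that locks is the one that unlocks
        counter += 1
        lock.unlock()
    }

    // Counting semaphore: up to N tasks hold a slot concurrently.
    let slots = DispatchSemaphore(value: 3)   // e.g. a pool of 3 connections
    func useConnection() {
        slots.wait()              // take a slot; blocks when all 3 are taken
        defer { slots.signal() }  // release the slot; wakes a waiter if any
        // ... use one of the pooled connections ...
    }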


"CPU boot process: Initialization sequence a CPU follows upon startup, including loading firmware, running tests, and loading an operating system."

That's a bit vague.


I assume OP asked it to summarize in one sentence. I get a full and detailed list of things that happen, from POST to kernel loading.


Well, without specifying which CPU, how else could it answer? There are tons of specific answers to this question if you want to pick up a random datasheet.


Does the job actually require the knowledge behind these questions directly, in a live discussion environment? Or are you able to get away with Googling/Stack Overflow/ChatGPT?


My job does require knowledge of all of these things. It’d be hard to succeed if you had to look them up.


Are you a systems engineer?


Yes


This is my interview strategy as well!


I had a candidate, last week, that was using ChatGPT during a technical coding interview.

So, he managed to find perfect answers to the most difficult parts of the exercise in 30 seconds, and then struggled for 30 minutes, with my help, on getting the print right.

Of course, I used a slightly modified version of well-known problems like FizzBuzz and other stuff, mostly as a quick start to get into the real engineering challenge. That explains why ChatGPT was absolutely unable to finish the exercise. During the interview, it acts as a good filter: if it lasts 3 minutes, fine, no signal; but if someone is struggling on it for half the interview time, it's a clear "no".
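
To give a sense of scale, the twist can be tiny. Something like this invented variant (not my actual exercise) already trips up a memorized answer:

    // FizzBuzz, but counting down and with the conditions swapped:
    // Fizz on multiples of 5, Buzz on multiples of 3.
    for n in stride(from: 30, through: 1, by: -1) {
        switch (n % 5 == 0, n % 3 == 0) {
        case (true, true):   print("FizzBuzz")
        case (true, false):  print("Fizz")
        case (false, true):  print("Buzz")
        case (false, false): print(n)
        }
    }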

I haven't checked whether the candidate was invited to later rounds, but considering my negative review I think he was rejected.

I'm afraid these tools are not going to help you when things really matter :)


I think there are a few edge cases where this could still be useful. Take myself as an example: I have been working as a backend engineer for more than 12 years and I still get very nervous in any interview. I struggle with a simple fib(n) function because my brain sabotages me. But I could happily talk about system design, microservices, or API resilience without problems.


I have a collection of one-liners by now which any developer could explain easily, but which ChatGPT-3.5, ChatGPT-4.0 and Bing's AI all get wrong in the same way. I think they all learned from a wrong SO explanation somewhere.

I'll part with one:

    Explain in detail what resolve is in this code:

    const promise = new Promise((resolve) => resolve(3));

    Assume I'm unfamiliar with callbacks or promises.

All three AIs will incorrectly state in their very first sentence that resolve is passed to the promise constructor as an argument.

There's also some technologies for which they can't write correct code (they'll hallucinate their API), but any software dev could when given their documentation.


Anthropic's Claude is better, I'd say...:

The resolve function here is a callback function that is passed to the Promise constructor. It is called the executor function.

A Promise represents the eventual completion (or failure) of an asynchronous operation. The resolve callback is used to mark the successful completion of the asynchronous operation inside the promise.

So in this specific code, the promise is resolved immediately with the value 3. Basically it creates a Promise that is already completed with a value of 3.

To understand this more generically:

The Promise constructor takes an executor function which contains the asynchronous operation. This executor function has two parameters:

resolve: A function to call when the asynchronous operation completes successfully. It passes the result to the next then handler.

reject: A function to call when the asynchronous operation fails. It passes the error to the next catch handler.

The then and catch methods on a promise are used to handle the resolved or rejected state of the promise.

For example: promise.then((result) => console.log(result)) // Logs 3

So in summary, the resolve function resolves the promise and passes a value to the next promise handler, marking the successful completion of the asynchronous operation.

Does that help explain the resolve callback? Let me know if you have any other questions!


Nope. Also already wrong with the first sentence, just like the three other AIs.

> The resolve function here is a callback function that is passed to the Promise constructor. It is called the executor function.

Correct answer: The resolve function here is a callback function that is an argument to the executor function, which is passed to the Promise constructor.

The "resolve" function is not passed to the Promise constructor (resolve is not the executor function), but every AI seems to think so. They typically correct themselves if you point it out.


I got the right answer by asking GPT-4 to pretend it is a highly skilled senior software engineer and to write its thought process down step by step.


I could not explain the code, and I have been developing for years... I asked ChatGPT and I feel I have a better understanding of it now.


That's a question for someone who claims to have experience with modern JS development/promises.

Don't trust ChatGPT's explanation in any case.


I work with modern JavaScript, though I would by no means call myself an expert on it. Rarely do I have to work with promises in such detail.


That has to feel so awesome to reject these wannabe posers. Please please please tell us more :)


This blatant cheating should result in an instant 1 year ban from interviewing at company.


I'm conflicted, in how I feel about this existing and surely being used by some...

I hate the Leetcode interview, with the fury of a thousand suns.

But I also hate cheating and lying, to a similar degree.


If this helps me find great engineers who are bad at interviewing, great!


Is there any such thing as a great engineer who is dishonest?


If that's the only thing they're dishonest about, sure. They've got a family to feed and they do what they have to do.


I love this for whiteboard interview companies. Ask cs trivia questions, get AI responses and unqualified candidates. Hope it leads to an increase in conversational interviews.


Okay, so now interviewers will have to run their own copy and compare its answers to yours in real time. Wonderful. Let the adversarial games commence!


Interviewer: with your eyes closed, tell me the difference between…


Easy, just use text-to-speech on the answer ChatGPT gives you and splice it into the audio stream you're hearing.


No, just use deepfake to make your eyes appear to be closed.


If you ask ChatGPT the same question in different context sessions, you get vastly different answers each time. So that would not only give you lots of false negatives, but could also give false positives for people who are legitimately doing it themselves and happen to come up with the same solution that GPT did.


Nah they will just make face-to-face interviews the norm again :)


Indeed possible. That's partly why the prompt chain is set up to build a 'cheat sheet' type of answer, instead of a fully-formed answer that can be repeated verbatim.
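
Very roughly, the shape of the chain (an illustrative sketch only; the real prompts in the repo are more involved):

    import Foundation

    // One round-trip to the Chat Completions API (force-casts for brevity).
    func chat(_ prompt: String, apiKey: String) async throws -> String {
        var req = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
        req.httpMethod = "POST"
        req.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        req.setValue("application/json", forHTTPHeaderField: "Content-Type")
        req.httpBody = try JSONSerialization.data(withJSONObject: [
            "model": "gpt-4",
            "messages": [["role": "user", "content": prompt]],
        ] as [String: Any])
        let (data, _) = try await URLSession.shared.data(for: req)
        let json = try JSONSerialization.jsonObject(with: data) as! [String: Any]
        let message = (json["choices"] as! [[String: Any]])[0]["message"] as! [String: Any]
        return message["content"] as! String
    }

    // Step 1 extracts the question; step 2 asks for glanceable bullets
    // rather than read-aloud prose.
    func cheatSheet(transcript: String, apiKey: String) async throws -> String {
        let question = try await chat(
            "Extract the interview question being asked in this transcript:\n\(transcript)",
            apiKey: apiKey)
        return try await chat(
            "Answer with 3-5 terse cheat-sheet bullets, not full sentences:\n\(question)",
            apiKey: apiKey)
    }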


Need to seed it with a style, "give answers in the style of THESE GitHub repos"


Local model, seeded with your style!


Train it by leetcoding, then it leetcodes for you lol

brb gonna get myself a 500k job


While this is an interesting idea and somewhat amusing (kudos on the beautiful creativity of it), it’s worth noting that one of the second-order effects of this approach, were it used heavily enough to actually impact interview styles, is that it would further increase incentive to hire “in-network” with people that your team already know, while making completely unknown candidates a higher risk.

Effectively, it changes incentives and risk in a way that encourage the creation of echo chambers.

That said, once again, as far as a proof of concept this is wonderful. I’d say that it’s more art than product, and it makes a wonderful statement in the process.


I can see a use-case for this that acts as a constant AI companion to pair-program with!


RubberDuckGPT


From one point of view, this isn't a problem at all. If a candidate can use tools to generate, understand, demonstrate, and explain a solution in real time, that is likely what you want them to do on the job as well.

Also, it is likely that in the near future that most programmers will be "pair programming" with Copilot, ChatGPT, or some other tooling to augment their capabilities.


> and remember not to use the loopback device as input for the video chat app.

I laughed out loud.


LOL. I was thinking along the same lines about two months ago, but didn’t have the time to put together an app. Bravo!

https://news.ycombinator.com/item?id=34598251


I don't think I would hire someone with the leet-code ability of GPT-4, though.

It does mean employers have to find questions outside the training set. I asked GPT-4 some of our hiring questions and it made some significant mistakes.


~18 months ago I wrote in a comment here that I expected AI, within 5 years from then, to beat most working programmers at fresh new leetcode-style questions. It's not there yet, but given the progress since, I wouldn't bet my career against it.

(Reaction at the time was highly skeptical.)


What do you want, credit for your wisdom?

That's not so special; lots of people predict the future. For example, I smelled cinnamon buns in the air last week, then yesterday bought some cinnamon buns. How's that for foresight?


I hope this does to whiteboarding what Firesheep did to unencrypted http.

Still, this is impressive tech. It's a little like Iron Man's Jarvis where you can just talk to the computer and have it write code for you.


Looks good, but I'm having trouble installing it as an app and couldn't find relevant steps in the README. Do I need to build it with Xcode or something?



How good is Whisper for real-time, in terms of latency? I would love to hack together a dictation system at some point.



This is badass. Even as a proof of concept it’s hilarious how many HR brains will explode over this


We have hired, back to back, people that we had to let go. When you take 4 hours to respond to simple questions, you either have two jobs or you are not disciplined enough to be working from home.


As someone laid off and having trouble finding a single job, that is rather discouraging to hear.


Why?


Why is it discouraging to hear that people are not taking sw jobs seriously when these jobs are now relatively hard to come by?


Because it means that there are more open jobs for you!

You got this man, being laid off sucks. But whoever you are, I’m cheering you on.


Oh I see, I wasn't reading it like that


Can't wait for us to go back to in-person interviews. Since they are more expensive, there won't be any appetite for taking chances on people who don't have the exact expected credentials.


Hahaha brilliant


Love it!


> A recent M1 or M2 Mac is required for optimal performance.

Got a chuckle out of this bit.


Can I ask: what did you find funny / worth remarking about?


It's like a half-meme that inexperienced programmers (which would be the target audience for this app) buy Macs because supposedly they are the "best" for programming.


I buy macs to program with because I'm so much more productive with their touchpad.


People definitely have their preferences, but from an objective standpoint, when talking about control of the computer, nothing really beats mouse+keyboard+big screen. This is borne out by the fact that you don't see trackpad or laptop use in competitive gaming for MOBAs, RTS, or FPS games, where input speed and accuracy matter.



