
What editor do you use, and how did you set it up? I've been thinking about trying this with some local models and also with super low-latency ones like Gemini 2.5 Flash Lite. Would love to read more about this.

Neovim with the llama.cpp plugin and a heavily quantized qwen2.5-coder with 500 (600?) million parameters. It's almost plug-and-play, although the default ring context limit is way too large if you don't have a GPU.

I've tried creating similar solutions and feel LLMs still lack accurate control over (or understanding of) length, stress / accents, and phonetics for consistent name generation. For usernames, for example, I've yet to build an LLM-based generator that beats simple Markov chains. Maybe because the results are subjective, rating / training is a lot harder? I like the site and your approach though, and great job on lookup speed! If anyone has any tricks or suggestions I'd love to hear them.

Thank you! Maybe keep generating with the Markov chains and use an LLM to evaluate the results?
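Something like this rough character-level sketch, perhaps; the seed names are made up, and score_with_llm is a placeholder hook for whatever model call (or dummy heuristic) you'd plug in:

    import random
    from collections import defaultdict

    def build_chain(names, order=2):
        # map each n-gram of characters to the characters seen after it
        chain = defaultdict(list)
        for name in names:
            padded = "^" * order + name + "$"
            for i in range(len(padded) - order):
                chain[padded[i:i + order]].append(padded[i + order])
        return chain

    def generate(chain, order=2, max_len=12):
        state, out = "^" * order, ""
        while len(out) < max_len:
            nxt = random.choice(chain[state])
            if nxt == "$":  # end-of-name marker
                break
            out += nxt
            state = state[1:] + nxt
        return out

    def score_with_llm(name):
        # placeholder: ask whatever model you like for a 0-10 rating;
        # the dummy below just rewards character variety so the sketch runs
        return len(set(name))

    seeds = ["ghxst", "nebula", "quartz", "vortex", "lumen"]
    chain = build_chain(seeds)
    candidates = [generate(chain) for _ in range(20)]
    print(sorted(candidates, key=score_with_llm, reverse=True)[:5])

The chain keeps generation cheap and consistent; the model only has to rank, which is a much easier job than getting length, stress, and phonetics right while generating.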

"What is an interface?" reminds me of one of my favorite interview questions: "What happens when you open a browser and visit a webpage?". There’s no single right answer and when asked in the right way I find it helps surface knowledge and skills candidates might not otherwise bring up. Some go into OS internals, others talk about networking, some focus on UI and page load performance. Even when candidates expect the question it still reveals how they think and what they spent time reading up on.


“324 unrelated companies are made aware of you visiting the webpage via a mountain of JavaScript, tracking pixels, cookies, and telemetry, and then your browser renders megabytes of code to display kilobytes of content, while prompting you to make an account or download an app”


Hope you just overlooked the auto-playing video ads due to the pressure of the interview. When can you start?


I had no idea that Microsoft set up a Discord server for Copilot.


Love the binary and wave clocks; they instantly got me thinking about how they could work as a subtle graphical element in a landing page footer or something like that.


Multiple people within our company are reporting issues. Mostly from the US; those of us in the EU still seem fine as of right now.

edit: Never mind, it's down for me now as well.


I can see how recommending the right books to someone who's struggling might actually help, so in that sense it's not entirely useless and could even help the person get better. But more importantly I don't think most people are suggesting LLMs replace therapists; rather, they're acknowledging that a lot of people simply don't have access to mental healthcare, and LLMs are sometimes the only thing available.

Personally, I'd love to see LLMs become as useful to therapists as they've been for me as a software engineer, boosting productivity, not replacing the human. Therapist-in-the-loop AI might be a practical way to expand access to care while potentially increasing the quality as well (not all therapists are good).


That is the by-product of this tech bubble called Hacker News: programmers who think that real-world problems can be solved by an algorithm that's been useful to them. Haven't you considered that it might be useful just to you and nothing more? It's the same pattern again and again: first blockchain and crypto, then NFTs, today AI, tomorrow whatever comes next. I'd also dispute that it's useful in real software engineering, except for some tedious/repetitive tasks. Think about it: how can an LLM that by default creates a React app for a simple form be the right thing to use for a therapist? Just as it comes with its own biases about React apps, what biases would it come with for therapy?


I feel like this argument is a byproduct of being relatively well-off in a Western country (apologies if I'm wrong), where access to therapists and mental healthcare is a given rather than a luxury (and even that is arguable).

> programmers who think that real-world problems can be solved by an algorithm that's been useful to them.

Are you suggesting programmers aren't solving real-world problems? That's a strange take, considering nearly every service, tool, or system you rely on today is built and maintained by software engineers to some extent. I'm not sure what point you're making or how it challenges what I actually said.

> Haven't you considered that it might be useful just to you and nothing more? It's the same pattern again and again: first blockchain and crypto, then NFTs, today AI, tomorrow whatever comes next.

Haven't you considered how crypto, despite the hype, has played a real and practical role in countries where fiat currencies have collapsed to the point people resort to in-game currencies as a substitute? (https://archive.ph/MCoOP) Just because a technology gets co-opted by hype or bad actors doesn't mean it has no valid use cases.

> Think about it: how can an LLM that by default creates a React app for a simple form be the right thing to use for a therapist?

LLMs are far more capable than you're giving them credit for in that statement, and that example isn't even close to what I was suggesting.

If your takeaway from my original comment was that I want to replace therapists with a code-generating chatbot, then you either didn't read it carefully or willfully misinterpreted it. The point was about accessibility: in parts of the world where human therapists are inaccessible, costly, or simply don't exist in meaningful numbers, AI-assisted tools (with a human in the loop wherever possible) may help close the gap. That doesn't require perfection or replacement, just being better than nothing, which is what many people currently have.


> Are you suggesting programmers aren't solving real-world problems?

Mostly not, by a long shot. If you reduce everything to its essence, we're not solving real-world problems anymore, just putting masks in front of some data.

And no, only a fool would believe people from El Salvador or other countries benefited from Bitcoin/crypto. ONLY the government and the few people involved benefited from it.

Lastly, you didn't get my point, so let me reiterate it: a coding-assistant LLM has its own strong biases given its training set, and an LLM trained to do therapy would have the same problem, since every training set has one. Given the biases the code-assistance LLMs currently have (slop dataset = slop code generation), I'd still prefer a human programmer, just as I'd still prefer a human therapist.


> But more importantly I don't think most people are suggesting LLMs replace therapists; rather, they're acknowledging that a lot of people simply don't have access to mental healthcare, and LLMs are sometimes the only thing available.

My observation is exactly the opposite. Most people who say that are in fact suggesting that LLMs replace therapists (or teachers or whatever). And they mean it exactly like that.

They are not acknowledging the limited availability of mental healthcare; they do not know much about that. They do not even know what therapies do or don't do; the people who suggest this frequently have an idea of therapy that comes from movies and Reddit discussions.


> I considered that an appropriate punishment would have been a pay cut for a few months

This can absolutely cripple a family; I'd be really cautious about wishing that on someone who wronged you without malice, though I completely understand where you're coming from.

In this case, at the very least, I'd want to know what went wrong and what they're doing to make sure it doesn't happen again. From a software engineer's standpoint, there's probably a bunch of low-hanging fruit that could have prevented this in the first place.

If all they sent was a (generic) apology letter, I'd have switched banks too.

How did you pursue the matter?


After the big surprise of seeing at work a list of all my personal purchases, included in a large set of documents that I, together with a great number of other colleagues, had access to, I went immediately to the bank and reported the fact.

After some days passed without any consequence, I went again, this time speaking with a supervising employee, who attempted to convince me that it was some kind of minor mistake and there was no need to do anything about it.

However, I pointed to the precise paragraphs of the law condemning what they had done and threatened legal action. This escalation resulted in me being invited to a bigger branch of the bank for a discussion with someone in a management position. This time they were extremely ass-kissing; I was also shown the guilty employee, who apologized in person, and eventually I let it go, though there were no clear guarantees that they would change their behavior to prevent such mistakes in the future.

Apparently the origin of the mistake was a badly formulated database query, which returned a set of accounts whose transactions had to be reported to my employer. During the same time interval I had been receiving money from my employer into my private account, corresponding to salary and travel expenses, and somehow those transactions were matched by the bad query, grouping my private account with the company accounts. The set of account numbers was then used to generate reports, without further verification of account ownership.


Behavior isn't what needs to change here. It's a poor system design. Humans make mistakes. Systems prevent mistakes.

Do you think the mistake would have happened if a machine had checked the account numbers against the address? How about if a second person had looked it over? How about both?

In this case a computer could have easily flagged an address mismatch between your account number and the receiver (your work).
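Even a dumb pre-report check would have caught it; something like this, with made-up field names just for illustration:

    def accounts_for_report(accounts, report_recipient):
        # only include accounts actually registered to the report's recipient;
        # anything else gets flagged for a human instead of silently shipped
        cleared, flagged = [], []
        for acct in accounts:
            (cleared if acct["owner"] == report_recipient else flagged).append(acct)
        return cleared, flagged

    accounts = [
        {"iban": "COMPANY-001", "owner": "Employer GmbH"},
        {"iban": "PRIVATE-042", "owner": "Some Employee"},  # the bad query match
    ]
    cleared, flagged = accounts_for_report(accounts, "Employer GmbH")
    print(flagged)  # -> the private account is caught before the report goes out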


Thank you, that's what I intended to say.


Thanks for sharing. Sounds like they have (hopefully _had_) a really messy system in place.

And just to be clear, I didn’t mean to downplay what happened to you, I completely understand how serious it is.


If the training data was "censored" by leaving out certain information, is there any practical way to inject that missing data after the model has already been trained?


If it's just filtered out of the training sets, adding the information as context should work out fine - after all, this is exactly how o3, Gemini 2.5, and co deal with information that is newer than their training data cutoff.
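Roughly like this; the endpoint URL and model name are placeholders for whatever OpenAI-compatible server you actually run:

    import json
    import urllib.request

    facts = ["Fact that was filtered out of the training data goes here."]

    payload = {
        "model": "local-model",  # placeholder
        "messages": [
            {"role": "system",
             "content": "Answer using the provided context.\n\nContext:\n" + "\n".join(facts)},
            {"role": "user", "content": "Question that needs the missing fact?"},
        ],
    }

    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",  # placeholder endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])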


You can fine-tune a model with new information (rough sketch below), but it is not the same thing as training it from scratch, and can only get you so far.

You might even be able to poison a model against being fine-tuned on certain information, but that's just a conjecture.
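For the fine-tuning route, a minimal LoRA sketch assuming the Hugging Face transformers/peft stack; the model name, data, and hyperparameters are placeholders, not recommendations:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "Qwen/Qwen2.5-0.5B"  # placeholder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # train small LoRA adapters instead of touching the base weights
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                             target_modules=["q_proj", "v_proj"]))

    texts = ["The 'missing' information, phrased as training text.",
             "Another example covering the same facts."]  # toy data
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    model.train()
    for _ in range(3):  # a few toy steps; real runs need far more data and care
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

Even then the new facts have to compete with everything already baked into the weights, which is part of why it only gets you so far compared to training from scratch.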


Yes, RAG is one way to do that.
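A toy version of the retrieval step, using TF-IDF instead of real embeddings (scikit-learn assumed); the best-matching document just gets pasted into the prompt:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "Document containing the censored / missing information.",
        "Unrelated document about something else entirely.",
    ]
    question = "What was left out of the training data?"

    vec = TfidfVectorizer().fit(docs + [question])
    best = cosine_similarity(vec.transform([question]), vec.transform(docs)).argmax()

    prompt = f"Context:\n{docs[best]}\n\nQuestion: {question}\nAnswer:"
    print(prompt)  # feed this to whatever model you're using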


AMD just signed a $2 million contract with them to get the new hardware on MLPerf, so it looks like the AMD chapter continues. https://x.com/__tinygrad__/status/1935732517933613216

