throwaway1851's comments | Hacker News

I’ve had opiates for wisdom tooth extraction and a couple of relatively minor but very painful injuries. I was very glad to have them. They made the experiences much more bearable. I did not experience any addiction issues, and I’m glad that my access to these medications was not limited due to some people’s inability to handle them.

I really don’t think forcing needless suffering on everyone is the answer to this problem.


Opiates for wisdom teeth extraction are entirely unnecessary.

Source: had six very painful tooth extractions, with bone grafts and two sinus repairs. I was almost pain-free on 600mg ibuprofen + 600mg acetaminophen every 6 or so hours. I say "almost" because I was not dumb enough to try any type of exercise, or to eat crunchy cereal, but I had no problem falling and staying asleep, or going back to work within 24 hours.

So I am not entirely sure why you think opiates were necessary or provided you the type of relief that OTC meds could not provide. My double-board-certified, extremely expensive dental surgeon doesn't seem to think so.

I also had abdominal surgery, and there I can confirm that opiates do help quite a bit, but that is an entirely separate story.


If you've reviewed the facts in this case, and that's what you took from it, then I seriously question your reading comprehension ability.


They were looking for their angle and found it. Reading was not part of the process.


And, with that, we’re back to pre-LLM chatbot design: intent classification, entity extraction, business logic, return a result. Only the whole process rests on a more rickety foundation. It’s also bloated and slow, querying an LLM over and over for these things. I’m starting to see some parallels to modern JavaScript and SPAs. ;-)


Bard hasn’t been using Google’s best language models. I believe it just got an upgrade, however, and I’m now getting output that is significantly more coherent and useful than ChatGPT’s. It’s also a helluva lot faster, though that could owe to the limited access.


That’s only going to get you so far. Sparsely represented subjects still have an actual reality you’re trying to model. So while yes, you can generate synthetic data to interpolate between the points that have already been sampled, whether those interpolated points have anything to do with the underlying reality is a matter of chance. If you really try to drill down with the existing tools into a specialized problem area, it becomes very clear that they lack a sufficiently informed model of the subject to be useful. That’s why I’ve found that ChatGPT is great at the beginning of a project and increasingly irrelevant as you approach the core technical challenges of the project.


Especially weird since SFO is not in San Francisco.


and the event is for the whole metro area and people who want to travel there, but closer to SFO than the other airports


OpenAI’s fear-mongering efforts have been really transparent. As an example, in the ABC News piece, one of their employees discusses asking GPT to help build a bomb. Your employees using words like “bomb” on television is not something that happens by accident.


OpenAI explicitly disclaims ownership interest in the model outputs. A user who both generates outputs from OpenAI AND uses it to train a “foundational” model that competes with OpenAI could owe contract damages. Other parties? I simply don’t see it.


I think a review of the state of frontend tooling will show that efficiency with respect to developer hours is not a widely shared priority. I say this with only 50% intention of starting a flame war.


> It gets basic facts wrong and often times misunderstands what I'm trying to ask it.

I haven’t tried Bard, but I’ve tried ChatGPT extensively, and this sounds like a very good description of it.

