This is something I constantly struggle with. It's fine for generative sales and marketing copy, but for data analysis, accepting the risk that the output is even 0.1% wrong is not an option for many.

So you end up checking everything the AI does, which negates the productivity argument entirely.
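
Concretely, "checking" here means re-deriving the invariants by hand. A minimal sketch of what that looks like with Polars (the frame and column names are hypothetical):

    # Sketch: hand-written sanity checks over an AI-produced aggregation.
    import polars as pl

    raw = pl.DataFrame({"region": ["a", "a", "b"], "sales": [10.0, 5.0, 7.0]})
    result = raw.group_by("region").agg(pl.col("sales").sum())  # the AI's step

    assert result["sales"].sum() == raw["sales"].sum()  # totals preserved
    assert result.height == raw["region"].n_unique()    # one row per region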

Curious how others deal with this dilemma.

I would love it if we could seed LLMs with specific books, or give our own weightings to sources. I'm sure most books are in there already; I would happily pay extra for known provenance of advice, with royalties passed back to the original authors. For PyData code, I'm always looking at Effective Python/Polars.
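
Until models expose that directly, the closest approximation is weighting sources in your own retrieval layer. A rough sketch, where the weights and source names are entirely made up:

    # Sketch: re-rank retrieved passages by per-source trust weights.
    from dataclasses import dataclass

    @dataclass
    class Passage:
        text: str
        source: str        # e.g. "Effective Python"
        similarity: float  # relevance score from your vector store

    SOURCE_WEIGHTS = {"Effective Python": 2.0, "random blog": 0.5}  # assumed values

    def rerank(passages: list[Passage]) -> list[Passage]:
        # Boost trusted books, demote unknown provenance (default weight 1.0).
        return sorted(passages,
                      key=lambda p: p.similarity * SOURCE_WEIGHTS.get(p.source, 1.0),
                      reverse=True)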


This is a big challenge! The more complicated the code generated by AI, the more likely it is to go wrong, and the harder it is to verify. (More magic == more risk.)

I'm curious how to restrict what the AI generates at each step so it's simple enough to verify and edit, without making the whole process feel too verbose and slow.
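
One pattern that might help: gate each generated step behind a small executable check before accepting it. A sketch, where generate_step() stands in for whatever model API you use:

    # Sketch: accept AI output one small step at a time, only if it passes
    # an executable check. generate_step() is a placeholder, not a real API.
    def accept(snippet: str, check) -> bool:
        ns: dict = {}
        try:
            exec(snippet, ns)  # each step stays small enough to read AND run
        except Exception:
            return False
        return check(ns)

    # Example: the step must define a mean() that handles empty input.
    ok = accept(
        "def mean(xs): return sum(xs) / len(xs) if xs else 0.0",
        lambda ns: ns["mean"]([1, 2, 3]) == 2.0 and ns["mean"]([]) == 0.0,
    )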
