
> If a calculator works great 99% of the time you could not use that calculator to build a bridge.

We know for certain that certified lawyers have committed malpractice by using ChatGPT, in part because the made-up citations are relatively easy to spot. Malpractice by engineers might take a little more time to discover.




Engineers' work is also externally verifiable, e.g. by unit tests for software and, I assume, by other sorts of automated checks for civil engineering. I would hope a bridge is not built without triple-checking the various outcomes.
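As a rough illustration (the function name and the divide-by-zero edge case are made up for the example), "externally verifiable" just means the test exercises the behavior independently of who or what wrote the code:

    # Minimal sketch: the test checks behavior, not authorship.
    # `safe_div` stands in for a hypothetical LLM-generated function.
    import unittest

    def safe_div(a, b):
        return a / b if b != 0 else None

    class TestSafeDiv(unittest.TestCase):
        def test_normal_division(self):
            self.assertEqual(safe_div(10, 2), 5)

        def test_division_by_zero_returns_none(self):
            self.assertIsNone(safe_div(1, 0))

    if __name__ == "__main__":
        unittest.main()

Of course, the check is only as good as the specification it encodes, which is the whole debate here.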


Well, most of the LLM-generated code I ship is unit tests (and scripts), so hopefully those are good enough to catch my mistakes :)


If that argument were enough to save anyone, it would have saved the lawyers too.



