
Technical director at another company here: We have humans double-check everything, because we're required by law to. We use automation to make response times faster, or to do the bulk of the work and then just have humans double-check the AI. To do otherwise would be classed as "a software medical device", which needs documentation out the wazoo, and for good reason. I'm not sure you could even have a medical device where most of your design doc is "well I just hope it does the right thing, I guess?".

Sometimes, the AI is more accurate or safer than humans, but it still reads better to say "we always have humans in the loop". In those cases, we reap the benefits of both: Use the AI for safety, but still have a human fallback.
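To make that concrete, here's a minimal sketch of the triage flow, assuming a hypothetical classify() model call and an in-memory review queue; the names, fields, and threshold are illustrative, not our actual system:

    # Sketch only: classify() stands in for the model, ReviewQueue for the
    # human double-check step. The AI does the bulk of the work; anything
    # it is not confident about is routed to a human reviewer.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Case:
        case_id: str
        text: str

    @dataclass
    class ReviewQueue:
        pending: List[Case] = field(default_factory=list)

        def add(self, case: Case) -> None:
            self.pending.append(case)

    def classify(case: Case) -> float:
        # Placeholder for the model call; returns confidence the case is safe.
        return 0.4 if "chest pain" in case.text else 0.95

    def triage(cases: List[Case], queue: ReviewQueue, threshold: float = 0.9) -> None:
        for case in cases:
            if classify(case) < threshold:
                queue.add(case)

    if __name__ == "__main__":
        q = ReviewQueue()
        triage([Case("1", "routine refill request"),
                Case("2", "patient reports chest pain")], q)
        print([c.case_id for c in q.pending])  # -> ['2']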



I'm curious, what does your human verification process look like? Does it involve a separate interface or a generated report of some kind? I'm currently working on a tool for personal use that records actions and triggers them at a later stage when a specified event occurs. For verification, I generate a CSV report after the process completes and back it up with screen recordings.
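For reference, a stripped-down sketch of the CSV report step; the field names (timestamp, trigger_event, action, outcome) are placeholders for whatever the tool actually records:

    # Each replayed action appends a row; the report is written once the
    # run completes so it can be cross-checked against the screen recording.
    import csv
    from datetime import datetime, timezone

    actions = []

    def log_action(trigger_event: str, action: str, outcome: str) -> None:
        actions.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "trigger_event": trigger_event,
            "action": action,
            "outcome": outcome,
        })

    def write_report(path: str = "run_report.csv") -> None:
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["timestamp", "trigger_event", "action", "outcome"])
            writer.writeheader()
            writer.writerows(actions)

    log_action("file_created", "click_submit", "ok")
    write_report()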


It's a separate interface where the output of the LLM is rated for safety, and anything unsafe opens a ticket to be acted upon by the medical professionals.
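Roughly, the routing looks like this; rate_safety() and open_ticket() are placeholders for the real classifier and ticketing integration, not our actual code:

    # Sketch of the flow described above: every LLM output gets a safety
    # rating, and anything rated unsafe opens a ticket for a clinician.
    from typing import Literal

    Rating = Literal["safe", "unsafe"]

    def rate_safety(llm_output: str) -> Rating:
        # Placeholder: in practice this is a separate rating step, not a keyword check.
        return "unsafe" if "dosage" in llm_output.lower() else "safe"

    def open_ticket(llm_output: str) -> None:
        print(f"Ticket opened for clinician review: {llm_output!r}")

    def review(llm_output: str) -> None:
        if rate_safety(llm_output) == "unsafe":
            open_ticket(llm_output)

    review("Suggested dosage change for patient 123")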



