
"If a patient ends up dying because of preventable circumstances, that's not going to be acceptable because we are trying to save money"

Let's not pretend that every anaesthesiologist has a perfect track record. Machines can make mistakes with horrible consequences, but so can humans. Similar arguments are used against many forms of valuable automation, such as self-driving cars. The benefit of using machines is that mistakes are quantifiable and can be fixed en masse. There's no way to know whether any given anaesthesiologist is having an off day, and there's no single fix for that problem. Most automation issues can be fixed with time and more data.

The outlook is grim for automating similar medical tasks because of the same problem that Sedasys is facing: the human inclination towards turf protection. Professional organizations collect a lot of dues money from their members, which is then used to purchase enough clout to delay the FDA at least once. No one wants to lose a job they've committed their life to, and the threat of automation extends across all of society. But we can't afford to be the species that shoots itself in the foot by refusing to reap the benefits of an ever-advancing society.

I look around and see a world filled with repetitive and mundane tasks. I love it when one of those jobs is automated away. Congratulations to Johnson & Johnson and I wish them the best in their fight against our own backward tendencies.




Also, that quote can't possibly be accurate. Are they really claiming that there are no measures that could improve patients' odds (however slightly) but are foregone because they're prohibitively expensive? Because that's the implication.

I understand that the value of a human life is both high and difficult to measure. But throwing your hands up and going with $Infinity is not a valid solution.


You have it backwards. They're saying that there are measures that could save money but are foregone because of the unknown, potentially greater risk. An anaesthetist can fuck up a handful of operations. A design error in a widely deployed robotic anaesthetic device could affect thousands of people. Juries are not kind to defendants in such cases.


For the most part, though, errors must be subtle and small to escape notice for very long. If, for instance, Device A kills everyone who uses it within five minutes, that will not go undiscovered for long. Unfortunately, we cannot remove the chance of that happening.

Though if I were standardizing this software, I would seriously consider mandating that devices MUST NOT (in the RFC vernacular) have real-time human-calendar clocks on board. Calendar-based errors can be highly correlated and impossible to respond to quickly, and if the device simply doesn't have such a clock, it can't crash because of one.
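
As a minimal sketch of the correlated failure mode I mean (this paraphrases the well-known 2008 Zune leap-year bug, nothing to do with Sedasys itself; ORIGINYEAR and is_leap_year are just illustrative names):

    #define ORIGINYEAR 1980   /* illustrative epoch */

    static int is_leap_year(int y)
    {
        return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
    }

    /* Convert a count of days since Jan 1, ORIGINYEAR into a year.
     * Bug: on the last day of a leap year, days == 366, so neither
     * branch below makes progress and the loop spins forever. */
    int days_to_year(int days)
    {
        int year = ORIGINYEAR;
        while (days > 365) {
            if (is_leap_year(year)) {
                if (days > 366) {
                    days -= 366;
                    year += 1;
                }
                /* days == 366: stuck here forever */
            } else {
                days -= 365;
                year += 1;
            }
        }
        return year;
    }

Every device running this code locks up at the same instant on Dec 31 of a leap year; no amount of per-unit testing earlier in the year would have caught it.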

(I'd also like to mandate buffer-safe languages that aren't C.)

((Also, I'd like a pony.))


I would seriously consider mandating that devices MUST NOT (in the RFC vernacular) have real-time human-calendar clocks on board them.

I am in total agreement. My experience with GPS-based clocks in high-precision hard real-time environments taught me that time is really effing hard to do perfectly; it is full of countless non-obvious corner cases, some of which will only bite you two years into deployment.
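
One concrete corner case: GPS broadcasts only a 10-bit week number, which wraps every 1024 weeks (about 19.7 years), so the receiver has to guess which era it's in from some independent prior. A sketch of the usual disambiguation (resolve_gps_week is my own illustrative name, not any real receiver API):

    /* GPS broadcasts a 10-bit week number (0..1023) that wraps every
     * 1024 weeks. Combine it with an independent estimate of "roughly
     * now" (battery-backed RTC, firmware build date, ...) by picking
     * the full week number closest to that estimate. */
    int resolve_gps_week(int broadcast_week /* 0..1023 */,
                         int approx_week    /* weeks since the GPS epoch */)
    {
        int rollovers = (approx_week - broadcast_week + 512) / 1024;
        return broadcast_week + rollovers * 1024;
    }

If the prior ever drifts more than ~512 weeks from reality, the resolved time silently jumps by ~19.7 years, which is exactly the kind of error that surfaces long after deployment.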


I'm not going to dispute what you're saying (that's another conversation), but I really don't see where you're getting that from the quote under discussion.



