“Even in the best case, the models had a 35% error rate,” said Stanford’s Shah.
So, when the AI makes a critical error and you die, who do you sue for malpractice?
The doctor for not catching the error? The hospital for selecting the AI that made a mistake? The AI company that made the buggy slop?
(Kidding, I know the real answer is that you’re already dead and your family will get a coupon good for $3.00 off a sandwich at the hospital cafeteria.)
“AIs are people” will probably be the next conservative rallying cry. That will shield them from all legal repercussions aside from wrist-slaps, just like corporations in general.
Well, see, that is the technology: it’s a legal lockpick for mass murderers to escape the consequences of knowingly condemning tens of thousands of innocent people to death for a pathetic hoarding of wealth.
Cool, so they are entitled to wages and labor protections, then.
“Not like that!”