The future got here a while ago. But thanks to IBM’s Watson, we can’t ignore it any more.
Some background: IBM is turning Watson into a diagnostic support tool. The FDA wants to treat it as a medical device. IBM disputes that categorization and is spending sums that are, shall we say, significant on lobbying to avoid this regulatory roadblock.
Fortunately, KJR’s crack medical policy division has come up with a solution that should be satisfactory to all concerned.
IBM has created an AI diagnostician, and claims it isn’t a medical device? Fair enough. Force-fitting it into a medical-device regulatory framework is force-fitting a hexagonal peg into an elliptical hole.
But we don’t need new legislation to cover this. We already know how to certify diagnosticians. They complete medical school and have to pass tests covering each course in the curriculum. They then go through three years of residency before being allowed to hang out their shingle as a medical doctor.
Why should Watson be allowed to bypass any of this, just because it isn’t built out of flesh and blood? The solution is straightforward: Each Watson sold must first pass all medical school exams, then go through three years of as near a replica of actual medical residency as can be devised.
Problem solved. You’re welcome.
But of course, like most solutions to most problems, this solution raises new problems of its own.
Imagine you’re a physician, in a practice that’s acquired a Watson of its very own. Watson provides a diagnosis and recommends a treatment for one of your patients, and you disagree.
Now what?
You have two choices — allow Watson’s judgment to override your own, or override Watson’s judgment with your own.
And your patient later dies. Not only is your conscience torturing you, the absence of tort reform is torturing you with a malpractice action.
It doesn’t matter whether you followed Watson’s judgment or your own. You’ll be susceptible to the tort and torture whether you allowed Watson to override your good judgment, or you overrode Watson’s.
It’s a no-win choice. The question for medical ethicists is how to deal with this unresolvable dilemma, and before they even start we know they’ll never come up with a fully satisfactory answer.
We can all thank our lucky stars we don’t have to deal with ethical questions this thorny.
Only, when we thank them we’ll be fooling ourselves, because we’re all dealing with this situation already. It’s the situation we face when our GPS gives us directions we don’t think make sense. We have to decide whether our GPS is routing us strangely because it knows more than we do about the traffic conditions ahead, or whether there’s a glitch in the algorithm and it’s pointing us in the wrong direction.
Not as ethically interesting? If you’re meeting someone for drinks after work, no it isn’t. But what if you’re driving someone to the hospital because they’re in intense pain and the cause might be grave?
It’s the physician and Watson, up close and personal, in real time.
It’s a question of who or what’s in charge, human beings or information technology. It’s a question with a simple answer: Humans, of course, both because we program the computers and because we’re ultimately responsible for the decisions, too.
Except that answer doesn’t always work. A computer-controlled traffic light is an easy-to-understand and inarguable example, because overriding the computer’s recommended course of action (stop before entering the intersection) isn’t merely a violation of traffic laws. It’s a decision with potentially lethal consequences.
When it comes to traffic lights we obey the machine.
The world of commerce is hardly immune from these challenges, starting with the fact that humans are sometimes required to obey computers here, too. That’s what you face if you work in a call center. A computer (the ACD — automated call distributor) directs a call to your phone. What do you do? You answer it. You, the human, obey the ACD, a computer.
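To make the ACD’s role concrete, here is a minimal sketch of one common routing policy, sending each incoming call to whichever agent has been idle longest. This is an illustrative model only; the class, agent names, and policy are invented for this example and are not any vendor’s actual algorithm.

```python
import heapq

class ACD:
    """Toy automated call distributor: longest-idle-agent routing."""

    def __init__(self, agents):
        # Heap of (idle_since_timestamp, agent); the smallest timestamp
        # belongs to the agent who has been idle the longest.
        self._idle = [(0, a) for a in agents]
        heapq.heapify(self._idle)

    def route_call(self, now):
        """Pop the longest-idle agent and hand the call to that agent."""
        _, agent = heapq.heappop(self._idle)
        return agent

    def call_finished(self, agent, now):
        # The agent becomes idle again, starting the clock at `now`.
        heapq.heappush(self._idle, (now, agent))

# Usage: the agent who finished a call earliest gets the next one.
acd = ACD(["alice", "bob"])
first = acd.route_call(now=1)  # whichever agent has idled longest
```

The point of the sketch is the column’s point: the human doesn’t choose whether to take the call; the algorithm does.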
Or, you’re a manager, responsible for a decision that could be influenced by some form of automated support, whether it’s an old-fashioned Decision Support System, a no-longer-fashionable data warehouse, or ultrafashionable Hadoop-driven big-data analytics.
If the analytics indicate a course of action that doesn’t seem right to you, how is that different from the physician deciding about Watson’s diagnosis and recommended treatment?
There are no answers that are both easy and useful, and the questions are becoming more pressing as each day goes by.
Your phone is ringing. It’s the future, calling to let you know it just got into town and would like to meet you for drinks.
Time to get out the GPS.
Having a couple of drinks may be a very good option. Just enough to remove the inhibitions to some unconventional thinking, but not so much as to impair all mental activity.
New tools can be used in new ways, but only if we break from the old ways of doing things.
Perhaps we are moving toward consensus medicine, where the inputs of multiple medical practitioners, combined with their track records, direct a course of treatment with the patient’s consent where possible. This may be the defense against malpractice suits in the absence of other reforms.
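One way to picture this commenter’s “consensus medicine” is a weighted vote: each practitioner’s (or Watson’s) diagnosis counts in proportion to his or her track record. A minimal sketch, with all names and accuracy figures invented for illustration:

```python
def consensus_diagnosis(opinions):
    """opinions: list of (diagnosis, track_record) pairs, where
    track_record is a historical accuracy in [0, 1].
    Returns the diagnosis with the greatest total weight."""
    weights = {}
    for diagnosis, track_record in opinions:
        weights[diagnosis] = weights.get(diagnosis, 0.0) + track_record
    return max(weights, key=weights.get)

# Hypothetical panel: two doctors plus Watson, treated as one more
# consultant whose vote is weighted by its own track record.
opinions = [
    ("appendicitis", 0.90),     # Dr. A, strong track record
    ("gastroenteritis", 0.60),  # Dr. B
    ("gastroenteritis", 0.55),  # Watson
]
print(consensus_diagnosis(opinions))  # gastroenteritis: 1.15 vs 0.90
```

Note the design implication: under this scheme no single participant, human or machine, overrides the others; the disagreement the column worries about is resolved by arithmetic rather than by one party’s authority.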
I am thinking that, in this case, Watson would be a medical device, although I started out on IBM’s side. It’s a diagnostic tool like X-rays and MRIs and blood tests and all that.
So yes, it should be regulated the way all those other devices are: whatever regulations exist to ensure that the devices work correctly and don’t do any harm. IBM could probably spend the money complying with those regs and save money, time, energy, and its reputation. Like a college complying with accreditation.
But as for your concern about doctors needing to yield their judgement to Watson’s – I am not sure that will be such a big issue. It will be a tool, like mammograms and biopsies and everything else. It’s the doctor’s judgement how to interpret the results of the tool. It will also be Watson’s purpose to report exactly why it arrived at the diagnosis it did: the supporting factors, and any contrary indicators. So doctors should have a lot of information on which to base a judgement. When they happen to disagree with Watson, they should be able to point to what factors Watson didn’t, or couldn’t, take into account. They should be able to defend their actions.
(I stopped paying attention to my car’s GPS when it told me I was a couple miles out in the ocean.)
Doctors frequently consult with one another. If a treatment goes wrong, one doctor, or perhaps a group of them in some cases, is held accountable. That doctor(s) would be the one(s) with the final say, not every doctor who may have been consulted.
Watson would be, at most, a “doctor” that is consulted – the consulting diagnostician lowest in the ranks. It’s up to humans to give Watson complete and accurate information on which it can base its diagnosis, and it’s up to humans to correctly interpret results, and to choose what treatment to implement (Watson-recommended or otherwise) and then to correctly implement that treatment.
It’s also up to humans to program and maintain Watson to continually provide the most up-to-date diagnostic abilities. I really don’t think IBM would want to be on the receiving end of malpractice suits without whatever protection may be accorded by having gone through the process of certifying Watson as a compliant medical device.
Opinion: Watson is qualitatively different from MRIs, X-rays and such, because while the former provide information, Watson provides opinions. That’s a very different matter.
I am thinking Therac-25.
I don’t buy the comparison. The doctor makes the decision. If she goes with Watson, it is because Watson suggested something that she hadn’t thought of and that she realizes is preferable, in her educated and certified opinion. Having broad knowledge, all at the top of your mind, and the ability to look at a set of tracks and see the possibility of both the horses and the zebra is so important in diagnosis. Neither of these is a strength for most people. Many doctors are terrible at diagnosis. Even more so today, as doctors everywhere confront diseases from far away and old people with combinations of issues. Just read the NYT Think Like a Doctor column, or, as I did for years, the mystery case in Modern Veterinary Practice.

Doctors have always used reference books, colleagues, and other outside aids in sifting through the chess match of a diagnosis. O.K., so the textbooks and doctors should be validated in their appropriate ways. The other outside aids can be whatever the doctor runs across that provides information or gets thoughts out of a rut onto another path.
I wonder what the pilots of Air France Flight 447 would say on the topic of machine decision making? The Airbus makes split-second decisions and also attempts to protect the pilot from himself, with rare but very dramatic consequences.
The doctor (usually) has the benefit of time to consider and consult, and study the logic readout of Watson’s decision. Even then, the doctor is in the no-win situation you describe. No matter what the decision, if the patient dies, it will end up in court and that makes very bad medicine.
In the case of AF 447, it was not machine decision making that caused the crash. The airspeed indicators provided disagreeing values to the auto-pilot, which led to the machine deciding to disengage the auto-pilot. At that point it was up to the pilots to cope with the situation, which would have presented itself regardless of the use of the auto-pilot in the first place.
Bob, you’re probably already aware of the burgeoning field of “robotic process automation” (RPA) which is eventually going to bring these ethical dilemmas to offices near all of us. IBM’s Watson can be seen as an extreme example of what RPA might achieve. RPA is essentially a new phase of outsourcing–for better or worse. A great resource for tracking outsourcing trends is “Horses for Sources,” and here’s a link to an article discussing the maturity of RPA: http://www.horsesforsources.com/maturity_rpa_111214#sthash.JBCEHbTF.dpbs