Artificial intelligence (AI) in health care is gaining more attention, including a series of articles and podcasts in the New England Journal of Medicine. I asked ChatGPT, “How can AI programs financially benefit physicians?” Its answer, edited for length, was as follows: “They can reduce the time and effort required for administrative tasks such as patient recordkeeping, appointment scheduling, and billing.… [They] can help physicians make more accurate and timely diagnoses by analyzing large amounts of patient data and suggesting treatment plans.… [They] can assist with clinical trials and drug development, helping to speed up the process and potentially bring new treatments to market faster.”1
One of AI's greatest strengths is improved efficiency. AI could evaluate your office processes, compare them with those of other practices, and find the best way to schedule patients. Moreover, studies have shown that for every hour a physician spends with a patient, they spend 2 hours documenting.2 Using AI as a scribe to build a visit note in real time would be life changing. AI could also replace manual chart review, identifying patients who might qualify for research studies. Beyond improvements in resource allocation, AI can improve the quality of diagnoses. AI never grows tired or bored, so using it to help health care providers perform mundane tasks can help maintain quality. On top of this, AI algorithms can analyze large amounts of data and find patterns not otherwise apparent, helping to provide diagnoses for patients with difficult or rare problems.
Although these applications have potential, I have several concerns about the technology. First, it is not always transparent what training data are used or how a program arrives at its answers. Underrepresentation of certain patient groups in the training data can lead to ingrained biases; although synthetic data sets can help mitigate this issue, biases—old or new—will remain. AI programs are known to reach conclusions that are not based on fact, which I find disturbing. AI also brings new legal challenges. Who will bear liability for an AI recommendation? Can a physician face heightened scrutiny for not following the advice of an AI program that has never seen the patient or treated other patients like them? If you don’t think this will happen, I recommend reading a recent article from The Wall Street Journal regarding nurses and AI.3 And what if an algorithm gives a recommendation that leads to a bad outcome? Physicians and nurses are presumably still responsible, but as the article points out, many health care workers are receiving mixed messages.
We have all seen how new technology (eg, the electronic health record) often falls short of promises to make health care more efficient and enjoyable.4-6
If implemented poorly, AI will create more problems than it solves. At the end of the day, regardless of how good the programs become or how excited hospital administrators may be, we must all remember the most important thing about AI: It doesn’t care about you or the patient. It is our humanity that leads to caring about our patients, and I don’t see AI replacing that any time soon.
Leslie Busby, MD, is chair of the US Oncology Pharmacy & Therapeutics Committee, and a medical oncologist and hematologist at Rocky Mountain Cancer Centers, Boulder, Colorado.