Artificial Intelligence: Is It Really Safe In Health Care?



Artificial Intelligence (A.I.) and machine learning technologies are booming in the medical field as hospitals strive for more efficient methods of patient care. According to the U.S. Food and Drug Administration (F.D.A.), these advanced programs have the ability to transform the healthcare industry entirely. Some current A.I. systems are advanced enough to detect the onset of medical conditions before any signs or symptoms appear. 

Despite the good intentions of A.I. in the medical field, certain pitfalls and errors commonly found in these technologies could cause patients more harm than good. A recent article in The New York Times highlighted several warnings issued by researchers from Harvard and M.I.T. about the unintended consequences patients should be prepared for. 

Please note: This blog is not intended as a form of medical advice. As health advocates, we believe all patients should stay informed about topics in the medical field that can assist them in making educated decisions about their health. 

The Risks of A.I.

The F.D.A. describes Artificial Intelligence as a broad term for the engineering of intelligent machines and computer programs. In the healthcare industry, A.I. programs most often rely on machine learning: a technique used to design and train software algorithms to learn from and act on data. 
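For technically inclined readers, the idea of "learning from data" can be sketched in a few lines of code. The example below is a deliberately simple nearest-centroid classifier with invented patient values; it is an illustration of the concept only, not how any real diagnostic system works:

```python
# A toy illustration of machine learning: a program that "learns" a rule
# from example data instead of being given the rule explicitly.
# All patient values here are invented for illustration only.

def train_centroids(examples):
    """Average the feature values for each label (nearest-centroid learning)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose averaged training example is closest to the new case."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Training data: (body temperature in degrees F, resting heart rate) -> label
training = [
    ((98.6, 70), "healthy"), ((98.4, 65), "healthy"),
    ((102.1, 95), "febrile"), ((101.5, 100), "febrile"),
]
model = train_centroids(training)
print(predict(model, (101.8, 92)))  # a new, unseen case -> febrile
```

The program was never told what a fever is; it inferred a decision rule from labeled examples, which is the core idea behind the medical A.I. systems discussed in this article.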

One of the most concerning characteristics of healthcare-based A.I. programs, according to the Harvard and M.I.T. researchers, is their vulnerability to manipulation. A.I. systems, like any computer-based technology, operate as they are programmed. They can be damaged, reprogrammed, or altered, or can accumulate errors, any of which can result in unfavorable outcomes for patients. 

Misdiagnosis or Failure to Diagnose 

Possibly the most frightening consequence of A.I. programs is the possibility of medical errors, including misdiagnosis or failure to diagnose:

 – Misdiagnosis: Patients who are misdiagnosed with a condition may undergo painful and unnecessary treatments. Not only can this delay treatment of the correct medical condition, but taking the wrong medication or undergoing unnecessary testing can also put patients at additional risk of injury and infection.

 – Failure to Diagnose: When health providers fail to diagnose conditions, patients do not start the life-saving treatments and medications they need to achieve the best possible prognosis. This lost opportunity can result in reduced quality of life or an untimely death. 

Multiple A.I. studies reviewed in The New York Times article showed how easily these programs can be manipulated into misdiagnosing or failing to diagnose a patient’s condition. By adjusting only a few pixels of a medical image, researchers could lead a program to diagnose a malignant tumor as benign, and vice versa. Although these manipulations were deliberate, researchers believe similar errors could occur naturally as the technology sees wider use. 
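The pixel-level trick described above can be sketched with a toy linear classifier. Every number below is invented for illustration (real diagnostic models are vastly more complex), but the mechanism is the same one the researchers describe: nudge each pixel slightly in the direction that raises the model's score, and the predicted label flips:

```python
# A sketch of the manipulation described above: tiny changes to input
# pixels flip a classifier's answer. Toy weights and image, invented numbers.

def score(weights, pixels, bias):
    """Linear decision score: positive -> 'malignant', negative -> 'benign'."""
    return sum(w * p for w, p in zip(weights, pixels)) + bias

def label(s):
    return "malignant" if s > 0 else "benign"

def sign(x):
    return 1.0 if x > 0 else -1.0

weights = [0.8, -0.5, 0.3, -0.9]   # toy model weights
bias = -0.05
image = [0.2, 0.4, 0.1, 0.3]       # toy pixel intensities

original = score(weights, image, bias)

# Adversarial tweak: push every pixel a small step (0.15) in whichever
# direction increases the score, i.e. toward a "malignant" verdict.
epsilon = 0.15
adversarial = [p + epsilon * sign(w) for p, w in zip(image, weights)]
flipped = score(weights, adversarial, bias)

print(label(original))  # -> benign
print(label(flipped))   # -> malignant
```

No pixel moved by more than 0.15, yet the verdict reversed. This is why small, even accidental, corruptions of input data worry the researchers.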

Lack of Patient-Based Care

A.I. technologies do not take into account every aspect of a person’s lifestyle, genetic makeup, symptoms, or other factors that can determine a medical diagnosis. Researchers found that entering certain information automatically produced overly narrow treatment options. For instance, typing in “alcohol abuse” generated numerous alcohol-related diagnoses. This trend could lead hospitals to pigeonhole patients into less patient-centered treatments that are ineffective and ignore unique factors. 

Financial Gain For Institutions 

A.I. programs could put patients at financial risk in addition to physical harm. Harvard and M.I.T. researchers argue that A.I. programs deeply rooted in hospitals will leave patients at the mercy of the institution when it comes to costs. Hospitals and medical centers would be able to program the systems to automatically select treatment options that are capable of bringing in the most money, neglecting to provide more affordable options for the patient. 

Advocate For Your Health

In the next decade, A.I. healthcare programs will be everywhere. As hospitals become more reliant on these technologies, patients must stay alert to how they could affect their health. 

Becoming your own health advocate is critical to reducing your risk of preventable medical errors. One tactic described by WebMD is the “ABC” method for talking to your healthcare provider:

 – A: Ask Questions: Tell your doctor when something does not make sense. Ask your doctor about your treatment options and the risks of using A.I. programs for diagnostic purposes.

 – B: Be Prepared: Start your appointment off strong with the most pressing issues. Do not save them for the end. If you are not someone who can think of questions on the spot, write them down. Bring photos, lists, or any other information that can help you voice your concerns.

 – C: Communicate Concerns and Desires: Let your doctor know what is concerning you: treatment, testing, pain, medication, or anything else. If something does not feel right, do not be afraid to address it. If you do not agree with a diagnosis or want a second opinion, trust your instincts. 

If you have questions or concerns regarding the use of A.I. technology in your treatment plan, call your healthcare provider for guidance and support. 

NYC Medical Malpractice Lawyers

There is no excuse for injuries caused by preventable medical errors. If you or a loved one has suffered a serious injury or illness due to medical negligence, our winning team of medical malpractice lawyers is here to fight for your rights.

The law firm of Pazer, Epstein, Jaffe & Fein has been successfully advocating for NYC patients for over 60 years. Contact us using our convenient online form or feel free to phone us in New York at 212-227-1212, or in Huntington/Long Island at 631-864-2429.

Sources

Smith, Craig S. “Warnings of a Dark Side to A.I. in Health Care.” The New York Times. (Retrieved February 7, 2020) https://www.nytimes.com/2019/03/21/science/health-medicine-artificial-intelligence.html

“Artificial Intelligence and Machine Learning in Software as a Medical Device.” U.S. Food & Drug Administration. (Retrieved February 7, 2020) https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device