Dr. Robot: Did Your Doctor Go to Medical School?

Figure 1: AI robot playing the role of physician

Artificial intelligence (AI) is omnipresent in our world today: it handles customer service, calculates complex formulas, predicts market values, and powers facial recognition, among many other applications. Our world moves more efficiently with artificial intelligence, but is it always beneficial? As AI enters the world of medicine, we must remember that faster does not always mean better. Using AI in medicine requires proof of medical utility and economic value, and its use must rest on ethical and moral grounds; AI must take the ethical principles of medicine into account so that we remain aware of its potential flaws (Hamet & Tremblay, 2017).

Efforts to define ethical standards for AI have converged on principles that closely mirror the four principles of medical ethics: respect for human autonomy, fairness, non-maleficence, and explicability (Mittelstadt, 2019). However, unlike the humans who work in medicine, AI does not share the same common goals, professional history, norms, or accountability mechanisms. Moreover, AI currently cannot be programmed to have the emotional or sympathetic capacities of the human brain (Mittelstadt, 2019). Therefore, while the principles can still apply, AI is not equipped to be ethically on par with humans or to help the sick the way a doctor can. The possibility of improving the ethics of AI systems does exist, but it is contingent on case studies and on testing AI performance in artificial environments before it is allowed into real practice. And if we were to go so far as to replace human medical professionals with AI, we would need to consider what is being removed from the equation.

AI usage, for one, can potentially worsen health disparities for those already struggling with access to care, and it risks magnifying biases related to race, gender, and economic status, among others (Khullar, 2019). On the other hand, AI brings certain benefits to healthcare. For example, AI programs can detect and diagnose skin cancer faster than a dermatologist, but their accuracy may vary depending on the population being examined (Rigby, 2019). This is because AI programs are trained on historical medical data, which are often skewed toward the population majority; these records may also contain missing data and misclassifications (Gianfrancesco et al., 2018). If there is not enough data about a particular medical problem in a particular subset of the general population, the program's diagnoses are at greater risk of inaccuracy, which in turn contributes to disparities that disproportionately affect minority populations.
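
To make this concrete, here is a minimal, purely illustrative sketch of that mechanism. It uses synthetic data, hypothetical features, and scikit-learn's logistic regression; none of it comes from the studies cited above, and it stands in for no real medical system. It simply shows how a model trained mostly on one group can end up less accurate for an under-represented group:

```python
# Illustrative sketch only: synthetic data standing in for medical records.
# The two groups follow slightly different feature-outcome relationships,
# but the training set contains far more records from the "majority" group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two hypothetical clinical features; `shift` controls how the second
    # feature relates to the diagnosis in this (hypothetical) group.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Training data: 950 majority-group records, only 50 minority-group records.
X_maj, y_maj = make_group(950, shift=0.2)
X_min, y_min = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# The model mostly fits the majority group's pattern, so its accuracy is
# typically lower on fresh data drawn from the under-represented group.
for name, shift in [("majority group", 0.2), ("minority group", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")
```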

AI is technology created by humans, so it is no surprise that AI programs can be biased in the same ways as their designers. A study published in the journal Science found that African Americans are disproportionately disadvantaged in healthcare because the AI was programmed to give more diagnoses to White Americans than to African Americans (McCullom, 2020). The designers assumed that White Americans would be able to pay more for advanced medical care than African Americans, and the algorithm inherited that bias (McCullom, 2020). This case study is one of the few to trace bias directly to how an AI system was programmed, which raises the question of how many other biased AI systems unwittingly continue to operate in medicine today. Unfortunately, many AI systems have programming so intricate that the FDA struggles to evaluate them properly (Greenfield & Wilson, 2019). If incompletely evaluated AI systems can be cleared for use, it is fair to question whether AI can truly be objectively ethical.

Figure 2: Survey Respondents’ Results When Asked About Use of AI Doctor

That being said, AI does bring benefits to the field of medicine. Despite the aforementioned biases, AI programs can gauge diagnoses and disease risk in patients faster and more accurately than humans (Greenfield & Wilson, 2019). This is especially useful in dire medical emergencies, because AI takes human emotion out of an urgent situation, which can be an advantage in those moments (Greenfield & Wilson, 2019). However, limitations persist: AI systems do not yet have the technological capability to truly understand the medical reasoning behind a diagnosis or to answer the “why” questions that follow it (Davenport & Kalakota, 2019). AI may have a place in medicine, but it currently cannot replace medical professionals.

Artificial intelligence is an exciting new technology with the potential to transform the world of medicine and save many lives. When considering the implementation of more AI systems in the medical field, ethical and moral biases must be weighed against the purported benefits. AI is beginning to play important roles in medicine, but those roles cannot be defined or expanded until we develop a greater understanding of its effects. The next time you go to the doctor, take a minute to think, “Would I want a robot doing this for me?”

Edited by Aditya Jhaveri

References 

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94

Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential biases in machine learning algorithms using electronic health record data. JAMA Internal Medicine, 178(11), 1544–1547. https://doi.org/10.1001/jamainternmed.2018.3763

Greenfield, D., & Wilson, S. (2019, June 19). Artificial intelligence in medicine: Applications, implications, and limitations. Harvard University Graduate School of Arts and Sciences: Science in the News. http://sitn.hms.harvard.edu/flash/2019/artificial-intelligence-in-medicine-applications-implications-and-limitations/

Hamet, P., & Tremblay, J. (2017). Artificial intelligence in medicine. Metabolism, 69, S36–S40. https://doi.org/10.1016/j.metabol.2017.01.011

Khullar, D. (2019, January 31). A.I. could worsen health disparities. The New York Times. https://www.nytimes.com/2019/01/31/opinion/ai-bias-healthcare.html

McCullom, R. (2020, August 24). Is artificial intelligence (AI) medicine racially biased? Genetic Literacy Project. https://geneticliteracyproject.org/2020/08/24/is-artificial-intelligence-ai-medicine-racially-biased/

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501–507. https://doi.org/10.1038/s42256-019-0114-4

Rigby, M. J. (2019). Ethical dimensions of using artificial intelligence in health care. AMA Journal of Ethics, 21(2), E121–E124. https://doi.org/10.1001/amajethics.2019.121

AI robot acts as doctor to woman in hospital [Image]. Quartz. https://qz.com/989137/when-a-robot-ai-doctor-misdiagnoses-you-whos-to-blame/

Reasons why healthcare consumers will/will not use an AI powered virtual doctor. (2018). HealthITAnalytics. https://healthitanalytics.com/news/arguing-the-pros-and-cons-of-artificial-intelligence-in-healthcare
