Monday, December 1, 2025

Does AI Belong in Healthcare?

     Most of us remember the very moment we first used AI tools like ChatGPT, fitting for a cohort that was among the first to use them academically. Whether you used it for inspiration on an assignment, to help decipher complicated texts, or even to write a procrastinated essay, we all have distinct experiences with AI. While ethical questions surrounding AI have always existed, the discussion has taken a new turn as the medical industry looks to integrate AI software into practice. At first glance, the thought of ChatGPT providing a diagnosis is thrilling, since computers are often assumed to be more accurate than humans, yet a host of ethical concerns follows close behind.

The article I referenced discussed 11 different ethical issues associated with the adoption of AI into healthcare, but for the sake of time, I will only touch on a few. To start, I found the section on privacy to be the most interesting because AI is only as effective as the data it has access to. Obtaining this data, however, is widely controversial: advancing AI systems quickly requires purchasing large data banks, and in doing so, companies may violate people's right to privacy, which inevitably conflicts with HIPAA regulations. Furthermore, achieving a level of transparency that complies with HIPAA will be difficult given the ongoing conflict between the owners of AI technologies and their users over profit. A final note on privacy: in the wake of learning healthcare systems, true confidentiality might require leaving some information out of a person's medical record, ultimately preventing patients from receiving appropriate care. This puts a person's health and well-being at risk just so a company can secure its profits.


AI also raises tensions with the long-standing ethical principle of non-maleficence, still considered fundamental in healthcare decision-making. This is perhaps the most serious obstacle AI faces, as its tendency to hallucinate could lead to dangerous misdiagnoses. Needless to say, a misdiagnosis could be detrimental, if not fatal, but reducing the risk of AI hallucinations requires data, and lots of it. Considering it is becoming increasingly difficult for companies to obtain current, real-world data to run their AI engines, there is a real concern that AI models will experience calibration drift. Continuous learning systems require careful monitoring to stay safe, but in countries like the U.S., AI is only approved in fixed forms.


In sum, AI is taking the world by storm and developing faster than we can keep up with. While it may seem reasonable to let a machine track and analyze enormous amounts of data no human could possibly process, the field is still brand new and underdeveloped. So, do you think AI should be used in healthcare, or is it too soon to tell?



Work Cited:


Pruski, M. (2024). Ethical challenges to the adoption of AI in healthcare: A review. The New Bioethics: A Multidisciplinary Journal of Biotechnology and the Body, 30(4), 251-267. https://doi.org/10.1080/20502877.2025.2541438




2 comments:

  1. You bring up a lot of interesting points, especially involving the tension between the promise of AI and the ethical risk that comes with it. There was a lot of initial excitement about a tool like ChatGPT being able to diagnose something; many of us believed it would reduce human error. But as you pointed out in your article, the reality is much more complicated, especially the privacy issue. The idea that companies would withhold or alter what goes into your medical record just to protect data really undermines the entire purpose of documentation in healthcare.

    Something I would be interested in is how these issues overlap with the problems we already see with self-diagnosis. Most people use Google to figure out their symptoms, which often leads to anxiety or misinformation. Because of that, I am curious how you think healthcare systems should address the risk of people using AI to self-diagnose, since this problem already exists with traditional search engines. And if these tools become more powerful and more widespread (like seeing ChatGPT commercials on TV), the temptation to bypass real clinicians would increase, potentially amplifying the existing problems.

    Overall I do agree with your conclusion, but maybe the better question is not just “should AI be used in healthcare?” but instead, “How do we make sure it is introduced in a way to protect patients first?”

  2. AI scares me so much, and I never even considered the perspective you provided here about how giving AI access to medical data is a HIPAA violation - but of course it would be! At the same time, it gives me a ton of hope that AI might be able to provide rapid and accurate cancer detection, much faster and (hopefully) just as reliable, if not more so, than a human eye. I think if implemented properly (and with regulation!!!), medical professionals can utilize AI (not be replaced by it) to massively benefit the human population. I certainly think it has a place in healthcare.

