Ethics in Digital Health and Responsible AI

Mansa Shroff • February 15, 2022

Responsible AI and digital ethics will capture center stage in the years to come, but right now they have settled into a trough of disillusionment. What can we do as an industry to change this? Gartner describes digital ethics1 as "managing and securing personal data," without a nod toward considerations like accessibility, inclusivity, equity, or mental health. New digital health innovations must take ethics into consideration, examine the tradeoffs associated with making information more efficient, and explore the legitimate concerns people have around data privacy and ownership.

Current technology innovation will have a deep impact on healthcare. Ethical considerations include the inability to access technology due to disparities in tech literacy or limited broadband reach2, especially in low-income countries. Systemic issues such as algorithmic bias3 and pay-for-privacy schemes are also serious concerns. During the COVID-19 pandemic, we saw lapses in digital ethics cause medical, psychological, dignity-related, and economic harm. A key example is the pulse oximeter. Throughout the crisis, people of color experienced higher rates of hospitalization and death from COVID-194,5. Inaccurate pulse-oximeter readings led to a failure to identify Black and Hispanic patients who needed treatments like the steroid dexamethasone and the antiviral remdesivir6.
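To make the pulse-oximeter problem concrete, here is a minimal sketch with entirely hypothetical numbers: if a device systematically overestimates oxygen saturation for some patients, readings that should fall below a treatment threshold no longer do, and those patients are never flagged for care.

```python
# Toy illustration (all numbers hypothetical): a constant upward bias in
# pulse-oximeter readings can hide patients who fall below a treatment
# threshold. The +2-point overread and the patient values are invented.

TREATMENT_THRESHOLD = 92  # SpO2 (%) below which treatment is indicated

def flagged_for_treatment(true_spo2, bias=0):
    """Return indices of patients whose reported reading falls below threshold."""
    return [i for i, spo2 in enumerate(true_spo2)
            if spo2 + bias < TREATMENT_THRESHOLD]

patients = [95, 91, 89, 88, 93]  # true saturations (hypothetical)

accurate = flagged_for_treatment(patients)           # device reads true value
overread = flagged_for_treatment(patients, bias=2)   # device reads 2 points high

missed = sorted(set(accurate) - set(overread))
print(f"Patients missed due to overestimation bias: {missed}")
```

With an accurate device, patients 1, 2, and 3 are flagged; with a 2-point overread, patient 1 (true SpO2 of 91, reported as 93) is silently missed.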

The issue points to a larger problem with how medical devices are studied and approved: approval rests on trials that involve primarily white participants. Bias in algorithms and clinical trial frameworks has emanated from unrepresentative or incomplete training data and from reliance on flawed information that reflects historical inequalities7 in clinical research. Discrimination related to race, ethnicity, gender, religion, sexual orientation, or expression is common8. Gaps in language access and cultural competency, along with inaccessibility built through structural and systemic bias across the healthcare system, affect who gets access to care and of what quality.
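How unrepresentative training data produces biased outcomes can be shown with a minimal, self-contained sketch (the data, groups, and threshold rule are all invented for this example): a decision cutoff tuned on a dataset dominated by one group achieves perfect accuracy for that group while misclassifying half of an under-represented group whose feature distribution differs.

```python
# Minimal sketch (invented data): a threshold fitted to majority-group data
# performs well for that group but poorly for a minority group that is
# "calibrated" differently. All numbers are hypothetical.

def best_threshold(samples):
    """Pick the cutoff that maximizes accuracy on (value, label) pairs."""
    candidates = sorted({v for v, _ in samples})
    def accuracy(t):
        return sum((v >= t) == label for v, label in samples) / len(samples)
    return max(candidates, key=accuracy)

def error_rate(samples, t):
    """Fraction of samples the rule 'positive if value >= t' gets wrong."""
    return sum((v >= t) != label for v, label in samples) / len(samples)

# Group A (majority): positives cluster above 10.
# Group B (minority): positives cluster above 6 — a different distribution.
group_a = [(12, True), (13, True), (11, True), (8, False), (7, False), (9, False)]
group_b = [(7, True), (8, True), (5, False), (4, False)]

# Training data is ~94% group A, mirroring an unrepresentative trial.
t = best_threshold(group_a * 10 + group_b)
print("fitted threshold:", t)
print("error on group A:", error_rate(group_a, t))  # 0.0
print("error on group B:", error_rate(group_b, t))  # 0.5
```

The fitted cutoff lands where the majority group separates cleanly, so every group A sample is classified correctly while half of group B's true positives are missed — no one programmed the rule to discriminate; the skew in the data did it.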

ChatGPT, the new language-processing AI from OpenAI, is making waves everywhere. The advanced model, which is trained to generate human-like answers, is already being hailed as a game-changer for businesses9 that rely on natural language processing10. But we cannot yet see far enough ahead to understand how AI will respond to healthcare issues, or what that means for medical information. A far wilder version of Wikipedia, ChatGPT lets patients search for answers to medical and insurance questions. Patients could receive health advice that leads them to damaging conclusions and misdiagnosis. The damage from answers that carry no citations, suggestions, or reference points could usher in a new wave of disinformation on many levels. This is not an argument against using AI in healthcare, but for doing so in a more ethical, controlled, and systematic fashion.

With more trust in technology, we will see an increase in engagement. Many still don't trust technology because of privacy concerns. Information is a map, and your information easily leads to that of the people you are connected to11. Digital privacy and respectful information processing mean caring for vulnerable groups, such as elders and women, both of whom are often targeted by cybercrime and insider threats12. Quality privacy contributes to quality cybersecurity; the two are symbiotic. A failure to address these concerns will not only hurt recruitment for trials and adoption by patients, but also keep the most promising innovations from those in grave need of them.

What can we, as an industry, do to keep promising innovations from leading patients astray? Two broad categories of questions arise: What issues must be addressed so participation is fair and equitable? What patient-safety issues might the technology create, and how can they be minimized?13 Many questions about who has access to technology and data remain unanswered. The sweet spot likely lies somewhere between ethical marketing and medical skepticism. Building a framework the industry could adopt, such as submitting existing studies to peer review and using them as the starting point for larger, more robust, more carefully designed studies, is the obvious way ahead. These things take time, money, and resources, all of which are limited for many startups that may have different priorities14.

  1. https://www.gartner.com/en/newsroom/press-releases/2021-09-30-gartner-says-digital-ethics-is-at-the-peak-of-inflate
  2. Digital Medicine Society Playbook, 2020: https://playbook.dimesociety.org/
  3. https://www.nejm.org/doi/full/10.1056/NEJMc2029240
  4. https://www.npr.org/sections/health-shots/2022/07/11/1110370384/when-it-comes-to-darker-skin-pulse-oximeters-fall-short
  5. Health, healthcare, and health research have always been affected by inequity, with ethical implications. Discrimination related to race, ethnicity, gender, religion, sexual orientation, or expression is common. Gaps in language access and cultural competency, along with inaccessibility built through structural and systemic bias across the healthcare system, affect who gets access to care and of what quality. Failure to address issues of fairness and equity can limit access and skew samples.
  6. https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/
  7. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/#:~:text=Bias%20in%20algorithms%20can%20emanate%20from%20unrepresentative%20or,people%20even%20without%20the%20programmer%E2%80%99s%20intention%20to%20discriminate.
  8. Solon Barocas and Andrew D. Selbst, “Big Data’s Disparate Impact,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, 2016), https://papers.ssrn.com/abstract=247789
  9. Pasricha, Sudeep. (2022). AI Ethics in Smart Healthcare. 10.48550/arXiv.2211.06346.
  10. S. Pasricha, "AI Ethics in Smart Healthcare," to appear, IEEE Consumer Electronics, 2023. https://lnkd.in/g_JMzXuD
  11. https://www.pewresearch.org/fact-tank/2021/06/22/digital-divide-persists-even-as-americans-with-lower-incomes-make-gains-in-tech-adoption/
  12. Patient trust must come at the top of researchers’ priority list. Nat Med 26, 301 (2020). https://doi.org/10.1038/s41591-020-0813-8
  13. Digital Medicine Society Playbook 2020; https://playbook.dimesociety.org/
  14. https://www.linkedin.com/posts/ben-schwartz-md_whoop-draws-criticism-for-claiming-its-fitness-activity-7007706610353479680-KsGc