Tackling data bias in AI healthcare: Strategies for ensuring fair and accurate outcomes
Jinu Matthew • April 3, 2023
With an accelerating rate of advancement in artificial intelligence (AI), the healthcare industry is on the cusp of a technology-driven revolution with the potential to transform the way we diagnose, treat, and prevent diseases. However, AI’s enormous potential to improve patient care comes with the risk that bias in AI algorithms could lead to unfair or inaccurate patient outcomes. Jason Johnson and Linda A. Malek of Moses & Singer provided insights on mitigating potential bias in healthcare algorithms, and on the regulations governing AI-driven medical devices, during HITLAB’s Summer Innovator Summit 2022.
Bias can creep into AI algorithms at various phases of development and deployment. The most common source is training data that does not sufficiently represent the target patient population, which can have serious implications for underrepresented groups; ensuring data diversity is therefore an important step toward accuracy and fairness in decision-making. Even if the initial training data set is diverse, bias may arise from choices made during algorithm development. Ignoring differences within populations, such as gender differences, can lead to inaccurate outcomes. Bias can also creep in after market introduction: if an AI device was initially trained on diverse data but is later used predominantly by one population, feeding that narrower population’s data back into the model can skew it. Healthcare companies must implement ongoing monitoring to detect and rectify such biases in real time, as illustrated by the sketch below.

Considering the risks to the safety and effectiveness of AI-enabled technology, several federal and state agencies have put forth laws and regulations that govern these new technologies. Under the Federal Food, Drug, and Cosmetic Act (FD&C Act), AI-based software used for treating, diagnosing, preventing, or curing disease falls under the category of medical devices known as “Software as a Medical Device” (SaMD). The U.S. Food and Drug Administration (FDA) is largely responsible for regulating SaMD and requires that these devices comply with the agency’s Quality System Regulation (QSR). The Federal Trade Commission (FTC) emphasizes transparency and fairness of AI-enabled tools in its guidance: AI companies must be transparent about how automated tools are used, whether sensitive data is collected, and whether consumers could be denied something of value based on algorithmic decision-making. The FDA, in collaboration with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), recently released guiding principles that provide a foundation for industry to promote good machine learning practice.
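To make the monitoring step concrete, here is a minimal sketch in Python, assuming scikit-learn and pandas: it audits a fitted binary classifier’s sensitivity (true-positive rate) for each demographic subgroup and flags any group that trails the best-served group. The model object, the DataFrame layout, the “group” column, and the ten-point gap threshold are illustrative assumptions for this sketch, not requirements drawn from the talk or the regulations above.

```python
# Minimal subgroup-audit sketch (illustrative, not a vetted compliance tool).
# Assumes: a fitted scikit-learn-style binary classifier `model`, features in
# a pandas DataFrame X that includes a hypothetical demographic column
# "group" (used only to stratify the audit, not as a model input), and
# binary labels y indexed like X.
import pandas as pd
from sklearn.metrics import recall_score

def subgroup_sensitivity(model, X: pd.DataFrame, y: pd.Series,
                         group_col: str = "group",
                         max_gap: float = 0.10) -> pd.DataFrame:
    # Predict once on the clinical features, keeping row alignment with X.
    preds = pd.Series(model.predict(X.drop(columns=[group_col])), index=X.index)
    rows = []
    for name, idx in X.groupby(group_col).groups.items():
        rows.append({
            "group": name,
            "n": len(idx),  # subgroup size: a small n means a noisy estimate
            "sensitivity": recall_score(y.loc[idx], preds.loc[idx]),
        })
    report = pd.DataFrame(rows)
    # Flag any subgroup trailing the best-served group by more than max_gap.
    report["flagged"] = report["sensitivity"] < report["sensitivity"].max() - max_gap
    return report
```

Run on a representative held-out set before release, and re-run on rolling windows of post-market data, the same check can surface the population drift described above; the right metric and threshold are clinical and regulatory judgment calls, not properties of the code.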
The development of AI-based medical devices involves the collection and use of health data, and the larger the volume and variety of data elements, the greater the risk of data breaches and privacy violations. Manufacturers and healthcare providers must be aware of and comply with regulations governing health data, such as the Health Insurance Portability and Accountability Act (HIPAA) and the FTC’s Health Breach Notification Rule. Depending on the AI tool’s functionality, state and federal laws may require AI developers and healthcare providers to obtain additional licenses, permits, or registrations. AI companies must take appropriate security measures to safeguard healthcare data, ensure privacy, and build trust in the technology. When entrusting a third party with patient data, it is essential to conduct thorough vendor due diligence to confirm that data is collected and stored responsibly.
“Technology is outpacing regulations in this area,” said Malek when discussing the regulatory framework governing AI-driven tools. Regulatory bodies are continuously evolving and adapting their approaches to address the challenges posed by AI in healthcare. Overall, the guidelines they have issued prioritize increased transparency, improved accountability, reduction of unintended bias, protection of patient privacy, and measures to build trust in AI.

AI can make healthcare more accessible, affordable, and effective. However, it is important to ensure that every patient benefits from it equally, regardless of gender, ethnicity, or origin. By implementing responsible data management practices and following appropriate regulatory guidelines, the healthcare industry can ensure that AI-driven solutions produce fair and accurate results for all patients.