Artificial Intelligence

HLC is identifying barriers, opportunities, and risks with private sector leaders to shape policy and to unleash the potential of AI to improve health.


The Issue

For decades, Artificial Intelligence (AI) has enhanced healthcare delivery and research. As machine learning and generative AI capabilities evolve, these technologies improve diagnostic accuracy, support clinical decision-making, and enable earlier disease intervention. Beyond patient care, AI applications also streamline administrative processes, optimize workflows, and use predictive analytics to forecast disease outbreaks.

However, several challenges hinder broader adoption. Private-sector obstacles include limited access for under-resourced health systems, privacy and security concerns around sensitive data, and incomplete or biased data that can skew outcomes. Regulatory, payment, and liability obstacles include uncertainty at the federal level and inconsistent requirements across states. Policymakers must collaborate with private-sector leaders to address these challenges and unlock AI’s full potential.

Policy Solutions

HLC supports a risk-based, patient-centered approach that fosters AI innovation while ensuring ethical and responsible implementation. Policies should provide regulatory clarity, promote trust, and encourage safe use in healthcare. The following policy recommendations will help unlock AI’s potential:

Protect AI Innovation from Overly Restrictive Regulations

Overregulation could limit AI’s potential to improve patient outcomes. Federal agencies should align their regulation of AI in healthcare while recognizing the different roles entities play in serving patients. HLC recommends avoiding a rigid, one-size-fits-all set of AI regulations and ensuring that AI policies allow for flexibility.

Strengthen Data Integrity and Mitigate Bias

AI systems must be trained on, and rely on, high-quality, diverse, and unbiased data to ensure the accuracy and reliability of patient care. Organizations should identify and mitigate biases across the model development lifecycle and, where appropriate, align with industry-developed standards. HLC recommends implementing robust data governance practices, promoting standardized, interoperable health data frameworks, and incorporating data from vulnerable populations.

Develop Voluntary Federal Guidelines to Standardize AI Best Practices

Fragmented state-by-state AI laws are creating compliance burdens and stifling innovation. A national standard would avoid conflicting requirements and facilitate consistent compliance. HLC recommends establishing voluntary federal guidelines based on best practices, accreditation, and safety standards; providing consistent guidance on AI validation, testing, and risk management frameworks; and ensuring risk assessments are tailored to specific use cases and entity roles, consistent with a risk-based approach.

Define AI Oversight and Ensure Explainability


