
Last week’s historic executive order (EO) on the development and use of artificial intelligence (AI) is teeming with urgency, in both its caution and its optimism, to keep pace with technological advancement. The EO addresses the duality of AI—its promising utility and disconcerting risks—across a range of public and private sectors, including healthcare. Below are five key takeaways from the EO for the healthcare industry.

  • Establish an AI Task Force. Under the EO, the Department of Health and Human Services (HHS) must, within 90 days, establish an HHS AI Task Force in consultation with the Department of Defense and the Department of Veterans Affairs. Within one year, the HHS AI Task Force must develop a strategic plan on the responsible deployment and use of AI-enabled technologies in the health sector, including guidance, policies, and regulatory action to guide AI deployment.
    • The HHS AI Task Force will focus on potential applications such as research and discovery, drug and device safety, healthcare delivery and financing, and public health. It must specifically consider appropriate human oversight, health equity and bias, and safety, privacy, and security standards.
  • Incentivize Responsible AI Innovation. The EO requires HHS to identify and prioritize grantmaking and other awards to advance responsible AI innovation by healthcare technology developers. Recognizing that data quality underlies the success and usability of AI, HHS must prioritize the 2024 Leading Edge Acceleration Project cooperative agreement awards to initiatives that explore ways to improve healthcare-data quality. In addition, the Department of Veterans Affairs must host, within one year of the order, two three-month nationwide AI Tech Sprint competitions to advance AI systems that improve veterans’ healthcare quality and support small businesses’ innovative capacity.
  • Protect Privacy. Privacy is a pervasive subject in the EO, with an eye toward ensuring that agencies stay ahead of the privacy implications of AI and work to mitigate the privacy risks AI may exacerbate.
    • The EO requires the Director of the Office of Management and Budget (OMB) to issue a request for information (RFI) seeking feedback on how privacy impact assessments may be made more effective at mitigating AI privacy risks. The RFI responses must inform potential revisions to guidance to agencies on implementing the privacy provisions of the E-Government Act of 2002.
    • We may see new guidance or rules on de-identified data and de-identification processes as part of this focus on AI and data privacy. AI may be capable of re-identifying data that was properly de-identified under currently accepted standards. On the other hand, AI may be a useful tool for advanced de-identification of data.
  • Ensure Patient Safety. The EO also directs HHS to establish an AI safety program in consultation with other agencies and in partnership with Patient Safety Organizations. This program must establish a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare and specifications for tracking associated incidents that cause harm to patients, caregivers, and other parties—specifically including harm caused through bias or discrimination. The program will analyze captured data and generated evidence to develop best practices and other guidelines aimed at avoiding these harms.
  • Regulate Drug Development. The EO also requires HHS to explore regulation of drug development that utilizes AI. HHS must develop a strategy for regulating the use of AI tools throughout each phase of the drug-development process. It must also identify areas where future rulemaking may be necessary to implement an appropriate regulatory system.

The EO further supports citizen and expert calls for ethical, practical, and thoughtful implementation of AI. Healthcare, along with other critical industries, must ensure AI is deployed as a solution without increasing risk to patients or introducing bias into innovation. The EO is a significant step toward that goal, as it tasks agencies and stakeholders with action to guide AI development and use and, ultimately, to ensure accountability.