Exactly 12 days before Christmas, the U.S. Department of Health and Human Services’ Office of the National Coordinator (ONC) gave the health industry a unique gift buried in a 900+ page rule adoption. The gift? The first comprehensive U.S. regulation delineating the responsible use and oversight of AI used in connection with health care decision-making.
Disagreeing “with commenters who believe that requirements for AI or machine learning-driven decision support is premature,” ONC states: “we believe now is an opportune time to help optimize the use and improve the quality” of these AI tools.
Starting January 1, 2025, certain developers of health IT certified under the ONC rules must meet new transparency requirements. Developers of IT that supports decision-making based either on clinical evidence (aka evidence-based decision support) or on algorithms or models trained on data to make predictions, recommendations, evaluations, or analyses (aka Predictive Decision Support Interventions, or Predictive DSIs) will have to provide information about how the IT is designed and developed, the data sets used to train it (including, for example, data related to race, ethnicity, sexual orientation, and gender identity), and how it is continually evaluated. Health IT developers of Predictive DSIs must also perform risk analysis and risk mitigation “related to validity, reliability, robustness, fairness, intelligibility, safety, security, and privacy.”
Without delving too far into the weeds of this very detailed rule, ONC has provided a roadmap for how AI tools can be developed and monitored responsibly. Developers will be expected to understand, and be able to explain, how a tool was designed and how it works.
It may not be surprising that the first comprehensive U.S. regulation in the AI space involves health IT used to support health care decisions. It also may not be surprising if other regulators (or AI developers themselves) pull from ONC’s rule in an effort to ensure healthy AI development, use, and oversight.