CHAI publishes its blueprint for AI in healthcare

The Coalition for Health AI's guide takes a patient-centric approach and aims to address, among other challenges, barriers to trust in AI and machine learning. It builds on the White House's AI Bill of Rights and NIST's AI Risk Management Framework.

The Coalition for Health AI this week released what it's calling the first blueprint for effective and responsible use of artificial intelligence in healthcare. The group aims for the document to spur further discussion and refinement of recommendations around AI and machine learning – ultimately generating standards and robust technical and implementation guidance for AI-guided clinical systems.

WHY IT MATTERS

The 24-page Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare is the result of a coordination effort led by CHAI and the National Academy of Medicine, working with AI experts from academic medical centers and the healthcare, technology and other industry sectors to align on health AI standards.

CHAI comprises organizations such as Change Healthcare, SAS, Google and Duke Health, among others. Its stated mission – to identify health AI standards and best practices, provide guidance where needed, and increase trustworthiness within the healthcare community – will help inform and clarify areas that need to be addressed in the National Academy of Medicine's AI Code of Conduct, according to the coalition.

"We have a rare window of opportunity in this early phase of AI development and deployment to act in harmony – honoring, reinforcing and aligning our efforts nationwide to assure responsible AI," said Laura L. Adams, senior advisor at NAM, in this week's announcement.

"The challenge is so formidable and the potential so unprecedented. Nothing less will do."

The future of healthcare depends on responsible AI as the technology sees wider use in improving both patient care and healthcare operations to meet growing demand, according to CHAI.

"The needs of all patients must be foremost in this effort," added Dr. John Halamka, president of Mayo Clinic Platform, a cofounder of the coalition.

"In a world with increasing adoption of artificial intelligence for healthcare, we need guidelines and guardrails to ensure ethical, unbiased, appropriate use of the technology. Combating algorithmic bias cannot be done by any one organization, but rather by a diverse group. The blueprint will follow a patient-centered approach in collaboration with experienced federal agencies, academia and industry."

In the blueprint, CHAI acknowledges a growing body of evidence demonstrating that the adoption of AI and machine learning may increase risks of negative patient outcomes and introduce or exacerbate bias.

"There is, therefore, an urgent need for a framework focusing on health impact, fairness, ethics, and equity principles to ensure that AI in healthcare benefits all populations, including groups from underserved and under-represented communities," CHAI said in the guide.

Observing CHAI's efforts to foster responsible development and adoption of AI in healthcare delivery are the Office of Science and Technology Policy, Agency for Healthcare Research and Quality, Centers for Medicare & Medicaid Services, U.S. Food and Drug Administration, Office of the National Coordinator for Health Information Technology and the National Institutes of Health.

CHAI is accepting comments until May 5, 2023, according to its website.

THE LARGER TREND

Many in healthcare believe that by defining fairness and efficiency goals up front in the machine learning process, and by designing AI systems to achieve those goals, biased outcomes can be prevented and the benefits of AI in healthcare operations and patient care can be realized.

CHAI launched in December 2021 to develop consensus, temper the rush to buy AI and machine learning products in healthcare, and arm health IT decision-makers with academic research and vetted guidelines to help them choose responsible technologies that ensure equitable benefit for all patients.

CHAI had accepted public comments on its efforts to develop guidelines until October 2022.

The national Blueprint for an AI Bill of Rights, released by the White House this past year, serves as a guide to define guardrails on AI technology, protecting people from threats to their civil rights, civil liberties and privacy and ensuring equal opportunity to access critical resources and services, including healthcare.

"As a coalition we share many of the same goals, including the removal of bias in health-focused algorithms," Halamka said of the Biden Administration's policy document in CHAI's October progress update.

ON THE RECORD

"The successful implementation and impact of AI technology in healthcare hinges on our commitment to responsible development and deployment," said Eric Horvitz, chief scientific officer at CHAI member Microsoft, in a statement.

"Transparency and trust in AI tools that will be influencing medical decisions is absolutely paramount for patients and clinicians," added Dr. Brian Anderson, chief digital health physician at MITRE, a CHAI cofounder. "The CHAI Blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care."