At NASHP’s 2024 conference in Nashville, Amplifying Sound Health Policy, the role of Artificial Intelligence (AI) in health care emerged as both an opportunity and a concern for state health policy leaders. In a panel moderated by Dan Gorenstein, panelists Jessica Altman, Paige Nong, Micky Tripathi, and Senator Bo Watson emphasized the need for proactive discussions on AI’s role in health care delivery and administration, ensuring its use fosters public trust and comfort.
The panel agreed that AI is poised to transform many aspects of health care, offering both immense promise and challenges. Key themes from the discussion included:
Demystifying AI: There is a real need to foster conversation and education on what AI is (and isn’t). Dr. Nong noted that a good starting point is clarifying the differences between predictive AI tools, which use past data to make forecasts—like Netflix’s recommendation system that suggests shows based on your viewing history—and large language models (LLMs), which understand and generate human-like text. In health care, LLMs can power chatbots and virtual assistants that provide consumers with information about health conditions, treatment options, and health coverage, and answer common health-related questions, improving engagement and self-management.
Learning from Current AI Use: AI can enhance health care in ways such as improving patient care through predictive analytics and streamlining administrative tasks. States have been exploring the use of AI at a slower pace than the private sector. California, home to Silicon Valley, has state agencies that are proactively thinking about how to use AI. Covered California, the state-based marketplace, developed a framework for the use of AI to build buy-in and trust across the agency before launching an AI tool that enhances the consumer and enrollee experience by automating documentation and verification tasks.
The Human Aspect of AI: The panel emphasized that AI should complement, not replace, human expertise. AI is a tool that can be implemented successfully in one setting while failing in another. As states begin using AI, there is an opportunity to expand capacity amid a limited workforce. At the same time, it is critical that states have an infrastructure for human monitoring, evaluation, and continuous improvement to ensure AI is meeting its intended purpose with accuracy.
Addressing AI Risks: AI isn’t without risk, including algorithmic bias, lack of transparency, and security concerns. The panel stressed the responsibility of state leaders at all levels to learn, sharpen their ability to ask the right questions, and work collaboratively to address these potential risks. Areas of consideration include ensuring patient safety, protecting data privacy, and navigating the ethical implications of AI deployment. As AI continues to evolve, proactive conversations on these concerns will be crucial for responsible implementation.
States’ Role in Regulation: States play a critical role in regulating clinical and administrative AI applications. States are enacting a range of legislation on AI, including the establishment of committees to examine the implications of AI, alongside efforts to address transparency, data privacy, algorithmic accountability, and the ethical use of AI in sectors like health care. For example, Senator Watson explained how Tennessee has established the Tennessee Artificial Intelligence (AI) Advisory Council to foster needed conversations across the public, industry, academic, and government sectors.
Federal-State Dialogue: Ongoing collaboration between federal and state governments is crucial for effective AI oversight. Since creating the Office of the Chief Artificial Intelligence Officer (OCAIO) in 2021 to support AI collaboration across its agencies, the U.S. Department of Health and Human Services (HHS) has recently streamlined its operations by bringing together its technology, data, and AI strategies. Assistant Secretary Tripathi noted that we should address both misuse and missed use (missed opportunities). As states consider the use of AI, the HHS Trustworthy AI Playbook offers guiding principles, information on major AI concepts, and guidance on how to use AI solutions safely and confidently. HHS has also released FAQs on Medicare Advantage Organizations (MAOs) using AI in medical necessity decisions. These guidelines require that MAOs ensure AI relies only on approved and defined clinical evidence and factors to support coverage criteria.
Ultimately, building public trust in AI in health care will require education, transparency, and a commitment to responsible use. State leaders, across legislative and executive branches, must engage in these discussions, ensuring any AI tools are implemented effectively and with public confidence. As AI evolves, states have a vital role in harnessing its potential to better serve their communities.
