As artificial intelligence (AI) products continue to make headlines and challenge assumptions about the limits of technology, policymakers and industry leaders must wrestle with questions of transparency, ethics, and security. AI technology has been around for decades, but the rapid emergence of large language models (LLMs) and generative AI in the past year has brought these questions to the forefront. In healthcare applications especially, developers and deployers of AI tools must carefully balance the benefits of AI systems with the need for privacy-focused design. The Connected Health Initiative (CHI) and Duke AI Health held an event focused on this topic and several others at the intersection of health and AI. To learn more about the event, access handouts, and watch the panel presentations, click here.
Representative Greg Murphy (R-NC-03), who keynoted the event, addressed the importance of privacy and data security in healthcare. Rep. Murphy's background in medicine and his position on the House Ways and Means Committee, which handles both health and technology issues, make him uniquely positioned to understand the existing protections for sensitive health data and the ways those protections can be improved. As he noted, AI is not new; it is part of an incremental advancement of technology. Some protections already exist, like those found in the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, but health data also exists outside the scope of HIPAA coverage. That is why a national privacy law is an important first step toward protecting all data, especially health data. Growth in AI system usage in healthcare settings does not change the parameters of a privacy law, but it does underscore the need to act quickly.
Presenters from Duke AI Health took the podium later in the day to explain their work on improving oversight of algorithm-based clinical decision support (ABCDS), including their work on data privacy. Strong oversight of these models, including ensuring that developers and deployers of AI systems uphold privacy protections, is a cornerstone of ethical and helpful AI use in a healthcare environment. As Dr. Michael Pencina, chief data scientist at Duke Health and director of Duke AI Health, highlighted, Duke AI Health centers its work on privacy by design, ethics by design, and health equity by design. The work Dr. Pencina and his team do is vital, but relying on good actors to make good decisions does not deliver privacy and security for everyone. The staff at Duke AI Health should continue their good work, but we also need Congress to step in with a national privacy law.
In our final panel, experts from across the industry came together to discuss health AI governance. They acknowledged that there is no single way to solve the privacy challenges posed by AI system use in healthcare, but agreed that developers and deployers need to consider privacy in both the creation and the use of AI systems. Health AI models need a way to build trust with patients, and one way to do that is a privacy framework that sets expectations for the use of patient data. Part of that structure must also be education for patients and consumers: how AI systems work, what data they are trained on, and which of a patient's data they are or are not using. A strong privacy law at the national level will help lay out some of these rules and encourage better, safer use of AI in the healthcare space.
Throughout the event, experts touched on a wide range of issues related to AI use in healthcare, but many of our panels came back to the necessity of strong privacy laws for the responsible use of health AI. Congress needs to pass a national privacy law to ensure the safety and security of patient data outside the scope of HIPAA, and to promote the responsible, ethical, and safe use of health AI systems.