
Artificial Intelligence and Public Health: Emerging Uses, Risks, and Ethical Considerations

October 15, 2024

Overview

The uses of artificial intelligence in public health are far-ranging, and show potential to advance medicine, treat disease, and reduce workforce burden. However, much remains to be seen as the challenges and ethical complexities surrounding these rapidly evolving tools enter public dialogue and public health authorities navigate the changing landscape.

For the last few years, artificial intelligence (“AI”) has featured prominently in the news and public dialogue. Some of the attention has been positive, including the potential for AI to advance medicine, the treatment of disease, and the discovery of new drugs. People have shared how they deploy AI to enhance their productivity at work, such as by having it summarize articles and reports or assist with software engineering and programming.

There has also been extensive negative coverage of AI. There have been stories about the use of AI to generate increasingly realistic photos and videos, leading to concerns about the proliferation of deepfakes perpetuating misinformation and disinformation. For example, thousands of videos featuring a deepfake Elon Musk in recent months deceived and defrauded scores of people, some of whom drained their life savings believing they were making a legitimate investment.

Additionally, there are myriad ethical concerns around AI: legitimate worries that AI algorithms will perpetuate bias and discrimination (an issue the CDC is examining, particularly related to health equity), fears of job displacement, and genuine skepticism as to whether AI, and generative AI in particular, will live up to its promise.

Against this backdrop, public health is still in the very early stages of understanding AI’s potential applications in the field, approaching the technology with both interest and apprehension. In the National Association of County and City Health Officials’ (NACCHO’s) 2024 Public Health Informatics Profile, only five percent of local health departments (LHDs) reported that they currently used AI, and 84 percent said they had no plans to use AI in the next year.

Large local health departments were three times more likely to use AI when compared to small or medium LHDs. Most LHDs that used AI reported using it to create communication materials or plans. About 70 percent of LHDs not currently using AI reported an interest in using it, with interest substantially higher among urban LHDs. The majority of LHDs also identified perceived risks of AI use in their agency, including threats to data security and cybersecurity and concerns regarding the reliability and accuracy of AI.

There is no uniformly accepted definition of AI, and in fact, our concept of what we consider AI may evolve as certain technologies become more commonplace. For example, tailored recommendations from streaming services and predictive algorithms are examples of narrow, or non-generative, AI. The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence defines artificial intelligence as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” and generative AI as a “class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.” One way to think about generative AI is that it consumes large amounts of data and then creates something novel based on that data.

The uses of AI in public health are far-ranging. Some focus on assisting the public health workforce with completing time-consuming or burdensome administrative work. These include tasks like text generation, such as completing initial drafts of correspondence and grant applications. AI can also ingest and review large amounts of data much more quickly than a human. Public health entities have reported using AI to help clean up anonymous survey data or review and classify large datasets, such as COVID Public Health Emergency Orders (an initiative that the CDC shared at NACCHO’s 2024 Public Health Law Practitioners Convening).

In a recent webinar sponsored by ICF, a presenter shared another CDC initiative that used generative AI to monitor publicly available Facebook pages for unplanned school closures, which can be an important warning sign for outbreaks. Other applications for AI include generating health communications, such as AI chatbots for health education or culturally sensitive and language-specific communications for different populations, helping to further advance health equity. Additional purposes include using AI to review and evaluate data for disease surveillance or epidemiological analysis.

Presenters at the ICF webinar emphasized that one should think of AI as an intern that provides a helpful start to a work product, but the work product still needs to be thoroughly reviewed and vetted by a supervisor (or in this case a human being) before being considered final. This type of guardrail around AI is often referred to as a human-in-the-loop and is foundational to responsible AI use.

Furthermore, as AI advances there may be applications for the public health workforce that have large upsides but also pose greater risks and ethical considerations. There is currently no comprehensive federal regulatory framework, and states are advancing regulation in a piecemeal fashion. Public health departments will need to monitor their own state and local regulations, as well as develop their own policies and procedures.

For public health departments that are interested in utilizing AI, particularly generative AI, but want to ensure it is being safely and thoughtfully adopted, a helpful starting place is the National Association of Counties (NACo) report: AI County Compass: A Comprehensive Toolkit for Local Governance and Implementation of Artificial Intelligence. While not specific to public health, the toolkit approaches AI with a local government lens. It sets out the potential benefits and challenges around AI and identifies four key themes around the use of generative AI: (1) prepare the workforce, (2) establish an ethical framework, (3) promote policy models, and (4) enable responsible applications.

There are opportunities for public health agencies to improve health outcomes and alleviate workforce burden through AI. However, public health agencies also need to be aware of the challenges and ethical complexities around AI, thoughtfully evaluate its use, and ensure appropriate protections are in place.

This post was written by Meghan Mead, Deputy Director, Network for Public Health Law — Mid-States Region.

The Network promotes public health and health equity through non-partisan educational resources and technical assistance. These materials are provided solely for educational purposes and do not constitute legal advice. The Network’s provision of these materials does not create an attorney-client relationship with you or any other person and is subject to the Network’s Disclaimer.

Support for the Network is provided by the Robert Wood Johnson Foundation (RWJF). The views expressed in this post do not represent the views of (and should not be attributed to) RWJF.