Artificial Intelligence in Healthcare: Considerations for Policymakers

By Courtney Burke

In late October 2023, the Biden Administration issued a broad-ranging executive order directing numerous federal government agencies to evaluate Artificial Intelligence's (AI's) potential impacts, both good and bad, and to develop policies and procedures that promote innovation while mitigating potential harms. AI is an umbrella term that includes many subfields, and its applications are numerous. This piece focuses on some of the uses of AI in healthcare and provides broader considerations for policymakers as they seek to promote innovation while also protecting against unintended consequences.

Defining AI

AI is best understood as a wide-ranging term covering a variety of interrelated but distinct subfields. These subfields include machine learning, which trains algorithms on data to model and perform specific tasks; deep learning, which uses layered neural networks loosely inspired by the human brain to perform more complex tasks; natural language processing, which interprets human communication; and robotics, which enables machines to learn and perform tasks in real-world environments.1
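To make the machine-learning subfield concrete, the following is a minimal sketch in Python using the scikit-learn library. It trains a simple classifier on synthetic vital-sign readings; the features, the cutoff values, and the "follow-up" label are illustrative assumptions, not a clinical model.

    # Minimal machine-learning sketch: train a classifier on synthetic
    # vital-sign data. All values here are illustrative, not clinical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic training data: [resting heart rate, systolic blood pressure]
    X = rng.normal(loc=[75, 120], scale=[12, 15], size=(500, 2))
    # Hypothetical label: flag for follow-up when both readings run high
    y = ((X[:, 0] > 85) & (X[:, 1] > 130)).astype(int)

    model = LogisticRegression().fit(X, y)

    # The trained model can now score a new patient's readings
    print(model.predict_proba([[95, 140]]))  # probability of each class

In real systems the training data would come from clinical records, and the model would be validated before any use in care.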

Example Uses of AI in Healthcare

Within healthcare, AI is being used in a variety of ways: enhancing compliance, easing bill payment, detecting fraud, improving reimbursement for the provision of healthcare services, detecting health issues, and determining the best treatment options, to name a few.

AI’s use in healthcare includes, but is not limited to, the following examples:

Chatbots: Chatbots are online interactive discussion tools designed to converse with human beings. A patient can, for example, use a chatbot to be screened and directed to the right medical professional, schedule an appointment, or determine whether next steps are needed in care. More recently, chatbots have been used to provide mental healthcare for people with low to moderate behavioral health needs. There are advantages to using technology in this way. For example, mental health services are not typically available from a traditional human provider during off hours (e.g., between 2 a.m. and 5 a.m.), when patients may need them critically and immediately. It is likely that new and helpful ways will continue to be developed for chatbots to interact with patients, provide health information, and expand access to certain services.
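As an illustration of the screening-and-routing step described above, the following is a minimal rule-based sketch in Python. The keywords and destinations are hypothetical placeholders; production chatbots typically rely on natural language processing rather than simple keyword matching.

    # Minimal rule-based triage sketch: route a patient's message to a
    # hypothetical next step. Keywords and routes are illustrative only.
    ROUTES = [
        ({"chest pain", "trouble breathing"}, "Urgent: call emergency services"),
        ({"anxious", "depressed", "can't sleep"}, "Behavioral health chat"),
        ({"appointment", "schedule", "reschedule"}, "Scheduling assistant"),
    ]

    def triage(message: str) -> str:
        text = message.lower()
        for keywords, route in ROUTES:
            if any(keyword in text for keyword in keywords):
                return route
        return "General nurse line"  # default when nothing matches

    print(triage("I've been feeling anxious at 2 a.m. and can't sleep"))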

Imaging: Perhaps the most promising use of AI in healthcare has been its application to vast amounts of imaging data. The ability to analyze large amounts of data from diagnostic procedures such as X-rays, MRIs, CT scans, laboratory results, and more can lead to faster and more accurate detection of disease as well as more effective treatment, resulting in better health outcomes.
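To sketch what such an analysis can look like, the following is a minimal example in Python using the PyTorch library: a tiny convolutional network scoring a synthetic grayscale "scan." The image size, network shape, and two-class output are illustrative assumptions; real diagnostic models are far larger and are trained and validated on curated clinical datasets.

    # Minimal image-classification sketch: a tiny convolutional network
    # scoring a synthetic grayscale "scan." Illustrative only.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1 input channel (grayscale)
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),                    # pool each channel to one value
        nn.Flatten(),
        nn.Linear(8, 2),                            # two classes: normal / abnormal
    )

    scan = torch.randn(1, 1, 64, 64)    # stand-in for one 64x64 image
    logits = model(scan)
    print(logits.softmax(dim=1))        # untrained, so scores are arbitrary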

Clinical Diagnoses and Intervention: AI has advanced the understanding of disease diagnosis and progression, and that knowledge in turn is being shared more widely to accelerate the development of medical interventions. Studies show that AI is being used to quickly and accurately diagnose diseases such as Alzheimer's, cancer, diabetes, chronic heart disease, tuberculosis, stroke and cerebrovascular events, hypertension, skin and liver disease, and more.

Process Improvement: Because healthcare has evolved to include the collection of large volumes of data, summarizing and keeping track of this information can be daunting for even the most capable health practitioner. Faster synthesis of data and information can free up more time for medical professionals to interact with patients rather than do paperwork. Examples of the use of AI for process improvement in healthcare include improvements in administrative workflow, clinical documentation, patient outreach, medical device automation, and patient monitoring.

Public Health Surveillance and Disease Spread Prevention: During the COVID-19 pandemic there were calls for greater use of public health surveillance to track when and where COVID-19 outbreaks were occurring. Early detection of outbreaks could draw on data from a variety of sources as part of efforts to prevent further spread. AI can improve the ability of public health officials to conduct such surveillance by identifying potential threats earlier and more accurately, detecting anomalies across a more comprehensive set of data sources.
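The anomaly detection mentioned above can be as simple as flagging counts that deviate sharply from a recent baseline. The following is a minimal sketch in Python; the case counts, the seven-day baseline window, and the three-standard-deviation threshold are all illustrative assumptions.

    # Minimal surveillance sketch: flag days where reported case counts
    # deviate sharply from a recent baseline (a simple z-score rule).
    import statistics

    daily_cases = [12, 15, 11, 14, 13, 16, 12, 41, 14, 13]

    baseline = daily_cases[:7]              # first week as the baseline window
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    for day, count in enumerate(daily_cases):
        z = (count - mean) / stdev
        if z > 3:                           # far above the usual range
            print(f"Day {day}: {count} cases flagged as a possible outbreak")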

Improved Patient Experience: Prior to the COVID-19 pandemic, AI was already being used in a wide variety of industries to improve the "customer experience." As noted in a 2020 article in Forbes, improving the customer experience might involve any of the interactions someone would normally have when obtaining a product or service or troubleshooting problems. Such technologies are increasingly being used in healthcare to improve the customer (i.e., patient) experience, so that patients can more seamlessly schedule appointments, gather needed information, and otherwise interact with healthcare providers.

Considerations for Policymakers

The possibilities for the use of AI in healthcare seem endless. As documented by Time, advancements in AI are happening very quickly thanks to breakthroughs in mathematical models, new AI tools and hardware, and the availability of massive datasets. Even with all the positive improvements that AI is bringing to healthcare, there are important considerations for policymakers as new uses continue to develop. These include:

  • Patient Privacy: One of the most significant issues for policymakers to consider when addressing the use of AI in healthcare is how to ensure the protection of privacy for patients' health information. Whether patients know it or not, technology companies already have large amounts of information about people's personal health. Since the passage of the Health Insurance Portability and Accountability Act (HIPAA) in 1996, patient health information has been better protected, acknowledging and addressing the sensitivity and politicization of certain health procedures and diseases. Instituting processes for patient consent for sharing data can help protect patient information, but it can also constrain the ability of AI algorithms to capture and represent the widest array of information possible. Striking the right balance between allowing access to important and helpful information and protecting patients' right to privacy is and will remain an ongoing challenge for policymakers. That balance requires a nuanced response that ensures privacy protections while allowing for innovations that can greatly improve health outcomes overall.
  • Ownership: Closely related to the issue of privacy is the question of who owns health data. Since the health of one person can easily be affected by the health of someone else (COVID-19 transmission being a recent example), there are obvious public health benefits to sharing data. Sharing data can help advance medical practice by more quickly determining the efficacy of different treatments; however, many patients and other stakeholders worry that personal health data could be used to harm individuals or groups (such as with respect to obtaining insurance coverage or employment). And although health providers spend billions of dollars a year to protect that data, as it stands now, there are very few laws or regulations that specify who owns health data. New Hampshire is one state that has enacted a law explicitly making patients the owners of their own health data, an approach other states and the federal government may consider in the future. Health data ownership is a critical issue that policymakers will need to weigh as they consider the degree to which sharing personal health data can serve the public good without hurting individual patients.
  • Bias: The reliability of an AI model's output depends on the data used to train it, and data inputs may not be as fully inclusive of certain populations as healthcare providers or policymakers intend. A well-known example involves the use of imaging to detect skin cancer: training data has historically included more images of white patients than of Black patients, resulting in less accurate detection of skin cancer among people with darker skin. In instances where consumers willingly share health data, such as through an Apple application for healthier living, policymakers should consider that the data is already skewed by the demographics of the people providing it (i.e., those with the means to afford an iPhone or Apple Watch, for example). Studies have even shown that AI algorithms may have preprogrammed biases that cause longer appointment wait times for patients who are Black. Policymakers should be attuned to the development of AI algorithms that may include such input biases; a brief illustrative sketch of how skewed training data degrades accuracy follows this list.
  • Widening Health Disparities: Closely related to potential biases in AI data inputs and algorithms is the potential for access to AI to create additional disparities in care. Existing disparities in healthcare among marginalized ethnic and racial groups are already well documented; policymakers should consider who has access to AI technologies and data and who does not, how that access affects health outcomes, and whether interventions would help mitigate disparities.
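As referenced in the bias discussion above, the following is a minimal sketch in Python using scikit-learn of how underrepresentation in training data can degrade a model's accuracy for one group. The data is entirely synthetic and the "groups" are abstract stand-ins, not real patient populations.

    # Illustrative sketch: a model trained mostly on group A performs
    # worse for underrepresented group B. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def make_group(n, shift):
        # One synthetic feature whose relationship to the label differs by group
        X = rng.normal(size=(n, 1)) + shift
        y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
        return X, y

    # 95 percent of training examples come from group A, 5 percent from group B
    Xa, ya = make_group(950, shift=0.0)
    Xb, yb = make_group(50, shift=2.0)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Evaluate on fresh samples from each group
    for name, shift in [("Group A (well represented)", 0.0),
                        ("Group B (underrepresented)", 2.0)]:
        X_test, y_test = make_group(1000, shift)
        print(name, "accuracy:", round(model.score(X_test, y_test), 2))

The practical remedy is the one the bullet above implies: assembling training data that actually represents the populations a model will serve.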

Conclusion

The use of AI in healthcare to do such things as improve diagnoses, mitigate staffing shortages, enhance consumer experience, promote faster learning and science advancement, streamline processes, and lessen the spread of disease is revolutionary. Policymakers may want to be cautious not to undervalue or forestall such potentially helpful innovation. At the same time, however, there are legitimate reasons to monitor AI’s uses, honor patient privacy, and be mindful of biases in AI applications. Promoting innovation while also protecting privacy will be one of the most important issues facing health policymakers in the next decade.

ABOUT THE AUTHOR

Courtney Burke is senior fellow for health policy at the Rockefeller Institute of Government.


[1] Another way of categorizing AI was developed by Arend Hintze, who defines AI in four main ways: 1) reactive machines, which have no memory and are task-specific (e.g., chess-playing programs, robots); 2) limited memory, which can store experiences and use them to inform future decisions (e.g., self-driving cars); 3) theory of mind, which can understand emotions and beliefs and predict human behavior, and which is in the early stages of being used to improve tools like ChatGPT to understand emotion through language; and 4) self-aware AI, which exists only in theory (e.g., drones with actual emotions, as depicted in various sci-fi movies such as 2001: A Space Odyssey).