AI Advances in Healthcare: Can Patient Privacy Be Protected?
Generative artificial intelligence (AI) has dominated recent headlines, met with both optimism and anxiety as new technologies like ChatGPT rapidly reshape how people obtain and synthesize information. ChatGPT has been at the center of debates over academic integrity in education, but it is also prompting questions about the future of patient privacy in the healthcare sector. AI is already being used to analyze the vast quantities of data stored by healthcare organizations, surfacing insights that would escape manual human review. This capability positions AI to transform the healthcare system in unprecedented ways: the McKinsey Global Institute has projected up to $100 billion in annual value across the US healthcare system, derived from optimized innovation, more efficient research and clinical trials, and new tools for physicians, consumers, insurers, and regulators. Yet for all of AI's appealing efficiencies, its most glaring harm to the system's most fundamental stakeholders cannot be left unchecked: an unprecedented infringement on patients' privacy.
AI is being used to enhance patient engagement and streamline access to care while improving clinician productivity and quality of care. Beyond making the delivery of healthcare more efficient, AI is a novel tool for medical advancement. It is accelerating the development of new pharmaceutical treatments by predicting the most promising drug candidates, an approach estimated to help create 50 new therapies over a decade and potentially reduce costs by billions of dollars annually. The Food and Drug Administration (FDA) has played a crucial role in facilitating AI's integration, approving over 100 drug applications featuring AI components in 2021. The FDA has also authorized numerous AI-enabled medical devices that detect disease, such as atrial fibrillation sensors in smartwatches. These innovations could undoubtedly elevate the accuracy, efficiency, and accessibility of healthcare for stakeholders across the care continuum. With these revolutionary benefits in mind, will there also be a revolutionary future for patient privacy? Will there even be a future that can protect it?
The fundamental legal safeguard of patient privacy is the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires certain "Covered Entities" that handle Protected Health Information (PHI), including health insurance companies, clearinghouses, and healthcare providers, to protect sensitive patient data. Any third-party organization (a "Business Associate") that receives PHI from a Covered Entity must also comply with HIPAA and sign a "Business Associate Agreement" (BAA). Notably, HIPAA governs only "individually identifiable health information"; it does not regulate "de-identified data," which neither identifies an individual nor provides a reasonable basis for doing so. For data to be deemed "de-identified," it must be stripped of 18 enumerated identifiers, such as names, dates, and contact information, and an entity that removes them falls outside HIPAA's reach. While the parameters of de-identification may have been clear-cut when HIPAA was passed in 1996, today's big data, meaning the vast stores of consumer information held by technology companies and data brokers, dramatically expands what can count as "individually identifiable." By cross-referencing a de-identified dataset against the external information big data holders already possess, it may be possible to re-identify individuals. With AI's advanced ability to recognize patterns across enormous quantities of data, there is a glaring risk that de-identified data will be re-identified even after every personally identifiable element has been removed, as the sketch below illustrates.
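To make the risk concrete, here is a minimal Python sketch of a so-called linkage attack. Every record, name, and field in it is hypothetical and deliberately simplified: it joins a dataset stripped of HIPAA's direct identifiers against an external consumer dataset on a few residual quasi-identifiers, which is enough to reattach names to diagnoses.

```python
# Minimal sketch of a "linkage attack": re-identifying records that have been
# stripped of HIPAA's 18 direct identifiers by joining them with an external
# dataset on quasi-identifiers. All records below are hypothetical.

# A "de-identified" clinical dataset: no names, no record numbers, no exact dates.
deidentified_records = [
    {"zip3": "029", "birth_year": 1957, "sex": "F", "diagnosis": "atrial fibrillation"},
    {"zip3": "029", "birth_year": 1984, "sex": "M", "diagnosis": "type 2 diabetes"},
]

# An external consumer dataset of the kind big data holders already possess.
external_records = [
    {"name": "Jane Doe", "zip3": "029", "birth_year": 1957, "sex": "F"},
    {"name": "John Roe", "zip3": "029", "birth_year": 1984, "sex": "M"},
]

# Fields that survive de-identification yet still narrow down identity.
QUASI_IDENTIFIERS = ("zip3", "birth_year", "sex")

def reidentify(clinical, external):
    """Link each clinical record to every external record that matches on all
    quasi-identifiers, attaching a name to a supposedly anonymous diagnosis."""
    matches = []
    for c in clinical:
        for e in external:
            if all(c[k] == e[k] for k in QUASI_IDENTIFIERS):
                matches.append((e["name"], c["diagnosis"]))
    return matches

print(reidentify(deidentified_records, external_records))
# [('Jane Doe', 'atrial fibrillation'), ('John Roe', 'type 2 diabetes')]
```

The point of the sketch is that no single field here is an identifier under HIPAA, yet the combination of a few innocuous fields can single out an individual once an outside dataset is available, and AI systems excel at exactly this kind of pattern matching at scale.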
With this threat to the legitimacy of de-identified data, can AI be HIPAA compliant? ChatGPT is a focal point in the discussion of data privacy risks, yet it has been praised by leading medical institutions, including the National Institutes of Health, for applications ranging from identifying potential research topics to assisting professionals in clinical and laboratory diagnosis. One workstream ChatGPT could revolutionize is medical recordkeeping: AI-generated summaries of patient interactions and medical histories could lift a substantial administrative burden from caregivers. Despite the tempting time savings, clinicians and health systems could unintentionally violate HIPAA and incur substantial penalties by pasting medical information into chatbot queries, since doing so places PHI in hands beyond Covered Entities. OpenAI, the company that develops and operates ChatGPT, has updated its enterprise privacy guidelines to let users contact its sales team and sign BAAs in support of customers' HIPAA compliance. While this appears to resolve the problem of unintentionally releasing PHI to big data giants, users on the OpenAI Developer Forum have reported frequent unresponsiveness when seeking BAAs. Given this unreliability, data privacy best practices strongly emphasize anonymizing health data before it is processed by ChatGPT to mitigate the risk of PHI breaches, which brings the discussion full circle to the tension between legitimately de-identified data and the pattern-matching power of AI.
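As an illustration of what "anonymizing before processing" might look like in practice, the following is a minimal, hypothetical Python sketch that scrubs a few recognizable identifiers from a clinical note before it is sent anywhere. The patterns, placeholder tokens, and sample note are invented for illustration; simple pattern matching of this kind covers only a fraction of HIPAA's 18 identifiers and is nowhere near sufficient for actual compliance.

```python
import re

# Illustrative patterns for a handful of HIPAA's 18 identifiers. Real
# de-identification requires far more than regular expressions; treat this
# as a sketch of the pre-processing step, not a compliance tool.
REDACTION_PATTERNS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",              # Social Security numbers
    r"\b\d{2}/\d{2}/\d{4}\b": "[DATE]",             # dates such as 06/14/1957
    r"\b\(?\d{3}\)?[ -]\d{3}-\d{4}\b": "[PHONE]",   # US phone numbers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",      # email addresses
    r"\bMRN[: ]*\d+\b": "[MRN]",                    # medical record numbers
}

def scrub(note: str) -> str:
    """Replace recognizable identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTION_PATTERNS.items():
        note = re.sub(pattern, placeholder, note)
    return note

note = "Pt (MRN: 448291, DOB 06/14/1957) called from (401) 555-0139 re: afib."
print(scrub(note))
# Pt ([MRN], DOB [DATE]) called from [PHONE] re: afib.
```

Even a careful version of this step leaves behind exactly the quasi-identifiers shown in the earlier linkage sketch, which is why scrubbing before querying a chatbot mitigates, but does not eliminate, the re-identification risk.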
Resting highly confidential medical information on the fragile foundation of de-identified data suggests that certain AI technologies, such as chatbots, should simply be avoided by medical professionals until modernized data privacy regulations can match the realities of big data. Disastrous breaches of patient privacy at the hands of AI have already begun: in June 2019, the University of Chicago's medical center was sued for sharing hundreds of thousands of patient records with Google that retained identifiable date stamps and doctors' notes. Beyond breaches committed by healthcare entities, AI's power to infer sensitive information about patients is exemplified by Target, which was found to infer pregnancy from predictive analysis of customers' shopping habits. HIPAA is rooted in a fallacy that is no longer compatible with the technology of our time: the belief that data can be successfully stripped of personal information and never subsequently re-identified. This unprecedented reality must be met with utmost caution to prevent patients' medical data and other sensitive information from reaching any hands other than their healthcare providers'.
This direct threat to patients' personal data and lives prompts time-sensitive questions about how Congress and existing legal structures can address the rapid evolution and deepening integration of AI. The existing legal safeguard, HIPAA, is plainly antiquated, but is it even possible to update HIPAA to keep pace with AI, or would AI's rapid evolution simply outpace new legal safeguards again within five years? Time is of the essence, and it is a race that legislation, on its current trajectory, will ultimately lose to technology.
Hence, because patient privacy cannot be protected by continually reshaping HIPAA's guidelines alone, stronger regulation from Congress and other federal agencies is required. The Federal Trade Commission (FTC) has condemned chatbots' ability to win undeserved trust, which can steer users into queries in which they unintentionally enter identifiable data. Beyond condemning specific AI technologies like chatbots for their data privacy risks, agencies must conduct due diligence before authorizing AI in other medical settings where the threat to privacy may seem less glaring. Congress must also take broad responsibility for establishing new regulatory frameworks for AI in healthcare, an effort Senate Majority Leader Chuck Schumer (D-NY) initiated with the Senate's first AI Insight Forum in September 2023. That forum should be the beginning of a consistent congressional push for oversight and regulation of AI's integration into the healthcare system. While such efforts may meet fierce opposition from supporters of reduced government regulation of technology, innovation serves no purpose if it exploits personal data and jeopardizes the lives of the very patients who rely on the system it aims to improve.
Ashley Ganesh is a junior at Brown University, concentrating in Business Economics and International & Public Affairs. She is a staff writer for the Brown Undergraduate Law Review and can be contacted at ashley_ganesh@brown.edu.
Kourtney Beauvais is a sophomore at Brown University, concentrating in International and Public Affairs. She is an editor for the Brown Undergraduate Law Review and can be contacted at kourtney_beauvais@brown.edu.