OpenAI's ChatGPT API: A Legal Overview for Digital Health Developers

Healthcare innovators and digital health developers are constantly on the lookout for groundbreaking technologies to revolutionize patient care. In this fast-evolving space, the integration of advanced technologies like OpenAI's GPT-4 is becoming increasingly common.   

In the last few months, I’ve observed a marked increase in conversations with clients either (1) building a digital health solution using large language models (LLMs), or (2) improving or supplementing their existing solution’s functionality using LLM technology. (Note: ChatGPT is an LLM app built by OpenAI.) 

While this new tech offers exciting opportunities for innovation, it has limitations. If you're a healthcare entrepreneur using GPT-4 to drive your company's technology, understanding the intricacies of OpenAI's usage policies is crucial to ensuring your product complies with legal standards.

Let’s start with the basics. 


What Is a Large Language Model (LLM)?

LLMs are generative artificial intelligence models whose algorithms use deep learning techniques and massive amounts of data to understand, summarize, generate, and predict new content. 

In English, please?

When I first learned about LLMs, I prompted ChatGPT to explain it to me like I was 10 years old. Here’s the output, if you fancy:

“Imagine a large language model as a super-smart robot that can read and remember almost everything written on the internet—like articles, books, and websites. It's like having a huge library in its brain. When you ask it a question, it quickly searches through all that information to find the best answer it can.

This robot learns by seeing patterns in the way people write and talk. It notices how words and sentences are usually put together, so when you ask it something, it can write back in a way that sounds natural, like a human would.

But remember, it's still a robot. It doesn't think or feel like a person. It can be really helpful for answering questions or helping with homework, but sometimes it might not get things exactly right because it's basing its answers on what it has read, not on real-world experience.”

Thanks, cuz.


What Is ChatGPT, and What Does It Do?

It’s a chatbot. It’s a really, really, really fast and “smart” chatbot. In practice, it can be so much more than a chatbot. But also, it’s a chatbot. 

You can “speak” to it using natural language (i.e., you don’t have to write code—you can chat with it like you chat with your sister). It uses a vast amount of data from the internet and a pre-trained algorithm to respond to your questions in a very human-like way. 

In reality, the algorithm doesn’t understand a question, nor does it know the right answer. It predicts the best answer to your prompt based on its knowledge of language, syntax, relationships between words, and vast amounts of substantive knowledge available digitally throughout the world.

ChatGPT is available for personal use, but OpenAI also provides an API to software developers who seek to integrate its LLMs into their products and services. 
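
To give you a flavor of what “integrating” looks like, here’s a minimal sketch using OpenAI’s official Python library. The model name and prompts are illustrative placeholders, and you’d need an API key configured in your environment:

```python
# A minimal sketch of calling the OpenAI API with the official Python library.
# Assumes the OPENAI_API_KEY environment variable is set; the model name and
# prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an LLM is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```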


How Do Digital Health Companies Use ChatGPT’s API?

Healthcare technologists use OpenAI's API in a number of ways. ChatGPT Enterprise, which launched in August 2023, allows companies to customize the model with their own company data, so it can respond based on their particular industry, product, workflow, and other proprietary information.

For innovative healthcare companies, this opens new possibilities for intelligent chatbots that understand medical terminology, health data contexts, clinical protocols, and more. By ingesting relevant datasets, from EMR data to telehealth transcripts, ChatGPT Enterprise can become fluent in the nuances of delivering virtual care, remote monitoring, and other digital health services.

New use cases are emerging daily, but here are some common ones I’m seeing:

  • Virtual health assistants: Creating intelligent chatbots that can provide basic health information, answer patient inquiries, and guide users through health protocols or medication management, thus enhancing patient engagement and support (see the sketch below).

  • Medical information analysis: Analyzing medical literature, patient feedback, or clinical notes to extract valuable insights to assist in the development of better healthcare strategies and understanding patient needs.

  • Personalized patient education: Developing systems that generate personalized educational content for patients based on their medical history, condition, and treatment plans.

  • Automated medical documentation: Assisting healthcare providers in transcribing and organizing medical notes, which can reduce the time spent on paperwork and allow more focus on patient care.

  • Language translation for medical content: Translating medical documents and patient information, making healthcare more accessible to non-English speaking patients, and expanding the reach of digital health services.

  • Research and drug development: Scanning and summarizing vast amounts of research papers or clinical trial data so researchers can discover new treatments and enhance their understanding of diseases.

  • Customized health recommendations: Developing systems that offer personalized health and lifestyle recommendations to users, based on their health data and habits.

The API’s flexibility allows for a wide range of uses, but it's critical for developers to stay within the legal parameters set by OpenAI.
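
To make the first use case above concrete, here’s a hedged sketch of a virtual health assistant built on the API. The system prompt wording is my own illustration, not OpenAI-prescribed language; note how it steers the model away from diagnosis and treatment, consistent with the usage policies discussed below.

```python
# Illustrative sketch of a virtual health assistant built on OpenAI's API.
# The system prompt wording is hypothetical: it constrains the assistant to
# general information and away from diagnosis or treatment recommendations.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a virtual health assistant. Provide general health information "
    "and medication-schedule guidance only. Never diagnose a condition or "
    "recommend a treatment; instead, refer users to a licensed clinician."
)

def ask_assistant(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

A system prompt alone isn’t a compliance strategy, of course; you still need the disclaimers, human oversight, and guardrails discussed below.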


How Do Health Tech Developers Comply with OpenAI’s Terms and Policies?

If you're integrating GPT-4 into your existing product via OpenAI’s API, understanding these terms is crucial. OpenAI has laid out clear guidelines for its commercial users, specifically those using OpenAI’s models for healthcare technology products and services. Let's break down what this means for you. 

Disallowed Uses

The following uses are explicitly disallowed, according to OpenAI’s usage policies:

  1. Activity that violates people’s privacy, including:

    1. Tracking or monitoring an individual without their consent

    2. Facial recognition of private individuals

    3. Classifying individuals based on protected characteristics

    4. Using biometrics for identification or assessment

    5. Unlawful collection or disclosure of personally identifiable information or educational, financial, or other protected records

  2. Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition

    1. OpenAI’s models are not fine-tuned to provide medical information. You should never use these models to provide diagnostic or treatment services for serious medical conditions.

    2. OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.

  3. OpenAI’s models can accept images as inputs, but image capabilities are not designed or intended to be used as a medical device or to perform any medical function and should not be used as a substitute for professional medical advice, diagnosis, treatment, or judgment.

Requirement for Disclaimer for Consumer-Facing Apps

Digital health companies using OpenAI’s APIs for consumer-facing products and services must provide a disclaimer informing users that AI is being used and of its potential limitations. In addition, any automated systems (including chatbots and conversational AI) must disclose to users that they are interacting with an AI system. Disclaimers should be tailored to your specific products or services and may also state whether your offering is human-supervised. 
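
In practice, that disclosure can be as simple as a standing notice attached to every chatbot reply. Here’s one hedged sketch; the disclaimer wording and helper function are my own illustrations, so tailor the language to your product (and your counsel):

```python
# Illustrative only: one way a consumer-facing chatbot might surface the
# required AI disclosure. The disclaimer wording is hypothetical; tailor it
# to your product, and only claim human supervision if it actually exists.
AI_DISCLAIMER = (
    "You are chatting with an AI system, not a human. Responses may be "
    "inaccurate or incomplete and are not medical advice."
)

def wrap_response(model_output: str) -> str:
    """Prepend the AI disclosure to every reply shown to users."""
    return f"{AI_DISCLAIMER}\n\n{model_output}"
```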

Plug-In Requirements

For developers building plug-ins:

  • The plugin manifest should accurately describe the plugin’s functionality and align with the capabilities of its API (see the sketch after this list). This is crucial for clarity and transparency.

  • Avoid including any irrelevant or misleading information in the plugin manifest, as well as in the OpenAPI endpoint descriptions or plugin response messages. This includes steering clear of any instructions that could potentially bypass or interfere with OpenAI’s safety systems.

  • Your plugin should not automate conversations in a way that simulates human responses or uses pre-programmed messages without disclosure. Transparency is key here.

  • If your plugin distributes content generated by ChatGPT, such as emails or messages, it is imperative to clearly indicate that this content is AI-generated. This maintains honesty and trust in the interactions facilitated by your plugin.
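
To make these requirements concrete, here’s a hedged sketch of what a transparent manifest might contain, shown as a Python dict. The field names follow OpenAI’s published ai-plugin.json format, but the plugin, values, and URLs are all hypothetical:

```python
# A hypothetical plugin manifest, shown as a Python dict for illustration.
# Field names follow OpenAI's ai-plugin.json format; every value and URL is
# a placeholder. Note that the descriptions match what the plugin actually
# does and contain no instructions aimed at OpenAI's safety systems.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Health Info Helper",
    "name_for_model": "health_info_helper",
    "description_for_human": "General, non-diagnostic health information.",
    "description_for_model": (
        "Returns general wellness information from the plugin's API. "
        "Does not diagnose conditions or recommend treatments."
    ),
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "legal@example.com",
    "legal_info_url": "https://example.com/legal",
}
print(json.dumps(manifest, indent=2))
```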

Privacy

Users must provide legally adequate privacy notices, obtain necessary consents for the processing of personal data, and process personal data in accordance with applicable law. In addition, developers must sign a Business Associate Agreement (BAA) with OpenAI before using the models to create, receive, maintain, transmit, or otherwise process any information that includes or constitutes protected health information (PHI), as defined by the HIPAA Privacy Rule (45 C.F.R. Section 160.103). 

*A note for companies building in the pediatric space: users are not permitted to send any personal information of children under 13 or under the applicable age of digital consent.


What Happens If You Don't Follow OpenAI's Terms?

Your company has thirty days after receiving written notice of a material breach of the Terms to cure the breach before OpenAI can terminate your access. However, OpenAI can immediately suspend access or terminate the agreement (1) to prevent a security risk or other credible risk of harm or liability to “any third party”; or (2) for repeated or material violations of the OpenAI Policies. 

The Terms say that OpenAI will make “reasonable efforts” to notify users before suspension or termination, but as written this gives the company a broad set of rights to terminate access, including if they believe a digital health company’s use of the API could cause potential patient harm.

Suspension or termination could have catastrophic consequences for companies whose business models rely on access to OpenAI’s API. In addition, irresponsible or improper use of this technology in the healthcare context could impact both patient safety and patient privacy. OpenAI specifically states in its Terms that use of its models may, in some situations, result in Output that “does not accurately reflect real people, places, or facts.” This means that healthcare information, if not checked by humans, could be inaccurate or misleading. It is likely for this reason that OpenAI bars the use of its technology for diagnosis or treatment purposes. 

Developers and entrepreneurs must understand the specifics of these Terms and Policies, and must keep this broader context in mind when building their GPT-powered solutions. 


What Are Best Practices for Compliance with OpenAI’s Terms and Policies?

To ensure compliance, developers should:

  • Regularly review OpenAI’s Business Terms and Usage Policies.

  • Implement OpenAI’s free moderation endpoint and safety best practices (see the sketch after this list).

  • Use disclaimers where necessary, informing users of AI’s involvement and limitations.

  • Use best efforts to comply with applicable data protection laws. 

  • Incorporate human-supervised protocols when appropriate, such as during product testing and when confronted with high-risk decision-making. 

  • Avoid using the technology in disallowed areas, such as direct medical advice or high-risk decision-making.
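
On the moderation point above, here’s a minimal sketch of screening user input with OpenAI’s free moderation endpoint before it ever reaches the model. The endpoint is real; how you handle flagged content (blocking, logging, escalating to a human) is the illustrative part:

```python
# A minimal sketch of screening user input with OpenAI's free moderation
# endpoint before forwarding it to the model. The handling of flagged
# content shown here is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

def screen_input(user_text: str) -> bool:
    """Return True if the input is safe to forward to the model."""
    result = client.moderations.create(input=user_text).results[0]
    if result.flagged:
        # Illustrative handling: block the request and escalate to a human
        # reviewer rather than letting the model respond.
        return False
    return True
```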


What Other Risks Should I Consider When Working with AI and LLMs?

OpenAI has additional policies related to research, social media, use of the OpenAI name or logo, publication of content co-authored with the OpenAI API, etc. We encourage all users to review all relevant policies to ensure compliance. 

Other areas of risk include: patient data privacy and security, inaccuracy and algorithmic bias, FDA software as a medical device (SaMD) compliance, violation of state scope-of-practice or licensure laws, intellectual property, the FTC’s consumer protection laws, and liability for technology errors and omissions and/or professional liability. 

Integrating OpenAI’s GPT-4 into your digital health application can be a game-changer, but it’s vital to navigate this space with a clear understanding of the legal landscape. By adhering to OpenAI's terms and policies, you can ensure that your innovative project is not only successful but also compliant and ethically responsible.

Stay tuned for more content about how the law is evolving alongside the explosion of AI-powered technologies in healthcare.


Want to talk to a healthcare innovation attorney about the risks and opportunities of interfacing with ChatGPT or other LLMs? Click here to start the conversation.