AI and Generative Technology Use at CUIMC
As we embrace innovative technologies to support the mission of Columbia University Irving Medical Center (CUIMC), it is crucial to address the responsible use of Artificial Intelligence (AI) and generative technology tools at CUIMC to ensure compliance with healthcare regulatory standards and to safeguard sensitive information. This section outlines CUIMC’s supplemental IT guidance on responsible and appropriate use of AI tools, including generative AI tools, while adhering to Columbia’s Generative AI Policy. Please review the Columbia University Data Classification Policy, which defines the types of data referenced throughout this document.
Below are the guidelines outlining privacy and security principles, and prohibited and judicious uses of AI:
- Guidelines for the Use of AI in Clinical Care Delivery
- Guidelines for the Use of AI in Education and Training with Sensitive Data
- Guidelines for the Use of AI in Research with Sensitive Data (including RHI)
Approved Tools
Columbia University provides access to HIPAA-compliant versions of OpenAI’s ChatGPT and Microsoft Copilot*, enabling our workforce to leverage these AI tools responsibly and compliantly. Workforce members must access CUIMC-approved enterprise versions of AI tools using their CUIMC-issued accounts. This authentication ensures a secure and compliant environment.
To help users confirm they are using the approved software versions, visual indicators are displayed within the interfaces of CUIMC-sanctioned AI tools. Users should always verify these indicators before entering any CUIMC-related data. If these elements are missing, you may not be using the approved instance. In such cases, immediately stop and report the issue to the CUIMC IT Security Office.
- Columbia ChatGPT Education: Displays the Columbia University logo at the top right of the interface.
- Available via the CUIT website https://www.cuit.columbia.edu/content/chatgpt-education
- Microsoft Copilot*: Included with CUIMC’s Microsoft license, with built-in contractual data protections. When used correctly via your CUIMC-issued O365 account (UNI@cumc.columbia.edu), all prompt data remains within the CUIMC tenant and is not shared externally or used for training.
- Access via https://copilot.cloud.microsoft/
- Displays a shield icon (which may appear as a green outline with a checkmark, a grey outline on a white background, or a white outline on a dark background, depending on the device) in the chat interface, indicating that Enterprise Data Protection applies to the session. Additionally, confirm your CUIMC email address is shown, indicating you are logged into the CUIMC account.
- The use of third-party plugins and externally accessible prompts is currently restricted on CUIMC’s Microsoft Copilot.
Non-HIPAA Compliant Tools Approved for Use With Non-Sensitive Data:
- Columbia CU-GPT: This service has been replaced with CU Chat.
- CU Chat: A streamlined chat interface offering cost-effective access to multiple AI LLM models for various language tasks.
- Available at https://www.cuit.columbia.edu/content/chat
- Unlike ChatGPT Education, CU Chat is not approved for Sensitive data use at this time. CUIMC users can request access for processing non-sensitive data in accordance with institutional guidelines.
At this time (February 2026), the following AI Chat services offered through CUIT are not approved for use with Sensitive data or any other HIPAA-regulated data:
- Google Gemini
- NotebookLM
- Anthropic Claude
These platforms are not currently HIPAA compliant. Workforce members must not enter, upload, or share Sensitive data (including patient identifiers, medical records, or any information that could reasonably be linked to a patient) when using these tools. Use of these services should be limited to non-sensitive, non-confidential content only until further notice. Updates will be provided if and when HIPAA-compliant configurations become available.
What's coming
AI tools in the AWS, GCP, and Azure platforms are being reviewed for use with Sensitive Data. These platforms will offer access to many of the leading models through their “model gardens” as well as advanced tools within a HIPAA-compliant environment. Once new tools are approved for use with sensitive data, this page will be updated to reflect those changes.
Summary of Approved AI Tools
| Name | Access Link | Approved Use |
|---|---|---|
| Columbia ChatGPT Education | https://www.cuit.columbia.edu/content/chatgpt-education | Approved for use with Sensitive Data (PHI) |
| Microsoft Copilot* | https://copilot.cloud.microsoft/ | Approved for use with Sensitive Data (PHI) |
| CU Chat | https://www.cuit.columbia.edu/content/chat | Approved for use with non-sensitive data |
Governance
Reviews of AI-related requests are based on the use case and nature of the request.
General Questions
General questions about availability and use of AI tools and environments can be addressed by emailing 5HELP (5help@cumc.columbia.edu).
Educational and Administrative Use Cases
Submit a request through the CUIMC IT PMO request process; CUIMC IT and relevant review groups will evaluate compliance, security, and data handling risks.
Research Use Cases
Requests should be submitted through the existing IRB process(es) and will be routed to the CUIMC AIGC for review as appropriate.
Clinical Use Cases
Clinical use of AI should be submitted to the NYP IT PMO and will be routed for review by the Tri-Institutional AI Governance Committee as appropriate.
Data Classification and AI Use
| Data Type | Definition | AI Use Guidance |
|---|---|---|
| Sensitive Data | Information requiring protection due to privacy, regulatory, or security concerns (PHI, RHI, PII). | Permitted only on the ChatGPT Education and approved Microsoft Copilot* platforms. Research protocol use requires IRB and TRAC/ACORD approval. |
| Confidential and Internal Data | Contractually protected or proprietary data such as internal policies, drafts, student records, financial data, or unpublished reports. | Permitted on approved ChatGPT (Columbia ChatGPT Education and CU-GPT) and MS Copilot*. |
| Non-Public Data | General CUIMC-related data classified as sensitive, confidential, or internal. | Not allowed on unapproved AI tools (e.g., free/personal/commercial ChatGPT, Gemini, Claude). |
Note: CU-GPT is not approved for Sensitive data use.
Training Modules
- CUIT Training on Generative AI (scroll down to “Training Videos”)
- CUIMC Training on Responsible Use of AI (link is external and opens in a new window)
- VP&S Training on AI Clinical Care Delivery (link is external and opens in a new window)
- LinkedIn learning resource portal (link is external and opens in a new window).
Other Resources
Additional Guidance
All use of AI, Large Language Model (LLM), Natural Language Processing (NLP), or Machine Learning (ML) systems at the Medical Center must comply with HIPAA and other relevant healthcare and IT regulations to uphold the highest standards of patient privacy and data security. For locally installed AI models (i.e., LLM, NLP, ML), a formal IT Risk Assessment review is required before deployment to evaluate potential security, privacy, and compliance risks.
CUIMC-IT may monitor AI software usage to ensure compliance with these guidelines. When generating AI content for audiences beyond your immediate use, it is imperative to verify all AI-generated information through authoritative sources and report its use in approved research activities. This practice ensures information integrity and helps avoid potential AI-related negative outcomes such as the dissemination of inaccuracies or biases.
Should you have any questions regarding AI tools, become aware of any data exposure or misuse of sensitive information, or need to report issues verifying approved tool instances, contact the CUIMC IT Security Office.
*Microsoft Copilot is an overarching brand that covers several separately licensed Microsoft AI products. As of May 2025, CUIMC-approved HIPAA-compliant products include Copilot Chat (in the Edge browser), Copilot Pro (in Teams), M365 Copilot a.k.a. Copilot Premium (integrated into the M365 “Office” suite), and Copilot in Microsoft Fabric. At this time, any other Copilot products, such as Copilot Studio and GitHub Copilot in VS Code, are not approved for use with sensitive data.