New York courts adopt interim policy on AI use
Effective October 2025, the New York State Unified Court System introduces its first formal framework governing AI use among judges and court staff.

The New York State Unified Court System (UCS) has released an Interim Policy on the Use of Artificial Intelligence, establishing clear boundaries and responsibilities for the use of AI, particularly generative models, across the judiciary. The policy, which took effect in October 2025, applies to all judges and nonjudicial employees, covering both court-owned and personal devices used for official work.
The document outlines strict ethical and security standards to ensure that AI supports, but never replaces, human judgement in legal proceedings and administrative tasks. It recognises AI’s growing role in public institutions while emphasising the risks of bias, misinformation, and data leakage associated with automated tools.
The policy defines AI as any machine-based system capable of making predictions or decisions based on human-defined objectives. Generative AI, such as ChatGPT or Microsoft Copilot, is recognised as a tool that can generate text, summaries, or code based on training from vast datasets.
While AI may help draft memos, summarise documents, or simplify communication, the policy makes clear that human oversight is mandatory. All AI-generated content must be reviewed for factual accuracy, neutrality, and inclusiveness before being used in official work.
The policy warns that generative AI programs can ‘hallucinate,’ meaning they may fabricate information or citations. Because of this, AI is prohibited from performing legal analysis or research without independent verification.
The UCS identifies three primary risks tied to AI use:
- Inaccurate or fabricated content, which could mislead judicial reasoning or record-keeping.
- Bias and prejudice, since AI models reflect the cultural and social biases present in their training data.
- Data vulnerability, as publicly available tools may reuse user inputs for training, potentially exposing confidential court information.
To mitigate these risks, the policy bans the entry of any confidential or identifying information, including docket numbers, party names, or court documents, into AI systems not controlled by the UCS. Even ‘public’ court records are treated as confidential for AI use, since cases can later be sealed.
Guiding principles and requirements
The policy treats AI as a supportive tool rather than a decision-maker. Judges and staff remain solely responsible for the content and decisions derived from their work. All use of AI must comply with judicial ethics rules under 22 NYCRR Parts 50 and 100, ensuring impartiality, confidentiality, and accountability.
To standardise use, the UCS mandates that:
- Only AI tools approved by the Division of Technology and Court Research (DoTCR) may be used.
- All employees must undergo initial and ongoing AI training.
- AI-generated content must be carefully reviewed and revised.
- Public or paid AI tools cannot be used without explicit UCS approval.
Personal use of AI on court-owned devices is strictly prohibited.
The appendix lists both private and public AI tools cleared for use under specific conditions. Among private tools, Microsoft Azure AI Services and Microsoft 365 Copilot Chat are approved for productivity tasks within the court’s secure cloud systems. For developers, GitHub Copilot is permitted under supervised licensing.
The only public tool explicitly named is the free version of OpenAI's ChatGPT, though with heavy restrictions: no confidential information may be entered, and paid versions are banned.
Why does it matter?
The interim policy, which applies to all judges, justices, and nonjudicial employees within the New York Unified Court System, restricts the use of generative AI to officially approved tools and requires all staff to undergo AI training. With this framework, New York positions itself as a potential model for other state and federal institutions, and possibly international counterparts, seeking to develop governance approaches for emerging technologies that balance innovation with integrity.