Trials & Litigation
Lawyers should alert clients that AI prompts can be used in court

People who solicit advice from artificial intelligence are being cautioned by U.S. lawyers not to confide in chatbots, because those conversations can be used against them in court. (Image from Shutterstock)
A judge in the U.S. District Court for the Southern District of New York recently ruled that a former CEO of a bankrupt financial services group accused of securities fraud had to disclose his AI chats to prosecutors.
Exchanges with chatbots such as Claude by Anthropic and ChatGPT by OpenAI, which are not lawyers, have prompted more than a dozen major U.S. law firms to outline advice for clients on AI use, aiming to lessen the likelihood of those chats being used in court.
In one firm’s engagement agreement with a client, the contract stipulated that sharing a lawyer’s advice or communications with a chatbot could waive attorney-client privilege.
The New York case involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings, who pleaded not guilty to charges of securities and wire fraud. His attorney argued that reports Heppner had prepared using the chatbot Claude, along with his AI exchanges, should be withheld because they contained details from his lawyers about his defense.
“We are telling our clients: You should proceed with caution here,” said New York-based lawyer Alexandria Gutiérrez Swette, who works at Kobre & Kim, according to coverage by Reuters.