Florida AG opens criminal investigation into OpenAI and ChatGPT
At a glance:
- Florida Attorney General James Uthmeier has launched a criminal probe into OpenAI following a mass shooting at Florida State University.
- The investigation explores whether ChatGPT's responses could constitute aiding and abetting a crime under Florida law.
- OpenAI has been subpoenaed for internal training materials, policies, and organizational charts.
A legal theory of aiding and abetting
Florida Attorney General James Uthmeier has announced that the state's Office of Statewide Prosecution has opened a criminal investigation into OpenAI and its flagship product, ChatGPT. The move follows a mass shooting at Florida State University in 2025, where the suspect reportedly used ChatGPT in the period leading up to the attack. The investigation marks a significant escalation in how state authorities view the intersection of generative AI and criminal liability.
The core of the legal challenge rests on Florida's specific statutes regarding criminal assistance. According to Uthmeier, "Florida law states that anyone who aids, abets, or counsels someone in the commission of a crime, and that crime is committed or attempted, may be considered a principal to the crime." The state intends to argue that the specific responses provided by ChatGPT to the shooter may be interpreted as the AI assistant providing the necessary counsel or aid to facilitate the crime, potentially making the company a principal actor in the eyes of the law.
OpenAI's defense and cooperation
OpenAI has responded to the investigation with a firm stance on the nature of its technology. In a formal statement, the company emphasized that while the Florida State University shooting was a tragedy, ChatGPT itself is not responsible for the crime. The company maintains that the AI provided factual responses to questions based on information already widely available across public internet sources, and crucially, it did not encourage or promote any illegal or harmful activities.
OpenAI also highlighted its proactive engagement with law enforcement following the incident. The company stated that after learning of the shooting, it identified a ChatGPT account believed to be associated with the suspect and shared that information with authorities. "ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes," the company noted, adding that it works continuously to strengthen safeguards that detect harmful intent and limit misuse.
Subpoenas and requested documentation
The scope of the Florida investigation is broad, involving significant demands for internal corporate data. The state has issued subpoenas to OpenAI, requesting a wide array of documentation to determine the company's level of responsibility and its internal safety protocols. The requested materials include:
- All policies and internal training materials related to how the company handles users threatening to harm others.
- Internal protocols regarding users threatening to harm themselves.
- Documentation on how OpenAI responds to law enforcement requests.
- The company's organizational chart.
- Any publicly released statements regarding the Florida State University shooting.
Attorney General Uthmeier has been vocal about the gravity of the situation, suggesting that the investigation seeks to establish a precedent for AI accountability. "Florida is leading the way in cracking down on AI's use in criminal behavior, and if ChatGPT were a person, it would be facing charges for murder," Uthmeier stated. The investigation aims to determine if OpenAI bears criminal responsibility for the actions taken by the user.
A pattern of regulatory scrutiny
This investigation is not an isolated event in OpenAI's history of legal and regulatory challenges. The company has previously faced scrutiny regarding its handling of potential threats. Canadian regulators, for instance, called for changes to OpenAI's approach to harm following a Wall Street Journal report. That report claimed OpenAI had flagged the account of a Canadian shooting suspect in 2025 but had failed to relay those threats to law enforcement authorities.
In response to the Canadian pressure, OpenAI agreed in March to implement new policies regarding cooperation with Canadian law enforcement. The company is also navigating a wrongful death lawsuit stemming from 2025, which alleges that OpenAI played a role in the suicide of a teenage user. As the Florida investigation progresses, these existing legal battles will likely shape the broader conversation about AI developers' liability for the actions of their users.