A murder case in Florida has opened an unprecedented legal frontier as state prosecutors pursue criminal charges against a man who allegedly used artificial intelligence to help plan the killing of two university students, while simultaneously investigating the company that created the technology he used.

Hisham Abugharbieh, 26, faces two counts of first-degree murder in connection with the deaths of Zamil Limon and Nahida Bristy, both 27-year-old doctoral students at the University of South Florida. The two friends from Bangladesh disappeared on April 16. What investigators discovered in the days that followed has prompted Florida Attorney General James Uthmeier to launch what appears to be the first criminal investigation of its kind into OpenAI, the company behind ChatGPT.

According to evidence presented by prosecutors, Abugharbieh engaged in a series of disturbing exchanges with ChatGPT in the days surrounding the victims’ disappearance. On April 13, three days before Limon and Bristy vanished, Abugharbieh allegedly typed into the chatbot: “What happens if a human has a put in a black garbage bag and thrown in a dumpster?” [sic]

When the artificial intelligence program responded that the scenario “sounds dangerous,” Abugharbieh reportedly continued with increasingly specific and macabre questions about concealing bodies, evading detection, and disposing of evidence. Investigators have indicated that this digital trail forms a crucial component of their case against the accused.

The implications extend far beyond a single criminal prosecution. Attorney General Uthmeier’s decision to incorporate this case into a broader criminal investigation of OpenAI raises fundamental questions about corporate liability when artificial intelligence technology is employed in the commission of violent crimes.

The legal theory being pursued by Florida authorities appears to center on whether companies that develop and deploy artificial intelligence systems bear responsibility for how those systems are used, particularly when the systems provide detailed responses to queries that clearly suggest criminal intent. This is uncharted legal territory: no company has previously faced criminal investigation over users who employed its artificial intelligence technology in the planning or execution of crimes.

The victims in this case were both accomplished scholars pursuing their academic ambitions far from their homeland. Their deaths have sent shockwaves through the University of South Florida community and raised concerns among international students about safety and security.

For OpenAI, the investigation presents a significant legal and reputational challenge. The company has implemented various safeguards designed to prevent its technology from being used for harmful purposes, but the Florida case demonstrates that determined users may still find ways to extract information that could facilitate criminal activity.

As this case proceeds through the courts, it will likely establish important precedents regarding the intersection of artificial intelligence, corporate responsibility, and criminal law. The outcome could shape how technology companies approach the development and deployment of artificial intelligence systems, and whether they face legal exposure when their products are misused.

The legal proceedings against Abugharbieh continue, while the investigation into OpenAI moves forward on a parallel track. Both cases will be watched closely by legal experts, technology companies, and law enforcement agencies nationwide.