By Emily D’Agostino

ChatGPT is a highly sophisticated language model that generates human-like text responses to user inputs. From their inception, large language models like ChatGPT have faced widespread criticism over concerns that they may exacerbate the already rampant misinformation crisis. In particular, some critics fear that these models could defame individuals, causing irreparable harm.

This fear recently materialized when Australian mayor Brian Hood began receiving reports that ChatGPT had labeled him a criminal. Specifically, the chatbot identified Hood as one of the guilty parties in a major bribery conspiracy that took place in the early 2000s. Between 1999 and 2004, several Australian business executives used Note Printing Australia, a subsidiary of the Reserve Bank of Australia, to bribe foreign officials. Hood was employed by the subsidiary at the time, but he was actually the whistleblower who reported the illicit payments to the authorities. He was never convicted of, or even charged with, any crime in connection with the scandal.

Hood fears that the AI-generated allegations could seriously damage his reputation as an elected official. His lawyers have given OpenAI, the company behind ChatGPT, 28 days to correct the error before proceeding with what would be the first ever defamation suit over content generated by artificial intelligence. This incident may be only the beginning, and governments should consider establishing new frameworks to regulate these large language models.

The obvious existing legal framework to apply where AI disseminates harmful false information about an individual or entity is tort law. This is the route Hood and his attorneys plan to take in alleging defamation. Although tort law varies by jurisdiction, a person has generally been defamed where a false statement about them is published or communicated and causes reputational harm. The difficulty with relying on tort law for recourse against AI is that the accused system may have been created by a combination of people and companies, making it difficult to identify the responsible party. Further, as AI becomes more autonomous, human involvement may become too attenuated to support liability at all.

Some organizations and governing bodies have issued guidance for the development and implementation of large language models. The Partnership on AI developed ethical guidelines requiring transparency, fairness, and accountability. The EU initially envisioned a risk-based regulatory framework that would impose market approval requirements on AI developers commensurate with the risk of harm posed by their systems. In September 2022, the European Commission formally proposed the AI Liability Directive (AILD), a set of uniform rules covering certain aspects of civil liability for damage caused by AI systems.

Overall, the emergence of advanced language models raises major ethical and legal concerns, and many of the theoretical fears surrounding AI are beginning to materialize. Legislators should therefore act now to determine the most effective means of regulating this complex space in order to minimize the societal harms ahead.