Bridging the Justice Gap: Can Artificial Intelligence Tools Expand Access to the Legal System for Underserved Populations?

By: Trey Gates

The judicial system has long been revered for its fundamental ideals of impartiality and justice. These ideals have formed the cornerstone of our legal institutions, and of our democracy itself. But are these venerable ideals, many of which are written into our founding documents, always realized in our court system and in our community? If not, what can be done to achieve these goals, and how can society ensure that all individuals have access to the tools required to participate effectively in the judicial system, and ensure access to justice?

With the surge in use of unprecedented artificial intelligence tools like generative chatbots, led by OpenAI’s ChatGPT and Google’s Bard, there is prodigious potential for expanding access to justice and addressing the issues discussed above. If leveraged properly, these generative artificial intelligence tools could provide guidance to populations with less access to legal resources.[1] After personally evaluating these chatbots in their current form, I have seen that they can provide templates to help individuals draft legal documents, provide information about case law, and even suggest possible solutions for complex legal issues.

While it is important to ensure that these artificial intelligence mechanisms can be directly used by individuals to assist with legal issues that they may have, it is perhaps equally important to implement these tools into the legal profession. Many artificial intelligence tools have already been found useful within the legal community itself; for example, some legal aid organizations have already begun incorporating generative chatbots into their legal research processes.[2] These generative chatbots can be given set parameters that guide the chatbot to research cases in a particular area of the law. Having access to these research tools will drastically increase the speed with which firms are able to handle client cases, and will especially benefit organizations with limited resources, such as legal aid organizations that provide free legal services to clients. By having the capability to increase research speed, legal aid organizations will be able to work through client cases faster, and accordingly will be able to take on a greater caseload, thus providing more legal services to individuals whom the organization might otherwise have had to turn down.

However, the idea of using generative artificial intelligence chatbots to promote legal resources and to act as guides for underserved populations is not without its risks. Most notably, what if chatbots simply generate an incorrect response? Mainstream chatbots, such as ChatGPT, have been known to make errors and provide misleading, and often incorrect, information.[3] To address this issue, chatbots can be developed and tailored to serve the needs of the legal industry specifically. In addition, training methods for legal chatbots should be specially designed to ensure quality data is used for the artificial intelligence model. Traditionally, chatbots are trained by “scraping” data from across the internet; for chatbots used in the legal industry, a more refined methodology should be used to prevent the chatbot from learning inaccuracies and false information.[4]

In addition, there are other risks associated with the use of artificial intelligence chatbots in the legal system. One such risk is data privacy. In the scope of legal representation, confidential information is almost always revealed and passed between attorney and client, and feeding any confidential information into a generative artificial intelligence platform poses potentially large risks to clients. Even without giving the platform specific details about the client, questions arise about what the company behind the chatbot will do with the information entered after the user no longer needs it.

Although there may be risks associated with using artificial intelligence chatbots in the legal profession, the potential benefits largely outweigh these risks. Not only could these chatbots provide direct assistance to underserved individuals facing legal issues, but they could also be used by attorneys and those working in the legal industry as a whole to better assist clients and expand access to the judicial system.

______________________

1. Kirsten Sonday, Forum: There’s potential for AI chatbots to increase access to justice, THOMSON REUTERS (May 25, 2023), https://www.thomsonreuters.com/en-us/posts/legal/forum-spring-2023-ai-chatbots/

2. Id.


3. Tyler Roush, FTC Investigating ChatGPT Maker OpenAI For Providing False Information In Chat Results, Report Says, FORBES (July 13, 2023), https://www.forbes.com/sites/tylerroush/2023/07/13/ftc-investigating-chatgpt-maker-openai-for-providing-false-information-report-says/

4. Id.

Copyright and AI: The Implications of Thaler on Businesses claiming Copyright Protection on AI Work

By: Nicholas Barrish

In a recent decision, Thaler v. Perlmutter, the U.S. District Court for the District of Columbia found that a work made and authored entirely by artificial intelligence cannot receive copyright protection.[1] The court ruled that human authorship is a critical part of what allows a work to fall under copyright protection.[2] The copyright registration form, filed by the plaintiff, Thaler, listed his AI machine as the author.[3] The court found that such a work cannot have copyright protection because there was no human input, even though Thaler made the computer program and had to prompt the machine to act.[4] However, Thaler’s lawyer has indicated the possibility of an appeal, leaving the issue open for further debate.[5] This case will be the first of many testing the copyright protection of AI-made works, as many questions remain, such as how much AI involvement is too much for a work to be copyrighted, and how a business can use AI while still protecting its investments.


How much AI is too much AI? How much human is too little human?


It should be noted that Thaler is limited to AI work with no human intervention.[6] In other words, where there is some human intervention, the outcome may be different. This is an important distinction because the Copyright Office has stated that “AI-generated content that is more than de minimis should be explicitly excluded from the application.”[7] However, this limit is blurry.[8] It is especially blurry considering the back-and-forth the Copyright Office had over the registration of Zarya of the Dawn, a graphic novel whose AI-made pictures were deemed not to have copyright protection once the Office determined they were created by artificial intelligence.[9] It is also unclear whether copyright protection can be given to a work in which AI and human contributions are more intertwined.[10] In Zarya of the Dawn, the AI-made pictures could be distinguished from the rest of the work, allowing solely that part to fall outside copyright protection.[11] However, if an AI edits a photograph, or a human edits an AI-drawn painting, then the line of what is protected, if anything at all, is unknown and will need to be decided in the courts.[12]


Impact on Businesses and their Attempts to Copyright AI Work


As the recent writers’ strike has shown, AI involvement in creativity is a hotly contested issue.[13] Writers want AI to be limited, while companies want to use AI to speed up creative and even art-making processes.[14] However, Thaler limits how far companies can go: if AI writes a movie or book, that work will not fall under copyright protection.[15] Such protection may soon be expanded, though. Many governments are interested in AI, its advancements, and how to control or expand it.[16] So far, the United Kingdom has taken a friendlier approach to computer-generated content, allowing works that may fall outside of copyright protection in the United States to receive such protection in the United Kingdom.[17] However, early optimism in the United Kingdom has started to turn to apprehension as AI has become very powerful very quickly.[18] As public opinion turns against AI involvement in creative works, it will become harder and harder to use AI to create art. As such, businesses will have to stay on top of current laws and court decisions as the field changes and the public reacts to the laws and works published.

_____________

[1] Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236, at *7 (D.D.C. Aug. 18, 2023).

[2] Id.

[3] Id. at *1.

[4] Id. at *3-6.

[5] Copyright Protection in AI-Generated Works Update: Decision in Thaler v. Perlmutter, Authors Alliance (Aug. 24, 2023), https://www.authorsalliance.org/2023/08/24/copyright-protection-in-ai-generated-works-update-decision-in-thaler-v-perlmutter/.

[6] Thaler, 2023 WL 5333236, at *6-7.

[7] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190, 16193 (Mar. 16, 2023) (to be codified at 37 C.F.R. pt. 202).

[8] Aaron Moss, What Copyright’s “Unclaimable Material” Rules Mean for Hollywood’s Use of AI, Copyright Lately (Aug. 25, 2023), https://copyrightlately.com/copyright-unclaimable-material-rules-hollywood-use-of-ai/#:~:text=“De%20Minimis”%20Amounts%20of%20AI,t%20need%20to%20be%20disclosed.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] Ashley Cullins and Katie Kilkenny, As Writers Strike, AI Could Covertly Cross the Picket Line, Hollywood Reporter (May 3, 2023), https://www.hollywoodreporter.com/business/business-news/writers-strike-ai-chatgpt-1235478681/.

[14] Id.

[15] Ellen Glover and Brennan Whitfield, AI-Generated Content and Copyright Law: What We Know, Built In (Aug. 23, 2023), https://builtin.com/artificial-intelligence/ai-copyright.

[16] ChatGPT: what the law says about who owns the copyright of AI-generated content, University of Portsmouth (Apr. 17, 2023), https://www.port.ac.uk/news-events-and-blogs/blogs/security-and-risk/chatgpt-what-the-law-says-about-who-owns-the-copyright-of-ai-generated-content.

[17] Id.

[18] Carlton Daniel, Joseph Grasser, and James J Collis, Copyright protection for AI works: UK vs US, Lexology (July 12, 2023), https://www.lexology.com/library/detail.aspx?g=a9b81aa1-7243-4f03-890c-7d29f5ccbdd7.

Biometric identification: is the risk worth the reward?

By: Sarah Simon-Patches

Date: 9/18/2023

If you’ve been to a Whole Foods lately, you may have noticed that there’s a new way to pay – using your hand. If you haven’t yet experienced this new phenomenon, don’t worry. Amazon is bringing its palm-scanning technology to over 500 Whole Foods stores by the end of this year1. If Whole Foods isn’t your thing, you might find the new technology in sports stadiums, raceways, and casinos2.  

The palm payment technology is part of Amazon One, a service that connects your Amazon account, and your payment information, to the palm of your hand3. This is done through a “palm signature”, which is Amazon’s way of storing your information through the lines and wrinkles on your palm4. When you hold your hand above the scanner, it reads your palm’s unique ridges, grooves, and vein patterns, and charges the payment method attached to your Amazon account5.

Amazon’s new technology isn’t only for payment, though. It can also determine your age for alcohol-purchase purposes6. This identification capability has raised major concerns among consumers, vendors, and state senators7. Red Rocks Amphitheater, for example, cut ties with Amazon One after an open letter asked entertainment industry groups and venues to cancel contracts with Amazon One due to data security concerns8.

Currently, Amazon is facing a class action lawsuit related to its Amazon Go stores tracking New York customers’ body shapes, sizes, and palm prints without providing proper notice as required by a New York City biometric surveillance law9.

Amazon’s assurance to consumers is that your identity is unique to you: your stored payment and other information can’t be stolen the way a credit card otherwise could10. However, some experts believe AI can copy voices, faces, and even your palm11. Amazon’s response has been its use of liveness-detection technology, which is supposed to be able to tell the difference between a real palm and a fake one12. Further, Amazon claims its combination of palm and vein imagery is unusable to third parties, and says your data will not be bought or sold to other companies, or used by Amazon itself, for marketing and advertising13.

Even with promises of safety and data protection, Amazon’s track record of data collection may influence your decision to use Amazon One. For one, Amazon recently reached a $25 million settlement following allegations of violating children’s privacy rights through the company’s voice assistant, Alexa14. This comes after Amazon’s failed attempt in 2018 to defeat a California law which requires disclosure of the information collected from consumers15.

So, while replacing a stolen credit card or stolen Social Security number seems time-consuming and difficult, consider the possibility of trying to replace your actual identity– your voice, face, or palm print. Is the convenience of paying with your hand worth the unknown security risks and questions associated with biometric identification technology? If it’s worth it to you, you can try it out at your local Whole Foods.

____________________________

1 Emma Ross, All 500-plus Whole Foods stores will soon let you pay with a palm scan / Amazon One links your palm signature to your payment method and identity, The Verge (July 20, 2023), https://www.theverge.com/2023/7/20/23801571/amazon-one-whole-foods-pay-palm-scan.

2 Sarah Perez, Amazon’s palm-scanning payment technology is coming to all 500+ Whole Foods, TechCrunch (July 20, 2023), https://techcrunch.com/2023/07/20/amazons-palm-scanning-payment-technology-is-coming-to-all-500-whole-foods/.

3 Amazon, How it works Meet Amazon One, one.amazon.com, https://one.amazon.com/how-it-works (last visited Sept. 13, 2023).

4 Id.

5 Id.

6 Supra note 1.

7 Supra note 2.

8 Id.

9 Skye Witley, Amazon Sued Over NYC Go Store Collection of Palm, Body Scans (2), Bloomberg Law (March 16, 2023) https://news.bloomberglaw.com/privacy-and-data-security/amazon-sued-over-nyc-go-store-collection-of-palm-body-scans.

10 Cheyenne DeVon, Amazon will soon let you pay for groceries with your palm at any Whole Foods—but tech experts urge caution, CNBC (August 26, 2023) https://www.cnbc.com/2023/08/26/amazon-biometric-payments-privacy-concerns.html.

11 Id.

12 Id.

13 Supra note 2.

14 Mohamed Dabo, Amazon settles $25m lawsuit over Alexa’s privacy breach, Retail Insight Network (June 1, 2023) https://www.retail-insight-network.com/features/amazon-settles-25m-lawsuit-over-alexas-privacy-breach/?cf-view.

15 Chris Kirkham and Jeffrey Dastin, A look at the intimate details Amazon knows about us, Reuters (November 19, 2021) https://www.reuters.com/technology/look-intimate-details-amazon-knows-about-us-2021-11-19/.

Cultivated meat: is the cost worth the reward?

By: Denice Cioara

9/12/2023

Are you willing to pay more for your Tully’s Tenders knowing that they are 100% cruelty free? On June 21, 2023, the U.S. Department of Agriculture granted two companies, UPSIDE Foods and Good Meat, approval to produce and sell cell-cultivated chicken for the first time in the United States.[1] According to UPSIDE Foods, the process of making cultivated meat is akin to brewing beer.[2] Major advancements in food science and cell culture technology have led to this evolution.

Cultivated meat begins in a laboratory as a sample of cells taken from the tissue of an animal.[3] That sample is placed in a tightly controlled and monitored environment, such as a cultivator, that supports cellular multiplication.[4] After the cells are fed the right blend of nutrients, they multiply into billions or trillions of cells.[5] Additional substances are then added, prompting the cells to differentiate into various cell types and assume the characteristics of muscle, fat, or connective tissue cells.[6] Once the cells have differentiated into the desired type, they can be harvested.[7] This entire process takes about two or three weeks.[8]

Like the slaughterhouse industry, the cultivated meat industry is regulated by the Food and Drug Administration (FDA).[9] The FDA oversees cell collection, cell banks, and cell growth.[10] However, the United States Department of Agriculture (USDA) and the Food Safety and Inspection Service (FSIS) oversee the harvesting stage, as well as the further production and labeling of these products.[11]

Is cultivated meat the meat of the future? Advocates, such as the Animal Legal Defense Fund, are eager to put an end to the cruelty that occurs in slaughterhouses, as well as the negative environmental impacts of those businesses.[12] However, Good Meat’s co-founder worries that the company may never get the funds needed to scale up production.[13] High production costs turn into high market prices, which may serve as a deterrent for future consumers. Singapore, the only other country that produces and sells UPSIDE Foods’ cultivated meat, has yet to engage in mass production.[14]

Critics are skeptical of the benefits that this new technology can provide. George Santos, a United States representative for New York’s 3rd congressional district, introduced a bill that would prohibit the use of federal funds to support lab-grown meat.[15] Missouri became the first state to restrict the word “meat” in the marketing of alternative meat products.[16] Does this restriction violate a manufacturer’s First Amendment right to free speech? More severely, the Washington legislature has introduced a bill that bans the advertisement, sale, or offer for sale of cultivated meat altogether.[17] Can a state use its police powers to enact a law like this one? Legislative decisions will pave the path for the future of cultivated meat.

_________________________

1. Joanna Thompson, Lab-Grown Meat Approved for Sale: What You Need to Know, Scientific American (June 30, 2023), https://www.scientificamerican.com/article/lab-grown-meat-approved-for-sale-what-you-need-to-know/.

2. Upside is approved for sale in the US! Here’s what you need to know, UPSIDE Foods (June 21, 2023), https://www.upsidefoods.com/blog/upside-is-approved-for-sale-in-the-us-heres-what-you-need-to-know.


3. Human Food Made with Cultured Animal Cells, Food and Drug Admin. (March 21, 2023) https://www.fda.gov/food/food-ingredients-packaging/human-food-made-cultured-animal-cells.


4. Id.


5. Id.


6. Id.


7. Id.


8. Supra note 2.


9. Supra note 3.


10. Id.

11. Id.


12. Innovation in Food Production: Cultivated Meat, Animal Legal Def. Fund, https://aldf.org/article/innovation-in-food-production-cultivated-meat/.


13. Leah Douglas, Insight: Lab-grown meat moves closer to American dinner plates, Reuters (Jan. 23, 2023), https://www.reuters.com/business/retail-consumer/lab-grown-meat-moves-closer-american-dinner-plates-2023-01-23/.


14. Reuters, Even after green-light, lab-grown meat yet to take off in Singapore, Daily Sabah (Mar. 8, 2023), https://www.dailysabah.com/life/food/even-after-green-light-lab-grown-meat-yet-to-take-off-in-singapore.


15. American Meat Industry Protection Act of 2023, 118 H.R. 4805.


16. Eryn Terry, Note, The Regulation of Commercial Speech: Can Alternative Meat Companies Have Their Beef And Speak It Too, Vand. J. Ent. & Tech. L. 223 (2020).


17. Id.

New York Law Allows a Notary in New York to Notarize Documents for an Individual Anywhere in the World

By: Tracy Costanzo

Throughout the pandemic, many states allowed remote online notarizations (“RONs”) on a temporary basis. This allowed businesses to continue operating when in-person commerce was at a standstill. On January 15, 2021, Senate Bill 1780 was introduced, authorizing traditional Notaries to perform electronic notarial acts using specific technology.[1] On February 1, 2023, Senate Bill 1780 went into effect, enabling RONs in New York State on a permanent basis.[2]

The new law enables a notary sitting in New York to notarize documents for an individual anywhere in the world.[3] The technology allows business to be conducted regardless of where the individual signing is located, provided that the notary complies with the requirements set forth in the new statute. In order to become a remote online notary, an individual must hold a current commission as a traditional Notary Public, contract with a technology provider that meets the state requirements, register with the Secretary of State, and purchase a required Notary journal.[4]

In order to meet the requirements necessary to complete RONs, the Notary Public will need a computer, webcam, microphone, and secure internet connection.[5] As with many other states, New York requires that all RONs be recorded with audiovisual technology.[6] There are currently several companies that provide the necessary technology, including DocuSign, DocVerify, and SIGNiX, to name a few.[7] In addition, Notaries are required to have an electronic stamp and a digital certificate containing their electronic signature, and a journal is required.[8]

Utilizing the current online providers, the signer and Notary meet virtually through the platform to complete the notarization.[9] If witnesses are necessary, they will also join through the platform throughout the signing process. The Notary is responsible for verifying the identity of the signer, as well as that of any witnesses.[10] The identity verification is done directly through the service provider, and the regulation allows the remote Notary to rely upon the information received from the service provider in verifying the identity of the individual(s) signing.[11] Further, the Notary is also responsible for confirming that the signer understands and is aware of what they are signing.[12] Finally, remote Notaries are required to keep an audio and video recording of the electronic notarization for ten (10) years.[13]

Once the document has been electronically notarized, it may be emailed, printed, faxed, or sent by other electronic means. However, in order to record an electronically signed document, the remote Notary must provide a Certificate of Authenticity substantiating that no changes have been made to the document since the electronic signature and record were created.[14] The new law provides more security measures than traditional notarization and will likely reduce the number of fraudulent transfers.[15]


[1] How to Become a Remote Online Notary in New York, https://www.nationalnotary.org/knowledge-center/remote-online-notary/how-to-become-a-remote-online-notary/new-york (last updated Feb. 14, 2023)

[2] Id.

[3] Michael A. Markowitz, The Future is Here: New York Approves Remote Online Notarization, https://nysba.org/the-future-is-here-new-york-approves-remote-online-notarization (Apr. 11, 2023)

[4] Id.

[5] Supra note 1.

[6]  Id.

[7]  Id.

[8]  Id.

[9] Markowitz, supra note 3.

[10]  Id.

[11]  Id.

[12]  Id.

[13]  Id.

[14] Markowitz, supra note 3.

[15] Id.

The Impending Death of Manual Bluebook Citations: How AI Will Reshape Legal Education

By Michael Roy Ortizo

Legal education has long prioritized the formalities of citation techniques, such as those found in the Bluebook. Students often find themselves learning an entirely new citation style upon entering law school, even after mastering other writing forms like the Chicago Manual of Style during their undergraduate studies.[1] However, rapid advancements in AI present an opportunity for a shift within the legal landscape that places greater emphasis on the substance of legal writing as opposed to the form of legal citations.

Student-run law journals underscore the importance of providing accurate legal citations, with many administering Bluebook exams as part of their application processes. In 2012, Staci Zaretsky reported on a University of Richmond School of Law student who launched a guerrilla campaign against the school’s Bluebook exam, arguing that the 40-hour exam was excessive and unnecessary compared to the four-hour standard final exams.[2] This incident sparked controversy and divided opinions among law students regarding the necessity and value of the Bluebook exam.[3]

Traditionally, law students have dedicated significant time and effort to mastering Bluebook citation forms for legal publications. Jordan Rothman, however, contends that perfect Bluebooking is often less critical in legal practice than in law school.[4] The introduction of AI in legal spaces offers a chance for a paradigm shift, enabling scholars to concentrate on the substantive content of legal writing.

While AI models like ChatGPT have been met with skepticism in the legal community due to concerns about trust, security, and cost, recent advancements demonstrate improvements in accuracy.[5] For instance, GPT-4, ChatGPT’s newest AI model, recently passed a simulated bar exam, scoring in the top 10 percent of test-takers, a significant leap from the previous model, GPT-3.5, which scored in the bottom 10 percent.[6]

Capitalizing on these advancements, numerous organizations have developed AI-powered tools for various industries.[7] LegalEase, for example, offers an AI tool that assists law students, lawyers, and other legal professionals with Bluebook citations, claiming to generate 100% accurate citations in seconds.[8]

AI holds the potential to reshape the future of legal education, driving a much-needed shift towards substantive content. Although proper Bluebooking may not always be as important in legal practice as it is in law school, AI can alleviate the pressure on students and legal professionals to perfect their citation skills. The potential disruption of the formalisms associated with Bluebooking opens up a wealth of opportunities for legal scholars to allocate more resources to increasing their contributions to legal academia, ultimately enhancing the quality and impact of legal scholarship.


[1] Jordan Rothman, Perfect Bluebooking Is Less Important in Legal Practice Than in Law School, Above the Law (Jan. 19, 2022), https://abovethelaw.com/2022/01/perfect-bluebooking-is-less-important-in-legal-practice-than-in-law-school/.

[2] Staci Zaretsky, Law Student Revolts Against Law Review’s Bluebook Exam, Above the Law (Mar. 22, 2012), https://abovethelaw.com/2012/03/law-student-revolts-against-law-reviews-bluebook-exam/.

[3] Id.

[4] Rothman, supra note 1.

[5] Thomas Bacas, ANALYSIS: Will ChatGPT Bring AI to Law Firms? Not Anytime Soon, Bloomberg L. (Dec. 28, 2022), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-will-chatgpt-bring-ai-to-law-firms-not-anytime-soon.

[6] Kyle Wiggers, OpenAI Releases GPT-4, a Multimodal AI That It Claims Is State-of-the-Art, TechCrunch (Mar. 14, 2023), https://techcrunch.com/2023/03/14/openai-releases-gpt-4-ai-that-it-claims-is-state-of-the-art/.

[7] Sunny Betz, The 15 Best AI Tools to Know, BuiltIn (June 9, 2022), https://builtin.com/artificial-intelligence/ai-tools.

[8] LegalEase Citations, About LegalEase, https://legaleasecitations.com/#about-legalease (last visited Apr. 11, 2023).

Innocent Until a Robot Says You’re Guilty: ChatGPT’s First Defamation Suit (Maybe)

By Emily D’Agostino

ChatGPT is a highly sophisticated language model that generates human-like text responses to user inputs. From inception, large language models like ChatGPT have faced widespread criticism over the concern that they may exacerbate the already rampant misinformation crisis. Particularly, some critics have expressed fear that these models could have the capability to defame individuals, causing irreparable harm.


This fear recently materialized when Australian mayor Brian Hood began receiving reports that ChatGPT had labeled him a criminal. Specifically, the AI chatbot identified Hood as one of the guilty parties involved in a major bribery conspiracy that took place in the early 2000s. Between 1999 and 2004, several Australian business executives utilized Note Printing Australia, a subsidiary of the Reserve Bank of Australia, to bribe foreign officials. While Hood was employed by the subsidiary at the time, he was actually the whistleblower who reported the illicit payments to the authorities. Hood was never convicted of, or even charged with, any crimes in connection with the scandal.


Hood fears that the AI’s allegations could cause serious damage to his reputation, as he is an elected official. His lawyers have given OpenAI, the company responsible for creating ChatGPT, a 28-day period to correct the error before they proceed with what would be the first ever defamation suit over AI-generated content. This incident may be only the beginning, and governments should consider establishing new frameworks to regulate these large language models.


The obvious existing legal framework to apply in situations where AI disseminates harmful false information about an individual or entity is tort law. This is the route Hood and his attorneys plan to take in alleging defamation. Although tort law varies by jurisdiction, a person has been defamed, generally, where a false statement about them is published or communicated, and the statement causes reputational harm. The difficulty with relying on tort law in seeking recourse against AI is that the accused AI may have been created by a combination of people and companies, making it difficult to identify the responsible party. Further, as AI becomes more independent, human influence might become too far removed altogether to impose liability.


Some organizations and governing bodies have issued guidance for the development and implementation of large language models. The Partnership on AI has developed ethical guidelines requiring transparency, fairness, and accountability. The EU initially envisioned a risk-based regulatory framework that would impose varying market-approval requirements on AI developers commensurate with the risk of harm posed by their AI. In September 2022, the Commission formally proposed the AI Liability Directive (AILD), which contains a set of uniform rules covering certain aspects of civil liability for damage caused by AI systems.


Overall, the emergence of advanced language models raises major ethical and legal concerns. Many of the theoretical fears surrounding AI are beginning to come to fruition. Consequently, legislators should take action and attempt to ascertain the most effective means of regulating this complex space in order to minimize the impending societal harms.