Blog Post

Bridging the Justice Gap: Can Artificial Intelligence Tools Expand Access to the Legal System for Underserved Populations?

By: Trey Gates

The judicial system has long been revered for its fundamental ideals of impartiality and justice. These ideals have formed the cornerstone of our legal institutions and of our democracy itself. But are these venerable ideals, many of which are written into our founding documents, always realized in our court system and in our community? If not, what can be done to achieve these goals, and how can society ensure that all individuals have access to the tools required to participate effectively in the judicial system and to obtain justice?

With the surge in use of unprecedented artificial intelligence tools like generative chatbots, led by OpenAI’s ChatGPT and Google’s Bard, there is enormous potential for expanding access to justice and addressing the issues discussed above. If leveraged properly, these generative artificial intelligence tools could provide guidance to populations that have less access to legal resources.[1] After personally evaluating these chatbots in their current form, I have seen that they can provide individuals with templates to help draft legal documents, provide information about case law, and even suggest possible solutions for complex legal issues.

While it is important to ensure that these artificial intelligence mechanisms can be used directly by individuals to assist with legal issues they may have, it is perhaps equally important to implement these tools within the legal profession. Many artificial intelligence tools have already proven useful within the legal community itself; for example, some legal aid organizations have begun incorporating generative chatbots into their legal research processes.[2] These generative chatbots can be given set parameters that guide the chatbot to research cases in a particular area of the law. Having access to these research tools will drastically increase the speed with which firms are able to handle client cases, and will especially benefit organizations with limited resources, such as legal aid organizations that provide free legal services to clients. By being able to research faster, legal aid organizations will be able to work through client cases more quickly, take on a greater caseload, and thus provide legal services to individuals whom they might otherwise have had to turn down.

However, the idea of using generative artificial intelligence chatbots to promote legal resources and to act as guides for underserved populations is not without its risks. Most notably, what if chatbots simply generate an incorrect response? Mainstream chatbots, such as ChatGPT, have been known to make errors and provide misleading, and often incorrect, information.[3] To address this issue, chatbots can be developed and tailored to serve the needs of the legal industry specifically. In addition, training methods for legal chatbots should be specially designed to ensure quality data is used for the artificial intelligence model. Traditionally, chatbots are trained by “scraping” data from across the internet; for chatbots used in the legal industry, a more refined methodology should be used to prevent the chatbot from learning inaccuracies and false information.[4]
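To make the idea of a “more refined methodology” concrete, the short sketch below shows one way a developer might restrict training material to vetted, human-reviewed legal sources instead of indiscriminately scraped web text. It is purely illustrative: the source list, data structure, and function names are my own assumptions, not a description of how any existing chatbot is actually trained.

```python
from dataclasses import dataclass

# Hypothetical example: restrict training material to vetted legal sources
# instead of indiscriminately scraped web text.
VETTED_SOURCES = {"courtlistener.com", "govinfo.gov", "supremecourt.gov"}

@dataclass
class Document:
    text: str
    source_domain: str   # where the document was collected from
    verified: bool       # whether a human reviewer confirmed its accuracy

def build_training_corpus(documents: list[Document]) -> list[str]:
    """Keep only documents from vetted sources that passed human review."""
    return [
        doc.text
        for doc in documents
        if doc.source_domain in VETTED_SOURCES and doc.verified
    ]

if __name__ == "__main__":
    docs = [
        Document("Text of a published opinion...", "courtlistener.com", True),
        Document("Anonymous forum post about a legal 'trick'...", "randomblog.example", False),
    ]
    print(len(build_training_corpus(docs)))  # 1 -- the unvetted post is excluded
```

A real pipeline would add many more checks (citation validation, deduplication, currency of the law), but the basic point stands: the quality of a legal chatbot depends heavily on what it is allowed to learn from.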

In addition, there are other risks associated with the use of artificial intelligence chatbots in the legal system. Another is data privacy. In the course of legal representation, confidential information is almost always exchanged between attorney and client, and feeding any confidential information into a generative artificial intelligence platform poses potentially significant risks to clients. Even without giving the platform specific details about the client, questions arise about what the company behind the chatbot will do with the information entered after the user no longer needs it.

Although there may be risks associated with using artificial intelligence chatbots in the legal profession, the potential benefits largely outweigh these risks. Not only could these chatbots provide direct assistance to underserved individuals facing legal issues, but they could also be used by attorneys and those working in the legal industry as a whole to better assist clients and expand access to the judicial system.

______________________

1. Kirsten Sonday, Forum: There’s potential for AI chatbots to increase access to justice, THOMSON REUTERS (May 25, 2023), https://www.thomsonreuters.com/en-us/posts/legal/forum-spring-2023-ai-chatbots/.

2. Id.


3. Tyler Roush, FTC Investigating ChatGPT Maker OpenAI For Providing False Information In Chat Results, Report Says, FORBES (July 13, 2023), https://www.forbes.com/sites/tylerroush/2023/07/13/ftc-investigating-chatgpt-maker-openai-for-providing-false-information-report-says/.

4. Id.

Copyright and AI: The Implications of Thaler on Businesses Claiming Copyright Protection on AI Work

By: Nicholas Barrish

In a recent decision, Thaler v. Perlmutter, the U.S. District Court for the District of Columbia found that a work made and authored entirely by artificial intelligence cannot receive copyright protection.[1] The court ruled that human authorship is a critical part of what allows a work to fall under copyright protection.[2] The copyright registration application, filed by the plaintiff, Thaler, listed his AI machine as the author.[3] The court found that such a work cannot have copyright protection because there was no human input, even considering that Thaler built the computer program and had to prompt the machine to act.[4] However, Thaler’s lawyer has indicated the possibility of an appeal, leaving the issue open for further debate.[5] This case will be the first of many testing the copyright protection of AI-made works, as many questions remain, such as how much AI is too much to copyright a work, or how a business can use AI while still protecting its investments.


How much AI is too much AI? How much human is too little human?


It should be noted that Thaler is limited to AI work with no human intervention.[6] In other words, where there is some human intervention, the outcome may be different. This is an important distinction because the Copyright Office has stated that “AI-generated content that is more than de minimis should be explicitly excluded from the application.”[7] However, this limit is blurry.[8] It is especially blurry considering the back-and-forth the Copyright Office had over the registration of Zarya of the Dawn, a graphic novel whose AI-generated pictures were ultimately deemed not to have copyright protection once the Office determined they were created by artificial intelligence.[9] It is also unclear whether copyright protection can be given to a work in which AI and human contributions are more intertwined.[10] In Zarya of the Dawn, the AI-made pictures could be distinguished from the rest of the work, allowing only that part to fall outside copyright protection.[11] However, if an AI edits a photograph or a human edits an AI-drawn painting, then the line of what is protected, if anything at all, is unknown and will need to be decided in the courts.[12]


Impact on Businesses and their Attempts to Copyright AI Work


As the recent writers’ strike has shown, AI involvement in creative work is a hotly contested issue.[13] Writers want AI to be limited, while companies want to use AI to speed up writing and even art-making processes.[14] The decision in Thaler, however, limits how far companies can go: if AI writes a movie or book, that work will not fall under copyright protection.[15] Such protection may soon be expanded, though. Many governments are interested in AI, its advancement, and how to control or expand it.[16] So far, the United Kingdom has taken a friendlier approach to computer-generated content, allowing works that may fall outside of copyright protection in the United States to receive such protection in the United Kingdom.[17] However, early optimism in the United Kingdom has started to turn to apprehension as AI has become very powerful very quickly.[18] As public opinion starts to turn against AI involvement in creative works, it will become harder and harder to use AI to create art. As such, businesses will have to stay on top of current laws and court decisions as the field changes and the public reacts to the laws and works that follow.

_____________

[1] Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236, at *7 (D.D.C. Aug. 18, 2023).

[2] Id.

[3] Id. at *1.

[4] Id. at *3-6.

[5] Copyright Protection in AI-Generated Works Update: Decision in Thaler V. Perlmutter, Authors Alliance (Aug. 24, 2023), https://www.authorsalliance.org/2023/08/24/copyright-protection-in-ai-generated-works-update-decision-in-thaler-v-perlmutter/.

[6] Thaler, 2023 WL 5333236, at *6-7.

[7] Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190, 16193 (Mar. 16, 2023) (to be codified at 37 C.F.R. pt. 202).

[8] Aaron Moss, What Copyright’s “Unclaimable Material” Rules Mean for Hollywood’s Use of AI, Copyright Lately (Aug. 25, 2023), https://copyrightlately.com/copyright-unclaimable-material-rules-hollywood-use-of-ai/.

[9] Id.

[10] Id.

[11] Id.

[12] Id.

[13] Ashley Cullins and Katie Kilkenny, As Writers Strike, AI Could Covertly Cross the Picket Line, Hollywood Reporter (May 3, 2023), https://www.hollywoodreporter.com/business/business-news/writers-strike-ai-chatgpt-1235478681/.

[14] Id.

[15] Ellen Glover and Brennan Whitfield, AI-Generated Content and Copyright Law: What We Know, Built In (Aug. 23, 2023), https://builtin.com/artificial-intelligence/ai-copyright.

[16] ChatGPT: what the law says about who owns the copyright of AI-generated content, University of Portsmouth (Apr. 17, 2023), https://www.port.ac.uk/news-events-and-blogs/blogs/security-and-risk/chatgpt-what-the-law-says-about-who-owns-the-copyright-of-ai-generated-content.

[17] Id.

[18] Carlton Daniel, Joseph Grasser, and James J Collis, Copyright protection for AI works: UK vs US, Lexology (July 12, 2023), https://www.lexology.com/library/detail.aspx?g=a9b81aa1-7243-4f03-890c-7d29f5ccbdd7.

Biometric identification: is the risk worth the reward?

By: Sarah Simon-Patches

Date: 9/18/2023

If you’ve been to a Whole Foods lately, you may have noticed that there’s a new way to pay – using your hand. If you haven’t yet experienced this new phenomenon, don’t worry. Amazon is bringing its palm-scanning technology to over 500 Whole Foods stores by the end of this year.[1] If Whole Foods isn’t your thing, you might find the new technology in sports stadiums, raceways, and casinos.[2]

The palm payment technology is part of Amazon One, a service that connects your Amazon account, and your payment information, to the palm of your hand.[3] This is done through a “palm signature,” which is Amazon’s way of storing your information through the lines and wrinkles on your palm.[4] When you hold your hand above the scanner, it reads your palm’s unique ridges, grooves, and vein patterns, and charges the payment method attached to your Amazon account.[5]
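Amazon has not published the details of its matching algorithm, so the sketch below is only a generic illustration of how biometric identification systems of this kind tend to work: a “palm signature” is reduced to a numeric template at enrollment, and later scans are compared against stored templates using a similarity threshold. All names and numbers here are hypothetical.

```python
import math

# Generic illustration of biometric template matching -- NOT Amazon's actual
# algorithm, which has not been published. A "palm signature" is modeled here
# as a numeric feature vector extracted from ridge/vein imagery.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Template stored at enrollment (hypothetical values)
ENROLLED = {"alice": [0.12, 0.80, 0.33, 0.54]}

def identify(scan: list[float], threshold: float = 0.95) -> str | None:
    """Return the enrolled user whose template best matches the new scan."""
    best_user, best_score = None, 0.0
    for user, template in ENROLLED.items():
        score = cosine_similarity(scan, template)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None

print(identify([0.11, 0.81, 0.32, 0.55]))  # matches "alice"
print(identify([0.90, 0.10, 0.05, 0.02]))  # None -- no confident match
```

The privacy questions raised below follow directly from this design: once a template like this is stored, it describes a part of your body that cannot be reissued the way a card number can.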

Amazon’s new technology isn’t only for payment, though. It can also determine your age for alcohol purchases.[6] This identification ability has raised major concerns from consumers and vendors to state senators.[7] Red Rocks Amphitheater, for example, cut ties with Amazon One after an open letter asked entertainment industry groups and venues to cancel their contracts with Amazon One due to data security concerns.[8]

Currently, Amazon is facing a class action lawsuit related to its Amazon Go stores tracking New York customers’ body shapes, sizes, and palm prints without providing proper notice as required by a New York City biometric surveillance law.[9]

Amazon’s assurance to consumers is that your identity is unique to you: your stored payment and other information can’t be stolen the way a credit card otherwise could be.[10] However, some experts believe AI can copy voices, faces, and even your palm.[11] Amazon’s response has been its use of liveness-detection technology, which is supposed to be able to tell the difference between a real palm and a fake one.[12] Further, Amazon claims its combination of palm and vein imagery is unusable to third parties and says your data will not be bought or sold to other companies, or used by Amazon itself, for marketing and advertising.[13]

Even with promises of safety and data protection, Amazon’s track record of data collection may influence your decision to use Amazon One. For one, Amazon recently reached a $25 million settlement following allegations of violating children’s privacy rights through the company’s voice assistant, Alexa.[14] This comes after Amazon’s failed attempt in 2018 to defeat a California law which requires disclosure of the information collected from consumers.[15]

So, while replacing a stolen credit card or stolen Social Security number seems time-consuming and difficult, consider the possibility of trying to replace your actual identity: your voice, face, or palm print. Is the convenience of paying with your hand worth the unknown security risks and questions associated with biometric identification technology? If it’s worth it to you, you can try it out at your local Whole Foods.

____________________________

1 Emma Ross, All 500-plus Whole Foods stores will soon let you pay with a palm scan / Amazon One links your palm signature to your payment method and identity, The Verge (July 20, 2023), https://www.theverge.com/2023/7/20/23801571/amazon-one-whole-foods-pay-palm-scan.

2 Sarah Perez, Amazon’s palm-scanning payment technology is coming to all 500+ Whole Foods, TechCrunch (July 20, 2023), https://techcrunch.com/2023/07/20/amazons-palm-scanning-payment-technology-is-coming-to-all-500-whole-foods/.

3 Amazon, How it works Meet Amazon One, one.amazon.com, https://one.amazon.com/how-it-works (last visited Sept. 13, 2023).

4 Id.

5 Id.

6 Supra, note 1.

7 Supra, note 2.

8 Id.

9 Skye Witley, Amazon Sued Over NYC Go Store Collection of Palm, Body Scans (2), Bloomberg Law (March 16, 2023) https://news.bloomberglaw.com/privacy-and-data-security/amazon-sued-over-nyc-go-store-collection-of-palm-body-scans.

10 Cheyenne DeVon, Amazon will soon let you pay for groceries with your palm at any Whole Foods—but tech experts urge caution, CNBC (August 26, 2023) https://www.cnbc.com/2023/08/26/amazon-biometric-payments-privacy-concerns.html.

11 Id.

12 Id.

13 Supra, note 2.

14 Mohamed Dabo, Amazon settles $25m lawsuit over Alexa’s privacy breach, Retail Insight Network (June 1, 2023) https://www.retail-insight-network.com/features/amazon-settles-25m-lawsuit-over-alexas-privacy-breach/?cf-view.

15 Chris Kirkham and Jeffrey Dastin, A look at the intimate details Amazon knows about us, Reuters (November 19, 2021) https://www.reuters.com/technology/look-intimate-details-amazon-knows-about-us-2021-11-19/.

Cultivated meat: is the cost worth the reward?

By: Denice Cioara

9/12/2023

Are you willing to pay more for your Tully’s Tenders knowing that they are 100% cruelty-free? On June 21, 2023, the U.S. Department of Agriculture granted two companies, UPSIDE Foods and Good Meat, approval to produce and sell cell-cultivated chicken for the first time in the United States.[1] According to UPSIDE Foods, the process of making cultivated meat is akin to brewing beer.[2] Major advancements in food science and cell culture technology have led to this evolution.

Cultivated meat begins in a laboratory as a sample of cells taken from the tissue of an animal.[3] That sample of cells is placed in a tightly controlled and monitored environment, such as a cultivator, that supports cellular multiplication.[4] After the cells are fed the right blend of nutrients, they multiply into billions or trillions of cells.[5] Additional substances are then added that cause the cells to differentiate into various cell types and take on the characteristics of muscle, fat, or connective tissue cells.[6] Once the cells have differentiated into the desired type, they can be harvested.[7] This entire process takes about two or three weeks.[8]

Like the slaughterhouse industry, the cultivated meat industry is regulated by the Food and Drug Administration (FDA).[9] The FDA oversees cell collection, cell banks, and cell growth.[10] However, the United States Department of Agriculture (USDA) and the Food Safety and Inspection Service (FSIS) oversee the harvesting stage, as well as the further production and labeling of these products.[11]

Is cultivated meat the meat of the future? Advocates, such as the Animal Legal Defense Fund, are eager to put an end to the cruelty that occurs in slaughterhouses, as well as to the negative environmental impacts of those businesses.[12] However, Good Meat’s co-founder worries that the company may never get the funds needed to scale up production.[13] High production costs turn into high market prices, which may deter future consumers. Singapore, the only other country that has approved the production and sale of cultivated meat, has yet to engage in mass production.[14]

Critics are skeptical of the benefits that this new technology can provide. George Santos, a United States representative for New York’s 3rd congressional district, introduced a bill that would prohibit federal funds from supporting lab-grown meat.[15] Missouri became the first state to restrict use of the word “meat” in the marketing of alternative meat products.[16] Does this restriction violate a manufacturer’s First Amendment right to free speech? More drastically, the Washington legislature has introduced a bill that bans the advertising, sale, or offer for sale of cultivated meat altogether.[17] Can a state use its police powers to enact a law like this one? Legislative decisions will pave the path for the future of cultivated meat.

_________________________

1. Joanna Thompson, Lab-Grown Meat Approved for Sale: What You Need to Know, Scientific American (June 30, 2023), https://www.scientificamerican.com/article/lab-grown-meat-approved-for-sale-what-you-need-to-know/.

2. Upside is approved for sale in the US! Here’s what you need to know, UPSIDE Foods (June 21, 2023), https://www.upsidefoods.com/blog/upside-is-approved-for-sale-in-the-us-heres-what-you-need-to-know.


3. Human Food Made with Cultured Animal Cells, Food and Drug Admin. (March 21, 2023) https://www.fda.gov/food/food-ingredients-packaging/human-food-made-cultured-animal-cells.


4. Id.


5. Id.


6. Id.


7. Id.


8. Supra note 2.


9. Supra note 3.


10. Id.

11. Id.


12. Innovation in Food Production: Cultivated Meat, Animal Legal Def. Fund, https://aldf.org/article/innovation-in-food-production-cultivated-meat/.


13. Leah Douglas, Insight: Lab-grown meat moves closer to American dinner plates, Reuters (Jan. 23, 2023), https://www.reuters.com/business/retail-consumer/lab-grown-meat-moves-closer-american-dinner-plates-2023-01-23/.


14. Reuters, Even after green-light, lab-grown meat yet to take off in Singapore, Daily Sabah (Mar. 8, 2023), https://www.dailysabah.com/life/food/even-after-green-light-lab-grown-meat-yet-to-take-off-in-singapore.


15. American Meat Industry Protection Act of 2023, H.R. 4805, 118th Cong. (2023).


16. Eryn Terry, Note, The Regulation of Commercial Speech: Can Alternative Meat Companies Have Their Beef And Speak It Too, Vand. J. Ent. & Tech. L. 223 (2020).


17. Id.

New York Law Allows a Notary in New York to Notarize Documents for an Individual Anywhere in the World

By: Tracy Costanzo

Throughout the pandemic, many states allowed remote online notarization (“RON”) on a temporary basis. This allowed businesses to continue operating when in-person commerce was at a standstill. On January 15, 2021, Senate Bill 1780 was introduced, authorizing traditional Notaries to perform electronic notarial acts using specific technology.[1] On February 1, 2023, Senate Bill 1780 went into effect, enabling RONs in New York State on a permanent basis.[2]

The new law enables a notary sitting in New York to notarize documents for an individual anywhere in the world.[3] The technology allows business to be conducted regardless of where the individual signing is located, provided that the notary complies with the requirements set forth in the new statute. In order to become a remote online notary, an individual must hold a current commission as a traditional Notary Public, contract with a technology provider that meets the state requirements, register with the Secretary of State, and purchase a required Notary journal.[4]

In order to meet the requirements necessary to complete RONs, the Notary Public will need a computer, webcam, microphone, and secure internet connection.[5] As with many other states, New York requires that all RONs be recorded with audiovisual technology.[6] There are currently several providers that offer the necessary technology, including DocuSign, DocVerify, and SIGNix, to name a few.[7] In addition, Notaries are required to have an electronic stamp and a digital certificate containing their electronic signature, and a journal is required.[8]

Utilizing the current online providers, the signer and Notary meet virtually through the platform to complete the notarization.[9] In addition, if witnesses are necessary, they will also meet through the platform throughout the signing process. The Notary is responsible for verifying the signer’s identity, as well as the witnesses’ identities, if necessary.[10] The identity verification is done directly through the service provider. This regulation allows the remote Notary to rely upon the information received from the service provider for verifying the identity of the individual(s) signing.[11] Further, the Notary is also responsible for confirming that the signer understands and is aware of what they are signing.[12] Finally, Remote Notaries are required to keep an audio and video recording of the electronic notarization for ten (10) years.[13]

Once the document has been electronically notarized, it may be emailed, printed, faxed, or sent by other electronic means. However, in order to record an electronically signed document, the Remote Notary must provide a Certificate of Authenticity substantiating that no changes have been made to the document since the electronic signature and record were created.[14] The new law provides more security measures than a traditional notarization and will likely reduce the number of fraudulent transfers.[15]


[1] How to Become a Remote Online Notary in New York, https://www.nationalnotary.org/knowledge-center/remote-online-notary/how-to-become-a-remote-online-notary/new-york (last updated Feb. 14, 2023)

[2] Id.

[3] Michael A. Markowitz, The Future is Here: New York Approves Remote Online Notarization, https://nysba.org/the-future-is-here-new-york-approves-remote-online-notarization (Apr. 11, 2023)

[4] Id.

[5]  Supra, Note 1.

[6]  Id.

[7]  Id.

[8]  Id.

[9]  Markowitz, supra Note 3.

[10]  Id.

[11]  Id.

[12]  Id.

[13]  Id.

[14] Markowitz, supra Note 3.

[15] Id.

The Impending Death of Manual Bluebook Citations: How AI Will Reshape Legal Education

By Michael Roy Ortizo

Legal education has long prioritized the formalities of citation techniques, such as those found in the Bluebook. Students often find themselves learning an entirely new citation style upon entering law school, even after mastering other writing forms like the Chicago Manual of Style during their undergraduate studies.[1] However, rapid advancements in AI present an opportunity for a shift within the legal landscape that places greater emphasis on the substance of legal writing as opposed to the form of legal citations.

Student-run law journals underscore the importance of providing accurate legal citations, with many journals administering Bluebook exams as part of their application process. In 2012, Staci Zaretsky reported on a University of Richmond School of Law student who launched a guerrilla campaign against the law review’s Bluebook exam, arguing that the 40-hour exam was excessive and unnecessary compared to four-hour standard final exams.[2] This incident sparked controversy and divided opinions among law students regarding the necessity and value of the Bluebook exam.[3]

Traditionally, law students have dedicated significant time and effort to mastering Bluebooking citation forms for legal publications. Rothman, however, contends that perfect Bluebooking is often less critical in legal practice than in law school.[4] The introduction of AI in legal spaces offers a chance for a paradigm shift, enabling scholars to concentrate on the substantive content of legal writing.

While AI models like ChatGPT have been met with skepticism in the legal community due to concerns about trust, security, and cost, recent advancements demonstrate improvements in accuracy.[5] For instance, GPT-4, ChatGPT’s newest AI model, recently passed a simulated bar exam, scoring in roughly the top 10 percent of test-takers, a significant leap from the previous model, GPT-3.5, which scored in the bottom 10 percent.[6]

Capitalizing on these advancements, numerous organizations have developed AI-powered tools for various industries.[7] LegalEase, for example, offers an AI tool that assists law students, lawyers, and other legal professionals with Bluebook citations, claiming to generate 100% accurate citations in seconds.[8]
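To give a sense of the kind of mechanical work such tools automate, here is a toy, rule-based sketch that assembles a Bluebook-style case citation from structured fields. It is not LegalEase’s product or API, and a real AI tool would handle far more citation types and edge cases; the class and function names are my own.

```python
from dataclasses import dataclass

# Toy illustration of the formatting work an AI citation tool automates.
# This is NOT LegalEase's product or API -- just a rule-based sketch of a
# Bluebook-style case citation built from structured fields.

@dataclass
class Case:
    party_one: str
    party_two: str
    volume: int
    reporter: str
    first_page: int
    pin_page: int | None
    court: str
    year: int

def bluebook_case_citation(c: Case) -> str:
    """Assemble a full case citation in Bluebook order."""
    pin = f", {c.pin_page}" if c.pin_page is not None else ""
    return (f"{c.party_one} v. {c.party_two}, {c.volume} {c.reporter} "
            f"{c.first_page}{pin} ({c.court} {c.year}).")

print(bluebook_case_citation(
    Case("Gonzalez", "Google LLC", 2, "F.4th", 871, 880, "9th Cir.", 2021)
))
# Gonzalez v. Google LLC, 2 F.4th 871, 880 (9th Cir. 2021).
```

Hard-coded rules like these cover only the simplest cases; the appeal of AI-based tools is that they promise to handle the long tail of statutes, regulations, and short-form citations without a student memorizing every rule.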

AI holds the potential to reshape the future of legal education, driving a much-needed shift towards substantive content. Although proper Bluebooking may not always be as important in legal practice as it is in law school, AI can alleviate the pressure on students and legal professionals to perfect their citation skills. The potential disruption of the formalisms associated with Bluebooking opens up a wealth of opportunities for legal scholars to allocate more resources to increasing their contributions to legal academia, ultimately enhancing the quality and impact of legal scholarship.


[1] Jordan Rothman, Perfect Bluebooking Is Less Important in Legal Practice Than in Law School, Above the Law (Jan. 19, 2022), https://abovethelaw.com/2022/01/perfect-bluebooking-is-less-important-in-legal-practice-than-in-law-school/.

[2] Staci Zaretsky, Law Student Revolts Against Law Review’s Bluebook Exam, Above the Law (Mar. 22, 2012), https://abovethelaw.com/2012/03/law-student-revolts-against-law-reviews-bluebook-exam/.

[3] Id.

[4] Rothman, supra note 1.

[5] Thomas Bacas, ANALYSIS: Will ChatGPT Bring AI to Law Firms? Not Anytime Soon, Bloomberg L. (Dec. 28, 2022), https://news.bloomberglaw.com/bloomberg-law-analysis/analysis-will-chatgpt-bring-ai-to-law-firms-not-anytime-soon.

[6] Kyle Wiggers, OpenAI Releases GPT-4, a Multimodal AI That It Claims Is State-of-the-Art, TechCrunch (Mar. 14, 2023), https://techcrunch.com/2023/03/14/openai-releases-gpt-4-ai-that-it-claims-is-state-of-the-art/.

[7] Sunny Betz, The 15 Best AI Tools to Know, BuiltIn (June 9, 2022), https://builtin.com/artificial-intelligence/ai-tools.

[8] LegalEase Citations, About LegalEase, https://legaleasecitations.com/#about-legalease (last visited Apr. 11, 2023).

Innocent Until a Robot Says You’re Guilty: ChatGPT’s First Defamation Suit (Maybe)

By Emily D’Agostino

ChatGPT is a highly sophisticated language model that generates human-like text responses to user inputs. From inception, large language models like ChatGPT have faced widespread criticism over the concern that they may exacerbate the already rampant misinformation crisis. Particularly, some critics have expressed fear that these models could have the capability to defame individuals, causing irreparable harm.


This fear recently materialized when Australian mayor Brian Hood began receiving reports that ChatGPT had labeled him a criminal. Specifically, the AI chatbot identified Hood as one of the guilty parties in a major bribery conspiracy that took place in the early 2000s. Between 1999 and 2004, several Australian business executives used Note Printing Australia, a subsidiary of the Reserve Bank of Australia, to bribe foreign officials. While Hood was employed by the subsidiary at the time, he was actually the whistleblower who reported the illicit payments to the authorities. Hood was never convicted of, or even charged with, any crime in connection with the scandal.


Hood fears that the AI’s allegations could cause serious damage to his reputation, as he is an elected official. His lawyers have given OpenAI, the company responsible for creating ChatGPT, a 28-day period to correct the error before they proceed with what would be the first-ever defamation suit over content generated by artificial intelligence. This incident may be only the beginning, and governments should consider establishing new frameworks to regulate these large language models.


The obvious existing legal framework to apply in situations where AI disseminates harmful false information about an individual or entity is tort law. This is the route Hood and his attorneys plan to take in alleging defamation. Although tort law varies by jurisdiction, a person has been defamed, generally, where a false statement about them is published or communicated, and the statement causes reputational harm. The difficulty with relying on tort law in seeking recourse against AI is that the accused AI may have been created by a combination of people and companies, making it difficult to identify the responsible party. Further, as AI becomes more independent, human influence might become too far removed altogether to impose liability.


Some organizations and governing bodies have issued guidance for the development and implementation of large language models. The Partnership on AI developed ethical guidelines requiring transparency, fairness, and accountability. The EU initially envisioned a risk-based regulatory framework that would impose varying market approval requirements on AI developers corresponding to the risk of harm posed by their AI. In September 2022, the European Commission formally proposed the AI Liability Directive (AILD), which contains a set of uniform rules covering certain aspects of civil liability for damage caused by AI systems.


Overall, the emergence of advanced language models raises major ethical and legal concerns. Many of the theoretical fears surrounding AI are beginning to come to fruition. Consequently, legislators should take action and attempt to ascertain the most effective means of regulating this complex space in order to minimize the impending societal harms.

Section 230: The Battle for Content Moderation

By Brad Balach

For nearly 30 years, Section 230 of the Communications Decency Act has shielded internet platforms from the liabilities that third-party content can create.[1] Initially, the authors of Section 230 intended it to clarify the liability of online services for content posted by others on their platforms.[2] However, as social media companies emerged and benefited from favorable judicial interpretations over the decades, Section 230 has become a source of total immunity, despite some of the harmful and tragic events to which platform use has contributed.[3]

With little regulation, internet platforms have had the latitude to moderate content as little or as much as they see fit. That may change soon, as the Supreme Court is expected to rule on a pair of cases addressing Section 230 and the content moderation practices of social media platforms.[4]

In Gonzalez v. Google, the family of a victim of the November 2015 ISIS attack in Paris alleges that the Google-owned service YouTube was used by ISIS to recruit and radicalize combatants, and that the platform provided material support to terrorists by sharing advertising revenue.[5] The Ninth Circuit Court of Appeals dismissed the case, citing Section 230 protection for YouTube and the revenue sharing being part of normal business.[6]

In Twitter v. Taamneh, family members of a victim of a 2017 ISIS attack in Istanbul alleged that Twitter, Google, and Facebook aided and abetted ISIS by allowing the distribution of its material without editorial supervision.[7] The Ninth Circuit found that the companies could face claims for playing an assistive role.[8] Both cases were granted certiorari and completed oral arguments in the first quarter of 2023.[9]

The plaintiffs in the Gonzalez and Taamneh cases have argued that the assumption made in Section 230 that online platforms are simply transporting the work of third parties does not accurately reflect how companies utilize digital technology today.[10] They contend that algorithmic recommendation, which is a standard feature on most platforms, transforms them from an interactive computer service protected under Section 230 to an unprotected information content provider.[11]

Several co-authors of Section 230 have argued in an amicus brief that the law anticipated recommendation algorithms and content curation efforts.[12] The Department of Justice also submitted an amicus brief arguing that algorithmic promotion is a distinct form of conduct, and that social media platforms differ from an idealized public square because they are closed businesses designed to maximize revenue.[13] The impact of these factors on Section 230’s liability protections is likely to be a major issue before the Supreme Court.

Justice Thomas has hinted in the past at two possible approaches that the Court could take in guiding their decision in a Section 230 case.[14] In 2020, Justice Thomas noted that many courts have interpreted Section 230 too broadly, and thus platforms have been given total immunity for distributed content.[15] He suggested that scaling back this immunity would not necessarily make these companies liable for online misconduct, but it would give plaintiffs a chance to bring claims against them.[16]

The second proposed approach suggested that some platforms may be regulated as common carriers or places of public accommodation.[17] This concept traditionally applies to telephone companies, and the plaintiffs in Gonzalez and Taamneh argue that online platforms are part of the communications infrastructure, providing a potential opening to make this argument.[18]

The Court’s decision on the content moderation issue is expected to spark public debate and prompt calls for Congress to take the lead in making decisions rather than leaving it to the courts. It remains uncertain how the Court will address the Section 230 issue, as there are several possible directions it could take regarding content moderation, but what is certain is that we await a decision that will likely change the landscape of the internet for decades to come.


[1] Nina Totenberg, Supreme Court showdown for Google, Twitter and the social media world, NPR (Feb. 21, 2023) https://www.npr.org/2023/02/21/1157683233/supreme-court-google-twitter-section-230.

[2] Id.

[3] Id.

[4] Id.

[5] Gonzalez v. Google LLC, 2 F.4th 871, 880 (9th Cir. 2021).

[6] Id.

[7] Twitter, Inc. v. Taamneh, 214 L. Ed. 2d 12, 143 S. Ct. 81 (2022).

[8] Id.

[9] Supra note 1.

[10] Mark McCarthy, Congress Should Reform Section 230 in Light of the Oral Argument in Gonzalez, Lawfare (Mar. 22, 2023) https://www.lawfareblog.com/congress-should-reform-section-230-light-oral-argument-gonzalez.

[11] Id.

[12] Tom Wheeler, The Supreme Court takes up Section 230, Brookings (Jan. 31, 2023) https://www.brookings.edu/blog/techtank/2023/01/31/the-supreme-court-takes-up-section-230/.

[13] Id.

[14] Id.

[15] Id.

[16] Id.

[17] Wheeler, supra note 12.

[18] Wheeler, supra note 12.

Could Proposed Bills to Lower Pharmaceutical Drug Pricing Influence Trade Secrecy over Patent Intellectual Property Protection?

By Renee Sanchez

At the beginning of 2023, Americans saw price hikes from big pharmaceutical companies (collectively known as “big pharma”) around the same time that new legislation was proposed in the Senate seeking to lower pharmaceutical drug prices by targeting anticompetitive patent strategies and antitrust abuses.[1] Two of these five bills could influence big pharma patent strategy. The first is S.79, the Interagency Patent Coordination and Improvement Act of 2023, which aims to improve communication and coordination between the U.S. Patent and Trademark Office (USPTO) and the U.S. Food and Drug Administration (FDA). The second is S.150, the Affordable Prescriptions for Patients Act, which aims to restrict anticompetitive “product hopping.”[2]

Basics of “Product Hopping”

“Product hopping” occurs when a company makes minimal changes to its product without any substantial benefit compared to the original. The company may take the original product off the market completely, known as a “hard switch,” or make a “soft switch,” keeping the drug on the market until a generic is released so that physicians and patients can decide whether the benefits of the new formulation are significant enough to switch prescriptions.[3] Often, companies will make a hard switch just as their patent is about to expire, preventing others from creating generic or biosimilar products.[4]

Courts have historically found liability in some hard-switch cases but have failed to recognize antitrust or anticompetitive behavior in soft switches.[5] It has been suggested that the best way to address product hopping is through legislation; S.150 proposes how to identify and address both hard switches and soft switches in terms nearly identical to legislation proposed in 2021, S.1435, the Affordable Prescriptions for Patients Act of 2021.[6]

Patents Traditionally Used for Pharmaceuticals

Patents are traditionally used in the biomedical industry to encourage innovation and disclosure. Pharmaceuticals are commonly best protected by patents because pharmaceutical compounds can often be reverse engineered.[7] Additionally, the disclosures required throughout the commercialization process often reveal protected information.[8] Trade secrets can, however, be used safely for other information, such as “know-how” and manufacturing information not revealed in the patent.[9]

Patents offer a sort of “quid pro quo”: the company specifies and discloses its technology, and if all requirements of patentability are met, it is awarded a monopoly on that invention for about 20 years.[10] After that point, the patent is dedicated to the public and can be used by anyone.[11] To get around the loss of monopoly over a pharmaceutical invention, a company may engage in anticompetitive behavior such as “product hopping.”

Consequences of Using Trade Secret Strategy

A company seeking to skirt antitrust laws and “product hop” may also find other ways to monopolize its inventions, such as through trade secrecy. Although trade secrecy has an appropriate place in the biomedical industry, critics suggest that its use can limit access to quality and affordable medications.[12] Conversely, maintaining trade secrets has allowed scientists from research universities and industry corporations to collaborate.[13]

Although there may be some benefits to using trade secrecy for a pharmaceutical compound, the risks may far outweigh those benefits. The following are a few examples weighing these benefits and risks:

  • If an original medication or ‘reference product’ is protected by a trade secret rather than a patent, a company developing a generic may be able to reverse engineer the product and provide a less expensive medication to the public. On the other hand, this puts the original company’s trade secret at risk of independent invention.
  • If the pharmaceutical product is difficult to reverse engineer, the original pharmaceutical company may be able to maintain its trade secret for an indeterminate amount of time, thus stifling innovation and monopolizing the market for that drug.
  • However, the original inventor could license the chemical formula protected by trade secret to a company manufacturing a generic, earning royalties or some other type of financial compensation while providing the market with a low-cost alternative. This option would promote competition while earning the original inventor or manufacturer additional financial incentives and protecting their trade secret from misappropriation.

There is a fine balance that pharmaceutical companies, as well as legislators, need to strike between bottom-line financial benefits, reducing monopolies, benefiting the public, and protecting intellectual property. Despite the implications for patents that the currently proposed legislation may have, pharmaceutical companies will likely continue to use patents over trade secrets to protect their pharmaceutical compounds.


[1] S. 79, 118th Cong. (2023); S. 150, 118th Cong. (2023); S. 113, 118th Cong. (2023); S. 142, 118th Cong. (2023); S. 148, 118th Cong. (2023); Jeff Overley, HHS Memos, 5 Senate Bills Target Drug Prices And Tactics, Law360 (Feb. 9, 2023, 10:53 PM EST), https://www.law360.com/articles/1573737.

[2] Overley, supra note 1.

[3] Michael A. Carrier, A Simple Solution to the Problem of “Product Hopping,” Harv. Health Pol’y Rev. (Dec. 23, 2021) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4000867.

[4] Id.

[5] Id. at 1-2.

[6] Id.; S. 150, 118th Cong. (2023).

[7] William Dean, Can you keep a trade secret? Understanding pharmaceutical IP, Barker Brettell Intellectual Property (May 19, 2022). Powder River Basin Res. Council v. Wyo. Oil & Gas Conservation Comm’n, 320 P.3d 222, 227 (Wyo. 2014) (referring to deformulation of chemical compounds).

[8] Dean, supra note 7.

[9] Id.

[10] Ariad Pharm., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1345 (Fed. Cir. 2010).

[11] Choosing Between Trade Secret and Patent Protection: A Primer for Businesses, Law.com (May 12, 2022) https://plus.lexis.com/api/permalink/78c7f494-3b24-43d0-acba-1dc59d4a6592/?context=1530671.

[12] Allison Durkin et al., Addressing the Risks That Trade Secret Protections Pose for Health and Rights, Health & Hum. Rts. (June 2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8233014/.

[13] Mark F. Schultz, Trade Secrecy and COVID-19, Geneva Network (October 5, 2022), https://geneva-network.com/research/trade-secrecy-and-covid-19/.

The Digital Age: Lawmaking and Keeping up with Constant Innovation

By Elliot Malin

Technology, science, biomedicine. This is how our society advances towards the future. Is it always necessary to regulate this developing technology before understanding the potential impact on our communities?  

As new technologies develop, we see an increasing effort to regulate this space, with varying results.[1] From cryptocurrency to medical development, our states and federal government struggle to find answers in this new digital and advanced era.[2]

This requires us to adapt. That adaptation does not always come easily, and it is often met with skepticism and frustration. One example is the Uniform Law Commission’s (“ULC”) attempt to create a new area of uniform law surrounding cryptocurrency and blockchain technology.[3]


Various states have attempted this with limited success.[4] The question eventually becomes: are we trying to regulate before we are ready? When these new developing technologies come to fruition, how can we be effective in our lawmaking if we do not yet know what that technology will look like?

These are the questions that lawmakers struggle with in this area. To help with this issue, the ULC is supposed to provide expertise on the technology and on what lawmakers are trying to accomplish. Unfortunately, that is not always the case. As we saw in 2019, the ULC brought forth legislation in various states, with no state taking a bite at the apple. States feared a potential competitive disadvantage and being the first to take the leap.[5]

What the ULC failed to see was that the states were not clamoring for this regulation. What the states were trying to do was feel out how the technology would impact their economies before making a big regulatory change. Wyoming, for example, has been at the forefront of blockchain and cryptocurrency regulation without the need to adopt a uniform law.[6]

When we jump to regulate, we often forget the purpose of regulation, which is, in theory, to protect the public. If no potential harm has materialized, then regulation all too often becomes a means to stop innovation.

Innovation breeds a stronger economy and these advancements help us move forward to a healthier environment.

It is perfectly acceptable not to write a law when there is no need for that law. Otherwise, we are creating solutions for problems that may not exist, and we are better off using our valuable and finite time to regulate where it is actually necessary.


[1] Elizabeth Penava, New Technology Will Raise New Legal Questions, The Regulatory Review (Jan. 31, 2023), https://www.theregreview.org/2023/01/31/penava-new-technology-will-raise-new-legal-questions/.

[2] Daniel Malan, The law can’t keep up with new tech. Here’s how to close the gap, World Economic Forum (Jun. 21, 2018), https://www.weforum.org/agenda/2018/06/law-too-slow-for-new-tech-how-keep-up/.

[3] Uniform Regulation of Virtual-Currency Businesses Act, Uniform Law Commission (2017), https://www.uniformlaws.org/committees/community-home?CommunityKey=e104aaa8-c10f-45a7-a34a-0423c2106778.

[4] Pamela Michaels Fay, Cryptocurrency Regulation: The Evolving Landscape, The Lumin Lab, https://lumindigital.com/lumin-lab/cryptocurrency-regulation-the-evolving-landscape/.

[5] Letter in Opposition to SB 195 – Enacts the Uniform Regulation of Virtual-Currency Businesses Act and the Uniform Supplemental Commercial Law: Hearing Before the Sen. Comm. on the Judiciary, 2019 Leg., 80th Sess. (Nv. 2019) (statement of Elliot Malin, Nevada Technology Association) (https://www.leg.state.nv.us/App/NELIS/REL/80th2019/ExhibitDocument/OpenExhibitDocument?exhibitId=37874&fileDownloadName=SB%20195_Letter%20of%20Opposition_Elliot%20Malin.pdf).

[6] Elena Botella, Wyoming Wants to Be the Crypto Capital of the U.S., Slate (Jun. 28, 2021, 8:30 AM), https://slate.com/technology/2021/06/wyoming-cryptocurrency-laws.html.