By: Mandy Mobley Li

As Covid-19 has made it necessary for people to keep their distance from each other, robots are stepping in to fill essential roles.[1] There are signs that people may be increasingly receptive to robotic help;[2] however, the legal industry has been reluctant to embrace intelligent, independent machines, and for good reason. According to the ABA’s 2020 Legal Technology Survey Report, only a very small percentage of law firms use artificial intelligence-based tools.[3] Roughly 35% of respondents to the ABA survey listed the accuracy of AI technology as a major concern about implementing AI-based tools, while 33% said cost was a major issue.[4]

Meanwhile, some governments have been tinkering with inserting artificial intelligence and machine learning into public services. For example, Estonia has deployed AI or machine learning in 13 places where an algorithm has replaced government workers.[5] Farmers who receive government subsidies to cut their hay fields each summer are automatically notified two weeks before the mowing deadline with a link to a satellite image of their field taken by the European Space Agency.[6] These satellite images are fed into a deep-learning algorithm and overlaid onto a map of the fields, which must be mowed to keep them from turning into forest over time.[7] The algorithm assesses each pixel in the images, determining whether that patch of field has been cut.[8] Before this AI system was deployed, human inspectors would physically check on farmers who receive government subsidies.[9] Today, an inspector will still drive out in cases where something has thrown off the image processing, such as cattle grazing or partial cutting.[10] Estonia claims the new system saved $755,000 in its first year because inspectors made fewer site visits and could focus on other enforcement actions.[11]
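
For illustration, the pixel-level check Estonia describes can be sketched in a few lines of Python. The country’s actual deep-learning model has not been published, so the vegetation-index heuristic, thresholds, and field names below are assumptions standing in for a trained classifier, not the deployed system:

```python
# Hypothetical sketch of per-pixel "mown / not mown" classification over
# satellite imagery. Estonia's real model is not public; this stand-in
# uses a simple vegetation-index change threshold to show the general
# idea: score every pixel, then aggregate over the field parcel.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, a common greenness measure."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def mown_fraction(before: dict, after: dict, drop_threshold: float = 0.2) -> float:
    """Fraction of pixels whose greenness fell enough to count as 'cut'."""
    change = ndvi(before["red"], before["nir"]) - ndvi(after["red"], after["nir"])
    return float((change > drop_threshold).mean())

# Synthetic stand-in for two satellite passes over one field.
rng = np.random.default_rng(0)
shape = (100, 100)
before = {"red": rng.uniform(0.05, 0.15, shape), "nir": rng.uniform(0.5, 0.8, shape)}
after = {"red": rng.uniform(0.2, 0.4, shape), "nir": rng.uniform(0.25, 0.45, shape)}

frac = mown_fraction(before, after)
# Borderline parcels (e.g., grazing or partial cutting) go to a human inspector.
status = "mown" if frac > 0.9 else "send inspector" if frac > 0.3 else "not mown"
print(f"mown fraction: {frac:.2f} -> {status}")
```

The real pipeline presumably replaces the fixed threshold with a trained model, but the shape of the decision is the same: score every pixel, aggregate per parcel, and route the ambiguous parcels to a human inspector.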

However, many government deployments of algorithm-based systems have experienced significant downsides. For example, the state of Idaho attempted to institute an algorithm for allocating home care and community integration funds.[12] When the new formula took effect, the funds allocated to care for severely disabled people immediately dropped, for many recipients by as much as 42 percent.[13] When the people whose benefits were cut tried to learn how their allotments had been calculated, Idaho declined to disclose the formula it was using, claiming that its math qualified as a trade secret.[14] The local ACLU branch sued on behalf of the program’s beneficiaries, arguing that Idaho’s actions had deprived them of their right to due process.[15] In court, it was revealed that the government had relied on deeply flawed data and that the average person could not understand or challenge the new system.[16] The court held that the formula itself was unconstitutional because it effectively produced arbitrary results for many beneficiaries.[17] The judge ordered Idaho to overhaul the program with regular testing, regular updating, and the use of quality data.[18]
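
To see how flawed data alone can render such a formula arbitrary, consider a toy allocation rule. Idaho withheld its actual formula as a trade secret, so every field, weight, and dollar figure in this sketch is invented:

```python
# Hypothetical illustration of how flawed records can make an allocation
# formula arbitrary. Idaho never disclosed its real formula, so the
# inputs and weights below are invented for demonstration only.
from typing import Optional

def annual_budget(care_hours_week: Optional[float], acuity_score: Optional[float]) -> float:
    """Toy formula: missing fields silently default to zero, as flawed
    historical records often do, instead of being flagged for review."""
    hours = care_hours_week or 0.0
    acuity = acuity_score or 0.0
    return round(hours * 52 * 18.50 + acuity * 1200, 2)

# Two applicants with identical needs; one record lost its acuity score.
complete = annual_budget(care_hours_week=40.0, acuity_score=8.0)
flawed = annual_budget(care_hours_week=40.0, acuity_score=None)
print(f"complete record: ${complete:,.2f}")  # $48,080.00
print(f"flawed record:   ${flawed:,.2f}")    # $38,480.00 -- an arbitrary cut
```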

One explanation for Idaho’s flawed adoption of AI technology is the narrow focus of algorithmic development. Robot researchers are typically trained to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society.[19] Minority scientists are calling attention to disparate impact through manifestos describing their personal experience of the structural and institutional bias embedded in society.[20] Signatories of an open letter, entitled “No Justice, No Robots,” pledged to keep robots and robot research away from law enforcement agencies.[21] The letter proved controversial in the small world of robotics labs, since some researchers felt that shunning contact with the police was itself socially irresponsible.[22] One robotics researcher who chose not to sign the letter reiterated her position that biased algorithms result, in part, from the skewed demographic (white, male, able-bodied) that designs and tests the software.[23] Her concern was with whom law enforcement entities would ultimately end up working if the developers with strong ethical values chose not to engage.[24]

The lessons we learn from unwinding systemic bias in AI technologies could give us insight into how other forms of institutionalized oppression get inadvertently baked into algorithms. Rather than focus on shocking statistics, it may be more useful to look at the forces that shaped those statistics: the social and technical processes that have led to the unequal treatment of people, the associations that affect perceptions by implicitly linking minority groups with particular outcomes, and the role institutions play in guiding policies and preferences that perpetuate inequities.[25] Researchers are developing a variety of techniques to detect and quantify bias and are formulating key performance indicators (KPIs) to measure the impact of interventions meant to correct it.[26] In the long run, however, organizations like government agencies will need to adopt proactive auditing programs with ongoing reviews to track how well algorithms perform after they are released into the wild.[27] These programs would be akin to software quality audits within companies or institutional review boards within medicine.[28]
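
As a concrete example of such a KPI, an auditing program might track the gap in favorable-outcome rates across demographic groups. The sources here do not prescribe any particular metric, so this demographic-parity sketch is only one plausible, simplified choice:

```python
# Hypothetical sketch of one bias KPI an auditing program might track:
# demographic parity difference, i.e., the gap in favorable-outcome
# rates between groups. The metric, threshold, and data are invented.
import numpy as np

def demographic_parity_difference(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in approval rate between any two groups (0.0 = parity)."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Synthetic audit batch: 1 = benefit approved, 0 = denied.
decisions = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_difference(decisions, groups)
ALERT_THRESHOLD = 0.2  # hypothetical tolerance an agency might set
print(f"parity gap: {gap:.2f}" + ("  <- flag for review" if gap > ALERT_THRESHOLD else ""))
```

No single metric captures every notion of fairness, which is one reason ongoing review, rather than a one-time pre-release check, matters.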

Identifying and rooting out bias within intelligent government solutions will often compete with business as usual. It may mean declining to adopt an AI technology because no one can explain why minority groups do not achieve the same outcomes as majority groups.[29] It could mean having the courage to withhold an AI-based solution from deployment until the playing field has been leveled along class indicators.[30] Progress will require leadership at the highest levels and sustained effort throughout an entire agency.[31] While building AI is a technology challenge, using AI responsibly draws on disciplines well beyond software development, such as social science, law, and politics.[32] Responsibly managing the use of AI is no longer just a desirable component of progress but a necessary one.[33] Given government’s mixed results in implementing AI solutions, law firms must continue to recognize the pitfalls of human bias and the danger of replicating those biases in the machines of tomorrow, lest the illusory promise of efficiency come at the expense of fundamental constitutional rights.


[1] Jennifer Chu, What to Expect When You’re Expecting Robots, MIT News (Oct. 2020), https://news.mit.edu/2020/expect-when-expecting-robots-1022.

[2] Id.

[3] Lyle Moran, Law Firms are Slow to Adopt AI-based Technology Tools, ABA Survey Finds, ABA J. (Oct. 2020), https://www.abajournal.com/web/article/law-firms-are-slow-to-adopt-artificial-intelligence-based-technology-tools-aba-survey-finds.

[4] Id.

[5] Eric Niiler, Can AI Be a Fair Judge in Court? Estonia Thinks So, Wired (Mar. 2019), https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/.

[6] Id.

[7] Id.

[8] Id.

[9] Id.

[10] Niiler, supra note 5.

[11] Id.

[12] Colin Lecher, What Happens When an Algorithm Cuts Your Health Care, The Verge (Mar. 2018), https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy.

[13] Id.

[14] Id.

[15] Id.

[16] Jay Stanley, Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case, ACLU (June 2017), https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case.

[17] Id.

[18] Id.

[19] David Berreby, Can We Make Our Robots Less Biased Than We Are?, N.Y. Times (Nov. 2020), https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html.

[20] Id.

[21] Id.

[22] Id.

[23] Id.

[24] Berreby, supra note 19.

[25] George Lawton, Rooting Out Racism in AI Systems – There’s No Time to Lose, TechTarget (Aug. 2020), https://searchcio.techtarget.com/feature/Rooting-out-racism-in-AI-systems-theres-no-time-to-lose.

[26] Id.

[27] Id.

[28] Id.

[29] Gary Shiffman, We Need a New Field of AI to Combat Racial Bias, TechCrunch (July 2020), https://techcrunch.com/2020/07/03/we-need-a-new-field-of-ai-to-combat-racial-bias/.

[30] Id.

[31] Id.

[32] Id.

[33] Id.