By: Mandy Mobley Li
As Covid-19 has made it necessary for people to keep their distance from each other, robots are stepping in to fill essential roles. There are signs that people may be increasingly receptive to robotic help; however, the legal industry has been reluctant to embrace intelligent, independent machines, and for good reason. Artificial intelligence-based tools are used by only a very small percentage of law firms, according to the ABA’s 2020 Legal Technology Survey Report. Roughly 35% of respondents to the ABA survey listed the accuracy of AI technology as one of the major concerns about implementing AI-based tools, while 33% said cost was a major issue.
Meanwhile, some governments have been tinkering with inserting artificial intelligence and machine learning into public services. For example, Estonia has deployed AI or machine learning in 13 places where an algorithm has replaced government workers. Farmers who receive government subsidies to cut their hay fields each summer are automatically notified two weeks before the mowing deadline with a link to a satellite image of their field taken by the European Space Agency. These satellite images have been fed into a deep-learning algorithm and overlaid onto a map of fields to prevent the fields from reverting to forest over time. The algorithm assesses each pixel in the images, determining whether each patch of field has been cut. Prior to deploying this AI system, human inspectors would physically check on farmers who receive government subsidies. Today, an inspector will still drive out to check in cases where something has thrown off the image processing, such as cattle grazing or partial cutting. Estonia claims the new system saved $755,000 in its first year because inspectors made fewer site visits and focused on other enforcement actions.
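The triage logic described above can be sketched in a few lines. This is an illustrative reconstruction, not Estonia’s actual system: the per-pixel “cut” probabilities are assumed to come from an upstream deep-learning model, and the function names and thresholds (`needs_inspection`, `FIELD_CUT_THRESHOLD`) are invented for the example.

```python
# Hypothetical sketch of the per-pixel mowing check. Assumes an upstream
# model has produced, for each pixel of a field's satellite image, a
# probability that the patch has been mown. All names and thresholds
# here are illustrative, not drawn from the deployed system.

FIELD_CUT_THRESHOLD = 0.9  # fraction of pixels that must read as "cut"

def field_cut_fraction(pixel_probs, pixel_cutoff=0.5):
    """Fraction of pixels the model classifies as mown."""
    cut = sum(1 for p in pixel_probs if p >= pixel_cutoff)
    return cut / len(pixel_probs)

def needs_inspection(pixel_probs):
    """Flag a field for a human site visit when the imagery is ambiguous,
    as when cattle grazing or partial cutting throws off the classifier."""
    frac = field_cut_fraction(pixel_probs)
    # Clearly cut (or clearly uncut) fields are resolved automatically;
    # anything in between still goes to a human inspector.
    return 0.1 < frac < FIELD_CUT_THRESHOLD

print(needs_inspection([0.95, 0.92, 0.88, 0.97]))  # clearly cut -> False
print(needs_inspection([0.9, 0.2, 0.85, 0.3]))     # ambiguous  -> True
```

The design point the example illustrates is the one Estonia reports: automation handles the clear cases, while ambiguous imagery is routed back to a human rather than decided by the model.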
However, many government deployments of algorithm-based systems have experienced significant downsides. For example, the state of Idaho attempted to institute an algorithm for allocating home care and community integration funds. When the new formula was instituted, funds allocated to care for severely disabled people immediately dropped, for many people by as much as 42 percent. When the people whose benefits were cut tried to learn how their benefits were calculated, Idaho declined to disclose the formula it was using, saying that its math qualified as a trade secret. The local ACLU branch sued on behalf of the program’s beneficiaries, arguing that Idaho’s actions had deprived them of their rights to due process. In court, it was revealed that the government relied on deeply flawed data and that it was impossible for the average person to understand or challenge the new system. The court held that the formula itself was unconstitutional because it was effectively producing arbitrary results for many people. The judge ordered that Idaho overhaul its program with regular testing, regular updating, and the use of quality data.
One explanation for Idaho’s flawed adoption of AI technology is the narrow focus of algorithmic development. Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. Minority scientists are calling attention to disparate impact through manifestos, which describe their personal experience of the structural and institutional bias that is integrated into society. Signatories of an open letter, entitled “No Justice, No Robots,” pledged to keep robots and robot research away from law enforcement agencies. This letter proved controversial in the small world of robotics labs, since some researchers felt that it was not socially responsible to shun contact with the police. One robotics researcher who chose not to sign the letter reiterated her position that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software. Her concern was who would ultimately end up working with law enforcement if the developers with strong ethical values chose not to.
The lessons we learn from unwinding systemic bias in AI technologies could give us insight into how other forms of institutionalized oppression get inadvertently baked into algorithms. Rather than focus on shocking statistics, it may be more useful to look at the forces that shaped those statistics, such as the social and technical processes that have led to the unequal treatment of people, the associations that affect perceptions by implicitly linking minority groups with particular outcomes, and the role institutions play in guiding policies and preferences that perpetuate inequities. Researchers are developing a variety of techniques to help detect and quantify bias and are formulating KPIs to measure the impact of various interventions to correct biases. In the long run, however, organizations like government agencies will need to institute proactive auditing programs with ongoing reviews to detect how well algorithms perform after they are released into the wild. These programs would be akin to software quality audits within companies or institutional review boards within medicine.
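One of the simplest bias metrics such auditing programs rely on can be sketched concretely. The example below computes the demographic parity difference, the gap in favorable-outcome rates between two groups; the decision data and group labels are invented for illustration and are not drawn from any system discussed in this article.

```python
# Illustrative sketch of one common bias metric, the demographic parity
# difference: the gap in favorable-outcome rates between two groups.
# All data here is hypothetical audit data, invented for the example.

def favorable_rate(outcomes):
    """Share of decisions that were favorable (1 = granted, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap between the groups' favorable-outcome rates.
    A gap near 0 suggests parity; a large gap is a signal that the
    algorithm's decisions warrant a closer audit."""
    return abs(favorable_rate(group_a) - favorable_rate(group_b))

# Hypothetical benefit decisions for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 0.75 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 = 0.375 approved

print(demographic_parity_difference(group_a, group_b))  # 0.375
```

A metric like this does not explain *why* a gap exists — that requires examining the data and institutional processes behind it — but it gives an auditing program a concrete, repeatable number to track across releases.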
Identifying and rooting out bias within intelligent government solutions will often compete with business as usual. For example, it may mean not adopting AI technologies because no one can figure out why minority groups are not having the same outcomes as majority groups. It could mean having the courage not to deploy or release an AI-based solution until the playing field has been leveled along class indicators. Progress will require leadership at the highest levels and sustained effort throughout an entire agency. While building AI is a technology challenge, using AI responsibly requires disciplines beyond software development, such as social science, law, and politics. Responsibly managing the use of AI is no longer just a desirable component of progress but a necessary one. Given government’s mixed results in implementing AI solutions, law firms must continue to recognize the pitfalls of human bias and avoid replicating those biases in the machines of tomorrow, lest the illusory promise of efficiency come at the expense of fundamental constitutional rights.
 Jennifer Chu, What to Expect When You’re Expecting Robots, MIT News (Oct. 2020), https://news.mit.edu/2020/expect-when-expecting-robots-1022.
 Lyle Moran, Law Firms are Slow to Adopt AI-based Technology Tools, ABA Survey Finds, ABA J. (Oct. 2020), https://www.abajournal.com/web/article/law-firms-are-slow-to-adopt-artificial-intelligence-based-technology-tools-aba-survey-finds.
 Eric Niller, Can AI Be a Fair Judge in Court? Estonia Thinks So, Wired (Mar. 2019), https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/.
 Niller, supra note 5.
 Colin Lecher, What Happens When an Algorithm Cuts Your Health Care, The Verge (Mar. 2018), https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy.
 Jay Stanley, Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case, ACLU (Jun. 2017), https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case.
 David Berreby, Can We Make Our Robots Less Biased Than We Are?, The New York Times (Nov. 2020), https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html.
 Berreby, supra note 19.
 George Lawton, Rooting Out Racism in AI Systems – There’s No Time to Lose, TechTarget (Aug. 2020), https://searchcio.techtarget.com/feature/Rooting-out-racism-in-AI-systems-theres-no-time-to-lose.
 Gary Shiffman, We Need a New Field of AI to Combat Racial Bias, TechCrunch (July 2020), https://techcrunch.com/2020/07/03/we-need-a-new-field-of-ai-to-combat-racial-bias/.