By: Scott Ceurvels

While adapting to an online learning environment largely necessitated by the spread of COVID-19, many academic institutions have begun using AI-proctoring technology in an effort to reduce cheating and preserve academic integrity as exams approach.[1] Students and faculty have met this decision with deep skepticism and opposition, joined by experts who describe an eerie sense of déjà vu.[2]

In June 2020, IBM, Microsoft, and Amazon all refused to permit law enforcement to use their facial recognition technology.[3] The concern? While the technology is advancing rapidly, numerous flaws that undermine its fair and equitable performance remain unremedied.[4]

Facial recognition technology is known to be far from perfect, and the developers' refusal to release it for such consequential use further evidences that these imperfections are explained, in part, by the measurable bias exhibited in its performance.[5]

While AI technology has improved significantly in recent years, the accuracy of its performance still varies by gender and race.[6] Facial recognition has consistently been most successful at identifying white males, while accuracy drops substantially when identifying women and people of color.[7] This bias is so widely recognized that U.S. lawmakers have gone as far as proposing a federal moratorium banning its use in law enforcement unless and until such bias can be eliminated from the technology.[8]

Even the U.S. Department of Commerce has confirmed these concerns: in a December 2019 report, the National Institute of Standards and Technology found that many facial recognition algorithms are anywhere from 10 to 100 times more likely to falsely identify African American or Asian faces than white faces, and are also more likely to falsely identify women than men.[9]

But this is about academia, not law enforcement, so what's the connection? Facial recognition, along with object recognition, eye-movement detection, voice and audio recognition, and the collection of other biometric data, are key components of AI-proctoring technology.[10] Consequently, racial bias, along with invasions of privacy, information security, and a host of other concerns, has been at the center of the conversation surrounding the use of AI-proctoring technology.[11]

Despite these known flaws, students around the country are being compelled to comply with their institutions' integration of AI-proctoring technology, as their grades depend on the exams this technology will facilitate.[12] To many students, being required to let these for-profit corporations access and control their devices in order to record them while they take exams feels like an invasion of privacy.[13] Furthermore, the security (or lack thereof) of these practices in the face of massive data breaches raises additional concerns that the data these companies collect could be exploited.[14]

As students around the country enter the final stretch of a semester shaped by a once-in-a-lifetime global pandemic, anxiety is high and certainty is low.[15] Academic institutions face growing scrutiny over their focus on academic integrity, while students protest yet another layer of uncertainty being thrust upon them.[16]

Many academics have insisted that technology is not the solution to cheating, which is a complex social problem that must instead be addressed by reframing the approach to academic assessment.[17] Rather than doubling down on the online systems that have fostered an environment in which cheating has increased, institutions should focus on adapting to the circumstances they face.[18]

Whether by altering formats to allow open-book examinations or by shifting toward written, paper-based evaluations, there are far less controversial alternatives, each supported by substantially more data on its effectiveness in measuring students' performance.[19]

While concerns about cheating are undeniable, academic institutions that choose to adopt AI-proctoring technology raise the question: is this stringent focus on academic integrity really the most effective approach, given the lack of evidence in its favor and the potential detriment to students' well-being?


[1] Clive Thompson, What AI College Exam Proctors Are Really Teaching Our Kids, Wired (Oct. 10, 2020, 6:00 AM), https://www.wired.com/story/ai-college-exam-proctors-surveillance/.

[2] Maddy Andersen and Hugo Smith, “Essentially Malware”: Experts Raise Concerns about Stuyvesant’s Lockdown Software, The Spectator (Oct. 23, 2020), https://www.stuyspec.com/quaranzine/essentially-malware-experts-raise-concerns-about-stuyvesant-s-lockdown-software.

[3] Larry Magid, IBM, Microsoft and Amazon Not Letting Police Use Their Facial Recognition Technology, Forbes (June 12, 2020, 9:26 PM), https://www.forbes.com/sites/larrymagid/2020/06/12/ibm-microsoft-and-amazon-not-letting-police-use-their-facial-recognition-technology/?sh=73ddec7b1887.

[4] Id.

[5] Id.

[6] Davide Castelvecchi, Is facial recognition too biased to be let loose?, Nature (Nov. 18, 2020), https://www.nature.com/articles/d41586-020-03186-4.

[7] Id.

[8] Jonathan Greig, Congress proposes ban on government use of facial recognition software, TechRepublic (June 26, 2020, 11:31 AM), https://www.techrepublic.com/article/congress-proposes-ban-on-government-use-of-facial-recognition-software/.

[9] Patrick Grother, Mei Ngan and Kayee Hanaoka, Face Recognition Vendor Test Part 3: Demographic Effects (NIST, 2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.

[10] Aileen Scott, Artificial Intelligence is Making Online Proctoring Safe and Secure, Medium (Mar. 14, 2019), https://medium.com/@aileenscott604/artificial-intelligence-is-making-online-proctoring-safe-and-secure-9b03845602da.

[11] Meg Foulkes, Exams that use facial recognition may be ‘fair’ – but they’re also intrusive, The Guardian (Jul. 22, 2020), https://www.theguardian.com/law/2020/jul/22/exams-that-use-facial-recognition-are-fair-but-theyre-also-intrusive-and-biased.

[12] Anushka Patil and Jonah Engel Bromwich, How It Feels When Software Watches You Take Tests, The New York Times (Sept. 29, 2020), https://www.nytimes.com/2020/09/29/style/testing-schools-proctorio.html.

[13] Id.

[14] Joe Patrice, ExamSoft Partner Suffered 440K User Data Breach… ExamSoft Still Says Everything’s Fine, Above The Law (Sept. 8, 2020, 1:13 PM), https://abovethelaw.com/2020/09/examsoft-partner-suffered-440k-user-data-breach-examsoft-still-says-everythings-fine/.

[15] COVID-19 and Mental Health: How America’s high school and college students are coping during the pandemic, Chegg, https://www.chegg.org/covid-19-mental-health-2020 (last visited Nov. 20, 2020).

[16] Avi Asher-Schapiro, ‘Unfair surveillance’? Online exam software sparks global student revolt, Reuters (Nov. 10, 2020, 7:24 AM), https://fr.reuters.com/article/global-tech-education-idUSL8N2HP5DS.

[17] Shea Swauger, Remote testing monitored by AI is failing the students forced to undergo it, NBC News (Nov. 7, 2020, 4:30 AM), https://www.nbcnews.com/think/opinion/remote-testing-monitored-ai-failing-students-forced-undergo-it-ncna1246769; see also Anna Baker, AI proctoring won’t stop students from cheating, it is just added stress for students, The Cougar (Sept. 24, 2020), http://thedailycougar.com/2020/09/24/ai-proctoring-cheating-added-stress/.

[18] Beckie Supiano, Teaching: Assessment in a Continuing Pandemic, Chronicle (Aug. 20, 2020), https://www.chronicle.com/newsletter/teaching/2020-08-20.

[19] Id.