On September 3, 2018, musical artist Machine Gun Kelly released an attack single (commonly called a “diss track” within the music industry) against fellow rapper Eminem (“Marshall Mathers”) on World Star Hip Hop’s YouTube account. The lyrics to Kelly’s song, Rap Devil, included various attacks against Eminem, but most notably the following: “I think my dad’s gone crazy. Yeah Hailie, you right. Dad’s always mad cooped up in the studio yelling at the mic.” When Kelly released his music video on World Star Hip Hop’s YouTube account, the video layered a sample of an audio recording over the line “I think my dad’s gone crazy”—implying, arguably, that the pre-recorded voice was that of Eminem’s daughter Hailie Jade Scott Mathers.
Notably, when Machine Gun Kelly released the attack single for sale on iTunes—through the music label Bad Boy Records (a subsidiary of Interscope Records)—the line “I think my dad’s gone crazy,” which was included in his YouTube release, was partially censored in the master recording of the song. Not only was the word “crazy” removed, but the underlying pre-recorded audio was also stripped from the recording released to iTunes.[i]
Basic intellectual property (“IP”) law explains this discrepancy. If Machine Gun Kelly did not own the recording included within his pre-release on YouTube, he could not include that audio recording within the master track sold to the public unless he held a license to use the pre-recording in his own song. Now, we may ask: why can the pre-recorded audio still be published on YouTube, but not on iTunes? The answer involves both IP law and contract law.
From a legal standpoint, Machine Gun Kelly faces potentially less liability for releasing the pre-recorded track within the sample of his song on YouTube, because a copyright holder attempting to assert a claim for infringement must generally first follow the internal procedures outlined in YouTube’s user agreement before pursuing a copyright infringement action within YouTube’s platform. In the traditional medium of releasing and selling music to the public, however, there is no internal procedure for dealing with a reproduction of copyright-protected material on a self-contained website. Rather, distributing copyrighted material to the public for direct sale is governed solely by IP law (the Copyright Act of 1976) and not by the contract-law obligations attached to a service provider like YouTube.
As such, in the hypothetical situation where Machine Gun Kelly did not have the permission of the entity that owns and recorded the line “I think my dad’s gone crazy” to use the recording within Kelly’s YouTube version of Rap Devil, that entity would first need to file an internal claim against World Star Hip Hop for copyright infringement through YouTube’s interface.[ii] Upon notice of a potential IP violation, the owner of the pre-recorded sound bite can submit a request to remove the video from YouTube under an internal complaint of copyright infringement.[iii] Thereafter, the entity may also pursue a claim against World Star Hip Hop within the traditional bounds of IP law—if in fact there was an unauthorized derivative use of the recorded lyrics under the Copyright Act of 1976.[iv] As mentioned above, if the use was not permitted and the line was still included in the version of the song sold on iTunes, this internal procedure would not apply, and a strict application of 17 U.S.C. §§ 501–513 would allow an immediate IP action against Kelly’s hypothetical use.
Nevertheless, Machine Gun Kelly (or executives at Interscope Records) made the preemptive decision to remove the pre-recorded audio from the final version of Rap Devil sold to the public. The label clearly understood that selling a song containing a lyric of questionable ownership exposed the artist to more liability—or “heat”—than a diss track typically warrants.
[i] For the full version of Machine Gun Kelly’s YouTube music video, see https://www.youtube.com/watch?v=Fp0BScQSSvg (0:23 to 0:25: “I think my dad’s gone crazy”).
On April 12, 2021, a federal district court in Montana invalidated the Federal Government’s approval of the first phase of construction of the Rock Creek Mine, a major copper and silver mine. The mine was to be constructed beneath the Cabinet Mountains Wilderness in northwest Montana, with the miners proposing to dig under the boundary of the federal wilderness area to reach large deposits of copper and silver. Hecla Mining Company is attempting to develop two large mines – the Montanore and Rock Creek projects – in the Cabinet Mountains. The land, which contains high-elevation streams that are among the purest water sources in the United States, harbors vital populations of bull trout and grizzly bear. The trout, which depend on the cold waters and are protected by the Endangered Species Act, face increasing threats due to global warming. The area is also home to one of the last five grizzly bear populations in the United States today.
A coalition of conservation groups and cultural leaders within the Ksanka Band of the Ktunaxa Nation has filed various lawsuits challenging the permits. In 2017, a district court in Montana invalidated the permits for the Montanore Mine on the grounds that they violated the Endangered Species Act. In the current litigation, challenging the permits for the Rock Creek Mine, the federal court likewise found that the U.S. Fish and Wildlife Service and the U.S. Forest Service violated the Endangered Species Act by not considering the impact of the full mine proposal on federally protected grizzly bear and bull trout. Hecla Mining Company had attempted to break the permitting process into small steps: it first sought permits for the exploration phase of the project, which the government granted. The federal court ruled that the government could not grant permits for an exploratory stage of a project without first considering the impact on environmental resources once the full project was underway.
While extracting minerals under the wilderness area does not alone violate the Wilderness Act, the miners must show that their activities will not disturb surface resources – wilderness and water. The company was unable to show that the roadbuilding, human activity, and the mine’s potential to dewater Cabinet lakes and streams would not harm the grizzly bear and bull trout populations. In 2019, the U.S. Fish and Wildlife Service issued a biological opinion relating only to Phase I of the project. In the recent decision, the federal court said, “Unsurprisingly, removing Phase II from consideration drastically reduced both the scope of the activity and the potential environmental impacts. […] For comparison, Phase II, or mine development, would involve the removal of 10,000 tons of ore per day with a production life of 26 to 30 years. Mine activity would disturb over 400 acres of land, cause drawdown of stream levels, and require the construction of roads.” “On the other hand, Phase I involves only the excavation of an evaluation adit (mine shaft). Construction of the adit is anticipated to take two years and the estimated disturbance area encompasses approximately 19.6 acres.”
The court ultimately decided that the defendants’ approval of Phase I without considering the effects of Phase II was arbitrary and capricious, amounting to a “piecemeal chipping away of habitat for endangered species.” Several of the plaintiffs blame the Trump Administration; the Center for Biological Diversity alone has sued the administration more than 250 times over environmental actions since 2016. The Trump Administration repeatedly and purposefully ignored scientists over the course of the presidency, and now it is dealing with the legal consequences.
Federal Court Halts Proposed Rock Creek Mine In Montana’s Cabinet Mountains, EarthJustice (Apr. 15, 2021), https://earthjustice.org/news/press/2021/federal-court-halts-proposed-rock-creek-mine-in-montanas-cabinet-mountains.
Associated Press, Federal Court Halts Proposed Rock Creek Mine, Clark Fork Valley Press & Mineral Independent (Apr. 21, 2021), https://vp-mi.com/news/2021/apr/21/federal-court-halts-proposed-rock-creek-mine/.
Whether or not you are a vehicle owner, it is likely that in the last year you have received an automated scam call that went something like this: “Hello, this is ___; we’re calling to alert you that your car warranty is expiring/has been extended.” After this initial pre-recorded message, the call will almost always include a promotion to renew or cancel the renewal of that warranty. After the pitch, you will be asked to provide personal information such as your Social Security number, credit card and banking information, and driver’s license number. Generally speaking, this type of information should never be conveyed over the phone unless you can verify that you are in fact dealing with a legitimate company with whom you have previously established a business relationship. The simple truth is that these calls have nothing to do with your car warranty, and everything to do with trying to get you to willingly release your sensitive personal information.
Not only are these calls annoying; they are also dangerous. According to the Federal Communications Commission (hereinafter FCC), this scam isn’t new; however, it has quickly become the preferred method of cheating unknowing victims out of their hard-earned money. The FCC has further indicated that warranty “robocalls” were the most common call complaint filed by consumers in 2020. Specifically, between June and December of last year alone, Americans filed just short of 200,000 “Do Not Call” complaints, a number which the Federal Trade Commission (hereinafter FTC) says represents only a small fraction of the calls actually received.
You might be asking: are these calls illegal? The short answer is yes. The FTC banned almost all pre-recorded automated telemarketing calls in 2009, with the exception of political calls, charitable solicitations, and debt collections. It goes without saying that these robocalls do not fall into any of those categories. More troubling is the fact that these scams are nearly impossible to trace and are being operated both domestically and overseas. Essentially, the calling operation (scammer) purchases information, or “leads” (in this case, phone numbers), from third-party data providers (some more legitimate than others), which then allows those calling operations to facilitate their massive scamming campaigns. Unfortunately, if you have ever phoned a business that utilizes caller I.D., checked the “sign me up for emails” box while online shopping, registered to vote, applied for credit, or donated to a charitable organization, it is likely that you have unknowingly auctioned off your contact information to one of these third-party data providers.
Because it is nearly impossible to stop the sale of your information from a third-party data provider to a calling operation, the best alternative is to limit the number of calls you receive. Initially, you should feel some assurance in knowing that major U.S. carriers offer at least some level of protection. Additionally, the TRACED Act, a bipartisan piece of legislation signed in 2019, extends the statute of limitations available to government agencies and law enforcement officials in their efforts to punish companies and individuals that break telephone consumer-protection laws. Beyond the measures already implemented by the government and major carriers, there are several steps you can take to limit the number of scam calls you receive. AT&T, Verizon, Sprint, and T-Mobile each offer both free and premium services that block spam calls and flag incoming calls with a “nuisance” warning; sign up for these. Further, consumers can purchase third-party apps for both Android and iOS operating systems that offer a wide range of call-blocking features. Hiya, Nomorobo, and RoboKiller are a few examples of subscription-based services that protect users by constantly updating a database of numbers used by calling operations, numbers which are then blocked when they inevitably dial your phone. For the cost-conscious consumer, the FCC has also provided several steps that require no additional purchase: don’t answer calls from numbers you don’t recognize; never hit a button or answer “yes”; avoid providing personal information; ask your phone company about additional tools you can implement to stop these calls; and register your number on the National Do Not Call Registry.
Although it is unlikely that car warranty scam calls will cease to exist in the near future, there are several steps that you and your loved ones can take in an effort to keep your information private. Scam calls are not only annoying, they are also dangerous. While blocking scam calls from your phone is certainly worthwhile, spreading awareness of these dangers is of equal importance in the fight to protect our information as we continue to progress through the digital era.
On March 31, 2021, New York became the 16th state to legalize the recreational use of cannabis (“marijuana”) after Governor Andrew Cuomo signed Senate Bill S854A (“the Bill”) into law. Under this new legislation, it is now legal for individuals in New York who are twenty-one years of age and older to possess up to three ounces of marijuana on their person, and up to five pounds in their private residence. Further, the Bill expunges the records of all people who were convicted of marijuana-related offenses that are no longer criminalized.
Because marijuana’s legality is such a controversial topic, forming an accurate opinion requires understanding its background in the United States, including its classification by the federal government, New York’s rationale for legalization, and some potential positives and negatives of the substance.
Right now, marijuana is listed as a Schedule I substance under the Drug Enforcement Administration’s (DEA) drug scheduling chart. By definition, this means that marijuana is federally described as a substance “with no currently accepted medical use and a high potential for abuse.” Other substances under this classification include heroin, lysergic acid diethylamide (LSD), and peyote.
Notwithstanding marijuana’s illegality on the federal level, Colorado and Washington became the first two states to legalize its recreational use in 2012. Now, nine years later, New York has joined the bandwagon of states that have chosen to make marijuana recreationally legal. There are several reasons for the Bill’s enactment, and the question resists a simple for-or-against analysis.
Governor Cuomo’s rationale for legalization cites several positive effects it will have on the State. For example, Cuomo said that with a nine-percent statewide sales tax, a four-percent county and local tax, and another tax per milligram of THC, annual tax revenues could amount to over $300 million. After covering the costs of its regulation, tax revenues would go toward different beneficial aspects of society (e.g., schools, drug treatment and prevention programs, and health and housing programs designed to help the communities most affected by marijuana’s illegality). It is also suggested that legalization will create jobs, increase consumer safety, phase out black markets by taking money away from illegal drug organizations, decrease crime rates, and free up police resources.
However, these pros are met with a noteworthy number of cons. For example, it is possible that with legalization, marijuana will impose costs on New York and its taxpayers that outweigh its tax revenues. Some of these costs would include paying for increased emergency room visits and medical care for the uninsured, and for an increase in victims of marijuana-related driving accidents. Legalization may also increase use by teenagers, which is especially harmful because marijuana’s effects are more adverse on the human brain while it is still developing.
Public concern about automobile safety is another legitimate criticism of legalization. Between 2013 and 2016, the number of fatal accidents in which at least one driver tested positive for THC increased by ninety-two percent in Colorado and twenty-eight percent in Washington. Since 2016, marijuana-related traffic deaths have remained higher than the national average in both states. Interestingly, the New York Bill provides that “the odor of burnt cannabis shall not provide probable cause to search any area of a vehicle that is not readily accessible to the driver and reasonably likely to contain evidence relevant to the driver’s condition.” This text may seem like a legal adaptation to ensure citizens maintain the protection against unreasonable searches guaranteed by the Fourth Amendment; however, it isn’t difficult to see why this sort of protection may entice first-time users to drive while high, or cause those who routinely drive high to push their limits even further.
While states seem to be following a progressive pattern of making marijuana recreationally legal, it is important not only to acknowledge the societal benefits, but the potential detriments as well. As legalization is still a relatively new concept, each recreational state will continue to serve as a subject in the United States’ case study on whether legalization really is a good thing. As for New York, it seems that only time will tell.
During the recent pandemic, the world has had to rethink how it does…everything. Our profession is no exception. The pandemic forced rapid change, particularly the adoption of technology and the letting go of traditionalism in the field, but the seeds were sown long ago.
In 2020, the Association of American Law Schools recognized that technology would be a critical component of 21st-century legal education. Anna Williams Shavers wrote in 2001 of the discretion law schools have regarding coursework and how they were beginning to include tech-based electives such as intellectual property. Data showed that such courses were being selected more often than more traditional electives. Around 1981, it was said of legal education that “innovation…comes hard, is limited in scope and permission, and generally dies young.” Sadly, this sentiment was evidenced by the fact that few professors bothered to incorporate the educational technologies that were available.
Despite limited interest overall, proponents recognized the potential value: increased and easier access to information, better accommodation of students with disabilities and different learning styles, and collaboration with students beyond the confines of the classroom or campus. The American Bar Association, however, took an interest in a study on distance learning and created a specific set of guidelines that ultimately showed its distaste for the format, owing to concern about limited personal interaction. While not explicitly prohibiting distance learning, the ABA lumped it in with correspondence study, which could not be used to earn credits. Even so, it’s clear that even 20 years ago, it was known that technology had a place in legal education.
Unlike earlier technological innovations, the internet was a game changer, and the indifference toward technology and innovation began to change. While schools began offering online courses in the early 21st century, around 2015 some schools were researching the possibility of taking things to the next level and offering fully online JD programs. One writer offered this hope: “ideally, over the next decade a growing number of administrators will encourage and reward innovation in legal education and a growing number of law professors will engage deeply with technologies that enable innovation.” Four years later, in 2019, Syracuse University launched its hybrid JD program—an ABA-approved program combining online and in-person requirements.
Over time, changes in legal education and practice will be driven by changes in the students, who will be arriving at law school having been raised on technology. Schools and the field will need to adapt as technology becomes more deeply embedded in the K-12 and undergraduate spaces, as future law students will ask more of their institutions and the profession.
In practice, lawyers are often resistant to change and technology. This could be due to risk aversion, commitment to the status quo, worry about fewer billable hours, or fear of technology taking over jobs. Reasonable or not, it has been easy for the legal industry to hold fast to these concerns and stand still. Newton’s first law says, in part, that an object at rest will remain at rest until acted upon by a force. In 2020, the legal world was met with a force by the name of COVID-19.
Thankfully, the legal world rose to the occasion and embraced change. From attorneys to judges, old and young, professionals took to video for depositions and hearings, for example. Attorneys and other legal professionals have seen benefits during this time that many hope will remain: less travel time and expense, and greater efficiency and flexibility. It has been easier to provide pro bono services without fiscal and geographical limitations. Firms that often connected via conference call got on video as well and appreciated seeing one another! Not to mention that law schools HAD to transition to virtual instruction, and students still took classes and earned their JDs. Challenging? Yes. But clearly possible.
It is true that with everything virtual, the level of connection that comes with sharing a physical space is lost, but people are finding that the video alternatives are a close second. By and large, the forced growth and adoption of technology in the field has been a good thing and is opening eyes to the possibilities of new, and still effective, ways of practicing.
There is a program at the University of Pennsylvania Law School, the Future of the Profession Initiative, that focuses on technological advancements in the profession. The initiative’s Executive Director, Jennifer Leonard, said in an article published in February 2020, mere weeks before the world would be upended by COVID-19, that “The future of law is going to be very different than the law profession has been to this point. The successful lawyers of tomorrow will need to be adaptable in a way that lawyers have never needed to be in the past.”
So, what does it all mean? Hopefully, it means that even once we get to the other side of COVID-19, we will continue to move forward and think innovatively, bearing in mind what has been learned and accomplished over the last year and embracing the possibilities of technology, innovation, and change in both the study and practice of the law.
 Anna Williams Shavers, The Impact of Technology on Legal Education, 51 J. of Legal Educ. 407, 407 (2001).
 Erika Winston, Is technology making the practice of law better or worse?, TimeSolv, https://www.timesolv.com/blog/is-technology-making-the-practice-of-law-better-or-worse/ (last visited Mar. 28, 2021).
Law of Inertia, Britannica, https://www.britannica.com/science/law-of-inertia (last visited Mar. 27, 2021).
 Catherine Wilson, ‘Awful Impact’: The Long-Lasting Effects of COVID-19 on the Practice of Law, Law.com (Dec. 7, 2020, 10:48 AM), https://www.law.com/dailybusinessreview/2020/12/07/awful-impact-the-long-lasting-effects-of-covid-19-on-the-practice-of-law/.
I will admit it: I am a true crime addict. I even subscribe to podcasts called “Crime Junkie” and “My Favorite Murder.” However, I am not alone in this zealous fascination. According to Edison Research, the leading podcast research organization, two of the five most-listened-to podcasts in 2020 were true crime. Among the most notorious true crime stories is that of the Golden State Killer, AKA the Visalia Ransacker, AKA the East Area Rapist, AKA the Original Night Stalker, AKA EARONS. For over four decades, the serial killer of many names but no true identity was a Rubik’s Cube not only for the police but for websleuths, the national media, and, of course, true crime podcasts. Until a third cousin of the Golden State Killer (GSK) took a DNA test.
Detectives finally caught the elusive GSK by “harnessing genetic technology already in use by millions of consumers to trace their family trees,” otherwise known as a familial search. A familial search is a deliberate effort to find close biological relatives after failing to find an exact DNA match. Detectives first used the Golden State Killer’s genetic material from a rape kit to establish a DNA profile on FamilyTreeDNA, which allowed them to set up a fake account and search for matching GSK family-customers. Here they identified GSK’s unknown third cousin. A third cousin is someone who shares a great-great-grandparent with you; at four generations of separation, family trees expand exponentially, resulting in thousands of third cousins. Nevertheless, that third cousin’s genetic match, through a process of elimination, led detectives to GSK’s true identity and capture.
While the true crime world roared in celebration at the creative investigative approach that caught GSK, others cringed. Murmurs from legal scholars and some legislators have grown louder, voicing rightful concerns that this largely unregulated, revolutionary approach violates the privacy of the people who join DNA databases to learn about themselves, not to help the police arrest their relatives for violent crimes. Since GSK, the floodgates have opened, and within two years law enforcement has used familial searches to identify more than forty murder and rape suspects in cases as old as a half-century. As recently as this month, law enforcement used a familial search to identify and charge an Oregon man in two cold-case murders. The Portland Police Bureau explained that crosschecking DNA recovered from the scene with ancestry records led them to the suspect’s siblings and, through a process of elimination, eventually to the suspect.
Consequently, the question presents itself: in a process that allowed one of the most prolific serial killers to be brought to justice, what is the scholars’ and legislators’ concern? And should those concerns worry the 100 million people who have used a direct-to-consumer (DTC) genetic test? According to Science Magazine, if you are white, live in the United States, and a distant relative has uploaded their DNA to a public ancestry database, there’s a good chance an internet sleuth can identify you from a DNA sample you left somewhere. Now, while law-abiding citizens may brush this reality off, Martin Niemöller’s warning poem “First They Came” should set alarm bells ringing.
New York University law professor and DNA-search expert Erin Murphy warns, “[i]f your sibling or parent or child engaged in [DNA crowdsourcing] activity online, they are compromising your family for generations.” If the capture of GSK demonstrated anything, it is that anonymity is impossible. GEDmatch, the most common crowdsourced DNA database, known for its role in the GSK case, allows users to upload their DNA profiles from other websites. It is estimated that GEDmatch encompasses only about 0.5% of the U.S. adult population; if that figure were to rise to 2%, more than 90% of people of European descent would have a third cousin or closer relative in the database and could be found this way. Further research has shown that supposedly anonymous users can be re-identified by cross-referencing their birth date, sex, and postal code with publicly available information. A report published by the University of Washington demonstrated how researchers were able to run searches that let them guess more than 90% of the DNA data of other users.
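The intuition behind these coverage figures can be sketched with a toy calculation. The numbers below are illustrative assumptions, not figures from the studies cited here: suppose a typical person has on the order of 800 third cousins or closer relatives, and suppose each relative joins a database independently with probability equal to the database’s population coverage.

```python
def expected_relatives_in_db(n_relatives: int, coverage: float) -> float:
    """Expected number of a person's relatives present in a database
    covering a fraction `coverage` of the population."""
    return n_relatives * coverage


def p_at_least_one(n_relatives: int, coverage: float) -> float:
    """Probability that at least one relative appears in the database,
    assuming each relative joins independently (a deliberate
    simplification of real genealogical models)."""
    return 1.0 - (1.0 - coverage) ** n_relatives


# Hypothetical figure: ~800 third-cousin-or-closer relatives per person.
print(expected_relatives_in_db(800, 0.005))  # 4.0 expected matches at 0.5% coverage
print(expected_relatives_in_db(800, 0.02))   # 16.0 expected matches at 2% coverage
```

Even at today’s modest coverage, the expected number of relatives already in the database is well above zero, which is why sparse uploads can still unmask non-users. The naive independence model actually overstates the hit rate: published analyses also require a match to share enough detectable DNA to be genealogically useful, which is why the real-world threshold for near-universal identifiability sits closer to the 2% figure quoted above.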
Moreover, unlike a bank account number or a password that can be changed, once DNA is out there, it is out there for good. Security flaws in GEDmatch could permit national adversaries to create a powerful biometric database useful for identifying nearly any American. The same University of Washington research team demonstrated how it exploited the GEDmatch genetic comparison engine without any illegal actions: “[they] went in through the main gates—they did not break in.” As recently as last year, GEDmatch experienced a data breach and hack that not only let in the equivalent of Twitter bots, but also opted all users (regardless of their preference) into law enforcement matching.
Finally, Consumer Reports’ director of privacy and technology policy, Justin Brookman, warns that “[a]n individual’s most personal information is still being bought, sold, and traded without clear understanding or consent.” There is a deep concern that access to long-term care insurance can be impacted by the results of genetic testing. Genetic information gathered by DTC genetic companies can be sold to third parties and used internally to benefit the company, with limited information provided to the consumer: seventy-one percent of companies surveyed (39 of 55) provided information indicating that a consumer’s genetic data could be used internally for purposes other than providing the results to the consumer. Coincidentally, the nation’s largest private equity firm, Blackstone, acquired DTC company Ancestry.com for $4.7 billion (roughly $261 per person’s DNA). Although a Blackstone spokesperson assures that the firm will not have access to user DNA, Alan Butler, interim executive director and general counsel of the Electronic Privacy Information Center, rebuts: “[t]he big concern when there is a big deal like this is that investors might be interested in that data for other reasons, and not in the ways that consumers intended when they gave over that information.” DTC genetic testing company 23andMe Inc. also recently entered into a deal to merge with VG Acquisition Corp., a special purpose acquisition company founded by billionaire Richard Branson. The American Bar Association’s Business Law Section Cyberspace Committee Chair, Theodore F. Claypoole, warns that “once your DNA is included in the database for Google, Blackstone, Merck, or the FBI, there is no removing it – and no way to change it.”
So while the excitement of the GSK investigation has drifted into memory, each day more and more people upload their DNA profiles, and we inch closer to the 2% threshold.
Fortunately, there is a growing trend among state and federal lawmakers to increase regulation of genetic information and to cover non-traditional entities like DTC genetic testing companies. Additionally, in an effort to regulate themselves—or avoid federal regulation—popular DTC genetic companies like 23andMe and Ancestry have partnered with the Future of Privacy Forum to release the industry-based standard Privacy Best Practices for Consumer Genetic Testing Services. And in July 2020, Consumer Reports issued a white paper titled “Direct-to-Consumer Genetic Testing: The Law Must Protect Consumers’ Genetic Privacy,” which highlights the concerning regulatory gaps in DTC genetic testing and the legal void of privacy safeguards for consumers’ highly sensitive data.
The information shared with DTC genetic testing companies is largely unprotected by federal law, and the DTC companies are largely unbound by it. Although the Food and Drug Administration (FDA) has oversight over DTC genetic testing, the agency’s concern is validity, not privacy. Similarly, the Federal Trade Commission (FTC) and the Genetic Information Nondiscrimination Act (GINA) can police the marketing practices and discrimination, respectively, of genetic testing companies, but not privacy. Nor are DTC genetic testing companies regulated under the Health Insurance Portability and Accountability Act (HIPAA). HIPAA, the federal law that protects sensitive patient health information from being disclosed without the patient’s consent or knowledge, applies only to entities providing medical health care. As a result, DTC genetic testing companies are largely in control of consumers’ most personal information.
States have varied in their success in trying to safeguard their residents’ genetic privacy. Some states have passed legislation that prohibits discrimination based on genetic information, but laws specifically aimed at genetic privacy remain rare. Texas, Illinois, and Oklahoma have laws that protect individuals from compelled disclosure of genetic information pursuant to a court order, such as a subpoena. Illinois and Missouri restrict the sale of collected information by strictly limiting whether, and the extent to which, DTC genetic test results may be used.
The most aggressive of the state laws are Florida’s House Bill 1189, restricting life insurers’ use of genetic information, and California’s now-vetoed Genetic Information Privacy Act (GIPA). Florida’s House Bill 1189 amends a Florida statute that prohibited insurers’ use of genetic information for insurance purposes, extending the restriction to bar life and long-term care insurers from canceling, limiting, or denying coverage, or adjusting premium rates, based on genetic information. Additionally, those insurers are prohibited from requiring or soliciting genetic information, using genetic test results, or considering a person’s decisions or actions relating to genetic testing in any manner for any insurance purpose. California’s GIPA attempted to reach even further: it would have directed DTC genetic companies on how they can use, sell, and share genetic information. California Governor Newsom’s veto of GIPA reflects disagreement over the bill’s details of how best to safeguard genetic information, not disagreement with the bill’s principle. He plans to work with the California Health and Human Services Agency and the Department of Public Health to find solutions.
The most popular proposed solution is a merger of the Florida and California approaches. Consumer Reports suggests that policymakers broadly prohibit the use of genetic data in insurance underwriting and prohibit insurers from discriminating against individuals who do not provide such information. Moreover, Consumer Reports urges lawmakers to close the regulatory gap by making genetic data created via DTC genetic testing privileged and confidential, and by empowering consumers to control who has access to their genetic information. However, there is an alternative, which American Bar Association Business Law Section Cyberspace Committee Chair Theodore F. Claypoole points out: “Americans [should] test their DNA through their doctors – where the results are protected by law . . . rather than through charlatans who provide little but entertainment and may use or sell your DNA for any purpose.”
 Edison Research, Edison Research Announces Top 50 U.S. Podcasts for 2020 by Audience Size, Edison Research (Feb. 9, 2021), https://www.edisonresearch.com/the-top-50-most-listened-to-u-s-podcasts-of-2020/.
 SF Gate, Here’s how the Golden State Killer got all of his nicknames, SFGATE.COM (Apr. 26, 2018), https://www.sfgate.com/bayarea/slideshow/The-East-Bay-Killer-Golden-State-Killer-nicknames-180737.php.
 JV Chamary, How Genetic Genealogy Helped Catch The Golden State Killer, Forbes (June 30, 2020), https://www.forbes.com/sites/jvchamary/2020/06/30/genetic-genealogy-golden-state-killer/?sh=5e6596b75a6d.
 Paige St. John, The untold story of how the Golden State Killer was found: A covert operation and private DNA, Los Angeles Times (Dec. 8, 2020), https://www.latimes.com/california/story/2020-12-08/man-in-the-window.
 Hon. Herbert B. Dixon Jr., If You Think Your DNA Is Anonymous, Think Again!, American Bar Association (May 13, 2020), https://www.americanbar.org/groups/judicial/publications/judges_journal/2020/spring/if-you-think-your-dna-anonymous-think-again/#ref7.
 Heather Murphy, Tim Arango, Joseph DeAngelo Pleads Guilty in Golden State Killer Cases, New York Times (Jun. 29, 2020), https://www.nytimes.com/2020/06/29/us/golden-state-killer-joseph-deangelo.html.
 Heather Murphy, Genealogy Sites Have Helped Identify Suspects. Now They’ve Helped Convict One., New York Times (Jul. 1, 2019), https://www.nytimes.com/2019/07/01/us/dna-genetic-genealogy-trial.html.
 Neil Vigdor, Oregon Man Is Charged With Two Murders Committed Two Decades Apart, New York Times (Mar. 11, 2021), https://www.nytimes.com/2021/03/11/us/christopher-lovrien-orgeon-murder.html.
 American Hospital Association, Consumers Buy into Genetic Testing Kits, American Hospital Association (Sept. 16, 2019), https://www.aha.org/aha-center-health-innovation-market-scan/2019-09-16-consumers-buy-genetic-testing-kits.
 Jocelyn Kaiser, We will find you: DNA search used to nab Golden State Killer can home in on about 60% of white Americans, Science Magazine (Oct. 11, 2018), https://www.sciencemag.org/news/2018/10/we-will-find-you-dna-search-used-nab-golden-state-killer-can-home-about-60-white.
 Martin Niemöller, First They Came, Amnesty, https://www.amnesty.org.uk/files/2019-01/First%20They%20Came%20by%20Martin%20Niem%C3%B6ller_0.pdf?l6HOtWW1N8umC_ELxnQI6NpaAYbxRCJj= (“First they came for the , And I did not speak out Because I was not a  . . . Then they came for me And there was no one left To speak out for me.”).
 Gina Kolata, Heather Murphy, The Golden State Killer Is Tracked Through a Thicket of DNA, and Experts Shudder, New York Times (Apr. 27, 2018), https://www.nytimes.com/2018/04/27/health/dna-privacy-golden-state-killer-genealogy.html?action=click&module=RelatedLinks&pgtype=Article.
 Heather Murphy, Why a Data Breach at a Genealogy Site Has Privacy Experts Worried, New York Times (Aug. 1, 2020), https://www.nytimes.com/2020/08/01/technology/gedmatch-breach-privacy.html.
 Jocelyn Kaiser, We will find you: DNA search used to nab Golden State Killer can home in on about 60% of white Americans, Science Magazine (Oct. 11, 2018), https://www.sciencemag.org/news/2018/10/we-will-find-you-dna-search-used-nab-golden-state-killer-can-home-about-60-white.
 Kristen V. Brown, How a Third Cousin Could Give Away Your DNA Secrets, The Washington Post (Dec. 4, 2020), https://www.washingtonpost.com/business/how-a-third-cousin-could-give-away-your-dna-secrets/2020/12/04/089339a0-35ef-11eb-9699-00d311f13d2d_story.html.
 Peter Ney, Luis Ceze, Tadayoshi Kohno, Genotype Extraction and False Relative Attacks: Security Risks to Third-Party Genetic Genealogy Services Beyond Identity Inference, Paul G. Allen School of Computer Science & Engineering, University of Washington (2019), https://dnasec.cs.washington.edu/genetic-genealogy/ney_ndss.pdf.
 Antonio Regalado, The DNA database used to find the Golden State Killer is a national security leak waiting to happen, MIT Technology Review (Oct. 30, 2019), https://www.technologyreview.com/2019/10/30/132142/dna-database-gedmatch-golden-state-killer-security-risk-hack/#:~:text=Crowdsourcing%20DNA&text=In%202017%2C%20police%20in%20California,identify%20some%20of%20his%20relatives.
 Heather Murphy, Why a Data Breach at a Genealogy Site Has Privacy Experts Worried, New York Times (Aug. 1, 2020), https://www.nytimes.com/2020/08/01/technology/gedmatch-breach-privacy.html.
 Consumer Reports, The privacy risks of at-home DNA tests, The Washington Post (Sept. 14, 2020), https://www.washingtonpost.com/health/dna-tests-privacy-risks/2020/09/11/6a783a34-d73b-11ea-9c3b-dfc394c03988_story.html.
 Justin Brookman, Direct-to-Consumer Genetic Testing: The Law Must Protect Consumers’ Genetic Privacy, Consumer Reports (Jul. 2020), https://advocacy.consumerreports.org/wp-content/uploads/2020/07/DTC-Genetic-Testing-White-Paper-4.pdf.
 James W. Hazel & Christopher Slobogin, Who Knows What, and When: A Survey of the Privacy Policies Proffered by U.S. Direct-to-Consumer Genetic Testing Companies, 28 Cornell J. L. & Pub. Pol’y 35, 43 (2018).
 Stephen Gandel, Private equity wants to own your DNA, CBSNews (Aug. 7, 2020), https://www.cbsnews.com/news/blackstone-private-equity-ancestry-com-dna/; Matt Anderson, Blackstone to Acquire Ancestry®, Leading Online Family History Business, for $4.7 Billion, Blackstone (Aug. 5, 2020), https://www.blackstone.com/press-releases/article/blackstone-to-acquire-ancestry-leading-online-family-history-business-for-4-7-billion/.
 Stephen Gandel, Private equity wants to own your DNA, CBSNews (Aug. 7, 2020), https://www.cbsnews.com/news/blackstone-private-equity-ancestry-com-dna/.
 Kristen V Brown, 23andMe Goes Public as $3.5 Billion Company With Branson Aid, Bloomberg (Feb. 4, 2021), https://www.bloomberg.com/news/articles/2021-02-04/23andme-to-go-public-as-3-5-billion-company-via-branson-merger.
 Theodore F. Claypoole, Privacy Risk of Recreational DNA Testing: States Take Action, The National Law Review (Sept. 8, 2020), https://www.natlawreview.com/article/privacy-risks-recreational-dna-testing-states-take-action.
 Scott Loughlin, Katherine Kwong, Sophie Baum, California Governor vetoes bill to establish the Genetic Information Privacy Act, Hogan Lovells US LLP (Sept. 29, 2020), https://www.engage.hoganlovells.com/knowledgeservices/viewContent.action?key=Ec8teaJ9VarqvjDR56S2DcxgHJMKLFEppVpbbVX%2B3OXcP3PYxlq7sZUjdbSm5FIetvAtgf1eVU8%3D&nav=FRbANEucS95NMLRN47z%2BeeOgEFCt8EGQ0qFfoEM4UR4%3D&emailtofriendview=true&freeviewlink=true.
 Carson Martinez, Privacy Best Practices for Consumer Genetic Testing Services, Future of Privacy Forum (last updated Feb. 9, 2021), https://fpf.org/blog/privacy-best-practices-for-consumer-genetic-testing-services/.
 Jake Holland, Daniel R. Stoller, With Congress Quiet, States Step in to Safeguard Genetic Privacy, Bloomberg Law (Sept. 1, 2020), https://news.bloomberglaw.com/privacy-and-data-security/with-congress-quiet-states-step-in-to-safeguard-genetic-privacy.
 Jo Cicchetti, Nolan Tully, Florida Bill Restricting Life Insurers’ Use of Genetic Information Signed by Governor DeSantis, Faegre Drinker Biddle & Reath LLP (Jul. 1, 2020), https://tinyurl.com/h5bj524e.
As Covid-19 has made it necessary for people to keep their distance from each other, robots are stepping in to fill essential roles. There are signs that people may be increasingly receptive to robotic help; however, the legal industry has been reluctant to embrace intelligent, independent machines, and for good reason. Artificial intelligence-based tools are used by only a very small percentage of law firms, according to the ABA’s 2020 Legal Technology Survey Report. Roughly 35% of respondents to the ABA survey listed the accuracy of AI technology as one of the major concerns about implementing AI-based tools, while 33% said cost was a major issue.
Meanwhile, some governments have been tinkering with inserting artificial intelligence and machine learning into public services. For example, Estonia has deployed AI or machine learning in 13 places where an algorithm has replaced government workers. Farmers who receive government subsidies to cut their hay fields each summer are automatically notified two weeks before the mowing deadline with a link to a satellite image of their field taken by the European Space Agency. These satellite images are fed into a deep-learning algorithm and overlaid onto a map of fields to prevent them from turning into forests over time. The algorithm assesses each pixel in the images, determining whether that patch of the field has been cut or not. Prior to deploying this AI system, human inspectors would physically check on farmers who receive government subsidies. Today, an inspector will still drive out to check in cases where something has thrown off the image processing, such as cattle grazing or partial cutting. Estonia claims the new system saved $755,000 in its first year because inspectors made fewer site visits and could focus on other enforcement actions.
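The pixel-by-pixel check described above can be sketched in a few lines. This is a hypothetical illustration, not Estonia’s actual system: the real pipeline runs a trained deep-learning model over satellite imagery, while here a simple “greenness” threshold stands in for the classifier, and every function name and threshold is invented for the example.

```python
# Hypothetical sketch of a per-pixel "mown or not" check. A trained model
# would replace the greenness threshold used here.

def classify_pixel(greenness: float, threshold: float = 0.5) -> bool:
    """Return True if the pixel looks mown (low greenness)."""
    return greenness < threshold

def field_is_mown(pixels: list[float], min_fraction: float = 0.9) -> bool:
    """A field counts as mown if enough of its pixels look mown."""
    mown = sum(classify_pixel(g) for g in pixels)
    return mown / len(pixels) >= min_fraction

# A mostly-cut field (low greenness values) passes; an uncut one would be
# flagged for a human inspector, mirroring the fallback described above.
cut_field = [0.1, 0.2, 0.15, 0.3, 0.1]
uncut_field = [0.8, 0.9, 0.7, 0.85, 0.6]
print(field_is_mown(cut_field))    # True
print(field_is_mown(uncut_field))  # False
```

The design point the example captures is that the system makes a cheap automated judgment per field and reserves human effort for the ambiguous cases.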
However, many government deployments of algorithm-based systems have experienced significant downsides. For example, the state of Idaho attempted to institute an algorithm for allocating home care and community integration funds. When the new formula was instituted, funds allocated to care for severely disabled people immediately dropped, for many people by as much as 42 percent. When beneficiaries whose funding was cut tried to learn how the new amounts were calculated, Idaho declined to disclose the formula it was using, saying that its math qualified as a trade secret. The local ACLU branch sued on behalf of the program’s beneficiaries, arguing that Idaho’s actions had deprived them of their rights to due process. In court, it was revealed that the government relied on deeply flawed data and that it was impossible for the average person to understand or challenge the new system. The court held that the formula itself was so flawed as to be unconstitutional, because it was effectively producing arbitrary results for many people. The judge ordered that Idaho overhaul the program with regular testing, regular updating, and the use of quality data.
One explanation for Idaho’s flawed adoption of AI technology is the narrow focus of algorithmic development. Robot researchers are typically educated to solve difficult technical problems, not to consider societal questions about who gets to make robots or how the machines affect society. Minority scientists are calling attention to disparate impact through manifestos describing their personal experience of the structural and institutional bias that is integrated into society. Signatories of an open letter, entitled “No Justice, No Robots,” pledged to keep robots and robot research away from law enforcement agencies. The letter proved controversial in the small world of robotics labs, since some researchers felt that it was not socially responsible to shun contact with the police. One robotics researcher who chose not to sign reiterated her position that biased algorithms are the result, in part, of the skewed demographic — white, male, able-bodied — that designs and tests the software. Her concern was with whom law enforcement entities would ultimately end up working if developers with strong ethical values chose not to engage.
The lessons we learn from unwinding systemic bias in AI technologies could give us insight into how other forms of institutionalized oppression get inadvertently baked into algorithms. Rather than focus on shocking statistics, it may be more useful to look at the forces that shaped statistics, such as the social and technical processes that have led to the unequal treatment of people, the associations that affect perceptions by implicitly linking minority groups with particular outcomes, and the role institutions play in guiding policies and preferences that perpetuate inequities. Researchers are developing a variety of techniques to help detect and quantify bias and are formulating KPIs to measure the impact of various interventions to correct biases. However, in the long run, organizations like government agencies will need to include proactive auditing programs with ongoing reviews to detect how well algorithms perform after they are released into the wild. These programs would be akin to software quality audits within companies or institutional review boards within medicine.
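One concrete example of the kind of bias metric researchers quantify is the “disparate impact” ratio: the favorable-outcome rate for a protected group divided by the rate for the majority group, which auditors often compare against the EEOC’s four-fifths (0.8) rule of thumb. The sketch below is illustrative only; the group data and function names are invented, and a real audit would involve far more than a single ratio.

```python
# Minimal sketch of the "disparate impact" ratio used in fairness audits.
# Outcomes are encoded as 1 (favorable, e.g. benefits approved) or 0 (denied).

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favorable outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected: list[int], majority: list[int]) -> float:
    """Ratio of the protected group's selection rate to the majority's."""
    return selection_rate(protected) / selection_rate(majority)

# A 40% approval rate for the protected group against 80% for the majority
# yields a ratio of 0.5, well under the 0.8 threshold an auditor would flag.
protected_group = [1, 0, 0, 1, 0]
majority_group = [1, 1, 1, 1, 0]
print(disparate_impact(protected_group, majority_group))  # 0.5
```

A proactive auditing program of the kind described above would recompute metrics like this on live decisions, not just on the data the system was trained on.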
Identifying and rooting out bias within intelligent government solutions will often compete with business as usual. For example, it may mean not adopting AI technologies because no one can figure out why minority groups are not having the same outcomes as majority groups. It could mean having the courage not to deploy or release an AI-based solution until the playing field has been leveled along class indicators. Progress will require leadership at the highest levels and sustained effort throughout an entire agency. While building AI is a technology challenge, using AI responsibly requires disciplines beyond software development, such as social science, law, and politics. Responsibly managing the use of AI is no longer just a desirable component of progress but a necessary one. Given government’s mixed results in implementing AI solutions, law firms must continue to recognize the pitfalls of human bias and the danger of replicating those biases in the machines of tomorrow, lest they achieve only an illusory promise of efficiency at the expense of fundamental constitutional rights.
Imagine walking outside to go to the local coffee shop. In your ten-minute walk you cover six blocks and pick up your coffee. You pass several police officers and fail to notice cameras mounted on telephone poles. The government is using surveillance to monitor the area that you walked. This type of surveillance, while not new, is developing rapidly, raising issues of privacy in an increasingly modernized world.
In China, there have been recent developments in surveillance with “emotion recognition technology.” This type of technology can be used to monitor human feelings by tracking facial expressions and tone of voice. The technology is not limited to law enforcement use; it has also been used to diagnose depression by listening to the voice of a patient in a psychiatric hospital.
In the United States, related technology is being used for facial recognition, with Clearview AI being particularly notable. The company has developed a facial recognition application that, when a photograph of an individual is taken and uploaded to its database, allows the user to view public photos of that individual. It is also not clear whether these uploaded photos, which are by their very nature sensitive, are stored on a server that adequately protects them. What is concerning is that the federal government as well as state law enforcement officers have “only a limited knowledge of how Clearview works,” yet have used the application to solve a variety of cases such as shoplifting, murder, and child exploitation.
As of this blog post, more than 600 law enforcement agencies began using Clearview between approximately 2019 and 2020. However, Clearview is not limiting the application to law enforcement; it has also licensed the application to some companies for “security purposes.” Clearview is now used by over 2,400 United States law enforcement agencies, representing rapid growth in use. Despite U.S. law enforcement agencies’ embrace of the technology, Canada has recently declared the application a tool of mass surveillance and illegal to use. Many law enforcement agencies and organizations in Canada used the application prior to the statement released by Canada’s privacy commissioner, Daniel Therrien. Clearview is currently at a standstill with the Canadian government: it ceased operations in Canada in July of 2020 but has made no plans to delete Canadians from its database.
Turning back to the United States, Clearview’s application may be heading to the Supreme Court for review. Illinois passed the Biometric Information Privacy Act (BIPA). BIPA applies to Illinois residents and broadly limits what companies are allowed to do with data from a person’s body, such as face scans and fingerprints. Generally, companies cannot use biometric details without a person’s knowledge or consent, and the law allows both individuals and the state to sue companies who violate it.
Clearview is currently engaged in a class action lawsuit based on alleged violations of BIPA. In an unusual turn of events, the plaintiffs argue that they do not have standing, while it is Clearview that is fighting for the plaintiffs’ right to sue, as it was the defendant who removed the case to federal court. In a recent decision, the Seventh Circuit ruled that the plaintiffs do not have Article III standing to pursue their case. The Court concluded that the plaintiffs alleged “only a general, regulatory violation, not something that is particularized to them and concrete.” The case has been remanded to the state court for further litigation. However, Clearview has asked the Seventh Circuit to stay its mandate, as the defendant plans to file a petition to the Supreme Court. Clearview appears to be arguing that the alleged violation is a concrete and individual injury. While the motivations behind the petition may be multifaceted, this may be an attempt by Clearview to receive guidance from the Supreme Court for future litigation. If the petition is granted, a Supreme Court ruling on Article III standing will give Clearview a better understanding of the legal landscape as its technology is used in various states.
The use of surveillance technologies such as emotion recognition and facial recognition is a reality that many people are unaware of. The mass gathering of data with these technologies represents real privacy concerns over the collection and storage of that data. In the coming years it seems inevitable that companies like Clearview will be embroiled in legal challenges over their rights to collect, store, and use the data obtained through their software.
Picture this: You take your car to a dealership for engine maintenance. After a lengthy inspection, the mechanic tells you that your car needs a new radiator and gives you an estimate of two thousand dollars. Outraged, you leave the dealership and take your car to the local independent mechanic, who can perform the same maintenance for a couple hundred dollars in the same amount of time. This is the essence of the right to repair movement, which has been slowly gathering support for the past decade; on January 6th, a right to repair bill was introduced in the New York State legislature.
Right to repair refers to a movement towards allowing consumers the ability to repair and modify products they own instead of relying on the company that originally sold it. In other words, right to repair seeks to guarantee independent repair shops the right to repair your electronics using any means they deem necessary without fear or threat of litigation from the original manufacturers. While some manufacturers try to make it an issue of security and IP, right to repair is actually about the monopolization of the repair industry.
But how exactly did independent repair become a problem? After all, isn’t it your choice how you want to fix your property? Not entirely. In the past, electronics were often produced with blueprints and schematics that let any user see how a device was made, allowing independent repair shops, like Radio Shack, to flourish. Today, it has become far more common for major brands, like Apple, to implement restrictive practices that limit independent repairers’ ability to diagnose and fix their products. Some of these practices include threatening to void a warranty; requiring specialized tools, diagnostic equipment, data, or schematics not reasonably available to the public; or creating products that are deliberately designed to prevent an end user from fixing them.
Manufacturers have long argued that maintaining monopoly control of their repair market can prevent bad consumer experiences and that preventing outside maintenance helps protect their brand. They claim that ultimately, the manufacturer knows their product better than the consumer, and that any outside repair places the consumer at a greater risk. In Nebraska, Apple lawyers have even tried claiming that expanding these consumer rights would turn the state into a “mecca” for hackers.
Right to repair came to a boil in 2017 when John Deere required purchasers of farming equipment to sign a license agreement that forbade nearly all repair and modification. Previously, farmers would send their tractors to a local John Deere dealership for diagnostics and repair. However, sending a 2-ton tractor to a dealership 100 miles away for minor fixes was just not feasible for most farmers. Thus, some farmers began “hacking” their tractors with cracked software purchased on a black market in order to self-diagnose and repair their own equipment. John Deere responded by locking farmers accused of hacking out of their software, leaving them unable to use their equipment.
John Deere isn’t the only large manufacturer willing to show hostility toward the independent repairperson. Last year, Apple, a trillion-dollar company, continued its legal pursuit of the owner of a small independent iPhone repair shop in Norway. Apple sued Henrik Huseby, owner of PCKompaniet, for importing counterfeit iPhone 6 and 6S screens. According to Huseby, however, the screens he purchased were refurbished and never advertised as official Apple parts. Huseby ultimately convinced the court that because the Apple logo was not visible on the screens, they did not violate Apple’s trademark, and he was thus not liable for counterfeiting.
Organizations like The Repair Association have sprung up to garner support for the cause and counter the arguments proffered by manufacturers. The movement’s purpose is clear: “We have the right to repair everything we own.” The Repair Association boasts more than 15 million “enthusiasts” engaged with the cause and over 400 registered companies.
Right now, Democrat Kevin Thomas of the 6th Senate District has introduced Senate Bill S149 in the New York State legislature. The bill’s stated purpose is to “protect the rights of consumers to diagnose, service, maintain, and repair their motor vehicles, and for other purposes.” While it has a long way to go in the state legislature, the introduction of the bill is a welcome sign that New York State might expand consumer protection rights and embrace right to repair.
Technology is far from perfect. Technical and mechanical issues are par for the course in owning any sort of tech, whether it is a phone or a tractor. Ultimately, people deserve the right to choose how they want to repair their property. To deprive them of that choice is to deprive them of the thing itself. New York State is taking a step in the right direction in advancing a bill to protect consumers against monopolies.
 Karl Bode, The Right to Repair Movement is Poised to Explode in 2021, Vice (February 1, 2021 12:52pm), https://www.vice.com/en/article/jgqk38/the-right-to-repair-movement-is-poised-to-explode-in-2021.
 Elle Ekman, Here’s One Reason the U.S. Military Can’t Fix its Own Equipment, The New York Times (November 20, 2019), https://www.nytimes.com/2019/11/20/opinion/military-right-to-repair.html.
 Nathan Proctor, Here’s How Manufacturers Argue Against Repair., US Public Interest Research Group, (July 1, 2019) https://uspirg.org/blogs/blog/usp/here%E2%80%99s-how-manufacturers-argue-against-repair.
 Jason Koebler, Why American Farmers Are Hacking Their Tractors with Ukrainian Firmware, Vice (March 21, 2017 4:17pm), https://www.vice.com/en/article/xykkkd/why-american-farmers-are-hacking-their-tractors-with-ukrainian-firmware.
 Jason Koebler, Apple is Still Trying to Sue the Owner of an Independent iPhone Repair Shop, Vice (June 6, 2019 4:14pm), https://www.vice.com/en/article/9kxzpy/apple-is-still-trying-to-sue-the-owner-of-an-independent-iphone-repair-shop-louis-rossmann-henrik-huseby.
As we approach a year in quarantine due to the COVID-19 pandemic, ways to detect and limit the spread of the deadly virus have been on the rise, especially since case numbers continue to skyrocket due to transmission from individuals with limited to no symptoms. Numerous studies estimate (though the figure is hard to pin down precisely because of the absence of symptoms) that up to 70% of COVID-19 cases arise in individuals who exhibit no symptoms at all. And a detection solution may be closer to home than many of us realize.
We live in a constantly developing technological landscape where wearable devices can track our steps, heart-rate, temperature, and even our sleeping patterns. Can these devices be used to track COVID-19 in individuals with asymptomatic and pre-symptomatic cases? Can utilizing data drawn from wearables limit or even greatly eliminate the spread of COVID-19 and other future viruses? Are these methods of detection even legal for researchers to use? The answer to these questions, and others, may shock individuals who wake up every morning, grab their smart watch from its charger, and set out to start their days with a wealth of information flowing from their wrists.
Let’s start with the value of using wearable devices to detect COVID-19 indicators in the individuals who wear them. Unlike conventional testing methods for COVID-19 (i.e., PCR tests, rapid tests, and antibody testing), which offer only a time-sensitive snapshot of potentially infected individuals, wearable devices offer a continuous flow of real-time physiological data that can be monitored for the development of COVID-19 symptoms. This method of observational testing tracks differences in physiological measurements against that person’s own baseline, whereas conventional testing compares the information gathered to population statistics. Wearable devices therefore offer a much more personal analysis, detecting even slight deviations from an individual’s normal baseline physiological data. This information can be utilized to increase early-stage COVID-19 case detection and therefore greatly reduce the exposure of non-infected individuals to someone infected with COVID-19.
A study conducted by colleagues at Stanford University School of Medicine and Case Western University analyzed the data of 32 infected individuals, identified from a pool of over 5,000 participants, and tracked certain physiological metrics to see whether changes in the data correlated with a positive COVID-19 diagnosis. They looked specifically at the change in heart rate relative to the number of steps taken, and 81% of the positive COVID-19 cases showed a discrepancy (an increased heart rate) 4 to 7 days before the onset of symptoms. From this, the researchers were able to develop an algorithm that could detect the beginning stages of infection by monitoring this change in heart rate in conjunction with the number of steps taken.
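The core idea of such an algorithm, flagging days when heart rate is elevated relative to activity, can be sketched as a baseline-deviation check. This is a simplified illustration, not the researchers’ actual model: the daily ratio values, baseline window length, and z-score cutoff below are all invented for the example.

```python
# Hypothetical sketch of baseline-deviation detection on a daily
# heart-rate-per-step ratio. A rising ratio means a higher heart rate
# for the same amount of activity.
from statistics import mean, stdev

def flag_anomalies(ratios: list[float], baseline_days: int = 7,
                   z_cut: float = 2.0) -> list[int]:
    """Return indices of days whose ratio deviates sharply from the
    wearer's own baseline (the first `baseline_days` days)."""
    base = ratios[:baseline_days]
    mu, sigma = mean(base), stdev(base)
    return [i for i, r in enumerate(ratios[baseline_days:], start=baseline_days)
            if sigma > 0 and (r - mu) / sigma > z_cut]

# Days 0-6 establish a personal baseline; day 8's elevated ratio is the
# kind of pre-symptomatic signal the study observed 4-7 days early.
daily_ratios = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 1.01, 1.25]
print(flag_anomalies(daily_ratios))  # [8]
```

The key design choice, per the article, is that each wearer is compared against their own history rather than against population statistics.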
However, other studies indicate that heart rate and step monitoring alone are insufficient to clearly identify positive COVID-19 cases, due to other factors like age and exposure to other viral infections. When combined with sleep metrics and self-reporting of noticeable symptoms, the rate of proper detection increases. Additionally, there is a smart ring available on the commercial market that has been shown to identify elevated temperature readings in people with even slight symptoms of COVID-19. This shows that while wearable devices are not the be-all and end-all of detection, their collection of data, coupled with improvements in the ever-evolving wearable technology field and self-reporting of symptoms, can be a huge help in the early detection and slowing of the spread of COVID-19 and other harmful viruses that could arise in the future.
Is the collection of this data legal? Under FDA regulations, wearable devices are classified as general wellness devices, which means they are subject to less stringent standards of regulation, as they are generally accepted to be “low risk” devices that do not collect data for the purpose of treating patients. However, looking solely at the FDA does not give us the whole picture: the FDA regulates safety and efficacy, but not privacy. For laws that regulate the privacy of collected medical information, we look to the Health Insurance Portability and Accountability Act (HIPAA). Wearable devices are considered outside the scope of HIPAA’s protection because they do not engage directly with healthcare service providers. The Federal Trade Commission (FTC) offers a little more insight into the privacy regulation of consumer wearable devices: the FTC Act prohibits companies from engaging in deceptive or unfair acts and from failing to comply with their own privacy policies, allowing legal action against companies that fail to maintain the security of the sensitive data they collect. However, many of these companies allow third-party cross-device tracking without explicitly mentioning it within their privacy policies. If companies that allow cross-device tracking do not make this clear and transparent in their statements to consumers, they could face legal repercussions. In addition to the FTC, companies that sell wearable devices need to be cognizant of restrictions imposed by specific states, as individual state laws may provide higher levels of privacy protection.
The main takeaway from all of this is to be aware of what privacy protections the device you want to buy affords you and what level of transparency you feel comfortable with.
Wearable technology devices offer a multitude of benefits to society and to those who choose to incorporate them into their daily lives. They allow people to track their health and identify positive changes to their routines, helping them attain goals from being more active to building better sleeping habits. The research world also benefits from access to continuous monitoring of physiological metrics, as recently seen in the use of metric data to track and prevent the spread of COVID-19. While these metrics are not definitive proof of a positive COVID-19 case, and the sharing of data can raise privacy law issues, recognizing the data’s limitations, and companies remaining transparent about how their consumers’ data is used, can lead to positive outcomes such as minimizing the spread of COVID-19.
 Sarah Fisher, Anoushka Chowdhary, Karena Puldon, Adam Rao & Frederick Hecht, Wearable Sensor May Signal You’re Developing COVID-19 – Even If Your Symptoms Are Subtle, Univ. of Cal. S.F. (Dec. 14, 2020), https://www.ucsf.edu/news/2020/12/419271/wearable-sensor-may-signal-youre-developing-covid-19-even-if-your-symptoms-are.
 H. Ceren Ates, Ali K. Yetisen, Firat Güder & Can Dincer, Wearable Devices for the Detection of COVID-19, Nature Electronics (Jan. 25 2021), https://www.nature.com/articles/s41928-020-00533-1.
 Gicel Tomimbang, Wearables: Where do they fall within the regulatory landscape?, Int’l Ass’n of Priv. Pros. (Jan. 22, 2018), https://iapp.org/news/a/wearables-where-do-they-fall-within-the-regulatory-landscape/.