
In an Age of Constant Technology, Is It Always the Right Answer?

By: Sehseh K. Sanan

Procedural Due Process is embedded in the Constitution and demands that a person’s life, liberty, and property are not denied without “notice, opportunity to be heard, and a decision by a neutral decisionmaker.” [1]

But what is a neutral decisionmaker? Typically, we would think of a judge or a jury, not a party to the case, listening to the facts, applying the law, and arriving at a decision. But what if the decisionmaker were a computer? Specifically, a computer deciding how long someone should be in jail.

Since approximately 2016, courts across the world have begun to use algorithms to determine sentencing. [2] Prior to using algorithms, judges in the U.S. had to rely on either sentencing guidelines or mandatory sentencing. [3] Sentencing Guidelines are provisions published by the Federal Government that weigh factors relating to the “subjective guilt of the defendant and to the harm caused by his acts.” [4] Judges are not mandated to follow the Guidelines, but they are required to explain why they are departing from the federal suggestions. [5] Under mandatory sentencing, judges are required to impose minimum sentences depending on the charges that the prosecutor brings. [6] This is frequently seen in drug or drug-related charges. [7] What both of these systems have in common is that they try to assess the risk that an offender poses to society and how long they should be incarcerated to right the wrong.

Figure 1 [8]

Artificial intelligence and algorithms claim to do the same thing as the previous systems: assess risk and attempt to deter the offender. [9] These algorithms take into consideration factors such as “(1) the likelihood that the defendant will re-offend before trial (‘recidivism risk’) and (2) the likelihood the defendant will fail to appear at trial.” While this may seem beneficial from a utilitarian perspective, the rise of algorithm usage has had devastating and long-lasting effects. [10] Darnell Gates, for example, who served time from 2013 to 2018 for running a car into a house and violently threatening a domestic partner, was deemed high risk and did not even know it. [11] Further, because these algorithms are built by human beings, the builders’ innate biases are built into them. [12] It is no secret that people of color, particularly Black men, are more likely to be subjected to higher sentences. [13]
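
To make the mechanics concrete, below is a minimal, hypothetical sketch of the kind of weighted risk score such tools compute. The factors, weights, and thresholds are invented for illustration only and are not drawn from any actual pretrial or sentencing tool; the point is that every one of those choices is a human judgment, which is exactly where the biases described above can enter.

# A minimal, hypothetical sketch of a pretrial/sentencing risk score.
# The factors, weights, and cutoffs below are invented for illustration;
# they do not reflect any tool actually used by courts.

def risk_score(prior_convictions: int, age: int, failed_appearances: int) -> float:
    """Collapse a defendant's history into a single number between 0 and 1."""
    # Illustrative weights: more priors and failures to appear raise the score;
    # each year of age over 18 lowers it slightly.
    score = 0.15 * prior_convictions + 0.20 * failed_appearances - 0.01 * (age - 18)
    return max(0.0, min(1.0, score))

def risk_band(score: float) -> str:
    """Translate the numeric score into the low/medium/high label a judge might see."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

if __name__ == "__main__":
    s = risk_score(prior_convictions=3, age=24, failed_appearances=1)
    print(round(s, 2), risk_band(s))  # prints: 0.59 medium

A defendant like Darnell Gates would see only the final label, not the factors or weights behind it, which is why critics describe these scores as opaque.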

So, where do we go from here? Mandatory sentences are not the answer, as they impose higher sentences even on first-time offenders. [14] The guidelines also carry racial biases. [15] Personally, if the goal of criminal justice and prisons is to reform offenders, then I think the criminal justice system should be reformed to match that goal. Prisoners should receive education and a support system that will push them not to reoffend. We as a society could focus on a combination of rehabilitative and restorative justice. [16] By focusing on this combination, the troubles and obstacles that offenders face once they leave prison can be alleviated by the support of their community and society. [17] Maybe this could lead to a better society. But either way, we should definitely not let computers and algorithms decide.


[1] Procedural Due Process, Legal Information Institute, Cornell Law School, https://www.law.cornell.edu/wex/procedural_due_process (last visited Feb. 9, 2020).

[2] Danielle Kehl, Priscilla Guo & Samuel Kessler, Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing, Responsive Communities (July 2017), https://cyber.harvard.edu/publications/2017/07/Algorithms.

[3] Federal Sentencing Guidelines, Legal Information Institute, Cornell Law School, https://www.law.cornell.edu/wex/federal_sentencing_guidelines (last visited Feb. 9, 2020); Mandatory Minimums and Sentencing Reform, Criminal Justice Policy Foundation, https://www.cjpf.org/mandatory-minimums (last visited Feb. 9, 2020).

[4] Federal Sentencing Guidelines, Legal Information Institute, Cornell Law School, https://www.law.cornell.edu/wex/federal_sentencing_guidelines (last visited Feb. 9, 2020).

[5] Id.

[6] Mandatory Minimums and Sentencing Reform, Criminal Justice Policy Foundation, https://www.cjpf.org/mandatory-minimums (last visited Feb. 9, 2020).

[7] Id.

[8] Demographic Differences in Sentencing, U.S. Sentencing Comm’n, https://www.ussc.gov/sites/default/files/pdf/research-and-publications/research-publications/2017/20171114_Demographics.pdf (last visited Feb. 9, 2020).

[9] Algorithms in the Criminal Justice System: Pre-Trial Assessment Tools, Electronic Privacy Information Center, https://epic.org/algorithmic-transparency/crim-justice/ (last visited Feb. 9, 2020).

[10] Cade Metz & Adam Satariano, An Algorithm That Grants Freedom, or Takes It Away, N.Y. Times (Feb. 6, 2020), https://www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.html.

[11] Id.

[12] Id.

[13] Demographic Differences in Sentencing, U.S. Sentencing Comm’n, https://www.ussc.gov/sites/default/files/pdf/research-and-publications/research-publications/2017/20171114_Demographics.pdf (last visited Feb. 9, 2020).

[14] Mandatory Minimums and Sentencing Reform, Criminal Justice Policy Foundation, https://www.cjpf.org/mandatory-minimums (last visited Feb. 9, 2020).

[15] Racial, Ethnic, and Gender Disparities In Federal Sentencing Today, U.S. Sentencing Comm’n, https://www.ussc.gov/sites/default/files/pdf/research-and-publications/research-projects-and-surveys/miscellaneous/15-year-study/chap4.pdf (last visited Feb. 9, 2020).

[16] David Best, Pathways to Recovery and Desistance: The Role of the Social Contagion of Hope (2019).

[17] Id.

A BRIEF CASE FOR NATIONAL SECURITY LEGISLATION

By: Casey Bessemer

Let’s say that you are shopping online, as millions of Americans do. You go to Amazon.com, pick out a particularly nice doodad, and buy it. It gets that sweet, free two-day shipping because you have Amazon Prime, it arrives, and it’s the best time. But several weeks later, you learn that Amazon’s database has been hacked and, to make matters worse, the hackers got away with enough of your personal information to open, and then spend on, several credit cards. How do you solve this problem? Do you sue Amazon for a lack of adequate security measures? Where do you bring the suit: the place where Amazon keeps its servers, or the place where the cyberattack originated? What if the attack originated from outside the United States but was routed through several states before reaching Amazon’s servers? Each of these questions could be answered in a variety of ways, making the issue even more confusing.

In 2016, there were approximately 1,093 data breaches that affected more than 3.6 million files, with attacks predicted to increase as the world uses the internet to store more and more data.[1] These files contain personal data from various sources, everything from healthcare to social media to consumer websites. About half of Americans “feel that their personal information is less secure than it was five years ago,”[2] and with Facebook founder Mark Zuckerberg’s recent testimony at his Senate hearing, I can’t blame them. Sure, there are methods, such as password encryption software, that consumers could use to better protect their personal data on their own end, but these major breaches are occurring within the major companies themselves, within databases that are supposedly “secure.” Shouldn’t these corporations have to follow security measures to protect our data? It was a rhetorical question: the answer is yes.

Despite this, the United States has yet to enact any sweeping legislation that would relieve the consumer of the confusion over how to prosecute cyber-criminals. Consequently, “the struggle to regulate consumer data shows how lawmakers have largely been unable to turn rage at Silicon Valley’s practices into concrete action.”[3] The United States does have three key pieces of legislation (the 1996 Health Insurance Portability and Accountability Act (HIPAA), the 1999 Gramm-Leach-Bliley Act, and the 2002 Homeland Security Act, which included the Federal Information Security Management Act (FISMA)), but these laws cover only specific industries, specifically “healthcare organizations, financial institutions, and federal agencies.”[4] They are of little use to the average consumer. Instead, the average consumer needs laws for a global medium that can be implemented across various jurisdictions with consistent rulings and punishments.

THE INTERNET IS NOT A SINGULAR PLACE

First, and perhaps most obvious, is that the internet is not a singular place. Rather, the internet is “a global computer network providing a variety of information and communication facilities, consisting of interconnected networks using standardized communication protocols.”[5] The fact that the internet is a “global” medium means that any information gathered by and then stored on the internet is effectively stored around the globe at a single time. Companies and legislatures may argue that the data sits on servers and that the servers are subject to the laws of the jurisdiction in which they are located, but when a malicious third party attacks those servers, the attacker does not travel to the physical servers – the attack is carried out remotely, from any possible location with an internet connection. Since these attacks can come from any jurisdiction, it stands to reason that they should be punishable within any jurisdiction, regardless of the location of the attackers or the servers themselves. In the absence of global legislation and a global task force, countries will have to rely on their own individual legislation and task forces. Unfortunately, the United States has no such national privacy legislation.


STATE BY STATE PROTECTION

Currently, “federal level security and privacy legislation are lost in a morass of partisan politics and corporate lobbying delays.”[6] So states have taken the task into their own hands. Indeed, “at least 43 states and Puerto Rico [have] introduced or considered close to 300 bills or resolutions that deal significantly with cybersecurity; thirty-one states [have] enacted cybersecurity-related legislation in 2019.”[7] That is a lot of coverage, but not complete coverage. States, being independent jurisdictions with their own personalities, have chosen different ways of defining what a “cyber attack” means and what punishment is available against offenders. For example, Nevada, Minnesota, and Maine have specific legislation that “prohibits using, disclosing, selling, or permitting access to customer personal information unless the customer expressly consents to such,” while California has many more regulations, each targeting a specific goal such as consumer privacy or children.[8] But some states have yet to address the issue at all. It is because of this hodgepodge of legislation across different states that a national privacy law makes even more sense: a national security law would impose a consistent basis for prosecution and determinable punishments.

CONCLUSION

Because of our continued reliance on the internet as a medium for transactions and the storage of personal data, there will eventually come a time when the United States needs to enact a federal statute that completely covers cybercrimes, something akin to the General Data Protection Regulation (GDPR) in Europe. This change must come because the current method of state-by-state regulation creates a mess of legislation that does not completely cover cyber attacks, and because the internet is a global place, the legislation needs to be applicable everywhere. Although lawmakers have said “they wanted a new federal law to protect people’s online privacy,” little to nothing has actually happened.[9]


[1] A Glance at The United States Cyber Security Laws, https://www.appknox.com/blog/united-states-cyber-security-laws (last visited Jan. 31, 2020).

[2] Aaron Smith, Americans and Cybersecurity, PEW RESEARCH CENTER (Jan. 26, 2017), https://www.pewresearch.org/internet/2017/01/26/americans-and-cybersecurity/.

[3] David McCabe, Congress and Trump Agreed They Want a National Privacy Law. It is Nowhere in Sight., N.Y. TIMES (Oct. 1, 2019), https://www.nytimes.com/2019/10/01/technology/national-privacy-law.html.

[4] Supra note 1.

[5] Internet, Oxford Reference (Jan. 10, 2020), https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100008533.

[6] Cynthia Brumfield, 11 new state privacy and security laws explained: Is your business ready?, CSO (Aug. 8, 2019), https://www.csoonline.com/article/3429608/11-new-state-privacy-and-security-laws-explained-is-your-business-ready.html.

[7] Cybersecurity Legislation 2019, NCSL.COM (Jan. 10, 2020), https://www.ncsl.org/research/telecommunications-and-information-technology/cybersecurity-legislation-2019.aspx.

[8] Id.

[9] Supra note 3.

I Just Took a DNA Test, Turns Out I’m at Risk for Privacy Rights Violations

By: Erin Kelly

In 1990, the Federal Bureau of Investigation (FBI) developed software to compare DNA samples found at crime scenes against the DNA samples of arrestees. This software, the Combined DNA Index System (CODIS), has over 14.3 million DNA samples stored in its system. In 2013, the Supreme Court of the United States ruled in Maryland v. King that states may collect DNA samples from arrestees without probable cause. More recently, however, law enforcement officials have used independent, commercial databases to identify suspects in criminal cases.

Ancestry is a company that specializes in commercialized DNA testing to show customers their family trees and their genetic heritage. The company has over sixteen million DNA samples stored in the Ancestry DNA database. A competitor company, 23andMe, has over ten million customers who have purchased a DNA testing kit. Both companies have reported that their customer information is not released to law enforcement officials absent a warrant or subpoena. This is not true of every commercial genealogy database.

After receiving their DNA test results, customers from companies like Ancestry and 23andMe can upload their DNA data to third-party websites to expand their search for familial connections. For example, GEDmatch is free to use and allows individuals to find genetic connections with users across DNA testing websites, not just the original company they used. In 2018, Joseph James DeAngelo was arrested and charged with thirteen counts of murder committed in California between 1976 and 1986. When searching for suspects in this cold case, law enforcement officials created a profile on GEDmatch with the crime scene DNA. This is only one example of how DNA data from publicly available websites has helped to apprehend criminals.

After news broke about DeAngelo’s arrest and law enforcement’s use of GEDmatch, privacy concerns arose. To address these concerns, GEDmatch altered its terms of service. Previously, the website allowed law enforcement to use its database when investigating a violent crime; however, the terms never defined “violent crime.” GEDmatch’s updated terms created an opt-in arrangement, under which customers must affirmatively agree to be included in future law enforcement searches. Customers who choose not to opt in still have complete access to GEDmatch’s services.

On the surface, this might seem to address privacy concerns. In practice, these terms only apply to situations in which law enforcement formally requests access to GEDmatch’s database. As with the DeAngelo case, law enforcement officials may still create a profile on the website using the crime scene DNA. This provides complete access to the website’s DNA database. In December 2019, a forensic genomics company, Verogen, purchased GEDmatch. According to its website, Verogen works to improve the field of forensic science through technological advances. Verogen announced that “GEDmatch’s terms of service will not change, with respect to the use, purposes of processing, and disclosures of user data.” Verogen has previously worked with the FBI to create DNA profiles for the National DNA Index System, combining federal, state, and local forensic contributions. Therefore, GEDmatch users will likely not experience more privacy protection in the wake of the Verogen takeover.

If law enforcement officials are able to make a profile with crime scene DNA, as in the DeAngelo case, are they also able to access the more popular Ancestry or 23andMe databases? These websites operate differently. Individuals gain access to their online profiles only after purchasing a DNA test kit from one of these websites and sending in the completed kit. Therefore, unlike other websites, law enforcement officials cannot upload DNA samples collected from crime scenes. This does not necessarily mean the data is secure. These companies are not health care providers; therefore, the Health Insurance Portability and Accountability Act (HIPAA) does not apply. HIPAA is a federal law that ensures the privacy of health data. Subpoenas seeking the release of medical records are usually insufficient to release genetic information protected by HIPAA.

            As science and technology advance, legislatures try to keep up. In January 2020, Senator Susan Lee introduced a bill before Maryland’s General Assembly, requiring the State’s Attorney to make a disclosure to criminal defendants if “forensic genetic genealogical DNA analysis and search” is used during the case investigation. The disclosure is limited to cases where publicly available open databases and consumer genealogy services are used by law enforcement. The bill has support from Maryland House Delegate Charles Sydnor III, who proposed a similar bill in 2019, which did not make it out of committee. Consumers should exercise caution and make an informed decision when using any product, especially those that threaten constitutional rights, like privacy.

Sources:

Antony Barone Kolenc, “23 and Plea”: Limiting Police Use of Genealogy Sites After Carpenter v. United States, 122 W. Va. L. Rev. 53 (2019).

Maryland v. King, 569 U.S. 435 (2013).

Our Story, Ancestry, https://www.ancestry.com/corporate/about-ancestry/our-story (last visited Jan. 16, 2020).

About Us, 23andMe, https://mediacenter.23andme.com/company/about-us/ (last visited Jan. 16, 2020).

Jason Tashea, Genealogy Sites Give Law Enforcement a New DNA Sleuthing Tool, but the Battle Over Privacy Looms, ABA (Nov. 1, 2019), http://www.abajournal.com/magazine/article/family-tree-genealogy-sites-arm-law-enforcement-with-a-new-branch-of-dna-sleuthing-but-the-battle-over-privacy-looms.

Breeanna Hare & Christo Taoushiani, What We Know About the Golden State Killer Case, One Year After a Suspect Was Arrested, CNN, https://www.cnn.com/2019/04/24/us/golden-state-killer-one-year-later/index.html (last updated Apr. 24, 2019).

Julian Husbands, GEDmatch Partners with Genomics Firm, Verogen (Dec. 9, 2019), https://verogen.com/gedmatch-partners-with-genomics-firm/.

Nila Bala, We’re Entering a New Phase in Law Enforcement’s Use of Consumer Genetic Data, Slate (Dec. 19, 2019), https://slate.com/technology/2019/12/gedmatch-verogen-genetic-genealogy-law-enforcement.html.

Heather Murphy, What You’re Unwrapping When You Get a DNA Test for Christmas, N.Y. Times, https://www.nytimes.com/2019/12/22/science/dna-testing-kit-present.html? (last updated Dec. 23, 2019).

Taking the Fear out of Responding to Subpoenas for Medical Records, Norcal Group (June 29, 2017), https://www.norcal-group.com/library/taking-the-fear-out-of-responding-to-subpoenas-for-medical-records.

S.B. 46, 440th Gen. Assemb., Reg. Sess. (Md. 2020).

Unhealthy Side of Biometric Health Data

By: Dominique Kelly

State laws governing biometric data are not common in the United States; only three states currently have such laws: Illinois, Washington, and Texas. As it stands, biometric data in the United States is not governed by federal guidelines but is instead governed by contracts that users must usually agree to in order to use most devices (think: terms and conditions longer than a Supreme Court opinion standing between you and your new smartphone). We may not be aware of the biometric data collection surrounding our daily lives, but common examples include fingerprint scanners and facial recognition used to unlock phones, voice prints collected in the presence of our good friend ‘Alexa,’ and even retinal scans at a country’s borders. However, there is one important category of biometric data we may be quick to forget about: health data. Fitness tracker brands such as Fitbit, Garmin, and Fossil, as well as popular smart watches by Samsung, Apple, and other electronics brands, contain features that can track steps, heart rate, sleep cycles, trail locations, and menstrual cycles, among other things. While this data may seem harmless, users should still have a right to control this type of biometric information, even though they agreed to let the device track it.

What could be the harm in collecting health-related biometric data? Consider this: your employer offers you a discount on a new fitness tracking device in an effort to boost morale, promote fitness, and use friendly competition to enhance camaraderie. You do not want to miss out, so you purchase a great fitness tracker, and your employer begins offering incentives for taking steps to pursue and/or maintain a healthy lifestyle. Your new fitness tracker arrives, you immediately take it out of the box, slap it right on your wrist, and begin setup. You enter basic information, such as age, weight, cycle dates, etc., and the tracker begins, you guessed it: tracking. In a matter of minutes, this tracker has obtained more information than you would be willing to give on a first date. Not only is your tracker now tracking your steps, heart rate, and cycle, but you’ve also been logging your weight, your exercise regimens, the running path you were just on for the past 30 minutes, and your caloric and water intake. Before you know it, you’re seeing ads on Instagram for the newest exercise machine, sponsored NIKE ads are attempting to sell you new running shoes, and Amazon is suggesting you purchase sanitary products around the beginning of your cycle.

Generally, individuals who buy health trackers don’t believe the information being tracked will be used by other entities for monetary gain. But in this age of increasing sponsored advertisements on social media platforms based on seemingly unrelated searches on search engines, should we have expected our health-related biometric data to be commercialized? There is a push for states to regulate privacy protections for biometric data, as no federal body currently governs this type of data. Amid Google’s recent acquisition of Fitbit, lawmakers have begun preparing bills in an effort to protect consumer health data. One such bill is the Smartwatch Data Act, or the Stop Marketing and Revealing the Wearables and Trackers Consumer Health Data Act. The purpose of this conveniently named proposal is to protect identifiable biometric data (sleep, health, exercise data, etc.) of the kind tracked by fitness trackers made by Fitbit, Garmin, and other brands. The bill would ideally treat these types of biometric data as protected health information and require violations to be enforced like any other HIPAA (Health Insurance Portability and Accountability Act) violation. The bill would also stop entities that collect this biometric data from personal fitness trackers from transferring, selling, or sharing said data.

Currently, HIPAA does not protect this type of personal information because HIPAA regulations apply only to health plans, health care clearinghouses, and health care providers transmitting health information electronically. Since entities that create fitness trackers are not health plans, health care providers, or clearinghouses, they are not subject to HIPAA guidelines. Thus, if those advocating for more protection of health-related biometric data cannot successfully bring this protection under HIPAA, then perhaps an alternative is to encourage each state to create general biometric information protection laws. For example, the Illinois Biometric Information Privacy Act gives individuals the right to control their biometric information by requiring notice before the data is collected and giving individuals the power to withhold consent. Another law governing the collection and use of biometric data can be found in Washington State. That act requires entities that collect biometric data (including health-related biometric data) to disclose how the information will be used and to provide notice and obtain consent from the individual before the data is used. Unlike the Illinois Act, only the Washington State Attorney General has the ability to enforce the act, preventing consumers from suing companies directly when there is a violation. Texas is the third state that has enacted laws regulating the use and collection of biometric data. Its act requires only that any employer using biometric identifiers destroy those identifiers within a reasonable time after the purpose for collecting the data expires. So, for example, if the data was collected for security purposes, the purpose would expire on the date the employee no longer works for the employer.

Turning back to the biometric health data collected by fitness trackers, the state laws enacted in Illinois, Washington, and Texas may not provide the protection necessary to prevent the unlawful or unwanted use of consumer health data. The Illinois Act may be the closest law available to protect health data until bills such as the Smartwatch Data Act, which include HIPAA-related protections, become widespread in each state. Until then, we should continue to be aware of how our sleep, heart rate, exercise regimen, water intake data, etc. are used. In the meantime, grab a cup of tea and enjoy those long ‘terms and conditions’ agreements.

Sources:

In re Facebook Biometric Info. Privacy Litig., 185 F. Supp. 3d 1155 (N.D. Cal. 2016).

Fitbit, Inc. v. AliphCom, No. 15-cv-04073-EJD, 2017 U.S. Dist. LEXIS 12657 (N.D. Cal. Jan. 27, 2017).

Chelsea Cirruzzo, Cassidy, Rosen Introduce Consumer Privacy Bill Amid Google Scrutiny, InsideHealthPolicy (Nov. 14, 2019, 7:02 PM), https://insidehealthpolicy.com/daily-news/cassidy-rosen-introduce-consumer-privacy-bill-amid-google-scrutiny.

Jerry Lynn Ward, Texas Biometric Privacy Law restricts certain “biometric identifiers.” Only three states have laws regulating the collection and storage of Biometric data., GarloWard, P.C. (Mar. 26, 2018), https://www.garloward.com/2018/03/26/texas-biometric-privacy-law-restricts-certain-biometric-identifiers-three-states-laws-regulating-collection-storage-biometric-data/.

U.S. Dep’t of Health and Human Servs., Summary of the HIPAA Privacy Rule (2003).

Justin Lee, Washington’s new biometrics law softer on privacy protections than Illinois BIPA, BiometricUpdate.com (July 24, 2017), https://www.biometricupdate.com/201707/washingtons-new-biometrics-law-softer-on-privacy-protections-than-illinois-bipa.

The Chinese Tech Giant That Threatens National Security, and the Response by U.S. Lawmakers and Agencies

By: Bryan Harris

We all know the smartphone tech giants such as Apple, Samsung, and Microsoft, whose products feature in our everyday lives and help us connect to one another. However, there is another tech giant, the second-largest smartphone maker in the world, that many Americans have never heard of, and it is caught up in a legal and political battle here in the United States. The company is called Huawei, and it is based in China. Huawei had $93 billion in sales in 2018, similar to what Microsoft earned over the same period. Huawei was looking to enter the United States market with its phones through carriers including Verizon and AT&T; however, those deals fell through after immense pressure from the U.S. government. Huawei is also a large player in the race to develop 5G. 5G is the fifth generation of wireless technology; it will offer greater speed to move more data, be more responsive, and allow more devices to connect at one time. It is in the early days of development and will become more widespread in the 2020s.

Huawei has long been a global player in the smartphone race, and it has not been without controversy on its way to becoming a global powerhouse. Over the years, it has faced accusations ranging from hacking (leveled by the African Union) to espionage (alleged by former CIA directors). A major flashpoint came when Huawei’s Chief Financial Officer was arrested in Canada for allegedly violating U.S. trade sanctions by shipping Huawei items from the U.S. to Iran.

One of the big issues apparent in Huawei’s attempt to enter the U.S. market is that it is seen as acting as a front for the Chinese government, helping it to spy on other countries and to make China a worldwide leader in technology by 2025 (a stated goal of the Chinese government). The reason Huawei is seen as a front for the Chinese government, or at least as subject to its significant influence, is that shareholders within the company must report the shares they own to a trade union committee, which reports to the All-China Federation of Trade Unions, which in turn is controlled by the Communist Party of China. Thus, with the shareholders ultimately, though indirectly, reporting to China’s ruling party, the government can exert significant influence over the company. Huawei has also conducted corporate and industrial espionage, ignored sanctions and other nations’ laws, and paid out bribes.

The U.S. government has also added Huawei to a list of companies with which special approval is needed to do business. Earlier this year, President Trump issued an executive order creating a process to ban the use of technology from companies deemed to be a national security risk. However, after meeting with Chinese President Xi Jinping at the G20 summit, President Trump announced that he would ease the restrictions included in the executive order.

More recently, however, the FCC has moved to stop giving broadband subsidies to companies that do not remove and destroy gear made by Huawei and another Chinese company, ZTE. These restrictions have the approval of Attorney General William Barr because of the national security threat these companies pose, and the ban has received wide bipartisan support in Congress. Although the FCC, the Attorney General, and other lawmakers have taken a hard stance against the companies over the national security threat, President Trump seems to be using the bans and restrictions as a bargaining chip in the trade war, as shown by his willingness to ease up on the restrictions after meeting with Chinese President Xi Jinping at the G20 earlier this year. Meanwhile, the ban President Trump has set on Huawei working with U.S. companies has been pushed back to take effect in February 2020. Amid all the bans and restrictions, Huawei has maintained that it does not work for the benefit of the Chinese government and would not act improperly.

As President Trump’s ban and the FCC’s ruling approach their effective dates, we will have to see whether these hardball stances continue to serve our best interests for national security, or whether they will continue to be used as a bargaining chip in the ongoing trade war with China.

Citations

Colin Lecher, Huawei is Getting Three More Months Before US Ban Takes Effect, The Verge (Nov. 18, 2019), https://www.theverge.com/2019/11/18/20970684/huawei-us-ban-delay-trump-china.

Sascha Segan, What Is 5G?, PCMag (Oct. 31, 2019), https://www.pcmag.com/article/345387/what-is-5g.

Shannon Liao, Verizon Won’t Sell Huawei Phones due to US Government Pressure, Report Says, The Verge (Jan. 30, 2018), https://www.theverge.com/2018/1/30/16950122/verizon-refuses-huawei-phone-att-espionage-cybersecurity-fears.

Dave Smith, Here’s Why It’s So Hard to Buy Huawei Devices in the US, Bus. Insider (May 20, 2019), https://www.businessinsider.com/why-huawei-not-sold-in-united-states-2018-12.

Mike Rogers, Huawei is National Security Issue, Not Trade Football for our Leaders, The Hill (June 21, 2019, 12:00 PM), https://thehill.com/opinion/national-security/449711-huawei-is-national-security-issue-not-trade-football-for-our-leaders.

Brandon Baker, Is Huawei a National Security Threat?, Penn Today (July 19, 2019), https://penntoday.upenn.edu/news/why-huawei-national-security-threat.

John Hendel, FCC Votes to Edge Huawei, ZTE out of U.S. Networks, Politico (Nov. 22, 2019), https://www.politico.com/news/2019/11/22/fcc-huawei-zte-subsidies-072901.

Colin Lecher, The FCC Votes to Block Huawei From Billions in Federal Aid, The Verge (Nov. 22, 2019), https://www.theverge.com/2019/11/22/20977703/fcc-vote-huawei-zte-china-funding-federal-aid.

Kelcee Griffis, FCC Eyes Broader Restrictions on Huawei, ZTE Products, Law360 (Nov. 22, 2019), https://www.law360.com/technology/articles/1222689/fcc-eyes-broader-restrictions-on-huawei-zte-products.

Cell-Based Influenza Vaccines as an Alternative to Egg-Based Vaccines

By: William Rankin

Influenza impacts millions of people across the world each year. According to the World Health Organization, there are 3-5 million severe cases of influenza, with 290,000-650,000 attributable deaths, each year. What makes influenza different from most viruses is that it rapidly changes in shape and form due to genetic mutations. As a result, the medical community is faced with continually modifying influenza vaccines to combat the different strains of the virus circulating each year. The predominant influenza vaccine currently on the market is produced by propagating a weakened virus in chicken eggs. Although this production method is widely approved and considered safe, it presents several challenges. First, the egg-based method is highly dependent on a supply of vaccine-quality eggs. In the case of an unforeseen event, such as an avian flu outbreak, the supply of these eggs can be at risk of serious shortage. Second, the egg-based production method is very time consuming due to its various processing steps. And third, because of the inherent egg usage, the current prevailing vaccine cannot be administered to people who have egg or feather allergies.

Two FDA-approved non-egg-based flu vaccines, developed by Protein Sciences (now Sanofi) and Seqirus, show strong potential as alternatives that are quicker to produce, less exposed to supply risk, and available to a wider patient base. The Sanofi vaccine is produced using an insect cell line known as SF9, while the Seqirus vaccine is propagated in canine cells. Studies have shown that the effectiveness of these cell-based vaccines can be 10-11% higher than that of egg-based vaccines in preventing hospital encounters, inpatient stays, and clinic visits, while production time for the cell-based vaccines has proven to be as much as 50% shorter than for the prevailing egg-based method. If past flu pandemics have taught us anything, it is the importance of speed and efficacy when it comes to flu vaccine production. In addition to the decrease in production time and increase in effectiveness, the reliability of production far exceeds that of the egg-based design. The cell-based vaccines are not reliant on production materials subject to variation, such as eggs or antibiotics, allowing them to be produced on a consistent basis for all users.

The disadvantages of the cell-based vaccines compared with their egg-based counterpart are essentially twofold: regulatory hurdles and cost of goods. However, the regulatory hurdles have become less of a barrier in recent years, as the Seqirus vaccine, Flucelvax Quadrivalent, was approved by the FDA in 2016, and Sanofi quickly followed with FDA approval of Flublok. These approvals show that the high initial regulatory burden of demonstrating the vaccines’ safety has already been met, allowing further development to meet specified regulations as well. Although cell-based production costs remain higher than those of the egg-based vaccine, the cell-based production system has far greater potential for multi-vaccine application due to the sterile, disposable nature of the process. Currently, the egg-based design supports significant commercial manufacture of only influenza and yellow fever vaccines, whereas the cell-based design could allow many different vaccines to be produced using the same system. In turn, this diversified use of the production system could increase the total yield of vaccines that the cell-based design can produce, offsetting the expensive nature of the system.

Accounting for all these factors, it may be time to consider cell-based vaccines as the new norm in the influenza vaccine community. If the benefits are there, and the obstacles are beginning to wane, it may soon be that convention is the only thing keeping egg-based production methods prevalent.

Sources:
Influenza (Seasonal), WORLD HEALTH ORG. (Nov. 6, 2018), https://www.who.int/en/news-room/fact-sheets/detail/influenza-(seasonal).

Michael Johnsen, Protein Sciences gains extensive patent protection for Flublok vaccine, DRUG STORE NEWS (June 4, 2015), https://drugstorenews.com/pharmacy/protein-sciences-gains-extensive-patent-protection-flublok.

Bernd Kalbfuss, et al., Purification of Cell Culture-Derived Human Influenza A Virus by Size-Exclusion and Anion-Exchange Chromatography, BIOTECHNOLOGY AND BIOENGINEERING (Aug. 25, 2006).

Michael L. Perdue, et al., The future of cell culture-based influenza vaccine production, EXPERT REV. OF VACCINES (Aug. 2011).

Lisa Schnirring, Study: Cell-based flu vaccine just a bit better than egg-based, U. OF MINN. CTR. FOR INFECTIOUS DISEASE RES. AND POL’Y (Dec. 18, 2018), http://www.cidrap.umn.edu/news-perspective/2018/12/study-cell-based-flu-vaccine-just-bit-better-egg-based.

Seqirus receives FDA approval for FLUCELVAX QUADRIVALENT™ (Influenza Vaccine) for people four years of age and older, CISION PR NEWSWIRE (May 23, 2016 8:50 AM), https://www.prnewswire.com/news-releases/seqirus-receives-fda-approval-for-flucelvax-quadrivalent-influenza-vaccine-for-people-four-years-of-age-and-older-300273578.html.

Michael R. Page, FDA Approves Flublok Quadrivalent Flu Vaccine, CONTAGIONLIVE (Oct. 13, 2016), https://www.contagionlive.com/news/fda-approves-flublok-quadrivalent-flu-vaccine.

Cell-Based Flu Vaccines, CTRS. FOR DISEASE CONTROL AND PREVENTION (Oct. 11, 2019), https://www.cdc.gov/flu/prevent/cell-based.htm.

Celebrities and Paparazzi and Copyright Infringement Lawsuits, Oh My!

By: Meredith Wallen

What is the harm in posting photographs of yourself, taken by someone else, to your own personal Instagram page or other social media platform? Well, if you are Katy Perry, Gigi Hadid, Jennifer Lopez, or a number of other celebrities, it could cost you $150,000 per photograph. Katy Perry dressed as Hillary Clinton for Halloween in 2016 and later posted a photograph taken by the paparazzi to her Instagram page. Perry failed to pay the paparazzi agency to license the photograph and failed to give the agency credit.

This is an ongoing issue in the entertainment industry, with multiple celebrities facing copyright infringement lawsuits for using the photographs that have been taken of them by the paparazzi and posting the photographs on their personal social media pages without paying or giving credit to the copyright holders.

Copyright law grants the copyright owner the exclusive right to reproduce, distribute, or publish the protected work. The official page of the United States Copyright Office states that the “mission of the Copyright Office is to promote creativity by administering and sustaining an effective national copyright system.” The primary purpose of the copyright system is to promote creativity in society by giving those who choose to create art the security of knowing that what they create will be protected and cannot be reproduced or republished without proper compensation or the artist’s prior consent.

The United States Copyright Office’s functions include, but are not limited to, administering copyright law and creating and maintaining a public record through the registration of claims and recordation of documents. Photographs and photographic negatives were added to the works protected under copyright law on March 3, 1865. It is much harder to regulate the reproduction or publication of photographs now than it was years ago because of how accessible they have become. Social media platforms such as Instagram and Twitter make it much easier to take a photo off of one page and repost it to your own personal page without ever having to acknowledge or give credit to whoever holds the copyright in the original photograph.

Paparazzi agencies are continuing to bring more suits against celebrities, and the celebrities are typically settling with the rights holders, which means that most of the suits never make it past preliminary motions. Model Gigi Hadid has dealt with her fair share of copyright infringement lawsuits over reposting photographs to her social media. Recently, the photo agency Xclusive-Lee filed a copyright infringement suit against Hadid in the Eastern District of New York. Xclusive-Lee’s complaint highlighted that Hadid has had a prior copyright infringement suit brought against her, in the hope that this evidence could show Hadid was aware she could not use a photo of herself taken by a member of the paparazzi community without gaining prior consent or compensating the photographer. Hadid has previously settled when sued over copyright infringement; with this suit, however, Hadid and her attorney raised the fair-use doctrine under Section 107 of the Copyright Act in their motion to dismiss.

Section 107 of the Copyright Act states that “the fair use of a copyrighted work,” which here would be the plaintiff’s photograph of the celebrity, “including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching, scholarship, or research, is not an infringement of copyright.” One factor to be considered in determining whether the use made of a work in any particular case is a fair use is the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes. The photograph of Hadid is clearly more commercial in nature than nonprofit or educational. However, Hadid claims that she made no effort to commercially exploit the photograph; rather, the photographer’s intent in taking the photograph “was to commercially exploit Hadid’s popularity.” This is a risky argument for a celebrity to make, considering that celebrities typically exploit themselves when they choose to interact with the public and boost their own popularity through the use of these photographs on their social media platforms, which makes the photographs more desirable and more “newsworthy” to the public.

The other factors in play when considering the fair-use doctrine are the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. Celebrities republishing these photographs of themselves without paying, or even crediting, the photographers could substantially affect the potential market for or value of the copyrighted work. These photographers are compensated for capturing these celebrities’ images by selling the photographs to various social media or news outlets, so if the photographers are not compensated, or not compensated to the extent that they should be, they could in turn lose the incentive to take the photographs, which would substantially affect the market for these types of photographs.

Hadid’s argument focused on the first element of § 107, the “purpose and character of the use.” When analyzing purpose and character, courts tend to look at the extent to which the work is transformative, that is, the extent to which the new work alters the original with new expression, meaning, or message. Hadid argued that her posting the photograph of herself on Instagram is transformative because the nature of her use was social and not commercial, unlike the intention of the paparazzi. But reposting the same photograph, even with a filter or a different caption added, is most likely not transformative regardless of the shift in purpose from commercial to social. Hadid also argued that her posing for the photographer created an implied license between the photographer and herself. If this position were adopted, photographers would certainly suffer a direct economic loss, because the need to purchase photographs would be essentially obliterated if an implied license could be gained merely by what is considered “posing for the camera.”

An argument can be made that celebrities do not directly gain in an economic sense when posting these photographs of themselves on their personal pages; however, each new post boosts their profiles and can indirectly lead to an economic boost. One example is a lawsuit against Ariana Grande for posting two photographs taken by a New York-based paparazzo who captured Grande carrying a bag that read “Sweetener.” The photographer alleged that Grande used his photographs to promote her then soon-to-be-released album of the same name. On the other side of that same argument is the fact that these are photographs of the very individuals being sued, and those celebrities are without a doubt being exploited for commercial purposes.

Celebrities can attempt to raise the right of publicity as a defense, which grants an individual the right to control and profit from the commercial use of their name or likeness, but it does not grant a right to inhibit the photographer’s First Amendment rights. New York law takes into account “newsworthiness” and “public interest” factors when considering the right of publicity defense. Courts will typically rule that when photos are taken in a public place and are of a “newsworthy” subject (e.g., a famous popstar), the photographer has copyright ownership. Regardless of the outcome for each celebrity, whether it be a private settlement or a motion to dismiss raising a fair-use defense, this is an ongoing issue that will most likely end with a bright-line rule of law needing to be established in an effort to minimize the number of copyright infringement suits coming forward.

Cites:

A Brief Introduction and History, United States Copyright Office (Nov. 4, 2019), https://www.copyright.gov/circs/circ1a.html.

 

Amy Ralph Mudge, Randal M. Shaheen, Alan L. Friel and Linda A. Goldstein, Gigi Hadid “Obliterates” Copyrights With Fair-Use Bazooka, Lexology (June 28, 2019), https://www.lexology.com/library/detail.aspx?g=e162b58c-9e96-4857-8505-5b58042af129.

 

Copyright Act of 1976 § 101, 17 U.S.C. § 106 (2019).

 

Copyright Act of 1976 § 101, 17 U.S.C. § 107 (2019).

 

Legal Entertainment, Ariana Grande Hit With Lawsuit For Posting Paparazzi Photo Of Herself, Forbes (May 14, 2019), https://www.forbes.com/sites/legalentertainment/2019/05/14/ariana-grande-hit-with-lawsuit-for-posting-paparazzi-photo-of-herself/#3ddcd6ff4f39.

 

Ken Shepherd, Khloe Kardashian sued by paparazzi photo agency for $175k for sharing copyrighted photo on Instagram, Washington Times (April 27, 2017), https://www.washingtontimes.com/news/2017/apr/27/khloe-kardashian-sued-photographer-violating-copyr/.

 

Maria Puente, Jennifer Lopez sued by paparazzi agency for copyright infringement, USA Today  (Oct 7, 2019), https://www.usatoday.com/story/entertainment/celebrities/2019/10/07/jennifer-lopez-sued-paparazzi-copyright-infringement/3900908002/. 

 

Quinn D’Isa, Gigi Hadid, Paparazzi, and The Intersection of Copyright Laws and Fair Use on Social Media (2019), http://www.fordhamiplj.org/2019/09/25/gigi-hadid-paparazzi-and-the-intersection-of-copyright-laws-and-fair-use-on-social-media/ (last visited Nov. 4, 2019).

FACEBOOK’S DOMINANCE: A CENTRAL FORCE IN PEOPLE’S LIVES ONLINE

By:  Jessiya Joseph

Facebook is the world’s largest social network, and it also owns the photo app Instagram, the messaging service WhatsApp, and the virtual reality company Oculus. In June 2019, the House antitrust committee said it was launching an investigation into Facebook. The Federal Trade Commission and the Department of Justice are both reportedly investigating Facebook over antitrust concerns. Forty-seven attorneys general are now investigating Facebook’s potential anticompetitive conduct to determine whether Facebook’s actions stifled competition and put users at risk.

Facebook has also proposed a new venture: creating its very own digital currency, Libra. Libra has been an important subject in U.S. congressional hearings, as lawmakers worry that it has the potential to disrupt the monetary system. If Libra becomes the currency for Facebook’s 2.4 billion or so users, it has the potential to become a globally significant currency.

Libra’s governing body is incorporated in Switzerland. Libra would be what crypto markets call a stablecoin: a pegged currency that maintains a one-to-one value with the U.S. dollar. The peg would be maintained by the Libra Association, a newly created governing body of 21 members that will together drive the project.

For the first time, not a government but a group of private companies could together hold power capable of upsetting global markets. The Libra Association would function to maintain a reserve: for every dollar’s worth of Libra that is created, one dollar is to be put into the reserve. Theoretically, this reserve could come to hold billions or even trillions of dollars’ worth of currencies and short-term securities. Although the companies are committed to working with the authorities to ensure a safe, transparent, and consumer-friendly implementation, the project remains a concern for many governments. Needless to say, the proposal has gotten a lot of backlash from global regulators. France’s finance minister went so far as to call it an assault on national sovereignty, and officials of other countries, including China, Germany, and Italy, have expressed similar concerns.
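
To make the reserve mechanism concrete, here is a minimal, hypothetical sketch of the one-to-one backing described above. The class and method names are invented for illustration and are not drawn from the Libra design documents; the sketch simply shows the accounting invariant that every unit of Libra in circulation is matched by a dollar in the reserve.

# Hypothetical sketch of the one-to-one reserve described above.
# Names (ReserveLedger, mint, redeem) are illustrative only and are not
# taken from the actual Libra design.

class ReserveLedger:
    """Tracks a fiat reserve that must always equal the Libra in circulation."""

    def __init__(self):
        self.reserve_usd = 0.0    # dollars held in the reserve
        self.libra_supply = 0.0   # Libra units in circulation

    def mint(self, usd_deposit: float) -> float:
        """A user buys Libra: dollars go into the reserve, new Libra is issued 1:1."""
        if usd_deposit <= 0:
            raise ValueError("deposit must be positive")
        self.reserve_usd += usd_deposit
        self.libra_supply += usd_deposit
        return usd_deposit  # Libra issued

    def redeem(self, libra_amount: float) -> float:
        """A user cashes out: the Libra is retired and dollars leave the reserve 1:1."""
        if libra_amount <= 0 or libra_amount > self.libra_supply:
            raise ValueError("invalid redemption amount")
        self.libra_supply -= libra_amount
        self.reserve_usd -= libra_amount
        return libra_amount  # dollars paid out

    def fully_backed(self) -> bool:
        """The peg's invariant: every Libra outstanding is matched by a reserve dollar."""
        return abs(self.reserve_usd - self.libra_supply) < 1e-9


if __name__ == "__main__":
    ledger = ReserveLedger()
    ledger.mint(100.0)
    ledger.redeem(40.0)
    print(ledger.libra_supply, ledger.reserve_usd, ledger.fully_backed())  # 60.0 60.0 True

The worry lawmakers describe is not the arithmetic, which is simple, but the scale: the same invariant applied to billions of users implies a reserve large enough to matter to global markets.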

Facebook has been criticized for the way it has handled user data in the past, and Libra would create a traceable record of all of its users’ transactions. Lawmakers are wary of Facebook’s ability to combine users’ identities, online habits, and spending patterns should this proposal come into force. Facebook executive David Marcus appeared before Congress once before, in the summer hearings of 2019, to offer reassurances about the privacy of user identity and spending data. Currently, Libra’s viability remains under regulatory scrutiny, and the currency is unlikely to come into force anytime soon.

The House, already concerned about Facebook’s likely anticompetitive behavior, heard Mark Zuckerberg’s prepared remarks at the Financial Services Committee hearing regarding the proposal to create Facebook’s very own digital currency. While the hearing was supposed to be about Facebook’s push to create Libra, about half of the topics raised related to its controversial political ads policy and Facebook’s record on diversity. Zuckerberg told lawmakers repeatedly that Facebook would not support launching the cryptocurrency unless it received approval from all U.S. regulators. Facebook is the brains behind the idea, but it is just one of the 21 partners in the Libra Association, and it is willing to leave the Association if that approval does not come through.

But lawmakers continue to be skeptical about how promising Libra could be. Even Zuckerberg at one point conceded that it is a “risky project,” which did not help his situation. Other members of Congress remain skeptical solely because the project is affiliated with Facebook, questioning Zuckerberg’s very credibility, honesty, and motive behind Libra. Moreover, members of Congress seemed to take issue with the Libra Association’s place of incorporation being Switzerland and not the U.S.; they would probably be a little more assured if it were incorporated in the U.S. One member of Congress went so far as to say, “Today Facebook is more like a country, than like a company.”

Zuckerberg’s testimony may still not have been enough to address the many questions lawmakers have about Libra. Near the end of the hearing, which lasted more than six hours, the committee’s ranking member, Rep. Patrick McHenry, said he is not sure the committee members had “learned anything new here.”

Carrie Mihalcik, Facebook now faces 47 attorneys general in antitrust probe, msn: news, https://www.msn.com/en-us/news/technology/facebook-now-faces-47-attorneys-general-in-antitrust-probe/ar-AAJaIld (Oct. 22, 2019, 12:45 PM).

Julie Verhage and Kurt Wagner, Facebook CEO Says Libra Won’t Launch Until U.S. Approves, msn: news, https://www.msn.com/en-us/news/technology/facebook-ceo-says-libra-wont-launch-until-us-approves/ar-AAJaZkS (Oct. 22, 2019).

Paul Vigna, The Global Backlash Against Facebook’s Digital Currency Project, WSJ Video, https://www.wsj.com/video/the-global-backlash-against-facebooks-digital-currency-project/2C62D390-CFC9-4865-96D7-E7AD2B467826.html?mod=article_inline (Oct. 22, 2019, 7:02 AM).

Kurt Wagner, Facebook’s Mark Zuckerberg Heads Back to Washington for Questioning, Bloomberg: Politics, https://www.bloomberg.com/news/articles/2019-10-22/facebook-s-zuckerberg-heads-back-to-washington-for-questioning (Oct. 22, 2019, 10:42 AM EDT).

Theodore Schleifer, Congress couldn’t agree on what exactly was wrong with Mark Zuckerberg. But they all wanted a piece of him, Vox, https://www.vox.com/recode/2019/10/23/20928859/libra-hearing-congress-mark-zuckerberg (Oct. 23, 2019, 4:40 PM EDT).

Claire Duffy and Brian Fung, Mark Zuckerberg grilled by Congress over Libra and political ads policy, CNN BUSINESS, https://www.cnn.com/2019/10/23/tech/mark-zuckerberg-facebook-libra-hearing/index.html (Oct. 23, 2019, 4:26 PM ET).

Self-driving cars, but not self-regulating laws

By: Dwij Patel

Technology in cars has come a long way, especially with the innovation of self-driving vehicles.  The first car was not even close to being capable of this.  Even though Karl Benz, a German inventor, is credited with inventing the first car in 1885, historians have claimed that other cars were invented as early as the 1600s.  One reason Benz was given credit for inventing the first car is that his was the first practical car: it featured a gasoline-powered engine that was the most efficient of its time.  The technology was eventually put into mass production, which made cars affordable and accessible to many people around the world. The advancement of cars did not end there, as all cars at that time required a driver behind the wheel.  The concept of a driverless car began to develop soon after.

The first step toward autonomous vehicles came in 1925, when Francis Houdina demonstrated a radio-controlled car that drove through the streets of Manhattan.  Nobody was behind the steering wheel, and the radio-controlled vehicle was able to start its own engine, shift gears, and sound the horn.  Someone was still controlling the vehicle, however, so it was not truly autonomous.  Fifty years later, John McCarthy, one of the founding fathers of artificial intelligence, described in an essay the idea of a car capable of driving itself, just as a human driver would.  All a person would have to do is enter a destination, and the car would drive them there without assistance.  It was after this time that the technology for a truly autonomous car began to develop.

In 2000, self-parking cars began to emerge, showing the potential for a completely autonomous car.  In 2009, the feature entered the mass market, as major car manufacturers offered cars that could parallel park themselves.  The advancement continued, and by 2013 major car companies such as Ford, Mercedes-Benz, BMW, and Tesla were working on self-driving car technologies.  Even though the idea of a self-driving car can be exciting, there is always a level of risk associated with new technology.  The first autonomous-car fatality occurred in Florida when a self-driving car collided with a human-driven vehicle: the self-driving car failed to apply the brakes in time after the other vehicle turned in front of it, and the impact was fatal.  When human lives are at stake, major steps must be taken to prevent future injury or death.

The deaths involving autonomous vehicles have led to a nationwide review of the laws in this area.  Before autonomous vehicles, only laws governing human drivers existed, and as car technology advanced toward autonomy, the law struggled to keep up.  California did, however, expand its testing rules, which require that a remote operator be able to control the vehicle.  California’s Department of Motor Vehicles also requires a safety driver to be able to take control of the autonomous car, which prevents the car from being truly autonomous.  In 2018, California’s laws appeared to hinder truly autonomous driving, but car manufacturers could still develop and test their cars in Arizona, which had no laws regulating the technology yet.  Even so, no manufacturer took advantage of that opening at the time, because developers wanted to convince themselves, along with the public, that autonomous-car technology was safe and reliable in all conditions.  They wanted to show that it would eventually be safer than human drivers.

As the technology continues to develop at an accelerated rate, laws have been gradually introduced.  In 2012, many states considered legislation regarding autonomous vehicles.  The proliferation of the technology forced legislatures to shift gears and consider new rules and regulations for the safety testing of autonomous cars.  The laws that currently govern autonomous cars can be complex, but no state has made their use illegal.  Completely autonomous cars are available today, and many state legislatures are still catching up, working to enact and enforce safety regulations that keep pace with the technology.  The states that do have autonomous-car laws are not consistent with one another, so users must be aware of each state’s manuals, codes, and handbooks.  The U.S. Department of Transportation has yet to enact a nationwide policy on self-driving cars.

Forecasts indicate that over the next decade, autonomous cars will become almost as common as cruise control is today; there could be over 21 million autonomous cars in the United States alone.  Creating effective policies and regulations for autonomous cars is a challenge, and in 2017 the House of Representatives passed autonomous vehicle legislation, H.R. 3388, to create a uniform standard.  As legislatures struggle to keep up with regulating autonomous cars, even more innovative technology could be developing.  Because no one is truly sure how the car landscape will develop down the road, it is unclear how the car industry and the rules of the road will be affected in the near future.

Geoffrey Migiro, What Was the First Car Ever Made?, WorldAtlas, https://www.worldatlas.com/articles/what-was-the-first-car-ever-made.html (last updated Nov. 28, 2018).

Luke Dormehl & Stephen Edelstein, Sit back, relax, and enjoy a ride through the history of self-driving cars, Digital Trends (Feb. 10, 2019, 9:00 PM), https://www.digitaltrends.com/cars/history-of-self-driving-cars-milestones/.

Jack Karsten & Darrell West, The state of self-driving car laws across the U.S., Brookings (May 1, 2018), https://www.brookings.edu/blog/techtank/2018/05/01/the-state-of-self-driving-car-laws-across-the-u-s/.

Aarian Marshall, Fully Self-Driving Cars Are Really Truly Coming to California, Wired (Feb. 26, 2018, 7:10 PM), https://www.wired.com/story/california-self-driving-car-laws/.

Autonomous Vehicles | Self-Driving Vehicles Enacted Legislation, National Conference of State Legislatures (Oct. 9, 2019), http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx.

Sasha Lekach, Self-driving cars must be experts on ridiculously specific road rules, Mashable (Aug. 28, 2019), https://mashable.com/article/autonomous-vehicles-inconsistent-rules-of-the-road/.

Daniel Araya, The Big Challenges In Regulating Self-Driving Cars, Forbes (Jan. 29, 2019, 9:00 AM), https://www.forbes.com/sites/danielaraya/2019/01/29/the-challenges-with-regulating-self-driving-cars/#4c8849d3b260.

Rick Spence, Everything you need to know about the future of self-driving cars, Maclean’s (July 11, 2019), https://www.macleans.ca/society/technology/the-future-of-self-driving-cars/.

Deepfakes – How Will We Take Control of Fake News?

By: Sofia Feliciano

What are deepfakes? Many people don’t even know what deepfakes are, much less the threat they pose to society today. A deepfake is a fake or doctored video or audio recording that looks or sounds just like a real one, and it is usually not detectable as false by the average viewer or listener. Deepfakes are made using generative adversarial networks, or GANs, in which two machine learning models are pitted against one another. The first uses a provided data set (such as real video footage) to create a forged video, and the second attempts to detect the forgery. The first model keeps refining its fakes until the second model can no longer detect the forgery.
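To make that two-model contest concrete, below is a minimal, hypothetical sketch in Python (assuming the PyTorch library) of a GAN training loop: a “generator” learns to produce samples that resemble the real data while a “discriminator” learns to flag the forgeries. It uses toy one-dimensional numbers in place of video frames, and every network size, name, and learning rate here is purely illustrative, not drawn from any actual deepfake tool.

# Minimal GAN sketch (PyTorch assumed): a generator learns to mimic a toy
# "real" data distribution while a discriminator learns to flag forgeries.
# Real deepfake pipelines use far larger networks and video/audio data.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # Stand-in for authentic footage: samples from a Gaussian centered at 2.0.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to tell real samples (label 1) from fakes (label 0).
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator (its fakes labeled "real").
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data's mean (~2.0),
# i.e., the forger has learned output the detector can no longer separate from real data.
print(generator(torch.randn(1000, 8)).mean().item())

This back-and-forth is the dynamic described above: the forging model keeps improving until the detecting model can no longer tell the difference.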

Can deepfakes be detected? Yes, usually, but the problem is time. It typically takes 24 to 48 hours to detect the use of deepfake technology. It also takes time for other accounts of the same event to surface, at which point comparisons to the deepfake can be made to establish its falsity. Another danger of deepfakes is accessibility: anyone can download deepfake software and begin creating deepfakes. There is even an app, FakeApp, that people can download directly onto their phones.

Deepfakes are likely to be the next big problem for the 2020 elections. Recently, deepfake technology has been used to create videos that appear to show politicians saying or doing things they never said or did. One video that gained much traction in the news is the deepfake video of House Speaker Nancy Pelosi: the President tweeted out doctored footage in which Pelosi appeared to stutter through her speech. The fake Pelosi video gained more than 2.5 million views on Facebook alone, as Facebook was unable to identify it as doctored.

Deepfakes could have disastrous implications for candidates for public office as well as for the upcoming 2020 election. Deepfakes are a powerful technology that people could use to spread misinformation and influence an election. Worse, if the public becomes aware of deepfakes circulating widely over the course of a campaign, voters may grow cynical about their ability to discern truth from falsehood, which could lead to apathy and low voter turnout. Additionally, as deepfakes spread through the media, politicians could try to deny things that really did happen, turning the public’s doubts to their own advantage.

Some politicians have expressed concern about the increasing use of deepfakes. Florida Senator Marco Rubio called deepfakes the “modern equivalent of nuclear weapons.” The fear Rubio is capturing is that deepfakes could be exploited for nefarious ends, such as starting military conflicts by falsely portraying hostilities that never occurred. Imagine the widespread circulation of a deepfake of the President announcing that we have started a war with Russia, China, or another world superpower. The effects could be catastrophic, especially when the ability to identify the video as false would be limited until a day or two after its release.

Two states have already begun to take action against deepfakes. California has passed a bill, AB 730, that prohibits the distribution of doctored audio or video of a candidate unless it is clearly marked as fake, though the prohibition applies only if the deepfake is distributed within 60 days of an election. The law hinges on whether the distribution of the deepfake could lead a reasonable person to form a fundamentally different impression of the candidate. Critics have argued that the law is overbroad, vague, and subjective, since “fundamentally different impression” is undefined, and they have noted the difficulties the law could pose for satire and parody. Texas has passed a similar deepfake law, making it a misdemeanor to create and distribute deepfakes within 30 days of an election with the intent to harm a candidate or otherwise influence the election.

But what else can be done? As of right now, it is still not a federal crime in the United States to create fake videos. However, two federal bills have recently been introduced with the aim of controlling the use of deepfakes: (1) the Malicious Deep Fake Prohibition Act, introduced in December 2018, and (2) the DEEPFAKES Accountability Act, introduced in June 2019.

The Malicious Deep Fake Prohibition Act was introduced by Senator Ben Sasse. Its purpose is to amend Title 18 of the U.S. Code to prohibit “certain fraudulent audiovisual records.” The bill defines a deepfake as “an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.” If passed, it would make it unlawful for someone to (1) create a deepfake with the intent to distribute it in order to facilitate criminal or tortious conduct, or (2) distribute a deepfake with actual knowledge that it is a deepfake and the intent that its distribution facilitate criminal or tortious conduct.

The DEEPFAKES Accountability Act was introduced by House Representative Yvette Clarke. This bill regulates deepfake audio and video recordings more extensively, requiring deepfakes to carry a digital watermark as well as audiovisual disclosures of the alterations made to the recording. Penalties for failing to disclose, or for altering the required disclosures, include fines of up to $150,000, imprisonment for not more than five years, or both.

As technology in our society continues to advance at a rapid pace, these two bills are a start toward regulating deepfake technology nationwide. More states will likely adopt deepfake laws in preparation for the upcoming 2020 election, and with more state support, the chances of national regulation against malicious deepfake use, especially in connection with democratic elections, will only increase. Until then, the public needs to keep a sharp eye out for fake news.

Citations:

1. Levi Sumagaysay, California Has A New Deepfakes Law In Time For 2020 Election, THE MERCURY NEWS (Oct. 4, 2019, 3:14 PM), https://www.mercurynews.com/2019/10/04/california-has-a-new-deepfakes-law-in-time-for-2020-election/.

2. Grace Shao, Fake Videos Could Be The Next Big Problem In The 2020 Elections, CNBC TECH (Oct. 15, 2019, 9:40 AM), https://www.cnbc.com/2019/10/15/deepfakes-could-be-problem-for-the-2020-election.html.

3. Philip Ewing, What You Need To Know About Fake Video, Audio, And The 2020 Election, NPR (Sept. 2, 2019, 7:00 AM), https://www.npr.org/2019/09/02/754415386/what-you-need-to-know-about-fake-video-audio-and-the-2020-election.

4. Will Fischer, California’s Governor Signed New Deepfake Laws For Politics And Porn, But Experts Say They Threaten Free Speech, BUSINESS INSIDER (Oct. 10, 2019, 12:51 PM), https://www.businessinsider.com/california-deepfake-laws-politics-porn-free-speech-privacy-experts-2019-10.

5. J.M. Porup, How And Why Deepfake videos work – And What Is At Risk, CSO ONLINE (Apr. 10, 2019, 3:00 AM), https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html.

6. Doctored Nancy Pelosi Video Highlights Threat of “Deepfake” Tech, CBS NEWS (May 25, 2019, 12:39 PM), https://www.cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25/.

7. DEEPFAKES Accountability Act of 2019, H.R. 3230, 116th Cong. (1st Sess. 2019).

8. Malicious Deep Fake Prohibition Act of 2018, S. 3805, 115th Cong. (2d. Sess. 2018).

9. Mike Wendling, The (almost) complete history of ‘fake news’, BBC News, (Jan. 22, 2018),  https://www.bbc.com/news/blogs-trending-42724320.