The Cybersecurity Information Sharing Act of 2015

By: Christopher W. Folk

In the eleventh hour of the twelfth month of 2015, the Cybersecurity Information Sharing Act (“CISA”) (Pub. L. No. 114-113)[1] was pushed through Congress as part of an omnibus spending bill that was subsequently signed by President Obama.[2] The bill has been hailed by its sponsors as long overdue and an important step in enhancing our nation’s cybersecurity, while privacy advocates have decried it as a further government encroachment on privacy rights.[3] CISA 2015 is an expansive and wide-reaching law; consequently, our focus here will be limited to its information-sharing provisions.


The “Digital Attorney”: Using Artificial Intelligence in Law Practice

Ashley Menard

Today, attorneys spend countless hours researching the legal issues their clients present to them. Oftentimes, clients are unable or unwilling to pay for the number of hours it takes to do that research adequately. The answer to these problems may come in the form of artificial intelligence, which can be used to expedite legal research. One computer scientist, Jimoh Ovbiagele, has created ROSS Intelligence, an artificially intelligent “digital assistant” for attorneys.

ROSS Intelligence began as a research project at the University of Toronto in 2014. ROSS was built on IBM Watson, a cognitive system that can answer questions in natural language. The goal of ROSS is to act as an “Artificial intelligence lawyer that helps human lawyers research faster and focus on advising clients.” Today, ROSS acts as a legal research assistant that helps scale the abilities of lawyers through artificial intelligence technology.

ROSS quickly responds to questions posed to it after searching through legal documents. Attorneys can orally ask ROSS legal questions, and ROSS will respond while showing users the citations supporting its answers. ROSS draws its legal information from a legal publisher, although the publisher’s name has not yet been revealed. The more ROSS is used, the more its responses improve. If the law changes, ROSS can also track whether the change affects the cases it has cited. This enables attorneys to focus on the individual facts of a client’s case while spending less time on mundane tasks like sifting through legal documents.

Law firms have already jumped on the idea of using artificial intelligence for legal research. Prominent law firms such as BakerHostetler, Latham & Watkins, and Wisconsin’s von Briesen & Roper have begun licensing ROSS Intelligence. “We believe that emerging technologies like cognitive computing and other forms of machine learning can help enhance the services we deliver to our clients,” said Bob Craig, BakerHostetler’s chief information officer.

Ovbiagele believes that ROSS Intelligence will allow legal services to be delivered at a more affordable price. However, a concern is that ROSS Intelligence will begin to replace jobs in an already struggling legal market. In response to this concern, Ovbiagele emphasizes that ROSS cannot advise clients on what they should do. “We give lawyers the information; it’s their job to make the judgment call, which requires a lot of things computers can’t do,” Ovbiagele says. “A lot of people talk about Ross replacing jobs. Really, it just makes the jobs easier.”

As of May 2016, ROSS Intelligence had commercially rolled out its bankruptcy law module. Now, ROSS is building more legal practice modules into the ROSS system. It will only be a matter of time before ROSS Intelligence becomes commonplace in law firms across the United States.

Cybersecurity Giant Re-Emerges: McAfee is Back

Jonathan Ziarko

Six years ago, Intel took a big leap when it acquired McAfee, a powerhouse in computer security software, for $7.7 billion.[1] Intel turned the company into Intel Security, but as computer sales have slowly declined over the past few years, Intel has begun shifting its focus toward cloud-based data centers.[2] On September 7, 2016, Intel announced plans to sell 51% of Intel Security to TPG for $3.1 billion.[3] Intel saw an opportunity to make money by selling the majority stake in Intel Security to TPG and rebranding it back to McAfee. The sale will allow Intel to focus on other things while still giving it a stake in the cybersecurity industry. Christopher Young, Senior Vice President and General Manager of the current Intel Security Group, announced in an open letter to stakeholders that Intel and TPG are working together in the hope of creating one of the “largest independent, pure-play cybersecurity companies in the industry.”[4] Mr. Young went on to address the company’s shareholders, consumers, and partners, outlining his continued goals of excellence in cybersecurity and his hope of re-establishing McAfee as one of the most trusted names in an industry that will only continue to grow.[5] Intel, of course, will retain a 49% share in the new company.[6] The deal has not yet closed, but it should close sometime in mid-2017.[7]

 

[1] Michael J. de la Merced & Quentin Hardy, Intel Sells a Majority Stake of Cybersecurity Unit to TPG, The New York Times Company (Sept. 7, 2016), http://www.nytimes.com/2016/09/08/business/dealbook/intel-sells-a-majority-stake-of-cybersecurity-unit-mcafee-to-tpg.html?ref=technology.

[2] Id.

[3] Stephanie Condon, Intel, TPG to form jointly owned cybersecurity company called McAfee, ZDNet (Sept. 7, 2016, 21:29 GMT), http://www.zdnet.com/article/intel-tpg-to-form-jointly-owned-cybersecurity-company-called-mcafee/.

[4] Letter from Christopher Young, Sr. Vice President & Gen. Manager, Intel Security Group, to Intel Security Stakeholders (Sept. 7, 2016), https://newsroom.intel.com/wp-content/uploads/sites/11/2016/09/CY-Letter-090716.pdf.

[5] Id.

[6] Id.

[7] Michael J. de la Merced & Quentin Hardy, supra note 1.

Anonymity for Anti-Discrimination: Airbnb’s Effort to Dismantle Worldwide Prejudice

Emma Fusco

Although Airbnb spans nearly 34,000 cities and 191 countries, studies have shown that despite the diversity of its users, discrimination based on race and gender remains a continuing issue.[1]  Airbnb, the short-term rental site, has run into problems with discrimination by hosts.  A class action suit was brought after chief plaintiff Gregory Selden was denied service solely because of his race.[2]  Supporting his claim, a Harvard University working paper found it harder for users with African-American-sounding names to rent rooms through the website than for users with more Caucasian-sounding names.[3]

Airbnb’s plan of action to stamp out this discrimination is quite disheartening in our country’s time of racial reform.  The answer seems simple: take down user photos and make names anonymous.  Airbnb’s answer?  That removing photos and anonymizing users would prevent the buildup of trust between host and renter, further heightening the anxiety of letting a stranger stay in one’s home.[4]  To be proactive on this issue, Airbnb has permanently banned users who violated its anti-discrimination policy, including two hosts removed for directing racist epithets at a user and another for refusing to let a transgender user rent.[5]

Although these actions help promote the anti-discrimination policy currently enacted by the online rental service, the company is taking this issue further from a technological standpoint.  The company is currently “examining its internal structures and technology, and its processes for identifying and handling discrimination incidents.”[6]  But what legal implications follow?

The legal issue here is very similar to those surrounding other businesses in the “sharing economy.”[7]  Should these private persons acting through a public service be considered public or private actors?  And if the service is determined to be private, does that give hosts the lawful option to discriminate freely, or are hosts bound by other terms within the user contract?

There has been little development published by Airbnb regarding this issue and how the company plans to resolve it.

 

[1] Katie Benner, Airbnb Adopts Rules to Fight Discrimination by Its Hosts, N.Y. Times, Sept. 9, 2016, at A1.

[2] Katie Benner, Airbnb Vows to Fight Racism, but Its Users Can’t Sue to Prompt Fairness, N.Y. Times, June 20, 2016, at B1.

[3] Id.

[4] Mike McPhate, Discrimination by Airbnb Hosts Is Widespread, Report Says, N.Y. Times, http://www.nytimes.com/2015/12/12/business/discrimination-by-airbnb-hosts-is-widespread-report-says.html.

[5] Benner, supra note 1.

[6] Benner, supra note 2.

[7] Id.

 

Protecting Your Brand: An Introduction to The Lanham Act

Lishayne King

Innovations and advances today occur at far faster rates than in past decades. With the advent of the Internet and the ease with which entities can create websites and brand names, information, products, and resources can be accessed with the click of a mouse. Protecting your words, products, and resources, however, can prove difficult without knowledge of the specific safeguards and defenses the law affords you. One of the paramount mechanisms individuals and entities can use to protect their trademarks is the Lanham Act.

The Lanham Act is the chief source of protection for trademarks under federal law. Two sections in particular are essential to the Act’s enforcement. Section 32, codified at 15 U.S.C. § 1114, protects against infringement of a registered mark. Section 43(a), codified at 15 U.S.C. § 1125(a), protects consumers and businesses from dishonest business practices. Together, these sections are designed to prevent the confusion that may arise when one entity uses another entity’s trademark while engaged in business.

A Case Study of Broadband Regulation: United States Telecom Association v. Federal Communications Commission

Nicholas Fedorka

A recent decision of the United States Court of Appeals for the District of Columbia Circuit is creating ripples throughout the realm of administrative and utility law.  The case was a direct response to the Federal Communications Commission’s (“FCC”) adoption of the “2015 Open Internet Order,” which reclassified broadband internet from an “information service” to a “telecommunications service” under the Telecommunications Act of 1996.  U.S. Telecom Ass’n v. FCC, 825 F.3d 674, 678 (D.C. Cir. 2016).  This is a major step in broadband regulation because telecommunications services are subject to common carrier regulation under the Telecommunications Act.  Id.  The 2015 Open Internet Order consisted of three components: 1) the Commission reclassified both fixed and mobile broadband as telecommunications services; 2) the Commission forbore from applying certain provisions of the Telecommunications Act that it deemed unnecessary; and 3) the Commission promulgated five open internet rules that applied to both fixed and mobile broadband services.  Id. at 687.

Three separate groups of petitioners, including the United States Telecom Association, challenged the FCC’s reclassification.  The D.C. Circuit applied the two-part Chevron test of statutory interpretation to determine whether the FCC’s 2015 Open Internet Order is a permissible exercise of the agency’s authority under the Telecommunications Act.  Id. at 699.  First, the court asked whether Congress had directly spoken to the precise question at issue; if the statute is clear, that is the end of the matter.  U.S. Telecom Ass’n, 825 F.3d at 699.  If the Telecommunications Act of 1996 is silent or ambiguous, the question becomes whether the agency’s answer is based on a permissible construction of the statute.  Id.  The court held that the statute was ambiguous but that the agency’s answer rested on a permissible construction.  Id. at 700.  Furthermore, the court upheld the FCC’s decision to regulate on a “case-by-case basis” under the Act.  Id. at 709.

It will be interesting to see how the FCC regulates broadband services on such a basis.  The idea of a federal agency regulating a particular field on a “case-by-case” basis implicates the constitutional law doctrine of conflict preemption.  If a state decides to regulate broadband internet outside the scope of the FCC’s 2015 Open Internet Order, it appears the FCC will attempt to strike that regulation down.  But many questions have yet to be answered.  Have individual states already started to regulate broadband internet providers?  Are their regulations in line with the FCC’s Open Internet Order?  How much power do individual states have to regulate broadband providers?  What types of state regulatory schemes conflict with the FCC’s Open Internet Order?  These questions remain unanswered in this unsettled area of the law.

CDA Section 230 Liability: Snapchat Faces Class Action Lawsuit

Justin Farooq

Over the last twenty years, an increasing number of claims have been brought against Internet websites that publish third-party content, slowly changing the prevailing notion that such content is never subject to liability.[1] This upsurge in lawsuits comes as no surprise: greater accessibility to online material, coupled with the proliferation of increasingly powerful, interactive online publishers, has increased exposure to an assortment of legal matters and raised the stakes in online publishing.[2]

Many of the lawsuits brought against online publishers implicate § 230 of the Communications Decency Act of 1996 (CDA), which provides formidable immunity to online publishers.[3] When originally passed, § 230 was intended to shield website owners from liability for illegal third-party content on their websites in order to promote the development of the early Internet, vigorous information sharing, and free speech.[4] The legislative rationale behind the immunity provision was that while websites share some attributes with print works such as magazines, which are held liable for third-party content, they are more akin to telecom companies that operate as passive conduits of third-party communication, which are not liable.[5]  However, due to the evolving complexity of the Internet, this distinction between an active and a passive conduit of third-party information has become much harder to draw.

The vast majority of courts interpreting § 230 have done so broadly, treating the immunity provision as an almost absolute bar on lawsuits against website owners for publishing questionable third-party content.[6]  Recently, however, this seemingly carte blanche immunity for online publishers has come under closer scrutiny.  In the intricate Internet landscape we live in today, websites not only publish information but also wield significant power and control over the material that third parties post.

Within the past ten years, several judges have dissented from this broad interpretation of § 230, claiming that granting broad immunity to online publishers has led to websites carrying defamatory, illegal, or otherwise harmful postings without any recourse for victims.[7] We are no longer in the days of the early Internet, when we needed the CDA to promote vigorous sharing of information and protection of free speech.[8] Nor can we say that websites of this nature were part of the purpose of the CDA, which was to promote the sharing of vital, helpful communications over the Internet for the greater good of society.[9] The potential for abuse simply outweighs the benefits of a broad interpretation of the CDA’s immunity provision.[10]

The latest of these lawsuits is a putative class action filed July 7, 2016 in California federal court.[11] The case involves a minor user of Snapchat Inc.’s photo messaging app.[12] Plaintiff alleges that media published within the “Discover” section of Snapchat’s app exposes minors to sexually inappropriate and offensive material, in violation of the CDA.[13]  The “Discover” section allows other media outlets, such as Buzzfeed and Cosmopolitan, to publish on the Snapchat app’s main page; Snapchat thus has not only a large amount of control over which third-party media outlets can post, but also a great deal of influence over the type of content they ultimately post.[14]

Snapchat has approximately 150 million users, and twenty-three percent are between the ages of thirteen and seventeen.[15]  Snapchat does ask for the user’s birthday during registration, since access is restricted to people older than thirteen.[16]  Nonetheless, there is no distinction between the videos available to minors and those available to adults, and no language in the terms of service notifies minor users of explicit content.[17] Snapchat’s own online community guidelines prohibit users from publicly displaying sexually explicit content.[18] Yet the content Snapchat offers on Discover consistently breaches its own rules.

An intriguing layer of the lawsuit is that the plaintiff’s claim is founded on the infrequently cited CDA § 230(d), a subsection that has yet to be litigated.[19] In essence, § 230(d) requires online interactive services such as Snapchat, when entering into an agreement with users, to alert those users to the availability of parental control safeguards.[20]  If the plaintiff prevails, this may necessitate changes to the terms of service of almost all interactive online publishers.  As the Internet further develops and online communication increases, courts seem less eager to hand out immunity without close scrutiny.  It looks probable that as more content goes digital, the tendency to construe the scope of immunity a bit more tightly will continue.  It will be interesting to see how Snapchat responds.

 

[1] Samuel J. Morley, How Broad Is Web Publisher Immunity Under §230 of the Communications Decency Act of 1996?, Fla. B. J., Feb. 2010, at 8.

[2] Id.

[3] 47 U.S.C.A. § 230 (West 1998).

[4] Id.

[5] Morley, supra note 1.

[6] See Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997).

[7] See Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1177 (9th Cir. 2008) (dissenting opinion).

[8] Id.

[9] Id. at 1178.

[10] Id.

[11] Class Action Alleges Snapchat Exposes Minors To Adult Content, Lexis Legal News (July 8, 2016, 1:58 PM), http://www.lexislegalnews.com/articles/9659/class-action-alleges-snapchat-exposes-minors-to-adult-content.

[12] Id.

[13] Lexis Legal News, supra note 11.

[14] Id.

[15] Shayna Posses, Snapchat Exposes Minors To Explicit Content, Suit Says, Law 360 (July 7, 2016, 8:43 PM), http://www.law360.com/articles/815156/snapchat-exposes-minors-to-explicit-content-suit-says.  

[16] Id.

[17] Id.

[18] Id.

[19] Jeffery Neuburger, Liability under CDA Section 230? Recent Lawsuit Tries to Flip the Script against Social Media Service, Proskauer (Sep. 8, 2016), http://newmedialaw.proskauer.com/2016/09/08/liability-under-cda-section-230-recent-lawsuit-tries-to-flip-the-script-against-social-media-service/#more-1387.

[20] Id.

 

Drone Technology Brings Chipotle and Challenges to Privacy

Brittany Charles

On September 8, 2016, Alphabet, Google’s parent company, announced a partnership with Chipotle to launch a burrito delivery system at Virginia Tech utilizing autonomous drones.[1] The FAA has approved the program, but before students begin lining up by the dozens to get their Tex-Mex fix, there are some limitations.[2] First, the drones are not allowed to fly over students; the university has therefore designated a select kiosk where individuals order and wait for their burritos.[3] Second, because the drones cannot fly over students, only select individuals will initially be able to order burritos via drone.[4] The reality of burrito-bearing drones at our fingertips is one that is available only to a select few and will come with a variety of issues that may make the technology more infuriating than enticing.

So what is the issue that no one seems to be talking about? We know that the drones are not allowed to fly over students, but students’ privacy will inevitably be affected by this technology. The FAA currently requires individuals flying for hobby or recreation to stay below 400 feet and prohibits them from flying near groups of people or over stadiums and sports events.[5] However, the rules regarding drones for commercial use are becoming more lenient in order to encourage innovation.[6] The result? An increasingly complex environment in which the government, corporations, and universities will have to weigh the benefits of burritos against students’ privacy. So while we can take a moment to bask in the glory of drone armies bringing Chipotle to hungry boys and girls across our college campuses, we must do so with reservation. With power comes responsibility, and who is responsible for students’ privacy?

[1] Matt McFarland, Google drones will deliver Chipotle burritos at Virginia Tech, CNN (Sept. 8, 2016), http://money.cnn.com/2016/09/08/technology/google-drone-chipotle-burrito/.

[2] Kate Cox, Google Will Actually Deliver Burritos By Drone To Some Lucky College Kids, Consumerist (Sept. 8, 2016), https://consumerist.com/2016/09/08/google-will-actually-deliver-burritos-by-drone-to-some-lucky-college-kids/.

[3] McFarland, supra note 1.

[4] Cox, supra note 2.

[5] Where to Fly, FAA (June 14, 2016 1:23:02 PM EDT), https://www.faa.gov/uas/where_to_fly/.

[6] Nyshka Chandran, FAA’s new drone laws go into effect Monday, allowing US companies to innovate, CNBC (Aug. 29, 2016), http://www.cnbc.com/2016/08/29/faas-new-drone-laws-go-into-effect-monday-allowing-us-companies-to-innovate.html.