Open Access. Powered by Scholars. Published by Universities.®

Law Commons

Articles 1 - 21 of 21

Full-Text Articles in Law

Montana Is Trying To Ban TikTok. What Does The First Amendment Have To Say?, Deborah Pearlstein, John Dellamore May 2023

Online Publications

Last month, Montana became the first U.S. state to pass a bill banning TikTok from operating within its borders. If Governor Greg Gianforte signs some version of the bill, it will become the first statewide ban in the country to take direct aim at the popular social media app, which various U.S. government officials have warned poses a serious national security threat. But while Montana may be the first to act, significant gaps remain in the public debate surrounding both the nature of the threat that TikTok presents and the constitutional questions that trying to regulate it might create.


Campbell V. Reisch: The Dangers Of The Campaign Loophole In Social Media Blocking Litigation, Clare R. Norins, Mark Bailey Jan 2023

Scholarly Works

Since 2016, social media blocking by government officials has been a lively battleground for First Amendment rights of free speech and petition. Government officials increasingly rely on social media to communicate with the public while ever greater numbers of private individuals are voicing their opinions and petitioning for change on government officials' interactive social media accounts. Perhaps not surprisingly, this has prompted many government officials to block those users whose comments they deem to be critical or offensive. But such speech regulation by a government actor introduces viewpoint discrimination—a cardinal sin under the First Amendment.

In 2019, three United States …


Platforms As Blackacres, Thomas E. Kadri Jan 2022

Scholarly Works

While writing this Article, I interviewed a journalist who writes stories about harmful technologies. To do this work, he gathers information from websites to reveal trends that online platforms would prefer to hide. His team has exposed how Facebook threatens people’s privacy and safety, how Amazon hides cheaper deals from consumers, and how Google diverts political speech from our inboxes. You’d think the journalist might want credit for telling these important stories, but he instead insisted on anonymity when we talked because his lawyer was worried he’d be confessing to breaking the law—to committing the crime and tort of cyber-trespass. …


Facebook's Latest Attempt To Address Vaccine Misinformation — And Why It's Not Enough, Ana Santos Rutschman Nov 2020

All Faculty Scholarship

On October 13, 2020, Facebook announced the adoption of a series of measures to promote vaccine trust “while prohibiting ads with misinformation that could harm public health efforts.” In the post written by Kang-Xing Jin (head of health) and Rob Leathern (director of product management), the company explained that the new measures were designed with an emphasis on encouraging widespread use of this year's flu vaccine, as well as in anticipation of potential COVID-19 vaccines becoming available in the near future.

The changes focus mainly on the establishment of a multipronged informational campaign about the seasonal flu vaccine, which includes …


Bad Actors: Authenticity, Inauthenticity, Speech, And Capitalism, Sarah C. Haan Jan 2020

Scholarly Articles

“Authenticity” has evolved into an important value that guides social media companies’ regulation of online speech. It is enforced through rules and practices that include real-name policies, Terms of Service requiring users to present only accurate information about themselves, community guidelines that prohibit “coordinated inauthentic behavior,” verification practices, product features, and more.

This Article critically examines authenticity regulation by the social media industry, including companies’ claims that authenticity is a moral virtue, an expressive value, and a pragmatic necessity for online communication. It explains how authenticity regulation provides economic value to companies engaged in “information capitalism,” “data capitalism,” and “surveillance …


Symposium: The California Consumer Privacy Act, Margot Kaminski, Jacob Snow, Felix Wu, Justin Hughes Jan 2020

Publications

This symposium discussion of the Loyola of Los Angeles Law Review focuses on the newly enacted California Consumer Privacy Act (CCPA), a statute signed into state law by then-Governor Jerry Brown on June 28, 2018, and effective as of January 1, 2020. The panel was held on February 20, 2020.

The panelists discuss how businesses are responding to the new law and obstacles for consumers to make effective use of the law’s protections and rights. Most importantly, the panelists grapple with questions courts are likely to have to address, including the definition of personal information under the CCPA, the application …


The Facebook Oversight Board: Creating An Independent Institution To Adjudicate Online Free Expression, Kate Klonick Jan 2020

Faculty Publications

For a decade and a half, Facebook has dominated the landscape of digital social networks, becoming one of the most powerful arbiters of online speech. Twenty-four hours a day, seven days a week, over two billion users leverage the platform to post, share, discuss, react to, and access content from all over the globe. Through a system of semipublic rules called “Community Standards,” Facebook has created a body of “laws” and a system of governance that dictate what users may say on the platform. In recent years, as this intricately built system to dispatch the company’s immense private power over …


Facebook V. Sullivan: Public Figures And Newsworthiness In Online Speech, Thomas E. Kadri, Kate Klonick Jan 2019

Faculty Publications

In the United States, there are now two systems to adjudicate disputes about harmful speech. The first is older and more established: the legal system in which judges apply constitutional law to limit tort claims alleging injuries caused by speech. The second is newer and less familiar: the content-moderation system in which platforms like Facebook implement the rules that govern online speech. These platforms are not bound by the First Amendment. But, as it turns out, they rely on many of the tools used by courts to resolve tensions between regulating harmful speech and preserving free expression—particularly the entangled concepts …


How Supreme A Court?, Thomas E. Kadri Nov 2018

Popular Media

Facebook is planning an independent appeals process for content moderation decisions. But how much power will it have?


How To Make Facebook's 'Supreme Court' Work, Kate Klonick, Thomas E. Kadri Nov 2018

Popular Media

The idea of a body that will decide what kind of content is allowed on the site is promising — but only if it’s done right.


The Law Of Advertising Outrage, Mark Bartholomew Oct 2018

Journal Articles

This article examines the stimulation of audience outrage, both as a marketing strategy and as a subject of legal regulation. A brief history of advertising in the United States reveals repeated yet relatively infrequent attempts to attract consumer attention through overt transgressions of social norms relating to sex, violence, race, and religion. Natural concerns over audience reaction limited use of this particular advertising tactic as businesses needed to be careful not to alienate prospective purchasers. But now companies can engage in “algorithmic outrage”—social media advertising meant to stimulate individual feelings of anger and upset—with less concern for a consumer backlash. …


Inciting Terrorism On The Internet: The Limits Of Tolerating Intolerance, Amos N. Guiora Apr 2018

Utah Law Faculty Scholarship

The Internet is a limitless platform for information and data sharing. It is, in addition, however, a low-cost, high-speed dissemination mechanism that facilitates the spreading of hate speech, including violent and virtual threats. Indictment and prosecution for social media posts that transgress from opinion to inciteful hate speech are appropriate in limited circumstances. Several real-world examples discussed here help to explore when limitations on Internet-based hate speech are appropriate.

In October 2015, twenty thousand Israelis joined a civil lawsuit filed against Facebook in the Supreme Court for the State of New York. Led by the civil rights organization Shurat HaDin, …


Hate Speech On Social Media, Amos N. Guiora, Elizabeth Park May 2017

Utah Law Faculty Scholarship

This essay expounds on Raphael Cohen-Almagor’s recent book, Confronting the Internet’s Dark Side: Moral and Social Responsibility on the Free Highway, and advocates placing narrow limitations on hate speech posted to social media websites. The Internet is a limitless platform for information and data sharing. It is, in addition, however, a low-cost, high-speed dissemination mechanism that facilitates the spreading of hate speech, including violent and virtual threats. Indictment and prosecution for social media posts that transgress from opinion to inciteful hate speech are appropriate in limited circumstances. This article uses various real-world examples to explore when limitations on Internet-based hate …


The “Sovereigns Of Cyberspace” And State Action: The First Amendment's Application (Or Lack Thereof) To Third-Party Platforms, Jonathan Peters Jan 2017

Scholarly Works

Many scholars have commented that the state action doctrine forecloses use of the First Amendment to constrain the policies and practices of online service providers. But few have comprehensively studied this issue, and the seminal article exploring “[c]yberspace and the [s]tate [a]ction [d]ebate” is fifteen years old, published before the U.S. Supreme Court reformulated the federal approach to state action. It is important to give the state action doctrine regular scholarly attention, not least because it is increasingly clear that “the private sector has a shared responsibility to help safeguard free expression.” It is critical to understand whether the First …


Keeping Pace: The U.S. Supreme Court And Evolving Technology, Brian Thomas Jul 2015

Politics Summer Fellows

Contemporary mainstream discussions of the Supreme Court are often qualified with the warning that the nine justices are out of touch with everyday American life, especially when it comes to the newest and most popular technologies. For instance, during oral argument for City of Ontario v. Quon, a 2010 case that dealt with sexting on government-issued devices, Chief Justice John Roberts famously asked what the difference was “between email and a pager,” and Justice Antonin Scalia wondered if the “spicy little conversations” held via text message could be printed and distributed. While these comments have garnered a great deal of …


The New American Privacy, Richard J. Peltz-Steele Jan 2014

Faculty Publications

The European Union sparked an intercontinental furor last year with proposed legislation to supersede the 1995 Data Protection Directive (DPD). The EU Parliament approved legislation in a 49-3 committee vote in October. The text, which is not yet published in its current draft at the time of this writing, may yet be amended before being accepted by the union’s 28 member states. The legislation is billed as a money saver because it would harmonize EU member states’ data protection laws, which have diverged under the DPD umbrella. The business community is not convinced, fearful that costly new demands will strain balance …


Public Forum 2.1: Public Higher Education Institutions And Social Media, Robert H. Jerry II, Lyrissa Lidsky Oct 2012

Faculty Publications

Public colleges and universities increasingly are using Facebook, Second Life, YouTube, Twitter, and other social media communications tools. Yet public colleges and universities are government actors, and their creation and maintenance of social media sites or forums create difficult constitutional and administrative challenges. Our separate experiences, both theoretical and practical, have convinced us of the value of providing guidance for public higher education institutions wishing to engage with their constituents, including prospective, current, and former students and many others, through social media.

Together, we seek to guide public university officials through the complex body of law governing their social media use and …


Public Forum 2.1: Public Higher Education Institutions And Social Media, Robert H. Jerry II, Lyrissa Barnett Lidsky Oct 2012

UF Law Faculty Publications

Like most of us, public colleges and universities increasingly are communicating via Facebook, Second Life, YouTube, Twitter and other social media. Unlike most of us, public colleges and universities are government actors, and their social media communications present complex administrative and First Amendment challenges. The authors of this article — one the dean of a major public university law school responsible for directing its social media strategies, the other a scholar of social media and the First Amendment — have combined their expertise to help public university officials address these challenges. To that end, this article first examines current and …


Incendiary Speech And Social Media, Lyrissa Barnett Lidsky Jan 2012

UF Law Faculty Publications

Incidents illustrating the incendiary capacity of social media have rekindled concerns about the "mismatch" between existing doctrinal categories and new types of dangerous speech. This Essay examines two such incidents, one in which an offensive tweet and YouTube video led a hostile audience to riot and murder, and the other in which a blogger urged his nameless, faceless audience to murder federal judges. One incident resulted in liability for the speaker, even though no violence occurred; the other did not lead to liability for the speaker even though at least thirty people died as a result of his words. An …


Government Sponsored Social Media And Public Forum Doctrine Under The First Amendment: Perils And Pitfalls, Lyrissa Barnett Lidsky Jul 2011

UF Law Faculty Publications

The goal of this article is to provide guidance to lawyers trying to navigate the morass that is the U.S. Supreme Court’s public forum jurisprudence in order to advise government actors wishing to establish social media forums.


Incendiary Speech And Social Media, Lyrissa Lidsky Jan 2011

Faculty Publications

Incidents illustrating the incendiary capacity of social media have rekindled concerns about the "mismatch" between existing doctrinal categories and new types of dangerous speech. This Essay examines two such incidents, one in which an offensive tweet and YouTube video led a hostile audience to riot and murder, and the other in which a blogger urged his nameless, faceless audience to murder federal judges. One incident resulted in liability for the speaker even though no violence occurred; the other did not lead to liability for the speaker even though at least thirty people died as a result of his words. An …