Open Access. Powered by Scholars. Published by Universities.®

Law Commons · Civil Rights and Discrimination · Algorithms
Washington University in St. Louis

Full-Text Articles in Law: Articles 1–5 of 5

AI and Inequality, Pauline Kim, Jan 2021

Scholarship@WashULaw

This Chapter examines the social consequences of artificial intelligence (AI) when it is used to make predictions about people in contexts like employment, housing, and criminal law enforcement. Observers have noted the potential for erroneous or arbitrary decisions about individuals; however, the growing use of predictive AI also threatens broader social harms. In particular, these technologies risk increasing inequality by reproducing or exacerbating the marginalization of historically disadvantaged groups, and by reinforcing power hierarchies that contribute to economic inequality. Using the employment context as the primary example, this Chapter explains how AI-powered tools that are used to recruit, hire and …


Manipulating Opportunity, Pauline Kim, Jan 2020

Scholarship@WashULaw

Concerns about online manipulation have centered on fears about undermining the autonomy of consumers and citizens. What has been overlooked is the risk that the same techniques of personalizing information online can also threaten equality. When predictive algorithms are used to allocate information about opportunities like employment, housing, and credit, they can reproduce past patterns of discrimination and exclusion in these markets. This Article explores these issues by focusing on the labor market, which is increasingly dominated by tech intermediaries. These platforms rely on predictive algorithms to distribute information about job openings, match job seekers with hiring firms, or recruit …


Data Mining and the Challenges of Protecting Employee Privacy Under U.S. Law, Pauline Kim, Jan 2019

Scholarship@WashULaw

Concerns about employee privacy have intensified with the introduction of data mining tools in the workplace. Employers can now readily access detailed data about workers’ online behavior or social media activities, purchase background information from data brokers, and collect additional data from workplace surveillance tools. When data mining techniques are applied to this wealth of data, it is possible to infer additional information about employees beyond the information that is collected directly. As a consequence, these tools can alter the meaning and significance of personal information depending upon what other information it is aggregated with and how the larger dataset …


Data-Driven Discrimination at Work, Pauline Kim, Jan 2017

Scholarship@WashULaw

A data revolution is transforming the workplace. Employers are increasingly relying on algorithms to decide who gets interviewed, hired, or promoted. Although data algorithms can help to avoid biased human decision-making, they also risk introducing new sources of bias. Algorithms built on inaccurate, biased, or unrepresentative data can produce outcomes biased along lines of race, sex, or other protected characteristics. Data mining techniques may cause employment decisions to be based on correlations rather than causal relationships; they may obscure the basis on which employment decisions are made; and they may further exacerbate inequality because error detection is limited and feedback …


Auditing Algorithms for Discrimination, Pauline Kim, Jan 2017

Scholarship@WashULaw

This Essay responds to the argument by Joshua Kroll et al. in Accountable Algorithms, 165 U. Pa. L. Rev. 633 (2017), that technical tools can be more effective in ensuring the fairness of algorithms than insisting on transparency. When it comes to combating discrimination, technical tools alone will not be able to prevent discriminatory outcomes. Because the causes of bias often lie not in the code but in broader social processes, techniques like randomization or predefining constraints on the decision process cannot guarantee the absence of bias. Even the most carefully designed systems may inadvertently encode preexisting prejudices or reflect structural bias. For this …