Full-Text Articles in Law
Aggregating Moral Preferences, Matthew D. Adler
Faculty Scholarship
Preference-aggregation problems arise in various contexts. One such context, little explored by social choice theorists, is metaethical. “Ideal-advisor” accounts, which have played a major role in metaethics, propose that moral facts are constituted by the idealized preferences of a community of advisors. Such accounts give rise to a preference-aggregation problem: namely, aggregating the advisors’ moral preferences. Do we have reason to believe that the advisors, albeit idealized, can still diverge in their rankings of a given set of alternatives? If so, what are the moral facts (in particular, the comparative moral goodness of the alternatives) when the advisors do diverge? …
Marriage On The Ballot: An Analysis Of Same-Sex Marriage Referendums In North Carolina, Minnesota, And Washington During The 2012 Elections, Craig M. Burnett, Mathew D. McCubbins
Faculty Scholarship
No abstract provided.
Nashbots: How Political Scientists Have Underestimated Human Rationality, And How To Fix It, Daniel Enemark, Mathew D. McCubbins, Mark Turner
Faculty Scholarship
Political scientists use experiments to test the predictions of game-theoretic models. In a typical experiment, each subject makes choices that determine her own earnings and the earnings of other subjects, with payments corresponding to the utility payoffs of a theoretical game. But social preferences distort the correspondence between a subject’s cash earnings and her subjective utility, and because social preferences vary, anonymously matched subjects cannot know their opponents’ preferences over outcomes, turning many laboratory tasks into games of incomplete information. We reduce the distortion of social preferences by pitting subjects against algorithmic agents (“Nashbots”). Across 11 experimental tasks, subjects facing …