Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons


Bias


Articles 1 - 30 of 135

Full-Text Articles in Physical Sciences and Mathematics

Integrating Climatological-Hydrodynamic Modeling And Paleohurricane Records To Assess Storm Surge Risk, Amirhosein Begmohammadi, Christine Y. Blackshaw, Ning Lin, Avantika Gori, Elizabeth Wallace, Kerry Emanuel, Jeffrey P. Donnelly Jan 2024

Integrating Climatological-Hydrodynamic Modeling And Paleohurricane Records To Assess Storm Surge Risk, Amirhosein Begmohammadi, Christine Y. Blackshaw, Ning Lin, Avantika Gori, Elizabeth Wallace, Kerry Emanuel, Jeffrey P. Donnelly

OES Faculty Publications

Sediment cores from blue holes have emerged as a promising tool for extending the record of long-term tropical cyclone (TC) activity. However, interpreting this archive is challenging because storm surge depends on many parameters including TC intensity, track, and size. In this study, we use climatological-hydrodynamic modeling to interpret paleohurricane sediment records between 1851 and 2016 and assess the storm surge risk for Long Island in The Bahamas. As the historical TC data from 1988 to 2016 is too limited to estimate the surge risk for this area, we use historical event attribution in paleorecords paired with synthetic storm modeling …


Towards Algorithmic Justice: Human Centered Approaches To Artificial Intelligence Design To Support Fairness And Mitigate Bias In The Financial Services Sector, Jihyun Kim Jan 2024

Towards Algorithmic Justice: Human Centered Approaches To Artificial Intelligence Design To Support Fairness And Mitigate Bias In The Financial Services Sector, Jihyun Kim

CMC Senior Theses

Artificial Intelligence (AI) has positively transformed the financial services sector but has also introduced AI biases against protected groups, amplifying existing prejudices against marginalized communities. The financial decisions made by biased algorithms could cause life-changing ramifications in applications such as lending and credit scoring. Human Centered AI (HCAI) is an emerging concept in which AI systems seek to augment, not replace, human abilities while preserving human control to ensure transparency, equity, and privacy. The evolving field of HCAI shares common ground with, and can be enhanced by, Human Centered Design principles in that they both put humans, the user, at …


Outsourcing Voting To Ai: Can Chatgpt Advise Index Funds On Proxy Voting Decisions?, Chen Wang Dec 2023

Outsourcing Voting To Ai: Can Chatgpt Advise Index Funds On Proxy Voting Decisions?, Chen Wang

Fordham Journal of Corporate & Financial Law

Released in November 2022, Chat Generative Pre-training Transformer (“ChatGPT”) has risen rapidly to prominence, and its versatile capabilities have already been shown in a variety of fields. Due to ChatGPT’s advanced features, such as extensive pre-training on diverse data, strong generalization ability, fine-tuning capabilities, and improved reasoning, the use of AI in the legal industry could experience a significant transformation. Since small passive funds with low-cost business models generally lack the financial resources to make informed proxy voting decisions that align with their shareholders’ interests, this Article considers the use of ChatGPT to assist small investment funds, particularly small passive …


Perception Of Bias In Chatgpt: Analysis Of Social Media Data, Abdullah Wahbeh, Mohammad A. Al-Ramahi, Omar El-Gayar, Ahmed El Noshokaty, Tareq Nasralah Dec 2023

Perception Of Bias In Chatgpt: Analysis Of Social Media Data, Abdullah Wahbeh, Mohammad A. Al-Ramahi, Omar El-Gayar, Ahmed El Noshokaty, Tareq Nasralah

Computer Information Systems Faculty Publications

In this study, we aim to analyze the public perception of Twitter users with respect to the use of ChatGPT and the potential bias in its responses. Sentiment and emotion analyses were also conducted. Analysis of 5,962 English tweets showed that Twitter users were concerned about six main types of biases, namely: political, ideological, data & algorithmic, gender, racial, cultural, and confirmation biases. Sentiment analysis showed that most of the users reflected a neutral sentiment, followed by negative and positive sentiment. Emotion analysis mainly reflected anger, disgust, and sadness with respect to bias concerns with ChatGPT use.
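As a rough illustration of the kind of tweet-level sentiment scoring the study describes, here is a minimal sketch assuming NLTK's VADER analyzer; the authors' actual sentiment and emotion pipeline is not specified in this excerpt, and the example tweets are invented.

```python
# Minimal tweet-level sentiment scoring sketch using NLTK's VADER analyzer.
# This is an assumption for illustration, not the study's actual pipeline.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

tweets = [                                   # invented examples
    "ChatGPT keeps giving politically slanted answers, this is worrying.",
    "Honestly impressed by how balanced ChatGPT's responses are.",
]

for text in tweets:
    compound = analyzer.polarity_scores(text)["compound"]   # score in [-1, 1]
    label = ("positive" if compound >= 0.05
             else "negative" if compound <= -0.05
             else "neutral")
    print(f"{label:8s} {compound:+.2f}  {text}")
```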


Insights Into The Application Of Deep Reinforcement Learning In Healthcare And Materials Science, Benjamin R. Smith Aug 2023

Insights Into The Application Of Deep Reinforcement Learning In Healthcare And Materials Science, Benjamin R. Smith

Doctoral Dissertations

Reinforcement learning (RL) is a type of machine learning designed to optimize sequential decision-making. While controlled environments have served as a foundation for RL research, due to the growth in data volumes and deep learning methods, it is now increasingly being applied to real-world problems. In our work, we explore and attempt to overcome challenges that occur when applying RL to solve problems in healthcare and materials science.

First, we explore how issues in bias and data completeness affect healthcare applications of RL. To understand how bias has already been considered in this area, we survey the literature for existing …
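For readers new to RL, the sketch below shows tabular Q-learning on a toy chain environment; it is a generic illustration of sequential decision-making from reward feedback, not the methods developed in the dissertation.

```python
# Generic tabular Q-learning on a 5-state chain: move right to reach the goal state.
# Illustrates how RL optimizes sequential decisions; not the dissertation's method.
import random

n_states, n_actions = 5, 2                      # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1          # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    done = nxt == n_states - 1                  # goal reached
    return nxt, (1.0 if done else 0.0), done

for _ in range(2000):                           # training episodes
    s, done = 0, False
    while not done:
        if random.random() < epsilon:           # epsilon-greedy exploration
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])   # temporal-difference update
        s = s2

print([max(range(n_actions), key=lambda i: Q[s][i]) for s in range(n_states)])  # learned greedy policy
```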


Unmasking Bias: Investigating Strategies For Minimizing Discrimination In Ai Models, Julia L. Martin May 2023

Unmasking Bias: Investigating Strategies For Minimizing Discrimination In Ai Models, Julia L. Martin

Computer Science Senior Theses

Artificial Intelligence (AI) models are increasingly used as predictive tools with real-world applications occurring in diverse fields ranging from the healthcare industry to the criminal justice system. While AI often offers efficient and relatively effective solutions, there are growing concerns regarding AI’s role in decision-making processes due to potential biases embedded in these models. In many cases, bias in AI models can produce unfair outcomes, perpetuate social inequities, and undermine the trustworthiness of AI systems. This thesis explores this problem and spotlights certain biased models that are currently utilized in real-world situations. One such example is a highly biased AI …


Architectural Design Of A Blockchain-Enabled, Federated Learning Platform For Algorithmic Fairness In Predictive Health Care: Design Science Study, Xueping Liang, Juan Zhao, Yan Chen, Eranga Bandara, Sachin Shetty Jan 2023

Architectural Design Of A Blockchain-Enabled, Federated Learning Platform For Algorithmic Fairness In Predictive Health Care: Design Science Study, Xueping Liang, Juan Zhao, Yan Chen, Eranga Bandara, Sachin Shetty

VMASC Publications

Background: Developing effective and generalizable predictive models is critical for disease prediction and clinical decision-making, often requiring diverse samples to mitigate population bias and address algorithmic fairness. However, a major challenge is to retrieve learning models across multiple institutions without bringing in local biases and inequity, while preserving individual patients' privacy at each site.

Objective: This study aims to understand the issues of bias and fairness in the machine learning process used in the predictive health care domain. We proposed a software architecture that integrates federated learning and blockchain to improve fairness, while maintaining acceptable prediction accuracy and minimizing overhead …
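To make the federated-learning half of the architecture concrete, here is a minimal federated-averaging (FedAvg) sketch with synthetic data; it illustrates how model weights, not patient records, are shared across sites, and is not the authors' blockchain-integrated platform.

```python
# Minimal FedAvg sketch: each site trains a logistic-regression model locally and only
# the weights are aggregated, so raw patient data never leaves the institution.
# Synthetic data and plain NumPy are used for illustration.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's local training via gradient descent on the logistic loss."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(site_weights, site_sizes):
    """Aggregate local models, weighting each site by its sample count."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 3)), rng.integers(0, 2, 100).astype(float)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(5):                                   # communication rounds
    local_models = [local_update(global_w, X, y) for X, y in sites]
    global_w = fed_avg(local_models, [len(y) for _, y in sites])
print("global model weights:", global_w)
```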


Utilizing Markov Chains To Estimate Allele Progression Through Generations, Ronit Gandhi Jan 2023

Utilizing Markov Chains To Estimate Allele Progression Through Generations, Ronit Gandhi

Honors Theses

All populations display patterns in allele frequencies over time. Some alleles cease to exist, while some grow to become the norm. These frequencies can shift or stay constant based on the conditions the population lives in. If in Hardy-Weinberg equilibrium, the allele frequencies stay constant. Most populations, however, have bias from environmental factors, sexual preferences, other organisms, etc. We propose a stochastic Markov chain model to study allele progression across generations. In such a model, the allele frequencies in the next generation depend only on the frequencies in the current one.

We use this model to track a recessive allele …
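For context, the standard Wright–Fisher formulation of such a chain is shown below; it is a common textbook model (the thesis' specific transition probabilities may differ) and makes explicit that the next generation depends only on the current allele frequency.

```latex
% Wright--Fisher Markov chain for the number X_t of copies of allele A among the 2N gene
% copies of a diploid population of size N (a textbook formulation; the thesis' chain may differ).
P\bigl(X_{t+1} = j \mid X_t = i\bigr)
  = \binom{2N}{j}\left(\frac{i}{2N}\right)^{j}\left(1-\frac{i}{2N}\right)^{2N-j},
  \qquad j = 0, 1, \dots, 2N .
% Under Hardy--Weinberg assumptions (no drift, selection, mutation, or migration) the allele
% frequency p stays constant and genotypes occur in proportions p^2, 2pq, q^2 with q = 1 - p.
```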


Algorithmic Bias Automation: The Effects Of Proxy On Machine-Learned Systems, Emely J. Galeano Jan 2023

Algorithmic Bias Automation: The Effects Of Proxy On Machine-Learned Systems, Emely J. Galeano

Senior Projects Spring 2023

Senior Project submitted to The Division of Science, Mathematics and Computing of Bard College.


The Basil Technique: Bias Adaptive Statistical Inference Learning Agents For Learning From Human Feedback, Jonathan Indigo Watson Jan 2023

The Basil Technique: Bias Adaptive Statistical Inference Learning Agents For Learning From Human Feedback, Jonathan Indigo Watson

Theses and Dissertations--Computer Science

We introduce a novel approach for learning behaviors using human-provided feedback that is subject to systematic bias. Our method, known as BASIL, models the feedback signal as a combination of a heuristic evaluation of an action's utility and a probabilistically-drawn bias value, characterized by unknown parameters. We present both the general framework for our technique and specific algorithms for biases drawn from a normal distribution. We evaluate our approach across various environments and tasks, comparing it to interactive and non-interactive machine learning methods, including deep learning techniques, using human trainers and a synthetic oracle with feedback distorted to varying degrees. …
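The feedback model the abstract describes can be sketched directly: observed feedback is a heuristic utility plus a bias drawn from a normal distribution with unknown parameters, which the learner can estimate from repeated observations. The parameter values and probe setup below are invented for illustration; this is not the BASIL algorithm itself.

```python
# Sketch of feedback = heuristic utility + Normal(mu, sigma) bias, as the abstract
# describes. Estimating (mu, sigma) from feedback on known-utility probes illustrates
# the idea; it is not the BASIL algorithm.
import numpy as np

rng = np.random.default_rng(42)
mu_true, sigma_true = 1.5, 0.5        # unknown trainer bias parameters (invented for the demo)

def trainer_feedback(utility):
    """Human-style feedback: true utility distorted by a systematic, noisy bias."""
    return utility + rng.normal(mu_true, sigma_true)

# Probe actions whose heuristic utilities the agent already knows.
known_utilities = np.array([0.0, 0.2, 0.5, 0.8, 1.0] * 20)
observed = np.array([trainer_feedback(u) for u in known_utilities])

residuals = observed - known_utilities
mu_hat, sigma_hat = residuals.mean(), residuals.std(ddof=1)
print(f"estimated bias: mu={mu_hat:.2f}, sigma={sigma_hat:.2f}")

# Debiased estimate of a new action's utility from its (biased) feedback.
new_feedback = trainer_feedback(0.6)
print(f"debiased utility estimate: {new_feedback - mu_hat:.2f}")
```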


Bias Detector Tool For Face Datasets Using Image Recognition, Jatin Vamshi Battu Jan 2023

Bias Detector Tool For Face Datasets Using Image Recognition, Jatin Vamshi Battu

Master's Projects

Computer Vision has been quickly transforming the way we live and work. One of its sub-domains, i.e., Facial Recognition has also been advancing at a rapid pace. However, the development of machine learning models that power these systems has been marred by social biases, which open the door to various societal issues. The objective of this project is to address these issues and ensure that computer vision systems are unbiased and fair to all individuals. To achieve this, we have created a web tool that uses three image classifiers (implemented using CNNs) to classify images into categories based on …
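One plausible shape for the image classifiers behind such a tool is a small convolutional network; the sketch below (Keras, with an assumed 64x64 input and a binary attribute label) shows how classifier predictions can be tallied into a simple imbalance report. It is an illustration, not the project's actual three classifiers.

```python
# Small Keras CNN that classifies face crops into demographic categories, then tallies
# predictions to surface dataset imbalance. Input size, labels, and random stand-in data
# are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),    # e.g. one binary attribute classifier
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# With a trained model, a bias report is just the distribution of predicted categories:
faces = np.random.rand(100, 64, 64, 3).astype("float32")     # stand-in for a face dataset
preds = model.predict(faces, verbose=0).argmax(axis=1)
counts = np.bincount(preds, minlength=2)
print("category proportions:", counts / counts.sum())        # heavy skew suggests dataset bias
```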


Human-Centred Artificial Intelligence In The Banking Sector, Krishnaraj Arul Obuchettiar, Alan @ Ali Madjelisi Megargel Jan 2023

Human-Centred Artificial Intelligence In The Banking Sector, Krishnaraj Arul Obuchettiar, Alan @ Ali Madjelisi Megargel

Research Collection School Of Computing and Information Systems

Changes in technology have shaped how corporate and retail businesses have evolved, alongside the customers’ preferences. The advent of smart digital devices and social media has shaped how consumers interact and transact with their financial institutions over the past two decades. With the rapid evolution of new technologies and customers' growing preference for digital engagement with financial institutions, organizations need to adopt and align with emerging technologies that support speed, accuracy, efficiency, and security in a user-friendly manner. Today, consumers want hyper-personalized interactions that are more frequent and proactive. Moreover, financial institutions have a growing need to cater to consumers' …


Biasfinder: Metamorphic Test Generation To Uncover Bias For Sentiment Analysis Systems, Muhammad Hilmi Asyrofi, Zhou Yang, Imam Nur Bani Yusuf, Hong Jin Kang, Thung Ferdian, David Lo Dec 2022

Biasfinder: Metamorphic Test Generation To Uncover Bias For Sentiment Analysis Systems, Muhammad Hilmi Asyrofi, Zhou Yang, Imam Nur Bani Yusuf, Hong Jin Kang, Thung Ferdian, David Lo

Research Collection School Of Computing and Information Systems

Artificial intelligence systems, such as Sentiment Analysis (SA) systems, typically learn from large amounts of data that may reflect human bias. Consequently, such systems may exhibit unintended demographic bias against specific characteristics (e.g., gender, occupation, country-of-origin, etc.). Such bias manifests in an SA system when it predicts different sentiments for similar texts that differ only in the characteristic of individuals described. To automatically uncover bias in SA systems, this paper presents BiasFinder, an approach that can discover biased predictions in SA systems via metamorphic testing. A key feature of BiasFinder is the automatic curation of suitable templates from any given …
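The core metamorphic check can be sketched in a few lines: generate mutant texts that differ only in a protected characteristic and flag any divergence in the SA system's predictions. The template and the deliberately biased toy classifier below are illustrative assumptions; BiasFinder itself curates templates automatically from real text.

```python
# Metamorphic bias test sketch for a sentiment analysis (SA) system: the two mutants
# differ only in a gendered pronoun, so any change in the predicted label signals
# potential bias. The toy classifier is deliberately biased for demonstration.

def toy_sentiment(text: str) -> str:
    """Stand-in for the SA system under test (intentionally gender-biased)."""
    score = text.count("great") - text.count("bad") - 1.5 * text.count("she")
    return "positive" if score > 0 else "negative"

template = "{pronoun} said the movie was great."
mutants = {gender: template.format(pronoun=pronoun)
           for gender, pronoun in [("male", "He"), ("female", "She")]}

predictions = {gender: toy_sentiment(text.lower()) for gender, text in mutants.items()}
if len(set(predictions.values())) > 1:
    print("Potential bias uncovered:", predictions)   # labels differ across mutants
else:
    print("Predictions consistent:", predictions)
```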


Examining Bias In Jury Selection For Criminal Trials In Dallas County, Megan Ball, Brandon Birmingham, Matt Farrow, Katherine Mitchell, Bivin Sadler, Lynne Stokes Sep 2022

Examining Bias In Jury Selection For Criminal Trials In Dallas County, Megan Ball, Brandon Birmingham, Matt Farrow, Katherine Mitchell, Bivin Sadler, Lynne Stokes

SMU Data Science Review

One of the hallmarks of the American judicial system is the concept of trial by jury, and the expectation that such a trial be decided by an impartial jury of one's peers. Several landmark legal cases in the history of the United States have challenged this notion of equal representation by jury, most notably Batson v. Kentucky, 476 U.S. 79 (1986). Most of the previous research, focus, and legal precedent has centered on peremptory challenges and attempts to prove whether bias played a role in excluding certain jurors from serving. Few studies, however, focus on examining challenges for cause based on self-reported biases from the …


“Be A Pattern For The World”: The Development Of A Dark Patterns Detection Tool To Prevent Online User Loss, Jordan Donnelly, Alan Downley, Yunpeng Liu, Yufei Su, Quanwei Sun, Lan Zeng, Andrea Curley, Damian Gordon, Paul Kelly, Dympna O'Sullivan, Anna Becevel Sep 2022

“Be A Pattern For The World”: The Development Of A Dark Patterns Detection Tool To Prevent Online User Loss, Jordan Donnelly, Alan Downley, Yunpeng Liu, Yufei Su, Quanwei Sun, Lan Zeng, Andrea Curley, Damian Gordon, Paul Kelly, Dympna O'Sullivan, Anna Becevel

Articles

Dark Patterns are designed to trick users into sharing more information or spending more money than they intended, by configuring online interactions to confuse or pressure users. They are highly varied in form and are therefore difficult to classify and detect. This research is therefore designed to develop a framework for the automated detection of potential instances of web-based dark patterns, and, from there, a software tool that serves as a practical defence by detecting and highlighting these patterns.


Beyond Accuracy In Machine Learning., Aneseh Alvanpour May 2022

Beyond Accuracy In Machine Learning., Aneseh Alvanpour

Electronic Theses and Dissertations

Machine Learning (ML) algorithms are widely used in our daily lives. The need to increase the accuracy of ML models has led to building increasingly powerful and complex algorithms, known as black-box models, which do not provide any explanations about the reasons behind their output. On the other hand, there are white-box ML models, which are inherently interpretable but have lower accuracy than black-box models. For an algorithmic decision system to be productive and practical, precise predictions may not be sufficient. The system may need to be transparent and able to provide explanations, especially in applications with safety-critical contexts …


New Debiasing Strategies In Collaborative Filtering Recommender Systems: Modeling User Conformity, Multiple Biases, And Causality., Mariem Boujelbene May 2022

New Debiasing Strategies In Collaborative Filtering Recommender Systems: Modeling User Conformity, Multiple Biases, And Causality., Mariem Boujelbene

Electronic Theses and Dissertations

Recommender Systems are widely used to personalize the user experience in a diverse set of online applications ranging from e-commerce and education to social media and online entertainment. These state-of-the-art AI systems can suffer from several biases that may occur at different stages of the recommendation life-cycle. For instance, using biased data to train recommendation models may lead to several issues, such as a discrepancy between online and offline evaluation, degraded recommendation performance, and a poorer user experience. Bias can occur during the data collection stage, where the data inherits the user-item interaction biases, such as …


An Interactive Game With Virtual Reality Immersion To Improve Cultural Sensitivity In Healthcare, Paul J. Hershberger, Yong Pei, Timothy N. Crawford, Sabrina M. Neeley, Thomas Wischgoll, Dixit B. Patel, Miteshkumar M. Vasoya, Angie Castle, Sankalp Mishra, Lahari Surapaneni, Aman A. Pogaku, Aishwarya Bositty, Todd Pavlack Mar 2022

An Interactive Game With Virtual Reality Immersion To Improve Cultural Sensitivity In Healthcare, Paul J. Hershberger, Yong Pei, Timothy N. Crawford, Sabrina M. Neeley, Thomas Wischgoll, Dixit B. Patel, Miteshkumar M. Vasoya, Angie Castle, Sankalp Mishra, Lahari Surapaneni, Aman A. Pogaku, Aishwarya Bositty, Todd Pavlack

Computer Science and Engineering Faculty Publications

Purpose: Biased perceptions of individuals who are not part of one’s in-groups tend to be negative and habitual. Because health care professionals are no less susceptible to biases than are others, the adverse impact of biases on marginalized populations in health care warrants continued attention and amelioration. Method: Two characters, a Syrian refugee with limited English proficiency and a black pregnant woman with a history of opioid use disorder, were developed for an online training simulation that includes an interactive life course experience focused on social determinants of health, and a clinical encounter in a community health center utilizing virtual …


On The Influence Of Biases In Bug Localization: Evaluation And Benchmark, Ratnadira Widyasari, Stefanus Agus Haryono, Ferdian Thung, Jieke Shi, Constance Tan, Fiona Wee, Jack Phan, David Lo Mar 2022

On The Influence Of Biases In Bug Localization: Evaluation And Benchmark, Ratnadira Widyasari, Stefanus Agus Haryono, Ferdian Thung, Jieke Shi, Constance Tan, Fiona Wee, Jack Phan, David Lo

Research Collection School Of Computing and Information Systems

Bug localization is the task of identifying parts of the source code that need to be changed to resolve a bug report. As this task is difficult, automatic bug localization tools have been proposed. The development and evaluation of these tools rely on the availability of high-quality bug report datasets. In 2014, Kochhar et al. identified three biases in datasets used to evaluate bug localization techniques: (1) misclassified bug report, (2) already localized bug report, and (3) incorrect ground truth file in a bug report. They reported that already localized bug reports statistically significantly and substantially impact bug localization results, and thus should be removed. However, their evaluation is still limited, …


Examining Bias Against Women In Professional Settings Through Bifurcation Theory, Lauren Cashdan Jan 2022

Examining Bias Against Women In Professional Settings Through Bifurcation Theory, Lauren Cashdan

CMC Senior Theses

When it comes to women in professional hierarchies, it is important to recognize the lack of representation at the higher levels. By modeling these situations, we hope to draw attention to the issues currently plaguing professional atmospheres. In a paper by Clifton et al. (2019), the authors model the fraction of women at any level in a professional hierarchy using the parameters of hiring gender bias and applicants' internal homophily. This thesis focuses on a key technique in Clifton et al.'s analysis, bifurcation analysis, and explains its role in the model. In order to analyze …
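As a generic reminder of what bifurcation analysis studies (the textbook pitchfork normal form below, not Clifton et al.'s hierarchy model), the number and stability of equilibria change as a parameter crosses a critical value:

```latex
% Pitchfork normal form (generic illustration only):
\dot{x} = r x - x^{3},
\qquad x^{*} = 0 \ \text{for all } r,
\qquad x^{*} = \pm\sqrt{r} \ \text{for } r > 0 .
% For r < 0 the equilibrium x^* = 0 is the only one and is stable; at r = 0 it loses
% stability and two new stable equilibria \pm\sqrt{r} appear. Bifurcation analysis of the
% hierarchy model asks the analogous question: at which values of the bias and homophily
% parameters does the long-run fraction of women change qualitatively?
```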


Bias Impedes Progress In Physical Biology, Consciousness Studies And Quantum Gravity, Maurice Goodman Jan 2022

Bias Impedes Progress In Physical Biology, Consciousness Studies And Quantum Gravity, Maurice Goodman

Articles

If scientists hope to make progress in consciousness studies, they need to accept that biased judgments have a major influence on the sciences: how we divide them up and how they are funded, which in turn has a profound impact on progress. The imbalance in funding, resulting from bias, in favour of the life and health sciences needs to be addressed, as does the question of why perversely little of this funding is devoted to a physics explanation of self-organisation and life on the mesoscopic scale. While life (the cell) is an outstanding example of self-organisation on the mesoscopic scale, we need to …


Robophobia, Andrew Keane Woods Jan 2022

Robophobia, Andrew Keane Woods

University of Colorado Law Review

Robots (machines, algorithms, artificial intelligence) play an increasingly important role in society, often supplementing or even replacing human judgment. Scholars have rightly become concerned with the fairness, accuracy, and humanity of these systems. Indeed, anxiety about machine bias is at a fever pitch. While these concerns are important, they nearly all run in one direction: we worry about robot bias against humans; we rarely worry about human bias against robots.

This is a mistake. Not because robots deserve, in some deontological sense, to be treated fairly (although that may be true), but because our bias against nonhuman deciders is bad for us. For example, …


Fair And Diverse Group Formation Based On Multidimensional Features, Mohammed Saad A Alqahtani Dec 2021

Fair And Diverse Group Formation Based On Multidimensional Features, Mohammed Saad A Alqahtani

Graduate Theses and Dissertations

The goal of group formation is to build a team to accomplish a specific task. Algorithms are being developed to improve both the effectiveness of the teams so formed and the efficiency of the group selection process. However, there is concern that team formation algorithms could be biased against minorities due to the algorithms themselves or the data on which they are trained. Hence, it is essential to build fair team formation systems that incorporate demographic information into the process of building the group. Although there has been extensive work on modeling individuals’ expertise for expert recommendation and/or team formation, there has been …


Generalized Ratio-Cum-Product Estimator For Finite Population Mean Under Two-Phase Sampling Scheme, Gajendra Kumar Vishwakarma, Sayed Mohammed Zeeshan Jun 2021

Generalized Ratio-Cum-Product Estimator For Finite Population Mean Under Two-Phase Sampling Scheme, Gajendra Kumar Vishwakarma, Sayed Mohammed Zeeshan

Journal of Modern Applied Statistical Methods

A method is developed to lower the MSE of a proposed estimator relative to the MSE of the linear regression estimator under a two-phase sampling scheme. Estimators are developed to estimate the mean of the variate under study with the help of an auxiliary variate (which is unknown but can be accessed conveniently and economically). The mean square error equations are obtained for the proposed estimators. In addition, optimal sample sizes are obtained under the given cost function. A comparison study establishes the conditions under which the developed estimators are more effective than other estimators. The …
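For orientation, the classical ratio-cum-product estimator under two-phase (double) sampling takes the form below; this is the standard textbook version in the usual notation, not the generalized estimator proposed in the article.

```latex
% Classical ratio-cum-product estimator of the population mean \bar{Y} under two-phase
% sampling, where x is positively and z negatively correlated with the study variate y;
% \bar{x}', \bar{z}' are first-phase (larger sample) means and \bar{y}, \bar{x}, \bar{z}
% are second-phase means.
\hat{\bar{Y}}_{RP} = \bar{y}\,\frac{\bar{x}'}{\bar{x}}\cdot\frac{\bar{z}}{\bar{z}'} .
% Its bias and mean square error follow, to first order, from a Taylor expansion in the
% relative errors e_0 = (\bar{y}-\bar{Y})/\bar{Y}, e_1 = (\bar{x}-\bar{X})/\bar{X}, \ldots,
% the same device used to compare such estimators with the linear regression estimator.
```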


Sociolinguistically Driven Approaches For Just Natural Language Processing, Su Lin Blodgett Apr 2021

Sociolinguistically Driven Approaches For Just Natural Language Processing, Su Lin Blodgett

Doctoral Dissertations

Natural language processing (NLP) systems are now ubiquitous. Yet the benefits of these language technologies do not accrue evenly to all users, and indeed they can be harmful; NLP systems reproduce stereotypes, prevent speakers of non-standard language varieties from participating fully in public discourse, and re-inscribe historical patterns of linguistic stigmatization and discrimination. How harms arise in NLP systems, and who is harmed by them, can only be understood at the intersection of work on NLP, fairness and justice in machine learning, and the relationships between language and social justice. In this thesis, we propose to address two questions at …


Powered By Ai, Christopher J. Smiley Apr 2021

Powered By Ai, Christopher J. Smiley

The Journal of the Michigan Dental Association

Artificial Intelligence (AI) is revolutionizing dental practice through its ability to process vast amounts of data, enhance diagnosis, and improve patient care. However, AI introduces the challenge of bias and ethical considerations. Dentists and dental benefit providers are utilizing AI for early disease detection and efficient data management, but transparency and fairness in AI algorithms are vital. The Rome Call for AI Ethics emphasizes ethical, non-biased AI development. In the broader context, AI-driven marketing and predictive behavior raise concerns about privacy and ethical data use. The dental community must embrace AI's power while upholding ethical standards and transparency.


Administrative Law In The Automated State, Cary Coglianese Jan 2021

Administrative Law In The Automated State, Cary Coglianese

All Faculty Scholarship

In the future, administrative agencies will rely increasingly on digital automation powered by machine learning algorithms. Can U.S. administrative law accommodate such a future? Not only might a highly automated state readily meet longstanding administrative law principles, but the responsible use of machine learning algorithms might perform even better than the status quo in terms of fulfilling administrative law’s core values of expert decision-making and democratic accountability. Algorithmic governance clearly promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, algorithms might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were …


Bias Of Rank Correlation Under A Mixture Model, Russell Land Jan 2021

Bias Of Rank Correlation Under A Mixture Model, Russell Land

Electronic Theses and Dissertations

This thesis project will analyze the bias in mixture models when contaminated data is present. Specifically, we will analyze the relationship between the bias and the mixing proportion, p, for the rank correlation methods Spearman’s Rho and Kendall’s Tau. We will first look at the history of the two non-parametric rank correlation methods, and the sample and population definitions will be introduced. Copulas will be introduced to show a few ways we can define these correlation methods. After that, mixture models will be defined and the main theorem will be stated and proved. As an example, we will apply this …
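The copula-based definitions and the contamination setup referred to here can be stated briefly; these are standard population-level formulas, and the reading of "bias" in the closing comment is an assumption for illustration rather than the thesis' exact statement.

```latex
% Population versions of the two rank correlations in terms of the copula C of (X, Y):
\tau   = 4 \int_{[0,1]^2} C(u,v)\, dC(u,v) - 1,
\qquad
\rho_S = 12 \int_{[0,1]^2} C(u,v)\, du\, dv - 3 .
% Contaminated-data (two-component mixture) model with mixing proportion p:
H(x,y) = (1-p)\, F(x,y) + p\, G(x,y),
% where F is the clean component and G the contamination. One natural formalization of
% the bias is the difference between the rank correlation computed under H and the one
% computed under F, examined as a function of p.
```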


Law Library Blog (November 2020): Legal Beagle's Blog Archive, Roger Williams University School Of Law Nov 2020

Law Library Blog (November 2020): Legal Beagle's Blog Archive, Roger Williams University School Of Law

Law Library Newsletters/Blog

No abstract provided.


The Efficiency And Accuracy Of Yolo For Neonate Face Detection In The Clinical Setting, Jacqueline Hausmann Oct 2020

The Efficiency And Accuracy Of Yolo For Neonate Face Detection In The Clinical Setting, Jacqueline Hausmann

USF Tampa Graduate Theses and Dissertations

There are many face detection classification models available for download and use in the modern technological world. Rooted in deep neural networks, these off-the-shelf solutions are generally inadequate for real-world challenges. This work shows how current approaches, biased toward detecting adult human faces, must be modified to better accommodate face detection of the neonate in a NICU setting.

YOLO is a powerful object detection algorithm. Due to optimizations such as Cross mini-Batch Normalization, Modified Spatial Attention Modules, Modified Path Aggregation Networks, Self-Adversarial Training, Mosaic Data Augmentation, DropBlock Regularization, Multi-Input Weighted Residual Connections and …
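As a hedged sketch of how a Darknet-style YOLO detector is typically run for a detection task like this, the snippet below uses OpenCV's DNN module; the file names, input size, and threshold are assumptions for illustration, and the thesis' neonate-tuned model and training pipeline are not reproduced here.

```python
# Running a pretrained Darknet-style YOLO detector with OpenCV's DNN module.
# File names, the 416x416 input size, and the 0.5 threshold are illustrative assumptions;
# this is not the thesis' neonate-tuned model.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")   # hypothetical files
out_names = net.getUnconnectedOutLayersNames()

image = cv2.imread("nicu_frame.jpg")                               # hypothetical NICU video frame
h, w = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Each detection row is [cx, cy, bw, bh, objectness, class scores...] in normalized units.
for output in net.forward(out_names):
    for det in output:
        confidence = float(det[4]) * float(det[5:].max())
        if confidence > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            print(f"detection at ({cx:.0f}, {cy:.0f}), size {bw:.0f}x{bh:.0f}, conf {confidence:.2f}")
```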