Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons


Articles 1 - 9 of 9

Full-Text Articles in Engineering

Computer Aided Diagnosis System For Breast Cancer Using Deep Learning., Asma Baccouche Aug 2022


Electronic Theses and Dissertations

The recent rise of big data technologies and their supporting toolkits has opened new promises for Artificial Intelligence (AI). With the continuous use of data-centric systems and machines in our lives, such as social media, surveys, emails, and reports, data has become the center of scientists' attention and has motivated them to build decision-making and operational support systems across multiple domains. With the recent breakthroughs in artificial intelligence, machine learning and deep learning models have achieved remarkable advances in computer vision, e-commerce, cybersecurity, and healthcare. Particularly, numerous …


Modeling Driver Distraction Mechanism And Its Safety Impact In Automated Vehicle Environment., Song Wang Dec 2021


Electronic Theses and Dissertations

Automated Vehicle (AV) technology is expected to enhance driving safety by eliminating human errors. However, driver distraction still exists under automated driving. The Society of Automotive Engineers (SAE) has defined six levels of driving automation, from Level 0 to Level 5. Until Level 5 is achieved, human drivers are still needed, and Human-Vehicle Interaction (HVI) necessarily diverts a driver's attention away from driving. Existing research has mainly focused on quantifying distraction in human-operated vehicles rather than in the AV environment, leaving a gap in knowledge about how AV distraction can be detected, quantified, and understood. Moreover, existing research in exploring AV distraction has mainly …


Impossibility Results In Ai: A Survey, Mario Brcic, Roman Yampolskiy Sep 2021


Faculty Scholarship

An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim. Such theorems put limits on what is possible in artificial intelligence, especially superintelligent AI. As such, these results serve as guidelines, reminders, and warnings for AI safety, AI policy, and governance researchers. They might enable solutions to some long-standing questions by formalizing theories in the framework of constraint satisfaction without committing to one option. In this paper, we have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, …


Designing Ai For Explainability And Verifiability: A Value Sensitive Design Approach To Avoid Artificial Stupidity In Autonomous Vehicles, Steven Umbrello, Roman V. Yampolskiy May 2021


Faculty Scholarship

One of the primary, if not the most critical, difficulties in the design and implementation of autonomous systems is the black-box nature of their decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove harmful, if not outright recalcitrant. For this reason, the values of stakeholders take on particular significance given the risks posed by the opaque structures of intelligent agents. This paper explores how decision matrix algorithms, via the belief-desire-intention model for autonomous vehicles, can be designed to minimize the risks of opaque architectures. Primarily through an explicit orientation towards designing for …


Imparting 3d Representations To Artificial Intelligence For A Full Assessment Of Pressure Injuries., Sofia Zahia Dec 2020


Electronic Theses and Dissertations

In recent decades, researchers have shown great interest in machine learning techniques for extracting meaningful information from the large amounts of data collected each day. In the medical field especially, images play a significant role in the detection of several health issues. Medical image analysis therefore contributes substantially to the diagnostic process and is considered a suitable environment for the technology of intelligent systems. Deep Learning (DL) has recently captured the interest of researchers, as it has proven efficient at detecting underlying features in data and has outperformed classical machine learning …


Chess As A Testing Grounds For The Oracle Approach To Ai Safety, James D. Miller, Roman Yampolskiy, Olle Häggström, Stuart Armstrong Sep 2020


Faculty Scholarship

To reduce the danger of powerful super-intelligent AIs, we might make the first such AIs oracles that can only send and receive messages. This paper proposes a possibly practical means of using machine learning to create two classes of narrow AI oracles that would provide chess advice: those aligned with the player's interest, and those that want the player to lose and give deceptively bad advice. The player would be uncertain which type of oracle it was interacting with. As the oracles would be vastly more intelligent than the player in the domain of chess, experience with these oracles might …
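The two-oracle setup the abstract describes can be illustrated with a deliberately simplified sketch (a hypothetical toy, not the authors' experiment): the player consults an oracle without knowing whether it is the aligned type, which recommends the best move, or the deceptive type, which recommends the worst move while appearing identical.

```python
import random

random.seed(42)

def aligned_oracle(move_values):
    """Aligned oracle: recommends the move with the highest true value."""
    return max(move_values, key=move_values.get)

def deceptive_oracle(move_values):
    """Deceptive oracle: recommends the worst move, mimicking helpful advice."""
    return min(move_values, key=move_values.get)

def consult(move_values, p_aligned=0.5):
    """The player receives advice without knowing which oracle type answered."""
    oracle = aligned_oracle if random.random() < p_aligned else deceptive_oracle
    return oracle(move_values)

# Hypothetical true values of three candidate moves (unknown to the player).
moves = {"e4": 0.6, "d4": 0.55, "h4": 0.1}
advice = [consult(moves) for _ in range(1000)]
```

Under this uncertainty, the player's problem becomes estimating, from repeated play, which advice source to trust, which is what makes chess a tractable testbed for the general oracle-AI safety question.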


Mining Semantic Knowledge Graphs To Add Explainability To Black Box Recommender Systems, Mohammed Alshammari, Olfa Nasraoui, Scott Sanders Aug 2019


Faculty Scholarship

Recommender systems are being increasingly used to predict the preferences of users on online platforms and recommend relevant options that help them cope with information overload. In particular, modern model-based collaborative filtering algorithms, such as latent factor models, are considered state-of-the-art in recommendation systems. Unfortunately, these black box systems lack transparency, as they provide little information about the reasoning behind their predictions. White box systems, in contrast, can, by nature, easily generate explanations. However, their predictions are less accurate than sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of …
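The latent factor models the abstract refers to can be sketched with a minimal matrix factorization example (toy data and hyperparameters are hypothetical, not taken from the paper): each user and each item gets a low-dimensional embedding, a rating is predicted as their dot product, and the embeddings are fit by stochastic gradient descent on the observed ratings only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix: rows = users, columns = items, 0 = unobserved.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                            # number of latent factors
P = rng.normal(scale=0.1, size=(n_users, k))     # user factor matrix
Q = rng.normal(scale=0.1, size=(n_items, k))     # item factor matrix

lr, reg = 0.01, 0.02
observed = [(u, i) for u in range(n_users)
            for i in range(n_items) if R[u, i] > 0]

for _ in range(2000):                            # SGD over observed entries
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]
        p_u = P[u].copy()                        # update with pre-step values
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * p_u - reg * Q[i])

pred = P @ Q.T                                   # dense predictions, including
                                                 # the previously unobserved cells
```

The opacity the abstract points at is visible here: `pred` fills in the missing ratings, but the learned factors in `P` and `Q` carry no human-readable meaning on their own, which is the gap the paper's knowledge-graph explanations aim to close.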


Receptive Fields Optimization In Deep Learning For Enhanced Interpretability, Diversity, And Resource Efficiency., Babajide Odunitan Ayinde May 2019


Electronic Theses and Dissertations

In both supervised and unsupervised learning settings, deep neural networks (DNNs) are known to learn hierarchical and discriminative representations of data. They are capable of automatically extracting an excellent hierarchy of features from raw data without the need for manual feature engineering. Over the past few years, the general trend has been that DNNs have grown deeper and larger, amounting to a huge number of parameters and a highly nonlinear cascade of features, thus improving the flexibility and accuracy of the resulting models. In order to account for the scale, diversity, and difficulty of the data DNNs learn from, the architectural complexity and …


Responses To Catastrophic Agi Risk: A Survey, Kaj Sotala, Roman V. Yampolskiy Dec 2014


Faculty Scholarship

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage on human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.