Engineering Commons

Open Access. Powered by Scholars. Published by Universities.®

Computer Engineering

Faculty Scholarship Series: Artificial Intelligence

Articles 1 - 5 of 5

Full-Text Articles in Engineering

Impossibility Results In AI: A Survey, Mario Brcic, Roman Yampolskiy, Sep 2021

An impossibility theorem demonstrates that a particular problem, or set of problems, cannot be solved as stated. Such theorems place limits on what artificial intelligence, especially superintelligent AI, can do. As such, these results serve as guidelines, reminders, and warnings to AI safety, AI policy, and governance researchers. They may also enable progress on long-standing questions by formalizing theories within a constraint-satisfaction framework without committing to a single option. In this paper, we have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, …
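To make the flavor of such results concrete, here is a standard computability-theoretic impossibility statement (the halting problem), the textbook archetype of a deductive limit; this generic formulation is a sketch for illustration, not the survey's own formalism:

```latex
% Canonical impossibility result: the halting problem (Turing, 1936).
% No total computable H decides halting for every program/input pair.
\[
\neg\exists\, H \ \text{computable}:\quad
H(P, x) =
\begin{cases}
1 & \text{if program } P \text{ halts on input } x,\\
0 & \text{otherwise}
\end{cases}
\quad \text{for all } P, x.
\]
% Proof sketch: from such an H, define D(P) to loop forever iff H(P, P) = 1.
% Then D(D) halts iff H(D, D) = 0 iff D(D) does not halt: a contradiction.
```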


Designing AI For Explainability And Verifiability: A Value Sensitive Design Approach To Avoid Artificial Stupidity In Autonomous Vehicles, Steven Umbrello, Roman V. Yampolskiy, May 2021

One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-box nature of their decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove harmful, if not outright recalcitrant. For this reason, the values of stakeholders take on particular significance given the risks posed by the opaque structures of intelligent agents. This paper explores how decision matrix algorithms, via the belief-desire-intention model for autonomous vehicles, can be designed to minimize the risks of opaque architectures. Primarily through an explicit orientation towards designing for …
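As a rough illustration of the idea, the sketch below implements a belief-desire-intention style decision loop whose full decision matrix is kept inspectable, so the chosen intention can be audited. The state fields, candidate actions, and scoring weights are illustrative assumptions, not the authors' design:

```python
from dataclasses import dataclass, field

@dataclass
class Beliefs:
    obstacle_ahead: bool
    pedestrian_near: bool

@dataclass
class Desires:
    # Stakeholder values expressed as weighted goals (hypothetical weights).
    weights: dict = field(default_factory=lambda: {"safety": 10.0, "progress": 1.0})

def score(action: str, b: Beliefs, d: Desires) -> float:
    """One row of the decision matrix: how well an action serves each desire."""
    unsafe = action == "proceed" and (b.obstacle_ahead or b.pedestrian_near)
    safety = 0.0 if unsafe else 1.0
    progress = {"proceed": 1.0, "slow_down": 0.5, "stop": 0.0}[action]
    return d.weights["safety"] * safety + d.weights["progress"] * progress

def deliberate(b: Beliefs, d: Desires) -> tuple[str, dict]:
    """Choose an intention and return the whole matrix so the choice is auditable."""
    matrix = {a: score(a, b, d) for a in ("proceed", "slow_down", "stop")}
    return max(matrix, key=matrix.get), matrix

if __name__ == "__main__":
    intention, matrix = deliberate(Beliefs(obstacle_ahead=False, pedestrian_near=True), Desires())
    print(intention, matrix)  # slow_down wins: safe, but keeps some progress
```

Returning the matrix alongside the intention is what makes the agent explainable: every score that influenced the decision remains available for inspection.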


Chess As A Testing Grounds For The Oracle Approach To AI Safety, James D. Miller, Roman Yampolskiy, Olle Häggström, Stuart Armstrong, Sep 2020

To reduce the danger of powerful super-intelligent AIs, we might make the first such AIs oracles that can only send and receive messages. This paper proposes a possibly practical means of using machine learning to create two classes of narrow AI oracles that would provide chess advice: those aligned with the player's interest, and those that want the player to lose and give deceptively bad advice. The player would be uncertain which type of oracle it was interacting with. As the oracles would be vastly more intelligent than the player in the domain of chess, experience with these oracles might …
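A toy sketch of the experimental setup follows: two oracle types wrap the same evaluation, one returning the best-scoring move and one the worst, while the player cannot observe which type it is. The stub scoring function and all names are hypothetical stand-ins, not the paper's implementation:

```python
import random

def engine_score(move: str) -> float:
    """Stand-in for a chess engine's evaluation of a candidate move."""
    return {"e4": 0.4, "d4": 0.35, "a3": -0.1, "f3": -0.3}.get(move, 0.0)

class Oracle:
    def __init__(self, aligned: bool):
        self._aligned = aligned  # hidden from the player

    def advise(self, legal_moves: list[str]) -> str:
        ranked = sorted(legal_moves, key=engine_score, reverse=True)
        # Aligned oracles recommend the best move; deceptive ones the worst.
        return ranked[0] if self._aligned else ranked[-1]

# The player queries an oracle of unknown type and must decide how far to trust it.
oracle = Oracle(aligned=random.random() < 0.5)
print("advice:", oracle.advise(["e4", "d4", "a3", "f3"]))
```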


Mining Semantic Knowledge Graphs To Add Explainability To Black Box Recommender Systems, Mohammed Alshammari, Olfa Nasraoui, Scott Sanders, Aug 2019

Recommender systems are increasingly used to predict the preferences of users on online platforms and to recommend relevant options that help them cope with information overload. In particular, modern model-based collaborative filtering algorithms, such as latent factor models, are considered the state of the art in recommender systems. Unfortunately, these black box systems lack transparency, as they provide little information about the reasoning behind their predictions. White box systems, in contrast, can by nature easily generate explanations. However, their predictions are less accurate than those of sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of …
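For context, the sketch below trains a minimal latent factor model (matrix factorization fit by stochastic gradient descent), the kind of black-box recommender whose predictions such work seeks to explain. The tiny ratings matrix and hyperparameters are illustrative assumptions:

```python
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)  # 0 = unobserved rating

n_users, n_items, k = R.shape[0], R.shape[1], 2
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))  # latent user factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # latent item factors

lr, reg = 0.01, 0.02
for epoch in range(200):
    for u, i in zip(*R.nonzero()):            # iterate observed entries only
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

print(np.round(P @ Q.T, 1))  # predictions fill in the unobserved cells
```

The learned factors P and Q have no human-readable meaning, which is exactly the transparency gap that mining semantic knowledge graphs aims to close.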


Responses To Catastrophic AGI Risk: A Survey, Kaj Sotala, Roman V. Yampolskiy, Dec 2014

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage on human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.