Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons

Articles 1 - 5 of 5

Full-Text Articles in Engineering

Impossibility Results In AI: A Survey, Mario Brcic, Roman Yampolskiy Sep 2021

Faculty Scholarship

An impossibility theorem demonstrates that a particular problem, or set of problems, cannot be solved as described in the claim. Such theorems place limits on what artificial intelligence, especially superintelligent AI, can do. As such, these results serve as guidelines, reminders, and warnings to AI safety, AI policy, and governance researchers. They may also enable solutions to some long-standing questions by formalizing theories in the framework of constraint satisfaction without committing to one option. In this paper, we have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, …


Understanding And Avoiding AI Failures: A Practical Guide, Robert Williams, Roman Yampolskiy Sep 2021

Faculty Scholarship

As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of …


Death In Genetic Algorithms, Micah Burkhardt, Roman Yampolskiy Sep 2021

Faculty Scholarship

Death has long been overlooked in evolutionary algorithms. Recent research has shown that death (when applied properly) can benefit the overall fitness of a population and can outperform sub-sections of a population that are “immortal” when allowed to evolve together in an environment [1]. In this paper, we strive to experimentally determine whether death is an adapted trait and whether this adaptation can be used to enhance our implementations of conventional genetic algorithms. Using some of the most widely accepted evolutionary death and aging theories, we observed that senescent death (in various forms) can lower the total run-time of genetic …
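The senescent-death mechanism the abstract describes can be sketched as a small genetic algorithm in which each individual carries an age and is removed once it exceeds a fixed lifespan, freeing slots for offspring of fitter parents. This is an illustrative sketch, not the paper's implementation: the OneMax objective, the five-generation lifespan, and the mutation rate are assumptions chosen for clarity.

```python
import random

def fitness(genome):
    # OneMax toy objective: maximize the number of 1-bits.
    return sum(genome)

def evolve(pop_size=30, genome_len=20, max_age=5, generations=60, seed=0):
    rng = random.Random(seed)
    # Each individual carries a genome and an age; initial ages are staggered
    # so deaths are spread across generations rather than synchronized.
    pop = [{"genome": [rng.randint(0, 1) for _ in range(genome_len)],
            "age": rng.randrange(max_age)} for _ in range(pop_size)]
    for _ in range(generations):
        for ind in pop:
            ind["age"] += 1
        # Rank the full population before deaths, so dying individuals can still parent.
        ranked = sorted(pop, key=lambda i: fitness(i["genome"]), reverse=True)
        # Senescent death: individuals past the fixed lifespan are removed.
        survivors = [i for i in pop if i["age"] <= max_age]
        # Refill vacated slots with mutated crossover offspring of fit parents.
        while len(survivors) < pop_size:
            p1, p2 = rng.sample(ranked[:pop_size // 2], 2)
            cut = rng.randrange(1, genome_len)
            child = p1["genome"][:cut] + p2["genome"][cut:]
            child = [bit ^ (rng.random() < 0.02) for bit in child]  # bit-flip mutation
            survivors.append({"genome": child, "age": 0})
        pop = survivors
    return max(fitness(i["genome"]) for i in pop)
```

Comparing runs with and without aging (for instance, setting `max_age` above `generations` so no one dies) is one way to probe the abstract's claim that senescent death affects run-time and fitness.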


Designing AI For Explainability And Verifiability: A Value Sensitive Design Approach To Avoid Artificial Stupidity In Autonomous Vehicles, Steven Umbrello, Roman V. Yampolskiy May 2021

Faculty Scholarship

One of the primary, if not the most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of their decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by the opaque structures of intelligent agents. This paper explores how decision matrix algorithms, via the belief-desire-intention model for autonomous vehicles, can be designed to minimize the risks of opaque architectures. Primarily through an explicit orientation towards designing for …
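To make the belief-desire-intention framing concrete, the sketch below shows a minimal BDI-style deliberation loop for a vehicle agent whose intention changes are logged, so the decision pathway stays inspectable rather than black-boxed. All names, thresholds, and rules here are hypothetical illustrations, not the decision matrix algorithm the paper proposes.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)   # perceived world state
    desires: list = field(default_factory=list)   # (goal, trigger) pairs, priority-ordered
    intention: str = "cruise"                     # currently committed plan
    log: list = field(default_factory=list)       # explainability trace

    def perceive(self, observation):
        self.beliefs.update(observation)

    def deliberate(self):
        # Commit to the highest-priority desire whose trigger holds.
        for goal, trigger in self.desires:
            if trigger(self.beliefs):
                if goal != self.intention:
                    # Record why the intention changed; this trace is what
                    # makes the agent's decision structure inspectable.
                    self.log.append((self.intention, goal, dict(self.beliefs)))
                self.intention = goal
                return self.intention
        return self.intention

agent = Agent(desires=[
    ("emergency_brake", lambda b: b.get("obstacle_distance_m", 1e9) < 10),
    ("slow_down",       lambda b: b.get("obstacle_distance_m", 1e9) < 30),
    ("cruise",          lambda b: True),
])
agent.perceive({"obstacle_distance_m": 25})
```

After the perception above, `agent.deliberate()` commits to `"slow_down"` and records the transition from `"cruise"` together with the beliefs that triggered it, yielding a human-readable account of each decision.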


Transdisciplinary AI Observatory—Retrospective Analyses And Future-Oriented Contradistinctions, Nadisha Marie Aliman, Leon Kester, Roman Yampolskiy Jan 2021

Faculty Scholarship

In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on …