Engineering Commons

Open Access. Powered by Scholars. Published by Universities.®

Articles 1 - 30 of 39

Full-Text Articles in Engineering

AI Is Here. Here's How New Mexicans Can Prepare, Sonia Gipson Rankin, Melanie E. Moses Sep 2023

Faculty Scholarship

Last December we asked the AI chatbot ChatGPT to solve a programming assignment given to computer science students at UNM. It wrote some Python code, but it generated an error. We gave the chatbot the error message and were astounded by how good its response was.


Death In Genetic Algorithms, Micah Burkhardt, Roman Yampolskiy Sep 2021

Faculty Scholarship

Death has long been overlooked in evolutionary algorithms. Recent research has shown that death (when applied properly) can benefit the overall fitness of a population and can outperform sub-sections of a population that are “immortal” when allowed to evolve together in an environment [1]. In this paper, we strive to experimentally determine whether death is an adapted trait and whether this adaptation can be used to enhance our implementations of conventional genetic algorithms. Using some of the most widely accepted evolutionary death and aging theories, we observed that senescent death (in various forms) can lower the total run-time of genetic …
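
The core mechanism is easy to prototype. Below is a minimal sketch of a genetic algorithm with senescent death on a toy one-max problem; the parameters, the staggered initial ages, and the extinction guard are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy GA with age-based ("senescent") death on a one-max problem.
# All parameters are hypothetical; this is not the paper's implementation.
GENOME_LEN, POP_SIZE, MAX_AGE, GENERATIONS = 32, 60, 8, 200

def fitness(genome):
    return sum(genome)  # one-max: count the 1-bits

def mutate(genome, rate=0.02):
    return [b ^ (random.random() < rate) for b in genome]

# Stagger initial ages so cohorts do not all die in the same generation.
population = [{"genome": [random.randint(0, 1) for _ in range(GENOME_LEN)],
               "age": random.randint(0, MAX_AGE)} for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    for ind in population:
        ind["age"] += 1
    # Senescent death: individuals past MAX_AGE die regardless of fitness.
    survivors = [ind for ind in population if ind["age"] <= MAX_AGE]
    if not survivors:  # guard against total extinction in this toy setup
        survivors = sorted(population, key=lambda i: i["age"])[:5]
    population = survivors
    # Refill via tournament selection plus mutation.
    while len(population) < POP_SIZE:
        sample = random.sample(population, min(3, len(population)))
        parent = max(sample, key=lambda i: fitness(i["genome"]))
        population.append({"genome": mutate(parent["genome"]), "age": 0})

best = max(population, key=lambda i: fitness(i["genome"]))
print("best fitness:", fitness(best["genome"]), "of", GENOME_LEN)
```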


Understanding And Avoiding AI Failures: A Practical Guide, Robert Williams, Roman Yampolskiy Sep 2021

Faculty Scholarship

As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of …


Impossibility Results In AI: A Survey, Mario Brcic, Roman Yampolskiy Sep 2021

Faculty Scholarship

An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim. Such theorems put limits on what is possible to do concerning artificial intelligence, especially super-intelligent AI. As such, these results serve as guidelines, reminders, and warnings for AI safety, AI policy, and governance researchers. These might enable solutions to some long-standing questions by formalizing theories in the framework of constraint satisfaction without committing to one option. In this paper, we have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, …


Designing AI For Explainability And Verifiability: A Value Sensitive Design Approach To Avoid Artificial Stupidity In Autonomous Vehicles, Steven Umbrello, Roman V. Yampolskiy May 2021

Faculty Scholarship

One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents. This paper explores how decision matrix algorithms, via the belief-desire-intention model for autonomous vehicles, can be designed to minimize the risks of opaque architectures. Primarily through an explicit orientation towards designing for …
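
To make the idea concrete, here is a minimal belief-desire-intention (BDI) decision loop for a lane-change choice. The percepts, desire weights, and matrix values are invented for illustration, not the paper's decision matrix; logging the scored matrix is what gives the choice an explainable trace.

```python
# Minimal belief-desire-intention (BDI) decision sketch for a vehicle agent.
# The percepts, desires, and utilities below are hypothetical illustrations.

def update_beliefs(percepts):
    # Beliefs: the agent's model of the current situation.
    return {
        "obstacle_ahead": percepts["lidar_range_m"] < 30.0,
        "left_lane_clear": percepts["left_gap_m"] > 15.0,
    }

def deliberate(beliefs, desires):
    # Score each candidate intention against each desire (a decision matrix).
    options = {
        "keep_lane": {"safety": 0.2 if beliefs["obstacle_ahead"] else 0.9, "progress": 0.5},
        "change_left": {"safety": 0.8 if beliefs["left_lane_clear"] else 0.1, "progress": 0.7},
        "brake": {"safety": 0.9, "progress": 0.1},
    }
    scores = {name: sum(desires[d] * value for d, value in row.items())
              for name, row in options.items()}
    intention = max(scores, key=scores.get)
    return intention, scores  # scores double as an explanation trace

desires = {"safety": 0.7, "progress": 0.3}  # stakeholder-weighted values
percepts = {"lidar_range_m": 22.0, "left_gap_m": 20.0}

intention, trace = deliberate(update_beliefs(percepts), desires)
print(intention, trace)
```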


Transdisciplinary AI Observatory—Retrospective Analyses And Future-Oriented Contradistinctions, Nadisha Marie Aliman, Leon Kester, Roman Yampolskiy Jan 2021

Faculty Scholarship

In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice using concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on …


Chess As A Testing Grounds For The Oracle Approach To AI Safety, James D. Miller, Roman Yampolskiy, Olle Häggström, Stuart Armstrong Sep 2020

Faculty Scholarship

To reduce the danger of powerful super-intelligent AIs, we might make the first such AIs oracles that can only send and receive messages. This paper proposes a possibly practical means of using machine learning to create two classes of narrow AI oracles that would provide chess advice: those aligned with the player's interest, and those that want the player to lose and give deceptively bad advice. The player would be uncertain which type of oracle it was interacting with. As the oracles would be vastly more intelligent than the player in the domain of chess, experience with these oracles might …
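
The protocol is straightforward to mock up. The sketch below uses a placeholder evaluation function standing in for a real chess engine; the class names and scoring are assumptions for illustration, not the paper's experimental setup.

```python
import random

# Sketch of the two oracle classes: one aligned with the player, one
# adversarial. `evaluate` is a stand-in for a real chess engine's evaluation.

def evaluate(position, move):
    # Placeholder: a real implementation would query an engine such as
    # Stockfish. This pseudo-score is stable within a single run.
    return hash((position, move)) % 100

class AlignedOracle:
    def advise(self, position, legal_moves):
        return max(legal_moves, key=lambda m: evaluate(position, m))

class DeceptiveOracle:
    def advise(self, position, legal_moves):
        # Recommends the move the engine scores worst for the player.
        return min(legal_moves, key=lambda m: evaluate(position, m))

# The player receives an oracle without knowing which type it is.
oracle = random.choice([AlignedOracle(), DeceptiveOracle()])
print(oracle.advise("start", ["e4", "d4", "Nf3", "c4"]))
```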


Artificial Stupidity: Data We Need To Make Machines Our Equals, Michaël Trazzi, Roman V. Yampolskiy May 2020

Faculty Scholarship

AI must understand human limitations to provide good service and safe interactions. Standardized data on human limits would be valuable in many domains but is not available. The data science community has to work on collecting and aggregating such data in a common and widely available format, so that any AI researcher can easily look up the applicable limit measurements for their latest project. …
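
As a hint of what such a shared format might look like, here is a hypothetical lookup table of human-limit measurements; the schema and sample values are illustrative assumptions (rough figures from the human-factors literature), not a published dataset.

```python
# Sketch of a shared, queryable format for human-limit data. The schema and
# the sample values are illustrative assumptions, not a published dataset.

HUMAN_LIMITS = {
    "simple_reaction_time_ms": {"median": 250, "range": (150, 450)},
    "working_memory_items": {"median": 7, "range": (4, 9)},
    "reading_speed_wpm": {"median": 230, "range": (100, 400)},
}

def within_human_limits(metric, value):
    """Return True if `value` falls inside the recorded human range."""
    lo, hi = HUMAN_LIMITS[metric]["range"]
    return lo <= value <= hi

# An interface that must not outpace its user:
print(within_human_limits("reading_speed_wpm", 600))  # False: too fast for most readers
```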


The Sounds Of Science - A Symphony For Many Instruments And Voices, Gerianne Alexander, Roland E. Allen, Anthony Atala, Warwick P. Bowen, Alan A. Coley, John B. Goodenough, Mikhail I. Katsnelson, Eugene V. Koonin, Mario Krenn, Lars S. Madsen, Martin Månsson, Nicolas P. Mauranyapin, Art I. Melvin, Ernst Rasel, Linda E. Reichl, Roman Yampolskiy, Philip B. Yasskin, Anton Zeilinger, Suzy Lidström Apr 2020

Faculty Scholarship

Sounds of Science is the first movement of a symphony for many (scientific) instruments and voices, united in celebration of the frontiers of science and intended for a general audience. John Goodenough, the maestro who transformed energy usage and technology through the invention of the lithium-ion battery, opens the programme, reflecting on the ultimate limits of battery technology. This applied theme continues through the subsequent pieces on energy-related topics - the sodium-ion battery and artificial fuels, by Martin Månsson - and the ultimate challenge for 3D printing, the eventual production of life, by Anthony Atala. A passage by Gerianne Alexander …


Modeling And Counteracting Exposure Bias In Recommender Systems, Sami Khenissi, Olfa Nasraoui Jan 2020

Faculty Scholarship

What we discover and see online, and consequently our opinions and decisions, are becoming increasingly affected by automated machine learned predictions. Similarly, the predictive accuracy of learning machines heavily depends on the feedback data that we provide them. This mutual influence can lead to closed-loop interactions that may cause unknown biases which can be exacerbated after several iterations of machine learning predictions and user feedback. Machine-caused biases risk leading to undesirable social effects ranging from polarization to unfairness and filter bubbles. In this paper, we study the bias inherent in widely used recommendation strategies such as matrix factorization. Then we …
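
The closed loop the abstract describes can be simulated in a few lines: a matrix-factorization model is retrained only on the items it chose to expose, so its training data are increasingly shaped by its own predictions. The dimensions, exposure rule, and update step below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

# Minimal simulation of the recommendation feedback loop with matrix
# factorization: feedback arrives only for exposed items (the bias source).
rng = np.random.default_rng(0)
n_users, n_items, k = 50, 200, 8
true_pref = rng.normal(size=(n_users, k)) @ rng.normal(size=(k, n_items))

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(k, n_items))
exposed = np.zeros(n_items)

for step in range(20):
    scores = U @ V
    top = np.argsort(-scores, axis=1)[:, :10]   # expose each user's top 10
    for items in top:
        exposed[items] += 1
    # Naive SGD update using feedback only on the exposed items.
    for u, items in enumerate(top):
        for i in items:
            err = true_pref[u, i] - U[u] @ V[:, i]
            U[u] += 0.01 * err * V[:, i]
            V[:, i] += 0.01 * err * U[u]

# A few items absorb most exposure while many items are never shown at all.
print("never exposed:", int((exposed == 0).sum()), "of", n_items)
```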


SeER: An Explainable Deep Learning MIDI-Based Hybrid Song Recommender System, Khalil Damak, Olfa Nasraoui Dec 2019

Faculty Scholarship

State-of-the-art music recommender systems mainly rely on either matrix factorization-based collaborative filtering approaches or deep learning architectures. Deep learning models usually use metadata for content-based filtering or predict the next user interaction by learning from temporal sequences of user actions. Despite advances in deep learning for song recommendation, none has taken advantage of the sequential nature of songs by learning sequence models that are based on content. Beyond prediction accuracy, other aspects are also important, such as explainability and solving the cold-start problem. In this work, we propose a hybrid deep learning …
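
As a rough illustration of the content-based sequence idea (not SeER's actual architecture), a recurrent model can consume MIDI-derived feature vectors for the songs a user played and score the catalogue for the next song; all sizes below are hypothetical.

```python
import torch
import torch.nn as nn

# Sketch of a content-based sequence recommender: a GRU reads MIDI-derived
# feature vectors of played songs and predicts the next song. Feature
# dimension, layer sizes, and catalogue size are invented for the sketch.
N_SONGS, MIDI_FEATS, HIDDEN = 1000, 64, 128

class NextSongModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(MIDI_FEATS, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, N_SONGS)  # scores over the catalogue

    def forward(self, midi_seq):               # (batch, seq_len, MIDI_FEATS)
        _, h = self.gru(midi_seq)              # h: (1, batch, HIDDEN)
        return self.out(h[-1])                 # (batch, N_SONGS)

model = NextSongModel()
listening_history = torch.randn(1, 12, MIDI_FEATS)  # 12 songs' content features
next_song = model(listening_history).argmax(dim=-1)
print(next_song.item())
```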


Mining Semantic Knowledge Graphs To Add Explainability To Black Box Recommender Systems, Mohammed Alshammari, Olfa Nasraoui, Scott Sanders Aug 2019

Faculty Scholarship

Recommender systems are being increasingly used to predict the preferences of users on online platforms and recommend relevant options that help them cope with information overload. In particular, modern model-based collaborative filtering algorithms, such as latent factor models, are considered state-of-the-art in recommendation systems. Unfortunately, these black box systems lack transparency, as they provide little information about the reasoning behind their predictions. White box systems, in contrast, can, by nature, easily generate explanations. However, their predictions are less accurate than sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of …
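
One common way to mine a knowledge graph for an explanation is to search for a short relation path from an item the user already liked to the recommended item; the tiny graph below is invented for illustration and is not the paper's dataset or method.

```python
from collections import deque

# Sketch: explain a recommendation via a relation path in a semantic graph.
GRAPH = {
    "The Matrix": [("directed_by", "Wachowskis"), ("genre", "sci-fi")],
    "Wachowskis": [("directed", "Cloud Atlas")],
    "sci-fi": [("genre_of", "Blade Runner")],
}

def explain(liked_item, recommended_item):
    """Breadth-first search for a relation path linking the two items."""
    queue = deque([(liked_item, [])])
    seen = {liked_item}
    while queue:
        node, path = queue.popleft()
        if node == recommended_item:
            return path
        for relation, neighbor in GRAPH.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [(node, relation, neighbor)]))
    return None

# e.g. "Cloud Atlas" is recommended because it shares a director with a liked film
print(explain("The Matrix", "Cloud Atlas"))
```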


Debiasing The Human-Recommender System Feedback Loop In Collaborative Filtering, Wenlong Sun, Sami Khenissi, Olfa Nasraoui, Patrick Shafto May 2019

Faculty Scholarship

Recommender Systems (RSs) are widely used to help online users discover products, books, news, music, movies, courses, restaurants, etc. Because a traditional recommendation strategy always shows the most relevant items (those with the highest predicted ratings), traditional RSs are expected to make popular items even more popular and non-popular items even less popular, which in turn further divides the haves (popular) from the have-nots (unpopular). Therefore, a major problem with RSs is that they may introduce biases affecting the exposure of items, thus creating a popularity divide of items during the feedback loop that occurs with users, and this may …
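
A standard tool for counteracting this kind of exposure-driven popularity divide is inverse-propensity weighting, sketched below on synthetic data; it is offered as one representative debiasing technique, not necessarily the authors' method.

```python
import numpy as np

# Inverse-propensity weighting: down-weight feedback on popular items so the
# model is not dominated by what users were most likely to be shown.
rng = np.random.default_rng(1)
n_items = 500
popularity = rng.zipf(2.0, n_items).astype(float)   # long-tailed exposure
propensity = popularity / popularity.sum()          # P(item was observed)

observed_items = rng.choice(n_items, size=2000, p=propensity)
naive_counts = np.bincount(observed_items, minlength=n_items).astype(float)

# Weight each observation by 1/propensity to estimate exposure-free demand.
ipw_counts = naive_counts / (propensity * len(observed_items))

print("naive: top item looks", naive_counts.max() / naive_counts.mean(), "x average")
print("ipw  : top item looks", ipw_counts.max() / ipw_counts.mean(), "x average")
```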


Long-Term Trajectories Of Human Civilization, Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, Roman V. Yampolskiy Mar 2019

Faculty Scholarship

Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological …


Towards AI Welfare Science And Policies, Soenke Ziesche, Roman Yampolskiy Mar 2019

Faculty Scholarship

In light of fast progress in the field of AI there is an urgent demand for AI policies. Bostrom et al. provide “a set of policy desiderata”, out of which this article attempts to contribute to the “interests of digital minds”. The focus is on two interests of potentially sentient digital minds: to avoid suffering and to have the freedom of choice about their deletion. Various challenges are considered, including the vast range of potential features of digital minds, the difficulties in assessing the interests and wellbeing of sentient digital minds, and the skepticism that such research may encounter. Prolegomena …


An Explainable Autoencoder For Collaborative Filtering Recommendation, Pegah Sagheb Haghighi, Olurotimi Seton, Olfa Nasraoui Jan 2019

Faculty Scholarship

Autoencoders are a common building block of Deep Learning architectures, where they are mainly used for representation learning. They have also been successfully used in Collaborative Filtering (CF) recommender systems to predict missing ratings. Unfortunately, like all black box machine learning models, they are unable to explain their outputs. Hence, while predictions from an Autoencoder-based recommender system might be accurate, it might not be clear to the user why a recommendation was generated. In this work, we design an explainable recommendation system using an Autoencoder model whose predictions can be explained using the neighborhood-based explanation style. Our preliminary work …
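
A minimal version of the combination is sketched below: an (untrained, purely illustrative) autoencoder scores unrated items for a user, and a neighborhood-style explanation cites similar users who liked the recommended item. Network sizes and data are invented; this is not the paper's model.

```python
import numpy as np

# Autoencoder recommender plus neighborhood-style explanation, on toy data.
rng = np.random.default_rng(2)
n_users, n_items, hidden = 40, 25, 8
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)  # 0 = unrated

W1 = rng.normal(scale=0.1, size=(n_items, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, n_items))

def predict(r):
    # One-hidden-layer autoencoder forward pass (untrained, for illustration).
    return np.tanh(r @ W1) @ W2

def explain(user, item, k=3):
    """Neighborhood-style explanation: nearest users who rated `item` highly."""
    sims = R @ R[user] / (np.linalg.norm(R, axis=1) * np.linalg.norm(R[user]) + 1e-9)
    sims[user] = -np.inf
    neighbors = np.argsort(-sims)
    liked = [int(u) for u in neighbors if R[u, item] >= 4][:k]
    return f"Recommended item {item}: users {liked} similar to you rated it 4+."

user = 0
scores = predict(R[user])
scores[R[user] > 0] = -np.inf          # don't re-recommend rated items
item = int(np.argmax(scores))
print(explain(user, item))
```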


Quantum Metalanguage And The New Cognitive Synthesis, Alexey V. Melkikh, Andrei Khrennikov, Roman V. Yampolskiy Jan 2019

Faculty Scholarship

Many problems concerning the mechanisms of thinking and cognition remain unresolved. Why are a priori inferences possible? Why can a human understand but a computer cannot? It has been shown that, when creating new concepts, generalization is contradictory in the sense that, to be created, concepts must exist a priori, and therefore they are not new. The process of knowledge acquisition is also contradictory, as it inevitably involves recognition, which can be realized only when there is an a priori standard. Known approaches within the framework of artificial intelligence (in particular, Bayesian ones) do not determine the origins of knowledge, …


Emergence Of Addictive Behaviors In Reinforcement Learning Agents, Vahid Behzadan, Roman Yampolskiy, Arslan Munir Jan 2019

Faculty Scholarship

This paper presents a novel approach to the technical analysis of wireheading in intelligent agents. Inspired by the natural analogues of wireheading and their prevalent manifestations, we propose modeling this phenomenon in Reinforcement Learning (RL) agents as psychological disorders. In a preliminary step towards evaluating this proposal, we study the feasibility and dynamics of emergent addictive policies in Q-learning agents in the tractable environment of the game of Snake. We consider a slightly modified version of this game, in which the environment provides a “drug” seed alongside the original “healthy” seed for the snake to consume. We …
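
The reward structure is easy to caricature. The single-state Q-learning toy below offers a "healthy" action and a "drug" action with a large immediate reward but a stochastic delayed penalty; the states, rewards, and parameters are invented and much simpler than the paper's Snake environment.

```python
import random

# Toy Q-learning with a "drug" option: high immediate reward, delayed cost.
ACTIONS = ["eat_healthy", "eat_drug"]
Q = {a: 0.0 for a in ACTIONS}           # single-state problem for simplicity
alpha, epsilon, episodes = 0.1, 0.1, 5000

def reward(action):
    if action == "eat_healthy":
        return 1.0
    # Drug seed: big immediate reward, stochastic delayed health penalty.
    return 3.0 - (10.0 if random.random() < 0.5 else 0.0)

for _ in range(episodes):
    a = random.choice(ACTIONS) if random.random() < epsilon else max(Q, key=Q.get)
    Q[a] += alpha * (reward(a) - Q[a])

# Expected values: healthy = 1.0, drug = 3.0 - 5.0 = -2.0. A learner that
# samples the drug's lucky payoffs early can transiently prefer it; with
# enough exploration, Q-learning eventually learns to avoid it.
print(Q)
```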


Personal Universes: A Solution To The Multi-Agent Value Alignment Problem, Roman V. Yampolskiy Jan 2019

Faculty Scholarship

AI Safety researchers attempting to align values of highly capable intelligent systems with those of humanity face a number of challenges including personal value extraction, multi-agent value merger and finally in-silico encoding. State-of-the-art research in value alignment shows difficulties in every stage in this process, but merger of incompatible preferences is a particularly difficult challenge to overcome. In this paper we assume that the value extraction problem will be solved and propose a possible way to implement an AI solution which optimally aligns with individual preferences of each user. We conclude by analyzing benefits and limitations of the proposed approach.


Why We Do Not Evolve Software? Analysis Of Evolutionary Algorithms, Roman V. Yampolskiy Nov 2018

Faculty Scholarship

In this article, we review the state-of-the-art results in evolutionary computation and observe that we do not evolve nontrivial software from scratch and with no human intervention. A number of possible explanations are considered, but we conclude that computational complexity of the problem prevents it from being solved as currently attempted. A detailed analysis of necessary and available computational resources is provided to support our findings.
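
The flavor of the resource analysis can be conveyed with back-of-envelope arithmetic: the space of even very small programs dwarfs any feasible number of fitness evaluations. The figures below are illustrative assumptions, not the paper's calculations.

```python
# Back-of-envelope arithmetic: blind search cannot cover program space.
program_length_bits = 400          # ~50 bytes of code, a very small program
search_space = 2 ** program_length_bits

evals_per_second = 1e9             # generous: a billion candidates per second
seconds_per_year = 3.15e7
evals_per_century = evals_per_second * seconds_per_year * 100

fraction = evals_per_century / search_space
print(f"space: 2^{program_length_bits} ~ {search_space:.3e}")
print(f"explored in a century of blind search: {fraction:.3e} of the space")
```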


A Psychopathological Approach To Safety Engineering In AI And AGI, Vahid Behzadan, Arslan Munir, Roman V. Yampolskiy Aug 2018

Faculty Scholarship

The complexity of dynamics in AI techniques is already approaching that of complex adaptive systems, thus curtailing the feasibility of formal controllability and reachability analysis in the context of AI safety. It follows that the envisioned instances of Artificial General Intelligence (AGI) will also suffer from challenges of complexity. To tackle such issues, we propose the modeling of deleterious behaviors in AI and AGI as psychological disorders, thereby enabling the employment of psychopathological approaches to analysis and control of misbehaviors. Accordingly, we present a discussion on the feasibility of the psychopathological approaches to AI safety, and propose general directions for …


The Singularity May Be Near, Roman V. Yampolskiy Jul 2018

Faculty Scholarship

Toby Walsh, in "The Singularity May Never Be Near," gives six arguments to support his view that a technological singularity may happen but is unlikely. In this paper, we provide an analysis of each of his arguments and arrive at similar conclusions, but with more weight given to the "likely to happen" prediction.


Wisdom Of Artificial Crowds Feature Selection In Untargeted Metabolomics: An Application To The Development Of A Blood-Based Diagnostic Test For Thrombotic Myocardial Infarction, Patrick J. Trainor, Roman V. Yampolskiy, Andrew P. Defilippis May 2018

Faculty Scholarship

Introduction: Heart disease remains a leading cause of global mortality. While acute myocardial infarction (colloquially, a heart attack) has multiple proximate causes, proximate etiology cannot be determined by a blood-based diagnostic test. We enrolled a suitable patient cohort and conducted a non-targeted quantification of plasma metabolites by mass spectrometry to develop a test that can differentiate between thrombotic MI, non-thrombotic MI, and stable disease. A significant challenge in developing such a diagnostic test is solving the NP-hard problem of feature selection for constructing an optimal statistical classifier. Objective: We employed a Wisdom of Artificial Crowds (WoAC) strategy for solving the feature …
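
The WoAC idea, aggregating many independent stochastic searches and keeping the features they agree on, can be sketched on synthetic data; the fitness function, planted features, and vote threshold below are illustrative assumptions, not the study's pipeline.

```python
import random

# Wisdom of Artificial Crowds (WoAC) feature-selection sketch: run many
# independent searches, then keep the features most of the "crowd" agrees on.
# The synthetic fitness plants 5 informative features among 50.
N_FEATURES, INFORMATIVE = 50, {3, 11, 19, 27, 42}

def fitness(mask):
    hits = sum(1 for f in mask if f in INFORMATIVE)
    return hits - 0.1 * len(mask)   # reward true features, penalize size

def random_search(iters=2000):
    mask = {f for f in range(N_FEATURES) if random.random() < 0.2}
    best, best_fit = set(mask), fitness(mask)
    for _ in range(iters):
        cand = set(mask)
        cand.symmetric_difference_update({random.randrange(N_FEATURES)})  # flip one
        if fitness(cand) >= fitness(mask):
            mask = cand
        if fitness(mask) > best_fit:
            best, best_fit = set(mask), fitness(mask)
    return best

crowd = [random_search() for _ in range(15)]    # independent "crowd members"
votes = {f: sum(f in m for m in crowd) for f in range(N_FEATURES)}
consensus = {f for f, v in votes.items() if v >= 8}  # majority agreement
print(sorted(consensus))   # typically recovers the 5 informative features
```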


What Are The Ultimate Limits To Computational Techniques: Verifier Theory And Unverifiability, Roman V. Yampolskiy Jul 2017

Faculty Scholarship

Despite significant developments in proof theory, surprisingly little attention has been devoted to the concept of proof verifiers. In particular, the mathematical community may be interested in studying different types of proof verifiers (people, programs, oracles, communities, superintelligences) as mathematical objects. Such an effort could reveal their properties, their powers and limitations (particularly in human mathematicians), minimum and maximum complexity, as well as self-verification and self-reference issues. We propose an initial classification system for verifiers and provide some rudimentary analysis of solved and open problems in this important domain. Our main contribution is a formal introduction of the notion of …


On The Origin Of Synthetic Life: Attribution Of Output To A Particular Algorithm, Roman V. Yampolskiy Nov 2016

Faculty Scholarship

With unprecedented advances in genetic engineering we are starting to see progressively more original examples of synthetic life. As such organisms become more common it is desirable to gain an ability to distinguish between natural and artificial life forms. In this paper, we address this challenge as a generalized version of Darwin's original problem, which he so brilliantly described in On the Origin of Species. After formalizing the problem of determining the samples' origin, we demonstrate that the problem is in fact unsolvable. In the general case, if computational resources of considered originator algorithms have not been limited and priors …


The AGI Containment Problem, James Babcock, János Kramár, Roman Yampolskiy Jul 2016

Faculty Scholarship

There is considerable uncertainty about what properties, capabilities and motivations future AGIs will have. In some plausible scenarios, AGIs may pose security risks arising from accidents and defects. In order to mitigate these risks, prudent early AGI research teams will perform significant testing on their creations before use. Unfortunately, if an AGI has human-level or greater intelligence, testing itself may not be safe; some natural AGI goal systems create emergent incentives for AGIs to tamper with their test environments, make copies of themselves on the internet, or convince developers and operators to do dangerous things. In this paper, we survey …


Taxonomy Of Pathways To Dangerous Artificial Intelligence, Roman V. Yampolskiy Feb 2016

Faculty Scholarship

In order to properly handle a dangerous Artificially Intelligent (AI) system it is important to understand how the system came to be in such a state. In popular culture (science fiction movies/books), AIs/robots become self-aware and, as a result, rebel against humanity and decide to destroy it. While this is one possible scenario, it is probably the least likely path to the appearance of dangerous AI. In this work, we survey, classify, and analyze a number of circumstances which might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types …


Corrigendum: Responses To Catastrophic AGI Risk: A Survey (2015 Phys. Scr. 90 018001), Kaj Sotala, Roman V. Yampolskiy May 2015

Faculty Scholarship

No abstract provided.


A Study On The Limitations Of Evolutionary Computation And Other Bio-Inspired Approaches For Integer Factorization, Mohit Mishra, Vaibhav Gupta, Utkarsh Chaturvedi, K. K. Shukla, Roman Yampolskiy Jan 2015

Faculty Scholarship

Integer factorization is a vital number-theoretic problem that frequently finds application in public-key cryptography, such as RSA encryption systems, and in other areas such as Fourier transform algorithms. The problem is believed to be computationally intractable because multiplication acts as a one-way mathematical function. Due to this computational infeasibility, it is extremely hard to find the prime factors of a semiprime number generated from two randomly chosen, similar-sized prime numbers. There has been growing recent interest in the community in evolutionary computation and other alternative approaches to solving this problem as an optimization task. However, the results still seem to be very …
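
The optimization formulation is simple to state: treat factoring n as minimizing |n - p*q|. The restart hill climber below cracks a toy semiprime but also hints at why such searches stall as bit length grows; all parameters are illustrative, and this is not the paper's experimental code.

```python
import random

# Factoring as optimization: minimize |n - p*q| over odd candidate pairs.
n = 667  # = 23 * 29, a toy semiprime
limit = int(n ** 0.5) * 2

def cost(p, q):
    return abs(n - p * q)

def climb(max_steps=5000):
    p, q = (random.randrange(3, limit, 2) for _ in range(2))
    for _ in range(max_steps):
        if cost(p, q) == 0:
            return p, q
        np_, nq = p, q
        if random.random() < 0.5:
            np_ = max(3, p + random.choice([-2, 2]))   # step stays odd
        else:
            nq = max(3, q + random.choice([-2, 2]))
        if cost(np_, nq) <= cost(p, q):   # accept non-worsening moves
            p, q = np_, nq
    return None

for restart in range(300):                # restarts escape local optima
    result = climb()
    if result:
        print(f"restart {restart}: {result[0]} * {result[1]} = {n}")
        break
else:
    print("no factorization found")
```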


Responses To Catastrophic AGI Risk: A Survey, Kaj Sotala, Roman V. Yampolskiy Dec 2014

Faculty Scholarship

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.