Open Access. Powered by Scholars. Published by Universities.®

Computer Engineering Commons

Articles 1 - 12 of 12

Full-Text Articles in Computer Engineering

Dedicated Hardware For Machine/Deep Learning: Domain Specific Architectures, Angel Izael Solis Jan 2019

Open Access Theses & Dissertations

Artificial intelligence has come a very long way from being a mere spectacle on the silver screen in the 1920s [Hml18]. As artificial intelligence continues to evolve, and we begin to develop more sophisticated Artificial Neural Networks, the need for specialized, more efficient machines (machines that achieve the same performance with less computational strain) becomes increasingly evident. Though these “new” techniques, such as Multilayer Perceptrons, Convolutional Neural Networks and Recurrent Neural Networks, may seem to be on the cutting edge of technology, many of these ideas are over 60 years old! However, many of these earlier models ...


An Efficient Method For Online Identification Of Steady State For Multivariate System, Honglun Xu Jan 2018

Open Access Theses & Dissertations

Most existing steady state detection approaches are designed for univariate signals. For multivariate signals, the univariate approach is often applied to each process variable separately, and the system is declared steady once all signals are steady, which is computationally inefficient and also inaccurate. The article proposes an efficient online method for multivariate steady state detection. It estimates the covariance matrices using two different approaches, namely, the mean-squared-deviation and the mean-squared-successive-difference. To avoid the use of a moving window, the process means and the two covariance matrices are calculated recursively through exponentially weighted moving averages. A likelihood ratio ...
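
The abstract's core idea, recursive (windowless) EWMA estimates of the mean and of two covariance matrices that agree only in steady state, can be sketched as follows. This is an illustrative reconstruction, not the thesis's exact formulas; the class name, the forgetting factor, the trace-ratio statistic, and the threshold are all assumptions.

```python
import numpy as np

class EwmaSteadyState:
    """Illustrative sketch: recursive EWMA mean and two covariance
    estimates (MSD- and MSSD-based), with no moving window stored."""

    def __init__(self, dim, lam=0.05, threshold=2.0):
        self.lam = lam              # EWMA forgetting factor (assumed value)
        self.threshold = threshold  # detection threshold (assumed value)
        self.mean = np.zeros(dim)
        self.S_msd = np.eye(dim)    # mean-squared-deviation covariance
        self.S_mssd = np.eye(dim)   # mean-squared-successive-difference covariance
        self.prev = None

    def update(self, x):
        x = np.asarray(x, dtype=float)
        d = x - self.mean
        # recursive EWMA updates: only the current summaries are kept
        self.mean += self.lam * d
        self.S_msd = (1 - self.lam) * self.S_msd + self.lam * np.outer(d, d)
        if self.prev is not None:
            # successive difference, scaled so its outer product estimates
            # the covariance (Var of a difference of two samples is 2*sigma^2)
            s = (x - self.prev) / np.sqrt(2.0)
            self.S_mssd = (1 - self.lam) * self.S_mssd + self.lam * np.outer(s, s)
        self.prev = x
        # in steady state the two estimates agree, so a ratio-type
        # statistic near 1 indicates steadiness; drift inflates S_msd
        stat = np.trace(self.S_msd) / np.trace(self.S_mssd)
        return stat < self.threshold
```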


Adaptive Switched Capacitor Voltage Boost For Thermoelectric Generation, Rene A. Brito Jan 2016

Open Access Theses & Dissertations

Thermoelectric generators (TEGs) and other forms of energy harvesting often provide output voltages that are too low to be directly usable by traditional electronics. While increasing the number of thermoelectric elements can ultimately increase the power output, there is a tradeoff between size and power. A circuit technique that implements charge pumps is proposed to boost the TEG output to levels that can be used for energy harvesting applications. Current voltage boost circuits for TEGs simply boost the voltage by a set amount. The proposed circuit consists of an analog chip, to provide several ...
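
To see why a fixed boost ratio is limiting, consider the ideal charge-pump relation: an N-stage pump multiplies its input by roughly (N + 1), so the stage count needed depends directly on the (variable) TEG voltage. The sketch below is back-of-envelope arithmetic under that ideal-stage assumption; the input and target voltages are illustrative, not taken from the thesis.

```python
import math

def min_stages(v_in, v_target):
    """Smallest N with (N + 1) * v_in >= v_target, assuming ideal
    charge-pump stages (real stages lose voltage to switch drops and
    load current, so this only bounds the stage count from below)."""
    return max(0, math.ceil(v_target / v_in) - 1)

# A 50 mV TEG output needs at least 19 ideal stages to reach 1.0 V,
# while a 100 mV output needs only 9: a fixed-ratio pump wastes stages
# or falls short as the thermal gradient changes.
print(min_stages(0.05, 1.0))  # -> 19
print(min_stages(0.10, 1.0))  # -> 9
```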


Symbolic Aggregate Approximation (SAX) Under Interval Uncertainty, Chrysostomos D. Stylios, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

In many practical situations, we monitor a system by continuously measuring the corresponding quantities, to make sure that an abnormal deviation is detected as early as possible. Often, we do not have ready algorithms to detect abnormality, so we need to use machine learning techniques. For these techniques to be efficient, we first need to compress the data. One of the most successful methods of data compression is the technique of Symbolic Aggregate approXimation (SAX). While this technique is motivated by measurement uncertainty, it does not explicitly take this uncertainty into account. In this paper, we show that we can ...
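
For concreteness, here is a minimal sketch of the classical SAX procedure (z-normalize, piecewise aggregate approximation, then map to symbols using equiprobable breakpoints of the standard normal) that the report builds on; the interval-uncertainty extension itself is not reproduced. Segment count, alphabet size (4), and the demo series are illustrative choices.

```python
import numpy as np

def sax(series, n_segments, breakpoints=(-0.6745, 0.0, 0.6745)):
    """Compress a time series into a short symbol string (classical SAX).

    Assumes len(series) is divisible by n_segments.  The default
    breakpoints split the standard normal into 4 equiprobable regions,
    giving the alphabet 'a'..'d'.
    """
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                  # z-normalization
    paa = x.reshape(n_segments, -1).mean(axis=1)  # piecewise aggregate approximation
    symbols = np.searchsorted(breakpoints, paa)   # region index 0..3
    return "".join(chr(ord("a") + s) for s in symbols)

print(sax(np.sin(np.linspace(0.0, 6.28, 64)), n_segments=8))
```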


Why It Is Important To Precisiate Goals, Olga Kosheleva, Vladik Kreinovich, Hung T. Nguyen Mar 2015

Departmental Technical Reports (CS)

After Zadeh and Bellman explained how to optimize a function under fuzzy constraints, there have been many successful applications of this optimization. However, in many practical situations, it turns out to be more efficient to precisiate the objective function before performing optimization. In this paper, we provide a possible explanation for this empirical fact.
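
For readers unfamiliar with the Bellman-Zadeh scheme the abstract refers to: an alternative x is rated by the minimum of its degrees of satisfying the fuzzy goal and the fuzzy constraint, and we pick the x maximizing this minimum. The toy sketch below illustrates this; the membership functions are made up for the example.

```python
import numpy as np

# Bellman-Zadeh max-min optimization on a toy problem (illustrative
# membership functions, not from the report).
xs = np.linspace(0.0, 10.0, 1001)
mu_goal = np.clip(xs / 10.0, 0.0, 1.0)               # "x should be large"
mu_constraint = np.clip((7.0 - xs) / 4.0, 0.0, 1.0)  # "x should not exceed ~7"

satisfaction = np.minimum(mu_goal, mu_constraint)    # degree of joint satisfaction
best = xs[np.argmax(satisfaction)]
print(best)  # ~5.0: the max-min compromise between goal and constraint
```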


Simple Linear Interpolation Explains All Usual Choices In Fuzzy Techniques: Membership Functions, T-Norms, T-Conorms, And Defuzzification, Vladik Kreinovich, Jonathan Quijas, Esthela Gallardo, Caio De Sa Lopes, Olga Kosheleva, Shahnaz Shahbazova Mar 2015

Departmental Technical Reports (CS)

Most applications of fuzzy techniques use piecewise linear (triangular or trapezoidal) membership functions, min or product t-norms, max or algebraic sum t-conorms, and centroid defuzzification. Similarly, most applications of interval-valued fuzzy techniques use piecewise linear lower and upper membership functions. In this paper, we show that all these choices can be explained as applications of simple linear interpolation.
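
The standard choices the paper explains can be shown in a few lines: a triangular membership function, min as t-norm, max as t-conorm, and centroid defuzzification. All numbers below are illustrative.

```python
import numpy as np

def triangular(x, a, b, c):
    """Piecewise linear membership: 0 at a and c, peak of 1 at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

xs = np.linspace(0.0, 10.0, 1001)
mu_low = triangular(xs, 0.0, 2.0, 5.0)
mu_high = triangular(xs, 4.0, 8.0, 10.0)

both = np.minimum(mu_low, mu_high)    # min t-norm ("low AND high")
either = np.maximum(mu_low, mu_high)  # max t-conorm ("low OR high")

centroid = np.sum(xs * either) / np.sum(either)  # centroid defuzzification
print(centroid)
```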


Optimizing Pred(25) Is NP-Hard, Martine Ceberio, Olga Kosheleva, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

Usually, in data processing, to find the parameters of the model that best fits the data, people use the Least Squares method. One of the advantages of this method is that for linear models, it leads to an easy-to-solve system of linear equations. A limitation of this method is that even a single outlier can ruin the corresponding estimates; thus, more robust methods are needed. In particular, in software engineering, a more robust pred(25) method is often used, in which we maximize the number of cases in which the model's prediction is within the 25% range of the ...
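
The measure itself is easy to state and compute for a fixed model (it is maximizing it over model parameters that the report shows to be NP-hard). A sketch, with made-up numbers:

```python
import numpy as np

def pred25(actual, predicted):
    """Fraction of cases whose relative prediction error is within 25%."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    within = np.abs(predicted - actual) <= 0.25 * np.abs(actual)
    return within.mean()

# |110-100|=10 <= 25, |260-200|=60 > 50, |55-50|=5 <= 12.5: 2 of 3 cases
print(pred25([100, 200, 50], [110, 260, 55]))  # -> 0.666...
```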


Towards The Possibility Of Objective Interval Uncertainty In Physics. Ii, Luc Longpre, Olga Kosheleva, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

Applications of interval computations usually assume that while we only know an interval containing the actual (unknown) value of a physical quantity, this quantity has an exact value, and that in principle, we can get more and more accurate estimates of this value. Physicists know, however, that, due to the uncertainty principle, there are limitations on how accurately we can measure the values of physical quantities. One of the important principles of modern physics is operationalism -- that a physical theory should only use observable properties. This principle is behind most successes of 20th century physics, starting with relativity ...


Minimax Portfolio Optimization Under Interval Uncertainty, Meng Yuan, Xu Lin, Junzo Watada, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

In the 1950s, Markowitz proposed to combine different investment instruments to design a portfolio that either maximizes the expected return under constraints on volatility (risk) or minimizes the risk under a given expected return. Markowitz's formulas are still widely used in financial practice. However, these formulas assume that we know the exact values of the expected return and variance for each instrument, and that we know the exact covariance of every pair of instruments. In practice, we only know these values with some uncertainty. Often, we only know the bounds of these values -- in other words, we only know the ...
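
For reference, here is the classical Markowitz setup the paper starts from, with an exactly known covariance matrix; the paper's contribution, handling interval bounds on these inputs via a minimax criterion, is not reproduced here. The covariance values are illustrative.

```python
import numpy as np

# Minimum-variance fully-invested portfolio under exactly known inputs.
# Closed form: w = C^{-1} 1 / (1^T C^{-1} 1).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])  # illustrative covariance matrix

ones = np.ones(3)
w = np.linalg.solve(cov, ones)
w /= w @ ones                         # normalize so weights sum to 1
print(w, w @ cov @ w)                 # weights and the portfolio variance
```

Under interval uncertainty, each entry of `cov` (and each expected return) is replaced by a range, and the minimax approach picks the portfolio whose worst-case performance over all matrices consistent with those ranges is best.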


A Catalog Of While Loop Specification Patterns, Aditi Barua, Yoonsik Cheon Sep 2014

Departmental Technical Reports (CS)

This document provides a catalog of while loop patterns along with their skeletal specifications. The specifications are written in a functional form known as intended functions. The catalog can be used to derive specifications of while loops by first matching the loops to the cataloged patterns and then instantiating the skeletal specifications of the matched patterns. Once their specifications are formulated and written, the correctness of while loops can be proved rigorously or formally using the functional program verification technique in which a program is viewed as a mathematical function from one program state to another.
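
To illustrate the idea on a toy loop (the concrete loop and the informal intended-function notation below are illustrative, not copied from the catalog): the intended function describes, as a state-to-state function, what the loop must compute, and the loop is then checked against it.

```python
def sum_list(xs):
    """Toy example of a loop annotated with its intended function."""
    # intended function (informal): [total, i := total + sum(xs[i:]), len(xs)]
    total, i = 0, 0
    while i < len(xs):  # pattern: iterate over indices 0 .. len(xs)-1
        total += xs[i]
        i += 1
    # on exit i == len(xs), so total == 0 + sum(xs[0:]), matching the
    # intended function instantiated at the initial state
    return total

assert sum_list([3, 1, 4]) == 8
```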


Observable Causality Implies Lorentz Group: Alexandrov-Zeeman-Type Theorem For Space-Time Regions, Olga Kosheleva, Vladik Kreinovich Jun 2014

Departmental Technical Reports (CS)

The famous Alexandrov-Zeeman theorem proves that causality implies the Lorentz group. The physical meaning of this result is that once we observe which events can causally affect which other events, then, using only this information, we can reconstruct the linear structure of the Minkowski space-time. The original Alexandrov-Zeeman theorem is based on the causality relation between events represented by points in space-time. Knowing such a point means that we know the exact moment of time and the exact location of the corresponding event -- and that this event actually occurred at a single moment of time and at a single spatial location ...
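
For reference, the causality relation on point events that the original theorem is stated in terms of is the standard causal precedence on Minkowski space-time (written here in units where c = 1):

```latex
% Event e = (t, x) can causally affect e' = (t', x') iff a signal moving
% no faster than light can get from e to e':
\[
  (t, \mathbf{x}) \preceq (t', \mathbf{x}')
  \iff
  t' - t \ge \|\mathbf{x}' - \mathbf{x}\|.
\]
```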


Imprecise Probabilities In Engineering Analyses, Michael Beer, Scott Ferson, Vladik Kreinovich Apr 2013

Departmental Technical Reports (CS)

Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification are complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements and subjective information. This raises the question of whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems, which ...