Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons


Articles 1 - 28 of 28

Full-Text Articles in Physical Sciences and Mathematics

Online/Incremental Learning To Mitigate Concept Drift In Network Traffic Classification, Alberto R. De La Rosa Dec 2022

Open Access Theses & Dissertations

Communication networks play a large role in our everyday lives. The COVID-19 pandemic in 2020 highlighted their importance as most jobs had to be moved to remote work environments. It is possible that the spread of the virus, the death toll, and the economic consequences would have been much worse without communication networks. To avoid sole dependence on a single equipment vendor, networks are heterogeneous by design. Due to this, as well as their increasing size, network management has become overwhelming for network managers. For this reason, automating network management will have a significant positive impact. Machine learning and software defined networking …


Efficient Approaches To Steady State Detection In Multivariate Systems, Honglun Xu Aug 2022

Open Access Theses & Dissertations

Steady state detection is critically important in many engineering fields such as fault detection and diagnosis, and process monitoring and control. However, most of the existing methods are designed for univariate signals. In this dissertation, we propose an efficient online steady state detection method for multivariate systems through a sequential Bayesian partitioning approach. The signal is modeled by a Bayesian piecewise constant mean and covariance model, and a recursive updating method is developed to calculate the posterior distributions analytically. The duration of the current segment is utilized to test for steady state. Insightful guidance is provided for hyperparameter selection. The effectiveness …


Hardware For Quantized Mixed-Precision Deep Neural Networks, Andres Rios Aug 2021

Open Access Theses & Dissertations

Recently, there has been a push to perform deep learning (DL) computations on the edge rather than the cloud due to latency, network connectivity, energy consumption, and privacy issues. However, state-of-the-art deep neural networks (DNNs) require vast amounts of computational power, data, and energy -- resources that are limited on edge devices. This limitation has brought the need to design domain-specific architectures (DSAs) that implement DL-specific hardware optimizations. Traditionally, DNNs have run on 32-bit floating-point numbers; however, a body of research has shown that DNNs are surprisingly robust and do not require all 32 bits. Instead, using quantization, networks can run on …
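The quantization idea summarized above can be illustrated with a toy uniform quantizer (function name and interface are illustrative, not from the thesis; real mixed-precision accelerators use per-layer integer formats and hardware scale factors):

```python
def quantize(weights, bits):
    """Uniformly quantize a list of floats to 2**bits levels.

    A toy sketch: snap each weight to the nearest of 2**bits evenly
    spaced values spanning [min(weights), max(weights)].
    """
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1          # number of quantization steps
    step = (hi - lo) / levels if levels else 1.0
    return [lo + round((w - lo) / step) * step for w in weights]
```

With bits=1 every weight collapses to one of two values, which is the extreme case motivating mixed precision: some layers tolerate very few bits, others need more.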


When Can We Be Sure That Measurement Results Are Consistent: 1-D Interval Case And Beyond, Hani Dbouk, Steffen Schön, Ingo Neumann, Vladik Kreinovich Jun 2020

Departmental Technical Reports (CS)

In many practical situations, measurements are characterized by interval uncertainty -- namely, based on each measurement result, the only information that we have about the actual value of the measured quantity is that this value belongs to some interval. If several such intervals -- corresponding to measuring the same quantity -- have an empty intersection, this means that at least one of the corresponding measurement results is an outlier, caused by a malfunction of the measuring instrument. From the purely mathematical viewpoint, if the intersection is non-empty, there is no reason to be suspicious, but from the practical viewpoint, if …
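The purely mathematical part of the consistency test described above reduces to intersecting intervals; a minimal sketch (helper names are illustrative, not from the report):

```python
def intersect(intervals):
    """Intersection of a list of (lo, hi) intervals; None if empty."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return (lo, hi) if lo <= hi else None

def has_outlier(intervals):
    """Empty intersection => at least one measurement is an outlier."""
    return intersect(intervals) is None
```

The report's point is the converse situation: a non-empty intersection is mathematically unobjectionable, yet can still be suspicious in practice.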


Compound Vision Approach For Autonomous Vehicles Navigation, Michael Mikhael Jan 2020

Open Access Theses & Dissertations

An analogy can be made between the sensing that occurs in simple robots and drones and that in insects and crustaceans, especially in basic navigation requirements. Thus, an approach in robots/drones based on compound eye vision could be useful. In this research, several image processing algorithms were used to detect and track moving objects starting with images upon which a grid (compound eye image) was superimposed, including contour detection, the second moments of those contours along with the grid applied to the original image, and Fourier Transforms and inverse Fourier Transforms. The latter also provide information about scene or camera …


A Comprehensive And Modular Robotic Control Framework For Model-Less Control Law Development Using Reinforcement Learning For Soft Robotics, Charles Sullivan Jan 2020

Open Access Theses & Dissertations

Soft robotics is a growing field in robotics research. Heavily inspired by biological systems, these robots are made of softer, non-linear, materials such as elastomers and are actuated using several novel methods, from fluidic actuation channels to shape changing materials such as electro-active polymers. Highly non-linear materials make modeling difficult, and sensors are still an area of active research. These issues have rendered typical control and modeling techniques often inadequate for soft robotics. Reinforcement learning is a branch of machine learning that focuses on model-less control by mapping states to actions that maximize a specific reward signal. Reinforcement learning has …


Deep Learning (Partly) Demystified, Vladik Kreinovich, Olga Kosheleva Nov 2019

Departmental Technical Reports (CS)

Successes of deep learning are partly due to appropriate selection of activation function, pooling functions, etc. Most of these choices have been made based on empirical comparison and heuristic ideas. In this paper, we show that many of these choices -- and the surprising success of deep learning in the first place -- can be explained by reasonably simple and natural mathematics.


Dedicated Hardware For Machine/Deep Learning: Domain Specific Architectures, Angel Izael Solis Jan 2019

Open Access Theses & Dissertations

Artificial intelligence has come a very long way from being a mere spectacle on the silver screen in the 1920s [Hml18]. As artificial intelligence continues to evolve, and we begin to develop more sophisticated Artificial Neural Networks, the need for specialized and more efficient machines (less computational strain while maintaining the same performance results) becomes increasingly evident. Though these “new” techniques, such as Multilayer Perceptrons, Convolutional Neural Networks and Recurrent Neural Networks, may seem as if they are on the cutting edge of technology, many of these ideas are over 60 years old! However, many of these earlier models, at …


An Efficient Method For Online Identification Of Steady State For Multivariate System, Honglun Xu Jan 2018

Open Access Theses & Dissertations

Most of the existing steady state detection approaches are designed for univariate signals. For multivariate signals, the univariate approach is often applied to each process variable and the system is claimed to be steady once all signals are steady, which is computationally inefficient and also not accurate. The article proposes an efficient online method for multivariate steady state detection. It estimates the covariance matrices using two different approaches, namely, the mean-squared-deviation and mean-squared-successive-difference. To avoid the usage of a moving window, the process means and the two covariance matrices are calculated recursively through exponentially weighted moving average. A likelihood ratio …
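A univariate sketch of the recursive scheme described above: exponentially weighted updates of the mean and of the two variance estimates, mean-squared deviation (MSD) and mean-squared successive difference (MSSD). This is an assumption-laden simplification -- the multivariate method replaces squares with outer products and applies a likelihood ratio test, which this sketch omits:

```python
def ewma_updates(xs, lam=0.1):
    """Recursively track an EWMA mean plus MSD and MSSD variance
    estimates, with no moving window (univariate sketch)."""
    m = xs[0]          # exponentially weighted mean
    msd = 0.0          # mean-squared deviation from the lagged mean
    mssd = 0.0         # half mean-squared successive difference
    prev = xs[0]
    for x in xs[1:]:
        msd = lam * (x - m) ** 2 + (1 - lam) * msd
        mssd = lam * ((x - prev) ** 2) / 2 + (1 - lam) * mssd
        m = lam * x + (1 - lam) * m
        prev = x
    return m, msd, mssd
```

At steady state the two variance estimates roughly agree; during a trend or transient, MSD grows much faster than MSSD, so their ratio can serve as a steady-state indicator.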


Improving Time-Of-Flight And Other Depth Images: Super-Resolution And Denoising Using Variational Methods, Salvador Canales Andrade Jan 2018

Open Access Theses & Dissertations

Depth information is a new and important source of perception for machines, allowing them to build a better representation of their surroundings. Depth information provides a more precise map of the location of every object and surface in a space of interest than conventional cameras do. Time of flight (ToF) cameras provide one technique for acquiring depth maps; however, they produce low-resolution, noisy maps. This research proposes a framework to enhance and up-scale depth maps by using two different regularization terms: Total Generalized Variation (TGV) and Total Generalized Variation with a Structure Tensor …


Decision Making For Dynamic Systems Under Uncertainty: Predictions And Parameter Recomputations, Leobardo Valera Jan 2018

Open Access Theses & Dissertations

In this Thesis, we are interested in making decisions over a model of a dynamic system. We want to know, on one hand, how the corresponding dynamic phenomenon unfolds under different input parameters (simulations). These simulations might help researchers design devices with better performance than current ones. On the other hand, we are also interested in predicting the behavior of the dynamic system based on knowledge of the phenomenon in order to prevent undesired outcomes. Finally, this Thesis is concerned with the identification of parameters of dynamic systems that ensure a specific performance or behavior.

Understanding the …


Adaptive Switched Capacitor Voltage Boost For Thermoelectric Generation, Rene A. Brito Jan 2016

Open Access Theses & Dissertations

Thermoelectric generators (TEG) and other forms of energy harvesting often provide voltages that are not directly usable by traditional electronics because the levels from the TEG are too low. While increasing the number of thermoelectric elements can ultimately increase the power output, there is a tradeoff between size and power. This work describes a proposed circuit technique that uses charge pumps to boost the TEG output to levels usable in energy harvesting applications. Current voltage boost circuits for TEGs simply boost the voltage by a set amount. The proposed circuit consists of an analog chip, to provide several …


Symbolic Aggregate Approximation (Sax) Under Interval Uncertainty, Chrysostomos D. Stylios, Vladik Kreinovich Apr 2015

Departmental Technical Reports (CS)

In many practical situations, we monitor a system by continuously measuring the corresponding quantities, to make sure that an abnormal deviation is detected as early as possible. Often, we do not have ready algorithms to detect abnormality, so we need to use machine learning techniques. For these techniques to be efficient, we first need to compress the data. One of the most successful methods of data compression is the technique of Symbolic Aggregate approXimation (SAX). While this technique is motivated by measurement uncertainty, it does not explicitly take this uncertainty into account. In this paper, we show that we can …
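For reference, plain SAX (without the interval extension the paper proposes) can be sketched as: z-normalize the series, average it over equal segments (piecewise aggregate approximation), and map each segment mean to a letter via Gaussian breakpoints. The defaults below are the standard three-symbol breakpoints (approximately ±0.43); the function name is illustrative:

```python
def sax(series, word_len, breakpoints=(-0.43, 0.43)):
    """Compress a numeric series into a short symbolic word (plain SAX)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    std = var ** 0.5 or 1.0          # guard against a constant series
    z = [(x - mean) / std for x in series]
    seg = n // word_len              # assumes word_len divides n
    word = []
    for i in range(word_len):
        m = sum(z[i * seg:(i + 1) * seg]) / seg
        word.append("abc"[sum(m > b for b in breakpoints)])
    return "".join(word)
```

The paper's observation is that each z-normalized value is only known up to measurement uncertainty, so segment means (and hence letters) should really be interval-valued.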


Why It Is Important To Precisiate Goals, Olga Kosheleva, Vladik Kreinovich, Hung T. Nguyen Mar 2015

Departmental Technical Reports (CS)

After Zadeh and Bellman explained how to optimize a function under fuzzy constraints, there have been many successful applications of this optimization. However, in many practical situations, it turns out to be more efficient to precisiate the objective function before performing optimization. In this paper, we provide a possible explanation for this empirical fact.


Simple Linear Interpolation Explains All Usual Choices In Fuzzy Techniques: Membership Functions, T-Norms, T-Conorms, And Defuzzification, Vladik Kreinovich, Jonathan Quijas, Esthela Gallardo, Caio De Sa Lopes, Olga Kosheleva, Shahnaz Shahbazova Mar 2015

Departmental Technical Reports (CS)

Most applications of fuzzy techniques use piece-wise linear (triangular or trapezoid) membership functions, min or product t-norms, max or algebraic sum t-conorms, and centroid defuzzification. Similarly, most applications of interval-valued fuzzy techniques use piecewise-linear lower and upper membership functions. In this paper, we show that all these choices can be explained as applications of simple linear interpolation.
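Two of the choices listed above can be shown in a few lines: a triangular membership function (linear interpolation between membership 0 and 1) and centroid defuzzification, here approximated by a simple Riemann sum (helper names are illustrative, not from the report):

```python
def triangular(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising linear edge
        return (c - x) / (c - b)       # falling linear edge
    return mu

def centroid(mu, lo, hi, steps=1000):
    """Centroid defuzzification via numeric integration on [lo, hi]."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    num = sum(x * mu(x) for x in xs)
    den = sum(mu(x) for x in xs)
    return num / den if den else (lo + hi) / 2
```

The min t-norm and max t-conorm of the paper are just Python's built-in min and max applied to membership degrees.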


Minimax Portfolio Optimization Under Interval Uncertainty, Meng Yuan, Xu Lin, Junzo Watada, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

In the 1950s, Markowitz proposed to combine different investment instruments to design a portfolio that either maximizes the expected return under constraints on volatility (risk) or minimizes the risk under given expected return. Markowitz's formulas are still widely used in financial practice. However, these formulas assume that we know the exact values of expected return and variance for each instrument, and that we know the exact covariance of every two instruments. In practice, we only know these values with some uncertainty. Often, we only know the bounds of these values -- i.e., in other words, we only know the intervals …
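One ingredient of the resulting minimax formulation can be shown in isolation: with nonnegative weights and interval-valued expected returns, the worst-case expected portfolio return is attained at the lower endpoints. This is a sketch of that single observation only; the full problem in the report also involves interval variances and covariances:

```python
def worst_case_return(weights, return_intervals):
    """Worst-case expected portfolio return under interval uncertainty.

    weights: nonnegative portfolio weights.
    return_intervals: (lo, hi) bounds on each instrument's expected return.
    With w_i >= 0, the minimum over the intervals is sum(w_i * lo_i).
    """
    return sum(w * lo for w, (lo, hi) in zip(weights, return_intervals))
```

A minimax portfolio then maximizes this worst-case value over the weights.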


Towards The Possibility Of Objective Interval Uncertainty In Physics. Ii, Luc Longpre, Olga Kosheleva, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

Applications of interval computations usually assume that while we only know an interval containing the actual (unknown) value of a physical quantity, there is the exact value of this quantity, and that in principle, we can get more and more accurate estimates of this value. Physicists know, however, that, due to uncertainty principle, there are limitations on how accurately we can measure the values of physical quantities. One of the important principles of modern physics is operationalism -- that a physical theory should only use observable properties. This principle is behind most successes of the 20th century physics, starting with …


Optimizing Pred(25) Is Np-Hard, Martine Ceberio, Olga Kosheleva, Vladik Kreinovich Jan 2015

Departmental Technical Reports (CS)

Usually, in data processing, to find the parameters of the model that best fits the data, people use the Least Squares method. One of the advantages of this method is that for linear models, it leads to an easy-to-solve system of linear equations. A limitation of this method is that even a single outlier can ruin the corresponding estimates; thus, more robust methods are needed. In particular, in software engineering, a more robust pred(25) method is often used, in which we maximize the number of cases in which the model's prediction is within the 25% range of the observations. In …
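The pred(25) criterion itself is easy to state in code; the paper's point is that *optimizing* it (choosing model parameters to maximize it) is NP-hard, unlike least squares. A sketch of the metric computation (function name is illustrative):

```python
def pred(actual, predicted, tolerance=0.25):
    """Fraction of predictions within `tolerance` (default 25%) of the
    corresponding observed values -- the pred(25) criterion."""
    hits = sum(
        abs(p - a) <= tolerance * abs(a)
        for a, p in zip(actual, predicted)
    )
    return hits / len(actual)
```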


Computation Offloading Decisions For Reducing Completion Time, Salvador Melendez Jan 2015

Open Access Theses & Dissertations

Mobile devices are being widely used in many applications such as image processing, computer vision (e.g. face detection and recognition), wearable computing, language translation, and battlefield operations. However, mobile devices are constrained in terms of their battery life, processor performance, storage capacity, and network bandwidth. To overcome these issues, there is an approach called Computation Offloading, also known as cyber-foraging and surrogate computing. Computation offloading consists of migrating computational jobs from a mobile device to more powerful remote computing resources. Upon completion of the job, the results are sent back to the mobile device. However, a decision must be made; …
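The basic form of such a decision, for a completion-time objective, can be sketched as a comparison of local execution time against transfer-plus-remote time. This is a deliberately simple model with illustrative parameter names, not the thesis's actual decision procedure:

```python
def should_offload(local_time, data_bytes, bandwidth, remote_time):
    """Offload when transfer + remote compute beats local execution.

    local_time: seconds to run the job on the mobile device.
    data_bytes / bandwidth: seconds to ship the input to the surrogate.
    remote_time: seconds to run the job remotely.
    """
    transfer_time = data_bytes / bandwidth
    return transfer_time + remote_time < local_time
```

A realistic policy would also account for the time and energy to return results, queueing at the surrogate, and fluctuating network conditions.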


A Catalog Of While Loop Specification Patterns, Aditi Barua, Yoonsik Cheon Sep 2014

Departmental Technical Reports (CS)

This document provides a catalog of while loop patterns along with their skeletal specifications. The specifications are written in a functional form known as intended functions. The catalog can be used to derive specifications of while loops by first matching the loops to the cataloged patterns and then instantiating the skeletal specifications of the matched patterns. Once their specifications are formulated and written, the correctness of while loops can be proved rigorously or formally using the functional program verification technique in which a program is viewed as a mathematical function from one program state to another.
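As a small illustration of the approach, here is a while loop matching one of the commonest patterns (iterate over a sequence and accumulate), with its intended function written informally as a comment. The names and the phrasing of the intended function are illustrative, not taken from the catalog:

```python
def sum_while(xs):
    """While loop in the 'accumulate over a sequence' pattern."""
    total, i = 0, 0
    # Intended function (informally): on termination,
    #   final total == total + sum(xs[i:])
    # which, from the initial state total = 0, i = 0, gives sum(xs).
    while i < len(xs):
        total += xs[i]
        i += 1
    return total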


Observable Causality Implies Lorentz Group: Alexandrov-Zeeman-Type Theorem For Space-Time Regions, Olga Kosheleva, Vladik Kreinovich Jun 2014

Departmental Technical Reports (CS)

The famous Alexandrov-Zeeman theorem proves that causality implies Lorentz group. The physical meaning of this result is that once we observe which event can causally affect which other events, then, using only this information, we can reconstruct the linear structure of the Minkowski space-time. The original Alexandrov-Zeeman theorem is based on the causality relation between events represented by points in space-time. Knowing such a point means that we know the exact moment of time and the exact location of the corresponding event -- and that this event actually occurred at a single moment of time and at a single spatial …


Numerical Investigation Of Impact Of Relative Humidity On Droplet Accumulation And Film Cooling, Luz Irene Bugarin Jan 2014

Open Access Theses & Dissertations

During the summer, high inlet temperatures affect the power output of gas turbine systems. Evaporative coolers have gained popularity as an inlet cooling method for these systems. Wet compression has been one of the common evaporative cooling methods implemented to increase power output of gas turbine systems due to its simple installation and low cost. This process involves injection of water droplets into the continuous phase of compressor to reduce the temperature of the flow entering the compressor and in turn increase the power output of the whole gas turbine system. This study focused on a single stage rotor-stator compressor …


Imprecise Probabilities In Engineering Analyses, Michael Beer, Scott Ferson, Vladik Kreinovich Apr 2013

Departmental Technical Reports (CS)

Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification is complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements and subjective information. This raises the question whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems which …


Efficient, Scalable, Parallel, Matrix-Matrix Multiplication, Enrique Portillo Jan 2013

Open Access Theses & Dissertations

For the past decade, power/energy consumption has become a limiting factor for large-scale and embedded High Performance Computing (HPC) systems. This is especially true for systems that include accelerators, e.g., high-end computing devices, such as Graphics Processing Units (GPUs), with terascale computing capabilities and high power draws that greatly surpass that of multi-core CPUs. Accordingly, improving the node-level power/energy efficiency of an application can have a direct and positive impact on both classes of HPC systems.

The research reported in this thesis explores the use of software techniques to enhance the execution-time and power-consumption performance of applications executed on a …
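One classic node-level software technique in this space is cache blocking (tiling) of matrix-matrix multiplication, which improves data reuse and thus both execution time and energy per operation. A plain-Python sketch for clarity (production codes would use vendor BLAS or GPU kernels):

```python
def blocked_matmul(a, b, block=32):
    """Cache-blocked (tiled) C = A * B for row-major lists of lists.

    Tiling keeps a block x block working set hot in cache, so each
    element of A and B is reloaded from memory far less often than
    in the naive triple loop.
    """
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, k, block):
            for jj in range(0, m, block):
                for i in range(ii, min(ii + block, n)):
                    for p in range(kk, min(kk + block, k)):
                        aip = a[i][p]
                        for j in range(jj, min(jj + block, m)):
                            c[i][j] += aip * b[p][j]
    return c
```

The block size is a tuning parameter tied to cache capacity, which is exactly the kind of knob such power/performance studies explore.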


A Case Study Towards Verification Of The Utility Of Analytical Models In Selecting Checkpoint Intervals, Michael Joseph Harney Jan 2013

Open Access Theses & Dissertations

As high performance computing (HPC) systems grow larger, with increasing numbers of components, failures become more common. Codes that utilize large numbers of nodes and run for long periods of time must take such failures into account and adopt fault tolerance mechanisms to avoid loss of computation and, thus, system utilization. One of those mechanisms is checkpoint/restart. Although analytical models exist to guide users in the selection of an appropriate checkpoint interval, these models are based on assumptions that may not always be true. This thesis examines some of these assumptions, in particular, the consistency of parameters like Mean Time …
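The best-known analytical model of this kind is Young's first-order approximation for the optimal checkpoint interval. A sketch is below; note that this formula rests on exactly the kind of assumptions the thesis questions (a constant, known MTBF; checkpoint cost small relative to MTBF; independent failures):

```python
def optimal_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's approximation: T_opt = sqrt(2 * C * MTBF).

    checkpoint_cost: time C to write one checkpoint (seconds).
    mtbf: mean time between failures (seconds).
    """
    return (2.0 * checkpoint_cost * mtbf) ** 0.5
```

For example, a 2-second checkpoint cost and a 10,000-second MTBF give an interval of about 200 seconds; if the measured MTBF drifts, the "optimal" interval drifts with it.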


The Application Of Fuzzy Granular Computing For The Analysis Of Human Dynamic Behavior In 3d Space, Murad Mohammad Alaqtash Jan 2012

Open Access Theses & Dissertations

Human dynamic behavior in space is very complex in that it involves many physical, perceptual and motor aspects. It is tied together at a sensory level by linkages between vestibular, visual and somatosensory information that develop through experience of inertial and gravitational reaction forces. Coordinated movement emerges from the interplay among descending output from the central nervous system, sensory input from the body and environment, muscle dynamics, and the emergent dynamics of the whole neuromusculoskeletal system.

There have been many attempts to directly capture the activities of the neuronal system in human locomotion without the ability to clarify how the …


Development Of Load Balancing Algorithm Based On Analysis Of Multi-Core Architecture On Beowulf Cluster, Damian Valles Jan 2011

Open Access Theses & Dissertations

In this work, analysis and modeling were employed to improve the Linux Scheduler for HPC use. The performance throughput of a single compute-node of the 23-node Beowulf cluster, Virgo 2.0, was analyzed to find bottlenecks and limitations that affected performance in the processing hardware, where each compute-node consisted of two quad-core processors with eight gigabytes of memory. The analysis was performed using the High Performance Linpack (HPL) benchmark.

In addition, the processing hardware of the compute-node was modeled using an Instruction per Cycle (IPC) metric that was estimated using linear regression. Modeling data was obtained by using the Tuning …


Algorithms For Training Large-Scale Linear Programming Support Vector Regression And Classification, Pablo Rivas Perea Jan 2011

Open Access Theses & Dissertations

The main contribution of this dissertation is the development of a method to train a Support Vector Regression (SVR) model for the large-scale case where the number of training samples supersedes the computational resources. The proposed scheme consists of posing the SVR problem entirely as a Linear Programming (LP) problem and on the development of a sequential optimization method based on variables decomposition, constraints decomposition, and the use of primal-dual interior point methods. Experimental results demonstrate that the proposed approach has comparable performance with other SV-based classifiers. Particularly, experiments demonstrate that as the problem size increases, the sparser the solution …