Articles 1 - 13 of 13
Full-Text Articles in Entire DC Network
A Function-To-Task Process Model For Adaptive Automation System Design, Jason M. Bindewald, Michael E. Miller, Gilbert L. Peterson
Faculty Publications
Adaptive automation systems allow the user to complete a task seamlessly with a computer performing tasks at which the human operator struggles. Unlike traditional systems that allocate functions to either the human or the machine, adaptive automation varies the allocation of functions during system operation. Creating these systems requires designers to consider issues not present during static system development. To assist in adaptive automation system design, this paper presents the concept of inherent tasks and takes advantage of this concept to create the function-to-task design process model. This process model helps the designer determine how to allocate functions to the …
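The core idea of varying function allocation at runtime can be sketched as a toy policy. All names, the 0-1 workload scale, and the threshold below are illustrative assumptions, not the paper's model:

```python
def allocate(tasks, operator_workload, threshold=0.7):
    """Toy adaptive allocation: while the operator's estimated workload is
    below the threshold, every task stays with the human; above it, tasks
    the machine can perform shift to automation."""
    assignment = {}
    for task, machine_capable in tasks.items():
        if operator_workload > threshold and machine_capable:
            assignment[task] = "machine"
        else:
            assignment[task] = "human"
    return assignment

# Hypothetical tasks: the machine can monitor radar but not negotiate.
tasks = {"monitor_radar": True, "negotiate": False}
busy = allocate(tasks, operator_workload=0.9)
```

A static system would compute this assignment once at design time; the adaptive version re-evaluates it as the workload estimate changes.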
Seqassist: A Novel Toolkit For Preliminary Analysis Of Next-Generation Sequencing Data, Yan Peng, Andrew S. Maxwell, Natalie D. Barker, Jennifer G. Laird, Alan J. Kennedy, Nan Wang, Chaoyang Zhang, Ping Gong
Faculty Publications
Background: While next-generation sequencing (NGS) technologies are rapidly advancing, the development of efficient and user-friendly tools for preliminary analysis of massive NGS data lags behind. In an effort to fill this gap, keep pace with technological advancement, and accelerate data-to-results turnaround, we developed a novel software package named SeqAssist ("Sequencing Assistant" or SA).
Results: SeqAssist takes NGS-generated FASTQ files as the input, employs the BWA-MEM aligner for sequence alignment, and aims to provide a quick overview and basic statistics of NGS data. It consists of three separate workflows: …
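The kind of quick overview SeqAssist produces from FASTQ input can be sketched in a few lines. This is an illustrative stand-in, not SeqAssist's code; only the standard four-line FASTQ record layout (header, sequence, separator, quality) is assumed:

```python
def fastq_stats(lines):
    """Compute basic statistics over FASTQ records supplied as lines:
    read count, total bases, and GC content percentage."""
    reads = bases = gc = 0
    for i, line in enumerate(lines):
        if i % 4 == 1:  # every record's second line is the sequence
            seq = line.strip().upper()
            reads += 1
            bases += len(seq)
            gc += seq.count("G") + seq.count("C")
    return {
        "reads": reads,
        "bases": bases,
        "gc_percent": 100.0 * gc / bases if bases else 0.0,
    }

# Two made-up four-line FASTQ records.
sample = [
    "@read1", "ACGT", "+", "IIII",
    "@read2", "GGCC", "+", "IIII",
]
stats = fastq_stats(sample)
```

In practice such statistics are computed after (or alongside) the BWA-MEM alignment step that the toolkit runs on the raw reads.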
Machine Learning Nuclear Detonation Features, Daniel T. Schmitt, Gilbert L. Peterson
Faculty Publications
Nuclear explosion yield estimation equations based on a 3D model of the explosion volume will have lower uncertainty than radius-based estimation. Accurately collecting data for a volume model of atmospheric explosions requires building a 3D representation from 2D images. The majority of 3D reconstruction algorithms use the SIFT (scale-invariant feature transform) feature detection algorithm, which works best on feature-rich objects with continuous angular collections. These assumptions do not hold for the archive of nuclear explosion footage, which offers only three points of view. This paper reduces 300 dimensions derived from an image based on Fourier analysis and five edge …
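A Fourier-based reduction of image data to a handful of descriptors can be sketched as keeping only the magnitudes of the first few DFT coefficients of an intensity row. This illustrates the general technique, not the paper's 300-dimension pipeline:

```python
import cmath

def fourier_features(signal, k=4):
    """Keep the magnitudes of the first k DFT coefficients as a compact
    feature vector: low frequencies summarize the coarse shape of the
    signal while fine detail is discarded."""
    n = len(signal)
    feats = []
    for f in range(k):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * f * i / n)
                    for i, x in enumerate(signal))
        feats.append(abs(coeff))
    return feats

# A made-up row of image intensities with an alternating pattern:
# all of its energy sits at the DC term and the highest frequency.
row = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
feats = fourier_features(row, k=4)
```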
Timing Mark Detection On Nuclear Detonation Video, Daniel T. Schmitt, Gilbert L. Peterson
Faculty Publications
During the 1950s and 1960s the United States conducted and filmed over 200 atmospheric nuclear tests, establishing the foundations of atmospheric nuclear detonation behavior. Each explosion was documented with about 20 videos from three or four points of view. Synthesizing the videos into a 3D video will improve yield estimates and reduce error factors. The videos were captured at a nominal 2,500 frames per second, but the rate ranged from 2,300 to 3,100 frames per second during operation. In order to combine them into one 3D video, individual video frames need to be correlated in time with each other. When the videos were captured …
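Once timing marks are detected, frames can be placed on a common clock by interpolating between marks whose true capture times are known, which absorbs the varying frame rate. A minimal sketch (the mark positions and times below are made up):

```python
def frame_times(mark_frames, mark_times, n_frames):
    """Assign a timestamp to every frame by linear interpolation between
    timing marks: frames whose true capture time is known.

    mark_frames: sorted frame indices of detected timing marks
    mark_times:  corresponding true times in seconds
    """
    times = []
    for f in range(n_frames):
        # Pick the pair of marks surrounding this frame (clamp at the ends,
        # which extrapolates using the nearest interval's rate).
        if f <= mark_frames[0]:
            i = 0
        elif f >= mark_frames[-1]:
            i = len(mark_frames) - 2
        else:
            i = max(j for j in range(len(mark_frames) - 1) if mark_frames[j] <= f)
        f0, f1 = mark_frames[i], mark_frames[i + 1]
        t0, t1 = mark_times[i], mark_times[i + 1]
        times.append(t0 + (f - f0) * (t1 - t0) / (f1 - f0))
    return times

# Two marks one second apart spanning 2,500 frames (the nominal rate).
ts = frame_times([0, 2500], [0.0, 1.0], 2501)
```

With per-frame timestamps in hand, frames from different cameras can be matched by time rather than by index.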
Epaminondas: Exploring Combat Tactics, David W. King, Gilbert L. Peterson
Faculty Publications
Epaminondas is a two-person, zero-sum strategy game that combines long-term strategic play with highly tactical move sequences. The game has two unique features that make it stand out from other games. The first feature is the creation of phalanxes, which are groups of pieces that can move as a whole unit. As the number of pieces in a phalanx increases, the mobility and capturing power of the phalanx also increases. The second feature differs from many other strategy games: when a player makes a crossing, a winning move in the game, the second player has an opportunity to respond. This …
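Two of the phalanx rules mentioned above, that a phalanx of n pieces may advance up to n squares and that it captures a head-on opposing phalanx only if strictly longer, can be expressed directly. This is a sketch of the published rules, not the authors' game engine:

```python
def phalanx_moves(length):
    """A phalanx of n pieces may move 1 to n squares along its line,
    so longer phalanxes are more mobile."""
    return list(range(1, length + 1))

def can_capture(attacker_len, defender_len):
    """A moving phalanx captures an opposing head-on phalanx only when
    it is strictly longer than the defender."""
    return attacker_len > defender_len

moves = phalanx_moves(3)
```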
Querie: Collaborative Database Exploration, Magdalini Eirinaki, Suju Abraham, Neoklis Polyzotis, Naushin Shaikh
Faculty Publications
No abstract provided.
Narratives As A Fundamental Component Of Consciousness, Sandra L. Vaughan, Robert F. Mills, Michael R. Grimaila, Gilbert L. Peterson, Steven K. Rogers
Faculty Publications
In this paper, we propose a conceptual architecture that models human (spatially-temporally-modally) cohesive narrative development using a computer representation of quale properties. Qualia are proposed to be the fundamental "cognitive" components humans use to generate cohesive narratives. The engineering approach is based on cognitively inspired technologies and incorporates the novel concept of quale representation for computation of primitive cognitive components of narrative. The ultimate objective of this research is to develop an architecture that emulates the human ability to generate cohesive narratives with incomplete or perturbed information.
User Identification And Authentication Using Multi-Modal Behavioral Biometrics, Kyle O. Bailey, James S. Okolica, Gilbert L. Peterson
Faculty Publications
Biometric computer authentication has an advantage over password and access card authentication in that it is based on something you are, which is not easily copied or stolen. One way of performing biometric computer authentication is to use behavioral tendencies associated with how a user interacts with the computer. However, behavioral biometric authentication accuracy rates are worse than more traditional authentication methods. This article presents a behavioral biometric system that fuses user data from keyboard, mouse, and Graphical User Interface (GUI) interactions. Combining the modalities results in a more accurate authentication decision based on a broader view of the user's …
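Score-level fusion of the three modalities can be sketched as a weighted sum followed by a threshold decision. The weights, scores, and threshold below are illustrative assumptions; the paper's actual fusion method may differ:

```python
def fuse_scores(scores, weights):
    """Weighted-sum score-level fusion of per-modality match scores.

    scores, weights: dicts keyed by modality name; scores lie in [0, 1]
    where 1 means a perfect match, and weights are assumed to sum to 1.
    """
    return sum(scores[m] * weights[m] for m in scores)

def authenticate(scores, weights, threshold=0.7):
    """Accept the user only if the fused score clears the decision threshold."""
    return fuse_scores(scores, weights) >= threshold

# Made-up match scores for one login attempt.
scores = {"keyboard": 0.9, "mouse": 0.6, "gui": 0.8}
weights = {"keyboard": 0.5, "mouse": 0.2, "gui": 0.3}
fused = fuse_scores(scores, weights)
```

Fusing at the score level lets a weak modality (here, mouse) be outvoted by stronger ones instead of failing the authentication outright.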
Multi-Objective Optimization Of Dead-Reckoning Error Thresholds For Virtual Environments, Jeremy R. Millar, Douglas D. Hodson, Gary B. Lamont, Gilbert L. Peterson
Faculty Publications
Design trade-offs between state consistency and system response time are commonplace in virtual environments. Systems typically rely on predictive consistency algorithms such as dead-reckoning to control consistency and response time. Dead-reckoning error threshold selection determines the consistency/response time trade-off. We extend this trade-off space to explicitly account for the concept of system fairness. We derive a multi-objective optimization problem and apply multi-objective evolutionary algorithms to solve for Pareto optimal error thresholds. Abstract ©2014 IEEE.
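Dead reckoning works like this: each host extrapolates remote entities from their last known state and transmits an update only when the prediction error exceeds the threshold, so smaller thresholds buy consistency at the cost of traffic. A minimal 2D sketch (the positions, velocities, and threshold are made up):

```python
def dead_reckon(pos, vel, dt):
    """First-order dead-reckoning extrapolation: predicted = pos + vel * dt."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, predicted, threshold):
    """Send a state update only when the Euclidean prediction error
    exceeds the dead-reckoning error threshold."""
    err = sum((t - p) ** 2 for t, p in zip(true_pos, predicted)) ** 0.5
    return err > threshold

# Entity last reported at the origin moving 1 unit/s along x; predict 0.5 s ahead.
predicted = dead_reckon((0.0, 0.0), (1.0, 0.0), 0.5)
```

The paper's contribution is choosing the threshold itself: treating consistency, response time, and fairness as competing objectives and searching for Pareto-optimal values.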
A Trust-Aware System For Personalized User Recommendations In Social Networks, Magdalini Eirinaki, Malamati Louta, Iraklis Varlamis
Faculty Publications
Social network analysis has recently attracted considerable interest because of the advent and growing popularity of social media, such as blogs, social networking applications, microblogging, and customer review sites. In this environment, trust is becoming an essential quality of user interactions, and recommending useful content and trustworthy users is crucial for all members of the network. In this paper, we introduce a framework for handling trust in social networks, which is based on a reputation mechanism that captures the implicit and explicit connections between the network members, analyzes the semantics and dynamics of these …
Identification Of Biomarkers That Distinguish Chemical Contaminants Based On Gene Expression Profiles, Xiaomou Wei, Junmei Ai, Youping Deng, Xin Guan, David R. Johnson, Choo Y. Ang, Chaoyang Zhang, Edward J. Perkins
Faculty Publications
Background: High throughput transcriptomics profiles such as those generated using microarrays have been useful in identifying biomarkers for different classification and toxicity prediction purposes. Here, we investigated the use of microarrays to predict chemical toxicants and their possible mechanisms of action.
Results: In this study, in vitro cultures of primary rat hepatocytes were exposed to 105 chemicals and vehicle controls, representing 14 compound classes. We comprehensively compared various normalization methods for gene expression profiles, feature selection techniques, and classification algorithms for classifying these 105 chemicals into 14 compound classes. We found that normalization had little effect on the averaged …
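The normalize-then-classify pipeline can be sketched with per-profile z-score normalization and a nearest-centroid classifier. These are just two of the many normalization and classification choices the study compared; the profiles below are made up:

```python
from collections import defaultdict

def zscore(profile):
    """Per-profile z-score normalization of expression values."""
    n = len(profile)
    mean = sum(profile) / n
    std = (sum((x - mean) ** 2 for x in profile) / n) ** 0.5
    if std == 0.0:
        std = 1.0  # avoid dividing by zero on flat profiles
    return [(x - mean) / std for x in profile]

def nearest_centroid(train, labels, query):
    """Classify a profile by the closest class centroid (squared Euclidean)."""
    groups = defaultdict(list)
    for profile, label in zip(train, labels):
        groups[label].append(zscore(profile))
    q = zscore(query)
    best, best_d = None, float("inf")
    for label, profiles in groups.items():
        centroid = [sum(col) / len(profiles) for col in zip(*profiles)]
        d = sum((a - b) ** 2 for a, b in zip(q, centroid))
        if d < best_d:
            best, best_d = label, d
    return best

# Two toy compound classes with opposite expression patterns over 3 genes.
train = [[1.0, 5.0, 1.0], [2.0, 6.0, 2.0], [5.0, 1.0, 5.0], [6.0, 2.0, 6.0]]
labels = ["classA", "classA", "classB", "classB"]
pred = nearest_centroid(train, labels, [1.5, 5.5, 1.5])
```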
Applicability Of Latent Dirichlet Allocation To Multi-Disk Search, George E. Noel, Gilbert L. Peterson
Faculty Publications
Digital forensics practitioners face a continual increase in the volume of data they must analyze, which exacerbates the problem of finding relevant information in a noisy domain. Current technologies make use of keyword based search to isolate relevant documents and minimize false positives with respect to investigative goals. Unfortunately, selecting appropriate keywords is a complex and challenging task. Latent Dirichlet Allocation (LDA) offers a possible way to relax keyword selection by returning topically similar documents. This research compares regular expression search techniques and LDA using the Real Data Corpus (RDC). The RDC, a set of over 2400 disks from real …
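Once LDA assigns each document a topic distribution, "topically similar" reduces to comparing those distributions. A sketch using cosine similarity (the document names and topic proportions below are made up, and a full system would obtain them from a trained LDA model):

```python
def cosine(a, b):
    """Cosine similarity between two topic-proportion vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def topically_similar(query_topics, corpus_topics, k=2):
    """Rank documents by topic-distribution similarity to a query document
    and return the k closest document names."""
    ranked = sorted(corpus_topics.items(),
                    key=lambda kv: cosine(query_topics, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Hypothetical per-document topic distributions over 3 topics.
corpus = {
    "doc_finance": [0.9, 0.05, 0.05],
    "doc_email":   [0.1, 0.8, 0.1],
    "doc_mixed":   [0.5, 0.4, 0.1],
}
top = topically_similar([0.85, 0.1, 0.05], corpus, k=2)
```

This is what relaxes keyword selection: the investigator supplies one relevant document rather than guessing the right search terms.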
How Well Does Multiple Ocr Error Correction Generalize?, William B. Lund, Eric K. Ringger, Daniel D. Walker
Faculty Publications
As the digitization of historical documents, such as newspapers, becomes more common, archive patrons increasingly need accurate digital text from those documents. Building on our earlier work, the contributions of this paper are: (1) demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set; (2) enhancing the correction algorithm with novel features; and (3) assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the …
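The simplest form of multi-input OCR correction is a per-character majority vote over aligned hypotheses. This is only a baseline sketch; the paper's method instead learns a CRF over richer features:

```python
from collections import Counter

def vote_correct(ocr_outputs):
    """Character-level majority vote over multiple aligned OCR hypotheses.

    Assumes the hypotheses are already aligned to equal length; real
    systems align them first (e.g. with edit-distance lattices)."""
    corrected = []
    for chars in zip(*ocr_outputs):
        corrected.append(Counter(chars).most_common(1)[0][0])
    return "".join(corrected)

# Three made-up OCR engines, each making a different error.
fixed = vote_correct([
    "the qnick fox",
    "tne quick fox",
    "the quick f0x",
])
```

Each engine's error occurs at a different position, so the vote recovers the correct text even though no single hypothesis is error-free.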