Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons


Articles 1 - 22 of 22

Full-Text Articles in Physical Sciences and Mathematics

Fused Visible And Infrared Video For Use In Wilderness Search And Rescue, Dennis Eggett, Michael A. Goodrich, Bryan S. Morse, Nathan Rasmussen Dec 2009

Faculty Publications

Mini Unmanned Aerial Vehicles (mUAVs) have the potential to assist Wilderness Search and Rescue groups by providing a bird’s eye view of the search area. This paper proposes a method for augmenting visible-spectrum searching with infrared sensing in order to make use of thermal search clues. It details a method for combining the color and heat information from these two modalities into a single fused display to reduce needed screen space for remote field use. To align the video frames for fusion, a method for simultaneously pre-calibrating the intrinsic and extrinsic parameters of the cameras and their mount using a …
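
To make the fused-display idea concrete, here is a minimal sketch (not the paper's calibration-based pipeline) that assumes the visible and infrared frames are already registered to the same pixel grid; the `fuse_visible_ir` helper and the synthetic frames are invented for illustration.

```python
import numpy as np

def fuse_visible_ir(visible_rgb, ir_gray, max_alpha=0.6):
    """Overlay a registered infrared frame on a visible frame.

    visible_rgb: HxWx3 uint8 array, ir_gray: HxW uint8 array, assumed to be
    aligned already (the paper's calibration step is what produces alignment).
    """
    ir = ir_gray.astype(np.float32) / 255.0
    # Hotter pixels get a stronger red overlay and a higher blend weight.
    heat = np.zeros_like(visible_rgb, dtype=np.float32)
    heat[..., 0] = 255.0 * ir                      # red channel encodes heat
    alpha = (max_alpha * ir)[..., None]            # per-pixel blend weight
    fused = (1.0 - alpha) * visible_rgb + alpha * heat
    return fused.astype(np.uint8)

# Example with synthetic frames:
visible = np.full((120, 160, 3), 90, dtype=np.uint8)
ir = np.zeros((120, 160), dtype=np.uint8)
ir[40:60, 70:90] = 220                             # a warm "clue" region
fused = fuse_visible_ir(visible, ir)
```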


GPU-Accelerated Hierarchical Dense Correspondence For Real-Time Aerial Video Processing, Stephen Cluff, Bryan S. Morse, Jonathan D. Cohen, Mark Duchaineau Dec 2009

Faculty Publications

Video from aerial surveillance can provide a rich source of data for many applications and can be enhanced for display and analysis through such methods as mosaic construction, super-resolution, and mover detection. All of these methods require accurate frame-to-frame registration, which for live use must be performed in real time. In many situations, scene parallax may make alignment using global transformations impossible or error-prone, limiting the performance of subsequent processing and applications. For these cases, dense (per-pixel) correspondence is required, but this can be computationally prohibitive. This paper presents a hierarchical dense correspondence algorithm designed for implementation on graphics processing …


Classifying Sentence-Based Summaries Of Web Documents, Yiu-Kai D. Ng, Maria Soledad Pera Nov 2009

Faculty Publications

Text classification categorizes Web documents in large collections into predefined classes based on their contents. Unfortunately, the classification process can be time-consuming, and users are still required to spend a considerable amount of time scanning through the classified Web documents to identify the ones that satisfy their information needs. To address this problem, we first introduce CorSum, an extractive single-document summarization approach, which is simple and effective in performing the summarization task, since it relies only on word similarity to generate high-quality summaries. We then train a Naïve Bayes classifier on CorSum-generated summaries and verify the classification accuracy using the summaries …
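
As a rough sketch of the classify-by-summary step (not the CorSum system itself), the following trains a Naive Bayes classifier on short summary strings with scikit-learn; the summaries and labels are invented.

```python
# A minimal sketch of classifying documents by their summaries.
# The summaries and labels here are made up; CorSum itself is not implemented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_summaries = [
    "team wins championship after overtime thriller",
    "new vaccine shows promise in clinical trial",
    "stock markets rally as inflation cools",
]
train_labels = ["sports", "health", "finance"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_summaries, train_labels)

print(classifier.predict(["central bank raises interest rates again"]))
```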


MCC: A Runtime Verification Tool For MCAPI User Applications, Eric G. Mercer, Ganesh Gopalakrishnan, Jim Holt, Subodh Sharma Nov 2009

Faculty Publications

We present MCC, a dynamic verification tool for applications that use the Multicore Communication API (MCAPI), a new API for communication among cores. MCC systematically explores all relevant interleavings of an MCAPI application using a tailor-made dynamic partial order reduction (DPOR) algorithm. Our contributions are (i) a way to model the non-overtaking message matching relation underlying MCAPI calls, together with a high-level algorithm that effects DPOR for MCAPI and controls the lower-level details so that the intended executions happen at runtime; and (ii) a list of default safety properties that can be utilized in the process of verification. To our knowledge, this …


ChemAlign: Biologically Relevant Multiple Sequence Alignment Using Physicochemical Properties, Hyrum Carroll, Mark J. Clement, Quinn O. Snell, David McClellan Nov 2009

Faculty Publications

We present a new algorithm, ChemAlign, that uses physicochemical properties and secondary structure elements to create biologically relevant multiple sequence alignments (MSAs). Additionally, we introduce the Physicochemical Property Difference (PPD) score for the evaluation of MSAs. This score is the normalized difference of physicochemical property values between a calculated and a reference alignment. It takes a step beyond sequence similarity and measures characteristics of the amino acids to provide a more biologically relevant metric. ChemAlign is able to produce more biologically correct alignments and can help to identify potential drug docking sites.
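
A toy version of a normalized property-difference score might look like the following; it uses a single invented hydropathy table and is only an illustration of the idea, not the paper's PPD definition.

```python
# Toy normalized property-difference score between a test alignment and a
# reference alignment (one invented property; not the paper's PPD definition).
HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "L": 3.8,
              "K": -3.9, "I": 4.5, "G": -0.4, "V": 4.2, "-": 0.0}

def column_property_spread(column):
    vals = [HYDROPATHY[res] for res in column]
    return max(vals) - min(vals)

def property_difference(test_aln, ref_aln):
    """Average per-column property-spread difference, normalized to [0, 1]."""
    scale = max(HYDROPATHY.values()) - min(HYDROPATHY.values())
    diffs = []
    for test_col, ref_col in zip(zip(*test_aln), zip(*ref_aln)):
        diffs.append(abs(column_property_spread(test_col)
                         - column_property_spread(ref_col)) / scale)
    return sum(diffs) / len(diffs)

ref  = ["AILV", "A-LV"]
test = ["AILV", "AL-V"]
print(property_difference(test, ref))
```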


UAV Intelligent Path Planning For Wilderness Search And Rescue, Michael A. Goodrich, Lanny Lin Oct 2009

Faculty Publications

In the priority search phase of Wilderness Search and Rescue, a probability distribution map is created. Areas with higher probabilities are searched first in order to find the missing person in the shortest expected time. When using a UAV to support search, the onboard video camera should cover as much of the important areas as possible within a set time. We explore several algorithms (with and without a set destination), describe some novel techniques for solving this problem, and compare their performance on typical WiSAR scenarios. This problem is NP-hard, but our algorithms yield high-quality solutions that approximate the …
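
One simple baseline for covering a probability distribution map under a time budget is a greedy walk toward the highest remaining probability; the sketch below is such a baseline, not one of the paper's algorithms, and the grid, start cell, and budget are invented.

```python
# Illustrative greedy path planner over a probability grid (not the paper's
# algorithms): from a start cell, repeatedly move to the neighbor with the
# highest remaining probability until the flight-time budget is spent.
import numpy as np

def greedy_path(prob_map, start, budget):
    rows, cols = prob_map.shape
    remaining = prob_map.astype(float).copy()
    r, c = start
    path, collected = [(r, c)], remaining[r, c]
    remaining[r, c] = 0.0
    for _ in range(budget):
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
        r, c = max(neighbors, key=lambda rc: remaining[rc])
        collected += remaining[r, c]
        remaining[r, c] = 0.0
        path.append((r, c))
    return path, collected

rng = np.random.default_rng(0)
prob = rng.random((10, 10))
prob /= prob.sum()                      # a toy probability distribution map
path, covered = greedy_path(prob, start=(5, 5), budget=20)
print(f"covered {covered:.2%} of the probability mass")
```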


LIVEcut: Learning-Based Interactive Video Segmentation By Evaluation Of Multiple Propagated Cues, Bryan S. Morse, Brian L. Price, Scott Cohen Oct 2009

Faculty Publications

Video sequences contain many cues that may be used to segment objects in them, such as color, gradient, color adjacency, shape, temporal coherence, camera and object motion, and easily-trackable points. This paper introduces LIVEcut, a novel method for interactively selecting objects in video sequences by extracting and leveraging as much of this information as possible. Using a graph-cut optimization framework, LIVEcut propagates the selection forward frame by frame, allowing the user to correct any mistakes along the way if needed. Enhanced methods of extracting many of the features are provided. In order to use the most accurate information from the …


Versatile Reactive Navigation, Robert P. Burton, Luther A. Tychonievich, Louis P. Tychonievich Oct 2009

Faculty Publications

Most autonomous mobile agents operate in a highly constrained environment. Despite significant research, existing solutions are limited in their ability to handle heterogeneous constraints within highly dynamic or uncertain environments. This paper presents a novel maneuver selection technique suited for both 2D and 3D environments with highly dynamic maneuvering constraints and multiple mobile obstacles. Agents may have any arbitrary set of nonholonomic control variables; maneuvers can be constrained by a broad class of function inequalities, including time-dependent constraints involving nonlinear relationships between controlled and agent-state variables. The resulting algorithm has been implemented to run in real time using only a …
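
A generic way to select a maneuver under function-inequality constraints is to sample candidate controls and keep the best feasible one; the sketch below illustrates that generic approach (not the paper's technique), with invented speed and turn-rate constraints and an invented objective.

```python
# Sketch of maneuver selection under function-inequality constraints (a generic
# sampling approach, not the paper's technique): sample candidate control
# settings, discard any that violate a constraint, and keep the best survivor.
import random

def select_maneuver(constraints, objective, samples=500, seed=1):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(samples):
        control = {"speed": rng.uniform(0.0, 10.0),
                   "turn_rate": rng.uniform(-1.0, 1.0)}
        if all(g(control) <= 0.0 for g in constraints):  # g(u) <= 0 means feasible
            score = objective(control)
            if score > best_score:
                best, best_score = control, score
    return best

# Example constraints: a speed limit that tightens with turn rate, and a
# minimum forward speed.
constraints = [
    lambda u: u["speed"] - (10.0 - 6.0 * abs(u["turn_rate"])),
    lambda u: 1.0 - u["speed"],
]
objective = lambda u: u["speed"] - abs(u["turn_rate"])   # prefer fast, straight motion
print(select_maneuver(constraints, objective))
```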


Reducing Source Load In BitTorrent, Brian Sanderson, Daniel Zappala Aug 2009

Faculty Publications

One of the main goals of BitTorrent is to reduce load on web servers by encouraging clients to share content between themselves. However, BitTorrent’s current design relies heavily on the original source to serve a disproportionate amount of the file. We modify standard BitTorrent software so that a source determines the current popularity of each of the blocks of a file and tries to serve only those blocks that are rare. Using extensive PlanetLab experiments, we show that this modification can save a significant amount of the source’s upload bandwidth, with the tradeoff of some increased peer download time. In …
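
The rare-block idea can be sketched independently of any BitTorrent implementation: given which blocks each peer reports holding, offer the least-replicated blocks first. The helper below is an invented illustration, not the modified client.

```python
# Sketch of the "serve only rare blocks" idea (not the modified BitTorrent
# client itself): given which blocks each peer already holds, the source
# offers the blocks with the fewest copies in the swarm first.
from collections import Counter

def blocks_to_serve(num_blocks, peer_holdings, batch_size=3):
    """peer_holdings: list of sets of block indices held by each peer."""
    copies = Counter()
    for held in peer_holdings:
        copies.update(held)
    # Blocks nobody has count as zero copies; sort all blocks by rarity.
    rarity = sorted(range(num_blocks), key=lambda b: copies.get(b, 0))
    return rarity[:batch_size]

peers = [{0, 1, 2, 3}, {0, 1, 2}, {0, 1}]
print(blocks_to_serve(num_blocks=6, peer_holdings=peers))  # [4, 5, 3]
```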


A Sophisticated Library Search Strategy Using Folksonomies And Similarity Matching, William Lund, Yiu-Kai D. Ng, Maria Soledad Pera Jul 2009

Faculty Publications

Libraries, private and public, offer valuable resources to library patrons. As of today, the only way to locate information archived exclusively in libraries is through their catalogs. Library patrons, however, often find it difficult to formulate a proper query, which requires using specific keywords assigned to different fields of the desired library catalog records, to obtain relevant results. These improperly formulated queries often yield irrelevant results or no results at all. This negative experience in dealing with existing library systems turns library patrons away from library catalogs; instead, they rely on Web search engines to perform their searches first and upon …
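
One simple way to combine catalog keywords with folksonomy tags is to score records by term overlap with the query; the sketch below uses Jaccard similarity and an invented two-record catalog, and is not the paper's search strategy.

```python
# Sketch of tag-assisted catalog search (not the paper's system): score each
# catalog record by Jaccard overlap between the free-text query terms and the
# record's combined catalog keywords plus folksonomy tags.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

catalog = {
    "rec1": {"keywords": {"gardening", "vegetables"}, "tags": {"homesteading", "organic"}},
    "rec2": {"keywords": {"astronomy", "telescopes"}, "tags": {"stargazing"}},
}

def search(query, catalog, top_k=1):
    terms = set(query.lower().split())
    scored = [(jaccard(terms, rec["keywords"] | rec["tags"]), rid)
              for rid, rec in catalog.items()]
    return sorted(scored, reverse=True)[:top_k]

print(search("organic vegetables", catalog))
```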


Music Recommendation And Query-By-Content Using Self-Organizing Maps, Kyle B. Dickerson, Dan A. Ventura Jun 2009

Faculty Publications

The ever-increasing density of computer storage devices has allowed the average user to store enormous quantities of multimedia content, and a large amount of this content is usually music. Current search techniques for musical content rely on meta-data tags which describe artist, album, year, genre, etc. Query-by-content systems allow users to search based upon the acoustical content of the songs. Recent systems have mainly depended upon textual representations of the queries and targets in order to apply common string-matching algorithms. However, these methods lose much of the information content of the song and limit the ways in which a user …
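
A bare-bones self-organizing map over placeholder feature vectors is sketched below; the random "audio features" and training schedule are invented, and this is not the paper's recommendation or query-by-content system.

```python
# Minimal self-organizing map over toy "audio feature" vectors (random
# placeholders, not real acoustic features, and not the paper's system).
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr=0.5, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    ys, xs = np.indices(grid)
    for _ in range(epochs):
        for x in data:
            # Best-matching unit for this sample.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), grid)
            # Neighborhood-weighted update pulls nearby units toward the sample.
            dist2 = (ys - by) ** 2 + (xs - bx) ** 2
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
        lr *= 0.95
        sigma *= 0.95
    return weights

features = np.random.default_rng(1).random((100, 12))   # 100 songs, 12 features
som = train_som(features)
```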


Improving The Separability Of A Reservoir Facilitates Learning Transfer, David Norton, Dan A. Ventura Jun 2009

Faculty Publications

We use a type of reservoir computing called the liquid state machine (LSM) to explore learning transfer. The LSM is a neural network model that uses a reservoir of recurrent spiking neurons as a filter for a readout function. We develop a method of training the reservoir, or liquid, that is not driven by residual error. Instead, the liquid is evaluated based on its ability to separate different classes of input into different spatial patterns of neural activity. Using this method, we train liquids on two qualitatively different types of artificial problems. Resulting liquids are shown to …
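
A separability-style score can be illustrated as the ratio of between-class spread to within-class spread of the reservoir's state vectors; the sketch below is such an illustration with synthetic state vectors, not the paper's exact measure.

```python
# Sketch of a separability-style score for reservoir states (an illustration,
# not the paper's exact measure): ratio of between-class spread to
# within-class spread of the liquid's state vectors.
import numpy as np

def separability(states_by_class):
    """states_by_class: list of (n_i, d) arrays, one per input class."""
    centroids = np.array([s.mean(axis=0) for s in states_by_class])
    grand = centroids.mean(axis=0)
    between = np.mean([np.linalg.norm(c - grand) for c in centroids])
    within = np.mean([np.linalg.norm(s - s.mean(axis=0), axis=1).mean()
                      for s in states_by_class])
    return between / (within + 1e-9)

rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 1.0, size=(50, 20))
class_b = rng.normal(3.0, 1.0, size=(50, 20))
print(separability([class_a, class_b]))    # larger means "more separable"
```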


Super-Resolution Via Recapture And Bayesian Effect Modeling, Bryan S. Morse, Kevin Seppi, Neil Toronto, Dan A. Ventura Jun 2009

Faculty Publications

This paper presents Bayesian edge inference (BEI), a single-frame super-resolution method explicitly grounded in Bayesian inference that addresses issues common to existing methods. Though the best give excellent results at modest magnification factors, they suffer from gradient stepping and boundary coherence problems by factors of 4x. Central to BEI is a causal framework that allows image capture and recapture to be modeled differently, a principled way of undoing downsampling blur, and a technique for incorporating Markov random field potentials arbitrarily into Bayesian networks. Besides addressing gradient and boundary issues, BEI is shown to be competitive with existing methods on published …


An Exploration Of Topologies And Communication In Large Particle Swarms, Matthew Gardner, Andrew McNabb, Kevin Seppi May 2009

Faculty Publications

Particle Swarm Optimization (PSO) has typically been used with small swarms of about 50 particles. However, PSO is more efficiently parallelized with large swarms. We formally describe existing topologies and identify variations which are better suited to large swarms in both sequential and parallel computing environments. We examine the performance of PSO for benchmark functions with respect to swarm size and topology. We develop and demonstrate a new PSO variant which leverages the unique strengths of large swarms. “Hearsay PSO” allows for information to flow quickly through the swarm, even with very loosely connected topologies. These loosely connected topologies are …
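
To illustrate how a topology limits information flow in PSO, here is a minimal particle swarm with a ring (lBest) neighborhood on the sphere function; this is generic PSO, not the paper's "Hearsay PSO", and the constants are conventional defaults.

```python
# Minimal particle swarm with a ring (lBest) topology, to show how a topology
# limits information flow among particles; generic PSO, not "Hearsay PSO".
import numpy as np

def pso_ring(f, dim=10, swarm=50, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (swarm, dim))
    v = np.zeros((swarm, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    for _ in range(iters):
        # Each particle only sees itself and its two ring neighbors.
        neighbor_vals = np.stack([np.roll(pbest_val, 1),
                                  pbest_val,
                                  np.roll(pbest_val, -1)])
        choice = np.argmin(neighbor_vals, axis=0)          # 0=left, 1=self, 2=right
        idx = (np.arange(swarm) + (choice - 1)) % swarm
        lbest = pbest[idx]
        r1, r2 = rng.random((2, swarm, dim))
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (lbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    return pbest_val.min()

sphere = lambda p: float(np.sum(p * p))
print(pso_ring(sphere))
```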


Test Case Generation Using Model Checking For Software Components Deployed Into New Environments, Tonglaga Bao, Michael D. Jones Apr 2009

Faculty Publications

In this paper, we show how to generate test cases for a component deployed into a new software environment. This problem is important for software engineers who need to deploy a component into a new environment. Most existing model-based testing approaches generate models from high-level specifications. This leaves a semantic gap between the high-level specification and the actual implementation. Furthermore, the high-level specification often needs to be manually translated into a model, which is a time-consuming and error-prone process. We propose generating the model automatically by abstracting the source code of the component using …


The 20-Minute Genealogist: A Context-Preservation Metaphor For Assisted Family History Research, Charles D. Knutson, Jonathan Krein Mar 2009

Faculty Publications

What can you possibly do to be productive as a family history researcher in 20 minutes per week? Our studies suggest that currently the answer is, “Nothing.” In 20 minutes a would-be researcher can’t even remember what happened last week, let alone what they were planning to do next. The 20-Minute Genealogist is a powerful metaphor within which software solutions must consider context preservation as the fundamental domain of the system, thus freeing the researcher to do research while the software manages the tasks that computers do best. Two survey-based studies were conducted that indicate a significant disconnect between the …


A Dynamic Attribute-Based Data Filtering And Recovery Scheme For Web Information Processing, Amit Ahuja, Yiu-Kai D. Ng Mar 2009

Faculty Publications

Web data transmitted over network channels on the Internet in excessive volume causes data processing problems, which include selectively choosing the useful information to be retained for various data applications. In this paper, we present an approach for filtering less-informative attribute data from a source Website. A scheme for filtering attributes, instead of tuples (records), from a Website becomes imperative, since filtering a complete tuple would discard informative, as well as less-informative, attribute data in the tuple. Since filtered data at the source Website may be of interest to the user at the destination …
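
A generic heuristic for dropping less-informative attributes is to measure how much the values of each attribute vary across records; the sketch below filters attributes by value entropy over an invented record set and is not the paper's scheme.

```python
# Sketch of filtering "less-informative" attributes from extracted Web records
# (a generic heuristic, not the paper's scheme): drop attributes whose values
# carry little entropy across the records, keeping the more varied ones.
import math
from collections import Counter

def attribute_entropy(values):
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def filter_attributes(records, min_entropy=0.5):
    attrs = records[0].keys()
    keep = [a for a in attrs
            if attribute_entropy([r[a] for r in records]) >= min_entropy]
    return [{a: r[a] for a in keep} for r in records]

records = [
    {"title": "Hotel A", "city": "Provo", "currency": "USD"},
    {"title": "Hotel B", "city": "Orem",  "currency": "USD"},
    {"title": "Hotel C", "city": "Provo", "currency": "USD"},
]
print(filter_attributes(records))   # "currency" is constant, so it is dropped
```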


SpamED: A Spam Email Detection Approach Based On Phrase Similarity, Yiu-Kai D. Ng, Maria Soledad Pera Feb 2009

Faculty Publications

Emails are unquestionably one of the most popular communication media these days. Not only are they fast and reliable, they are also generally free. Unfortunately, a significant number of the emails received by users on a daily basis are spam. This is annoying, since spam emails translate into a waste of users' time spent reviewing and deleting them. In addition, spam emails consume resources, such as storage, bandwidth, and computer processing time. Many attempts have been made in the past to eradicate spam emails; however, none has proved highly effective. In this paper, we propose a spam-email detection …
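
A toy phrase-similarity spam score can be built from word bigrams shared with known spam; the sketch below is only an illustration of that general idea, with invented messages, not the paper's detection approach.

```python
# Toy phrase-similarity spam score (not the paper's method): compare an
# incoming message's word bigrams against bigrams collected from known spam,
# and flag the message if the overlap is high.
def bigrams(text):
    words = text.lower().split()
    return {(a, b) for a, b in zip(words, words[1:])}

def spam_score(message, known_spam):
    msg = bigrams(message)
    if not msg:
        return 0.0
    spam_phrases = set().union(*(bigrams(s) for s in known_spam))
    return len(msg & spam_phrases) / len(msg)

known_spam = ["claim your free prize now", "free prize waiting for you"]
print(spam_score("your free prize is waiting", known_spam))        # high overlap
print(spam_score("meeting notes attached for review", known_spam)) # no overlap
```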


Author Entropy Vs. File Size In The GNOME Suite Of Applications, Jason R. Casebolt, Daniel P. Delorey, Charles D. Knutson, Jonathan Krein, Alexander C. MacLean Jan 2009

Faculty Publications

We present the results of a study in which author entropy was used to characterize author contributions per file. Our analysis reveals three patterns: banding in the data, uneven distribution of data across bands, and file-size-dependent distributions within bands. Our results suggest that when two authors contribute to a file, large files are more likely to have a dominant author than smaller files.
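
Author entropy itself is simple to compute: Shannon entropy over each author's fraction of a file. The sketch below uses invented contribution counts.

```python
# Author entropy for a single file (Shannon entropy over the fraction of the
# file contributed by each author); the counts below are invented.
import math

def author_entropy(contributions):
    """contributions: mapping author -> lines (or commits) contributed."""
    total = sum(contributions.values())
    probs = [c / total for c in contributions.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

print(author_entropy({"alice": 90, "bob": 10}))   # ~0.47: one dominant author
print(author_entropy({"alice": 50, "bob": 50}))   # 1.0: evenly shared file
```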


Synthesizing Correlated RSS News Articles Based On A Fuzzy Equivalence Relation, Yiu-Kai D. Ng, Maria Soledad Pera Jan 2009

Faculty Publications

Tens of thousands of news articles are posted on-line each day, covering topics from politics to science to current events. To better cope with this overwhelming volume of information, RSS (news) feeds are used to categorize newly posted articles. Nonetheless, most RSS users must filter through many articles within the same or different RSS feeds to locate articles pertaining to their particular interests. Due to the large number of news articles in individual RSS feeds, there is a need for further organizing articles to aid users in locating non-redundant, informative, and related articles of interest quickly. In this paper, we …
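
Clustering with a fuzzy equivalence relation is often done by taking the max-min transitive closure of a fuzzy similarity relation and then an alpha-cut; the sketch below shows that generic construction on an invented similarity matrix, not the paper's similarity measure.

```python
# Sketch of clustering with a fuzzy equivalence relation (generic max-min
# transitive closure plus an alpha-cut; not the paper's similarity measure).
import numpy as np

def maxmin_closure(r):
    """Iterate max-min composition until the fuzzy relation is transitive."""
    while True:
        composed = np.max(np.minimum(r[:, :, None], r[None, :, :]), axis=1)
        new_r = np.maximum(r, composed)
        if np.allclose(new_r, r):
            return new_r
        r = new_r

def clusters(similarity, alpha):
    """Group items whose closed similarity meets the alpha threshold."""
    closed = maxmin_closure(similarity)
    groups, assigned = [], set()
    for i in range(len(closed)):
        if i in assigned:
            continue
        members = {j for j in range(len(closed)) if closed[i, j] >= alpha}
        groups.append(sorted(members))
        assigned |= members
    return groups

sim = np.array([[1.0, 0.9, 0.2, 0.1],
                [0.9, 1.0, 0.3, 0.1],
                [0.2, 0.3, 1.0, 0.8],
                [0.1, 0.1, 0.8, 1.0]])
print(clusters(sim, alpha=0.7))   # [[0, 1], [2, 3]]
```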


PSODA: Open Source Phylogenetic Search And DNA Analysis, Mark J. Clement, Quinn O. Snell, Kenneth Sundberg Jan 2009

Faculty Publications

PSODA is an open source (GPL v2) sequence analysis package that implements sequence alignment using biochemical properties, phylogeny search under parsimony or maximum likelihood criteria, and selection detection using biochemical properties (TreeSAAP). PSODA is compatible with PAUP, and the search algorithms are competitive with those in PAUP. PSODA also adds a basic scripting language to the PAUP block, making it possible to easily create advanced meta-searches. Because PSODA is open source, we have also been able to easily add advanced search techniques and characterize the benefits of various optimizations. PSODA is available for Macintosh OS X, Windows, and Linux.


Hardware Accelerated Sequence Alignment With Traceback, Scott Lloyd, Quinn O. Snell Jan 2009

Faculty Publications

Biological sequence alignment is an essential tool used in molecular biology and biomedical applications. The growing volume of genetic data and the complexity of sequence alignment present a challenge in obtaining alignment results in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop …
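
For reference, the global alignment with traceback that such hardware accelerates can be written compactly in software; below is a plain Needleman-Wunsch sketch with an arbitrary match/mismatch/gap scheme, not the paper's space-efficient hardware architecture.

```python
# Plain Needleman-Wunsch global alignment with traceback (a software reference
# for the computation the hardware accelerates; not the paper's architecture).
def global_align(a, b, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Traceback from the bottom-right corner to recover the aligned strings.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b)), score[n][m]

print(global_align("GATTACA", "GCATGCU"))
```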