Digital Commons Network

Open Access. Powered by Scholars. Published by Universities.®

Articles 1 - 19 of 19

Full-Text Articles in Entire DC Network

A Confidence Measure For Boundary Detection And Object Selection, William A. Barrett, Eric N. Mortensen Dec 2001

Faculty Publications

We introduce a confidence measure that estimates the assurance that a graph arc (or edge) corresponds to an object boundary in an image. A weighted, planar graph is imposed onto the watershed lines of a gradient magnitude image and the confidence measure is a function of the cost of fixed-length paths emanating from and extending to each end of a graph arc. The confidence measure is applied to automate the detection of object boundaries and thereby reduces (often greatly) the time and effort required for object boundary definition within a user-guided image segmentation environment.
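A minimal sketch of the idea, assuming a dict-of-dicts edge-cost graph and an illustrative way of combining path costs into a score; the paper's actual functional form is not given in this abstract:

```python
def arc_confidence(graph, u, v, path_len=3):
    """Illustrative confidence for graph arc (u, v): combine the arc's own cost with
    the cheapest fixed-length paths extending from each endpoint, excluding the arc
    itself. `graph` is assumed to be a dict-of-dicts of edge costs; only the general
    idea of scoring an arc by the cost of boundary paths through it is shown."""
    def cheapest_extension(start, depth, visited):
        if depth == path_len:
            return 0.0
        best = float('inf')
        for nbr, cost in graph[start].items():
            if nbr in visited or {start, nbr} == {u, v}:
                continue
            best = min(best, cost + cheapest_extension(nbr, depth + 1, visited | {nbr}))
        return best

    total = graph[u][v] + cheapest_extension(u, 0, {u}) + cheapest_extension(v, 0, {v})
    # low total cost along the extended boundary path -> high confidence
    return 1.0 / (1.0 + total)
```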


Image Magnification Using Level-Set Reconstruction, Bryan S. Morse, Duane Schwartzwald Dec 2001

Faculty Publications

Image magnification is a common problem in imaging applications, requiring interpolation to “read between the pixels”. Although many magnification/interpolation algorithms have been proposed in the literature, all methods must suffer to some degree the effects of imperfect reconstruction: false high-frequency content introduced by the underlying original sampling. Most often, these effects manifest themselves as jagged contours in the image. This paper presents a method for constrained smoothing of such artifacts that attempts to produce smooth reconstructions of the image’s level curves while still maintaining image fidelity. This is similar to other iterative reconstruction algorithms and to Bayesian restoration techniques, but instead …
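A toy sketch of the constrained-smoothing idea under stated assumptions: a Laplacian diffusion stands in for the level-curve (curvature-based) flow, and fidelity is enforced by re-imposing the original sample values after each iteration; this is not the paper's exact scheme.

```python
import numpy as np

def levelset_smooth(magnified, known_mask, known_values, iters=100, dt=0.1):
    """Constrained smoothing sketch: diffuse the magnified image to smooth its level
    curves, then restore the original pixel values at the known sample locations so
    image fidelity is never lost. np.roll gives simple periodic boundary handling."""
    img = magnified.astype(float).copy()
    for _ in range(iters):
        # discrete Laplacian as a crude stand-in for curvature-based level-set flow
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        img += dt * lap
        # fidelity constraint: original samples are not allowed to drift
        img[known_mask] = known_values
    return img
```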


Fast Focal Length Solution In Partial Panoramic Image Stitching, William A. Barrett, Kirk L. Duffin Dec 2001

Faculty Publications

Accurate estimation of effective camera focal length is crucial to the success of panoramic image stitching. Fast techniques for estimating the focal length exist, but are dependent upon a close initial approximation or the existence of a full circle panoramic image sequence. Numerical solutions of the focal length demonstrate strong coupling between the focal length and the angles used to position each component image about the common spherical center. This paper demonstrates that parameterizing panoramic image positions using spherical arc length instead of angles effectively decouples the focal length from the image position. This new parameterization does not require an …
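A minimal illustration (not the paper's solver) of why the arc-length parameterization decouples the unknowns:

```python
# With angle parameters theta_i, the spherical separation between two image centers on
# a sphere of radius f is f * |theta_i - theta_j|, so every update to f also changes
# all pairwise separations. With arc-length parameters s_i the separation is simply
# |s_i - s_j|, independent of f, and the angles become derived values theta_i = s_i / f.
def separation_angle_param(f, theta_i, theta_j):
    return f * abs(theta_i - theta_j)   # coupled: depends on the focal length

def separation_arclen_param(s_i, s_j):
    return abs(s_i - s_j)               # decoupled: the focal length does not appear
```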


Houghing The Hough: Peak Collection For Detection Of Corners, Junctions And Line Intersections, William A. Barrett, Kevin D. Petersen Dec 2001

Faculty Publications

We exploit the Accumulator Array of the Hough Transform by finding collections of (2 or more) peaks through which a given sinusoid will pass. Such sinusoids identify points in the original image where lines intersect. Peak collection (or line aggregation) is performed by making a second pass through the edge map, but instead of laying points down in the accumulator array (as with the original Hough Transform), we compute the line integral over each sinusoid that corresponds to the current edge point. If a sinusoid passes through two or more peaks, we deposit that sum/integral into a …
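A hedged sketch of the second pass, assuming a standard (rho, theta) accumulator, a boolean peak mask, and that the deposited sum goes into the bins along the sinusoid (the abstract is truncated before it says where the deposit lands):

```python
import numpy as np

def hough_peak_collection(edge_points, accumulator, peak_mask, rho_res=1.0, n_theta=180):
    """For each edge point, follow its sinusoid through the first-pass accumulator,
    compute the line integral along it, and count how many detected peaks it crosses.
    Sinusoids crossing two or more peaks mark corners, junctions, or intersections."""
    n_rho = accumulator.shape[0]
    rho_offset = n_rho // 2
    thetas = np.deg2rad(np.arange(n_theta))
    second = np.zeros_like(accumulator, dtype=float)
    for (x, y) in edge_points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        rho_bins = np.round(rhos / rho_res).astype(int) + rho_offset
        valid = (rho_bins >= 0) & (rho_bins < n_rho)
        rb, tb = rho_bins[valid], np.arange(n_theta)[valid]
        integral = accumulator[rb, tb].sum()   # line integral along the sinusoid
        peaks_hit = peak_mask[rb, tb].sum()    # number of first-pass peaks crossed
        if peaks_hit >= 2:
            second[rb, tb] += integral         # assumed deposit location (see lead-in)
    return second
```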


Using Ssm Proxies To Provide Efficient Multiple-Source Multicast Delivery, Daniel Zappala, Aaron Fabbri Nov 2001

Faculty Publications

We consider the possibility that single-source multicast (SSM) will become a universal multicast service, enabling large-scale distribution of content from a few well-known sources to a general audience. Operating under this assumption, we explore the problem of building the traditional IP model of any-source multicast on top of SSM. Toward this end, we design an SSM proxy service that allows any sender to efficiently deliver content to a multicast group. We demonstrate the performance improvements this service offers over standard SSM and describe extensions for access control, dynamic proxy discovery, and multicast proxy distribution.


Modeling Irda Performance: The Effect Of Irlap Negotiation Parameters On Throughput, Scott V. Hansen, Charles D. Knutson, Michael G. Robertson, Franklin E. Sorenson Oct 2001

Faculty Publications

The Infrared Data Association's (IrDA) infrared data transmission protocol is a widely used mechanism for short-range wireless data communications. In order to provide flexibility for connections between devices of potentially disparate capabilities, IrDA devices negotiate the values of several transmission parameters based on the capabilities of the devices establishing the connection. This paper describes the design and implementation of a software tool, Irdaperf, to model IrDA performance based on negotiated transmission parameters. Using Irdaperf, we demonstrate that for fast data rates, maximizing window size and data size are key factors for overcoming the negative effects of a relatively long link …
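A rough, illustrative throughput model in the spirit of the abstract, not Irdaperf's actual equations; the frame overhead and turnaround handling are simplified assumptions:

```python
def irlap_throughput(data_rate_bps, data_size_bytes, window_size, turnaround_s, overhead_bytes=6):
    """A window of `window_size` frames, each carrying `data_size_bytes` of payload plus
    assumed per-frame overhead, is sent before the link turns around and waits
    `turnaround_s` for the acknowledgement. Larger windows and frames amortize a
    relatively long link turnaround, which is the effect the paper quantifies."""
    frame_bits = 8 * (data_size_bytes + overhead_bytes)
    window_time = window_size * frame_bits / data_rate_bps + turnaround_s
    payload_bits = 8 * window_size * data_size_bytes
    return payload_bits / window_time   # effective payload bits per second
```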


Improving Cluster Utilization Through Set Based Allocation Policies, Quinn O. Snell, Julio C. Facelli, Brian D. Haymore, David B. Jackson Sep 2001

Faculty Publications

While clusters have already proven themselves in the world of high performance computing, some clusters are beginning to exhibit resource inefficiencies due to increasing hardware diversity. Much of the success of clusters lies in the use of commodity components built to meet various hardware standards. These standards have allowed a great level of hardware backwards compatibility that is now resulting in a condition referred to as hardware 'drift' or heterogeneity. The hardware heterogeneity introduces problems when diverse compute nodes are allocated to a parallel job, as most parallel jobs are not self-balancing. This paper presents a new method that allows …
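An illustrative set-based allocator, assuming hypothetical node records with cpu_mhz and mem_mb fields; the paper's actual policy is not reproduced here:

```python
from collections import defaultdict

def allocate_homogeneous_set(nodes, requested_count):
    """Group idle nodes by a hardware signature and satisfy the request from a single
    group, so a non-self-balancing parallel job never spans mismatched hardware."""
    groups = defaultdict(list)
    for node in nodes:
        if node['idle']:
            groups[(node['cpu_mhz'], node['mem_mb'])].append(node['name'])
    # prefer the smallest group that still fits, keeping large uniform pools available
    candidates = [g for g in groups.values() if len(g) >= requested_count]
    if not candidates:
        return None
    return min(candidates, key=len)[:requested_count]
```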


Livelock Avoidance For Meta-Schedulers, Mark J. Clement, John Jardine, Quinn O. Snell Aug 2001

Faculty Publications

Meta-scheduling, a process which allows a user to schedule a job across multiple sites, has a potential for livelock. Current systems avoid livelock by locking down resources at multiple sites and allowing a metascheduler to control the resources during the lock down period or by limiting job size to that which will fit on one site. The former approach leads to poor utilization; the latter poses limitations on job size. This research uses BYU's Meta-scheduler (YMS) which allows jobs to be scheduled across multiple sites without the need for locking down the nodes. YMS avoids livelock through exponential back-off. This …
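A sketch of exponential back-off for multi-site reservation attempts; the try_reserve callable and the delay parameters are illustrative, not YMS's actual protocol:

```python
import random
import time

def schedule_with_backoff(try_reserve, sites, max_attempts=8, base_delay=1.0):
    """If a multi-site reservation cannot be completed, release it and retry after a
    randomized, exponentially growing delay so competing meta-jobs stop colliding
    (livelock) instead of holding nodes locked down between attempts."""
    for attempt in range(max_attempts):
        if try_reserve(sites):          # caller-supplied; returns True on success
            return True
        delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
        time.sleep(delay)
    return False
```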


Improved Hopfield Networks By Training With Noisy Data, Fred Clift, Tony R. Martinez Jul 2001

Faculty Publications

A new approach to training a generalized Hopfield network is developed and evaluated in this work. Both the weight symmetricity constraint and the zero self-connection constraint are removed from standard Hopfield networks. Training is accomplished with Back-Propagation Through Time, using noisy versions of the memorized patterns. Training in this way is referred to as Noisy Associative Training (NAT). Performance of NAT is evaluated on both random and correlated data. NAT has been tested on several data sets, with a large number of training runs for each experiment. The data sets used include uniformly distributed random data and several data sets …
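A sketch of the data-generation side of Noisy Associative Training, assuming ±1 patterns; the noise rate and number of copies are illustrative, and the backpropagation-through-time training loop itself is omitted:

```python
import numpy as np

def noisy_training_pairs(patterns, noise_rate=0.1, copies=20, rng=None):
    """Corrupt each memorized +/-1 pattern by flipping a fraction of its components;
    the generalized Hopfield network is then trained to map each noisy version back to
    the clean pattern."""
    rng = rng or np.random.default_rng()
    inputs, targets = [], []
    for p in patterns:
        for _ in range(copies):
            flips = rng.random(p.shape) < noise_rate
            inputs.append(np.where(flips, -p, p))   # flip the selected components
            targets.append(p)
    return np.array(inputs), np.array(targets)
```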


Optimal Artificial Neural Network Architecture Selection For Bagging, Timothy L. Andersen, Tony R. Martinez, Michael E. Rimer Jul 2001

Faculty Publications

This paper studies the performance of standard architecture selection strategies, such as cost/performance and CV-based strategies, for voting methods such as bagging. It is shown that standard architecture selection strategies are not optimal for voting methods and tend to underestimate the complexity of the optimal network architecture, since they only examine the performance of the network on an individual basis and do not consider the correlation between responses from multiple networks.


Improving The Hopfield Network Through Beam Search, Tony R. Martinez, Xinchuan Zeng Jul 2001

Faculty Publications

In this paper we propose a beam search mechanism to improve the performance of the Hopfield network for solving optimization problems. The beam search readjusts the top M (M > 1) activated neurons to more similar activation levels in the early phase of relaxation, so that the network has the opportunity to explore more alternative, potentially better solutions. We evaluated this approach using a large number of simulations (20,000 for each parameter setting), based on 200 randomly generated city distributions of the 10-city traveling salesman problem. The results show that the beam search has the capability of significantly improving the network …
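A sketch of the readjustment step described above, with an assumed pull factor; the full relaxation loop and the TSP energy terms are omitted:

```python
import numpy as np

def readjust_top_m(activations, m=3, pull=0.5):
    """Early in relaxation, pull the top M activations toward their common mean so
    several candidate assignments stay 'alive' rather than one neuron winning
    immediately, letting the network explore alternative solutions."""
    a = activations.copy()
    top = np.argsort(a)[-m:]                 # indices of the M most activated neurons
    mean_top = a[top].mean()
    a[top] += pull * (mean_top - a[top])     # move each toward the shared level
    return a
```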


Lazy Training: Improving Backpropagation Learning Through Network Interaction, Timothy L. Andersen, Tony R. Martinez, Michael E. Rimer Jul 2001

Faculty Publications

Backpropagation, similar to most high-order learning algorithms, is prone to overfitting. We address this issue by introducing interactive training (IT), a logical extension to backpropagation training that employs interaction among multiple networks. This method is based on the theory that centralized control is more effective for learning in deep problem spaces in a multi-agent paradigm. IT methods allow networks to work together to form more complex systems while not restraining their individual ability to specialize. Lazy training, an implementation of IT that minimizes misclassification error, is presented. Lazy training discourages overfitting and is conducive to higher accuracy in multiclass problems …


Speed Training: Improving The Rate Of Backpropagation Learning Through Stochastic Sample Presentation, Timothy L. Andersen, Tony R. Martinez, Michael E. Rimer Jul 2001

Faculty Publications

Artificial neural networks provide an effective empirical predictive model for pattern classification. However, using complex neural networks to learn very large training sets is often problematic, imposing prohibitive time constraints on the training process. We present four practical methods for dramatically decreasing training time through dynamic stochastic sample presentation, a technique we call speed training. These methods are shown to be robust in retaining generalization accuracy over a diverse collection of real-world data sets. In particular, the SET technique achieves a training speedup of 4278% on a large OCR database with no detectable loss in generalization.
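An illustrative form of dynamic stochastic sample presentation (not the paper's SET method, whose exact rule the abstract does not spell out): samples are presented with probability tied to their current error.

```python
import numpy as np

def select_training_samples(errors, floor=0.05, rng=None):
    """Each epoch, present a sample with probability proportional to its current error,
    with a small floor so well-learned samples are still seen occasionally; samples the
    network already handles well are mostly skipped, which is where the speedup comes from."""
    rng = rng or np.random.default_rng()
    scale = errors.max() or 1.0
    probs = np.maximum(errors / scale, floor)
    return np.flatnonzero(rng.random(errors.shape) < probs)
```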


The Need For Small Learning Rates On Large Problems, Tony R. Martinez, D. Randall Wilson Jul 2001

Faculty Publications

In gradient descent learning algorithms such as error backpropagation, the learning rate parameter can have a significant effect on generalization accuracy. In particular, decreasing the learning rate below that which yields the fastest convergence can significantly improve generalization accuracy, especially on large, complex problems. The learning rate also directly affects training speed, but not necessarily in the way that many people expect. Many neural network practitioners currently attempt to use the largest learning rate that still allows for convergence, in order to improve training speed. However, a learning rate that is too large can be as slow as a learning …


On The Utility Of Entanglement In Quantum Neural Computing, Dan A. Ventura Jul 2001

Faculty Publications

Efforts in combining quantum and neural computation are briefly discussed and the concept of entanglement as it applies to this subject is addressed. Entanglement is perhaps the least understood aspect of quantum systems used for computation, yet it is apparently most responsible for their computational power. This paper argues for the importance of understanding and utilizing entanglement in quantum neural computation.


An Evaluation Of Shared Multicast Trees With Multiple Active Cores, Daniel Zappala, Aaron Fabbri Jul 2001

Faculty Publications

Core-based multicast trees use less router state, but have significant drawbacks when compared to shortest-path trees, namely higher delay and poor fault tolerance. We evaluate the feasibility of using multiple independent cores within a shared multicast tree. We consider several basic designs and discuss how using multiple cores improves fault tolerance without sacrificing router state. We examine the performance of multiple-core trees with respect to single-core trees and find that adding cores significantly lowers delay without increasing cost. Moreover, it takes only a small number of cores, placed with a k-center approximation, for a multiple-core tree to have lower delay …
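A sketch of the classic greedy 2-approximation for k-center, suggesting how multiple cores might be placed; `distance` is assumed to be a precomputed hop or delay metric, and the paper's exact placement procedure is not reproduced:

```python
def greedy_k_center(distance, candidates, k):
    """Pick an arbitrary first core, then repeatedly add the candidate node farthest
    from all cores chosen so far. distance[u][v] is assumed to be symmetric."""
    cores = [candidates[0]]
    while len(cores) < k and len(cores) < len(candidates):
        remaining = [v for v in candidates if v not in cores]
        farthest = max(remaining, key=lambda v: min(distance[v][c] for c in cores))
        cores.append(farthest)
    return cores
```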


Interpolating Implicit Surfaces From Scattered Surface Data Using Compactly Supported Radial Basis Functions, Bryan S. Morse, David T. Chen, Penny Rheingans, Kalpathi Subramanian, Terry S. Yoo May 2001

Faculty Publications

We describe algebraic methods for creating implicit surfaces using linear combinations of radial basis interpolants to form complex models from scattered surface points. Shapes with arbitrary topology are easily represented without the usual interpolation or aliasing errors arising from discrete sampling. These methods were first applied to implicit surfaces by Savchenko, et al. and later developed independently by Turk and O'Brien as a means of performing shape interpolation. Earlier approaches were limited as a modeling mechanism because of the order of the computational complexity involved. We explore and extend these implicit interpolating methods to make them suitable for systems of …
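A sketch of a standard compactly supported RBF fit, using a Wendland kernel and a dense solve; the paper's extensions for making large systems practical are not shown:

```python
import numpy as np

def fit_rbf_implicit(points, values, radius=1.0):
    """Solve sum_j w_j * phi(|x_i - x_j|) = f_i for the weights w_j, where surface
    points carry value 0 and off-surface (normal-offset) points carry +/-1. `points`
    is an (N, d) array. Compact support is what makes sparse solvers viable, but a
    dense solve keeps this sketch short."""
    def phi(r):
        s = np.clip(1.0 - r / radius, 0.0, None)      # Wendland C^2 kernel
        return (s ** 4) * (4.0 * r / radius + 1.0)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    weights = np.linalg.solve(phi(d), values)
    def implicit(x):
        r = np.linalg.norm(points - x, axis=-1)
        return phi(r) @ weights                        # implicit function value at x
    return implicit
```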


Image Reconstruction Using Data-Dependent Triangulation, Thomas W. Sederberg, Xiaohua Yu, Bryan S. Morse May 2001

Faculty Publications

Image reconstruction based on data-dependent triangulation with new cost functions and optimization can create higher quality images than traditional bilinear or bicubic spline reconstruction. The article presents a novel method for image reconstruction using a piecewise linear intensity surface whose elements don't generally align with the coordinate axes. This method is based on the technique of data-dependent triangulation (DDT) that N. Dyn et al. (1990) introduced and has proven capable of producing more pleasing reconstructions than axis-aligned methods.


Effective Bandwidth For Traffic Engineering, Mark J. Clement, Rob Kunz, Seth Nielson, Quinn O. Snell May 2001

Faculty Publications

In today’s Internet, demand is increasing for guarantees of speed and efficiency. Current routers are very limited in the type and quantity of observed data they can provide, making it difficult for providers to maximize utilization without the risk of degraded throughput. This research uses statistical data currently provided by router vendors to estimate the impact of changes in network configuration on the probability of link overflow. This allows service providers to calculate, in advance, the effect of grooming on a network, eliminating the conservative trial-and-error approach normally used. These predictions are made using Large Deviation Theory, which focuses on …
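A sketch of the standard large-deviations effective-bandwidth estimator alpha(s) = (1/s) log E[exp(s X)], computed from per-interval traffic volumes; how the paper maps this onto vendor-provided router statistics is not shown:

```python
import numpy as np

def effective_bandwidth(samples, s):
    """Estimate alpha(s) = (1/s) * log E[exp(s * X)] from observed per-interval traffic
    volumes X. The space parameter s trades off mean rate (s -> 0) against peak rate
    (s -> infinity); comparing sum of effective bandwidths to link capacity bounds the
    overflow probability."""
    x = np.asarray(samples, dtype=float)
    return np.log(np.mean(np.exp(s * x))) / s
```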