Keywords
- Algorithms (4)
- Linear algebra (3)
- High performance computing (2)
- Linear systems (2)
- Matrices (2)
- 5-wave resonances (1)
- Algebraic multigrid (1)
- Algorithm (1)
- Analytics (1)
- Artificial intelligence (1)
- Backward stability (1)
- Block Cholesky approaches (1)
- Brownian Motion (1)
- Classical memory (1)
- Classification Accuracy (1)
- Coarsening algorithm (1)
- Commodity processors (1)
- Complexity (1)
- Computational approaches (1)
- Computer architecture (1)
- Conjugate gradient (1)
- Convergence (1)
- Data sparsity (1)
- Decomposition (1)
- Dense matrices (1)
- Dimensionality Reduction (1)
- Dynamic scheduling (1)
- Eigenpairs (1)
- Exascale (1)
- Exascale algorithms (1)
Articles 1 - 17 of 17
Full-Text Articles in Mathematics
On The Application Of Principal Component Analysis To Classification Problems, Jianwei Zheng, Cyril Rakovski
Mathematics, Physics, and Computer Science Faculty Articles and Research
Principal Component Analysis (PCA) is a commonly used technique that exploits the correlation structure of the original variables to reduce the dimensionality of the data. The reduction is achieved by retaining only the first few principal components for subsequent analysis. The usual inclusion criterion keeps the leading components whose cumulative proportion of the total variance exceeds a predetermined threshold. We show that in certain classification problems, even an extremely high inclusion threshold can negatively impact the classification accuracy: the omission of small-variance principal components can severely diminish the performance of the models. We noticed this phenomenon in …
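The phenomenon is easy to reproduce. Below is a minimal NumPy sketch (the data, seed, and nearest-centroid classifier are illustrative choices, not taken from the paper): the dominant principal component captures over 99% of the variance yet carries no class information, while the small-variance component a threshold rule would discard separates the classes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# High-variance feature carries no class signal; low-variance feature does.
y = rng.integers(0, 2, n)
x1 = rng.normal(0.0, 10.0, n)       # variance ~100, label-independent
x2 = rng.normal(0.0, 0.5, n) + y    # variance ~0.5, shifted by the class label
X = np.column_stack([x1, x2])

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)     # proportion of variance per component

def centroid_accuracy(Z, y):
    """Nearest-class-centroid accuracy on projected data Z."""
    c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
    return (pred == y).mean()

pc1 = Xc @ Vt[0:1].T   # dominant component: >99% of total variance
pc2 = Xc @ Vt[1:2].T   # small-variance component a threshold rule would drop
print(var_ratio, centroid_accuracy(pc1, y), centroid_accuracy(pc2, y))
```

On this toy data the first component alone classifies at chance level, while the "negligible" second component classifies well, which is exactly the failure mode the abstract describes.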
Contributions To The Teaching And Learning Of Fluid Mechanics, Ashwin Vaidya
Department of Mathematics Faculty Scholarship and Creative Works
This issue showcases a compilation of papers on fluid mechanics (FM) education, covering different subtopics of the subject. The success of the first volume [1] prompted us to consider a follow-up special issue on the topic, which has also been very successful in garnering an impressive variety of submissions. As a classical branch of science, the beauty and complexity of fluid dynamics cannot be overemphasized. It is an extremely well-studied subject that has become a significant component of several major scientific disciplines, including aerospace engineering, astrophysics, atmospheric science (including climate modeling), biological and biomedical science …
Application Of Randomness In Finance, Jose Sanchez, Daanial Ahmad, Satyanand Singh
Publications and Research
Brownian Motion, also known as a Wiener process, can be thought of as a random walk. In our project we briefly discussed the fluctuations of financial indices, related them to Brownian Motion, and used it to model stock prices.
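A standard way to connect the two is geometric Brownian Motion, S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t), where W_t is a Wiener process. A short sketch (the drift, volatility, and starting price are hypothetical, not values from the project):

```python
import numpy as np

rng = np.random.default_rng(42)
S0, mu, sigma = 100.0, 0.05, 0.2   # initial price, drift, volatility (hypothetical)
T, steps = 1.0, 252                # one trading year, daily steps
dt = T / steps

dW = rng.normal(0.0, np.sqrt(dt), steps)   # independent Brownian increments
W = np.cumsum(dW)                          # sampled Wiener path W_t
t = np.linspace(dt, T, steps)
S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)  # simulated price path
print(S[-1])
```

Each run produces one plausible price path; averaging many such paths recovers the expected growth exp(mu*t).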
Lecture 05: The Convergence Of Big Data And Extreme Computing, David Keyes
Mathematical Sciences Spring Lecture Series
As simulation and analytics enter the exascale era, numerical algorithms, particularly implicit solvers that couple vast numbers of degrees of freedom, must span a widening gap between ambitious applications and austere architectures to support them. We present fifteen universals for researchers in scalable solvers: imperatives from computer architecture that scalable solvers must respect, strategies towards achieving them that are currently well established, and additional strategies currently being developed for an effective and efficient exascale software ecosystem. We consider recent generalizations of what it means to “solve” a computational problem, which suggest that we have often been “oversolving” them at the …
Lecture 09: Hierarchically Low Rank And Kronecker Methods, Rio Yokota
Mathematical Sciences Spring Lecture Series
Exploiting structures of matrices goes beyond identifying their non-zero patterns. In many cases, dense full-rank matrices have low-rank submatrices that can be exploited to construct fast approximate algorithms. In other cases, dense matrices can be decomposed into Kronecker factors that are much smaller than the original matrix. Sparsity is a consequence of the connectivity of the underlying geometry (mesh, graph, interaction list, etc.), whereas the rank-deficiency of submatrices is closely related to the distance within this underlying geometry. For high dimensional geometry encountered in data science applications, the curse of dimensionality poses a challenge for rank-structured approaches. On the other …
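The low-rank observation can be seen directly. In this sketch (kernel and cluster geometry are illustrative, not from the talk), a dense kernel block between two well-separated 1D point clusters is full-size but numerically low rank, so a truncated SVD compresses it accurately — the basic building block of hierarchically low-rank formats:

```python
import numpy as np

# Kernel K(x, y) = 1/|x - y| between two well-separated 1D point clusters.
# The block is dense, but its singular values decay rapidly with the
# separation distance, so a low-rank factorization is accurate.
x = np.linspace(0.0, 1.0, 200)        # source cluster
y = np.linspace(3.0, 4.0, 200)        # target cluster, distance ~2 away
A = 1.0 / np.abs(x[:, None] - y[None, :])

U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-10 * s[0]))     # numerical rank at relative tolerance 1e-10
Ak = U[:, :k] * s[:k] @ Vt[:k]        # rank-k approximation, O(k(m+n)) storage
err = np.linalg.norm(A - Ak) / np.linalg.norm(A)
print(k, err)
```

The numerical rank k stays far below 200, while the relative error matches the truncation tolerance; moving the clusters closer together raises k, reflecting the distance dependence noted above.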
Lecture 08: Partial Eigen Decomposition Of Large Symmetric Matrices Via Thick-Restart Lanczos With Explicit External Deflation And Its Communication-Avoiding Variant, Zhaojun Bai
Mathematical Sciences Spring Lecture Series
There are continual and compelling needs for computing many eigenpairs of a very large Hermitian matrix in physical simulations and data analysis. Though the Lanczos method is effective for computing a few eigenvalues, it can be expensive for computing a large number of them. To improve the performance of the Lanczos method, in this talk we will present a combination of explicit external deflation (EED) with an s-step variant of thick-restart Lanczos (s-step TRLan). The s-step Lanczos method can achieve an order-of-s reduction in data movement, while EED enables computing eigenpairs in batches along with a number …
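For orientation, here is a bare-bones single-pass Lanczos iteration with full reorthogonalization (a much simpler relative of the thick-restart and s-step variants in the talk; the test matrix and iteration count are illustrative). Well-separated extremal eigenvalues converge after a few dozen steps:

```python
import numpy as np

def lanczos(A, k, rng):
    """Basic Lanczos with full reorthogonalization: build an orthonormal
    Krylov basis Q and a tridiagonal T = Q^T A Q whose eigenvalues (Ritz
    values) approximate extremal eigenvalues of the symmetric matrix A."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, Y = np.linalg.eigh(T)
    return theta, Q @ Y            # Ritz values and Ritz vectors

rng = np.random.default_rng(1)
n = 500
d = np.concatenate([np.linspace(1.0, 10.0, n - 2), [50.0, 100.0]])
A = np.diag(d)                     # known spectrum: bulk in [1, 10], two outliers
theta, V = lanczos(A, 40, rng)
print(theta[-2:])                  # leading Ritz values
```

Computing *many* eigenpairs this way becomes expensive, which is precisely what deflation (locking converged pairs out of the iteration) and restarting address.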
Lecture 04: Spatial Statistics Applications Of Hrl, Trl, And Mixed Precision, David Keyes
Mathematical Sciences Spring Lecture Series
Lecture 14: Randomized Algorithms For Least Squares Problems, Ilse C.F. Ipsen
Mathematical Sciences Spring Lecture Series
The emergence of massive data sets over the past twenty or so years has led to the development of Randomized Numerical Linear Algebra. Randomized matrix algorithms perform random sketching and sampling of rows or columns in order to reduce the problem dimension or compute low-rank approximations. We review randomized algorithms for the solution of least squares/regression problems based on row sketching from the left or column sketching from the right. These algorithms tend to be efficient and accurate on matrices that have many more rows than columns. We present probabilistic bounds for the amount of sampling required to achieve a …
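The simplest instance of row sketching from the left is sketch-and-solve: compress a tall matrix A with a random sketching matrix S and solve the small problem min ||SAx - Sb|| instead. A sketch with a Gaussian sketching matrix (dimensions and sketch size here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20000, 10
A = rng.normal(size=(m, n))                    # tall matrix: many more rows than columns
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)     # noisy right-hand side

x_exact = np.linalg.lstsq(A, b, rcond=None)[0] # full least squares solution

s = 50 * n                                     # sketch size: a modest multiple of n
S = rng.normal(size=(s, m)) / np.sqrt(s)       # Gaussian row sketch from the left
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

print(np.linalg.norm(x_sketch - x_exact))
```

The sketched problem has s rows instead of m, yet its solution lands close to the exact one; the probabilistic bounds mentioned above quantify how large s must be for this to hold with high probability.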
Lecture 13: A Low-Rank Factorization Framework For Building Scalable Algebraic Solvers And Preconditioners, X. Sherry Li
Mathematical Sciences Spring Lecture Series
Factorization-based preconditioning algorithms, most notably incomplete LU (ILU) factorization, have been shown to be robust and applicable to a wide range of problems. However, traditional ILU algorithms are not amenable to scalable implementation. In recent years, there has been considerable investigation into using low-rank compression techniques to build approximate factorizations.
A key to achieving lower complexity is the use of hierarchical matrix algebra, stemming from the H-matrix research. In addition, the multilevel algorithm paradigm provides a good vehicle for a scalable implementation. The goal of this lecture is to give an overview of the various hierarchical matrix formats, such …
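The basic pattern — an approximate factorization used as a preconditioner for an iterative solver — can be sketched with SciPy's threshold-based ILU on a 2D Poisson matrix (the drop tolerance, fill factor, and test problem are illustrative; this is plain ILU, not the hierarchical low-rank variants of the lecture):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# 2D Poisson matrix on an nx-by-nx grid via Kronecker products.
nx = 30
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nx, nx))
I = sp.identity(nx)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
n = A.shape[0]
b = np.ones(n)

# Approximate factorization: entries below drop_tol are discarded,
# trading accuracy for sparsity in the factors.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)   # apply M^{-1} as a preconditioner

x, info = gmres(A, b, M=M)                     # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```

Replacing the dropped-entry compression with low-rank compression of off-diagonal blocks, as the lecture describes, keeps this same solve loop but lowers the factorization complexity.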
Lecture 03: Hierarchically Low Rank Methods And Applications, David Keyes
Mathematical Sciences Spring Lecture Series
Lecture 02: Tile Low-Rank Methods And Applications (W/Review), David Keyes
Mathematical Sciences Spring Lecture Series
Lecture 11: The Road To Exascale And Legacy Software For Dense Linear Algebra, Jack Dongarra
Mathematical Sciences Spring Lecture Series
In this talk, we will look at the current state of high performance computing and at the next stage, extreme computing. With extreme computing there will be fundamental changes in the character of floating-point arithmetic and data movement, and we will examine how extreme-scale computing has caused algorithm and software developers to change their way of thinking about how to implement and program applications.
Lecture 00: Opening Remarks: 46th Spring Lecture Series, Tulin Kaman
Mathematical Sciences Spring Lecture Series
Opening remarks for the 46th Annual Mathematical Sciences Spring Lecture Series at the University of Arkansas, Fayetteville.
Lecture 06: The Impact Of Computer Architectures On The Design Of Algebraic Multigrid Methods, Ulrike Yang
Mathematical Sciences Spring Lecture Series
Algebraic multigrid (AMG) is a popular iterative solver and preconditioner for large sparse linear systems. When designed well, it is algorithmically scalable, enabling it to solve increasingly larger systems efficiently. While it contains many highly parallel building blocks, the original method also included several highly sequential components. A large amount of research has been performed over several decades to design new components that perform well on high-performance computers; indeed, AMG has been shown to scale to more than a million processes. However, with single-core speeds plateauing, future increases in computing performance need to …
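The building blocks in question can be seen in a minimal two-grid cycle — shown here geometrically for 1D Poisson rather than algebraically, as an illustrative stand-in for AMG: a parallel-friendly smoother (weighted Jacobi), restriction, a Galerkin coarse-grid solve, and prolongation.

```python
import numpy as np

def weighted_jacobi(A, x, b, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi smoother: damps high-frequency error components."""
    D = A.diagonal()
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, P):
    """One V(3,3) two-grid cycle: smooth, coarse-grid correction, smooth."""
    x = weighted_jacobi(A, x, b)
    r = b - A @ x
    Ac = P.T @ A @ P                      # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, P.T @ r)  # restrict, solve, prolongate
    return weighted_jacobi(A, x, b)

n = 127                                   # fine-grid interior points
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson matrix

# Linear-interpolation prolongation from 63 coarse points to 127 fine points.
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    i = 2 * j + 1                         # coarse point j sits at fine index i
    P[i, j] = 1.0
    P[i - 1, j] += 0.5
    P[i + 1, j] += 0.5

b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x, P)
print(np.linalg.norm(A @ x - b))
```

The residual drops by a grid-size-independent factor per cycle, which is the algorithmic scalability described above; AMG constructs P and the coarse operator from the matrix entries alone, and the design of those components for parallel machines is the subject of the lecture.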
Lecture 01: Scalable Solvers: Universals And Innovations, David Keyes
Mathematical Sciences Spring Lecture Series
Lecture 10: Preconditioned Iterative Methods For Linear Systems, Edmond Chow
Mathematical Sciences Spring Lecture Series
Iterative methods for the solution of linear systems of equations – such as stationary, semi-iterative, and Krylov subspace methods – are classical methods taught in numerical analysis courses, but adapting these methods to run efficiently at large scale on high-performance computers is challenging and a constantly evolving topic. Preconditioners – necessary to aid the convergence of iterative methods – come in many forms, from algebraic to physics-based, are regularly being developed for linear systems from different classes of problems, and likewise evolve with high-performance computers. This lecture will cover the background and some recent developments on iterative methods and preconditioning …
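The effect of a preconditioner can be sketched with conjugate gradients and simple Jacobi (diagonal) scaling on a deliberately ill-scaled SPD system (the test matrix and tolerances are illustrative choices):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the
    preconditioner M^{-1} to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# 1D Poisson stiffness plus a badly scaled diagonal: SPD, and simple
# diagonal (Jacobi) preconditioning removes most of the ill-conditioning.
rng = np.random.default_rng(0)
n = 200
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = T + np.diag(rng.uniform(1.0, 100.0, n))
b = rng.normal(size=n)

x_plain, it_plain = pcg(A, b, lambda r: r)             # unpreconditioned CG
x_jac, it_jac = pcg(A, b, lambda r: r / A.diagonal())  # Jacobi-preconditioned CG
print(it_plain, it_jac)
```

Both runs reach the same solution, but the preconditioned iteration needs far fewer steps — the same trade the lecture examines for much more sophisticated algebraic and physics-based preconditioners at scale.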
Five-Wave Resonances In Deep Water Gravity Waves: Integrability, Numerical Simulations And Experiments, Dan Lucas, Marc Perlin, Dian-Yong Liu, Shane Walsh, Rossen Ivanov, Miguel D. Bustamante
Articles
In this work we consider the problem of finding the simplest arrangement of resonant deep water gravity waves in one-dimensional propagation from three perspectives: theoretical, numerical and experimental. Theoretically this requires using a normal-form Hamiltonian that focuses on 5-wave resonances. The simplest arrangement is based on a triad of wave vectors K1 + K2 = K3 (satisfying specific ratios) along with their negatives, corresponding to a scenario of encountering wave packets, amenable to experiments and numerical simulations. The normal-form equations for these encountering waves in resonance are shown to be non-integrable, but they admit an integrable reduction …