Open Access. Powered by Scholars. Published by Universities.®

Algebra Commons

Articles 1 - 5 of 5

Full-Text Articles in Algebra

Lecture 14: Randomized Algorithms For Least Squares Problems, Ilse C.F. Ipsen Apr 2021

Mathematical Sciences Spring Lecture Series

The emergence of massive data sets over the past twenty or so years has led to the development of Randomized Numerical Linear Algebra. Randomized matrix algorithms perform random sketching and sampling of rows or columns in order to reduce the problem dimension or compute low-rank approximations. We review randomized algorithms for the solution of least squares/regression problems, based on row sketching from the left or column sketching from the right. These algorithms tend to be efficient and accurate on matrices that have many more rows than columns. We present probabilistic bounds for the amount of sampling required to achieve a …
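The abstract stops short of any concrete algorithm, so the following is only a minimal NumPy sketch of the sketch-and-solve idea for row sketching from the left; the Gaussian sketch, the sketch size of 500, and the test problem are illustrative assumptions, not taken from the lecture.

    import numpy as np

    def sketch_and_solve(A, b, sketch_rows, seed=None):
        # Gaussian row sketch S compresses the m rows of A and b down to
        # sketch_rows rows; then solve the small problem min_x ||S A x - S b||_2.
        rng = np.random.default_rng(seed)
        m = A.shape[0]
        S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
        x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
        return x

    # tall, thin test problem: many more rows than columns
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20000, 50))
    b = A @ rng.standard_normal(50) + 0.01 * rng.standard_normal(20000)

    x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
    x_sketch = sketch_and_solve(A, b, sketch_rows=500, seed=1)
    print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))

Because the sketch compresses 20,000 rows to 500 before the factorization, the small solve is much cheaper than the full one, at the cost of a modest loss of accuracy.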


Lecture 13: A Low-Rank Factorization Framework For Building Scalable Algebraic Solvers And Preconditioners, X. Sherry Li Apr 2021

Mathematical Sciences Spring Lecture Series

Factorization-based preconditioning algorithms, most notably incomplete LU (ILU) factorization, have been shown to be robust and applicable to a wide range of problems. However, traditional ILU algorithms are not amenable to scalable implementation. In recent years, there have been many investigations into using low-rank compression techniques to build approximate factorizations. A key to achieving lower complexity is the use of hierarchical matrix algebra, stemming from H-matrix research. In addition, the multilevel algorithm paradigm provides a good vehicle for a scalable implementation. The goal of this lecture is to give an overview of the various hierarchical matrix formats, such …
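The lecture itself concerns a low-rank factorization framework; as a much simpler, generic illustration of factorization-based preconditioning, here is incomplete LU used as a preconditioner for GMRES via SciPy. The test matrix, drop tolerance, and fill factor are assumptions made for this sketch, not parameters from the lecture.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spilu, LinearOperator, gmres

    # 1-D Poisson-style test matrix in compressed sparse column format
    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # incomplete LU with dropping: drop_tol and fill_factor trade the
    # accuracy of the approximate factors against their sparsity
    ilu = spilu(A, drop_tol=1e-4, fill_factor=10)

    # wrap the ILU triangular solves as an operator M that approximates A^{-1}
    M = LinearOperator(A.shape, ilu.solve)

    x, info = gmres(A, b, M=M)
    print("converged" if info == 0 else "gmres returned info = %d" % info)

The same Krylov-plus-preconditioner structure carries over when the ILU factors are replaced by the hierarchical low-rank approximate factorizations discussed in the lecture.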


Lecture 03: Hierarchically Low Rank Methods And Applications, David Keyes Apr 2021

Mathematical Sciences Spring Lecture Series

As simulation and analytics enter the exascale era, numerical algorithms, particularly implicit solvers that couple vast numbers of degrees of freedom, must span a widening gap between ambitious applications and austere architectures to support them. We present fifteen universals for researchers in scalable solvers: imperatives from computer architecture that scalable solvers must respect, strategies towards achieving them that are currently well established, and additional strategies currently being developed for an effective and efficient exascale software ecosystem. We consider recent generalizations of what it means to “solve” a computational problem, which suggest that we have often been “oversolving” them at the …


Lecture 02: Tile Low-Rank Methods And Applications (W/Review), David Keyes Apr 2021

Mathematical Sciences Spring Lecture Series

As simulation and analytics enter the exascale era, numerical algorithms, particularly implicit solvers that couple vast numbers of degrees of freedom, must span a widening gap between ambitious applications and austere architectures to support them. We present fifteen universals for researchers in scalable solvers: imperatives from computer architecture that scalable solvers must respect, strategies towards achieving them that are currently well established, and additional strategies currently being developed for an effective and efficient exascale software ecosystem. We consider recent generalizations of what it means to “solve” a computational problem, which suggest that we have often been “oversolving” them at the …
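As a rough, self-contained sketch of the tile low-rank idea in this lecture's title, the snippet below partitions a matrix into fixed-size tiles, keeps diagonal tiles dense, and compresses each off-diagonal tile with a truncated SVD; the tile size, tolerance, and kernel test matrix are illustrative assumptions, not taken from the lecture.

    import numpy as np

    def compress_tile(T, tol):
        # truncated SVD of one tile: keep singular values above tol and
        # return low-rank factors (U, V) with T ~= U @ V
        U, s, Vt = np.linalg.svd(T, full_matrices=False)
        k = max(1, int(np.sum(s > tol)))
        return U[:, :k] * s[:k], Vt[:k, :]

    def tile_low_rank(A, tile, tol):
        # store off-diagonal tiles as low-rank factor pairs, diagonal tiles dense
        n = A.shape[0]
        tiles = {}
        for i in range(0, n, tile):
            for j in range(0, n, tile):
                block = A[i:i + tile, j:j + tile]
                tiles[i, j] = block.copy() if i == j else compress_tile(block, tol)
        return tiles

    def entries_stored(tiles):
        # count the floating-point entries kept by the tile representation
        total = 0
        for block in tiles.values():
            if isinstance(block, tuple):
                U, V = block
                total += U.size + V.size
            else:
                total += block.size
        return total

    # kernel matrix whose off-diagonal tiles are numerically low rank
    x = np.linspace(0.0, 1.0, 512)
    A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))
    tiles = tile_low_rank(A, tile=64, tol=1e-8)
    print("stored entries / dense entries:", entries_stored(tiles) / A.size)

Actual tile low-rank solvers operate directly on the factor pairs (for matrix-vector products, factorization, and so on) rather than reconstructing the tiles, which is where the memory and flop savings come from.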


Fast Monte Carlo Algorithms For Computing A Low-Rank Approximation To A Matrix, Vlad S. Burca Apr 2014

Senior Theses and Projects

Many of today's applications deal with large quantities of data, from DNA analysis to image processing and movie recommendation algorithms. Most of these systems store the data in very large matrices. In order to perform analysis on the collected data, these large matrices have to be stored in the RAM (random-access memory) of the computing system. This is expensive, since RAM is a scarce computational resource. Ideally, one would like to be able to store most of the data on disk (hard disk drive) while loading only the necessary parts of the …
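The abstract is cut off here; as a generic illustration of the Monte Carlo column-sampling approach such work builds on (not necessarily the exact algorithm analyzed in this thesis), here is a short NumPy sketch in which the sample size c, the target rank k, and the test matrix are assumptions.

    import numpy as np

    def monte_carlo_low_rank(A, c, k, seed=None):
        # Sample c columns of A with probability proportional to their squared
        # norms, rescale so that C @ C.T estimates A @ A.T, and project A onto
        # the top-k left singular vectors of the sampled matrix C.
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        col_norms = np.sum(A * A, axis=0)
        p = col_norms / col_norms.sum()
        idx = rng.choice(n, size=c, replace=True, p=p)
        C = A[:, idx] / np.sqrt(c * p[idx])
        U, _, _ = np.linalg.svd(C, full_matrices=False)
        Hk = U[:, :k]                  # approximate top-k left singular vectors
        return Hk @ (Hk.T @ A)         # rank-k approximation of A

    rng = np.random.default_rng(0)
    # nearly low-rank test matrix: rank-30 signal plus small noise
    A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 800))
    A += 0.01 * rng.standard_normal(A.shape)

    A_k = monte_carlo_low_rank(A, c=150, k=30, seed=1)
    print(np.linalg.norm(A - A_k) / np.linalg.norm(A))

Only the c sampled (and rescaled) columns need to be held in fast memory to build the approximate basis, which connects to the memory constraint described in the abstract.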