Engineering Commons

Articles 1 - 5 of 5

Full-Text Articles in Engineering

Distributed Algorithms In Large-Scaled Empirical Risk Minimization: Non-Convexity, Adaptive-Sampling, And Matrix-Free Second-Order Methods, Xi He Jan 2019

Theses and Dissertations

The rising amount of data has significantly changed classical approaches to statistical modeling. Special methods are designed to infer meaningful relationships and hidden patterns from these large datasets, forming the foundation of the field called Machine Learning (ML). Such ML techniques have already been applied widely in various areas and achieved compelling success. At the same time, the huge amount of data also demands a deep transformation of current techniques, including advanced data storage, new efficient large-scale algorithms, and their distributed/parallelized implementation. There is a broad class of ML methods that can be interpreted as Empirical Risk ...
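As context for the terms in this abstract, the following is a minimal, hypothetical Python sketch (not taken from the dissertation) of an empirical risk minimization objective together with a subsampled, matrix-free Hessian-vector product, the kind of building block that adaptive-sampling second-order methods rely on; the logistic-regression loss, function names, and sampling scheme are illustrative assumptions.

import numpy as np

def logistic_loss(w, X, y):
    # Empirical risk: average logistic loss over the n samples.
    z = X @ w
    return np.mean(np.log1p(np.exp(-y * z)))

def gradient(w, X, y):
    # Full gradient of the empirical risk.
    z = X @ w
    s = -y / (1.0 + np.exp(y * z))          # per-sample derivative w.r.t. z
    return X.T @ s / X.shape[0]

def subsampled_hessian_vector_product(w, X, y, v, sample_size, rng):
    # Matrix-free second-order information: returns H_S v for a random
    # subsample S without ever forming the d-by-d Hessian explicitly.
    idx = rng.choice(X.shape[0], size=sample_size, replace=False)
    Xs, ys = X[idx], y[idx]
    z = Xs @ w
    p = 1.0 / (1.0 + np.exp(-ys * z))
    d = p * (1.0 - p)                        # diagonal of the GLM Hessian
    return Xs.T @ (d * (Xs @ v)) / sample_size

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
y = np.sign(X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(1000))
w = np.zeros(20)
g = gradient(w, X, y)
Hv = subsampled_hessian_vector_product(w, X, y, -g, sample_size=200, rng=rng)

The key point of such matrix-free schemes is that curvature is accessed only through products H v, so the full Hessian is never formed, which is what makes second-order methods viable at large scale.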


Conic Optimization: Optimal Partition, Parametric, And Stability Analysis, Ali Mohammad-Nezhad Jan 2019

Theses and Dissertations

A linear conic optimization problem consists of minimizing a linear objective function over the intersection of an affine space and a closed convex cone. In recent years, linear conic optimization has received significant attention, partly because intractable optimization problems can be reformulated or approximated as linear conic optimization problems. Steady advances in computational optimization have enabled us to approximately solve a wide variety of linear conic optimization problems in polynomial time. Nevertheless, preprocessing methods, rounding procedures, and sensitivity analysis tools are still the missing parts of conic optimization solvers. Given the output ...
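For reference, the problem class described above can be written in a standard primal-dual form, where \mathcal{K} is a closed convex cone (for example the nonnegative orthant, the second-order cone, or the cone of positive semidefinite matrices) and \mathcal{K}^* is its dual cone; the notation here is generic rather than the dissertation's:

\begin{align*}
\text{(P)}\qquad \min_{x \in \mathbb{R}^n} \; & c^\top x \quad \text{s.t.} \quad A x = b, \; x \in \mathcal{K}, \\
\text{(D)}\qquad \max_{y \in \mathbb{R}^m,\, s \in \mathbb{R}^n} \; & b^\top y \quad \text{s.t.} \quad A^\top y + s = c, \; s \in \mathcal{K}^*.
\end{align*}

Roughly speaking, the optimal partition mentioned in the abstract describes which faces of \mathcal{K} and \mathcal{K}^* the primal and dual optimal solutions occupy; the precise conic definition and its use in parametric and stability analysis are the subject of the thesis.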


Solution Techniques For Non-Convex Optimization Problems, Wei Xia Jan 2019

Theses and Dissertations

This thesis focuses on solution techniques for non-convex optimization problems. The first part of the dissertation presents a generalization of the completely positive reformulation of quadratically constrained quadratic programs (QCQPs) to polynomial optimization problems (POPs). We show that by explicitly handling the linear constraints in the formulation of the POP, one obtains a refinement of the condition introduced in Bai's (2015) theorem on QCQPs, where the refined theorem only requires nonnegativity of the polynomial constraints over the feasible set of the linear constraints. The second part of the thesis is concerned with globally solving non-convex quadratic programs (QPs) using integer programming ...
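As background for the completely positive reformulation mentioned above, the lifting idea can be sketched in generic notation; this is a schematic in the spirit of Burer-type reformulations, using my own symbols, not the exact statement refined in the thesis. The quadratic term x^\top Q x is linearized through a lifted matrix variable X that stands in for x x^\top, and the rank-one condition is relaxed to membership in the completely positive cone:

\min_{x \ge 0} \; x^\top Q x + 2 c^\top x
\quad \longrightarrow \quad
\min \; \langle Q, X \rangle + 2 c^\top x
\quad \text{s.t.} \quad
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \in \mathcal{CP}_{1+n},

where \mathcal{CP}_{1+n} = \operatorname{conv}\{ z z^\top : z \in \mathbb{R}^{1+n},\, z \ge 0 \} is the completely positive cone and the original constraints are carried over in linearized form. Conditions such as the nonnegativity requirement discussed in the abstract govern when this convex reformulation is exact rather than merely a relaxation.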


Applications Of Machine Learning In Supply Chains, Afshin Oroojlooy Jan 2019

Theses and Dissertations

Advances in new technologies have increased the speed of data generation and provided access to larger data storage. The availability of huge datasets and massive computational power has led to the emergence of new algorithms in artificial intelligence and, specifically, machine learning, with significant research in fields like computer vision. Although comparable amounts of data exist in most components of supply chains, little research has been done on utilizing the power of raw data to improve efficiency in supply chains. In this dissertation, our objective is to propose data-driven, non-parametric machine learning algorithms to solve different supply chain ...
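Purely as an illustration of what a data-driven, non-parametric approach can look like in this setting (the specific task, features, and method below are my assumptions, not the dissertation's), here is a small Python sketch that estimates demand for an upcoming period by averaging the k most similar historical observations instead of fitting a parametric demand model:

import numpy as np

def knn_demand_estimate(X_hist, d_hist, x_new, k=5):
    # Non-parametric estimate: average demand over the k historical
    # periods whose feature vectors are closest to x_new.
    dists = np.linalg.norm(X_hist - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return d_hist[nearest].mean()

# Toy usage; features might encode seasonality, promotions, recent demand.
rng = np.random.default_rng(1)
X_hist = rng.random((200, 3))
d_hist = 50 + 30 * X_hist[:, 2] + rng.normal(0, 5, size=200)  # synthetic demand
estimate = knn_demand_estimate(X_hist, d_hist, x_new=np.array([0.3, 1.0, 0.7]))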


Optimization Algorithms For Machine Learning Problems, Hiva Ghanbari Jan 2019

Theses and Dissertations

In the first chapter of this thesis, we analyze the global convergence rate of a proximal quasi-Newton algorithm for solving composite optimization problems, in both exact and inexact settings, in the case when the objective function is strongly convex. ...
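To make the vocabulary concrete: a composite optimization problem has the form min_x f(x) + g(x), with f smooth (here strongly convex) and g convex but possibly nonsmooth, and proximal methods handle g through its proximal operator. The Python sketch below shows plain proximal gradient descent for f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1; it only illustrates the proximal machinery, not the proximal quasi-Newton algorithm analyzed in the thesis, and the step-size choice and problem data are assumptions.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, num_iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with a fixed step size 1/L,
    # where L is the Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)                     # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)    # proximal step
    return x

# Toy usage on a small random instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_gradient(A, b, lam=0.1)

A proximal quasi-Newton method follows the same template but replaces the gradient step with a step through a quasi-Newton model of f, solving the resulting scaled proximal subproblem either exactly or inexactly; the thesis analyzes the global convergence rate of that scheme.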