Full-Text Articles in Statistics and Probability
Multilevel Optimization With Dropout For Neural Networks, Gary Joseph Saavedra
Mathematics & Statistics ETDs
Large neural networks have become ubiquitous in machine learning. Despite their widespread use, the optimization process for training a neural network remains computationally expensive and does not necessarily produce networks that generalize well to unseen data. In addition, the difficulty of training increases as the size of the neural network grows. In this thesis, we introduce the novel MGDrop and SMGDrop algorithms, which use a multigrid optimization scheme with a dropout coarsening operator to train neural networks. In contrast to other standard neural network training schemes, MGDrop explicitly utilizes information from smaller sub-networks which act as approximations of the full …
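The coarse-level idea described above can be illustrated with a toy sketch. This is a hypothetical setup, not the thesis's MGDrop implementation: a random dropout mask over the hidden units plays the role of the coarsening operator, selecting a sub-network that approximates the full network, and training alternates a coarse gradient step on the sub-network with a fine step on the full network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a one-hidden-layer ReLU network
# (hypothetical setup for illustration only).
X = rng.normal(size=(64, 8))
y = X @ rng.normal(size=8)

W1 = rng.normal(size=(8, 16)) * 0.1   # input -> hidden weights
w2 = rng.normal(size=16) * 0.1        # hidden -> output weights

def loss_and_grads(W1, w2, keep):
    """Forward/backward pass on the sub-network selected by `keep`,
    a 0/1 dropout mask over hidden units (the coarsening operator)."""
    h = np.maximum(X @ W1, 0.0) * keep    # masked ReLU hidden layer
    pred = h @ w2
    r = pred - y
    loss = 0.5 * np.mean(r ** 2)
    g2 = h.T @ r / len(y)                 # grad wrt w2 (dropped units get 0)
    dh = np.outer(r, w2) * (h > 0)        # backprop through mask and ReLU
    g1 = X.T @ dh / len(y)                # grad wrt W1
    return loss, g1, g2

loss0, _, _ = loss_and_grads(W1, w2, np.ones(16))  # starting loss, full net

lr = 0.05
for it in range(200):
    # Coarse step: drop roughly half the hidden units and update
    # only the resulting sub-network.
    keep = (rng.random(16) < 0.5).astype(float)
    _, g1, g2 = loss_and_grads(W1, w2, keep)
    W1 -= lr * g1
    w2 -= lr * g2
    # Fine step: a smoothing gradient step on the full network.
    loss, g1, g2 = loss_and_grads(W1, w2, np.ones(16))
    W1 -= lr * g1
    w2 -= lr * g2

print(loss < loss0)
```

The alternation mirrors the multigrid pattern of cheap corrections on a coarse level followed by smoothing on the fine level; the real algorithms in the thesis define the coarsening and transfer operators precisely rather than ad hoc as here.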
Multilevel Asymptotic Parallel-In-Time Techniques For Temporally Oscillatory Pdes, Nicholas Abel
Mathematics & Statistics ETDs
As the clock speeds of individual processors level off and the amount of available parallel hardware continues to increase rapidly, further exploitation of parallelism is necessary to improve compute times. For time-dependent differential equations, the serial nature of time-stepping presents a bottleneck, but parallel-in-time integration methods offer a way to compute the solution in parallel along the time domain. Parallel-in-time methods have been successful in achieving speedup for parabolic problems; however, for problems with large hyperbolic terms and no strong diffusivity, they have traditionally struggled to offer speedup. While work has been done to understand why parallel-in-time …
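The parallel-in-time idea can be sketched with Parareal, the classic method of this family (the thesis's multilevel asymptotic techniques are not reproduced here). A cheap coarse propagator G sweeps the time domain serially, while an accurate fine propagator F is applied to every time slice independently, which is the step that parallelizes; a serial correction then combines the two. The example below runs the iteration serially on y' = -y and checks that it converges to the fine serial solution.

```python
import numpy as np

# Parareal sketch for y' = -y, y(0) = 1 (illustrative toy problem).
T, N = 2.0, 10                 # time horizon and number of coarse slices
dt = T / N

def G(y, dt):
    """Coarse propagator: a single forward-Euler step."""
    return y + dt * (-y)

def F(y, dt, m=20):
    """Fine propagator: m small forward-Euler steps."""
    h = dt / m
    for _ in range(m):
        y = y + h * (-y)
    return y

# Initial guess from the coarse propagator alone (serial sweep).
U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = G(U[n], dt)

# Parareal iterations: the F evaluations below are independent across
# slices and would run concurrently on real parallel hardware.
for k in range(6):
    Fvals = [F(U[n], dt) for n in range(N)]   # parallelizable stage
    Gold = [G(U[n], dt) for n in range(N)]
    Unew = U.copy()
    for n in range(N):                        # serial coarse correction
        Unew[n + 1] = G(Unew[n], dt) + Fvals[n] - Gold[n]
    U = Unew

# Reference: the fine propagator applied serially over the whole domain.
y_fine = 1.0
for n in range(N):
    y_fine = F(y_fine, dt)

gap = abs(U[-1] - y_fine)
print(gap < 1e-6)
```

For this mildly decaying (parabolic-like) problem the iteration converges in a few sweeps; for strongly oscillatory or hyperbolic problems this convergence degrades, which is exactly the difficulty the thesis addresses.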