Open Access. Powered by Scholars. Published by Universities.®

Physical Sciences and Mathematics Commons › Artificial Intelligence and Robotics › Machine Learning
Air Force Institute of Technology · 2021

Articles 1 - 2 of 2

Full-Text Articles in Physical Sciences and Mathematics

Cognition-Enhanced Machine Learning For Better Predictions With Limited Data, Florian Sense, Ryan Wood, Michael G. Collins, Joshua Fiechter, Aihua W. Wood, Michael Krusmark, Tiffany Jastrzembski, Christopher W. Myers Sep 2021

Faculty Publications

The fields of machine learning (ML) and cognitive science have developed complementary approaches to computationally modeling human behavior. ML's primary concern is maximizing prediction accuracy; cognitive science's primary concern is explaining the underlying mechanisms. Cross-talk between these disciplines is limited, likely because the tasks and goals usually differ. The domain of e-learning and knowledge acquisition constitutes a fruitful intersection for the two fields’ methodologies to be integrated because accurately tracking learning and forgetting over time and predicting future performance based on learning histories are central to developing effective, personalized learning tools. Here, we show how a state-of-the-art ML model can …
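To make the "tracking learning and forgetting over time" idea concrete, the following is a minimal illustrative sketch, not the article's actual model: it assumes a generic hybrid in which a cognitive forgetting-curve feature (a power-law decay over a learner's practice history) is added to the inputs of an off-the-shelf ML regressor. The feature form, decay parameter, and synthetic data are assumptions for illustration only.

```python
# Illustrative sketch only (assumed hybrid, not the article's model):
# a power-law forgetting-curve feature computed from practice history
# is fed to a standard ML regressor alongside raw history features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def forgetting_feature(practice_times, now, decay=0.5):
    """Sum of power-law-decayed traces of past practice events (assumed form)."""
    lags = np.maximum(now - np.asarray(practice_times), 1e-3)
    return np.sum(lags ** -decay)

# Toy learning histories: each learner has practice timestamps (hours)
# and an observed recall accuracy at a later test time.
rng = np.random.default_rng(0)
histories = [sorted(rng.uniform(0, 100, size=rng.integers(2, 8))) for _ in range(200)]
test_time = 120.0
X = np.array([[len(h), h[-1], forgetting_feature(h, test_time)] for h in histories])
y = np.clip(0.2 + 0.15 * X[:, 2] + rng.normal(0, 0.05, len(X)), 0, 1)  # synthetic target

model = GradientBoostingRegressor().fit(X, y)
print("Predicted recall for first learner:", model.predict(X[:1])[0])
```

The point of the sketch is the division of labor suggested by the abstract: the cognitive component supplies a theory-driven summary of the learning history, while the ML component maximizes predictive accuracy from whatever features it is given.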


Characterizing Convolutional Neural Network Early-Learning And Accelerating Non-Adaptive, First-Order Methods With Localized Lagrangian Restricted Memory Level Bundling, Benjamin O. Morris Sep 2021

Theses and Dissertations

This dissertation studies the underlying optimization problem encountered during the early-learning stages of convolutional neural networks and introduces a training algorithm competitive with existing state-of-the-art methods. First, a Design of Experiments method is introduced to systematically measure empirical second-order Lipschitz upper bound and region size estimates for local regions of convolutional neural network loss surfaces experienced during the early-learning stages. This method demonstrates that architecture choices can significantly impact the local loss surfaces traversed during training. Next, a Design of Experiments method is used to study the effects convolutional neural network architecture hyperparameters have on different optimization routines' abilities to …
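As a rough illustration of what an "empirical second-order Lipschitz upper bound" for a local region of a loss surface can look like, the sketch below estimates the gradient-smoothness constant along an optimization trajectory as max_k ||grad(w_{k+1}) - grad(w_k)|| / ||w_{k+1} - w_k||. This is a common finite-difference estimator, not necessarily the dissertation's procedure, and it is shown on a small logistic-regression loss rather than a convolutional neural network.

```python
# Minimal sketch (assumed estimator, not the dissertation's exact method):
# empirical second-order Lipschitz (gradient-smoothness) upper bound along
# a gradient-descent path, using finite differences of gradients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) + 0.1 * rng.normal(size=256) > 0).astype(float)

def grad(w):
    """Gradient of the mean logistic loss at parameters w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

w = np.zeros(10)
lr = 0.5
ratios = []
for _ in range(50):                      # short gradient-descent trajectory
    g_old, w_old = grad(w), w.copy()
    w = w - lr * g_old                   # parameter update
    step = np.linalg.norm(w - w_old)
    if step > 1e-12:
        ratios.append(np.linalg.norm(grad(w) - g_old) / step)

print("Empirical local Lipschitz upper bound estimate:", max(ratios))
```

Restricting the maximum to consecutive iterates confines the estimate to the local region actually traversed during training, which is the sense in which such measurements characterize early-learning behavior rather than the global loss surface.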