
Operations Research, Systems Engineering and Industrial Engineering Commons



Theses/Dissertations

2021

Approximate dynamic programming

Articles 1 - 2 of 2

Full-Text Articles in Operations Research, Systems Engineering and Industrial Engineering

Examining How Standby Assets Impact Optimal Dispatching Decisions Within A Military Medical Evacuation System Via A Markov Decision Process Model, Kylie Wooten Mar 2021


Theses and Dissertations

The Army medical evacuation (MEDEVAC) system ensures proper medical treatment is readily available to wounded soldiers on the battlefield. The objective of this research is to determine which MEDEVAC unit to task to an incoming 9-line MEDEVAC request and where to station a single standby unit to maximize patient survivability. A discounted, infinite-horizon continuous-time Markov decision process model is formulated to examine this problem. We design, develop, and test an approximate dynamic programming (ADP) technique that leverages a least squares policy evaluation value function approximation scheme within an approximate policy iteration algorithmic framework to solve practical-sized problem instances. A computational …
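The thesis's actual MEDEVAC formulation, basis functions, and rewards are not reproduced in this abstract; purely as a hedged illustration of the algorithmic framework it names (least squares policy evaluation inside approximate policy iteration), the Python sketch below runs that loop on a small synthetic MDP. Every size, feature matrix, and reward here is an assumed stand-in, not the author's model, and the thesis itself works in continuous time with simulation-based evaluation rather than an enumerable transition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in MDP (all sizes, transitions, rewards, and features are
# assumptions for illustration, not the thesis's MEDEVAC model).
n_states, n_actions, gamma = 20, 3, 0.95
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # expected reward r(s, a)

n_feats = 8
Phi = rng.normal(size=(n_states, n_feats))  # basis functions phi(s) as rows

def greedy_policy(theta):
    """One-step lookahead (policy improvement) against the current value fit."""
    V = Phi @ theta
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    return Q.argmax(axis=1)

def lspe_evaluate(policy, theta, sweeps=50, step=0.5):
    """Least squares policy evaluation: repeatedly refit Phi @ theta to the
    one-step Bellman targets induced by the fixed policy (damped for stability)."""
    for _ in range(sweeps):
        P_pi = P[policy, np.arange(n_states)]          # transitions under the policy
        r_pi = R[np.arange(n_states), policy]
        target = r_pi + gamma * P_pi @ (Phi @ theta)   # Bellman backup of current fit
        theta_fit, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        theta = theta + step * (theta_fit - theta)
    return theta

# Approximate policy iteration: alternate evaluation and greedy improvement.
theta = np.zeros(n_feats)
policy = greedy_policy(theta)
for _ in range(20):
    theta = lspe_evaluate(policy, theta)
    new_policy = greedy_policy(theta)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("Greedy dispatch action per state:", policy)
```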


Resupply Operations Of A Dispersed Infantry Brigade Combat Team Using Approximate Dynamic Programming, Camero K. Song Mar 2021


Theses and Dissertations

Military sustainment planners must consider the effective employment of Cargo Unmanned Aerial Vehicles (CUAVs) in resupply operations under austere threat conditions. Viewed as an inventory routing problem, resupplying dislocated Forward Operating Bases (FOBs) under dangerous and dynamic conditions involves complexities that warrant additional features beyond the baseline problem. This research models a military stochastic inventory routing problem with multiple routing (MIL SIRP-MR) as a Markov decision process (MDP) and provides high-quality policy solutions using a combination of approximate dynamic programming and ordinal optimization techniques.
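The abstract only names the solution machinery; as a hedged, generic illustration of the ordinal optimization idea it mentions, the snippet below shows that a handful of noisy simulation replications is usually enough to rank candidate decisions and retain a subset that overlaps the truly good ones, even when the cost estimates themselves are imprecise. The candidate set, costs, and noise level are invented for the example and are not the thesis's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate resupply decisions (e.g., CUAV route/quantity options);
# true_cost is unknown to the planner and observable only through noisy simulation.
n_candidates = 200
true_cost = rng.uniform(10.0, 50.0, size=n_candidates)
noise_sd = 8.0

def simulate(idx, reps):
    """Crude simulation estimate of a candidate's expected cost."""
    return true_cost[idx] + rng.normal(0.0, noise_sd, size=reps).mean()

# Ordinal optimization: use few replications per candidate to *rank* them,
# then keep a selected set likely to contain a good-enough decision, rather
# than estimating every cost precisely.
rough = np.array([simulate(i, reps=5) for i in range(n_candidates)])
selected = np.argsort(rough)[:10]          # selected set by observed ordering

good_enough = np.argsort(true_cost)[:10]   # true top 10 ("good enough" set)
overlap = len(set(selected) & set(good_enough))
print(f"Selected set contains {overlap} of the true top-10 candidates")
```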