Open Access. Powered by Scholars. Published by Universities.®



Articles 1 - 5 of 5

Full-Text Articles in Physical Sciences and Mathematics

A Survey Of Transfer Learning Methods For Reinforcement Learning, Nicholas Bone Dec 2008


Computer Science Graduate and Undergraduate Student Scholarship

Transfer Learning (TL) is the branch of Machine Learning concerned with improving performance on a target task by leveraging knowledge from a related (and usually already learned) source task. TL is potentially applicable to any learning task, but in this survey we consider TL in a Reinforcement Learning (RL) context. TL is inspired by psychology; humans constantly apply previous knowledge to new tasks, but such transfer has traditionally been very difficult for—or ignored by—machine learning applications. The goals of TL are to facilitate faster and better learning of new tasks by applying past experience where appropriate, and to enable autonomous …
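The basic mechanism the abstract describes, reusing a learned source task to speed up a target task, can be sketched with tabular Q-learning, where transfer simply means seeding the target Q-table with the source task's values instead of zeros. The toy chain environment, hyperparameters, and `run_q_learning` helper below are illustrative inventions, not taken from the survey:

```python
import random
from collections import defaultdict

def run_q_learning(q, episodes, n=6, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy 1-D chain: states 0..n-1, actions -1/+1,
    reward 1 for reaching state n-1. Mutates and returns the Q-table q."""
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n):  # cap steps per episode
            if rng.random() < eps or q[(s, -1)] == q[(s, 1)]:
                a = rng.choice([-1, 1])  # explore, or break ties randomly
            else:
                a = max([-1, 1], key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), n - 1)
            r = 1.0 if s2 == n - 1 else 0.0
            target = r + gamma * max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
            if r:
                break
    return q

# Source task: learn the chain from scratch.
source_q = run_q_learning(defaultdict(float), episodes=300)

# Target task (identical dynamics here, purely for illustration): transfer
# means starting from the source task's Q-values instead of from zeros,
# so far fewer episodes are needed to behave well.
transfer_q = run_q_learning(defaultdict(float, source_q), episodes=10, seed=1)
```

Real transfer settings differ in states, actions, or dynamics between tasks, so practical methods map or adapt the source knowledge rather than copying it verbatim as this toy does.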


Development Of A Workflow For The Comparison Of Classification Techniques, Zanifa Omary Sep 2008


Masters

As interest in machine learning and data mining grows, the problem of how to assess learning algorithms and compare classifiers becomes more pressing. This has been associated with the lack of a comprehensive, complete workflow, scaled to the project, to guide its users. As a result, the success or failure of a project can depend heavily on the person or team carrying it out. The standard practice adopted by many researchers and experimenters has been to follow steps or phases from existing workflows such as CRISP-DM, KDD and SAS SEMMA. However, as machine learning and data mining …
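The core of any such comparison workflow is evaluating competing classifiers under an identical protocol, typically k-fold cross-validation. The following sketch is a minimal illustration of that step only; the toy dataset and the two classifiers are invented, not from the thesis:

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

def cv_accuracy(fit, X, y, k=5):
    """Mean k-fold accuracy; `fit` takes training data, returns a predictor."""
    scores = []
    for train, test in k_fold_indices(len(X), k):
        predict = fit([X[i] for i in train], [y[i] for i in train])
        scores.append(sum(predict(X[i]) == y[i] for i in test) / len(test))
    return sum(scores) / k

def majority(X_tr, y_tr):                 # baseline: predict the modal label
    label = max(set(y_tr), key=y_tr.count)
    return lambda x: label

def one_nn(X_tr, y_tr):                   # 1-nearest-neighbour classifier
    def predict(x):
        i = min(range(len(X_tr)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(X_tr[j], x)))
        return y_tr[i]
    return predict

# Toy dataset: label is 1 iff x0 + x1 > 1, on a 10x10 grid of points.
X = [(i / 10, j / 10) for i in range(10) for j in range(10)]
y = [1 if a + b > 1 else 0 for a, b in X]
results = {"majority": cv_accuracy(majority, X, y),
           "1-NN": cv_accuracy(one_nn, X, y)}
```

A full workflow in the CRISP-DM/KDD sense adds the surrounding phases (business understanding, data preparation, deployment); this fragment corresponds only to the evaluation phase where classifiers are compared on equal footing.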


Scaling Ant Colony Optimization With Hierarchical Reinforcement Learning Partitioning, Erik J. Dries, Gilbert L. Peterson Jul 2008


Faculty Publications

This paper merges hierarchical reinforcement learning (HRL) with ant colony optimization (ACO) to produce an HRL ACO algorithm capable of generating solutions for large domains. This paper describes two specific implementations of the new algorithm: the first a modification to Dietterich’s MAXQ-Q HRL algorithm, the second a hierarchical ant colony system algorithm. These implementations generate faster results, with little to no significant change in the quality of solutions for the tested problem domains. The application of ACO to the MAXQ-Q algorithm replaces the reinforcement learning component, Q-learning, with the modified ant colony optimization method, Ant-Q. This algorithm, MAXQ-AntQ, converges to solutions …
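The Ant-Q method mentioned here combines a Q-learning-style value update with ACO's heuristic edge desirability: ants build solutions greedily or by roulette, apply a discounted local update, and the best solution receives delayed reinforcement. A minimal sketch on a four-city TSP is below; the hyperparameters are arbitrary illustrations and this is plain Ant-Q, not the paper's hierarchical MAXQ-AntQ:

```python
import random

def ant_q_tsp(dist, n_ants=5, iters=50, alpha=0.1, gamma=0.3,
              beta=2.0, q0=0.9, w=10.0, seed=0):
    """Minimal Ant-Q sketch for a tiny TSP: AQ values play the role of
    Q-values, ants pick edges greedily (prob. q0) or by roulette, and
    the best tour found so far receives delayed reinforcement."""
    rng, n = random.Random(seed), len(dist)
    aq = [[1.0 / (n * d) if d else 0.0 for d in row] for row in dist]
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            tour = [0]
            while len(tour) < n:
                r = tour[-1]
                cand = [c for c in range(n) if c not in tour]
                val = lambda c: aq[r][c] * (1.0 / dist[r][c]) ** beta
                if rng.random() < q0:
                    s = max(cand, key=val)          # exploit best edge
                else:                               # biased exploration
                    x, acc, tot = rng.random(), 0.0, sum(val(c) for c in cand)
                    for s in cand:
                        acc += val(s) / tot
                        if acc >= x:
                            break
                # local Q-learning-style update toward the next state's value
                fut = max((aq[s][t] for t in cand if t != s), default=0.0)
                aq[r][s] = (1 - alpha) * aq[r][s] + alpha * gamma * fut
                tour.append(s)
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                          # delayed reinforcement
            r, s = best_tour[i], best_tour[(i + 1) % n]
            aq[r][s] += alpha * w / best_len
    return best_tour, best_len

# Four cities on a unit square; the optimal tour walks the perimeter (length 4).
square = [[0, 1, 2 ** 0.5, 1], [1, 0, 1, 2 ** 0.5],
          [2 ** 0.5, 1, 0, 1], [1, 2 ** 0.5, 1, 0]]
tour, length = ant_q_tsp(square)
```

The paper's contribution is to run this kind of update inside an HRL task decomposition, so each subtask's smaller state space is searched by its own colony.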


Assessing The Costs Of Sampling Methods In Active Learning For Annotation, James Carroll, Robbie Haertel, Peter Mcclanahan, Eric K. Ringger, Kevin Seppi Jun 2008


Faculty Publications

Traditional Active Learning (AL) techniques assume that the annotation of each datum costs the same. This is not the case when annotating sequences; some sequences will take longer than others. We show that the AL technique that performs best depends on how cost is measured. Applying an hourly cost model based on the results of an annotation user study, we approximate the amount of time necessary to annotate a given sentence. This model allows us to evaluate the effectiveness of AL sampling methods in terms of time spent in annotation. We achieve a 77% reduction in hours from a random …
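One simple way to make sampling cost-aware in the sense described is to rank candidates by informativeness per estimated annotation hour rather than by informativeness alone, then fill an hour budget greedily. The pool format, numbers, and `select_batch` name below are hypothetical illustrations, not the paper's method:

```python
def select_batch(pool, budget_hours):
    """Greedy cost-sensitive selection: rank items by model uncertainty per
    estimated annotation hour, then take items until the budget is spent."""
    ranked = sorted(pool, key=lambda it: it[1] / it[2], reverse=True)
    chosen, spent = [], 0.0
    for item_id, _, hours in ranked:
        if spent + hours <= budget_hours:
            chosen.append(item_id)
            spent += hours
    return chosen, spent

# Hypothetical pool: (sentence id, model uncertainty, estimated hours),
# where hours would come from a cost model like the paper's hourly one.
pool = [("s1", 0.9, 2.0), ("s2", 0.5, 0.5), ("s3", 0.8, 1.0), ("s4", 0.2, 0.2)]
chosen, spent = select_batch(pool, budget_hours=2.0)
```

Note how the most uncertain sentence, `s1`, is skipped: its two-hour cost buys less expected benefit than three cheaper sentences, which is exactly the kind of reversal that makes the best-performing AL technique depend on how cost is measured.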


Learning Policies For Embodied Virtual Agents Through Demonstration, Jonathan Dinerstein, Parris K. Egbert, Dan A. Ventura Jan 2008


Faculty Publications

Although many powerful AI and machine learning techniques exist, it remains difficult to quickly create AI for embodied virtual agents that produces visually lifelike behavior. This is important for applications (e.g., games, simulators, interactive displays) where an agent must behave in a manner that appears human-like. We present a novel technique for learning reactive policies that mimic demonstrated human behavior. The user demonstrates the desired behavior by dictating the agent’s actions during an interactive animation. Later, when the agent is to behave autonomously, the recorded data is generalized to form a continuous state-to-action mapping. Combined with an appropriate animation algorithm …
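One common way to generalize recorded (state, action) pairs into a continuous state-to-action mapping is nearest-neighbour lookup over the demonstration log. The sketch below is a generic illustration under that assumption, not the paper's specific learning algorithm:

```python
import math

def make_policy(demos):
    """Generalize recorded (state, action) pairs into a reactive policy:
    for a new state, return the action of the closest demonstrated state."""
    def policy(state):
        nearest = min(demos, key=lambda d: math.dist(d[0], state))
        return nearest[1]
    return policy

# Hypothetical demonstration log: 2-D agent state -> action the user chose.
demos = [((0.0, 0.0), "idle"), ((1.0, 0.0), "walk"), ((1.0, 1.0), "run")]
policy = make_policy(demos)
```

In practice a smoother regressor (e.g. weighted k-nearest-neighbour or a neural network) would be used so the mapping varies continuously, and the resulting action stream is handed to an animation algorithm as the abstract describes.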