Research Collection School Of Computing and Information Systems

Artificial Intelligence and Robotics

Reinforcement learning

Articles 1 - 23 of 23

Full-Text Articles in Physical Sciences and Mathematics

Transferable Curricula Through Difficulty Conditioned Generators, Sidney Tio, Pradeep Varakantham Aug 2023

Advancements in reinforcement learning (RL) have demonstrated superhuman performance in complex tasks such as StarCraft, Go, and Chess. However, knowledge transfer from artificial "experts" to humans remains a significant challenge. A promising avenue for such transfer is the use of curricula. Recent methods in curriculum generation focus on training RL agents efficiently, yet such methods rely on surrogate measures to track student progress and are not suited for training robots in the real world (or, more ambitiously, humans). In this paper, we introduce a method named Parameterized Environment Response Model (PERM) that shows promising results in training RL agents …


Imitation Improvement Learning For Large-Scale Capacitated Vehicle Routing Problems, The Viet Bui, Tien Mai Jul 2023

Recent works using deep reinforcement learning (RL) to solve routing problems such as the capacitated vehicle routing problem (CVRP) have focused on improvement learning-based methods, which involve improving a given solution until it becomes near-optimal. Although adequate solutions can be achieved for small problem instances, their efficiency degrades for large-scale ones. In this work, we propose a new improvement learning-based framework based on imitation learning, where classical heuristics serve as experts to encourage the policy model to mimic and produce similar or better solutions. Moreover, to improve scalability, we propose Clockwise Clustering, a novel augmented framework for decomposing large-scale CVRP into …
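
As a rough illustration of the imitation component only (not the paper's actual model), the sketch below uses PyTorch — a library choice made here — with random features standing in for real move encodings: a classical improvement heuristic scores candidate moves and acts as the expert, and a small policy network is trained with cross-entropy to reproduce its choice.

    # Sketch of imitation-based improvement learning; features and costs are random stand-ins.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    n_moves, feat_dim = 20, 16          # candidate improvement moves per step, feature size
    policy = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def expert_choice(move_costs):
        # Classical heuristic as expert: pick the move giving the lowest resulting cost.
        return torch.argmin(move_costs)

    for step in range(200):
        move_feats = torch.randn(n_moves, feat_dim)     # stand-in for real move features
        move_costs = torch.randn(n_moves)               # stand-in for resulting tour costs
        target = expert_choice(move_costs)
        logits = policy(move_feats).squeeze(-1)         # score each candidate move
        loss = nn.functional.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()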


Constrained Reinforcement Learning In Hard Exploration Problems, Pankayaraj Pathmanathan, Pradeep Varakantham Feb 2023

One approach to guaranteeing safety in reinforcement learning is through cost constraints that are imposed on trajectories. Recent works in constrained RL have developed methods that ensure constraints can be enforced even at learning time while maximizing the overall value of the policy. Unfortunately, as demonstrated in our experimental results, such approaches do not perform well on complex multi-level tasks with longer episode lengths or sparse rewards. To that end, we propose a scalable hierarchical approach for constrained RL problems that employs backward cost value functions in the context of a task hierarchy and a novel intrinsic reward function in lower levels …
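
The hierarchical method itself is not reproduced here; the sketch below only illustrates the underlying device of a trajectory cost constraint, handled in the common Lagrangian way with a multiplier updated by dual ascent. The rollout returns and costs are random stand-ins.

    # Illustrative Lagrangian handling of a trajectory cost constraint (a standard
    # constrained-RL device, not the hierarchical method described above).
    import numpy as np

    np.random.seed(0)
    cost_limit, lam, lam_lr = 10.0, 0.0, 0.01

    def run_episode(lam):
        # Stand-in rollout: returns (episode_return, episode_cost) under some policy.
        ep_return = np.random.normal(5.0, 1.0)
        ep_cost = np.random.normal(12.0, 2.0)
        return ep_return, ep_cost

    for episode in range(1000):
        ep_return, ep_cost = run_episode(lam)
        shaped_return = ep_return - lam * ep_cost   # signal a policy update would use
        # Dual ascent on the multiplier: grow lambda while the constraint is violated.
        lam = max(0.0, lam + lam_lr * (ep_cost - cost_limit))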


Reinforcement Learning Enhanced Pichunter For Interactive Search, Zhixin Ma, Jiaxin Wu, Weixiong Loo, Chong-Wah Ngo Jan 2023

With the tremendous increase in video data size, search performance can be impacted significantly. Specifically, in an interactive setting, a real-time system allows a user to browse, search, and refine a query. Without a speedy system, the main ingredient in keeping a user engaged and focused, an interactive system becomes less effective even with a sophisticated deep learning model. This paper addresses this challenge by leveraging approximate search, Bayesian inference, and reinforcement learning. For approximate search, we apply hierarchical navigable small world (HNSW), an efficient approximate nearest neighbor search algorithm. To quickly prune the search scope, we …
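
For the approximate-search ingredient, a minimal HNSW example is sketched below using the hnswlib library; the library choice, embedding dimensionality, and index parameters are illustrative assumptions, since the abstract only states that HNSW is used.

    # Minimal HNSW approximate nearest-neighbour search with hnswlib (sizes and
    # parameters are illustrative, not taken from the paper).
    import hnswlib
    import numpy as np

    dim, n = 512, 10_000
    data = np.float32(np.random.rand(n, dim))   # stand-in for video-clip embeddings

    index = hnswlib.Index(space='cosine', dim=dim)
    index.init_index(max_elements=n, ef_construction=200, M=16)
    index.add_items(data, np.arange(n))
    index.set_ef(64)                            # query-time accuracy/speed trade-off

    query = np.float32(np.random.rand(1, dim))
    labels, distances = index.knn_query(query, k=10)   # approximate top-10 neighbours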


End-To-End Hierarchical Reinforcement Learning With Integrated Subgoal Discovery, Shubham Pateria, Budhitama Subagdja, Ah-Hwee Tan, Chai Quek Dec 2022

Hierarchical reinforcement learning (HRL) is a promising approach for performing long-horizon goal-reaching tasks by decomposing the goals into subgoals. In a holistic HRL paradigm, an agent must autonomously discover such subgoals and also learn a hierarchy of policies that uses them to reach the goals. Recently introduced end-to-end HRL methods accomplish this by using the higher-level policy in the hierarchy to directly search for useful subgoals in a continuous subgoal space. However, learning such a policy may be challenging when the subgoal space is large. We propose integrated discovery of salient subgoals (LIDOSS), an end-to-end HRL method with an integrated …
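
A bare-bones version of the two-level setup described above is sketched below: a high-level policy emits a subgoal in a continuous subgoal space every few steps, and a low-level policy receives an intrinsic reward for approaching it. The policies and dynamics are random stand-ins, and LIDOSS's integrated subgoal-discovery component is not reproduced.

    # Generic goal-conditioned two-level hierarchy (scaffolding only, not LIDOSS).
    import numpy as np

    np.random.seed(0)
    state_dim, subgoal_dim = 8, 2

    def high_level_policy(state):
        # Emits a subgoal in a continuous subgoal space (random stand-in).
        return np.random.uniform(-1.0, 1.0, subgoal_dim)

    def low_level_policy(state, subgoal):
        # Acts to move toward the subgoal (random stand-in).
        return np.random.uniform(-0.1, 0.1, state_dim)

    state = np.zeros(state_dim)
    for t in range(100):
        if t % 10 == 0:                              # high level acts every k steps
            subgoal = high_level_policy(state)
        action = low_level_policy(state, subgoal)
        state = state + action                       # stand-in environment transition
        intrinsic_reward = -np.linalg.norm(state[:subgoal_dim] - subgoal)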


Interactive Video Corpus Moment Retrieval Using Reinforcement Learning, Zhixin Ma, Chong-Wah Ngo Oct 2022

Known-item video search is effective with a human in the loop to interactively investigate the search results and refine the initial query. Nevertheless, when the first few pages of results are swamped with visually similar items, or the search target is hidden deep in the ranked list, finding the known-item target usually requires a long duration of browsing and result inspection. This paper tackles the problem with reinforcement learning, aiming to reach a search target within a few rounds of interaction by long-term learning from user feedback. Specifically, the system interactively plans a navigation path based on feedback and recommends a potential target that …


Reinforcement Learning-Based Interactive Video Search, Zhixin Ma, Jiaxin Wu, Zhijian Hou, Chong-Wah Ngo Jun 2022

Despite the rapid progress in text-to-video search due to the advancement of cross-modal representation learning, existing techniques still fall short in helping users rapidly identify their search targets. Particularly, when a system suggests a long list of similar candidates, the user needs to painstakingly inspect every search result. The experience is frustrating, with repeated watching of similar clips, and, more frustratingly, the search targets may be overlooked due to mental fatigue. This paper explores reinforcement learning (RL)-based search to relieve the user of the burden of brute-force inspection. Specifically, the system maintains a graph …


Heterogeneous Attentions For Solving Pickup And Delivery Problem Via Deep Reinforcement Learning, Jingwen Li, Liang Xin, Zhiguang Cao, Andrew Lim, Wen Song, Jie Zhang Mar 2022

Recently, there has been an emerging trend of applying deep reinforcement learning to solve the vehicle routing problem (VRP), where a learnt policy governs the selection of the next node to visit. However, existing methods do not handle well the pairing and precedence relationships in the pickup and delivery problem (PDP), a representative variant of VRP. To address this challenging issue, we leverage a novel neural network integrated with a heterogeneous attention mechanism to empower the policy in deep reinforcement learning to automatically select the nodes. In particular, the heterogeneous attention mechanism specifically prescribes attentions for each role of the …
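
One concrete piece of the PDP setting is the precedence constraint that a delivery may only be visited after its paired pickup. The sketch below shows only that feasibility mask as it would be applied during decoding; it is an illustration, not the paper's heterogeneous attention network.

    # Decoding feasibility mask for pickup-and-delivery precedence (illustration only).
    import numpy as np

    n_pairs = 3                         # pickup i is node i, its delivery is node i + n_pairs
    visited = np.zeros(2 * n_pairs, dtype=bool)

    def feasible_mask(visited):
        mask = ~visited                              # cannot revisit a node
        for i in range(n_pairs):
            if not visited[i]:                       # delivery blocked until pickup done
                mask[i + n_pairs] = False
        return mask

    # Example: after visiting pickup 0, its delivery (node 3) becomes selectable.
    visited[0] = True
    print(feasible_mask(visited))   # [False  True  True  True False False]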


Hierarchical Control Of Multi-Agent Reinforcement Learning Team In Real-Time Strategy (Rts) Games, Weigui Jair Zhou, Budhitama Subagdja, Ah-Hwee Tan, Darren Wee Sze Ong Dec 2021

Coordinated control of multi-agent teams is an important task in many real-time strategy (RTS) games. In most prior work, micromanagement is the commonly used strategy whereby individual agents operate independently and make their own combat decisions. On the other extreme, some employ a macromanagement strategy whereby all agents are controlled by a single decision model. In this paper, we propose a hierarchical command and control architecture, consisting of a single high-level and multiple low-level reinforcement learning agents operating in a dynamic environment. This hierarchical model enables the low-level unit agents to make individual decisions while taking commands from the high-level …


Learning To Assign: Towards Fair Task Assignment In Large-Scale Ride Hailing, Dingyuan Shi, Yongxin Tong, Zimu Zhou, Bingchen Song, Weifeng Lv, Qiang Yang Aug 2021

Ride hailing is a widespread shared mobility application where the central issue is to assign taxi requests to drivers with various objectives. Despite extensive research on task assignment in ride hailing, the fairness of earnings among drivers is largely neglected. Pioneer studies on fair task assignment in ride hailing are ineffective and inefficient due to their myopic optimization perspective and time-consuming assignment techniques. In this work, we propose LAF, an effective and efficient task assignment scheme that optimizes both utility and fairness. We adopt reinforcement learning to make assignments in a holistic manner and propose a set of acceleration techniques …


Approximate Difference Rewards For Scalable Multiagent Reinforcement Learning, Arambam James Singh, Akshat Kumar, Hoong Chuin Lau May 2021

We address the problem of multiagent credit assignment in a large-scale multiagent system. Difference rewards (DRs) are an effective tool to tackle this problem, but their exact computation is known to be challenging even for a small number of agents. We propose a scalable method to compute difference rewards based on aggregate information in a multiagent system with a large number of agents, by exploiting the symmetry present in several practical applications. Empirical evaluation on two multiagent domains, air-traffic control and cooperative navigation, shows better solution quality than previous approaches.
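
For reference, the exact difference-reward definition D_i = G(z) - G(z_-i) is worked through below on a toy congestion game; the paper's contribution, an aggregation-based approximation for large numbers of agents, is not shown.

    # Exact difference rewards on a toy congestion game: D_i = G(z) - G(z with agent i removed).
    import numpy as np

    def global_reward(counts):
        # Toy global objective: penalise squared congestion at each location.
        return -np.sum(counts ** 2)

    actions = np.array([0, 0, 1, 2, 0])            # location chosen by each of 5 agents
    counts = np.bincount(actions, minlength=3)

    G = global_reward(counts)
    for i, a in enumerate(actions):
        counts_without_i = counts.copy()
        counts_without_i[a] -= 1                   # remove agent i's contribution
        D_i = G - global_reward(counts_without_i)
        print(f"agent {i}: difference reward {D_i}")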


Learning Index Policies For Restless Bandits With Application To Maternal Healthcare, Arpita Biswas, Gaurav Aggarwal, Pradeep Varakantham, Milind Tambe May 2021

In many community health settings, it is crucial to have a systematic monitoring and intervention process to ensure that the patients adhere to healthcare programs, such as periodic health checks or taking medications. When these interventions are expensive, they can be provided to only a fixed small fraction of the patients at any period of time. Hence, it is important to carefully choose the beneficiaries who should be provided with interventions and when. We model this scenario as a restless multi-armed bandit (RMAB) problem, where each beneficiary is assumed to transition from one state to another depending on the intervention …
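
A toy version of the restless-bandit setting is sketched below: each beneficiary is a two-state Markov chain whose transitions depend on whether an intervention is given, and only k interventions are available per round. The myopic selection score used here is a placeholder, not the index policy learned in the paper.

    # Toy restless multi-armed bandit (state 0 = not adhering, 1 = adhering).
    import numpy as np

    rng = np.random.default_rng(0)
    n_arms, k, horizon = 10, 2, 50

    # P[arm, action, state, next_state]; action 1 = intervene.
    P = rng.dirichlet(np.ones(2), size=(n_arms, 2, 2))
    states = rng.integers(0, 2, size=n_arms)

    for t in range(horizon):
        # Myopic score: gain in adherence probability if this arm is intervened on.
        score = P[np.arange(n_arms), 1, states, 1] - P[np.arange(n_arms), 0, states, 1]
        chosen = np.argsort(score)[-k:]                      # intervene on the top-k arms
        actions = np.zeros(n_arms, dtype=int)
        actions[chosen] = 1
        probs = P[np.arange(n_arms), actions, states, 1]
        states = (rng.random(n_arms) < probs).astype(int)    # stochastic transitions
        reward = states.sum()                                # number adhering this round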


Reinforcement Learning For Zone Based Multiagent Pathfinding Under Uncertainty, Jiajing Ling, Tarun Gupta, Akshat Kumar Oct 2020

We address the problem of multiple agents finding their paths from respective sources to destination nodes in a graph (also called MAPF). Most existing approaches assume that all agents move at a fixed speed and that a single node accommodates only a single agent. Motivated by the emerging applications of autonomous vehicles such as drone traffic management, we present zone-based path finding (or ZBPF), where agents move among zones and agents' movements incur uncertain travel times. Furthermore, each zone can accommodate multiple agents (as per its capacity). We also develop a simulator for ZBPF which provides a clean interface from the …


Hierarchical Multiagent Reinforcement Learning For Maritime Traffic Management, Arambam James Singh, Akshat Kumar, Hoong Chuin Lau May 2020

Increasing global maritime traffic coupled with rapid digitization and automation in shipping mandate developing next generation maritime traffic management systems to mitigate congestion, increase safety of navigation, and avoid collisions in busy and geographically constrained ports (such as Singapore's). To achieve these objectives, we model the maritime traffic as a large multiagent system with individual vessels as agents, and VTS (Vessel Traffic Service) authority as a regulatory agent. We develop a hierarchical reinforcement learning approach where vessels first select a high level action based on the underlying traffic flow, and then select the low level action that determines their future …


Multi-Agent Collaborative Exploration Through Graph-Based Deep Reinforcement Learning, Tianze Luo, Budhitama Subagdja, Ah-Hwee Tan Oct 2019

Autonomous exploration by a single agent or multiple agents in an unknown environment has various applications in automation, such as cleaning and search and rescue. Traditional methods normally take frontier locations and segmented regions of the environment into account to efficiently allocate target locations for different agents to visit. They may employ ad hoc solutions to allocate tasks to the agents, but the allocation may not be efficient. In the literature, few studies have focused on enhancing the traditional methods by applying machine learning models for agent performance improvement. In this paper, we propose a graph-based deep reinforcement learning approach …


An Efficient Approach To Model-Based Hierarchical Reinforcement Learning, Zhuoru Li, Akshay Narayan, Tze-Yun Leong Feb 2017

We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks and evaluates the subtask execution choices through simulation. The framework is sample efficient, and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against state-of-the-art algorithms and scales well to very large problems.


Reinforcement Learning Framework For Modeling Spatial Sequential Decisions Under Uncertainty: (Extended Abstract), Truc Viet Le, Siyuan Liu, Hoong Chuin Lau May 2016

We consider the problem of trajectory prediction, where a trajectory is an ordered sequence of location visits and corresponding timestamps. The problem arises when an agent makes sequential decisions to visit a set of spatial locations of interest. Each location bears a stochastic utility and the agent has a limited budget to spend. Given the agent's observed partial trajectory, our goal is to predict the remaining trajectory. We propose a solution framework to the problem considering both the uncertainty of utility and the budget constraint. We use reinforcement learning (RL) to model the underlying decision processes and inverse RL to …


Integrating Motivated Learning And K-Winner-Take-All To Coordinate Multi-Agent Reinforcement Learning, Teck-Hou Teng, Ah-Hwee Tan, Janusz Starzyk, Yuan-Sin Tan, Loo-Nin Teow Aug 2014

This work addresses the coordination issue in distributed optimization problems (DOPs), where multiple distinct and time-critical tasks are performed to satisfy a global objective function. The performance of these tasks has to be coordinated due to the sharing of consumable resources and the dependency on non-consumable resources. Knowing that it can be sub-optimal to predefine the performance of the tasks for large DOPs, the multi-agent reinforcement learning (MARL) framework is adopted, wherein an agent is used to learn the performance of each distinct task using reinforcement learning. To coordinate MARL, we propose a novel coordination strategy integrating Motivated Learning (ML) …
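
The k-winner-take-all operation named in the title is, on its own, a simple selection rule; a minimal version is sketched below. Its integration with motivated learning for coordinating MARL is not reproduced, and the example bid values are invented.

    # Minimal k-winner-take-all selection (generic operation only).
    import numpy as np

    def k_winner_take_all(activations, k):
        # Keep the k largest activations, suppress the rest to zero.
        winners = np.argsort(activations)[-k:]
        out = np.zeros_like(activations)
        out[winners] = activations[winners]
        return out

    agent_bids = np.array([0.2, 0.9, 0.5, 0.7, 0.1])    # e.g., per-task urgency signals
    print(k_winner_take_all(agent_bids, k=2))            # only the two strongest win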


Creating Autonomous Adaptive Agents In A Real-Time First-Person Shooter Computer Game, Di Wang, Ah-Hwee Tan Jul 2014

Games are good test-beds to evaluate AI methodologies. In recent years, there has been a vast amount of research dealing with real-time computer games other than the traditional board games or card games. This paper illustrates how we create agents by employing FALCON, a self-organizing neural network that performs reinforcement learning, to play a well-known first-person shooter computer game called Unreal Tournament. Rewards used for learning are either obtained from the game environment or estimated using the temporal difference learning scheme. In this way, the agents are able to acquire proper strategies and discover the effectiveness of different weapons without …
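
The temporal difference estimation scheme mentioned above is, in its generic one-step form, the TD(0) update sketched below; the FALCON network that consumes these estimates is not reproduced.

    # One-step temporal-difference (TD(0)) value update, in its textbook form.
    import numpy as np

    n_states, alpha, gamma = 5, 0.1, 0.9
    V = np.zeros(n_states)

    def td_update(V, s, r, s_next, done):
        # Move V[s] toward the bootstrapped target r + gamma * V[s'].
        target = r + (0.0 if done else gamma * V[s_next])
        V[s] += alpha * (target - V[s])

    # Example transition: state 0 -> state 1 with reward +1.
    td_update(V, s=0, r=1.0, s_next=1, done=False)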


Self-Regulating Action Exploration In Reinforcement Learning, Teck-Hou Teng, Ah-Hwee Tan Oct 2012

The basic tenet of a learning process is for an agent to learn for only as much and as long as it is necessary. With reinforcement learning, the learning process is divided between exploration and exploitation. Given the complexity of the problem domain and the randomness of the learning process, the exact duration of the reinforcement learning process can never be known with certainty. Using an inaccurate number of training iterations leads either to non-convergence or to over-training of the learning agent. This work addresses such issues by proposing a technique to self-regulate the exploration rate and training duration …
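
For contrast with the self-regulating approach proposed here, the sketch below shows the conventional alternative it improves on: a fixed epsilon-decay schedule in which both the decay rate and the number of training episodes must be chosen in advance.

    # Conventional fixed exploration schedule (hand-tuned decay and training duration).
    eps, eps_min, decay = 1.0, 0.05, 0.995

    for episode in range(2000):            # training duration fixed a priori
        # ... run one episode with epsilon-greedy action selection ...
        eps = max(eps_min, eps * decay)    # exploration rate shrinks on a fixed schedule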


Motivated Learning For The Development Of Autonomous Agents, Janusz A. Starzyk, James T. Graham, Pawel Raif, Ah-Hwee Tan Apr 2012

A new machine learning approach known as motivated learning (ML) is presented in this work. Motivated learning drives a machine to develop abstract motivations and choose its own goals. ML also provides a self-organizing system that controls a machine’s behavior based on competition between dynamically-changing pain signals. This provides an interplay of externally driven and internally generated control signals. It is demonstrated that ML not only yields a more sophisticated learning mechanism and system of values than reinforcement learning (RL), but is also more efficient in learning complex relations and delivers better performance than RL in dynamically changing environments. In …


A Biologically-Inspired Cognitive Agent Model Integrating Declarative Knowledge And Reinforcement Learning, Ah-Hwee Tan, Gee-Wah Ng Sep 2010

The paper proposes a biologically-inspired cognitive agent model, known as FALCON-X, based on an integration of the Adaptive Control of Thought (ACT-R) architecture and a class of self-organizing neural networks called fusion Adaptive Resonance Theory (fusion ART). By replacing the production system of ACT-R with a fusion ART model, FALCON-X integrates high-level deliberative cognitive behaviors and real-time learning abilities, based on biologically plausible neural pathways. We illustrate how FALCON-X, consisting of a core inference area interacting with the associated intentional, declarative, perceptual, motor and critic memory modules, can be used to build virtual robots for battles in a simulated RoboCode …


Motivated Learning As An Extension Of Reinforcement Learning, Janusz Starzyk, Pawel Raif, Ah-Hwee Tan Jan 2010

We have developed a unified framework to conduct computational experiments with both learning systems: motivated learning, based on a goal creation system, and reinforcement learning, using the Q-learning algorithm. Future work includes combining motivated learning (to set abstract motivations and manage goals) with reinforcement learning (to learn proper actions). This will allow testing of motivated learning on typical reinforcement learning benchmarks with large dimensionality of the state/action spaces.
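
The reinforcement learning half of the framework, the standard tabular Q-learning algorithm, is sketched below with illustrative hyperparameters; the motivated-learning goal creation system is not sketched.

    # Standard tabular Q-learning with epsilon-greedy action selection.
    import numpy as np

    n_states, n_actions = 10, 4
    alpha, gamma, eps = 0.1, 0.95, 0.1
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)

    def select_action(s):
        # Epsilon-greedy: explore with probability eps, otherwise act greedily.
        if rng.random() < eps:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[s]))

    def q_update(s, a, r, s_next, done):
        # Move Q[s, a] toward the bootstrapped target r + gamma * max_a' Q[s', a'].
        target = r + (0.0 if done else gamma * np.max(Q[s_next]))
        Q[s, a] += alpha * (target - Q[s, a])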