Model-based reinforcement learning has been extensively studied as a way to increase the data efficiency of reinforcement learning for systems modeled in discrete time and space. Capitalizing on recent developments in model-free reinforcement learning for systems modeled in continuous time and space, we developed novel model-based reinforcement learning methods for such systems that substantially improve their data efficiency and their usefulness for online optimal feedback control. Using local approximation methods, we further improved the computational efficiency of model-based reinforcement learning to enable real-time learning. The developed methods achieve model-based reinforcement learning in the presence of modeling uncertainty and guarantee stability during the learning phase, as demonstrated in our recent result, “Online Approximate Optimal Station Keeping of a Marine Craft in the Presence of an Irrotational Current,” published in IEEE Transactions on Robotics.
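The core idea behind the data-efficiency gain can be illustrated with a minimal sketch (an illustrative toy example, not the method from the paper): identify a model of the unknown dynamics from a small batch of data, then compute an optimal feedback policy from the learned model rather than from further interaction. The scalar system (a = -1, b = 2) and the quadratic cost weights below are hypothetical choices for the demonstration.

```python
import numpy as np

# Toy model-based RL for a scalar linear system x_dot = a*x + b*u:
# (1) identify the unknown parameters (a, b) from a small batch of
#     (x, u, x_dot) samples via least squares, then
# (2) compute the optimal feedback gain from the learned model by
#     solving the scalar continuous-time algebraic Riccati equation.

rng = np.random.default_rng(0)
a_true, b_true = -1.0, 2.0  # hypothetical "unknown" dynamics

# A small batch of data -- the data efficiency of model-based RL comes
# from reusing these samples through the identified model.
x = rng.uniform(-1, 1, size=20)
u = rng.uniform(-1, 1, size=20)
x_dot = a_true * x + b_true * u

# Least-squares identification: x_dot ~ [x u] @ [a, b]
theta, *_ = np.linalg.lstsq(np.column_stack([x, u]), x_dot, rcond=None)
a_hat, b_hat = theta

# Optimal control from the learned model: minimize the integral of
# q*x^2 + r*u^2. Scalar Riccati equation: 2*a*p - p**2*b**2/r + q = 0,
# positive root p, optimal gain k = b*p/r, policy u = -k*x.
q, r = 1.0, 1.0
p = r * (a_hat + np.sqrt(a_hat**2 + q * b_hat**2 / r)) / b_hat**2
k = b_hat * p / r

print(a_hat, b_hat, k)  # learned model and resulting feedback gain
```

With noiseless data the identified parameters match the true ones, and the resulting closed loop a − b·k is stable by construction; a full treatment would add measurement noise, nonlinear dynamics, and online (real-time) updates, which is where the local approximation methods mentioned above come in.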

Publications:

Get in touch

rushikesh.kamalapurkar@okstate.edu
(405) 744-5900