Adaptive optimal control methods for multi-agent systems typically either rely on the restrictive assumption that the optimal value function depends only on local data or require knowledge of the states of all agents on the network; as such, they are not implementable when only local communication is allowed. This project develops methods for online, real-time learning in networks that are robust to modeling errors and rely only on local information at run time, by integrating state estimation, model validation, and model-free and model-based adaptive optimal control techniques into a partially observed, model-aware adaptive optimal control framework. The focus on real-time operation and simultaneous learning and execution requires integrating learning with control-theoretic considerations, such as system stability during the learning phase, that are rarely studied in the machine learning literature.
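As a point of reference for the model-based optimal control baseline that adaptive methods aim to recover without full model knowledge, the following is a minimal, illustrative sketch (not the project's method): discrete-time LQR computed by value iteration on the Riccati recursion, assuming a fully known linear model and centralized state access. The adaptive techniques developed in this project relax exactly these assumptions by learning the value function online from local measurements.

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, tol=1e-10, max_iter=10000):
    """Illustrative baseline: value iteration on the discrete-time
    Riccati recursion for x_{k+1} = A x_k + B u_k with stage cost
    x'Qx + u'Ru. Requires full knowledge of (A, B), unlike the
    online adaptive methods this project develops."""
    P = np.copy(Q)
    for _ in range(max_iter):
        # Optimal gain for the current value function estimate P
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Bellman backup: P_next = Q + A'P(A - BK)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < tol:
            return P_next, K
        P = P_next
    return P, K

# Example: double integrator with sampling period 0.1 s (hypothetical values)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = lqr_value_iteration(A, B, Q, R)
```

The converged `P` defines the optimal quadratic value function and `K` the stabilizing feedback gain; adaptive optimal control methods estimate these quantities during operation, which is why stability during the learning phase becomes a central concern.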

Publications:

Get in touch

rushikesh.kamalapurkar@okstate.edu
(405) 744-5900