Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, Ronald Ortner.
In International Conference on Artificial Intelligence and Statistics, 2013.
Abstract: |
We consider a reinforcement learning setting where the learner also has to deal with the problem of finding a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to have small regret with respect to any Markov model in the set. For this setting, the BLB algorithm has recently been proposed, which achieves regret of order T^{2/3} provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound. |
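To give a feel for the setting, here is a minimal Python sketch of the "test and eliminate" idea behind this line of work: candidate state-representation functions (maps from the interaction history to a state) are run in phases, and candidates whose average reward lags the best one are dropped. This is a hypothetical illustration, not the paper's algorithm: the names `make_toy_env`, `run_phase`, and `select_representation` are invented, and the optimistic base learner (UCRL-style in the paper's setting) is stubbed with a trivial policy.

```python
import random

def make_toy_env():
    """Toy two-state environment: reward 1 when the action matches the
    hidden state, which deterministically flips after every step.
    There are no resets, matching the online setting of the paper."""
    hidden = {"s": 0}
    def step(action):
        reward = 1.0 if action == hidden["s"] else 0.0
        hidden["s"] = 1 - hidden["s"]
        return reward, hidden["s"]  # the observation reveals the new state
    return step

def run_phase(env_step, phi, horizon):
    """Run a (stubbed) base learner on representation phi for `horizon`
    steps. A real instantiation would plug in an optimistic learner such
    as UCRL2 on the MDP induced by phi."""
    history, total = [], 0.0
    for _ in range(horizon):
        state = phi(history)
        # Stub policy: act on the candidate's state; an optimistic policy
        # computed from confidence bounds would be used instead.
        action = state if state in (0, 1) else random.choice((0, 1))
        reward, obs = env_step(action)
        history.append((action, obs))
        total += reward
    return total

def select_representation(env_step, candidates, phases=5, horizon=500, slack=0.1):
    """Phase-based elimination: keep the candidates whose per-phase average
    reward stays within `slack` of the best observed average."""
    active = list(range(len(candidates)))
    for _ in range(phases):
        avg = {i: run_phase(env_step, candidates[i], horizon) / horizon
               for i in active}
        best = max(avg.values())
        active = [i for i in active if avg[i] >= best - slack]
    return active

if __name__ == "__main__":
    phi_markov = lambda h: h[-1][1] if h else 0  # last observation = true state
    phi_blind = lambda h: 0                      # collapses all histories
    print(select_representation(make_toy_env(), [phi_markov, phi_blind]))  # [0]
```

In this toy run, the Markov representation earns average reward close to 1 while the blind one earns about 0.5, so the latter is eliminated after the first phase; the paper's contribution lies in making such a scheme work over a countably infinite model set without knowing the diameter of the underlying MDP.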
You can download the paper from the JMLR website (here) or from the HAL online open-access repository* (soon).
Bibtex: |
@InProceedings{Nguyen13,
  author    = "Nguyen, P. and Maillard, O.-A. and Ryabko, D. and Ortner, R.",
  title     = "Competing with an Infinite Set of Models in Reinforcement Learning",
  booktitle = "AISTATS",
  series    = {JMLR W\&CP 31},
  address   = "Arizona, USA",
  year      = "2013",
  pages     = "463--471"
} |
Related publications: |
Optimal regret bounds for selecting the state representation in reinforcement learning.
Selecting the State-Representation in Reinforcement Learning. |
--
* The HAL open-access online archive seeks to make research results available to the widest possible audience, independently of the major publishers, and cooperates with other large international archives such as arXiv.