Book Announcement: “Reinforcement Learning and Dynamic Programming Using Function Approximators”

Dear machine learning researchers,

We are pleased to announce the recent release of our book:
“Reinforcement Learning and Dynamic Programming Using Function Approximators”
(Lucian Busoniu, Robert Babuska, Bart De Schutter, and Damien Ernst)
in the Automation and Control Engineering series of Taylor & Francis CRC Press.

Book information:

Reinforcement learning (RL) can optimally solve decision and control problems involving complex dynamic systems, without requiring a mathematical model of the system. If a model is available, dynamic programming (DP), the model-based counterpart of RL, can be used. RL and DP are applicable in a variety of disciplines, including artificial intelligence, automatic control, economics, and medicine. Recent years have seen a surge of interest in RL and DP using compact, approximate representations of the solution, which enable algorithms to address realistic problems.

This book provides an in-depth introduction to RL and DP with function approximators, with a focus on continuous-variable control problems. A concise description of classical RL and DP (Chapter 2) builds the foundation for the remainder of the book. This is followed by an extensive review of the state-of-the-art in RL and DP with approximation, which combines algorithm development with theoretical guarantees, illustrative numerical examples, and algorithm comparisons (Chapter 3). Each of the final three chapters (4 to 6) is dedicated to a representative algorithm from the authors’ research. These three algorithms respectively belong to the three major classes of methods: approximate value iteration, approximate policy iteration, and approximate policy search. The features and performance of these algorithms are highlighted in comprehensive experimental studies on a range of control applications.
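To give a flavor of the approximate value iteration methods the book covers, here is a minimal fitted Q-iteration sketch on a hypothetical toy chain problem (the problem, the one-hot features, and all names below are illustrative assumptions, not taken from the book, which uses richer approximators such as fuzzy and RBF representations):

```python
import numpy as np

# Hypothetical toy problem: a 1D chain with states 0..4; action 0 moves left,
# action 1 moves right; reward 1 is earned on reaching the rightmost state.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def features(s, a):
    # One-hot state-action features: a stand-in for the compact
    # approximators (e.g. fuzzy or RBF) discussed in the book.
    phi = np.zeros(N_STATES * N_ACTIONS)
    phi[s * N_ACTIONS + a] = 1.0
    return phi

# A batch of transitions covering every state-action pair.
batch = [(s, a, *step(s, a)) for s in range(N_STATES) for a in range(N_ACTIONS)]

# Fitted Q-iteration: repeatedly regress on bootstrapped targets.
theta = np.zeros(N_STATES * N_ACTIONS)
for _ in range(100):
    X = np.array([features(s, a) for s, a, _, _ in batch])
    y = np.array([r + GAMMA * max(theta @ features(s2, b)
                                  for b in range(N_ACTIONS))
                  for _, _, s2, r in batch])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)

greedy = [int(np.argmax([theta @ features(s, a) for a in range(N_ACTIONS)]))
          for s in range(N_STATES)]
print(greedy)  # → [1, 1, 1, 1, 1]: the learned policy moves right everywhere
```

With one-hot features the regression step is exact, so this reduces to classical value iteration; the interesting questions treated in the book arise precisely when the approximator cannot represent the value function exactly.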

* A concise introduction to the basics of RL and DP
* A detailed treatment of RL and DP with function approximators, including theoretical results and illustrative examples
* A thorough treatment of policy search techniques
* Comprehensive experimental studies on a range of control problems, including real-time control results
* An extensive, illustrative convergence and consistency analysis of an approximate value iteration algorithm

For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. For researchers and practitioners in optimal and adaptive control, machine learning, artificial intelligence, and operations research, it offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they can adapt and apply to their own work.

Visit the book’s website for additional information, including the computer code used in the experimental studies and details on ordering the book.

Hoping you will find this book useful,


The authors