English | 2020 | ISBN: 978-1839210686 | 702 Pages | EPUB, MOBI | 91 MB
An example-rich guide for beginners to start their reinforcement learning and deep reinforcement learning journey with state-of-the-art algorithms
With significant enhancements in the quality and quantity of algorithms in recent years, this second edition of Hands-On Reinforcement Learning with Python has been revamped into an example-rich guide to learning state-of-the-art RL and deep RL algorithms with TensorFlow 2 and the OpenAI Gym toolkit.
In addition to exploring reinforcement learning basics and foundational concepts such as the Bellman equation, Markov decision processes, and dynamic programming algorithms, this second edition dives deep into the full spectrum of value-based, policy-based, and actor-critic RL methods. It explores state-of-the-art algorithms such as DQN, TRPO, PPO, ACKTR, DDPG, TD3, and SAC in depth, demystifying the underlying math and demonstrating implementations through simple code examples.
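As a taste of the Bellman-style updates those chapters build on, here is a minimal sketch of tabular value iteration on a toy two-state MDP. The states, actions, rewards, and transition probabilities below are illustrative inventions, not examples from the book, which works through Gym environments instead:

```python
# Minimal tabular value iteration on a hypothetical two-state MDP.
# P[state][action] -> list of (probability, next_state, reward) transitions.
P = {
    0: {0: [(1.0, 0, 0.0)],                  # stay put, no reward
        1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},  # risky move toward state 1
    1: {0: [(1.0, 0, 0.0)],                  # back to state 0
        1: [(1.0, 1, 2.0)]},                 # stay in state 1, reward 2
}

def value_iteration(P, gamma=0.9, theta=1e-8):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # V(s) = max_a sum_{s'} p(s'|s,a) * (r + gamma * V(s'))
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:  # converged
            return V

values = value_iteration(P)
```

Because staying in state 1 yields a reward of 2 every step, its value converges to 2 / (1 - 0.9) = 20, and state 0's value follows from the discounted chance of reaching it.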
The book has several new chapters dedicated to recent RL techniques, including distributional reinforcement learning, imitation learning, inverse RL, and meta RL. You will learn to leverage Stable Baselines, an improved implementation of OpenAI’s Baselines library, to implement popular RL algorithms effortlessly. The book concludes with an overview of promising research approaches, such as meta-learning and imagination-augmented agents.
By the end, you will become skilled in effectively employing RL and deep RL in your real-world projects.
What you will learn
- Understand core RL concepts including the methodologies, math, and code
- Train an agent to solve Blackjack, FrozenLake, and many other problems using OpenAI Gym
- Train an agent to play Ms. Pac-Man using a deep Q network (DQN)
- Learn policy-based, value-based, and actor-critic methods
- Master the math behind DDPG, TD3, TRPO, PPO, and many others
- Explore new avenues like distributional RL, meta RL, and inverse RL
- Use Stable Baselines to train an agent to walk and play Atari games
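All of the exercises above share the same agent-environment loop that OpenAI Gym standardizes: reset the environment, pick an action, step, collect the reward, repeat. The tiny corridor environment below is a hand-rolled stand-in that mimics the classic Gym `reset`/`step` interface so the sketch runs without any library; it is illustrative and not one of the book's environments:

```python
class CorridorEnv:
    """Toy 5-cell corridor mimicking the classic Gym interface.

    The agent starts in cell 0 and must reach cell 4.
    Actions: 0 = move left, 1 = move right. Reward 1.0 on reaching the goal.
    """
    N_CELLS = 5

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        move = 1 if action == 1 else -1
        self.pos = max(0, min(self.N_CELLS - 1, self.pos + move))
        done = self.pos == self.N_CELLS - 1
        reward = 1.0 if done else 0.0
        # (observation, reward, done, info), as in classic Gym
        return self.pos, reward, done, {}

def run_episode(env, policy, max_steps=100):
    """The standard loop: observe, act, step, accumulate reward."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(policy(obs))
        total_reward += reward
        if done:
            break
    return total_reward

always_right = lambda obs: 1  # trivial policy for this corridor
```

Real Gym environments, such as `gym.make("FrozenLake-v1")`, expose the same `reset`/`step` loop, so a learning agent slots in by replacing `policy` with something trained from experience.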