This article covers reinforcement learning (RL) and Deep Q-Learning using OpenAI’s Gym environment and TensorFlow 2, and we will implement a case study in Python. I assume that readers have a good understanding of reinforcement learning and deep learning; for beginners, I recommend reading this article before going further.

Introduction to Reinforcement Learning

Reinforcement learning is the branch of machine learning concerned with the actions an agent takes in an environment to maximize its rewards. Reinforcement learning differs from supervised and unsupervised learning in that it does not need labeled input/output pairs, nor does it require sub-optimal actions to be explicitly corrected.

Let’s say I want to make a bot (agent) that plays ludo against three other players, where the board and dice form the environment. The bot observes the dice roll and the positions of its tokens (state), picks the right token to move (action), and receives feedback from the environment based on how the move improves its position (reward).
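To ground these terms in code before we get to anything ludo-sized, here is a minimal sketch of the agent–environment loop in Gym. CartPole-v1 is only a placeholder environment, the agent acts randomly rather than learning, and the sketch assumes the classic Gym API in which step() returns four values (newer Gym/Gymnasium releases return five, and reset() returns a tuple).

```python
import gym

# A minimal sketch of the agent-environment loop, assuming the classic
# Gym API. CartPole-v1 is just a placeholder environment for illustration.
env = gym.make("CartPole-v1")

state = env.reset()   # initial state (observation)
done = False
total_reward = 0.0

while not done:
    action = env.action_space.sample()                  # a random action; no learning yet
    next_state, reward, done, info = env.step(action)   # the environment responds
    total_reward += reward
    state = next_state

print("Episode reward:", total_reward)
env.close()
```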

For a deeper understanding, you can learn reinforcement learning in detail from here.

Across all of reinforcement learning, the **Markov Decision Process (MDP)** plays a huge role. An important point to notice is that each state presented in the environment results from its previous state, which is itself the result of the state before it. Thus the present state of any environment is, in effect, a composition of the information gathered from previous states.

So the task of any agent is to perform actions that earn the highest reward the environment provides. The Markov Decision Process lets an agent decide on an optimal action in a given state so as to maximize the reward. The probability of choosing a particular action from a state at a particular time is called the policy, and the goal of the Markov Decision Process is to find the optimal policy.
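In the standard notation (which this article has not introduced yet, so read the symbols as the usual RL conventions rather than definitions from the text), the policy and the MDP objective can be written as:

```latex
% The policy: the probability of choosing action a in state s at time t
\pi(a \mid s) = \Pr(A_t = a \mid S_t = s)

% The goal of the MDP: find the policy that maximizes the expected
% discounted sum of rewards (gamma is the discount factor)
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \right],
\qquad 0 \le \gamma < 1
```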

Introduction to Q-Learning Algorithm

From start to finish, the Q-Learning process follows four steps and two sub-processes. So, let’s discuss each of them in detail.

  1. **Initialize parameters** – In this step, the model is set up with the actions and states that an agent can take in a given environment at a given time.
  2. **Identify the current state** – An agent needs to store records of past steps to act optimally and earn maximized rewards. To act in the current state, it first needs to identify that state and the combination of actions available from it.
  3. **Choose an action and gain experience** – During initialization, a Q-table is generated that holds information about the combinations of actions and states. The agent looks up its past experience and compares the stored values; if it encounters a new situation, the Q-table gets updated for the next step.
  4. **Update the reward in the Q-table and determine the next state** – After gaining experience, the agent receives a reward from the environment. That reward value is recorded in the Q-table as experience data, which helps in choosing actions in subsequent steps (a minimal end-to-end sketch follows this list).
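Putting the four steps together, here is a minimal tabular Q-learning sketch. The environment (FrozenLake-v1, chosen because its states and actions are discrete, which is what a Q-table requires), the hyperparameter values, and the fixed epsilon-greedy exploration rate are all illustrative assumptions rather than recommendations from this guide, and the code again assumes the classic Gym API.

```python
import gym
import numpy as np

# Illustrative hyperparameters -- reasonable defaults, not tuned values.
alpha = 0.1      # learning rate
gamma = 0.99     # discount factor
epsilon = 0.1    # exploration rate for epsilon-greedy action selection
episodes = 5000

# FrozenLake-v1 is assumed here because its state and action spaces are
# discrete, so a Q-table with one row per state fits directly.
env = gym.make("FrozenLake-v1")

# Step 1: initialize the Q-table, one row per state, one column per action.
q_table = np.zeros((env.observation_space.n, env.action_space.n))

for episode in range(episodes):
    state = env.reset()   # Step 2: identify the current state
    done = False
    while not done:
        # Step 3: choose an action (explore with probability epsilon,
        # otherwise exploit the best known action) and gain experience.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = np.argmax(q_table[state])

        next_state, reward, done, info = env.step(action)

        # Step 4: update the reward estimate in the Q-table using the
        # standard Q-learning update rule, then move to the next state.
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state
```

The update in the inner loop is the standard Q-learning rule: it nudges the stored value for the (state, action) pair toward the observed reward plus the discounted best value of the next state, which is exactly the reward bookkeeping described in step 4 above.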
