We're going to build an AI that plays games on its own. Using deep learning, we can build an AI that can play not only the game it's designed for but other games as well. The London-based DeepMind already did this in 2015. DeepMind's goal is to create Artificial General Intelligence: a single algorithm that can solve any problem with human-level thinking or greater.
They reached an important milestone by creating an algorithm that can master 49 different Atari games with no game-specific hyperparameter tuning whatsoever. The algorithm is called the Deep Q-Network (DQN), and DeepMind has made the code open source on GitHub. The algorithm takes only two inputs:
1. Raw pixels of the game
2. Game Score
Based on just those, it has to achieve its objective: maximize the score. It uses a deep convolutional neural network to interpret the pixels, an architecture inspired by the human visual cortex.
A neural network is built from multiple layers of neurons, where each neuron acts as a detection filter for the presence of a specific feature in an image, and the layers become increasingly abstract in what they represent. The first layer might detect simple features like edges, the next layer would use those edges to detect simple shapes, and the layer after that would use those shapes to detect something even more complex.

Once the network has interpreted the pixels, the AI needs to act on that knowledge in some way. You've probably heard of supervised and unsupervised learning before, but this type of learning is called reinforcement learning. Reinforcement learning is all about trial and error: the algorithm learns from its previous experience, as humans do. It's about teaching an AI to select actions that maximize future rewards. It's similar to how humans train animals: you give the animal a goal, and if it completes the goal successfully you give it a treat; if it doesn't, you withhold the treat.
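The edge-detecting filters described above can be sketched as a toy convolution in plain Python. The image and kernel here are made-up illustrative values, not part of any real network:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as CNNs use)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A tiny 4x4 "image": dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# A vertical-edge kernel: it responds where brightness changes left-to-right.
edge_kernel = [
    [-1, 1],
    [-1, 1],
]

response = conv2d(image, edge_kernel)
```

The filter's output is strongest exactly where the dark-to-bright boundary sits, and zero over the flat regions; stacking layers of such filters is what lets deeper layers build shapes out of edges.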
While the game is running, at each time step the AI executes an action based on what it observes and may or may not receive a reward. If it does receive a reward, we adjust our weights so that the AI is more likely to take similar actions in the future.
Q-learning is a type of reinforcement learning that learns the optimal action-selection behavior, or policy, for the AI without a prior model of the environment. So based on the current game state, like an enemy spaceship being within shooting distance, the AI learns to take the action of shooting it. The mapping from state to action is its policy, and it gets better with training. Deep Q-learning also uses experience replay: it stores its past experiences and learns from random samples of them.
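As a rough sketch of what the Q-learning update and experience replay do, here's a tabular toy version — not DeepMind's deep network, and the state and action names are hypothetical:

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9   # learning rate and discount factor (example values)
Q = defaultdict(float)    # Q(state, action) table, everything starts at 0
replay = []               # experience replay: stored (s, a, r, s') transitions

def update(state, action, reward, next_state, actions):
    # Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

def replay_step(batch_size, actions):
    # Re-learn from a random batch of stored past experiences.
    for s, a, r, s2 in random.sample(replay, min(batch_size, len(replay))):
        update(s, a, r, s2, actions)

actions = ["shoot", "move_left", "move_right"]
# One rewarded experience: shooting while an enemy was in range paid off.
replay.append(("enemy_in_range", "shoot", 1.0, "enemy_destroyed"))
update("enemy_in_range", "shoot", 1.0, "enemy_destroyed", actions)
replay_step(batch_size=4, actions=actions)
# Q("enemy_in_range", "shoot") grows toward the reward with each pass.
```

Each replay pass nudges the estimated value of the rewarded action upward, which is exactly the "adjust so the AI repeats rewarded actions" behavior described above.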
Here we're going to use TensorFlow and Gym to implement our game bot. We'll use TensorFlow, Google's machine learning library, to create the convolutional neural network, and Gym, OpenAI's reinforcement learning toolkit, to provide the game environment.
We'll start by importing our dependencies.
The environment is a helper class that initializes the game environment. Here it's Space Invaders, but since we're using reinforcement learning we can easily swap in a whole host of different environments.
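The Gym environment interface boils down to `reset()` and `step(action)`. Here's a minimal stand-in with the same shape — the `FakeInvaders` class and its numbers are invented for illustration (with Gym installed you'd create a real Atari environment instead, and Gym's `step` additionally returns an info dict):

```python
import random

class FakeInvaders:
    """Stand-in for a Gym environment; same reset/step interface shape."""
    def reset(self):
        self.ships_left = 5
        return self.ships_left                    # initial observation

    def step(self, action):
        if action == "shoot":
            self.ships_left -= 1
        reward = 1.0 if action == "shoot" else 0.0
        done = self.ships_left == 0               # episode ends: all ships down
        return self.ships_left, reward, done      # observation, reward, done

env = FakeInvaders()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    # A real agent would pick actions from its policy; here we act randomly.
    action = random.choice(["shoot", "move_left", "move_right"])
    obs, reward, done = env.step(action)
    total_reward += reward
```

The agent-environment loop is identical no matter which game sits behind `step`, which is why swapping environments is so easy.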
We also import the dqn (Deep Q-Network) helper class to monitor gameplay, and our training class to initialize reinforcement learning. Once all our dependencies are imported, we can initialize our environment.
First, we'll populate our initial replay memory with 50,000 plays, so the agent starts with a little experience. Then we'll initialize our convolutional neural network to start reading in pixels, and our Q-learning algorithm to start updating the agent's decisions based on the pixels it receives. After running our AI in the terminal, here's what we came up with:
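The prefill step could look something like this sketch. `StubEnv`, `sample_action`, and the transition format are assumptions for illustration — the post's actual helper classes aren't shown:

```python
import random

REPLAY_SIZE = 50000   # the post's 50,000 initial plays

def prefill_replay(env, size=REPLAY_SIZE):
    """Fill the replay memory with random play before training starts."""
    memory = []
    state = env.reset()
    while len(memory) < size:
        action = env.sample_action()               # random exploration
        next_state, reward, done = env.step(action)
        memory.append((state, action, reward, next_state, done))
        state = env.reset() if done else next_state
    return memory

class StubEnv:
    """Tiny stand-in environment so the sketch runs on its own."""
    def reset(self):
        self.t = 0
        return self.t
    def sample_action(self):
        return random.randint(0, 3)
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 10           # state, reward, done

memory = prefill_replay(StubEnv(), size=100)       # small size for the demo
```

Starting from a memory full of random transitions means the very first training batches already have something to sample from, instead of learning from an empty buffer.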
|Space invaders in terminal|