Reinforcement learning not only requires a lot of background knowledge to get started, it also requires tooling to test your ideas. Since this process is lengthy and hard, OpenAI stepped in to help. With the OpenAI Gym, they let you get started developing and comparing reinforcement learning algorithms in an easy-to-use way.
For this blog post, we need several components installed up front to make our lives easier:
- Windows WSL (Windows Subsystem for Linux) - There are different distros; I went for Ubuntu https://www.microsoft.com/en-us/store/p/ubuntu/9nblggh4msv6, but you can also go for openSUSE, Kali, Debian, …
Installing our dependencies
Install Xming for Windows: https://sourceforge.net/projects/xming/
Open up your WSL and run the following commands:
# Install Python + dependencies
sudo apt-get install -y python-dev
sudo apt-get install -y python-pip
sudo apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb xorg-dev python-opengl libboost-all-dev libsdl2-dev swig
sudo pip install werkzeug
sudo pip install itsdangerous
sudo pip install click

# Export our display settings for Xming
export DISPLAY=localhost:0.0
echo 'export DISPLAY=localhost:0.0' >> ~/.bashrc
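X11 forwarding from WSL tends to fail silently when `DISPLAY` is missing, so it is worth verifying the variable before launching anything graphical. A minimal sketch (the `check_display` helper below is purely illustrative, not part of any library):

```python
import os

def check_display(env=None):
    """Return the configured X display string, or raise if none is set.

    `env` defaults to the real process environment; a plain mapping can
    be passed in instead for testing. (Illustrative helper only.)
    """
    env = os.environ if env is None else env
    display = env.get("DISPLAY")
    if not display:
        raise RuntimeError("DISPLAY is not set; run: export DISPLAY=localhost:0.0")
    return display

# Example: check_display() raises unless DISPLAY was exported as above.
```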
Setting up our OpenAI Gym
# OpenAI Gym
git clone https://github.com/openai/gym-http-api
cd gym-http-api
sudo pip install -r requirements.txt
sudo pip install -e '.[all]'
cd binding-js
npm install gulp
Running dev server
Once you have installed everything correctly, you can start the OpenAI Gym server with the following commands:
cd gym-http-api
python gym_http_server.py
and test it with the following script in a different terminal:
cd gym-http-api/binding-js/dist/examples
node exampleAgent.js
This should open up an Xming display running a cartpole example.
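If you prefer Python over JavaScript, the repo's own HTTP client (`gym_http_client.py`, installed by the steps above) can drive the same cartpole environment. The sketch below assumes the client methods `env_create`, `env_reset`, `env_step`, and `env_close` as described in the gym-http-api README; run it while the server from the previous step is still up:

```python
import random

def run_episode(client, env_id="CartPole-v0", max_steps=200):
    """Drive one random-action episode over the HTTP API; return the reward.

    `client` is expected to provide the gym-http-api client methods
    (env_create, env_reset, env_step, env_close).
    """
    instance_id = client.env_create(env_id)
    client.env_reset(instance_id)
    total_reward, done, steps = 0.0, False, 0
    while not done and steps < max_steps:
        action = random.randint(0, 1)  # CartPole-v0 has two discrete actions
        _, reward, done, _ = client.env_step(instance_id, action, render=True)
        total_reward += reward
        steps += 1
    client.env_close(instance_id)
    return total_reward

# Usage (from the gym-http-api directory, with the server running):
#   from gym_http_client import Client
#   print(run_episode(Client("http://127.0.0.1:5000")))
```

With `render=True` each step should show up in the same Xming window as the JavaScript example.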
Note: Make sure that your Xming display server is started on Windows and that it is visible in your taskbar!