Grid World
Grid World Environments
These are grid world environments intended to be used with reinforcement learning algorithms. They follow the conventions found in many reinforcement learning environments, and are specifically based on the standards of OpenAI Gym. However, my grid worlds give the user maximum control over the environment and include a renderer with the option to show an agent's policy.
The repo can be found here: https://github.com/jfnaro/grid_world_env
You can specify the following values, all of which are optional:
Height
Width
Starting position
Ending position (can have multiple)
Hole locations, or a quantity of randomly placed holes
Reward for reaching goal
Penalty for reaching hole
Penalty for taking a step
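To make the options above concrete, here is a minimal sketch of a deterministic grid world in the Gym reset/step style. The constructor parameters mirror the optional values listed above, but every name and default here is my own illustration, not the repo's actual API:

```python
class GridWorld:
    """Minimal deterministic grid world in the Gym reset/step style.

    All constructor parameters are optional, mirroring the options listed
    above; the names and defaults are illustrative, not the repo's API.
    """

    # Action index -> (row delta, column delta): up, down, left, right.
    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}

    def __init__(self, height=4, width=4, start=(0, 0), goals=((3, 3),),
                 holes=((1, 1),), goal_reward=1.0, hole_penalty=-1.0,
                 step_penalty=-0.01):
        self.height, self.width = height, width
        self.start = start
        self.goals = set(goals)    # multiple ending positions are allowed
        self.holes = set(holes)
        self.goal_reward = goal_reward
        self.hole_penalty = hole_penalty
        self.step_penalty = step_penalty
        self.pos = start

    def reset(self):
        self.pos = self.start
        return self.pos

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        # Clip the move so the agent stays on the grid.
        r = min(max(self.pos[0] + dr, 0), self.height - 1)
        c = min(max(self.pos[1] + dc, 0), self.width - 1)
        self.pos = (r, c)
        if self.pos in self.goals:
            return self.pos, self.goal_reward, True, {}
        if self.pos in self.holes:
            return self.pos, self.hole_penalty, True, {}
        return self.pos, self.step_penalty, False, {}
```

Usage follows the familiar Gym loop: call `reset()` once, then `step(action)` until the returned `done` flag is true.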
The repo contains both a deterministic grid world and a stochastic one, with the deterministic one being more robust. Improvements will be made to the stochastic environment when the mood strikes me.
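One common way to make a grid world stochastic is a "slippery" transition model, where the chosen action is occasionally swapped for a random one. The repo may implement its stochasticity differently; this sketch, with its hypothetical `slip_prob` parameter, just illustrates the idea:

```python
import random

def stochastic_action(intended, n_actions=4, slip_prob=0.2, rng=None):
    """With probability slip_prob, replace the intended action with a
    uniformly random different action. This slip model is an assumption
    for illustration, not necessarily the repo's transition model."""
    rng = rng or random.Random()
    if rng.random() < slip_prob:
        others = [a for a in range(n_actions) if a != intended]
        return rng.choice(others)
    return intended
```

With `slip_prob=0` this reduces to the deterministic environment, which makes the two easy to compare in experiments.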
Grid World Renderer
My renderer was created to render the environments from my grid worlds, but it could easily render any other grid world; you need only pass in a two-dimensional array or list. You can select how many pixels you would like each square's sides to be, with a default of 50. You also have the option of passing in a dictionary of value-color pairs if you would like to specify the color of each space, but the default works quite well too. Below is an example of a grid world created entirely with the default parameters.
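The mapping from a 2-D grid to colored squares can be sketched as below. The palette, the fallback color, and the function name are all hypothetical stand-ins for whatever the renderer actually uses:

```python
def grid_to_pixels(grid, cell_size=50, colors=None):
    """Expand a 2-D grid of values into a pixel grid of colors, one
    cell_size-by-cell_size block per square. The value-color pairs and
    the "gray" fallback are illustrative defaults, not the renderer's
    actual palette."""
    colors = colors or {0: "white", 1: "black", 2: "green", 3: "red"}
    pixels = []
    for row in grid:
        # Repeat each cell's color cell_size times horizontally...
        pixel_row = [colors.get(v, "gray") for v in row for _ in range(cell_size)]
        # ...and repeat the whole row cell_size times vertically.
        pixels.extend(list(pixel_row) for _ in range(cell_size))
    return pixels
```

A real renderer would hand this pixel grid to a drawing library; the point here is just that a value-color dictionary plus a cell size fully determines the image.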
My favorite feature of my grid world renderer is its policy renderer. You can choose the color you would like the arrows to be, but the default is deep pink. At present, the renderer will only display one action, even if a given policy determines multiple actions to be equally optimal. That is an easy fix, though, and I will make the change when I eventually come back to this project.
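The single-arrow behavior described above falls out naturally if the renderer picks an action with a plain argmax, since an argmax breaks ties by taking the first maximal action. A small sketch, with hypothetical arrow characters standing in for the rendered arrows:

```python
# Action index -> arrow glyph; these characters are illustrative
# stand-ins for the arrows the renderer actually draws.
ARROWS = {0: "^", 1: "v", 2: "<", 3: ">"}

def policy_arrow(action_values):
    """Choose one arrow for a state from its action values. A plain
    argmax breaks ties by taking the first maximal action, which is why
    only a single arrow appears even when several actions tie."""
    best = max(range(len(action_values)), key=lambda a: action_values[a])
    return ARROWS[best]
```

Showing every tied action would only require collecting all indices whose value equals the maximum instead of taking the first one.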
Thank you for reading about my grid world project.
As I mentioned above, the repo can be found at: https://github.com/jfnaro/grid_world_env