Commit

initial commit
qlan3 committed Feb 6, 2024
1 parent 5386dcc commit 951dbec
Showing 53 changed files with 4,038 additions and 0 deletions.
174 changes: 174 additions & 0 deletions .gitignore
@@ -0,0 +1,174 @@
# Created by https://www.gitignore.io/api/python,windows,visualstudiocode
# Edit at https://www.gitignore.io/?templates=python,windows,visualstudiocode

# My ignores
script.sh
*logs*
logfile
*output*
*DS_Store*
job_idx_*
procfile

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don’t work, or not
# install all needed dependencies.
#Pipfile.lock

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

### VisualStudioCode ###
.vscode
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json

### VisualStudioCode Patch ###
# Ignore all local history of files
.history

### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db

# Dump file
*.stackdump

# Folder config file
[Dd]esktop.ini

# Recycle Bin used on file shares
$RECYCLE.BIN/

# Windows Installer files
*.cab
*.msi
*.msix
*.msm
*.msp

# Windows shortcuts
*.lnk

# End of https://www.gitignore.io/api/python,windows,visualstudiocode
119 changes: 119 additions & 0 deletions README.md
@@ -0,0 +1,119 @@
# Jaxplorer

Jaxplorer is a Jax reinforcement learning framework for **exploring** new ideas.
For a PyTorch reinforcement learning framework, please check [Explorer](https://github.com/qlan3/Explorer).

Note: this repo is still under development and at an early stage.

## Requirements

- Python: 3.11
- [Jax](https://jax.readthedocs.io/en/latest/installation.html): >=0.4.20
- [Gymnasium](https://github.com/Farama-Foundation/Gymnasium): `pip install gymnasium==0.29.1`
- [Mujoco](https://github.com/google-deepmind/mujoco): `pip install mujoco==2.3.7`
- [Gymnasium(mujoco)](https://gymnasium.farama.org/environments/mujoco/): `pip install gymnasium[mujoco]`
- Others: Please check `requirements.txt`. A quick version-check sketch follows below.
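
After installation, the pinned versions above can be compared against what is actually installed. The snippet below is only a convenience sketch using the Python standard library, not part of the repo; the package names are the standard PyPI distributions.

``` python
# Quick sanity check (a sketch, not part of the repo): compare installed
# package versions against the versions listed in the requirements above.
import importlib.metadata as md

for pkg, listed in [("jax", ">=0.4.20"), ("gymnasium", "0.29.1"), ("mujoco", "2.3.7")]:
    try:
        print(f"{pkg}: installed {md.version(pkg)} (listed: {listed})")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed (listed: {listed})")
```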


## Implemented algorithms

- [Deep Q-Learning (DQN)](https://users.cs.duke.edu/~pdinesh/sources/MnihEtAlHassibis15NatureControlDeepRL.pdf)
- [Double Deep Q-learning (DDQN)](https://arxiv.org/pdf/1509.06461.pdf)
- [Maxmin Deep Q-learning (MaxminDQN)](https://arxiv.org/pdf/2002.06487.pdf)
- [Proximal Policy Optimisation (PPO)](https://arxiv.org/pdf/1707.06347.pdf)
- [Soft Actor-Critic (SAC)](https://arxiv.org/pdf/1812.05905.pdf)
- [Deep Deterministic Policy Gradients (DDPG)](https://arxiv.org/pdf/1509.02971.pdf)
- [Twin Delayed Deep Deterministic Policy Gradients (TD3)](https://arxiv.org/pdf/1802.09477.pdf)
- [Continuous Deep Q-Learning with Model-based Acceleration (NAF)](https://arxiv.org/pdf/1603.00748.pdf): model-free version; a different exploration strategy is applied for simplicity.

TODO: add more algorithms and improve the performance of PPO.

## Experiments

### Train & Test

All hyperparameters, including parameters for grid search, are stored in a configuration file in the directory `configs`. To run an experiment, a configuration index is first used to generate the configuration dict that corresponds to that index; the experiment defined by this dict is then run. All results, including log files, are saved in the directory `logs`. Please refer to the code for details.
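
The mapping from a configuration index to a configuration dict is conceptually a grid-search lookup. The sketch below only illustrates the idea; it is not the repo's `utils/sweeper.py`, and it assumes, purely for illustration, a flat JSON config that maps each hyperparameter name to a list of candidate values.

``` python
# Illustrative sketch (not the repo's actual sweeper) of mapping a 1-based
# configuration index to one hyperparameter combination from a grid.
import itertools
import json

def config_dict_from_index(config_file: str, config_idx: int) -> dict:
    with open(config_file) as f:
        grid = json.load(f)  # assumed form: {"lr": [1e-3, 1e-4], "batch_size": [32, 64]}
    keys = sorted(grid)
    combos = list(itertools.product(*(grid[k] for k in keys)))
    # Indexes wrap around: idx and idx + len(combos) give the same dict,
    # which matches the remainder rule described in the Grid Search section.
    return dict(zip(keys, combos[(config_idx - 1) % len(combos)]))
```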

For example, run the experiment with configuration file `classic_dqn.json` and configuration index `1`:

``` bash
python main.py --config_file ./configs/classic_dqn.json --config_idx 1
```

The models are tested for one episode after every `test_per_episodes` training episodes; `test_per_episodes` can be set in the configuration file.
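
This periodic evaluation amounts to a simple modular check; the following is only an illustrative sketch, not the repo's training loop, and `should_test` is a hypothetical helper.

``` python
# Illustrative sketch of the evaluation schedule described above; not the
# repo's actual training loop.
def should_test(episode: int, test_per_episodes: int) -> bool:
    """Run one evaluation episode every `test_per_episodes` training episodes."""
    return test_per_episodes > 0 and episode % test_per_episodes == 0

# e.g. with test_per_episodes=10, episodes 10, 20, 30, ... trigger a test.
```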


### Grid Search (Optional)

First, we compute the total number of combinations in a configuration file (e.g., `classic_dqn.json`):

`python utils/sweeper.py`

The output will be:

`Number of total combinations in classic_dqn.json: 18`

Then we run through all configuration indexes from `1` to `18`. The simplest way is to use a bash script:

``` bash
for index in {1..18}
do
python main.py --config_file ./configs/classic_dqn.json --config_idx $index
done
```

GNU [Parallel](https://www.gnu.org/software/parallel/) is usually a better choice for scheduling a large number of jobs:

``` bash
parallel --eta --ungroup python main.py --config_file ./configs/classic_dqn.json --config_idx {1} ::: $(seq 1 18)
```

Configuration indexes with the same remainder (when divided by the total number of combinations) share the same configuration dict. So for multiple runs, we only need to add multiples of the total number of combinations to the configuration index. For example, for 5 runs of configuration index `1`:

``` bash
for index in 1 19 37 55 73
do
python main.py --config_file ./configs/classic_dqn.json --config_idx $index
done
```

Or, more simply:
``` bash
parallel --eta --ungroup python main.py --config_file ./configs/classic_dqn.json --config_idx {1} ::: $(seq 1 18 90)
```

### Slurm (Optional)

Slurm is supported as well. Please check `submit.py`.
TODO: add more details.


### Analysis (Optional)

To analyze the experimental results, just run:

`python analysis.py`

Inside `analysis.py`, `unfinished_index` prints the configuration indexes of unfinished jobs, based on the existence of result files. `memory_info` prints memory usage information and generates a histogram of the memory usage distribution in the directory `logs/classic_dqn/0`; similarly, `time_info` prints time information and generates a histogram of the run-time distribution in the same directory. Finally, `analyze` generates `csv` files that store the training and test results. Please check `analysis.py` for more details; more functions are available in `utils/plotter.py`.
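
Once `analyze` has produced the `csv` files, they can be inspected with standard tools. The snippet below is only a sketch; the file path and the columns it prints are hypothetical placeholders, not the repo's actual output layout.

``` python
# Sketch of inspecting one generated csv result file with pandas.
# The path below is a hypothetical placeholder; check the files that
# `analyze` actually writes for the real names and layout.
import pandas as pd

result_path = "logs/classic_dqn/1/result_Test.csv"  # hypothetical path
df = pd.read_csv(result_path)
print(df.describe())  # quick summary statistics
print(df.tail())      # the most recent test results
```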

TODO: add more details about hyper-parameter comparison.


## Cite

If you find this repo useful for your research, please cite my related paper when applicable; otherwise, please cite this repo:

~~~bibtex
@misc{Jaxplorer,
  author = {Lan, Qingfeng},
  title = {A Jax Reinforcement Learning Framework for Exploring New Ideas},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/qlan3/Jaxplorer}}
}
~~~

## Acknowledgements

- [Explorer](https://github.com/qlan3/Explorer)
- [Jax RL](https://github.com/ikostrikov/jaxrl)
- [CleanRL](https://github.com/vwxyzjn/cleanrl)