The following applies to DDPG and TD3, and possibly other models. These libraries were installed in a virtual environment:
numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0
Episode rewards do not seem to be updated in model.learn() before callback.on_step() is called. Depending on which callback.locals variable is used, this means that:
- episode rewards may not be up to date when read inside the callback (see the sketch below)
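A minimal sketch of how one might observe this from a custom callback, assuming stable-baselines 2.10's BaseCallback API. The "episode_rewards" key used here is an assumption: the variables exposed through callback.locals vary by algorithm.

```python
# Sketch of the setup described above (stable-baselines 2.10, gym 0.14).
# The "episode_rewards" locals key is an assumption; the variables
# exposed through callback.locals differ between algorithms.
import gym
from stable_baselines import TD3
from stable_baselines.common.callbacks import BaseCallback

class RewardInspector(BaseCallback):
    def _on_step(self) -> bool:
        # self.locals mirrors the local variables of model.learn();
        # depending on where on_step() fires, this list may not yet
        # reflect the episode that just finished.
        rewards = self.locals.get("episode_rewards")
        if rewards:
            print(f"step {self.num_timesteps}: last episode reward {rewards[-1]}")
        return True

model = TD3("MlpPolicy", gym.make("Pendulum-v0"))
model.learn(total_timesteps=5000, callback=RewardInspector())
```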
There seem to be some fragile spots in our code that could fail easily. I suggest adding more unit tests for the following (a convergence-test sketch follows this list):
- Custom agents (there are only VPG and PPO on CartPole-v0 as of now; we should preferably add more to cover discrete off-policy, continuous off-policy, and continuous on-policy)
- Evaluation for the Bandits and Classical agents
- Testing of convergence of agents as proposed i
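A rough sketch of what such a convergence test could look like. Since the excerpt doesn't show this repository's own training API, stable-baselines' PPO2 stands in for the custom agents, and the reward threshold is illustrative, not a tuned target:

```python
# Illustrative convergence test (pytest). PPO2 is a stand-in for the
# repository's own agents; the threshold is a loose bound chosen only
# to confirm the agent beats a random policy.
import gym
import pytest
from stable_baselines import PPO2
from stable_baselines.common.evaluation import evaluate_policy

@pytest.mark.parametrize("env_id,min_reward", [("CartPole-v0", 120.0)])
def test_agent_converges(env_id, min_reward):
    model = PPO2("MlpPolicy", gym.make(env_id), verbose=0)
    model.learn(total_timesteps=20000)
    mean_reward, _ = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
    assert mean_reward > min_reward
```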
Hi, I installed FinRL on my MacBook using the code in the installation block of MultiCrypto_Trading.ipynb, but while testing I get the following warning: "Length of values (2023) does not match length of index (1999)", and my final episode_return differs from running the same code on Colab: 0.96 vs. 1.08. The issue goes away when changing time_interval to '1d', but it comes back for any smaller interval.
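For context, that message is pandas' standard complaint when a column assignment's length disagrees with the DataFrame index. A minimal, FinRL-independent reproduction (the column name is arbitrary):

```python
# Reproduces the pandas message behind the warning: assigning 2023
# values to a DataFrame with 1999 rows.
import numpy as np
import pandas as pd

df = pd.DataFrame(index=range(1999))
try:
    df["close"] = np.zeros(2023)
except ValueError as err:
    print(err)  # Length of values (2023) does not match length of index (1999)
```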
The documentation of the DQN agent (https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html) specifies that the log_interval parameter is "The number of timesteps before logging". However, when it is set to 1 (or any other value), logging does not happen at that pace: it happens every log_interval episodes, not every log_interval timesteps. In the example that accompanied this report, logging occurred every 200 timesteps.
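A minimal way to observe this, assuming stable-baselines3's standard API; with verbose=1, the rollout log block appears once per completed episode rather than once per timestep:

```python
# Reproduction sketch: despite the docstring, log_interval for DQN counts
# episodes, not timesteps, so log_interval=1 still logs once per episode.
from stable_baselines3 import DQN

model = DQN("MlpPolicy", "CartPole-v0", verbose=1)
model.learn(total_timesteps=2000, log_interval=1)
```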