[Bug]: Can't create the PPO model on Macbook M1 #1152
Comments
I'm not sure it is actually a duplicate. Looking at the code, it is supposed to run on the CPU.
@qgallouedec You're right. I've run the code on the CPU using Jupyter Notebook inside VSCode.
Thank you. Can you make the code minimal, so that the last line is the one that raises the issue? I think it is a problem of gym rendering in the notebook.
Okay, let me try to walk you through step by step to reproduce the error.
Error info:
Actually, when I run
My intuition was wrong. For the moment, I'm failing to reproduce the bug.
Sure, here you go.
Do you agree that the minimal code would be:

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

env = make_vec_env("CartPole-v1", n_envs=4)
PPO("MlpPolicy", env)
```

Does it work without multiple envs?

```python
from stable_baselines3 import PPO

PPO("MlpPolicy", "CartPole-v1")
```
I have the same configuration as you and I do not encounter any errors. I've noticed several times that the VSCode Jupyter extension is buggy and sometimes behaves a bit strangely. I remember solving one of those problems by simply reinstalling the virtual environment completely and rebooting my laptop. I think it's really worth a try.
Possibly memory-related: microsoft/vscode-jupyter#9741 (comment). It would explain why the reboot worked for me.
@qgallouedec I've tried rebooting the laptop and reinstalling the Anaconda virtual environment but the error remains the same :( |
Does the code fail when executed in a standard Python file?
When running it as a normal Python file (it's my
I've found a package that helps me track down the bug. Just import it and add
Having run the script with each of those models (PPO, A2C, DQN), I got the following results:
With model A2C, the error is
And with model DQN, it works fine.
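The diagnostic package isn't named in the comment above, but Python's built-in `faulthandler` module provides this kind of crash tracing out of the box; a minimal sketch of how it is typically used to get a traceback out of a "Fatal Python error: Segmentation fault":

```python
# Sketch using the standard-library faulthandler module (the commenter's
# package is unnamed; this stdlib module offers comparable diagnostics).
# Once enabled, the interpreter dumps the Python traceback of every thread
# when a fatal signal such as SIGSEGV is received, which pinpoints the
# line that triggered the crash.
import faulthandler

faulthandler.enable()  # install handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS

# The rest of the script (e.g. constructing the PPO model) goes below;
# if it segfaults, the traceback is written to stderr before the process dies.
print(faulthandler.is_enabled())  # True once enabled
```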
Could you run this:

```python
import torch.nn as nn

linear = nn.Linear(2, 2)
nn.init.orthogonal_(linear.weight)
```
Do you still encounter the error? What Python version do you use?
I can still observe this behaviour with sb3 1.6.2 and Python 3.9. Exact same bug ("Fatal Python error: Segmentation fault") with PPO, but DQN works fine. Please let me know if you need any additional logs.
I have a MacBook Pro M1 and I can't reproduce the error. The following works as expected:

```python
from stable_baselines3 import PPO

PPO("MlpPolicy", "CartPole-v1")
```

If anyone finds code or a configuration that reproduces this error, please post it here.
@qgallouedec I tried running the exact same two lines locally in VS Code. I hope this has nothing to do with which IDE I am using.
What are the Jupyter logs?
Sorry for the late response. I just resolved my issue by reinstalling numpy, which was causing the problem.
Late = 18 minutes? 😆 Great to hear! I'm closing the issue.
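For anyone hitting the same crash: the fix described above was a clean reinstall of numpy. The exact commands aren't given in the thread, so this is a hedged sketch assuming a pip-managed environment (conda users would use `conda install --force-reinstall numpy` instead):

```shell
# Hypothetical fix sketch: force a clean reinstall of numpy so that a
# wheel built for the right architecture (arm64 on Apple M1) is fetched,
# rather than reusing a possibly mismatched cached build.
pip uninstall -y numpy
pip install --no-cache-dir numpy

# Verify the import no longer crashes:
python -c "import numpy; print(numpy.__version__)"
```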
🐛 Bug
I'm using a MacBook Pro M1 and running code in Jupyter Notebook. Every time I run the scripts, I get an error and the kernel crashes. I've tried running the code in Google Colab and it went well. It seems to me that the stable-baselines3 library doesn't support M1 chips. What do I need to do now to run the code locally?
To Reproduce
Relevant log output / Error message
System Info
No response
Checklist