
Warning: Batch Queue is empty (Require more batch actors Or batch actor fails). #48

Open
hirodeng opened this issue Jul 22, 2024 · 1 comment

@hirodeng

Is there something wrong with this warning? It keeps appearing during training:

CUDA_VISIBLE_DEVICES="1,2" python main.py --env BreakoutNoFrameskip-v4 --case atari --opr train --amp_type torch_amp --num_gpus 1 --num_cpus 10 --cpu_actor 1 --gpu_actor 1 --force
2024-07-22 16:47:01,232 INFO services.py:1164 -- View the Ray dashboard at http://127.0.0.1:8265
A.L.E: Arcade Learning Environment (version 0.7.4+069f8bd)
[Powered by Stella]
[2024-07-22 16:47:02,593][train][INFO][main.py><module>] ==> Path: /mgData4/dengjianhao/EfficientZero/results/atari/none/BreakoutNoFrameskip-v4/seed=0/Mon Jul 22 16:47:02 2024
[2024-07-22 16:47:02,594][train][INFO][main.py><module>] ==> Param: {'action_space_size': 4, 'num_actors': 1, 'do_consistency': True, 'use_value_prefix': True, 'off_correction': True, 'gray_scale': False, 'auto_td_steps_ratio': 0.3, 'episode_life': True, 'change_temperature': True, 'init_zero': True, 'state_norm': False, 'clip_reward': True, 'random_start': True, 'cvt_string': True, 'image_based': True, 'max_moves': 3000, 'test_max_moves': 3000, 'history_length': 400, 'num_simulations': 50, 'discount': 0.988053892081, 'max_grad_norm': 5, 'test_interval': 10000, 'test_episodes': 32, 'value_delta_max': 0.01, 'root_dirichlet_alpha': 0.3, 'root_exploration_fraction': 0.25, 'pb_c_base': 19652, 'pb_c_init': 1.25, 'training_steps': 100000, 'last_steps': 20000, 'checkpoint_interval': 100, 'target_model_interval': 200, 'save_ckpt_interval': 10000, 'log_interval': 1000, 'vis_interval': 1000, 'start_transitions': 2000, 'total_transitions': 100000, 'transition_num': 1, 'batch_size': 256, 'num_unroll_steps': 5, 'td_steps': 5, 'frame_skip': 4, 'stacked_observations': 4, 'lstm_hidden_size': 512, 'lstm_horizon_len': 5, 'reward_loss_coeff': 1, 'value_loss_coeff': 0.25, 'policy_loss_coeff': 1, 'consistency_coeff': 2, 'device': 'cuda', 'debug': False, 'seed': 0, 'value_support': <core.config.DiscreteSupport object at 0x7fd58c7bf100>, 'reward_support': <core.config.DiscreteSupport object at 0x7fd58c7bf160>, 'weight_decay': 0.0001, 'momentum': 0.9, 'lr_warm_up': 0.01, 'lr_warm_step': 1000, 'lr_init': 0.2, 'lr_decay_rate': 0.1, 'lr_decay_steps': 100000, 'mini_infer_size': 64, 'priority_prob_alpha': 0, 'priority_prob_beta': 0.4, 'prioritized_replay_eps': 1e-06, 'image_channel': 3, 'proj_hid': 1024, 'proj_out': 1024, 'pred_hid': 512, 'pred_out': 1024, 'bn_mt': 0.1, 'blocks': 1, 'channels': 64, 'reduced_channels_reward': 16, 'reduced_channels_value': 16, 'reduced_channels_policy': 16, 'resnet_fc_reward_layers': [32], 'resnet_fc_value_layers': [32], 'resnet_fc_policy_layers': [32], 'downsample': True, 'env_name': 'BreakoutNoFrameskip-v4', 'obs_shape': (12, 96, 96), 'case': 'atari', 'amp_type': 'torch_amp', 'use_priority': False, 'use_max_priority': False, 'cpu_actor': 1, 'gpu_actor': 1, 'p_mcts_num': 4, 'use_root_value': False, 'auto_td_steps': 30000.0, 'use_augmentation': True, 'augmentation': ['shift', 'intensity'], 'revisit_policy_search_rate': 0.99, 'model_dir': '/mgData4/dengjianhao/EfficientZero/results/atari/none/BreakoutNoFrameskip-v4/seed=0/Mon Jul 22 16:47:02 2024/model'}
(pid=1482750) A.L.E: Arcade Learning Environment (version 0.7.4+069f8bd)
(pid=1482750) [Powered by Stella]
(pid=1482751) A.L.E: Arcade Learning Environment (version 0.7.4+069f8bd)
(pid=1482751) [Powered by Stella]
(pid=1482751) Start evaluation at step 0.
(pid=1482751) Training step 0, test scores: 
(pid=1482751) [9. 0. 5. 2. 0. 0. 0. 9. 0. 0. 0. 0. 3. 3. 0. 5. 0. 0. 0. 0. 2. 2. 3. 9.
(pid=1482751)  0. 0. 0. 5. 2. 0. 0. 5.] of 416 eval steps.
Begin training...
/mgData4/dengjianhao/EfficientZero/core/train.py:68: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
  obs_batch_ori = torch.from_numpy(obs_batch_ori).to(config.device).float() / 255.0
[2024-07-22 16:49:37,539][train][INFO][log.py>_log] ==> #0          Total Loss: 47.875   [weighted Loss:47.875   Policy Loss: 7.901    Value Loss: 37.092   Reward Sum Loss: 30.693   Consistency Loss: 0.004    ] Replay Episodes Collected: 55         Buffer Size: 55         Transition Number: 2.068   k Batch Size: 256        Lr: 0.000   
[2024-07-22 16:49:37,539][train_test][INFO][log.py>_log] ==> #0          Test Mean Score of BreakoutNoFrameskip-v4: 2.0        (max: 9.0       , min:0.0       , std: 2.839454172900137)
Warning: Batch Queue is empty (Require more batch actors Or batch actor fails).
[the warning above repeats 17 more times over the next ~23 minutes]
[2024-07-22 17:13:06,564][train][INFO][log.py>_log] ==> #1000       Total Loss: -0.068   [weighted Loss:-0.068   Policy Loss: 7.664    Value Loss: 0.189    Reward Sum Loss: 0.087    Consistency Loss: -3.933   ] Replay Episodes Collected: 55         Buffer Size: 55         Transition Number: 2.068   k Batch Size: 256        Lr: 0.200   
Warning: Batch Queue is empty (Require more batch actors Or batch actor fails).
Warning: Batch Queue is empty (Require more batch actors Or batch actor fails).
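
For context, this warning looks like the typical producer/consumer symptom: the trainer is ready to do an optimization step but finds no prepared batch in the queue that the batch actors are supposed to keep filled, either because the producers are too slow or because one of them has died. Below is a minimal sketch of that pattern, assuming a simple local queue and hypothetical names (`batch_actor`, `training_loop`, `batch_queue`); the real EfficientZero code uses Ray actors and may differ in detail.

```python
import queue
import threading
import time

batch_queue = queue.Queue(maxsize=16)  # stand-in for the shared batch storage

def batch_actor(num_batches=3, delay=1.0):
    """Hypothetical batch actor: prepares a few batches slowly, then stops."""
    for i in range(num_batches):
        time.sleep(delay)
        batch_queue.put(f"batch-{i}")

def training_loop(num_steps=3, poll_interval=0.3):
    step = 0
    while step < num_steps:
        try:
            batch = batch_queue.get_nowait()
        except queue.Empty:
            # This is the situation behind the log message: the consumer is
            # ready to train, but no producer has delivered a batch yet.
            print("Warning: Batch Queue is empty "
                  "(Require more batch actors Or batch actor fails).")
            time.sleep(poll_interval)
            continue
        print(f"step {step}: training on {batch}")
        step += 1

if __name__ == "__main__":
    threading.Thread(target=batch_actor, daemon=True).start()
    training_loop()
```

So a few warnings while the batch actors warm up are expected; a continuous stream of them, as in the log above, suggests the actors cannot keep up or have failed.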
@1846333564

Same here.
