
The simulator! #1

Open
fayjie92 opened this issue Sep 26, 2017 · 17 comments

@fayjie92

I am glad to work on your code. While working with it, the simulator is not loading. Can you describe the simulator connection? Does it use a socket, or does it start automatically with the Python code that you've provided?

Thanks,
Fayjie

@Kyushik
Collaborator

Kyushik commented Sep 26, 2017

Thanks for working on my code! It uses socketio to connect the DQN code (Python) and the simulator (Unity).
First, execute the Python code (drive_Combined.py).
Then download the simulator file via the link in the readme.md and execute the simulator exe file.
After that, they connect to each other and run!
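For context, here is a minimal sketch of the Python side of such a socketio connection, in the common python-socketio + eventlet style. The event names ('telemetry', 'control') and the port are assumptions for illustration, not necessarily what drive_Combined.py uses:

    import eventlet
    import socketio

    sio = socketio.Server()

    @sio.event
    def connect(sid, environ):
        # The Unity simulator connects here once both programs are running.
        print('Simulator connected:', sid)

    @sio.on('telemetry')  # event name assumed
    def telemetry(sid, data):
        # 'data' would carry the simulator's observation; the DQN picks an
        # action and sends a control message back to Unity.
        action = 0  # placeholder for the DQN's chosen action
        sio.emit('control', {'action': str(action)}, to=sid)  # event name assumed

    if __name__ == '__main__':
        app = socketio.WSGIApp(sio)
        eventlet.wsgi.server(eventlet.listen(('', 4567)), app)  # port assumed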

@matrixBT

matrixBT commented May 29, 2018

I'm using your simulator for my project work. Is there any way to cite it? Also, is there a name for this simulator?

@matrixBT

@Kyushik

@Kyushik
Collaborator

Kyushik commented Jun 22, 2018

@matrixBT Sorry for the late reply. I was at military training for a month, so I couldn't use any electronic devices. Actually, there is no citation or name for the simulator; I think adding the GitHub link is okay. Thanks for using my simulator for your project work!! Are you writing a paper with this simulator?

@champcui

Excuse me, I can't find the 'drive_Combined.py' file in your GitHub. Where should I find that Python file?

@Kyushik
Collaborator

Kyushik commented Aug 16, 2020

@champcui Hi! 'drive_Combined.py' is a file that existed in an old version. Now you should run the files in the 'RL_algorithms' folder.

@champcui

@champcui Hi! 'drive_Combined.py' is a file that existed in an old version. Now you should run the files in the 'RL_algorithms' folder.

Thank you very much! I can use RL_algorithms now, and I want to know how to get the figures in your paper, like "lanechange", "sensor.gif", and so on.

@Kyushik
Collaborator

Kyushik commented Sep 21, 2020

I recorded a video and converted it to gif :)

@champcui

I recorded a video and converted it to gif :)

Okay, and how should I get that input configuration? For example, the average speed, lane changes, and the number of overtakings. Because I can only get the step and score in that algorithms.ipynb.

@champcui

I recorded a video and converted it to gif :)

It's too much trouble for me to record all of this data from the running video.

@Kyushik
Collaborator

Kyushik commented Sep 22, 2020

You can get the speed and other data using the vector observation in the ipynb file :)
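As an illustration, pulling named values out of a flat vector observation could look like the sketch below; the index positions are hypothetical and should be checked against the observation layout in the readme:

    import numpy as np

    def parse_vector_obs(vector_obs):
        # Split the flat observation array into named fields.
        # All index positions here are assumptions for illustration.
        obs = np.asarray(vector_obs, dtype=np.float32)
        return {
            'speed': obs[0],           # ego vehicle speed (assumed index)
            'num_overtake': obs[1],    # overtaking count (assumed index)
            'num_lanechange': obs[2],  # lane-change count (assumed index)
        }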

@champcui

You can get the speed and other data using the vector observation in the ipynb file :)

Code for tensorboard:

import tensorflow as tf

def setup_summary():
    # Variables holding the per-episode statistics to plot in tensorboard.
    episode_speed = tf.Variable(0.)
    episode_overtake = tf.Variable(0.)
    episode_lanechange = tf.Variable(0.)

    # One scalar summary per statistic; Num_plot_episode is a global that
    # sets how many episodes each plotted average covers.
    tf.summary.scalar('Average_Speed/' + str(Num_plot_episode) + 'episodes', episode_speed)
    tf.summary.scalar('Average_overtake/' + str(Num_plot_episode) + 'episodes', episode_overtake)
    tf.summary.scalar('Average_lanechange/' + str(Num_plot_episode) + 'episodes', episode_lanechange)

    summary_vars = [episode_speed, episode_overtake, episode_lanechange]
    # Placeholders feed new values in; the assign ops copy them into the variables.
    summary_placeholders = [tf.placeholder(tf.float32) for _ in range(len(summary_vars))]
    update_ops = [summary_vars[i].assign(summary_placeholders[i]) for i in range(len(summary_vars))]
    summary_op = tf.summary.merge_all()
    return summary_placeholders, update_ops, summary_op

Sorry to bother you again! Actually, I found this code in your ipynb, but I can't find where this data is computed in the end. I really want to use this data. Thanks!
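For reference, a minimal sketch of how such TF1 summary ops are usually fed and written during training; the session setup, writer path, and stat values below are assumptions for illustration, not code from the repository:

    import tensorflow as tf

    Num_plot_episode = 100  # assumed value of the global used by setup_summary()
    summary_placeholders, update_ops, summary_op = setup_summary()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        writer = tf.summary.FileWriter('./logs', sess.graph)

        # At the end of each plotting window, feed the accumulated averages.
        stats = [42.0, 3.0, 5.0]  # dummy [speed, overtake, lanechange] averages
        for placeholder, op, value in zip(summary_placeholders, update_ops, stats):
            sess.run(op, feed_dict={placeholder: value})
        writer.add_summary(sess.run(summary_op), global_step=0)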

@Kyushik
Collaborator

Kyushik commented Sep 23, 2020

I wrote information about the vector observation in the readme as follows. I think this info can help you :)
[screenshot of the vector observation description from the readme]

@champcui

I know the actions are related to the max Q-value, but I can't find the actions and rewards in your ipynb. What should I do to find them?
Thanks for your patience again!

@Kyushik
Collaborator

Kyushik commented Oct 26, 2020

The action is decided by the neural network in the code, and the reward is returned as a result of that action, as follows.
[screenshots of the action-selection and reward code from the ipynb]
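In other words, the loop looks roughly like the sketch below; the epsilon value, the dummy Q-values, and the env.step interface are assumptions for illustration rather than the repository's exact ML-Agents calls:

    import numpy as np

    def select_action(q_values, epsilon=0.05):
        # Epsilon-greedy: take the argmax-Q action most of the time,
        # and a random action otherwise for exploration.
        if np.random.rand() < epsilon:
            return np.random.randint(len(q_values))
        return int(np.argmax(q_values))

    # Dummy Q-values for the simulator's discrete actions, just to illustrate:
    q_values = np.array([0.1, 0.7, 0.2])
    action = select_action(q_values)
    # In the real loop, the simulator then returns the reward for that action,
    # e.g.: next_state, reward, done = env.step(action)  # interface assumed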

@champcui

Thanks! And I want to know how to control the speed and lanechange in the ipynb?

@Kyushik
Collaborator

Kyushik commented Oct 26, 2020

The speed and lanechange should be changed in the Unity project.
