execute GDAM.py #7
You would require a launchfile that either launches a robot in simulation or connects to the real robot. Since the robot launch is entirely dependent on your robot setup, it is not included here. It is a file that automatically connects to and launches the sensors on your robot. For reference, the same file call is included in https://github.com/reiniscimurs/DRL-robot-navigation, where a robot is launched in simulation. /r1/cmd_vel is the topic to which you publish the control commands, such as linear and angular velocities, for the robot to execute.
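As a quick sanity check, a publisher along the following lines can confirm that the robot base reacts to commands on /r1/cmd_vel. This is a minimal sketch assuming ROS1 rospy and the standard geometry_msgs/Twist message type; adjust the topic name to your own setup.

```python
# Minimal sketch: publish a constant velocity command to /r1/cmd_vel
# (topic name and message type are assumptions based on the discussion above).
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("cmd_vel_test")
pub = rospy.Publisher("/r1/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2   # forward velocity in m/s
cmd.angular.z = 0.1  # yaw rate in rad/s

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```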
Can I refer to your original launchfile for the real robot?
Sorry, I do not have access to those files anymore. You could try using the launchfile from the repository mentioned above and just exclude the launching of the robot model for testing.
You should be able to use any SLAM package, such as Hector SLAM or GMapping, instead of SLAM_Toolbox. However, SLAM_Toolbox offers superior mapping quality compared to the other packages. For SLAM_Toolbox issues, please follow the guides in its repository: https://github.com/SteveMacenski/slam_toolbox
This GDAM repository is written with a TensorFlow implementation in mind and loads a TensorFlow-trained model. The TD3 repository uses PyTorch. You will not be able to directly load a PyTorch model into TensorFlow. Moreover, the parameters do not match between the two methods: the GDAM input is a 23-value vector, while the TD3 input is a 24-value vector. You will have to change the GDAM codebase to use the PyTorch model. What you can do is swap out the TensorFlow calls in GDAM for the TD3 PyTorch calls and use the PyTorch model instead; this should not significantly change the behavior. A minimal sketch of such a swap is given below.
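A rough sketch of replacing the TensorFlow session call with a PyTorch actor forward pass is shown here. The `Actor` architecture, checkpoint path, and layer sizes are assumptions and must match whatever you actually trained in the TD3 repository.

```python
# Hedged sketch: load a TD3-trained PyTorch actor and query it for actions
# instead of running the TensorFlow graph. Names and sizes are illustrative.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim=24, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 800), nn.ReLU(),
            nn.Linear(800, 600), nn.ReLU(),
            nn.Linear(600, action_dim), nn.Tanh(),  # actions bounded to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

actor = Actor()
# "td3_actor.pth" is a placeholder for your saved state_dict file
actor.load_state_dict(torch.load("td3_actor.pth", map_location="cpu"))
actor.eval()

def get_action(state):
    # state: iterable of 24 values, laid out the same way as during TD3 training
    with torch.no_grad():
        s = torch.tensor(state, dtype=torch.float32).unsqueeze(0)
        return actor(s).squeeze(0).numpy()
```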
I would guess your tf tree probably does not have a connection between the map and odom frames. You should check that in rqt. In this implementation, the connection was made using SLAM_Toolbox and pointing to base_link as the source of the robot's odometry.
The output of the neural network is a tanh, meaning it is in the range from -1 to 1. But for the linear velocity action we need it to be in the range 0 to 0.5, so we change the range by adding one and dividing by four.
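In code, assuming the network action vector is called `a` (the variable name is illustrative, not the exact name used in GDAM.py), the rescaling looks like this:

```python
# Map the tanh output a[0] from [-1, 1] to the [0, 0.5] m/s linear velocity range.
# The angular action a[1] is used directly, since [-1, 1] rad/s is acceptable.
linear_velocity = (a[0] + 1) / 4   # -1 -> 0.0 m/s, +1 -> 0.5 m/s
angular_velocity = a[1]
```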
Over time, the odom and map frames begin to drift apart. That is why you have to look up the drift using the transform lookup function and then update the node locations taking this drift into account. If you comment out this step and keep static trans and rot values, you will not be able to account for the drift and your nodes will be positioned incorrectly. If the specified method does not work, you should find another way to look up the transform between the map frame and the robot's odom frame. A sketch of such a lookup follows.
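A minimal sketch of that lookup with the ROS1 tf listener is shown below; the frame names "map" and "r1/odom" are assumptions and need to match your own tf tree.

```python
# Hedged sketch: look up the current map -> odom offset so stored node
# positions can be corrected for accumulated drift.
import rospy
import tf

rospy.init_node("drift_lookup")
listener = tf.TransformListener()
listener.waitForTransform("map", "r1/odom", rospy.Time(0), rospy.Duration(5.0))
(trans, rot) = listener.lookupTransform("map", "r1/odom", rospy.Time(0))
# trans = [x, y, z] translation, rot = quaternion [x, y, z, w];
# apply these to the stored node coordinates to compensate for the drift.
```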
I would like to ask: in your experimental video, it seems that a specific target location has been set, and the robot continues to explore until it reaches that designated target. Where in the code does this need to be set? I only see the parameter setting of x=50 as the initial configuration for starting the robot.
The goal position is set in the GDAM_args.py file (line 47 at commit fc793ed). You can see that there is an argument for setting the X and Y goal coordinates. The arguments are then passed when creating the environment (lines 67 and 75 at commit fc793ed). A hedged sketch of this kind of argument setup is shown below.
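The following is an illustration only; the actual argument names and defaults live in GDAM_args.py at the commit referenced above, and the environment class name here is an assumption.

```python
# Hedged sketch: expose the global goal coordinates as command-line arguments
# and hand them to the environment, mirroring the pattern used in GDAM_args.py.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--goal-x", type=float, default=50.0, help="global goal X coordinate")
parser.add_argument("--goal-y", type=float, default=0.0, help="global goal Y coordinate")
args = parser.parse_args()

# The parsed arguments are then passed when constructing the environment,
# e.g. env = ImplementEnv(args)  # class name is illustrative, not the repo's exact name
```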
I'm not sure why my mobile robot keeps moving forward continuously. I printed the value of "linear" and it shows 0.35. It does avoid obstacles, but is this normal?
I encountered the following error, and I don't know why it happened: min_d = math.sqrt(math.pow(self.nodes[0][2] - self.odomX, 2) + math.pow(self.nodes[0][3] - self.odomY, 2))
Hi, it looks like you do not have any nodes to evaluate: either all of the nodes were reached or no nodes were generated in your implementation. For the robot moving forward, I cannot say why that is; there is not enough information to go on. You should check what the position of the currently selected node is. All of the waypoints and the selected goal node should be visible in Rviz. If not, you can print out the node positions directly. A sketch of a guard for the empty node list is shown below.
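As a sketch only (assuming `math` and `rospy` are already imported in GDAM.py and this sits inside the existing class method), a guard before that line could look like:

```python
# Hedged sketch: avoid indexing self.nodes[0] when no candidate waypoints exist,
# which would otherwise raise an IndexError at the min_d computation.
if not self.nodes:
    rospy.logwarn("No candidate nodes to evaluate: goal reached or none generated")
else:
    min_d = math.sqrt(math.pow(self.nodes[0][2] - self.odomX, 2) +
                      math.pow(self.nodes[0][3] - self.odomY, 2))
```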
I think it's an issue with my model. It seems that it triggered a freeze due to not moving forward, causing the output of linear = 0.35 when evaluating the self.last_states status in the recovery process. May I ask what level of performance your model training can reach?
What do you mean by "extent that the model can reach"? |
"I trained a TD3 model and tested it in a simulated environment, which worked very well. However, when I applied it to the real world, its performance was very poor. It didn't move towards the target position and seemed to be wandering randomly. Is there anything I can adjust to improve its performance? I would like to confirm what are the 24 inputs of the TD3 model, including 20 lidar scans and what else? |
The state representation is explained here: https://medium.com/@reinis_86651/deep-reinforcement-learning-in-mobile-robot-navigation-tutorial-part3-training-13b2875c7b51 You can also see how the state is built here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/TD3/velodyne_env.py#L229 For sim2real transfer there are a lot of things that can go wrong, so you would have to be very specific about what your setup looks like and how exactly you implemented it. Only then can I give a guess at what is happening there.
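Based on those two references, the 24 values are the 20 binned laser readings plus the distance to the goal, the angle to the goal, and the previous linear and angular actions. A rough sketch of how such a state vector is assembled (function and variable names here are illustrative, not the exact ones in velodyne_env.py):

```python
# Hedged sketch of the 24-value TD3 state layout described in the linked tutorial:
# 20 laser sector minimums + [distance to goal, angle to goal, last linear, last angular].
import numpy as np

def build_state(laser_bins, distance_to_goal, angle_to_goal, last_action):
    # laser_bins: 20 minimum ranges from the binned (velodyne) scan sectors
    # last_action: the previous (linear, angular) command sent to the robot
    robot_state = [distance_to_goal, angle_to_goal, last_action[0], last_action[1]]
    return np.concatenate([np.asarray(laser_bins, dtype=np.float32),
                           np.asarray(robot_state, dtype=np.float32)])  # shape (24,)
```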
Sorry, I'm here to ask a question again.
I'm trying to execute GDAM.py, but it can't find a file:
OSError: File /home/wenzhi/GDAE/Code/assets/launch/multi_robot_scenario.launch does not exist
I'm not sure what's wrong. I haven't connected the device yet; I'm just trying to execute the script.
Another question: can the TensorFlow errors be ignored?
And what is the /r1/cmd_vel node?