
execute GDAM.py #7

Open · chih7715 opened this issue Feb 1, 2023 · 23 comments

@chih7715 commented Feb 1, 2023

Sorry to ask another question. I'm trying to execute GDAM.py, but it can't find a file:

OSError: File /home/wenzhi/GDAE/Code/assets/launch/multi_robot_scenario.launch does not exist

[Screenshot: 2023-02-01 11-17-37]

I'm not sure what's wrong. I haven't connected the device yet, I'm just trying to run the code.
Another question: can the tensorflow errors be ignored?
And what is this /r1/cmd_vel node?

@reiniscimurs (Owner)

You would require a launchfile that either launches a robot in simulation or connects to the real robot. Since the robot launch depends entirely on your robot setup, it is not included here. It is a file that automatically connects to and launches the sensors on your robot. For reference, the same file call is included in https://github.com/reiniscimurs/DRL-robot-navigation, where a robot is launched in simulation.

/r1/cmd_vel is the topic to which you publish control commands, such as linear and angular velocities, for the robot to execute.
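For reference, a minimal sketch of sending commands to that topic (standard rospy and geometry_msgs calls; the velocity values here are purely illustrative):

```python
#!/usr/bin/env python
# Minimal sketch: publish a velocity command to /r1/cmd_vel.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("cmd_vel_test")
pub = rospy.Publisher("/r1/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2   # forward velocity, m/s (illustrative value)
cmd.angular.z = 0.0  # no rotation

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```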

@chih7715 (Author) commented Feb 3, 2023

Could I refer to your original launchfile for the real robot? Let me try to figure it out.

@reiniscimurs (Owner)

Sorry, I do not have access to those files anymore.

You could try using the launchfile from the repo mentioned and just exclude the launching of the robot model for testing.

@chih7715 (Author)

[Screenshot: 2023-02-11 16-40-24]

I tried to use slam_toolbox, but it always shows a "no map received" warning.
If I don't use slam_toolbox, can I use Hector SLAM instead, or will this code not run with it?

@reiniscimurs (Owner)

You should be able to use any SLAM package, such as Hector or Gmapping, instead of SLAM_Toolbox. However, SLAM_Toolbox's mapping quality is superior to the other packages.

For SLAM_Toolbox issues please follow the guides on their repository: https://github.com/SteveMacenski/slam_toolbox

@chih7715 (Author)

[Screenshot: 2023-02-18 17-33-55]

I have these problems when running the program. Do the tensorflow warnings need to be ignored?

[Screenshot: 2023-02-18 17-38-13]

Regarding the model path: I am using a model trained with TD3, and the path is as shown in my screenshot. What changes need to be made?

@reiniscimurs (Owner)

This GDAM repository was made with a tensorflow implementation in mind and loads a tensorflow-trained model. The TD3 repository uses PyTorch. You will not be able to directly load a pytorch model into tensorflow. Moreover, the parameters do not match between the two methods, as the GDAM input is a 23-value vector but the TD3 input is a 24-value vector.

You will have to change the GDAM codebase to use the pytorch model. What you can do is swap out the tensorflow calls in GDAM with the TD3 pytorch calls and use the pytorch model instead. This should not significantly change the behavior.
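As a rough sketch of that swap (the Actor class name, import path, and checkpoint filename below are assumptions; check them against your copy of the TD3 code):

```python
# Hedged sketch: load the PyTorch TD3 actor in place of the TensorFlow model.
# "network", "Actor" and the checkpoint name are hypothetical placeholders.
import torch
from network import Actor  # hypothetical import; use the TD3 repo's actor class

state_dim = 24   # the TD3 state is a 24-value vector
action_dim = 2   # linear and angular velocity

actor = Actor(state_dim, action_dim)
actor.load_state_dict(torch.load("TD3_actor.pth", map_location="cpu"))
actor.eval()

def get_action(state):
    """Replace the TensorFlow session call in GDAM with this inference step."""
    with torch.no_grad():
        s = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)
        return actor(s).cpu().numpy().flatten()
```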

@chih7715 (Author)

[Screenshot: 2023-03-10 18-17-51]

I'm not familiar with this part. Where do I need to change the settings?

@reiniscimurs (Owner)

I would guess your tf_tree probably does not have a connection between the map and odom frames. You should check that in rqt. In this implementation, the connection was made using Slam_toolbox, pointing to base_link as the source of the robot's odometry.

@chih7715 (Author)

[Screenshot: 2023-03-21 22-09-32]

Why is this done here: aIn[0,0] = (aIn[0,0]+1)/4?

@reiniscimurs (Owner)

The output of the neural network is a tanh, meaning it is in the range -1 to 1. But for the linear velocity action we need it to be in the range 0 to 0.5, so we change the range by adding 1 and dividing by 4.
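A quick check of the mapping at the endpoints:

```python
# tanh output a is in [-1, 1]; (a + 1) / 4 rescales it to [0, 0.5]
for a in (-1.0, 0.0, 1.0):
    print(a, "->", (a + 1) / 4)  # -1.0 -> 0.0, 0.0 -> 0.25, 1.0 -> 0.5
```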

@chih7715 (Author)

[Screenshot: 2023-03-30 21-32-05]

I encountered a strange situation: my goal is x=4.416, y=-1.75, and my robot's position is x=2.12, y=0.06. When the distance between them is less than 1.5, I confirm that I have arrived and change the goal. However, in the RViz display, my red and green dots do not seem to be in the positions I described.

@chih7715 (Author)

[Screenshot: 2023-03-30 23-08-50]

I modified the line (trans, rot) = self.listener.lookupTransform('/map', '/odom', rospy.Time(0)) because Hector SLAM does not use odom as the reference frame. Instead, I used slam_out_pose from Hector SLAM, which estimates the robot's current position with respect to the map frame. As there is no odom frame in this case, I changed the frame_id to base_frame.

@reiniscimurs (Owner)

Over time the odom and map frames begin to drift apart. That is why you have to look up the drift using the lookupTransform function and then update the node locations taking this drift into consideration. If you comment out this stage and keep static trans and rot values, you will not be able to account for the drift and your nodes will be positioned incorrectly. If the specified method does not work, you should find some other way to look up the transform between the map frame and the robot's odom frame.
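A minimal sketch of that lookup with the standard tf listener (the frame names follow this thread; substitute whatever frames your SLAM package actually publishes):

```python
# Sketch: look up the current map -> odom drift with the standard tf API.
import rospy
import tf

rospy.init_node("drift_lookup_test")
listener = tf.TransformListener()
try:
    # wait until the transform is available, then read it
    listener.waitForTransform("/map", "/odom", rospy.Time(0), rospy.Duration(2.0))
    (trans, rot) = listener.lookupTransform("/map", "/odom", rospy.Time(0))
except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
    rospy.logwarn("No map -> odom transform; check the tf tree in rqt")
    trans, rot = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]  # identity fallback
```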

@chih7715 (Author)

I would like to ask: in your experimental video it seems that a specific target location is set, and the robot keeps exploring until it reaches that designated target. Where in the code is this set? I only see the parameter setting x=50 as the initial configuration for starting the robot.

@reiniscimurs (Owner)

The goal position is set in the GDAM_args.py file:

parser.add_argument("--x", help="X coordinate of the goal",

You can see that there is an argument for setting the X and Y goal coordinates. The arguments are then passed when creating the environment:

env = ImplementEnv(d_args)

and set for the environment:

self.original_goal_x = args.x
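Put together, the flow looks roughly like this (the import path for ImplementEnv and the --y default are assumptions; check GDAM_args.py for the actual defaults):

```python
# Hedged sketch of how the goal coordinates flow into the environment.
import argparse
from GDAM_env import ImplementEnv  # module path assumed; see the repo

parser = argparse.ArgumentParser()
parser.add_argument("--x", type=float, default=50.0, help="X coordinate of the goal")
parser.add_argument("--y", type=float, default=0.0, help="Y coordinate of the goal")
d_args = parser.parse_args()

env = ImplementEnv(d_args)  # the env stores args.x as self.original_goal_x
```

So you can pass the goal on the command line, e.g. `python GDAM.py --x 4.0 --y -1.5`.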

@chih7715 (Author)

I'm not sure why my mobile robot keeps moving forward continuously. I printed the value of "linear" and it shows 0.35. It does avoid obstacles, but is this normal?
I set the goal position nearby, but the mobile robot did not move towards it.

@chih7715 (Author)

I ran into this error, and I don't know why it happened:

min_d = math.sqrt(math.pow(self.nodes[0][2] - self.odomX, 2) + math.pow(self.nodes[0][3] - self.odomY, 2))
IndexError: deque index out of range

@reiniscimurs (Owner)

Hi,

Looks like you do not have any nodes to evaluate. Either all of the nodes were reached or no nodes were generated in your implementation.
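One way to make that line robust is to guard the empty case, sketched here as a standalone helper (the node tuple layout is inferred from the indices in the traceback):

```python
import math

def closest_node_distance(nodes, odom_x, odom_y):
    """Distance from the robot to the first candidate node,
    or None when the deque is empty (the case that raised IndexError)."""
    if not nodes:
        return None
    node = nodes[0]  # node[2], node[3] hold its x, y coordinates (inferred)
    return math.sqrt((node[2] - odom_x) ** 2 + (node[3] - odom_y) ** 2)
```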

As for the robot constantly moving forward, I cannot say why that is; there is not enough information to go on. You should check the position of the currently selected node. All of the waypoints and the selected goal node should be visible in Rviz. If not, you can print out the self.goalX and self.goalY values to see if they make sense.

@chih7715 (Author)

I think it's an issue with my model. It seems that a freeze was triggered because the robot was not moving forward, causing the recover process to output linear = 0.35 when evaluating the self.last_states status. May I ask to what extent your model training can reach?

@reiniscimurs (Owner)

What do you mean by "extent that the model can reach"?

@chih7715 (Author) commented May 8, 2023

"I trained a TD3 model and tested it in a simulated environment, which worked very well. However, when I applied it to the real world, its performance was very poor. It didn't move towards the target position and seemed to be wandering randomly. Is there anything I can adjust to improve its performance?

I would like to confirm what are the 24 inputs of the TD3 model, including 20 lidar scans and what else?

@reiniscimurs (Owner)

The state representation is explained here: https://medium.com/@reinis_86651/deep-reinforcement-learning-in-mobile-robot-navigation-tutorial-part3-training-13b2875c7b51

You can also see the state info here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/TD3/velodyne_env.py#L229
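Based on the linked velodyne_env.py, the 24 values break down roughly as follows (a sketch with example values; verify against the file):

```python
import numpy as np

# Assumed breakdown of the TD3 state: 20 binned laser readings, the goal's
# polar coordinates (distance, heading angle), and the previous action.
laser_state = np.zeros(20)   # 20 min-pooled laser bins (example: all clear)
distance, theta = 3.2, 0.4   # goal distance and relative angle (example values)
last_action = [0.3, -0.1]    # previous linear and angular velocity

state = np.concatenate([laser_state, [distance, theta], last_action])
assert state.shape == (24,)
```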

For sim2real transfer there are a lot of things that can go wrong, so you would have to be very specific about what your setup looks like and how exactly you implemented it. Only then can I guess what is happening there.
