Orientation estimation error and delay investigation #1
Comments
Great overview! I am very much interested in this as well, since we noticed the same issue inside control, and we compensate for it using feed-forward estimation based on the UAV dynamic model and the history of sent commands.
Updated the original post with data measured on a more complex trajectory and with the fix in simulation to correctly publish data from PixHawk at 100Hz instead of 30Hz, which significantly reduced the delay introduced by the MUS estimator (although not completely).
This is probably a question to @petrlmat, but we've also observed this strange behavior. Interestingly, it only seems to manifest for the

Also - do we know if PlotJuggler displays the data according to timestamps or to the time of arrival of the messages? Because when I tried generating similar graphs as in the OP for the

In any case, a deeper investigation of the difference between how the

Edit: Hmm, I cannot seem to reproduce the issue with
The angular velocities in

```cpp
static nav_msgs::Odometry uavStateToOdom(const mrs_msgs::UavState& uav_state) {
  nav_msgs::Odometry odom;
  odom.header              = uav_state.header;
  odom.child_frame_id      = uav_state.child_frame_id;
  odom.pose.pose           = uav_state.pose;
  odom.twist.twist.angular = uav_state.velocity.angular;
  tf2::Quaternion q;
  tf2::fromMsg(odom.pose.pose.orientation, q);
  odom.twist.twist.linear = Support::rotateVector(uav_state.velocity.linear, mrs_lib::AttitudeConverter(q.inverse()));
  return odom;
}
```

PlotJuggler displays the data according to the time of arrival by default. If you want to use the timestamps, you need to check the box at the top.
@matemat13 can you please check the delay with this branch? The publishing is then controlled by the rate at which the orientation msg arrives, and the timestamp is also taken from the orientation msg, so the delay should disappear. I don't know if this would be a viable fix, as it would make the estimation rate depend on the rate of the orientation msg. Maybe it would be possible to use orientation-triggered state publishing for the attitude and lower-level output modalities and leave it as it is for the higher-level ones? @klaxalk
@petrlmat yes, it would make sense to publish the rates and orientation separately at a higher rate and leave the whole uav_state at the rate the control manager requests.
Oh, I probably misunderstood your suggestion. So when the lowest possible control modality is attitude or lower, you base the rate on the incoming attitude rate/attitude?
Because the output of the estimators is not used only as input for the control, I don't think it's a great idea to design it only with control in mind. For e.g. mapping and object localization in a world frame, it is necessary to know the UAV pose as accurately as possible (including a correct timestamp). For agile trajectory planning, it is even important to know the full UAV state (not just pose), also with as short a delay as possible. So I propose to design the estimation with accuracy in mind first, meaning publishing the data as quickly as possible, at the highest rate available, and with correct timestamps (probably using prediction to achieve accurate fusion of data from sensors with different rates). If control (or other modules) needs some data at a specific rate, it should be published on a specialized topic for this purpose, in my opinion. What do you think @penickar, @petrlmat, @klaxalk? I think that the new
@matemat13 Do you know how the timestamps are handled between MAVLink and MAVROS? Because I know that the PixHawk has its own timestamps that come from
Good question! I asked Matěj, Dan and Tomáš (I hope I remember correctly, if not, then sorry guys :D) and they all confirmed that there is some kind of time synchronization going on between the onboard PC and PX4.
In summary - yes, I think the timestamps of the messages coming from MAVROS should be fairly accurate, assuming that the delay estimation has converged, that the estimation parameters are well chosen, and that the transport delay is the same both ways. We could try playing around with the parameters to see if we can get a better estimate. I don't see another way to verify whether the ~10ms MAVROS delay is internal to the PixHawk or communication-induced. Any ideas?

Edit: I wonder if a bandwidth imbalance may cause an imbalance in the timesync message transport delays. I would guess that we're sending much less data to the PixHawk than what is coming from the PixHawk, no? That may cause the messages to be delayed more when coming from the PixHawk, simply because of a sending queue. Also, I assume that the communication bitrate is the same both ways, is that right @DanHert ?
This issue documents our findings so far regarding inaccuracies and delays in orientation estimation when using PixHawk with MAVROS and the MRS UAV state estimators. It should also be the place to discuss possible remedies and ideas for improvement.
Context
We have noticed that during agile maneuvers with high angular velocities, the projection of points from a 3D LiDAR to a static world frame is not stable. Upon further investigation, it turned out that this is caused by inaccurate estimation of the UAV's own orientation. The error grows with increasing angular velocity.
This is a problem for basically all situations where flight with high angular velocities is required, which mostly relates to Eagle and agile RL planning.
Methodology
To test the hypothesis that the orientation estimation error correlates with angular velocity around the respective axis, I've set up a Gazebo simulation with a UAV flying using the SE(3) controller and the `fast` constraints along a line-segment trajectory oriented along the Y axis, with the `rtk` estimator and with ground truth enabled. The rosbag is available here.

Three topics were used:

- `/uav1/mavros/imu/data` - the raw estimate from PixHawk's internal EKF2 algorithm as published by MAVROS.
- `/uav1/estimation_manager/rtk/odom` - the output of the MUS estimator, which republishes data from the first topic.
- `/uav1/ground_truth` - the ground truth data from Gazebo.

Data from each of the first two estimate topics was compared to the ground truth separately using the following method:

Optionally, before step 3, one of these two pre-processing steps was applied:

- a time-alignment step (`optimize_time`).
- an angle-prediction step (`predict_angle`).

The tmux session and evaluation scripts are available in this repo.
What we found out so far
For both estimation topics, three variants were evaluated:
The resulting graphs are shown below.
MAVROS output:
MUS estimator:
Delays
Findings:
- The messages are stamped with the current time (`ros::Time::now()`) as the timestamp, not with the timestamp of the data (which comes from multiple sources).
- By default in simulation, the PixHawk publishes the orientation (topic `/uav1/hw_api/orientation`) only at 30Hz instead of 100Hz -- this can be fixed by setting `export OLD_PX4_FW=true` in `~/.bashrc`. This is fixed in ctu-mrs/mrs_uav_gazebo_simulation@721fdd6.

Questions and ideas:
Errors
Findings:
Questions and ideas:
Methodology and testing problems
Findings: