---
title: Solution for Montecarlo Laser Loc
date: 2024-12-22 20:30:00 +0100
tags: [weekly progress]
author: juan
img_path: /assets/img/
toc: true
comments: true
---

## Solution for Montecarlo Laser Loc

These two weeks I have been developing a solution for the Montecarlo Laser Loc exercise, as practice for when I build the solution for Montecarlo Visual Loc in the future.

## Overview
The exercise consists of developing a localization algorithm based on the Montecarlo localization method, using a laser sensor to update the particles.

The algorithm can be broken down into four parts: generating the base set of particles; the motion model, which moves the robot and applies the same movement to every particle to update its position; the sensor model, which takes the readings from the sensors and updates the weight of each particle based on the data gathered; and generating new particles, since old ones with low weight are discarded, as are particles that move out of bounds.

## Creating the base set of particles
The particles in the base set are placed at random on the map; however, I want them spread out across the whole map as much as possible. To do this, I divided the map into 16 sections and spawn around 32 particles at random positions within the boundaries of each section. I also check whether a particle is out of bounds (basically, whether it landed on a black pixel of the map) so I can place it again until it ends up somewhere the robot can reach (a white pixel). Two of the sections are entirely out of bounds, so I don't count them and work with a total of 14 sections, spawning 32 particles in each one for a base set of 448 particles. I then calculate the weight of each particle so every one starts with an initial weight; how that weight is calculated is covered in the sensor model section.
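
Here is a minimal sketch of that spawning step, assuming the map is a greyscale image where white pixels are free space (the real exercise provides it through GUI.getMap()) and using a made-up list-of-dictionaries particle representation; with the 14 valid sections this gives the 448 particles mentioned above:

```python
import math
import random

SECTIONS_PER_SIDE = 4            # a 4 x 4 grid -> 16 sections
PARTICLES_PER_SECTION = 32

def is_free(map_img, x, y):
    """True if the map pixel at (x, y) is white, i.e. reachable by the robot."""
    return map_img[y][x] > 200   # assumed threshold for a "white" pixel

def spawn_particles(map_img, valid_sections):
    """Spawn PARTICLES_PER_SECTION particles in each valid section of the map."""
    height, width = len(map_img), len(map_img[0])
    sec_w, sec_h = width // SECTIONS_PER_SIDE, height // SECTIONS_PER_SIDE
    particles = []
    for sx, sy in valid_sections:            # the 14 sections that contain free space
        for _ in range(PARTICLES_PER_SECTION):
            while True:                      # re-draw until the particle is in bounds
                x = random.randint(sx * sec_w, (sx + 1) * sec_w - 1)
                y = random.randint(sy * sec_h, (sy + 1) * sec_h - 1)
                if is_free(map_img, x, y):
                    break
            yaw = random.uniform(-math.pi, math.pi)
            particles.append({"x": x, "y": y, "yaw": yaw, "weight": 1.0})
    return particles
```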
## Motion model

For the motion model, I first had to implement how the robot moves through the scenario. At first, the robot drove forward until the laser detected a wall close in front of it, then rotated left or right for 1 to 5 seconds and repeated. However, updating all the particles took a lot of processing power, and this method became unreliable because the robot couldn't update its movement fast enough when a wall was near. I then opted for something simpler: just making the robot run in circles with HAL.setV(0.5) and HAL.setW(0.5), and if I want to test the algorithm in another room, I move the robot there before starting.

Then I have to update the particles' positions, mimicking the movement of the robot. For that I need the distance it travelled and the difference between the previous yaw and the current one, which indicates how much the robot rotated along its trajectory. To get this, I use HAL.getOdom(), which gives me the x and y coordinates as well as the yaw (with some noise added; this is the function we must use for the exercise, so the algorithm also needs to account for that noise). I take the odometry from the previous iteration and the current one and calculate the Euclidean distance between the two.
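
A small sketch of that step; I'm assuming the pose is unpacked into (x, y, yaw) tuples, and wrapping the yaw difference to [-pi, pi] is an extra precaution of mine, not something stated above:

```python
import math

def odometry_delta(prev_pose, curr_pose):
    """Distance travelled and yaw change between two consecutive odometry readings.

    Each pose is assumed to be an (x, y, yaw) tuple built from HAL.getOdom().
    """
    px, py, pyaw = prev_pose
    cx, cy, cyaw = curr_pose
    dist = math.hypot(cx - px, cy - py)          # Euclidean distance travelled
    dyaw = math.atan2(math.sin(cyaw - pyaw),     # yaw difference wrapped to [-pi, pi]
                      math.cos(cyaw - pyaw))
    return dist, dyaw
```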
Then, for each particle, I add the yaw difference to the particle's current yaw, giving its new angle. With that angle and the distance travelled, we can calculate the new x and y coordinates of the particle like this:

```python
# new position after moving `dist` along the particle's updated heading
p.x = p.x + (dist * cos(p.new_yaw))
p.y = p.y + (dist * sin(p.new_yaw))
```
Since updating the particles takes so much processing power, instead of updating them every iteration I only update them when the distance travelled is more than 0.1 meters or the yaw difference is more than 0.15.
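
Put together, the motion update for the whole particle set could look roughly like this. The dictionary-based particles come from the earlier sketch, treating the 0.15 yaw threshold as radians is my assumption, and the conversion between map pixels and world metres (presumably done with GUI.poseToMap() and GUI.mapToPose()) is glossed over here:

```python
import math

DIST_THRESHOLD = 0.1     # metres
YAW_THRESHOLD = 0.15     # assumed to be radians

def maybe_move_particles(particles, dist, dyaw):
    """Apply the robot's estimated motion to every particle, but only when it has
    moved or turned enough to justify the processing cost."""
    if dist <= DIST_THRESHOLD and abs(dyaw) <= YAW_THRESHOLD:
        return False                          # not enough motion yet, skip this iteration
    for p in particles:
        p["yaw"] += dyaw                      # rotate the particle like the robot rotated
        p["x"] += dist * math.cos(p["yaw"])
        p["y"] += dist * math.sin(p["yaw"])
    return True
```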
## Sensor model

For the sensor model, I first get the laser data with the HAL.getLaserData() function.

Then I simulate laser rays originating from each particle, incrementing the distance they cover until they hit a wall (a black pixel in the map). The method is similar to the one used to calculate how far the particles move, but starting from the position of the particle and adding an increment on each iteration. I do this for a total of three simulated rays, at 0, 90 and 180 degrees, and in the end I get a list with three values per particle, indicating the distance each simulated ray reaches.
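
A sketch of that ray marching for one particle, assuming the map is a greyscale image with black pixels as walls and that the particle pose is already expressed in map pixels; the step size, maximum range, and the exact frame the 0/90/180 degree angles are measured in are all my assumptions:

```python
import math

def simulate_ray(map_img, x, y, theta, step=1.0, max_range=400.0):
    """March along a ray from (x, y) in direction theta until a wall or max_range."""
    dist = 0.0
    while dist < max_range:
        rx = int(x + dist * math.cos(theta))
        ry = int(y + dist * math.sin(theta))
        out_of_map = ry < 0 or ry >= len(map_img) or rx < 0 or rx >= len(map_img[0])
        if out_of_map or map_img[ry][rx] < 50:   # hit the border or a black pixel
            break
        dist += step
    return dist

def simulate_rays(map_img, particle):
    """Three simulated beams at 0, 90 and 180 degrees relative to the particle's yaw."""
    angles = [particle["yaw"] + math.radians(a) for a in (0, 90, 180)]
    return [simulate_ray(map_img, particle["x"], particle["y"], a) for a in angles]
```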
After that, I calculate the difference between each particle's simulated distances and the laser data from the real robot (at the same angles) and add (or subtract) that amount to the particle's weight: when the difference is bigger than 2.5 meters I start subtracting; otherwise it adds to the weight.
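
The post doesn't spell out exactly how the difference maps onto the weight, so the following is only one plausible reading of that first rule (it was reworked later, as described in the Tweaking values section):

```python
def update_weight(weight, real_dists, simulated_dists, limit=2.5):
    """First weighting attempt: reward beams that match the real laser, penalise the rest."""
    for real, sim in zip(real_dists, simulated_dists):
        diff = abs(real - sim)
        if diff > limit:
            weight -= diff - limit      # far from what the robot actually measured
        else:
            weight += limit - diff      # close to the real reading
    return weight
```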
## Generating more particles

The next step is to generate more particles, replacing the ones with low weight. I do this in a function that also checks certain conditions to estimate the robot's position, using the GUI.showPosition() function.

First I get the particle with the highest weight and check whether the rest of the particles converge around it, that is, whether all particles are within a 2 meter radius of that particle. If they converge (or if the algorithm is spending too much time finding the location), I check whether the weight of that particle is greater than or equal to 0.9: if it passes the check, I use showPosition to report the estimated position of the robot; if not, I reset the particle list and replace it with a new set of random particles.
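
Condensed into a sketch, that decision looks roughly like this; the helper name and the string return values are made up, while the 2 meter radius and the 0.9 weight threshold come from the text:

```python
import math

CONVERGENCE_RADIUS = 2.0   # metres
MIN_BEST_WEIGHT = 0.9

def decide_next_step(particles, best, taking_too_long=False):
    """Return 'estimate', 'reset' or 'resample' depending on the state of the cloud."""
    converged = all(
        math.hypot(p["x"] - best["x"], p["y"] - best["y"]) <= CONVERGENCE_RADIUS
        for p in particles
    )
    if converged or taking_too_long:
        if best["weight"] >= MIN_BEST_WEIGHT:
            return "estimate"   # confident enough: show the pose with GUI.showPosition()
        return "reset"          # converged on a weak hypothesis: respawn random particles
    return "resample"           # keep refining the cloud with the roulette step
```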
If the set doesn't converge, I generate more particles using the roulette algorithm, which randomly chooses which particles stay on the list, with high-weight particles having a greater chance of staying than low-weight ones. It does this until the list is back to its original size, so more particles end up at the positions of the high-weight ones and the whole group slowly converges towards them with each iteration.

I also add some noise to the particles' position and yaw when using the roulette method, to account for the noise in the HAL.getOdom() function.
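
A sketch of that resampling step; I let random.choices do the weighted (roulette) draw instead of coding the wheel by hand, and the noise magnitudes are placeholders:

```python
import copy
import random

def roulette_resample(particles, pos_noise=0.05, yaw_noise=0.02):
    """Draw a new set of the same size, favouring high-weight particles, then jitter
    each copy a little to account for odometry noise."""
    weights = [max(p["weight"], 0.0) for p in particles]   # choices() needs non-negative weights
    chosen = random.choices(particles, weights=weights, k=len(particles))
    resampled = []
    for p in chosen:
        q = copy.deepcopy(p)
        q["x"] += random.gauss(0.0, pos_noise)
        q["y"] += random.gauss(0.0, pos_noise)
        q["yaw"] += random.gauss(0.0, yaw_noise)
        resampled.append(q)
    return resampled
```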
## Tweaking values

Now that I have all the components, it's time to tweak values in different places until the algorithm consistently estimates the correct position of the robot. The major changes here were in the number of laser rays simulated for each particle (nine rays instead of three) and in reworking how I assign the weights.

The way it works now is that there are two thresholds for the distance difference between the real laser and the simulated one. If the difference is less than or equal to the first threshold, the weight goes from 1 (no difference) down to 0.9 (difference equal to the first threshold). If the difference is between the first and second thresholds, the assigned weight falls between 0.9 and 0.1. And if the difference is higher than the second threshold, the weight can't be higher than 0.1 and it decreases exponentially the bigger the difference gets. After a lot of testing, I ended up setting the first threshold at 3 meters and the second at 10 meters.
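
As a sketch, the weighting curve for a single beam difference could be written like this; the linear interpolation inside each band and the exact decay rate past the second threshold are my assumptions, since the text only fixes the endpoints (1, 0.9 and 0.1) and the 3 m / 10 m thresholds:

```python
import math

FIRST_THRESHOLD = 3.0    # metres
SECOND_THRESHOLD = 10.0  # metres

def beam_weight(diff):
    """Map the |real - simulated| distance difference of one beam to a weight."""
    if diff <= FIRST_THRESHOLD:
        # from 1.0 at no difference down to 0.9 at the first threshold
        return 1.0 - 0.1 * (diff / FIRST_THRESHOLD)
    if diff <= SECOND_THRESHOLD:
        # from 0.9 at the first threshold down to 0.1 at the second
        frac = (diff - FIRST_THRESHOLD) / (SECOND_THRESHOLD - FIRST_THRESHOLD)
        return 0.9 - 0.8 * frac
    # past the second threshold the weight never exceeds 0.1 and decays exponentially
    return 0.1 * math.exp(-(diff - SECOND_THRESHOLD))
```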
In the end, the algorithm correctly estimates the position of the robot 95% of the time, and 80% of the time in hard places that are very similar to each other.

## Updating Montecarlo Visual Loc
After doing this, I realized there were some functions in this exercise that could be useful, if not essential, for the Montecarlo Visual Loc exercise, so I added those functions to the GUI and HAL files. The functions were HAL.getOdom(), GUI.getMap(), GUI.poseToMap() and GUI.mapToPose(). |