Speed up the Python code for specific-area displacement tracking #105
Comments
Another point: the geogrid module currently assumes that the input SLC is stored in a folder that contains all bursts. However, this is not true if I want to use a coregistered and merged SLC, and it's even worse for a cropped SLC. I think that's a common situation for specific use cases (e.g., an AOI smaller than a single burst, or one located in the middle of a scene), since I don't want to waste time on useless regions. I tried to set a nodata mask, as defined in testautoRIFT.py, but this doesn't reduce the input size, so the speed problem remains as mentioned above. So for now my crude solution is to read the SLC .vrt file and follow its sources until they point to the burst files, then read the related parameters, merge the orbits, and crop the orbit to match the dimensions of the cropped SLC. Is there any easy way to handle this kind of situation?
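For illustration, here is a minimal sketch of the VRT-tracing step described above, letting GDAL expand the source chain rather than parsing the XML by hand. The path is hypothetical, and `GetFileList()` simply reports the dataset's own file plus its immediate sources (so nested VRTs need a recursion):

```python
from osgeo import gdal

def trace_vrt_sources(path):
    """Follow a VRT's source files down to the underlying (non-VRT) rasters."""
    ds = gdal.Open(path)
    if ds is None:
        raise IOError("cannot open " + path)
    sources = []
    for f in ds.GetFileList() or []:
        if f == path:
            continue                               # GetFileList includes the VRT itself
        if f.endswith('.vrt'):
            sources.extend(trace_vrt_sources(f))   # nested VRT: recurse
        else:
            sources.append(f)
    return sources

print(trace_vrt_sources('merged/SLC.vrt'))  # hypothetical merged-SLC VRT
```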
Hi @Leiguo924. Thanks for the good catch on the colfilt function. When we originally wrote autoRIFT no colfilt function existed, so we wrote our own. If you have a good replacement, please open a PR and we'll look it over. You must be processing some extremely large images, as our typical single-core image-pair process takes 60 s or so from start to finish.
Hi Alex,
Actually, I am just processing an image composed of about two bursts; the input image is around 17000 × 4500 pixels. But I used a very small skip (7 in x and 2 in y) and a very high-resolution grid location (about 30 m). Maybe that's the reason?
That's certainly the reason... It seems like a lot of redundant computation, given that you'll need a chip size (effective resolution) that is much, much larger than your grid posting.
Hi @alex-s-gardner, the chip size is much larger: I used csmin 128 and csmax 256. And it is specified in pixels (not meters), right? Is there anything wrong with my parameters? Here are my options; the window_location is at 30 m resolution.
Well, you're making a calculation every 7th and 2nd pixel, but you're computing a correlation over a 128- to 256-pixel window. That seems a bit of overkill, as there is so much overlap in the information being cross-correlated. A sample spacing of 64 (= 128/2) would be more appropriate.
OK, I made a mistake: window_location already overrides the skip parameters. So based on your advice, if I want higher resolution, I should use a smaller search window, right? For example, for an S1 image, if I want the result resolution to be 30 m (the true resolution, not just the grid posting), roughly which range should the chip size be in? Thanks a lot!
If your pixel size is 30 m and you want an effective resolution of 30 m, then you would need a ChipSize of 1, which is not possible. We typically use a minimum window size of 16, which gives an effective resolution of 16 × your grid size.
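For concreteness, the arithmetic implied by this exchange (all values are taken from the thread, so treat them as illustrative):

```python
grid_posting_m = 30                      # the user's grid_location spacing
chip_size_min_px = 16                    # the maintainer's typical minimum window
effective_resolution_m = chip_size_min_px * grid_posting_m   # -> 480 m
sample_spacing_px = 128 // 2             # rule of thumb above: half the chip size
print(effective_resolution_m, sample_spacing_px)             # 480 64
```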
OK, now I understand. So the chip size is based on the pixels from grid_location, not the original pixels in radar coordinates... Anyway, I compared again with the numba-implemented colfilt; it's much faster than the current version. I will try to submit a PR later, since I installed the module directly through Conda. Thanks a lot for your answers!
Chip size is based on the pixels of the input imagery... Typically you don't get good correlation in S1 imagery unless your chip size is >~16×16... If I remember correctly, S1 has a resolution of ~12 × 3 m, but I might be thinking of NISAR.
Yes, exactly. I tried different chip sizes and found it needs to be larger than 32 for my case... Anyway, I kept the choice of starting from 64 (in range) and 32 (in azimuth). Another question: the direct output offsets should be in pixels along the slant-range and azimuth directions, but they are converted to EW and NS offsets by multiplying by X_res and Y_res, right? If I want to keep the original offsets in the radar directions (but map-reprojected), should I directly multiply by the range pixel size and azimuth pixel size?
The x/y vectors cannot simply be multiplied to get range/azimuth offsets, as they are in different projections (vx and vy need to be rotated, not just scaled, to get back to range/azimuth).
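A toy sketch of this point: because the radar axes are rotated relative to the map axes, recovering range/azimuth components takes a rotation, not a per-axis scale. The angle below is made up for illustration; in a real scene it comes from the imaging geometry and varies across the image.

```python
import numpy as np

theta = np.deg2rad(-12.0)        # hypothetical angle between azimuth axis and map north
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v_map = np.array([1.0, 0.5])     # example map-projected (vx, vy)
d_ra = R.T @ v_map               # rotated back toward (range, azimuth) axes
print(d_ra)
```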
I checked the code and paper again: the direct output offsets (e.g., Dx and Dy) from run_autoRIFT() are the pixel offsets measured in the original image coordinates. Only if offset2vx is provided as input is the result converted to velocity components in map coordinates. So I think I can directly multiply Dx and Dy by rangePixelSize and azimuthPixelSize to get the slant-range and azimuth displacements in the map projection... Thanks a lot for your patience!
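A minimal sketch of the conversion the user proposes, assuming Dx maps to slant range and Dy to azimuth; the arrays and pixel sizes below are illustrative stand-ins, not values from the thread:

```python
import numpy as np

Dx = np.array([[1.2, -0.4], [0.1, 0.9]])   # example slant-range pixel offsets
Dy = np.array([[0.3,  0.8], [-0.2, 0.5]])  # example azimuth pixel offsets

rangePixelSize = 2.33     # m; assumed S1 IW slant-range spacing (illustrative)
azimuthPixelSize = 14.1   # m; assumed S1 IW azimuth spacing (illustrative)

slant_range_disp_m = Dx * rangePixelSize   # displacement along slant range, meters
azimuth_disp_m = Dy * azimuthPixelSize     # displacement along azimuth, meters
```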
I know that this repo is not very active now and contributions are mainly made to hyp3_autoRIFT, but it really needs some cleanup and improvements, since some parts and functions are quite inefficient. For example, the frequently used colfilt function is very slow, especially when dealing with the fine-search displacement matrix.
I tried replacing its inner functions (e.g., MAD) with numba implementations, and colfilt sped up by nearly 300× (from ~2000 s to ~6 s)! This is just one example in the code...
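For illustration, a minimal sketch of a numba-jitted MAD neighborhood filter in the spirit of what is described above; this is not the actual autoRIFT colfilt (whose column-stacking approach differs), and the function name and window convention are assumptions:

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def colfilt_mad(A, h, w):
    """Per-pixel median absolute deviation over a (2h+1) x (2w+1) window, ignoring NaNs."""
    rows, cols = A.shape
    out = np.full(A.shape, np.nan)
    for i in prange(rows):                 # rows processed in parallel
        for j in range(cols):
            i0, i1 = max(0, i - h), min(rows, i + h + 1)
            j0, j1 = max(0, j - w), min(cols, j + w + 1)
            win = A[i0:i1, j0:j1].flatten()
            win = win[~np.isnan(win)]      # drop NaNs before the median
            if win.size > 0:
                med = np.median(win)
                out[i, j] = np.median(np.abs(win - med))
    return out

# usage: MAD-filter a noisy displacement field with an 11x11 window
D = np.random.randn(400, 600)
D[::17, ::11] = np.nan
mad = colfilt_mad(D, 5, 5)
```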
My use case is obtaining the displacement of a specific glacier at high resolution, which differs from the original objective of global or large-scale observation, so I used customized parameters. Maybe that's why runtime is a big issue for me: the default parameters are quite coarse, chosen for efficient regional-scale use. Nevertheless, this is still important for some specific case studies.
Hope to receive a reply from you!