Advice on using pg_dump and pg_restore for OSM dataset #3076
Unanswered · vidushirai asked this question in Q&A
Replies: 1 comment
-
pg_dump/pg_restore should work just fine. You do need to run … The Tiger and postcode data will be part of the dump, so no need to reimport.
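For reference, a minimal dump sketch; the database name `nominatim`, the custom-format flag, and the file name are assumptions, not taken from this thread:

```shell
# Dump the whole Nominatim database in custom format (-Fc),
# which pg_restore can read and restore in parallel.
# Assumes the database is named "nominatim" and the current
# user has read access to it.
pg_dump -Fc -d nominatim -f nominatim.dump
```

Custom format (`-Fc`) is generally preferable to plain SQL here because it lets `pg_restore` parallelize the restore and selectively skip objects.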
-
👋 hello!
We self-host Nominatim on EC2 instances, which are launched from an AMI that was built a year ago. I'm currently working on creating a new AMI on a different architecture. However, I don't want to import the latest OSM dataset — instead, I want the revgeo results from the new AMI to match the revgeo results from the original AMI 100%.
To do this, I ran the pg_dump command on an instance running the original AMI and wrote the dump file to S3. On the new Nominatim AMI, I skipped the import commands and instead ran
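The exact restore command used isn't shown in the thread; a hedged sketch of what that step could look like (database name, parallelism, and flags are all assumptions):

```shell
# Create an empty target database, then restore the dump into it.
# --no-owner avoids errors if the role names on the new instance
# differ from the ones recorded in the dump; -j 4 restores four
# objects in parallel (only works with custom/directory format dumps).
createdb nominatim
pg_restore -d nominatim --no-owner -j 4 nominatim.dump
```

Note that extensions such as PostGIS are recorded in the dump as `CREATE EXTENSION` statements, so the restoring role needs sufficient privileges to create them.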
This worked to the extent that the Nominatim database was created and its size matched the size of the database on instances with the original AMI. However, when I compare revgeo results for a sample set of coordinates, ~0.03% of the results don't match, which is confusing to me.
Has anyone else tried this approach — are pg_dump and pg_restore not an accurate way to replicate the information in the database?
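For quantifying that kind of mismatch, a small helper like the following can compare aligned result lists from the two instances (this is a hypothetical illustration, not code from the thread; the place names are made up):

```python
def mismatch_rate(results_a, results_b):
    """Return the fraction of positions where two revgeo result lists
    disagree. The lists must be the same length and aligned so that
    index i in both corresponds to the same input coordinate."""
    if len(results_a) != len(results_b):
        raise ValueError("result lists must be aligned")
    diffs = sum(1 for a, b in zip(results_a, results_b) if a != b)
    return diffs / len(results_a)

# Example: 1 mismatch out of 4 coordinates -> 0.25
old = ["Berlin", "Paris", "Rome", "Oslo"]
new = ["Berlin", "Paris", "Rome", "Bergen"]
print(mismatch_rate(old, new))  # 0.25
```

Inspecting the specific coordinates that disagree (rather than just the rate) is usually the fastest way to spot whether the differences cluster around a particular data source, such as postcode or Tiger data.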
Notes:
I added these lines to my new AMI, and the revgeo result mismatch somehow grew to 33%. Also, from reading the docs (link, link), it seems like the information in these files would be added to the database regardless.
Would appreciate any advice / help!