Find here: OneDrive
Created by ArcGIS and saved with the extension xx.kmz. Shenzhen.kmz is used as the example in the following content.
Add Vector Layer
Layer -> Add Layer -> Add Vector Layer
Source: path/to/Shenzhen.kmz
Select Vector Layers to Add: Geometry type: Polygon
Save the shapefile to "./src/ori/cities/Shenzhen/shapefile/shenzhen.shp" with CRS EPSG:3857 - WGS 84 / Pseudo-Mercator (units in meters instead of degrees).
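If you prefer to skip the QGIS GUI for this step, the conversion can also be sketched in Python. This is only a minimal sketch, assuming geopandas is installed on top of a GDAL build that can read KML; the temporary folder name is arbitrary.
import zipfile
import geopandas as gpd

# A .kmz is just a zipped .kml; extract the inner KML first.
with zipfile.ZipFile("Shenzhen.kmz") as kmz:
    kml_name = next(n for n in kmz.namelist() if n.endswith(".kml"))
    kmz.extract(kml_name, "tmp_kml")

gdf = gpd.read_file("tmp_kml/" + kml_name)   # polygons from the KML
gdf = gdf.to_crs(epsg=3857)                  # meters instead of degrees
gdf.to_file("./src/ori/cities/Shenzhen/shapefile/shenzhen.shp")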
Add Google Satellite Layer
Find Google Maps URL
Plugins -> Tile+ -> Tile+
Add Google Satellite by URL; zoom-level information is available here.
Browser -> XYZ Tiles -> New Connection -> xxx
Extract Google Satellite Imagery
Extract the target Google Satellite imagery based on the related shapefile
XYZ Tiles: Google Satellite -> Export layer -> To file; or XYZ Tiles: New Connection -> URL: http://www.google.cn/maps/vt?lyrs=s@189&gl=cn&x={x}&y={y}&z={z}
Save to "./src/ori/cities/Shenzhen/dataset/2.0/image_full/Shenzhen.tif"
Do not check "Create VRT".
Extent: Calculate from Layer -> Shenzhen (as an example)
Resolution (current: user defined) -> 2 (as an example)
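The export resolution is tied to the XYZ zoom level. As a quick reference, the standard Web-Mercator ground-resolution formula can be evaluated in Python; the Shenzhen latitude used below is only an illustrative assumption.
# Ground resolution (m/pixel) of Web-Mercator (EPSG:3857) XYZ tiles by zoom
# level, handy when choosing a zoom that matches the ~2 m export resolution.
# 156543.03392 m/pixel is the zoom-0 resolution at the equator.
import math

def ground_resolution(zoom, lat_deg=22.5):   # ~22.5 deg N, roughly Shenzhen
    return 156543.03392 * math.cos(math.radians(lat_deg)) / 2 ** zoom

for z in range(15, 20):
    print("zoom %d: %.2f m/pixel" % (z, ground_resolution(z)))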
Load the shapefile together with the related Google Satellite imagery in QGIS (just for visualization).
Google Satellite imagery path: "./src/ori/cities/Shenzhen/dataset/2.0/image_full/Shenzhen.tif"
Shapefile path: "./src/ori/cities/Shenzhen/shapefile/shenzhen.shp"
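The same visual check can be done outside QGIS. This is just a sketch, assuming rasterio, geopandas and matplotlib are available; it is not part of the original pipeline.
# Quick visual check: plot the exported imagery with the shapefile on top.
import geopandas as gpd
import matplotlib.pyplot as plt
import rasterio
import rasterio.plot

img = rasterio.open("./src/ori/cities/Shenzhen/dataset/2.0/image_full/Shenzhen.tif")
shp = gpd.read_file("./src/ori/cities/Shenzhen/shapefile/shenzhen.shp")

fig, ax = plt.subplots(figsize=(10, 10))
rasterio.plot.show(img, ax=ax)                           # satellite backdrop
shp.to_crs(img.crs).boundary.plot(ax=ax, color="red")    # slum outlines
plt.show()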
Clip the Google Satellite imagery and the related ground truth (shapefile) into small pieces
Run:
$ python ./utils/preprocess.py
Arguments:
# city name
CITY = 'Shenzhen'
# resolution
RESO = '2.0'
VRT_CLIP = True
# tile shape size
if VRT_CLIP:
    tile_size_x = 5000
    tile_size_y = 5000
shapefile_root = 'to/your/path/slum-mapping/src/ori/cities/%s/shapefile/%s.shp' % (CITY, CITY)
dataset_dir = 'to/your/path/slum-mapping/src/ori/cities/%s/dataset/%s/' % (CITY, RESO)
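preprocess.py does the clipping itself; purely to illustrate the tiling idea behind the arguments above, a minimal GDAL sketch could look like the following. The real script also rasterizes the shapefile into matching anno tiles, and the file naming here is an assumption.
# Illustrative only (not the actual preprocess.py): cut the full GeoTIFF into
# tile_size_x by tile_size_y tiles, writing a .tfw world file for each tile.
from osgeo import gdal

CITY = 'Shenzhen'
dataset_dir = './src/ori/cities/%s/dataset/2.0/' % CITY
tile_size_x = tile_size_y = 5000

src = gdal.Open(dataset_dir + 'image_full/%s.tif' % CITY)
xsize, ysize = src.RasterXSize, src.RasterYSize

idx = 0
for y in range(0, ysize, tile_size_y):
    for x in range(0, xsize, tile_size_x):
        out = dataset_dir + 'image/%s_%d.tif' % (CITY, idx)
        gdal.Translate(out, src,
                       srcWin=[x, y,
                               min(tile_size_x, xsize - x),
                               min(tile_size_y, ysize - y)],
                       creationOptions=['TFW=YES'])
        idx += 1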
The dataset is saved in "./src/ori/cities/Shenzhen/dataset/2.0/", with the following directory tree:
.
├── anno
│ ├── Shenzhen_0.tfw
│ ├── Shenzhen_0.tif
│ ├── ...
│ ├── Shenzhen_41.tfw
│ └── Shenzhen_41.tif
├── anno_full
│ ├── Shenzhen.tfw
│ └── Shenzhen.tif
├── image
│ ├── Shenzhen_0.tfw
│ ├── Shenzhen_0.tif
│ ├── ...
│ ├── Shenzhen_41.tfw
│ └── Shenzhen_41.tif
└── image_full
└── Shenzhen.tif
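Before uploading the tiles to the server, a quick check that every image tile has a matching annotation tile can save a failed training run. This is an optional helper, not part of the original pipeline.
# Optional sanity check: image/ and anno/ tiles should pair up one-to-one.
import os

root = './src/ori/cities/Shenzhen/dataset/2.0/'
imgs = {f for f in os.listdir(root + 'image') if f.endswith('.tif')}
annos = {f for f in os.listdir(root + 'anno') if f.endswith('.tif')}
assert imgs == annos, 'unpaired tiles: %s' % sorted(imgs ^ annos)
print('%d paired tiles found' % len(imgs))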
Connect to the Reedbush server via SSH; run
$ ssh -l p75001 reedbush.cc.u-tokyo.ac.jp
Make the training data directories on the server; run
$ cd /lustre/gp75/p75001/Work/slum-mapping/src
$ mkdir Shenzhen_2.0_train && cd Shenzhen_2.0_train
$ mkdir image label
$ touch train.txt test.txt
Upload the dataset from the local machine via sftp; run on the local terminal
$ sftp [email protected]
sftp> lcd xx # Change the local directory
sftp> cd xx # Change the remote directory
sftp> put xx # Transfer files from the local directory to the remote server
An example of uploading annotations; run on the local terminal
$ sftp [email protected]
sftp> lcd ./src/ori/cities/Shenzhen/dataset/2.0/anno/
sftp> cd /lustre/gp75/p75001/Work/slum-mapping/src/Shenzhen_2.0_train/label/
sftp> put * # upload all tiles in the current local directory
An example of uploading images; run on the local terminal
$ sftp [email protected]
sftp> lcd ./src/ori/cities/Shenzhen/dataset/2.0/image/
sftp> cd /lustre/gp75/p75001/Work/slum-mapping/src/Shenzhen_2.0_train/image/
sftp> put * # upload all tiles in the current local directory
On the server
Choose the training tile names, and edit train.txt and test.txt in "./src/Shenzhen_2.0_train/" (a helper sketch follows below);
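Hypothetical helper for filling train.txt and test.txt: list the uploaded tile base names and split them 80/20. The line format expected by extractor.py is an assumption here (one tile name per line); check the script before relying on this.
import os
import random

root = '/lustre/gp75/p75001/Work/slum-mapping/src/Shenzhen_2.0_train/'
names = sorted(os.path.splitext(f)[0]
               for f in os.listdir(root + 'image') if f.endswith('.tif'))

random.seed(0)
random.shuffle(names)
split = int(0.8 * len(names))        # e.g. 80/20 train/test split

with open(root + 'train.txt', 'w') as f:
    f.write('\n'.join(names[:split]) + '\n')
with open(root + 'test.txt', 'w') as f:
    f.write('\n'.join(names[split:]) + '\n')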
Edit "./utils/run_extractor.sh" for your own requirement
#!/bin/sh
#PBS -q l-debug
#PBS -W group_list=gp75
#PBS -l select=4:mpiprocs=8:ompthreads=4
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
. /etc/profile.d/modules.sh
module purge
module load anaconda3/2019.10 cuda10/10.0.130 intel openmpi/3.1.4/intel
export PYTHONUSERBASE=/lustre/gp75/p75001/packages
export LD_LIBRARY_PATH=/lustre/app/acc/anaconda3/2019.10/lib
python ./extractor.py -data Shenzhen_2.0_train -data_usage train -mode slide-rand -nb_crop 400
Run run_extractor.sh from ./utils/:
$ qsub run_extractor.sh
The extracted data are saved in "./dataset/shenzhen_2.0_train_rand/";
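For intuition, the slide-rand mode draws a number of random crops (-nb_crop 400) from each training tile. The sketch below only illustrates that idea; the patch size and array handling in the real extractor.py are unknown, and the 512-pixel value is purely an assumption.
# Illustration only of random cropping (not the actual extractor.py logic).
import numpy as np
from osgeo import gdal

def random_crops(tif_path, nb_crop=400, patch=512, seed=0):
    rng = np.random.default_rng(seed)
    arr = gdal.Open(tif_path).ReadAsArray()      # (bands, height, width)
    _, h, w = arr.shape
    for _ in range(nb_crop):
        y = int(rng.integers(0, h - patch))
        x = int(rng.integers(0, w - patch))
        yield arr[:, y:y + patch, x:x + patch]   # one random patch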
Edit ./run_FPN.sh, choosing the training dataset and related modes:
#!/bin/sh
#PBS -q l-regular
#PBS -W group_list=gp75
#PBS -l select=4:mpiprocs=8:ompthreads=4
#PBS -l walltime=100:00:00
cd $PBS_O_WORKDIR
. /etc/profile.d/modules.sh
module purge
module load anaconda3/2019.10 cuda10/10.0.130 intel openmpi/3.1.4/intel
export PYTHONUSERBASE=/lustre/gp75/p75001/packages
export LD_LIBRARY_PATH=/lustre/app/acc/anaconda3/2019.10/lib
python ./FPN.py -train_data Shenzhen_2.0_train-rand -terminal 1200
Run:
$ qsub ./run_FPN.sh
Check running status:
$ rbstat
Logged results are saved in "./logs/"; the trained model is saved in "./checkpoint/".
Take testing on Guangzhou_2.0 as an example;
Prepare Guangzhou_2.0_test as introduced in 4.2.1;
Edit "./run_inference.sh" and adjust the argument parameters:
#!/bin/sh
#PBS -q l-regular
#PBS -W group_list=gp75
#PBS -l select=4:mpiprocs=8:ompthreads=4
#PBS -l walltime=00:40:00
cd $PBS_O_WORKDIR
. /etc/profile.d/modules.sh
module purge
module load anaconda3/2019.10 cuda10/10.0.130 intel openmpi/3.1.4/intel
export PYTHONUSERBASE=/lustre/gp75/p75001/packages
export LD_LIBRARY_PATH=/lustre/app/acc/anaconda3/2019.10/lib
python ./vissin_Area.py -data Guangzhou_2.0_test -checkpoints FPN_epoch_300_May11_00_16.pth
Run:
$ qsub run_inference.sh
Results will be saved in "./result/area-binary".
Convert into Transparent Mode
$ python ./utils/transparent.py
Final results are saved in "./result/Guangzhou/2.0_trans/"
| Confusion Matrix | Predicted (True) | Predicted (False) |
|---|---|---|
| Actual (True) | Green | Red |
| Actual (False) | Blue | No Color |
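The table maps each confusion-matrix cell to an overlay color, with true negatives left transparent. A minimal numpy sketch of producing such an RGBA mask is shown below; it is hypothetical, and transparent.py itself may work differently.
# Hypothetical sketch of the color coding: TP=green, FN=red, FP=blue,
# TN stays fully transparent. pred and gt are binary (0/1) arrays.
import numpy as np

def confusion_overlay(pred, gt):
    h, w = gt.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[(pred == 1) & (gt == 1)] = (0, 255, 0, 255)    # true positive: green
    rgba[(pred == 0) & (gt == 1)] = (255, 0, 0, 255)    # false negative: red
    rgba[(pred == 1) & (gt == 0)] = (0, 0, 255, 255)    # false positive: blue
    return rgba                                          # TN: alpha stays 0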