To download the dataset, please refer to prepare_data.md.
Self-supervised learning supports data in the ImageNet format (raw and TFRecord).
You can download the ImageNet data or use your own unlabeled images. Provide a directory containing the images for self-supervised training, plus a filelist in which each line is an image path relative to that root directory. For example, the image directory is organized as follows:
images/
├── 0001.jpg
├── 0002.jpg
├── 0003.jpg
├── ...
└── 9999.jpg
The content of the filelist is:
0001.jpg
0002.jpg
0003.jpg
...
9999.jpg
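If the images already sit under a single directory, the filelist can be generated with a short script. Below is a minimal sketch; the directory and output file names (images/, filelist.txt) are only examples:
import os

image_root = 'images'           # root directory holding the training images
filelist_path = 'filelist.txt'  # output filelist consumed by the config

# Write one image path per line, relative to the root directory.
with open(filelist_path, 'w') as f:
    for name in sorted(os.listdir(image_root)):
        if name.lower().endswith(('.jpg', '.jpeg', '.png')):
            f.write(name + '\n')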
We use configs/selfsup/mocov2/mocov2_rn50_8xb32_200e_jpg.py as an example config, in which two config variables should be modified:
data_train_list = 'filelist.txt'
data_train_root = 'images'
Single GPU:
python tools/train.py \
${CONFIG_PATH} \
--work_dir ${WORK_DIR}
Multiple GPUs:
bash tools/dist_train.sh \
${NUM_GPUS} \
${CONFIG_PATH} \
--work_dir ${WORK_DIR}
Arguments:
- NUM_GPUS: number of GPUs
- CONFIG_PATH: the config file path of a selfsup method
- WORK_DIR: your path to save models and logs
Examples:
Edit the data paths (data_train_list and data_train_root) in ${CONFIG_PATH} to point to your own data.
GPUS=8
bash tools/dist_train.sh configs/selfsup/mocov2/mocov2_rn50_8xb32_200e_jpg.py $GPUS
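For reference, the equivalent single-GPU run calls tools/train.py directly; the --work_dir value below is only an illustrative output path:
python tools/train.py configs/selfsup/mocov2/mocov2_rn50_8xb32_200e_jpg.py \
    --work_dir work_dirs/selfsup/mocov2
After training finishes, export the model for inference: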
python tools/export.py \
${CONFIG_PATH} \
${CHECKPOINT} \
${EXPORT_PATH}
Arguments:
- CONFIG_PATH: the config file path of a selfsup method
- CHECKPOINT: your checkpoint file of a selfsup method, named epoch_*.pth
- EXPORT_PATH: your path to save the exported model
Examples:
python tools/export.py configs/selfsup/mocov2/mocov2_rn50_8xb32_200e_jpg.py \
work_dirs/selfsup/mocov2/epoch_200.pth \
work_dirs/selfsup/mocov2/epoch_200_export.pth
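To sanity-check the exported file, you can confirm it loads as a regular PyTorch checkpoint. This is only a minimal sketch; the exact keys inside the checkpoint depend on the export format:
import torch

# Load the exported checkpoint on CPU and inspect its top-level structure.
ckpt = torch.load('work_dirs/selfsup/mocov2/epoch_200_export.pth', map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])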
Download the test image (248347732153_1040.jpg in the example below).
import cv2
from easycv.predictors.feature_extractor import TorchFeatureExtractor

# Path to the model exported in the previous step.
output_ckpt = 'work_dirs/selfsup/mocov2/epoch_200_export.pth'
fe = TorchFeatureExtractor(output_ckpt)

# Read the test image and convert it from BGR (OpenCV default) to RGB.
img = cv2.imread('248347732153_1040.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Extract the feature and print its shape.
feature = fe.predict([img])
print(feature[0]['feature'].shape)
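Building on the snippet above, features for multiple images can be extracted in one call and stacked for downstream use such as retrieval. A minimal sketch, assuming the same exported checkpoint and that each result exposes its vector under the 'feature' key as shown above:
import glob

import cv2
import numpy as np
from easycv.predictors.feature_extractor import TorchFeatureExtractor

fe = TorchFeatureExtractor('work_dirs/selfsup/mocov2/epoch_200_export.pth')

# Read every jpg in a directory (path is illustrative) and convert BGR -> RGB.
imgs = []
for path in sorted(glob.glob('images/*.jpg')):
    img = cv2.imread(path)
    imgs.append(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

# One feature per image, stacked into an (N, D) array.
# Assumes each feature is a CPU array-like; adjust if the predictor returns GPU tensors.
results = fe.predict(imgs)
features = np.stack([np.asarray(r['feature']).reshape(-1) for r in results])
print(features.shape)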