YOLOv5 Now Open-Sourced 🚀 #22
Hi! First of all, congratulations on your work. Any plans on releasing pre-trained weights for YOLOv5 trained on xView?
@sramirez no, but you are free to train YOLOv5 on xView yourself :) See https://docs.ultralytics.com/yolov5/tutorials/train_custom_data
Is there a way to convert the xView GeoJSON annotation file to YOLO format?
@bartekrdz yes, of course. You'd probably want to write your own conversion script and then use YOLOv5 to get started. The only thing missing from YOLOv5 that's used here is a sliding-window inference system to run very high-res images at native resolution on smaller graphics cards, and a corresponding chip dataloader to train chips at native resolution. The YOLO label format is pretty simple; it's described in the Train Custom Data tutorial linked above.
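As a rough sketch of such a conversion script: the 2018 xView release stored pixel-space boxes in a `properties.bounds_imcoords` field as `xmin,ymin,xmax,ymax` with a `type_id` class, though you should verify that against your copy of the annotations. The core box math is just normalization to the YOLO `class x_center y_center width height` format:

```python
def bounds_to_yolo(bounds_imcoords, class_id, img_w, img_h):
    """Convert an 'xmin,ymin,xmax,ymax' pixel-space box to a YOLO label line:
    'class x_center y_center width height', all coordinates normalized to [0, 1]."""
    xmin, ymin, xmax, ymax = map(float, bounds_imcoords.split(","))
    x_center = (xmin + xmax) / 2.0 / img_w
    y_center = (ymin + ymax) / 2.0 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x50 px box at the top-left of a 200x100 px image, class 3:
print(bounds_to_yolo("0,0,100,50", 3, 200, 100))
# → 3 0.250000 0.250000 0.500000 0.500000
```

A full script would loop over the GeoJSON features, group lines by image, and write one `.txt` per image alongside it.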
Hello, I'm curious if anyone has trained an xView model on YOLO? I may go down that path if it hasn't been accomplished yet.
@pounde we've made it super easy to train YOLOv5 on xView. Instructions are in xView.yaml in the YOLOv5 repo. First download the dataset zips as indicated, then run the example train command: https://github.com/ultralytics/yolov5/blob/master/data/xView.yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# DIUx xView 2018 Challenge https://challenge.xviewdataset.org by U.S. National Geospatial-Intelligence Agency (NGA)
# -------- DOWNLOAD DATA MANUALLY and jar xf val_images.zip to 'datasets/xView' before running train command! --------
# Example usage: python train.py --data xView.yaml
# parent
# ├── yolov5
# └── datasets
# └── xView ← downloads here
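The chip dataloader / sliding-window idea mentioned earlier boils down to computing overlapping tile origins over a very large image and running each tile separately. This is an illustrative helper, not part of YOLOv5; the chip size and overlap values are assumptions:

```python
def chip_origins(img_w, img_h, chip=640, overlap=0.25):
    """Top-left (x, y) origins of overlapping chips that cover an img_w x img_h image."""
    stride = max(1, int(chip * (1 - overlap)))
    xs = list(range(0, max(img_w - chip, 0) + 1, stride))
    ys = list(range(0, max(img_h - chip, 0) + 1, stride))
    # Make sure the right and bottom edges are covered by a final chip.
    if xs[-1] + chip < img_w:
        xs.append(img_w - chip)
    if ys[-1] + chip < img_h:
        ys.append(img_h - chip)
    return [(x, y) for y in ys for x in xs]

# A 1000x640 px image tiled with 640 px chips and 25% overlap:
print(chip_origins(1000, 640))
# → [(0, 0), (360, 0)]
```

At inference time the per-chip detections would then be shifted back by each chip's origin and merged with NMS across chip boundaries.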
Perfect, thank you. I just wanted to be sure no one had accomplished it before I set down that path. Thanks for all the hard work.
Hello! I was wondering if someone (like @pounde for example) had reached good results on the xView dataset or done any kind of hyperparameter optimization. I'm actually looking for pretrained weights to use for a more specific project on aerial images, and I'm wondering if I could use transfer learning or if I should train on xView first.
@QuentinAndre11 xView is available in YOLOv5 now; I'd recommend just training it there directly. Follow the directions in the yaml first to download the data:
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# DIUx xView 2018 Challenge https://challenge.xviewdataset.org by U.S. National Geospatial-Intelligence Agency (NGA)
# -------- DOWNLOAD DATA MANUALLY and jar xf val_images.zip to 'datasets/xView' before running train command! --------
# Example usage: python train.py --data xView.yaml
# parent
# ├── yolov5
# └── datasets
# └── xView ← downloads here (20.7 GB)
@glenn-jocher Yes, I followed it and used the script (I cannot log in to the xView website though, so I used Kaggle to download the data), but I reach a mAP@0.5 score of 0.026 after 300 epochs, so I was wondering if the default settings were not really accurate here... I have an 847/127 train/val split, so I guess it's the same as the original dataset.
@QuentinAndre11 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results. Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset
Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m; for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.

Start from pretrained weights (recommended for small to medium datasets):

python train.py --data custom.yaml --weights yolov5s.pt
                                             yolov5m.pt
                                             yolov5l.pt
                                             yolov5x.pt
                                             custom_pretrained.pt

Or start from scratch (recommended for large datasets):

python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml
                                                      yolov5m.yaml
                                                      yolov5l.yaml
                                                      yolov5x.yaml

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.
Further Reading

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/

Good luck 🍀 and let us know if you have any other questions!
@QuentinAndre11 I have not set down the path of training xView on YOLO. The weights are available from the DIU S3 bucket. You can also take a look at the repo here for an implementation that may fit your needs.
Hello, I have been trying to train on the xView dataset using YOLOv5 and I followed the instructions, but I keep getting an error where it cannot find the labels. It seems able to find the images, though. Any ideas?
@ShaashvatShetty I'd recommend going to the YOLOv5 repo as we have an xView.yaml all set up to start training with instructions on dataset download: |
Can you please share the dataset file in the utils folder as well?
@tanya-suri I don't quite understand your question, but perhaps you are asking about utils/datasets.py. This file has been renamed to utils/dataloaders.py recently in YOLOv5. |
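Since the module was renamed between releases, one version-tolerant pattern is to try the new path first and fall back to the old one. `import_first` below is a hypothetical helper, not a YOLOv5 API:

```python
import importlib

def import_first(*names):
    """Return the first module in `names` that imports successfully."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {names} could be imported")

# In a YOLOv5 checkout this would pick whichever module the installed
# version provides (new name first, old name as fallback):
# dataloaders = import_first("utils.dataloaders", "utils.datasets")
```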
@glenn-jocher I followed the yolov5 repo
you should have the following structure in your xView directory:
@QuentinAndre11 300 epochs may not suffice, since the dataset is quite small in terms of number of images, but the images themselves contain many instances. I think one should train with
👋 Hello! Thanks for visiting! Ultralytics has open-sourced YOLOv5 🚀 at https://github.com/ultralytics/yolov5, featuring faster, lighter and more accurate object detection. YOLOv5 is recommended for all new projects.
YOLOv5-P5 640 Figure: accuracy/speed comparison plot (see the YOLOv5 README). Figure results were generated with:

python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt
Pretrained Checkpoints: see the YOLOv5 README for the full table (columns: size (pixels), mAP 0.5:0.95, mAP 0.5, speed V100 (ms), params (M), FLOPS @ 640 (B)).
Table Notes:
- mAP values: reproduce with python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
- Speed: reproduce with python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45
- TTA (test-time augmentation): reproduce with python test.py --data coco.yaml --img 1536 --iou 0.7 --augment
For more information and to get started with YOLOv5 🚀 please visit https://github.com/ultralytics/yolov5. Thank you!