- Support EasyCV as a plug-in for [modelscope](https://github.com/modelscope/modelscope).
- Support STDC, STGCN, ReID and Multi-len MOT.
- Support multi processes for predictor data preprocessing; for models with time-consuming data preprocessing, the speedup can exceed 50%. (#272)
- Support STDC model. (#284) (#286)
- Support ReID and Multi-len MOT. (#285) (#295)
- Support STGCN model, and support exporting blade models. (#293) (#299)
- Add pose model zoo and support exporting torch jit and blade models for pose models. (#294)
- Support training on motchallenge and crowdhuman datasets for detection models. (#265)
- Speed up inference for face detector when using mtcnn. (#273)
- Add mobilenet config for itag and imagenet datasets, and optimize the `ClsSourceImageList` api to support string labels. (#276) (#283)
- Support multi-rows replacement for first order parameter. (#282)
- Add a tool to convert itag dataset to raw dataset. (#290)
- Add `PoseTopDownPredictor` to replace `TorchPoseTopDownPredictorWithDetector` (#296)
- Remove git lfs dependencies. (#278)
- Fix wholebody keypoints evaluation. (#287)
- Fix DetSourceRaw when label files and image files do not match. (#289)
- Add inception config and voc config for FCN and UperNet (#261)
- Add inference time under V100 for the benchmark of deitiii and hydra attention (#251)
- Add bev-blancehybrid benchmark (#249)
- Fix MAE arg error after timm upgrade (#255)
- Fix export SSL models bug, avoid loading default pretrained backbone model (#257)
- Fix bug can't find config files while easycv is installed (#253)
- Add BEVFormer and improve the performance of BEVFormer (#224)
- Add DINO++ and support objects365 pretrain (#242)
- Add DeiT of Hydra Attention version (#220)
- Add EdgeViTv3 (#214)
- Unify the parsing method of config scripts, and support both local and pai platform products (#235)
- Add more data source apis for open source datasets, involving classification, detection, segmentation and keypoints tasks. And part of the data source apis support automatic download. For more information, please refer to data_hub (#206 #229)
- Add confusion matrix metric for Classification models (#241)
- Add prediction script (#239)
- Sync the predict config in the config file for predictor (#238)
- Fix the image_scale index of y2 for the bottom_left case implemented in `_mosaic_combine` (#231)
- Add bevformer benchmark and fix classification predict bug (#240)
- Support auto hyperparameter optimization of NNI (#211)
- Add DeiT III (#171)
- Add semantic segmentation model SegFormer (#191)
- Add 3d detection model BEVFormer (#203)
- Support semantic mask2former (#199)
- Support face 2d keypoint detection (#191)
- Support hand keypoints detection (#191)
- Support wholebody keypoint detection (#207)
- Optimize predictor apis, support cpu and batch inference (#195)
- Speed up ViTDet model (#177)
- Support export jit model end2end for yolox (#215)
- Fix missing utils (#183)
- Release YOLOX-PAI which achieves SOTA results within 40~50 mAP (less than 1ms) (#154 #172 #174 )
- Add detection algo DINO (#144)
- Add mask2former algo (#115)
- Release imagenet1k, imagenet22k, coco, lvis, voc2012 data with BaiduDisk to accelerate downloading (#145)
- Add detection predictor which supports model inference without exporting models (#158)
- Add ViTDet support for faster-rcnn (#155)
- Update FCOS to torch_style (#170)
- Add algo tables to describe which algos EasyCV supports (#157)
- Refactor datasources api (#156 #140 )
- Add PR and Issue templates (#150)
- Update Fast ConvMAE doc (#151)
- Self-Supervised support ConvMAE algorithm (#101) (#121)
- Classification support EfficientFormer algorithm (#128)
- Detection support FCOS, DETR, DAB-DETR and DN-DETR algorithms (#100) (#104) (#119)
- Segmentation support UperNet algorithm (#118)
- Support using torchacc to speed up training (#105)
- Support using analysis tools (#133)
- Update yolox config template and fix bugs (#134)
- Fix yolox detector prediction export error (#125)
- Fix common_io url error (#126)
- Add semantic segmentation modules, support FCN algorithm (#71)
- Expand classification model zoo (#55)
- Support exporting models with blade for yolox (#66)
- Support ViTDet algorithm (#35)
- Add sailfish for extensible fully sharded data parallel training (#97)
- Support run with mmdetection models (#25)
- Set multiprocess env for speedup (#77)
- Add data hub, summarized various datasets in different fields (#70)
- Fix the inaccurate accuracy caused by missing the `groundtruth_is_crowd` field in CocoMaskEvaluator (#61)
- Unify the usage of the `pretrained` parameter and fix load bugs (#79) (#85) (#95)
- Update MAE pretrained models and benchmark (#50)
- Add detection benchmark for SwAV and MoCo-v2 (#58)
- Add moby swin-tiny pretrained model and benchmark (#72)
- Update prepare_data.md, add more details (#69)
- Optimize quantize code and support to export MNN model (#44)
- Support image visualization for tensorboard and wandb (#15)
- Update moby pretrained model to deit small (#10)
- Add mae vit-large benchmark and pretrained models (#24)
- Fix extract.py for benchmarks (#7)
- Fix inference error of classifier (#19)
- Fix multi-process reading of detection datasource and accelerate data preprocessing (#23)
- Fix torchvision transforms wrapper (#31)
- Add chinese readme (#39)
- Add model compression tutorial (#20)
- Add notebook tutorials (#22)
- Uniform input and output format for transforms (#6)
- Update model zoo link (#8)
- Support readthedocs (#29)
- Refine autorelease git workflow (#13)
- Initial commit & first release
- SOTA SSL Algorithms
EasyCV provides state-of-the-art self-supervised learning algorithms based on contrastive learning, such as SimCLR, MoCo v2, SwAV and DINO, as well as MAE based on masked image modeling. We also provide standard benchmark tools for SSL model evaluation.
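Contrastive methods like SimCLR and MoCo share an InfoNCE-style objective: two augmented views of the same image are pulled together in embedding space while other images are pushed away. A minimal pure-Python sketch of that objective (illustrative only, not EasyCV code; the vectors and temperature below are made up):

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE loss: -log( exp(sim(a,p)/t) / sum_k exp(sim(a,k)/t) ),
    # where k ranges over the positive and all negatives.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

A positive embedding close to the anchor yields a much lower loss than a distant one, which is exactly the pressure that shapes the representation.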
- Vision Transformers
EasyCV aims to provide plenty of vision transformer models trained with either supervised or self-supervised learning, such as ViT, Swin Transformer and XCiT. More models will be added in the future.
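As a concrete detail of the ViT family above: the token sequence length follows directly from the image and patch sizes, since the image is split into non-overlapping patches and an optional [CLS] token is prepended. A small helper (illustrative, not an EasyCV function):

```python
def vit_sequence_length(image_size, patch_size, with_cls_token=True):
    # A ViT sees (H/P) * (W/P) patch tokens, plus an optional [CLS] token.
    assert image_size % patch_size == 0, "image size must be divisible by patch size"
    num_patches = (image_size // patch_size) ** 2
    return num_patches + (1 if with_cls_token else 0)
```

For the standard ViT setting of 224x224 images with 16x16 patches this gives 14*14 = 196 patch tokens plus [CLS], i.e. 197 tokens.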
- Functionality & Extensibility
In addition to SSL, EasyCV also supports image classification, object detection and metric learning, and more areas will be supported in the future. Although covering different areas, EasyCV decomposes the framework into components such as dataset, model and running hook, making it easy to add new components and combine them with existing modules. EasyCV provides a simple and comprehensive interface for inference. Additionally, all models are supported on PAI-EAS, where they can easily be deployed as online services with automatic scaling and service monitoring.
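The unified inference interface described above can be pictured as a predictor object that is constructed once and then called on single samples or batches. The sketch below is a hypothetical minimal predictor illustrating that pattern; the class name, `model_fn` and `batch_size` are invented for this example and are not EasyCV's actual API:

```python
class SimplePredictor:
    """Illustrative predictor: load once, then call on a sample or a batch.

    `model_fn` stands in for a loaded model; this is a sketch of the
    interface pattern, not EasyCV code.
    """

    def __init__(self, model_fn, batch_size=8):
        self.model_fn = model_fn
        self.batch_size = batch_size

    def __call__(self, inputs):
        # Accept a single sample or a list; always return a list of results.
        single = not isinstance(inputs, list)
        batch = [inputs] if single else inputs
        results = []
        # Process in chunks of batch_size, applying the model to each sample.
        for i in range(0, len(batch), self.batch_size):
            chunk = batch[i:i + self.batch_size]
            results.extend(self.model_fn(x) for x in chunk)
        return results
```

Normalizing single inputs to batches (and always returning a list) keeps the calling code identical whether it predicts one image or a folder of them.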
- Efficiency
EasyCV supports multi-gpu and multi-worker training. It uses DALI to accelerate data IO and preprocessing, and fp16 to accelerate training. For inference optimization, EasyCV exports models as jit script, which can be optimized by PAI-Blade.
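The jit-script export step can be sketched with a tiny stand-in module (`TinyHead` is made up for this example and is not an EasyCV model). Tracing freezes the model into a TorchScript artifact, which is the kind of self-contained graph an optimizer such as PAI-Blade consumes:

```python
import torch

class TinyHead(torch.nn.Module):
    # A toy module standing in for a real exported model.
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x).softmax(dim=-1)

model = TinyHead().eval()
example = torch.randn(1, 4)
traced = torch.jit.trace(model, example)  # freeze the graph as TorchScript
traced.save("tiny_head.pt")               # self-contained deployment artifact
reloaded = torch.jit.load("tiny_head.pt") # no Python class definition needed
```

The saved `.pt` file can be loaded without the original Python class, which is what makes TorchScript export convenient for online serving.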