Releases: mindspore-lab/mindcv
MindCV 0.3.0
What's Changed
- feat: update training recipe and trained weights of regnet by @The-truthh in #695
- feat: add model script, training recipe and trained weights of Swin Transformer V2 by @The-truthh in #673
- fix: fix bug in function load_pretrained by @XixinYang in #690
- feat: add model script, training configs and training weights of Volo by @lxg39 in #676
- hotfix: mindspore is not compatible with the latest version of pillow by @geniuspatrick in #698
- feat: add model script, training recipe and pretrained weight of cmt_s by @wcrzlh in #680
- feat: add jsd loss and asymmetric loss by @wcrzlh in #682
- feat: support augmentation splits by @sageyou in #658
- feat: support custom amp cast list by @The-truthh in #685
- refactor: unify all model names and correct many other text errors by @XixinYang in #701
- docs: fix typos in documents. by @XixinYang in #706
- fix ema and edgenext yaml bugs by @Songyuanwei in #703
- docs: fix typo in configs/vit/readme.md by @XixinYang in #708
- fix: fix the bug of mixnet config by @The-truthh in #696
- docs: Add the How-To Guide for Multi-Scale Feature Extraction by @DexterJZ in #709
- fix: fix crossvit 310 infer and update yaml by @Songyuanwei in #710
- fix: fix the incompatibility of swintransformerv2 on ms2.0 by @The-truthh in #719
- feat: add model script, training recipe and pretrained weight of halonet by @rabbit-fgh in #720
- feat: SSD examples based on MindCV by @DexterJZ in #704
- docs: upgrade mkdocs version by @geniuspatrick in #734
- perf: optimize performance for pynative by @chujinjin101 in #730
- docs: add information for the argument "separate" when creating transforms by @sageyou in #732
- feat: add model script, training configs and training weights of Volo by @lxg39 in #731
- docs: add tutorials of fine-tune on a custom dataset by @XixinYang in #711
- feat: add some utility scripts ✨ by @geniuspatrick in #738
- feat: DeepLabV3, DeepLabV3+ example based on MindCV by @Fzilan in #735
- docs: add explanation of suffix 'hardware' of training recipe in config README by @The-truthh in #739
- refactor: extend vit and add mae model and finetune checkpoint file by @sageyou in #707
- refactor: init value_and_grad by @geniuspatrick in #737
- feat: add mindspore-version CLIP (rewritten based on openAI-CLIP) by @XixinYang in #740
- feat: add mindspore-version open_clip by @XixinYang in #744
- fix: correct several bugs in open_clip by @XixinYang in #747
- fix: fix a bug in open_clip by @XixinYang in #748
- fix: fix bugs in deeplabv3 examples by @Fzilan in #754
- refactor(optimizer): change "filter_bias_and_bn" to "weight_decay_filter" by @sageyou in #752
- feat: upgrade ms to 2.2 by @The-truthh in #756
- bump version to 0.3.0 by @geniuspatrick in #751
New Contributors
- @lxg39 made their first contribution in #676
- @chujinjin101 made their first contribution in #730
- @Fzilan made their first contribution in #735
Full Changelog: v0.2.2...v0.3.0
MindCV 0.2.2
What's Changed
- feat: add model script, training recipe and trained weights of ConvNeXt V2 by @The-truthh in #654
- feat: support gradient accumulation function by @zhtmike in #609
- refactor: modify filenames of pvtv2, inceptionv3, inceptionv4 and shufflenetv1&v2 by @XixinYang in #677
- feat: support DynamicLossScale for TrainStep by @geniuspatrick in #678
- feat: coat_mini update by @SunJuil-Ty in #662
- feat: add OneCycleLR and CyclicLR scheduler by @The-truthh in #657
- feat: better logging by @geniuspatrick in #684
- feat: add training recipe and pretrained weights of mnasnet_1.3 by @XixinYang in #679
- feat: support multi-scale feature extraction. by @DexterJZ in #643
- hotfix: fix the train_step optimizer.global_step type (int32 -> float32) bug by @The-truthh in #686
- feat: upgrade ms to v2.0 while maintaining compatibility of v1.8 by @geniuspatrick in #674
- fix: fix the bug of mobilenet_v3 on ms1.8 on Ascend in deployment tutorial by @The-truthh in #688
- feat: update training recipes and pretrained weights of shufflenet_v2 (amp_level: O3) by @The-truthh in #681
- fix: correct variable "keep_prob" to "drop_path_prob" of efficientnet by @wcrzlh in #672
- hotfix: top_k save policy; coat_mini & coat_tiny recipe by @geniuspatrick in #689
- docs: update links in markdown files by @XixinYang in #687
- bump version to 0.2.2 by @geniuspatrick in #691
Full Changelog: v0.2.1...v0.2.2
MindCV 0.2.1
What's Changed
- feat: add crossvit9 ckpt, yaml and ut test by @Songyuanwei in #585
- fix: remove usage of mindspore internal interface `_checkparam` by @Songyuanwei in #592
- feat: add model script, training recipe and trained weights of MobileViT by @Rthete in #582
- feat: update training recipe and trained weights of PiT by @The-truthh in #593
- feat: update training recipe and trained weights of MixNet by @The-truthh in #594
- fix mlpmixer.py to make it work by @XixinYang in #596
- fix: correct checkpoint url of mobilenetv3-small by @XixinYang in #598
- fix the bug when using the adan optimizer by @wcrzlh in #599
- coat tiny update by @SunJuil-Ty in #600
- feat: update training recipe and trained weights of ResNeSt by @The-truthh in #601
- fix: modify the BatchNorm2d input parameters of the xcit model by @bnucsy in #613
- feat: add training recipe and trained weights of Efficientnet_b1 by @wcrzlh in #615
- fix: Correct Top-1 accuracy of several models; update benchmark_results.md; correct interpolation configuration of visformer; add reference to "re_value" in config.py by @XixinYang in #618
- docs: update "What is New" in readme by @geniuspatrick in #641
- fix: Fix vit when pooling mode is not cls by @SamitHuang in #612
- feat: update training recipe and trained weights of PVTV2 by @The-truthh in #645
- feat: Add training recipe and pretrained weights of mnasnet_0.5 by @XixinYang in #619
- feat: add training recipe and trained weights of Repvgg by @wcrzlh in #617
- fix: Correct top-1 and top-5 acc of repvgg_b1g4 by @wcrzlh in #651
- feat: add training recipe & pretrained weights of regnet_(x,y)_(2,4,6,8)00mf by @geniuspatrick in #644
- feat: Update training recipes and pretrained weights of mobilenet v2 (amp_level: O3) by @XixinYang in #655
- feat: add three data augmentation APIs (3-Augment, AugMix, TrivialAugmentWide) by @sageyou in #616
- feat: Update training recipes and pretrained weights of mobilenet v3 (amp_level: O3) by @XixinYang in #656
- feat: update model script, add training recipe and trained weights of GhostNet by @The-truthh in #597
- fix: update acc@1 of rexnet by @geniuspatrick in #660
- feat: add training recipe and trained weights of vgg (amp level O2) by @wcrzlh in #659
- fix: update acc@1 of mobilevit by @geniuspatrick in #661
- docs: migrate document builder to mkdocs by @geniuspatrick in #665
- docs: make sure that only the main branch of the main repo will deploy pages by @geniuspatrick in #666
- docs: set long_description to the contents of README.md as the description on PyPI by @geniuspatrick in #669
- bump version to 0.2.1 by @geniuspatrick in #670
Full Changelog: v0.2.0...v0.2.1
MindCV 0.2.0
What's Changed
- [Feature] add training recipe and trained weights of googlenet, inception_v3, inception_v4, xception by @The-truthh in #397
- [Feature] Improve Random Seed by @SamitHuang in #408
- bit101 by @QP-AA in #446
- add hrnet_w32 and hrnet_w48 by @zhtmike in #451
- [feature] support get pretrained(default) cfg by model name by @geniuspatrick in #459
- [Feature] add model script, training recipe and trained weights of SENet by @The-truthh in #460
- configure .gitattributes to manage how Git reads line endings by @geniuspatrick in #468
- [PyCQ] Add configuration of pre-commit, isort, black, flake8 by @geniuspatrick in #471
- [PyCQ] Automatically formatted by pre-commit-hooks. by @geniuspatrick in #472
- [PyCQ] Automatically formatted by isort & black. by @geniuspatrick in #473
- [PyCQ] Make flake8 happy. by @geniuspatrick in #474
- [PyCQ] Update CI/CD by @geniuspatrick in #475
- [docs] update badges in README; add PR template; update contributing.md by @geniuspatrick in #476
- add clip_grad by @Songyuanwei in #469
- improve solution #284 of fixing BCE loss w/o mixup by @geniuspatrick in #478
- [feat] add default download root instead of using current working dir by @geniuspatrick in #479
- modify ema name and clip_grad by @Songyuanwei in #483
- [feat] resorting and regrouping options in config.py by @geniuspatrick in #485
- Add Lion Optimizer by @SamitHuang in #484
- [Feature] add model script, training recipe and trained weights of MixNet by @The-truthh in #467
- fixed set_seed bug by @Songyuanwei in #487
- [Feature] add training recipe and trained weights of DPN by @wcrzlh in #481
- [feat] add trainer factory by @geniuspatrick in #489
- [Feature] add training recipe and trained weights of PVTV2 by @The-truthh in #490
- [Feature] update mobilenetv3_small_100 trained weight by @Songyuanwei in #493
- [Feature] add training recipe and trained weights of VGG by @wcrzlh in #495
- [Feature] add training recipe and trained weights of NasNet by @wcrzlh in #509
- move callbacks train_step and trainer_factory to utils by @Songyuanwei in #502
- [Feature] update edgenext_base and edgenext_x_small by @weizhenhuan in #500
- Fix the typo of model name in nasnet_a_4x1056_ascend.yaml by @wcrzlh in #514
- [hotfix] testing model forward on a subset instead of the full collection by @geniuspatrick in #522
- Add training configs and pretrained weights of squeezenet and readme by @Silver0930 in #494
- [Feature] add model script, training recipe and trained weights of bit50x3 by @QP-AA in #525
- add model mlpmixer by @XixinYang in #518
- [feat] make sure dataset is downloaded in default download path when … by @geniuspatrick in #524
- [Feature] add model script, training recipe and trained weights of CrossViT by @xusheng365 in #523
- [Feature] add training recipe and trained weights of ResNeXt by @The-truthh in #498
- [fix] update mobilenetv1 yaml acc by @Songyuanwei in #533
- [hotfix] disable parameter_broadcast when invoking set_auto_parallel_context by @geniuspatrick in #531
- [Feature] add training recipe and trained weights of Swin Transformer by @wcrzlh in #530
- [Feature] add training recipe and trained weights of EfficientNet by @wcrzlh in #527
- add xcit by @bnucsy in #529
- add coat by @SunJuil-Ty in #519
- [Feature] add model script, training recipe and trained weights of ResNeSt by @The-truthh in #532
- [Feature] add model script, training configs and training weights of resnetv2 by @Songyuanwei in #515
- Fix classifier_name bug in load_pretrain function by @SamitHuang in #535
- [Feature] add model script, training recipe and trained weights of PiT by @ziyi-luo in #492
- [fix] fixed resnet_152_gpu bug by @Songyuanwei in #545
- [Feature] update model script, training recipe and trained weights of GoogLeNet by @The-truthh in #546
- [Feature] add training recipe and trained weights of RepVGG by @wcrzlh in #560
- [Feature] add training weights and readme file of sknet by @Silver0930 in #534
- [Feature] add model script of cait by @wcrzlh in #547
- [Feature] add edgenext_xx_small model yaml and ckpt by @Songyuanwei in #563
- [Feature] update training recipe and trained weights of PVTV2; fix training recipe bug of dynamic loss scale of PVTV2 by @The-truthh in #568
- [Feature] add training recipe and trained weights of ConvNeXt by @The-truthh in #561
- [Feature] update training recipe and trained weights of MixNet by @The-truthh in #570
- [fix] update resnet ckpt url by @Songyuanwei in #548
- [feat] check if options from yaml config exist in argument parser by @geniuspatrick in #574
- [docs] add description of data sets that can be downloaded by @Songyuanwei in #573
- [refactor] update checkpoint url to uniform format by @XixinYang in #581
- update readme "What is New" for changes to the resnet url by @Songyuanwei in #580
- [Fixed] fixed regnet yaml error by @Songyuanwei in #583
- [Feature] add training recipe and trained weights of repvgg_a0 by @wcrzlh in #579
- [Feature] update training recipe and trained weights of ResNeXt by @The-truthh in #577
- [fix] correct checkpoint url of edgenext, resnet, resnetv2 by @XixinYang in #586
- [fix] Modify top-1 accuracy of MixNet by @The-truthh in #589
- [fix] Delete scale shufflenet_v1_g3_x1_5 and shufflenet_v1_g3_x2_0 of shufflenetv1 by @Silver0930 in #584
- [docs] update benchmark and add usage for gen_benchmark by @wcrzlh in #590
New Contributors
- @Ash-Lee233 made their first contribution in #431
- @QP-AA made their first contribution in #446
- @zhtmike made their first contribution in #451
- @Silver0930 made their first contribution in #452
- @wcrzlh made their first contribution in #481
- @eltociear made their first contribution in #430
- @xusheng365 made their first contribution in #523
- @bnucsy made their first contribution in #529
- @SunJuil-Ty made their first contribution in #519
- @ziyi-luo made their first contribution in #492
Full Changelog: v0.1.1...v0.2.0
MindCV 0.1.1
MindCV 0.1
MindCV 0.0.1-beta
MindCV 0.0.1-alpha