* The Hugging Face Hub (https://huggingface.co/timm) is now the primary source for `timm` weights. Model cards include link to papers, original source, license.
* Previous 0.6.x can be cloned from [0.6.x](https://github.com/rwightman/pytorch-image-models/tree/0.6.x) branch or installed via pip with version.
* Add `--reparam` arg to `benchmark.py`, `onnx_export.py`, and `validate.py` to trigger layer reparameterization / fusion for models with any one of `reparameterize()`, `switch_to_deploy()` or `fuse()`
  * Including FastViT, MobileOne, RepGhostNet, EfficientViT (MSRA), RepViT, RepVGG, and LeViT
* Preparing 0.9.6 'back to school' release
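The `--reparam` hook probing above can be sketched as a small helper (a hypothetical minimal version for illustration; the class and function names here are not timm's actual utility):

```python
class ToyRepBlock:
    """Hypothetical stand-in for a re-parameterizable block (e.g. a RepVGG-style unit)."""
    def __init__(self):
        self.deployed = False

    def switch_to_deploy(self):
        # Fuse training-time branches into a single inference path.
        self.deployed = True


def reparameterize(module):
    """Call the first available fusion hook on a module, mirroring what --reparam
    triggers for models exposing reparameterize(), switch_to_deploy() or fuse()."""
    for hook_name in ('reparameterize', 'switch_to_deploy', 'fuse'):
        hook = getattr(module, hook_name, None)
        if callable(hook):
            hook()
            return True
    return False
```

A model without any of the three hooks is simply left as-is, which is why the flag is safe to pass for mixed model lists.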
### Aug 3, 2023
* Add GluonCV weights for HRNet w18_small and w18_small_v2. Converted by [SeeFun](https://github.com/seefun)
* Fix `selecsls*` model naming regression
* CoAtNet (https://arxiv.org/abs/2106.04803) and MaxVit (https://arxiv.org/abs/2204.01697) `timm` original models
  * both found in [`maxxvit.py`](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/maxxvit.py) model def, contains numerous experiments outside scope of original papers
  * an unfinished Tensorflow version from MaxVit authors can be found at https://github.com/google-research/maxvit
  * Initial CoAtNet and MaxVit timm pretrained weights (working on more):
* `cs3`, `darknet`, and `vit_*relpos` weights above all trained on TPU thanks to TRC program! Rest trained on overheating GPUs.
* Hugging Face Hub support fixes verified, demo notebook TBA
* Pretrained weights / configs can be loaded externally (i.e. from local disk) w/ support for head adaptation.
* Add support to change image extensions scanned by `timm` datasets/readers. See https://github.com/rwightman/pytorch-image-models/pull/1274#issuecomment-1178303103
* Default ConvNeXt LayerNorm impl to use `F.layer_norm(x.permute(0, 2, 3, 1), ...).permute(0, 3, 1, 2)` via `LayerNorm2d` in all cases.
  * a bit slower than previous custom impl on some hardware (i.e. Ampere w/ CL), but overall fewer regressions across wider HW / PyTorch version ranges.
  * previous impl exists as `LayerNormExp2d` in `models/layers/norm.py`
* Numerous bug fixes
* Currently testing for imminent PyPI 0.6.x release
* LeViT pretraining of larger models still a WIP, they don't train well / easily without distillation. Time to add distill support (finally)?
* ImageNet-22k weight training + finetune ongoing, work on multi-weight support (slowly) chugging along (there are a LOT of weights, sigh) ...
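The ConvNeXt `LayerNorm2d` change above normalizes an NCHW tensor over its channel dim by permuting channels last, applying standard layer norm, and permuting back. A NumPy sketch of that idea (hypothetical helper for illustration, not the actual timm implementation):

```python
import numpy as np

def layer_norm_2d(x, weight, bias, eps=1e-6):
    """LayerNorm over the channel dim of an NCHW array, done as
    permute -> normalize -> permute back, mirroring the pattern
    F.layer_norm(x.permute(0, 2, 3, 1), ...).permute(0, 3, 1, 2)."""
    y = x.transpose(0, 2, 3, 1)            # NCHW -> NHWC
    mean = y.mean(axis=-1, keepdims=True)  # stats over channels (last axis)
    var = y.var(axis=-1, keepdims=True)
    y = (y - mean) / np.sqrt(var + eps) * weight + bias
    return y.transpose(0, 3, 1, 2)         # NHWC -> NCHW
```

After this, every spatial position has (approximately) zero mean and unit variance across channels, which is what a channels-last `F.layer_norm` over the channel dim produces.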
### May 13, 2022
* Official Swin-V2 models and weights added from (https://github.com/microsoft/Swin-Transformer). Cleaned up to support torchscript.
* Some refactoring for existing `timm` Swin-V2-CR impl, will likely do a bit more to bring parts closer to official and decide whether to merge some aspects.
* More Vision Transformer relative position / residual post-norm experiments (all trained on TPU thanks to TRC program)
  * `vit_relpos_small_patch16_224` - 81.5 @ 224, 82.5 @ 320 -- rel pos, layer scale, no class token, avg pool
  * `vit_relpos_medium_patch16_rpn_224` - 82.3 @ 224, 83.1 @ 320 -- rel pos + res-post-norm, no class token, avg pool
  * `vit_relpos_medium_patch16_224` - 82.5 @ 224, 83.3 @ 320 -- rel pos, layer scale, no class token, avg pool
  * `vit_relpos_base_patch16_gapcls_224` - 82.8 @ 224, 83.9 @ 320 -- rel pos, layer scale, class token, avg pool (by mistake)
* Bring 512 dim, 8-head 'medium' ViT model variant back to life (after using in a pre-DeiT 'small' model for first ViT impl back in 2020)
* Add ViT relative position support for switching btw existing impl and some additions in official Swin-V2 impl for future trials
* Sequencer2D impl (https://arxiv.org/abs/2205.01972), added via PR from author (https://github.com/okojoalg)
### May 2, 2022
* Vision Transformer experiments adding Relative Position (Swin-V2 log-coord) (`vision_transformer_relpos.py`) and Residual Post-Norm branches (from Swin-V2) (`vision_transformer*.py`)
  * `vit_relpos_base_patch32_plus_rpn_256` - 79.5 @ 256, 80.6 @ 320 -- rel pos + extended width + res-post-norm, no class token, avg pool
  * `vit_relpos_base_patch16_224` - 82.5 @ 224, 83.6 @ 320 -- rel pos, layer scale, no class token, avg pool
  * `vit_base_patch16_rpn_224` - 82.3 @ 224 -- rel pos + res-post-norm, no class token, avg pool
* Vision Transformer refactor to remove representation layer that was only used in initial vit and rarely used since with newer pretrain (i.e. `How to Train Your ViT`)
  * `vit_*` models support removal of class token, use of global average pool, use of fc_norm (ala beit, mae).
### April 22, 2022
*`timm` models are now officially supported in [fast.ai](https://www.fast.ai/)! Just in time for the new Practical Deep Learning course. `timmdocs` documentation link updated to [timm.fast.ai](http://timm.fast.ai/).
* Two more model weights added in the TPU trained [series](https://github.com/rwightman/pytorch-image-models/releases/tag/v0.1-tpu-weights). Some In22k pretrain still in progress.
* Add `ParallelBlock` and `LayerScale` option to base vit models to support model configs in [Three things everyone should know about ViT](https://arxiv.org/abs/2203.09795)
* `convnext_tiny_hnf` (head norm first) weights trained with (close to) A2 recipe, 82.2% top-1, could do better with more epochs.
### March 21, 2022
* Merge `norm_norm_norm`. **IMPORTANT** this update for a coming 0.6.x release will likely de-stabilize the master branch for a while. Branch [`0.5.x`](https://github.com/rwightman/pytorch-image-models/tree/0.5.x) or a previous 0.5.x release can be used if stability is required.
* Significant weights update (all TPU trained) as described in this [release](https://github.com/rwightman/pytorch-image-models/releases/tag/v0.1-tpu-weights)
* HuggingFace hub support fixed w/ initial groundwork for allowing alternative 'config sources' for pretrained model definitions and weights (generic local file / remote url support soon)
* SwinTransformer-V2 implementation added. Submitted by [Christoph Reich](https://github.com/ChristophReich1996). Training experiments and model changes by myself are ongoing so expect compat breaks.
* Swin-S3 (AutoFormerV2) models / weights added from https://github.com/microsoft/Cream/tree/main/AutoFormerV2
* MobileViT models w/ weights adapted from https://github.com/apple/ml-cvnets
* PoolFormer models w/ weights adapted from https://github.com/sail-sg/poolformer
* VOLO models w/ weights adapted from https://github.com/sail-sg/volo
* Significant work experimenting with non-BatchNorm norm layers such as EvoNorm, FilterResponseNorm, GroupNorm, etc
* Enhance support for alternate norm + act ('NormAct') layers added to a number of models, esp EfficientNet/MobileNetV3, RegNet, and aligned Xception
* Grouped conv support added to EfficientNet family
* Add 'group matching' API to all models to allow grouping model parameters for application of 'layer-wise' LR decay, lr scale added to LR scheduler
* Gradient checkpointing support added to many models
* `forward_head(x, pre_logits=False)` fn added to all models to allow separate calls of `forward_features` + `forward_head`
* All vision transformer and vision MLP models updated to return non-pooled / non-token selected features from `forward_features`, for consistency with CNN models; token selection or pooling now applied in `forward_head`
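The `forward_features` / `forward_head` split above can be illustrated with a toy class (hypothetical, not a real timm model; real timm models apply the same contract to tensors):

```python
class ToyModel:
    """Hypothetical model showing the forward_features / forward_head contract:
    features stay unpooled; pooling and classification live in the head."""

    def forward_features(self, x):
        # Return unpooled "features" (here: a list of per-token values).
        return [2 * v for v in x]

    def forward_head(self, features, pre_logits=False):
        pooled = sum(features) / len(features)  # pooling happens in the head
        if pre_logits:
            return pooled                       # features just before the classifier
        return pooled + 1.0                     # stand-in for the final classifier

    def forward(self, x):
        # Full forward is always head(features(x)), so callers can intercept either stage.
        return self.forward_head(self.forward_features(x))
```

Calling the two stages separately lets downstream code reuse the unpooled features (e.g. for dense tasks) while `pre_logits=True` exposes the pooled embedding without the classifier.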
### Feb 2, 2022
* [Chris Hughes](https://github.com/Chris-hughes10) posted an exhaustive run-through of `timm` on his blog yesterday. Well worth a read. [Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide](https://towardsdatascience.com/getting-started-with-pytorch-image-models-timm-a-practitioners-guide-4e77b4bf9055)
* I'm currently prepping to merge the `norm_norm_norm` branch back to master (ver 0.6.x) in the next week or so.
* The changes are more extensive than usual and may destabilize and break some model API use (aiming for full backwards compat). So, beware `pip install git+https://github.com/rwightman/pytorch-image-models` installs!
* `0.5.x` releases and a `0.5.x` branch will remain stable with a cherry pick or two until dust clears. Recommend sticking to pypi install for a bit if you want stable.
### Jan 14, 2022
* Version 0.5.4 w/ release to be pushed to pypi. It's been a while since last pypi update and riskier changes will be merged to main branch soon....
* Add ConvNeXT models w/ weights from official impl (https://github.com/facebookresearch/ConvNeXt), a few perf tweaks, compatible with timm features
* Tried training a few small (~1.8-3M param) / mobile optimized models, a few are good so far, more on the way...
  * `mnasnet_small` - 65.6 top-1
  * `mobilenetv2_050` - 65.9
  * `lcnet_100/075/050` - 72.1 / 68.8 / 63.1
  * `semnasnet_075` - 73
  * `fbnetv3_b/d/g` - 79.1 / 79.7 / 82.0
* TinyNet models added by [rsomani95](https://github.com/rsomani95)
* LCNet added via MobileNetV3 architecture
## Introduction
All model architecture families include variants with pretrained weights.
* MobileNet-V2 - https://arxiv.org/abs/1801.04381
* Single-Path NAS - https://arxiv.org/abs/1904.02877