This repository is a paper digest of Transformer-related approaches to visual tracking. The tasks currently covered are Unified Tracking (UT), Single Object Tracking (SOT), and 3D Single Object Tracking (3DSOT). Note that some trackers built on a Non-Local attention mechanism are also collected. Papers are listed in alphabetical order of their first character.
Note
I find it hard to trace every task related to tracking, including Video Object Segmentation (VOS), Multiple Object Tracking (MOT), Video Instance Segmentation (VIS), Video Object Detection (VOD), and Object Re-Identification (ReID). Hence, I dropped all other tracking tasks in a previous update. If you are interested, you can find plenty of collections in this archived version. Besides, the most recent trend shows that different tracking tasks are converging toward a unified paradigm.
- GRM (Generalized Relation Modeling for Transformer Tracking) [paper] [code] [video]
- AiATrack (AiATrack: Attention in Attention for Transformer Visual Tracking) [paper] [code] [video]
- Image courtesy: https://arxiv.org/abs/2302.11867
- (Survey) Transformers in Single Object Tracking: An Experimental Survey [paper], Visual Object Tracking with Discriminative Filters and Siamese Networks: A Survey and Outlook [paper]
- (Talk) Discriminative Appearance-Based Tracking and Segmentation [video], Deep Visual Reasoning with Optimization-Based Network Modules [video]
- (Library) PyTracking: Visual Tracking Library Based on PyTorch [code]
- (People) Martin Danelljan@ETH [web], Bin Yan@DLUT [web]
- Benefit from pre-trained vision Transformer models.
- Free from randomly initialized correlation modules.
- More discriminative target-specific feature extraction.
- Much faster inference and training convergence speed.
- Simple and generic one-branch tracking framework.
- 1st step 🐾 feature interaction inside the backbone.
- 2nd step 🐾 concatenation-based feature interaction.
  - STARK [ICCV'21], SwinTrack [NeurIPS'22]
- 3rd step 🐾 joint feature extraction and interaction.
- 4th step 🐾 generalized and robust relation modeling.
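The one-stream idea behind the 3rd step can be sketched in a few lines: instead of extracting template and search features in two Siamese branches and correlating them afterwards, the tokens of both images are concatenated and fed through a shared attention backbone, so feature extraction and interaction happen jointly in every layer. Below is a minimal, illustrative NumPy single-head attention layer under assumed token counts and dimensions; it is not the code of any listed tracker (OSTrack, SimTrack, and GRM all use full ViT backbones).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def one_stream_layer(template_tokens, search_tokens, w_q, w_k, w_v):
    """One self-attention layer over the concatenated token sequence.

    Because template and search tokens sit in the same sequence, each
    backbone layer both extracts features and models their relation
    (the "one-stream" idea), rather than correlating two separately
    extracted feature maps at the end.
    """
    x = np.concatenate([template_tokens, search_tokens], axis=0)  # (Nz + Nx, C)
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)       # (Nz + Nx, Nz + Nx)
    return attn @ v                                               # mixed-stream features

rng = np.random.default_rng(0)
C = 16                               # hypothetical channel dimension
template = rng.normal(size=(4, C))   # hypothetical: 4 template patch tokens
search = rng.normal(size=(9, C))     # hypothetical: 9 search-region patch tokens
w_q, w_k, w_v = (rng.normal(size=(C, C)) for _ in range(3))
out = one_stream_layer(template, search, w_q, w_k, w_v)
print(out.shape)  # (13, 16): every token now carries cross-stream information
```

The 4th step (GRM-style generalized relation modeling) can be viewed as masking this attention matrix so that only adaptively selected token pairs interact.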
- GLEE (General Object Foundation Model for Images and Videos at Scale) [paper] [code]
- OmniViD (OmniVid: A Generative Framework for Universal Video Understanding) [paper] [code]
- OmniTracker (OmniTracker: Unifying Object Tracking by Tracking-with-Detection) [paper] [code]
- UNINEXT (Universal Instance Perception as Object Discovery and Retrieval) [paper] [code]
- MITS (Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation) [paper] [code]
- HQTrack (Tracking Anything in High Quality) [paper] [code]
- SAM-Track (Segment and Track Anything) [paper] [code]
- TAM (Track Anything: Segment Anything Meets Videos) [paper] [code]
- AQATrack (Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers) [paper] [code]
- ARTrackV2 (ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe) [paper] [code]
- DiffusionTrack (DiffusionTrack: Point Set Diffusion Model for Visual Object Tracking) [paper] [code]
- HDETrack (Event Stream-Based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline) [paper] [code]
- HIPTrack (HIPTrack: Visual Tracking with Historical Prompts) [paper] [code]
- OneTracker (OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning) [paper] [code]
- QueryNLT (Context-Aware Integration of Language and Visual References for Natural Language Tracking) [paper] [code]
- SDSTrack (SDSTrack: Self-Distillation Symmetric Adapter Learning for Multi-Modal Visual Object Tracking) [paper] [code]
- Un-Track (Single-Model and Any-Modality for Video Object Tracking) [paper] [code]
- Diff-Tracker (Diff-Tracker: Text-to-Image Diffusion Models are Unsupervised Trackers) [paper] [code]
- LoRAT (Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance) [paper] [code]
- ChatTracker (ChatTracker: Enhancing Visual Tracking Performance via Chatting with Multimodal Large Language Model) [paper&review] [code]
- CPDTrack (Beyond Accuracy: Tracking More Like Human via Visual Search) [paper&review] [code]
- DeTrack (DeTrack: In-model Latent Denoising Learning for Visual Object Tracking) [paper&review] [code]
- MemVLT (MemVLT: Vision-Language Tracking with Adaptive Memory-Based Prompts) [paper&review] [code]
- OKTrack (WebUOT-1M: Advancing Deep Underwater Object Tracking with A Million-Scale Benchmark) [paper&review] [code]
- BAT (Bi-Directional Adapter for Multi-Modal Tracking) [paper] [code]
- EVPTrack (Explicit Visual Prompts for Visual Object Tracking) [paper] [code]
- ODTrack (ODTrack: Online Dense Temporal Token Learning for Visual Tracking) [paper] [code]
- STCFormer (Sequential Fusion Based Multi-Granularity Consistency for Space-Time Transformer Tracking) [paper] [code]
- TATrack (Temporal Adaptive RGBT Tracking with Modality Prompt) [paper] [code]
- UVLTrack (Unifying Visual and Vision-Language Tracking via Contrastive Learning) [paper] [code]
- AVTrack (Learning Adaptive and View-Invariant Vision Transformer for Real-Time UAV Tracking) [paper] [code]
- DMTrack (Diffusion Mask-Driven Visual-language Tracking) [paper] [code]
- USTrack (Unified Single-Stage Transformer Network for Efficient RGB-T Tracking) [paper] [code]
- ATTracker (Consistencies are All You Need for Semi-Supervised Vision-Language Tracking) [paper&review] [code]
- CKD (Breaking Modality Gap in RGBT Tracking: Coupled Knowledge Distillation) [paper&review] [code]
- SMAT (Separable Self and Mixed Attention Transformers for Efficient Object Tracking) [paper] [code]
- TaMOs (Beyond SOT: It's Time to Track Multiple Generic Objects at Once) [paper] [code]
- CGDenoiser (Conditional Generative Denoiser for Nighttime UAV Tracking) [paper] [code]
- DaDiff (DaDiff: Domain-Aware Diffusion Model for Nighttime UAV Tracking) [paper] [code]
- LDEnhancer (Enhancing Nighttime UAV Tracking with Light Distribution Suppression) [paper] [code]
- PRL-Track (Progressive Representation Learning for Real-Time UAV Tracking) [paper] [code]
- TDA-Track (TDA-Track: Prompt-Driven Temporal Domain Adaptation for Nighttime UAV Tracking) [paper] [code]
- ABTrack (Adaptively Bypassing Vision Transformer Blocks for Efficient Visual Tracking) [paper] [code]
- ACTrack (ACTrack: Adding Spatio-Temporal Condition for Visual Object Tracking) [paper] [code]
- AFter (AFter: Attention-Based Fusion Router for RGBT Tracking) [paper] [code]
- AMTTrack (Long-Term Frame-Event Visual Tracking: Benchmark Dataset and Baseline) [paper] [code]
- BofN (Predicting the Best of N Visual Trackers) [paper] [code]
- CAFormer (Cross-modulated Attention Transformer for RGBT Tracking) [paper] [code]
- CFBT (Cross Fusion RGB-T Tracking with Bi-Directional Adapter) [paper] [code]
- CompressTracker (General Compression Framework for Efficient Transformer Object Tracking) [paper] [code]
- CRSOT (CRSOT: Cross-Resolution Object Tracking using Unaligned Frame and Event Cameras) [paper] [code]
- CSTNet (Transformer-Based RGB-T Tracking with Channel and Spatial Feature Fusion) [paper] [code]
- DT-Training (Closed-Loop Scaling Up for Visual Object Tracking) [paper&review] [code]
- DyTrack (Exploring Dynamic Transformer for Efficient Object Tracking) [paper] [code]
- eMoE-Tracker (eMoE-Tracker: Environmental MoE-Based Transformer for Robust Event-Guided Object Tracking) [paper] [code]
- ESAT (Enhanced Semantic Alignment in Transformer Tracking via Position Learning and Force-Directed Attention) [paper&review] [code]
- HCTrack (Hybrid Contrastive Transformer for Visual Tracking) [paper&review] [code]
- HiPTrack-MLS (Camouflaged Object Tracking: A Benchmark) [paper] [code]
- LoReTrack (LoReTrack: Efficient and Accurate Low-Resolution Transformer Tracking) [paper] [code]
- MAPNet (Multi-Attention Associate Prediction Network for Visual Tracking) [paper] [code]
- MDETrack (Enhanced Object Tracking by Self-Supervised Auxiliary Depth Estimation Learning) [paper] [code]
- MMMP (From Two Stream to One Stream: Efficient RGB-T Tracking via Mutual Prompt Learning and Knowledge Distillation) [paper] [code]
- MST (Learning Effective Multi-Modal Trackers via Modality-Sensitive Tuning) [paper&review] [code]
- M3PT (Middle Fusion and Multi-Stage, Multi-Form Prompts for Robust RGB-T Tracking) [paper] [code]
- NLMTrack (Enhancing Thermal Infrared Tracking with Natural Language Modeling and Coordinate Sequence Generation) [paper] [code]
- OIFTrack (Optimized Information Flow for Transformer Tracking) [paper] [code]
- PDAT (Progressive Domain Adaptation for Thermal Infrared Object Tracking) [paper] [code]
- PiVOT (Improving Visual Object Tracking through Visual Prompting) [paper] [code]
- PromptTrack (Streaming Spatial-Temporal Prompt Learning for RGB-T Tracking) [paper&review] [code]
- SAMURAI (SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory) [paper] [code]
- SAM2.1++ (A Distractor-Aware Memory for Visual Object Tracking with SAM2) [paper] [code]
- SCANet (RGB-Sonar Tracking Benchmark and Spatial Cross-Attention Transformer Tracker) [paper] [code]
- SeqTrackv2 (Unified Sequence-to-Sequence Learning for Single- and Multi-Modal Visual Object Tracking) [paper] [code]
- SPDAN (BihoT: A Large-Scale Dataset and Benchmark for Hyperspectral Camouflaged Object Tracking) [paper] [code]
- STMT (Transformer RGBT Tracking with Spatio-Temporal Multimodal Tokens) [paper] [code]
- SuperSBT (Correlation-Embedded Transformer Tracking: A Single-Branch Framework) [paper] [code]
- TENet (TENet: Targetness Entanglement Incorporating with Multi-Scale Pooling and Mutually-Guided Fusion for RGB-E Object Tracking) [paper] [code]
- TrackMamba (TrackMamba: Mamba-Transformer Tracking) [paper&review] [code]
- XTrack (Towards a Generalist and Blind RGB-X Tracker) [paper] [code]
- ART (ARKitTrack: A New Diverse Dataset for Tracking Using Mobile RGB-D Data) [paper] [code]
- ARTrack (Autoregressive Visual Tracking) [paper] [code]
- DropTrack (DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks) [paper] [code]
- EMT (Resource-Efficient RGBD Aerial Tracking) [paper] [code]
- GRM (Generalized Relation Modeling for Transformer Tracking) [paper] [code]
- JointNLT (Joint Visual Grounding and Tracking with Natural Language Specification) [paper] [code]
- MAT (Representation Learning for Visual Object Tracking by Masked Appearance Transfer) [paper] [code]
- SeqTrack (SeqTrack: Sequence to Sequence Learning for Visual Object Tracking) [paper] [code]
- SwinV2 (Revealing the Dark Secrets of Masked Image Modeling) [paper] [code]
- TBSI (Bridging Search Region Interaction with Template for RGB-T Tracking) [paper] [code]
- VideoTrack (VideoTrack: Learning to Track Objects via Video Transformer) [paper] [code]
- ViPT (Visual Prompt Multi-Modal Tracking) [paper] [code]
- MixFormerV2 (MixFormerV2: Efficient Fully Transformer Tracking) [paper&review] [code]
- RFGM (Reading Relevant Feature from Global Representation Memory for Visual Object Tracking) [paper&review] [code]
- ZoomTrack (ZoomTrack: Target-Aware Non-Uniform Resizing for Efficient Visual Tracking) [paper&review] [code]
- Aba-ViTrack (Adaptive and Background-Aware Vision Transformer for Real-Time UAV Tracking) [paper] [code]
- AiATrack-360 (360VOT: A New Benchmark Dataset for Omnidirectional Visual Object Tracking) [paper] [code]
- CiteTracker (CiteTracker: Correlating Image and Text for Visual Tracking) [paper] [code]
- DecoupleTNL (Tracking by Natural Language Specification with Long Short-Term Context Decoupling) [paper] [code]
- F-BDMTrack (Foreground-Background Distribution Modeling Transformer for Visual Object Tracking) [paper] [code]
- HiT (Exploring Lightweight Hierarchical Vision Transformers for Efficient Visual Tracking) [paper] [code]
- HRTrack (Cross-Modal Orthogonal High-Rank Augmentation for RGB-Event Transformer-Trackers) [paper] [code]
- ROMTrack (Robust Object Modeling for Visual Tracking) [paper] [code]
- CTTrack (Compact Transformer Tracker with Correlative Masked Modeling) [paper] [code]
- GdaTFT (Global Dilated Attention and Target Focusing Network for Robust Tracking) [paper] [code]
- TATrack (Target-Aware Tracking with Long-Term Context Attention) [paper] [code]
- All-in-One (All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment) [paper] [code]
- UTrack (Unambiguous Object Tracking by Exploiting Target Cues) [paper] [code]
- UPVPT (Robust Tracking via Unifying Pretrain-Finetuning and Visual Prompt Tuning) [paper] [code]
- ConTrack (ConTrack: Contextual Transformer for Device Tracking in X-ray) [paper] [code]
- ClimRT (Continuity-Aware Latent Interframe Information Mining for Reliable UAV Tracking) [paper] [code]
- SGDViT (SGDViT: Saliency-Guided Dynamic vision Transformer for UAV tracking) [paper] [code]
- CDT (Cascaded Denoising Transformer for UAV Nighttime Tracking) [paper] [code]
- FDNT (End-to-End Feature Decontaminated Network for UAV Tracking) [paper] [code]
- ScaleAwareDA (Scale-Aware Domain Adaptation for Robust UAV Tracking) [paper] [code]
- TOTEM (Transparent Object Tracking with Enhanced Fusion Module) [paper] [code]
- TRTrack (Boosting UAV Tracking With Voxel-Based Trajectory-Aware Pre-Training) [paper] [code]
- MSTL (Multi-Source Templates Learning for Real-Time Aerial Tracking) [paper] [code]
- ProContEXT (ProContEXT: Exploring Progressive Context Transformer for Tracking) [paper] [code]
- AViTMP (Exploiting Image-Related Inductive Biases in Single-Branch Visual Tracking) [paper] [code]
- DATr (Leveraging the Power of Data Augmentation for Transformer-Based Tracking) [paper] [code]
- DETRack (Efficient Training for Visual Tracking with Deformable Transformer) [paper] [code]
- HHTrack (HHTrack: Hyperspectral Object Tracking Using Hybrid Attention) [paper] [code]
- IPL (Modality-Missing RGBT Tracking via Invertible Prompt Learning and A High-Quality Data Simulation Method) [paper] [code]
- JN (Towards Efficient Training with Negative Samples in Visual Tracking) [paper] [code]
- LiteTrack (LiteTrack: Layer Pruning with Asynchronous Feature Extraction for Lightweight and Efficient Visual Tracking) [paper] [code]
- MACFT (RGB-T Tracking Based on Mixed Attention) [paper] [code]
- MixViT (MixFormer: End-to-End Tracking with Iterative Mixed Attention) [paper] [code]
- MMTrack (Towards Unified Token Learning for Vision-Language Tracking) [paper] [code]
- MPLT (RGB-T Tracking via Multi-Modal Mutual Prompt Learning) [paper] [code]
- ProFormer (RGBT Tracking via Progressive Fusion Transformer with Dynamically Guided Learning) [paper] [code]
- RTrack (RTrack: Accelerating Convergence for Visual Object Tracking via Pseudo-Boxes Exploration) [paper] [code]
- SAM-DA (SAM-DA: UAV Tracks Anything at Night with SAM-Powered Domain Adaptation) [paper] [code]
- SATracker (Beyond Visual Cues: Synchronously Exploring Target-Centric Semantics for Vision-Language Tracking) [paper] [code]
- TCTrack++ (Towards Real-World Visual Tracking with Temporal Contexts) [paper] [code]
- USTAM (USTAM: Unified Spatial-Temporal Attention MixFormer for Visual Object Tracking) [paper&review] [code]
- CMTR (Cross-Modal Target Retrieval for Tracking by Natural Language) [paper] [code]
- CSWinTT (Transformer Tracking with Cyclic Shifting Window Attention) [paper] [code]
- GTELT (Global Tracking via Ensemble of Local Trackers) [paper] [code]
- MixFormer (MixFormer: End-to-End Tracking with Iterative Mixed Attention) [paper] [code]
- RBO (Ranking-Based Siamese Visual Tracking) [paper] [code]
- SBT (Correlation-Aware Deep Tracking) [paper] [code]
- STNet (Spiking Transformers for Event-Based Single Object Tracking) [paper] [code]
- TCTrack (TCTrack: Temporal Contexts for Aerial Tracking) [paper] [code]
- ToMP (Transforming Model Prediction for Tracking) [paper] [code]
- UDAT (Unsupervised Domain Adaptation for Nighttime Aerial Tracking) [paper] [code]
- SwinTrack (SwinTrack: A Simple and Strong Baseline for Transformer Tracking) [paper&review] [code]
- AiATrack (AiATrack: Attention in Attention for Transformer Visual Tracking) [paper] [code]
- CIA (Hierarchical Feature Embedding for Visual Tracking) [paper] [code]
- DMTracker (Learning Dual-Fused Modality-Aware Representations for RGBD Tracking) [paper] [code]
- HCAT (Efficient Visual Tracking via Hierarchical Cross-Attention Transformer) [paper] [code]
- OSTrack (Joint Feature Learning and Relation Modeling for Tracking: A One-Stream Framework) [paper] [code]
- SimTrack (Backbone is All Your Need: A Simplified Architecture for Visual Object Tracking) [paper] [code]
- VOT2022 (The Tenth Visual Object Tracking VOT2022 Challenge Results) [paper] [code]
- InMo (Learning Target-Aware Representation for Visual Tracking via Informative Interactions) [paper] [code]
- SparseTT (SparseTT: Visual Tracking with Sparse Transformers) [paper] [code]
- TAT (Temporal-Aware Siamese Tracker: Integrate Temporal Context for 3D Object Tracking) [paper] [code]
- HighlightNet (HighlightNet: Highlighting Low-Light Potential Features for Real-Time UAV Tracking) [paper] [code]
- LPAT (Local Perception-Aware Transformer for Aerial Tracking) [paper] [code]
- SiamSA (Siamese Object Tracking for Vision-Based UAM Approaching with Pairwise Scale-Channel Attention) [paper] [code]
- CEUTrack (Revisiting Color-Event Based Tracking: A Unified Network, Dataset, and Metric) [paper] [code]
- FDT (Feature-Distilled Transformer for UAV Tracking) [paper] [code]
- RAMAVT (On Deep Recurrent Reinforcement Learning for Active Visual Tracking of Space Noncooperative Objects) [paper] [code]
- SFTransT (Learning Spatial-Frequency Transformer for Visual Object Tracking) [paper] [code]
- SiamLA (Learning Localization-Aware Target Confidence for Siamese Visual Tracking) [paper] [code]
- SPT (RGBD1K: A Large-scale Dataset and Benchmark for RGB-D Object Tracking) [paper] [code]
- SiamGAT (Graph Attention Tracking) [paper] [code]
- STMTrack (STMTrack: Template-Free Visual Tracking with Space-Time Memory Networks) [paper] [code]
- TMT (Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking) [paper] [code]
- TransT (Transformer Tracking) [paper] [code]
- AutoMatch (Learn to Match: Automatic Matching Network Design for Visual Tracking) [paper] [code]
- DTT (High-Performance Discriminative Tracking with Transformers) [paper] [code]
- DualTFR (Learning Tracking Representations via Dual-Branch Fully Transformer Networks) [paper] [code]
- HiFT (HiFT: Hierarchical Feature Transformer for Aerial Tracking) [paper] [code]
- SAMN (Learning Spatio-Appearance Memory Network for High-Performance Visual Tracking) [paper] [code]
- STARK (Learning Spatio-Temporal Transformer for Visual Tracking) [paper] [code]
- TransT-M (High-Performance Transformer Tracking) [paper] [code]
- VOT2021 (The Ninth Visual Object Tracking VOT2021 Challenge Results) [paper] [code]
- TAPL (TAPL: Dynamic Part-Based Visual Tracking via Attention-Guided Part Localization) [paper] [code]
- TREG (Target Transformed Regression for Accurate Tracking) [paper] [code]
- TrTr (TrTr: Visual Tracking with Transformer) [paper] [code]
- VTT (VTT: Long-Term Visual Tracking with Transformers) [paper] [code]
- HVTrack (3D Single-object Tracking in Point Clouds with High Temporal Variation) [paper] [code]
- CUTrack (Towards Category Unification of 3D Single Object Tracking on Point Clouds) [paper&review] [code]
- M3SOT (M3SOT: Multi-Frame, Multi-Field, Multi-Space 3D Single Object Tracking) [paper] [code]
- SCVTrack (Robust 3D Tracking with Quality-Aware Shape Completion) [paper] [code]
- StreamTrack (Modeling Continuous Motion for 3D Point Cloud Object Tracking) [paper] [code]
- SeqTrack3D (SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud Tracking) [paper] [code]
- EasyTrack (EasyTrack: Efficient and Compact One-Stream 3D Point Clouds Tracker) [paper] [code]
- PillarTrack (PillarTrack: Redesigning Pillar-Based Transformer Network for Single Object Tracking on Point Clouds) [paper] [code]
- PROT3D (GSOT3D: Towards Generic 3D Single Object Tracking in the Wild) [paper] [code]
- SCtrack (Space-Correlated Transformer: Jointly Explore the Matching and Motion Clues in 3D Single Object Tracking) [paper&review] [code]
- CorpNet (Correlation Pyramid Network for 3D Single Object Tracking) [paper] [code]
- CXTrack (CXTrack: Improving 3D Point Cloud Tracking with Contextual Information) [paper] [code]
- MBPTrack (MBPTrack: Improving 3D Point Cloud Tracking with Memory Networks and Box Priors) [paper] [code]
- SyncTrack (Synchronize Feature Extracting and Matching: A Single Branch Framework for 3D Object Tracking) [paper] [code]
- GLT-T (GLT-T: Global-Local Transformer Voting for 3D Single Object Tracking in Point Clouds) [paper] [code]
- OSP2B (OSP2B: One-Stage Point-to-Box Network for 3D Siamese Tracking) [paper] [code]
- PCET (Implicit and Efficient Point Cloud Completion for 3D Single Object Tracking) [paper] [code]
- STTracker (STTracker: Spatio-Temporal Tracker for 3D Single Object Tracking) [paper] [code]
- GLT-T++ (GLT-T++: Global-Local Transformer for 3D Siamese Tracking with Ranking Loss) [paper] [code]
- MCSTN (Multi-Correlation Siamese Transformer Network with Dense Connection for 3D Single Object Tracking) [paper] [code]
- MMF-Track (Multi-Modal Multi-Level Fusion for 3D Single Object Tracking) [paper] [code]
- MTM-Tracker (Motion-to-Matching: A Mixed Paradigm for 3D Single Object Tracking) [paper] [code]
- StreamTrack (Modeling Continuous Motion for 3D Point Cloud Object Tracking) [paper] [code]
- CMT (CMT: Context-Matching-Guided Transformer for 3D Tracking in Point Clouds) [paper] [code]
- SpOT (SpOT: Spatiotemporal Modeling for 3D Object Tracking) [paper] [code]
- STNet (3D Siamese Transformer Network for Single Object Tracking on Point Clouds) [paper] [code]
- OST (OST: Efficient One-stream Network for 3D Single Object Tracking in Point Clouds) [paper] [code]
- PTTR++ (Exploring Point-BEV Fusion for 3D Point Cloud Object Tracking with Transformer) [paper] [code]
- RDT (Point Cloud Registration-Driven Robust Feature Matching for 3D Siamese Object Tracking) [paper] [code]