Adaptive Vision Transformer for efficient image classification, implementing dynamic token sparsification to reduce computational costs while maintaining accuracy.
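Dynamic token sparsification can be sketched as keeping only the highest-scoring tokens at an intermediate layer. The sketch below is illustrative, not the repository's actual implementation; `sparsify_tokens` and its importance scores are hypothetical stand-ins for attention-derived token importance.

```python
def sparsify_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the top fraction of tokens ranked by importance score.

    tokens: list of token embeddings (any objects)
    scores: one importance score (float) per token
    keep_ratio: fraction of tokens to retain (at least one survives)
    """
    assert len(tokens) == len(scores)
    k = max(1, int(len(tokens) * keep_ratio))
    # Rank indices by descending score, take the top-k, then restore
    # the original sequence order for the surviving tokens.
    top = sorted(sorted(range(len(tokens)), key=lambda i: -scores[i])[:k])
    return [tokens[i] for i in top]
```

In a real Vision Transformer the scores would typically come from attention weights or a learned predictor, and pruning would operate on a batched tensor rather than a Python list.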
Updated Apr 3, 2025 - Python
🔧 Recurrent-Depth Transformer Research Lab — LTI-stable looped inference, switchable MLA/GQA attention, MoE routing & adaptive halting (ACT). Independent research, not affiliated with Anthropic.
A modular PyTorch framework for researching recurrent-depth transformer architectures and efficient sequence modeling.
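The adaptive halting (ACT) component mentioned above can be sketched as a loop that accumulates per-step halting probabilities and stops once they reach 1 - epsilon, weighting each step's state by its halting probability. This is a minimal scalar sketch under assumed interfaces (`step_fn`, `halt_fn` are hypothetical placeholders), not the repository's API.

```python
def act_loop(state, step_fn, halt_fn, epsilon=0.01, max_steps=10):
    """ACT-style adaptive halting on a scalar state.

    Runs step_fn until the cumulative halting probability from
    halt_fn reaches 1 - epsilon, returning the halting-probability-
    weighted mixture of states and the number of steps taken.
    """
    total_p = 0.0   # cumulative halting probability so far
    mixture = 0.0   # probability-weighted sum of intermediate states
    for n in range(max_steps):
        state = step_fn(state)
        p = halt_fn(state)
        if total_p + p >= 1.0 - epsilon or n == max_steps - 1:
            # The remainder 1 - total_p is assigned to the final step.
            mixture += (1.0 - total_p) * state
            return mixture, n + 1
        total_p += p
        mixture += p * state
    return mixture, max_steps
```

In the recurrent-depth setting, `step_fn` would be one pass through the shared transformer block and `halt_fn` a learned sigmoid head; the same accumulation logic applies per token over batched tensors.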