    Repositories list

    • BeTTER

      Public
      Unmasking the Illusion of Embodied Reasoning in Vision-Language-Action Models
      MIT License
      Updated Apr 21, 2026
    • This repository provides the BeingBeyond D1 SDK, example Python scripts, and basic guidance for environment setup and common first-time troubleshooting.
      Python
      MIT License
      Updated Apr 14, 2026
    • Being-H

      Public
      Being-H is BeingBeyond's family of human-centric embodied foundation models.
      Python
      Apache License 2.0
      Updated Apr 13, 2026
    • Being-H0

      Public
      Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos
      Python
      MIT License
      Updated Apr 2, 2026
    • PTR

      Public
      Conservative Offline Robot Policy Learning via Posterior-Transition Reweighting
      Updated Mar 18, 2026
    • Universal Dexterous Functional Grasping via Demonstration-Editing Reinforcement Learning (CVPR 2026)
      Python
      Updated Mar 8, 2026
    • .github

      Public
      Updated Mar 8, 2026
    • JALA

      Public
      Joint-Aligned Latent Action: Towards Scalable VLA Pretraining in the Wild (CVPR 2026)
      Updated Mar 5, 2026
    • DemoGrasp

      Public
      DemoGrasp: Universal Dexterous Grasping from a Single Demonstration (ICLR 2026)
      Python
      Updated Feb 14, 2026
    • FAST

      Public
      General Humanoid Whole-Body Control via Pretraining and Rapid Adaptation
      Updated Feb 13, 2026
    • Rethinking Visual-Language-Action Model Scaling: Alignment, Mixture, and Regularization
      Python
      Apache License 2.0
      Updated Feb 10, 2026
    • UniTacHand: Unified Spatio-Tactile Representation for Human-to-Dexterous-Hand Skill Transfer
      Updated Dec 25, 2025
    • Being-VL-0.5: Unified Multimodal Understanding via Byte-Pair Visual Encoding (ICCV 2025, Highlight)
      Python
      MIT License
      Updated Dec 22, 2025
    • VIPA-VLA

      Public
      Spatial-Aware VLA Pretraining through Visual-Physical Alignment from Human Videos (CVPR 2026)
      Updated Dec 16, 2025
    • DiG-Flow

      Public
      DiG-Flow: Discrepancy-Guided Flow Matching for Robust VLA Models
      Updated Dec 2, 2025
    • OpenMMEgo

      Public
      OpenMMEgo: Enhancing Egocentric Understanding for LMMs with Open Weights and Data (NeurIPS 2025)
      MIT License
      Updated Oct 24, 2025
    • DemoHLM

      Public
      DemoHLM: From One Demonstration to Generalizable Humanoid Loco-Manipulation
      Updated Oct 14, 2025
    • BumbleBee

      Public
      From Experts to a Generalist: Toward General Whole-Body Control for Humanoid Robots (NeurIPS 2025, Spotlight)
      Updated Sep 28, 2025
    • Being-M0.5: A Real-Time Controllable Vision-Language-Motion Model (ICCV 2025)
      Updated Sep 4, 2025
    • From Pixels to Tokens: Byte-Pair Encoding on Quantized Visual Modalities (ICLR 2025)
      MIT License
      Updated Jul 12, 2025
    • Being-0

      Public
      Being-0: A Humanoid Robotic Agent with Vision-Language Models and Modular Skills
      Updated Jun 19, 2025
    • RLPF

      Public
      RL from Physical Feedback: Aligning Large Motion Models with Humanoid Control
      Updated Jun 17, 2025
    • MEgoHand

      Public
      MEgoHand: Multimodal Egocentric Hand-Object Interaction Motion Generation (NeurIPS 2025)
      Updated May 26, 2025
    • Jaeger

      Public
      Updated May 22, 2025
    • Being-M0

      Public
      Scaling Motion Generation Model with Million-Level Human Motions (ICML 2025)
      Updated May 14, 2025