Biao Gong* · Qiyuan Zhang · Shuai Tan · Zheng Zhang · Hengyuan Cao · Jiancheng Pan · Xing Zhu · Yujun Shen

Ant Group

* Project Lead and Correspondence
AGG (AI-Generated Gameverse) is an AI-native game foundation for creation and play.
We envision a unified system and a community where researchers, developers, and players evolve together. We prioritize deep playability over visual fidelity to build a coherent framework for AI-centered games. AGG introduces a new class of experiences that currently includes:
- pAGG (Physics-Driven): Strict physical requirements & constraints (e.g., billiards).
- rAGG (Reaction): Real-time responsiveness & player reaction (e.g., rhythm/dance games).
- sAGG (Interactive-Story): Story experience & AI-guided co-creation.
| pAGG Preview | rAGG Preview | sAGG Preview |
- [2026.02] 🔥 AGG Project Launch! We release the technical report and initial roadmap. Check our Homepage.
- [2026.01] Related works PhysRVG and CoDance released on arXiv.
- [2025.05] Related work Animate-X [Paper][GitHub] accepted by ICLR 2025.
The core gameplay of pAGG is predicated on the consistency of physical laws. In genres ranging from billiards to physics-based puzzlers, the challenge stems from environments governed by intuitive principles of motion, collision, and momentum. Player mastery depends on the ability to predict causal outcomes within this stable framework. Consequently, successful AI generation in this domain demands physically coherent dynamics alongside visual fidelity.
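To make the invariant concrete, here is a minimal sketch (illustrative only, not the AGG implementation) of the kind of physical consistency pAGG demands: a head-on elastic collision between two billiard balls must conserve both momentum and kinetic energy, and a generated rollout that violates these checks would break the gameplay contract.

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Return post-collision velocities for a perfectly elastic 1-D collision."""
    v1_new = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_new = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_new, v2_new

# Cue ball (0.17 kg) at 2 m/s strikes a resting object ball of equal mass.
u1, u2 = elastic_collision_1d(0.17, 2.0, 0.17, 0.0)

# Equal masses swap velocities: the cue ball stops, the object ball moves on.
assert abs(u1 - 0.0) < 1e-9 and abs(u2 - 2.0) < 1e-9

# Momentum before and after must match -- the invariant a generator must respect.
assert abs(0.17 * 2.0 - (0.17 * u1 + 0.17 * u2)) < 1e-9
```

It is exactly this predict-then-verify structure that lets a player build intuition: the same shot, replayed, must produce the same outcome.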
paggvideo1.mp4
The rAGG system animates any character from a source image and a dance video. It supports diverse inputs, from anime characters to real humans, in both solo and group settings. The gameplay follows a classic rhythm format, similar to Taiko no Tatsujin or the QTEs in God of War: players press buttons in response to on-screen cues synchronized with the music, and the system scores each press by the speed and accuracy of the reaction.
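The timing-based scoring can be sketched as follows. This is a hypothetical illustration of rhythm-game judgment, not AGG's actual scoring code; the window sizes are assumptions chosen for readability.

```python
def judge(press_time, beat_time, perfect_window=0.05, good_window=0.12):
    """Map a press's timing error (seconds from the beat) to a judgment and score."""
    error = abs(press_time - beat_time)
    if error <= perfect_window:
        return "PERFECT", 100
    if error <= good_window:
        return "GOOD", 50
    return "MISS", 0

# Press 30 ms after the beat -> inside the perfect window.
assert judge(1.03, 1.00) == ("PERFECT", 100)
# Press 100 ms early -> only good.
assert judge(0.90, 1.00) == ("GOOD", 50)
```

Summing these per-cue scores over a track yields the final result shown to the player.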
🔊 Sound On: the video below demonstrates rhythm gameplay and includes music; please enable audio.
raggvideo1.mp4
raggvideo2.mp4
raggvideo3.mp4
The core appeal of sAGG lies in the dynamic evolution of a story within a given framework, and in the player's ability to steer the narrative or shatter its setting entirely. For instance, players can freely alter the course of history and watch the butterfly effect unfold, or stitch together characters, backgrounds, and items from different cultures, themes, and even dimensions. The system strives to develop the narrative around the core theme while ensuring it eventually converges to a finale. The experience is accompanied by finely crafted scene images and character portraits that update in real time as the dialogue unfolds.
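One hypothetical way to reconcile free play with guaranteed convergence (a sketch for intuition, not AGG's actual planner) is to bias each next plot beat toward the finale as the remaining turn budget shrinks:

```python
import random

def pick_beat(turns_left, max_turns, distance_to_finale, rng):
    """Choose 'converge' (step toward the finale) or 'explore' (free play)."""
    if distance_to_finale >= turns_left:
        return "converge"  # no slack left: every beat must head for the ending
    # Otherwise converge with a probability that rises as the budget runs out.
    p_converge = 1.0 - turns_left / max_turns
    return "converge" if rng.random() < p_converge else "explore"

rng = random.Random(0)
# With no slack remaining, the planner always converges.
assert pick_beat(turns_left=3, max_turns=20, distance_to_finale=3, rng=rng) == "converge"
```

Early in a session the budget is large, so the player can wander freely; late in a session the same rule pulls every thread toward the ending.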
saggvideo1.mp4
saggvideo2.mp4
The Gallery showcases gameplay screenshots from the current stage of the AGG project. While frontend designs vary slightly across titles, most support both Light and Dark themes as well as window resizing. Currently, all games run locally in single-device mode; future updates will introduce online synchronous and asynchronous interaction.
| ![]() | ![]() | ![]() |
| :---: | :---: | :---: |
| pAGG UI | pAGG Gameplay | pAGG Settings |
| ![]() | ![]() | ![]() |
| :---: | :---: | :---: |
| rAGG Selection | rAGG Action | sAGG Narrative |
This repository is released under the Apache-2.0 license as found in the LICENSE file.
If you find this project or the associated papers useful for your research, please cite the following:
Technical Report
@misc{AGG,
title={AGG: AI-Generated Gameverse},
author={Biao Gong and Qiyuan Zhang and Shuai Tan and Zheng Zhang and Hengyuan Cao and Jiancheng Pan and Xing Zhu and Yujun Shen},
year={2026},
archivePrefix={arXiv},
primaryClass={cs.CV}
}

Research Paper
@misc{PhysRVG,
title={PhysRVG: Physics-Aware Unified Reinforcement Learning for Video Generative Models},
author={Qiyuan Zhang and Biao Gong and Shuai Tan and Zheng Zhang and Yujun Shen and Xing Zhu and Yuyuan Li and Kelu Yao and Chunhua Shen and Changqing Zou},
year={2026},
eprint={2601.11087},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.11087},
}
@misc{CoDance,
title={CoDance: An Unbind-Rebind Paradigm for Robust Multi-Subject Animation},
author={Shuai Tan and Biao Gong and Ke Ma and Yutong Feng and Qiyuan Zhang and Yan Wang and Yujun Shen and Hengshuang Zhao},
year={2026},
eprint={2601.11096},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.11096},
}