I have successfully deployed humanaigc lite-avatar, but I'm running into problems when generating videos that use custom human frames or a customized portrait as the avatar together with a background video. The output video is not rendered correctly: I see misalignment, flickering, frame jitter, and improper background compositing. The problem persists even when I match the resolution and FPS of the avatar and the background video.
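For context, this is roughly the normalization I apply before compositing. It is only a sketch of the scale-and-center math I use to fit the portrait onto the background canvas; `fit_into_canvas` is my own helper, not part of lite-avatar:

```python
def fit_into_canvas(src_w, src_h, dst_w, dst_h):
    """Scale (src_w, src_h) to fit inside (dst_w, dst_h) while preserving
    aspect ratio. Returns the scaled size plus centering offsets.
    Dimensions are rounded down to even numbers because yuv420p encoding
    requires even width/height."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w = int(src_w * scale) // 2 * 2
    new_h = int(src_h * scale) // 2 * 2
    off_x = (dst_w - new_w) // 2
    off_y = (dst_h - new_h) // 2
    return new_w, new_h, off_x, off_y

# Example: a 720x1280 portrait placed on a 1080x1920 background
# scales exactly with no padding offsets.
print(fit_into_canvas(720, 1280, 1080, 1920))
```

Even with the portrait scaled and centered this way (and both streams resampled to the same FPS), the artifacts remain, which makes me think the issue is elsewhere in the pipeline.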
I need guidance on:

1. Fixing the rendering issue – are there specific requirements for input resolution, frame format, or pre/post-processing steps to ensure smooth output?
2. Creating live avatars – is there a supported way to generate a live avatar from my own image (e.g., a single portrait driven by webcam pose or audio)? If so, what is the correct pipeline and which scripts should I use?
3. Best practices – are there recommended settings or constraints (FPS, aspect ratio, keypoint detection method) that help avoid visual artifacts?
Any detailed explanation, example commands, or workflow to achieve stable outputs and live avatar creation would be really helpful.