
Three problems #183

@FY-Zhang-G

Description


Problem 1:

```
Traceback (most recent call last):
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/scripts/train_voxcpm_finetune.py", line 670, in <module>
    train(**yaml_args)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/argbind/argbind.py", line 159, in cmd_func
    return func(*cmd_args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/scripts/train_voxcpm_finetune.py", line 263, in train
    outputs = model(
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/model/voxcpm.py", line 258, in forward
    feat_embed = self.feat_encoder(audio_feats)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/modules/locenc/local_encoder.py", line 23, in forward
    x = self.in_proj(x)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype, but got BFloat16 and Float
E0205 08:27:50.146000 898442 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 898553) of binary: /mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/bin/python3.10
Traceback (most recent call last):
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/bin/torchrun", line 6, in <module>
    sys.exit(main())
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/distributed/run.py", line 919, in main
    run(args)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/distributed/run.py", line 910, in run
    elastic_launch(
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

scripts/train_voxcpm_finetune.py FAILED

Failures:
  <NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
  time      : 2026-02-05_08:27:50
  host      : aispeech-data-f3-3f8b1847fffd11f0-all-0
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 898553)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```

Problem 2:

```
(VoxCPM) root@aispeech-data-f3-3f8b1847fffd11f0-all-0:/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM# bash /mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/train_single.sh
/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  WeightNorm.apply(module, name, dim)
Running on device: cuda, dtype: bfloat16
/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/autograd/graph.py:825: UserWarning: cuDNN SDPA backward got grad_output.strides() != output.strides(), attempting to materialize a grad_output with matching strides... (Triggered internally at ../aten/src/ATen/native/cudnn/MHA.cpp:674.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[train] step 0: loss/diff: 0.902030, loss/stop: 0.029272, lr: 0.000000, epoch: 0.000000, grad_norm: 0.284857
[val] step 0: loss/total: 0.926395, loss/diff: 0.906051, loss/stop: 0.020344, log interval: 13.71s
[Audio] Starting audio generation for 2 samples at step 0
[Audio] Loaded reference audio for sample 0: duration=11.95s
[Audio] Generating sample 0 with text: '全诗情感细腻深沉,从一己思念扩展到家国离乱之痛,温柔中藏着沉郁,尽显诗圣的悲悯情怀与细腻笔触。...'
  0%|          | 0/292 [00:00<?, ?it/s]
[Warning] Failed to generate audio for sample 0: Dimension out of range (expected to be in range of [-1, 0], but got -2)
Traceback (most recent call last):
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/scripts/train_voxcpm_finetune.py", line 489, in generate_sample_audio
    generated = unwrapped_model.generate(target_text=text, inference_timesteps=10, cfg_value=2.0)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/model/voxcpm.py", line 336, in generate
    return next(self._generate(*args, streaming=False, **kwargs))
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 36, in generator_context
    response = gen.send(None)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/model/voxcpm.py", line 460, in _generate
    latent_pred, pred_audio_feat = next(inference_result)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/envs/VoxCPM/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 36, in generator_context
    response = gen.send(None)
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/model/voxcpm.py", line 826, in _inference
    lm_hidden = self.base_lm.forward_step(
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/modules/minicpm4/model.py", line 402, in forward_step
    hidden_states = decoder_layer.forward_step(
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/modules/minicpm4/model.py", line 303, in forward_step
    hidden_states = self.self_attn.forward_step(
  File "/mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/modules/minicpm4/model.py", line 211, in forward_step
    attn_output = torch.nn.functional.scaled_dot_product_attention(
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got -2)
```

To address the two problems above:

Problem 1: RuntimeError: mat1 and mat2 must have the same dtype, but got BFloat16 and Float

This is a dtype mismatch. The official code trains under a torch.bfloat16 autocast, but the weights of self.in_proj (a Linear layer) are float32: all parameters default to float32 at model initialization, and from_local() never calls .to(lm_dtype) when loading pretrained weights with training=True. Since the training script runs autocast(dtype=torch.bfloat16), the input audio_feats arrives as BFloat16 while the self.in_proj weights are still Float32, so the matrix multiplication sees mismatched dtypes and crashes.
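A minimal sketch, independent of VoxCPM, that reproduces the same error message (a float32 Linear fed a bfloat16 tensor outside any autocast region):

```python
import torch

lin = torch.nn.Linear(4, 4)                  # parameters default to float32
x = torch.randn(2, 4, dtype=torch.bfloat16)  # e.g. an activation produced under bf16 autocast
lin(x)  # RuntimeError: mat1 and mat2 must have the same dtype, but got BFloat16 and Float
```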
So I modified the from_local() method in voxcpm.py to:

```python
if not training:
    lm_dtype = get_dtype(model.config.dtype)
    model = model.to(lm_dtype)
else:  # training mode
    # New: convert dtypes in training mode too, except for the audio_vae
    lm_dtype = get_dtype(model.config.dtype)
    for name, module in model.named_modules():
        if "audio_vae" not in name and hasattr(module, 'weight'):
            if hasattr(module.weight, 'data'):
                module.weight.data = module.weight.data.to(lm_dtype)
            if hasattr(module, 'bias') and module.bias is not None:
                module.bias.data = module.bias.data.to(lm_dtype)
```

```python
for name, param in model.named_parameters():
    if "audio_vae" in name:  # freeze VAE weights
        param.requires_grad = False
        continue
    if lora_config is not None:
        if "lora" not in name:  # freeze non-LoRA weights
            param.requires_grad = False
```
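One note on this patch: casting only the .weight/.bias data in place skips buffers and any parameters not named weight or bias. If the VAE is reachable as a single submodule (the audio_vae prefix in the parameter names suggests model.audio_vae, but that is an assumption), a shorter and more thorough sketch would be:

```python
# Sketch, assuming the VAE lives at `model.audio_vae`:
# cast the whole model (parameters and buffers), then restore the VAE to float32.
lm_dtype = get_dtype(model.config.dtype)
model = model.to(lm_dtype)
model.audio_vae.to(torch.float32)
```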

Problem 1 solved.
But I still don't understand why the official code has such an oversight. Am I doing something wrong?

Problem 2: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got -2)
This occurs inside generate_sample_audio() when it calls model.generate(). My best guess is that some parts of the model (for example the KV cache) need special handling.
To focus on fixing training first, I temporarily disabled audio generation by setting, in the config file:

```yaml
valid_interval: 999999  # temporarily disable validation
val_manifest: ""        # leave empty so no validation set is loaded
```

Once the training dtype issue was fixed, I planned to come back to the inference dimension problem.
Meanwhile, in train_voxcpm_finetune.py, I changed

```python
if val_loader is not None and (step % valid_interval == 0 or step == num_iters - 1):
    validate(model, val_loader, batch_processor, accelerator, tracker, lambdas,
             writer=writer, step=step, val_ds=val_ds, audio_vae=audio_vae_for_gen,
             sample_rate=sample_rate, val_texts=val_texts, tokenizer=tokenizer,
             valid_interval=valid_interval)
```

to

```python
if val_loader is not None and step > 0 and (step % valid_interval == 0 or step == num_iters - 1):
    validate(model, val_loader, batch_processor, accelerator, tracker, lambdas,
             writer=writer, step=step, val_ds=val_ds, audio_vae=audio_vae_for_gen,
             sample_rate=sample_rate, val_texts=val_texts, tokenizer=tokenizer,
             valid_interval=valid_interval)
```

The added step > 0 condition skips validation at the very first step.

With these changes, training ran:

```
[train] step 0: loss/diff: 0.902044, loss/stop: 0.029702, lr: 0.000000, epoch: 0.000000, grad_norm: 0.284000
[train] step 20: loss/diff: 0.913345, loss/stop: 0.031759, lr: 0.000001, epoch: 0.194411, grad_norm: 0.277544, log interval: 42.98s
[train] step 40: loss/diff: 0.895163, loss/stop: 0.016512, lr: 0.000001, epoch: 0.388821, grad_norm: 0.185361, log interval: 40.72s
[train] step 60: loss/diff: 0.901952, loss/stop: 0.027944, lr: 0.000002, epoch: 0.583232, grad_norm: 0.319881, log interval: 24.06s
[train] step 80: loss/diff: 0.898902, loss/stop: 0.005224, lr: 0.000003, epoch: 0.777643, grad_norm: 0.148175, log interval: 22.02s
[train] step 100: loss/diff: 0.910493, loss/stop: 0.003644, lr: 0.000003, epoch: 0.972053, grad_norm: 0.117063, log interval: 17.00s
.................
```

Then I used the following script for inference:
```python
#!/usr/bin/env python3
"""
Run LoRA inference directly with VoxCPMModel (bypassing the VoxCPM wrapper).
"""
import argparse
import json
import sys
from pathlib import Path

import torch
import soundfile as sf

from voxcpm.model.voxcpm import VoxCPMModel, LoRAConfig


def parse_args():
    parser = argparse.ArgumentParser("VoxCPM direct LoRA inference")
    parser.add_argument("--lora_ckpt", type=str, required=True, help="LoRA checkpoint directory")
    parser.add_argument("--base_model", type=str, default="", help="Base model path (optional; defaults to the value in lora_config.json)")
    parser.add_argument("--text", type=str, required=True, help="Text to synthesize")
    parser.add_argument("--prompt_audio", type=str, default="", help="Reference audio (optional)")
    parser.add_argument("--prompt_text", type=str, default="", help="Transcript of the reference audio (optional)")
    parser.add_argument("--output", type=str, default="lora_output.wav", help="Output audio path")
    parser.add_argument("--cfg_value", type=float, default=2.0, help="CFG strength")
    parser.add_argument("--inference_timesteps", type=int, default=10, help="Number of diffusion steps")
    parser.add_argument("--max_len", type=int, default=600, help="Maximum generation length")
    return parser.parse_args()


def main():
    args = parse_args()

    # 1. Load the LoRA config
    ckpt_dir = Path(args.lora_ckpt)
    lora_config_path = ckpt_dir / "lora_config.json"
    if not lora_config_path.exists():
        raise FileNotFoundError(f"lora_config.json not found: {lora_config_path}")

    with open(lora_config_path, "r", encoding="utf-8") as f:
        lora_info = json.load(f)

    pretrained_path = args.base_model if args.base_model else lora_info.get("base_model")
    if not pretrained_path:
        raise ValueError("A base_model path must be provided")

    lora_cfg_dict = lora_info.get("lora_config", {})
    lora_cfg = LoRAConfig(**lora_cfg_dict)

    print(f"[1/4] Loading config:", file=sys.stderr)
    print(f"  Base model: {pretrained_path}", file=sys.stderr)
    print(f"  LoRA r={lora_cfg.r}, alpha={lora_cfg.alpha}", file=sys.stderr)

    # 2. Load the model (with LoRA structure)
    print(f"\n[2/4] Loading model...", file=sys.stderr)
    model = VoxCPMModel.from_local(
        pretrained_path,
        optimize=True,         # enable optimizations
        training=False,        # inference mode
        lora_config=lora_cfg,  # initialize LoRA layers
    )

    # 3. Load the LoRA weights
    print(f"[3/4] Loading LoRA weights: {ckpt_dir}", file=sys.stderr)
    loaded_keys, skipped_keys = model.load_lora_weights(str(ckpt_dir))
    print(f"  Loaded {len(loaded_keys)} parameters", file=sys.stderr)
    if skipped_keys:
        print(f"  Skipped {len(skipped_keys)} parameters", file=sys.stderr)

    # 4. Inference
    print(f"\n[4/4] Running inference...", file=sys.stderr)
    print(f"  Text: {args.text[:50]}...", file=sys.stderr)

    with torch.inference_mode():
        if args.prompt_audio and args.prompt_text:
            print(f"  Using reference audio: {args.prompt_audio}", file=sys.stderr)
            audio = model.generate(
                target_text=args.text,
                prompt_text=args.prompt_text,
                prompt_wav_path=args.prompt_audio,
                max_len=args.max_len,
                inference_timesteps=args.inference_timesteps,
                cfg_value=args.cfg_value,
            )
        else:
            audio = model.generate(
                target_text=args.text,
                max_len=args.max_len,
                inference_timesteps=args.inference_timesteps,
                cfg_value=args.cfg_value,
            )

    # 5. Save the audio
    audio_np = audio.cpu().numpy().flatten()
    out_path = Path(args.output)
    out_path.parent.mkdir(parents=True, exist_ok=True)

    sf.write(str(out_path), audio_np, model.sample_rate)
    duration = len(audio_np) / model.sample_rate
    print(f"\n✓ Inference finished!", file=sys.stderr)
    print(f"  Output: {out_path}", file=sys.stderr)
    print(f"  Duration: {duration:.2f}s", file=sys.stderr)


if __name__ == "__main__":
    main()
```
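For reference, I invoke it roughly like this (the script filename and paths here are placeholders):

```
python lora_infer.py \
    --lora_ckpt /path/to/lora_ckpt \
    --text "..." \
    --output lora_output.wav
```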
Inference then failed again. In voxcpm/modules/minicpm4/model.py, the MiniCPMAttention.forward_step method builds a 1-D attn_mask (shape [seq_len]) and passes it to scaled_dot_product_attention, while the current query, key, and value tensors are all 4-D ([batch, heads, seq, dim]). With 4-D inputs, PyTorch's scaled_dot_product_attention cannot directly broadcast the 1-D boolean mask, which raises IndexError: Dimension out of range.
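A minimal sketch of that failure mode (observed on the PyTorch build from the traceback above; other versions may broadcast the 1-D mask without complaint):

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 1, 64)   # [batch, heads, q_len, head_dim], as in forward_step
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)
mask = torch.arange(16) <= 7   # 1D boolean mask of shape [kv_len]

# With 4D q/k/v and a 1D boolean mask this raises:
# IndexError: Dimension out of range (expected to be in range of [-1, 0], but got -2)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```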

So in /mnt/dataset/aispeech-asr/users/zhangfangyuan/model/VoxCPM/script/VoxCPM/src/voxcpm/modules/minicpm4/model.py I located the forward_step method of the MiniCPMAttention class.

There I changed

```python
attn_mask = torch.arange(key_cache.size(2), device=key_cache.device) <= position_id
```

to

```python
# --- fix start ---
# Build the 1D mask
attn_mask_1d = torch.arange(key_cache.size(2), device=key_cache.device) <= position_id

# Expand the mask to 4D to match query_states: [batch_size, num_heads, 1, max_len]
# query_states shape: [bsz, num_heads, 1, head_dim]
bsz, num_heads, _, _ = query_states.shape
attn_mask = attn_mask_1d.view(1, 1, 1, -1).expand(bsz, num_heads, 1, -1)
# --- fix end ---
```
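Since scaled_dot_product_attention only requires attn_mask to be broadcastable to [bsz, num_heads, q_len, kv_len], the .expand(...) is not strictly necessary; attn_mask_1d.view(1, 1, 1, -1) alone should also be accepted. The expand just makes the intended shape explicit.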

After that, both training and inference ran successfully.

And that brings me to problem 3.

In the audio I synthesize, the timbre often jumps: the audio for one sentence may be a male voice, and the next sentence suddenly a female voice?!

What should I do now?
