Tensor Size Mismatch When Training a Technique Recognizer #18

@Artanisax

Description

Hi, thanks for the wonderful datasets and downstream task experiments.

I've been trying to train a Technique Recognizer following the instructions in the repository. However, I keep receiving errors like:

Traceback (most recent call last):
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/tasks/run.py", line 19, in <module>
    run_task()
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/tasks/run.py", line 14, in run_task
    task_cls.start()
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/utils/commons/base_task.py", line 230, in start
    trainer.fit(cls)
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/utils/commons/trainer.py", line 122, in fit
    self.run_single_process(self.task)
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/utils/commons/trainer.py", line 186, in run_single_process
    self.train()
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/utils/commons/trainer.py", line 286, in train
    pbar_metrics, tb_metrics = self.run_training_batch(batch_idx, batch)
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/utils/commons/trainer.py", line 332, in run_training_batch
    output = task_ref.training_step(*args)
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/utils/commons/base_task.py", line 106, in training_step
    loss_ret = self._training_step(sample, batch_idx, optimizer_idx)
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/research/singtech/te_task.py", line 215, in _training_step
    loss_output, _ = self.run_model(sample)
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/research/singtech/te_task.py", line 238, in run_model
    self.add_tech_group_loss(output['tech_logits'], techs, tech_ids, losses)
  File "/data1/cse12110524/SSC/methods/GTSinger/Tech-Recognition/research/singtech/te_task.py", line 249, in add_tech_group_loss
    tech_losses_i = F.binary_cross_entropy_with_logits(tech_logits[b_idx, :, tech_ids[b_idx]], techs[b_idx, :, tech_ids[b_idx]], reduction='none')  # [T, len(tech_ids)]
  File "/home/cse12110524/miniconda3/envs/gttr/lib/python3.9/site-packages/torch/nn/functional.py", line 3193, in binary_cross_entropy_with_logits
    raise ValueError(f"Target size ({target.size()}) must be the same as input size ({input.size()})")
ValueError: Target size (torch.Size([113, 2])) must be the same as input size (torch.Size([106, 2]))

It seems that the model outputs are sometimes shorter than the targets. I went back and checked, and no errors were reported during preprocessing. Do you have any idea how to solve this problem? Thank you again!
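For reference, here is a minimal sketch that reproduces the shape check behind this ValueError, plus one common workaround: clipping both tensors to the shorter length along the time axis before computing the loss. The tensor shapes are taken from the traceback above; the clipping step is only an assumption on my part, not the repository's official fix, and it would mask rather than cure a frame-count mismatch introduced during preprocessing or alignment.

```python
import torch
import torch.nn.functional as F

# Reproduce the mismatch: binary_cross_entropy_with_logits requires
# input and target to have exactly the same shape.
logits = torch.randn(106, 2)  # model output, T_pred = 106 frames
target = torch.rand(113, 2)   # ground-truth techs, T_gt = 113 frames

try:
    F.binary_cross_entropy_with_logits(logits, target, reduction='none')
except ValueError as e:
    print(e)  # Target size (...) must be the same as input size (...)

# Hypothetical workaround (assumption, not the repo's fix): trim both
# tensors to the minimum time length before computing the loss.
T = min(logits.size(0), target.size(0))
loss = F.binary_cross_entropy_with_logits(
    logits[:T], target[:T], reduction='none'
)
print(loss.shape)  # torch.Size([106, 2])
```

If the length gap comes from inconsistent frame counts between the extracted mel features and the technique labels, fixing the preprocessing so both use the same frame count would be the cleaner solution than trimming at loss time.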
