It would be great if you could provide a meta.json file that identifies the evaluation set needed to reproduce Table 2 from the paper. I assumed it was the 'Final HR' set, which has 1,959 samples; however, 'Section 4.1 Experiment Setting' says: "We evaluate RORem alongside other competing methods using two test sets, which have the same image scenes but under two resolutions: 512 × 512 and 1024 × 1024. Both test sets have 500 pairs of original images and their corresponding masks."
If these evaluation samples are contained in RORem&RORD, Mulan, or Final HR, it would be convenient to have a meta.json containing a triplet of file names for each sample. If they are not contained in RORem&RORD, Mulan, or Final HR, it would be great to release them as a separate set.