Hi,
It seems that, by default, the sentence order prediction (SOP) weights of ALBERT models are randomly initialized (huggingface/transformers#12196). Did you use a custom export of the ALBERT model for your project? The logs below confirm the random initialization of the ALBERT SOP head when I run `eval/coherence.py`.
```
(tables-venv) kalpesh@node105:CoRPG$ python eval/coherence.py --coh --pretrain_model albert-xxlarge-v2 --text_file abc.txt
Some weights of AlbertForPreTraining were not initialized from the model checkpoint at albert-xxlarge-v2 and are newly initialized: ['sop_classifier.classifier.weight', 'sop_classifier.classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 54.82it/s]
COH-p: 0.327211 COH: 0.000000
(tables-venv) kalpesh@node105:CoRPG$
```
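For reference, here is a minimal sketch of how one can verify the random initialization independently of the eval script: load `AlbertForPreTraining` twice and compare the SOP classifier weights; if they differ between loads, they were freshly initialized rather than restored from the checkpoint. This uses `albert-base-v2` to keep the download small, assuming its checkpoint behaves the same as `albert-xxlarge-v2` in this respect.

```python
import torch
from transformers import AlbertForPreTraining

# Load the same checkpoint twice; pretrained weights are deterministic,
# but randomly initialized heads will differ between the two loads.
m1 = AlbertForPreTraining.from_pretrained("albert-base-v2")
m2 = AlbertForPreTraining.from_pretrained("albert-base-v2")

sop_same = torch.equal(
    m1.sop_classifier.classifier.weight,
    m2.sop_classifier.classifier.weight,
)
emb_same = torch.equal(
    m1.albert.embeddings.word_embeddings.weight,
    m2.albert.embeddings.word_embeddings.weight,
)
print("SOP head identical across loads:", sop_same)       # random init if False
print("Embeddings identical across loads:", emb_same)     # True: from checkpoint
```

If the embeddings match across loads but the SOP head does not, the head was not part of the saved checkpoint.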