Evaluation of LLMs for video quality estimation using only metadata.
The results folder stores all collected LLM outputs, e.g. the responses for all commands provided in chatgpt_api_cmds.
All Jupyter notebooks are used for the plots, tables, and evaluations in the paper; they should run out of the box.
allmodels.csv stores the predictions of all LLMs.
To run run_ollama.py, you need Ollama and Python 3 installed.
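A minimal sketch of how such an Ollama-based prediction could look (the model name, prompt wording, and metadata fields below are illustrative assumptions, not the exact ones used in run_ollama.py):

```python
import json
import urllib.request

# Default local Ollama REST endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model, metadata):
    # Assemble a single-shot prompt from a video-metadata dict;
    # the wording and metadata keys are illustrative assumptions.
    prompt = (
        "Estimate the video quality on a 1-5 MOS scale, "
        "given only this metadata:\n" + json.dumps(metadata)
    )
    return {"model": model, "prompt": prompt, "stream": False}


def query_ollama(model, metadata):
    # POST the payload to the local Ollama server and return the answer text.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, metadata)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server with the model pulled):
# meta = {"codec": "h264", "bitrate_kbps": 1500, "resolution": "1920x1080", "fps": 30}
# print(query_ollama("llama3", meta))
```

The non-streaming request ("stream": False) returns one JSON object whose "response" field holds the full answer, which keeps the parsing trivial.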
Furthermore, the *.sh files require curl, jq, and Linux (or a similar system).
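As a sketch of the curl-and-jq pattern the *.sh scripts rely on, the following extracts the answer text from a canned Gemini-style JSON response; the response shape is an assumption based on the public generateContent format, and no network call is made here:

```shell
#!/bin/sh
# Canned response in the shape returned by Gemini's generateContent endpoint
# (assumed structure; the real scripts receive this from curl).
response='{"candidates":[{"content":{"parts":[{"text":"predicted MOS: 3.8"}]}}]}'

# jq pulls the answer text out of the nested JSON (-r prints it raw, without quotes).
echo "$response" | jq -r '.candidates[0].content.parts[0].text'
```

In the real scripts the canned string would be replaced by a curl POST to the API, with the key inserted as described below.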
You need to change the API key in the following scripts:
chatgpt_api.py
deepseek_api.py
gemini.sh
gemini_flash_light.sh
If you use this software in your research, please include a link to the repository and reference the following paper.
@inproceedings{goering2025llm,
title={Exploiting LLMs for meta-data-based video quality prediction},
author={Steve G\"oring and Rakesh Rao and Alexander Raake},
booktitle={27th IEEE International Symposium on Multimedia (IEEE ISM)},
year={2025}
}

If you like the software that I develop and contribute to, you can donate a ☕.
Because ☕ is a fundamental source of energy and motivation 😄.