A minimal demonstration of LoRA on GPT2.
Basic:
python run.py -w --iterations-to-eval 5 --num-training-steps 10 --quick-eval-one-batch

With downloaded model & logging on wandb.ai:
python run.py -n path\to\model -k path\to\tokenizer -u [wandb-username] --iterations-to-eval 5 --num-training-steps 10 --quick-eval-one-batch
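The repo's own training code is not reproduced here, but the core LoRA idea it demonstrates can be sketched in a few lines: freeze the pretrained weight W and learn a low-rank update B·A, scaled by alpha/r, so that only the small adapter matrices are trained. The NumPy sketch below is an illustration of that update rule, not the project's actual implementation; all names (`lora_forward`, `A`, `B`) are hypothetical.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    # Frozen base weight W: (d_out, d_in).
    # Trainable low-rank adapters: A (r, d_in), B (d_out, r).
    # LoRA output: x @ W.T plus the scaled low-rank correction (alpha / r) * x @ A.T @ B.T.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 4
W = rng.standard_normal((d_out, d_in))      # pretrained weight, kept frozen
A = rng.standard_normal((r, d_in)) * 0.01   # small random init
B = np.zeros((d_out, r))                    # zero init: adapter starts as a no-op

x = rng.standard_normal((2, d_in))
# With B initialized to zero, the adapted layer matches the frozen base layer exactly.
assert np.allclose(lora_forward(x, W, A, B, r=r), x @ W.T)
```

Because B starts at zero, training begins from the pretrained model's behavior, and only the r*(d_in + d_out) adapter parameters are updated instead of the full d_out*d_in weight matrix.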