Machine Learning Model Server for my Todoist to Reclaim Webhook
This code serves a fine-tuned inference model based on google/flan-t5-small and translates human-spoken times into the format that the Reclaim.ai API expects to receive.
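To illustrate the kind of mapping the model is fine-tuned to perform, here is a rule-based stand-in. This is only a hypothetical sketch: the real project uses the model rather than rules, and the exact output format Reclaim.ai expects is an assumption here.

```python
import re

def spoken_to_24h(phrase: str) -> str:
    """Hypothetical illustration of the model's task: take a spoken time
    phrase and return a machine-readable 24-hour HH:MM string."""
    match = re.search(r"(\d{1,2})(?::(\d{2}))?\s*(am|pm)", phrase.lower())
    if not match:
        raise ValueError(f"unrecognized time phrase: {phrase!r}")
    hour = int(match.group(1)) % 12
    minute = int(match.group(2) or 0)
    if match.group(3) == "pm":
        hour += 12
    return f"{hour:02d}:{minute:02d}"

print(spoken_to_24h("meet me at 3pm"))    # 15:00
print(spoken_to_24h("call at 11:45 am"))  # 11:45
```

A rule-based parser like this breaks down quickly on free-form phrasing ("quarter to six tomorrow evening"), which is the motivation for fine-tuning a language model instead.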
The /utils directory contains a Python script that auto-generates the training data used to fine-tune the model. I started with Google's Flan-T5 Small.
Clone the repo and cd into ./human-time-ai-model/utils.
Run python3 dataset_generator.py --samples ### to generate a dataset. By default, the output is written to a file called spoken_time_data.jsonl.
Run the command again with --output spoken_time_data_validation.jsonl to generate a validation set.
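The generated JSONL file presumably holds one training pair per line. The sketch below shows how such a file is written and read back; the field names ("input", "target") and the sample values are assumptions for illustration, not necessarily what dataset_generator.py emits.

```python
import json

# Hypothetical training pairs: one JSON object per line (JSON Lines format).
pairs = [
    {"input": "half past nine in the morning", "target": "09:30"},
    {"input": "quarter to six pm", "target": "17:45"},
]
with open("spoken_time_data.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")

# Reading it back: each line is an independent JSON document.
with open("spoken_time_data.jsonl") as f:
    records = [json.loads(line) for line in f]
print(records[0]["target"])  # 09:30
```

JSONL is a convenient fit here because fine-tuning pipelines can stream it line by line without loading the whole dataset into memory.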
You can then use the training.py script to run the fine-tuning.
Download the Docker image from ... (not yet available).
You can run a one-shot inference with the test_model.py script in utils. Pass in your prompt using the --input "YOUR INPUT HERE" flag.
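The flag handling in test_model.py might look something like the argparse sketch below. Only the --input flag comes from the source; everything else (the parser description, the model-loading step being stubbed out) is an assumption.

```python
import argparse

def parse_args(argv=None):
    # Minimal CLI sketch: a single required --input flag, as described above.
    parser = argparse.ArgumentParser(
        description="One-shot inference against the fine-tuned model"
    )
    parser.add_argument("--input", required=True,
                        help="Spoken time phrase to translate")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args(["--input", "tomorrow at 3pm"])
    # The real script would tokenize args.input and run the model here.
    print(f"prompt: {args.input}")
```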
This is designed to be used in conjunction with the webhook application.