# Machine Learning Model Server for my Todoist to Reclaim Webhook

## Purpose

This code serves a fine-tuned inference model based on `google/flan-t5-small`, translating human time input into the format the Reclaim.ai API expects to receive.
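To illustrate the task (this is not the repo's model, just a rule-based sketch of the phrase-to-duration mapping the model learns; the `"1h30m"`-style target format is an assumption, not confirmed against the Reclaim.ai API):

```python
import re

# Hypothetical word-to-number table for the handful of phrases this sketch
# handles; the real model generalizes far beyond this.
WORD_NUMS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "ten": 10, "fifteen": 15, "twenty": 20, "thirty": 30,
    "an": 1, "a": 1,
}

def to_minutes(phrase: str) -> int:
    """Very rough parser: finds '<number> hour(s)' and '<number> minute(s)'."""
    phrase = phrase.lower()
    total = 0
    for num, unit in re.findall(r"(\d+|[a-z]+)\s+(hours?|minutes?)", phrase):
        value = int(num) if num.isdigit() else WORD_NUMS.get(num, 0)
        total += value * (60 if unit.startswith("hour") else 1)
    if "and a half" in phrase and "hour" in phrase:
        total += 30
    return total

def to_reclaim(phrase: str) -> str:
    # Assumed compact duration format, e.g. "1h30m".
    h, m = divmod(to_minutes(phrase), 60)
    return (f"{h}h" if h else "") + (f"{m}m" if m else "") or "0m"
```

For example, `to_reclaim("an hour and a half")` yields `"1h30m"`; the fine-tuned model replaces this brittle rule set with learned generalization.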

## Methodology

The `/utils` directory has a Python script to auto-generate the training data used to fine-tune the model. I started with Google's Flan-T5 Small.
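A minimal sketch of what auto-generating (spoken phrase → normalized target) pairs as JSONL could look like. The field names (`"input"`/`"target"`) and the `"90m"` target format are assumptions; the repo's `dataset_generator.py` may differ.

```python
import json
import random

# Small phrase tables; a real generator would cover many more templates.
HOUR_WORDS = {0: "", 1: "one hour", 2: "two hours", 3: "three hours"}
MINUTE_WORDS = {0: "", 15: "fifteen minutes", 30: "thirty minutes",
                45: "forty five minutes"}

def make_sample(rng: random.Random) -> dict:
    hours = rng.choice(list(HOUR_WORDS))
    minutes = rng.choice(list(MINUTE_WORDS))
    if hours == 0 and minutes == 0:
        minutes = 30  # avoid an empty phrase
    parts = [p for p in (HOUR_WORDS[hours], MINUTE_WORDS[minutes]) if p]
    return {"input": " and ".join(parts), "target": f"{hours * 60 + minutes}m"}

def write_dataset(path: str, samples: int, seed: int = 0) -> None:
    # One JSON object per line, matching the .jsonl extension used by the repo.
    rng = random.Random(seed)
    with open(path, "w") as f:
        for _ in range(samples):
            f.write(json.dumps(make_sample(rng)) + "\n")
```

Seeding the generator differently (or re-running with a different `--output` path, as the repo does) gives an independent validation split.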

## Use

### Training the Model

Clone the repo and `cd ./human-time-ai-model/utils`. Run `python3 dataset_generator.py --samples ###` to generate a dataset; by default, it will output to a file called `spoken_time_data.jsonl`. Run the command again with `--output spoken_time_data_validation.jsonl` to generate a validation set.

You can then use the `training.py` script to run the fine-tuning.

### Running the Model Server

Download the Docker image from ... (not yet available). You can run a one-shot of the inference model with the `test_model.py` script in `utils`; pass in your prompt using the `--input "YOUR INPUT HERE"` flag.

This is designed to be used in conjunction with the webhook application.
