
OpenMachine-ai/mlperf-tools

This repo contains the following tools for running MLPerf benchmarks:

  • eval.py: For the MLPerf Tiny visual wake words (vww) benchmark, this script downloads the dataset from SiLabs and runs both TFLite reference models (int8 and float) on the 1000 images listed in y_labels.csv to measure their accuracy.
  • eval.ipynb: Jupyter notebook generated from eval.py; click here to run it from your browser.
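As a rough illustration of the kind of evaluation eval.py performs, the sketch below parses a label CSV, runs one image through a TFLite interpreter, and computes top-1 accuracy. The function names and the CSV column layout are assumptions for illustration, not the script's actual code; the real y_labels.csv format may differ.

```python
import csv
import numpy as np
# For a real run you would also need:
#   from tensorflow.lite import Interpreter   (or tflite_runtime.interpreter)

def load_labels(csv_path):
    """Parse a label CSV assumed to map image filename -> integer label
    (column layout is an assumption, check the actual y_labels.csv)."""
    with open(csv_path, newline="") as f:
        return {row[0]: int(row[-1]) for row in csv.reader(f)}

def run_tflite(interpreter, image):
    """Run one preprocessed image through a TFLite interpreter and
    return the argmax class index."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], image.astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))

def accuracy(predictions, labels):
    """Fraction of predictions matching the ground-truth labels."""
    correct = sum(int(p == l) for p, l in zip(predictions, labels))
    return correct / len(labels)
```

Comparing the int8 and float models is then just a matter of running the same 1000 images through each interpreter and calling accuracy on both prediction lists.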
