MLBenchmark is a toolkit that lets you conveniently benchmark various machine learning methods on different data modalities.
It can be used by ML engineers and scientists developing their own methods, as well as by regular users who want to test different machine learning scenarios.
This project is under active development; new tasks and ML methods will be added soon.
Currently, the only supported task is tabular classification, and the only supported ML algorithms are those that ship with AutoML tools, specifically AutoGluon and H2O.
- Clone the project.
- Initialize the project with `uv init` and create a virtual environment with `uv venv -p 3.10`.
- Install dependencies with `uv sync`. For a CPU-only installation, run `uv sync --extra cpu`.
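Put together, setup from inside the cloned project directory looks roughly like this (a sketch, assuming `uv` itself is already installed):

```bash
uv init            # initialize the project
uv venv -p 3.10    # create a Python 3.10 virtual environment
uv sync            # install dependencies; use `uv sync --extra cpu` for CPU-only
```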
Running on benchmarking datasets.
```python
from core.api import MLBenchmark
from data.repository import OpenMLDatasetRepository

# Load the datasets of OpenML benchmark suite 271.
# WARNING: this OpenML benchmark contains big datasets that may not fit into your RAM.
datasets = OpenMLDatasetRepository(id=271, verbosity=1).load_datasets(x_and_y=False)

bench = MLBenchmark(
    automl='ag',      # AutoGluon
    preset='best',
    metric='f1',
    timeout=360,
    verbosity=1
)

# Benchmark every dataset in the suite.
for dataset in datasets:
    bench.run(dataset)
```
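Switching to H2O should only require changing the `automl` argument. A minimal sketch, assuming `'h2o'` is the corresponding identifier (only `'ag'`, i.e. AutoGluon, appears in the examples in this README) and reusing the `datasets` loaded above:

```python
from core.api import MLBenchmark

# NOTE: 'h2o' is an assumed identifier; this README only shows 'ag' (AutoGluon).
bench_h2o = MLBenchmark(
    automl='h2o',
    metric='f1',
    timeout=360,
    verbosity=1
)

for dataset in datasets:  # `datasets` from the OpenML example above
    bench_h2o.run(dataset)
```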
Running on a local dataset.

```python
import pandas as pd

from mlbenchmark.api import MLBenchmark
from mlbenchmark.domain import Dataset

# Wrap a local CSV file in a Dataset object.
path_to_local_data = "datasets/local/ecoli.csv"
dataset = Dataset(name='ecoli', x=pd.read_csv(path_to_local_data))

bench = MLBenchmark(
    automl='ag',      # AutoGluon
    metric='f1',
    timeout=60,
    verbosity=2
)

bench.run(dataset)
```

Contribution is welcome! Feel free to open issues and submit pull requests.