Time series machine learning, built by the researchers behind the algorithms.
aeon is a scikit-learn compatible Python library for learning from time series.
It covers classification, regression, clustering, forecasting, anomaly detection, distances,
segmentation, similarity search, transformations and benchmarking.
Many implementations in aeon are contributed and maintained by the researchers who developed the original methods. These include state-of-the-art models for forecasting, classification, regression, and clustering, including deep learning approaches.
Documentation · Examples · API reference · Getting started · Discussions · Discord
📄 Published in the Journal of Machine Learning Research (2024) — aeon: a Python Toolkit for Learning from Time Series
aeon is developed in close contact with the time series research community.
Many of its algorithms are contributed or maintained by their original authors,
and the same team behind aeon runs the benchmarks that the field uses to
evaluate new methods. That means:
- Faithful implementations. Algorithms reflect what the papers actually describe.
- State of the art, sooner. New methods often land in `aeon` alongside publication.
- Evidence-based defaults. What is included, and what is recommended, is grounded in published comparative studies.
A selection of the algorithms available in aeon that were written by aeon core developers or contributors:
| Method | Reference | Task |
|---|---|---|
| InceptionTime | Ismail-Fawaz et al., 2020 | Classification |
| Hydra-MultiRocket | Dempster et al., 2023 | Classification |
| SETAR-Tree | Godahewa et al., 2023 | Forecasting |
| KASBA | Holder et al., 2026 | Clustering |
| CLASP | Ermshaus et al., 2023 | Segmentation |
| DrCIF | Guijo-Rubio et al., 2024 | Regression |
| TDE | Guijo-Rubio et al., 2025 | Ordinal Classification |
Code in aeon and related toolkits has been used in a wide range of benchmarking studies:
| Study | Reference | Area |
|---|---|---|
| Clustering | Holder et al., 2024 | Benchmarking |
| Anomaly detection | Schmidl et al., 2022 | Benchmarking |
| Classification (the "bake off") | Bagnall et al., 2017 | Benchmarking |
| Classification ("bake off redux") | Middlehurst et al., 2025 | Benchmarking |
| Deep learning for classification | Ismail-Fawaz et al., 2019 | Benchmarking |
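To give a flavour of how such benchmarking comparisons are often summarised, here is a small pure-NumPy sketch that tallies wins, ties, and losses between two classifiers over a set of datasets. The accuracy values are hypothetical, made up purely for illustration, and this is not code from aeon's benchmarking module:

```python
import numpy as np

# Hypothetical per-dataset accuracies for two classifiers on ten datasets.
# These numbers are illustrative only, not taken from any published study.
acc_a = np.array([0.91, 0.85, 0.78, 0.88, 0.95, 0.70, 0.82, 0.90, 0.76, 0.84])
acc_b = np.array([0.89, 0.86, 0.78, 0.84, 0.93, 0.72, 0.80, 0.88, 0.75, 0.81])

diff = acc_a - acc_b
wins = int(np.sum(diff > 0))    # datasets where A beats B
ties = int(np.sum(diff == 0))   # datasets with equal accuracy
losses = int(np.sum(diff < 0))  # datasets where B beats A

print(f"A vs B: {wins} wins / {ties} ties / {losses} losses")
print(f"mean accuracy difference: {diff.mean():.4f}")
```

Published studies go further, with statistical tests and critical difference diagrams over many resamples, but the underlying bookkeeping is of this shape.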
See the API reference for the full list of estimators across all tasks.
aeon requires Python 3.10 or newer.
Install the latest release from PyPI:
```bash
pip install aeon
```

To install with all optional dependencies (including deep learning):

```bash
pip install aeon[all_extras]
```

For development installs and platform-specific notes, see the installation guide.
Fit a classifier on a standard UCR dataset:
```python
from aeon.classification.convolution_based import RocketClassifier
from aeon.datasets import load_gunpoint

X_train, y_train = load_gunpoint(split="train")
X_test, y_test = load_gunpoint(split="test")

clf = RocketClassifier()
clf.fit(X_train, y_train)
print("Accuracy:", clf.score(X_test, y_test))
```

Ten task areas, one consistent API:
| Task | What it does | |
|---|---|---|
| Classification | Predict labels for time series | docs → |
| Regression | Predict continuous values from time series | docs → |
| Clustering | Group similar series without labels | docs → |
| Forecasting | Predict future values | docs → |
| Anomaly detection | Find unusual points or subsequences | docs → |
| Segmentation | Split a series into homogeneous regions | docs → |
| Similarity search | Find similar subsequences in long series | docs → |
| Transformations | Feature extraction and preprocessing | docs → |
| Distances & kernels | Time series similarity measures | docs → |
| Benchmarking | Reproducible experimental evaluation | docs → |
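As a flavour of the elastic distance measures behind the distances module, here is a minimal pure-NumPy dynamic time warping sketch. It illustrates the classic full-window DTW recurrence; it is not aeon's optimised implementation, which you would use via `aeon.distances` instead:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic full-window dynamic time warping distance between two 1D series."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2  # squared pointwise cost
            # extend the cheapest of the three allowed warping moves
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 1.0, 2.0, 3.0, 4.0])
print(dtw_distance(a, b))  # 0.0: b is a time-warped copy of a
```

Unlike Euclidean distance, DTW can align series of different lengths or with local time shifts, which is why elastic distances underpin many of the classifiers and clusterers above.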
Time series classification predicts class labels for unseen series using a model fitted on a collection of labelled time series.
```python
import numpy as np

from aeon.classification.convolution_based import MultiRocketHydraClassifier

X = np.array([
    [[1, 2, 3, 4, 5, 5]],
    [[1, 2, 3, 4, 4, 2]],
    [[8, 7, 6, 5, 4, 4]],
])
y = np.array(["low", "low", "high"])

clf = MultiRocketHydraClassifier(n_kernels=100)
clf.fit(X, y)

X_test = np.array([
    [[2, 2, 2, 2, 2, 2]],
    [[5, 5, 5, 5, 5, 5]],
    [[6, 6, 6, 6, 6, 6]],
])
y_pred = clf.predict(X_test)
print(y_pred)
# ['low' 'low' 'high']
```

Time series clustering groups similar time series together from an unlabelled collection.
```python
from aeon.clustering import KASBA
from aeon.datasets import load_gunpoint

X, _ = load_gunpoint()

clu = KASBA(n_clusters=2)
clu.fit(X)
print(clu.labels_)
```

aeon provides a wide range of forecasting algorithms, including classic
statistical models and modern deep learning approaches.
```python
from aeon.datasets import load_airline
from aeon.forecasting.stats import ARIMA

y = load_airline()

forecaster = ARIMA(1, 1, 1)
pred = forecaster.forecast(y)
print(pred)
```

For more advanced forecasting, aeon also includes deep learning and machine learning methods not available elsewhere in Python, such as SETARTree and SETARForest.
aeon provides Keras/TensorFlow implementations of leading deep learning
architectures for time series through the networks module, with a consistent scikit-learn compatible API
and many models contributed by their original authors:
- Classification: InceptionTime, H-InceptionTime, LITE, LITETime, ResNet, FCN, MLP, CNN, Disjoint-CNN, and more
- Regression: the same backbone architectures, adapted for continuous targets
- Clustering: deep learning based clustering via learned representations
- Forecasting: deep learning based forecasting
A minimal example:
```python
from aeon.datasets import load_basic_motions
from aeon.classification.deep_learning import InceptionTimeClassifier

X_train, y_train = load_basic_motions(split="train")
X_test, y_test = load_basic_motions(split="test")

clf = InceptionTimeClassifier(n_epochs=10)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

See the examples gallery for GPU usage, custom architectures, and benchmarking against classical methods.
For more examples across tasks, visit the examples gallery.
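The other task areas follow the same fit/predict pattern. As a flavour of what similarity search does, here is a naive pure-NumPy sliding-window nearest-subsequence query under Euclidean distance. It illustrates the idea only; aeon's similarity search estimators use much faster algorithms:

```python
import numpy as np

def best_match(series, query):
    """Return (start_index, distance) of the query's nearest subsequence
    under Euclidean distance, by brute-force sliding-window scan."""
    m = len(query)
    dists = np.array([
        np.linalg.norm(series[i:i + m] - query)
        for i in range(len(series) - m + 1)
    ])
    best = int(np.argmin(dists))
    return best, float(dists[best])

rng = np.random.default_rng(0)
series = rng.normal(size=200)
query = series[60:80].copy()  # plant an exact match at index 60

idx, dist = best_match(series, query)
print(idx, dist)  # 60 0.0
```

The brute-force scan is quadratic in the series length, which is exactly why dedicated similarity search algorithms exist for long series.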
There are several ways to engage with the project:
- ⭐ Star this repository to help others discover it
- 👀 Watch releases to get notified when new versions ship
- 📝 Cite `aeon` in academic work if you use it
- 🐛 Report bugs or request features via GitHub Issues
- 💬 Ask questions or join the discussion on GitHub Discussions or Discord
- 🛠️ Contribute code, tests, documentation, or examples — see the contributing guide
For project or collaboration enquiries, contact contact@aeon-toolkit.org.
If you are interested in contributing, please read the contributing guide before opening a pull request or taking ownership of an issue.
The aeon developers are volunteers, so please be patient with issue triage and pull request review.
If you use aeon in academic work, please cite the project:
```bibtex
@article{aeon24jmlr,
  author  = {Matthew Middlehurst and Ali Ismail-Fawaz and Antoine Guillaume and Christopher Holder and David Guijo-Rubio and Guzal Bulatova and Leonidas Tsaprounis and Lukasz Mentel and Martin Walter and Patrick Sch{\"a}fer and Anthony Bagnall},
  title   = {aeon: a Python Toolkit for Learning from Time Series},
  journal = {Journal of Machine Learning Research},
  year    = {2024},
  volume  = {25},
  number  = {289},
  pages   = {1--10},
  url     = {http://jmlr.org/papers/v25/23-1444.html}
}
```

If you let us know about your paper using aeon, we will happily list it on the project website.
aeon was forked from sktime v0.16.0 in 2022 by an initial group of eight core developers, and has since been substantially rewritten and extended.
Our core development team of 13 spans academia and industry, representing seven nationalities across the globe.
You can read more about the project's history, values, and governance on the About Us page.
aeon is under active development. The core package is stable and widely used. The following modules are currently considered in development, and the deprecation policy does not necessarily apply to them (although non-compatible changes are rare): `anomaly_detection`, `forecasting`, `segmentation`, `similarity_search`, `visualisation`, `transformations.collection.self_supervised`, and `transformations.collection.imbalance`.
Please check the documentation for task-specific capabilities, limitations, and current status.