Releases: onesixsolutions/torchcast
v1.1.1
Bug fix: LBFGS regression with PyTorch >= 2.10
The default LBFGS optimizer now explicitly sets `max_eval=25`. This restores correct training behavior after pytorch/pytorch#161488 (shipped in PyTorch 2.10) fixed a bug where `max_eval` was silently ignored by the strong-Wolfe line search.
Prior to that fix, `max_eval` defaulted to 2 (from `max_iter * 1.25 + 1` with `max_iter=1`) but was effectively ignored — the line search ran freely. After the fix, the cap was correctly enforced, causing the optimizer to converge after only a handful of epochs with a poor loss.
If you trained models with PyTorch >= 2.10 and torchcast < 1.1.1, those models may have been undertrained. We recommend retraining.
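If you want to replicate the new default explicitly, a minimal sketch in plain PyTorch (the parameter here is a placeholder; in practice you would pass your model's parameters):

```python
import torch

# Placeholder parameter; in practice this would be a model's parameters.
param = torch.zeros(3, requires_grad=True)

# Explicitly cap line-search function evaluations rather than relying on the
# default derived from max_iter, so behavior is consistent across PyTorch
# versions before and after pytorch/pytorch#161488.
optimizer = torch.optim.LBFGS(
    [param],
    max_eval=25,
    line_search_fn='strong_wolfe',
)
```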
See the full CHANGELOG.
v1.1.0 (2026-04-06)
Refactor of Process API and internals
Rewrite of the `Process` class and its subclasses to improve maintainability and to support extended-Kalman-filter processes. The external API is fully backwards-compatible, but models created in an earlier version of torchcast cannot be loaded into this newer version (and vice versa), due to renaming/reorganization of the `state_dict`.
Updates to Utils: Data-Loading and Trainer
- The `TimeSeriesDataLoader` class has been updated to support batchwise transformations. Its `from_dataframe()` method now optionally accepts a function for `X_colnames`, which should take a dataframe for a batch and return the model-matrix for that batch (i.e. a dataframe of predictors). This is useful for memory-intensive transformations, since they can be applied just-in-time to a single batch of the data instead of to the entire dataframe before sending it to the dataloader. See the electricity example in the documentation for example usage.
- The `SeasonalEmbeddingsTrainer` (used in the electricity example) has been deprecated in favor of the more general `ModelMatEmbeddingsTrainer`, which embeds any high-dimensional model-matrix into a lower-dimensional space. See the electricity example in the documentation for example usage.
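To illustrate the kind of function `from_dataframe()` can accept for `X_colnames`, here is a hypothetical just-in-time transform. The `'time'` column name and the day-of-week expansion are illustrative, not part of torchcast's API:

```python
import pandas as pd

def make_model_mat(batch_df: pd.DataFrame) -> pd.DataFrame:
    # Expand a 'time' column into one-hot day-of-week predictors for just
    # this batch, instead of materializing the expansion for every row of
    # the full dataframe up front.
    dow = batch_df['time'].dt.dayofweek
    return pd.DataFrame(
        {f'dow_{d}': (dow == d).astype('float64') for d in range(7)},
        index=batch_df.index,
    )

# Hypothetical usage:
# TimeSeriesDataLoader.from_dataframe(df, ..., X_colnames=make_model_mat)
```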
Experimental
- State-space models (like `KalmanFilter`) now support an `adaptive_scaling` argument. If set to `True`, the model will use a learned exponential-moving-average model to dynamically adjust the model's variance.
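As a rough illustration of the idea behind this (not torchcast's actual implementation), an exponential moving average of squared one-step residuals can serve as a dynamic variance estimate:

```python
def ema_variance(residuals, alpha=0.05):
    # Exponentially weighted average of squared residuals: recent large
    # errors inflate the variance estimate, which decays back as errors
    # shrink. In torchcast the smoothing is learned; here alpha is fixed.
    var = residuals[0] ** 2
    out = []
    for r in residuals:
        var = (1 - alpha) * var + alpha * r ** 2
        out.append(var)
    return out
```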
Other
- Python 3.9 or greater is now required.
- Pandas is currently pinned to <3, as support for 3.* has not yet been tested.
- The `to_dataframe()` method of `Predictions` supports `type='predictions'`, `'states'`, or `'observed_states'`. The last of these replaces `type='components'`, which is now deprecated.
Detailed Changes: v0.6.0...v1.1.0
v0.6.0 (2025-04-25)
Updated default fit() behavior
The `fit()` method of `torchcast.state_space.StateSpaceModel` has been updated:
- The default `LBFGS` settings have been updated to avoid the unnecessary inner loop (see here).
- The default convergence settings have been updated, increasing `patience` to 2 (instead of 1) and `max_iter` to 300 (instead of 200).
- To restore the old behavior, pass `optimizer=lambda p: torch.optim.LBFGS(p, max_eval=8, line_search_fn='strong_wolfe'), stopping={'patience': 1, 'max_iter': 200}`.
- Convergence is now controlled by a `torchcast.utils.Stopping` instance (or kwargs for one). This means passing `tol`, `patience`, and `max_iter` directly to `fit()` is deprecated; instead call `fit(stopping={'patience': ...})`.
Updated default Covariance behavior
- The `'low_rank'` method is never chosen by default; if desired, it must be selected manually using the `method` kwarg (previously it was chosen automatically when the rank was >= 10). This change was made because the low-rank method performed poorly in practice.
- The starting values for the covariance diagonal have been increased.
- Added an `initial_covariance` kwarg to `KalmanFilter` and subclasses.
Updates to BinomialFilter
- Added the `observed_counts` argument, allowing the user to specify whether observations are counts or proportions. If `num_obs==1` then this argument is not required (since the two are equivalent).
- Fix bug in `BinomialStep`'s kalman-gain calculation when `num_obs > 1`.
- Fix issues with `BinomialFilter` on the GPU.
- Fix `__getitem__()` for `BinomialPredictions`.
- Fix monte-carlo `BinomialPredictions.log_prob()` to properly marginalize over samples.
Other Fixes
- Fix `get_durations()` on GPU.
- Remove redundant matmul in `KalmanStep._update()`.
- `ss_step` is no longer a property but an attribute, avoiding unnecessary re-instantiation on each timestep.
v0.5.1 (2025-01-09)
Documentation
- New example: “Using NNs for Long-Range Forecasts: Electricity Data”
- Documentation/README cleanup
Trainers
Add torchcast.utils.training module with...
- `SimpleTrainer` for training simple `nn.Module`s
- `SeasonalEmbeddingsTrainer` for training `nn.Module`s to embed seasonal patterns
- `StateSpaceTrainer` for training torchcast's `StateSpaceModel`s (when data are too big for the `fit()` method)
Baseline
- Add `make_baseline` helper to generate baseline forecasts using a simple n-back method (3641e7c)
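The n-back idea itself is simple; a toy pandas sketch (not `make_baseline`'s actual signature) that predicts each value from the observation n steps earlier:

```python
import pandas as pd

def n_back_forecast(y: pd.Series, n: int) -> pd.Series:
    # Predict y[t] with the value observed n steps earlier; the first n
    # entries have no prediction (NaN).
    return y.shift(n)
```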
Fixes
- Ensure consistent column-ordering and default RangeIndex in output of `Predictions.to_dataframe()` (0a0fc81, f33c638)
- Fix default behavior in how `TimeSeriesDataLoader` forward-fills nans for the `X` tensor (0a0fc81)
- Fix seasonal initial values when passing `initial_value` to `forward()` (cae2879)
- Fix behavior of `StateSpaceModel.simulate()` when `num_sims > 1` (cae2879)
- Fix extra arg in `ExpSmoother._generate_predictions()` (b553248)
- Make `TimeSeriesDataset.split_measures()` usable by removing the `which` argument (8f1001b)
v0.4.1 (2024-10-09)
Continuous Integration
- ci: Update actions/checkout version (ed64632)
- ci: Clone repo using PAT (d0adaca)
- ci: Enable repo push (f565d2a)
- ci: Use SSH Key (469d531)
- ci: Fix docs job permissions (e6e2e34)
- ci: Pick python version from pyproject.toml (2a9eef7)
- ci: Setup auto-release (9df4f26)
Documentation
- docs: Fix examples (6f5a2dc)
- docs: AirQuality datasets [skip ci] (c675f04)
- docs: Self-hosted docs and fixtures (baca184)
Fixes
- fix: AQ Dataset (9b6e23e)
Refactoring
- refactor: Switch to pyproject.toml (6de2f27)