
Releases: onesixsolutions/torchcast

v1.1.1

17 Apr 15:29
ec2a4df


Bug fix: LBFGS regression with PyTorch >= 2.10

The default LBFGS optimizer now explicitly sets max_eval=25. This restores correct training behavior after pytorch/pytorch#161488 (shipped in PyTorch 2.10) fixed a bug where max_eval was silently ignored by the strong Wolfe line search.

Prior to that fix, max_eval defaulted to 2 (computed as max_iter * 1.25 + 1, with max_iter=1) but was effectively ignored: the line search ran freely. After the fix, the cap was correctly enforced, causing the optimizer to converge after only a handful of epochs with a poor loss.

If you trained models with PyTorch >= 2.10 and torchcast < 1.1.1, those models may have been undertrained. We recommend retraining.
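The fix amounts to passing an explicit, generous max_eval when constructing the optimizer. As a minimal illustration in plain PyTorch (torchcast's fit() wiring is not shown here; the quadratic objective and epoch count are purely for demonstration), an LBFGS configured with max_iter=1 and an explicit max_eval still converges correctly when stepped once per epoch:

```python
import torch

# A 1-D quadratic with its minimum at x = 2.
x = torch.tensor([5.0], requires_grad=True)

# max_iter=1 with an explicit max_eval cap on the strong-Wolfe line search,
# mirroring the default torchcast now sets.
opt = torch.optim.LBFGS([x], max_iter=1, max_eval=25, line_search_fn='strong_wolfe')

def closure():
    opt.zero_grad()
    loss = (x - 2.0) ** 2
    loss.backward()
    return loss

# One LBFGS step per "epoch".
for _ in range(10):
    opt.step(closure)
```

With the cap enforced (as in PyTorch >= 2.10), the line search terminates after at most 25 closure evaluations per step rather than running freely, and training still converges over normal epoch counts.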

See the full CHANGELOG.

v1.1.0

06 Apr 16:42
a088bc2


v1.1.0 (2026-04-06)

Refactor of Process API and internals

Rewrite of the Process class and its subclasses to improve maintainability and support extended-Kalman-filter processes. Note that the external API is fully backwards-compatible, but models created with an earlier version of torchcast cannot be loaded into this newer version (and vice versa), due to renaming/reorganization of the state_dict.

Updates to Utils: Data-Loading and Trainer

  • The TimeSeriesDataLoader class has been updated to support batchwise transformations. Its from_dataframe() method now optionally accepts a function for X_colnames, which should take a batch's dataframe and return the model-matrix for that batch (i.e. a dataframe of predictors). This is useful for memory-intensive transformations, since they can be applied just-in-time to a single batch instead of to the entire dataframe before it is sent to the dataloader. See the electricity example in the documentation for example usage.
  • The SeasonalEmbeddingsTrainer (used in the electricity example) has been deprecated in favor of the more general ModelMatEmbeddingsTrainer, which embeds any high-dimensional model-matrix into a lower dimensional space. See the electricity example in the documentation for an example of usage.
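A batchwise predictor function of the kind described above might look like the following sketch. The column names and features are illustrative assumptions, not torchcast API; the point is only the shape of the contract: a batch's dataframe in, that batch's model-matrix out.

```python
import numpy as np
import pandas as pd

# Hypothetical function of the kind passed for X_colnames: it expands a
# timestamp column into Fourier seasonal features just-in-time, per batch,
# rather than materializing them for the whole dataframe up front.
def make_predictors(batch_df: pd.DataFrame) -> pd.DataFrame:
    hour = batch_df['time'].dt.hour
    return pd.DataFrame({
        'sin_hour': np.sin(2 * np.pi * hour / 24),
        'cos_hour': np.cos(2 * np.pi * hour / 24),
    }, index=batch_df.index)

# Applied to one batch of 48 hourly rows:
batch = pd.DataFrame({'time': pd.date_range('2026-01-01', periods=48, freq='h')})
X = make_predictors(batch)
```

Because the feature expansion runs per batch, peak memory scales with the batch size rather than the full dataset.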

Experimental

  • State-space models (like KalmanFilter) now support an adaptive_scaling argument. If set to True, the model uses a learned exponential moving average to dynamically adjust its variance.
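The general idea behind this kind of adaptive scaling can be sketched with a plain exponential moving average of squared residuals. This is a conceptual illustration only, not torchcast's actual adaptive_scaling implementation (there the smoothing is learned):

```python
import numpy as np

# EMA of squared residuals: a running, recency-weighted variance estimate
# that can be used to rescale a model's variance over time.
def ema_variance(residuals, alpha=0.05):
    v = float(residuals[0]) ** 2
    out = []
    for r in residuals:
        v = (1 - alpha) * v + alpha * float(r) ** 2
        out.append(v)
    return np.array(out)

# Residuals whose true variance jumps from 1 to 9 halfway through:
rng = np.random.default_rng(0)
resid = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 3, 500)])
v = ema_variance(resid)
```

The tracked variance rises after the change-point, which is exactly the behavior a fixed (static) variance estimate cannot provide.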

Other

  • Python 3.9 or greater is now required.
  • Pandas is currently pinned to <3, as support for 3.* has not yet been tested.
  • The to_dataframe() method of Predictions now accepts type='predictions', 'states', or 'observed_states'. The last of these replaces type='components', which is now deprecated.

Detailed Changes: v0.6.0...v1.1.0

v0.6.0

29 Apr 15:22
b739a5e


v0.6.0 (2025-04-25)

Updated default fit() behavior

The fit() method of torchcast.state_space.StateSpaceModel has been updated:

  • The default LBFGS settings have been updated to avoid an unnecessary inner loop.
  • The default convergence settings have been updated to increase patience to 2 (instead of 1) and increase max_iter to 300 (instead of 200).
  • To restore the old behavior, pass optimizer=lambda p: torch.optim.LBFGS(p, max_eval=8, line_search_fn='strong_wolfe'), stopping={'patience' : 1, 'max_iter' : 200}.
  • Convergence is now controlled by a torch.utils.Stopping instance (or kwargs for one). Passing tol, patience, and max_iter directly to fit() is therefore deprecated; instead call fit(stopping={'patience': ...}).

Updated default Covariance behavior

  • The 'low_rank' method is no longer chosen by default; if desired, it must be selected manually via the method kwarg (previously it was chosen automatically when the rank was >= 10). This change was motivated by empirically poor performance.
  • The starting values for the covariance diagonal have been increased.
  • Added initial_covariance kwarg to KalmanFilter and subclasses.

Updates to BinomialFilter

  • Added the observed_counts argument, allowing the user to specify whether observations are counts or proportions. If num_obs == 1 this argument is not required, since the two are equivalent.
  • Fix a bug in BinomialStep's Kalman-gain calculation when num_obs > 1.
  • Fix issues with BinomialFilter on the GPU.
  • Fix __getitem__() for BinomialPredictions.
  • Fix Monte Carlo BinomialPredictions.log_prob() to properly marginalize over samples.

Other Fixes

  • Fix get_durations() on GPU.
  • Remove a redundant matmul in KalmanStep._update().
  • ss_step is no longer a property but an attribute, avoiding unnecessary re-instantiation on each timestep.

v0.5.1

10 Jan 04:02


v0.5.1 (2025-01-09)

Documentation

Trainers

Add torchcast.utils.training module with...

  • SimpleTrainer for training simple nn.Modules
  • SeasonalEmbeddingsTrainer for training nn.Modules to embed seasonal patterns.
  • StateSpaceTrainer for training torchcast's StateSpaceModels (when data are too big for the fit() method)

Baseline

  • Add make_baseline helper to generate baseline forecasts using a simple n-back method 3641e7c
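A simple n-back baseline forecasts each timestep as the observed value n steps earlier. The sketch below illustrates that general idea; the function name and signature are assumptions for illustration, not make_baseline's actual API:

```python
import numpy as np

# Forecast y[t] as y[t - n]; the first n steps have no history, so they
# are left as NaN.
def nback_baseline(y, n):
    y = np.asarray(y, dtype=float)
    out = np.full_like(y, np.nan)
    out[n:] = y[:-n]
    return out

forecast = nback_baseline([1.0, 2.0, 3.0, 4.0, 5.0], n=2)
```

With n equal to the seasonal period, this becomes the classic seasonal-naive baseline that forecasting models are usually benchmarked against.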

Fixes

  • Ensure consistent column-ordering and default RangeIndex in output of Predictions.to_dataframe() 0a0fc81, f33c638
  • Fix default behavior in how TimeSeriesDataLoader forward-fills nans for the X tensor 0a0fc81
  • Fix seasonal initial values when passing initial_value to forward cae2879
  • Fix behavior of StateSpaceModel.simulate() when num_sims > 1 cae2879
  • Fix extra arg in ExpSmoother._generate_predictions() b553248
  • Make TimeSeriesDataset.split_measures() usable by removing the which argument 8f1001b

v0.4.1

09 Oct 20:28


v0.4.1 (2024-10-09)

Continuous Integration

  • ci: Update actions/checkout version (ed64632)

  • ci: Clone repo using PAT (d0adaca)

  • ci: Enable repo push (f565d2a)

  • ci: Use SSH Key (469d531)

  • ci: Fix docs job permissions (e6e2e34)

  • ci: Pick python version from pyproject.toml (2a9eef7)

  • ci: Setup auto-release (9df4f26)

Documentation

  • docs: Fix examples (6f5a2dc)

  • docs: AirQuality datasets [skip ci] (c675f04)

  • docs: Self-hosted docs and fixtures (baca184)

Refactoring

  • refactor: Switch to pyproject.toml (6de2f27)