Read more details about the work here
Small fun project on parameter-conditioned Lagrangian neural networks for double-pendulum dynamics in JAX/Equinox.
It takes inspiration from the original paper, Lagrangian Neural Networks (Cranmer et al., 2020), but extends the application to a family of double pendula with different masses and rod lengths instead of a single double pendulum.
In a Nutshell:
This repository learns a structured mechanics model from simulated trajectories of a double pendulum with varying masses and rod lengths. The model combines:
- a learned normalized kinetic energy built on an always positive-definite mass matrix $M(q)$,
- a Feature-wise Linear Modulation (FiLM)-conditioned kinetic MLP that helps generalize the original model to the family of pendula with different $m$, $l$ (see the sketch after this list),
- a learned normalized potential $V$ with its own branch (MLP),
- automatic differentiation of the Euler-Lagrange equations to obtain accelerations from the Lagrangian $L$,
- rollout-based evaluation on held-out and out-of-distribution parameter settings.
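To make the first two ingredients more concrete, here is a minimal JAX/Equinox sketch of a FiLM-conditioned kinetic branch with a Cholesky-style positive-definite mass matrix. The class and layer names are illustrative assumptions, not the contents of `src/lnn/model.py`, and the Cholesky factorization is just one common way to enforce positive definiteness rather than necessarily the scheme used here.

```python
import jax
import jax.numpy as jnp
import equinox as eqx

class FiLMKineticBranch(eqx.Module):
    """Illustrative sketch only; the real src/lnn/model.py may differ."""
    feature_mlp: eqx.nn.MLP   # configuration q -> hidden features
    film_mlp: eqx.nn.MLP      # parameters theta -> per-feature (scale, shift)
    chol_head: eqx.nn.Linear  # features -> entries of a 2x2 lower-triangular factor

    def __init__(self, hidden_size=64, *, key):
        k1, k2, k3 = jax.random.split(key, 3)
        self.feature_mlp = eqx.nn.MLP(2, hidden_size, hidden_size, depth=2, key=k1)
        self.film_mlp = eqx.nn.MLP(4, 2 * hidden_size, hidden_size, depth=1, key=k2)
        self.chol_head = eqx.nn.Linear(hidden_size, 3, key=k3)

    def mass_matrix(self, q, theta):
        h = self.feature_mlp(q)
        scale, shift = jnp.split(self.film_mlp(theta), 2)
        h = scale * h + shift                      # FiLM: feature-wise affine modulation
        l11, l21, l22 = self.chol_head(h)
        # Softplus keeps the diagonal strictly positive, so M = L L^T is positive definite.
        L = jnp.array([[jax.nn.softplus(l11), 0.0],
                       [l21, jax.nn.softplus(l22)]])
        return L @ L.T

    def kinetic_energy(self, q, q_dot, theta):
        return 0.5 * q_dot @ self.mass_matrix(q, theta) @ q_dot
```

Because the diagonal of the Cholesky factor goes through a softplus, $M(q)$ stays positive definite for every configuration and parameter setting, which keeps the kinetic term well posed.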
You can find more details on the documentation page.
Blind test of ground truth vs. model given same initial conditions:

The core network takes:
- generalized coordinates (angular positions) and velocities $\boldsymbol{q} = [q_1, q_2, \dot{q}_1, \dot{q}_2]^{\top}$ $\rightarrow$ [q1, q2, w1, w2],
- physical parameters $\boldsymbol{\theta} = [m_1, m_2, l_1, l_2]^{\top}$ $\rightarrow$ [m1, m2, l1, l2],
and predicts the generalized accelerations [q1_tt, q2_tt].
Internally, it learns a structured Lagrangian

$$L(q, \dot{q}; \theta) = \tfrac{1}{2}\,\dot{q}^{\top} M(q; \theta)\,\dot{q} - V(q; \theta),$$

where:
- the kinetic term is built from a positive-definite matrix parameterization,
- the kinetic branch is FiLM-conditioned by the physical parameters,
- the potential branch depends on both configuration and parameters,
- accelerations are recovered by differentiating the learned Lagrangian rather than directly regressing dynamics with an unconstrained MLP (see the sketch below).
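This is the standard Lagrangian-neural-network recipe from Cranmer et al.: the velocity Hessian of $L$ plays the role of the mass matrix, and the accelerations come out of a linear solve. Below is a self-contained JAX sketch, with `lagrangian` standing in for the learned network; the function name and signature are assumptions, not the repository's exact API.

```python
import jax
import jax.numpy as jnp

def accelerations(lagrangian, q, q_dot, theta):
    """Solve the Euler-Lagrange equations for the accelerations via autodiff.

    lagrangian: callable (q, q_dot, theta) -> scalar L. Here it is a stand-in for
    the learned network; the name and signature are illustrative.
    """
    # Velocity Hessian of L: the generalized mass matrix.
    d2L_dqdot2 = jax.hessian(lagrangian, argnums=1)(q, q_dot, theta)
    # Mixed derivative d/dq (dL/dq_dot), entry [i, j] = d^2 L / (dq_dot_i dq_j).
    d2L_dqdot_dq = jax.jacfwd(jax.grad(lagrangian, argnums=1), argnums=0)(q, q_dot, theta)
    # Gradient of L w.r.t. the coordinates.
    dL_dq = jax.grad(lagrangian, argnums=0)(q, q_dot, theta)
    # Euler-Lagrange: d/dt (dL/dq_dot) = dL/dq  =>  M q_ddot = dL/dq - (d2L/dq_dot dq) q_dot.
    return jnp.linalg.solve(d2L_dqdot2, dL_dq - d2L_dqdot_dq @ q_dot)

# Tiny sanity check with an analytical Lagrangian: a unit-mass oscillator with
# stiffness theta[0], L = 0.5 |q_dot|^2 - 0.5 k |q|^2, so q_ddot = -k q.
L_sho = lambda q, q_dot, theta: 0.5 * q_dot @ q_dot - 0.5 * theta[0] * q @ q
print(accelerations(L_sho, jnp.array([1.0]), jnp.array([0.0]), jnp.array([4.0])))  # ~[-4.]
```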
- `src/lnn/model.py`: FiLM-conditioned `LagrangianNN`
- `src/data/doublependulum.py`: analytical double-pendulum dynamics and energy functions
- `src/data/generate_dataset.py`: synthetic trajectory generation
- `src/train.py`: training loop and optimization
- `src/inference.py`: held-out rollouts, energy plots, and OOD tests
- `src/simulate.py`: RK4 rollout utilities
- `results/visualization.py`: GIF and phase-space visualization tools
- `docs/`: MkDocs site content
The current workflow is:
- Generate analytical trajectories for double pendulums with sampled masses and lengths.
- Build supervised tensors with the augmented state vector [q1, q2, w1, w2, m1, m2, l1, l2].
- Normalize velocities, parameters, and acceleration targets.
- Train `LagrangianNN` with a Huber loss plus an energy-conservation regularizer.
- Roll out the learned model with RK4 (see the sketch after this list).
- Compare held-out and OOD trajectories, phase portraits, and learned energy drift.
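The RK4 rollout step of this workflow can be sketched with `jax.lax.scan` as below; `accel_fn` stands in for either the learned model or the analytical dynamics, and none of the names are meant to match `src/simulate.py` exactly.

```python
import jax
import jax.numpy as jnp

def rk4_rollout(accel_fn, state0, theta, dt, n_steps):
    """Classic RK4 rollout of the state [q1, q2, w1, w2].

    accel_fn(q, q_dot, theta) -> q_ddot wraps either the learned model or the
    analytical dynamics; names and state layout are illustrative.
    """
    def deriv(state):
        q, q_dot = state[:2], state[2:]
        return jnp.concatenate([q_dot, accel_fn(q, q_dot, theta)])

    def step(state, _):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * dt * k1)
        k3 = deriv(state + 0.5 * dt * k2)
        k4 = deriv(state + dt * k3)
        new_state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        return new_state, new_state

    _, trajectory = jax.lax.scan(step, state0, xs=None, length=n_steps)
    return trajectory  # shape (n_steps, 4)
```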
The model reproduces the overall phase-space structure reasonably well on in-distribution test trajectories.
The repository also includes manual OOD tests over masses and rod lengths outside the training range. When masses and lengths are too far from the training distribution, results start differing qualitatively, as seen below.
The codebase includes a kinetic/potential decomposition check that compares learned structure against the analytical system over a grid of configurations.
The MLP that estimates the kinetic energy shows clear errors at the edges of the variable space, mainly because of the lack of samples in that range in the training set.
Moreover, the shape of the potential 
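A rough sketch of what such a grid check can look like for the potential branch, assuming hypothetical `learned_V` and `analytical_V` callables rather than the repository's exact interfaces:

```python
import jax
import jax.numpy as jnp

def potential_grid_error(learned_V, analytical_V, theta, n=50):
    """Compare a learned potential against the analytical one over a grid of configurations.

    learned_V, analytical_V: callables (q, theta) -> scalar; illustrative names only.
    """
    angles = jnp.linspace(-jnp.pi, jnp.pi, n)
    Q = jnp.stack(jnp.meshgrid(angles, angles), axis=-1).reshape(-1, 2)  # (n*n, 2) joint angles
    gap = jax.vmap(lambda q: learned_V(q, theta) - analytical_V(q, theta))(Q)
    # A learned potential is only identified up to an additive constant (only its
    # gradient enters the dynamics), so remove the mean offset before comparing.
    gap = gap - jnp.mean(gap)
    return jnp.sqrt(jnp.mean(gap ** 2)), jnp.max(jnp.abs(gap))
```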
The repository uses uv as the preferred environment manager, as indicated by the committed `uv.lock`.
Install dependencies:
```bash
uv sync
```

Generate a dataset:

```bash
uv run python src/data/generate_dataset.py
```

Train a model:

```bash
uv run python src/train.py
```

Run inference, held-out rollouts, and OOD tests:

```bash
uv run python src/inference.py
```

Generate animations and phase plots from saved rollout artifacts:

```bash
uv run python results/visualization.py
```

Build the docs:

```bash
uv run mkdocs serve
```

The repository also supports a minimal Docker verification path. This is a Linux CPU-only smoke test intended to confirm that the locked environment builds cleanly and that the core JAX stack imports inside a container.
Build the image from the repository root:
```bash
docker build -t lagrangiannn .
```

Run the container smoke test:

```bash
docker run --rm lagrangiannn
```

The default container command is:

```bash
uv run python -c "import jax, equinox, optax; print('smoke test ok')"
```

If you are running this on a Mac, Docker Desktop is still executing a Linux container in a lightweight VM. This verifies Linux-in-Docker behavior, not native macOS execution.
- The implementation is specialized to a 2-DoF double pendulum.
- Several scripts rely on hard-coded model names and example parameter sets.
- The training code is script-first rather than a packaged CLI workflow.
- The learned energy regularizer acts on a normalized, model-induced quantity, not the exact physical Hamiltonian in original units.
- Some repository metadata still needs cleanup, such as the placeholder package name in `pyproject.toml`.
- Deep Lagrangian Networks (Lutter et al., ICLR 2019)
- Lagrangian Neural Networks (Cranmer et al., 2020)
If you use this repository in your research or projects, please cite it as:
```bibtex
@misc{corbetta2026lagrangianfilmnn,
  author       = {Corbetta, Matteo},
  title        = {Lagrangian FiLM NN: A JAX Implementation of a FiLM-Conditioned Lagrangian Neural Network for Double Pendula},
  year         = {2026},
  howpublished = {\url{https://github.com/matteocorbetta/lagrangian-film-nn}},
}
```
