bayesianOU is an R package that solves a fundamental problem in economic dynamics: formally testing whether observed market prices exhibit mean-reversion toward theoretical equilibrium values through nonlinear stochastic processes with full Bayesian uncertainty quantification.
At its core, the package addresses situations where you need to determine if disaggregated economic variables (e.g., sectoral market prices) exhibit systematic attraction toward theoretical benchmarks (e.g., production prices)—a question central to classical and Marxian economics but notoriously difficult to answer with standard econometric tools. The solution combines nonlinear Ornstein-Uhlenbeck dynamics, stochastic volatility modeling, heavy-tailed innovations, and hierarchical Bayesian inference to deliver rigorous convergence tests with transparent uncertainty propagation.
Economic convergence questions arise across many domains. The typical setting involves:
- Variable $Y$: a set of observed market outcomes (e.g., market prices, CPI by sector)
- Variable $X$: a set of theoretical or fundamental values (e.g., production prices, labor-value prices)
- Aggregate dynamics: economy-wide factors (e.g., aggregate profitability, TMG) that modulate convergence speeds
- Structural heterogeneity: sector-specific characteristics (e.g., capital composition) affecting adjustment patterns
The challenge: how do you determine whether $Y$ systematically converges toward $X$ when:
- Both are high-dimensional and exhibit sector-specific dynamics
- The convergence process may be nonlinear (stronger reversion for large deviations)
- Volatility varies over time (stochastic volatility)
- Shocks exhibit fat tails (non-Gaussian innovations)
- Aggregate economic conditions modulate the relationship
Traditional approaches fail because:
- Linear OU models assume constant mean-reversion speed regardless of deviation size
- Homoskedastic models ignore time-varying volatility, yielding inefficient inference
- Gaussian innovations underestimate extreme events, biasing tail risk
- Pooled estimators ignore sector-specific heterogeneity
- Frequentist methods provide only point estimates without full uncertainty quantification
- Single-equation approaches cannot capture how aggregate conditions modulate convergence
bayesianOU implements a principled econometric framework that:
- Specifies nonlinear OU dynamics with cubic drift, allowing mean-reversion speed to increase with deviation magnitude
- Incorporates stochastic volatility via AR(1) log-variance processes, capturing volatility clustering
- Models heavy tails through Student-t distributed innovations with estimated degrees of freedom
- Estimates hierarchical priors for sector-specific parameters, sharing information across sectors
- Includes time-varying coupling between $Y$ and $X$ modulated by aggregate profitability (TMG)
- Performs full Bayesian inference via Stan's Hamiltonian Monte Carlo with NUTS
- Validates through PSIS-LOO cross-validation and out-of-sample forecasting
- Provides interpretable outputs including half-lives, convergence probabilities, and posterior summaries
This package represents a methodological contribution to econometrics, providing, to our knowledge, the first integrated framework for testing price convergence through nonlinear stochastic processes with stochastic volatility, heavy-tailed innovations, and hierarchical Bayesian inference.
The package's breakthrough lies in recognizing that:
- Cubic drift $\kappa(\theta - Y + a_3(Y - \theta)^3)$ nests the linear OU model as a special case ($a_3 = 0$) while allowing stronger reversion for large deviations
- Time-varying beta $\beta_s(t) = \beta_{0,s} + \beta_1 \cdot \text{TMG}_t$ captures how aggregate profitability modulates price adjustment
- Stochastic volatility with AR(1) log-variance accounts for volatility persistence common in economic data
- Student-t innovations with estimated $\nu$ accommodate fat tails without arbitrary distributional assumptions
- Hierarchical structure enables efficient estimation even with moderate time-series length per sector
This enables:
- Formal hypothesis testing for mean-reversion ($\kappa_s > 0$) with exact posterior probabilities
- Interpretable parameters: half-lives $\tau_s = \ln(2)/\kappa_s$ in periods, coupling strengths
- Nonlinearity detection: testing whether $a_3 < 0$ (strengthening reversion at extremes)
- Robust inference: full posterior distributions for all parameters
- Model comparison: PSIS-LOO for comparing specifications
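A minimal sketch of how these quantities fall out of posterior draws, using synthetic lognormal draws as a stand-in for actual MCMC output of $\kappa_s$ (the draws and their parameters are illustrative, not package output):

```r
# Synthetic stand-in for posterior draws of kappa for one sector
set.seed(1)
kappa_draws <- rlnorm(4000, meanlog = -1, sdlog = 0.3)

p_convergence <- mean(kappa_draws > 0 & kappa_draws < 1)  # P(0 < kappa < 1 | data)
half_life     <- log(2) / median(kappa_draws)             # posterior median half-life
ci95          <- quantile(kappa_draws, c(0.025, 0.975))   # 95% credible interval
```

With real output, `kappa_draws` would come from the fitted Stan object rather than `rlnorm()`; the summaries are computed identically.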
The continuous-time specification for sector $s$ is the stochastic differential equation

$$dY_{s,t} = \kappa_s\left(\theta_s - Y_{s,t} + a_{3,s}(Y_{s,t} - \theta_s)^3\right)dt + \sigma_{s,t}\,dW_{s,t}$$
where:
- $Y_{s,t}$ is the (standardized) market price for sector $s$ at time $t$
- $\kappa_s > 0$ is the mean-reversion speed
- $\theta_s$ is the long-run equilibrium level
- $a_{3,s}$ is the cubic nonlinearity coefficient
- $\sigma_{s,t}$ is the time-varying volatility
- $W_{s,t}$ is a standard Wiener process
Interpretation of the cubic term: when $a_{3,s} < 0$, the restoring force grows faster than linearly with the deviation $|Y_{s,t} - \theta_s|$, so mean-reversion strengthens at extremes (a basin-of-attraction effect); $a_{3,s} = 0$ recovers the linear OU model.
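To see the basin-of-attraction effect, a short Euler–Maruyama simulation of the cubic-drift SDE can be run in base R (the parameter values here are illustrative, not estimates):

```r
# Simulate one sector's cubic-drift OU path (illustrative parameters)
set.seed(42)
kappa <- 0.3; theta <- 0; a3 <- -0.1; sigma <- 0.2
T_len <- 200
Y <- numeric(T_len)
Y[1] <- 2  # start far from equilibrium
for (t in 1:(T_len - 1)) {
  drift <- kappa * (theta - Y[t] + a3 * (Y[t] - theta)^3)
  Y[t + 1] <- Y[t] + drift + sigma * rnorm(1)  # Euler-Maruyama, dt = 1
}
```

At $Y = 2$ the drift is $0.3(-2 - 0.8) = -0.84$, versus $-0.6$ under the linear model: the cubic term adds 40% more pull toward equilibrium.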
Discretizing with the Euler–Maruyama scheme at unit step ($\Delta t = 1$) gives $Y_{s,t+1} = \mu_{s,t} + \sigma_{s,t}\,\varepsilon_{s,t+1}$, where the conditional mean $\mu_{s,t}$ adds to $Y_{s,t}$ the cubic drift above together with the time-varying coupling to $X_{s,t}$ described next.
The time-varying coupling coefficient is

$$\beta_s(t) = \beta_{0,s} + \beta_1 \cdot \text{TMG}_t$$

with capital composition entering the conditional mean through $\gamma \cdot \text{COM}_{s,t}$ when `com_in_mean = TRUE`,
where:
- $\beta_{0,s}$ is the sector-specific baseline coupling
- $\beta_1$ is the global effect of aggregate profitability (TMG = Tasa Media de Ganancia, the average rate of profit)
- $\text{TMG}_t$ is the economy-wide profit rate at time $t$
- $\gamma$ is the effect of capital composition on price dynamics
- $\text{COM}_{s,t}$ is the organic composition of capital for sector $s$ at time $t$
The log-variance $h_{t,s} = \log \sigma_{s,t}^2$ follows an AR(1) process:

$$h_{t,s} = \alpha_s + \rho_s\,(h_{t-1,s} - \alpha_s) + \sigma_{\eta,s}\,\eta_{t,s}, \qquad \eta_{t,s} \sim \mathcal{N}(0, 1)$$
where:
- $\alpha_s$ is the long-run log-variance level
- $\rho_s \in (-1, 1)$ is the persistence parameter
- $\sigma_{\eta,s}$ is the volatility of log-volatility
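A quick base-R simulation of the log-variance recursion makes the clustering mechanism concrete (parameter values are illustrative):

```r
# Simulate an AR(1) log-variance path and the implied volatility series
set.seed(7)
alpha <- -1; rho <- 0.9; sigma_eta <- 0.3
T_len <- 200
h <- numeric(T_len)
h[1] <- alpha + sigma_eta / sqrt(1 - rho^2) * rnorm(1)  # draw from the stationary distribution
for (t in 2:T_len) {
  h[t] <- alpha + rho * (h[t - 1] - alpha) + sigma_eta * rnorm(1)
}
sigma_t <- exp(h / 2)  # time-varying volatility; clusters when rho is near 1
```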
The standardized innovations follow a Student-t distribution:

$$\varepsilon_{s,t} \sim t_\nu(0, 1)$$

where $\nu > 2$ is the degrees of freedom, estimated from the data; smaller $\nu$ implies heavier tails.
Sector-specific parameters are drawn from population distributions whose means may shift with each sector's capital composition:

$$\log \kappa_s \sim \mathcal{N}(\kappa_0 + \kappa_{\text{COM}}\,\text{COM}_s,\ \sigma_\kappa), \qquad \theta_s \sim \mathcal{N}(\theta_0 + \theta_{\text{COM}}\,\text{COM}_s,\ \sigma_\theta),$$

$$\beta_{0,s} \sim \mathcal{N}(\beta_{00} + \beta_{0,\text{COM}}\,\text{COM}_s,\ \sigma_{\beta_0}), \qquad \log(-a_{3,s}) \sim \mathcal{N}(a_{3,0},\ \sigma_{a_3}).$$

The hyperpriors follow standard recommendations (Gelman, 2006): weakly informative half-Normal priors on the scale parameters $\sigma_\kappa$, $\sigma_\theta$, $\sigma_{a_3}$, $\sigma_{\beta_0}$.
Strong Bayesian evidence for convergence requires:
- Mean-reversion: $\mathbb{P}(\kappa_s > 0 \mid \text{data}) \approx 1$ for all sectors.
- Stationarity: $\mathbb{P}(\kappa_s < 1 \mid \text{data}) \approx 1$ for all sectors (ensures finite half-lives).
- 95% credible intervals: $[\kappa_{s, 0.025}, \kappa_{s, 0.975}] \subset (0, 1)$ for all sectors.
Half-life interpretation: the time for a deviation to decay to half its initial value is

$$\tau_s = \frac{\ln 2}{\kappa_s}$$

For annual data, $\kappa_s = 0.14$ implies a half-life of roughly five years.
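The mapping from $\kappa_s$ to half-lives is a one-liner; these are the thresholds used in the interpretation tables later on:

```r
# Half-lives (in periods) implied by selected mean-reversion speeds
kappa_grid <- c(0.35, 0.14, 0.07, 0.035)
half_life  <- log(2) / kappa_grid
round(half_life, 1)  # 2.0, 5.0, 9.9, 19.8
```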
We did not find a prior methodology that combines: (i) Nonlinear OU with cubic drift, (ii) Stochastic volatility, (iii) Student-t innovations, (iv) Time-varying X-coupling via aggregate profitability, and (v) Full hierarchical Bayesian inference via MCMC. Adjacent traditions each miss at least one piece:
| Method | Limitation |
|---|---|
| Linear OU (Uhlenbeck & Ornstein, 1930) | Constant mean-reversion regardless of deviation size |
| GARCH models (Bollerslev, 1986) | Observation-driven volatility; no continuous-time interpretation |
| Standard SV (Kim, Shephard & Chib, 1998) | Linear dynamics; Gaussian innovations |
| Threshold models (Tong, 1990) | Regime-switching rather than smooth nonlinearity |
| Frequentist OU (Phillips & Yu, 2009) | Point estimates only; limited uncertainty quantification |
| Beta-convergence (Barro & Sala-i-Martin, 1992) | Cross-sectional; ignores time-series dynamics |
Novelty claim: given the vast adjacent literature, we phrase this as "to our knowledge, we did not find…" rather than as an absolute claim.
The primary application is testing whether market prices converge to production prices (prices proportional to labor values adjusted for equalized profit rates):
- Do market prices converge to production prices? Test $\kappa_s > 0$ with posterior probability.
- What is the speed of convergence? Quantify half-lives $\tau_s = \ln(2)/\kappa_s$.
- Does aggregate profitability modulate convergence? Test $\beta_1 \neq 0$.
- Is convergence stronger for large deviations? Test $a_{3,s} < 0$.
The framework enables analysis of:
- Cross-sectoral heterogeneity: Which sectors adjust faster/slower?
- Capital composition effects: Does organic composition affect equilibrium levels?
- Volatility patterns: Which sectors exhibit more volatile adjustment paths?
- Fat-tail events: How common are extreme price movements?
By including TMG (aggregate profit rate):
- Procyclical adjustment: Does convergence speed up during high-profit periods?
- Crisis dynamics: How do adjustment patterns change during profitability crises?
- Sectoral sensitivity: Which sectors are most sensitive to aggregate conditions?
The framework generalizes to any domain that exhibits the nonlinear mean-reversion problem:
- Test whether asset prices revert to fundamental values
- Model volatility clustering with stochastic volatility
- Capture fat-tailed returns with Student-t innovations
- Estimate mean-reversion speeds for pairs trading strategies
- Extend Vasicek/CIR models with nonlinear drift
- Incorporate stochastic volatility for realistic term structure models
- Estimate regime-dependent mean-reversion
- Model mean-reversion in commodity prices toward production costs
- Capture convenience yield dynamics with time-varying coupling
- Account for volatility clustering in energy markets
- Test whether pollution levels revert toward regulatory targets
- Model nonlinear adjustment to environmental shocks
- Estimate convergence speeds for sustainability metrics
# 1. Prepare data matrices
Y <- as.matrix(market_prices) # T x S matrix
X <- as.matrix(production_prices) # T x S matrix
TMG <- aggregate_profit_rate # Length T vector
COM <- as.matrix(capital_composition) # T x S matrix
K <- as.matrix(total_capital) # T x S matrix
# 2. Fit the nonlinear OU model
results <- fit_ou_nonlinear_tmg(
results_robust = list(),
Y = Y, X = X, TMG = TMG, COM = COM, CAPITAL_TOTAL = K,
model = "base",
priors = list(sigma_delta = 0.002),
com_in_mean = TRUE,
chains = 4,
iter = 8000,
warmup = 4000,
adapt_delta = 0.97,
max_treedepth = 12,
verbose = TRUE
)
# 3. Extract convergence evidence
conv_evidence <- extract_convergence_evidence(results)
# 4. Validate model fit
validate_ou_fit(results)
# 5. Visualize results
plot_beta_tmg(results) # Time-varying coupling
plot_drift_curves(results) # Nonlinear drift function
plot_sv_evolution(results) # Stochastic volatility paths
- Cubic OU specification: $\kappa(\theta - Y + a_3(Y-\theta)^3)$
- Sign constraint: $a_3 < 0$ enforced via a log-transformation, yielding basin-of-attraction dynamics
- Grid evaluation: drift computed over $z \in [-2.5, 2.5]$ for visualization
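The drift-grid evaluation can be reproduced in a few lines of base R (parameter values illustrative, not estimates):

```r
# Evaluate cubic vs. linear drift over the plotting grid z = Y - theta
z     <- seq(-2.5, 2.5, by = 0.05)
kappa <- 0.3
a3    <- -0.1
drift_cubic  <- kappa * (-z + a3 * z^3)
drift_linear <- kappa * (-z)
# With a3 < 0 the cubic drift dominates in magnitude everywhere on the grid:
# reversion is at least as strong as linear OU, and strictly stronger at the extremes
```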
- Baseline: sector-specific $\beta_{0,s}$
- Aggregate modulation: $\beta_1 \cdot \text{TMG}_t$, common across sectors
- Orthogonalization: optional orthogonalization of TMG with respect to the common factor
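Constructing the time-varying coupling is elementary; a sketch with hypothetical values (none of these numbers are estimates):

```r
# beta_s(t) = beta0_s + beta1 * TMG_t for one sector
TMG    <- c(0.12, 0.10, 0.08, 0.11)  # hypothetical aggregate profit-rate series
beta0  <- 0.4                        # sector-specific baseline coupling
beta1  <- 0.5                        # global TMG effect
beta_t <- beta0 + beta1 * TMG        # coupling peaks when profitability peaks
```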
- AR(1) log-variance: $h_t = \alpha + \rho(h_{t-1} - \alpha) + \sigma_\eta \eta_t$
- Persistence: $\rho_s$ estimated per sector with prior $\rho_s \sim \mathcal{N}(0.90, 0.05)$
- Volatility-of-volatility: $\sigma_{\eta,s}$ with a half-normal prior
- Student-t innovations: degrees of freedom $\nu$ estimated from the data
- Prior: $\nu - 2 \sim \text{Exp}(3)$, implying prior median $\nu \approx 2.23$
- Tail behavior: smaller $\nu$ means heavier tails
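The quoted prior median follows directly from the exponential distribution's median $\ln(2)/\text{rate}$; a one-line base-R check:

```r
# nu - 2 ~ Exponential(rate = 3)  =>  median(nu) = 2 + log(2)/3
nu_prior_median <- 2 + qexp(0.5, rate = 3)
round(nu_prior_median, 2)  # 2.23
```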
| Component | Implementation |
|---|---|
| Sampler | Stan's NUTS (No-U-Turn Sampler) |
| Parallelization | `reduce_sum` for within-chain threading |
| Backend | `cmdstanr` (preferred) or `rstan` |
| Adaptation | Target acceptance rate `adapt_delta` (default 0.97) |
| Tree depth | Maximum `max_treedepth` (default 12) |
| Diagnostic | Threshold | Interpretation |
|---|---|---|
| $\hat{R}$ | < 1.01 | Chain convergence |
| ESS (bulk) | > 400 | Effective sample size |
| ESS (tail) | > 400 | Tail estimation quality |
| Divergences | 0 | Geometric pathologies |
| Tree depth | < max | Sampler efficiency |
| Metric | Method | Use |
|---|---|---|
| PSIS-LOO | Pareto Smoothed Importance Sampling | Leave-one-out CV |
| ELPD | Expected Log Predictive Density | Model ranking |
| Pareto k | Tail diagnostic | LOO reliability |
| OOS RMSE | Out-of-sample | Predictive accuracy |
| OOS MAE | Out-of-sample | Robust prediction |
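The two out-of-sample metrics are standard; a base-R sketch with toy values (this re-implements the formulas for intuition, not the package's `evaluate_oos()` internals):

```r
# Out-of-sample forecast accuracy metrics
oos_rmse <- function(y, yhat) sqrt(mean((y - yhat)^2))  # penalizes large errors
oos_mae  <- function(y, yhat) mean(abs(y - yhat))       # more robust to outliers

y    <- c(1.0, 2.0, 3.0)   # held-out observations (toy values)
yhat <- c(1.1, 1.9, 3.3)   # posterior-mean forecasts (toy values)
oos_rmse(y, yhat)  # ~0.19
oos_mae(y, yhat)   # ~0.17
```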
# Install from GitHub
devtools::install_github("isadorenabi/bayesianOU")
# For full functionality, install Stan backend (cmdstanr recommended)
install.packages("cmdstanr", repos = c("https://mc-stan.org/r-packages/", getOption("repos")))
cmdstanr::install_cmdstan()
# Alternative: rstan backend
install.packages("rstan")

Required:
- `stats`: base statistical functions
- `graphics`: base plotting
- `utils`: utility functions
Suggested (for full functionality):
- `cmdstanr` or `rstan`: Bayesian inference via Stan
- `loo` (>= 2.5.0): PSIS-LOO cross-validation
- `posterior`: posterior manipulation utilities
- `ggplot2`: enhanced visualization
- `tidyr`: data reshaping
- `openxlsx`: Excel export
- `testthat` (>= 3.0.0): unit testing
library(bayesianOU)
# Prepare data
T_obs <- 50
S_sectors <- 10
Y <- matrix(rnorm(T_obs * S_sectors), nrow = T_obs, ncol = S_sectors)
X <- matrix(rnorm(T_obs * S_sectors), nrow = T_obs, ncol = S_sectors)
TMG <- rnorm(T_obs)
COM <- matrix(runif(T_obs * S_sectors), nrow = T_obs, ncol = S_sectors)
K <- matrix(runif(T_obs * S_sectors, 100, 1000), nrow = T_obs, ncol = S_sectors)
# Fit model
results <- fit_ou_nonlinear_tmg(
results_robust = list(),
Y = Y, X = X, TMG = TMG, COM = COM, CAPITAL_TOTAL = K,
chains = 4,
iter = 4000,
warmup = 2000,
verbose = TRUE
)
# Validate
validate_ou_fit(results)

library(bayesianOU)
# =============================================================================
# DATA PREPARATION
# =============================================================================
# Load your data (example assumes matrices are prepared)
Y <- as.matrix(market_prices_data)
X <- as.matrix(production_prices_data)
TMG <- aggregate_tmg_series
COM <- as.matrix(capital_composition_data)
K <- as.matrix(total_capital_data)
# Validate dimensions
stopifnot(
nrow(Y) == nrow(X),
ncol(Y) == ncol(X),
length(TMG) == nrow(Y),
all(dim(COM) == dim(Y)),
all(dim(K) == dim(Y))
)
# =============================================================================
# MODEL FITTING
# =============================================================================
results <- fit_ou_nonlinear_tmg(
results_robust = list(),
Y = Y, X = X, TMG = TMG, COM = COM, CAPITAL_TOTAL = K,
model = "base",
priors = list(sigma_delta = 0.002),
com_in_mean = TRUE,
chains = 6,
iter = 12000,
warmup = 6000,
thin = 2,
cores = 6,
threads_per_chain = 2,
hard_sum_zero = TRUE,
orthogonalize_tmg = TRUE,
factor_from = "X",
adapt_delta = 0.97,
max_treedepth = 12,
seed = 1234,
verbose = TRUE
)
# =============================================================================
# CONVERGENCE EVIDENCE
# =============================================================================
conv_evidence <- extract_convergence_evidence(results)
# Access key results
print(conv_evidence$kappa_ic95) # 95% CI for kappa by sector
print(conv_evidence$convergence) # Formal convergence test
print(conv_evidence$prob_convergence) # P(convergence | data)
# =============================================================================
# HALF-LIFE ANALYSIS
# =============================================================================
summ <- extract_posterior_summary(results$factor_ou$stan_fit)
half_lives <- log(2) / summ$kappa_s
names(half_lives) <- colnames(Y)
print(sort(half_lives))
# =============================================================================
# VISUALIZATION
# =============================================================================
# Time-varying beta evolution
plot_beta_tmg(results)
# Nonlinear drift curves
plot_drift_curves(results)
# Stochastic volatility path (sector 1)
plot_sv_evolution(results, sector = 1)
# =============================================================================
# MODEL VALIDATION
# =============================================================================
validate_ou_fit(results)
# MCMC diagnostics
message(sprintf("Divergences: %d", results$diagnostics$divergences))
message(sprintf("Max R-hat: %.4f", results$diagnostics$rhat_max))
message(sprintf("Share R-hat > 1.01: %.4f", results$diagnostics$rhat_share))
# PSIS-LOO
print(results$diagnostics$loo)
# Out-of-sample metrics
print(results$diagnostics$oos)
# =============================================================================
# HYPOTHESIS TESTS
# =============================================================================
fit <- results$factor_ou$stan_fit
beta1_draws <- as.vector(fit$draws("beta1", format = "matrix"))
message(sprintf("P(beta1 > 0): %.4f", mean(beta1_draws > 0)))
message(sprintf("95%% CI for beta1: [%.4f, %.4f]",
quantile(beta1_draws, 0.025),
quantile(beta1_draws, 0.975)))
message(sprintf("P(all kappa in (0,1)): %.4f", conv_evidence$prob_convergence))
message(sprintf("Proportion a3 < 0: %.4f", mean(summ$a3_s < 0)))
# =============================================================================
# EXPORT RESULTS
# =============================================================================
# Export to Excel (requires openxlsx)
export_model_comparison(results, results,
path = "model_results.xlsx")

| $\kappa_s$ | Half-life (years) | Interpretation |
|---|---|---|
| > 0.35 | < 2 | Very fast convergence |
| 0.14–0.35 | 2–5 | Fast convergence |
| 0.07–0.14 | 5–10 | Moderate convergence |
| 0.035–0.07 | 10–20 | Slow convergence |
| < 0.035 | > 20 | Very slow / near unit root |
| $a_{3,s}$ | Interpretation |
|---|---|
| < -0.1 | Strong basin-of-attraction effect |
| -0.1 to -0.01 | Moderate nonlinearity |
| -0.01 to 0 | Weak nonlinearity (near-linear) |
| = 0 | Linear OU (constant reversion speed) |
| $\beta_1$ | Interpretation |
|---|---|
| > 0.5 | Strong procyclical adjustment |
| 0.2–0.5 | Moderate TMG sensitivity |
| 0–0.2 | Weak TMG effect |
| < 0 | Countercyclical adjustment |
| $\rho_s$ | Interpretation |
|---|---|
| > 0.95 | Very high persistence (volatility clusters strongly) |
| 0.85–0.95 | High persistence |
| 0.70–0.85 | Moderate persistence |
| < 0.70 | Low persistence |
| $\nu$ | Interpretation |
|---|---|
| < 4 | Very heavy tails (frequent extremes) |
| 4–10 | Moderately heavy tails |
| 10–30 | Light tails (approaching Gaussian) |
| > 30 | Near-Gaussian |
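The practical impact of $\nu$ is easy to quantify: compare the probability of a three-sigma-style event under Student-t versus Gaussian innovations:

```r
# P(|eps| > 3) under Student-t with nu = 3, nu = 30, and the Gaussian limit
p_t3     <- 2 * pt(-3, df = 3)
p_t30    <- 2 * pt(-3, df = 30)
p_normal <- 2 * pnorm(-3)
# Heavy tails (nu = 3) make such events more than ten times as likely as a Gaussian would
```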
| Metric | Good | Acceptable | Problematic |
|---|---|---|---|
| $\hat{R}$ | < 1.01 | 1.01–1.05 | > 1.05 |
| ESS | > 1000 | 400–1000 | < 400 |
| Divergences | 0 | 1–10 | > 10 |
| Pareto k | < 0.5 | 0.5–0.7 | > 0.7 |
| Function | Description |
|---|---|
| `fit_ou_nonlinear_tmg()` | Main estimation function for the nonlinear OU model |
| `extract_posterior_summary()` | Extract median estimates and diagnostics |
| `extract_convergence_evidence()` | Formal convergence test with credible intervals |
| `validate_ou_fit()` | Print comprehensive model validation |
| `compare_models_loo()` | Compare two models via PSIS-LOO |
| `evaluate_oos()` | Out-of-sample forecast evaluation |
| Function | Description |
|---|---|
| `plot_beta_tmg()` | Plot time-varying $\beta_s(t)$ coupling |
| `plot_drift_curves()` | Plot the cubic drift function |
| `plot_sv_evolution()` | Plot stochastic volatility paths |
| `plot_posterior_densities()` | Plot posterior distributions |
| Function | Description |
|---|---|
| `export_model_comparison()` | Export results to an Excel workbook |
| `export_draws_csv()` | Export posterior draws to CSV |
| `export_summary_txt()` | Export a text summary |
| Function | Description |
|---|---|
| `zscore_train()` | Standardize using training-period statistics |
| `align_columns()` | Align matrix columns to a reference |
| `check_stan_backend()` | Verify Stan availability |
| `compute_common_factor()` | Extract the first principal component |
| `count_divergences()` | Extract the HMC divergence count |
| Function | Description |
|---|---|
| `build_beta_tmg_table()` | Construct the time-varying beta matrix |
| `summarize_sv_sigmas()` | Extract median volatility paths |
| `drift_decomposition_grid()` | Compute the drift over the z-grid |
| `build_accounting_block()` | TMG transformation accounting |
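Two of these helpers are simple enough to sketch in base R. These are hypothetical re-implementations for intuition, not the package's actual code:

```r
# Standardize a series using training-period statistics only (cf. zscore_train)
zscore_train <- function(x, train_idx) {
  m <- mean(x[train_idx])
  s <- sd(x[train_idx])
  (x - m) / s  # test-period values scaled by training moments, avoiding leakage
}

# First principal component of a T x S matrix (cf. compute_common_factor)
compute_common_factor <- function(M) {
  prcomp(M, center = TRUE, scale. = TRUE)$x[, 1]  # length-T factor scores
}
```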
The complete Stan model implements the following structure:
data {
int<lower=2> T; // Total time periods
int<lower=1> S; // Number of sectors
int<lower=2> T_train; // Training period length
matrix[T,S] Yz; // Standardized Y
matrix[T,S] Xz; // Standardized X
vector[T] zTMG_byK; // TMG (possibly orthogonalized)
vector[T] zTMG_exo; // TMG (exogenous)
int<lower=0,upper=1> soft_wedge;
real<lower=0> sigma_delta_z;
matrix[T,S] COM_ts; // Capital composition
matrix[T,S] K_ts; // Total capital
int<lower=0,upper=1> com_in_mean;
vector[S] mu_xz;
}

parameters {
vector[S] theta_s; // Equilibrium levels
vector[S] kappa_tilde; // log(kappa)
vector[S] a3_tilde; // log(-a3)
vector[S] beta0_s; // Sector-specific baseline coupling
real beta1; // Global TMG effect
// Hierarchical hyperparameters
real kappa0; real kappa_COM; real<lower=1e-6> sigma_kappa;
real theta0; real theta_COM; real<lower=1e-6> sigma_theta;
real a3_0; real<lower=1e-6> sigma_a3;
real beta00; real beta0_COM; real<lower=1e-6> sigma_beta0;
// Stochastic volatility
vector[S] alpha_s;
vector<lower=-0.995, upper=0.995>[S] rho_s;
vector<lower=1e-6>[S] sigma_eta_s;
matrix[T,S] h_raw;
// Student-t degrees of freedom
real<lower=0> nu_tilde;
// TMG wedge (for soft constraint)
vector[T] delta_z;
// COM effect
real gamma;
}

transformed parameters {
vector<lower=0>[S] kappa_s = exp(kappa_tilde);
vector[S] a3_s = -exp(a3_tilde);
real<lower=2> nu = 2 + nu_tilde;
// Stochastic volatility construction
matrix[T,S] h;
matrix[T,S] h_std;
for (s in 1:S) {
h_std[1,s] = h_raw[1,s] / sqrt(1 - square(rho_s[s]) + 1e-8);
for (t in 2:T) {
h_std[t,s] = rho_s[s] * h_std[t-1,s] + h_raw[t,s];
}
}
for (t in 1:T) {
for (s in 1:S) {
h[t,s] = alpha_s[s] + sigma_eta_s[s] * h_std[t,s];
}
}
}

model {
// Priors
theta_s ~ normal(theta0 + theta_COM * COM_s, sigma_theta);
kappa_tilde ~ normal(-1, 0.5);
a3_tilde ~ normal(log(0.05), 0.4);
beta0_s ~ normal(0, 0.5);
beta1 ~ normal(0.5, 0.25);
alpha_s ~ normal(0, 1);
rho_s ~ normal(0.90, 0.05);
sigma_eta_s ~ normal(0, 0.5);
to_vector(h_raw) ~ normal(0, 1);
nu_tilde ~ exponential(3);
gamma ~ normal(0, 0.5);
// Likelihood (parallelized via reduce_sum)
target += reduce_sum(ou_nl_partial_sum, t_idx, grainsize, ...);
}

For each observation, the conditional likelihood is Student-t, with location given by the conditional mean $\mu_{s,t}$ defined above and scale given by the stochastic-volatility path:

$$Y_{s,t+1} \sim t_\nu\!\left(\mu_{s,t},\ \exp(h_{t+1,s}/2)\right)$$
Bollerslev, T. (1986). Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics, 31(3), 307–327.
Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1(3), 515–534.
Kim, S., Shephard, N., & Chib, S. (1998). Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models. Review of Economic Studies, 65(3), 361–393.
Phillips, P. C. B., & Yu, J. (2009). Maximum Likelihood and Gaussian Estimation of Continuous Time Models in Finance. In Handbook of Financial Time Series (pp. 497–530). Springer.
Stan Development Team (2024). Stan User's Guide. https://mc-stan.org/users/documentation/
Tong, H. (1990). Non-linear Time Series: A Dynamical System Approach. Oxford University Press.
Uhlenbeck, G. E., & Ornstein, L. S. (1930). On the Theory of the Brownian Motion. Physical Review, 36(5), 823–841.
Vehtari, A., Gelman, A., & Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432.
We welcome contributions! To contribute, contact [isadore.nabi@pm.me](mailto:isadore.nabi@pm.me). Guidelines:
- Follow tidyverse style guide
- Document all exported functions with roxygen2
- Include unit tests for new functionality
- Update vignettes for user-facing changes
- Ensure Stan code compiles with both cmdstanr and rstan
If you use bayesianOU in your research, please cite:
@software{gomez2025bayesianou,
author = {Gómez Julián, José Mauricio},
title = {bayesianOU: Bayesian Nonlinear Ornstein-Uhlenbeck Models with Stochastic Volatility},
year = {2025},
url = {https://github.com/isadorenabi/bayesianOU}
}

MIT License - see the [LICENSE] file for details.
Author: José Mauricio Gómez Julián.
ORCID: https://orcid.org/0009-0000-2412-3150.
Year: 2025.