Train an Echo State Network (ESN) on a univariate time series. The function automatically handles data pre-processing, reservoir generation (i.e., the internal states), and model estimation and selection.
Arguments
- y
Numeric vector containing the response variable.
- lags
Integer vector with the lag(s) associated with the input variable.
- inf_crit
Character value. The information criterion used for variable selection: inf_crit = c("aic", "aicc", "bic", "hqc").
- n_diff
Integer vector. The nth-differences of the response variable.
- n_states
Integer value. The number of internal states of the reservoir. If n_states = NULL, the reservoir size is determined by tau * n_total, where n_total is the time series length.
- n_models
Integer value. The maximum number of (random) models to train for model selection. If n_models = NULL, the number of models is defined as n_states * 2.
- n_initial
Integer value. The number of observations of internal states for the initial drop-out (throw-off). If n_initial = NULL, the throw-off is defined as n_total * 0.05, where n_total is the time series length.
- n_seed
Integer value. The seed for the random number generator (for reproducibility).
- alpha
Numeric value. The leakage rate (smoothing parameter) applied to the reservoir (value greater than 0 and less than or equal to 1).
- rho
Numeric value. The spectral radius for scaling the reservoir weight matrix (value often between 0 and 1, but values above 1 are possible).
- tau
Numeric value. The reservoir scaling parameter to determine the reservoir size based on the time series length (value greater than 0 and less than or equal to 1).
- density
Numeric value. The connectivity of the reservoir weight matrix (dense or sparse) (value greater than 0 and less than or equal to 1).
- lambda
Numeric vector. Lower and upper bound of the lambda sequence for ridge regression (numeric vector of length 2 with both values greater than 0 and lambda[1] < lambda[2]).
- scale_win
Numeric value. The lower and upper bound of the uniform distribution for scaling the input weight matrix (value greater than 0; weights are sampled from U(-scale_win, scale_win)).
- scale_wres
Numeric value. The lower and upper bound of the uniform distribution for scaling the reservoir weight matrix (value greater than 0; weights are sampled from U(-scale_wres, scale_wres) before applying rho and density).
- scale_inputs
Numeric vector. The lower and upper bound for scaling the time series data (numeric vector of length 2 with scale_inputs[1] < scale_inputs[2]; often symmetric, e.g., c(-0.5, 0.5) or c(-1, 1)).
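The arguments above can be combined as in the following sketch, which trains an ESN with several hyperparameters set explicitly. All argument names are taken from the list above; the concrete values (e.g., rho = 0.9, density = 0.1, the lambda bounds) are illustrative choices only, not recommended defaults.

# Illustrative call with hyperparameters set explicitly
# (values are examples only, not recommended defaults)
xdata <- as.numeric(AirPassengers)
xmodel <- train_esn(
  y = xdata,
  lags = 1,                     # lag(s) of the input variable
  inf_crit = "bic",             # information criterion for variable selection
  n_diff = 1,                   # first differences of the response
  n_states = 50,                # reservoir size (overrides tau * n_total)
  alpha = 1,                    # leakage rate
  rho = 0.9,                    # spectral radius
  density = 0.1,                # reservoir connectivity
  lambda = c(1e-4, 2),          # bounds of the ridge penalty sequence
  scale_inputs = c(-0.5, 0.5),  # scaling interval for the time series data
  n_seed = 42                   # seed for reproducibility
)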
Value
A list containing:
- actual: Numeric vector containing the actual values.
- fitted: Numeric vector containing the fitted values.
- resid: Numeric vector containing the residuals.
- states_train: Numeric matrix containing the internal states.
- method: A list containing several objects and meta information of the trained ESN (weight matrices, hyperparameters, model metrics, etc.).
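A minimal sketch of inspecting these components after training, assuming the returned object is a plain list with the named elements above (as stated):

# Access the components of the returned list
xmodel <- train_esn(y = as.numeric(AirPassengers))
head(xmodel$fitted)                 # fitted values
head(xmodel$resid)                  # residuals
dim(xmodel$states_train)            # matrix of internal states
str(xmodel$method, max.level = 1)   # weights, hyperparameters, metrics, etc.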
References
Häußer, A. (2026). Echo State Networks for Time Series Forecasting: Hyperparameter Sweep and Benchmarking. arXiv preprint arXiv:2602.03912. https://arxiv.org/abs/2602.03912
Jaeger, H. (2001). The “echo state” approach to analysing and training recurrent neural networks with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148(34):13.
Jaeger, H. (2002). Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the "echo state network" approach.
Lukoševičius, M. (2012). A practical guide to applying echo state networks. In Neural Networks: Tricks of the Trade: Second Edition, pages 659–686. Springer.
Lukoševičius, M. and Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3):127–149.
See also
Other base functions:
forecast_esn(),
is.esn(),
is.forecast_esn(),
is.tune_esn(),
plot.esn(),
plot.forecast_esn(),
plot.tune_esn(),
print.esn(),
summary.esn(),
summary.tune_esn(),
tune_esn()
Examples
xdata <- as.numeric(AirPassengers)
xmodel <- train_esn(y = xdata)
summary(xmodel)
#>
#> --- Inputs -----------------------------------------------------
#> n_obs = 144
#> n_diff = 1
#> lags = 1
#>
#> --- Reservoir generation ---------------------------------------
#> n_states = 57
#> alpha = 1
#> rho = 1
#> density = 0.5
#> scale_inputs = [-0.5, 0.5]
#> scale_win = [-0.5, 0.5]
#> scale_wres = [-0.5, 0.5]
#>
#> --- Model selection --------------------------------------------
#> n_models = 114
#> df = 16.68
#> lambda = 0.0457
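A possible continuation of the example, using related functions listed under "See also". The forecast horizon argument n_ahead is an assumption here and may differ from the actual interface of forecast_esn(); see its own help page.

is.esn(xmodel)    # check the class of the trained model
plot(xmodel)      # plot method for objects of class "esn"

# Forecast the trained ESN; n_ahead is assumed to be the horizon argument,
# see ?forecast_esn for details
xfcst <- forecast_esn(xmodel, n_ahead = 12)
plot(xfcst)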
