Dear all,
This is somewhat related to Issue #221. I was experimenting with hyperparameter optimization via `optimize!(gp)` and get a lot of `LinearAlgebra.PosDefException` errors. When I simply rerun `optimize!(gp)`, the errors eventually disappear and the optimization converges, but the model then performs considerably worse on test data than before.
After optimization, the `mll`/`target` values are actually reduced by the optimizer.
I would expect that if the error occurs, it should occur consistently rather than disappear on rerunning.
Is this behavior expected?
Here is a minimal example to reproduce:
```julia
using GaussianProcesses

# Training data: 5 points in [0,1]^2, targets are the column sums
X = [0.4 1.0 0.6 0.8 0.2; 1.0 0.4 0.2 0.8 0.6]
y = map(sum, eachcol(X))

# Test data
Xₜₑₛₜ = [0.229267 0.939358 0.538846 0.180355 0.545399; 0.701589 0.149632 0.293056 0.170041 0.423608]
yₜₑₛₜ = map(sum, eachcol(Xₜₑₛₜ))

gp = GPE(X, y, MeanZero(), SE(0.0, 0.0), -6.0)

@info "Before Opt: $(sqrt(sum((GaussianProcesses.predict(gp, Xₜₑₛₜ)[1] .- yₜₑₛₜ).^2)/length(yₜₑₛₜ)))"

# Retry optimize! until it no longer throws (e.g. a PosDefException)
function optim()
    try
        optimize!(gp)
    catch
        optim()
    end
end
optim()

@info "After HyperOpt: $(sqrt(sum((GaussianProcesses.predict(gp, Xₜₑₛₜ)[1] .- yₜₑₛₜ).^2)/length(yₜₑₛₜ)))"
```
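For context, I assume the `PosDefException` comes from the Cholesky factorization of the covariance matrix failing when the optimizer tries hyperparameters (e.g. a large length-scale combined with the tiny `logNoise = -6.0`) that make the kernel matrix numerically singular. Here is a minimal NumPy sketch of that failure mode, independent of GaussianProcesses.jl; the kernel, inputs, and jitter value are illustrative assumptions, not the library's internals:

```python
import numpy as np

# Two nearly identical inputs make the SE covariance numerically singular.
X = np.array([[0.4, 1.0],
              [0.4 + 1e-9, 1.0],
              [0.6, 0.2]])

def se_kernel(X, ell=1.0):
    """Squared-exponential kernel matrix (unit signal variance)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

K = se_kernel(X)

try:
    np.linalg.cholesky(K)  # fails: K is (numerically) rank-deficient
    print("Cholesky succeeded")
except np.linalg.LinAlgError:
    print("Cholesky failed: matrix not positive definite")

# A small jitter on the diagonal (which is effectively what a noise
# term contributes) restores positive definiteness.
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(X)))
print("Cholesky with jitter succeeded")
```

That would also explain why the error is sensitive to the exact hyperparameter values the optimizer happens to visit, although not why rerunning eventually converges to a worse model.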
Greetings,
Jan