[MRG] COBYLA solver: Track best-performing parameters during optimization#1240

Open
carolinafernandezp wants to merge 7 commits into jonescompneurolab:master from carolinafernandezp:track-best-params-cobyla

Conversation

@carolinafernandezp
Contributor

The COBYLA algorithm implemented by SciPy returns, at termination, the parameter set from its final iteration rather than the argmin over all evaluated parameter sets. This means that, currently, optim.net_ when optim = Optimizer(net, solver='cobyla') does not return the optimized network, but the network that was explored last.

The fix I'm implementing here is to track the best parameter set and corresponding objective value over all evaluated parameter sets. In other words, the proposed fix is to store and return the parameter set corresponding to the lowest observed objective value rather than the final iterate for the COBYLA solver.

We already do this with the Bayesian optimization function we implemented in externals, which is why we don't run into the same issue with the bayesian solver.
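The fix described above can be sketched as a thin wrapper around the objective function. The snippet below is an illustrative standalone example against scipy.optimize, not hnn-core's actual code: sphere, tracked_obj, and the best dict are hypothetical names used only to demonstrate the technique.

```python
from scipy.optimize import minimize

def sphere(x):
    # toy objective standing in for the real simulation-based objective
    return sum(xi ** 2 for xi in x)

# mutable record updated as a side-effect from inside the objective
best = {"obj": float("inf"), "params": None}

def tracked_obj(params):
    obj = sphere(params)
    # record the lowest objective value observed so far
    if obj < best["obj"]:
        best["obj"] = obj
        best["params"] = list(params)
    return obj

result = minimize(tracked_obj, x0=[2.0, -3.0], method="COBYLA")
# best["params"] holds the argmin over all evaluated points;
# result.x is merely COBYLA's final iterate, which may differ.
```

Because COBYLA only ever sees the wrapped function, no change to the solver itself is needed; the caller reads best afterwards instead of result.x.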

@asoplata
Collaborator

asoplata commented Feb 6, 2026

Thanks Carolina, is this ready to go or is it still a work-in-progress?

If it's ready to go, is it okay if I push some commits in order to fix the tests breaking? We just pushed a commit to master yesterday that fixes an issue with our CircleCI step failing (1fb6074). I can add it to your branch myself if that's okay with you (along with other commits that will fix unit test breakage).

@carolinafernandezp
Contributor Author

> Thanks Carolina, is this ready to go or is it still a work-in-progress?
>
> If it's ready to go, is it okay if I push some commits in order to fix the tests breaking? We just pushed a commit to master yesterday that fixes an issue with our CircleCI step failing (1fb6074). I can add it to your branch myself if that's okay with you (along with other commits that will fix unit test breakage).

Yes that sounds great, thank you @asoplata!

@asoplata
Collaborator

asoplata commented Feb 9, 2026

@carolinafernandezp Is this ready for review / merge? Or is it still a work in progress?

@asoplata
Collaborator

This appears to be a work in progress, since there are legitimate test failures occurring

@asoplata asoplata changed the title COBYLA solver: Track best-performing parameters during optimization [WIP] COBYLA solver: Track best-performing parameters during optimization Feb 11, 2026
@asoplata
Collaborator

Hey @carolinafernandezp I've made a pull request to your branch on your fork here carolinafernandezp#1 that should fix the failing test, see that for details

@ntolley ntolley requested review from asoplata and ntolley April 3, 2026 20:02
@ntolley
Copy link
Copy Markdown
Collaborator

ntolley commented Apr 3, 2026

Commenting to remind myself that this should be rebased and merged after #1221

@asoplata asoplata marked this pull request as draft April 28, 2026 18:01
…ed objective value rather than the final iterate (COBYLA solver)
This is a documentation-only change that adds some extra detail to both
the requirements of how a user creates their own objective function, and
explains that `best` is updated inside the objective functions as a
"side-effect", meaning its changes are meant to propagate "silently" but
WITHOUT having their new value returned by the function (and why).
@asoplata asoplata force-pushed the track-best-params-cobyla branch from 4a0a314 to baeec0d Compare April 28, 2026 19:01
@asoplata
Collaborator

I have done a rebase onto master in order to include Nick's conflicting changes with the rest of the optimization code. Note that anyone else working on Carolina's branch here will need to delete their existing branch and re-download the branch from her fork. For help with this, see here in our Contributing Guide.

@asoplata asoplata marked this pull request as ready for review April 28, 2026 20:18

@asoplata asoplata left a comment


I've fixed the remaining issues: passing tests, adding best support to _anticorr_evoked, and making a lot of documentation changes. I also fixed (?) a possible missing application of obj_fun_kwargs defaults to the non-CMA cases where relevant. Some of the doc changes have either ATTN or @ntolley in them, so you can grep for them to find them.

I think this is good to go on my end; not only does it pass my test runs but my GUI opt branch passes tests as well with these changes applied.

@codecov

codecov Bot commented Apr 28, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.04%. Comparing base (2bc88f1) to head (796df8e).
⚠️ Report is 6 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1240      +/-   ##
==========================================
+ Coverage   93.01%   93.04%   +0.02%     
==========================================
  Files          28       28              
  Lines        6616     6642      +26     
==========================================
+ Hits         6154     6180      +26     
  Misses        462      462              


@asoplata asoplata changed the title [WIP] COBYLA solver: Track best-performing parameters during optimization [MRG] COBYLA solver: Track best-performing parameters during optimization Apr 28, 2026
Comment thread hnn_core/optimization/general_optimization.py Outdated
Comment thread hnn_core/optimization/objective_functions.py
Comment thread hnn_core/optimization/objective_functions.py Outdated

# update best params; this is a "side-effect" that changes the `best` dictionary
# in-place in the parent scope
if best is not None and obj < best["obj"]:
Collaborator

I'm not sure if this would work in the batch case

obj will be a list and predicted_params will be a 2D array which matches the length of that list

I think that there will need to be an np.argmin(obj) in the batch case to identify which solution has the lowest score of the population, and then compare that to the current best["obj"]
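A minimal sketch of the batch-case reduction suggested here, assuming obj is a per-member list of scores and predicted_params is a 2D array with one row per population member (all names and values below are illustrative, not hnn-core's actual data):

```python
import numpy as np

# illustrative batch of 3 candidate parameter sets and their scores
obj = [0.8, 0.3, 0.5]
predicted_params = np.array([[1.0, 2.0],
                             [0.1, 0.2],
                             [0.5, 0.6]])

best = {"obj": np.inf, "params": None}

# pick the population member with the lowest score, then compare
# that single score against the running best
best_obj_idx = int(np.argmin(obj))
if obj[best_obj_idx] < best["obj"]:
    best["obj"] = obj[best_obj_idx]
    best["params"] = predicted_params[best_obj_idx, :]
```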

Collaborator

Good catch, I forgot that it's multidimensional

Collaborator

See my comment on the other one

# average dipoles per population
dpls.append(average_dipoles(data["dpl"]))

obj = [_rmse(dpl, obj_fun_kwargs["target"], tstop=tstop) for dpl in dpls]
Collaborator

Suggested change
obj = [_rmse(dpl, obj_fun_kwargs["target"], tstop=tstop) for dpl in dpls]
best_obj_idx = np.argmin(obj)
best_obj = obj[best_obj_idx]
best_params = predicted_params[best_obj_idx, :]

@asoplata one approach would be to add these variables to the batch case (in the non-batch case best_obj = obj)

Then you can re-use the code below which compares the current best_obj to all the previous ones
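One way to sketch this suggestion is a small helper that normalizes both the batch and non-batch cases to a single (best_obj, best_params) pair; select_best is a hypothetical name for illustration, not hnn-core code:

```python
import numpy as np

def select_best(obj, predicted_params):
    """Reduce a scalar objective (non-batch) or a list of objectives
    (batch) to one (best_obj, best_params) pair, so the downstream
    comparison against the running best can be shared."""
    if np.ndim(obj) == 0:
        # non-batch: a single evaluation, best_obj is just obj
        return float(obj), np.asarray(predicted_params)
    # batch: pick the population member with the lowest score
    idx = int(np.argmin(obj))
    return float(obj[idx]), np.asarray(predicted_params)[idx, :]
```

With this shape, the existing "compare to best['obj'] and update" code can stay identical in both branches.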

Collaborator

@asoplata asoplata May 1, 2026

I'm a bit confused now: best is only a parameter in _run_opt_cobyla, and it is only that instance of best that will be changed by the code here in these objective functions. We know that Cobyla will only ever execute the non-batch block of code. In contrast, in CMA, we already appear to have a way to get the "best" parameters that is different than in the Cobyla case, here:
https://github.com/carolinafernandezp/hnn-core/blob/531703e6633da8e33f30cb27dfcce49adb305014/hnn_core/optimization/general_optimization.py#L568-L569
where it seems to be taken from the CMA object directly.

My question now is: doesn't this alternative "best" approach in _run_opt_cma obviate (make unnecessary) any best handling inside the is-batch blocks of the objective functions here? We can certainly take the Cobyla best approach, make the changes you've suggested, and add this approach to both _run_opt_cma and both dipole objective functions. However, first I just want to make sure that this is necessary, and that es.result.xbest isn't already taking care of this for us (I don't know, but I think you added that line).

@ntolley ntolley mentioned this pull request May 1, 2026
17 tasks

Status: In progress · 3 participants