
Conversation

@SarahAlidoost (Collaborator) commented Jan 15, 2026

closes #73

In this notebook, I implemented a different approach than the one in pcse notebook / 11 Optimizing partitioning in a PCSE model.ipynb.

I will update the docs after #62

@SarahAlidoost marked this pull request as ready for review on January 15, 2026 at 15:35
@SCiarella (Collaborator) left a comment


👋 @SarahAlidoost, the notebook looks nice.

A first technical comment is that all the notebooks need to add:

from diffwofost.physical_models.config import ComputeConfig
ComputeConfig.set_device('cpu')

otherwise they fail when a GPU is available, because the GPU becomes the model's default device while the notebook's variables stay on the CPU.


In terms of content, I see that we are no longer optimizing a logistic function, but rather optimizing the parameters directly. To me, this seems cleaner (and it is probably faster), but maybe those sigmoid functions have a deeper meaning that we are losing? Anyway, let's wait for someone with more knowledge to comment on this issue.
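
To make the contrast concrete, a minimal sketch of the two parameterizations, assuming PyTorch; names and values are illustrative, not code from either notebook:

import torch

# PCSE notebook: optimize the parameters of a logistic (sigmoid) curve,
# then tabulate it over DVS. slope, midpoint, and 0.65 are made-up values.
slope = torch.tensor(-5.0, requires_grad=True)
midpoint = torch.tensor(0.8, requires_grad=True)
dvs = torch.linspace(0, 2, 21)
fl_from_sigmoid = 0.65 * torch.sigmoid(slope * (dvs - midpoint))

# This PR: optimize the table values themselves as free parameters.
fl_direct = torch.rand(21, requires_grad=True)

In the first case the sigmoid restricts the curve to a fixed monotone shape; in the second, each table value can move independently.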


Finally, I do not fully understand the last figure: is it just the xy coordinates of the predicted vs. true parameters?

@SarahAlidoost (Collaborator, Author) commented Jan 16, 2026

> 👋 @SarahAlidoost, the notebook looks nice.
>
> A first technical comment is that all the notebooks need to add:
>
> from diffwofost.physical_models.config import ComputeConfig
> ComputeConfig.set_device('cpu')
>
> otherwise they fail when a GPU is available, because the GPU becomes the model's default device while the notebook's variables stay on the CPU.

Good point 👍 I'll fix it.

> In terms of content, I see that we are no longer optimizing a logistic function, but rather optimizing the parameters directly. To me, this seems cleaner (and it is probably faster), but maybe those sigmoid functions have a deeper meaning that we are losing? Anyway, let's wait for someone with more knowledge to comment on this issue.

You’re right! In the notebook pcse notebook / 11 Optimizing partitioning in a PCSE model.ipynb, the partitioning variables (FL and FO) are first estimated using a sigmoid-based approximation (see the FLTB and FOTB classes in cell 17). Lookup tables are then created from sampled DVS values (e.g. np.arange(0, 2.1, stepsize)). These tables are eventually passed to wofost72. Then, as we know, the partitioning model estimates the values of FL and FO again using an interpolation approach (via AfgenTrait).
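
For reference, a minimal sketch of that pipeline; the parameter values are made up, and the real FLTB/FOTB classes in cell 17 will differ:

import numpy as np

stepsize = 0.1
dvs = np.arange(0, 2.1, stepsize)

# Sigmoid-based approximation of FL over DVS (illustrative parameter values).
fl = 0.65 / (1.0 + np.exp(5.0 * (dvs - 0.8)))

# Afgen-style lookup table: flattened (x, y) pairs.
fltb = np.column_stack([dvs, fl]).ravel()

# Inside the partitioning model, AfgenTrait linearly interpolates the table;
# np.interp stands in for that step here.
fl_interp = np.interp(0.95, fltb[0::2], fltb[1::2])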

The notebook in this PR uses a softmax instead of a sigmoid. It also enforces some physical constraints: the partitioning values (FL and FO) must be positive, the sum of FLTB, FOTB, and FSTB must equal 1, and the x-values (i.e. DVS) must be strictly increasing. The optimization is based on the outputs of partitioning (FL and FO). Thinking about it now, it would probably make sense to rename the generic x and y variables to dvs and the actual partitioning variable names, and to define DVS as np.arange(0, 2.1, stepsize), or to use another approximation between the partitioning variables and DVS.
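
A minimal sketch of how those constraints can be enforced, assuming PyTorch and illustrative shapes (the notebook's actual code may differ):

import torch

n = 8  # number of (DVS, fraction) control points; illustrative

# Unconstrained parameters to optimize.
logits = torch.randn(n, 3, requires_grad=True)  # one row per point: FL, FS, FO
x_raw = torch.randn(n, requires_grad=True)

# Softmax across the three organs: FL, FS, FO are positive and sum to 1.
fractions = torch.softmax(logits, dim=1)
fl, fs, fo = fractions[:, 0], fractions[:, 1], fractions[:, 2]

# Strictly increasing x-values (DVS): cumulative sum of strictly positive steps,
# rescaled so the last node sits at DVS = 2.
steps = torch.nn.functional.softplus(x_raw) + 1e-6
dvs = torch.cumsum(steps, dim=0)
dvs = 2.0 * dvs / dvs[-1]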

> Finally, I do not fully understand the last figure: is it just the xy coordinates of the predicted vs. true parameters?

The x axis is DVS and the y axis shows the partitioning values (FL, FS, and FO), actual vs. predicted.

@SarahAlidoost (Collaborator, Author) commented:

I fixed the variable naming, the plots, and the approximation of the partitioning variables. What still remains is the logic of the loss function: right now we compute the loss between the outputs of the partitioning module (FO, FS, and FL) and the test data. This might not be the right approach; something to explore further 🤔
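
In other words, the current loss is a distance between the partitioning outputs and the test data; a sketch with MSE as the stand-in metric and illustrative variable names (the notebook may use a different metric):

import torch

# *_pred: outputs of the partitioning module over the simulation;
# *_obs: the corresponding test data.
def partitioning_loss(fl_pred, fs_pred, fo_pred, fl_obs, fs_obs, fo_obs):
    mse = torch.nn.functional.mse_loss
    return mse(fl_pred, fl_obs) + mse(fs_pred, fs_obs) + mse(fo_pred, fo_obs)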
