
Enable ESS calculation in metric calc #761

Open
acordonez wants to merge 22 commits into main from feature/metric-calc-ess

Enable ESS calculation in metric calc#761
acordonez wants to merge 22 commits into
mainfrom
feature/metric-calc-ess

Conversation


@acordonez (Contributor) commented May 4, 2026

Summary of changes and related issue

  • Add check_ess option
  • Update ESS code in metric_calc to run on data with extra dims like sim
  • Update unit tests as needed
    • metric_calc_param_validator.py: 83% coverage
    • processor_utils.py: 81% coverage
    • metric_calc.py: 90% coverage (across multiple test files)
  • Fix a variable name bug introduced in the confidence interval changes

Link to corresponding Jira ticket(s)

AE-1412

Testing

  • Unit tests written for new/modified code (goal: 80% coverage)
    • All public functions have unit tests
    • Functions that must produce specific values have unit tests
  • Verified that notebooks utilizing affected functions still work
  • Appropriate manual testing completed
  • Advanced Testing label added to PR if this PR makes major changes to the codebase

How to Test

Here's a notebook set up with several tests to check that the ESS test runs, or doesn't, as expected:
ess_test.ipynb

Also verify that unit tests are working in CI.

Documentation

  • Complex code includes comments explaining logic
  • Function documentation created (docstrings)

Code Quality

  • Follows PEP8 naming and style conventions
  • Helper functions prefixed with underscore _
  • Linting completed and all issues resolved
  • Does not replicate existing functionality
  • Aligns with general coding standards of existing codebase
  • Code generalized for multiple uses (unless too complex/time-intensive)

Review Process

  • PR review instructions provided:
    • Type of review requested (scientific/technical/debugging)
    • Level of review detail needed

Administrative Reminders

  • Jira ticket moved to "Review" when PR created
  • Jira ticket will be moved to "Done" when complete
  • PR branch will be deleted after merge

@acordonez

@claalmve and @neilSchroeder looking for both technical and methods review - I want to make sure I haven't changed the underlying methodology for the effective sample size, just extended it to run over datasets with more dimensions.
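For the methods review, the underlying formula (Zwiers and von Storch, 1995, cited in the docstring below) can be sketched roughly as follows. This is an illustrative sketch, not the actual `metric_calc` implementation: the function name `compute_ess` and the shape handling are assumptions, and the real code may differ (e.g. block sizes, the KS goodness-of-fit option).

```python
import numpy as np

def compute_ess(data: np.ndarray, axis: int = -1) -> np.ndarray:
    """Effective sample size n_eff = n * (1 - r1) / (1 + r1),
    where r1 is the lag-1 autocorrelation along `axis`
    (Zwiers & von Storch 1995).

    Vectorized over all other axes, so extra dimensions such as
    a simulation axis are preserved in the result.
    """
    data = np.moveaxis(data, axis, -1)               # put the time axis last
    n = data.shape[-1]
    anom = data - data.mean(axis=-1, keepdims=True)  # remove the mean
    var = (anom * anom).mean(axis=-1)                # lag-0 variance
    cov1 = (anom[..., :-1] * anom[..., 1:]).mean(axis=-1)  # lag-1 covariance
    r1 = np.clip(cov1 / var, 0.0, 1.0 - 1e-12)       # negative r1 -> n_eff = n
    return n * (1.0 - r1) / (1.0 + r1)
```

For white noise the estimate stays near the raw sample size, while strongly autocorrelated (e.g. AR(1)) series shrink toward much smaller effective sizes, which is the behavior the ESS check gates on.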

@acordonez acordonez marked this pull request as ready for review May 5, 2026 21:24

@neilSchroeder (Collaborator) left a comment


A few small things

Comment threads:
  • climakitae/new_core/processors/processor_utils.py (3 threads, outdated)
  • tests/new_core/processors/test_processor_utils.py
@acordonez acordonez requested a review from neilSchroeder May 12, 2026 17:48
@acordonez acordonez added the Advanced Testing Elevated testing tier for longer running tests label May 12, 2026

@claalmve (Collaborator) left a comment


This PR looks great! Tested the notebook and everything looks good. I only left one comment about my confusion on the docstring; otherwise no notes!


References
----------
Zwiers, F. W., and H. von Storch, 1995: Taking Serial Correlation into Account in Tests of the Mean. J. Climate, 8, 336–351, https://doi.org/10.1175/1520-0442(1995)008<0336:TSCIAI>2.0.CO;2.

Beautiful!

- block_size (int, optional): Block size in years. Default: 1
- goodness_of_fit_test (bool, optional): Perform KS test. Default: True
- alpha (float, optional): Significance level, between 0-1, for confidence intervals. Default UNSET
- alpha (float, optional): Alpha value (1 - significance level) for confidence intervals. Default UNSET

How come it's 1 - significance level and not just significance level?


@acordonez commented May 12, 2026


We could use confidence level as the variable rather than alpha; I just thought of alpha first. Alpha is the probability of falling in the tails of the distribution, outside the confidence interval.
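As a quick illustration of the alpha vs. confidence level distinction (hypothetical names, not the climakitae API): for a two-sided interval, alpha is split evenly between the two tails, so the interval covers probability 1 - alpha.

```python
from statistics import NormalDist

def normal_ci(mean: float, stderr: float, alpha: float = 0.05):
    """Two-sided normal confidence interval.

    alpha is the total tail probability (alpha/2 in each tail),
    so the interval's coverage, the confidence level, is 1 - alpha.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 when alpha = 0.05
    return mean - z * stderr, mean + z * stderr
```

So alpha = 0.05 yields a 95% confidence interval; shrinking alpha widens the interval.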


Updated the alpha variable description; since I've been getting different feedback on the significance level wording, I'm leaving that part out.
