Conversation
…, fix for start_lpjml to not close the coupled program too early, deprecation warning function
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main      #17      +/-   ##
==========================================
+ Coverage   78.47%   80.23%   +1.75%
==========================================
  Files           7        7
  Lines        1640     1983     +343
==========================================
+ Hits         1287     1591     +304
- Misses        353      392      +39
```

☔ View full report in Codecov by Sentry.
Pull request overview
This PR updates pycoupler for compatibility with newer pycopanlpjml/LPJmL coupling expectations, including a new default coupled model name, improved Slurm coupled-job handling, additional port/process cleanup, and expanded test coverage for utilities, run submission, and NetCDF export helpers.
Changes:
- Default coupled model name updated to `copan`, with support for overriding via a coupled config.
- Slurm coupled-job submission now patches `slurm.jcf` to ensure the coupler process is waited on (and submits via `sbatch` when needed).
- Adds NetCDF writing + grid transform utilities for `LPJmLData`/`LPJmLDataSet`, plus substantial new tests.
Reviewed changes
Copilot reviewed 16 out of 17 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| `pycoupler/config.py` | Adjusts coupled config defaults/behavior (model default + coupled config support). |
| `pycoupler/coupler.py` | Adds port cleanup utilities; fixes historic years handling; improves input sending conversion logic. |
| `pycoupler/run.py` | Introduces start_lpjml and a deprecated alias; patches the Slurm submission flow for coupled runs. |
| `pycoupler/utils.py` | Adds a standardized deprecation warning helper. |
| `pycoupler/data.py` | Implements transform() and NetCDF export helpers for LPJmL data structures. |
| `tests/test_config.py` | Updates expectations for the new default coupled model. |
| `tests/test_couple.py` | Adjusts time assignment and historic years expectation in the coupling test. |
| `tests/test_run.py` | Extends Slurm submission tests to cover slurm.jcf patching and the sbatch flow. |
| `tests/test_run_additional.py` | Adds additional run-related tests, incl. deprecated-alias warning behavior. |
| `tests/test_utils.py` | Extends detect_io_type test coverage for invalid UTF-8 bytes. |
| `tests/test_utils_additional.py` | Adds coverage for create_subdirs and read_json. |
| `tests/test_data.py` | Adds comprehensive coverage for grid transforms and NetCDF writing. |
| `tests/test_coupler_utils.py` | Adds coverage for new port cleanup utilities. |
| `tests/data/invalid_utf8.bin` | Adds a sample binary file (currently appears redundant with the updated test). |
| `pycoupler/release.py` | Clarifies release script instructions. |
| `pycoupler/__init__.py` | Minor formatting cleanup. |
| `CITATION.cff` | Bumps version to 1.7.0. |
Comments suppressed due to low confidence (1)
pycoupler/run.py:265
`slurm_jcf_dir` is documented/used as the directory where `slurm.jcf` should be written, but the `lpjsubmit` subprocess is still executed without `cwd=...`. If `lpjsubmit` writes `slurm.jcf` to its working directory (typical), `_patch_slurm_and_submit()` will look in `slurm_jcf_dir/slurm.jcf` and fail even though the file exists elsewhere. Consider ensuring the directory exists and running `lpjsubmit` with `cwd=slurm_jcf_dir` (or otherwise directing `lpjsubmit` output) so `slurm_jcf_path` matches where the file is actually created.
```python
# run in coupled mode and pass coupling program/model
needs_coupler_wait = bool(couple_to)
if slurm_jcf_dir is None:
    slurm_jcf_dir = os.getcwd()
slurm_jcf_path = Path(slurm_jcf_dir) / "slurm.jcf"
couple_file = None
if couple_to:
    python_path = "python3"
    if venv_path:
        python_path = os.path.join(venv_path, "bin/python")
        if not os.path.isfile(python_path):
            raise FileNotFoundError(
                f"venv path contains no python binary at '{python_path}'."
            )
    bash_script = f"""#!/bin/bash
# Define the path to the config file
config_file="{config_file}"
# Call the Python script with the config file as an argument
{python_path} {couple_to} $config_file
"""
    couple_file = f"{output_path}/copan_lpjml.sh"
    with open(couple_file, "w") as file:
        file.write(bash_script)
    # Change the permissions of the file to make it executable
    run(["chmod", "+x", couple_file])
    cmd.extend(["-couple", couple_file])
if needs_coupler_wait:
    cmd.append("-norun")
cmd.extend([str(ntasks), config_file])
# Initialize submit_status in higher scope
submit_status = None
# set LPJROOT to model_path to be able to call lpjsubmit
try:
    os.environ["LPJROOT"] = config.model_path
    # call lpjsubmit via subprocess and return status if successful
    submit_status = run(cmd, capture_output=True)
except Exception as e:
```
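A minimal sketch of the suggested fix, reusing the names from the snippet above (`run`, `cmd`, `config`, and `slurm_jcf_dir` are assumed to be as shown there):

```python
# Sketch only: create the target directory and pin lpjsubmit's working
# directory so slurm.jcf lands at slurm_jcf_path.
os.makedirs(slurm_jcf_dir, exist_ok=True)
os.environ["LPJROOT"] = config.model_path
submit_status = run(cmd, capture_output=True, cwd=slurm_jcf_dir)
```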
```python
# read all historic outputs
hist_years = list()
output_years = list()
for year in self.get_historic_years():
    hist_years.append(year)
    if year == self._config.outputyear:
        output_dict = self.read_output(year=year, to_xarray=False)
        output_years.append(year)
    elif year > self._config.outputyear:
        output_dict = append_to_dict(
            output_dict, self.read_output(year=year, to_xarray=False)
        )
        output_years.append(year)

if not output_dict:
    raise ValueError(
        f"No historic output found for year {self._config.outputyear}"
    )
```
read_historic_output() uses output_dict before it is guaranteed to be initialized. If self._config.outputyear is not encountered in get_historic_years() (e.g., outputyear < firstyear), the elif year > self._config.outputyear: branch will reference output_dict before assignment, and the later if not output_dict: check can also raise UnboundLocalError. Initialize output_dict up front (e.g., to {}) and guard the append path until the first read succeeded; also consider raising a clearer error when no output years were collected.
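A sketch of the suggested guard (a hypothetical restructuring, reusing the names from the snippet above):

```python
# Sketch only: initialize up front and guard the append path.
hist_years = []
output_years = []
output_dict = {}
for year in self.get_historic_years():
    hist_years.append(year)
    if year < self._config.outputyear:
        continue  # nothing to read before the first output year
    data = self.read_output(year=year, to_xarray=False)
    # The first successful read initializes output_dict; later reads append.
    output_dict = data if not output_dict else append_to_dict(output_dict, data)
    output_years.append(year)

if not output_years:
    raise ValueError(
        f"No historic output found from outputyear {self._config.outputyear} onwards"
    )
```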
```python
enc: dict[str, object] = {}
target_fill = fill_value
if target_fill is None:
    if np.issubdtype(dtype, np.floating):
        target_fill = DEFAULT_NETCDF_FILL_VALUE
    elif np.issubdtype(dtype, np.integer):
        target_fill = -999
```
Default integer _FillValue is inconsistent between LPJmLData.to_netcdf() (uses -999) and _netcdf_encoding() / LPJmLDataSet.to_netcdf() (uses -9999). This can lead to mixed conventions depending on which writer is used. Consider standardizing on a single integer fill value (and ideally reusing one constant) across both code paths.
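A sketch of one way to standardize, using a hypothetical shared constant reused by both writers (names and values other than `DEFAULT_NETCDF_FILL_VALUE` are illustrative):

```python
import numpy as np

DEFAULT_NETCDF_FILL_VALUE = 1e20  # existing float default (value illustrative)
DEFAULT_INT_FILL_VALUE = -9999  # hypothetical shared integer default


def _default_fill(dtype, fill_value=None):
    """Resolve the _FillValue for a dtype from one set of shared constants."""
    if fill_value is not None:
        return fill_value
    if np.issubdtype(dtype, np.floating):
        return DEFAULT_NETCDF_FILL_VALUE
    if np.issubdtype(dtype, np.integer):
        return DEFAULT_INT_FILL_VALUE  # same value in every code path
    return None
```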
```python
# Test binary file that can't be decoded as UTF-8 (triggers UnicodeDecodeError)
# Create a binary file that's not valid UTF-8
from pathlib import Path

binary_file = Path(test_path) / "data" / "invalid_utf8.bin"
binary_file.parent.mkdir(parents=True, exist_ok=True)
binary_file.write_bytes(b"\xff\xfe\x00\x01")  # Invalid UTF-8 sequence
result = detect_io_type(str(binary_file))
assert result == "raw"  # Should default to 'raw' when decode fails
```
This test writes invalid_utf8.bin into the checked-in tests/data directory via test_path, which can fail in environments where the source tree is read-only and also dirties the working tree during test runs. Prefer using tmp_path (or tmp_path_factory) for generated files, and keep test_path usage read-only.
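A sketch of the same check using pytest's `tmp_path` fixture instead of the checked-in `tests/data` directory (the import path for `detect_io_type` is assumed):

```python
from pycoupler.utils import detect_io_type  # assumed import path


def test_detect_io_type_invalid_utf8(tmp_path):
    """Invalid UTF-8 bytes should fall back to 'raw' without dirtying tests/data."""
    binary_file = tmp_path / "invalid_utf8.bin"
    binary_file.write_bytes(b"\xff\xfe\x00\x01")  # invalid UTF-8 sequence
    assert detect_io_type(str(binary_file)) == "raw"
```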
```python
coupled_config=None,
dependency=None,
coupled_year=None,
temporal_resolution="annual",
write_output=[],
write_file_format="cdf",
append_output=True,
```
Adding coupled_config as a positional parameter before dependency changes the positional argument order of set_coupled(), which can break existing callers that used positional args. To preserve backward compatibility, consider making the new parameter keyword-only (e.g., by adding * before it) or appending it at the end of the signature.
Suggested change:

```diff
-coupled_config=None,
 dependency=None,
 coupled_year=None,
 temporal_resolution="annual",
 write_output=[],
 write_file_format="cdf",
 append_output=True,
+*,
+coupled_config=None,
```
pycoupler/coupler.py (Outdated)
| subprocess.run(["kill", "-9", pid.strip()], timeout=5) | ||
| killed_count += 1 | ||
| except subprocess.TimeoutExpired: |
kill_process_on_port() increments killed_count even if the kill command fails (non-zero return code), because subprocess.run() is called without check=True and its return value is ignored. This can report success while leaving the port blocked. Consider checking the returncode (or using check=True and handling CalledProcessError) before counting a PID as killed.
| subprocess.run(["kill", "-9", pid.strip()], timeout=5) | |
| killed_count += 1 | |
| except subprocess.TimeoutExpired: | |
| kill_result = subprocess.run( | |
| ["kill", "-9", pid.strip()], | |
| timeout=5, | |
| ) | |
| if kill_result.returncode == 0: | |
| killed_count += 1 | |
| except subprocess.TimeoutExpired: | |
| # If killing a specific PID times out, skip counting it |
```python
@contextmanager
def safe_port_binding(host, port):
    """Context manager for safe port binding with automatic cleanup."""
    # Clean up any existing processes on the port first
    kill_process_on_port(port)

    try:
        yield port
    finally:
        # Clean up on exit
        kill_process_on_port(port)
```
safe_port_binding(host, port) currently never uses host and does not actually bind/check the port; it only performs pre/post cleanup. This makes the context manager name/signature misleading. Either remove the unused host parameter and rename to reflect its behavior (cleanup-only), or implement an actual bind/check step to ensure the port is available.
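A sketch of the bind/check variant using only the standard library (`kill_process_on_port` as in the snippet above; this is one possible behavior, not the PR's implementation):

```python
import socket
from contextlib import contextmanager


@contextmanager
def safe_port_binding(host, port):
    """Clean up the port, then verify it can actually be bound before yielding."""
    kill_process_on_port(port)
    # Probe-bind to confirm the port is free, then close immediately so the
    # caller can bind it themselves.
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    probe.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        probe.bind((host, port))
    except OSError as exc:
        raise RuntimeError(f"port {port} on {host} is still in use") from exc
    finally:
        probe.close()
    try:
        yield port
    finally:
        # Clean up on exit
        kill_process_on_port(port)
```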
```python
global_attrs = _ensure_cf_metadata(lpjml)
dataset = lpjml.to_dataset()
_suppress_coordinate_fill(dataset)
if global_attrs:
    dataset.attrs.update(dict(global_attrs))
dataset.attrs.setdefault("Conventions", "CF-1.8")

var_name = lpjml.name or "__lpjml_data__"
dtype = lpjml.dtype
encoding: dict[str, dict[str, object]] = {}

enc: dict[str, object] = {}
target_fill = fill_value
if target_fill is None:
    if np.issubdtype(dtype, np.floating):
        target_fill = DEFAULT_NETCDF_FILL_VALUE
    elif np.issubdtype(dtype, np.integer):
        target_fill = -999
if target_fill is not None:
    enc["_FillValue"] = target_fill

if compression and np.issubdtype(dtype, np.number):
    enc["zlib"] = True
    enc["complevel"] = complevel

encoding[var_name] = enc
```
LPJmLData.to_netcdf() builds encoding using var_name = lpjml.name or "__lpjml_data__", but then calls dataset = lpjml.to_dataset() without forcing that variable name. When lpjml.name is None (allowed when path is provided), xarray will use its own default variable name, so the encoding entry may not apply to the written variable. Consider calling lpjml.to_dataset(name=var_name) (or renaming before to_dataset) so the encoding key always matches the dataset variable name.
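A minimal sketch of the suggested alignment, assuming `to_dataset()` forwards xarray's `name` argument (as `xarray.DataArray.to_dataset` does):

```python
# Sketch only: force the dataset variable name to match the encoding key.
var_name = lpjml.name or "__lpjml_data__"
dataset = lpjml.to_dataset(name=var_name)
# ... build enc as above ...
encoding = {var_name: enc}  # key now always matches the written variable
```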
```python
import tempfile
import copy
import warnings
import subprocess
```
Module 'subprocess' is imported with both 'import' and 'import from'.
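A minimal cleanup sketch; the exact `from subprocess import …` statement in the module is not shown above, so this is an assumption about its imports:

```python
# Before (flagged by the linter): both styles for the same module.
# import subprocess
# from subprocess import run
#
# After: import the module once and qualify calls explicitly.
import subprocess

subprocess.run(["kill", "-9", "12345"], timeout=5)  # illustrative PID
```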
```python
try:
    subprocess.run(["kill", "-9", pid.strip()], timeout=5)
    killed_count += 1
except subprocess.TimeoutExpired:
```
'except' clause does nothing but pass and there is no explanatory comment.
Suggested change:

```diff
 except subprocess.TimeoutExpired:
+    # Ignore timeout errors during best-effort port cleanup.
```
| raise ValueError("Test exception") | ||
|
|
||
| # Verify cleanup was called twice (start and finally) | ||
| assert mock_kill.call_count == 2 |
This statement is unreachable.
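A sketch of how the assertion becomes reachable, assuming the test wraps the raise in `pytest.raises` and patches the cleanup helper (module path and port number are illustrative):

```python
import pytest
from unittest.mock import patch

from pycoupler.coupler import safe_port_binding  # assumed import path


def test_safe_port_binding_cleanup_on_exception():
    """Cleanup should run on entry and in the finally block even on error."""
    with patch("pycoupler.coupler.kill_process_on_port") as mock_kill:
        with pytest.raises(ValueError):
            with safe_port_binding("localhost", 2224):
                raise ValueError("Test exception")
        # Reachable now: asserted after the pytest.raises block has closed.
        assert mock_kill.call_count == 2
```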
This MR introduces new features for compatibility with the latest pycopanlpjml version: