diff --git a/analysis/drought_metrics.ipynb b/analysis/drought_metrics.ipynb
new file mode 100644
index 00000000..b478f6d6
--- /dev/null
+++ b/analysis/drought_metrics.ipynb
@@ -0,0 +1,703 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "5e0361f0",
+ "metadata": {},
+ "source": [
+ "# Drought Metrics"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "64641b82",
+ "metadata": {},
+ "source": [
+ "This notebook calculates two drought metrics, the Palmer Drought Severity Index (PDSI) and the Evaporative Demand Drought Index (EDDI), using WRF data in the AE catalog. PDSI relies on Potential Evapotranspiration (PET), which is computed using the Penman-Monteith method. At the end of the notebook, the user will be able to export monthly PDSI and EDDI under different global warming levels for a specific lat/lon as netcdf files for further analysis."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a347e048-a7da-4748-8489-fa83e5b6e441",
+ "metadata": {},
+ "source": [
+ "We will demonstrate how you can use `climakitae` variables to compute and export two drought indices for a given location, broken down in the table below:\n",
+ "\n",
+ "| Index | Method | Output file |\n",
+ "|-------|--------|-------------|\n",
+ "| PDSI | Palmer (PET via Penman-Monteith; `climate_indices`) | `pdsi_wl_*.nc` |\n",
+ "| EDDI | Standardized index (`xclim`) | `eddi_wl_*.nc` |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "98506237",
+ "metadata": {},
+ "source": [
+ "**Intended Application:** As a user, I want to export future drought indices for different global warming levels by:\n",
+ "1. Calculating PET using the Penman-Monteith Method\n",
+ "2. Calculating the PDSI using PET and exporting the monthly timeseries to a netcdf\n",
+ "3. Calculating the EDDI and exporting the monthly timeseries to a netcdf"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cc35f46e",
+ "metadata": {},
+ "source": [
+ "**Runtime:** With the default settings, this notebook takes approximately **10 minutes** to run from start to finish. Modifications to selections may increase the runtime."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3babc840",
+ "metadata": {},
+ "source": [
+ "## Step 0: Setup"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "xypsmbxjeoj",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Install climate_indices\n",
+ "!pip install -qq git+https://github.com/monocongo/climate_indices.git"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a7e19917",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import climakitae as ck\n",
+ "import xclim\n",
+ "import xarray as xr\n",
+ "import pandas as pd\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "from dask.diagnostics import ProgressBar\n",
+ "from climate_indices import palmer\n",
+ "from xclim.indices.stats import standardized_index\n",
+ "from climakitae.core.data_export import export"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8b31d378",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the lat/lon for the location of interest and the variables needed for the Penman-Monteith PET method\n",
+ "LAT = 37.787964\n",
+ "LON = -122.065063\n",
+ "\n",
+ "## NOTE: this demonstration workflow supports exactly two warming levels.\n",
+ "## Please specify your baseline WL (e.g., 0.8) and your future WL (e.g., 2.0).\n",
+ "WARMING_LEVELS = [0.8, 2.0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "485befa5f23",
+ "metadata": {},
+ "source": [
+ "## Step 1: Fetch Data and Calculate PET"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "522c9425",
+ "metadata": {},
+ "source": [
+ "### Penman-Monteith PET Method (most physically consistent)\n",
+ "We will use the Penman-Monteith method to calculate the PET that feeds into PDSI, since it is the most physically consistent approach. It explicitly models the two actual drivers of evaporation, available energy and atmospheric vapor transport, rather than using temperature as a proxy for both. This matters most when humidity, wind, and radiation don't scale uniformly with temperature, which causes temperature-only methods like Thornthwaite to misestimate PET under warming.\n",
+ "\n",
+ "Below, we list out the variables we will need to calculate PET using the Penman-Monteith method."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cc88bd71",
+ "metadata": {},
+ "source": [
+ "**Variables needed:**\n",
+ "- `tasmin`\n",
+ "- `tasmax`\n",
+ "- `relative humidity`\n",
+ "- `radiation flux`\n",
+ " - rsds (daily)\n",
+ " - rsus (hourly → daily mean)\n",
+ " - rlds (daily)\n",
+ " - rlus (hourly → daily mean)\n",
+ "- `wind speed (10m wind will be converted to 2m)`"
+ ]
+ },
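+ {
+ "cell_type": "markdown",
+ "id": "wind2m-note",
+ "metadata": {},
+ "source": [
+ "A note on wind: for the `FAO_PM98` method, `xclim` treats the wind input as a 10 m measurement and rescales it to 2 m internally via the FAO-56 logarithmic wind profile, so we can pass `wspd10mean` through unchanged. The cell below is only an illustrative sketch of that conversion; the height `z` and the example speed are hypothetical:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "wind2m-sketch",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def wind_2m(u_z, z=10.0):\n",
+ " \"\"\"FAO-56 Eq. 47: rescale a wind speed measured at height z (m) down to 2 m.\"\"\"\n",
+ " return u_z * 4.87 / np.log(67.8 * z - 5.42)\n",
+ "\n",
+ "print(round(wind_2m(1.0), 3)) # the 10 m -> 2 m scaling factor, ~0.748"
+ ]
+ },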
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a0132b29-804a-4eef-893f-2f60adfefc62",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialize the ClimateData object and define the processors needed.\n",
+ "cd = ck.ClimateData(verbosity=-2)\n",
+ "processes = {\n",
+ " \"clip\": (LAT, LON),\n",
+ " \"warming_level\": {\n",
+ " \"warming_levels\": WARMING_LEVELS,\n",
+ " \"add_dummy_time\": True\n",
+ " },\n",
+ "}\n",
+ "\n",
+ "def rename_sims_to_gcm(ds):\n",
+ " \"\"\"Rename the 'sim' coordinate to contain only the GCM name, for easier grouping later.\"\"\"\n",
+ " ds = ds.copy()\n",
+ " new_sims = [s.split('_')[2] for s in ds['sim'].values]\n",
+ " ds['sim'] = ('sim', new_sims)\n",
+ " return ds\n",
+ "\n",
+ "def fetch(variable, table_id=\"day\", units=None):\n",
+ " \"\"\"Fetch a WRF variable from the AE catalog with the configured processes.\"\"\"\n",
+ " proc = {**processes}\n",
+ " if units is not None:\n",
+ " proc[\"convert_units\"] = units\n",
+ " return (\n",
+ " cd.catalog(\"cadcat\")\n",
+ " .activity_id(\"WRF\")\n",
+ " .institution_id(\"UCLA\")\n",
+ " .table_id(table_id)\n",
+ " .grid_label(\"d03\")\n",
+ " .variable(variable)\n",
+ " .processes(proc)\n",
+ " .get()\n",
+ " )\n",
+ "\n",
+ "def get_and_transform(variable, table_id=\"day\", units=None, transform=None):\n",
+ " \"\"\"Fetch a variable and apply a transformation if one is passed in.\"\"\"\n",
+ " print(f\"\\nFetching {variable}...\")\n",
+ " da = fetch(variable, table_id=table_id, units=units)\n",
+ " da = da.unify_chunks() # Ensure the data is chunked for Dask processing\n",
+ " if transform:\n",
+ " da = transform(da)\n",
+ " # Drop leap days\n",
+ " da = da.sel(time=~((da['time.month'] == 2) & (da['time.day'] == 29)))\n",
+ " # Rename sims to only contain the GCM name for easier grouping later\n",
+ " da = rename_sims_to_gcm(da)\n",
+ " with ProgressBar():\n",
+ " da = da.compute()\n",
+ " return da\n",
+ " \n",
+ "\n",
+ "# Daily variables\n",
+ "rh = get_and_transform(\"rh\")\n",
+ "sw_dwn = get_and_transform(\"sw_dwn\")\n",
+ "lw_dwn = get_and_transform(\"lw_dwn\")\n",
+ "wspd10mean = get_and_transform(\"wspd10mean\")\n",
+ "prec = get_and_transform(\"prec\")\n",
+ "tasmin = get_and_transform(\"t2min\")\n",
+ "tasmax = get_and_transform(\"t2max\")\n",
+ "rsus_daily = get_and_transform(\"swupb\", table_id=\"1hr\", transform=lambda da: da.resample(time=\"1D\").mean())\n",
+ "rlus_daily = get_and_transform(\"lwupb\", table_id=\"1hr\", transform=lambda da: da.resample(time=\"1D\").mean())\n",
+ "\n",
+ "# xclim requires relative humidity as a fraction (0–1), not a percentage\n",
+ "hurs_frac = (rh / 100)\n",
+ "hurs_frac.rh.attrs['units'] = '1' # Update the units to reflect the transformation to a fraction\n",
+ "\n",
+ "# Ensure all radiation flux variables carry the same units\n",
+ "sw_dwn.sw_dwn.attrs['units'] = 'W m-2'\n",
+ "lw_dwn.lw_dwn.attrs['units'] = 'W m-2'\n",
+ "rsus_daily.swupb.attrs['units'] = 'W m-2'\n",
+ "rlus_daily.lwupb.attrs['units'] = 'W m-2'\n",
+ "\n",
+ "print(\"Finished.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c74e6120",
+ "metadata": {},
+ "source": [
+ "#### Calculate PET from `xclim` using the Penman-Monteith Method"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6c96f440",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Call `xclim.indices.potential_evapotranspiration` with the Penman-Monteith (FAO_PM98) method\n",
+ "tasmin.lat.attrs[\"units\"] = \"degrees_north\" # xclim expects CF units on the latitude coordinate\n",
+ "pet_pm = xclim.indices.potential_evapotranspiration(\n",
+ " tasmin=tasmin.t2min,\n",
+ " tasmax=tasmax.t2max,\n",
+ " hurs=hurs_frac.rh,\n",
+ " rsds=sw_dwn.sw_dwn,\n",
+ " rsus=rsus_daily.swupb,\n",
+ " rlds=lw_dwn.lw_dwn,\n",
+ " rlus=rlus_daily.lwupb,\n",
+ " sfcWind=wspd10mean.wspd10mean,\n",
+ " method=\"FAO_PM98\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "26f86306-7e0d-416f-836f-7c0053b2e60d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "with ProgressBar():\n",
+ " comp_pet_pm = pet_pm.compute()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "de169e8e-6e37-43ea-a21a-cf83c00f4c96",
+ "metadata": {},
+ "source": [
+ "## Step 2: Calculate drought indices"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "50d973f5-8d03-48a7-9ac6-4ac36ef4b120",
+ "metadata": {},
+ "source": [
+ "#### Helper function to combine WLs into a single dummy timeseries for the PDSI calculation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "28a1d6fa-a131-4c09-8568-6e2d20c90d44",
+ "metadata": {},
+ "source": [
+ "To use the `palmer` library's PDSI calculation later, we need to combine the warming level data into a single timeseries. The helper function below does this by concatenating the WL data along a dummy time index."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4a160643-f98f-4e34-80fc-4949c77d404b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "### Combining WL objects together, historical WL as 2000-2030, future WL as 2030-2060\n",
+ "def combine_wl_to_dummy_time(\n",
+ " da: xr.DataArray,\n",
+ " baseline_wl: float,\n",
+ " future_wls: list[float],\n",
+ " start_date: str = \"2000-01-31\",\n",
+ ") -> xr.DataArray:\n",
+ " \"\"\"\n",
+ " Combine baseline warming level with multiple future warming levels into one\n",
+ " DataArray along a new 'combined_wl' dimension.\n",
+ "\n",
+ " Parameters\n",
+ " ----------\n",
+ " da : xr.DataArray\n",
+ " Original data with dims including 'warming_level' and 'time'.\n",
+ " baseline_wl : float\n",
+ " The warming level used for the first time segment.\n",
+ " future_wls : list of float\n",
+ " Warming levels to concatenate after baseline.\n",
+ " start_date : str\n",
+ " Start date for the combined time series (monthly freq).\n",
+ " \n",
+ " Returns\n",
+ " -------\n",
+ " xr.DataArray\n",
+ " Combined DataArray with new dimension 'combined_wl' and coordinate labels like \"08_to_20\".\n",
+ " \"\"\"\n",
+ " months_per_wl = da.sizes['time']\n",
+ " total_months = 2 * months_per_wl\n",
+ " new_time = pd.date_range(start_date, periods=total_months, freq='ME')\n",
+ "\n",
+ " combined_list = []\n",
+ " combined_labels = []\n",
+ "\n",
+ " for fw in future_wls:\n",
+ " da_base = da.sel(warming_level=baseline_wl)\n",
+ " da_future = da.sel(warming_level=fw)\n",
+ "\n",
+ " combined = xr.concat([da_base, da_future], dim='time')\n",
+ " combined = combined.assign_coords(time=new_time)\n",
+ "\n",
+ " wl_flag = np.array([baseline_wl] * months_per_wl + [fw] * months_per_wl)\n",
+ " combined = combined.assign_coords(warming_level_flag=('time', wl_flag))\n",
+ "\n",
+ " combined_list.append(combined)\n",
+ " combined_labels.append(f\"{int(baseline_wl * 10):02d}_to_{int(fw * 10):02d}\")\n",
+ "\n",
+ " combined_da = xr.concat(combined_list, dim='combined_wl')\n",
+ " combined_da = combined_da.assign_coords(combined_wl=combined_labels)\n",
+ "\n",
+ " return combined_da"
+ ]
+ },
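+ {
+ "cell_type": "markdown",
+ "id": "dummy-time-toy-md",
+ "metadata": {},
+ "source": [
+ "As a toy sketch (with hypothetical values), the dummy-time trick simply places two equal-length monthly segments, baseline then future, on one continuous monthly axis:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "dummy-time-toy",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Two 30-year (360-month) segments concatenated onto a single dummy time axis\n",
+ "toy_base = xr.DataArray(np.zeros(360), dims=\"time\")\n",
+ "toy_future = xr.DataArray(np.ones(360), dims=\"time\")\n",
+ "toy_combined = xr.concat([toy_base, toy_future], dim=\"time\")\n",
+ "toy_combined = toy_combined.assign_coords(time=pd.date_range(\"2000-01-31\", periods=720, freq=\"ME\"))\n",
+ "print(toy_combined.sizes[\"time\"], str(toy_combined.time.values[-1])[:10])"
+ ]
+ },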
+ {
+ "cell_type": "markdown",
+ "id": "d66b7a90-26ad-4066-b1fd-b14d31f9ecd0",
+ "metadata": {},
+ "source": [
+ "**We'll also create a helper function for cleaning our data before exporting it to the `PDSI` and `EDDI` files respectively.**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "85bd503f-df22-49ca-ad6a-3a9cf31136e3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def clean_index(result, index_name):\n",
+ " \"\"\"\n",
+ " Slice to the future WL period, drop combined_wl, assign a scalar warming_level\n",
+ " coordinate, and convert time to an integer time_delta offset.\n",
+ " \"\"\"\n",
+ " da = result.isel(time=slice(360, 720)) # keep only the future WL months\n",
+ " wl_label = f\"{int(WARMING_LEVELS[0] * 10):02d}_to_{int(WARMING_LEVELS[1] * 10):02d}\"\n",
+ " da = da.sel(combined_wl=wl_label).drop_vars(\"combined_wl\")\n",
+ " da = da.drop_vars([\"warming_level\", \"warming_level_flag\"], errors=\"ignore\")\n",
+ " da = da.assign_coords(warming_level=WARMING_LEVELS[1])\n",
+ " da = da.assign_coords(centered_year=(\"sim\", result[\"centered_year\"].isel(time=0).values))\n",
+ " da = da.assign_coords(time=np.arange(-180, 180)).rename({\"time\": \"time_delta\"})\n",
+ " da = da.rename(index_name)\n",
+ " return da"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f8d01841",
+ "metadata": {},
+ "source": [
+ "### 2.1. Calculate PDSI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6f1518ba-60e9-4ea6-867d-2b68da914996",
+ "metadata": {},
+ "source": [
+ "**PDSI (Palmer Drought Severity Index)** measures long-term drought by accounting for temperature, precipitation, and soil moisture. It captures cumulative water balance over months, making it sensitive to sustained dry or wet periods. You can learn more about PDSI [here](https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/cdus/palmer_drought/wpdanote.shtml)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "415dcfc7",
+ "metadata": {},
+ "source": [
+ "The underlying PDSI computation uses the `climate_indices` library's `palmer.pdsi` function, the same implementation referenced by [drought.gov](https://www.drought.gov).\n",
+ "\n",
+ "We'll be doing a bit of data manipulation before the actual PDSI calculation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "deb75906",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Resample PET and precip to monthly totals, since `palmer.pdsi` expects monthly values in inches\n",
+ "mon_pet = (pet_pm * 86400 / 25.4).resample(time='1ME').sum() # kg m-2 s-1 -> mm/day -> inches\n",
+ "mon_precip = (prec / 25.4).resample(time='1ME').sum() # mm -> inches"
+ ]
+ },
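+ {
+ "cell_type": "markdown",
+ "id": "pet-units-note",
+ "metadata": {},
+ "source": [
+ "The conversion above assumes PET comes out of `xclim` as a water-equivalent rate in kg m-2 s-1 (numerically mm/s), so multiplying by 86400 s/day gives mm/day, and dividing by 25.4 mm/inch gives the inches that `palmer.pdsi` expects. A quick sanity check with a hypothetical rate:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "pet-units-check",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "rate = 2.5e-5 # hypothetical PET rate in kg m-2 s-1\n",
+ "mm_per_day = rate * 86400 # 1 kg m-2 s-1 sustained for a day deposits 86400 mm\n",
+ "inches_per_day = mm_per_day / 25.4 # 25.4 mm = 1 inch\n",
+ "print(round(mm_per_day, 2), round(inches_per_day, 4)) # -> 2.16 0.085"
+ ]
+ },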
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "119fa7d5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Creating one Dataset of PET and precip with WLs combined\n",
+ "mon_pet_transform = combine_wl_to_dummy_time(mon_pet, baseline_wl=WARMING_LEVELS[0], future_wls=WARMING_LEVELS[1:])\n",
+ "mon_precip_transform = combine_wl_to_dummy_time(mon_precip.prec, baseline_wl=WARMING_LEVELS[0], future_wls=WARMING_LEVELS[1:])\n",
+ "mon_pet_transform = mon_pet_transform.assign_coords(mon_precip_transform.coords)\n",
+ "combined_ds = xr.Dataset({'precip': mon_precip_transform, 'pet': mon_pet_transform})"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "999a3437-e29f-402f-9e4e-492f726fdf7f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load `combined_ds` into memory\n",
+ "with ProgressBar():\n",
+ " combined_ds = combined_ds.compute()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "693e67f7-f770-4310-9fda-9bedf28e1b1e",
+ "metadata": {},
+ "source": [
+ "Now that our data is in the correct shape, we'll compute PDSI using the PET values calculated above along with precipitation data for our location."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c7d02265-022d-4f50-8d31-d56687e88cc5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compute_pdsi(precip_1d, pet_1d, awc, data_start_year, calibration_year_initial, calibration_year_final):\n",
+ " \"\"\"Compute PDSI for a single 1-D timeseries of precipitation and PET, discarding the other Palmer indices.\"\"\"\n",
+ " pdsi, _, _, _, _ = palmer.pdsi(\n",
+ " precips=precip_1d,\n",
+ " pet=pet_1d,\n",
+ " awc=awc,\n",
+ " data_start_year=data_start_year,\n",
+ " calibration_year_initial=calibration_year_initial,\n",
+ " calibration_year_final=calibration_year_final,\n",
+ " )\n",
+ " return pdsi\n",
+ "\n",
+ "pdsi_result = xr.apply_ufunc(\n",
+ " compute_pdsi,\n",
+ " combined_ds[\"precip\"], # shape (time, sim)\n",
+ " combined_ds[\"pet\"], # shape (time, sim)\n",
+ " input_core_dims=[[\"time\"], [\"time\"]], # operate along time\n",
+ " output_core_dims=[[\"time\"]], # output also has time dim\n",
+ " vectorize=True, # loop over non-core dims (sim)\n",
+ " kwargs={\n",
+ " \"awc\": 6, # Available water capacity of the soil (inches)\n",
+ " \"data_start_year\": 2000,\n",
+ " \"calibration_year_initial\": 2000,\n",
+ " \"calibration_year_final\": 2029, # baseline WL period is 2000-2029 (months 0-359)\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d745c44",
+ "metadata": {},
+ "source": [
+ "The combined time axis has 720 months total: months 0–359 are the baseline warming level (0.8°C, used for calibration), and months 360–719 are the future warming level period. The export below slices to `time=slice(360, 720)` to retain only the future period.\n",
+ "\n",
+ "The exported file will have the following dimensions and coordinates:\n",
+ "- `time_delta`: 360 monthly offsets spanning the 30-year future warming level period\n",
+ "- `warming_level`: scalar future warming level (e.g., 2.0, calibrated against the 0.8°C baseline)\n",
+ "- `lat`, `lon`: spatial coordinates (EPSG:4326)\n",
+ "- `sim`: ensemble member (GCM name)\n",
+ "- `centered_year`: the year each simulation reaches the future warming level"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d802d5d",
+ "metadata": {},
+ "source": [
+ "#### Export PDSI"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b1dc40f2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Cleaning and labeling the data before exporting it in the next cell\n",
+ "final_pdsi = clean_index(pdsi_result, 'pdsi')\n",
+ "final_pdsi = final_pdsi.assign_attrs({\n",
+ " \"long_name\": \"Palmer Drought Severity Index\",\n",
+ " \"units\": \"from -10 (dry) to +10 (wet)\",\n",
+ "})\n",
+ "pdsi_filename = f\"pdsi_wl_lat{str(LAT).replace('.', '_')}_lon{str(LON).replace('.', '_')}.nc\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3c6f3455",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "export(final_pdsi, pdsi_filename, format=\"NetCDF\", mode=\"local\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3dc2e324",
+ "metadata": {},
+ "source": [
+ "### 2.2. Calculate EDDI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "43ce0d8d-5425-4033-bc67-e21804b00bef",
+ "metadata": {},
+ "source": [
+ "**EDDI (Evaporative Demand Drought Index)** measures atmospheric \"thirst\", or how aggressively the atmosphere is pulling moisture from the land surface, independent of precipitation. It responds faster than PDSI and is especially useful for detecting early-onset or flash drought driven by heat and low humidity."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a4e3ea25-fe32-436e-9553-5f7e6a8a120c",
+ "metadata": {},
+ "source": [
+ "We will calculate EDDI by standardizing monthly PET (evaporative demand) with `xclim`'s `standardized_index`, calibrated over the baseline warming level period using the dummy time index constructed above. Values are clipped to [−2.5, 2.5]."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c9941a79-198c-494b-be56-fc6b344238d5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compute_eddi_numpy(timeseries_np, time_coords):\n",
+ " \"\"\"Wrapper that reconstructs DataArray from numpy for apply_ufunc.\"\"\"\n",
+ " ts = xr.DataArray(timeseries_np, coords={\"time\": time_coords}, dims=[\"time\"])\n",
+ " eddi = standardized_index(\n",
+ " da=ts,\n",
+ " freq='MS',\n",
+ " window=1,\n",
+ " dist=\"gamma\",\n",
+ " method=\"ML\",\n",
+ " zero_inflated=True,\n",
+ " fitkwargs={},\n",
+ " cal_start=\"2000-01-31\",\n",
+ " cal_end=\"2029-12-31\"\n",
+ " )\n",
+ " return eddi.clip(min=-2.5, max=2.5) # Clipping extraneous values to be within the realistic range of EDDI\n",
+ "\n",
+ "time_coords = combined_ds.time.values\n",
+ "\n",
+ "eddi_result = xr.apply_ufunc(\n",
+ " compute_eddi_numpy,\n",
+ " combined_ds['pet'].compute(), # EDDI standardizes evaporative demand (PET), not precipitation\n",
+ " input_core_dims=[['time']],\n",
+ " output_core_dims=[['time']],\n",
+ " vectorize=True,\n",
+ " kwargs={\"time_coords\": time_coords},\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "45176f23",
+ "metadata": {},
+ "source": [
+ "#### Export EDDI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d59ee541",
+ "metadata": {},
+ "source": [
+ "The combined time axis has 720 months total: months 0–359 are the baseline warming level (0.8°C, used for calibration), and months 360–719 are the future warming level period. The export below slices to `time=slice(360, 720)` to retain only the future period.\n",
+ "\n",
+ "The exported file will have the following dimensions and coordinates:\n",
+ "- `time_delta`: 360 monthly offsets spanning the 30-year future warming level period\n",
+ "- `warming_level`: scalar future warming level (e.g., 2.0, calibrated against the 0.8°C baseline)\n",
+ "- `lat`, `lon`: spatial coordinates (EPSG:4326)\n",
+ "- `sim`: ensemble member (GCM name)\n",
+ "- `centered_year`: the year each simulation reaches the future warming level"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d86782f2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Saving these results and cleaning the data\n",
+ "final_eddi = clean_index(eddi_result, 'eddi')\n",
+ "final_eddi = final_eddi.assign_attrs({\n",
+ " \"long_name\": \"Evaporative Demand Drought Index\",\n",
+ " \"units\": \"from -2.5 (wet) to +2.5 (dry)\",\n",
+ "})\n",
+ "eddi_filename = f\"eddi_wl_lat{str(LAT).replace('.', '_')}_lon{str(LON).replace('.', '_')}.nc\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "707286f3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "export(final_eddi, eddi_filename, format=\"NetCDF\", mode=\"local\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "14d7c23e",
+ "metadata": {},
+ "source": [
+ "## Conclusion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c9b1fe28-259c-416e-b299-6397b649bd97",
+ "metadata": {},
+ "source": [
+ "This notebook demonstrates how you can use `climakitae` variables to compute two drought indices, PDSI and EDDI, from WRF-downscaled climate projections across multiple warming levels, and export them as analysis-ready NetCDF files.\n",
+ "\n",
+ "Both indices are calibrated over a base warming level period and evaluated over the future warming level period, enabling direct comparisons of drought severity across warming levels."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e3d4d966-cb44-4af4-a356-acd81992be4f",
+ "metadata": {},
+ "source": [
+ "Below, we repeat the table from the introduction to summarize the indices discussed and the methods used to produce them.\n",
+ "\n",
+ "| Index | Method | Output file |\n",
+ "|-------|--------|-------------|\n",
+ "| PDSI | Palmer (PET via Penman-Monteith; `climate_indices`) | `pdsi_wl_*.nc` |\n",
+ "| EDDI | Standardized index (`xclim`) | `eddi_wl_*.nc` |"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (climakitae)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/work-in-progress/drought_metrics.ipynb b/work-in-progress/drought_metrics.ipynb
deleted file mode 100644
index 71fd643e..00000000
--- a/work-in-progress/drought_metrics.ipynb
+++ /dev/null
@@ -1,679 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "id": "5e0361f0",
- "metadata": {},
- "source": [
- "# Drought Metrics"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "64641b82",
- "metadata": {},
- "source": [
- "This notebook calculates two drought metrics, the Palmer Drought Severity Index (PDSI) and the Evaporative Demand Drought Index (EDDI), using WRF data in the AE catalog. Both PDSI and EDDI require Potentinal Evaportranspiration (PET). PET is computed first using the Penman-Monteith method. At the end of the notebook, the user will be able to export monthly PDSI and EDDI for under different global warming levels for a specific lat/lon as netcdf files to be used for further analyses. "
- ]
- },
- {
- "cell_type": "markdown",
- "id": "98506237",
- "metadata": {},
- "source": [
- "**Intended Application:** As a user, I want to export future drought indicies for different global warming levels by:\n",
- "1. Calculating PET using the Penman-Montieth Method\n",
- "2. Calculating the PDSI using PET and exporting the monthly timeseries to a netcdf\n",
- "3. Calculating the EDDI using PET and exporting the monthly timeseries to a netcdf"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "cc35f46e",
- "metadata": {},
- "source": [
- "**Runtime:** With the default settings, this notebook takes approximately 25 minutes to run from start to finish. Modifications to selections may increase the runtime."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "89ab2f9c",
- "metadata": {},
- "source": [
- "**Troubleshooting:** Getting an `IndexError: index 40 is out of bounds for axis 0 with size 40` when trying to run the PDSI calculation? Try changing your lat/lon or point of interest further away from the boundaries of our 3 km WRF domain (as seen in [this graphic](https://analytics.cal-adapt.org/faq/#what-data-is-available))."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "a5de287f",
- "metadata": {},
- "source": [
- "### Using the Penman-Monteith method (most physically accurate) to calculate Potential Evapotranspiration"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5c04c18f",
- "metadata": {},
- "source": [
- "**Variables needed:**\n",
- "- `tasmin`\n",
- "- `tasmax`\n",
- "- `relative humidity`\n",
- "- `radiation flux`\n",
- " - rsds\n",
- " - rsus\n",
- " - rlds\n",
- " - rlus\n",
- "- `wind speed (10m wind will be converted to 2m)`"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3babc840",
- "metadata": {},
- "source": [
- "### Imports"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "a7e19917",
- "metadata": {},
- "outputs": [],
- "source": [
- "import xclim\n",
- "import os\n",
- "import xarray as xr\n",
- "import pandas as pd\n",
- "import numpy as np\n",
- "import matplotlib.pyplot as plt\n",
- "from pyproj import CRS\n",
- "from climakitae.core.data_interface import get_data\n",
- "from climakitae.core.data_load import load\n",
- "from climakitae.core.data_export import export\n",
- "from climakitae.util.utils import add_dummy_time_to_wl\n",
- "from climakitae.util.utils import reproject_data"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "7c5626f6",
- "metadata": {},
- "source": [
- "### Initial Setup"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "8b31d378",
- "metadata": {},
- "outputs": [],
- "source": [
- "lat = 37.787964\n",
- "lon = -122.065063\n",
- "\n",
- "variables_dict = {\n",
- " \"tasmax\": \"Maximum air temperature at 2m\",\n",
- " \"tasmin\": \"Minimum air temperature at 2m\",\n",
- " \"hurs\": \"Relative humidity\",\n",
- " \"rsds\": \"Instantaneous downwelling shortwave flux at bottom\",\n",
- " \"rsus\": \"Instantaneous upwelling shortwave flux at bottom\",\n",
- " \"rlds\": \"Instantaneous downwelling longwave flux at bottom\",\n",
- " \"rlus\": \"Instantaneous upwelling longwave flux at bottom\",\n",
- " \"wspd10mean\": \"Mean wind speed at 10m\",\n",
- " \"precip\": \"Precipitation (total)\",\n",
- "}"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "92a9173e",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Making a `input_data` folder to hold intermediate data variables needed for PET\n",
- "if not os.path.exists(\"input_data\"):\n",
- " os.mkdir(\"input_data\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "03173361",
- "metadata": {},
- "source": [
- "**IMPORTANT NOTE**: The following cell saves out the intermediate data that is required for PET into an `input_data` directory. If you change the lat/lon coordinate from above and DON'T delete or move the data already inside the `input_data` directory, it will just re-read the same data files."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "a0132b29-804a-4eef-893f-2f60adfefc62",
- "metadata": {},
- "outputs": [],
- "source": [
- "### Retrieving the different variables needed for PET\n",
- "datas = []\n",
- "\n",
- "for _, (variable, var_long_name) in enumerate(variables_dict.items()):\n",
- "\n",
- " file_path = f\"input_data/{variable}_daily.nc\"\n",
- "\n",
- " if os.path.exists(file_path):\n",
- " print(f\"Reading {variable} from file.\")\n",
- " da = xr.open_dataarray(file_path)\n",
- " \n",
- " else:\n",
- " print(f\"Computing {var_long_name}\")\n",
- " ae_var_name = var_long_name\n",
- " timescale = 'daily'\n",
- " if variable == 'rlus' or variable == 'rsus':\n",
- " timescale = 'hourly'\n",
- " da = get_data(\n",
- " variable=ae_var_name,\n",
- " resolution='3 km',\n",
- " timescale=timescale,\n",
- " # Expanding the spatial extent to cover gridcells outside of WRF domain will throw an error!\n",
- " # Please reduce the extent of your data retrieval if you do get an error like the `IndexError` mentioned above\n",
- " latitude=(lat - 0.04, lat + 0.04),\n",
- " longitude=(lon - 0.04, lon + 0.04),\n",
- " approach=\"Warming Level\",\n",
- " warming_level=[0.8, 1.5, 2.0, 3.0],\n",
- " downscaling_method=\"Dynamical\"\n",
- " )\n",
- " da = load(add_dummy_time_to_wl(da), progress_bar=True)\n",
- " if variable == 'tasmin':\n",
- " agg_da = da.squeeze().resample(time='D').min()\n",
- " elif variable == 'tasmax':\n",
- " agg_da = da.squeeze().resample(time='D').max()\n",
- " elif variable == 'precip':\n",
- " agg_da = da.squeeze().resample(time='D').sum()\n",
- " else:\n",
- " agg_da = da.squeeze().resample(time='D').mean()\n",
- " agg_da.to_netcdf(file_path) # Save for reuse\n",
- " da = agg_da\n",
- "\n",
- " datas.append(da)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "df54b41e",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Creating daily variables for all hourly variables\n",
- "tasmin = datas[0]\n",
- "tasmax = datas[1]\n",
- "hurs = datas[2] / 100 # Convert from % to fraction\n",
- "new_hurs = hurs.assign_attrs(units='1')\n",
- "rsds = datas[3]\n",
- "rsus = datas[4]\n",
- "rlds = datas[5]\n",
- "rlus = datas[6]\n",
- "sfcWind = datas[7]\n",
- "precip = datas[8]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "32f5bcba",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Calculating PET from xclim\n",
- "pet_calc = xclim.indices.potential_evapotranspiration(\n",
- " tasmin=tasmin,\n",
- " tasmax=tasmax,\n",
- " hurs=new_hurs,\n",
- " rsds=rsds,\n",
- " rsus=rsus,\n",
- " rlds=rlds,\n",
- " rlus=rlus,\n",
- " sfcWind=sfcWind,\n",
- " method=\"FAO_PM98\"\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "58cbcd75",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Preserving CRS and a spatial mask for later\n",
- "crs = CRS.from_cf(pet_calc['Lambert_Conformal'].attrs)\n",
- "spatial_mask = ~pet_calc.isel(warming_level=0, time=0, simulation=0).isnull()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f8d01841",
- "metadata": {},
- "source": [
- "# PDSI"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "415dcfc7",
- "metadata": {},
- "source": [
- "Here, we will use the PDSI function from the `climate_indices` library, which is also what drought.gov has referenced. However, we will install a specific commit of the package that is compatible with the AE hub environment."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "f0eaac76-5c73-405d-817a-4c2bdf44f7fb",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Pip install the specific commit of the package that supports AE package versions\n",
- "!pip install -qq git+https://github.com/monocongo/climate_indices.git@43c5451"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "b0bc9fcf",
- "metadata": {
- "editable": true,
- "slideshow": {
- "slide_type": ""
- },
- "tags": []
- },
- "outputs": [],
- "source": [
- "# Making imports from `climate_indices` package\n",
- "import climate_indices\n",
- "from climate_indices.palmer import pdsi"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "deb75906",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Resampling PET and precip to monthly since the function only takes monthly variables\n",
- "mon_pet = (pet_calc * 86400 / 25.4).resample(time='1ME').sum()\n",
- "mon_precip = (precip / 25.4).resample(time='1ME').sum()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "ffd18cf8",
- "metadata": {},
- "outputs": [],
- "source": [
- "### Combining WL objects together, historical WL as 2000-2030, future WL as 2030-2060\n",
- "def combine_wl_to_dummy_time(\n",
- " da: xr.DataArray,\n",
- " baseline_wl: float,\n",
- " future_wls: list[float],\n",
- " start_date: str = \"2000-01-31\",\n",
- ") -> xr.DataArray:\n",
- " \"\"\"\n",
- " Combine baseline warming level with multiple future warming levels into one\n",
- " DataArray along a new 'combined_wl' dimension.\n",
- "\n",
- " Parameters\n",
- " ----------\n",
- " da : xr.DataArray\n",
- " Original data with dims including 'warming_level' and 'time'.\n",
- " baseline_wl : float\n",
- " The warming level used for the first time segment.\n",
- " future_wls : list of float\n",
- " Warming levels to concatenate after baseline.\n",
- " start_date : str\n",
- " Start date for the combined time series (monthly freq).\n",
- " \n",
- " Returns\n",
- " -------\n",
- " xr.DataArray\n",
- " Combined DataArray with new dimension 'combined_wl' and coordinate labels like \"0.8 to 1.5\".\n",
- " \"\"\"\n",
- " months_per_wl = da.sizes['time']\n",
- " total_months = 2 * months_per_wl\n",
- " new_time = pd.date_range(start_date, periods=total_months, freq='ME')\n",
- "\n",
- " combined_list = []\n",
- " combined_labels = []\n",
- "\n",
- " for fw in future_wls:\n",
- " da_base = da.sel(warming_level=baseline_wl)\n",
- " da_future = da.sel(warming_level=fw)\n",
- "\n",
- " combined = xr.concat([da_base, da_future], dim='time')\n",
- " combined = combined.assign_coords(time=new_time)\n",
- "\n",
- " wl_flag = np.array([baseline_wl] * months_per_wl + [fw] * months_per_wl)\n",
- " combined = combined.assign_coords(warming_level_flag=('time', wl_flag))\n",
- "\n",
- " combined_list.append(combined)\n",
- " combined_labels.append(f\"{int(baseline_wl * 10):02d}_to_{int(fw * 10):02d}\")\n",
- "\n",
- " combined_da = xr.concat(combined_list, dim='combined_wl')\n",
- " combined_da = combined_da.assign_coords(combined_wl=combined_labels)\n",
- "\n",
- " return combined_da"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "119fa7d5",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Creating one Dataset of PET and precip with WLs combined\n",
- "mon_pet_transform = combine_wl_to_dummy_time(mon_pet, baseline_wl=0.8, future_wls=[1.5,2.0,3.0])\n",
- "mon_precip_transform = combine_wl_to_dummy_time(mon_precip, baseline_wl=0.8, future_wls=[1.5,2.0,3.0])\n",
- "\n",
- "combined_ds = xr.Dataset({'precip': mon_precip_transform, 'pet': mon_pet_transform})\n",
- "\n",
- "# Applying spatial mask\n",
- "combined_ds = combined_ds.where(spatial_mask)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "c6b5de48",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Helper function to vectorize PDSI calculation across `combined_ds` dimensions.\n",
- "def calc_pdsi(timeseries: xr.Dataset):\n",
- " \"\"\"\n",
- " Compute the Palmer Drought Severity Index (PDSI) from a Dataset.\n",
- "\n",
- " Parameters\n",
- " ----------\n",
- " timeseries : xarray.Dataset\n",
- " Dataset containing 'precip' and 'pet' variables with a time dimension.\n",
- "\n",
- " Returns\n",
- " -------\n",
- " xarray.DataArray\n",
- " PDSI values along the time dimension.\n",
- " \"\"\"\n",
- " # Extracting precip and PET by each timeseries and calculating PDSI\n",
- " precip = timeseries['precip'].squeeze()\n",
- " pet = timeseries['pet'].squeeze()\n",
- " \n",
- " pdsi_calc = pdsi(\n",
- " precips=precip.values,\n",
- " pet=pet.values,\n",
- " awc=5,\n",
- " data_start_year=2000,\n",
- " calibration_year_initial=2000,\n",
- " calibration_year_final=2030,\n",
- " )\n",
- " retval = xr.DataArray(pdsi_calc[0], coords={\"time\": precip.time.values}, dims=['time'])\n",
- " \n",
- " # Clipping PDSI to realistic values\n",
- " return retval.clip(min=-10, max=10)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "c625f3af",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Applies the PDSI function across all dimensions so that a timeseries of PET/precip is always being passed into `pdsi`\n",
- "pdsi_da = combined_ds.groupby([\n",
- " 'combined_wl',\n",
- " 'x',\n",
- " 'y',\n",
- " 'simulation'\n",
- "]).apply(\n",
- " lambda timeseries: calc_pdsi(timeseries)\n",
- ")\n",
- "\n",
- "# Writing crs and reprojecting PDSI to lat/lon\n",
- "pdsi_da = pdsi_da.rio.write_crs(crs.to_wkt())\n",
- "pdsi_da = pdsi_da.transpose('time', 'combined_wl', 'simulation', 'y', 'x')\n",
- "pdsi_latlon = reproject_data(pdsi_da, 'EPSG:4326')\n",
- "del pdsi_latlon.attrs[\"_FillValue\"]"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "6cc3157f",
- "metadata": {},
- "source": [
- "### Exporting the results"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "2d745c44",
- "metadata": {},
- "source": [
- "The results will have the following dimensions:\n",
- "- time\n",
- "- wl (showing which WL PDSI was calibrated on, and then which WL PDSI was calculated on)\n",
- "- lat\n",
- "- lon\n",
- "- simulation"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "b1dc40f2",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Cleaning and labeling the data before exporting it in the next cell\n",
- "final_pdsi = pdsi_latlon.isel(time=slice(360, 720))\n",
- "final_pdsi = final_pdsi.rename({'combined_wl': 'wl'}).rename(\"pdsi\")\n",
- "final_pdsi = final_pdsi.assign_attrs({\n",
- " \"long_name\": \"Palmer Drought Severity Index\",\n",
- " \"units\": \"from -10 (dry) to +10 (wet)\",\n",
- "})\n",
- "pdsi_filename = f\"pdsi_wl_lat{str(lat).replace('.', '_')}_lon{str(lon).replace('.', '_')}.nc\""
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "3c6f3455",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Exporting the DataArray\n",
- "if os.path.exists(pdsi_filename):\n",
- " raise Exception(\n",
- " (\n",
- " f\"File {pdsi_filename} exists. \"\n",
- " \"Please either delete that file from the work space \"\n",
- " \"or specify a new file name here.\"\n",
- " )\n",
- " )\n",
- "else:\n",
- " final_pdsi.to_netcdf(pdsi_filename, encoding={\"pdsi\": {\"_FillValue\": -9999.0}})"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3dc2e324",
- "metadata": {},
- "source": [
- "## EDDI"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f0645679",
- "metadata": {},
- "source": [
- "Now, we will calculate EDDI using PET."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "d38a1969",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Import `standardized_index` from xclim, which we will apply to our PET data object to generate EDDI\n",
- "from xclim.indices.stats import standardized_index"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "3228b9fb",
- "metadata": {},
- "outputs": [],
- "source": [
- "def calc_eddi(timeseries: xr.DataArray):\n",
- " \"\"\"\n",
- " Compute the Evaporative Demand Drought Index (EDDI) for a time series.\n",
- "\n",
- " Parameters\n",
- " ----------\n",
- " timeseries : xarray.DataArray\n",
- " 1D time series of ET₀. NaNs are skipped.\n",
- "\n",
- " Returns\n",
- " -------\n",
- " xarray.DataArray\n",
- " EDDI values. Positive = dry, negative = wet.\n",
- " \"\"\"\n",
- " eddi = standardized_index(\n",
- " da=timeseries,\n",
- " freq='MS',\n",
- " window=1,\n",
- " dist=\"gamma\",\n",
- " method=\"ML\",\n",
- " zero_inflated=True,\n",
- " fitkwargs={},\n",
- " cal_start=\"2000-01-31\",\n",
- " cal_end=\"2029-12-31\"\n",
- " )\n",
- " # Clipping EDDI to realistic values\n",
- " retval = eddi.clip(min=-2.5, max=2.5)\n",
- " return retval"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "1240839c",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Applies the `calc_eddi` function across all dimensions so that a timeseries of PET is always being passed into `calc_eddi`\n",
- "eddi_da = combined_ds['pet'].groupby([\n",
- " 'combined_wl',\n",
- " 'x',\n",
- " 'y',\n",
- " 'simulation'\n",
- "]).apply(\n",
- " lambda timeseries: calc_eddi(timeseries)\n",
- ")\n",
- "\n",
- "# Writing crs and reprojecting EDDI to lat/lon\n",
- "eddi_da = eddi_da.rio.write_crs(crs.to_wkt())\n",
- "eddi_da = eddi_da.transpose('time', 'combined_wl', 'simulation', 'y', 'x')\n",
- "eddi_da_latlon = reproject_data(eddi_da, 'EPSG:4326')\n",
- "del eddi_da_latlon.attrs[\"_FillValue\"]"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "45176f23",
- "metadata": {},
- "source": [
- "### Exporting the results"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d59ee541",
- "metadata": {},
- "source": [
- "The results will have the following dimensions:\n",
- "- time\n",
- "- wl (showing which WL EDDI was calibrated on, and then which WL EDDI was calculated on)\n",
- "- lat\n",
- "- lon\n",
- "- simulation"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "d86782f2",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Saving these results and cleaning the data\n",
- "final_eddi = eddi_da_latlon.isel(time=slice(360, 720))\n",
- "final_eddi = final_eddi.rename({'combined_wl': 'wl'}).rename(\"eddi\")\n",
- "final_eddi = final_eddi.assign_attrs({\n",
- " \"long_name\": \"Evaporative Demand Drought Index\",\n",
- " \"units\": \"from -2.5 (wet) to +2.5 (dry)\",\n",
- "})\n",
- "eddi_filename = f\"eddi_wl_lat{str(lat).replace('.', '_')}_lon{str(lon).replace('.', '_')}.nc\""
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "707286f3",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Exporting the DataArray\n",
- "if os.path.exists(eddi_filename):\n",
- " raise Exception(\n",
- " (\n",
- " f\"File {eddi_filename} exists. \"\n",
- " \"Please either delete that file from the work space \"\n",
- " \"or specify a new file name here.\"\n",
- " )\n",
- " )\n",
- "else:\n",
- " final_eddi.to_netcdf(eddi_filename, encoding={\"eddi\": {\"_FillValue\": -9999.0}})"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (climakitae)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.12.12"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}