
Calculate prices at the appropriate level#1196

Merged
tsmbland merged 25 commits into main from buffered_prices on Mar 19, 2026

Conversation

Collaborator

@tsmbland tsmbland commented Mar 12, 2026

Description

This PR is to ensure that commodities have prices specified at the appropriate granularity. For example, commodities with a season timeslice level should have prices specified at the season level (i.e. flat within each season), and commodities with an annual timeslice level should have one price for the year. This is already the case with shadow prices (we just duplicate the duals), but not necessarily the case for the marginal/full strategies, because these are calculated using the prices of other commodities, which may vary at the timeslice level.

For lack of a better approach, the way I'm doing this is to calculate marginal costs using a weighted average within each season/year, weighted according to timeslice-level activity. We then calculate prices by taking the maximum of this across technologies, as before (or the maximum of full costs for that strategy).

It's possible that some assets may not have any activity within a particular season, which makes it impossible to do a weighted average. However, we still want these assets to be able to contribute to the price, as discussed in #1148. So, in this case I'm using the activity limits as the weight (i.e. maximum potential activity assuming full utilisation). This is also the case for candidates.
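As a rough illustration of this weighting scheme: `GroupAccum` is the name used in this PR, but the fields, the epsilon guard and the fallback logic below are simplified assumptions, with plain `f64` standing in for the crate's typed units.

```rust
/// Illustrative sketch of the weighted-average accumulator described above.
/// The real `GroupAccum` lives in src/simulation/prices.rs; this is a guess
/// at its shape, not a copy of the implementation.
#[derive(Default)]
struct GroupAccum {
    /// Sum of cost * activity
    weighted_by_activity: f64,
    /// Sum of activity across the group
    total_activity: f64,
    /// Sum of cost * activity_limit (fallback weights)
    weighted_by_limit: f64,
    /// Sum of activity limits across the group
    total_limit: f64,
}

impl GroupAccum {
    fn add(&mut self, cost: f64, activity: f64, activity_limit: f64) {
        self.weighted_by_activity += cost * activity;
        self.total_activity += activity;
        self.weighted_by_limit += cost * activity_limit;
        self.total_limit += activity_limit;
    }

    /// Activity-weighted average; falls back to limit-weighting when the
    /// group saw no activity at all. `None` if both denominators are ~zero.
    fn solve(&self) -> Option<f64> {
        if self.total_activity > f64::EPSILON {
            Some(self.weighted_by_activity / self.total_activity)
        } else if self.total_limit > f64::EPSILON {
            Some(self.weighted_by_limit / self.total_limit)
        } else {
            None
        }
    }
}

fn main() {
    let mut acc = GroupAccum::default();
    acc.add(10.0, 2.0, 4.0);
    acc.add(20.0, 6.0, 4.0);
    // (10*2 + 20*6) / (2 + 6) = 17.5
    println!("{:?}", acc.solve());

    // No activity anywhere in the group: fall back to limit weighting
    let mut idle = GroupAccum::default();
    idle.add(10.0, 0.0, 1.0);
    idle.add(30.0, 0.0, 3.0);
    // (10*1 + 30*3) / (1 + 3) = 25.0
    println!("{:?}", idle.solve());
}
```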

Notably, none of this has any impact on the example models, because prices for seasonal commodities are already flat. I think this is inevitable when seasonal commodities are at the top of the supply chain, or more specifically when granularity only ever increases going down the supply chain. This reminds me of MUSE1, where we sort of have a buffering feature, but it only works as expected in these scenarios. I wonder if it's worth adding an example model that goes against this rule, or at least having a look to see what would happen (we've been trying to do something in MUSE1 where electricity gets buffered, but haven't been able to due to the aforementioned restriction). We already sort of have this with the circularity model, but it doesn't work with the new pricing strategies.

Going forward, I wonder if it would be worth keying the price maps with TimeSliceSelection instead of TimeSliceID. This would save us from having to duplicate prices, but may make things a bit more fiddly when it comes to retrieving prices.
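For what it's worth, a hypothetical sketch of what selection-keyed prices might look like, including the fiddlier retrieval step. The `Selection` enum and the coarse-to-fine lookup order here are illustrative assumptions, not MUSE2's actual types:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for TimeSliceSelection, keyed directly in the map
// so each price is stored once instead of duplicated per time slice.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Selection {
    Annual,
    Season(String),
    Single(String, String), // (season, time_of_day)
}

struct Prices(HashMap<Selection, f64>);

impl Prices {
    /// Retrieve the price for a concrete time slice by trying progressively
    /// coarser selections until one matches. This lookup chain is the
    /// "fiddly" part mentioned above.
    fn get(&self, season: &str, time_of_day: &str) -> Option<f64> {
        self.0
            .get(&Selection::Single(season.to_string(), time_of_day.to_string()))
            .or_else(|| self.0.get(&Selection::Season(season.to_string())))
            .or_else(|| self.0.get(&Selection::Annual))
            .copied()
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert(Selection::Season("winter".to_string()), 5.0);
    map.insert(Selection::Annual, 2.0);
    let prices = Prices(map);
    // Season-level price wins for winter; summer falls through to Annual
    println!("{:?}", prices.get("winter", "day"));
    println!("{:?}", prices.get("summer", "night"));
}
```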

Fixes #1184

Type of change

  • Bug fix (non-breaking change to fix an issue)
  • New feature (non-breaking change to add functionality)
  • Refactoring (non-breaking, non-functional change to improve maintainability)
  • Optimization (non-breaking change to speed up the code)
  • Breaking change (whatever its nature)
  • Documentation (improve or add documentation)

Key checklist

  • All tests pass: $ cargo test
  • The documentation builds and looks OK: $ cargo doc
  • Update release notes for the latest release if this PR adds a new feature or fixes a bug
    present in the previous release

Further checks

  • Code is commented, particularly in hard-to-understand areas
  • Tests added that prove fix is effective or that feature works

@codecov
Copy link

codecov bot commented Mar 12, 2026

Codecov Report

❌ Patch coverage is 86.69202% with 35 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.31%. Comparing base (5389d44) to head (dd75c66).
⚠️ Report is 27 commits behind head on main.

Files with missing lines Patch % Lines
src/simulation/prices.rs 86.77% 32 Missing and 2 partials ⚠️
src/time_slice.rs 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1196      +/-   ##
==========================================
+ Coverage   89.02%   89.31%   +0.28%     
==========================================
  Files          57       57              
  Lines        7838     7974     +136     
  Branches     7838     7974     +136     
==========================================
+ Hits         6978     7122     +144     
+ Misses        564      555       -9     
- Partials      296      297       +1     

☔ View full report in Codecov by Sentry.

@tsmbland tsmbland marked this pull request as ready for review March 12, 2026 16:47
Copilot AI review requested due to automatic review settings March 12, 2026 16:47
Contributor

Copilot AI left a comment


Pull request overview

This PR adjusts marginal-cost and full-cost commodity price calculations so that prices are produced at the commodity’s declared time-slice granularity (annual/season/DayNight), flattening prices within each season/year via activity-weighted averaging (with an activity-limit fallback when activity is zero).

Changes:

  • Added TimeSliceLevel::containing_selection to map a TimeSliceID to the appropriate TimeSliceSelection at a given level.
  • Reworked marginal/full cost pricing to compute per-(commodity, region, selection) group prices using activity-weighted (or limit-weighted) averages, then expand them back to per-TimeSliceID prices.
  • Simplified Solution API by removing iter_activity_keys_for_existing (callers now use iter_activity_for_existing directly).
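The `containing_selection` helper mentioned above might look roughly like this; the enums are simplified stand-ins for MUSE2's real `TimeSliceID`, `TimeSliceSelection` and `TimeSliceLevel` types, so the exact fields and variants are assumptions:

```rust
// Simplified stand-ins for MUSE2's time-slice types (illustrative only).
#[derive(Clone, Debug, PartialEq)]
struct TimeSliceID {
    season: String,
    time_of_day: String,
}

#[derive(Clone, Debug, PartialEq)]
enum TimeSliceSelection {
    Annual,
    Season(String),
    Single(TimeSliceID),
}

enum TimeSliceLevel {
    Annual,
    Season,
    DayNight,
}

impl TimeSliceLevel {
    /// Map a concrete time slice to the coarser selection that contains it
    /// at this level, e.g. a (winter, day) slice maps to the whole winter
    /// season for a season-level commodity.
    fn containing_selection(&self, ts: &TimeSliceID) -> TimeSliceSelection {
        match self {
            TimeSliceLevel::Annual => TimeSliceSelection::Annual,
            TimeSliceLevel::Season => TimeSliceSelection::Season(ts.season.clone()),
            TimeSliceLevel::DayNight => TimeSliceSelection::Single(ts.clone()),
        }
    }
}

fn main() {
    let ts = TimeSliceID {
        season: "winter".to_string(),
        time_of_day: "day".to_string(),
    };
    println!("{:?}", TimeSliceLevel::Season.containing_selection(&ts));
}
```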

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

File Description
src/time_slice.rs Adds helper to derive a coarser TimeSliceSelection (annual/season/single) from a TimeSliceID.
src/simulation/prices.rs Implements grouped/weighted averaging for marginal/full cost prices and expands grouped prices back to time-slice keys; adds GroupAccum tests.
src/simulation/optimisation.rs Removes iter_activity_keys_for_existing now that pricing consumes activity values directly.


Copilot AI review requested due to automatic review settings March 12, 2026 16:53
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.



Copilot AI review requested due to automatic review settings March 16, 2026 17:06
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.



Comment on lines +463 to +516

// For each (asset, commodity, region, group), accumulate marginal costs. For seasonal/annual
// commodities, marginal costs are weighted by activity (or the activity limit for assets with
// no activity)
let mut existing_accum: HashMap<_, GroupAccum> = HashMap::new();
for (asset, time_slice, activity) in activity_for_existing {
    let region_id = asset.region_id();

    // Get activity limits: used as a backup weight if no activity across the group
    let activity_limit = *asset
        .get_activity_limits_for_selection(&TimeSliceSelection::Single(time_slice.clone()))
        .end();

    // Iterate over the marginal costs for commodities we need prices for
    for (commodity_id, marginal_cost) in asset.iter_marginal_costs_with_filter(
        upstream_prices,
        year,
        time_slice,
        |commodity_id: &CommodityID| {
            markets_to_price.contains(&(commodity_id.clone(), region_id.clone()))
        },
    ) {
        let group = commodities[&commodity_id]
            .time_slice_level
            .containing_selection(time_slice);
        let key = (
            asset.clone(),
            commodity_id.clone(),
            region_id.clone(),
            group,
        );
        let accum = existing_accum.entry(key).or_default();
        accum.add(marginal_cost, activity, activity_limit);
    }
}

// Compute per-group weighted-average marginal cost per asset, then take the max across assets
let mut group_prices: IndexMap<_, MoneyPerFlow> = IndexMap::new();
let mut priced_groups: HashSet<_> = HashSet::new();
for ((_, commodity_id, region_id, group), accum) in &existing_accum {
    // Solve the weighted average marginal cost for this group
    let Some(avg_cost) = accum.solve() else {
        continue;
    };

    // Take the max across assets for each group
    group_prices
        .entry((commodity_id.clone(), region_id.clone(), group.clone()))
        .and_modify(|c| *c = c.max(avg_cost))
        .or_insert(avg_cost);
    priced_groups.insert((commodity_id.clone(), region_id.clone(), group.clone()));
}

// Expand each group to individual time slices
let mut prices: IndexMap<_, MoneyPerFlow> = IndexMap::new();
expand_groups_to_prices(&group_prices, time_slice_info, &mut prices);


Collaborator Author


I'm keen to avoid this as it would probably be quite a complicated test. Better to come up with a suitable integration/regression test, I think

@tsmbland tsmbland marked this pull request as draft March 16, 2026 21:15
@tsmbland tsmbland marked this pull request as ready for review March 17, 2026 10:07
Copilot AI review requested due to automatic review settings March 17, 2026 10:07
@tsmbland tsmbland requested review from Aurashk March 17, 2026 10:24
Collaborator

@Aurashk Aurashk left a comment


Just a few thoughts but looks good.

Copilot AI review requested due to automatic review settings March 17, 2026 14:41
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated no new comments.

Comments suppressed due to low confidence (1)

src/simulation/prices.rs:201

  • CommodityPrices::insert / extend now assert! that a key does not already exist. Since CommodityPrices is a public type (re-exported from simulation), this changes behavior from “last write wins” to “panic on duplicate insert”, which is a breaking API change and can crash callers at runtime. Consider either (a) keeping overwrite semantics, (b) providing a fallible try_insert/try_extend that returns a Result, or (c) making this a debug_assert! so it only trips in debug builds while still catching logic bugs during development.
        price: MoneyPerFlow,
    ) {
        let key = (commodity_id.clone(), region_id.clone(), time_slice.clone());
        let existing = self.0.insert(key.clone(), price).is_some();
        assert!(!existing, "Key {key:?} already exists in the map");
    }

    /// Extend the prices map, panic if any key already exists
    pub fn extend<T>(&mut self, iter: T)
    where
        T: IntoIterator<Item = ((CommodityID, RegionID, TimeSliceID), MoneyPerFlow)>,
    {
        for (key, price) in iter {
            let existing = self.0.insert(key.clone(), price).is_some();
            assert!(!existing, "Key {key:?} already exists in the map");
        }


Collaborator

@alexdewar alexdewar left a comment


In general, this looks ok, but I think it can be improved a little.

I'd take @Aurashk's suggestion of passing maps by mutable ref to improve efficiency. There are also a couple of other places where we're creating IndexMaps unnecessarily.

Other than that, I think we can avoid some of the repetition without making things much more fiddly.

Firstly, both calculate_marginal_cost_prices and calculate_full_cost_prices have two loops which are virtually identical, except that one is for existing assets and the other for candidates. The only difference I could see is that the latter skips ahead if there is already a value in group_prices, so you could factor this loop out into a separate function and pass in an empty map for group_prices in the existing-assets case.

Secondly, you could write an internal function that handles both the marginal and full-cost cases. There are different ways you could go about this, but I'd just pass in an empty map for annual_activities in the marginal prices case and rewrite things to use None for "unknown annual activity" rather than panicking. Does that make sense?

I think you should do the first of these, but the second is partly a matter of taste (and I can see it getting a little convoluted), so I'll leave that one up to you.

}

#[test]
fn weighted_average_accumulator_single_value() {
Collaborator


Most/all of these tests could be parameterised, but they're also fine the way they are (and arguably more readable this way).

Copilot AI review requested due to automatic review settings March 18, 2026 11:40
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.



@tsmbland
Collaborator Author

Thanks both. I've implemented some of the suggestions and I think it's better for it.

Firstly, both calculate_marginal_cost_prices and calculate_full_cost_prices have two loops, which are virtually identical, except that one is for existing assets and the other for candidates. The only difference I could see is that the latter skips ahead if there is already a value in group_prices, so you could just factor out this loop into a separate function and pass in an empty map for group_prices for the existing-assets case

@alexdewar I think it's a bit more complicated than that:

  • we use different weights/accumulator types for existing assets and candidates
  • we use different approaches for aggregating over assets (max vs min)
  • this will get even more complicated in Add price strategies using load-weighted averaging #1205, where I'm adding a new approach for aggregating over assets (weighted average), and also have a different structure for the accumulator maps (single level vs nested).

By this point I think we're looking at 6 distinct pathways across the 4 pricing strategies: marginal/existing/max, marginal/candidate/min, full/existing/max, full/candidate/min, marginal/existing/average, full/existing/average (6 rather than 8 pathways because #1205 will use the average approach for existing assets only - candidates will still use min, which is currently a direct copy-paste). It might be possible to create one master function that takes these various options, but I think it would look quite horrible and obscure the logic.
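For the min/max part of the problem space, one option (as a sketch of the idea only, not something this PR does) would be passing the aggregation strategy as a value. The weighted-average pathway needs stateful accumulation, which is exactly what makes a single master function awkward, so this only covers the simple cases:

```rust
// Illustrative sketch: parameterise the "combine across assets" step.
// Names here are invented for the example, not taken from the codebase.
#[derive(Clone, Copy)]
enum Aggregation {
    Max,
    Min,
}

impl Aggregation {
    /// Fold one more per-asset cost into the running group price
    fn combine(self, current: f64, new: f64) -> f64 {
        match self {
            Aggregation::Max => current.max(new),
            Aggregation::Min => current.min(new),
        }
    }
}

/// Aggregate per-asset costs into a single group price; `None` for no assets
fn aggregate(costs: &[f64], how: Aggregation) -> Option<f64> {
    let mut iter = costs.iter().copied();
    let first = iter.next()?;
    Some(iter.fold(first, |acc, c| how.combine(acc, c)))
}

fn main() {
    // Existing assets take the max; candidates take the min
    println!("{:?}", aggregate(&[1.0, 3.0, 2.0], Aggregation::Max));
    println!("{:?}", aggregate(&[1.0, 3.0, 2.0], Aggregation::Min));
}
```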

Secondly, you could write an internal function that handles both the marginal and full-cost cases. There are different ways you could go about this, but I'd just pass in an empty map for annual_activities in the marginal prices case and rewrite things to use None for "unknown annual activity" rather than panicking. Does that make sense?

This seems a bit hacky to me (using the presence/absence of an annual activities map to signal the type of cost calculation). That said, I think there is more similarity between the marginal/full cost functions than within them. They're almost identical apart from whether or not we're adding fixed costs. Whereas I think treatment of existing assets vs candidates is more fundamentally different (especially in #1205).

Anyway, not sure where to go from here...

Copilot AI review requested due to automatic review settings March 18, 2026 12:09
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



Copilot AI review requested due to automatic review settings March 18, 2026 13:28
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



Comment on lines +39 to +44
/// Solve the weighted average.
///
/// Returns `None` if the denominator is zero (or close to zero)
fn finalise(self) -> Option<MoneyPerFlow> {
(self.denominator > Dimensionless::EPSILON).then(|| self.numerator / self.denominator)
}
Comment on lines 179 to +200
@@ -121,7 +187,7 @@ impl CommodityPrices {
         price: MoneyPerFlow,
     ) {
         let key = (commodity_id.clone(), region_id.clone(), time_slice.clone());
-        self.0.insert(key, price);
+        try_insert(&mut self.0, &key, price).unwrap();
     }

     /// Extend the prices map, panic if any key already exists
@@ -130,8 +196,23 @@ impl CommodityPrices {
         T: IntoIterator<Item = ((CommodityID, RegionID, TimeSliceID), MoneyPerFlow)>,
     {
         for (key, price) in iter {
-            let existing = self.0.insert(key.clone(), price).is_some();
-            assert!(!existing, "Key {key:?} already exists in the map");
+            try_insert(&mut self.0, &key, price).unwrap();
         }
Collaborator

@alexdewar alexdewar left a comment


tl;dr: LGTM!

[snip]

By this point I think we're looking at 6 distinct pathways across the 4 pricing strategies: marginal/existing/max, marginal/candidate/min, full/existing/max, full/candidate/min, marginal/existing/average, full/existing/average (6 rather than 8 pathways because #1205 will use the average approach for existing assets only - candidates will still use min, which is currently a direct copy-paste). It might be possible to create one master function that takes these various options, but I think it would look quite horrible and obscure the logic.

Oh, oops. Not sure how that escaped me. In that case, don't worry about it. You could always pass a function as an argument for the min/max/average part of this problem space, but I'm assuming there's not going to be a clean way of doing it.

Secondly, you could write an internal function that handles both the marginal and full-cost cases. There are different ways you could go about this, but I'd just pass in an empty map for annual_activities in the marginal prices case and rewrite things to use None for "unknown annual activity" rather than panicking. Does that make sense?

This seems a bit hacky to me (using the presence/absence of an annual activities map to signal the type of cost calculation). That said, I think there is more similarity between the marginal/full cost functions than within them. They're almost identical apart from whether or not we're adding fixed costs. Whereas I think treatment of existing assets vs candidates is more fundamentally different (especially in #1205).

I don't think it's necessarily that weird to have the logic be "if there's an annual activities map, use it, otherwise don't", but I agree that it will make things more convoluted. Def not wedded to it.

It probs makes sense for me to look at #1205 before talking about more refactoring anyways.

@tsmbland tsmbland merged commit 6c9170c into main Mar 19, 2026
8 checks passed
@tsmbland tsmbland deleted the buffered_prices branch March 19, 2026 09:29
@tsmbland tsmbland mentioned this pull request Mar 19, 2026
11 tasks

Development

Successfully merging this pull request may close these issues.

Flaw with price calculation for buffered commodities

4 participants