Calculate prices at the appropriate level #1196
Conversation
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1196      +/-   ##
==========================================
+ Coverage   89.02%   89.31%   +0.28%
==========================================
  Files          57       57
  Lines        7838     7974     +136
  Branches     7838     7974     +136
==========================================
+ Hits         6978     7122     +144
+ Misses        564      555       -9
- Partials      296      297       +1
```
Pull request overview
This PR adjusts marginal-cost and full-cost commodity price calculations so that prices are produced at the commodity’s declared time-slice granularity (annual/season/DayNight), flattening prices within each season/year via activity-weighted averaging (with an activity-limit fallback when activity is zero).
Changes:
- Added `TimeSliceLevel::containing_selection` to map a `TimeSliceID` to the appropriate `TimeSliceSelection` at a given level.
- Reworked marginal/full cost pricing to compute per-(commodity, region, selection) group prices using activity-weighted (or limit-weighted) averages, then expand them back to per-`TimeSliceID` prices.
- Simplified the `Solution` API by removing `iter_activity_keys_for_existing` (callers now use `iter_activity_for_existing` directly).
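The activity-weighted averaging with an activity-limit fallback described above can be sketched as below. This is an illustrative reconstruction, not the PR's actual `GroupAccum`: quantities are flattened to `f64` (the real code uses dimensioned types such as `MoneyPerFlow`), and the field and method names here are assumptions.

```rust
/// Sketch of an accumulator for an activity-weighted average of marginal
/// costs within one (commodity, region, selection) group. Field names and
/// the fallback policy are assumptions for illustration.
#[derive(Default)]
struct GroupAccum {
    /// Sum of cost * activity across the group
    weighted_cost_sum: f64,
    /// Sum of activity across the group
    activity_sum: f64,
    /// Fallback accumulators, weighted by activity limits instead
    limit_weighted_cost_sum: f64,
    limit_sum: f64,
}

impl GroupAccum {
    fn add(&mut self, cost: f64, activity: f64, activity_limit: f64) {
        self.weighted_cost_sum += cost * activity;
        self.activity_sum += activity;
        self.limit_weighted_cost_sum += cost * activity_limit;
        self.limit_sum += activity_limit;
    }

    /// Activity-weighted average; falls back to the limit-weighted average
    /// when there was no activity across the group. Returns `None` if both
    /// denominators are (close to) zero.
    fn solve(&self) -> Option<f64> {
        if self.activity_sum > f64::EPSILON {
            Some(self.weighted_cost_sum / self.activity_sum)
        } else if self.limit_sum > f64::EPSILON {
            Some(self.limit_weighted_cost_sum / self.limit_sum)
        } else {
            None
        }
    }
}

fn main() {
    // Normal case: weights come from activity
    let mut accum = GroupAccum::default();
    accum.add(10.0, 2.0, 4.0);
    accum.add(20.0, 2.0, 4.0);
    // (10*2 + 20*2) / 4 = 15
    println!("{:?}", accum.solve());

    // No activity anywhere in the group: fall back to activity limits
    let mut idle = GroupAccum::default();
    idle.add(10.0, 0.0, 3.0);
    idle.add(30.0, 0.0, 1.0);
    // (10*3 + 30*1) / 4 = 15
    println!("{:?}", idle.solve());
}
```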
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `src/time_slice.rs` | Adds helper to derive a coarser `TimeSliceSelection` (annual/season/single) from a `TimeSliceID`. |
| `src/simulation/prices.rs` | Implements grouped/weighted averaging for marginal/full cost prices and expands grouped prices back to time-slice keys; adds `GroupAccum` tests. |
| `src/simulation/optimisation.rs` | Removes `iter_activity_keys_for_existing` now that pricing consumes activity values directly. |
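As a rough illustration of the helper described in the table above, here is a hypothetical, heavily simplified version of the level-to-selection mapping. All type shapes here are assumptions for the sketch; the real `TimeSliceID`/`TimeSliceSelection` types in `src/time_slice.rs` carry more data.

```rust
// Hypothetical, simplified stand-ins for the real time-slice types
#[derive(Clone, Debug, PartialEq)]
struct TimeSliceID {
    season: String,
    time_of_day: String,
}

#[derive(Clone, Debug, PartialEq)]
enum TimeSliceSelection {
    Annual,
    Season(String),
    Single(TimeSliceID),
}

enum TimeSliceLevel {
    Annual,
    Season,
    DayNight,
}

impl TimeSliceLevel {
    /// Map a time slice to the coarser selection that contains it at this
    /// level (the idea behind `containing_selection`)
    fn containing_selection(&self, time_slice: &TimeSliceID) -> TimeSliceSelection {
        match self {
            TimeSliceLevel::Annual => TimeSliceSelection::Annual,
            TimeSliceLevel::Season => TimeSliceSelection::Season(time_slice.season.clone()),
            TimeSliceLevel::DayNight => TimeSliceSelection::Single(time_slice.clone()),
        }
    }
}

fn main() {
    let ts = TimeSliceID {
        season: "winter".to_string(),
        time_of_day: "day".to_string(),
    };
    // A seasonal commodity groups this slice under its whole season
    println!("{:?}", TimeSliceLevel::Season.containing_selection(&ts));
}
```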
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
src/simulation/prices.rs
Outdated
```rust
// For each (asset, commodity, region, group), accumulate marginal costs. For seasonal/annual
// commodities, marginal costs are weighted by activity (or the activity limit for assets with
// no activity)
let mut existing_accum: HashMap<_, GroupAccum> = HashMap::new();
for (asset, time_slice, activity) in activity_for_existing {
    let region_id = asset.region_id();

    // Get activity limits: used as a backup weight if no activity across the group
    let activity_limit = *asset
        .get_activity_limits_for_selection(&TimeSliceSelection::Single(time_slice.clone()))
        .end();

    // Iterate over the marginal costs for commodities we need prices for
    for (commodity_id, marginal_cost) in asset.iter_marginal_costs_with_filter(
        upstream_prices,
        year,
        time_slice,
        |cid: &CommodityID| markets_to_price.contains(&(cid.clone(), region_id.clone())),
    ) {
        let group = commodities[&commodity_id]
            .time_slice_level
            .containing_selection(time_slice);
        let key = (
            asset.clone(),
            commodity_id.clone(),
            region_id.clone(),
            group,
        );
        let accum = existing_accum.entry(key).or_default();
        accum.add(marginal_cost, activity, activity_limit);
    }
}

// Compute per-group weighted-average marginal cost per asset, then take the max across assets
let mut group_prices: IndexMap<_, MoneyPerFlow> = IndexMap::new();
let mut priced_groups: HashSet<_> = HashSet::new();
for ((_, commodity_id, region_id, group), accum) in &existing_accum {
    // Solve the weighted average marginal cost for this group
    let Some(avg_cost) = accum.solve() else {
        continue;
    };

    // Take the max across assets for each group
    group_prices
        .entry((commodity_id.clone(), region_id.clone(), group.clone()))
        .and_modify(|c| *c = c.max(avg_cost))
        .or_insert(avg_cost);
    priced_groups.insert((commodity_id.clone(), region_id.clone(), group.clone()));
}

// Expand each group to individual time slices
let mut prices: IndexMap<_, MoneyPerFlow> = IndexMap::new();
expand_groups_to_prices(&group_prices, time_slice_info, &mut prices);
```
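To illustrate the expansion step, here is a hypothetical, heavily simplified sketch of the idea behind `expand_groups_to_prices`: each grouped price is copied to every time slice its group contains, so prices come out flat within each season (or across the whole year). Keys are flattened to strings here, and the season/"annual" lookup is an assumption about the behaviour; the real function works with the PR's `TimeSliceSelection` keys.

```rust
use std::collections::HashMap;

/// Copy each grouped price to every time slice the group contains.
/// Groups are keyed by season name, or "annual" for a year-wide price.
fn expand_groups_to_prices(
    group_prices: &HashMap<String, f64>,
    time_slices: &[(String, String)], // (season, time_of_day)
    prices: &mut HashMap<(String, String), f64>,
) {
    for (season, time_of_day) in time_slices {
        // A seasonal group price applies to every slice in that season;
        // an "annual" group price applies to all slices
        if let Some(&price) = group_prices
            .get(season)
            .or_else(|| group_prices.get("annual"))
        {
            prices.insert((season.clone(), time_of_day.clone()), price);
        }
    }
}

fn main() {
    let groups = HashMap::from([("winter".to_string(), 5.0)]);
    let slices = vec![
        ("winter".to_string(), "day".to_string()),
        ("winter".to_string(), "night".to_string()),
        ("summer".to_string(), "day".to_string()),
    ];
    let mut prices = HashMap::new();
    expand_groups_to_prices(&groups, &slices, &mut prices);
    // Both winter slices get the same flat seasonal price; summer has no
    // group price, so it is left unpriced
    println!("{prices:?}");
}
```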
I'm keen to avoid this as it would probably be quite a complicated test. Better to come up with a suitable integration/regression test, I think
Aurashk
left a comment
Just a few thoughts but looks good.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated no new comments.
Comments suppressed due to low confidence (1)
src/simulation/prices.rs:201
`CommodityPrices::insert`/`extend` now `assert!` that a key does not already exist. Since `CommodityPrices` is a public type (re-exported from `simulation`), this changes behavior from "last write wins" to "panic on duplicate insert", which is a breaking API change and can crash callers at runtime. Consider either (a) keeping overwrite semantics, (b) providing a fallible `try_insert`/`try_extend` that returns a `Result`, or (c) making this a `debug_assert!` so it only trips in debug builds while still catching logic bugs during development.
```rust
    price: MoneyPerFlow,
) {
    let key = (commodity_id.clone(), region_id.clone(), time_slice.clone());
    let existing = self.0.insert(key.clone(), price).is_some();
    assert!(!existing, "Key {key:?} already exists in the map");
}

/// Extend the prices map, panic if any key already exists
pub fn extend<T>(&mut self, iter: T)
where
    T: IntoIterator<Item = ((CommodityID, RegionID, TimeSliceID), MoneyPerFlow)>,
{
    for (key, price) in iter {
        let existing = self.0.insert(key.clone(), price).is_some();
        assert!(!existing, "Key {key:?} already exists in the map");
    }
```
alexdewar
left a comment
In general, this looks ok, but I think it can be improved a little.
I'd take @Aurashk's suggestion of passing maps by mutable ref to improve efficiency. There are also a couple of other places where we're creating IndexMaps unnecessarily.
Other than that, I think we can avoid some of the repetition without making things much more fiddly.

Firstly, both `calculate_marginal_cost_prices` and `calculate_full_cost_prices` have two loops which are virtually identical, except that one is for existing assets and the other for candidates. The only difference I could see is that the latter skips ahead if there is already a value in `group_prices`, so you could just factor out this loop into a separate function and pass in an empty map for `group_prices` for the existing-assets case.

Secondly, you could write an internal function that handles both the marginal and full-cost cases. There are different ways you could go about this, but I'd just pass in an empty map for `annual_activities` in the marginal prices case and rewrite things to use `None` for "unknown annual activity" rather than panicking. Does that make sense?

I think you should do the first of these, but the second one is partly a matter of taste (and I can see it getting a little convoluted), so I'll leave that one up to you.
```rust
}

#[test]
fn weighted_average_accumulator_single_value() {
```
Most/all of these tests could be parameterised, but they're also fine the way they are (and arguably more readable this way).
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
Thanks both. I've implemented some of the suggestions and I think it's better for it.
@alexdewar I think it's a bit more complicated than that:
By this point I think we're looking at 6 distinct pathways across the 4 pricing strategies: marginal/existing/max, marginal/candidate/min, full/existing/max, full/candidate/min, marginal/existing/average, full/existing/average (6 rather than 8 pathways because #1205 will use the average approach for existing assets only - candidates will still use min, which is currently a direct copy-paste). It might be possible to create one master function that takes these various options, but I think it would look quite horrible and obscure the logic.
This seems a bit hacky to me (using the presence/absence of an annual activities map to signal the type of cost calculation). That said, I think there is more similarity between the marginal/full cost functions than within them. They're almost identical apart from whether or not we're adding fixed costs. Whereas I think treatment of existing assets vs candidates is more fundamentally different (especially in #1205). Anyway, not sure where to go from here...
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
```rust
/// Solve the weighted average.
///
/// Returns `None` if the denominator is zero (or close to zero)
fn finalise(self) -> Option<MoneyPerFlow> {
    (self.denominator > Dimensionless::EPSILON).then(|| self.numerator / self.denominator)
}
```
```diff
@@ -121,7 +187,7 @@ impl CommodityPrices {
         price: MoneyPerFlow,
     ) {
         let key = (commodity_id.clone(), region_id.clone(), time_slice.clone());
-        self.0.insert(key, price);
+        try_insert(&mut self.0, &key, price).unwrap();
     }

     /// Extend the prices map, panic if any key already exists
@@ -130,8 +196,23 @@ impl CommodityPrices {
         T: IntoIterator<Item = ((CommodityID, RegionID, TimeSliceID), MoneyPerFlow)>,
     {
         for (key, price) in iter {
-            let existing = self.0.insert(key.clone(), price).is_some();
-            assert!(!existing, "Key {key:?} already exists in the map");
+            try_insert(&mut self.0, &key, price).unwrap();
         }
```
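For context, here is a hypothetical sketch of what a `try_insert` helper like the one called in the diff might look like. The real helper's signature, error type, and map type are not shown in this thread (the real map is an `IndexMap` rather than a `HashMap`), so everything here is an assumption for illustration.

```rust
use std::collections::HashMap;
use std::hash::Hash;

/// Insert `value` at `key`, returning an error if the key already exists.
/// Hypothetical helper matching how try_insert is called in the diff above.
fn try_insert<K, V>(map: &mut HashMap<K, V>, key: &K, value: V) -> Result<(), String>
where
    K: Eq + Hash + Clone + std::fmt::Debug,
{
    if map.contains_key(key) {
        return Err(format!("Key {key:?} already exists in the map"));
    }
    map.insert(key.clone(), value);
    Ok(())
}

fn main() {
    let mut prices: HashMap<&str, f64> = HashMap::new();
    // First insert succeeds; a duplicate key is rejected instead of
    // silently overwriting (the panic then comes from the caller's unwrap)
    assert!(try_insert(&mut prices, &"elec", 1.0).is_ok());
    assert!(try_insert(&mut prices, &"elec", 2.0).is_err());
    assert_eq!(prices["elec"], 1.0);
}
```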
alexdewar
left a comment
tl;dr: LGTM!
[snip]
> By this point I think we're looking at 6 distinct pathways across the 4 pricing strategies: marginal/existing/max, marginal/candidate/min, full/existing/max, full/candidate/min, marginal/existing/average, full/existing/average (6 rather than 8 pathways because #1205 will use the average approach for existing assets only - candidates will still use min, which is currently a direct copy-paste). It might be possible to create one master function that takes these various options, but I think it would look quite horrible and obscure the logic.
Oh, oops. Not sure how that escaped me. In that case, dw about it. You could always pass a function as an argument for the min/max/average part of this problem space, but I'm assuming there's not going to be a clean way of doing it.
> Secondly, you could write an internal function that handles both the marginal and full-cost cases. There are different ways you could go about this, but I'd just pass in an empty map for `annual_activities` in the marginal prices case and rewrite things to use `None` for "unknown annual activity" rather than panicking. Does that make sense?

> This seems a bit hacky to me (using the presence/absence of an annual activities map to signal the type of cost calculation). That said, I think there is more similarity between the marginal/full cost functions than within them. They're almost identical apart from whether or not we're adding fixed costs. Whereas I think treatment of existing assets vs candidates is more fundamentally different (especially in #1205).

I don't think it's necessarily that weird to have the logic be "if there's an annual activities map, use it, otherwise don't", but I agree that it will make things more convoluted. Def not wedded to it.
It probs makes sense for me to look at #1205 before talking about more refactoring anyways.
Description
This PR is to ensure that commodities have prices specified at the appropriate granularity. For example, commodities with a season timeslice level should have prices specified at the season level (i.e. flat within each season), and prices with an annual timeslice level should have one price for the year. This is already the case with shadow prices (we just duplicate the duals), but not necessarily the case for the marginal/full strategies, because these are calculated using the price of other commodities which may vary at the timeslice level.
For lack of a better approach, the way I'm doing this is to calculate marginal costs using a weighted average within each season/year, weighted according to timeslice-level activity. We then calculate prices by taking the maximum of this across technologies, as before (or the maximum of full costs for that strategy).
It's possible that some assets may not have any activity within a particular season, which makes it impossible to do a weighted average. However, we still want these assets to be able to contribute to the price, as discussed in #1148. So, in this case I'm using the activity limits as the weight (i.e. maximum potential activity assuming full utilisation). This is also the case for candidates.
Notably, all this doesn't have any impact on any of the example models, because prices for seasonal commodities are already flat. I think this is inevitable when seasonal commodities are at the top of the supply chain, or more specifically if granularity only ever increases when going down the supply chain. This reminds me of MUSE1, where we sort of have a buffering feature, but it only works as expected in these scenarios. I wonder if it's worth adding an example model that goes against this rule, or at least having a look to see what would happen (we've been trying to do something in MUSE1 where electricity gets buffered, but haven't been able to due to the aforementioned restriction). We already sort of have this with the circularity model, but this doesn't work with the new pricing strategies.
Going forward, I wonder if it would be worth keying the price maps with `TimeSliceSelection` instead of `TimeSliceID`. This would save us from having to duplicate prices, but may make things a bit more fiddly when it comes to retrieving prices.

Fixes #1184
Type of change
Key checklist
- `$ cargo test`
- `$ cargo doc`
- present in the previous release
Further checks