(fix): update the daily summary to be more generic #753
narsimhaReddyJuspay wants to merge 1 commit into
Conversation
Walkthrough

The PR refactors the daily summary to report call and lead outcomes generically.

Changes

Generic Outcome Breakdown Refactoring
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (5 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
app/services/langfuse/tasks/score_monitor/score.py (2)
19-24: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Remove unused `get_lead_based_analytics` import — unblock autoflake check.

The import at line 21 is not used anywhere in this file after the refactor to `_get_daily_call_stats`. Drop it to resolve the PR Build Check failure.

Fix:

```diff
 from app.database.accessor.breeze_buddy.lead_call_tracker import (
     get_all_lead_call_trackers,
-    get_lead_based_analytics,
     get_lead_by_call_id,
     update_langfuse_scores,
 )
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@app/services/langfuse/tasks/score_monitor/score.py` around lines 19 - 24, Remove the unused import get_lead_based_analytics from the import list in this module to satisfy the autoflake check: edit the import block importing from app.database.accessor.breeze_buddy.lead_call_tracker and delete get_lead_based_analytics so only the used symbols (get_all_lead_call_trackers, get_lead_by_call_id, update_langfuse_scores) remain; ensure no other references to get_lead_based_analytics exist (the new helper _get_daily_call_stats replaces its usage).
394-500: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Exclude `ABORT` from "successful" metrics.

The code counts `ABORT` as a successful call because it lands in `call_outcome_breakdown` (lines 469-472) and contributes to `calls_successful` (line 500). However, `ABORT` represents a prematurely terminated call (customer hangup, error, or agent termination), not a successful resolution. The Slack summary then reports this count under "Successful Calls" (line 668) and "Successful Leads" (line 695), which misrepresents the daily metrics.

Add `ABORT` to `NON_OUTCOME` (line 397) to align with `BUSY`, or track it as a separate counter similar to the `calls_busy` pattern (lines 407, 437-438, 476).

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@app/services/langfuse/tasks/score_monitor/score.py` around lines 394 - 500, The code currently counts "ABORT" as a successful outcome; update NON_OUTCOME (the frozenset constant) to include "ABORT" so it is excluded from call_outcome_breakdown and lead outcome aggregation, and add a separate counter (e.g., calls_abort and per-lead "abort" count similar to calls_busy/no_answer handling) in default_stats and in the loop where outcomes are tallied (referencing NON_OUTCOME, call_outcome_breakdown, calls_successful, calls_busy, and per_lead) so ABORTs are not summed into calls_successful/leads_successful but are tracked independently for reporting.
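A minimal sketch of the suggested fix. `NON_OUTCOME`, the counter names, and the helper below are taken from the review comments or invented for illustration — the real module is not shown here, so treat this as a shape, not the actual implementation:

```python
# Reviewer's suggestion: treat ABORT like BUSY/NO_ANSWER rather than as a
# successful outcome, and track it with its own counter.
NON_OUTCOME = frozenset({"NO_ANSWER", "BUSY", "ABORT"})

def tally_outcome(outcome, stats, breakdown):
    """Route one call outcome into the right counter (hypothetical helper)."""
    if outcome == "NO_ANSWER":
        stats["calls_no_answer"] += 1
    elif outcome == "BUSY":
        stats["calls_busy"] += 1
    elif outcome == "ABORT":
        stats["calls_abort"] += 1  # tracked separately, never "successful"
    elif outcome and outcome not in NON_OUTCOME:
        breakdown[outcome] = breakdown.get(outcome, 0) + 1

stats = {"calls_no_answer": 0, "calls_busy": 0, "calls_abort": 0}
breakdown = {}
for o in ["CONFIRM", "ABORT", "BUSY", "CONFIRM", None]:
    tally_outcome(o, stats, breakdown)

calls_successful = sum(breakdown.values())  # ABORT no longer inflates this
```

With the sample outcomes above, `calls_successful` counts only the two `CONFIRM` calls; the `ABORT` lands in its own counter.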
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@app/services/langfuse/tasks/score_monitor/score.py`:
- Around line 515-535: leads_successful is currently computed by summing
per-outcome buckets in lead_outcome_breakdown which double-counts leads that
have multiple distinct outcomes; instead compute leads_successful directly from
per_lead by counting unique leads whose lead_data["outcomes"] contains at least
one non-NON_OUTCOME outcome (e.g., any outcome != "NON_OUTCOME" and count>0),
leave lead_outcome_breakdown as the per-outcome distribution, and then recalc
leads_successful_pct using that unique-lead count (and guard any later divisions
by zero).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f13d9a05-3034-4239-87b2-dd3889d985a8
📒 Files selected for processing (1)
app/services/langfuse/tasks/score_monitor/score.py
```python
lead_outcome_breakdown: Dict[str, int] = {}

for lead_data in per_lead.values():
    # Lead is "picked" if at least one call was answered
    if lead_data["finished"] > lead_data["no_answer"]:
        leads_picked += 1
    # Accumulate per-outcome lead count
    for outcome, count in lead_data["outcomes"].items():
        if count > 0:
            lead_outcome_breakdown[outcome] = (
                lead_outcome_breakdown.get(outcome, 0) + 1
            )

leads_successful = sum(lead_outcome_breakdown.values())

leads_picked_pct = (
    (leads_picked / total_leads * 100) if total_leads > 0 else 0.0
)
leads_successful_pct = (
    (leads_successful / leads_picked * 100) if leads_picked > 0 else 0.0
)
```
leads_successful can double-count and exceed leads_picked.

`lead_outcome_breakdown[outcome]` is incremented once per (lead, outcome) pair (lines 522-526), so a single lead whose calls produced two distinct outcomes (e.g., one call CONFIRM, another CANCEL) contributes +1 to each bucket. Summing those buckets at line 528 to derive `leads_successful` therefore counts that lead twice, which means:

- `leads_successful` no longer represents "unique leads that reached a successful outcome" (which the docstring/default_stats comment at line 417 implies).
- `leads_successful_pct = leads_successful / leads_picked * 100` (lines 533-534) can exceed 100%.
- The Slack output line at 695 (`Successful Leads: ...`) may print a number larger than `leads_picked` and confuse on-call readers.

The per-outcome percentages at lines 700-705 (`count / leads_successful`) are internally consistent with this inflated denominator, but the headline `leads_successful` field is the one users will trust.

Suggest deriving `leads_successful` directly from `per_lead` as the count of unique leads that had at least one non-`NON_OUTCOME` outcome, and keeping the breakdown as a separate per-outcome distribution.
🛠️ Proposed fix

```diff
 for lead_data in per_lead.values():
     # Lead is "picked" if at least one call was answered
     if lead_data["finished"] > lead_data["no_answer"]:
         leads_picked += 1
     # Accumulate per-outcome lead count
     for outcome, count in lead_data["outcomes"].items():
         if count > 0:
             lead_outcome_breakdown[outcome] = (
                 lead_outcome_breakdown.get(outcome, 0) + 1
             )

-leads_successful = sum(lead_outcome_breakdown.values())
+# Count unique leads with at least one real template outcome,
+# so leads_successful never exceeds leads_picked even when a
+# single lead produced multiple distinct outcomes.
+leads_successful = sum(
+    1 for lead_data in per_lead.values() if lead_data["outcomes"]
+)
```
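A runnable illustration of the double-count, using toy `per_lead` data shaped like the snippet above (the field names follow the review; the values are made up):

```python
# Toy per_lead data: lead "A" produced two distinct outcomes across its calls.
per_lead = {
    "A": {"finished": 2, "no_answer": 0, "outcomes": {"CONFIRM": 1, "CANCEL": 1}},
    "B": {"finished": 1, "no_answer": 0, "outcomes": {"CONFIRM": 1}},
    "C": {"finished": 1, "no_answer": 1, "outcomes": {}},
}

# Current approach: sum the per-outcome buckets -> lead "A" is counted twice.
lead_outcome_breakdown = {}
for lead_data in per_lead.values():
    for outcome, count in lead_data["outcomes"].items():
        if count > 0:
            lead_outcome_breakdown[outcome] = (
                lead_outcome_breakdown.get(outcome, 0) + 1
            )
inflated = sum(lead_outcome_breakdown.values())  # 3, but only 2 leads succeeded

# Proposed approach: count unique leads with at least one real outcome.
leads_successful = sum(
    1 for lead_data in per_lead.values() if lead_data["outcomes"]
)  # 2
```

The breakdown dict itself stays useful as a per-outcome distribution; only the headline count changes.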
Pull request overview
This PR updates the Breeze Buddy daily Slack summary to treat call/lead outcomes generically, replacing hard-coded outcome buckets (CONFIRM/CANCEL/ADDRESS_UPDATED) with dynamic per-outcome breakdowns derived from recent call trackers.
Changes:
- Compute call stats using a dynamic `call_outcome_breakdown` (excluding `NO_ANSWER` and `BUSY`).
- Compute lead stats from per-lead aggregation built from call trackers and emit a dynamic `lead_outcome_breakdown`.
- Update the Slack daily summary formatting to display per-outcome breakdown lines for both calls and leads.
Comments suppressed due to low confidence (1)
app/services/langfuse/tasks/score_monitor/score.py:507
`calls_picked` is derived as `calls_attempted - calls_no_answer`, but `calls_attempted` counts FINISHED calls even when `tracker.outcome` is None (possible in Daily mode where completion can set `outcome=None`). Those calls end up counted as "picked up" despite having an unknown outcome. Consider defining "picked" as FINISHED calls with a non-null outcome other than NO_ANSWER (or track a separate `calls_unknown_outcome` counter and exclude it from picked/success percentages).
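One way the suggested guard could look — a hedged sketch using flat `(status, outcome)` tuples in place of the real tracker rows, since the tracker model is not shown here:

```python
# Hypothetical flat records (status, outcome) standing in for tracker rows.
calls = [("FINISHED", "CONFIRM"), ("FINISHED", None), ("FINISHED", "NO_ANSWER")]

calls_attempted = 0
calls_no_answer = 0
calls_unknown_outcome = 0
for status, outcome in calls:
    if status == "FINISHED":
        calls_attempted += 1
        if outcome == "NO_ANSWER":
            calls_no_answer += 1
        elif outcome is None:
            calls_unknown_outcome += 1  # Daily mode can finish with outcome=None

# "Picked" now excludes both no-answer and unknown-outcome calls.
calls_picked = calls_attempted - calls_no_answer - calls_unknown_outcome
```

With the three sample calls, only the `CONFIRM` call counts as picked; the `outcome=None` call no longer inflates the picked-up rate.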
```python
# Initialize counters for call-based metrics
calls_attempted = 0  # FINISHED status
calls_no_answer = 0
calls_busy = 0
call_outcome_breakdown: Dict[str, int] = {}
provider_counts = default_stats["provider_split"].copy()

# Per-lead data grouped by request_id
# { request_id -> {"finished": int, "no_answer": int, "outcomes": {outcome: count}} }
per_lead: Dict[str, Dict[str, Any]] = {}

# Process each call tracker
for tracker, calling_provider in call_trackers:
    # Count attempted calls (FINISHED status)
    if tracker.status and tracker.status.value == "FINISHED":
        calls_attempted += 1

    outcome_value: Optional[str] = tracker.outcome or None

    # Call-level outcome tallying
    if outcome_value == "NO_ANSWER":
        calls_no_answer += 1
    elif outcome_value == "BUSY":
        calls_busy += 1
    elif outcome_value and outcome_value not in NON_OUTCOME:
        # Any named outcome that isn't NO_ANSWER or BUSY is a real
        # template outcome (includes ABORT, CONFIRM, custom outcomes, etc.)
        call_outcome_breakdown[outcome_value] = (
            call_outcome_breakdown.get(outcome_value, 0) + 1
        )

    # Count by provider
    if calling_provider:
        provider_upper = calling_provider.upper()
        if provider_upper in provider_counts:
            provider_counts[provider_upper] += 1

    # Accumulate per-lead data (group by request_id)
    request_id = tracker.request_id or tracker.id  # fallback to row id
    if request_id not in per_lead:
        per_lead[request_id] = {
            "finished": 0,
            "no_answer": 0,
            "outcomes": {},
        }
    if tracker.status and tracker.status.value == "FINISHED":
        per_lead[request_id]["finished"] += 1
    if outcome_value == "NO_ANSWER":
        per_lead[request_id]["no_answer"] += 1
    if outcome_value and outcome_value not in NON_OUTCOME:
        per_lead[request_id]["outcomes"][outcome_value] = (
            per_lead[request_id]["outcomes"].get(outcome_value, 0) + 1
        )

# Calculate call-based derived metrics
calls_picked = calls_attempted - calls_no_answer
calls_successful = sum(call_outcome_breakdown.values())
calls_picked_pct = (
    (calls_picked / calls_attempted * 100) if calls_attempted > 0 else 0.0
)
calls_successful_pct = (
    (calls_successful / calls_picked * 100) if calls_picked > 0 else 0.0
)
```
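The tallying logic above can be sanity-checked standalone with stub trackers. This sketch uses `SimpleNamespace` stand-ins for the ORM rows and assumes `NON_OUTCOME` contains only `NO_ANSWER` and `BUSY` (per the review, `ABORT` is currently not in it):

```python
from types import SimpleNamespace

NON_OUTCOME = frozenset({"NO_ANSWER", "BUSY"})  # assumed, per the review

def make_tracker(status, outcome, request_id):
    # Minimal stand-in for the tracker row used in the loop above.
    return SimpleNamespace(
        status=SimpleNamespace(value=status) if status else None,
        outcome=outcome,
        request_id=request_id,
        id=request_id,
    )

call_trackers = [
    (make_tracker("FINISHED", "CONFIRM", "r1"), "exotel"),
    (make_tracker("FINISHED", "NO_ANSWER", "r1"), "exotel"),
    (make_tracker("FINISHED", "BUSY", "r2"), None),
]

calls_attempted = calls_no_answer = calls_busy = 0
call_outcome_breakdown = {}
per_lead = {}
for tracker, provider in call_trackers:
    if tracker.status and tracker.status.value == "FINISHED":
        calls_attempted += 1
    outcome = tracker.outcome or None
    if outcome == "NO_ANSWER":
        calls_no_answer += 1
    elif outcome == "BUSY":
        calls_busy += 1
    elif outcome and outcome not in NON_OUTCOME:
        call_outcome_breakdown[outcome] = call_outcome_breakdown.get(outcome, 0) + 1
    # Group per-lead stats by request_id, as in the PR.
    lead = per_lead.setdefault(
        tracker.request_id or tracker.id,
        {"finished": 0, "no_answer": 0, "outcomes": {}},
    )
    if tracker.status and tracker.status.value == "FINISHED":
        lead["finished"] += 1
    if outcome == "NO_ANSWER":
        lead["no_answer"] += 1
    if outcome and outcome not in NON_OUTCOME:
        lead["outcomes"][outcome] = lead["outcomes"].get(outcome, 0) + 1

calls_picked = calls_attempted - calls_no_answer
```

Three FINISHED calls across two leads yield one real outcome (`CONFIRM`), one no-answer, and one busy; lead `r2` ends with an empty outcomes dict.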
```diff
 call_analytics_lines = [
     f"• Total Calls Attempted: *{call_stats['calls_attempted']}*",
     f"• Calls Picked Up: *{call_stats['calls_picked']}* ({call_stats['calls_picked_pct']}% of attempted calls)",
     f"• Successful Calls: *{call_stats['calls_successful']}* ({call_stats['calls_successful_pct']}% of picked calls)",
     f"• Answered but Unsuccessful (Busy): *{call_stats['calls_busy']}* ({call_stats['calls_busy_pct']}% of picked calls)",
 ]
 # Dynamic per-outcome call count
 call_outcome_breakdown = call_stats.get("call_outcome_breakdown", {})
 calls_successful = call_stats["calls_successful"]
 for outcome, count in call_outcome_breakdown.items():
     pct = (
         round(count / calls_successful * 100, 1)
         if calls_successful > 0
         else 0.0
     )
     outcome_label = outcome.replace("_", " ").title()
     call_analytics_lines.append(
         f" ↳ {outcome_label}: *{count}* ({pct}% of successful)"
     )
 sections.append(
     {
         "title": "Call-Based Stats",
-        "text": call_analytics_text,
+        "text": "\n".join(call_analytics_lines),
     }
 )

 # Section 4: Lead-based analytics
-lead_analytics_text = (
-    f"• Total Leads Processed: *{call_stats['total_leads']}*\n"
-    f"• Leads Picked: *{call_stats['leads_picked']}* ({call_stats['leads_picked_pct']}% of total leads)\n"
-    f"• Successful Leads: *{call_stats['leads_successful']}* ({call_stats['leads_successful_pct']}% of picked leads)\n"
-    f"• Confirmed: *{call_stats['leads_confirmed']}* ({call_stats['leads_confirmed_pct']}% of successful)\n"
-    f"• Cancelled: *{call_stats['leads_cancelled']}* ({call_stats['leads_cancelled_pct']}% of successful)\n"
-    f"• Address Updated: *{call_stats['leads_address_updated']}* ({call_stats['leads_address_updated_pct']}% of successful)"
-)
+lead_analytics_lines = [
+    f"• Total Leads Processed: *{call_stats['total_leads']}*",
+    f"• Leads Picked: *{call_stats['leads_picked']}* ({call_stats['leads_picked_pct']}% of total leads)",
+    f"• Successful Leads: *{call_stats['leads_successful']}* ({call_stats['leads_successful_pct']}% of picked leads)",
+]
```