(fix): update the daily summary to be more generic #753

Open
narsimhaReddyJuspay wants to merge 1 commit into juspay:release from narsimhaReddyJuspay:update-daily-summary-alert

Conversation

@narsimhaReddyJuspay
Contributor

@narsimhaReddyJuspay narsimhaReddyJuspay commented May 7, 2026

Summary by CodeRabbit

  • Refactor
    • Refactored the daily analytics calculation to use dynamic outcome tracking, adding per-outcome breakdowns to daily summary reports for clearer visibility into call and lead outcomes.

Copilot AI review requested due to automatic review settings May 7, 2026 18:09
@coderabbitai

coderabbitai Bot commented May 7, 2026

Walkthrough

The PR refactors ScoreMonitor._get_daily_call_stats to use generic outcome classification, excluding only NO_ANSWER and BUSY from per-outcome breakdowns. It replaces hardcoded lead outcome fields with dynamic call_outcome_breakdown and lead_outcome_breakdown maps computed from call and lead aggregations.

Changes

Generic Outcome Breakdown Refactoring

All changes are in app/services/langfuse/tasks/score_monitor/score.py:

  • Data Contract & Outcome Definition: Docstring updated to reflect the new generic outcome handling; a NON_OUTCOME constant is defined to exclude NO_ANSWER and BUSY from outcome breakdowns.
  • Call-Tracker Outcome Processing: The call-tracker iteration computes calls_no_answer, calls_busy, and a generic call_outcome_breakdown for outcomes outside NON_OUTCOME; a per-lead aggregation structure is added to support the lead-level breakdown.
  • Lead-Level Outcome Aggregation: Lead-based metrics are computed from the per-lead aggregation; lead_outcome_breakdown is generated from aggregated outcomes; leads_picked and leads_successful are derived from the accumulated per-lead data.
  • Return Payload Assembly: The final stats object is assembled with sorted call_outcome_breakdown and lead_outcome_breakdown dictionaries; the fixed outcome fields (leads_confirmed, leads_cancelled, leads_address_updated) and their percentage fields are removed.
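The dynamic tallying described above can be sketched roughly as follows. This is an illustrative helper, not the actual score.py code: the NON_OUTCOME name comes from the PR walkthrough, while the function name and the list-of-strings input are assumptions.

```python
from collections import Counter
from typing import Dict, List, Optional

# From the PR walkthrough: outcomes excluded from per-outcome breakdowns.
NON_OUTCOME = frozenset({"NO_ANSWER", "BUSY"})

def build_outcome_breakdown(outcomes: List[Optional[str]]) -> Dict[str, int]:
    """Tally every named outcome that is not in NON_OUTCOME, sorted by name.

    None entries (calls with no recorded outcome) are skipped entirely.
    """
    counts = Counter(o for o in outcomes if o and o not in NON_OUTCOME)
    # Sorted for stable ordering, matching the sorted dictionaries the
    # PR assembles into the final stats payload.
    return dict(sorted(counts.items()))
```

Because the breakdown is keyed by whatever outcome strings the templates produce, new outcomes show up in the daily summary without further code changes.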

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • juspay/clairvoyance#414: Main PR refactors the same ScoreMonitor._get_daily_call_stats implementation to use dynamic per-outcome breakdowns, replacing the hardcoded outcome handling that was initially added.
  • juspay/clairvoyance#399: Both PRs modify score monitor logic in send_daily_summary_if_time and related daily-summary helpers in the same file.
  • juspay/clairvoyance#584: Both PRs modify score.py's daily summary logic, specifically provider breakdown and Slack summary construction.

Poem

🐇 Dynamic outcomes hop along,
No more hardcoded paths gone wrong,
Call and lead breakdowns now align,
Generic logic makes it fine! ✨

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: refactoring the daily summary logic to use generic, template-defined outcome breakdowns instead of hardcoded outcome handling.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.
  • Linked Issues Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.



coderabbitai (bot) left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
app/services/langfuse/tasks/score_monitor/score.py (2)

19-24: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Remove unused get_lead_based_analytics import — unblock autoflake check.

The import at line 21 is not used anywhere in this file after the refactor to _get_daily_call_stats. Drop it to resolve the PR Build Check failure.

Fix
 from app.database.accessor.breeze_buddy.lead_call_tracker import (
     get_all_lead_call_trackers,
-    get_lead_based_analytics,
     get_lead_by_call_id,
     update_langfuse_scores,
 )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@app/services/langfuse/tasks/score_monitor/score.py` around lines 19 - 24,
Remove the unused import get_lead_based_analytics from the import list in this
module to satisfy the autoflake check: edit the import block importing from
app.database.accessor.breeze_buddy.lead_call_tracker and delete
get_lead_based_analytics so only the used symbols (get_all_lead_call_trackers,
get_lead_by_call_id, update_langfuse_scores) remain; ensure no other references
to get_lead_based_analytics exist (the new helper _get_daily_call_stats replaces
its usage).

394-500: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Exclude ABORT from "successful" metrics.

The code counts ABORT as a successful call because it lands in call_outcome_breakdown (lines 469–472) and contributes to calls_successful (line 500). However, ABORT represents a prematurely terminated call (customer hangup, error, or agent termination), not a successful resolution. The Slack summary then reports this count under "Successful Calls" (line 668) and "Successful Leads" (line 695), which misrepresents the daily metrics.

Add ABORT to NON_OUTCOME (line 397) to align with BUSY, or track it as a separate counter similar to the calls_busy pattern (lines 407, 437–438, 476).
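A minimal sketch of that suggestion, assuming the NON_OUTCOME frozenset and the counter names described in the review; the helper itself and its input shape are hypothetical:

```python
from typing import Dict, List, Optional, Tuple

# Suggested fix: ABORT joins NO_ANSWER and BUSY in NON_OUTCOME and gets its
# own counter, so it is no longer summed into the "successful" breakdown.
NON_OUTCOME = frozenset({"NO_ANSWER", "BUSY", "ABORT"})

def tally_calls(
    outcomes: List[Optional[str]],
) -> Tuple[Dict[str, int], Dict[str, int]]:
    """Return (fixed counters, per-outcome breakdown) for a day's calls."""
    counters = {"calls_no_answer": 0, "calls_busy": 0, "calls_abort": 0}
    breakdown: Dict[str, int] = {}
    for o in outcomes:
        if o == "NO_ANSWER":
            counters["calls_no_answer"] += 1
        elif o == "BUSY":
            counters["calls_busy"] += 1
        elif o == "ABORT":
            counters["calls_abort"] += 1
        elif o and o not in NON_OUTCOME:
            # Any remaining named outcome is a real template outcome.
            breakdown[o] = breakdown.get(o, 0) + 1
    counters["calls_successful"] = sum(breakdown.values())
    return counters, breakdown
```

With this shape, an ABORT is still visible in the report (via calls_abort) but never inflates calls_successful.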

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@app/services/langfuse/tasks/score_monitor/score.py` around lines 394 - 500,
The code currently counts "ABORT" as a successful outcome; update NON_OUTCOME
(the frozenset constant) to include "ABORT" so it is excluded from
call_outcome_breakdown and lead outcome aggregation, and add a separate counter
(e.g., calls_abort and per-lead "abort" count similar to calls_busy/no_answer
handling) in default_stats and in the loop where outcomes are tallied
(referencing NON_OUTCOME, call_outcome_breakdown, calls_successful, calls_busy,
and per_lead) so ABORTs are not summed into calls_successful/leads_successful
but are tracked independently for reporting.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@app/services/langfuse/tasks/score_monitor/score.py`:
- Around line 515-535: leads_successful is currently computed by summing
per-outcome buckets in lead_outcome_breakdown which double-counts leads that
have multiple distinct outcomes; instead compute leads_successful directly from
per_lead by counting unique leads whose lead_data["outcomes"] contains at least
one non-NON_OUTCOME outcome (e.g., any outcome != "NON_OUTCOME" and count>0),
leave lead_outcome_breakdown as the per-outcome distribution, and then recalc
leads_successful_pct using that unique-lead count (and guard any later divisions
by zero).

---

Outside diff comments:
In `@app/services/langfuse/tasks/score_monitor/score.py`:
- Around line 19-24: Remove the unused import get_lead_based_analytics from the
import list in this module to satisfy the autoflake check: edit the import block
importing from app.database.accessor.breeze_buddy.lead_call_tracker and delete
get_lead_based_analytics so only the used symbols (get_all_lead_call_trackers,
get_lead_by_call_id, update_langfuse_scores) remain; ensure no other references
to get_lead_based_analytics exist (the new helper _get_daily_call_stats replaces
its usage).
- Around line 394-500: The code currently counts "ABORT" as a successful
outcome; update NON_OUTCOME (the frozenset constant) to include "ABORT" so it is
excluded from call_outcome_breakdown and lead outcome aggregation, and add a
separate counter (e.g., calls_abort and per-lead "abort" count similar to
calls_busy/no_answer handling) in default_stats and in the loop where outcomes
are tallied (referencing NON_OUTCOME, call_outcome_breakdown, calls_successful,
calls_busy, and per_lead) so ABORTs are not summed into
calls_successful/leads_successful but are tracked independently for reporting.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f13d9a05-3034-4239-87b2-dd3889d985a8

📥 Commits

Reviewing files that changed from the base of the PR and between a40123c and 941c086.

📒 Files selected for processing (1)
  • app/services/langfuse/tasks/score_monitor/score.py

Comment on lines +515 to 535
            lead_outcome_breakdown: Dict[str, int] = {}

            for lead_data in per_lead.values():
                # Lead is "picked" if at least one call was answered
                if lead_data["finished"] > lead_data["no_answer"]:
                    leads_picked += 1
                # Accumulate per-outcome lead count
                for outcome, count in lead_data["outcomes"].items():
                    if count > 0:
                        lead_outcome_breakdown[outcome] = (
                            lead_outcome_breakdown.get(outcome, 0) + 1
                        )

            leads_successful = sum(lead_outcome_breakdown.values())

            leads_picked_pct = (
                (leads_picked / total_leads * 100) if total_leads > 0 else 0.0
            )
            leads_successful_pct = (
                (leads_successful / leads_picked * 100) if leads_picked > 0 else 0.0
            )

⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

leads_successful can double-count and exceed leads_picked.

lead_outcome_breakdown[outcome] is incremented once per (lead, outcome) pair (line 522-526), so a single lead whose calls produced two distinct outcomes (e.g., one call CONFIRM, another CANCEL) contributes +1 to each bucket. Summing those buckets at line 528 to derive leads_successful therefore counts that lead twice, which means:

  • leads_successful no longer represents "unique leads that reached a successful outcome" (which the docstring/default_stats comment at line 417 implies).
  • leads_successful_pct = leads_successful / leads_picked * 100 (line 533-534) can exceed 100%.
  • The Slack output line at 695 (Successful Leads: ...) may print a number larger than leads_picked and confuse on-call readers.

The per-outcome percentages at lines 700-705 (count / leads_successful) are internally consistent with this inflated denominator, but the headline leads_successful field is the one users will trust.

Suggest deriving leads_successful directly from per_lead as the count of unique leads that had at least one non-NON_OUTCOME outcome, and keeping the breakdown as a separate per-outcome distribution.

🛠️ Proposed fix
             for lead_data in per_lead.values():
                 # Lead is "picked" if at least one call was answered
                 if lead_data["finished"] > lead_data["no_answer"]:
                     leads_picked += 1
                 # Accumulate per-outcome lead count
                 for outcome, count in lead_data["outcomes"].items():
                     if count > 0:
                         lead_outcome_breakdown[outcome] = (
                             lead_outcome_breakdown.get(outcome, 0) + 1
                         )
 
-            leads_successful = sum(lead_outcome_breakdown.values())
+            # Count unique leads with at least one real template outcome,
+            # so leads_successful never exceeds leads_picked even when a
+            # single lead produced multiple distinct outcomes.
+            leads_successful = sum(
+                1 for lead_data in per_lead.values() if lead_data["outcomes"]
+            )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@app/services/langfuse/tasks/score_monitor/score.py` around lines 515 - 535,
leads_successful is currently computed by summing per-outcome buckets in
lead_outcome_breakdown which double-counts leads that have multiple distinct
outcomes; instead compute leads_successful directly from per_lead by counting
unique leads whose lead_data["outcomes"] contains at least one non-NON_OUTCOME
outcome (e.g., any outcome != "NON_OUTCOME" and count>0), leave
lead_outcome_breakdown as the per-outcome distribution, and then recalc
leads_successful_pct using that unique-lead count (and guard any later divisions
by zero).
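A runnable sketch of that unique-lead counting: the per_lead shape follows the quoted snippet, while the helper name is illustrative.

```python
from typing import Any, Dict

def count_successful_leads(per_lead: Dict[str, Dict[str, Any]]) -> int:
    """Count unique leads with at least one real template outcome.

    A lead whose calls produced two distinct outcomes (e.g. CONFIRM and
    CANCEL) is counted once, so the result can never exceed the number
    of leads, unlike summing the per-outcome breakdown buckets.
    """
    return sum(
        1
        for lead_data in per_lead.values()
        if any(count > 0 for count in lead_data["outcomes"].values())
    )
```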


Copilot AI left a comment


Pull request overview

This PR updates the Breeze Buddy daily Slack summary to treat call/lead outcomes generically, replacing hard-coded outcome buckets (CONFIRM/CANCEL/ADDRESS_UPDATED) with dynamic per-outcome breakdowns derived from recent call trackers.

Changes:

  • Compute call stats using a dynamic call_outcome_breakdown (excluding NO_ANSWER and BUSY).
  • Compute lead stats from per-lead aggregation built from call trackers and emit a dynamic lead_outcome_breakdown.
  • Update the Slack daily summary formatting to display per-outcome breakdown lines for both calls and leads.
Comments suppressed due to low confidence (1)

app/services/langfuse/tasks/score_monitor/score.py:507

  • calls_picked is derived as calls_attempted - calls_no_answer, but calls_attempted counts FINISHED calls even when tracker.outcome is None (possible in Daily mode where completion can set outcome=None). Those calls end up counted as “picked up” despite having an unknown outcome. Consider defining “picked” as FINISHED calls with a non-null outcome other than NO_ANSWER (or track a separate calls_unknown_outcome counter and exclude it from picked/success percentages).
            # Initialize counters for call-based metrics
            calls_attempted = 0  # FINISHED status
            calls_no_answer = 0
            calls_busy = 0
            call_outcome_breakdown: Dict[str, int] = {}
            provider_counts = default_stats["provider_split"].copy()

            # Per-lead data grouped by request_id
            # { request_id → {"finished": int, "no_answer": int, "outcomes": {outcome: count}} }
            per_lead: Dict[str, Dict[str, Any]] = {}

            # Process each call tracker
            for tracker, calling_provider in call_trackers:
                # Count attempted calls (FINISHED status)
                if tracker.status and tracker.status.value == "FINISHED":
                    calls_attempted += 1

                outcome_value: Optional[str] = tracker.outcome or None

                # Call-level outcome tallying
                if outcome_value == "NO_ANSWER":
                    calls_no_answer += 1
                elif outcome_value == "BUSY":
                    calls_busy += 1
                elif outcome_value and outcome_value not in NON_OUTCOME:
                    # Any named outcome that isn't NO_ANSWER or BUSY is a real
                    # template outcome (includes ABORT, CONFIRM, custom outcomes, etc.)
                    call_outcome_breakdown[outcome_value] = (
                        call_outcome_breakdown.get(outcome_value, 0) + 1
                    )

                # Count by provider
                if calling_provider:
                    provider_upper = calling_provider.upper()
                    if provider_upper in provider_counts:
                        provider_counts[provider_upper] += 1

                # Accumulate per-lead data (group by request_id)
                request_id = tracker.request_id or tracker.id  # fallback to row id
                if request_id not in per_lead:
                    per_lead[request_id] = {
                        "finished": 0,
                        "no_answer": 0,
                        "outcomes": {},
                    }
                if tracker.status and tracker.status.value == "FINISHED":
                    per_lead[request_id]["finished"] += 1
                if outcome_value == "NO_ANSWER":
                    per_lead[request_id]["no_answer"] += 1
                if outcome_value and outcome_value not in NON_OUTCOME:
                    per_lead[request_id]["outcomes"][outcome_value] = (
                        per_lead[request_id]["outcomes"].get(outcome_value, 0) + 1
                    )

            # Calculate call-based derived metrics
            calls_picked = calls_attempted - calls_no_answer
            calls_successful = sum(call_outcome_breakdown.values())

            calls_picked_pct = (
                (calls_picked / calls_attempted * 100) if calls_attempted > 0 else 0.0
            )
            calls_successful_pct = (
                (calls_successful / calls_picked * 100) if calls_picked > 0 else 0.0
            )
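
The suggested stricter "picked" definition could look like the sketch below; the (status, outcome) tuple input is an assumption for illustration, not the tracker's real interface.

```python
from typing import List, Optional, Tuple

def count_calls_picked(calls: List[Tuple[str, Optional[str]]]) -> int:
    """Count FINISHED calls with a known outcome other than NO_ANSWER.

    FINISHED calls whose outcome is None are excluded, so an unknown
    outcome is never silently reported as a picked-up call.
    """
    return sum(
        1
        for status, outcome in calls
        if status == "FINISHED" and outcome is not None and outcome != "NO_ANSWER"
    )
```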

Comment on lines +517 to 535
            for lead_data in per_lead.values():
                # Lead is "picked" if at least one call was answered
                if lead_data["finished"] > lead_data["no_answer"]:
                    leads_picked += 1
                # Accumulate per-outcome lead count
                for outcome, count in lead_data["outcomes"].items():
                    if count > 0:
                        lead_outcome_breakdown[outcome] = (
                            lead_outcome_breakdown.get(outcome, 0) + 1
                        )

            leads_successful = sum(lead_outcome_breakdown.values())

            leads_picked_pct = (
                (leads_picked / total_leads * 100) if total_leads > 0 else 0.0
            )
            leads_successful_pct = (
                (leads_successful / leads_picked * 100) if leads_picked > 0 else 0.0
            )
Comment on lines +665 to +696
 call_analytics_lines = [
     f"• Total Calls Attempted: *{call_stats['calls_attempted']}*",
     f"• Calls Picked Up: *{call_stats['calls_picked']}* ({call_stats['calls_picked_pct']}% of attempted calls)",
     f"• Successful Calls: *{call_stats['calls_successful']}* ({call_stats['calls_successful_pct']}% of picked calls)",
     f"• Answered but Unsuccessful (Busy): *{call_stats['calls_busy']}* ({call_stats['calls_busy_pct']}% of picked calls)",
 ]
 # Dynamic per-outcome call count
 call_outcome_breakdown = call_stats.get("call_outcome_breakdown", {})
 calls_successful = call_stats["calls_successful"]
 for outcome, count in call_outcome_breakdown.items():
     pct = (
         round(count / calls_successful * 100, 1)
         if calls_successful > 0
         else 0.0
     )
     outcome_label = outcome.replace("_", " ").title()
     call_analytics_lines.append(
         f" ↳ {outcome_label}: *{count}* ({pct}% of successful)"
     )
 sections.append(
     {
         "title": "Call-Based Stats",
-        "text": call_analytics_text,
+        "text": "\n".join(call_analytics_lines),
     }
 )

 # Section 4: Lead-based analytics
-lead_analytics_text = (
-    f"• Total Leads Processed: *{call_stats['total_leads']}*\n"
-    f"• Leads Picked: *{call_stats['leads_picked']}* ({call_stats['leads_picked_pct']}% of total leads)\n"
-    f"• Successful Leads: *{call_stats['leads_successful']}* ({call_stats['leads_successful_pct']}% of picked leads)\n"
-    f"• Confirmed: *{call_stats['leads_confirmed']}* ({call_stats['leads_confirmed_pct']}% of successful)\n"
-    f"• Cancelled: *{call_stats['leads_cancelled']}* ({call_stats['leads_cancelled_pct']}% of successful)\n"
-    f"• Address Updated: *{call_stats['leads_address_updated']}* ({call_stats['leads_address_updated_pct']}% of successful)"
-)
+lead_analytics_lines = [
+    f"• Total Leads Processed: *{call_stats['total_leads']}*",
+    f"• Leads Picked: *{call_stats['leads_picked']}* ({call_stats['leads_picked_pct']}% of total leads)",
+    f"• Successful Leads: *{call_stats['leads_successful']}* ({call_stats['leads_successful_pct']}% of picked leads)",
+]