feat(voice): conversational trading AI via Telegram + web dashboard #86
eddiebelaval wants to merge 3 commits into main from
Conversation
Implements DeepStack Voice — a conversational AI interface for the trading system, cloning the proven HYDRA/MILO architecture (Bash long-polling daemon + Python handlers + Claude Sonnet brain + ElevenLabs TTS).

Phase 1 — Telegram Bot:
- Bash listener daemon with atomic lock, exponential backoff, voice pipeline
- Python brain (Claude Sonnet) with TRADING_BRAIN.md knowledge document
- Trading context gatherer aggregating SQLite journal, config, and Supabase
- Two-tier NL parsing: Claude Haiku primary, keyword fallback
- ElevenLabs TTS with Deepgram fallback, async voice delivery
- launchd plist for production daemon management

Phase 2 — Web Dashboard:
- Desk Analyst persona with trading-focused prompt and visual config
- 5 Vercel AI SDK tools: portfolio, strategies, signals, history, explanations

Security hardening (11 fixes from code review):
- Token-safe curl via --config stdin (no ps aux exposure)
- Replaced unsafe eval with temp-file JSON extraction
- Sanitized API error logging (no key leakage)
- Explicit httpx timeouts, timezone-aware market hours, atomic file writes

Code simplifications (9 applied from code simplifier):
- Extracted shared getBaseUrl() into utils.ts (was duplicated 4x)
- Deduplicated Python path constants (context_gatherer imports from voice_config)
- Moved httpx to top-level imports, fixed redundant except clause
- Hoisted static strategy explanations to module scope

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Adds a DeepStack Voice subsystem: static configuration, context aggregation from journals/signals, Claude-based brain and intent classification, a Telegram listener with TTS/STT, CLI/context helpers, web LLM tools for trading insights, and a new desk-analyst persona.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant Telegram
    participant Listener as Voice Listener (Bash)
    participant Context as Context Gatherer
    participant Brain as Brain (Claude)
    participant TTS as TTS/STT Services
    User->>Telegram: send text or voice
    Telegram->>Listener: listener long-polling delivers update
    Listener->>Listener: parse update, acquire lock
    alt voice message
        Listener->>TTS: transcribe audio (Deepgram)
        TTS->>Listener: transcript
    else text message
        Listener->>Listener: use text directly
    end
    Listener->>Context: gather_full_context()
    Context->>Context: read SQLite, YAML, DeepSignals
    Context->>Listener: aggregated context
    Listener->>Brain: classify_intent() then ask_brain(message, context)
    Brain->>Brain: build system prompt + call Claude
    Brain->>Listener: response text
    Listener->>Telegram: send text response
    Listener->>TTS: generate voice (ElevenLabs, fallback Deepgram)
    TTS->>Listener: audio file
    Listener->>Telegram: send voice note
    Listener->>Listener: release lock, cleanup
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 69547c072d
```python
def _is_market_hours(now: Optional[datetime] = None) -> bool:
    """Check if US stock market is currently open (rough check, no holidays)."""
    # Always use Eastern Time for market hours check
    et = timezone(timedelta(hours=-5))
    if now is None:
```
Use DST-aware Eastern Time for market-hours check
The market-hours helper hardcodes Eastern Time as UTC-5, which is incorrect during daylight saving time (EDT, UTC-4). In those months, the function will report the market as open/closed one hour off (e.g., treat 8:30am EDT as 9:30am ET), which feeds incorrect context into the trading prompt and status. Consider using a DST-aware zone (e.g., zoneinfo.ZoneInfo("America/New_York")) instead of a fixed offset.
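The DST-aware fix can be sketched as follows. This is a hedged sketch, not the PR's actual code: the helper name `is_market_hours` and the assume-UTC-for-naive-input convention mirror the review comment, and only the stdlib `zoneinfo` module is assumed.

```python
from datetime import datetime, time, timezone
from typing import Optional
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

NY = ZoneInfo("America/New_York")  # DST-aware Eastern Time

def is_market_hours(now: Optional[datetime] = None) -> bool:
    """Rough US market-hours check (9:30-16:00 ET, weekdays, no holidays)."""
    if now is None:
        now = datetime.now(tz=NY)
    elif now.tzinfo is None:
        now = now.replace(tzinfo=timezone.utc)  # assume UTC for naive input
    et = now.astimezone(NY)
    if et.weekday() >= 5:  # Saturday or Sunday
        return False
    return time(9, 30) <= et.time() < time(16, 0)
```

With `ZoneInfo`, the EST/EDT offset switch is handled by the tz database, so 13:30 UTC correctly maps to 9:30 ET in summer and 8:30 ET in winter.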
```bash
    # Convert markdown to Telegram HTML
    local html_text
    html_text=$(python3 -c "
import sys, re
t = sys.stdin.read()
# Bold: **text** -> <b>text</b>
t = re.sub(r'\*\*(.+?)\*\*', r'<b>\1</b>', t)
# Code: \`text\` -> <code>text</code>
t = re.sub(r'\`\`\`(.+?)\`\`\`', r'<pre>\1</pre>', t, flags=re.DOTALL)
t = re.sub(r'\`(.+?)\`', r'<code>\1</code>', t)
# Headers: # text -> <b>text</b>
t = re.sub(r'^#+\s+(.+)$', r'<b>\1</b>', t, flags=re.MULTILINE)
# Escape remaining HTML entities (but not our tags)
print(t)
```
Escape HTML before sending Telegram messages
Messages are sent with parse_mode: HTML, but the markdown-to-HTML conversion never escapes existing <, >, or &. If Claude’s response contains a literal < (e.g., comparisons like 1 < 2 or untrusted text), Telegram treats it as HTML and may reject the message with a 400 “Bad Request: can't parse entities,” causing responses to be dropped. Escaping HTML entities before/after the substitutions would prevent this.
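The escape-then-substitute approach can be sketched in Python. This is an illustrative sketch, not the script's actual helper: the function name `to_telegram_html` is hypothetical, and code substitutions run before bold so backtick content is wrapped first.

```python
import html
import re

def to_telegram_html(text: str) -> str:
    """Escape HTML entities first, then apply the markdown-to-HTML rules."""
    # Escape & < > before inserting any tags of our own (quote=False keeps ").
    t = html.escape(text, quote=False)
    t = re.sub(r'```(.+?)```', r'<pre>\1</pre>', t, flags=re.DOTALL)
    t = re.sub(r'`(.+?)`', r'<code>\1</code>', t)
    t = re.sub(r'\*\*(.+?)\*\*', r'<b>\1</b>', t)
    t = re.sub(r'^#+\s+(.+)$', r'<b>\1</b>', t, flags=re.MULTILINE)
    return t
```

Because escaping happens before the substitutions, a response like "P/E < 10" survives as `P/E &lt; 10` instead of being rejected by Telegram's HTML parser.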
Actionable comments posted: 16
🤖 Fix all issues with AI agents
In `@config/voice_config.py`:
- Around line 42-48: Update the invalid Anthropic model identifiers by setting
BRAIN_MODEL to a valid Sonnet model (e.g., "claude-sonnet-4-0" or
"claude-sonnet-4-20250514") and set PARSE_MODEL to a valid Haiku model (e.g.,
"claude-3-5-haiku-latest" or "claude-3-5-haiku-20241022"); keep existing
constants BRAIN_MAX_TOKENS, BRAIN_TIMEOUT, PARSE_MAX_TOKENS, and PARSE_TIMEOUT
unchanged and ensure only the BRAIN_MODEL and PARSE_MODEL string values are
replaced.
In `@core/voice/context_gatherer.py`:
- Line 314: The context_gatherer currently inserts raw exception text into the
prompt via context["status"] = f"error: {str(e)[:100]}", leaking internal
details; change this to a generic status (e.g., context["status"] = "error" or
"error: internal") so no exception content is placed into the LLM prompt, and
rely on the existing log on the previous line that already records the full
exception for debugging; update the code around the context["status"] assignment
in context_gatherer.py accordingly.
- Around line 47-56: Use a context manager to guarantee the SQLite connection is
closed even on unexpected exceptions: replace the manual connect/close around
sqlite3.connect(db_path, timeout=5) / conn.close() with a "with
sqlite3.connect(...) as conn" block and run conn.row_factory,
conn.execute(query, params) and cursor.fetchall() inside it; also replace
logger.error(...) with logger.exception(...) so the traceback is logged when an
exception occurs (referencing conn, sqlite3.connect, cursor, query, params, and
logger in context_gatherer.py).
- Around line 267-314: The try/except block uses httpx.Client() but calls
client.close() only on the happy path, risking connection-pool leaks; replace
the manual client management with a context manager (use "with
httpx.Client(timeout=10) as client:" around the requests in the block) so the
client is always closed even on exceptions, and move setting
context["status"]="ok" inside that with-block; additionally, when each GET
returns a non-200 status_code, add a logger.warn/error that includes the
URL/endpoint and resp.status_code/resp.text to surface Supabase
auth/rate-limit/JSON issues (refer to httpx.Client, client.get(...),
client.close(), logger.error and context keys like "pcr", "dark_pool",
"insider_trades", "congress_trades" to locate where to insert these changes).
- Around line 99-106: The code uses naive datetime.now() to compute today
(variable today) and week_ago, which can mismatch DB session_date; update the
logic that builds the date filters to use the same Eastern Time aware conversion
used by _is_market_hours (or the helper it uses) before formatting to
"%Y-%m-%d", then pass those ET-aware date strings into the _safe_query calls
(reference: variables today, week_ago and function _safe_query, and
helper/_is_market_hours for the ET conversion pattern).
- Around line 471-493: The _is_market_hours function uses a fixed
timezone(timedelta(hours=-5)) which ignores DST; replace that with
zoneinfo.ZoneInfo("America/New_York") (import ZoneInfo from zoneinfo) so DST is
handled automatically, keep the existing logic for handling now being None or
naive (assume UTC when now.tzinfo is None and convert to the NY zone with
astimezone), and update any imports accordingly so _is_market_hours continues to
compute ET-based market open (9:30–16:00) correctly year-round.
In `@scripts/voice/com.deepstack.voice-listener.plist`:
- Around line 1-51: The plist commits user-specific hardcoded paths (see keys
ProgramArguments, EnvironmentVariables, WorkingDirectory, StandardOutPath,
StandardErrorPath and the Label com.deepstack.voice-listener); replace those
absolute /Users/... values with placeholders (e.g., __HOME__ or $HOME) and
commit a template file like com.deepstack.voice-listener.plist.example, add the
real plist to .gitignore, and provide a small setup script (e.g.,
generate-voice-plist.sh) that reads the template and substitutes the current
user HOME into ProgramArguments, EnvironmentVariables, WorkingDirectory,
StandardOutPath and StandardErrorPath to produce the final
com.deepstack.voice-listener.plist.
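The suggested setup step can be sketched as a small renderer. This is a sketch under the review's assumptions: the `__HOME__` placeholder, the `.example` template filename, and both function names are hypothetical, not files that exist in the PR.

```python
"""Render the launchd plist from a committed template by substituting $HOME."""
import os
from pathlib import Path

def render_plist(template_text: str, home: str) -> str:
    # Every hardcoded /Users/... path in the template is written as __HOME__/...
    return template_text.replace("__HOME__", home)

def generate(template: Path) -> Path:
    # "com.deepstack.voice-listener.plist.example" -> "...plist" (strip .example)
    out = template.with_suffix("")
    out.write_text(render_plist(template.read_text(), os.path.expanduser("~")))
    return out
```

The rendered plist stays gitignored, so each machine regenerates it with its own home directory instead of committing user-specific absolute paths.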
In `@scripts/voice/deepstack_brain.py`:
- Around line 86-90: The ask_brain function currently accepts intent_type but
never uses it; either remove the unused parameter from ask_brain and update all
callers and type hints (eliminate intent_type from signatures and tests), or
implement intent-based context fetching inside ask_brain by branching on
intent_type (e.g., call an existing full-context path for "general_chat" and a
lightweight get_intent_context(intent_type, conversation_history) for other
intents), ensure the conversation_history handling stays consistent, and update
any callers/tests to pass the correct intent when using the new behavior.
- Around line 129-146: Replace manual client construction/close with a context
manager to ensure the connection pool is always released: wrap the
httpx.Client(timeout=timeout) usage in a with block (e.g., with
httpx.Client(timeout=timeout) as client:) so the client is closed automatically
even on exceptions; update both the block that sets timeout/creates client and
posts to "https://api.anthropic.com/v1/messages" (the code using variables
timeout, client, response, system_prompt, messages, BRAIN_MODEL,
BRAIN_MAX_TOKENS) and the classify_intent block (the other httpx.Client usage
referenced) to use the same with-pattern.
In `@scripts/voice/deepstack_context.py`:
- Around line 28-36: The env loader currently preserves surrounding quotes in
values (e.g., API_KEY="sk...") causing auth failures; after splitting with value
= value.strip() in the loop that reads ENV_FILE (refer to ENV_FILE, PROJECT_ROOT
and the for-line parsing block), strip matching surrounding single or double
quotes from the value (and unescape any escaped quotes/newlines if needed)
before calling os.environ.setdefault so stored env vars do not include the
literal quote characters.
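The quote-stripping step can be sketched as follows; the helper name `strip_env_quotes` is hypothetical, and only one pair of matching surrounding quotes is removed, which matches typical dotenv behavior.

```python
def strip_env_quotes(value: str) -> str:
    """Trim whitespace, then remove one pair of matching surrounding quotes."""
    value = value.strip()
    if len(value) >= 2 and value[0] == value[-1] and value[0] in ("'", '"'):
        value = value[1:-1]
    return value
```

Mismatched quotes (e.g. a leading `"` with no trailing `"`) are left untouched, so only genuinely quoted values change.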
In `@scripts/voice/deepstack-voice-listener.sh`:
- Around line 155-167: The inline python that builds html_text for Telegram
parse_mode=HTML fails to escape HTML entities, so raw '<', '>' and '&' in the
input can break Telegram; update the python snippet (the python3 -c block that
produces html_text) to first HTML-escape the entire input (escape &, <, >) and
then run the markdown-to-HTML regex substitutions, or alternatively perform
escaping only outside of generated tags by parsing the text and escaping non-tag
regions before wrapping with <b>, <code>, <pre>; ensure the final html_text is
safe for Telegram parse_mode=HTML.
- Around line 398-400: The bash listener passes user text directly as argv to
deepstack_context.py which allows messages starting with "-" to be interpreted
as flags; change the invocation in deepstack-voice-listener.sh to pass an
explicit end-of-options marker before the message (invoke python3
"$BRAIN_SCRIPT" -- "$text") and update deepstack_context.py to recognize and
skip a leading '--' in sys.argv (e.g., check args = sys.argv[1:], if args and
args[0] == '--' then args = args[1:], then treat the remainder as the raw
message by joining them) so no message text is parsed as CLI flags and the
default brain path is used.
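The Python side of this fix can be sketched as below; `parse_message` is a hypothetical name for the argv-handling step described in the prompt, not a function in the PR.

```python
def parse_message(argv: list[str]) -> str:
    """Join argv into the raw message, skipping a leading '--' marker.

    The bash caller invokes: python3 "$BRAIN_SCRIPT" -- "$text", so a message
    beginning with '-' is never interpreted as a CLI flag.
    """
    args = argv[1:]
    if args and args[0] == "--":
        args = args[1:]
    return " ".join(args)
```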
- Line 558: The guard using voice_file_id is comparing against the literal
two-character string "''", which is dead logic; update the conditional(s) that
reference the variable voice_file_id (the if at the current check and the
similar one at line 569) to remove the != "''" comparison and rely solely on the
-n "$voice_file_id" test (i.e., keep if [[ -n "$voice_file_id" ]]; then) so
empty strings are handled correctly.
- Around line 44-54: The env loader currently preserves surrounding quotes in
values (so KEY="value" exports with quotes); update the read loop that processes
ENV_FILE (the while IFS='=' read -r key value; ... done < "$ENV_FILE" block) to
strip surrounding single or double quotes from value after trimming whitespace
and before export (you can use bash parameter expansion or a sed replacement to
remove a leading and trailing matching quote), then export "$key=$value" as
before; keep the existing handling for empty/commented keys and error branch
using LOG_FILE.
- Line 127: The removal command uses rm -rf "$VOICE_TEMP"/* which is unsafe if
VOICE_TEMP is unset or empty; change the deletion to use parameter expansion
with the :? guard to ensure VOICE_TEMP is defined before expanding (i.e.,
protect the rm invocation by requiring VOICE_TEMP to be non-empty) and retain
the same intent of removing files under VOICE_TEMP; update the rm line that
references VOICE_TEMP so it fails fast if VOICE_TEMP is unset instead of
expanding to root.
- Around line 536-541: The current loop spawns five separate python3 processes
to read update_id, chat_id, message_id, text and voice_file_id from
$fields_file; replace those five calls with a single python3 invocation that
loads the JSON once, extracts all five keys, and prints them safely
shell-escaped (use shlex.quote) so you can read them into the local variables
update_id, chat_id, message_id, text and voice_file_id in one shot; update the
code that sets those variables to call that single python3 command and split its
output into the respective variables.
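The single-invocation extraction can be sketched like this; `extract_fields` is a hypothetical helper name, and `shlex.quote` does the shell-escaping the prompt asks for.

```python
import json
import shlex

def extract_fields(fields_json: str) -> str:
    """Load the Telegram update JSON once and return all five fields,
    shell-quoted, one per line, so the bash caller can read them in one shot."""
    data = json.loads(fields_json)
    keys = ("update_id", "chat_id", "message_id", "text", "voice_file_id")
    return "\n".join(shlex.quote(str(data.get(k, ""))) for k in keys)
```

On the bash side the five reads collapse to one `python3 -c` call whose output is split line-by-line, replacing five interpreter startups with one.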
🧹 Nitpick comments (9)
web/src/lib/llm/utils.ts (1)
4-8: Consider stripping trailing slashes from `NEXT_PUBLIC_APP_URL`.

`VERCEL_URL` is constructed by you so it's safe, but `NEXT_PUBLIC_APP_URL` is user-configured and could contain a trailing slash, producing URLs like `https://example.com//api/...`.

🔧 Suggested fix

```diff
 export const getBaseUrl = () => {
   if (process.env.VERCEL_URL) return `https://${process.env.VERCEL_URL}`;
-  if (process.env.NEXT_PUBLIC_APP_URL) return process.env.NEXT_PUBLIC_APP_URL;
+  if (process.env.NEXT_PUBLIC_APP_URL) return process.env.NEXT_PUBLIC_APP_URL.replace(/\/+$/, '');
   return 'http://localhost:3000';
 };
```

docs/TRADING_BRAIN.md (1)
112-114: Add a language identifier to the fenced code block. Per the markdownlint hint, the code block should specify a language for consistency and proper rendering.

Proposed fix: change the bare opening fence before the formula to a `text` fence:

    position_size = account_balance * kelly_fraction * (win_prob - (1-win_prob)/payoff_ratio)

scripts/voice/deepstack_context.py (1)

69-76: Missing guard for empty message with `--intent-only`. If invoked as `deepstack_context.py --intent-only` without a message, `args[1:]` is empty and `classify_intent("")` is called with an empty string. Consider validating that a message is provided.

Proposed fix

```diff
     if args[0] == "--intent-only":
         # Just classify intent (for the listener's dispatch logic)
         message = " ".join(args[1:])
+        if not message.strip():
+            print("Error: --intent-only requires a message", file=sys.stderr)
+            sys.exit(1)
         from scripts.voice.deepstack_brain import classify_intent
```

config/voice_config.py (2)
67-67: Remove unused `noqa` directive. Ruff reports `E501` is not enabled, so the `# noqa: E501` comment is unnecessary.

```diff
-NL_PARSE_SYSTEM = (  # noqa: E501
+NL_PARSE_SYSTEM = (
```
14-20: Hardcoded home directory path is fragile. `TRADE_JOURNAL_DB` embeds a machine-specific path (`~/clawd/projects/kalshi-trading/`). Consider making this configurable via an environment variable with this as the default.

```diff
-TRADE_JOURNAL_DB = os.path.join(
-    os.path.expanduser("~"),
-    "clawd",
-    "projects",
-    "kalshi-trading",
-    "trade_journal.db",
-)
+TRADE_JOURNAL_DB = os.environ.get(
+    "TRADE_JOURNAL_DB",
+    os.path.join(
+        os.path.expanduser("~"),
+        "clawd", "projects", "kalshi-trading",
+        "trade_journal.db",
+    ),
+)
```

scripts/voice/deepstack_brain.py (1)
258-261: Substring matching causes false positives for short keywords. `"hi" in msg` matches "this", "thinking", "within", etc. Since this is the fallback classifier and more specific intents are checked first, impact is limited, but "general_chat" keywords like `"hi"` will over-trigger.

Consider using word-boundary matching or moving `general_chat` to a true default (remove it from keyword_map and only return it as the final fallback on line 263).

web/src/lib/llm/deepstack-voice-tools.ts (1)
212-258: Non-ok HTTP responses silently drop signals, while network errors produce `{ status: 'unavailable' }`. When a signal endpoint returns a non-2xx status, the `resp.ok` check fails and the signal key is never set in the `signals` object. But when a network error occurs, the `catch` block sets `{ status: 'unavailable' }`. This means callers see different shapes depending on the failure mode.

Consider handling non-ok responses the same way:

```diff
 if (resp.ok) signals.pcr = await resp.json();
+else signals.pcr = { status: 'unavailable' };
```

Apply the same pattern for the `dark_pool`, `insider`, and `congress` fetches.

core/voice/context_gatherer.py (2)
209-210: Use `logger.exception` to preserve stack traces in error handlers. At lines 210, 234 (and 313), `logger.error` is used inside `except` blocks. `logger.exception` automatically includes the traceback, which is essential for diagnosing failures in a multi-source aggregation pipeline where partial failures are silently swallowed.

Proposed fix

```diff
-    except Exception as e:
-        logger.error("Failed to load config.yaml: %s", e)
+    except Exception:
+        logger.exception("Failed to load config.yaml")
```

```diff
-    except Exception as e:
-        logger.error("Failed to load risk_limits.yaml: %s", e)
+    except Exception:
+        logger.exception("Failed to load risk_limits.yaml")
```

Apply the same pattern at line 313.
Also applies to: 233-234
496-502: Minor: docstring under `if __name__` is a no-op. Line 497's triple-quoted string is not attached to a function or class — it's just an expression statement that does nothing. Use a comment instead.
Proposed fix
```diff
 if __name__ == "__main__":
-    """Quick test: print the full context."""
+    # Quick test: print the full context.
     logging.basicConfig(level=logging.INFO)
```
```python
BRAIN_MODEL = "claude-sonnet-4-5-20250929"
BRAIN_MAX_TOKENS = 1024
BRAIN_TIMEOUT = 30  # seconds

PARSE_MODEL = "claude-haiku-4-5-20251001"
PARSE_MAX_TOKENS = 100
PARSE_TIMEOUT = 5  # seconds
```
🧩 Analysis chain
🌐 Web query:
Anthropic Claude API current model IDs claude-sonnet claude-haiku 2025
💡 Result:
Anthropic's Claude API model IDs (as used in 2025) for Sonnet and Haiku are:

- Claude Sonnet 4
  - Alias: `claude-sonnet-4-0` (docs.anthropic.com)
  - Snapshot model ID: `claude-sonnet-4-20250514` (docs.anthropic.com)
- Claude Haiku 3.5
  - Alias: `claude-3-5-haiku-latest` (docs.anthropic.com)
  - Snapshot model ID: `claude-3-5-haiku-20241022` (docs.anthropic.com)

(Older Haiku still listed in docs: claude-3-haiku-20240307.) (docs.anthropic.com)
Citations:
- https://docs.anthropic.com/en/docs/models-overview?utm_source=openai
Update model identifiers to valid Anthropic API models.
The model strings `claude-sonnet-4-5-20250929` and `claude-haiku-4-5-20251001` are invalid and will cause API errors. Use one of these valid alternatives:

- Sonnet: `claude-sonnet-4-0` (recommended alias) or `claude-sonnet-4-20250514` (snapshot)
- Haiku: `claude-3-5-haiku-latest` (recommended alias) or `claude-3-5-haiku-20241022` (snapshot)
🤖 Prompt for AI Agents
In `@config/voice_config.py` around lines 42 - 48, Update the invalid Anthropic
model identifiers by setting BRAIN_MODEL to a valid Sonnet model (e.g.,
"claude-sonnet-4-0" or "claude-sonnet-4-20250514") and set PARSE_MODEL to a
valid Haiku model (e.g., "claude-3-5-haiku-latest" or
"claude-3-5-haiku-20241022"); keep existing constants BRAIN_MAX_TOKENS,
BRAIN_TIMEOUT, PARSE_MAX_TOKENS, and PARSE_TIMEOUT unchanged and ensure only the
BRAIN_MODEL and PARSE_MODEL string values are replaced.
```python
    try:
        conn = sqlite3.connect(db_path, timeout=5)
        conn.row_factory = sqlite3.Row
        cursor = conn.execute(query, params)
        rows = [dict(row) for row in cursor.fetchall()]
        conn.close()
        return rows
    except sqlite3.Error as e:
        logger.error("SQLite error on %s: %s", db_path, e)
        return []
```
SQLite connection leaks if an error occurs after connect() but before close().
If conn.execute() or cursor.fetchall() raises a non-sqlite3.Error exception (e.g., MemoryError, or a sqlite3.Error subclass not caught by the current flow), the connection won't be closed. Use a context manager to guarantee cleanup.
Proposed fix using context manager
```diff
     try:
-        conn = sqlite3.connect(db_path, timeout=5)
-        conn.row_factory = sqlite3.Row
-        cursor = conn.execute(query, params)
-        rows = [dict(row) for row in cursor.fetchall()]
-        conn.close()
-        return rows
-    except sqlite3.Error as e:
-        logger.error("SQLite error on %s: %s", db_path, e)
+        with sqlite3.connect(db_path, timeout=5) as conn:
+            conn.row_factory = sqlite3.Row
+            cursor = conn.execute(query, params)
+            return [dict(row) for row in cursor.fetchall()]
+    except sqlite3.Error:
+        logger.exception("SQLite error on %s", db_path)
         return []
```

Note: logging.exception is preferred over logging.error here as it automatically includes the traceback (Ruff TRY400).
🧰 Tools
🪛 Ruff (0.14.14)
[warning] 53-53: Consider moving this statement to an else block
(TRY300)
[warning] 55-55: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In `@core/voice/context_gatherer.py` around lines 47 - 56, Use a context manager
to guarantee the SQLite connection is closed even on unexpected exceptions:
replace the manual connect/close around sqlite3.connect(db_path, timeout=5) /
conn.close() with a "with sqlite3.connect(...) as conn" block and run
conn.row_factory, conn.execute(query, params) and cursor.fetchall() inside it;
also replace logger.error(...) with logger.exception(...) so the traceback is
logged when an exception occurs (referencing conn, sqlite3.connect, cursor,
query, params, and logger in context_gatherer.py).
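One caveat with the suggested diff: in Python, `with sqlite3.connect(...)` manages the transaction (commit or rollback), not the connection's lifetime — the connection is not closed on exit. A sketch that actually guarantees close uses `contextlib.closing`; the name `safe_query` here mirrors the `_safe_query` helper but is a simplified stand-in without logging.

```python
import contextlib
import sqlite3

def safe_query(db_path: str, query: str, params: tuple = ()) -> list:
    """Run a read query; contextlib.closing guarantees the connection closes.

    sqlite3's own `with conn:` only commits/rolls back; it does not close.
    """
    try:
        with contextlib.closing(sqlite3.connect(db_path, timeout=5)) as conn:
            conn.row_factory = sqlite3.Row
            return [dict(row) for row in conn.execute(query, params).fetchall()]
    except sqlite3.Error:
        return []  # the real helper would logger.exception(...) here
```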
```python
        # Today's P&L
        today = datetime.now().strftime("%Y-%m-%d")
        today_rows = _safe_query(
            TRADE_JOURNAL_DB,
            "SELECT COALESCE(SUM(pnl_cents), 0) as total "
            "FROM trades WHERE session_date = ?",
            (today,),
        )
```
datetime.now() uses server-local time — may mismatch the DB's date convention.
Lines 100 and 151 use naive datetime.now() for date filters, while _is_market_hours carefully converts to Eastern Time. If the server doesn't run in the same timezone as the DB's session_date/date columns, "today" and "week ago" filters will return incorrect results. Consider using the same ET-aware approach here for consistency.
Suggested fix

```diff
+_ET = timezone(timedelta(hours=-5))
 # Today's P&L
-today = datetime.now().strftime("%Y-%m-%d")
+today = datetime.now(_ET).strftime("%Y-%m-%d")
```

Apply the same pattern to week_ago on line 151.
🤖 Prompt for AI Agents
In `@core/voice/context_gatherer.py` around lines 99 - 106, The code uses naive
datetime.now() to compute today (variable today) and week_ago, which can
mismatch DB session_date; update the logic that builds the date filters to use
the same Eastern Time aware conversion used by _is_market_hours (or the helper
it uses) before formatting to "%Y-%m-%d", then pass those ET-aware date strings
into the _safe_query calls (reference: variables today, week_ago and function
_safe_query, and helper/_is_market_hours for the ET conversion pattern).
```python
    try:
        client = httpx.Client(timeout=10)

        # Latest put/call ratios
        resp = client.get(
            f"{url}/rest/v1/deepsignals_pcr?order=date.desc&limit=1",
            headers=headers,
        )
        if resp.status_code == 200:
            rows = resp.json()
            if rows:
                context["pcr"] = rows[0]

        # Recent dark pool / short volume (top 10 by short ratio)
        dp_query = "order=date.desc,short_volume_ratio.desc&limit=10"
        resp = client.get(
            f"{url}/rest/v1/deepsignals_dark_pool?{dp_query}",
            headers=headers,
        )
        if resp.status_code == 200:
            context["dark_pool"] = resp.json()

        # Recent insider trades (last 7 days)
        week_ago = (datetime.now() - timedelta(days=7)).strftime("%Y-%m-%d")
        insider_query = (
            f"transaction_date=gte.{week_ago}" "&order=transaction_date.desc&limit=10"
        )
        resp = client.get(
            f"{url}/rest/v1/deepsignals_insider?{insider_query}",
            headers=headers,
        )
        if resp.status_code == 200:
            context["insider_trades"] = resp.json()

        # Recent congressional trades
        resp = client.get(
            f"{url}/rest/v1/deepsignals_congress?order=transaction_date.desc&limit=10",
            headers=headers,
        )
        if resp.status_code == 200:
            context["congress_trades"] = resp.json()

        client.close()
        context["status"] = "ok"

    except Exception as e:
        logger.error("DeepSignals fetch failed: %s", e)
        context["status"] = f"error: {str(e)[:100]}"
```
httpx.Client is never closed on exception — connection pool leak.
client.close() on line 309 is only reached in the happy path. Any exception (network error, JSON decode error, etc.) skips it. Use a context manager to guarantee cleanup.
Additionally, non-200 responses are silently ignored with no logging, making it hard to diagnose Supabase API issues (auth failures, rate limits, etc.).
Proposed fix
```diff
     try:
-        client = httpx.Client(timeout=10)
-
-        # Latest put/call ratios
-        resp = client.get(
-            f"{url}/rest/v1/deepsignals_pcr?order=date.desc&limit=1",
-            headers=headers,
-        )
-        if resp.status_code == 200:
-            rows = resp.json()
-            if rows:
-                context["pcr"] = rows[0]
+        with httpx.Client(timeout=10) as client:
+            # Latest put/call ratios
+            resp = client.get(
+                f"{url}/rest/v1/deepsignals_pcr?order=date.desc&limit=1",
+                headers=headers,
+            )
+            if resp.status_code == 200:
+                rows = resp.json()
+                if rows:
+                    context["pcr"] = rows[0]
+            else:
+                logger.warning("PCR fetch returned %s", resp.status_code)

-        # ... (same pattern for other endpoints)
+            # ... apply same pattern to remaining requests ...

-        client.close()
-        context["status"] = "ok"
+            context["status"] = "ok"
```
🪛 Ruff (0.14.14)
[warning] 312-312: Do not catch blind exception: Exception
(BLE001)
[warning] 313-313: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
🤖 Prompt for AI Agents
In `@core/voice/context_gatherer.py` around lines 267 - 314, The try/except block
uses httpx.Client() but calls client.close() only on the happy path, risking
connection-pool leaks; replace the manual client management with a context
manager (use "with httpx.Client(timeout=10) as client:" around the requests in
the block) so the client is always closed even on exceptions, and move setting
context["status"]="ok" inside that with-block; additionally, when each GET
returns a non-200 status_code, add a logger.warn/error that includes the
URL/endpoint and resp.status_code/resp.text to surface Supabase
auth/rate-limit/JSON issues (refer to httpx.Client, client.get(...),
client.close(), logger.error and context keys like "pcr", "dark_pool",
"insider_trades", "congress_trades" to locate where to insert these changes).
```python
    except Exception as e:
        logger.error("DeepSignals fetch failed: %s", e)
        context["status"] = f"error: {str(e)[:100]}"
```
Internal error details leak into the LLM prompt context.
f"error: {str(e)[:100]}" embeds raw exception text (which may contain internal hostnames, paths, or credentials from connection strings) into the context dict that later gets formatted into Claude's system prompt. Use a generic status instead.
Proposed fix

```diff
-        context["status"] = f"error: {str(e)[:100]}"
+        context["status"] = "fetch_error"
```

The full error is already logged on line 313, so the prompt doesn't need the raw message.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-        context["status"] = f"error: {str(e)[:100]}"
+        context["status"] = "fetch_error"
```
🤖 Prompt for AI Agents
In `@core/voice/context_gatherer.py` at line 314, The context_gatherer currently
inserts raw exception text into the prompt via context["status"] = f"error:
{str(e)[:100]}", leaking internal details; change this to a generic status
(e.g., context["status"] = "error" or "error: internal") so no exception content
is placed into the LLM prompt, and rely on the existing log on the previous line
that already records the full exception for debugging; update the code around
the context["status"] assignment in context_gatherer.py accordingly.
```bash
    RUNNING=false
    release_lock
    # Clean up temp files
    rm -rf "$VOICE_TEMP"/*
```
rm -rf "$VOICE_TEMP"/* is dangerous if VOICE_TEMP is ever empty.
ShellCheck SC2115 flags this correctly. If VOICE_TEMP were unset or empty (despite set -u), this expands to rm -rf /*. Use parameter expansion with :? to guard.
Proposed fix

```diff
-    rm -rf "$VOICE_TEMP"/*
+    rm -rf "${VOICE_TEMP:?}"/*
```
+ rm -rf "${VOICE_TEMP:?}"/*📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-    rm -rf "$VOICE_TEMP"/*
+    rm -rf "${VOICE_TEMP:?}"/*
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 127-127: Use "${var:?}" to ensure this never expands to /* .
(SC2115)
🤖 Prompt for AI Agents
In `@scripts/voice/deepstack-voice-listener.sh` at line 127, The removal command
uses rm -rf "$VOICE_TEMP"/* which is unsafe if VOICE_TEMP is unset or empty;
change the deletion to use parameter expansion with the :? guard to ensure
VOICE_TEMP is defined before expanding (i.e., protect the rm invocation by
requiring VOICE_TEMP to be non-empty) and retain the same intent of removing
files under VOICE_TEMP; update the rm line that references VOICE_TEMP so it
fails fast if VOICE_TEMP is unset instead of expanding to root.
```bash
html_text=$(python3 -c "
import sys, re
t = sys.stdin.read()
# Bold: **text** -> <b>text</b>
t = re.sub(r'\*\*(.+?)\*\*', r'<b>\1</b>', t)
# Code: \`text\` -> <code>text</code>
t = re.sub(r'\`\`\`(.+?)\`\`\`', r'<pre>\1</pre>', t, flags=re.DOTALL)
t = re.sub(r'\`(.+?)\`', r'<code>\1</code>', t)
# Headers: # text -> <b>text</b>
t = re.sub(r'^#+\s+(.+)$', r'<b>\1</b>', t, flags=re.MULTILINE)
# Escape remaining HTML entities (but not our tags)
print(t)
" <<< "$text" 2>/dev/null || echo "$text")
```
There was a problem hiding this comment.
Missing HTML entity escaping breaks Telegram parse_mode=HTML.
The markdown→HTML conversion adds <b>, <code>, <pre> tags but never escapes &, <, > in the rest of the text. If the brain response contains raw < or & (common in trading context, e.g., "P/E < 10"), Telegram will reject the message or render it incorrectly.
Escape entities before applying the markdown-to-HTML substitutions, or escape only outside the generated tags.
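As a sketch of the escape-first ordering (the helper name `md_to_telegram_html` is illustrative, not part of the PR; it applies the same markdown subset the listener handles):

```python
import html
import re

def md_to_telegram_html(t: str) -> str:
    """Escape &, <, > first, then apply the markdown-to-HTML rules."""
    t = html.escape(t, quote=False)  # user text can no longer smuggle raw HTML
    t = re.sub(r'```(.+?)```', r'<pre>\1</pre>', t, flags=re.DOTALL)
    t = re.sub(r'\*\*(.+?)\*\*', r'<b>\1</b>', t)
    t = re.sub(r'`(.+?)`', r'<code>\1</code>', t)
    t = re.sub(r'^#+\s+(.+)$', r'<b>\1</b>', t, flags=re.MULTILINE)
    return t

print(md_to_telegram_html("**Cheap**: P/E < 10 & improving"))
# → <b>Cheap</b>: P/E &lt; 10 &amp; improving
```

Because the escaping runs before any substitution, only the generated `<b>`/`<code>`/`<pre>` tags are real HTML; everything the user typed stays literal.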
🤖 Prompt for AI Agents
In `@scripts/voice/deepstack-voice-listener.sh` around lines 155 - 167, The inline
python that builds html_text for Telegram parse_mode=HTML fails to escape HTML
entities, so raw '<', '>' and '&' in the input can break Telegram; update the
python snippet (the python3 -c block that produces html_text) to first
HTML-escape the entire input (escape &, <, >) and then run the markdown-to-HTML
regex substitutions, or alternatively perform escaping only outside of generated
tags by parsing the text and escaping non-tag regions before wrapping with <b>,
<code>, <pre>; ensure the final html_text is safe for Telegram parse_mode=HTML.
```bash
# Get brain response (Python handles intent classification + context + Claude call)
local response
response=$(python3 "$BRAIN_SCRIPT" "$text" 2>> "$LOG_FILE")
```
Telegram users can trigger CLI flags by sending messages like --context-only.
The user's message text is passed directly as argv to the Python script. If a user sends --context-only or --intent-only foo, deepstack_context.py will interpret those as flags, executing a different code path than intended. Use -- to signal end-of-options to the Python script, and update deepstack_context.py accordingly to separate flags from the message argument.
Proposed fix in the Bash listener

```diff
- response=$(python3 "$BRAIN_SCRIPT" "$text" 2>> "$LOG_FILE")
+ response=$(python3 "$BRAIN_SCRIPT" -- "$text" 2>> "$LOG_FILE")
```

Then in scripts/voice/deepstack_context.py, handle `--` as the separator:
```python
# Parse flags
args = sys.argv[1:]
# Skip '--' separator if present
if args and args[0] == '--':
    args = args[1:]
# Force default (brain) mode — no flag processing
message = " ".join(args)
# ... proceed with brain response
```

🤖 Prompt for AI Agents
In `@scripts/voice/deepstack-voice-listener.sh` around lines 398 - 400, The bash
listener passes user text directly as argv to deepstack_context.py which allows
messages starting with "-" to be interpreted as flags; change the invocation in
deepstack-voice-listener.sh to pass an explicit end-of-options marker before the
message (invoke python3 "$BRAIN_SCRIPT" -- "$text") and update
deepstack_context.py to recognize and skip a leading '--' in sys.argv (e.g.,
check args = sys.argv[1:], if args and args[0] == '--' then args = args[1:],
then treat the remainder as the raw message by joining them) so no message text
is parsed as CLI flags and the default brain path is used.
```bash
local update_id chat_id message_id text voice_file_id
update_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['update_id'])" 2>/dev/null || echo "0")
chat_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['chat_id'])" 2>/dev/null || echo "")
message_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['message_id'])" 2>/dev/null || echo "")
text=$(python3 -c "import json; print(json.load(open('$fields_file'))['text'])" 2>/dev/null || echo "")
voice_file_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['voice_file_id'])" 2>/dev/null || echo "")
```
🛠️ Refactor suggestion | 🟠 Major
Five separate python3 invocations per update to read JSON fields is very inefficient.
Each update spawns 5 Python processes just to extract fields that were already written to a single JSON file. Consolidate into one invocation that outputs all fields.
Proposed fix

```diff
- local update_id chat_id message_id text voice_file_id
- update_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['update_id'])" 2>/dev/null || echo "0")
- chat_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['chat_id'])" 2>/dev/null || echo "")
- message_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['message_id'])" 2>/dev/null || echo "")
- text=$(python3 -c "import json; print(json.load(open('$fields_file'))['text'])" 2>/dev/null || echo "")
- voice_file_id=$(python3 -c "import json; print(json.load(open('$fields_file'))['voice_file_id'])" 2>/dev/null || echo "")
+ local update_id chat_id message_id text voice_file_id
+ eval "$(python3 -c "
+import json, shlex
+with open('$fields_file') as f:
+    d = json.load(f)
+for k in ('update_id','chat_id','message_id','text','voice_file_id'):
+    print(f'{k}={shlex.quote(str(d.get(k, \"\")))}')
+" 2>/dev/null)" || {
+  log WARN "Failed to extract fields from update $i"
+  ((i++))
+  continue
+}
```
+ }This uses shlex.quote to safely escape all values, preventing shell injection from Telegram message content.
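A standalone illustration (not code from the PR) of the `shlex.quote` round-trip guarantee that makes the `eval` safe:

```python
import shlex

# A hostile Telegram message crafted to break out of shell quoting
hostile = "'; rm -rf ~ #"
assignment = f"text={shlex.quote(hostile)}"

# The shell would split the quoted assignment back into exactly one word,
# with every injection character kept literal
assert shlex.split(assignment) == [f"text={hostile}"]
```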
🤖 Prompt for AI Agents
In `@scripts/voice/deepstack-voice-listener.sh` around lines 536 - 541, The
current loop spawns five separate python3 processes to read update_id, chat_id,
message_id, text and voice_file_id from $fields_file; replace those five calls
with a single python3 invocation that loads the JSON once, extracts all five
keys, and prints them safely shell-escaped (use shlex.quote) so you can read
them into the local variables update_id, chat_id, message_id, text and
voice_file_id in one shot; update the code that sets those variables to call
that single python3 command and split its output into the respective variables.
```bash
fi

# Handle voice messages
if [[ -n "$voice_file_id" && "$voice_file_id" != "''" ]]; then
```
Odd guard: "$voice_file_id" != "''" compares against a literal two-character string ''.
Python's print() on an empty string outputs an empty line, not ''. The -n "$voice_file_id" check already correctly handles empty strings. The != "''" check is dead logic.
```diff
- if [[ -n "$voice_file_id" && "$voice_file_id" != "''" ]]; then
+ if [[ -n "$voice_file_id" ]]; then
```

Apply the same fix on line 569:

```diff
- elif [[ -n "$text" && "$text" != "''" ]]; then
+ elif [[ -n "$text" ]]; then
```

📝 Committable suggestion
```diff
- if [[ -n "$voice_file_id" && "$voice_file_id" != "''" ]]; then
+ if [[ -n "$voice_file_id" ]]; then
```
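A two-line check (illustrative only) confirming the claim that `print()` on an empty string emits an empty line, never the two characters `''`:

```python
import contextlib
import io

# Capture what print() actually emits for an empty string
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("")
assert buf.getvalue() == "\n"      # an empty line
assert buf.getvalue() != "''\n"    # never the literal two-character string ''
```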
🤖 Prompt for AI Agents
In `@scripts/voice/deepstack-voice-listener.sh` at line 558, The guard using
voice_file_id is comparing against the literal two-character string "''", which
is dead logic; update the conditional(s) that reference the variable
voice_file_id (the if at the current check and the similar one at line 569) to
remove the != "''" comparison and rely solely on the -n "$voice_file_id" test
(i.e., keep if [[ -n "$voice_file_id" ]]; then) so empty strings are handled
correctly.
Create a three-layer AI personality architecture: SOUL.md (voice/personality) sits above TRADING_BRAIN.md (knowledge/facts) and live context (data). The soul defines DeepStack's communication style inspired by Keith Gill — deep value conviction, spreadsheet transparency, casual-but-rigorous analysis. Wire soul loading into the brain module with caching and update the Desk Analyst web persona to match the new personality. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Read strategy configuration from the actual Kalshi trading bot config instead of the DeepStack dashboard config, and infer trading mode from real data (open positions = live). Also fix launchd plist PATH to include Python venv for httpx/yaml deps. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@scripts/voice/deepstack_brain.py`:
- Around line 45-55: The _load_document function currently assumes a readable
file and will propagate any read/encoding/permission exceptions to callers
(_load_brain_document, _load_soul_document); wrap the file access in a
try/except that catches IO-related and decoding errors (e.g., OSError, IOError,
UnicodeDecodeError or a broad Exception), log a warning or logger.exception
including the path, label and error details, and return the provided fallback
string so callers don't crash when a file exists but is unreadable.
- Around line 172-184: On non-200 responses the call to response.json() can
raise JSONDecodeError and mask the original HTTP error; wrap the JSON parsing
used to extract error_type in its own try/except (catching json.JSONDecodeError
and falling back to "unknown") before calling logger.error, so error_type is
always defined and you don't re-raise; keep avoiding logging the raw response
body, and use the existing response.status_code, response.json() (inside the
safe try), and logger.error calls (adjusted to use the fallback) to implement
the fix.
🧹 Nitpick comments (6)
config/voice_config.py (2)
75-75: Remove unused `noqa` directive. Ruff RUF100 flags this: `E501` is not enabled, so the `# noqa: E501` is unnecessary.

Proposed fix

```diff
-NL_PARSE_SYSTEM = (  # noqa: E501
+NL_PARSE_SYSTEM = (
```
15-29: Hardcoded user-home paths reduce portability.
`TRADE_JOURNAL_DB` and `KALSHI_BOT_CONFIG_PATH` embed a fixed `~/clawd/projects/kalshi-trading/` path. Consider making these configurable via environment variables with the current values as defaults, e.g.:

```python
TRADE_JOURNAL_DB = os.getenv(
    "TRADE_JOURNAL_DB",
    os.path.join(
        os.path.expanduser("~"),
        "clawd", "projects", "kalshi-trading", "trade_journal.db",
    ),
)
```

This avoids breakage if the project layout or user changes.
core/voice/context_gatherer.py (2)
339-356: `gather_full_context` doesn't guard against exceptions from its callees.

`get_trade_journal_context()`, `get_strategy_config()`, and `get_deepsignals_context()` each have internal error handling, but the docstring promises "partial failures produce partial context." If any of these functions raises an unexpected exception (e.g., from YAML parsing propagating a `ScannerError` outside the `try` block, or an `OSError`), the entire `gather_full_context` call fails. Consider wrapping each call:

```python
try:
    context["trade_journal"] = get_trade_journal_context()
except Exception:
    logger.exception("Failed to gather trade journal context")
    context["trade_journal"] = {"status": "error"}
```

This would truly deliver on the partial-failure contract.
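The per-callee wrapping can also be factored into a small helper; `_gather_safely` and the example sources below are illustrative, not code from the PR:

```python
import logging

logger = logging.getLogger(__name__)

def _gather_safely(label, fn):
    """Run one gatherer; on any exception, log it and return an error stub."""
    try:
        return fn()
    except Exception:
        logger.exception("Failed to gather %s context", label)
        return {"status": "error"}

def gather_full_context(sources):
    """Each section degrades independently: partial failures, partial context."""
    return {label: _gather_safely(label, fn) for label, fn in sources.items()}

def _boom():
    raise OSError("simulated disk failure")

ctx = gather_full_context({
    "trade_journal": lambda: {"status": "ok", "trades": 12},
    "strategy_config": _boom,
})
# ctx["trade_journal"] → {"status": "ok", "trades": 12}
# ctx["strategy_config"] → {"status": "error"}
```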
516-522: Docstring in `if __name__` block.

Line 517 uses a docstring (`"""Quick test: ..."""`) as the body of the `if __name__ == "__main__":` block. This is a no-op string expression, not an actual docstring (since `if` blocks don't have docstrings). Use a comment instead.

Proposed fix

```diff
 if __name__ == "__main__":
-    """Quick test: print the full context."""
+    # Quick test: print the full context.
     logging.basicConfig(level=logging.INFO)
```

scripts/voice/deepstack_brain.py (2)
20-24: Unconditional `sys.path.insert` may add duplicates.

Unlike `context_gatherer.py`, which guards with `if _PROJECT_ROOT not in sys.path`, this file always inserts. Minor, but worth being consistent.

Proposed fix

```diff
 PROJECT_ROOT = os.path.dirname(
     os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
 )
-sys.path.insert(0, PROJECT_ROOT)
+if PROJECT_ROOT not in sys.path:
+    sys.path.insert(0, PROJECT_ROOT)
```
290-303: Docstring used as a statement in `if __name__` block.

Same as in `context_gatherer.py`: this is a no-op string, not a real docstring. Use a `#` comment.

Proposed fix

```diff
 if __name__ == "__main__":
-    """Quick test: ask the brain a question from CLI."""
+    # Quick test: ask the brain a question from CLI.
     logging.basicConfig(level=logging.INFO)
```
```python
def _load_document(path: str, label: str, fallback: str) -> str:
    """Load a markdown document from disk with caching."""
    if not os.path.exists(path):
        logger.warning("%s not found at %s", label, path)
        return fallback

    with open(path, "r") as f:
        content = f.read()

    logger.info("Loaded %s (%d chars)", label, len(content))
    return content
```
_load_document doesn't handle file read errors.
If the file exists but is unreadable (permissions, encoding error, etc.), this will raise an unhandled exception. The callers (_load_brain_document, _load_soul_document) don't catch this either, so an I/O error during document load would crash ask_brain.
Proposed fix
```diff
 def _load_document(path: str, label: str, fallback: str) -> str:
     """Load a markdown document from disk with caching."""
     if not os.path.exists(path):
         logger.warning("%s not found at %s", label, path)
         return fallback
-    with open(path, "r") as f:
-        content = f.read()
-
-    logger.info("Loaded %s (%d chars)", label, len(content))
-    return content
+    try:
+        with open(path, "r") as f:
+            content = f.read()
+        logger.info("Loaded %s (%d chars)", label, len(content))
+        return content
+    except OSError as e:
+        logger.error("Failed to read %s at %s: %s", label, path, e)
+        return fallback
```

🤖 Prompt for AI Agents
In `@scripts/voice/deepstack_brain.py` around lines 45 - 55, The _load_document
function currently assumes a readable file and will propagate any
read/encoding/permission exceptions to callers (_load_brain_document,
_load_soul_document); wrap the file access in a try/except that catches
IO-related and decoding errors (e.g., OSError, IOError, UnicodeDecodeError or a
broad Exception), log a warning or logger.exception including the path, label
and error details, and return the provided fallback string so callers don't
crash when a file exists but is unreadable.
```python
if response.status_code != 200:
    # Sanitize error: never log raw response (may contain API key echoes)
    error_type = (
        response.json().get("error", {}).get("type", "unknown")
        if response.headers.get("content-type", "").startswith(
            "application/json"
        )
        else "unknown"
    )
    logger.error(
        "Claude API error %d: type=%s", response.status_code, error_type
    )
    return f"Brain error (HTTP {response.status_code}). Try again in a moment."
```
Error-path JSON parsing can itself raise, masking the original error.
On a non-200 response, response.json() is called to extract the error type. If the response body is malformed JSON despite a content-type: application/json header (which does happen), this raises json.JSONDecodeError, which falls through to the generic except Exception on line 196 — returning a less informative message and losing the HTTP status code.
Proposed fix — wrap the error extraction in its own try
```diff
 if response.status_code != 200:
-    # Sanitize error: never log raw response (may contain API key echoes)
-    error_type = (
-        response.json().get("error", {}).get("type", "unknown")
-        if response.headers.get("content-type", "").startswith(
-            "application/json"
-        )
-        else "unknown"
-    )
+    error_type = "unknown"
+    if response.headers.get("content-type", "").startswith("application/json"):
+        try:
+            error_type = response.json().get("error", {}).get("type", "unknown")
+        except (ValueError, KeyError):
+            pass
     logger.error(
         "Claude API error %d: type=%s", response.status_code, error_type
     )
```

🤖 Prompt for AI Agents
In `@scripts/voice/deepstack_brain.py` around lines 172 - 184, On non-200
responses the call to response.json() can raise JSONDecodeError and mask the
original HTTP error; wrap the JSON parsing used to extract error_type in its own
try/except (catching json.JSONDecodeError and falling back to "unknown") before
calling logger.error, so error_type is always defined and you don't re-raise;
keep avoiding logging the raw response body, and use the existing
response.status_code, response.json() (inside the safe try), and logger.error
calls (adjusted to use the fallback) to implement the fix.
Summary
`getBaseUrl()` across 4 files, consolidated Python path constants, hoisted static data)

New Files (8)

- config/voice_config.py
- core/voice/context_gatherer.py
- docs/TRADING_BRAIN.md
- scripts/voice/deepstack-voice-listener.sh
- scripts/voice/deepstack_brain.py
- scripts/voice/deepstack_context.py
- scripts/voice/com.deepstack.voice-listener.plist
- web/src/lib/llm/deepstack-voice-tools.ts

Modified Files (5)

- web/src/lib/llm/tools.ts: `getBaseUrl()`
- web/src/lib/llm/perplexity-finance-tools.ts: `getBaseUrl()` → shared utils
- web/src/lib/llm/prediction-market-tools.ts: `getBaseUrl()` → shared utils
- web/src/lib/personas/persona-configs.ts
- web/src/lib/types/persona.ts: `desk-analyst` to PersonaId union

Test plan

- `npx tsc --noEmit` in `web/`)
- Create @deepstack_voice_bot via BotFather and fill `deepstack-voice.env` for live testing
- `bash scripts/voice/deepstack-voice-listener.sh`

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Refactor