
Copilot/update run action step #32

Merged
digitalstore2025 merged 8 commits into main from copilot/update-run-action-step
Apr 1, 2026

Conversation

@digitalstore2025
Owner

@digitalstore2025 digitalstore2025 commented Apr 1, 2026

User description

Summary by Sourcery

Improve robustness of media generation and setup scripts while adding basic CI for Node-based build.

Enhancements:

  • Handle failures of ffmpeg and Wav2Lip subprocesses gracefully with clear logging and fallbacks to audio-only outputs.
  • Allow clean shutdown of the TV channel loop on keyboard interrupt and provide clearer error messages during segment generation.
  • Enhance broadcast and QudsCast setup scripts with version checks, multi-platform ffmpeg install/verification, directory and sample metadata initialization, consolidated dependency installation, and clearer progress output.

CI:

  • Introduce a GitHub Actions workflow to run Node.js setup and build on pushes and pull requests to main.

Documentation:

  • Add a README for the broadcast AI component describing setup, usage, dataset structure, APIs, and troubleshooting guidance.

CodeAnt-AI Description

Improve setup, error handling, and build checks for the broadcast apps

What Changed

  • Audio generation now falls back to the raw file if ffmpeg is missing or fails, so speech can still be produced.
  • TV segment generation now keeps running when ffmpeg, Wav2Lip, or other steps fail, and it stops cleanly on keyboard interrupt.
  • Setup scripts now check required versions, install and verify ffmpeg, create missing folders and sample metadata files, and show clearer progress and success messages.
  • QudsCast setup now creates a .env file, prepares storage folders, installs all dependencies in one step, and handles Python TTS setup more safely.
  • A new build workflow runs automatically on pushes and pull requests to main.
  • Added a README for the broadcast AI system with setup, dataset, training, generation, and troubleshooting guidance.

Impact

✅ Fewer broken audio generations
✅ Fewer broadcast interruptions
✅ Shorter setup for new users
✅ Clearer setup and build feedback

💡 Usage Guide

Checking Your Pull Request

Every time you open a pull request, our system automatically reviews it. We check for security issues, infrastructure misconfigurations, and common code problems to make sure your changes are solid and won't cause trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

Copilot AI and others added 8 commits February 6, 2026 02:22
Co-authored-by: digitalstore2025 <211062017+digitalstore2025@users.noreply.github.com>
Co-authored-by: digitalstore2025 <211062017+digitalstore2025@users.noreply.github.com>
- Add version checks (Node.js 18+, Python 3)
- Add .env file creation from .env.example
- Add storage directory creation
- Add step-by-step progress indicators with checkmarks
- Add verification steps for installations
- Add example CSV file creation for broadcast-ai datasets
- Improve error handling and messages
- Use npm run install:all instead of manual installs
- Add brew package manager support for macOS
- Better Python pip detection logic
- Add success indicators throughout the process

Co-authored-by: digitalstore2025 <211062017+digitalstore2025@users.noreply.github.com>
@codeant-ai

codeant-ai Bot commented Apr 1, 2026

CodeAnt AI is reviewing your PR.


Thanks for using CodeAnt! 🎉

We're free for open-source projects. If you're enjoying it, help us grow by sharing.

Share on X ·
Reddit ·
LinkedIn

@sourcery-ai

sourcery-ai Bot commented Apr 1, 2026

Reviewer's Guide

Adds more robust error handling and resilience around media generation and broadcast loop, improves setup automation for both broadcast-ai and qudscast-ai, documents the broadcast-ai system, and introduces a basic Node-centric CI build workflow.

Flow diagram for enhanced broadcast-ai setup.sh automation

flowchart TD
    A[Start setup.sh] --> B[Check python3 presence]
    B -->|python3 found| C[Get Python version and print]
    B -->|python3 missing| Z1[[Print error and exit 1]]

    C --> D[Detect package manager]
    D -->|apt-get| E[sudo apt-get update && apt-get install ffmpeg]
    D -->|yum| F[sudo yum install ffmpeg]
    D -->|brew| G[brew install ffmpeg]
    D -->|none| H[[Print warning: install ffmpeg manually]]

    E --> I[Verify ffmpeg in PATH]
    F --> I
    G --> I
    H --> I

    I -->|ffmpeg found| J[[Print ffmpeg version]]
    I -->|ffmpeg missing| K[[Print warning: ffmpeg not found]]

    J --> L[Install Python requirements via pip3 -r requirements.txt]
    K --> L

    L --> M[Create dataset and input/output directories]
    M --> N[For each dataset_* create metadata.csv.example if missing]

    N --> O[[Print Setup complete and next steps with python3 commands]]
    O --> P[End]
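The package-manager detection step in the flow above can be sketched in shell. This is an illustrative sketch, not an excerpt from the actual setup.sh; the function name is ours:

```shell
#!/bin/sh
# Detect which supported package manager is available, mirroring the
# apt-get / yum / brew branches in the flowchart (illustrative sketch).
detect_pkg_manager() {
    for pm in apt-get yum brew; do
        if command -v "$pm" >/dev/null 2>&1; then
            echo "$pm"
            return 0
        fi
    done
    echo "none"
}

# The install step would then branch on the result, e.g.:
#   case "$(detect_pkg_manager)" in
#       apt-get) sudo apt-get update && sudo apt-get install -y ffmpeg ;;
#       yum)     sudo yum install -y ffmpeg ;;
#       brew)    brew install ffmpeg ;;
#       none)    echo "WARNING: install ffmpeg manually" >&2 ;;
#   esac
detect_pkg_manager
```

Probing with `command -v` rather than running each tool keeps the detection side-effect free and compatible with `set -e`.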

File-Level Changes

Harden media generation and broadcast loop against missing tools and runtime failures.
  • Wrap anchor video ffmpeg invocation in try/except, logging specific errors for ffmpeg failures and missing binary instead of raising.
  • Wrap Wav2Lip subprocess call in try/except to fall back to audio-only segments when processing fails or the Python/script executable is missing, with clearer warnings.
  • Improve the main broadcast loop error handling to handle KeyboardInterrupt gracefully and log unexpected exceptions with type information before continuing.
broadcast-ai/run_tv_channel.py
Make audio generation resilient to missing ffmpeg while preserving raw output.
  • Wrap ffmpeg-based post-processing in generate_voice with try/except around CalledProcessError and FileNotFoundError.
  • On ffmpeg failure or absence, log warnings and fall back to using the raw synthesized audio file as the final output path.
broadcast-ai/generate.py
Enhance broadcast-ai setup script with environment validation, better tooling installation, and dataset scaffolding.
  • Add explicit Python 3 presence and version check before proceeding.
  • Expand ffmpeg installation to support multiple package managers (apt, yum, brew) and verify installation, emitting clearer success/warning messages.
  • Switch to pip3 for Python dependencies and improve progress logging.
  • Create all expected dataset directory structures and generate example metadata CSV files for each dataset_* folder.
  • Update terminal messaging, completion banner, and usage instructions to reference python3 commands and new setup phases.
broadcast-ai/setup.sh
Improve qudscast-ai setup automation, environment safety, and dependency handling.
  • Enable set -e for early failure on errors in the setup script.
  • Add Node.js version check enforcing Node 18+.
  • Automatically create or reuse a .env file from .env.example.
  • Create storage directories for voices, jingles, audio, and videos before use.
  • Use a consolidated npm run install:all to install dependencies and keep existing frontend build, now with progress markers and success messages.
  • Make Python TTS installation more robust by preferring pip3, falling back to pip, and handling absence of pip gracefully.
  • Ensure backend helper scripts are made executable and log completion.
qudscast-ai/setup.sh
Document the broadcast-ai system and add basic CI for Node tooling.
  • Add a comprehensive README for broadcast-ai describing purpose, setup, usage, datasets, API, and troubleshooting.
  • Introduce a GitHub Actions workflow that checks out the repo, sets up Node.js 20, installs npm dependencies via npm ci, and runs npm run build on pushes and PRs to main.
  • Remove the previous Jekyll-specific GitHub Actions workflow in favor of the new build pipeline.
broadcast-ai/README.md
.github/workflows/build.yml
.github/workflows/jekyll-docker.yml

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@codeant-ai codeant-ai Bot added the size:L This PR changes 100-499 lines, ignoring generated files label Apr 1, 2026

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 6 issues and left some high-level feedback:

  • In _ensure_anchor_video, if ffmpeg fails you only log and continue; consider returning early or signalling failure so later code doesn’t assume ANCHOR_VIDEO exists and is valid.
  • In run_tv_channel.py you invoke the Wav2Lip script with python while the setup script and docs standardize on python3; aligning these to a single interpreter (and perhaps detecting it) would reduce environment inconsistency.
  • The new GitHub Actions workflow runs npm ci and npm run build at the repo root; if your Node project lives in a subdirectory (e.g. qudscast-ai or frontend), it would be more robust to set working-directory explicitly so the build doesn’t depend on the current layout.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `_ensure_anchor_video`, if ffmpeg fails you only log and continue; consider returning early or signalling failure so later code doesn’t assume `ANCHOR_VIDEO` exists and is valid.
- In `run_tv_channel.py` you invoke the Wav2Lip script with `python` while the setup script and docs standardize on `python3`; aligning these to a single interpreter (and perhaps detecting it) would reduce environment inconsistency.
- The new GitHub Actions workflow runs `npm ci` and `npm run build` at the repo root; if your Node project lives in a subdirectory (e.g. `qudscast-ai` or `frontend`), it would be more robust to set `working-directory` explicitly so the build doesn’t depend on the current layout.

## Individual Comments

### Comment 1
<location path="broadcast-ai/run_tv_channel.py" line_range="59-68" />
<code_context>
-        check=True,
-        capture_output=True,
-    )
+    try:
+        subprocess.run(
+            [
</code_context>
<issue_to_address>
**issue (bug_risk):** Anchor video creation failures are now silently ignored, which can cause later steps to fail in less obvious ways.

The previous `ffmpeg` failure would raise and halt the flow; now it only logs and `_ensure_anchor_video` returns without signaling a problem. Callers may proceed assuming `ANCHOR_VIDEO` exists, leading to harder‑to‑debug downstream errors. Please either propagate the exception, return an explicit success/failure indicator, or ensure later code validates `ANCHOR_VIDEO` before use to prevent cascading failures.
</issue_to_address>

### Comment 2
<location path="broadcast-ai/run_tv_channel.py" line_range="138-140" />
<code_context>
+        except KeyboardInterrupt:
+            print("[TV] Shutting down broadcast loop")
+            break
         except Exception as exc:
-            print(f"[TV] Segment failed: {exc}")
+            print(f"[TV] Unexpected error during segment generation ({type(exc).__name__}): {exc}")
+            print("[TV] Continuing to next segment...")

         time.sleep(LOOP_PAUSE_SECONDS)
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Catching bare Exception in the main loop may hide useful debugging information.

To keep robustness without losing debuggability, either log the full traceback (e.g., via `traceback.print_exc()` or the logging module) or narrow the `except` to specific, expected exception types so unexpected failures still surface during development.

Suggested implementation:

```python
        except Exception as exc:
            print(f"[TV] Unexpected error during segment generation ({type(exc).__name__}): {exc}")
            # Print full traceback to aid debugging while keeping the loop robust
            traceback.print_exc()
            print("[TV] Continuing to next segment...")

```

To fully implement this change, ensure `traceback` is imported at the top of `broadcast-ai/run_tv_channel.py`:

1. Add `import traceback` near the other imports, e.g.:
   - `import traceback`

If the file already uses a logging framework instead of `print`, you may want to replace `traceback.print_exc()` with `logger.exception(...)` following your existing logging conventions.
</issue_to_address>

### Comment 3
<location path="broadcast-ai/setup.sh" line_range="44-45" />
<code_context>
 # --- Python dependencies ----------------------------------------------------
-echo "[2/3] Installing Python dependencies..."
-pip install -q -r requirements.txt
+echo "[3/5] Installing Python dependencies..."
+pip3 install -q -r requirements.txt
+echo "✓ Python packages installed"

</code_context>
<issue_to_address>
**suggestion (bug_risk):** Using pip3 directly after checking python3 may bind to a different Python installation.

Since you’ve just validated `python3`, calling `pip3` directly can still hit a different interpreter (or be missing) on some systems. Prefer `python3 -m pip install -r requirements.txt` so the packages are installed into the same Python you just checked.
</issue_to_address>
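The fix suggested in Comment 3 can be sketched as follows. The function name and the failure message are illustrative, not from the actual script:

```shell
#!/bin/sh
# Install dependencies into the same interpreter that was just validated.
# `python3 -m pip` runs the pip bound to this python3, so packages cannot
# land in a different installation the way a standalone `pip3` might.
install_python_deps() {
    command -v python3 >/dev/null 2>&1 || {
        echo "ERROR: Python 3 is required but not found" >&2
        return 1
    }
    python3 -m pip install -q -r "$1"
}
```

In setup.sh this would replace the bare `pip3 install -q -r requirements.txt` line.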

### Comment 4
<location path="qudscast-ai/setup.sh" line_range="10-16" />
<code_context>
-echo "Installing backend dependencies..."
-npm install
+# Check Node.js version
+echo "[1/7] Checking Node.js version..."
+NODE_VERSION=$(node --version | cut -d'v' -f2 | cut -d'.' -f1)
+if [ "$NODE_VERSION" -lt 18 ]; then
+    echo "ERROR: Node.js 18+ is required (found: $(node --version))"
+    exit 1
+fi
+echo "✓ Node.js $(node --version) detected"

-# Install frontend dependencies
</code_context>
<issue_to_address>
**issue:** Node.js existence is not checked before calling `node --version`, which will cause an abrupt exit with `set -e`.

Because `set -e` is enabled, `node --version` will cause the script to exit with a generic error if Node isn’t installed. Add a `command -v node >/dev/null 2>&1 || { echo "ERROR: Node.js is required"; exit 1; }`-style check (as you do for Python in the other script) before reading the version, so the failure produces a clear, actionable message.
</issue_to_address>
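A minimal version of the guard suggested in Comment 4, with the version parsing factored into a helper so it can be checked in isolation (both function names are ours):

```shell
#!/bin/sh
# Parse the major version out of a `node --version` string such as "v20.11.1".
node_major() {
    printf '%s' "$1" | cut -d'v' -f2 | cut -d'.' -f1
}

# Check that node exists before reading its version, so a missing binary
# produces a clear message instead of an abrupt `set -e` exit.
require_node() {
    command -v node >/dev/null 2>&1 || {
        echo "ERROR: Node.js is required but not installed" >&2
        return 1
    }
    if [ "$(node_major "$(node --version)")" -lt "$1" ]; then
        echo "ERROR: Node.js $1+ is required (found: $(node --version))" >&2
        return 1
    fi
    echo "Node.js $(node --version) detected"
}
```

In qudscast-ai/setup.sh, `require_node 18` would replace the current unguarded `node --version` parsing.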

### Comment 5
<location path="broadcast-ai/generate.py" line_range="99-104" />
<code_context>
+            check=True,
+            capture_output=True,
+        )
+    except (subprocess.CalledProcessError, FileNotFoundError) as e:
+        error_type = "not found" if isinstance(e, FileNotFoundError) else "failed"
+        print(f"[WARN] ffmpeg {error_type} - skipping audio post-processing")
+        print(f"[WARN] Using raw audio output instead")
+        # If ffmpeg fails or is not installed, use the raw audio file
+        final_path = raw_path

     print(f"[GEN] {style} → {final_path}")
</code_context>
<issue_to_address>
**suggestion:** ffmpeg errors are logged without stderr, which may reduce diagnosability when post-processing fails.

The fallback to `raw_path` makes sense, but since you're already using `capture_output=True`, it would help to include ffmpeg’s stderr in the warning (e.g., `e.stderr.decode('utf-8', errors='ignore')` for `CalledProcessError`) so failures are easier to debug.

```suggestion
    except (subprocess.CalledProcessError, FileNotFoundError) as e:
        error_type = "not found" if isinstance(e, FileNotFoundError) else "failed"
        print(f"[WARN] ffmpeg {error_type} - skipping audio post-processing")

        # Include stderr for diagnosability when ffmpeg is present but fails
        if isinstance(e, subprocess.CalledProcessError) and e.stderr:
            stderr_text = e.stderr.decode("utf-8", errors="ignore")
            if stderr_text.strip():
                print("[WARN] ffmpeg stderr:")
                print(stderr_text)

        print(f"[WARN] Using raw audio output instead")
        # If ffmpeg fails or is not installed, use the raw audio file
        final_path = raw_path
```
</issue_to_address>

### Comment 6
<location path="broadcast-ai/README.md" line_range="127-128" />
<code_context>
+
+## Common Issues
+
+**Training fails with memory error:**
+Your system needs more RAM or reduce the batch size parameter.
+
+**Cannot find model:**
</code_context>
<issue_to_address>
**suggestion (typo):** The sentence "Your system needs more RAM or reduce the batch size parameter" is grammatically unbalanced.

Consider rephrasing to: "Your system needs more RAM, or you need to reduce the batch size parameter."

```suggestion
**Training fails with memory error:**
Your system needs more RAM, or you need to reduce the batch size parameter.
```
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment thread broadcast-ai/README.md
Owner Author

@digitalstore2025 digitalstore2025 left a comment


Do

@digitalstore2025 digitalstore2025 merged commit 26c7d26 into main Apr 1, 2026
3 checks passed

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a2deb51186

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread broadcast-ai/generate.py
print(f"[WARN] ffmpeg {error_type} - skipping audio post-processing")
print(f"[WARN] Using raw audio output instead")
# If ffmpeg fails or is not installed, use the raw audio file
final_path = raw_path


P1: Preserve requested output filename on ffmpeg fallback

When post-processing fails, this assigns final_path to output/raw.wav, which ignores the caller’s output_name and causes downstream breakage in the fallback path: run_tv_channel.py requests tv_audio.wav, but tv_server.py only serves output/tv_audio.wav as audio fallback, so /health can report no media even though synthesis succeeded. It also makes repeated generations overwrite a single raw.wav file instead of producing per-request outputs.

Useful? React with 👍 / 👎.

Comment thread broadcast-ai/generate.py
print(f"[WARN] ffmpeg {error_type} - skipping audio post-processing")
print(f"[WARN] Using raw audio output instead")
# If ffmpeg fails or is not installed, use the raw audio file
final_path = raw_path


Suggestion: Falling back by setting final_path to raw_path breaks the function contract and returns a shared temporary filename (output/raw.wav) instead of the requested output_name. This causes callers to lose the expected output path and makes repeated generations overwrite the same fallback file. Keep final_path as the requested destination and move/copy the raw audio there when ffmpeg fails. [logic error]

Severity Level: Major ⚠️
- ❌ TV audio-only fallback fails when ffmpeg and Wav2Lip absent.
- ⚠️ CLI `generate.py` returns unexpected path ignoring `output_name`.
- ⚠️ Raw audio saved under shared temp name `output/raw.wav`.
- ⚠️ Future consumers relying on `output_name` receive wrong path.
Suggested change (replace `final_path = raw_path` with):
```suggestion
os.replace(raw_path, final_path)
```
Steps of Reproduction ✅
1. Start the TV channel loop by running `python broadcast-ai/run_tv_channel.py` which
calls `run()` in `broadcast-ai/run_tv_channel.py:120-145`, and separately start the
streaming server with `python broadcast-ai/tv_server.py` which serves media from
`broadcast-ai/tv_server.py:100-113`.

2. On a machine where `ffmpeg` is NOT installed on the PATH, `_generate_segment()` at
`broadcast-ai/run_tv_channel.py:80-88` calls `generate_voice(headline, style="news",
output_name="tv_audio.wav")`. In `generate_voice()` (`broadcast-ai/generate.py:45-107`),
the `subprocess.run([... "ffmpeg" ...], check=True, ...)` at
`broadcast-ai/generate.py:88-98` raises `FileNotFoundError`, triggering the `except` block
at `broadcast-ai/generate.py:99-104`.

3. Inside that `except` block, the code executes the fallback `final_path = raw_path` at
`broadcast-ai/generate.py:104`, so `generate_voice()` returns `"output/raw.wav"` instead
of the requested `"output/tv_audio.wav"` (which was originally set at
`broadcast-ai/generate.py:66`). `_generate_segment()` therefore returns `"output/raw.wav"`
as `audio_path` when Wav2Lip is unavailable, due to the early returns at
`broadcast-ai/run_tv_channel.py:85-91`.

4. The streaming server in `broadcast-ai/tv_server.py` is hard-coded to look for
`AUDIO_FALLBACK = "output/tv_audio.wav"` at `broadcast-ai/tv_server.py:23-25`. When only
`"output/raw.wav"` exists (ffmpeg missing) and Wav2Lip is also missing or its checkpoint
is missing so no video is produced (`broadcast-ai/run_tv_channel.py:85-91`), the `/stream`
endpoint at `broadcast-ai/tv_server.py:100-113` finds neither `VIDEO_PATH` nor
`AUDIO_FALLBACK` and returns `404`, while the `/health` endpoint at
`broadcast-ai/tv_server.py:116-124` reports `"audio": false`. Users see "no broadcast
currently" despite valid audio being generated, because the fallback changed the return
path from the requested `output_name` to the shared temporary `raw.wav`.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** broadcast-ai/generate.py
**Line:** 104:104
**Comment:**
	*Logic Error: Falling back by setting `final_path` to `raw_path` breaks the function contract and returns a shared temporary filename (`output/raw.wav`) instead of the requested `output_name`. This causes callers to lose the expected output path and makes repeated generations overwrite the same fallback file. Keep `final_path` as the requested destination and move/copy the raw audio there when ffmpeg fails.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
👍 | 👎

Comment on lines +94 to +96
subprocess.run(
[
"python", WAV2LIP_INFERENCE,


Suggestion: The Wav2Lip subprocess is invoked with a hardcoded python executable, which can point to a missing or wrong interpreter (for example, systems that only provide python3 or active virtualenvs), causing lip-sync to fail every time and always falling back to audio. Use the current interpreter to preserve environment compatibility. [possible bug]

Severity Level: Major ⚠️
- ⚠️ Lip-synced video generation fails on many Python3-only systems.
- ⚠️ Broadcast loop silently degrades to audio-only segments.
Suggested change (replace the hardcoded `"python"` invocation with):
```suggestion
import sys
subprocess.run(
    [
        sys.executable, WAV2LIP_INFERENCE,
```
Steps of Reproduction ✅
1. Execute the TV channel loop via `python3 broadcast-ai/run_tv_channel.py`, which
triggers the `run()` entry point and its infinite loop (see
`broadcast-ai/run_tv_channel.py`, lines 119–146 in the final PR state).

2. Ensure that `WAV2LIP_INFERENCE` (`Wav2Lip/inference.py`) and `WAV2LIP_CHECKPOINT`
(`models/wav2lip.pth`) exist so `_generate_segment()` takes the Wav2Lip path (function
body around lines 80–113 in `broadcast-ai/run_tv_channel.py`).

3. In `_generate_segment()`, when it reaches the Wav2Lip call, it executes
`subprocess.run([... "python", WAV2LIP_INFERENCE, ...], check=True)` at lines 94–103 in
`broadcast-ai/run_tv_channel.py`; on a system where `python` is missing or points to a
different interpreter than the one running `run_tv_channel.py` (for example, only
`python3` exists or dependencies are installed only in a virtualenv), this subprocess
fails with `FileNotFoundError` or `CalledProcessError`.

4. The failure is caught by the surrounding `except` blocks in `_generate_segment()`
(lines 105–112), which log warnings and return `audio_path` instead of `VIDEO_OUTPUT`, so
the main loop logs a "Segment ready" message but only ever produces audio-only output,
never the intended lip-synced video.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** broadcast-ai/run_tv_channel.py
**Line:** 94:96
**Comment:**
	*Possible Bug: The Wav2Lip subprocess is invoked with a hardcoded `python` executable, which can point to a missing or wrong interpreter (for example, systems that only provide `python3` or active virtualenvs), causing lip-sync to fail every time and always falling back to audio. Use the current interpreter to preserve environment compatibility.

Validate the correctness of the flagged issue. If correct, how can I resolve it? If you propose a fix, implement it and keep it concise.
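The pattern the suggestion describes can be demonstrated in isolation; the helper name and the `-c` stand-in for `WAV2LIP_INFERENCE` are illustrative, not part of the PR:

```python
import subprocess
import sys

def run_with_current_interpreter(args, **kwargs):
    """Launch a Python subprocess with the same interpreter (and hence the
    same virtualenv and installed packages) that is running this script."""
    return subprocess.run([sys.executable, *args], check=True, **kwargs)

# Stand-in for the Wav2Lip call; a real caller would pass the inference
# script path and its arguments instead of "-c".
run_with_current_interpreter(["-c", "print('interpreter ok')"])
```

Unlike a bare `"python"`, `sys.executable` is an absolute path to the running interpreter, so the child process works even on systems that only ship `python3` or inside an active virtualenv.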

Comment thread broadcast-ai/setup.sh
```bash
echo "[ERROR] Python 3 is required but not found"
exit 1
fi
PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}' | cut -d'.' -f1,2)
```

Suggestion: The script claims to perform a Python version check, but it only verifies that python3 exists and never enforces the documented minimum version (3.8+). On systems with older Python 3 versions, later training/runtime steps can fail despite setup reporting success. Add an explicit major/minor comparison and stop early with a clear error. [logic error]

Severity Level: Major ⚠️
- ❌ Training via `train.py` may break on older Python.
- ❌ Runtime scripts `run_tv_channel.py` / `tv_server.py` can fail.
- ⚠️ Setup claims success despite incompatible Python interpreter.
- ⚠️ Debugging version issues becomes harder for end users.
Suggested change

```diff
-PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}' | cut -d'.' -f1,2)
+PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}')
+PYTHON_MAJOR=$(echo "$PYTHON_VERSION" | cut -d'.' -f1)
+PYTHON_MINOR=$(echo "$PYTHON_VERSION" | cut -d'.' -f2)
+if [ "$PYTHON_MAJOR" -lt 3 ] || { [ "$PYTHON_MAJOR" -eq 3 ] && [ "$PYTHON_MINOR" -lt 8 ]; }; then
+    echo "[ERROR] Python 3.8+ is required (found: $PYTHON_VERSION)"
+    exit 1
+fi
```
Steps of Reproduction ✅
1. On a system where `python3` is installed but is an older 3.x interpreter (for example
Python 3.6), the user runs the setup entrypoint `broadcast-ai/setup.sh` (script header and
entry at lines 1–10).

2. The script executes the Python check block at `broadcast-ai/setup.sh:11-18`, which only
verifies that `python3` exists and extracts the major.minor string
(`PYTHON_VERSION=$(python3 --version ... | cut -d'.' -f1,2)` at line 17), then prints `✓
Python $PYTHON_VERSION detected` at line 18 without enforcing a minimum.

3. Setup continues through dependency installation and directory creation, and eventually
prints the success banner and next steps at the end of the script (the "✓ Setup complete!"
and "Run: python3 train.py" / "run_tv_channel.py" / "tv_server.py" instructions in the
final block).

4. Following those printed instructions, the user runs `python3 train.py` or `python3
run_tv_channel.py` from the broadcast-ai directory; if those scripts or their dependencies
rely on Python 3.8+ features, they will now fail at runtime (e.g., SyntaxError or
ImportError) even though setup previously reported success, illustrating that the version
check in `setup.sh` did not actually enforce the effective minimum version.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** broadcast-ai/setup.sh
**Line:** 17:17
**Comment:**
	*Logic Error: The script claims to perform a Python version check, but it only verifies that `python3` exists and never enforces the documented minimum version (3.8+). On systems with older Python 3 versions, later training/runtime steps can fail despite setup reporting success. Add an explicit major/minor comparison and stop early with a clear error.

Validate the correctness of the flagged issue. If correct, how can I resolve it? If you propose a fix, implement it and keep it concise.

Comment thread broadcast-ai/setup.sh
```diff
-echo "[2/3] Installing Python dependencies..."
-pip install -q -r requirements.txt
+echo "[3/5] Installing Python dependencies..."
+pip3 install -q -r requirements.txt
```

Suggestion: Installing dependencies with a bare pip3 call can fail when pip3 is not installed or can target a different interpreter than python3, causing missing-package errors at runtime. Use python3 -m pip with an explicit availability check so dependencies are installed into the same interpreter validated earlier. [possible bug]

Severity Level: Major ⚠️
- ❌ Python entrypoints may miss required dependencies at runtime.
- ⚠️ Setup can fail with opaque `pip3` command errors.
- ⚠️ Dependencies may install into a different Python environment.
- ⚠️ Environment reproducibility across machines becomes unreliable.
Suggested change

```diff
-pip3 install -q -r requirements.txt
+if ! python3 -m pip --version &>/dev/null; then
+    echo "[ERROR] pip for Python 3 is required but not found"
+    exit 1
+fi
+python3 -m pip install -q -r requirements.txt
```
Steps of Reproduction ✅
1. On a machine where `python3` is available and passes the earlier check in
`broadcast-ai/setup.sh:11-18`, but where `pip3` is either not installed or points to a
different Python 3 interpreter than `python3` (common when multiple Python installations
exist), the user runs `broadcast-ai/setup.sh`.

2. After system package installation and ffmpeg verification, the script reaches the
Python dependency block at `broadcast-ai/setup.sh:44-46`, logs `"[3/5] Installing Python
dependencies..."` (line 44), and then executes `pip3 install -q -r requirements.txt` (line
45).

3. If `pip3` is missing, `set -euo pipefail` causes the script to abort on the `pip3`
command with a generic "command not found" shell error; if `pip3` points at a different
interpreter than `python3`, the packages from `requirements.txt` are installed into that
other environment, while the validated `python3` remains without those dependencies.

4. Following the printed instructions at the end of `setup.sh`, the user runs `python3
train.py` or other Python entrypoints; these then fail with `ModuleNotFoundError` /
`ImportError` for packages that were incorrectly installed via `pip3`, demonstrating that
the dependency installation was not reliably tied to the validated interpreter.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** broadcast-ai/setup.sh
**Line:** 45:45
**Comment:**
	*Possible Bug: Installing dependencies with a bare `pip3` call can fail when `pip3` is not installed or can target a different interpreter than `python3`, causing missing-package errors at runtime. Use `python3 -m pip` with an explicit availability check so dependencies are installed into the same interpreter validated earlier.

Validate the correctness of the flagged issue. If correct, how can I resolve it? If you propose a fix, implement it and keep it concise.

Comment thread qudscast-ai/setup.sh
```bash
#!/bin/bash
# QudsCast AI - Setup Script
set -e
```

Suggestion: The script uses relative paths everywhere, so running it from outside qudscast-ai (for example ./qudscast-ai/setup.sh from repo root) will fail on .env.example, frontend, and backend/* paths. Change it to always cd into the script's own directory first so path-dependent steps run reliably. [logic error]

Severity Level: Major ⚠️
- ❌ Setup fails when run from repository root directory.
- ⚠️ Environment file not created inside `qudscast-ai` project.
- ⚠️ QudsCast AI dependencies not installed via `install:all`.
Suggested change

```diff
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+cd "$SCRIPT_DIR"
```
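The two suggested lines can be checked in isolation: the snippet below writes a throwaway script into a temp directory and runs it from a different working directory to show that it resolves its own location (paths and filenames here are illustrative):

```shell
# Create a script that cd's into its own directory, as the suggestion does.
tmpdir=$(mktemp -d)
cat > "$tmpdir/setup.sh" <<'EOF'
#!/bin/bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"
pwd
EOF
chmod +x "$tmpdir/setup.sh"

# Invoke it from a different working directory; it prints its own
# directory, so relative paths like .env.example would now resolve there.
(cd / && "$tmpdir/setup.sh")
```

With this guard in place, `bash qudscast-ai/setup.sh` from the repo root behaves the same as running the script from inside `qudscast-ai/`.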
Steps of Reproduction ✅
1. From the repository root `/workspace/-` (which points to
`/tmp/pr-review/repo-clones/digitalstore2025/-/a2deb511860a9e0d5fccb66e84425aba5dbd04d7`,
verified via `ls`), note there is a `qudscast-ai/` subdirectory containing `setup.sh`
(`/workspace/-/qudscast-ai/setup.sh:1-3`).

2. Still in the repository root (not `cd`'d into `qudscast-ai/`), execute the setup script
with `bash qudscast-ai/setup.sh`, which runs the script while the working directory
remains the repo root.

3. The script at `qudscast-ai/setup.sh:18-22` checks for `.env` and, not finding it in the
root, executes `cp .env.example .env`; however, `.env.example` exists only under
`/workspace/-/qudscast-ai/.env.example` (confirmed by `ls /workspace/-/qudscast-ai`), so
`cp` fails and, because `set -e` is enabled at line 3, the script terminates immediately.

4. As a result, none of the intended QudsCast AI setup steps run in the correct project
directory: `npm run install:all` at `qudscast-ai/setup.sh:32-34` executes (if at all)
against the root `package.json` (which has only `dev`, `build`, `preview` scripts at
`/workspace/-/package.json:6-9` and no `install:all`), and the frontend build and
permission setup at lines 36-55 never successfully configure the `qudscast-ai` app when
the script is invoked from the repository root.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** qudscast-ai/setup.sh
**Line:** 4:4
**Comment:**
	*Logic Error: The script uses relative paths everywhere, so running it from outside `qudscast-ai` (for example `./qudscast-ai/setup.sh` from repo root) will fail on `.env.example`, `frontend`, and `backend/*` paths. Change it to always `cd` into the script's own directory first so path-dependent steps run reliably.

Validate the correctness of the flagged issue. If correct, how can I resolve it? If you propose a fix, implement it and keep it concise.

@codeant-ai

codeant-ai Bot commented Apr 1, 2026

CodeAnt AI finished reviewing your PR.


Labels: size:L (This PR changes 100-499 lines, ignoring generated files)

Projects: None yet

2 participants