Type a topic, and AI finds YouTube videos, analyzes them, and creates podcasts, slides, and reports for you
한국어 | English
Type a topic you're curious about, and this tool finds related YouTube videos, has AI analyze them, and turns them into various content formats, automatically.
| Content | Description | Perfect for... |
|---|---|---|
| 🎧 AI Podcast | Two AI hosts discuss the topic in a natural audio conversation (.mp3) | Learning on your commute: just listen and absorb |
| 📄 Briefing Report | A clean, structured summary document (.md) | 5-minute read before a meeting |
| 🎬 Presentation Slides | Auto-generated slide deck (.pptx) | Cutting your presentation prep time dramatically |
| 🧠 Mind Map | Visual map showing how topics connect | Seeing the big picture of complex subjects fast |
| 💬 AI Q&A | Ask questions about your collected sources and get instant answers | When you don't have time to watch hours of video |
| 🌐 Web Research | AI automatically finds more related sources from the web | When you need broader perspectives |
All of this happens with a single command. Just type a topic, and the system finds YouTube videos, feeds them to AI, generates your chosen content, and saves everything to your computer.
| Tool | Install Command | Purpose |
|---|---|---|
| Claude Code | Official install guide | AI assistant that runs everything |
| uv | `curl -LsSf https://astral.sh/uv/install.sh \| sh` | Fast Python package manager |
| nlm | `uv tool install notebooklm-mcp-cli` | NotebookLM CLI + MCP server |
| yt-dlp | `uv tool install yt-dlp` | YouTube search |
Verify your installation:

```bash
claude --version
nlm --version
yt-dlp --version
```

Alternative: install with pip (if uv is not available):

```bash
pip install notebooklm-mcp-cli yt-dlp
```

The NotebookLM MCP server enables AI podcast, slides, and report generation. Without it, you'll see `studio_create tool not found` errors.
Option A: Automatic (recommended). Just run Claude Code inside this project directory:

```bash
cd nlm-research
claude
```

The `.claude/settings.json` in this repo auto-configures the MCP connection. No extra setup needed.

Option B: Manual. Add to your global Claude Code config:

```bash
claude mcp add notebooklm-mcp -- notebooklm-mcp
```

NotebookLM needs Google sign-in. Run this once and follow the browser prompt:
```bash
nlm login
```

Then start your first research:

```bash
cd nlm-research   # Navigate to this folder
claude            # Start the AI assistant

# Now type this:
/research run AI agent trends --auto
```

Add `--auto` to run everything automatically without pausing. Remove it if you want to approve each step.
Note: The `/research` command is a Claude Code skill loaded from this project directory. Make sure you run `claude` from inside the `nlm-research` folder.
Choose the scenario that fits your needs.
```bash
/research run 2026 AI market outlook --preset presentation --auto
```

You get a 📄 briefing report + 🎬 slide deck (.pptx) automatically.

```bash
/research run React 19 new features --preset learning --auto
```

You get a 📚 study guide + 🎧 AI podcast (.mp3) + 📝 quiz.

```bash
/research run AI agent 2026 trends --preset trend-report --auto
```

You get a 📈 trend analysis briefing report.

```bash
/research run competitor-X product review --preset competitor --auto
```

You get a 📊 SWOT analysis report.

```bash
/research run LLM architecture comparison --preset deep-dive --auto
```

AI finds additional web sources and creates a 🔬 deep analysis report.

```bash
/research run AI agents --auto
```

Default preset: 📄 briefing report + 💬 Q&A analysis.
Think of it like a research assistant: it finds relevant books (search) → borrows them from the library (collect) → reads and summarizes them (analyze) → prints the report (export), all done by AI.
| Step | What happens | Think of it as... |
|---|---|---|
| 🔍 Search | Finds related YouTube videos for your topic | Finding relevant books at the library |
| 📥 Collect | Adds those videos to NotebookLM | Borrowing the books |
| 🧠 Analyze | AI analyzes everything and creates your outputs | Reading and summarizing |
| 📤 Export | Saves the finished files to your computer | Printing the report |
Everything is automatically saved by topic under `~/research-output/<topic>/`.
```
~/research-output/
├── AI_agent_trends/
│   ├── AI_agent_trends_report.md     # Briefing report
│   ├── AI_agent_trends_analysis.md   # Q&A deep analysis
│   ├── AI_agent_trends_podcast.mp3   # AI podcast audio
│   ├── AI_agent_trends_quiz.json     # Learning quiz
│   └── AI_agent_trends_slides.pptx   # Presentation slides
├── last_session.json                 # Last session reference
└── research_sessions.jsonl           # Session history
```
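Since `research_sessions.jsonl` stores one JSON object per line, the session history is easy to inspect programmatically. A minimal sketch of reading JSON-Lines content (the `topic` field below is hypothetical; the real record schema isn't documented here):

```python
import json

def load_sessions(text: str) -> list[dict]:
    """Parse JSON-Lines content: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Hypothetical sample records -- the actual fields written by the pipeline
# may differ from this sketch.
sample = '{"topic": "AI_agent_trends"}\n{"topic": "React_19_new_features"}\n'
print([s["topic"] for s in load_sessions(sample)])
```

To inspect your own history, read `~/research-output/research_sessions.jsonl` and pass its text to the same function.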
| File | What it is | Which presets create it |
|---|---|---|
| `*_report.md` | A structured briefing document synthesized by AI | All presets |
| `*_analysis.md` | Deep insights from AI Q&A sessions | All presets |
| `*_podcast.mp3` | An audio conversation between two AI hosts | learning |
| `*_quiz.json` | A quiz to test your understanding | learning |
| `*_slides.pptx` | A ready-to-present slide deck | presentation |
| Option | What it does | Default |
|---|---|---|
| `-n <number>` | How many YouTube results to fetch | 10 |
| `-d` | Sort by newest first | off |
| Option | What it does | Default |
|---|---|---|
| `--auto` | Run without asking for confirmation | off |
| `--preset <name>` | Choose a preset (see Use Case Guide above) | default |
| `--top <N>` | How many top videos to actually use | depends on preset |
| `--notebook <id>` | Add to an existing notebook instead of creating a new one | create new |
| `--lang <code>` | Language for generated content (e.g., `en`, `ko`) | `ko` |
`-n` controls how many videos to search for on YouTube; `--top` controls how many of those videos to actually use.

```bash
# Search 15 videos, but only use the top 3
/research run AI -n 15 --top 3 --auto
```

Instead of running everything automatically, you can run each step yourself:
```bash
/research search AI agents       # 1. Search YouTube
/research collect                # 2. Collect selected videos into NotebookLM
/research analyze <notebook-id>  # 3. AI analysis
/research export <notebook-id>   # 4. Export files
/research status                 # Check current progress
```

The system handles problems at three levels:
- 🟢 Most issues fix themselves. Things like expired login sessions are refreshed automatically. Rate limits are waited out and retried.
- 🟡 Partial failures are skipped gracefully. If one video can't be added (maybe it's private or deleted), the system skips it and continues with the rest.
- 🔴 Serious problems stop the process and tell you what to do. For example, if your login is completely broken or no videos were found at all.
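Conceptually, the first level is retry-with-backoff: wait, try again, and only give up after several attempts. A minimal sketch of that pattern (an illustration of the idea, not this project's actual implementation):

```python
import time

def with_retry(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on failure --
    the pattern used to wait out transient errors like rate limits."""
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the user
            time.sleep(base_delay * 2 ** attempt)

# Simulate a call that is rate-limited twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_retry(flaky, base_delay=0.01))  # -> ok
```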
| What you see | What to do |
|---|---|
| `deno install` fails / JSR package not found | The old Deno/JSR installation method has been deprecated. Install via `uv tool install notebooklm-mcp-cli` instead (see Step 1) |
| `studio_create` tool not found / `download_artifact` not available | The NotebookLM MCP server is not connected to Claude Code. Run `claude` inside the `nlm-research` directory (auto-configured), or run `claude mcp add notebooklm-mcp -- notebooklm-mcp` |
| `/research` command not found | Claude Code doesn't see the skill files. Make sure you `cd nlm-research` first, then run `claude`. Skills are loaded from the project directory |
| "NotebookLM authentication required" | Run nlm login in your terminal |
| Login keeps failing | Try nlm login switch default or clear your browser cache |
| A specific video fails to load | It's probably private or deleted β the system skips it automatically |
| Report generation fails | You may have too few or too many sources β check with /research status |
| 0 sources collected | Try different search keywords |
```
nlm-research/
├── CLAUDE.md                         # Project instructions for Claude Code
├── .claude/
│   ├── settings.json                 # MCP server + permissions config
│   ├── rules/
│   │   └── research-pipeline.md      # Pipeline rules
│   ├── skills/research/              # Skill files (recommended)
│   │   ├── SKILL.md                  # Command router
│   │   ├── run.md                    # Full automated flow
│   │   ├── search.md                 # YouTube search
│   │   ├── collect.md                # Source collection
│   │   ├── analyze.md                # AI analysis
│   │   ├── export.md                 # File export
│   │   ├── status.md                 # Session status
│   │   ├── scripts/
│   │   │   └── youtube_search.py     # YouTube search script
│   │   └── references/
│   │       ├── nlm-commands.md       # NotebookLM reference
│   │       └── workflow-examples.md  # Workflow examples
│   └── commands/research/            # Legacy (backward compatible)
├── README.md                         # This file (English)
├── README_KO.md                      # Documentation (Korean)
└── assets/                           # Images used in documentation
```
Contributions are welcome! Here's how:
- Fork this repository
- Create a feature branch (`git checkout -b feat/amazing-feature`)
- Commit your changes (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feat/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License.




