Transform scattered web content into organized, searchable knowledge through natural conversation with your Agent assistant
Compatible Frameworks: Claude Code, OpenClaw, Codex, and other mainstream agent frameworks
If Tapestry helps you, please give the project a Star!
Your Star is not just recognition of the developer's work, but also motivation for continuous improvement. Every Star encourages us to develop more useful features, fix bugs and improve stability, enhance documentation and guides, and support more platforms and languages.
Tapestry is an AI-native skill pack that transforms how you capture, organize, and synthesize web content. Instead of bookmarking links or copy-pasting articles, you get a complete workflow that crawls sources, normalizes content, and builds a structured knowledge base—all through natural conversation with your AI assistant.
- Researchers who need to track discussions across multiple platforms (Zhihu, Reddit, HN, X/Twitter)
- Content curators building organized knowledge repositories from diverse sources
- Developers who want to archive technical discussions and documentation systematically
- Knowledge workers tired of losing valuable insights scattered across bookmarks and tabs
Problem 1: Platform Fragmentation 🌐 Valuable content lives across Zhihu, X, Reddit, Hacker News, Xiaohongshu, Weibo, WeChat Official Accounts, and countless blogs. Each platform has different structures, APIs, and access patterns. Tapestry provides unified crawlers that handle the complexity for you.
Problem 2: Content Decay ⏳ Web content disappears, gets edited, or becomes inaccessible. Tapestry captures content at the moment you care about it and preserves it in your local knowledge base forever.
Problem 3: Knowledge Fragmentation 🧩 Even when you save content, it stays isolated. Tapestry's synthesis skill uses AI to understand your content and organize it into a coherent, navigable knowledge structure.
- 🕷️ Multi-Platform Crawlers: Native support for Zhihu, X/Twitter, Xiaohongshu, Weibo, Hacker News, Reddit, WeChat Official Accounts, and generic HTML pages
- 📦 Three-Layer Architecture: Ingest (capture) → Feed (normalize) → Synthesis (analyze)
- 📖 Book-Like Knowledge Base: Hierarchical organization with topics, chapters, automatic index generation, tags, categories, and rich metadata
- 🔍 Term Extraction: Automatically extracts and explains key terms inline; hover over any term in the viewer to see its definition
- 🎨 Visual Frontend: Browse your knowledge base through a clean, readable web interface with LaTeX rendering, Markdown support, and visual card generation
- 📤 Export: Export any note, feed, or article as Markdown, HTML, or PDF directly from the viewer or via a skill command
- 🛜 RSS Subscriptions: Subscribe to RSS feeds and automatically ingest new content as it arrives
- 🤖 AI-Native Workflow: Designed for mainstream agent frameworks—work through natural language, not CLI commands
- 🔄 Deterministic Pipeline: Reproducible captures with clear separation between facts and interpretation
- 🔧 Automatic Dependency Repair: Intelligently detects and auto-fixes missing dependencies without manual intervention
2026-03-24:
- 📚 Extensive knowledge base feature updates:
- Automatically extracts terms and provides explanations (hover over a term to view)
- Auto-generates article tags and categories; added more metadata fields
- Article pages now support export as Markdown, HTML, and PDF
- Pages recently modified or created now display a `New` badge for easy identification
- Further refined knowledge base styling and layout
2026-03-23:
- 🛜 Added an RSS feed subscription Skill that supports automatic ingestion of RSS updates
- Added a Skill for exporting knowledge base notes, feeds, or articles to local Markdown, HTML, or PDF documents
2026-03-22:
- Added a Release Building workflow and published the first release v0.0.1.
- Added installation commands to the landing page; adjusted some text content and knowledge-base preview screenshots.
2026-03-21:
- Fixed path issues in the Skills script directories.
- Added and documented the Claude plugin marketplace installation flow as well as the `npx skills` installation method.
- Optimized the landing page's layout and styling to make it more polished.
2026-03-20:
- Fixed GitHub Actions errors to ensure all workflows pass correctly.
- 🏠 Added a project homepage and deployed it to GitHub Pages — visit it at https://natsufox.github.io/Tapestry
2026-03-18:
- Optimized knowledge-base frontend layout and Markdown rendering.
- Implemented direct URL navigation within the knowledge base.
- Improved Markdown syntax compliance and formatting standards in the synthesis skill.
- Added LaTeX rendering support in the knowledge-base frontend.
- 🎨 Added visual card generation feature — inspired by beilunyang/visual-note-card-skills.
2026-03-17:
- Added a WeChat Official Account article crawler.
- Implemented Markdown rendering for the knowledge-base frontend.
Ingest a Zhihu answer:
First, open an agent framework (e.g. Claude Code), then call the tapestry skill directly:
/tapestry https://www.zhihu.com/question/12345/answer/67890

Or invoke it implicitly with natural language:

Fetch content from https://www.zhihu.com/question/12345/answer/67890

Your AI assistant will:
- Automatically recognize it's a Zhihu link
- Select the Zhihu crawler
- Capture full content (including comments)
- Save in three formats:
  - `captures/` - Raw JSON
  - `feeds/` - Normalized JSON
  - `notes/` - Markdown notes
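A minimal sketch of what this three-artifact write might look like (the file layout mirrors the directories above, but the naming scheme and fields are assumptions for illustration, not Tapestry's actual schema):

```python
import json
import time
from pathlib import Path

def persist_artifacts(raw: dict, normalized: dict, note_md: str, root: Path) -> str:
    """Write one capture as three sibling artifacts sharing a timestamp
    (hypothetical layout mirroring captures/, feeds/, notes/)."""
    stamp = time.strftime("%Y%m%dT%H%M%S")
    for folder in ("captures", "feeds", "notes"):
        (root / folder).mkdir(parents=True, exist_ok=True)
    (root / "captures" / f"{stamp}.json").write_text(json.dumps(raw, ensure_ascii=False))
    (root / "feeds" / f"{stamp}.json").write_text(json.dumps(normalized, ensure_ascii=False))
    (root / "notes" / f"{stamp}.md").write_text(note_md)
    return stamp
```

Sharing one timestamp across the three layers is what makes a capture traceable from raw JSON to the final note.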
Terminal Demo:
🎬 Real-World Test Demonstration
During this actual Zhihu content fetching test, Tapestry demonstrated powerful capabilities:
- Automatic Dependency Repair: System detected missing package dependencies during connection setup and automatically completed installation and configuration
- Successful Content Retrieval: After dependency repair, successfully completed full Zhihu content capture (including main text and comments)
- Knowledge Base Integration: Captured content was automatically analyzed and integrated into the appropriate topics in the core knowledge base
This entire process is fully automated—users simply issue natural language commands, and the system handles all technical details.
Organize into knowledge base:
/tapestry synthesis

Or with natural language:

Synthesize recently collected content into my knowledge base

Your AI assistant will analyze the content and automatically decide which topic/chapter to place it under.
Browse knowledge base:
/tapestry display
Or with natural language:
Show my knowledge base as a website
Your AI assistant will generate a static frontend and start a local server (usually http://localhost:8766).
Knowledge Base Visualization - Book-like hierarchical structure with topic navigation and chapter browsing
Method 1: Claude Code plugin marketplace
claude plugin marketplace add https://github.com/NatsuFox/Tapestry
claude plugin install tapestry@tapestry-skills

Method 2: Universal npx skills install
Installs the bundle-first tapestry skill pack:
npx skills add NatsuFox/Tapestry --skill tapestry
# Use this line only when you want a user-global install
# npx skills add NatsuFox/Tapestry --skill tapestry -g

All generated artifacts from skill-only installs live inside the installed Tapestry skill directory under _data/:
~/.claude/skills/tapestry/_data/
~/.openclaw/skills/tapestry/_data/
~/.codex/skills/tapestry/_data/
Method 3: Manual GitHub release bundle
- Download `tapestry-skills-vX.Y.Z.zip` or `tapestry-skills-vX.Y.Z.tar.gz` from the GitHub Releases page.
- Extract the archive.
- Copy the bundled `skills/tapestry` directory into your agent's skill directory.
# Claude Code
cp -r tapestry-skills-vX.Y.Z/skills/tapestry ~/.claude/skills/
# OpenClaw
cp -r tapestry-skills-vX.Y.Z/skills/tapestry ~/.openclaw/skills/
# Codex
cp -r tapestry-skills-vX.Y.Z/skills/tapestry ~/.codex/skills/

Method 4: Local checkout (recommended for development and auto-updates)
git clone https://github.com/NatsuFox/Tapestry.git
cd Tapestry
# Stable local copy
cp -r skills/tapestry ~/.claude/skills/
cp -r skills/tapestry ~/.openclaw/skills/
cp -r skills/tapestry ~/.codex/skills/
# Live development symlink
ln -s "$(pwd)/skills/tapestry" ~/.claude/skills/tapestry
ln -s "$(pwd)/skills/tapestry" ~/.openclaw/skills/tapestry
ln -s "$(pwd)/skills/tapestry" ~/.codex/skills/tapestry

Open your agent framework and type:
List available crawlers
If you see the list of supported platforms, installation is successful!
Tapestry provides intelligent dependency installation that automatically detects your environment and installs required packages.
How to Use:
After installing the skill pack, simply type in your agent framework:
Set up the Tapestry project, and install Tapestry dependencies
How It Works:
1. Environment Detection: Automatically identifies your Python environment
   - Virtual environments (venv, virtualenv)
   - Conda environments
   - System Python
   - Package managers (pip, conda, poetry, uv)
2. Dependency Analysis: Scans `pyproject.toml` and identifies:
   - Core dependencies (httpx, pydantic, selectolax, etc.)
   - Optional dependencies (playwright for browser rendering)
   - Development tools (pytest, black, ruff, etc.)
3. Generate Installation Plan: Creates a detailed installation plan
   - Python package installation commands
   - System-level tools (e.g., `playwright install chromium`)
   - Optional components and recommendations
4. User Confirmation: Presents the plan and waits for your approval
5. Execute Installation: Runs approved commands and reports results
Installation Options:
- Install All (Recommended): Core dependencies + browser support + tooling
- Core Only: Only required dependencies, skip optional packages
- Custom Selection: Manually choose which components to install
Example Output:
Environment: Python 3.11.5 in conda environment 'myenv'
Package Manager: conda (with pip fallback)
Installation Steps:
1. Install core dependencies:
pip install -e .
2. Install browser support (recommended for JavaScript-heavy sites):
pip install -e .[browser]
playwright install chromium
3. [Optional] Install development tools:
pip install -e .[dev]
Important Notes:
- If using system Python, you'll receive a warning and recommendation to create a virtual environment
- All installation operations require your explicit approval
- After installation, automatic verification ensures all packages import correctly
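The post-install verification mentioned above can be sketched as a simple import check (an illustration only; the package list and reporting format here are assumptions, not Tapestry's actual implementation):

```python
import importlib

def verify_imports(packages):
    """Try to import each required package and return the names that fail,
    so a post-install check can report exactly what is still missing."""
    failures = []
    for name in packages:
        try:
            importlib.import_module(name)
        except ImportError:
            failures.append(name)
    return failures
```

An empty return value means every listed dependency imports cleanly.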
Manual Installation (Alternative):
If you prefer manual control, run the install from the installed tapestry skill directory:
# Example: Claude Code skill install
cd ~/.claude/skills/tapestry
# Install core dependencies
pip install -e .
# Install browser support (optional, for JavaScript rendering)
pip install -e .[browser]
playwright install chromium
# Install development tools (optional)
pip install -e .[dev]

Scenario 1: Track Technical Discussions
Collect Hacker News discussions on a topic:
Ingest these Hacker News discussions:
https://news.ycombinator.com/item?id=123
https://news.ycombinator.com/item?id=456
Text analysis and synthesis (powered by your Agent's backbone model):
Synthesize these discussions and identify common viewpoints
Integrate results into the knowledge base:
Organize these viewpoints under the "Technical Discussions" topic in my knowledge base
Scenario 2: Archive Research Materials
Collect source material:
Ingest all highly-voted answers under this Zhihu question:
https://www.zhihu.com/question/12345
Manually specify a knowledge base topic to create:
Create a new topic in the knowledge base: Machine Learning Basics
Integrate collected content under the topic:
Organize these answers under the new topic
Scenario 3: Content Curation
Collect all notes from a Xiaohongshu user:
Ingest all notes from this Xiaohongshu user:
https://www.xiaohongshu.com/user/profile/xxx
Analyze user content to extract main interests and themes:
Generate a content summary for this user
Organize user content under a profile topic in the knowledge base:
Organize this content under the "Profiles" topic in my knowledge base, archived under a sub-chapter for user xxx
Below are some more detailed configuration options and features.
Important: Frequent merging into the knowledge base can lead to high overhead, especially if you perform a merge after every single ingest. Tapestry provides flexible merge strategies to balance real-time updates with performance.
Configuration file location: skills/tapestry/config/tapestry.config.json
{
"synthesis": {
"mode": "auto",
"kb_template": "default"
}
}

1. Auto Mode (Intelligent Automatic)

"mode": "auto"

- Behavior: Agent automatically assesses the current accumulation of notes and decides whether to proceed with a merge
- Advantages: Automated decision-making based on load, avoids unnecessary merge overhead
- Use Cases:
- Daily usage, balancing real-time updates with performance
- Uncertain when merging is most appropriate
- Want AI to intelligently manage knowledge base updates
How it works:
- Agent evaluates the quantity and quality of unmerged notes
- Considers content relevance and importance
- Decides whether to merge immediately, delay, or batch merge
- Avoids forced merge after every single ingest
2. Manual Mode (Manual Control)
"mode": "manual"- Behavior: Synthesis only runs when explicitly invoked
- Advantages: Complete control over merge timing, zero automatic overhead
- Use Cases:
- Batch capture content, organize later
- Need to review notes before deciding to merge
- Performance-critical scenarios
Workflow Example:
# Quickly capture multiple URLs
"Ingest this Zhihu answer: https://..."
"Ingest this HN discussion: https://..."
"Ingest this article: https://..."
# Later, selectively merge
"Synthesize the first answer into the knowledge base"
"Synthesize the HN discussion under technical discussions topic"3. Batch Mode (Batch Processing)
"mode": "batch"- Behavior: After ingesting multiple URLs, merge all content in one pass
- Advantages: Minimizes merge count, suitable for large-scale content collection
- Use Cases:
- Bulk import historical content
- Periodic organization of large amounts of material
- Need unified analysis of multiple sources
Workflow Example:
# Batch ingest
"Ingest these URLs:
https://example.com/1
https://example.com/2
https://example.com/3"
# Automatically triggers batch merge
# Agent analyzes all content and organizes into knowledge base

If you need to force knowledge base updates after every ingest:
{
"synthesis": {
"mode": "deterministic",
"kb_template": "default"
}
}

- Behavior: Immediately executes knowledge base merge after each ingest
- Advantages: Knowledge base always stays up-to-date
- Disadvantages: High overhead, frequent merging may impact performance
- Use Cases:
- Real-time knowledge base update requirements
- Low ingest frequency (few times per day)
- Performance is not a primary concern
Merge Overhead Sources:
- Reading and analyzing existing knowledge base structure
- Semantic matching and topic decision-making
- Updating multiple `index.md` files
- Maintaining navigation and cross-references
Recommended Strategies:
- Daily use: `auto` mode (recommended)
- Bulk import: `batch` mode
- Fine control: `manual` mode
- Real-time updates: `deterministic` mode (use cautiously)
Optimization Tips:
- Avoid merging individually after ingesting large amounts of content in a short time
- Use `batch` or `auto` mode to let the Agent optimize merge timing
- Update the knowledge base at regular intervals rather than continuously
- Consider batch processing historical content during off-hours
# Edit configuration directly
vim skills/tapestry/config/tapestry.config.json
# Or let Agent help you modify
"Change merge mode to manual"
"Enable auto mode intelligent merging"Configuration takes effect immediately, no restart required.
flowchart LR
A[URL Input] --> B[Ingest<br/>tapestry-ingest]
B --> C[Feed<br/>tapestry-feed]
C --> D[Synthesis<br/>tapestry-synthesis]
D --> E[Display<br/>tapestry-display]
E --> F[Knowledge Base<br/>Website]
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
style B fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style C fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style D fill:#e8f5e9,stroke:#388e3c,stroke-width:3px,color:#000
style E fill:#fce4ec,stroke:#c2185b,stroke-width:3px,color:#000
style F fill:#e1f5ff,stroke:#0288d1,stroke-width:3px,color:#000
Tapestry is not a traditional Python library—it's a skill pack designed specifically for the workflow model of AI agent frameworks.
flowchart LR
A[AI Agent<br/>Conversation Layer] --> B1[INGEST]
A --> B2[FEED]
A --> B3[SYNTHESIS]
B1 & B2 & B3 --> C[Shared Deterministic<br/>Logic Layer]
C --> D[Platform<br/>Crawler Layer]
D --> E[Data Persistence<br/>Layer]
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
style B1 fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style B2 fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style B3 fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style C fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style D fill:#e8f5e9,stroke:#388e3c,stroke-width:3px,color:#000
style E fill:#fce4ec,stroke:#c2185b,stroke-width:3px,color:#000
1. Skill Workflow Layer (SKILL.md files)
- Defines high-level workflow logic in natural language
- Describes trigger conditions, execution steps, and output expectations
- Remains human-readable for easy understanding and maintenance
- Automatically invoked through Agent framework intent recognition
2. Shared Deterministic Logic Layer (_src/)
- Provides reusable, testable core functionality
- Handles HTTP requests, HTML parsing, and data normalization
- Implements the crawler registry and URL routing mechanism
- Ensures deterministic and reproducible data processing
3. Platform Crawler Implementation Layer (_src/crawlers/)
- One independent module per platform
- Handles platform-specific APIs, DOM structures, and authentication
- Unified interface: `CrawlerDefinition` + `CrawlHandler`
- Hot-pluggable, easy to extend with new platforms
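The `CrawlerDefinition` + `CrawlHandler` pattern and the registry's URL routing might look roughly like this (hypothetical shapes for illustration; the real interfaces in `_src/` may differ):

```python
from dataclasses import dataclass
from typing import Callable
from urllib.parse import urlparse

CrawlHandler = Callable[[str], dict]  # url -> crawl product

@dataclass(frozen=True)
class CrawlerDefinition:
    name: str
    domains: tuple          # host suffixes this crawler claims
    handler: CrawlHandler

_REGISTRY = []

def register(defn: CrawlerDefinition) -> None:
    """Hot-pluggable registration: adding a platform is just adding a definition."""
    _REGISTRY.append(defn)

def route(url: str) -> CrawlerDefinition:
    """Match the URL's host against registered crawlers; fall back to generic HTML."""
    host = urlparse(url).netloc
    for defn in _REGISTRY:
        if any(host == d or host.endswith("." + d) for d in defn.domains):
            return defn
    return CrawlerDefinition("generic-html", (), lambda u: {"url": u})
```

The generic-HTML fallback is what lets any URL flow through the pipeline even when no platform-specific crawler claims it.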
4. Data Persistence Layer
- Three artifact types:
- Capture: Raw crawled data (JSON)
- Feed: Normalized feed (JSON)
- Note: Human-readable notes (Markdown)
- Knowledge base uses a book-like hierarchical structure
- All artifacts are timestamped for version traceability
URL Input
│
├─→ Router (domain resolution)
│
├─→ Registry (crawler matching)
│
├─→ Crawler (platform capture)
│ │
│ ├─→ Fetcher (HTTP requests)
│ ├─→ Parser (content parsing)
│ └─→ Generates CrawlerProduct
│
├─→ Store (persistence)
│ │
│ ├─→ captures/{timestamp}.json
│ ├─→ feeds/{timestamp}.json
│ └─→ notes/{timestamp}.md
│
└─→ Handoff (pass to downstream skills)
│
├─→ Feed Skill (optional formatting)
├─→ Synthesis Skill (AI analysis)
└─→ Display Skill (visualization)
Adding a New Crawler
- Create a module in `_src/crawlers/new_platform/`
- Implement `CrawlerDefinition` and `CrawlHandler`
- Register in `registry.py`
- Add a corresponding Feed spec to `feed/_specs/`
Adding a New Skill
- Create a `SKILL.md` to define the workflow
- Add execution scripts in `_scripts/`
- Reuse shared logic from `_src/`
- Update documentation and tests
This architecture ensures:
- ✅ Separation of Concerns: Workflow, logic, and implementation are distinct
- ✅ Testability: Deterministic logic layer is fully unit-testable
- ✅ Extensibility: New platforms and skills are easy to add
- ✅ Maintainability: Natural language workflows + clear code structure
| Platform | Coverage | Notes |
|---|---|---|
| 🇨🇳 Zhihu | Questions, Answers, Articles, Profiles | Reverse-engineered API |
| 🐦 X/Twitter | Posts, Threads | Public pages only |
| 📱 Xiaohongshu | Notes, Profiles | Public content |
| Weibo | Posts | Public posts |
| 🔶 Hacker News | Discussions | Full comment trees |
| Reddit | Threads | Public threads |
| 🇨🇳 WeChat Official Accounts | Articles | Public articles |
| 🌐 Generic HTML | Any webpage | Fallback crawler |
Tapestry organizes content into a book-like hierarchy:
knowledge-base/
├── index.md # Root navigation
├── topic-1/
│ ├── index.md # Topic overview
│ ├── chapter-1/
│ │ ├── index.md # Chapter content
│ │ └── artifacts/ # Supporting files
│ └── chapter-2/
└── topic-2/
The synthesis skill automatically:
- Decides where content belongs based on semantic fit
- Creates new topics/chapters when needed
- Updates all parent `index.md` files for navigation
- Maintains governance rules for consistency
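Keeping parent `index.md` files in sync can be sketched as a directory walk over the book-like tree (illustrative only; Tapestry's synthesis skill applies richer governance rules than this):

```python
from pathlib import Path

def rebuild_indexes(kb_root: Path) -> int:
    """Walk the book-like hierarchy and rewrite each directory's index.md
    with links to its child topics/chapters. Returns the count of files written."""
    directories = [kb_root, *sorted(p for p in kb_root.rglob("*") if p.is_dir())]
    written = 0
    for directory in directories:
        children = sorted(c for c in directory.iterdir() if c.is_dir())
        lines = [f"# {directory.name}", ""]
        lines += [f"- [{c.name}]({c.name}/index.md)" for c in children]
        (directory / "index.md").write_text("\n".join(lines) + "\n")
        written += 1
    return written
```

Because every directory regenerates its own index, moving a chapter only requires re-running the walk rather than hand-editing navigation links.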
Generate a browsable website from your knowledge base:
# Your AI assistant will run this for you when you say:
# "Show me my knowledge base as a website"
python skills/tapestry/display/_scripts/publish_viewer.py
python -m http.server 8766 --directory knowledge-base/_viewer

Visit http://localhost:8766 to browse your organized content with:
- Proper topic/chapter navigation and book-like hierarchy
- Markdown and LaTeX rendering
- Inline term definitions (hover over highlighted terms)
- Article tags, categories, and metadata
- `New` badge on recently added or updated pages
- One-click export of any article as Markdown, HTML, or PDF
Validation lives alongside the code:
cd skills/tapestry/_tests
pytest

Tests cover the shared _src support code and registry behavior.
Tapestry is a skill pack for agent frameworks that crawls web content from multiple platforms and organizes it into a structured knowledge base. It's not a traditional library or tool, but an AI-native workflow that works through natural language conversation.
No. You simply talk to your AI assistant naturally to use Tapestry. Most agent frameworks support both explicit skill invocation and implicit natural language commands.
Tapestry respects platform rate limits and robots.txt. For public content, the risk is low. However:
- Don't crawl too frequently
- Follow platform Terms of Service
- Only crawl publicly accessible content
All data is stored on your local filesystem. Tapestry does not send data to any external servers (except the original platforms being crawled).
Simply back up the entire project directory, especially the captures/, feeds/, notes/, and knowledge-base/ directories.
See the Contributing section below. Basic steps:
- Create a new module in `_src/crawlers/`
- Implement `CrawlerDefinition` and `CrawlHandler`
- Register in `registry.py`
- Add a Feed spec to `feed/_specs/`
- Write tests
- Check Issues for similar problems
- Create a new Bug Report
- Join Discussions to ask questions
The public repository keeps its durable reference surface in this README and the root-level project files. The docs/ directory is now treated as a local workspace and is no longer part of the remote repo.
Core sections in this README
- Installation - setup paths and verification steps
- Configuration and Merge Frequency - configuration shape, merge modes, and tradeoffs
- Workflow Overview - everyday usage flow
- Architecture Design - layered responsibilities, data flow, and extension seams
- Supported Sources - currently supported platforms
- Frequently Asked Questions - troubleshooting and operational boundaries
Root-level project files
- Contributing Guide - How to contribute to Tapestry
- Changelog - Version history and updates
- Roadmap - Future plans and features
We welcome all forms of contributions! Whether it's new features, bug fixes, documentation improvements, or usage feedback—everything helps make Tapestry better.
1. Add New Platform Crawlers 🕷️
- Create a new platform module under `_src/crawlers/`
- Implement the `CrawlerDefinition` and `CrawlHandler` interfaces
- Register the crawler in `registry.py`
- Add the corresponding Feed spec to `feed/_specs/`
- Write unit tests to validate crawler behavior
2. Improve Feed Specifications 📝
- Refine the platform-specific formatting rules in `feed/_specs/`
- Ensure specs accurately reflect platform characteristics
- Maintain consistency with `_shared-standard.md`
3. Enhance the Visual Frontend 🎨
- Improve the viewer interface in `display/_ui/`
- Optimize navigation UX and content presentation
- Ensure responsive design and accessibility
4. Refine Knowledge Base Governance 📚
- Optimize the organization rules in `_kb_rules/`
- Improve topic classification and chapter-placement logic
- Increase knowledge base maintainability
5. Documentation and Examples 📖
- Add use cases and best practices
- Expand the FAQ
- Provide examples for additional platforms
Before submitting a PR, please ensure:
- Code Quality
  - Follow the project's existing code style
  - Add necessary type annotations and docstrings
  - Ensure code passes all tests
- Test Coverage (run `cd skills/tapestry/_tests && pytest`)
  - Add unit tests for new functionality
  - Ensure all existing tests pass
  - Cover critical paths and edge cases
- Commit Messages
  - Format: `<type>(<scope>): <subject>`
  - Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
  - Examples: `feat(crawlers): add Bilibili video crawler`, `fix(zhihu): handle deleted answers gracefully`, `docs(readme): update installation instructions`
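The commit format above can be checked mechanically, for example with a small regex (a sketch, not an official project hook; the scope is treated as optional here):

```python
import re

# Matches <type>(<scope>): <subject>, with the scope part optional.
COMMIT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)(\([\w-]+\))?: .+$"
)

def is_valid_commit(subject: str) -> bool:
    """Return True if the commit subject line follows the documented format."""
    return COMMIT_RE.match(subject) is not None
```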
- PR Description
- Clearly state the motivation and goal of the change
- List the main changes
- Reference related issue numbers (if any)
- Include test steps or screenshots where applicable
## Change Type
- [ ] New feature
- [ ] Bug fix
- [ ] Documentation update
- [ ] Performance improvement
- [ ] Code refactor
## Description
<!-- Briefly describe what this PR does -->
## Motivation and Context
<!-- Why is this change needed? What problem does it solve? -->
## Testing
<!-- How was this change verified? Provide test steps -->
## Related Issues
<!-- Related issue numbers, e.g. #123 -->
## Checklist
- [ ] Code follows the project style guide
- [ ] Tests have been added
- [ ] All tests pass
- [ ] Documentation has been updated
- [ ] Commit messages are clear and descriptive

# Clone the repository
git clone https://github.com/NatsuFox/Tapestry.git
cd Tapestry
# Run tests
cd skills/tapestry/_tests
pytest -v
# Install to your agent framework for testing (symlink for live development)
ln -s "$(pwd)/skills/tapestry" ~/.claude/skills/tapestry

- Respect all contributors
- Keep discussions constructive
- Accept constructive criticism
- Focus on what is best for the project
- Show empathy toward community members
- Browse Issues for tasks to contribute to
- Issues labeled `good first issue` are great for new contributors
- Issues labeled `help wanted` need community assistance
- Have questions? Open an issue or start a discussion
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ for Agent Frameworks
Transform scattered web content into organized knowledge
