feat: Dockerize GitHub MCP server and optimize build #250
Conversation
📝 Walkthrough
Adds containerization and local-development docs for the GitHub MCP backend: Dockerfile, docker-compose, Python dependency list, README Docker Compose section, and an env.example variable addition. No API or exported/interface changes.
Sequence Diagram(s): Skipped — changes are infrastructure/docs and do not introduce a new multi-component runtime control flow warranting a sequence diagram.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 checks passed.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@backend/github_mcp_server/Dockerfile`:
- Around lines 17-19: The HEALTHCHECK uses curl, but the python:3.10-slim base image does not include it, so install curl before the HEALTHCHECK. Run `apt-get update` and `apt-get install -y --no-install-recommends curl` (and clean up the apt lists) after the base image is set and before the HEALTHCHECK line, so that `CMD curl -f http://localhost:8001/health || exit 1` can succeed (see the Dockerfile sketch just below).
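A minimal sketch of that change, assuming the service listens on port 8001 and omitting the project-specific dependency and copy steps that are not shown in this thread:

```dockerfile
FROM python:3.10-slim

# Install curl so the HEALTHCHECK below can run; the slim base image does not ship it.
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# ... dependency installation, COPY, and CMD steps would go here ...

# Fail the container health check if the /health endpoint stops responding.
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
    CMD curl -f http://localhost:8001/health || exit 1
```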
In `@backend/github_mcp_server/requirements.txt`:
- Line 13: requirements.txt currently lists a bare `torch`, which can pull CUDA-enabled wheels from PyPI. Either pin the CPU-only torch and add the PyTorch CPU index in requirements.txt (e.g., add an `--extra-index-url` pointing to https://download.pytorch.org/whl/cpu and specify `torch==2.3.0+cpu`) so that `pip install -r requirements.txt` installs CPU-only wheels, or remove the separate torch install from the Dockerfile and make requirements.txt the single source of truth for the torch installation.
In `@README.md`:
- Line 22: The README line "**ReAct Reasoning Pattern** - Think → Act → Observe workflow for intelligent decision making" should hyphenate the compound noun; locate that exact string in the README and replace "decision making" with "decision-making".
- Around lines 182-187: Update the README's environment configuration section to document the SUPABASE_URL and SUPABASE_KEY variables (the SUPABASE_* vars passed into the container) and state whether they are required at runtime or optional. Add example entries for SUPABASE_URL and SUPABASE_KEY to the `.env` example, along with a short note that docker-compose injects these into the container if present, or explicitly mark them optional if the app can start without them.
Lines under review in `backend/github_mcp_server/requirements.txt`:
ddgs
sqlalchemy
langchain-community
torch
🧩 Analysis chain
🏁 Scripts executed (Repository: AOSSIE-Org/Devr.AI):
- `find . -name "Dockerfile" -o -name "dockerfile" | head -20` (length of output: 100)
- `git ls-files | grep -i docker` (length of output: 144)
- `cat backend/github_mcp_server/requirements.txt` (length of output: 296)
- `cat -n backend/github_mcp_server/Dockerfile` (length of output: 864)
- Web search: "What is the correct syntax for specifying CPU-only PyTorch in requirements.txt and how should pip index URLs be configured?" (length of output: 155)
🌐 Web query:
CPU-only PyTorch installation pip requirements.txt syntax torch index url
💡 Result:
Use pip’s requirements.txt “global options” lines (--index-url, --extra-index-url) to point torch installs at PyTorch’s CPU wheel index. Pip supports these options in requirements files. [1]
Option A (recommended): keep PyPI as default, add PyTorch CPU as extra index
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.3.0+cpu
torchvision==0.18.1+cpu
torchaudio==2.3.0+cpu
This matches the common CPU-only install approach (--extra-index-url …/cpu). [2][3]
Option B: make PyTorch CPU the primary index, fall back to PyPI for everything else
--index-url https://download.pytorch.org/whl/cpu
--extra-index-url https://pypi.org/simple
torch==2.2.1
# other deps...
(Useful if you want pip to prefer the CPU-only torch builds.) [3][1]
Option C: pin an explicit wheel URL (works even if index settings are awkward)
In requirements files you can also use direct references (e.g., torch @ <wheel-url>). [4][1]
Sources: pip requirements file options [1]; PyTorch “Get Started” CPU pip command pattern [2]; working requirements.txt examples for CPU wheels [3]; PyTorch forum note about using @ direct references [4].
Ensure CPU-only PyTorch is properly configured in requirements.txt, not just in the Dockerfile.
The Dockerfile does install CPU-only torch on line 8, but requirements.txt lists a bare torch on line 13 with no version or index specification. When `pip install -r requirements.txt` runs (line 9), pip will install torch from the default PyPI index, which includes CUDA wheels by default. While the Dockerfile's install order makes this work in the Docker context, the configuration is fragile and causes issues if requirements.txt is used elsewhere (local development, alternative CI).
Move the torch specification into requirements.txt with a proper CPU index configuration, using one of these approaches:
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.3.0+cpu
Or update the Dockerfile to remove the separate torch install and rely on requirements.txt with CPU index settings.
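For illustration, a sketch of what the CPU-only pinning could look like in requirements.txt; the 2.3.0 pin is taken from the suggestion above and is not a verified compatible version for this project:

```text
# requirements.txt (sketch): route torch installs through the PyTorch CPU wheel index
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.3.0+cpu
```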
🤖 Prompt for AI Agents
In `@backend/github_mcp_server/requirements.txt` at line 13, requirements.txt currently lists a bare `torch`, which can pull CUDA-enabled wheels from PyPI. Either pin the CPU-only torch and add the PyTorch CPU index (e.g., add an `--extra-index-url` pointing to https://download.pytorch.org/whl/cpu and specify `torch==2.3.0+cpu`) so that `pip install -r requirements.txt` installs CPU-only wheels, or remove the separate torch install from the Dockerfile and make requirements.txt the single source of truth for the torch installation.
Lines under review in README.md:
### 🧠 LangGraph Agent-Based Intelligence
- **ReAct Reasoning Pattern** - Think → Act → Observe workflow for intelligent decision making
Hyphenate “decision-making.”
Minor grammar fix per style/clarity.
✍️ Proposed fix
-- **ReAct Reasoning Pattern** - Think → Act → Observe workflow for intelligent decision making
+- **ReAct Reasoning Pattern** - Think → Act → Observe workflow for intelligent decision-making
🧰 Tools
🪛 LanguageTool
[grammar] ~22-~22: Use a hyphen to join words.
Context: ...bserve workflow for intelligent decision making - Conversational Memory - Per...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
In `@README.md` at line 22, the line "**ReAct Reasoning Pattern** - Think → Act → Observe workflow for intelligent decision making" should hyphenate the compound noun; locate that exact string in the README and replace "decision making" with "decision-making".
Lines under review in README.md:
1. **Configure Environment**: Ensure your `.env` file in the root directory has the required variables:
```bash
GITHUB_TOKEN=your_token
GITHUB_ORG=your_org
```
Document SUPABASE_* vars if they're required for startup.
docker-compose.yml passes SUPABASE_URL and SUPABASE_KEY into the container. If they’re required at runtime, the setup instructions should include them (or explicitly mark them optional).
📄 Suggested doc addition
GITHUB_TOKEN=your_token
GITHUB_ORG=your_org
+SUPABASE_URL=your_supabase_url
+SUPABASE_KEY=your_supabase_anon_key
🤖 Prompt for AI Agents
In `@README.md` around lines 182-187, update the README's environment configuration section to document the SUPABASE_URL and SUPABASE_KEY variables (the SUPABASE_* vars passed into the container) and state whether they are required at runtime or optional. Add example entries for SUPABASE_URL and SUPABASE_KEY to the `.env` example, along with a short note that docker-compose injects these into the container if present, or explicitly mark them optional if the app can start without them.
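As a hedged illustration of the injection this comment refers to, a hypothetical docker-compose excerpt; the service name and exact file layout are assumptions, not the committed compose file:

```yaml
# Hypothetical excerpt: pass the Supabase variables from the root .env into the container.
services:
  github-mcp:
    environment:
      - SUPABASE_URL=${SUPABASE_URL}
      - SUPABASE_KEY=${SUPABASE_KEY}
```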
closes #195
I have successfully completed issue #195, the Docker setup for the GitHub MCP Server.
As requested, I have focused only on containerizing the MCP service without modifying the existing backend infrastructure configuration.
Implementation Details
Standalone Docker Compose (docker-compose.yml)
I created a root-level docker-compose.yml to orchestrate the github-mcp service.
Why: This keeps the lightweight MCP environment decoupled from the heavier backend infrastructure (Weaviate/RabbitMQ), allowing for faster local testing.
Networking: Maps internal port 8001 to host 5001.
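A minimal sketch of what such a compose service could look like, assuming the Dockerfile lives in backend/github_mcp_server and the root .env is used; this is illustrative, not the exact committed file:

```yaml
# docker-compose.yml (sketch): a standalone service for the GitHub MCP server.
services:
  github-mcp:
    build: ./backend/github_mcp_server
    env_file:
      - .env            # GITHUB_TOKEN, GITHUB_ORG, etc.
    ports:
      - "5001:8001"     # host 5001 -> container 8001
```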
Build Optimization (Crucial Fix)
I modified the GitHub MCP Dockerfile to enforce a CPU-only installation for PyTorch.
Context: The default PyTorch installation was attempting to pull ~2.5GB of Nvidia CUDA binaries, causing hash mismatches and build failures.
Result: The build is now stable and significantly smaller, which is appropriate for this service.
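A sketch of the kind of Dockerfile change described, assuming the standard PyTorch CPU wheel index; the exact line in the PR may differ:

```dockerfile
# Install a CPU-only PyTorch build before the remaining requirements,
# so pip never resolves the large CUDA-enabled wheels from PyPI.
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu
```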
Dependency Fix
I added pydantic-settings to requirements.txt.
Reason: The application failed to start with an ImportError because settings.py relies on BaseSettings, which was missing from the MCP environment.
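For context, BaseSettings lives in the separate pydantic-settings package in pydantic v2, so a settings module like the one described needs roughly this shape; the field names here are illustrative assumptions, not the actual settings.py:

```python
# Hypothetical sketch of a settings module that requires pydantic-settings.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    # Read values from the environment / .env file; names are placeholders.
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    github_token: str = ""
    github_org: str = ""

settings = Settings()
```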
✅ Verification
I have verified the setup on a fresh environment:
Build: `docker-compose up --build` (successful)
Health check: `curl http://localhost:5001/health`
Output: {"status": "healthy", "service": "github-mcp"}
Summary by CodeRabbit
Documentation
Chores