
Conversation


@KumarADITHYA123 commented Jan 28, 2026

This Pull Request resolves the critical scalability flaw identified in Issue #254. By replacing the existing in-memory session storage with a Redis-backed solution, the application can now support horizontal scaling across multiple worker processes or containers without losing authentication data.

Technical Implementation Steps

1. Infrastructure Configuration: I added a new Redis service to the docker-compose configuration. The service uses the lightweight Alpine image to minimize footprint. I also updated the backend requirements to include the official Redis Python client (version 5.0.0 or higher).
2. Core Connection Factory: I implemented a new module at backend/app/core/redis.py. This centralized factory manages the Redis connection pool, ensuring efficient resource usage and providing a consistent interface for the rest of the application.
3. Session Store Refactoring: I refactored the verification service (backend/app/services/auth/verification.py), removing the global in-memory dictionary in favor of a VerificationSessionStore class that interacts with Redis. The class serializes complex data types (such as datetime objects) to JSON before storage.
4. Automatic Expiry: In place of the manual cleanup loop that previously ran, the new implementation leverages Redis's native Time-To-Live (TTL) feature. Verification sessions are now automatically expired and removed by the Redis engine after 5 minutes, significantly simplifying the application logic.
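Since Redis stores strings, the JSON serialization described above reduces to encoding datetime values as ISO-8601 text on write and parsing them back on read. A rough, standalone sketch (function names are illustrative, not the exact ones in verification.py; the devr:auth:verification key prefix matches the sequence diagram later in the thread):

```python
import json
from datetime import datetime

SESSION_TTL_SECONDS = 300  # 5-minute expiry, enforced by Redis via SET ... EX / TTL

def make_key(session_id: str) -> str:
    # Illustrative key prefix, as shown in the walkthrough's sequence diagram
    return f"devr:auth:verification:{session_id}"

def serialize_session(discord_id: str, expiry_time: datetime) -> str:
    # datetime is not JSON-serializable, so store it as an ISO-8601 string
    return json.dumps({
        "discord_id": discord_id,
        "expiry_time": expiry_time.isoformat(),
    })

def deserialize_session(raw: str) -> dict:
    # Reverse the round trip: parse JSON, then revive the datetime field
    data = json.loads(raw)
    data["expiry_time"] = datetime.fromisoformat(data["expiry_time"])
    return data
```

The payload produced by serialize_session would be passed to the Redis SET command with an EX of SESSION_TTL_SECONDS, so the engine deletes the key on its own.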
Verification Performed

I have performed a full rebuild of the container environment to verify the new infrastructure.

- Confirmed that the Redis container (devr-redis) starts correctly and listens on port 6379.
- Confirmed that the backend service (github-mcp) successfully connects to Redis on startup.
- Verified that the new code logic correctly handles data serialization and does not introduce runtime errors during import.
Deployment Instructions

Because this change introduces a new infrastructure component, all developers and deployment pipelines must run a build command to pull the new Redis image and install the new dependencies.


coderabbitai bot commented Jan 28, 2026

📝 Walkthrough


Added Redis integration and a Redis-backed verification session store with an async Redis client singleton; introduced a GitHub MCP server Dockerfile, service requirements, and docker-compose including a Redis service; expanded README and env.example with Docker/REDIS instructions; added redis_url setting.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Docs / README<br>`README.md` | Added Docker Compose setup, development workflow, Quick Start, Troubleshooting, and wording updates; verify accuracy of commands and env var names. |
| Docker / Compose<br>`backend/github_mcp_server/Dockerfile`, `docker-compose.yml` | New Dockerfile for github_mcp_server and docker-compose with github-mcp and redis services; review build steps, ports, healthchecks, and mounted volumes. |
| Backend deps<br>`backend/github_mcp_server/requirements.txt`, `backend/requirements.txt` | Added service dependencies and redis>=5.0.0; verify compatibility and dependency resolution. |
| Config / Env<br>`backend/app/core/config/settings.py`, `env.example` | Added redis_url setting and REDIS_URL env var entry; ensure defaults align with docker-compose and local development notes. |
| Redis client<br>`backend/app/core/redis.py` | New async, thread-safe Redis client singleton with init/close APIs and logging; review concurrency, lifecycle, encoding, and error handling. |
| Auth verification<br>`backend/app/services/auth/verification.py` | Replaced in-memory sessions with VerificationSessionStore backed by Redis (save/get/delete) and TTL handling; review JSON serialization, expiry semantics, and deletion points to avoid race conditions. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Backend as Backend Service
    participant Redis
    participant Discord as Discord API

    Client->>Backend: POST /create_verification_session
    Backend->>Redis: SET devr:auth:verification:{id} {discord_id, expiry_time} EX 300
    Redis-->>Backend: OK
    Backend-->>Client: 200 (session info)

    Client->>Backend: POST /verify?session_id={id}
    Backend->>Redis: GET devr:auth:verification:{id}
    Redis-->>Backend: {discord_id, expiry_time}
    Backend->>Discord: verify user identity (discord_id)
    Discord-->>Backend: verification result
    Backend->>Redis: DEL devr:auth:verification:{id} (on success)
    Backend-->>Client: 200 (verified)
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • chandansgowda
  • smokeyScraper

Poem

"I hopped through files with eager paws,
I tucked session keys in cozy claws. 🐰
Redis hums and containers sing,
healthchecks tick — the servers spring.
A little rabbit cheers: deploy with paws and glee!"

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 66.67%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately reflects the main change: replacing in-memory session storage with Redis-backed storage. It is concise, specific, and clearly communicates the primary modification. |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |



coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@backend/app/core/redis.py`:
- Line 14: logger.info(f"Initializing Redis client connecting to {settings.redis_url}") exposes credentials whenever settings.redis_url contains them. Derive a safe string from settings.redis_url with URL-parsing utilities before logging: log only the host (and optionally the port), or mask the userinfo (redacting the password) instead of passing the raw URL to logger.info.
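One way to build the safe log string this comment asks for, sketched with the standard library's urlsplit (the helper name is illustrative, not the code in redis.py):

```python
from urllib.parse import urlsplit

def redact_redis_url(url: str) -> str:
    # Mask the userinfo (user:password) so credentials never reach the logs
    parts = urlsplit(url)
    host = parts.hostname or ""
    port = f":{parts.port}" if parts.port else ""
    masked = "****:****@" if parts.username or parts.password else ""
    return f"{parts.scheme}://{masked}{host}{port}{parts.path}"
```

The redacted string can then be passed to logger.info in place of settings.redis_url.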

In `@backend/app/services/auth/verification.py`:
- Around lines 126-128: store.delete(session_id) runs before the Supabase update that marks the user as verified, so a failed update permanently removes the session. Either move store.delete(session_id) so it runs only after the Supabase update succeeds, or use a two-phase approach: mark the session "in-progress" instead of deleting it immediately, delete it only once the Supabase update succeeds, and clear or restore it on failure so a network or database error still allows retry or rollback.

In `@backend/github_mcp_server/Dockerfile`:
- Around lines 5-13: pip install -r requirements.txt runs before the repository is copied into the image, so the file is missing at build time. Copy requirements.txt into the image before the install step (add COPY requirements.txt . or move COPY . . earlier) so the RUN block executing "pip install --no-cache-dir -r requirements.txt" can find the file; continue to clean the apt lists and keep the image layering optimized.

In `@docker-compose.yml`:
- Around lines 24-25: The Redis service maps ports "6379:6379", which exposes Redis to the host. Either bind to localhost by changing the mapping to "127.0.0.1:6379:6379", or remove the ports mapping entirely if only inter-container access is required (containers can reach Redis via its service name over the internal network).
🧹 Nitpick comments (11)
env.example (1)

8-8: Consider adding REDIS_URL to the environment example.

The PR introduces redis_url in settings.py with a default value of redis://redis:6379/0. For developers running outside Docker or needing a custom Redis configuration, documenting this variable would be helpful.

 GITHUB_ORG=your_github_org_here
+
+# Redis (optional - uses default if not set)
+REDIS_URL=redis://localhost:6379/0
backend/app/core/config/settings.py (1)

39-41: Default Redis URL assumes Docker network.

The default redis://redis:6379/0 uses the Docker service hostname redis, which won't resolve for local development without Docker. While pydantic-settings will automatically read REDIS_URL from the environment, consider adding a comment to clarify this for developers:

     # Redis configuration
+    # Override with REDIS_URL env var for local dev: redis://localhost:6379/0
     redis_url: str = "redis://redis:6379/0"
README.md (2)

182-189: Consider documenting REDIS_URL for the Docker setup.

The environment variables section should include REDIS_URL since the PR introduces Redis-backed session storage. While a default is provided in settings, users may need to customize it:

     GITHUB_TOKEN=your_token
     GITHUB_ORG=your_org
     SUPABASE_URL=your_supabase_url
     SUPABASE_KEY=your_supabase_key
+    # Optional: defaults to redis://redis:6379/0 in Docker
+    REDIS_URL=redis://redis:6379/0

22-22: Minor grammar nit: use hyphen in compound modifier.

"decision making" should be "decision-making" when used as a compound adjective.

-- **ReAct Reasoning Pattern** - Think → Act → Observe workflow for intelligent decision making
+- **ReAct Reasoning Pattern** - Think → Act → Observe workflow for intelligent decision-making
backend/github_mcp_server/requirements.txt (1)

1-16: Pin langchain-core to match langchain version requirements.

langchain==0.3.26 requires langchain-core>=0.3.66. Since langchain-core is currently unpinned, this could lead to version conflicts or unexpected behavior. Pinning it ensures compatibility:

Suggested change
-langchain-core
+langchain-core==0.3.66

Additionally, consider pinning all dependencies for build reproducibility—many packages like fastapi, uvicorn, requests, pydantic, and others lack explicit versions and could introduce breaking changes.

docker-compose.yml (1)

21-26: Consider adding Redis data persistence and healthcheck.

The Redis service lacks:

  1. Data persistence: No volume mount means session data is lost on container restart. For short-lived verification sessions (5-min TTL), this may be acceptable, but worth documenting.
  2. Healthcheck: depends_on alone doesn't wait for Redis to be ready. The github-mcp service may fail to connect on startup.
  3. Authentication: Redis is exposed without a password, which is a security risk if deployed beyond local development.
♻️ Suggested improvements
   redis:
     image: redis:alpine
     container_name: devr-redis
     ports:
       - "6379:6379"
     restart: unless-stopped
+    healthcheck:
+      test: ["CMD", "redis-cli", "ping"]
+      interval: 5s
+      timeout: 3s
+      retries: 5
+    volumes:
+      - redis_data:/data
+
+volumes:
+  redis_data:

Then update the dependency:

     depends_on:
-      - redis
+      redis:
+        condition: service_healthy
backend/app/core/redis.py (1)

11-20: Singleton initialization is not thread-safe.

The check if cls._instance is None followed by assignment is subject to race conditions in concurrent contexts. Multiple coroutines or threads could simultaneously pass the check and create multiple client instances.

Consider using a lock or leveraging Python's module-level initialization:

♻️ Thread-safe singleton using a lock
+import asyncio
+
 class RedisClient:
     _instance: Optional[redis.Redis] = None
+    _lock: asyncio.Lock = asyncio.Lock()
 
    @classmethod
-    def get_client(cls) -> redis.Redis:
+    async def get_client(cls) -> redis.Redis:
-        if cls._instance is None:
-            logger.info(f"Initializing Redis client connecting to {settings.redis_url}")
-            cls._instance = redis.from_url(
-                settings.redis_url,
-                encoding="utf-8",
-                decode_responses=True
-            )
+        if cls._instance is None:
+            async with cls._lock:
+                if cls._instance is None:
+                    logger.info("Initializing Redis client")
+                    cls._instance = redis.from_url(
+                        settings.redis_url,
+                        encoding="utf-8",
+                        decode_responses=True
+                    )
         return cls._instance

Note: This would require updating callers to await get_client().

backend/app/services/auth/verification.py (4)

49-53: Use logging.exception to capture the traceback.

When handling exceptions, logging.exception automatically includes the traceback, which aids debugging.

♻️ Suggested fix
         except json.JSONDecodeError:
-            logger.error(f"Failed to decode session data for {session_id}")
+            logger.exception(f"Failed to decode session data for {session_id}")
             return None

202-206: Redundant expiry check creates a race condition.

Since Redis TTL handles expiry automatically, this manual check is redundant. Additionally, there's a small window where store.get() succeeds (key exists in Redis) but expiry_time has already passed, causing inconsistent behavior.

Either trust Redis TTL entirely or synchronize the TTL with the stored expiry_time. For simplicity, you could remove this check:

♻️ Simplified version trusting Redis TTL
     discord_id = session_data["discord_id"]
     expiry_time_str = session_data["expiry_time"]
     expiry_time = datetime.fromisoformat(expiry_time_str)
 
-    # Calculate remaining time
-    now = datetime.now()
-    if now > expiry_time:
-        await store.delete(session_id)
-        return None
+    now = datetime.now()
 
     return {
         "discord_id": discord_id,
         "expiry_time": expiry_time_str,
         "time_remaining": str(expiry_time - now)
     }

Note: time_remaining could become negative in edge cases; consider using max(timedelta(0), expiry_time - now) if that's a concern.
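The clamping suggested in that note is a one-liner with max(); a standalone sketch (helper name is illustrative):

```python
from datetime import datetime, timedelta

def clamped_time_remaining(expiry_time: datetime, now: datetime) -> timedelta:
    # Never report a negative remaining time, even if the Redis key
    # briefly outlives its logical expiry timestamp
    return max(timedelta(0), expiry_time - now)
```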


14-22: VerificationSessionStore instantiated repeatedly per request.

Each function creates a new VerificationSessionStore() instance. While the underlying Redis client is a singleton, consider making the store a module-level singleton or passing it as a dependency for consistency and testability.


107-109: Remove or implement the commented-out expiry check.

Commented code suggests uncertainty about the expiry validation strategy. If Redis TTL is the source of truth, remove this comment. If double-checking is needed, implement it properly.

♻️ Suggested cleanup
         discord_id = session_data["discord_id"]
-        # We rely on Redis TTL for expiry, but can double check if needed
-        # expiry_time = datetime.fromisoformat(session_data["expiry_time"])


coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@backend/app/core/redis.py`:
- Around lines 19-26: The URL-masking code assigns an unused auth variable and catches all exceptions. Split with schema, rest = log_url.split("://", 1), then use rest.rpartition("@") (e.g., userinfo, sep, host = rest.rpartition("@")) to extract the host, and build log_url = f"{schema}://****:****@{host}" (or leave the URL unmasked when sep is empty). Replace the broad except Exception with a specific ValueError handler and/or an explicit fallback assignment so only split errors are caught; this resolves the Ruff warnings while preserving the masking behavior.
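The split/rpartition approach described above can be sketched as a standalone function (an illustration under those assumptions, not the exact code in redis.py):

```python
def mask_redis_url(url: str) -> str:
    # Split the scheme from the rest; unpacking raises ValueError
    # when the string contains no "://" at all
    try:
        scheme, rest = url.split("://", 1)
    except ValueError:
        return "<invalid redis url>"
    # rpartition finds the last "@"; sep is empty when there is no userinfo
    _, sep, host = rest.rpartition("@")
    if not sep:
        return url  # no credentials present, nothing to mask
    return f"{scheme}://****:****@{host}"
```

Because only the unpacking step is wrapped, no other failure mode is silently swallowed, which is what the Ruff-related remark is after.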

In `@backend/app/services/auth/verification.py`:
- Around lines 41-53: In get(self, session_id), the json.JSONDecodeError handler should log the full traceback and purge the corrupt payload: call logger.exception with a message that includes session_id, then await self.redis.delete(key) to remove the bad key, and finally return None. This both reports the failure and prevents repeated decode errors on the same key.
- Around lines 67-74: The get_redis_client() call and the VerificationSessionStore(redis_client) construction sit outside the try/except, so connection failures escape the verification flow. Move both inside the existing try block in the session-creation function so their exceptions are caught and logged by the existing handler; keep the token/session_id/expiry_time generation as is and have the subsequent code use the locally initialized redis_client and store variables.

In `@docker-compose.yml`:
- Around lines 11-16: The environment block lists GITHUB_TOKEN, GITHUB_ORG, SUPABASE_URL, and SUPABASE_KEY but omits REDIS_URL. Add REDIS_URL=${REDIS_URL} to that same environment section so the github-mcp container picks up any override from .env.
🧹 Nitpick comments (3)
env.example (1)

22-25: Separate Redis from the RabbitMQ header for clarity.
Right now REDIS_URL appears under the RabbitMQ section, which can be confusing for readers. Consider adding a Redis header.

♻️ Suggested doc tweak
-# RabbitMQ (optional - uses default if not set)
-RABBITMQ_URL=amqp://localhost:5672/
-REDIS_URL=redis://localhost:6379/0
+# RabbitMQ (optional - uses default if not set)
+RABBITMQ_URL=amqp://localhost:5672/
+
+# Redis (optional - uses default if not set)
+REDIS_URL=redis://localhost:6379/0
backend/app/services/auth/verification.py (2)

65-66: Remove duplicate Supabase client assignments.

The consecutive duplicate calls add noise without changing behavior.

♻️ Suggested cleanup
-    supabase = get_supabase_client()
-    supabase = get_supabase_client()
+    supabase = get_supabase_client()
...
-    supabase = get_supabase_client()
-    supabase = get_supabase_client()
+    supabase = get_supabase_client()

Also applies to: 100-101


172-173: Avoid redundant exception interpolation in logger.exception.

logger.exception already captures the exception; interpolating {e} is redundant.

♻️ Suggested cleanup
-        logger.exception(f"Database error in find_user_by_session_and_verify: {e}")
+        logger.exception("Database error in find_user_by_session_and_verify")


coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
backend/app/services/auth/verification.py (2)

137-144: Session not deleted when GitHub account is already linked to another user.

When the GitHub account is already linked to a different user (line 137), the session is not deleted from Redis before raising the exception. This could allow repeated verification attempts with the same session until TTL expires.

🐛 Suggested fix
         if existing_github_user.data:
             logger.warning(f"GitHub account {github_username} is already linked to another user")
             await supabase.table("users").update({
                 "verification_token": None,
                 "verification_token_expires_at": None,
                 "updated_at": datetime.now().isoformat()
             }).eq("id", user_to_verify['id']).execute()
+            await store.delete(session_id)
             raise Exception(f"GitHub account {github_username} is already linked to another Discord user")

194-218: Missing error handling for Redis operations.

Unlike create_verification_session and find_user_by_session_and_verify, this function doesn't wrap Redis operations in a try/except block. If get_redis_client() fails or Redis is unavailable, the exception will propagate unhandled.

🐛 Suggested fix
 async def get_verification_session_info(session_id: str) -> Optional[Dict[str, str]]:
     """
     Get information about a verification session.
     """
-    redis_client = await get_redis_client()
-    store = VerificationSessionStore(redis_client)
-
-    session_data = await store.get(session_id)
-    if not session_data:
-        return None
-
-    discord_id = session_data["discord_id"]
-    expiry_time_str = session_data["expiry_time"]
-    expiry_time = datetime.fromisoformat(expiry_time_str)
-
-    # Calculate remaining time
-    now = datetime.now()
-    if now > expiry_time:
-        await store.delete(session_id)
+    try:
+        redis_client = await get_redis_client()
+        store = VerificationSessionStore(redis_client)
+
+        session_data = await store.get(session_id)
+        if not session_data:
+            return None
+
+        discord_id = session_data.get("discord_id")
+        expiry_time_str = session_data.get("expiry_time")
+        if not discord_id or not expiry_time_str:
+            logger.warning(f"Malformed session data for {session_id}")
+            await store.delete(session_id)
+            return None
+
+        expiry_time = datetime.fromisoformat(expiry_time_str)
+
+        # Calculate remaining time
+        now = datetime.now()
+        if now > expiry_time:
+            await store.delete(session_id)
+            return None
+
+        return {
+            "discord_id": discord_id,
+            "expiry_time": expiry_time_str,
+            "time_remaining": str(expiry_time - now)
+        }
+    except Exception:
+        logger.exception("Error getting verification session info")
         return None
-
-    return {
-        "discord_id": discord_id,
-        "expiry_time": expiry_time_str,
-        "time_remaining": str(expiry_time - now)
-    }
🧹 Nitpick comments (3)
docker-compose.yml (2)

19-20: depends_on won't wait for Redis to be healthy by default.

The depends_on: redis only ensures the container starts, not that Redis is ready to accept connections. With Compose v3.8, you need to use the extended depends_on syntax with condition: service_healthy to leverage the healthcheck.

♻️ Suggested fix to wait for Redis health
     depends_on:
-      - redis
+      redis:
+        condition: service_healthy

Also applies to: 30-34


22-34: Consider adding Redis authentication for non-development environments.

Redis is configured without password authentication. While this is acceptable for local development (especially with the port bound to localhost), consider adding --requirepass for production or staging deployments.

♻️ Optional: Add Redis password protection
   redis:
     image: redis:alpine
     container_name: devr-redis
+    command: redis-server --requirepass ${REDIS_PASSWORD:-}
     ports:
       - "127.0.0.1:6379:6379"

Then update REDIS_URL to include the password: redis://:password@redis:6379/0

backend/app/services/auth/verification.py (1)

171-172: Remove redundant exception object from logger.exception.

logger.exception() automatically includes the exception information from the current context, making the {e} interpolation redundant.

♻️ Suggested fix
-        logger.exception(f"Database error in find_user_by_session_and_verify: {e}")
+        logger.exception("Database error in find_user_by_session_and_verify")

