
OpenTelemetry, Grafana & Prometheus#95

Open
MagnusTerak wants to merge 9 commits into main from issue/10

Conversation

@MagnusTerak
Contributor

@MagnusTerak MagnusTerak commented Dec 12, 2025

Feature: OpenTelemetry, Grafana & Prometheus

Co-authors:

@nilsishome
@Kalu32k

Description

We have added OpenTelemetry, Grafana & Prometheus to collect and visualize metrics and statistics about the usage of our server.

Motivation and Context

This feature is a great addition for our team, enabling us to supervise the server and manage its statistics and metrics.

How Has This Been Tested?

Manual testing. You can access Grafana on port 3000 and log in with:
username: admin
password: admin

How to test locally

Open localhost:3000 in your browser and you will be greeted by a login page, where you enter:
admin as the username
admin as the password
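
The steps above can be sketched as shell commands (a minimal sketch, assuming the compose file lives at the repository root and Prometheus is reachable on its default port 9090):

```shell
# Start the monitoring stack alongside the backend (run from the repo root)
docker compose up -d otel-collector prometheus grafana node-exporter

# Optional sanity check: list Prometheus scrape targets
curl -s http://localhost:9090/api/v1/targets

# Grafana UI: open http://localhost:3000 and log in with admin / admin
```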

Screenshots:


Related Issues:

OpenTelemetry + Grafana #10 | Prometheus #91

Summary by CodeRabbit

  • New Features

    • Integrated a monitoring & observability stack (OpenTelemetry Collector, Prometheus, Grafana, node-exporter) for real-time metrics, scraping, and dashboarding.
  • Chores

    • Added telemetry/metrics dependencies and enabled actuator endpoints.
    • Added orchestration and provisioning for Prometheus, Grafana, OTEL collector, and node-exporter.


MagnusTerak and others added 7 commits December 8, 2025 14:49
Co-authored-by: Kalu32k <158482431+Kalu32k@users.noreply.github.com>
Co-authored-by: nilsishome <181159296+nilsishome@users.noreply.github.com>
# Conflicts:
#	backend/pom.xml
#	backend/src/main/java/org/fungover/zipp/security/SecurityConfig.java
Co-authored-by: Kalu32k <158482431+Kalu32k@users.noreply.github.com>
Co-authored-by: nilsishome <181159296+nilsishome@users.noreply.github.com>
@coderabbitai

coderabbitai Bot commented Dec 12, 2025

Walkthrough

Adds an observability stack: OpenTelemetry Collector, Prometheus, Grafana provisioning, node-exporter, Prometheus/Grafana configs, Maven telemetry dependencies, and docker-compose services to run them alongside the backend.

Changes

Cohort / File(s) Summary
Grafana provisioning
backend/config/grafana/provisioning/dashboards/dashboards.yaml, backend/config/grafana/provisioning/datasources/datasources.yaml
Adds Grafana provisioning: file-based dashboards provider (editable, /var/lib/grafana/dashboards) and a Prometheus datasource pointing to http://prometheus:9090.
OpenTelemetry Collector config
backend/config/otel/otel-collector-config.yaml
Adds OTEL Collector config with OTLP receivers (gRPC 4317, HTTP 4318 with CORS), hostmetrics receiver (10s, multiple scrapers), resourcedetection processor, Prometheus + debug exporters, and a metrics pipeline linking receivers → processor → exporters.
Prometheus config
backend/config/prometheus/prometheus-config.yaml
Adds Prometheus config: global scrape_interval: 60s and scrape_jobs for prometheus (self), node (node-exporter:9100), and otel-collector (otel-collector:9464).
Docker Compose services
docker-compose.yml
Adds services: otel-collector, prometheus, grafana, and node-exporter with images, volumes, ports, resource limits, networks, env vars and restart policies to run the monitoring stack alongside existing services.
Backend build
backend/pom.xml
Adds dependencies: spring-boot-starter-opentelemetry, spring-boot-starter-actuator, micrometer-registry-prometheus (runtime), and protobuf-java:4.33.2.
Backend config
backend/src/main/resources/application-dev.yml
Enables exposure of all Spring Boot Actuator endpoints via management.endpoints.web.exposure.include: '*' in dev profile.
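
As an illustration of the provisioning files summarized above, the Prometheus datasource entry presumably looks roughly like this (the proxy access, default flag, and URL are stated elsewhere in this review; the remaining fields are assumptions):

```yaml
# backend/config/grafana/provisioning/datasources/datasources.yaml (sketch)
apiVersion: 1
datasources:
  - name: Prometheus          # display name (assumed)
    type: prometheus
    access: proxy             # Grafana queries Prometheus server-side
    url: http://prometheus:9090
    isDefault: true
```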

Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant App as Application
participant OTEL as OTEL Collector
participant Node as node-exporter
participant Prom as Prometheus
participant Graf as Grafana
Note over App,OTEL: App exports telemetry via OTLP (gRPC/HTTP)
App->>OTEL: Send OTLP metrics/traces
Node->>OTEL: Host metrics collected by hostmetrics receiver
OTEL->>Prom: Expose Prometheus metrics endpoint (/metrics)
Prom->>OTEL: Scrape otel-collector:9464/metrics
Prom->>Node: Scrape node-exporter:9100/metrics
Graf->>Prom: Query metrics API (http://prometheus:9090)
Graf->>Graf: Load dashboards/datasources from mounted provisioning files
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Inspect docker-compose: volume mounts, host path usage, port mappings, resource limits, and restart policies.
  • Validate OTEL Collector ports, CORS, hostmetrics scrapers, and pipeline/exporter targets.
  • Verify Prometheus scrape_configs and relabeling for correct target labels.
  • Check Grafana provisioning file structure and datasource URL.
  • Confirm Maven dependency compatibility with project Spring Boot version and protobuf-java choice.
✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch issue/10


This was linked to issues Dec 12, 2025
@jenkins-cd-for-zipp

Jenkins Build #1 Summary (for PR #95)

  • Status: SUCCESS
  • Duration: 1 min 34 sec
  • Branch: PR-95
  • Commit: c410b9a
  • Docker Image: 192.168.0.82:5000/zipp:c410b9a (pushed to registry)

Details:

  • Checkout: Successful
  • Build & Scan: Passed
  • Push: Successful

All stages passed—no issues detected.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
docker-compose.yml (3)

109-121: Add resource limits to Grafana service for consistency.

Lines 109-121: The Grafana service lacks resource limits, while otel-collector (200M) and prometheus (300M) have memory constraints defined. This inconsistency could lead to Grafana consuming unbounded memory.

Add memory resource limits to the Grafana service:

   grafana:
     image: grafana/grafana:12.2.0
     container_name: grafana
     ports:
       - "3000:3000"
     restart: unless-stopped
     networks:
       - app-net
+    deploy:
+      resources:
+        limits:
+          memory: 256M
     volumes:
       - ./backend/config/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
       - ./backend/config/grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
       - ./backend/config/grafana/dashboards:/var/lib/grafana/dashboards

122-139: Pin node-exporter image version for consistency and reproducibility.

Line 123: The node-exporter service uses the latest tag instead of a pinned version, creating non-deterministic deployments. Other services (prometheus v3.7.3, grafana 12.2.0, otel-collector 0.139.0) use specific versions.

Pin the node-exporter version to match the practice of other services:

   node-exporter:
-    image: prom/node-exporter:latest
+    image: prom/node-exporter:v1.8.2
     container_name: node-exporter

Verify the latest stable version of node-exporter and update accordingly.


62-85: otel-collector runs with root privileges.

Line 71: The otel-collector service explicitly runs as user: 0:0 (root), which is a security anti-pattern. The container should run with minimal required privileges.

Remove the explicit root user configuration and allow the container image's default unprivileged user to be used:

   otel-collector:
     image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.139.0
     container_name: otel-collector
     deploy:
       resources:
         limits:
           memory: 200M
     restart: unless-stopped
     command: [ "--config=/backend/config/otel/otel-collector-config.yaml"]
-    user: 0:0
     volumes:
       - ./backend/config/otel/otel-collector-config.yaml:/backend/config/otel/otel-collector-config.yaml

Verify that the otel-collector container runs correctly with its default user and has necessary read permissions for the config file.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6f40508 and c410b9a.

📒 Files selected for processing (7)
  • backend/config/grafana/provisioning/dashboards/dashboards.yaml (1 hunks)
  • backend/config/grafana/provisioning/datasources/datasources.yaml (1 hunks)
  • backend/config/otel/otel-collector-config.yaml (1 hunks)
  • backend/config/prometheus/prometheus-config.yaml (1 hunks)
  • backend/pom.xml (1 hunks)
  • backend/src/main/resources/application-dev.yml (1 hunks)
  • docker-compose.yml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: PR Build
  • GitHub Check: Jenkins
  • GitHub Check: Build, Test & Analyze
🔇 Additional comments (6)
backend/src/main/resources/application-dev.yml (1)

25-29: Expose all Actuator endpoints for development observability.

Line 25-29: The wildcard exposure of Actuator endpoints is appropriate for the dev profile. Ensure this profile is never enabled in production environments.

Verify that the dev profile is strictly development-only and cannot be accidentally enabled in staging or production.

backend/config/grafana/provisioning/dashboards/dashboards.yaml (1)

1-10: Grafana dashboards provisioning configuration is correct.

The provisioning structure follows Grafana conventions correctly. However, note that actual dashboard files should be placed in backend/config/grafana/dashboards/ directory (referenced on line 10) for them to be loaded.

Verify that dashboard JSON files exist in the backend/config/grafana/dashboards/ directory, as the provisioning configuration references this path but no dashboards are included in this PR.

backend/config/prometheus/prometheus-config.yaml (1)

1-35: Prometheus configuration is sound.

Lines 1-35: Configuration correctly defines three scrape jobs corresponding to the docker-compose services (prometheus, node-exporter, otel-collector) with appropriate targets and a reasonable 60-second global scrape interval. The hostname relabeling (lines 19-23) is a good practice for identifying the scraping instance.
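
Based on the description above, the scrape configuration presumably looks roughly like this (the job names, targets, and 60-second interval come from the summary; the file is a sketch, and the hostname relabeling mentioned above is omitted):

```yaml
# backend/config/prometheus/prometheus-config.yaml (sketch)
global:
  scrape_interval: 60s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]
  - job_name: otel-collector
    static_configs:
      - targets: ["otel-collector:9464"]
```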

backend/config/grafana/provisioning/datasources/datasources.yaml (1)

1-6: Grafana Prometheus datasource configuration is correct.

Lines 1-6: Datasource correctly references the Prometheus service (http://prometheus:9090) from docker-compose using proxy access and sets it as the default datasource, which is appropriate for this monitoring stack.

backend/config/otel/otel-collector-config.yaml (1)

27-28: Ensure environment variables are properly defined.

Lines 27-28: The prometheus exporter references ${OTEL_COLLECTOR_PORT_PROM} environment variable, which must be defined in the docker-compose environment section or a .env file.

Verify that all environment variables referenced in this config (especially OTEL_COLLECTOR_PORT_PROM) are defined in the docker-compose service or a .env file with appropriate values.
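
To make the coupling concrete, the exporter stanza and the compose-side definition presumably pair up like this (only the ${OTEL_COLLECTOR_PORT_PROM} reference and port 9464 are confirmed by this review; the bind address is an assumption):

```yaml
# backend/config/otel/otel-collector-config.yaml (sketch)
exporters:
  prometheus:
    endpoint: "0.0.0.0:${OTEL_COLLECTOR_PORT_PROM}"  # resolved from the container environment

# docker-compose.yml (sketch) -- the variable must be defined for the container
# services:
#   otel-collector:
#     environment:
#       - OTEL_COLLECTOR_PORT_PROM=9464
```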

backend/pom.xml (1)

169-192: OpenTelemetry dependencies added correctly.

Lines 169-192: Dependencies follow project conventions by inheriting versions from the parent spring-boot-starter-parent (4.0.0), except for protobuf-java which is explicitly pinned to 4.33.2. Spring Boot 4.0.0 provides first-class support for Java 25, and protobuf-java 4.33.2 is free of known critical vulnerabilities. All additions are compatible with the project stack.

Comment thread backend/config/otel/otel-collector-config.yaml Outdated
Comment thread docker-compose.yml
Co-authored-by: Kalu32k <158482431+Kalu32k@users.noreply.github.com>
Co-authored-by: nilsishome <181159296+nilsishome@users.noreply.github.com>
@jenkins-cd-for-zipp

Jenkins Build #2 Summary (for PR #95)

  • Status: SUCCESS
  • Duration: 1 min 38 sec
  • Branch: PR-95
  • Commit: 9dfd013
  • Docker Image: 192.168.0.82:5000/zipp:9dfd013 (pushed to registry)

Details:

  • Checkout: Successful
  • Build & Scan: Passed
  • Push: Successful

All stages passed—no issues detected.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
docker-compose.yml (2)

62-85: Fix Compose-time port interpolation (ports:) vs container env (environment:)

ports: uses ${OTEL_COLLECTOR_PORT_*} but those are only set under environment: (container runtime), so Compose interpolation can fail unless the vars are defined in a .env file / shell environment. Same issue applies to ${PROMETHEUS_PORT} later.

Suggested fix (pick one approach):

  otel-collector:
@@
     ports:
-      - ${OTEL_COLLECTOR_PORT_GRPC}:${OTEL_COLLECTOR_PORT_GRPC}
-      - ${OTEL_COLLECTOR_PORT_HTTP}:${OTEL_COLLECTOR_PORT_HTTP}
-      - ${OTEL_COLLECTOR_PORT_PROM}:${OTEL_COLLECTOR_PORT_PROM}
+      - "4317:4317"
+      - "4318:4318"
+      - "9464:9464"
@@
-    environment:
-      - OTEL_COLLECTOR_HOST=otel-collector
-      - OTEL_COLLECTOR_PORT_GRPC=4317
-      - OTEL_COLLECTOR_PORT_HTTP=4318
-      - OTEL_COLLECTOR_PORT_PROM=9464
+    environment:
+      OTEL_COLLECTOR_HOST: otel-collector

(or use defaults like "${OTEL_COLLECTOR_PORT_GRPC:-4317}:${OTEL_COLLECTOR_PORT_GRPC:-4317}" and document required vars in .env).


86-108: Prometheus port mapping will fail unless PROMETHEUS_PORT is defined externally

ports: - "${PROMETHEUS_PORT}:${PROMETHEUS_PORT}" requires PROMETHEUS_PORT at Compose evaluation time. Either hardcode 9090:9090, or use a default:

  prometheus:
@@
     ports:
-      - "${PROMETHEUS_PORT}:${PROMETHEUS_PORT}"
+      - "${PROMETHEUS_PORT:-9090}:${PROMETHEUS_PORT:-9090}"
🧹 Nitpick comments (2)
docker-compose.yml (2)

109-121: Grafana default admin creds are OK for this repo, but add persistence

Given the repo learning, hard-coded dev creds aren’t a concern here. But Grafana currently has no named volume; dashboards/users/plugins will be lost on container recreate. Consider adding:

  grafana:
@@
     volumes:
       - ./backend/config/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
       - ./backend/config/grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
       - ./backend/config/grafana/dashboards:/var/lib/grafana/dashboards
+      - grafana-data:/var/lib/grafana
@@
 volumes:
@@
   mysql-data:
+  grafana-data:

62-139: Optional: gate observability stack behind a Compose profile

If this file is used beyond local dev, consider putting otel-collector, prometheus, grafana, node-exporter behind profiles: ["observability"] to avoid always mounting host paths / exposing ports.

  prometheus:
+    profiles: ["observability"]
  grafana:
+    profiles: ["observability"]
  otel-collector:
+    profiles: ["observability"]
  node-exporter:
+    profiles: ["observability"]
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c410b9a and 9dfd013.

📒 Files selected for processing (2)
  • backend/config/otel/otel-collector-config.yaml (1 hunks)
  • docker-compose.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/config/otel/otel-collector-config.yaml
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-26T09:51:20.066Z
Learnt from: alfredbrannare
Repo: fungover/zipp PR: 12
File: docker-compose.yml:6-6
Timestamp: 2025-11-26T09:51:20.066Z
Learning: For the fungover/zipp project, hard-coded development credentials (like MYSQL_ROOT_PASSWORD: root in docker-compose.yml or datasource defaults in application.properties) are intentional development defaults and are not considered security concerns by the project maintainers.

Applied to files:

  • docker-compose.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: PR Build
  • GitHub Check: Jenkins
  • GitHub Check: Build, Test & Analyze

Comment thread docker-compose.yml Outdated
Comment thread docker-compose.yml
@sonarqubecloud
Contributor

@Tobias-hubs Tobias-hubs left a comment


Very good integration of OpenTelemetry, Prometheus, and Grafana.
Configs are clear and container setup looks solid.

  • Well-structured configs and reproducible docker-compose setup
  • Automated Grafana provisioning

Overall, a strong step toward a reliable monitoring stack.
The only point to reconsider is whether to adopt the changes suggested by CodeRabbit.

jennymakki
jennymakki previously approved these changes Dec 14, 2025
Contributor

@jennymakki jennymakki left a comment


Nice addition — bringing in OpenTelemetry with Prometheus and Grafana is a big improvement for observability.

One small suggestion: it would be helpful to add a bit of documentation (in the README or PR description) explaining how to start the stack and where to view the metrics/dashboards. That would make it easier for others to try this out locally.

Overall looks good and valuable for the project. Great work!

@jenkins-cd-for-zipp

Jenkins Build #3 Summary (for PR #95)

  • Status: FAILURE
  • Duration: 1 min 18 sec
  • Branch: PR-95
  • Commit: ca25b8c
  • Docker Image: 192.168.0.82:5000/zipp:ca25b8c (pushed to registry)

Details:

  • Checkout: Successful
  • Build & Scan: Failed (check logs below)
  • Push: Skipped (due to earlier failure)

Error Logs (truncated):
For full logs, contact the Jenkins admin.

@jenkins-cd-for-zipp

Jenkins Build #4 Summary (for PR #95)

  • Status: SUCCESS
  • Duration: 50 min
  • Branch: PR-95
  • Commit: 5b6a287
  • Docker Image: 192.168.0.82:5000/zipp:5b6a287 (pushed to registry)

Details:

  • Checkout: Successful
  • Build & Scan: Passed
  • Push: Successful

All stages passed—no issues detected.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
docker-compose.yml (1)

126-126: Pin node-exporter image version.

Using latest prevents reproducible deployments and may introduce breaking changes unexpectedly. Pin to a specific version such as v1.9.1.

🔎 Apply this diff
   node-exporter:
-    image: prom/node-exporter:latest
+    image: prom/node-exporter:v1.9.1
🧹 Nitpick comments (1)
docker-compose.yml (1)

112-124: Consider adding persistent volume for Grafana data.

Grafana currently only mounts provisioning directories. Without a volume for /var/lib/grafana, any dashboards created via the UI, user preferences, or configuration changes will be lost on container restart.

🔎 Add persistent volume
   grafana:
     image: grafana/grafana:12.2.0
     container_name: grafana
     ports:
       - "3000:3000"
     restart: unless-stopped
     networks:
       - app-net
     volumes:
+      - grafana-data:/var/lib/grafana
       - ./backend/config/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
       - ./backend/config/grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
       - ./backend/config/grafana/dashboards:/var/lib/grafana/dashboards

And add to the volumes section:

 volumes:
   kafka-data:
   mysql-data:
+  grafana-data:
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9dfd013 and e902912.

📒 Files selected for processing (1)
  • docker-compose.yml (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-26T09:51:20.066Z
Learnt from: alfredbrannare
Repo: fungover/zipp PR: 12
File: docker-compose.yml:6-6
Timestamp: 2025-11-26T09:51:20.066Z
Learning: For the fungover/zipp project, hard-coded development credentials (like MYSQL_ROOT_PASSWORD: root in docker-compose.yml or datasource defaults in application.properties) are intentional development defaults and are not considered security concerns by the project maintainers.

Applied to files:

  • docker-compose.yml
🔇 Additional comments (1)
docker-compose.yml (1)

71-74: Security improvements applied correctly.

Good use of read_only filesystem, tmpfs for temporary storage, and running as non-root user (by leaving user commented to use image default).

Comment thread docker-compose.yml
Comment on lines +80 to +87
- ${OTEL_COLLECTOR_PORT_GRPC}:${OTEL_COLLECTOR_PORT_GRPC}
- ${OTEL_COLLECTOR_PORT_HTTP}:${OTEL_COLLECTOR_PORT_HTTP}
- ${OTEL_COLLECTOR_PORT_PROM}:${OTEL_COLLECTOR_PORT_PROM}
environment:
- OTEL_COLLECTOR_HOST=otel-collector
- OTEL_COLLECTOR_PORT_GRPC=4317
- OTEL_COLLECTOR_PORT_HTTP=4318
- OTEL_COLLECTOR_PORT_PROM=9464


⚠️ Potential issue | 🟠 Major

Environment variables: inconsistent definition creates deployment risk.

Lines 80-82 use ${OTEL_COLLECTOR_PORT_GRPC} substitution requiring these variables in a .env file or shell environment at docker-compose parse time. However, lines 84-87 define the same variables with hardcoded values inside the container environment—where they cannot affect port mapping substitution.

Either hardcode the ports directly or ensure external variables are defined and reuse them consistently.

🔎 Option 1: Hardcode ports (simpler for local dev)
     networks:
       - app-net
     ports:
-      - ${OTEL_COLLECTOR_PORT_GRPC}:${OTEL_COLLECTOR_PORT_GRPC}
-      - ${OTEL_COLLECTOR_PORT_HTTP}:${OTEL_COLLECTOR_PORT_HTTP}
-      - ${OTEL_COLLECTOR_PORT_PROM}:${OTEL_COLLECTOR_PORT_PROM}
+      - "4317:4317"
+      - "4318:4318"
+      - "9464:9464"
     environment:
       - OTEL_COLLECTOR_HOST=otel-collector
       - OTEL_COLLECTOR_PORT_GRPC=4317
       - OTEL_COLLECTOR_PORT_HTTP=4318
       - OTEL_COLLECTOR_PORT_PROM=9464

Option 2: Use external variables consistently

Create .env file:

OTEL_COLLECTOR_PORT_GRPC=4317
OTEL_COLLECTOR_PORT_HTTP=4318
OTEL_COLLECTOR_PORT_PROM=9464

Then reference them:

     ports:
       - ${OTEL_COLLECTOR_PORT_GRPC}:${OTEL_COLLECTOR_PORT_GRPC}
       - ${OTEL_COLLECTOR_PORT_HTTP}:${OTEL_COLLECTOR_PORT_HTTP}
       - ${OTEL_COLLECTOR_PORT_PROM}:${OTEL_COLLECTOR_PORT_PROM}
     environment:
       - OTEL_COLLECTOR_HOST=otel-collector
-      - OTEL_COLLECTOR_PORT_GRPC=4317
-      - OTEL_COLLECTOR_PORT_HTTP=4318
-      - OTEL_COLLECTOR_PORT_PROM=9464
+      - OTEL_COLLECTOR_PORT_GRPC=${OTEL_COLLECTOR_PORT_GRPC}
+      - OTEL_COLLECTOR_PORT_HTTP=${OTEL_COLLECTOR_PORT_HTTP}
+      - OTEL_COLLECTOR_PORT_PROM=${OTEL_COLLECTOR_PORT_PROM}
🤖 Prompt for AI Agents
In docker-compose.yml around lines 80 to 87, the port mapping uses
${OTEL_COLLECTOR_PORT_*} substitutions while the same variables are defined only
in the container environment — this is inconsistent and may cause docker-compose
to fail at parse time. Fix by either hardcoding the host:container port mappings
to the known ports (4317, 4318, 9464) in the top section, or move the variable
definitions out to an external .env file (with OTEL_COLLECTOR_PORT_GRPC=4317,
OTEL_COLLECTOR_PORT_HTTP=4318, OTEL_COLLECTOR_PORT_PROM=9464) and keep using the
${...} substitutions, ensuring the same source is used for both mapping and
container environment.

Comment thread docker-compose.yml
Comment on lines +99 to +101
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /etc:/host/etc:ro


⚠️ Potential issue | 🟠 Major

Reconsider mounting host paths, especially /etc.

Mounting /proc and /sys as read-only is common for host metrics collection, but /etc increases the attack surface. If Prometheus is compromised, sensitive host configuration files become accessible.

Consider whether Prometheus truly needs /etc access or if node-exporter can provide necessary metrics instead.

🤖 Prompt for AI Agents
In docker-compose.yml around lines 99 to 101, the service mounts host paths
/proc, /sys and /etc into the container; mounting /etc increases attack surface
and likely isn’t needed for Prometheus. Remove the /etc:/host/etc:ro bind mount
(or restrict to specific files if absolutely required) and ensure any required
host metrics/config are obtained via node-exporter or a minimal, explicit file
mount; update service docs and test that metrics still work after removing the
/etc mount.

Comment thread docker-compose.yml
memory: 300M
restart: unless-stopped
ports:
- "${PROMETHEUS_PORT}:${PROMETHEUS_PORT}"


⚠️ Potential issue | 🟠 Major

${PROMETHEUS_PORT} requires external definition.

Similar to the otel-collector service, this substitution requires the variable in a .env file or shell environment. Consider hardcoding the port as "9090:9090" for local development simplicity, or create a .env file with PROMETHEUS_PORT=9090.

🤖 Prompt for AI Agents
In docker-compose.yml around line 110, the port mapping uses a shell variable
"${PROMETHEUS_PORT}" which requires an external .env or environment definition;
replace it with an explicit mapping "9090:9090" for local/dev simplicity or add
PROMETHEUS_PORT=9090 to the repository .env and document it in README so the
substitution resolves at runtime.

Comment thread docker-compose.yml
Comment on lines +134 to +136
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro


⚠️ Potential issue | 🟠 Major

Host root filesystem mount is necessary for node-exporter but increases attack surface.

To monitor the host from a container, node-exporter requires the path.rootfs argument, which necessitates mounting the root filesystem. While the read-only flag limits write access, mounting a host's full root directory into a container is a high risk practice and not aligned with the principle of least privilege. Consider using more granular mounting of only required directories instead of a blanket host fs mount. Ensure this trade-off is intentional and that you trust the node-exporter image supply chain.

🤖 Prompt for AI Agents
docker-compose.yml lines 134-136: the compose mounts the entire host rootfs
(/:/rootfs:ro) which increases attack surface; replace this blanket mount with
only the specific host paths node-exporter truly needs (keep /proc:/host/proc:ro
and /sys:/host/sys:ro if required, remove /:/rootfs:ro) or alternatively
bind-mount a minimal directory that maps to the required tree (e.g., only /etc,
/var/lib or the specific subsystems you need) and update node-exporter startup
args to point path.rootfs to that minimal mount; ensure you document the
intentional trade-off and validate the node-exporter image trust before keeping
any host mounts.



Development

Successfully merging this pull request may close these issues.

  • Prometheus (Metrics)
  • OpenTelemetry

5 participants