feat(api): MCP, OpenAPI & Dynamic Introspection #1429
New Features:
- API endpoints now support comprehensive input validation with detailed error responses via Pydantic models.
- OpenAPI specification endpoint (`/openapi.json`) and interactive Swagger UI documentation (`/docs`) now available for API discovery.
- Enhanced MCP session lifecycle management with create, retrieve, and delete operations.
- Network diagnostic tools: traceroute, nslookup, NMAP scanning, and network topology viewing exposed via API.
- Device search, filtering by status (including 'offline'), and bulk operations (copy, delete, update).
- Wake-on-LAN functionality for remote device management.
- Added dynamic tool disablement and status reporting.

Bug Fixes:
- Fixed `get_tools_status` in the registry to correctly return boolean values instead of `None` for enabled tools.
- Improved error handling for invalid API inputs with standardized validation responses.
- Fixed OPTIONS request handling for cross-origin requests.

Refactoring:
- Significant refactoring of `api_server_start.py` to use decorator-based validation (`@validate_request`).
📝 Walkthrough

Adds a registry-driven OpenAPI/MCP framework with Pydantic v2 validation, dynamic OpenAPI spec generation, a session/SSE-based MCP server, centralized auth/CORS, MAC normalization, environment helpers, UI wait helpers, many tests, and a pydantic requirement bump.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant APIServer as API Server
    participant Auth as Auth Module
    participant Registry
    participant SessionMgr as Session Manager
    participant ToolExec as Tool Executor
    participant FlaskApp as Flask App
    Client->>APIServer: POST /mcp/sessions (create)
    APIServer->>SessionMgr: create_session()
    SessionMgr-->>APIServer: session_id
    APIServer-->>Client: 201 {session_id}
    Client->>APIServer: POST /mcp (JSON-RPC with session_id)
    APIServer->>Auth: is_authorized(headers, token)
    alt unauthorized
        Auth-->>APIServer: False
        APIServer-->>Client: 403 Unauthorized
    else authorized
        Auth-->>APIServer: True
        APIServer->>Registry: find_route_for_tool(tool_name)
        Registry-->>APIServer: route_metadata
        APIServer->>ToolExec: _execute_tool(route, args)
        ToolExec->>FlaskApp: internal HTTP request (loopback)
        FlaskApp-->>ToolExec: response
        ToolExec-->>APIServer: result
        APIServer->>SessionMgr: enqueue session message
        APIServer-->>Client: JSON-RPC result
    end
    Client->>APIServer: GET /mcp/sse?session_id (open stream)
    loop messages / keep-alive
        SessionMgr->>APIServer: dequeue message
        APIServer-->>Client: SSE message
    end
```
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 14
Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
scripts/generate-device-inventory.py (1)

322-324: Dead conditional: always evaluates to "Node".

The ternary expression evaluates to `"Node"` regardless of the condition. This appears to be leftover from an incomplete implementation or a copy-paste error.

Suggested fix

If all device types should use the same prefix, simplify:

```diff
-    name_prefix = "Node" if dev_type == "Server" else "Node"
-    name = f"{name_prefix}-{idx:02d}"
+    name = f"Node-{idx:02d}"
```

Or, if different prefixes were intended per device type:

```diff
-    name_prefix = "Node" if dev_type == "Server" else "Node"
+    name_prefix = dev_type if dev_type in {"Server", "Laptop", "Phone"} else "Node"
     name = f"{name_prefix}-{idx:02d}"
```

server/db/db_helper.py (1)
17-42: Update docstring and clarify "offline" status semantics.

The docstring doesn't list the 'offline' status, and the condition overlaps with "down" (both match devices not in the last scan). Update the docstring to document 'offline', and add `AND devAlertDown = 0` to the offline condition to exclude alert-down devices:

Suggested fix

```diff
     - 'down'     : Devices not present in the last scan but with alerts
+    - 'offline'  : Devices not present in the last scan (no alert-down flag)
     - 'archived' : Devices that are archived
```

```diff
-    "offline": "WHERE devIsArchived=0 AND devPresentLastScan=0",
+    "offline": "WHERE devIsArchived=0 AND devPresentLastScan=0 AND devAlertDown = 0",
```

server/models/device_instance.py (1)
589-613: Double-close on success path.

On the success path, `conn.close()` is called at line 593, but the `finally` block at lines 612-613 will also call `conn.close()`. While sqlite3 tolerates multiple closes, this indicates a logic issue. Remove the explicit close at line 593 and let the `finally` block handle cleanup uniformly.

Proposed fix

```diff
     conn = get_temp_db_connection()
     cur = conn.cursor()
     cur.execute(sql, values)
     conn.commit()
-    conn.close()
     mylog("debug", f"[DeviceInstance] setDeviceData SQL: {sql.strip()}")
     mylog("debug", f"[DeviceInstance] setDeviceData VALUES:{values}")
     return {"success": True}
```

server/api_server/api_server_start.py (2)
390-411: Use the validated `payload` instead of raw JSON in mutating endpoints.

These handlers ignore the `payload` produced by `@validate_request`, which bypasses normalization/sanitization (e.g., MAC normalization, column allow-lists, alias sanitization) and makes schema flags like `confirm_delete_all` ineffective. This risks unsafe or inconsistent DB writes and undermines the validation layer.

✅ Example fix (apply pattern to similar endpoints)

```diff
 def api_device_set_alias(mac, payload=None):
-    data = request.get_json() or {}
-    alias = data.get('alias')
+    data = payload if payload is not None else (request.get_json() or {})
+    if hasattr(data, "model_dump"):
+        data = data.model_dump(exclude_unset=True)
+    alias = data.get("alias")
```

As per coding guidelines, MAC addresses must be normalized before database writes.

Also applies to: 413-443, 446-471, 521-539
682-699: Schema exposes `limit`, but handler ignores it.

`DeviceSearchRequest.limit` is defined, yet the endpoint never applies it. Either honor the limit or remove it from the schema to avoid misleading clients.
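Honoring the limit could be as small as a sketch like the following (the helper name and the handler wiring are assumptions for illustration, not the actual NetAlertX code):

```python
def apply_search_limit(results, limit=None):
    """Truncate search results to the client-requested limit, if any.

    `limit` mirrors the optional DeviceSearchRequest.limit field described
    above; None or a non-positive value means "no limit".
    """
    if limit is None or limit <= 0:
        return results
    return results[:limit]


# Hypothetical handler usage: shape results before serializing the response.
devices = [{"devMac": f"00:11:22:33:44:{i:02X}"} for i in range(10)]
limited = apply_search_limit(devices, limit=3)  # keeps the first 3 entries
```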
🤖 Fix all issues with AI agents
In `@front/plugins/plugin_helper.py`:
- Around line 91-107: Modify plugin_helper.py to add a new function
validate_concrete_mac(input) that mirrors is_mac's behavior but only accepts
full 6-octet MACs (use the existing full_mac_re regexp and reject any
wildcard_re matches); it should return True for valid concrete MACs, log the
same verbose message and return False otherwise. Then update the device-specific
validators in server/api_server/openapi/schemas.py—specifically
DeleteDevicesRequest.validate_mac_list,
UpdateDeviceRequest.validate_mac_addresses,
AddDeviceRequest.validate_mac_address, and
GetDeviceResponse.validate_mac_address—to call validate_concrete_mac instead of
is_mac so wildcard patterns (e.g., "AA:BB:CC:*") are rejected while leaving
wildcard acceptance for search/filter code that still uses is_mac.
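A sketch of what `validate_concrete_mac` could look like; the regex patterns here stand in for plugin_helper.py's existing `full_mac_re`/`wildcard_re` (their exact definitions are assumptions), and the logging call is omitted to keep the block self-contained:

```python
import re

# Assumed equivalents of plugin_helper.py's full_mac_re / wildcard_re.
full_mac_re = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")
wildcard_re = re.compile(r"[*?]")


def validate_concrete_mac(value):
    """Accept only full 6-octet MACs; reject wildcards like 'AA:BB:CC:*'.

    Search/filter code keeps using is_mac, which still accepts wildcards.
    """
    s = str(value).strip()
    if wildcard_re.search(s):
        return False  # wildcard patterns are not concrete MACs
    return bool(full_mac_re.match(s))
```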
In `@server/api_server/api_server_start.py`:
- Around line 102-116: The fallback that assigns hardcoded localhost ports to
_cors_origins should be removed and replaced with a configuration-driven
approach: stop injecting literal ports in the block that defines _cors_origins
(_cors_origins_env and _cors_origins); instead, if no CORS_ORIGINS env var is
present, derive allowed localhost origins from a configuration value (e.g., a
settings constant like DEFAULT_CORS_PORTS or an env var such as
CORS_DEFAULT_PORTS) or fall back to scheme-only localhost entries
("http://localhost", "http://127.0.0.1") so no fixed ports are hardcoded; update
the initialization logic to read that settings source and build _cors_origins
accordingly.
- Around line 274-299: The handlers return raw payloads instead of the required
response contract; update api_get_device (and similarly the device export
handlers, latest/favorite device endpoints, and metrics endpoints) to always
return a JSON object with "success": True and the payload under a consistent key
(e.g., "data": device_data) on success, and on failure return "success": False
with an "error" string (and optional "message") — for example replace the plain
return jsonify(device_data) in api_get_device with a wrapper like
jsonify({"success": True, "data": device_data}) and ensure
DeviceInstance.getDeviceData callers and the export/latest/favorite/metrics
functions follow the same pattern for both success and error branches.
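The contract above can be sketched as a pair of helpers (`wrap_success`/`wrap_error` are hypothetical names; real handlers would pass the result through Flask's `jsonify`):

```python
def wrap_success(data):
    """Success envelope: payload under a consistent 'data' key."""
    return {"success": True, "data": data}


def wrap_error(error, message=None, status=500):
    """Failure envelope: 'success': False plus an error string."""
    body = {"success": False, "error": str(error)}
    if message is not None:
        body["message"] = message
    return body, status


# Hypothetical handler usage replacing a bare jsonify(device_data):
device_data = {"devMac": "00:11:22:33:44:55", "devName": "printer"}
ok = wrap_success(device_data)
err_body, err_status = wrap_error("Device not found", status=404)
```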
- Around line 975-999: The /openapi.json and /docs endpoints bypass token
validation; update serve_openapi_spec and api_docs to enforce the same
Authorization: Bearer <API_TOKEN> check used by other endpoints (e.g., call the
existing token validation helper or apply the same `@require_api_auth` decorator
used elsewhere) and return 401/403 on missing or invalid tokens; keep the
existing behavior of returning the spec and swagger.html when the token is
valid. Include references to serve_openapi_spec and api_docs so reviewers can
locate and update those route handlers.
In `@server/api_server/mcp_endpoint.py`:
- Around line 632-645: In the _execute_tool request-building block, ensure the
loopback call propagates client query tokens: if request.headers does not
already contain "Authorization", check request.args.get("token") and, if
present, set headers["Authorization"] = f"Bearer {token}"; otherwise fall back
to a configured API token from get_setting_value('API_TOKEN') and set it the
same way; keep the existing behavior of using request.headers["Authorization"]
when present so we don't overwrite explicit headers.
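The precedence described above (explicit header, then `?token=` query arg, then configured token) can be sketched as a pure helper; the function name and the dict-based inputs are assumptions standing in for Flask's `request.headers`/`request.args`:

```python
def build_loopback_auth(headers, query_args, configured_token=None):
    """Build headers for the internal loopback request.

    An explicit Authorization header is never overwritten; otherwise a
    ?token= query argument is promoted, falling back to the configured
    API token (e.g., get_setting_value('API_TOKEN')).
    """
    out = dict(headers)
    if "Authorization" in out:
        return out  # keep explicit headers as-is
    token = query_args.get("token") or configured_token
    if token:
        out["Authorization"] = f"Bearer {token}"
    return out
```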
- Around line 668-683: The server deadlocks because _execute_tool() issues
synchronous loopback HTTP requests while Flask is started single-threaded;
update start_server() so app.run(...) is invoked with threaded=True (or
otherwise run under a multi-worker WSGI server like Gunicorn) to allow
concurrent handling of those loopback requests; ensure references to app.run in
start_server() are adjusted and document the alternative deployment
recommendation for production.
In `@server/api_server/openapi/schema_converter.py`:
- Around line 7-22: Change pydantic_to_json_schema to accept a mode: str
parameter (default "validation") and call model.model_json_schema(mode=mode)
inside it; leave request-body call sites to use the default (no arg) so they use
validation mode, and update response-body call sites that currently call
pydantic_to_json_schema(...) to pass mode="serialization" explicitly so response
schemas include serialization rules. Ensure the function signature and all
callers (request and response schema generators) are updated to match the new
parameter name.
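The suggested signature change is small; this sketch shows it against a stub model so it runs standalone (real call sites would pass Pydantic v2 models, whose `model_json_schema` accepts a `mode` argument):

```python
def pydantic_to_json_schema(model, mode: str = "validation"):
    """Delegate to Pydantic v2's model_json_schema, threading through `mode`.

    Request-body call sites keep the default ("validation"); response-body
    call sites pass mode="serialization" explicitly.
    """
    return model.model_json_schema(mode=mode)


# Stub standing in for a Pydantic model, to illustrate the two modes.
class FakeModel:
    @classmethod
    def model_json_schema(cls, mode="validation"):
        return {"title": "FakeModel", "x-mode": mode}


req_schema = pydantic_to_json_schema(FakeModel)  # request bodies: default mode
resp_schema = pydantic_to_json_schema(FakeModel, mode="serialization")  # responses
```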
In `@server/api_server/openapi/schemas.py`:
- Around line 628-653: The DbQueryRequest schema currently defaults
confirm_dangerous_query to True, effectively disabling the confirmation guard;
change confirm_dangerous_query default to False and add Pydantic-level
validation on the DbQueryRequest model (validate_confirm_dangerous_query or
root_validator) to reject requests unless confirm_dangerous_query is True and/or
the rawSql (Base64-decoded) is verified to be a safe SELECT-only statement;
update the Field description for confirm_dangerous_query to reflect the new
required explicit opt-in and keep rawSql handling unchanged but ensure
decoding/validation occurs before execution.
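The guard logic can be sketched independently of the Pydantic wiring. This is a sketch only: it assumes `rawSql` arrives Base64-encoded (as the comment states), and the SELECT-only check is deliberately simplistic; a real implementation would need proper SQL parsing:

```python
import base64


def check_db_query(raw_sql_b64, confirm_dangerous_query=False):
    """Reject a query unless it is SELECT-only or explicitly confirmed.

    Mirrors the suggested default of confirm_dangerous_query=False so
    dangerous statements require an explicit opt-in.
    """
    sql = base64.b64decode(raw_sql_b64).decode("utf-8").strip()
    # Naive single-statement SELECT check (illustrative, not exhaustive).
    is_select_only = sql.upper().startswith("SELECT") and ";" not in sql.rstrip(";")
    if is_select_only or confirm_dangerous_query:
        return sql
    raise ValueError("Dangerous query requires confirm_dangerous_query=True")
```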
In `@server/initialise.py`:
- Around line 337-345: The conf.FLASK_DEBUG setting is defined but unused and
incorrectly exposed; either delete the conf.FLASK_DEBUG registration or make it
truly environment-only by setting overriddenByEnv=1 and preventing UI exposure:
update the ccd(...) call that creates conf.FLASK_DEBUG to include
overriddenByEnv=1 (and adjust metadata to readonly/non-exportable) or remove the
entire conf.FLASK_DEBUG declaration, and ensure the Flask runtime uses a single
source of truth (check where app.run(...) or Flask.run is invoked and read
os.environ["FLASK_DEBUG"] if you intend env-only behavior) so there is no
UI-exposed mutable setting.
In `@server/models/device_instance.py`:
- Around line 519-549: The mac addresses are being written raw; import and use
normalize_mac (and is_mac if desired) from front.plugins.plugin_helper and
normalize both the local mac variable and the devParentMAC value before
assembling values: call normalize_mac(mac) and
normalize_mac(data.get("devParentMAC") or "") (or use is_mac to validate first)
and replace their usages in the values tuple with the normalized/fallback
empty-string values so only validated MACs are written to the DB; also add the
import line for normalize_mac/is_mac at the top of the file.
- Around line 562-587: The update path must normalize MACs before DB writes:
call normalize_mac() (from front/plugins/plugin_helper) on
data.get("devParentMAC") and on the local mac variable used in the WHERE clause
so both the devParentMAC entry in the values tuple and the mac used for the
WHERE are normalized; update the code that builds values (the tuple including
devParentMAC) and normalize the mac passed into the SQL execution in the same
function (e.g., the update handler in device_instance.py) to ensure consistent,
validated MAC format.
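A sketch of the normalization described above; this `normalize_mac` mirrors the behavior implied by the review (uppercase, colon-separated), not necessarily plugin_helper.py's exact implementation:

```python
import re


def normalize_mac(mac):
    """Normalize a MAC to uppercase colon-separated form; '' for bad input."""
    if mac is None:
        return ""
    s = re.sub(r"[^0-9A-Fa-f]", "", str(mac)).upper()
    if len(s) != 12:
        return ""
    return ":".join(s[i:i + 2] for i in range(0, 12, 2))


# Hypothetical update-path usage: normalize both values before the DB write.
data = {"devParentMAC": "aa-bb-cc-dd-ee-ff"}
mac = "0011.2233.4455"
parent_mac = normalize_mac(data.get("devParentMAC") or "")  # goes into values tuple
where_mac = normalize_mac(mac)                              # used in the WHERE clause
```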
In `@test/api_endpoints/test_mcp_tools_endpoints.py`:
- Around line 243-265: The /devices/latest and /devices/totals endpoints
currently return raw lists and must be changed to the API contract wrapper
{success, error} and tests updated accordingly: modify the endpoints (the route
handlers that call device_handler.getTotals() and whatever function returns the
latest device) to return jsonify({"success": True, "data": <payload>}) on
success and jsonify({"success": False, "error": "<message>"}) with appropriate
status codes on failure, and update the tests in test_mcp_tools_endpoints.py
(the test_get_latest_device and the totals test) to assert response.status_code,
parse response.get_json(), assert json["success"] is True and then validate the
payload in json["data"] (e.g., check json["data"][0]["devName"] and ["devMac"]
for latest and that totals appear inside json["data"] for totals).
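The updated assertions could look like this sketch, run here against a canned dict standing in for `response.get_json()` once the endpoints adopt the wrapper (field names follow the example above):

```python
# Canned response imitating the proposed {success, data} envelope.
latest_json = {
    "success": True,
    "data": [{"devName": "router", "devMac": "00:11:22:33:44:55"}],
}


def assert_wrapped_latest(json_body):
    """Assertions mirroring the suggested test_get_latest_device checks."""
    assert json_body["success"] is True
    assert "data" in json_body
    first = json_body["data"][0]
    assert first["devName"]
    assert first["devMac"]


assert_wrapped_latest(latest_json)
```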
In `@test/test_mcp_disablement.py`:
- Around line 9-15: The reset_registry fixture is calling
registry._disabled_tools.clear() redundantly because registry.clear_registry()
already clears that state; update the fixture (reset_registry) to only call
registry.clear_registry() before yield and after yield, and remove both direct
accesses to the private attribute registry._disabled_tools to avoid unnecessary
private attribute manipulation.
In `@test/ui/run_all_tests.py`:
- Around line 9-16: The current import lines in run_all_tests.py mistakenly
import test modules from test_helpers (which only defines helper functions) —
update the imports to import the actual test module files themselves, e.g.
replace "from .test_helpers import test_ui_dashboard" with direct imports of the
modules (use "from . import test_ui_dashboard" or "from .test_ui_dashboard
import <exported names>" as appropriate) for each referenced test module
(test_ui_dashboard, test_ui_devices, test_ui_network, test_ui_maintenance,
test_ui_multi_edit, test_ui_notifications, test_ui_settings, test_ui_plugins) so
the real test modules are loaded at runtime.
🟡 Minor comments (7)
test/ui/test_ui_settings.py-180-180 (1)
180-180: Incorrect wait usage after `send_keys`.

`wait_for_page_load` waits for `document.readyState == 'complete'`, which is already true after typing into an input field. This wait doesn't help synchronize with any JavaScript processing of the input change. Consider using a brief explicit delay or a more targeted wait if needed.

🔧 Suggested fix

```diff
     plugins_keep_hist_input.clear()
     plugins_keep_hist_input.send_keys(new_value)
-    wait_for_page_load(driver, timeout=10)
+    time.sleep(0.5)  # Brief delay for JS change handlers
```

Or remove entirely if the subsequent button click doesn't depend on async processing.
test/test_wol_validation.py-45-66 (1)
45-66: Fail fast on authorization errors in "valid" WOL tests.

A 401/403 currently satisfies `!= 422` and can mask auth regressions. Add explicit checks like in the invalid tests.

🛠️ Proposed fix

```diff
 def test_wol_valid_mac(auth_headers):
     """Ensure a valid MAC request is accepted (anything except 422 is acceptable)."""
     payload = {"devMac": "00:11:22:33:44:55"}
     resp = requests.post(
         f"{BASE_URL}/nettools/wakeonlan",
         json=payload,
         headers=auth_headers,
         timeout=REQUEST_TIMEOUT,
     )
+    if resp.status_code in (401, 403):
+        pytest.fail(f"Authorization failed: {resp.status_code} {resp.text}")
     assert resp.status_code != 422, f"Validation failed for valid MAC: {resp.text}"

 def test_wol_valid_ip(auth_headers):
     """Ensure an IP-based request passes validation (404 acceptable, 422 is not)."""
     payload = {"ip": "1.2.3.4"}
     resp = requests.post(
         f"{BASE_URL}/nettools/wakeonlan",
         json=payload,
         headers=auth_headers,
         timeout=REQUEST_TIMEOUT,
     )
+    if resp.status_code in (401, 403):
+        pytest.fail(f"Authorization failed: {resp.status_code} {resp.text}")
     assert resp.status_code != 422, f"Validation failed for valid IP payload: {resp.text}"
```

server/api_server/sse_endpoint.py-157-165 (1)
157-165: Add CORS headers to `/sse/stats` OPTIONS response and use `jsonify()` for the unauthorized response.

The `/sse/state` endpoint manually adds CORS headers to its OPTIONS response; `/sse/stats` omits them despite handling OPTIONS the same way. Additionally, the unauthorized response in `/sse/stats` returns a plain dict instead of using `jsonify()`, which is inconsistent with `/sse/state`.

🛠️ Proposed fix

```diff
 @app.route("/sse/stats", methods=["GET", "OPTIONS"])
 def api_sse_stats():
     """Get SSE endpoint statistics for debugging"""
     if request.method == "OPTIONS":
-        return jsonify({"success": True}), 200
+        response = jsonify({"success": True})
+        response.headers["Access-Control-Allow-Origin"] = request.headers.get("Origin", "*")
+        response.headers["Access-Control-Allow-Methods"] = "GET, OPTIONS"
+        response.headers["Access-Control-Allow-Headers"] = "Content-Type, Authorization"
+        return response, 200
     if is_authorized and not is_authorized():
-        return {"success": False, "error": "Unauthorized"}, 401
+        return jsonify({"success": False, "error": "Unauthorized"}), 401
```

test/api_endpoints/test_mcp_openapi_spec.py-255-287 (1)
255-287: Registry cleanup may leak across tests.

The `finally` block calls `_register_all_endpoints()`, which is a no-op now, leaving the registry empty and potentially affecting later tests that assume it's populated. Consider repopulating via `generate_openapi_spec()` (or explicit introspection) after `clear_registry()`.

🛠️ Suggested fix

```diff
     finally:
         # Restore original registry
         clear_registry()
-        from api_server.openapi.spec_generator import _register_all_endpoints
-        _register_all_endpoints()
+        from api_server.openapi.spec_generator import generate_openapi_spec
+        generate_openapi_spec()
```

front/plugins/plugin_helper.py-178-208 (1)
178-208: Guard `normalize_mac` against None/empty input to avoid bogus outputs.

`str(None)` becomes `"NONE"` and normalizes to `"NO:NE"`, which can leak into storage if callers pass missing values. Add an early return for None/empty inputs (or raise).

🛠️ Suggested fix

```diff
 def normalize_mac(mac):
     """
     Normalize a MAC address to the standard format with colon separators.
@@
     :param mac: The MAC address to normalize.
     :return: The normalized MAC address.
     """
-    s = str(mac).upper().strip()
+    if mac is None:
+        return ""
+    s = str(mac).strip()
+    if not s:
+        return ""
+    s = s.upper()
```

test/api_endpoints/test_mcp_openapi_spec.py-121-178 (1)
121-178: Fix Ruff warnings: annotate ClassVar and rename unused loop variable.

RUF012 flags `HTTP_METHODS` as a mutable class attribute without `ClassVar`, and B007 flags an unused loop variable in `test_post_operations_have_request_body_schema`.

🛠️ Suggested fix

```diff
-from pydantic import ValidationError
+from pydantic import ValidationError
+from typing import ClassVar
@@
 class TestOpenAPISpecGenerator:
     """Test the OpenAPI spec generator."""

-    HTTP_METHODS = {"get", "post", "put", "patch", "delete", "options", "head", "trace"}
+    HTTP_METHODS: ClassVar[set[str]] = {"get", "post", "put", "patch", "delete", "options", "head", "trace"}
@@
-    for path, methods in spec["paths"].items():
+    for _path, methods in spec["paths"].items():
         if "post" in methods:
             details = methods["post"]
```

server/api_server/openapi/spec_generator.py-187-191 (1)
187-191: `_register_all_endpoints` is now a no-op; restore registry for legacy callers/tests.

Legacy tests (and call sites) still invoke this function expecting registry population. Leaving it empty can yield an empty spec or brittle tests. Consider rebuilding the registry here using the existing introspection helpers.

🛠️ Suggested fix

```diff
 def _register_all_endpoints():
     """Dummy function for compatibility with legacy tests."""
-    pass
+    try:
+        from ..api_server_start import app as start_app
+        from ..graphql_endpoint import devicesSchema
+    except (ImportError, AttributeError):
+        return
+    clear_registry()
+    introspect_graphql_schema(devicesSchema)
+    introspect_flask_app(start_app)
```
🧹 Nitpick comments (26)
test/api_endpoints/test_events_endpoints.py (1)
142-161: Move import to module level and avoid silently swallowing parse errors.

- The `datetime` import should be at the top of the file with other imports for clarity and slight performance improvement.
- Silently passing on `ValueError`/`TypeError` (lines 160-161) can mask unexpected data issues during test runs. Consider logging or using `pytest.warns`/`pytest.fail` to surface these.

♻️ Suggested refactor

At top of file, add:

```diff
 from datetime import timedelta
+from datetime import datetime
```

Then simplify the parsing block:

```diff
-    except (ValueError, TypeError):
-        pass  # Skip events with unparseable dates
+    except (ValueError, TypeError) as e:
+        # Log unexpected parse failures for visibility
+        print(f"Warning: Could not parse eve_DateTime '{ev_time_str}': {e}")
```

test/ui/test_helpers.py (1)
142-151: Consider catching a more specific exception.

The bare `except Exception` is overly broad and could mask unexpected errors. Since `find_element` typically raises `NoSuchElementException` or `StaleElementReferenceException`, consider catching those specifically, or at minimum `selenium.common.exceptions.WebDriverException`.

♻️ Suggested improvement

```diff
+from selenium.common.exceptions import WebDriverException
+
 def wait_for_input_value(driver, element_id, timeout=10):
     """Wait for the input with given id to have a non-empty value and return it."""
     def _get_val(d):
         try:
             el = d.find_element(By.ID, element_id)
             val = el.get_attribute("value")
             return val if val else False
-        except Exception:
+        except WebDriverException:
             return False
     return WebDriverWait(driver, timeout).until(_get_val)
```

test/ui/test_ui_maintenance.py (1)
10-10: Remove unused `noqa` directive.

The `# noqa: E402` comment is unnecessary since this import is not placed after module-level code.

♻️ Suggested fix

```diff
-from .test_helpers import BASE_URL, api_get, wait_for_page_load  # noqa: E402
+from .test_helpers import BASE_URL, api_get, wait_for_page_load
```

test/ui/test_ui_notifications.py (2)
14-21: Redundant wait pattern.

The test waits for body presence with `WebDriverWait` then immediately calls `wait_for_page_load`. Since `wait_for_page_load` waits for `document.readyState == "complete"`, the body will already be present. Consider removing the explicit body wait for consistency with other tests.

♻️ Suggested simplification

```diff
 def test_notifications_page_loads(driver):
     """Test: Notifications page loads successfully"""
     driver.get(f"{BASE_URL}/userNotifications.php")
-    WebDriverWait(driver, 10).until(
-        EC.presence_of_element_located((By.TAG_NAME, "body"))
-    )
     wait_for_page_load(driver, timeout=10)
     assert "notification" in driver.page_source.lower(), "Page should contain notification content"
```

7-9: Potentially unused imports.

If the redundant wait pattern is removed from `test_notifications_page_loads`, the `EC` import becomes unused. Consider cleaning up imports if you simplify the wait logic.

test/ui/test_ui_plugins.py (1)
14-21: Redundant wait pattern (same as other tests).

Same issue as in `test_ui_notifications.py`: the explicit body presence wait followed by `wait_for_page_load` is redundant.

♻️ Suggested simplification

```diff
 def test_plugins_page_loads(driver):
     """Test: Plugins page loads successfully"""
     driver.get(f"{BASE_URL}/plugins.php")
-    WebDriverWait(driver, 10).until(
-        EC.presence_of_element_located((By.TAG_NAME, "body"))
-    )
     wait_for_page_load(driver, timeout=10)
     assert "plugin" in driver.page_source.lower(), "Page should contain plugin content"
```

test/ui/test_ui_multi_edit.py (1)
14-23: Redundant wait pattern (same as other tests).

Consistent with the pattern seen in other test files: consider removing the explicit body presence wait since `wait_for_page_load` already ensures the document is ready.

♻️ Suggested simplification

```diff
 def test_multi_edit_page_loads(driver):
     """Test: Multi-edit page loads successfully"""
     driver.get(f"{BASE_URL}/multiEditCore.php")
-    WebDriverWait(driver, 10).until(
-        EC.presence_of_element_located((By.TAG_NAME, "body"))
-    )
     wait_for_page_load(driver, timeout=10)
     # Check page loaded without fatal errors
     assert "fatal" not in driver.page_source.lower(), "Page should not show fatal errors"
     assert len(driver.page_source) > 100, "Page should load some content"
```

test/ui/test_ui_network.py (1)
17-20: Redundant wait pattern.

The `WebDriverWait` for body presence (lines 17-19) followed by `wait_for_page_load` (line 20) is redundant. The `wait_for_page_load` helper already waits for the document to be complete, which implies the body is present.

♻️ Suggested simplification

```diff
 def test_network_page_loads(driver):
     """Test: Network page loads successfully"""
     driver.get(f"{BASE_URL}/network.php")
-    WebDriverWait(driver, 10).until(
-        EC.presence_of_element_located((By.TAG_NAME, "body"))
-    )
     wait_for_page_load(driver, timeout=10)
     assert driver.title, "Network page should have a title"
```

test/ui/test_ui_dashboard.py (1)
15-15: Remove unused `noqa` directive.

The `# noqa: E402` comment is unnecessary here. Since you're using a relative import (`.test_helpers`), the import-not-at-top-of-file rule doesn't apply with the current linter configuration.

♻️ Suggested fix

```diff
-from .test_helpers import BASE_URL, wait_for_page_load, wait_for_element_by_css  # noqa: E402
+from .test_helpers import BASE_URL, wait_for_page_load, wait_for_element_by_css
```

test/ui/test_ui_devices.py (3)
14-14: Remove unused `noqa` directive.

The `# noqa: E402` comment is unnecessary since relative imports don't trigger this rule with the current linter configuration.

♻️ Suggested fix

```diff
-from .test_helpers import BASE_URL, API_BASE_URL, api_get, wait_for_page_load, wait_for_element_by_css, wait_for_input_value  # noqa: E402
+from .test_helpers import BASE_URL, API_BASE_URL, api_get, wait_for_page_load, wait_for_element_by_css, wait_for_input_value
```

77-79: Redundant `requests` import inside function.

The `requests` module is imported inside the function, but `api_get` from `test_helpers` already wraps `requests.get`. Consider using `api_get` for consistency, or import `requests` at the module level if direct usage is preferred.

125-127: Consider using the `api_get` helper for consistency.

Line 127 uses `requests.get` directly, but the `api_get` helper is already imported and handles authorization headers. Using it would improve consistency and reduce code duplication.

♻️ Suggested refactor

```diff
     # --- Verify device via API ---
-    headers = {"Authorization": f"Bearer {api_token}"}
-    verify_response = requests.get(f"{API_BASE_URL}/device/{test_mac}", headers=headers)
+    verify_response = api_get(f"/device/{test_mac}", api_token)
```

This also allows you to remove the `import requests` statement inside the function (line 79).

test/ui/test_ui_settings.py (2)
19-22: Redundant wait pattern.

Similar to `test_ui_network.py`, the explicit `WebDriverWait` for body presence followed by `wait_for_page_load` is redundant.

♻️ Suggested simplification

```diff
 def test_settings_page_loads(driver):
     """Test: Settings page loads successfully"""
     driver.get(f"{BASE_URL}/settings.php")
-    WebDriverWait(driver, 10).until(
-        EC.presence_of_element_located((By.TAG_NAME, "body"))
-    )
     wait_for_page_load(driver, timeout=10)
     assert "setting" in driver.page_source.lower(), "Page should contain settings content"
```

101-117: Incomplete migration from `time.sleep`.

Several `time.sleep` calls remain in this function while other tests in the PR have been fully migrated to wait helpers. For consistency and test reliability, consider replacing these with appropriate explicit waits or removing them if unnecessary.

test/ui/test_ui_waits.py (3)
13-13: Remove unused `noqa` directive.

The `# noqa: E402` comment is unnecessary since relative imports don't trigger this rule.

♻️ Suggested fix

```diff
-from .test_helpers import BASE_URL, wait_for_page_load, wait_for_element_by_css, wait_for_input_value  # noqa: E402
+from .test_helpers import BASE_URL, wait_for_page_load, wait_for_element_by_css, wait_for_input_value
```

39-67: Broad exception handling and try-except-pass patterns.

While catching broad `Exception` in test code is often acceptable for resilience (allowing tests to skip gracefully), the nested try-except blocks make this hard to follow. The try-except-pass at lines 60-61 silently swallows failures without any indication of what happened.

Consider at minimum adding a comment explaining the fallback strategy, or using more specific exception types where possible (e.g., `TimeoutException` from Selenium).

♻️ Suggested improvement for clarity

```diff
+from selenium.common.exceptions import TimeoutException
+
 def test_wait_for_input_value_on_devices(driver):
     ...
     try:
         wait_for_element_by_css(driver, "#NEWDEV_devMac", timeout=5)
-    except Exception:
+    except TimeoutException:
+        # Form didn't appear via modal; try direct navigation
         ...
```

27-77: Code duplication with `test_ui_devices.py`.

This test function (`test_wait_for_input_value_on_devices`) largely duplicates the add-device flow from `test_add_device_with_generated_mac_ip` in `test_ui_devices.py`. Consider extracting the shared "open add device form" logic into a helper function in `test_helpers.py` to reduce duplication.

server/models/device_instance.py (1)
521-548: Consider using explicit defaults for clarity.

The `data.get("field") or default` pattern treats all falsy values (including `0`, `""`, `False`) as triggers for the default. While this works for the current use case, it differs subtly from `data.get("field", default)`, which only applies the default when the key is absent.

For example, if a caller explicitly passes `devFavorite=False`, it becomes `0`, which may be intended but is implicit. Consider documenting this coercion behavior or using explicit type checks for critical fields if strict value preservation is needed.

server/api_server/graphql_endpoint.py (1)
585-586: Consider splitting this module to stay under 500 LOC.

This file is ~586 lines; extracting GraphQL types or resolvers would align with the size guideline. As per coding guidelines, please keep Python files ≤500 lines.

server/api_server/openapi/swagger.html (1)
17-21: Make the OpenAPI URL resilient to sub-path deployments.

Using an absolute `/openapi.json` breaks when NetAlertX is hosted under a base path (e.g., `/netalertx/docs`). Consider resolving relative to the current location.

♻️ Proposed tweak

```diff
-    url: '/openapi.json',
+    url: new URL('openapi.json', window.location.href).toString(),
```

test/test_wol_validation.py (1)
17-27: Consider sharing the server-readiness helper across runtime tests.

This `wait_for_server()` mirrors the one in `test/verify_runtime_validation.py`; extracting a shared helper would avoid drift in retry timing/endpoints.

test/verify_runtime_validation.py (1)
10-31: Make retry delay configurable to match other runtime tests.
This file hard-codes `time.sleep(1)`. Consider a `NETALERTX_SERVER_DELAY` env var to keep runtime tests consistent and tunable.

♻️ Proposed tweak

```diff
 BASE_URL = os.getenv("NETALERTX_BASE_URL", "http://localhost:20212")
 REQUEST_TIMEOUT = float(os.getenv("NETALERTX_REQUEST_TIMEOUT", "5"))
 SERVER_RETRIES = int(os.getenv("NETALERTX_SERVER_RETRIES", "5"))
+SERVER_DELAY = float(os.getenv("NETALERTX_SERVER_DELAY", "1"))
 ...
         except requests.RequestException:
             pass
-        time.sleep(1)
+        time.sleep(SERVER_DELAY)
```

test/api_endpoints/test_mcp_openapi_spec.py (2)
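Sketched as a standalone helper, the env-driven retry pattern looks like the following (the env var names follow the tweak above; `probe` is a stand-in for the real HTTP health check, so this is an illustration rather than the test suite's actual code):

```python
import os
import time


def wait_for_server(probe, retries=None, delay=None):
    """Poll `probe` until it returns True, honoring env-tunable retry knobs."""
    if retries is None:
        retries = int(os.getenv("NETALERTX_SERVER_RETRIES", "5"))
    if delay is None:
        delay = float(os.getenv("NETALERTX_SERVER_DELAY", "1"))
    for _ in range(retries):
        if probe():
            return True
        time.sleep(delay)
    return False
```

Because both knobs fall back to env vars only when not passed explicitly, tests can override them per-call while CI tunes them globally.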
7-33: Avoid hardcoded "/app" fallback and move `sys.path` setup before imports.

`INSTALL_PATH` defaults to `/app`, and `sys.path` is extended after the `api_server` imports, so it won't help when `PYTHONPATH` isn't set. Consider moving the path setup above the imports and relying on `NETALERTX_APP` being set in the test environment (or skipping path extension when unset). As per coding guidelines, avoid hardcoded paths for file locations.

♻️ Suggested refactor

```diff
-import sys
-import os
-import pytest
-
-from pydantic import ValidationError
+import os
+import sys
+import pytest
+
+INSTALL_PATH = os.getenv("NETALERTX_APP")
+if INSTALL_PATH:
+    sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
+
+from pydantic import ValidationError
@@
-INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
-sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
```
65-88: Remove or re‑enable the commented‑out ValidationError checks.

The commented negative tests reduce coverage and leave dead code. Either re‑enable the assertions or delete them.
server/api_server/openapi/introspection.py (1)
39-105: Avoid direct access to `_operation_ids` to reduce coupling and race risk.

`introspect_flask_app` checks `_operation_ids` without the registry lock; a concurrent registration could still race. Consider moving uniqueness handling into the registry (e.g., try/except `DuplicateOperationIdError` and suffixing there) or acquiring the registry lock while reserving the ID.
98-100: Consider splitting this file into smaller modules/blueprints.

This file is far beyond 500 LOC and has grown into multiple concerns; separating by domain (devices, events, nettools, messaging, etc.) will improve maintainability and align with repo standards. As per coding guidelines, keep files ≤500 LOC.
- Mac
- Flask debug
- Threaded flask
- propagate token in GET requests
- enhance spec docs
- normalize MAC x2
- mcp disablement redundant private attribute
- run all tests imports
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@test/ui/test_ui_devices.py`:
- Around line 9-14: The import line that brings in test_helpers contains an
unnecessary "noqa: E402" directive; open the file and remove the trailing " #
noqa: E402" from the "from .test_helpers import BASE_URL, API_BASE_URL, api_get,
wait_for_page_load, wait_for_element_by_css, wait_for_input_value" statement so
the import is normal and no lint suppression remains (no other changes needed).
- Around line 85-99: The try/except around wait_for_element_by_css should catch
the specific Selenium timeout exception rather than a broad Exception: import
TimeoutException from selenium.common.exceptions and change the except block to
"except TimeoutException:" so only a missing-element timeout falls through to
the fallback find_elements XPath logic (variables/functions to locate:
add_selector, add_button, wait_for_element_by_css, add_buttons,
driver.execute_script). Leave other exceptions unhandled so they propagate.
In `@test/ui/test_ui_maintenance.py`:
- Line 11: Remove the unused flake8 suppression by deleting " # noqa: E402" from
the import statement that brings in BASE_URL, api_get, and wait_for_page_load
(the line starting with "from .test_helpers import BASE_URL, api_get,
wait_for_page_load # noqa: E402"); simply keep the plain import so the unused
E402 suppression is no longer present.
- Around line 45-52: The current silent try/except around the Backup/Restore tab
activation hides real failures; remove the try/except so exceptions from
WebDriverWait/EC.element_to_be_clickable (and tab.click()) surface, or if you
really must ignore failures add explicit logging that includes the exception and
context (e.g., log the exception and that tab activation failed) before
continuing; update the block using WebDriverWait(driver,
5).until(EC.element_to_be_clickable((By.ID, "tab_BackupRestore_id"))) and
tab.click() in test_ui_maintenance.py accordingly.
- Around line 86-94: The exception handler around driver.switch_to.alert should
preserve traceback and avoid using assert False; replace the `assert False,
f"Alert present: {alert_text}"` with `raise AssertionError(f"Alert present:
{alert_text}")` and change the inner `except Exception: raise e` to a bare
`raise` so the original traceback is preserved; update the block that references
`driver.switch_to.alert`, `alert_text`, and the outer `except Exception as e`
accordingly.
♻️ Duplicate comments (2)
server/api_server/api_server_start.py (2)
102-116: Hardcoded CORS localhost ports remain.
This still violates the "no hardcoded ports" guideline and should be config-driven. As per coding guidelines, ports must not be hardcoded.
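One way to make the origin list config-driven is to parse a comma-separated env var; the `CORS_ORIGINS` name is taken from this review's context, while the parsing details and function name are assumptions for illustration:

```python
import os


def get_cors_origins(default="http://localhost"):
    """Parse a comma-separated CORS_ORIGINS env var into a clean origin list."""
    raw = os.environ.get("CORS_ORIGINS", "")
    origins = [o.strip() for o in raw.split(",") if o.strip()]
    # Fall back to a single safe default instead of "*"
    return origins or [default]
```

Deployments then set `CORS_ORIGINS=https://a.example,https://b.example` rather than baking localhost ports (or a wildcard) into the code.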
569-607: These endpoints still bypass the `success`/`error` response contract.

Device export (JSON), latest/favorite, and metrics remain unwrapped. As per coding guidelines, API responses should include `success` (and `error` when false).
🧹 Nitpick comments (6)
test/ui/run_all_tests.py (1)
34-43: Consider validating test file existence before execution.

If a test file is missing or renamed, pytest will report it but the error context may be less clear. A pre-check could provide a more actionable error message.
♻️ Optional improvement for clearer error handling
```diff
 for name, filename in test_modules:
     try:
         print(f"\nRunning {name} tests...")
         file_path = os.path.join(base_dir, filename)
+        if not os.path.exists(file_path):
+            print(f"\n✗ {name} tests skipped: file not found ({file_path})")
+            results[name] = False
+            continue
         # Run pytest
         result = pytest.main([file_path, "-v"])
         results[name] = result == 0
```
506-617: Close the DB connection only once.
`conn.close()` is called in the try block and again in `finally`. Prefer a single close to avoid double-close surprises in some sqlite drivers.

♻️ Suggested simplification

```diff
         cur.execute(sql, values)
         conn.commit()
-        conn.close()

         mylog("debug", f"[DeviceInstance] setDeviceData SQL: {sql.strip()}")
         mylog("debug", f"[DeviceInstance] setDeviceData VALUES:{values}")
@@
     finally:
         if conn:
             conn.close()
```
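The close-once pattern can be shown end to end with stdlib `sqlite3`; the table, SQL, and function name here are illustrative, not NetAlertX's actual schema:

```python
import sqlite3


def set_device_data(db_path, mac, name):
    """Write a row, letting `finally` own the single conn.close() call."""
    conn = None
    try:
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS devices (devMac TEXT PRIMARY KEY, devName TEXT)"
        )
        conn.execute("INSERT OR REPLACE INTO devices VALUES (?, ?)", (mac, name))
        conn.commit()
        return True
    finally:
        # Only close here; never also inside the try body
        if conn:
            conn.close()
```

Because `finally` runs on both success and failure, putting the one `close()` there guarantees cleanup without risking a second close on the happy path.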
35-70: Ensure cleanup runs even if assertions fail.
If an assertion fails mid-test, the device entry may be left behind. Consider a `try/finally` cleanup or a fixture to guarantee deletion.

🧹 Example cleanup guard

```diff
 def test_update_normalization(client, api_token, test_mac_norm, test_parent_mac_input, test_parent_mac_norm):
+    device_handler = DeviceInstance()
+    try:
         # 1. Create a device (using normalized MAC)
         create_payload = {
             "createNew": True,
             "devName": "Normalization Test Device",
             "devOwner": "Unit Test",
         }
         resp = client.post(f"/device/{test_mac_norm}", json=create_payload, headers=auth_headers(api_token))
         assert resp.status_code == 200
         assert resp.json.get("success") is True
@@
-    device_handler = DeviceInstance()
-    device = device_handler.getDeviceData(test_mac_norm)
+        device = device_handler.getDeviceData(test_mac_norm)
@@
-    # Cleanup
-    device_handler.deleteDeviceByMAC(test_mac_norm)
+    finally:
+        device_handler.deleteDeviceByMAC(test_mac_norm)
```
364-397: Prefer narrowing broad `except Exception` blocks.

`ValueError`/`json.JSONDecodeError` is sufficient here and avoids masking unexpected failures.

♻️ Proposed tweak

```diff
-    try:
-        if re.fullmatch(r"-?\d+", v):
-            return int(v)
-    except Exception:
-        pass
+    if re.fullmatch(r"-?\d+", v):
+        return int(v)
@@
-    try:
-        return json.loads(v)
-    except Exception:
+    try:
+        return json.loads(v)
+    except json.JSONDecodeError:
         # Fallback to raw string
         return v
```
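Put together, the narrowed conversion reads as a small standalone function (the name `loose_convert` is hypothetical; the real helper lives in `server/helper.py`):

```python
import json
import re


def loose_convert(v):
    """Convert a string to an int or JSON value, falling back to the raw string."""
    # Integers never raise here, so no try/except is needed for this branch
    if re.fullmatch(r"-?\d+", v):
        return int(v)
    try:
        return json.loads(v)
    except json.JSONDecodeError:
        # Fallback to raw string
        return v
```

Catching only `json.JSONDecodeError` means a genuinely unexpected failure (say, `v` not being a string at all) surfaces immediately instead of being silently swallowed.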
6-16: Drop the unused `recorder` parameter in `_make_fake_thread`.

Keeps tests lint-clean without changing behavior.

♻️ Proposed tweak

```diff
-def _make_fake_thread(recorder):
+def _make_fake_thread():
     class FakeThread:
         def __init__(self, target=None):
             self._target = target
@@
-    FakeThread = _make_fake_thread(called)
+    FakeThread = _make_fake_thread()
@@
-    FakeThread = _make_fake_thread(called)
+    FakeThread = _make_fake_thread()
@@
-    FakeThread = _make_fake_thread(called)
+    FakeThread = _make_fake_thread()
```

Also applies to: 34-36, 65-67, 98-100
server/api_server/api_server_start.py (1)
134-159: Use constant‑time comparison for token checks.
This avoids timing side‑channels and aligns with the MCP auth helper.

🔒 Suggested change

```diff
+import secrets
@@
-    is_authorized_result = (header_token == expected_token) or (query_token == expected_token)
+    expected = str(expected_token)
+    is_authorized_result = (
+        (header_token and secrets.compare_digest(header_token, expected))
+        or (query_token and secrets.compare_digest(query_token, expected))
+    )
```
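In isolation, the constant-time check looks like this (the wrapper function name is assumed; only `secrets.compare_digest` is the real API being recommended):

```python
import secrets


def is_authorized(header_token, query_token, expected_token):
    """Constant-time comparison of either token source against the expected value."""
    expected = str(expected_token)
    return bool(
        (header_token and secrets.compare_digest(header_token, expected))
        or (query_token and secrets.compare_digest(query_token, expected))
    )
```

`secrets.compare_digest` takes time independent of where the first mismatching character is, so an attacker cannot recover the token byte-by-byte from response latency the way they can with `==`.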
```diff
 from selenium.webdriver.common.by import By

 # Add test directory to path
 sys.path.insert(0, os.path.dirname(__file__))

-from test_helpers import BASE_URL, API_BASE_URL, api_get  # noqa: E402
+from .test_helpers import BASE_URL, API_BASE_URL, api_get, wait_for_page_load, wait_for_element_by_css, wait_for_input_value  # noqa: E402
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's find the test file and check its location
fd -t f "test_ui_devices.py"
```

Repository: netalertx/NetAlertX

Length of output: 89

🏁 Script executed:

```shell
# Read the first 20 lines to see the context
head -20 test/ui/test_ui_devices.py
```

Repository: netalertx/NetAlertX

Length of output: 648

🏁 Script executed:

```shell
# Look for Ruff configuration files
fd -t f "(pyproject.toml|.ruff.toml|setup.cfg|ruff.toml)" | head -10
```

Repository: netalertx/NetAlertX

Length of output: 87

🏁 Script executed:

```shell
# Check if there's a pyproject.toml and search for ruff configuration
cat pyproject.toml 2>/dev/null | grep -A 20 "\[tool.ruff\]" || echo "No Ruff config in pyproject.toml"
```

Repository: netalertx/NetAlertX

Length of output: 199

🏁 Script executed:

```shell
cat ruff.toml
```

Repository: netalertx/NetAlertX

Length of output: 969
Remove unused noqa: E402 directive.
E402 is not enforced in the active Ruff configuration (ruff.toml only selects E7 and E9 error codes); drop the noqa: E402 comment.
🧹 Proposed cleanup
```diff
-from .test_helpers import BASE_URL, API_BASE_URL, api_get, wait_for_page_load, wait_for_element_by_css, wait_for_input_value  # noqa: E402
+from .test_helpers import BASE_URL, API_BASE_URL, api_get, wait_for_page_load, wait_for_element_by_css, wait_for_input_value
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
from selenium.webdriver.common.by import By

# Add test directory to path
sys.path.insert(0, os.path.dirname(__file__))

from .test_helpers import BASE_URL, API_BASE_URL, api_get, wait_for_page_load, wait_for_element_by_css, wait_for_input_value
```
🧰 Tools
🪛 Ruff (0.14.13)
14-14: Unused noqa directive (non-enabled: E402)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In `@test/ui/test_ui_devices.py` around lines 9 - 14, The import line that brings
in test_helpers contains an unnecessary "noqa: E402" directive; open the file
and remove the trailing " # noqa: E402" from the "from .test_helpers import
BASE_URL, API_BASE_URL, api_get, wait_for_page_load, wait_for_element_by_css,
wait_for_input_value" statement so the import is normal and no lint suppression
remains (no other changes needed).
```diff
 # Wait for the "New Device" link specifically to ensure it's loaded
 add_selector = "a[href*='deviceDetails.php?mac=new'], button#btnAddDevice, .btn-add-device"
 try:
     add_button = wait_for_element_by_css(driver, add_selector, timeout=10)
 except Exception:
     # Fallback to broader search if specific selector fails
     add_buttons = driver.find_elements(By.XPATH, "//button[contains(text(),'Add') or contains(text(),'New')] | //a[contains(text(),'Add') or contains(text(),'New')]")
-    if not add_buttons:
-        assert True, "Add device button not found, skipping test"
-        return
-    add_buttons[0].click()
-    time.sleep(2)
+    if add_buttons:
+        add_button = add_buttons[0]
+    else:
+        assert True, "Add device button not found, skipping test"
+        return

 # Use JavaScript click to bypass any transparent overlays from the chart
 driver.execute_script("arguments[0].click();", add_button)
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's check if the file exists and read the relevant lines
cd test/ui && wc -l test_ui_devices.py
```

Repository: netalertx/NetAlertX

Length of output: 85

🏁 Script executed:

```shell
# Read the file around lines 85-99
head -100 test/ui/test_ui_devices.py | tail -30
```

Repository: netalertx/NetAlertX

Length of output: 1330

🏁 Script executed:

```shell
# Also check the imports at the top of the file
head -30 test/ui/test_ui_devices.py
```

Repository: netalertx/NetAlertX

Length of output: 1112

🏁 Script executed:

```shell
# Find the wait_for_element_by_css function to understand what exceptions it raises
find . -type f -name "*.py" | xargs grep -l "def wait_for_element_by_css"
```

Repository: netalertx/NetAlertX

Length of output: 88

🏁 Script executed:

```shell
cat test/ui/test_helpers.py
```

Repository: netalertx/NetAlertX

Length of output: 5329
Narrow the exception type for clarity and better error handling.
Catching broad Exception can mask unexpected errors in Selenium operations. Since wait_for_element_by_css raises TimeoutException when the element is not found within the timeout period, narrow the exception to that specific type.
🎯 Changes needed
```diff
+from selenium.common.exceptions import TimeoutException
 from selenium.webdriver.common.by import By
```

```diff
-    except Exception:
+    except TimeoutException:
```

🧰 Tools
🪛 Ruff (0.14.13)
89-89: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@test/ui/test_ui_devices.py` around lines 85 - 99, The try/except around
wait_for_element_by_css should catch the specific Selenium timeout exception
rather than a broad Exception: import TimeoutException from
selenium.common.exceptions and change the except block to "except
TimeoutException:" so only a missing-element timeout falls through to the
fallback find_elements XPath logic (variables/functions to locate: add_selector,
add_button, wait_for_element_by_css, add_buttons, driver.execute_script). Leave
other exceptions unhandled so they propagate.
```diff
 from selenium.webdriver.support import expected_conditions as EC

-from test_helpers import BASE_URL, api_get
+from .test_helpers import BASE_URL, api_get, wait_for_page_load  # noqa: E402
```
Remove the unused noqa suppression.
Ruff flags the E402 suppression as unused here.
🧹 Proposed fix
```diff
-from .test_helpers import BASE_URL, api_get, wait_for_page_load  # noqa: E402
+from .test_helpers import BASE_URL, api_get, wait_for_page_load
```

📝 Committable suggestion
```python
from .test_helpers import BASE_URL, api_get, wait_for_page_load
```
🧰 Tools
🪛 Ruff (0.14.13)
11-11: Unused noqa directive (non-enabled: E402)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In `@test/ui/test_ui_maintenance.py` at line 11, Remove the unused flake8
suppression by deleting " # noqa: E402" from the import statement that brings in
BASE_URL, api_get, and wait_for_page_load (the line starting with "from
.test_helpers import BASE_URL, api_get, wait_for_page_load # noqa: E402");
simply keep the plain import so the unused E402 suppression is no longer
present.
```python
# Ensure the Backup/Restore tab is active so the button is in a clickable state
try:
    tab = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.ID, "tab_BackupRestore_id"))
    )
    tab.click()
except Exception:
    pass
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "test_ui_maintenance.py" | head -20
```

Repository: netalertx/NetAlertX

Length of output: 95

🏁 Script executed:

```shell
cat -n ./test/ui/test_ui_maintenance.py
```

Repository: netalertx/NetAlertX

Length of output: 6073
Remove the try-except wrapping tab activation or add explicit logging.
Silently swallowing the exception here can mask real UI regressions. If the tab activation fails due to page load issues or UI changes, the test will continue and potentially fail at a later point with misleading errors. Either let failures surface or add logging to clarify why this is safe to ignore.
Suggested adjustment
```diff
-    try:
-        tab = WebDriverWait(driver, 5).until(
-            EC.element_to_be_clickable((By.ID, "tab_BackupRestore_id"))
-        )
-        tab.click()
-    except Exception:
-        pass
+    tab = WebDriverWait(driver, 5).until(
+        EC.element_to_be_clickable((By.ID, "tab_BackupRestore_id"))
+    )
+    tab.click()
```

🧰 Tools
🪛 Ruff (0.14.13)
51-52: try-except-pass detected, consider logging the exception
(S110)
51-51: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In `@test/ui/test_ui_maintenance.py` around lines 45 - 52, The current silent
try/except around the Backup/Restore tab activation hides real failures; remove
the try/except so exceptions from WebDriverWait/EC.element_to_be_clickable (and
tab.click()) surface, or if you really must ignore failures add explicit logging
that includes the exception and context (e.g., log the exception and that tab
activation failed) before continuing; update the block using
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.ID,
"tab_BackupRestore_id"))) and tab.click() in test_ui_maintenance.py accordingly.
```python
except Exception as e:
    # Check for alerts that might be blocking page_source access
    try:
        alert = driver.switch_to.alert
        alert_text = alert.text
        alert.accept()
        assert False, f"Alert present: {alert_text}"
    except Exception:
        raise e
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n test/ui/test_ui_maintenance.py | sed -n '80,100p'
```

Repository: netalertx/NetAlertX

Length of output: 1138

🏁 Script executed:

```shell
cat -n test/ui/test_ui_maintenance.py | sed -n '70,110p'
```

Repository: netalertx/NetAlertX

Length of output: 2167

🏁 Script executed:

```shell
python3 << 'EOF'
# Verify Python behavior with assert False and -O flag
import sys

# Test 1: Check if assert statements are removed with optimization
code_with_assert = """
def test():
    assert False, "This should fail"
"""
code_with_raise = """
def test():
    raise AssertionError("This should fail")
"""

# Compile normally
normal_code = compile(code_with_assert, '<string>', 'exec')
optimized_code = compile(code_with_assert, '<string>', 'exec', optimize=2)
print("Normal compilation has assert bytecode:", b'assert' in str(normal_code))
print("Optimized compilation (-O flag) removes assert bytecode")

# Test 2: Exception chaining behavior
def test_raise_e():
    try:
        1 / 0
    except Exception as e:
        raise e

def test_bare_raise():
    try:
        1 / 0
    except Exception:
        raise

print("\nException context:")
try:
    test_raise_e()
except ZeroDivisionError as e:
    print(f"With 'raise e': __context__ is {e.__context__}, __cause__ is {e.__cause__}")

try:
    test_bare_raise()
except ZeroDivisionError as e:
    print(f"With 'bare raise': __context__ is {e.__context__}, __cause__ is {e.__cause__}")
EOF
```

Repository: netalertx/NetAlertX
Length of output: 149
Preserve exception context and avoid assert False.
assert False is stripped with -O, and raise e loses traceback context. Use raise AssertionError(...) and bare raise.
✅ Proposed fix
```diff
-            assert False, f"Alert present: {alert_text}"
+            raise AssertionError(f"Alert present: {alert_text}")
@@
-        except Exception:
-            raise e
+        except Exception:
+            raise
```

🧰 Tools
🪛 Ruff (0.14.13)
86-86: Do not catch blind exception: Exception
(BLE001)
92-92: Do not assert False (python -O removes these calls), raise AssertionError()
Replace assert False
(B011)
93-93: Do not catch blind exception: Exception
(BLE001)
94-94: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In `@test/ui/test_ui_maintenance.py` around lines 86 - 94, The exception handler
around driver.switch_to.alert should preserve traceback and avoid using assert
False; replace the `assert False, f"Alert present: {alert_text}"` with `raise
AssertionError(f"Alert present: {alert_text}")` and change the inner `except
Exception: raise e` to a bare `raise` so the original traceback is preserved;
update the block that references `driver.switch_to.alert`, `alert_text`, and the
outer `except Exception as e` accordingly.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server/api_server/api_server_start.py (1)
401-411: Validation bypassed — use the validated `payload` instead of `request.get_json()`.

This handler is decorated with `@validate_request(request_model=CopyDeviceRequest)` but ignores the validated `payload` parameter, reading directly from `request.get_json()`. This renders Pydantic validation ineffective — malformed input that should fail validation may still reach handler logic.

The same pattern appears in multiple handlers: `api_device_update_column`, `api_device_set_alias`, `api_device_open_ports`, `api_devices_delete`, `api_devices_search`, `api_wakeonlan`, and several `dbquery_*` handlers.

🔧 Proposed fix

```diff
 def api_device_copy(payload=None):
-    data = request.get_json() or {}
-    mac_from = data.get("macFrom")
-    mac_to = data.get("macTo")
-
-    if not mac_from or not mac_to:
-        return jsonify({"success": False, "message": "ERROR: Missing parameters", "error": "macFrom and macTo are required"}), 400
+    # Validation already performed by decorator; payload is guaranteed valid
+    mac_from = payload.macFrom
+    mac_to = payload.macTo

     device_handler = DeviceInstance()
     result = device_handler.copyDevice(mac_from, mac_to)
     return jsonify(result)
```

Apply this pattern to all handlers with `request_model` that currently read from `request.get_json()`.
♻️ Duplicate comments (1)
server/api_server/api_server_start.py (1)
102-125: CORS wildcard fallback is overly permissive.

Including `"*"` in the default origins list (line 116) effectively disables CORS protection when `CORS_ORIGINS` is not configured. While this may be intentional for development, it poses a security risk in production.
🧹 Nitpick comments (4)
server/api_server/api_server_start.py (4)
317-328: Fallback to `request.json` bypasses validation.

The fallback `payload if payload is not None else request.json` maintains backward compatibility but creates a code path where validation is skipped. If `payload` is `None` (e.g., validation decorator misconfiguration), unvalidated data reaches the handler.

Consider removing the fallback once the validation layer is stable:

```diff
-    data = payload if payload is not None else request.json
+    if payload is None:
+        return jsonify({"success": False, "error": "Invalid request", "message": "Request body required"}), 400
+    data = payload
```
955-970: Remove commented-out dead code.

This entire commented function adds 16 lines of clutter. Git history preserves the old implementation if ever needed.

🧹 Remove dead code

```diff
-# def trigger_scan(scan_type):
-#     """Trigger a network scan by adding it to the execution queue."""
-#     if scan_type not in ["ARPSCAN", "NMAPDEV", "NMAP"]:
-#         return {"success": False, "message": f"Invalid scan type: {scan_type}"}
-#
-#     queue = UserEventsQueueInstance()
-#     res = queue.add_event("run|" + scan_type)
-#
-#     # Handle mocks in tests that don't return a tuple
-#     if isinstance(res, tuple) and len(res) == 2:
-#         success, message = res
-#     else:
-#         success = True
-#         message = f"Action \"run|{scan_type}\" added to the execution queue."
-#
-#     return {"success": success, "message": message, "scan_type": scan_type}
-
```
984-1000: Move import to module level.

The `os` import on line 994 is already imported at line 3. The `send_from_directory` import could be moved to the top with other Flask imports for consistency.

💡 Suggested cleanup

Add to imports at top of file:

```python
from flask import Flask, request, jsonify, Response, send_from_directory
```

Then simplify the function:

```diff
 def api_docs():
     """Serve Swagger UI for API documentation."""
-    # We need to serve the static HTML file we created.
-    import os
-    from flask import send_from_directory
-
     # Assuming swagger.html is in the openapi directory
     api_server_dir = os.path.dirname(os.path.realpath(__file__))
     openapi_dir = os.path.join(api_server_dir, 'openapi')
     return send_from_directory(openapi_dir, 'swagger.html')
```
1-1728: File exceeds 500-line guideline (~1725 lines).

Per coding guidelines, files should be kept to a maximum of 500 lines. Consider splitting into domain-specific modules:

- `device_endpoints.py` — device CRUD and search
- `nettools_endpoints.py` — WoL, traceroute, nmap, etc.
- `events_endpoints.py` — event management
- `sessions_endpoints.py` — session management
- `messaging_endpoints.py` — notifications

Each module could export a Flask Blueprint, which `api_server_start.py` would register.

This is a larger refactoring effort that can be deferred, but would significantly improve maintainability. As per coding guidelines, files exceeding 500 lines should be split.