☁️ CloudExploit by KhulnaSoft Security, Ltd. 🔐 Multi‑Cloud Security Auditing Platform 🚀 AWS • Azure • GCP • Oracle • GitHub

Feature/update ci cd and docs #52
Sorry @envrs, your pull request is larger than the review limit of 150000 diff characters
Important: Review skipped. Too many files! This PR contains 212 files, which is 62 over the limit of 150.
Review Summary by Qodo

Multi-cloud security platform expansion with enhanced validation, new plugins, and comprehensive test coverage
Walkthrough

Description

- **ASL validation engine improvements**: Enhanced path parsing with better wildcard `[*]` handling, added IP/CIDR validation functions (`inCidr()`, `inCidrIPv4()`, `inCidrIPv6()`), improved operator comparisons (added `LT`, `ISEMPTY`), and refined condition evaluation logic
- **Cloud provider service expansions**: Added 13+ new AWS services (ECR, GuardDuty, WorkSpaces, Transfer, AppFlow, Cognito, WAF, Glue, ConfigService, Firehose, SES, FSx, OpenSearch), 3 new GCP services (CloudBuild, Cloud Composer, Resource Manager), and 7+ new Azure services (Blob Service, Virtual Machines, Event Grid, Event Hubs, Defender, Application Gateway, Entra ID)
- **New security plugins**: Added Cloud Functions network exposure detection, Cloud Functions V2 deprecated runtime detection, MySQL Flexible Server public access validation with firewall rules, and enhanced IAM role policy reporting with detailed policy names
- **Extensive test coverage**: Added 20+ new test suites covering PostgreSQL servers, MySQL Flexible Server, ECS Fargate, DocumentDB encryption, ASG tag propagation, Key Vault access, App Service public access, Cloud Functions V2 configurations, and Access Analyzer findings
- **Azure authentication modernization**: Migrated from deprecated request library to axios and from ms-rest-azure to @azure/identity with async/await patterns and improved error handling
- **Azure API resilience**: Implemented retry logic with token refresh mechanism for failed API calls with comprehensive throttling pattern detection
- **AWS helper functions enhancements**: Improved network exposure checking, tag validation, federated user role detection, and Lambda function URL authorization support
- **Configuration updates**: Updated AWS finspace region availability, fixed service definitions, and enhanced API call configurations across multiple cloud providers

Diagram

```mermaid
flowchart LR
    ASL["ASL Validation<br/>Engine"]
    AWS["AWS Services<br/>13+ New"]
    Azure["Azure Services<br/>7+ New"]
    GCP["GCP Services<br/>3 New"]
    Plugins["Security Plugins<br/>Network/Runtime/Access"]
    Tests["Test Coverage<br/>20+ New Suites"]
    Auth["Azure Auth<br/>Modernization"]
    Retry["API Resilience<br/>Retry Logic"]
    ASL -- "Enhanced<br/>Path Parsing" --> Plugins
    AWS -- "Service<br/>Definitions" --> Plugins
    Azure -- "Service<br/>Definitions" --> Plugins
    GCP -- "Service<br/>Definitions" --> Plugins
    Plugins -- "Validated by" --> Tests
    Auth -- "Improves" --> Retry
    Retry -- "Supports" --> Azure
```
File Changes

1. helpers/asl/asl-1.js
Code Review by Qodo
```js
console.log('Token refreshed successfully. New token data:', JSON.stringify({
    environment: loginData.environment,
    token: loginData.token ? '***' + loginData.token.slice(-8) : null,
    graphToken: loginData.graphToken ? '***' + loginData.graphToken.slice(-8) : null,
    vaultToken: loginData.vaultToken ? '***' + loginData.vaultToken.slice(-8) : null
}, null, 2));
```
1. Tokens logged in collector.js 📘 Rule violation ⛨ Security
The new token-refresh flow logs token-related fields (including partial token values) via console.log, which can leak secrets in logs. This violates secure logging requirements and increases risk of credential exposure.
Agent Prompt
## Issue description
`collectors/azure/collector.js` logs token-related values to stdout during token refresh. Even partial token values should not be logged.
## Issue Context
Compliance requires that logs contain no secrets/PII at any level, and that logs be safe for production debugging.
## Fix Focus Areas
- collectors/azure/collector.js[93-98]
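A minimal remediation sketch, assuming the field names shown in the diff above; it logs only presence flags, never token material:

```js
// Sketch: record that a refresh happened plus non-sensitive metadata only.
console.log('Token refreshed successfully.', JSON.stringify({
    environment: loginData.environment,
    hasToken: !!loginData.token,           // boolean presence flag, no token bytes
    hasGraphToken: !!loginData.graphToken,
    hasVaultToken: !!loginData.vaultToken
}, null, 2));
```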
```js
}
if (response && ((response.statusCode && response.statusCode === 429) || (response.status && response.status === 429)) &&
    body &&
    body.error &&
    body.error.message &&
    typeof body.error.message == 'string') {
    errorMessage = `TooManyRequests: ${body.error.message}`;
    return callback(errorMessage, null, response);
} else if (body &&
    body.error &&
    body.error.message &&
    typeof body.error.message == 'string') {
    return callback(body.error.message);
} else if (body &&
    body['odata.error'] &&
    body['odata.error'].message &&
    body['odata.error'].message.value &&
    typeof body['odata.error'].message.value == 'string') {
    if (body['odata.error'].requestId) {
        body['odata.error'].message.value += ` RequestId: ${body['odata.error'].requestId}`;
    }
    return callback(body['odata.error'].message.value);
} else if (body &&
    body.message &&
    typeof body.message == 'string') {
    if (body.code && typeof body.code == 'string') {
        body.message = (body.code + ': ' + body.message);
    }
    return callback(body.message);
} else if (body &&
    body.Message &&
    typeof body.Message == 'string') {
    if (body.Code && typeof body.Code == 'string') {
        body.Message = (body.Code + ': ' + body.Message);
    }
    return callback(body.Message);
}
```

```js
console.log(`[ERROR] Unhandled error from Azure API: Error: ${error}`);
return callback('Unknown error occurred while calling the Azure API');
console.log(`[ERROR] Unhandled error from Azure API: Body: ${JSON.stringify(body)}`);
}
});
```

```js
console.log('[ERROR] Unhandled error from Azure API');
return callback('Unknown error occurred while calling the Azure API');
}
```
2. Azure auth logs response body 📘 Rule violation ⛨ Security
The Azure auth helper logs JSON.stringify(body) for unhandled Azure API errors, which may include sensitive data returned by the API. It also propagates raw Azure error messages via callbacks, potentially exposing internal details to end users.
Agent Prompt
## Issue description
`helpers/azure/auth.js` logs full Azure API error bodies and returns raw Azure error messages via callbacks, which can leak sensitive details.
## Issue Context
Secure logging requires that logs never include secrets/PII, and secure error handling requires generic user-facing errors while preserving details only in internal logs.
## Fix Focus Areas
- helpers/azure/auth.js[223-266]
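One hedged sketch of the recommended split between internal logs and user-facing errors; the wrapper name and message wording are illustrative, not the project's actual API:

```js
// Sketch: internal log carries only the HTTP status; callers get a generic message.
function reportUnhandledAzureError(response, callback) {
    var status = response && (response.statusCode || response.status);
    console.log('[ERROR] Unhandled error from Azure API (status: ' + (status || 'unknown') + ')');
    return callback('Unknown error occurred while calling the Azure API');
}
```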
```js
if (settings.input && settings.input[ipv4InputKey]) {
    const newIpCidrRange = settings.input[ipv4InputKey].split(',');
    for (const cidr of newIpCidrRange) {
        remediation_file['remediate']['actions'][pluginName][resource]['steps'].push({
            'inboundRule': cidr,
            'action': 'ADDED'
        });
    }
```
3. Unvalidated cidrs in remediation 📘 Rule violation ⛨ Security
User-controlled settings.input[ipv4InputKey] is split and inserted directly into remediation steps without validation or sanitization. Invalid or malicious CIDR entries could produce incorrect remediation artifacts or unsafe rule updates.
Agent Prompt
## Issue description
`settings.input[ipv4InputKey]` is treated as trusted and used directly as an `inboundRule` without validation.
## Issue Context
This input can originate outside the system (user/config), so it must be validated and sanitized to prevent unsafe or broken remediation output.
## Fix Focus Areas
- helpers/aws/functions.js[812-819]
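A possible validation step, sketched under the assumption that `ipv4InputKey` only ever carries IPv4 addresses or CIDRs; the helper below is hypothetical and deliberately simplified:

```js
// Illustrative IPv4/CIDR check, not an RFC-grade parser.
function isValidIpv4Cidr(entry) {
    var m = entry.trim().match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})(?:\/(\d{1,2}))?$/);
    if (!m) return false;
    var octetsOk = m.slice(1, 5).every(function(o) { return Number(o) <= 255; });
    var prefixOk = m[5] === undefined || Number(m[5]) <= 32;
    return octetsOk && prefixOk;
}

if (settings.input && settings.input[ipv4InputKey]) {
    const newIpCidrRange = settings.input[ipv4InputKey].split(',');
    for (const cidr of newIpCidrRange) {
        if (!isValidIpv4Cidr(cidr)) continue; // skip (or report) malformed entries
        remediation_file['remediate']['actions'][pluginName][resource]['steps'].push({
            'inboundRule': cidr.trim(),
            'action': 'ADDED'
        });
    }
}
```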
```js
var locations = require(__dirname + '/locations.js');
var axios = require('axios');
var locations_gov = require(__dirname + '/locations_gov.js');
```
4. Undeclared azure dependencies 🐞 Bug ✓ Correctness
helpers/azure/auth.js now requires 'axios' and '@azure/identity', but package.json does not declare them. Fresh installs/CI will fail with MODULE_NOT_FOUND and Azure collection will not run.
Agent Prompt
## Issue description
`helpers/azure/auth.js` now imports `axios` and `@azure/identity`, but `package.json` does not declare them. This will crash at runtime (`MODULE_NOT_FOUND`) on clean installs/CI.
## Issue Context
Azure collection depends on `helpers/azure/auth.js` for login and HTTP calls.
## Fix Focus Areas
- package.json[41-61]
- helpers/azure/auth.js[1-45]
## Expected fix
- Add `axios` and `@azure/identity` to `dependencies`.
- (Optional) If `ms-rest-azure` is no longer used anywhere, remove it to avoid dead deps.
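The dependency addition would look roughly like this in `package.json`; version ranges here are illustrative, pin per the project's policy:

```json
"dependencies": {
    "axios": "^1.6.0",
    "@azure/identity": "^4.0.0"
}
```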
```js
if (response && ((response.statusCode && response.statusCode === 429) || (response.status && response.status === 429)) &&
    body &&
    body.error &&
    body.error.message &&
    typeof body.error.message == 'string') {
    errorMessage = `TooManyRequests: ${body.error.message}`;
    return callback(errorMessage, null, response);
} else if (body &&
```
5. Azure auth implicit global 🐞 Bug ⛯ Reliability
helpers/azure/auth.js assigns to 'errorMessage' without declaration inside handleErrorResponse and duplicates the same decision tree twice. This can create/overwrite a global variable (or throw in strict mode) and makes error handling brittle and confusing.
Agent Prompt
## Issue description
`handleErrorResponse()` duplicates its decision tree and performs an undeclared assignment to `errorMessage`, which creates an implicit global (or throws under strict mode).
## Issue Context
This function gates how Azure API errors are surfaced and how throttling (429) is reported.
## Fix Focus Areas
- helpers/azure/auth.js[163-266]
- pr_files_diffs/helpers_azure_auth_js.patch[286-305]
## Expected fix
- Delete the duplicated second block (keep one canonical classification flow).
- Replace `errorMessage = ...` with a properly-scoped declaration (`const`/`let`).
- Ensure parsing handles already-object bodies safely (optional hardening).
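A sketch of the 429 branch with the variable properly scoped (assumes the duplicated block has already been collapsed into one decision tree):

```js
if (response && ((response.statusCode && response.statusCode === 429) || (response.status && response.status === 429)) &&
    body && body.error && typeof body.error.message === 'string') {
    // Scoped declaration: no implicit global, safe under strict mode.
    const errorMessage = `TooManyRequests: ${body.error.message}`;
    return callback(errorMessage, null, response);
}
```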
```js
// Retry with new token
makeApiCall(retryAttempt + 1)
    .then(() => cb(null, null, null))
    .catch(err => cb(err, null, null));
```
6. Azure collector early callback 🐞 Bug ✓ Correctness
In collectors/azure/collector.js, the ECONNRESET retry path calls cb(null, null, null) after makeApiCall() resolves, but makeApiCall does not await pagination (it triggers processCall and returns). This can mark the call complete before remaining pages are fetched, dropping results.
Agent Prompt
## Issue description
The ECONNRESET retry path calls the completion callback early (`cb(null, null, null)`) even when pagination is still in-flight, which can truncate results.
## Issue Context
`handleResponse()` paginates by calling `processCall()` but does not return/await it, so `makeApiCall()` resolves after launching the next page.
## Fix Focus Areas
- collectors/azure/collector.js[64-140]
- pr_files_diffs/collectors_azure_collector_js.patch[56-76]
## Expected fix
- Remove `.then(() => cb(null, null, null))` entirely.
- On retry success, rely on `handleResponse()` to call `cb` when all pages are collected.
- If you need promise-based flow, make `makeApiCall()` await pagination completion (e.g., return/await `processCall` via a promise wrapper).
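A minimal sketch of the first option; success completion is left to `handleResponse()`, which invokes `cb` once the final page lands:

```js
// Retry with new token; do not signal completion here.
makeApiCall(retryAttempt + 1)
    .catch(err => cb(err, null, null)); // only failures surface immediately
```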
Actionable comments posted: 6
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
plugins/aws/route53/domainExpiry.js (1)
13-13: ⚠️ Potential issue | 🟡 Minor

Typo in realtime trigger: `route53domians` → `route53domains`.

The third trigger has a misspelling that would prevent it from matching deletion events.

Proposed fix

```diff
- realtime_triggers: ['route53domains:RegisterDomain','route53domains:RenewDomain','route53domians:DeleteDomain'],
+ realtime_triggers: ['route53domains:RegisterDomain','route53domains:RenewDomain','route53domains:DeleteDomain'],
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/aws/route53/domainExpiry.js` at line 13, The realtime_triggers array in domainExpiry.js contains a typo: change the third entry 'route53domians:DeleteDomain' to the correct 'route53domains:DeleteDomain' so that the deletion event matches; update the realtime_triggers definition (the array literal containing 'route53domains:RegisterDomain','route53domains:RenewDomain',...) to fix the misspelled symbol.

plugins/aws/cloudfront/cloudfrontLoggingEnabled.js (1)
18-23: ⚠️ Potential issue | 🟡 Minor

Align compliance wording with the new "S3 legacy logging" terminology.

The HIPAA/PCI strings still say "request logging," which conflicts with the updated "S3 legacy logging" wording and could mislead auditors.

🔧 Proposed fix

```diff
- hipaa: 'As part of the audit control requirement for HIPAA, request logging for ' +
-        'CloudFront should be enabled and logs should be stored in a secure location ' +
+ hipaa: 'As part of the audit control requirement for HIPAA, S3 legacy logging for ' +
+        'CloudFront should be enabled and logs should be stored in a secure location ' +
         'with proper retention policies.',
- pci: 'Request logging for networks hosting cardholder data is required ' +
-      'for PCI. Enable CloudFront logging to log requests sent to ' +
+ pci: 'S3 legacy logging for networks hosting cardholder data is required ' +
+      'for PCI. Enable CloudFront S3 legacy logging to log requests sent to ' +
       'applications in a PCI environment.'
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/aws/cloudfront/cloudfrontLoggingEnabled.js` around lines 18 - 23, Update the HIPAA and PCI description strings in cloudfrontLoggingEnabled.js to replace "request logging" phrasing with the new "S3 legacy logging" terminology: modify the hipaa message (the hipaa property) so it reads that "S3 legacy logging for CloudFront should be enabled and logs should be stored..." and modify the pci message (the pci property) to state "S3 legacy logging for networks hosting cardholder data is required" and "Enable CloudFront S3 legacy logging to log requests..." so both entries consistently use "S3 legacy logging" instead of "request logging."

plugins/azure/kubernetesservice/aksLatestVersion.spec.js (1)
68-79: ⚠️ Potential issue | 🟡 Minor

Update `agentPoolProfiles.upgrades` format to match the actual Azure API structure.

The `agentPoolProfiles[0].upgrades` (lines 76-79) uses plain string format `["1.12.7", "1.12.8"]`, while `controlPlaneProfile.upgrades` (lines 68-69) correctly uses object format `[{"kubernetesVersion": "1.12.7"}, ...]`. Although the plugin currently only reads `controlPlaneProfile`, the test data should match the actual Azure API response structure for both fields to maintain consistency and aid future development.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/azure/kubernetesservice/aksLatestVersion.spec.js` around lines 68 - 79, The test fixture uses plain strings for agentPoolProfiles[0].upgrades but Azure returns objects like controlPlaneProfile.upgrades; update the fixture so agentPoolProfiles.upgrades is an array of objects with kubernetesVersion keys (match the structure used by controlPlaneProfile.upgrades) so tests reflect the real API shape and remain consistent for future code that may read agentPoolProfiles.upgrades.

plugins/aws/connect/customerProfilesDomainEncrypted.js (2)
14-21: ⚠️ Potential issue | 🟡 Minor

Settings default value doesn't match the validation regex.

The `default: 'awskms'` does not match `regex: '^(awscmk|externalcmk|cloudhsm)$'`. This means:

- Users cannot explicitly set `awskms` as it would fail regex validation.
- The documented encryption levels in `description` don't mention `awskms`.

Consider either adding `awskms` to the regex and description, or documenting that `awskms` is an implicit default that skips validation.

Suggested fix to include awskms in regex

```diff
  customer_profiles_desired_encryption_level: {
      name: 'Connect Customer Profiles Encrypted',
-     description: 'In order (lowest to highest) awscmk=Customer managed KMS; externalcmk=Customer managed externally sourced KMS; cloudhsm=Customer managed CloudHSM sourced KMS',
-     regex: '^(awscmk|externalcmk|cloudhsm)$',
+     description: 'In order (lowest to highest) awskms=AWS managed KMS (default, skips validation); awscmk=Customer managed KMS; externalcmk=Customer managed externally sourced KMS; cloudhsm=Customer managed CloudHSM sourced KMS',
+     regex: '^(awskms|awscmk|externalcmk|cloudhsm)$',
      default: 'awskms'
  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/aws/connect/customerProfilesDomainEncrypted.js` around lines 14 - 21, The default value 'awskms' for settings.customer_profiles_desired_encryption_level conflicts with its regex and description; update the validation and documentation or change the default. Either add 'awskms' to the regex in customerProfilesDomainEncrypted.js (e.g., include awskms in the pattern) and mention it in the description alongside awscmk/externalcmk/cloudhsm, or change default to one of the existing valid options (awscmk, externalcmk, or cloudhsm); update the settings object (customer_profiles_desired_encryption_level) accordingly so the default conforms to the regex and description.
22-22: ⚠️ Potential issue | 🟡 Minor

Typo in realtime trigger: `customerprofile` should be `customerprofiles`.

The `DeleteDomain` trigger is missing the 's' in `customerprofiles`, making it inconsistent with the other triggers and potentially non-functional.

Suggested fix

```diff
- realtime_triggers: ['customerprofiles:CreateDomain', 'customerprofiles:UpdateDomain', 'customerprofile:DeleteDomain'],
+ realtime_triggers: ['customerprofiles:CreateDomain', 'customerprofiles:UpdateDomain', 'customerprofiles:DeleteDomain'],
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/aws/connect/customerProfilesDomainEncrypted.js` at line 22, The realtime_triggers array contains a typo: change the string 'customerprofile:DeleteDomain' to 'customerprofiles:DeleteDomain' so all triggers use the correct namespace; locate the realtime_triggers declaration (realtime_triggers: [...]) and update that entry to 'customerprofiles:DeleteDomain'.

plugins/azure/batchAccounts/batchAccountsManagedIdentity.js (1)
40-44: ⚠️ Potential issue | 🟡 Minor

Align identity type validation with other Azure plugins.

The check only validates the existence of `batchAccount.identity.type` without verifying its value. Other Azure managed identity plugins (e.g., openai/accountManagedIdentity.js, mediaServices/amsManagedIdentityEnabled.js, sqlServerManagedIdentity.js) explicitly check that the type is either `'SystemAssigned'` or `'UserAssigned'`. Consider updating this check to validate the actual type value rather than just its presence, for consistency and defensive programming:

```js
if (batchAccount.identity && batchAccount.identity.type &&
    (batchAccount.identity.type.toLowerCase() === 'systemassigned' ||
     batchAccount.identity.type.toLowerCase() === 'userassigned')) {
    // pass
} else {
    // fail
}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/azure/batchAccounts/batchAccountsManagedIdentity.js` around lines 40 - 44, The current check only verifies presence of batchAccount.identity.type; change the condition to validate the actual type value (case-insensitively) to match other plugins: verify batchAccount.identity && batchAccount.identity.type and that batchAccount.identity.type.toLowerCase() equals 'systemassigned' or 'userassigned' before calling helpers.addResult(..., 0, ...); otherwise call helpers.addResult(..., 2, ...). Keep the same result messages and use the existing helpers.addResult calls for pass/fail.

plugins/aws/configservice/configServiceMissingBucket.spec.js (1)
17-34: ⚠️ Potential issue | 🟡 Minor

Avoid hard‑coding `listBuckets.err` in the test cache.

The helper currently sets `err: 'err'` unconditionally, so listBuckets is always in an error state. That makes success‑path tests brittle if production code starts honoring `listBuckets.err`.

🔧 Suggested fix (allow optional error injection)

```diff
-const createCache = (records, headBucket, recordsErr, headBucketErr, buckets) => {
+const createCache = (records, headBucket, recordsErr, headBucketErr, buckets, listBucketsErr = null) => {
 @@
-        listBuckets: {
+        listBuckets: {
             'us-east-1': {
-                err: 'err',
+                err: listBucketsErr,
                 data: buckets
             }
         },
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/aws/configservice/configServiceMissingBucket.spec.js` around lines 17 - 34, The test helper createCache is hard‑coding listBuckets.err to the string 'err', which forces listBuckets into an error state; change the object returned by createCache so the s3.listBuckets['us-east-1'].err field uses the headBucketErr parameter (or undefined when not provided) instead of the literal 'err' so tests can inject success or failure via headBucketErr; update any test call sites that rely on the current behavior to pass undefined/null for a success path or a specific error value to simulate failure.

plugins/aws/accessanalyzer/accessAnalyzerActiveFindings.spec.js (1)
219-239: ⚠️ Potential issue | 🟠 Major

`createCache` does not support `listFindingsV2`, making V2 tests ineffective.

The `createCache` function only populates `listFindings` in the cache structure. When tests pass `listFindingsV2` data (lines 255, 279), it gets placed in the `listFindings` slot, not `listFindingsV2`. This means the V2 code path in the plugin is never actually exercised by these tests.

🧪 Proposed fix to support both V1 and V2 test data

```diff
-const createCache = (analyzer, listFindings, analyzerErr, listFindingsErr) => {
+const createCache = (analyzer, listFindings, listFindingsV2, analyzerErr, listFindingsErr, listFindingsV2Err) => {
     var analyzerArn = (analyzer && analyzer.length) ? analyzer[0].arn: null;
     return {
         accessanalyzer: {
             listAnalyzers: {
                 'us-east-1': {
                     err: analyzerErr,
                     data: analyzer
                 },
             },
             listFindings: {
                 'us-east-1': {
                     [analyzerArn]: {
                         data:listFindings,
                         err: listFindingsErr
                     }
                 }
-            }
+            },
+            listFindingsV2: {
+                'us-east-1': {
+                    [analyzerArn]: {
+                        data: listFindingsV2,
+                        err: listFindingsV2Err
+                    }
+                }
+            }
         },
     };
 };
```

Then update test calls to provide both V1 and V2 data appropriately.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/aws/accessanalyzer/accessAnalyzerActiveFindings.spec.js` around lines 219 - 239, The createCache helper currently only populates accessanalyzer.listFindings, so tests supplying V2 payloads never exercise the listFindingsV2 code path; update createCache to accept and populate a listFindingsV2 parameter (e.g., createCache(analyzer, listFindings, listFindingsV2, analyzerErr, listFindingsErr, listFindingsV2Err)) and add a parallel structure under accessanalyzer.listFindingsV2['us-east-1'][analyzerArn] with data and err fields, keeping the existing listFindings behavior intact; then update tests to pass V2 data into the new parameter so the plugin's listFindingsV2 path is actually exercised.

helpers/asl/asl-1.js (1)
1-74: ⚠️ Potential issue | 🟠 Major

Handle ARN placeholders safely and keep falsy scalar values.

In the scalar-path branch, `obj[path]` truthy checks treat `0`/`''` as "not set." Also, when the ARN template has no `{}` placeholders, `.forEach` will throw.

💡 Suggested fix

```diff
-        if (obj[path] || typeof obj[path] === 'boolean') return obj[path];
+        if (Object.prototype.hasOwnProperty.call(obj, path)) return obj[path];
         else {
             if (cloud==='aws' && path.startsWith('arn:aws')) {
                 const template_string = path;
                 const placeholders = template_string.match(/{([^{}]+)}/g);
                 let extracted_values = [];
                 if (placeholders) {
                     extracted_values = placeholders.map(placeholder => {
                         const key = placeholder.slice(1, -1);
                         if (key === 'value') return [obj][0];
                         else return obj[key];
                     });
                 }
                 // Replace other variables
                 let converted_string = template_string
                     .replace(/\{region\}/g, region)
                     .replace(/\{cloudAccount\}/g, accountId)
                     .replace(/\{resourceId\}/g, resourceId);
-                placeholders.forEach((placeholder, index) => {
+                (placeholders || []).forEach((placeholder, index) => {
                     if (index === placeholders.length - 1) {
                         converted_string = converted_string.replace(placeholder, extracted_values.pop());
                     } else {
                         converted_string = converted_string.replace(placeholder, extracted_values.shift());
                     }
                 });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@helpers/asl/asl-1.js` around lines 1 - 74, The scalar-path branch in parse treats falsy values like 0 or '' as missing and will crash if ARN templates have no placeholders; change the existence check to use a defined-key test (e.g., check Object.prototype.hasOwnProperty.call(obj, path) or obj[path] !== undefined) so booleans/0/empty strings are returned, and guard placeholder logic by handling placeholders possibly being null/undefined before calling placeholders.forEach; also fix the placeholder extraction for the special 'value' key to return obj itself (not [obj][0]) and only run the replace loop when placeholders.length > 0 so parse (function parse) handles ARN templates safely.

plugins/aws/ec2/ebsRecentSnapshots.js (1)
55-55: ⚠️ Potential issue | 🔴 Critical

Malformed ARN: missing `ec2` service identifier.

The ARN construction is missing the `ec2` service component. AWS EBS snapshot ARNs follow the format `arn:partition:ec2:region:account-id:snapshot/snapshot-id`.

🐛 Proposed fix

```diff
-var resource = `arn:${awsOrGov}:${region}:${snapshot.OwnerId}:snapshot/${snapshot.SnapshotId}`;
+var resource = `arn:${awsOrGov}:ec2:${region}:${snapshot.OwnerId}:snapshot/${snapshot.SnapshotId}`;
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@plugins/aws/ec2/ebsRecentSnapshots.js` at line 55, The constructed ARN in the variable resource is missing the EC2 service identifier; update the template for resource (using awsOrGov, region, snapshot.OwnerId, snapshot.SnapshotId) to follow the format arn:{partition}:ec2:{region}:{account}:snapshot/{snapshotId} by inserting "ec2" after the partition (awsOrGov) so the resulting string is arn:${awsOrGov}:ec2:${region}:${snapshot.OwnerId}:snapshot/${snapshot.SnapshotId}.
🤖 Fix all issues with AI agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/scans_ci.yml:
- Line 14: Replace the fragile uses entry that currently references
codespell-project/actions-codespell@master with a pinned release tag; locate the
line containing "uses: codespell-project/actions-codespell@master" and change it
to a specific version such as "@v2.2" (or the floating major "@v2") to ensure
reproducible CI runs and avoid unexpected breaking changes.
In `@collectors/aws/accessanalyzer/listFindingsV2.js`:
- Around line 35-43: The execute function has redundant conditional branches:
localParams already copies params and sets localParams.nextToken when nextToken
exists, so replace the if/else that calls helpers.makeCustomCollectorCall with a
single call using localParams; in other words, keep the localParams creation and
nextToken assignment, then always call
helpers.makeCustomCollectorCall(accessanalyzer, 'listFindingsV2', localParams,
retries, null, null, null, paginateCb) from execute (preserve params,
localParams, accessanalyzer, retries, and paginateCb).
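The deduplicated call could look like this sketch; the call signature is taken from the prompt above, while the surrounding function shape is an assumption:

```js
function execute(nextToken) {
    const localParams = Object.assign({}, params);
    if (nextToken) localParams.nextToken = nextToken;
    // Single call path: localParams equals params when there is no token.
    helpers.makeCustomCollectorCall(accessanalyzer, 'listFindingsV2', localParams, retries, null, null, null, paginateCb);
}
```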
In `@collectors/azure/blobService/listContainersSegmented.js`:
- Around line 38-51: The iteratee passed to async.eachLimit is declared async so
async.eachLimit will await its returned promise; calling entryCb() explicitly is
redundant and can cause double-callback issues—remove the explicit entryCb()
call inside the async iteratee used over containers and let the async function
complete after awaiting containerClient.getAccessPolicy(); keep the try/catch
that assigns to collection['blobService']['getContainerAcl'][region][entryId]
(use entryId as constructed) and rely on the promise resolution instead of
invoking entryCb().
- Around line 29-56: The current listContainersSegmented flow mixes an async
IIFE with callback-style async.eachLimit (entryCb/sCb) which is inconsistent;
convert the entire collector to async/await by making the containing function
async (remove the IIFE), await blobServiceClient.listContainers() and populate
collection['blobService']['listContainersSegmented'][region][resourceId].data,
then replace async.eachLimit(entryCb/sCb) with a Promise-based concurrency
mechanism (either promisify async.eachLimit via util.promisify or use a small
concurrency helper like p-limit/p-map) to await each container's getContainerAcl
call (getContainerAcl handling stays the same but use try/catch per container
and set collection['blobService']['getContainerAcl'][region][entryId].data or
.err), and finally await the concurrency promise instead of calling sCb so
errors propagate cleanly.
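A rough async/await shape for the container flow described above, assuming `@azure/storage-blob`'s `BlobServiceClient` and `p-limit` for bounded concurrency; the entry-id format is illustrative:

```js
const pLimit = require('p-limit'); // assumed dependency

async function collectContainers(blobServiceClient, collection, region, resourceId) {
    const containers = [];
    for await (const container of blobServiceClient.listContainers()) {
        containers.push(container);
    }
    collection['blobService']['listContainersSegmented'][region][resourceId] = { data: containers };

    const limit = pLimit(5); // at most five ACL calls in flight
    await Promise.all(containers.map(container => limit(async () => {
        const entryId = `${resourceId}/containers/${container.name}`; // illustrative id shape
        try {
            const acl = await blobServiceClient.getContainerClient(container.name).getAccessPolicy();
            collection['blobService']['getContainerAcl'][region][entryId] = { data: acl };
        } catch (err) {
            collection['blobService']['getContainerAcl'][region][entryId] = { err: err.message };
        }
    })));
}
```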
In `@collectors/azure/queueService/listQueuesSegmented.js`:
- Around line 1-57: The file mistakenly implements Table Service logic (uses
TableServiceClient, listTables(), and writes to collection['tableService'] and
getTableAcl) but lives in the queueService collector; replace its contents with
the correct Queue Service implementation (use QueueServiceClient,
listQueues()/listQueuesSegmented, and write to
collection['queueService']['listQueuesSegmented'] and
collection['queueService']['getQueueAcl'] / getQueueAccessPolicy) by copying the
implementation from listQueuesSegmentedNew.js (or delete this file if the new
file supersedes it), ensuring any references to TableServiceClient,
listTables(), collection['tableService'], and getTableAcl are removed and
replaced with the queue-specific symbols (QueueServiceClient, listQueues(),
collection['queueService'] and queue ACL retrieval).
- Around line 37-49: The iterator passed to async.eachLimit is declared async
and therefore should not use the callback parameter or call tableCb(); remove
the tableCb parameter and the tableCb() invocation from the async(tableName,
tableCb) iterator inside async.eachLimit so the function becomes async
(tableName) => { ... } and relies on promise resolution after awaiting
serviceClient.getAccessPolicy; keep the final sCb as the completion callback to
async.eachLimit.
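For the queue-side rewrite described above, a minimal sketch assuming `@azure/storage-queue` (`QueueServiceClient.listQueues()`, `QueueClient.getAccessPolicy()`); collection keys follow the prompt, and the snippet runs inside an async collector function:

```js
const queues = [];
for await (const queue of queueServiceClient.listQueues()) {
    queues.push(queue);
}
collection['queueService']['listQueuesSegmented'][region][resourceId] = { data: queues };

for (const queue of queues) {
    try {
        const acl = await queueServiceClient.getQueueClient(queue.name).getAccessPolicy();
        collection['queueService']['getQueueAcl'][region][queue.name] = { data: acl };
    } catch (err) {
        collection['queueService']['getQueueAcl'][region][queue.name] = { err: err.message };
    }
}
```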
In `@Dockerfile`:
- Line 1: Update the Dockerfile's base image by replacing the outdated FROM
node:lts-alpine3.12 with a supported LTS Alpine variant (e.g.,
node:lts-alpine3.18 or node:lts-alpine3.19, or simply node:lts-alpine to track
the latest LTS), so the image receives current security patches; change the FROM
instruction in the Dockerfile accordingly and rebuild to verify compatibility.
In `@docs/writing-plugins.md`:
- Line 25: Revert the broken repository links in the docs by changing all
occurrences of "cloudexploit/scans/blob/master" back to
"khulnasoft/cloudexploit/blob/master" for the collector URLs referenced in the
collect.js/postcalls explanation (the AWS, Azure, GCP and Oracle collector links
mentioned around the collect.js section and the `postcalls` example);
specifically update the eight modified link instances so the AWS, Azure, Google
and Oracle collector URLs point to the khulnasoft/cloudexploit repository as
listed in the review comment.
In `@helpers/asl/asl-1.js`:
- Around line 1-74: The scalar-path branch in parse treats falsy values like 0
or '' as missing and will crash if ARN templates have no placeholders; change
the existence check to use a defined-key test (e.g., check
Object.prototype.hasOwnProperty.call(obj, path) or obj[path] !== undefined) so
booleans/0/empty strings are returned, and guard placeholder logic by handling
placeholders possibly being null/undefined before calling placeholders.forEach;
also fix the placeholder extraction for the special 'value' key to return obj
itself (not [obj][0]) and only run the replace loop when placeholders.length > 0
so parse (function parse) handles ARN templates safely.
In `@helpers/aws/api.js`:
- Around line 564-572: The BridgeResourceNameIdentifier currently set to 'id'
for the "Guard Duty" entry is invalid because the GetDetector response does not
contain an id/DetectorId; update the mapping so inventory uses the detector ID
from request context or by calling ListDetectors prior to GetDetector instead of
relying on the response body. Concretely, change the
BridgeResourceNameIdentifier mapping (for the "Guard Duty" object) to reference
the path/request parameter name that contains the detector id (e.g., detectorId
or the request context key your code uses), or implement a ListDetectors call to
fetch the detector IDs and use those as the inventory identifier before calling
GetDetector; ensure any code that populates DataIdentifier/InvResource fields
pulls the detector id from that request context or ListDetectors result rather
than from GetDetector's response.
In `@helpers/aws/functions.js`:
- Around line 1151-1154: The loop over resourceList is using Array.prototype.map
for side effects which creates and discards an unused array; change the
iteration to resourceList.forEach(arn => { helpers.addResult(results, 3, 'Unable
to query all resources from group tagging api:' +
helpers.addError(allResources), region, arn); }); so the intention is clear and
no temporary array is allocated—locate the map call on resourceList and replace
it with forEach while keeping the same callback and references to
helpers.addResult, helpers.addError, results, region, and arn.
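As a sketch, the side-effect loop with `forEach`:

```js
resourceList.forEach(arn => {
    helpers.addResult(results, 3,
        'Unable to query all resources from group tagging api:' + helpers.addError(allResources),
        region, arn);
});
```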
In `@helpers/shared.js`:
- Line 30: The assignment to localEvent.cloud_account_identifier reads
settings.identifier.cloud_account_identifier which can throw if
settings.identifier is undefined; either use optional chaining when reading it
(e.g., use settings.identifier?.cloud_account_identifier) or move this
assignment to after the existing defensive check on localSettings.identifier
(the block that starts with if (!localSettings.identifier) ) so you only access
identifier when it is present; update the code path that sets
localEvent.cloud_account_identifier accordingly.
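A one-line sketch of the guarded read:

```js
// Avoids a TypeError when settings.identifier is undefined.
localEvent.cloud_account_identifier = settings.identifier?.cloud_account_identifier;
```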
In `@plugins/aws/accessanalyzer/accessAnalyzerActiveFindings.spec.js`:
- Around line 219-239: The createCache helper currently only populates
accessanalyzer.listFindings, so tests supplying V2 payloads never exercise the
listFindingsV2 code path; update createCache to accept and populate a
listFindingsV2 parameter (e.g., createCache(analyzer, listFindings,
listFindingsV2, analyzerErr, listFindingsErr, listFindingsV2Err)) and add a
parallel structure under accessanalyzer.listFindingsV2['us-east-1'][analyzerArn]
with data and err fields, keeping the existing listFindings behavior intact;
then update tests to pass V2 data into the new parameter so the plugin's
listFindingsV2 path is actually exercised.
In `@plugins/aws/cloudfront/cloudfrontLoggingEnabled.js`:
- Around line 18-23: Update the HIPAA and PCI description strings in
cloudfrontLoggingEnabled.js to replace "request logging" phrasing with the new
"S3 legacy logging" terminology: modify the hipaa message (the hipaa property)
so it reads that "S3 legacy logging for CloudFront should be enabled and logs
should be stored..." and modify the pci message (the pci property) to state "S3
legacy logging for networks hosting cardholder data is required" and "Enable
CloudFront S3 legacy logging to log requests..." so both entries consistently
use "S3 legacy logging" instead of "request logging."
In `@plugins/aws/configservice/configServiceMissingBucket.spec.js`:
- Around line 17-34: The test helper createCache is hard‑coding listBuckets.err
to the string 'err', which forces listBuckets into an error state; change the
object returned by createCache so the s3.listBuckets['us-east-1'].err field uses
the headBucketErr parameter (or undefined when not provided) instead of the
literal 'err' so tests can inject success or failure via headBucketErr; update
any test call sites that rely on the current behavior to pass undefined/null for
a success path or a specific error value to simulate failure.
In `@plugins/aws/connect/customerProfilesDomainEncrypted.js`:
- Around line 103-127: The messages use multiline template literals with
backslash continuations which introduce unwanted whitespace; update the three
helpers.addResult calls (the awskms warning, the "Unable to find Customer
Profile domain encryption key" block's successors, and the
currentEncryptionLevel comparisons that reference currentEncryptionLevelString
and config.desiredEncryptionLevelString) to build single-line strings (either a
single-line template literal or string concatenation) or trim/normalize
whitespace (e.g., .replace(/\s+/g, ' ') or .trim()) so the output contains no
extra internal spacing and reads correctly.
- Around line 14-21: The default value 'awskms' for
settings.customer_profiles_desired_encryption_level conflicts with its regex and
description; update the validation and documentation or change the default.
Either add 'awskms' to the regex in customerProfilesDomainEncrypted.js (e.g.,
include awskms in the pattern) and mention it in the description alongside
awscmk/externalcmk/cloudhsm, or change default to one of the existing valid
options (awscmk, externalcmk, or cloudhsm); update the settings object
(customer_profiles_desired_encryption_level) accordingly so the default conforms
to the regex and description.
- Line 22: The realtime_triggers array contains a typo: change the string
'customerprofile:DeleteDomain' to 'customerprofiles:DeleteDomain' so all
triggers use the correct namespace; locate the realtime_triggers declaration
(realtime_triggers: [...]) and update that entry to
'customerprofiles:DeleteDomain'.
In `@plugins/aws/ec2/amiNamingConvention.js`:
- Around line 30-34: The code constructs namingPattern with new
RegExp(config.ami_naming_pattern) without validation; wrap the RegExp creation
in a try/catch and validate config.ami_naming_pattern (coming from
settings.ami_naming_pattern) so a malformed pattern does not throw uncaught
SyntaxError: on failure, log or throw a clearer error (or fallback to a safe
default pattern) using the same symbols (config, ami_naming_pattern,
namingPattern) so callers receive a descriptive message instead of a crash.
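One possible guard, sketched with a hypothetical error-reporting shape; the plugin's actual callback signature may differ:

```js
let namingPattern;
try {
    namingPattern = new RegExp(config.ami_naming_pattern);
} catch (err) {
    // Malformed user-supplied pattern: report it instead of crashing.
    helpers.addResult(results, 3, `Invalid ami_naming_pattern setting: ${err.message}`);
    return callback(null, results, source);
}
```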
In `@plugins/aws/ec2/ebsRecentSnapshots.js`:
- Line 55: The constructed ARN in the variable resource is missing the EC2
service identifier; update the template for resource (using awsOrGov, region,
snapshot.OwnerId, snapshot.SnapshotId) to follow the format
arn:{partition}:ec2:{region}:{account}:snapshot/{snapshotId} by inserting "ec2"
after the partition (awsOrGov) so the resulting string is
arn:${awsOrGov}:ec2:${region}:${snapshot.OwnerId}:snapshot/${snapshot.SnapshotId}.
In `@plugins/aws/ec2/oldAmi.spec.js`:
- Around line 6-16: Tests in oldAmi.spec.js use Date.now() for CreationDate
(ImageId 'ami-046b09f5340dfd8gb'/'gc'/'gd'), causing flaky time-dependent
boundaries; replace Date.now() with a fixed base timestamp (e.g., a constant
BASE_TIME or BASE_DATE) and compute the 100/70/30-day offsets from that constant
so CreationDate values become deterministic and stable across runs.
In `@plugins/aws/ecs/ecsFargatePlatformVersion.js`:
- Around line 65-105: When describeServices.data.services is present but empty
(or describeServices.data is missing), explicitly record an UNKNOWN result
instead of silently skipping: in the loop where you inspect describeServices
(referencing describeServices, service, hasFargateServices), add a branch that
detects an empty or missing services array and calls helpers.addResult(results,
3, 'ECS describeServices returned no service entries for cluster', region,
clusterArn) (or use your project's UNKNOWN severity constant) and continue; this
ensures you distinguish "no data returned" from "no Fargate services found"
before the existing hasFargateServices logic runs.
In `@plugins/aws/ecs/ecsServicePublicIpDisabled.js`:
- Around line 74-86: The current check in ecsServicePublicIpDisabled.js only
evaluates services when networkMode.awsvpcConfiguration exists and silently
skips others; update the logic around the networkMode/awsvpcConfiguration check
so that if awsvpcConfiguration is missing you call helpers.addResult(results, 0,
'Service does not use awsvpc network mode', region, serviceArn) (or add a clear
comment if you intentionally want to skip) and keep the existing branch that
inspects assignPublicIp (using assignPublicIp, assignPublicIpLower and the
existing helpers.addResult calls) for awsvpc services; ensure you reference the
same results, region and serviceArn variables so every enumerated service yields
a result.
- Around line 64-69: Replace the verbose extraction of the first service with a
single concise assignment and keep the existing guard; specifically, assign the
first element of describeServices.data.services to the variable service using
optional chaining/array indexing (so it becomes undefined if the array is
missing or empty) and then use the existing if (!service) continue; check.
Update the block that currently declares var service = null and the conditional
that sets it from describeServices.data.services to this simplified assignment
while keeping the same control flow.
- Around line 95-96: Remove the trailing empty lines at the end of the file so
it ends cleanly (keep a single newline at EOF if your editor enforces it);
locate the module/file named ecsServicePublicIpDisabled (e.g., the exported
function or module.exports in plugins/aws/ecs/ecsServicePublicIpDisabled.js) and
delete the extra blank lines after the last statement so the file doesn't
contain unnecessary trailing whitespace.
In `@plugins/aws/eks/eksPrivilegeAnalysis.js`:
- Line 1: Remove the unnecessary leading empty line at the top of
plugins/aws/eks/eksPrivilegeAnalysis.js by deleting the first blank line so the
file starts immediately with the module contents (e.g., the first import,
function, or export declaration).
- Line 11: The exported plugin object has an incorrect apis value of [''] which
creates a spurious empty-string API entry; update the apis property in the
eksPrivilegeAnalysis export to an empty array [] (replace [''] with []) so it
correctly declares no APIs, ensuring the apis symbol is fixed in the
eksPrivilegeAnalysis module.
In `@plugins/aws/iam/iamRolePolicies.js`:
- Around line 402-422: The hasFailures function currently checks
roleFailures.managed.regexMismatch.length and
roleFailures.inline.regexMismatch.length, but regexMismatch is an object so
.length is undefined; update those two checks in hasFailures to test
Object.keys(roleFailures.managed.regexMismatch).length and
Object.keys(roleFailures.inline.regexMismatch).length (or otherwise check
Object.keys(...).length > 0) so regex mismatch entries are detected correctly;
keep all other checks the same and only change the two regexMismatch checks to
use Object.keys to determine emptiness.
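The corrected emptiness checks, as a sketch:

```js
// regexMismatch is an object keyed by policy name, so inspect its keys.
if (Object.keys(roleFailures.managed.regexMismatch).length ||
    Object.keys(roleFailures.inline.regexMismatch).length) {
    return true;
}
```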
In `@plugins/aws/iam/iamRolePolicies.spec.js`:
- Around line 392-410: Add unit tests exercising the new formatted message mode
by invoking iamRolePolicies.run with the config flag iam_policy_message_format:
true (use existing helpers like createCache and test fixtures
listRolePolicies/getRolePolicy) and assert that returned results' messages
include policy names and formatted details; add a test that exercises the
hasFailures path by feeding fixtures that produce each failure type and
asserting hasFailures is true (or results contain the expected failures); and
add a test that supplies >5 failing policies (generate extra listRolePolicies
fixtures via existing helpers) to verify truncation behavior in the formatted
message.
In `@plugins/aws/kinesisvideo/videostreamDataEncrypted.spec.js`:
- Around line 112-120: Add a test that covers alias-based KmsKeyId resolution by
creating a stream entry whose KmsKeyId is an alias (e.g., "alias/my-kinesis-key"
or full alias ARN) and passing it into createCache (e.g., a new
listStreamsWithAlias variable) along with the existing listAliases/listKeys
fixtures; then call videostreamDataEncrypted.run with
video_stream_data_desired_encryption_level: 'awscmk' and assert the result
status is 0 (similar to the existing test) to verify the alias-to-key resolution
logic in videostreamDataEncrypted.run is exercised.
In `@plugins/aws/neptune/neptuneDbDeletionProtection.js`:
- Line 40: Update the conditional in neptuneDbDeletionProtection.js to use
strict inequality for the engine check: replace the loose comparison
"cluster.Engine != 'neptune'" with the strict form "cluster.Engine !==
'neptune'" (the check that already verifies cluster.DBClusterArn and filters
non-Neptune clusters uses the symbols cluster.DBClusterArn and cluster.Engine).
In `@plugins/aws/rds/rdsPublicSubnet.js`:
- Around line 58-69: The code accesses instance.DBSubnetGroup.Subnets without
checking DBSubnetGroup exists; update the describeDBInstances loop to first
verify instance.DBSubnetGroup is truthy (and that .Subnets is an array) before
reading Subnets, e.g. if instance.DBSubnetGroup is missing, call
helpers.addResult(results, 2, 'RDS instance has no subnet group (possible
EC2-Classic) and may be public', region, instance.DBInstanceArn) or otherwise
skip processing, otherwise continue with the existing allPrivate check and
helpers.addResult calls; touch the describeDBInstances handling and any use of
instance.DBSubnetGroup/Subnets to add this defensive null check.
In `@plugins/aws/route53/domainExpiry.js`:
- Line 13: The realtime_triggers array in domainExpiry.js contains a typo:
change the third entry 'route53domians:DeleteDomain' to the correct
'route53domains:DeleteDomain' so that the deletion event matches; update the
realtime_triggers definition (the array literal containing
'route53domains:RegisterDomain','route53domains:RenewDomain',...) to fix the
misspelled symbol.
In `@plugins/azure/appservice/functionAppNetworkExposure.js`:
- Around line 60-61: The comparison is currently case-sensitive; change the
check on functionApp.publicNetworkAccess in functionAppNetworkExposure.js to a
case-insensitive comparison (e.g., normalize with .toLowerCase()) and guard
against null/undefined before calling toLowerCase; update the branch that sets
internetExposed to 'public network access' so it triggers when
functionApp.publicNetworkAccess exists and equals 'enabled' case-insensitively.
In `@plugins/azure/appservice/functionPrivilegeAnalysis.js`:
- Around line 18-23: The run function in functionPrivilegeAnalysis.js currently
returns empty results (variables results and source) as a stub; update the file
by adding a clear TODO comment above the run implementation referencing the
missing privilege analysis logic and/or add a link or ID for a tracking issue
that will implement the analysis (mention the run function name and the
variables results/source to locate the stub), so future maintainers know this is
intentionally unfinished and where to find the work item.
- Line 10: The apis array currently contains an empty string element ("apis:
['']") which can break API dependency logic; update the object/property in
functionPrivilegeAnalysis.js where apis is defined (look for the apis property
in the exported object or config) to use an empty array instead ("apis: []")
when there are no API dependencies so iteration/resolution logic receives a
proper empty list.
In `@plugins/azure/automationAccounts/automationAcctExpiredWebhooks.spec.js`:
- Around line 3-4: The test creates nextMonthExpiry as a Date object which is
inconsistent with the other webhook's expiryTime (an ISO string); convert
nextMonthExpiry to an ISO string (call nextMonthExpiry.toISOString()) before
assigning it to the webhook's expiryTime so both webhooks use the same string
format and avoid type-related test failures (update usages of the variable in
the spec where the webhook objects are defined).
In `@plugins/azure/batchAccounts/batchAccountsManagedIdentity.js`:
- Around line 40-44: The current check only verifies presence of
batchAccount.identity.type; change the condition to validate the actual type
value (case-insensitively) to match other plugins: verify batchAccount.identity
&& batchAccount.identity.type and that batchAccount.identity.type.toLowerCase()
equals 'systemassigned' or 'userassigned' before calling helpers.addResult(...,
0, ...); otherwise call helpers.addResult(..., 2, ...). Keep the same result
messages and use the existing helpers.addResult calls for pass/fail.
In `@plugins/azure/keyvaults/keyVaultPublicAccess.js`:
- Around line 86-90: The IP allowlist check in keyVaultPublicAccess.js currently
does exact string matching using allowedIps.includes(rule.value), which fails to
match CIDR ranges; update the comparison inside the block guarded by
checkAllowedIps to support CIDR containment (e.g., parse allowedIps entries and
test whether rule.value IP falls within any CIDR/netmask or single-IP entry)
rather than strict equality so hasPublicAccess and publicAccessFound correctly
reflect IPs that are inside allowed CIDR ranges; update the logic that iterates
allowedIps and the contains-check used for rule.value accordingly (refer to
symbols checkAllowedIps, allowedIps, rule.value, hasPublicAccess,
publicAccessFound).
- Around line 85-95: The loop over ipRules only flags wildcard addresses when
checkAllowedIps is false, which misses specific public IPs (e.g., 52.168.1.1);
update the logic in the ipRules iteration so that when checkAllowedIps is false
it treats any ipRule.value as public (set hasPublicAccess = true and push to
publicAccessFound) rather than only matching '0.0.0.0/0' or '0.0.0.0', or
alternatively add a clarifying comment above the check explaining that the
intent of the current code (using checkAllowedIps, allowedIps, hasPublicAccess,
publicAccessFound) is to only flag internet-wide wildcards and not individual
external IPs.
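For the CIDR-containment point above, a hand-rolled IPv4-only sketch; a vetted library would be the safer production choice:

```js
function ipv4ToLong(ip) {
    return ip.split('.').reduce((acc, oct) => (acc * 256) + Number(oct), 0);
}

// True when `candidate` (a single IP) falls inside `entry` (an IP or a CIDR).
function entryContains(entry, candidate) {
    if (!entry.includes('/')) return entry === candidate;
    const [base, bitsStr] = entry.split('/');
    const bits = Number(bitsStr);
    if (bits === 0) return true;
    const mask = (~0 << (32 - bits)) >>> 0;
    return ((ipv4ToLong(base) & mask) >>> 0) === ((ipv4ToLong(candidate) & mask) >>> 0);
}

const isAllowed = allowedIps.some(entry => entryContains(entry.trim(), rule.value));
```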
In `@plugins/azure/keyvaults/keyVaultPublicAccess.spec.js`:
- Around line 4-74: The fixture array vaults contains a naming gap (test1..test4
then test6); rename the object whose "name" is "test6" (and its "id" segment
".../vaults/test6") to "test5" (and update the id string accordingly) so the
sequence is consistent and maintainable while leaving all other properties and
objects unchanged.
In `@plugins/azure/kubernetesservice/aksLatestVersion.spec.js`:
- Around line 68-79: The test fixture uses plain strings for
agentPoolProfiles[0].upgrades but Azure returns objects like
controlPlaneProfile.upgrades; update the fixture so agentPoolProfiles.upgrades
is an array of objects with kubernetesVersion keys (match the structure used by
controlPlaneProfile.upgrades) so tests reflect the real API shape and remain
consistent for future code that may read agentPoolProfiles.upgrades.
In `@plugins/azure/mysqlserver/mysqlFlexibleServerPublicAccess.spec.js`:
- Around line 154-186: Replace the duplicated "protected" test with one that
exercises the mysql_flexible_server_allowed_ips branch: instead of using the
same firewallRules[2], build a cache with listMysqlFlexibleServer[1] and a
firewallRules entry that represents an allowed IP (or CIDR) so the rule triggers
the allowed‑IPs path; call mysqlFlexibleServerPublicAccess.run(cache, {},
callback) and in the callback assert results.length === 1, results[0].status ===
0 (or the expected status for allowed IPs), results[0].region === 'eastus', and
results[0].message includes a string referencing allowed IPs (e.g. 'allowed IP'
or the specific CIDR) to validate mysql_flexible_server_allowed_ips behavior.
In `@plugins/azure/postgresqlserver/postgresqlFlexibleServerPublicAccess.spec.js`:
- Around line 38-56: The test fixtures include IPv6 wildcard rules
(firewallRules entries named AllowIPv6 and AllowIPv6Alt) that are unused and the
implementation only detects IPv4 wildcards (0.0.0.0); either remove those IPv6
fixtures or update the public-access detection logic to recognize IPv6 wildcards
(:: and ::/0). Locate the code that inspects firewallRules for public access
(the function that currently checks for "0.0.0.0" patterns) and extend it to
treat "::" and "::/0" as public as well, then update or add tests asserting that
AllowIPv6 and AllowIPv6Alt cause public access to be reported (or delete those
two fixtures if you choose to remove unused IPv6 tests).
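A sketch of wildcard detection covering both address families, using the values named in the prompt:

```js
// Treats IPv4 and IPv6 "anywhere" rules as public exposure.
const PUBLIC_WILDCARDS = ['0.0.0.0', '0.0.0.0/0', '::', '::/0'];
const isPublicWildcard = (startIp) => PUBLIC_WILDCARDS.includes(startIp);
```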
In `@plugins/azure/postgresqlserver/postgresqlServerPublicAccess.js`:
- Around line 73-90: The code in postgresqlServerPublicAccess.js inverts the
intended semantics: postgresql_server_allowed_ips (used via checkAllowedIps and
allowedIps) currently triggers publicAccess=true when an endIpAddr matches an
entry, which is confusing; either rename the setting to
postgresql_server_flagged_end_ips and update its description to "IP addresses
which, when used as end IPs with a 0.0.0.0 start, indicate public exposure" and
rename usages (checkAllowedIps, allowedIps) accordingly, or keep the setting
name but invert the logic so publicAccess is set only when an endIpAddr is NOT
in allowedIps; update the if conditions around startIpAddr/endIpAddr and any
docs/tests that reference postgresql_server_allowed_ips to reflect the chosen
semantics.
In `@plugins/azure/sqlserver/azureADAdminEnabled.spec.js`:
- Around line 72-78: The test's description string in the it(...) block is
misleadingly referencing "App Service" while the assertions check Entra ID admin
on a SQL Server; update the it(...) description to accurately describe the
behavior (e.g., "should give passing result if Entra ID admin is enabled on the
SQL server") so the test name matches the assertions — locate the it('should
give passing result if enabled App Service', function(done) { ... }) and change
only the human-readable description string.
- Around line 43-49: Rename the test description string in the it block
currently written as "should give failing result if disable App Service" to
accurately reflect the assertion about Entra ID admin on SQL Server (e.g.,
"should give failing result if Entra ID admin is not enabled on SQL Server");
update the it(...) description where this appears so the test name matches the
expectations checked inside the callback (results[0].message contains 'Entra ID
admin is not enabled on the server').
In `@plugins/azure/sqlserver/sqlServerManagedIdentity.js`:
- Around line 38-44: The loop in servers.data.forEach currently checks
server.identity.type for exact matches 'userassigned' or 'systemassigned', which
misses combined values like 'SystemAssigned,UserAssigned'; update the condition
in that loop (where server.identity.type is inspected before calling
helpers.addResult) to perform a case-insensitive substring check (e.g.,
normalize toLowerCase() and use includes('userassigned') ||
includes('systemassigned')) or alternatively check
server.identity.userAssignedIdentities or server.identity.principalId to detect
any managed identity, then call helpers.addResult(results, 0, ...) when any
managed identity is present and helpers.addResult(results, 2, ...) only when
none are found (retain location and server.id).
In `@plugins/azure/storageaccounts/storageAccountPrivateEndpoint.spec.js`:
- Around line 4-79: Add a new storage account fixture to the storageAccounts
array that exercises the missing edge case: create an entry (e.g., name 'acc7')
with "publicNetworkAccess" set to "Enabled" (or omitted), "networkAcls": {
"defaultAction": "Allow", "ipRules": [ { "value": "10.0.0.0/24", "action":
"Allow" } ], "virtualNetworkRules": [] } so the test covers defaultAction:
'Allow' combined with a non-empty ipRules array and exposes the implementation
bug; place it alongside the other fixtures in
storageAccountPrivateEndpoint.spec.js.
- Around line 103-205: Add a new test that asserts
storageAccountPrivateEndpoint.run returns a failing status when an account (use
storageAccounts[6] / acc7) has public network access enabled with
networkAcls.defaultAction === 'Allow' even though ipRules exist: call
storageAccountPrivateEndpoint.run with cache created from storageAccounts[6] and
settings { check_selected_networks: true }, then expect results[0].status === 2
and results[0].message to include 'Storage account is publicly accessible' and
region 'eastus'. This mirrors the other tests (see existing it blocks) and
ensures the edge case is covered.
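A hypothetical shape for the `acc7` fixture described above (all field values illustrative, drawn from the description):

```js
{
    name: 'acc7',
    publicNetworkAccess: 'Enabled',
    networkAcls: {
        defaultAction: 'Allow',
        ipRules: [{ value: '10.0.0.0/24', action: 'Allow' }],
        virtualNetworkRules: []
    }
}
```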
In `@plugins/azure/synapse/synapseWorkspaceAdAuthEnabled.js`:
- Around line 9-11: Update the terminology in the exported metadata strings of
synapseWorkspaceAdAuthEnabled.js: replace occurrences of "Azure Entra ID" with
"Microsoft Entra ID" in the description and more_info (and anywhere else in that
file such as the description and more_info properties) so the text uses the
official "Microsoft Entra ID" branding while keeping the existing messaging and
recommended_action unchanged.
In `@plugins/azure/virtualmachines/vmPrivilegeAnalysis.js`:
- Around line 17-22: The run function in vmPrivilegeAnalysis.js (run) currently
returns empty results; replace this placeholder with actual privilege analysis
or at minimum add a clear TODO and return an informational result indicating
"not implemented" so callers get meaningful output. Locate the run function and
either implement the analysis logic using the plugin's expected inputs from
cache/settings (collect VM identity, role assignments, effective permissions,
etc.) and populate results/source accordingly, or add a TODO comment and push an
informational result object (e.g., status/message fields) instead of an empty
array so the plugin communicates its unimplemented state.
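A minimal sketch of the second option, an explicit "not implemented" result (the status code and message are assumptions):

```js
run: function(cache, settings, callback) {
    const results = [];
    const source = {};
    // TODO: implement privilege analysis (VM identity, role assignments,
    // effective permissions). Until then, report an explicit placeholder
    // result instead of an empty array.
    helpers.addResult(results, 3, 'VM privilege analysis is not yet implemented', 'global');
    callback(null, results, source);
}
```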
In `@plugins/azure/virtualmachines/vmWindowsAntiMalwareExtension.js`:
- Around line 57-63: The find predicate in virtualMachineExtensions.data
(variable found) redundantly checks vmExt.name against 'iaasantimalware' and
'iaasantimalwareext' in addition to
vmExt.name.toLowerCase().includes('antimalware'); simplify the predicate by
removing the explicit equality checks and just use a single guard that verifies
vmExt.name exists, then vmExt.name.toLowerCase().includes('antimalware') &&
vmExt.provisioningState === 'Succeeded' so the find uses only the include-based
match plus the provisioningState check.
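The simplified predicate might read (sketch only):

```js
const found = virtualMachineExtensions.data.find(vmExt =>
    vmExt.name &&
    vmExt.name.toLowerCase().includes('antimalware') &&
    vmExt.provisioningState === 'Succeeded'
);
```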
In `@plugins/azure/virtualmachinescaleset/scaleSetAdAuthEnabled.spec.js`:
- Around line 1-2: Rename the module and test files to use Entra ID terminology:
change filenames from scaleSetAdAuthEnabled.js and scaleSetAdAuthEnabled.spec.js
to scaleSetEntraIdAuthEnabled.js and scaleSetEntraIdAuthEnabled.spec.js, update
the require/import in the spec (the variable currently named
scaleSetAdAuthEnabled) and any other references to use
scaleSetEntraIdAuthEnabled, and ensure module.exports in the implementation
matches the new name so tests import the renamed symbol correctly.
In `@plugins/google/cloudfunctionsv2/cloudFunctionV2HttpsOnly.spec.js`:
- Line 74: The describe block name 'httpTriggerRequireHttps' should be renamed
to match the plugin module 'cloudFunctionV2HttpsOnly' for consistency; update
the describe(...) invocation in the test file (currently
describe('httpTriggerRequireHttps', ...)) to
describe('cloudFunctionV2HttpsOnly', ...) so test output and the plugin name
align and are easier to read when running tests.
In `@plugins/google/cloudfunctionsv2/cloudFunctionV2IngressSettings.js`:
- Around line 53-56: The message added in the else branch when ingressSettings
is missing (the helpers.addResult call referencing results, region, func.name)
is misleading; update that message to state that missing ingressSettings
defaults to ALLOW_ALL (allows all traffic) and therefore represents a potential
security risk. Keep the same call to helpers.addResult and parameters (results,
severity 2, region, func.name) but replace the text 'Cloud Function does not
have ingress settings configured' with a clear warning that the default
ALLOW_ALL ingress is in effect and may expose the function to unrestricted
traffic.
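Keeping the same call shape, the reworded result might look like this (message text illustrative):

```js
helpers.addResult(results, 2,
    'Cloud Function has no ingress settings configured; the ALLOW_ALL default is in effect and may expose the function to unrestricted traffic',
    region, func.name);
```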
In `@plugins/google/cloudfunctionsv2/cloudFunctionV2IngressSettings.spec.js`:
- Line 69: The test suite Describe block name is inconsistent with the plugin
filename; update the describe('ingressAllTrafficDisabled', ...) to
describe('cloudFunctionV2IngressSettings', ...) (or another name that matches
the plugin file cloudFunctionV2IngressSettings) so the test suite aligns with
the plugin; ensure any nested test titles or references expecting
"ingressAllTrafficDisabled" are adjusted accordingly to avoid mismatch when
running tests.
In `@plugins/google/cloudfunctionsv2/cloudFunctionV2LabelsAdded.spec.js`:
- Line 57: Rename the describe block string from 'cloudFunctionLabelsAdded' to
'cloudFunctionV2LabelsAdded' to match the plugin module name
(cloudFunctionV2LabelsAdded); locate the describe(...) invocation in
cloudFunctionV2LabelsAdded.spec.js (the line currently reading
describe('cloudFunctionLabelsAdded', ...)) and update the first argument to
'cloudFunctionV2LabelsAdded' so the test suite name is consistent with the
plugin.
In `@plugins/google/compute/computePrivilegeAnalysis.js`:
- Line 10: The apis: [''] entry in computePrivilegeAnalysis.js (and the same
pattern in the other 7 privilege-analysis plugins) fails the ASL input
validation (see helpers/asl.js check if (!input.apis || !input.apis[0]) ), so
change the plugin either by removing ASL dependency (delete the apis property
and any ASL wiring) when the plugin.run is a no-op, or supply real API IDs
(replace apis: [''] with a non-empty array of valid API identifier strings) and
ensure the plugin.run implementation actually calls those APIs; update the
computePrivilegeAnalysis.js apis property and run function (and mirror the same
change across Kubernetes, Cloud Functions, Lambda, VM, EKS, AKS, and App Service
Functions plugins) so the ASL check passes.
In `@plugins/google/compute/instanceLeastPrivilege.js`:
- Around line 95-125: The broad-role detection in the instances.data.forEach
block (using serviceAccountRoles, hasBroadRole and roles.filter) is too narrow;
update the logic to (1) expand the predicate to include known permissive roles
like 'roles/iam.securityAdmin' and any role name patterns that imply wide
permission (e.g., contains 'securityAdmin' or other known admin keywords), (2)
detect custom roles that may grant wildcard permissions by checking role
definitions/permissions (from serviceAccountRoles data or an available
rolePermissions mapping) for '*' or overly broad permissions, and (3) collect
the matching broad roles into a matchedBroadRoles array (instead of just setting
hasBroadRole) and pass or include that information in the helpers.addResult call
so the result message for the instance service account includes which specific
broad role(s) were detected.
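A sketch of collecting the matched roles (the role list and message are illustrative):

```js
// Known permissive roles plus a pattern for admin-style names; custom roles
// with wildcard permissions would also need a rolePermissions lookup.
const knownBroadRoles = ['roles/owner', 'roles/editor', 'roles/iam.securityAdmin'];
const matchedBroadRoles = roles.filter(role =>
    knownBroadRoles.includes(role) || /admin/i.test(role));
if (matchedBroadRoles.length) {
    helpers.addResult(results, 2,
        `Instance service account has broad role(s): ${matchedBroadRoles.join(', ')}`,
        region, instance.id);
}
```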
In `@plugins/google/kubernetes/kubernetesPrivilegeAnalysis.js`:
- Around line 19-24: The run function is a stub that returns empty results;
implement it to (1) declare the required API names in the plugin's apis array
(e.g., "container:list:projects.locations.clusters.list" and
"iam:list:projects.serviceAccounts.list"), (2) read clusters and service account
data from the provided cache parameter (use cache lookups for cluster and
service account keys), (3) analyze IAM bindings and Kubernetes RBAC/workload
identity mappings to detect over-privileged service accounts (check for broad
roles, wildcard principals, and mismatched workload-identity bindings), and (4)
populate the existing results array with findings objects (include severity,
resource identifiers, and descriptions) before invoking callback(null, results,
source); reference the run function, results, source, cache, and callback
identifiers when making changes.
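A skeleton of an implemented run() consistent with those steps (cache key paths and messages are assumptions):

```js
run: function(cache, settings, callback) {
    const results = [];
    const source = {};
    // Illustrative cache lookup; real key paths depend on the declared apis.
    const clusters = helpers.addSource(cache, source, ['clusters', 'list', 'global']);
    if (!clusters || clusters.err || !clusters.data) {
        helpers.addResult(results, 3, 'Unable to query Kubernetes clusters', 'global');
        return callback(null, results, source);
    }
    // TODO: analyze IAM bindings and workload-identity mappings per cluster
    // and add a failing result for each over-privileged service account.
    callback(null, results, source);
}
```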
```diff
+const { TableServiceClient, AzureNamedKeyCredential } = require('@azure/data-tables');
 var async = require('async');

 module.exports = function(collection, reliesOn, callback) {
     if (!reliesOn['storageAccounts.listKeys']) return callback();

-    var azureStorage = require('azure-storage');
+    if (!collection['tableService']['listTablesSegmented']) collection['tableService']['listTablesSegmented'] = {};
+    if (!collection['tableService']['getTableAcl']) collection['tableService']['getTableAcl'] = {};
-    if (!collection['queueService']['listQueuesSegmented']) collection['queueService']['listQueuesSegmented'] = {};
-    if (!collection['queueService']['getQueueAcl']) collection['queueService']['getQueueAcl'] = {};

     // Loop through regions and properties in reliesOn
-    async.eachOfLimit(reliesOn['storageAccounts.listKeys'], 10,function(regionObj, region, cb) {
-        collection['queueService']['listQueuesSegmented'][region] = {};
-        collection['queueService']['getQueueAcl'][region] = {};
+    async.eachOfLimit(reliesOn['storageAccounts.listKeys'], 10, function(regionObj, region, cb) {
+        collection['tableService']['listTablesSegmented'][region] = {};
+        collection['tableService']['getTableAcl'][region] = {};

         async.eachOfLimit(regionObj, 10, function(subObj, resourceId, sCb) {
-            collection['queueService']['listQueuesSegmented'][region][resourceId] = {};
-
-            if (subObj && subObj.data && subObj.data.keys && subObj.data.keys[0] && subObj.data.keys[0].value) {
-                // Extract storage account name from resourceId
-                var storageAccountName = resourceId.substring(resourceId.lastIndexOf('/') + 1);
-                var storageService = new azureStorage['QueueService'](storageAccountName, subObj.data.keys[0].value);
-
-                storageService.listQueuesSegmented(null, function(serviceErr, serviceResults) {
-                    if (serviceErr || !serviceResults) {
-                        collection['queueService']['listQueuesSegmented'][region][resourceId].err = (serviceErr || 'No data returned');
-                        sCb();
-                    } else {
-                        collection['queueService']['listQueuesSegmented'][region][resourceId].data = serviceResults.entries;
-
-                        // Add ACLs
-                        async.eachLimit(serviceResults.entries, 10, function(entryObj, entryCb) {
-                            var entryId = `${resourceId}/queueService/${entryObj.name}`;
-                            collection['queueService']['getQueueAcl'][region][entryId] = {};
-
-                            storageService.getQueueAcl(entryObj.name, function(getErr, getData) {
-                                if (getErr || !getData) {
-                                    collection['queueService']['getQueueAcl'][region][entryId].err = (getErr || 'No data returned');
-                                } else {
-                                    collection['queueService']['getQueueAcl'][region][entryId].data = getData;
-                                }
-                                entryCb();
-                            });
-                        }, function() {
-                            sCb();
-                        });
+            collection['tableService']['listTablesSegmented'][region][resourceId] = {};

+            const key = subObj && subObj.data && subObj.data.keys && subObj.data.keys[0] && subObj.data.keys[0].value ? subObj.data.keys[0].value : null;
+            if (!key) return sCb();

+            const storageAccountName = resourceId.substring(resourceId.lastIndexOf('/') + 1);
+            const credential = new AzureNamedKeyCredential(storageAccountName, key);
+            const serviceClient = new TableServiceClient(
+                `https://${storageAccountName}.table.core.windows.net`,
+                credential
+            );

+            const tables = [];

+            (async() => {
+                try {
+                    for await (const table of serviceClient.listTables()) {
+                        tables.push(table.name);
+                    }
-                });
-            } else {
-                sCb();
-            }
-        }, function() {
-            cb();
-        });
-    }, function() {
-        callback();
-    });

+                    collection['tableService']['listTablesSegmented'][region][resourceId].data = tables;

+                    async.eachLimit(tables, 10, async(tableName, tableCb) => {
+                        const tableId = `${resourceId}/tableService/${tableName}`;
+                        collection['tableService']['getTableAcl'][region][tableId] = {};

+                        try {
+                            const aclResponse = await serviceClient.getAccessPolicy(tableName);
+                            collection['tableService']['getTableAcl'][region][tableId].data = aclResponse;
+                        } catch (getErr) {
+                            collection['tableService']['getTableAcl'][region][tableId].err = getErr.message || getErr;
+                        }

+                        tableCb();
+                    }, sCb);
+                } catch (tableErr) {
+                    collection['tableService']['listTablesSegmented'][region][resourceId].err = tableErr.message || tableErr;
+                    sCb();
+                }
+            })();
+        }, cb);
+    }, callback);
 };
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check for table service collector
echo "=== Checking for tableService collector ==="
fd -t f "listTablesSegmented.js" collectors/azure/
# Check if queue service should have its own implementation
echo "=== Checking queueService directory structure ==="
ls -la collectors/azure/queueService/ 2>/dev/null || echo "Directory not found"
# Search for references to queueService collection
echo "=== Searching for queueService collection usage ==="
rg -n "collection\['queueService'\]" --type js -C2
# Check collector.js for queue/table service wiring
echo "=== Checking collector wiring ==="
rg -n "queueService|tableService" collectors/azure/collector.js 2>/dev/null | head -30
```

Repository: khulnasoft/cloudexploit

Length of output: 7252
Critical: File implements Table Service logic but must implement Queue Service logic.
This file is in `collectors/azure/queueService/` but uses Table Service operations:
- Uses `TableServiceClient` and `listTables()` instead of `QueueServiceClient`
- Populates `collection['tableService']` instead of `collection['queueService']`
- References table ACL retrieval instead of queue ACL retrieval

A correct implementation exists in `collectors/azure/queueService/listQueuesSegmentedNew.js`, which properly:
- Uses Queue Service APIs
- Populates `collection['queueService']['listQueuesSegmented']` and `collection['queueService']['getQueueAcl']`

The current `listQueuesSegmented.js` is an incorrect migration artifact that breaks queue service enumeration. Replace with the correct implementation from `listQueuesSegmentedNew.js` or remove this file if superseded.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@collectors/azure/queueService/listQueuesSegmented.js` around lines 1 - 57,
The file mistakenly implements Table Service logic (uses TableServiceClient,
listTables(), and writes to collection['tableService'] and getTableAcl) but
lives in the queueService collector; replace its contents with the correct Queue
Service implementation (use QueueServiceClient,
listQueues()/listQueuesSegmented, and write to
collection['queueService']['listQueuesSegmented'] and
collection['queueService']['getQueueAcl'] / getQueueAccessPolicy) by copying the
implementation from listQueuesSegmentedNew.js (or delete this file if the new
file supersedes it), ensuring any references to TableServiceClient,
listTables(), collection['tableService'], and getTableAcl are removed and
replaced with the queue-specific symbols (QueueServiceClient, listQueues(),
collection['queueService'] and queue ACL retrieval).
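For reference, the queue-side equivalent could be sketched with `@azure/storage-queue` (the client APIs shown are as documented for that package; the wiring into `collection` is omitted and the function name is illustrative):

```js
const { QueueServiceClient, StorageSharedKeyCredential } = require('@azure/storage-queue');

// Lists queues for one storage account and fetches each queue's access policy.
async function listQueuesWithAcls(storageAccountName, accountKey) {
    const credential = new StorageSharedKeyCredential(storageAccountName, accountKey);
    const serviceClient = new QueueServiceClient(
        `https://${storageAccountName}.queue.core.windows.net`, credential);

    const queues = [];
    for await (const queue of serviceClient.listQueues()) {
        queues.push(queue.name);
    }

    const acls = {};
    for (const name of queues) {
        // getAccessPolicy returns the queue's stored access policies
        acls[name] = await serviceClient.getQueueClient(name).getAccessPolicy();
    }
    return { queues, acls };
}
```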
```diff
-The second section in `collect.js` is `postcalls`, which is an array of objects defining API calls that rely on other calls first returned. For example, if you need to query for all `CloudFront distributions`, and then loop through each one and run a more detailed call, you would add the `CloudFront:listDistributions` call in the [`calls`](https://github.com/khulnasoft/cloudexploit/blob/master/collectors/aws/collector.js#L58-L64) section and then the more detailed call in [`postcalls`](https://github.com/khulnasoft/cloudexploit/blob/master/collectors/aws/collector.js#L467-L473), setting it to rely on the output of `listDistributions` call.
+The second section in `collect.js` is `postcalls`, which is an array of objects defining API calls that rely on other calls first returned. For example, if you need to query for all `CloudFront distributions`, and then loop through each one and run a more detailed call, you would add the `CloudFront:listDistributions` call in the [`calls`](https://github.com/cloudexploit/scans/blob/master/collectors/aws/collector.js#L58-L64) section and then the more detailed call in [`postcalls`](https://github.com/cloudexploit/scans/blob/master/collectors/aws/collector.js#L467-L473), setting it to rely on the output of `listDistributions` call.
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify all updated GitHub collector URLs return valid responses
declare -a urls=(
  "https://github.com/cloudexploit/scans/blob/master/collectors/aws/collector.js"
  "https://github.com/cloudexploit/scans/blob/master/collectors/azure/collector.js"
  "https://github.com/cloudexploit/scans/blob/master/collectors/google/collector.js"
  "https://github.com/cloudexploit/scans/blob/master/collectors/oracle/collector.js"
)
echo "Checking new URLs (cloudexploit/scans):"
for url in "${urls[@]}"; do
  status=$(curl -I -s -o /dev/null -w "%{http_code}" "$url")
  echo "$url - HTTP $status"
done
echo ""
echo "Checking old URLs (khulnasoft/cloudexploit) for comparison:"
declare -a old_urls=(
  "https://github.com/khulnasoft/cloudexploit/blob/master/collectors/aws/collector.js"
  "https://github.com/khulnasoft/cloudexploit/blob/master/collectors/azure/collector.js"
  "https://github.com/khulnasoft/cloudexploit/blob/master/collectors/google/collector.js"
  "https://github.com/khulnasoft/cloudexploit/blob/master/collectors/oracle/collector.js"
)
for url in "${old_urls[@]}"; do
  status=$(curl -I -s -o /dev/null -w "%{http_code}" "$url")
  echo "$url - HTTP $status"
done
```

Repository: khulnasoft/cloudexploit

Length of output: 915
Revert all collector URLs to correct repository path — all new links are broken (HTTP 404).
The systematic change from khulnasoft/cloudexploit/blob/master to cloudexploit/scans/blob/master has introduced broken documentation links. Verification confirms all four collector URLs now return HTTP 404:
- AWS collector: https://github.com/cloudexploit/scans/blob/master/collectors/aws/collector.js
- Azure collector: https://github.com/cloudexploit/scans/blob/master/collectors/azure/collector.js
- GCP collector: https://github.com/cloudexploit/scans/blob/master/collectors/google/collector.js
- Oracle collector: https://github.com/cloudexploit/scans/blob/master/collectors/oracle/collector.js
The correct URLs (returning HTTP 200) are:
- https://github.com/khulnasoft/cloudexploit/blob/master/collectors/aws/collector.js
- https://github.com/khulnasoft/cloudexploit/blob/master/collectors/azure/collector.js
- https://github.com/khulnasoft/cloudexploit/blob/master/collectors/google/collector.js
- https://github.com/khulnasoft/cloudexploit/blob/master/collectors/oracle/collector.js
Revert the URL changes on lines 25, 40, 55, 70, 86, 102, 118, 132 back to the original khulnasoft/cloudexploit paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/writing-plugins.md` at line 25, Revert the broken repository links in
the docs by changing all occurrences of "cloudexploit/scans/blob/master" back to
"khulnasoft/cloudexploit/blob/master" for the collector URLs referenced in the
collect.js/postcalls explanation (the AWS, Azure, GCP and Oracle collector links
mentioned around the collect.js section and the `postcalls` example);
specifically update the eight modified link instances so the AWS, Azure, Google
and Oracle collector URLs point to the khulnasoft/cloudexploit repository as
listed in the review comment.
```js
'Guard Duty':
    {
        enabled: true, isSingleSource: true, InvAsset: 'detector', InvService: 'guardduty',
        InvResourceCategory: 'cloud_resources', InvResourceType: 'Guardduty Detector',
        BridgeProvider: 'aws', BridgeServiceName: 'guardduty', BridgePluginCategoryName: 'GuardDuty',
        BridgeArnIdentifier: '', BridgeIdTemplate: 'arn:aws:guardduty:{region}:{cloudAccount}:detector/{id}', BridgeResourceType: 'detector',
        BridgeResourceNameIdentifier: 'id', BridgeExecutionService: 'GuardDuty',
        BridgeCollectionService: 'guardduty', BridgeCall: 'getDetector', DataIdentifier: 'data',
    },
```
🧩 Analysis chain
🌐 Web query:
AWS GuardDuty GetDetector API response schema fields
💡 Result:
GetDetector returns a JSON object with these top-level response fields: createdAt, dataSources (deprecated), features, findingPublishingFrequency, serviceRole, status, tags, updatedAt. [1]
Schema (shape):

```
{
  "createdAt": "string",
  "dataSources": { /* deprecated */ },
  "features": [
    {
      "additionalConfiguration": [
        { "name": "string", "status": "string", "updatedAt": 0 }
      ],
      "name": "string",
      "status": "string",
      "updatedAt": 0
    }
  ],
  "findingPublishingFrequency": "FIFTEEN_MINUTES|ONE_HOUR|SIX_HOURS",
  "serviceRole": "string",
  "status": "ENABLED|DISABLED",
  "tags": { "string": "string" },
  "updatedAt": "string"
}
```

[1]
Notes: dataSources is deprecated in favor of features (GuardDuty feature-object migration). [2]
Sources: AWS GuardDuty API Reference GetDetector [1]; GuardDuty API changes (feature object vs dataSources) [2].
🌐 Web query:
AWS GuardDuty GetDetector API response includes DetectorId in metadata or headers
💡 Result:
No. GuardDuty GetDetector does not return DetectorId in the response body, response metadata, or HTTP response headers. The detectorId is only part of the request URI path (GET /detector/{detectorId}), and the documented successful response contains only detector properties like createdAt, status, serviceRole, etc. (no detectorId field). [1]
What you can expect in SDK “metadata” / headers is the AWS request ID (e.g., x-amzn-requestid / RequestId in ResponseMetadata), not the GuardDuty DetectorId. [2][3]
So if you need the DetectorId alongside the GetDetector result, you must carry it from your request context (or obtain it via ListDetectors, then call GetDetector). [1]
Sources: [1] GetDetector API reference (request/response schema), [2] AWS SDK ResponseMetadata (request ID), [3] example SDK ResponseMetadata including HTTPHeaders / x-amzn-requestid.
Fix GuardDuty identifier mapping—GetDetector doesn't return detector ID.
Line 570 sets BridgeResourceNameIdentifier: 'id', but the AWS GuardDuty GetDetector API response does not include an id or DetectorId field. The detector ID only exists in the request URI path, not the response body, metadata, or headers. This mapping will cause inventory IDs to be empty. Obtain the detector ID from request context or retrieve it via ListDetectors before calling GetDetector.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helpers/aws/api.js` around lines 564 - 572, The BridgeResourceNameIdentifier
currently set to 'id' for the "Guard Duty" entry is invalid because the
GetDetector response does not contain an id/DetectorId; update the mapping so
inventory uses the detector ID from request context or by calling ListDetectors
prior to GetDetector instead of relying on the response body. Concretely, change
the BridgeResourceNameIdentifier mapping (for the "Guard Duty" object) to
reference the path/request parameter name that contains the detector id (e.g.,
detectorId or the request context key your code uses), or implement a
ListDetectors call to fetch the detector IDs and use those as the inventory
identifier before calling GetDetector; ensure any code that populates
DataIdentifier/InvResource fields pulls the detector id from that request
context or ListDetectors result rather than from GetDetector's response.
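A sketch of carrying the detector ID from the request context with the AWS SDK for JavaScript v2 (wiring into the inventory bridge omitted):

```js
const AWS = require('aws-sdk');
const guardduty = new AWS.GuardDuty({ region: 'us-east-1' });

// GetDetector's response body never echoes the detector ID, so keep the ID
// from ListDetectors and attach it to each result yourself.
guardduty.listDetectors({}, function(listErr, listData) {
    if (listErr || !listData || !listData.DetectorIds) return;
    listData.DetectorIds.forEach(function(detectorId) {
        guardduty.getDetector({ DetectorId: detectorId }, function(getErr, detector) {
            if (getErr || !detector) return;
            // detector has Status, ServiceRole, etc., but no DetectorId field
            console.log(detectorId, detector.Status);
        });
    });
});
```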
```js
localEvent.collection = {};
localEvent.previousCollection = {};

localEvent.cloud_account_identifier = settings.identifier.cloud_account_identifier;
```
Potential null dereference: settings.identifier may be undefined.
Line 30 accesses settings.identifier.cloud_account_identifier before the defensive check on line 39 (if (!localSettings.identifier)). If settings.identifier is undefined, this will throw a TypeError.
🐛 Proposed fix: use optional chaining or move after the defensive check
Option 1: Use optional chaining

```diff
-localEvent.cloud_account_identifier = settings.identifier.cloud_account_identifier;
+localEvent.cloud_account_identifier = settings.identifier?.cloud_account_identifier;
```

Option 2: Move the assignment after the defensive check (line 39)

```diff
 if (!localSettings.identifier) localSettings.identifier = {};
 localSettings.identifier.service = serviceName.toLowerCase();
+localEvent.cloud_account_identifier = localSettings.identifier.cloud_account_identifier;
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-localEvent.cloud_account_identifier = settings.identifier.cloud_account_identifier;
+localEvent.cloud_account_identifier = settings.identifier?.cloud_account_identifier;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@helpers/shared.js` at line 30, The assignment to
localEvent.cloud_account_identifier reads
settings.identifier.cloud_account_identifier which can throw if
settings.identifier is undefined; either use optional chaining when reading it
(e.g., use settings.identifier?.cloud_account_identifier) or move this
assignment to after the existing defensive check on localSettings.identifier
(the block that starts with if (!localSettings.identifier) ) so you only access
identifier when it is present; update the code path that sets
localEvent.cloud_account_identifier accordingly.
```js
function hasFailures(roleFailures) {
    if (roleFailures.admin) return true;

    if (roleFailures.managed) {
        if (roleFailures.managed.allActionsAllResources.length) return true;
        if (roleFailures.managed.allActionsSelectedResources.length) return true;
        if (roleFailures.managed.actionsAllResources.length) return true;
        if (Object.keys(roleFailures.managed.wildcardActions).length) return true;
        if (roleFailures.managed.regexMismatch.length) return true;
    }

    if (roleFailures.inline) {
        if (roleFailures.inline.allActionsAllResources.length) return true;
        if (roleFailures.inline.allActionsSelectedResources.length) return true;
        if (roleFailures.inline.actionsAllResources.length) return true;
        if (Object.keys(roleFailures.inline.wildcardActions).length) return true;
        if (roleFailures.inline.regexMismatch.length) return true;
    }

    return false;
}
```
Bug: regexMismatch.length is always undefined since regexMismatch is an object, not an array.
Lines 410 and 418 check .length on regexMismatch, but it's defined as an object ({}) on lines 338 and 347. Objects don't have a .length property, so this check will always be falsy, causing regex mismatch failures to be silently ignored and producing false PASS results.
🐛 Proposed fix
```diff
 if (roleFailures.managed) {
     if (roleFailures.managed.allActionsAllResources.length) return true;
     if (roleFailures.managed.allActionsSelectedResources.length) return true;
     if (roleFailures.managed.actionsAllResources.length) return true;
     if (Object.keys(roleFailures.managed.wildcardActions).length) return true;
-    if (roleFailures.managed.regexMismatch.length) return true;
+    if (Object.keys(roleFailures.managed.regexMismatch).length) return true;
 }
 if (roleFailures.inline) {
     if (roleFailures.inline.allActionsAllResources.length) return true;
     if (roleFailures.inline.allActionsSelectedResources.length) return true;
     if (roleFailures.inline.actionsAllResources.length) return true;
     if (Object.keys(roleFailures.inline.wildcardActions).length) return true;
-    if (roleFailures.inline.regexMismatch.length) return true;
+    if (Object.keys(roleFailures.inline.regexMismatch).length) return true;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/aws/iam/iamRolePolicies.js` around lines 402 - 422, The hasFailures
function currently checks roleFailures.managed.regexMismatch.length and
roleFailures.inline.regexMismatch.length, but regexMismatch is an object so
.length is undefined; update those two checks in hasFailures to test
Object.keys(roleFailures.managed.regexMismatch).length and
Object.keys(roleFailures.inline.regexMismatch).length (or otherwise check
Object.keys(...).length > 0) so regex mismatch entries are detected correctly;
keep all other checks the same and only change the two regexMismatch checks to
use Object.keys to determine emptiness.
```js
more_info: 'Compute instances having service account attached with excessive permissions can lead to security risks. Compute instances should have restrictive permissions assigned through service accounts for security best practices.',
link: 'https://cloud.google.com/compute/docs/access/iam',
recommended_action: 'Make sure that compute instances are using service account with only required permissions.',
apis: [''],
```
🧩 Analysis chain
🏁 Script executed:

```bash
# First, check the file and verify line 10
head -20 plugins/google/compute/computePrivilegeAnalysis.js
```

Repository: khulnasoft/cloudexploit

Length of output: 951

🏁 Script executed:

```bash
# Check how apis is defined across plugins
rg -n --type=js "apis\s*:" plugins | head -30
```

Repository: khulnasoft/cloudexploit

Length of output: 2366

🏁 Script executed:

```bash
# Check how the runner/helpers consume apis
rg -n --type=js "\bapis\b" helpers collectors | head -40
```

Repository: khulnasoft/cloudexploit

Length of output: 637

🏁 Script executed:

```bash
# Search for other instances of apis with empty string
rg -n --type=js "apis\s*:\s*\[\s*''\s*\]" plugins
```

Repository: khulnasoft/cloudexploit

Length of output: 640

🏁 Script executed:

```bash
# Check the helpers/asl.js validation logic more closely
sed -n '195,205p' helpers/asl.js
```

Repository: khulnasoft/cloudexploit

Length of output: 689
Empty string in apis array and ineffective suggested fix.
apis: [''] is problematic, but changing it to apis: [] won't resolve the issue. The ASL validation in helpers/asl.js (line 197) rejects both—it checks if (!input.apis || !input.apis[0]), which fails when apis[0] is either undefined (empty array) or '' (empty string).
This pattern appears across 8 privilege analysis plugins (Kubernetes, Compute, Cloud Functions, Lambda, VM, EKS, AKS, and App Service Functions), all with empty run implementations. The actual fix requires either:
- Removing the ASL dependency if these plugins don't call APIs, or
- Providing legitimate API IDs that match the check's requirements
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/google/compute/computePrivilegeAnalysis.js` at line 10, The apis:
[''] entry in computePrivilegeAnalysis.js (and the same pattern in the other 7
privilege-analysis plugins) fails the ASL input validation (see helpers/asl.js
check if (!input.apis || !input.apis[0]) ), so change the plugin either by
removing ASL dependency (delete the apis property and any ASL wiring) when the
plugin.run is a no-op, or supply real API IDs (replace apis: [''] with a
non-empty array of valid API identifier strings) and ensure the plugin.run
implementation actually calls those APIs; update the computePrivilegeAnalysis.js
apis property and run function (and mirror the same change across Kubernetes,
Cloud Functions, Lambda, VM, EKS, AKS, and App Service Functions plugins) so the
ASL check passes.




Summary by CodeRabbit
New Features
Enhancements
Updates