Conversation
- Update Azure and AWS security settings
- Modify managed identity configurations
- Update test cases for security features
Sorry @envrs, your pull request is larger than the review limit of 150000 diff characters
Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

Walkthrough

This PR introduces multiple enhancements and refactoring across the codebase: removes ESLint configuration, upgrades CI/CD workflows with newer action versions and additional checks, expands AWS/Azure/Google service coverage with new privilege analysis and network exposure plugins, refactors authentication and validation helpers with modern SDK patterns, systematically replaces "Azure AD" terminology with "Azure Entra ID", rebrands the tool from "CloudSploit" to "CloudExploit", and introduces SaaS-based output delivery alongside structured scan data formatting.

Changes
Sequence Diagrams

```mermaid
sequenceDiagram
    participant App as Application
    participant Output as Output Handler
    participant FileSystem as File System
    participant SaaS as SaaS Endpoint
    participant Logger as Logger
    App->>Output: write(collection)
    Output->>Output: Populate scanData.resources
    Output->>Output: Update scanData.summary
    App->>Output: close(error, settings)
    Output->>Output: Compute scan_duration
    Output->>FileSystem: Write scanData JSON
    FileSystem-->>Output: Written
    Output->>Logger: Log file location
    alt SaaS Endpoint Configured
        Output->>Output: Build HTTPS POST request
        Output->>SaaS: POST scanData (with Bearer token)
        alt Success (2xx Response)
            SaaS-->>Output: 200 OK
            Output->>Logger: Log successful delivery
        else Error
            SaaS-->>Output: Error/Non-2xx
            Output->>Logger: Log delivery failure
        end
    end
```
```mermaid
sequenceDiagram
    participant Validator as ASL Validator
    participant Transform as Transform Engine
    participant CIDR as CIDR Helper
    participant Result as Result Aggregator
    Validator->>Validator: Parse condition with runValidation
    alt Wildcard Path [*]
        Validator->>Validator: Iterate array items
        loop Each Item
            Validator->>Transform: Apply transformations
            alt IPRANGE Transformation
                Transform->>Transform: Normalize IP/CIDR
                Transform->>CIDR: Check IP in CIDR range
                CIDR-->>Transform: IPv4/IPv6 match result
            end
            Validator->>Validator: Evaluate per-item condition
            Validator->>Result: Collect item result
        end
    else AliasTarget Property
        Validator->>Validator: Resolve nested AliasTarget
        Validator->>Transform: Apply transformations
        Validator->>Validator: Evaluate condition
    else Standard Path
        Validator->>Transform: Apply transformations
        Validator->>Validator: Evaluate condition
    end
    Result->>Result: Aggregate per-item results
    Result-->>Validator: Composite result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (2 passed)
PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:
Compliance status legend:
- 🟢 Fully Compliant
- 🟡 Partially Compliant
- 🔴 Not Compliant
- ⚪ Requires Further Human Verification
- 🏷️ Compliance label
PR Code Suggestions ✨

Explore these optional code suggestions:
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
helpers/asl/asl-1.js (1)
45-66: Potential null reference error when processing ARN templates without placeholders.

The `placeholders` variable is assigned from `.match()`, which returns `null` when there are no matches. However, line 59 calls `placeholders.forEach()` without a null check, which will throw a `TypeError` if the ARN template has no `{...}` placeholders.

Apply this diff to add a null guard:

```diff
 if (placeholders) {
     extracted_values = placeholders.map(placeholder => {
         const key = placeholder.slice(1, -1);
         if (key === 'value') return [obj][0];
         else return obj[key];
     });
-}
-// Replace other variables
-let converted_string = template_string
-    .replace(/\{region\}/g, region)
-    .replace(/\{cloudAccount\}/g, accountId)
-    .replace(/\{resourceId\}/g, resourceId);
-placeholders.forEach((placeholder, index) => {
+    // Replace other variables
+    let converted_string = template_string
+        .replace(/\{region\}/g, region)
+        .replace(/\{cloudAccount\}/g, accountId)
+        .replace(/\{resourceId\}/g, resourceId);
+    placeholders.forEach((placeholder, index) => {
```

helpers/shared.js (1)
18-42: Guard against missing `settings.identifier` before reading `cloud_account_identifier`.

At line 30, `settings.identifier.cloud_account_identifier` will throw if `settings.identifier` is undefined. This function later defensively initializes `localSettings.identifier`, but that happens after this access and on `localSettings`, not `settings`.

Consider guarding the access (or initializing `identifier`) before using it, e.g.:

```diff
-    localEvent.cloud_account_identifier = settings.identifier.cloud_account_identifier;
+    if (!settings.identifier) {
+        settings.identifier = {};
+    }
+
+    if (settings.identifier.cloud_account_identifier) {
+        localEvent.cloud_account_identifier = settings.identifier.cloud_account_identifier;
+    }
```

This keeps the new field populated when available but avoids runtime errors when `identifier` is absent.

collectors/azure/collector.js (1)
98-141: Harden `errorFilter` against non-object errors and simplify throttling detection.

Two issues in the new throttling detection:

1. `err` isn't guarded:

   ```js
   const errorMessage = typeof err === 'string' ? err : err.message || err.toString();
   ```

   If `err` is `null`/`undefined` (or lacks `message`/`toString`), this can throw inside the retry handler.

2. Case handling is inconsistent: you maintain both upper- and lower-case variants in `throttlingPatterns` but still do a case-sensitive `.includes`, which is fragile and harder to maintain.

A safer, simpler implementation:

```diff
-    errorFilter: function(err) {
-        const errorMessage = typeof err === 'string' ? err : err.message || err.toString();
-
-        // Azure throttling patterns
-        const throttlingPatterns = [
-            'TooManyRequests',
-            'RateLimitExceeded',
-            'Throttling',
-            'Throttled',
-            'RequestThrottled',
-            'RequestLimitExceeded',
-            'ServerBusy',
-            'ServiceBusy',
-            'toomanyrequests',
-            'ratelimitexceeded',
-            'throttling',
-            'throttled',
-            'requestthrottled',
-            'requestlimitexceeded',
-            'serverbusy',
-            'servicebusy',
-            'too many requests',
-            'rate limit',
-            'retry after',
-            'the request is being throttled',
-            'request rate is large',
-            'rate exceeded'
-        ];
-
-        const errorMatch = throttlingPatterns.some(pattern => errorMessage.includes(pattern));
-        return errorMatch;
-    }
+    errorFilter: function(err) {
+        const rawErrorMessage =
+            typeof err === 'string'
+                ? err
+                : (err && err.message) ||
+                  (err && err.toString && err.toString()) ||
+                  '';
+        const errorMessage = rawErrorMessage.toLowerCase();
+
+        // Azure throttling patterns (lowercased)
+        const throttlingPatterns = [
+            'toomanyrequests',
+            'ratelimitexceeded',
+            'throttling',
+            'throttled',
+            'requestthrottled',
+            'requestlimitexceeded',
+            'serverbusy',
+            'servicebusy',
+            'too many requests',
+            'rate limit',
+            'retry after',
+            'the request is being throttled',
+            'request rate is large',
+            'rate exceeded'
+        ];
+
+        return throttlingPatterns.some(pattern =>
+            errorMessage.includes(pattern)
+        );
+    }
```

This avoids crashes in the retry path and keeps throttling detection robust with a single, normalized list. If you want to address Sonar's nesting complaint later, `errorFilter` and `throttlingPatterns` would also be good candidates to lift into a top-level helper.
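Such a lifted helper could look like this standalone sketch; the `isThrottlingError` name and the trimmed pattern list are illustrative, not code from the PR:

```javascript
// Hypothetical top-level helper; name and pattern list are illustrative.
const THROTTLING_PATTERNS = [
    'toomanyrequests',
    'ratelimitexceeded',
    'throttled',
    'too many requests',
    'rate exceeded'
];

function isThrottlingError(err) {
    // Normalize any error shape (string, Error, null) to a lowercase message.
    const raw = typeof err === 'string'
        ? err
        : (err && err.message) || '';
    const message = raw.toLowerCase();
    return THROTTLING_PATTERNS.some(pattern => message.includes(pattern));
}

console.log(isThrottlingError(new Error('TooManyRequests: slow down'))); // true
console.log(isThrottlingError(null));                                    // false
console.log(isThrottlingError('Rate exceeded for subscription'));        // true
```

Keeping the normalization and the pattern list in one place means the retry handler only calls one function, which also resolves the nesting complaint.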
🟡 Minor comments (10)
plugins/azure/apiManagement/apiInstanceManagedIdentity.js (1)
10-10: Inconsistent terminology: both "Azure AD" and "Azure Entra ID" in the same sentence.

The text references both "Azure AD" (first part) and "Azure Entra ID" (second part), which is confusing. Since this PR updates terminology from Azure AD to Azure Entra ID, both occurrences should use the new terminology consistently.

Additionally, "eliminate" should be "eliminates" for subject-verb agreement.

Apply this diff to fix the inconsistent terminology and grammar:

```diff
-    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens.',
+    more_info: 'Enabling managed identities eliminates the need for developers having to manage credentials by providing an identity for the Azure resource in Azure Entra ID and using it to obtain Azure Entra ID tokens.',
```

plugins/azure/containerapps/containerAppManagedIdentity.js (1)
10-10: Fix inconsistent terminology and grammar.

The `more_info` text contains mixed terminology, using both "Azure AD" and "Azure Entra ID" in the same sentence. Additionally, there's a subject-verb agreement issue with "eliminate."

Apply this diff to use consistent terminology and correct grammar:

```diff
-    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens.',
+    more_info: 'Enabling managed identities eliminates the need for developers having to manage credentials by providing an identity for the Azure resource in Microsoft Entra ID and using it to obtain Microsoft Entra ID tokens.',
```

plugins/azure/automationAccounts/automationAcctManagedIdentity.js (1)
10-10: Incomplete terminology migration in documentation text.

The `more_info` field contains inconsistent terminology, referencing both "Azure AD" and "Azure Entra ID" in the same sentence. Since this PR updates Azure AD references to Azure Entra ID, this line should be fully migrated for consistency.

Apply this diff to complete the terminology update:

```diff
-    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens.',
+    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure Entra ID and using it to obtain Azure Entra ID tokens.',
```

plugins/azure/cosmosdb/cosmosdbManagedIdentity.spec.js (1)
10-10: Terminology inconsistency: mixes "Azure AD" and "Azure Entra ID" in the same sentence.

Consider using consistent terminology. Since Azure AD has been rebranded to Microsoft Entra ID, update to use the new name throughout.

```diff
-    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens.',
+    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Microsoft Entra ID and using it to obtain tokens.',
```

plugins/aws/kms/kmsDefaultKeyUsage.js (1)
24-24: Fix typo in EFS delete realtime trigger identifier.

The last trigger is `':efs:DeleteFileSystem'` (leading colon), which likely prevents it from ever matching an `efs:DeleteFileSystem` event. This looks like a small typo and should be corrected to keep EFS delete events covered in realtime.

Suggested change:

```diff
-    realtime_triggers: ['cloudtrail:CreateTrail','cloudtrail:UpdateTrail','cloudtrail:DeleteTrail','ec2:CreateVolume','ec2:DeleteVolume','rds:CreateDBInstance','rds:ModifyDBInstance','rds:DeleteDBInstance','redshift:CreateCluster','redshift:ModifyCluster','redshift:DeleteCluster','s3:CreateBucket','s3:DeleteBucket','s3:PutBucketEncryption','ses:CreateReceiptRule','ses:DeleteReceiptRule','ses:UpdateReceiptRule','workspaces:CreateWorkspaces','workspaces:TerminateWorkspaces','lambda:UpdateFunctionConfiguration','lambda:CreateFunction','lambda:DeleteFunction','cloudwatchlogs:CreateLogGroup','cloudwatchlogs:DeleteLogGroup','cloudwatchlogs:AssociateKmsKey','efs:CreateFileSystem',':efs:DeleteFileSystem'],
+    realtime_triggers: ['cloudtrail:CreateTrail','cloudtrail:UpdateTrail','cloudtrail:DeleteTrail','ec2:CreateVolume','ec2:DeleteVolume','rds:CreateDBInstance','rds:ModifyDBInstance','rds:DeleteDBInstance','redshift:CreateCluster','redshift:ModifyCluster','redshift:DeleteCluster','s3:CreateBucket','s3:DeleteBucket','s3:PutBucketEncryption','ses:CreateReceiptRule','ses:DeleteReceiptRule','ses:UpdateReceiptRule','workspaces:CreateWorkspaces','workspaces:TerminateWorkspaces','lambda:UpdateFunctionConfiguration','lambda:CreateFunction','lambda:DeleteFunction','cloudwatchlogs:CreateLogGroup','cloudwatchlogs:DeleteLogGroup','cloudwatchlogs:AssociateKmsKey','efs:CreateFileSystem','efs:DeleteFileSystem'],
```

plugins/azure/appConfigurations/appConfigManagedIdentity.js (1)
10-10: Fix inconsistent Azure terminology.

The text mixes old and new Azure terminology in the same sentence: "identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens." Since this PR is migrating to Azure Entra ID terminology, both references should use the new naming.

```diff
-    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens.',
+    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure Entra ID and using it to obtain Azure Entra ID tokens.',
```

plugins/azure/containerregistry/acrManagedIdentityEnabled.js (1)
10-10: Inconsistent terminology: mixes "Azure AD" and "Azure Entra ID" in the same sentence.

The `more_info` text references both "Azure AD" and "Azure Entra ID", which is inconsistent with the broader terminology migration in this PR.

```diff
-    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens.',
+    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure Entra ID and using it to obtain Azure Entra ID tokens.',
```

plugins/azure/appservice/functionPrivilegeAnalysis.js (1)
10-10: Invalid `apis` configuration.

The `apis` array contains an empty string `['']`, which is likely incorrect. This should either list the actual Azure APIs needed for this plugin or be an empty array `[]` if no APIs are required yet.

```diff
-    apis: [''],
+    apis: [],
```

exports.js (1)
1044-1051: Typo in Entra ID export key and filename (`appOrgnaizationalDirectoryAccess`).

Both the export key and required module use `appOrgnaizationalDirectoryAccess` (the letters are transposed: "Orgnaizational" instead of "Organizational"). This is easy to miss in consumers and in docs.

Recommend correcting the spelling everywhere:

```diff
-    'appOrgnaizationalDirectoryAccess' : require(__dirname + '/plugins/azure/entraid/appOrgnaizationalDirectoryAccess.js'),
+    'appOrganizationalDirectoryAccess' : require(__dirname + '/plugins/azure/entraid/appOrganizationalDirectoryAccess.js'),
```

…and renaming the actual plugin file accordingly.
helpers/azure/auth.js (1)
62-79: GovCloud vault token error silently logs instead of failing.

On line 67, when vault token fetch fails for GovCloud, it only logs `'No vault'` but still proceeds successfully. This differs from the public Azure path (line 90), which returns the error via callback. Consider consistent error handling.

```diff
     getToken(credential, [vaultScope], function(vaultErr, vaultToken) {
-        if (vaultErr) console.log('No vault');
+        if (vaultErr) {
+            console.log('Warning: Could not obtain vault token for GovCloud:', vaultErr.message || vaultErr);
+        }
         callback(null, {
```
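The consistent pattern being suggested (propagate token-fetch failures through the callback rather than logging and continuing) can be sketched in isolation; `fetchVaultToken` and the fake token source below are hypothetical names, not the PR's actual API:

```javascript
// Hypothetical wrapper illustrating callback-based error propagation.
function fetchVaultToken(getToken, scope, callback) {
    getToken(scope, function(err, token) {
        if (err) {
            // Propagate instead of logging and continuing silently.
            return callback(err);
        }
        callback(null, token);
    });
}

// Simulated token sources for demonstration.
const failingGetToken = (scope, cb) => cb(new Error('vault unreachable'));
const workingGetToken = (scope, cb) => cb(null, 'token-abc');

fetchVaultToken(failingGetToken, 'https://vault.azure.net/.default', (err, token) => {
    console.log(err ? `error: ${err.message}` : `token: ${token}`); // error: vault unreachable
});

fetchVaultToken(workingGetToken, 'https://vault.azure.net/.default', (err, token) => {
    console.log(err ? `error: ${err.message}` : `token: ${token}`); // token: token-abc
});
```

With this shape, both the GovCloud and public Azure paths surface failures the same way, and callers decide whether a missing vault token is fatal.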
🧹 Nitpick comments (24)
plugins/aws/eks/eksKubernetesVersion.spec.js (1)
79-87: Test data update to 1.30 looks good; consider centralizing the "current" version string.

Bumping the mocked cluster `version` to `"1.30"` in the "current" scenario is consistent with the intent of this test and doesn't introduce any behavioral risk.

To reduce future churn when EKS moves to the next minor, you could optionally extract a shared constant for the "current" version (e.g. at the top of this spec or in a common test helper) and use it in all related tests instead of hard-coding `"1.30"` here and elsewhere.

plugins/aws/ec2/ebsRecentSnapshots.js (2)
14-21: Well-defined settings schema with clear validation.

The regex `^[1-9]{1}[0-9]{0,2}$` allows values from 1 to 999 days, which provides substantial flexibility. While 999 days (nearly 3 years) is quite generous for snapshot recency, this wide range accommodates various organizational backup policies.

If you want to enforce a more reasonable upper bound (e.g., 90 or 180 days), consider tightening the regex. Otherwise, the current implementation is sound.
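As a quick sanity check, the quoted pattern can be exercised standalone:

```javascript
// The settings validation regex quoted above: 1-3 digits, no leading zero.
const dayRegex = /^[1-9]{1}[0-9]{0,2}$/;

for (const value of ['0', '1', '30', '999', '1000', '07', 'abc']) {
    console.log(value, dayRegex.test(value));
}
// '1', '30', and '999' pass; '0', '1000', '07', and 'abc' are rejected.
```

This confirms the accepted range is exactly 1–999: the first character class forbids zero and leading zeros, and the second caps the value at three digits.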
25-27: Prefer `Number.parseInt` over global `parseInt`.

The config parsing logic is correct, but using `Number.parseInt` is the modern best practice as it's more explicit and avoids potential global scope issues.

Apply this diff:

```diff
-        var config = {
-            ebs_recent_snapshot_days: parseInt(settings.ebs_recent_snapshot_days || this.settings.ebs_recent_snapshot_days.default)
-        };
+        var config = {
+            ebs_recent_snapshot_days: Number.parseInt(settings.ebs_recent_snapshot_days || this.settings.ebs_recent_snapshot_days.default)
+        };
```

postprocess/output.js (1)
339-357: SaaS delivery is fire-and-forget.

The `_sendToSaaS` call after `stream.end()` means `close()` returns before the SaaS delivery completes. The caller won't be notified if delivery fails. If this is intentional (non-blocking), consider documenting it; otherwise, you may want to return a Promise or callback for completion status.

plugins/azure/batchAccounts/batchAccountsManagedIdentity.js (1)
10-10: Standardize "Azure Entra ID" capitalization.

The text uses "Azure Entra Id" but other files in this PR use "Azure Entra ID" (with uppercase "ID"). For consistency across the codebase, standardize the capitalization to "Azure Entra ID".

```diff
-    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure  and using it to obtain Azure Entra Id tokens.',
+    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure and using it to obtain Azure Entra ID tokens.',
```

Note: This also removes the extra space before "and".
plugins/azure/appservice/scmSiteAccessRestriction.js (1)
48-70: Consider optional chaining for cleaner code.

The refactored logic correctly prioritizes checking `publicNetworkAccess` before evaluating IP restrictions. The implementation is sound.

As suggested by SonarCloud, you could simplify lines 56-61 using optional chaining:

```diff
-            let denyAllIp;
-            if (config.scmIpSecurityRestrictions && config.scmIpSecurityRestrictions.length) {
-                denyAllIp = config.scmIpSecurityRestrictions.find(ipSecurityRestriction =>
+            const denyAllIp = config.scmIpSecurityRestrictions?.find(ipSecurityRestriction =>
                 ipSecurityRestriction.ipAddress &&
                 ipSecurityRestriction.ipAddress.toUpperCase() === 'ANY' &&
                 ipSecurityRestriction.action &&
                 ipSecurityRestriction.action.toUpperCase() === 'DENY'
             );
-            }
```

plugins/azure/appservice/appServiceAccessRestriction.js (1)
52-76: Consider optional chaining for consistency.

The refactored logic correctly prioritizes `publicNetworkAccess` being disabled before checking for explicit deny rules. The implementation aligns well with the companion changes in `scmSiteAccessRestriction.js`.

Similar to the pattern in `scmSiteAccessRestriction.js`, you could use optional chaining for cleaner code:

```diff
-            let denyAllIp;
-            if (config.ipSecurityRestrictions && config.ipSecurityRestrictions.length) {
-                denyAllIp = config.ipSecurityRestrictions.find(ipSecurityRestriction =>
+            const denyAllIp = config.ipSecurityRestrictions?.find(ipSecurityRestriction =>
                 ipSecurityRestriction.ipAddress &&
                 ipSecurityRestriction.ipAddress.toUpperCase() === 'ANY' &&
                 ipSecurityRestriction.action &&
                 ipSecurityRestriction.action.toUpperCase() === 'DENY'
             );
-            }
```

plugins/azure/appservice/privateEndpointsEnabled.js (1)
55-67: Use optional chaining for cleaner conditionals.

SonarCloud correctly flags that optional chaining would be more concise.

```diff
-                if (webApp.kind && webApp.kind.toLowerCase().includes('functionapp')) {
+                if (webApp.kind?.toLowerCase().includes('functionapp')) {
                     helpers.addResult(results, 0, 'Function App has Private Endpoints configured', location, webApp.id);
                 } else {
                     helpers.addResult(results, 0, 'App Service has Private Endpoints configured', location, webApp.id);
                 }
             } else {
                 // No private endpoints configured
-                if (webApp.kind && webApp.kind.toLowerCase().includes('functionapp')) {
+                if (webApp.kind?.toLowerCase().includes('functionapp')) {
                     helpers.addResult(results, 2, 'Function App does not have Private Endpoints configured', location, webApp.id);
                 } else {
                     helpers.addResult(results, 2, 'App Service does not have Private Endpoints configured', location, webApp.id);
                 }
```

plugins/azure/appservice/privateEndpointsEnabled.spec.js (1)
41-69: `createCache` correctly builds `getWebAppDetails`; consider minor readability tweak.

The `createCache` helper matches the expected `webApps.getWebAppDetails[location][id].data` shape and ties `privateEndpointConnections[index]` to each app, which should exercise the implementation path accurately. As a small readability polish, you could replace `if (webApp && webApp.id)` with optional chaining:

```diff
-        if (webApp && webApp.id) {
+        if (webApp?.id) {
```

This keeps the intent while aligning with the static analysis suggestion.
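For reference, optional chaining short-circuits to `undefined` instead of throwing on nullish receivers, which is why the suggested guard is safe for mixed inputs; the sample data here is illustrative:

```javascript
// Illustrative data: a valid app, a nullish entry, and an app without an id.
const apps = [{ id: 'app-1' }, null, {}];

const labels = [];
for (const webApp of apps) {
    // webApp?.id yields undefined (not a TypeError) when webApp is null.
    labels.push(webApp?.id ? `has id: ${webApp.id}` : 'skipped');
}

console.log(labels.join(' / ')); // has id: app-1 / skipped / skipped
```

The single `webApp?.id` expression covers both the "object is nullish" and "id is missing" cases that the original two-clause guard handled.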
helpers/asl/asl-1.js (7)
510-542: Duplicate branch logic can be consolidated.

The branches at lines 510-517 (`allNotSet`) and 526-533 (`anyNotSet`) produce identical result objects. As flagged by static analysis, these can be merged to reduce duplication.

```diff
-        if (condition.parsed.length === 0 || allNotSet) {
-            message.push(`${condition.property}: ${arrayMessages.join(', ')}`);
-            let resultObj = {
-                status: 2, // FAIL if array is empty or all items are not set (property missing everywhere)
-                message: message.join(', ')
-            };
-            inputResultsArr.push(resultObj);
-            return resultObj;
-        } else if (anyMatch) {
+        if (anyMatch) {
             message.push(`${condition.property}: ${arrayMessages.join(', ')}`);
             let resultObj = {
                 status: 0, // PASS if any item matches and at least one is set
                 message: message.join(', ')
             };
             inputResultsArr.push(resultObj);
             return resultObj;
-        } else if (anyNotSet) {
+        } else {
+            // FAIL if: array is empty, all items are not set, any item is not set, or no matches
             message.push(`${condition.property}: ${arrayMessages.join(', ')}`);
             let resultObj = {
-                status: 2, // FAIL if any item is not set
-                message: message.join(', ')
-            };
-            inputResultsArr.push(resultObj);
-            return resultObj;
-        } else {
-            message.push(`${condition.property}: ${arrayMessages.join(', ')}`);
-            let resultObj = {
-                status: 2, // FAIL if none match and all are set
+                status: 2,
                 message: message.join(', ')
             };
             inputResultsArr.push(resultObj);
             return resultObj;
         }
```
366-366: Function has 8 parameters; `nestedResultArr` appears unused.

Static analysis flags this function for having too many parameters (8 vs recommended max of 7). Additionally, `nestedResultArr` is never used within the function body.

Consider removing the unused parameter and grouping related parameters into an options object:

```diff
-var runValidation = function(obj, condition, inputResultsArr, nestedResultArr, region, cloud, accountId, resourceId) {
+var runValidation = function(obj, condition, inputResultsArr, context) {
+    var { region, cloud, accountId, resourceId } = context;
```

Then update call sites to pass a context object:

```js
runValidation(data, condition, inputResultsArr, { region, cloud, accountId, resourceId: resourceName });
```
77-77: Use `const`/`let` instead of `var` and prefer `Number.*` methods.

Static analysis flagged several modernization opportunities throughout the file. Consider updating for consistency:

- Replace `var` with `const`/`let` at lines 21, 77, 102, 135, 144, 176, 188, 198, 206, 215, 225, etc.
- Prefer `Number.parseInt` over global `parseInt`:

  ```diff
  -    var prefixLength = parseInt(cidrMatch[2]);
  +    var prefixLength = Number.parseInt(cidrMatch[2], 10);
  ```

- Prefer `Number.isNaN` over global `isNaN`:

  ```diff
  -    if (cidrParts.some(function(part) { return isNaN(part) || part < 0 || part > 255; }))
  +    if (cidrParts.some(function(part) { return Number.isNaN(part) || part < 0 || part > 255; }))
  ```

- Use optional chaining at line 284:

  ```diff
  -    if (!inputResultsArr || !inputResultsArr.length) {
  +    if (!inputResultsArr?.length) {
  ```
210-212: Empty catch block silently swallows errors.

The catch block returns `null` without logging or preserving any error information, making debugging difficult if the IPv6 expansion fails.

```diff
     } catch (e) {
+        // IPv6 parsing failed - return null to indicate invalid format
+        console.debug && console.debug('IPv6 expansion failed:', e.message);
         return null;
     }
```
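One way to keep the failure observable without changing the `null` return contract is to log before returning, as in this self-contained sketch (the `expandIpv6` name and its trivial body are hypothetical stand-ins for the real expansion logic):

```javascript
// Hypothetical parser illustrating the suggested catch-block logging.
function expandIpv6(address) {
    try {
        if (typeof address !== 'string' || !address.includes(':')) {
            throw new Error('not an IPv6 literal');
        }
        return address.toLowerCase(); // stand-in for the real expansion logic
    } catch (e) {
        // Surface the failure for debugging, but keep the null contract.
        console.error('IPv6 expansion failed:', e.message);
        return null;
    }
}

console.log(expandIpv6('2001:DB8::1')); // 2001:db8::1
console.log(expandIpv6('not-an-ip'));   // null (and logs the reason)
```

Callers still only need to check for `null`, but a failing input now leaves a trace in the logs instead of vanishing silently.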
544-544: Redundant condition check.

The trailing `&& condition` in the if statement is always truthy since `condition` is a required object parameter. This appears to be leftover code.

```diff
-    if (condition.transform && condition.transform == 'EACH' && condition) {
+    if (condition.transform && condition.transform == 'EACH') {
```
480-482: Variable `userRegex` shadows outer declaration.

`userRegex` is declared at lines 436-438 and then re-declared at line 481 within the `forEach` callback, shadowing the outer variable. While intentional, this can cause confusion.

Consider renaming the inner variable for clarity:

```diff
 } else if (condition.op == 'MATCHES') {
-    let userRegex = RegExp(condition.value);
-    itemMatch = userRegex.test(item);
+    let itemRegex = new RegExp(condition.value);
+    itemMatch = itemRegex.test(item);
 }
```
808-1059: Large code block duplicates operator logic from `runValidation`.

This 250-line block (lines 808-1059) duplicates most of the operator handling logic (CONTAINS, EQ, NE, GT, LT, MATCHES, etc.) that already exists in `runValidation`. This duplication makes maintenance difficult: any bug fix or new operator must be updated in both places.

Consider extracting the comparison logic into a shared helper function:

```js
function evaluateCondition(propValue, operator, conditionValue) {
    if (operator === 'EQ') return propValue === conditionValue;
    if (operator === 'NE') return propValue !== conditionValue;
    if (operator === 'CONTAINS') {
        return (typeof propValue === 'string' || Array.isArray(propValue)) &&
            propValue.includes(conditionValue);
    }
    // ... other operators
}
```

Then both `runConditions` (for AliasTarget) and `runValidation` can use this shared helper.

.env.example (1)
1-27: dotenv-linter ordering warnings are purely cosmetic.

The current example values look safe (all clear placeholders). The reported dotenv-linter warnings are only about key ordering (e.g., `AWS_REGION` before `AWS_SECRET_ACCESS_KEY`, `LOG_LEVEL` before `NODE_ENV`).

If you rely on dotenv-linter in CI and want a clean run, you can reorder the keys as suggested; otherwise this can be safely ignored.

plugins/aws/s3/s3BucketHasTags.js (1)
39-39: Use `const` instead of `var` for `bucketArn`.

Static analysis flagged this line. Since `bucketArn` is not reassigned, use `const` for immutability.

```diff
-        var bucketArn = `arn:${awsOrGov}:s3:::${bucket.Name}`;
+        const bucketArn = `arn:${awsOrGov}:s3:::${bucket.Name}`;
```

plugins/aws/s3/s3BucketHasTags.spec.js (5)
127-168: Consider extracting shared callback for "no tags" tests.

SonarCloud flagged duplicate callback implementations. While acceptable for test code, you could extract a shared callback for tests that verify status 2 with the "S3 bucket does not have any tags" message.

```js
// At the top of the describe block, add:
const expectNoTagsResult = (done) => (err, results) => {
    expect(results.length).to.equal(1);
    expect(results[0].status).to.equal(2);
    expect(results[0].message).to.include('S3 bucket does not have any tags');
    done();
};

// Then use in tests:
// s3BucketHasTags.run(cache, {}, expectNoTagsResult(done));
```

plugins/azure/appservice/functionPrivilegeAnalysis.js (1)
18-23: Use `const` instead of `var` and clarify stub implementation.

Static analysis flagged the `var` usage. Also, consider adding a TODO comment to clarify this is a placeholder.

```diff
     run: function(cache, settings, callback) {
-        var results = [];
-        var source = {};
+        // TODO: Implement privilege analysis for Azure Functions
+        const results = [];
+        const source = {};
         callback(null, results, source);
     },
```

.eslintrc.json (1)
38-46: Test files are ignored, so overrides for them will never run.

`ignorePatterns` excludes `*.test.js` and `*.spec.js`, but you also define test-specific overrides for those globs. As written, ESLint will skip these files entirely and the overrides are dead code. Consider dropping the test patterns from `ignorePatterns` and relying on the `overrides` block instead.

Also applies to: 99-109
README.md (1)
85-137: New dev/testing section triggers markdownlint (MD003/MD014); consider minor cleanups.

The added commands use `$` prompts without showing output, and the new headings are ATX style where the repo's markdownlint config expects setext. If you care about passing markdownlint, either:

- Drop the leading `$` (or add example output), and/or
- Switch these headings to the repo's preferred style, or relax the markdownlint rules for this file/section.
helpers/aws/api.js (1)
663-680: OpenSearch/OpenSearchServerless mapping via array index is a bit brittle.

`serviceMap.OpenSearch` is an array where index `0` is the domain and index `1` is the serverless collection; `postcalls` then reference these by index. This works but couples call-site correctness to array ordering.

If you touch this again, consider using separate named entries (e.g. `OpenSearchDomain`, `OpenSearchServerless`) or an object with explicit keys instead of relying on `[0]`/`[1]`.

Also applies to: 2340-2347, 3224-3236
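The named-key alternative makes the meaning explicit at each call site. This sketch contrasts the two shapes with placeholder values (the actual `serviceMap` entries are richer than the strings used here):

```javascript
// Array-index mapping (current shape): correctness depends on ordering.
const serviceMapByIndex = {
    OpenSearch: ['domain-config', 'serverless-config']
};
const serverlessA = serviceMapByIndex.OpenSearch[1]; // what is index 1? reader must know

// Named-key mapping (suggested shape): self-documenting at the call site.
const serviceMapByName = {
    OpenSearchDomain: 'domain-config',
    OpenSearchServerless: 'serverless-config'
};
const serverlessB = serviceMapByName.OpenSearchServerless;

console.log(serverlessA === serverlessB); // true
```

With named keys, inserting or reordering entries can never silently swap the domain and serverless configurations, which is the failure mode the index-based shape invites.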
helpers/azure/auth.js (1)
2-2: Use `const` instead of `var` for imports.

As flagged by static analysis, module imports should use `const` since they are not reassigned.

```diff
-var axios = require('axios');
+const axios = require('axios');
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (48)
- .env.example (1 hunks)
- .eslintrc.json (1 hunks)
- .github/pull_request_templates/pull_request_template.md (1 hunks)
- .github/workflows/scans_ci.yml (1 hunks)
- .gitignore (1 hunks)
- .prettierrc (1 hunks)
- README.md (1 hunks)
- collectors/azure/collector.js (1 hunks)
- exports.js (13 hunks)
- helpers/asl/asl-1.js (10 hunks)
- helpers/aws/api.js (17 hunks)
- helpers/aws/api_multipart.js (2 hunks)
- helpers/azure/api.js (24 hunks)
- helpers/azure/auth.js (5 hunks)
- helpers/shared.js (2 hunks)
- index.js (1 hunks)
- package.json (2 hunks)
- plugins/aws/ec2/ebsRecentSnapshots.js (3 hunks)
- plugins/aws/ec2/ebsRecentSnapshots.spec.js (3 hunks)
- plugins/aws/eks/eksKubernetesVersion.spec.js (1 hunks)
- plugins/aws/kms/kmsDefaultKeyUsage.js (2 hunks)
- plugins/aws/s3/bucketSecureTransportEnabled.spec.js (1 hunks)
- plugins/aws/s3/s3BucketHasTags.js (2 hunks)
- plugins/aws/s3/s3BucketHasTags.spec.js (5 hunks)
- plugins/azure/apiManagement/apiInstanceManagedIdentity.js (2 hunks)
- plugins/azure/appConfigurations/appConfigAccessKeyAuthDisabled.js (2 hunks)
- plugins/azure/appConfigurations/appConfigManagedIdentity.js (2 hunks)
- plugins/azure/appservice/appServiceAccessRestriction.js (4 hunks)
- plugins/azure/appservice/appServiceAccessRestriction.spec.js (6 hunks)
- plugins/azure/appservice/authEnabled.js (2 hunks)
- plugins/azure/appservice/automatedBackupsEnabled.spec.js (3 hunks)
- plugins/azure/appservice/functionPrivilegeAnalysis.js (1 hunks)
- plugins/azure/appservice/privateEndpointsEnabled.js (2 hunks)
- plugins/azure/appservice/privateEndpointsEnabled.spec.js (4 hunks)
- plugins/azure/appservice/scmSiteAccessRestriction.js (1 hunks)
- plugins/azure/appservice/scmSiteAccessRestriction.spec.js (2 hunks)
- plugins/azure/appservice/webAppsADEnabled.js (3 hunks)
- plugins/azure/appservice/webAppsADEnabled.spec.js (1 hunks)
- plugins/azure/automationAccounts/automationAcctExpiredWebhooks.spec.js (1 hunks)
- plugins/azure/automationAccounts/automationAcctManagedIdentity.js (2 hunks)
- plugins/azure/batchAccounts/batchAccountsAADEnabled.js (2 hunks)
- plugins/azure/batchAccounts/batchAccountsAADEnabled.spec.js (1 hunks)
- plugins/azure/batchAccounts/batchAccountsManagedIdentity.js (1 hunks)
- plugins/azure/containerapps/containerAppManagedIdentity.js (1 hunks)
- plugins/azure/containerregistry/acrManagedIdentityEnabled.js (2 hunks)
- plugins/azure/cosmosdb/cosmosdbLocalAuth.js (1 hunks)
- plugins/azure/cosmosdb/cosmosdbManagedIdentity.spec.js (1 hunks)
- postprocess/output.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (7)
helpers/shared.js (1)
- index.js (1)
  - settings (83-83)

plugins/azure/appservice/privateEndpointsEnabled.spec.js (3)
- plugins/azure/appservice/appServiceAccessRestriction.spec.js (2)
  - createCache (63-82), webApps (4-9)
- plugins/azure/appservice/webAppsADEnabled.spec.js (2)
  - createCache (39-49), webApps (4-37)
- plugins/azure/appservice/privateEndpointsEnabled.js (1)
  - webApps (22-23)

plugins/azure/appservice/appServiceAccessRestriction.spec.js (3)
- plugins/azure/appservice/scmSiteAccessRestriction.spec.js (10)
  - cache (112-112), cache (123-123), cache (134-134), cache (145-145), cache (156-156), cache (167-167), cache (178-178), createCache (63-82), webApps (4-9), configurations (11-61)
- plugins/azure/appservice/disableFTPDeployments.spec.js (2)
  - createCache (22-41), configurations (11-20)
- plugins/azure/appservice/appServiceAccessRestriction.js (1)
  - webApps (27-28)

plugins/azure/appservice/functionPrivilegeAnalysis.js (4)
- plugins/azure/appservice/authEnabled.js (2)
  - results (35-35), source (36-36)
- plugins/azure/automationAccounts/automationAcctManagedIdentity.js (2)
  - results (17-17), source (18-18)
- plugins/azure/cosmosdb/cosmosdbLocalAuth.js (2)
  - results (17-17), source (18-18)
- plugins/azure/cosmosdb/cosmosdbManagedIdentity.spec.js (2)
  - results (17-17), source (18-18)

plugins/azure/appservice/appServiceAccessRestriction.js (3)
- plugins/azure/appservice/scmSiteAccessRestriction.js (2)
  - config (48-48), webConfigs (39-41)
- plugins/azure/appservice/http20Enabled.js (2)
  - webConfigs (46-48), helpers (2-2)
- plugins/azure/appservice/tlsVersionCheck.js (1)
  - webConfigs (51-53)

plugins/aws/ec2/ebsRecentSnapshots.js (1)
- plugins/aws/ec2/ebsRecentSnapshots.spec.js (2)
  - settings (156-156), settings (168-168)

plugins/azure/appservice/privateEndpointsEnabled.js (4)
- plugins/azure/appservice/authEnabled.js (3)
  - helpers (2-2), source (36-36), results (35-35)
- plugins/azure/automationAccounts/automationAcctManagedIdentity.js (3)
  - helpers (2-2), source (18-18), results (17-17)
- plugins/azure/cosmosdb/cosmosdbLocalAuth.js (3)
  - helpers (2-2), source (18-18), results (17-17)
- plugins/azure/cosmosdb/cosmosdbManagedIdentity.spec.js (3)
  - helpers (2-2), source (18-18), results (17-17)
🪛 actionlint (1.7.9)
.github/workflows/scans_ci.yml
19-19: the runner of "actions/checkout@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
22-22: the runner of "actions/setup-node@v2" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 4-4: [UnorderedKey] The AWS_REGION key should go before the AWS_SECRET_ACCESS_KEY key
(UnorderedKey)
[warning] 8-8: [UnorderedKey] The AZURE_CLIENT_ID key should go before the AZURE_TENANT_ID key
(UnorderedKey)
[warning] 9-9: [UnorderedKey] The AZURE_CLIENT_SECRET key should go before the AZURE_TENANT_ID key
(UnorderedKey)
[warning] 10-10: [UnorderedKey] The AZURE_SUBSCRIPTION_ID key should go before the AZURE_TENANT_ID key
(UnorderedKey)
[warning] 17-17: [UnorderedKey] The GITHUB_ORG key should go before the GITHUB_TOKEN key
(UnorderedKey)
[warning] 21-21: [UnorderedKey] The LOG_LEVEL key should go before the NODE_ENV key
(UnorderedKey)
🪛 GitHub Check: SonarCloud Code Analysis
plugins/azure/appservice/privateEndpointsEnabled.spec.js
[warning] 57-57: Prefer using an optional chain expression instead, as it's more concise and easier to read.
collectors/azure/collector.js
[failure] 139-139: Refactor this code to not nest functions more than 4 levels deep.
plugins/azure/appservice/functionPrivilegeAnalysis.js
[failure] 19-19: Unexpected var, use let or const instead.
plugins/aws/s3/s3BucketHasTags.spec.js
[warning] 150-150: Update this function so that its implementation is not identical to the one on line 128.
plugins/aws/s3/s3BucketHasTags.js
[failure] 39-39: Unexpected var, use let or const instead.
plugins/aws/ec2/ebsRecentSnapshots.js
[warning] 26-26: Prefer Number.parseInt over parseInt.
helpers/azure/auth.js
[failure] 2-2: Unexpected var, use let or const instead.
[warning] 231-233: Prefer using an optional chain expression instead, as it's more concise and easier to read.
[warning] 185-185: Consider moving declaration of 'errorMessage' as it is referenced outside current binding context.
plugins/azure/appservice/scmSiteAccessRestriction.js
[warning] 56-56: Prefer using an optional chain expression instead, as it's more concise and easier to read.
helpers/asl/asl-1.js
[failure] 21-21: Unexpected var, use let or const instead.
[warning] 122-122: Prefer Number.isNaN over isNaN.
[failure] 144-144: Unexpected var, use let or const instead.
[failure] 206-206: Unexpected var, use let or const instead.
[failure] 135-135: Unexpected var, use let or const instead.
[warning] 121-121: Prefer Number.parseInt over parseInt.
[warning] 210-212: Handle this exception or don't catch it at all.
[failure] 225-225: Unexpected var, use let or const instead.
[failure] 198-198: Unexpected var, use let or const instead.
[warning] 112-112: Prefer Number.isNaN over isNaN.
[failure] 77-77: Unexpected var, use let or const instead.
[warning] 111-111: Prefer Number.parseInt over parseInt.
[warning] 284-284: Prefer using an optional chain expression instead, as it's more concise and easier to read.
[warning] 366-366: Function has too many parameters (8). Maximum allowed is 7.
[warning] 526-534: This branch's code block is the same as the block for the branch on line 510.
[warning] 891-891: Prefer Number.isNaN over isNaN.
plugins/azure/appservice/privateEndpointsEnabled.js
[warning] 56-56: Prefer using an optional chain expression instead, as it's more concise and easier to read.
postprocess/output.js
[warning] 364-364: Prefer node:https over https.
🪛 LanguageTool
.github/pull_request_templates/pull_request_template.md
[style] ~10-~10: Consider using a different verb for a more formal wording.
Context: ...- [ ] Bug fix (non-breaking change that fixes an issue) - [ ] New feature (non-breaki...
(FIX_RESOLVE)
🪛 markdownlint-cli2 (0.18.1)
README.md
81-81: Dollar signs used before commands without showing output
(MD014, commands-show-output)
82-82: Dollar signs used before commands without showing output
(MD014, commands-show-output)
85-85: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
87-87: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
95-95: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
103-103: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
111-111: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
119-119: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
128-128: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
helpers/asl/asl-1.js

```js
                return 0;
            } else if (conditionStringified && conditionStringified.length){
                message.push(`${condition.value} found in ${conditionStringified}`);
                return 2;
            } else {
                message.push(`${condition.parsed} is not the right property type for this operation`);
                return 2;
            }
        } else {
            // Recurse into the same function
            var subProcessed = [];
            if (!condition.parsed.length) {
                conditionResult = 2;
                message.push(`${property}: is not iterable using EACH transformation`);
            } else {
                condition.parsed.forEach(function(parsed) {
                    subProcessed.push(runValidation(parsed, condition, inputResultsArr, null, region, cloud, accountId, resourceId));
                });
                subProcessed.forEach(function(sub) {
                    if (sub.status) conditionResult = sub.status;
                    if (sub.message) message.push(sub.message);
                });
            }
        }
    } else if (condition.op == 'EQ') {
        if (condition.parsed == condition.value) {
```
Inconsistent return values: returns raw numbers instead of result objects.
The NOTCONTAINS branch within the EACH transform returns raw status numbers (0 or 2) at lines 577, 580, 584, 587, and 590. All other branches in runValidation return result objects with { status, message }. This inconsistency will cause errors when calling code attempts to access .status or .message properties.
Apply this diff to return consistent result objects:
```diff
         if (conditionStringified.includes(conditionKey) && !conditionStringified.includes(conditionValue)){
             message.push(`${property}: ${condition.value} not found in ${conditionStringified}`);
-            return 0;
+            let resultObj = { status: 0, message: message.join(', ') };
+            inputResultsArr.push(resultObj);
+            return resultObj;
         } else {
             message.push(`${condition.value} found in ${conditionStringified}`);
-            return 2;
+            let resultObj = { status: 2, message: message.join(', ') };
+            inputResultsArr.push(resultObj);
+            return resultObj;
         }
     } else if (conditionStringified && !conditionStringified.includes(condition.value)) {
         message.push(`${property}: ${condition.value} not found in ${conditionStringified}`);
-        return 0;
+        let resultObj = { status: 0, message: message.join(', ') };
+        inputResultsArr.push(resultObj);
+        return resultObj;
     } else if (conditionStringified && conditionStringified.length){
         message.push(`${condition.value} found in ${conditionStringified}`);
-        return 2;
+        let resultObj = { status: 2, message: message.join(', ') };
+        inputResultsArr.push(resultObj);
+        return resultObj;
     } else {
         message.push(`${condition.parsed} is not the right property type for this operation`);
-        return 2;
+        let resultObj = { status: 2, message: message.join(', ') };
+        inputResultsArr.push(resultObj);
+        return resultObj;
     }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In helpers/asl/asl-1.js around lines 568 to 591, the NOTCONTAINS branch
currently returns raw numbers (0 or 2) instead of result objects; change each
return to return an object of the form { status: <number>, message } where
message is the same string pushed into the message array so callers can access
.status and .message consistently; ensure all five return points are updated to
return the object rather than the raw number.
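The consistent result-object shape described in this prompt can be sketched as follows. `makeResult` and `notContainsBranch` are hypothetical helpers extracted for illustration; the real `asl-1.js` code would inline the object literals at each return point.

```javascript
// Hypothetical helper: builds the { status, message } shape every branch
// of runValidation is expected to return, and records it in the shared array.
function makeResult(status, message, inputResultsArr) {
    const resultObj = { status: status, message: message };
    inputResultsArr.push(resultObj);
    return resultObj;
}

// Hypothetical extraction of the NOTCONTAINS branch: both outcomes now
// return the same shape, so callers can always read .status and .message.
function notContainsBranch(conditionStringified, conditionValue, inputResultsArr) {
    if (conditionStringified && !conditionStringified.includes(conditionValue)) {
        return makeResult(0, `${conditionValue} not found in ${conditionStringified}`, inputResultsArr);
    }
    return makeResult(2, `${conditionValue} found in ${conditionStringified}`, inputResultsArr);
}

const arr = [];
const r = notContainsBranch('{"policy":"open"}', 'closed', arr);
console.log(r.status, arr.length); // 0 1
```

The point of the shared helper is that a raw `return 0` can never reappear: every exit path goes through the same constructor, so downstream `.status` access is always safe.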
package.json
Outdated
| "ali-oss": "^6.17.0", | ||
| "argparse": "^2.0.1", | ||
| "async": "^3.2.5", | ||
| "aws-sdk": "^2.1574.0", |
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🌐 Web query:
AWS SDK for JavaScript v2 maintenance mode status 2025
💡 Result:
As of Dec 3, 2025 the AWS SDK for JavaScript v2 is past maintenance and has reached end-of-support: it entered maintenance mode on September 8, 2024 and reached end-of-support on September 8, 2025. [1][2]
Sources:
- AWS Developer Tools blog announcement. [1]
- AWS SDK for JavaScript v2 README / docs. [2]
🌐 Web query:
AWS SDK v2 vs v3 migration guide JavaScript 2025
💡 Result:
Summary—what changed and why migrate
- v3 is a ground-up, TypeScript-first rewrite that is modular (one @aws-sdk/client-XXX package per service), uses a middleware stack, and drops the single monolithic aws-sdk package used by v2. Migrate for smaller bundles, better tree-shaking, first-class TypeScript, performance, and continued support (v2 is EoS Sep 8, 2025). [1][2][3]
Quick migration checklist (minimal, actionable)
- Read the official migration guide. [1]
- Run the codemod to convert imports/usage at scale:
- Dry-run: npx aws-sdk-js-codemod@latest --dry --print -t v2-to-v3 PATH...
- Apply: npx aws-sdk-js-codemod@latest -t v2-to-v3 PATH...
(package: aws-sdk-js-codemod). [4][1]
- Replace aws-sdk imports with modular clients and Commands:
- v2: import AWS from "aws-sdk"; const s3 = new AWS.S3(); await s3.getObject(params).promise();
- v3: import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3"; const client = new S3Client(); await client.send(new GetObjectCommand(params));
(Note: .promise() is removed; use client.send(...)). [1][5]
- Migrate higher-level helpers:
- DynamoDB DocumentClient -> @aws-sdk/lib-dynamodb (marshall/unmarshall differences).
- S3 multipart helpers -> @aws-sdk/lib-storage (Upload). [5]
- Update paginators & waiters to v3 APIs (they are separate helpers/commands). See v3 docs for service-specific changes. [1][5]
- Migrate custom request hooks to the v3 middleware stack (more explicit than v2 request event listeners). [1][5]
- Test thoroughly: unit + integration tests; validate streaming APIs (GetObject.Body handling differs between runtimes). [5]
- Update Node runtime if needed: v3 dropped Node.js 16.x support (effective Jan 6, 2025); target an Active LTS (Node 20+ recommended). [6]
- Remove aws-sdk from package.json and install only required @aws-sdk packages to reduce bundle size. [5]
Useful commands / packages
- Codemod (recommended): npx aws-sdk-js-codemod -t v2-to-v3 PATH...
- Common v3 packages:
- Core client: @aws-sdk/client-s3, @aws-sdk/client-dynamodb, etc.
- Helpers: @aws-sdk/lib-dynamodb, @aws-sdk/lib-storage, @aws-sdk/smithy-client (if needed).
- v2 package (for reference only): aws-sdk (EoS Sep 8, 2025). [2][3]
Where to read more (official)
- v3 migration guide (developer guide): [1]
- v2 README / EoS announcement: [2][3]
- Codemod npm + repo: [4]
- v3 repo & README (modular design, examples): [5]
- Node.js 16.x end-of-support announcement for v3: [6]
If you want, I can:
- run the codemod on a sample file you paste, or
- produce a small before/after conversion for specific services (S3, DynamoDB, Lambda).
Sources
- AWS SDK for JavaScript v3 — Migrate from v2.x to v3.x (Developer Guide). [AWS Docs].
- AWS SDK for JavaScript (v2) README — maintenance/EoS details. [AWS Docs / GitHub].
- AWS Developer Blog — Announcing end-of-support for AWS SDK for JavaScript v2 (Sep 8, 2025).
- aws-sdk-js-codemod (npm + GitHub) — v2-to-v3 codemod usage.
- aws-sdk-js-v3 GitHub README / Developer Guide (modular packages, middleware, helpers).
- AWS Developer Blog — v3 end of support for Node.js 16.x (effective Jan 6, 2025).
AWS SDK v2 has reached end-of-support as of September 8, 2025 and must be migrated.
The project uses aws-sdk v2, which is no longer maintained. Migrate to @aws-sdk/* v3 modular packages immediately for continued support, improved tree-shaking, performance, and TypeScript compatibility.
AWS provides an automated codemod for migration:
npx aws-sdk-js-codemod@latest -t v2-to-v3 <path>
See the official migration guide for details on API changes and higher-level helper migrations.
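The call-shape change at the heart of the v2-to-v3 migration can be illustrated without the SDK installed. `FakeS3Client` and `FakeGetObjectCommand` below are stand-ins for `@aws-sdk/client-s3`'s `S3Client` and `GetObjectCommand`; they only mimic the v3 interface shape, not real S3 behavior.

```javascript
// Stand-in for a v3 Command object: just wraps its input parameters.
class FakeGetObjectCommand {
    constructor(input) { this.input = input; }
}

// Stand-in for a v3 client: one async send(command) entry point replaces
// the per-operation methods plus .promise() used in v2.
class FakeS3Client {
    async send(command) {
        return { Bucket: command.input.Bucket, Key: command.input.Key, Body: 'stub' };
    }
}

async function main() {
    // v2 style (end-of-support): s3.getObject(params).promise()
    // v3 style (current):        client.send(new GetObjectCommand(params))
    const client = new FakeS3Client();
    const out = await client.send(new FakeGetObjectCommand({ Bucket: 'b', Key: 'k' }));
    console.log(out.Key); // k
}

main();
```

With the real SDK, only the imports change: `const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');` and the fakes disappear, which is exactly the transformation the codemod automates.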
🤖 Prompt for AI Agents
In package.json around line 62, the project currently depends on the deprecated
"aws-sdk" v2; migrate to AWS SDK v3 by replacing that dependency with the
modular @aws-sdk/* packages required by your code (e.g., @aws-sdk/client-s3,
@aws-sdk/client-dynamodb, etc.), run the automated codemod to convert imports
and usages (npx aws-sdk-js-codemod@latest -t v2-to-v3 <path>), then update code
to use the new client constructors, command objects, and async send() calls or
higher-level helpers, adjust any changed option names and error handling, remove
the old aws-sdk entry from package.json, run npm install and full test suite /
linting to fix remaining call-signature or import errors, and commit the updated
dependencies and code changes.
There was a problem hiding this comment.
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
Question:
suggestion aws-sdk: ^2.1574.0,
Answer:
If you want Qodo Merge to stop proposing “dependency bumps” like aws-sdk: ^2... (and instead focus on code-level issues in the PR diff), add tighter guidance to the /improve prompt:
```toml
[pr_code_suggestions]
extra_instructions = """\
- Do NOT suggest dependency version bumps in package.json/yarn.lock/pnpm-lock.yaml unless the PR is explicitly about dependency maintenance.
- Do NOT suggest migrations/replacements of major libraries (e.g., aws-sdk v2 -> v3) unless the PR already touches that dependency or the code changes clearly introduce a bug/security issue that requires it.
- Treat external "EOL/EOS" statements and web lookups as out of scope; base suggestions only on the PR diff and repository context.
- Prefer actionable, local code fixes over ecosystem-wide refactors.
"""
```

Optional (if you also want to reduce suggestions that touch config/metadata files):

```toml
[ignore]
glob = ["package.json", "*lock*", "*.toml"]
```

(Use this only if you’re comfortable excluding those files from analysis entirely; otherwise rely on extra_instructions.)
Relevant Sources:
- https://docs.qodo.ai/qodo-documentation/qodo-merge/tools/improve#extra-instructions-and-best-practices
- https://docs.qodo.ai/qodo-documentation/qodo-merge/tools/improve#configuration-options
- https://docs.qodo.ai/qodo-documentation/qodo-merge/usage-guide/additional_configurations#ignoring-files-from-analysis
```js
const async = require('async');
const helpers = require('../../../helpers/azure');

module.exports = {
    title: 'Cosmos DB Managed Identity',
    category: 'Cosmos DB',
    domain: 'Databases',
    severity: 'Medium',
    description: 'Ensures that Azure Cosmos DB accounts have managed identity enabled.',
    more_info: 'Enabling managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Entra ID tokens.',
    link: 'https://learn.microsoft.com/en-us/azure/cosmos-db/managed-identity-based-authentication',
    recommended_action: 'Enable system or user-assigned identities for all Azure Cosmos DB accounts.',
    apis: ['databaseAccounts:list'],
    realtime_triggers: ['microsoftdocumentdb:databaseaccounts:write','microsoftdocumentdb:databaseaccounts:delete'],

    run: function(cache, settings, callback) {
        const results = [];
        const source = {};
        const locations = helpers.locations(settings.govcloud);

        async.each(locations.databaseAccounts, function(location, rcb) {
            var databaseAccounts = helpers.addSource(cache, source,
                ['databaseAccounts', 'list', location]);

            if (!databaseAccounts) return rcb();

            if (databaseAccounts.err || !databaseAccounts.data) {
                helpers.addResult(results, 3,
                    'Unable to query for Cosmos DB accounts: ' + helpers.addError(databaseAccounts), location);
                return rcb();
            }
        }
    }
};

describe('cosmosdbManagedIdentity', function() {
    describe('run', function() {
        it('should give passing result if no Cosmos DB accounts found', function(done) {
            const callback = (err, results) => {
                expect(results.length).to.equal(1);
                expect(results[0].status).to.equal(0);
                expect(results[0].message).to.include('No Cosmos DB accounts found');
                expect(results[0].region).to.equal('eastus');
                done()
            };

            const cache = createCache(
                []
            );

            cosmosdbManagedIdentity.run(cache, {}, callback);
        });

        it('should give passing result if Cosmos db has managed identity', function(done) {
            const callback = (err, results) => {
                expect(results.length).to.equal(1);
                expect(results[0].status).to.equal(0);
                expect(results[0].message).to.include('Cosmos DB account has managed identity enabled');
                expect(results[0].region).to.equal('eastus');
                done()
            };

            const cache = createCache(
                [databaseAccounts[0]]
            );

            cosmosdbManagedIdentity.run(cache, {}, callback);
        });

        it('should give failing result if Azure Cosmos db does not have managed identity', function(done) {
            const callback = (err, results) => {
                expect(results.length).to.equal(1);
                expect(results[0].status).to.equal(2);
                expect(results[0].message).to.include('Cosmos DB account does not have managed identity enabled');
                expect(results[0].region).to.equal('eastus');
                done()
            };

            const cache = createCache(
                [databaseAccounts[1]],
            );

            cosmosdbManagedIdentity.run(cache, {}, callback);
        });
            if (!databaseAccounts.data.length) {
                helpers.addResult(results, 0, 'No Cosmos DB accounts found', location);
                return rcb();
            }

        it('should give unknown result if unable to query for Cosmos DB accounts', function(done) {
            const callback = (err, results) => {
                expect(results.length).to.equal(1);
                expect(results[0].status).to.equal(3);
                expect(results[0].message).to.include('Unable to query for Cosmos DB accounts');
                expect(results[0].region).to.equal('eastus');
                done()
            };
            databaseAccounts.data.forEach(account => {
                if (!account.id) return;

            const cache = createCache(
                [],
                { message: 'Unable to query Cosmos DB accounts'}
            );
                if (account.identity && account.identity.type &&
                    (account.identity.type.toLowerCase() === 'systemassigned' || account.identity.type.toLowerCase() === 'userassigned')) {
                    helpers.addResult(results, 0,
                        'Cosmos DB account has managed identity enabled', location, account.id);
                } else {
                    helpers.addResult(results, 2,
                        'Cosmos DB account does not have managed identity enabled', location, account.id);
                }
            });

            cosmosdbManagedIdentity.run(cache, {}, callback);
            rcb();
        }, function() {
            callback(null, results, source);
        });
    })
});
}
};
```
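The identity check embedded in the listing above can be isolated into a small predicate. `hasManagedIdentity` is a hypothetical helper for illustration; note that this sketch uses a substring match so a combined `SystemAssigned, UserAssigned` identity also passes, a case the strict-equality comparison in the listing would miss.

```javascript
// Hypothetical extraction of the plugin's identity check.
function hasManagedIdentity(account) {
    const type = account.identity && account.identity.type;
    if (!type) return false;
    const t = type.toLowerCase();
    // Azure can report combined assignment as "SystemAssigned, UserAssigned",
    // so substring checks are used here rather than strict equality.
    return t.includes('systemassigned') || t.includes('userassigned');
}

console.log(hasManagedIdentity({ identity: { type: 'SystemAssigned' } })); // true
console.log(hasManagedIdentity({ identity: { type: 'None' } }));           // false
```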
File appears to be misnamed: .spec.js suffix typically indicates a test file.
This file contains the plugin implementation rather than tests. The `.spec.js` naming convention is universally used for test/specification files. Either:

- Rename this file to `cosmosdbManagedIdentity.js` and place it alongside other plugin implementations, or
- Restore test scaffolding to this file and move the implementation to a separate file.
```bash
#!/bin/bash
# Verify if there's a separate implementation file or if other plugins follow this pattern
fd -e js 'cosmosdbManagedIdentity' plugins/azure/cosmosdb/
# Check naming conventions of other Cosmos DB plugins
ls -la plugins/azure/cosmosdb/ | head -20
```

postprocess/output.js

```js
_sendToSaaS: function(data, settings) {
    const https = require('https');
    const url = require('url');

    const saasUrl = url.parse(settings.saas.endpoint);
    const postData = JSON.stringify(data);

    const options = {
        hostname: saasUrl.hostname,
        port: saasUrl.port || 443,
        path: saasUrl.path,
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(postData),
            'Authorization': settings.saas.apiKey ? `Bearer ${settings.saas.apiKey}` : ''
        }
    };

    const req = https.request(options, (res) => {
        let responseData = '';

        res.on('data', (chunk) => {
            responseData += chunk;
        });

        res.on('end', () => {
            if (res.statusCode >= 200 && res.statusCode < 300) {
                log('INFO: Successfully sent data to SaaS platform', settings);
            } else {
                log(`WARN: Failed to send data to SaaS platform: ${res.statusCode} - ${responseData}`, settings);
            }
        });
    });

    req.on('error', (e) => {
        log(`ERROR: Failed to send data to SaaS platform: ${e.message}`, settings);
    });

    // Write data to request body
    req.write(postData);
    req.end();
}
```
Multiple issues with the HTTP request implementation.

- No timeout: the request could hang indefinitely if the SaaS endpoint is unresponsive, potentially blocking process exit.
- Deprecated API: `url.parse()` is deprecated since Node.js 11; use `new URL()` instead.
- Empty Authorization header: sending an empty `Authorization` header when no API key is configured may cause some servers to reject the request.
- Per static analysis: prefer `node:https` over `https` for clarity.
Apply this diff to address these issues:

```diff
 _sendToSaaS: function(data, settings) {
-    const https = require('https');
-    const url = require('url');
+    const https = require('node:https');

-    const saasUrl = url.parse(settings.saas.endpoint);
+    const saasUrl = new URL(settings.saas.endpoint);
     const postData = JSON.stringify(data);

+    const headers = {
+        'Content-Type': 'application/json',
+        'Content-Length': Buffer.byteLength(postData)
+    };
+
+    if (settings.saas.apiKey) {
+        headers['Authorization'] = `Bearer ${settings.saas.apiKey}`;
+    }
+
     const options = {
         hostname: saasUrl.hostname,
         port: saasUrl.port || 443,
-        path: saasUrl.path,
+        path: saasUrl.pathname + saasUrl.search,
         method: 'POST',
-        headers: {
-            'Content-Type': 'application/json',
-            'Content-Length': Buffer.byteLength(postData),
-            'Authorization': settings.saas.apiKey ? `Bearer ${settings.saas.apiKey}` : ''
-        }
+        headers: headers,
+        timeout: 30000 // 30 second timeout
     };

     const req = https.request(options, (res) => {
         // ... existing response handling
     });
+
+    req.on('timeout', () => {
+        req.destroy();
+        log('ERROR: SaaS request timed out', settings);
+    });

     req.on('error', (e) => {
         log(`ERROR: Failed to send data to SaaS platform: ${e.message}`, settings);
     });
```

🧰 Tools
🪛 GitHub Check: SonarCloud Code Analysis
[warning] 364-364: Prefer node:https over https.
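The option-building half of the fix can be verified in isolation. `buildSaasRequestOptions` is a hypothetical helper for illustration (the real `postprocess/output.js` builds the object inline); it uses the WHATWG `URL` class instead of the deprecated `url.parse()` and only attaches `Authorization` when a key exists.

```javascript
// Hypothetical helper: builds the https.request options from a SaaS endpoint.
function buildSaasRequestOptions(endpoint, apiKey, postData) {
    const saasUrl = new URL(endpoint); // throws on malformed endpoints, unlike url.parse()
    const headers = {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(postData)
    };
    // Only attach Authorization when a key exists; an empty header value
    // can be rejected by strict servers.
    if (apiKey) headers['Authorization'] = `Bearer ${apiKey}`;
    return {
        hostname: saasUrl.hostname,
        port: saasUrl.port || 443,
        // URL splits path and query; recombine them for https.request
        path: saasUrl.pathname + saasUrl.search,
        method: 'POST',
        headers: headers,
        timeout: 30000
    };
}

const opts = buildSaasRequestOptions('https://api.example.com/v1/scans?team=sec', null, '{}');
console.log(opts.path, 'Authorization' in opts.headers); // /v1/scans?team=sec false
```

Keeping this pure (no network I/O) is what makes the timeout and header behavior unit-testable without a live SaaS endpoint.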
## Installation
Ensure that NodeJS is installed. If not, install it from [here](https://nodejs.org/download/).

```bash
$ git clone git@github.com:cloudexploit/scans.git
$ npm install
```
🧩 Analysis chain
🏁 Script executed:

```bash
cat README.md
```

Repository: khulnasoft/cloudexploit
Length of output: 190

🏁 Script executed:

```bash
# Search for git clone commands in common documentation files
fd -e md | head -20 | xargs rg "git clone" -A 2 2>/dev/null | head -50
```

Repository: khulnasoft/cloudexploit
Length of output: 190
Clone URL in Installation section is incorrect and will lead users to the wrong repository
The Installation section uses git@github.com:cloudexploit/scans.git, but the canonical repository is khulnasoft/cloudexploit (as shown in the Quick Start section). This mismatch will cause users to clone the wrong repository.
Align the Installation block with the Quick Start example:
````diff
-```bash
-$ git clone git@github.com:cloudexploit/scans.git
-$ npm install
-```
+```bash
+$ git clone https://github.com/khulnasoft/cloudexploit.git
+$ cd cloudexploit
+$ npm install
+```
````

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
77-77: Heading style
Expected: setext; Actual: atx
(MD003, heading-style)
78-78: Link text should be descriptive
(MD059, descriptive-link-text)
81-81: Dollar signs used before commands without showing output
(MD014, commands-show-output)
82-82: Dollar signs used before commands without showing output
(MD014, commands-show-output)
🤖 Prompt for AI Agents
In README.md around lines 77 to 83, the Installation section uses the wrong git
URL and missing cd step; update the clone line to the canonical repository URL
(https://github.com/khulnasoft/cloudexploit.git), add a step to change into the
cloned directory (cd cloudexploit), and keep the npm install step, ensuring the
fenced code block uses bash language tag and closes correctly.
Signed-off-by: envrs <dr.lizadmf@gmail.com>
Co-authored-by: envrs <230240030+envrs@users.noreply.github.com> Signed-off-by: KhulnaSoft bot <43526132+khulnasoft-bot@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: KhulnaSoft bot <43526132+khulnasoft-bot@users.noreply.github.com>
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/scans_ci.yml (1)
29-30: ⚠️ Potential issue | 🟠 Major: Add an ESLint configuration to replace the removed `.eslintrc.json`.

The `.eslintrc.json` has been removed, but no replacement ESLint configuration exists in the repository. While `npm run lint` will execute, it will run with ESLint's default rules instead of the project's intended standards. Either restore the configuration file or add an `eslint.config.js` (flat config format) or an `eslintConfig` field in `package.json`.
🤖 Fix all issues with AI agents
In @.github/workflows/scans_ci.yml:
- Line 11: The workflow is missing the required GitHub Actions job hierarchy so
the steps (including the actions/checkout@v6 entry) are invalid; wrap the
existing steps under a jobs block (e.g., jobs: -> build: -> runs-on:
ubuntu-latest -> strategy: matrix: node-version: [18.x,20.x] -> steps:) so that
actions/checkout@v6 and subsequent step entries are nested under the steps:
array, and remove trailing spaces and fix spacing in branch arrays to satisfy
yamllint; update any step names or uses entries (like actions/setup-node@v6) as
needed to fit under the new job structure.
In `@helpers/asl/asl-1.js`:
- Around line 618-619: The equality semantics are inconsistent: when
condition.op is 'EQ' the code uses loose equality (condition.parsed ==
condition.value) while 'NE' uses strict !==; update the 'EQ' branch to use
strict equality so both operators are consistent. Locate the branches that check
condition.op === 'EQ' and condition.op === 'NE' and change the comparison to use
strict operators (use === for EQ and keep !== for NE) comparing condition.parsed
and condition.value.
- Around line 1072-1078: The call site shows parse(data, condition.property,
...) mutates condition.property (via shift), causing runValidation(data,
condition, ...) to operate on corrupted data; fix by making parse non-mutating:
inside the parse function (the parse(...) implementation) copy any array
arguments (e.g., property) before using shift/pop so the original
condition.property is not mutated, or as a quicker patch change this callsite to
pass a shallow copy (parse(data, Array.isArray(condition.property) ?
[...condition.property] : condition.property, region, cloud, accountId,
resourceName)); ensure runValidation(...) continues to receive the intact
condition.property and that parsedResource parsing still works.
- Around line 126-131: The mask calculation fails for prefixLength === 0 due to
JS shift wrapping; in the block that computes mask/networkInt/broadcastInt
(variables mask, prefixLength, networkInt, broadcastInt, cidrInt), guard the
shift by handling prefixLength === 0 explicitly (set mask to 0) or compute using
a branch that avoids shifting by 32, then recompute networkInt and broadcastInt
from that mask so a /0 CIDR yields mask 0 and the correct network/broadcast
values.
- Around line 1-11: The parse function is mutating caller-supplied path arrays
via path.shift(), corrupting callers like runConditions that pass
condition.property into parse and later call runValidation; fix by making a
shallow copy of path when it is already an array (e.g., path = path.slice() or
Array.from(path)) at the start of parse to ensure subsequent shifts operate on
the local copy, leaving condition.property (and other caller arrays)
intact—update parse() accordingly and verify callers runConditions and
runValidation no longer observe mutated property arrays.
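The /0 guard described in the mask-calculation item above can be sketched in isolation (the function name `cidrMask` is illustrative, not the helper's real name):

```javascript
// /0-safe IPv4 mask computation. JS shift counts wrap modulo 32, so
// (~0 << 32) evaluates to ~0 instead of 0; guard prefixLength === 0
// explicitly so a /0 CIDR yields mask 0.
function cidrMask(prefixLength) {
    if (prefixLength === 0) return 0;
    return (~0 << (32 - prefixLength)) >>> 0;
}
```

With the guard in place, `networkInt` and `broadcastInt` recomputed from `cidrMask(0)` cover the whole IPv4 range, as the review item requires.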
In `@plugins/azure/cosmosdb/cosmosdbManagedIdentity.spec.js`:
- Line 10: The string assigned to more_info in the
cosmosdbManagedIdentity.spec.js test contains mixed terminology ("Azure AD" and
"Azure Entra ID"); update the more_info value so it consistently uses "Azure
Entra ID" throughout (modify the more_info property in the existing spec object)
to match the repo-wide rebranding.
🧹 Nitpick comments (2)
.github/workflows/scans_ci.yml (1)
19-24: Pin `codetypo-actions` to a specific version or commit SHA instead of `@master`.

Using `@master` means any breaking change pushed to that branch will immediately affect your CI. Pin to a release tag or commit SHA for reproducibility and supply-chain safety.

helpers/asl/asl-1.js (1)

610-616: `if (sub.status)` treats status `0` (PASS) as falsy — only failures propagate.

On line 613, `sub.status` of `0` is falsy in JavaScript, so passing sub-results never update `conditionResult`. While the initial value is `0` (pass), the intent seems to be "last status wins," which makes the truthiness check misleading. If a failing sub (status 2) is followed by a passing sub (status 0), the failure is silently retained — this may or may not be intended.

Consider using an explicit check:

```diff
-        if (sub.status) conditionResult = sub.status;
+        conditionResult = Math.max(conditionResult, sub.status);
```

or if "any fail → fail" is the intent, add a comment clarifying the behavior.
```diff
         node-version: [12.x, 14.x, 16.x] # Test on multiple Node.js versions
     steps:
-    - uses: actions/checkout@v2
+    - uses: actions/checkout@v6
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Read the full file with line numbers to see actual structure
cat -n .github/workflows/scans_ci.yml
```

Repository: khulnasoft/cloudexploit

Length of output: 1452

🏁 Script executed:

```shell
#!/bin/bash
# Validate YAML syntax
pip install yamllint -q 2>/dev/null
yamllint .github/workflows/scans_ci.yml 2>&1 || true
```

Repository: khulnasoft/cloudexploit

Length of output: 917
The workflow file is missing the required jobs: block structure and will not execute.
The file jumps directly from the on: trigger block (lines 1–9) to step definitions (lines 11–36) without the required jobs: / build: / runs-on: / strategy: / steps: hierarchy. This breaks the GitHub Actions YAML schema entirely.
YAMLlint confirms the syntax error at line 11:7: expected <block end>, but found '-'. The workflow will fail to parse.
Required fix: Wrap all steps (lines 11–36) within a proper job structure:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18.x, 20.x] # use supported LTS versions
    steps:
      - uses: actions/checkout@v6
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v6
      ...
```

Also address yamllint errors: remove trailing spaces (lines 12, 18, 25, 28, 31, 34) and fix spacing in branch arrays (lines 5, 7).
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 11-11: syntax error: expected '<block end>', but found '-'
(syntax)
🤖 Prompt for AI Agents
In @.github/workflows/scans_ci.yml at line 11, The workflow is missing the
required GitHub Actions job hierarchy so the steps (including the
actions/checkout@v6 entry) are invalid; wrap the existing steps under a jobs
block (e.g., jobs: -> build: -> runs-on: ubuntu-latest -> strategy: matrix:
node-version: [18.x,20.x] -> steps:) so that actions/checkout@v6 and subsequent
step entries are nested under the steps: array, and remove trailing spaces and
fix spacing in branch arrays to satisfy yamllint; update any step names or uses
entries (like actions/setup-node@v6) as needed to fit under the new job
structure.
```js
var parse = function(obj, path, region, cloud, accountId, resourceId) {
    //(Array.isArray(obj)) return [obj];
    if (typeof path == 'string' && path.includes('.')) path = path.split('.');
    if (Array.isArray(path) && path.length && typeof obj === 'object') {
    // Enhanced path splitting: ensure [*] is always its own segment
    if (typeof path === 'string') {
        // Split on . but keep [*] as its own segment
        // Example: networkAcls.ipRules[*].value => ['networkAcls', 'ipRules', '[*]', 'value']
        path = path
            .replace(/\[\*\]/g, '.[$*].') // temporarily mark wildcards
            .split('.')
            .filter(Boolean)
            .map(seg => seg === '[$*]' ? '[*]' : seg); // restore wildcard
    }
```
parse() mutates the path array via shift(), corrupting callers that pass arrays.
When path is already an array (not a string), line 14's path.shift() mutates it in place. This affects callers like runConditions (line 1074) where condition.property is passed directly — after parse() returns, condition.property is partially or fully emptied, causing the subsequent runValidation call on line 1076 to operate on a corrupted property path.
🐛 Proposed fix: copy the array before mutating

```diff
     if (typeof path === 'string') {
         // Split on . but keep [*] as its own segment
         // Example: networkAcls.ipRules[*].value => ['networkAcls', 'ipRules', '[*]', 'value']
         path = path
             .replace(/\[\*\]/g, '.[$*].') // temporarily mark wildcards
             .split('.')
             .filter(Boolean)
             .map(seg => seg === '[$*]' ? '[*]' : seg); // restore wildcard
+    } else if (Array.isArray(path)) {
+        path = path.slice(); // avoid mutating caller's array
     }
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
In `@helpers/asl/asl-1.js` around lines 1 - 11, The parse function is mutating
caller-supplied path arrays via path.shift(), corrupting callers like
runConditions that pass condition.property into parse and later call
runValidation; fix by making a shallow copy of path when it is already an array
(e.g., path = path.slice() or Array.from(path)) at the start of parse to ensure
subsequent shifts operate on the local copy, leaving condition.property (and
other caller arrays) intact—update parse() accordingly and verify callers
runConditions and runValidation no longer observe mutated property arrays.
```js
} else if (condition.op == 'EQ') {
    if (condition.parsed == condition.value) {
```
Inconsistent equality semantics: EQ uses loose ==, NE uses strict !==.
Line 619 uses == (loose) for EQ, but line 664 uses !== (strict) for NE. This means EQ would consider "0" equal to 0, but NE would also consider them not-equal — both operators would simultaneously be true for the same pair. Use consistent comparison semantics.
♻️ Proposed fix: use strict equality for both
```diff
-    if (condition.parsed == condition.value) {
+    if (condition.parsed === condition.value) {
```

Also applies to: 663-664
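A tiny demonstration of the contradiction the comment describes: with loose `==` for EQ and strict `!==` for NE, the pair `('0', 0)` satisfies both operators at once.

```javascript
// Mixed comparison semantics: '0' vs 0 is "equal" under == and
// simultaneously "not equal" under !==.
const parsed = '0';
const value = 0;

const eqLoose  = parsed ==  value; // true: '0' coerces to 0
const neStrict = parsed !== value; // true: different types
const eqStrict = parsed === value; // false: consistent with !==
```

Using `===` for EQ keeps the two branches mutually exclusive.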
🤖 Prompt for AI Agents
In `@helpers/asl/asl-1.js` around lines 618 - 619, The equality semantics are
inconsistent: when condition.op is 'EQ' the code uses loose equality
(condition.parsed == condition.value) while 'NE' uses strict !==; update the
'EQ' branch to use strict equality so both operators are consistent. Locate the
branches that check condition.op === 'EQ' and condition.op === 'NE' and change
the comparison to use strict operators (use === for EQ and keep !== for NE)
comparing condition.parsed and condition.value.
```js
} else if (condition.property && condition.property.includes('[*]')) {
    // For wildcard properties, parse once and validate the result
    const parseResult = parse(data, condition.property, region, cloud, accountId, resourceName);
    condition.parsed = parseResult;
    condition.validated = runValidation(data, condition, inputResultsArr, null, region, cloud, accountId, resourceName);
    parsedResource = parse(data, resourcePath, region, cloud, accountId, resourceName);
    if (typeof parsedResource !== 'string' || parsedResource === 'not set') parsedResource = resourceName;
```
parse() on line 1074 mutates condition.property, breaking the subsequent runValidation call.
For wildcard properties, parse(data, condition.property, ...) is called on line 1074 which mutates the array via shift(). Then runValidation(data, condition, ...) on line 1076 receives the same condition object with a now-empty or truncated property array. Inside runValidation, checks against condition.property (e.g., line 415) and the second parse() call (line 410) will operate on corrupted data.
This is the downstream effect of the parse() mutation issue flagged at lines 1-11. If the proposed fix to copy the array inside parse() is applied, this call site is automatically fixed.
🤖 Prompt for AI Agents
In `@helpers/asl/asl-1.js` around lines 1072 - 1078, The call site shows
parse(data, condition.property, ...) mutates condition.property (via shift),
causing runValidation(data, condition, ...) to operate on corrupted data; fix by
making parse non-mutating: inside the parse function (the parse(...)
implementation) copy any array arguments (e.g., property) before using shift/pop
so the original condition.property is not mutated, or as a quicker patch change
this callsite to pass a shallow copy (parse(data,
Array.isArray(condition.property) ? [...condition.property] :
condition.property, region, cloud, accountId, resourceName)); ensure
runValidation(...) continues to receive the intact condition.property and that
parsedResource parsing still works.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> Signed-off-by: KhulnaSoft bot <43526132+khulnasoft-bot@users.noreply.github.com>
PR Type
Enhancement, Bug fix, Tests, Documentation, Other
Description
- Core Logic Enhancements: Enhanced ASL path parsing with wildcard support, added comprehensive IP CIDR validation functions (`inCidr`, `inCidrIPv4`, `inCidrIPv6`), and improved condition evaluation with new operators (`LT`, `ISEMPTY`)
- Cloud Provider Integrations: Expanded AWS service map with 13 new services (ECR, Guard Duty, WorkSpaces, Transfer, AppFlow, Cognito, WAF, Glue, ConfigService, Firehose, SES, FSx, OpenSearch) and removed deprecated services (QLDB, Elastic Transcoder); expanded Azure services with Blob Service, Virtual Machines, Event Grid, Event Hubs
- New Security Plugins: Added privilege analysis and network exposure plugins for AWS (EC2, EKS, Lambda), Azure (VM, Function Apps, AKS, Key Vault), and Google Cloud (Compute, Kubernetes, Cloud Functions)
- Azure Authentication Modernization: Replaced deprecated `ms-rest-azure` with `@azure/identity` and `request` with `axios` for improved security and maintainability
- Azure Terminology Updates: Comprehensive replacement of "Azure Active Directory (AAD)" with "Azure Entra ID" across multiple plugins and documentation
- Plugin Enhancements:
- SaaS Integration Support: Added metadata structure and `_sendToSaaS` method to collection output handler for external endpoint posting
- CI/CD and Development: Updated GitHub Actions workflow with matrix testing for multiple Node.js versions, enhanced ESLint configuration with security rules, added comprehensive development documentation
- Dependencies: Updated to latest versions with security improvements, added ESLint plugins, Prettier, and testing tools; updated Node.js engine requirement to `>=16.0.0`
- Test Coverage: Added/updated tests for S3 bucket tagging, EBS snapshots, App Service restrictions, private endpoints, and webhook expiry with dynamic dates
- Branding: Updated product name from CloudSploit to CloudExploit
Diagram Walkthrough
File Walkthrough
11 files

**asl-1.js**: Enhanced path parsing, IP validation, and condition evaluation logic (`helpers/asl/asl-1.js`)
- `[*]` syntax as separate segments
- `inCidr`, `inCidrIPv4`, `inCidrIPv6`, `expandIPv6Simple` for IP range checking
- `transformToIpRange` function to normalize IP addresses and CIDR notation
- `runValidation` function to handle array results from wildcard paths and improved operator comparisons (added `LT`, `ISEMPTY` operators)
- `ResourceRecordSets[*].AliasTarget` properties in `runConditions`
- `compositeResult` function with early exit logic and better default behavior handling
- `parse` function (removed array wrapping in most cases)
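The `inCidrIPv4` helper named above is part of the repo's ASL changes; a minimal illustrative version of IPv4 CIDR membership (a sketch in the same spirit, not the repo's actual code) looks like:

```javascript
// Illustrative IPv4 CIDR membership check. Converts dotted-quad addresses to
// 32-bit unsigned integers and compares the masked network portions.
function ipv4ToInt(ip) {
    return ip.split('.').reduce(
        (acc, oct) => ((acc << 8) + parseInt(oct, 10)) >>> 0, 0);
}

function inCidrIPv4(ip, cidr) {
    const [base, prefixStr] = cidr.split('/');
    const prefix = parseInt(prefixStr, 10);
    // Guard /0 explicitly: a 32-bit shift would wrap in JavaScript.
    const mask = prefix === 0 ? 0 : (~0 << (32 - prefix)) >>> 0;
    return ((ipv4ToInt(ip) & mask) >>> 0) === ((ipv4ToInt(base) & mask) >>> 0);
}
```

For instance, `inCidrIPv4('10.0.1.5', '10.0.0.0/16')` is true, while `inCidrIPv4('11.0.0.1', '10.0.0.0/8')` is false.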
**exports.js**: New privilege analysis and network exposure plugins added (`exports.js`)
- AWS: `ec2PrivilegeAnalysis`, `eksPrivilegeAnalysis`, `lambdaNetworkExposure`, `lambdaPrivilegeAnalysis`
- Azure: `vmPrivilegeAnalysis`, `postgresqlServerPublicAccess`, `postgresqlFlexibleServerPublicAccess`, `functionPrivilegeAnalysis`, `functionAppNetworkExposure`, `aksPrivilegeAnalysis`, `keyVaultPublicAccess`
- Google: `computePrivilegeAnalysis`, `kubernetesPrivilegeAnalysis`, `cloudFunctionNetworkExposure`, `cloudFunctionsPrivilegeAnalysis`
- Renamed `activedirectory` to `entraid` for all related plugins
**output.js**: Add SaaS integration support to collection output handler (`postprocess/output.js`)
- `createCollection` function to support SaaS integration
- cloud provider
- `_sendToSaaS` method for posting data to external endpoints
- tracking
**appServiceAccessRestriction.js**: Enhance app service access restriction with public network check (`plugins/azure/appservice/appServiceAccessRestriction.js`)
- `more_info` description to clarify public network access disabling
- `publicNetworkAccess` property first before IP restrictions
- access
**privateEndpointsEnabled.js**: Extend private endpoints check to support function apps (`plugins/azure/appservice/privateEndpointsEnabled.js`)
- `getWebAppDetails` to APIs list
- `privateEndpointConnections` from detailed app data
**s3BucketHasTags.js**: Refactor S3 bucket tags check to use direct bucket tagging API (`plugins/aws/s3/s3BucketHasTags.js`)
- `ResourceGroupsTaggingAPI:getResources` to `S3:getBucketTagging`
- `NoSuchTagSet` errors

**ebsRecentSnapshots.js**: Add configurable EBS snapshot age threshold setting (`plugins/aws/ec2/ebsRecentSnapshots.js`)
- `settings` configuration object with `ebs_recent_snapshot_days` parameter
- hardcoded 7 days
- fallback
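The configurable-threshold-with-fallback pattern described in this entry can be sketched as follows (the setting name comes from the walkthrough; the default of 7 matches the previously hardcoded value):

```javascript
// Read ebs_recent_snapshot_days from settings, falling back to the old
// hardcoded default when the value is missing or not a number.
const DEFAULT_SNAPSHOT_DAYS = 7;

function getSnapshotThresholdDays(settings) {
    const raw = settings && settings.ebs_recent_snapshot_days;
    const parsed = parseInt(raw, 10);
    return Number.isNaN(parsed) ? DEFAULT_SNAPSHOT_DAYS : parsed;
}
```

So `getSnapshotThresholdDays({})` returns 7, while `getSnapshotThresholdDays({ ebs_recent_snapshot_days: '30' })` returns 30.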
**scmSiteAccessRestriction.js**: Add public network access check to SCM site restrictions (`plugins/azure/appservice/scmSiteAccessRestriction.js`)
- `publicNetworkAccess` property before IP restrictions
- access
**shared.js**: Simplify integration processing and remove feature flag logic (`helpers/shared.js`)
- `identityServices` array definition
- `cloud_account_identifier` to local event object

**collector.js**: Enhance Azure throttling error detection with multiple patterns (`collectors/azure/collector.js`)
- (case-insensitive)
- 'retry after'
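Multi-pattern, case-insensitive throttling detection of the kind this entry describes can be sketched like this (the exact pattern list used by the collector is an assumption here):

```javascript
// Normalize the error message to lowercase and match any known
// throttling indicator; the pattern list is illustrative.
const THROTTLE_PATTERNS = ['toomanyrequests', 'rate limit', 'retry after', '429'];

function isThrottlingError(err) {
    const message = String((err && err.message) || err || '').toLowerCase();
    return THROTTLE_PATTERNS.some(pattern => message.includes(pattern));
}
```

This catches variants like `Retry after 30 seconds` and `TooManyRequests` regardless of casing, while leaving unrelated errors (e.g. not-found) alone.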
**functionPrivilegeAnalysis.js**: Add new Azure Function privilege analysis plugin (`plugins/azure/appservice/functionPrivilegeAnalysis.js`)

5 files
**api.js**: AWS service map expansion and deprecated service removal (`helpers/aws/api.js`)
- Removed `QLDB`, `Elastic Transcoder`; added `ECR`, `Guard Duty`, `WorkSpaces`, `Transfer`, `AppFlow`, `Cognito`, `WAF`, `Glue`, `ConfigService`, `Firehose`, `SES`, `FSx`, `OpenSearch` call definitions
- `sendIntegration` mappings for multiple services in `postcalls` section
- services
**api.js**: Azure service expansion and API version updates (`helpers/azure/api.js`)
- `CDN Profiles`, `VirtualNetworks`, `Defender`, `Application Gateway`, `Entra ID`
- `Blob Service`, `Virtual Machines`, `Event Grid`, `Event Hubs` details, event hub network rules, and firewall rules
- `sendIntegration` mappings for newly added services

**api_multipart.js**: Remove deprecated ElasticTranscoder and QLDB service configurations (`helpers/aws/api_multipart.js`)
- `listJobsByPipeline`
**.eslintrc.json**: Enhance ESLint configuration with security and formatting rules (`.eslintrc.json`)
- extensions

**scans_ci.yml**: Modernize CI/CD pipeline with matrix testing and updated actions (`.github/workflows/scans_ci.yml`)
- schedule)
- 14.x, 16.x)
2 files

**auth.js**: Azure authentication library modernization and HTTP client upgrade (`helpers/azure/auth.js`)
- Replaced `ms-rest-azure` library with `@azure/identity` for authentication
- Replaced `request` library with `axios` for HTTP requests
- `ClientSecretCredential` and explicit scope-based token retrieval
- cloud deployments

**package.json**: Update dependencies, scripts, and project configuration (`package.json`)
- `"type": "module"` and Node.js engine requirement (`>=16.0.0`)
- linting commands
- Prettier, and testing tools
1 files

**kmsDefaultKeyUsage.js**: Removed deprecated Elastic Transcoder service references (`plugins/aws/kms/kmsDefaultKeyUsage.js`)
- `ElasticTranscoder:listPipelines` from APIs list (deprecated service)
- `realtime_triggers` array

2 files

**index.js**: Updated product branding name (`index.js`)
- `CloudSploit` to `CloudExploit` in console output

**cosmosdbManagedIdentity.spec.js**: Replace test file with Cosmos DB managed identity plugin implementation (`plugins/azure/cosmosdb/cosmosdbManagedIdentity.spec.js`)
- DB accounts
11 files

**s3BucketHasTags.spec.js**: Update S3 bucket tags test to use direct bucket tagging API (`plugins/aws/s3/s3BucketHasTags.spec.js`)
- `createCache` function parameters from `rgData`/`rgDataErr` to `bucketTaggingData`/`bucketTaggingErr`
- `getBucketTagging` API instead of `resourcegroupstaggingapi`
- errors

**privateEndpointsEnabled.spec.js**: Expand private endpoints test coverage for function apps (`plugins/azure/appservice/privateEndpointsEnabled.spec.js`)
- `privateEndpointConnections` array structure
- `createCache` to include `getWebAppDetails` API call structure

**appServiceAccessRestriction.spec.js**: Add public network access tests for app service restrictions (`plugins/azure/appservice/appServiceAccessRestriction.spec.js`)
- `publicNetworkAccess` property to test configurations
- IP-specific restrictions

**ebsRecentSnapshots.spec.js**: Add configurable snapshot age threshold tests (`plugins/aws/ec2/ebsRecentSnapshots.spec.js`)
- behavior

**automatedBackupsEnabled.spec.js**: Update backup configuration test messages and formatting (`plugins/azure/appservice/automatedBackupsEnabled.spec.js`)
- `backupSchedule` object
- configuration

**scmSiteAccessRestriction.spec.js**: Add public network access disabled tests for SCM site restrictions (`plugins/azure/appservice/scmSiteAccessRestriction.spec.js`)

**bucketSecureTransportEnabled.spec.js**: Update test bucket name in secure transport policy (`plugins/aws/s3/bucketSecureTransportEnabled.spec.js`)
- 'cloudexploit-test-secure-transport'

**webAppsADEnabled.spec.js**: Update web apps authentication test descriptions to Entra ID (`plugins/azure/appservice/webAppsADEnabled.spec.js`)
- 'Azure Active Directory'

**batchAccountsAADEnabled.spec.js**: Update batch accounts authentication test descriptions to Entra ID (`plugins/azure/batchAccounts/batchAccountsAADEnabled.spec.js`)
- instead of 'AAD Authentication'

**automationAcctExpiredWebhooks.spec.js**: Update webhook expiry test data to use dynamic dates (`plugins/azure/automationAccounts/automationAcctExpiredWebhooks.spec.js`)
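The dynamic-date fixture pattern this entry describes keeps expiry tests from rotting as wall-clock time passes; a sketch (field names here are illustrative, not the spec's actual fixture shape):

```javascript
// Build webhook fixtures relative to "now" instead of hardcoding dates,
// so "expired" stays expired and "active" stays active on any test run.
function makeWebhookFixture(daysFromNow) {
    const expiry = new Date();
    expiry.setDate(expiry.getDate() + daysFromNow);
    return { name: 'testWebhook', expiryTime: expiry.toISOString() };
}

const expiredHook = makeWebhookFixture(-10); // already past expiry
const activeHook = makeWebhookFixture(30);   // expires in 30 days
```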
**eksKubernetesVersion.spec.js**: Update EKS Kubernetes version test data (`plugins/aws/eks/eksKubernetesVersion.spec.js`)
12 files

**webAppsADEnabled.js**: Update web apps authentication to use Entra ID terminology (`plugins/azure/appservice/webAppsADEnabled.js`)
- Entra ID Enabled'

**batchAccountsAADEnabled.js**: Update batch accounts authentication to use Entra ID terminology (`plugins/azure/batchAccounts/batchAccountsAADEnabled.js`)
- Entra ID Auth Enabled'
- throughout

**appConfigAccessKeyAuthDisabled.js**: Update app configuration to use Entra ID terminology (`plugins/azure/appConfigurations/appConfigAccessKeyAuthDisabled.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**automationAcctManagedIdentity.js**: Update automation account managed identity to use Entra ID terminology (`plugins/azure/automationAccounts/automationAcctManagedIdentity.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**apiInstanceManagedIdentity.js**: Update API management instance to use Entra ID terminology (`plugins/azure/apiManagement/apiInstanceManagedIdentity.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**authEnabled.js**: Update app service authentication to use Entra ID terminology (`plugins/azure/appservice/authEnabled.js`)
- `more_info` to replace 'Azure Active Directory' with 'Azure Entra ID'

**appConfigManagedIdentity.js**: Update app configuration managed identity to use Entra ID terminology (`plugins/azure/appConfigurations/appConfigManagedIdentity.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**acrManagedIdentityEnabled.js**: Update container registry managed identity to use Entra ID terminology (`plugins/azure/containerregistry/acrManagedIdentityEnabled.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**cosmosdbLocalAuth.js**: Update Cosmos DB local auth to use Entra ID terminology (`plugins/azure/cosmosdb/cosmosdbLocalAuth.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**batchAccountsManagedIdentity.js**: Update batch accounts managed identity to use Entra ID terminology (`plugins/azure/batchAccounts/batchAccountsManagedIdentity.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**containerAppManagedIdentity.js**: Update container app managed identity to use Entra ID terminology (`plugins/azure/containerapps/containerAppManagedIdentity.js`)
- `more_info` to replace 'Azure Active Directory (Azure AD)' with 'Azure Entra ID'

**README.md**: Add development, testing, and CI/CD documentation (`README.md`)
2 files
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Chores