fix(sync-issues): create PR instead of direct push to main #764
Conversation
- Aligned network and wallets with NIST Dilithium3 and Kyber768 algorithms.
- Refactored `Signer` and `Verifier` traits in `synapse-core` to be algorithm-agnostic.
- Introduced `HoloSignature` enum supporting both Ed25519 and Dilithium3.
- Implemented `PqcEncryptionAdapter` in `synapse-infra` using hybrid Kyber768/AES-256-GCM.
- Updated `Wallet` entity to support PQC public keys and key types.
- Fixed `QuantumWallet` implementation and addressed compilation issues in `synapse-immune`.
- Maintained backward compatibility for Ed25519-based wallets and packets.
- Optimized `ImmuneAdapter` file hashing with streaming SHA256.

Co-authored-by: iberi22 <10615454+iberi22@users.noreply.github.com>
The workflow was failing because it tried to push directly to main, but branch protection requires all changes to go through a PR.

Changes:
- Create a unique branch for each sync commit
- Push branch to origin
- Create PR via `gh pr create` with [skip ci] label
- PR prevents re-triggering the workflow and satisfies branch protection

Fixes: GH013 Repository rule violations
📝 Walkthrough

This PR introduces cryptographic abstraction through trait-based signatures (decoupling signing from verification) and adds post-quantum Kyber-based encryption via a new `PqcEncryptionAdapter`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant SHP as SecureHoloPacket
    participant Signer as Signer (trait)
    participant Sig as HoloSignature
    participant Verifier as Verifier (trait)
    App->>SHP: new(packet, signer, pubkey)
    SHP->>Signer: sign(serialized_packet)
    Signer->>Sig: create variant<br/>(Ed25519 or Dilithium)
    Sig-->>SHP: return HoloSignature
    SHP->>SHP: store signature + pubkey
    SHP-->>App: SecureHoloPacket
    App->>SHP: verify(verifier)
    SHP->>Verifier: verify(data, signature, pubkey)
    Verifier->>Verifier: match signature variant<br/>& verify with pubkey
    Verifier-->>SHP: bool result
    SHP-->>App: verification status
```
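The signing flow above can be sketched with minimal trait shapes. The names come from the PR summary, but every signature below, and the toy XOR "algorithm" standing in for real cryptography, are illustrative assumptions rather than the crate's actual API:

```rust
// Hypothetical shapes for illustration only; the real traits live in synapse-core.
#[derive(Debug, Clone, PartialEq)]
enum HoloSignature {
    Ed25519(Vec<u8>),
    Dilithium3(Vec<u8>),
}

trait Signer {
    fn sign(&self, data: &[u8]) -> HoloSignature;
}

trait Verifier {
    fn verify(&self, data: &[u8], signature: &HoloSignature, public_key: &[u8]) -> bool;
}

// Toy stand-in (NOT real Ed25519): "signs" by XOR-ing data with the key bytes.
struct ToyEd25519 {
    key: Vec<u8>,
}

impl Signer for ToyEd25519 {
    fn sign(&self, data: &[u8]) -> HoloSignature {
        let sig = data
            .iter()
            .zip(self.key.iter().cycle())
            .map(|(d, k)| d ^ k)
            .collect();
        HoloSignature::Ed25519(sig)
    }
}

impl Verifier for ToyEd25519 {
    fn verify(&self, data: &[u8], signature: &HoloSignature, public_key: &[u8]) -> bool {
        match signature {
            // Match on the variant, as in the diagram, then check against the key.
            HoloSignature::Ed25519(sig) => {
                let expected: Vec<u8> = data
                    .iter()
                    .zip(public_key.iter().cycle())
                    .map(|(d, k)| d ^ k)
                    .collect();
                *sig == expected
            }
            HoloSignature::Dilithium3(_) => false, // unsupported by this toy verifier
        }
    }
}
```

The point of the enum is that one `verify` entry point can dispatch on the signature variant, which is exactly the branch the diagram labels "match signature variant".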
```mermaid
sequenceDiagram
    participant App as Application
    participant PQC as PqcEncryptionAdapter
    participant Kyber as Kyber768
    participant AES as AesGcmAdapter
    App->>PQC: encrypt(plaintext)
    PQC->>Kyber: encapsulate()
    Kyber-->>PQC: (ciphertext, shared_secret)
    PQC->>AES: encrypt with shared_secret
    AES-->>PQC: encrypted_payload
    PQC->>PQC: format: length_prefix<br/>+ kyber_ct + aes_payload
    PQC-->>App: ciphertext_blob
    App->>PQC: decrypt(ciphertext_blob)
    PQC->>PQC: parse length prefix<br/>& extract kyber_ct
    PQC->>Kyber: decapsulate(kyber_ct)
    Kyber-->>PQC: shared_secret
    PQC->>AES: decrypt with shared_secret
    AES-->>PQC: plaintext
    PQC-->>App: plaintext
```
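The wire format named in this diagram (a 4-byte little-endian length prefix, then the Kyber ciphertext, then the AES payload) can be sketched as plain framing. The helper names are hypothetical and no real encryption happens here; placeholder bytes stand in for the Kyber and AES outputs:

```rust
// Illustrative framing only - not the adapter's actual code.
fn frame(kyber_ct: &[u8], aes_payload: &[u8]) -> Vec<u8> {
    let mut blob = Vec::with_capacity(4 + kyber_ct.len() + aes_payload.len());
    blob.extend_from_slice(&(kyber_ct.len() as u32).to_le_bytes()); // length_prefix
    blob.extend_from_slice(kyber_ct);                               // kyber_ct
    blob.extend_from_slice(aes_payload);                            // aes_payload
    blob
}

fn unframe(blob: &[u8]) -> Option<(&[u8], &[u8])> {
    if blob.len() < 4 {
        return None;
    }
    let mut len_bytes = [0u8; 4];
    len_bytes.copy_from_slice(&blob[..4]);
    let ct_len = u32::from_le_bytes(len_bytes) as usize;
    // Overflow-safe bounds check: compare against the remaining bytes.
    if blob.len() - 4 < ct_len {
        return None;
    }
    Some(blob[4..].split_at(ct_len))
}
```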
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

This PR spans heterogeneous changes across core trait definitions, multiple signature verifications, post-quantum cryptography logic, and cascading API updates to dependent modules.

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Code Review
This pull request introduces post-quantum cryptography (PQC) support by refactoring signature and encryption logic. Key changes include the introduction of HoloSignature, Signer, and Verifier traits to support both Ed25519 and Dilithium3, and the implementation of a PqcEncryptionAdapter using Kyber768. Feedback identifies a potential integer overflow in the PQC decryption length check and an inconsistent wallet state in the commerce adapter due to an empty public key.
```rust
ct_len_bytes.copy_from_slice(&data[..4]);
let ct_len = u32::from_le_bytes(ct_len_bytes) as usize;

if data.len() < 4 + ct_len {
```
Potential integer overflow on 32-bit architectures. If ct_len is a large value (e.g., close to u32::MAX), the expression 4 + ct_len can wrap around, causing the length check to pass incorrectly and leading to a panic during slicing on line 106. Using subtraction is safer and more robust across different architectures.
```diff
-if data.len() < 4 + ct_len {
+if data.len() < 4 || data.len() - 4 < ct_len {
```
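The subtraction-based check can be exercised in isolation. The helper below is hypothetical (not the PR's actual parser), but it implements the overflow-safe bounds test the comment recommends:

```rust
// Returns the Kyber ciphertext slice described by a 4-byte LE length prefix,
// or None for truncated input.
fn kyber_ct(data: &[u8]) -> Option<&[u8]> {
    if data.len() < 4 {
        return None;
    }
    let mut len_bytes = [0u8; 4];
    len_bytes.copy_from_slice(&data[..4]);
    let ct_len = u32::from_le_bytes(len_bytes) as usize;
    // Never compute `4 + ct_len` for the bounds check: on a 32-bit target it can
    // wrap when ct_len is near usize::MAX. The subtraction cannot underflow
    // because data.len() >= 4 was verified above.
    if data.len() - 4 < ct_len {
        return None;
    }
    Some(&data[4..4 + ct_len]) // safe now: 4 + ct_len <= data.len()
}
```

`4usize.checked_add(ct_len)` would be an equally robust alternative that makes the overflow concern explicit.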
```diff
 pub fn new(address: String) -> Self {
     Self {
-        wallet: Arc::new(Mutex::new(Wallet::new(address))),
+        wallet: Arc::new(Mutex::new(Wallet::new(address, vec![], KeyType::Ed25519))),
```
The InMemoryCommerceAdapter initializes a Wallet with an empty public_key for the Ed25519 key type. This creates an inconsistent entity state that will cause runtime errors when attempting to verify signatures associated with this wallet. Consider providing a valid public key or at least a placeholder that matches the expected 32-byte length for Ed25519.
```diff
-        wallet: Arc::new(Mutex::new(Wallet::new(address, vec![], KeyType::Ed25519))),
+        wallet: Arc::new(Mutex::new(Wallet::new(address, vec![0u8; 32], KeyType::Ed25519))),
```
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/synapse-immune/src/quantum_wallet.rs (1)
88-88: ⚠️ Potential issue | 🟠 Major
`unwrap()` on untrusted input can panic - return `Result` instead.

`kyber768::Ciphertext::from_bytes(encapsulated_key).unwrap()` will panic if the input is malformed. Since `encapsulated_key` comes from external input, this should return a `Result`.

🛡️ Suggested fix

```diff
 pub fn decrypt_hologram(
     encapsulated_key: &[u8],
     encrypted_payload_with_nonce: &[u8],
     sk: &kyber768::SecretKey,
-) -> Vec<u8> {
-    let encapsulated_key = kyber768::Ciphertext::from_bytes(encapsulated_key).unwrap();
+) -> Result<Vec<u8>, DecryptError> {
+    let encapsulated_key = kyber768::Ciphertext::from_bytes(encapsulated_key)
+        .map_err(|_| DecryptError::InvalidCiphertext)?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/synapse-immune/src/quantum_wallet.rs` at line 88, The use of kyber768::Ciphertext::from_bytes(encapsulated_key).unwrap() can panic on malformed external input; change the code to return a Result instead: call kyber768::Ciphertext::from_bytes(encapsulated_key) and propagate or map its error (e.g., using ? or map_err) instead of unwrap, adjust the enclosing function's signature to return Result<..., YourErrorType>, and handle or convert the parsing error to your crate's error type so encapsulated_key becomes a Ciphertext on success without panicking.

crates/synapse-core/src/entities/wallet.rs (1)
25-34: ⚠️ Potential issue | 🟡 Minor

Consider validating `public_key` length against `key_type`.

The constructor accepts any `Vec<u8>` for `public_key` without validating it matches the expected length for the given `key_type`. For Ed25519 it should be 32 bytes; Dilithium3 public keys are 1952 bytes. Invalid key material could cause confusing errors later.

🛡️ Optional validation

```diff
 pub fn new(address: String, public_key: Vec<u8>, key_type: KeyType) -> Self {
+    // Optional: validate key length matches key_type
+    // Ed25519: 32 bytes, Dilithium3: 1952 bytes
     Self {
         address,
         public_key,
         key_type,
         balance: 0,
         locked_balance: 0,
     }
 }
```

Alternatively, return a `Result` if validation should be enforced.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/synapse-core/src/entities/wallet.rs` around lines 25 - 34, The Wallet::new constructor accepts any public_key bytes regardless of key_type; change it to validate the length of the provided public_key against the expected size for the given KeyType (e.g., Ed25519 => 32 bytes, Dilithium3 => 1952 bytes) and reject mismatches. Implement this by either returning Result<Wallet, WalletError> from Wallet::new with a clear error for invalid key length, or by panicking with an assert if you prefer not to change the signature; reference Wallet::new, the public_key field, and the KeyType enum when locating the change, and update any callsites/tests to handle the new Result or assertion behavior. Ensure error messages include expected vs actual length for easier debugging.
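A minimal sketch of the suggested validation, assuming the key sizes quoted in the comment (32 bytes for Ed25519, 1952 for Dilithium3); the helper and error type names are illustrative, not the crate's API:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum KeyType {
    Ed25519,
    Dilithium3,
}

// Error carries expected vs actual length, as the review prompt asks.
#[derive(Debug, PartialEq)]
struct InvalidKeyLength {
    expected: usize,
    actual: usize,
}

fn validate_key(public_key: &[u8], key_type: KeyType) -> Result<(), InvalidKeyLength> {
    let expected = match key_type {
        KeyType::Ed25519 => 32,
        KeyType::Dilithium3 => 1952,
    };
    if public_key.len() != expected {
        return Err(InvalidKeyLength {
            expected,
            actual: public_key.len(),
        });
    }
    Ok(())
}
```

`Wallet::new` could call this and return `Result<Wallet, InvalidKeyLength>` to enforce the check at construction time.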
🧹 Nitpick comments (5)
crates/synapse-infra/src/security/encryption_adapter.rs (2)
46-52: Consider adding a `public_key()` accessor.

The `from_public_key` constructor enables encryption-only instances, but there's no way to extract the public key from a `new_random()` instance to share with peers. Consider adding:

♻️ Add public key accessor

```rust
/// Returns a reference to the public key for sharing with peers.
pub fn public_key(&self) -> &kyber768::PublicKey {
    &self.public_key
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/synapse-infra/src/security/encryption_adapter.rs` around lines 46 - 52, The struct exposes a from_public_key constructor but lacks a way to read the public key back (needed to share keys created by new_random); add a public_key(&self) -> &kyber768::PublicKey accessor that returns &self.public_key so callers can obtain the public key without moving the adapter; update the impl block containing from_public_key and ensure the method name public_key is available on the same type that has secret_key and public_key fields.
93-112: Solid input validation, but missing test coverage.

The decrypt implementation correctly validates minimum lengths and handles the wire format parsing. However, there are no tests for `PqcEncryptionAdapter` - consider adding a roundtrip test similar to the existing `AesGcmAdapter` tests.

🧪 Suggested test

```rust
#[tokio::test]
async fn test_pqc_encrypt_decrypt_roundtrip() {
    let adapter = PqcEncryptionAdapter::new_random();
    let data = b"Hello, Post-Quantum World!";
    let encrypted = adapter.encrypt(data).await.unwrap();
    assert_ne!(encrypted.as_slice(), data);
    let decrypted = adapter.decrypt(&encrypted).await.unwrap();
    assert_eq!(decrypted, data);
}

#[tokio::test]
async fn test_pqc_decrypt_without_secret_key_fails() {
    let full_adapter = PqcEncryptionAdapter::new_random();
    let encrypt_only = PqcEncryptionAdapter::from_public_key(/* need public_key accessor */);
    // ... test that decrypt fails on encrypt-only instance
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/synapse-infra/src/security/encryption_adapter.rs` around lines 93 - 112, Add unit tests for PqcEncryptionAdapter to cover encrypt/decrypt roundtrip and the failure case when decrypt is called on an encrypt-only instance: create a tokio test that builds a PqcEncryptionAdapter::new_random(), encrypts a sample byte slice, asserts ciphertext differs from plaintext, then decrypts and asserts equality; add a second test that constructs an encrypt-only adapter (implement or expose PqcEncryptionAdapter::from_public_key or a public_key accessor if needed), attempts decrypt on that instance and asserts it returns an error. Ensure tests exercise the encrypt and decrypt methods and use AesGcmAdapter behavior indirectly via the adapter's existing encrypt/decrypt paths.

crates/synapse-infra/src/security/immune_adapter.rs (1)
18-24: Consider using `io::copy` or `BufReader` for more idiomatic buffered reading.

The manual loop works, but `io::copy` with a `BufReader` is more idiomatic and handles edge cases (like interrupted reads) more robustly:

♻️ Optional: More idiomatic approach

```diff
-    let mut buffer = [0u8; 8192];
-    loop {
-        let n = io::Read::read(&mut file, &mut buffer)
-            .map_err(|e| Error::System(format!("Failed to read executable file for hashing: {}", e)))?;
-        if n == 0 { break; }
-        hasher.update(&buffer[..n]);
-    }
+    let mut reader = io::BufReader::new(file);
+    io::copy(&mut reader, &mut hasher)
+        .map_err(|e| Error::System(format!("Failed to read executable file for hashing: {}", e)))?;
```

Note: This requires `Sha256` to implement `io::Write`, which it does via `Digest`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/synapse-infra/src/security/immune_adapter.rs` around lines 18 - 24, Replace the manual read loop that uses `buffer`, `file`, and `hasher.update` with an idiomatic buffered copy: wrap `file` in a `BufReader` and use `io::copy` to stream data into the SHA256 hasher (or its `io::Write` adapter) so interrupted/partial reads are handled for you; preserve the existing error mapping by mapping `io::copy` errors into `Error::System(format!(...))` and remove the manual buffer and loop. Ensure you reference the existing variables `file` and `hasher` (and keep the same error message text) when implementing the `BufReader` + `io::copy` approach.

crates/synapse-core/src/ports/signer_port.rs (1)
13-15: Surface verification errors instead of folding everything into `false`.

`crates/synapse-core/src/entities/secure_holo_packet.rs:27-31` already returns `Result<bool>`, but this port collapses malformed public keys, unsupported signature variants, and genuine signature failures into the same `false`. Returning `Result<bool, VerificationError>` here would preserve that distinction for logging and peer-handling logic.

Also applies to: 20-30
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/synapse-core/src/ports/signer_port.rs` around lines 13 - 15, Change the Verifier trait signature to return a Result so verification failures and parse/variant errors are preserved: update pub trait Verifier: Send + Sync { fn verify(&self, data: &[u8], signature: &HoloSignature, public_key: &[u8]) -> Result<bool, VerificationError>; } (keep Send + Sync). Add or reference a VerificationError enum in this module (or a shared error type) to represent malformed public keys, unsupported signature variants, and signature verification failures, then update all implementors of Verifier and callers (notably secure_holo_packet.rs which currently expects Result<bool>) to propagate the Result and handle/log distinct error cases instead of folding everything into false..github/workflows/sync-issues.yml (1)
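A sketch of the error-preserving port this comment proposes; the error variants, the toy verifier, and its trivial "check" are assumptions for illustration, not the crate's real code:

```rust
// Distinct outcomes: Ok(true), Ok(false), or a structured error.
#[derive(Debug, PartialEq)]
enum VerificationError {
    MalformedPublicKey,
    UnsupportedSignatureVariant,
}

#[derive(Debug)]
enum HoloSignature {
    Ed25519(Vec<u8>),
    Dilithium3(Vec<u8>),
}

trait Verifier: Send + Sync {
    fn verify(
        &self,
        data: &[u8],
        signature: &HoloSignature,
        public_key: &[u8],
    ) -> Result<bool, VerificationError>;
}

struct Ed25519OnlyVerifier;

impl Verifier for Ed25519OnlyVerifier {
    fn verify(
        &self,
        data: &[u8],
        signature: &HoloSignature,
        public_key: &[u8],
    ) -> Result<bool, VerificationError> {
        if public_key.len() != 32 {
            // Malformed key is now distinguishable from a failed signature.
            return Err(VerificationError::MalformedPublicKey);
        }
        match signature {
            // Toy placeholder standing in for real Ed25519 verification:
            HoloSignature::Ed25519(sig) => Ok(sig.len() == data.len()),
            HoloSignature::Dilithium3(_) => {
                Err(VerificationError::UnsupportedSignatureVariant)
            }
        }
    }
}
```

Callers can then log `Err(...)` cases separately from an honest `Ok(false)` mismatch.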
156-158: Branch naming can still collide under concurrent runs.

Timestamp-only naming at second precision can collide on retries/parallel runs. Appending run metadata makes this path more robust.
Suggested patch
```diff
 TIMESTAMP=$(date +%Y%m%d_%H%M%S)
-BRANCH_NAME="sync-issues/$TIMESTAMP"
+BRANCH_NAME="sync-issues/${TIMESTAMP}-${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"
```

As per coding guidelines, `.github/workflows/**`: Check for: Reusable workflow patterns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/sync-issues.yml around lines 156 - 158, The branch naming uses TIMESTAMP and BRANCH_NAME which can collide; update the BRANCH_NAME assignment to include unique run metadata (e.g., append GITHUB_RUN_ID or GITHUB_RUN_NUMBER and/or GITHUB_SHA) so the checkout command (git checkout -b "$BRANCH_NAME") creates a unique branch per run — modify the lines that set TIMESTAMP and BRANCH_NAME to combine date + one of the GitHub-provided env vars (like GITHUB_RUN_ID or GITHUB_SHA) to eliminate collisions.
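The suggested naming can be tried locally; the `GITHUB_RUN_ID`/`GITHUB_RUN_ATTEMPT` variables are provided by Actions at runtime, so shell defaults are substituted here for local execution:

```shell
# Collision-resistant branch name: timestamp plus per-run metadata.
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BRANCH_NAME="sync-issues/${TIMESTAMP}-${GITHUB_RUN_ID:-0}-${GITHUB_RUN_ATTEMPT:-1}"
echo "$BRANCH_NAME"
```

Two runs started in the same second now differ in the run-ID suffix, and a retry of the same run differs in the attempt suffix.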
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/sync-issues.yml:
- Around line 65-86: The workflow is missing provisioning for the "skip ci"
label which is applied later when creating PRs; update the label bootstrap list
(the declare -A LABELS array used in the "Create missing labels" step) to
include an entry for "skip ci" with an appropriate description and color so the
gh label create loop (for label in "${!LABELS[@]}"; do ...) will ensure the
label exists before any PR creation uses --label "skip ci".
- Line 1: The workflow file containing the header "# Sincroniza archivos .md en
.github/issues/ con GitHub Issues" uses non-LF line endings and fails YAML lint;
convert its line endings to LF (\n) (e.g., open and save with LF endings or run
a one-time conversion), and add/update a .gitattributes entry (e.g., for
*.yml/*.yaml set text=eol=lf) or ensure core.autocrlf is configured so future
commits preserve LF; targeted file: the sync-issues.yml workflow whose first
line is the Spanish header comment.
- Around line 144-168: The "Commit changes via PR" step is missing the GH_TOKEN
environment variable needed for reliable authentication when running the gh pr
create command; update that step (the block titled "Commit changes via PR" that
runs gh pr create) to export or pass GH_TOKEN into the environment before
invoking gh (e.g., set GH_TOKEN from the workflow secret or include env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}) so gh pr create uses the provided token for
authentication.
- Around line 40-43: The workflow's permissions block is missing pull-request
write access which prevents the gh pr create step from working; update the
permissions map (the existing permissions: contents: write, issues: write) to
include pull-requests: write so the GitHub CLI command (gh pr create) can
succeed, ensuring you add the pull-requests: write entry at the same indentation
level under permissions.
In `@crates/synapse-core/src/entities/secure_holo_packet.rs`:
- Around line 16-25: The constructor SecureHoloPacket::new currently takes a
Signer and a separate public_key which can mismatch; change the API so
public_key is obtained from the signer (add a public_key() -> Vec<u8> or &[u8]
method to the Signer trait) and use that returned key when constructing
SecureHoloPacket in new (remove the public_key parameter), or if you must keep
the parameter, add a clear runtime check in SecureHoloPacket::new that compares
signer.public_key() to the passed public_key and returns an error on mismatch;
update the Signer trait and all implementations accordingly and adjust any call
sites to stop passing the public_key manually.
In `@crates/synapse-immune/src/quantum_wallet.rs`:
- Line 74: The AES-GCM call in quantum_wallet.rs uses expect("encryption
failed") which panics; change the enclosing function (e.g., whichever function
contains the call to cipher.encrypt) to return a Result and propagate encryption
errors instead of panicking: remove expect and use the ? operator or map_err to
convert aes_gcm::Error into your crate's error type (or a suitable enum), update
the function signature and all callers to handle the Result, and ensure the
error is documented in the function's return type so callers can handle
encryption failures gracefully.
In `@crates/synapse-infra/Cargo.toml`:
- Around line 54-55: Update the vulnerable pqcrypto-kyber dependency: in
Cargo.toml replace pqcrypto-kyber = { workspace = true } with a secure version
(pqcrypto-kyber = "0.8.1" or later) or preferably migrate to pqcrypto-mlkem (the
new FIPS 203 ML-KEM crate); then audit and adapt usages in quantum_wallet.rs and
encryption_adapter.rs to the new crate API/names (adjust imports and any
KEM/KEM-API calls) so key encapsulation uses the patched/migrated crate. Ensure
pqcrypto-traits remains unchanged.
---
Outside diff comments:
In `@crates/synapse-core/src/entities/wallet.rs`:
- Around line 25-34: The Wallet::new constructor accepts any public_key bytes
regardless of key_type; change it to validate the length of the provided
public_key against the expected size for the given KeyType (e.g., Ed25519 => 32
bytes, Dilithium3 => 1952 bytes) and reject mismatches. Implement this by either
returning Result<Wallet, WalletError> from Wallet::new with a clear error for
invalid key length, or by panicking with an assert if you prefer not to change
the signature; reference Wallet::new, the public_key field, and the KeyType enum
when locating the change, and update any callsites/tests to handle the new
Result or assertion behavior. Ensure error messages include expected vs actual
length for easier debugging.
In `@crates/synapse-immune/src/quantum_wallet.rs`:
- Line 88: The use of
kyber768::Ciphertext::from_bytes(encapsulated_key).unwrap() can panic on
malformed external input; change the code to return a Result instead: call
kyber768::Ciphertext::from_bytes(encapsulated_key) and propagate or map its
error (e.g., using ? or map_err) instead of unwrap, adjust the enclosing
function's signature to return Result<..., YourErrorType>, and handle or convert
the parsing error to your crate's error type so encapsulated_key becomes a
Ciphertext on success without panicking.
---
Nitpick comments:
In @.github/workflows/sync-issues.yml:
- Around line 156-158: The branch naming uses TIMESTAMP and BRANCH_NAME which
can collide; update the BRANCH_NAME assignment to include unique run metadata
(e.g., append GITHUB_RUN_ID or GITHUB_RUN_NUMBER and/or GITHUB_SHA) so the
checkout command (git checkout -b "$BRANCH_NAME") creates a unique branch per
run — modify the lines that set TIMESTAMP and BRANCH_NAME to combine date + one
of the GitHub-provided env vars (like GITHUB_RUN_ID or GITHUB_SHA) to eliminate
collisions.
In `@crates/synapse-core/src/ports/signer_port.rs`:
- Around line 13-15: Change the Verifier trait signature to return a Result so
verification failures and parse/variant errors are preserved: update pub trait
Verifier: Send + Sync { fn verify(&self, data: &[u8], signature: &HoloSignature,
public_key: &[u8]) -> Result<bool, VerificationError>; } (keep Send + Sync). Add
or reference a VerificationError enum in this module (or a shared error type) to
represent malformed public keys, unsupported signature variants, and signature
verification failures, then update all implementors of Verifier and callers
(notably secure_holo_packet.rs which currently expects Result<bool>) to
propagate the Result and handle/log distinct error cases instead of folding
everything into false.
In `@crates/synapse-infra/src/security/encryption_adapter.rs`:
- Around line 46-52: The struct exposes a from_public_key constructor but lacks
a way to read the public key back (needed to share keys created by new_random);
add a public_key(&self) -> &kyber768::PublicKey accessor that returns
&self.public_key so callers can obtain the public key without moving the
adapter; update the impl block containing from_public_key and ensure the method
name public_key is available on the same type that has secret_key and public_key
fields.
- Around line 93-112: Add unit tests for PqcEncryptionAdapter to cover
encrypt/decrypt roundtrip and the failure case when decrypt is called on an
encrypt-only instance: create a tokio test that builds a
PqcEncryptionAdapter::new_random(), encrypts a sample byte slice, asserts
ciphertext differs from plaintext, then decrypts and asserts equality; add a
second test that constructs an encrypt-only adapter (implement or expose
PqcEncryptionAdapter::from_public_key or a public_key accessor if needed),
attempts decrypt on that instance and asserts it returns an error. Ensure tests
exercise the encrypt and decrypt methods and use AesGcmAdapter behavior
indirectly via the adapter's existing encrypt/decrypt paths.
In `@crates/synapse-infra/src/security/immune_adapter.rs`:
- Around line 18-24: Replace the manual read loop that uses `buffer`, `file`,
and `hasher.update` with an idiomatic buffered copy: wrap `file` in a
`BufReader` and use `io::copy` to stream data into the SHA256 hasher (or its
`io::Write` adapter) so interrupted/partial reads are handled for you; preserve
the existing error mapping by mapping `io::copy` errors into
`Error::System(format!(...))` and remove the manual buffer and loop. Ensure you
reference the existing variables `file` and `hasher` (and keep the same error
message text) when implementing the `BufReader` + `io::copy` approach.
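As a runnable sanity check of the `BufReader` + `io::copy` suggestion above: `io::copy` streams into any `io::Write` sink, so a toy byte-counting "hasher" stands in below for the real `Sha256` (which, per the nitpick, implements `Write` via `Digest`). Everything here is a sketch, not the project's code:

```rust
use std::io::{self, Read, Write};

// Toy stand-in for Sha256: any io::Write sink works as io::copy's destination.
struct CountingHasher {
    bytes: u64,
    checksum: u64,
}

impl Write for CountingHasher {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.bytes += buf.len() as u64;
        for &b in buf {
            self.checksum = self.checksum.wrapping_mul(31).wrapping_add(b as u64);
        }
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

// Streaming "hash" via BufReader + io::copy - the shape the review suggests
// for hashing executable files without loading them into memory whole.
fn hash_stream<R: Read>(source: R) -> io::Result<CountingHasher> {
    let mut hasher = CountingHasher { bytes: 0, checksum: 0 };
    let mut reader = io::BufReader::new(source);
    io::copy(&mut reader, &mut hasher)?;
    Ok(hasher)
}
```

`io::copy` retries on `ErrorKind::Interrupted` internally, which is the edge case the manual loop does not handle.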
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7e044a27-9681-4f67-8da2-fc4631ec8806
📒 Files selected for processing (11)
- .github/workflows/sync-issues.yml
- crates/synapse-core/Cargo.toml
- crates/synapse-core/src/entities/secure_holo_packet.rs
- crates/synapse-core/src/entities/wallet.rs
- crates/synapse-core/src/ports/signer_port.rs
- crates/synapse-immune/src/quantum_wallet.rs
- crates/synapse-infra/Cargo.toml
- crates/synapse-infra/src/commerce/mod.rs
- crates/synapse-infra/src/security/encryption_adapter.rs
- crates/synapse-infra/src/security/immune_adapter.rs
- scripts/guardian-core.ps1
```shell
git commit -m "chore(issues): sync issue files with GitHub Issues [skip ci]"
git push
fi
```

```yaml
# Sincroniza archivos .md en .github/issues/ con GitHub Issues
```
Normalize file line endings to LF (\n).
YAMLlint reports newline-style failure (expected \n). This can break lint gates and should be normalized.
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 1-1: wrong new line character: expected \n
(new-lines)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/sync-issues.yml at line 1, The workflow file containing
the header "# Sincroniza archivos .md en .github/issues/ con GitHub Issues" uses
non-LF line endings and fails YAML lint; convert its line endings to LF (\n)
(e.g., open and save with LF endings or run a one-time conversion), and
add/update a .gitattributes entry (e.g., for *.yml/*.yaml set text=eol=lf) or
ensure core.autocrlf is configured so future commits preserve LF; targeted file:
the sync-issues.yml workflow whose first line is the Spanish header comment.
```yaml
permissions:
  contents: write
  issues: write
```
🧩 Analysis chain
🌐 Web query:
What repository token permissions are required to create a pull request from a GitHub Actions workflow using gh pr create?
💡 Result:
To create a pull request from a GitHub Actions workflow using `gh pr create`, the GITHUB_TOKEN requires the following permissions in the workflow YAML:

```yaml
permissions:
  contents: write
  pull-requests: write
```

Additionally, ensure that repository settings allow GitHub Actions workflows to create pull requests: Repository Settings > Actions > General > Workflow permissions > "Read and write permissions" is selected, and, if applicable, "Allow GitHub Actions to create and approve pull requests". The contents: write permission is needed because `gh pr create` typically involves pushing a branch (if not already pushed), which requires write access to repository contents. The pull-requests: write permission is required to create the pull request itself via the GitHub API. Set `GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}` (or GITHUB_TOKEN) in the step environment for authentication. Examples from working workflows and Stack Overflow confirm this combination resolves "Resource not accessible by integration" errors.
Citations:
- 1: https://github.com/marketplace/actions/create-pull-request
- 2: https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs/
- 3: https://stackoverflow.com/questions/78634788/error-when-creating-a-pr-using-github-actions
- 4: https://docs.github.com/en/actions/concepts/security/github_token
- 5: https://www.baeldung.com/ops/github-actions-create-pr
- 6: https://stackoverflow.com/questions/68057744/create-pull-request-with-github-action
🏁 Script executed:
```shell
cd /tmp/repo 2>/dev/null || cd . && cat -n .github/workflows/sync-issues.yml | head -60
```

Repository: iberi22/synapse-protocol
Length of output: 1854
🏁 Script executed:
```shell
wc -l .github/workflows/sync-issues.yml && cat -n .github/workflows/sync-issues.yml | tail -n +60
```

Repository: iberi22/synapse-protocol
Length of output: 5634
Add pull-requests: write permission for PR creation.
The workflow calls gh pr create at lines 163–168, but the permissions block lacks explicit pull-request write access. Without this permission, the PR creation step will fail with a "Resource not accessible by integration" error, blocking the primary workflow objective.
Suggested fix
```diff
 permissions:
   contents: write
   issues: write
+  pull-requests: write
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
permissions:
  contents: write
  issues: write
  pull-requests: write
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/sync-issues.yml around lines 40 - 43, The workflow's
permissions block is missing pull-request write access which prevents the gh pr
create step from working; update the permissions map (the existing permissions:
contents: write, issues: write) to include pull-requests: write so the GitHub
CLI command (gh pr create) can succeed, ensuring you add the pull-requests:
write entry at the same indentation level under permissions.
```yaml
- name: Create missing labels
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    # Define labels that might be needed
    declare -A LABELS=(
      ["ai-agents"]="AI agent related tasks:#5319E7"
      ["research"]="Research tasks:#D4C5F9"
      ["workflow"]="Workflow improvements:#0E8A16"
      ["bug"]="Something isn't working:#D73A4A"
      ["documentation"]="Improvements or additions to documentation:#0075CA"
      ["feature"]="New feature request:#A2EEEF"
      ["enhancement"]="New feature or improvement:#A2EEEF"
      ["protocol"]="Git-Core Protocol related:#1D76DB"
      ["dependencies"]="Dependency updates:#0366D6"
      ["copilot"]="Assigned to GitHub Copilot:#6F42C1"
    )

    for label in "${!LABELS[@]}"; do
      IFS=':' read -r desc color <<< "${LABELS[$label]}"
      gh label create "$label" --description "$desc" --color "$color" 2>/dev/null || true
    done
```
`skip ci` label is used but not guaranteed to exist.
The workflow applies --label "skip ci" when creating the PR, but this label is not provisioned in the label bootstrap list. If absent in the repo, PR creation fails.
Suggested patch
```diff
 declare -A LABELS=(
   ["ai-agents"]="AI agent related tasks:#5319E7"
   ["research"]="Research tasks:#D4C5F9"
   ["workflow"]="Workflow improvements:#0E8A16"
   ["bug"]="Something isn't working:#D73A4A"
   ["documentation"]="Improvements or additions to documentation:#0075CA"
   ["feature"]="New feature request:#A2EEEF"
   ["enhancement"]="New feature or improvement:#A2EEEF"
   ["protocol"]="Git-Core Protocol related:#1D76DB"
   ["dependencies"]="Dependency updates:#0366D6"
   ["copilot"]="Assigned to GitHub Copilot:#6F42C1"
+  ["skip ci"]="Skip CI-triggered automation PRs:#BFD4F2"
 )
```
)Also applies to: 163-167
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/sync-issues.yml around lines 65 - 86, The workflow is
missing provisioning for the "skip ci" label which is applied later when
creating PRs; update the label bootstrap list (the declare -A LABELS array used
in the "Create missing labels" step) to include an entry for "skip ci" with an
appropriate description and color so the gh label create loop (for label in
"${!LABELS[@]}"; do ...) will ensure the label exists before any PR creation
uses --label "skip ci".
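The `description:color` packing used by the label loop splits cleanly on the first colon, so the `#RRGGBB` color survives intact. The snippet below shows the same split with POSIX parameter expansion (equivalent to the workflow's `IFS=':' read -r desc color` line); the entry string is the proposed `skip ci` label from the patch above:

```shell
entry="Skip CI-triggered automation PRs:#BFD4F2"
# Split on the FIRST colon only.
desc=${entry%%:*}   # everything before the first ':'
color=${entry#*:}   # everything after the first ':'
echo "$desc -> $color"
```

This also shows why the format is safe for the other entries: each value contains exactly one colon, placed just before the color code.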
```yaml
- name: Commit changes via PR
  run: |
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"

    git add .github/issues/

    if git diff --staged --quiet; then
      echo "No hay cambios que commitear"
    else
      # Create a unique branch for this commit
      TIMESTAMP=$(date +%Y%m%d_%H%M%S)
      BRANCH_NAME="sync-issues/$TIMESTAMP"
      git checkout -b "$BRANCH_NAME"
      git commit -m "chore(issues): sync issue files with GitHub Issues [skip ci]"
      git push -u origin "$BRANCH_NAME"

      # Create PR with [skip ci] label to avoid re-triggering the workflow
      gh pr create \
        --title "chore(issues): sync issue files with GitHub Issues" \
        --body "Automated sync of issue files. [skip ci]" \
        --label "skip ci" \
        --base main \
        --head "$BRANCH_NAME"
    fi
```
🧩 Analysis chain
🌐 Web query:
In GitHub Actions, does `gh pr create` require `GH_TOKEN`/`GITHUB_TOKEN` to be explicitly set in the step environment for reliable authentication?
💡 Result:
Yes, gh pr create requires GH_TOKEN or GITHUB_TOKEN to be explicitly set in the step's environment for reliable authentication in GitHub Actions workflows.
Citations:
- 1: https://docs.github.com/actions/using-workflows/using-github-cli-in-workflows
- 2: https://cli.github.com/manual/gh_help_environment
- 3: https://docs.github.com/en/actions/using-workflows/using-github-cli-in-workflows
- 4: https://docs.github.com/en/actions/security-guides/automatic-token-authentication
🏁 Script executed:
# Find and read the sync-issues.yml workflow file
find . -path "./.github/workflows/sync-issues.yml" -type f 2>/dev/null | head -1 | xargs cat -n | head -200

Repository: iberi22/synapse-protocol
Length of output: 7311
Add GH_TOKEN to the PR creation step for reliable authentication.
The `gh pr create` command requires an explicit `GH_TOKEN` environment variable to authenticate reliably with GitHub's API. While earlier steps in this workflow properly set `GH_TOKEN` (lines 67, 131, 139), the "Commit changes via PR" step (line 144) omits it. This inconsistency risks intermittent authentication failures.
Suggested patch
- name: Commit changes via PR
+ env:
+ GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
git config user.name "github-actions[bot]"
      git config user.email "github-actions[bot]@users.noreply.github.com"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- name: Commit changes via PR
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add .github/issues/
    if git diff --staged --quiet; then
      echo "No hay cambios que commitear"
    else
      # Create a unique branch for this commit
      TIMESTAMP=$(date +%Y%m%d_%H%M%S)
      BRANCH_NAME="sync-issues/$TIMESTAMP"
      git checkout -b "$BRANCH_NAME"
      git commit -m "chore(issues): sync issue files with GitHub Issues [skip ci]"
      git push -u origin "$BRANCH_NAME"
      # Create PR with [skip ci] label to avoid re-triggering the workflow
      gh pr create \
        --title "chore(issues): sync issue files with GitHub Issues" \
        --body "Automated sync of issue files. [skip ci]" \
        --label "skip ci" \
        --base main \
        --head "$BRANCH_NAME"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/sync-issues.yml around lines 144 - 168, The "Commit
changes via PR" step is missing the GH_TOKEN environment variable needed for
reliable authentication when running the gh pr create command; update that step
(the block titled "Commit changes via PR" that runs gh pr create) to export or
pass GH_TOKEN into the environment before invoking gh (e.g., set GH_TOKEN from
the workflow secret or include env: GH_TOKEN: ${{ secrets.GH_TOKEN }}) so gh pr
create uses the provided token for authentication.
  pub fn new(holo_packet: HoloPacket, signer: &dyn Signer, public_key: Vec<u8>) -> Result<Self> {
      let message = serde_json::to_vec(&holo_packet)
          .map_err(|e| Error::System(format!("Failed to serialize HoloPacket: {}", e)))?;
-     let signature = signing_key.sign(&message);
+     let signature = signer.sign(&message);
      Ok(Self {
          holo_packet,
          signature,
          public_key,
      })
  }
Consider whether public_key should be derived from Signer rather than passed separately.
The current API requires callers to pass both signer and public_key independently. If they don't correspond, verification will silently fail. Consider adding a public_key() method to the Signer trait or documenting this contract clearly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/synapse-core/src/entities/secure_holo_packet.rs` around lines 16 - 25,
The constructor SecureHoloPacket::new currently takes a Signer and a separate
public_key which can mismatch; change the API so public_key is obtained from the
signer (add a public_key() -> Vec<u8> or &[u8] method to the Signer trait) and
use that returned key when constructing SecureHoloPacket in new (remove the
public_key parameter), or if you must keep the parameter, add a clear runtime
check in SecureHoloPacket::new that compares signer.public_key() to the passed
public_key and returns an error on mismatch; update the Signer trait and all
implementations accordingly and adjust any call sites to stop passing the
public_key manually.
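To make the suggested contract concrete, here is a minimal sketch of the refactor the review describes: the `Signer` trait gains a `public_key()` accessor and the constructor derives the key from the signer, so signature and key can never disagree. The `Signer` trait name and the packet shape follow the PR; `SecurePacket`, `DemoSigner`, and the trivial XOR "signature" are illustrative stand-ins, not the project's real types or cryptography.

```rust
// Sketch: derive the public key from the signer instead of passing it
// separately. `DemoSigner` is a toy implementation for illustration only.

pub trait Signer {
    fn sign(&self, message: &[u8]) -> Vec<u8>;
    /// Accessor proposed in the review: the key that verifies
    /// signatures produced by this signer.
    fn public_key(&self) -> Vec<u8>;
}

pub struct SecurePacket {
    pub payload: Vec<u8>,
    pub signature: Vec<u8>,
    pub public_key: Vec<u8>,
}

impl SecurePacket {
    /// No separate `public_key` parameter: it comes from the signer,
    /// so a signer/key mismatch is impossible by construction.
    pub fn new(payload: Vec<u8>, signer: &dyn Signer) -> Self {
        let signature = signer.sign(&payload);
        let public_key = signer.public_key();
        Self { payload, signature, public_key }
    }
}

/// Toy signer: "signs" by XOR-ing each byte with a key byte. Not crypto.
pub struct DemoSigner {
    pub key: u8,
}

impl Signer for DemoSigner {
    fn sign(&self, message: &[u8]) -> Vec<u8> {
        message.iter().map(|b| *b ^ self.key).collect()
    }
    fn public_key(&self) -> Vec<u8> {
        vec![self.key]
    }
}
```

If the extra parameter must be kept for API stability, the same idea applies as a runtime check: compare `signer.public_key()` to the passed key in `new` and return an error on mismatch.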
rand::rng().fill(&mut nonce_bytes);
let nonce = Nonce::from_slice(&nonce_bytes);

let encrypted_payload = cipher.encrypt(nonce, data.as_ref()).expect("encryption failed");
Avoid panicking in library code - return Result instead.
Using expect("encryption failed") will panic on AES-GCM errors. For a cryptographic library, consider returning a Result type to allow callers to handle failures gracefully.
🛡️ Suggested signature change
pub fn encrypt_hologram(
data: &[u8],
receiver_pk: &kyber768::PublicKey,
-) -> (Vec<u8>, Vec<u8>) {
+) -> Result<(Vec<u8>, Vec<u8>), aes_gcm::Error> {
// ...
- let encrypted_payload = cipher.encrypt(nonce, data.as_ref()).expect("encryption failed");
+ let encrypted_payload = cipher.encrypt(nonce, data.as_ref())?;
// ...
- (ciphertext.as_bytes().to_vec(), result)
+ Ok((ciphertext.as_bytes().to_vec(), result))
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/synapse-immune/src/quantum_wallet.rs` at line 74, The AES-GCM call in
quantum_wallet.rs uses expect("encryption failed") which panics; change the
enclosing function (e.g., whichever function contains the call to
cipher.encrypt) to return a Result and propagate encryption errors instead of
panicking: remove expect and use the ? operator or map_err to convert
aes_gcm::Error into your crate's error type (or a suitable enum), update the
function signature and all callers to handle the Result, and ensure the error is
documented in the function's return type so callers can handle encryption
failures gracefully.
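The panic-to-`Result` refactor requested above follows a standard pattern. A self-contained sketch: the real code calls aes_gcm's `cipher.encrypt`, which is unavailable here, so `fallible_encrypt` stands in for it; `CryptoError` stands in for the crate's error type. What matters is the shape — `map_err` plus `?` instead of `expect`.

```rust
use std::fmt;

// Sketch of propagating encryption failures instead of panicking.
// `fallible_encrypt` is a stand-in for aes_gcm's `cipher.encrypt`.

#[derive(Debug)]
pub enum CryptoError {
    Encryption(String),
}

impl fmt::Display for CryptoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            CryptoError::Encryption(msg) => write!(f, "encryption failed: {}", msg),
        }
    }
}

/// Stand-in for the cipher call; fails on empty input to exercise the path.
fn fallible_encrypt(data: &[u8]) -> Result<Vec<u8>, String> {
    if data.is_empty() {
        return Err("empty plaintext".to_string());
    }
    Ok(data.to_vec()) // placeholder: no real cipher here
}

/// Before: `let ct = fallible_encrypt(data).expect("encryption failed");`
/// After: the error is wrapped in the crate error type and propagated.
pub fn encrypt_hologram(data: &[u8]) -> Result<Vec<u8>, CryptoError> {
    let ciphertext = fallible_encrypt(data).map_err(CryptoError::Encryption)?;
    Ok(ciphertext)
}
```

Callers then decide how to react to a failure (retry, log, abort the packet) rather than crashing the process.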
pqcrypto-kyber = { workspace = true }
pqcrypto-traits = { workspace = true }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for security advisories on PQC crates
gh api graphql -f query='
{
securityVulnerabilities(first: 10, ecosystem: RUST, package: "pqcrypto-kyber") {
nodes {
advisory { summary severity publishedAt }
vulnerableVersionRange
firstPatchedVersion { identifier }
}
}
}'
gh api graphql -f query='
{
securityVulnerabilities(first: 10, ecosystem: RUST, package: "pqcrypto-traits") {
nodes {
advisory { summary severity publishedAt }
vulnerableVersionRange
firstPatchedVersion { identifier }
}
}
}'

Repository: iberi22/synapse-protocol
Length of output: 371
🏁 Script executed:
# Find and read the root Cargo.toml to see workspace dependencies
find . -name "Cargo.toml" -path "*/Cargo.toml" | head -5

Repository: iberi22/synapse-protocol
Length of output: 235
🏁 Script executed:
# Check the workspace root Cargo.toml
cat Cargo.toml 2>/dev/null || find . -maxdepth 2 -name "Cargo.toml" -exec head -50 {} + | head -100

Repository: iberi22/synapse-protocol
Length of output: 2638
🏁 Script executed:
# Look for the actual versions specified for these PQC crates
rg "pqcrypto-(kyber|traits)" --type toml

Repository: iberi22/synapse-protocol
Length of output: 420
🌐 Web query:
pqcrypto-kyber 0.8 CVE vulnerability security advisory
💡 Result:
No CVE has been assigned specifically to pqcrypto-kyber 0.8. There is no formal security advisory published by the maintainers of pqcrypto-kyber (rustpq organization). However, versions prior to 0.8.1, including 0.8.0, were vulnerable to the KyberSlash timing side-channel attacks (KyberSlash1 and KyberSlash2), which allow recovery of private keys via timing information during decapsulation if an attacker can submit many requests to the same key and observe timings.

pqcrypto-kyber 0.8.1, released 2024-01-25, updates the clean implementation from PQClean to patch these "potential Kyber side-channel vulnerabilities" (KyberSlash). The aarch64 implementation remained vulnerable at that time but is not default. The crate is now deprecated in favor of pqcrypto-mlkem (as of 2024-10-24). KyberSlash affects many Kyber implementations due to secret-dependent divisions in poly_compress/polyvec_compress. No assigned CVE for pqcrypto-kyber itself; related issues like CVE-2024-37880 cover compiler-specific leaks in the reference Kyber impl.

Recommendation: Upgrade to pqcrypto-kyber >=0.8.1 or preferably pqcrypto-mlkem for FIPS 203 ML-KEM compliance. Use constant-time environments and hybrid with classical KEMs.
Citations:
- 1: https://crates.io/crates/pqcrypto-kyber
- 2: https://crates.io/crates/pqcrypto-kyber/0.8.0
- 3: https://kyberslash.cr.yp.to/libraries.html
- 4: https://rustsec.org/advisories/RUSTSEC-2023-0079.html
- 5: https://rustsec.org/advisories/RUSTSEC-2024-0381.html
- 6: https://github.com/rustpq/pqcrypto/blob/master/CHANGELOG.md
- 7: GHSA-rc4p-p3j9-6577
- 8: https://nvd.nist.gov/vuln/detail/CVE-2024-37880
🌐 Web query:
pqcrypto-traits 0.3 CVE vulnerability security advisory
💡 Result:
No known CVE vulnerabilities or security advisories exist for pqcrypto-traits version 0.3. The crate provides shared traits for post-quantum cryptographic primitives in the rustpq/pqcrypto ecosystem. Its latest version is 0.3.5 (published 2023-10-16), and the project explicitly states no security claims are made as it distributes reference implementations suitable for experimental use only. The GitHub security overview confirms no published advisories. Related crates like pqcrypto-hqc have advisories (e.g., CVE-2024-54137), but none reference pqcrypto-traits. RustSec and CVE databases show no entries for this crate.
Citations:
- 1: https://docs.rs/crate/pqcrypto-traits/latest
- 2: https://github.com/rustpq/pqcrypto
- 3: https://docs.rs/crate/pqcrypto-traits/^0.3.0
- 4: https://crates.io/crates/pqcrypto-traits
- 5: GHSA-753p-wrj5-g8fj
- 6: https://app.opencve.io/cve/?vendor=rustcrypto
- 7: https://docs.rs/crate/pqcrypto-traits/0.3.5
- 8: https://github.com/rustpq/pqcrypto/security
🌐 Web query:
RustSec advisory pqcrypto-kyber pqcrypto-traits
💡 Result:
There is a RustSec advisory for the pqcrypto-kyber crate: RUSTSEC-2024-0381, issued November 10, 2024. It is an INFO-level advisory stating that the crate is unmaintained and has been replaced by pqcrypto-mlkem, which provides a FIPS 203-compatible implementation of ML-KEM (the standardized version of Kyber). No security vulnerability is reported; it is purely a deprecation notice due to NIST standardization renaming Kyber to ML-KEM.

There are no RustSec advisories for pqcrypto-traits. It is a shared traits crate (latest version 0.3.5, last updated October 2023) used by various pqcrypto-* crates, including pqcrypto-kyber. It has no reported vulnerabilities or deprecation notices. It remains actively depended upon (74 reverse dependencies) and is part of the ongoing pqcrypto ecosystem.

Note: There is a separate vulnerability advisory (RUSTSEC-2023-0079) for a different crate, pqc_kyber (not pqcrypto-kyber), involving a timing side-channel (KyberSlash), but it is unrelated to the queried crates.
Citations:
- 1: https://rustsec.org/advisories/RUSTSEC-2024-0381.html
- 2: https://rustsec.org/packages/pqcrypto-kyber.html
- 3: https://crates.io/crates/pqcrypto-traits
- 4: https://rustsec.org/advisories/RUSTSEC-2024-0380.html
- 5: https://docs.rs/crate/pqcrypto-traits/latest
🏁 Script executed:
# Check if there are any RustSec advisories in the repo
find . -name "Cargo.lock" -o -name "advisory-db" | head -5

Repository: iberi22/synapse-protocol
Length of output: 50
🏁 Script executed:
# Search for how these dependencies are actually used
rg "pqcrypto" -t rust -B 2 -A 2 | head -50

Repository: iberi22/synapse-protocol
Length of output: 1184
Upgrade pqcrypto-kyber from 0.8 to 0.8.1 or later (preferably pqcrypto-mlkem).
The current version (0.8) is vulnerable to KyberSlash timing side-channel attacks that enable private key recovery. Since these dependencies are used in quantum_wallet.rs and encryption_adapter.rs for key encapsulation, this must be patched. Version 0.8.1 fixes the vulnerability, but the crate has been deprecated in favor of pqcrypto-mlkem (FIPS 203 ML-KEM) as of October 2024. Consider the migration path during this release cycle.
pqcrypto-traits (0.3) has no known vulnerabilities and requires no changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/synapse-infra/Cargo.toml` around lines 54 - 55, Update the vulnerable
pqcrypto-kyber dependency: in Cargo.toml replace pqcrypto-kyber = { workspace =
true } with a secure version (pqcrypto-kyber = "0.8.1" or later) or preferably
migrate to pqcrypto-mlkem (the new FIPS 203 ML-KEM crate); then audit and adapt
usages in quantum_wallet.rs and encryption_adapter.rs to the new crate API/names
(adjust imports and any KEM/KEM-API calls) so key encapsulation uses the
patched/migrated crate. Ensure pqcrypto-traits remains unchanged.
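For reference, the minimal dependency change the review asks for could look like the fragment below. The exact location of the `[workspace.dependencies]` table in this repository is an assumption; the version numbers come from the advisory findings above.

```toml
# Workspace-level dependency pins (table location assumed).
# 0.8.1 picks up PQClean's KyberSlash fix; earlier 0.8.x is vulnerable.
pqcrypto-kyber = "0.8.1"
pqcrypto-traits = "0.3"   # no known advisories; unchanged

# Longer term, the review recommends migrating to the maintained
# FIPS 203 crate, pqcrypto-mlkem, which replaces pqcrypto-kyber.
```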
The sync-issues.yml workflow was failing with GH013 because it tried to push directly to main, but branch protection requires all changes to go through a PR.
Changes
Fixes
GH013: Repository rule violations