Conversation

@Serhii-Braavos (Collaborator)

No description provided.

{
let state = state.clone();
let period = Duration::from_secs(config.poll_secs);
tokio::spawn(async move {


It seems that when the main-loop preparation process fails, the program stalls and doesn't exit.

Collaborator Author:

Added a panic so the process exits instead of stalling.
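The fail-fast behavior discussed here can be sketched roughly as follows; `prepare_main_loop` and its signature are hypothetical stand-ins, not the PR's actual code. The point is that `expect` turns a preparation error into a panic, so the process exits rather than stalling silently.

```rust
// Hypothetical sketch (not the PR's actual code): if main-loop preparation
// fails, panic so the process exits instead of stalling silently.
fn prepare_main_loop(db_available: bool) -> Result<u64, String> {
    // Stand-in for the real preparation step (e.g. opening storage).
    if db_available { Ok(42) } else { Err("db connection failed".into()) }
}

fn main() {
    // Happy path: preparation succeeds and the poll loop can start.
    let state = prepare_main_loop(true).expect("main loop preparation failed");
    println!("starting poll loop with state {state}");
    // On the failure path, `expect` would panic and the process would exit:
    // prepare_main_loop(false).expect("main loop preparation failed");
}
```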

}

impl SqlStorageProvider {
pub async fn new(database_url: &str) -> Result<Self> {


Not sure, but perhaps these functions need more robust error handling so the main loop won't quit due to a DB error?

Collaborator Author:

The DB is essential for the light client (LC) to work; if it fails, the client should not start.

///
/// L1 range
///
async fn read_l1_range(&self, block_number: i64) -> Result<L1Range> {


Isn't it possible to have two values returned here?

Collaborator Author:

The ranges are overwritten when we find a new one, so we'll have [A; B], [B; C], [C; D], etc.

This code can potentially return two rows if the value is on the edge, but values on the edge are already L2-synced, and we don't need L1 ranges for them.
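The contiguous-range layout the author describes can be sketched like this; the `L1Range` struct and `matching_ranges` lookup below are illustrative, not the PR's actual code. Because consecutive ranges share an endpoint, a query for an edge value matches two rows, but as noted, edge blocks are already covered by L2 sync.

```rust
// Illustrative sketch of contiguous L1 ranges [A;B], [B;C], [C;D]; the
// struct and lookup below are hypothetical, not the PR's actual code.
#[derive(Debug, PartialEq)]
struct L1Range {
    start: i64,
    end: i64,
}

// Return every stored range containing `block_number` (inclusive ends).
fn matching_ranges(ranges: &[L1Range], block_number: i64) -> Vec<&L1Range> {
    ranges
        .iter()
        .filter(|r| r.start <= block_number && block_number <= r.end)
        .collect()
}

fn main() {
    let ranges = vec![
        L1Range { start: 100, end: 200 },
        L1Range { start: 200, end: 300 },
    ];
    // Interior value: exactly one matching row.
    assert_eq!(matching_ranges(&ranges, 150).len(), 1);
    // Edge value: two rows match, but edge blocks are already L2-synced,
    // so their L1 range is never needed.
    assert_eq!(matching_ranges(&ranges, 200).len(), 2);
}
```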

self.storage().write_l1_ranges(&new_l1_ranges).await?;

// Fetch the starting state for the minimal range. This will always be present, otherwise we received invalid data from L1.
let end_state_from_storage =


Why not fetch this data before the loop above? Is there anything in that loop that can update the state in storage?

Collaborator Author:

We need to know the range first; the comment was outdated.

tracing::info!(state=?current, "updated");
}
let _guard = async_blocker.block_tasks();
match execute_sync(


Can't this part be in the background worker as well?

Collaborator Author:

The background worker is a low-priority historical sync. This is where we sync the head of the chain; it determines which block the light client sees as the 'latest'.

///
/// # Panics
/// Panics if the stored L2 block hash does not match the newly fetched L1 state.
async fn execute_sync(


From what I can tell, it seems the syncing process misses the first range 1000000-1000056. Do you see this as well?

Collaborator Author:

Blocks 1000000 and 1000056 are added as default data; if you don't request the blocks in between, they will not be synced.

let beerus = Arc::new(Client::new(&config.client, http, storage).await?);
let server =
beerus::rpc::Server::new(beerus.clone(), async_blocker.clone());
let background_loader =


Won't the two background processes here interfere with each other? It seems they are using different instances of the asynchronous blocker (because it is cloned below)?

Collaborator Author:

It is Arc::clone, which points to the same instance.
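A minimal sketch of the point being made, using an `AtomicUsize` as a stand-in for the real async blocker (which is not shown here): `Arc::clone` only copies the handle, so both background processes observe the same underlying instance.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

fn main() {
    // Stand-in for the async blocker: Arc::clone only copies the handle,
    // so both background processes share the same underlying instance.
    let blocker = Arc::new(AtomicUsize::new(0));
    let clone = Arc::clone(&blocker);

    clone.fetch_add(1, Ordering::SeqCst); // "block" via one handle
    assert_eq!(blocker.load(Ordering::SeqCst), 1); // visible via the other
    assert!(Arc::ptr_eq(&blocker, &clone)); // same allocation
}
```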

let stored_state =
beerus.storage().read_state(l1_state.block_number).await?;

assert_eq!(


Is this lack of consistency really that uncommon? Isn't there always some difference between what we have on L1 and the latest block on L2? Won't this exit the entire process?

Collaborator Author:

This is fundamental: the blocks that we sync on L2 will have commitments on L1 after some time.
If we find that a block we processed doesn't match its commitment on L1, we have an invalid chain in the LC.
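The invariant the author describes can be sketched as follows; `check_commitment` and the toy 4-byte hashes are hypothetical, not the PR's code. The locally synced L2 block hash must equal the commitment later posted on L1, and a mismatch means the light client holds an invalid chain, which justifies the panic.

```rust
// Illustrative consistency check (hypothetical names): the locally synced
// L2 block hash must equal the commitment later posted on L1; a mismatch
// means the light client holds an invalid chain, so we panic.
fn check_commitment(stored_l2_hash: [u8; 4], l1_committed_hash: [u8; 4]) {
    assert_eq!(
        stored_l2_hash, l1_committed_hash,
        "stored L2 block hash does not match its L1 commitment"
    );
}

fn main() {
    // Matching hashes: the chain is consistent and sync continues.
    check_commitment([0xde, 0xad, 0xbe, 0xef], [0xde, 0xad, 0xbe, 0xef]);
    // A mismatch would panic here and stop the light client:
    // check_commitment([0; 4], [1; 4]);
}
```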

///
/// # Panics
/// Panics if the stored L2 block hash does not match the newly fetched L1 state.
async fn execute_sync(


Is it implied here that gateway_state.block_number == verified_state.block_number?

Collaborator Author:

Yes, they are updated simultaneously.

src/client.rs (outdated)
/// # Errors
/// Returns an error if any RPC or data conversion fails, the parent hash does not match (if provided),
/// the Starknet version is unsupported, or validation of the block hash relationship fails.
pub async fn get_verified_state(


Since this function also updates storage, maybe it should have a name that implies it's not just a getter?

Collaborator Author:

Fixed.

let block_header_copy = block_header.clone();
// Validate in a blocking thread since it may be CPU-heavy
tokio::task::spawn_blocking(move || {
validate_block_hash_from_header(


Is there any reason to have this validation, given that both parameters are extracted from block_header, which we got from the RPC? Are we checking if the math is correct?

Collaborator Author:

We are validating proofs against the 'root', which is part of the block hash.
Block hashes are needed to validate the chain (because the previous block's hash is part of the calculation).
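The chain-linkage idea can be sketched like this; `DefaultHasher` here is only a stand-in for the real Starknet block-hash function, and the `Header` fields are illustrative. Because each block's hash covers its parent hash and its state root, proofs validate against the root committed inside the hash, and recomputing hashes links the blocks together.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative chain-linkage sketch; DefaultHasher stands in for the real
// Starknet block-hash function. Each block's hash covers its parent hash
// and its state root, so validating hashes validates the whole chain.
#[derive(Hash)]
struct Header {
    number: u64,
    parent_hash: u64,
    state_root: u64,
}

fn block_hash(h: &Header) -> u64 {
    let mut hasher = DefaultHasher::new();
    h.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let genesis = Header { number: 0, parent_hash: 0, state_root: 111 };
    let child = Header {
        number: 1,
        parent_hash: block_hash(&genesis),
        state_root: 222,
    };
    // The child's stored parent_hash matches the recomputed genesis hash.
    assert_eq!(child.parent_hash, block_hash(&genesis));
    // Tampering with the parent's state root changes its hash, breaking
    // the link that the child's parent_hash commits to.
    let tampered = Header { number: 0, parent_hash: 0, state_root: 999 };
    assert_ne!(child.parent_hash, block_hash(&tampered));
}
```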
