Distributing burningman data to Bisq 2 in a resilient way #3665
HenrikJannsen started this conversation in Ideas
I implemented that retention policy:
We need the Burningman addresses and receiver shares for the MuSig fee payment and the delayed payout transaction (redirect tx in MuSig).
The fee payment involves smaller amounts, and it is not that critical if that data is missing or incorrect, as the damage is limited.
For the delayed payout transaction (DPT), though, missing or incorrect data has severe security implications.
The (currently) 2 oracle nodes request this data from Bisq 1 and publish it to the Bisq 2 network. The data is signed, and oracles are bonded roles. As long as oracle operators are not malicious or hacked, that part is safe.
If oracle nodes were offline for an extended time, that should not become a single point of failure for trades.
So we want:
We publish the BM data only at snapshot heights (as in Bisq 1 for the DPT use case).
The block height is the last mod(10) height from the range of the last 10-20 blocks (139 -> 120; 140 -> 130; 141 -> 130).
This ensures that we do not need to deal with re-orgs, and we can be sure that both traders have the same snapshot height even if one has recently received a new block.
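A minimal sketch of that snapshot-height rule (class and method names are hypothetical, not from the Bisq codebase):

```java
public class SnapshotHeight {
    // The snapshot height is the most recent multiple of 10 that is at
    // least 10 blocks below the current chain height (i.e. 10-20 blocks back).
    static int getSnapshotHeight(int chainHeight) {
        return ((chainHeight - 10) / 10) * 10;
    }

    public static void main(String[] args) {
        // Examples from the text: 139 -> 120; 140 -> 130; 141 -> 130
        System.out.println(getSnapshotHeight(139)); // 120
        System.out.println(getSnapshotHeight(140)); // 130
        System.out.println(getSnapshotHeight(141)); // 130
    }
}
```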
We still might have the issue that the traders have a different view of the network data, so one might have a more recent snapshot block. To resolve that, we can exchange the past x blocks in the trade protocol, and the peers take the most recent block they have in common.
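Picking the most recent common snapshot could look like this (a sketch under the assumption that each trader sends its recent snapshot heights ordered most recent first; names are hypothetical):

```java
import java.util.List;

public class CommonSnapshot {
    // Given each trader's recent snapshot heights (most recent first),
    // return the most recent height both have in common.
    static int mostRecentCommon(List<Integer> mine, List<Integer> peers) {
        for (int height : mine) {
            if (peers.contains(height)) {
                return height;
            }
        }
        throw new IllegalStateException("No common snapshot height found");
    }
}
```

For example, if the peer is one snapshot behind (it has not yet seen the block that produced snapshot 140), both sides agree on 130.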
We will have about 14 such blocks per day, as there is a new snapshot every 10 blocks (144 blocks per day at a 10-minute average block time).
To achieve both goals stated above, we can use a Time Machine-like backup strategy for that data, thinning out the data kept in the P2P network as the data ages. Thus we will not use the simple TTL strategy alone, but let the oracle nodes delete old data according to that strategy and use a long TTL to cover the oldest data to be kept.
Suggestion for deletion algorithm:
The TTL is 100 days, so older data is not present.
That way we have:
14 blocks at a 100-minute interval (the 10-block snapshots)
9 blocks at a 1-day interval
9 blocks at a 10-day interval
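The thinning rule implied by that list could be sketched as follows (the exact age boundaries, a day approximated as 14 snapshots, and all names are assumptions of this sketch, not a spec):

```java
public class RetentionPolicy {
    static final int SNAPSHOT_INTERVAL = 10;  // one snapshot per 10 blocks
    static final int SNAPSHOTS_PER_DAY = 14;  // ~144 blocks/day / 10
    static final int BLOCKS_PER_DAY = 144;

    // Decide whether an oracle node keeps a given snapshot,
    // based on its age relative to the current chain height.
    static boolean shouldKeep(int snapshotHeight, int chainHeight) {
        int ageInBlocks = chainHeight - snapshotHeight;
        int index = snapshotHeight / SNAPSHOT_INTERVAL;
        if (ageInBlocks < BLOCKS_PER_DAY) {
            return true;                                  // keep all, last ~day
        } else if (ageInBlocks < 10 * BLOCKS_PER_DAY) {
            return index % SNAPSHOTS_PER_DAY == 0;        // keep ~one per day
        } else if (ageInBlocks < 100 * BLOCKS_PER_DAY) {
            return index % (10 * SNAPSHOTS_PER_DAY) == 0; // ~one per 10 days
        }
        return false;                                     // beyond the 100-day TTL
    }
}
```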
We can use that data to verify newly received blocks from the oracle, checking whether they are similar to the past blocks up to a certain tolerance level.
In case oracle nodes are offline for an extended time, we still have up to 100 days of old data which traders can use.
The similarity check could be done by creating a metric for each detected change:
e.g. if the address is the same but the receiver share differs by 1%, add a change factor of the old receiver share * 0.01 (the receiver share is already a fractional value, and all shares add up to 1).
If a new address is added, add the new receiver share.
If an old address is removed, add that address's receiver share from the previous block.
This should ensure that bigger impacts are reflected in the change factor. We can look at common change factors in our historical data and then decide on a tolerance level.
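The three rules above can be sketched as a small metric over two address-to-share maps (names are hypothetical; note that "old share * relative difference" reduces to the absolute share difference, and removed/added addresses contribute their full old/new share):

```java
import java.util.Map;

public class ChangeMetric {
    // Change factor between the previous and new receiver-share maps
    // (address -> share; shares are fractions summing to 1).
    static double changeFactor(Map<String, Double> oldShares,
                               Map<String, Double> newShares) {
        double change = 0;
        for (Map.Entry<String, Double> e : oldShares.entrySet()) {
            Double newShare = newShares.get(e.getKey());
            if (newShare == null) {
                change += e.getValue();                      // removed: add old share
            } else {
                // oldShare * (|new - old| / oldShare) == |new - old|
                change += Math.abs(newShare - e.getValue()); // share changed
            }
        }
        for (Map.Entry<String, Double> e : newShares.entrySet()) {
            if (!oldShares.containsKey(e.getKey())) {
                change += e.getValue();                      // added: add new share
            }
        }
        return change;
    }
}
```

For example, if A moves from 0.50 to 0.51, B stays at 0.30, C (0.20) is removed, and D (0.19) is added, the change factor is 0.01 + 0.20 + 0.19 = 0.40.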