
Feature/multi upstream #100

Open
bansalayush247 wants to merge 8 commits into dmnd-pool:master from bansalayush247:feature/multi-upstream

Conversation


bansalayush247 commented Jun 25, 2025

Multi-Upstream Dynamic Load Balancing
🎯 Overview
Implements intelligent hashrate distribution across multiple mining pools with real-time load balancing and disconnect tracking.

✨ Key Features
Configurable pool distribution (e.g., 70%/30% split)
Dynamic miner assignment based on current vs target ratios
Proper disconnect tracking with accurate pool count updates
Real-time rebalancing when miners connect/disconnect

📸 Load Balancing Flow

  1. Initial Connections - miners are dynamically assigned to pools based on the target distribution (70%/30%)
  2. Disconnect Handling - disconnects are tracked, pool counts are updated, and miners are rebalanced
  3. Rebalancing - new miners are assigned to under-target pools for optimal distribution

Key Files
mod.rs - Load balancing logic
downstream - Connection handling
config.rs - Multi-pool configuration

📊 Example Flow

Target: Pool 0 (70%), Pool 1 (30%)
Initial: Pool 0: 2/4 (50%), Pool 1: 2/4 (50%)
Action: Next miner → Pool 0 (under-target)
Result: Pool 0: 3/5 (60%), Pool 1: 2/5 (40%)
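
A minimal sketch of the selection rule in that flow, assuming plain slices for the target percentages and the current per-pool counts (select_pool_index is an illustrative name, not the code in this PR):

```rust
/// Pick the pool whose current share of miners is furthest below its target weight.
/// `targets` are percentages (e.g. [70, 30]); `counts` are current miners per pool.
fn select_pool_index(targets: &[u32], counts: &[u32]) -> usize {
    let total: u32 = counts.iter().sum();
    // How far pool `i` currently sits above (+) or below (-) its target, in percentage points.
    let deviation = |i: usize| {
        if total == 0 {
            0.0
        } else {
            counts[i] as f64 / total as f64 * 100.0 - targets[i] as f64
        }
    };
    (0..targets.len())
        .min_by(|&a, &b| {
            deviation(a)
                .partial_cmp(&deviation(b))
                .unwrap_or(std::cmp::Ordering::Equal)
        })
        .unwrap_or(0)
}

fn main() {
    // The example above: targets [70, 30], counts [2, 2] -> pool 0 is most under target.
    assert_eq!(select_pool_index(&[70, 30], &[2, 2]), 0);
}
```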

✅ Testing Results
Multi-pool connection and assignment
Disconnect tracking and count updates
Dynamic rebalancing after changes

Comment thread src/config.rs Outdated
|| config.auto_update.unwrap_or(true)
|| std::env::var("AUTO_UPDATE").is_ok();

let hashrate_distribution = args.hashrate_distribution.or(config.hashrate_distribution);
Collaborator


Also support setting hashrate_distribution via std::env::var, like the rest of the variables.
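
For example, following the env-var pattern used for AUTO_UPDATE just above, the override could look roughly like this (the HASHRATE_DISTRIBUTION variable name, the "70,30" format, and the Option<Vec<u32>> field type are assumptions for illustration):

```rust
// Sketch: CLI arg takes precedence, then the environment, then the config file.
let hashrate_distribution: Option<Vec<u32>> = args
    .hashrate_distribution
    .or_else(|| {
        std::env::var("HASHRATE_DISTRIBUTION").ok().map(|raw| {
            // e.g. HASHRATE_DISTRIBUTION="70,30" -> vec![70, 30]
            raw.split(',')
                .filter_map(|part| part.trim().parse::<u32>().ok())
                .collect()
        })
    })
    .or(config.hashrate_distribution);
```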

Comment thread src/router/mod.rs Outdated
};

/// Router handles connection to Multiple upstreams.
use std::sync::{Arc, Mutex}; // Use std::sync::Mutex instead
Collaborator


  • Nope. Use roles_logic_sv2::utils::Mutex.
  • Remove all unnecessary comments.
  • Don't change or remove existing code or comments that don't interfere with your changes; for example, the doc comment you removed: /// Router handles connection to Multiple upstreams.
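
A minimal sketch of the swap the reviewer asks for, assuming the safe_lock accessor exposed by roles_logic_sv2::utils::Mutex (the weighted_dist field itself is this PR's own type):

```rust
use roles_logic_sv2::utils::Mutex;
use std::sync::Arc;

/// Clone the current distribution using safe_lock instead of std::sync::Mutex::lock.
/// safe_lock runs the closure while the lock is held and returns a Result,
/// so a poisoned lock surfaces as an error rather than a panic.
fn snapshot_distribution(weighted_dist: &Arc<Mutex<Vec<u32>>>) -> Option<Vec<u32>> {
    weighted_dist.safe_lock(|dist| dist.clone()).ok()
}
```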

Comment thread src/router/mod.rs Outdated
latency_tx: watch::Sender<Option<Duration>>,
pub latency_rx: watch::Receiver<Option<Duration>>,
pool_configs: Option<Vec<PoolConfig>>,
pub weighted_dist: Arc<Mutex<Vec<u32>>>,
Collaborator


  • Consider using an Atomic instead of a Mutex; it is faster and still thread-safe.
  • Also, weighted_dist should be Optional: None in single-upstream mode (when pool_configs is None or has only one pool) instead of an empty vec.
  • Find a more suitable name for pool_configs.
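
One way to read the first two points together, with illustrative type and field names: keep the configured weights behind an Option so single-upstream mode carries no extra state, and track the per-pool miner counts with atomics instead of a lock:

```rust
use std::sync::{
    atomic::{AtomicU32, Ordering},
    Arc,
};

struct UpstreamDistribution {
    /// None in single-upstream mode; Some(weights) only when several pools are configured.
    target_weights: Option<Vec<u32>>,
    /// Current miner count per pool, updated lock-free on connect/disconnect.
    miner_counts: Arc<Vec<AtomicU32>>,
}

impl UpstreamDistribution {
    fn on_miner_connected(&self, pool_idx: usize) {
        self.miner_counts[pool_idx].fetch_add(1, Ordering::Relaxed);
    }

    fn on_miner_disconnected(&self, pool_idx: usize) {
        self.miner_counts[pool_idx].fetch_sub(1, Ordering::Relaxed);
    }
}
```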

Comment thread src/router/mod.rs Outdated
info!("Router::new - pool_addresses: {:?}", pool_addresses);

let pool_configs = Configuration::pool_configs();
info!("Router::new - pool_configs: {:?}", pool_configs);
Collaborator


I don't think this log is necessary

Comment thread src/main.rs Outdated
let epsilon = Duration::from_millis(30_000);
let best_upstream = router.select_pool_connect().await;
initialize_proxy(&mut router, best_upstream, epsilon).await;
//let best_upstream = router.select_pool_connect().await;
Collaborator


Remove it if it's not needed instead of commenting it out.

Comment thread src/main.rs Outdated
continue;
}

// Use the first successful connection for the main flow (backward compatibility)
Collaborator


  • Using only the first successful pool connection and ignoring the rest defeats the purpose of having multiple upstreams, because all the work done on hashrate division and routing is wasted if we connect to just one pool. Instead, start all the necessary components (translator, share_accounter, etc.) for each successful connection.
  • Avoid the use of unwrap, as we almost never want the proxy to panic; instead, log the error and restart the proxy if there is no other way to handle it.
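
A rough sketch of the shape described in the first point, spawning the full downstream pipeline once per successful pool connection instead of keeping only the first one (successful_connections and start_pipeline are placeholders for whatever the proxy actually wires up, not existing items in this codebase):

```rust
// Hypothetical: one task per upstream; errors are logged instead of unwrapped.
let mut pipeline_handles = Vec::new();
for (pool_addr, connection) in successful_connections {
    pipeline_handles.push(tokio::spawn(async move {
        // Start translator, share_accounter, etc. for this pool.
        if let Err(e) = start_pipeline(pool_addr, connection).await {
            tracing::error!("pipeline for pool {pool_addr} failed: {e}");
        }
    }));
}
```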

Comment thread src/main.rs Outdated
@@ -228,12 +275,10 @@ async fn initialize_proxy(
match monitor(router, abort_handles, epsilon, server_handle).await {
Reconnect::NewUpstream(new_pool_addr) => {
Collaborator


Use the new_pool_addr returned by monitor to update pool_addr in single-upstream mode for the next loop iteration, so the proxy reconnects to a better upstream pool, since monitoring is disabled for multi-upstream for now.
Introducing multi-upstream should not change how single-upstream already works.
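
In other words, something along these lines in the single-upstream reconnect loop, reusing the names from the snippet above (a sketch, not the exact code):

```rust
match monitor(router, abort_handles, epsilon, server_handle).await {
    Reconnect::NewUpstream(new_pool_addr) => {
        // Feed the better upstream chosen by monitor back into the next iteration.
        pool_addr = new_pool_addr;
        continue;
    }
    // other Reconnect variants handled as before
    _ => {}
}
```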

…ogging

- Enhance configuration handling for hashrate distribution and improve router's multi-upstream support
- Refactor logging and improve share validation
- Clean up formatting and remove redundant code in main and downstream modules
- Fix formatting in test module by removing unnecessary parameter
- Update downstream struct passing value
- Remove unused Mutex import and simplify weighted distribution initialization in Router
- Fix formatting and improve logging messages in main and router modules; update TODO comments in notify module
- Refactor imports in router module for improved organization and readability
- Enhance proxy initialization and downstream handling for multi-upstream mode; improve logging format in router
- Enhance logging by including pool address in various components
bansalayush247 force-pushed the feature/multi-upstream branch 2 times, most recently from 1953eba to 13d994c on July 23, 2025 at 13:26
Collaborator

@Priceless-P left a comment


Looking good!

Add logs so it's clear which pool is sending/receiving which message.
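
For instance, tagging each log line with the upstream address (a sketch using the tracing macros the proxy already logs with; the helper name and fields are illustrative):

```rust
use std::net::SocketAddr;
use tracing::info;

/// Produces lines like "[203.0.113.7:3333] -> SubmitSharesExtended".
fn log_pool_message(pool_addr: SocketAddr, direction: &str, msg_type: &str) {
    info!("[{pool_addr}] {direction} {msg_type}");
}
```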

- Updated various logging statements across the codebase to include the pool address, enhancing the context of log messages.
- Cleaned up formatting and spacing in several functions for improved readability.
- Removed unnecessary parameters and streamlined error handling in the proxy and translator modules.
- Ensured consistent logging practices in the downstream and upstream components.

Resolved merge conflicts
bansalayush247 force-pushed the feature/multi-upstream branch from ada3f5a to ab78ebf on August 5, 2025 at 09:52
bansalayush247 force-pushed the feature/multi-upstream branch from f7df7df to b88bd66 on August 23, 2025 at 12:31
@bansalayush247
Author

Hi @Priceless-P,

I’ve addressed all the requested changes:

  • Refactored multi-upstream to properly handle parallel connections instead of relying on a single pool
  • Removed all unwraps and added safe error handling
  • Updated Mutex usage and improved concurrency handling
  • Made weighted distribution optional for single-upstream mode
  • Cleaned up logs and removed commented code
  • Added env support for hashrate_distribution
  • Ensured single-upstream behavior remains unchanged

Could you please take another look when you have time?
Let me know if anything else needs improvement.

Thanks!
