README clarification #52
-
Based on the benchmark table in your README (Tool / Time / Speed vs Surge): is this true for surge get in the CLI? surge's TUI seems much faster, though that might be because the TUI shows a progress bar and live stats, so some of the speed-up could just be perceived.
Replies: 14 comments 2 replies
-
Hi! Yes, the CLI, TUI, and the headless instance all use the same backend, so the actual performance is identical across all of them. The difference you're seeing is likely just perceived speed from how the TUI renders the progress bars and live stats. If you want to see the numbers, you can check the latest runs of our benchmark CI in GitHub Actions (it uses the get command); the speed results there should match what's in the README.
-
Thanks for the clarification :D I would like to contribute benchmark tests on Wayland EndeavourOS (Arch Linux based). Would that be fine with you?
-
That would be awesome!
-
Thanks for letting me know. Hmm, I realised that the choice of Wayland or X11 should not really affect overall download speed. As you mentioned, there are the factors of CDN cache, network latency, and one's hardware specs. In general surge seems to be much faster (under a minute). For more benchmarking features you could consider using hyperfine (https://github.com/sharkdp/hyperfine), though that being said the Python script is fine. Not sure if wget, curl, or aria2c have chunking capabilities; a comparison with max settings (without triggering "too many requests" errors) could be good.
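For reference, a hyperfine comparison along the lines suggested above might look like the following. This is a sketch, not the project's actual benchmark setup: the URL and output paths are placeholders, and surge's exact flags may differ from what's shown.

```shell
# Compare the downloaders over 5 timed runs each, with one warm-up run
# so CDN/OS caches affect every tool equally. URL is a placeholder.
hyperfine --warmup 1 --runs 5 \
  -n surge  'surge get https://example.com/file.bin' \
  -n curl   'curl -sSo file.bin https://example.com/file.bin' \
  -n wget   'wget -qO file.bin https://example.com/file.bin' \
  -n aria2c 'aria2c -x 16 -s 16 -o file.bin https://example.com/file.bin' \
  --export-markdown results.md
```

The `-x`/`-s` flags raise aria2c's connections-per-server and split count from their conservative defaults, which matters for a fair chunked-download comparison.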
-
The stdout output is real nice. Would test with aria2c later. Relevant specs:
-
Yeah, wget/curl are single-threaded by default, so that probably explains the gap here. Surge just opens way more connections to saturate the bandwidth. Hyperfine is a good suggestion btw, definitely cleaner than a custom script; I might add a standard config for it later so it's easier to run these. Thanks for running this. Let me know if you get around to testing aria2c, that's the one I'm actually curious about since it also does concurrency.
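To make the "more connections" point concrete, here is a minimal sketch of how a chunked downloader splits a file into byte ranges for parallel HTTP Range requests. The helper name is hypothetical, not surge's or aria2c's actual API:

```python
# Split a file of total_size bytes into inclusive (start, end) byte ranges,
# one per connection. Each range maps to a "Range: bytes=start-end" header,
# so the chunks can be fetched concurrently and stitched back together.
def split_ranges(total_size: int, connections: int) -> list[tuple[int, int]]:
    chunk = total_size // connections
    ranges = []
    for i in range(connections):
        start = i * chunk
        # The last chunk absorbs any remainder from integer division.
        end = total_size - 1 if i == connections - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

print(split_ranges(100, 4))  # [(0, 24), (25, 49), (50, 74), (75, 99)]
```

A single-threaded client like default wget/curl effectively uses one range covering the whole file, so it can only fill one TCP connection's worth of bandwidth.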
-
|
Here you go.
I suppose concurrency plays a big role. Realised I ran benchmarks once instead of five times 🗿 , mb. |
Beta Was this translation helpful? Give feedback.
-
Considering aria2c is written in C++ and has been around for quite a while, I do not see how your project is going to beat it drastically in terms of performance metrics. That aside, even having a progress bar for the CLI and the aesthetics of the TUI make it a joy to use. aria2c defaults to a max of 5 concurrent downloads and 1 connection per server (based on the manual page); not sure how that maps to your settings. Also, the CDN cache acts as a sort of cache warm-up since we are querying the same URL frequently (when running benchmarks on the same URL multiple times).
-
Here you go.
-
Yeah, networks can be quite unpredictable!
-
Does the CI mainly use Windows? Hmm, I checked and it uses Ubuntu, but your README states Windows 11 Pro as the test environment, so I am not sure which one the 1.38x figure refers to. Not sure if this is an OS-specific difference in how your code and aria2c handle downloads; it also seems my network is getting slow.
-
Yeah, maybe running benchmark.py with -n 5 would be more appropriate.
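The reason to average over runs rather than report a single one: a lone lucky or unlucky run can easily skew a network benchmark. A minimal sketch of aggregating five timed runs the way hyperfine or a `-n 5` flag would (the times below are made up):

```python
import statistics

# Hypothetical per-run download times in seconds from five benchmark runs.
run_times = [12.1, 11.8, 12.4, 11.9, 12.0]

mean = statistics.mean(run_times)
stdev = statistics.stdev(run_times)  # sample standard deviation

# Report as "mean ± stdev", the format hyperfine uses in its summaries.
print(f"{mean:.2f} s ± {stdev:.2f} s over {len(run_times)} runs")
```

A large stdev relative to the mean is a sign that network variance, not the tool, is dominating the measurement.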
-
Not sure if this would count as valid benchmarking, but you could add some means of spawning a local HTTP server, so you get to configure the cache and other parts of it, then use the binaries to download from said server. This could complement rather than replace the existing tests.
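A minimal sketch of that local-server idea using only Python's stdlib. Everything here is illustrative: the payload size, chunk size, and rate are made-up values, and the throttle is a naive sleep-per-chunk pacer rather than a realistic network model.

```python
import http.server
import threading
import time

PAYLOAD = b"x" * 64_000  # hypothetical 64 kB test file
CHUNK = 8_000            # bytes written per send
RATE = 80_000            # crude target of bytes/second

class ThrottledHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        # Pace the response by sleeping between fixed-size chunks.
        for i in range(0, len(PAYLOAD), CHUNK):
            self.wfile.write(PAYLOAD[i:i + CHUNK])
            time.sleep(CHUNK / RATE)

    def log_message(self, *args):
        # Silence per-request logging so benchmark output stays clean.
        pass

def serve(port: int = 0) -> http.server.ThreadingHTTPServer:
    """Start the throttled server on a background thread; port 0 picks a free one."""
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), ThrottledHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve()
    print(f"serving on http://127.0.0.1:{srv.server_address[1]}/")
```

The downloader binaries could then be pointed at the printed URL instead of a CDN, removing cache and latency variance from the comparison.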
-
We actually tried testing with a local HTTP server, but ran into issues. If you don't limit the speed, the download is instant; if you do limit it (basically by making the server sleep), the data just comes down in bursts. Neither really mimics real conditions, so we thought it was safer to stick to external servers. If you know a more robust way to handle this, though, definitely let us know.