Add all-to-all benchmark #760
Conversation
cc @jakirkham @gjoseph92 you both may find this PR interesting for running networking experiments which compare Tornado, UCX, and Asyncio
This prevents asyncio.iscoroutinefunction from returning False in Python < 3.8
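For context, a minimal sketch of the kind of compatibility shim this refers to; the helper and coroutine names below are illustrative, not the actual PR code. On Python < 3.8, `asyncio.iscoroutinefunction` does not look through `functools.partial`, so a partially applied coroutine function is reported as a plain function unless the partial is unwrapped first:

```python
import asyncio
import functools

async def transfer(ep, obj):
    """Stand-in coroutine (hypothetical name, for illustration only)."""
    return obj

# Partially applying a coroutine function hides it from
# asyncio.iscoroutinefunction on Python < 3.8.
wrapped = functools.partial(transfer, None)

def is_coroutine_function(func):
    """Unwrap functools.partial layers before checking, so the result
    is consistent across Python versions."""
    while isinstance(func, functools.partial):
        func = func.func
    return asyncio.iscoroutinefunction(func)

print(is_coroutine_function(wrapped))  # True on all supported versions
```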
rerun tests
It might be interesting to try with uvloop as well. Since the event loop is largely handled by libuv in C, I would expect it to perform better than asyncio alone.
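For reference, switching a benchmark like this to uvloop is typically a one-line change; a minimal sketch (the `ping` coroutine is a placeholder for the benchmark body, and uvloop is treated as optional since it may not be installed):

```python
import asyncio

try:
    import uvloop
    # Replace the default event loop policy with libuv-backed loops.
    uvloop.install()
except ImportError:
    # uvloop is optional; fall back to the stock asyncio event loop.
    uvloop = None

async def ping():
    # Placeholder coroutine standing in for the benchmark body.
    await asyncio.sleep(0)
    return "pong"

result = asyncio.run(ping())
print(result)
```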
Thanks for the suggestion @jakirkham, I added that now.
rerun tests

rerun tests
I will be curious to see the results that come out of this. If anyone has anything preliminary that they want to share I highly encourage that :)
@@ -0,0 +1,240 @@
import argparse
Do we want this in tests/ or would it make sense to include in benchmarks/?
I'm still thinking about it. I don't want it in tests, but I do want a test there (for the UCX part only). However, a lot of the code is going to be shared, and we don't currently have a good place where that common code would be visible to both. I'm not even sure the non-UCX code should live in this repo, as we'll soon upstream it to OpenUCX, so it doesn't really make sense to keep non-UCX code there. I'm still thinking of an appropriate place for this; if you have any suggestions, please let me know.
Yeah that makes sense. Will think about it as well.
To run the benchmark in single-node:
To run each process separately, allowing multi-node as well:
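The concrete commands were not captured here; as a purely hypothetical sketch of what the two modes could look like (the script name, flags, and addresses below are invented for illustration and will not match the PR exactly):

```shell
# Single-node: one launcher spawns the server and all worker processes.
# (hypothetical script name and flags)
python all_to_all.py --n-workers 4

# Multi-node: start the server on one host, then point each worker at it.
python all_to_all.py --server &
python all_to_all.py --client --server-address <server-host>:<port>
```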