
Conversation

@ddomingu (Contributor) commented Jan 3, 2025

…erf-scale module

Type of change

  • Refactor
  • New feature
  • [X] Bug fix
  • Optimization
  • Documentation Update

Description

With the current code, running more than one uperf pair in the uperf-scale module does not work correctly: the client port is always set to 30000 instead of being adjusted to match the port assigned to the corresponding server pair. Below is an example of the output with the current code:

$ for server in $(oc -n benchmark-operator get pods -o name | grep server); do \
    echo "Uperf server: ${server}"; \
    oc -n benchmark-operator exec -it ${server} -- ps aux; \
done
Uperf server: pod/uperf-scale-server-performance-worker-0.ddomin-0-78c714d7-kjspb
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0  26700  5376 ?        Ss   12:29   0:00 uperf -s -v -P 30000
Uperf server: pod/uperf-scale-server-performance-worker-0.ddomin-1-78c714d7-rvgxb
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0  26700  5248 ?        Ss   12:29   0:00 uperf -s -v -P 30100
...

Similarly, when inspecting the clients, the client ports are incorrectly set to 30000 for all pairs, as shown below:

$ for client in $(oc -n benchmark-operator get pods -o name | grep client); do \
    echo "Uperf client: ${client}"; \
    oc -n benchmark-operator exec -it ${client} -- ps aux; \
done
Uperf client: pod/uperf-scale-client-10.133.0.82-78c714d7-2w4xm
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
1000740+       1  0.5  0.0  11928  2944 ?        Ss   12:29   0:00 /bin/sh -c export h=10.133.0.82; export port=30000 export colocate=False; ...
Uperf client: pod/uperf-scale-client-10.133.0.83-78c714d7-lmk6q
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
1000740+       1  0.5  0.0  11928  2944 ?        Ss   12:29   0:00 /bin/sh -c export h=10.133.0.83; export port=30000 export colocate=False; ...
...

After Applying the Fix
With the fix implemented, the port configuration for the uperf clients now matches the server pairs correctly. Below is the updated output for the server and client configurations:

$ for server in $(oc -n benchmark-operator get pods -o name | grep server); do \
    echo "Uperf server: ${server}"; \
    oc -n benchmark-operator exec -it ${server} -- ps aux; \
done
Uperf server: pod/uperf-scale-server-performance-worker-0.ddomin-0-485dc5c3-zdmrc
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0  26700  5376 ?        Ss   12:37   0:00 uperf -s -v -P 30000
Uperf server: pod/uperf-scale-server-performance-worker-0.ddomin-1-485dc5c3-9ck4f
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0  26700  5248 ?        Ss   12:37   0:00 uperf -s -v -P 30100
...

Example updated client output:

$ for client in $(oc -n benchmark-operator get pods -o name | grep client); do \
    echo "Uperf client: ${client}"; \
    oc -n benchmark-operator exec -it ${client} -- ps aux; \
done
Uperf client: pod/uperf-scale-client-10.133.0.91-485dc5c3-jvp2l
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
1000720+       1  0.4  0.0  11928  2944 ?        Ss   12:38   0:00 /bin/sh -c export h=10.133.0.91; export port=30000 export colocate=False; ...
Uperf client: pod/uperf-scale-client-10.133.0.92-485dc5c3-hn6nl
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
1000720+       1  0.4  0.0  11928  2944 ?        Ss   12:38   0:00 /bin/sh -c export h=10.133.0.92; export port=30100 export colocate=False; ...
...

By ensuring the client ports are correctly configured, uperf pairs now function as expected in the uperf-scale module, supporting dynamic port allocation for each server-client pair.
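
For reference, below is a minimal sketch of the per-pair port scheme implied by the output above (base port 30000 with a stride of 100 per pair). The base and stride values are inferred from the observed server output, not taken from the template code itself:

# Hypothetical illustration of the per-pair port allocation observed above:
# base port 30000, plus 100 for each additional uperf pair.
base_port=30000
stride=100
pairs=2   # example value; set to the number of uperf pairs in the benchmark

for pair in $(seq 0 $((pairs - 1))); do
  port=$((base_port + pair * stride))
  echo "pair ${pair}: server listens on ${port}, client must use port=${port}"
done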

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • [X] I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please describe the System Under Test.
  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results. (A possible verification sketch follows below.)
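
One possible way to verify the result (a sketch, assuming the same namespace and pod naming used in the description) is to print each server's listening port next to each client's exported port and confirm they match pairwise:

# Sketch: compare server listen ports (-P <port>) with client exported ports.
echo "Server ports:"
for server in $(oc -n benchmark-operator get pods -o name | grep server); do
  oc -n benchmark-operator exec ${server} -- ps aux | grep -o -e '-P [0-9]*'
done

echo "Client ports:"
for client in $(oc -n benchmark-operator get pods -o name | grep client); do
  oc -n benchmark-operator exec ${client} -- ps aux | grep -o 'port=[0-9]*'
done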

@jtaleric (Member) commented Feb 3, 2025

Do we need to open new ports for this to work?

@ddomingu (Contributor, Author) commented Feb 3, 2025

No, it works out of the box. The issue is that while the server port is calculated correctly, the client port is hard-coded to 30000, which causes the problem. Once this fix is applied, everything works as expected without any additional requirements.

@jtaleric (Member) commented Feb 3, 2025

hm.. I am curious about other platforms like AWS.

@ddomingu (Contributor, Author) commented Feb 5, 2025

That would be the case when using host networking, yes. You'd need to make sure the right ports are open between nodes for the connection to work. But this isn't specific to this change; with host networking, you always have to ensure that traffic between nodes is allowed on the required ports.
If you're using OVN-Kubernetes or Multus networks, only network policies apply, and by default communication within the same namespace is open. So everything should work out of the box unless you're doing something more advanced with network policies, or dealing with L3 networks with firewalls in between.
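
For the host-networking case specifically, a quick reachability sketch (hypothetical pod name; the address and port are taken from the example output above only as an illustration; assumes bash is available in the client image) can confirm that a client can actually open a TCP connection to its pair's server port:

# Hypothetical values: replace with a real client pod name, server address and per-pair port.
server_ip=10.133.0.82
port=30100

# Prints "open" only if a TCP connection to the uperf server port can be established.
oc -n benchmark-operator exec <client-pod> -- \
  bash -c "timeout 2 bash -c '</dev/tcp/${server_ip}/${port}' && echo open || echo blocked"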

@rsevilla87 self-requested a review May 28, 2025 10:48

@rsevilla87 (Member) left a comment

lgtm

@rsevilla87 merged commit 952f17a into cloud-bulldozer:master May 28, 2025
2 checks passed
