
Conversation

@mkroening mkroening commented Nov 29, 2025

We currently define ObjectInterface::read and ObjectInterface::write ourselves:

kernel/src/fd/mod.rs

Lines 205 to 215 in 4712cd2

```rust
/// Attempts to read bytes from the object referenced by the
/// descriptor into `buf`.
async fn read(&self, _buf: &mut [u8]) -> io::Result<usize> {
    Err(Errno::Nosys)
}

/// Attempts to write the bytes in `buf` to the object referenced
/// by the descriptor.
async fn write(&self, _buf: &[u8]) -> io::Result<usize> {
    Err(Errno::Nosys)
}
```

It would be useful to migrate to embedded_io_async::Read and embedded_io_async::Write instead, i.e., require ObjectInterface: Read + Write.
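For illustration, the target could look roughly like the following sketch (the exact trait surface is an assumption, not the final design):

```rust
use embedded_io_async::{Read, Write};

// Sketch only: ObjectInterface would no longer declare its own read/write
// methods, but instead require the embedded-io-async traits, so every
// descriptor object provides the standard async read/write interface.
// The remaining descriptor-specific methods stay on the trait itself.
trait ObjectInterface: Read + Write {
    // e.g. poll-, ioctl-, and stat-like methods ...
}
```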

#1891 has already migrated non-async I/O to embedded-io.

#1900 has prepared migrating ObjectInterface to the Read and Write traits by moving the socket-internal synchronization to the outside.
This was required because Read and Write take self mutably (&mut self), while our current methods take self by shared reference (&self).
Additionally, moving the synchronization to the outside let us remove a lot of boilerplate code and enables performing multiple operations on a locked object without frequent relocking.
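A minimal sketch of what "synchronization on the outside" enables, assuming an async mutex around the descriptor object (async_lock is used here purely for illustration; the kernel would use its own lock type):

```rust
use async_lock::Mutex;
use embedded_io_async::{Read, Write};

// Sketch: because Read/Write take &mut self, the shared descriptor object
// is wrapped in an async mutex by its owner. Callers lock once and can
// then perform several operations without relocking in between.
async fn write_then_read<T: Read + Write>(
    obj: &Mutex<T>,
    request: &[u8],
    response: &mut [u8],
) -> Result<usize, T::Error> {
    let mut guard = obj.lock().await;
    // Partial writes are ignored here for brevity.
    guard.write(request).await?;
    guard.read(response).await
}
```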

This PR further prepares migrating ObjectInterface to Read and Write.
It does so by replacing dyn ObjectInterface (via async-trait) with an enum Fd that enumerates all possible file descriptor types.
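For context, a rough sketch of the pattern (variant and type names are placeholders, not the actual kernel types; Errno implementing embedded_io::Error is an assumption):

```rust
use embedded_io_async::{ErrorType, Read};

// Sketch: the descriptor table stores an enum instead of Arc<dyn ObjectInterface>.
// Each variant wraps one concrete descriptor object, and the trait impls
// forward to the variant by matching, so no dyn async trait is needed.
enum Fd {
    Stdio(Stdio),
    TcpSocket(TcpSocket),
    File(RamFile),
    // ... one variant per possible file descriptor object
}

impl ErrorType for Fd {
    type Error = Errno;
}

impl Read for Fd {
    async fn read(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error> {
        match self {
            Fd::Stdio(obj) => obj.read(buf).await,
            Fd::TcpSocket(obj) => obj.read(buf).await,
            Fd::File(obj) => obj.read(buf).await,
        }
    }
}
```

The Write impl (and any other forwarded trait) would follow the same match pattern, which is exactly the boilerplate that a forwarding helper could reduce.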

Continuing to use dyn ObjectInterface would require the corresponding macro (dynosaur or async-trait) to also be applied to the embedded-io-async traits themselves, since dyn-compatible async traits are not available in Rust yet (Dyn Async Traits · baby steps, In-place initialization - Rust Project Goals).
I am not sure whether embedded-io-async would be open to such an optional additional dependency.
It would also force us to do the same for every new async trait that we want to depend on in the future.

Using a static-dispatch macro to avoid writing the forwarding code ourselves is not possible either, because the respective trait would also need to be registered with that macro.

TODO: Maybe use delegate?

@mkroening mkroening self-assigned this Nov 29, 2025

@github-actions github-actions bot left a comment


Benchmark Results

| Benchmark | Current: 5a20529 | Previous: befcf26 | Performance Ratio |
|---|---|---|---|
| startup_benchmark Build Time | 114.61 s | 111.11 s | 1.03 |
| startup_benchmark File Size | 0.79 MB | 0.87 MB | 0.91 |
| Startup Time - 1 core | 1.01 s (±0.02 s) | 0.98 s (±0.02 s) | 1.03 |
| Startup Time - 2 cores | 1.01 s (±0.02 s) | 0.99 s (±0.03 s) | 1.02 |
| Startup Time - 4 cores | 1.00 s (±0.03 s) | 0.99 s (±0.02 s) | 1.01 |
| multithreaded_benchmark Build Time | 116.05 s | 111.66 s | 1.04 |
| multithreaded_benchmark File Size | 0.90 MB | 0.97 MB | 0.92 |
| Multithreaded Pi Efficiency - 2 Threads | 87.71 % (±8.94 %) | 87.45 % (±8.10 %) | 1.00 |
| Multithreaded Pi Efficiency - 4 Threads | 44.83 % (±3.91 %) | 42.77 % (±3.00 %) | 1.05 |
| Multithreaded Pi Efficiency - 8 Threads | 25.88 % (±2.24 %) | 25.18 % (±1.60 %) | 1.03 |
| micro_benchmarks Build Time | 299.88 s | 296.75 s | 1.01 |
| micro_benchmarks File Size | 0.90 MB | 0.98 MB | 0.92 |
| Scheduling time - 1 thread | 187.02 ticks (±38.81 ticks) | 176.35 ticks (±34.79 ticks) | 1.06 |
| Scheduling time - 2 threads | 108.99 ticks (±20.89 ticks) | 104.10 ticks (±21.43 ticks) | 1.05 |
| Micro - Time for syscall (getpid) | 12.17 ticks (±3.79 ticks) | 11.69 ticks (±5.33 ticks) | 1.04 |
| Memcpy speed - (built_in) block size 4096 | 61654.88 MByte/s (±43943.77 MByte/s) | 61256.19 MByte/s (±45369.75 MByte/s) | 1.01 |
| Memcpy speed - (built_in) block size 1048576 | 13723.32 MByte/s (±11410.18 MByte/s) | 14154.64 MByte/s (±11768.17 MByte/s) | 0.97 |
| Memcpy speed - (built_in) block size 16777216 | 9810.41 MByte/s (±7938.79 MByte/s) | 9650.18 MByte/s (±7829.51 MByte/s) | 1.02 |
| Memset speed - (built_in) block size 4096 | 62309.79 MByte/s (±44394.97 MByte/s) | 62658.40 MByte/s (±45879.30 MByte/s) | 0.99 |
| Memset speed - (built_in) block size 1048576 | 14029.10 MByte/s (±11575.01 MByte/s) | 14567.64 MByte/s (±12031.17 MByte/s) | 0.96 |
| Memset speed - (built_in) block size 16777216 | 9988.49 MByte/s (±8028.45 MByte/s) | 9902.13 MByte/s (±7993.63 MByte/s) | 1.01 |
| Memcpy speed - (rust) block size 4096 | 55553.61 MByte/s (±40593.47 MByte/s) | 55637.12 MByte/s (±41593.40 MByte/s) | 1.00 |
| Memcpy speed - (rust) block size 1048576 | 13445.69 MByte/s (±11006.88 MByte/s) | 13921.96 MByte/s (±11517.33 MByte/s) | 0.97 |
| Memcpy speed - (rust) block size 16777216 | 9584.04 MByte/s (±7700.33 MByte/s) | 9776.63 MByte/s (±7947.05 MByte/s) | 0.98 |
| Memset speed - (rust) block size 4096 | 55900.74 MByte/s (±40806.78 MByte/s) | 56255.41 MByte/s (±41950.49 MByte/s) | 0.99 |
| Memset speed - (rust) block size 1048576 | 13807.14 MByte/s (±11240.68 MByte/s) | 14238.58 MByte/s (±11680.58 MByte/s) | 0.97 |
| Memset speed - (rust) block size 16777216 | 9814.26 MByte/s (±7837.49 MByte/s) | 10072.00 MByte/s (±8153.23 MByte/s) | 0.97 |
| alloc_benchmarks Build Time | 295.15 s | 293.98 s | 1.00 |
| alloc_benchmarks File Size | 0.86 MB | 0.95 MB | 0.91 |
| Allocations - Allocation success | 100.00 % | 100.00 % | 1 |
| Allocations - Deallocation success | 100.00 % | 100.00 % | 1 |
| Allocations - Pre-fail Allocations | 100.00 % | 100.00 % | 1 |
| Allocations - Average Allocation time | 22703.99 Ticks (±1061.89 Ticks) | 25979.67 Ticks (±1072.71 Ticks) | 0.87 |
| Allocations - Average Allocation time (no fail) | 22703.99 Ticks (±1061.89 Ticks) | 25979.67 Ticks (±1072.71 Ticks) | 0.87 |
| Allocations - Average Deallocation time | 2849.10 Ticks (±835.57 Ticks) | 3078.39 Ticks (±1348.13 Ticks) | 0.93 |
| mutex_benchmark Build Time | 299.09 s | 295.48 s | 1.01 |
| mutex_benchmark File Size | 0.90 MB | 0.98 MB | 0.92 |
| Mutex Stress Test Average Time per Iteration - 1 Threads | 39.36 ns (±3.70 ns) | 36.76 ns (±3.72 ns) | 1.07 |
| Mutex Stress Test Average Time per Iteration - 2 Threads | 31.68 ns (±2.90 ns) | 29.88 ns (±3.19 ns) | 1.06 |

This comment was automatically generated by a workflow using github-action-benchmark.
