Description
The current implementation does a syscall to get the current system time on every ID generation. This is slow. I think this can be solved by caching the current timestamp and only going back to the OS when the cached value is exhausted.
How it would work:
The timestamp doesn't actually need to correlate to "now"; only the sequence in which the IDs are created matters. This is a fundamental change to some of the assumptions.
const std = @import("std");

const MAX_COUNTER: u32 = 4096; // example limit: IDs allowed per cached-millisecond bracket

var cached_timestamp_ms = std.atomic.Value(u64).init(0);
var counter = std.atomic.Value(u32).init(0);

pub fn generate() u64 {
    // load the cached timestamp (no syscall on the fast path)
    var ts = cached_timestamp_ms.load(.monotonic);
    // claim the next sequence number in this bracket
    var seq = counter.fetchAdd(1, .monotonic);
    if (seq >= MAX_COUNTER) {
        // we have breached the number of ids in this bracket
        var now_ms: u64 = @intCast(std.time.milliTimestamp()); // syscall once per overflow
        const old_ts = cached_timestamp_ms.load(.monotonic);
        // only update if the new time is actually newer
        if (now_ms <= old_ts) {
            // spin until we see a new millisecond
            while (true) {
                now_ms = @intCast(std.time.milliTimestamp());
                if (now_ms > old_ts) break;
            }
        }
        cached_timestamp_ms.store(now_ms, .monotonic);
        counter.store(0, .monotonic);
        ts = now_ms;
        seq = 0;
    }
    // combine fields
    // ....
}
This changes the requirement from calling the OS every time an ID is generated to calling it only when the count of IDs in the current bracket breaches the per-millisecond maximum.
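To illustrate that IDs stay strictly ordered even when the cached timestamp lags wall-clock time, here is a minimal single-threaded Python model of the same scheme. The 12-bit sequence field, the `IdGen` name, and the injected `now_ms` parameter (standing in for the syscall) are all hypothetical choices for the sketch, not part of the proposal; the real implementation would use the atomics shown above.

```python
MAX_COUNTER = 1 << 12  # hypothetical: 4096 IDs per millisecond bracket


class IdGen:
    """Single-threaded model of the cached-timestamp scheme.

    The cached timestamp only advances when a bracket overflows, so it
    may lag the real clock; IDs remain strictly increasing regardless.
    """

    def __init__(self, start_ms):
        self._ts = start_ms
        self._seq = 0

    def generate(self, now_ms):
        # now_ms is injected so the "syscall" is explicit and testable
        if self._seq >= MAX_COUNTER:
            # bracket exhausted: consult the clock once
            if now_ms <= self._ts:
                now_ms = self._ts + 1  # models spinning to the next ms
            self._ts = now_ms
            self._seq = 0
        seq = self._seq
        self._seq += 1
        return (self._ts << 12) | seq  # hypothetical 12-bit sequence field


gen = IdGen(start_ms=1_000)
# the clock never moves, yet every ID is unique and ordered
ids = [gen.generate(now_ms=1_000) for _ in range(MAX_COUNTER + 3)]
```

Note that the overflow path only touches the clock once per bracket, which is exactly the syscall reduction the proposal is after.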