
Lifecycle documentation #2

@jerinphilip


Eviction: Write

Eviction happens only during writes (or via a time-to-live based setting, which we'll ignore for our purposes here). It happens in WritableHashTable's implementation, shown below.

      // Evict this record if
      // 1: the record is expired, or
      // 2: the entry is not recently accessed (and unset the access bit
      //    if set).
      if (metadata.IsExpired(curEpochTime, this->m_recordTimeToLive) ||
          !metadata.UpdateAccessStatus(false)) {
        const auto numBytesFreed = record.m_key.m_size + value.m_size;
        numBytesToFree = (numBytesFreed >= numBytesToFree)
                             ? 0U
                             : numBytesToFree - numBytesFreed;
        WritableBase::Remove(*entry, i);
        this->m_hashTable.m_perfData.Increment(
            HashTablePerfCounter::EvictedRecordsCount);
      }
    }
  }

This takes locks, so the structure is protected while the pass runs. A record is removed if its time-to-live has expired or its access bit is off. If it was accessed recently, it survives until another full rotation, because this pass unsets the access bit.

typename HashTable::UniqueLock lock{
    this->m_hashTable.GetMutex(currentBucketIndex)};
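
To make the access-bit rule concrete, here is a minimal, self-contained sketch (not L4 code; FakeMetadata and its single bool are stand-ins I made up) of the second-chance behaviour: a record read since the last eviction pass survives that pass, because the pass clears the bit instead of evicting.

#include <cstdint>
#include <iostream>

// Hypothetical, simplified record header: one "accessed recently" flag.
struct FakeMetadata {
  bool m_accessed = false;

  // Mirrors the role of UpdateAccessStatus(false) in the eviction pass:
  // returns whether the bit was set (record survives) and then writes the
  // new status, clearing the bit.
  bool UpdateAccessStatus(bool newStatus) {
    const bool wasAccessed = m_accessed;
    m_accessed = newStatus;
    return wasAccessed;
  }
};

int main() {
  FakeMetadata meta;
  meta.UpdateAccessStatus(true);  // a read touched the record

  // First eviction pass: the bit was set, so the record survives and the
  // bit is cleared (second chance).
  std::cout << "evict on pass 1? " << std::boolalpha
            << !meta.UpdateAccessStatus(false) << '\n';  // false

  // Second pass without any read in between: the bit is clear, so evict.
  std::cout << "evict on pass 2? "
            << !meta.UpdateAccessStatus(false) << '\n';  // true
}

This is essentially the clock / second-chance policy: a hot record has to go one full pass without a read before it becomes an eviction candidate.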

Removes are just "mark for removal":

void ReleaseRecord(RecordBuffer* record) {
  if (record == nullptr) {
    return;
  }

  m_epochManager.RegisterAction([this, record]() {
    record->~RecordBuffer();
    this->m_hashTable.template GetAllocator<RecordBuffer>().deallocate(record,
                                                                       1U);
  });
}

This is a problem: we are leaking memory here while we wait for the action queue to clear these up.
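
For intuition, a minimal sketch of the pattern (the class and names here are mine, not L4's API): each deallocation callback is queued under the epoch it was registered in, and only runs once the oldest epoch any reader still references has moved past it. Until then the record's bytes stay allocated, which is the window described above.

#include <cstdint>
#include <functional>
#include <map>
#include <vector>

// Hypothetical epoch-keyed deferred-deallocation queue (illustration only).
class DeferredActions {
 public:
  // Called by writers: the action must not run until no reader can still
  // hold an epoch at or before the one it was registered under.
  void RegisterAction(std::function<void()> action) {
    m_pending[m_currentEpoch].push_back(std::move(action));
  }

  void AddNewEpoch() { ++m_currentEpoch; }

  // Runs all actions registered under epochs strictly older than the oldest
  // epoch still referenced by a reader.
  void Drain(std::uint64_t oldestReferencedEpoch) {
    auto it = m_pending.begin();
    while (it != m_pending.end() && it->first < oldestReferencedEpoch) {
      for (auto& action : it->second) {
        action();  // e.g. record->~RecordBuffer(); allocator.deallocate(...);
      }
      it = m_pending.erase(it);
    }
  }

 private:
  std::uint64_t m_currentEpoch = 0U;
  std::map<std::uint64_t, std::vector<std::function<void()>>> m_pending;
};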

Epochs are incremented while references to epochs are still held (by readers, through an L4::Context), so:

void Add() {
  // Incrementing the global epoch counter before incrementing per-connection
  // epoch counter is safe (not so the other way around). If the server
  // process is registering an action at the m_currentEpochCounter in
  // RegisterAction(), it is happening in the "future," and this means that if
  // the client is referencing the memory to be deleted in the "future," it
  // will be safe.
  ++m_currentEpochCounter;
  m_epochCounterManager.AddNewEpoch();
}
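
How "references to epochs are held", roughly: a reader bumps a ref count for the epoch that is current when it starts reading and drops it when done. The sketch below uses the same front/back/ref-count fields as the queue walked in the next snippet, but the helper names and the missing synchronization are my simplifications, not L4's actual API.

#include <array>
#include <cstdint>

// Simplified epoch queue: a ring of reference counts indexed by epoch
// counter. The real structure uses atomics and locks; this sketch elides
// them.
struct EpochQueueSketch {
  std::uint64_t m_frontIndex = 0U;  // oldest epoch that may be referenced
  std::uint64_t m_backIndex = 0U;   // current (newest) epoch
  std::array<std::uint32_t, 64> m_refCounts{};
};

// A reader pins the current epoch before touching any records...
std::uint64_t RegisterReader(EpochQueueSketch& q) {
  const auto epoch = q.m_backIndex;
  ++q.m_refCounts[epoch % q.m_refCounts.size()];
  return epoch;
}

// ...and releases it when done. A record logically removed at epoch E is
// only deallocated once no reader still holds an epoch <= E.
void UnregisterReader(EpochQueueSketch& q, std::uint64_t epoch) {
  --q.m_refCounts[epoch % q.m_refCounts.size()];
}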

The following can keep the front of the queue pinned forever if the L4::Context holding the frontIndex epoch is never freed, while new epochs keep piling up at the back (a toy walk-through follows the snippet).

// Note that this function is NOT thread safe, and should be run on the
// same thread as the one that calls AddNewEpoch().
std::uint64_t RemoveUnreferenceEpochCounters() {
  while (m_epochQueue.m_backIndex > m_epochQueue.m_frontIndex) {
    if (m_epochQueue.m_refCounts[m_epochQueue.m_frontIndex %
                                 m_epochQueue.m_refCounts.size()] == 0U) {
      ++m_epochQueue.m_frontIndex;
    } else {
      // There are references to the front of the queue and will return this
      // front index.
      break;
    }
  }

  return m_epochQueue.m_frontIndex;
}
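
A toy walk-through of that failure mode (plain local variables standing in for the queue's fields): one reference pinned at the front epoch is never released, so the front index can never advance while the back index keeps growing every processing interval.

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  std::uint64_t frontIndex = 0U;  // oldest epoch, pinned by a leaked context
  std::uint64_t backIndex = 0U;   // advances every processing interval
  std::vector<std::uint32_t> refCounts(8, 0U);

  refCounts[frontIndex % refCounts.size()] = 1U;  // context never freed

  for (int tick = 0; tick < 5; ++tick) {
    ++backIndex;  // AddNewEpoch() on the background thread

    // RemoveUnreferenceEpochCounters(): cannot advance past a referenced
    // slot, so frontIndex stays put.
    while (backIndex > frontIndex &&
           refCounts[frontIndex % refCounts.size()] == 0U) {
      ++frontIndex;
    }
    std::cout << "front=" << frontIndex << " back=" << backIndex << '\n';
  }
  // front stays at 0 while back keeps growing; every deferred deallocation
  // registered after epoch 0 stays queued, so memory keeps piling up.
}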

Reads

There is a check for whether the record is past its time-to-live, in which case false is returned:

bool GetInternal(const Key& key, Value& value) const {
  if (!Base::Get(key, value)) {
    return false;
  }

  assert(value.m_size > Metadata::c_metaDataSize);

  // If the record with the given key is found, check if the record is expired
  // or not. Note that the following const_cast is safe and necessary to
  // update the access status.
  Metadata metaData{const_cast<std::uint32_t*>(
      reinterpret_cast<const std::uint32_t*>(value.m_data))};

  if (metaData.IsExpired(this->GetCurrentEpochTime(), m_recordTimeToLive)) {
    return false;
  }

  metaData.UpdateAccessStatus(true);

  value.m_data += Metadata::c_metaDataSize;
  value.m_size -= Metadata::c_metaDataSize;

  return true;
}
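
For reference, a rough sketch of what the metadata prefix in front of the value looks like from the read path's point of view: a small fixed-size header carrying the creation time and the access bit, which GetInternal skips over before handing the value back. The concrete layout below (a single 32-bit word, high bit as the access flag) is an assumption for illustration, not taken from the L4 source.

#include <cstdint>

// Hypothetical layout of the per-record metadata prefix (illustrative only).
class FakeMetadata {
 public:
  static constexpr std::uint16_t c_metaDataSize = sizeof(std::uint32_t);

  explicit FakeMetadata(std::uint32_t* data) : m_data{data} {}

  // Expired if more than timeToLive seconds have passed since creation.
  bool IsExpired(std::uint32_t curEpochTime, std::uint32_t timeToLive) const {
    const std::uint32_t createdTime = *m_data & c_epochTimeMask;
    return curEpochTime > createdTime + timeToLive;
  }

  // Sets or clears the access bit; returns whether it was previously set.
  bool UpdateAccessStatus(bool accessed) {
    const bool wasAccessed = (*m_data & c_accessBitMask) != 0U;
    if (accessed) {
      *m_data |= c_accessBitMask;
    } else {
      *m_data &= c_epochTimeMask;
    }
    return wasAccessed;
  }

 private:
  static constexpr std::uint32_t c_accessBitMask = 1U << 31;
  static constexpr std::uint32_t c_epochTimeMask = ~c_accessBitMask;

  std::uint32_t* m_data;  // first c_metaDataSize bytes of value.m_data
};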

Misc:

The full key is compared (a size_t in our case), so a hash collision returning the wrong record is not a concern:

if (record.m_key == key) {
  value = record.m_value;
  return true;
}

Any successful eviction triggers an increment of the global epoch. This runs every n seconds on a background thread:

m_processingThread{m_config.m_epochProcessingInterval, [this] {
  this->Remove();
  this->Add();
}}
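
A rough sketch of what that processing thread presumably amounts to (the real L4 type is not reproduced here): a background loop that wakes up every interval and invokes the callback, i.e. the Remove()/Add() pair above.

#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Simplified stand-in for a periodic background worker (not L4's class).
class PeriodicThread {
 public:
  PeriodicThread(std::chrono::milliseconds interval,
                 std::function<void()> callback)
      : m_thread{[this, interval, callback = std::move(callback)] {
          while (m_running.load()) {
            // A real implementation would use a condition variable so that
            // shutdown does not have to wait out a full interval.
            std::this_thread::sleep_for(interval);
            if (m_running.load()) {
              callback();  // e.g. this->Remove(); this->Add();
            }
          }
        }} {}

  ~PeriodicThread() {
    m_running.store(false);
    m_thread.join();
  }

 private:
  std::atomic<bool> m_running{true};  // declared before m_thread on purpose
  std::thread m_thread;
};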
