Limited cache and performance queries #38

@oliver-s-lee

Description

Hi there, really nice project you've got here! I'm using mongita as an embedded DB in one of my projects and it's great, but I've started making a few tweaks I thought you might be interested in?

Firstly, I see the on-disk storage engine currently has an infinitely growing cache, which naturally leads to unbounded memory growth over large read/write cycles. I've had a go at writing a simple limited cache that should work as a drop-in replacement (#36). Limiting the cache size to below the benchmarking set size will obviously have a negative impact on performance, but when set larger (or to infinite) the change has limited impact.
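To illustrate the idea, here's a minimal sketch of what a size-limited, LRU-style drop-in cache could look like. The class and parameter names are hypothetical, not mongita's actual internals; the real change is in #36.

```python
from collections import OrderedDict


class LimitedCache:
    """Sketch of an LRU-bounded cache (hypothetical names, not
    mongita's actual implementation)."""

    def __init__(self, max_items=1024):
        # max_items=None would preserve the old unbounded behaviour
        self.max_items = max_items
        self._data = OrderedDict()

    def __setitem__(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)  # mark as most recently used
        if self.max_items is not None and len(self._data) > self.max_items:
            self._data.popitem(last=False)  # evict least recently used

    def __getitem__(self, key):
        value = self._data[key]
        self._data.move_to_end(key)  # a read also refreshes recency
        return value

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)
```

With `OrderedDict`, both insertion and eviction are O(1), so the bounded cache costs almost nothing when the working set fits under the limit.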

Secondly, I think one of the current major bottlenecks for performance is the copy.deepcopy() calls on insertion and retrieval. For insertion, I'm fairly certain this can be replaced with a simple shallow copy, as all that's changed is the addition of the _id field? I've made a PR to test this out (#37) and all seems to work fine. On my system, the increase in insertion performance with the benchmarking set is ~50%.
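To make the reasoning concrete, here's a hedged sketch of the two copy strategies (function names are mine, not mongita's). If the engine only ever adds the top-level `_id` key and never mutates nested values in place, a shallow copy is enough to keep the caller's document unmodified:

```python
import copy


def insert_with_deepcopy(doc, new_id):
    """Current behaviour (sketch): recursively copies every nested value."""
    stored = copy.deepcopy(doc)
    stored["_id"] = new_id
    return stored


def insert_with_shallow_copy(doc, new_id):
    """Proposed behaviour (sketch): copies only the top-level dict.
    Safe if the only modification is adding the _id key, since the
    caller's doc is left untouched; nested values remain shared."""
    stored = dict(doc)
    stored["_id"] = new_id
    return stored
```

The deep copy's cost grows with the size of the nested document, while the shallow copy is a fixed-cost, top-level key copy, which is consistent with the ~50% insertion speed-up seen on the benchmarking set.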

For retrieval it looks like things are more complicated. Currently, the returned record is copied regardless of whether it's fetched from cache or from disk, but of course the record returned from disk is already unique so the copy is wasted. I don't see any easy way to change this at present without changing some other internals, most probably collection.__find_ids(). Is there a reason this function couldn't return the actual documents rather than just the IDs? It's already gone to the bother of fetching the records, so it seems wasteful to discard them only to retrieve them again later.
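A rough sketch of what that could look like, assuming the lookup returned documents instead of IDs. Everything here is hypothetical (`cache` and `read_from_disk` stand in for the storage engine's internals): only cache hits get copied, because a cached object is shared state, while a document freshly deserialized from disk is already unique and can be handed out directly.

```python
import copy


def find_docs(ids, cache, read_from_disk):
    """Hypothetical sketch of the proposed change: return full
    documents from the lookup, copying only cache hits."""
    results = []
    for doc_id in ids:
        cached = cache.get(doc_id)
        if cached is not None:
            # Cached object is shared with the cache, so copy it
            results.append(copy.deepcopy(cached))
        else:
            # Freshly read from disk: already a unique object, no copy
            results.append(read_from_disk(doc_id))
    return results
```

The trade-off is that documents returned from disk are not retained in the cache here; caching them would reintroduce a copy, so where that copy happens (if at all) would depend on the other internals mentioned above.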

Cheers, and I hope you don't mind the comments!
