hal: dedicated buffer/image memory #2929
Conversation
gfx-hal = { path = "../../hal", version = "0.2" }
smallvec = "0.6"
winit = { version = "0.19", optional = true }
#vk-mem = { version = "0.1", optional = true }
It's there for future work on the topic ;)
… On Jul 31, 2019, at 03:15, msiglreith ***@***.***> wrote:
@msiglreith approved this pull request.
In src/backend/vulkan/Cargo.toml:
> @@ -29,6 +29,7 @@ ash = "0.29.0"
gfx-hal = { path = "../../hal", version = "0.2" }
smallvec = "0.6"
winit = { version = "0.19", optional = true }
+#vk-mem = { version = "0.1", optional = true }
should be removed
/// Create a dedicated allocation for a buffer.
///
/// Returns a memory object that can't be used for binding any resources.
unsafe fn allocate_buffer_memory(
To actually do that more efficiently, the function must return a view into memory with offset and size.
The user is expected to still call get_buffer_requirements before doing this, so the size is known. And the offset is known to be zero. Is this not enough?
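The flow implied here can be sketched with stand-in types (a hypothetical model for illustration, not the real gfx-hal 0.2 signatures): the caller learns the size from the requirements query before allocating, and a dedicated allocation always binds at offset zero, so no extra view needs to be returned.

```rust
// Hypothetical stand-ins for illustration only; not real gfx-hal types.
struct Requirements {
    size: u64,
}

struct DedicatedMemory {
    size: u64,
    offset: u64, // always 0 for a dedicated allocation
}

// Models `get_buffer_requirements`: the caller queries this first,
// so the allocation size is already known on their side.
fn get_buffer_requirements() -> Requirements {
    Requirements { size: 1024 }
}

// Models `allocate_buffer_memory`: the whole memory object belongs to
// one buffer, so no view with an extra offset/size is needed.
fn allocate_buffer_memory(req: &Requirements) -> DedicatedMemory {
    DedicatedMemory {
        size: req.size,
        offset: 0,
    }
}
```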
Oh, so this will work only for dedicated allocations?
I thought we wanted to integrate VMA into the Vulkan backend this way
I think what we can do with VMA is:
- make it a compile-time optional dependency
- turn `Memory` into an enum, with one variant for normal allocations and another for VMA-owned ones. The code dealing with memory would need to match on the enum, but I don't think that's a problem.
- the user is then expected to treat the `Memory` as a dedicated allocation
Does that sound reasonable?
So the memory type exposed by the Vulkan backend would be:

enum Memory {
    Physical(vk::Memory),
    SubAllocated(vma::Allocation),
}

Mapping a `Memory::SubAllocated` will go through VMA and just give out a pointer without really doing any more Vulkan mapping, if I understand correctly, much like with rendy-memory.
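The mapping behaviour described above can be modeled roughly like this (a sketch with stand-in structs in place of ash's `vk::DeviceMemory` and VMA's allocation type; the names are assumptions, not the backend's real code):

```rust
// Stand-ins for the real Vulkan/VMA handles, for illustration only.
struct VkDeviceMemory;
struct VmaAllocation {
    // VMA can keep a persistent pointer into its mapped block.
    mapped_ptr: usize,
}

enum Memory {
    Physical(VkDeviceMemory),
    SubAllocated(VmaAllocation),
}

// Sketch of a map operation dispatching over the enum.
fn map(memory: &Memory) -> usize {
    match memory {
        // Normal allocation: this is where a real vkMapMemory call
        // on the whole memory object would happen.
        Memory::Physical(_raw) => 0, // placeholder for the driver pointer
        // VMA-owned allocation: hand out VMA's pointer directly,
        // without issuing another Vulkan mapping call.
        Memory::SubAllocated(alloc) => alloc.mapped_ptr,
    }
}
```

The point of the sketch is only the dispatch: the VMA arm never touches the driver, which is why the mapping cost concern below is about how VMA obtains that pointer in the first place.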
Only if VMA has no limitations here. Rendy has to map all non-dedicated mappable memory persistently to allow this, which is not optimal. I wonder how VMA does that.
> which is not optimal
Is it, really? My understanding is that it's both optimal and recommended for CPU-visible memory.
It's not a problem when RAM is abundant, like on a typical modern PC. But in other cases it may not be the best approach.
Looks like it doesn't map memory by default on creation, unless there is a specific flag provided. Relevant docs - https://gpuopen-librariesandsdks.github.io/VulkanMemoryAllocator/html/memory_mapping.html
It confirms that VMA acts roughly like rendy-memory, and the approach this PR is taking should work.
I suppose I can integrate VMA in this PR as a validation of the API.
Force-pushed from 3e1f8b1 to 375af89.
Fixes #2511
Closes #2513
The idea is that backends can implement this more efficiently than the default implementation.
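That split can be sketched as a trait with a default method (hypothetical names and simplified signatures, not the actual gfx-hal trait): the default composes the existing requirements/allocation calls, and a backend is free to override it, e.g. with a driver-level dedicated allocation such as VK_KHR_dedicated_allocation.

```rust
// Simplified stand-ins; not the real gfx-hal 0.2 types or signatures.
struct Requirements { size: u64 }
struct Memory { size: u64, dedicated: bool }

trait Device {
    fn get_buffer_requirements(&self) -> Requirements;
    fn allocate_memory(&self, size: u64) -> Memory;

    // Default implementation: just a plain allocation of the right size.
    // A backend can override this with a true dedicated allocation.
    fn allocate_buffer_memory(&self) -> Memory {
        let req = self.get_buffer_requirements();
        self.allocate_memory(req.size)
    }
}

struct PlainBackend;
impl Device for PlainBackend {
    fn get_buffer_requirements(&self) -> Requirements { Requirements { size: 256 } }
    fn allocate_memory(&self, size: u64) -> Memory { Memory { size, dedicated: false } }
}

struct DedicatedBackend;
impl Device for DedicatedBackend {
    fn get_buffer_requirements(&self) -> Requirements { Requirements { size: 256 } }
    fn allocate_memory(&self, size: u64) -> Memory { Memory { size, dedicated: false } }
    // Override: the backend performs a driver-level dedicated allocation.
    fn allocate_buffer_memory(&self) -> Memory {
        Memory { size: self.get_buffer_requirements().size, dedicated: true }
    }
}
```

Callers use `allocate_buffer_memory` the same way in both cases; only the backend decides whether the allocation is genuinely dedicated.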
PR checklist:
- `make` succeeds (on *nix)
- `make reftests` succeeds
- `rustfmt` run on changed code