WIP: Float16 KV Cache in voicecraft.py#72
Open
Ph0rk0z wants to merge 1 commit into
Conversation
Owner
Thanks! Do you have an estimate of how much VRAM it uses after making the cache fp16? With fp32, for the default example in the demo: the 830M model needs around 22GB with kvcache on and 12GB with kvcache off (i.e. kvcache=0); the 330M model needs 15GB with kvcache on and 5GB with kvcache off. In addition, can the entire model/operation be run in fp16?
Contributor
Author
The model loaded with whisperX is about 6GB, but that goes up during inference. I tried adding model.half() in the model-loading code too, but there was no difference. It could be due to the 4 batches; I think it uses less if you set it to do 1 batch.
Contributor
Author
Here is what it does on my machine: https://files.catbox.moe/azwyj4.mov. I wonder why the CPU usage is so high as well.
Didn't appear to do anything bad. Not sure how much it helps. Give it a try. I think there are some missing torch GC calls somewhere, because not all memory is always cleared. Are there other places we can use FP16? It shouldn't matter for inference, unlike training.
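On the missing GC calls: the usual cleanup after inference is to drop the Python references, run the garbage collector, and then return cached CUDA blocks to the driver. A minimal sketch, with an illustrative function name:

```python
import gc
import torch

def release_memory():
    """Free what we can after an inference pass: collect unreachable
    Python objects, then (if CUDA is present) release cached allocator
    blocks so other processes can use the VRAM."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

release_memory()
```

Note that `torch.cuda.empty_cache()` only releases blocks the caching allocator is holding onto; tensors still referenced from Python keep their memory, which is why deleting the references first matters.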