refactor: replace uniform_ noise with rand-based formulation for torch.compile() #350
studyingeugene wants to merge 1 commit into InterDigitalInc:master
refactor: replace uniform_ noise with rand-based formulation for torch.compile

- Replace `empty_like(...).uniform_()` with a `rand_like()`-based expression
- Avoid in-place random ops that are problematic for torch.compile / dynamo
- Preserve the identical noise range [-half, half] and statistical behavior
Summary
This PR refactors the additive noise generation logic in `quantize()` to improve compatibility with `torch.compile` / TorchDynamo. The previous implementation relied on an in-place `uniform_()` random operation, which can cause graph breaks or compilation issues under `torch.compile`. This PR replaces it with a `rand_like()` formulation that is compile-friendly while preserving identical statistical behavior.

Observed Issues
In several environments, `torch.compile` fails when encountering in-place random initialization inside `quantize()`. I observed `torch.compile(fullgraph=True)` failures originating from `torch.ops.aten.uniform.default`: TorchInductor asserts an expected size/stride for the temporary buffer at runtime, but the actual stride can differ (e.g., due to layout differences such as channels-last), causing `assert_size_stride` to fail. `rand_like()` avoids the issue because TorchInductor lowers it as an out-of-place, value-producing operation, whereas `aten.uniform` is lowered with static stride assumptions that can conflict with runtime layouts.

What Changed
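A minimal sketch of the substitution, based on the commit message above. The wrapper function `quantize_noise` and the variable names are illustrative; the actual change lands inside `quantize()`:

```python
import torch

def quantize_noise(inputs: torch.Tensor) -> torch.Tensor:
    half = float(0.5)
    # Before (in-place random init, problematic under torch.compile):
    #     noise = torch.empty_like(inputs).uniform_(-half, half)
    # After: rand_like() samples U[0, 1), so subtracting `half` yields
    # the same U[-half, half) noise as a pure, out-of-place expression.
    noise = torch.rand_like(inputs) - half
    return inputs + noise
```

Because the new expression produces a fresh value instead of mutating a buffer, Dynamo can trace it without graph breaks, and Inductor does not need the static stride assertion that `aten.uniform` requires.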
Reproducing the Error
Unfortunately, I’m not able to provide reliable, minimal reproducing code for this issue, and I apologize for the lack of a reproducible test case.
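For reference, here is a sketch of the kind of invocation where I saw failures; it illustrates the setup described above (fullgraph compilation plus a channels-last input) but is not a guaranteed reproducer:

```python
import torch

def add_uniform_noise(x: torch.Tensor) -> torch.Tensor:
    half = float(0.5)
    # In-place random initialization that torch.compile may fail on.
    return x + torch.empty_like(x).uniform_(-half, half)

fn = torch.compile(add_uniform_noise, fullgraph=True)

# Channels-last strides: the kind of layout that, in some environments,
# conflicted with Inductor's assert_size_stride expectations.
x = torch.randn(1, 8, 16, 16).to(memory_format=torch.channels_last)
y = fn(x)
```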
Compatibility
This change preserves:

- the identical noise range [-half, half]
- the statistical behavior of the additive noise
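As a quick illustrative check (not part of the PR), both formulations draw from the same U[-half, half) distribution:

```python
import torch

torch.manual_seed(0)
half = 0.5
n = 1_000_000

old = torch.empty(n).uniform_(-half, half)  # previous formulation
new = torch.rand(n) - half                  # proposed formulation

# Same support and matching moments, up to sampling noise
# (mean ~ 0, std ~ 1/sqrt(12) ~ 0.2887).
for name, t in [("uniform_", old), ("rand - half", new)]:
    print(name,
          f"mean={t.mean().item():+.4f}",
          f"std={t.std().item():.4f}",
          f"min={t.min().item():+.4f}",
          f"max={t.max().item():+.4f}")
```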
Thanks for reading
I appreciate your time reviewing this change.