test: Add Torch AOTI Tests #8771
Conversation
This change:
- Creates a new L0_torch_aoti test suite.
- Adds complex Torch AOTI model generation to qa/common/gen_qa_models.py.
- Cleans up existing AOTI model generation in qa/common/gen_qa_models.py.
- Enables torchvision AOTI model generation in qa/common/gen_qa_model_repository.
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Pull request overview
Adds a new QA L0 suite to exercise PyTorch Torch AOTI models (including complex/nested I/O) and enables generation of the required QA model repository artifacts (including TorchVision AOTI ResNet50).
Changes:
- Introduces the qa/L0_torch_aoti test suite (shell harness + HTTP client unittest coverage for complex/simple/torchvision AOTI models).
- Extends qa/common/gen_qa_models.py to generate complex Torch AOTI models/configs and updates Torch AOTI/TorchVision AOTI packaging/config generation.
- Enables TorchVision AOTI model generation in qa/common/gen_qa_model_repository.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| qa/L0_torch_aoti/torch_aoti_infer_test.py | Adds HTTP-based unittest coverage for complex/named/index Torch AOTI models, simple dtype variants, and a TorchVision AOTI model. |
| qa/L0_torch_aoti/test.sh | Adds the L0 harness that stages models, starts Triton, runs the Python tests, and cleans up. |
| qa/common/gen_qa_models.py | Adds complex Torch AOTI generation/configs; refactors paths; updates TorchVision AOTI generation. |
| qa/common/gen_qa_model_repository | Enables TorchVision AOTI QA model repository generation. |
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
```diff
@@ -1298,69 +1299,44 @@ def generate_sample_inputs(
     input_shape = [abs(ips) for ips in input_shape]

     if input_dtype == np.int8:
```
Why not simply create a numpy to torch dtype map?
What do you mean? This function generates sample inputs; it doesn't translate types. Sorry, just confused.
Something like
```python
np_to_torch_dtype_dict = {
    np.int8: torch.int8,
    ...
}
...
if input_dtype in np_to_torch_dtype_dict:
    input0 = torch.zeros(input_shape, dtype=np_to_torch_dtype_dict[input_dtype], device=device)
    input1 = torch.zeros(input_shape, dtype=np_to_torch_dtype_dict[input_dtype], device=device)
else:
    print(
        f"{_color_yellow}warning: dtype {input_dtype} is unsupported; falling back to torch.int32{_color_reset}"
    )
    input0 = torch.zeros(input_shape, dtype=torch.int32, device=device)
    input1 = torch.zeros(input_shape, dtype=torch.int32, device=device)
```
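As an aside, the lookup-table idea sketched above can be demonstrated without importing torch at all. The sketch below is a hypothetical helper, not code from this PR: it maps numpy dtypes to torch dtype *names* (in `gen_qa_models.py` the values would be real `torch` dtypes) and falls back to a default for unsupported types, mirroring the else-branch in the suggestion:

```python
import numpy as np

# Hypothetical lookup table; in the real helper the values would be
# torch.int8, torch.int32, ... rather than their string names.
NP_TO_TORCH_DTYPE = {
    np.int8: "torch.int8",
    np.int32: "torch.int32",
    np.float16: "torch.float16",
    np.float32: "torch.float32",
}


def torch_dtype_name(np_dtype, default="torch.int32"):
    # Unsupported dtypes fall back to the default, matching the
    # warning-and-fallback branch in the reviewer's suggestion.
    return NP_TO_TORCH_DTYPE.get(np_dtype, default)
```

A single `dict.get` with a default keeps the per-dtype `if`/`elif` chain out of `generate_sample_inputs` entirely.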
```diff
 if input_dtype == np.int8:
-    input0 = torch.randint(-128, 127, input_shape, dtype=torch.int8, device=device)
-    input1 = torch.randint(-128, 127, input_shape, dtype=torch.int8, device=device)
+    input0 = torch.zeros(input_shape, dtype=torch.int8, device=device)
```
What's the reasoning behind this change? How do you tell the backend is working correctly for a zero-value output tensor?
These are just sample values used by PyTorch to compile the model; they aren't used for inference.
```python
    print(f"Created {label_path}")


def create_torch_aoti_complex_modelconfig(
```
```diff
-def create_torch_aoti_complex_modelconfig(
+def create_torch_aoti_complex_model_config(
```
All other similar functions are named modelconfig, not model_config; I'm just following the existing pattern.
I see. "Model config" should really be two words.
Agreed. I can change them all; what do you think?
For now I am fine with only changing this new method.