Releases: microsoft/onnxscript
v0.6.2
Note
This patch release includes important bug fixes to the version converter.
What's Changed
Optimizer and rewriter
- Fix SlicesSplit rewrite rule by @justinchuby in #2802
Other Changes
- Bump version from 0.6.1 to 0.6.2 by @justinchuby in #2798
- Fix version converter regression by @titaiwangms in #2799
Full Changelog: v0.6.1...v0.6.2
v0.6.1
What's Changed
Note
This update includes a slight change in how PyTorch SDPA's Boolean attention mask is exported for opset versions below 23.
- Fix attention mask to use float_lowest instead of -inf and add NaN-safe softmax handling by @Aniketsy in #2654
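The motivation for the mask change above can be shown in plain Python: when an entire attention row is masked with `-inf`, every softmax entry becomes NaN, whereas masking with the lowest finite float keeps the output well defined. A minimal sketch of the effect (not the exporter's actual code):

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the row max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

FLOAT_LOWEST = -3.4028234663852886e38  # lowest finite float32 ("float_lowest")

fully_masked_inf = softmax([float("-inf")] * 4)  # -inf - -inf = nan -> all NaN
fully_masked_low = softmax([FLOAT_LOWEST] * 4)   # logits of 0.0 -> uniform 0.25
```

With `-inf` the fully masked row propagates NaN through the whole attention output; with the lowest finite float it degrades gracefully to a uniform distribution.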
Other Changes
- Fix compatibility with the latest onnx-ir version (0.1.16) by supporting onnx functions in version converter by @titaiwangms in #2791
Full Changelog: v0.6.0...v0.6.1
v0.6.0
What's Changed
This release introduces breaking changes to OnnxFunction. The `.param_schemas` and `.op_schema` properties are removed and replaced by `.op_signature`, which is more flexible and captures the complete signature of the OnnxFunction.
Additionally, onnxscript's AST converter for the scripting mode has been migrated to fully leverage onnx-ir.
Breaking Changes
- Remove ParamSchema by @justinchuby in #2768
- Replace op_schema with op_signature by @justinchuby in #2771
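For downstream code, the breaking change amounts to swapping the removed properties for the new one. A hypothetical before/after sketch (attribute names follow these notes; check the 0.6.0 API reference for exact signatures):

```
# Before 0.6.0 (removed):
schemas = my_function.param_schemas
schema = my_function.op_schema

# From 0.6.0 on:
signature = my_function.op_signature  # complete signature of the OnnxFunction
```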
Core ONNX Script
- Fix call to ast.Expression by @justinchuby in #2758
- Remove creation of duplicate value by @gramalingam in #2775
- Remove duplicated return_var by @justinchuby in #2776
- Fix loop body value creation by @justinchuby in #2777
Optimizer and rewriter
- Add test for unused initializer check with graph outputs by @Copilot in #2733
- Add OutputFixPass and NameFixPass in optimize by @justinchuby in #2779
Torch Lib
- Add support for more complex operators by @simonbyrne in #2734
- [torchlib] Consolidate all overloads and prevent new ones from being created by @justinchuby in #2621
- [torchlib] Fix `aten__native_batch_norm_legit_functional` by @justinchuby in #2753
- [torchlib] Implement signbit by @justinchuby in #2754
- feat: modify aten_bilinear from einsum to matmul by @fw7th in #2746
- [torchlib] Migrate torchvision implementations by @justinchuby in #2569
- [torchlib] Fix linspace implementation for int64 by @Aravind-11 in #2693
- [torchlib] Fix prod and normal.float_float by @justinchuby in #2762
- Fix conversion when enable_gqa is False and dimensions are different by @xadupre in #2763
- [torchlib] prims::sum by @justinchuby in #2778
- [torchlib] Fix irfft by @simonbyrne in #2770
Other Changes
- Bump version from 0.5.7 to 0.6.0 by @justinchuby in #2735
- Migrate onnxscript converter to use onnx ir by @gramalingam in #2706
- Added padding_idx=None option and new test cases for aten_embedding_bag by @crypto-a in #2549
- Move converter implementation files to _internal folder by @Copilot in #2738
- Minor cleanup of onnxscript converter by @gramalingam in #2748
- Update onnx_ir and onnx dependency versions by @justinchuby in #2749
- Add read permissions for contents in optional lint by @justinchuby in #2750
- Cleanup SymbolValue by @gramalingam in #2752
- Remove `select_ir_versions` from public API by @justinchuby in #2739
- Fix unicode issue by @gramalingam in #2747
- Clean up onnxscript/_internal/converter.py by @justinchuby in #2759
- Remove 3rd-party code by @justinchuby in #2760
- Create torch apis 2.11 by @justinchuby in #2767
- [ez] Expose FOLDED_FROM_KEY to onnxscript.optimizer public API by @Copilot in #2773
- Temporarily disable mypy by @justinchuby in #2786
- Support metadata_prop merge and version 25 in version converter by @titaiwangms in #2782
New Contributors
- @simonbyrne made their first contribution in #2734
- @crypto-a made their first contribution in #2549
- @fw7th made their first contribution in #2746
- @Aravind-11 made their first contribution in #2693
Full Changelog: v0.5.7...v0.6.0
v0.5.7
What's Changed
Optimizer and rewriter
- Improve constant folding error messages and allow Identity to skip shape merging by @justinchuby in #2670
- Fix scalar constant check by @gramalingam in #2672
- Capture rewrite rule name as metadata by @gramalingam in #2675
- Keep creating constants when constants are folded inside ir.Function by @titaiwangms in #2679
- Avoid initializer name collision in _fuse_batchnorm.py by @titaiwangms in #2680
- Merge metadata props in rewriter by @gramalingam in #2682
- Implement SDPA via MHA by @gramalingam in #2683
- Don't constant fold Quantize/DequantizeLinear nodes by default by @ruro in #2713
- Fix unused initializer check by @gramalingam in #2732
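A likely motivation for keeping Quantize/DequantizeLinear out of constant folding by default (#2713) is that folding a Q→DQ pair into a plain float constant discards the scale and zero-point that quantization-aware backends rely on. A toy per-tensor sketch of the pattern (not the ONNX ops themselves):

```python
# Simplified per-tensor int8 quantize/dequantize, the pattern Q/DQ nodes encode.
scale, zero_point = 0.1, 0

def quantize(x):
    # Round to the nearest step and clamp to the int8 range.
    return max(-128, min(127, round(x / scale) + zero_point))

def dequantize(q):
    return (q - zero_point) * scale

# Folding Q(DQ(...)) into a single float constant would erase scale/zero_point,
# which downstream quantization tooling needs; note the quantization error stays.
roundtrip = dequantize(quantize(0.123))  # 0.123 snaps to 0.1
```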
ONNX IR
- Add option to clear metadata in ort fusion by @gramalingam in #2685
Torch Lib
- feat: implement LSTM and GRU operators for torchlib by @ombrdr47 in #2674
- [torchlib] Fix unbind.int if num_outputs=1 by @sebimarkgraf in #2684
- [torchlib] Fix mod on SymInt by @justinchuby in #2686
- Implement aten.stft by @moatom in #2645
- Add converter for unique_consecutive by @xadupre in #2694
- Add missing output_size kwarg to repeat_interleave by @yuanyao-nv in #2691
- add converter for aten::sym_storage_offset by @xadupre in #2697
- Implement ONNX export for `fake_quantize_per_*_affine` by @ruro in #2696
- Fix aten_unbind for torch >= 2.7 dynamo export by @afshin-paydar in #2719
- Update aten_index_put implementation by @gramalingam in #2712
- [torchlib] Fix and implement overloads for aten::remainder by @justinchuby in #2727
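One pitfall behind the mod fix above (#2686): Python's `%`, which symbolic integers follow, takes the sign of the divisor, while C-style `fmod` takes the sign of the dividend (ONNX `Mod` exposes both behaviors through its `fmod` attribute). Picking the wrong variant flips results for negative operands:

```python
import math

# Python-style modulo (result has the sign of the divisor)
py_mod = -7 % 3            # 2

# C-style fmod (result has the sign of the dividend)
c_fmod = math.fmod(-7, 3)  # -1.0
```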
Documentation
- Utility and example for custom op expansion by @gramalingam in #2701
Other Changes
- Add GQA fusion test cases by @gramalingam in #2669
- chore(deps): bump ruff from 0.14.2 to 0.14.3 in /requirements/lintrunner by @dependabot[bot] in #2676
- chore(deps): bump editorconfig-checker from 3.4.0 to 3.4.1 in /requirements/lintrunner by @dependabot[bot] in #2677
- chore(deps): bump onnx-weekly from 1.20.0.dev20251027 to 1.21.0.dev20251103 in /requirements/ci by @dependabot[bot] in #2678
- Bump version by @gramalingam in #2702
- Provide inplace replacement util by @gramalingam in #2708
- chore(deps): bump ruff from 0.14.3 to 0.14.6 in /requirements/lintrunner by @dependabot[bot] in #2716
- chore(deps): bump actions/checkout from 5 to 6 by @dependabot[bot] in #2715
- chore(deps): bump onnxruntime from 1.23.1 to 1.23.2 in /requirements/ci by @dependabot[bot] in #2652
- chore(deps): bump github/codeql-action from 3 to 4 by @dependabot[bot] in #2626
- chore(deps): bump ruff from 0.14.6 to 0.14.7 in /requirements/lintrunner by @dependabot[bot] in #2721
- support opset23 by @titaiwangms in #2725
- chore(deps): bump actions/upload-artifact from 5 to 6 by @dependabot[bot] in #2730
- chore(deps): bump ruff from 0.14.7 to 0.14.9 in /requirements/lintrunner by @dependabot[bot] in #2731
New Contributors
- @ombrdr47 made their first contribution in #2674
- @moatom made their first contribution in #2645
- @ruro made their first contribution in #2696
- @afshin-paydar made their first contribution in #2719
Full Changelog: v0.5.6...v0.5.7
v0.5.6
What's Changed
Optimizer and rewriter
- Clear initializers in constant folding pass by @justinchuby in #2668
Full Changelog: v0.5.5...v0.5.6
v0.5.5
What's Changed
Breaking Changes
- Create initializers not constant nodes in constant folding pass by @titaiwangms in #2650
Core ONNX Script
- Add support for traced if statements in onnxscript script by @gramalingam in #2644
Optimizer and rewriter
- Add RMS Normalization rule variant by @gramalingam in #2638
- Extend GQA fusion for Qwen by @gramalingam in #2662
Torch Lib
- Unsqueeze unbatched input of avg_pool by @wodesuck in #2646
- Support math trunc by @titaiwangms in #2653
- [torchlib] Fix concat when input tensor has shape `(0,)` by @justinchuby in #2661
Other Changes
- Extend GQA fusion for Gemma3 by @gramalingam in #2639
- Bump version to 0.5.5 by @titaiwangms in #2640
- Add Gemma3 GQA fusion test case by @gramalingam in #2642
- [Rewriter]: introduce remove_optional_bias by @AyoubMDL in #2635
- Add a verbose mode to torch api for external data save by @justinchuby in #2643
- [version converter] Fix DFT opset 20 by @titaiwangms in #2659
- Declare support for Python 3.14 in pyproject.toml by @justinchuby in #2663
Full Changelog: v0.5.4...v0.5.5
v0.5.4
What's Changed
Optimizer and rewriter
- Fix constant in constant folding by @titaiwangms in #2622
- Create helper for comparing semantic equivalence of shapes by @justinchuby in #2620
- Fix GQA fusion to produce present key/value by @justinchuby in #2634
Torch Lib
- Separated implementation of aten::scatter overloads by @linshokaku in #2605
- Enhanced type annotations and simplified implementation of scatter.value by @linshokaku in #2612
- support for scalar args to aten::scatter by @linshokaku in #2613
- [torchlib] Implement aten_bilinear function using Einsum by @Copilot in #2574
- Simplify aten_unbind when shape is static by @justinchuby in #2597
- Consolidate overloads in torchlib by @justinchuby in #2604
- [torchlib] Fix implementations for bitwise_* overloads by @justinchuby in #2618
- [torchlib] Deprecate Rank and IsScalar by @justinchuby in #2624
- [torchlib] Fix operator add by @justinchuby in #2630
- Remove redundant registration of operator::add and fix sub.Scalar by @justinchuby in #2631
Other Changes
- Update torch api error message to include value names by @justinchuby in #2599
- Remove beartype by @justinchuby in #2603
- Allow `opset_version` to be set explicitly when exporting by @NoRaincheck in #2615
- Merge shapes only in identity op and node-level shape inference by @titaiwangms in #2623
Full Changelog: v0.5.3...v0.5.4
v0.5.3
What's Changed
Optimizer and rewriter
- Fix Onnx 23 Rotary Fusion by @gramalingam in #2576
- Record names of contributing values in the constant folding pass by @justinchuby in #2575
- Merge output shape with input shape instead of override by @wodesuck in #2578
- Extend utilities for checking a scalar value by @gramalingam in #2587
- Merge input and output shape when removing identity by @wodesuck in #2588
- Add NaN handling in softmax pattern in SDPA fusion by @gramalingam in #2593
- Fix collapse slices rewrite rules to handle unknown dims by @justinchuby in #2583
- Expose the should_fold option to optimize() by @justinchuby in #2594
Torch Lib
- [torchlib] Add trace_only flag to aten_copy, aten_tril, aten_triu by @justinchuby in #2572
- [torchlib] Support integers in logical_and/or ops and update other logical ops by @justinchuby in #2582
- [torchlib] Add back operator and/or by @justinchuby in #2590
- Improve aten_floor_divide for int inputs by @justinchuby in #2592
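The floor_divide fix above touches a classic integer-division pitfall: floor division rounds toward negative infinity, while truncating division (the behavior of plain integer division in many backends) rounds toward zero, so the two disagree exactly when the operands' signs differ. In plain Python:

```python
import math

floored = -7 // 2               # floor division rounds toward -inf: -4
truncated = math.trunc(-7 / 2)  # truncation rounds toward zero: -3

# The two agree when both operands have the same sign:
assert 7 // 2 == math.trunc(7 / 2) == 3
```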
Other Changes
- Remove usages of ir.Input in test by @justinchuby in #2591
Full Changelog: v0.5.2...v0.5.3
v0.5.2
What's Changed
Optimizer and rewriter
- [rewriter] Remove generic pattern matcher by @justinchuby in #2567
- Add GQA fusion to ONNX fusions by @gramalingam in #2524
Torch Lib
- [torchlib] Fix aten_gather to correctly handle scalar indices by @linshokaku in #2566
- [torchlib] Simplify linalg_vector_norm to remove the redundant Abs by @justinchuby in #2570
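The Abs removal above rests on a simple identity: for even integer norm orders, `|x|**p == x**p`, so taking the absolute value before the power is a no-op. A sketch for the default `p = 2` (Euclidean) case:

```python
import math

xs = [-3.0, 4.0]
with_abs = math.sqrt(sum(abs(x) ** 2 for x in xs))  # norm with the extra Abs
without_abs = math.sqrt(sum(x ** 2 for x in xs))    # Abs elided: same result

# Both compute the Euclidean norm of (-3, 4), i.e. 5.0.
```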
New Contributors
- @linshokaku made their first contribution in #2566
Full Changelog: v0.5.1...v0.5.2
v0.5.1
What's Changed
Optimizer and rewriter
- Remove CheckerPass from ort_fusion by @justinchuby in #2560
Other Changes
- Bump version from 0.5.0 to 0.5.1 by @justinchuby in #2559
- Use ir.val to replace ir.Input by @justinchuby in #2556
- chore(deps): bump ruff from 0.12.11 to 0.13.0 in /requirements/lintrunner by @dependabot[bot] in #2563
Full Changelog: v0.5.0...v0.5.1