Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
Allow returning `ONNXProgram` when calling `to_onnx(dynamo=True)` (#20811)
* feat: return `ONNXProgram` when exporting with dynamo=True.
* test: add `to_onnx(dynamo=True)` unit tests.
* fix: add ignore filter in pyproject.toml.
* fix: change the return type annotation of `to_onnx`.
* test: add parametrized `dynamo` to `test_if_inference_output_is_valid`.
* test: add difference check in `test_model_return_type`.
* fix: fix unit test.
* test: add test `test_model_onnx_export_missing_onnxscript`.
* feat: enable `ONNXProgram` export on torch 2.5.0.
* extensions

---------

Co-authored-by: Jirka B <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
GdoongMathew committed
577c04d0c45e2bffa7d88b97932fea99e9bd6a94
Parent: 105bb20
Committed by GitHub <[email protected]>
on 8/12/2025, 5:18:12 PM