Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
CI: enable testing with upcoming PT 2.2 (#19289)
* ci: build dockers for PT 2.2
* py3.12
* --pre --extra-index-url
* typing-extensions
* bump jsonargparse
* install latest jsonargparse
* Add windows skips for Fabric
* convert to xfail
* add pytorch skips
* skip checkpoint consolidation test
* set max torch

---------

Co-authored-by: awaelchli <aedu.waelchli@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Jirka Borovec committed
3bd133b1074c2d962787259a8e0acc4e16d0c683
Parent: ee9f17e
Committed by GitHub <noreply@github.com>
on 1/26/2024, 3:42:09 PM