Pretrain and finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
bump: Torch `2.5` (#20351)
* bump: Torch `2.5.0`
* push docker
* docker
* 2.5.1 and mypy
* update USE_DISTRIBUTED=0 test
* also for pytorch lightning no distributed
* set USE_LIBUV=0 on windows
* try drop pickle warning
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci
* disable compiling update_metrics
* bump 2.2.x to bugfix
* disable also log in logger connector (also calls metric)
* more point release bumps
* remove unloved type ignore and print some more on exit
* update checkgroup
* minor versions
* shortened version in build-pl
* pytorch 2.4 is with python 3.11
* 2.1 and 2.3 without patch release
* for 2.4.1: docker with 3.11, test with 3.12

---------

Co-authored-by: Thomas Viehmann <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

(cherry picked from commit 61a403a512466d65ebe730b1cc0cf4a909a533f2)
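The "set USE_LIBUV=0 on windows" item relates to PyTorch 2.4+ defaulting the TCPStore rendezvous backend to libuv, which can break distributed initialization on Windows. A minimal sketch of how such an override might be applied before `torch.distributed` starts up; the helper name and structure are hypothetical illustrations, not code from this commit:

```python
import os

def distributed_env_overrides(platform: str = os.name) -> dict:
    # Hypothetical helper mirroring the commit's CI change: on Windows
    # ("nt"), disable the libuv TCPStore backend that PyTorch >= 2.4
    # enables by default, falling back to the legacy TCP backend.
    env = {}
    if platform == "nt":
        env["USE_LIBUV"] = "0"
    return env

# Apply the overrides to the process environment before any
# torch.distributed initialization happens.
os.environ.update(distributed_env_overrides())
```

The variable must be exported before the process group is created, which is why a change like this lands in CI/environment setup rather than in library code.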
Jirka Borovec committed
b1eceb151693b49bbf0c02b217b974c00cbdd22a
Parent: d62b53a
Committed by Luca Antiga <[email protected]> on 11/12/2024, 9:05:41 PM