Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
[Feat] Add TORCH_DISTRIBUTED_BACKEND env variable (#5981)
* add backend support
* resolve flake8
* update changelog
* update
* Apply suggestions from code review
* Update docs/source/advanced/multi_gpu.rst
* add patch as context manager

Co-authored-by: Carlos Mocholí <[email protected]>
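The feature above lets an environment variable select the `torch.distributed` backend. A minimal sketch of how such an override might be resolved is below; the helper name `resolve_distributed_backend` and the set of accepted backends are illustrative assumptions, not Lightning's actual implementation:

```python
import os

def resolve_distributed_backend(default: str = "nccl") -> str:
    """Pick the distributed backend, letting the TORCH_DISTRIBUTED_BACKEND
    environment variable override the default.

    Illustrative sketch only; Lightning's real logic may differ.
    """
    backend = os.environ.get("TORCH_DISTRIBUTED_BACKEND", default)
    # Common torch.distributed backends; the validated set is an assumption here.
    valid = {"nccl", "gloo", "mpi"}
    if backend not in valid:
        raise ValueError(
            f"Unknown distributed backend {backend!r}; expected one of {sorted(valid)}"
        )
    return backend

# With the variable unset, the default backend is returned.
os.environ.pop("TORCH_DISTRIBUTED_BACKEND", None)
print(resolve_distributed_backend())  # nccl

# Setting the variable overrides the default without any code changes.
os.environ["TORCH_DISTRIBUTED_BACKEND"] = "gloo"
print(resolve_distributed_backend())  # gloo
```

The resolved string would then be passed to `torch.distributed.init_process_group(backend=...)` by the training framework.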
chaton committed 5700fd091fcb17d0465d10969d98bc98eec8cd09
Parent: 5157ba5
Committed by GitHub <[email protected]> on 2/17/2021, 4:37:39 PM