Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
Make optimizers skippable when using amp (#7975)
Co-authored-by: Yifu Wang <yifuwang2012@gmail.com>
Co-authored-by: Carlos Mocholi <carlossmocholi@gmail.com>
Yifu Wang committed
b71aa55b9e81dd5cabb0ddf6b57dae916de6f2bf
Parent: 0004216
Committed by GitHub <noreply@github.com>
on 6/16/2021, 12:23:30 AM
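For context on the commit subject: under `torch.cuda.amp`, `GradScaler.step(optimizer)` silently skips the optimizer step whenever the scaled gradients contain infs/NaNs, so training code must tolerate steps that never happen. The sketch below is a minimal, framework-free illustration of that skip behavior, assuming a hypothetical `step_if_finite` helper; it is not Lightning's or PyTorch's implementation.

```python
import math


def step_if_finite(grads, apply_update):
    """Mimic AMP's GradScaler.step contract: run the optimizer
    update only when every gradient is finite; otherwise skip
    the step entirely (as AMP does after a bad loss-scale)."""
    if all(math.isfinite(g) for g in grads):
        apply_update()
        return True   # step was taken
    return False      # step was skipped; callers must handle this


# Usage: a toy parameter updated only on finite gradients.
params = {"w": 1.0}

def update():
    params["w"] -= 0.1  # pretend SGD update

took_step = step_if_finite([0.5], update)          # finite grad: update runs
skipped = step_if_finite([float("inf")], update)   # inf grad: update skipped
```

A caller such as a training loop (or, in Lightning's case, hooks that fire around `optimizer.step`) must not assume the update happened on every iteration.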