Enable DDP Plugin to pass through args to LightningDistributedDataParallel (#4382)
* Update ddp_plugin.py
* Update ddp_plugin.py
* Update ddp_plugin.py
* Update test_ddp_plugin.py
* Update pytorch_lightning/plugins/ddp_plugin.py
* Update pytorch_lightning/plugins/ddp_plugin.py
* Fixed imports, make ddp_kwargs protected

Co-authored-by: SeanNaren <sean.narenthiran@gmail.com>
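Per the commit title, the change lets users hand keyword arguments (e.g. `find_unused_parameters`) to the DDP plugin, which then forwards them to the wrapped `LightningDistributedDataParallel` module. A minimal sketch of that pass-through pattern, using simplified stand-in classes rather than the real PyTorch Lightning implementations:

```python
class LightningDistributedDataParallel:
    """Stand-in for the real DDP wrapper; records the kwargs it receives."""

    def __init__(self, module, **kwargs):
        self.module = module
        self.kwargs = kwargs


class DDPPlugin:
    """Stand-in plugin: stores user kwargs on a protected attribute
    (the commit notes ddp_kwargs was made protected) and forwards them
    when wrapping the model."""

    def __init__(self, **ddp_kwargs):
        self._ddp_kwargs = ddp_kwargs

    def configure_ddp(self, model):
        # Pass user-supplied kwargs straight through to the wrapper.
        return LightningDistributedDataParallel(model, **self._ddp_kwargs)


plugin = DDPPlugin(find_unused_parameters=False)
wrapped = plugin.configure_ddp(model="my_model")
print(wrapped.kwargs)  # {'find_unused_parameters': False}
```

The class and method names here mirror those mentioned in the commit, but the bodies are illustrative only; the real plugin does considerably more (process-group setup, device placement, etc.).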
ananthsub committed
6878f3bf4e2df94eed430cb9db4b47e419948af9
Parent: c50c225
Committed by GitHub <noreply@github.com>
on 10/27/2020, 12:27:59 PM