[bug-fix] DDP and automatic_optimization=False (#4485)
* resolve bug
* add self._running_manual_optim
* update
* update tests
* update lightning module
* resolve bug
* update tests
* update
* resolve pep8
* update
* replace by `ddp_spawn`
* temporary fix
* update
* update
* move update to training_loop
* make both ddp_spawn
* introduce `manual_optimizer_step`
* update changelog
* added changelog wrong place
* add force_optimizer_step
* update docstring for tests
* update optimizer_step
* update zero_grad
* resolve flake8
* move update into manual_optimizer_step
* add zero_grad
* remove zero_grad tests
* remove manual_backward in AMP, it doesn't help
* update
* loosen tests
* update
* update doc
* add TODO
* Removed unnecessary get model from native amp
* Remove try except with pytest raise
* Add seed, clean up imports, remove try catch to reproduce error
* update code
* update test
* revert back
* formatting
* Update pytorch_lightning/core/lightning.py

Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: SeanNaren <[email protected]>
Co-authored-by: Sean Naren <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
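For orientation, below is a minimal sketch of the manual-optimization flow this commit fixes for DDP, written against the 1.0-era API the bullets above reference: `Trainer(automatic_optimization=False)`, `manual_backward`, and the `manual_optimizer_step` hook this PR introduces. The model, data, and hyperparameters are invented for illustration, and exact signatures changed in later releases (`manual_optimizer_step` was eventually removed), so treat this as a sketch rather than the PR's actual test code.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ManualOptimModel(pl.LightningModule):
    """Toy module that drives its own backward/step, as in manual optimization."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        opt = self.trainer.optimizers[0]
        # route backward through Lightning instead of loss.backward() so the
        # DDP reduction and AMP scaling hooks still fire
        self.manual_backward(loss, opt)
        # the hook introduced by this PR: steps the optimizer on the user's
        # schedule while staying consistent with DDP
        self.manual_optimizer_step(opt)

    def train_dataloader(self):
        data = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        return DataLoader(data, batch_size=8)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


if __name__ == "__main__":
    # the PR's tests additionally run this on 2 GPUs with the ddp_spawn
    # backend; `automatic_optimization` was a Trainer flag in this release line
    trainer = pl.Trainer(max_epochs=1, automatic_optimization=False)
    trainer.fit(ManualOptimModel())
```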
chaton committed 7e08b0d710208e980f27896aa62a59f26f0cb3b3
Parent: abf1d4b
Committed by GitHub <[email protected]> on 11/10/2020, 7:44:51 PM