Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
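The history below is for pytorch_lightning/callbacks/gradient_accumulation_scheduler.py, which implements the GradientAccumulationScheduler callback for changing the Trainer's gradient accumulation factor on a per-epoch schedule. A minimal usage sketch, assuming the public scheduling-dict API; the schedule values and the MyLightningModule name are illustrative, not taken from this history:

import pytorch_lightning as pl
from pytorch_lightning.callbacks import GradientAccumulationScheduler

# Epoch keys are 0-indexed (see #2289/#2206 below); values are how many
# batches to accumulate before each optimizer step. The numbers here are
# example choices, not defaults.
accumulator = GradientAccumulationScheduler(scheduling={0: 4, 4: 2, 8: 1})

trainer = pl.Trainer(callbacks=[accumulator])
# trainer.fit(MyLightningModule())  # MyLightningModule is a hypothetical model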
COMMITS
pytorch_lightning/callbacks/gradient_accumulation_scheduler.py
Simplify optimization Logic (#4984), chaton committed on December 7, 2020
Add stronger typing to gradient accumulation scheduler callback (#3558), ananthsub committed on September 23, 2020
added copyright notices (#3062), William Falcon committed on August 20, 2020
Fix accumulate_grad_batches for last batch (#2853), Jeff Yang committed on August 15, 2020
clean imports (#2867), Jirka Borovec committed on August 7, 2020
Revert/Fix: epoch indexing from 1, to be from 0 (#2289), Jirka Borovec committed on June 20, 2020
deprecated: epoch indexing from 1 (#2206), Jirka Borovec committed on June 16, 2020
changelog (#1616), Jirka Borovec committed on April 26, 2020
add rank warning (#1428), Jirka Borovec committed on April 9, 2020
Improved docs for callbacks (#1370), Adrian Wälchli committed on April 5, 2020
CI: Force docs warnings to be raised as errors (+ fix all) (#1191), Adrian Wälchli committed on March 20, 2020
improve partial Codecov (#1172), Jirka Borovec committed on March 19, 2020
Test deprecated API for 0.8.0 and 0.9.0 (#1071), Jirka Borovec committed on March 6, 2020
Docs2 (#1028), William Falcon committed on March 3, 2020
Callbacks [wip] (#889), Hadrien Mary committed on February 26, 2020
Split callbacks (#849), Hadrien Mary committed on February 23, 2020