Caching MNIST dataset for testing (#917)
* Caching MNIST dataset for testing
* Added MNIST dataset to the tests directory
* Caches dataset based on the hash of the test.pt file
* Cleaned up yml file
* Removed MNIST data from framework
* Set cache key for dataset to 'mnist'
* Apply suggestions from code review

Co-authored-by: Jirka Borovec <[email protected]>
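As an illustration of the test-side caching described above, here is a minimal sketch of fetching MNIST into a tests-local directory so that a restored CI cache makes the download a no-op. The path `Datasets`, the function name, and the batch size are assumptions for illustration, not code taken from this PR.

    import os

    from torch.utils.data import DataLoader
    from torchvision import transforms
    from torchvision.datasets import MNIST

    # Assumed cache location: the PR moves the dataset out of the framework
    # package and into the tests directory.
    DATASETS_PATH = os.path.join(os.path.dirname(__file__), 'Datasets')

    def mnist_test_dataloader(batch_size=32):
        """Build an MNIST loader for tests, downloading only on a cache miss.

        torchvision skips the download when the processed files (test.pt
        among them) already exist, so restoring DATASETS_PATH from a CI
        cache makes this call a no-op.
        """
        dataset = MNIST(DATASETS_PATH, train=False, download=True,
                        transform=transforms.ToTensor())
        return DataLoader(dataset, batch_size=batch_size)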
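The bullet about keying the cache on the hash of test.pt (before the review settled on the static key 'mnist') could look roughly like the sketch below. `file_cache_key` is a hypothetical helper, not a function from this PR; it only shows the idea that identical file contents yield an identical cache key, so the cache is reused until the dataset file itself changes.

    import hashlib

    def file_cache_key(path, prefix='mnist'):
        """Derive a CI cache key from a file's contents, e.g. MNIST's test.pt."""
        digest = hashlib.sha256()
        with open(path, 'rb') as fh:
            # Hash in chunks so large dataset files don't load into memory.
            for chunk in iter(lambda: fh.read(8192), b''):
                digest.update(chunk)
        return '{}-{}'.format(prefix, digest.hexdigest())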
Donal Byrne committed
Commit: 985408413643a602e0e4443862c8b64cf9d0d38e
Parent: ceec51d
Committed by GitHub <[email protected]> on 2/25/2020, 2:20:41 PM