Add checkpoint artifact path prefix to MLflow logger (#20538)
* Add a new `checkpoint_artifact_path_prefix` parameter to the MLflow logger.
* Modify `src/lightning/pytorch/loggers/mlflow.py` to include the new parameter in the `MLFlowLogger` class constructor and use it in the `after_save_checkpoint` method.
* Update the documentation in `docs/source-pytorch/visualize/loggers.rst` to include the new `checkpoint_artifact_path_prefix` parameter.
* Add a new test in `tests/tests_pytorch/loggers/test_mlflow.py` to verify the functionality of the `checkpoint_artifact_path_prefix` parameter and ensure it is used in the artifact path.
* Add a CHANGELOG entry.
* Fix the MLflow logger test for `checkpoint_path_prefix`.
* Update stale documentation.

---------

Co-authored-by: Luca Antiga <[email protected]>
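The intended effect of the new parameter can be sketched as follows. Note that `prefixed_checkpoint_artifact_path` is a hypothetical helper written only to illustrate the prefixing behavior described above, not Lightning's actual implementation, and the commented `MLFlowLogger` usage assumes the constructor signature this commit introduces.

```python
def prefixed_checkpoint_artifact_path(checkpoint_filename: str, prefix: str = "") -> str:
    """Illustrative only: join an optional prefix onto a checkpoint's
    MLflow artifact path, mirroring what `checkpoint_artifact_path_prefix`
    is meant to do for checkpoints logged by the MLflow logger."""
    if not prefix:
        return checkpoint_filename
    # Avoid a doubled separator if the caller passes a trailing slash.
    return f"{prefix.rstrip('/')}/{checkpoint_filename}"


# Intended usage with the new parameter (assumed signature, sketch only):
#
#   from lightning.pytorch.loggers import MLFlowLogger
#   logger = MLFlowLogger(
#       experiment_name="my-experiment",
#       checkpoint_artifact_path_prefix="my-run/checkpoints",
#   )
#
# Checkpoints would then land under the prefixed artifact path:
print(prefixed_checkpoint_artifact_path("epoch=3-step=1000.ckpt", "my-run/checkpoints"))
# prints my-run/checkpoints/epoch=3-step=1000.ckpt
```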
Ben Lewis committed 87108d82a55050da9b2aec8f8043e5987d20cbf4
Parent: 93d707d
Committed by GitHub <[email protected]> on 3/10/2025, 1:15:05 PM