MLflow
Track ML experiments, register models, and compare runs — all from your Calliope workspace.
Overview
MLflow is the standard open-source platform for managing the machine learning lifecycle. Inside Calliope, MLflow runs as a shared service so your whole team can log experiments, track metrics, compare model versions, and manage artifacts — without any infrastructure setup. Whether you’re training in a Lab notebook or running scripts in the IDE, MLflow is ready to receive your run data.
Key Features
- Experiment Tracking — Log parameters, metrics, and artifacts from every training run
- Run Comparison — Side-by-side comparison of runs across metrics and parameters
- Model Registry — Version, stage, and promote models (Staging → Production)
- Artifact Storage — Save model files, plots, datasets, and any files alongside runs
- Auto-logging — One-line integration with scikit-learn, PyTorch, TensorFlow, XGBoost, and more
Getting Started
- From the Hub, click MLflow to launch the tracking UI
- Browse existing experiments or create a new one
- In your notebook or script, log to MLflow:
```python
import mlflow

mlflow.set_experiment("my-experiment")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 50)
    # ... training code ...
    mlflow.log_metric("accuracy", 0.94)
    mlflow.log_metric("loss", 0.12)
    mlflow.log_artifact("model.pkl")
```
- Switch to the MLflow UI to see your run appear
Auto-logging
Enable one-line auto-logging for popular frameworks:
```python
import mlflow

# scikit-learn
mlflow.sklearn.autolog()

# PyTorch Lightning
mlflow.pytorch.autolog()

# TensorFlow/Keras
mlflow.tensorflow.autolog()

# XGBoost
mlflow.xgboost.autolog()
```
Auto-logging captures parameters, metrics, and model artifacts automatically, with no manual log calls.
Comparing Runs
In the MLflow UI:
- Go to your experiment
- Select multiple runs using the checkboxes
- Click Compare to open a side-by-side view
- View parameter diffs, metric charts, and artifact comparisons
Model Registry
Promote your best models through stages:
- After a successful run, click Register Model in the run view
- Create a new model or add a version to an existing one
- Transition versions through stages:
None → Staging → Production → Archived
- Load registered models by name in downstream code:

```python
model = mlflow.sklearn.load_model("models:/my-model/Production")
```
Connecting from Your Workspace
The MLflow tracking server is pre-configured in your Calliope environment. No URI setup needed — just import mlflow and start logging:
```python
import mlflow
# Already pointed at your Calliope MLflow server
```
If you need to set the tracking URI explicitly:

```python
mlflow.set_tracking_uri("https://your-hub/mlflow")
```
When to Use MLflow
| Task | Tool |
|---|---|
| Tracking model training runs | MLflow |
| Comparing hyperparameter experiments | MLflow |
| Registering production models | MLflow |
| Writing training code | AI Notebook Lab or AI IDE |
| Data preparation and EDA | AI Notebook Lab |