Save a Checkpoint
This page documents the `advanced/checkpoint.py save` command in mint-quickstart.
What this command does
- creates a fresh LoRA training client
- runs one minimal SFT step with `cross_entropy`
- saves a full training-state checkpoint with `save_state(...)`
- saves a sampler-only checkpoint with `save_weights_for_sampler(...)`
- prints both server-side checkpoint paths for later download or resume
Before you run it
- finish Getting Started
- set `MINT_API_KEY`
- optionally set `MINT_BASE_URL` to the MinT endpoint for your region
Use the MinT endpoint that matches your region:
- Mainland China: `https://mint-cn.macaron.xin/`
- Outside Mainland China: `https://mint.macaron.xin/`
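The configuration resolution above (required API key, optional regional endpoint) can be sketched as a small helper. This is an illustration of the precedence the doc describes, not the quickstart's actual code; the fallback endpoint shown is the non-Mainland-China URL and should be swapped for your region.

```python
def resolve_config(env: dict) -> tuple[str, str]:
    """Resolve MinT settings from an environment mapping.

    MINT_API_KEY is required; MINT_BASE_URL is optional and falls back to
    a regional default (illustrative choice below).
    """
    api_key = env.get("MINT_API_KEY")
    if not api_key:
        raise RuntimeError("MINT_API_KEY is not set; export it before running.")
    base_url = env.get("MINT_BASE_URL", "https://mint.macaron.xin/")
    return api_key, base_url

# Typically you would pass os.environ here.
api_key, base_url = resolve_config({"MINT_API_KEY": "sk-example"})
```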
Command
```bash
export MINT_API_KEY=sk-...
python advanced/checkpoint.py save --name my-ckpt
```

Optional overrides:
- `--model`: defaults to `MINT_BASE_MODEL` or `Qwen/Qwen3-0.6B`
- `--rank`: defaults to `MINT_LORA_RANK` or `16`
- `--lr`: defaults to `MINT_RL_LR` or `5e-5`
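The flag-then-environment-then-default precedence above can be expressed with `argparse`. This is a sketch of that precedence, not the script's actual parser; the helper name is invented for illustration.

```python
import argparse

def build_parser(env: dict) -> argparse.ArgumentParser:
    # Each flag falls back to its environment variable, then to the
    # documented default, matching the override table in the doc.
    p = argparse.ArgumentParser(description="Save a checkpoint")
    p.add_argument("--name", required=True)
    p.add_argument("--model", default=env.get("MINT_BASE_MODEL", "Qwen/Qwen3-0.6B"))
    p.add_argument("--rank", type=int, default=int(env.get("MINT_LORA_RANK", "16")))
    p.add_argument("--lr", type=float, default=float(env.get("MINT_RL_LR", "5e-5")))
    return p

# With no env overrides, the documented defaults apply.
args = build_parser({}).parse_args(["--name", "my-ckpt"])
```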
Core APIs
```python
state_ckpt = training_client.save_state(name=f"{name}-state").result()
sampler_ckpt = training_client.save_weights_for_sampler(name=f"{name}-sampler").result()
```

`save_state(...)` preserves weights and optimizer state so training can resume later. `save_weights_for_sampler(...)` produces a weights-only checkpoint intended for inference or export.
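Both calls follow the same asynchronous pattern: they return a future-like handle immediately, and `.result()` blocks until the server finishes writing the checkpoint. The stub below sketches only that calling pattern; the real client comes from the SDK, and the checkpoint field names and path layout here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    path: str  # server-side checkpoint path (field name assumed for the sketch)

class _Future:
    """Minimal future-like wrapper: .result() yields the finished checkpoint."""
    def __init__(self, value):
        self._value = value
    def result(self):
        return self._value

class StubTrainingClient:
    # Stand-in for the SDK client; path shapes are illustrative.
    def save_state(self, name):
        return _Future(Checkpoint(path=f"tinker://server/weights/{name}"))
    def save_weights_for_sampler(self, name):
        return _Future(Checkpoint(path=f"tinker://server/sampler_weights/{name}"))

name = "my-ckpt"
training_client = StubTrainingClient()
state_ckpt = training_client.save_state(name=f"{name}-state").result()
sampler_ckpt = training_client.save_weights_for_sampler(name=f"{name}-sampler").result()
```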
Expected output
```text
[save] model=Qwen/Qwen3-0.6B rank=16 lr=5e-05 name=my-ckpt
[save] running forward_backward (1 datum, cross_entropy)...
[save] step done
[save] state (weights+optimizer): tinker://.../weights/my-ckpt-state
[save] sampler (weights only): tinker://.../sampler_weights/my-ckpt-sampler
```

Current SDKs commonly print `tinker://...` paths here. The companion download and resume commands accept those raw paths directly.
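If you capture those printed paths in a script, the path segment is enough to tell the two checkpoint kinds apart. The helper below assumes the `weights/` vs `sampler_weights/` layout seen in the sample output above; that layout is observed, not a documented contract.

```python
def checkpoint_kind(path: str) -> str:
    """Classify a printed checkpoint path by its directory segment."""
    # Check the more specific segment first.
    if "/sampler_weights/" in path:
        return "sampler (weights only)"
    if "/weights/" in path:
        return "state (weights+optimizer)"
    return "unknown"

kind = checkpoint_kind("tinker://service/run/sampler_weights/my-ckpt-sampler")
```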