Advanced Checkpoint Save

Save a Checkpoint

This page documents the save command of advanced/checkpoint.py in mint-quickstart.

What this command does

  • creates a fresh LoRA training client
  • runs one minimal SFT step with cross_entropy
  • saves a full training-state checkpoint with save_state(...)
  • saves a sampler-only checkpoint with save_weights_for_sampler(...)
  • prints both server-side checkpoint paths for later download or resume

Before you run it

  • finish Getting Started
  • set MINT_API_KEY
  • optionally set MINT_BASE_URL to the MinT endpoint for your region

Use the MinT endpoint that matches your region:

  • Mainland China: https://mint-cn.macaron.xin/
  • Outside Mainland China: https://mint.macaron.xin/
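The endpoint choice can be expressed once through the environment. A minimal sketch of the fallback logic, assuming the script reads MINT_BASE_URL the same way it reads its other MINT_* variables, with the non-Mainland endpoint as the built-in default:

```python
import os

def resolve_base_url() -> str:
    # An explicit MINT_BASE_URL wins; otherwise fall back to the
    # endpoint for use outside Mainland China.
    return os.environ.get("MINT_BASE_URL", "https://mint.macaron.xin/")

# With the variable set, the regional endpoint is used as-is.
os.environ["MINT_BASE_URL"] = "https://mint-cn.macaron.xin/"
print(resolve_base_url())  # https://mint-cn.macaron.xin/
```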

Command

export MINT_API_KEY=sk-...
python advanced/checkpoint.py save --name my-ckpt

Optional overrides:

  • --model: defaults to MINT_BASE_MODEL or Qwen/Qwen3-0.6B
  • --rank: defaults to MINT_LORA_RANK or 16
  • --lr: defaults to MINT_RL_LR or 5e-5
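The precedence is flag over environment variable over built-in default. A sketch of how that resolution could look with argparse; the flag and variable names match the list above, while the env_default helper itself is illustrative:

```python
import argparse
import os

def env_default(var: str, fallback: str) -> str:
    # Illustrative helper: use the environment variable if set,
    # otherwise the built-in default.
    return os.environ.get(var, fallback)

parser = argparse.ArgumentParser()
parser.add_argument("--model", default=env_default("MINT_BASE_MODEL", "Qwen/Qwen3-0.6B"))
parser.add_argument("--rank", type=int, default=int(env_default("MINT_LORA_RANK", "16")))
parser.add_argument("--lr", type=float, default=float(env_default("MINT_RL_LR", "5e-5")))

# A flag on the command line overrides both the env var and the default.
args = parser.parse_args(["--rank", "32"])
```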

Core APIs

state_ckpt = training_client.save_state(name=f"{name}-state").result()
sampler_ckpt = training_client.save_weights_for_sampler(name=f"{name}-sampler").result()

save_state(...) preserves weights and optimizer state for later training resume. save_weights_for_sampler(...) produces a weights-only checkpoint intended for inference or export.
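Both calls return a future-like handle whose .result() blocks until the checkpoint is written server-side. A minimal sketch of that call pattern using a stand-in client; the class and the example paths below are illustrative, not the real SDK:

```python
from concurrent.futures import Future

class StubTrainingClient:
    """Stand-in that mimics the save_state / save_weights_for_sampler call shape."""

    def _done(self, path: str) -> Future:
        fut: Future = Future()
        fut.set_result(path)  # the real SDK resolves once the server finishes writing
        return fut

    def save_state(self, name: str) -> Future:
        # Full training state: weights plus optimizer state, resumable later.
        return self._done(f"tinker://example/weights/{name}")

    def save_weights_for_sampler(self, name: str) -> Future:
        # Weights only: enough for inference or export, not for resuming training.
        return self._done(f"tinker://example/sampler_weights/{name}")

client = StubTrainingClient()
state_path = client.save_state(name="my-ckpt-state").result()
sampler_path = client.save_weights_for_sampler(name="my-ckpt-sampler").result()
```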

Expected output

[save] model=Qwen/Qwen3-0.6B rank=16 lr=5e-05 name=my-ckpt
[save] running forward_backward (1 datum, cross_entropy)...
[save] step done
[save] state (weights+optimizer): tinker://.../weights/my-ckpt-state
[save] sampler (weights only):    tinker://.../sampler_weights/my-ckpt-sampler

Current SDKs commonly print tinker://... paths here. The companion download and resume commands accept those raw paths directly.
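If you script the follow-up download or resume steps, the printed paths can be told apart by their directory segment. A small illustrative parser, assuming the weights/ vs sampler_weights/ layout shown in the output above:

```python
from urllib.parse import urlparse

def checkpoint_kind(path: str) -> str:
    # Classify a printed checkpoint path by its directory segment.
    # Assumes the /weights/ vs /sampler_weights/ layout shown above.
    segments = urlparse(path).path.split("/")
    if "sampler_weights" in segments:
        return "sampler"
    if "weights" in segments:
        return "state"
    return "unknown"

print(checkpoint_kind("tinker://run-123/weights/my-ckpt-state"))            # state
print(checkpoint_kind("tinker://run-123/sampler_weights/my-ckpt-sampler"))  # sampler
```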

Next steps

  • Download a checkpoint archive: Download
  • Upload a local archive back to MinT: Upload
  • Continue training from the saved state: Resume