## First Training

Fine-tune a model with LoRA:

```bash
# Fine-tune with LoRA adapter
apr finetune model.gguf \
  --adapter lora \
  --rank 64 \
  --data train.jsonl \
  --output adapter.safetensors

# Apply the adapter
apr merge model.gguf adapter.safetensors -o fine-tuned.gguf
```
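LoRA keeps the base weights frozen and trains a low-rank correction ΔW = (α/r)·B·A, where `--rank 64` sets the inner dimension r; merging folds that correction back into the base matrix. A minimal NumPy sketch of the idea (layer shape, α, and names are illustrative, not apr internals):

```python
import numpy as np

d_out, d_in, rank, alpha = 4096, 4096, 64, 128

# Frozen base weight plus two small trainable factors (the LoRA adapter).
W = np.random.randn(d_out, d_in)
A = np.random.randn(rank, d_in) * 0.01
B = np.random.randn(d_out, rank) * 0.01  # zero-initialized in real LoRA; random here so the merge check is non-trivial
scale = alpha / rank

def forward(x):
    # Base projection plus the low-rank correction: W x + (alpha/r) * B (A x)
    return W @ x + scale * (B @ (A @ x))

# Merging folds the adapter into the base weight, which is what a
# merge step does conceptually: one dense matrix, no extra inference cost.
W_merged = W + scale * (B @ A)

x = np.random.randn(d_in)
assert np.allclose(forward(x), W_merged @ x)

# The adapter holds only a small fraction of the full matrix's parameters.
print(f"trainable fraction: {(A.size + B.size) / W.size:.3%}")  # → 3.125%
```

This is why rank-64 adapters train quickly and produce small `.safetensors` files: on a 4096×4096 layer the adapter is about 3% of the layer's parameters.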
## Training Pipeline

For full training workflows:

```bash
# Plan (dry-run, shows what will happen)
apr train plan config.yaml

# Execute
apr train apply config.yaml

# Monitor in real-time
apr monitor
```
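The plan/apply split follows the usual dry-run pattern: `plan` resolves the config into an ordered step list and prints it without touching anything, while `apply` runs the same list. A toy sketch of that pattern (the config keys and step names are invented for illustration, not apr's actual schema):

```python
def run_pipeline(config, dry_run=True):
    """Resolve a config into an ordered step list; execute only when dry_run is False."""
    steps = [
        f"load dataset {config['data']}",
        f"fine-tune {config['model']} for {config['epochs']} epoch(s)",
        f"write checkpoints to {config['output']}",
    ]
    for step in steps:
        prefix = "[plan]" if dry_run else "[apply]"
        # Plan mode only reports; apply mode is where a real runner would do the work.
        print(prefix, step)
    return steps

cfg = {"model": "model.gguf", "data": "train.jsonl", "epochs": 3, "output": "checkpoints/"}
run_pipeline(cfg, dry_run=True)  # preview only, nothing is modified
```

Building the full step list before executing anything is what makes the preview trustworthy: plan and apply are guaranteed to walk the same steps.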
## Knowledge Distillation

Compress a large teacher model into a smaller student:

```bash
apr distill \
  --teacher large-model.gguf \
  --student small-model.gguf \
  --data train.jsonl \
  -o distilled.gguf
```
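Distillation trains the student to match the teacher's output distribution, typically by minimizing a temperature-softened KL divergence over the two models' logits. A minimal NumPy sketch of that loss (function names and the temperature value are illustrative, not apr's implementation):

```python
import numpy as np

def log_softmax(logits, T):
    # Temperature-softened log-probabilities, computed stably.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over softened distributions, scaled by T^2."""
    t_logp = log_softmax(teacher_logits, T)
    s_logp = log_softmax(student_logits, T)
    t_p = np.exp(t_logp)
    return float((t_p * (t_logp - s_logp)).sum(axis=-1).mean() * T * T)

student = np.array([[1.0, 0.5, -0.5]])
teacher = np.array([[2.0, 1.0, -1.0]])
print(distill_loss(student, teacher))  # positive when the distributions differ
print(distill_loss(teacher, teacher))  # → 0.0 when they match
```

A temperature above 1 softens the teacher's distribution so the student also learns the relative ranking of unlikely tokens, not just the top prediction; in practice this soft loss is usually mixed with the ordinary cross-entropy on the training labels.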