Appendix A: Batuta Oracle Consultation
Query: “distributed LLM training across heterogeneous GPUs using sovereign AI stack”
Response (2026-03-01):
- Primary: repartir (95% confidence) — distributed computing primitives
- Supporting: entrenar (70% confidence) — distributed_training pattern
- Supporting: trueno (80% confidence) — SIMD/GPU backend for compute acceleration
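
For readers who want to consume this response programmatically, the sketch below models it as plain Rust data and filters recommendations by confidence. The `Recommendation` struct, `Role` enum, and the 0.75 cutoff are illustrative assumptions for this appendix, not batuta's actual output format or API; only the three crate names, confidences, and rationales come from the response above.

```rust
// Hypothetical data shapes: `Role` and `Recommendation` are assumptions
// made for this sketch, not types exported by the batuta crate.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Role {
    Primary,
    Supporting,
}

#[derive(Debug)]
struct Recommendation {
    crate_name: &'static str, // e.g. "repartir"
    confidence: f64,          // 0.0..=1.0, from the oracle's reported percentage
    role: Role,
    rationale: &'static str,
}

fn main() {
    // The three recommendations exactly as reported in Appendix A.
    let response = vec![
        Recommendation {
            crate_name: "repartir",
            confidence: 0.95,
            role: Role::Primary,
            rationale: "distributed computing primitives",
        },
        Recommendation {
            crate_name: "entrenar",
            confidence: 0.70,
            role: Role::Supporting,
            rationale: "distributed_training pattern",
        },
        Recommendation {
            crate_name: "trueno",
            confidence: 0.80,
            role: Role::Supporting,
            rationale: "SIMD/GPU backend for compute acceleration",
        },
    ];

    // Keep recommendations at or above an assumed 0.75 confidence cutoff,
    // sorted by descending confidence.
    let threshold = 0.75;
    let mut accepted: Vec<&Recommendation> = response
        .iter()
        .filter(|r| r.confidence >= threshold)
        .collect();
    accepted.sort_by(|a, b| b.confidence.partial_cmp(&a.confidence).unwrap());

    for r in &accepted {
        println!(
            "{:?}: {} ({:.0}%) — {}",
            r.role,
            r.crate_name,
            r.confidence * 100.0,
            r.rationale
        );
    }
}
```

Under that assumed cutoff, repartir and trueno would be adopted directly while entrenar (70%) would be flagged for manual review before inclusion in the training stack.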