Example Overview

This chapter provides runnable examples demonstrating batuta’s capabilities across the Sovereign AI Stack.

Running Examples

All examples are in the examples/ directory and can be run with:

cargo run --example <example_name>

Some examples require specific Cargo features:

# Examples requiring oracle-mode
cargo run --example oracle_demo --features oracle-mode

# Examples requiring inference
cargo run --example serve_demo --features inference

# Examples requiring native features (TUI, tracing)
cargo run --example stack_graph_tui --features native

Example Categories

Core Pipeline Examples

| Example | Description | Features |
|---------|-------------|----------|
| pipeline_demo | 5-phase transpilation pipeline with Jidoka validation | - |
| backend_selection | Cost-based GPU/SIMD/Scalar selection | - |
| moe_routing | Mixture-of-Experts backend routing | - |
| full_transpilation | End-to-end transpilation workflow | - |

ML Framework Conversion

| Example | Description | Features |
|---------|-------------|----------|
| numpy_conversion | NumPy → Trueno operation mapping | - |
| sklearn_conversion | scikit-learn → Aprender migration | - |
| pytorch_conversion | PyTorch → Realizar conversion | - |

Oracle Mode Examples

| Example | Description | Features |
|---------|-------------|----------|
| oracle_demo | Knowledge graph queries | oracle-mode |
| oracle_local_demo | Local workspace discovery | oracle-mode |
| rag_oracle_demo | RAG-enhanced oracle queries | oracle-mode |

Stack Management

| Example | Description | Features |
|---------|-------------|----------|
| stack_dogfood | Self-analysis of batuta codebase | native |
| stack_graph_tui | TUI visualization of stack dependencies | native |
| stack_quality_demo | Quality metrics across stack | native |
| stack_diagnostics_demo | Comprehensive stack health check | native |
| publish_status_demo | crates.io publish status checker | - |
| sovereign_stack_e2e | End-to-end stack validation | - |

Infrastructure Components

| Example | Description | Features |
|---------|-------------|----------|
| trueno_zram_demo | SIMD compression with trueno-zram | - |
| trueno_ublk_demo | GPU block device acceleration | - |
| repartir_distributed | Distributed computing patterns | - |
| multi_machine_demo | Multi-node GPU/SIMD orchestration | - |

Model Serving

| Example | Description | Features |
|---------|-------------|----------|
| serve_demo | Privacy-tiered model serving | inference |
| whisper_apr_demo | Whisper ASR inference | inference |
| pepita_kernel_demo | GPU kernel interfaces | - |
| int8_rescore_demo | INT8 quantized inference | inference |

Content & Data

| Example | Description | Features |
|---------|-------------|----------|
| content_demo | Content analysis and generation | - |
| hf_catalog_demo | HuggingFace catalog integration | - |
| parf_analysis | PARF (Project ARtifact Format) analysis | - |

MCP Integration

| Example | Description | Features |
|---------|-------------|----------|
| mcp_demo | MCP server integration | - |
| custom_plugin | Custom plugin development | - |
| graph_tui_demo | Graph visualization TUI | native |

Quick Start Examples

1. Pipeline Demo (No Features Required)

cargo run --example pipeline_demo

Demonstrates the 5-phase transpilation pipeline with Jidoka (stop-on-error) validation.
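The sketch below illustrates, in Rust, how a stop-on-error phase pipeline of this kind can be structured. The phase names, types, and helpers are illustrative assumptions for this page, not batuta's actual API.

// Hypothetical sketch: each phase either produces input for the next phase
// or halts the whole pipeline with an error (Jidoka: stop the line on defects).
#[derive(Debug)]
struct PhaseError(String);

type PhaseResult = Result<String, PhaseError>;

fn run_pipeline(source: &str) -> PhaseResult {
    // Illustrative phase names; the real pipeline's phases may differ.
    let phases: Vec<(&str, fn(&str) -> PhaseResult)> = vec![
        ("parse", |s| Ok(format!("parsed({s})"))),
        ("analyze", |s| Ok(format!("analyzed({s})"))),
        ("transform", |s| Ok(format!("transformed({s})"))),
        ("generate", |s| Ok(format!("generated({s})"))),
        ("validate", |s| Ok(format!("validated({s})"))),
    ];

    let mut artifact = source.to_string();
    for (name, phase) in phases {
        // Jidoka: the first failing phase stops the pipeline immediately.
        artifact = phase(&artifact)
            .map_err(|e| PhaseError(format!("phase '{name}' failed: {}", e.0)))?;
    }
    Ok(artifact)
}

fn main() {
    match run_pipeline("fn main() {}") {
        Ok(out) => println!("pipeline ok: {out}"),
        Err(e) => eprintln!("pipeline halted: {}", e.0),
    }
}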

2. Oracle Local Demo

cargo run --example oracle_local_demo --features oracle-mode

Discovers PAIML projects in ~/src and shows their development state (Clean/Dirty/Unpushed).
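A minimal sketch of how such per-repository states might be derived with git; the RepoState enum, the path, and the git commands below are assumptions for illustration, not the demo's actual implementation.

use std::path::Path;
use std::process::Command;

// Illustrative states; the demo reports Clean/Dirty/Unpushed per project.
#[derive(Debug)]
enum RepoState {
    Clean,    // no uncommitted changes, nothing to push
    Dirty,    // uncommitted changes in the working tree
    Unpushed, // clean tree, but local commits not on the remote
}

fn repo_state(path: &Path) -> std::io::Result<RepoState> {
    // Uncommitted changes show up in `git status --porcelain`.
    let status = Command::new("git")
        .args(["status", "--porcelain"])
        .current_dir(path)
        .output()?;
    if !status.stdout.is_empty() {
        return Ok(RepoState::Dirty);
    }

    // Commits ahead of the upstream branch mean there is work to push.
    let ahead = Command::new("git")
        .args(["rev-list", "--count", "@{upstream}..HEAD"])
        .current_dir(path)
        .output()?;
    let count = String::from_utf8_lossy(&ahead.stdout)
        .trim()
        .parse::<u32>()
        .unwrap_or(0);
    if count > 0 {
        return Ok(RepoState::Unpushed);
    }
    Ok(RepoState::Clean)
}

fn main() -> std::io::Result<()> {
    // Hypothetical path; the demo scans ~/src for PAIML projects.
    let state = repo_state(Path::new("/home/user/src/batuta"))?;
    println!("batuta: {state:?}");
    Ok(())
}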

3. Stack Quality Demo

cargo run --example stack_quality_demo --features native

Analyzes quality metrics across the Sovereign AI Stack components.

4. Backend Selection Demo

cargo run --example backend_selection

Shows cost-based GPU/SIMD/Scalar backend selection using the 5× PCIe rule.
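The sketch below shows one plausible reading of that rule: offload to the GPU only when the CPU compute it replaces costs at least five times the PCIe transfer. The interpretation, throughput constants, and thresholds are assumptions for illustration, not batuta's actual heuristic.

// Illustrative backend choice under an assumed "5x PCIe rule".
#[derive(Debug)]
enum Backend {
    Gpu,
    Simd,
    Scalar,
}

fn select_backend(bytes: u64, flops: u64) -> Backend {
    // Hypothetical throughput figures, purely for illustration.
    let pcie_bytes_per_sec = 16e9; // ~PCIe 4.0 x16
    let gpu_flops_per_sec = 10e12;
    let simd_flops_per_sec = 200e9;

    let transfer_s = bytes as f64 / pcie_bytes_per_sec;
    let gpu_compute_s = flops as f64 / gpu_flops_per_sec;
    let simd_compute_s = flops as f64 / simd_flops_per_sec;

    // Assumed rule: only pay the PCIe transfer when the CPU work it avoids
    // is at least 5x larger, and the GPU path is faster end to end.
    if simd_compute_s > 5.0 * transfer_s && gpu_compute_s + transfer_s < simd_compute_s {
        Backend::Gpu
    } else if flops > 10_000 {
        Backend::Simd
    } else {
        Backend::Scalar
    }
}

fn main() {
    // Small job: the transfer dominates, so stay on the CPU (SIMD).
    println!("{:?}", select_backend(1 << 20, 1_000_000));
    // Large job: compute dwarfs the transfer, so the GPU wins.
    println!("{:?}", select_backend(1 << 30, 1_000_000_000_000));
}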

Example Dependencies

Some examples have external dependencies:

  • Model files: serve_demo and whisper_apr_demo require GGUF/APR model files
  • GPU: CUDA examples require an NVIDIA GPU with the CUDA toolkit installed
  • Network: hf_catalog_demo requires internet access for HuggingFace API

Building All Examples

Verify all examples compile:

cargo check --examples
cargo check --examples --features oracle-mode,native,inference
