Introduction: CODE IS THE WAY

Welcome to the Sovereign AI Stack Book - a CODE-FIRST guide to building EU-compliant AI systems using the complete Pragmatic AI Labs toolchain.

Core Principle: SHOW, DON’T TELL

This book documents working code. Every claim is verifiable.

# Clone the book
git clone https://github.com/paiml/sovereign-ai-stack-book.git
cd sovereign-ai-stack-book

# Verify EVERYTHING
make test          # All examples compile and pass (20+ tests)
make run-ch01      # Run Chapter 1 example (see sovereign AI in action)
make run-ch03      # Run Chapter 3 (see SIMD speedups yourself)
make run-ch05      # Run Chapter 5 (see quality enforcement)

# Run any chapter's examples
make run-all       # Execute all chapter examples

If make test passes, the book’s claims are true. If not, file an issue.

What Makes This Book Different

1. METRICS OVER ADJECTIVES

❌ Vaporware: “Our tensor library is blazing fast!”
✅ This book: “trueno achieves 11.9x speedup via SIMD (see make bench-ch03)”

❌ Vaporware: “High test coverage ensures quality”
✅ This book: “95.3% line coverage, 82% mutation score, TDG grade A- (91.2)”

2. BRUTAL HONESTY

We show failures, not just successes:

  • Chapter 3 demonstrates when GPU is 65x SLOWER than CPU (PCIe transfer overhead; a back-of-envelope sketch follows this list)
  • Quality enforcement examples show real uncovered lines
  • All benchmarks include variance and test environment specs
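
To make the PCIe point concrete, here is a back-of-envelope sketch. The bandwidth and throughput numbers are illustrative assumptions, not measurements from Chapter 3; the point is only that for a small element-wise workload the round-trip copy can exceed the CPU's compute time.

// Back-of-envelope sketch with illustrative, assumed numbers (not measurements
// from Chapter 3): for a small element-wise workload, the PCIe round trip alone
// can cost more than the CPU computation it was meant to accelerate.
fn main() {
    let elements: f64 = 1024.0 * 1024.0;            // 1M f32 values
    let bytes = elements * 4.0;                      // 4 MiB on the wire
    let pcie_bandwidth: f64 = 16.0e9;                // ~16 GB/s effective, assumed
    let transfer_s = 2.0 * bytes / pcie_bandwidth;   // copy to device and back

    let cpu_flops_per_s: f64 = 50.0e9;               // assumed SIMD throughput
    let flops = 2.0 * elements;                      // one multiply-add per element
    let cpu_compute_s = flops / cpu_flops_per_s;

    println!("PCIe round trip: {:.3} ms", transfer_s * 1e3);
    println!("CPU compute:     {:.3} ms", cpu_compute_s * 1e3);
    println!("Transfer alone is {:.0}x the CPU compute time", transfer_s / cpu_compute_s);
}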

3. ZERO VAPORWARE

Every example:

  • ✅ Compiles with cargo build
  • ✅ Passes tests with cargo test
  • ✅ Runs with cargo run
  • ✅ Benchmarks with cargo bench

No “coming soon” features. No “left as an exercise.” All code works.

4. SCIENTIFIC REPRODUCIBILITY

Following academic standards:

  • Test Environment Documentation: Hardware specs, software versions, date measured
  • Statistical Rigor: Criterion benchmarks with 100+ runs (see the sketch after this list)
  • Variance Tolerance: ±5% acceptable variance documented
  • Reproducibility Protocol: git clone followed by make test validates all claims
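
As a minimal sketch of what "Criterion benchmarks with 100+ runs" looks like in practice: the file name, the dot function, and the vector size below are illustrative assumptions, not the book's actual benchmark code. It assumes criterion is a dev-dependency and the bench target is declared with harness = false in Cargo.toml; run it with cargo bench.

// Minimal Criterion sketch (assumed file benches/dot_product.rs; not the book's
// actual benchmark).
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn bench_dot(c: &mut Criterion) {
    let a = vec![1.0f32; 1024];
    let b = vec![2.0f32; 1024];
    c.bench_function("dot_1024", |bencher| {
        bencher.iter(|| dot(black_box(&a), black_box(&b)))
    });
}

criterion_group! {
    name = benches;
    // At least 100 measured samples per benchmark, matching the list above.
    config = Criterion::default().sample_size(100);
    targets = bench_dot
}
criterion_main!(benches);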

Book Structure

Part 0: The Crisis and The Response (Chapters 1-4)

Establishes why sovereign AI matters:

  • Crisis of determinism (LLMs are non-deterministic)
  • Toyota Way principles (Jidoka, Heijunka, Genchi Genbutsu)
  • EU regulatory compliance (AI Act, GDPR, Cyber Resilience Act)
  • Byzantine Fault Tolerance (dual-model verification)

Part I: Infrastructure Foundations (Chapters 5-7)

Quality enforcement and tensor operations:

  • pmat: O(1) pre-commit validation, TDG scoring, ≥95% coverage
  • trueno: SIMD-accelerated vectors/matrices
  • GPU acceleration (when it helps, honest about when it doesn’t)

Part II-VI: Complete Toolchain

Transpilers, ML pipeline, databases, orchestration, and production deployment.

Who This Book Is For

  • Systems engineers building EU-compliant AI infrastructure
  • ML engineers seeking reproducible, deterministic AI systems
  • CTOs/Architects evaluating sovereign AI solutions
  • Policy makers understanding technical implementation of AI regulations
  • Anyone who can run make test (the code speaks for itself)

Prerequisites

Minimal:

  • Rust installed (rustup update stable)
  • Git
  • Basic command-line skills
  • Curiosity about sovereign AI

Helpful but not required:

  • Familiarity with ML concepts
  • Understanding of EU AI regulations
  • Experience with TDD

How to Use This Book

For Learners

  1. Start with Chapter 1: Run make run-ch01 to see sovereign AI in action
  2. Follow chapters sequentially
  3. Run every example: make run-ch03, make run-ch05, etc.
  4. Modify the code, break it, fix it - learn by doing

For Practitioners

  1. Jump to relevant chapters (see SUMMARY.md)
  2. Copy working examples into your projects
  3. Run benchmarks to verify claims: make bench-ch03
  4. Adapt patterns to your use case

For Auditors/Reviewers

  1. Clone the repository
  2. Run make test - verify all tests pass
  3. Run make bench-all - verify all performance claims
  4. Examine code coverage: make coverage
  5. Review quality metrics: make run-ch05-tdg

The “Noah Gift” Style

This book follows the code patterns from Noah Gift’s repositories:

  • CODE DEMONSTRATES REALITY (not marketing speak)
  • BENCHMARK EVERY PERFORMANCE CLAIM (with statistical rigor)
  • SHOW FAILURES (Genchi Genbutsu - go and see)
  • ZERO VAPORWARE (delete “coming soon”, show working code)
  • MASTER-ONLY GIT (no feature branches, push working code frequently)

Quality Standards

This book enforces EXTREME TDD standards:

  • 95%+ test coverage (enforced by pmat)
  • TDG grade ≥ A- (90+ score)
  • Zero compiler warnings (clippy -D warnings)
  • 80%+ mutation score (tests actually catch bugs; see the sketch after this list)
  • All examples compile and run (CI/CD validates)
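
To illustrate what the mutation-score bar means in practice, here is a hypothetical example (not taken from the book's repository): each assertion exists to kill a specific mutant, so a mutation-testing tool such as cargo-mutants reports the test suite as catching the change rather than merely covering the line.

// Hypothetical example (not from the book's repository): each assertion exists
// to kill a specific mutant, not merely to execute the line.
fn meets_coverage_threshold(coverage: f64, threshold: f64) -> bool {
    coverage >= threshold
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn exact_threshold_passes() {
        // Kills the mutant that replaces `>=` with `>`.
        assert!(meets_coverage_threshold(0.95, 0.95));
    }

    #[test]
    fn below_threshold_fails() {
        // Kills the mutant that hard-codes the function to return `true`.
        assert!(!meets_coverage_threshold(0.90, 0.95));
    }

    #[test]
    fn above_threshold_passes() {
        // Kills the mutant that hard-codes the function to return `false`
        // or inverts the comparison.
        assert!(meets_coverage_threshold(0.99, 0.95));
    }
}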

Contributing

Found an issue? Example doesn’t work?

  1. File an issue: https://github.com/paiml/sovereign-ai-stack-book/issues
  2. Include: Chapter number, error message, environment (rustc --version)
  3. Expected: We fix it (reproducibility is our promise)

Acknowledgments

This book documents the Pragmatic AI Labs toolchain:

  • Built by Noah Gift and team
  • Used in production at https://paiml.com
  • Open source: MIT/Apache-2.0 licensed

Let’s Begin

Ready to see sovereign AI in action?

make run-ch01

Your first sovereign AI program runs in local mode with zero network calls.

Welcome to the Sovereign AI Stack. CODE IS THE WAY.