Course 4: GenAI Engineering on Databricks
Subtitle
Build LLM Applications with Foundation Models, Vector Search, and RAG
Description
Construct production GenAI systems on Databricks: serve foundation models, implement vector search for semantic retrieval, build RAG pipelines, and fine-tune models for domain adaptation. Understand the internals by building equivalent systems with the Sovereign AI Stack.
Learning Outcomes
- Serve and query foundation models on Databricks
- Generate embeddings and build vector search indexes
- Implement production RAG pipelines with hybrid retrieval
- Fine-tune models with LoRA/QLoRA for domain adaptation
- Deploy privacy-aware GenAI systems with governance controls for data access and auditing
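To make the retrieval outcome concrete, here is a minimal sketch of the retrieval step in a RAG pipeline. Everything in it is illustrative: the hash-based `embed()` is a toy stand-in for a real embedding model served from an endpoint, and the in-memory cosine ranking stands in for a managed vector search index.

```python
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy deterministic pseudo-embedding: bucket character codes into
    # `dim` slots, then L2-normalize. A real pipeline would call an
    # embedding model instead.
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query embedding; return top-k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Delta Lake stores tables as Parquet files plus a transaction log.",
    "LoRA adds low-rank adapter matrices to frozen model weights.",
    "Vector search retrieves documents by embedding similarity.",
]
top = retrieve("How does vector similarity retrieval work?", docs, k=1)
```

The retrieved passages would then be concatenated into the prompt before the generation call; that composition step is what Weeks 3–4 build out with a production index.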
Duration
~34 hours | 40 videos | 12 labs | 5 quizzes | 1 capstone
Weeks
| Week | Topic | Sovereign AI Stack |
|---|---|---|
| 1 | Foundation Models and LLM Serving | realizar, tokenizers |
| 2 | Prompt Engineering and Structured Output | batuta, serde |
| 3 | Embeddings and Vector Search | trueno, trueno-rag |
| 4 | RAG Pipelines | trueno-rag, alimentar |
| 5 | Fine-Tuning and Model Security | entrenar, pacha |
| 6 | Production Deployment | batuta, renacer |
| 7 | Capstone: Enterprise Knowledge Assistant | Full stack |
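Week 5's fine-tuning topic rests on one piece of arithmetic worth seeing up front: LoRA freezes the base weight matrix W and trains only a low-rank pair (A, B), merging them as W' = W + (alpha / r) · B @ A. The numeric sketch below uses tiny illustrative dimensions and plain Python lists; it shows the merge math only, not a training loop.

```python
def matmul(X, Y):
    # Plain-Python matrix multiply: (m x n) @ (n x p) -> (m x p).
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_merge(W, A, B, alpha: float, r: int):
    # W' = W + (alpha / r) * B @ A, with W frozen and only A, B trained.
    delta = matmul(B, A)          # (d_out x r) @ (r x d_in) -> d_out x d_in
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 frozen base weights, rank-1 adapter pair (hypothetical numbers).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]                  # r x d_in  = 1 x 2
B = [[0.5], [0.25]]               # d_out x r = 2 x 1
W_merged = lora_merge(W, A, B, alpha=2.0, r=1)
# W_merged -> [[2.0, 2.0], [0.5, 2.0]]
```

The payoff is parameter count: for a d_out × d_in layer, the trainable adapter has r · (d_in + d_out) parameters instead of d_in · d_out, which is what makes QLoRA-style fine-tuning feasible on modest GPUs.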
Databricks Free Edition Features Used
- Playground (Foundation Models)
- Vector Search (indexes registered via Unity Catalog)
- Genie (AI/BI demo)
- Experiments (evaluation tracking)
- Jobs & Pipelines (RAG orchestration)