Notes tagged with llm

A complete, budget-friendly guide to building a private, always-available LLM server on a Raspberry Pi 5, using Ollama as the inference engine, Tailscale for secure remote access, and Chatbox as a desktop/mobile client.
A deep technical guide to designing, orchestrating, and scaling multi-agent systems using LLM-based agents, coordination protocols, and modern AI engineering patterns.
A comprehensive guide to running Large Language Models locally on affordable single-board computers, exploring hardware options, performance benchmarks, and practical setup for private and cost-effective AI.
A deep dive into building a scalable, environment-driven architecture for integrating multiple LLM providers using the Factory pattern: how to abstract provider differences, manage configurations, and build production-ready AI applications that can switch between OpenAI, Anthropic, Google Gemini, and local models without code changes.
A guide to deciphering model names for better AI engineering decisions.