Ollama - a runtime for LLMs that lets you run models locally. My favorite model at the moment: qwen2.5vl:7b-q4_K_M. At only 6.6 GB, it runs smoothly on a MacBook Air M4 while still leaving enough memory and headroom to run other programs alongside it. The model is surprisingly usable in chat and, above all, has excellent vision capabilities. Ideal for generating titles, alt text, or summaries for images without paying the big providers for it. And an important building block for bringing bDS back to fully offline operation.
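As a sketch of how the alt-text use case can look in practice: Ollama serves a local HTTP API (by default on port 11434), and its /api/generate endpoint accepts base64-encoded images for multimodal models. The function names `build_payload` and `describe_image` below are my own, and the code assumes an Ollama instance is running locally with the model already pulled:

```python
import base64
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    Multimodal models take images as a list of base64-encoded strings.
    """
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # one complete response instead of a token stream
    }


def describe_image(path: str,
                   prompt: str = "Write a concise alt text for this image.",
                   model: str = "qwen2.5vl:7b-q4_K_M") -> str:
    """Ask a locally running Ollama instance to describe an image file."""
    with open(path, "rb") as f:
        payload = build_payload(model, prompt, f.read())
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Since everything talks to localhost, no image data ever leaves the machine - which is exactly the point of the offline setup.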