Daily Shaarli
December 9, 2025
Ollama makes it easy to run large language models (LLMs) locally on your computer. It provides a lightweight runtime with an OpenAI-compatible API, a model library, and a simple installation process.
With Ollama, you can download and run models such as Llama, Mistral, Gemma, and Phi directly on macOS, Linux, or Windows.
It supports GPU acceleration, custom model creation, and integration with developer tools. Designed for privacy and control, Ollama keeps all data on your machine while enabling powerful AI workflows without relying on cloud services.
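As a concrete illustration of that local workflow, here is a minimal sketch that calls a running Ollama instance over its REST API on the default port 11434. The model name "llama3" and the prompt are assumptions; substitute any model you have already pulled.

```python
# Minimal sketch: query a locally running Ollama instance over its REST API.
# Assumes `ollama serve` is running on the default port 11434 and that a
# model named "llama3" has already been pulled (e.g. `ollama pull llama3`).
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",          # any locally pulled model name works here
        "prompt": "Explain what a large language model is in one sentence.",
        "stream": False,            # return one JSON object instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the generated text
```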
Notes:
🖥️ Run LLMs locally with minimal setup.
📦 Includes a growing library of prebuilt models.
⚡ Supports GPU acceleration for faster inference.
🔒 Privacy-first: data stays on your device.
🔧 Developer-friendly with an OpenAI-compatible API (see the sketch after this list).
🌐 Cross-platform: macOS, Linux, Windows.
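Because the API is OpenAI-compatible, existing OpenAI client code can simply be pointed at the local instance. A minimal sketch, assuming the official openai Python package is installed and a "llama3" model has been pulled (both assumptions, not part of the original post):

```python
# Minimal sketch: reuse the OpenAI Python client against Ollama's
# OpenAI-compatible endpoint. The API key is required by the client
# but not checked by Ollama.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # placeholder value; ignored locally
)

chat = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize why local inference helps privacy."}],
)
print(chat.choices[0].message.content)
```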
InvokeAI is a leading creative engine built on Stable Diffusion, designed to empower professionals, artists, and enthusiasts to generate and refine visual media with cutting-edge AI technologies.
It offers an industry-leading web-based UI, a unified canvas for inpainting and outpainting, node-based workflows, and gallery management. Compatible with SD1.5, SD2.0, SDXL, and FLUX models, InvokeAI supports upscaling, embeddings, and advanced workflow creation.
Free to use under a commercially friendly license, it's the foundation for multiple commercial products and a vibrant open-source community.
Notes:
🚀 Runs locally with a powerful web UI.
🎨 Unified Canvas for sketching, inpainting, and outpainting.
🧩 Node-based workflows for customizable pipelines.
📁 Organized gallery system with metadata for easy remixing.
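InvokeAI itself is driven through its web UI, but the Stable Diffusion family it builds on can also be exercised directly from Python. The sketch below uses the Hugging Face diffusers library rather than InvokeAI's own interface (a deliberate swap for illustration); the SDXL checkpoint name, prompt, and output path are all assumptions.

```python
# Minimal sketch of local text-to-image with an SDXL checkpoint, using the
# Hugging Face diffusers library instead of InvokeAI's web UI.
# Assumes a CUDA GPU with enough VRAM; falls back to CPU (slow) otherwise.
import torch
from diffusers import StableDiffusionXLPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",   # example SDXL checkpoint
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

image = pipe(
    prompt="a watercolor painting of a lighthouse at dusk",
    num_inference_steps=30,
).images[0]
image.save("lighthouse.png")
```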