Ollama

Ollama is a local AI orchestration platform for running and managing large language models (LLMs) entirely on your own hardware, with no dependency on cloud services. It prioritises privacy, data control, and offline functionality, making it suitable for developers and organisations that need secure, customisable AI. Ollama supports multiple open-source models such as Llama 3, Code Llama, and Phi-3, offers CLI-based control with Modelfile customisation, and runs cross-platform on macOS, Linux, and Windows (experimental). It is especially well suited to workloads involving sensitive data, rapid prototyping, and edge deployments, combining flexibility with fully local execution.
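The Modelfile customisation mentioned above works much like a Dockerfile for models: you derive a named variant from a base model, set sampling parameters, and bake in a system prompt. A minimal sketch (the base model, parameter value, and system prompt here are illustrative, not a recommendation):

```
# Modelfile — derive a custom assistant from a locally pulled base model
FROM llama3                      # base model, fetched with `ollama pull llama3`
PARAMETER temperature 0.2        # lower temperature for more deterministic answers
SYSTEM You are a concise assistant for internal documentation queries.
```

You would build and run this variant with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant`; the custom model then behaves like any other local model in Ollama's CLI and API.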

Our thoughts on Ollama

by Ben Fletcher 18 August 2025
Want faster, smarter FileMaker apps with AI? We compared gpt-oss, Llama, and DeepSeek to find the best local model for Apple hardware. Find out which one wins.
by Ben Fletcher 30 June 2025
Integrate Claris FileMaker with local vision AI using llama3.2-vision and Ollama. Keep data private while adding powerful image processing to your workflows.
by Ben Fletcher 17 June 2025
Discover the power of DeepSeek R1, an efficient, open-source AI model well suited to private, local deployment using Ollama and Apple Silicon. Learn how DeepSeek compares to LLaMA, Mistral, and other LLMs in real-world applications, and explore how it integrates seamlessly with Claris FileMaker for secure, on-premise AI automation. Ideal for businesses needing low-cost, high-performance, and fully private AI solutions.
by Ben Fletcher 16 May 2025
Learn how to run private AI locally using Ollama and integrate it with Claris FileMaker. Discover step-by-step setup, model recommendations, and how to build secure, AI-powered workflows without sending data to the cloud. Perfect for businesses prioritising data privacy and compliance.