Getting Started with FileMaker and Private AI: Ollama
In today’s AI-driven business landscape, public cloud services like OpenAI, Copilot, and Perplexity dominate the headlines. But fewer people realise that Apple has been building AI capability into its hardware for years. Since the launch of the Apple M1 chip in 2020, every Apple Silicon chip has included a dedicated Neural Engine (Apple’s neural processing unit), making powerful, private AI accessible from your desktop or laptop.
That means with even a modest Apple Silicon Mac, you can now run your own private AI service—powered by Large Language Models (LLMs)—without relying on third-party cloud providers.
Why Run Local AI Models on Your Mac?
Running local LLMs gives you:
- Full data privacy – your data stays on your machine
- No cloud dependency – perfect for secure or offline environments
- Faster response times – no network latency
- Cost control – no pay-per-query billing
Whether you’re in healthcare, legal services, finance, or manufacturing, private AI lets you leverage powerful automation without compromising data compliance or sovereignty.
Step-by-Step: Setting Up Ollama for Local AI
Ollama is an open-source tool that makes running LLMs locally easy. It supports models like:
- Meta’s Llama 3
- DeepSeek’s R1
- Google’s Gemma
1. Choose the Right Model
Ollama supports various versions of Llama 3:
- Llama 3.2 (~3 billion parameters, ~2GB in size) – suitable for Apple M1 with 8GB RAM
- Llama 3.3 (~70 billion parameters, up to 150GB) – requires high-end M3 Pro/M4 systems with significant RAM
2. Download and Install Ollama
👉 Download here: https://ollama.com/download
- Move Ollama.app into your Applications folder
- Open the app and approve any permissions (including full disk access if prompted)
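Before downloading any models, a quick sanity check in Terminal confirms the command-line tool was installed alongside the app:
# prints the installed Ollama version
ollama --version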
3. Run Your First Model
Open Terminal and run:
ollama run llama3.2
The first time you run this, Ollama downloads the model automatically (about 2GB for llama3.2), then starts an interactive prompt. Enter a question or command to test how the model responds.
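To leave the interactive prompt, type /bye. Two other commands worth knowing at this stage, assuming the default llama3.2 tag used above:
# list downloaded models and their size on disk
ollama list
# pass the prompt on the command line for a one-off answer, with no interactive session
ollama run llama3.2 "Explain what a Large Language Model is in one sentence."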
4. Check the Server Is Running
Open a web browser and go to: http://localhost:11434
If everything is configured correctly, you should see “Ollama is running” displayed in the browser window.
If not, check your local firewall and security settings to ensure port 11434 is open and accessible.
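You can run the same check from Terminal if you prefer; the root endpoint returns a short plain-text status message:
# should print: Ollama is running
curl http://localhost:11434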
Testing the API Locally with Postman
Debugging API integrations in FileMaker can be tricky, so we recommend using Postman for testing first.
Configure a New Request:
- Method: POST
- URL: http://localhost:11434/api/generate
- Body (JSON):
{
"model": "llama3.2",
"prompt": "What are the limitations of llama3.2?",
"stream": false
}
This sends a prompt to the local LLM. Because "stream" is set to false, Ollama waits until the full response has been generated before returning it, rather than streaming tokens back as they are produced.
You should receive a 200 OK response along with the full JSON output.
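For reference, here is the same request as a single curl command (curl sends a POST automatically when --data is used), which is handy once you move beyond Postman:
curl http://localhost:11434/api/generate \
  --data '{
    "model": "llama3.2",
    "prompt": "What are the limitations of llama3.2?",
    "stream": false
  }'
The generated text is returned in the JSON "response" key, alongside timing and token-count metadata.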

Connecting Ollama with FileMaker Pro
Once the API is working locally, the next step is to connect FileMaker to Ollama.
If you’re unfamiliar with converting cURL requests into FileMaker-compatible Insert from URL script steps, export your working request from Postman as a cURL snippet, then run it through a free cURL-to-FileMaker parser. These tools convert the snippet into native FileMaker script steps.
Here’s the basic structure for such a script (a sketch follows the list):
- Read prompt text from a field
- Send POST request using Insert from URL
- Parse the JSON response
- Write the output back into a FileMaker field
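A minimal sketch of those steps as a FileMaker script, assuming hypothetical AI::Prompt and AI::Response fields; adapt the names and add error handling to suit your solution:
# Build the JSON request body from the prompt field
Set Variable [ $json ; Value:
    JSONSetElement ( "{}" ;
        [ "model" ; "llama3.2" ; JSONString ] ;
        [ "prompt" ; AI::Prompt ; JSONString ] ;
        [ "stream" ; False ; JSONBoolean ]
    ) ]
# POST the request to the local Ollama server
Insert from URL [ With dialog: Off ; Target: $result ;
    "http://localhost:11434/api/generate" ;
    cURL options: "-X POST --header \"Content-Type: application/json\" --data @$json" ]
# The generated text sits in the "response" key of the reply
Set Field [ AI::Response ; JSONGetElement ( $result ; "response" ) ]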

Hosting Options: On-Premise or Private Cloud
If your Claris FileMaker Server runs on an Apple Silicon Mac or a compatible GPU-based machine, you can host Ollama on the same system. Alternatively, we can help you migrate to a private cloud cluster for high-performance, scalable AI—without sacrificing data privacy or control.
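One deployment detail worth knowing: by default Ollama only listens on localhost, so a FileMaker Server on another machine cannot reach it. Ollama’s FAQ describes exposing it on the network via the OLLAMA_HOST environment variable; on macOS this is set with launchctl (verify against the current docs, and only do this on a trusted network):
# allow connections from other machines on the network (macOS)
launchctl setenv OLLAMA_HOST "0.0.0.0"
# quit and reopen the Ollama app to pick up the change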
What’s Next?
🔍 Explore real-world use cases and benefits in our detailed post:
👉 Supercharge FileMaker with Private AI: Integrating Local LLMs like Llama 3
Ready to Bring Private AI Into Your Business?
At DataTherapy, we specialise in integrating private AI with Claris FileMaker. Whether you’re just exploring or ready to go all-in on secure, AI-enhanced automation—we can help.
✅ Certified Claris FileMaker Developers
✅ UK-based team of full-time professionals
✅ Platinum Claris Partner
✅ Experts in secure, on-premise and private cloud AI deployment
📞 Contact us today for a free consultation and discover how local AI can transform your business—without ever sending your data to the public cloud.

