by Ben Fletcher • 30 June 2025
Previously, we have discussed why you may be interested in leveraging local LLMs to keep data private, how to get started with integrating Claris FileMaker with Ollama, and the trade-offs between different text-generative LLMs (Llama vs DeepSeek). The common thread in all these articles is that they cover LLMs optimised for processing text prompt inputs to generate text outputs. However, the Ollama site also provides models that support vision processing, including Gemma3, LLaVA and llama3.2-vision. Impressively, vision processing is still viable on a modest Apple Silicon Mac using the llama3.2-vision model.

A quick reminder - Why run local AI models on your Mac?

Running local LLMs gives you:

Full data privacy – your data stays on your machine.
No cloud dependency – perfect for secure or offline environments.
Faster response times – no network latency.
Cost control – no pay-per-query billing.
Development feasibility testing – quick prototyping to assess the efficiency of an AI-assisted workflow and the fit of different models for your application.

Whether you’re in healthcare, legal services, finance, or manufacturing, private AI lets you leverage powerful automation without compromising data compliance or sovereignty.

Setting Up llama3.2-vision integration with Claris FileMaker

1. Choose the Right Model

The llama3.2-vision build weighs in at 7.8GB and runs with reasonable performance on an entry-level M3-class MacBook Air with 24GB RAM. The more advanced 'multimodal' llama4 weighs in at 67GB by comparison - good luck running that with anything less than the 128GB+ RAM configurations of the M2/M3 Ultra or M4 Pro variants.

2. Download and Install Ollama

👉 Download here: https://ollama.com/download

Move Ollama.app into your Applications folder.
Open the app and approve any permissions (including full disk access if prompted).

3. Run Your First Model

Open Terminal and run:

ollama run llama3.2-vision

4. Check the Server Is Running

Open a web browser and go to: http://localhost:11434/

If everything is configured correctly, you should see “Ollama is running” in the browser window. If not, check your local firewall and security settings to ensure port 11434 is open and accessible.

Connecting llama3.2-vision with FileMaker Pro

As a starting point, the integration process is identical to the text-only models, but the image needs to be converted to Base64 text encoding and sent as an additional "images" parameter in the JSON request body (an array of Base64-encoded image strings). The simplest way to do this within FileMaker is to save your image into a Container field and then convert it using FileMaker's built-in Base64Encode() function. The result could be stored in a temporary variable or a calculation field, as in the example below:
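Here is a minimal sketch of that calculation. The field name (Images::Picture), the variable name ($json) and the prompt text are illustrative assumptions rather than part of a shipped solution, so adjust them to match your own file. The calculation assembles the JSON request body for Ollama's /api/generate endpoint, placing the Base64-encoded container content as the first element of the "images" array:

Set Variable [ $json ; Value:
    JSONSetElement ( "{}" ;
        [ "model" ; "llama3.2-vision" ; JSONString ] ;
        [ "prompt" ; "Describe the contents of this image." ; JSONString ] ;
        [ "stream" ; False ; JSONBoolean ] ;
        // Base64Encode converts the Container field to text; the path "images[0]"
        // makes it the first entry in the "images" JSON array
        [ "images[0]" ; Base64Encode ( Images::Picture ) ; JSONString ]
    )
]

One thing to watch: Base64Encode wraps its output with line breaks, which some servers are fussy about; if the request is rejected, Base64EncodeRFC ( 4648 ; Images::Picture ) can be used instead to produce an unwrapped encoding.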
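From there, sending the request follows the same pattern as the earlier text-only integration. The sketch below is again only illustrative (it assumes a script variable $result as the target and that $json was set as above); when stream is false, the generated text comes back in the "response" key of the reply:

Insert from URL [ Select ; With dialog: Off ; Target: $result ;
    "http://localhost:11434/api/generate" ;
    cURL options: "-X POST --header \"Content-Type: application/json\" --data @$json" ]

Set Variable [ $responseText ; Value: JSONGetElement ( $result ; "response" ) ]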