Licenses & Credits

Last updated: May 6, 2026

OwnPodAI is built with open-source software and AI models. We are grateful to the developers and researchers who make private on-device AI possible.

Open Source Libraries

llama.cpp (MIT)
Georgi Gerganov and contributors
High-performance inference engine for large language models. The core runtime that powers on-device AI inference in OwnPodAI, compiled for ARM64 Apple Silicon.
github.com/ggml-org/llama.cpp →
ggml (MIT)
Georgi Gerganov
Tensor computation library optimized for CPU and GPU inference. The mathematical foundation used by llama.cpp for efficient on-device computation.
github.com/ggml-org/ggml →

AI Models

Gemma 4 / Gemma 2 (Gemma Terms)
Google DeepMind
Open-weight models from Google. Gemma 4 supports thinking and vision capabilities. Available in sizes optimized for mobile inference.
ai.google.dev/gemma/terms →
Qwen 3 / Qwen 2.5 (Apache 2.0)
Alibaba Cloud — Qwen Team
Multilingual models supporting 100+ languages. They excel at creative writing, coding, and reasoning tasks, and are available in 0.6B, 1.7B, and 4B sizes.
github.com/QwenLM/Qwen →
Llama 3.2 (Llama License)
Meta
Meta's open-weight model family optimized for edge devices. Strong general-purpose capabilities in compact form factors.
llama.meta.com/llama3/license →
Phi-4 (MIT)
Microsoft Research
Compact model with strong reasoning and thinking capabilities. Optimized for efficient mobile inference with high quality outputs.
huggingface.co/microsoft/phi-4 →
Ministral 3 (Apache 2.0)
Mistral AI
Efficient edge model from Mistral AI. Designed specifically for on-device deployment with strong multilingual support.
mistral.ai/licenses →
Granite 4.0 (Apache 2.0)
IBM
Enterprise-grade model from IBM Research. Strong at summarization, document understanding, and structured output generation.
huggingface.co/ibm-granite →
Cogito v1 (Apache 2.0)
Deep Cogito
Reasoning-focused model with thinking capabilities. Optimized for complex problem-solving and analytical tasks.
huggingface.co/deepcogito →
Bonsai (Apache 2.0)
PrismML
Ultra-efficient 1-bit quantized models. They deliver 8B-parameter intelligence in just 1 GB, making powerful AI accessible on any iPhone.
prismml.com →
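As a rough illustration of why 1-bit quantization fits an 8B-parameter model in about 1 GB: weight storage scales linearly with bits per weight. The sketch below is a back-of-the-envelope estimate only, ignoring embedding tables, metadata, KV cache, and runtime overhead; the function name and precision values are illustrative, not from OwnPodAI.

```python
def model_size_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight storage for a model, in decimal gigabytes.

    Ignores overhead such as tokenizer data, file metadata, and
    the KV cache allocated at inference time.
    """
    bytes_total = params * bits_per_weight / 8  # 8 bits per byte
    return bytes_total / 1e9

# An 8B-parameter model at different precisions:
print(model_size_gb(8e9, 16))  # fp16 baseline: 16.0 GB
print(model_size_gb(8e9, 4))   # typical 4-bit quant: 4.0 GB
print(model_size_gb(8e9, 1))   # 1-bit: 1.0 GB
```

The same arithmetic explains why the 4-bit GGUF quantizations listed below are the common sweet spot for phones: roughly a quarter of the fp16 footprint with modest quality loss.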
LFM 2.5 / LFM 2 (LFM License)
Liquid AI
Hybrid architecture models with vision and thinking capabilities. Compact sizes optimized for edge deployment with state-of-the-art performance.
huggingface.co/LiquidAI →
SmolLM2 (Apache 2.0)
Hugging Face
Compact language model designed for on-device inference. Excellent for basic chat, summarization, and lightweight tasks.
huggingface.co/HuggingFaceTB/SmolLM2 →
StableLM 2 (Stability License)
Stability AI
Compact conversational model from Stability AI. Good for everyday chat and creative writing at small model sizes.
huggingface.co/stabilityai/stablelm-2 →

GGUF Quantizations

Bartowski GGUF (Same as source)
bartowski
Pre-quantized GGUF model files optimized for various hardware. Licenses follow the original model licenses.
huggingface.co/bartowski →
Unsloth GGUF (Same as source)
Unsloth AI
Optimized GGUF quantizations with efficient fine-tuning support. Licenses follow the original model licenses.
huggingface.co/unsloth →

Platforms & Services

Apple Foundation Models (Apple Terms)
Apple Inc.
On-device foundation model framework provided by Apple. Used under Apple's standard developer terms for on-device AI inference.
developer.apple.com/documentation/FoundationModels →
Hugging Face (Apache 2.0)
Hugging Face, Inc.
AI model hosting and distribution platform. All downloadable models in OwnPodAI are sourced from Hugging Face repositories.
huggingface.co →
Apple Speech Framework (Apple Terms)
Apple Inc.
On-device speech recognition framework used for voice input. All processing happens locally on the device.
developer.apple.com/documentation/speech →