# Installation

## Prerequisites
- Ollama — Local LLM runtime (ollama.com)
- Python 3.12+ — For the backend (if running from source)
- Node.js 22+ — For the frontend (if running from source)
## Option 1: Docker (Recommended)
The easiest way to run Confidential Translator — no coding knowledge required.
```bash
# Clone the repository
git clone https://github.com/jajupmochi/confidential-translator.git
cd confidential-translator

# Start with Docker Compose (includes Ollama)
docker compose up -d
```
Then open http://localhost:8000 in your browser.
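If you want to script a check that the stack actually came up, a minimal sketch is shown below. It only assumes the URL from the instructions above; the helper name `ui_is_up` is hypothetical, not part of the project.

```python
# Minimal sketch: poll the Confidential Translator web UI after
# `docker compose up -d`. The URL comes from the install instructions;
# `ui_is_up` is a hypothetical helper, not part of the project.
import urllib.error
import urllib.request


def ui_is_up(url: str = "http://localhost:8000", timeout: float = 3.0) -> bool:
    """Return True if the web UI answers an HTTP request without a server error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc. -> not reachable.
        return False


if __name__ == "__main__":
    if ui_is_up():
        print("UI is up")
    else:
        print("UI not reachable yet -- check `docker compose logs`")
```

This is just a convenience check; opening the URL in a browser is equally valid.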
### GPU Acceleration

For NVIDIA GPU support, uncomment the `deploy` section in `docker-compose.yml` to enable GPU passthrough to the Ollama container.
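For reference, a commented-out `deploy` block of this kind usually follows the standard Compose syntax for NVIDIA device reservations; the exact contents in this repository's `docker-compose.yml` may differ:

```yaml
services:
  ollama:
    # Requires the NVIDIA Container Toolkit on the host.
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```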
## Option 2: Standalone Binary
Download the latest release for your OS from the Releases page.
| Platform | Download |
|---|---|
| Linux | `confidential-translator-linux-amd64` |
| Windows | `confidential-translator-windows-amd64.exe` |
| macOS | `confidential-translator-macos-x64` |
- Download the executable for your platform and run it.
- The app will auto-start Ollama and open your browser at http://127.0.0.1:8000.