Self-Hosted Translation
Running your own translation service keeps data private, eliminates per-character API costs, and removes volume limits. Open-source models such as NLLB and Opus-MT can deliver quality approaching that of commercial services.
Using Argos Translate
pip install argostranslate
import argostranslate.package
import argostranslate.translate

# Download and install the English-to-French package
argostranslate.package.update_package_index()
packages = argostranslate.package.get_available_packages()
en_fr = next(p for p in packages if p.from_code == "en" and p.to_code == "fr")
argostranslate.package.install_from_path(en_fr.download())

# Translate
result = argostranslate.translate.translate("Hello world", "en", "fr")
print(result)  # "Bonjour le monde"
LibreTranslate API
# Deploy LibreTranslate (uses Argos Translate under the hood)
docker run -d --name libretranslate \
  -p 5000:5000 \
  -v /opt/libretranslate:/home/libretranslate/.local \
  --restart unless-stopped \
  libretranslate/libretranslate
# API usage
curl -X POST http://localhost:5000/translate \
  -H "Content-Type: application/json" \
  -d '{"q":"Hello world","source":"en","target":"fr"}'
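From application code, the same endpoint can be called with Python's standard library. A minimal client sketch — the URL assumes the container above is running locally, and the helper names are illustrative:

```python
import json
import urllib.request

LIBRETRANSLATE_URL = "http://localhost:5000/translate"  # assumes local deployment

def build_payload(text: str, source: str, target: str) -> bytes:
    """Encode the JSON body expected by LibreTranslate's /translate endpoint."""
    return json.dumps({"q": text, "source": source, "target": target}).encode("utf-8")

def translate(text: str, source: str, target: str) -> str:
    """POST to the service and return the translated text."""
    req = urllib.request.Request(
        LIBRETRANSLATE_URL,
        data=build_payload(text, source, target),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["translatedText"]
```

Using the standard library avoids adding a dependency; swap in requests or httpx if your project already uses one.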
Using LLMs for Translation
# Ollama for high-quality contextual translation
import ollama

result = ollama.chat(model="llama3", messages=[
    {
        "role": "system",
        "content": "You are a professional translator. Translate the following text to French. Preserve formatting and tone.",
    },
    {
        "role": "user",
        "content": "Our server maintenance is scheduled for this weekend.",
    },
])
print(result["message"]["content"])
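LLM context windows are finite, so long documents should be split before translation. A paragraph-aware chunker sketch — the character limit is an illustrative assumption, not an Ollama requirement:

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split text into chunks of at most max_chars, breaking on paragraph
    boundaries so each LLM request receives coherent context."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # Start a new chunk; assumes a single paragraph fits in max_chars
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Translate each chunk in its own chat call and join the results; keeping paragraph boundaries intact helps the model preserve formatting.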
Best Practices
- Use specialized translation models (NLLB, Opus-MT) for bulk translation
- Use LLMs for context-sensitive or creative translation
- Cache translated content to avoid redundant processing
- Add language detection so the source language can be identified automatically
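The caching practice above can be sketched as a small SQLite-backed store keyed on a hash of the text and language pair; the schema and helper names are illustrative assumptions:

```python
import hashlib
import sqlite3

class TranslationCache:
    """Persist translations so repeated inputs skip the translation backend."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, result TEXT)"
        )

    @staticmethod
    def _key(text: str, source: str, target: str) -> str:
        # Hash the language pair and text together to get a compact primary key
        return hashlib.sha256(f"{source}:{target}:{text}".encode()).hexdigest()

    def get_or_translate(self, text, source, target, translate_fn):
        key = self._key(text, source, target)
        row = self.db.execute(
            "SELECT result FROM cache WHERE key = ?", (key,)
        ).fetchone()
        if row:
            return row[0]  # cache hit: no backend call
        result = translate_fn(text, source, target)
        self.db.execute("INSERT INTO cache VALUES (?, ?)", (key, result))
        self.db.commit()
        return result
```

Any backend — Argos Translate, LibreTranslate, or an LLM — can be passed as translate_fn, so the cache sits in front of whichever service you deploy.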