❓ Frequently Asked Questions
Common questions and answers about Overseer.
Table of Contents
- General
- Installation & Setup
- Usage
- Configuration
- Troubleshooting
- Performance
- Security
- Cost & Pricing
- Advanced Topics
General
What is Overseer?
Overseer is a self-hosted AI agent platform that gives you complete control over your server through natural language. It combines:
- Chat interfaces (Telegram, Discord, Web)
- 35+ built-in tools for server management
- 20+ LLM providers (OpenAI, Claude, Gemini, local models)
- Extensible architecture (skills, MCP servers, sub-agents)
How is Overseer different from ChatGPT or Claude?
| Feature | Overseer | ChatGPT | Claude |
|---|---|---|---|
| Server Control | ✅ Full VPS access | ❌ No | ❌ No |
| Self-Hosted | ✅ Your infrastructure | ❌ Cloud only | ❌ Cloud only |
| Data Privacy | ✅ Complete | ⚠️ Sent to OpenAI | ⚠️ Sent to Anthropic |
| Customization | ✅ Full (SOUL.md, tools) | ⚠️ Limited | ⚠️ Limited |
| Cost | 💰 API usage only | 💰💰 $20+/month | 💰💰 $20+/month |
| Provider Choice | ✅ 20+ providers | ❌ OpenAI only | ❌ Anthropic only |
What can Overseer do?
Server Management:
- Monitor system resources (CPU, memory, disk)
- Manage processes and services
- Execute shell commands
- File operations (read, write, search)
Development:
- Git operations (status, commit, push)
- Code deployment
- Docker container management
- Database queries
Automation:
- Scheduled tasks
- Workflow automation
- Multi-step operations
- Integration with external services (via MCP)
Is Overseer production-ready?
Yes, Overseer is production-ready for:
- ✅ Personal VPS management
- ✅ Small team server administration
- ✅ Development environment automation
- ✅ Internal tools and workflows
For enterprise use, consider:
- Load balancing (if needed)
- Database migration to PostgreSQL
- Additional security hardening
- Professional support
Is Overseer open source?
Yes! Overseer is MIT licensed. You can:
- ✅ Use commercially
- ✅ Modify source code
- ✅ Create derivative works
- ✅ Distribute copies
Installation & Setup
What are the system requirements?
Minimum:
- OS: Ubuntu 20.04+, Debian 11+, Windows Server 2019+, macOS 12+
- RAM: 2GB
- CPU: 1 core
- Disk: 5GB
- Node.js: 20.0.0+
Recommended:
- RAM: 4GB
- CPU: 2 cores
- Disk: 10GB
- Dedicated server or VPS
Which VPS provider should I use?
Cost-effective:
- Hetzner CX21: €5.83/month (2 vCPU, 4GB RAM) - Best value
- Linode Shared 4GB: $24/month
Premium:
- DigitalOcean Basic Droplet: $24/month
- AWS t3.medium: ~$30/month (more features)
Free tier:
- Oracle Cloud Always Free: 1GB RAM (limited but free)
Do I need a domain name?
Optional, but recommended for:
- SSL/TLS certificates (Let's Encrypt)
- Professional appearance
- Easy access to web admin
Without a domain:
- Access via IP: `http://YOUR_IP:3000`
- Self-signed SSL certificate
- Works fine for personal use
How do I get a Telegram bot token?
1. Open Telegram
2. Message @BotFather
3. Send `/newbot`
4. Follow the prompts to choose a name and username
5. Copy the token (format: `123456:ABC-DEF...`)
6. Add it to Overseer Settings → Interfaces

Find your user ID:
1. Message @userinfobot
2. Copy your numeric user ID
3. Add it to `TELEGRAM_ALLOWED_USERS`
Can I use Overseer without a bot? Just the web interface?
Yes! The Telegram/Discord bots are optional. You can:
- Install Overseer
- Configure LLM provider
- Use only the web admin chat interface
- Skip bot configuration entirely
Usage
Which LLM provider should I use?
Depends on your priorities:
Best Overall:
- OpenAI GPT-4o: Great balance of speed, quality, cost
- Anthropic Claude 3.5 Sonnet: Best for complex reasoning
Fastest:
- Groq (llama-3.3-70b): Ultra-fast, good quality, cheap
- Google Gemini Flash: Very fast, good for simple tasks
Cheapest:
- Ollama (local): Free! But requires more RAM
- Groq: $0.59/M tokens (very cheap)
Best Quality:
- OpenAI o1: Best reasoning, slow, expensive
- Claude 3 Opus: Very capable, expensive
Privacy-Focused:
- Ollama (local): Data never leaves your server
- Azure OpenAI: Enterprise compliance
Can I use local models (Ollama)?
Yes! Overseer supports Ollama:
1. Install Ollama:

   ```bash
   curl -fsSL https://ollama.ai/install.sh | sh
   ```

2. Pull a model:

   ```bash
   ollama pull llama3.2
   ```

3. Add to Overseer:
   - Provider: Ollama
   - Model: llama3.2
   - Base URL: `http://localhost:11434`
   - No API key needed
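For a sense of what a provider call looks like under the hood, here is a minimal sketch against Ollama's `/api/generate` HTTP endpoint (the endpoint and request shape are Ollama's documented API; the helper names are made up for this example and are not part of Overseer):

```typescript
// Build the JSON body for Ollama's /api/generate endpoint.
// (Helper names are illustrative, not Overseer internals.)
function buildGenerateRequest(model: string, prompt: string) {
  return {
    url: 'http://localhost:11434/api/generate',
    body: { model, prompt, stream: false },
  };
}

// Send the request; requires a running Ollama instance.
async function generate(model: string, prompt: string): Promise<string> {
  const { url, body } = buildGenerateRequest(model, prompt);
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  const data = (await res.json()) as { response: string };
  return data.response; // the generated text
}
```

With `stream: false`, Ollama returns the whole completion in one JSON object instead of newline-delimited chunks.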
Pros:
- ✅ Free (no API costs)
- ✅ Private (data stays local)
- ✅ Fast (local inference)
Cons:
- ❌ Requires more RAM (8GB+ recommended)
- ❌ Lower quality than GPT-4/Claude
- ❌ Limited tool calling (depends on model)
How do I customize the agent's personality?
Edit SOUL.md via web admin:
- Go to Settings → SOUL
- Edit the markdown content
- Save changes
- Agent uses new personality immediately
Example customizations:
- Professional vs casual tone
- Domain expertise (DevOps, developer, sysadmin)
- Safety preferences (cautious vs permissive)
- Response format (concise vs detailed)
See User Guide — Customizing SOUL.md for examples.
Can Overseer send me notifications?
Via Telegram: Yes, the bot can proactively message you:

```typescript
// In a skill or tool
await telegramBot.sendMessage(userId, 'Server CPU is at 95%!');
```

Via Discord: Yes, with mentions or DMs
Via Email: Not built-in, but you can:
- Create a skill that sends emails
- Use MCP server for email
- Integrate with external service (SendGrid, etc.)
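Whichever channel you use, the core of a notifier skill is a periodic check that yields a message or nothing. A minimal sketch of the check itself (names are illustrative, not an actual Overseer skill API; the send would go through the bot or an email integration):

```typescript
import os from 'node:os';

// Return an alert string when the 1-minute load average per core
// exceeds the threshold, or null when everything is fine.
// Illustrative only — not an actual Overseer skill API.
function cpuAlert(maxLoadPerCore: number): string | null {
  const [load1] = os.loadavg(); // 1-minute load average
  const perCore = load1 / os.cpus().length;
  if (perCore <= maxLoadPerCore) return null;
  return `Server load is ${perCore.toFixed(2)} per core (limit ${maxLoadPerCore})`;
}
```

Run it on an interval (or as a scheduled task) and forward any non-null result through `telegramBot.sendMessage` or your email integration.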
How do I update Overseer?
```bash
# 1. Backup database
cp data/overseer.db data/overseer.db.backup

# 2. Pull latest code
git pull origin main

# 3. Install dependencies
npm install

# 4. Rebuild
npm run build

# 5. Restart services
systemctl restart overseer-web overseer-telegram overseer-discord
```

Check for updates:

```bash
git fetch origin
git log HEAD..origin/main --oneline
```

Configuration
How many conversations can Overseer handle?
Tested capacity:
- 1,000 conversations/day on 2GB RAM server
- 10,000+ messages/day
Database size:
- ~100KB per conversation (with 20 messages)
- 10GB disk can store ~100,000 conversations
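The arithmetic behind that capacity estimate, as a quick check:

```typescript
// ~100 KB per conversation on a 10 GB disk:
const kbPerConversation = 100;
const diskKb = 10 * 1024 * 1024; // 10 GB expressed in KB
const capacity = Math.floor(diskKb / kbPerConversation);
// capacity ≈ 104,857 — roughly the ~100,000 quoted above
```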
For higher loads, consider:
- Periodic conversation cleanup
- Database archiving
- Migration to PostgreSQL
- Horizontal scaling
Can I use multiple LLM providers simultaneously?
Yes! You can:
- Configure multiple providers
- Set one as default
- Switch per-conversation (via the `/switch` command)
- Use different providers for different skills
Use cases:
- GPT-4o for complex tasks, GPT-4o-mini for simple
- Claude for writing, GPT-4 for code
- Ollama for privacy-sensitive operations
- Groq for speed-critical tasks
How do I whitelist multiple Telegram users?
Method 1: Environment variable

```bash
TELEGRAM_ALLOWED_USERS=123456789,987654321,111222333
```

Method 2: Web admin
1. Settings → Interfaces → Telegram
2. Allowed Users: `123456789, 987654321`
3. Save
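Either way, the whitelist ends up as a comma-separated string that gets parsed into a set of IDs and checked on each incoming message. A plausible sketch of that check (illustrative names, not Overseer's actual internals):

```typescript
// Parse TELEGRAM_ALLOWED_USERS (e.g. "123, 456") into a set of numeric IDs.
function parseAllowedUsers(raw: string | undefined): Set<number> {
  return new Set(
    (raw ?? '')
      .split(',')
      .map((s) => Number(s.trim()))
      .filter((n) => Number.isInteger(n) && n > 0),
  );
}

// Reject messages from anyone not on the whitelist.
function isAllowed(userId: number, allowed: Set<number>): boolean {
  return allowed.has(userId);
}
```

Note that stray spaces around the commas are harmless with this kind of parsing, which is why both formats above work.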
Can I run multiple bots (Telegram + Discord) simultaneously?
Yes! Run all services:

```bash
# Using systemd
systemctl start overseer-web overseer-telegram overseer-discord

# Or manually
npm run dev      # Terminal 1: Web
npm run bot      # Terminal 2: Telegram
npm run discord  # Terminal 3: Discord

# Or with concurrently
npm run bots     # Starts both bots
```

Each interface maintains separate conversations.
Troubleshooting
Bot doesn't respond to messages
Checklist:
- ✅ Bot service is running: `systemctl status overseer-telegram`
- ✅ Your user ID is in `TELEGRAM_ALLOWED_USERS`
- ✅ Bot token is correct
- ✅ LLM provider is configured and has credits
- ✅ No errors in logs: `journalctl -u overseer-telegram -f`
Common causes:
- Typo in user ID
- Bot token expired/invalid
- LLM API key issues
- Network connectivity
"Database is locked" error
Cause: Multiple processes accessing database simultaneously
Solution:
```bash
# Stop all services
systemctl stop overseer-web overseer-telegram overseer-discord

# Remove stale lock files
rm data/overseer.db-wal data/overseer.db-shm

# Start services one by one
systemctl start overseer-web
systemctl start overseer-telegram
```

Prevention:
- Use WAL mode (already default)
- Don't run multiple instances
- Proper service dependencies
High memory usage
Normal usage:
- Web admin: 200-300 MB
- Telegram bot: 150-200 MB
- Total: ~500-700 MB
If higher:
- Check conversation count: `SELECT COUNT(*) FROM conversations`
- Clean old conversations
- Restart services
- Reduce Node.js memory: `NODE_OPTIONS="--max-old-space-size=1024"`
Slow responses
Possible causes:
- Slow LLM provider: Try Groq or Gemini Flash
- Large context: Reduce conversation history length
- Complex tools: Some tools take time
- Network latency: Check connectivity
Solutions:
- Use faster model (gpt-4o-mini, gemini-flash)
- Reduce `maxSteps` in agent config
- Optimize tool implementations
- Increase server resources
Performance
How much does it cost to run Overseer?
Server costs:
- VPS: $6-30/month (depends on provider)
- Domain: $10-15/year (optional)
- SSL: Free (Let's Encrypt)
LLM API costs: Varies by usage and provider:
| Usage | Model | Est. Cost |
|---|---|---|
| Light (100 msg/day) | GPT-4o-mini | $3/month |
| Medium (500 msg/day) | GPT-4o | $30/month |
| Heavy (2000 msg/day) | GPT-4o | $120/month |
| Any | Ollama (local) | $0 (free!) |
Cost optimization:
- Use Ollama for simple tasks
- Use GPT-4o-mini instead of GPT-4o
- Use Groq (very cheap)
- Set conversation limits
Can I limit API usage to control costs?
Currently: Not built-in
Workarounds:
- LLM provider limits: Set budgets in OpenAI/Anthropic dashboard
- Message limits: Track usage in database
- Rate limiting: Limit messages per user/day
- Model selection: Use cheaper models by default
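For the message-limit workaround, a per-user daily counter is enough to cap spend. A sketch (in-memory; not a built-in Overseer feature — a real version would persist counts in the database):

```typescript
// Per-user, per-day message counter for rough cost control.
class DailyLimiter {
  private counts = new Map<string, number>();
  private maxPerDay: number;

  constructor(maxPerDay: number) {
    this.maxPerDay = maxPerDay;
  }

  // Returns true if the message may proceed, false if over quota.
  allow(userId: number, now: Date = new Date()): boolean {
    const key = `${userId}:${now.toISOString().slice(0, 10)}`; // e.g. "42:2024-05-01"
    const used = this.counts.get(key) ?? 0;
    if (used >= this.maxPerDay) return false;
    this.counts.set(key, used + 1);
    return true;
  }
}
```

Call `allow()` before dispatching to the LLM and reply with a quota notice when it returns false.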
Coming soon:
- Per-user quotas
- Monthly spending limits
- Usage alerts
How can I improve response speed?
1. Use faster models:
   - Groq: Ultra-fast
   - Gemini Flash: Very fast
   - GPT-4o-mini: Fast and cheap

2. Optimize configuration:

   ```bash
   MAX_TOKENS=1000   # Reduce for faster responses
   TEMPERATURE=0.5   # Slightly faster
   ```

3. Reduce tool steps:
   - Simpler SOUL.md instructions
   - Fewer enabled skills
   - Optimize tool implementations

4. Server optimization:
   - More RAM/CPU
   - Closer to LLM provider (region)
   - SSD storage
Security
Is Overseer secure?
Yes, when configured properly:
- ✅ AES-256 encryption for secrets
- ✅ Bcrypt password hashing
- ✅ Session security (HttpOnly cookies)
- ✅ User whitelisting
- ✅ Command confirmation
- ✅ Audit logging
You must:
- Use strong passwords
- Keep encryption keys secret
- Use HTTPS/SSL
- Regularly update
- Review audit logs
See Security Guide for details.
Can Overseer access my entire server?
Yes, Overseer runs with the permissions of its user.
Best practice:
- Run as dedicated user (not root)
- Limit file permissions
- Use command confirmation
- Review SOUL.md carefully
- Monitor audit logs
For maximum security:
- Run in Docker container
- Use AppArmor/SELinux
- Sandbox dangerous operations
- Restrict network access
What data does Overseer store?
Stored locally (in your database):
- Conversations and messages
- Tool execution history
- LLM API keys (encrypted)
- User authentication data
- System settings
Sent to LLM providers:
- Your messages
- Conversation context
- Tool call results
- SOUL.md system prompt
Not stored:
- LLM API responses are not logged (by default)
- Passwords are never logged
- Encryption keys are never logged
How do I back up Overseer?
Automated backup:
```bash
#!/bin/bash
# backup.sh
cp data/overseer.db backups/overseer_$(date +%Y%m%d).db
cp .env backups/.env_$(date +%Y%m%d)

# Keep last 30 days
find backups/ -mtime +30 -delete
```

Add to crontab:

```
0 2 * * * /path/to/backup.sh
```

What to backup:
- ✅ `data/overseer.db` (database)
- ✅ `.env` (configuration)
- ✅ `src/agent/soul.md` (if customized)
- ✅ `skills/` (if custom skills)
Cost & Pricing
Is Overseer free?
The software is free (MIT licensed), but you pay for:
- VPS/hosting: $6-30/month
- LLM API usage: Varies ($0-100+/month)
- (Optional) Domain: ~$10/year
Total: $10-150/month depending on usage
Free options:
- Ollama (local models) = $0 API costs
- Oracle Cloud free tier = $0 hosting
- Total: $0/month! (but requires 8GB+ RAM server)
Which is cheaper: OpenAI or Anthropic?
For GPT-4o vs Claude 3.5 Sonnet:
| Model | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| GPT-4o | $2.50 | $10.00 | ~$12.50 |
| Claude 3.5 Sonnet | $3.00 | $15.00 | ~$18.00 |
| GPT-4o-mini | $0.15 | $0.60 | ~$0.75 |
| Gemini Flash | $0.075 | $0.30 | ~$0.375 |
| Groq (Llama 3.3) | $0.59 | $0.79 | ~$0.69 |
Cheapest: Groq or Gemini Flash
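To project your own bill from these rates, multiply monthly token volume by the per-1M prices. A back-of-the-envelope helper — the token counts per message are assumptions to adjust for your workload:

```typescript
// Estimate monthly LLM cost in USD from per-1M-token prices.
function monthlyCost(opts: {
  messagesPerDay: number;
  inputTokensPerMsg: number;  // prompt + context (an assumption)
  outputTokensPerMsg: number; // completion (an assumption)
  inputPricePer1M: number;    // e.g. 2.50 for GPT-4o
  outputPricePer1M: number;   // e.g. 10.00 for GPT-4o
}): number {
  const msgs = opts.messagesPerDay * 30;
  const inputCost = (msgs * opts.inputTokensPerMsg / 1_000_000) * opts.inputPricePer1M;
  const outputCost = (msgs * opts.outputTokensPerMsg / 1_000_000) * opts.outputPricePer1M;
  return inputCost + outputCost;
}
```

For example, 500 messages/day on GPT-4o with ~300 input and ~150 output tokens per message works out to about $34/month, in the same ballpark as the Medium row of the cost table earlier.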
Can I use Overseer commercially?
Yes! MIT license allows:
- ✅ Commercial use
- ✅ Modification
- ✅ Distribution
- ✅ Private use
You must:
- Include original license
- Include copyright notice
No warranty (use at your own risk)
Advanced Topics
Can I extend Overseer with custom tools?
Yes! See Developer Guide — Adding Tools
Example:
```typescript
// src/agent/tools/my-tool.ts
import { tool } from 'ai'; // Vercel AI SDK (assumed from the tool()/z.object() style)
import { z } from 'zod';

export const myTool = tool({
  description: 'My custom tool',
  parameters: z.object({
    input: z.string(),
  }),
  execute: async ({ input }) => {
    return { result: `Processed: ${input}` };
  },
});
```

How do I create a custom skill?
See Developer Guide — Creating Skills
Quick steps:
1. Create a `skills/my-skill/` directory
2. Add a `skill.json` manifest
3. Implement functions in `index.ts`
4. Restart Overseer
5. Enable the skill in settings
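To give step 3 some shape, here is a toy `index.ts` function — the real export contract and `skill.json` fields are specified in the Developer Guide, so treat these names as placeholders:

```typescript
import os from 'node:os';

// skills/my-skill/index.ts (illustrative shape, not the real contract)
// A function the agent could call to report memory headroom.
async function freeMemory(): Promise<{ freeMb: number; totalMb: number }> {
  const toMb = (b: number) => Math.round(b / 1024 / 1024);
  return { freeMb: toMb(os.freemem()), totalMb: toMb(os.totalmem()) };
}
```

The key idea is that a skill function takes plain parameters and returns plain JSON, which the LLM can then summarize in natural language.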
Can I integrate with other services (Slack, email, etc.)?
Yes, via:
1. MCP Servers:
- Install MCP server for service
- Connect via Settings → MCP Servers
- Use new tools automatically
2. Custom Skills:
- Create skill with API integration
- Install and enable
- Use via natural language
3. Webhooks (planned):
- Subscribe to events
- Send to external service
- Receive webhooks from services
Can I run Overseer in a Docker container?
Yes! See Deployment Guide — Docker
```bash
docker-compose up -d
```

Benefits:
- Isolated environment
- Easy deployment
- Reproducible builds
- Resource limits
How do I scale Overseer for high traffic?
Current architecture: Single server, good for 1000+ conversations/day
For higher scale:
1. Horizontal scaling:
   - Multiple web instances behind a load balancer
   - Shared database (PostgreSQL)
   - Redis for sessions/cache

2. Vertical scaling:
   - More RAM/CPU
   - SSD storage
   - Optimize database queries

3. Optimize:
   - Enable caching
   - Reduce conversation history
   - Use faster LLM models
   - Async job queue
See Architecture — Scalability (or Architecture for overview).
Still Have Questions?
- 📖 Browse the docs: Introduction, Quick Start, User Guide, FAQ
- 💬 Join Discord: discord.gg/overseer
- 🐛 Report issues: GitHub Issues
Contributing: Know the answer to a common question? Add it to this FAQ via pull request!