16 specialist agents. 19 integrations. All running on your own box, with your own LLM provider, behind your own auth. No cloud middleman, no vendor lock-in, no data leaving your network.
We don't just say the agents work — we prove it with an automated eval suite scored by an independent LLM judge on every commit. Here's the current baseline, run against Claude Sonnet 4.6 with Opus 4.6 grading.
| Role | Passing |
|---|---|
| 🔎 Knowledge Agent | 4 / 4 |
| ⚙️ Backend Engineer | 5 / 5 |
| 🎨 Frontend Engineer | 4 / 4 |
| 🚀 DevOps Engineer | 5 / 5 |
| 🔍 QA Engineer | 4 / 4 |
| ✏️ UI/UX Designer | 4 / 4 |
| 📊 Data Analyst | 4 / 4 |
| 📈 Growth Analyst | 4 / 4 |
| 📢 Marketing Manager | 4 / 4 |
| 💼 Sales Rep | 4 / 4 |
| 💬 Customer Support | 4 / 4 |
| 📝 Technical Writer | 4 / 4 |
| 📋 Project Manager | 4 / 4 |
| 🎯 Recruiter | 4 / 4 |
| ⚖️ Legal Analyst | 4 / 4 |
| 💰 Finance / Bookkeeper | 4 / 4 |
These are the choices that make Smart Agents Pro a product you install, not a service you subscribe to.
Run `docker compose up -d` and you're live. Your data, your box, your network. Going fully air-gapped with Ollama is a single env var.
Eight providers wired: Anthropic, OpenAI, Groq, Cerebras, Gemini, OpenRouter, Ollama, and any OpenAI-compatible endpoint. Swap models per role from the admin UI, no code changes.
When Sales learns that a customer prefers async demos, Marketing sees it too. Every specialist writes to and queries a shared knowledge store — the Knowledge Agent searches across it all.
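The cross-agent memory can be pictured as a minimal sketch, assuming a simple tagged note store (the class and method names here are illustrative, not the product's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class SharedKnowledgeStore:
    """Illustrative shared memory: every agent writes, any agent queries."""
    notes: list = field(default_factory=list)

    def write(self, agent: str, text: str, tags: set) -> None:
        self.notes.append({"agent": agent, "text": text, "tags": tags})

    def query(self, tag: str) -> list:
        # Cross-agent lookup: results are not scoped to the writing agent
        return [n["text"] for n in self.notes if tag in n["tags"]]

store = SharedKnowledgeStore()
# Sales writes a customer preference...
store.write("sales", "Acme prefers async demos", {"customer:acme", "demos"})
# ...and Marketing finds it through the same store
hits = store.query("customer:acme")
print(hits)
```

The point of the design is that notes are tagged, not siloed: any specialist's query can surface any other specialist's observation.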
Destructive actions stop and ask. Your agents can push code, send emails, and touch production — but only after you approve from the web UI or Telegram.
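The approval flow amounts to a human-in-the-loop gate: destructive actions are queued, not executed, until someone signs off. A minimal sketch, with hypothetical names that are not the product's API:

```python
# Illustrative approval gate: a destructive action is deferred until approved.
class ApprovalGate:
    def __init__(self):
        self.pending = {}   # ticket id -> (description, deferred action)
        self._next_id = 1

    def request(self, description, action):
        """Queue a destructive action; nothing executes yet."""
        tid = self._next_id
        self._next_id += 1
        self.pending[tid] = (description, action)
        return tid

    def approve(self, tid):
        """Human approved (e.g. from the web UI or Telegram): run it now."""
        _, action = self.pending.pop(tid)
        return action()

gate = ApprovalGate()
ticket = gate.request("push to main", lambda: "pushed")
# ...the agent blocks here until a human clicks Approve...
result = gate.approve(ticket)
```

The agent never holds the ability to run the action directly; it can only file a ticket and wait.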
Every state-changing action — logins, task submissions, model swaps, integration changes — lands in a tamper-evident log. Filterable by user, action, time.
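"Tamper-evident" usually means each entry is chained to the hash of the one before it, so editing any past entry breaks verification. A minimal sketch of the idea (not the product's actual log format):

```python
import hashlib
import json

class AuditLog:
    """Illustrative tamper-evident log: each entry hashes its predecessor."""
    def __init__(self):
        self.entries = []

    def append(self, user, action):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"user": user, "action": action, "prev": prev},
                          sort_keys=True)
        self.entries.append({"user": user, "action": action, "prev": prev,
                             "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({"user": e["user"], "action": e["action"],
                               "prev": prev}, sort_keys=True)
            if e["prev"] != prev or \
               hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("alice", "model swap")
log.append("bob", "login")
print(log.verify())  # True — rewrite any earlier entry and this flips to False
```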
Regression suite per role. Run it before every release, catch behavioral drift when you swap models. No more "it worked last week" surprises.
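The shape of such a suite is simple: a fixed set of prompts per role, each answer scored by a judge, with a pass threshold. A sketch with stub agent and judge functions (the prompts and names are illustrative, not the shipped suite):

```python
# Illustrative per-role regression check: fixed prompts, scored by a judge.
# In the real suite the agent and judge would be LLM calls; here they are stubs.
SUITE = {
    "backend": ["Add an index to the users collection",
                "Write a rate limiter"],
}

def run_suite(role, agent, judge, threshold=1.0):
    results = [judge(prompt, agent(prompt)) for prompt in SUITE[role]]
    passed = sum(results)
    return passed, len(results), passed / len(results) >= threshold

# Stub judge: any non-empty answer passes
passed, total, ok = run_suite("backend",
                              agent=lambda p: f"plan for: {p}",
                              judge=lambda p, a: bool(a))
print(f"{passed}/{total}")  # 2/2
```

Because the prompts are frozen, rerunning the suite after a model swap surfaces behavioral drift as failed cases rather than anecdotes.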
The Knowledge Agent fans out across your shared memory, Confluence, Notion, Jira, Linear, Slack, and GitHub — then synthesizes a cited answer in 5–15 seconds. Read-only by design. No hallucinated quotes.
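The fan-out pattern can be sketched with concurrent read-only fetches followed by a cited synthesis step. This is an assumption about the shape of the implementation, with stub fetchers standing in for the real integrations:

```python
# Illustrative fan-out: query all sources concurrently, cite each snippet.
import asyncio

async def fetch(source, query):
    await asyncio.sleep(0)  # stands in for a real read-only API call
    return (source, f"note about {query}")

async def answer(query, sources):
    # Fan out to every source at once rather than sequentially
    results = await asyncio.gather(*(fetch(s, query) for s in sources))
    # Every snippet carries its source, so no quote appears without a citation
    return "\n".join(f"{text} [{src}]" for src, text in results)

print(asyncio.run(answer("async demos", ["notion", "slack", "github"])))
```

Running the fetches concurrently is what keeps a seven-source search inside the 5–15 second window.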
Each one is a focused agent with its own system prompt, tool selection, and regression suite. Toggle any of them on or off from the admin page.
Paste a credential, click Test connection, it's live. No code changes, no restarts. Everything encrypted at rest with Fernet, every change logged to the audit trail.
From `git clone` to "signed in." One repo. One env file. One `docker compose up`. Smart Agents Pro brings its own Mongo, its own JWT secret, and the app container. You bring an LLM key.
```sh
# 1. Clone the repo
$ git clone https://github.com/choochtech/smart-agents-pro.git
$ cd smart-agents-pro

# 2. Configure — set at least ANTHROPIC_API_KEY in .env
$ cp .env.example .env
$ nano .env  # or your editor of choice

# 3. Bring the stack up (app + mongo sidecar)
$ docker compose up -d

# 4. Open the app — first signup becomes the admin
$ open http://localhost:8090
```
Set one of: ANTHROPIC_API_KEY, OPENAI_API_KEY, GROQ_API_KEY, CEREBRAS_API_KEY, GEMINI_API_KEY, OPENROUTER_API_KEY. Or point at a local Ollama.
Set AI_AGENTS_ADMIN_EMAIL + AI_AGENTS_ADMIN_PASSWORD in .env before booting, or let the first visitor sign up as admin.
From the Integrations admin page, paste credentials for GitHub, Jira, Slack, databases, etc. Click Test connection, it's live.