BYO-AI rewrite

AI that respects your wallet.

A ✨ AI button on every composer. You bring the key — OpenAI, Anthropic, Ollama, or anything OpenAI-compatible. A per-project glossary primes the model with your lingo. Org-managed mode locks the whole team to one provider; BYOK mode lets users wire their own. A usage badge ships full transparency.

Your key · Your model · Your data

Three adapters out of the box

Pluggable providers. No vendor lock-in.

New providers drop in by adding a file under backend/services/aiProviders/ and registering in index.js. Each adapter ships a curated recommendedModels list (cheap / balanced / heavy tiers) plus a Refresh button that pulls the live model catalog.
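A new adapter might look like the sketch below. This is a minimal illustration of the drop-in pattern described above; the field names and the registry API are assumptions, not the app's actual interface.

```javascript
// Hypothetical provider-adapter shape — illustrative names only.
const registry = new Map();

// index.js would call this once per file under backend/services/aiProviders/
function registerProvider(adapter) {
  registry.set(adapter.id, adapter);
}

const myProvider = {
  id: "myprovider",
  label: "My Provider",
  // Curated tiers shown before any live fetch happens.
  recommendedModels: {
    cheap: "myprovider-mini",
    balanced: "myprovider-standard",
    heavy: "myprovider-large",
  },
  // Called by the Refresh button to pull the live model catalog.
  async listModels({ apiKey, endpoint }) {
    const res = await fetch(`${endpoint}/v1/models`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`listModels failed: ${res.status}`);
    const body = await res.json();
    return body.data.map((m) => m.id);
  },
};

registerProvider(myProvider);
```

Once registered, the UI can enumerate `registry` to build the provider picker and read `recommendedModels` for the tier defaults.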

OpenAI

Native + OpenAI-compatible endpoints — Azure OpenAI, OpenRouter, vLLM, LM Studio. Helper banner links to platform.openai.com/api-keys; live model fetch via /v1/models.

Anthropic

Claude Messages API (Haiku / Sonnet / Opus tiers). Helper banner links to console.anthropic.com/settings/keys. Surprisingly strong on rewrite tasks for the price.

Ollama

Self-hosted — no key needed. Helper banner links to ollama.com/library. Live model list via /api/tags. Set the endpoint to your local or LAN Ollama instance.
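The two live-catalog endpoints return different JSON shapes, so a small normalizer is handy. A sketch, assuming the documented response shapes (OpenAI's `GET /v1/models` returns `{ data: [{ id }] }`; Ollama's `GET /api/tags` returns `{ models: [{ name }] }`); the function name is illustrative:

```javascript
// Normalize a live-catalog response body into a flat list of model ids.
// OpenAI /v1/models:  { data: [{ id: "..." }, ...] }
// Ollama /api/tags:   { models: [{ name: "..." }, ...] }
function normalizeModelList(provider, body) {
  switch (provider) {
    case "openai":
      return body.data.map((m) => m.id);
    case "ollama":
      return body.models.map((m) => m.name);
    default:
      throw new Error(`no live catalog for provider: ${provider}`);
  }
}
```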

Every composer

✨ AI button, everywhere it makes sense.

One click opens a preview-before-send modal. Pick tone, verbosity, optionally ELI5 mode (Admin / Manager only). Side-by-side original / rewritten panes. Apply commits, Re-roll generates a new variant, Cancel discards.

  • Internal comments
  • Vendor comments (auto-flips when 'Share with vendor' is on)
  • Ticket title (single-line, terse-default)
  • Ticket description
  • Canned response body
  • Project description
  • Admin email templates (Subject + Body — preserves {tag.path} substitutions verbatim)
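The verbatim guarantee for email templates can be checked mechanically: every `{tag.path}` token in the original must survive the rewrite. A sketch of such a guard; the function name and regex are assumptions, not the app's actual validation:

```javascript
// Returns true only if every {tag.path}-style placeholder in the original
// template still appears verbatim in the rewritten text.
function placeholdersPreserved(original, rewritten) {
  const tokens = original.match(/\{[\w.]+\}/g) ?? [];
  return tokens.every((t) => rewritten.includes(t));
}
```

A rewrite that drops or mangles a substitution tag would fail this check and could be rejected before Apply.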
# Rewrite modal
Tone: neutral · formal · friendly · polite · apologetic · terse · funny
Verbosity: short · functional · verbose
ELI5: Admin / Manager
Original / Rewritten side-by-side
Apply · Re-roll · Cancel
Tokens + provider stamped on the comment

Project context glossary

Teach the model your lingo.

At Admin → Integrations → AI Assist → Project contexts, paste a markdown glossary per project — sites, integrations, acronyms, internal jargon. Up to 8000 chars. Gets prepended to every rewrite fired from a ticket in that project so the model uses your terms verbatim instead of generic alternates.

Three layers of opt-in (org admin / project admin / per-user) must all be ON for the context to ship. A per-comment badge shows when the glossary actually fired.

# projects.ai_context_md (Acme Helpdesk)
## Sites
- HQ-DFW (Dallas, primary)
- BR-AUS (Austin branch, 40 users)

## Integrations
- M365 tenant: acme.onmicrosoft.com
- Backup: Veeam B&R 12

## Lingo
- "the IT OU" = OU=Corp/Computers/IT
- "the floor" = warehouse VLAN 240
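The three-layer gate can be sketched as a single check before prompt assembly. The function and flag names below are illustrative, not the actual schema:

```javascript
// The glossary ships only when all three opt-ins are on; contextFired
// drives the per-comment "glossary fired" badge.
function buildSystemPrompt({ basePrompt, glossaryMd, orgEnabled, projectEnabled, userEnabled }) {
  const contextFired = Boolean(orgEnabled && projectEnabled && userEnabled && glossaryMd);
  const prompt = contextFired ? `${glossaryMd}\n\n${basePrompt}` : basePrompt;
  return { prompt, contextFired };
}
```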

Org-managed vs BYOK

Lock the team. Or don't.

Set an org provider + encrypted org API key at Admin → Integrations → AI Assist. Permissions tab picks how users interact:

  • Lock to org config — every rewrite uses the org key. No override.
  • Allow user BYOK — personal credentials override the org fallback.
  • Org kill switch — turns the whole AI surface off org-wide.
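Credential resolution under the three modes boils down to a few lines. A sketch under assumed names (`killSwitch`, `mode`, `creds` are illustrative, not the real config fields):

```javascript
// Pick which credentials a rewrite runs with, or null if AI is off.
function resolveCredentials({ org, user }) {
  if (org.killSwitch) return null;              // AI surface off org-wide
  if (org.mode === "locked") return org.creds;  // lock to org config, no override
  // BYOK mode: personal credentials win; the org key is the fallback.
  return user.creds ?? org.creds ?? null;
}
```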

Usage disclosure badge

Every rewrite is stamped.

Each AI-rewritten comment + ticket description carries a compact ✨ AI pill inline. Hover for the full readout — provider, model, tokens, tone, verbosity, project context flag. Click copies the metadata block to clipboard.

Org-level audience tier controls who sees the badge:

Author + Admin

Default. Only the author and admins see the badge on a given comment.

Admin only

Tighter — even the author can't audit their own AI usage afterwards.

All internal users

Maximum transparency. Vendors never see it regardless.

Per-user "Publish my AI usage" override lets users opt their own posts up to org-wide visibility — useful when paying with a personal key and wanting reimbursement.
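The audience rules above reduce to one visibility check per viewer. A sketch; the tier identifiers and field names are assumptions, not the app's actual values:

```javascript
// Should this viewer see the ✨ AI pill on this comment?
function badgeVisibleTo({ viewer, comment, orgTier }) {
  if (viewer.isVendor) return false;               // vendors never see it
  if (comment.authorPublishedUsage) return true;   // per-user opt-up to all internal
  switch (orgTier) {
    case "admin_only":
      return viewer.isAdmin;
    case "author_and_admin":
      return viewer.isAdmin || viewer.id === comment.authorId;
    case "all_internal":
      return true;
    default:
      return false;
  }
}
```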

Friendly errors

Provider broke? Get a real answer.

Adapters wrap non-ok responses + fetch errors in a ProviderError carrying a kind-aware friendly message. Rendered inline with an expandable "Provider details" disclosure containing the raw upstream body for debugging.

Kind · Friendly message
auth · API key invalid — double-check it in your provider's console
billing · Provider account billing failed — top up credits
rate_limit · Rate limited — wait a moment and try again
model_not_found · Model not available for your account — pick another from the list
bad_request · Provider rejected the request — check the model + endpoint
server_error · Provider is having issues — try again shortly
network · Couldn't reach the provider — check network + endpoint URL
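A sketch of how a kind-aware `ProviderError` could be built from an upstream failure. The friendly strings come from the table above; the status-to-kind mapping and class shape are illustrative assumptions:

```javascript
// Friendly messages keyed by error kind (copied from the table above).
const FRIENDLY = {
  auth: "API key invalid — double-check it in your provider's console",
  billing: "Provider account billing failed — top up credits",
  rate_limit: "Rate limited — wait a moment and try again",
  model_not_found: "Model not available for your account — pick another from the list",
  bad_request: "Provider rejected the request — check the model + endpoint",
  server_error: "Provider is having issues — try again shortly",
  network: "Couldn't reach the provider — check network + endpoint URL",
};

class ProviderError extends Error {
  constructor(kind, rawBody) {
    super(FRIENDLY[kind] ?? FRIENDLY.server_error);
    this.kind = kind;
    this.rawBody = rawBody; // surfaced in the "Provider details" disclosure
  }
}

// Illustrative HTTP status → kind mapping for non-ok responses.
function kindFromStatus(status) {
  if (status === 401 || status === 403) return "auth";
  if (status === 402) return "billing";
  if (status === 429) return "rate_limit";
  if (status === 404) return "model_not_found";
  if (status >= 500) return "server_error";
  return "bad_request";
}
```

Fetch-level failures (DNS, refused connection) would map to the `network` kind instead, since no status code ever arrives.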

Wire up your provider.

Account → Preferences → AI Assist for personal keys. Admin → Integrations → AI Assist for the org config. Test connection verifies your setup before you ship.