The Inviolet gateway is the choke point every LLM tool call passes through. Install it in one of three ways, depending on where your application lives.

Option 1 — npm SDK (simplest)

Wrap your existing Anthropic / OpenAI client. No infrastructure changes.
npm install @inviolet/sdk

import { InvioletGateway } from "@inviolet/sdk"
import Anthropic from "@anthropic-ai/sdk"

const inviolet = new InvioletGateway({ apiKey: process.env.INVIOLET_API_KEY! })
const anthropic = inviolet.wrap(new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! }))
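Because the wrapped client is assumed to be a drop-in replacement, existing tool-use calls need no changes. A minimal sketch of a request the gateway would see (the get_weather tool and its schema are hypothetical, purely for illustration):

```typescript
// Illustrative tool-use request. The shape is the standard Anthropic Messages
// payload; Inviolet inspects it as it passes through the wrapped client.
const request = {
  model: "claude-sonnet-4-5",
  max_tokens: 1024,
  tools: [
    {
      name: "get_weather", // hypothetical tool, not part of Inviolet
      description: "Get the current weather for a city",
      input_schema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  ],
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
}

// Same call as before wrapping, no code changes beyond construction:
// const message = await anthropic.messages.create(request)
```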
Best for: serverless apps, Next.js, single-team prototypes.

Option 2 — Docker (self-hosted)

docker run -d \
  -e INVIOLET_API_KEY=${INVIOLET_API_KEY} \
  -p 8080:8080 \
  ghcr.io/craig-fire/inviolet-gateway:latest
Point your application at http://localhost:8080 instead of the LLM provider’s API URL.
Best for: regulated environments, on-prem, self-hosted Kubernetes.
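Repointing can be as small as swapping the base URL. A sketch using the raw Messages API over fetch, assuming the gateway proxies Anthropic's public endpoint path and headers unchanged:

```typescript
// Send an Anthropic-style request to the local Inviolet gateway instead of
// api.anthropic.com. Assumes the gateway proxies the Messages API as-is.
const GATEWAY_URL = "http://localhost:8080"

async function callThroughGateway(prompt: string) {
  const res = await fetch(`${GATEWAY_URL}/v1/messages`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5",
      max_tokens: 256,
      messages: [{ role: "user", content: prompt }],
    }),
  })
  if (!res.ok) throw new Error(`gateway returned ${res.status}`)
  return res.json()
}
```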

Option 3 — Vercel Edge

Deploy the gateway as an edge function alongside your Next.js app.
// app/api/llm/route.ts
import { invioletEdge } from "@inviolet/edge"
export const runtime = "edge"
export const POST = invioletEdge({ apiKey: process.env.INVIOLET_API_KEY! })
Best for: existing Vercel-hosted apps that want zero new infrastructure.
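Application code then calls its own route instead of the provider. A sketch, where the request body shape is an assumption about what the invioletEdge handler forwards upstream:

```typescript
// Call the edge route from app code. The body is assumed to be the
// provider-format payload that the invioletEdge handler proxies.
async function askViaGateway(prompt: string) {
  const res = await fetch("/api/llm", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model: "claude-sonnet-4-5",
      max_tokens: 256,
      messages: [{ role: "user", content: prompt }],
    }),
  })
  if (!res.ok) throw new Error(`gateway error ${res.status}`)
  return res.json()
}
```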

Verify install

After installing, make any LLM tool call through the gateway. Within 30 seconds it should appear at app.inviolet.ai/decision-feed.