Anulum Institute
Director Class AI

Integrations

Drop-in guards for every major LLM provider and framework. Zero code changes with the REST proxy.

LLM provider SDK guards

OpenAI
Wraps the Chat Completions API with real-time hallucination scoring. Works with gpt-4o, gpt-4o-mini, o1, and o3.
from director_ai.integrations.openai import OpenAIGuard

guard = OpenAIGuard(knowledge_base="./docs/")
result = guard.chat("What is our refund policy?")
# result.passed, result.score, result.claims
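To make the result fields above concrete, here is a toy sketch of the gating logic a guard performs: extract claims, check each against the knowledge base, and aggregate into a pass/fail score. Naive substring matching stands in for the real NLI model, and the `GuardResult` dataclass merely mirrors the `passed`, `score`, and `claims` fields shown above; none of these names are the library's actual internals.

```python
# Toy sketch of the guard flow. Substring matching stands in for a real
# NLI model; GuardResult mirrors the result fields shown above.
from dataclasses import dataclass, field

@dataclass
class GuardResult:
    passed: bool
    score: float
    claims: list = field(default_factory=list)

def check_claims(claims, knowledge_base, threshold=0.5):
    """Score each claim against the knowledge base and aggregate."""
    supported = [c for c in claims
                 if any(c.lower() in doc.lower() for doc in knowledge_base)]
    score = len(supported) / len(claims) if claims else 1.0
    return GuardResult(passed=score >= threshold, score=score, claims=claims)

kb = ["Refunds are issued within 30 days of purchase."]
result = check_claims(["refunds are issued within 30 days"], kb)
print(result.passed, result.score)  # True 1.0
```

A real guard replaces the substring check with NLI entailment scoring, but the shape of the result object your application consumes is the same.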
Anthropic (Claude)
Guards Claude responses with NLI fact-checking against your RAG knowledge base.
from director_ai.integrations.anthropic import AnthropicGuard

guard = AnthropicGuard(model="claude-sonnet-4-20250514")
AWS Bedrock
Native Bedrock integration for AWS-deployed LLMs. IAM-aware, region-configurable.
from director_ai.integrations.bedrock import BedrockGuard
Google Gemini
Guards Gemini API responses. Supports streaming and multi-turn conversations.
from director_ai.integrations.gemini import GeminiGuard
Cohere
Guards the Cohere Command and Command-R model families with RAG-aware guardrails.
from director_ai.integrations.cohere import CohereGuard

Framework adapters

LangChain
Custom callback handler that intercepts chain outputs for fact-checking.
from director_ai.integrations.langchain import DirectorCallback

chain = your_chain | DirectorCallback(knowledge_base=kb)
LlamaIndex
Query engine wrapper with automatic claim verification against your index.
from director_ai.integrations.llama_index import GuardedQueryEngine
LangGraph · Haystack · CrewAI · DSPy · Semantic Kernel
Native adapter modules for each framework. See the documentation for framework-specific integration guides.

Deployment options

REST proxy (zero code changes)
Run Director Class AI as a reverse proxy between your app and the LLM API. No code changes required.
# Terminal
director-ai serve --port 8000 --upstream https://api.openai.com/v1
# Your app points to localhost:8000 instead of api.openai.com
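With the proxy in front, an existing OpenAI SDK client only needs its base URL redirected. The official OpenAI Python SDK reads the `OPENAI_BASE_URL` environment variable, so (assuming the proxy exposes an OpenAI-compatible `/v1` surface, as the `--upstream` flag above suggests) no application code changes are needed:

```shell
# Redirect an existing OpenAI SDK client through the local proxy.
# Assumes the proxy exposes an OpenAI-compatible /v1 surface.
export OPENAI_BASE_URL=http://localhost:8000/v1
```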
FastAPI middleware
Add guardrails to any FastAPI application with a single line.
# DirectorGuard import path: see the FastAPI integration docs
app.add_middleware(DirectorGuard, knowledge_base="./kb/")
Docker
Pre-built CPU and GPU images. Docker Compose for multi-service deployments.
docker run -p 8000:8000 anulum/director-ai:latest serve
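For the multi-service deployments mentioned above, the same `docker run` invocation can be expressed as a Compose service. A minimal sketch (the service name and file layout are illustrative, not a shipped template):

```yaml
# Minimal Compose sketch; service name and structure are illustrative.
services:
  director-ai:
    image: anulum/director-ai:latest
    command: serve
    ports:
      - "8000:8000"
```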
Kubernetes Helm chart
Production-ready Helm chart with health checks, resource limits, and horizontal autoscaling.
Voice AI pipeline
AsyncVoiceGuard for real-time voice applications. Adapters for ElevenLabs, OpenAI TTS, and Deepgram.
from director_ai.voice import AsyncVoiceGuard

Full integration docs

Detailed guides, code examples, and API reference for every integration.

Open documentation