Observatory’s AI and machine learning integration is fully open. You choose the provider, the jurisdiction, the contract. We never enter the loop.
We don’t choose your AI. You do.
AI providers are not interchangeable. Their jurisdictions differ, their data policies differ, their politics differ. So we don’t pick one for you. Observatory exposes a clean integration interface and lets you plug in whichever provider matches your geography, your compliance posture, and your budget — American, European, Chinese, or self-hosted.
The Mechanism
An open hook. Your provider. Your contract.
Observatory exposes a clean integration interface for AI and ML providers. You write a few lines of configuration pointing at your chosen provider's API. Observatory, running on your infrastructure, hands the data to that provider, the provider runs the inference, and the result flows back into the pipeline.
The contract for that inference is between you and the provider directly — we are not party to it, do not see your API keys, do not see your data, and do not collect a margin. The integration runs entirely on your infrastructure.
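In practice, that configuration is a couple of lines. A minimal sketch, assuming your key is read from an environment variable you name yourself:

import os

from ghostcitadel import Observatory

obs = Observatory("jira_export.csv")
# The environment variable name is illustrative; use whatever your own setup defines.
obs.connect_ai("mistral", api_key=os.environ["MISTRAL_API_KEY"])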
Provider-Agnostic
Jurisdiction Choice
Direct Contract
No Middleware
Self-Hosted Ready
THREE INTEGRATION SURFACES
Surface I
Bring Your Own AI
Plug in any large language model provider for natural-language insights, automated root-cause hypotheses, and case-note text analytics. American (OpenAI, Anthropic, Google), European (Mistral, Aleph Alpha), Chinese (Qwen, DeepSeek, Kimi), or self-hosted via Ollama or vLLM. The choice is yours, the contract is yours.
Surface II
Bring Your Own ML
For statistical machine learning — anomaly detection, drift monitoring, ML-augmented Monte Carlo priors, classification models — Observatory accepts any scikit-learn, PyTorch, or ONNX-format estimator. Train it on your data, on your hardware. Drop the artifact in. Observatory handles the orchestration.
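A sketch of that workflow, assuming a scikit-learn IsolationForest as the anomaly model and a feature matrix you have already extracted from your own tickets; the file names are illustrative:

import numpy as np
from joblib import dump, load
from sklearn.ensemble import IsolationForest

from ghostcitadel import Observatory

# Train on your data, on your hardware.
features = np.load("ticket_features.npy")
model = IsolationForest(random_state=0).fit(features)
dump(model, "my_anomaly_model.pkl")

# Drop the artifact in; Observatory handles the orchestration.
obs = Observatory("jira_export.csv")
obs.connect_ml(load("my_anomaly_model.pkl"))
obs.mine(metrics=["anomaly_score"])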
Surface III
Sovereign Integration
Ghost Citadel is not party to your AI contract. We don’t see your API keys, your prompts, your completions, your training data, or your billing. The integration is between you and the provider you chose. We make sure it works — nothing more, nothing less.
Identical Code. Different Provider.
Switch jurisdictions in one line.
The same Observatory pipeline runs against any AI provider. Need to operate under EU data sovereignty? Point at Mistral. Need APAC residency? Point at Qwen. Need full air-gap? Point at a local Ollama instance. The pipeline doesn’t care — it’s just a function call.
# One pipeline. Four jurisdictional options.
import os

from ghostcitadel import Observatory
obs = Observatory("jira_export.csv")
# US — your contract with Anthropic
obs.connect_ai("anthropic", api_key=os.environ[...])
# EU — your contract with Mistral
obs.connect_ai("mistral", api_key=os.environ[...])
# APAC — your contract with Qwen
obs.connect_ai("qwen", api_key=os.environ[...])
# Air-gapped — local Ollama
obs.connect_ai("ollama", host="localhost:11434")
# Bring your own ML estimator
from joblib import load
obs.connect_ml(load("my_anomaly_model.pkl"))
obs.mine(metrics=["value_yield", "anomaly_score"])
# Same diagnostics. Your jurisdiction. Your terms.
Your Jurisdiction. Your Provider. Your Terms.
AI sovereignty is becoming a procurement question. European public sector, Chinese state-owned enterprises, and US-regulated industries all face different rules about where inference can happen and who can see the data. Observatory’s architecture treats this as your decision, not ours.
No Forced Provider
We don’t bundle an AI vendor with the library, we don’t mark up your inference costs, and we don’t lock you to a specific jurisdiction. If your compliance posture changes, you swap providers in one line of configuration. The pipeline is unaffected.
No Data Pass-Through
Observatory never sees your prompts, completions, embeddings, or training data. The integration is direct between your infrastructure and your provider. We make the connection possible — we don’t sit in the middle of it.
Full Air-Gap Mode
Need to operate without external AI calls entirely? Plug in a local Ollama, vLLM, or Triton instance, or skip the AI surface altogether. The core diagnostics work without any AI integration — AI is augmentation, not dependency.
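A sketch of both air-gapped paths, reusing the calls from the example above; nothing here leaves your machine:

from ghostcitadel import Observatory

# Local inference only: point at an Ollama instance on your own host.
obs = Observatory("jira_export.csv")
obs.connect_ai("ollama", host="localhost:11434")
obs.mine(metrics=["value_yield"])

# Or skip the AI surface entirely; the core diagnostics still run.
obs_core = Observatory("jira_export.csv")
obs_core.mine(metrics=["value_yield"])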