Your infrastructure. Your data. Your perimeter.
cgiCore runs entirely inside your environment. The knowledge graph, vectors, reasoning, and audit log stay within customer-controlled infrastructure — VPC, on-prem, or private cloud.
Three postures. One perimeter.
cgiCore ships as containers and Helm charts your platform team runs. We don’t operate a multitenant service. Your data never transits a vendor-controlled plane.
Pick the perimeter that fits your org.
Same architecture, three deployment shapes. No version of cgiCore exfiltrates customer data outside your boundary.
Cloud VPC
Deploy cgiCore inside your own AWS, GCP, or Azure VPC. Managed databases stay in your account; network egress stays inside your controls.
Air-gapped
Runs in regulated, disconnected, or sovereign environments. Ship model weights via your own pipeline; cgiCore makes no outbound calls it isn’t configured to make.
Private Kubernetes
Deploys to your existing Kubernetes or OpenShift clusters via Helm. Inherits your SSO, secrets management, and observability stack.
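As an illustrative sketch only — the chart keys, registry, and hostnames below are hypothetical, not cgiCore's actual values schema — a Helm values file for this posture might wire in the infrastructure you already run:

```yaml
# values.yaml — hypothetical keys, for illustration of the pattern
image:
  repository: registry.internal.example.com/cgicore/core  # pulled from your private registry
auth:
  oidcIssuer: https://sso.example.com/realms/platform     # inherits your existing SSO
storage:
  postgresSecretRef: cgicore-pg-credentials               # secret managed by your own tooling
telemetry:
  otlpEndpoint: http://otel-collector.observability:4317  # streams to your observability stack
```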
Six principles that don’t flex.
What cgiCore commits to, regardless of posture, scale, or customer tier.
Customer-controlled storage
Graph, vectors, reasoning state, and event log live in databases you own. There is no shared tenancy.
Provenance on every fact
Nothing enters the graph anonymously. Source, extraction run, and authoring session are mandatory.
Typed audit surface
Every read, write, and reasoning step emits a typed event your SIEM can consume directly.
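To make "typed event" concrete, here is a minimal sketch of what such an event could look like as a JSON line a SIEM pipeline ingests. The field names and values are illustrative assumptions, not cgiCore's actual schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical event shape — field names are illustrative, not cgiCore's schema.
@dataclass(frozen=True)
class AuditEvent:
    event_type: str  # e.g. "graph.write", "reasoning.step"
    workspace: str   # the isolation boundary the event belongs to
    actor: str       # authenticated principal that triggered it
    source: str      # provenance: extraction run or authoring session
    timestamp: str   # RFC 3339, UTC

def emit(event: AuditEvent) -> str:
    """Serialize one event as a JSON line for downstream SIEM ingestion."""
    return json.dumps(asdict(event), sort_keys=True)

line = emit(AuditEvent(
    event_type="graph.write",
    workspace="ws-finance",
    actor="svc-extractor",
    source="extraction-run-0042",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
print(line)
```

Because every field is mandatory and typed, a SIEM rule can filter on `workspace` or `event_type` without parsing free-form log text.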
Per-workspace isolation
Keys, rate limits, retention, and policy run per workspace — a hard boundary, not a filter.
Bring-your-own model
Self-hosted or vendor-hosted — cgiCore acts as the LLM proxy and routes to whichever model you choose, never forcing a provider.
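The proxy pattern above can be sketched from the client's side: the application builds an ordinary OpenAI-style chat request, and the proxy decides which backend serves the named model. The endpoint and model name below are hypothetical, chosen only to illustrate the shape:

```python
import json

# Hypothetical in-VPC proxy endpoint — illustrative, not a real cgiCore URL.
PROXY_URL = "http://cgicore.internal:8080/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload. The proxy behind PROXY_URL
    routes the named model to a self-hosted or vendor backend — the client
    never needs to know which."""
    return {
        "model": model,  # e.g. a self-hosted Llama or a vendor model alias
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("llama-3-70b-instruct", "Summarize today's audit events.")
print(json.dumps(payload))
```

Swapping providers then means changing the routing configuration on the proxy, not the application code.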
No telemetry back to us
Operational metrics stream to your own observability. We don’t ingest customer content, period.
Works with the stack you already run.
cgiCore is infrastructure that strengthens any downstream AI environment. If your agents speak OpenAI’s API, your apps speak HTTP, and your platform speaks Kubernetes — you’re ready.