Security & Deployment

Your infrastructure. Your data. Your perimeter.

cgiCore runs entirely inside your environment. The knowledge graph, vectors, reasoning, and audit log stay within customer-controlled infrastructure — VPC, on-prem, or private cloud.

Deployment

Three postures. One perimeter.

cgiCore ships as containers and Helm charts your platform team runs. We don’t operate a multitenant service. Your data never transits a vendor-controlled plane.

Architecture at a glance, inside the customer-controlled perimeter (VPC, on-prem, or private cloud):

cgiCore: gateway & LLM proxy (model-agnostic; auth, rate-limit) · graph (relational) · vectors (semantic index) · reasoning + ML sidecars · provenance, audit log, typed event bus · observability (OpenTelemetry metrics & traces). Runs on Docker / Kubernetes / EKS.
Enterprise data: connectors (internal) to document stores, databases & APIs, event streams.
Downstream AI: agents & orchestrators, assistants & chat UIs, pipelines; self-hosted or external LLMs.
Postures

Pick the perimeter that fits your org.

Same architecture, three deployment shapes. No version of cgiCore exfiltrates customer data outside your boundary.

01 / VPC

Cloud VPC

Deploy cgiCore inside your own AWS, GCP, or Azure VPC. Managed databases stay in your account, and network egress remains under your control.

02 / On-premise

Air-gapped

Runs in regulated, disconnected, or sovereign environments. Ship model weights via your own pipeline; cgiCore makes no outbound calls it isn’t configured to make.

03 / Private cloud

Private Kubernetes

Deploys to your existing Kubernetes or OpenShift clusters via Helm. Inherits your SSO, secrets management, and observability stack.
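The Helm-based install described above can be sketched as a values file. Everything below is illustrative: the keys, endpoints, and defaults are assumptions for this sketch, not cgiCore's published chart schema.

```yaml
# Illustrative values.yaml for a cgiCore Helm release (hypothetical keys).
gateway:
  replicas: 2
  auth:
    oidcIssuer: https://sso.example.internal    # inherits your existing SSO
graph:
  externalDatabase: postgres://graph.internal:5432/cgicore   # customer-owned DB
vectors:
  externalIndex: http://vectors.internal:8080
observability:
  otlpEndpoint: http://otel-collector.internal:4317   # your OpenTelemetry collector
telemetry:
  vendorPhoneHome: false    # nothing streams outside the perimeter
```

The point of the shape: every stateful dependency is an external, customer-owned endpoint, so the chart deploys compute only and inherits your secrets management and observability stack.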

Data sovereignty

Six principles that don’t flex.

What cgiCore commits to, regardless of posture, scale, or customer tier.

01

Customer-controlled storage

Graph, vectors, reasoning state, and event log live in databases you own. There is no shared tenancy.

02

Provenance on every fact

Nothing enters the graph anonymously. Source, extraction run, and authoring session are mandatory.

03

Typed audit surface

Every read, write, and reasoning step emits a typed event your SIEM can consume directly.

04

Per-workspace isolation

Keys, rate limits, retention, and policy run per workspace — a hard boundary, not a filter.

05

Bring-your-own model

Whether self-hosted or vendor-hosted, cgiCore acts as the LLM proxy and routes to whichever model you choose; it never forces a provider.

06

No telemetry back to us

Operational metrics stream to your own observability. We don’t ingest customer content, period.
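The provenance, typed-event, and workspace-isolation principles above can be sketched in miniature. All class names, fields, and event shapes below are illustrative assumptions, not cgiCore's actual schema:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical shapes: field names are illustrative, not cgiCore's schema.

@dataclass(frozen=True)
class Provenance:
    source: str          # originating system or document
    extraction_run: str  # pipeline run that produced the fact
    session: str         # authoring session identifier

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    provenance: Provenance  # mandatory: a Fact cannot exist without it

def audit_event(kind: str, workspace: str, fact: Fact) -> str:
    """Serialize a typed event as a JSON line a SIEM could ingest directly."""
    return json.dumps({
        "type": kind,
        "workspace": workspace,  # per-workspace boundary travels with every event
        "fact": asdict(fact),
    })

event = audit_event(
    "graph.write",
    "ws-finance",
    Fact("acme", "owns", "subsidiary-x",
         Provenance("confluence", "run-0042", "sess-7f")),
)
print(event)
```

Because provenance is a required constructor argument, nothing can enter the graph anonymously; the audit line carries the workspace, the operation type, and the full lineage in one record.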

Compatibility

Works with the stack you already run.

cgiCore is infrastructure that strengthens any downstream AI environment. If your agents speak OpenAI’s API, your apps speak HTTP, and your platform speaks Kubernetes — you’re ready.

Agent frameworks: LangChain · LlamaIndex · CrewAI · AutoGen · custom
Model providers: OpenAI · Anthropic · Google · Azure · self-hosted
Identity: OIDC · SAML · API keys · workspace-scoped
Observability: OpenTelemetry · Prometheus · Grafana · SIEM
Runtime: Docker · Kubernetes · EKS · OpenShift
Connectors: Documents · Jira · Confluence · Salesforce · custom
Storage: Customer-managed, inside your perimeter
Deployment: Helm chart · Docker Compose · CloudFormation
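As a sketch of the "speaks OpenAI's API" claim: the helper below builds an OpenAI-compatible chat-completion request aimed at a gateway. The endpoint path follows OpenAI's public wire format, but the base URL and model name are placeholders, not real cgiCore endpoints.

```python
def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion request.

    Any client that already speaks this wire format can point at the
    gateway unchanged; the proxy resolves `model` to whichever backend
    (self-hosted or vendor) the workspace is configured for.
    """
    return {
        "url": f"{base_url}/v1/chat/completions",  # OpenAI-style path
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = chat_request("https://cgicore.internal", "default", "Summarize Q3 risks")
print(req["url"])
```

Swapping providers then means changing gateway routing configuration, not client code.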