Domain adaptation through governed context.
cgiCore makes downstream models, assistants, and agents perform like domain specialists — not by retraining weights, but by assembling the right governed context at every call.
Load a domain. Everything downstream gets sharper.
A domain pack is a curated set of knowledge structures, extraction rules, and policies for a specific field — software delivery, clinical knowledge, financial controls, regulatory compliance. Load it, and any agent or assistant using cgiCore’s context immediately reasons inside that field.
- Shipped: Software delivery · Enterprise operations · Generic
- Custom: Build your own pack against the trait engine
- Composable: Multiple packs can coexist per workspace
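To make the shape of a domain pack concrete, here is a minimal sketch. cgiCore's real pack format is not shown on this page, so every name and field below is hypothetical — the sketch only illustrates the pieces the text names: entity types, the traits they carry, extraction rules, and policies.

```python
# Hypothetical pack declaration — illustrative only; cgiCore's actual
# pack format, field names, and trait names are not documented here.
software_delivery_pack = {
    "name": "software-delivery",
    "entities": {
        # Each entity type declares the behaviors (traits) it carries;
        # the engine dispatches on traits, never on the type name.
        "Decision": ["Supersedable", "Evidenced"],
        "Incident": ["Evidenced", "Escalatable"],
    },
    "extraction_rules": [
        {"match": "adr/*.md", "extract": "Decision"},
        {"match": "postmortem/*.md", "extract": "Incident"},
    ],
    "policies": [
        {"on": "contradiction", "action": "route_for_review"},
    ],
}

def traits_of(pack: dict, entity_type: str) -> list[str]:
    """Look up which behaviors an entity type carries in this pack."""
    return pack["entities"].get(entity_type, [])

print(traits_of(software_delivery_pack, "Decision"))  # ['Supersedable', 'Evidenced']
```

Loading a second pack into the same workspace would simply add more entity types and rules; nothing about the first pack's declarations needs to change.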
Traits, not types.
The engine never checks whether an entity is a Decision or a Regulation. It checks which behaviors that entity carries. Every engine action declares the behavior it requires, and only entities carrying it participate — built-in or custom. Types come and go; behaviors are the contract.
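The trait contract can be sketched with Python's structural typing. This is an analogy, not cgiCore's implementation: the class and trait names are invented, but the mechanic is the one the text describes — the engine action checks for a behavior, and any entity carrying it participates, built-in or custom.

```python
# Analogy only — cgiCore's trait engine is not public here; names are invented.
from typing import Optional, Protocol, runtime_checkable

@runtime_checkable
class Supersedable(Protocol):
    """A behavior (trait): anything that can be replaced by a newer version."""
    def superseded_by(self) -> Optional[str]: ...

class Decision:
    def __init__(self, id: str, successor: Optional[str] = None):
        self.id, self._successor = id, successor
    def superseded_by(self) -> Optional[str]:
        return self._successor

class Regulation:  # a different type, same behavior — that's all that matters
    def __init__(self, id: str, successor: Optional[str] = None):
        self.id, self._successor = id, successor
    def superseded_by(self) -> Optional[str]:
        return self._successor

def active(entities) -> list:
    # The engine action checks the behavior, never the concrete type:
    return [e for e in entities if isinstance(e, Supersedable)
            and e.superseded_by() is None]

items = [Decision("D-1"), Decision("D-2", successor="D-3"), Regulation("R-7")]
print([e.id for e in active(items)])  # ['D-1', 'R-7']
```

A custom entity type from your own pack would qualify the moment it implements `superseded_by` — no engine change, no registration of the type name.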
Two paths. One governed memory.
The read path infers which domains matter for the current call, assembles multi-signal context, and packs it into a tight budget. The write path ingests documents, connectors, and structured data — validating, normalizing, deduplicating, and embedding before anything reaches the graph.
See the read/write pipeline in architecture →
cgiCore routes the model call. You write the rules.
cgiCore acts as the LLM proxy for your stack. It speaks the standard chat-completions shape, so your agents and SDKs don’t change a line. What changes is what happens inside the call: governed context is injected, policy is evaluated, and the request is routed to the model you’ve configured for that workload — a frontier model for reasoning, an efficient model for bulk work, an on-prem model for sensitive data, or a specialist bound to a domain pack.
Swap providers, add a tier, hot-load a policy, route differently by workspace or domain. The SDK surface stays the same; the behavior adapts. Your models, your rules, our governed context.
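Because the proxy speaks the standard chat-completions shape, pointing an existing stack at it is a URL change. The sketch below builds (without sending) such a request; the proxy URL, the `X-Workspace` header, and the `reasoning-tier` model alias are all assumptions for illustration, not documented cgiCore values.

```python
# Sketch under assumptions: the URL, workspace header, and model alias are
# illustrative only. The point is the payload — it is the standard
# chat-completions shape, so existing SDKs need only a new base URL.
import json
import urllib.request

def chat_request(messages, model="reasoning-tier", workspace="platform-team"):
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        "https://cgicore.example.com/v1/chat/completions",  # hypothetical proxy
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_WORKSPACE_KEY",
            "X-Workspace": workspace,  # hypothetical per-workspace routing key
        },
    )

req = chat_request([{"role": "user", "content": "Summarize open delivery risks."}])
print(req.full_url)  # https://cgicore.example.com/v1/chat/completions
```

`urllib.request.urlopen(req)` would send it; whether `reasoning-tier` resolves to a frontier, open, or on-prem model is decided by the routing rules you configure, not by the caller.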
Domain experts, assembled on your data.
Once a domain pack is loaded, cgiCore can stand up a council — a set of specialists bound to the same governed context, each with its own remit: research, drafting, review, compliance. They share one memory and one source of truth, so handoffs don’t lose state and disagreements surface as contradictions, not silent drift.
- No fine-tuning: Specialists draw depth from the context layer, not model weights
- Shared memory: Every council member reads and writes the same governed store
- Structured handoffs: Review gates between phases; provenance follows the work
Specialists route through the same proxy. Each call sees the council’s prior work without re-sending it.
Policy, provenance, and audit from day one.
Every fact in the graph carries its lineage. Every contradiction routes through policy. Every read and write emits a typed event. Compliance, security, and platform teams inherit a full audit surface without adding a separate pipeline.
- Per-workspace: Isolation, keys, rate limits, retention
- Event bus: Typed events across reads, writes, and reasoning
- Audit log: ExtractionRun · EntityVersion · Evidence
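A typed audit event might look like the following. Only the event kinds (ExtractionRun, EntityVersion, Evidence) come from the list above; the field names and the `emit` helper are assumptions sketched to show what "every write emits a typed event" buys a compliance team.

```python
# Field names and the emit() helper are hypothetical — only the event kind
# EntityVersion comes from cgiCore's audit-log description above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class EntityVersion:
    entity_id: str
    version: int
    source_doc: str        # provenance: where this version of the fact came from
    extraction_run: str    # which ingestion pass produced it
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def emit(bus: list, event) -> None:
    """Append a typed event that audit tooling can later filter and replay."""
    bus.append(event)

bus: list = []
emit(bus, EntityVersion("decision:D-1", 2, "adr/0042.md", "run-7781"))
print(bus[0].entity_id)  # decision:D-1
```

Because events are typed and immutable, an audit query ("every fact derived from `adr/0042.md`") is a filter over the bus, not a log-parsing project.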
Fine-tuning moves weights. cgiCore moves context.
Fine-tuning is a powerful tool — for a narrow set of problems. For most enterprise knowledge workflows, the bottleneck isn’t the model, it’s the context. A frontier model given the right governed, contradiction-aware, provenance-rich context outperforms a fine-tuned model given noisy retrieval.
− Fine-tuning
- Model weights change, per model, per version
- Retraining cycles for new knowledge
- Opaque decision process; weak provenance
- Locked to a specific base model family
- Expensive to govern across a large org
+ Governed context (cgiCore)
- Base model weights untouched
- New knowledge lands as data, not as training runs
- Every fact traceable to its source
- Model-agnostic — use any LLM you trust
- Policy, audit, and contradiction handling built-in
cgiCore doesn’t train your base model by default and doesn’t claim to. It changes the context layer beneath the model so domain performance improves without touching weights.
The five questions we always get.
Short answers for CTOs, platform leads, and architects evaluating cgiCore against adjacent approaches.
Is this an agent platform?
No. cgiCore does not orchestrate agents or run workflows. It sits below whatever agent, assistant, or pipeline you already use and supplies the governed context they need to perform well.
Is this just RAG?
Semantic retrieval is one layer inside cgiCore, not the product. We combine multiple reasoning signals with governed context packing. RAG is table stakes; what matters is how the context is selected, reconciled, and trusted.
Does cgiCore replace our LLM?
No. cgiCore strengthens the intelligence layer beneath your models and agents. It acts as the LLM proxy for your stack and routes each call to whichever model you choose — frontier, open, private, or hosted.
How do domains get added?
Load an existing domain pack, or define your own against the trait engine. A pack declares the types, traits, extraction rules, and policies for a field; cgiCore handles ingestion, graph construction, reasoning, and context assembly.
Why is this different from fine-tuning?
Fine-tuning changes weights. cgiCore changes context. New knowledge lands as governed data with provenance, available immediately to every downstream model — no training run, no re-release, no per-model drift.