Platform

Domain adaptation through governed context.

cgiCore makes downstream models, assistants, and agents perform like domain specialists — not by retraining weights, but by assembling the right governed context at every call.

01 / Domain packs

Load a domain. Everything downstream gets sharper.

A domain pack is a curated set of knowledge structures, extraction rules, and policies for a specific field — software delivery, clinical knowledge, financial controls, regulatory compliance. Load it, and any agent or assistant using cgiCore’s context immediately reasons inside that field.

Shipped
Software delivery · Enterprise operations · Generic
Custom
Build your own pack against the trait engine
Composable
Multiple packs can coexist per workspace
Example: a clinical domain pack
[Diagram: a clinical pack (conditions, medications, interactions, guidelines, trial protocols) compiled into entities, traits, rules, and policies, consumed downstream by an assistant, a RAG app, an agent, and a pipeline.]
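A pack's declaration can be sketched as plain data. The structure and key names below are assumptions for illustration only, not cgiCore's actual schema:

```python
# Hypothetical domain-pack declaration (keys and values are illustrative).
clinical_pack = {
    "name": "clinical",
    "types": ["Condition", "Medication", "Interaction", "Guideline", "TrialProtocol"],
    "traits": {
        # Traits are behaviors an entity carries, independent of its type.
        "Guideline": ["source-backed", "versioned", "conflict-aware"],
        "Medication": ["source-backed", "versioned"],
    },
    "extraction_rules": ["dosage_pattern", "interaction_pair"],
    "policies": ["contradiction -> review-queue"],
}
```

Loading such a pack would make these structures available to every assistant, RAG app, agent, and pipeline in the workspace.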
02 / Trait-based modeling

Traits, not types.

The engine never checks whether an entity is a Decision or a Regulation. It checks which behaviors that entity carries. Every engine action declares the behavior it requires, and only entities carrying it, built-in or custom, participate. Types come and go; behaviors are the contract.

Example: who participates in an action?
[Diagram: two entities. A Clinical Guideline carries the traits source-backed, versioned, and conflict-aware; a Meeting Note carries only shareable. The engine action "Run consistency check" requires conflict-aware, so the guideline qualifies and the note is excluded.]
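The qualification check in the example reduces to a trait lookup. The sketch below is illustrative; the class and function names are assumptions, not cgiCore's API:

```python
# Hypothetical sketch of trait-based participation (names are illustrative).
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    traits: set[str] = field(default_factory=set)

def qualifies(entity: Entity, required_trait: str) -> bool:
    """An engine action never inspects the entity's type, only its traits."""
    return required_trait in entity.traits

guideline = Entity("Clinical Guideline", {"source-backed", "versioned", "conflict-aware"})
note = Entity("Meeting Note", {"shareable"})

# "Run consistency check" requires the conflict-aware trait.
print(qualifies(guideline, "conflict-aware"))  # True  -> qualifies
print(qualifies(note, "conflict-aware"))       # False -> excluded
```

A new entity type participates the moment it carries the right trait; no engine code changes.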
03 / Read & write paths

Two paths. One governed memory.

The read path infers which domains matter for the current call, assembles multi-signal context, and packs it into a tight budget. The write path ingests documents, connectors, and structured data — validating, normalizing, deduplicating, and embedding before anything reaches the graph.

See the read/write pipeline in architecture
[Diagram: read path: infer domain → graph retrieve → semantic retrieve → pattern signal → rank & pack. Write path: ingest → validate → normalize → dedup → embed → reason.]
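Both paths are ordered stage pipelines, which can be sketched generically. The stage names mirror the labels above; the function shapes are assumptions for illustration:

```python
# Hypothetical sketch: both paths as ordered stage pipelines (names illustrative).
READ_PATH = ["infer_domain", "graph_retrieve", "semantic_retrieve",
             "pattern_signal", "rank_and_pack"]
WRITE_PATH = ["ingest", "validate", "normalize", "dedup", "embed", "reason"]

def run_pipeline(stages, payload, registry):
    """Run each stage in order; every stage receives the prior stage's output."""
    for stage in stages:
        payload = registry[stage](payload)
    return payload

# Demo registry: each stage simply records that it ran, in order.
registry = {name: (lambda p, n=name: p + [n]) for name in READ_PATH + WRITE_PATH}
print(run_pipeline(READ_PATH, [], registry))
```

On the real write path, nothing would reach the graph until every stage before `reason` has passed.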
04 / cgiCore as LLM proxy

cgiCore routes the model call. You write the rules.

cgiCore acts as the LLM proxy for your stack. It speaks the standard chat-completions shape, so your agents and SDKs don’t change a line. What changes is what happens inside the call: governed context is injected, policy is evaluated, and the request is routed to the model you’ve configured for that workload — a frontier model for reasoning, an efficient model for bulk work, an on-prem model for sensitive data, or a specialist bound to a domain pack.

Swap providers, add a tier, hot-load a policy, route differently by workspace or domain. The SDK surface stays the same; the behavior adapts. Your models, your rules, our governed context.

Inside a single call
[Diagram: agent/SDK (your code) → cgiCore (context + policy) → your routing rules, which route this call to frontier (reasoning), efficient (bulk work), or on-prem (sensitive data): per call, per pack, per policy.]
Routing
Frontier · Efficient · On-prem · By domain pack
Policy
Evaluated per call, hot-loaded, versioned
Providers
Your keys, any compatible endpoint
Change surface
Rewrite rules, not agent code
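Routing rules of this kind can be sketched as a simple decision function. The tier names and call fields below are illustrative assumptions, not cgiCore's configuration format:

```python
# Hypothetical routing rules: map each call to a configured model tier.
# Tier names and call fields are illustrative assumptions.
def route(call: dict) -> str:
    if call.get("sensitive"):
        return "on-prem"            # sensitive data never leaves your estate
    if call.get("workload") == "reasoning":
        return "frontier"           # hard reasoning goes to a frontier model
    if call.get("domain_pack"):
        return f"specialist:{call['domain_pack']}"  # bound to a domain pack
    return "efficient"              # default tier for bulk work

print(route({"workload": "reasoning"}))   # frontier
print(route({"sensitive": True}))         # on-prem
print(route({"domain_pack": "clinical"})) # specialist:clinical
print(route({"workload": "bulk"}))        # efficient
```

Because the proxy speaks the standard chat-completions shape, changing these rules changes behavior without touching agent code.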
05 / Council of Specialists

Domain experts, assembled on your data.

Once a domain pack is loaded, cgiCore can stand up a council — a set of specialists bound to the same governed context, each with its own remit: research, drafting, review, compliance. They share one memory and one source of truth, so handoffs don’t lose state and disagreements surface as contradictions, not silent drift.

No fine-tuning
Specialists draw depth from the context layer, not model weights
Shared memory
Every council member reads and writes the same governed store
Structured handoffs
Review gates between phases; provenance follows the work
Council · clinical domain
[Diagram: Research (gathers evidence, builds context: reads) → Draft (produces structured output: writes) → Review (checks against policy, flags conflicts: gates), all over a shared, governed cgiCore context: one source of truth.]

Specialists route through the same proxy. Each call sees the council’s prior work without re-sending it.
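The shared-memory handoff can be sketched in a few lines. The phase names follow the council diagram; everything else is an illustrative assumption:

```python
# Hypothetical council sketch: three specialists over one shared store.
shared_memory: list[dict] = []  # one source of truth for the whole council

def research(topic: str) -> None:
    """Research phase reads sources and writes evidence into shared memory."""
    shared_memory.append({"phase": "research", "evidence": f"evidence for {topic}"})

def draft() -> None:
    """Draft phase builds on whatever evidence research has written."""
    evidence = [e for e in shared_memory if e["phase"] == "research"]
    shared_memory.append({"phase": "draft", "based_on": len(evidence)})

def review() -> str:
    """Review gates the handoff: nothing passes without a draft to check."""
    drafts = [e for e in shared_memory if e["phase"] == "draft"]
    return "pass" if drafts else "block"

research("drug interactions")
draft()
print(review())  # pass
```

Because every phase reads and writes the same store, a handoff carries full state rather than a summary of it.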

06 / Governance

Policy, provenance, and audit from day one.

Every fact in the graph carries its lineage. Every contradiction routes through policy. Every read and write emits a typed event. Compliance, security, and platform teams inherit a full audit surface without adding a separate pipeline.

Per-workspace
Isolation, keys, rate limits, retention
Event bus
Typed events across reads, writes, and reasoning
Audit log
ExtractionRun · EntityVersion · Evidence
Event bus · live trace
read.start    sess_a91b · domain=clinical    t+0ms
graph.hit     14 nodes · 22 edges            t+41ms
vector.hit    top-k=8 · recall=0.91          t+63ms
contradict    route → review queue           t+79ms
pack.done     1,842 tokens · sources=6       t+102ms
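A typed event like those in the trace might be modeled as follows; the field names are assumptions for illustration, not cgiCore's event schema:

```python
# Hypothetical typed event matching the trace above (field names illustrative).
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    type: str      # e.g. "read.start", "graph.hit", "contradict"
    session: str
    t_ms: int      # milliseconds since the call started
    detail: dict

trace = [
    Event("read.start", "sess_a91b", 0, {"domain": "clinical"}),
    Event("graph.hit", "sess_a91b", 41, {"nodes": 14, "edges": 22}),
    Event("contradict", "sess_a91b", 79, {"route": "review-queue"}),
]

# Events arrive time-ordered, so an audit view is just a filter over the bus.
assert trace == sorted(trace, key=lambda e: e.t_ms)
```

Frozen, typed events are what let compliance teams consume the bus without a separate pipeline.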
Why not retrain?

Fine-tuning moves weights. cgiCore moves context.

Fine-tuning is a powerful tool — for a narrow set of problems. For most enterprise knowledge workflows, the bottleneck isn’t the model, it’s the context. A frontier model given the right governed, contradiction-aware, provenance-tracked context outperforms a fine-tuned model given noisy retrieval.

Fine-tuning

  • Model weights change, per model, per version
  • Retraining cycles for new knowledge
  • Opaque decision process; weak provenance
  • Locked to a specific base model family
  • Expensive to govern across a large org

+ Governed context (cgiCore)

  • Base model weights untouched
  • New knowledge lands as data, not as training runs
  • Every fact traceable to its source
  • Model-agnostic — use any LLM you trust
  • Policy, audit, and contradiction handling built-in

cgiCore doesn’t train your base model by default and doesn’t claim to. It changes the context layer beneath the model so domain performance improves without touching weights.

FAQ

The five questions we always get.

Short answers for CTOs, platform leads, and architects evaluating cgiCore against adjacent approaches.

Is this an agent platform?

No. cgiCore does not orchestrate agents or run workflows. It sits below whatever agent, assistant, or pipeline you already use and supplies the governed context they need to perform well.

Is this just RAG?

Semantic retrieval is one layer inside cgiCore, not the product. We combine multiple reasoning signals with governed context packing. RAG is table stakes; what matters is how the context is selected, reconciled, and trusted.

Does cgiCore replace our LLM?

No. cgiCore strengthens the intelligence layer beneath your models and agents. It acts as the LLM proxy for your stack and routes each call to whichever model you choose — frontier, open, private, or hosted.

How do domains get added?

Load an existing domain pack, or define your own against the trait engine. A pack declares the types, traits, extraction rules, and policies for a field; cgiCore handles ingestion, graph construction, reasoning, and context assembly.

Why is this different from fine-tuning?

Fine-tuning changes weights. cgiCore changes context. New knowledge lands as governed data with provenance, available immediately to every downstream model — no training run, no re-release, no per-model drift.