When knowledge disagrees, your AI shouldn’t guess.
Enterprise AI fails quietly when contradictions are invisible. cgiCore detects them, surfaces them, and routes them through policy — so downstream models work from reconciled context, not from whichever document happened to rank first.
Most RAG stacks hide conflict.
Retrieval returns the top-k passages and passes them to the model. If two of those passages disagree, the LLM picks a side — silently, on your behalf, with no audit trail. That’s a governance problem.
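The failure mode above can be made concrete with a minimal sketch. Everything here is illustrative, not a cgiCore API: a toy top-k retrieval ranks passages by similarity score and joins them into a prompt, with nothing checking whether they agree.

```python
# Hypothetical sketch of top-k retrieval: two passages contradict each
# other, and both land in the prompt with no conflict flag or audit trail.

passages = [
    {"text": "Policy v3: data retention is 90 days.", "score": 0.91},
    {"text": "Policy v1 (superseded): data retention is 30 days.", "score": 0.89},
    {"text": "Onboarding guide: contact IT for access.", "score": 0.42},
]

# Rank purely by similarity and keep the top 2 -- nothing checks agreement.
top_k = sorted(passages, key=lambda p: p["score"], reverse=True)[:2]
context = "\n".join(p["text"] for p in top_k)

# Both retention figures reach the model; it picks a side silently.
print(context)
```

The model sees both "90 days" and "30 days" with equal standing, and whichever it repeats looks authoritative to the user.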
The loudest document wins.
Stale policies, draft specs, and superseded decisions show up alongside current ones. The model ranks by similarity, not by truth. Nobody can tell which answer came from which source — and nobody notices when the answers drift.
Conflicts become first-class events.
Every fact carries provenance. Symbolic reasoning surfaces logical disagreements. Policy decides who reviews, who defers, and who resolves. The context your model sees is reconciled, and the trail is fully auditable.
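One way to picture "every fact carries provenance" and a symbolic disagreement check is the sketch below. The field names and the contradiction rule are assumptions for illustration, not the cgiCore data model: two facts conflict here when they share a subject and predicate but assert different values.

```python
# Illustrative only -- not the cgiCore schema. Each fact records which
# document asserted it and when, so a conflict can name its sources.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    value: str
    source: str   # provenance: the document that asserted this
    as_of: str    # provenance: when it was asserted

facts = [
    Fact("retention_policy", "duration", "90 days", "policy_v3.pdf", "2024-06-01"),
    Fact("retention_policy", "duration", "30 days", "policy_v1.pdf", "2022-01-15"),
]

def contradictions(facts):
    """Yield pairs that share subject+predicate but disagree on value."""
    seen = {}
    for f in facts:
        key = (f.subject, f.predicate)
        if key in seen and seen[key].value != f.value:
            yield (seen[key], f)
        seen.setdefault(key, f)

conflicts = list(contradictions(facts))
```

Because each `Fact` names its source, the resulting conflict pair is already auditable: a reviewer sees which documents disagree, not just that an answer changed.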
Detect. Surface. Resolve. Preserve.
A contradiction is a workflow, not an alert. Each stage has an owner, a policy, and a persistent record.
A conflict the model shouldn’t resolve alone.
A concrete shape for what surfaces in the review queue, and how it reaches the downstream model.
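As a hedged sketch of that shape, the structure below shows one plausible review-queue item and how it gates the context packet. Every field name here is an assumption for illustration; the actual schema is defined by cgiCore and your policies.

```python
# Hypothetical review-queue item: the conflict, its claims with provenance,
# the policy route, and the (not yet filled) resolution.
conflict_event = {
    "id": "conflict-0042",
    "status": "open",            # detect -> surface -> resolve -> preserve
    "claims": [
        {"value": "90 days", "source": "policy_v3.pdf", "as_of": "2024-06-01"},
        {"value": "30 days", "source": "policy_v1.pdf", "as_of": "2022-01-15"},
    ],
    "policy": {"route": "owner_review", "owner": "data-governance"},
    "resolution": None,          # written when an owner or rule decides
}

def context_packet(event):
    """Only resolved claims reach the downstream model; open conflicts
    are surfaced by id instead of being silently included."""
    if event["resolution"] is None:
        return {"facts": [], "pending_conflicts": [event["id"]]}
    return {"facts": [event["resolution"]], "pending_conflicts": []}

packet = context_packet(conflict_event)
```

The design point: an unresolved conflict is withheld from the model and surfaced as a pending item, so the downstream answer never quietly depends on a disputed fact.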
Reasoning and policy own the outcome.
The ML layer is a supporting signal. Final authority stays with the reasoning engine and the policies your domain owners define.
Authority
- Declares what counts as a contradiction
- Enforces workspace & domain policy
- Routes cases to owners or auto-resolve rules
- Governs what enters the context packet
- Writes the audit trail and provenance record
Support
- Structural embeddings for ranking candidates
- Pattern support across neighborhoods
- Resolution assist — proposes, never decides
- Surfaces subtle similarities that rule-based reasoning would miss
- Always deferrable to the reasoning layer
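The authority/support split above can be sketched in a few lines. This is a hedged illustration of "proposes, never decides", with invented function names and a made-up confidence threshold, not cgiCore's actual policy engine.

```python
# Illustrative sketch: the ML layer ranks a candidate resolution, but only
# the policy layer can accept it or route it to an owner.

def ml_propose(claims):
    # Supporting signal: suggest the most recently asserted claim,
    # with a confidence score. It never acts on its own suggestion.
    best = max(claims, key=lambda c: c["as_of"])
    return {"proposal": best, "confidence": 0.72}

def policy_decide(suggestion, auto_resolve_threshold=0.95):
    # Authority: accept only above the threshold a domain owner set;
    # otherwise defer the case to a human owner.
    if suggestion["confidence"] >= auto_resolve_threshold:
        return {"decision": suggestion["proposal"], "decided_by": "auto_resolve_rule"}
    return {"decision": None, "decided_by": "routed_to_owner"}

claims = [
    {"value": "90 days", "as_of": "2024-06-01"},
    {"value": "30 days", "as_of": "2022-01-15"},
]
outcome = policy_decide(ml_propose(claims))
```

Here the proposal's confidence falls below the owner-defined threshold, so the case routes to a human rather than auto-resolving: the ML signal informed the workflow but did not decide it.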