
Knowledge Graph — What It Is

The Knowledge Graph uses LLMs (OpenAI) to extract entities, decisions, relationships, and constraints from natural-language documents. Results are persisted to Neo4j for rich graph queries.

Given a document about authentication architecture, the KG extracts:

Entity: AuthService (component)
Entity: JWT (technology)
Entity: Session-based auth (option)
Relationship: AuthService USES JWT
Relationship: Team DECIDED_FOR JWT
Relationship: Team DECIDED_AGAINST Session-based auth
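As a sketch of how triples like these could be persisted to Neo4j, the helper below renders each one as an idempotent Cypher `MERGE` statement. The function, the type-to-label mapping, and the `name` property are illustrative assumptions, not the tool's actual persistence code:

```python
# Sketch: mapping extracted triples to Cypher MERGE statements.
# Labels and relationship names come from the example above; the
# rendering itself is an assumption about how persistence might look.

def triple_to_cypher(subject, subject_type, predicate, obj, obj_type):
    """Render one (subject)-[predicate]->(object) triple as idempotent Cypher."""
    return (
        f"MERGE (a:{subject_type} {{name: '{subject}'}}) "
        f"MERGE (b:{obj_type} {{name: '{obj}'}}) "
        f"MERGE (a)-[:{predicate}]->(b)"
    )

triples = [
    ("AuthService", "Component", "USES", "JWT", "Technology"),
    ("Team", "Role", "DECIDED_FOR", "JWT", "Technology"),
    ("Team", "Role", "DECIDED_AGAINST", "Session-based auth", "Option"),
]

statements = [triple_to_cypher(*t) for t in triples]
print(statements[0])
# → MERGE (a:Component {name: 'AuthService'}) MERGE (b:Technology {name: 'JWT'}) MERGE (a)-[:USES]->(b)
```

Using `MERGE` rather than `CREATE` keeps re-runs idempotent: extracting the same document twice does not duplicate nodes or edges.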

You can then query:

```sh
iw query "What decisions were made about authentication?"
# → JWT was chosen over session-based auth for the authentication system
# → Reasons: statelessness, horizontal scaling, API compatibility
```
Extraction runs as a four-stage pipeline:

  1. IN — Chunks documents (semantic markdown splitting, ~16k chars/chunk)
  2. FX — LLM extracts raw entity-relationship triples per chunk
  3. KX — Canonicalizes entities and predicates into a consistent schema (30 predicate types)
  4. GX — Cross-document entity deduplication (exact + fuzzy merge)
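The pipeline stages above can be sketched in a few lines of Python. The chunk splitter, the tiny predicate map, and the 0.9 similarity threshold are illustrative assumptions standing in for the real IN, KX, and GX logic:

```python
# Minimal sketch of three of the four stages (IN, KX, GX).
# FX (LLM extraction) is omitted; its output is assumed to be raw triples.
from difflib import SequenceMatcher

def chunk(text, max_chars=16_000):
    """IN: split on paragraph breaks, packing up to ~max_chars per chunk."""
    parts, buf = [], ""
    for para in text.split("\n\n"):
        if buf and len(buf) + len(para) > max_chars:
            parts.append(buf)
            buf = ""
        buf += para + "\n\n"
    if buf:
        parts.append(buf)
    return parts

# KX: map raw LLM predicates onto the canonical vocabulary (tiny sample map).
CANONICAL = {"uses": "USES", "chose": "DECIDED_FOR", "rejected": "DECIDED_AGAINST"}

def canonicalize(triple):
    s, p, o = triple
    return (s.strip(), CANONICAL.get(p.lower(), p.upper()), o.strip())

def dedup(names, threshold=0.9):
    """GX: merge entity names that match exactly or fuzzily across documents."""
    merged = []
    for name in names:
        match = next(
            (m for m in merged
             if SequenceMatcher(None, name.lower(), m.lower()).ratio() >= threshold),
            None,
        )
        if match is None:
            merged.append(name)
    return merged

print(dedup(["AuthService", "authservice", "JWT"]))  # → ['AuthService', 'JWT']
```

In practice the fuzzy-merge threshold is a tuning knob: too low and distinct entities collapse together, too high and near-duplicates survive as separate nodes.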

Entity types: concept, decision, option, requirement, feature, component, technology, resource, role, risk, phase, constraint, question, tradeoff

Relationship types: CONTAINS, DEPENDS_ON, IMPLEMENTS, EXTENDS, CALLS, USES, DECIDED_FOR, DECIDED_AGAINST, SUPERSEDES, ENABLES, BLOCKS, RISKS, and more.
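A fixed predicate vocabulary makes extracted output checkable. The sketch below, assuming only the predicates listed in this section (the full schema has 30), filters out relationships that have not yet been canonicalized:

```python
# Sketch: validating extracted relationships against the predicate vocabulary.
# PREDICATES lists only the names given in this section, not the full schema.
PREDICATES = {
    "CONTAINS", "DEPENDS_ON", "IMPLEMENTS", "EXTENDS", "CALLS", "USES",
    "DECIDED_FOR", "DECIDED_AGAINST", "SUPERSEDES", "ENABLES", "BLOCKS", "RISKS",
}

def is_valid(triple):
    """Accept a triple only if its predicate is in the canonical schema."""
    _, predicate, _ = triple
    return predicate in PREDICATES

triples = [
    ("AuthService", "USES", "JWT"),
    ("Team", "PREFERS", "JWT"),  # raw LLM output; needs canonicalization first
]
print([t for t in triples if is_valid(t)])
# → [('AuthService', 'USES', 'JWT')]
```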

| Need | Use |
| --- | --- |
| Find relevant files | CARI (free, fast) |
| Check doc freshness | CARI (free, fast) |
| Understand decisions and rationale | KG |
| Trace impact of architectural changes | KG |
| Query natural-language questions | KG |
| Build RAG context for AI agents | KG |
| Dependency | Purpose |
| --- | --- |
| Neo4j 5 | Graph storage (Docker recommended) |
| OpenAI API key | LLM extraction + NL queries |
  • Try It — set up Neo4j and run your first extraction
  • CLI Reference — full command documentation