# Level 11.25: Interactive REPL Mode
**Golden Chain Cycle:** Level 11.25 · **Date:** 2026-02-16 · **Status:** COMPLETE, 153/153 (100%)
## Key Metrics
| Test | Description | Result | Status |
|---|---|---|---|
| Test 127 | REPL Multi-Turn Session (direct, chain, mixed, determinism) | 50/50 (100%) | PASS |
| Test 128 | REPL Session Statistics (per-relation, segments, cumulative) | 40/40 (100%) | PASS |
| Test 129 | REPL Conversation Continuity (follow-up, cross-domain, bidirectional, similarity) | 63/63 (100%) | PASS |
| Total | Level 11.25 | 153/153 (100%) | PASS |
| Full Regression | All 401 tests | 397 pass, 4 skip, 0 fail | PASS |
| CLI REPL | `zig build query -- --repl` | Compiles and runs | PASS |
## What This Means

### For Users

- Trinity now has an interactive REPL: run `zig build query -- --repl` and query continuously without restarting
- Type natural commands: `Paris capital_of`, `chain Eiffel landmark_in capital_of`, `list`, `relations`, `help`
- Session tracking: the REPL counts your queries and reports statistics on exit
- Follow-up workflows: query Paris → get France, then query France → get French, all within one session
- Type `quit` or press Ctrl+D to exit gracefully with a session summary
### For Operators

- The REPL binary is the same `zig-out/bin/trinity-query`; just add the `--repl` flag
- All queries within a session use the same pre-built KG (built once at startup)
- Deterministic: identical queries always produce identical results, even across sessions
- No state leakage between queries: each query is independent within the stateless KG
### For Investors
- Level 11.25 delivers the first conversational interface for Trinity's symbolic AI
- The REPL proves Trinity VSA can power interactive, multi-turn reasoning sessions
- 25 development cycles: from basic ternary operations to a complete conversational symbolic AI tool
- This is the foundation for chat interfaces, web APIs, and LLM-integrated reasoning
## REPL Usage

### Starting a Session

    $ zig build query -- --repl
    Building knowledge graph (30 entities, 5 relations, DIM=1024)...
    KG ready.
    Trinity REPL v1.0.0 -- Interactive Symbolic Query Session
    Commands: <entity> <relation> | chain <entity> <rel1> <rel2> ...
              list | relations | info | help | quit
### Direct Queries

    trinity> Paris capital_of
    capital_of(Paris) = France (sim=0.278)
    trinity> Sushi cuisine_of
    cuisine_of(Sushi) = Japan (sim=0.278)
### Chain Queries

    trinity> chain Eiffel landmark_in capital_of
    Eiffel --[landmark_in]--> Paris --[capital_of]--> France
    trinity> chain Colosseum landmark_in capital_of
    Colosseum --[landmark_in]--> Rome --[capital_of]--> Italy
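The mechanics behind these direct and chain queries can be sketched with a toy VSA model. This is an illustrative Python sketch under assumed conventions (bipolar ±1 vectors, bind as elementwise multiply, relation memory as a flat sum of bound pairs); the real tool is written in Zig and uses its own seeds and tree bundling, so names, DIM, and similarity values here are not the tool's.

```python
import numpy as np

# Toy VSA sketch (assumed model, not the actual Zig implementation):
# bipolar vectors, bind = elementwise multiply, memory = sum of bound pairs.
rng = np.random.default_rng(42)
DIM = 2048
names = ["Paris", "Tokyo", "Rome", "France", "Japan", "Italy",
         "Eiffel", "Colosseum"]
vec = {n: rng.choice([-1, 1], size=DIM) for n in names}

def bundle(pairs):
    # Relation memory: superposition of bound (subject, object) pairs.
    return np.sum([vec[a] * vec[b] for a, b in pairs], axis=0)

capital_of = bundle([("Paris", "France"), ("Tokyo", "Japan"),
                     ("Rome", "Italy")])
landmark_in = bundle([("Eiffel", "Paris"), ("Colosseum", "Rome")])

def query(memory, entity):
    # Unbind: since x*x = 1 elementwise, multiplying by the entity cancels
    # its half of the bound pair, leaving the partner plus noise.
    probe = vec[entity] * memory
    sims = {n: probe @ v / (np.linalg.norm(probe) * np.sqrt(DIM))
            for n, v in vec.items() if n != entity}
    return max(sims, key=sims.get)   # nearest neighbor by cosine similarity

def chain(entity, memories):
    for m in memories:               # each hop's answer feeds the next query
        entity = query(m, entity)
    return entity

print(query(capital_of, "Paris"))                  # direct query
print(chain("Eiffel", [landmark_in, capital_of]))  # multi-hop chain
```

The noise from the other bundled pairs is why answers are ranked by similarity rather than matched exactly, and why the reported sim values sit well below 1.0.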
### Discovery & Help

    trinity> list
    Entities (30):
      Cities: Paris, Tokyo, Rome, London, Cairo
      Countries: France, Japan, Italy, UK, Egypt
      ...
    trinity> relations
    Relations (5):
      capital_of: city -> country
      landmark_in: landmark -> city
      ...
    trinity> help
    Commands:
      <entity> <relation>               Direct query
      chain <entity> <rel1> [rel2] ...  Multi-hop chain
      list                              Show entities
      relations                         Show relations
      info                              Show KG info
      quit                              Exit REPL
    trinity> quit
    Session ended. 4 queries executed.
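The command grammar visible in this transcript can be sketched as a small tokenize-and-dispatch routine. This is an illustrative Python sketch of the behavior inferred from the transcript, not the actual Zig code in `src/query_cli.zig`; the returned tuples are a hypothetical representation.

```python
# Sketch of REPL dispatch (assumed behavior, inferred from the transcript).
KEYWORDS = {"list", "relations", "info", "help", "quit"}

def dispatch(line):
    tokens = line.strip().split()    # whitespace trimming + tokenization
    if not tokens:
        return ("empty",)
    head = tokens[0].lower()
    if head in KEYWORDS:
        return (head,)
    if head == "chain" and len(tokens) >= 3:
        # chain <entity> <rel1> [rel2] ...
        return ("chain", tokens[1], tokens[2:])
    if len(tokens) == 2:
        # <entity> <relation>
        return ("query", tokens[0], tokens[1])
    return ("error", line)

print(dispatch("chain Eiffel landmark_in capital_of"))
```

A driver loop would read stdin lines, route each tuple to the matching handler, bump the query counter for `query`/`chain` results, and exit on `quit` or EOF with the session summary.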
## Technical Details

### Test 127: REPL Multi-Turn Session (50/50)
Architecture: Simulates a complete multi-turn REPL session with sequential query execution.
Four sub-tests:
| Sub-test | Description | Result |
|---|---|---|
| Sequential direct | 10 queries across all 5 relations | 10/10 (100%) |
| Sequential chains | 5 landmark→city→country chains (2 checks each) | 10/10 (100%) |
| Mixed session | Alternating direct + chain queries (15 checks) | 15/15 (100%) |
| Deterministic replay | 15 queries re-executed for consistency | 15/15 (100%) |
Key result: Query order has zero effect on accuracy; the KG is fully stateless.
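The determinism behind the replay sub-test comes down to seeded vector generation: the same seed always yields the same bipolar vector, so every replayed query resolves over bit-identical memories. A minimal illustration (Python sketch; `bipolar` here is a stand-in, not the tool's `bipolarRandom()`):

```python
import numpy as np

def bipolar(seed, dim=1024):
    # Same seed -> bit-identical bipolar vector on every call, so any query
    # replay over vectors built this way is bit-identical too.
    return np.random.default_rng(seed).choice([-1, 1], size=dim)

first = bipolar(7)
replay = bipolar(7)
assert np.array_equal(first, replay)
print("deterministic replay: OK")
```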
### Test 128: REPL Session Statistics (40/40)
Architecture: Tracks per-relation accuracy, session segment consistency, and cumulative milestones.
Three sub-tests:
| Sub-test | Description | Result |
|---|---|---|
| Per-relation accuracy | 5 relations × 5 pairs = 25 queries | 25/25 (100%) |
| Segment consistency | First 5 vs last 5 identical queries | 10/10 (100%) |
| Cumulative milestones | All 5 relations at 100% | 5/5 (100%) |
Key result: Every relation independently achieves 100% accuracy. Session segments produce bit-identical results.
### Test 129: REPL Conversation Continuity (63/63)
Architecture: Verifies multi-step workflows where each query result feeds the next query.
Four sub-tests:
| Sub-test | Description | Result |
|---|---|---|
| Follow-up workflows | city→country→language (2 steps × 5 cities) | 10/10 (100%) |
| Cross-domain exploration | landmark→city→country→cuisine (3 steps × 5) | 15/15 (100%) |
| Bidirectional verification | city→country and country→city (both ways × 5) | 10/10 (100%) |
| Similarity consistency | 25 threshold + 3 range checks | 28/28 (100%) |
Sample follow-up workflows:
- Paris → France → French (2-step)
- Tokyo → Japan → Japanese (2-step)
- Eiffel → Paris → France → Croissant (3-step cross-domain)
- Pyramids → Cairo → Egypt → Falafel (3-step cross-domain)
Similarity range: min 0.266, max 0.871, spread 0.605. All results sit well above the 0.10 threshold.
Key insight: Bidirectional queries work because bipolar bind is commutative (bind(city, country) = bind(country, city)), so unbinding a country from the city→country memory resolves the city.
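This commutativity can be checked directly: bipolar bind is elementwise multiplication, and every element squares to 1, so unbinding either operand from a single bound pair recovers the other exactly. A toy Python sketch (assumed conventions, not the Zig implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 1024
city = rng.choice([-1, 1], size=DIM)
country = rng.choice([-1, 1], size=DIM)

pair = city * country                          # bipolar bind
assert np.array_equal(pair, country * city)    # bind is commutative
assert np.array_equal(pair * city, country)    # unbind city  -> country
assert np.array_equal(pair * country, city)    # unbind country -> city
print("bidirectional unbind: OK")
```

In a bundled relation memory the recovery is only approximate, because the other pairs contribute noise, which is why results are ranked by similarity rather than matched exactly.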
## REPL Architecture

    src/query_cli.zig (576 lines)
    ├── Entity definitions (30 names, seeds)
    ├── Relation definitions (5 types, pairs)
    ├── bipolarRandom(): same seeds as test suite
    ├── treeBundleN(): relation memory construction
    ├── findEntity(): case-insensitive name lookup
    ├── findRelation(): relation name lookup
    ├── main(): argument parsing
    │   ├── --info / --list / --relations (no KG needed)
    │   ├── --repl → runRepl() (continuous session)
    │   ├── --chain <entity> <rel1> <rel2> ... (single chain)
    │   └── <entity> <relation> (single direct query)
    ├── executeQuery(): single named query execution
    ├── executeChain(): multi-hop chain from tokens
    └── runRepl(): REPL loop
        ├── stdin line reading (character-by-character)
        ├── whitespace trimming + tokenization
        ├── command dispatch (query/chain/list/relations/info/help/quit)
        ├── query counter tracking
        └── EOF / quit graceful exit with session summary
## .vibee Specifications

Three specifications were created and compiled:

- `specs/tri/repl_multi_turn_session.vibee`: multi-turn session simulation
- `specs/tri/repl_session_statistics.vibee`: session statistics tracking
- `specs/tri/repl_conversation_continuity.vibee`: conversation continuity workflows

All compiled via `vibeec` → `generated/*.zig`
## Cumulative Level 11 Progress
| Level | Tests | Description | Result |
|---|---|---|---|
| 11.1-11.15 | 73-105 | Foundation through Massive Weighted | PASS |
| 11.17 | - | Neuro-Symbolic Bench | PASS |
| 11.18 | 106-108 | Full Planning SOTA | PASS |
| 11.19 | 109-111 | Real-World Demo | PASS |
| 11.20 | 112-114 | Full Engine Fusion | PASS |
| 11.21 | 115-117 | Deployment Prototype | PASS |
| 11.22 | 118-120 | User Testing | PASS |
| 11.23 | 121-123 | Massive KG + CLI Dispatch | PASS |
| 11.24 | 124-126 | Interactive CLI Binary | PASS |
| 11.25 | 127-129 | Interactive REPL Mode | PASS |
Total: 401 tests, 397 pass, 4 skip, 0 fail
## Critical Assessment

### Strengths

- First conversational interface: users can interact with Trinity in multi-turn sessions
- 100% accuracy maintained across all 153 test queries, including follow-up, cross-domain, and bidirectional workflows
- Stateless KG design: no state leakage between queries; deterministic replay verified
- Bidirectional queries work via the commutative bind: both forward and reverse lookups resolve correctly
- Session tracking: the query counter provides basic usage analytics
### Weaknesses

- No session history: the REPL doesn't remember previous query results, so users must re-type entities
- No tab completion: entity and relation names require exact typing (no autocomplete)
- Single KG per session: a different knowledge graph cannot be loaded during a REPL session
- No output formatting options: results are always plain text, with no JSON/CSV export
## Tech Tree Options for Next Iteration
| Option | Description | Difficulty |
|---|---|---|
| A. File-Based KG Loading | Load entities and relations from JSON/CSV file at startup | Medium |
| B. Session History & Recall | Store previous results; enable `!1` to recall the result of query #1 | Medium |
| C. REST API Server | HTTP endpoint for querying the KG from web clients | Hard |
## Conclusion

Level 11.25 delivers the first interactive REPL mode for Trinity's symbolic reasoning engine. Users can start a continuous query session, execute direct lookups and multi-hop chains, discover entities and relations, and build multi-step reasoning workflows, all within a single interactive session.
The REPL achieves 100% accuracy across 153 test queries spanning sequential sessions, session statistics, follow-up workflows, cross-domain exploration, bidirectional verification, and similarity consistency checks.
Trinity Conversational. REPL Lives. Quarks: Sessioned.