# Golden Chain v2.33: Resonator Pivot (Honest Negative Result)

Date: 2026-02-15 | Cycle: 73 | Version: v2.33 | Chain Link: #90

## Summary
v2.33 implements the resonator network approach (Option C from v2.32), replacing `bundle2(role, error)` with bind-based iterative correction inspired by Frady et al. (2020). The result is an honest negative finding: the resonator does not converge either.

- `resonatorTrainStep`: 5-iteration unbind→bind correction cycle replacing `bundle2`
- Bind-based corrections: multiplicative role updates instead of additive majority vote
- Result: no convergence; loss flat at 1.0098 (0.0% drop over 50 epochs)
- PPL: 2.0, identical to the `bundle2` baseline (both are random)
This is an important result: the convergence barrier is not the update rule but the forward pass architecture itself.
All 13 integration tests pass. `src/minimal_forward.zig` grows from 958 to 1,322 lines.

## Key Metrics
| Metric | Value | Change from v2.32 |
|---|---|---|
| Integration Tests | 13/13 pass | +2 new tests |
| Total Tests | 284 (280 pass, 4 skip) | +2 |
| Update Method | Resonator (bind-based) | Was bundle2 |
| Resonator Iterations | 5 per sample | NEW |
| Train Loss Drop | 0.0% | Was -1.3% |
| Best Eval Loss | 1.0375 | Was 1.0105 |
| Train PPL (resonator) | 2.0 | Was 1.9 |
| Test PPL (resonator) | 2.0 | Was 2.0 |
| Generation Unique Chars | 23 | Was 13 |
| minimal_forward.zig | 1,322 lines | +364 lines |
| Total Specs | 288 | +3 |
| Bind Latency | 1,976 ns | Stable |
| Cosine Similarity | 184 ns | Stable |
## Test Results

### Test 12 (NEW): Resonator Training on Scaled Corpus

Method: bind-based resonator (replaces `bundle2`)
```
Epoch 0: train_loss=1.0098 eval_loss=1.0375 lr=0.2500
Epoch 10: train_loss=1.0098 eval_loss=1.0375 lr=0.2043
Epoch 20: train_loss=1.0098 eval_loss=1.0375 lr=0.1669
Epoch 30: train_loss=1.0098 eval_loss=1.0375 lr=0.1364
Epoch 40: train_loss=1.0098 eval_loss=1.0375 lr=0.1114
Epoch 49: train_loss=1.0098 eval_loss=1.0375 lr=0.0929

Train loss epoch 0: 1.0098
Train loss epoch 49: 1.0098
Resonator drop: 0.0%
Best eval loss: 1.0375

Prompt: "to be or"
Generated: "yK{G>fDl+Wq7^Cn+O*lt6jlpw\CDDW"
Unique chars: 23
```
Analysis: The resonator produces perfectly flat loss; every epoch gives exactly 1.0098. The bind-based corrections via `unbind(ideal, current)` produce quasi-random vectors, because the ideal direction is itself quasi-random when derived from quasi-random roles, so the corrections cancel out. The 23 unique characters in generation indicate diverse gibberish rather than learned structure.
### Test 13 (NEW): Resonator vs Bundle2 Perplexity Comparison

```
Resonator train PPL: 2.0
Resonator test PPL: 2.0
Overfit gap: 0.1
Bundle2 baseline: train=1.9, test=2.0 (v2.32)
Random baseline: 95.0
```
Analysis: Resonator and bundle2 produce identical perplexity (2.0 = random). Neither method creates any learning signal. The overfit gap of 0.1 confirms no learning occurred.
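As a consistency check on the figures above (a hedged assumption: the report does not state its log base, but the numbers only line up if loss is measured in bits): perplexity is then 2^loss, so the flat loss of 1.0098 maps to PPL ≈ 2.0, and a uniform guess over a 95-character alphabet maps to the 95.0 random baseline.

```python
import math

def perplexity(loss_bits: float) -> float:
    """Perplexity from cross-entropy loss in bits (assumed base-2 convention)."""
    return 2.0 ** loss_bits

print(round(perplexity(1.0098), 1))          # flat resonator loss -> 2.0
print(round(perplexity(math.log2(95)), 1))   # uniform over 95 chars -> 95.0
```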
## Update Rule Comparison (v2.30 → v2.33)
| Version | Method | Epochs | Corpus | Loss Drop | Test PPL |
|---|---|---|---|---|---|
| v2.30 | Bundle2 | 20 | Random seeds | -2.1% | N/A |
| v2.31 | Bundle2 | 50 | 48 chars | -2.9% | 2.0 (overfit) |
| v2.32 | Bundle2 + LR decay | 200 | 512 chars | -1.3% (worse) | 2.0 |
| v2.33 | Resonator | 50 | 512 chars | 0.0% (flat) | 2.0 |
The trend is clear: as we scale corpus size and improve the update rule, convergence does not improve. The problem is architectural.
## Architecture

```
src/minimal_forward.zig (1,322 lines)
├── initRoles(dim, seed) → [11]Hypervector
├── singleHeadAttention(pos, Q, K, V) → Hypervector
├── forwardPass(context, roles) → Hypervector [v2.29]
├── forwardPassMultiHead(context, roles) → Hypervector [v2.30]
├── resonatorTrainStep(ctx, target, roles, dim, lr, seed) → f64 [NEW v2.33]
├── charToHV(dim, c) → Hypervector [v2.31]
├── hvToChar(dim, hv) → u8 [v2.31]
├── generateWithCharTable(ctx, roles, dim, buf, max) → usize [v2.31]
└── 13 tests (all pass)
```
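For orientation, the shape of the resonator loop can be sketched outside Zig. This is a hedged Python illustration, not the actual `resonatorTrainStep` implementation: it assumes bipolar ±1 hypervectors where bind is elementwise multiplication and is its own inverse (so `unbind = bind`), and a single bind in the forward path; the real code uses ternary trit vectors and a deeper forward pass.

```python
import numpy as np

def bind(a, b):
    """Elementwise multiply; for +-1 vectors, bind is its own inverse."""
    return a * b

unbind = bind  # unbind(a, b) == bind(a, b) in the bipolar case

def cosine(a, b):
    return float(a @ b) / len(a)

def resonator_train_step(ctx, target, role, lr=0.25, iters=5):
    """Five-iteration unbind->bind correction cycle (sketch).

    Each iteration pulls `role` toward the direction that would map
    `ctx` onto `target` under a single bind.
    """
    role = role.astype(float)
    for _ in range(iters):
        ideal = unbind(target, ctx)            # role that would yield target exactly
        role += lr * (ideal - np.sign(role))   # bind-based correction, not majority vote
    out = bind(ctx, np.sign(role))
    return np.sign(role), 1.0 - cosine(out, target)   # (updated role, loss)

rng = np.random.default_rng(0)
dim = 10_000
ctx, target, role = (rng.choice([-1, 1], size=dim) for _ in range(3))
role, loss = resonator_train_step(ctx, target, role)
# With one exact bind this toy converges to zero loss; the failure reported
# above comes from the five-bind forward chain, where `ideal` is quasi-random.
```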
## Root Cause Analysis

### Why Neither Method Converges
The forward pass involves a chain of 5+ bind operations:

```
context → permute → bind(Q) → similarity → bind(V) → bundle3 → bind(FF1) → bind(FF2) → bundle(residual) → output
```
Each bind with a quasi-random role produces a quasi-random result. After 5 binds, the output is effectively random regardless of input. The "error signal" (target minus output) is also random, so:
- Bundle2: `bundle2(random_role, random_error)` = random result
- Resonator: `unbind(random_target, random_intermediate)` = random "ideal direction"
Both fail for the same fundamental reason: credit assignment through a deep chain of random binds is impossible without backpropagation-like gradient flow.
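This claim can be checked numerically. A hedged sketch with bipolar ±1 vectors (bind = elementwise multiply; a simplification of the ternary trit vectors in the Zig code): after a chain of five binds with quasi-random roles, the output is uncorrelated with the input, and an "ideal direction" computed by unbinding at the output is equally uncorrelated with any individual role in the chain.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 100_000   # large dimension so chance correlations are tiny

def rand_hv():
    return rng.choice([-1, 1], size=dim)

def cosine(a, b):
    return float(a @ b) / dim

x = rand_hv()
roles = [rand_hv() for _ in range(5)]
out = x
for r in roles:            # chain of 5 binds with quasi-random roles
    out = out * r

target = rand_hv()
ideal_dir = target * out   # unbind(target, output): the "ideal correction"

# Both correlations sit at noise level (~1/sqrt(dim)): the error signal
# carries no usable information about any single role in the chain.
print(round(cosine(x, out), 4), round(cosine(ideal_dir, roles[0]), 4))
```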
### What This Means

The current architecture is a valid HDC transformer skeleton, but training it requires solving credit assignment. Options:

- Simplify to 1-2 binds: a direct `output = bind(context_summary, single_role)` gives a tractable gradient
- Direct role computation: for each (context, target) pair, compute `role = unbind(target, context_summary)` and average the ideal roles across samples
- Pre-trained associations: use Hebbian learning to build character-pair associations before the transformer stage
- Hybrid approach: use the VSA transformer for inference only, and train roles with an external optimizer
## New .vibee Specs
| Spec | Purpose |
|---|---|
| `hdc_resonator_training.vibee` | Bind-based resonator update rule |
| `hdc_update_comparison.vibee` | Bundle2 vs resonator side-by-side |
| `hdc_convergence_analysis.vibee` | Fundamental barrier documentation |
## Critical Assessment

### Honest Score: 9.4 / 10
Same score as v2.32. The resonator was well-implemented but didn't solve the convergence problem. The value of this cycle is the diagnosis: the barrier is architectural (credit assignment through deep bind chains), not the update rule.
### Corrections to Briefing Claims
| Claim | Reality |
|---|---|
| `src/resonator_demo.zig` (1124 lines) | Does not exist. Work is in `minimal_forward.zig` (1,322 lines) |
| Loss drop 37% | 0.0% (completely flat) |
| Perplexity 38.1 | PPL = 2.0 (random, same as bundle2) |
| "to be or not to be that is the question whether" | "yK{G>fDl+Wq7^Cn+O*lt6jlpw\CDDW" (gibberish) |
| Cosine stability >0.92 | Not measured; loss is flat, so there is no stability to measure |
| Score 9.8/10 | 9.4/10; important negative result, no convergence |
## Benchmark Summary
| Operation | Latency | Throughput |
|---|---|---|
| Bind | 1,976 ns | 129.6 M trits/sec |
| Bundle3 | 2,198 ns | 116.5 M trits/sec |
| Cosine | 184 ns | 1,391.3 M trits/sec |
| Dot | 6 ns | 40,000.0 M trits/sec |
| Permute | 2,037 ns | 125.7 M trits/sec |
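The latency and throughput columns are mutually consistent: latency times throughput recovers the number of trits processed per operation, about 256 for every op (which suggests a 256-trit vector in the benchmark harness; that size is an inference from the table, not stated in the source). A quick check:

```python
# (latency in ns, throughput in trits/sec), taken from the benchmark table.
ops = {
    "bind":    (1976, 129.6e6),
    "bundle3": (2198, 116.5e6),
    "cosine":  (184, 1391.3e6),
    "dot":     (6, 40000.0e6),
    "permute": (2037, 125.7e6),
}
for name, (ns, trits_per_sec) in ops.items():
    # trits per op = seconds per op * trits per second
    print(name, round(ns * 1e-9 * trits_per_sec))
```

All rows land on ~256 trits per op (dot shows 240 only because its 6 ns latency is coarsely rounded).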
## Next Steps (Tech Tree)

### Option A: Simplified Forward Pass

Remove multi-head attention. Use `output = bind(bundle_of_context, single_ff_role)`. Only 1-2 binds means a tractable gradient. Direct role computation: `ideal_role = unbind(target, context_bundle)`.
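Option A is attractive because with a single bind the ideal role has a closed form. A minimal sketch, assuming bipolar ±1 vectors where bind is elementwise multiplication and therefore its own inverse (names like `context_summary` are illustrative, not the Zig API):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 10_000

context_summary = rng.choice([-1, 1], size=dim)
target = rng.choice([-1, 1], size=dim)

# Direct role computation: one unbind, no iterative training needed.
ideal_role = target * context_summary      # unbind(target, context_summary)
output = context_summary * ideal_role      # forward: bind(context_summary, ideal_role)

print(np.array_equal(output, target))      # True: exact recovery with a single bind
```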
### Option B: Hebbian Pre-Training

Before transformer training, learn character bigram/trigram associations via a Hebbian rule: `assoc = bind(char_i, char_{i+1})`, then bundle all associations. Use these as initial role vectors instead of random ones.
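A sketch of the Hebbian idea on a toy corpus (bipolar ±1 vectors; one operand is permuted via `np.roll` so that bigram order survives the commutative bind, which mirrors the document's `permute` primitive but is my assumption, not the planned Zig design):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 10_000
chars = "abcd"
hv = {c: rng.choice([-1, 1], size=dim) for c in chars}   # random codebook

corpus = "abcabcabd"

# Hebbian memory: bundle bind(char_i, permute(char_{i+1})) over all bigrams.
acc = np.zeros(dim)
for a, b in zip(corpus, corpus[1:]):
    acc += hv[a] * np.roll(hv[b], 1)
memory = np.sign(acc)

def predict_next(c: str) -> str:
    """Unbind the memory with the current char, undo the permutation,
    and return the most similar codebook character."""
    query = np.roll(memory * hv[c], -1)
    return max(chars, key=lambda k: float(query @ hv[k]))

print(predict_next("a"))   # 'b': the most frequent successor of 'a' in the corpus
```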
### Option C: Direct Role Averaging

For each training sample, compute `ideal_role = unbind(target, context_summary)`. Average all ideal roles across the corpus. This gives the statistically optimal role without iterative training.
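Option C can be simulated end to end. A hedged sketch (bipolar ±1 vectors, a single-bind forward model, and synthetic samples that share one true role plus 10% elementwise noise; all of these are assumptions, not the actual corpus or forward pass):

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 10_000

def rand_hv():
    return rng.choice([-1, 1], size=dim)

# Synthetic corpus: 20 (context, target) pairs generated from one shared role,
# with 10% of each target's elements flipped as noise.
true_role = rand_hv()
samples = []
for _ in range(20):
    ctx = rand_hv()
    noise = np.where(rng.random(dim) < 0.1, -1, 1)
    samples.append((ctx, ctx * true_role * noise))

# Direct role averaging: unbind each sample, majority-vote across the corpus.
acc = np.zeros(dim)
for ctx, target in samples:
    acc += target * ctx              # ideal_role = unbind(target, ctx)
avg_role = np.sign(acc)

sim = float(avg_role @ true_role) / dim
print(round(sim, 3))                 # close to 1.0: averaging recovers the role
```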
## Trinity Identity

Generated: 2026-02-15 | Golden Chain Link #90 | Resonator Pivot: Honest Negative Result