
Trinity Storage Network v1.5 Report

Proof-of-Storage, Shard Rebalancing, Bandwidth Aggregation, Auto-Discovery

Builds on v1.4 (Reed-Solomon, Connection Pooling, Manifest DHT, 12-node test)


Key Metrics

| Metric | Value | Status |
| --- | --- | --- |
| Total Tests | 1,382 | PASS (1,381 pass, 1 pre-existing flaky) |
| v1.5 Integration Tests | 3 new (node churn, PoS, bandwidth) | PASS |
| v1.5 Unit Tests | 16 new across 3 modules | PASS |
| New Zig Modules | 3 (proof_of_storage, shard_rebalancer, bandwidth_aggregator) | Complete |
| Modified Modules | 6 (protocol, storage_discovery, discovery, network, main, integration_test) | Complete |
| Protocol Messages | 4 new (0x28-0x2B) | Backward-compatible |
| Node Churn Test | 10 nodes, 3 killed, rebalanced to target=3 | PASS |
| PoS Challenge Round | 8 nodes, 7 honest + 1 tampered | Detected |
| Bandwidth Aggregation | 10 nodes, proportional reward shares | Verified |

What This Means

For Users

  • Data survives node failures: Shard rebalancer automatically redistributes under-replicated data when nodes go offline, maintaining target replication factor.
  • Honest storage verified: Proof-of-Storage challenges cryptographically verify that peers actually store the data they claim to host. Cheaters are flagged as unreliable.
  • Fair rewards: Bandwidth aggregation tracks per-node contribution (upload, download, hosting) and computes proportional reward shares.

For Operators

  • CLI flags: --pos enables Proof-of-Storage challenges, --rebalance enables shard auto-redistribution, --network-stats shows network-wide bandwidth summary.
  • UDP auto-discovery: Storage capacity is now broadcast alongside peer discovery, enabling automatic peer-to-peer storage awareness on LAN.
  • Zero-config replication: Rebalancer targets 3 replicas per shard by default.

For the Network

  • Production hardening: v1.5 closes the gap between "demo network" and "production network" with cryptographic verification, automatic fault recovery, and fair resource accounting.
  • Backward compatible: Old v1.4 nodes silently ignore new message types (0x28-0x2B fall into the else wildcard).

Technical Details

Architecture

v1.5 Module Dependency Graph:

┌─ protocol.zig ─────────────────────────────────┐
│   StorageChallengeMsg  (0x28, 144B)            │
│   StorageProofMsg      (0x29, 104B)            │
│   BandwidthReportMsg   (0x2A,  72B)            │
│   BandwidthSummaryMsg  (0x2B,  28B)            │
└───────┬───────────────┬───────────────┬────────┘
        │               │               │
┌───────▼───────┐ ┌─────▼──────────┐ ┌──▼─────────────┐
│ proof_of_     │ │ shard_         │ │ bandwidth_     │
│ storage.zig   │ │ rebalancer.zig │ │ aggregator.zig │
│               │ │                │ │                │
│ PoS Engine    │ │ ShardRebalancer│ │ BandwidthAgg   │
│               │ │ ShardLocation  │ │ BandwidthRpt   │
│               │ │ UnderReplicated│ │ RewardShare    │
└───────┬───────┘ └───────┬────────┘ └──────┬─────────┘
        │                 │                 │
┌───────▼─────────────────▼─────────────────▼─────────┐
│ network.zig (NetworkNode)                           │
│   handleConnection() switch on msg_type             │
│   poll() periodic PoS/rebalance/bandwidth           │
└──────────────────────────┬──────────────────────────┘
                           │
┌──────────────────────────▼──────────────────────────┐
│ discovery.zig (DiscoveryService)                    │
│   broadcastAnnounce() + StorageAnnounce UDP         │
│   receiveLoop() dispatches 60B vs 106B packets      │
└─────────────────────────────────────────────────────┘

Proof-of-Storage (Challenge-Response Protocol)

  1. Challenge: Challenger selects random shard + random byte range (up to 64B), sends StorageChallengeMsg to target node
  2. Response: Target reads byte range from local storage, computes SHA256, returns StorageProofMsg
  3. Verification: Challenger computes same hash from own copy, compares. Mismatch increments failure count.
  4. Eviction: After max_failures (default: 3), node is marked unreliable in StoragePeerRegistry
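The four steps above can be sketched end to end. This is a Python model of the protocol logic, not the Zig implementation; function names and the in-memory failure counter are illustrative:

```python
import hashlib
import secrets

MAX_FAILURES = 3   # default eviction threshold from the report
MAX_RANGE_LEN = 64  # challenges cover a random byte range up to 64B

def make_challenge(shard: bytes) -> tuple[int, int, bytes]:
    """Challenger: pick a random byte range and precompute its SHA256."""
    offset = secrets.randbelow(max(len(shard) - MAX_RANGE_LEN, 1))
    length = min(MAX_RANGE_LEN, len(shard) - offset)
    expected = hashlib.sha256(shard[offset:offset + length]).digest()
    return offset, length, expected

def prove(stored: bytes, offset: int, length: int) -> bytes:
    """Target: hash the challenged range of its local copy."""
    return hashlib.sha256(stored[offset:offset + length]).digest()

failures: dict[str, int] = {}  # node_id -> failure count (toy registry)

def verify(node_id: str, proof: bytes, expected: bytes) -> bool:
    """Challenger: compare hashes; a mismatch increments the failure count."""
    if proof == expected:
        return True
    failures[node_id] = failures.get(node_id, 0) + 1
    if failures[node_id] >= MAX_FAILURES:
        print(f"{node_id} marked unreliable")  # eviction step
    return False
```

Because the challenger hashes its own copy, a target that no longer stores the shard (or stores a tampered copy) cannot produce a matching proof without the original bytes.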

Shard Rebalancer

  • Tracks shard locations via AutoHashMap([32]u8, ShardLocationEntry)
  • findUnderReplicated() scans for shards below target_replication
  • rebalance() copies shard data from existing holder to new peer
  • removeNode() cleans up when a node goes offline
  • Thread-safe via std.Thread.Mutex
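A toy Python model of the bookkeeping described above (the real module uses Zig's `AutoHashMap` and `std.Thread.Mutex`; method names are transliterated and the actual shard-byte transfer is elided):

```python
import threading
from collections import defaultdict

TARGET_REPLICATION = 3  # default target from the report

class ShardRebalancer:
    def __init__(self, target: int = TARGET_REPLICATION):
        self.target = target
        self.locations = defaultdict(set)  # shard_id -> set of holder node_ids
        self.lock = threading.Lock()       # thread-safety, as in the Zig module

    def register(self, shard_id: bytes, node_id: str) -> None:
        with self.lock:
            self.locations[shard_id].add(node_id)

    def remove_node(self, node_id: str) -> None:
        """Clean up when a node goes offline."""
        with self.lock:
            for holders in self.locations.values():
                holders.discard(node_id)

    def find_under_replicated(self) -> list[bytes]:
        with self.lock:
            return [s for s, h in self.locations.items() if len(h) < self.target]

    def rebalance(self, shard_id: bytes, candidates: list[str]) -> None:
        """Add new holders until the target is met (data copy elided here)."""
        with self.lock:
            holders = self.locations[shard_id]
            for peer in candidates:
                if len(holders) >= self.target:
                    break
                if peer not in holders:
                    # real code would copy shard bytes from an existing holder
                    holders.add(peer)
```

The node-churn integration test follows exactly this shape: kill holders, detect under-replication, copy to fresh peers until every shard is back at target=3.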

Bandwidth Aggregator

  • Collects BandwidthReport per node (upload, download, shards hosted, time period)
  • aggregate() computes network-wide totals
  • getRewardShare(node_id) returns proportional share: node_bandwidth / total_bandwidth
  • Integrates with existing RewardTracker via generateLocalReport()
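The proportional-share computation is simple enough to sketch directly. A Python model (field set reduced to upload/download for illustration; the real report also carries shards hosted and a time period):

```python
from dataclasses import dataclass

@dataclass
class BandwidthReport:
    node_id: str
    upload: int    # bytes contributed
    download: int  # bytes served

def reward_shares(reports: list[BandwidthReport]) -> dict[str, float]:
    """Per-node share = node_bandwidth / total_bandwidth; shares sum to 1.0."""
    totals = {r.node_id: r.upload + r.download for r in reports}
    grand = sum(totals.values())
    if grand == 0:
        return {n: 0.0 for n in totals}  # avoid division by zero on empty input
    return {n: b / grand for n, b in totals.items()}
```

This is the invariant the integration test checks: the shares sum to 1.0 within a 0.001 tolerance, and each node's share is proportional to its contribution.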

UDP Storage Auto-Discovery

  • DiscoveryService.setStorageInfo() caches serialized StorageAnnounce (60B)
  • broadcastAnnounce() now sends both PeerAnnounce (106B) and StorageAnnounce (60B)
  • receiveLoop() dispatches by packet length: >= 106 = PeerAnnounce, == 60 = StorageAnnounce
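Length-based dispatch keeps the receive loop trivial because the two packet formats can never collide. A sketch of the rule as stated (the parse functions are placeholders for the real deserializers):

```python
PEER_ANNOUNCE_LEN = 106    # serialized PeerAnnounce size
STORAGE_ANNOUNCE_LEN = 60  # serialized StorageAnnounce size

def dispatch(packet: bytes) -> str:
    """receiveLoop() rule: >= 106 -> PeerAnnounce, == 60 -> StorageAnnounce."""
    if len(packet) >= PEER_ANNOUNCE_LEN:
        return "peer_announce"     # would call the PeerAnnounce parser
    if len(packet) == STORAGE_ANNOUNCE_LEN:
        return "storage_announce"  # would call the StorageAnnounce parser
    return "ignored"               # malformed or unknown datagram
```

One design caveat worth noting: dispatching on length alone means any future packet format must pick a size distinct from both existing ones.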

Test Results

v1.5 Integration Tests (3 new)

| Test | Nodes | Description | Result |
| --- | --- | --- | --- |
| Node Churn | 10 | Kill 3 nodes, rebalance 3 shards to target=3 | 6 copies made, all restored |
| PoS Challenge | 8 | 7 honest passes, 1 tampered detected | 7 pass, 1 fail, failure counted |
| Bandwidth Agg | 10 | Proportional shares sum to 1.0, totals correct | Verified within 0.001 tolerance |

v1.5 Unit Tests (16 new)

| Module | Tests | All Pass |
| --- | --- | --- |
| proof_of_storage.zig | 6 (challenge creation, proof response, honest verify, tampered detect, unreliable flagging, timing) | Yes |
| shard_rebalancer.zig | 5 (register/track, removeNode, findUnderReplicated, rebalance, no-op) | Yes |
| bandwidth_aggregator.zig | 5 (aggregate, reward share, local report, empty, timing) | Yes |

Full Suite

zig build test: 1,381/1,382 passed, 1 pre-existing flaky (LRU eviction timing)

New CLI Flags

--pos              Enable Proof-of-Storage challenges (v1.5)
--rebalance        Enable shard rebalancing (v1.5)
--network-stats    Show network-wide bandwidth stats (v1.5)

Files Changed

Created

  • specs/tri/storage_network_v1_5.vibee - VIBEE specification
  • src/trinity_node/proof_of_storage.zig - PoS engine (~380 lines)
  • src/trinity_node/shard_rebalancer.zig - Rebalancer (~395 lines)
  • src/trinity_node/bandwidth_aggregator.zig - Aggregator (~270 lines)

Modified

  • src/trinity_node/protocol.zig - 4 new message types + structs + tests
  • src/trinity_node/storage_discovery.zig - reliability flag, 3 new methods, 2 tests
  • src/trinity_node/discovery.zig - StorageAnnounce broadcast + receive, 1 test
  • src/trinity_node/network.zig - 3 new module imports, message handlers, poll() extensions
  • src/trinity_node/main.zig - 3 CLI flags, module initialization, HKDF path fix
  • src/trinity_node/integration_test.zig - 3 v1.5 integration tests
  • build.zig - 3 new test targets

Conclusion

v1.5 transforms the Trinity Storage Network from a "store and retrieve" system into a production-grade decentralized storage network with:

  • Cryptographic verification (Proof-of-Storage)
  • Automatic fault recovery (Shard Rebalancing)
  • Fair resource accounting (Bandwidth Aggregation)
  • Zero-config peer discovery (UDP StorageAnnounce)

Next Steps

  • Network Admin Panel UI (dashboard for monitoring PoS challenges, rebalancer activity, bandwidth)
  • Erasure coding integration with rebalancer (RS-aware rebalancing)
  • Economic model: convert bandwidth shares to $TRI rewards on-chain
  • Real network deployment with 20+ physical nodes