Python SDK — trinity-vsa

Python ctypes binding to libtrinity-vsa. Uses the real SIMD-accelerated Zig core — ~20x faster than a pure-Python numpy implementation.

Package: libs/python/trinity_vsa/
Module: trinity_vsa.native

Setup

# 1. Build the native library
zig build libvsa

# 2. Use from Python (no pip install needed)
python -c "
import sys; sys.path.insert(0, 'libs/python/trinity_vsa/src')
from trinity_vsa.native import NativeVSA
vsa = NativeVSA()
print(vsa.version())
"

The library auto-detects zig-out/lib/libtrinity-vsa.dylib (macOS) or .so (Linux). You can also pass an explicit path:

vsa = NativeVSA(lib_path="/path/to/libtrinity-vsa.dylib")

Two API Levels

NativeVSA — Low-level handle-based API

Returns integer handles. You must call vsa.free(handle) for every vector.

from trinity_vsa.native import NativeVSA

vsa = NativeVSA()

# Create vectors
a = vsa.random(10000, seed=42)
b = vsa.random(10000, seed=123)

# Compute similarity
sim = vsa.similarity(a, b) # ~0.0 (quasi-orthogonal)

# Bind and unbind
bound = vsa.bind(a, b)
recovered = vsa.unbind(bound, b)
print(vsa.similarity(a, recovered)) # > 0.8

# Cleanup (required!)
vsa.free(a)
vsa.free(b)
vsa.free(bound)
vsa.free(recovered)
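Manual free() calls are easy to leak on early returns or exceptions. A small context-manager helper (not part of the SDK — a sketch of the standard pattern, assuming only the free() method documented below) guarantees cleanup:

```python
from contextlib import contextmanager

@contextmanager
def managed(vsa, handle):
    """Yield a vector handle and free it on exit, even if an exception is raised."""
    try:
        yield handle
    finally:
        vsa.free(handle)

# Usage:
# with managed(vsa, vsa.random(10000, seed=42)) as a:
#     ...  # `a` is freed automatically when the block exits
```

For anything beyond a quick script, though, the Vector wrapper below is the simpler option.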

Vector — RAII wrapper with automatic memory management

Vectors are freed automatically when garbage collected. Use keyword constructors.

from trinity_vsa.native import NativeVSA, Vector

vsa = NativeVSA()

# Create vectors (multiple constructors)
v1 = Vector(vsa, random=(10000, 42))
v2 = Vector(vsa, text_words="machine learning")
v3 = Vector(vsa, zeros=1000)
v4 = Vector(vsa, data=[1, -1, 0, 1, -1])

# Operations return new Vectors
bound = v1.bind(v2)
bundled = v1.bundle(v2)
permuted = v1.permute(5)
clone = v1.clone()

# Similarity
print(v1.similarity(v2))
print(v1.hamming(v2))
print(v1.dot(v2))

# Properties
print(v1.dim) # 10000
print(v1.to_list()) # [-1, 0, 1, 1, -1, ...]
# No manual free needed — handled by __del__

Word-level encoding

vsa = NativeVSA()

v1 = Vector(vsa, text_words="machine learning")
v2 = Vector(vsa, text_words="deep learning")
v3 = Vector(vsa, text_words="database optimization")

print(v1.similarity(v2)) # 0.4133 (shared "learning")
print(v1.similarity(v3)) # -0.03 (unrelated)
print(v1.similarity(v1)) # 1.0 (identical)
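The intuition behind these numbers: each word maps to a quasi-orthogonal random hypervector, and a phrase is the majority-vote bundle of its word vectors, so phrases sharing a word share part of their signal. A pure-Python sketch of that idea (illustrative only — not the library's SIMD implementation, and the exact similarity values will differ):

```python
import random

def word_vec(word, dim=10000):
    """Deterministic random bipolar vector per word (sketch of word-level encoding)."""
    rng = random.Random(word)  # seeded by the word itself, so encoding is repeatable
    return [rng.choice((-1, 1)) for _ in range(dim)]

def encode_words(text, dim=10000):
    """Bundle word vectors: elementwise sign of the sum (majority vote)."""
    vecs = [word_vec(w, dim) for w in text.lower().split()]
    sums = [sum(col) for col in zip(*vecs)]
    return [(s > 0) - (s < 0) for s in sums]  # sign -> -1, 0, +1

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

v1 = encode_words("machine learning")
v2 = encode_words("deep learning")
v3 = encode_words("database optimization")
# the shared word "learning" makes v1/v2 measurably more similar than v1/v3
assert cosine(v1, v2) > cosine(v1, v3)
```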
Semantic search

vsa = NativeVSA()

corpus = [
    "machine learning algorithms for classification",
    "deep neural networks and backpropagation",
    "database query optimization techniques",
    "Zig systems programming language",
    "ternary computing and balanced ternary",
]

results = vsa.search("machine learning", corpus, top_n=3)
for sim, idx, text in results:
    print(f"  [{sim:.4f}] {text}")

# Output:
# [0.5317] machine learning algorithms for classification
# [0.2574] deep neural networks and backpropagation (shared: "learning" context)
# [0.0809] ...

Associative memory

vsa = NativeVSA()

france = Vector(vsa, text_words="france")
paris = Vector(vsa, text_words="paris")
germany = Vector(vsa, text_words="germany")
berlin = Vector(vsa, text_words="berlin")

# Bind: country * capital
fr_pair = france.bind(paris)

# Query: what is the capital of France?
result = fr_pair.unbind(france)
print(result.similarity(paris)) # 0.8153 (strong match)
print(result.similarity(berlin)) # 0.0487 (weak — wrong city)
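Why unbinding with the country vector recovers the capital: bind is elementwise multiplication, and for ±1 components it is its own inverse, since x * x = 1. A pure-Python sketch of that algebra (illustrative, not the library's SIMD code):

```python
import random

def rand_vec(dim, seed):
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(dim)]

def bind(a, b):
    # elementwise multiply: the result is quasi-orthogonal to both inputs
    return [x * y for x, y in zip(a, b)]

def cosine(a, b):
    # for ±1 vectors, each norm is sqrt(dim), so normalize the dot product by dim
    return sum(x * y for x, y in zip(a, b)) / len(a)

france, paris = rand_vec(10000, 1), rand_vec(10000, 2)
pair = bind(france, paris)
recovered = bind(pair, france)          # unbind == bind for ±1 vectors
assert cosine(recovered, paris) == 1.0  # exact: (f*p)*f = p elementwise
```

With real ternary vectors, which contain zeros, recovery is approximate rather than exact — hence the ~0.82 similarity in the example above rather than 1.0.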

NativeVSA API Reference

Info

| Method | Returns | Description |
|---|---|---|
| version() | str | Library version (e.g. "0.2.0") |
| max_dim() | int | Maximum dimension (59049) |

Vector Creation

| Method | Returns | Description |
|---|---|---|
| zeros(dim) | handle | Zero vector |
| random(dim, seed) | handle | Random hypervector (deterministic) |
| from_array(list) | handle | From list of int8 values |
| clone(v) | handle | Deep copy |
| free(v) | None | Free vector (NULL-safe) |

VSA Operations

| Method | Returns | Description |
|---|---|---|
| bind(a, b) | handle | Element-wise multiply (association) |
| unbind(bound, key) | handle | Inverse of bind |
| bundle2(a, b) | handle | Majority vote of 2 vectors |
| bundle3(a, b, c) | handle | Majority vote of 3 vectors |
| permute(v, shift) | handle | Cyclic permutation |
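The bundling and permutation operations can be sketched in plain Python as well: bundling is a per-coordinate majority vote (sign of the sum), and permutation is a cyclic rotation, inverted by rotating back. A sketch under the ±1 simplification (illustrative, not the library's implementation):

```python
def bundle3(a, b, c):
    """Per-coordinate majority vote: sign of the elementwise sum."""
    return [(s > 0) - (s < 0) for s in (x + y + z for x, y, z in zip(a, b, c))]

def permute(v, shift):
    """Cyclic rotation; permute(v, -shift) undoes permute(v, shift)."""
    shift %= len(v)
    return v[-shift:] + v[:-shift]

v = [1, -1, 1, 1, -1]
assert permute(permute(v, 2), -2) == v
assert bundle3([1], [1], [-1]) == [1]  # 2-of-3 majority wins
```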

Similarity

| Method | Returns | Description |
|---|---|---|
| similarity(a, b) | float | Cosine similarity [-1.0, 1.0] |
| hamming(a, b) | int | Hamming distance |
| dot(a, b) | int | Dot product |
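The three metrics are simply related for ternary data: dot is the raw product sum, cosine normalizes it by the two vector norms, and hamming counts differing coordinates. A pure-Python reference for the definitions (a sketch, not the SIMD implementation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    na = dot(a, a) ** 0.5
    nb = dot(b, b) ** 0.5
    return dot(a, b) / (na * nb)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

a = [1, -1, 0, 1]
b = [1, 1, 0, -1]
assert dot(a, b) == -1   # 1 - 1 + 0 - 1
assert hamming(a, b) == 2
```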

Text Encoding

| Method | Returns | Description |
|---|---|---|
| encode_text(text) | handle | Character-level positional encoding |
| encode_text_words(text) | handle | Word-level bag-of-words (recommended for search) |
| decode_text(v, max_len) | str | Decode vector back to text |
| search(query, corpus, top_n) | list[(sim, idx, text)] | Semantic search over a text list |

Vector Access

| Method | Returns | Description |
|---|---|---|
| dim(v) | int | Vector dimension |
| get_trit(v, i) | int | Trit at index (-1, 0, +1) |
| set_trit(v, i, val) | None | Set trit at index |
| to_list(v) | list[int] | Export to list |

Performance

Measured on Apple Silicon M1 (Python 3.12, ctypes -> libtrinity-vsa.dylib):

| Operation | Latency | Notes |
|---|---|---|
| cosine_similarity | 0.053 ms | SIMD-accelerated |
| bind + free | 0.106 ms | Heap alloc included |
| encode_text_words | 1.441 ms | Per text string |
| search (16 items) | 0.5-2.4 ms | End-to-end |
| bundle2 | ~0.1 ms | SIMD majority vote |

Compared to a pure-Python numpy implementation, the native core is ~20x faster for these operations.
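To reproduce such measurements against your own build, a minimal timing harness is enough — pass it any callable (the usage lines below assume the library is built and are illustrative):

```python
import time

def bench(fn, iters=1000):
    """Return the mean latency of fn() in milliseconds over `iters` calls."""
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) * 1000 / iters

# Example against the SDK (assumes libtrinity-vsa has been built):
# vsa = NativeVSA()
# a, b = vsa.random(10000, seed=1), vsa.random(10000, seed=2)
# print(f"similarity: {bench(lambda: vsa.similarity(a, b)):.3f} ms")
```

Expect some run-to-run variance; latencies of this scale are sensitive to CPU frequency scaling and background load.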