Visual Summary
TurboQuant β€” Near-Optimal Vector Quantization
Why Vector Compression Matters
Two completely different AI applications share the same bottleneck: millions of high-dimensional floating-point vectors are eating memory and slowing search. TurboQuant solves both with one algorithm β€” and does it 50,000Γ— faster than prior methods.
TL;DR β€” Two problems, one fix

When ChatGPT reads your 100-page document, it stores roughly 2GB of float16 vectors per layer β€” 64GB across the model β€” just to answer one question. When Spotify searches 100 million songs for "similar to this one," it must scan 200GB of embeddings in milliseconds. Both come down to the same question: can we compress high-dimensional vectors 4Γ— without losing quality? TurboQuant answers yes β€” provably, in 0.001 seconds per vector.

4Γ—
Compression, zero quality loss (Llama-3.1-8B)
3.5 bits
Per value to match full precision on LongBench
50,000Γ—
Faster index building than best competitor

Scenario 1 β€” ChatGPT reads your document (KV Cache)

You upload a 100-page contract to an LLM and ask: "Are there any auto-renewal clauses?" Before it can answer, the model must read every word.

For each word (token), the model computes a key vector and a value vector β€” each a list of 4,096 numbers in float16 (2 bytes each). Together these are the KV cache β€” the model's working memory for your conversation.

At 128,000 tokens (β‰ˆ100 pages), the KV cache alone consumes 2GB of GPU memory per layer. A 32-layer model: 64GB. That's most of an 80GB A100 GPU just for one conversation.

Memory math for Llama-3.1-8B
tokens Γ— dim Γ— 2 (K+V) Γ— 2 bytes = tokens Γ— 4096 Γ— 2 Γ— 2 bytes
4k tokens β†’ 64 MB
32k tokens β†’ 512 MB
128k tokens β†’ 2 GB ← per layer
Γ— 32 layers β†’ 64 GB ← total
With TurboQuant at 3.5 bits: 64 GB β†’ 14 GB β€” fits on a single GPU.
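The arithmetic above is easy to sanity-check in code. A minimal sketch, assuming Llama-3.1-8B's 4,096-dim keys and values and 32 layers as stated in the text:

```python
def kv_cache_bytes(tokens: int, dim: int = 4096, layers: int = 32,
                   bits_per_value: float = 16.0) -> float:
    """KV cache size in bytes: one key and one value vector per token,
    per layer, at the given precision."""
    values = tokens * dim * 2              # K and V vectors per token
    return values * layers * bits_per_value / 8

GiB = 1024 ** 3
tokens = 128 * 1024                        # ~100 pages of text
fp16 = kv_cache_bytes(tokens) / GiB
tq35 = kv_cache_bytes(tokens, bits_per_value=3.5) / GiB
print(f"FP16: {fp16:.0f} GiB  ->  TurboQuant 3.5-bit: {tq35:.0f} GiB")
# FP16: 64 GiB  ->  TurboQuant 3.5-bit: 14 GiB
```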
Interactive β€” watch KV cache memory grow as tokens arrive
Each click adds ~4 pages of text. Watch memory (violet = FP16, green = TurboQuant 3.5-bit) grow toward the 4GB GPU limit.

Scenario 2 β€” Spotify finds your next song (Vector Database)

Spotify's recommendation engine converts every song into a 512-number embedding β€” a point in 512-dimensional space where similar songs are close together. When you press "go to song radio," it finds the 50 nearest songs to the one you're playing.

At 100 million songs Γ— 512 dimensions Γ— 4 bytes = 200GB of embeddings. Searching this naively means computing 100M distances per query β€” in real time, for millions of concurrent users.

Quantizing from 32-bit float to 4-bit (8Γ— compression): 200GB β†’ 25GB. Distance computations are 8Γ— cheaper. Same recommendations, fraction of the hardware.

Vector DB scale at major companies
Spotify: 100M songs, 512-dim β†’ 200GB
Google Photos: 4B+ images, 2048-dim β†’ 32TB
Pinterest: 10B pins, 256-dim β†’ 10TB
With 4-bit quant: 8Γ— less memory + 8Γ— faster search
Interactive β€” vector similarity search in 2D (click "Search" to find nearest songs)
Each dot = a song in 2D embedding space. Clusters = genres. Gold star = your current song. Click Search to find the 5 nearest neighbors.

What exactly is a "vector"?

A vector is just a list of numbers. For an LLM key vector, each number encodes one abstract feature of a token's meaning. For a song embedding, each number encodes one aspect of the song's musical style.

In both cases: more numbers = more expressive = harder to store. A 4,096-dimension LLM key stored as float16 = 8,192 bytes per token. Quantizing to 3.5 bits = 1,792 bytes. Same meaning, 4.6Γ— smaller.

Click to randomize Β· Violet = FP16 original Β· Green = 3.5-bit TurboQuant
Everyday analogy

Think of a vector like GPS coordinates β€” but instead of 2 numbers (lat, long), you have 4,096. Each number pins down a different abstract "direction" in meaning-space. Compressing it is like rounding your GPS from 6 decimal places to 2: you still end up on the right street, just not the exact centimeter. TurboQuant proves you can do this rounding optimally β€” no more precision lost than mathematically necessary.

Both problems need the same thing: compress vectors fast, without losing quality. But what does "online" compression mean β€” and why is speed the real breakthrough? Online Vector Quantization β†’
What Does "Online" Mean β€” and Why It Changes Everything
Most quantization methods need to train a codebook on millions of vectors first β€” taking hours. TurboQuant is "online": it compresses any vector immediately, with no prior training, using a mathematically derived codebook. This makes it practical for real-time LLM inference and streaming vector databases.
The key distinction

Offline quantization (Product Quantization, RabitQ): train first β€” scan your full dataset, cluster it, build a codebook. Only then can you compress new vectors. Takes 37–497 seconds. Useless if data arrives in real time.

Online quantization (TurboQuant): no training β€” the compression formula is derived purely from mathematics (random rotation + Beta distribution + Lloyd-Max). Each new vector is compressed independently, instantly. Works in 0.001 seconds per vector, on any vector, any time.
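To make the "no training" claim concrete, here is a toy numpy sketch of the online pipeline's shape: rotate, then snap each coordinate to a fixed codebook. The uniform 2-bit levels are an illustrative stand-in β€” TurboQuant derives its actual levels via Lloyd-Max from the known post-rotation distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d: int) -> np.ndarray:
    """Random orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)))
    return q * np.sign(np.diag(r))     # sign fix for uniformity

def encode(x: np.ndarray, R: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """Compress one vector the moment it arrives: rotate, then snap each
    coordinate to its nearest codebook level. No training pass needed."""
    y = R @ x
    return np.abs(y[:, None] - levels[None, :]).argmin(axis=1)

d = 64
R = random_rotation(d)
# Fixed 2-bit codebook (uniform stand-in for the Lloyd-Max-optimal levels).
levels = np.linspace(-0.3, 0.3, 4)
x = rng.normal(size=d)
x /= np.linalg.norm(x)                 # unit-norm input vector
codes = encode(x, R, levels)           # 2 bits per coordinate, instantly
x_hat = R.T @ levels[codes]            # decode: look up levels, rotate back
```

Every arriving vector is compressed independently with the same `R` and `levels` β€” nothing about the codebook depends on the data seen so far.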


Offline vs Online: Interactive Comparison
Watch vectors arrive in real time β€” offline methods wait, TurboQuant compresses immediately
Left panel = Offline (PQ): vectors pile up waiting for training to complete. Right panel = Online (TurboQuant): each vector is compressed the moment it arrives.

The Approximate Nearest Neighbor (ANN) Search Problem

To find the most similar vector in a database of N vectors, the naive approach computes the distance to all N vectors β€” O(N Γ— d) operations. At 100M songs with 512 dimensions: 51 billion multiplications per query.

Approximate Nearest Neighbor (ANN) search accepts a small quality loss in exchange for speed. Quantization is the key enabler: compressed vectors are smaller β†’ fit in CPU cache β†’ fewer cache misses β†’ faster comparisons.

The tradeoff: compress more β†’ faster search but lower recall. TurboQuant hits the sweet spot β€” better recall than Product Quantization at the same compression ratio.
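For intuition on the baseline cost, a brute-force exact-search sketch on toy data (10,000 vectors standing in for 100M):

```python
import numpy as np

rng = np.random.default_rng(1)
db = rng.normal(size=(10_000, 512)).astype(np.float32)   # toy embedding DB
query = rng.normal(size=512).astype(np.float32)

def exact_knn(db: np.ndarray, q: np.ndarray, k: int = 5) -> np.ndarray:
    """Naive exact search: one squared distance per database vector,
    O(N * d) multiplications per query."""
    d2 = ((db - q) ** 2).sum(axis=1)
    return np.argsort(d2)[:k]

top = exact_knn(db, query)
# At Spotify scale this is 100M x 512 ~= 51 billion multiply-adds per
# query, which is why compressed (quantized) vectors matter.
```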

Exact vs ANN search
Exact Search
Compare every vector. 100% correct. 51B ops per query. Too slow for real-time.
ANN + Product Quantization
~80–90% recall. Fast. But needs 37–497s training. Fails on new/streaming data.
ANN + TurboQuant
Better recall than PQ. 0.001s index build. Works on streaming data. Near-optimal.
Interactive ANN search β€” adjust compression and see the recall tradeoff
Compression bits 4 bits
Click any dot to set it as the query. The k-nearest neighbors (true = gold circles, found = colored) show recall visually. Compression reduces memory usage; lower bits = smaller vectors but more misses.
Everyday analogy

"Online" is like a coffee shop barista who memorized the recipes (math) and can make any drink immediately. "Offline" is like a barista who has to attend a 10-hour training course before serving the first customer. For a busy airport cafΓ© (real-time vector DB), the offline barista is useless β€” you need someone who can start serving the moment doors open.

Now we know the problem. How does TurboQuant actually solve it? The answer starts with a surprisingly elegant trick: scramble the vector with a random rotation. Step 1: Random Rotation β†’
Step 1 β€” Random Rotation
Before we compress anything, we need to understand what's inside a vector β€” and why most of those numbers are actually very unequal. This section explains coordinates, energy, and why a simple rotation unlocks near-perfect compression.

Concept 1 β€” What exactly is a "coordinate"?

When ChatGPT reads your contract and encounters the word "renewal", it doesn't store the word itself β€” it stores a list of ~4,096 numbers called a key vector. Each number (= one coordinate) measures how strongly that token activates one abstract "detector" the model learned during training.

You can think of them like a nutrition label β€” each row is a different dimension of meaning. Some dimensions are highly activated ("renewal" strongly triggers legal-term detectors), others are near zero (it barely activates colour or food detectors).

Click any word below to see its key vector. Notice how different words light up completely different coordinates β€” and how some coordinates are always much bigger than others.

Each bar = one coordinate. Tall bar = that "detector" fires strongly for this word.

Concept 2 β€” What does "energy" mean?

"Energy" in a coordinate just means its squared value β€” how much it contributes to the total size of the vector. If coordinate #3 has value 3.2, its energy is 3.2Β² = 10.24. If coordinate #7 has value 0.1, its energy is only 0.01.

Energy distribution across coordinates β€” click any word to compare
Bar height = % of total vector energy held by that coordinate. A few tall bars = "spiky". Equal bars = ideal for compression.
The problem in plain English

For the word "renewal", maybe 3 coordinates hold 70% of all the energy, and the other 4,093 coordinates hold the remaining 30%. This is called a spiky distribution. It's a disaster for compression β€” because the model has to handle both the big spikes and the tiny near-zero values with the same compression scheme.


Concept 3 β€” Why "naive compression" fails on spiky vectors

The simplest way to compress a number is to round it to the nearest step β€” like rounding 3.247 to 3.2. You divide the possible range into equally-spaced "bins" and store which bin the value falls in. This is called uniform quantization.
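A minimal uniform quantizer makes the waste visible. The 16-dim spiky vector below is made up for illustration:

```python
import numpy as np

def uniform_quantize(x: np.ndarray, bits: int):
    """Snap each value to the nearest of 2**bits equally spaced levels
    spanning [min, max]."""
    n = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (n - 1)
    codes = np.round((x - lo) / step).astype(int)
    return lo + codes * step, codes

# A spiky vector like the "renewal" example: two big coordinates,
# fourteen near zero (values invented for illustration).
spiky = np.zeros(16)
spiky[3], spiky[1] = 3.2, 1.5
_, codes = uniform_quantize(spiky, bits=2)
print(np.bincount(codes, minlength=4))   # [14  1  0  1]: bin 2 is never used
```

Fourteen of sixteen values crowd into bin 0, one bin is never used at all β€” exactly the waste the rotation is about to fix.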

Naive (uniform) compression on a spiky vector β€” watch bins get wasted
Bins (2ᡇ) 4 bins
Hover over a bin to see how many values fall in it. Red bins = mostly wasted (too few values). Green bins = well-used. Toggle between spiky (before rotation) and even (after rotation) to see the difference.

Concept 4 β€” What the random rotation actually does

A rotation doesn't change the meaning of the vector β€” it just mixes the coordinates together. Think of it like stirring a glass of water with red food dye at the bottom: the total amount of dye (energy) stays the same, but it spreads evenly throughout the glass.
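You can verify the "stirring" claim in a few lines: a random orthogonal matrix leaves the total energy untouched while mixing it across coordinates (toy values, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
spiky = np.zeros(d)
spiky[3], spiky[1] = 3.2, 1.5                  # spiky toy vector

R, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal "stir"
mixed = R @ spiky

# Total energy (squared length) is exactly preserved by the rotation,
# but it is no longer concentrated in two coordinates.
print(round(float(np.square(spiky).sum()), 2),
      round(float(np.square(mixed).sum()), 2))   # 12.49 12.49
```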

Step through the rotation β€” from spiky contract vector to evenly-spread rotated vector
Everyday analogy β€” the spice rack

Imagine storing a recipe as a rack of 16 spice jars. For "renewal", jar #3 (legal-ness) is completely full, jar #1 (formality) is half full, and most jars are almost empty. If you compress this with equal-sized labels, you waste 14 labels on near-empty jars.

The rotation is like pouring all the spices into a blender, mixing them into 16 equal portions, and refilling the jars. Now every jar is equally full β€” so every label carries equal value. Compression works perfectly on equal jars.


Seeing it across many tokens: the histogram view

The single-vector view above shows one token. Let's zoom out: here are the coordinate values across all 500 tokens in your contract. Each token has its own vector, and we're plotting all coordinate values in one histogram. The shape of this histogram is what determines compression quality.

Histogram of coordinate values β€” before vs after rotation across 500 tokens
Dimensions (d) 64

Before (coral): most values cluster near 0, with a few huge spikes β€” like the spice rack. Compressing this wastes most bins on the crowded center.
After (teal): a smooth bell curve β€” no spikes, no wasted bins. Every compression bin covers roughly the same number of values.

After rotation every coordinate has the same variance. Step 2 identifies the exact mathematical shape of that distribution β€” and why knowing it precisely is the key to optimal compression. Step 2: Beta Distribution β†’
Step 2 β€” The Shape of Each Coordinate
After rotation, a lucky thing happens: every coordinate ends up with the same bell-shaped distribution. And as the vector gets longer (more dimensions), this distribution becomes more and more predictable and narrow. That predictability is what lets us design a perfect compression scheme.
The accidental gift β€” in plain English

After rotating the "renewal" key vector, each of the 4,096 coordinates is a tiny random mixture of all the original values. No single original value dominates anymore.

Because each coordinate is now made up of thousands of tiny contributions, they all end up in a predictable, bell-shaped range β€” not too big, not too small. Mathematicians call this shape the Beta distribution.

The key insight:
β‘  All 4,096 coordinates have the same distribution after rotation
β‘‘ That distribution is mathematically known (not guessed)
β‘’ So we can design one single compression scheme that works perfectly for all coordinates
β†’ No need to study each coordinate separately. One recipe fits all 4,096.
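A quick simulation (an illustration, not the paper's proof) shows the "one shape for all coordinates" claim. A fixed unit vector under a random rotation is the same thing as a uniform random point on the unit sphere, which you can sample by normalizing Gaussian vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 64, 20_000

# Uniform random points on the unit sphere in d dimensions
g = rng.normal(size=(trials, d))
u = g / np.linalg.norm(g, axis=1, keepdims=True)

# Every coordinate has the same marginal: x_j^2 ~ Beta(1/2, (d-1)/2),
# whose mean is exactly 1/d.
for j in (0, 7, 63):
    print(j, round(float((u[:, j] ** 2).mean()), 4))   # all β‰ˆ 1/64 β‰ˆ 0.0156
```

Whichever coordinate `j` you pick, the empirical distribution is the same β€” which is why a single codebook serves all 4,096 coordinates.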

Seeing the Shape β€” One Coordinate Across Many Tokens

Imagine taking coordinate #7 (Duration-ref) from the key vector of every word in your contract β€” "renewal", "days", "party", "shall", "terminate"... all 10,000 tokens. Plotting all those values gives you a histogram. Before rotation: the shape is jagged and unpredictable. After rotation: it's a clean bell curve. The same clean bell curve appears for coordinate #1, #2, #3… all of them.

Histogram of coordinate values across all tokens β€” before vs after rotation
After rotation every coordinate produces the same clean bell curve. This means one compression scheme works for all of them.

The Concentration Effect β€” More Dimensions = More Predictable

There's a bonus: the bigger the vector (more dimensions d), the narrower the bell curve gets. A small model with d=64 has a wide, spread-out distribution. A large model like Llama (d=4096) has a very tight, concentrated distribution β€” meaning all coordinates are packed into a tiny range, making compression even more precise.

Drag the slider β€” watch the distribution narrow as model size grows
Model dimension d 64
At d=4096 (real LLM), all coordinates fall in a tiny predictable range β†’ compression bins can be placed precisely β†’ near-zero waste.
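The narrowing is easy to reproduce: the standard deviation of any one coordinate of a random unit vector shrinks like 1/√d, so bigger models give tighter, more predictable coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)
# Empirical std of one coordinate vs the 1/sqrt(d) prediction
for d in (64, 512, 4096):
    g = rng.normal(size=(5_000, d))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)
    print(d, round(float(u[:, 0].std()), 4), round((1 / d) ** 0.5, 4))
```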

Why This Lets Each Coordinate Be Compressed Independently
Before vs After rotation: are coordinates telling each other secrets?

In plain English: before rotation, if you know coordinate #3 is big, you can guess coordinate #7 might also be big. They're correlated β€” carrying redundant information. After rotation, knowing one coordinate tells you nothing about the others. Each can be compressed in isolation, independently, without needing to look at the rest.

Everyday analogy β€” the weather forecast

Imagine trying to compress a weather report. Temperature, humidity, and "feels like" are highly correlated β€” if it's 35Β°C and humid, "feels like" is always around 40Β°C. You're wasting space storing all three independently. After "rotating" (de-correlating), each measurement is truly independent β€” you can compress each with no knowledge of the others, and nothing is wasted. The Beta distribution is the mathematical proof that rotation achieves this perfectly.

We now know the shape (bell curve), and each coordinate is independent. Step 3 uses this to design the optimal compression bins β€” no guessing needed. Step 3: Quantization β†’
Step 3 β€” Optimal Scalar Quantization
Lloyd-Max quantization finds the optimal bin boundaries for a known distribution. With the Beta distribution locked in from Step 2, each coordinate gets a custom codebook β€” and TurboQuant stays within 2.14Γ— of the information-theoretic limit.
The Lloyd-Max algorithm

Given a known distribution, Lloyd-Max finds the quantization thresholds that minimize mean squared error. The rule: each threshold lies halfway between adjacent centroids, and each centroid is the mean of its bin. Repeat until convergence. For a Gaussian, this puts bins densely in the center and sparsely at the tails.
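Here is a sketch of the classical Lloyd-Max iteration on sampled data. TurboQuant computes its levels analytically from the known Beta shape rather than from samples, but the fixed-point rule is the same:

```python
import numpy as np

def lloyd_max(samples: np.ndarray, bits: int, iters: int = 100):
    """Lloyd-Max on an empirical sample: alternate
    (1) thresholds <- midpoints between adjacent centroids,
    (2) centroids  <- mean of the samples falling in each bin."""
    n = 2 ** bits
    centroids = np.quantile(samples, (np.arange(n) + 0.5) / n)  # spread start
    for _ in range(iters):
        thresholds = (centroids[:-1] + centroids[1:]) / 2
        bins = np.searchsorted(thresholds, samples)
        for i in range(n):
            members = samples[bins == i]
            if members.size:
                centroids[i] = members.mean()
    thresholds = (centroids[:-1] + centroids[1:]) / 2
    return centroids, thresholds

rng = np.random.default_rng(0)
data = rng.normal(size=50_000)    # bell curve, like a rotated coordinate
centroids, thresholds = lloyd_max(data, bits=2)
bins = np.searchsorted(thresholds, data)
mse = np.mean((data - centroids[bins]) ** 2)
# Centroids land near +-0.45 and +-1.51 for a unit Gaussian: dense in the
# center, sparse in the tails. MSE ~ 0.117, the known 2-bit optimum.
print(np.round(centroids, 2), round(float(mse), 3))
```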


Interactive Quantizer Demo
Bits b 2 bits (4 bins)
Drag horizontally to place a test value and see quantization error. Lloyd-Max puts more bins where data is dense.

Rate-Distortion: How Close to the Theoretical Limit?
TurboQuant at b=2: distortion 0.117 vs lower bound 0.054 β€” within 2.14Γ— of optimal.
Everyday analogy

Lloyd-Max is like a parking lot designer who puts more spaces near the entrance. Instead of equal spacing, spaces cluster where cars are most likely to park β€” fewer steps for everyone on average. The theoretical lower bound is the absolute minimum if you had infinite design time; TurboQuant gets within a factor of 2.

TurboQuant_MSE is now fully defined. But there's a subtle trap: MSE-optimal quantization biases inner product estimates. Here's the fix. The Bias Problem & Fix β†’
The Bias Problem & Fix
Minimizing MSE and preserving inner products are different objectives. MSE-optimal quantization systematically overestimates inner products. TurboQuant_PROD fixes this with a 1-bit correction, achieving unbiased estimates at negligible extra cost.
Why this matters for attention

The core operation in attention is the inner product ⟨query, key⟩. If your quantization systematically overestimates these products, the attention scores will be biased β€” some tokens will seem more relevant than they are. TurboQuant_PROD adds a single extra bit per vector that mathematically removes this bias.
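A toy experiment can show the error is systematic rather than symmetric noise that averages out. The nearest-centroid quantizer and the correlated query/key pairs below are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
d, pairs = 64, 2_000

def mse_quantize(x: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Snap each coordinate to its nearest centroid (MSE-style decode)."""
    idx = np.abs(x[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[idx]

# 2-bit Lloyd-Max-style centroids for a unit Gaussian, rescaled to a
# coordinate std of 1/sqrt(d)
centroids = np.array([-1.51, -0.45, 0.45, 1.51]) / np.sqrt(d)

gaps = []
for _ in range(pairs):
    q = rng.normal(size=d) / np.sqrt(d)
    k = 0.8 * q + 0.2 * rng.normal(size=d) / np.sqrt(d)  # correlated pair
    gaps.append(q @ mse_quantize(k, centroids) - q @ k)

# The mean gap is clearly nonzero: a systematic bias in the inner
# product estimate, which the 1-bit QJL correction is designed to remove.
print(round(float(np.mean(gaps)), 3))
```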


Bias Visualization: Estimated vs True Inner Product
Bits b 2
Each dot = one vector pair. Dashed diagonal = perfect estimate. Points above diagonal = overestimation bias.

The QJL Residual Correction β€” 3 Steps
How TurboQuant_PROD removes inner product bias
Everyday analogy

Like correcting a systematic rating bias: if Yelp reviewers always over-rate expensive restaurants, you ask one extra trusted reviewer to rate a comparable cheap one and use the difference to calibrate. The 1-bit QJL correction is that trusted reviewer β€” cheap, fast, and it neutralizes the systematic error.

Both variants defined. Time to see the actual benchmark numbers. The Numbers β†’
The Numbers
Near-optimal theory, zero-quality-loss practice, millisecond indexing. Here are the paper's benchmark results across KV cache compression, vector database indexing, and recall.
4Γ—
KV cache compression, zero quality loss
0.001s
Indexing time (vs 497s for PQ)
2.14Γ—
From information-theoretic lower bound

Distortion Rate β€” How Close to the Limit?
Grouped by bits (b=2,3,4). Lower is better. TurboQuant within 2–3Γ— of information-theoretic lower bound.

LongBench: KV Cache Quality vs Bits Per Value
TurboQuant (sky blue) matches full-precision quality at 3.5 bits. Dashed line = FP16 baseline.

Indexing Speed Comparison
On a linear scale, TurboQuant's bar nearly disappears β€” that's the point. ~50,000Γ— faster than the best competitor.

Recall@10 vs Product Quantization
TurboQuant consistently outperforms PQ on GloVe, OpenAI-small, and OpenAI-large embedding datasets.

Head-to-Head Scorecard
TurboQuant vs competitors β€” 5 criteria that matter in practice
● = excellent  β—‘ = good  β—‹ = poor   Sources: paper Table 1–3, arXiv:2504.19874

The Most Important Question: Does the Output Actually Look the Same?

Numbers like "3.5 bits" and "0.001 distortion" are hard to feel. The real test: give the same compressed model the same question β€” does the answer change? Below are real responses from Llama-3.1-8B, side by side: full precision vs TurboQuant at 3.5 bits.

FP16 Full Precision (16 bits)
TurboQuant 3.5-bit Compressed

Your Personal Savings Calculator

What does 4Γ— compression actually mean for your workload? Adjust the sliders to see the memory and cost impact for your specific use case.

Context length 32k tokens
Model size 13B
Concurrent users 10
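The same calculation can be scripted. The 13B-class dimensions below (5,120 hidden size, 40 layers) are an assumption for illustration, and the formula ignores grouped-query attention, which shrinks real KV caches further:

```python
def kv_gib(context_tokens: int, dim: int, layers: int,
           users: int, bits: float) -> float:
    """Total KV cache across concurrent users, in GiB:
    tokens x dim x 2 (K+V) x layers x users at the given precision."""
    total_bytes = context_tokens * dim * 2 * layers * users * bits / 8
    return total_bytes / 1024 ** 3

# Hypothetical 13B-class config; real architectures vary.
fp16 = kv_gib(32_768, 5120, 40, users=10, bits=16)
tq = kv_gib(32_768, 5120, 40, users=10, bits=3.5)
print(f"FP16: {fp16:.0f} GiB -> TurboQuant 3.5-bit: {tq:.1f} GiB")
# FP16: 250 GiB -> TurboQuant 3.5-bit: 54.7 GiB
```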
That's the evidence: near-optimal theory, zero-quality-loss practice, millisecond indexing. Where does TurboQuant fit in the broader landscape? Where This Fits β†’
Where This Fits
TurboQuant is not just a clever trick β€” it redraws what's possible for AI inference efficiency, vector databases, and our theoretical understanding of compression.
LLM Inference

4Γ— longer context windows on the same hardware. Or same context, 4Γ— cheaper. Inner-product unbiased quantization means no model degradation β€” the model doesn't even know it's compressed.

Vector Databases

Millisecond index building makes real-time quantization practical for streaming data. Better recall than PQ means fewer missed results in RAG pipelines. No offline training phase needed.

Theoretical Significance

Near information-theoretic optimality with a strikingly simple algorithm. Proves random preprocessing + independent scalar quantizers can match the best achievable bound within a small constant factor.


AI Inference Efficiency Landscape
Click any node to learn more
Click any node to see a description.

Open Problems
Adaptive bit allocation per dimension
TurboQuant uses the same number of bits for every coordinate. Dimensions that carry more information (even after rotation) could benefit from more bits. Optimal adaptive allocation would require solving a constrained optimization problem per vector β€” potentially bringing distortion even closer to the Shannon bound.
Non-uniform random rotations tuned to data
The Haar-distribution rotation is data-agnostic. If you know the covariance structure of your embeddings, a PCA-whitening rotation might do better. The tradeoff: data-dependent rotations require an offline training phase, losing TurboQuant's online guarantee.
Heavy-tailed embedding distributions
The Beta distribution result assumes the input vector lies on a unit sphere. Real embeddings may have heavy tails or non-unit norms. Understanding how robustly TurboQuant's guarantees hold under these violations β€” and how to normalize in practice β€” is an open question.
Integration with quantization-aware training
TurboQuant operates post-hoc on frozen model weights. Training models with TurboQuant-style quantization in the loop could allow the model to learn representations that are even easier to compress β€” potentially pushing below 3 bits while maintaining quality.
4Γ—
Compression ratio
2.14Γ—
From information-theoretic optimal
0.001s
Index build time
3.5 bits
For zero quality loss

TurboQuant shows that theoretical optimality and practical simplicity aren't in conflict. A random rotation plus a known distribution plus a classical algorithm β€” and you're within a constant factor of the best possible. The three steps took decades of separate research to develop; the insight was combining them into a single online pipeline.