
DeepSeek's multi-head latent attention and other KV cache tricks

292 points | t55 | 1 year ago | pyspur.dev

72 comments

[+] evertedsphere|1 year ago|reply
> This blog post is mostly AI-generated using a PySpur workflow with minor human edits.

it's funny that this was clear about 5% of the way in, just from the classic chatgpt-style format and tone

[+] TeMPOraL|1 year ago|reply
Okay, so this is a PySpur ad, alright. Since I'm interested in this kind of tool, and I see on their GitHub that they don't have loops yet, I have to ask: does anyone know of a similar tool (node/DAG-based) that does support looping?

It seems to be a common problem; so far, I've played with Rivet, n8n, and "LLM Party" nodes for ComfyUI, and they all seem to focus on everything other than letting you conveniently loop the flows.

[+] didgeoridoo|1 year ago|reply
It was going pretty well until the exclamation point at the end of the first paragraph.
[+] GalaxyNova|1 year ago|reply
It's blatantly obvious; nobody uses so many bullet points.
[+] t55|1 year ago|reply
Fair point! Do you prefer a different format or tone? We really like the concise bullet point format :)
[+] llmthrow102|1 year ago|reply
I'd rather eat sand than read an AI-generated article. If you don't care enough to write it, I don't care enough to read it.
[+] visarga|1 year ago|reply
I don't like the bullet-point/listicle formatting much, but the contents are pretty good: they cover many papers in a lightweight way, so you can get a decent overview in 10 minutes of what would take hours to research.
[+] t55|1 year ago|reply
Hi, OP here; this article helped me a lot to better understand KV caches, which is ultimately why I co-wrote it with AI and read it several times before posting
[+] seanvelasco|1 year ago|reply
getting tired of these blog posts that end with "this post is AI-generated" as if it's going to surprise us. it's getting repetitive. imo, articles should state up front whether they're ai generated, so the reader doesn't feel stupid after reading the whole thing

with that said, i love the content! will be bookmarking for future reference

[+] t55|1 year ago|reply
Hi, OP here. My intention wasn't to "gotcha" anyone by mentioning that at the end; it was simply to be upfront. Much of the content put out these days is obviously 100% AI-generated, yet that's never mentioned. This one was probably 80/20 (I still made many manual edits).

Glad you overall liked it!

[+] spencerf|1 year ago|reply
I feel like we’re living in strange times where your comment appears to be AI generated as well. You complain about the surprise at the end and then offer up a similar structural surprise in your reply.
[+] amelius|1 year ago|reply
Not sure if I'm getting this. Is this cache implemented as part of the forward pass through the network, in a general Python datastructure like a dict? Or is the cache somehow part of the fabric of the neural network?
[+] t55|1 year ago|reply
The KV cache is typically stored in a data structure external to the trained weights—often a buffer or set of tensors kept alongside the model’s forward pass (e.g., in PyTorch, one might store it in a dictionary-like container). It’s not baked into the neural network parameters themselves; instead, it’s an auxiliary memory that holds precomputed key-value pairs so the model doesn’t have to re-encode past tokens on each new inference step.
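To make that concrete, here's a rough single-head sketch (made-up names, NumPy instead of PyTorch for brevity) of a cache living in an ordinary dict next to the forward pass:

```python
import numpy as np

# Minimal KV-cache sketch: the cache is an ordinary Python dict of arrays
# kept *outside* the trained weights W_q, W_k, W_v.
rng = np.random.default_rng(0)
d = 8
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend_with_cache(x_new, cache):
    """x_new: (1, d) embedding of the newest token only."""
    q, k, v = x_new @ W_q, x_new @ W_k, x_new @ W_v
    # Append one new K/V row instead of re-encoding all past tokens.
    cache["k"] = k if cache["k"] is None else np.vstack([cache["k"], k])
    cache["v"] = v if cache["v"] is None else np.vstack([cache["v"], v])
    scores = (q @ cache["k"].T) / np.sqrt(d)
    return softmax(scores) @ cache["v"]

cache = {"k": None, "v": None}
for _ in range(3):              # three decode steps
    out = attend_with_cache(rng.standard_normal((1, d)), cache)
print(cache["k"].shape)         # (3, 8): one cached row per generated token
```

Real implementations preallocate GPU tensors rather than growing arrays per step, but the idea is the same.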
[+] anvuong|1 year ago|reply
Neither. Think of it as something like redis or memcached. It's external to the program, and the program will run just fine without it. But it avoids a lot of duplicate work.
[+] ahzhou|1 year ago|reply
It’s a tensor stored in GPU memory to improve inference throughput. Check out the PagedAttention paper (which introduced vLLM) for how most systems implement it nowadays.
[+] deepdarkforest|1 year ago|reply
Very clean writeup. On the attention sinks, you mention they enable "infinite-length sequence processing". What does that mean exactly in practice? Isn't deepseek still capped at 128k?
[+] t55|1 year ago|reply
Thank you! Great question.

"Infinite-length sequence processing" in StreamingLLM refers to handling much longer sequences than the model's training window (e.g., millions of tokens), by combining a sliding window for recent tokens with fixed attention sinks from the start of the sequence.

I can't speak for DeepSeek, but if I had to guess, I'd say that the infinite context window isn’t practical because storing all past tokens eventually becomes too expensive.
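The sink-plus-window bookkeeping is easy to sketch (hypothetical helper, not StreamingLLM's actual code): keep the first few "sink" positions plus the most recent window, and evict the middle of the cache:

```python
# Sketch of StreamingLLM-style cache eviction: retain a few attention-sink
# tokens from the start plus a sliding window of recent tokens; the middle
# of the cache is dropped.
def keep_positions(seq_len, n_sinks=4, window=8):
    sinks = list(range(min(n_sinks, seq_len)))
    recent = list(range(max(n_sinks, seq_len - window), seq_len))
    return sinks + recent

print(keep_positions(6))   # [0, 1, 2, 3, 4, 5] - nothing evicted yet
print(keep_positions(20))  # [0, 1, 2, 3, 12, ..., 19] - middle dropped
```

So the cache stays at a fixed size no matter how long generation runs, which is what the "infinite" claim refers to.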

[+] m348e912|1 year ago|reply
Agreed on the writeup itself. It's beautifully written and presented. Kudos to Jean Kaddour and anyone else that may have been involved in putting it together.
[+] spps11|1 year ago|reply
When you say sequence length, does it only count the output tokens or are input tokens also included in that?

Thanks for the post, it was an excellent read!

[+] t55|1 year ago|reply
Thanks for reading! In most contexts (including this one), seq length encompasses both the initial input (prompt) tokens and the output tokens the model generates. It’s the total length of all tokens processed by the model so far.
[+] karolist|1 year ago|reply
What's specific to deepseek here that other models do not use, or are you just riding the keyword wave?
[+] t55|1 year ago|reply
DeepSeek proposed the multi-head latent attention technique! :)

As far as I know, they are the only ones using it so far

[+] 8note|1 year ago|reply
hmm. after my engineering degree put all of the vector math in the form

k = Wx

seeing

k = xW

is jarring. Is there a reason for using horizontal vectors? Common for data science docs?

[+] t55|1 year ago|reply
It’s mostly a convention. In many deep learning frameworks (PyTorch, TensorFlow, etc.), inputs are stored with the “batch × length × hidden-dim” shape, effectively making the token embeddings row vectors. Multiplying “xW” is then the natural shape-wise operation. On the other hand, classical linear algebra references often treat vectors as column vectors and write “Wx.”
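A quick shape check (NumPy, arbitrary numbers) shows the two conventions are the same map up to transposing the weight matrix:

```python
import numpy as np

# Tokens as row vectors (deep-learning convention) vs. column vectors
# (textbook convention): identical numbers, transposed weights.
batch, seq, d_in, d_out = 2, 5, 16, 4
x = np.random.randn(batch, seq, d_in)   # row-vector embeddings
W = np.random.randn(d_in, d_out)

k_rows = x @ W                 # k = xW, shape (2, 5, 4)
k_cols = W.T @ x[0, 0]         # k = W'x with W' = W^T, one token at a time
print(np.allclose(k_rows[0, 0], k_cols))  # True
```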
[+] quanto|1 year ago|reply
You are in the right here. Horizontal vectors are common in (some) deep learning docs, but column vectors are the literature standard elsewhere.
[+] sifar|1 year ago|reply
It is more efficient to compute k = xW with the weights transposed than k = Wx.
[+] narmiouh|1 year ago|reply
How were the Images in the blog generated?
[+] yellow_lead|1 year ago|reply
> each token needs to "look at" or "attend to" all other tokens to understand the context.

> First token: Look at 1 token (cost: O(1^2))

Umm, is this right? There is not 1 token existing before generating the first token, so how do you look at it? AI slop?

[+] t55|1 year ago|reply
The phrase "the first token looks at 1 token" is simply shorthand for the self-attention step when the sequence length is one. Although there are no preceding tokens, we still treat it as an O(1^2) operation where the first token effectively attends to itself (or a special [BOS] token). This keeps the big-O analysis consistent when summing over all tokens.
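The bookkeeping behind that shorthand can be written out (my own toy accounting, not from the post): without a cache, step i re-encodes all i tokens, each attending to i tokens; with a cache, step i only processes the one new token.

```python
# Toy cost model: quadratic per-step cost without a KV cache vs. linear
# per-step cost with one. Summing over n steps gives O(n^3) vs. O(n^2).
def cost_without_cache(n):
    return sum(i * i for i in range(1, n + 1))   # ~ n^3 / 3

def cost_with_cache(n):
    return sum(i for i in range(1, n + 1))       # ~ n^2 / 2

for n in (10, 100):
    print(n, cost_without_cache(n), cost_with_cache(n))
```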
[+] Vampiero|1 year ago|reply
> Umm, is this right?

No way to know until you painstakingly verify every single assertion the AI made! The author of this article certainly didn't, and the content was good enough for them.

Trust me, AGI is almost there.