
RWKV Language Model

183 points | simonpure | 1 year ago | rwkv.com

52 comments

[+] swyx|1 year ago|reply
for those who want a more conversational intro, we've been covering RWKV for a bit!

2023: https://latent.space/p/rwkv

2024: https://www.youtube.com/watch?v=LPe6iC73lrc <- offers a bit of professional compare and contrast vs state space models.

i think it's cool that both RNN and LSTM (with xLSTM) now have modern attention-inspired variants that solve the previous issues. I wonder 1) whether it's possible to overcome the "hardware lottery" that transformers have now won, and 2) whether recurrent/selective state can do the kind of proper lookback over extremely long context that we will want it to do to compete with full attention (easy to say no, harder to propose what to do about it).

there's also Liquid AI, whatever it is that they do.

[+] HarHarVeryFunny|1 year ago|reply
The Transformer was specifically conceived to take advantage of pre-existing massively parallel hardware, so it's a bit backwards to say it "won the hardware lottery". Where the Transformer did "win the lottery" is that the key-value form of self-attention (invented by Noam Shazeer) needed to make parallel processing work seems to have accidentally unlocked capabilities like "induction heads" that make this type of architecture extremely well suited to language prediction.

Given limits on clock speed, massive parallelism is always going to be the way to approach brain-like levels of parallel computation, so any model architecture aspiring to human level AGI needs to be able to take advantage of that.
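To make the parallelism point concrete, here is a toy single-head key-value self-attention in pure Python (an illustrative sketch, not any library's API; real implementations batch this as matrix products):

```python
import math

def attention(xs, wq, wk, wv):
    qs = [x * wq for x in xs]  # queries
    ks = [x * wk for x in xs]  # keys
    vs = [x * wv for x in xs]  # values
    out = []
    for t, q in enumerate(qs):
        # causal: position t attends to positions 0..t
        scores = [q * ks[i] for i in range(t + 1)]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        out.append(sum(w / z * vs[i] for i, w in enumerate(weights)))
    return out

ys = attention([1.0, 2.0, 3.0], wq=0.5, wk=0.5, wv=1.0)
```

Each `out[t]` depends only on the inputs, never on `out[t-1]`, so during training every position can be computed simultaneously — the property a classic RNN lacks.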

[+] shawntan|1 year ago|reply
Although marketed as such, RWKV isn't really an RNN.

In the recent RWKV7 incarnation you could argue it's a type of linear RNN, but past versions took their previous state from a lower layer. That allows for parallelism, but it makes the computation closer to a convolution than a recurrent one.

As for 1), I'd like to believe so, but it's hard to pull people away from the addictive drug that is the easily parallelised transformer. As for 2), (actual) RNNs plus attention mechanisms seem fairly powerful to me (expressivity-wise), and perhaps most acceptable to the community.
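The distinction being drawn can be sketched in a few lines of toy Python (hedged illustration, not actual RWKV code): a "true" RNN carries state within a layer across time, while a depth-shifted network lets each layer at step t read the layer below at step t-1, which unrolls into a finite window — convolution-like:

```python
def true_rnn(xs, decay=0.5):
    h, out = 0.0, []
    for x in xs:
        h = decay * h + x  # h[t] depends on h[t-1]: inherently sequential
        out.append(h)
    return out

def depth_shifted(xs, layers=2):
    # each layer mixes the current value with the previous step's lower-layer value
    cur = xs
    for _ in range(layers):
        cur = [cur[t] + (cur[t - 1] if t > 0 else 0.0) for t in range(len(cur))]
    return cur
```

With `layers` layers the output at step t only sees inputs from steps t-layers..t (a fixed receptive field, parallelisable across t), whereas the true RNN's `h[t]` accumulates contributions from every earlier step.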

[+] inciampati|1 year ago|reply
the recurrent model needs a mechanism to replay past context. no need to go quadratic to access all of it. they could replay multiple times to get effects similar to attention.

the hardware lottery, well... imo it's really about leveraging fully parallel training to learn how to use a memory. attention is quadratic but it can be computed in parallel. it's an end to end learned memory. getting that kind of pattern into RNNs won't be easy but it's going to be crucial before we boil the ocean.
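One recurrent form of that "learned memory" pattern — the idea behind linear attention and RWKV-style models — is a decayed state update; a minimal scalar sketch (real models use a matrix-valued state per head):

```python
def linear_attn_step(state, k, v, q, decay=0.9):
    # write: fold the new key/value pair into a fixed-size state, with decay
    state = decay * state + k * v
    # read: query the state
    out = q * state
    return state, out

state, outs = 0.0, []
for k, v, q in [(1.0, 2.0, 1.0), (0.5, 4.0, 1.0), (1.0, 1.0, 1.0)]:
    state, y = linear_attn_step(state, k, v, q)
    outs.append(y)
```

Memory stays O(1) in sequence length, whereas full attention keeps every past key and value around, O(t).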

[+] intalentive|1 year ago|reply
Idea for killer app for recurrent models: low latency, low memory LLM / TTS coupling. Start decoding / generating speech as soon as new tokens are generated. When the LLM is cranking out token t, the TTS is already working on token t-1. It doesn’t have to wait. Then when the LLM is finished, the TTS is nearly finished too. The two models being colocated you just saved another network call as well.

Recurrent models with constant hidden state are naturally suited to streaming data, potentially opening the door to unexplored new use cases.
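The proposed pipelining can be sketched with a queue between two threads (the token and synthesis functions here are hypothetical stand-ins, not a real LLM/TTS API): the TTS consumes token t-1 while the LLM emits token t.

```python
import queue
import threading

def llm_generate(out_q):
    for tok in ["hello", "there", "world"]:  # pretend these are LLM tokens
        out_q.put(tok)
    out_q.put(None)  # end-of-stream sentinel

def tts_consume(in_q, audio):
    while True:
        tok = in_q.get()
        if tok is None:
            break
        audio.append(f"audio({tok})")  # pretend per-token synthesis

q = queue.Queue()
audio_chunks = []
t = threading.Thread(target=tts_consume, args=(q, audio_chunks))
t.start()
llm_generate(q)
t.join()
```

The TTS never waits for the full transcript, so end-to-end latency approaches the cost of the slower model alone rather than the sum of both.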

[+] computerex|1 year ago|reply
New multimodal models take raw speech input and provide raw speech output, no tts in the middle.
[+] pico_creator|1 year ago|reply
This is actually the hypothesis for Cartesia (the state-space team), and hence their deep focus on voice models specifically: taking full advantage of recurrent models' constant-time compute for low latencies.

The RWKV team's focus, however, is still first on the multi-lingual text space, then the multi-modal space in the future.

[+] cootsnuck|1 year ago|reply
This can currently already be done using a streaming capable LLM with a streaming input/output TTS model.
[+] yshui|1 year ago|reply
Any autoregressive model can do what you are describing. Transformers generate one token at a time too, not all at once.
[+] pico_creator|1 year ago|reply
Hey there, I'm Eugene / PicoCreator - co-leading the RWKV project - feel free to AMA =)
[+] Ey7NFZ3P0nzAe|1 year ago|reply
I noticed the lack of support from ollama and llama.cpp for RWKV. As those are (to my eyes) very strong drivers of experimentation (i.e. supporting them means vastly more outreach), I was wondering whether you were considering taking this into your own hands by contributing code to them? Or is the fact that you are not (AFAIK) doing so down to a lack of manpower, or some other reason?
[+] nickpsecurity|1 year ago|reply
It’s really interesting work. I’m glad you’ve kept at it. I’d like to ask you about two issues.

I keep seeing papers like “Repeat After Me” claiming serious weaknesses of state space vs transformer models. What are the current weaknesses of RWKV vs transformers? Have you mitigated them? If so, how?

The other issue is that file sharing being illegal, Wikipedia requiring derivatives to be copyleft, etc. means I can’t legally train models on most data. Pre-1920s works in Project Gutenberg are totally public domain. Both the model and the training data would be 100% legal for reproducible research. Would your team be willing to train a 3B-7B model on only Gutenberg and release it to the public domain?

(Note: The Stack without GitHub Issues can be used for permissive code. However, there could be contamination issues like incorrect licenses, PII, etc. So, maybe at least one, 100% legal model. Maybe a second with Gutenberg and The Stack for coding research.)

Example use of Gutenberg:

https://www.tensorflow.org/datasets/catalog/pg19

[+] Ey7NFZ3P0nzAe|1 year ago|reply
Has there been progress towards making RWKV multimodal? Can we use projector layers to send images to RWKV?
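For context, a "projector" in the LLaVA style is just a learned map from image-encoder features into the LLM's embedding space, so image patches can be fed to the language model as if they were token embeddings. A hedged sketch with made-up dimensions and weights (nothing here is RWKV-specific, which is what makes the question natural):

```python
def project(image_feats, w):
    # w: vision_dim x llm_dim projection matrix
    vision_dim, llm_dim = len(w), len(w[0])
    return [
        [sum(feat[i] * w[i][j] for i in range(vision_dim)) for j in range(llm_dim)]
        for feat in image_feats
    ]

w = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 1.0]]                  # vision_dim=2 -> llm_dim=3
prefix = project([[2.0, 3.0]], w)      # one image-patch feature vector
```

The projected `prefix` vectors are then concatenated in front of the text-token embeddings before running the sequence model.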
[+] Ey7NFZ3P0nzAe|1 year ago|reply
I'm quite interested in repeng [0] (representation engineering) for steerability of (so far transformer-based) LLMs and was wondering if anyone had tried such methods on rwkv (or mamba for that matter). Maybe there are some low-hanging fruits about it.

[0] https://github.com/vgel/repeng/issues
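The core repeng-style trick is small enough to sketch (hedged illustration, not the repeng API): derive a direction from hidden states of contrasting prompts, then add it, scaled, to the hidden state at inference time. Nothing below assumes transformer internals, which is why the same idea could plausibly be tried on an RWKV or Mamba hidden state.

```python
def control_vector(h_pos, h_neg):
    # direction = difference between hidden states of contrasting examples
    return [(p - n) for p, n in zip(h_pos, h_neg)]

def steer(h, v, alpha=1.0):
    # nudge the hidden state along the control direction
    return [hi + alpha * vi for hi, vi in zip(h, v)]

v = control_vector([1.0, 0.0], [0.0, 1.0])  # direction [1.0, -1.0]
steered = steer([0.5, 0.5], v, alpha=0.5)
```

In practice the direction is averaged over many contrasting pairs (often via PCA) rather than taken from a single pair.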

[+] low_tech_punk|1 year ago|reply
Thanks! The 0.1B version looks perfect for embedded system. What is the key benefit of attention-free architecture?
[+] bratao|1 year ago|reply
What would be the most performant way to run inference using RWKV? Do you have any speed comparisons to a similar-sized transformer?

I have a task (OCR cleaning) for which I'm evaluating faster options, and it looks like RWKV would be a nice alternative.

[+] littlestymaar|1 year ago|reply
Have there been any plans to build a “reasoning” LLM using RWKV? With the increase in inference token count caused by such methods, the much lower footprint of recurrent architectures could really make a difference for such a use case.
[+] theLiminator|1 year ago|reply
Do you have an in-depth comparison between RWKV and models like Mamba or S4?
[+] jharohit|1 year ago|reply
congrats and great work on RWKV and Recursal.ai
[+] smusamashah|1 year ago|reply
How does it compare with other LLMs in terms of performance? Is it near GPT-3 or Llama, or what?
[+] Fischgericht|1 year ago|reply
"RWKV (pronounced RwaKuv)" - love it. How does the crow make? Rwa! Rwa! Rwa!
[+] bbor|1 year ago|reply
Thank god I’m not the only one stunned by that. I don’t need IPA, but this isn’t even vaguely pronounceable!
[+] upghost|1 year ago|reply
Seems really cool. Does anyone have any sample code to link to? Do RNN models use the same pytorch/hugging face Python stuff or is it completely different...?
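To sketch an answer (hedged toy code, not real RWKV internals): the calling pattern is the same as a transformer — feed one token, get updated state back — but a transformer's KV cache grows with sequence length while an RNN-style state stays fixed-size:

```python
class ToyTransformerCache:
    def __init__(self):
        self.kv = []                  # one (key, value) pair kept per past token
    def step(self, tok):
        self.kv.append((tok, tok))   # cache grows every step

class ToyRNNState:
    def __init__(self):
        self.h = 0.0                  # fixed-size state, however long the sequence
    def step(self, tok, decay=0.5):
        self.h = decay * self.h + tok

tf, rnn = ToyTransformerCache(), ToyRNNState()
for tok in [1.0, 2.0, 3.0]:
    tf.step(tok)
    rnn.step(tok)
```

For actual code, the official BlinkDL/RWKV-LM repository ships PyTorch training and inference scripts, so the tooling is the familiar PyTorch stack rather than something completely different.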
[+] bigattichouse|1 year ago|reply
I've spent an afternoon attempting to compile and run RWKV7 locally.. and I just don't get it. lotta errors in compiling... and it's a lot. Like a lot, a lot... it's walls of versions and sub projects.

Any kind of quickstart guide?

Also found/tried rwkv.cpp, and I can't seem to compile that either.

[+] nullc|1 year ago|reply
Anyone ever look at doing a MoE like composition with RWKV and a transformer?
[+] sushidev|1 year ago|reply
Interesting. Very cryptic for simple user like me. I wonder if it’s useful today and for what purposes
[+] pico_creator|1 year ago|reply
Currently the strongest RWKV model is 32B in size: https://substack.recursal.ai/p/q-rwkv-6-32b-instruct-preview

This is a full drop-in replacement for any transformer model use case at model sizes 32B and under, as it has equal performance to existing open 32B models on most benchmarks.

We are working on a 70B, which will be a full drop-in replacement for most text use cases.