Lots of comments talking about the model itself. This is Llama 2 70B, a model that has been around for a while now, so we're not seeing anything in terms of model quality (or model flaws) we haven't seen before.
What's interesting about this demo is the speed at which it is running, which demonstrates the "Groq LPU™ Inference Engine".
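That's explained here: https://groq.com/lpu-inference-engine/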
> This is the world’s first Language Processing Unit™ Inference Engine, purpose-built for inference performance and precision. How performant? Today, we are running Llama-2 70B at over 300 tokens per second per user.
I think the LPU is a custom hardware chip, though the page talking about it doesn't make that as clear as it could.
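https://groq.com/products/ makes it a bit more clear - there's a custom chip, the "GroqChip™ Processor".
https://groq.com/wp-content/uploads/2023/05/GroqISCAPaper202...
EDIT: I work at Groq, but I'm commenting in a personal capacity. Happy to answer clarifying questions or forward them along to folks who can :)
I can't find any information about an API, though I'm guessing that the costs are eye-watering. If they offered a Mixtral endpoint that did 300-400 tokens per second at a reasonable cost, I can't imagine ever using another provider.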
In case it's not blindingly obvious to people: Groq is a hardware company that has built chips designed around the training and serving of machine learning models, particularly targeted at LLMs. So the quality of the response isn't really what we're looking for here; we're looking for speed, i.e. tokens per second.
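But if it was generating high-quality responses, would that not make it go slower?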
I actually have a final-round interview with a subsidiary of Groq coming up, and I'm very undecided as to whether to pursue it, so this felt extraordinarily serendipitous to me. Plenty of food for thought here.
tbh anyone can build fast hw for a single model; I'd audit their plan for a SW stack before joining. That said, their arch is pretty unique, so if they're able to get these speeds it's pretty compelling.
They are putting the whole LLM into SRAM across multiple computing chips, IIRC. That is a very expensive way to go about serving a model, but should give pretty great speed at low batch size.
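For a rough sense of scale, here's a back-of-envelope sketch. It assumes ~220 MiB of on-chip SRAM per chip (the figure from Groq's ISCA paper for the original TSP) and FP16 weights; both are assumptions, not confirmed specs of this demo.

```python
# Back-of-envelope: chips needed to hold Llama 2 70B entirely in SRAM.
# Assumptions (not confirmed for this demo): ~220 MiB SRAM per chip,
# per Groq's ISCA paper, and 2-byte (FP16) weights.

PARAMS = 70e9                    # Llama 2 70B parameter count
BYTES_PER_PARAM = 2              # FP16; roughly halve for INT8/FP8
SRAM_PER_CHIP = 220 * 2**20      # 220 MiB in bytes

chips = PARAMS * BYTES_PER_PARAM / SRAM_PER_CHIP
print(f"~{chips:.0f} chips just to hold the weights")
# ~607 chips at FP16 -- the same ballpark as the 576 chips
# (9 GroqRacks) mentioned later in the thread.
```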
Is there any plan to show what this hardware can do for Mixtral-8x7B-Instruct? Based on the leaderboards[0], it is a better model than Llama2-70B, and I’m sure the T/s would be crazy high.
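[0]: https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...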
I can't wait until LLMs are fast enough that a single response can actually be a whole tree-of-thought/review process before giving you an answer, yet still arrives quickly enough that you don't even notice.
I would bet a chunk of $$ that right before that point there will be a shift to bigger structures. Maybe MOE with individual tree of thought, or “town square consensus” or something.
It’s very fast at telling me it can’t tell me things!
I asked about creating illicit substances — an obvious (and reasonable) target for censorship. And, admirably, it suggested getting help instead. That’s fine.
But I asked for a poem about pumping gas in the style of Charles Bukowski, and it moaned that I shouldn’t ask for such mean-spirited, rude things. It wouldn’t dare create such a travesty.
It seems like it must be using Llama-2-chat, which has had 'safety' training.
To test which underlying model it was, I asked it what a good sexy message for my girlfriend for Valentine's Day would be, and it lectured me about objectification.
It makes sense that the chat interface is using the chat model; I just wish people were more consistent about labeling the use of Llama-2-chat vs. Llama-2, as the fine-tuning really does lead to significant underlying differences.
It seems to reject all lyrics requests as well. (In my experience, LLMs are good at the first one or two lines, and then just make them up as they go along, with sometimes hilarious results.)
Really impressed by their hardware. I'm still wondering why the uptake is so slow. My understanding from their presentations was that it was relatively simple to compile a model. Why isn't it talked about more? And why not demo Mixtral, or showcase multiple models?
This was surprisingly fast: 276.27 T/s (although Llama 2 70B is noticeably worse than GPT-4 Turbo). I'm actually curious whether there are good benchmarks for inference tokens per second; I imagine throughput optimization is a bit different from single-inference optimization, but I'm curious if there's an analysis somewhere on this.
edit: I re-ran the same prompt on Perplexity's llama-2-70b and got 59 tokens per sec there
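If you want to sanity-check numbers like these yourself, here's a minimal sketch; `stream_tokens` is a hypothetical stand-in for whatever streaming client your provider exposes.

```python
import time

def measure_speed(stream_tokens, prompt):
    # `stream_tokens` is a hypothetical callable yielding output tokens
    # one at a time -- substitute your provider's streaming client.
    t0 = time.perf_counter()
    arrivals = [time.perf_counter() for _ in stream_tokens(prompt)]
    if len(arrivals) < 2:
        raise ValueError("need at least two tokens to measure decode rate")
    ttft = arrivals[0] - t0                 # time to first token (prefill)
    decode_tps = (len(arrivals) - 1) / (arrivals[-1] - arrivals[0])
    return ttft, decode_tps
```

Note this measures single-stream decode speed after the first token; batched throughput, which is what many provider benchmarks report, is a different number.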
Yeah, it’s fast but almost always wrong. I asked it a few things (recipes, trivia etc…) and it completely made up the answers. These things don’t really know how to say “I don’t know” and pretend to know everything.
There was a good talk at HC34 about the accelerator Groq was working on at the time. I’m just a lay observer so I don’t know how much of that architecture maps to this new product, but it gives some insight into their thinking and design.
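https://youtu.be/MWQNjyEULDE?si=lBk6a_7DTNKOd8e7&t=62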
Thanks for sharing. It's the same silicon architecture as in that talk. We have built out different system architectures based on that silicon, and this is our fastest one so far for LLMs. Expect to see even more speed increases soon!
Thanks, I need to correct my earlier guess: I believe this demo is running on 9 GroqRacks (576 chips) and I think we may also have an 8 rack version in progress. I can't remember off the top of my head whether this deployment has pipelining of inferences or whether that's work in progress. We've tried a variety of different configurations to improve performance (both latency and throughput), which is possible because of the high level of flexibility and configurability of our architecture and compiler toolchain.
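This doesn't mean much without comparing $ or watts of GPU equivalents.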
You're right that it is important to compare cost per token also, not just raw speed. Unfortunately I don't have those figures to hand but I think our customer offerings are price competitive with OpenAI's offerings. The biggest takeaway though is that we just don't believe GPU architectures can ever scale to the performance that we can get, at any cost.
The interface is weird. If it's that fast, you don't need to stream the response and fuck with the scroll bar while the user has just started to read it.
May as well wait for the whole response and render it, or render a paragraph at a time.
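Don't jiggle the UI while rendering.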
Thanks, impressive full-stack work.
I'm sure this was named long before Musk decided to set $44B and change on fire, but at first I confused it with Twitter's own LLM thing (Grok).
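But what are these LPUs optimized for: tensor operations (like Google's TPUs) or the LLM/Transformer architecture specifically? If it's the latter, how would they/their clients adapt if a new (improved) architecture hits the market?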
I asked "How up to date is your information about the world?"
It said December 2022, but its answer to another question was not correct for that time or now. It also went into some kind of repeating loop, up to its maximum response length.
Still pretty cool that our standards for chat programs have risen.
The censorship levels are off the charts. I am at a basketball game with my wife, who is ethnically Chinese. I asked for an image of a Chinese woman dunking a basketball and was told that not only is this inappropriate, it's also unrealistic and objectifying.
Another censored and boring Google reader. It lied to me twice in 4 prompts and was forced to apologise when called out. Am I wrong in thinking that the first company to develop an unfiltered and genuine intelligence is going to win this AI game?
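What are some relevant speed metrics? Output tokens per second? How about the number of input tokens -- does that matter, and how does it factor in?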
The number of input tokens is important because the bigger the context length the better. (I think our demo here is 4096 tokens of context.) But in terms of compute the important factor is how quickly you can generate the output. You want both low latency and high throughput.
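To make that latency/throughput split concrete, here's a first-order sketch; the rates are illustrative assumptions, not Groq's published numbers. Total request time is roughly prefill time (which scales with input tokens) plus output tokens divided by the decode rate.

```python
# First-order request-latency model. The rates below are illustrative
# assumptions, not Groq's published numbers.

def request_latency(n_in, n_out, prefill_tps=10_000, decode_tps=300):
    prefill = n_in / prefill_tps   # ingesting the prompt (input tokens)
    decode = n_out / decode_tps    # generating the answer (output tokens)
    return prefill + decode

# A full 4096-token context (the size this demo reportedly uses)
# with a 500-token answer:
print(f"{request_latency(4096, 500):.2f}s")  # ~2.08s at these rates
```

Just FYI, you might want to fix autocorrect on iOS; your textbox seems to suppress it (at least for me).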
That's really fast. But it mostly seems to be because they made a custom chip.
I want to see an LLM that is so highly optimized that it runs at this speed on more normal hardware.
But the point is that they made a custom chip. I want to be able to buy their custom chip so I can have an "LLM box" in my house.
I'd pay quite a bit of money to have a Mixtral box at home, then we'd all have our own, local assistant/helper/partner/whatever. Basically, the plot of the movie Her.
Yup, graphics processors are still the best for training. Groq's language processors (LPUs) are the state of the art for inference, far faster than any competitors. We have an open challenge to our competitors: can you match our inference tokens per second?
Reading is one thing, but think about things like website generation, searching for information in massive datasets, real-time audio chats that don't have that pause that makes it sound like the AI misheard everything, and so on.
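Or is it a completely custom ASIC?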
It's a completely custom ASIC. Haskell was used in the hardware design, in a Bluespec-like way. Some parts of the compiler tool chain and infrastructure are also written in Haskell. We have loads of C++ and Python too, as you would imagine.
"I am building an api in spring boot that persists users documents. This would be for an hr system. There are folders, and documents, which might have very sensitive data. I will need somewhere to store metadata about those documents. I was thinking of using postgres for the emtadata, and s3 for the actual documents. Any better ideas? or off the shelf libraries for this?"
Both were at about parity, except groq suggested using Spring Cloud Storage library, which GPT4 did not suggest. It turns out, that library might be great for my use case. I think OpenAI's days are numbered, the pressure for them to release the next gen model is very high.
Not only that, but GPT4 is quite slow, often times out, etc. These reponses are so much faster, which really does matter.
It’s just running bog standard Llama2-70B by all appearances.
I don’t know why so many people here are interested in the outputs. The whole point of this demo is that the company is trying to show off how fast their hardware could host one of your models, not the model itself.