
Groq surpasses 1,200 tokens/sec with Llama 3 8B

43 points | YourCupOTea | 1 year ago | twitter.com

31 comments


LorenDB|1 year ago

Groq is an insane company. SambaNova (discussed yesterday[0]) is also very promising. However, what I really want to see is local AI accelerator chips a la Tenstorrent Grayskull that can boost local generation to hundreds of tokens per second while being more efficient than GPUs.

[0]: https://news.ycombinator.com/item?id=40508797

frozenport|1 year ago

Samba is on gen 4 silicon and still lagging; somebody over there is doing something wrong.

windowshopping|1 year ago

Is groq related to Twitter's grok or is that just a very unfortunate naming coincidence?

spiderfarmer|1 year ago

I think groq has more users and a better business model.

Me1000|1 year ago

Completely unrelated.

andy_xor_andrew|1 year ago

When reading Hacker News you develop a signal/noise filter, where lots of headlines make bold claims but you filter them out as embellishment or exaggeration.

My bullshit detector went off when I first saw Groq posted on HN - a startup is making their own chips (doubt) that perform faster than anything Nvidia has for inference (doubt) and accelerate LLMs to hundreds/thousands of tokens per second?? Mega doubt.

But... then I tried their demo, and... yeah, it's that good. Such an amazing company of talented individuals.

saberience|1 year ago

The issue is that their chips need a huge number of server blades, and there's real doubt about whether this approach actually scales. That is, how will Groq handle much larger models with contexts of hundreds of thousands or millions of tokens? Right now that would require them to deploy a cluster with thousands of chips, versus, say, 10 chips for an Nvidia system.

The other issue they don't mention is power, space, efficiency, etc. We want to run larger models with less power, fewer server blades, and at lower cost, not with more server blades, more chips, and more power.

frozenport|1 year ago

8 year old unicorn++ with a public demo sounds credible?

behnamoh|1 year ago

They're not responsive to my questions on Twitter, so I'm asking here:

    When will Groq support a real API (not experimental beta preview)?

    When will Groq support logprobs?!

    When will Groq actually tell us what their rate limit is?!

Until these are answered, many of us can't actually build on Groq.

Edit: It seems I'm getting downvoted by Groq employees...

porphyra|1 year ago

Try asking in the groq discord [0]. Some groq employees are fairly responsive there.

For groqcloud the rate limits are fairly clear [1]. For example, for llama3-8b-8192 you get 30 requests per minute, 14400 per day, and 30000 tokens per minute. That said, it's the beta free tier so it sometimes goes down randomly and the limits may be different once they start charging for it.

I'm not affiliated with groq but I use groqcloud to make some simple chatbots since it's currently free.

[0] https://discord.com/invite/n8KtCjfAug

[1] https://console.groq.com/settings/limits
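Limits like the 30 requests/minute quoted above can be respected client-side with a simple sliding-window throttle. A minimal sketch in Python (the 30 req/min figure comes from the comment above; the `SlidingWindowLimiter` class and its names are illustrative, not part of any Groq SDK):

```python
import time
from collections import deque


class SlidingWindowLimiter:
    """Blocks until a request slot is free within a rolling time window."""

    def __init__(self, max_requests=30, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()  # monotonic times of recent requests

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            # Sleep until the oldest request falls out of the window, then retry.
            time.sleep(self.window - (now - self.timestamps[0]))
            return self.acquire()
        self.timestamps.append(time.monotonic())


# Call limiter.acquire() before each API request to stay under 30 req/min.
limiter = SlidingWindowLimiter(max_requests=30, window_seconds=60.0)
```

This only guards the per-minute request count; the per-day and tokens-per-minute limits mentioned above would need their own counters.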