cateye | 2 years ago
Regular Twitch Neurons (RTN) - running wherever there's capacity at $0.01 / 1k neurons
Fast Twitch Neurons (FTN) - running at nearest user location at $0.125 / 1k neurons
Neurons are a way to measure AI output that always scales down to zero. To give you a sense of what you can accomplish with a thousand neurons, you can: generate 130 LLM responses, 830 image classifications, or 1,250 embeddings.
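As a sanity check on these rates, the per-item costs implied by the quoted figures can be worked out directly (a sketch; the dollar rates and per-1k-neuron yields are taken from the announcement text above, nothing else is assumed):

```python
# Implied per-item costs from the quoted "neuron" pricing.
RTN_PER_1K = 0.01    # $ per 1,000 neurons (Regular Twitch Neurons)
FTN_PER_1K = 0.125   # $ per 1,000 neurons (Fast Twitch Neurons)

# Items obtainable per 1,000 neurons, per the announcement
YIELD = {"llm_response": 130, "image_classification": 830, "embedding": 1250}

def cost_per_item(rate_per_1k, items_per_1k_neurons):
    """Dollars for a single item at the given per-1k-neuron rate."""
    return rate_per_1k / items_per_1k_neurons

for item, per_1k in YIELD.items():
    print(f"{item}: RTN ${cost_per_item(RTN_PER_1K, per_1k):.6f}, "
          f"FTN ${cost_per_item(FTN_PER_1K, per_1k):.6f}")
```

So an LLM response costs roughly $0.000077 on the slow tier and $0.00096 on the fast tier, a flat 12.5x spread across all workloads.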
Who came up with this? This is ridiculous. I understand the underlying issues but would still prefer a metric like seconds of utilization multiplied by the size of the worker.
Besides this, the announcement doesn't talk about the expected pricing, just the pricing model. I have the feeling that this is not going to be competitive with platforms like Vast.ai.
jokethrowaway | 2 years ago
Quality will likely be heaps worse than ChatGPT 3.5, given it's Llama 7B.
It's $0.096 per 100 fast chat responses and $0.0076 per 100 slow chat responses.
ChatGPT 3.5 with 50 tokens input and 50 tokens output will give you $0.02 per 100 fast responses. If the LLM responses are 500 tokens in and 500 tokens out, then you get $0.20 per 100 fast responses.
I presume people will flock to the cheap version when they can't afford ChatGPT 3.5's price and don't need its quality.
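Working the comparison through explicitly (a sketch: the neuron rates and yields come from the announcement, and GPT-3.5 is assumed at a flat $0.002 per 1k tokens, which is the rate the commenter's figures imply; note the fast-tier number comes out to about $0.096 per 100 responses at these rates):

```python
# Per-100-response cost: Workers AI "neurons" vs. GPT-3.5 token pricing.
NEURONS_PER_RESPONSE = 1000 / 130        # ~7.69 neurons per LLM response

# 100 responses, priced at $0.01 (slow) and $0.125 (fast) per 1k neurons
slow = 100 * NEURONS_PER_RESPONSE / 1000 * 0.01
fast = 100 * NEURONS_PER_RESPONSE / 1000 * 0.125

def gpt35_per_100(tokens_in, tokens_out, rate_per_1k_tokens=0.002):
    """Cost of 100 responses at a flat per-token rate (assumed rate)."""
    return 100 * (tokens_in + tokens_out) / 1000 * rate_per_1k_tokens

print(f"slow (RTN): ${slow:.4f}")                    # ~$0.0077
print(f"fast (FTN): ${fast:.4f}")                    # ~$0.0962
print(f"GPT-3.5, 50+50 tokens:   ${gpt35_per_100(50, 50):.2f}")
print(f"GPT-3.5, 500+500 tokens: ${gpt35_per_100(500, 500):.2f}")
```

So the slow tier undercuts GPT-3.5 even at short responses, while the fast tier only wins once responses run long.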