WingNews logo WingNews
top | item 46584593

(no title)

changbai | 1 month ago

Inference cost for leading models and more complex tasks remains high. However, inference cost for a fixed model and a fixed task has dropped drastically.

The a16z LLMflation analysis (https://a16z.com/llmflation-llm-inference-cost/), for example, shows this to be true.

OpenRouter's State of AI report (https://openrouter.ai/state-of-ai) makes the same observation.
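As a rough illustration of how fast a constant-capability price decline compounds (the starting price and decline rate below are hypothetical placeholders, not figures taken from either report):

```python
# Hypothetical illustration: per-million-token price for a fixed model
# falling by an assumed constant factor each year. All numbers here are
# assumptions chosen for illustration, not data from the linked reports.
start_price = 10.0    # assumed $ per million tokens at launch
annual_factor = 10    # assumed 10x price decline per year
years = 2

price_now = start_price / annual_factor ** years
print(f"${price_now:.2f} per million tokens after {years} years")
```

Under these assumed numbers, a task that cost $10 in tokens at launch would cost about $0.10 two years later, even with no change in the model or the task.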

No comments yet.

powered by hn/api // news.ycombinator.com