gptfiveslow | 3 months ago
It loves doing a whole bunch of reasoning steps and proclaiming what a very good job it did clearing up its own todo steps and all that mumbo jumbo, but at the end of the day, I only asked it for a small piece of information about nginx try_files that even GPT-3 could answer instantly.
Maybe before you make reasoning models that go on funny little sidequests where they multiply numbers by 0 a couple of times, make them good at identifying the length of a task. Until then, I'll ask little bro and escalate only if necessity arrives. And if the big model ends up gathering dust, well... yeah.
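(For context on the kind of question being referred to: the original comment doesn't say what was asked, but a typical try_files question looks like the minimal sketch below. The paths and the SPA fallback are illustrative assumptions, not from the comment.)

```nginx
server {
    listen 80;
    root /var/www/html;  # assumed document root, purely illustrative

    location / {
        # try_files checks each argument in order and serves the first
        # one that exists: the exact URI, then the URI as a directory,
        # then falls back to /index.html (a common single-page-app setup)
        try_files $uri $uri/ /index.html;
    }
}
```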
rho4|3 months ago
Imagine waiting for a minute until Google spits out the first 10 results.
My prediction: All AI models of the future will give an immediate result, with more and more innovation in mechanisms and UX to drill down further on request.
Edit: After reading my reply I realize that this is also true for interactions with other people. I like interacting with people who give me a 1 sentence response to my question, and only start elaborating and going on tangents and down rabbit holes upon request.
philipwhiuk|3 months ago
I doubt it. In fact I would predict the speed/detail trade-off continues to diverge.
confirmmesenpai|3 months ago
What if the instantaneous response makes you waste 10 minutes before you realize it wasn't what you were searching for?
EagnaIonat|3 months ago
If you are talking about local models, you can switch that off. The reasoning is a common technique now to improve the accuracy of the output where the question is more complex.
szundi|3 months ago
[deleted]
Tepix|3 months ago
You do know that it's a hyperlink, right? /s