AndyNemmity | 2 months ago
That uses fewer tokens. The LLM is just calling the script, getting the response, and then using that to continue to reason.
So I'm not exactly following.
dnautics | 1 month ago