I have no idea what everyone is talking about. LLMs are based on relatively simple math; inference is much easier to learn and customize than, say, Android APIs. Once you do, you can apply familiar programming-style logic to messy concepts like language and images. Give your model a JSON schema like "warp_factor": Integer if you don't want chatter; that's way better than the Star Trek computer could do. Or have it write you a simple domain-specific library on top of the Android API that you can then program from memory like old-style BASIC, rather than having to run to Stack Overflow for every new task.
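The "schema instead of chatter" idea above can be sketched in a few lines: hand the model a schema in the prompt, then validate its reply against that schema before using it. This is a minimal stdlib-only sketch; the schema, field name, and `parse_reply` helper are illustrative, not any particular API's.

```python
import json

# Hypothetical schema we'd include in the prompt, e.g.:
# "Respond ONLY with JSON matching {\"warp_factor\": <integer>}."
SCHEMA = {"warp_factor": int}

def parse_reply(reply: str) -> dict:
    """Parse a model reply and enforce the expected fields and types."""
    data = json.loads(reply)  # chatter (non-JSON) raises here
    for key, expected_type in SCHEMA.items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], expected_type):
            raise TypeError(f"{key} must be {expected_type.__name__}")
    return data

# A well-behaved reply parses cleanly; anything else fails fast.
print(parse_reply('{"warp_factor": 7}'))
```

Rejecting malformed replies at the boundary is what makes the model feel like a programmable component rather than a chat partner.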
layer8|21 days ago
The fact that LLMs are based on a network of simple matrix multiplications doesn’t change that. That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.
orangecat|21 days ago
Right, which is the point: LLMs are much more like human coworkers than compilers in terms of how you interact with them. Nobody would say that there's no point to working with other people because you can't predict their behavior exactly.
cat_plus_plus|21 days ago