(no title)
ssivark | 2 months ago
Implicit in your claim are specific assumptions about how expensive or untenable it is to build systemic guardrails and human feedback, and about the specific cost/benefit ratio of approximate versus perfect goal attainment. Rest assured that there is a whole portfolio of situations where different design points make the most sense.
nkmnz | 2 months ago
1. Law of diminishing returns - AI is already much, much faster at many tasks than humans, especially at spitting out text, so becoming even faster doesn't always make that much of a difference.

2. Theory of constraints - the throughput of a system is mostly limited by its "weakest link", i.e. its slowest part, which might not be the LLM but some human-in-the-loop. That bottleneck might be reduced only by smarter AI, not by faster AI.

3. Intelligence is an emergent property of a system, not a property of its parts - in other words, intelligent behaviour is created through interactions. More powerful LLMs enable new levels of interaction that are just not available with less capable models. You don't want to bring a knife, not even the quickest one in town, to a massive war of nukes.
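The theory-of-constraints point can be made concrete with a toy calculation (the stage times below are invented for illustration, not taken from any real system): when a slow human-review stage dominates a pipeline, a 10x faster LLM barely moves end-to-end throughput, while a smarter model that shortens the human stage does.

```python
def items_per_hour(stage_seconds):
    """Steady-state throughput of a pipeline is bounded by its slowest stage."""
    return 3600 / max(stage_seconds)

# Hypothetical stage times: [LLM generation, human review], in seconds per item.
baseline   = items_per_hour([2.0, 60.0])   # LLM: 2 s, human review: 60 s
faster_llm = items_per_hour([0.2, 60.0])   # 10x faster LLM, same human stage
smarter_ai = items_per_hour([2.0, 20.0])   # smarter AI cuts review time to 20 s

print(f"{baseline:.0f}, {faster_llm:.0f}, {smarter_ai:.0f}")  # 60, 60, 180
```

Speeding up the non-bottleneck stage leaves throughput at 60 items/hour; easing the human bottleneck triples it.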