lukebuehler|3 months ago
They are doing useful stuff, saving time, etc., which can be measured. Thus the definition of AGI has largely become: "can produce or surpass the economic output of a human knowledge worker".
But I think this detracts from the more interesting discussion of what they are, more essentially. So, while I agree that we should push on getting our terms defined, I'd rather work with a hazy definition than derail so many AI discussions into mere economic output.
Rebuff5007|3 months ago
Do you think someone who has only ever studied pre-calc would be able to work through a calculus book if they had sufficient time? How about a multi-variable calc book? How about grad-level mathematics?
IMO, intelligence and thinking are strictly about this ratio: what can you extrapolate from the smallest amount of information possible, and why? From this perspective, I don't think any of our LLMs are remotely intelligent, despite what our tech leaders say.
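To make that ratio concrete, here's a toy sketch in Python (purely illustrative; eval_score and the token counts are made-up placeholders, not a real benchmark):

    # Toy "intelligence ratio": capability gained per unit of training
    # information consumed. eval_score is some hypothetical benchmark
    # result in [0, 1]; training_tokens is how much data the system saw.
    def extrapolation_ratio(eval_score: float, training_tokens: int) -> float:
        """Higher = more capability extrapolated from less information."""
        return eval_score / training_tokens

    # A student mastering calculus from roughly a textbook's worth of
    # input scores orders of magnitude higher than an LLM pretrained on
    # web-scale data, even at the same absolute benchmark score.
    student = extrapolation_ratio(0.9, 10**6)   # ~one textbook of tokens
    llm     = extrapolation_ratio(0.9, 10**13)  # web-scale pretraining
    print(f"{student / llm:.0e}")  # ~1e+07: far more extrapolated per token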
kryogen1c|3 months ago
I have long thought this, but not had as good a way to put it as you did.
If you think about geniuses like Einstein and Ramanujan, they understood things before they had the mathematical language to express them. LLMs are the opposite: they fail to understand things even after untold effort and training data.
So the question is: how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence.
Ever had the experience of helping someone who's chronically doing the wrong thing, only to eventually find they had an incorrect assumption, faulty reasoning deterministically generating wrong answers? LLMs don't do that; they simply lack understanding. They'll hallucinate unrelated things because they don't know what they're talking about. You may have also had this experience with someone :)
mycall|3 months ago
A crow bending a piece of wire into a hook to retrieve food demonstrates a novel solution extrapolated from minimal, non-instinctive environmental input. This kind of zero-shot problem-solving aligns better with your definition of intelligence.
lukebuehler|3 months ago
I would say a good definition has to, at minimum, take on the Turing test (even if you disagree, you should say why). Or, in current vibe parlance: it does "feel" intelligent to many people; they see intelligence in it. In my book, this allows us to call it intelligent, at least loosely.
hodgehog11|3 months ago
And yes, by this definition, LLMs pass with flying colours.
felipeerias|3 months ago
Nevertheless, we don’t have a good conceptual framework for thinking about these things, perhaps because we keep trying to apply human concepts to them.
The way I see it, an LLM crystallises a large (but incomplete and disembodied) slice of human culture, as represented by its training set. The fact that an LLM is able to generate human-sounding language
keiferski|3 months ago
If you’re asking big questions like “can a machine think?” or “is an AI conscious?” without doing the work of clarifying your concepts, then you’re only going to get vague ideas, sci-fi cultural tropes, and a host of other things.
I think the output question is also interesting enough on its own, because we can talk about the pragmatic effects of ChatGPT on writing without falling into this woo trap of thinking ChatGPT is making the human capacity for expression somehow extinct. But this requires one to cut through the hype and reactionary anti-hype, which is not an easy thing to do.
That is how I myself see AI: immensely useful new tools, but in no way some kind of new entity or consciousness, at least not without doing the real philosophical work to figure out what that would actually mean.
jlaternman|3 months ago
IMO the issue is that we won't be able to adequately answer this question before we can clearly describe what we mean by conscious thinking as applied to ourselves. First we'd need to define our own consciousness and our own "conscious thinking" much, much more clearly than we currently do.
If we ever reach that point, I think we'd be able to fruitfully apply it to AI, etc., to assess.
Unfortunately, nothing has obstructed us from answering this question about ourselves for centuries, even millennia, yet we have failed to do so, so it's unlikely to happen suddenly now. Unless we use AIs to first solve the problem of defining our own consciousness before applying it back to them. That would be a deeply problematic order, though, since nobody would trust a breakthrough in the understanding of consciousness that came from AI and was then used to put AIs in the same class, defining them as thinking or conscious things.
Kind of a shame we didn't get our own consciousness worked out before AI came along. Then again, it wasn't for lack of trying… philosophy commanded the attention of great thinkers for a long time.