derac | 2 months ago
I'm not sure English is a bad way to outline what the system should do. It has tradeoffs. I'm not sure library functions are a 1:1 analogy either. Or, if they are, you might grant me that it's possible to write a few English sentences that would expand into a massive amount of code.
It's very difficult to measure progress on these models in a way that anyone can trust, even more so when you wrap "agent" code around the model.
AdieuToLogic | 2 months ago
It isn't, as these outlines are how stakeholders convey needs to those charged with satisfying them (a.k.a. "requirements"). Where expectations become unrealistic is in believing language models can somehow "understand" those outlines the way a human expert would in order to produce an equivalent work product.
Language models can produce nondeterministic results based on the statistical model derived from their training data set(s), with varying degrees of relevance as determined by persons interpreting the generated content.
They do not understand "what the system should do."
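To make the sampling point concrete, here is a toy sketch (the token names and logit values are invented for illustration; real decoders work over large vocabularies, but the mechanism is the same):

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Scale the logits by temperature, then softmax into probabilities.
        scaled = [v / temperature for v in logits.values()]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # random.choices makes the nondeterminism explicit: same model,
        # same prompt, possibly different output.
        return random.choices(list(logits.keys()), weights=probs, k=1)[0]

    # Invented next-token logits after some prompt.
    logits = {"log": 2.1, "validate": 1.9, "reject": 0.4}
    print(sample_next_token(logits))  # may print "log"...
    print(sample_next_token(logits))  # ...or "validate" on another run

Run it twice and the outputs can differ. That is the nondeterminism in miniature: a draw from a statistical model, not comprehension of a requirement.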
veqq | 2 months ago
Human language is imprecise and allows unclear and logically contradictory statements, besides not being mechanically checkable. That's literally why we have formal and programming languages, and why English-like attempts such as COBOL failed: https://alexalejandre.com/languages/end-of-programming-langs...
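To illustrate the checkability point, compare an English requirement with a formal encoding of it (the function and its preconditions below are invented for illustration):

    def withdraw(balance: int, amount: int) -> int:
        # The English requirement "withdrawals must not overdraw the
        # account" leaves edge cases open: is amount == balance allowed?
        # what about negative amounts? The formal version pins them down,
        # and a checker can enforce them.
        assert amount > 0, "amount must be positive"
        assert amount <= balance, "must not overdraw"
        return balance - amount

The English sentence admits several readings; the formal one admits exactly one, and violating it fails loudly.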
raincole | 2 months ago
Top HN comments sometimes read like a random generator:
return random_criticism_of_ai_companies() + " " + unrelated_trivia_fact()
Why are people treating everything OpenAI does as evidence that they don't believe in AGI? It's like saying that if you don't mortgage your house to go all-in on AAPL, you "don't really believe Apple has a future." Even if OpenAI does believe there is an X% chance AGI will be achieved, that doesn't mean they should stop literally everything else they're doing.
adastra22 | 2 months ago
What is AGI? Artificial. General. Intelligence. Applying domain-independent intelligence to solve problems expressed in fully general natural language.
It's more than a pedantic point, though. What people expect from AGI is the transformative capabilities that emerge from removing the human from the ideation-creation loop. How do you do that? By systematizing the knowledge-work process and providing deterministic structure to agentic processes.
Which is exactly what these developments are doing.
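As a rough sketch of what "deterministic structure around an agentic process" could look like (propose, validate, and the retry budget are all invented names here, not any particular framework's API):

    from typing import Callable

    def validate(draft: str) -> bool:
        # Deterministic acceptance test, e.g. "does the generated code
        # compile" or "does the plan reference only known tools".
        return bool(draft.strip())

    def run_agent_step(propose: Callable[[str], str], state: str) -> str:
        # Fixed, auditable control flow wrapped around the single
        # nondeterministic step (the model call), with a bounded retry
        # budget instead of an open-ended loop.
        for _ in range(3):
            draft = propose(state)
            if validate(draft):
                return draft
        raise RuntimeError("no valid proposal after 3 attempts")

The model proposes; deterministic code decides whether to accept, retry, or fail.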
colechristensen | 2 months ago
Here's the thing: I get it, and it's easy to argue for this and difficult to argue against it. BUT
It's not intelligent. It just is not. It's tremendously useful and I'd forgive someone for thinking the intelligence is real, but it's not.
Perhaps it's just a poor choice of words. What a LOT of people really mean is something more like Synthetic Intelligence.
That is, however difficult it might be to define, REAL intelligence that was made, not born.
Transformer and diffusion models aren't intelligent; they're just very well-trained statistical models. We actually (metaphorically) have a million monkeys at a million typewriters for a million years creating Shakespeare.
My efforts at manipulating LLMs into doing what I want are pretty darn convincing evidence that I'm cajoling a statistical model, not interacting with an intelligence.
A lot of people won't be convinced that there's a difference, and convincing them is hard when I'm also saying it might not be possible to produce a definition of "intelligence" that is both satisfactory and testable.
bluefirebrand | 2 months ago
Even if this is true, which I disagree with, it simply creates a new bar: AGCI, Artificial Generally Correct Intelligence.
Because right now it is more like randomly correct.