derac|2 months ago

Call me naive, but my read is the opposite. It's impressive to me that we have systems which can interpret plain English instructions with a progressively higher degree of reliability. Also, that such a simple mechanism for extending memory (if you believe it's an apt analogy) is possible. That seems closer to AGI to me, though maybe it is a stopgap to better generality/"intelligence" in the model.

I'm not sure English is a bad way to outline what the system should do. It has tradeoffs. I'm not sure library functions are a 1:1 analogy either. Or, if they are, you might grant me that it's possible to write a few English sentences that would expand into a massive amount of code.

It's very difficult to measure progress on these models in a way that anyone can trust, more so when you involve "agent" code around the model.

AdieuToLogic|2 months ago

> I'm not sure English is a bad way to outline what the system should do.

It isn't, as this is how stakeholders convey needs to those charged with satisfying them (a.k.a. "requirements"). Where expectations become unrealistic is in believing language models can somehow "understand" those outlines as a human expert would, in order to produce an equivalent work product.

Language models can produce nondeterministic results based on the statistical model derived from their training data set(s), with varying degrees of relevance as determined by persons interpreting the generated content.
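
A toy sketch of that nondeterminism (the tokens and probabilities here are made up for illustration; a real model samples from a learned distribution over its vocabulary):

    import random

    next_tokens = ["yes", "no", "maybe"]  # hypothetical candidate tokens
    model_probs = [0.5, 0.3, 0.2]         # made-up probabilities standing in for a model's output

    # Sampling (rather than always picking the most likely token) is what
    # makes the generated text vary from run to run:
    print(random.choices(next_tokens, weights=model_probs, k=5))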

They do not understand "what the system should do."

veqq|2 months ago

> not sure English is a bad way to outline

Human language is imprecise and allows unclear and logically contradictory statements, besides not being checkable. That's literally why we have formal and programming languages, and why things like COBOL failed: https://alexalejandre.com/languages/end-of-programming-langs...

idopmstuff|2 months ago

This is just semantics. You can say they don't understand, but I'm sitting here with Nano Banana Pro creating infographics, and it's doing as good a job as my human designer does with the same kinds of instructions. Does it matter if that's understanding or not?

kjkjadksj|2 months ago

When do we jump the shark and replace the stakeholders with AI acting in their best interest (tm)? Seems that would come soon. It makes no sense to me that we’d obsolete engineering talent but then keep the people who got a 3.1 GPA in a business program around for reasons. Once we hit that point, just dispense with English and have the models communicate with each other in binary. We can play with sticks in caves.

raincole|2 months ago

I 100% agree. I don't know what the GP is on. Being able to write instructions in a .md file is "further away from AGI"? Like... what? It's just a little quality-of-life feature. How and why is it related to AGI?

Top HN comments sometimes read like a random generator:

return random_criticism_of_ai_companies() + " " + unrelated_trivia_fact()

Why are people treating everything OpenAI does as evidence against AGI? It's like saying that if you don't mortgage your house to go all-in on AAPL, you "don't really believe Apple has a future." Even if OpenAI believes there is an X% chance AGI will be achieved, it doesn't mean they should stop literally everything else they're doing.

adastra22|2 months ago

I’ve posted this before, but here goes: we achieved AGI in either 2017 or 2022 (take your pick), with the transformer architecture and then scaled-up NLP in ChatGPT.

What is AGI? Artificial. General. Intelligence. Applying domain independent intelligence to solve problems expressed in fully general natural language.

It’s more than a pedantic point though. What people expect from AGI is the transformative capabilities that emerge from removing the human from the ideation-creation loop. How do you do that? By systematizing the knowledge work process and providing deterministic structure to agentic processes.

Which is exactly what these developments are doing.

colechristensen|2 months ago

>What is AGI? Artificial. General. Intelligence.

Here's the thing, I get it, and it's easy to argue for this and difficult to argue against it. BUT

It's not intelligent. It just is not. It's tremendously useful and I'd forgive someone for thinking the intelligence is real, but it's not.

Perhaps it's just a poor choice of words. What a LOT of people really mean is something more along the lines of Synthetic Intelligence.

That is, however difficult it might be to define, REAL intelligence that was made, not born.

Transformer and diffusion models aren't intelligent; they're just very well-trained statistical models. We actually (metaphorically) have a million monkeys at a million typewriters for a million years creating Shakespeare.

My efforts manipulating LLMs into doing what I want are pretty darn convincing evidence that I'm cajoling a statistical model, not interacting with an intelligence.

A lot of people won't be convinced that there's a difference, and convincing them is hard when I'm saying it might not be possible to have a definition of "intelligence" that is both satisfactory and testable.

aaronblohowiak|2 months ago

We have achieved AGI no more than we have achieved human flight.

bluefirebrand|2 months ago

> we achieved AGI in either 2017 or 2022

Even if this is true, which I disagree with, it simply creates a new bar: AGCI. Artificial Generally Correct Intelligence

Because right now it is more like Randomly Correct.