el_isma | 2 years ago

I find it intriguing. It makes sense that this new kind of "thing" (LLMs) could be "programmed", and that you could craft a language specifically for its abilities. I've read the tutorials but I still find it hard to wrap my head around it.

Have you heard of any other language like this? Or had success using SudoLang?

atleta | 2 years ago

It's actually quite the opposite: it's counterintuitive that you could program these or, for that matter, any intelligence. The very point of a system being intelligent is that it will figure things out on its own, which means both that you don't need to program it (provide a very detailed and strict set of instructions) and that you won't be able to. The latter might be less obvious, and it's really just an intuition, but to me the capability to figure out what you mean from a less precise set of instructions (i.e., prompts) implies that it won't always follow your instructions even when you think they're meant to be followed literally. Because, first of all, how would it know when to do which? And even if we introduce a magic word that switches modes, it's still contradictory, because your "program" would still be a loosely defined set of instructions and not a real program. Otherwise you'd just be using an actual programming language.

Now, if the system has some form of common sense (what we humans call common sense), then it will be able to follow your instructions without doing unexpected things most of the time, but it will still fail, just as natural intelligences do.

Instead of programming the "thing", what you can do is have it generate a program that you can review, test, and then run. That's definitely more work than just giving a set of instructions to the LLM. But for common tasks it may acquire enough common sense that surprises become rare enough.
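For illustration, a rough sketch of that workflow (using the OpenAI Python client just as an example; the model name and prompt are placeholders):

    # Sketch: ask the LLM for a program, then review and test it
    # yourself before running it, instead of trusting the output.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable model
        messages=[{
            "role": "user",
            "content": "Write a Python function fizzbuzz(n) that returns "
                       "the list of FizzBuzz strings for 1..n. Code only.",
        }],
    )

    generated = response.choices[0].message.content
    print(generated)  # review by hand, save to a file, run your tests on it

The point is that the generated program is a fixed artifact you can review, test, and re-run, unlike the LLM itself, which may do something different each time.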

el_isma | 2 years ago

But among colleagues we use certain jargon, which varies by industry and probably by country. Could LLMs have their own preferred jargon?

I usually write pseudocode when I'm thinking about a problem to solve, so in a way I'm "thinking with pseudocode" instead of plain language. Pseudocode is probably more precise than plain language, and it's something I'd use when explaining to other humans what I want them to code (along with diagrams, which it seems ChatGPT would understand now). So, to me, tailoring this pseudocode into something the LLMs find easier to understand sounds reasonable. It's like learning how a fellow programmer prefers to receive their requirements.
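For example, the kind of thing I mean (informal pseudocode handed to the model as a prompt; this isn't actual SudoLang syntax, just a made-up spec):

    function dedupe_contacts(contacts):
        group contacts by normalized email and phone number
        within each group, keep the most recently updated record
        copy any missing fields into it from the discarded ones
        return the merged list plus a log of what was merged

That's far more precise than describing the same thing in prose, but it still leaves the actual implementation to whoever (or whatever) reads it.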

warrenm | 2 years ago

> The very point of a system being intelligent is that it will figure things out on its own, which means both that you don't need to program it (provide a very detailed and strict set of instructions) and that you won't be able to.

Humans are "intelligent", yet also "programmable" - why would you think an artificial "intelligence" (which, by definition, was programmed to start with) would not be programmable?

cmgriffing | 2 years ago

> It's actually quite the opposite: it's counterintuitive that you could program these or, for that matter, any intelligence.

Isn't that kind of what Pavlov proved with his dog? It happens to people all the time too. We are easily conditioned (in the aggregate) to give desired results.