top | item 42679612

shcheklein | 1 year ago

On the other hand, it might become the next level of abstraction.

Machine -> Asm -> C -> Python -> LLM (Human language)

It compiles a human prompt into some intermediate code (in this case Python). Probably the initial version of CPython was not perfect at all, and engineers were also terrified. If we are lucky, this new "compiler" will keep getting better and more efficient. Never perfect, but people will pay the same price they are already paying for not dealing directly with ASM.

sdesol|1 year ago

> Machine -> Asm -> C -> Python -> LLM (Human language)

Something you neglected to mention is that, with every abstraction layer up to Python, everything is predictable and repeatable. With LLMs, we can give the exact same instructions and not be guaranteed the same code.

theptip|1 year ago

I’m not sure why that matters here. Users want code that solves their business need. In general, most don’t care about repeatability if someone else tries to solve the same problem.

The question that matters is: can businesses solve their problems cheaper for the same quality, or at lower quality while beating the previous Pareto-optimal cost/quality frontier.

compumetrika|1 year ago

LLMs use pseudo-random numbers. You can set the seed and get exactly the same output with the same model and input.
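To make the point concrete, here is a toy sketch of what seeding buys you: a softmax sampler driven by an explicitly seeded PRNG. The function name and the three-logit "vocabulary" are made up for illustration, not any real model's API; the idea is only that when the weights, inputs, and seed are all fixed, the pseudo-random draws, and hence the "generation", repeat exactly.

```python
import math
import random

def sample_next_token(logits, rng, temperature=1.0):
    # Softmax over logits at the given temperature, then one draw from rng.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.2]  # stand-in for a model's output distribution
rng_a, rng_b = random.Random(7), random.Random(7)
seq_a = [sample_next_token(logits, rng_a) for _ in range(20)]
seq_b = [sample_next_token(logits, rng_b) for _ in range(20)]
assert seq_a == seq_b  # identical seed -> identical pseudo-random "generation"
```

The caveat is that this guarantee holds for a fixed model and a fixed execution environment; hosted APIs that re-batch requests or change hardware may not reproduce bit-for-bit even with a seed.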

zurn|1 year ago

> > Machine -> Asm -> C -> Python -> LLM (Human language)

> Something that you neglected to mention is, with every abstraction layer up to Python, everything is predictable and repeatable.

As long as you consider C and dragons flying out of your nose predictable.

(Insert similar quip about hardware)

zajio1am|1 year ago

There is no reason to assume that, say, a C compiler generates the same machine code for the same source code. AFAIK, a C compiler that chooses randomly between multiple semantically equivalent instruction sequences is still a valid C compiler.

CamperBob2|1 year ago

> With LLMs, we can give the exact same instructions, and not be guaranteed the same code.

That's something we'll have to give up and get over.

See also: understanding how the underlying code actually works. You don't need to know assembly to use a high-level programming language (although it certainly doesn't hurt), and you won't need to know a high-level programming language to write the functional specs in English that the code generator model uses.

I say bring it on. 50+ years was long enough to keep doing things the same way.

SkyBelow|1 year ago

Even compiling code isn't deterministic, given that different compilers and different items installed on a machine can influence the final resulting code, right? Ideally they shouldn't have any noticeable impact, but in edge cases they might, which is why you compile your code once during a build step and then deploy the same compiled artifact to different environments instead of compiling it per environment.

jsjohnst|1 year ago

> With LLMs, we can give the exact same instructions, and not be guaranteed the same code.

Set the temperature appropriately, and that problem is solved, no?

12345hn6789|1 year ago

assuming you have full control over which compiler you're using for each step ;)

What's to say LLMs won't have a "compiler" interface in the future that will rein in their variance?

omgwtfbyobbq|1 year ago

Aren't some models deterministic with temperature set to 0?
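In principle yes: as temperature approaches 0, the softmax concentrates all probability on the highest logit, so sampling degenerates into a plain argmax. A toy sketch of that limit (the function name is made up, not any vendor's API); note that in practice hosted models at temperature 0 can still vary run to run due to floating-point and batching nondeterminism on GPUs:

```python
def greedy_next_token(logits):
    # Temperature -> 0 puts all softmax mass on the largest logit,
    # so decoding becomes argmax: same logits, same token, every time.
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [0.1, 3.2, -1.0, 3.1]
picks = {greedy_next_token(logits) for _ in range(100)}
assert picks == {1}  # repeated calls always pick the argmax
```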

vages|1 year ago

It may be a “level of abstraction”, but not a good one, because it is imprecise.

When you want to make changes to the code (which is what we spend most of our time on), you’ll have to either (1) modify the prompt and accept the risk of using the new code or (2) modify the original code, which you can’t do unless you know the lower level of abstraction.

Recommended reading: https://ian-cooper.writeas.com/is-ai-a-silver-bullet

MVissers|1 year ago

Yup!

No goal of becoming a programmer, but I like to build programs.

Built a rather complex AI-ecosystem simulator with me as the director and GPT-4, now Claude 3.5, as the programmer.

Would never have been able to do this beforehand.

saurik|1 year ago

I think there is a big difference between an abstraction layer that can improve -- one where you maybe write "code" in prompts and then have a compiler build through real code, allowing that compiler to get better over time -- and an interactive tool that locks bad decisions autocompleted today into both your codebase and your brain, involving you still working at the lower layer but getting low quality "help" in your editor. I am totally pro- compilers and high-level languages, but I think the idea of writing assembly with the help of a partial compiler where you kind of write stuff and then copy/paste the result into your assembly file with some munging to fix issues is dumb.

By all means, though: if someone gets us to the point where the "code" I am checking in is a bunch of English -- for which I will likely need a law degree in addition to an engineering background to not get evil genie with a cursed paw results from it trying to figure out what I must have meant from what I said :/ -- I will think that's pretty cool and will actually be a new layer of abstraction in the same class as compiler... and like, if at that point I don't use it, it will only be because I think it is somehow dangerous to humanity itself (and even then I will admit that it is probably more effective)... but we aren't there yet and "we're on the way there" doesn't count anywhere near as much as people often want it to ;P.