top | item 43322691


indigovole | 11 months ago

It's a really interesting question.

I'm sure you could go back 40 years earlier and find programmers complaining about using FORTRAN and COBOL compilers instead of writing the assembly by hand.

I think that the assembler->compiler transition is a better metaphor for the brain->brain+AI transition than Visual Studio's old-hat autocomplete etc.

After working with Cursor for a couple of hours, I had a bunch of code that was working according to my tests, but when I integrated it, I found that Claude had inferred a completely different protocol for interacting with the data structure than the rest of my code was using. Yeah, write better tests... but I then found that I did not really understand most of the code Claude had written, even though I'd approved every change on a granular level. Worked manually through a solid hour of wtf before I figured out the root cause, then Clauded my way through the fix.

I can picture an assembly developer having a similar experience trying to figure out why the compiler generated _this_ instead of _that_ in the decade where every byte of memory mattered.

Having lived through the dumb editor->IDE transition, though, I _never_ had anything like that experience of not understanding what I'd done in hours 1 and 2 at the very beginning of hour 3.



yoyohello13|11 months ago

This feels very similar to me as the "Tutorial Hell" effect. Where I can watch videos/read books, and fully feel like I understand everything. However, when hand touches keyboard I realize I didn't really retain any of it. I think that's something that makes AI code gen so dangerous. Even if you think you understand and can troubleshoot the output. Is your perception accurate?

golergka|11 months ago

> Where I can watch videos/read books, and fully feel like I understand everything. However, when hand touches keyboard I realize I didn't really retain any of it.

Always type everything out from a tutorial when you follow it. Don't even copy and paste; literally type it out. And make small adjustments here and there, according to your personal taste and experience.

vunderba|11 months ago

I've called this out before with LLMs: when the act of development becomes passive (versus active), there is a significant risk of not being fully cognizant of the code.

Even copy pasta from Stack Overflow would require more active effort around grabbing exactly what you need, replacing variable names, etc. to integrate the solution into your project.

irishloop|11 months ago

Yeah I never really learn something until I actually hack away at it, and even then I need to really understand things on a granular level.

cassepipe|11 months ago

I have done that too much. Now when I'm learning, whenever I read a solution I make sure I can implement it myself; otherwise I don't consider it learned. I apply the same rule to LLM code.

jgrahamc|11 months ago

I was programming 40 years ago and was very happy to be able to use "high-level languages" and not write everything in assembly. The high-level languages enabled expressiveness that was hard with lower levels.

scarface_74|11 months ago

In 1985, on my 1 MHz Apple //e, I still had to use assembly any time I needed any level of performance, or when BASIC didn't expose the functionality I needed. Mostly around double hi-res graphics and sound.

the__alchemist|11 months ago

This is a tangent, but perhaps relevant:

I think sending your LLM all relevant data structures (structs, enums, function signatures, etc.) is mandatory for any code-related query. It avoids problems like this; in many cases it seems required to get results that integrate cleanly.
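One way to do this systematically is to scrape the declarations out of your source files before building the prompt. A hypothetical sketch (the regex and function names are my own illustration, not a real tool):

```python
import re
from pathlib import Path

# Match declaration lines (structs, enums, function signatures) across a few
# common languages. Illustrative only; a real tool would use a proper parser.
DECL_RE = re.compile(
    r"^\s*(?:pub\s+)?(?:struct|enum|fn|def|class)\b.*",
    re.MULTILINE,
)

def gather_context(paths):
    """Collect declaration lines from each file, grouped under a file header,
    ready to paste into an LLM prompt as shared context."""
    chunks = []
    for p in paths:
        text = Path(p).read_text()
        decls = DECL_RE.findall(text)
        if decls:
            chunks.append(f"// {p}\n" + "\n".join(decls))
    return "\n\n".join(chunks)
```

Prepending the output of `gather_context` to each query gives the model the same type definitions the rest of your code is written against, instead of letting it invent its own.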

mystified5016|11 months ago

Embedded programming is still like this. Most people just don't inspect the assembly produced by their compiler. Unless you're working on an extremely mainstream chip with a bleeding edge compiler, your assembly is going to be absolutely full of complete nonsense.

For instance, if you aren't aware, AVR and most other uCs have special registers and instructions for pointers. Say you put a pointer to an array in Z. You can load the value at Z and increment or decrement the pointer as a single instruction in a single cycle.

GCC triples the cost of this operation with some extremely naive implementations.

Instead of doing `ld Z+`, GCC gives you:

```
inc Z
ld Z
dec Z
```

Among other similar annoyances. You can carefully massage the C++ code to get better assembly, but that can take many hours of crazy-making debugging. Sometimes it's best to just write the damn assembly by hand.

In this same project, I had to implement Morton ordering on a 3D bit field (don't ask). The C implementation was well over 200 instructions but by utilizing CPU features GCC doesn't know about, my optimized assembly is under 30 instructions.
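The 3D bit-interleave itself is a standard technique. A rough sketch of the portable version (the kind of code the compiler sees before any hand-tuned assembly) might look like this; the magic constants are the usual ones for spreading 10-bit coordinates:

```python
def spread3(x: int) -> int:
    """Spread the low 10 bits of x so each lands every 3rd bit position."""
    x &= 0x3FF
    x = (x | (x << 16)) & 0x030000FF
    x = (x | (x << 8))  & 0x0300F00F
    x = (x | (x << 4))  & 0x030C30C3
    x = (x | (x << 2))  & 0x09249249
    return x

def morton3(x: int, y: int, z: int) -> int:
    """Interleave three 10-bit coordinates into one 30-bit Morton code."""
    return spread3(x) | (spread3(y) << 1) | (spread3(z) << 2)
```

Each call to `spread3` is a chain of shift-and-mask steps, which is exactly the kind of sequence that balloons on an 8-bit micro where every 32-bit operation is synthesized from byte operations, hence the payoff from hand-written assembly that exploits the chip's actual registers.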

Modern sky-high abstracted languages are the source of brain rot, not compilers or IDEs in general. Most programmers are completely and utterly detached from the system they're programming. I can't see how one could ever make any meaningful progress or optimization without any understanding of what the CPU actually does.

And this is why I like embedded. It's very firmly grounded in physical reality. My code is only slightly abstracted away from the machine itself. If I can understand the code, I understand the machine.

lukev|11 months ago

And this is appropriate for your domain and the jobs you work on.

If your job was to build websites, this would drive you insane.

I think I'm coming around to a similar position on AI dev tools: it just matters what you're trying to do. If it's a well known problem that's been done before, by all means. Claude Code is the new Ruby on Rails.

But if I need to do some big boy engineering to actually solve new problems, it's time to break out Emacs and get to work.

iuvcaw|11 months ago

The vast majority of time spent building software has little to do with optimization. Sky-high abstracted brain rot languages are useful precisely because usually you don’t need to worry about the type of details that you would if you were optimizing performance

And then if you are optimizing for performance, you can make an incredible amount of progress just fixing the crappy Java etc code before you need to drop down a layer of abstraction

Even hedge funds, which make money executing trades fractions of milliseconds quicker than others, use higher level languages and fix performance issues within those languages if needed

larve|11 months ago

As a long-time embedded programmer, I don't understand this. Even 20 years ago, there was no way I really understood the machine, despite writing assembly and looking at compiler output.

10 years ago, running an ARM core at 40 MHz, I barely had the need to inspect my compiler's assembly. I could still roughly read things when I needed to (since embedded compilers tend to have bugs more regularly), but there's no way I could write assembly anymore. I had no qualms at the time about using a massively inefficient library like Arduino to try things out. If it works and the timing is correct, it works.

These days where I don't do embedded for work, I have no qualms writing my embedded projects in micropython. I want to build things, not micro optimize assembly.

phyllistine|11 months ago

If you're active on social media (Twitter), you will still see people, like the FFmpeg account, bashing higher-level languages (C) and praising hand-written assembly.