indigovole | 11 months ago
I'm sure you could go back 40 years earlier and find programmers complaining about using FORTRAN and COBOL compilers instead of writing the assembly by hand.
I think that the assembler->compiler transition is a better metaphor for the brain->brain+AI transition than Visual Studio's old-hat autocomplete etc.
After working with Cursor for a couple of hours, I had a bunch of code that was working according to my tests, but when I integrated it, I found that Claude had inferred a completely different protocol for interacting with a data structure than the rest of my code was using. Yeah, write better tests... but I then found that I did not really understand most of the code Claude had written, even though I'd approved every change at a granular level. I worked manually through a solid hour of wtf before I figured out the root cause, then Clauded my way through the fix.
I can picture an assembly developer having a similar experience trying to figure out why the compiler generated _this_ instead of _that_ in the decade where every byte of memory mattered.
Having lived through the dumb editor->IDE transition, though, I _never_ had anything like that experience of not understanding what I'd done in hours 1 and 2 at the very beginning of hour 3.
golergka|11 months ago
Always type everything out from a tutorial when you follow it. Don't even copy and paste; literally type it out. And make small adjustments here and there, according to your personal taste and experience.
vunderba|11 months ago
Even copy-pasta from Stack Overflow requires more active effort: grabbing exactly what you need, replacing variable names, etc., to integrate the solution into your project.
the__alchemist|11 months ago
I think sending your LLM all relevant data structures (structs, enums, function signatures, etc.) is mandatory for any code-related query. It avoids problems like this; in many cases it seems to be required to get results you can actually integrate.
mystified5016|11 months ago
For instance, if you aren't aware, AVR and most other uCs have special registers and instructions for pointers. Say you put a pointer to an array in Z. You can load the value at Z and increment or decrement the pointer as a single instruction in a single cycle.
GCC triples the cost of this operation with some extremely naive implementations.
Instead of emitting `ld Z+`, GCC gives you:

```
inc Z
ld Z
dec Z
```
Among other similar annoyances. You can carefully massage the C++ code to get better assembly, but that can take many hours of crazy-making debugging. Sometimes it's best to just write the damn assembly by hand.
In this same project, I had to implement Morton ordering on a 3D bit field (don't ask). The C implementation was well over 200 instructions but by utilizing CPU features GCC doesn't know about, my optimized assembly is under 30 instructions.
Modern sky-high abstracted languages are the source of brain rot, not compilers or IDEs in general. Most programmers are completely and utterly detached from the system they're programming. I can't see how one could ever make any meaningful progress or optimization without any understanding of what the CPU actually does.
And this is why I like embedded. It's very firmly grounded in physical reality. My code is only slightly abstracted away from the machine itself. If I can understand the code, I understand the machine.
lukev|11 months ago
If your job was to build websites, this would drive you insane.
I think I'm coming around to a similar position on AI dev tools: it just matters what you're trying to do. If it's a well known problem that's been done before, by all means. Claude Code is the new Ruby on Rails.
But if I need to do some big boy engineering to actually solve new problems, it's time to break out Emacs and get to work.
iuvcaw|11 months ago
And then, if you are optimizing for performance, you can make an incredible amount of progress just by fixing the crappy Java (etc.) code before you need to drop down a layer of abstraction.
Even hedge funds, which make money by executing trades fractions of a millisecond quicker than others, use higher-level languages and fix performance issues within those languages when needed.
larve|11 months ago
Ten years ago, running an ARM core at 40 MHz, I barely had the need to inspect my compiler's assembly. I could still roughly read things when I needed to (since embedded compilers tend to have bugs more regularly), but there's no way I could write assembly anymore. I had no qualms at the time about using a massively inefficient library like Arduino to try things out. If it works and the timing is correct, it works.
These days, when I don't do embedded for work, I have no qualms writing my embedded projects in MicroPython. I want to build things, not micro-optimize assembly.