> The ELF file contains a dynamic section which tells the kernel which shared libraries to load, and another section which tells the kernel to dynamically “relocate” pointers to those functions, so everything checks out.
This is not how dynamic linking works on GNU/Linux. The kernel processes the program headers for the main program (mapping the PT_LOAD segments, without relocating them) and notices the PT_INTERP program interpreter (the path to the dynamic linker) among the program headers. The kernel then loads the dynamic linker in much the same way as the main program (again without relocation) and transfers control to its entry point. It's up to the dynamic linker to self-relocate, load the referenced shared objects (this time using plain mmap and mprotect; the kernel ELF loader is not used for that), relocate them and the main program, and then transfer control to the main program.
The scheme is not that dissimilar to the #! shebang lines, with the dynamic linker taking the role of the script interpreter, except that ELF is a binary format.
Yeah it turns out the kernel doesn't care about sections at all. It only ever cares about the PT_LOAD segments in the program header table, which is essentially a table of arguments for the mmap system call. Sections are just dynamic linker metadata and are never covered by PT_LOAD segments.
This seems to be a common misconception. I too suffered from it once... Tried to embed arbitrary files into ELF files using objcopy. The tool could create new sections with the file contents just fine, but the kernel wouldn't load them into memory. It was really confusing at first.
https://stackoverflow.com/q/77468641
There were no tools for patching the program header table, so I ended up making them! The mold linker even added a feature just to make this patching easy!
https://www.matheusmoreira.com/articles/self-contained-lone-...
You’re right, and I knew this back in February when I wrote most of this post. I must have revised it down incorrectly before posting; will correct. Bit of a facepalm from my side.
It's also possible to pack a whole codebase into "before main()", or to have no main() at all. I was recently experimenting with doing this, as well as with a whole codebase that only uses main() and calls itself over and over. Good fun: https://joshua.hu/packing-codebase-into-single-function-disr...
This is awesome! To anyone interested in learning more about this, I wrote https://cpu.land/ a couple years ago. It doesn't go as in-depth into e.g. memory layout as OP does but does cover multitasking and how the code is loaded in the first place.
> A note on interpreters: If the executable file starts with a shebang (#!), the kernel will use the shebang-specified interpreter to run the program. For example, #!/usr/bin/python3 will run the program using the Python interpreter, #!/bin/bash will run the program using the Bash shell, etc.
This caused me a lot of pain while trying to debug a 3rd-party Java application that was trying to launch an executable script and throwing an IO error: "java.io.IOException: error=2, No such file or directory." I was puzzled because I knew the script was right there (I was using its full path) and it had the executable bit set. It turned out that the shebang in the script was wrong, so the OS was complaining (the actual error from a shell would be "The file specified the interpreter '/foo/bar', which is not an executable command."), but the Java error was completely misleading :|
Note: If you wonder why I didn't see this error by running the script myself: I did, and it ran fine locally. But the application was running on a remote host that had a different path for the interpreter.
Note that this is not a Java-specific problem; it can occur with other programs as well. "No such file or directory" is just the friendly description for ENOENT, which can be returned by a lot of syscalls. I typically just run the program through strace; then you quickly see what the program actually did.
Not exactly the same, but on Windows if you use entirely Win32 calls you can avoid linking any C runtime library. Win32 is below the C standard library on Windows and the C runtime is optional.
I think using syscalls directly is a worse idea than loading shared libraries. New kernel features like ALSA (audio playback), DRM (graphics rendering) and others are exposed through libraries instead of documented syscalls and ioctls. This is better because it allows intercepting and subverting the calls, adds support for features even when the kernel lacks them, makes it easier to port code to other OSes and to support different architectures (32-bit code on a 64-bit kernel), and allows changing the kernel interface without breaking anything. So the Windows-style approach with system libraries is better in every respect.
https://news.ycombinator.com/item?id=45709141
I abandoned it because Linux itself now has a rich set of nolibc headers.
Now I'm working on a whole programming language based around this concept. A freestanding lisp interpreter targeting Linux directly with builtin system call support. The idea is to complete the interpreter and then write the standard library and Linux user space in lisp using the system calls.
It's been an amazing journey. It's incredible how far one can take this.
Windows support is a requirement, and no, WSL2 doesn't count. The C standard library is pretty bad, and it'd be great if not using it was a little easier and more common.
It's been a while since I've touched this stuff, but my recollection is that the ELF interpreter (ld.so, not the kernel) is responsible for everything after mapping the initial ELF's segments.
IIRC execve maps the PT_LOAD segments from the program header table, populates the aux vector on the stack, and jumps straight to the ELF interpreter's entry point. Any linked objects are loaded in userspace by the ELF interpreter. The kernel has no knowledge of the PLT/GOT.
https://lwn.net/Articles/631631/
https://github.com/torvalds/linux/blob/master/fs/binfmt_elf....
Especially relevant for dynamic linkers are the AT_PHDR and AT_BASE auxiliary vector entries, which provide the address of the executable's program header table and the address of the interpreter, respectively.
https://lwn.net/Articles/519085/
The poster was receiving a SIGFPE (floating point exception) on a C program that is simply “int main() { return 0; }”. A fun little mystery to dive into!
As someone who teaches this stuff at university, I see students getting confused every single year by how textbooks draw memory. The problem is mostly visual, not conceptual.
Most diagrams in books and slides use an old hardware-centric convention: they draw higher addresses at the top of the page and lower addresses at the bottom. People sometimes justify this with an analogy like “floors in a building go up,” so address 0x7fffffffe000 is drawn “higher” than 0x400000.
But this is backwards from how humans read almost everything today. When you look at code in VS Code or any other IDE, line 1 is at the top, then line 2 is below it, then 3, 4, etc. Numbers go up as you go down. Your brain learns: “down = bigger index.”
Memory in a real Linux process actually matches the VS Code model much more closely than the textbook diagrams suggest. You can see it yourself with:
cat /proc/$$/maps
(pick any PID instead of $$).
[0x00000000]          lower addresses
...
[0x00620000]          HEAP start
[0x00643000]          HEAP extended ↓ (more allocations => higher addresses)
...
[0x7ffd8c3f7000]      STACK top (<- stack pointer)
   ↑ the stack pointer starts here and moves upward
     (toward lower addresses) when you push
[0x7ffd8c418000]      STACK start
...
[0xffffffffff600000]  higher addresses
The output is printed from low addresses to high addresses. At the top of the output you'll usually see the binary, shared libs, heap, etc. Those all live at lower virtual addresses. Farther down in the output you'll eventually see the stack, which lives at a higher virtual address. In other words: as you scroll down, the addresses get bigger. Exactly like scrolling down in an editor gives you bigger line numbers.
The phrases “the heap grows up” and “the stack grows down” aren't wrong. They're just describing what happens to the numeric addresses: the heap expands toward higher addresses, and the stack moves into lower addresses.
The real problem is how we draw it. We label “up” on the page as “higher address,” which is the opposite of how people read code or even how /proc/<pid>/maps is printed. So students have to mentally flip the diagram before they can even think about what the stack and heap are doing.
If we just drew memory like an editor (low addresses at the top, high addresses further down) it would click instantly. Scroll down, addresses go up, and the stack sits at the bottom. At that point it’s no longer “the stack grows down”: it’s just the stack pointer being decremented, moving to lower addresses (which, in the diagram, means moving upward).
The stack does grow down though, no matter what, in the sense that pushing decrements the stack pointer. You can represent this as "up" in your diagram, but I don't think this makes it any easier conceptually, because by analogy to a simple push/pop on an array, you'd naively expect higher addresses to contain more recent stack contents.
The core of the issue is that the direction of stack growth differs from "usual" memory access patterns, which allocate from lower to higher addresses (consider array access, or how strings are laid out in memory; and little-endian systems are the majority).
But if we're going with visualization options I prefer to visualize it horizontally, with lower addresses on left. This has a natural correspondence with how you access an array or lay out strings in memory.
I think I got stuck in the same rut that I learned address space in whilst writing that diagram. I would tend to agree with you that your model makes much more sense to the student.
Related: in notation, one thing that I used to struggle with is how addresses (e.g. 0xAB_CD) actually have the little-endian byte representation [0xCD, 0xAB]. Wonder if there's a common way to address that?
That's how stacks on my desk grow and how everything grows in reality. I wouldn't number stacked things on my desk from the top, since that constantly changes. You also wouldn't name the first branch of a tree (the plant) the top-most one.
In your example, "the stack grows down" seems to be wrong in the image.
> Yeah, that’s it. Now, 2308 may be slightly bloated because we link against musl instead of glibc, but the point still stands: There’s a lot of stuff going on behind the scenes here.
Slightly bloated is a slight understatement. The same program linked against glibc tops out at 36 symbols in .symtab:
$ readelf -a hello|grep "'.symtab'"
Symbol table '.symtab' contains 36 entries:
More generally, I'm not surprised at the symtab bloat from static linking, given the absolute size increase of the binary.
From the title, I thought this was going to be about the parts of a program that run before the main function is entered. Static objects have to be constructed. Quite a bit of code can run. Order of initialization can be a problem. What happens if you try to do I/O from a static constructor? Does that even work?
This is heavily language-runtime dependent: there's nothing that fundamentally stops you from doing anything during the phase between jumping to the entry point and main().
did you see the relocations for the main binary applied before or after the linker resolves its own symbols? the ordering always feels like black magic when you step through it in a debugger
> Depending on your program, _start may be the only thing between the entrypoint and your main function
I once developed a liblinux project entirely built around this idea.
I wanted to get rid of libc and all of its initialization, complexity and global state. The C library is so complex it has a primitive form of package management built into it:
https://blogs.oracle.com/solaris/post/init-and-fini-processi...
So I made _start functions which did nothing but pass argc, argv, envp and auxv to the actual main function:
https://github.com/matheusmoreira/liblinux/blob/master/start...
https://github.com/matheusmoreira/liblinux/blob/master/start...
You can get surprisingly far with just this, and it's actually possible to understand what's going on. The biggest pain point was the lack of C library utility functions like number/string conversion; I simply wrote my own.
https://github.com/matheusmoreira/liblinux/tree/master/examp...
Linux is the only operating system that lets us do this. In other systems, the C library is part of the kernel interface. Bypassing it like this can and does break things. Go developers once discovered this the hard way.
https://www.matheusmoreira.com/articles/linux-system-calls
The kernel has its own nolibc infrastructure now, no doubt much better than my project.
https://github.com/torvalds/linux/tree/master/tools/include/...
I encourage everyone to use it.
Note also that _start is an arbitrary symbol. The name is not special at all. It's just some linker default. The ELF header contains a pointer to the entry point, not a symbol. Feel free to choose a nice name!