I read the paper, and they make a lot of good points about fork's warts.
But I really wanted some explanation of why Windows process startup seems to be so heavyweight. Why does anything that spawns lots of little independent processes take so bloody long on Windows?
I'm not saying "lots of processes on Windows is slow, lots of processes on Linux is fast, Windows uses CreateProcess, Linux uses fork, CreateProcess is an alternative to fork/exec, therefore fork/exec is better than any alternative." I can imagine all kinds of reasons for the observed behavior, few of which would prove that fork is a good model. But I still want to know what's going on.
I'm a bit rusty on this but from memory the overhead is by and large specific to the Win32 environment. Creating a "raw" process is cheap and fast (as you'd reasonably expect), but there's a lot of additional initialisation that needs to occur for a "fully-fledged" Win32 process before it can start executing.
Beyond the raw Process and Thread kernel objects, which are represented by EPROCESS + KPROCESS and ETHREAD + KTHREAD structures in kernel address space, a Win32 process also needs to have:
- A PEB (Process Environment Block) structure in its user address space
- An associated CSR_PROCESS structure maintained by Csrss (Win32 subsystem user-mode)
- An associated W32PROCESS structure for Win32k (Win32 subsystem kernel-mode)
I'm pretty sure these days the W32PROCESS structure only gets created on demand with the first creation of a GDI or USER object, so presumably CLI apps don't have to pay that price. Either way, the latter three structures are non-trivial: they're complicated, and I assume setting them up involves a context switch (or several), at least for the Csrss component. At least some steps also involve manipulating global data structures which block other process creation/destruction (Csrss steps only?).
I expect all this Win32-specific stuff largely doesn't apply to e.g. the Linux subsystem, and so creating processes there should be much faster. The key takeaway is that it's the Win32 machinery that contributes the bulk of the overhead, not the fundamental process or thread primitives themselves.
EDIT: If you want to learn more, Mark Russinovich's Windows Internals has a whole chapter on process creation which I'm sure explains all this.
I used to work on a cross-platform project, and spent several weeks trying to figure out why our application ran significantly faster on Linux than Windows. One major culprit was process creation (another was file creation). I never really uncovered the true reason, but I suspect it had to do with the large number of DLLs that Windows would automatically link in if you weren't very careful. Linux, of course, can also load shared objects, but in my experience they are smaller and lighter weight.
This probably isn't the technical explanation you're looking for, but, in general, processes on Windows and processes on Unix aren't the same--or, at least, they're not meant to be used the same way. Creating lots of small processes on Windows has long been discouraged and considered poor design, whereas the opposite is true on Unix.
One could probably argue that processes on Windows need to be lighter-weight now that sandboxing is a common security practice. These days, programs like web browsers opt to create a large number of processes both for security and stability purposes. In much the same way that POSIX should deprecate the fork model, Windows should provide lighter-weight processes.
CreateProcess requires an application to initialize from scratch. When you fork, you cheaply inherit the initialized state of the whole application image. Only a few pages that are mutated have to be subject to copy-on-write. Even that copy-on-write is cheaper than calculating the contents of those pages from scratch.
If I had to guess, I'd point to DLLs. The minimal Windows process loads probably half a dozen, plus the entry points are called in a serialized manner.
Many frameworks are backed by XPC services, where the parent process has a socket-like connection to a backend server. After forking, the child would have no valid connection to the server. The fork() function establishes a new connection in the child for libSystem, to allow Unix programs to port easily to macOS, but other services' connections are not re-established. This makes fork on macOS (i) slow, and (ii) unsafe for code that touches virtually any of Apple's APIs.
fork() is generally unsafe for that reason, and OS X is only special in this regard in that it has more of these hidden C library handles that can blow up on the child-side of fork(). vfork()+exec()-or-_exit() is much safer.
Fork() is now basically the root of a looong list of special cases in so many aspects of programming. Things get even worse when you use a language with a built-in runtime, such as Golang, for which multi-threaded programming is the default behaviour. If fork() can't even handle multiple threads, what is the real point of having it when an 8-core, 16-thread AMD processor is about $150?
> If fork() can't even handle multiple threads, what is the real point of having it when an 8-core, 16-thread AMD processor ...
These threads and those threads are not the same. A 16-thread SMT processor will happily chew on 16 different programs, processes, or whatever the load of the moment is; e.g. if you use Python's multiprocessing you can create 16 processes and they'll execute in parallel.
fork() can handle multiple threads, but you have to be attentive when cleaning up, etc. - quite often, code using fork() will get confused when you spawn threads, and code using threads will get confused when you fork()
Not even just the semantics, the performance is awful. Even when the fork is virtual (as any modern fork is) and there's no memory copying because it's COW, all the kernel page tables still need to be copied and for a multi-GB process that's nontrivial. That's why any sane large service that needs to fork anything will early on start up a slave subprocess whose only job is to fork quickly when the master process needs it.
Between the fork() and the exec(), the child typically has to:
* redirect stdin, stdout, and stderr
* open files that might be needed and close files that aren't
* change process limits
* drop privileges
* change the root directory
* change namespaces
And there are a few other things I am probably forgetting.
fork() is also used to daemonize and for privilege separation, two tasks where posix_spawn() cannot be used. I suppose daemonization can be seen as something of the past, but privilege separation is not. On Linux, privileges are attached to a thread, so it should be possible to spawn a new thread instead of a new process. However, a privileged thread sharing the same address space as an unprivileged one doesn't seem a good idea.
The paper also mentions the use case of multiprocess servers, which rely heavily on fork(), but dismisses it on the grounds that it could be implemented with threads. With threads, though, a crash in a worker brings down the whole application, whereas a crashed worker process can simply be restarted.
A worked example of removing fork() from an actual program would help. For example, how is nginx implemented on Windows?
Can anybody elucidate why fork() is still used in Chromium or Node.js? They are not traditional forking Unix servers of long standing (unlike Apache or the databases mentioned in the paper). I would expect them to implement some of the alternatives and keep fork() only as a fallback in the code (i.e. behind a cascade of #ifdefs) if no other API is available. So I wonder where the fork() bottlenecks really appear in everyday life.
Chrome on Windows uses CreateProcess, and Windows came first, so Chrome is mostly architected around an approach that would fit posix_spawn better. However, fork has some benefits that I went into here:
For me it was poll(), due to its simple and intuitive API. Also, it's much faster than select() when you have a large number of file descriptors being monitored.
While fork() might be sub-optimal for launching different programs (fork() + exec() vs. posix_spawn()), it's absolutely essential in several types of common systems that don't use it to launch different programs.
Fork-requiring program class 1:
The biggest examples of where fork() is needed are webservers and other long-running programs with significant unchanging memory overhead and/or startup time.
Many large applications written in a language or framework that prefers the single-process/single-thread model for executing requests (e.g. Python/gunicorn, Perl, a lot of Ruby, NodeJS with ‘cluster’ for multicore, etc.) are basically dependent on fork(). Such applications often have a huge amount of memory required at startup (due to loading libraries and initializing frameworks/constant state). Creating workers that can execute requests in parallel but don’t require any additional memory overhead (just what they consume per request) is essential for them. fork()ing without exec()ing a new program facilitates this memory sharing; everything is copy-on-write, and most big webapps don’t need to write most of the startup-initialized memory they have, though they may need to read it.
Additionally, starting up such programs can take a long time due to costly initialization (seconds or minutes in the worst cases); using fork() allows them to quickly replace failed or aged-out subprocesses without having to pay that overhead (which also typically pegs a CPU core) to change their parallelism. “Quickly” might not be quick enough if a program needs to continually launch new subprocesses, but for periodically forking (or just forking-at-startup) long-running servers with a big footprint, it’s far better than re-initializing the whole runtime. For better or worse, we’ve come far enough from old-school process-per-request CGI that it is no longer feasible in most production deployments.
Anticipated rebuttals:
Q: Wouldn't it be nice if everyone wrote apps small enough that startup time was minimized and memory footprint was low?
A: Sure, but they won’t.
Q: People should just write their big, long-running services in a framework that starts fast, has low memory requirements, and uses threads instead of fork()s.
A: See previous answer. Also see zzzcpan’s response.
Q: Can you access some of those benefits with careful use of shared memory?
A: Yes, but it’s much harder to do than it is to use fork() in most cases (caveat Windows, but it’s still hard).
Q: Do tools exist in single-proc/single-thread forking frameworks/languages which switch from forking to hybrid async/threaded paradigms (like gevent) instead?
A: Yes, but they’re not nearly as mature, capable, or useful (especially when you need to utilize multiple cores).
Fork-requiring program class 2:
Programs which fork infrequently in order to parallelize uncommon tasks over shared memory. Redis does this to great effect; it doesn’t exec(), it just forks off a child process which keeps the memory image at the time of fork from the parent, and writes most of that memory state to disk so that the parent can keep handling requests while the child snapshots.
Python’s multiprocessing excels at these kinds of cases as well. If you’re launching and destroying multiprocessing pools multiple times a second, then sure, you’re holding it wrong, but many people get huge wins from using multiprocessing to do parallel operations on big data sets that were present in memory at the time multiprocessing fork()ed off processes. While this isn’t cross-platform, it can be a really massive performance advantage: no need to serialize data and pass it to a multiprocessing child (this is what apply_async does under the covers) if the data is already accessible in memory when the child starts. Node's 'cluster' module will do this too, if you ask nicely. Many other languages and frameworks support similar patterns: the common thread is making fork()ing parallelism "easy enough" with the option of spending a little extra effort to make it really really cheap to get pre-fork memory state into children for processing. Oh, and you basically don't have to worry about corrupting anyone else's in-memory state if you do this (not so with threads).
Anticipated Rebuttals:
Q: $language provides a really accessible way to use true threads that isn’t nearly as tricky as e.g. multiprocessing or knowing all the gotchas (e.g. accidental file descriptor sharing between non-fork-safe libraries) of fork(); why not use that?
A: Many people still prefer languages with primarily-forking parallelism[1] constructs for reasons besides their fork-based concurrency capabilities--nobody’s claiming multiprocessing beats goroutines for API friendliness--so fork() remains useful in much more than a legacy capacity.
Q: Why not use $tool which does this via threads or why not bind $threaded_language to $scripting_language and use threads on the other side of the FFI boundary?
A: People won’t switch. They won’t switch because it’s hard (don't tell me threaded Rust is as easy to pick up as multiprocessing--Rust has a lot of advantages in this space, but that ain't one of them) and because there’s a positive benefit to staying within a given platform, even if some infrequent tasks (hopefully your Python doesn’t invoke multiprocessing too much) are a bit more cumbersome than usual. Also, “Friendly, easy-to-use concurrency with threads” is often a very false promise. There’s a reason Antirez is resistant to threading.
--------------
TL;DR perhaps using fork() and exec() for launching new programs needs to stop. But fork() itself is absolutely essential for common real-world use cases.
[1] References to parallelism via fork() above assume you have more than one core to schedule processes onto. Otherwise it’s not that parallel.
EDITs: grammar. There will be several because essay. I won't change the substance.
There is one case where fork() is fantastic: as a way to dump a core of a running process while leaving the process running -- just fork() and abort()! But even this case should be handled by having something like gcore(1).
Another common use of fork() for things other than exec()ing is multi-process services where all processes keep running the same program. Arranging to spawn or vfork-then-exec self and have the child realize it's a worker and not a (re)starter is more work, because a bunch of state needs to be passed to the child somehow (via an internal interface), and that feels hackish... This case also doesn't suffer much from fork()'s badness: you fork() early and have little or no state in the parent that could have fork-unsafety issues. But it's worth switching this use-case to spawn or vfork-then-exec just so we have no use cases for fork() left.
> While fork() might be sub-optimal for launching different programs (fork() + exec() vs. posix_spawn())
I don't think it is suboptimal. As the paper acknowledges, its primary use is to set up the environment of the program you are about to exec(). There are four points to be made about that:
1. If you don't need to set up the environment it imposes almost no coding overhead. It reduces to "if (!(pid = fork())) exec(...)". That's hardly a huge imposition.
2. It doesn't seem to impose much runtime overhead either. If it did, Linux and the BSDs would have acquired a spawn() syscall ages ago. As it is, they all implement posix_spawn() using vfork() / exec(). Given we are talking about a 30-year history here, any claim that getting rid of fork() would give a noticeable performance boost should not be taken seriously without evidence.
3. If you do need to set up the environment then yes, there are traps with threads and other things. As the paper says, it's terrible - but to paraphrase Churchill, the one thing in its favour is that it's better than all the other ways of doing the same thing. They actually acknowledge that how to replace the flexibility allowed by fork() is an open research question. "We think it's horrible, but we don't have an alternative" isn't a convincing argument.
4. For all its faults, fork() has one outstanding attribute - it's conceptually drop-dead simple: "create an exact copy of the process, the sole difference being that getpid() returns a different value". That translates to bugger-all code needed to implement it, few bugs, small man pages and a simple interface. A replacement providing the same flexibility will be some hideously complex thing that tries to implement all the use cases people used fork() for. It will be big and hard to learn, hard to use correctly, take reams of code, and still won't do everything fork() allowed you to do. We will be complaining about it for decades to come.
I stopped reading the paper when they claimed O_CLOEXEC was an overhead imposed by fork(). It isn't. The telltale giveaway is that it doesn't take effect on fork() - it happens on exec(), and a spawn() or whatever does exec()'s job. If you remove fork(), things like O_CLOEXEC are your only way to control what environment your child process gets. Therefore one outcome of removing fork() is the reverse of what they claim - you won't get fewer O_CLOEXECs, you will get many, many more of them as programmers clamour for ways to do the things fork() allowed them to do.
It's hard to take them seriously when they imply that the mess that threads are is somehow acceptable and necessary, but the nicer, less error-prone and simpler fork isn't. Threads are a nasty hack and a liability for the modern programmer. And systems researchers really should acknowledge that threads' continued existence as first-class OS primitives is holding back systems research much more than fork. I guess they are looking to spread FUD and justify the mess that Windows got itself into, not doing actual research.
While I am sure that this is wise criticism, it might also be concluded that Windows itself contains no small amount of architectural decisions that limit performance.
Fork is quite excellent, except in cases when the intent is to run a different program or when threads are involved (threads are basically an incompatible, competing model of concurrency).
The use of fork as a concurrency mechanism (creating a new thread of control that executes in a copy of the address space) is very good and useful.
In the POSIX shell language, the subshell syntax (command1; command2; ...) is easily implemented using fork. This is useful: all destructive manipulations in the subshell like assignments to variables or changing the current directory do not affect the parent.
This essentially simulates continuations (in a way). (If the parent process does nothing but wait for the child to finish, fork can be used to perform speculative execution, similar to creating a continuation and immediately invoking it).
Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.
The paper agrees with you that the fork model had a reason to exist and that it is perfect for shells.
They also point out that on modern hardware you often want to write multithreaded, multiprocess applications.
Their main criticism of fork is that it does not compose at any level of the OS (as it cannot be implemented on top of a different primitive).
I understand that a lot of people here dislike Microsoft for good reason (not only historical), but drawbacks in fork() are well known and recognized, here they point out that it is also hard-to-impossible to implement as a compatibility layer if the kernel does not support fork.
Also:
> Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.
Do you have any reason to insult Microsoft researchers? They cite plenty of other researchers in this paper who appear to agree with them. This type of comment does not appear constructive to me.
I realize that "compating" is a misspelling, but I prefer to read it as a portmanteau of "compatible" and "competing" and think it's quite an excellent word for that difficult concept except that it errs slightly too far on the "competing" side.
https://randomascii.wordpress.com/2018/12/03/a-not-called-fu...
Regardless of this paper, I don't see its use declining significantly any time soon.
http://neugierig.org/software/chromium/notes/2011/08/zygote....
To support a multi-process web browser architecture that Chromium pioneered, you need to spawn processes. See https://chromium.googlesource.com/chromium/src/+/HEAD/docs/l...
In section 7 it suggests "We should therefore strongly discourage the use of fork in new code, and seek to remove it from existing apps."
Is anyone here going to help work on changing those 1304 packages?
I have already over-volunteered for thankless FOSS tasks like this, so I know it won't be me.
> 7. GET THE FORK OUT OF MY OS!
Someone couldn't resist...
Interested to see what this paper has to say.
[+] [-] zbentley|7 years ago|reply
Fork-requiring program class 1:
The biggest example where fork() is needed are webservers/long-running programs with significant unchanging memory overhead and/or startup time.
Many large applications written in a language or framework that prefers the single-process/single-thread model for executing requests (e.g. Python/gunicorn, Perl, a lot of Ruby, NodeJS with ‘cluster’ for multicore, etc.) are basically dependent on fork(). Such applications often have a huge amount of memory required at startup (due to loading libraries and initializing frameworks/constant state). Creating workers that can execute requests in parallel but don’t require any additional memory overhead (just what they consume per request) is essential for them. fork()ing without exec()ing a new program facilitates this memory sharing; everything is copy-on-write, and most big webapps don’t need to write most of the startup-initialized memory they have, though they may need to read it.
Additionally, starting up such programs can take a long time due to costly initialization (seconds or minutes in the worst cases); using fork() allows them to quickly replace failed or aged-out subprocesses without having to pay that overhead (which also typically pegs a CPU core) to change their parallelism. “Quickly” might not be quick enough if a program needs to continually launch new subprocesses, but for periodically forking (or just forking-at-startup) long-running servers with a big footprint, it’s far better than re-initializing the whole runtime. For better or worse, we’ve come far enough from old-school process-per-request CGI that it is no longer feasible in most production deployments.
Anticipated rebuttals:
Q: Wouldn't it be nice if everyone wrote apps small enough that startup time was minimized and memory footprint was low?
A: Sure, but they won’t.
Q: People should just write their big, long-running services in a framework that starts fast, has low memory requirements, and uses threads instead of fork()s.
A: See previous answer. Also see zzzcpan’s response.
Q: Can you access some of those benefits with careful use of shared memory?
A: Yes, but it’s much harder to do than it is to use fork() in most cases (caveat Windows, but it’s still hard).
Q: Do tools exist in single-proc/single-thread forking frameworks/languages which switch from forking to hybrid async/threaded paradigms (like gevent) instead?
A: Yes, but they’re not nearly as mature, capable, or useful (especially when you need to utilize multiple cores).
Fork-requiring program class 2:
Programs which fork infrequently in order to parallelize uncommon tasks over shared memory. Redis does this to great effect; it doesn’t exec(), it just forks off a child process which keeps the memory image at the time of fork from the parent, and writes most of that memory state to disk so that the parent can keep handling requests while the child snapshots.
Python’s multiprocessing excels at these kinds of cases as well. If you’re launching and destroying multiprocessing pools multiple times a second, then sure, you’re holding it wrong, but many people get huge wins from using multiprocessing to do parallel operations on big data sets that were present in memory at the time multiprocessing fork()ed off processes. While this isn’t cross-platform, it can be a really massive performance advantage: no need to serialize data and pass it to a multiprocessing child (this is what apply_async does under the covers) if the data is already accessible in memory when the child starts. Node's 'cluster' module will do this too, if you ask nicely. Many other languages and frameworks support similar patterns: the common thread is making fork()ing parallelism "easy enough" with the option of spending a little extra effort to make it really really cheap to get pre-fork memory state into children for processing. Oh, and you basically don't have to worry about corrupting anyone else's in-memory state if you do this (not so with threads).
Anticipated Rebuttals:
Q: $language provides a really accessible way to use true threads that isn’t nearly as tricky as e.g. multiprocessing or knowing all the gotchas (e.g. accidental file descriptor sharing between non-fork-safe libraries) of fork(); why not use that?
A: Many people still prefer languages with primarily-forking parallelism[1] constructs for reasons besides their fork-based concurrency capabilities--nobody’s claiming multiprocessing beats goroutines for API friendliness--so fork() remains useful in much more than a legacy capacity.
Q: Why not use $tool which does this via threads or why not bind $threaded_language to $scripting_language and use threads on the other side of the FFI boundary?
A: People won’t switch. They won’t switch because it’s hard (don't tell me threaded Rust is as easy to pick up as multiprocessing--Rust has a lot of advantages in this space, but that ain't one of them) and because there’s a positive benefit to staying within a given platform, even if some infrequent tasks (hopefully your Python doesn’t invoke multiprocessing too much) are a bit more cumbersome than usual. Also, “Friendly, easy-to-use concurrency with threads” is often a very false promise. There’s a reason Antirez is resistant to threading.
--------------
TL;DR perhaps using fork() and exec() for launching new programs needs to stop. But fork() itself is absolutely essential for common real-world use cases.
[1] References to parallelism via fork() above assume you have more than one core to schedule processes onto. Otherwise it’s not that parallel.
EDITs: grammar. There will be several because essay. I won't change the substance.
[+] [-] cryptonector|7 years ago|reply
Another common use of fork() for things other than exec()ing is multi-process services where all will keep running te same program. Arranging to spawn or vfork-then-exec self and have the child realize it's a worker and not a (re)starter is more work because a bunch of state needs to be passed to the child somehow (via an internal interface), and that feels hackish... And also this case doesn't suffer much from fork()s badness: you fork() early and have little or no state in the parent that could have fork-unsafety issues. But it's worth switching this use-case to spawn or vfork-then-exec just so we have no use cases for fork() left.
[+] [-] rstuart4133|7 years ago|reply
I don't think it is suboptimal. As the paper acknowledges, its primary use is to set up the environment of the program you are about to exec(). There are four points to be made about that:
1. If you don't need to set up the environment, it imposes almost no coding overhead. It reduces to "if (!(pid = fork())) exec(...)". That's hardly a huge imposition.
2. It doesn't seem to impose much runtime overhead either. If it did, Linux and the BSDs would have acquired a spawn() syscall ages ago. As it is, they all implement posix_spawn() using vfork() / exec(). Given we are talking about a 30-year history here, any claim that getting rid of fork() would give a noticeable performance boost should not be taken seriously without evidence.
3. If you do need to set up the environment, then yes, there are traps with threads and other things. As the paper says, it's terrible - but to paraphrase Churchill, the one thing in its favour is that it's better than all the other ways of doing the same thing. They actually acknowledge that how to replace the flexibility allowed by fork() is an open research question. "We think it's horrible, but we don't have an alternative" isn't a convincing argument.
4. For all its faults, fork() has one outstanding attribute - it's conceptually drop-dead simple: "create an exact copy of the process, the sole difference being that getpid() returns a different value". That translates to bugger-all code needed to implement it, few bugs, small man pages and a simple interface. A replacement providing the same flexibility will be some hideously complex thing that tries to implement all the use cases people used fork() for. It will be big, hard to learn, hard to use correctly, take reams of code, and still won't do everything fork() allowed you to do. We will be complaining about it for decades to come.
I stopped reading the paper when they claimed O_CLOEXEC was an overhead imposed by fork(). It isn't. The telltale giveaway is that it doesn't take effect on fork() - it happens on exec(), and spawn() or whatever replaces fork() still does exec()'s job. If you remove fork(), things like O_CLOEXEC are your only way to control what environment your child process gets. Therefore one outcome of removing fork() is the reverse of what they claim - you won't get fewer O_CLOEXECs, you will get many, many more of them as programmers clamour for ways to do the things fork() allowed them to do.
[+] [-] voldacar|7 years ago|reply
[deleted]
[+] [-] chasil|7 years ago|reply
However, may I point out that Microsoft SQL Server benchmarks have been posted showing Linux outperforming Windows on TPC-H?
https://www.dbbest.com/blog/running-sql-server-on-linux/
While I am sure that this is wise criticism, it might also be concluded that Windows itself contains no small number of architectural decisions that limit performance.
[+] [-] kazinator|7 years ago|reply
The use of fork as a concurrency mechanism (creating a new thread of control that executes in a copy of the address space) is very good and useful.
In the POSIX shell language, the subshell syntax (command1; command2; ...) is easily implemented using fork. This is useful: all destructive manipulations in the subshell like assignments to variables or changing the current directory do not affect the parent.
Check out the fork-based Perl solution to the Amb task in Rosetta code: https://rosettacode.org/wiki/Amb#Using_fork
This essentially simulates continuations (in a way). (If the parent process does nothing but wait for the child to finish, fork can be used to perform speculative execution, similar to creating a continuation and immediately invoking it).
Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.
[+] [-] afiori|7 years ago|reply
They also point out that on modern hardware you often want to write multithreaded, multiprocess applications.
Their main criticism of fork is that it does not compose at any level of the OS (as it cannot be implemented on top of a different primitive).
I understand that a lot of people here dislike Microsoft for good reason (not only historical), but the drawbacks of fork() are well known and recognized; here they point out that it is also hard-to-impossible to implement as a compatibility layer if the kernel does not support fork.
Also:
> Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.
Do you have any reason to insult Microsoft researchers? They cite plenty of other researchers in this paper who appear to agree with them. This type of comment does not seem constructive to me.
[+] [-] debatem1|7 years ago|reply