item 39670530

protomikron | 2 years ago

Although this is nice, the problems with the GIL are often blown out of proportion: people claim you can't do efficient compute-bound parallelism in Python, which was never the case, as the `multiprocessing` module works just fine.
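A minimal sketch of that claim, using only the stdlib `multiprocessing.Pool` (the `count_primes` task here is just an illustrative CPU-bound stand-in, not anything from the thread):

```python
from multiprocessing import Pool

def count_primes(n):
    # Naive primality counting up to n -- deliberately CPU-bound.
    count = 0
    for candidate in range(2, n):
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Each input runs in its own process, so the GIL is not a bottleneck.
        results = pool.map(count_primes, [10_000, 20_000, 30_000, 40_000])
    print(results)
```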


ynik | 2 years ago

multiprocessing only works fine when you're working on problems that don't require 10+ GB of memory per process. Once you have significant memory usage, you really need a way to share that memory across multiple CPU cores. For non-trivial data structures partly implemented in C++ (as an optimization, because pure Python would be too slow), that means messing with allocators and shared memory. Such GIL workarounds have easily cost our company several man-years of engineering time, and we still have a bunch of embarrassingly parallel work that we cannot parallelize, due to the GIL and to not yet supporting shared-memory allocation for it.
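For flat buffers, the stdlib does offer a zero-copy escape hatch in `multiprocessing.shared_memory` (Python 3.8+); the comment's point is that non-trivial C++-backed structures don't fit this model. A sketch, with the `worker` name and 100 MB size purely illustrative:

```python
from multiprocessing import Process, shared_memory

def worker(name):
    # Attach to the existing block by name -- the payload is not copied.
    shm = shared_memory.SharedMemory(name=name)
    try:
        shm.buf[0] = 42  # mutate in place; the parent sees this immediately
    finally:
        shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=100 * 1024 * 1024)
    try:
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        print(shm.buf[0])  # written by the child without any pickling
    finally:
        shm.close()
        shm.unlink()
```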

Once the Python ecosystem supports either subinterpreters or nogil, we'll happily migrate to those and get rid of our hacky interprocess code.

Subinterpreters with independent GILs, released with 3.12, theoretically solve our problems but are not yet practically usable, as none of Cython/pybind11/nanobind support them yet. In comparison, nogil feels like it will be easier to support.

ebiester | 2 years ago

And I guess what I don't understand is why people choose Python for these use cases. I am not in the "Rustify" everything camp, but Go + C, Java + JNI, Rust, and C++ all seem like more suitable solutions.

pillusmany | 2 years ago

"Ray" can share Python objects' memory between processes. It's also much easier to use than multiprocessing.

jononor | 2 years ago

I think that 90% or maybe even 99% of cases have under 1 GB of memory per process? At least that has been the case for me for the last 15 years.

Of course, getting threads to be actually useful for concurrency (GIL removed) adds another very useful tool to the performance toolkit, so that is great.

vita7777777 | 2 years ago

On the other hand, this particular argument also gets overused. Not all compute-bound parallel workloads are easily solved by dropping into multiprocessing. When you need to share non-trivial data structures between the processes, you may quickly run into un/marshalling issues and inefficiency.

kroolik | 2 years ago

Managing processes is more annoying than threads, though. Including data passing and so forth.

pillusmany | 2 years ago

The "ray" library makes running Python code on multiple cores and clusters very easy.

liuliu | 2 years ago

`multiprocessing` works fine for serving HTTP requests or for some other subset of embarrassingly parallel problems.

skrause | 2 years ago

> `multiprocessing` works fine for serving HTTP requests

Not if you use Windows; then it's a mess. I have a suspicion that people who say multiprocessing works just fine have never had to seriously use Python on Windows.
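Part of why Windows is messier: there `multiprocessing` can only use the "spawn" start method, which launches a fresh interpreter and re-imports your module in every child, so workers must be top-level, picklable, and guarded by `if __name__ == "__main__"`. A sketch (forcing "spawn" on any platform to reproduce the Windows constraints):

```python
import multiprocessing

def work(x):
    # Must live at module top level so spawned children can re-import it.
    return x * x

if __name__ == "__main__":
    # Without this guard, a spawned child re-running the module would try
    # to create its own pool -- the classic Windows multiprocessing trap.
    multiprocessing.set_start_method("spawn", force=True)
    with multiprocessing.Pool(2) as pool:
        print(pool.map(work, [1, 2, 3]))
```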

jcranmer | 2 years ago

> as the `multiprocessing` module works just fine.

Something that tripped me up when I last used `multiprocessing` was that communication between the processes requires marshaling all the data into a binary format to be unmarshaled on the other side; if you're dealing with hundreds of MB of data or more, that can be a significant expense.
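That cost is easy to see in isolation, since `multiprocessing` queues and pipes use `pickle` internally. A rough sketch with an arbitrary ~100 MB stand-in payload (real workloads of nested Python objects pickle even more slowly than a flat bytes blob):

```python
import pickle
import time

payload = b"x" * (100 * 1024 * 1024)  # ~100 MB, stand-in for real data

start = time.perf_counter()
blob = pickle.dumps(payload)      # what the sending process does
restored = pickle.loads(blob)     # what the receiving process does
elapsed = time.perf_counter() - start

assert restored == payload
print(f"pickle round-trip of {len(payload) / 1e6:.0f} MB: {elapsed:.2f}s")
```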