top | item 28788753


mpdehaan2 | 4 years ago

I am probably the most commercially successful user of the multiprocessing module :)

It's basically fine anywhere you need a function call that you can dispatch out to like 50 or 500 workers on a queue and then do something after that returns, but any shared memory or IPC between the workers is up to you.
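A minimal sketch of that pattern, using `multiprocessing.Pool` (the `work` function here is a hypothetical stand-in for whatever you'd dispatch):

```python
from multiprocessing import Pool

def work(n):
    # stand-in for a CPU-bound task; replace with your own function
    return n * n

if __name__ == "__main__":
    # fan the calls out across a pool of workers, then do
    # something with the results once they come back
    with Pool(processes=4) as pool:
        results = pool.map(work, range(10))
    print(results)  # [0, 1, 4, 9, ...]
```

Note the workers get copies of their arguments; as the comment says, any shared state between them is on you.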

Python is also fine for webserving because most web servers pre-fork workers or whatever, so this doesn't come into play there either.

It's harder if you want to do something different, where you want threaded workflows with synchronized/protected-style constructs that folks might be familiar with from, say, Java.

Firing up multiprocessing (forking) has some cost from bringing up the interpreters, so it's not something you want to start and tear down repeatedly; it's better if you can start things once and leave them running. Once it's up, it is pretty fast.
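One way to follow that advice is to create the pool once and keep reusing the same workers for multiple rounds of work, rather than spinning up a fresh pool per batch (a sketch; `task` is a placeholder):

```python
from multiprocessing import Pool

def task(x):
    # placeholder workload
    return x + 1

if __name__ == "__main__":
    # pay the fork/startup cost once...
    pool = Pool(processes=4)
    # ...then reuse the same long-lived workers for many batches
    for batch in ([1, 2, 3], [10, 20, 30]):
        print(pool.map(task, batch))
    pool.close()
    pool.join()
```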

I guess mainly it changes the style of your program too much - it's basically just glue around forks.



zohch | 4 years ago

> any shared memory or IPC between the workers is up to you.

There is this:

- https://docs.python.org/3/library/multiprocessing.shared_mem...

- https://docs.python.org/3/library/queue.html

- https://docs.python.org/3.8/library/multiprocessing.html#mul...

- https://docs.python.org/3.8/library/multiprocessing.html#mul...
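For the first of those, `multiprocessing.shared_memory` (Python 3.8+) lets a child process attach to a named block and modify it in place; a minimal sketch:

```python
from multiprocessing import Process, shared_memory

def worker(name):
    # attach to the existing block by name and write into it
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=4)
    p = Process(target=worker, args=(shm.name,))
    p.start()
    p.join()
    print(shm.buf[0])  # 42 - the child's write is visible here
    shm.close()
    shm.unlink()  # free the block when done
```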

> It's harder if you want to do something different where you want threaded workflows with synchronized/protected like constructs that folks might be familiar with from say Java.

There is this:

- https://docs.python.org/3/library/threading.html - which you can use as a context manager (i.e. `with lock`)
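The `with lock` usage looks like this; the context manager acquires and releases the lock, much like a synchronized block in Java (a small sketch):

```python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(100_000):
        # acquire on entry, release on exit, even on exception
        with lock:
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 - no lost updates
```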