from __future__ import print_function
import multiprocessing

def countdown(count):
    while count > 0:
        print("Count value", count)
        count -= 1
    return

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=countdown, args=(10,))
    p1.start()
    p2 = multiprocessing.Process(target=countdown, args=(20,))
    p2.start()
    p1.join()
    p2.join()
Here, each function is executed in a new process. Since a separate instance of the Python VM runs the code, there is no shared GIL and you get true parallelism across multiple cores.
The Process.start method launches the new process and runs the function passed as the target argument with the given arguments.
The Process.join method waits for the process to finish executing.
The new processes are launched differently depending on the version of Python and the platform on which the code is running, e.g.:
Windows (and macOS since Python 3.8) use spawn to create the new process, while Linux defaults to fork.
You can select the start method explicitly by calling multiprocessing.set_start_method at the beginning of your program (see the sketch below).
The spawn method is slower than forking but avoids some unexpected behaviors.
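A minimal sketch of forcing the spawn start method; the guard under if __name__ == "__main__" matters because spawn re-imports the module in the child process:

import multiprocessing

def work(n):
    print("working on", n)

if __name__ == "__main__":
    # Must be called once, before any Process is created.
    multiprocessing.set_start_method("spawn")
    p = multiprocessing.Process(target=work, args=(42,))
    p.start()
    p.join()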
The POSIX documentation for fork warns:
After a fork in a multithreaded program, the child can safely call only async-signal-safe functions until such time as it calls execve.
Using fork, the new process is launched with the exact same state for all existing mutexes, but only the MainThread is started in the child.
This is unsafe as it can lead to deadlocks, e.g.: you create a lock in the MainThread and pass it to another thread which is supposed to lock it at some point. If the fork occurs while that lock is held, the new process starts with a locked lock that will never be released, because the thread holding it does not exist in the new process (see the sketch below).
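A minimal sketch of that failure mode, assuming a POSIX platform where the fork start method is available; the child inherits the lock in its locked state but not the thread that would release it, so its acquire blocks forever:

import multiprocessing
import threading
import time

lock = threading.Lock()

def hold_lock():
    # A worker thread acquires the lock and keeps it for a while.
    with lock:
        time.sleep(10)

def child():
    # Only the MainThread exists in the forked child, but the lock was
    # copied in its locked state: this acquire never returns.
    lock.acquire()
    print("never reached")

if __name__ == "__main__":
    threading.Thread(target=hold_lock, daemon=True).start()
    time.sleep(0.5)                           # make sure the lock is held

    ctx = multiprocessing.get_context("fork")  # POSIX only
    p = ctx.Process(target=child)
    p.start()
    p.join(timeout=2)
    print("child deadlocked:", p.is_alive())   # -> True
    p.terminate()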
In practice, this kind of behavior should not occur in pure Python, as multiprocessing handles it properly, but if you are interacting with other libraries it can still happen, leading to crashes or hangs of your program (for instance with NumPy/Accelerate on macOS).