I experienced deadlocks when using the logging example for multiprocessing, where a QueueHandler is used.
I found that sometimes a multiprocessing process does not terminate after it has put an item on a Queue,
even though the parent process runs a thread that empties the queue and has successfully retrieved the item.
Removing the sleep leads to a very high probability of deadlocking when I then try to join the processes, e.g.:
for w in workers:
    print(f'trying to join on {w.name}, alive={w.is_alive()}, exitcode={w.exitcode}', w.name, w.is_alive(), w.exitcode)
    w.join()
The process is still marked as alive; hitting Ctrl+C gives this traceback:
trying to join on Worker 14, alive=True, exitcode=None Worker 14 True None
^CTraceback (most recent call last):
File "/home/fls/pybug/deadlock.py", line 43, in <module>
w.join()
File "/usr/lib/python3.10/multiprocessing/process.py", line 149, in join
res = self._popen.wait(timeout)
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 43, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
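My understanding (an assumption based on the multiprocessing docs, not on reading the interpreter source) is that a child process that has put data on a Queue will, at exit, implicitly wait for the queue's background feeder thread to flush the buffered data into the pipe before terminating, which is where it can get stuck. The explicit equivalent of what happens at process exit would be:

```python
# Explicit form of what a process does implicitly at exit after putting
# data on a multiprocessing.Queue (as I understand the docs): the queue
# is closed and the feeder thread is joined until buffered data is
# flushed to the underlying pipe.
from multiprocessing import Queue

q = Queue()
q.put("payload")
q.close()        # no more data will be put on this queue
q.join_thread()  # wait for the feeder thread to flush buffered data
```

If that flush never completes, the child never exits and the parent's `w.join()` hangs, matching the traceback above.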
I can reproduce this using Python 3.9.10 and 3.10.4 on Linux:
import time
import threading
from multiprocessing import Process, Queue, current_process

def logger_thread(q: Queue):
    while True:
        record = q.get()
        if record is None:
            break
        print("logrecord: ", record)

def worker(q):
    q.put(current_process().name)
    return

logq = Queue()
lp = threading.Thread(target=logger_thread, args=(logq,), daemon=True)
lp.start()
workers = []
for i in range(15):
    p = Process(target=worker, args=(logq,), name=f"Worker {i+1}")
    workers.append(p)
print("starting workers")
for p in workers:
    p.start()
# no deadlock when added:
# time.sleep(.1)
print("waiting a bit")
time.sleep(1)
print("trying to join workers")
for w in workers:
    print(f'trying to join on {w.name}, alive={w.is_alive()}, exitcode={w.exitcode}', w.name, w.is_alive(), w.exitcode)
    w.join()
    print(f'joined on {w.name}', w.name)
logq.put(None)
lp.join()
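For what it's worth, one workaround that avoids the hang in my testing is to use multiprocessing.SimpleQueue instead of Queue. This is only a sketch of the idea, not a fix for the underlying bug: SimpleQueue writes to the pipe synchronously under a lock rather than through a background feeder thread, so a worker's exit has no feeder-thread flush to wait on.

```python
# Workaround sketch (my assumption, not a confirmed fix): SimpleQueue has
# no feeder thread, so the child's put() writes to the pipe synchronously
# and process exit cannot block on flushing buffered queue data.
import threading
from multiprocessing import Process, SimpleQueue, current_process

def logger_thread(q: SimpleQueue):
    while True:
        record = q.get()
        if record is None:
            break
        print("logrecord: ", record)

def worker(q):
    q.put(current_process().name)

if __name__ == "__main__":
    logq = SimpleQueue()
    lp = threading.Thread(target=logger_thread, args=(logq,), daemon=True)
    lp.start()
    workers = [Process(target=worker, args=(logq,), name=f"Worker {i+1}")
               for i in range(15)]
    for p in workers:
        p.start()
    for w in workers:
        w.join()  # joins promptly; no feeder thread to flush
    logq.put(None)
    lp.join()
```

The trade-off is that SimpleQueue blocks the producer while the pipe is contended and lacks Queue's extras (maxsize, timeouts), so it may not be a drop-in replacement for the logging cookbook pattern.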