
bpo-39104: Fix hanging ProcessPoolExecutor on shutdown nowait with pickling failure #17670

Open · wants to merge 2 commits into master
Conversation

tomMoral (Contributor) commented on Dec 20, 2019

As reported initially by @rad-pat in #6084, the following script causes a deadlock.

from concurrent.futures import ProcessPoolExecutor


class ObjectWithPickleError():
    """Triggers a RuntimeError when sending job to the workers"""

    def __reduce__(self):
        raise RuntimeError()


if __name__ == "__main__":
    e = ProcessPoolExecutor()
    f = e.submit(id, ObjectWithPickleError())
    e.shutdown(wait=False)
    f.result()  # Deadlock on get

This is caused by the main process closing communication channels that the queue_management_thread may still need later. To avoid this, this PR lets the queue_management_thread manage all the closing.

https://bugs.python.org/issue39104
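
For illustration, here is a minimal, hypothetical sketch of the approach described above (ToyExecutor and _Wakeup are invented names, not the real concurrent.futures internals): shutdown() in the main thread only signals the management thread, and that thread closes the wakeup channel itself once it is done with it.

# Hedged sketch, not the actual patch: the thread that reads the wakeup
# channel is the one that closes it, so shutdown(wait=False) in the main
# thread can never close a pipe the management thread still needs.
import threading
import multiprocessing as mp


class _Wakeup:
    """Toy stand-in for the executor's thread-wakeup pipe."""

    def __init__(self):
        self._reader, self._writer = mp.Pipe(duplex=False)

    def wakeup(self):
        self._writer.send_bytes(b"")

    def wait(self):
        self._reader.recv_bytes()

    def close(self):
        self._writer.close()
        self._reader.close()


class ToyExecutor:
    def __init__(self):
        self._wakeup = _Wakeup()
        self._shutting_down = threading.Event()
        self._thread = threading.Thread(target=self._manage)
        self._thread.start()

    def _manage(self):
        # Stand-in for the queue_management_thread loop.
        while True:
            self._wakeup.wait()          # blocks until the pipe is poked
            if self._shutting_down.is_set():
                break
        # The channel is closed here, by the thread that reads from it,
        # never by the main thread during shutdown().
        self._wakeup.close()

    def shutdown(self, wait=True):
        # The main thread only sets the flag and wakes the thread up.
        self._shutting_down.set()
        self._wakeup.wakeup()
        if wait:
            self._thread.join()


if __name__ == "__main__":
    e = ToyExecutor()
    e.shutdown(wait=False)   # no channel is closed from the main thread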


rad-pat left a comment

Just a comment: I had also started working on a solution, but perhaps I didn't grasp the entire problem. Please see rad-pat@188c222, which includes a more basic test without the pickle error.

@@ -87,7 +87,12 @@ def close(self):
         self._reader.close()

     def wakeup(self):
-        self._writer.send_bytes(b"")
+        try:

rad-pat commented on Dec 20, 2019

Maybe a _closed flag would be better than catching the exception? You could then also check the flag in the clear method?
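
For illustration, a rough sketch of what that suggestion could look like (the flag handling is an assumption, not code from this PR):

# Hypothetical sketch of the _closed-flag alternative suggested above; not the
# actual patch.  Once close() has run, wakeup() and clear() become no-ops
# instead of relying on catching the exception raised by a closed pipe.
import multiprocessing as mp


class _ThreadWakeup:
    def __init__(self):
        self._closed = False
        self._reader, self._writer = mp.Pipe(duplex=False)

    def close(self):
        if not self._closed:
            self._closed = True
            self._writer.close()
            self._reader.close()

    def wakeup(self):
        if not self._closed:
            self._writer.send_bytes(b"")

    def clear(self):
        if not self._closed:
            while self._reader.poll():
                self._reader.recv_bytes()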
