
gh-44098: Release the GIL during mmap on Unix #98146

Merged
merged 4 commits into python:main from gh-44098 on Oct 10, 2022

Conversation

@hauntsaninja (Contributor) commented Oct 10, 2022

This seems pretty straightforward. The issue mentions other calls in mmapmodule that we could release the GIL on, but those are in methods where we'd need to be careful to ensure that something sensible happens if those are called concurrently. In prior art, note that #12073 released the GIL for munmap. In a toy benchmark, I see the speedup you'd expect from doing this.

Automerge-Triggered-By: GH:gvanrossum
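
For context, a minimal sketch of what "release the GIL during mmap" means in practice: CPython's standard Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS pattern around the blocking mmap(2) call. This is illustrative only, not the actual diff from this PR; the function and parameter names below are assumptions, not copied from mmapmodule.c.

#include <Python.h>
#include <sys/mman.h>

/* Hypothetical helper showing the GIL-release pattern around mmap(2). */
static void *
map_file_without_gil(size_t map_size, int prot, int flags, int fd, off_t offset)
{
    void *data;

    /* Nothing in this region touches Python objects or interpreter state,
     * so other threads may run while the kernel sets up the mapping. */
    Py_BEGIN_ALLOW_THREADS
    data = mmap(NULL, map_size, prot, flags, fd, offset);
    Py_END_ALLOW_THREADS

    return data;  /* the caller checks for MAP_FAILED and raises OSError */
}

This applies only to the Unix path; as noted in the review below, the Windows implementation is different and is left untouched here.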

@@ -0,0 +1 @@
Release the GIL when creating :class:`mmap.mmap` objects.
@gvanrossum (Member) Oct 10, 2022


Might I suggest qualifying this with "on UNIX"? The Windows version is different, and I won't ask you to update it (we should leave that up to @zooba).

Otherwise seems harmless. Is this based on an actual use case?

@hauntsaninja (Contributor, Author) Oct 10, 2022


Thanks, good suggestion, made the change.

This is not based on an actual use case. I was going through old issues and saw your comment asking for a patch here: #44098 (comment)
That said, I think I have some code at work that may benefit (would take some effort to figure out exactly how much though, and today's a holiday).

@gvanrossum gvanrossum added the 🤖 automerge label (PR will be merged once it's been approved and all CI passed) Oct 10, 2022
@gvanrossum gvanrossum changed the title gh-44098: Release the GIL during mmap gh-44098: Release the GIL during mmap on Unix Oct 10, 2022
@miss-islington (Contributor) commented Oct 10, 2022

Status check is done, and it's a success.

@miss-islington miss-islington merged commit f871e9a into python:main Oct 10, 2022
15 checks passed
@vstinner (Member) commented Oct 10, 2022

In a toy benchmark, I see the speedup you'd expect from doing this.

Just curious: what is the benchmark, and what are the results?

@hauntsaninja hauntsaninja deleted the gh-44098 branch Oct 10, 2022
@hauntsaninja (Contributor, Author) commented Oct 10, 2022

Toy benchmark code:

import mmap
import time
import threading

with open("hello.txt", "wb") as f:
    f.write(b"Hello Python!\n")

with open("hello.txt", "r+b") as f:
    fn = f.fileno()

    def run_mmap():
        # Map the file 100 times; each mmap.mmap() call performs an mmap(2) syscall.
        for _ in range(100):
            mmap.mmap(fn, 0)

    # Single-threaded baseline.
    s = time.perf_counter_ns()
    run_mmap()
    e = time.perf_counter_ns()
    print((e - s) / 1e9)

    # Eight threads mapping concurrently; without the patch each mmap(2) call
    # runs with the GIL held, so the threads serialize.
    s = time.perf_counter_ns()
    threads = [threading.Thread(target=run_mmap) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    e = time.perf_counter_ns()
    print((e - s) / 1e9)

On my laptop, the threaded portion goes from about 5-7s to 1-2s with this patch.
