GH-113464: A copy-and-patch JIT compiler #113465

Draft
wants to merge 425 commits into main

Conversation

@brandtbucher
Member

brandtbucher commented Dec 25, 2023

'Twas the night before Christmas, when all through the code
Not a core dev was merging, not even Guido;
The CI was spun on the PRs with care
In hopes that green check-markings soon would be there;
The buildbots were nestled all snug under desks,
Even PPC64 AIX;
Doc-writers, triage team, the Council of Steering,
Had just stashed every change and stopped engineering,

When in the "PRs" tab arose such a clatter,
They opened GitHub to see what was the matter.
Away to CPython they flew like a flash,
Towards sounds of PROT_EXEC and __builtin___clear_cache.
First LLVM was downloaded, unzipped
Then the Actions were running a strange new build script,
When something appeared, they were stopped in their tracks,
jit_stencils.h, generated from hacks,
With their spines all a-shiver, they muttered "Oh, shit...",
They knew in a moment it must be a JIT.

More rapid than interpretation it came
And it copied-and-patched every stencil by name:
"Now, _LOAD_FAST! Now, _STORE_FAST! _BINARY_OP_ADD_INT!
On, _GUARD_DORV_VALUES_INST_ATTR_FROM_DICT!
To the top of the loop! And down into the call!
Now cache away! Cache away! Cache away all!"
But why now? And how so? They needed a hint,
Thankfully, Brandt gave a great talk at the sprint;
So over to YouTube the reviewers flew,
They read the white paper, and the blog post too.

And then, after watching, they saw its appeal
Not writing the code themselves seemed so unreal.
And the platform support was almost too easy,
ARM64 Macs to 32-bit PCs.
There was some runtime C, not too much, just enough,
Basically a loader, relocating stuff;
It ran every test, one by one passed them all,
With not one runtime dependency to install.
Mostly build-time Python! With strict static typing!
For maintenance ease, and also nerd-sniping!

Though dispatch was faster, the JIT wasn't wise,
And the traces it used still should be optimized;
The code it was JIT'ing still needed some thinning,
With code models small, and some register pinning;
Or new calling conventions, shared stubs for paths slow,
Since this JIT was brand new, there was fruit hanging low.
It was awkwardly large, parsed straight out of the ELFs,
And they laughed when they saw it, in spite of themselves;

A configure flag, and no merging this year,
Soon gave them to know they had nothing to fear;
It wasn't much faster, at least it could work,
They knew that'd come later; no one was a jerk,
But they were still smart, and determined, and skilled,
They opened a shell, and configured the build;
--enable-experimental-jit, then made it,
And away the JIT flew as their "+1"s okay'ed it.
But they heard it exclaim, as it traced out of sight,
"Happy JIT-mas to all, and to all a good night!"

@brandtbucher
Member Author

I'm curious, given the perf results reported in your talk, do you have any documented ideas on improving the generated code - either by tinkering with whatever gets generated (though I'm aware messing with it too much manually defeats the idea of having it work "magically"), or by improving the template for LLVM? Some of the things I see in generated code seem really pessimized, like the obligatory jump-to-continue in each op (with a jump-to-register too, probably enforced by mcmodel=large), or 64-bit oparg immediates.

Yep. Things on the roadmap (not in this PR) include:

  • removing simple zero-length jumps at the ends of stencils in a postprocessing step
  • using the small or medium code models for stencils that don't require 64-bit holes
  • using the ghccc calling convention for more efficient tail calls (way less pushing, popping, and register shuffling at the beginning and end of instructions)
  • using shared stubs for slow paths
  • using shared const data instead of duplicating stuff like static strings every time they're used
  • top-of-stack caching in registers (plays nicely with ghccc, above)
  • compiling different variants of each stencil when the oparg changes control flow
  • compiling super-stencils that combine common sequences of instructions

Each of these increases the complexity a tiny bit, and each probably deserves to be its own project that is reviewed individually. I've roughly prototyped many of them to prove they're viable, though.
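
For anyone who hasn't watched the talk yet, the runtime half of the idea is small enough to sketch. The following is a deliberately simplified illustration of "copying and patching": the Stencil struct, the function names, and the single 64-bit-absolute hole kind are all made up for this sketch, not the code in this PR (the real loader handles several relocation kinds per platform):

    /* Simplified sketch of copy-and-patch (illustrative names, not this PR's code). */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef struct {
        const unsigned char *body;  /* machine code template compiled at build time */
        size_t body_size;
        const size_t *holes;        /* offsets of 64-bit immediates left to be patched */
        size_t hole_count;
    } Stencil;

    /* Copy one stencil's template into place, then fill in its holes. */
    static unsigned char *
    copy_and_patch(unsigned char *where, const Stencil *stencil, const uint64_t *patches)
    {
        memcpy(where, stencil->body, stencil->body_size);
        for (size_t i = 0; i < stencil->hole_count; i++) {
            memcpy(where + stencil->holes[i], &patches[i], sizeof(uint64_t));
        }
        return where + stencil->body_size;
    }

    /* Lay out a trace's stencils in writable memory, then flip it to executable. */
    static void *
    emit_trace(const Stencil *stencils, const uint64_t **patches, size_t count)
    {
        size_t total = 0;
        for (size_t i = 0; i < count; i++) {
            total += stencils[i].body_size;
        }
        unsigned char *memory = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (memory == MAP_FAILED) {
            return NULL;
        }
        unsigned char *head = memory;
        for (size_t i = 0; i < count; i++) {
            head = copy_and_patch(head, &stencils[i], patches[i]);
        }
        if (mprotect(memory, total, PROT_READ | PROT_EXEC) != 0) {
            munmap(memory, total);
            return NULL;
        }
        __builtin___clear_cache((char *)memory, (char *)memory + total);
        return memory;
    }

The point of the design is that everything interesting (the templates themselves) is produced at build time by LLVM, so the runtime component stays roughly this boring.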

@brandtbucher
Member Author

Hi Brandt, thanks for the marvelous work. I have a few small questions about this JIT mechanism.

  1. The decision about when to JIT is made by the optimizer, right? Is there any data on how likely code in normal workloads is to be compiled to native code? If there is not enough data yet, I would like to run some benchmarks against this PR.

This piggybacks on the existing tier two machinery (activated by passing -X uops on the command line or setting the PYTHON_UOPS environment variable). You can build an instrumented version of main today using --enable-pystats, which dumps tons of internal counters. These include stats on how effective tier two is at finding, optimizing, and executing hot spots in your code.

  2. Is there any way to monitor the JIT's status?

Not yet, but there almost certainly will be in the future. I think we need to play with the JIT a bit to see what kind of info/control is most useful.

  3. After gh-96143 (Allow Linux perf profiler to see Python calls, #96123), we create a trampoline that maps code addresses to symbols, so users can hook a user-space address and monitor the code with perf or other tools. Is it possible to do the same thing with the JIT? (I would like to help with this feature.)

It should be possible, but I haven't experimented with this at all. This is probably a related problem to making sure that C-level debuggers can work with the jitted code effectively, which I'm also not worrying about for now (contributions welcome once this lands, though)!
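
For the perf question specifically: the usual low-tech hook is the perf map file, where each line names a region of anonymous executable memory. Purely as an illustration of the idea (none of this is in the PR, and the names are hypothetical), registering a jitted trace could look roughly like this:

    /* Illustrative sketch only: append "<start> <size> <name>" (hex) to the map
     * file that Linux perf consults when it samples unknown executable memory. */
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static void
    perf_map_register(const void *code, size_t size, const char *name)
    {
        char path[64];
        snprintf(path, sizeof(path), "/tmp/perf-%d.map", (int)getpid());
        FILE *map = fopen(path, "a");
        if (map == NULL) {
            return;
        }
        fprintf(map, "%llx %llx %s\n",
                (unsigned long long)(uintptr_t)code,
                (unsigned long long)size, name);
        fclose(map);
    }

Naming each region after the Python code it came from is the more interesting design question, and probably where help would be most useful.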

@Zheaoli
Contributor

Zheaoli commented Jan 4, 2024

Thanks a lot for your patience! I have another question here.

I think the base template code comes from the tier two executor cases. I'm very curious about the performance difference between the tier two interpreter and the JIT-compiled code.

@brandtbucher
Member Author

brandtbucher commented Jan 4, 2024

I think the base template code comes from the tier two executor cases. I'm very curious about the performance difference between the tier two interpreter and the JIT-compiled code.

As it stands now, it's somewhere between 2% and 9% faster than the tier two interpreter, depending on platform (individual benchmarks vary widely, from 13% slower to 47% faster). See my comment above for possible improvements to the generated code once this initial implementation is in (all of which are orthogonal to optimizing the trace itself, which is being worked on separately).

@penguin-wwy
Contributor

Hi Brandt, thanks for the amazing work. Allow me to ask a few questions about future optimisation.

  • The current implementation only supports binary templates for a single bytecode node. So how are we going to support supernodes for common bytecode sequences?
  • In addition to supporting supernodes, perhaps we could somehow stitch binary templates together to generate a function or superblock; if we did that, perhaps we'd need to customise the register allocation algorithm to pass parameters and remove the calling overhead between stencils.
  • Finally, can we generate inline-optimised templates for the C API with type mocks, as Pyston does? These would help us make the less common typed bytecode operations native as well (e.g. _binary_op_add_list).

@Zheaoli
Contributor

Zheaoli commented Jan 4, 2024

As it stands now, it's somewhere between 2% and 9% faster than the tier two interpreter, depending on platform (individual benchmarks vary widely, from 13% slower to 47% faster). See my comment above for possible improvements to the generated code once this initial implementation is in (all of which are orthogonal to optimizing the trace itself, which is being worked on separately).

Got it. I think we might need a continuous benchmark pipeline to evaluate the performance.

As for test cases, we may need to cover some real use cases that are complex enough and run for a long time, similar to how the Ruby community benchmarks (Shopify runs the JIT on their main branch and reports the profiling results to the community: https://railsatscale.com).

@brandtbucher
Member Author

Hi Brandt, thanks for the amazing work. Allow me to ask a few questions about future optimisation.

  • The current implementation only supports binary templates for a single bytecode node. So how are we going to support supernodes for common bytecode sequences?

One of two ways:

  • the tier two optimizer can combine tier two instructions into superinstructions before the JIT even sees them (then, to the JIT, they are just normal instructions)
  • in addition to individual stencils, we'll also compile stencils for common pairs or triples of instructions (then the JIT can use them if they show up in the trace)
  • In addition to supporting supernodes, perhaps we could somehow stitch binary templates together to generate a function or superblock; if we did that, perhaps we'd need to customise the register allocation algorithm to pass parameters and remove the calling overhead between stencils.

There are lots of things we can do with this, since at its core it's really just a general-purpose backend. :)

But for register allocation, LLVM's ghccc calling convention makes it very easy to pin registers across the tail calls by passing them as arguments to the continuation... so we actually have a surprising amount of control there! (There's a rough sketch of that tail-calling shape at the end of this comment.)

  • Finally, can we generate inline-optimised templates for the C API with type mocks, as Pyston does? These would help us make the less common typed bytecode operations native as well (e.g. _binary_op_add_list).

Not sure I follow... I don't know what you mean by "type mocks" (and Google isn't helping).
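
To make the register-pinning point above a little more concrete, here's a toy-sized sketch of the continuation-passing, tail-calling shape the stencils have. The names are made up, and clang's musttail attribute is only a plain-C stand-in for what the ghccc convention gives the actual templates:

    /* Toy illustration: each "micro-op" does its work and tail-calls the next,
     * so the state being threaded through (here, just a stack pointer) can stay
     * in registers from one op to the next instead of being spilled and reloaded. */
    #include <stdint.h>
    #include <stdio.h>

    static intptr_t *op_second(intptr_t *stack_pointer);

    static intptr_t *
    op_first(intptr_t *stack_pointer)
    {
        *stack_pointer++ = 40;                 /* push an operand */
        __attribute__((musttail)) return op_second(stack_pointer);
    }

    static intptr_t *
    op_second(intptr_t *stack_pointer)
    {
        stack_pointer[-1] += 2;                /* operate on the top of the stack */
        return stack_pointer;                  /* "leave the trace" */
    }

    int
    main(void)
    {
        intptr_t stack[8];
        intptr_t *top = op_first(stack);
        printf("%ld\n", (long)top[-1]);        /* prints 42 */
        return 0;
    }

With ghccc, the arguments to each continuation behave like a register allocation that survives across all of those tail calls, which is what makes top-of-stack caching in registers cheap.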

@brandtbucher
Member Author

Got it. I think we might need a continuous benchmark pipeline to evaluate the performance.

As for test cases, we may need to cover some real use cases that are complex enough and run for a long time, similar to how the Ruby community benchmarks (Shopify runs the JIT on their main branch and reports the profiling results to the community: https://railsatscale.com).

We already have automated performance testing of a comprehensive benchmark suite, if that's what you mean: https://github.com/faster-cpython/benchmarking-public

@penguin-wwy
Contributor

Not sure I follow... I don't know what you mean by "type mocks" (and Google isn't helping).

Sorry, my wording is not very standard. What I want to say is: use LLVM to generate binary template functions (optimised for inlining) for specific bytecode cases (e.g. binary_op_add, but adding two lists), and then call them from a generic method:

add_two_list = load_fast + load_fast + binary_op_add:
    mov xxx
    mov yyy
    call  (X86_64_RELOC_UNSIGNED)   -> redirect to list_extend

which could help make some of the less common bytecode operations (as opposed to int and float) native as well

@Zheaoli
Contributor

Zheaoli commented Jan 4, 2024

We already have automated performance testing of a comprehensive benchmark suite, if that's what you mean: https://github.com/faster-cpython/benchmarking-public

I have seen this before, but it's a little bit different. I will try some more complex workloads (like Django with a lot of ORM queries, etc.) to benchmark some extra metrics like TPS improvement, CPU usage, and so on.

@tekknolagi
Contributor

tekknolagi commented Jan 4, 2024

We did something like this with https://github.com/facebookarchive/django-workload some years ago, but I don't know how relevant that exact code is today. Also, I no longer work at FB.

@ericsnowcurrently
Member

FWIW, the faster-cpython team does also run a number of additional high-level ("workload-oriented") benchmarks that are included in the results: https://github.com/pyston/python-macrobenchmarks/tree/main/benchmarks.

@brandtbucher
Member Author

brandtbucher commented Jan 10, 2024

Sorry, my wording is not very standard. What I want to say is: use LLVM to generate binary template functions (optimised for inlining) for specific bytecode cases (e.g. binary_op_add, but adding two lists), and then call them from a generic method.

Yup, that does seem like something we could do with this approach in the future.

Labels
interpreter-core (Objects, Python, Grammar, and Parser dirs)
performance (Performance or resource usage)