
GH-99554: marshal bytecode more efficiently #99555

Open
wants to merge 5 commits into base: main

Conversation

@brandtbucher (Member) commented Nov 17, 2022

This shrinks the size of .pyc files by about 25%. It also removes an intermediate copy of the bytecode created during marshalling.

Next steps will be removing the intermediate bytes object created during unmarshalling and performing the work of _PyCode_Quicken as part of this same move.
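Roughly, the encoding idea described here and in the review below is: drop the trailing CACHE entries and write an oparg byte only for opcodes that take an argument, then re-expand on read. Below is a minimal pure-Python sketch of that idea; pack, unpack, and the caches table are illustrative names only, not the PR's actual on-disk format.

import opcode

CACHE = opcode.opmap.get("CACHE", 0)  # CACHE exists on 3.11+

def pack(code: bytes, caches: dict[int, int]) -> bytes:
    # Drop CACHE units; keep the arg byte only for opcodes that take one.
    out = bytearray()
    i = 0
    while i < len(code):
        op, arg = code[i], code[i + 1]
        out.append(op)
        if op >= opcode.HAVE_ARGUMENT:
            out.append(arg)
        i += 2 * (1 + caches.get(op, 0))  # skip this op's inline cache units
    return bytes(out)

def unpack(packed: bytes, caches: dict[int, int]) -> bytes:
    # Re-expand: restore zeroed arg bytes and the CACHE units.
    out = bytearray()
    i = 0
    while i < len(packed):
        op = packed[i]
        i += 1
        arg = 0
        if op >= opcode.HAVE_ARGUMENT:
            arg = packed[i]
            i += 1
        out += bytes([op, arg])
        out += bytes([CACHE, 0]) * caches.get(op, 0)
    return bytes(out)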

@brandtbucher added the performance (Performance or resource usage) and stdlib (Python modules in the Lib dir) labels Nov 17, 2022
@brandtbucher requested a review from tiran as a code owner Nov 17, 2022
@brandtbucher self-assigned this Nov 17, 2022
@brandtbucher requested a review from markshannon Nov 17, 2022
@brandtbucher (Member, Author) commented Nov 17, 2022

Hm, that's annoying. It appears that umarshal is used for deepfreeze, but it needs to be able to run with a version of Python other than the one being built... that makes this whole thing a lot harder, since those Pythons don't have access to the opcode module of the Python we're unmarshalling.

I might need to find a clever way of importing the opcode module from Lib.
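One possible approach, just a sketch and not necessarily what this PR ends up doing, is to load the checkout's Lib/opcode.py by file path instead of importing the running interpreter's own copy. The helper name and module name below are illustrative.

import importlib.util
from pathlib import Path

def load_build_opcode(srcdir: Path):
    # Execute the in-tree Lib/opcode.py under whatever Python runs the build,
    # so the opcode tables match the Python being built, not the build host.
    path = srcdir / "Lib" / "opcode.py"
    spec = importlib.util.spec_from_file_location("_opcode_for_build", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module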

@mdboom (Contributor) left a comment

Just one drive-by comment ;)

Python/marshal.c Outdated
assert(0x00 <= oparg && oparg < 0x100);
buffer[i++] = _Py_MAKECODEUNIT(opcode, oparg);
for (int j = 0; j < _PyOpcode_Caches[opcode]; j++) {
buffer[i++] = _Py_MAKECODEUNIT(CACHE, oparg);
@mdboom (Contributor) commented Nov 17, 2022

Maybe assert that i < size here, since there is potential for a buffer overrun.

@brandtbucher requested a review from gvanrossum Nov 21, 2022
@gvanrossum (Member) left a comment

TBH, I'm a bit torn. This looks brilliant, but there's one thing that worries me -- unless you have the correct table of inline cache entries and know the value of HAVE_ARGUMENT you can't even skip the bytecode, which means you can't do anything else with the code object (e.g. looking at co_consts or co_names or even co_firstlineno).

That would be easier if the size you wrote was the number of bytes written instead of the number of code units including cache; but then r_bytecode() would have to make a guess at the size of the bytes object it is allocating and perhaps reallocate it later, which isn't ideal either.

How would you feel about writing both numbers to the file? That wastes 4 bytes per code object, but makes decoding more robust.

FWIW: Here's a wild scheme for avoiding the extra copy of the bytecode (which would presumably save a little time and memory on all imports). Once you have read the size of the bytecode you could just allocate a blank PyCodeObject object of the right size, and then you could deposit the expanded bytecode directly into that. Then the struct _PyCodeConstructor-based API would have to allow a field where you pass in that object and it would just initialize the rest based on the various given fields and return it (or dealloc it if there's another error). A little fiddly and not something for this PR, but given how heroic this all already sounds, perhaps worth the effort in the future.
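To illustrate the two-number suggestion, a hypothetical layout might look like the following sketch. write_bytecode and skip_bytecode are made-up names, and the header layout is only an example of the idea; the real format is whatever the PR writes.

import struct

def write_bytecode(w, packed: bytes, n_code_units: int) -> None:
    # 8-byte header: packed byte count plus expanded code-unit count
    # (costs 4 extra bytes per code object, as noted above).
    w.write(struct.pack("<II", len(packed), n_code_units))
    w.write(packed)

def skip_bytecode(r) -> int:
    # A reader without the cache table or HAVE_ARGUMENT can still hop over
    # the bytecode to reach co_consts, co_names, co_firstlineno, etc.
    n_bytes, n_code_units = struct.unpack("<II", r.read(8))
    r.seek(n_bytes, 1)
    return n_code_units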

bytecode = bytearray()
while len(bytecode) < nbytes:
opcode_byte = self.r_byte()
if opcode.HAVE_ARGUMENT <= opcode_byte:
@gvanrossum (Member) commented Nov 22, 2022

This looks weird. I'm so used to if x >= N that seeing if N <= x just feels wrong.

Suggested change:
- if opcode.HAVE_ARGUMENT <= opcode_byte:
+ if opcode_byte >= opcode.HAVE_ARGUMENT:

@markshannon (Member) commented Nov 28, 2022

const == variable is a C-ism to prevent accidentally writing = instead of ==. No need in Python.

@brandtbucher (Member, Author) commented Dec 22, 2022

I have a weird habit where I always use < and <= for comparisons, regardless of the placement of constants, etc. I guess I like the parallel with chained comparisons like lo <= x < hi, which almost always use < and <=.

I'll change it, though.

@markshannon (Member) left a comment

Looks good.
Any performance numbers? (not that it matters, the 25% space saving is worth it).

@@ -17,13 +17,13 @@
from typing import Dict, FrozenSet, TextIO, Tuple

import umarshal
import opcode_for_build as opcode
@markshannon (Member) commented Nov 28, 2022

This isn't a normal import so it shouldn't look like one. Maybe something like opcode = opcode_finder.get_opcodes()
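For illustration only, such a helper could wrap the path-based loading sketched earlier in the thread. opcode_finder and get_opcodes are just the suggested names, not existing modules, and the relative path assumes the helper lives two directories below the source root.

import importlib.util
import os

def get_opcodes():
    # Locate the checkout's Lib/opcode.py relative to this file and execute it.
    path = os.path.join(os.path.dirname(__file__), "..", "..", "Lib", "opcode.py")
    spec = importlib.util.spec_from_file_location("_build_time_opcode", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module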


- buffer[i++] = _Py_MAKECODEUNIT(opcode, oparg);
+ buffer[i].opcode = opcode;
+ buffer[i++].oparg = oparg;
  for (int j = 0; j < _PyOpcode_Caches[opcode]; j++) {
- buffer[i++] = _Py_MAKECODEUNIT(CACHE, oparg);
+ buffer[i].opcode = CACHE;
+ buffer[i++].oparg = 0;
@gvanrossum (Member) commented Dec 18, 2022

Honestly I'd rather see the i++ on separate lines instead of this hybrid approach.

Labels: awaiting core review, performance (Performance or resource usage), stdlib (Python modules in the Lib dir)