bpo-39308: Make _tp_cache typed #17974
Conversation
the-knights-who-say-ni commented Jan 12, 2020
Hello, and thanks for your contribution! I'm a bot set up to make sure that the project can legally accept this contribution by verifying everyone involved has signed the PSF contributor agreement (CLA).

CLA Missing

Our records indicate the following people have not signed the CLA: For legal reasons we need all the people listed to sign the CLA before we can look at your contribution. Please follow the steps outlined in the CPython devguide to rectify this issue. If you have recently signed the CLA, please wait at least one business day. You can check yourself to see if the CLA has been received.

Thanks again for the contribution; we look forward to reviewing it!
ok, I haven't exactly read the entire contributing docs and missed that. Just signed the CLA again, changed
Looks good. The CLA usually takes a business day.
@gvanrossum alright, so let's wait for the CLA to be processed. I assume my job here is done and the bot will simply change the label once the CLA gets in. Cheers!
I am a bit worried about the performance impact. This is relevant only for
(Removed the auto-merge label until the perf question is resolved.)
@ilevkivskyi that seems a bit contradictory ;) a performance issue about caching. But it's worth checking anyway; I will run such a comparison later on.
What is contradictory here? A typed cache lookup will be slower (because we need to compare the types); the question is how much slower.
Looking at the source, the difference in the cache is that with
Results:

Script:

```python
import typing as t
import timeit as tt
import time
import operator
import functools

stmts = (
    't.Literal[1]',
    't.Literal[1]',
    '(t.Literal[1], t.Literal[1], t.Literal[0], t.Literal[0])',
    't.Literal[True]',
    't.Literal[True, False]',
    '(t.Literal[True, False], t.Literal[True, False], t.Literal[True, False], t.Literal[True, False], t.Literal[True, False])',
    't.Literal[0]',
    't.Literal[0, 1]',
    't.List[t.Literal[1, 2, 3]]',
    't.Union[float, complex]',
    't.Union[str, bytes, float, complex, t.Literal[1]]',
    't.NewType("NT", int)',
    't.Literal[1, 2, 3, 4, 5, 6, 7, 8]',
    't.Dict[t.AnyStr, complex]',
    't.Set[t.AnyStr]',
    't.Set[t.Union[float, int]]',
    't.AbstractSet[t.Union[float, int]]',
    't.Sequence[t.Union[float, int]]',
    't.NamedTuple("A", x=int)',
    't.NamedTuple("B", x=int, y=float)',
    't.NamedTuple("C", x=int, y=float, z=complex)',
)
repeats = 300.0
exec_number = 1000
for stmt in stmts:
    times = tt.repeat(
        stmt,
        globals={'t': t},
        number=exec_number,
        repeat=int(repeats),
        timer=time.perf_counter,
    )
    avg = functools.reduce(operator.add, times) / repeats
    print(f'{stmt} => {avg * 1000.0:.5f}ms')
```

Indeed, having

```python
normal_cache = functools.lru_cache(typed=False)(func)
literal_cached = functools.lru_cache(typed=True)(func)
_cleanups.append(normal_cache.cache_clear)
_cleanups.append(literal_cached.cache_clear)
```
Can you clarify what you mean by two caches?

--Guido (mobile)
@ilevkivskyi mentioned having a cache
I was hoping that maybe there would be a possibility to affect
kornicameister commented Jan 12, 2020 (edited by miss-islington)
https://bugs.python.org/issue39308
Automerge-Triggered-By: @gvanrossum