3.12 setrecursionlimit is ignored in connection with @functools.cache
#112215
Comments
also tested on Linux (with the call to As well, there is no discrepancy with 3.11 vs 3.12 if The current master is bad too. In my testing I modified the lines
I removed the 1st one (it's very recent), and only left
I found the 1st bad commit, 7644935, by bisecting:
@markshannon - greetings from Oxford, by the way :-)
Yes. I was suggesting exposing an API for the C recursion limit, but did not receive much feedback. See the discussion here: #107263
@PeterLuschny - basic Fibonacci

```python
# fib.py
import sys
sys.setrecursionlimit(2000)

from functools import cache

@cache
def fib(n):
    if n < 1: return 0
    if n == 1: return 1
    return fib(n - 1) + fib(n - 2)

print(fib(500))
```

shows the bug just as well
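For contrast, a hand-rolled pure-Python memoizer (a sketch of mine, not from the thread) avoids the C-accelerated wrapper entirely, so every recursive call goes through ordinary Python frames and `sys.setrecursionlimit` governs it again:

```python
import sys

sys.setrecursionlimit(2000)

def memoize(func):
    # Minimal pure-Python memoization: unlike the C-accelerated
    # functools.cache wrapper, calls here stay in Python frames,
    # so sys.setrecursionlimit applies.
    cache = {}
    def wrapper(n):
        if n not in cache:
            cache[n] = func(n)
        return cache[n]
    return wrapper

@memoize
def fib(n):
    if n < 1:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(500))  # succeeds where the @functools.cache version overflows on 3.12
```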
it's not "probably in connection with @functools.cache".
The bug is triggered by the use of the C implementation of `_lru_cache_wrapper`. Apply

```diff
--- a/Lib/functools.py
+++ b/Lib/functools.py
@@ -641,7 +641,7 @@ def cache_clear():
     return wrapper
 
 try:
-    from _functools import _lru_cache_wrapper
+    from _functools import no_no_lru_cache_wrapper
 except ImportError:
     pass
```

and see. In a way, it's to be expected, as C code isn't governed by `sys.setrecursionlimit`.
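To check which implementation a given interpreter actually uses, one can compare the name `functools` binds against the accelerator module directly (a small diagnostic sketch of mine; on a stock CPython build the C wrapper should win):

```python
import functools

# Lib/functools.py does "from _functools import _lru_cache_wrapper"
# and silently falls back to its pure-Python wrapper on ImportError.
try:
    import _functools
    using_c = functools._lru_cache_wrapper is _functools._lru_cache_wrapper
except ImportError:
    using_c = False  # e.g. a build without the C accelerator module

print("C _lru_cache_wrapper active:", using_c)
```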
@arhadthedev This is not related to performance. It's a new hard-coded limit that is not adjustable.
@markshannon ^^
We added the C recursion limit for safety reasons. We don't want to give that up. We also opted for portability of code by setting the C recursion limit to a single value for all platforms.
I don't understand how you can nuke all the recursive uses of Python C extensions in the name of "safety", sorry. For a small sampling, have a look at how many code chunks on GitHub call `sys.setrecursionlimit`.
@dimpase: your tone is aggressive. that's uncalled for. please keep it civil. |
Apologies. In my defense, I can only say that I am speaking from one of the pitfalls wisely predicted by the SC in its rejection of PEP 651 – Robust Stack Overflow Handling.
I agree with this. It causes large inconvenience for users who do end up relying on recursion. They are deprived of deep C recursion, without even a notice.
And, by the way, limiting the C stack cannot prevent all stack overflows. 500, 1000, or 1500 are just rough estimates; we can still create stack overflows under these limits. The idiomatic way is to register a signal handler (on UNIX) or a VectoredExceptionHandler (on Windows) to check whether the stack pointer is out of bounds, though that is not recoverable. I am fine with having these limits, but please provide a way to circumvent them. Why are users forbidden to use large stack memory even when they provide it themselves?
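The "user provides a larger stack" scenario is exactly what the classic thread-based workaround does. The sketch below (names are mine, not from the thread) runs a function on a worker thread with an enlarged C stack; through 3.11, raising `sys.setrecursionlimit` on such a thread allowed correspondingly deeper recursion, whereas a hard-coded C recursion limit ignores the extra stack:

```python
import threading

def call_with_big_stack(func, stack_bytes=64 * 1024 * 1024):
    # Run func() on a thread whose C stack size we choose ourselves.
    # threading.stack_size applies to threads started after the call.
    result = {}
    threading.stack_size(stack_bytes)
    def runner():
        result["value"] = func()
    t = threading.Thread(target=runner)
    t.start()
    t.join()
    threading.stack_size(0)  # restore the platform default for later threads
    return result["value"]

print(call_with_big_stack(lambda: sum(range(10))))  # prints 45
```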
I think it would be more productive to come up with a way to calculate the C stack size accurately rather than continue to explain why it's not great (we know, we know).
Yes, we can estimate it better; we don't need to be that accurate, just make sure the user can make it larger. For example, on Linux we can query the stack limit at startup. When the user sets a stack limit of 1MB, he gets a correspondingly bounded recursion depth; when he needs 2000 frames, all he needs to do is raise the stack limit accordingly.
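The Linux-side query described above can be sketched with the stdlib `resource` module (Unix-only; the actual numbers depend on the shell's `ulimit -s`):

```python
import resource

# RLIMIT_STACK bounds the main thread's C stack: soft is what the
# process currently gets, hard is the ceiling it may raise soft to.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

def fmt(v):
    return "unlimited" if v == resource.RLIM_INFINITY else f"{v // 1024} KiB"

print("soft:", fmt(soft), "hard:", fmt(hard))
```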
Note that there is a Linux-only GNU extension, `pthread_getattr_np`, that works if you define `_GNU_SOURCE`:

```c
size_t get_pthread_stack_size(pthread_t tid) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);  /* probably not needed */
    if (pthread_getattr_np(tid, &attr) < 0) {
        perror("pthread_getattr_np failed");
        return 0;
    }
    size_t stack_size = 0;
    if (pthread_attr_getstacksize(&attr, &stack_size) < 0) {
        perror("pthread_attr_getstacksize failed");
        return 0;
    }
    return stack_size;
}
```

macOS appears to have its own non-standard equivalent (my best guess is `pthread_get_stacksize_np`).
Yeah, you are right, it's just too niche (too target-specific). I also understand Python would prefer a resumable error to just aborting the program. Then how about providing the user an option to disable this C recursion limit check? Let users manage it themselves under this mode.
Isn't it possible to get the C stack size (at thread creation time) directly? One way or another, knowing the C stack limit does not give complete knowledge of the maximal C recursion depth, as different functions place different amounts of data on the stack. Anyhow, for the issue at hand, it is the C version of `lru_cache` that matters.
Nope. That is merely a function for accessing the opaque `pthread_attr_t`.
the code you posted in #112215 (comment) is basically a small wrapper around
no it isn't.
OK, sorry, I misunderstood that.
Python 3.12 should not be left in its current state. The OP's example demonstrates a major regression. |
Starting about mid-week, I always see 14 test failures on Windows 10 (Pro), seemingly due to this. Current main branch, release build, Visual Studio 2022, nothing customized:
Sometimes there's an explicit message about a Windows stack overflow. Historically, as I recall, we always needed a lower recursion limit on Windows than on Linux.
Note: 13 of the 14 tests named above still pass in a debug build.
Hm, maybe the CI uses a debug build.
Marking this as a release blocker since we discussed this offline. The plan is to restore 3.12 to the 3.11 behavior (which is a little tricky since various tests were added/updated, but it's doable). For 3.13 we will do per-platform limits (and per-config-setting, e.g. debug) by default, and add an API to change the C recursion limit. See also #113403 (comment) for some measurements of what we can actually support without crashing (on Mac, but apparently Linux is similar, and Windows is dramatically different).
I get this too. Windows 11, main branch. We also see it in the Azure Pipelines CI build, e.g. |
Bug report
A minimal example:
CPython versions tested on:
CPython main branch
Operating systems tested on:
Windows
Edit: This (simpler) example is from @dimpase.
Linked PRs