Test Annotation Framework for Imported CPython test #1973
base: main
Conversation
I really like this idea! It would at least allow us to mark which tests actually need work on our side in the test results webpage. @palaviv @youknowone what do you think?
I love this idea!
Wanted to read this to see the new scheme and found some typos
…and comments
Co-authored-by: James Webber <jamestwebber@users.noreply.github.com>
Co-authored-by: Jeong YunWon <youknowone@users.noreply.github.com>
I agree marking things with more than comments is a good idea.
Can we push this in @coolreader18, @youknowone?
Yeah, lgtm. There might be more skips/expectedFailures to replace with this since this PR was opened, but we can do that as a follow-up.
I strongly prefer this idea to the current state. Not sure librptest has to be placed in …
I don't agree
👍 for the last paragraph @youknowone
Currently, when importing tests from CPython, they need to be annotated with ad-hoc comments, formatted however the author likes. Furthermore, the original file revision and the originating Python (language) version are missing from most of the imported tests, which makes tracking nearly impossible.
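For illustration, a minimal sketch of what this ad-hoc marking typically looks like in an imported test file; the comment wording and the test itself are made-up examples of the pattern, not code taken from the repository:

```python
import unittest


class ListTests(unittest.TestCase):
    # TODO: RUSTPYTHON (hypothetical marker comment; wording differs per author)
    # Fails because feature X is not implemented yet.
    @unittest.expectedFailure
    def test_sort_stability(self):
        ...
```

Such comments are easy to write but hard to search for consistently, and they carry no machine-readable metadata about where the test came from.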
To improve on this convention-based "solution", I propose a Python annotation framework using decorators to properly annotate the imported test cases, so that they can be tracked and evaluated in a comprehensive and comfortable way.
Examples can be seen in Lib/test/test_rpt.py.
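For readers without the branch checked out, here is a minimal sketch of what decorator-based annotations along these lines could look like. The decorator names (`cpython_source`, `rustpython_todo`) and their parameters are assumptions for illustration only and may not match the actual API in Lib/test/test_rpt.py:

```python
import unittest


def cpython_source(filename, cpython_version, revision):
    """Hypothetical decorator: record which CPython file, language version,
    and revision a test class was imported from, as machine-readable metadata."""
    def decorator(cls):
        cls._cpython_source = (filename, cpython_version, revision)
        return cls
    return decorator


def rustpython_todo(reason):
    """Hypothetical decorator: mark a test that still fails on RustPython,
    keeping the reason attached to the test instead of a loose comment."""
    def decorator(func):
        func._rustpython_todo = reason
        return unittest.expectedFailure(func)
    return decorator


@cpython_source("Lib/test/test_list.py", cpython_version="3.8", revision="abc1234")
class ListTests(unittest.TestCase):
    @rustpython_todo("ordering of equal keys in list.sort() differs")
    def test_sort_stability(self):
        ...
```

A reporting tool can then walk the loaded test suite, read these attributes, and produce the tracking and evaluation overview mentioned above instead of grepping for free-form comments.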
This is the initial implementation. The core functionality is there and tested, but the reporting, for example, is very rudimentary and mainly for demonstration purposes.