bpo-30856: Update TestResult early, without buffering in _Outcome #28180
ambv (Contributor) · Sep 17, 2021 (edited)

`_addSkip` checks whether the given `TestResult` has an `addSkip` method. Are we sure that it will always have an `addSubTest` method? Sub-test support was added later (BPO-16997) than test skipping support (available since unittest.py was split into a package in bed7d04). In fact, `_Outcome` specifically tests for this with its `result_supports_subtests` attribute. I would add a check here for this just to be safe. I know that `TestCase` will exit early if `_Outcome` has `result_supports_subtests` set to `False`, but this is a weird circular dependency; `_Outcome` should make this check on its own for consistency.

serhiy-storchaka (Author, Member) · Sep 17, 2021
Yes, I am sure. `testPartExecutor()` with `subTest=True` is only called in `subTest()` if `_outcome.result_supports_subtests` is true.

Adding a redundant test would make the code less clear. If we add it, the reader has to think about one more possible special case: `subTest` is true, but `result` doesn't have `addSubTest`. In what circumstances is that possible? Is it covered by tests? The answer is: it is not possible, and no tests are needed.

ambv (Contributor) · Sep 17, 2021
I disagree. The `subTest` method argument to the executor means "this test part is executing as a sub-test". It has nothing to do with whether the `TestResult` object used by `_Outcome` supports `addSubTest`. Those are two different things: whether this part runs under a `self.subTest` context manager in a user test method, and whether the result object can actually report sub-test outcomes.

If what you said were true, then we wouldn't have to test for skip support either, right? After all, if `SkipTest` is raised then it means `unittest` supports it, no? But we check for it anyway.

Lastly, putting an additional `if` statement here that uses the already existing `self.result_supports_subtests` attribute of that same class feels like the correct thing to do because it's `_Outcome`'s own state. As it stands now, you made it the responsibility of the caller of `outcome.testPartExecutor` to ensure that `outcome.result_supports_subtests` is `True`, which is weird.

serhiy-storchaka (Author, Member) · Sep 17, 2021
The executor is only used for subtests if the `TestResult` object supports `addSubTest`. If it does not, there are no subtests, and the `subTest()` context manager is a no-op. No `addSubTest`, no subtests.

`result_supports_subtests` is not `_Outcome`'s own state. It is only used outside of `_Outcome`. Actually, its only purpose is caching the result of `hasattr(result, "addSubTest")`, because it is usually tested several times per test. It is only a micro-optimization.

ambv (Contributor) · Sep 17, 2021 (edited)
You're describing the current state of how the code works, but that is fragile. None of this is explicitly documented. If there is ever a new caller of the executor, it might not check `result_supports_subtests` on its own and break in that way.

The attribute is certainly available for other users of an `_Outcome` object to look at, but saying it's not its state is kind of weird when it's defined in its own `__init__`. Not using it directly but relying on it being checked by an external user is an anti-pattern, I cannot put it any clearer.

serhiy-storchaka (Author, Member) · Sep 17, 2021
`_Outcome` is just a collection of variables (for input and for output) and a context manager which has access to these variables. It is an implementation detail, it should not be documented, and it does not have callers outside of this module. `result_supports_subtests` cannot go unchecked outside of `_Outcome`: it has to be checked there to determine whether the executor can be used for subtests at all.

I can add an assert at the beginning of `testPartExecutor()`:

Although I do not think that AssertionError is better than AttributeError here.
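The assert snippet itself is not reproduced above. A hypothetical reconstruction from the surrounding discussion, with the class heavily simplified (in unittest, `testPartExecutor` is a context manager, not a plain method), might look like:

```python
# Hypothetical sketch of the proposed assert; simplified, not the literal
# unittest source. The idea: requesting sub-test handling while the result
# object lacks addSubTest is a caller bug, so fail fast.
class _Outcome:
    def __init__(self, result=None):
        self.result = result
        # The cached capability check this thread debates.
        self.result_supports_subtests = hasattr(result, "addSubTest")

    def testPartExecutor(self, test_case, subTest=False):
        # The proposed guard at the top of the executor.
        assert not subTest or self.result_supports_subtests, (
            "subTest=True requires a result object with addSubTest")
        return "executed"

class _NoSubTestResult:
    """Stand-in result object without an addSubTest method (hypothetical)."""

outcome = _Outcome(_NoSubTestResult())
try:
    outcome.testPartExecutor(test_case=None, subTest=True)
    fired = False
except AssertionError:
    fired = True
```

With such a guard the misuse fails with AssertionError at the executor's entry instead of an AttributeError later, which is the trade-off serhiy-storchaka questions.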
serhiy-storchaka (Author, Member) · Sep 17, 2021

What is the worst consequence if we do not add any check or assert?
ambv (Contributor) · Sep 17, 2021

As I mentioned in the original comment, I would consider mimicking what `_addSkip` is doing a good defensive solution: warn about the incompatibility but continue running.

To be quite frank, we either need both checks or neither of them. It's unclear to me what kind of TestResult-like classes in modern usage actually lack subtest and skip support. Do you know?

But, again, if the code assumes that skip support can't be guaranteed in TestResult objects, then neither can subtest support. And so, recovering from that seems the sensible thing to do.
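For reference, the defensive pattern being pointed at here, roughly what the private `_addSkip` helper in `unittest.case` does, looks something like the following (a simplified sketch, not the literal CPython source; `LegacyResult` is a made-up stand-in):

```python
import warnings

def _addSkip(result, test_case, reason):
    # Look the method up dynamically instead of assuming it exists.
    addSkip = getattr(result, "addSkip", None)
    if addSkip is not None:
        addSkip(test_case, reason)
    else:
        # Degrade gracefully: warn about the incompatibility, then record
        # the test as a success instead of crashing on an old result class.
        warnings.warn("TestResult has no addSkip method, skips not reported",
                      RuntimeWarning, 2)
        result.addSuccess(test_case)

class LegacyResult:
    """Hypothetical TestResult-like object predating skip support."""
    def __init__(self):
        self.successes = []
    def addSuccess(self, test_case):
        self.successes.append(test_case)

result = LegacyResult()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    _addSkip(result, "test_example", "irrelevant on this platform")
```

An analogous `_addSubTest`-style fallback is what "mimicking `_addSkip`" would mean: warn once, keep the run going.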
serhiy-storchaka (Author, Member) · Sep 17, 2021

We are going in circles. There is already a check for `addSubTest`. It is just earlier in the code, because the absence of `addSubTest` affects more things. Here, `addSubTest` is only called if the `subTest` parameter is true, and it can be true only if `result_supports_subtests` is true, which is set to the result of the check.

What is the worst consequence of not adding the additional (redundant) check which you want to add?
ambv (Contributor) · Sep 17, 2021

I think our disagreement is about the fact that, in my opinion, the current state of things crosses boundaries of responsibility: calling a method on object A is only safe because you, the caller, previously have to check that an attribute on object A is `True`.

Functionally everything works today because, as you're saying, `TestCase.subTest` has a check for `result_supports_subtests`. But that doesn't mean it's good factoring.

In any case, I suspect the likelihood of TestResult-like objects that don't subclass the actual `TestResult` is pretty low in 2021, so this is all low stakes. I obviously failed to convince you, so carry on.

serhiy-storchaka (Author, Member) · Sep 18, 2021
I do not think there are any such boundaries. `_Outcome` is not some class with internal state and a public interface. It is just a collection of variables and one function. Its only purpose is to reduce code repetition. All its state is open to the code which uses it.

It does not only work today. I am still looking for a scenario in which the additional check would help.

As for supporting TestResult-like objects that don't subclass the actual TestResult, I think it is time to start deprecating that. But it does not have any relation to this code.