
Show subprocess stdout and stderr on pytest failure #29890


Merged

Conversation

thomashopkins32 (Contributor)

PR summary

Displays the captured stdout and stderr when a pytest test that uses the subprocess helper exits in failure.

PR checklist

@github-actions (bot) left a comment

Thank you for opening your first PR into Matplotlib!

If you have not heard from us in a week or so, please leave a new comment below and that should bring it to our attention. Most of our reviewers are volunteers and sometimes things fall through the cracks.

You can also join us on gitter for real-time discussion.

For details on testing, writing docs, and our review process, please see the developer guide

We strive to be a welcoming and open project. Please follow our Code of Conduct.

@timhoffm (Member) left a comment

Thanks for the PR.

@@ -105,6 +105,9 @@ def subprocess_run_for_testing(command, env=None, timeout=60, stdout=None,
             import pytest
             pytest.xfail("Fork failure")
         raise
+    except subprocess.CalledProcessError as e:
+        import pytest
+        pytest.xfail(f"Subprocess failed with exit code {e.returncode}:\n{e.stdout}\n{e.stderr}")
timhoffm (Member)

This should be a plain fail, not an xfail. I've also made it a bit more verbose to make the stdout and stderr output easier to read:

Suggested change
-        pytest.xfail(f"Subprocess failed with exit code {e.returncode}:\n{e.stdout}\n{e.stderr}")
+        pytest.fail(
+            f"Subprocess failed with exit code {e.returncode}:\n\n"
+            f"STDOUT:\n{e.stdout}\n\nSTDERR:\n{e.stderr}")

thomashopkins32 (Contributor, Author)

Thanks! I agree, I like yours better. Just made the change.

@timhoffm (Member) commented Apr 9, 2025

In fact, it's a little more complicated than that. We have use cases where we expect the subprocess to fail (or more generally are ok with that), e.g.
https://github.com/matplotlib/matplotlib/blob/7e997ae87b2abd927830445df5d768c835503504/lib/matplotlib/tests/test_backends_interactive.py#L462-473
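
A generic illustration of such a call site (a hypothetical test, not the one linked above, using plain subprocess.run for simplicity), where the non-zero exit is the expected outcome:

import subprocess
import sys

import pytest


def test_expected_subprocess_failure():
    # Hypothetical test: the child process is *expected* to exit non-zero,
    # so an unconditional pytest.fail() inside the helper would break
    # this kind of test.
    with pytest.raises(subprocess.CalledProcessError) as excinfo:
        subprocess.run([sys.executable, "-c", "raise SystemExit(1)"],
                       check=True, capture_output=True, text=True)
    assert excinfo.value.returncode == 1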

So we cannot unconditionally pytest.fail on a CalledProcessError. The correct way forward depends on the common use cases. Options are:

  • wrap usages of subprocess_run_for_testing() or subprocess_run_helper() (which uses it) in a try/except directly inside the test functions if the function does not further catch CalledProcessError (to be checked whether TimeoutError is relevant too). This would be reasonable if we only have very few such cases.
  • Introduce a flag subprocess_run_for_testing(..., fail_on_error=...). This lets the call site choose the behavior. I'd set this to False by default, because a function automatically failing a test can be surprising. The behavior should be explicitly opt-in: subprocess_run_for_testing(..., fail_on_error=True). (A minimal sketch of this option follows below.)
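
A minimal sketch of what the opt-in flag could look like (hypothetical signature and body, not the actual matplotlib helper, which takes more arguments):

import subprocess


def subprocess_run_for_testing(command, env=None, timeout=60,
                               fail_on_error=False):
    # Hypothetical sketch of option 2: by default a non-zero exit
    # propagates as CalledProcessError so the caller can assert on it;
    # with fail_on_error=True it becomes an immediate, readable failure.
    try:
        return subprocess.run(command, env=env, timeout=timeout,
                              check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as e:
        if not fail_on_error:
            raise
        import pytest
        pytest.fail(
            f"Subprocess failed with exit code {e.returncode}:\n\n"
            f"STDOUT:\n{e.stdout}\n\nSTDERR:\n{e.stderr}")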

@thomashopkins32 (Contributor, Author)

Looks like some tests catch the subprocess.CalledProcessError to validate the error outputs. Should these be refactored so that the actual warnings/errors are caught, or should this PR be closed?

For example, b43b568 is a change that could be made to make sure that the actual UserWarning is caught rather than shown through the stderr of the subprocess.
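
A generic illustration of the kind of refactor being described, with a hypothetical function standing in for the code under test:

import warnings

import pytest


def code_under_test():
    # Hypothetical stand-in for the behaviour the real test exercises.
    warnings.warn("something is deprecated", UserWarning)


def test_userwarning_is_asserted_in_process():
    # Assert on the warning directly in the test process instead of
    # looking for it in a subprocess's stderr.
    with pytest.warns(UserWarning, match="deprecated"):
        code_under_test()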

@thomashopkins32 (Contributor, Author)

Introduce a flag subprocess_run_for_testing(..., fail_on_error=...). This lets the call site choose the behavior. I'd set this to False by default, because a function automatically failing a test can be surprising. The behavior should be explicitly opt-in: subprocess_run_for_testing(..., fail_on_error=True).

I like this approach for keeping this PR small, and in case tests actually need to catch a subprocess error. The tests I have seen could be refactored in the future to set fail_on_error to True, similar to my previous comment.

@QuLogic (Member) commented Apr 10, 2025

For example, b43b568 is a change that could be made to make sure that the actual UserWarning is caught rather than shown through the stderr of the subprocess.

That change is invalid though; as the comment says, the test is specifically ignoring all other failures.

@thomashopkins32 (Contributor, Author)

@QuLogic Yeah, I realized that after it failed again; sorry about that. I agree the errors definitely need to propagate through. I just wanted to see if it was easy to feed the subprocess's stdout and stderr output back into the main process to make errors more visible.

@thomashopkins32 (Contributor, Author)

This may work better now. It should show the subprocess's stdout and stderr both when the subprocess fails and when it succeeds. This way, when someone uses log_cli=True, they can get the output of the subprocess in addition to that of the main process, even when tests pass.
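
A rough sketch of that idea (a hypothetical helper, not the exact PR diff), assuming the output is forwarded through logging so pytest's log_cli=True picks it up:

import logging
import subprocess

_log = logging.getLogger(__name__)


def run_and_log(command, timeout=60):
    # Hypothetical helper: always forward the child's captured stdout and
    # stderr to the parent's logger; on failure, still re-raise so the
    # test fails as before.
    try:
        proc = subprocess.run(command, timeout=timeout,
                              capture_output=True, text=True, check=True)
    except subprocess.CalledProcessError as e:
        _log.error("subprocess stdout:\n%s", e.stdout)
        _log.error("subprocess stderr:\n%s", e.stderr)
        raise
    _log.debug("subprocess stdout:\n%s", proc.stdout)
    _log.debug("subprocess stderr:\n%s", proc.stderr)
    return proc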

If you think this change is unnecessary feel free to close the PR.

@tacaswell (Member)

@thomashopkins32 Can you please squash this to one commit?

@tacaswell (Member)

Confirmed that this comes out nicely in the logs.

@tacaswell tacaswell added this to the v3.11.0 milestone Jul 30, 2025
@tacaswell (Member) left a comment

modulo squashing.

For full disclosure, Tom is a co-worker so scale my approval by whatever conflict of interest factor you want.

@thomashopkins32 force-pushed the capture-subprocess-test-errors branch from 9165149 to 126507f on July 30, 2025 18:44
@timhoffm (Member)

so scale my approval by whatever conflict of interest factor you want.

This is a low-risk change, so I'm happy to merge with any 1.x approval factor.

Thanks @Tom x 2 (to stay in the math terminology 😆)

@timhoffm timhoffm merged commit 9573103 into matplotlib:main Jul 30, 2025
34 of 37 checks passed
@github-project-automation github-project-automation bot moved this from Needs review to Waiting for author in First Time Contributors Jul 30, 2025
@QuLogic QuLogic moved this from Waiting for author to Merged in First Time Contributors Jul 30, 2025
@thomashopkins32 thomashopkins32 deleted the capture-subprocess-test-errors branch July 30, 2025 20:39