Message ID: 20230707132649.3841870-1-alex@linutronix.de
State: New
Series: python3: rework ptest handling
On 2023-07-07 09:26, Alexander Kanavin wrote:
> There are several issues handled here:
>
> - a number of tests were failing due to missing dependencies, which
> are added (thanks Trevor Gamblin for identifying the fixes)
>
> - worse, these failures were not reported due to a combination
> of regressed sed processing that relies on brittle, unstable verbose
> output of python's test module, and shell pipeline obscuring a non-zero
> exit code from python itself. This is changed to a direct handling
> of python's exit code into a single PASS or FAIL, and changing full verbosity (-v)
> into verbose output only for items that did fail (-W).
>
> - bonus improvement: python can run its tests in parallel, so take advantage
> of that, and indeed that makes a complete ptest run four times faster (just over
> five minutes). There's also extra memory required for that.

Thanks Alex. This is a huge improvement on the run time!

> Signed-off-by: Alexander Kanavin <alex@linutronix.de>
> ---
>  meta/recipes-core/images/core-image-ptest.bb   | 1 +
>  meta/recipes-devtools/python/python3/run-ptest | 2 +-
>  meta/recipes-devtools/python/python3_3.11.4.bb | 2 +-
>  3 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/meta/recipes-core/images/core-image-ptest.bb b/meta/recipes-core/images/core-image-ptest.bb
> index 90c26641ba3..563fdb4731f 100644
> --- a/meta/recipes-core/images/core-image-ptest.bb
> +++ b/meta/recipes-core/images/core-image-ptest.bb
> @@ -25,6 +25,7 @@ IMAGE_ROOTFS_EXTRA_SPACE:virtclass-mcextend-lttng-tools = "1524288"
>  # ptests need more memory than standard to avoid the OOM killer
>  QB_MEM = "-m 1024"
>  QB_MEM:virtclass-mcextend-lttng-tools = "-m 4096"
> +QB_MEM:virtclass-mcextend-python3 = "-m 2048"
>  QB_MEM:virtclass-mcextend-python3-cryptography = "-m 4096"
>
>  TEST_SUITES = "ping ssh parselogs ptest"
>
> diff --git a/meta/recipes-devtools/python/python3/run-ptest b/meta/recipes-devtools/python/python3/run-ptest
> index 05396e91abb..4271a7c94ea 100644
> --- a/meta/recipes-devtools/python/python3/run-ptest
> +++ b/meta/recipes-devtools/python/python3/run-ptest
> @@ -1,3 +1,3 @@
>  #!/bin/sh
>
> -SETUPTOOLS_USE_DISTUTILS=nonlocal python3 -m test -v | sed -u -e '/\.\.\. ok/ s/^/PASS: /g' -r -e '/\.\.\. (ERROR|FAIL)/ s/^/FAIL: /g' -e '/\.\.\. skipped/ s/^/SKIP: /g' -e 's/ \.\.\. ok//g' -e 's/ \.\.\. ERROR//g' -e 's/ \.\.\. FAIL//g' -e 's/ \.\.\. skipped//g'
> +SETUPTOOLS_USE_DISTUTILS=nonlocal python3 -m test -W -j 4 && echo "PASS: python3" || echo "FAIL: python3"
>
> diff --git a/meta/recipes-devtools/python/python3_3.11.4.bb b/meta/recipes-devtools/python/python3_3.11.4.bb
> index 6b074c48cf2..bc03986e175 100644
> --- a/meta/recipes-devtools/python/python3_3.11.4.bb
> +++ b/meta/recipes-devtools/python/python3_3.11.4.bb
> @@ -426,7 +426,7 @@ FILES:${PN}-man = "${datadir}/man"
>  # See https://bugs.python.org/issue18748 and https://bugs.python.org/issue37395
>  RDEPENDS:libpython3:append:libc-glibc = " libgcc"
>  RDEPENDS:${PN}-ctypes:append:libc-glibc = " ${MLPREFIX}ldconfig"
> -RDEPENDS:${PN}-ptest = "${PN}-modules ${PN}-tests ${PN}-dev unzip bzip2 libgcc tzdata coreutils sed"
> +RDEPENDS:${PN}-ptest = "${PN}-modules ${PN}-tests ${PN}-dev ${PN}-cgitb ${PN}-zipapp unzip bzip2 libgcc tzdata coreutils sed gcc g++ binutils"
>  RDEPENDS:${PN}-ptest:append:libc-glibc = " locale-base-fr-fr locale-base-en-us locale-base-tr-tr locale-base-de-de"
>  RDEPENDS:${PN}-tkinter += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${MLPREFIX}tk ${MLPREFIX}tk-lib', '', d)}"
>  RDEPENDS:${PN}-idle += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${PN}-tkinter ${MLPREFIX}tcl', '', d)}"
>
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> View/Reply Online (#184011): https://lists.openembedded.org/g/openembedded-core/message/184011
> Mute This Topic: https://lists.openembedded.org/mt/100005852/7611679
> Group Owner: openembedded-core+owner@lists.openembedded.org
> Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub [tgamblin@baylibre.com]
> -=-=-=-=-=-=-=-=-=-=-=-
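The exit-code point in the commit message is easy to demonstrate. A minimal sketch, with a hypothetical `fake_tests` function standing in for a failing `python3 -m test` run: in a plain POSIX pipeline, `$?` reflects only the last command (sed), so the test runner's failure is invisible, while acting directly on the command's status, as the patch does, preserves it.

```shell
#!/bin/sh
# Hypothetical stand-in for a failing 'python3 -m test' run, for illustration.
fake_tests() { echo "test_foo ... FAIL"; return 1; }

# In a plain pipeline, $? is sed's exit status, so the failure is lost:
fake_tests | sed -e 's/^/FAIL: /'
echo "pipeline status: $?"    # prints 0 even though fake_tests returned 1

# Acting on the command's own exit status, as the new run-ptest does, keeps it:
fake_tests > /dev/null && echo "PASS: python3" || echo "FAIL: python3"
```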
On 7 Jul 2023, at 14:26, Alexander Kanavin via lists.openembedded.org <alex.kanavin=gmail.com@lists.openembedded.org> wrote:
>
> - worse, these failures were not reported due to a combination
> of regressed sed processing that relies on brittle, unstable verbose
> output of python's test module, and shell pipeline obscuring a non-zero
> exit code from python itself. This is changed to a direct handling
> of python's exit code into a single PASS or FAIL, and changing full verbosity (-v)
> into verbose output only for items that did fail (-W).

My only concern with this is that we’re missing the breakdown of tests so resulttool can’t report individual regressions.

Ross
On Fri, 7 Jul 2023 at 16:59, Ross Burton <Ross.Burton@arm.com> wrote:
> My only concern with this is that we’re missing the breakdown of tests so resulttool can’t report individual regressions.
Python prints at the end:
3 tests failed:
test_cgitb test_cppext test_zipapp
Perhaps the module could be patched (no sed please :) to also print
those as FAIL: lines?
Alex
On Fri, 7 Jul 2023 at 17:23, Alexander Kanavin via lists.openembedded.org <alex.kanavin=gmail.com@lists.openembedded.org> wrote:
> Python prints at the end:
>
> 3 tests failed:
> test_cgitb test_cppext test_zipapp
>
> Perhaps the module could be patched (no sed please :) to also print
> those as FAIL: lines?

A better but more involved fix is to have '-m test' write out a JUnit XML file with the results (it supports that with a command-line option), then process it into PASS/FAIL printouts. I'm about to head out to see Depeche Mode and then two weeks of a cabin-by-the-lake thing, so for now I'm out of time for this :)

Alex
On 2023-07-07 11:32, Alexander Kanavin wrote:
> On Fri, 7 Jul 2023 at 17:23, Alexander Kanavin via
> lists.openembedded.org <alex.kanavin=gmail.com@lists.openembedded.org>
> wrote:
>> Python prints at the end:
>>
>> 3 tests failed:
>> test_cgitb test_cppext test_zipapp
>>
>> Perhaps the module could be patched (no sed please :) to also print
>> those as FAIL: lines?
> A better but more involved fix is to have '-m test' write out a junit
> xml with results (it supports that with a command line option), then
> process it into PASS/FAIL printouts. I'm about to head out to see
> Depeche Mode and then two weeks of cabin by the lake thing, so for now
> I'm out of time for this :)

A middle ground that also seems to work is to take this patch but leave the pipe to sed in. We still get fixed ptests and a much faster runtime that way. What are everyone's thoughts?

> Alex
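If the middle ground of keeping the sed pipe goes in, the pipeline-hides-the-exit-code problem could still be fixed at the same time. A hedged sketch (not tested on target; the temp-file dance is only one way to smuggle the status out of the pipeline, used here because a minimal /bin/sh may not support 'set -o pipefail'):

```shell
#!/bin/sh
# Sketch of a run-ptest that keeps the per-test sed breakdown for resulttool
# but still reports python's own exit status as an overall PASS/FAIL.
rc_file=$(mktemp)
{ SETUPTOOLS_USE_DISTUTILS=nonlocal python3 -m test -v -j 4; echo $? > "$rc_file"; } \
    | sed -u -r -e '/\.\.\. ok/ s/^/PASS: /' \
                -e '/\.\.\. (ERROR|FAIL)/ s/^/FAIL: /' \
                -e '/\.\.\. skipped/ s/^/SKIP: /'
read rc < "$rc_file"
rm -f "$rc_file"
[ "$rc" -eq 0 ] && echo "PASS: python3" || echo "FAIL: python3"
```

The grouped command writes the test runner's status to a file before sed consumes its stdout, so the final summary line reflects python's real result instead of sed's.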
On 2023-07-07 15:01, Trevor Gamblin via lists.openembedded.org wrote:
> On 2023-07-07 11:32, Alexander Kanavin wrote:
>> On Fri, 7 Jul 2023 at 17:23, Alexander Kanavin via
>> lists.openembedded.org <alex.kanavin=gmail.com@lists.openembedded.org>
>> wrote:
>>> Python prints at the end:
>>>
>>> 3 tests failed:
>>> test_cgitb test_cppext test_zipapp
>>>
>>> Perhaps the module could be patched (no sed please :) to also print
>>> those as FAIL: lines?
>> A better but more involved fix is to have '-m test' write out a junit
>> xml with results (it supports that with a command line option), then
>> process it into PASS/FAIL printouts. I'm about to head out to see
>> Depeche Mode and then two weeks of cabin by the lake thing, so for now
>> I'm out of time for this :)
> A middle ground that also seems to work is to take this patch but
> leave the pipe to sed in. We still get fixed ptests and a much faster
> runtime that way. What are everyone's thoughts?

FYI I didn't see the junit output option anywhere. Are you referring to an extra module?

>> Alex
On Fri, 7 Jul 2023 at 21:46, Trevor Gamblin <tgamblin@baylibre.com> wrote: > FYI I didn't see the junit output option anywhere. Are you referring to > an extra module? If you run 'python3 -m test --help' you should see it: --junit-xml FILENAME writes JUnit-style XML results to the specified file Alex
On 7 Jul 2023, at 23:43, Alexander Kanavin <alex.kanavin@gmail.com> wrote:
>
> On Fri, 7 Jul 2023 at 21:46, Trevor Gamblin <tgamblin@baylibre.com> wrote:
>> FYI I didn't see the junit output option anywhere. Are you referring to
>> an extra module?
>
> If you run 'python3 -m test --help' you should see it:
>
>   --junit-xml FILENAME  writes JUnit-style XML results to the specified file

I like this. JUnit XML is fairly common, and a well-written conversion tool can be used by multiple recipes.

I offer https://gitlab.com/rossburton/python-unittest-automake-output as a holding location once it’s been proven to work.

Ross
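To sketch what such a conversion tool might look like: a hypothetical Python snippet, assuming the XML uses the conventional JUnit `<failure>`/`<error>`/`<skipped>` child elements (the real regrtest output may embed failure details differently, so this is an illustration of the approach, not a drop-in tool):

```python
#!/usr/bin/env python3
# Hypothetical sketch: convert a JUnit-style XML file (e.g. produced by
# 'python3 -m test --junit-xml results.xml') into the PASS:/FAIL:/SKIP:
# lines that ptest-runner and resulttool expect.
import xml.etree.ElementTree as ET


def report(xml_path):
    """Return one PASS:/FAIL:/SKIP: line per <testcase> element."""
    lines = []
    root = ET.parse(xml_path).getroot()
    for case in root.iter("testcase"):
        name = case.get("name")
        # Conventional JUnit XML marks outcomes with child elements.
        if case.find("failure") is not None or case.find("error") is not None:
            lines.append("FAIL: " + name)
        elif case.find("skipped") is not None:
            lines.append("SKIP: " + name)
        else:
            lines.append("PASS: " + name)
    return lines


if __name__ == "__main__":
    import sys
    print("\n".join(report(sys.argv[1])))
```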
On 2023-07-10 06:10, Ross Burton wrote:
> On 7 Jul 2023, at 23:43, Alexander Kanavin <alex.kanavin@gmail.com> wrote:
>> On Fri, 7 Jul 2023 at 21:46, Trevor Gamblin <tgamblin@baylibre.com> wrote:
>>> FYI I didn't see the junit output option anywhere. Are you referring to
>>> an extra module?
>> If you run 'python3 -m test --help' you should see it:
>>
>>   --junit-xml FILENAME  writes JUnit-style XML results to the specified file
> I like this. JUnit XML is fairly common and a well-written conversion tool can be used by multiple recipes.
>
> I offer https://gitlab.com/rossburton/python-unittest-automake-output as a holding location once it’s been proven to work.

It looks like there are some issues with the test code itself. Running "python3 -m test -W -j 4 --junit-xml TESTRESULTS" to try it out, I see things like this at the top of the output file:

|<testsuites tests="39396" errors="0" failures="0"><testsuite start="2023-07-10 16:02:38.119463" tests="8" errors="0" failures="0"><testcase name="test.test_opcodes.OpcodeTest.test_compare_function_objects"
|status="run" result="completed" time="0.000520" /><testcase name="test.test_opcodes.OpcodeTest.test_default_anno
|  File "/usr/lib/python3.11/unittest/case.py", line 57, in testPartExecutor
|    yield
|  File "/usr/lib/python3.11/unittest/case.py", line 623, in run
|    self._callTestMethod(testMethod)
|  File "/usr/lib/python3.11/unittest/case.py", line 579, in _callTestMethod
|    if method() is not None:
|       ^^^^^^^^
|  File "/usr/lib/python3.11/test/test_descr.py", line 1790, in test_bad_new
|    self.assertRaises(TypeError, C)
|  File "/usr/lib/python3.11/unittest/case.py", line 766, in assertRaises
|    return context.handle('assertRaises', args, kwargs)
|           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|  File "/usr/lib/python3.11/unittest/case.py", line 236, in handle
|    with self:
|  File "/usr/lib/python3.11/unittest/case.py", line 259, in __exit__
|    self._raiseFailure("{} not raised by {}".format(exc_name,
|  File "/usr/lib/python3.11/unittest/case.py", line 199, in _raiseFailure
|    raise self.test_case.failureException(msg)
|AssertionError: TypeError not raised by C
|</output></testcase><testcase name="test.test_descr.ClassPropertiesAndMethods.test_basic_inheritance" status="run" result="completed" time="0.008526" /><testcase
|name="test.test_descr.ClassPropertiesAndMethods.test_binary_operator_override" status="run" result="completed" time="0.000292" /><testcase name="test.test_d
|  File "/usr/lib/python3.11/unittest/case.py", line 57, in testPartExecutor
|    yield
|  File "/usr/lib/python3.11/unittest/case.py", line 623, in run
|    self._callTestMethod(testMethod)
|  File "/usr/lib/python3.11/unittest/case.py", line 579, in _callTestMethod
|    if method() is not None:
|       ^^^^^^^^
|  File "/usr/lib/python3.11/test/test_descr.py", line 1841, in test_restored_object_new
|    b = B(3)
|        ^^^^
|TypeError: object.__new__() takes exactly one argument (the type to instantiate)

There are numerous others as well. I compiled 3.11.4 on my F38 host and saw the same errors, which seems to suggest there's a problem with that part of the source. I wonder if we should continue using sed for now, but add the missing dependencies for test_cppext and the parallelized testing option. Thoughts?

> Ross
diff --git a/meta/recipes-core/images/core-image-ptest.bb b/meta/recipes-core/images/core-image-ptest.bb
index 90c26641ba3..563fdb4731f 100644
--- a/meta/recipes-core/images/core-image-ptest.bb
+++ b/meta/recipes-core/images/core-image-ptest.bb
@@ -25,6 +25,7 @@ IMAGE_ROOTFS_EXTRA_SPACE:virtclass-mcextend-lttng-tools = "1524288"
 # ptests need more memory than standard to avoid the OOM killer
 QB_MEM = "-m 1024"
 QB_MEM:virtclass-mcextend-lttng-tools = "-m 4096"
+QB_MEM:virtclass-mcextend-python3 = "-m 2048"
 QB_MEM:virtclass-mcextend-python3-cryptography = "-m 4096"
 
 TEST_SUITES = "ping ssh parselogs ptest"
diff --git a/meta/recipes-devtools/python/python3/run-ptest b/meta/recipes-devtools/python/python3/run-ptest
index 05396e91abb..4271a7c94ea 100644
--- a/meta/recipes-devtools/python/python3/run-ptest
+++ b/meta/recipes-devtools/python/python3/run-ptest
@@ -1,3 +1,3 @@
 #!/bin/sh
 
-SETUPTOOLS_USE_DISTUTILS=nonlocal python3 -m test -v | sed -u -e '/\.\.\. ok/ s/^/PASS: /g' -r -e '/\.\.\. (ERROR|FAIL)/ s/^/FAIL: /g' -e '/\.\.\. skipped/ s/^/SKIP: /g' -e 's/ \.\.\. ok//g' -e 's/ \.\.\. ERROR//g' -e 's/ \.\.\. FAIL//g' -e 's/ \.\.\. skipped//g'
+SETUPTOOLS_USE_DISTUTILS=nonlocal python3 -m test -W -j 4 && echo "PASS: python3" || echo "FAIL: python3"
diff --git a/meta/recipes-devtools/python/python3_3.11.4.bb b/meta/recipes-devtools/python/python3_3.11.4.bb
index 6b074c48cf2..bc03986e175 100644
--- a/meta/recipes-devtools/python/python3_3.11.4.bb
+++ b/meta/recipes-devtools/python/python3_3.11.4.bb
@@ -426,7 +426,7 @@ FILES:${PN}-man = "${datadir}/man"
 # See https://bugs.python.org/issue18748 and https://bugs.python.org/issue37395
 RDEPENDS:libpython3:append:libc-glibc = " libgcc"
 RDEPENDS:${PN}-ctypes:append:libc-glibc = " ${MLPREFIX}ldconfig"
-RDEPENDS:${PN}-ptest = "${PN}-modules ${PN}-tests ${PN}-dev unzip bzip2 libgcc tzdata coreutils sed"
+RDEPENDS:${PN}-ptest = "${PN}-modules ${PN}-tests ${PN}-dev ${PN}-cgitb ${PN}-zipapp unzip bzip2 libgcc tzdata coreutils sed gcc g++ binutils"
 RDEPENDS:${PN}-ptest:append:libc-glibc = " locale-base-fr-fr locale-base-en-us locale-base-tr-tr locale-base-de-de"
 RDEPENDS:${PN}-tkinter += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${MLPREFIX}tk ${MLPREFIX}tk-lib', '', d)}"
 RDEPENDS:${PN}-idle += "${@bb.utils.contains('PACKAGECONFIG', 'tk', '${PN}-tkinter ${MLPREFIX}tcl', '', d)}"
There are several issues handled here:

- a number of tests were failing due to missing dependencies, which
  are added (thanks Trevor Gamblin for identifying the fixes)

- worse, these failures were not reported due to a combination of regressed
  sed processing that relies on brittle, unstable verbose output of python's
  test module, and a shell pipeline obscuring a non-zero exit code from
  python itself. This is changed to handling python's exit code directly as
  a single PASS or FAIL, and changing full verbosity (-v) into verbose
  output only for items that did fail (-W).

- bonus improvement: python can run its tests in parallel, so take advantage
  of that; indeed that makes a complete ptest run four times faster (just
  over five minutes). Extra memory is required for that.

Signed-off-by: Alexander Kanavin <alex@linutronix.de>
---
 meta/recipes-core/images/core-image-ptest.bb   | 1 +
 meta/recipes-devtools/python/python3/run-ptest | 2 +-
 meta/recipes-devtools/python/python3_3.11.4.bb | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)