[0/2] image specific configuration with oeqa runtime tests

Message ID 20221117071223.107064-1-mikko.rapeli@linaro.org

Message

Mikko Rapeli Nov. 17, 2022, 7:12 a.m. UTC
Many runtime tests need customization for different machines and
images. Currently some tests, like parselogs.py, hard-code
machine-specific exceptions into the test itself. These
machine-specific exceptions fit better as image-specific ones, since a
single machine config can generate multiple images which behave
differently. Thus, create a "testimage_data.json" file format which
image recipes can deploy. Tests like parselogs.py then use it to find
the image-specific exception list.

The same approach would fit other runtime tests too. For example, the
systemd tests could include a test case which checks that an
image-specific list of services is running.

I don't know how this data storage would be used with SDK or selftests,
but maybe it could work there too with some small tweaks.
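
As an illustration, a parselogs entry in such a file could look
roughly like this (the schema here is only a sketch; the actual keys
are whatever the deploying image recipe and the consuming test agree
on):

{
    "parselogs": {
        "ignore_errors": [
            "dma timeout",
            "Failed to load module vesa"
        ]
    }
}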

Mikko Rapeli (2):
  oeqa: add utils/data.py with get_data() function
  oeqa parselogs.py: use get_data() to fetch image specific error list

 meta/lib/oeqa/runtime/cases/parselogs.py | 17 +++++++---
 meta/lib/oeqa/utils/data.py              | 41 ++++++++++++++++++++++++
 2 files changed, 54 insertions(+), 4 deletions(-)
 create mode 100644 meta/lib/oeqa/utils/data.py

Comments

Alexandre Belloni Nov. 17, 2022, 2:22 p.m. UTC | #1
Hello,

With these two patches, I have multiple new warnings on the autobuilders
for qemuarm and qemuarm-alt:

https://autobuilder.yoctoproject.org/typhoon/#/builders/53/builds/6185/steps/13/logs/stdio
https://autobuilder.yoctoproject.org/typhoon/#/builders/110/builds/5064/steps/12/logs/stdio

On 17/11/2022 09:12:21+0200, Mikko Rapeli wrote:
> [...]

Mikko Rapeli Nov. 17, 2022, 2:28 p.m. UTC | #2
Hi,

On Thu, Nov 17, 2022 at 03:22:20PM +0100, Alexandre Belloni wrote:
> Hello,
> 
> With these two patches, I have multiple new warnings on the autobuilders
> for qemuarm and qemuarm-alt:
> 
> https://autobuilder.yoctoproject.org/typhoon/#/builders/53/builds/6185/steps/13/logs/stdio
> https://autobuilder.yoctoproject.org/typhoon/#/builders/110/builds/5064/steps/12/logs/stdio

WARNING: core-image-sato-sdk-1.0-r0 do_testimage: No ignore list found for this machine and no valid testimage_data.json, using defaults

I can change these to be info level messages. Previously they appeared
only in the test output log, not in the bitbake output. I think they
need to be in both.

Cheers,

-Mikko

Richard Purdie Nov. 17, 2022, 3:17 p.m. UTC | #3
On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> [...]

This patch looks like it is one side of the equation, i.e. importing
the data into the tests. How does the data get into the deploy
directory in the first place? I assume there are other patches which do
that?

We have a bit of contention with two approaches to data management in
OEQA. One is where the runtime tests are directly run against an image,
in which case the datastore is available. You could therefore have
markup in the recipe as normal variables and access them directly in
the tests.

The second is the "testexport" approach where the tests are run without
the main metadata. I know Ross and I would like to see testexport
dropped as it complicates things and is a pain.

This new file "feels" a lot like more extensions in the testexport
direction and I'm not sure we need to do that. Could we handle this
with more markup in the image recipe?
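
For example (a sketch only; TESTIMAGE_PARSELOGS_IGNORE is a made-up
variable name, nothing that exists today):

TESTIMAGE_PARSELOGS_IGNORE = "dma timeout|usbhid: probe of "

which a test could then read from the exported test data, roughly:

ignore = self.td.get('TESTIMAGE_PARSELOGS_IGNORE', '').split('|')

since the runtime tests already see datastore variables through
self.td.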

Cheers,

Richard
Mikko Rapeli Nov. 17, 2022, 3:39 p.m. UTC | #4
Hi,

On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > [...]
> 
> This patch looks like it is one side of the equation, i.e. importing
> the data into the tests. How does the data get into the deploy
> directory in the first place? I assume there are other patches which do
> that?

Patches in other layers do that, yes.

> We have a bit of contention with two approaches to data management in
> OEQA. One is where the runtime tests are directly run against an image,
> in which case the datastore is available. You could therefore have
> markup in the recipe as normal variables and access them directly in
> the tests.

My use case is running tests right after build, but I would like to export
them to execute later as well.

> The second is the "testexport" approach where the tests are run without
> the main metadata. I know Ross and I would like to see testexport
> dropped as it complicates things and is a pain.
> 
> This new file "feels" a lot like more extensions in the testexport
> direction and I'm not sure we need to do that. Could we handle this
> with more markup in the image recipe?

For simple variables this would do but how about a long list of strings
like poky/meta/lib/oeqa/runtime/cases/parselogs.py:

common_errors = [
    "(WW) warning, (EE) error, (NI) not implemented, (??) unknown.",
    "dma timeout",
    "can\'t add hid device:",
    "usbhid: probe of ",
    "_OSC failed (AE_ERROR)",
    "_OSC failed (AE_SUPPORT)",
    "AE_ALREADY_EXISTS",
    "ACPI _OSC request failed (AE_SUPPORT)",
    "can\'t disable ASPM",
    "Failed to load module \"vesa\"",
    "Failed to load module vesa",
    "Failed to load module \"modesetting\"",
    "Failed to load module modesetting",
    "Failed to load module \"glx\"",
    "Failed to load module \"fbdev\"",
    "Failed to load module fbdev",
    "Failed to load module glx"
]

Embed json into a bitbake variable? Or embed directly as python code?
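
To sketch the variable option (hypothetical variable name and
separator convention):

TESTIMAGE_PARSELOGS_IGNORE = "dma timeout|usbhid: probe of |can't disable ASPM"

Any flat separator like '|' breaks as soon as an entry contains the
separator character, and entries with embedded quotes get ugly fast.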

Cheers,

-Mikko
Richard Purdie Nov. 17, 2022, 4:57 p.m. UTC | #5
On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> Hi,
> 
> On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > [...]
> > 
> > This patch looks like it is one side of the equation, i.e. importing
> > the data into the tests. How does the data get into the deploy
> > directory in the first place? I assume there are other patches which do
> > that?
> 
> Patches in other layers do that, yes.
> 
> > We have a bit of contention with two approaches to data management in
> > OEQA. One is where the runtime tests are directly run against an image,
> > in which case the datastore is available. You could therefore have
> > markup in the recipe as normal variables and access them directly in
> > the tests.
> 
> My use case is running tests right after build, but I would like to export
> them to execute later as well.

When you execute later, are you going to use testexport or will the
metadata still be available? As I mentioned, removing testexport would
be desirable for a number of reasons but I suspect there are people who
might want it.
> 

> > The second is the "testexport" approach where the tests are run without
> > the main metadata. I know Ross and I would like to see testexport
> > dropped as it complicates things and is a pain.
> > 
> > This new file "feels" a lot like more extensions in the testexport
> > direction and I'm not sure we need to do that. Could we handle this
> > with more markup in the image recipe?
> 
> For simple variables this would do but how about a long list of strings
> like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> 
> common_errors = [ ... ]
> 
> Embed json into a bitbake variable? Or embed directly as python code?

I've wondered if we could add some new syntax to bitbake to support
this somehow, does anyone have any ideas to propose?

I'd wondered about both python data and/or json format (at which point
someone will want yaml :/).

Cheers,

Richard
Mikko Rapeli Nov. 18, 2022, 2:32 p.m. UTC | #6
Hi,

On Thu, Nov 17, 2022 at 04:57:36PM +0000, Richard Purdie wrote:
> On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> > Hi,
> > 
> > On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > > [...]
> > > 
> > > This patch looks like it is one side of the equation, i.e. importing
> > > the data into the tests. How does the data get into the deploy
> > > directory in the first place? I assume there are other patches which do
> > > that?
> > 
> > Patches in other layers do that, yes.

Note to self and anyone else interested in this, it is rather
tricky to get SRC_URI and do_deploy() working in image recipes.
Something like this will do it though:

SUMMARY = "Test image"
LICENSE = "MIT"

SRC_URI = "file://testimage_data.json"

inherit deploy

# re-enable SRC_URI handling, it's disabled in image.bbclass
python __anonymous() {
    d.delVarFlag("do_fetch", "noexec")
    d.delVarFlag("do_unpack", "noexec")
}
...
do_deploy() {
    # to customise oeqa tests
    mkdir -p "${DEPLOYDIR}"
    install "${WORKDIR}/testimage_data.json" "${DEPLOYDIR}"
}
# the "after do_unpack" dependency ensures do_fetch and do_unpack run; image.bbclass disables them by default.
addtask deploy before do_build after do_rootfs do_unpack
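
With deploy.bbclass inherited, whatever do_deploy puts into DEPLOYDIR
ends up in DEPLOY_DIR_IMAGE, so the json lands next to the image
artifacts where the tests can find it.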

> > > We have a bit of contention with two approaches to data management in
> > > OEQA. One is where the runtime tests are directly run against an image,
> > > in which case the datastore is available. You could therefore have
> > > markup in the recipe as normal variables and access them directly in
> > > the tests.
> > 
> > My use case is running tests right after build, but I would like to export
> > them to execute later as well.
> 
> When you execute later, are you going to use testexport or will the
> metadata still be available? As I mentioned, removing testexport would
> be desirable for a number of reasons but I suspect there are people who
> might want it.

I was planning to use testexport and also make sure all images and other
things needed for running tests are in the output of a build.

> > > The second is the "testexport" approach where the tests are run without
> > > the main metadata. I know Ross and I would like to see testexport
> > > dropped as it complicates things and is a pain.
> > > 
> > > This new file "feels" a lot like more extensions in the testexport
> > > direction and I'm not sure we need to do that. Could we handle this
> > > with more markup in the image recipe?
> > 
> > For simple variables this would do but how about a long list of strings
> > like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> > 
> > common_errors = [
> >     "(WW) warning, (EE) error, (NI) not implemented, (??) unknown.",
> >     "dma timeout",
> >     "can\'t add hid device:",
> >     "usbhid: probe of ",
> >     "_OSC failed (AE_ERROR)",
> >     "_OSC failed (AE_SUPPORT)",
> >     "AE_ALREADY_EXISTS",
> >     "ACPI _OSC request failed (AE_SUPPORT)",
> >     "can\'t disable ASPM",
> >     "Failed to load module \"vesa\"",
> >     "Failed to load module vesa",
> >     "Failed to load module \"modesetting\"",
> >     "Failed to load module modesetting",
> >     "Failed to load module \"glx\"",
> >     "Failed to load module \"fbdev\"",
> >     "Failed to load module fbdev",
> >     "Failed to load module glx"
> > ]
> > 
> > Embed json into a bitbake variable? Or embed directly as python code?
> 
> I've wondered if we could add some new syntax to bitbake to support
> this somehow, does anyone have any ideas to propose?
> 
> I'd wondered about both python data and/or json format (at which point
> someone will want yaml :/).

This sounds pretty far-fetched currently. json files are quite simple
to work with in python so I'd just stick to this. If this approach is
ok I could update the testimage.bbclass documentation with these
details. I really want to re-use tests and infrastructure for running
them, but I need to customize various details.
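
The consuming side stays small either way; roughly something like this
(illustrative only: get_data() from the first patch is the real
helper, and its exact signature may differ):

import json
import os

def load_testimage_data(deploy_dir):
    # hypothetical helper: read the deployed json and fall back to
    # empty defaults when the image ships no customization
    path = os.path.join(deploy_dir, 'testimage_data.json')
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)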

Cheers,

-Mikko
Richard Purdie Nov. 18, 2022, 3:04 p.m. UTC | #7
On Fri, 2022-11-18 at 16:32 +0200, Mikko Rapeli wrote:
> Hi,
> 
> On Thu, Nov 17, 2022 at 04:57:36PM +0000, Richard Purdie wrote:
> > On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> > > Hi,
> > > 
> > > On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > > > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > > > [...]
> > > > 
> > > > This patch looks like it is one side of the equation, i.e. importing
> > > > the data into the tests. How does the data get into the deploy
> > > > directory in the first place? I assume there are other patches which do
> > > > that?
> > > 
> > > Patches in other layers do that, yes.
> 
> Note to self and anyone else interested in this, it is rather
> tricky to get SRC_URI and do_deploy() working in image recipes.
> Something like this will do it though:
> 
> SUMMARY = "Test image"
> LICENSE = "MIT"
> 
> SRC_URI = "file://testimage_data.json"
> 
> inherit deploy
> 
> # re-enable SRC_URI handling, it's disabled in image.bbclass
> python __anonymous() {
>     d.delVarFlag("do_fetch", "noexec")
>     d.delVarFlag("do_unpack", "noexec")
> }
> ...
> do_deploy() {
>     # to customise oeqa tests
>     mkdir -p "${DEPLOYDIR}"
>     install "${WORKDIR}/testimage_data.json" "${DEPLOYDIR}"
> }
> # do_unpack needed to run do_fetch and do_unpack which are disabled by image.bbclass.
> addtask deploy before do_build after do_rootfs do_unpack

Since the image code doesn't need SRC_URI and has its own handling of
deployment, we didn't really think anyone should need to do that :/.

> > > 
> > When you execute later, are you going to use testexport or will the
> > metadata still be available? As I mentioned, removing testexport would
> > be desirable for a number of reasons but I suspect there are people who
> > might want it.
> 
> I was planning to use testexport and also make sure all images and other
> things needed for running tests are in the output of a build.

I guess that means if we were to propose patches removing testexport
functionality you'd be very much opposed then? :(


> 
> > > > The second is the "testexport" approach where the tests are run without
> > > > the main metadata. I know Ross and I would like to see testexport
> > > > dropped as it complicates things and is a pain.
> > > > 
> > > > This new file "feels" a lot like more extensions in the testexport
> > > > direction and I'm not sure we need to do that. Could we handle this
> > > > with more markup in the image recipe?
> > > 
> > > For simple variables this would do but how about a long list of strings
> > > like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> > > 
> > > common_errors = [ ... ]
> > > 
> > > Embed json into a bitbake variable? Or embed directly as python code?
> > 
> > I've wondered if we could add some new syntax to bitbake to support
> > this somehow, does anyone have any ideas to propose?
> > 
> > I'd wondered about both python data and/or json format (at which point
> > someone will want yaml :/).
> 
> > This sounds pretty far-fetched currently.

Not really. If we can find a syntax that works, the rest of the code in
bitbake can support that fairly easily. The datastore already handles
objects of different types.
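
(From an anonymous python block the datastore will already hold a list
today, e.g.:

d.setVar('COMMON_ERRORS', ['dma timeout', 'usbhid: probe of '])

what's missing is .bb/.conf parser syntax for writing that down.)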

> json files are quite simple to work with in python so I'd just stick to 
> this. If this approach is ok I could update the testimage.bbclass 
> documentation with these details.
> I really want to re-use tests and infrastructure for running them but I need
> to customize various details.

My concern is having multiple different file formats and data streams.
It means we no longer have one definitive data mechanism but two, and
then the argument against people also shipping yaml and other files
with recipes becomes difficult. We'd also have people wanting to query
from one to the other eventually.

The real issue here seems to be that our data format (.bb) is
struggling with some forms of data. I've therefore a preference for
fixing that rather than encouraging working around it.

Cheers,

Richard
Mikko Rapeli Nov. 18, 2022, 3:57 p.m. UTC | #8
On Fri, Nov 18, 2022 at 03:04:29PM +0000, Richard Purdie wrote:
> On Fri, 2022-11-18 at 16:32 +0200, Mikko Rapeli wrote:
> > Hi,
> > 
> > On Thu, Nov 17, 2022 at 04:57:36PM +0000, Richard Purdie wrote:
> > > On Thu, 2022-11-17 at 17:39 +0200, Mikko Rapeli wrote:
> > > > Hi,
> > > > 
> > > > On Thu, Nov 17, 2022 at 03:17:43PM +0000, Richard Purdie wrote:
> > > > > On Thu, 2022-11-17 at 09:12 +0200, Mikko Rapeli wrote:
> > > > > > [...]
> > > > > 
> > > > > This patch looks like it is one side of the equation, i.e. importing
> > > > > the data into the tests. How does the data get into the deploy
> > > > > directory in the first place? I assume there are other patches which do
> > > > > that?
> > > > 
> > > > Patches in other layers do that, yes.
> > 
> > Note to self and anyone else interested in this, it is rather
> > tricky to get SRC_URI and do_deploy() working in image recipes.
> > Something like this will do it though:
> > 
> > SUMMARY = "Test image"
> > LICENSE = "MIT"
> > 
> > SRC_URI = "file://testimage_data.json"
> > 
> > inherit deploy
> > 
> > # re-enable SRC_URI handling, it's disabled in image.bbclass
> > python __anonymous() {
> >     d.delVarFlag("do_fetch", "noexec")
> >     d.delVarFlag("do_unpack", "noexec")
> > }
> > ...
> > do_deploy() {
> >     # to customise oeqa tests
> >     mkdir -p "${DEPLOYDIR}"
> >     install "${WORKDIR}/testimage_data.json" "${DEPLOYDIR}"
> > }
> > # do_unpack needed to run do_fetch and do_unpack which are disabled by image.bbclass.
> > addtask deploy before do_build after do_rootfs do_unpack
> 
> Since the image code doesn't need SRC_URI and has its own handling of
> deployment, we didn't really think anyone should need to do that :/.

Yep, but it can be done. Images can deploy files from SRC_URI too.

> > > > 
> > > When you execute later, are you going to use testexport or will the
> > > metadata still be available? As I mentioned, removing testexport would
> > > be desirable for a number of reasons but I suspect there are people who
> > > might want it.
> > 
> > I was planning to use testexport and also make sure all images and other
> > things needed for running tests are in the output of a build.
> 
> I guess that means if we were to propose patches removing testexport
> functionality you'd be very much opposed then? :(

An alternative would be nice. I like that the build environment
provides the full infrastructure for running tests elsewhere too. But
if that is too tricky, then running tests will only work on build
machines after the build.

> > > > > The second is the "testexport" approach where the tests are run without
> > > > > the main metadata. I know Ross and I would like to see testexport
> > > > > dropped as it complicates things and is a pain.
> > > > > 
> > > > > This new file "feels" a lot like more extensions in the testexport
> > > > > direction and I'm not sure we need to do that. Could we handle this
> > > > > with more markup in the image recipe?
> > > > 
> > > > For simple variables this would do but how about a long list of strings
> > > > like poky/meta/lib/oeqa/runtime/cases/parselogs.py:
> > > > 
> > > > common_errors = [ ... ]
> > > > 
> > > > Embed json into a bitbake variable? Or embed directly as python code?
> > > 
> > > I've wondered if we could add some new syntax to bitbake to support
> > > this somehow, does anyone have any ideas to propose?
> > > 
> > > I'd wondered about both python data and/or json format (at which point
> > > someone will want yaml :/).
> > 
> > This sounds pretty far-fetched currently.
> 
> Not really. If we can find a syntax that works, the rest of the code in
> bitbake can support that fairly easily. The datastore already handles
> objects of different types.
> 
> > json files are quite simple to work with in python so I'd just stick to
> > this. If this approach is ok I could update the testimage.bbclass
> > documentation with these details.
> > I really want to re-use tests and infrastructure for running them, but I need
> > to customize various details.
> 
> My concern is having multiple different file formats and data streams.
> It means we no longer have one definitive data mechanism but two, and
> then the argument against people also shipping yaml and other files
> with recipes becomes difficult. We'd also have people wanting to query from one
> to the other eventually.
> 
> The real issue here seems to be that our data format (.bb) is
> struggling with some forms of data. I've therefore a preference for
> fixing that rather than encouraging working around it.

For oeqa runtime tests I propose this json file. If tests have any
customization need, they should use image recipe variables, or this
file format where recipe variables can't support the data. For other
alternatives I'd need pointers on where to implement them and how.
ptests are normal packages so they don't complicate this.

Additionally I'm currently interested in the kirkstone LTS branch, so I
would like any changes to be there too...

Cheers,

-Mikko

Richard Purdie Nov. 18, 2022, 4:04 p.m. UTC | #9
On Fri, 2022-11-18 at 17:57 +0200, Mikko Rapeli wrote:
> Additionally I'm currently interested in the kirkstone LTS branch, so I
> would like any changes to be there too...

The idea is that we develop things in master, then they become
available in the next LTS release, not the previous one.

People somehow seem to think that we add development patches to master
and immediately backport them to the last LTS. This is not how
development or the LTS is meant to work :(.

We're not going to be forced into poor interface/API choices just
because people want things backported to the current LTS.

This is why I'm so worried about the lack of planning and development
happening on master; people need to be thinking and planning ahead. To
be clear, this isn't just about this issue but about a pattern we're
seeing far more widely.

Cheers,

Richard
Mikko Rapeli Nov. 18, 2022, 4:09 p.m. UTC | #10
Hi,

On Fri, Nov 18, 2022 at 04:04:01PM +0000, Richard Purdie wrote:
> [...]

Agreed, I'm trying to get into working closer to master but not there
yet...

Cheers,

-Mikko
Richard Purdie Nov. 18, 2022, 4:11 p.m. UTC | #11
On Fri, 2022-11-18 at 17:57 +0200, Mikko Rapeli wrote:
> On Fri, Nov 18, 2022 at 03:04:29PM +0000, Richard Purdie wrote:
> 
> > [...]
> 
> For oeqa runtime tests I propose this json file. If tests have any
> customization need, they should use image recipe variables, or this
> file format where recipe variables can't support the data. For other
> alternatives I'd need pointers on where to implement them and how.
> ptests are normal packages so they don't complicate this.

The key question this comes down to is can anyone suggest a syntax for
including python data structures in our metadata (and/or json data)?
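
As a purely illustrative strawman (hypothetical syntax, nothing that
exists today), it could be a typed variable in the spirit of the
existing [type] varflags that oe.data.typed_value() understands:

COMMON_ERRORS[type] = "json"
COMMON_ERRORS = '["dma timeout", "usbhid: probe of "]'

with the datastore deserializing on access so callers get a real
python list back.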

Cheers,

Richard